13 July 2010

Cloud Services and e-Commerce

A pretty big thing at work is the Payment Card Industry Data Security Standard, hereafter referred to as PCI-DSS. It is a set of fairly strict security requirements that anybody wishing to do anything with processing credit cards must abide by. The standard can be summarized with simple rules: maintain a secure network, do not store cardholder data in plain text, use virus scanners and intrusion detection systems, and so on.

Every year, a Qualified Security Assessor (QSA), certified by the Payment Card Industry Security Standards Council (PCI-SSC), assesses the network for agreement lapses (ANAL). Compliance costs the company millions of dollars on equipment like full-disk encryption and intrusion detection systems, plus tons of person-hours, but it ultimately makes your credit card information safer. Beyond that, there is a fine on the order of $100,000 per month for not being compliant, and there is a risk of losing the ability to run transactions at all (although given the volume of transactions that Fiserv does, I would imagine that number might be even larger).

The e-Commerce business has exploded in my lifetime, continues to do well in this economy and is not likely to ever stop growing. On a directly related note, more and more people are building web sites and selling things over the internet. Of these, very few meet PCI-DSS. It shows, too -- around 80% of unauthorized credit card transactions involve small merchants. Many small businesses do not bother with compliance and live with the fines, because it is actually cheaper (and less effort) than trying to secure everything.

And why should they bother trying to meet some ridiculous standard? It is so easy to hook up transaction processing to your little web server (violation), on the same network you give your employees WiFi with (at least 3 violations), and store that information for future use (violation)...well, you get the idea. It is completely unreasonable to expect people to actually read the rules, much less understand them. Even if vendors made perfectly secure software (they don't), you cannot expect every client to know how to set up an intrusion detection system or have in-depth knowledge of what a good security policy is. You cannot even trust that the virus scanner is up to date.

Those are the kinds of e-Commerce businesses whose security would benefit the most from moving to a more secure infrastructure like Amazon EC2 or Google App Engine. Not only would the system be more secure, but there are tons of other benefits, from maintainability to flexibility. If somebody had a little Python or Java module to drop into a Google App Engine web project, I can almost guarantee that the site would be more secure than if the developer had done the same thing on Bob's Server Farm. But nobody writes generic cloud-based point-of-sale software. Why? Because it would be impossible for it to meet the PCI compliance standard.

The reason is section 12.8.2 of the PCI-DSS:

Maintain a written agreement that includes an acknowledgement that the service providers are responsible for the security of cardholder data the service providers possess.

And 12.8.4:

Maintain a program to monitor service providers’ PCI-DSS compliance status.

In short, the cloud service provider must maintain their PCI stamp of approval and they must shoulder some of the responsibility. That rules out Amazon's EC2: their service agreement specifies that they will take no responsibility whatsoever. Google says the same thing about App Engine. Microsoft takes a similar stance with Windows Azure (I would link you, but they only offer the ToS in Word documents, which is completely brain-damaged). None of these cloud computing platforms is going to take on the liability of meeting the PCI specification, and it is likely that they never will.

Does this mean that cloud computing and e-Commerce are destined to never meet? Not quite -- there are always Google Checkout and PayPal. Both of them have very customizable shopping cart implementations and are fully qualified to process credit card transactions. You will, however, have to live with the fairly significant surcharge associated with those services.

Unfortunately, that appears to be the very limit of what is possible on any cloud system. The only way around it is for a developer to roll their own PayPal, residing on their own PCI-compliant infrastructure. The funny thing is that such a system would probably be less secure than running the same thing on the public cloud. Essentially, one would be providing software as a service (SaaS) to a platform as a service (PaaS) on an infrastructure as a service (IaaS) (side note: aren't web acronyms fun?).

Another big issue with providing shopping cart functionality through any PayPal-like system is that it limits you to the web. This is a real shame because the internet has so much more potential. A piece of software like Steam could not exist on the cloud without some extremely clever single sign-on (SSO) hacking. Of course, once you are at the level of a desktop application, you are free to make multiple calls to places all over, but that is a really bad security practice.

My ultimate question is: who will break first, the PCI or a cloud service provider? I very much doubt that the PCI-SSC is going to quickly change their stance on anything, since, like any standards body, they are extremely slow to react (they do not address plain-old virtualization yet). Will one of the existing cloud service providers step up and become PCI-compliant? I highly doubt this as well.

My money is on the problems being solved by Google Checkout, PayPal or some new, but similar, service. I would love to see a web service-based alternative to those services. Combined with the emerging OAuth 2.0, developers could do whatever they want and have it all bundled up in a nice secure package. I really think there is a market for this -- it would open up these fun new elastic hosting solutions to all the Web 2.0 connectivity we have come to love. There is money to be made and it's all pretty exciting.

12 July 2010

3D TV is Not Coming Anytime Soon

A few days ago, Ubisoft made a prediction that 3D TVs would be in every single home in the United States and people would rejoice. This is something the major players in the tech/media industry are all pushing for with advertising galore. Every advertising break during the World Cup featured that damn Samsung ad.

Fact: People would like to watch TV without looking like Bono.

I'm not a fashion expert, so I'm not going to criticize what the glasses look like, but I will criticize the fact that you have to wear them. Well, not criticize so much as point out an obvious technical problem: where are you going to put them? For years, Americans have struggled with the ideal place to put the remote. There has been ongoing, well-funded research on ideal locations for the remote, and experts still furiously debate the subject. Hell, my father caved and bought one of those things so that he could never possibly lose it.

So where are we going to put our glasses? Unlike remotes, you have to have the glasses on to get any sort of viewing experience. Unlike remotes, if you put them in the couch, you can scratch them, and then you have to buy new ones. Unlike remotes, they do not have an established location in the sitting room, thanks to a lack of research in ideal stupid-glasses placement. Okay, that last one will heal with time, but the time factor works largely against the adoption rate (especially Ubisoft's vision of it).

So how do you deal without the glasses? Well, Microsoft has a pretty cool idea. The problem with their technology is that the system is limited to a certain number of viewers. Right now, one system can show 3D to only 2 people, or 2D to 4 people. Ouch. Sure, the technology will improve with time, but there will always be an absolute limit to it, which means that if you want to show a 3D image to n people where n is very large, you're stuck with the glasses (for now).

Oh, except that even on the glasses-based TVs, you generally only get two pairs of the active shutter glasses with the set. Sure, you can go buy more. Newegg sells a pair of LG glasses for the low price of $116. WHAT? Gulp.

Fact: People don't actually want this technology (yet?).

I have yet to talk to a person who really wants to have a 3D TV. Not buy one, but have one. If you gave them away, people would just use them as boring old 1080p 2D TVs. But don't take my anecdotal evidence for it -- survey says the Japanese are 'not interested' in 3D TV: 67.4% of them said they did not intend to upgrade. If technophile Japan doesn't care, what hope is there for adoption in the United States?

Fact: The technology is wholly underwhelming.

My problem is that I have been seeing in 3D almost my entire life, which makes 3D TV not actually that impressive. The advertisements claim that it is more immersive, but at the end of the day you are still looking at a box with pretty lights. So when I see crap like this, which somehow implies that 3D TV looks more realistic than the real world, I can't help but shake my head.

If you want more immersion, give us sound, smells and sensations. I want the feelies! I was going to take this paragraph down a sarcastic path where I imply that it is not the content of the display but the induced sensations that really matter, but it turns out Aldous Huxley has already got that covered.

Fact: People do not consume TV in a way that works with 3D TV.

The demos at E3 all conveniently had people standing directly in front of the television, but have you ever seen a 3D TV from an angle? It looks like crap. Remember LCDs when they first came out? It is kind of like that, except not only are the colors completely wrong, but the picture depth is thrown out of whack and you start seeing double. It is even worse if you view the TV from below (like if your TV is on a stand and you are sitting on the floor). People watch the news while making dinner. People arrange their sitting rooms for looks, not for the greatest TV viewing experience.

The industry seems to think we live in a world completely centered around watching television. A common example given is sports. "Imagine if everybody watched sports in 3D; it would be like you are right there!" says the industry. This thought process is preposterous. Are you going to hand out 3D glasses for your next Super Bowl party? Will the bouncer at the sports bar be handing out glasses at the door? It's fine in theaters, because you are there to do one and only one thing: watch the movie.

And how does this hold up for the multitasking generation? We very rarely watch TV without doing something else at the same time. Can you imagine the hell it would be if every one of your devices were 3D? What if they all used different technologies and required different glasses? Yikes.

Fact: People do not have extra money lying around to throw at this sort of crap.

In 2007, the 1080p revolution really started to take hold. Since then, over 40 million HDTVs have been sold. Even I am amazed at how good a plain-old DVD upscaled to 1080p looks. Sure, a broadcast at 1080i looks way better, but the upscaling algorithms do a pretty damn nice job. People are finally buying Blu-ray disc players and are amazed at the clarity.

Blu-ray is interesting because the format is actually capable of carrying 3D content, except the encoding standard was only recently agreed upon, so not all players can actually play it. Luckily, some Blu-ray players can be firmware-patched to support the new standard. The problem is that upgradeable firmware is more expensive to produce, so many of the cheaper players cannot be upgraded. Oh, and the cheaper players are what most people bought. I would hate to be a customer support person fielding that call:
Customer: Yeah, I just bought Avatar in 3D and it is not playing 3D.
Support: What is the model number of your player?
Customer: The what?
etc, etc, etc.
Customer: What do you mean this version is not 3D capable? The box says Avatar will work in any Blu-ray player!

Fact: 3D causes headaches.

I knew this from when I wore my first pair of active shutter glasses at a Samsung demonstration almost 4 years ago. Within about 15 seconds, I had to take the glasses off and get a drink of water. The technology has definitely gotten less headache-inducing, but I still cannot watch more than about 30 minutes of 3D without wanting to bash my own head in with the back of a claw hammer to relieve the pain (if you ever want to torture me, just strap me to a chair, tape open me glazzies and make me watch 3D films). It is definitely not a universal sentiment, but I am not the only one.

Gaming is going to be the field where 3D technology really takes off, which is why I have a vested interest in the topic. Most of the problems I have listed in this post go away with gaming, since gamers are always looking for immersion, are limited by the number of controllers anyway, sit directly in front of the television and are not terribly bothered about spending a bit of extra money. Okay, so right now it is a lot of money, but the prices will drop soon enough. My only concern is how this is going to work with things like Kinect. Will your glasses fall off when you are moving around? Not a really big deal, though.

I realize that this is the way technology is moving, but give it at least 10 years before we even approach a 50% adoption rate. The uptake will be significantly slower than the HD revolution (the first HDTV was made in 1998), but it will come. Although I will say that gamers will be the first adopters, so I guess that makes Ubisoft sort of right.

07 July 2010

What I Would Like in a Programming Language (Part 2)

Continuing from part 1...

Functions

Functions are what do the work. You can have your objects exist and hold a bunch of data, but without functions to do work on and with this data, we would live in a world of pure XML, which I think everybody can agree would be horrific. We use functions to find our social security number, pay our taxes and help the landlady with her garbage. I'm not saying that objects are guilty of virtually every computer crime we have a law for, but functions should really take a more prominent role in programming languages. There are two types of functions that I have special feelings for: the pure function and the lambda function.

Pure and Constant Functions

Pure functions are awesome. For those who hate reading, a pure function can be summarized as a function that cannot alter anything outside itself: no global memory, no I/O devices -- nothing but the stack. The GCC documentation also defines a constant function as a special case of a pure function: one that does not even read from global memory. The C function strlen is an example of a pure function, since it reads but does not alter global memory (dereferencing the pointer counts as an access to global memory). A function like sqrt is a constant function, since it touches nothing at all (as would Q_rsqrt).
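
To make this concrete, here is a quick sketch of the two attributes in C++ using GCC's syntax (my_strlen and square are my own stand-ins, not the real library functions):

    #include <cstddef>

    // "pure": may read global memory (here, the bytes behind the
    // pointer), but must not write to it or touch I/O.
    __attribute__((pure))
    std::size_t my_strlen(const char *s)
    {
        std::size_t n = 0;
        while (s[n] != '\0')
            ++n;
        return n;
    }

    // "const": may not even read global memory; the result depends
    // only on the argument values.
    __attribute__((const))
    double square(double x)
    {
        return x * x;
    }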

Okay, so what's the point? There are three main reasons: optimization, multi-processing and verification. Optimization from marked pureness comes in two forms: dead code elimination and common subexpression elimination. Explaining how this works is a blog post on its own, but LWN did a pretty good job of it already. In summary: the more the compiler can guarantee about your code, the more it can do with it.
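
To show what that buys you, here is what the compiler is allowed to do with the my_strlen sketch above once it is marked pure (a minimal illustration, not actual compiler output):

    // Common subexpression elimination: with my_strlen marked pure and
    // no stores between the two calls, the compiler may evaluate it
    // once and reuse the result.
    bool in_range(const char *s)
    {
        return my_strlen(s) > 10 && my_strlen(s) < 100;  // one call suffices
    }

    // Dead code elimination: a pure call whose result is never used
    // has no observable effect, so the compiler may delete it outright.
    void pointless(const char *s)
    {
        my_strlen(s);  // free to disappear entirely
    }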

The multi-processing benefit comes from the fact that once you know a function is constant, you know that only the parameters you pass it matter. This means that all you need to do is move the function's input data to the processor running it and let it go. That's pretty abstract...how about a bigger example? Say you wanted to do some difficult task, like finding things in a million images. The constant function in this case would be the evaluation of an individual image against the set of feature descriptors. A central system can hand a feature set and an image to each of a bunch of computers, knowing that they touch nothing shared, and collect the results at will because nothing affects anything else. Cool, huh? Now imagine if your compiler did this automatically. Awesome stuff.
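
Here is a sketch of that image example with threads standing in for the "bunch of computers" (Image, FeatureSet and score are all hypothetical names; strictly speaking, GCC would call score "pure" rather than "const" because it reads memory through its reference parameters, but the idea is the same):

    #include <future>
    #include <vector>

    struct Image { /* pixel data */ };
    struct FeatureSet { /* feature descriptors */ };

    // The result depends only on what you pass in: no globals, no I/O.
    __attribute__((pure))
    double score(const Image &img, const FeatureSet &features)
    {
        // real feature matching would go here
        return 0.0;
    }

    // Because score() touches nothing shared, every call can run
    // anywhere: another core here, another machine in the real version.
    std::vector<double> score_all(const std::vector<Image> &images,
                                  const FeatureSet &features)
    {
        std::vector<std::future<double>> jobs;
        for (const Image &img : images)
            jobs.push_back(std::async(std::launch::async, score,
                                      std::cref(img), std::cref(features)));

        std::vector<double> results;
        for (auto &job : jobs)
            results.push_back(job.get());
        return results;
    }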

The last thing I said was verification, which is probably the most important. What I mean by this is that you should be able to mark a function as pure and have the compiler check this for you. The most helpful case I can imagine is based on the fact that a pure function can only call other pure functions (or constant ones, since a constant function is a special case of a pure one). Likewise, a constant function can only call other constant functions. So you can easily guarantee that everything you do is working exactly like you expect, which is just fantastic.
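
Using the functions from the earlier sketches, the calling rule looks like this. Note that today GCC takes the attributes entirely on faith, which is exactly the hole I want a compiler to close:

    // Fine: a pure function may call pure and const functions.
    __attribute__((pure))
    double scaled_length(const char *s)
    {
        return square(static_cast<double>(my_strlen(s)));
    }

    // Wrong: this reads global memory through s, so it cannot be const.
    // GCC will not diagnose the lie; it will just optimize around it,
    // possibly miscompiling the callers.
    __attribute__((const))
    double broken(const char *s)
    {
        return static_cast<double>(my_strlen(s));
    }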

I actually really like the way GCC already does this for C and C++ and I wish it would become more prominent in other languages. A similar feature in .NET is Microsoft Code Contracts, which is a pretty sweet tool that fits nicely with their system (although I would like to see it more prominently featured -- a first-class citizen in the .NET world).

Lambda Functions

Lambda functions are awesome. I am not just saying that because one of my three readers would kill me with a rusty spork if I said otherwise, but because they are genuinely awesome. Lambda functions are one of the reasons I prefer C# and Scala over Java. The comparison of those three languages is actually a great example of why lambda functions should be a first-class member of any language that wants to call itself awesome, precisely because Java lacks them. Sure, anonymous inner classes can serve the same purpose, but lambda functions are part of the culture of a language. As a functional programmer, I find it irritating that I have to write my own Function class in basically every Java project that I do. It is the fact that they are not already there that keeps them from populating the Java library. Imagine Java with something like LINQ, minus a lot of random code bloat. Hmm...I just described Scala. People have been asking for lambdas in Java for a while, and it looks like they are finally coming.

Yes, I realize I just harped on about Java, Scala and C#. My point is that lambda functions are just plain awesome and you should put them in your language no matter what, because they are incredibly beneficial. If C++0x can add lambda functions to that horror of a compilation model, you can too!
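
For reference, here is the C++0x flavor of the idea (a small sketch; count_if simply takes the predicate inline, with no hand-rolled functor class in sight):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> xs = {4, 8, 15, 16, 23, 42};

        // The predicate lives right where it is used and captures
        // `threshold` from the enclosing scope.
        int threshold = 10;
        auto big = std::count_if(xs.begin(), xs.end(),
                                 [threshold](int x) { return x > threshold; });

        std::cout << big << " values above " << threshold << "\n";
        return 0;
    }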

Self-Modifying Code

Optimization based on run-time properties

So let's say I have a structure called Vector4, which contains four floating point numbers (all aligned properly in memory). If I have two of these things and want to add them together, I would like to do it really quickly (especially since this is something I do all the time). I can do this really quickly on x86 with the addps instruction from the SSE instruction set. However, I would like my code to work perfectly fine on CPUs that do not support SSE and work faster on those that do -- all in a single executable, so the user does not even realize what is happening. Intel uses a technique in all their compilers called "CPU dispatching," which I think is a horrible name since that name is already taken by the actual CPU dispatcher. Whatever.
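
Here is roughly what the single-executable version looks like when done by hand, probing the CPU once through GCC's cpuid.h and dispatching through a function pointer (a sketch; on 32-bit x86 the SSE path needs -msse, while x86-64 has SSE as a baseline):

    #include <xmmintrin.h>  // SSE intrinsics
    #include <cpuid.h>      // GCC's __get_cpuid and bit_SSE

    struct Vector4 {
        float v[4] __attribute__((aligned(16)));
    };

    // SSE path: a single addps adds all four lanes at once.
    static Vector4 add_sse(const Vector4 &a, const Vector4 &b)
    {
        Vector4 r;
        _mm_store_ps(r.v, _mm_add_ps(_mm_load_ps(a.v), _mm_load_ps(b.v)));
        return r;
    }

    // Scalar fallback for CPUs without SSE.
    static Vector4 add_scalar(const Vector4 &a, const Vector4 &b)
    {
        Vector4 r;
        for (int i = 0; i < 4; ++i)
            r.v[i] = a.v[i] + b.v[i];
        return r;
    }

    static bool cpu_has_sse()
    {
        unsigned eax, ebx, ecx, edx;
        return __get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & bit_SSE);
    }

    // Probe once, then every later call goes straight through the
    // chosen function pointer.
    Vector4 add(const Vector4 &a, const Vector4 &b)
    {
        static Vector4 (*impl)(const Vector4 &, const Vector4 &) =
            cpu_has_sse() ? add_sse : add_scalar;
        return impl(a, b);
    }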

Anyway, there is all sorts of cool stuff you can do with this. In a language that allows you to express your intentions (the what instead of the how), this sort of thing could be taken to the max. Language writers should look to the way SQL servers optimize queries -- it is pretty cool and I think lessons from SQL could be taken into a compiled language. Related: Optimizing Hot Paths in a Dynamic Binary Translator.

Multi-stage compilation

Say what you will, but just-in-time compilation is really cool. Believe it or not, some people do not like to distribute their source code to all their customers (crazy, huh?). However, very few people have problems delivering byte code to people. Every decent scripting language has some sort of intermediate representation, and some of the most popular languages today compile to byte code. LLVM uses an intermediate representation so that it can perform common operations like optimization on any input language and easily generate code for multiple architectures. Bart de Smet had a good blog post on JIT optimization of .NET byte code. Pretty cool stuff.

Yeah, so there is the startup cost of compiling the intermediate language down to the native architecture, and the extra expense of having a compiler sitting around on every system you want to run software on. But it's really not that bad, especially considering how cheap hard drive space is these days. And for really performance-critical things, you can do something like ahead-of-time compilation for a specific architecture (like Mono does).