11 October 2010

XNA Game Studio 4.0 for Windows XP

Microsoft has been not-so-subtly trying to phase out Windows XP for quite some time now. Who can blame them? XP is 9 years old. However, some of us are still stuck in 32-bit Windows XP because of insane hardware issues (don't ask, I don't even understand) or because we are Luddites. If you fall into either one of the preceding categories and you really want to develop games with XNA Game Studio 4.0, you probably have had a hell of a time trying to find the XNAGS4 installer. You know, the one without all that Windows Phone crap that you do not care about that requires Windows Vista or Windows 7. Or perhaps that's just how I feel.

Here is the (hopefully) permanent link to XNA Game Studio 4.0 that works with Windows XP:

If that link is somehow down, get XNA Game Studio 4.0 from me:


22 September 2010

A Fast, General A* in C# (Part 1)

As I work more on moving things into the Vapor .NET Client Library, sometimes I add functionality that I feel needs to be covered in a place which is more expressive than the XML comment system. For that, I turn here. I will be talking about Vapor.Graph.Searcher.AStar (snapshot as I write this).

Before I present the algorithm itself, I would like to mention two of the data structures that I use from the library which are not specific to A*. The first is the HashedSet<T>, which is almost identical in behavior to .NET's HashSet<T>, except that it is available on the .NET Compact Framework and does not allow for removing individual items, just clearing the whole set. I also use a structure called an OrderedQueue<T>, which is similar to a priority queue, except that the items themselves provide the ordering (through an IComparer<T>). It is backed by a min-heap, which means it is quite ideally suited for use in this algorithm.
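The Vapor versions are .NET code, but the min-heap idea behind OrderedQueue<T> is easy to sketch. Here is a rough C++ analogue (all names are mine, not from the library), built on std::priority_queue, where a caller-supplied comparison decides the ordering rather than the items' own operator<:

```cpp
#include <functional>
#include <queue>
#include <vector>

// A rough analogue of OrderedQueue<T>: a min-heap where the ordering
// comes from a caller-supplied comparison (like an IComparer<T>).
// std::priority_queue is a max-heap, so the comparison is inverted
// to make dequeue() return the smallest element first.
template <typename T, typename Compare = std::less<T>>
class ordered_queue {
public:
    explicit ordered_queue(Compare cmp = Compare())
        : heap_(inverted{cmp}) {}

    void enqueue(const T& item) { heap_.push(item); }

    // Removes and returns the minimum element under the comparison.
    T dequeue() {
        T top = heap_.top();
        heap_.pop();
        return top;
    }

    bool empty() const { return heap_.empty(); }

private:
    struct inverted {
        Compare cmp;
        bool operator()(const T& a, const T& b) const { return cmp(b, a); }
    };
    std::priority_queue<T, std::vector<T>, inverted> heap_;
};
```

Both enqueue and dequeue are O(log n), which is exactly the profile A* wants for its open set.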

Well, I have delayed quite long enough. Without further ado:

public static Path<TNode, TTransition> AStar<TNode, TTransition, TScore, TAdder>(
    TNode startNode,
    TNode goalNode,
    [NotNull] CostFunction<TNode, TScore> travelCost,
    [NotNull] HeuristicFunction<TNode, TScore> estimateCost,
    [NotNull] GenerateFunction<TNode, TTransition> nextGenerator,
    [NotNull] TAdder scoreAdder,
    [NotNull] IEqualityComparer<TNode> nodeEqualityComparer,
    [NotNull] IComparer<TScore> scoreComparer)
    where TAdder : IAdder<TScore>
{
    var searched = new HashedSet<TNode>(nodeEqualityComparer);
    var toSearch = new OrderedQueue<AStarSearchNode<TNode, TTransition, TScore>>(
        new AStarSearchNodeComparer<TNode, TTransition, TScore>(scoreComparer));

    // seed the queue with the start node (default(TScore) is the zero cost)
    var initCost = estimateCost(startNode, goalNode);
    toSearch.Enqueue(AStarSearchNode.Create(
        Step.Create(startNode, default(TTransition)),
        null,
        default(TScore),
        initCost));

    while (!toSearch.IsEmpty)
    {
        var current = toSearch.Dequeue();

        // skip if we've already searched this
        if (searched.Contains(current.Node))
            continue;
        searched.Add(current.Node);

        // check if this is the solution
        if (nodeEqualityComparer.Equals(current.Node, goalNode))
            return BuildPath(current);

        foreach (var next in nextGenerator(current.Node))
        {
            var costForNext = scoreAdder.Add(current.CostToHere, travelCost(current.Node, next.Result));
            var heuristic = estimateCost(next.Result, goalNode);
            var estimated = scoreAdder.Add(costForNext, heuristic);

            var nextNode = AStarSearchNode.Create(next, current, costForNext, estimated);
            toSearch.Enqueue(nextNode);
        }
    }

    // the goal is unreachable from the start node
    return null;
}

What is with this insane amount of generality?

In general, this is a pretty good demonstration of just how general the type system of C#/.NET will allow you to go. Although that does not address the underlying question of: WHY? This started as a quick way to search for best paths on a 2D grid and worked quite nicely. Soon after, I needed to use a graph search algorithm for image unification. Instead of writing another implementation of A* (or any other graph-search algorithm), I decided to generalize the one I currently had. The ultimate reason why I want this to be as general as possible is because it is way easy to specialize a function for ease-of-use with wrapper functions, but almost impossible to generalize.

I do have a reason for every single type parameter!

TNode and TTransition
These are unavoidable type parameters. TNode is the type of nodes we are searching on and over, while TTransition is some transition which takes us from one TNode to another. There is no way to know what domain-specific types a user would want here.

TScore
This is the type of score value used in all cost calculations: it is the return type of a CostFunction<TNode, TScore> and a HeuristicFunction<TNode, TScore>. Here's where my judgement gets questionable. Why would one need to do this? Aren't all scores just ints? Somebody might have read that last statement and thought: What about float/double/long/MyOwnScoreType? The difference in effort between implementing one type of score and all types of scores (including those not yet made) was minimal. I do not care what type of score you use, just that I can add it to another and tell if one is smaller than another (like concepts in C++).

TAdder
This is an IAdder<T> which takes two scores and adds them together. Why have the type parameter and not just pass an IAdder<T>? This allows the compiler to do some crazy optimizations for us. The contents of an IAdder<T>.Add probably look something like this:
public int Add(int x, int y)
{
    return x + y;
}
This is an ideal candidate for inlining. However, if the compiler only knows that scoreAdder is an IAdder<int>, it must make a virtual call and, thus, cannot inline. Even if the compiler cannot inline the calls, we save one boxing conversion per call (which is good, because A* is something that is probably called a whole bunch of times).
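The same static-versus-virtual dispatch trade-off exists outside .NET, and C++ templates make it easy to see. A sketch (all names are mine, standing in for the Vapor interfaces):

```cpp
// Dynamic dispatch: every add() goes through the vtable, so the
// compiler generally cannot inline it at the call site.
struct IAdder {
    virtual int add(int x, int y) const = 0;
    virtual ~IAdder() {}
};

struct VirtualAdder : IAdder {
    int add(int x, int y) const { return x + y; }
};

// Static dispatch: the concrete adder type is a template parameter,
// so the compiler knows exactly which add() is called and can inline it.
struct InlineAdder {
    int add(int x, int y) const { return x + y; }
};

int sum_virtual(const IAdder& adder, int x, int y) {
    return adder.add(x, y);  // virtual call through the interface
}

template <typename TAdder>
int sum_static(const TAdder& adder, int x, int y) {
    return adder.add(x, y);  // resolved at compile time, inlinable
}
```

Both functions compute the same thing; the difference is only in what the optimizer is allowed to prove about the call.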

Why aren't nodeEqualityComparer and scoreComparer done like this? In 99% of cases, people are going to be using EqualityComparer<T>.Default and Comparer<T>.Default for these parameters, which will force the system to fall back to the worst-case of making the virtual calls. Perhaps I will change it in the future, but right now the advantages are very few.

Wow, all this talk and I've only managed to justify the function signature!

13 July 2010

Cloud Services and e-Commerce

A pretty big thing at work is the Payment Card Industry Data Security Standard, hereafter referred to as PCI-DSS. It is a set of fairly strict security requirements that anybody wishing to do anything with processing credit cards must abide by. The standard can be summarized with simple rules like have a secure network, do not store things in plain text, use virus scanners and intrusion detection systems, etc.

Every year, the Payment Card Industry Security Standards Council (PCI-SSC) sends a Qualified Security Assessor (QSA) to assess the network for agreement lapses (ANAL). This costs the company millions of dollars on equipment like full-disk encryption, intrusion detection systems and tons of person-hours, but ultimately makes your credit card information safer. Aside from that, there is a fine on the order of $100,000 per month for not being compliant and there is a risk of losing the ability to run transactions at all (although given the volume of transactions that Fiserv does, I would imagine that number might be even larger).

The e-Commerce business has exploded in my lifetime, continues to do well in this economy and is not likely to ever stop growing. On a directly related note, more and more people are building web sites and selling things over the internet. Of these, very few meet PCI-DSS. It shows, too -- around 80% of unauthorized credit card transactions involve small merchants. Many small businesses do not bother with compliance and live with the fines because it is actually cheaper than trying to secure everything (and less effort).

And why should they bother trying to meet some ridiculous standard? It is so easy to hook up transaction processing to your little web server (violation), on the same network you give your employees WiFi with (at least 3 violations), store that information for future use (violation)...well, you get the idea. It is completely unreasonable to expect people to actually read the rules, much less understand them. Even if vendors made perfectly secure software (they don't), you cannot expect every client to know how to set up an intrusion detection system or have in-depth knowledge of what a good security policy is. You cannot even trust that the virus scanner is up-to-date.

Those are the kinds of e-Commerce businesses whose security would benefit the most from moving to a more secure infrastructure like Amazon EC2 or Google App Engine. Not only would the system be more secure, but there are tons of other benefits from maintainability to flexibility. If somebody had a little Python or Java module to drop into a Google App Engine web project, I can almost guarantee that the site would be more secure than if the developer had done the same thing on Bob's Server Farm. But, nobody writes generic cloud-based point-of-sale software. Why? Because it would be impossible for it to meet the PCI compliance standard.

The reason is section 12.8.2 of the PCI-DSS:

Maintain a written agreement that includes an acknowledgement that the service providers are responsible for the security of cardholder data the service providers possess.

And 12.8.4:

Maintain a program to monitor service providers’ PCI-DSS compliance status.

In short, the cloud service provider must maintain their PCI stamp of approval and they must shoulder some of the responsibility. That rules out Amazon’s EC2: their service agreement specifies that they will take no responsibilities whatsoever. Google says the same thing about App Engine. Microsoft takes a similar stance with Windows Azure (I would link you, but they only offer the ToS in Word documents, which is completely brain-damaged). None of these cloud computing platforms is going to take on the liability of meeting the PCI specification and it is likely that they never will.

Does this mean that cloud computing and e-Commerce are destined to never meet? Not quite - there is always going to be Google Checkout and PayPal. Both of them have very customizable shopping cart implementations and are fully qualified to process credit card transactions. At that point, you are going to have to live with the fairly significant surcharge associated with those services.

Unfortunately, that appears to be the very limit of what is possible on any cloud system. The only possibility of moving away is for a developer to roll their own PayPal which resides on their own PCI-compliant infrastructure. The funny thing about doing something like that is that such a system would probably be less secure than running the same system on the public cloud. Essentially, one would be providing software as a service (SaaS) to a platform as a service (PaaS) on an infrastructure as a service (IaaS) (side note: aren't web acronyms fun?).

Another big issue with providing shopping cart functionality through any PayPal-like system is that it limits you to the web. This is a real shame because the internet has so much more potential. A piece of software like Steam could not exist on the cloud without some extremely clever single-sign on (SSO) hacking. Of course, once you are to the level of a desktop application, you are free to make multiple calls to places all over, but that is a really bad security practice.

My ultimate question is: Who will break, the PCI or a cloud service provider? I very much doubt that the PCI-SSC is going to quickly change their stance on anything, since, like any standards body, they are extremely slow to react (they do not address plain-old virtualization yet). Will one of the existing cloud service providers step up and become PCI-compliant? I highly doubt this as well.

My money is on the problems being solved by Google Checkout, PayPal or some new, but similar, service. I would love to see a web service-based alternative to those services. Combined with the emerging OAuth 2.0, developers could do whatever they want and have it all bundled up in a nice secure package. I really think there is a market for this -- it would open up these fun new elastic hosting solutions to all the Web 2.0 connectivity we have come to love. There is money to be made and it's all pretty exciting.

12 July 2010

3D TV is Not Coming Anytime Soon

A few days ago, Ubisoft made a prediction that 3D TVs would be in every single home in the United States and people would rejoice. This is something the major players in the tech/media industry are all pushing for with advertising galore. Every advertising break during the World Cup featured that damn Samsung ad.

Fact: People would like to watch TV without looking like Bono.

I'm not a fashion expert, so I'm not going to criticize what the glasses look like, but I will criticize the fact that you have to wear them. Well, not criticize so much as point out an obvious technical problem: Where are you going to put them? For years, Americans have struggled with the ideal place to put the remote. There has been ongoing, well-funded research on ideal locations for the remote and experts still furiously debate the subject. Hell, my father caved and bought one of these things so that he could never possibly lose it:
So where are we going to put our glasses? Unlike remotes, you have to have the glasses on to get any sort of viewing experience. Unlike remotes, if you put them in the couch, you can scratch them and then you have to buy new ones. Unlike remotes, they do not have an established location in the sitting room because of a lack of research in ideal stupid glasses placement. Okay, that last one will heal with time, but that time factor works largely against the adoption rate (especially Ubisoft's vision of adoption rate).

So how do you deal without the glasses? Well, Microsoft has a pretty cool idea. The problem with their technology is that the system is limited to a certain number of viewers. Right now, one system can show 3D to only 2 people or 2D to 4 people. Ouch. Sure, the technology will only improve with time, but there will always be an absolute limit to that technology. Which means if you want to broadcast a 3D image to n people where n is very large, you're stuck with the glasses (for now).

Oh, except that even on the glasses-based TVs, you generally only get two pairs of the active shutter glasses with the set. Sure, you can go buy more. Newegg sells a pair of LG glasses for the low price of $116. WHAT? Gulp.

Fact: People don't actually want this technology (yet?).

I have yet to talk to a person who really wants to have a 3D TV. Not buy one, but have one. If you are going to give them away, people will use them as your boring old 1080p 2D TVs. But don't take my empirical evidence for it -- survey says: the Japanese are 'not interested' in 3D TV: 67.4% of them said they did not intend to upgrade. If technophile Japan doesn't care, what hope is there for adoption in the United States?

Fact: The technology is wholly underwhelming.

My problem is that I have been seeing in 3D almost my entire life, which makes 3D TV not actually that impressive. The advertisements claim that it is more immersive, but at the end of the day you are still looking at a box with pretty lights. So when I see crap like this, which somehow implies that 3D TV looks more realistic than the real world, I can't help but shake my head.

If you want more immersion, give us sound, smells and sensations. I want the feelies! I was going to take this paragraph down a sarcastic path where I imply that it is not the content of the display but the induced sensations that really matter, but it turns out Aldous Huxley has already got that covered.

Fact: People do not consume TV in a way that works with 3D TV.

The demos at E3 all conveniently had people standing in front of the television, but have you ever seen a 3D TV from an angle? It looks like crap. Remember LCDs when they first came out? It is kind of like that, except not only are the colors completely wrong, the entire picture depth is thrown out of whack and you start seeing double. It is even worse if you see the TV from an angle below (like if your TV is on a stand and you are sitting on the floor). People watch the news while making dinner. People have arranged their sitting rooms as a matter of looks, not as a matter of greatest TV viewing experience.

The industry seems to think we live in a world completely centered around watching television. A common example given is sports. Imagine if everybody watched sports in 3D, it would be like you are right there! says the industry. This thought process is preposterous. Are you going to hand out 3D glasses for your next Super Bowl party? Will the bouncer at the sports bar be handing out glasses at the door? It's fine in theaters, because you are there to do one and only one thing: watch the movie.

And how does this hold up to the multitasking generation? We very rarely watch the TV without doing something else at the same time. Can you imagine the hell that would be if every one of your devices was 3D? What if they all used different technologies and required different glasses? Yikes.

Fact: People do not have extra money lying around to throw at this sort of crap.

In 2007, the 1080p revolution really started to take hold. Since then, over 40 million HDTVs have been sold. Even I am amazed at how good a plain-old DVD upscaled to 1080p looks. Sure, a broadcast at 1080i looks way better, but the upscaling algorithms look pretty damn nice. People are finally buying Blu-ray disc players and are amazed at the clarity.

Blu-ray is interesting because it is actually capable of playing 3D content, except the standard of encoding was only recently agreed upon, so not all players are actually capable of this. Luckily, Blu-ray players have the capability of being firmware patched to support the new standard. The problem is that upgradeable firmware is more expensive to produce, so many of the cheaper players cannot be upgraded. Oh, and the cheaper players are what most people bought. I would hate to be a customer support person fielding that call:
Customer: Yeah, I just bought Avatar in 3D and it is not playing 3D.
Support: What is the model number of your player?
Customer: The what?
etc, etc, etc.
Customer: What do you mean this version is not 3D capable? Avatar says that it will work in any blue ray player on the box!

Fact: 3D causes headaches.

I knew this from when I wore my first pair of active shutter glasses at a Samsung demonstration almost 4 years ago. Within about 15 seconds, I had to take the glasses off and get a drink of water. The technology has definitely gotten less headache-inducing, but I still cannot watch more than about 30 minutes of 3D without wanting to bash my own head in with the back of a claw hammer to relieve the pain (if you ever want to torture me, just strap me to a chair, tape open me glazzies and make me watch 3D films). It is definitely not a universal sentiment, but I am not the only one.

Gaming is going to be the field where 3D technology really takes off, which is why I have a vested interest in the topic. Most of the problems I have listed in this post go away with gaming, since gamers are always looking for immersion, are limited by the amount of controllers anyway, sit in front of the television and are not terribly fussed about spending a bit of extra money. Okay, so right now it is a lot of money, but the prices will drop soon enough. My only concern is how this is going to work with things like Kinect. Will your glasses fall off when you are moving around? Not a really big deal, though.

I realize that this is the way technology is moving, but give it at least 10 years (minimum) before we even near a halfway adoption rate. The uptake will be significantly slower than the HD revolution (the first HDTV was made in 1998), but it will come. Although I will say that gamers will be the first adopters, so I guess that makes Ubisoft sort of right.

07 July 2010

What I Would Like in a Programming Language (Part 2)

Continuing from part 1...


Functions

Functions are what do the work. You can have your objects exist and hold a bunch of data, but without functions to do work on and with this data, we would live in a world of pure XML, which I think everybody can agree would be horrific. We use functions to find our social security number, pay our taxes and help the landlady with her garbage. I'm not saying that objects are guilty of virtually every computer crime we have a law for, but functions should really take a more prominent role in programming languages. There are two types of functions that I have special feelings for: the pure function and the lambda function.

Pure and Constant Functions

Pure functions are awesome. For those who hate reading, a pure function can be summarized by saying that it is a function that cannot alter anything but itself: no global memory, no I/O devices - nothing but the stack. The GCC documentation also defines a constant function as a special case of a pure function: it does not even read from global memory.  The C function strlen would be an example of a pure function, since it reads but does not alter global memory (dereferencing the pointer is considered an access to global memory).  A function like sqrt is considered a constant function since it touches nothing (as would Q_rsqrt).

Okay, so what's the point? There are three main reasons: optimization, multi-processing and verification. Optimization from marked pureness comes in two forms: dead code elimination and common subexpression elimination. Explaining how this works is a blog post on its own, but LWN did a pretty good job of this already. In summary: since the compiler can guarantee more about your code, it can do more about it.

The benefit for multi-processing is that once you know a function is constant, you know that only the parameters you pass it matter. This means that all you need to do is move the data of the function to the processor running it and let it go. That's pretty abstract...how about a bigger example? Say you wanted to do some difficult task like find things in a million images. The constant function in this case would be the evaluation of an individual image against the set of feature descriptors. In the end, a central system can hand out a feature set and an image to a bunch of computers individually, knowing that they do not touch anything and getting their results at will because nothing affects anything else. Cool, huh? Now imagine if your compiler did this automatically. Awesome stuff.

The last thing I said was verification, which is probably the most important. What I mean by this is that you should be able to mark a function as pure and have the compiler check this for you. The most helpful case I imagine is based on the fact that a pure function can only call other pure functions (or constant, because a constant is a pure function). Likewise, a constant function can only call other constant functions. So you can easily guarantee that everything you do is working exactly like you expect, which is just fantastic.

I actually really like the way GCC already does this for C and C++ and I wish it would become more prominent in other languages. A similar feature in .NET is Microsoft Code Contracts, which is a pretty sweet tool that fits nicely with their system (although I would like to see it more prominently featured - a first-class citizen in the .NET world).
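For the curious, the GCC spelling of these markings looks like the snippet below. It works from both C and C++; the fallback macros are my own addition so the code still compiles on non-GNU compilers (where the annotations simply vanish):

```cpp
#include <cstring>

#if defined(__GNUC__)
// pure: may read global memory but never writes it
#  define ATTR_PURE  __attribute__((pure))
// const: touches nothing but its argument values
#  define ATTR_CONST __attribute__((const))
#else
#  define ATTR_PURE
#  define ATTR_CONST
#endif

// Pure: reads global memory (through the pointer) but alters nothing,
// just like the real strlen.
ATTR_PURE std::size_t my_strlen(const char* s) {
    return std::strlen(s);
}

// Constant: depends only on its argument values, like sqrt.
ATTR_CONST int square(int x) {
    return x * x;
}
```

With these annotations in place, GCC is free to eliminate repeated calls with the same arguments as common subexpressions, and to drop calls whose results are unused.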

Lambda Functions

Lambda functions are awesome. I am not just saying that because one of my three readers would kill me with a rusty spork if I said otherwise, but because they are genuinely awesome. Lambda functions are one of the reasons I prefer C# and Scala over Java. The comparison of those three languages is actually a great example of why I think that lambda functions should be a first-order member of any language that wants to call itself awesome, precisely because Java lacks them. Sure, anonymous inner classes can be useful, but lambda functions shape the whole culture of a language. As a functional programmer, I find it irritating that I have to write my own Function class in basically every Java project that I do. It is the fact that they are not already there that keeps them from populating the Java library. Imagine Java with something like Linq and minus a lot of random code bloat. Hmm...I just described Scala. People have been asking for lambdas in Java for a while and it looks like they are finally coming.

Yes, I realize I just harped on about Java, Scala and C#. My point is that lambda functions are just plain awesome and you should put them in your language no matter what, because they are incredibly beneficial. If C++0x can add lambda functions to that horror of a compilation model, you can too!
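Since I just invoked C++0x (now C++11), here is roughly what its lambdas look like in practice. Nothing here is exotic, which is exactly the point: an inline, anonymous predicate where Java would demand a named class:

```cpp
#include <algorithm>
#include <vector>

// Count the even numbers in a sequence with an inline, anonymous
// predicate -- no named helper function or hand-written functor class.
int count_evens(const std::vector<int>& xs) {
    return static_cast<int>(
        std::count_if(xs.begin(), xs.end(),
                      [](int x) { return x % 2 == 0; }));
}

// Lambdas also capture local state; Java's anonymous inner classes
// (circa 2010) could only approximate this with final variables.
int count_greater_than(const std::vector<int>& xs, int threshold) {
    return static_cast<int>(
        std::count_if(xs.begin(), xs.end(),
                      [threshold](int x) { return x > threshold; }));
}
```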

Self-Modifying Code

Optimization based on run-time properties

So let's say I have a structure called a Vector4, which contains four floating point numbers (all aligned properly in memory). If I have two of these things and want to add them together, I would like to do it really quickly (especially since this is something I do all the time). I can do this really quickly on x86 with the addps instruction from the SSE instruction set. However, I would really like my code to work perfectly fine on CPUs that do not support SSE and work faster on those that do. All in a single executable so the user does not even realize what is happening. Intel uses a technique in all their compilers called "CPU dispatching," which I think is a horrible name since that name is already taken by the actual CPU dispatcher. Whatever.
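The dispatching pattern itself is simple: probe the CPU once, stash a function pointer, and pay only an indirect call afterwards. A hedged sketch follows; the feature probe is stubbed out, and the "SSE" body is plain scalar code standing in for the real _mm_add_ps version so the sketch runs on any architecture:

```cpp
struct Vector4 { float x, y, z, w; };

// Stand-in for a real CPUID probe (e.g. testing the SSE feature bit).
static bool cpu_has_sse() { return false; /* assume the worst case */ }

// Portable fallback path.
static Vector4 add_scalar(const Vector4& a, const Vector4& b) {
    Vector4 r = { a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w };
    return r;
}

// In real code this body would be a single addps via _mm_add_ps;
// kept scalar here so the example compiles everywhere.
static Vector4 add_sse(const Vector4& a, const Vector4& b) {
    return add_scalar(a, b);
}

// Resolved exactly once, at startup; every later call is just an
// indirect jump -- no per-call feature check.
static Vector4 (*add_impl)(const Vector4&, const Vector4&) =
    cpu_has_sse() ? add_sse : add_scalar;

Vector4 vec4_add(const Vector4& a, const Vector4& b) {
    return add_impl(a, b);
}
```

A single executable ships both paths and the user never knows which one ran.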

Anyway, there is all sorts of cool stuff you can do with this. In a language that allows you to express your intentions (the what instead of the how), this sort of thing could be taken to the max. Language writers should look to the way SQL servers optimize queries -- it is pretty cool and I think lessons from SQL could be taken into a compiled language. Related: Optimizing Hot Paths in a Dynamic Binary Translator.

Multi-stage compilation

Say what you will, but just-in-time compilation is really cool. Believe it or not, some people do not like to distribute their source code to all their customers (crazy, huh?). However, very few people have problems delivering byte code to people. Every decent scripting language has some sort of intermediate representation and some of the most popular languages today compile to a byte code. LLVM uses an intermediate representation so that it can perform common operations like optimization on any input language and easily generate code for multiple architectures. Bart de Smet had a good blog post on JIT optimization in .NET byte code. Pretty cool stuff.

Yeah, so there is a startup cost of having to compile the intermediate language to native architecture and extra expense of having to have a compiler sitting around on every system you want to run software on. But it's really not that bad, especially considering how cheap hard drive space is these days. And for really performance-critical things, you can do something like ahead-of-time compilation for a specific architecture (like Mono).

21 April 2010

What I Would Like in a Programming Language (Part 1)

Compiler Intrinsics

Intrinsic functions and properties are wonderful things. All the decent C family languages have intrinsic functions like sizeof and alignof, but sometimes you need support for more. While a language designer can try to think of every possible need and expose a good intrinsic for all potential future requirements, this is ultimately a losing battle for pretty obvious reasons. It would be really great if a user could extend the intrinsic properties of the compiler with their own domain-specific needs. I am imagining the compiler gets something like an extension sheet along with the source code so that these properties can be added to the system quite trivially. It would make it so that people could extend the compiler without actually having to recompile the compiler -- it is the culture of extension that you really care about. Of course, I have no idea how the implementation would work, but it would be nice to have.

Ultimately, this could give a system like C++ type traits that do not feel like a complete hack. Of course, C++ type traits are extremely powerful, but frankly, they were never meant to do what they now do and, of course, just don't feel right. If concepts were not dropped from the C++0x standard, we could be about halfway to a cleaner solution; as it stands, we are stuck with using type traits.

To run completely away with the idea, something on the order of having a Lisp-like macro system where you have an extra program which spits out some abstract syntax tree from the input would be totally awesome. Okay, so this feature already exists in perfect form in Lisp, but I would love to see it in other languages as well.

Unit Testing


Haskell is a wonderful place to draw examples from. A framework like QuickCheck is just awesome. Because let's face it: Nobody likes writing unit tests. Now, I'm not saying that they are completely unnecessary, just that they are a pain in the ass to write. Say you have a function with the signature sqrt(x : real) : real.

If you wanted to write some unit tests for this function, you would pound away at some known values of various square roots. For brevity, I'll eliminate specifying some range of results that we consider "valid."

assert_equal(sqrt(4), 2)
assert_equal(sqrt(100), 10)
assert_equal(sqrt(2), 1.4142135)

Okay, that's halfway decent and pretty clear what I mean. But let's face it: out of the limited representational power of a real (whatever that may be), I am testing a very pathetic subset of all the possibilities. There might be negative values that somehow work or positive ones that do not - which is especially probable for very small or very large numbers. What would be nice is something with a signature like this:

sqrt(x : real @AcceptableRange(0 .. INFINITY))
: real @ResultCheck(r => (r * r) == x)

So the syntax is not completely readable, but the idea is that we are attaching some annotations to the parameter and the result of the function. These can be processed by whoever might need them, similar to the use of .NET attributes and Java annotations. However, I have stretched the allowable syntax to whatever you want -- in this case, a ranged primitive and a lambda function. The possibilities are endless! Compilers for the language doing static checking could find failures before they happen and IDEs could assist people with their problems.
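Even without language support, the QuickCheck idea can be faked today: instead of hand-picking inputs, generate lots of random ones and check a property that must hold for all of them. A minimal sketch of the sqrt round-trip property (the tolerance, range and sample count are arbitrary choices of mine):

```cpp
#include <cmath>
#include <cstdlib>

// Property: for any non-negative x, sqrt(x) squared should recover x
// (within floating-point tolerance).
bool sqrt_roundtrip_holds(double x) {
    double r = std::sqrt(x);
    return std::fabs(r * r - x) <= 1e-9 * (x + 1.0);
}

// Check the property on many pseudo-random inputs across a wide range.
bool check_sqrt_property(int samples) {
    std::srand(42);  // fixed seed, so failures are reproducible
    for (int i = 0; i < samples; ++i) {
        double unit = std::rand() / static_cast<double>(RAND_MAX);
        double x = unit * 1e6;  // spread inputs over [0, 1e6)
        if (!sqrt_roundtrip_holds(x))
            return false;
    }
    return true;
}
```

A real QuickCheck also shrinks failing inputs down to a minimal counterexample, which is where most of the magic lives.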

Better yet, we could use...

Static Type Checking

I am a huge believer in static type checking. From the example above, things like just having a type modifier called unsigned makes exceptional conditions of the sqrt function impossible, which is really nice, since this is what you actually mean. Potential errors due to negative numbers are eliminated at compile time, because you simply cannot compile when there is a chance for an error. Compiler-enforced consistent behavior is an awesome thing.

User-Definable Primitive-looking Types

Let's say you are writing a math function that has the signature rotate(thing : Shape, angle : Real). This function rotates the Shape called thing by angle radians. Oh, you could not tell that I use radians by the method signature? That's a problem...

If we were doing some C++, we might have lines like:
typedef float Radians;
typedef float Degrees;

So the signature would look like: void rotate(Shape* thing, Radians angle). Now the method signature tells you which kind of unit you are using. Of course, the problem here is obvious: since Radians and Degrees are actually the same type, we are free to convert between the two and the compiler will not actually care that there is a difference (because, as far as it is concerned, there is not a difference).

So how can you make the compiler care? In C++, this is notoriously difficult (although possible). Once again, let us pretend that there is something perfect out there for me that looks like this:
type Radians is real range 0 .. 2 * PI
type Degrees is real range 0 .. 360

And then one could specify that there exists a scalar conversion between the two units:
conversion Radians <=> Degrees is implicit direct

That looks a little funny, but stay with me for a second. The <=> token means that the conversion is two-way. After that, I added some words just to show that my dream is really powerful. implicit means that there is an implicit conversion (as opposed to an explicit one) - the compiler is allowed to freely convert the units between each other (assuming it follows the rules of conversion). Then, direct is just a way to specify that there are no fancy conversion rules: the compiler just figures out that 180/PI is a good conversion factor for the number specified.

var angle = 180 degrees
rotate(thing, angle)

This is kind of along the lines of Ada with a little bit of extra conversion logic. A convenient system like this would be great for certain NASA contractors.

I've decided to split this incredibly long post up...

15 April 2010

Implications of Apple's iPhone Lockdown

If you have been living under a rock this week and have not heard about Apple's wonderful new iPhone Developer License Agreement, here is the part that is bothering people:

3.3.1--Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

I am not exactly a businessman, but this does seem like a rather bad move on Apple's part. Apple seems to lack an official stance on the matter, but the response of Steve Jobs to John Gruber's blog is the closest I have seen. I think John summarized Apple's reasoning in the 4th paragraph (emphasis mine):

So what Apple does not want is for some other company to establish a de facto standard software platform on top of Cocoa Touch. Not Adobe’s Flash. Not .NET (through MonoTouch). If that were to happen, there's no lock-in advantage.

It is Apple's platform which they used to take over the "hip" smartphone market with elegant simplicity, solid design and good marketing. Somehow, despite the incredible lockdown on distribution (to the point where you can develop an application and just have it rejected), people are climbing over each other to develop exclusive applications for it. It seems like everyone and their dog is making an iPhone application and it is the de facto "easy mobile platform," despite the fact that other SDKs exist and look awesome.

John then goes on to say this:

Then Apple releases major new features to iPhone OS, and that other company’s toolkit is slow to adopt them. At that point, it’s the other company that controls when third-party apps can make use of these features.

How is that relevant? As an application developer, layers of abstraction are a good thing. In the simplest case, consider a library like SDL. It allows me to develop an application concurrently for Windows, Linux, OSX and whatever else SDL supports, despite the drastic differences in windowing systems and system libraries. For the majority of the graphics functionality I need, SDL provides a convenient way to develop for all target platforms simultaneously. In the corner cases where I need functionality that SDL does not provide, I can work down to system-specific OpenGL calls. The mark of a good library is that it abstracts the most common functionality and lets you get more specific when you need to. If the abstraction layer does not provide the functionality you need, make the system call yourself. If the abstraction layer does not allow you to do this, get a new library.

I can understand that Apple does not want a bad standard taking over their magical platform, but unlike a web standard, when you compile a language, assemble the generated code and run it on a processor, the processor is completely agnostic to what the original input language was. I'm willing to bet that when Adobe's software is transforming your ActionScript code to run on the iPhone, they are making the appropriate system calls. There is also a good chance that the code that Adobe emits automatically is more efficient than the average iPhone developer's. If application developers need some functionality provided by Apple's SDK but not yet wrapped in a high-level call, they can just make the SDK call. If the system does not allow this, it is time to get a new one.

All Apple is doing is locking down their system so that people have to use their tools to develop for them. It may be a great money-grabbing move, but does it help the customers? While John provides the common assertion that cross-platform apps are bad on Apple platforms, I would argue that has less to do with the fact that the application is cross-platform and more to do with developer time. Many Windows+OSX apps are worse on OSX because developers do not care enough about the OSX version (which is probably justified, considering the relative percentages of users of the two systems). Besides, there are already systems which provide cross-platform support, and I will cite QML as just one such system. Ultimately, how well an application works on a system is a property of how much time and effort a developer puts into it. And if a developer is tasked with making the same app for a web browser and the iPhone, then having to write common functionality once will give them more time to dedicate to making the application feel right and will give consistency across platforms, both of which lead to an ultimately better user experience. And if the abstraction layer does not make it easier to develop, then it is time to get a new one.

Enter the Apple fanboys. Steve makes the case that Apple does not want to tie themselves to a single platform and seems to think that XCode is the only thing capable of cross-compilation or that Apple is somehow innovative in this regard. He also seems to think that the iPad's processor is not of ARM architecture and that, for some reason, Apple is artificially restricting people to running their apps on an emulator. If he is right, then Apple is epically stupid, since it is obvious they are not lacking in a compiler for whatever the A4 chip is. This is my favorite quote:

Apple is sowing the groundwork to make architecture changes seamless—developers will only need to flip a switch to give their apps blazing, native performance.

All debate about what the A4 is aside, it is ridiculous to think this argument is a legitimate justification for the programming language lockdown. If Apple's toolchain really does generate applications that are so superior to others that it outweighs the cost of having to develop two simultaneous apps, then people will just use their system. Apple would win on the merits of being better and would not need to lock down their system in the first place. Apple is no longer competing on the grounds of producing a better system; they are effectively saying that they do not wish to compete and are just locking Adobe out. Lack of competition ultimately hurts the customer.

But Adobe is not the only one Apple is screwing over with this policy - it raises legitimate concerns for abstractions like Qt-iPhone. They are probably safe, since Qt is mostly C++. However, some could argue that, with uic and moc, Qt code can no longer be considered "C++." Of more concern are tools like Unity 3D, whose front page currently boldly advertises "Unity iPhone 1.7, Now with iPad support." Sorry, guys! And what about the people who no longer wish to live in the world of imperative languages? Nope.

It's sad that the iPhone is what people will consider the spark that started the era of ubiquitous mobility, when that perception was nothing but an amazingly well-played marketing game.

UPDATE: The latest Unity mailing list letter contains this gem (emphasis mine):

As most folks already know, Apple introduced a new Terms of Service (ToS) with their iPhone OS 4 that has some Unity users concerned. While we don't yet have a definitive statement to share at this time, we have a healthy, ongoing dialogue with Apple and it seems clear that it's not their intention to block Unity and will continue our efforts to ensure compliance with the Terms.

I feel bad for the Unity developers, who will probably have to do some amount of work to "ensure compliance." What a pointless waste of energy. For more on this, check out David Helgason's blog. What surprises me is that Unity is in direct violation of the letter of their terms of service. It is not really debatable: user applications simply are not written in "Objective-C, C, C++, or JavaScript."

This goes back to my emphasis: "...it seems clear that it's not their intention to block Unity..." So what is their intention? It is clear that Adobe is not getting this same treatment. Was Apple's only intention to block Adobe? That is astoundingly pathetic.

01 February 2010

XML Serialization, the XNA Content Pipeline and Wumpus World

When replying to this Stack Overflow question, I realized that the poster was needlessly storing and loading XML through XML serialization at game time instead of using XNA's Content Pipeline.  Like many people wanting to make a decent game, he wanted to create an editor, and he wanted that editor to use his in-game engine.  This is a great idea, as it will let those designing worlds for your game (and you) see what things will actually look like in the resulting output.  It also opens the door for build-and-play scenarios where you do all design work in one application, without the burden of the design-save-build-save-get-into-the-proper-mode-in-the-actual-game-test loop.

Unfortunately, his mistake was that he fell into the trap of .NET's XmlSerializer.  It is an easy trap to fall into because it is so simple and convenient to use.  Besides, the XNA Custom Content Pipeline is not advertised well enough and is actually quite obnoxious to use correctly (and the documentation is rather dull).  Luckily for all of my two readers, I have read that documentation already!

In today's example, I'm going to be operating on the wonderful game of Wumpus World.  I'm going to be making a visualization and editor for the game; the reader can tie in input controls or whatever to make it a full game.

Setting Up the Project
First things first, we need to set up the project.  Obviously, you'll want to start off with your XNA Game - mine is called "Wumpus."  To make a custom pipeline extension, right-click your solution, perform the clicks to add a new project, and in the "XNA Game Studio $version" section, find "Content Pipeline Extension Library."  The convention here is to append ".Pipeline" to the namespace of your project, so mine is named "Wumpus.Pipeline."  There is automatically a ContentProcessor which is so creatively titled ContentProcessor1.  I'm not sure who at Microsoft made the decision to put these useless default files in every project, but you can just delete it.

Anyway, to actually use the things in your pipeline extension, your content project in the XNA game needs to have a reference to the pipeline project.  To do this, right click the content project, click "Add Reference," then go to the "Projects" tab of the dialog that comes up and select "Wumpus.Pipeline" (or whatever you named your pipeline project).

So now your content project can build with the pipeline extensions, but how do we build the objects in the extension library?  You'll most likely want to use the objects you've already created in your library, so try to add a reference to your main game through the same method.  I say "try to add" because this will not work.  It will tell you that you have a circular dependency.  Why?  Right now, there are three projects in your solution:
  • Wumpus -- This implicitly references Wumpus (Content).
  • Wumpus (Content) -- This references Wumpus.Pipeline.
  • Wumpus.Pipeline -- This cannot reference Wumpus, because that would make a loop.
In reality, this should not be a problem, since the content build project only needs the pipeline project while building and does not even exist at run time, so there is no circular reference.  However, Visual Studio cannot figure this out, so we have to work around this.

The solution is to create another project containing the core components of your game and reference that from both the game engine and the pipeline extension.  This avoids any possibility of circular references.  So add a new Windows Game Library to your project.  The convention is to append ".Core" to the name of the library.  As you probably know, Visual Studio will assume that is the namespace you would like to use for all classes in the project.  However, you won't be putting classes in the Wumpus.Core namespace, so you can change that behavior by opening the properties window for the core library and changing the "Default Namespace" to Wumpus (or whatever you want).

So here is the solution structure as it stands:
  • Wumpus.Core -- This has its own content project, but you don't need to put anything in it.
  • Wumpus -- References Wumpus (Content) and Wumpus.Core.
  • Wumpus (Content) -- References Wumpus.Pipeline.
  • Wumpus.Pipeline -- References Wumpus.Core.
Now we're shaping up!

Originally I had mentioned an editor which uses the in-game engine and all the objects, so let's do that.  Add another project to your solution, but this one is going to be a WPF Application (you could also do a Windows Forms application, but WPF makes GUI creation fun and easy).  I named mine "Wumpus.Editor."  Add a reference to Wumpus.Core (since you're going to be using objects from it) and Wumpus (since we want to use the engine components from that).

As clearly indicated from my wonderful drawing, Wumpus.Core is the center of the attention.

Code Architecture
In the interests of brevity, I'm going to leave out the parts of the code that are boring.  They're all part of the download package at the bottom of this post.  Basically, there are two classes: TileQuality, a bitwise-flags enum with qualities such as breezy, gold, stinky and wumpus; and WumpusWorld, which contains a grid of TileQualitys.  They both reside in Wumpus.Core, since they are needed for all phases of design.

Being able to compose a full application by integrating many small pieces and making them communicate is wonderful.  However, it requires strict adherence to the one object serves one purpose philosophy of object-oriented programming.  Unfortunately, the Microsoft.Xna.Framework.Game class is often a blatant violation of this philosophy, as it encourages putting all the drawing, user interaction, asset management and everything else into this one place.  Knowing this, it is important to put different parts of the application in different places.  For this demo, the only part that will be on its own will be the graphics engine.

So, create a new class in Wumpus (because that is where engine things go) called GraphicsEngine (creative, I know).  In a real project, you might want to make an interface like IEngineModule or something so that you can easily manage all the engine parts, but here you can just implement IDisposable.  Since we're abstracting things, we ask: what does a graphics engine need?  I'm thinking it needs a reference to the GraphicsDevice so that it can do its job and a ContentManager so that it can locate and load graphics resources.  This is my constructor signature:

public GraphicsEngine(GraphicsDevice device, ContentManager manager);

This gives us the freedom to use the graphics engine any time we have those two items, instead of relying on the engine to go out and discover those things itself (which is a very easy way to accidentally create class coupling).  This is the basis of dependency injection.  Of course, if we wanted to be real legit about this, we could plug in a real dependency injection framework and work out what needs what from some configuration, but that's overkill here.

The sole job of the graphics engine is, given a WumpusWorld, draw it to the display device:

public void Draw(WumpusWorld world);

We are going to call this from our game, so we will need to have an instance of GraphicsEngine in our WumpusGame class.  Following the XNA patterns, Initialize is probably the best place to create the graphics engine, since we should have a reference to a valid device (assuming you've made a GraphicsDeviceManager and initialized it, as per the default):

m_graphics = new GraphicsEngine(GraphicsDevice, Content);

So instead of performing the drawing ourselves in WumpusGame.Draw, we delegate this task to the GraphicsEngine and trust it to do everything right:

protected override void Draw(GameTime gameTime)

Pretty easy, huh?  Of course, m_world is null right now, so you will want to instantiate it somewhere (doing it in LoadContent makes the most semantic sense).  If you have the free time, you can edit the world by hand and play the game:
Wumpus World Game

I know, the artwork is amazing.

Saving and Loading
As much fun as it is to create these worlds in code, it would be nice if we could persist these things outside of code.  For many things, .NET's automatic XML serialization is a wonderful thing and can be used.  However, my WumpusWorld class has a two-dimensional indexed property (public TileQuality this[int row, int col] { get; set; }), which cannot be serialized, so I wrote my own methods for saving and loading:

public void Save(Stream stream);
public static WumpusWorld Load(Stream stream);

These methods will be called by anything that wants to save or load for the simple text format (hereby referred to as .wump).  Here is some example output:

Breeze, Stench
Breeze, Gold
Breeze, Stench
Breeze, Wumpus

Originally, though, I had talked about extending XNA's content pipeline, so we're going to make a reader and writer to write to and read from the XNB binary format.  This lets you do fun things like verify every single file at build time, take advantage of automatic compression and deployment and a slew of other features gained by drinking the XNA Kool-Aid.  OOOH YEAH!
  1. ContentImporter: Reads an object from disk into memory.  Input is a file name and the output is some .NET object.  In our case, we're reading a .wump text file and outputting a WumpusWorld.
  2. ContentProcessor: Takes a .NET object, runs some operations on it and outputs some other .NET object.  You can perform whatever arbitrary modification to the object you need to.  We are not using it here, but I'm mentioning it because these can be very helpful (Shawn Hargreave's Pre-Multiplied Alpha Processor is a good example).
  3. ContentTypeWriter: Takes the .NET object that has been imported and optionally processed and writes it to disk with a binary serializer.  The input is the processed .NET object and the output is a binary serialized file.  This is the last step that is taken by the pipeline project.
  4. ContentTypeReader: Reads from the binary serialized stream and outputs the .NET in-game object.  This is in your actual game engine code.
All of these classes are very simple -- here is the entire WumpusWorldImporter class:

[ContentImporter(".wump", DisplayName = "Wumpus World Importer")]
public class WumpusWorldImporter : ContentImporter<WumpusWorld>
{
    public override WumpusWorld Import(string filename, ContentImporterContext context)
    {
        // use the built-in load function
        return WumpusWorld.Load(File.OpenRead(filename));
    }
}

Simple, eh?  The ContentImporter attribute lets Visual Studio know that when we add a .wump file to the content build project, we would like to automatically use this content importer, and it provides the display name for the importer.  Look in the code for all the other classes, but there is not much that needs explanation.  Most of the ugly stuff is out of the way so that you are presented with this nice, clean interface.

Create a .wump file (might I suggest using the "example output" from a few paragraphs ago?) and add it to Wumpus's content build project.  If everything worked properly, Visual Studio will associate the importer, not attach a processor, locate the writer at build time and associate the correct reader when the ContentManager needs it.  In short, you will be able to load a WumpusWorld with a simple call like this:

m_world = Content.Load<WumpusWorld>("World/wumpus");

Pretty slick stuff.

The Editor
So, we would really like people to be able to create new worlds not by editing some text file, but by using a GUI tool.  As great as the NeoForce Controls are, they just do not have the sheer number of controls that Windows Forms has, nor are they as easy to design with (unless someone has made a WYSIWYG editor).  Wouldn't it be great to somehow embed the GraphicsEngine that we have already made into a heavyweight GUI framework?  In this demo, I'll be using WPF, because it is shiny, new and fun to work with.

The classic way to work with WinForms is to use Microsoft's WinForms Graphics Device sample, which is quite helpful.  This allows you to make an XNA GraphicsDevice render to the surface of a regular control.  However, this relies on the control having a handle, which is not present in WPF (at least not in a way that you can see).  This is easily solved through use of WindowsFormsHost, which lets you host a WinForms control in a WPF environment.

<WindowsFormsHost x:Name="ctl_formsHost" />

On initialization, attach our world control as the child element of the host:

ctl_worldView = new WumpusWorldControl();
m_graphicsService = GraphicsDeviceService.AddRef(ctl_worldView.Handle,
    ctl_worldView.Width, ctl_worldView.Height);
ctl_formsHost.Child = ctl_worldView;

The final output of my editor looks like this:
Wumpus World Editor 
Stunning beauty.

Code Files
The code here is not at all a "finished product." It would be pretty easy to extend the editor by doing things like adding smell and breeze automatically when a Wumpus or pit is added or making a world larger than 4x4.

It is released under the Apache License, Version 2.0 and should be used for good, not evil.
Wumpus Source Code