Implementing a wave-distortion shader with Nvidia FX Composer

I spent most of the day fighting with one of the most difficult and arcane pieces of software I’ve ever used: Nvidia FX Composer. I think it was written five or so years ago by members of a suicide cult who, upon releasing it, promptly drank the poisoned Kool-Aid. Nvidia makes a pretense of running a customer support forum for the product, but most of the posts are desperate questions without a single response. The included documentation, on the other hand, is relatively massive… but years out of date and therefore occasionally completely inaccurate, which in many ways is worse than having none at all. Of course, my own total lack of experience writing HLSL didn’t help, either.

In the end I persevered and got this effect implemented and merged into my game:

Once I finally figured out how to use it, FX Composer was a huge help in being able to rapidly experiment with different effect parameters and see the result. Here’s what the software looks like once it stops kicking you in the balls:

What I had so much trouble with was figuring out how to convince the software that my shader was a post-processing effect to be rendered onto the entire scene, not onto a 3D object in the scene. After a lot of research and experimentation, I took a working post-processing shader provided as a sample and started taking things out of it until it broke. That’s when I learned you need these special semantics to tell FX Composer that the shader is for post-processing:

float Script : STANDARDSGLOBAL <
    string UIWidget = "none";
    string ScriptClass = "scene";
    string ScriptOrder = "postprocess";
    string ScriptOutput = "color";
    string Script = "Technique=Main;";
> = 0.8;

If this looks like gibberish to you, it does to me too. But these are the magic words needed to get FX Composer to let you apply a shader to an entire scene. It’s also important that you have a vertex shader defined in your technique, even if you’re only using a pixel shader — nothing will work otherwise. Here’s a pass-through shader you can use:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 Txr1 : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 Txr1 : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = input.Position;
    output.Txr1 = input.Txr1;
    return output;
}

The final piece of the puzzle was getting the updated shader back working in XNA. There are lots of articles on this on Google, so no need for me to repeat it here. But I did discover something interesting: if you have an entirely 2D game, not a single 3D triangle rendered anywhere, then using a vertex shader in your effect, even the pass-through shader, will cause no pixels to get through to your pixel shader. The solution is to just comment out the vertex shader in your pass. Since FX Composer requires this shader and XNA requires it be absent, toggling the comment is an easy way to make both systems happy.

technique Main <
    string Script = "Pass=Pass0;";
> {
    pass Pass0 <
        string Script = "RenderColorTarget0=;"
                        "Draw=Buffer;";
    > {
        //VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PS_wave();
    }
}

This was a bit more of a rabbit hole than I really intended to climb into, but at least I learned a few somethings.

Also, I filmed the motion-capture sequences for the main character’s animations today in Roark’s back yard. I need to do a lot of video editing on the result, then the hard labor of cleaning up all the pixels and shoving it into the game, but that’s tomorrow. For now, enjoy this outtake from the shoot:

Prototyping the sonar excavator

Today I plunged into the arcane and confusing world of Direct3D pixel shaders, which I will have to put to good use if I want this game to look even a little impressive. Here’s a prototype of a shader that renders a wave of energy advancing outward from the character’s position:

This is to enable a tool I’m excited to implement, something I’ve been calling the sonar excavator, which will help players find hidden passages.

In many exploration games, there’s some tool that allows you to inspect the game world to see if any solid-looking terrain is vulnerable to one of your weapons, thereby opening up a secret passage. Super Metroid has the X-ray scope, Shadow Complex has the flashlight, and Insanely Twisted Shadow Planet has a sort of scanning beam. Of these three, I consider the latter two failures, since they take any mystery out of exploring. In Shadow Complex, there’s literally no reason not to keep your flashlight on all the time, and since it makes any vulnerable terrain you walk past light up like a neon sign, there’s almost never any doubt about where secret passages are. In ITSP, your scanning beam just tells you what tool can affect a given object. This isn’t as bad as Shadow Complex, since you have to switch to your scan beam and then wave it toward the thing you want to scan, rather than it just happening all the time in the background. The x-ray scope in Super Metroid is the best of these, since 1) it’s an optional pickup that you find relatively late in the game, and 2) it’s a bit cumbersome to use all the time, since it stops the action and it takes a while to scan the whole screen.

Exploration games have quite a lot in common with mystery novels — the author gives the player hints and clues and then lets them figure things out by themselves. It’s a kind of pact between the creator and the consumer, a promise that “I’m telling you everything you need to know, but you’ll still have to think.” Many modern games break this pact by holding the player’s hand every step of the way, turning the game into a kind of moderately interactive guided sightseeing tour. Just to make sure this metaphor is squeezed good and dry, the flashlight in Shadow Complex is the equivalent of the following passage in a mystery novel:

As the butler entered the foyer, Detective Sanchez noticed he had spatters of blood on his pristine white gloves and a sooty hand print on his starched collar. Sanchez looked at the dead chimney sweep crumpled on the ground, his throat opened and a trail of bloody footprints leading to the door through which the butler had just entered. Just then he got a text message from headquarters, and pulled his phone from his pocket to see that the butler was wanted for murder in six other states and had spells of psychotic rage.

Except in an actual mystery novel, the above passage would be a guarantee that the butler was innocent. If modern video game designers were in charge of writing this hypothetical novel, it would wrap up six pages later with the butler’s successful conviction.

Anyway, I want to restore some sense of mystery to the player, and the way you do that is by letting go of their hand and letting them figure things out on their own. The sonar excavator is an exploration tool that aims to keep the player guessing. It will fire a pulse of sound at a wall, and if the pulse hits any point of interest, such as a destructible tile, it will respond with a ping that varies based on what was hit. You’ll start with a very narrow beam, and throughout the game your successful exploration will be rewarded with upgrades to the excavator: a wider beam, a glow on whatever the pulse hits, the ability to kill weak enemies, that sort of thing.
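None of this exists yet, but the response logic described above amounts to a lookup from whatever the pulse hit to a ping sound. A toy sketch of that idea in Java (the tile kinds and pitch values are all invented for illustration; the game itself is written in C#/XNA):

```java
import java.util.Map;

public class SonarPing {
    enum TileKind { NORMAL, DESTRUCTIBLE_BOMB, DESTRUCTIBLE_LASER, DOOR }

    // Hypothetical mapping from what the sonar pulse hit to a response pitch (Hz).
    // The post only says the ping "varies based on what was hit".
    static final Map<TileKind, Integer> PING_PITCH = Map.of(
        TileKind.DESTRUCTIBLE_BOMB, 440,
        TileKind.DESTRUCTIBLE_LASER, 660,
        TileKind.DOOR, 880
    );

    // Returns the pitch to play, or -1 for no ping (nothing interesting hit).
    static int pingFor(TileKind hit) {
        return PING_PITCH.getOrDefault(hit, -1);
    }
}
```

Silence on a wall then carries information too: no ping means nothing interesting behind it.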

Speaking of sounds, I played around with sfxr, a lo-fi sound creation tool created for Ludum Dare. It’s pretty keen, and quickly gave me some basic chip-tuney sounds for lasers and landing after a jump. I’m almost to the point where I want to start plugging in sounds for a lot of the game content, and this will let me prototype placeholders really rapidly.

Baby steps to a complete engine

I got three small-ish features cranked out today: 4-way shooting, selectively (not universally) destructible tiles, and doors. Have a look:

Doors and selectively destructible tiles are both easily edited using an object layer in Tiled, and both can be keyed in to respond only to particular weapons, again using the map editor.

Right now the doors are three tiles high, and the character is two high. I’m a bit torn on this point: I think I would ideally have them both be one tile higher, mostly just because that’s what Super Metroid does. I’m kidding, kind of. In any case, the reason that Super Metroid did this was because they couldn’t easily deal in partial tile distances, and they needed to have three distinct character heights for the Morph Ball, kneeling, and standing (one, two, and three tiles high respectively). And Samus is pretty chunky in Super Metroid, even in profile — assuming she’s six feet tall, that makes her two feet wide at the shoulder. Must be the armor? My motion capture model is pretty svelte, but I won’t have a really firm idea on the proportions until I prototype the actual motion-captured sprites.

Still thinking about yesterday’s project, I did quite a bit of reading about various camera controls used in other games. There’s remarkably little out there on this topic considering how fundamental it is to a game’s playability. I started with this article someone posted to /r/gamedev on reddit today, then found this analysis of Super Mario World.

It’s interesting how few people pay attention to the behavior of the camera in a 2D side-scrolling game, despite it being of such paramount importance. I can honestly say that I had never considered how the camera in Super Mario World worked until watching that. In some sense, a camera system (like most software) is like plumbing — you only notice it when it doesn’t work properly.

Of course there has been plenty of advancement in this field since Nintendo’s heyday, even if you almost never hear it mentioned, especially for 2D games. Insanely Twisted Shadow Planet did some inspiring work on this front.

In fact, I was inspired to download it from XBLA, and I’m going to go play it now.

Room-based camera control

When exploration is the driving goal of your game, you want to make it easy to navigate around the game world and contextualize your current location. Breaking the world into discrete rooms is important to this end, since it helps the player break the problem of navigation down into manageable chunks and contains any puzzles and encounters in a known area. Today I implemented rooms in my map engine:

I’m not quite satisfied with the transition between rooms, but it’s good enough for the time being. As you can see from the video, the game loads and unloads the dynamic content (like enemies) and physics model of each room as needed. This will make it a lot easier for the physics engine to cope with lots of on-screen action as the world grows in size, since only the current room is simulated at any time.

I have a couple things to note about this implementation before I sign off.

First, rooms are rectangles. At first blush, this would seem to impose a constraint on level design: that any two room exits to the west, for example, would need to be vertically aligned. That’s not actually the case, owing to how it’s implemented. There are two ways to exit a room: either enter a door or leave the room’s boundary. Doors will be smart enough to guide a player into any adjacent (or overlapping) room in their direction, even if they don’t sit on the room boundary, and I can add “virtual doors” to hidden passages if I absolutely need to. I’ll post a screenshot or two if and when this comes up to demonstrate what I’m talking about.
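To illustrate the door idea (this is my sketch, not the game’s code): a door only needs to find which room rectangle contains a point just past it, so exits don’t have to sit on the room boundary at all.

```java
import java.util.List;

public class RoomLookup {
    // Rooms are axis-aligned rectangles, as described above.
    static class Room {
        final int x, y, w, h;
        Room(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    // A door guides the player to whichever room contains the point on its far
    // side, which also handles adjacent or overlapping rooms.
    static Room roomAt(List<Room> rooms, int px, int py) {
        for (Room r : rooms)
            if (r.contains(px, py)) return r;
        return null; // no room there: the door leads nowhere (yet)
    }
}
```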

Second, before this change, a tile’s position was defined at its center. That makes some things convenient, but when it comes to working with Tiled maps, where you can draw object layers on top of tile layers, you really want this position to be the top-left corner of the tile, so that what you see in the editor matches up with the world you build in the physics engine. I took the time to make that change, since it was leading to some less-than-obvious behavior in the implementation.
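The conversion itself is just a fixed half-tile shift; a one-liner sketch (the tile size parameter is an assumption, not a value from the game):

```java
public class TileOrigin {
    // With top-left-origin tiles, editor coordinates and physics coordinates
    // agree. Migrating a legacy center-based position to a top-left position
    // is a half-tile shift on each axis.
    static double[] centerToTopLeft(double cx, double cy, double tileSize) {
        return new double[] { cx - tileSize / 2.0, cy - tileSize / 2.0 };
    }
}
```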

At this point I have the basic mechanics of level creation, destruction, and movement working about how I want, so I’ll be moving on to prototyping various weapons and power ups to figure out which ones will feature in the game. Actually, I lied: before I do that I need to get shootable doors working and add selective destructiveness to the mapping system. But those are both small, manageable changes I hope to bang out tomorrow.

SWT listeners are incompatible with Java lambdas

Programming in C# for the last few weeks has really gotten me to pay attention to all of Java’s inadequacies. One of the biggest, which C# handles so well, is lambda functions, or closures. Java has been promising to add support for them forever, and it looks like it’s finally going to happen in Java 8. But my mouth isn’t exactly watering, mostly because the way they’ve chosen to implement them is going to make adoption by existing libraries a huge pain in the ass. Here’s the problem:

Interfaces that have just one method are called functional interfaces. Lambda expressions can be used anywhere we have a functional interface.

Sure, there are lots of interfaces with just one method, like Runnable, Callable, FileFilter, etc., and all of these will be able to enjoy the nice new syntactic sugar. This:

button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        ui.dazzle(e.getModifiers());
    }
});

Becomes this:

button.addActionListener(e -> { ui.dazzle(e.getModifiers()); });

Pretty nice, huh?

Well, there’s just one problem: you can’t do this if the interface required by the method signature has more than one method defined. In my other life I spend a lot of time writing Eclipse plugins using a pretty nice windowing toolkit called SWT. Most of SWT’s listener interfaces have more than one method defined on them. SelectionListener has two. FocusListener has two. DropTargetListener has five. So any method that takes one of these interfaces can’t join in the syntax sugar party, sorry.

This problem isn’t specific to SWT — all the classes above were basically adapted from analogues in Java’s own AWT. Most of the listener interfaces that libraries declare are non-functional, requiring you to implement more than one method. SWT and other frameworks soften this by providing abstract classes they call Adapters, such as SelectionAdapter. When you don’t want to implement all of an interface’s methods, you supply an anonymous inner class that extends the adapter rather than implementing the interface, like so:

button.addSelectionListener(new SelectionAdapter() {
    public void widgetSelected(SelectionEvent e) {
        // widgetDefaultSelected is inherited from the adapter as a no-op
    }
});

There is absolutely no support in Java 8 for lambdas that use this widespread pattern. You won’t be able to use lambdas anywhere you currently do something like this.

I understand that it’s tough to add language features, but people need to be able to use the new features, and use them consistently, for them to be useful. Forcing me to hop back and forth between anonymous inner classes and the nice new lambda syntax is just terrible. I think the attitude I’m supposed to adopt is that whenever I get to use the new syntactical sugar I’m being given a special treat, but I just can’t look at it that way.

As an alternative, the compiler could (this is just off the top of my head) generate an anonymous class for you with the Adapter pattern, allowing you to, in effect, implement only part of an interface as you saw fit. This would dovetail nicely with the new default implementation for interfaces, don’t you think?

If anyone has a tricky way to get around this issue by adding some layers of abstraction, I’m all ears.
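One layer of abstraction that does work: a static factory method that takes a lambda and returns an anonymous subclass of the adapter. You need one factory per adapter method, but every call site gets to use lambdas. A self-contained sketch using a stand-in two-method interface (MultiListener and onSelected are my names, not SWT’s — substitute SelectionListener/SelectionAdapter in real code):

```java
import java.util.function.Consumer;

public class AdapterFactory {
    // Stand-in for a two-method listener interface like SWT's SelectionListener.
    interface MultiListener {
        void widgetSelected(String event);
        void widgetDefaultSelected(String event);
    }

    // Wraps a lambda in an anonymous class that defaults the other method to a
    // no-op — exactly what extending SelectionAdapter does for you by hand.
    static MultiListener onSelected(Consumer<String> handler) {
        return new MultiListener() {
            public void widgetSelected(String event) { handler.accept(event); }
            public void widgetDefaultSelected(String event) { /* no-op */ }
        };
    }
}
```

A call site then reads `button.addSelectionListener(onSelected(e -> ui.dazzle(e)))`, which is most of what the Java 8 sugar would have bought.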

Live footage test of motion capture

I told Roark to come over wearing all black, or at the very least dark clothing. He showed up in faded jeans. But that was good enough to provide proof of concept for my de-rezzing motion capture program. Here he is going through a couple test motions:

You can tell that the jeans just aren’t dark enough to provide a good contrast against the backdrop — if I crank the brightness threshold up far enough for them to get colored in, we start to pick up other elements of the scene as well. For actual game animation footage, we determined he needs to be wearing something like a black unitard (ideally) on a nice bright white background, preferably on an overcast day to avoid any sharp shadows. That will give me the pixelated humanoid character, which I can then paint like a coloring book into whatever outfits I want, adding in things like guns and helmets in post production. We’re scheduled to shoot next Thursday, so with any luck I should have the basic running, jumping, and ducking animations to show off some time in early July.

2D pixel-graphic “motion capture”

Continuing on the same theme from yesterday, I added video processing capability to my homespun de-resolution-izer. Here it is in action with some stock footage I pilfered:

All that remains is to test it on some actual live footage of a dark character moving against a light background. If anyone has some of that lying around, let me know.

Also, I must continue to sing the praises of the graphical Form editor in Visual Studio. At one point today, I wanted to group a bunch of controls into a panel. So I dragged a Panel control over from the toolbox, resized it to fit around all the bits I wanted grouped together… and it just worked. It did exactly what I wanted, which was to enclose all the controls in a container that moves as a unit. I really had no idea that designing simple GUIs could be this painless.

And now for something completely different

Graphics aren’t really my forte, but the one part of the game I’m dedicated to making look great is character animation. Months before I started this project, I hit upon the idea of rotoscoping live footage to make the basis of the game’s main character. This technique isn’t new — Jordan Mechner filmed his little brother running and jumping and used it as the basis of Prince of Persia. But I have a distinct graphical style for the main character in mind: low-resolution, but fluidly animated. I haven’t seen that done much in other games, and I’m interested to see how it will turn out.

My plan for achieving this vision is to automate the task of de-rezzing video footage, so that ideally it will need relatively little cleanup to be usable in prototypes of the game. Today I wrote the first part of that algorithm, which will take an image of a silhouette and make it blocky looking. Here it is in action:

The next step is to use a video processing library and do the same thing for every frame of a video. Then we just dress Roark up all in black and film him in front of a white surface, and the software does (much of) the rest. I have pretty high hopes for this technique, since it’s one of the few instantly distinctive aspects of the game.
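The post doesn’t show the algorithm, but the core of a de-rezzer like this is straightforward: average each block of pixels, threshold on brightness, and emit one big on/off pixel per block. A sketch of that idea (block size and threshold values are assumptions; trailing pixels that don’t fill a whole block are simply dropped):

```java
public class DeRezzer {
    // Reduce a grayscale image (0-255 per pixel) to a blocky silhouette:
    // each blockSize x blockSize cell becomes a single on/off cell, "on" when
    // the cell's average brightness falls below the darkness threshold.
    static boolean[][] deRez(int[][] gray, int blockSize, int threshold) {
        int rows = gray.length / blockSize;
        int cols = gray[0].length / blockSize;
        boolean[][] out = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                long sum = 0;
                for (int y = 0; y < blockSize; y++)
                    for (int x = 0; x < blockSize; x++)
                        sum += gray[r * blockSize + y][c * blockSize + x];
                out[r][c] = sum / (blockSize * blockSize) < threshold;
            }
        }
        return out;
    }
}
```

Cranking the threshold up, as described in the live-footage post below, makes faded jeans count as “dark” — along with everything else in the scene, which is exactly the problem.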

On another note, developing Windows Forms is really easy! I had no idea. Hats off to Microsoft for making the tools so easy to use. Compared to writing any graphical component in a framework like SWT, this is total child’s play. On the whole I haven’t been sold on the idea that Visual Studio is the greatest development environment known to man, as many people insist. Honestly, even with ReSharper installed, it’s still inferior to stock Eclipse in a lot of ways. It felt like taking a step back tool-wise, even as I got used to it. The GUI builder was the first time I felt Visual Studio had a significant, clear edge, and I couldn’t have written this simple GUI in an afternoon without it.

Someone to fight with

I couldn’t resist pulling in the little crawling guy from Metroid, despite having gotten my hackles up about possible copyright violations just yesterday. Here he is:

He’ll wander in a straight line until he hits something, then turn around. Five shots kill him. Touching him hurts the player, who doesn’t have any visible health at the moment, but you can see the knock-back effect. I quite like it when the player is standing still, but I think the effect of landing on the monster leaves something to be desired. Right now I’m just drawing a line between the two centers of mass and throwing the character in that direction. This works fine, but it would be better if the player were guaranteed to be knocked out of harm’s way. There will be a period of invulnerability after a hit, but you don’t want to end up landing right back on the monster that just damaged you.
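In code, the line-between-centers rule reduces to normalizing the vector from the monster’s center of mass to the player’s and scaling it by a tunable strength. A minimal sketch (all names are mine; the game itself is C#/XNA):

```java
public class Knockback {
    // Knock-back velocity: the normalized vector from the monster's center
    // (mx, my) to the player's center (px, py), scaled by strength. This is
    // the "line between the two centers of mass" rule described above.
    static double[] knockback(double px, double py, double mx, double my, double strength) {
        double dx = px - mx, dy = py - my;
        double len = Math.sqrt(dx * dx + dy * dy);
        if (len == 0) return new double[] { 0, -strength }; // overlapping: shove straight up
        return new double[] { dx / len * strength, dy / len * strength };
    }
}
```

Making `strength` proportional to damage, as floated above, would just mean passing the damage value through a scaling function here.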

The knock-back effect is configurable at runtime via the same mechanism I whipped up earlier, and it took a little tuning to get it looking reasonable. I’m considering making it proportionate to the amount of damage dealt, but I’m not sure how I feel about that. Nobody likes getting flung across the room, except maybe on very special occasions.

I should also mention that the tiles will reappear right on top of the enemy characters at this point — that’s a bug I’ll address tomorrow, along with doing some major clean up work that’s been a long time coming.

Simple animation, ducking, and intelligent tile reformation

It’s amazing what I can get done when I don’t have to install a raccoon-proof cat door, mow the lawn, and fight inscrutable bugs for half the day:

The first thing you’ll notice is that I’m no longer inviting a cease and desist letter from Nintendo. Clint Bellanger provided the excellent stand-in character, who I’m choosing to call Mr. Bland. His main superpower is avoiding lawsuits. Oh, and he can duck to shoot things around his knees. He doesn’t have a gun, and I haven’t bothered to program in his jumping animation yet, but I wanted to make sure that I had a reasonable idea how I was going to animate the characters before going too much further.

The other big change was to prevent tiles from reappearing in the space occupied by the main character, so that you can now shoot your way through big sections of tiles without worrying about falling through the scenery or being ejected out of the hole, which were the two most common scenarios before. (The ejection never worked properly, anyhow.)

Next up: enemies!