Imagine yourself at the point of starvation, blood caking over your lacerated forearms, every muscle in your body begging you to stop as you summon what little strength you have to limp forward through a dark and twisted forest. You know the end is near. The trail of red stretching out behind you tells the story, one drop at a time. Had there been another path for you in life? Was the sacrifice worth it? Did it have to end like this?
But then, just when all hope seems lost, your shaking hand pushes aside the underbrush and there it is, the shining beacon of all that is good and just in the world, the fabled land that rewards the bravest and most noble souls with eternal life and happiness: CaseyTown. It is a land where the birds sing in emergent counterpoint, where children laugh and play and know nothing of war, where humans live in harmony with nature, where newcomers are greeted with their own personal roasted pig and a crown of rare orchids before being ceremonially plunged into a restorative bath of chocolate milk and rock salt (it exfoliates — look it up).
You can rest now, weary one. Your suffering has ended.
Just kidding! That would all be super cool, but actually CaseyTown is just a test map that I made for The Witness. It has none of those things I just mentioned, unfortunately. I don't even think there is a roasted pig mesh checked into source control, so I couldn't have instantiated one in CaseyTown even if I'd wanted to.
I made CaseyTown because I’m a big believer in fast turnaround times. In my mind, one of the most important things on any project is minimizing the amount of time it takes for a change to be testable. Ideally, it should take less than ten seconds for a programmer to go from a changed source file to a running executable.
The Witness, thankfully, has a codebase that compiles very quickly, so it does not take long to make a change and produce a new executable.
Running the executable, on the other hand, is a different story.
The Witness, like many modern games, streams assets continuously as the player moves around. As such, there are no "loading screens" or "level changes" when playing The Witness, just one initial preload when starting a new game (or, equivalently, loading an existing game). This is great for the player, because loading screens suck and they would mar the wonderful sense of place that The Witness creates. But for the developers, it's unfortunately no help at all, because that "one initial preload" happens every time you run the executable. This drastically worsens the turnaround time and places a nasty overhead on code debugging.
Because of this, when I first started working on The Witness, I looked for a way to fix this problem. Startup time seemed to be dominated by asset loading for the giant island world on which The Witness takes place, so I set out to make a separate world that had almost nothing in it. This world would load much faster, with the added benefit that I could construct test cases there without worrying that I might break something in the real game.
Fortunately, The Witness engine had originally started as an offshoot of the Braid codebase, and as such, it had the concept of levels built into it. But unfortunately, the functionality for loading and saving individual levels had atrophied over the years of Witness development, and it was no longer possible to actually load and save worlds other than the main island world. It took a little digging, poking, and prodding of the world management code, but eventually I was able to restore it to working order, and CaseyTown was born.
Nearly everything I did on The Witness happened in CaseyTown first. The new movement system, all the upgrades to the editor, and of course the grass planting system all happened in CaseyTown before they happened on the Witness island. And today, for the first time — and I urge you, for your own personal safety and health, to not become overexcited at the prospect — I would like to take you, dear reader, to the magical land of CaseyTown.
If you recall the opening section of the very first grass article, you will note that all of this work on grass was done in the run-up to the Sony PS4 announcement. Time was short, so in order to get everyone up to speed on the grass changes as fast as possible, I loaded up CaseyTown and recorded a short video that explained how everything worked. Thankfully, I happened to save the video, so I can show you exactly what the grass planting system looks like in action:
Now, you may notice that the ground is this ugly red particleboard-looking thing, and that the grass is, for some reason, gray. Why is this? The answer is that this is what happens when you instantiate a generic inanimate object and a default grass entity. Look, I said that CaseyTown was a magical place, OK? I did not say that there was a lot of attention to detail there.
Anyway, hopefully the video gives you a good visual picture of the various features the grass system had to support. In the following two sections, I’ll go through each one in detail and talk about how it was implemented.
Blue Noise with Varying Minimum Distance
If you look back at the second grass article, you’ll notice that I always referred to the “minimum distance” between points as if this were some consistent thing that was true across the whole pattern. As you saw in the video, it can be beneficial to think of this distance as varying, both spatially and as a property of the individual things being distributed.
But how do blue noise generators work if there is no consistent value for “minimum distance”?
The very first time I wrote code to do this (not in The Witness, but on a much earlier project), I don't know that there was any official published explanation of how to do it. So I improvised. In both the brute force and the neighborhood sampling techniques, you only really need a local notion of what the minimum distance is, so it turns out to be very easy to write the code so that the distance is variable. Whenever you go to check if a point can be placed somewhere, instead of seeing if it is greater than some fixed minimum distance, you instead check to see if it is greater than the sum of two distances: one for the new point being tested, and one for the existing point against which you are checking it.
In other words, you consider each point to have not just a position, but also a radius. When you store a new point, you remember both its position and the radius that it had, which you are free to determine via any method you choose — randomly varying, varying by the distance from some central point, sampled from a bitmap, etc. Whenever points are checked for proximity, instead of using some constant minimum distance, you just use the sum of the radii of the two points being checked.
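To make this concrete, here is a minimal sketch of what the brute force acceptance test looks like with per-point radii. The struct and function names here are mine, not actual code from The Witness:

```cpp
#include <vector>

// Each planted point remembers the radius it was given, however that
// radius was chosen (randomly, from a bitmap, and so on).
struct PlantedPoint {
    float x, y;
    float radius;
};

// Brute force acceptance test: instead of comparing the squared
// distance against one global minimum, compare it against the sum of
// the two points' radii.
static bool CanPlace(const std::vector<PlantedPoint> &placed,
                     float x, float y, float radius)
{
    for (const PlantedPoint &p : placed) {
        float dx = x - p.x;
        float dy = y - p.y;
        float minDist = radius + p.radius;  // varies per pair of points
        if ((dx*dx + dy*dy) < (minDist*minDist)) {
            return false;  // too close to an existing point
        }
    }
    return true;
}
```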
Similarly, in the neighborhood sampling case, when you need to determine how far from a base point you should sample to generate new test points, you simply use the sum of the radii again: the base point’s radius plus the radius of the new point you want to place. This can be a little bit tricky if you are doing something complicated for your radius function, such as sampling from a bitmap, because it does mean that you will have to guess what the radius will be for the test point before you actually compute where the test point will be. But fortunately, that is not only easy to solve iteratively if you really want to solve it exactly, but also usually unnecessary, as the distribution functions you’re likely to want to use are often very smooth, which means you don’t need to be exact in your guess.
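Here is the matching sketch for the neighborhood sampling side, reusing the PlantedPoint struct from above. RadiusAt is a hypothetical stand-in for whatever radius function you happen to be using:

```cpp
#include <cmath>
#include <cstdlib>

// Hypothetical radius function; in practice this might be random,
// based on distance from a central point, sampled from a bitmap, etc.
static float RadiusAt(float x, float y)
{
    // Toy placeholder: spacing grows smoothly away from the origin.
    return 0.5f + 0.05f*sqrtf(x*x + y*y);
}

// Generate a candidate around a base point.  The candidate's exact
// position isn't known yet, so its radius is guessed by evaluating the
// radius function at the base point; for smooth radius functions the
// guess is plenty close.
static PlantedPoint CandidateAround(const PlantedPoint &base)
{
    float guessedRadius = RadiusAt(base.x, base.y);
    float dist = base.radius + guessedRadius;  // sum of the radii again

    float angle = 6.2831853f*(rand() / (float)RAND_MAX);
    PlantedPoint c;
    c.x = base.x + dist*cosf(angle);
    c.y = base.y + dist*sinf(angle);
    c.radius = RadiusAt(c.x, c.y);  // real radius at the real position
    return c;
}
```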
Again, that was just what I intuitively did the first time I wrote code like this, and it's what I've done ever since, because it is a straightforward way to make varied spacing work with the existing schemes. More recently, there have been papers that talk about more serious ways to make variable blue noise patterns (for example, see this paper on multi-class blue noise), so you might want to read up on the newer techniques that are out there just in case they offer better options. I myself haven't read much in the way of recent blue noise research, so I can't say whether there are significant improvements to be had or whether it's mostly just increased formalism that doesn't actually make a meaningful visual improvement in the resulting pattern.
Obstacle Handling

A lot of the stuff in the video revolved around how obstacles are treated. There are two separate things happening, both of which are trivial pieces of code, but I'll explain what each one does for clarity.
The first was a modification to how grass avoided obstacle geometry. The original grass system used a sphere test to see if grass could be placed at a point without hitting existing world geometry, with the sphere size based on the size of the grass mesh being placed. This was causing problems for the artists, because sometimes they wanted the grass to be planted closer to obstacles than that test allowed. So literally all I did for that was add a parameter that gets subtracted from the sphere's radius, so grass could be "nudged" closer to obstacle boundaries as necessary.
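In code, the change was about as small as changes get. Something along these lines, where SphereHitsWorld is a toy stand-in for the engine's actual sphere-vs-world query (not a real Witness engine call):

```cpp
// Toy stand-in for the engine's sphere-vs-world collision query.
static bool SphereHitsWorld(float x, float y, float z, float radius)
{
    (void)x; (void)y; (void)z; (void)radius;
    return false;
}

// The sphere is still sized by the grass mesh, but the artist-facing
// "nudge" parameter is subtracted from the radius, letting grass sit
// closer to obstacle boundaries.
static bool GrassBlocked(float x, float y, float z,
                         float grassMeshRadius, float obstacleNudge)
{
    float testRadius = grassMeshRadius - obstacleNudge;
    if (testRadius < 0.0f) testRadius = 0.0f;  // don't let the test invert
    return SphereHitsWorld(x, y, z, testRadius);
}
```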
The second was the addition of a secondary raycast check to handle interiors and overhangs. The original grass system did one raycast per test point to determine where the ground actually was, since the ground can be uneven terrain made out of multiple meshes and so on. You can imagine the grass as being random points on a flat plane (at the grass entity’s height) that then get projected down onto whatever is beneath them.
This meant that, in the original system, if there was a large obstacle that extended above the height at which the grass entity was placed, the grass would end up planting inside that obstacle: the single raycast used to find the ground would only see the ground, not the obstacle mesh enclosing it. The workaround would be to move the grass entity higher in space, so it sat above the top of all obstacle meshes, but this seemed clumsy to me and might prove impossible in places like narrow canyons with winding ground heights.
So I added a second raycast to grass planting. Once an initial ground point is found, I cast a second ray back upwards and see what the first triangle is that gets hit. Because The Witness uses backface culling, I know that all triangles must have their normals facing outward, which means I can then use the winding of the triangle with respect to the grass point to tell whether the ray is leaving an obstacle or entering it.
If the ray is leaving an obstacle mesh, then the grass point must be inside the mesh, so I discard that point as interior (and we never want grass to plant inside obstacles, since obviously you could never see the grass there). If the ray is entering an obstacle mesh, then that means there's something overhanging the grass, and I check the hit distance against an artist-specified parameter that says how much headroom grass wants in order to grow. If the headroom is insufficient, I discard the point.
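Here is a sketch of the winding test, under the assumption that the raycast reports hits against both front and back faces and hands back the hit triangle's vertices in their stored order:

```cpp
struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

enum RayHitKind { Hit_Entering, Hit_Leaving };

// Classify the first triangle hit by a ray cast straight up from a
// candidate ground point.  Since backface culling guarantees that every
// triangle's winding faces "out" of its mesh, a face normal pointing
// back down at the ray means we are entering an obstacle from outside
// (something overhangs the point), while a normal pointing up and away
// means we are leaving one (the point was inside the mesh).
static RayHitKind ClassifyUpwardHit(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 up = {0.0f, 0.0f, 1.0f};           // assuming z-up
    Vec3 n  = Cross(Sub(b, a), Sub(c, a));  // unnormalized face normal
    return (Dot(n, up) < 0.0f) ? Hit_Entering : Hit_Leaving;
}
```

From there, planting just discards Hit_Leaving points outright and compares the hit distance of Hit_Entering points against the headroom parameter.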
Now, I should mention that this technique for testing where something is in the world with respect to enclosures is not foolproof. Raycasting, as handy as it is, is not exact. Unless all your meshes are watertight and your ray-triangle intersection code is completely robust (which it rarely is in a game), it is entirely possible to raycast right through one side of a solid object without actually hitting anything. With something like grass planting, where hundreds of thousands of points will be tested over the course of a planting, it is entirely likely that the raycast will fail at some point.
This is totally acceptable for grass planting, because the consequences of a failed enclosure test are minimal. But it's worth pointing out that, if you need to robustly test whether or not a point is inside an object, you have to be careful about these sorts of things in a way that I was not with the grass planting system. I was just calling the engine's default raycast routine; I probably would have had to implement something of my own had I needed to ensure that enclosure testing was always accurate.
At this point, I would like to be able to say something very triumphant, something that casts me as the humble hero, saving the day (in a grass sense) for The Witness team on the eve of the Sony PS4 announcement. Certainly that makes for the best story, and everyone knows that when you write programming articles, you are supposed to make yourself sound very smart and capable, carefully omitting all the stupid things you did and the mistakes you made, so it sounds as if you just showed up, pulled some spectacular code out of your… pocket… and great celebration ensued.
Well, Witness Wednesday isn’t really that kind of a series. As you can tell by the fact that I often discuss dozens of things I implemented that didn’t work before telling you about the one that did (if one did at all), the heroics are not really forthcoming. The Sony PS4 announcement situation was no different.
The actual impact of my upgraded grass system on the PS4 trailer was quite literally nonexistent. Despite the forecast that grass touch-up would be required, it turned out that none of the shots had grass problems that were particularly objectionable, and everyone had too much to do to bother playing with the new system anyway. Cue sad trombone music here.
But it was not all for naught, dear reader. Happily, a month or two later, the artists did get into the new "fancy" grass system and started using it, so now there are quite a few grasses in The Witness that are "fancy". So although the fancy grasses were not needed for the trailer, they ended up being a useful addition nonetheless, and that's definitely a good thing.
Well, OK, it’s mostly a good thing. But there was one catch. If I’d checked in the grass changes and nobody had ever used them, then that would have been that. But once the artists started using the new grass features, my code was subject to the scrutiny that only comes from actual production use. No longer safe in the confines of CaseyTown, where the grasses grow freely and feed the spotted unicorns and their… like… baby spotted unicorns or whatever, the “fancy” grass system found out that life on the full Witness island was more demanding.
I believe it was Orsi Spanyol who ended up explaining to me that when the artists had initially given me their feature requests and said they wanted better control over the behavior of grass at the edges, they weren't just talking about the behavior at the edges of the grass planting region. What they actually needed was control over grass planting at the fully irregular edges produced by the grass system's obstacle avoidance. The "fancy" grass, much like the previous system, didn't offer any control over this whatsoever.
And for good reason: the actual boundary of the planted grass isn't known until after the grass is already planted, so it's a bit difficult to factor that into the distribution. But if I was being honest with myself, the quotes were never going to come off "fancy" if I couldn't at least give the artists what they'd initially asked for, so I set out to fix the problem.
The crux is really that, during grass planting, no matter what your algorithm, you always have the information about where the boundary is just a bit too late to do anything with it. For example, in the neighborhood sampling case, you only know where a boundary is when you attempt to branch out from some base point and find that you cannot place a new point because of an obstacle. At that time, it’s far too late to do anything about the base point, or the points around the base point, which all may potentially need to have their planting distances altered if the artist specified different spacing for the boundary than for the interior.
Not seeing the potential for anything clever, I decided to fall back on a brutish solution: plant the grass twice. I figured, if I don’t know where the boundary is because I haven’t planted the grass yet, the most trivial solution is just to plant the grass a second time once the boundary is known. But obviously this requires some way of storing the boundary in a prepass, then querying it during the real planting to get a measure of distance.
There was one huge thing working in my favor that made this problem trivial instead of tricky: because the actual grass planting pass would still be checking for obstacles itself, it would always reproduce the boundary exactly. So I didn't have to be able to recreate the boundary from the first pass; I just had to be able to get some approximate distance metric out of it.
This to me screamed “signed distance field”, or in more practical parlance, “texture map”. If you imagine that you magically had a picture where the brightness was the distance to the grass boundary, then obviously you could just sample that picture like a texture map and get the distance everywhere. It wouldn’t be exact, but it would be close enough for driving parameter values, and that was all I needed.
So I basically just made that picture. Because I already had a grid data structure that I’d made for accelerating the point distance queries of the original algorithm, I just added a data member to the grid that was, for each grid point, the minimum distance from that point to the boundary of the grass. By default these were all initialized to a maximum value, and then I updated them in the grass planting prepass.
For the prepass samples, I picked purely random points in the grass planting region (not blue noise, just plain old white noise). If a sample landed at a valid place where grass could have been planted, I did nothing. But if a sample hit a place that was obstructed, I lowered any distance value in the grid that was larger than its distance to the new sample. Even after a relatively modest number of samples, this produces a totally usable distance grid.
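For the curious, here is roughly what that prepass looks like, as a sketch under my own naming. Obstructed stands in for the real planting validity check, and the real system hung the distances off the existing acceleration grid rather than a separate structure. Note also that, for clarity, this version scans the whole grid per sample; restricting the scan to cells that could actually be lowered is an easy speedup.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

// A grid of distances-to-boundary covering the planting region.
struct DistanceGrid {
    int w, h;
    float cellSize;           // world units between grid points
    float minX, minY;         // world-space origin of the grid
    std::vector<float> dist;  // per-grid-point distance to the boundary

    DistanceGrid(int w, int h, float cellSize, float minX, float minY)
        : w(w), h(h), cellSize(cellSize), minX(minX), minY(minY),
          dist(w*h, 1e30f) {}  // everything starts "very far away"
};

// Toy stand-in for the real test of whether grass could plant here
// (sphere checks, raycasts, and so on).
static bool Obstructed(float x, float y)
{
    return (x < 0.0f);
}

// Prepass: throw plain white noise samples at the region; every time
// one lands somewhere obstructed, lower any grid distance that the new
// sample would reduce.
static void BuildBoundaryDistances(DistanceGrid &grid, int sampleCount,
                                   float regionW, float regionH)
{
    for (int i = 0; i < sampleCount; ++i) {
        float sx = grid.minX + regionW*(rand() / (float)RAND_MAX);
        float sy = grid.minY + regionH*(rand() / (float)RAND_MAX);
        if (!Obstructed(sx, sy)) continue;  // valid spot: nothing to do

        for (int gy = 0; gy < grid.h; ++gy) {
            for (int gx = 0; gx < grid.w; ++gx) {
                float px = grid.minX + gx*grid.cellSize;
                float py = grid.minY + gy*grid.cellSize;
                float d  = sqrtf((px - sx)*(px - sx) + (py - sy)*(py - sy));
                float &cur = grid.dist[gy*grid.w + gx];
                cur = std::min(cur, d);
            }
        }
    }
}
```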
Once I had the grid distances, the rest was trivial. In the actual grass planting pass, I just bilinearly interpolated the grid distances for the point at which I was planting, and that gave me an approximate distance to the closest grass boundary. Problem solved.
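The lookup is just textbook bilinear filtering over that grid (continuing the DistanceGrid sketch from above):

```cpp
// Approximate the distance-to-boundary at an arbitrary planting point
// by bilinearly interpolating the four surrounding grid distances.
static float BoundaryDistanceAt(const DistanceGrid &grid, float x, float y)
{
    float fx = (x - grid.minX) / grid.cellSize;
    float fy = (y - grid.minY) / grid.cellSize;

    // Clamp to the last full cell so the +1 lookups stay in bounds.
    int gx = std::max(0, std::min((int)fx, grid.w - 2));
    int gy = std::max(0, std::min((int)fy, grid.h - 2));

    float tx = std::max(0.0f, std::min(fx - (float)gx, 1.0f));
    float ty = std::max(0.0f, std::min(fy - (float)gy, 1.0f));

    float d00 = grid.dist[(gy    )*grid.w + (gx    )];
    float d10 = grid.dist[(gy    )*grid.w + (gx + 1)];
    float d01 = grid.dist[(gy + 1)*grid.w + (gx    )];
    float d11 = grid.dist[(gy + 1)*grid.w + (gx + 1)];

    float top    = d00 + tx*(d10 - d00);
    float bottom = d01 + tx*(d11 - d01);
    return top + ty*(bottom - top);
}
```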
You may have noticed that I carefully avoided describing one key aspect of the grass system as shown in the video: parameter interpolation over the pattern. Although I’ve talked about how to support the varying parameters once you know what they are, I didn’t actually say how you vary them in the first place.
The reason I've been sidestepping that aspect, although it is admittedly a rather trivial piece of code, is that I thought it might be fun to spend a whole Witness Wednesday talking about interpolation in general. Grizzled game programmer types are likely to know everything there is to know about interpolation, but a lot of other walks of programming don't get much exposure to the breadth and depth of the subject, so it seemed like a good idea to hold off on discussing the grass parameter interpolation until next week, when I'll have more room to go off on really long meandering tangents.
And that article, I believe, will mark the conclusion of the grass portion of Witness Wednesday.