
8.09.2014

World Representation


It's been a while since my last post; a lot of brainstorming (as opposed to actual coding) has been going on. I've been exploring world generation and possible data structures for representing the world in memory. All of this required making some important defining decisions about the world.


At the time of this writing, I have yet to decide on making the world dynamic (modifiable). This is primarily because of the cost of updating the world, finding a balance between CPU/GPU loads, and doing so over the network between the game server and clients. Another issue is that when a client joins a server, it must download the current state of the world from the server as it exists after any modifications that have occurred. There are tricks and techniques for minimizing this data flow so that the entire world volume doesn't need to be uploaded to each connecting client. Right now this is not a priority, or even a necessary feature, for what I am trying to accomplish with this project. This could change!


Since the world is volumetric (voxels), it was clear that there needed to be a data structure to hold the volume in memory, not just for rendering but also for physical interactions with entities. Following Revolude's original design, the world would be procedurally generated (as opposed to loaded) when a game is started or joined. The data structure would therefore need to support efficient read/write access.


I examined a few different algorithms and data structures for dealing with volumetric data. A lot of algorithms for storing compressed volume data rely on the fact that the data is directly visual, like the pixels of an image, in the form of red/green/blue/alpha color channels. This allows for all sorts of neat 'lossy' image-compression style tricks, which forsake exact reproduction of the original data in exchange for smaller size. For applications where the volume represents relatively fine detail, these algorithms are great.


For my project, the worlds are neither that large nor that finely detailed. Voxel size is on par with games like Minecraft and Infiniminer; the primary difference from these games is that voxels will not be cubes. Each voxel is represented by a single byte storing a material-type value, allowing for up to 255 different materials (material zero is empty space). Worlds are also not intended to be infinite. Instead of extending worlds indefinitely through procedural means, they will be a finite size and wrap around seamlessly. There will be no edge to the map, nor will it go on forever into places that nobody will ever visit.
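

To make the wrap-around idea concrete, here is a minimal sketch (the names are just illustrative, not actual engine code) of the one-byte material value and a coordinate wrap that also handles negative values:

    #include <cstdint>

    // One byte per voxel: material 0 is empty space, 1..255 are solid materials.
    typedef uint8_t Material;

    // Wrap a coordinate into [0, size) so the finite world tiles seamlessly.
    // Handles negative inputs too, e.g. wrap(-1, 1024) == 1023.
    inline int wrap(int v, int size)
    {
        return ((v % size) + size) % size;
    }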


I'm still settling on a specific world size. With the sizes I'm considering, storing the raw voxel data becomes feasible without any sort of sparse data structure. For example, with a world size of 1024x1024x256, at one byte per voxel the world data is 256 megabytes, and each horizontal slice of the world is one megabyte. The only qualm I have with just leaving the data sitting in memory, when virtually every machine capable of running the game has enough of it, is cache coherency. The larger the volume, the further apart in memory two neighboring voxels can lie. This is not good for performance!
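

To spell that out, the flat layout and the memory math look something like this (names are illustrative placeholders):

    #include <cstdint>
    #include <vector>

    // Flat, uncompressed storage: one byte per voxel.
    struct FlatVolume {
        int sx, sy, sz;
        std::vector<uint8_t> data;

        FlatVolume(int x, int y, int z)
            : sx(x), sy(y), sz(z), data((size_t)x * y * z, 0) {}

        // x varies fastest, then y, then z: one horizontal slice is sx*sy bytes.
        size_t index(int x, int y, int z) const
        {
            return ((size_t)z * sy + y) * sx + x;
        }

        uint8_t get(int x, int y, int z) const      { return data[index(x, y, z)]; }
        void    set(int x, int y, int z, uint8_t m) { data[index(x, y, z)] = m; }
    };

    // 1024 * 1024 * 256 voxels * 1 byte = 268,435,456 bytes = 256 MiB,
    // with each horizontal 1024x1024 slice taking exactly 1 MiB.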


It's arguable that using flat storage for a finite volume won't produce a significant slow-down when the volume is queried for things like collision detection and rendering. Personally, I just don't want to be using more memory than I need to, especially if a byproduct of reducing the size is a gain in speed. Above all else, I love the challenge of implementing a sparse data structure ;)


The first obvious option on everyone's mind is a sparse voxel octree. This is a wonderful structure, but it can become computationally expensive to iterate as it deepens. One strategy I came up with a while ago to 'enhance' the octree is to allow each node to have more than 8 children. Instead of 2x2x2, it could be 4x4x4, for 64 child nodes. A volume with a dimension of 1024 (a gigavoxel), which takes 10 tree levels as a plain octree, would then only have 5 levels to iterate through from root to leaf. The issue here is that there would be excessive 'empty' child nodes all over the volume. That would be the price for faster iteration.
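

A node in that 'fat' tree might look something like this (a sketch only, not final code):

    #include <cstdint>

    // Branching 4x4x4 instead of 2x2x2: 64 children per node.
    // A 1024^3 volume then needs only 5 levels (4^5 = 1024) instead of
    // the 10 levels a standard octree needs (2^10 = 1024).
    struct Node64 {
        uint8_t  material;       // meaningful when the node is a leaf
        bool     isLeaf;
        uint32_t children[64];   // indices into a node pool; 0 = "no child"
    };

    // Select which of the 64 children contains local position (x, y, z).
    // For a node covering a region 4^k voxels wide, its children are 4^(k-1)
    // wide and the relevant 2-bit digit of each coordinate sits at shift = 2*(k-1).
    inline int childIndex64(int x, int y, int z, int shift)
    {
        int cx = (x >> shift) & 3;
        int cy = (y >> shift) & 3;
        int cz = (z >> shift) & 3;
        return (cz << 4) | (cy << 2) | cx;
    }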


Another strategy, one which I became quite fond of since my last post, is to store volumes as run-length encoded columns. This is especially attractive because it effectively reduces a volume to a few 2D images, and it works extremely well when voxels represent a limited number of materials (as opposed to highly variable 32-bit RGBA values). Many areas of the volume would collapse into just one or two column 'segments'. This approach was almost the one I finally settled on, but I was having implementation indecision issues, trying to find a 'sweet spot' balance between speed, size, and simplicity. My obsessiveness with finding the perfect solution became a hindrance to progress.
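

Sketched out, an RLE-column volume could look something like this (the run layout and names are my own placeholders, not a committed design):

    #include <cstdint>
    #include <vector>

    // One vertical run of identical material within a column.
    struct Run {
        uint8_t  material;   // 0 = empty
        uint16_t length;     // number of voxels in this run, counted bottom-up
    };

    // A volume stored as a 2D grid of run-length encoded columns. Large uniform
    // regions (air above ground, solid rock below) collapse to one or two runs.
    struct RleVolume {
        int sizeX, sizeY, height;
        std::vector<std::vector<Run>> columns;   // sizeX * sizeY columns

        RleVolume(int sx, int sy, int h)
            : sizeX(sx), sizeY(sy), height(h)
        {
            // Start every column as a single run of empty space.
            columns.assign((size_t)sx * sy, std::vector<Run>{ Run{0, (uint16_t)h} });
        }

        // Walk the runs from the bottom of the column up to height z.
        uint8_t get(int x, int y, int z) const
        {
            const std::vector<Run>& col = columns[(size_t)y * sizeX + x];
            int base = 0;
            for (const Run& r : col) {
                if (z < base + r.length) return r.material;
                base += r.length;
            }
            return 0;   // above the topmost run
        }
    };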


Ultimately, this lead me to fall back on an older idea, which is really an amalgam of a few ideas. At the coarsest level, I represent a volume using a sort of hash-table. This is just a flat array that divides the world up into fixed size 'chunks'. The trick here, then, is to use a sparse voxel octree to represent the contents of each chunk. Using some efficient design, a chunk that is entirely one material (including empty) is stored as four bytes. The rest of the chunks, which I call 'boundary' chunks, are stored using a variable number of octree nodes. Each one has its own pool of octree nodes from which it builds the sparse octree representing the organization of voxels and their material indices. This node pool starts out small, and is reallocated as needed by doubling its size each time it fills up.
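

As a rough outline, the hierarchy could be laid out like this (a sketch only; the names are made up and the real layout is tighter, e.g. packing a uniform chunk into just four bytes):

    #include <cstdint>
    #include <vector>

    // One octree node inside a chunk's private node pool. A leaf stores a
    // material; an inner node stores pool indices of its eight children.
    struct OctNode {
        bool     isLeaf;
        uint8_t  material;
        uint16_t child[8];   // indices into the owning chunk's node pool
    };

    struct Chunk {
        // Uniform chunk: one material covers the whole chunk (the cheap case).
        bool    uniform  = true;
        uint8_t material = 0;

        // Boundary chunk: a sparse octree built from a private, growable pool.
        std::vector<OctNode> pool;

        uint16_t allocNode()
        {
            // The pool starts small; std::vector doubles its capacity as it
            // fills, mirroring the reallocate-by-doubling strategy.
            pool.push_back(OctNode{});
            return (uint16_t)(pool.size() - 1);
        }
    };

    // The coarsest level: a flat array of fixed-size chunks covering the world.
    struct ChunkGrid {
        int chunksX, chunksY, chunksZ;   // world dimensions divided by the chunk size
        std::vector<Chunk> chunks;

        ChunkGrid(int cx, int cy, int cz)
            : chunksX(cx), chunksY(cy), chunksZ(cz),
              chunks((size_t)cx * cy * cz) {}

        Chunk& at(int cx, int cy, int cz)
        {
            return chunks[((size_t)cz * chunksY + cy) * chunksX + cx];
        }
    };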


Currently I am working with chunks of 32x32x32, which is 32768 (32k) voxels. This seems like a nice round number, because I can fit an octree node index into 15 bits (the 16th bit flags leaf-node status). Now, in an octree, if every voxel in a chunk could be a different material (32k distinct materials), the node index would overflow, because on top of the 32k leaves the inner nodes would need address space for another 4680 nodes. I'm willing to bet that with fewer than 256 possible voxel materials this 'special' case chunk will never occur. Most chunks will never even need 10k nodes.
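

For illustration, the 16-bit packing could look something like this (the exact bit layout here is an assumption):

    #include <cstdint>

    // A node reference packed into 16 bits: the low 15 bits index into the
    // chunk's node pool (up to 32768 nodes), and the 16th bit flags a leaf.
    typedef uint16_t NodeRef;

    const uint16_t LEAF_FLAG  = 0x8000;
    const uint16_t INDEX_MASK = 0x7FFF;

    inline NodeRef makeRef(uint16_t index, bool leaf)
    {
        return (uint16_t)((index & INDEX_MASK) | (leaf ? LEAF_FLAG : 0));
    }
    inline bool     isLeaf(NodeRef r)    { return (r & LEAF_FLAG) != 0; }
    inline uint16_t nodeIndex(NodeRef r) { return r & INDEX_MASK; }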


With chunks that are 32k voxels in size, a 64-gigavoxel volume (4096^3) would consist of roughly two million chunks (128 per axis), so the flat array of chunks would be on the order of eight megabytes at four bytes per entry. The total size of the chunks themselves could vary, but would average less than 100 megabytes. This is really great for 64 gigavoxels. Now, I'm not going to be representing a world that is 64 gigavoxels, I don't think. I'm thinking smaller, because the goal here is a game that is more of a multiplayer online battle arena than some sort of vast expanse for an MMORPG.
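

Writing that arithmetic out (assuming four bytes per entry in the flat chunk array, as for a uniform chunk):

    // Back-of-the-envelope numbers for a 4096^3 (64-gigavoxel) volume.
    constexpr long long chunkDim   = 32;
    constexpr long long worldDim   = 4096;
    constexpr long long chunksAxis = worldDim / chunkDim;                    // 128
    constexpr long long chunkCount = chunksAxis * chunksAxis * chunksAxis;   // 2,097,152
    constexpr long long arrayBytes = chunkCount * 4;                         // 8,388,608 bytes = 8 MiB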


This is actually how all volumes will be represented in memory: in terms of materials, and built out of chunks. Some game objects will be displayed using volumes, some using other graphical techniques. This is a global 'voxel' solution.


Links of interest:
Visualization of Large RLE-Encoded Voxel Volumes
An Analysis of Minecraft-like Engines
Ken Silverman's Voxlap Page