
11.12.2014

Procedural Modeling and Animation


I failed at maintaining at least one post per month; distractions abound! I've been trudging along nonetheless. The project is at a point where I am leaping from one milestone to the next, with some days spent refactoring smaller support code and/or adding functionality to various support systems.

I have just added some code for dealing with quaternions for representing object orientations, as Euler angles just will not suffice for the physics interactivity I am striving for. To represent rotational velocities I have also added code to handle axis-angle representations, because a quaternion inherently limits a rotation to 180 degrees - which, for a per-second rotational velocity, works out to a cap of 30 RPM.
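Here's a rough sketch of what I mean, with made-up struct layouts rather than the engine's actual code: the orientation lives in a quaternion, while the angular velocity is kept as axis-angle so it isn't limited to half a turn per step.

```cpp
// Sketch (not the engine's real code): orientation as a quaternion, angular
// velocity as axis-angle so rates above 180 degrees per second are representable.
#include <cmath>

struct Quat { float w, x, y, z; };

struct AxisAngle { float ax, ay, az; float radiansPerSec; }; // unit axis + rate

// Build the incremental rotation for one timestep and apply it to the orientation.
Quat integrateOrientation(const Quat& q, const AxisAngle& w, float dt)
{
    float half = 0.5f * w.radiansPerSec * dt;      // half-angle for this step
    float s = std::sin(half), c = std::cos(half);
    Quat d { c, w.ax * s, w.ay * s, w.az * s };    // delta rotation quaternion

    // Quaternion multiply d * q (apply the delta in world space), then renormalize.
    Quat r {
        d.w*q.w - d.x*q.x - d.y*q.y - d.z*q.z,
        d.w*q.x + d.x*q.w + d.y*q.z - d.z*q.y,
        d.w*q.y - d.x*q.z + d.y*q.w + d.z*q.x,
        d.w*q.z + d.x*q.y - d.y*q.x + d.z*q.w
    };
    float len = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    r.w /= len; r.x /= len; r.y /= len; r.z /= len;
    return r;
}
```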

One feature of the engine that I was looking to implement is procedural model scripting. The goal here is to allow a user to easily script a game 'model'. A model consists of vertices defined for the three basic primitives: points, lines, and triangles. Each of these vertices has an RGBA value for rendering a color.

The primary advantage of having scripted models is that anybody can edit them, without learning modeling software. All you need is a text editor - which I aim to build into the engine in some capacity (after making a release) to eliminate the need for alt-tab madness. Scripting a model is a matter of cleverly putting together a series of commands that rotate and translate to reach the desired position of each vertex for the desired primitive type.
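As a minimal illustration of the idea (the command names and cursor representation here are my own invention, not the actual script format), a tiny interpreter can walk a "cursor" around with rotate/translate commands and emit a vertex wherever the cursor happens to be:

```cpp
// Hypothetical sketch of command-driven vertex placement; not the real script VM.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

enum class Op { RotateZ, RotateY, Translate, Vertex };
struct Cmd { Op op; float a; };                  // angle in radians, or a distance

std::vector<Vec3> buildModel(const std::vector<Cmd>& script)
{
    std::vector<Vec3> verts;
    Vec3 pos{0, 0, 0};
    float yaw = 0.0f, pitch = 0.0f;              // cursor orientation

    for (const Cmd& c : script) {
        switch (c.op) {
        case Op::RotateZ:  yaw   += c.a; break;
        case Op::RotateY:  pitch += c.a; break;
        case Op::Translate:                      // step along the cursor heading
            pos.x += c.a * std::cos(yaw) * std::cos(pitch);
            pos.y += c.a * std::sin(yaw) * std::cos(pitch);
            pos.z += c.a * std::sin(pitch);
            break;
        case Op::Vertex:   verts.push_back(pos); break;
        }
    }
    return verts;   // consumed in pairs/triples depending on the primitive type
}
```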

Another advantage of using procedurally scripted models is that game server admins can run games with their own custom models, which are quickly and easily transferred over the network to player clients. Clients can then generate the actual models by executing the procedures defined therein. This is huge because it is another goal to allow server admins to run entirely customized games without requiring players to download and install anything manually.


A script for a 'spiral tree' model, demonstrating the use of nested loops.

Scripts are also afforded the ability to 'randomize' various parameters. Things like translate, loop, rotate, etc. can all be randomized using a minimum value and a range value. Each time an entity requests a renderable model 'instance' the system checks whether the model is invariant. If the model is not invariant - meaning it uses randomized parameters in some way - it must be re-generated for each instance requested. This allows the system to take one model script and produce many variations, using different seed values for the randomized parameters of the operations that use them.
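A minimal sketch of the min + range idea (the names and the little PRNG here are assumptions on my part, not the engine's): the same script with a different per-instance seed deterministically yields a different model.

```cpp
// Hypothetical seeded min + range parameter randomization.
#include <cstdint>

struct RandParam { float minValue; float range; };

// Small deterministic PRNG (an xorshift variant, seed must be nonzero) so every
// client regenerates identical instances from the same seed value.
struct Rng {
    uint32_t state;
    float next01() {
        state ^= state << 13; state ^= state >> 17; state ^= state << 5;
        return (state & 0xFFFFFF) / 16777216.0f;   // value in [0, 1)
    }
};

float resolve(const RandParam& p, Rng& rng)
{
    return p.minValue + p.range * rng.next01();
}
```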

Another feature of the modeling system is the ability to push/pop the current matrix and loop counters. This allows for recursive modeling of hierarchical things like trees, plants, and other fractal shapes.


Some spiral trees, and dynamic mesh player model.

Along with scripting individual procedural models, I have implemented an animation system that I devised a long while back. This was yet another chance to embark on a journey that strayed from the norm. I love skeletal animation, and inverse kinematics for making a skeletal model interact seamlessly with the game world, but I do not love the mind-numbing rote implementation of features around which all the creative work has already been done. To me, programming is about problem solving: the deeper and more abstract the problem, the more rewarding it is. In fact, making a game all by itself isn't that rewarding to me (earning a living is good, though). It's the process and the challenges of making something involved that I find rewarding, and I wish more programmers felt the same. At any rate...

Conventional animation systems involve manipulating a mesh model using a virtual skeleton with virtual joints. Keyframed animation is stored as angles for each joint, and is easy to interpolate for smooth animation between keyframes. Getting any keyframed animation to smoothly and seamlessly interact with the world and external forces like wind, inertia, gravity, collisions, etc. is tricky in and of itself. Some games only let the model/skeleton interact when the character is dead, allowing the body to flop around in response to external forces; this is known as 'rag-doll' physics. There are various solutions now for handling these sorts of things, both for animated and dead character models. There's even one solution that dynamically generates/modifies keyframed animations for things like walking, so that it looks as though the character is actually negotiating bumpy terrain with strategic footstep positions.

I did not want to plug in a solution, and I did not want to pursue a solution that was too involved. This is where the dynamesh comes in. Dynamesh is just an abbreviation of the term 'dynamic mesh'. A dynamic mesh is just a spring mesh, where vertices are referred to as 'nodes' and the edges connecting two nodes are called 'links'. This is a simple system where each node is given a size (zero is valid) and a mass, and each link is given a rigidity value that dictates how well it should retain its length.

This system is simple enough to simulate. It consists of two parts - a particle simulation for the nodes themselves, and an iterative link-constraint system that pushes and pulls the nodes of each link in an attempt to 'settle' the mesh.
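Here's a rough sketch of what such a step could look like, under my reading of the description above (field names and the Verlet-style integration are assumptions, not the engine's actual implementation): a particle update for the nodes, followed by several relaxation iterations over the links.

```cpp
// Hypothetical spring-mesh ("dynamesh") step: particle simulation plus
// iterative link constraints that push/pull nodes toward each link's rest length.
#include <cmath>
#include <vector>

struct Node { float x, y, z; float px, py, pz; float mass, size; };
struct Link { int a, b; float restLength, rigidity; };   // rigidity in [0,1]

void stepDynamesh(std::vector<Node>& nodes, const std::vector<Link>& links,
                  float dt, float gravity, int iterations)
{
    // 1) Particle update (Verlet-style: current minus previous position is velocity).
    for (Node& n : nodes) {
        float vx = n.x - n.px, vy = n.y - n.py, vz = n.z - n.pz;
        n.px = n.x; n.py = n.y; n.pz = n.z;
        n.x += vx; n.y += vy; n.z += vz + gravity * dt * dt;
    }

    // 2) Iterative link constraints: move both endpoints toward the rest length.
    for (int it = 0; it < iterations; ++it) {
        for (const Link& l : links) {
            Node& a = nodes[l.a]; Node& b = nodes[l.b];
            float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            float len = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (len < 1e-6f) continue;
            float diff = (len - l.restLength) / len * 0.5f * l.rigidity;
            a.x += dx * diff; a.y += dy * diff; a.z += dz * diff;
            b.x -= dx * diff; b.y -= dy * diff; b.z -= dz * diff;
        }
    }
}
```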

So far, I have determined three uses for this system:

The first use is entity collision volume representation. Along with using spheres, capsules, axis-aligned bounding boxes, and the like, it's nice to allow for more detailed collision hulls for bigger, more complicated entities.

The second use of the dynamesh system, which operates in tandem with the first use, is rigid body physics. The system automatically allows the nodes to take on any orientation, with no explicit 'orientation' stored anywhere in the code. Discerning anything like an orientation involves examining the relationship between node positions and comparing it to the original default configuration. This isn't too hard or expensive to do. Entities can use a dynamic mesh not just as their collision hull, but also to innately handle collision response and resolution. This enables highly interactive entity physics behaviors.

The third use is animation. Not only can you define a dynamesh that is rigid, but you can define one for a character, or a vehicle, or anything with moving parts. With one pair of nodes you can have a ball and socket type joint. With three nodes you can have a hinge. Through clever use of nodes and links you can create just about anything, and the neat thing is that simulating the nodes as particles that are affected by external forces and collisions allows for highly dynamic interactivity automatically, without any special-case code at all.

In this case I am using dynameshes for character animation, while allowing an entity to have one scripted procedural model permanently attached to it, as well as one dynamesh, which can have models attached to its links. This makes it simple enough to define a character dynamesh.

Keyframed animation is a matter of storing node 'goals' for each keyframe, and pushing those nodes toward their goals when a specific animation is playing. In this case I am procedurally generating running animations through some simple manipulation of foot/knee nodes. Dynameshes must define anchor nodes, which are used to fix the mesh to the entity using them. Entities essentially drag the mesh around the world - unless the entity type specifies that it is to assume the physical state of the dynamesh, in which case the anchor nodes are used to dictate the entity's position/orientation.
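As a rough sketch of the node-goal idea (struct names are mine, and the real system surely does more), each goal simply nudges its node toward a target position every step, and the link constraints and collisions still get the final say:

```cpp
// Hypothetical per-keyframe node goals layered on top of the physics step.
#include <vector>

struct Node { float x, y, z; };                               // spring-mesh node position
struct Goal { int node; float gx, gy, gz; float strength; };  // strength in [0,1]

void applyKeyframeGoals(std::vector<Node>& nodes, const std::vector<Goal>& goals)
{
    for (const Goal& g : goals) {
        Node& n = nodes[g.node];
        n.x += (g.gx - n.x) * g.strength;   // nudge toward the goal; constraints
        n.y += (g.gy - n.y) * g.strength;   // and collisions run afterwards and
        n.z += (g.gz - n.z) * g.strength;   // can override the result
    }
}
```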


Links:
Lightweight Procedural Animation ... by Ian Horswill
NeHe Productions: Rope Physics
QUELSOLAAR.COM: Confuce - Procedural Character Animator

9.21.2014

Collision Detection with Distance Fields


One of the great things about doing a Minecraft-style voxel engine, where the entire world is made of cubes, is collision detection. It's very cheap to detect which voxels one should compare a game entity's collision hull against, and the math is very simple.

Minecraft: The cubic-voxel world game of the century.


One of the lame things about doing a Minecraft-style voxel engine is that Notch did it already (not to mention Minecraft's inspiration: Infiniminer). I am not making a cubic voxel world. I'm not really even making a voxel world. I'm using voxels as an intermediate representation in generating my world. I don't even plan on making it very big, or modifiable. I'm not using marching cubes to make smooth terrain, and I'm not using boxy voxels either.


my voxel game thus far

After all the generation and stuff, the end product is rendered as polygons (triangles). I could handle collision detection between objects and the world in terms of spheres and triangles, or cylinders and triangles, or any mathematically simple shape and triangles. But triangles are yucky, and I don't like dealing with them. I especially do not enjoy the thought of detecting *which* triangles to test intersections on. Back in the days when I was a big fan of BSP trees, this would have been fun, but not anymore. I've moved on to bigger and better things (or, whatever).

What's nice about cubic voxels is that you can pretty much just index an array using your object's position and see if there are any voxels intersecting. In a Wolfenstein 3D style raycaster this is great for collision detection; it's just so simple to do. That's great and all, but I'm not using cubes. I'm using octahedrons that behave like metaballs and 'merge' when neighboring each other. This complicates things.
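To make the cubic case concrete, here's a minimal sketch of that kind of query (dimensions, storage layout, and the 'solid' test are placeholders of my own): the object's box is converted straight into voxel indices and tested against the array.

```cpp
// Hypothetical AABB-vs-voxel-grid overlap test for a flat, byte-per-voxel array.
#include <cstdint>

bool overlapsSolidVoxel(const uint8_t* voxels, int W, int H, int D,
                        float minX, float minY, float minZ,
                        float maxX, float maxY, float maxZ)
{
    for (int z = (int)minZ; z <= (int)maxZ; ++z)
        for (int y = (int)minY; y <= (int)maxY; ++y)
            for (int x = (int)minX; x <= (int)maxX; ++x) {
                if (x < 0 || y < 0 || z < 0 || x >= W || y >= H || z >= D) continue;
                if (voxels[x + y * W + z * W * H] != 0)   // assume material 0 = empty
                    return true;
            }
    return false;
}
```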


Collision primitives from CryEngine.


My first idea consisted of assuming every object to be a sphere or pill shape, and doing a quick (hah) spatial search for the closest voxels. Then detect which faces are facing the entity, which edges and planes it intersects, calculate where any 45-degree planes may be (corners/edges), and handle the collision accordingly by moving the position of the object and calculating a new velocity based on elasticity and friction values. This method seemed ideal for handling collisions using only the sparse volume representation in memory, and the sparse volume structure doesn't store information about the shape of the voxels - they are all just cubes as far as it is concerned. It was important to me that a slope of voxels could be run up/slid down/rolled across/etc. smoothly, without any stair-stepping. I wanted a 45-degree plane to behave like one.

The most difficult part of this solution was handling large objects with multiple collision points with different groups of voxels around themselves. I quickly realized that a distance field representation of the whole scene would solve all of my problems - efficiently detecting collisions with all object sizes while behaving as though diagonal voxels were one continuous 45 degree surface.



In this video of a 2D prototype I did last week you can see the green 'voxels' and the distance field rendered as a blue/red gradient, generated using a simple distance transform that propagates distances over the 'volume' in two passes. The circle is just a point in space used to index the distance field, and the returned value is compared against the circle's radius. If the circle's radius is greater than the value at its center in the distance field, then it is intersecting. Then it's a matter of checking the surrounding distance field values and calculating the gradient of the area of intersection to determine a sufficient approximation of the 'normal', to properly de-intersect the circle and reflect its velocity with some amount of elasticity for a bounce effect.
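Here's a sketch of how I understand that 2D prototype working - a two-pass chamfer-style distance transform over a solid/empty grid, then a circle query that uses the local gradient as an approximate normal. The constants and layout are assumptions, not the prototype's actual code.

```cpp
// Hypothetical 2D distance-field build (two passes) and circle collision query.
#include <algorithm>
#include <cmath>
#include <vector>

// dist[x + y*W] = approximate distance (in cells) to the nearest solid cell.
std::vector<float> distanceTransform(const std::vector<int>& solid, int W, int H)
{
    const float BIG = 1e9f, ORTHO = 1.0f, DIAG = 1.41421356f;
    std::vector<float> d(W * H);
    for (int i = 0; i < W * H; ++i) d[i] = solid[i] ? 0.0f : BIG;

    // Forward pass: propagate from upper/left neighbors.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float& v = d[x + y * W];
            if (x > 0)              v = std::min(v, d[(x-1) + y*W]       + ORTHO);
            if (y > 0)              v = std::min(v, d[x + (y-1)*W]       + ORTHO);
            if (x > 0 && y > 0)     v = std::min(v, d[(x-1) + (y-1)*W]   + DIAG);
            if (x < W-1 && y > 0)   v = std::min(v, d[(x+1) + (y-1)*W]   + DIAG);
        }
    // Backward pass: propagate from lower/right neighbors.
    for (int y = H - 1; y >= 0; --y)
        for (int x = W - 1; x >= 0; --x) {
            float& v = d[x + y * W];
            if (x < W-1)            v = std::min(v, d[(x+1) + y*W]       + ORTHO);
            if (y < H-1)            v = std::min(v, d[x + (y+1)*W]       + ORTHO);
            if (x < W-1 && y < H-1) v = std::min(v, d[(x+1) + (y+1)*W]   + DIAG);
            if (x > 0 && y < H-1)   v = std::min(v, d[(x-1) + (y+1)*W]   + DIAG);
        }
    return d;
}

// Intersecting if the sampled distance is less than the radius; the
// central-difference gradient gives an approximate normal pointing away from the surface.
bool circleHit(const std::vector<float>& d, int W, int H,
               float cx, float cy, float radius, float& nx, float& ny)
{
    int x = std::clamp((int)cx, 1, W - 2), y = std::clamp((int)cy, 1, H - 2);
    if (d[x + y * W] >= radius) return false;
    nx = d[(x+1) + y*W] - d[(x-1) + y*W];
    ny = d[x + (y+1)*W] - d[x + (y-1)*W];
    float len = std::sqrt(nx*nx + ny*ny);
    if (len > 0) { nx /= len; ny /= len; }
    return true;
}
```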


As a bonus, there are other uses for distance field representations of a 3D scene. They are very handy for any line/path tracing, so they lend themselves well to calculating lighting and shadowing. They are also useful for AI pathfinding and obstacle avoidance. Fluid dynamics can also benefit from a distance field representation, for realistically drifting particle effects around the scene. I've already uploaded the same distance field to the GPU as a 3D texture to perform some ambient occlusion in the vertex shader, which has increased the visual depth of the game scene. It will also make the job of illuminating game entities much simpler.

The only downside to the distance field representation is memory usage. So far I'm just using a flat array, because I don't really see a very worthwhile means of compressing data as incoherent as a distance field. Conventional sparse structures, like octrees, will not be of much use. What would probably work best is a more continuous approach, like a cosine transform. Maybe dividing it up into 8x8x8 blocks and performing a discrete cosine transform (ala JPEG) would be a decent means of representing the data in memory? Each distance field query would then result in performing a bunch of cosine calls though, unless some quantization could negate most of them. Compression artifacts would yield bumpy surfaces, however.


Links of interest:
Wikipedia: Distance Transform
Gamasutra: Advanced Collision Detection Techniques

8.09.2014

World Representation


It's been a while since my last post; a lot of brainstorming (as opposed to actual coding) has been going on. I've been exploring world generation and possible data structures for representing the world in memory. This all required that I make some important defining decisions about the world.


At the time of this writing, I have yet to decide on making the world dynamic (modifiable). This is primarily because of the cost of updating the world, finding a balance between CPU/GPU loads, and doing so over the network between the game server and clients. Another issue is that when a client joins a server, it must download the current state of the world as it exists after any modifications that have occurred. There are tricks and techniques for minimizing this data flow, so that the entire world volume doesn't need to be uploaded to each connecting client. Right now this is not a priority, or even a necessary feature for what I am trying to accomplish with this current project. This could change!


Being that the world is volumetric (voxels), it was clear that there needed to be a data structure to hold the volume in memory, not just for rendering, but also for physical interactions with entities. Following Revolude's original design, the world would be procedurally generated (as opposed to loaded) when a game is started or joined. This data structure would need to support efficient read/write access.


I examined the possibilities of a few different algorithms and data structures for dealing with volumetric data. A lot of algorithms for storing compressed volume data rely on the fact that the data represents direct visual data, like the pixels of an image, in the form of red/green/blue/alpha color channels. This allows for all sorts of neat 'lossy' image-compression style tricks to be used, which forsake exact reproduction of the original data in exchange for smaller size. For applications where the volumetric data is relatively finely detailed, these algorithms are great.


For my project the worlds are not that large, or that finely detailed. Voxel size is on par with games like Minecraft and Infiniminer. The primary difference from these games is that voxels will not be cubes. Voxels are also represented using a single byte to store a material-type value, allowing for up to 255 different materials (material zero is empty space). Worlds are also not intended to be infinite. Instead of having worlds extend infinitely through procedural means, they will be a finite size, and wrap around seamlessly. There will be no edge to the map, nor will it go on forever into places that nobody will ever go.


I'm still settling on a specific world size. With the sizes I'm considering, storing the raw voxel data becomes feasible, without the use of any sort of sparse data structure. For example, with a world size of 1024x1024x256 the size of the world data is then 256 megabytes. Each horizontal slice of the world is one megabyte. The only qualm I have with just leaving the data sitting in memory, when virtually every machine capable of running the game has enough memory, is cache coherency. The larger the volume, the further apart in memory two neighboring voxels could lie. This is not good for performance!


It's arguable that using flat storage for a finite volume won't produce a significant slow-down when the volume is queried for things like collision detection and rendering. Personally, I just don't want to be using more memory than I need to, especially if a byproduct of reducing the size is gained speed. Above all else, I love the challenge of implementing a sparse data structure ;)


The first obvious option on everyone's mind is a sparse voxel octree. This is a wonderful structure, but can become computationally expensive to iterate as it deepens. One strategy I had a while ago to 'enhance' the octree is to allow each node to have more than 8 children. Instead of 2x2x2, it could be 4x4x4, for 64 child nodes. This would allow an octree with a dimension of 1024 (gigavoxel) and 10 tree levels to only have 5 levels to iterate through from root to leaf. The issue here is that there would be excessive 'empty' child nodes all over a volume. This would be the price for faster iteration.


Another strategy, one which I became quite fond of since my last post, is to store volumes as run-length encoded columns. This is especially attractive because it effectively reduces a volume into a few 2D images, and would work extremely well where voxels represent a limited number of materials (as opposed to highly variable 32-bit RGBA values). Many areas of the volume would be one or two column 'segments'. This approach was almost the one that I finally settled on, but I was having implementation indecision issues, trying to find a 'sweet spot' balance between speed, size, and simplicity. My obsessiveness with finding the perfect solution became a hindrance to progress.


Ultimately, this led me to fall back on an older idea, which is really an amalgam of a few ideas. At the coarsest level, I represent a volume using a sort of hash table: just a flat array that divides the world up into fixed-size 'chunks'. The trick, then, is to use a sparse voxel octree to represent the contents of each chunk. With some efficient design, a chunk that is entirely one material (including empty) is stored as four bytes. The rest of the chunks, which I call 'boundary' chunks, are stored using a variable number of octree nodes. Each one has its own pool of octree nodes from which it builds the sparse octree representing the organization of voxels and their material indices. This node pool starts out small, and is reallocated as needed by doubling its size each time it fills up.
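A rough sketch of that two-level layout, with field sizes that are my guesses rather than the engine's actual ones: a flat array of chunk headers, where uniform chunks store just a material and boundary chunks own a growable node pool.

```cpp
// Hypothetical chunk table entry plus a doubling node-pool allocator.
#include <cstdint>
#include <cstdlib>

struct OctNode {
    // Eight child indices into the chunk's node pool; the high bit could flag a
    // leaf, leaving 15 bits of index (enough for a 32^3 chunk, per the post).
    uint16_t child[8];
};

struct Chunk {
    uint8_t  uniformMaterial;   // valid when nodeCount == 0 (whole chunk is one material)
    uint32_t nodeCount;         // octree nodes in use
    uint32_t nodeCapacity;      // current pool size
    OctNode* nodes;             // nullptr for uniform chunks
};

// Grow the boundary chunk's node pool by doubling whenever it fills up.
uint32_t allocNode(Chunk& c)
{
    if (c.nodeCount == c.nodeCapacity) {
        uint32_t newCap = c.nodeCapacity ? c.nodeCapacity * 2 : 16;
        c.nodes = (OctNode*)std::realloc(c.nodes, newCap * sizeof(OctNode));
        c.nodeCapacity = newCap;
    }
    return c.nodeCount++;
}
```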


Currently I am working with chunks of 32x32x32, which is 32768, or 32k, voxels. This seems like a nice round number, because I can fit an octree node index into 15 bits (the 16th bit flags leaf-node status). Now, if every voxel in a chunk could be a different material (32k different materials) then the index would overflow, because the roughly 4680 inner nodes would push the total node count beyond what 15 bits can address - but I am willing to bet that with fewer than 256 possible voxel materials this 'special' case chunk will never occur. Most chunks will never even see 10k nodes.


With chunks that are 32k voxels in size, a 64-gigavoxel volume (4096^3) would consist of about two million chunks. The flat array of chunk headers for a 64-gigavoxel volume would be on the order of eight megabytes. The total size of the chunks themselves could vary, but would average less than 100 megabytes. This is really great for 64 gigavoxels. Now, I'm not going to be representing a world that is 64 gigavoxels, I don't think. I'm thinking smaller, because the goal here is a game that is more of a multiplayer online battle arena than some sort of vast expanse for an MMORPG.


This is actually how all volumes will be represented in memory: in terms of materials, and built out of chunks. Some game objects will be displayed using volumes, some using other graphical techniques. This is the global 'voxel' solution.


Links of interest:
Visualization of Large RLE-Encoded Voxel Volumes

An Analysis of Minecraft-like Engines
Ken Silverman's Voxlap Page


7.21.2014

Procedural Content, and Aiming Too High


At the outset of this project my initial aim was to allow full control over the entirety of a scene, as far as level design and editing is concerned. After sitting down and carefully considering potential routes of implementing functionality to allow users to create custom worlds and other assets, I've come to the decision that the work required to provide an interface for editing such assets will only hinder my ability to actually complete this project.

With my previous project, Revolude, it seemed rather slick to procedurally generate the world when the game starts. Whoever was responsible for launching the game server had access to various sliders and coloration options they could play with to customize the 'style' of the world that people would experience while playing in the game they are hosting.

the "start game" menu screen from my previous project 'Revolude'


This functionality alone was simple enough to implement, and was (to my mind) the happy medium between designing your own level, and choosing an existing one from a list when launching a game. A world seed value (not exposed via the UI menu screen) along with all of the server's world-generation parameters are gathered up and sent off to connecting clients who are joining the game, so that they can generate their own local copy of the entire world for rendering and collision detection prediction.

One of the primary advantages to this approach is that every server can be running a completely unique world without clients being required to download or store any map geometry from the server. The only thing transferred are some parameters for the procedural generation of the world. This is, in my mind, the great advantage to using procedural methods.

What with my non-existent experience concerning floating point determinism, I ran into some serious bugs where trees would sometimes be planted, or not, in specific 'close call' spots that seemed too steep to some CPUs and not too steep to others. This resulted in worlds that were slightly different on two machines playing on the same server. These sorts of issues are resolvable, especially if you are aware of them before writing any code in the first place.

Nonetheless, I really like the idea of providing a user with procedural tools to create a base scene volume, from which they could construct their vision by hand. These worlds would be saved, in compressed form, to disk, and would be uploaded to connecting clients. I envisioned a full built-in editor for flying around sculpting worlds, placing entities and detail props, etc., along with a procedural materials editor, an entity voxel volume editor, and possibly a synthesized sound editor (Revolude actually featured console-scripted sound synthesis, with enveloping and a few effects, but it required hand-editing sound parameters in an external text editor and alt-tabbing back and forth to listen to the resulting sound).

It just seems like too much work for what I'm trying to accomplish. What I would rather do is give a preview to a user that is starting a game. This would just be a more advanced version of what Revolude does. Storing to disk, and transmitting actual compressed volume data to clients just sounds too expensive, and defeats what I initially hoped to accomplish by utilizing procedural methods in the first place.


6.21.2014

Networking Game State and Events

Distributing the game state among multiple machines can be tricky business, depending on the complexity of your game state itself. Most games treat the game state as an array of entities, sometimes indexed using some sort of a scene graph or other hierarchical organization. At the end of the day, though, it's just a list.

To make designing the game more flexible (what with scripting, etc) usually these entities are represented as merely an entity index, and a chunk of data associated with that entity. This data 'blob' is then structured into meaningful values through the scripting system, which divides it up into entity state variables like origin, velocity, model index, etc.

On the networking side of things the engine discerns what state changes have been made to each entity that are relevant to the machine on the other end of the connection. Typically this consists of the server trying to figure out what information is relevant to each client about all the entities in the game, based on what the server knows the client has received so far, and what the client needs to know for its perspective of the game state.

This entails a complex system of backing up copies of the game state to generate a delta-compressed update unique to each client's situation, based on what the client has acknowledged receiving of the game state so far. Fortunately, things are much simpler for the client, which only has to update the server about the player's activities - a far narrower scope.

The system I've been developing the past few days simplifies things, for the most part. It could also potentially simplify peer-to-peer type games, as an emergent property of its design. But for now I'm focusing on utilizing a server for simulating most of the game state, while allowing clients to submit their own simulation states. The goal, for simplicity's sake, is to utilize the same system on both ends for dealing with generating and sharing the game state.

This requires a system that works virtually like two servers talking to one another. The only difference is that the client has a player manipulating the portion of the game state it is responsible for, which it relays to the server, which in turn relays it to other clients.

One strategy for simplifying the description of entity types, while simultaneously simplifying serialization of entity states for network transmission, is to utilize entity type templates. Instead of having a scripting language that describes (in some sort of VM-executed code) setting the state of an entity one variable at a time, why not just have static entity templates that are essentially copied onto real entities when specific triggers or events occur?

Then, all that's needed to be transferred over the network when an entity's non-continuous state evolves (eg: entity flags, boolean states, model index, physics type, etc) is an ID value for the specific entity template that the entity should become. This eliminates the need to submit any specific entity properties outside of the continuous state (like origin, velocity, etc). The server could send these templates to clients, so that servers can be running customized games.
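Here's a minimal sketch of the template idea as I've described it - the struct names and fields are placeholders of my own, but the point is that a single small ID on the wire expands into the whole discrete portion of an entity's state:

```cpp
// Hypothetical entity template lookup and application.
#include <cstdint>

struct EntityTemplate {
    uint16_t modelIndex;
    uint16_t flags;
    uint8_t  physicsType;
};

struct Entity {
    // Continuous state, still transmitted/predicted the usual way.
    float origin[3];
    float velocity[3];
    // Discrete state, filled in wholesale from a template.
    EntityTemplate discrete;
    uint16_t templateId;
};

void applyTemplate(Entity& e, const EntityTemplate* table, uint16_t id)
{
    e.templateId = id;
    e.discrete   = table[id];   // one ID over the network expands to all of this
}
```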

Utilizing an event-system, for things like player chat messages, player actions (jump, shoot, kill), entities changing their type/template, etc. allows the game state to be divided into discrete 'frames' that are not based on units of time, but units of actual state change. Each machine, upon generating an event, both executes the event and queues it to be distributed to remote systems.

Clients distribute their locally-generated events to the server (through a Quake3-style re-send-until-acknowledged method), which executes the events and queues them to be further distributed to all clients. One global queue can be used on the server, in a circular buffer, and each connected client will continuously provide the server with acknowledgement of the last event it received.

Right now I am implementing the events system which consists of an event execution mechanism (giant switch/case) and data parsing. Data comes in as a simple void* pointer, which is either forwarded from a network-parsed event, or from wrapper functions which allow engine code to invoke specific events with actual variable parameters by lumping them into data identical to what can be found in a network event.
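A minimal sketch of that shape (event names, payload structs, and the wrapper are assumptions, not the actual event list): one big switch over the event type, with the payload arriving as an untyped pointer that each case reinterprets.

```cpp
// Hypothetical event dispatch: switch/case execution plus a wrapper that packs
// real parameters into the same layout a network-parsed event would have.
#include <cstdint>

enum EventType : uint8_t { EV_CHAT, EV_JUMP, EV_SET_TEMPLATE };

struct ChatPayload        { char text[128]; };
struct SetTemplatePayload { uint16_t entity; uint16_t templateId; };

void executeEvent(EventType type, const void* data)
{
    switch (type) {
    case EV_CHAT: {
        const ChatPayload* p = (const ChatPayload*)data;
        // ...display/log the chat message
        (void)p;
        break;
    }
    case EV_SET_TEMPLATE: {
        const SetTemplatePayload* p = (const SetTemplatePayload*)data;
        // ...look up the entity and copy the template over its discrete state
        (void)p;
        break;
    }
    case EV_JUMP:
    default:
        break;
    }
}

// Engine-side wrapper: invoke the event locally and (elsewhere) queue its bytes
// for distribution to remote machines.
void raiseSetTemplate(uint16_t entity, uint16_t templateId)
{
    SetTemplatePayload p{entity, templateId};
    executeEvent(EV_SET_TEMPLATE, &p);
}
```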


6.19.2014

Multiplayer, Netcode, Etcetera


A big giant portion of my game project is multiplayer. Since the days of Quake3 I have found it hard to imagine working on much else than a game which can be played online among friends, enemies, and total strangers. I've also been an avid fan of Counter-Strike since the beta days, and the ultra competitive game play enthuses me to this very day.

Anybody who played QuakeWorld (TF, 2fort4 anyone?) surely remembers the requisite skill of being able to lead your targets in order to actually hit them. This was acceptable to us back then. It was considered a fact of life, and was simply unavoidable. It was also something that could be improved upon.

During what I like to think of as "The Counter-Strike Days" a programmer at Valve Software by the name of Yahn Bernier developed a new approach to networking multiplayer games in the Half-Life engine. It was a two-pronged strategy that consisted of client-side prediction and server-side lag compensation.

Client-side prediction is actually just a technique for disguising the fact that the server is doing all the authoritative simulation and that there is internet-induced latency in communicating the state of the simulation to the player clients which are interacting with it. It goes a long way toward making the game feel more responsive, without actually making the simulated interaction of game objects more responsive. Only superficial aesthetic aspects of the game state can be predicted, and are not necessarily an accurate depiction of the actual game state.

Server-side lag compensation is the closest thing to an actual solution for minimizing the effect network latency has on game play and simulation responsiveness. More often than not, it works quite well. If you aim directly at an opponent and fire, the hit will register almost as accurately (though not as immediately) as if the game were being run on the local machine. The server effectively rewinds the game state before performing physics and intersection tests.

On paper it sounds great. In practice, it isn't perfect at coping with latency jitter - latency variance (fluctuating ping) among players - which can throw off the compensation wildly. This fact is not well known. It is at the heart of clips being emptied point-blank to no avail. It is also a good reason to avoid playing over WiFi connections, which are prone to random interference and ping spikes.

One more strategy for smoothing out the game experience is interpolation. In most modern games, this involves the client storing up multiple updates from the server, so that it can play them back as smoothly and accurately as possible - by knowing the starting point and following point of an object's motion. In Counter-Strike Source, for example, this is fixed at 100 milliseconds. No matter what the actual network latency is, it is always compounded with an extra 100 milliseconds of delay for the sake of smoothing out object motion.


This image of Counter-Strike Source with sv_showhitboxes enabled displays the last known position of an entity, as received from the server, before the 100 millisecond interpolation delay that is used to smooth movement.


At the end of the day, it seems like a lot of work just to make the game as responsive as possible without actually reducing or eliminating the real latency between clients and servers. As long as the server is the only 'true' simulation of the game, these techniques for minimizing the effects of latency on game play are as good as it's going to get without upgrading the actual internet itself.

The reason that many games use the above techniques is that they are the best there is right now, and nobody believes that anything better can be done; thus, nobody really explores different options. They see server-side simulation authority as imperative, because it's the only way to make the game secure from cheaters hacking and hackers cheating. My strategy is to do everything differently from what is considered 'right' by many developers.

Firstly, I believe in letting the player's simulation occur client-side. I believe that game play can only be furthered by removing the lag component almost entirely from the player interacting with the simulation. This would be the equivalent of affording some game authority to the client-side prediction being used already.

Of course, the issue of hacking and abuse is the first thought to cross the minds of virtually anybody who understands the difference between client-side and server-side game logic. This is alleviated using a simple array of sanity checks for various 'vulnerable' circumstances. There is always a requisite state, or group of states, which allows a particular following state.

By closely examining the evolving state of a client's side of the game, on the server, it is easy to determine the likelihood that the game has been altered, hacked, etc. Each client will have a dynamic score indicating the probability of game tampering, and a threshold for this value which will invoke consequences (eg: kickban) once reached.

So, let's say that we are simulating a player's influence on the game locally on the client, and we have our hacking detection on the server, and everything is peachy. There's still the issue of latency when the game state is sent to clients. Clients will be interacting with an older state than other players are seeing. If two players are in a firefight, and one tries to take cover, he could end up dropping dead after reaching a safe spot simply because the other player could still shoot him where he was out in the open. This is not fun!

What if we could just predict where other players could be all the time? Or at least guess, so that we're closer to seeing where they actually are, as opposed to where they were? This is called extrapolation. Most games only rely on extrapolation when there is a lag spike, or dropped update packet, and the client's simulation runs out of updates to interpolate between to keep things moving smoothly.

I propose utilizing extrapolation exclusively to substitute for interpolation. When an update is received it should be used to project where the object currently is (based on latency) and where it will be by the estimated time the next update will be received. In the interim the engine can begin interpolating from wherever the object is at the moment (end of previous update extrapolation) to this predicted position.

This will not be nearly as accurate at showing the actual path as existing interpolation/delay methods, but since the server isn't the boss anymore this doesn't even matter! In fact, it will be extraordinarily inaccurate for higher pings.

At a round-trip time of 50 milliseconds (ping), an object will be 25 milliseconds ahead of the position received. If updates arrive at 20 Hz then we can add another 50 milliseconds (plus or minus, depending on whether the update is early or late). So all we need to do is project the received origin out by 75 milliseconds and start interpolating the position to this new spot. A better approach would be to estimate an interpolation vector, scaled so the position will reach the projected position at 75 milliseconds, but continue moving the object with that same vector if the next update doesn't make it in time. When a new update comes, it will cause a smooth correction, and the accuracy is no longer significant as far as manipulating the state of the object locally (shooting and killing it) is concerned.
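Here are those numbers worked through as a sketch (the function names are mine; with 25 ms of one-way latency and 50 ms between updates, the projection is 75 ms ahead):

```cpp
// Hypothetical extrapolation helpers for the scheme described above.
struct Vec3 { float x, y, z; };

// Project the received state ahead by one-way latency plus one update interval,
// e.g. 0.025 + 0.050 = 0.075 seconds.
Vec3 projectReceivedPosition(const Vec3& origin, const Vec3& velocity,
                             float oneWayLatencySec, float updateIntervalSec)
{
    float ahead = oneWayLatencySec + updateIntervalSec;
    return { origin.x + velocity.x * ahead,
             origin.y + velocity.y * ahead,
             origin.z + velocity.z * ahead };
}

// Each frame, move the displayed position toward the projected target at a rate
// that reaches it by the time the next update is expected; if the update is late,
// the same step keeps the object drifting along until a correction arrives.
Vec3 stepToward(const Vec3& current, const Vec3& target, float dt, float timeToTarget)
{
    float t = (timeToTarget > 0.0f) ? (dt / timeToTarget) : 1.0f;
    if (t > 1.0f) t = 1.0f;
    return { current.x + (target.x - current.x) * t,
             current.y + (target.y - current.y) * t,
             current.z + (target.z - current.z) * t };
}
```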

Now objects will be more closely where they really are for clients and the server. At least, they won't be far behind like existing methods force them to be. This allows for firefights and interactions to be far more engaging, because it will drastically reduce the take-cover-and-die phenomenon.

The one side-effect of extrapolation is rubber-banding. Let's say we are viewing an object that is stationary, and our client has 200 ping (100 ms one-way latency). The object begins moving, and we receive the update 100 ms later that it has moved a certain amount so far, and is moving at a certain velocity. Now, we take our stationary local copy of the object and have to accelerate it to the position we predict it will be at by the next update, moving it faster than it actually was moving. Once we receive the second update about its motion, we should be pretty synced up, so long as it keeps moving in a straight line, but there will be a noticeable drop in its movement speed, back to its real speed, when that occurs.

This image depicts a player path (moving upwards) and what it looks like linearly interpolated from one predicted position to the next. If one were to also extrapolate velocity this could be smoothed further.


Inversely, when the object stops moving, we will still be simulating it moving beyond its stopping point, and our simulation will be forced to bounce it back to its resting position. Objects will effectively be racing around to be where they really are, always drifting around. This can be hidden by not allowing abrupt movement changes using low acceleration and friction values, but is not always fun because it lowers the game pace. Further smoothing of positions and velocities can be applied, but ultimately the smoother the result, the less real-time it will be. You will be trading smoothness for delay.


6.18.2014

Voxel Polygonization and Game Engines

Hi, and welcome to my blog. I wanted to have a place online for the world to see what I am doing, something to keep me focused. Right now I am polishing an algorithm I conceived a few years ago for generating a triangular mesh from a voxel volume. From the algorithm's inception I've understood what I wanted it to output; it wasn't until recently, while starting on my (latest) game engine project, that I finally had the insight needed to actually write it.

 I was trying to nail down what sort of world I wanted my current game project to present to players. In my previous game engine project, Revolude, the terrain was a simple heightmap and a static ROAM type algorithm was used to perform a simple polygon reduction before dumping each terrain chunk to a display list consisting of triangle strips and fans.

Revolude, with some pseudo-trees and buildings, and some missiles hitting the ground.

I was just about to opt for creating a much more involved version of this same type of terrain when I had a stroke of ingenuity. The algorithm is logically very simple, and produces what I like to call the dual of a cuberille mesh. In other words, it takes a Minecraft-esque mesh and converts every face into a single point, and every corner point into a triangle. The dual of a cube is an octahedron, and the algorithm I wanted to create essentially produces an octahedral mesh from a voxel volume.

A cube, and its 'dual' - an octahedron. Notice how each of the eight cube corners becomes a single triangle, and each of the six cube faces is collapsed into its center point.

From an outside standpoint, this really seems very simple to do. One approach could be to first generate a simple boxy cube mesh, and then try to brute-force convert it into its dual, an octahedral mesh. This became an appealing option after a while, but I only approach problems with the intent of devising a novel solution, so that was out of the question.

Ideally, I was hoping to create an algorithm in some way similar to marching cubes - where each cube was inspected along with its neighbors to produce geometry. Something that achieved the desired output through such an immediate method was the goal. This approach, however, became much more involved than I had initially believed it could be.

The end product works in multiple passes, on the volume as a whole. The first pass examines 8 voxels at a time, determining whether vertices should be placed, and where. The second pass connects vertices to form edges. Edges can only exist between vertices where the dot product of their normals is greater than or equal to zero. So, vertices must either be facing in the same direction, or up to 90 degrees apart. The third pass involves detecting triangles formed by edges. This is done by searching for 'complement' edges for each vertex, where two edges connected to the vertex are connected to a third edge.
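As a tiny sketch of just the edge-admission rule from that second pass (the structures here are illustrative, not the real ones), two candidate vertices may be linked only when their normals are within 90 degrees of each other:

```cpp
// Hypothetical edge test: normals must face the same way, or up to 90 degrees apart.
struct Vert { float px, py, pz; float nx, ny, nz; };

bool canFormEdge(const Vert& a, const Vert& b)
{
    float dot = a.nx * b.nx + a.ny * b.ny + a.nz * b.nz;
    return dot >= 0.0f;
}
```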

The last operation involves detecting all the left-over quads formed by edges, by doing a similar search to the triangle detecting pass. Instead of looking at the edges connected to a point, we look at the edges connected to an edge, and see if there is an opposite edge on the ends of its connected edges.

These operations, naively implemented, can be extremely slow. Fortunately, it is not too expensive to store extra connectivity information which is used to quickly search edges, vertices, and triangles for related geometry. The end product can then be reduced to simple vertex information and vertex indices for triangles.



Here is a quick video of the algorithm at work with an evolving volume. Forgive the choppiness of the video, my little netbook isn't very awesome. The slowest part of this demo is generating the volume itself. The next step will be to utilize a compressed data structure as the data source, as opposed to raw volume data.