Holocraft - Process for Fabricating Specular Holograms

The clearest video of a Matthew Brand hologram on the web.

It seems my blog is getting a bit more traffic now, thanks to my previous post about my adventures with specular holography, so I decided it would be prudent to offer up my holograms in a crowdfunded fashion. I just launched a relatively modest campaign for $5000, which would fund a low-end CNC machine, aluminum stock, and a few months' rent on a tiny office/studio space where I can set up the machine and 'get my hologram on'.

'Holocraft', a program that generates toolpaths for a hologram's fabrication.

Originally, when I set out on this project, my plan was to refine the cardstock holograms into a viable product that could eventually fund a CNC machine for creating high-quality metal-plate holograms, but the cardstock holograms are just not quite "there". The cost/benefit of getting them to look pristine just isn't viable. I'm sure someone would like to have one as a novelty item, or even to frame and hang on their wall, but I'm not comfortable directly marketing them as a finished product, because I wouldn't want one myself. I want a heavy-duty, super-duper-shiny metal hologram!

I ask that if you think this project is cool, and want to see more stuff happen, please spread the word about it and tell your friends. Facebook, Tweet, and otherwise social-network the snot out of this blog and/or my Indiegogo campaign. I have this strong feeling that this medium is the tip of the iceberg, and that a lot of energy will change form as a result of it. It just needs to get out there in front of people and on their mind.

I've already figured out virtually everything there is to figure out: how to run Holocraft output on the CNC machine, which cutting tools to experiment with, and what grade of aluminum to use. I have also already sourced every single thing I will need to buy to make it all happen.

After much consideration, weighing the pros and cons of every desktop CNC Google would show me, I settled on an X-Carve by Inventables, which is controlled by an Arduino running the open-source g-code interpreter GRBL. Worst case, it holds a tolerance of five-thousandths of an inch, which is pretty sloppy by professional fabrication standards but just good enough for my holographic endeavors. It also boasts a 31x31-inch work area, far larger than most other machines offer: some have less than a foot of working space, and most are under two feet in each dimension. I want to be able to make relatively large-format holograms. Once you get too big, though, the fact that the light source isn't infinitely far away starts to distort the hologram, because each area of the hologram receives incident light at more widely varying angles. This can be compensated for, but it will require special attention and more math.
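To put a rough number on that effect, here's a toy calculation of my own (not from Brand's paper) of how much the incident angle varies across a plate lit by a nearby point source; the plate size matches the X-Carve's work area, but the 2 m lamp height is a made-up placeholder:

```python
import math

def incident_angle_deg(x_mm, light_height_mm):
    """Angle from vertical at which light from a point source directly
    above the plate's center strikes a spot x_mm from the center."""
    return math.degrees(math.atan2(abs(x_mm), light_height_mm))

# Hypothetical setup: a 31-inch (787 mm) plate lit from 2 m above its center.
spread = incident_angle_deg(787 / 2, 2000) - incident_angle_deg(0, 2000)
# The edge of the plate sees light arriving roughly 11 degrees off-vertical,
# while the center sees it arriving straight down.
```

Eleven degrees of variation across one plate is plenty to visibly shift where each groove throws its glint, which is why large formats need per-region compensation.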

The X-Carve CNC mill I aim to acquire for holographic purposes.
The X-Carve CNC mill I aim to acquire for holographic purposes.
After a bit of research I've opted to go with 1/16-inch plates of 1100 aluminum alloy. 1100 aluminum is 99% pure, purer than every other common alloy except 1050 and 1060. Pure aluminum is much softer to work and machine, with a Brinell hardness of 28 (copper is 35, lead is 5, steel is 150). This means the tiny cutters used to machine the hologram's reflector grooves will not wear down as quickly, which is a good thing. 1100's softness and purity lend themselves well to optical applications, because it can be polished to a mirror shine without impurities mucking it up.

Another issue that cropped up is that CAM software typically isn't designed to handle the sort of input Holocraft generates. When importing a path, these programs like to assume it's a closed shape that you want to use as either a hole or an island/extrusion of some kind. In this case it's neither; I just want the machine to cut an arbitrary groove as output by Holocraft.

Of all the high-end CAM packages I could locate trial versions of, none were able to import Holocraft data and use it properly. I was about to give up when I came across MakerCam, which is actually just a Flash applet that runs in a browser. By the grace of awesomeness, it happens to do exactly what I need: it imports the SVG that Holocraft outputs, offers a 'follow path' operation, and lets me export the g-code as such. However, it struggles a *little* with the sheer number of paths Holocraft outputs, which can run to tens of thousands of individual little curved reflecting grooves, which tempts me to just implement direct g-code generation in Holocraft. It would not be the first time I've written a program that generates g-code.
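If I do end up generating g-code directly, the core of it is pretty simple. Here's a hypothetical sketch of the kind of GRBL-flavored output I have in mind, tracing one groove as a polyline; the cutting depth, safe height, and feed rate are made-up placeholder values, not tested numbers:

```python
def polyline_to_gcode(points, depth=-0.1, safe_z=2.0, feed=300):
    """Emit GRBL-style g-code that traces one groove: rapid to the start,
    plunge to cutting depth, feed along the path, then retract."""
    lines = ["G21", "G90"]                    # millimetres, absolute coords
    x0, y0 = points[0]
    lines.append(f"G0 Z{safe_z:.3f}")         # lift clear of the plate
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f}")   # rapid to groove start
    lines.append(f"G1 Z{depth:.3f} F{feed}")  # plunge into the aluminum
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}")
    lines.append(f"G0 Z{safe_z:.3f}")         # retract before the next groove
    return "\n".join(lines)
```

Repeating this per groove, with the hyperbolic curves flattened into short line segments, would sidestep MakerCam entirely.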

'Chamfer King' - One of the several CNC machining utilities that I wrote for my dad's CNC shop back in the day.

What I really need to do now is brush up on my 3D modeling skills and start producing my own content to 'holographize', content that actually represents my own creative self-expression, of which I have plenty to draw upon. So far I've only been testing Holocraft with models from the various online repositories of 3D-printer models. There are some really decent paid models that would make good holograms, but I'm more interested in making good holograms that depict what I want them to depict, not just what works.



Holocraft - Adventures with Light, in Time and Space

Bitphoria is not dead. I did, however, take a break over the summer to take care of some real-life situations. Recently I have mostly been cleaning up code, doing a second pass to add error-checking wherever I skipped it the first time through. I also managed to work out the bugs and kinks in the networking that were highly problematic. There were a handful of bugs that seemed like total project-killers at the time I decided to take a break; once I came back, they turned out to be simple one-line fixes that I had feared would require revamping entire swaths of code. Exciting times. Bitphoria is alive and well, and I will be posting more updates about further progress sooner rather than later.

 In the meantime, I have recently found myself distracted with another project. My wife and I run a business out of our home, selling crafts we create ourselves using various computer controlled printers and cutting machines. There is a sort of pressure to come up with new products to make and sell to keep ourselves relevant in the online marketplace. This side project seems to me to be quite a lucrative endeavor that I am excited to be able to work on.

 Now, in the late 90's, when I was barely a teenager, I discovered a website by a man named William Beaty. This page was about his discovery and adventures with something that is now referred to as 'scratch' or 'abrasion' holography. At the time I thought it was interesting, but only cared about learning about conventional holography, using lasers, and producing interference patterns in photographic media. I hadn't really thought much about scratch holograms since then, until one night a few weeks ago.

William Beaty's website as of 1999. I specifically remember this ASCII art from my initial discovery of his webpage.

I was goofing around with a flashlight in the dark bathroom of our home with my four-year-old daughter. Incidentally, I was telling her about the 'ghosts' I wanted to make in Bitphoria, which chase the player around just like the ones in Pac-Man. She asked what they looked like, and being quick on my feet, I smudged one onto the mirror above the sink. We discovered and observed a few neat optical properties of this greasy smudge. Shining the flashlight on it cast a shadow of the smudge on the ceiling, because the smudge wasn't reflecting light to the ceiling the way the rest of the mirror was. I also noticed a holographic depth effect produced by the fact that each of my eyes was receiving a different specular configuration of light reflecting from the smudge. This reminded me of the scratch hologram website, particularly where Beaty explains that he witnessed a holographic effect in a car's windshield exhibiting the same properties and behavior.

Beaty's 20-year old photograph of the hand smudges that inspired his experiments.

The site I originally stumbled across as a teenager is still up today, and I managed to track it down that night. I don't know exactly when or why I became fixated. Maybe I had unconsciously wanted to make these holograms all my life, and it seemed that time had come.

The gist of scratch holography is that by using a drafting compass, or some other means of producing circular arcs, one can embed scratches, or grooves, into a reflective surface. These catch the light, producing specular glints that shift depending on the viewer/light/surface angles. The end result is that one can produce points of light that behave as if they are suspended behind or in front of the material. The circular arc scratches that produce these points of light can be arranged into various shapes and designs that exhibit a holographic effect.
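A toy way to see why an arc scratch reads as a point floating at depth (this is my own simplified stereo model, not Beaty's derivation): each eye catches the glint at a slightly different spot along the arc, and triangulating the two sight lines places the virtual point roughly one arc-radius behind the surface. Assuming the glint shifts along the surface in proportion to the viewer's horizontal offset:

```python
def apparent_depth(arc_radius, eye_sep=65.0, view_dist=500.0):
    """Toy stereo model of a scratch hologram.  Assume each eye's glint
    shifts along the surface by arc_radius * (eye offset / view distance);
    triangulating the two sight lines (similar triangles) gives the depth
    at which the glint appears to float behind the surface.  All mm."""
    shift = arc_radius * (eye_sep / view_dist)   # glint separation on surface
    return shift * view_dist / (eye_sep - shift)

# In this model, a 20 mm radius scratch viewed from half a metre
# produces a point floating roughly 20 mm behind the surface.
```

The takeaway: apparent depth tracks arc radius, which is exactly the knob a compass (or a machine) gives you.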

A collection of Beaty's hand-drawn holograms, and a diagram showing the configuration of arc-scratches that produces a V-shaped hologram out of reflected light on an appropriate surface medium.

My first thought was to automate the process, to produce the best possible scratch holograms. It quickly became apparent, in my googling, that I was not the only person to have this idea. I did a lot of searching, and it would seem there is only one program that became somewhat popular online, called 3DSilhouette. It is a Visual Basic application and is simply not available online anymore. From what I gather, you can email the author for a copy, and he will provide instructions that it must be installed to a specific directory path on your computer in order to work.

 3DSilhouette creates a variety of output. It can output the actual scratch arc positions, or it can output a pattern of vertical lines that allow one to use a compass by anchoring the compass to one end of the line, and then reaching the scratching-end of the compass to the other end of the line to get the proper arc-radius, and then go ahead and make the arc itself. This all seems great, if you want to make scratch holograms by hand.

Raul's scratch-hologram generator program, which loads .3DS model files (as output by 3D Studio Max) and calculates circular arcs that are to be made to reproduce the model in a holographic form via scratch holography.

The only other program I was able to find anything about appeared in a video: it was part of a project by a team of MIT students who were creating both the software and the machine to produce the holograms in a completely automated fashion. This YouTube video is all that seems to exist of the work, as its author has posted no updates about the program, the machine, or its output.

This program appeared very promising, and possibly of higher quality than Raul's 3DSilhouette. But there is not one other shred of evidence that it was ever available to the public, or that it ever really did anything; it could just as easily have sat on someone's hard drive since the making of that video. Judging by the video, however, it does appear to produce the best possible scratch hologram by orchestrating the plethora of circular arcs required to reproduce the supplied 3D model data, to some degree.

After some reading and YouTubing, I learned that circular arcs per se are not the ideal optical-surface geometry if you want a rigid holographic effect that doesn't distort and collapse the 3D scene being displayed. This distortion can be seen in Raul's videos of the scratch holograms his software generated patterns for: the 3D effect is visibly distorted and over-exaggerated, in that the perspective of the 3D scene rotates too much when the viewer moves only a little horizontally. This is not desirable.

Several people have explored scratch holography in search of ways to optimize these mechanically produced holograms. One person, named Matthew Brand, had the means and the know-how to figure out exactly what was needed to produce the best possible holographic effect in the scratch-hologram medium. He managed to discern the exact math that allows one to calculate an optical surface topology that yields distortion-free holograms via the specular reflection of light.

Brand began this project in 2008 and demonstrated that an optical surface could be machined which foliates (approximates) the mathematically exact surface that produces the desired holographic effect for a given set of 3D points. His extension of scratch holography is referred to as 'Specular Holography', in that it hinges on calculating how to manipulate a surface, and its resulting specular reflectivity, so that a much higher-fidelity holographic effect is produced. Brand made scratch/abrasion holography into child's play, and refined the art into something that nobody else has been able, or willing, to explore since.

He went on to produce over one hundred works, most of which were part of an art installation at the Museum of Mathematics ('MoMath') in New York. Currently, the Specular Holography installation is being sold off at $1,200 per hologram, via

That is quite a pretty penny these holograms are fetching. In the meantime, Brand has moved on to bigger and better things (i.e. Lumography). He wrote a whitepaper on the workings of Specular Holography, which was published in 2010. Since then, it seems nobody has taken an interest in expanding on his work or exploring it further. Brand himself has not even followed through with the second paper that his existing paper mentions as necessary. It would seem that Specular Holography, in all its glorious precision and beauty, has yet to catch on as a medium.

 To my mind, it's something that has the potential to 'catch-on' and go mainstream.

At any rate, it became clear that if I wanted an automated means of producing marketable holograms, I would have to write my own 'hologram generator' program incorporating Brand's mathematical derivations. I spent a few days decrypting the academic conventions Brand conveys his ideas through, and managed to glean from his paper what is required of a surface to depict rigid, beautiful holographic scenes.

By happenstance, one of our cutting machines had recently 'died': a Craftwell eCraft die-cutting machine, which we used to create half of the products we sell online to support ourselves. It died while being used to produce prototype Halloween decorations in early September of this year. When I say 'died', I mean that the bearings inside the blade-holding assembly, which performs the actual cutting of the paper/cardstock, 'became' jammed (long story).

The machine still functions as designed, moving the blade head where it's supposed to go on the cutting medium; it just can no longer cut material, because the swiveling blade mechanism it relies on is essentially stuck in one orientation and cannot follow the direction of the cut anymore. The software and machine are otherwise still operational.

My wife told me for weeks, months even, to throw the machine out, but I resisted, knowing all the while that someday I would figure out *something* I could do with the poor feat of engineering that had lost its way.

That night with my daughter, reminded of Beaty's website from my childhood, I had finally found an application I could repurpose our otherwise 'dead' machine for, breathing some life back into its utility and value and turning it from a liability, just taking up space and collecting dust, into a valuable asset. All that needed to be done was to bridge the gap between the idea of making holograms and an actual marketable product, and the resulting profits would further fund life, ventures, Bitphoria, etcetera.

The Craftwell eCraft die-cutting machine, discontinued after we acquired ours during the holiday season of 2012. It's a machine that required some 'finesse', which frustrated many buyers and led to its demise.

 My work was cut out for me. I had to figure out how to write a program that generates the necessary arcs to produce a viable hologram, and output an SVG file that could be imported into the eCraft software. Then, figure out how to fashion a means for the machine itself to actually produce the proper effect on an unknown material that would yield a hologram as the end result. I began exploring my options for a material or medium that I could scribe light-catching grooves into.

My initial thought was foil adhered to cardstock, using the eCraft's built-in pen tool to emboss relatively cylindrical grooves that would catch the light reliably. After some preliminary tests with foil/cardstock and a ballpoint pen, it was clear the arcs could not produce sharp enough 'glints' of light; they were too hazy and blurry to be usable.

The next idea was to use something like Mylar adhered to cardstock. Surely Mylar was reflective enough, and flexible enough, to let these ballpoint-pen-embedded grooves form and produce the specular glints a hologram requires. After trial and error with Elmer's glue, cardstock, and Mylar, it became clear that the moisture in the glue warped the cardstock too much, making it simply unusable. My next idea was to use petroleum jelly (Vaseline, Aquaphor) instead, because I knew it would not absorb into the cardstock and cause it to expand and warp. This *did* produce perfectly flat sheets of cardstock with Mylar adhered to them, but it was a tedious process involving a big greasy mess and a rubber roller to flatten the greasy blob between the Mylar and the cardstock.

It became clear that simply using reflective cardstock was the answer. It's effectively cardstock with Mylar adhered to it at a factory, so it can be expected to be the best possible material for my limited fabrication means. I picked some up, and it seems to be working out as well as one could expect. It took some trial and error to engineer a tool (a finishing nail I filed down using my late father's tools) capable of embedding grooves into the reflective surface that catch the light at a shallow enough angle to be a viable means of creating a holographic product we can market to the average consumer.

The only requirement for viewing these holograms is that they are positioned below a light source that's not too far in front of the hologram. Brand's work on Specular Holography allows one to compute an optical surface that depicts a given set of 3D points for a given illumination altitude relative to the hologram. One can calculate the best possible configuration/foliation of the hologram's optical surface for various illumination altitudes, where the light source is progressively more in front of and less above the hologram. If you take a hologram calculated for an illumination point almost directly above it, the 3D depth effect is magnified as you move the light source further and further in front of the hologram: the grooves catch the light at steeper and steeper angles, and the glints travel across the curve of the grooves more and more as your perspective changes. Eventually this turns the hologram into a smear of light, because the specular glints stretch out as every position along the groove approaches an angle that can reflect light to the viewer.

It is my belief that with a hologram designed to operate ideally at a 22.5-degree illumination angle (the light source halfway between a 45-degree angle and directly above the hologram), one could reliably decorate a wide variety of rooms with an overhead light source and have the holographic effect work as well as can be expected. This is what I believe to be a marketable hologram, because it will work in the widest variety of situations the holograms will probably wind up in.
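The underlying mirror condition is easy to sketch. In the vertical plane through the light, the groove, and the viewer, a facet glints when its surface normal bisects the direction to the light and the direction to the viewer. This toy 2D model (my own simplification, with both light and viewer assumed on the same side of the plate) shows how lowering the light forces steeper facet tilts, which is exactly the smearing effect described above:

```python
def facet_tilt_deg(light_alt_deg, viewer_alt_deg=90.0):
    """Mirror condition in 2D: a groove facet glints when its normal
    bisects the light and view directions.  Altitudes are measured above
    the plate; the returned value is the facet's tilt from horizontal."""
    return (180.0 - light_alt_deg - viewer_alt_deg) / 2.0

# Viewer overhead, light overhead: flat facets suffice (tilt 0).
# Viewer overhead, light at 45 degrees altitude: facets must tilt 22.5 deg.
# Drop the light to 30 degrees and the required tilt steepens to 30 deg.
```

The lower the light sits in front of the plate, the steeper the facets that catch it, and the more a viewer's movement sweeps the glint along the groove.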

 Once I figured out a viable material for creating the holograms on, the next step was actually writing a program that could properly calculate tool paths for the hologram grooves, output them as an SVG file which could then be loaded into the eCraft software, and then actually produce the hologram on the reflective cardstock. Somehow I opted for the name 'Holocraft', combining 'hologram' and 'eCraft'.

I spent some time surfing 3D model websites and reading up on various model formats. I opted for the STL ('stereolithography') file format, which happens to be one of the primary formats used to store and convey 3D-printer models. The websites that serve as platforms for communities of 3D-printer enthusiasts are chock-full of STL models anyone can download and print on their own 3D printer. At the same time, the STL format itself is extremely simple to load and parse.
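That simplicity is easy to demonstrate. A binary STL is just an 80-byte header, a 32-bit triangle count, and a flat run of triangle records; a minimal loader (a sketch of my own, not Holocraft's actual code) fits in a dozen lines:

```python
import struct

def load_binary_stl(path):
    """Parse a binary STL: 80-byte header, uint32 triangle count, then per
    triangle a normal plus three vertices (12 floats) and a 2-byte
    attribute word.  Returns a list of (v0, v1, v2) vertex triples."""
    tris = []
    with open(path, "rb") as f:
        f.read(80)                                   # header, ignored
        (count,) = struct.unpack("<I", f.read(4))
        for _ in range(count):
            data = struct.unpack("<12f", f.read(48))
            f.read(2)                                # attribute byte count
            tris.append((data[3:6], data[6:9], data[9:12]))  # skip normal
    return tris
```

(ASCII STL files exist too and would need a separate text parser; the binary flavor is what most repositories serve.)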

 After a few days I had something usable. Initially I was working with circular arcs, to get everything up and running, and produce something interesting. I had to figure out the best size and width/height ratio for the arcs, and also learned how to use the arc path command in SVG so that I wouldn't have to output individual points for the arcs, but could instead utilize the built-in capabilities of the SVG vector image format. This would effectively allow the eCraft software to determine the best possible set of points to re-create the arcs itself, and keep the SVG files being output as small as possible.
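For illustration, emitting an arc as a single SVG path command looks roughly like this (a Python sketch of my own, not Holocraft's actual output code); one compact `A` command replaces what would otherwise be dozens of plotted points per groove:

```python
import math

def arc_to_svg_path(cx, cy, r, start_deg, end_deg):
    """Build an SVG path string using the elliptical-arc command ('A'),
    so each groove is stored as one command instead of many points."""
    x0 = cx + r * math.cos(math.radians(start_deg))
    y0 = cy + r * math.sin(math.radians(start_deg))
    x1 = cx + r * math.cos(math.radians(end_deg))
    y1 = cy + r * math.sin(math.radians(end_deg))
    large = 1 if abs(end_deg - start_deg) > 180 else 0  # large-arc flag
    sweep = 1 if end_deg > start_deg else 0             # sweep direction
    return (f"M {x0:.3f} {y0:.3f} "
            f"A {r:.3f} {r:.3f} 0 {large} {sweep} {x1:.3f} {y1:.3f}")
```

The importing software then subdivides the arc itself, which keeps the SVG file small no matter how finely the machine ends up tracing it.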

Holocraft, in its earliest form, before it was actually generating correct toolpaths.

After things began shaping up, I reworked the arc-path code to create the proper hyperbolic toolpaths described in Matthew Brand's whitepaper on Specular Holography. This let me produce holograms that do not distort and warp when viewed from beyond a position directly in front of the hologram. It also meant my arc-path SVG output was no longer viable. Initially it appeared I would have to start outputting giant SVG files with many points plotted along each curve, but after some more reading about the SVG format, it turned out I could use its cubic Bezier curve functionality to keep the output small. It then became a matter of calculating the starting and ending control points that shape each Bezier curve from a set of points lying on each hyperbola. This took a day or two to fully figure out and write into Holocraft.
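The control-point calculation boils down to a standard Hermite-to-cubic-Bezier conversion: given a segment's endpoints and the hyperbola's tangent vectors there, each inner control point sits one third of the tangent away from its endpoint. A sketch (my own formulation of the standard identity; Holocraft's code may differ):

```python
def bezier_controls(p0, p3, t0, t3):
    """Convert Hermite data (endpoints p0, p3 and endpoint tangents t0, t3,
    scaled to the segment's parameter span) into the two inner control
    points of an equivalent cubic Bezier: P1 = P0 + T0/3, P2 = P3 - T3/3."""
    p1 = (p0[0] + t0[0] / 3.0, p0[1] + t0[1] / 3.0)
    p2 = (p3[0] - t3[0] / 3.0, p3[1] - t3[1] / 3.0)
    return p1, p2
```

With endpoints and tangents sampled directly from the hyperbola equation, each groove segment becomes a single `C` command in the SVG path.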

At this point in time I am still working on Holocraft; it is unfinished. I have added a few other features. One in particular lets the user select from several modes of generating the points used to construct the hologram, instead of being limited to a model's vertices or a selection of points on polygon edges. Using procedural texturing techniques, I generate a variety of patterns and textures of points on the surface of the models. This allows a wider variety of models to be used: instead of being limited to simple low-polygon models, larger and more complex models can now be used, with a generated surface pattern simplifying their appearance so as to prevent too many overlapping grooves, which only corrupt each other's ability to reflect light.
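The simplest building block for any of these surface-sampling modes is drawing points uniformly on a triangle via folded barycentric coordinates (an illustrative stand-in of my own, not Holocraft's actual sampling code):

```python
import random

def sample_point_on_triangle(v0, v1, v2, rng=random):
    """Uniformly sample a point on a 3D triangle: draw two random weights
    and fold the unit square onto the triangle so the density stays even."""
    r1, r2 = rng.random(), rng.random()
    if r1 + r2 > 1.0:                    # reflect samples from the far half
        r1, r2 = 1.0 - r1, 1.0 - r2
    return tuple(v0[i] + r1 * (v1[i] - v0[i]) + r2 * (v2[i] - v0[i])
                 for i in range(3))
```

Running this over every triangle (weighted by area) gives an even dusting of candidate points that a pattern function can then thin out or cluster.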

I am also in the middle of finishing up some occlusion-culling code that allows the model to obstruct itself, 'chopping' the grooves into smaller and/or shorter segments so that a point of light can appear to disappear behind foreground parts of the model as the viewer's perspective changes.
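The core test behind that culling is a ray/triangle intersection: cast a ray from a groove's 3D point toward the viewer, and if it passes through any of the model's triangles, that stretch of groove is occluded and gets chopped. A sketch using the classic Moller-Trumbore algorithm (my choice of algorithm; Holocraft's implementation may differ):

```python
def ray_hits_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test: True when the ray from `orig`
    along `dirn` passes through triangle (v0, v1, v2) in front of orig."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(dirn, e2)
    a = dot(e1, h)
    if abs(a) < eps:
        return False                      # ray parallel to triangle plane
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return False                      # outside first barycentric bound
    q = cross(s, e1)
    v = f * dot(dirn, q)
    if v < 0.0 or u + v > 1.0:
        return False                      # outside the triangle
    return f * dot(e2, q) > eps           # hit must lie in front of orig
```

Testing each groove sample against the model's triangles (skipping the triangle the point itself lies on) yields the visible sub-segments.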

As great as all of this sounds, the reflective cardstock isn't the best medium, nor is the eCraft the best machine for producing a hologram. The cardstock can only support so many grooves before the hologram turns into a mess that no longer catches the light properly. It is also prone to warping with enough grooves, and even to shredding the reflective material if the tool passes over the same spot too many times. Putting the finished holograms into a heat press has proven somewhat effective at flattening out the warping, but it's not one hundred percent effective. This fabrication method requires some finessing in Holocraft, to squeeze as many grooves into the hologram as possible without them hindering one another by overlapping too much, which is extremely easy to do.

 Ideally I'd like to offer up the reflective cardstock holograms as a product in our online store. The best possible way to use them is to frame them in a way that keeps them flattened as much as possible, and hang them or set them where there is a light shining on them from above. The reality is, though, that better holograms are to be had. By fabricating holograms into metal, using a CNC machine, many more grooves can be embedded to create a more vivid and detailed hologram, with a greater specularity than the reflective cardstock can produce, making holograms brighter and sharper. This can be seen looking at images of Brand's holograms. They are simply beautiful.

That is why I will be launching a crowdfunding campaign, either to fund a simple low-end CNC setup, or to fund a CNC retrofit kit for my late father's manual Bridgeport milling machine, which is set up and ready to go but which nobody is using for anything at all. I have a nagging sense that I owe it to him to put his old equipment to use as much as I can, in his spirit and in celebration of his life and who he was. A part of me wishes I had discovered this project while he was still alive and well; it could have been one of those father-son projects that will always seem too few and far between.

William Beaty's Hand Drawn Holograms Page -
Scratch Holography Software -
Abrasion Hologram Printer Video -
Light and Illusion by Matt Brand -
M.Brand's Specular Holography Paper -


BITPHORIA, The Game Itself

It occurred to me that most of what I write about on this blog has been the technical side of my thoughts and ideas while working on my game BITPHORIA. I haven't really been posting much in the way of actual progress on the game itself.

I thought I'd take a moment to share what is going on with BITPHORIA. As of now, by my estimation, the game engine is 80% done, and the game itself is roughly 15% completed. I am currently on the cusp of moving from working in the engine to working in the default game scripts.

I have spent a long while tweaking the visuals over the months, and trying out different little tricks, in an attempt to refine the overall appearance of the game into something that is stimulating and attention-grabbing. My entire philosophy on anything is to make it so visually appealing that anybody who sees a screenshot will automatically find themselves looking for a video, and anybody who sees a video will want to play the game.

If BITPHORIA doesn't captivate, visually, through a screenshot, then something needs to change. I don't want to make a product that isn't good enough to sell itself. 

I think that these screenshots portray the overall aesthetic and graphical design of what the final game will consist of. You'll probably notice the low frame-rates that my netbook achieves. It's playable on here, but you will want something with a half-way decent GPU to perform the raymarching on the 3D texture materials. There will be options to reduce the demand on the GPU so that it will be smoother for players with budget/older hardware.

The scripting system is mostly in place. There are a handful of features that I aim to implement to expand functionality further, but the majority of all the commands for scripting each system are present and operational. There is still a lot of validation and verbose warning/error stuff that I need to go back in and write in there, to help aspiring mod developers along.

Documenting the sets of commands for each system is another task that is needed. I am not sure about when this will happen, because I intend to do most of the scripting for the game myself, and so it isn't something I have a need for until the game is released. Until it's released, I am really the only person who will be using it, and I'd like to finish BITPHORIA as soon as possible.

Netcode is operational. Players can start a server and it can be joined from another machine, on a LAN, or over the internet. There is no server-browsing in the menu yet, but that is on the todo list, which goes along with other menu UI features I'd like to add in for various things. One in particular is a sort of holographic preview of the world-volume that would be generated from the current seed value. As a server admin adjusts a slider for the seed value it updates the preview of the world so the user can get an idea for the type of layout that their game will offer other players who join in.

I have a good number of sound effects in there already that I produced on my own, some of which have yet to find a use. There are 23 different sound effects, and only about half of them have found a place in the current scripts. I expect I will use up the leftovers and still need to make more before the sound design is done.

I have also produced several 2-minute looping music tracks, that suit the general low-fi 8-bit aesthetic that underlies the graphical style of BITPHORIA. I'm not sure if all of them will make the final cut, and I'm not sure if server admins who start a game will be able to choose one themselves or if one will be randomly chosen based on the seed value for their game.

Because of the way the scripting works, where a set of scripts defining one game 'mode' is kept in a folder with the name of that game mode, users will be able to duplicate a game script folder and use it as a base for their own modded game mode. The scripts are relatively simple script files, and all that is needed is the documentation for the various commands that each scripted system utilizes. Anybody will be able to edit their script files to customize their game modes, or just create their own new one from scratch.

When someone has produced a BITPHORIA mod, there is no need for other people to manually download and install anything to their BITPHORIA installation. You simply see which game modes servers are playing in the server browser, and the mod downloads and runs automatically when you join. In fact, no local scripts are loaded when you join a server; you only execute whatever scripts the server executes. This allows complete modding freedom: anybody with BITPHORIA can play your mod, instantly.

If someone wanted to run their own server using a specific mod, they would need to manually download and install the mod scripts. This could change, I may set it up to allow servers to have the option to 'allow mod copying' for clients, at which point the server would let clients download the actual script text files and save them to their game for later use.

All of this is working; the game is currently playable as a simple little deathmatch game. The UI is vastly incomplete: there are no options yet for setting up a game or joining one. The menu system is started, but not 'fleshed out'. It is merely a framework with some minor functionality for traversing menu hierarchies using buttons. There are also nice little editboxes for editing configuration strings :)

I started the code base for BITPHORIA exactly a year ago today. It has just over 16k lines of actual code (not counting comments or whitespace). I have never written 16k lines for one project in my life, nor have I ever worked for a year straight on one project. I have high hopes for BITPHORIA, not as something that will make me rich and famous (although one can hope), but as something that the gaming industry takes notice of. I figure, at the very least, it will serve as a good portfolio piece if I ever break down and decide to get a job working for someone else (ughh).

I feel I have something valuable to contribute to gaming, as a whole, as well as anybody who aspires to make games or learn programming. I just want to be as valuable a resource as I possibly can, whether that means as a provider of fun and interesting games, or creative inspiration.

I hope people find my ideas as intriguing and enjoyable as I do making them come to life.


Forays Into Entropy Coding

One of the many minutiae that concerns me is bandwidth consumption. The fact of the matter is that the internet is not a particularly forgiving means of conveying data from one place to another. It is merely *the* means for conveying data. It is what we have to work with, and everyone has a different connection.

Some poor souls are forced to use dial-up, way out there, in the middle of nowhere, and others are privileged with fiber optic connections (we could use a visit, Google). In the middle are the broadband users, with varying capability, via DSL or cable.

A 'D-' for my perfectly usable connection. It's only near-failing
if the application in question is failing the user.

You can see here that my home cable connection has a bandwidth of roughly half a megabyte per second downstream and 100 KB/sec upstream. Nobody reads/writes/sends/receives anything in bits (except for programmers), so I like to look at these things in terms of bytes, because they are infinitely more relevant to me (and you). You can see that my connection's score is a 'D-'. I could see that if my priorities involved watching 1080p video. Instead, I'd give my connection a 'B+' because it is something I almost never have to think about; it is plenty fast for my needs. I'd give it an 'A' if it weren't for the random outage that occurs once every few months for an hour or two.

The reality is that it's not the connection that matters; it's how the application uses the connection, and what the end-user's experience is. It makes no difference how I obtain the experience, via 56k dial-up or 1-Gbps fiber, as long as the experience is 'A' worthy. Even the newest consumer GPUs are brought to a crawl by games made by those who have no idea what they are doing. This doesn't mean the GPU isn't up to snuff; it means the game designer is doing gamers a disservice by not taking a realistic idea of common hardware configurations into consideration, especially if they took gamers' money for it.

My strategy with BITPHORIA is to make something new, and interesting, that takes advantage of newer hardware capabilities to perform novel rendering, without requiring the most up-to-date setup. Being a multiplayer game, this applies to a player's internet connection as well.

If I can support the vast majority of the existing configurations out there, then that maximizes the potential player base, which translates to customers. Primarily, though, I don't want to leave anybody out. I want the high-end gaming rig players to be happy with their investment, and I also want the newbies on netbooks to be able to enjoy a rousing session of BITPHORIA.

I don't want people to be forced to play on large fiber-connected servers. I want a newbie with a netbook on a wifi connection to be able to host a game server that can support at least a few players. Even a 'low-end' broadband connection like my own has only about 100 KB/sec of upstream, which could easily be saturated if I were to host a server running any popular FPS game with 8 players. To make this possible, there must be a minimal amount of data traversing the network connections between the server and player clients.
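To put rough numbers on that saturation point (the player count and update rate here are illustrative assumptions, not BITPHORIA's actual figures):

```python
# Rough per-client bandwidth budget for a home-hosted game server.
UPSTREAM_BYTES_PER_SEC = 100 * 1024  # ~100 KB/sec upstream, like my connection
PLAYERS = 8                          # assumed player count
UPDATES_PER_SEC = 20                 # assumed server update rate

per_client = UPSTREAM_BYTES_PER_SEC / PLAYERS  # bytes/sec available per client
per_packet = per_client / UPDATES_PER_SEC      # budget for each update packet

print(f"{per_client:.0f} bytes/sec per client, {per_packet:.0f} bytes per update")
```

Every entity, event, and positional update has to fit into that ~640-byte-per-packet budget, which is why every byte saved matters.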

Naturally there are several strategies for minimizing bandwidth usage when conveying a game state across a network connection. Quantizing, or 'bit-packing', various data based on its type and behavior is one extremely important method. Typically, values for angles/orientation/etc. are represented and dealt with as 32-bit floating-point values (or double-precision, if your application demands it), and often only a small portion of their representable range is actually used. For instance, in a game, you may have objects with velocities that never exceed a certain speed. This knowledge can be used to effectively strip the unused bits from an object's velocity information.
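As a sketch of that idea (the range limit and bit count are made-up numbers): if a velocity component is known to stay within ±32 units, it can be quantized down to, say, 12 bits instead of a full 32-bit float.

```python
def quantize(v, lo=-32.0, hi=32.0, bits=12):
    """Map a float in a known range onto an integer that fits in `bits` bits."""
    levels = (1 << bits) - 1
    v = max(lo, min(hi, v))  # clamp into the known range
    return round((v - lo) / (hi - lo) * levels)

def dequantize(q, lo=-32.0, hi=32.0, bits=12):
    """Recover an approximation of the original float."""
    levels = (1 << bits) - 1
    return lo + q / levels * (hi - lo)

q = quantize(3.75)
restored = dequantize(q)
# 12 bits over a 64-unit range gives a step size of ~0.016 units,
# at roughly a third the size of the raw 32-bit float.
```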

Another strategy is avoiding redundant data: sending certain properties only when they change, instead of re-sending the same information over and over. This applies to things like an object's position and orientation/angles in the game. If the object is stationary, there is no need to keep sending that information about it.

Another issue that comes up is the game's network update rate. The update rate, in most client/server games, must be as high as possible without putting too much strain on the server or client connections. With lower update rates the game can begin to feel a little sloppy, especially to gamers who have acquired a fine sense for such things. I've seen game servers with their update rates so high that some player connections couldn't keep up. This is just plain unacceptable. Some games keep their update rates really low because they are sending too much data per-update to be able to have it any higher without making the game unplayable for slower connections.

Keeping a low update rate is another possible strategy, and needs fine tuning alongside other important aspects of the networking that handles interpolation and extrapolation, and maintaining the game simulation's fidelity.

Compressing the network data on its way in/out, before actually sending it, is the strategy I am currently working to employ in BITPHORIA. My initial plan was to follow suit with Quake3's use of static Huffman encoding, which boils down to a simple method of re-assigning each byte value a new binary code, where more frequently appearing values are represented using smaller bit codes, and less frequent values use larger bit codes. This is a form of entropy coding.
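A minimal illustration of that byte-value-to-bit-code reassignment, building the code table from symbol frequencies (the frequencies here are invented):

```python
import heapq

def huffman_codes(freqs):
    """Build a prefix code where frequent symbols get shorter bit strings."""
    # Heap entries: (subtree frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)  # the two least-frequent subtrees...
        fb, i, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}  # ...merge under a new node
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (fa + fb, i, merged))
    return heap[0][2]

codes = huffman_codes({"A": 45, "B": 13, "C": 12, "D": 30})
# The common "A" gets a 1-bit code; the rare "B" and "C" get 3-bit codes.
```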

With entropy encoding it's all about exploiting the known likelihood that a byte will be a certain value. This is orthogonal to dictionary encoders, which operate on exploiting the fact that there are usually repeating-patterns in data. Entropy encoding doesn't care where the values are in the stream, they can be clumped together next to like-values, or spread evenly, and the output will be the same size as long as there are the same number of each possible value. Dictionary encoders typically produce a much better compression than entropy encoders, but are much slower. They also do not operate well on small pieces of data, and are better equipped to compress large data.

Generating Huffman codes for symbols a1..a4 using their probability to
build a tree from which the codes are derived (ie: 0 = left child, 1 = right child).

There are two major families of entropy encoding algorithms: Huffman coding, and arithmetic/range coding. The deal here is that Huffman can be reduced to, as I mentioned above, a simple exchange of byte values for bit codes to be output. This works well as a simple array/table lookup in code. Arithmetic/range coding lends itself to a better compression ratio, because the generated bit codes more closely match the probabilities of each possible value, and therefore the output is closer to the actual informational content of a piece of data. The catch is that arithmetic/range encoding is more CPU intensive.

Range coding represents data as a single value generated
by recursively narrowing down each symbol in its "range".

Now, to be honest, I could probably get away with simply using either, and nobody would know the difference. This is where my neurosis comes into play. If I can do better, I will do better. So after some research I saw potential in the idea of using arithmetic coding, specifically range encoding, which is the integer-based version of arithmetic coding.

After a day I came up with my very own entropy encoding, which was essentially a bastardized hybrid of Huffman and range encoding. Without an academic background in math, I was simply fumbling around, hoping to stumble across a discovery. The goal was to combine the speed of Huffman encoding with the higher precision of range encoding. The end result, dubbed "binary-search encoding", has roughly the speed of Huffman, with neither the compression ratio of Huffman nor that of range encoding. So that was basically a failure. I was able to compress a 512-kilobyte sample of BITPHORIA's networking data down to 405 KB, a compression ratio of ~1.26, whereas a simple Huffman encoder can get the same data down to 341 KB, a ratio of ~1.5. My binary-search encoding was not gonna fly, at least not in this situation.

Arithmetic coding does the same as, or better than, Huffman, because Huffman is essentially a special case of arithmetic coding that is only optimal when value probabilities happen to be powers of one-half. Since Huffman must assign a whole number of bits to each value, it cannot achieve an encoding that is closer to the actual informational content of a piece of data.
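The gap is easy to see numerically. Shannon entropy gives the true informational content per symbol, while Huffman is stuck assigning whole bits (the probabilities below are invented for illustration):

```python
import math

probs = {"A": 0.7, "B": 0.15, "C": 0.15}

# The true informational content, in bits per symbol.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Huffman's best possible code lengths for this distribution: A=1, B=2, C=2 bits.
huffman_avg = 0.7 * 1 + 0.15 * 2 + 0.15 * 2

print(f"entropy {entropy:.3f} bits/symbol vs Huffman {huffman_avg:.3f} bits/symbol")
```

Arithmetic/range/ANS coders can approach the ~1.18-bit entropy figure; Huffman pays the rounding-to-whole-bits penalty and averages 1.3 bits here.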

During my research to better understand range encoding, and why it works as well as it does, I was hoping to incorporate its principles into my little algorithm to get better compression than Huffman, even if it wasn't as good as true range encoding. This is when I stumbled across Asymmetric Numeral Systems and Finite State Entropy: a newly developed algorithm, even more recently made to be as fast as Huffman encoding, with the compression of range/arithmetic encoding.

ANS captures the raw essence of arithmetic coding, without the convoluted means of obtaining such an encoding. At the end of the day the system breaks encoding and decoding down into a table of bit codes for each possible byte value, just like an optimized Huffman implementation does. The end result, though, is a better choice of bit codes for byte values, achieved by maintaining an internal 'state': encoding a symbol into some bits yields a new state for the next one.

My initial attempt that utilized a binary search was flawed in that it had to start from scratch with each symbol to be encoded. There was no internal state being maintained, and each symbol was treated as a lone, isolated incident without any context. ANS maintains this context, which allows it to use fewer bits for encoding/decoding.
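To make the state idea concrete, here is a toy rANS coder (the simplest ANS variant) that folds a whole message into one big integer state. This is my own illustrative sketch, using Python's arbitrary-precision integers so no renormalization is needed; real implementations like FSE keep the state in a machine word and stream bits out.

```python
def build_cumulative(freqs):
    """Cumulative frequency table: each symbol's starting slot, plus the total."""
    cum, c = {}, 0
    for s, f in sorted(freqs.items()):
        cum[s] = c
        c += f
    return cum, c

def rans_encode(symbols, freqs):
    cum, total = build_cumulative(freqs)
    x = 0  # the single integer 'state' the whole message folds into
    for s in reversed(symbols):  # encode in reverse so decoding runs forward
        x = (x // freqs[s]) * total + cum[s] + (x % freqs[s])
    return x

def rans_decode(x, count, freqs):
    cum, total = build_cumulative(freqs)
    out = []
    for _ in range(count):
        slot = x % total  # which symbol's sub-range does the state fall in?
        s = next(s for s in freqs if cum[s] <= slot < cum[s] + freqs[s])
        x = freqs[s] * (x // total) + slot - cum[s]  # pop the symbol off the state
        out.append(s)
    return "".join(out)

freqs = {"A": 3, "B": 1}  # 'A' is three times as likely as 'B'
msg = "ABAABA"
assert rans_decode(rans_encode(msg, freqs), len(msg), freqs) == msg
```

Likely symbols grow the state less per step, so they cost fewer bits: the arithmetic-coding behavior without the interval bookkeeping.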

If you enjoy compression and information theory, please explore these links!


RealTime Data Compression - Finite State Entropy -
Asymmetric Numeral System -
Simple and fast Huffman Coding -


Game Logic Scripting and Networking

I've been very distracted from working on my project, and this blog, since the holiday season. Various circumstances are resolving themselves, finally, and work will resume. I've also been somewhat stalled out trying to wrap my brain around the topic of this post, and thought it wise to take something of a break from wracking my brain in pursuit of the 'ultimate solution'.

One of the important features of the engine is that it should be easily moddable. My goal is to not only produce a game for people to play, but also a game they can manipulate and customize to further derive enjoyment from. This is also something that I feel affords me maximum engine re-usability insofar as creating and releasing another game is concerned. I have a serious aversion to hard-coding game-specific behavior and logic, because it always gets tangled up in the rest of the engine code, making it a mess to change certain aspects of the engine when trying to build a new game out of it.

The top priority is allowing the people who host game servers online to customize the game in any way they like, without players being required to manually download and install anything just to play. Server admins should be able to customize everything about the game that people experience when they join. Players should be able to see all game servers running on the same engine, and choose between the different games/mods that each server is running. Since virtually all of the assets and resources used for generating the game experience are scripted procedurally, clients quickly download these procedures and 'rules' upon connecting, and the entire game experience they encounter is dictated by the scripted configuration of the game on the server.

Games that are almost entirely hard-coded into the engine usually feature customization of the constant values for things like weapon damage amounts, and other little nuanced values like this, but the behavior of the game itself is otherwise 'stuck' the way that it is. Typically they have some sort of text file where the configuration exists, delineating variables and their values for controlling physics and game behaviors. This is simple enough, and plenty sufficient for smaller projects of a less serious nature.

Most games utilize some form of a scripting language to accomplish the de-coupling of game logic from the game engine itself. There are others which simply incorporate the use of an external compiled binary, e.g. a DLL file. Having a background in reverse engineering and 'hacking' games, I can say that using a DLL is probably the most insecure thing a programmer could do. Operating systems are equipped with all sorts of debugging APIs and features that enable hackers to have a field day with such games.

Another top priority alongside game customization is the quality of multiplayer networking and the resultant online gameplay. It's pretty standard now to just design a server/client model using all the usual tricks that have been around for the past decade to mitigate internet latency and packet loss, smoothing out the local appearance of gameplay that is actually being simulated on a remote host machine. Everything you see on the screen is a virtual lie, and the typical bag of tricks is designed with the intent to please the player with promises that can't always be kept.

It is my opinion that the existing techniques are sub-par and that it is time we began to explore other options and come up with new ideas. For my project I am turning conventional networking strategy on its head. In my networking model the client has the same authority over the game state as the server and the other clients. The server merely maintains the game rules and authority over who can be connected and participating in the game. It also serves to route the game state between clients as it evolves. No single machine retains the absolute state of the game, and all machines participate equally in the progress and simulation of the game state as it unfolds.

To make all of this possible, combining a sort of peer-based game state simulation along with client/server networking model, as well as keeping the system for user-made mods in a manageable and user-friendly state, I have opted to use a console-scripted system that is made up of a handful of smaller 'systems' of commands. Everything in the engine is scripted using sets of commands in this fashion.

There are three components to this setup. At the core we have entity 'types', which are a set of parameters that define a specific entity. Properties that don't change about a type of game object are represented as a 'type'. Things like a model, conditional logic, physics behaviors, etc. Properties that are consistent across all instances of an entity type are thus considered aspects of that type.

Secondly, we have entity 'functions'. These are small sets of 'operations' to perform on a given entity: things like playing sounds, spawning particles or entities, inflicting damage, etc. These functions are referenced by an entity type's conditional logic definitions. Conditional logic is hard-coded into the engine; there is only a fixed set of conditions which the engine detects about an entity and, in turn, executes any logic defined for those conditions in the entity's type. Conditions such as when an entity touches the world or another entity, or gets damaged or killed, for example.

Functions can perform a number of operations, but they cannot change anything about the entity type they are executing upon. However, if something is to change about a specific entity, it can simply be switched to a new type with different static properties. If a player is supposed to go from walking physics to ragdoll physics, simply change the player entity's type from "player" to "deadplayer", where the physics settings differ accordingly.
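A sketch of that type-swap idea (the type names and fields are invented for illustration):

```python
# Static, never-changing properties live in one shared table of types...
ENTITY_TYPES = {
    "player":     {"model": "player", "physics": "walk"},
    "deadplayer": {"model": "player", "physics": "ragdoll"},
}

class Entity:
    def __init__(self, type_name):
        self.type = type_name  # ...and each instance only stores which type it is

    @property
    def props(self):
        return ENTITY_TYPES[self.type]

e = Entity("player")
assert e.props["physics"] == "walk"
e.type = "deadplayer"  # 'killing' the player is one tiny, cheap change
assert e.props["physics"] == "ragdoll"
```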

This makes the process and/or job of designing entities pretty simple and painless. They can be edited in notepad, and reloaded in a snap. It becomes easy to create variations of the default game.

It also simplifies the networking model. Most games network their game state by 'delta-compressing' entity updates: comparing an older state to the present state to determine which aspects or properties changed and need to be transmitted. This allows developers to define any number of entity changes over the evolution of the game state, and have everything carry from one machine to the other over a network connection.
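In sketch form, delta compression is just a field-by-field diff against the last state the other side acknowledged (the field names are invented):

```python
def delta(acked, current):
    """Return only the fields that changed since the last acknowledged state."""
    return {k: v for k, v in current.items() if acked.get(k) != v}

acked   = {"pos": (0, 0, 0), "health": 100, "angle": 90}
current = {"pos": (1, 0, 0), "health": 100, "angle": 90}

update = delta(acked, current)  # only {'pos': (1, 0, 0)} goes on the wire
assert {**acked, **update} == current  # receiver rebuilds the full state
```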

My implementation boils this same design down around the fact that, much of the time, there are many entities whose properties never change over the course of their existence. These properties can all be lumped together and conveyed in a minimal number of packet bytes by simply indicating which entity should be which type, when that entity becomes that type.

The actual networking system continuously relays 'events' to the other side. The game logic, in the form of entity functions, is responsible for invoking events which have a networked component to them. Events like particles, sounds, etc. are all 'networkable events', in that they should be seen by other participants in the game. These are queued up to be transmitted in the next outgoing update. An entity's type being set is an example of an event that is serialized and queued up for network transmission.

Not all entity function operations have a networked component. Some things are meant to only occur on the local machine, and even networked operations will stay local if the entity type is defined as being local-only (eg: client-side detail entities). If an entity changes into something completely different, and everything about it changes, this is not a large update. The local machine simply indicates which type the entity is now.

Along with the events queue is a prioritized list of entity positions, velocities, angles, etc. All the location information about an entity gets tacked onto the update after all the events. Positional updates are 'optional', in that they don't always need to reach the other side the way events are supposed to. Entity positions are prioritized by the entity's proximity to the client's player entity. Entities within a certain distance of the player have their positional information included with every update being sent. Once entities are farther out, the number of updates per second that include them tapers down to a bare minimum.
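The falloff might look something like this (the distances and rates are made-up numbers, not BITPHORIA's actual tuning):

```python
def updates_per_second(distance, near=64.0, full_rate=20, min_rate=2):
    """Entities close to the player get every update; far ones taper off."""
    rate, d = full_rate, distance
    # Halve the rate for each doubling of distance beyond the 'near' radius.
    while d > near and rate > min_rate:
        rate //= 2
        d /= 2
    return max(rate, min_rate)

# Nearby entities ride along in every update; distant ones trickle in.
assert updates_per_second(30) == 20
assert updates_per_second(2000) == 2
```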

This is a screenshot of the ill-fated Revolude game, circa 2010.

Now, one idea I had back in the Revolude days was to perform a similar network conveyance of entity properties and logic by sending game-logic function indices to clients, telling them what functions to execute to bring an entity's state "up to speed". This made sense in my head, but in practice there was an issue with functions overlapping or overwriting each other's changes.

The solution was to divide up the game logic into a server-logic and client-logic. Sometimes the two had the same pieces of code but used it in different ways. The server's job was to control the actual state of the game, and direct how clients should be simulating their end, which entities are where, and what functions they are executing.

It never fully worked out. The Revolude build I still have is riddled with networking bugs. A poorly thought-out event networking system wasn't ensuring all events made it across, in order. Objects can be seen turning topsy-turvy, appearing and disappearing, or never existing (but leaving evidence that they did). It's a nightmare I am happy to never return to.

The networking in BITPHORIA is awesome, though. I am very happy with how the game is coming along.

BITPHORIA, in its current form.