15 Aug 2012


I'm currently playing around with some ideas for a new rendering pipeline in Nebula3, strictly experimental stuff, based on the experience so far with Drakensang Online. I'll call it Twiggy because it's so damn slim :)

The driving force is that my focus has moved away from consoles towards more "interesting" and especially more open platforms in the past 2 years, namely the web, mobile, and desktop platforms with some sort of built-in app-shop (or good old Steam for that matter).

The way-of-thought towards the "grand vision" is basically:
  •  Selling software in stores on clunky silver disks is sooo last century...
  • Downloading and installing a complete game for half an hour or more before being able to play is not much better (I tend to forget about half of the demos I start downloading on my 360, until I delete them later because I need to make room for new demos, which I will never play either...)
  • Regardless of platform (web, mobile, or app-shops on desktop platforms) the experience should be true "click and play": click a button and start playing a few seconds later, fast enough that the player isn't tempted to start another activity in the meantime.
  • This means a small initial download of a few megabytes (5..20 MB or so), and from then on it's all about:
How much new data can we present to the user per second?

Let's be optimistic and assume somewhere between 2 and 12 Mbit/sec for the civilized areas of the world...

This means the asset data which is streamed into the game must still be small. It's no longer about how much data fits on a DVD or a Bluray; we can store as much data as we want on web servers. It's all about how fast the user eats through the data while playing, and how that compares to the available bandwidth. That's why data compression, sharing and reuse are so important today, and why the size of a Bluray disc, which Sony hyped so much a few years back, has become irrelevant.

It's a good thing if the game world is built from many small, similar building blocks which can be recombined as much as possible (of course without having an obviously repetitive game world). This is something which works very, very well in Drakensang Online.

With all those thoughts and the experience from the current rendering architecture (what works well and what doesn't), here are some starting points of what Twiggy will be:
  • It will initially be built on OpenGL instead of D3D, since the majority of target platforms have some flavour of OpenGL as their rendering API.
  • The feature base-line will be OpenGL ES 2.0
  • The performance base-line will probably be an iPad2
  • BUT: it should scale up with additional features, and especially additional fillrate on more powerful target platforms (up to desktop GPUs)
  • GPU performance is much more important than CPU performance. Some potential target platforms (like Adobe Alchemy 2) will cross-compile C++ to some VM byte code while providing full access to the GPU's power. It's important that a game runs smoothly even with such a limited "software CPU".
  • Optimized for rendering scenes built from an extremely high number of simple 3D objects (that's where the data size savings mainly come from)
  • Ditch the "fat render thread" in favour of a very slim render thread behind a simple push buffer. The fat-render-thread design in Nebula3 works, but it is complex. It needs to run in lock-step with the main-thread, needs to transfer data back to the main-thread, and has a lot of redundant code on the render-thread and main-thread side. That's a bit too much hassle for the advantages this design provides.
  • More orthogonality throughout the render pipeline: for instance hardware-instancing should be usable with less restrictions (currently, only simple, static 3D objects can be hardware-instanced), make skinned characters and "normal" 3D objects more alike and share more features between them (e.g. using the powerful character animation system for everything, not just characters).
The way there is long, since this is a weekend project, and I will happily throw away and rewrite large chunks of code if they don't feel right. The first step will be the new CoreGraphics2 and Resource2 subsystems, which will implement the new slim render thread, plus a resource management system which is more powerful than the current one, yet with much less and simpler code. More on that later.


StiX said...

It's nice to see new posts... even once in a year and a half :D

eRiC Werner said...

Hey Floh! Oh very cool! That all sounds like approaching a new and shiny adventure! :D

One thing that always came to my mind when seeing the Drakensang Online tech was demoscene intros. Ever "downloaded" a 4k or 64k?...

Of course coder art is not always of the greatest taste ;] but some input from that, for instance texture and geometry generators, could reduce asset size even further!
I mean of course the extremes are somehow bad:
* generate all:
tiny size > bad artist possibilities > high precalc time > low download time
* DCC all:
huuuuuge size > good artist possibilities > no precalc time > unlimited download time
On the other hand: downloading might not affect already-fluent rendering, but generating geometry or textures might!

So with this in mind you might be in between the chairs. Which is good!
I also always remember you quoting Carmack: "Data beats cleverness", and I think the truth is once more somewhere in between!
Of course a game like Drakensang Online already looks just ridiculously incredible considering its size! But to be honest: another type of game? Closer to the characters? Another point of view...? The style of the game is quite beneficial to the technical background.

but what could be done?
* Textures: Will artists get rid of Photoshop? Hahaa!! :D.. Maybe on-client texture upscaling, sharpening, or generated overlay textures are possible. One will need pixel textures of course. But maybe combined with something like Substance or SVG too?
* Geometry: Terrain could be more or less generated on the client. As well as on-client mesh-smoothed versions of downloaded polygon soup.
* Animation data: No baked animation streams, but using the original, maybe even compressed keys, plus on-client baking if keeping linear interpolation is a performance issue.
* Asset data: Maybe asset attributes are not the big deal here... but a thing I got to know from 64k intro dev: they only save delta attributes and reuse identical values across all operators / assets.

Like I said: generating everything is not optimal, and it doesn't help game start times either. Maybe there is a way to strike a balance between some content-generation load on the client and data delivery through the net.

But anyway: didn't I read somewhere that your biggest "problem" is the navmesh?

Floh said...

Good points Eric! Lots of food for thought :) One more point: a low-polygon cage mesh plus tessellation plus displacement mapping for smooth level-of-detail transitions, and maybe more data-size reduction. This puts the focus purely on texture compression (since the displacement texture defines the geometry detail). Textures can be compressed more aggressively with a lossy compressor like WebP and converted to DDS on the fly...