16 Dec 2007
I finally reserved a few hours to finish Assassin's Creed. Yawn... all this cheap pseudo-science crap was really getting on my nerves towards the end. It seems that every stupid conspiracy theory ever invented has been stuffed into AC's "story".
The game itself was actually quite a chore during the last few hours, because nothing really unexpected or new happened (yes, the game tries to have some sort of surprise ending, but one can see it coming from a mile away). For the little variety the game-play offers, the game just feels too long, IMHO. Everything is just too stretched out. Overall it's a pretty good game, but now that it's over I feel somewhat... unsatisfied. The fact that the game doesn't seem to have a real ending doesn't help either. Yes, I discovered the credits... but what the hell now?
4 Dec 2007
Mass Effect
Holy shit, Mass Effect is f*cking E-P-I-C. The "acting" and story-telling are better than in any Star Wars movie, and the game-play is (surprisingly) very Deus Ex'ish. The last 2 or 3 hours are simply breathtaking (I clocked in at about 36 hours with most of the side quests). Mass Effect is simply one of the best games ever made. There are some technical glitches. Most notable and annoying is the extreme texture pop-up, especially after loading a saved game and at the beginning of some cutscenes: only low-detail textures are displayed while the high-detail textures are streamed in, and sometimes it takes several seconds until everything looks right. There's also some stuttering during game-play (most likely also loading-related). The graphics quality is a mixed bag: some of the less important NPCs look very bad compared to the extremely detailed and convincing main characters, and most of the environment graphics look last-gen. Interestingly, one location (Ilos) looks extremely good, almost as if it had been done by a different team, or as if the designers had a lot more time reserved for this one location. But all in all these are just tiny flaws and don't stop the game from being an absolute masterpiece.
17 Nov 2007
AC's Political Correctness BS
Assassin's Creed displays something along the lines of "this product is a work of fiction and has been produced by a multiethnic team of various religious beliefs" when it starts up. This is the most hilarious/concerning quote I have ever seen in a video game. They should have written instead: "we are a scared bunch of spineless pussies who are afraid that our studios will be blown up by Christian and/or Muslim fundamentalists". What's coming next, a disclaimer in a World War 2 game that members of various - uh - political beliefs have worked on the team, just to appease the Nazis and Communists? The end is near, I say.
The game is much more controversial where it probably didn't intend to be: game-design and game-play. There are many things I absolutely love, and one or two things I absolutely hate. I haven't played through yet, so I'm not qualified to write a "proper review", but I think I can already say that it is definitely a must-have game. It does many things differently than the mainstream, and that's already enough reason to buy the game, even if it sometimes tries to be too innovative and too avant-garde, which may put off some players. We'll have to see how it does commercially. I hope it does well, to demonstrate to publishers that innovation may actually pay off. As a gamer I'm usually more conservative though: a game should do at most one or two completely new things, and use established mechanics and even clichés for all the other elements, otherwise I'm not really feeling comfy and at home, especially at the beginning of a game. On the other hand, some games have to be the forerunners and crash-test-dummies for new game-play mechanics. Otherwise we'd still be stuck with Pong.
15 Nov 2007
Nebula3 November SDK
Alright, here's the new Nebula3 SDK: http://www.radonlabs.de/internal/n3sdk_nov2007.exe
Some notable new features:
- first version of the Application Layer, plus a small sample app
- now also works on ATI cards which support shader model 3.0 (Radeon X1000 series and up)
- a new global light source type
- resources are now loaded from a zip archive
- many bugfixes
There are quite a few known issues in this release:
- some (new) VSM-related visual artefacts in the shadow system (light leaking and shadow grain near some shadow casters); this must be fixed by tuning some VSM parameters, which I didn't quite have the time to do right
- the new Application Layer sample app has clumsy controls and camera
- shadows are not rendered when the shadow caster is not visible, which leads to shadow pop-up in the App Layer sample application
- re-building assets requires the Nebula2 Toolkit For Maya
- some of the new Doxygen doc pages are messy
- ODE, OPCODE and GIMPACT are not credited yet in the docs
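As background on the light-leaking issue: it comes from the VSM math itself. Here's a small C++ sketch of the standard variance-shadow-map shadow test (Chebyshev's inequality over the stored depth moments) and the usual light-bleeding-reduction tweak. The function names and the minVariance/amount parameters are illustrative, not actual Nebula3 code - but these are exactly the kind of parameters that need tuning.

```cpp
#include <algorithm>
#include <cmath>

// Standard VSM shadow test: the shadow map stores the depth moments
// (E[x], E[x^2]); Chebyshev's inequality gives an upper bound on the
// fraction of light reaching a receiver at depth t.
float ChebyshevUpperBound(float m1, float m2, float t, float minVariance)
{
    if (t <= m1) return 1.0f;                 // receiver in front of occluder: fully lit
    float variance = std::max(m2 - m1 * m1, minVariance);
    float d = t - m1;
    return variance / (variance + d * d);     // p_max
}

// Light-bleeding reduction: remap the low tail of p_max to zero.
// 'amount' in [0,1) is the tuning knob; higher values darken leaks
// but also over-darken soft shadow edges.
float ReduceLightBleeding(float pMax, float amount)
{
    return std::clamp((pMax - amount) / (1.0f - amount), 0.0f, 1.0f);
}
```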
So this release is a bit rough; I hope the next one will be more polished. But I really wanted to push it out the door instead of spending even more time fixing stuff.
Here are a few screenshots:
This is testviewer.exe with one global light (which cannot cast shadows yet; I'm planning to implement Parallel-Split Shadow Maps for global light sources), plus 2 spot lights which cast VSM shadows.
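For reference, the "practical split scheme" commonly used for Parallel-Split Shadow Maps computes the split distances as a blend between a logarithmic and a uniform distribution of the view frustum. A small C++ sketch of that computation (illustrative code, not from Nebula3):

```cpp
#include <cmath>
#include <vector>

// Practical split scheme for Parallel-Split Shadow Maps: lambda = 1 gives
// the logarithmic distribution, lambda = 0 the uniform one; values around
// 0.5 are a common compromise. Returns numSplits + 1 distances from the
// near plane to the far plane.
std::vector<float> PSSMSplitDistances(float nearZ, float farZ, int numSplits, float lambda)
{
    std::vector<float> splits(numSplits + 1);
    for (int i = 0; i <= numSplits; i++)
    {
        float f = float(i) / float(numSplits);
        float logSplit = nearZ * std::pow(farZ / nearZ, f);
        float uniSplit = nearZ + (farZ - nearZ) * f;
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}
```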
The next one is a screenshot of the new testgame.exe sample application, with a player-controllable avatar and some physics boxes to play around with:
Enjoy!
-Floh.
12 Nov 2007
COD4
Call Of Duty 4 rocks in every aspect. No point in babbling about it here, you really have to see for yourself. If you only play one FPS this year, make it COD4. If the game had a R6-Vegas-style cover system, it would reach critical mass and collapse into a singularity of pure gaming perfection, thus destroying Earth and human civilization. Maybe that's why they left it out.
4 Nov 2007
Shadow Fixes
Phew, I think I have fixed most of the pressing problems in the VSM code. I didn't manage to get the number of executed instructions in the pixel shader down to a reasonable number with dynamic branching as I had hoped. So I went the shader-library way and added shader variations for 1 local light, 2 local lights, and so on up to 8 local lights. Performance with a low number of dynamic lights has become dramatically better, especially on graphics cards with low fillrate (shader optimizations aren't a big priority yet, however). Shader model 3.0 still has some annoying restrictions when trying to do a single-pass/multiple-light shader, so the übershader approach isn't very practical in the end. Hmpfh. I'm almost starting to consider deferred lighting... at least as a second option. Shadow quality has been improved a little because some bugs in the 2x2 downsample and gaussian blur filters have been fixed. Finally, I'm now emulating bilinear filtering when sampling the shadow buffer in the pixel shader, so that shadow borders are properly smoothed also on cards which don't support bilinear filtering of the G16R16F texture format (like ATI's).
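For the curious: emulating bilinear filtering by hand boils down to taking four point samples and blending them with the fractional texel position. Here's a CPU-side C++ sketch of the idea (the shader version works the same way; all names here are illustrative, not from the actual Nebula3 shaders):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A point-sampled float texture, like a G16R16F shadow buffer channel
// on hardware that can't filter it.
struct FloatImage
{
    int width, height;
    std::vector<float> texels;                    // row-major
    float Load(int x, int y) const
    {
        x = std::min(std::max(x, 0), width - 1);  // clamp addressing
        y = std::min(std::max(y, 0), height - 1);
        return texels[y * width + x];
    }
};

// Emulated bilinear sample at normalized uv coordinates: four point
// samples blended by the fractional texel position.
float SampleBilinear(const FloatImage& img, float u, float v)
{
    // shift by half a texel so uv = (0.5,0.5)/size hits a texel center exactly
    float x = u * img.width - 0.5f;
    float y = v * img.height - 0.5f;
    int x0 = int(std::floor(x)), y0 = int(std::floor(y));
    float fx = x - x0, fy = y - y0;
    float s00 = img.Load(x0, y0),     s10 = img.Load(x0 + 1, y0);
    float s01 = img.Load(x0, y0 + 1), s11 = img.Load(x0 + 1, y0 + 1);
    float top    = s00 + (s10 - s00) * fx;        // lerp along x, top row
    float bottom = s01 + (s11 - s01) * fx;        // lerp along x, bottom row
    return top + (bottom - top) * fy;             // lerp along y
}
```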
I'll try to get a new SDK out the door next week; it's a little more complicated to coordinate a release now that 3 people are actively working on N3.
Bored To Death
Alright, I think I've had it with JRPGs. Eternal Sonata came out two weeks ago in Germany. I enjoyed the demo quite a bit, so I gave it a try. The graphics are very, very pretty, and technically the game is very solid. But story and gameplay... oh, where to begin... what the fuck is up with the cutscenes? Very pleasing to the eye, but even the cheapest, shittiest German telenovela is more entertaining. On the one hand, everything is so extremely childish that if I were a 10-year-old boy I would be completely embarrassed to watch this shit. On the other hand, it's about heavy stuff like drug abuse, terminal illness, revolution and war. WTF? And they are tooo loooong. I swear it took 15 or 20 minutes of completely useless chatter until the game actually started. Well, at least it's possible to skip the cutscenes... which brings us to the second problem: gameplay. This game is so extremely linear that watching a DVD offers about the same set of choices: you can pause, rewind, or go forward. There is almost no sense of exploration and no choice in customization (everything is just linearly upgraded). Shops are pretty much useless; they basically just offer some piece of equipment which you may have missed in the previous area (and it's nearly impossible to miss something, because every treasure chest is right next to The Way). Combat is actually quite fun because it's more action-oriented than the traditional purely turn-based battles, but it's too repetitive to keep me interested in the long run (yes, my attention span has suffered a lot since I switched to console gaming). Eternal Sonata has a very pretty, expensive-looking shell, but unfortunately it seems to be completely hollow inside.
On to something completely different: Naruto! I had never heard the name before. I guess it's some anime that's currently running in the USA, maybe here in Germany too on some obscure TV channel. I've seen the pretty screenshots and read some good things about the game, so I bought it instead of the Orange Box, which came out last week as well (Portal will have to wait a little bit, I guess). And the game doesn't disappoint at all. The graphics are really impressive: stylish, with great, consistent art direction and a lot of attention to little details. I've spent quite some time just running around in the world and looking at all the pretty scenery. Gameplay is a shameless mixture of GTA and Tekken with a very little bit of JRPG... aaalriiiight, let me explain: The GTA aspect is there because the village where the game takes place is basically your sandbox, where you can run around, solve small quests and do errands for the villagers. At the start of the game everybody hates you, because you're that annoying little brat who wants to be a ninja. By helping the villagers with their problems, more and more of them start to like you, which is good because villagers who like you can give you directions in future quests. The JRPG aspect is there because combat doesn't take place in the game world; when you encounter an enemy, the actual fighting happens in a small arena, just as in a typical JRPG (thankfully that's where the similarity to JRPGs ends). The combat is just like in an arcade fighter, with simple combos, blocking, throws and special attacks. The special attacks are especially funny because they are accompanied by absurd and completely overdone anime effects. SHADOW-CLONE-JUTSU!!! Blam-Blam-Splat-Wooosh!!! You get the general idea :) They even managed to sneak Quick Time Events into the game in a way that isn't annoying, which earns an extra point because usually I hate QTEs.
The story is told through traditional 2D anime clips which look like they're taken directly from the original TV series, and it is - well... mildly entertaining to me, but probably a big deal for fans of the original series.
3 Nov 2007
The Nebula3 Application Layer
Nebula3's Application Layer provides a standardized high-level game framework for application programmers. It's the next version of Mangalore (Nebula2's application framework), integrated into Nebula3. N3's Application Layer is the result of years of refactoring (one could say this started with our very first big project, "Urban Assault", 10 years ago). Mangalore has now reached a rather stable state, where no "big" design changes are intended for the foreseeable future. Thus programmers familiar with Mangalore will immediately find their way around the Nebula3 Application Layer.
Mangalore had officially been started in 2003 when we learned the hard way that it is critical for an independent developer to work on several titles for several publishers at once to spread out the risks (German publishers have the unfortunate tendency to go nearly bankrupt quite frequently). So one very important design goal for Mangalore was to provide a common game application framework for a company where several teams work on game-projects of different game genres, and most importantly, a game-framework which enables us to do small - yet complete - games in a very short time-frame (our canonical "most minimal project" has a 3 month production-time-frame, and a team size of 5 people full-time (2 programmers, 2 graphics guys, 1 level designer), plus a half or third of a project manager). To make a long story short: building several "small projects" in parallel is many times harder then doing one "big project" because there are many parts of a game-project which simply don't scale down with the overall project size. On the content side, our strictly standardized asset pipeline in the form of our toolkit was the precondition for our multi-project strategy because it enabled us to switch modelers and animators between projects with practically no overhead (for instance it is not uncommon to build all graphical assets for a small project in only a few weeks, to directly share character animations between projects, or to have key people (like for instance character animators) help out in another project for a few days).
Mangalore (and now the N3 Application Layer) basically tries to solve the same problems in the programming domain, to enable our application programmers to start a new project with very small setup-overhead, and to help implement its game-play features on-time and on-budget. The following bullet points are probably the most important Mangalore features:
The Application Layer is built around the following basic concepts:
An Entity represents a single game object, like an enemy space-ship, the player avatar, or a treasure-chest. An important aspect of the Application Layer is that the class Game::Entity is not sub-classed to add functionality, instead Attributes and Properties define the actual game functionality of the entity. We learned very early that representing game objects through a class hierarchy in a complex game project brings all types of problems, where spaghetti-inheritance, fat base classes and redundant code are just the obvious ones (let's say you do a traditional realtime strategy game with vehicle units, so you create a Vehicle class, and derive a GroundVehicle and an AirVehicle class, which can implement navigation on the ground and in the air respectively, but now you want to add combat functionality, hmm... create a CombatGroundVehicle class, and a CombatAirVehicle class? Or would it be better to create a CombatVehicle class and derive GroundCombatVehicle and AirCombatVehicle? But wait, some of the vehicles should gather resources instead of fighting... you get the general idea, I think every game programmer faces this problem very early in his career).
That's where Properties come into play. A property is a small C++ object which is attached to an Entity object and which adds a specific piece of functionality to that entity. For instance, if a specific entity type should be able to render itself, a GraphicsProperty is attached, if it should be able to passively bounce and roll around in the game world, a PhysicsProperty is required, if an entity should be able to control the camera, a CameraProperty is added, and so on. Or for the above strategy game, a GroundMovementProperty, AirMovementProperty, CombatProperty and a GatheringProperty would be required, which are combined to create the functionality of the different unit types. Ideally, Properties are as autonomous and independent of each other as possible, so that they can be combined without restrictions (in a real world project, some Properties will always depend on each other, but experience has shown that the resulting usage restrictions are mostly acceptable). Properties pretty much solve the inheritance-problems outlined above. A new entity type is not implemented by subclassing from the Game::Entity class, instead new entity types are defined by combining several specialized properties.
The next problem we need to solve is the interface and communication problem. Properties need to communicate with each other (either with Properties attached to the same Entity, or with Properties attached to other Entities). Calling C++ methods on other Properties would ultimately introduce a lot of dependencies between Property classes which we want to avoid because this would very soon result in the same set of restrictions we just solved by fixing the Entity-class-inheritance-problem with Properties. Virtual methods would help a little bit because it would limit the dependencies on a few base classes, but this would just move the inheritance problem from entities to properties. Messages are used to solve this dilemma. Think of a Message as the next abstraction level of a virtual method call. While a virtual method call requires that the caller knows a target object and a base C++ interface, a Message just requires the caller to know the target object in the form of an Entity but no specific interface. The Entity will route the Message to interested Properties which will ultimately process the Message. The caller usually doesn't know which Property processes a Message, or whether the Message is even processed at all.
A C# programmer would now raise his arm and remark that this is exactly what Interfaces, Delegates and Events are for. And of course he's completely right. Alas, we're working in old-school C++. There is however more room for optimization in the Application Layer in the future (for instance, Message dispatching could be optimized by implementing a delegate-like mechanism).
Attributes are typed key/value pairs attached to entities. Each entity has a well-defined set of attributes which completely represents the persistent state of the entity. Attributes are used to describe the initial state of an entity, and to save and load the state of an Entity using the standard Load/Save mechanism of the Application Layer. Usually, a specific property is responsible for a subset of the entity's attributes. As a general rule, a specific attribute should only be manipulated by one property, but may be read by everyone.
Many features of a game-application don't fit into entities and are better implemented in some central place (the overall code design of some of our early projects suffered a lot from failing to understand that NOT EVERYTHING is an entity). For instance handling input or controlling the camera in one central place is sometimes (often?) better then spreading the same functionality over several Property classes, which then may need to spend a lot of effort to synchronize and communicate with other properties). Those global aspects of game-logic are put into subclasses of Manager. Managers are typical Singleton objects and usually provide services which need to be available "everywhere" in the game code.
Finally, a GameFeature is a new concept which didn't exist in Mangalore. A GameFeature is more or less just a nifty name for a bunch of source files which compile into a static link library consisting of related Properties, Managers, Messages and Attributes (all of them optional). GameFeatures are mainly an infrastructure to encourage and standardize code reuse between projects. The idea is that at the beginning of a project the lead programmer starts by picking and choosing from the existing set of GameFeatures those which suit his project, and (ideally) to group new functionality into new GameFeatures which may be of use for later projects. The Application Layer comes with a small number of standard GameFeatures, enough to implement a generic 3D application with a player-controllable avatar, chase camera and passive physics for environment objects, but it's really intended for more complex stuff, like a dialog system, a generic inventory system, or lets say, the guts of a typical "horse riding game" ;)
The Application Layer isn't perfect and doesn't solve all problems of course. For instance it may be difficult to decide whether a piece of functionality should better go into Properties or into a Manager. Also, a typical problem we are frequently facing is that functionality is split into too many, too small parts which results in too many, too fine-grained Property classes, which in turn requires too much communication between those properties. So a project *still* needs an experienced and pragmatic lead programmer which lays down the basic infrastructure of the code at the beginning of the project.
Mangalore had officially been started in 2003 when we learned the hard way that it is critical for an independent developer to work on several titles for several publishers at once to spread out the risks (German publishers have the unfortunate tendency to go nearly bankrupt quite frequently). So one very important design goal for Mangalore was to provide a common game application framework for a company where several teams work on game-projects of different game genres, and most importantly, a game-framework which enables us to do small - yet complete - games in a very short time-frame (our canonical "most minimal project" has a 3 month production-time-frame, and a team size of 5 people full-time (2 programmers, 2 graphics guys, 1 level designer), plus a half or third of a project manager). To make a long story short: building several "small projects" in parallel is many times harder then doing one "big project" because there are many parts of a game-project which simply don't scale down with the overall project size. On the content side, our strictly standardized asset pipeline in the form of our toolkit was the precondition for our multi-project strategy because it enabled us to switch modelers and animators between projects with practically no overhead (for instance it is not uncommon to build all graphical assets for a small project in only a few weeks, to directly share character animations between projects, or to have key people (like for instance character animators) help out in another project for a few days).
Mangalore (and now the N3 Application Layer) basically tries to solve the same problems in the programming domain, to enable our application programmers to start a new project with very small setup-overhead, and to help implement its game-play features on-time and on-budget. The following bullet points are probably the most important Mangalore features:
- provide a fully functional generic "skeleton application" so that something is visible on the screen from Day One of the project
- integrates with our standardized level-design and modeling work-flows, and thus provides a common "social interface", mindset and vocabulary between our programmers, modelers/animators and level-design guys
- has very few, very strictly defined concepts how an application should implement its game-specific code (it's relatively easy for a Mangalore programmer to decide the "where" and "how" questions when faced with the implementation of a new game feature)
- it's modular enough to enable a team of programmers to work on the game-logic without stepping on the other programmers' feet too much
- "New Game", "Continue Game", "Load Game" and "Save Game" are standardized features (that's an important one: Load/Save works right from the beginning, and mostly without requiring the programmer to write a single line of load/save-related code)
- provide a growing set of reusable modules for future projects (we formalized this a bit more in the N3 Application Layer compared to Mangalore by introducing so called "GameFeatures")
The Application Layer is built around the following basic concepts:
- Entity: a game object, container for Properties and Attributes, manipulated through Messages
- Property: implement per-entity aspects of game-logic
- Manager: implements global aspects of game-logic
- Attribute: key/value pairs representing the persistent state of an Entity
- Message: used for communication between entities
- GameFeature: groups Properties, Managers, Messages and Attributes by functionality
An Entity represents a single game object, like an enemy space-ship, the player avatar, or a treasure-chest. An important aspect of the Application Layer is that the class Game::Entity is not sub-classed to add functionality, instead Attributes and Properties define the actual game functionality of the entity. We learned very early that representing game objects through a class hierarchy in a complex game project brings all types of problems, where spaghetti-inheritance, fat base classes and redundant code are just the obvious ones (let's say you do a traditional realtime strategy game with vehicle units, so you create a Vehicle class, and derive a GroundVehicle and an AirVehicle class, which can implement navigation on the ground and in the air respectively, but now you want to add combat functionality, hmm... create a CombatGroundVehicle class, and a CombatAirVehicle class? Or would it be better to create a CombatVehicle class and derive GroundCombatVehicle and AirCombatVehicle? But wait, some of the vehicles should gather resources instead of fighting... you get the general idea, I think every game programmer faces this problem very early in his career).
That's where Properties come into play. A property is a small C++ object which is attached to an Entity object and which adds a specific piece of functionality to that entity. For instance, if a specific entity type should be able to render itself, a GraphicsProperty is attached, if it should be able to passively bounce and roll around in the game world, a PhysicsProperty is required, if an entity should be able to control the camera, a CameraProperty is added, and so on. Or for the above strategy game, a GroundMovementProperty, AirMovementProperty, CombatProperty and a GatheringProperty would be required, which are combined to create the functionality of the different unit types. Ideally, Properties are as autonomous and independent of each other as possible, so that they can be combined without restrictions (in a real world project, some Properties will always depend on each other, but experience has shown that the resulting usage restrictions are mostly acceptable). Properties pretty much solve the inheritance-problems outlined above. A new entity type is not implemented by subclassing from the Game::Entity class, instead new entity types are defined by combining several specialized properties.
The next problem we need to solve is the interface and communication problem. Properties need to communicate with each other (either with Properties attached to the same Entity, or with Properties attached to other Entities). Calling C++ methods on other Properties would ultimately introduce a lot of dependencies between Property classes which we want to avoid because this would very soon result in the same set of restrictions we just solved by fixing the Entity-class-inheritance-problem with Properties. Virtual methods would help a little bit because it would limit the dependencies on a few base classes, but this would just move the inheritance problem from entities to properties. Messages are used to solve this dilemma. Think of a Message as the next abstraction level of a virtual method call. While a virtual method call requires that the caller knows a target object and a base C++ interface, a Message just requires the caller to know the target object in the form of an Entity but no specific interface. The Entity will route the Message to interested Properties which will ultimately process the Message. The caller usually doesn't know which Property processes a Message, or whether the Message is even processed at all.
A C# programmer would now raise his arm and remark that this is exactly what Interfaces, Delegates and Events are for. And of course he's completely right. Alas, we're working in old-school C++. There is however more room for optimization in the Application Layer in the future (for instance, Message dispatching could be optimized by implementing a delegate-like mechanism).
Attributes are typed key/value pairs attached to entities. Each entity has a well-defined set of attributes which completely represents the persistent state of the entity. Attributes are used to describe the initial state of an entity, and to save and load the state of an Entity using the standard Load/Save mechanism of the Application Layer. Usually, a specific property is responsible for a subset of the entity's attributes. As a general rule, a specific attribute should only be manipulated by one property, but may be read by everyone.
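To make the "typed key/value pairs" idea concrete, here is a hypothetical sketch using std::variant; the real Nebula3 attribute system works with compile-time attribute IDs rather than string keys, so treat this purely as an illustration of the concept:

```cpp
#include <map>
#include <string>
#include <variant>

// A typed attribute value: one of a small set of supported types.
using AttrValue = std::variant<int, float, bool, std::string>;

// A table of named, typed attributes; one such table would fully
// describe the persistent state of an entity.
class AttrTable
{
public:
    template<typename T>
    void Set(const std::string& key, const T& value)
    {
        this->attrs[key] = value;
    }
    template<typename T>
    T Get(const std::string& key) const
    {
        // throws if the key is missing or the type doesn't match
        return std::get<T>(this->attrs.at(key));
    }
    bool Has(const std::string& key) const
    {
        return this->attrs.count(key) > 0;
    }
private:
    std::map<std::string, AttrValue> attrs;
};
```

Load/Save then reduces to serializing this table, and the "one writer, many readers" rule from the paragraph above is a convention on top of it.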
Many features of a game-application don't fit into entities and are better implemented in some central place (the overall code design of some of our early projects suffered a lot from failing to understand that NOT EVERYTHING is an entity). For instance, handling input or controlling the camera in one central place is sometimes (often?) better than spreading the same functionality over several Property classes, which then may need to spend a lot of effort synchronizing and communicating with each other. Those global aspects of game-logic are put into subclasses of Manager. Managers are typical Singleton objects and usually provide services which need to be available "everywhere" in the game code.
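A Manager singleton can be as simple as the following sketch; Nebula3 actually uses dedicated singleton macros for this, so the hand-written version below (with an invented FocusManager example) is only meant to show the pattern:

```cpp
// Hypothetical manager which tracks the entity that currently has
// input/camera focus -- a typical "global aspect" of game logic.
class FocusManager
{
public:
    static FocusManager* Instance()
    {
        static FocusManager singleton; // created on first access
        return &singleton;
    }
    void SetFocusEntityId(int id) { this->focusEntityId = id; }
    int GetFocusEntityId() const { return this->focusEntityId; }
private:
    FocusManager() = default;  // only Instance() may construct
    int focusEntityId = -1;    // -1 == no entity has focus
};
```

Any piece of game code can then ask `FocusManager::Instance()` without properties having to pass the information around between each other.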
Finally, a GameFeature is a new concept which didn't exist in Mangalore. A GameFeature is more or less just a nifty name for a bunch of source files which compile into a static link library consisting of related Properties, Managers, Messages and Attributes (all of them optional). GameFeatures are mainly an infrastructure to encourage and standardize code reuse between projects. The idea is that at the beginning of a project the lead programmer starts by picking from the existing set of GameFeatures those which suit his project, and (ideally) by grouping new functionality into new GameFeatures which may be of use for later projects. The Application Layer comes with a small number of standard GameFeatures, enough to implement a generic 3D application with a player-controllable avatar, chase camera and passive physics for environment objects, but it's really intended for more complex stuff, like a dialog system, a generic inventory system, or let's say, the guts of a typical "horse riding game" ;)
The Application Layer isn't perfect and doesn't solve all problems of course. For instance it may be difficult to decide whether a piece of functionality should better go into Properties or into a Manager. Also, a typical problem we are frequently facing is that functionality is split into too many, too small parts, which results in too many, too fine-grained Property classes, which in turn require too much communication between those properties. So a project *still* needs an experienced and pragmatic lead programmer who lays down the basic infrastructure of the code at the beginning of the project.
25 Oct 2007
Status Update
Quick update:
- A couple of new Wii and DS devkits have arrived this week, one of the Wii-kits is for me so I will join the party soon, at least part-time.
- I'm very busy working on Drakensang at the moment, so there won't be a lot of new stuff from my side in the next Nebula3 release. I did fix quite a few bugs in the zip-archive code though, so that all runtime data is now loaded by default from an export.zip archive (this also works on the Xbox360, but I will add support for the 360's native compressor at some later time because it's most likely faster than the generic zlib).
- Johannes Kosanetzky pretty much finished up the Application Layer by porting the essential parts of Mangalore over to Nebula3; this will be in the next SDK release even if it's still work in progress. The general plan here is to have a nice little demo application with simple physics, a player-controllable avatar and a chase camera. All with complete integration into our existing level design workflow.
- Malte has been working on porting the Foundation layer over to the Wii, and is now almost finished with it. The Wii specific code won't be part of the public SDK of course.
- The Wii-port got a priority boost recently. Because of this we will essentially take a shortcut and add a new "Nebula2 backward compatibility" layer. This layer will contain essential Nebula2 subsystems (for instance animation, audio, characters, etc...) which are just brought over to N3 without a lot of refactoring, so that we can get a good base for a real-world project ASAP. The plan is to replace those N2 subsystems step by step with properly refactored systems in parallel to actual project development, so that we have enough time for proper refactoring and to implement platform-specific optimizations where applicable.
One more reason to get N3 up and running on the Wii ASAP ;)
20 Oct 2007
Level Design And Build System Thoughts
I've been discussing with Bernd a lot lately how we could improve our build system and level design process over the next months. Turnaround times for complete and incremental builds, and also the "local turnaround" of level designers are starting to become critical for a "big" project like Drakensang. A complete (nightly) build is now at around 11 hours (this includes recompiling and rebuilding everything, building an installer and uploading the result to an FTP site). An incremental build (during the day) takes at least half an hour and up to 2 hours (without building the installer and uploading). To put this into perspective, Drakensang has about 7000 textures and about 4500 to 5000 3D models (I don't have the exact numbers at hand because I'm not at the Labs right now), the runtime data for the whole game is currently about 4 GB in size.
For level designers, there are 2 separate turnaround-time problems: updating the work machine with the new data from the last nightly build (this may take somewhere between half an hour and an hour or so), and the time it takes to test a change in the actual game (we're working with Maya as level design tool right now, not an ingame editor).
We have a few holy dogmas at Radon Labs:
- Daily Build: everybody must work on the most current data which is at most 1 day old
- "Make Game": creating a complete build must be fully automated, and happen on a central build machine
- The Toyota Rip Cord (don't know if this is translated correctly, it's the "Toyota Reißleine" in German): if there is no working build, production essentially stops until the problem is identified and resolved (and the responsible person has been ritually tarred and feathered).
- One Tool For One Job: don't use several different tools for the same job (all 3D modeling is done in Maya for instance)
We also have some other secret dogmas in our canon, but they don't affect the build system or level design work flow so I won't utter them here :)
We could easily chicken out by giving up daily builds for instance. But this would most likely create "Ivory Tower" pockets inside the company. Tendencies like this happen all the time; they are dangerous for the project, and must be fought the instant they show up.
Instead we stepped back and thought about what a perfect build-system and a perfect level-design process would look like. The whole problem is really 3 separate (but somewhat related) problems:
- reduce build time
- distribution of build data to workplaces
- reduce turnaround times for level-designers
Point (1) is relatively easy. I think the only worthwhile improvement can be gained by distributing the workload across several build slave machines. We already invested serious optimization work into our Maya exporters, so there's not much more to gain there. Setting up a distributed build system is an interesting job, but not too complicated if you have control over all build tools.
Point (2) is more interesting. The question here is "do we really need to distribute all the build data to all workplaces?". That's 4 GB of uncompressed data per day per workplace, but a level-designer typically only needs a fraction of that data during a normal work day, which typically looks like this:
- level designer comes in in the morning and pulls the most current build data from the nightly build
- level designer cvs-edits the files he needs to work on
- level designer works inside Maya and several specialized tools, like dialog and quest editors
- level designer needs to check his changes in the game frequently (involves starting the game)
- in the evening, level designer cvs-commits his work and goes home
- the build machine creates the new nightly build for the next day
There are several problems here:
- in the morning, a lot of time is wasted just updating the runtime data of the workplace machines
- the local turnaround time to check changes in the game is too long (somewhere between 1 and 3 minutes)
- when the level designer checks in his work in the evening, subtle collisions may occur with the work of other level designers (this is especially critical in "persistent-world-games" like Drakensang)
Above a specific project size and complexity, level design becomes more and more frustrating because more and more time is spent waiting for results.
Now here's the actual point of the whole post: What if level design were actually fun? We could improve the fun-factor a lot if the level designers would immediately see results, and could directly work together with others. What if level design were like a mixture between a Wiki and a multiplayer game?
Here's how we think our level design should work in the future:
- level designer comes in in the morning and starts the game in level-design mode
- the game notifies the level designer that an update is available, updating only involves pulling a new executable
- the game connects to a central game server, which holds the actual game data in a database and serves the graphical/audio content through a network share
- the level designer creates, places and destroys game objects directly in the game, all changes are distributed via the game server to other level designers working "nearby"
- to test the changes, the level designer presses a play button, and after a few (very few!) seconds, the editor will change into game-mode (it is very important to strictly separate the edit-mode from the game-mode, because application programmers should never have to care about level editor stuff)
- the ingame level editor is augmented with specialized tool windows written in C#, some of them generic (i.e. a nifty table view), some of them project-specific (i.e. an inventory editor)
- in the evening, the level designer shuts down the machine and goes home
So we would give up Maya as level design tool in favor of a "collaborative ingame level editor". The collaborative/multiplayer part sounds like a gimmick, but it's actually very important because it solves the data collision problem. Since all changes are immediately distributed to all level-designers, there's no danger that several conflicting data sets are created (the longer 2 separate data branches are evolving, the more likely collisions will occur which will be difficult to resolve).
Up until a few days ago I would have scrapped the whole idea and declared it as impossible. Implementing an ingame editor which would suit all the different genres we are doing sounded like opening a can of worms. But in the end it isn't that difficult (for a distributed system it's actually necessary to have an "ingame-editor"). We already have a lot of the basic building blocks in place:
- We can pillage a lot of ideas from our current "Remote Level Design" system. At the moment, we can run Maya and the game side by side, and changes in Maya immediately show up in the game; this is nice for tweaking lighting parameters for instance.
- Game data already lives completely in a lightweight local database (SQLite). This gives us a lot of advantages:
- a game entity is completely described by a simple set of named, typed attributes
- a game entity always corresponds to a single row in a database table
- all "other data" already lives in database tables (even hierarchical data, like dialog trees)
- all data manipulations can be expressed with a very small subset of SQL (INSERT, UPDATE and DELETE)
- The only operations the generic ingame level editor must support are "Navigate", "Create Entity", "Update Entity" and "Destroy Entity", where creation is always a duplication either from a template or from another entity. More complex operations, or different views on the game data, will be implemented in C# tools which are connected through a standardized plugin interface.
- With Nebula3's TcpServer/TcpClient classes and orthogonal IO subsystem as a base, it should be relatively easy to set up the required in-game client/server system
- We are already using some specialized editor tools written in C# (we did some of them in MEL before, and C# is a HUGE improvement over MEL especially for GUI stuff)
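Because a game entity corresponds to exactly one table row, every editor operation reduces to one small SQL statement. Here's a hypothetical sketch of that mapping (table and column names invented; real code would use the SQLite C API with bound parameters rather than string pasting):

```cpp
#include <string>

// Build the UPDATE statement for changing one attribute of one entity.
// "One entity == one row" means the Guid column uniquely identifies
// the row, so every edit in the level editor becomes a statement of
// this shape (or an equally small INSERT/DELETE).
std::string UpdateEntitySql(const std::string& table,
                            const std::string& guid,
                            const std::string& column,
                            const std::string& value)
{
    return "UPDATE " + table + " SET " + column + "='" + value +
           "' WHERE Guid='" + guid + "'";
}
```

Distributing such tiny statements to all connected level-design clients (and applying them to the central database) is what makes the collaborative editing described above cheap.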
The devil is always in the details of course. But I think this is a pretty good plan to fundamentally improve our level design process in future projects.
16 Oct 2007
New Radon Labs Web Page
Our new company web page has gone live over the weekend! I really love the bright and colorful design. Apart from the fresh new design, the other reason for the refactoring was better maintainability. Now that we're churning out several games a year it must be easy to update the website with game flyers and news. And it was surprisingly hard to get there. In the beginning we wanted to entrust an external company with our web page. But although Berlin is packed full with web designers, it is surprisingly hard to find a good one worth his money. Most of them only seem to care about the artistic side, and more or less ignore usability and maintainability. What we needed was a carefully engineered web page that also had to look good. After some failed attempts we finally did it internally, and now we have a solution that's easy to maintain and integrates nicely into our company infrastructure.
14 Oct 2007
Google Docs
I recently had to wipe and set up my notebook (it's amazing how fast it feels now again, XP really does rot over time), and with it went the pre-installed Microsoft Office (which I just noticed today). But from time to time I have to write a proper document, nothing fancy, all I usually require is to structure the document with different types of headings, and to insert an image or a table here and there. I'm also often switching between my private notebook, which I carry around everywhere, and my work desktop, which sometimes involves copying said documents around manually. So, lacking an installed MS Word, I gave Google Docs a try, and so far I like it. It even makes a nice blog editor; the builtin Blogger editor messes up source code for instance, but with Google Text everything is ok:
Ptr<MyNS::MyClass> myObj = MyNS::MyClass::Create();
IndexT i;
for (i = 0; i < myObj->NumItems(); i++)
{
    // a comment
    myObj->ProcessItem(i);
}

Instead of Tab, one must use the "Indent More" and "Indent Less" functions, otherwise the indents will disappear when exporting to Blogger, but all in all this is a huge step forward from Blogger's builtin editor.
Let's see if tables work as well:
Bla | Blub |
123 | 234 |
Nice :)
7 Oct 2007
In Praise Of Ninja Gaiden
So I finally played through Ninja Gaiden (Black) in Normal difficulty, boldly moved on to Hard difficulty - which by the way gives a whole new meaning to the word hard - got totally destroyed at the Alma boss battle, and immediately started a new session on Normal again. That's how f*cking cool Ninja Gaiden is. The first play-through is just standard procedure to become familiar with the environments and to know the story just enough to ignore it in the following play-throughs. After these minor nuisances are put aside, the game becomes the true Ninja Gaiden, reduced to the gamer on one side and Ryu's weapon on the other, connected through the game pad. And this essence of the game, unleashing combos, Hayabusa's unbelievable moves, the perfect feedback through audio, gore and rumble is what sets Ninja Gaiden far above all other fighting games I have played so far.
Playing NG is almost like an exercise in meditation. The moment one loses focus and allows the mind to wander off the disciple is mercilessly punished. The difficulty level of Ninja Gaiden is the stuff of legends. But the game is never unfair. Becoming better at the game is purely a matter of training and honing one's skills, and never of pure luck. Sometimes another weapon, a new combo or another way to use the environment makes all the difference, but most of the time it's simply about not losing focus and keeping calm.
So in conclusion I award Ninja Gaiden a score of 11 out of 10, and that's after deducting 2 points for lame story and setting :)
I also played a few hours into Blue Dragon, enough to realize that old-school JRPGs are simply not my thing. Conan is alright but the combat lacks oomph (maybe I'm spoiled by NG). I'm also feeling a slightly suspicious urge to play Dynasty Warriors (Empires) which I cannot really explain. There's something fascinating about the 3 levels of the game (the turn-based strategy phase on the top level, the Z-like elements of conquering bases during the game, and finally the pure hacking and slaying on the lowest level) which shines through the layers of bad graphics and voice acting.
27 Sept 2007
Nebula3 September SDK
Here's the Nebula3 Sep 2007 SDK. This is - as always - work in progress.
Some of the new stuff:
- new and improved HTTP debug pages
- added some more subsystem docs
- new Base namespaces which contains all XxxBase classes (mainly to keep the Doxygen documentation clean)
- work-in-progress Lighting subsystem with dynamic spotlights and soft shadows
- RenderTarget resolve-to-texture more flexible (resolve texture can have different size from the render target, and can resolve into a sub-rectangle of the resolve texture)
The Test Viewer has been tested on nVidia 6600 and 7800 cards only. The 6xxx cards don't seem to support texture filtering on G16R16F textures, thus the shadow edges won't be properly smoothed (they still look quite ok though). I think the ATI cards are also unable to filter FP textures. The proper solution would be to implement the linear filtering in the shader. This is planned but not implemented yet.
Enjoy!
-Floh.
25 Sept 2007
Nope...
I give up, I seriously can't take the pain any longer. No Halo for me until I can get my hands on the English version. Whoever's responsible for the German audio track in Halo3 either needs to be fired or promoted to a position where he can't do any more harm. Why oh why couldn't they simply keep the original audio track and provide German subtitles?
German Halo3 WTF???
Nooooooooo........ ...... oooooooh!!!
I can't believe this shit! No English audio track in Halo3? It would be ok if the German voice track weren't such a terrible piece of shit. Seriously, who did they hire to do the voice overs? Interns? Halo fanboys? It is very easy to get very good voice actors in Germany for a reasonable price. People who actually dub movies as their profession, not the backyard junkies who apparently did the Halo3 voice overs. What an epic fuckup. The voice acting in Halo3 would be just barely ok for a porn movie, not Microsoft's flagship game on the 360.
This is going to be hard...
Smooth Shadows
I'm currently working on the dynamic shadow system. This is a shot from the PC-version with 2 shadow-casting spotlights (animated lights, it actually looks much cooler in motion):
I have decided to use Variance Shadow Mapping in Nebula3, a relatively new approach which allows the shadow buffer to be linearly filtered. This is a big win because post-process filters can be applied on the shadow buffers, and all the hardware filtering features of the graphics card to fight aliasing (mipmapping, min/mag filtering, anisotropic filtering, etc...) can be used when sampling the shadow map. It produces wonderfully smooth shadows from relatively low-res shadow buffers (the scene above uses 256x256 shadow maps).
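The reason VSM survives linear filtering is that the shadow map stores depth and depth squared; filtering averages both moments, and the shader reconstructs an upper bound on the "receiver is lit" probability via Chebyshev's inequality. Here's a minimal C++ sketch of that math (parameter names and the minVariance clamp are illustrative, not the actual Nebula3 shader code):

```cpp
#include <algorithm>

// Chebyshev upper bound used by Variance Shadow Mapping:
// momentsX = filtered depth E[d], momentsY = filtered depth^2 E[d^2].
// Returns a value in (0,1]: an upper bound on the probability that
// the receiver at receiverDepth is NOT in shadow.
float ChebyshevUpperBound(float momentsX,
                          float momentsY,
                          float receiverDepth,
                          float minVariance = 0.0001f)
{
    if (receiverDepth <= momentsX)
        return 1.0f;                                 // closer than average occluder: fully lit
    float variance = momentsY - momentsX * momentsX; // sigma^2 = E[d^2] - E[d]^2
    variance = std::max(variance, minVariance);      // clamp to fight numeric artifacts
    float d = receiverDepth - momentsX;
    return variance / (variance + d * d);            // Chebyshev's inequality
}
```

Because this formula only needs the two filtered moments, the G16R16F shadow buffer can be mipmapped, blurred and downsampled freely, which is exactly what the pipeline below does.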
The current implementation uses one "big" shadow buffer (512x512, pixelformat is G16R16F) which is re-used for all shadow-casting light sources:
This is then downsampled to 256x256:
Finally the downsampled buffer is blurred into a shared shadow buffer which stores the shadow buffers of all active shadow casting light sources into a single texture:
The pixel junk in the right half are the two unused slots, since there are only 2 (out of currently 4) shadow casting light sources in the scene. A texture array would be a better solution here, because at the moment sampling will "leak" into the neighbouring shadow buffer (for spot lights this isn't a big problem, since this will happen outside of the light cone). But texture arrays don't exist on DX9, so unfortunately that's not an option.
There's still a lot of work to do on the lighting system, but the intermediate results look very promising. I think I will first bring the 360 version up to date before moving along. It's lagging behind a few days now.
I have started to play through Ninja Gaiden Black again. Even among current 360 titles it would still look quite good, and it runs wonderfully on the 360 in 720p and 16:9. I finally want to beat the game on Normal (first time I only managed Ninja Dog... yeah I know). Everything about this game is simply kick-ass (well... except from the "story", yawn...). A real f*cking shame Team Ninja didn't release Sigma on the 360 as well. This was actually the only time where I was tempted to get a PS3, but... I seriously can't justify spending 600 Euro just for one game (which I already played).
I also got Halo3 today, I didn't expect to see it on shelf already (European launch is tomorrow). It stood there right amongst all the old 360 titles. Germany is still PC-land. No doubt ;)
19 Sept 2007
Nebula3 State Switch
A couple of interesting things have happened recently here at the labs:
- Nebula3 now has the status of an official Radon Labs project, which means it has a budget and two of our elite veterans (Johannes and Malte) will start working fulltime on Nebula3. This will accelerate development immensely. One or even two prototype projects will accompany N3 development and define clear "real-world goals" for feature planning. I feel this is exactly the right time to add manpower, because all the important architectural decisions have been made, and now it's time to (a) broaden the feature set, and (b) start to revise and "port" Nebula2 and Mangalore subsystems which don't need a complete redesign.
- Our first Wii devkit has arrived (actually, it arrived already a few weeks ago), and now with the additional forces added to N3 development, we're going to start a Wii N3 version which will be developed side by side with the 360- and PC-version.
12 Sept 2007
Status Update
2 Sept 2007
Even more HTTP (and some Bioshock)
Last HTTP post, promised :)
I've added image support to the HTTP server this Saturday. Turns out no Base64 encoding is necessary (don't know why I assumed this), it's perfectly fine to just send the raw image data over the line. I wrote a StreamTextureSaver class which can save the content of a texture into a stream in a couple of formats (JPG, BMP, PNG and DDS). It's platform-specific and just uses the D3DXSaveTextureToFileInMemory() function on Win32 and Xbox360, so the Nebula3 code is really small. To send a texture to a web browser, only a few steps are necessary.
There's a special method SaveScreenshot() in the RenderDevice, which basically does the same thing, but uses the backbuffer as the image source. This special case is necessary, because the backbuffer cannot be exposed as a texture (well it could, but this would add unnecessary overhead).
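Whatever the image source (texture or backbuffer), the HTTP side of the job is the same: the saved image bytes become the response body, tagged with the right content type. A hedged sketch of that last step, with a hypothetical function name and a hand-rolled response (the real HttpServer of course has proper request/response classes):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Wrap raw image bytes (as written by a texture saver into a memory
// stream) into a minimal HTTP/1.1 response. contentType would be
// e.g. "image/png" or "image/jpeg" depending on the requested format.
std::string BuildImageResponse(const std::vector<std::uint8_t>& imgBytes,
                               const std::string& contentType)
{
    std::string response =
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: " + contentType + "\r\n"
        "Content-Length: " + std::to_string(imgBytes.size()) + "\r\n"
        "\r\n";
    // raw binary body, no Base64 needed
    response.append(imgBytes.begin(), imgBytes.end());
    return response;
}
```

The browser only cares about Content-Type and Content-Length, which is why sending the raw image data over the line "just works".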
To capture a screenshot from the currently running Nebula3 application into the browser, simply navigate to
http://127.0.0.1:2100/display/screenshot
This produces a PNG screenshot. To get the screenshot as JPEG:
http://127.0.0.1:2100/display/screenshot?fmt=jpg
To retrieve the content of a shared texture resource (including render targets):
http://127.0.0.1:2100/texture?img=[resId]
For instance to get the content of the example Tiger's texture:
http://127.0.0.1:2100/texture?img=textures:examples/tiger.dds
This will only return currently loaded texture resources. If the texture is not currently loaded, a "404 Not Found" will be returned.
The HttpServer is going to become an extremely useful debugging tool. One can easily navigate through an application's runtime data on a much higher level then a source-level debugger allows, since the data can be presented in an application-specific way. It's the perfect complement to source-level debugging and other specialized debugging and profiling tools like PerfHUD or PIX. And since it's HTTP everything also works over the network by design. This is especially useful for console development, where it's often not possible or desirable to add a complex in-game user interface just for debugging and visualization purposes.
Beyond debug visualizations the whole HTTP communication stuff is really inspiring. Imagine what would be possible with XUL (Mozilla's user interface XML dialect).
Some Bioshock-in-progress notes: I'm about 8 hours into the 360-version. As expected the game is pretty damn near perfect (I have only 1 small gripe: the human character models... compared to the graphics quality of the environment and the Big Daddies and Rosies they really look quite ugly). But overall: a masterpiece! Definitive must-play for everybody who loves computer games. The immersion and mood can't be described with words. It's obvious that the developer dodged any face-to-face dialogs (and right so). Yesterday I almost expected to actually meet a sane person face-to-face... but nope, it's all tapes and monitors, I was a little bit disappointed at first... but honestly, it wouldn't have looked very good (quality-wise) if they had chosen to talk directly to other characters. And handling the story in such an "impersonal" way actually adds a lot to the loneliness and depressive mood of the game. The German localization is *excellent*. Usually I'm playing the original version, but Bioshock's voice-over localization is extremely well done. "Movie-quality-well-done".
I've added image support to the HTTP server this Saturday. Turns out no Base64 encoding is necessary (I don't know why I assumed this); it's perfectly fine to just send the raw image data over the line. I wrote a StreamTextureSaver class which can save the content of a texture into a stream in a couple of formats (JPG, BMP, PNG and DDS). It's platform-specific and just uses the D3DXSaveTextureToFileInMemory() function on Win32 and Xbox360, so the Nebula3 code is really small. To send a texture to a web browser, the following steps are necessary:
- lookup the texture object in the SharedResourceServer
- create a StreamTextureSaver and attach it to the texture object and the output stream (which represents the body of the HTTP response)
- call Save() on the texture, this will save the image data to the output stream
- set the matching MediaType on the content stream (e.g. "image/png")
- that's it, the HttpResponseWriter will wrap everything into a valid HTTP response message and send it off to the web browser; everything is in-memory, no disk i/o is involved
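The response-assembly step can be sketched in a few lines. This is a hypothetical stand-alone helper, not the actual HttpResponseWriter code; it just shows that the raw image bytes go over the line as-is, with the media type carried in the Content-Type header:

```cpp
// Minimal sketch: wrap raw image bytes plus a media type into a complete
// HTTP response message. No Base64 involved, the body is the binary data.
#include <cassert>
#include <string>
#include <vector>

std::string BuildHttpImageResponse(const std::vector<unsigned char>& body,
                                   const std::string& mediaType)
{
    std::string response = "HTTP/1.1 200 OK\r\n";
    response += "Content-Type: " + mediaType + "\r\n";
    response += "Content-Length: " + std::to_string(body.size()) + "\r\n";
    response += "\r\n";
    response.append(body.begin(), body.end());   // raw image bytes, unmodified
    return response;
}
```

A PNG screenshot would simply be the DDS/PNG/JPG stream produced by StreamTextureSaver passed in as the body.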
There's a special method SaveScreenshot() in the RenderDevice which basically does the same thing, but uses the backbuffer as the image source. This special case is necessary because the backbuffer cannot be exposed as a texture (well, it could, but this would add unnecessary overhead).
To capture a screenshot from the currently running Nebula3 application into the browser, simply navigate to
http://127.0.0.1:2100/display/screenshot
This produces a PNG screenshot. To get the screenshot as JPEG:
http://127.0.0.1:2100/display/screenshot?fmt=jpg
To retrieve the content of a shared texture resource (including render targets):
http://127.0.0.1:2100/texture?img=[resId]
For instance to get the content of the example Tiger's texture:
http://127.0.0.1:2100/texture?img=textures:examples/tiger.dds
This will only return currently loaded texture resources. If the texture is not currently loaded, a "404 Not Found" will be returned.
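Decoding the `?fmt=jpg` and `?img=...` style query parameters shown above is a small job; here's a sketch of how it could look (the function is invented for illustration, it's not the Nebula3 API):

```cpp
// Split the query part of a URI ("?key=value&key2=value2") into a map.
#include <cassert>
#include <map>
#include <string>

std::map<std::string, std::string> ParseQuery(const std::string& uri)
{
    std::map<std::string, std::string> params;
    std::string::size_type q = uri.find('?');
    while (q != std::string::npos)
    {
        // extract the "key=value" pair up to the next '&' (or end of string)
        std::string::size_type amp = uri.find('&', q + 1);
        std::string pair = uri.substr(q + 1,
            amp == std::string::npos ? std::string::npos : amp - q - 1);
        std::string::size_type eq = pair.find('=');
        if (eq != std::string::npos)
        {
            params[pair.substr(0, eq)] = pair.substr(eq + 1);
        }
        q = amp;
    }
    return params;
}
```

With this, the texture request handler only has to look up `img` in the returned map and answer with a 404 response if the resource isn't currently loaded.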
The HttpServer is going to become an extremely useful debugging tool. One can easily navigate through an application's runtime data on a much higher level than a source-level debugger allows, since the data can be presented in an application-specific way. It's the perfect complement to source-level debugging and other specialized debugging and profiling tools like PerfHUD or PIX. And since it's HTTP, everything also works over the network by design. This is especially useful for console development, where it's often not possible or desirable to add a complex in-game user interface just for debugging and visualization purposes.
Beyond debug visualizations the whole HTTP communication stuff is really inspiring. Imagine what would be possible with XUL (Mozilla's user interface XML dialect).
Some Bioshock-in-progress notes: I'm about 8 hours into the 360 version. As expected the game is pretty damn near perfect (I have only one small gripe: the human character models... compared to the graphics quality of the environment and the Big Daddies and Rosies they really look quite ugly). But overall: a masterpiece! A definitive must-play for everybody who loves computer games. The immersion and mood can't be described with words. It's obvious that the developer dodged any face-to-face dialogs (and rightly so). Yesterday I almost expected to actually meet a sane person face-to-face... but nope, it's all tapes and monitors. I was a little bit disappointed at first... but honestly, it wouldn't have looked very good (quality-wise) if they had chosen to let characters talk to you directly. And handling the story in such an "impersonal" way actually adds a lot to the loneliness and depressive mood of the game. The German localization is *excellent*. Usually I play the original version, but Bioshock's voice-over localization is extremely well done. "Movie-quality well done".
28 Aug 2007
Nebula3 August SDK
Here's the Nebula3 "August SDK". I'm trying to do monthly releases from now on, whether big features are actually ready or not.
Download here.
Keep in mind that this is massive work in progress etc...etc... For instance, if you don't have a SM3.0 graphics card you will very likely hit an assertion when starting the test viewer; there's no code in the application classes yet which checks the hardware config for compatibility, and some of the executables may hit asserts.
Some of the new stuff:
- the rendering system is now up and running in a very early state (no lighting, occlusion culling, etc...)
- FrameShaders (comes with a very simple example frameshader)
- integrated http-server with some debugging pages
- (very limited) support for loading Nebula2 resources
- asynchronous resource loading (the small white cube you see for the first few frames is the placeholder resource which is shown while the actual object is still loading in the background)
- new input subsystem with support for keyboard, mouse and gamepads (tested with Xbox360 gamepads)
- minidump-generation when an assert or error is triggered
The little red cube is the bounding box of the tank. As you can see it doesn't fit, and the tank disappears as soon as the cube is outside the view volume. That's a bug in the Nebula2 legacy resource loaders: the bounding box of the Model isn't updated from the loaded data. But at least it demonstrates that the view volume culling works, heh...
The next big feature is probably dynamic lights and shadows. But I don't think this will fit into September. Since lighting is probably the most important part of a rendering engine, I'll take my time to do it right.
Enjoy,
-Floh.
PS: The executables are linked against the April 2007 DirectX SDK. You may have to install a matching runtime for the executables to start (not sure whether MS finally fixed the D3DX DLL hell). Or... just recompile everything with the DX SDK you have on your machine...
26 Aug 2007
More fun with HTTP...
I pretty much finished the HttpServer stuff over the weekend. All in all it took me about 10 hours of work (I don't even want to imagine how long it would have taken in Nebula2, probably a week or even longer, and the result wouldn't look half as elegant). The HttpServer is a singleton with a TcpServer inside and a number of attached HttpRequestHandlers. A HttpRequestHandler is a user-derivable class which processes an HTTP request (decoded by a HttpRequestReader) and produces a content stream (usually an HTML page) which is wrapped into a HttpResponseWriter object and sent back to the web browser. HttpRequestHandlers may accept or reject an HTTP request. When a new request comes in from a web browser, the HttpServer asks each HttpRequestHandler in turn whether it wants to handle the request. The first handler that accepts will process the request. If no attached request handler accepts, the "default handler" has to produce a valid response (the default handler usually serves the "home page" of the Nebula3 application).
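The accept/reject dispatch described above is a classic chain-of-responsibility. A minimal stand-alone sketch with invented stand-in types (the real HttpServer/HttpRequestHandler classes have richer interfaces) could look like this:

```cpp
// Ask each attached handler in turn whether it accepts the request;
// the first one that accepts handles it, otherwise the default handler does.
#include <cassert>
#include <memory>
#include <string>
#include <vector>

struct HttpRequestHandler
{
    virtual ~HttpRequestHandler() {}
    virtual bool AcceptsRequest(const std::string& uri) const = 0;
    virtual std::string HandleRequest(const std::string& uri) = 0;
};

struct DisplayPageHandler : HttpRequestHandler
{
    bool AcceptsRequest(const std::string& uri) const override
        { return uri.rfind("/display", 0) == 0; }    // accepts "/display..." URIs
    std::string HandleRequest(const std::string&) override
        { return "<html>display page</html>"; }
};

struct DefaultHandler : HttpRequestHandler
{
    bool AcceptsRequest(const std::string&) const override { return true; }
    std::string HandleRequest(const std::string&) override
        { return "<html>home page</html>"; }
};

std::string Dispatch(const std::vector<std::shared_ptr<HttpRequestHandler>>& handlers,
                     HttpRequestHandler& defaultHandler,
                     const std::string& uri)
{
    for (const auto& h : handlers)
    {
        if (h->AcceptsRequest(uri))
        {
            return h->HandleRequest(uri);
        }
    }
    return defaultHandler.HandleRequest(uri);
}
```

Attaching a new page to the server then amounts to deriving another handler class and pushing an instance into the handler array.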
To simplify producing valid HTML pages, I wrote a HtmlPageWriter. This is a subclass of StreamWriter with an interface for inserting HTML elements and text into the stream. All basic HTML markup elements up to tables are supported, so a HttpRequestHandler can do pretty advanced stuff if it wants to.
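The core idea of such a writer can be sketched in a few lines; this is an invented minimal interface, not the actual HtmlPageWriter, but it shows the open/close-tag bookkeeping that keeps the emitted HTML well-formed:

```cpp
// A tiny HTML writer: Begin() pushes a tag, End() closes the most
// recently opened one, so the output always has matching tags.
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

class HtmlWriter
{
public:
    void Begin(const std::string& tag)
    {
        this->stream << "<" << tag << ">";
        this->tags.push_back(tag);
    }
    void End()
    {
        this->stream << "</" << this->tags.back() << ">";
        this->tags.pop_back();
    }
    void Text(const std::string& text) { this->stream << text; }
    std::string Content() const { return this->stream.str(); }
private:
    std::ostringstream stream;
    std::vector<std::string> tags;
};
```

For example, Begin("html"); Begin("body"); Text("Hello"); End(); End(); yields `<html><body>Hello</body></html>`.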
Here's how it works from the outside:
When a Nebula3 application is running (it should be derived from App::RenderApplication, which sets up everything automatically), start a web browser on the same machine and type the address http://127.0.0.1:2100 into the address field. This opens an HTTP connection to the local machine on port 2100 (it's of course also possible to do this from another machine, as long as port 2100 isn't blocked).
The browser should now display something like this:
This is basically the debug home page of the application. At the top there's some basic info like the company and application name that have been set in the Application object. The calendar time displays what the application thinks the current time is. It will update when hitting F5 in the browser.
The Available Pages section lists all attached HttpRequestHandler objects. By deriving new classes from HttpRequestHandler and attaching an instance to the HttpServer singleton, the Available Pages list will grow automatically.
To go to the application's Display debug page, either click on the link, or type http://127.0.0.1:2100/display into the browser's address bar. This should bring up the following page:
The Display page lists the properties of the currently open display, and some information about the display adapters in the system.
But there's also more powerful stuff possible. The IO page lists (among other things) all currently defined assigns, and how they resolve into file system paths. The IO page can be reached through the URI http://127.0.0.1:2100/inout. However, the request handler that builds the IO debug page also has a complete directory lister integrated, so one can check what files the application actually can see under its assigns. Typing http://127.0.0.1:2100/inout?ls=home: into the browser's address field produces this page:
You can actually click on subdirectories and navigate through the entire directory structure. This is all basic stuff which I wrote in the last 3 hours (in the ICE train to Berlin), so it's really simple and fast to get results from the HttpServer system. The one thing that's missing now is a Base64Writer so that it would be possible to send binary data over the line (especially image data), and of course a few more HttpRequestHandlers which expose more debug information to the web browser.
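The assign resolution the IO page visualizes can be sketched with a simple loop; the assign names and paths below are made-up examples (the real IO subsystem works on URIs and handles more cases):

```cpp
// Resolve an assign-prefixed path ("textures:examples/tiger.dds") into a
// file system path. Assigns may be nested, so resolve repeatedly until the
// prefix before the ':' is no longer a known assign.
#include <cassert>
#include <map>
#include <string>

std::string ResolveAssigns(std::map<std::string, std::string> assigns,
                           std::string path)
{
    std::string::size_type colon;
    while ((colon = path.find(':')) != std::string::npos
           && assigns.count(path.substr(0, colon)))
    {
        path = assigns[path.substr(0, colon)] + "/" + path.substr(colon + 1);
    }
    return path;
}
```

With `home` mapped to a hypothetical `c:/nebula3` and `textures` mapped to `home:export/textures`, the path `textures:examples/tiger.dds` resolves through two steps down to a plain file system path.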
25 Aug 2007
Fun with HTTP
Now that Nebula3 is getting a little more complex it's important to know what's going on inside a running application. In Nebula2, there were a couple of in-game debug windows (a texture browser, a "watcher variable" browser, and so on...). But creating new windows was a difficult and boring job (especially because of all the layout code). For Nebula3 I wanted something easier and more powerful: a simple built-in HTTP server which serves HTML pages with all types of debug information. The idea isn't new, others have done this already, but with Nebula2's IO and networking subsystems it wasn't a trivial task to write an HTTP server.
Turns out in Nebula3 it's just a couple of lines (I was actually a little surprised myself, even though I wrote the stuff): the TcpServer class already handles all the connection stuff; throw in a couple of stream readers and writers to decode and encode HTTP requests and responses, and you have a simple HTTP server written in a couple of hours.
Here's roughly how it works:
- create and open a TcpServer object
- once per frame (or less frequently) poll the TcpServer for an array of TcpClientConnections (these are all the outstanding requests from web browsers)
- for each TcpClientConnection:
- attach a HttpRequestReader to the receive stream
- attach a HttpResponseWriter to the send stream
- decide what to send back based on the request, and fill the response with the result
- call TcpClientConnection::Send()
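The core job of the HttpRequestReader in the steps above is decoding the request line the browser sends. A minimal stand-alone sketch (stand-in struct, not the real Nebula3 class):

```cpp
// Parse an HTTP request line like "GET /display HTTP/1.1" into its parts.
#include <cassert>
#include <string>

struct HttpRequest
{
    std::string method;   // "GET", "POST", ...
    std::string uri;      // "/display", "/inout?ls=home:", ...
    bool valid = false;
};

HttpRequest ParseRequestLine(const std::string& line)
{
    HttpRequest req;
    std::string::size_type firstSpace = line.find(' ');
    std::string::size_type secondSpace = line.find(' ', firstSpace + 1);
    if (firstSpace != std::string::npos && secondSpace != std::string::npos)
    {
        req.method = line.substr(0, firstSpace);
        req.uri = line.substr(firstSpace + 1, secondSpace - firstSpace - 1);
        // the third token must announce the HTTP protocol version
        req.valid = (line.compare(secondSpace + 1, 5, "HTTP/") == 0);
    }
    return req;
}
```

The decoded URI is then what the request handlers inspect to decide whether they accept the request.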
Here's the first message served from a Nebula3 application into a web browser:
What's missing now is some sort of HtmlWriter to write out simple HTML pages with the actual content, and a HttpImageWriter to send images to the web browser (to provide a gallery of all currently existing Texture objects).
Fun stuff.
21 Aug 2007
Dead Rising: 2nd Chance
The first time I looked at Dead Rising when it came out, I was like "meh, broken game mechanics". The things that put me off were the time-window mission system and the save mechanism. I'm glad I gave it a second chance last week; now I can't stop playing it, trying to get all the different endings, and I still have to beat the final boss before the mighty BIOSHOCK comes out.
There are 2 things you just need to accept to enjoy the game:
- you need to hit level 10 before the zombie killing is really fun
- "replay" is the central gameplay element, you'll never be able to solve everything in a single play-through
Nebula3 PowerPoint slides
Here's the PowerPoint of the talk I gave yesterday at the GCDC in Leipzig:
n3talk_gcdc07.zip
If you don't have PowerPoint, there's a standalone reader available for download from MS, just google for "PowerPoint viewer".
GCDC is always fun. Unfortunately I could only stay for a few hours and then had to return to Berlin for work. GC is going to be HUUUGE this time. Looks like they've added an additional hall, the business center had to move into a new smaller hall across the "campus". I've been going to GC every year since it started, but I'm still impressed by the Leipzig Exhibition Centre's architecture and size, it's like a trip into the future.
14 Aug 2007
First Render: Xbox360
Ok, here's the Tiger example scene on the 360:
Visually not very impressive, just like the last screenshot. The point is that the basic Nebula3 render loop is now also running on the 360 (complete with asynchronous resource loading), driven by exactly the same high-level code and the same source assets as the PC version.
It took me a bit longer than planned because of the 360's tiled rendering architecture, and because the rendering code is *not* just a straight port from the PC version (you pretty much can do that on the 360, it's just not as optimal as using the more 360-specific APIs).
Because of the tiled rendering I had to do a few basic changes to the CoreGraphics subsystem in order to hide the details from the higher level code:
CoreGraphics now gets hinted by the higher-level rendering code about what is actually being rendered, for instance depth-only geometry, solid or transparent geometry, occlusion checks, fullscreen post-effects, etc...
The second change is that RenderTargets have become much smarter. A RenderTarget object isn't just a single rendering surface as in Nebula2; instead it may contain up to 4 color buffers (with multisampling, if supported in the specific configuration) and an optional depth/stencil buffer. It has Begin()/End() methods which mark the beginning and end of rendering to the render target (this is where the rendering hints come in, to let the RenderTarget classes perform platform-specific actions before and after rendering).
A render target now also knows how to resolve its content most efficiently into a texture or make it available for presentation. Traditionally, a render target and a texture were the same thing in Direct3D, so you could render to a render target, and when rendering was finished the render target could immediately be used as a texture. This doesn't work anymore with multisampled render targets; those have to be resolved into a non-multisampled texture using the IDirect3DDevice9::StretchRect() method. On the 360 everything is still a little bit different depending on the rendering scenario (720p vs. 1080p vs. MSAA types). So the best thing was to hide all those platform specifics inside the RenderTarget object itself. A Nebula3 application doesn't have to be aware of all of these details; it just sets the current render target, does some rendering, and either gets a texture from the render target for subsequent rendering, or presents the result directly.
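The Begin()/End()/resolve interface described above can be sketched with stand-in types; everything here is invented for illustration (the real class wraps D3D and 360-specific resources), the point is only the shape of the API:

```cpp
// Sketch of a smart render target: Begin()/End() bracket the rendering
// (with a hint for platform-specific work), and resolving always produces
// a non-multisampled texture that can be sampled afterwards.
#include <cassert>
#include <string>

struct Texture
{
    int width = 0;
    int height = 0;
    bool multisampled = false;
};

class RenderTarget
{
public:
    RenderTarget(int w, int h, int colorBuffers, bool msaa)
        : width(w), height(h), numColorBuffers(colorBuffers),
          multisampled(msaa), inBegin(false) {}

    // the hint lets a platform-specific subclass prepare tiling,
    // predication, etc. before rendering starts
    void Begin(const std::string& batchHint) { hint = batchHint; inBegin = true; }
    void End() { inBegin = false; }

    Texture ResolveToTexture() const
    {
        Texture tex;
        tex.width = width;
        tex.height = height;
        tex.multisampled = false;   // resolved textures are never multisampled
        return tex;
    }
private:
    int width, height, numColorBuffers;
    bool multisampled, inBegin;
    std::string hint;
};
```

The application just brackets its draw calls with Begin()/End() and asks for the resolved texture; the resolve details (StretchRect, EDRAM resolve, ...) stay inside the platform-specific implementation.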
I also started to work on a "FrameShader" system. This is the next (simplified) version of Nebula2's RenderPath system. It's basically a simple XML schema which contains a description of how a frame is exactly rendered. It has 2 main purposes:
- grouping render batches into frame passes (e.g. depth pass, solid pass, alpha pass) and thus eliminating redundant per-batch state switches (state which is constant across all objects in a pass is set only once)
- easy configuration of offscreen rendering and post effects without having to recompile the application
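To give an idea of what such a frame description could look like, here's a hypothetical sketch; the element and attribute names are invented for illustration, the actual FrameShader schema may differ:

```xml
<!-- hypothetical frameshader: passes group batches and declare the
     state that is set once per pass (clear values, shader, ...) -->
<FrameShader name="Simple">
    <Pass name="Depth" shader="shd:depthonly" clearDepth="1.0">
        <Batch type="DepthOnly"/>
    </Pass>
    <Pass name="Solid" shader="shd:solid" clearColor="0.5,0.5,0.5,1.0">
        <Batch type="Solid"/>
        <Batch type="Alpha"/>
    </Pass>
</FrameShader>
```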
4 Aug 2007
The Nebula3 Render Layer: Graphics
The Graphics subsystem is the highest level graphics-related subsystem in the Render Layer. It's basically the next version of the Mangalore graphics subsystem, but now integrated into Nebula, and connected much tighter to the lower level rendering code. The basic idea is to have a completely autonomous graphics "world" with model-, light- and camera-entities and which only requires minimal communication with the outside world. The main operations on the graphics world are adding and removing entities to/from the world, and updating their positions.
Since the clear separation line between Mangalore's Graphics subsystem and Nebula2's Scene subsystem has been removed completely in Nebula3, many concepts can be implemented with less code and communication overhead.
The Graphics subsystem will also be the "multithreading border" for asynchronous rendering. Graphics and all lower-level rendering subsystems will live in their own fat thread. This is fairly high up in the Nebula3 layer model, but I chose this location because this is where the least amount of communication needs to happen between the game-play-related code and the graphics-related code. With some more "built-in autonomy" of the graphics code it should be possible to run the game code at a completely different frame rate than the render code, although it needs to be determined through real-world experience how practical that is. But it's definitely something I'll try out, since often there's no reason to run game-play code at more than 10 frames per second (Virtua Fighter fans may disagree).
The most important public classes of the Graphics subsystem are:
- ModelEntity
- CameraEntity
- LightEntity
- Stage
- View
A CameraEntity describes a view volume in the graphics world. It provides the View and Projection matrices for rendering.
A LightEntity describes a dynamic light source. The exact properties of Nebula3 light sources haven't been laid out yet, but I'm aiming for a relatively flexible approach (in the end a light source isn't much more than a set of shader parameters).
Stages and Views are new concepts in the Nebula3 Graphics subsystem. In Mangalore there was a single graphics Level class which the graphics entities lived in. There could only be one Level and one active Camera entity at any time. This was fine for the usual case where one world needs to be rendered into the frame buffer. But many game applications require more flexible rendering: it may be necessary to render 3d objects isolated from the rest of the graphics world with their own lighting for use in GUIs, additional views into the graphics world may be required for reflections or things like surveillance monitors, and so on... In Mangalore, the problem was solved with the OffscreenRenderer classes, which are simple to use but were added as an afterthought and have some usage restrictions.
Nebula3 provides a much cleaner solution to the problem through Stages and Views. A Stage is a container for graphics entities and represents its own little graphics world. There may exist multiple stages at the same time, but they are completely isolated from each other. An entity may only be connected to one stage at a time (although it is easily possible to create a clone of an existing entity). Apart from simply grouping entities into a graphics world, the main job of a Stage is to speed up visibility queries by organizing entities by their spatial relationship. An application may implement radically different visibility query schemes by deriving subclasses of Stage.
A View object renders a view into a stage through a CameraEntity into a RenderTarget. There may be any number of View objects connected to any stage. View objects may depend on each other (also on Views connected to a different stage), so that updating one View may force another View to update its RenderTarget first (this is handy when one View's rendering process requires the content of another View's render target as a texture). View objects completely implement their own render loop. Applications are free to implement their own render strategies in subclasses of View (e.g. one-pass-per-light vs. multiple-lights-per-pass, render-to-cubemap, etc...).
So, in summary, a Stage completely controls the visibility query process, while a View completely controls the rendering process.
One of the main jobs of the Graphics subsystem is to determine what actually needs to be rendered by performing visibility queries between entities. A visibility query establishes bi-directional visibility links between entities. Visibility links come in 2 flavors: camera links and light links. Camera links connect a camera to the models in its view volume. Since visibility links are bi-directional, a camera knows all the models in its view volume, and a model knows all the cameras it is visible through. Light links establish the same relationship between lights and models. A light has links to all models it influences, and a model knows all lights it is influenced by.
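The bi-directional nature of visibility links can be sketched with stand-in types (invented for illustration, not the real entity classes): establishing a single link makes each side know about the other.

```cpp
// Bi-directional camera links: the camera records the model as visible,
// and the model records the camera it is visible through.
#include <cassert>
#include <vector>

struct GraphicsEntity
{
    // for a camera: the models in its view volume;
    // for a model: the cameras it is visible through
    std::vector<GraphicsEntity*> cameraLinks;
};

void EstablishCameraLink(GraphicsEntity& camera, GraphicsEntity& model)
{
    camera.cameraLinks.push_back(&model);
    model.cameraLinks.push_back(&camera);
}
```

Light links would work the same way, connecting a light to the models it influences and vice versa.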
The most important class to speed up visibility queries is the internal Cell class. A Cell is a visibility container for graphics entities and child cells. A Cell must adhere to 2 simple rules:
When a graphics entity is attached to a Stage, it will be inserted into the lowest level Cell which "accepts" (completely contains, usually) the entity. When updating the transformation or bounding volume of a graphics entity it will change its position in the Cell hierarchy if necessary.
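The "lowest Cell which accepts the entity" search can be sketched with a recursive containment test; this uses 1D intervals as a stand-in for real 3D bounding boxes, purely for illustration:

```cpp
// Recurse into the lowest child cell that still completely contains the
// entity's bounding box; if no child does, the current cell accepts it.
#include <cassert>
#include <memory>
#include <vector>

struct Box { float minX, maxX; };   // stand-in for a 3D bounding box

struct Cell
{
    Box bounds;
    std::vector<std::unique_ptr<Cell>> children;

    bool Contains(const Box& b) const
    {
        return b.minX >= bounds.minX && b.maxX <= bounds.maxX;
    }

    const Cell* FindContainingCell(const Box& b) const
    {
        if (!this->Contains(b)) return nullptr;
        for (const auto& child : this->children)
        {
            if (const Cell* hit = child->FindContainingCell(b)) return hit;
        }
        return this;   // no child fully contains it, so this cell accepts it
    }
};
```

Updating an entity's transform then just means re-running this search and moving the entity if the resulting cell changed.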
Stages are populated through the StageBuilder class. An application should derive from StageBuilder to create the initial state of a Stage by adding Cells and Entities to it. Nebula3 comes with a standard set of StageBuilders which should suffice for most applications.
This was just a rough overview of the Graphics subsystem. Since there exists only a very basic implementation at the moment, many details may change over the next few weeks.
Since the clear separation line between Mangalore's Graphics subsystem and Nebula2's Scene subsystem has been removed completely in Nebula3, many concepts can be implemented with less code and communication overhead.
The Graphics subsystem will also be the "multithreading border" for asynchronous rendering. Graphics and all lower-level rendering subsystems will live in their own fat-thread. This is fairly high up in the Nebula3 layer model, but I chose this location because this is where the least amount of communication needs to happen between the game-play related code and the graphics-related code. With some more "built-in autonomy" of the graphics code it should be possible to run the game code at a completely different frame rate than the render code, although it needs to be determined through real-world experience how practical that is. But it's definitely something I'll try out, since often there's no reason to run game-play code at more than 10 frames per second (Virtua Fighter fans may disagree).
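As an aside, the decoupled-update idea can be sketched with a simple fixed-timestep accumulator. This is just a generic illustration of the concept, not actual Nebula3 code; all names are made up:

```cpp
// Hypothetical sketch (not actual Nebula3 code) of decoupling the game-play
// tick rate from the render frame rate: the render loop runs every frame,
// game-play logic only steps whenever a full game tick worth of time has
// accumulated.
struct GameLoop {
    double gameTickMs;        // length of one game-play tick, e.g. 100ms = 10Hz
    double accumulator = 0.0; // unconsumed wall-clock time in milliseconds
    int gameTicks = 0;
    int renderFrames = 0;

    explicit GameLoop(double tickMs) : gameTickMs(tickMs) {}

    // called once per render frame with the frame's wall-clock delta in ms
    void Frame(double frameDeltaMs) {
        accumulator += frameDeltaMs;
        while (accumulator >= gameTickMs) {
            accumulator -= gameTickMs;
            ++gameTicks;          // run game-play logic here
        }
        ++renderFrames;           // render with the most recent game state
    }
};
```

With a 100ms game tick and 25ms render frames, four frames pass per game tick, so rendering can interpolate against the latest game state while the game logic runs at its own leisurely pace.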
The most important public classes of the Graphics subsystem are:
- ModelEntity
- CameraEntity
- LightEntity
- Stage
- View
A CameraEntity describes a view volume in the graphics world. It provides the View and Projection matrices for rendering.
A LightEntity describes a dynamic light source. The exact properties of Nebula3 light sources haven't been laid out yet, but I'm aiming for a relatively flexible approach (in the end, a light source isn't much more than a set of shader parameters).
Stages and Views are new concepts in the Nebula3 Graphics subsystem. In Mangalore there was a single graphics Level class where the graphics entities lived. There could only be one Level and one active Camera entity at any time. This was fine for the usual case where one world needs to be rendered into the frame buffer. But many game applications require more flexible rendering: it may be necessary to render 3d objects isolated from the rest of the graphics world with their own lighting for use in GUIs, additional views into the graphics world may be required for reflections or things like surveillance monitors, and so on... In Mangalore, the problem was solved with the OffscreenRenderer classes, which are simple to use but were added as an afterthought and have some usage restrictions.
Nebula3 provides a much cleaner solution to the problem through Stages and Views. A Stage is a container for graphics entities and represents its own little graphics world. There may exist multiple stages at the same time but they are completely isolated from the other stages. An entity may only be connected to one stage at a time (although it is easily possible to create a clone of an existing entity). Apart from simply grouping entities into a graphics world, the main job of a Stage is to speed up visibility queries by organizing entities by their spatial relationship. An application may implement radically different visibility query schemes by deriving subclasses of Stage.
A View object renders a view into a stage through a CameraEntity into a RenderTarget. There may be any number of View objects connected to any stage. View objects may depend on each other (also on Views connected to a different stage), so that updating one View may force another View to update its RenderTarget first (this is handy when one View's rendering process requires the content of another View's render target as a texture). View objects completely implement their own render loop. Applications are free to implement their own render strategies in subclasses of View (e.g. one-pass-per-light vs. multiple-lights-per-pass, render-to-cubemap, etc...).
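The View-to-View dependency handling described above boils down to something like this. This is a heavily simplified sketch with made-up names, not the real Nebula3 API:

```cpp
// Simplified sketch (illustrative names, not the actual Nebula3 classes):
// a View may depend on other Views whose render targets must be updated
// before this View's own render pass can run.
#include <string>
#include <vector>

struct View {
    std::string name;
    std::vector<View*> dependencies;  // views whose render targets we sample
    bool upToDate = false;

    // render all dependencies first, then this view's own pass
    void Render(std::vector<std::string>& order) {
        for (View* dep : dependencies) {
            if (!dep->upToDate) dep->Render(order);
        }
        order.push_back(name);  // stand-in for the actual render loop
        upToDate = true;
    }
};
```

A "main" View that samples a "reflection" View's render target as a texture would simply list it as a dependency, and the reflection pass automatically runs first.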
So, in summary, a Stage completely controls the visibility query process, while a View completely controls the rendering process.
One of the main jobs of the Graphics subsystem is to determine what actually needs to be rendered by performing visibility queries between entities. A visibility query establishes bi-directional visibility links between entities. Visibility links come in 2 flavors: camera links and light links. Camera links connect a camera to the models in its view volume. Since visibility links are bi-directional, a camera knows all the models in its view volume, and a model knows all the cameras it is visible through. Light links establish the same relationship between lights and models. A light has links to all models it influences, and a model knows all lights it is influenced by.
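In code, the bi-directional nature of visibility links just means that establishing a link always records both directions. Again a minimal illustration with invented names, not the actual classes:

```cpp
// Minimal sketch (invented names): bi-directional visibility links between
// graphics entities, in the two flavors described above. Linking a camera to
// a model also records the back-link from the model to the camera; the same
// mechanism is used for lights.
#include <vector>

struct GraphicsEntity {
    std::vector<GraphicsEntity*> cameraLinks;
    std::vector<GraphicsEntity*> lightLinks;
};

enum class LinkType { Camera, Light };

void AddVisibilityLink(GraphicsEntity& a, GraphicsEntity& b, LinkType t) {
    auto& aLinks = (t == LinkType::Camera) ? a.cameraLinks : a.lightLinks;
    auto& bLinks = (t == LinkType::Camera) ? b.cameraLinks : b.lightLinks;
    aLinks.push_back(&b);   // e.g. camera -> model
    bLinks.push_back(&a);   // ...and model -> camera
}
```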
The most important class to speed up visibility queries is the internal Cell class. A Cell is a visibility container for graphics entities and child cells. A Cell must adhere to 2 simple rules:
- if a Cell is completely visible, all its entities and child Cells must be completely visible
- if a Cell is completely invisible, all its entities and child Cells must be completely invisible
When a graphics entity is attached to a Stage, it will be inserted into the lowest-level Cell which "accepts" (usually: completely contains) the entity. When the transformation or bounding volume of a graphics entity changes, the entity will change its position in the Cell hierarchy if necessary.
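The "lowest Cell which accepts the entity" rule can be sketched like this, with 1D intervals standing in for real bounding volumes (illustrative names, not the actual implementation):

```cpp
// Sketch of "insert into the lowest Cell that completely contains the
// entity", with 1D intervals in place of real bounding boxes to keep the
// idea visible. Names are illustrative, not the actual Nebula3 API.
#include <vector>

struct Interval {
    float min, max;
    bool Contains(const Interval& o) const { return min <= o.min && o.max <= max; }
};

struct Cell {
    Interval bounds;
    std::vector<Cell> children;

    // returns the deepest Cell whose bounds fully contain 'e', or nullptr
    Cell* FindContainer(const Interval& e) {
        if (!bounds.Contains(e)) return nullptr;
        for (Cell& child : children) {
            if (Cell* deeper = child.FindContainer(e)) return deeper;
        }
        return this;  // no child accepts the entity, so this is the lowest level
    }
};
```

An entity that straddles a child-cell boundary naturally ends up in the parent, which is exactly what the two visibility rules above require.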
Stages are populated through the StageBuilder class. An application should derive from StageBuilder to create the initial state of a Stage by adding Cells and Entities to it. Nebula3 comes with a standard set of StageBuilders which should suffice for most applications.
This was just a rough overview of the Graphics subsystem. Since there exists only a very basic implementation at the moment, many details may change over the next few weeks.
29 Jul 2007
First Render
First Nebula3 screenshot fresh from my notebook (please excuse the jpg artefacts):
It's the Tiger tank from our toolkit examples, loaded through the legacy N2 resource loaders. Visually, it's nothing to write home about of course - it's rendered using the most simple textured shader possible. But nonetheless this is a very important milestone for Nebula3 because all essential building blocks of the Render Layer are now up and running, and a simple render loop is possible. A lot of things still need to be done of course: putting the renderer into its own thread, occlusion culling, realtime lights and shadows, lots of tests and benchmarks, and so on... but once the first triangle is on the screen, the rest is easy ;)
24 Jul 2007
The Nebula3 Render Layer: CoreGraphics
The CoreGraphics subsystem is mainly a compatibility wrapper around the host's 3d rendering API. It's designed to support a Direct3D/OpenGL-style API with programmable shaders without any functionality or performance compromises. The general functionality of the CoreGraphics subsystem is roughly the same as the Nebula2 gfx2 subsystem, however CoreGraphics fixes many of the issues that popped up during the lifetime of the Nebula2 graphics system.
At first glance, CoreGraphics looks much more complex than Nebula2 because there are many more classes. The reason is simply that CoreGraphics classes are smaller and more specialized. The functionality of most classes can be described in one simple sentence, while Nebula2 had quite a few classes (like nGfxServer2) which grew quite big because they tried to do several things at once.
A typical Nebula3 application won't have to deal very much with the CoreGraphics subsystem, but instead with higher-level subsystems like Graphics (which will be described in a later post).
Some of the more important design goals of CoreGraphics are:
- allow ports to Direct3D9, Direct3D10 and Xbox360 without ANY compromises:
- CoreGraphics allows much more freedom when porting to other platforms: instead of Nebula2's porting-through-virtual-functions approach, CoreGraphics uses porting-by-conditional-typedefs (and -subclassing). A port is free to override any single class without any performance compromises (e.g. platform-dependent inline methods are possible)
- improved resource management:
- Nebula3 decouples resource usage and resource initialization. Initialization happens through ResourceLoader classes, which keeps the actual resource classes small and tight, and the resource system is much more modular (to see why this is a problem that had to be solved, look at Nebula2's nTexture2 class)
- less centralized:
- instead of one big nGfxServer2 class there are now several more specialized singletons:
- RenderDevice: handles rendering of primitive groups to a render target
- DisplayDevice: handles display-setup and -management (under Win32: owns the application window, runs the Windows message pump, can be queried about supported fullscreen modes, etc...)
- TransformDevice: manages the transformation matrices required for rendering, takes View, Projection and Model matrices as input, and provides inverted and concatenated matrices (like ModelViewProjection, InvView, etc...)
- ShaderServer: the heart of the shader system, see below for details
- improved offscreen rendering:
- rendering to an offscreen render target is now treated as the norm, as opposed to Nebula2 which was still designed around the render-to-backbuffer case, with offscreen-rendering being possible but somewhat awkward
- vastly improved shader system:
- provides the base to reduce the overhead for switching and updating shaders when rendering typical scenes with many different objects and materials
- as in Nebula2, a Shader is basically a Direct3D effect (a collection of techniques, which are made of passes, which are collections of render states)
- ShaderInstances are cloned effects with their own set of shader parameter values
- setting shader parameters is now much more direct through ShaderVariables (same philosophy as DX10)
- ShaderVariations and ShaderFeature bits: A shader may offer different specialized variations which are selected through a feature bit mask. For instance a feature may be named "Depth", "Color", "Opaque", "Translucent", "Skinned", "Unlit", "PointLight", and a shader may offer specialized variations for feature combinations like "Depth | Skinned", "Color | Skinned | Unlit", "Color | Skinned | PointLight". The high level rendering code would set feature bits as needed (during the depth pass, the Depth feature would be switched on for instance), and depending on the current feature bit mask, the right specialized shader variation would automatically be selected for rendering. Together with the right asset tools, ShaderVariations and ShaderFeatures should help a lot to fix the various maintenance and runtime problems associated with programmable shaders (think shader-library vs. über-shaders and so on...).
- DeviceLost/Restored events and WinProc mouse and keyboard messages are now handled through general EventHandlers instead of being hardwired into the graphics system
- VertexBuffer and IndexBuffer are back as public classes
- vertex components now support compressed formats like Short2, Short4, UByte4N, etc...
- DisplayDevice offers several convenience methods to get the list of supported display modes or the current desktop display mode, and to get detailed information about the current display (hardware, vendor and driver version info)
- one can now actually check whether 3d rendering is supported on the current host by calling the static RenderDevice::CanCreate() method before actually opening the application window
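To make the ShaderVariation/ShaderFeature idea from the list above a bit more concrete, here's a rough sketch of how feature names could map to bits and how a matching variation would be selected. This is my own simplified illustration, not the real Nebula3 classes:

```cpp
// Simplified illustration (not the actual Nebula3 classes) of the
// ShaderFeature bit mask idea: feature names are assigned bits on first use,
// the high-level code ORs a mask together, and the shader variation whose
// mask exactly matches is selected for rendering.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

class ShaderFeature {
    std::unordered_map<std::string, uint32_t> bits;
    uint32_t nextBit = 1;
public:
    // returns a stable bit for the feature name, allocating one on first use
    uint32_t Mask(const std::string& name) {
        auto it = bits.find(name);
        if (it == bits.end()) {
            it = bits.emplace(name, nextBit).first;
            nextBit <<= 1;
        }
        return it->second;
    }
};

struct Variation { uint32_t mask; std::string technique; };

// pick the variation exactly matching the current feature bit mask
const Variation* Select(const std::vector<Variation>& vars, uint32_t mask) {
    for (const auto& v : vars) if (v.mask == mask) return &v;
    return nullptr;
}
```

During the depth pass the high-level code would OR in the "Depth" bit, a skinned object would add "Skinned", and `Select()` would come back with the "Depth | Skinned" variation.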
23 Jul 2007
Nebula3 SDK - Jul 2007
Here's the new July 2007 Nebula3 SDK release. I had to do some restructuring in the directory structure because of the Xbox360 specific stuff (which is not contained of course). Compiling is much faster now because I switched to precompiled headers. Please keep in mind that everything in the Render Layer is still under construction. The next few posts will mainly describe the various Render Layer subsystems.
Have fun!
-Floh.
7 Jul 2007
Overlord!
What a delicious game! It's one of those rare cases where a great idea (evil Pikmins) actually turns into a great game. There are so many odds against such an "old-school" game-design-centric cross-genre game (most importantly finding a publisher which is willing to take the risk, usually your name has to be Miyamoto or Wright if you show up at a publisher with a game concept like this...) that it seems like a little wonder to me that this little gem actually saw Gold. There must be 50 similar projects which didn't make it past the prototype stage. Go out and buy this game, something like this only happens once every 3 years or so :)
PS: I'm trying to get a new Nebula3 source release out the door ASAP. Still no 3d rendering though, there are some delays on the rendering code due to the Xbox360 work and I'm also very busy working on Drakensang at the moment.
24 Jun 2007
Dirt!
MS should feel embarrassed that their platform-exclusive-1st-party-showcase racing sim Forza2 doesn't look half as pretty as the 3rd-party-multi-platform game Dirt (dIRT? DiRT? DIrT? whatever...). I went straight from playing Forza to Dirt and my first reaction was basically the German equivalent of "OMG that looks f*cking AWESOME!". Sure, Forza has another focus, it's a hardcore racing sim for enthusiasts, has a much more realistic driving model, online features and a multiplayer mode which can actually be called "multiplayer". While Dirt isn't an arcade racer by any means, it definitely feels a bit more "arcadey" than Forza2. Still, Forza should at least have tried to come closer to Dirt-level graphics. Compared to Dirt, Forza2 kinda looks like the boring civilian flight simulator with the ultra-realistic flight model and simplistic graphics that only hobby pilots fly.
The graphics in Dirt are truly "next gen", which in comparison can't really be said about Forza. I would go as far as to say that Dirt is at the moment the second best looking 360 game after Gears, and the new "other" graphics showcase on the 360.
PS: Don't judge Dirt's graphics by the demo on Xbox Live. The demo tracks there have hardly any vegetation in them. The European and Japanese tracks in the full version look much much better. And the immersion and sense of speed when racing a wet narrow forest track at 180 km/h in Dirt is simply unbelievable.
23 Jun 2007
Status Update...
Busy...
Working on the Xbox360 is total joy. Unfortunately I can't go into details because of NDA. I did some work on the original Xbox already, so it's not all new to me, but I was once again pleasantly surprised how painless the XDK setup is. All you have to do is plug in some cables, double-click the installer, and after the installation ends (10..15 minutes) you're ready to compile and remote-debug the samples inside Visual Studio. Very impressive. The APIs basically offer everything a game programmer ever needs (and misses on the PC), the documentation is excellent. There's a wealth of high level APIs, but it's absolutely possible (and relatively painless) to go down to the metal if needed. Both are very important on a console. When the high level APIs are missing, too much time is wasted reinventing wheels (I guess that's one of the reasons why ports from the 360 to PS3 often take so long), and the low level stuff is necessary for optimizing performance-critical code (which is usually not possible or worthwhile on the PC because of all the different hardware configurations).
At the moment I only have 2 or 3 hours daily and the weekends to actually work on Nebula3, but despite that it's coming along nicely.
The Nebula3 Foundation Layer is already up and running on the 360. This is not a quick "just-fix-the-compile-errors" port, but a "proper" port which makes use of the 360's specialties. I had to take 2 things out, and move into a new "Add-On Layer": the HTTP stuff and the Database subsystem. The only other things that needed to be fixed were some data alignment issues and (naturally) byte order issues in the BinaryReader/BinaryWriter classes. I'll add some testing and benchmarking classes next week and then move on to the Render Layer.
Considering the differences between the 2 platforms (different byte order, 32 bit vs. 64 bit, different compiler back ends), everything was completely painless. The usual first experience when bringing code to a new platform is looking at pages of compiler warnings scrolling by. On the 360 SDK most of Nebula3 compiled and ran out of the box on warning level 4 without warnings.
I really wish MS would bring ALL the high level XDK APIs and tools over to Windows! They already started with XACT, XInput and PIX, but there's so much more cool stuff on the 360 that's missing on the PC and which would make a PC game programmer's life so much easier...
PS: Prince Of Persia Classic rocks. Best Arcade game ever :)
13 Jun 2007
CvsMonitor rocks
If you use CVS for version control and don't know CvsMonitor yet, you should definitely have a look at it. It provides in-depth stats for CVS repositories and, most importantly, a changeset view, so you can see immediately who's responsible for breaking the nightly build, hehe...
Here's the link: http://ali.as/devel/cvsmonitor/index.html
9 Jun 2007
More Forza
Holy sh*t. I just discovered that one can download the car setup and the entire replay session of the top 100 players of each career race... No more guessing how exactly the world's #1 player managed to finish the same race 2 minutes faster, you can watch every single second of it, complete with telemetry information. Awesome. I'm also proud to report that I just finished the Suzuka Circuit at the "Kumho Tire 250HP International" event at position 260 out of roughly 81,000 players who raced there so far. Not too shabby, eh? :)
8 Jun 2007
Forza2!
Yo gang, check out my pimped out Camaro :o) It's way overpowered and handles like a container ship, so it's totally useless for racing. But that's what a muscle car is all about, right? Need to improve my paint-job skills though...
Love the game so far... It's much, much deeper than GT4. Could have more tracks though, and the environment graphics are a bit meh for a 360 title. Oh, and definitely not enough BMW models. I'd like to see the new M3 and the Z4 Coupé, please.
3 Jun 2007
Tomb Raiding
Being a veteran home computer and PC gamer I was never really interested in the Tomb Raider games. But a lack of good new 360 games in April and May, the fact that I heard good things about Tomb Raider Legends, and the unbeatable price tag of 20 Euro (new) seduced me into giving ol' Lara a try. And I must say I'm enjoying the game quite a bit. The graphics are relatively simple but next-genish enough (nice dynamic lighting and shadowing, although overall the game is a bit dark). The character designs differ a lot in quality. Lara is well done, but most of the other story characters ... not so much. I like the fact that Crystal Dynamics went with a clear comic style for the characters instead of going the realistic route or some mishmash style. And Lara's British accent is simply hot, thank the gods they didn't choose one of those typical American mickey mouse voices. The game feels best when climbing around in the levels, controlling Lara is very intuitive and the animations are very well done and connect to each other nicely. But where there's light, there's also shadow:
- Interactive cut scenes: this Quick Time Event crap needs to go. It's the single poorest game-play mechanic in all gaming history. The thing is, humanity has known this since Dragon's Lair. But those who don't learn from history are doomed to repeat it. QTEs reduce the gamer to a trained lab-monkey who needs to push a button when the red light flashes to get his next cookie. Personally I'm slightly offended by this shit. Why did we suffer through millions of years of evolution when a chimpanzee could do the exact same job??
- Some of the boss fights require heavy guesswork. At one time I even had to look for a walk-through in the intertubes because I was stuck. That's just poor. I don't want to trial-and-error my way through the game, I want to have logical clues presented how to proceed next, please. In contrast, most of the in-game puzzles are very well done and logical, but may require some serious hand-eye-coordination.
Math lib changes
While working on the new graphics and scene subsystems I eventually came to the point where some math code was needed (managing the view, world and projection transforms and their combinations, and flattening matrix hierarchies into world space). My original plan was to create a low level functional math library which looks much like HLSL and uses SSE intrinsics for performance. I started this stuff and soon it became clear that it would be quite an undertaking to correctly implement and test all this. And then there would only be an SSE implementation, while SSE2..4 and 3DNow! are still around, and completely different intrinsics exist on the Xbox360 and other platforms. Sure, the problem could be solved by throwing manpower at it. But that's never a good idea for solving programming problems. So I looked around for a more pragmatic solution and found it in the form of the D3DX math library. The D3DX math functions are very complete, specialized for games, and support all current (and presumably future) vector instruction sets under the hood, and the 360 math library basically offers the same feature set (although it's much more down to the metal).
There are 2 disadvantages:
- additional calling overhead since D3DX functions are not inlined
- not portable to other platforms except DirectX and Xbox360
There are a few other aspects to consider:
- With C++ math code, performance shouldn't get into the way of convenience. For instance, a proper operator+() always costs some performance because a temporary object must be constructed (the return value). But it's much more convenient and readable to use C++ operator overloading in generic game code instead of using (for instance) intrinsics. The point is to pay special attention to inner loops and use lower level code there when it actually makes sense.
- There should only be very few places in Nebula where heavy math code is actually executed on the CPU (in Nebula2 these are: particle systems, animation code, computing shadow caster geometry for skinned characters). In Nebula3 these tasks are either offloaded to the GPU or will become obsolete. In general, the CPU should NEVER touch geometry per-frame and per-vertex.
Another basic change I wanted to do for some time was to differentiate between points and vectors. There is now a Math::point and a Math::vector class which both derive from the generic Math::float4 class. A point describes a position in 3d space and a vector describes a direction and length in 3d space, generalized to homogeneous 4d space (the w component of a point is always 1.0, the w component of a vector is always 0.0). By creating 2 different classes with the right operator overloading one can encode the computation rules for points and vectors right into the C++ code, so that the compiler throws an error when the rules are violated:
- (point + point) is illegal
- (point * scalar) is illegal
- point = point + vector
- vector = point - point
- vector = vector + vector
- vector = vector - vector
- vector = vector * scalar
- etc...
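A minimal sketch of how these rules can be encoded with operator overloading (simplified: no D3DX backing, no generic float4 base class, just the overload set itself):

```cpp
// Simplified sketch of the point/vector split: a point is a position
// (homogeneous w = 1), a vector is a direction and length (w = 0), and the
// overload set only admits the legal combinations from the rules above.
struct vector {
    float x, y, z;           // w is implicitly 0
    vector operator+(vector o) const { return {x + o.x, y + o.y, z + o.z}; }
    vector operator-(vector o) const { return {x - o.x, y - o.y, z - o.z}; }
    vector operator*(float s) const { return {x * s, y * s, z * s}; }
};

struct point {
    float x, y, z;           // w is implicitly 1
    point operator+(vector v) const { return {x + v.x, y + v.y, z + v.z}; }
    vector operator-(point o) const { return {x - o.x, y - o.y, z - o.z}; }
    // no operator+(point) and no operator*(float): point + point and
    // point * scalar simply fail to compile, exactly as the rules demand
};
```

The illegal combinations never make it past the compiler because no matching operator exists, which turns a whole class of subtle math bugs into compile errors.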
So the new math lib basically looks like this:
The following low level classes directly call D3DX functions:
* matrix44 (D3DXMatrix functions)
* float4 (D3DXVec4 functions)
* quaternion (D3DXQuaternion functions)
* plane (D3DXPlane functions)
All other classes (like bbox, sphere, line, etc...) are generic and use functionality provided by the above low level classes. There is also a new scalar type (which is just a typedef'ed float), which helps in porting to some platforms (for instance, on NintendoDS, all math code is fixed point, so a scalar would be typedef'ed from one of the fixed point types). I still have to write a complete set of test and benchmark classes for the math library, but for now I'm quite happy that a very big chunk of work has been reduced to about 2 days of implementation time.