I have only been working very sporadically on the new N3 animation code during the past few months, restarting from scratch several times when I felt I was heading in the wrong direction. In the past couple of weeks I could finally work nearly full-time on the new animation and character subsystems, and none too early, because only then was I really satisfied with the overall design. The new animation system should fix all the little problems we encountered during the development of Drakensang and offer a few new features which we wished we had earlier. And it resides under a much cleaner and more intuitive interface than before (this was the main reason why I started over several times: finding class interfaces which encapsulate the new functionality but are still simple to work with).
One of the earliest design decisions was to split the animation code into two separate subsystems. CoreAnimation is the low-level system which offers high-performance, simple building blocks for a more complex, higher-level animation system. The high-level Animation subsystem sits on top of CoreAnimation and provides services like mapping abstract animation names to actual clip names, and an animation sequencer which allows easy control over complex animation blending scenarios.
The main focus of CoreAnimation is high performance for basic operations like sampling and mixing of animation data. CoreAnimation may contain platform-specific optimizations (although none of these have been implemented so far; everything in CoreAnimation currently works with the Nebula3 math library classes). CoreAnimation also assumes that motion capture is the primary source of animation data. Much like sampled audio compared to MIDI, motion-capture data consists of a large number of animation keys placed at even intervals, instead of a few manually placed keys at arbitrary positions on the timeline. The advantage is that working with this kind of animation data can be reduced to a few very simple stream operations, which are well suited for SSE, GPUs or Cell SPUs. The disadvantage compared to a spline-based animation system is, of course, a lot more data.
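To give an idea of what "simple stream operations" means in practice, here is a small standalone sketch (not the actual Nebula3 code; all names and types here are made up for illustration): with evenly spaced keys, sampling a curve at an arbitrary time boils down to computing two key indices and one component-wise lerp, with no searching along the timeline.

```cpp
#include <vector>

struct float4 { float x, y, z, w; };

// sample one curve at time t; keys are spaced keyDuration apart
// (assumes a non-empty key array)
float4 SampleCurve(const std::vector<float4>& keys, float keyDuration, float t)
{
    // clamp t into the curve's valid time range
    const float maxTime = keyDuration * float(keys.size() - 1);
    if (t < 0.0f)    t = 0.0f;
    if (t > maxTime) t = maxTime;

    // because keys are evenly spaced, the neighbouring key indices and the
    // in-between position follow directly from t (no searching required)
    const int i0 = int(t / keyDuration);
    const int i1 = (i0 + 1 < int(keys.size())) ? (i0 + 1) : i0;
    const float l = (t - float(i0) * keyDuration) / keyDuration;

    // component-wise linear interpolation: k = k0 + l * (k1 - k0)
    const float4& k0 = keys[i0];
    const float4& k1 = keys[i1];
    return float4 { k0.x + l * (k1.x - k0.x),
                    k0.y + l * (k1.y - k0.y),
                    k0.z + l * (k1.z - k0.z),
                    k0.w + l * (k1.w - k0.w) };
}
```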
Spline animation will be used in other parts of Nebula3, but support for this will likely go into a few simple Math lib classes, and not into its own subsystem.
Although not limited to them, CoreAnimation assumes that skinned characters are the primary animation targets. This does not in any way limit the use of CoreAnimation for animating other types of target objects, but the overall design and optimizations "favour" the structure and size of animation data of a typical character (i.e. hundreds of clips, hundreds of animation curves per clip, and a few hundred to a few thousand animation keys per clip).
Another design constraint was that the new animation system needs to work with existing data. The animation export code in the Maya plugin, and the further optimization and bundling during batch processing, isn't exactly trivial, and although much of the code on the tools side would benefit from a cleanup as well (as is usually the case for most tools code in a production environment), I didn't feel like rewriting this stuff too, especially since there's much more work on the tools side of the animation system than on the runtime side.
So without further ado I present: the classes of CoreAnimation :)
- AnimResource: The AnimResource class holds all the animation data which belongs to one target object (for instance, all the animation data for a character), that is, an array of AnimClip objects, and an AnimKeyBuffer. AnimResources are normal Nebula3 resource objects, and thus can be shared by ResourceId and can be loaded asynchronously.
- StreamAnimationLoader: The StreamAnimationLoader is a standard stream loader subclass which initializes an AnimResource object from a data stream containing the animation data. Currently, only Nebula2 binary .nax files are accepted.
- AnimKeyBuffer: This is where all the animation keys live for a single AnimResource. It is just a single memory block of float4 keys; the key buffer itself contains no information about how the keys relate to animation clips and curves. However, the animation exporter tools make sure that the keys are arranged in a cache-friendly manner (keys are interleaved in memory, so that the keys required for a single sampling operation are close to each other in memory).
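To illustrate what such an interleaved layout could look like (the exact layout is defined by the exporter tools, so treat this as an assumption for illustration only): all curve keys belonging to one sample time sit next to each other, so a single sampling pass reads the key buffer more or less sequentially, and the offset of an individual key is simple index math.

```cpp
#include <cstddef>

// Assumed interleaved layout, in float4 units (illustration only):
//
//   sample 0:  [curve 0][curve 1][curve 2] ... [curve N-1]
//   sample 1:  [curve 0][curve 1][curve 2] ... [curve N-1]
//   ...
//
std::size_t KeyOffset(std::size_t clipStartKeyIndex,  // first "row" of the clip in the buffer
                      std::size_t keyStride,          // number of float4 keys per row
                      std::size_t keyIndex,           // sample index within the clip
                      std::size_t curveSlot)          // the curve's slot within a row
{
    return (clipStartKeyIndex + keyIndex) * keyStride + curveSlot;
}
```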
- AnimClip: An AnimClip groups a set of AnimCurves under a common name (e.g. "walk", "run", "idle", etc...). Clip names are usually the lowest-level component a Nebula3 application needs to care about when working with the animation subsystem. Clips have a number of properties and restrictions (a data-layout sketch for clips and curves follows further below, after the AnimCurve description):
- a human-readable name; this is stored and handed around as a StringAtom, so no actual string data is copied
- a clip contains a number of AnimCurves (for instance, a typical character animation clip has 3 curves per skeleton joint: one each for the translation, rotation and scaling of the joint)
- all anim curves in a clip must have the same key duration and number of keys
- a clip has a duration (keyDuration * numKeys)
- a pre-infinity type and a post-infinity type define how the clip is sampled when the sample time lies outside of the clip's time range (clamp or cycle).
- AnimCurve: An AnimCurve groups all the keys which describe the change of a 4D value over a range of time. For instance, the animated translation of a single joint of a character skeleton in one clip is described by one animation curve in that clip. AnimCurves don't actually hold the animation keys; instead they just describe where the keys are located in the AnimKeyBuffer of the parent AnimResource. AnimCurves have the following properties:
- Active/Inactive: an inactive AnimCurve is a curve which doesn't contribute to the final result; for instance, if an AnimClip only animates part of a character skeleton (like the upper body), the remaining curves in the clip are set to inactive. Inactive curves don't have any keys in the key buffer.
- Static/Dynamic: an AnimCurve whose value doesn't change over time is marked as static by the exporter tool, and doesn't take up any space in the anim key buffer.
- CurveType: this is a hint for the higher-level animation code about what type of data is contained in the animation curve. For instance, if an AnimCurve describes a rotation, the keys must be interpreted as quaternions, and sampling and mixing must use spherical operations.
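To summarize the clip and curve properties described above, here is a compact data-layout sketch. This is not the actual class interface; member names and types are assumptions, and the real classes use Nebula3 containers (StringAtom, fixed arrays, etc...) instead of the std types used here:

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct float4 { float x, y, z, w; };

// how a clip is sampled outside of its time range
enum class InfinityType { Clamp, Cycle };

// hint for the higher-level code how to interpret and mix the float4 keys
enum class CurveType { Translation, Rotation, Scale, Color, Generic };

struct AnimCurve
{
    bool isActive;               // inactive: contributes nothing, no keys in the key buffer
    bool isStatic;               // static: constant value, no keys in the key buffer
    CurveType curveType;         // e.g. Rotation curves are sampled/mixed as quaternions
    float4 staticValue;          // only meaningful for static curves
    std::size_t firstKeySlot;    // where the curve's keys live in the parent AnimKeyBuffer
};

struct AnimClip
{
    std::string name;                // real code: StringAtom, so no string copies
    std::vector<AnimCurve> curves;   // e.g. 3 curves per joint for a character clip
    float keyDuration;               // identical for all curves in the clip
    std::size_t numKeys;             // identical for all curves in the clip
    InfinityType preInfinityType;    // sampling before the clip's start: clamp or cycle
    InfinityType postInfinityType;   // sampling after the clip's end: clamp or cycle

    float ClipDuration() const { return keyDuration * float(numKeys); }
};
```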
- Animation keys: There isn't any "AnimKey" class in the CoreAnimation system; instead, the atomic key data type is float4, which may be interpreted as a point, vector, quaternion or color by the higher-level parts of the animation system. There is no support for scalar keys, since most animated data in a 3D engine is vector data, and vector processing hardware likes its data in 128-bit chunks anyway.
- AnimEvent: Animation events are triggered when the "play cursor" passes over them. The same concept was called "HotSpots" in Nebula2. AnimEvents haven't actually been implemented yet, but nevertheless they are essential for synchronizing all kinds of things with an animation. For instance, a walking animation should trigger events when a foot touches the ground, so that footstep sounds and dust particles can be created at the right time and position. Events are also useful for synchronizing the start of a new animation with a currently playing one (for instance, start the "turn left" animation clip when the current animation has the left foot on the ground, etc...).
- AnimSampler: The AnimSampler class only has one static method called Sample(). It samples the animation data from a single AnimClip at a specific sampling time into a target AnimSampleBuffer. This is one of the two "front-end" features provided by the CoreAnimation system (sampling and mixing). The AnimSampler is used by the higher-level Animation subsystem.
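As a rough standalone sketch of what such a sampling pass boils down to (simplified stand-in types, not the actual AnimSampler code): loop over the curves of the clip, write either the static value or the interpolated key into the sample buffer, and record for every sample whether an active curve contributed to it (the sample counts described under AnimSampleBuffer below). For brevity the sketch uses a plain lerp for all curves, while the real code would have to use spherical interpolation for rotation curves, as noted under CurveType above.

```cpp
#include <cstddef>
#include <vector>

struct float4 { float x, y, z, w; };

// simplified stand-in for an AnimCurve as seen by the sampler
struct CurveView
{
    bool isActive;          // inactive curves produce an invalid sample
    bool isStatic;          // static curves have one constant value and no keys
    float4 staticValue;
    const float4* keys;     // points into the AnimKeyBuffer (only valid for animated curves)
};

static float4 Lerp(const float4& a, const float4& b, float l)
{
    return float4 { a.x + l * (b.x - a.x), a.y + l * (b.y - a.y),
                    a.z + l * (b.z - a.z), a.w + l * (b.w - a.w) };
}

// sample all curves of one clip between key indices i0 and i1 with blend factor l
void SampleClip(const std::vector<CurveView>& curves,
                std::size_t i0, std::size_t i1, float l,
                std::vector<float4>& outSamples,
                std::vector<unsigned char>& outSampleCounts)
{
    for (std::size_t c = 0; c < curves.size(); c++)
    {
        const CurveView& curve = curves[c];
        if (!curve.isActive)
        {
            outSampleCounts[c] = 0;     // sample is invalid, its value is undefined
            continue;
        }
        outSamples[c] = curve.isStatic ? curve.staticValue
                                       : Lerp(curve.keys[i0], curve.keys[i1], l);
        outSampleCounts[c] = 1;         // exactly one clip contributed to this sample
    }
}
```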
- AnimMixer: Like the AnimSampler class, the AnimMixer class only provides one simple static method called Mix(). The method takes two AnimSampleBuffers and a lerp value (usually between 0 and 1) and mixes the samples from the two input buffers into the output buffer (k = k0 + l * (k1 - k0)). The AnimMixer is used for priority-blending of animation clips higher up in the Animation subsystem.
- AnimSampleBuffer: The AnimSampleBuffer holds the resulting samples of the AnimSampler::Sample() method, and is used as input and output for the AnimMixer::Mix() method. An important difference from the AnimKeyBuffer is that the AnimSampleBuffer also has a separate "SampleCounts" array. This array keeps track of the number of sampling operations which have accumulated for each sample while sampling and mixing animation clips into a final result. This is necessary for mixing partial clips correctly (clips which only influence a part of a character skeleton). The AnimSampler::Sample() method sets the sample count to 1 for each sample taken from an active animation curve, and to 0 for each inactive curve (which means the actual sample value is invalid). Later, when mixing two sample buffers, the AnimMixer::Mix() method looks at the input sample counts and only performs a mixing operation if both input samples are valid. If one input sample is invalid, no mixing takes place; instead the other (valid) sample is written directly to the result. If both input samples are invalid, the output sample is invalid as well. Finally, the AnimMixer::Mix() method sets the output sample counts to the sum of the input sample counts, thus propagating the previous sample counts to the next mixing operation. So if, at the end of a complex sampling and mixing operation, the sample count of a specific sample is zero, not a single animation clip contributed to that sample (which should probably be considered a bug).
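Here is a standalone sketch of that mixing rule, covering the Mix() formula and the sample-count handling (again with simplified stand-in types, not the actual AnimMixer code, and again with a plain lerp instead of the spherical mixing a rotation curve would require):

```cpp
#include <cstddef>
#include <vector>

struct float4 { float x, y, z, w; };

static float4 Lerp(const float4& a, const float4& b, float l)
{
    return float4 { a.x + l * (b.x - a.x), a.y + l * (b.y - a.y),
                    a.z + l * (b.z - a.z), a.w + l * (b.w - a.w) };
}

// mix two sample buffers; the sample counts decide which samples are valid
void Mix(const std::vector<float4>& s0, const std::vector<unsigned char>& c0,
         const std::vector<float4>& s1, const std::vector<unsigned char>& c1,
         float lerp,
         std::vector<float4>& outSamples, std::vector<unsigned char>& outCounts)
{
    for (std::size_t i = 0; i < s0.size(); i++)
    {
        if ((c0[i] > 0) && (c1[i] > 0))
        {
            outSamples[i] = Lerp(s0[i], s1[i], lerp);   // both inputs valid: mix them
        }
        else if (c0[i] > 0)
        {
            outSamples[i] = s0[i];      // only input 0 valid: pass it through unmixed
        }
        else if (c1[i] > 0)
        {
            outSamples[i] = s1[i];      // only input 1 valid: pass it through unmixed
        }
        // both inputs invalid: the output sample stays invalid (undefined value)
        outCounts[i] = (unsigned char)(c0[i] + c1[i]);  // propagate the sample counts
    }
}
```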
That's it so far for the CoreAnimation subsystem, next up is the Animation subsystem which builds on top of CoreAnimation, and after that the new Character subsystem will be described, which in turn is built on top of the Animation system.