As I wrote in my CoreAnimation post, the Nebula2 animation system was in dire need of a new design and a clean rewrite. During Drakensang’s development and another research project focused on facial animation, we had to add new features to a system which wasn’t designed for those capabilities. The resulting system worked, but just barely, and from a design standpoint it became quite an unmaintainable mess. When working within the constraints of a commercial project (defined by milestone dates, features and quality), this is often the only choice: it’s usually better to have something working at the end of the day (even if it’s a bit ugly internally) than to slip milestones. At least it became clear what features would be important for a clean redesign, and how they shouldn’t be implemented.
With Nebula3 as an experimentation test-bed I started to work on a new animation system even before Drakensang was finished. Some of the main problems of the old system were:
- low-level and high-level animation features were implemented all over the place, some in Nebula2, some in Mangalore, some in the application
- sampling at random points in time wasn’t properly supported (the cross-blending code required that the sampling time only advanced in positive increments)
- there was no priority blending, and blend weights were normalized to add up to 1, which gave “funny results” unless the application provided “carefully tuned” blend weights
- blending of partial animations (where only a part of the character skeleton is animated) wasn’t properly supported at first, and was sort-of hacked in later
- it wasn’t possible to blend 2 identical animations at different sampling times
- animation clips could not be started in the future or past
- … and a lot of small bugs and quirks which resulted from the growing complexity of the animation system
The first design choice was to split the animation code into 2 subsystems: CoreAnimation and Animation. CoreAnimation has already been described here.
While CoreAnimation mainly cares about resource management, the higher-level Animation system provides features to implement complex animation blending scenarios.
When starting the Animation subsystem I was considering blend trees. The leaves of a blend tree would be sampling nodes, which sample a single animation clip at a specific time; the inner nodes would accept sampled data at their input connectors and mix (or otherwise process) the incoming data into a single output connector, until the final result becomes available at the root node of the tree. After a few weeks of work and several interface revisions it became clear that working with such a system would be even more complicated than with the old Nebula2 animation system. When feeding the blend tree system with “real world scenarios” I never ended up with satisfactory simplicity; instead, even relatively simple blending scenarios looked incredibly complex.
Thus I scrapped blend trees and started anew with a more straightforward priority-blending system based on animation tracks (roughly like Maya’s Trax Editor works). With this new approach everything suddenly fell into place, and after just a few days a first implementation was finished.
The new Animation system has 2 important object types: AnimJob and AnimSequencer.
An AnimJob represents a single animation with the following attributes:
- Start Time: The start time of an AnimJob, which can be in the past or in the future.
- Duration: The duration of the AnimJob; it doesn’t have to correlate with an animation clip’s length and can also be infinite.
- Blend Priority: The AnimSequencer class implements priority-blending, where higher priority clips dominate lower priority clips. Thus a high priority clip with a blend weight of 1.0 will completely obscure any previous lower priority clips.
- Blend Weight: The final weight used for priority blending between the fade-in and fade-out periods.
- Fade-In / Fade-Out Time: The time over which the clip is smoothly blended in and out of the previous blending result.
With the Start Time, Duration and Blend Priority attributes, AnimJobs can be arranged by an AnimSequencer object in a 2D coordinate system where the horizontal axis is time, and the vertical axis is blend priority. When sampling at a given point in time, the AnimSequencer first finds all animation jobs which cross the sampling position. Then, starting with the lowest priority animation job, each active job is evaluated and the resulting animation samples are priority-blended with the previous blending result.
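To make this more concrete, here is a minimal C++ sketch of such a priority-blending pass. Everything here is illustrative: the names (Pose, AnimJob, SequencerSample, PriorityBlend), the simple linear interpolation and the fade-weight ramp are assumptions for the sake of the example, not the actual Nebula3 code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// stand-in for a sampled skeleton pose (joint transforms etc.)
using Pose = std::vector<float>;

struct AnimJob
{
    float startTime     = 0.0f;
    float duration      = 0.0f;   // "infinite" could be e.g. FLT_MAX
    int   blendPriority = 0;
    float blendWeight   = 1.0f;   // final weight between fade-in and fade-out
    float fadeInTime    = 0.0f;
    float fadeOutTime   = 0.0f;

    bool IsActiveAt(float t) const
    {
        return (t >= startTime) && (t < startTime + duration);
    }

    // ramp 0 -> blendWeight during fade-in, blendWeight -> 0 during fade-out
    float WeightAt(float t) const
    {
        float w = blendWeight;
        float rel = t - startTime;
        if ((fadeInTime > 0.0f) && (rel < fadeInTime))
            w *= rel / fadeInTime;
        float tail = (startTime + duration) - t;
        if ((fadeOutTime > 0.0f) && (tail < fadeOutTime))
            w *= tail / fadeOutTime;
        return w;
    }

    virtual Pose Evaluate(float t) const = 0;
    virtual ~AnimJob() {}
};

// priority blend: lerp the accumulated result towards the current
// job's samples by the job's effective weight
static void PriorityBlend(Pose& accum, const Pose& cur, float weight)
{
    if (accum.empty()) { accum = cur; return; }
    for (std::size_t i = 0; i < accum.size(); i++)
        accum[i] += (cur[i] - accum[i]) * weight;
}

// AnimSequencer-style sampling: gather all jobs crossing time t and
// blend them lowest-priority-first, so higher priorities dominate
Pose SequencerSample(const std::vector<AnimJob*>& jobs, float t)
{
    std::vector<AnimJob*> active;
    for (AnimJob* job : jobs)
        if (job->IsActiveAt(t)) active.push_back(job);

    std::stable_sort(active.begin(), active.end(),
        [](const AnimJob* a, const AnimJob* b)
        { return a->blendPriority < b->blendPriority; });

    Pose result;
    for (const AnimJob* job : active)
        PriorityBlend(result, job->Evaluate(t), job->WeightAt(t));
    return result;
}
```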
AnimJob is just a base class which can be subclassed to hook custom functionality into the blending process (like inverse kinematics, or some look-at-target functionality). At the moment there is only one specific subclass: PlayClipJob, which simply samples an animation clip.
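A PlayClipJob-style subclass would then only have to implement the sampling hook. Again a hypothetical sketch with invented, simplified types (the real Nebula3 classes look different):

```cpp
#include <vector>

using Pose = std::vector<float>;  // as in the previous sketch

// fictional clip resource; in Nebula3 the clip data lives in CoreAnimation
struct AnimClip
{
    Pose SampleAt(float clipTime) const { return Pose(); }  // stub
};

struct AnimJob
{
    float startTime = 0.0f;
    // ... blend attributes as described above ...
    virtual Pose Evaluate(float t) const = 0;
    virtual ~AnimJob() {}
};

// samples a single clip; other subclasses could override Evaluate()
// to hook IK or look-at logic into the same blending pipeline
class PlayClipJob : public AnimJob
{
public:
    explicit PlayClipJob(const AnimClip* c) : clip(c) {}

    Pose Evaluate(float t) const override
    {
        // map sequencer time to clip-local time; a real implementation
        // would also handle looping and clamping
        return clip->SampleAt(t - startTime);
    }
private:
    const AnimClip* clip;
};
```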
The new Animation subsystem fixes pretty much all problems of the old Nebula2 system:
- automatic blend-weight normalization has been replaced with priority-blending which is much more useful in typical blending scenarios
- it’s now possible to correctly evaluate animations at random points in time, with correct cross-fading
- correct blending of partial animations (where an animation only manipulates parts of a character skeleton) is now a standard feature
- animation clips can now be blended with themselves
- animation jobs can be started in the future or in the past
Overall, the new Animation system is much simpler, more robust, and easier to use and understand.
A few things are still missing which proved useful in the past, or which we identified as nice-to-have features:
- Some sort of name-mapping for animation clips:
- In Drakensang, every character had about 400..600 animations. Most of those were combat animation variations (different attack types, different weapons, with or without shield, etc…), but the animations actually fell into only a few categories (like attack, idle, walk, …).
- It would be nice if a mapping mechanism existed where the application sets a few variables (like type of weapon in hand, shield in hand, etc…), and an abstract animation name like “attack” is then mapped to a specific name like “male_twohandedsword_attack1” by resolving a few user-defined mapping rules (see the sketch after this list).
- Some sort of finite-state-machine to define how animations relate to each other.
- This is mainly useful for automatically playing transition animations. For instance, if a character currently has no weapon equipped but needs to play an attack animation, the state machine would decide that a “draw sword” animation needs to be played first.
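Here is a hypothetical sketch of how such a clip-name mapping could look: the application sets a few variables, and user-defined rules resolve an abstract clip name to a concrete one. The rule format and all names (AnimNameMapper, MappingRule) are invented for illustration:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

struct MappingRule
{
    std::map<std::string, std::string> conditions; // required variable values
    std::string clipName;                          // concrete clip to use
};

class AnimNameMapper
{
public:
    void SetVariable(const std::string& key, const std::string& value)
    {
        vars[key] = value;
    }
    void AddRule(const std::string& abstractName, MappingRule rule)
    {
        rules[abstractName].push_back(std::move(rule));
    }
    // return the first rule whose conditions all match the current variables
    std::string Resolve(const std::string& abstractName) const
    {
        auto it = rules.find(abstractName);
        if (it != rules.end())
        {
            for (const MappingRule& rule : it->second)
            {
                bool match = true;
                for (const auto& cond : rule.conditions)
                {
                    auto v = vars.find(cond.first);
                    if ((v == vars.end()) || (v->second != cond.second))
                    {
                        match = false;
                        break;
                    }
                }
                if (match) return rule.clipName;
            }
        }
        return abstractName; // no rule matched: fall back to the abstract name
    }
private:
    std::map<std::string, std::string> vars;
    std::map<std::string, std::vector<MappingRule>> rules;
};

// usage: mapper.SetVariable("weapon", "twohandedsword");
//        mapper.Resolve("attack"); // -> e.g. "male_twohandedsword_attack1"
```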
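And a similarly hypothetical sketch of the transition idea: a table of (from, to) state pairs mapped to transition clips, consulted before the requested animation plays. All names are invented:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

class AnimStateMachine
{
public:
    void AddTransition(const std::string& from, const std::string& to,
                       const std::string& transitionClip)
    {
        transitions[{from, to}] = transitionClip;
    }
    // returns the clips to queue, prefixed with a transition clip if
    // one is defined for the current state change
    std::vector<std::string> RequestState(const std::string& to)
    {
        std::vector<std::string> clips;
        auto it = transitions.find({current, to});
        if (it != transitions.end())
            clips.push_back(it->second);   // e.g. "draw_sword"
        clips.push_back(to);
        current = to;
        return clips;
    }
private:
    std::string current = "idle";
    std::map<std::pair<std::string, std::string>, std::string> transitions;
};
```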