An Automated Pipeline for Generating Run-Time Rigs

On account of getting the flu at GDC, and then having to catch up on all of the work that wasn't getting done, I've been a little behind, but I wanted to make sure I put up a write-up for my main conference presentation from GDC this year.

What’s This About?

Hat Fail
This post is not meant to blow your mind, but to turn that hat around.

In order to set the proper tone, I want to clarify a few things up front.

  1. I’ve had to implement run-time rigs in the majority of games I’ve worked on, but I don’t consider myself an expert. I don’t even consider myself to be especially smart, just really persistent.
  2. This presentation was not meant to blow anyone’s mind or to show some cool new technology, but more to point out something that you can already do with things you have.
  3. My implementation was done with Unity and Autodesk Maya, and I chose to implement specific features from Maya. I hope it's obvious, but you should be able to do this no matter what software you're using—your own or otherwise.

That being said, I do want to clarify a couple of points for anyone who is not a Unity user, so it is clearer what exactly my tools do. The basic idea is that Unity uses the FBX SDK to import data from a variety of applications. However, it also lets you save application-native files directly into your project. The way it works, in the case of Maya, is that Unity launches Maya headless as a child process and runs a MEL script, FBXMayaExport.mel, which converts the Maya file into FBX data that Unity can read. Moreover, Unity allows you to modify incoming data with AssetPostprocessor scripts, which can read custom attributes from FBX data. My tools hook into this process: during export, they import a custom Python module that adds a bunch of user properties to the scene, which I can then read and operate on from an AssetPostprocessor script.
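If that last step sounds abstract, here is a minimal sketch (my own illustration, not the actual tool) of where those user properties surface on the Unity side; the property name am_rigNodeType is hypothetical, and the script would live in an Editor folder.

using UnityEngine;
using UnityEditor;

public class UserPropertyReader : AssetPostprocessor
{
    // called once for each imported GameObject that carries FBX user properties
    void OnPostprocessGameObjectWithUserProperties(GameObject go, string[] names, object[] values)
    {
        for (int i = 0; i < names.Length; ++i)
        {
            if (names[i] == "am_rigNodeType")
                Debug.Log(go.name + " is tagged as rig node type " + values[i]);
        }
    }
}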

Unity Asset Pipeline
Overview of Unity's asset pipeline and where I hijack it.

What’s the Problem?

There are at least two core problems that I chose to focus on here. The first is helper joints and the second is sharing animation data.

Sausage Fest
There's no excuse for letting your game become a sausage-fest.
1. Helper Joints: In a traditional pipeline, you bake down helper joint animation like everything else (e.g., you sample at some regular interval and then just read back an animation curve). The obvious problem here is that when procedural control takes over, such as physics, you no longer have data for the potentially infinite number of contexts, so what do you do? In many cases, the helper joints may just stop in their tracks, and so your character’s deformation will be contingent upon the pose he or she was in before entering the state of procedural control.

Two Heads Are Better Than One
Two heads are better than one, especially when they can share animation data.
2. Sharing Animation Data: In a traditional pipeline, you have a limited number of strategies for sharing animation data, each directed at slightly different problems. The first thing you can do is an offline retarget of data from one move set to another character, which saves you the overhead of re-creating animations but still bloats your data set. The other basic option is to ensure your characters all have the same androgynous or cookie-cutter-white-male proportions and features. The basic problem in either case is that data overhead (and potentially production overhead) is limiting your creative options. If you want some dramatically different characters, you are simply out of luck!

Some Definitions

My basic position is that both of these problems can be solved by implementing run-time rigs. At this point, I need to take a quick detour into definition-land to make sure my point is totally clear.

Rigs: When I use the term rig, I mean it in a very general sense. I'm not talking about skinning models, but about transforming data. In that sense, a rig is simply an abstraction layer for translating a small number of high-level inputs into a large number of low-level outputs. For example, in an IK setup, the IK goal position is the high-level input, which in turn drives a set of limb rotation values; all the animator really cares about is where the end of the chain is, not what angles the joints need to be. Another example might be custom sliders for facial animation, where a single float value drives a bunch of vertex positions (as with blendShapes) or joint transformations (as with a joint-based facial setup).
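As a toy illustration of that definition (my own example, not from the presentation), here is a single high-level Unity slider driving several low-level blendshape weights; the shape indices and weights are made up.

using UnityEngine;

public class SmileSlider : MonoBehaviour
{
    [Range(0f, 1f)]
    public float smile;                        // the one high-level input
    public SkinnedMeshRenderer face;

    void LateUpdate()
    {
        // the many low-level outputs driven by that single input
        face.SetBlendShapeWeight(0, smile * 100f);  // mouth corners up
        face.SetBlendShapeWeight(1, smile * 60f);   // cheeks raised
        face.SetBlendShapeWeight(2, smile * 30f);   // eyes squinted
    }
}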

Rigs: Defined
A rig is simply an abstraction layer for translating a small number of high-level inputs into a large number of low-level outputs.

Offline Rigs: Many workflows only take advantage of rigs offline, during the content creation phase. The basic idea is that rigs are used to make it easier for animators to create large amounts of data pretty quickly. As with the helper joint problem pointed out earlier, the foremost limitation is that objects will only animate properly in those contexts where data exist, so many effects are lost when procedural control (such as physics) takes over.

Offline Rigs: Defined
Offline implementations simply leverage rigs in a way that allows animators to create large amounts of data easily.

Offline rigs result in equally problematic data quantity issues, as the amount of data being created grows multiplicatively with the number of animations and the complexity and depth of object hierarchies. At the same time, however, the amount of input information remains relatively stable (e.g., your animators still only care about where the arm moves no matter how many other parts might move with it; they still only care about hitting specific visemes, no matter how many morph targets you're blending or how many joints need to move). As noted previously, these data growth issues are typically combated by awkward, creativity-limiting proportional restrictions, or by really aggressive animation data compression.

Run-Time Rigs: The basic idea behind run-time rigs is that your game data set only consists of the symbolic information, and the translation into low-level outputs all occurs in the run-time engine. One of the key advantages to this approach is that it allows deterministic, procedural animations to function independently of data (as in the case of helper joints).

Run-Time Rigs: Defined
Run-time implementations off-load the symbolic translation process to the run-time engine, exchanging some computation for a reduction in data.

At this point, I hope it is pretty clear that this is not a totally revolutionary idea. Developers use run-time rigs in cases such as animating vehicle suspension systems, making characters gaze at interesting objects while they walk around, or using IK to grab things or plant feet. Likewise, there are mainstream middleware packages that enable run-time retargeting and physically driven locomotion. However, such middleware presently only exists for very specific use cases, so arbitrary use cases—such as faces, muscles, props, helper joints, and so on—must all be implemented manually with home-grown tools. I suspect run-time implementations in these cases are neglected because of the overhead of synchronizing content with code (or maybe just good old hat fail).

My basic approach to these problems was to implement a system for automating the generation of run-time rigs. In staking out this territory, I want to argue in favor of both sides of this coin.

Why Run-Time Rigs?

One of the biggest wins from run-time rigs is aesthetic. In the case of helper joints, while you may in some situations get better motion interpolation, the more obvious gain is that stuff still works when procedural control takes over! The other big gain here, however, is in cases of retargeting: the more reusable motions you have, the more different characters (and possible motions) you can explore. Per-rig translation of the same data lets you reuse the same high-level data in cases like faces, to take one example. You can see most of these gains in the second part of my tutorial video for my Unity Maya Extensions.

I also think run-time rigs can be a technical win. With a careful implementation, you’re freeing up memory for essentially minor computation. I say minor because a great many operations common to rigs entail pretty basic linear algebra, so they’re potentially good candidates for an SPU or a vector coprocessor (for example, even the 1st-gen iPhone has VFP, which permits 8 operations in one instruction). To make this argument by way of example, I’ll talk a little bit about the shoulder joint.

A couple years ago, at least some people out there were really drinking the dual-quaternion Kool-Aid, and thought it would solve all kinds of problems. I’m hoping most people have by now realized that DQ skinning, while great, is not free of problems, but simply has different problems both in terms of deformation and cost. In the case of games though, it’s still not especially practical to implement DQ skinning in many cases because of its higher instruction count (which you’re paying for on your whole model, as opposed to only those joints where it’s truly needed). As such, another popular approach that has emerged is to use pose-space correction, either with morph targets or helper joints. While this approach certainly works for some cases, a complex joint like a shoulder potentially needs a large number of poses (and blends) to capture all of its different important configurations. On the other hand, you can easily implement a single-parameter algorithm to drive the base helper joint (for anyone who has looked at my AM_ShoulderConstraint plug-in, it uses this basic process).

Extend your arm out to your side, so it is parallel to the ground, and put your finger from your other hand at the base of the deltoid, just lateral to your acromion process. If you rotate your arm anywhere through this lateral elevation and/or twist your arm around its long axis, your finger should stay more or less on the top of your arm.

Shoulder Up Vector at Lateral Orientations
The up-vector for the first twist joint basically points upwards when the arm is at any lateral elevation.

Now, with your finger still in place, lower your arm to your side. Rotate your arm around its lengthwise axis and your finger should stay more or less to the outside.

Shoulder Up Vector at Lowered Orientations
The up-vector for the first twist joint basically points laterally when the arm is at any lowered orientation.

Now, again with your finger still in place, raise your arm up above your head. If you rotate your arm around its lengthwise axis here, your finger should stay in more or less the same place (back, and to the left…or right, if your left arm is up).

Shoulder Up Vector at Raised Orientations
The up-vector for the first twist joint basically points 'back, and to the left' when the arm is at any raised orientation.

Hopefully, what you've realized in this exercise is that the first twist joint in the shoulder can be represented as an aim constraint that points down the length of the shoulder, and whose up-vector can be driven by a single parameter (the angle of elevation in the space of the ribcage; i.e., the shoulder's aim axis compared to the aim axis on the ribcage). If you were to represent this literally, it would look something like the following pseudocode:

// twist joint points down length of upper arm
twist.forward = upperArm.forward;

// get angle between spine aim and upper arm aim axes
float angle = Vector3.Angle(spine.forward, upperArm.forward);

// divide by 90 and subtract 1 to map the [0, 180] angle onto [-1, 1]
float interpAmt = angle*0.011111111111111f - 1f;

// slerp between target up-vectors based on the current angle
// (interpAmt < 0 means the arm is raised; interpAmt > 0 means it is lowered)
if (interpAmt < 0f)
    twist.up = Vector3.Slerp(rest, raised, -interpAmt);
else
    twist.up = Vector3.Slerp(rest, lowered, interpAmt);

// orthonormalize rotation matrix
twist.right = Vector3.Cross(twist.up, twist.forward).normalized;
twist.up = Vector3.Cross(twist.forward, twist.right);

The great thing about math, though, is that you can simplify this even further and save some substantial operations (e.g., an arccosine), yet get the exact same result.

// twist joint points down length of upper arm
twist.forward = upperArm.forward;

// get dot product of spine aim and upper arm aim axes
float dot = Vector3.Dot(spine.forward, upperArm.forward);

// lerp between target axes based on current dot product
if (dot < 0f)
    twist.up = Vector3.Lerp(rest, lowered, -dot);
else
    twist.up = Vector3.Lerp(rest, raised, dot);

// orthonormalize rotation matrix
twist.right = Vector3.Cross(twist.up, twist.forward).normalized;
twist.up = Vector3.Cross(twist.forward, twist.right);
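For Unity users who want something they can actually drop on a character, here is a runnable sketch of the simplified version above; the component and field names are my own, Quaternion.LookRotation handles the orthonormalization, and the rest/lowered/raised up-vectors would be authored per character rather than using these defaults.

using UnityEngine;

public class ShoulderTwist : MonoBehaviour
{
    public Transform spine;
    public Transform upperArm;
    public Transform twist;                  // the first twist joint
    public Vector3 rest = Vector3.up;        // up-vector at lateral elevations
    public Vector3 lowered = Vector3.right;  // up-vector with the arm at the side
    public Vector3 raised = Vector3.back;    // up-vector with the arm overhead

    void LateUpdate()
    {
        float dot = Vector3.Dot(spine.forward, upperArm.forward);
        Vector3 up = dot < 0f
            ? Vector3.Lerp(rest, lowered, -dot)
            : Vector3.Lerp(rest, raised, dot);

        // LookRotation builds the orthonormalized frame in one call
        twist.rotation = Quaternion.LookRotation(upperArm.forward, up);
    }
}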

Why Automate?

I really hope there's not a whole lot to say about this topic at this point in history, but I think there are a couple things worth noting. Obviously, you can reduce the likelihood of human error in your pipeline, ensure your DCC rigs and game code are always synchronized, and eliminate intermediate implementation steps that require dragging personnel off other tasks. From a resource perspective, apart from the obvious issue that manual translation is time-consuming, it also comes with a built-in expiration date. If you're doing another game (i.e., not a sequel), you can potentially lose a lot of the investment you made in manual translations. On the other hand, an automated tool is already paid for when you start your next game, and it is iteratively improving as you run into new use-cases you want to support.

The other main advantage to automating, in my view (especially as someone who works on a lot of small, short-form games), is that it enables more creative risk-taking. Not only does it improve your iteration time for making changes to rigs, but it also enables you to make changes late in your production cycle with fewer ripple effects. If your core data set (i.e., the high-level/symbolic data) is still in place, there's much less overhead in removing or adding features (e.g., extra helper joints or behaviors) to entities; you do not need to retarget your whole data set onto the adjusted hierarchy and re-export everything. The other big creative advantage, in my view, is that the potential for leveraging existing data makes it much cheaper to add variety, so your characters don't all have to be the cookie-cutter white male.

What to Generate?

If I have you on board at this point, the only real question is what exactly you want to generate. In this respect, I see two key options.

1. Rig Definition: You may want to export your rig definition as XML (or some comparable format) and then implement a behavior graph system to try to achieve feature parity with something like Maya's Dependency Graph. I want to make it clear that I've not yet gone down this road, but I'm wary of the overhead in effectively adding another layer (e.g., performing simple operations like multiplication or conditional statements using nodes). The other issue is that systems like Maya's DG are designed to very carefully control the flow of information (using lazy evaluation), which potentially makes them complex to replicate.

2. Source Code: In all games where I've implemented run-time rigs, I've simply created source code to replicate what was going on in the DCC rig. Not only does this approach control the amount of overhead I get from implementing a run-time rig, but it also gives me some latitude in optimizing redundant operations that I simply don't get from a behavior graph system. For instance, in a fully automated approach, I can quite easily use regular expressions to convert divisions into multiplications, consolidate operations on literals, and so on.
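For instance, a rewrite along these lines (my own sketch, not the shipped tool) handles the division case; it only touches divisions by numeric literals, so divisions by variables are left alone.

using System.Globalization;
using System.Text.RegularExpressions;

public static class ExpressionOptimizer
{
    // matches "/ 90", "/ 90.0", "/ 90.0f", etc., but not division by a variable
    static readonly Regex divByLiteral = new Regex(@"/\s*(\d+(\.\d+)?)f?");

    public static string ConvertDivisions(string code)
    {
        return divByLiteral.Replace(code, m =>
        {
            double divisor = double.Parse(m.Groups[1].Value, CultureInfo.InvariantCulture);
            return "* " + (1.0 / divisor).ToString("R", CultureInfo.InvariantCulture) + "f";
        });
    }
}

// e.g., ConvertDivisions("angle / 90.0") yields "angle * 0.011111111111111112f"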

That being said, I want to talk really quickly about a couple of case studies.

Method 1 (The Wrong Way)

The Wrong Way to Make Games
An ethnographer's rendition of me working on WWE.

One of the first game jobs I took was on a WWE wrestling game (possibly one of the most nightmarish genres from an animation perspective). The basic problem was that the (ever-fluctuating) roster was to consist of as many as 60+ characters of different sexes, sizes, and so on, all of whom were to share the same move set and be retargeted with a physics-based locomotion system at run-time. Although the game was eventually canceled after I left that position, I did learn quite a bit from the experience.

We were working with a custom engine and tool set, and were using CAT control rigs in 3D Studio Max. As such, my job was mostly to devise a system of helper joints (which I drove with MaxScript controllers) and to convert all of these behaviors into C++ for in-game evaluation. The approach I took was to try to devise a one-size-fits-all set of expressions that would work for all characters. A careful reader has probably identified a host of problems by this point!

Range of characters in a WWE game
Wrestling games have a wide range of characters. (Thanks to Thuan Do for the Big Show render, and to Troy Perry for the Rey Mysterio render.)

Probably the biggest problem from my perspective was the attempt at designing a one-size-fits-all rig to work with all characters. It was a (never-ending) iterative design process, and the rig ended up relying on all kinds of scalars that I tried to derive from other known body proportions. Wrestling games have a wide range of characters, so when you get a rig up and running on your average guy, you then need to get it working on an enormous guy. When you've got it working for them, you need to make sure it works on the shortest guy. And then you need to make sure it will work on a female character.... The other obvious issue is that the process was not automated, so synchronization was tedious (but at least I was able to manage all the tedium myself).

On the plus side, you'll get a good framework for an all-purpose human rigging tool, and you'll be able to impress your friends at parties with your arcane knowledge of human anatomy.

Impress People at Parties
I met my wife at a party by impressing her with my anatomy knowledge. It looked like this.

Method 2 (Doing It Less Wrong)

Raptor Raptor
Screee!!!

Most of the other projects I've worked on fall into this category—lots of small games with limited memory budgets and short production cycles. These games tended to vary quite a bit from one product to the next, so there wasn't much opportunity for re-use, and they featured all kinds of arbitrary things (muscles, machines, props, and so on). They also tended to feature physics-driven or otherwise procedural motion. As such, the basic approach was to set up Maya expressions, constraints, and node networks on the DCC side, and manually translate them into C# for run-time.

For the types of games in which I used this approach, it worked well enough because it fit with a rapid prototyping environment. There was a lot less R&D required than doing things like master rigs, and it was super fast for things that aren't going to change much, such as vehicle suspension. On the other hand, there was in some cases a little more time required due to having to set things up individually (not to mention more places to miss things since the process wasn't automated). Moreover, since all of the games on which I used this approach were so different, there was no real portability of investment from one game to the next. Eventually, I got to the point where I realized I was setting things up by hand like this in basically every game.

Method 3 (Something a Little Better)

Ben Cloward and Shawn McClelland
Ben Cloward and Shawn McClelland as seen at GDC 2010.

At GDC 2010, there was a technical animation panel. When asked about where things were headed, one of the things that Tim Borelli brought up was how they were implementing run-time expressions in a game he was working on at the time. I politely thought to myself "WTF do people seriously not do this already?" and posed the question even more politely to Tim when it came time to verbalize it (thanks for being a great sport, Tim!). Ben Cloward chimed in that more so than the run-time implementation of rigs, what would be cool would be the ability to export arbitrary rig definitions, things like expressions and so on, to which I politely thought to myself "Touché, sir!" As such, I diligently went home after GDC and decided now was a great time to work on a rig exporter for Unity, which I have now made freely available. Since it's been covered in the video I posted above, I'll just talk a little about the two key parts of the system.

Maya Python: When my Python module is invoked during Unity's export process, the basic idea is that it adds user properties (or custom attributes) to the scene to describe different parts of the rig. The first (and most straightforward) part of this process is storing properties for different nodes I support as components. However, my tool also supports expression nodes, which are a little more complex. For each expression node, the module adds custom attributes for all reference fields that will end up in the C# class. These references are any supported nodes connected to an expression node.

I also store user properties for each jointOrient attribute for connected joints (as quaternions). After that, there's the not-so-simple process of converting expressions into C# MethodBodies, which includes

  • Stripping comments
  • Locating all variables and their types
  • Resolving naming conflicts with keywords and classes
  • Consolidating literal expressions and optimizing divisions
  • Correcting assignment syntax and converting from Maya's right-handed coordinates to Unity's left-handed coordinates (see the handedness sketch just below)
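Regarding that last bullet, the sketch below shows one common handedness convention (an assumption on my part, not necessarily what your pipeline should use): mirror across the YZ plane when going from Maya's right-handed, Y-up space to Unity's left-handed, Y-up space, which means negating X on positions and negating the Y and Z components of quaternions.

using UnityEngine;

public static class MayaToUnity
{
    // positions: flip the X axis
    public static Vector3 Position(Vector3 maya)
    {
        return new Vector3(-maya.x, maya.y, maya.z);
    }

    // rotations: mirroring across X negates the Y and Z quaternion components
    public static Quaternion Rotation(Quaternion maya)
    {
        return new Quaternion(maya.x, -maya.y, -maya.z, maya.w);
    }
}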

From the standpoint of having done manual conversions in the past, I feel like the biggest payoffs are probably optimizing the literals and divisions and automating the conversion from one coordinate system to another. I feel sorry for anyone who wants to try to do this with 3D Studio Max (for a variety of reasons). Right now, I only support expressions instead of node networks since it was an easy starting point, but I'd certainly like to add node-network support in the future, especially as Maya's node-based workflows improve.

Download my free Maya Extensions from the Unity Asset Store!
Unity: The second step of the process is the AssetPostprocessor scripts in Unity. The first AssetPostprocessor imports blendShape data, the second imports node definitions, and the third imports expressions. When importing expressions, the fields and MethodBodies are all plugged into a StringBuilder to properly format the generated code, the first line of which is tagged with the unique identifier of the asset that generated the code (so I can use editor scripts to clean up my project if I have generated source code not being used).
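As a rough sketch of what that expression importer boils down to (the class name, output path, and overall structure here are my own assumptions, not the actual tool), the core of it is writing the generated code to disk and forcing a reimport; the tricks it relies on are explained in the next paragraph.

using System.IO;
using System.Text;
using UnityEditor;
using UnityEngine;

public class RigCodeGenerator : AssetPostprocessor
{
    void OnPostprocessModel(GameObject root)
    {
        StringBuilder generated = new StringBuilder();
        // the first line tags the file with the GUID of the asset that produced it
        generated.AppendLine("// generated-from: " + AssetDatabase.AssetPathToGUID(assetPath));
        // ... append the reflected fields and converted MethodBodies here ...

        string codePath = "Assets/Generated/" + root.name + "Rig.cs";
        bool dirty = !File.Exists(codePath) || File.ReadAllText(codePath) != generated.ToString();
        if (dirty)
        {
            Directory.CreateDirectory(Path.GetDirectoryName(codePath));
            File.WriteAllText(codePath, generated.ToString());
            AssetDatabase.Refresh();
            // synchronous import so the new class is compiled before the next pass
            AssetDatabase.ImportAsset(codePath, ImportAssetOptions.ForceSynchronousImport);
            // reimport the model so the freshly compiled class can be reflected and linked up
            AssetDatabase.ImportAsset(assetPath);
        }
    }
}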

The only tricks worth mentioning are pretty Unity-specific. The basic idea is that if the contents generated by the StringBuilder differ from what is already on disk (or if the file doesn't exist yet), then the .cs file is written to disk, the asset database is refreshed, importation of the .cs file is forced using the ForceSynchronousImport option, and the current asset being processed is reimported. On the next pass, the source code then exists in its new form, the assemblies have recompiled, the new class can be reflected, and the fields can all be linked up. (Incidentally, for Unity users out there, the ForceSynchronousImport mode is required to make this work, yet it strangely seems to crash Unity in other use cases!) In the end, as useful as this approach may be, it is not without its own potential snags.

Foremost, you are fundamentally letting "artists" put code in the game. On the other hand, the output doesn't need to be human-readable, so you can automate some optimizations (such as the literal consolidation and conversion of divisions). Moreover, if you opt for an open implementation, such as expression nodes or MaxScript controllers, you may need to set some limits on which terms you support. For example, Maya allows on-demand execution of any command inside of backticks, so my system only supports basic math commands in these cases. Maya also has the benefit of strict typing, which makes it easier to convert. The other issue is that there's not really a way to make Euler angles not suck. If you want to support them, you need to decide whether or not you want to support multiple decomposition orders. That notwithstanding, if you're storing your orientations internally as quaternions, there's not really an efficient way to decompose into Euler angles outside the [-180, 180] range (REMEMBER: orientation and rotation are different things). The last issue to sort out is whether you want to support jointOrient attributes, as doing so adds two quaternion multiplications to the already not-inexpensive Euler decomposition (one to get into the joint's pretransformation space, and another to get back out).

However, I maintain that an automated approach is still a net win. The big advantage for me is enabling changes late in production, as well as potentially a little more creative risk-taking. Moreover, the tool improves over time as you hit it with new use cases, and it is simple to carry from one project to another (in general, you'll only be interfacing with math libraries, which aren't likely to change). One could also use some of these techniques to enable cross-application rig translation, either through FBX user properties or XML sidecar files. This sort of tool is also a good point of discussion for working with your riggers, helping them explore alternatives and improve their technical understanding of the cost of different operations.

(As a footnote, I want to mention some specific suggestions as a result of some of the issues the Nexon guys mentioned in their talk. First, which should be obvious for Maya users, you should avoid building cycles into your rigs. Second, remember that changing a transform matrix will cascade changes down the object's hierarchy, which may add up quickly! As such, if you're implementing run-time helper joints, you should try to ensure they're all terminal nodes in your hierarchy as much as possible.)

Final Words

Mentors Are Helpful
Thanks Rick and Doug, wherever you are.
At bottom, communication and mentorship are essential to this setup. If your riggers are not rigging in ways that can leverage run-time computation, then this system is of little use. I want to again emphasize that I don't see myself as an especially smart person, just really persistent. I owe a lot to the mentors I've had over the years: those who actually pored over math with me, and those who just gave me a chance to try stuff out even though my title said "artist."

Clearly, you also want to ask yourself if you should even bother. If animation isn't a big part of your games, then implementing one-off, manual translations may be sufficient. There's also no magic bullet. I see this sort of approach as just another tool that can be both used and misused, so make sure you're basing your decisions on data (game performance as well as time investment). Nonetheless, if your rigs have any parts that behave procedurally and deterministically, then you can probably benefit. Driving helper joints with common constraints can be pretty cheap! Point constraints are basically just a vector lerp, aim constraints are a vector subtraction and orthonormalization, and orient constraints are a quaternion nlerp (or slerp or possibly slime if you want to get fancy).
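To put some rough code behind those claims, here are tiny sketches of my own (assuming Unity's math types) of what those constraint cores amount to at run-time.

using UnityEngine;

public static class CheapConstraints
{
    // point constraint: a weighted blend of target positions
    public static Vector3 Point(Vector3 a, Vector3 b, float weight)
    {
        return Vector3.Lerp(a, b, weight);
    }

    // aim constraint: a subtraction plus orthonormalization
    public static Quaternion Aim(Vector3 source, Vector3 target, Vector3 up)
    {
        return Quaternion.LookRotation(target - source, up);
    }

    // orient constraint: a normalized lerp between orientations (Quaternion.Lerp normalizes)
    public static Quaternion Orient(Quaternion a, Quaternion b, float weight)
    {
        return Quaternion.Lerp(a, b, weight);
    }
}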

The trend on the content-creation side is for things to become higher-level over time (remember how people modeled before there were sculpting tools?), and I think it may be a concept worth applying to data, too. I also feel like this may be a good bet as cross-platform development grows. There is an opportunity to store a common set of data and simply unpack it based on each target platform's strengths (e.g., a mobile and web game could share the same data set as a console SKU, and the console would just do more with the data).

Finally, I want to emphasize that I see "expression" or "rig" code as just another piece of data in your pipeline. From a pipeline perspective, the code/data dichotomy is pretty irrelevant, and everything is just something you can check into your version control system. Thinking about it in this way may help it become more obvious how to fit it into your existing pipeline.

More Information

Download my source code!


12 thoughts on “An Automated Pipeline for Generating Run-Time Rigs”

  1. I love when people share their experience instead of just “theorycraft”. (I value wisdom/persistence over intelligence, I guess.) This article is exactly one of those from experience and wisdom. It’s awesome. Thank you for sharing!

  2. Hi! I don’t personally know Blender, but I’d frankly be surprised if it didn’t provide facilities to achieve all these same things.

  3. Hi Adam,

    Thanks for the response. I am going to look into it. The other thing is that I also need to be able to play portions of the animation, not necessarily the whole animation. I am an experienced developer, but weak on the modeling aspect. If I can't move this forward, would you be interested in getting me started on it, paid of course?

    Thanks,
    Donald

  4. Hi, I've been looking into similar things and came across this article. Great stuff!

    Just to clarify, do you basically have zero animation clips for your skeleton, with the clips now being for the controllers only? Or do you only do run-time rigs for those helper joints, not the entire skeleton?

    So, you still bake clips for your "base" skeleton, and on top of that, you have this run-time rig system for your helper joints?

    Thanks!

  5. Hey Karl,

    It really depends on the situation. Most of the time, yes, there will be animation curves baked onto the base skeleton, and then only joints which have deterministic/procedural secondary motion will be evaluated at run-time. An example of an exception might be a face where I need to retarget animation, in which case the data is all stored in some abstraction layer.

  6. Hi Adam,

    Thanks for the response! I'm weighing the pros/cons of going fully non-baked clips for the base skeleton as well, which will require a run-time implementation of Maya's IK handle. HumanIK does something similar, I think.

  7. There’s a lot of premature optimization going on here. Changing n / 11 to n * 0.01111111111 isn’t going to be faster (it just obfuscates your code–every compiler will convert a constant divide to a constant multiply when it makes sense), and eliminating one acos() in code that only gets run once per corrective shape won’t matter at all. It doesn’t matter to me if you write ugly code because you don’t use a profiler, but this page might give others bad habits…
