Tracy goes on to detail the team's new runtime rig logic system, which allows moods and facial expressions to be applied universally across characters. Rather than authoring, for instance, an expression structure of /happy/male/happy_male01, or similar, the team can simply call "happy01," apply it globally, and let the runtime work out the rest.
"Finally, there's one really big piece of tech that we're bringing online now. We call it "runtime rig logic" for the faces. We have varying facial skeletons, everybody's got a bit of a different face. A lot of games [...] unify the skull shape and unify the neck shape, and everybody's the same. That doesn't work for actors – especially not for really recognizable ones, like Gary Oldman, or Gillian Anderson, or Mark Hamill. You're going to know that that doesn't really look like him. So, what we do is we do still have all these unique rigs, but what we have is a system within the engine that is actually consuming unified animation data and it applies all the offsets to that animation data so we can drive that rig. What it means is that we can share animation across anybody, which is super cool. A smile on Gillian Anderson is actually the same data for a smile on Mark Hamill. We can share all this data. This is a pretty big deal."
This system also allows for procedurally driven character reactions, greatly expanding the number of animations and expressions possible within the game. The rig logic skeleton has 183 inputs that drive the entirety of the face, and that's true for every character – at least, every human character. It's likely true for other humanoids, but we didn't ask about other potential species. Those 183 inputs work with another ~220 skin joints, creating a highly detailed rig.
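For a sense of scale, that's a fixed set of 183 unified controls fanned out onto roughly 220 joints per face. The sketch below is purely illustrative; the two counts come from the interview, but the weight-table structure is our own assumption about how such a mapping might be stored:

```cpp
#include <array>
#include <vector>

constexpr int kRigInputs  = 183;   // unified control count for every human head
constexpr int kSkinJoints = 220;   // approximate per-character skin joint count

// Per-character weight table: how each unified input moves each joint on this face.
// Identical input data therefore produces a face-specific result on every actor.
struct FaceRig {
    std::vector<float> weights = std::vector<float>(kRigInputs * kSkinJoints, 0.0f);

    std::array<float, kSkinJoints> Evaluate(const std::array<float, kRigInputs>& inputs) const {
        std::array<float, kSkinJoints> jointDeltas{};
        for (int i = 0; i < kRigInputs; ++i)
            for (int j = 0; j < kSkinJoints; ++j)
                jointDeltas[j] += weights[i * kSkinJoints + j] * inputs[i];
        return jointDeltas;
    }
};
```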
Tracy expanded:
"The other thing with it is [that] we can procedurally drive things on the character, and have the rig react as if we were exporting animation. A big example of this is eyes. What we have is 'look posing' system – I'll get into that. We'll drive where the eyes are looking. The problem with this is that the eyes are all connected to blend shapes; usually, if any other game would do this, you see the eyes move around but none of the blend shapes would do anything because it just doesn't know that you're moving those eyes around. [...] With Rig Logic online, we can say, 'move those eyes,' and rig logic knows 'OK, I need to move this blend shape here, do this wrinkle here.' So we're getting really awesome performance even from procedurally generated data.
"[...] We apply this to every single head. There's kind of a workflow reason for that, and it's that I don't want to deal with two different pipelines. [...] I'd rather just, 'everybody's rig logic,' perfect. There's an implementation reason, too. Say in Star Marine I want a guy [to look angry] when he fires. If everybody has unique faces, I'd have to have, 'OK this guy is in his stance, he's got his weapon, and he's firing.' Here's angryface_male01, angryface_male02, they'd all be different animations for the same thing. Now what I do is I just say 'angryface,' and it'll figure out what face it's playing on and it'll do it. This is our whole mentality of content creation: Let's do it intelligently so we're not stuck here making thousands of things so that it takes us 10 years to make this game."