With respect to 3Lateral, I was specifically talking about their ML rig used in Marvel 1943 (and I suspect they may be using the same kind of rig for Hellblade 2). The panel they participated in at GDC gave a brief explainer as to why what we're seeing in the trailer looks so good -- what makes it different from the generic output you mentioned above.
Hmm, I'll try to follow up, but there's some overlap and some jumping to conclusions that's confusing me a bit... are you talking about the facial motion capture hardware rig, or the digital rig for outputting the characters?
The facial rig that 3Lateral uses (which I believe is actually built by Cubic Motion, who has now become 3Lateral's partner at Epic) is state-of-the-art and proprietary... but it's also not that different from what other studios use and experiment with in mocap. It's a camera or two mounted on a helmet that watches your face, sometimes with markers, nowadays hopefully without, to record the facial performance. (If you want to get into it, there's a $250 device from the professional studio Rokoko which just mounts on an iPhone and gets usable results; it can even feed into a MetaHuman.)
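(To make that handoff concrete, here's a rough Python sketch of the remapping step between a phone/helmet capture stream and a character rig's controls. The blendshape names, control names, and clamping are invented for the example; this isn't Rokoko's or Epic's actual API, just the general shape of the step.)

```python
# Toy sketch: remap per-frame facial blendshape weights (the kind of ARKit-style
# coefficients an iPhone capture app streams out) onto a character rig's own
# control names. All names here are hypothetical placeholders.

from dataclasses import dataclass

# Map from the capture device's blendshape names to this rig's control names.
# A real pipeline would also need per-control gain/offset calibration.
CAPTURE_TO_RIG = {
    "jawOpen": "CTRL_jaw_open",
    "browInnerUp": "CTRL_brow_raise_inner",
    "mouthSmileLeft": "CTRL_mouth_corner_up_L",
    "mouthSmileRight": "CTRL_mouth_corner_up_R",
}

@dataclass
class RigFrame:
    time: float
    controls: dict  # rig control name -> value in [0, 1]

def remap_frame(time: float, capture_weights: dict) -> RigFrame:
    """Translate one frame of capture weights into rig control values."""
    controls = {}
    for src_name, value in capture_weights.items():
        dst_name = CAPTURE_TO_RIG.get(src_name)
        if dst_name is None:
            continue  # shapes the rig doesn't implement get dropped
        controls[dst_name] = max(0.0, min(1.0, value))  # clamp to the rig's range
    return RigFrame(time=time, controls=controls)

# Example: one captured frame with a half-open jaw and a slight smile.
frame = remap_frame(0.033, {"jawOpen": 0.45, "mouthSmileLeft": 0.2, "mouthSmileRight": 0.18})
print(frame.controls)
```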
The digital rig is, largely, Epic's MetaHuman system. MetaHuman is the culmination of all the years 3Lateral spent building a face recreation framework (with physics-based animation and ML-driven deformation), which led to them being bought by Epic in 2019. Much of their technology was condensed into a single turnkey character creation system for both professional and amateur designers, called MetaHuman. All of 3Lateral's digital character experience is infused into MetaHuman, but that doesn't mean that, for example, Aloy is a MetaHuman, despite shared DNA in the process of her creation. There are different parts of the 3Lateral service system to utilize or replace per project/developer.
Even with MetaHuman characters, though, the model rig is just the beginning of the process. Actors get scanned, but then they still need to be modeled (both to apply the scan data to the model rig and to fix the unwanted or incongruent aspects of the scan). Then they still need to be textured (even though the scan captured the real person, PBR and subsurface scattering depth and wrinkle and even blood-flow maps need to be laid into the texture, plus the skin needs retouching despite being captured from the real person). Then they still need to be rigged for motion (because eyes move differently and teeth may stretch the mouth, and whatever generic setup the model starts with is not going to be good enough to actually work for the character's needs). And then the model still needs to be animated (because once it can move its eyes and flap its gums and do all the things the mocap captured, you then have to have it act out the scene, and a photorealistic character often needs its mocap data reprocessed in order to actually hit all the positions and make all the expressions the actor gives in their performance). Even with all the impressive capacity of the MetaHuman system, it still takes skill, budget, time, and reworking to make it look as elite as Hellblade 2 or Marvel 1943.
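(If it helps to see the order of operations laid out, here's a tiny illustrative sketch of those stages as a pipeline. The stage names and fields are made up; no studio's actual toolchain looks like this, it's just the sequence from the paragraph above.)

```python
# Illustrative only: the scan -> model -> texture -> rig -> animate sequence
# described above, expressed as a trivial pipeline. Every name is hypothetical.

from dataclasses import dataclass, field

@dataclass
class CharacterAsset:
    name: str
    scan_data: str = ""                                   # raw scan of the actor
    mesh: str = ""                                        # cleaned-up model fit to the rig topology
    textures: dict = field(default_factory=dict)          # PBR, subsurface, wrinkle, blood-flow maps
    rig: str = ""                                         # eyes, teeth, mouth behaviour bound to the mesh
    animation: list = field(default_factory=list)         # mocap clips reworked per scene

def build_character(name: str) -> CharacterAsset:
    asset = CharacterAsset(name)
    asset.scan_data = f"{name}_scan"                      # 1. scan the actor
    asset.mesh = f"{name}_mesh_cleaned"                   # 2. model: apply scan, fix incongruent parts
    asset.textures = {m: f"{name}_{m}_map" for m in       # 3. texture: layered maps plus retouching
                      ("albedo", "roughness", "sss", "wrinkle", "bloodflow")}
    asset.rig = f"{name}_face_rig"                        # 4. rig: make the generic setup act like the character
    asset.animation = [f"{name}_scene01_retargeted"]      # 5. animate: reprocess the mocap performance
    return asset

print(build_character("hero"))
```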
They're essentially creating a custom-trained rig for a specific actor, if I understand it correctly. The actor does a series of expressions for a rig that was scanned and mapped to their face specifically. Basically, only that actor can use that rig (this is, of course, a departure from the Guerrilla Games implementation, where they scanned the face of a Dutch actress, but they're using Ashly Burch for all the performance capture that animates that face).
Maybe you could point to the part of the Epic presentation where they talked about this? It's not something I'm aware of.
Neither the capture cams nor the digital model rig would be wholly unique to only one performer, though both would also be customized for each use. There's "one MetaHuman", in that there are eyes and a mouth and skin that move how we know those things to move, but that just serves as the underlying structure and foundation of movement for everything you customize and attach from a scan to make a character look unique or like a specific person. And aspects of the MetaHuman rig can be rerigged specifically for a performer's personality and body: say Stallone were made into a MetaHuman, you would build the effects of Bell's Palsy scarring into the MetaHuman's range of expression and mouth quirks, but you would still start with the same mannequin.
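(A toy illustration of "same mannequin, customized range": one shared set of control ranges, narrowed per performer. The control names and numbers are invented, and this isn't how MetaHuman actually stores any of it, just the idea.)

```python
# Toy sketch: every character starts from the same base ranges; a specific
# performer's quirks are expressed by narrowing individual controls.

BASE_RIG_RANGES = {
    "mouth_corner_up_L": (0.0, 1.0),
    "mouth_corner_up_R": (0.0, 1.0),
    "brow_raise_L": (0.0, 1.0),
    "brow_raise_R": (0.0, 1.0),
}

def specialize_rig(base_ranges: dict, overrides: dict) -> dict:
    """Start from the shared mannequin ranges, then narrow them per performer."""
    ranges = dict(base_ranges)
    ranges.update(overrides)
    return ranges

def apply_pose(ranges: dict, pose: dict) -> dict:
    """Clamp an incoming pose into this performer's allowed ranges."""
    return {name: min(max(value, ranges[name][0]), ranges[name][1])
            for name, value in pose.items() if name in ranges}

# A performer whose left mouth corner barely lifts: same base rig, narrower range.
custom = specialize_rig(BASE_RIG_RANGES, {"mouth_corner_up_L": (0.0, 0.25)})
print(apply_pose(custom, {"mouth_corner_up_L": 0.8, "mouth_corner_up_R": 0.8}))
# left corner clamps to 0.25, right corner passes through at 0.8
```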
You can take data intended for one MetaHuman and, with some work, assign it to a different model. It's just mocap data and audio; you might need to retrain or run into other incompatibilities, but the basics in the model are still there, even if the MetaHuman isn't human...
I think what you're saying is that at one point, Ziva might have been completely capable of offering this exact service given that they were also using machine learning to train rigs. You say this is what was used for the Ogre we see in Hellblade 2. Sad to hear that Ziva is no more.
The Ogre in Hellblade 2 is a different toolset at Ziva; it was their Ziva VFX body deformation sim system. They took a standard body shape, changed factors like its gravity and body mass to account for it being many times bigger and heavier than a normal human, added drippy and droopy bits of flesh and gristle, did hand-made animation of that model to approximate how they wanted it to perform, then fed it into a machine-learning system to crunch all the data of how the muscles would move, how the fat would shift, and how the dangling flesh would contract or swing with every little move of its massive bulk; the result is a realistic ogre that can act out a scene in realtime. Some of these same systems are in their Face Trainer (and the ogre's face was, I assume, performance capture mixed with those same trained systems).
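(Stripped way down, the idea is: collect example frames, fit a model that maps pose to per-vertex corrective offsets, then evaluate it cheaply at runtime. Here's a toy linear version in Python/NumPy; it's not Ziva's actual pipeline or network, just the training/inference split in miniature, with the training data synthesized so the snippet runs on its own.)

```python
# Toy ML-deformer sketch: learn a mapping from pose (joint rotations) to
# per-vertex corrective offsets from example frames, then evaluate new poses fast.

import numpy as np

rng = np.random.default_rng(0)

num_frames, num_joints, num_verts = 200, 30, 5000
poses = rng.uniform(-1.0, 1.0, size=(num_frames, num_joints * 3))  # one pose vector per training frame

# In a real workflow these offsets would come from the hand-animated / simulated
# frames; here they're synthesized from a hidden linear map so the example runs.
true_map = rng.normal(size=(num_joints * 3, num_verts * 3)) * 0.01
offsets = poses @ true_map + rng.normal(scale=1e-3, size=(num_frames, num_verts * 3))

# "Training": least-squares fit of pose -> vertex offsets.
weights, *_ = np.linalg.lstsq(poses, offsets, rcond=None)

# "Runtime": a new pose becomes a full set of corrective offsets in one multiply,
# which is why the learned version can run in realtime on top of basic skinning.
new_pose = rng.uniform(-1.0, 1.0, size=(1, num_joints * 3))
predicted = (new_pose @ weights).reshape(num_verts, 3)
print(predicted.shape)  # (5000, 3) per-vertex offsets
```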
I guess what I was trying to say was that we know from the leaks that Insomniac was chasing a level of fidelity they saw in Hellblade 2. I think it's fair to assume they were talking about the facial fidelity.
They were talking shop for producers. It's not that they saw Hellblade 2 and thought, "Oh shit, we need that or else we'll suck." (Also, they have "that" already; they've worked with the same companies that NT does.) They were setting a standard of conversation for the project, so that everybody involved in the production and management knew the goal (and also had an idea of the budget and studios to prepare the project for, since they have insight into what it cost and who did the work to achieve those Hellblade results).
When I think about what really sets Hellblade 2 apart, it's those facial animations blended with photorealistic materials. That's what makes everybody say "wow." Maybe there are other vendors that could help them achieve that for Wolverine, but we KNOW that 3Lateral has nailed it with their ML rigs. And now that Ziva has dissolved, I gotta imagine 3Lateral is still among the best.
Among the best, sure. Probably the best overall, particularly in the interactive field. (There's a reason Epic bought them.) But there are a bazillion companies on the market doing all kinds of bits of work that are advanced and good for developers, and most of them fly totally under the public's radar. You yourself said you had never heard of 3Lateral until just now. There are lots of companies out there doing work you will never know about on products you will still enjoy. (3Lateral probably also has sub-subcontractor studios doing some of the work on the photographic materials or animation resolving or other detail work, so some of the work you are wowed by could be by someone outside 3Lateral.)
I hear what you're saying. 3Lateral alone doesn't make characters look "wow." There are other contributing factors, including the capabilities of the teams leveraging what they provide. I get that. The point I'm making is that I've seen 3Lateral's services contribute to the best gen 9 character models we've seen so far: Hellblade 2 and Marvel 1943... Their implementation, when leveraged correctly, gets groundbreaking results.
Eh, but what I'm saying is that the best results come from the best talent, funded by the best war chest, so they can go out and get the best subcontract service providers in the field while spending the most time working on the product. 3Lateral already has a rolodex of masters (Insomniac, Kojima Productions, Ninja Theory, Guerrilla, Capcom, Sony Santa Monica, Epic of course, even Andy Serkis), but it's a bit of a chicken-or-the-egg situation. Did the best-looking games ever made come from great developers gestating their creations using 3Lateral's tech, or did 3Lateral's tech elevate itself because great developers used it to hatch the best-looking games ever made? It's a little of both, but it's not a matter of, like, ILM-or-bust.
Also, as far as MetaHumans... I'm going to timestamp this and see if the prediction comes true, but I do think it's possible that we'll get a little sick of, or used to, "the look of MetaHumans", the way we have with some other Unreal Engine tools. (I feel like I can spot Nanite artifacts in fast motion already, though I don't know if that's just errors from it being an early implementation, or if micropolygon tessellation will always have some slight tell to its shift... or if I'm just a kook imagining problems.) Maybe the flexibility will be so robust that there won't be visible patterns, but I have a feeling that over time we are going to be able to tell a MetaHuman from a different character model/performance base. And if that happens, we're going to appreciate the studios which didn't just use "the best" as if MetaHuman were an industry standard. But we'll see about that...