
Graphical Fidelity I Expect This Gen

Guys, this is it. This is The Matrix Awakens moment for video game character faces. This will change gaming forever (if devs embrace it).



The Aloy comparison blew me away, wtf.

[image]


The AI is removing the subtle ND facial animations but the face still looks great.

[image]


Bethesda games get the biggest upgrade.
[image]

What’s to embrace? It’s FAR from realtime and still plastered with weird artifacts.

But who knows, DLSS is also a weird thing that is working pretty well now. So maybe in the future we will get techniques like that in realtime.
 

PeteBull

Member
Tekken 2 (1996 console launch) to Tekken 3 (1998 console launch) on the same PSX looked like a true next-gen jump. In comparison, the new Forza looks only a bit better than both Forza 7 and GT7, not crazy better. I don't blame the devs, but it's obviously not the next-gen fidelity we want, nor the wow factor we hope to see.
 

Neo_game

Member
Generational leap?

[image]


The track is done from scratch, I think, so yeah, it is next gen in that sense, I guess. But I'm not sure if this is also a bug or not. You can easily notice the grass does not look green, unlike in Forza 7, at 5:50 and 5:56; the kerbs with red and yellow stripes are also washed out. Even the car paint looks off to me. Forza 7 is doing a better job in this. It is pretty evident that the blacks in Forza 7 are better, as if it's OLED vs LED in Forza 8.



I noticed this in IGN's non-buggy video too, which they uploaded again. Things like the grass, the car paint, etc. look off. This game is definitely using some filter.
 

SlimySnake

Flashless at the Golden Globes
What’s to embrace? It’s FAR from realtime and still plastered with weird artifacts.

But who knows, DLSS is also a weird thing that is working pretty well now. So maybe in the future we will get techniques like that in realtime.
It's a plugin that devs can add to their engine, much like Havok or other third-party solutions like DLSS. It will help devs like Bethesda and BioWare produce better-looking characters and animations that their current modeling and procedural animation systems simply can't handle. The game has somewhere around 200-300 characters you interact with, each with their own lines of dialogue. Bethesda simply couldn't animate them all, so they wrote a one-size-fits-all lip-sync algorithm that results in somewhat wooden performances. Now imagine if they just let AI handle it. Either way, it's being programmed, but this time the AI is doing it. No need to develop an algorithm, no need to model any of these characters; just provide it one picture and it will do the rest. It will save time and resources.

Then there is something even I didn't think of until someone else mentioned it in the other thread. If the ML hardware is creating these faces on the fly, then you don't need the GPU to render any of these fancy graphical features; that would get offloaded to the machine learning hardware. No idea how intensive that would be, but assuming next-gen consoles have some kind of ML hardware like the tensor cores DLSS uses, all of that processing will be done on those, leaving the GPU with a lot more power for the devs to spend resources elsewhere.

Sony studios did something similar in the PS3 days. They put AA, which is traditionally done on the GPU, onto the Cell's SPUs. That allowed them to spend the GPU resources on pushing other graphics features, which is partly why Sony first-party games looked better towards the tail end of the gen compared to the Xbox 360 exclusives that had 2x more VRAM and a way better GPU.

You can see just how much power it takes to render the TLOU2 character models at their highest level of fidelity. They clearly have to downgrade the faces to get all the other things rendered in the level. Now imagine if they let AI handle the face during gameplay.

Gameplay
[image]

Cutscene
[image]
 

SlimySnake

Flashless at the Golden Globes
That's it guys, we completely fucking lost him.

Even worse when he was bragging about vanilla Sarah not looking bad... he needed plastic-doll Sarah to realize that vanilla looked like a 60-year-old cunt.
Didn't take you for a chubby chaser. Not judging, but if you can like old, fat Aloy, that's great. I just prefer my girls skinny and attractive.

[image]
 

SlimySnake

Flashless at the Golden Globes
Gameplay.
[image]
[image]
[image]

Npc.
[image]
Lol, photo mode and DoF effects add more detail to the faces. Also, that first screenshot looks hilariously bad. What are you even doing? Are you seriously making the argument that ND's cutscene models and gameplay models are the same?
 

Musilla

Member
Lol, photo mode and DoF effects add more detail to the faces. Also, that first screenshot looks hilariously bad. What are you even doing? Are you seriously making the argument that ND's cutscene models and gameplay models are the same?
Why do you always take everything as an attack?

No, obviously the models don't look the same during gameplay as they do in scenes, but it doesn't look as bad as in your screenshot either.
 
Playing through AC Mirage now, not sure what folks on here think about it.

I'm actually impressed by the visuals; there's a pleasant uplift in geometry, especially on buildings and objects in the environment. Lighting also seems to be solid.

Body animations are pretty fluid, as you'd expect; facial animations are pretty solid but nothing to write home about. Facial IQ and detail could have done with a little more work.

My one gripe is the weird chromatic aberration; it's like you're playing the game wearing sunglasses at times. I wish there was an option to toggle it off. Overall it's still a visually pleasant game, IMHO.
 

Lethal01

Member
Which is why I picked the scenes that are real-time. Or maybe you didn't even realize it was real-time and you just own-goaled yourself lol.

You didn't though; you picked a mix of realtime and pre-rendered in-engine cutscenes.
They even mix the two elements in a single shot; I suppose I can't expect you to notice. :messenger_grinning:
[image]

And even those prerendered cutscenes still aren't anywhere near the level of that raytraced trailer.

Again though
Final Fantasy 16 intro achieved very similar lighting quality to SM2 CG trailer where Miles, Peter, and Venom were fighting.
No, no moment in any game out gets even close to that lighting quality.
You can look at the most bland, most flat wall in that trailer and it will have far more accurate lighting.
I'd love for you to prove those last two sentences to be an exaggeration and remind me of some amazing-looking console game I forgot about, but if you really think the lighting quality of FF16 is close to that of the Spider-Man trailer, you are beyond help.

They aren't using raytraced GI or raytraced reflections in FF16; it's most likely the usual GI probes.

Edit: FF16 looks great; stop making me bring it down by having to compare it to graphics rendered at 0.1 fps on 6090-class GPUs.
 

ChiefDada

Gold Member
You didn't though; you picked a mix of realtime and pre-rendered in-engine cutscenes.
They even mix the two elements in a single shot; I suppose I can't expect you to notice. :messenger_grinning:

[Donald Trump GIF by Election 2016]

In the impressive battle sequence at the start of the game, it does switch to prerendered video, sandwiched between two real-time clips.



But thanks again for admitting that you are impressed with PS5 real-time graphics sooooo much that you confused it with a pre-rendered video. A feat you said wouldn't be possible until PS117 or something ridiculous like that!!!
 
[Donald Trump GIF by Election 2016]

But thanks again for admitting that you are impressed with PS5 real-time graphics sooooo much that you confused it with a pre-rendered video. A feat you said wouldn't be possible until PS117 or something ridiculous like that!!!


Who gives a shit, it's a cutscene... we spend less than 1% of our time watching cutscenes, which we all know look better than real-time gameplay. Why are you arguing about something that represents such a small fraction of the experience we have playing games?

Try comparing only gameplay for just 1 week. Let's see if you can do it.
 

ChiefDada

Gold Member
No, no moment in any game out gets even close to that lighting quality.
You can look at the most bland, most flat wall in that trailer and it will have far more accurate lighting.

You mean like here, when the shadow of Venom's tentacles pops in 5 frames behind schedule and Spider-Man lacks contact shadows/hardening as he's running on the building?

 

Lethal01

Member
[Donald Trump GIF by Election 2016]

But thanks again for admitting that you are impressed with PS5 real-time graphics sooooo much that you confused it with a pre-rendered video. A feat you said wouldn't be possible until PS117 or something ridiculous like that!!!


Nah, that's prerendered footage within the engine, much like the original The Last of Us did on PS3. They already confirmed that for the battle with hundreds of people they are using pre-rendered graphics.

Again though, it's still IN-ENGINE. They use it to push the number of objects, animations, shadow resolution and such, but the rendering engine is still the same and has the same issues, just as if it were running on a Pro console while handling a larger number of characters. That's the point: to match the realtime graphics but at a larger scale and resolution than they can do in realtime. Matching actual raytraced rendering engines would be a completely different feat and is indeed something not happening until PS17.

To say otherwise would be like saying we already reached Pixar graphics because we beat the prerendered stuff in The Last of Us 1.

You mean like here, when the shadow of Venom's tentacles pops in 5 frames behind schedule and Spider-Man lacks contact shadows/hardening as he's running on the building?



The lighting here is accurate, the shadows are as they should be.
 

SlimySnake

Flashless at the Golden Globes
Your boss does:
I said setpiece, not cutscene. The fight following the botched Abby execution (bummer) is lit using amazing lighting effects, and then you follow the torches out of the forest. That's a setpiece.

The cutscene is the best-looking cutscene in the game, but I was talking about the setpiece here.

I don't mind comparing cutscenes to gauge visual fidelity as long as we stick to cutscene vs. cutscene comparisons, but I think you are wrong about those FF16 cutscenes being realtime. Any cutscene with that many people on screen is not realtime, just like the ship chase later on in the story.
 

ChiefDada

Gold Member
Nah, that's prerendered footage within the engine, much like the original The Last of Us did on PS3. They already confirmed that for the battle with hundreds of people they are using pre-rendered graphics.

Sigh, you are now in denial about an objective truth. Imagine continuing to argue with someone who swears the sky is purple.

[Nbc GIF by The Blacklist]
 

ChiefDada

Gold Member
I said setpiece, not cutscene. The fight following the botched Abby execution (bummer) is lit using amazing lighting effects, and then you follow the torches out of the forest. That's a setpiece.

The cutscene is the best-looking cutscene in the game, but I was talking about the setpiece here.

I don't mind comparing cutscenes to gauge visual fidelity as long as we stick to cutscene vs. cutscene comparisons, but I think you are wrong about those FF16 cutscenes being realtime. Any cutscene with that many people on screen is not realtime, just like the ship chase later on in the story.

I wholeheartedly agree and in no way was my comment meant as a slight to you.
 

Lethal01

Member
Sigh, you are now in denial about an objective truth.

What is your proof that this is the truth? The fact that DF said it, while the creator said otherwise? Oki.
When the game comes to PC, we will see.

Regardless, the game's realtime lighting is nowhere near the Spider-Man trailer, and its prerendered lighting is also far from the Spider-Man trailer.
I maintain you are wrong about what's realtime, but even putting aside whether it's realtime or not, it's still not matching the trailer, which is the claim you were trying to push.


Imagine continuing to argue with someone who swears the sky is purple.
My feelings exactly. But again, I do love my charity.
 

CamHostage

Member
It's a plugin that devs can add to their engine, much like Havok or other third-party solutions like DLSS. It will help devs like Bethesda and BioWare produce better-looking characters and animations that their current modeling and procedural animation systems simply can't handle. The game has somewhere around 200-300 characters you interact with, each with their own lines of dialogue. Bethesda simply couldn't animate them all, so they wrote a one-size-fits-all lip-sync algorithm that results in somewhat wooden performances. Now imagine if they just let AI handle it. Either way, it's being programmed, but this time the AI is doing it. No need to develop an algorithm, no need to model any of these characters; just provide it one picture and it will do the rest. It will save time and resources.

Then there is something even I didn't think of until someone else mentioned it in the other thread. If the ML hardware is creating these faces on the fly, then you don't need the GPU to render any of these fancy graphical features; that would get offloaded to the machine learning hardware. No idea how intensive that would be, but assuming next-gen consoles have some kind of ML hardware like the tensor cores DLSS uses, all of that processing will be done on those, leaving the GPU with a lot more power for the devs to spend resources elsewhere...


Wait, are you talking about the Corridor Digital post or something else?

As I understand it, there's no plugin to utilize here, and it's not an animation-saving solution per se. It's a face-morphing tool (and really a face replacement app) built on the "deep face" (not deep fake) open-source face analysis project InsightFace. Picsi replaces an existing face with the photo image and human anatomy model, and movement in the video is remapped to its particular version of the image/motion model. Everything done in the video was post-processing of existing faces (from YT clips), and CD proposes what this would mean as a plugin, but all the work would need to be done ahead of time anyway. Seems like the process would be all the animation work, all the lighting work, most of the modeling work (including all the musculature and phoneme mapping), but instead of finely building a face, you'd use a generic MetaHuman or whatever and then employ this AI tool to place the actor's face on top of the model.
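
For anyone curious, here's roughly what that pipeline looks like in code. This is a minimal sketch using the open-source insightface Python package the tool is built on, not Picsi's actual hosted service; the model choices (the stock 'buffalo_l' detection pack, the separately distributed inswapper_128.onnx swapper) and the file paths are placeholder assumptions on my part:

import cv2
import insightface
from insightface.app import FaceAnalysis

# Face detection + alignment using the stock 'buffalo_l' model pack
app = FaceAnalysis(name='buffalo_l')
app.prepare(ctx_id=0, det_size=(640, 640))

# The swapper network; the inswapper_128.onnx weights have to be obtained separately
swapper = insightface.model_zoo.get_model('inswapper_128.onnx')

source = cv2.imread('actor_photo.jpg')   # the single reference photo of the "actor" (placeholder path)
frame = cv2.imread('game_frame.png')     # one already-rendered frame containing a face (placeholder path)

src_face = app.get(source)[0]            # identity embedding + landmarks of the source face
for target_face in app.get(frame):       # every face detected in the rendered frame
    # Swap the detected face for the source identity and blend it back into the frame
    frame = swapper.get(frame, target_face, src_face, paste_back=True)

cv2.imwrite('swapped_frame.png', frame)

Note that it chews through finished footage one frame at a time, which is also why this is an offline post-process today and not anything close to realtime.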




I also don't see that you would save much on processing even if the face replacement approach was fast and accurate and human-like, since you would still need a face to replace? You wouldn't need to break your back making highly-detailed faces with modeled pores and retinas if the replacement image was high enough in fidelity and depth (and maybe you could save resources by just using a generic avatar face that gets morphed over for all characters?), but you'd still need to put the character in the scene and have them perform. Having a face doesn't give you an actor.

Results are intriguing (probably there will be plenty of RTX Remix projects using such approaches), and we're already face-scanning real people, motion-capturing live faces, and mapping personal scans onto MetaHuman pre-rigged models anyway, so the "artistry" of game design is already on the way out in favor of machine-assisted processes; I get where the excitement is here. But I'm not seeing why Corridor Digital or you are pegging this as the future. It's one model of human facial animation, and it's going to apply that same singular, generic model on every face it applies over an artist's or actor's portrayal of a character.
 

CamHostage

Member
I'm at Unreal Fest right now, in a talk about what's coming in 5.4, which is set to have a lot of significant performance features, but there's still no way it's as production-ready as they claimed 5 would be...

I think this talk is currently being livestreamed, FYI

E: 5.4 is getting Frostbite hair!

Seems to not be fully livestreamed? I'm not seeing a slew of "Why Unreal 5.4 is a HUGE Deal..." vlogger posts going around touting the features as a gift to gaming yet, so I don't know if some of the more interesting content you mention is in there or not. The UE ProjectBoard as always has some tantalizing new bits and promising improvements listed, some of which are in that UE5.4 realm.


Here are the livestreamed videos I can see posted from Unreal Fest 2023. They're quite techy and not jammed with wow-factor, but if any of these topics grab somebody's interest, there are jumpers on the vids to those specific sessions.

Day 1 Part 1 : UEFN Roadmap * Fabulous Content for Fab (Creating Assets to Sell on Fab) * Behind the Scenes on ESPN’s ‘NHL Big City Greens Classic’ Event * Extending Unreal Engine to Create the StoryTech of Hogwarts Legacy * Building a Business Creating the Metaverse
Day 1 Part 2 : ICVR’s Real-Time Pipeline * Optimizing UE5: Rethinking Performance Paradigms for High-Quality Visuals Pt.1 (Nanite and Lumen) * Look Development with Substrate and Lumen * Developer Iteration and Efficiency in Unreal Engine * How NASA Is Using Simulations and Game Engine Technologies to Help Get Us Back to the Moon
Day 1 Part 3 : Blueprints: What, When, Why, and How? * Creating Cinematics for Games, Film, TV, and Broadcast with Unreal Engine * Features and Tips for UE in ’23 * Unleashing the Power of Unreal Engine: Animation Pipeline for Artists and Studios * Procedural Content Generation Tools in UE5 Overview and Roadmap
Day 2 Part 1 : Project Avalanche: a Dedicated Toolkit for Broadcast Graphics and Motion Design * Stylization in UEFN * Adding Verse to Your Creative Toolbelt * Unreal Editor for Fortnite as a Rapid Prototyping Tool * Using Animation in a Production Environment in UE5 and UEFN * Building Bigger: Changing Your Workflow for Building Worlds instead of Scenes
Day 2 Part 2a : State of the Union: Virtual Production * Underwater ICVFX: The Making of Emancipation * Optimizing UE5: Rethinking Performance Paradigms for High-Quality Visuals - Pt.2: Supporting Systems * Reimagining the Horror: Lessons Learned from Layers of Fear * Finding the Fun in Fortnite: How to Create Player-Retentive Games in UEFN * Stylization in Animation and FX
Day 2 Part 2b : Optimizing UE5: Rethinking Performance Paradigms for High-Quality Visuals - Pt.2: Supporting Systems * Reimagining the Horror: Lessons Learned from Layers of Fear * Finding the Fun in Fortnite: How to Create Player-Retentive Games in UEFN * Stylization in Animation and FX
Day 2 Part 3 : Ascendant Studios: Building a Big-Budget UE5 Game from Scratch * The Bright Future of Mobile Ray Tracing in Unreal Engine * Unlocking Creativity * Advanced Debugging in Unreal Engine * Rendering Roadmap: More Data, More Speed, More Pixels, More Fidelity * Unreal Engine Development Update
Day 3 Part 1 : Making a Movie in the Cloud: Empowering Collaborative Filmmaking * Against the Trend: Using Realistic Engine for Stylized Games * Cultural Relevance: Telling Local Stories through UEFN and RealityScan

I think Day 2 Part 3 is probably what has the most info on "the future", but it's still a dry and un-hype-filled scan for gamers. (I did see a talk about PSO pre-caching to help cut down and dedupe shader caching issues, which sounds promising...) Most folks probably will just want to wait for a breakdown, or more realistically the next UE5 showcase event with a trailer using these features and improvements.

(And, like you said, this is all underlying tech, but real-world execution is still slow in coming and frustratingly scarce in "next-genified" games, since the promises and production-readiness of UE5 were way ahead of what's been delivered and proven. Awesome stuff, as always, but there are reasons why that "this is next-gen" boom is still dangling out there years after the consoles launched, and why next-gen features we assumed would be day one are still only sparsely or hesitantly used so much later into the gen.)

(*BTW, I always assumed "frostbite hair" was a baked animation, but the real strand system is fun to dig into, seeing how many scenarios like ponytails and curly hair they built fixes for, as well as how many issues they're still dealing with and the performance tradeoffs they're still contending with.)
 

SlimySnake

Flashless at the Golden Globes
Wait, are you talking about the Corridor Digital post or something else?

As I understand it, there's no plugin to utilize here, and it's not an animation-saving solution per se. It's a face-morphing tool (and really a face replacement app) built on the "deep face" (not deep fake) open-source face analysis project InsightFace. Picsi replaces an existing face with the photo image and human anatomy model, and movement in the video is remapped to its particular version of the image/motion model. Everything done in the video was post-processing of existing faces (from YT clips), and CD proposes what this would mean as a plugin, but all the work would need to be done ahead of time anyway. Seems like the process would be all the animation work, all the lighting work, most of the modeling work (including all the musculature and phoneme mapping), but instead of finely building a face, you'd use a generic MetaHuman or whatever and then employ this AI tool to place the actor's face on top of the model.




I also don't see that you would save much on processing even if the face replacement approach was fast and accurate and human-like, since you would still need a face to replace? You wouldn't need to break your back making highly-detailed faces with modeled pores and retinas if the replacement image was high enough in fidelity and depth (and maybe you could save resources by just using a generic avatar face that gets morphed over for all characters?), but you'd still need to put the character in the scene and have them perform. Having a face doesn't give you an actor.
Nah, it was another user in the main thread. I am aware that there is no plugin, but the guys in the video said that this could become one. That's the entire thesis of the video: that this tech can indeed be put into games and, once integrated into the engine, could provide some stunning results.

And no, you won't need a face, you just need a picture; the AI algorithm fills in the rest. So you could have a very basic mesh of like 10 polygons, like Snake in MGS1, and it will do the rest. Yes, you will need ML hardware, but that will free up the GPU from rendering the character face. Who knows, maybe in the future that Frostbite hair GymWolf jerks off to every other week might actually be done by AI.


Results are intriguing (probably there will be plenty of RTX Remix projects using such approaches), and we're already face-scanning real people, motion-capturing live faces, and mapping personal scans onto MetaHuman pre-rigged models anyway, so the "artistry" of game design is already on the way out in favor of machine-assisted processes; I get where the excitement is here. But I'm not seeing why Corridor Digital or you are pegging this as the future. It's one model of human facial animation, and it's going to apply that same singular, generic model on every face it applies over an artist's or actor's portrayal of a character.

Because face scanning, hiring actors, and mocapping are a time-consuming and expensive process that produces results that need dozens of animators to go in and retouch. This is NOT something Bethesda can do. Anyone who has played Starfield knows it's not a space exploration game, it's a dialogue-driven game set in space. Like 60% of the game is dialogue trees, and there is just no way they could've mocapped it the way Naughty Dog mocaps 4-5 hours of cutscenes and then spends 3 years hand-keying the facial animations.
 

CamHostage

Member
And no, you won't need a face, you just need a picture; the AI algorithm fills in the rest. So you could have a very basic mesh of like 10 polygons, like Snake in MGS1, and it will do the rest. Yes, you will need ML hardware, but that will free up the GPU from rendering the character face.

Eh, no, still needs a face.

Or else it needs to know all the parameters of the lighting scenario and the motions of the performer (or motion data of the digital character), and then supposedly it would calculate its own face anatomy algorithm (and lighting and decals/details and possibly collision) to approximate every frame of character data and make just the face image, which would then be composited into the image (like the LA Noire video faces, but not as a texture, as a fully processed layer of imagery) without polygonal detail but with all the complications of needing to calculate that polygonal detail/behavior inside the scene... at which point it seems like you'd just use polygons.

These demos worked because they put a face on a face.

This is face morph technology. There needs to be a face so that they can replace the face. In the cases where they've put faces over old games, they've put a modern hi-res face over an old low-fi face, and it works about as well as you'd expect characters with cartoonish lipflaps suddenly getting photorealistic makeovers would work. The concept of having this at the root level with a complete ML-trained dataset for anatomy proportions/motion to take one simple image (not even a texture, a source for a texture) and apply it as the face imagery for an otherwise 3D character could be an interesting experiment, but nothing about it seems to be a savings or even an improvement. (For instance, they're doing AI-generated voices from simple text, with inflection based on ML results of actors' habits, so if you then continue to run that into the digital actor's performance of face movement and gestures you could have NPCs delivering full performances from just a line of text, but that's why you have MetaHuman with all the detail and expression of a face to perform those lines.)

Because face scanning, hiring actors, and mocapping are a time-consuming and expensive process that produces results that need dozens of animators to go in and retouch. This is NOT something Bethesda can do. Anyone who has played Starfield knows it's not a space exploration game, it's a dialogue-driven game set in space. Like 60% of the game is dialogue trees, and there is just no way they could've mocapped it the way Naughty Dog mocaps 4-5 hours of cutscenes and then spends 3 years hand-keying the facial animations.

If every actor in every game is using the exact same ML-trained face algorithm, you're going to long for the days of dozens of animators going in and retouching faces...

I still don't agree with your outlook on this technology. The face still needs to be acted and mocapped and modeled; the only savings is that you get to morph one face photo instead of using more complicated, exacting equipment (with the machine making all the determinations of what to sub in for detail everywhere that's not shown in the face photo, unlike a proper face/body scan). And because they are reversing the InsightFace algorithm for face alignment to apply naturally recognizable facial proportions and movement instead of artist-created approximations, it looks mostly pleasing when morphed over existing works.

This can "fix" shitty Bethesda faces made from shared parts in an outdated character generator by putting real people's mugs in there instead, but Bethesda still has to go through most of the work of making the shitty face, and also hire the actor to voice and portray the face, before you can slap a morphing photo over it.
 

Neilg

Member
Seems to not be fully livestreamed?

Thanks for collecting all of those!
A few of those that were streamed I missed and wanted to check out.
I think they very intentionally avoided sounding markety during the talks, so videos extrapolating new features and what they mean for games don't get made en masse - best left for the marketing dept to handle later. Everything was very dry and technical.

The Rendering Roadmap on Day 2, Part 3 was the talk I was in when I made that post.
The procedural tools talk was interesting, but like many things I saw, they're building from the ground up features that already exist in Max and Houdini, except since it's the first version and they want it to do everything, it's overcomplicated. They're set to change a lot.
 

Represent.

Represent(ative) of bad opinions


Good watch/interview. Really highlights the importance of 30fps for any dev that wants to push fidelity.

Straight up says "60fps is a huge, huge compromise"

In regards to games not being 4K60:
"Would you rather us make the game on the weakest platform, or would you rather us take advantage of the most powerful platforms?"

Talks about how the word "Optimize" is thrown around by FrameRate warriors as if it's some type of magic.

They were able to get the performance mode to run "mostly" at 60. Lots of sacrifices were made in that mode. Their focus was absolutely on getting a rock-solid 30.

"I'm a big performance mode guy, but I play this at 30 because it's so smooth and the visuals are just incredible"

Playing this at 60fps sounds like a lesser experience.
 

ChiefDada

Gold Member


Good watch/interview. Really highlights the importance of 30fps for any dev that wants to push fidelity.

Straight up says "60fps is a huge, huge compromise"

In regards to games not being 4K60:
"Would you rather us make the game on the weakest platform, or would you rather us take advantage of the most powerful platforms?"

Talks about how the word "Optimize" is thrown around by FrameRate warriors as if it's some type of magic.

They were able to get the performance mode to run "mostly" at 60. Lots of sacrifices were made in that mode. Their focus was absolutely on getting a rock-solid 30.

"I'm a big performance mode guy, but I play this at 30 because it's so smooth and the visuals are just incredible"

Playing this at 60fps sounds like a lesser experience.


I love it. Let developers make the game how they intend. A not-so-perfect 60 fps mode is fine for the sake of more choices. I really hope they consider the possibility of 40 fps. I'm rooting for Remedy.
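
For what it's worth, the appeal of a 40 fps mode is simple frame-time arithmetic; a quick sketch (assuming the mode is delivered in a 120 Hz output container, which is how these modes usually ship):

# Frame-time budget per frame rate, and how many 120 Hz refreshes each frame spans
for fps in (30, 40, 60):
    frame_ms = 1000 / fps                 # milliseconds the renderer gets per frame
    refreshes = frame_ms / (1000 / 120)   # a 120 Hz panel refreshes every ~8.33 ms
    print(f"{fps} fps: {frame_ms:.1f} ms/frame, {refreshes:.0f} refreshes at 120 Hz")
# 30 fps: 33.3 ms/frame, 4 refreshes at 120 Hz
# 40 fps: 25.0 ms/frame, 3 refreshes at 120 Hz
# 60 fps: 16.7 ms/frame, 2 refreshes at 120 Hz

So 40 fps hands the renderer 1.5x the budget of 60 fps while still pacing evenly on a 120 Hz display.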
 

Luipadre

Member


Good watch/interview. Really highlights the importance of 30fps for any dev that wants to push fidelity.

Straight up says "60fps is a huge, huge compromise"

In regards to games not being 4K60:
"Would you rather us make the game on the weakest platform, or would you rather us take advantage of the most powerful platforms?"

Talks about how the word "Optimize" is thrown around by FrameRate warriors as if it's some type of magic.

They were able to get the performance mode to run "mostly" at 60. Lots of sacrifices were made in that mode. Their focus was absolutely on getting a rock-solid 30.

"I'm a big performance mode guy, but I play this at 30 because it's so smooth and the visuals are just incredible"

Playing this at 60fps sounds like a lesser experience.


Yeah, I'm sticking with 30 on Alan Wake for sure.
 

GymWolf

Gold Member
I feel like AW2 is gonna let down some people in here expecting something fully next-gen that obliterates HFW... I already saw some questionable things for a supposedly 30 fps game (it isn't one if they can squeeze in a 60 fps mode, but let Represent dream for a bit).

Oh, and before you ask: yes, I'm pretty sure that a small, restricted, next-gen-only game should obliterate a gigantic cross-gen open-world game.
 