
PS5 Pro Specs Leak are Real, Releasing Holiday 2024 (Insider Gaming)

Perrott

Member
Nah, who will play that again? GT is meant to be played forever unlike story-driven games.
Sony's biggest story-driven games have very long legs in terms of sales though and they are staying at full price for much, much longer than they did last generation. So I'd say it's in Sony's best interest to bring back the spotlight to their Marvel's Spider-Man games, their The Last Of Us re-releases, Ratchet & Clank: Rift Apart, Horizon: Forbidden West, God of War: Ragnarök, Demon's Souls and Returnal by delivering PS5 Pro patches with enhanced performance and image quality through PSSR and RT features.

And that's not to say that their live-service offerings, Gran Turismo 7 and Helldivers 2, shouldn't get support either. Those are actually the ones that absolutely will receive it.
 

Ashamam

Member
I'll tell you one thing I would like the Pro to support, and that's a bump in the remote play resolution. At least to 1440p.
 

ChiefDada

Gold Member
Nah, who will play that again? GT is meant to be played forever unlike story-driven games.

If RTGI is in play (and it should be) I will immediately dive in for a second playthrough. There's also dlc opportunity for ragnarok in case people need further incentives beyond new hardware capabilities.
 
Consider that component costs have been increasing a lot, so base PS5 must be selling at a nice loss. Sony wants to improve their profitability, so very likely won't sell the Pro at a loss, or will try to sell it with a minimal loss.

In fact, Sony may end up increasing the price of the base PS5 again.
What you should expect is $599 without a disc drive at minimum, up to $699 without a disc drive at maximum. I lean towards the former.
Yeah, $599 is my bet as well.

$699 is suicide IMO. $599 is the same price difference between PS4 and PS4 Pro and Sony already said a price drop on the PS5 would be hard.
 

Imtjnotu

Member
Yeah, $599 is my bet as well.

$699 is suicide IMO. $599 is the same price difference between PS4 and PS4 Pro and Sony already said a price drop on the PS5 would be hard.
The price of the PS5 and its components has been set in stone. The cost increasing for us does nothing to them or their vendors, who have already signed contracts for supply.

Same with the Pro. Manufacturing will start over the next few months, and Sony has already selected and purchased supplies.

Those numbers are done. Price is the only thing left for them to announce.
 

Ashamam

Member
The price of the PS5 and its components has been set in stone. The cost increasing for us does nothing to them or their vendors, who have already signed contracts for supply.
Sony demonstrated with the PS4 that they don't just sit still on manufacturing costs. Contracts have durations, they get renegotiated, investments in assembly automation kick in, etc. So whilst, based on various comments, they aren't seeing great gains on supplier inputs, I suspect they have managed to drive down the cost a bit. E.g. less material used in heatsinks, lower shipping volume (smaller box), etc.

Would be interesting to know if they started the generation with a similar level of assembly automation to what they finished with on the PS4. There was a presentation at some point about the new PS4 plant having about 4 people on the floor in total. The assembly rate was 1 unit per 30 seconds off the line, although I don't think I ever heard whether they scaled up to multiple lines. A reported rate of 1 million per year is too low for a single assembly line to be doing much to help the overall supply numbers.
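(Back-of-the-envelope, my own arithmetic only: one unit every 30 seconds on a single line running around the clock does land right at about that 1 million a year.)

# Rough sanity check with assumed numbers (one unit per 30 s, single line, 24/7):
seconds_per_year = 60 * 60 * 24 * 365       # 31,536,000
units_per_year = seconds_per_year / 30      # ~1,051,200
print(f"{units_per_year:,.0f} units/year")  # 1,051,200 units/year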

 
Last edited:

FireFly

Member
This is the type of thoughtful analysis that is completely absent at DF. I really like his content and would love it if he expanded into consoles a bit as well.


He reaches pretty much the exact same conclusion that Alex did: that more complex ray tracing effects incur a proportionally higher frame time cost on AMD hardware.

Only he's not measuring that cost in ms, because he's just comparing overall frame rates at a given quality level.
 

Xyphie

Member
He reaches pretty much the exact same conclusion that Alex did: that more complex ray tracing effects incur a proportionally higher frame time cost on AMD hardware.

Only he's not measuring that cost in ms, because he's just comparing overall frame rates at a given quality level.
He’s saying things we’ve known for years.

Poster in question is just one of those people who shits up every DF thread on this forum with bad-faith arguments, out-of-context quotes and Alex's weird BDSM photo shoots. For instance, this video DF did way back around the RDNA2 and Ampere launch has what's actually the best methodology for judging RT performance (looking at the frame time cost of enabling a specific RT effect), and comes to the same conclusion, i.e. the heavier the RT load is, the worse AMD cards do.
 

winjer

Gold Member
For instance, this video DF did way back around the RDNA2 and Ampere launch has what's actually the best methodology for judging RT performance (looking at the frame time cost of enabling a specific RT effect), and comes to the same conclusion, i.e. the heavier the RT load is, the worse AMD cards do.

LOL, some people still don't understand that frame time and frame rate are the same thing, just presented in a different way.
What Alex is doing is what everyone does: measuring the final frame rate. He just expresses it as a time, so if a game is running at 60 fps, that's 16.667 ms.
It's the same basic stuff.

If you want a proper analysis of RT on RDNA2 and Ampere, then take a look at the one done by Chips and Cheese.
There we have data on how the BVH is structured, what the occupancy is like in the caches, the ALUs, etc.
In comparison, that DF analysis is amateurish.
 

Xyphie

Member
LOL, some people still don't understand that frame time and frame rate are the same thing, just presented in a different way.
What Alex is doing is what everyone does: measuring the final frame rate. He just expresses it as a time, so if a game is running at 60 fps, that's 16.667 ms.
It's the same basic stuff.

If you want a proper analysis of RT on RDNA2 and Ampere, then take a look at the one done by Chips and Cheese.
There we have data on how the BVH is structured, what the occupancy is like in the caches, the ALUs, etc.
In comparison, that DF analysis is amateurish.

He's specifically trying to isolate the RT cost from the raster cost, which was my point: it's a better method than looking at the raw frame rate/time between GPU A and GPU B. You could of course do that using an IHV profiler tool like Nsight, but that's just another layer on top of that.
 

winjer

Gold Member
He's specifically trying to isolate the RT cost from the raster cost, which was my point: it's a better method than looking at the raw frame rate/time between GPU A and GPU B. You could of course do that using an IHV profiler tool like Nsight, but that's just another layer on top of that.

He is just doing what everyone else did: measuring frame rate while turning RT on and off. He just reported it as frame time instead of frame rate.
Anyone can do that; it's as simple as taking 1000 and dividing by the frame rate.
So at 60 fps, just do 1000/60 = 16.667 ms. If it then lowers to 45 fps, just do 1000/45 = 22.222 ms.
Applications like RTSS do this automatically, as they report both frame rates and frame times.
But doing this with the final frame rate/time shows little about what is really going on under the hood.
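(A minimal sketch of that conversion, just to spell out the arithmetic; nothing here is specific to DF's tooling:)

# Frame time is just the reciprocal of frame rate, so the two carry the same information.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

print(frame_time_ms(60))  # 16.666... ms
print(frame_time_ms(45))  # 22.222... ms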

The test that actually tried to "isolate RT cost" was that review by Chips and Cheese, where they actually tried to see where the RT instructions were hitting during the execution pipeline.
 

Xyphie

Member
He is just doing what everyone else did: measuring frame rate while turning RT on and off. He just reported it as frame time instead of frame rate.
Anyone can do that; it's as simple as taking 1000 and dividing by the frame rate.
So at 60 fps, just do 1000/60 = 16.667 ms. If it then lowers to 45 fps, just do 1000/45 = 22.222 ms.
Applications like RTSS do this automatically, as they report both frame rates and frame times.
But doing this with the final frame rate/time shows little about what is really going on under the hood.

The test that actually tried to "isolate RT cost" was that review by Chips and Cheese, where they actually tried to see where the RT instructions were hitting during the execution pipeline.

Again, you don't seem to get my point. I think it's very simple: in general, the RT performance of a given GPU should be presented as a delta from its rasterized performance when shown as an aggregate, and ideally broken down on a per-effect basis until we're in the age of RT-only games. This is because the actual RT cost is baked into a given average frame time. If reviewers used Nsight to extract that delta, nothing would really change, and it would be even better. Most RT performance is, for instance, presented like this, which I find mostly pointless in isolation; I'd rather have a graph of the +% increase in frame time cost compared to the rasterized baseline.
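(A rough sketch of the presentation I mean, with made-up numbers purely for illustration:)

# Reporting RT cost as a % increase in frame time over the rasterized baseline
# (the frame rates below are invented, not measurements).
def rt_cost_percent(fps_raster: float, fps_rt: float) -> float:
    raster_ms = 1000.0 / fps_raster
    rt_ms = 1000.0 / fps_rt
    return (rt_ms - raster_ms) / raster_ms * 100.0

# Hypothetical GPU B has the higher raw RT frame rate, yet pays the larger relative RT cost:
print(rt_cost_percent(fps_raster=90, fps_rt=60))   # GPU A: ~50% extra frame time
print(rt_cost_percent(fps_raster=140, fps_rt=80))  # GPU B: ~75% extra frame time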
 
Last edited:

winjer

Gold Member
Again, you don't seem to get my point. I think it's very simple: in general, the RT performance of a given GPU should be presented as a delta from its rasterized performance when shown as an aggregate, and ideally broken down on a per-effect basis until we're in the age of RT-only games. This is because the actual RT cost is baked into a given average frame time. If reviewers used Nsight to extract that delta, nothing would really change, and it would be even better. Most RT performance is, for instance, presented like this, which I find mostly pointless in isolation; I'd rather have a graph of the +% increase in frame time cost compared to the rasterized baseline.

You still don't understand that what I'm saying is that Alex's analysis is very basic.
It's something that anyone can do. There is no technical depth to it.
It's just running the game with and without RT and seeing the difference. This is something that any person can do.
So let's not pretend that Alex is some master guru of 3D rendering analysis.
 

Xyphie

Member
You still don't understand that what I'm saying is that Alex's analysis is very basic.
It's something that anyone can do. There is no technical depth to it.
It's just running the game with and without RT and seeing the difference. This is something that any person can do.
So let's not pretend that Alex is some master guru of 3D rendering analysis.

I've never said it's particularly complex. It's not. But Alex puts more effort into his videos and has better methodology than >95% of hardware reviewers out there who just run the same canned default benchmark in 5-10 games at 3 resolutions.
 
Last edited:

winjer

Gold Member
I've never said it's particularly complex. It's not. But Alex puts more effort into his videos and has better methodology than >95% of hardware reviewers out there who just run the same canned default benchmark in 5-10 games at 3 resolutions.

Sorry, but it's not.
He makes a lot of mistakes. From using bad terminology, to bad methodology in his test bench.
And he is extremely biased in his analysis.
There are dozens of better professional hardware reviewers than him.
Seriously, he isn't even the best in Digital Foundry.
 
Sorry, but it's not.
He makes a lot of mistakes. From using bad terminology, to bad methodology in his test bench.
And he is extremely biased in his analysis.
There are dozens of better professional hardware reviewers than him.
Seriously, he isn't even the best in Digital Foundry.
I'm curious to know an example where the methodology was "bad"? I would have assumed that all contributions are peer reviewed before publication (i.e. either Rich or John will go through Alex's videos and vice versa). So any major methodology shortcomings SHOULD be caught.
 

Gaiff

SBI’s Resident Gaslighter
Sorry, but it's not.
He makes a lot of mistakes. From using bad terminology, to bad methodology in his test bench.
And he is extremely biased in his analysis.
There are dozens of better professional hardware reviewers than him.
Seriously, he isn't even the best in Digital Foundry.
Curious but who is the best at DF? None of them is great. Alex can be excellent when he wants to but oftentimes, he isn’t.
 

SlimySnake

Flashless at the Golden Globes
Sorry, but it's not.
He makes a lot of mistakes. From using bad terminology, to bad methodology in his test bench.
And he is extremely biased in his analysis.
There are dozens of better professional hardware reviewers than him.
Seriously, he isn't even the best in Digital Foundry.
Hardware Unboxed and Gamers Nexus come to mind. DF and Alex are like the JayzTwoCents of console YouTubers. They put in the effort and almost always arrive at the wrong conclusion.
 

SlimySnake

Flashless at the Golden Globes
Curious but who is the best at DF? None of them is great. Alex can be excellent when he wants to but oftentimes, he isn’t.
John is the best, but he's a bit of a drama queen and gets triggered a lot, so he purposely tries to keep his takes safe. Especially when it comes to exclusives from either party. But he's got real mental health issues to worry about, so I will always give him a pass. Alex makes too many assumptions when it comes to console gaming, and Richard does as well, but at least Richard knows when to stop and draw the line.

From best to worst:
John
Richard
Alex
Oliver
Hitler
Stalin
Mao
Tom Morgan
 

Gaiff

SBI’s Resident Gaslighter
John is the best, but he's a bit of a drama queen and gets triggered a lot, so he purposely tries to keep his takes safe. Especially when it comes to exclusives from either party. But he's got real mental health issues to worry about, so I will always give him a pass. Alex makes too many assumptions when it comes to console gaming, and Richard does as well, but at least Richard knows when to stop and draw the line.

From best to worst:
John
Richard
Alex
Oliver
Hitler
Stalin
Mao
Tom Morgan
Can’t agree with this at all. Richard being better than Alex at anything is just no. John I can agree because despite having less technical knowledge, he’s more open-minded and objective and doesn’t have an agenda against consoles.

Still, I think in terms of knowledge and who understands the most concepts, it's Alex by a wide margin, even if he has some major misses (Halo Infinite).
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Can’t agree with this at all. Richard being better than Alex at anything is just no. John I can agree because despite having less technical knowledge, he’s more open-minded and objective and doesn’t have an agenda against consoles.

Still, I think in terms of knowledge and who understands the most concepts, it's Alex by a wide margin, even if he has some major misses (Halo Infinite).
I think it appears this way because he likes to shout out those fancy terms to make himself look smart. Very similar to NX Gamer in that respect. I don't think any of them is an actual graphics programmer, so it would be a mistake to assume they know anything substantial. They are no more knowledgeable than posters on GAF and Era who hang out in threads like this. In fact, it was in these very threads that some posters looked at the XSX benchmarks from the Hot Chips presentation pre-launch and extrapolated PS5 performance, finding that the PS5 did have advantages in some key areas. Areas MS themselves had identified as crucial. Meanwhile, DF kept parroting MS claims verbatim until launch.


Regardless, Richard has been in the business and doing this far longer than Alex. Richard does lean on him for RT knowledge, but Richard has more experience in virtually everything else. I kinda wish Richard didn't take his word as gospel, because Alex is where the "PS5 is too weak for RT" nonsense came from. It was audio-only RT at first, then shadows-only RT, then reflections-only RT, and Richard just nodded along because he thought Alex knew what he was talking about.
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
I think it appears this way because he likes to shout out those fancy terms to make himself look smart. Very similar to NX Gamer in that respect. I don't think any of them is an actual graphics programmer, so it would be a mistake to assume they know anything substantial. They are no more knowledgeable than posters on GAF and Era who hang out in threads like this. In fact, it was in these very threads that some posters looked at the XSX benchmarks from the Hot Chips presentation pre-launch and extrapolated PS5 performance, finding that the PS5 did have advantages in some key areas. Areas MS themselves had identified as crucial. Meanwhile, DF kept parroting MS claims verbatim until launch.
Alex is on Beyond3D, and I did see him engage with people actually involved in game development; they usually like his methodology and often point out that a lot of the technical language he uses is in fact correct. He isn't an engineer or a software developer, but you don't need that background to do what he does.
Regardless, Richard has been in the business and doing this far longer than Alex. Richard does lean on him for RT knowledge, but Richard has more experience in virtually everything else. I kinda wish Richard didn't take his word as gospel, because Alex is where the "PS5 is too weak for RT" nonsense came from. It was audio-only RT at first, then shadows-only RT, then reflections-only RT, and Richard just nodded along because he thought Alex knew what he was talking about.
Richard is the boss and has more business savvy than Alex but he isn't better at all.

Regardless of all of this, I did see several folks involved in game development praise DF but this was over a year ago (maybe even 2). I would say the quality of their coverage since then has been very hit-or-miss with a lot of misses.
 

SlimySnake

Flashless at the Golden Globes
Well, they continue getting dev access, so they are clearly doing something right. I'd just caution that they should be used as a great tool for basic analysis and direct developer interviews. Anything beyond that, like hardware comparisons, they seem to continuously fuck up.
 

Gaiff

SBI’s Resident Gaslighter
Well, they continue getting dev access, so they are clearly doing something right. I'd just caution that they should be used as a great tool for basic analysis and direct developer interviews. Anything beyond that, like hardware comparisons, they seem to continuously fuck up.
I would argue they were doing something right up until the release of the current consoles or so.

At this point, they have built connections and a network in the industry, so dev access might be more due to their relationships than expertise.

Their last few pieces haven’t impressed me much if I’m being honest.
 
Last edited:

winjer

Gold Member
I'm curious to know an example where the methodology was "bad"? I would have assumed that all contributions are peer reviewed before publication (i.e. either Rich or John will go through Alex's videos and vice versa). So any major methodology shortcomings SHOULD be caught.

Here are a few examples.
The Ryzen 3600 that they use has a memory latency of 90+ ns, as reported by their own tests.
With basic XMP, this should be in the range of 70 ns. I don't know why their test bench is performing so badly, but it's the kind of thing that heavily skews results, as Zen 2 is very memory dependent.
It could be that they have the memory improperly configured, or that they have a ton of bloatware running in the background, or a faulty Windows installation. But whatever the issue is, it invalidates their results.
Not only does this reduce average frame rates, but it can also cause more stutters.

When DLSS 1 was released, every other reviewer and gamer was trashing it for how bad it looked. But Alex was praising it, as if he could not see how bad it looked.
Another issue is that he and all of DF still don't understand that a BVH is a data structure. But they still think it's geometry that can be projected and seen.
Then there is the whole thing with their obsession with motion blur. Something that is despised by almost every PC gamer, as it drastically reduces image quality. But Alex always turns it on.
Watching him talk about image clarity and disocclusion artifacts and then turning on motion blur is so aggravating.
And then there is his extreme fanboyism for Xbox and Nvidia. Or his fanboyism for Crysis, which is so bad that even Crytek roll their eyes in disgust when they do interviews.

Now mind you, I'm not saying he does everything wrong. There is value in his reviews.
But he is not the master analyst some people think he is.
 
Last edited:

FireFly

Member
When DLSS 1 was released, every other reviewer and gamer was trashing it for how bad it looked. But Alex was praising it, as if he could not see how bad it looked.
Another issue is that he and all of DF still don't understand that a BVH is a data structure. But they still think it's geometry that can be projected and seen.
Then there is the whole thing with their obsession with motion blur. Something that is despised by almost every PC gamer, as it drastically reduces image quality. But Alex always turns it on.
Watching him talk about image clarity and disocclusion artifacts and then turning on motion blur is so aggravating.
And then there is his extreme fanboyism for Xbox and Nvidia. Or his fanboyism for Crysis, which is so bad that even Crytek roll their eyes in disgust when they do interviews.

Now mind you, I'm not saying he does everything wrong. There is value in his reviews.
But he is not the master analyst some people think he is.
So I checked and Alex was actually highly critical of DLSS 1.0 when he first tested it in a game.



And Alex's point about motion blur is that, at a given framerate, objects that are moving fast enough will see significant gaps between frames where motion information is not captured. This is an artifact of having a fixed framerate that can reduce the perceptual smoothness of the image. Alex is claiming that motion blur can be used to help address these gaps. Or, in particular, that per-object motion blur with a high enough sample rate and an appropriately tuned shutter speed can increase perceived smoothness on fast-moving objects for some people.

He's not saying that everyone should turn on motion blur. Nor is he saying that all forms of motion blur are good. In particular, he singles out camera motion blur for reducing fidelity in the center of the screen, and games that don't have a high enough sample rate and don't allow for shutter speed control.
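(A toy sketch of that idea, my own simplified model rather than anything from DF or a real engine: the object is sampled at several points across the fraction of the frame the "shutter" is open, instead of being frozen at a single instant.)

# Toy per-object motion blur model (illustrative only): positions, in pixels from the
# frame start, at which a moving object would be sampled while the shutter is open.
def blur_sample_positions(speed_px_per_s: float, fps: float,
                          shutter_fraction: float, samples: int) -> list[float]:
    open_time = shutter_fraction / fps  # seconds the shutter stays open each frame
    return [speed_px_per_s * open_time * i / (samples - 1) for i in range(samples)]

# Fast object at 30 fps with a 180-degree shutter (open half the frame), 8 samples:
print(blur_sample_positions(speed_px_per_s=2400, fps=30, shutter_fraction=0.5, samples=8))
# The trail spans 0-40 px, so the jump to the next frame's position (80 px away) looks less disconnected.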

 
Last edited:

winjer

Gold Member
So I checked and Alex was actually highly critical of DLSS 1.0 when he first tested it in a game.



When it first showed up in BFV, he was praising DLSS1 as a big advancement.
He then probably saw that everyone was trashing it and decided to change his tune.

And Alex's point about motion blur is that, at a given framerate, objects that are moving fast enough will see significant gaps between frames where motion information is not captured. This is an artifact of having a fixed framerate that can reduce the perceptual smoothness of the image. Alex is claiming that motion blur can be used to help address these gaps. Or, in particular, that per-object motion blur with a high enough sample rate and an appropriately tuned shutter speed can increase perceived smoothness on fast-moving objects for some people.

He's not saying that everyone should turn on motion blur. Nor is he saying that all forms of motion blur are good. In particular, he singles out camera motion blur for reducing fidelity in the center of the screen, and games that don't have a high enough sample rate and don't allow for shutter speed control.



The answer to that question is never.
Even when I was playing PC games at 30 or 60 fps, I would always turn off motion blur.
It ruins image quality and does little to address motion smoothness. Worse yet, it has an impact on performance, though a small one.
So it's always a net loss using motion blur. And DF, including Alex, always seems to have it on.
 

FireFly

Member
When it first showed up in BFV, he was praising DLSS1 as a big advancement.
He then probably saw that everyone was trashing it and decided to change his tune.
Was this before or after he tested it? Since the BFV DLSS patch arrived only 3 days before his Metro Exodus video.
The answer to that question is never.
Even when I was playing PC games at 30 or 60 fps, I would always turn off motion blur.
It ruins image quality and does little to address motion smoothness. Worse yet, it has an impact on performance, though a small one.
So it's always a net loss using motion blur. And DF, including Alex, always seems to have it on.
To be clear, we're talking about per-object, not per-camera motion blur. And your claim would need to be that no one honestly prefers the extra perceptual smoothness. That you somehow know better than other people how their visual systems work.
 
Last edited:

winjer

Gold Member
Was this before or after he tested it? Since the BFV DLSS patch arrived only 3 days before his Metro Exodus video.

I forget exactly when he did it. But it was in a video about BFV.

To be clear, we're talking about per-object, not per-camera motion blur. And your claim would need to be that no one honestly prefers the extra perceptual smoothness. That you somehow know better than other people how their visual systems work.

It's still motion blur. Smudging the image just to pretend it's smoother is a silly concept.
It's much better to turn motion blur off, get a bit of extra performance and much better image quality.
 

FireFly

Member
It's still motion blur. Smudging the image just to pretend it's smoother is a silly concept.
It's much better to turn motion blur off, get a bit of extra performance and much better image quality.
It's not smudging the whole image. It's smudging particular objects according to their motion vectors to minimise the "gaps" between adjacent frames.

I don't see how that's any more a "silly concept" than the idea that a set of still images can be turned by your brain into continuous motion. If you don't perceive the gaps between frames then your brain is already doing something magical. In that context we shouldn't expect "hacks" to make sense.

It would be like arguing that of course binaural audio can't work because there's only ever a stereo signal. So all those people claiming that they perceive positional audio must just be mistaken.
 

winjer

Gold Member
It's not smudging the whole image. It's smudging particular objects according to their motion vectors to minimise the "gaps" between adjacent frames.

I don't see how that's any more a "silly concept" than the idea that a set of still images can be turned by your brain into continuous motion. If you don't perceive the gaps between frames then your brain is already doing something magical. In that context we shouldn't expect "hacks" to make sense.

It would be like arguing that of course binaural audio can't work because there's only ever a stereo signal. So all those people claiming that they perceive positional audio must just be mistaken.

Because just smudging an object does nothing to the number of frames.
It's not even adding "fake" frames, as with frame generation. It's just making image quality much worse.
 

FireFly

Member
Because just smudging an object does nothing to the number of frames.
It's not even adding "fake" frames, as with frame generation. It's just making image quality much worse.
The perceived fluidity of motion depends on the transitions between each frame and not just their quantity. And that depends on the speed the objects are moving. A (subsonic) bullet will seem to be moving a lot less smoothly than a walking person because the distance covered in each frame is much greater.

If you double the frame rate, the number of transitions will double but the projectile will still look comparatively more jerky, until the frame rate is high enough to match the per-pixel speed of the projectile.

But by adding a trail to the projectile, the gap between its appearance in each subsequent frame is reduced, communicating to the brain that it is a smoothly moving object.

To me this is "extra information" in a similar way that binaural audio is giving the brain extra information about how to interpret sounds, without providing a separate sound source.
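(Rough numbers, picked purely for illustration:)

# Per-frame displacement is the gap the eye has to bridge between frames; it scales
# with object speed, not just frame rate (the speeds below are invented examples).
def gap_px(speed_px_per_s: float, fps: float) -> float:
    return speed_px_per_s / fps

print(gap_px(240, 60))     # walking figure: 4 px per frame - reads as continuous motion
print(gap_px(12000, 60))   # fast projectile: 200 px per frame - reads as discrete jumps
print(gap_px(12000, 120))  # doubling the frame rate still leaves a 100 px jump each frame
# A blur trail roughly as long as that gap fills it in without adding any frames.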
 
Last edited:

winjer

Gold Member
The perceived fluidity of motion depends on the transitions between each frame and not just their quantity. And that depends on the speed the objects are moving. A (subsonic) bullet will seem to be moving a lot less smoothly than a walking person because the distance covered in each frame is much greater.

If you double the frame rate, the number of transitions will double but the projectile will still look comparatively more jerky, until the frame rate is high enough to match the per-pixel speed of the projectile.

But by adding a trail to the projectile, the gap between its appearance in each subsequent frame is reduced, communicating to the brain that it is a smoothly moving object.

To me this is "extra information" in a similar way that binaural audio is giving the brain extra information about how to interpret sounds, without providing a separate sound source.

Sorry, but comparing binaural audio to motion blur is just silly.

Personally, I don't see any improvements in fluidity by using motion blur. Just ugly graphics.
 

FireFly

Member
Sorry, but comparing binaural audio to motion blur is just silly.

Personally, I don't see any improvements in fluidity by using motion blur. Just ugly graphics.
Presumably because of the way your brain interprets the blur, which will differ between individuals. We already have plenty of evidence that some people see per object motion blur as being of benefit, and some people don't. Neither side is "wrong" because perception is inherently subjective.
 

Pedro Motta

Gold Member
Sorry, but comparing binaural audio to motion blur is just silly.

Personally, I don't see any improvements in fluidity by using motion blur. Just ugly graphics.
Everyone is entitled to their opinion, but what you're saying is objectively wrong. Motion blur reduces judder between frames. It's something that is present in your own vision and something that film cameras simulate as well.
 

winjer

Gold Member
Everyone is entitled to their opinion, but what you're saying is objectively wrong. Motion blur reduces judder between frames. It's something that is present in your own vision and something that film cameras simulate as well.

No it doesn't. Maybe it does for some people that have less visual acuity.
But for me it's the same 66.66ms judder, but with uglier graphics.
 

Ownage

Member
Ended up buying a PS5 Slim instead of waiting for a Pro. Will ride this one out until PS6.

Looking forward to playing the FF7 series.
 
Last edited:

Pedro Motta

Gold Member
No it doesn't. Maybe it does for some people that have less visual acuity.
But for me it's the same 66.66ms judder, but with uglier graphics.
What do you mean it doesn't? It's a fact. That's why OLED TVs have horrible judder: their instant pixel response removes all motion blur from one frame to the next, making it look janky, just like any video game without motion blur. Unless you're running it at 120 Hz at least.
 

winjer

Gold Member
What do you mean it doesn't? It's a fact. That's why OLED TVs have horrible judder: their instant pixel response removes all motion blur from one frame to the next, making it look janky, just like any video game without motion blur. Unless you're running it at 120 Hz at least.

At this point you are just making excuses. I'm from the era of CRTs, with the most instant pixel response, and there was no such issue.
If you like motion blur, then you can keep it on. But I will always keep it off, especially considering I play at higher frame rates.
 
Last edited:

ChiefDada

Gold Member
Ended up buying a PS5 Slim instead of waiting for a Pro. Will ride this one out until PS6.

Looking forward to playing the FF7 series.

Really? You held out so long, what's another 6 months? I traded mine in a month or so ago, with the cash earmarked for the PS5 Pro.
 