If games on the Pro still used only 4x AF, that would be beyond pathetic... what a weird thing to draw a line on for possible complaints.
I am willing to bet it will. For whatever reason, that seems to be an area that devs always cheap out on. I say devs because there is no reason, even on the PS5 as it is, that this should be the case. Have you seen the frametime impact of enabling it on PC? Practically non-existent.
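And it really is just a sampler setting. Here's a minimal PyOpenGL sketch of what "turn on 16x AF" amounts to at the API level; it assumes you already have a live GL context and a texture bound to GL_TEXTURE_2D (obviously not how a console engine actually wires this up):

```python
# Minimal sketch, not a full renderer: assumes a live GL context and a texture
# already bound to GL_TEXTURE_2D (e.g. created via GLFW/pygame + PyOpenGL).
from OpenGL.GL import glTexParameterf, GL_TEXTURE_2D
from OpenGL.GL.EXT.texture_filter_anisotropic import GL_TEXTURE_MAX_ANISOTROPY_EXT

# Request up to 16 anisotropic samples for this texture. Real code should
# query GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT first and clamp, but 16x is the
# ceiling on basically every GPU from the last decade and a half.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0)

# There's no per-frame CPU cost here at all; the extra work happens in the
# GPU's texture units, which is why the frametime hit on PC is so small.
```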
My guess is that devs don't bother with it because they feel that, from the average console gamer's viewing distance, it's not something players are likely to notice.
If the performance mode on the Pro looks as good as the current quality modes on the PS5, then yeah, I will play that. I see your point though: there will be Quality modes on the Pro that exceed the current Quality modes on the base PS5, and I'll likely find myself in the same predicament where I'll want the best graphics possible.
And that will ALWAYS be the case with consoles, or any fixed platform for that matter. However good you can make your game look at 60fps, you will be able to make it look better at 30fps. With these mid-gen refreshes, our only saving grace (if you would go so far as to call it that) is that devs likely won't waste the time to do that.
Everything I am seeing about the PS5pro is telling me that the push is to get devs to take their current fidelity mode and apply PSSR to it in a way that gives them 60fps while remaining close to or matching the 30fps fidelity mode on the base PS5. That's going to be work enough for devs, so I think it's unlikely that they would also focus on a PS5pro 30fps super fidelity mode.
In those games that do have a CPU bottleneck that completely prevents a 60fps mode, however, the PS5pro mode would just be a hyper fidelity mode at around 40fps.
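For context on why 40fps tends to be the fallback target, the frame-time math works out neatly; a quick sanity check:

```python
# Frame-time budgets: in milliseconds per frame, 40fps (25ms) sits exactly
# halfway between 30fps (33.3ms) and 60fps (16.7ms), which is why it feels
# like such a meaningful step up from 30 despite the small-sounding number.
for fps in (30, 40, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
# 30 fps -> 33.3 ms
# 40 fps -> 25.0 ms
# 60 fps -> 16.7 ms
```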
Yes, but that's delusional. Nvidia has a lot of experience in machine learning, and Sony is not going to reach DLSS on their first attempt. I think it'll be a step up over FSR, but still not Nvidia or Apple.
I disagree, well, not entirely, but mostly. Intel was able to deliver a respectable implementation of ML-based reconstruction with XeSS on their first attempt. Hell, even the Apple you mentioned got it right on their first attempt too. Furthermore, it's not like this tech is new anymore; Sony already knows this tech, everyone has done it, so surely getting into it now would be easier than, say, doing it 5 years ago.
Lastly, ML reconstruction is not some sort of secret sauce, and the fact that FSR can even exist without the benefit of AI acceleration is proof that this is something purely mathematical and reproducible. You can even train an AI upscaler and build out your own algorithm using AI hardware from a different vendor. That is, Sony can build the PSSR algorithm using Nvidia tensor hardware. At the end of the day, DLSS, XeSS, PSSR, etc. are just software stacks handled by AI hardware.
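To make that concrete, here's a toy sketch of the workflow (PyTorch, random placeholder data, and a deliberately tiny network; none of this reflects Sony's actual tooling): you train wherever you have an accelerator, then export to a vendor-neutral format so inference can run on completely different hardware.

```python
# Toy sketch: the "secret sauce" is just a trained network. Train it on
# whatever accelerator you have (CUDA here if present), then ship the graph
# to run on completely different inference hardware. The data is random
# placeholder -- a real upscaler trains on (low-res, high-res) frame pairs
# plus motion vectors, depth, etc.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """A deliberately tiny 2x super-resolution net (nothing like PSSR/DLSS)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
        )

    def forward(self, x):
        return self.body(x)

device = "cuda" if torch.cuda.is_available() else "cpu"  # train wherever
model = TinyUpscaler().to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(10):  # stand-in training loop
    low = torch.rand(4, 3, 128, 128, device=device)    # fake low-res input
    high = torch.rand(4, 3, 256, 256, device=device)   # fake high-res target
    loss = nn.functional.l1_loss(model(low), high)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Export to a vendor-neutral format; the same graph can then be run by
# whatever runtime the target hardware ships (ONNX Runtime, a console NPU, etc.).
torch.onnx.export(model.cpu().eval(), torch.rand(1, 3, 128, 128), "tiny_upscaler.onnx")
```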
Not saying that right off the jump it's going to be as good as or better than DLSS, but I am saying we shouldn't put it beyond Sony to come very close. I mean, have you seen what Insomniac does with TI reconstruction?
It was implied in the previous comment, as were the compatibility issues.
Well, I don't know what that's about. You don't need more die space to do a clock bump. There could be compatibility issues, but even those are limited when you stay on the same architecture; it's really only when switching architectures that you'd be likely to see them.
I will say it again: considering everything else Sony did on the PS5pro, a CPU clock bump is, or at least should have been, the easiest thing they could have done. It would have taken very little out of them and the entire PS5pro design to bump that CPU clock up to even 4.2GHz.
But that is also what stands out to me: the fact that they didn't do it basically screams something very obvious. They cut the one thing/upgrade they felt they needed the least. And the only way they arrive at that conclusion is if they are looking at the CPU very differently from people like you. You look at games as having a CPU "bottleneck" because they run at nothing higher than 30fps; they look at CPU utilization across all games and can see that the games without 60fps modes typically have the worst CPU utilization. And they have the means to profile every game running on their platform.
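To make the distinction concrete, the kind of per-title profiling I'm talking about boils down to something like this (completely made-up numbers, just to illustrate):

```python
# Made-up numbers, just to make the distinction concrete: a game can be
# "stuck at 30fps" while its CPU is nowhere near the ~16.7ms budget a 60fps
# mode would need -- that's a GPU (or engine cap) problem, not a CPU one.
FRAME_BUDGET_60FPS_MS = 1000.0 / 60.0   # ~16.7 ms

# Hypothetical per-title profile averages (milliseconds per frame).
titles = {
    "open_world_rpg":  {"cpu_ms": 21.0, "gpu_ms": 14.0},  # genuinely CPU-bound
    "linear_action":   {"cpu_ms": 9.5,  "gpu_ms": 28.0},  # GPU-bound, CPU is loafing
    "capped_30_title": {"cpu_ms": 11.0, "gpu_ms": 15.5},  # 30fps by choice, not hardware
}

for name, t in titles.items():
    cpu_blocked = t["cpu_ms"] > FRAME_BUDGET_60FPS_MS
    gpu_blocked = t["gpu_ms"] > FRAME_BUDGET_60FPS_MS
    verdict = ("CPU-bound" if cpu_blocked else
               "GPU-bound" if gpu_blocked else
               "neither: 60fps was left on the table")
    print(f"{name:16s} cpu={t['cpu_ms']:>5.1f}ms gpu={t['gpu_ms']:>5.1f}ms -> {verdict}")
```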
And that's a developer problem, not a hardware problem.
Hell, just look at the recently released Rise of the Ronin... have you at least seen the DF thread on that game? Its FIDELITY mode is barely doing above 1080p at 30fps with no RT. Its "fidelity" mode. And a 1.8TF console from 2013 had more immersive open-world games that, at 30fps, looked significantly better than it. That is how messed up some devs can be when they actually have the power to do whatever.
And this is something I think people don't factor in. The same power that can be used to push framerates, resolution, visual features, etc., which are all consumer-facing uses of that power, can also be used by devs to cut their development budget by doing "just enough" to go to market and brute-forcing poorly optimized code. That's the dev-facing use of that power.