An Xbox 30% more powerful than a PlayStation - what do you think about that?

There was that MLID rumor a few weeks back about the PS6's focus being cheap and efficient. That might be the kind of engineering we really need by then. A $1000 Xbox could be twice as powerful? I have no idea how another Xbox console could fit into this landscape, though - much less the landscape of late 2027, where it will have been a year since anyone has seen an Xbox section in a store.
 
Last edited:
What are you talking about? The cost of PSSR is similar to FSR2 or a bit higher, nothing new - it's not doing what Reflex does on PC (cutting latency at the game level).

@Xyphie posted the cost of DLSS already:

1b49d1b87512af62ce44832854a8bd6302bc22b6.png


And when it comes to competitive gaming, many console games have 60fps locks, 120Hz at best, and in that mode they rarely even use PSSR. On PC you can play them at 240+ FPS, so what latency would that have?
I'm familiar with that DLSS3 performance mode table, but can you actually claim it is representative of the contiguous processing time from a native frame sitting in VRAM to being DLSS3-upscaled and flipped to the framebuffer for display - in the way Cerny's PS5 Pro technical presentation can actually be validated for PSSR's contiguous journey from unified RAM to being flipped to the framebuffer?

We don't even know from that table whether it includes the VRAM-to-GPU-cache transfer latency for the tensor image, or whether it is just a calculation of how many OPS per pixel DLSS3 performance mode uses, scaled against the real-world TOPS throughput of each card based on reference clocks, tensor units, etc.

Do we even have a real-world example of an identical native game scene rendered at 1080p without DLSS3, and then with identical settings but DLSS3 performance mode (without frame-gen) outputting at 3840x2160, where the dip in frame-rate matches the native render frame-time plus the value in the table?

So taking the RTX 2080 Ti as an example, with a game scene at native 1080p 57fps, would the identical DLSS3 performance mode output for 4K be [1 / ((1/57s) + 1.26ms)] ≈ 2160p 53fps?

I highly doubt it, as those table numbers look synthetic in the way they scale, and even if true they would vary with VRAM/GPU cache contention or misses - unlike PSSR, for which the PS5 Pro CUs have dedicated instructions that commandeer the CUs and register memory to execute PSSR immediately.

Given that frame-gen is even more prevalent in DLSS4, I suspect latency hiding is even greater there, since the new algorithm processes the image details together holistically - implying greater data residency and GPU cache use - rather than using DLSS3's individual filter passes. But as Nvidia's documentation is so opaque compared to Cerny's PSSR info, I'm happy to be proved wrong and to actually see DLSS latency validated computationally and with real-world games.
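To make the arithmetic behind that question explicit, here's a minimal Python sketch - it simply assumes the table value is a flat per-frame cost added on top of the native frame time, which is exactly the assumption being questioned above:

# Minimal sketch: treat the quoted DLSS cost as a flat per-frame addition
# and see what output frame rate that would predict.
def expected_upscaled_fps(native_fps: float, upscaler_cost_ms: float) -> float:
    # Convert native FPS to frame time, add the fixed upscaler cost, convert back.
    native_frame_ms = 1000.0 / native_fps
    return 1000.0 / (native_frame_ms + upscaler_cost_ms)

# RTX 2080 Ti example from the table: 1080p -> 4K performance mode, 1.26ms quoted cost.
print(expected_upscaled_fps(57.0, 1.26))  # ~53.2 fps, i.e. the "2160p 53fps" figure above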
 
Isn't the Xbox mode just a visual skin to hide the desktop mode? I know MS were planning to build a completely gaming focused hybrid OS at one point, but not sure they're there yet
It's definitely not just cosmetic; it frees up RAM and improves power efficiency.
They need to release a laptop/desktop version ASAP.

 
So... a PC? Power doesn't matter if you have the same games running the same. We are at the point of diminishing returns. Let's say GTA6 runs at 30fps on the PS5 Pro but 60fps on the theoretical Xbox. That's cool, but are people going to switch to Xbox for that? I don't think so. Meanwhile, most other games are just going to be 60fps tops, which the PS5 Pro can already do. Most people aren't going to have 120Hz displays or even care.
 
PlayStation
Switch
PC
That's gaming here forward.
Been like that for years here. And I even ignored PlayStation for a while, during the PS4 era. The Xbox brand feels extremely damaged at this point. Their whole identity has strayed too far from what it used to be during the OG Xbox era.

There's no edge; their exclusives have lost their power as titles, as IP, not to mention losing their exclusivity. There was something mysterious, edgy and "Americana" about the Xbox, for me as a European.

These days Xbox is synonymous with Windows, with "corporate" and "failures". And I think the worst thing they did was failing to capitalize on the IP/studios they bought, and focusing so damn much on their Game Pass service.

It's a bit of a shame that they didn't move in the direction I liked.
 
I'm familiar with that DLSS3 performance mode table, but can you actually claim it is representative of the contiguous processing time from a native frame sitting in VRAM to being DLSS3-upscaled and flipped to the framebuffer for display - in the way Cerny's PS5 Pro technical presentation can actually be validated for PSSR's contiguous journey from unified RAM to being flipped to the framebuffer?

We don't even know from that table whether it includes the VRAM-to-GPU-cache transfer latency for the tensor image, or whether it is just a calculation of how many OPS per pixel DLSS3 performance mode uses, scaled against the real-world TOPS throughput of each card based on reference clocks, tensor units, etc.

Do we even have a real-world example of an identical native game scene rendered at 1080p without DLSS3, and then with identical settings but DLSS3 performance mode (without frame-gen) outputting at 3840x2160, where the dip in frame-rate matches the native render frame-time plus the value in the table?

So taking the RTX 2080 Ti as an example, with a game scene at native 1080p 57fps, would the identical DLSS3 performance mode output for 4K be [1 / ((1/57s) + 1.26ms)] ≈ 2160p 53fps?

I highly doubt it, as those table numbers look synthetic in the way they scale, and even if true they would vary with VRAM/GPU cache contention or misses - unlike PSSR, for which the PS5 Pro CUs have dedicated instructions that commandeer the CUs and register memory to execute PSSR immediately.

Given that frame-gen is even more prevalent in DLSS4, I suspect latency hiding is even greater there, since the new algorithm processes the image details together holistically - implying greater data residency and GPU cache use - rather than using DLSS3's individual filter passes. But as Nvidia's documentation is so opaque compared to Cerny's PSSR info, I'm happy to be proved wrong and to actually see DLSS latency validated computationally and with real-world games.

I can give you data. 1920x1080 TAA, 3840x2160 TAA, 3840x2160 with DLSS Performance (1080p), all with Reflex off:

The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-05-58.png
The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-07-10.png
The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-06-33.png


And for H:FW:

Screenshot-9-10-2025-08-08-19.png
Horizon-Forbidden-West-Complete-Edition-v1-5-80-0-9-10-2025-08-08-46.png
Horizon-Forbidden-West-Complete-Edition-v1-5-80-0-9-10-2025-08-09-04.png


This is latency measurement - not frametime:

tekip1hzQaPLndGE.jpeg


And Nvidia probably only measured the algorithm cost; I doubt they added the cost of post-processing (or the HUD) that now has to render at 4K and not at 1080p.

Edit: Reflex off vs. on (native 4k):

The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-27-53.png
The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-27-30.png
 
Last edited:
Xbox One X vs PS4 Pro. Which one was more successful?

Xbox Series X vs PS5. Which one was more successful?

How powerful a console is no longer matters as much as it used to in this age of dynamic resolutions and image upscaling. The Xbox Series X was more powerful on paper than the PS5, but time and time again we've seen games that look the same on both platforms, and now that Xbox games are also on PS5 we've seen that there really is no tangible difference between the two. Sometimes the Xbox Series X has a small win, sometimes it is the PS5. However, the PS5 is often the lead format for games development, which I suspect is why there are many examples of games with missing features or graphical issues in their Xbox Series X versions.

Don't care about how powerful the next Xbox is, if there is one. What I am interested in are the exclusive games, and since I can play Xbox ones on my PC or PS5 Pro, the hardware is no longer of any interest no matter how good it is. I simply do not need it.
 
It will be outclassed anyway by a PC that's better than that. If you're after power, go the PC route. Consoles are for stability, exclusives and/or special/different HW that makes a difference for gameplay. Considering how much Microsoft is tied to the PC and OS world, I really do not understand why they still have not changed strategy from a HW one to a pure SW one. Transform the whole Xbox concept into just a new mode for Windows (it's already a limited, smaller kernel, separate from the standard Windows kernel, that you can boot into with isolation from your PC, or maybe launch with virtualization, side by side) and make development on it easier than on anything else. Then license the Xbox kernel core to HW vendors to build HW on it, and let third parties do additional stuff on it like on Windows. You could embed the Xbox store in it by default but leave room for competition so that regulators would be happy... It's just the whole Windows story all over again (and the Android one too). If they keep going like this, Steam and Android will eat their market in the portable space, and maybe also in the "couch" space.
 
Cheaper investment for them partnering up with pre-built PC manufacturers than making a dedicated console that won't sell.

That said, they will need to massively strengthen their third-party publishing presence to offset what they'll lose by leaving the console market. The majority of people still on Xbox will just move to another console. If they think their users will automatically decide to follow them into this PC venture, then they'll be in for a rude awakening.
They're losing people right now, and that's with two SKUs still around.

The one area I can see MS doing well in is cloud gaming. Just not anytime soon. The infrastructure they have in place now is shit, but they have all the resources to be the best at it.
I think they will be dominant in that area. The Xbox brand itself will probably be long gone, and we'll end up with some MS Gaming cloud streaming behemoth.
 
Last edited:
I can give you data. 1920x1080 TAA, 3840x2160 TAA, 3840x2160 with DLSS Performance (1080p), all with Reflex off:

Py808NH.jpeg
EzuZzAP.jpeg
8psqx2F.jpeg


And for H:FW:

qj44pQi.jpeg
5YTqZSs.jpeg
z4pFlJf.jpeg


This is latency measurement - not frametime:

s65dJPX.jpeg


And Nvidia probably only measured the algorithm cost; I doubt they added the cost of post-processing (or the HUD) that now has to render at 4K and not at 1080p.

Edit: Reflex off vs. on (native 4k):

hSgV8NO.jpeg
s9bwwYy.jpeg
All your images are showing "content not viewable in your region" (the UK), but is that a yes or no to the specific question I asked - that I've requoted?

Do we even have a real-world example of an identical native game scene rendered at 1080p without DLSS3, and then with identical settings but DLSS3 performance mode (without frame-gen) outputting at 3840x2160, where the dip in frame-rate matches the native render frame-time plus the value in the table?

So taking the RTX 2080 Ti as an example, with a game scene at native 1080p 57fps, would the identical DLSS3 performance mode output for 4K be [1 / ((1/57s) + 1.26ms)] ≈ 2160p 53fps?
 
They were selling at full capacity during the pandemic years. There is no way the Xbox Series is below 30m.
They were mostly selling Series S consoles due to Sony having manufacturing issues for a good while. A lot of people just wanted something to tide them over until they could secure a PS5.

I still can't see them anywhere close to 30. More like 20.
 
They were mostly selling Series S consoles due to Sony having manufacturing issues for a good while. A lot of people just wanted something to tide them over until they could secure a PS5.

I still can't see them anywhere close to 30. More like 20.
Series S is also an Xbox (like my iPhone lol). I agree that the Series X sales must be around twenty five units sold.
 
Xbox needs to be cheaper to compete, not stronger. They'll bomb if indeed they go that route. I mean, I think they are DOA either way. The public perception of the brand and the damage done is too much to overcome.
 
You end up with games having sharper textures, more defined shadows and higher overall resolution for that console.

Things 95% of the target audience doesn't even notice or care about.
And it really makes no technical sense… You could make the case in 2027 for a console built on a 2nm fab process with dedicated AI cores, 24GB of faster GDDR6 RAM (~768-864 GB/s, with less latency due to the better fab node and improved memory controllers) and a 10GB/s+ SSD, targeting $350-$450, and still have a bigger leap.

Edit: you could get 18TF out of this in a much more modern and efficient design - a new architecture with dedicated AI cores. That's a 50% improvement over the PS5 Pro in raw power alone.
 
Last edited:
Power is always important, but it needs to be substantial. If all you can offer is larger numbers within a dynamic resolution, then don't do it.

The difference needs to be significant, like the difference between path tracing being possible on one but not the other. And obviously, you need to make exclusive $500 million games versus your competitor's $100 million games. But if your game flops, prepare to be out of the market in two years.
Following this logic, a powerful console needs to enter the market 2 years after its competitor, as the N64 did and as the Xbox did. But in those 2 years its competitor has already dominated sales, and before you realize it, your competitor will be launching a new console and all your effort will have been wasted.
 
All your images are showing "content not viewable in your region" (the UK), but is that a yes or no to the specific question I asked - that I've requoted?

I changed hosting site:

 
Last edited:
I can give you data. 1920x1080 TAA, 3840x2160 TAA, 3840x2160 with DLSS Performance (1080p), all with Reflex off:

The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-05-58.png
The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-07-10.png
The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-06-33.png


And for H:FW:

Screenshot-9-10-2025-08-08-19.png
Horizon-Forbidden-West-Complete-Edition-v1-5-80-0-9-10-2025-08-08-46.png
Horizon-Forbidden-West-Complete-Edition-v1-5-80-0-9-10-2025-08-09-04.png


This is latency measurement - not frametime:

tekip1hzQaPLndGE.jpeg


And Nvidia probably only measured the algorithm cost; I doubt they added the cost of post-processing (or the HUD) that now has to render at 4K and not at 1080p.

Edit: Reflex off vs. on (native 4k):

The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-27-53.png
The-Last-of-Us-Part-II-v1-6-10721-0105-9-10-2025-08-27-30.png
//assumption: TAA's motion vectors are calculated and supplied to the DLSS3 process, and the other parts of a TAA-like history blending are done by Nvidia's solution too, so we can treat the DLSS processing as containing a full TAA pass - which actually favours the DLSS calculation, given the TAA pass is typically lighter than the ML-AI inferencing.

What card is that on - is it a 2080 Ti with the 1.26ms DLSS3 processing time? And are dynamic resolution and frame-gen disabled?

From the first set of The Last of Us Part II (PS4 game) images, using the first and last images' FPS values - comparing TAA 1080p 177fps with DLSS3 off to TAA 2160p 144fps with DLSS3 on - the DLSS3 processing difference is 1.295ms, which is pretty close to the value quoted for a 2080 Ti, if a smidge over.

The second set of images, for HFW (a PS5 game), comparing TAA 1080p 127fps to TAA+DLSS3 2160p 103fps, gives roughly 1.8ms - effectively 2ms, which is double the cost of PSSR.

So if you aren't on a 2080 Ti but on a 4080 or 4090, that processing cost is effectively 2-4 times the cost quoted in that table, and from those results it looks like the higher the fidelity of the underlying game, the higher the processing time of the DLSS ML inferencing component.
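For reference, those deltas are just frame-time subtraction - a minimal Python sketch using the FPS values read off the screenshots (and assuming nothing else changed between captures):

# Minimal sketch: derive the apparent upscaling cost from two measured frame rates,
# native 1080p TAA vs 4K DLSS Performance, by subtracting frame times.
def apparent_cost_ms(native_fps: float, upscaled_fps: float) -> float:
    return 1000.0 / upscaled_fps - 1000.0 / native_fps

print(apparent_cost_ms(177, 144))  # The Last of Us Part II: ~1.29ms
print(apparent_cost_ms(127, 103))  # Horizon Forbidden West: ~1.83ms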
 
Before Nintendo took their market-leading handheld and merged it with their flopping home console setup to make a hybrid, the 30% hardware performance marketing would IMO have worked to take Nintendo market share and some PlayStation market share in the home console market, but in making a hybrid they have effectively turned controller peripheral ownership into a dilemma for gamers.

In the past, buying a Nintendo handheld plus any two home consoles meant owning only two consoles and two sets of controllers. Now, owning a Nintendo handheld or home console (hybrid) already commits a gamer to one set of controller peripherals, which means most gamers are going to end up with PlayStation filling the remaining spot no matter what hardware delta Xbox provides - unless they are in the niche of buying all three - because buying Xbox hardware in preference to one or both of the others requires Xbox's catalogue and promise of future exclusive games to be the best or second best, which is much harder to do when you don't do exclusives.
 
//assumption: TAA's motion vectors are calculated and supplied to the DLSS3 process, and the other parts of a TAA-like history blending are done by Nvidia's solution too, so we can treat the DLSS processing as containing a full TAA pass - which actually favours the DLSS calculation, given the TAA pass is typically lighter than the ML-AI inferencing.

What card is that on - is it a 2080 Ti with the 1.26ms DLSS3 processing time? And are dynamic resolution and frame-gen disabled?

From the first set of The Last of Us Part II (PS4 game) images, using the first and last images' FPS values - comparing TAA 1080p 177fps with DLSS3 off to TAA 2160p 144fps with DLSS3 on - the DLSS3 processing difference is 1.295ms, which is pretty close to the value quoted for a 2080 Ti, if a smidge over.

The second set of images, for HFW (a PS5 game), comparing TAA 1080p 127fps to TAA+DLSS3 2160p 103fps, gives roughly 1.8ms - effectively 2ms, which is double the cost of PSSR.

So if you aren't on a 2080 Ti but on a 4080 or 4090, that processing cost is effectively 2-4 times the cost quoted in that table, and from those results it looks like the higher the fidelity of the underlying game, the higher the processing time of the DLSS ML inferencing component.

4070 Ti Super.

You have to remember that the graph (I think) is missing all the stuff that, even with the internal geometry resolution being much lower, still has to render at native 4K (or lower, but still higher than the 1080p output).

This cost is missing from the DLSS table, and I'm sure it's also missing for PSSR, because it's game-dependent. Every game will have different things: shadow maps, alphas (smoke, fire), HUD, volumetric lights, DOF, etc.

People often forget about this and expect the same performance from 4K DLSS Performance as from native 1080p. This has to be added to the cost of reconstruction.
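To make that concrete, here's a minimal sketch with purely hypothetical millisecond figures (none of these are measured values) showing how the output-resolution work stacks on top of the reconstruction pass itself:

# Minimal sketch with hypothetical timings (not measurements) showing why
# "4K DLSS Performance" costs more than native 1080p: post-processing, HUD,
# volumetrics etc. run at (or near) the 4K output resolution, not at 1080p.
geometry_1080p_ms = 5.0   # hypothetical: main render at the 1080p internal resolution
upscaler_ms       = 1.3   # hypothetical: the reconstruction pass itself
post_1080p_ms     = 1.0   # hypothetical: post-processing/HUD at a 1080p output
post_2160p_ms     = 2.5   # hypothetical: the same passes at a 2160p output

native_1080p_ms = geometry_1080p_ms + post_1080p_ms                # 6.0ms  (~167 fps)
dlss_4k_ms      = geometry_1080p_ms + upscaler_ms + post_2160p_ms  # 8.8ms  (~114 fps)
print(1000 / native_1080p_ms, 1000 / dlss_4k_ms)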

I also focused on latency here because that's what you were talking about; the pure cost in frametime is different.
 
If it's basically a PC then it will likely have more overhead to deal with, which is probably why it will need more RAM and raw compute power than the PS6. The PS6 will still be fully optimized for gaming, without a bloated OS.

I don't think we will see major differences between the two consoles. Both should have great ray tracing and upscaling tech.
 