Next-Gen PS5 & XSX |OT| Console tEch threaD

Well, I'm not sure whether @Ascend was aware that the duration of render buffer output cannot be measured from the input data; his post implied he thought it could be. I knew that was the case, however, which is why I calculated only the amount of time each console's GPU would have to read the input data and render it.
Oh I'm aware. I don't often show all the cards, because I'm trying to make a point. I was trying to find any reference as to how they compare on pure SSD throughput. Obviously you're going to load a bunch of different textures, there's the decompression, then there's mapping, culling, shading, post-processing among many other things, to then output the final frame.
If the PS5 can load 2 GB in 0.27 seconds or 270 ms, and the XSX is half as fast, that means it loads the same amount in 0.54 seconds or 540 ms. Both of those are way above the 16.7 ms required for a single frame. The question then arises: what is so magical about 270 ms rather than 540 ms?

One of the things about Cerny's talk is that he focused on the importance of the SSD and their solution for it. His talk was mainly about HDD vs SSD, not the XSX SSD vs the PS5 SSD. An HDD has a throughput of about 50 MB/s, which is where the 100x-faster figure comes from. To load that same 2 GB would take 40 seconds at that speed, or 40,000 ms. That is a HUGE difference. So the question is: how can anyone argue that 270 ms will allow for the full elimination of loading screens, while 540 ms does not?
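For anyone who wants to check those numbers, a quick Python sketch of my own, taking the 2 GB payload and the rates above at face value:

```python
# Load times for a 2 GB payload at the rates quoted above (decimal MB/GB).
payload_mb = 2000

rates_mb_per_s = {
    "PS5 SSD": 2000 / 0.27,  # back-solved from "2 GB in 0.27 s" (~7.4 GB/s)
    "XSX SSD": 2000 / 0.54,  # half the PS5 rate
    "HDD": 50,               # typical spinning disk
}

for name, rate in rates_mb_per_s.items():
    print(f"{name}: {payload_mb / rate * 1000:,.0f} ms")
# PS5: 270 ms, XSX: 540 ms, HDD: 40,000 ms
```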

I'll simply say it again. The PS5 will have an advantage with its RAM allocation, and that is basically it.
 
If Tidux is talking about RAM reservation for the OS, why is he talking about processors inside the APU?
Because the I/O co-processors help reduce the RAM needed for the OS?


The way I understand it is that it's not that they've added extra hardware (and thus consumed valuable die area) specifically for the OS; it's more a case of existing custom hardware used for I/O having the additional benefit of reducing the amount of RAM the OS needs to consume to function properly.
 
That thing is DOA at 300 with 4 or 6 TF. I don't care if it's a cheaper alternative; it's just not what people are asking for. At 200 it would have done fine.

Where I live (Switzerland), you can buy a Switch Lite for 229 CHF. A Switch with Joy-Cons is 349 CHF. I don't think they will ever be able to pull off a 200 price with Lockhart.
 
Is the premise that an equivalent GPU with fewer CUs but higher TF due to clocks is still less RT-capable? How is this possible?
XSX has 44% more RT units but each RT unit on PS5 does 22% more work at any given time. This closes the gap to 18% (same as compute)
Dropping the resolution by 18% should free enough resources for PS5 to match XSX output.
RT performance scales with CUs not clockspeed.
Dude read more before spreading misinformation. This isn't your run of the mill online gaming comments section 🤦‍♂️
 
That thing is DOA at 300 with 4 or 6 TF. I don't care if it's a cheaper alternative; it's just not what people are asking for. At 200 it would have done fine.


It's not what we're asking for, but realize that we're a niche within a niche.

Parents who just know their child unit wants in on next gen (when stuff inevitably doesn't run on XBO) with their friends couldn't give a fuck if little Timmy gets 12 TF or 4.

Don't @ me Neogaf but I'm kinda thinking Microsoft's Min/Maxing strategy next gen is smart. If this is true at least. You get the enthusiasts excited for the most powerful box, and you get the price sensitive who just want an entry ticket to next gen.


RT performance scales with CUs not clockspeed. That's according to multiple sources including Digital Foundry.

52 CUs at 1 MHz would perform the same? Stahp, like all GPU hardware it's execution hardware x clock speed. They're saying it scales with the number of CUs, all else being equal.
 
----------Data Transfer Rates--------- (More data per unit time is better)

PlayStation 5

Uncompressed Data Transfer Rate of PS5's SSD: 5.5GB/s = 5500 MB/s = 5500MB/1000ms = 5.5MB/ms

Compressed Data Transfer Rate of PS5's SSD: 8GB/s = 8000 MB/s = 8000MB/1000ms = 8MB/ms

Compressed Data Transfer Rate of PS5's SSD: 9GB/s = 9000 MB/s = 9000MB/1000ms = 9MB/ms

Xbox Series X

Uncompressed: Data Transfer Rate of XSX's SSD: 2.4GB/s = 2400MB/s = 2400MB/1000ms = 2.4MB/ms

Compressed: Data Transfer Rate of XSX's SSD: 4.8GB/s = 4800MB/s = 4800MB/1000ms = 4.8MB/ms

----------Time that it takes each console's I/O system to transfer a full 4K frame's worth of uncompressed data to RAM---------- (Less time is better)

Size of 4K Frame with 32-Bit Color Depth = 33.2MB

PlayStation 5

@5.5GB/s (uncompressed): 33.2MB/(5.5MB/ms) = 6.036ms

@8GB/s (compressed): 33.2MB/(8MB/ms) = 4.15ms

@9GB/s (compressed): 33.2MB/(9MB/ms) = 3.69ms

Xbox Series X

@2.4GB/s (uncompressed): 33.2MB/(2.4MB/ms) = 13.83ms

@4.8GB/s (compressed): 33.2MB/(4.8MB/ms) = 6.92ms
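These transfer times are just size divided by rate (X GB/s works out to X MB/ms in decimal units); a quick sketch to verify:

```python
# Time to transfer one 33.2 MB frame at each quoted rate: t = size / rate.
frame_mb = 33.2  # raw 4K @ 32-bit frame, derived later in the thread

for label, gb_per_s in [("PS5 uncompressed", 5.5), ("PS5 compressed", 8.0),
                        ("PS5 compressed", 9.0), ("XSX uncompressed", 2.4),
                        ("XSX compressed", 4.8)]:
    print(f"{label} @ {gb_per_s} GB/s: {frame_mb / gb_per_s:.2f} ms")
# 6.04, 4.15, 3.69, 13.83, 6.92 ms — matching the figures above
```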

----------Time that each console's GPU has to read the data from RAM and render it while maintaining a render rate of 60 frames per second---------- (More time is better, because it's more time to get the work done)


1 second = 1000ms
1000ms/60 frames = 16.67ms

PlayStation 5

16.67ms - 6.036ms = 10.634ms (for uncompressed data)

16.67ms - 4.15ms = 12.52ms (for compressed data)

16.67ms - 3.69ms = 12.98ms (for compressed data)

Xbox Series X

16.67ms - 13.83ms = 2.84ms (for uncompressed data)

16.67ms - 6.92ms = 9.75ms (for compressed data)

----------Number of pixels that each console's GPU can render in the amount of time that it has to read the data from RAM and render it while maintaining 60 frames per second---------- (More pixels are better)


PlayStation 5

64 ROPs x 2,230 x 1000 = 142,720,000 pixels per second = 142,720 pixels/ms

(142,720 pixels/ms) x 10.634ms = 1,517,684.48 pixels (after processing uncompressed data at 5.5GB/s)

(142,720 pixels/ms) x 12.52ms = 1,786,854.4 pixels (after processing compressed data at 8GB/s)

(142,720 pixels/ms) x 12.98ms = 1,852,505.6 pixels (after processing compressed data at 9GB/s)

Xbox Series X

64 ROPs x 1,825 x 1000 = 116,800,000 pixels per second = 116,800 pixels/ms

(116,800 pixels/ms) x 2.84ms = 331,712 pixels (after processing uncompressed data at 2.4GB/s)

(116,800 pixels/ms) x 9.75ms = 1,138,800 pixels (after processing compressed data at 4.8GB/s)

----------Number of texels that each console's GPU can render in the amount of time that it has to read the data from RAM and render it while maintaining 60 frames per second---------- (More texels are better)

PlayStation 5

144 TMUs x 2,230 x 1000 = 321,120,000 texels per second = 321,120 texels/ms

(321,120 texels/ms) x 10.634ms = 3,414,790.08 texels (after processing uncompressed data at 5.5GB/s)

(321,120 texels/ms) x 12.52ms = 4,020,422.4 texels (after processing compressed data at 8GB/s)

(321,120 texels/ms) x 12.98ms = 4,168,137.6 texels (after processing compressed data at 9GB/s)

Xbox Series X

208 TMUs x 1,825 x 1000 = 379,600,000 texels per second = 379,600 texels/ms

(379,600 texels/ms) x 2.84ms = 1,078,064 texels (after processing uncompressed data at 2.4GB/s)

(379,600 texels/ms) x 9.75ms = 3,701,100 texels (after processing compressed data at 4.8GB/s)
Neat number crunching you did there. Here's another one for your analysis.

A 4K frame is 3840x2160 pixels = 8,294,400 pixels per frame.

According to your calculations, neither one of these nextgen machines can render a 4K image in the 16.667 ms frame time of a 60 fps game!!

It gets even better. A 1080p frame is 1920x1080 = 2,073,600 pixels per frame. OH NO, even the mighty PS5 with its 64 ROPS and 2.23 GHz can't render a 1080/60 game!!!

And that's for performing just a single rasterization operation per pixel.

Oh woe is me, how can it be!!!??? Perhaps because you're taking a bunch of numbers you don't understand (or maybe you do...) and misrepresenting them. For instance, you are off by a factor of 1,000 on your frequencies. Those are supposed to be gigahertz, not megahertz. Then there's the 33 MBs of data per 4K frame which you based everything off of... I just... what? Did you just look up a file size of a picture on your phone or something?
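To make the factor-of-1,000 point concrete, here's a sketch with the clocks in actual hertz (assuming 64 ROPs on both and one pixel per ROP per clock):

```python
# Pixel fill rate with clocks in real units (GHz, not kHz-scale figures).
rops = 64
ps5_hz = 2.23e9    # 2.23 GHz
xsx_hz = 1.825e9   # 1.825 GHz

ps5_fill = rops * ps5_hz          # ~1.43e11 pixels/s
xsx_fill = rops * xsx_hz          # ~1.17e11 pixels/s
needed_4k60 = 3840 * 2160 * 60    # ~4.98e8 pixels/s for 4K @ 60 fps

print(ps5_fill / needed_4k60)     # ~287x headroom
print(xsx_fill / needed_4k60)     # ~235x headroom
# Hundreds of times the raw fill rate 4K60 needs — so "can't even render
# 1080p60" was a units error, not a hardware limit.
```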

There's a lot more data involved in computing a 4k 3D image than just a 4k colormap. Cerny's example for a nextgen game was loading in 4 GBs of compressed assets in order to support rendering a unique view in the same scene a half second later. That's still 4 GB being rendered at 60 fps after you've turned, which is a GPU running computation on up to 240 GBs of data in a second (unless he factored in some unique animation, sound etc in that 4 GB example). Regardless, explain to me how the PS5 SSD can supply at least 240 GB of data per second to support computation of a 60 fps game. It can't, but the RAM can.
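In numbers (a sketch; the 448 GB/s RAM bandwidth is the published PS5 spec, not a figure from the post above):

```python
# Cerny's turn example implies ~4 GB of assets touched per 60 fps frame.
demand_gb_s = 4 * 60    # 240 GB/s
ssd_gb_s = 9            # PS5 SSD, best-case compressed output
ram_gb_s = 448          # PS5 GDDR6 bandwidth (published spec)

print(f"SSD: {ssd_gb_s / demand_gb_s:.1%} of demand")   # ~3.8%
print(f"RAM: {ram_gb_s / demand_gb_s:.1%} of demand")   # ~186.7%
# The SSD refills RAM; only RAM is fast enough to feed the GPU every frame.
```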

Go back and look at your math, but then also realize you completely skipped over RAM bandwidth and tried to come up with rendering frames straight from the SSD. Then redo your calcs and let's see what shakes out. This is what tells me you have no idea what you put in your post.

I CAN'T STRESS THIS ENOUGH: devs are not going to be relying on the SSD in the PS5 to load all new rendering assets into RAM for EVERY frame. If that were the case, we wouldn't need more than 133 MB of RAM, because the PS5 could load and flush it with new data every 16.6 ms... but as you are probably now realizing, it doesn't work like that (yet), because that isn't fast enough for current-gen games. What they will do is swap partial elements out over a relatively long series of frames to render a potentially unique frame, like in Cerny's 4 GB / half-second turn example (30ish frames with 133 MB of new data available per frame to give you a brand new 4 GB image to render at 60 fps at the end of your turn).
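The per-frame budget works out like this (a sketch using the 8 GB/s figure):

```python
# SSD budget per 60 fps frame, and Cerny's 4 GB / half-second turn example.
ssd_mb_s = 8000                    # 8 GB/s compressed
per_frame_mb = ssd_mb_s / 60       # ~133 MB of fresh data per 16.7 ms frame
turn_frames = int(0.5 * 60)        # a half-second turn spans 30 frames

print(per_frame_mb)                # ~133.3 MB
print(turn_frames * per_frame_mb)  # ~4000 MB streamed in across the turn
# The 4 GB scene arrives incrementally over ~30 frames, not in one frame.
```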

I mean, it'd be a neat thought experiment, but you also have to factor in SSD seek times vs GDDR latency (10-100 times slower) per frame if you want to start talking about using it as RAM. I think you'd quickly realize we aren't there yet. Next next-gen we may be. Though this gets us much closer than we were before.

edit - In hindsight, I was way too condescending in those first few paragraphs. Apologies. Not gonna remove so as to own up to my assholeishness
 
----------Time that each console's GPU has to read the data from RAM and render it while maintaining a render rate of 60 frames per second---------- (More time is better, because it's more time to get the work done)

1 second = 1000ms
1000ms/60 frames = 16.67ms

PlayStation 5
16.67ms - 6.036ms = 10.634ms (for uncompressed data)

16.67ms - 4.15ms = 12.52ms (for compressed data)

16.67ms - 3.69ms = 12.98ms (for compressed data)

Xbox Series X
16.67ms - 13.83ms = 2.84ms (for uncompressed data)

16.67ms - 6.92ms = 9.75ms (for compressed data)
The whole process is continuous. The ROPs are not somehow inactive while you're loading from the SSD, so, as I already mentioned, you cannot subtract the milliseconds it takes to load data from the time it takes to render a frame. That would be a batch process rather than a continuous one. That makes the rest of your calculations simply wrong.


----------Number of pixels that each console's GPU can render in the amount of time that it has to read the data from RAM and render it while maintaining 60 frames per second---------- (More pixels are better)

PlayStation 5
64 ROPs x 2,230 x 1000 = 142,720,000 pixels per second = 142,720 pixels/ms

(142,720 pixels/ms) x 10.634ms = 1,517,684.48 pixels (after processing uncompressed data at 5.5GB/s)

(142,720 pixels/ms) x 12.52ms = 1,786,854.4 pixels (after processing compressed data at 8GB/s)

(142,720 pixels/ms) x 12.98ms = 1,852,505.6 pixels (after processing compressed data at 9GB/s)

Xbox Series X
64 ROPs x 1,825 x 1000 = 116,800,000 pixels per second = 116,800 pixels/ms

(116,800 pixels/ms) x 2.84ms = 331,712 pixels (after processing uncompressed data at 2.4GB/s)

(116,800 pixels/ms) x 9.75ms = 1,138,800 pixels (after processing compressed data at 4.8GB/s)
The number of ROPs for the XSX has not been confirmed to be 64.

More importantly, by this logic the XSX would be incapable of outputting a 4K image if it uses uncompressed data, which is bollocks. Why? How can the PS4 Pro or XOX output 4K when they have nowhere near as powerful an I/O system, compression system, or storage? It's another confirmation that you cannot subtract the ms figures the way you did.
So, the hardware itself is capable of up to 22GB/s

That's the limit
Saying it like that is confusing, because it is not clear that the 22GB/s is output data. In all cases it is 5.5GB/s of input data, and the output data rate depends on the compression ratio of the file, not the performance of the decompression block, which is a constant.
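Put another way, a sketch of that relationship, assuming the 22GB/s is the decompressor block's output ceiling as described:

```python
# Output rate = raw SSD input rate x the file's compression ratio,
# capped by the Kraken decompressor block's 22 GB/s output ceiling.
def output_gb_s(compression_ratio, raw_input=5.5, decomp_cap=22.0):
    return min(raw_input * compression_ratio, decomp_cap)

print(output_gb_s(1.45))  # ~8 GB/s, a typical Kraken ratio
print(output_gb_s(1.64))  # ~9 GB/s
print(output_gb_s(5.0))   # 22 GB/s: very compressible data hits the cap
```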
 
It is not that simple; you are making uninformed assumptions based on partially available data. I don't have that much free time to explain how markets and sales work, and I don't see any sense in even trying to argue about it here. So I would suggest avoiding the topic of "potential sales" and focusing the discussion on technical matters.
I already have a bachelor's in BA and a master's in economics, but I appreciate the help.
 
From Tom Warren over on ERA....



Oh MS don't even think about it lol....


Ooh. Now I'm wondering about what Phil said about being comfortable with the price, with the new rumblings that Lockhart is a go. Maybe he meant their base next gen price, and not the X in particular.

If 499 would be "bold" given the BoM...

X @ 550?
PS5 at 450?
S at 350?
 
Ooh. Now I'm wondering about what Phil said about being comfortable with the price, with the new rumblings that Lockhart is a go. Maybe he meant their base next gen price, and not the X in particular.

If 499 would be "bold" given the BoM...

X @ 550?
PS5 at 450?
S at 350?

If MS wants to get their shit pushed in lol, sure.

They HAVE to meet the PS5 price IMO...
 
Then there's the 33 MBs of data per 4K frame which you based everything off of... I just... what? Did you just look up a file size of a picture on your phone or something?
He actually copied that from me, from one of my earlier posts. I assumed completely raw data with zero compression, just to evaluate the absolute worst-case scenario. I calculated it this way:

4K res = 3840x2160 = 8,294,400 pixels
With 32-bit color depth, each pixel consists of 32 bits of data, so you get 32 x 8294400 = 265,420,800 bits
Converting to bytes: 265,420,800 / 8 = 33,177,600 bytes
Converting to MB, you get 33.2 MB
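In Python, for anyone who wants to verify:

```python
# Raw 4K framebuffer at 32-bit color, no compression.
pixels = 3840 * 2160   # 8,294,400 pixels
bits = pixels * 32     # 265,420,800 bits
print(bits / 8 / 1e6)  # 33.1776 -> ~33.2 MB
```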

It is completely clear he has no idea what he's doing though.
 
If MS wants to get their shit pushed in lol, sure.

They HAVE to meet the PS5 price IMO...

Yeah, but I don't know that they can do that with the X if Tom is saying 499 would be a stretch. People seem to think that Microsoft would be happy to lose a bunch of money to win because they're a big company that can handle it, but that's just not how departments work, and Satya has made some hard choices on money-losing departments before.

That's why I think they're doing the min/maxing thing; it hedges their bets. They have the most powerful GPU on a console, and that matters to some; they did that by risking a high price, and the hedge is the low-cost system.
 
If Lockhart is less RAM and less SSD speed, we're screwed 😂😂😂

Scalability is a thing. They won't be targeting the same fidelity, but obviously I do hope that MS doesn't skimp on the Lockhart SSD. I don't want the baseline to be any lower than the XSX three or four years from now; that would really suck.
 
Ooh. Now I'm wondering about what Phil said about being comfortable with the price, with the new rumblings that Lockhart is a go. Maybe he meant their base next gen price, and not the X in particular.

If 499 would be "bold" given the BoM...

X @ 550?
PS5 at 450?
S at 350?

This setup will mean PS5 domination!! Who am I kidding? PS5 WILL dominate no matter what 🔥
 
I know that Corden and co. are not welcome here, but since Windows Central has been spot on in terms of Xbox Series X specs, I think I trust them with Lockhart too.


Windows Central sometimes gets Windows leaks, but they're wrong A LOT - these are all next-gen news headlines from the site:
  • Next Xbox family to use Arcturus GPU NOT Navi. Rumor.
  • On this week's podcast ... A new Cortana, Lockhart is dead, and more
  • Report: Microsoft to reveal next-generation consoles at E3 2019, Halo Infinite as launch title
  • Another 'leak' says there will be a PS5 'Pro' to go against Xbox Series X
How many of these look right? This is most of them.

tbh the only time they seem to be right is when they update an article after the reveal - like here https://www.windowscentral.com/xbox-scarlett-anaconda-lockhart-specs - before the reveal they did get the RAM right at 16GB, but there wasn't much else...

I don't think there is any real evidence for Lockhart as a product - maybe it was an idea at one time - but we haven't had any credible leaks that it exists as a product in any form .. this whole thing reminds me of the 12-13TF PS5 noise before the "Road to PS5" reveal - it got louder and louder the nearer to the event .. we know how that turned out.

Not sure about Tom Warren either - in Dec he was telling the world Xbox was behind, nobody has dev kits etc - a couple months later we saw final hardware as it will be sold to consumers... kind of contradictory imo. [edit - ignore this, that was Jez Corden, not Tom Warren]
 
XSX has 44% more RT units but each RT unit on PS5 does 22% more work at any given time. This closes the gap to 18% (same as compute)
Dropping the resolution by 18% should free enough resources for PS5 to match XSX output.

Dude read more before spreading misinformation. This isn't your run of the mill online gaming comments section 🤦‍♂️
They can drop the res to 1800-something with smart techniques to fill the gap anytime, if that means freeing resources for fps and image quality; native 4K is such a waste.
 
Probably less RAM, fewer CUs, and less storage on the SSD.

I just don't believe that a "Lockhart" type box is viable for Microsoft. I mean, mid-range and gamer-geek types will buy the Series X. Who would be the target for Lockhart? Gamers who are super budget-conscious live on resold games... on disc. So no disc drive? That wouldn't fly very well. Worst case, if Lockhart really plays everything the Series X will, and "good enough" on a regular HD TV, then I think MS would cannibalize their sales of the Series X.

I just don't see this as a winning strategy for them and think it would be more likely to do them harm than good. I honestly think the whole Lockhart rumor is more likely just linked to the fact that MS has said that Series X games will also run on the Xbox One, etc. They will bump those units down in price and try to get more people into their ecosystem that way. That makes more sense to me than developing a completely new product and trying to place it down into the low-end market segment.

I dunno. We'll see. But the fact that the Series X has been revealed with no mention of a "Lockhart" has me pretty much convinced that Lockhart as a separate product simply doesn't exist. Time will tell!
 
They can drop the res to 1800-something with smart techniques to fill the gap anytime, if that means freeing resources for fps and image quality; native 4K is such a waste.

Dynamic resolution is here to stay. It even has the added bonus of allowing future consoles to exploit it. It helps with BC.
 
Damn, looks like no PS5 news this week. Makes sense though; here in America the NFL draft is on and will take up A TON of mindshare. Feel like Sony wouldn't want to have any kind of reveal when there's another big event distracting people. Same for Microsoft.
 
Just wondering why Cerny didn't go with Leviathan instead of Kraken?


[Image: Oodle compression ratio comparison chart (oodle_typical_vbar.png)]
Many game textures are in lossy block-compressed formats based on DXT, which by default don't compress well with zlib because the two techniques overlap; so even using these rates and sizes for comparison, the real results with a variety of game-based data could lead to a different conclusion.

With the two co-processors in the PS5's I/O complex presumably being programmable (they are 'processors', after all), my guess is that one is used for processing input data and the other for output data, so the choice of Kraken might be changeable.

Committing hardware resources to parts of an I/O complex without PlayStation being guaranteed a decent level of compression on every large type of data that will pass through it doesn't seem consistent with their eco-friendly design credentials. I would speculate that the 5GB/s is for small burst data sizes or data that lives in RAM in zlib format, and at the other end of the scale the 22GB/s figure will be for raw float vertex data and Tempest engine data.
 
XSX has 44% more RT units but each RT unit on PS5 does 22% more work at any given time. This closes the gap to 18% (same as compute)
Dropping the resolution by 18% should free enough resources for PS5 to match XSX output.

Dude read more before spreading misinformation. This isn't your run of the mill online gaming comments section 🤦‍♂️
MS claims to have access to 13 TF of power only for RT, nothing to do with the 12 TF of GPU compute power. So you may be the one spreading misinformation.
At the very least, let's wait for the games. Oops, that's right: we've already seen Minecraft path tracing flawlessly implemented on the XSX, while on the PS5 side we've basically been told "that's a lot of rays of light to manage, but we'll see what we can do".
 
Licensing is required for Xbox and Xbox 360 games because they require repacking the original game in a custom emulator layer. If native BC is present for Xbox One games, then it will not require relicensing. And even with native BC, not all games work 100%. Certain PS1 games didn't work on the PS2. Certain PS2 games had issues even on the hardware-BC PS3s.
How so?
 
Windows Central sometimes gets Windows leaks, but they're wrong A LOT - these are all next-gen news headlines from the site:
  • Next Xbox family to use Arcturus GPU NOT Navi. Rumor.
  • On this week's podcast ... A new Cortana, Lockhart is dead, and more
  • Report: Microsoft to reveal next-generation consoles at E3 2019, Halo Infinite as launch title
  • Another 'leak' says there will be a PS5 'Pro' to go against Xbox Series X
How many of these look right? This is most of them.

tbh the only time they seem to be right is when they update an article after the reveal - like here https://www.windowscentral.com/xbox-scarlett-anaconda-lockhart-specs - before the reveal they did get the RAM right at 16GB, but there wasn't much else...

I don't think there is any real evidence for Lockhart as a product - maybe it was an idea at one time - but we haven't had any credible leaks that it exists as a product in any form .. this whole thing reminds me of the 12-13TF PS5 noise before the "Road to PS5" reveal - it got louder and louder the nearer to the event .. we know how that turned out.

Not sure about Tom Warren either - in Dec he was telling the world Xbox was behind, nobody has dev kits etc - a couple months later we saw final hardware as it will be sold to consumers... kind of contradictory imo. [edit - ignore this, that was Jez Corden, not Tom Warren]
I agree. They just need to match the PS5 in price, since they have similar BOMs, and take their chances. No need to release a less powerful console. Start designing the Series X2 instead.
 
There will be a Lockhart, or an Xbox Series S, Z, Y... whatever the name is; otherwise MS would not have given the name Series X, which suggests more than one console.
 
MS claims to have access to 13 TF of power only for RT, nothing to do with the 12 TF of GPU compute power. So you may be the one spreading misinformation.
This is what dedicated RT hardware does; it's an RDNA2 feature.
PS5 has fewer RT units, but each RT unit is 22% faster. There's nothing unknown about this: clock speeds do affect RT.
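Where those percentages come from, assuming RT throughput scales with units x clock (which is exactly the disputed assumption):

```python
# Ratios from the public CU counts and clocks.
xsx_cu, xsx_mhz = 52, 1825
ps5_cu, ps5_mhz = 36, 2230

print(xsx_cu / ps5_cu - 1)                          # ~0.44: XSX has 44% more RT units
print(ps5_mhz / xsx_mhz - 1)                        # ~0.22: each PS5 unit runs 22% faster
print((xsx_cu * xsx_mhz) / (ps5_cu * ps5_mhz) - 1)  # ~0.18: net 18% gap, same as compute
```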

Also what MS actually said:
Andrew Goossen said:
Without hardware acceleration, this work could have been done in the shaders, but would have consumed over 13 TFLOPs alone
 
I just don't believe that a "Lockhart" type box is viable for Microsoft. I mean, mid-range and gamer-geek types will buy the Series X. Who would be the target for Lockhart? Gamers who are super budget-conscious live on resold games... on disc. So no disc drive? That wouldn't fly very well. Worst case, if Lockhart really plays everything the Series X will, and "good enough" on a regular HD TV, then I think MS would cannibalize their sales of the Series X.

I just don't see this as a winning strategy for them and think it would be more likely to do them harm than good. I honestly think the whole Lockhart rumor is more likely just linked to the fact that MS has said that Series X games will also run on the Xbox One, etc. They will bump those units down in price and try to get more people into their ecosystem that way. That makes more sense to me than developing a completely new product and trying to place it down into the low-end market segment.

I dunno. We'll see. But the fact that the Series X has been revealed with no mention of a "Lockhart" has me pretty much convinced that Lockhart as a separate product simply doesn't exist. Time will tell!
MS probably thinks super-budget gamers will choose the Lockhart+Game Pass combo, but they could simply grab an even cheaper One S+Game Pass if they don't care about graphics. You think Lockhart will cannibalize the SX, but this no-next-gen-exclusives policy will make the One S cannibalize Lockhart.
 
Neat number crunching you did there. Here's another one for your analysis.

A 4K frame is 3840x2160 pixels = 8,294,400 pixels per frame.

According to your calculations, neither one of these nextgen machines can render a 4K image in the 16.667 ms frame time of a 60 fps game!!

It gets even better. A 1080p frame is 1920x1080 = 2,073,600 pixels per frame. OH NO, even the mighty PS5 with its 64 ROPS and 2.23 GHz can't render a 1080/60 game!!!

And that's for performing just a single rasterization operation per pixel.

Oh woe is me, how can it be!!!??? Perhaps because you're taking a bunch of numbers you don't understand (or maybe you do...) and misrepresenting them. For instance, you are off by a factor of 1,000 on your frequencies. Those are supposed to be gigahertz, not megahertz. Then there's the 33 MBs of data per 4K frame which you based everything off of... I just... what? Did you just look up a file size of a picture on your phone or something?

There's a lot more data involved in computing a 4k 3D image than just a 4k colormap. Cerny's example for a nextgen game was loading in 4 GBs of compressed assets in order to support rendering a unique view in the same scene a half second later. That's still 4 GB being rendered at 60 fps after you've turned, which is a GPU running computation on up to 240 GBs of data in a second (unless he factored in some unique animation, sound etc in that 4 GB example). Regardless, explain to me how the PS5 SSD can supply at least 240 GB of data per second to support computation of a 60 fps game. It can't, but the RAM can.

Go back and look at your math, but then also realize you completely skipped over RAM bandwidth and tried to come up with rendering frames straight from the SSD. Then redo your calcs and let's see what shakes out. This is what tells me you have no idea what you put in your post.

I CAN'T STRESS THIS ENOUGH: devs are not going to be relying on the SSD in the PS5 to load all new rendering assets into RAM for EVERY frame. If that were the case, we wouldn't need more than 133 MB of RAM, because the PS5 could load and flush it with new data every 16.6 ms... but as you are probably now realizing, it doesn't work like that (yet), because that isn't fast enough for current-gen games. What they will do is swap partial elements out over a relatively long series of frames to render a potentially unique frame, like in Cerny's 4 GB / half-second turn example (30ish frames with 133 MB of new data available per frame to give you a brand new 4 GB image to render at 60 fps at the end of your turn).

I mean, it'd be a neat thought experiment, but you also have to factor in SSD seek times vs GDDR latency (10-100 times slower) per frame if you want to start talking about using it as RAM. I think you'd quickly realize we aren't there yet. Next next-gen we may be. Though this gets us much closer than we were before.
Maybe if you'd like an answer you'll lay off the childish pettiness?
 
MS probably thinks super-budget gamers will choose the Lockhart+Game Pass combo, but they could simply grab an even cheaper One S+Game Pass if they don't care about graphics. You think Lockhart will cannibalize the SX, but this no-next-gen-exclusives policy will make the One S cannibalize Lockhart.

Yeah, good point. I could see it going that way also. Bottom line, I just don't see the whole "Lockhart" thing as a winner for MS and think it represents more danger than anything. As with most product lines, the low-end 'budget' models usually have little to no margin. Companies make more profit with services, and then with high-end hardware.
 
Xbox One was cheaper than the PS4 for six of the seven years we've had this gen. That didn't help it in the slightest vs PlayStation.

Lockhart is a valid attempt by MS but it will fail, and give a lower baseline to Xbox gamers next gen.
 
A 15% drop is more than 1800p; it's more like 2000p, as you have two axes.
Exactly. That is why, even with all the raw power difference between the PS4 and Xbox One, you only got 1080p vs 900p, which is around 44% more pixels... just like 1.3 TF multiplied by 1.44 is 1.872, very similar to the PS4's TF.

It's true that TF doesn't scale 1:1 with resolution; the problem there was that the Xbox One had more bottlenecks compared to the PS4 in other areas, like memory bandwidth.
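The numbers behind that, for anyone checking (PS4's 1.84 TF is the published spec):

```python
# 1080p vs 900p pixel counts, and why an ~18% pixel cut is "2000p", not 1800p.
print(1920 * 1080 / (1600 * 900))  # 1.44 -> 44% more pixels at 1080p
print(1.3 * 1.44)                  # 1.872 TF, close to the PS4's 1.84 TF

# Resolution scales on two axes, so an 18% pixel cut only shrinks each axis
# by sqrt(0.82):
print(2160 * 0.82 ** 0.5)          # ~1956 lines
```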
 
Question: wouldn't a drop from 4K to 1080p reduce the load on the GPU by (around) four times? In this sense, 4 TF seems appropriate. We have seen the Pro and the X multiplying GPU power mainly for resolution.
From 4K 60 fps to 1080p 30 fps you would save a lot of power, and you can still scale down other things, like using a different AA or fewer particles.
This could allow developers to target mainly the Series X and PS5, and then scale back. I suppose it would be the contrary if the Series S were the best-selling console, but we all know the PS5 probably will be instead.
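Naive pixel-count scaling, as an upper bound (TF doesn't scale 1:1 with resolution, as the replies note; 12.15 TF is the published XSX figure):

```python
# How much pixel work a 1080p30 target saves versus 4K60.
px_ratio = (3840 * 2160) / (1920 * 1080)  # 4.0x fewer pixels at 1080p
fps_ratio = 60 / 30                       # 2.0x fewer frames at 30 fps

print(px_ratio * fps_ratio)  # 8.0x less pixel work overall
# Naive TF scaling from the XSX's 12.15 TF for the same content at 4K60:
print(12.15 / px_ratio)      # ~3.04 TF, so a 4 TF Lockhart has headroom
```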
 
Licensing is required for Xbox and Xbox 360 games because they require repacking the original game in a custom emulator layer. If native BC is present for Xbox One games, then it will not require relicensing. And even with native BC, not all games work 100%. Certain PS1 games didn't work on the PS2. Certain PS2 games had issues even on the hardware-BC PS3s.

Licensing is going to be required no matter what if BC involves games being available digitally, which MS seems to be very committed to, with all but a handful of BC titles being available completely digitally, no disc required.
 
I agree. They just need to match the PS5 in price, since they have similar BOMs, and take their chances. No need to release a less powerful console. Start designing the Series X2 instead.


That's the question... If MS does release Lockhart and Sony releases a PS5 Pro in three years, would MS release a third console in the same gen?
 
Because the original game is played as-is, without any alteration to the game package (no per-game emulation wrapper). It's kind of like how PSP digital downloads work on the PS Vita. But in the case of, say, Red Dead Revolver as a PS2 Classic on PS4 - an actual PS2 game wrapped in an emulator with settings specific to that game - relicensing from the publisher is required.
 
It's not what we're asking for, but realize that we're a niche within a niche.

Parents who just know their child unit wants in on next gen (when stuff inevitably doesn't run on XBO) with their friends couldn't give a fuck if little Timmy gets 12 TF or 4.

Don't @ me Neogaf but I'm kinda thinking Microsoft's Min/Maxing strategy next gen is smart. If this is true at least. You get the enthusiasts excited for the most powerful box, and you get the price sensitive who just want an entry ticket to next gen.




52 CUs at 1 MHz would perform the same? Stahp, like all GPU hardware it's execution hardware x clock speed. They're saying it scales with the number of CUs, all else being equal.

Don't agree, but that's just my opinion. We all have them. Thanks for responding though.
 
Question: wouldn't a drop from 4K to 1080p reduce the load on the GPU by (around) four times? In this sense, 4 TF seems appropriate. We have seen the Pro and the X multiplying GPU power mainly for resolution.
From 4K 60 fps to 1080p 30 fps you would save a lot of power, and you can still scale down other things, like using a different AA or fewer particles.
This could allow developers to target mainly the Series X and PS5, and then scale back. I suppose it would be the contrary if the Series S were the best-selling console, but we all know the PS5 probably will be instead.
Not exactly; depending on the scene, the GPU workload difference could be even less.

For example, I checked some benchmarks across many games, and with the same GPU, if the bandwidth is sufficient, the workload difference is around 3x.

I think 4x is the worst-case scenario. I think Xbox's idea is to use the same CPU but with a smaller GPU, less memory, and less memory bandwidth, so you can port your games by scaling only the resolution.

The problem will come when the first title using a resolution well below 4K appears (1440p reconstructed to 4K); Lockhart will then possibly show some artifacts in the image. But then, we know Microsoft doesn't care about performance or resolution problems on the base console, because the press will talk mostly about the XSX, even if the budget console is the bigger part of their user base.
 
This is what dedicated RT hardware does; it's an RDNA2 feature.
PS5 has fewer RT units, but each RT unit is 22% faster. There's nothing unknown about this: clock speeds do affect RT.

Also what MS actually said:
Sony has made no comments about how RT is implemented in their console (other than stressing how demanding RT is), and your assumption that it is the exact same architecture as in the XSX, just provided by fewer CUs, has not been verified yet. Let's wait and see. All we know for now is that one manufacturer was confident enough to showcase not just RT but PT in March, while the other's lead designer said he was flabbergasted that it is even being successfully attempted on consoles. You can focus on the artificial, self-manufactured number of an 18% difference in RT HW acceleration, but it looks like 18% of quite a lot.
 
If Tidux is talking about RAM reservation for the OS, why is he talking about processors inside the APU?
Why do you keep data in RAM for the OS? To have that data available for the OS to use at any time.
So what happens if you can stream data from the SSD to RAM fast enough for the OS to use it? You don't need to keep the data in RAM anymore.

Basically that... with high-speed SSDs you eliminate the need for the OS to keep most of its data in RAM; even background apps can go to the SSD and come back to RAM when active.

That alone eliminates a lot of RAM usage by the OS.

The I/O co-processors are there to parallelize SSD access... imagine the game is using the main I/O controller; the OS can then use the I/O co-processors to access the SSD without having to wait for the game to finish its tasks... so you avoid both the game affecting the OS and the OS affecting the game.
 