Next-Gen PS5 & XSX |OT| Console tEch threaD

Desktop GPU sweet spot =/= APU sweet spot, especially given that whatever they've learned for RDNA2/7nm+ COULD be implemented for consoles at 7nm. We know an RDNA1 refresh is coming in 2020, and it's probably safe to assume efficiency will improve. Even a slight improvement of 10% can move that sweet spot quite a bit.


IIRC 54 requires 3 SEs, in which case you'd end up with a die that's bigger than a 56 CU / 2 SE setup. 54 would be bigger, more expensive, and have fewer CUs. Does that sound like something Cerny would do?

Interesting. So theoretically, would you venture the sweet spot would shift, say, 50-75MHz? So instead of 1.7GHz - 1.8GHz it could be closer to 1.75GHz - 1.85GHz? Just as an example?

Granted, I also agree that even if the systems are using a somewhat older RDNA (not the refresh you mention), they're probably going to use modified versions of the recently revealed 4800U mobile CPU, which will give them a lot of TDP headroom to free up for other things like the GPU.

Already posted?




As always, until officially proven, take with a grain of salt.

Yeah, this got posted a while ago when it first popped up. Irregularities on both fronts: the PS5 SSD is too small (500GB is paltry), and the XSX RAM amount is off. Even assuming a 320-bit bus (I'm thinking it's a 384-bit bus, same as the One X), they would most probably go with a uniform set of 2GB chips, for 20GB of GDDR6; if it's a 384-bit bus, it'd be 24GB. Might as well take advantage of that bulk pricing and go all the way with one chip size 🤷‍♂️.

I guess RDNA 1.5 is meant to indicate an RDNA 1/2 hybrid design, which is possible for both. The PS5 TF count is too high given what we know from the benchmarks, and even given what can be speculated for a chip fitting 48 active CUs. The bandwidth is also slightly unrealistic; it would entail 9x 16Gbps chips on a 288-bit bus, which seems a very odd bus width. Why not just go 320-bit and make it an even number?

Really, it reads more like a fan's spec wishlist than anything credible.

Only thing that conflicts with that is the note in the data leak that Oberon is "the full chip."

But the "full chip" has also had bug fixes done to things like the memory controller on successive steppings, so there isn't much telling what is on the chip which isn't actually showing up in the data results atm.
 
Yes I believe you, this is what he told me, I believe in Colbert more than these unnamed sources like Jason Schreier or Mark Cerny lying to us about Hardware RT YOOO!!
WV9VuZO.png
He is really that pathetic lol
 



Yeah, this got posted a while ago when it first popped up. Irregularities on both fronts: the PS5 SSD is too small (500GB is paltry), and the XSX RAM amount is off. Even assuming a 320-bit bus (I'm thinking it's a 384-bit bus, same as the One X), they would most probably go with a uniform set of 2GB chips, for 20GB of GDDR6; if it's a 384-bit bus, it'd be 24GB. Might as well take advantage of that bulk pricing and go all the way with one chip size 🤷‍♂️.

I guess RDNA 1.5 is meant to indicate an RDNA 1/2 hybrid design, which is possible for both. The PS5 TF count is too high given what we know from the benchmarks, and even given what can be speculated for a chip fitting 48 active CUs. The bandwidth is also slightly unrealistic; it would entail 9x 16Gbps chips on a 288-bit bus, which seems a very odd bus width. Why not just go 320-bit and make it an even number?

Really, it reads more like a fan's spec wishlist than anything credible.
Given what you know from the benchmarks? PS5 already has benchmarks? Can you share them?
 
The 500GB of storage on PS5 is a dead giveaway that it's fake.
I'm pretty sure the PS4 Pro devkits came with 500GB and 1TB hard drives as standard, so I don't find it unrealistic that PS5 devkits would have a vastly superior 500GB SSD inside. 18GB of memory instead of simply 16 is what has me slightly confused.
 
18GB of memory would just be confusing; it doesn't make any sense.

As for the smaller drive, I would expect a next-gen devkit to have a larger base drive. Even 1TB retail units won't cut it for long, and I imagine very soon we'll be getting at least 2TB; but for a devkit? 500GB is way, way too small.

Stranger things have happened, but I don't buy it. The other stuff is roughly what I would expect. Seems a bit too far-fetched for me.
 
I'm pretty sure the PS4 Pro devkits came with 500GB and 1TB hard drives as standard, so I don't find it unrealistic that PS5 devkits would have a vastly superior 500GB SSD inside. 18GB of memory instead of simply 16 is what has me slightly confused.

Could be 16 + 2 for the OS (the additional 4 GB just for the devkit). I'm actually not expecting anything above a 500 GB SSD: because of cost, because of selling higher-priced SKUs, and because bigger simply isn't needed. Cerny already said he expects game sizes to be smaller next gen because SSDs do away with redundant data. And PS4 sits at a ratio of 11 games per console: 430 GB of usable space / 11 games ≈ 39 GB per game, enough for the average customer who buys two games a year or less. Curiously enough, OEM prices for 500 GB NVMe SSDs are now around the same level as 500 GB HDDs were back in 2013.

But the TF in that leak doesn't make sense. Navi 10 (and anything based on it) is limited to 40 CUs. Good luck trying to get 12.6 TF out of that: the GPU would need to run at about 2.46 GHz with no CUs deactivated for yields. LOL.
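The CU/clock/TF arithmetic being argued here is easy to check; a quick sketch, using the leak's hypothetical figures rather than any confirmed spec:

```python
# Peak FP32 throughput = CUs * 64 shaders/CU * 2 ops/cycle (FMA) * clock.

def teraflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS for a GCN/RDNA-style GPU."""
    return cus * 64 * 2 * clock_ghz / 1000

def clock_for_tf(cus: int, target_tf: float) -> float:
    """Clock (GHz) needed to hit a target TFLOPS figure."""
    return target_tf * 1000 / (cus * 64 * 2)

print(round(clock_for_tf(40, 12.6), 2))  # ~2.46 GHz for 12.6 TF on a full 40-CU Navi 10
print(round(teraflops(36, 2.0), 2))      # ~9.22 TF for 36 CUs at 2 GHz
```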
 
Thank you! How the heck did you know that I'm from Oman :messenger_tears_of_joy: I just added that to my profile after your reply. I figured that by signing up here I couldn't add much in terms of the next-gen market, but I can bring a wider picture of the gaming map around the globe outside the US. Xbox is extremely rare here; you can find it in a few big shops, but the ratio is more than 1:100 in favour of PS4. I once found the Xbox One X being sold for as cheap as 100-110 Omani Rials ($260-285 USD).

Although the Xbox 360 didn't sell much here, it was considered a very successful console, mainly for playing pirated games at around 1-2 Omani Rials each (~$2.60-5.20 USD), or even buying a bundle with games preinstalled. This probably isn't well known, but the Xbox 360 thrived because of piracy, which explains all the DRM and the 24-hour online verification that came with the Xbox One.

For the Xbox One, some buy it only to play Forza games. ONLY! Here we have amusement buildings where you can play PS4/Xbox in a seat, or rent a room with Netflix/PS4/sports subscriptions and a PC for general use. My friends, for example, are renting a room for about 120-140 OMR monthly (~$311-363 USD) that we gather in to watch matches, chat, etc., and we've been holding the same room for more than a year. Such amusement buildings buy A LOT of PS4s, mostly for football gamers (FIFA/PES), and rarely Xbox Ones for Forza Horizon.

Funny thing is, PS4s are not loud here, but I hear a lot of the opposite on the internet; not sure why (I believe them anyway, maybe it has something to do with the versions exported to some markets).

For people who don't know where I'm posting from:

oman-location-map-max.jpg


And I've finished my high school in Virginia, US (11th-12th grades) back in 2002-2004 :)
That's very interesting, the room bit.

I know the same is true for most Middle Eastern countries regarding the PS vs Xbox brands.

As for how I knew you're Omani, well, you said this:

our national TV "Oman TV"
:messenger_winking:
 
18GB of memory would just be confusing; it doesn't make any sense.

As for the smaller drive, I would expect a next-gen devkit to have a larger base drive. Even 1TB retail units won't cut it for long, and I imagine very soon we'll be getting at least 2TB; but for a devkit? 500GB is way, way too small.

Stranger things have happened, but I don't buy it. The other stuff is roughly what I would expect. Seems a bit too far-fetched for me.

I'm hoping that PS5 comes with 24GB of GDDR6 or HBM and an additional 4-8GB of DDR4 for the OS, but I have a feeling we're only going to see 16GB of GDDR6 or HBM. If they do go that low, I hope to hell they put in an additional 8GB of DDR4 for the OS so none of that high-speed memory gets wasted on it.

Could be 16 + 2 for the OS. I'm actually not expecting anything above a 500 GB SSD: because of cost, because of selling higher-priced SKUs, and because bigger simply isn't needed. Cerny already said he expects game sizes to be smaller next gen because SSDs do away with redundant data. And PS4 sits at a ratio of 11 games per console: 430 GB of usable space / 11 games ≈ 39 GB per game, enough for the average customer who buys two games a year or less. Curiously enough, OEM prices for 500 GB NVMe SSDs are now around the same level as 500 GB HDDs were back in 2013.

I know the SSD will do away with redundant data, but I'm guessing that with the increase in texture sizes and other in-game assets, file sizes are still going to be massive next gen. I'm hoping they put a minimum of 1TB in the console; I know it will cost a little more, but it will be better in the long run, otherwise we'll fill that 500GB with just a few games.
 
ThaMonkeyClaw I wouldn't expect anything over 20; 16+4 is the sweet spot, and 8GB for the OS is utterly terrible and would make me question any OS that uses that much. Things should be using less memory than before, not 4x more.

As for HBM? Forget it, it's not happening, not this time anyway.

I'd happily take the 1TB @ 3.8GB/s over the 500GB @ 5.5GB/s 🤷‍♂️

A difference of 1.7GB/s is so utterly tiny that I wouldn't expect to see any visible difference 99.9% of the time, unless you are constantly shifting a massive amount of data. Even DF would have a hard time making much visual sense of it in tech teardowns. It's essentially the same bloody thing, and hardly worth shouting about, which is why I doubt this.
 
They've had a partnership with the UEFA Champions League covering the whole competition for years; I think since PS1. And not only that competition: you see Sony and PlayStation all over the place. See this picture from the Champions League in 1997. The team is Juventus, the biggest Italian team and one of the biggest in Europe.

Juventus+1997.jpg
My favorite Juve jersey. I still have it :)
 
ThaMonkeyClaw I wouldn't expect anything over 20; 16+4 is the sweet spot, and 8GB for the OS is utterly terrible and would make me question any OS that uses that much. Things should be using less memory than before, not 4x more.

As for HBM? Forget it, it's not happening, not this time anyway.

I know anything over 16GB (especially as HBM) is pretty much a pipe dream, but one can always hope! The cost difference between 4GB and 8GB of DDR4 is so minimal that I'd want to see 8GB in there. I'm not expecting the OS to use all of that, but it allows them to add features down the road and gives a lot of breathing room for things like capturing game video. I just really don't want them wasting any of the higher-speed memory on anything outside of the game; they always seem to take a sizeable chunk for the OS and other functions, and that drives me crazy because I feel it holds back some games. I don't want it to be like that next gen.
 
ThaMonkeyClaw I wouldn't expect anything over 20; 16+4 is the sweet spot, and 8GB for the OS is utterly terrible and would make me question any OS that uses that much. Things should be using less memory than before, not 4x more.

As for HBM? Forget it, it's not happening, not this time anyway.



A difference of 1.7GB/s is so utterly tiny that I wouldn't expect to see any visible difference 99.9% of the time, unless you are constantly shifting a massive amount of data. Even DF would have a hard time making much visual sense of it in tech teardowns. It's essentially the same bloody thing, and hardly worth shouting about, which is why I doubt this.
The difference is nearly 50%. It is not tiny, it is huge. We will see the difference right away, and over the years the gap is only going to get bigger.
 
ThaMonkeyClaw it's not so much whether it's cheaper; small 2GB chips are cheap as chips when bought in bulk. There are several other factors, including the actual build. You can't just slap an extra 2GB onto an existing setup and have it run perfectly fine; there's a lot to take into account. Even if they had aimed for 18 all along, you'd still run into a lot of other issues along the way with usage. It's not always a case of more being better; you need the complete package to make the most of these parts.

As for the reserved OS memory, I would bet money on both systems having at least 4GB of DDR4 in a separate pool for this, so it doesn't eat into the main memory pool.

16GB of GDDR6 + 4GB of DDR4 is actually damn good for what these systems will do, and more isn't really needed.
 
Do you prefer a console with LED indicator lights and fancy touch controls, or a classic one?

Same question for the controller.
 
ThaMonkeyClaw it's not so much whether it's cheaper; small 2GB chips are cheap as chips when bought in bulk. There are several other factors, including the actual build. You can't just slap an extra 2GB onto an existing setup and have it run perfectly fine; there's a lot to take into account. Even if they had aimed for 18 all along, you'd still run into a lot of other issues along the way with usage. It's not always a case of more being better; you need the complete package to make the most of these parts.

As for the reserved OS memory, I would bet money on both systems having at least 4GB of DDR4 in a separate pool for this, so it doesn't eat into the main memory pool.

16GB of GDDR6 + 4GB of DDR4 is actually damn good for what these systems will do, and more isn't really needed.

I still want my 8GB of DDR4 for the OS, and I'm not going to settle until I get it! :messenger_tears_of_joy:


WDQM8FU.jpg
 
Makes sense, it's the same full chip, but with disabled CU clusters.
"Full chip" was referring to the CU count. Also, the third mode was referred to as native mode, while the other two were BC modes (Gen 0 and Gen 1). And to make that perfectly clear: the existence of more CUs would make it utterly pointless to run the console at 2 GHz. What you're suggesting is that PS5 has a PS4 mode, a PS4 Pro mode, a PS4 Pro boost mode (which basically goes from 0.9 GHz to 2 GHz at the same 36 CUs), and a PS5 native mode with 54 CUs. If that ran at 2 GHz it would be a 13.8 TF console, so it won't. Then it doesn't have the cooling to run at 2 GHz; so how does it do it in PS4 Pro boost mode? It doesn't. And there your theory of them testing a boost mode comes to an end (which is crazy on its own: right, AMD tested all those APUs, but the Sony one they only tested for BC and not in its native mode?).

And let's not forget that most PS4 Pro enhanced titles don't even run at an unlocked frame rate or a dynamic resolution that can reach native 4K, which is when it would actually make sense to give more than double the performance to a BC mode. Loads of the games even use checkerboard rendering. You can't make them reach a native 4K output without patches, at which point you might as well just do a native PS5 patch. What you're suggesting is something that's possible on Xbox, because every One X game runs at dynamic 4K, so any additional power on Series X will just make it hit that target. But PS4 Pro, with its focus on reconstruction? No chance.

So, all in all, you're proposing a super-hot BC mode that is utterly pointless, all so you don't have to believe in a 9.2 TF APU back in June.
 
ThaMonkeyClaw I wouldn't expect anything over 20; 16+4 is the sweet spot, and 8GB for the OS is utterly terrible and would make me question any OS that uses that much. Things should be using less memory than before, not 4x more.

As for HBM? Forget it, it's not happening, not this time anyway.



A difference of 1.7GB/s is so utterly tiny that I wouldn't expect to see any visible difference 99.9% of the time, unless you are constantly shifting a massive amount of data. Even DF would have a hard time making much visual sense of it in tech teardowns. It's essentially the same bloody thing, and hardly worth shouting about, which is why I doubt this.
Um yeah not too sure about that.
 
Tales from my ass, but would 16GB + 2GB of GDDR6 mean the 2GB could serve as a cache for the drive before streaming assets into the 16GB?
 
The difference is nearly 50%. It is not tiny, it is huge. We will see the difference right away, and over the years the gap is only going to get bigger.

It's not 50%. And I've made several write-ups in this very thread based on working with actual game engines utilising SSDs; search them out. The only way you'll see any decent result is if one SSD is entry level and the other high end, so 8 vs 2. 5.5 vs 3.8 is absolutely nothing in the scheme of things, and doesn't ring true with what Cerny has talked about. It's nothing to shout from the rooftops as some massive improvement.

Essentially, for 5.5 vs 3.8, in a real-world application you'd see a negligible difference loading assets while in-game and playing (CPU dependent), and for actual bulk loading of data you're talking about a handful of seconds at worst. A 150GB data load (something that a) doesn't exist, as actual loads are small, and b) assumes we're not decompressing, which would change speed based on CPU threading) would load in 27.27 seconds on this theoretical PS5 and 39.47 seconds on this theoretical SX. Again, this disregards CPU instruction speed and decompression as well as all other logic.

A more REAL WORLD example, however, would be loading 12GB of data up front, which makes more sense as the average memory use of a modern game.

In this example, the PS5 would load it in 2.18 seconds and the SX in 3.16 seconds. Again, assuming no game logic or CPU instruction differences.

I've written many times about SSD usage, and why the whole "PS5 will be next-gen amazing because of it!" line is utterly horse radish. Sure, it will be fantastic, but BOTH machines will be. Don't forget the above examples assume the same CPU; if the SX's is faster, it will decompress the archives quicker and close the gap even more.

BONUS TIME: of course, all this also depends on the random read speed and the quality of each drive's controller!

It's all simple math. But I've written about it a LOT of times here, so feel free to search :)

TLDR: If these values are what's "real", then Sony has no real bragging rights about the SSD, which makes me believe they're not.
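The load-time arithmetic in the post above is simple division; a quick sketch using the rumoured drive speeds (5.5 and 3.8 GB/s are leak figures, not confirmed specs), ignoring decompression and CPU work as the post notes:

```python
# Raw sequential-read load times for the two rumoured drives.

def load_seconds(data_gb: float, read_gb_s: float) -> float:
    """Seconds to read data_gb at a sustained read_gb_s, no decompression."""
    return data_gb / read_gb_s

# Contrived 150 GB bulk load vs. a realistic 12 GB scene load.
for data in (150, 12):
    fast = load_seconds(data, 5.5)  # rumoured PS5 figure
    slow = load_seconds(data, 3.8)  # rumoured XSX figure
    print(f"{data} GB: {fast:.2f}s vs {slow:.2f}s (delta {slow - fast:.2f}s)")
```

For the realistic 12 GB case the gap comes out to under a second, which is the post's point.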
 
Could be 16 + 2 for the OS (the additional 4 GB just for the devkit). I'm actually not expecting anything above a 500 GB SSD: because of cost, because of selling higher-priced SKUs, and because bigger simply isn't needed. Cerny already said he expects game sizes to be smaller next gen because SSDs do away with redundant data. And PS4 sits at a ratio of 11 games per console: 430 GB of usable space / 11 games ≈ 39 GB per game, enough for the average customer who buys two games a year or less. Curiously enough, OEM prices for 500 GB NVMe SSDs are now around the same level as 500 GB HDDs were back in 2013.

But the TF in that leak doesn't make sense. Navi 10 (and anything based on it) is limited to 40 CUs. Good luck trying to get 12.6 TF out of that: the GPU would need to run at about 2.46 GHz with no CUs deactivated for yields. LOL.

I've been thinking for a while now that PS5 is probably a modified Navi 10, with other features like ray tracing integrated into the silicon. It's most likely using a 256-bit memory bus (since it's most likely using GDDR6 as the primary memory), so a GPU with 48 active CUs would be fed pretty well by that. IIRC Micron GDDR6 can be overclocked to 18Gbps, or 2.25 GB/s per pin, making one chip 72 GB/s and 8 chips 576 GB/s (hell, that could actually be why the Reddit leak mentions 576 GB/s for PS5 bandwidth).

The reason I'm still insisting on a GPU with 48 CUs maximum is that it fits detailed die measurements, takes into account the deactivated CU cores the PS4 Pro had in regression testing for PS4 compatibility, lines up with the claimed upper limit in the ITMedia figure when accounting for architectural conversions (13.8 TF Radeon = between 10.4 TF and 11.04 TF Navi, with Navi-over-GCN efficiency improvements of 20-25%), and fits within Navi's claimed sweet spot of 1.7GHz to 1.8GHz.

That said, Disco_ has also mentioned that the systems might be using a revised Navi with even more architectural efficiency improvements... I think they said something like an additional 10%, IIRC? With that in mind, a next-gen console would only need around 8.97 TF to 9.6 TF of Navi to hit around 13.8 TF of Radeon performance. That could realistically be done with a Navi chip at 40 CUs (all enabled) around 1750MHz on the low end (giving about 8.96 TF), which would fit the "older" sweet spot (assuming what Disco says is true).

Personally, I still think another benchmark will pop up for Oberon at some point with more CUs and RT being tested, likely around the time of GDC in March. Some people seem to think Arden is Oberon-related, but I don't see it that way; we haven't seen any other Arden steppings datamined since that Github leak.

So in terms of my own personal range for PS5 performance, I'd put it between 8.96 TF (lowest, 40 CUs) and 12.16 TF (highest, 48 CUs, throwing in possible RDNA efficiency shifting the sweet spot around 10%, i.e. the upper end of 1.8GHz becomes 1.98GHz). HOWEVER, that depends on whether the RDNA efficiency improvements Disco_ mentioned are real and present in Oberon.

If not, and efficiency over GCN is closer to 20-25% instead of 30-35%, then I don't even want to entertain Oberon being a 40 CU max chip, because even if all the CUs were active, the most you'd get is 8.7 TF. It's even worse if CUs (even just two) are disabled. But I hedge my bets on the chip actually having 48 CUs (or 52 total with 4 disabled); that way, if the sweet spot is the same as usual, it'd be closer to 10.4 TF to 11 TF.

Also gotta keep in mind that the CPU they're using is likely a modified version of the 4800U; that APU has a TDP of only 15W (which includes the on-board CUs), though that is with the CPU at its base clock (something like 2.3GHz, or up to 2.7GHz, last I checked). So that could realistically give them room to push the GPU clocks to around 2GHz, and while that looks bad for pure GPU TDP, it's balanced out by a relatively low-powered CPU (then again, assuming the XSX uses a similar CPU, wouldn't it also be possible for them to push their GPU clocks to around 2GHz?).
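The GDDR6 bandwidth figures quoted in the post (the 18 Gbps overclock is speculation, not a confirmed spec) follow from the standard per-chip math:

```python
# Each GDDR6 chip has a 32-bit interface; bandwidth = pins * Gbps-per-pin / 8.

def chip_bandwidth_gb_s(gbps_per_pin: float, pins: int = 32) -> float:
    """Bandwidth in GB/s of one GDDR6 chip at a given per-pin data rate."""
    return gbps_per_pin * pins / 8

per_chip = chip_bandwidth_gb_s(18)  # 18 Gbps -> 72 GB/s per chip
total = per_chip * 8                # 8 chips on a 256-bit bus
print(per_chip, total)              # 72.0 576.0
```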
 
Yes I believe you, this is what he told me, I believe in Colbert more than these unnamed sources like Jason Schreier or Mark Cerny lying to us about Hardware RT YOOO!!
WV9VuZO.png

Not sure why one needs "to understand the technological dependencies" to decipher Jason's statement that both consoles are more powerful than an RTX 2080, which is an 11.4 TF card. It's not as if he based it on a single source; that information was based on his conversations with multiple developers.

The Series X we already know is around the ~12 TF mark; PS5 would be in that territory too if you believe Jason's statement, which given his track record should be obvious, unless one has a spec sheet that disproves those claims.
 

18GB would suggest some oddities in the memory configuration. Assuming you're looking at 1GB or 2GB modules, or some combination of the two, 18GB is an odd number for the standard bus widths typically used (you generally need some symmetrical balance). 16GB, 20GB, 24GB, and 32GB are proper numbers; 36GB would be unusual as well.
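For illustration, the "proper numbers" follow directly from how many 32-bit chips each standard bus width supports; a sketch assuming uniform chip densities (mixed 1GB/2GB setups are what break the symmetrical balance mentioned above):

```python
# Achievable totals with uniform GDDR6 chip densities on common bus widths.
BUS_WIDTHS = (256, 320, 384)  # bits; each GDDR6 chip has a 32-bit interface
DENSITIES = (1, 2)            # GB per chip

for bus in BUS_WIDTHS:
    chips = bus // 32
    totals = [chips * d for d in DENSITIES]
    print(f"{bus}-bit: {chips} chips -> {totals} GB")
# 256-bit -> 8 or 16 GB, 320-bit -> 10 or 20 GB, 384-bit -> 12 or 24 GB;
# no uniform configuration lands on 18 GB.
```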
 
It's not 50%. And I've made several write-ups in this very thread based on working with actual game engines utilising SSDs; search them out. The only way you'll see any decent result is if one SSD is entry level and the other high end, so 8 vs 2. 5.5 vs 3.8 is absolutely nothing in the scheme of things, and doesn't ring true with what Cerny has talked about. It's nothing to shout from the rooftops as some massive improvement.

I don't think that was ever his intention. All he said back in April was that the PS5 SSD was faster than anything available on PC, and he showed it loading 19x faster than the PS4 HDD. Going by around 100 MB/s read speeds on PS4, that puts PS5 at around 1.9 GB/s. That's the only data we have right now.
 
18GB would suggest some oddities in the memory configuration. Assuming you're looking at 1GB or 2GB modules, or some combination of the two, 18GB is an odd number for the standard bus widths typically used (you generally need some symmetrical balance). 16GB, 20GB, 24GB, and 32GB are proper numbers; 36GB would be unusual as well.

Exactly. This "leak" screams of a fan who just thought "oh, this is a bit bigger, that will work" without actually thinking about what they were writing.

Edit: Before any raging loons come at me, I'd better mention that this doesn't mean their SSD isn't going to be great. It is, no doubt. It's just not as great as you're all making out. I'm sure the PS5 will be amazing in its own ways over the SX, and I have no doubt it will be the better overall system. Whether or not it's faster remains to be seen.

I don't think that was ever his intention. All he said back in April was that the PS5 SSD was faster than anything available on PC, and he showed it loading 19x faster than the PS4 HDD. Going by around 100 MB/s read speeds on PS4, that puts PS5 at around 1.9 GB/s. That's the only data we have right now.

Of what little they have talked about and shown, the SSD seems to be their shining light, their "look how badass we are" moment. For the gap to be as small as this "leak" suggests would be like comparing cocks of 6 inches and 6.1. It's like... well... still a cock 🤣
 
18GB would suggest some oddities in the memory configuration. Assuming you're looking at 1GB or 2GB modules, or some combination of the two, 18GB is an odd number for the standard bus widths typically used (you generally need some symmetrical balance). 16GB, 20GB, 24GB, and 32GB are proper numbers; 36GB would be unusual as well.

What about 11GB on Nvidia cards and the like?
 
It's not 50%. And I've made several write-ups in this very thread based on working with actual game engines utilising SSDs; search them out. The only way you'll see any decent result is if one SSD is entry level and the other high end, so 8 vs 2. 5.5 vs 3.8 is absolutely nothing in the scheme of things, and doesn't ring true with what Cerny has talked about. It's nothing to shout from the rooftops as some massive improvement.

Essentially, for 5.5 vs 3.8, in a real-world application you'd see a negligible difference loading assets while in-game and playing (CPU dependent), and for actual bulk loading of data you're talking about a handful of seconds at worst. A 150GB data load (something that a) doesn't exist, as actual loads are small, and b) assumes we're not decompressing, which would change speed based on CPU threading) would load in 27.27 seconds on this theoretical PS5 and 39.47 seconds on this theoretical SX. Again, this disregards CPU instruction speed and decompression as well as all other logic.

A more REAL WORLD example, however, would be loading 12GB of data up front, which makes more sense as the average memory use of a modern game.

In this example, the PS5 would load it in 2.18 seconds and the SX in 3.16 seconds. Again, assuming no game logic or CPU instruction differences.

I've written many times about SSD usage, and why the whole "PS5 will be next-gen amazing because of it!" line is utterly horse radish. Sure, it will be fantastic, but BOTH machines will be. Don't forget the above examples assume the same CPU; if the SX's is faster, it will decompress the archives quicker and close the gap even more.

BONUS TIME: of course, all this also depends on the random read speed and the quality of each drive's controller!

It's all simple math. But I've written about it a LOT of times here, so feel free to search :)

TLDR: If these values are what's "real", then Sony has no real bragging rights about the SSD, which makes me believe they're not.
hilariously funny :messenger_tears_of_joy:
 
Surely it's fake then. :messenger_winking_tongue:

It's probably fake anyways, maybe.
Surely it's fake then. :messenger_winking_tongue:

It's probably fake anyways, maybe.

Bro, too much smoke not to have fire. All those 12 TF numbers, I'm sure it's around that amount. I seriously believe it's edging out the Nextbox, but we shall see.

Cerny lives in New York? I should go visit his ass 😂😂😂😂 but he looks like he might shoot me with a tranquilizer dart and try to do things 😂😂😂🤦‍♂️
 
I don't think that was ever his intention. All he said back in April was that the PS5 SSD was faster than anything available on PC, and he showed it loading 19x faster than the PS4 HDD. Going by around 100 MB/s read speeds on PS4, that puts PS5 at around 1.9 GB/s. That's the only data we have right now.

Don't forget that the speeds they showed in April came with a caveat; the article specifically stated:

The devkit, an early "low-speed" version, is concealed in a big silver tower, with no visible componentry.

"Low-speed" being the key here: we have no clue what the true top speed of the PS5's SSD will be. If it's 3GB/s+ I'll be more than happy; anything higher is just icing on the cake!

https://www.wired.com/story/exclusive-sony-next-gen-console/
 
I've been thinking for a while now that PS5 is probably a modified Navi 10, with other features like ray tracing integrated into the silicon. It's most likely using a 256-bit memory bus (since it's most likely using GDDR6 as the primary memory), so a GPU with 48 active CUs would be fed pretty well by that. IIRC Micron GDDR6 can be overclocked to 18Gbps, or 2.25 GB/s per pin, making one chip 72 GB/s and 8 chips 576 GB/s (hell, that could actually be why the Reddit leak mentions 576 GB/s for PS5 bandwidth).

The reason I'm still insisting on a GPU with 48 CUs maximum is that it fits detailed die measurements, takes into account the deactivated CU cores the PS4 Pro had in regression testing for PS4 compatibility, lines up with the claimed upper limit in the ITMedia figure when accounting for architectural conversions (13.8 TF Radeon = between 10.4 TF and 11.04 TF Navi, with Navi-over-GCN efficiency improvements of 20-25%), and fits within Navi's claimed sweet spot of 1.7GHz to 1.8GHz.

That said, Disco_ has also mentioned that the systems might be using a revised Navi with even more architectural efficiency improvements... I think they said something like an additional 10%, IIRC? With that in mind, a next-gen console would only need around 8.97 TF to 9.6 TF of Navi to hit around 13.8 TF of Radeon performance. That could realistically be done with a Navi chip at 40 CUs (all enabled) around 1750MHz on the low end (giving about 8.96 TF), which would fit the "older" sweet spot (assuming what Disco says is true).

Personally, I still think another benchmark will pop up for Oberon at some point with more CUs and RT being tested, likely around the time of GDC in March. Some people seem to think Arden is Oberon-related, but I don't see it that way; we haven't seen any other Arden steppings datamined since that Github leak.

So I guess in terms of my own personal range for PS5 performance, I'd put it between 8.96TF (lowest; 40CUs) and 12.16TF (highest; 48CUs, plus possible RDNA efficiency improvements shifting the sweetspot around 10%, i.e. the upper end of 1.8GHz becomes 1.98GHz). HOWEVER, that depends on whether the RDNA efficiency improvements Disco_ mentioned are real and present in Oberon.

If not, and efficiency over GCN is closer to 20%-25% instead of 30%-35%, then I don't even want to entertain Oberon being a 40CU max chip, because even with all CUs active the most you'd get is 8.7TF. It's even worse if CUs (even just two) are disabled. But I hedge my bets on the chip actually having 48CUs (or 52 total with 4 disabled); that way, if the sweetspot is the same as usual, it'd be closer to 10.4TF - 11TF.
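All the TF figures in the last few posts come out of the same standard AMD FP32 formula (CUs × 64 stream processors × 2 FLOPs per clock × clock speed); a sketch reproducing the quoted scenarios:

```python
# Standard AMD FP32 throughput: CUs * 64 SPs/CU * 2 FLOPs/clock * clock (GHz) / 1000.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

scenarios = {
    "40 CU @ 1.70 GHz": tflops(40, 1.70),  # ~8.70 TF, the pessimistic floor
    "40 CU @ 1.75 GHz": tflops(40, 1.75),  # 8.96 TF, the quoted low end
    "48 CU @ 1.70 GHz": tflops(48, 1.70),  # ~10.44 TF
    "48 CU @ 1.80 GHz": tflops(48, 1.80),  # ~11.06 TF
    "48 CU @ 1.98 GHz": tflops(48, 1.98),  # ~12.17 TF (the post truncates to 12.16)
}
for name, tf in scenarios.items():
    print(f"{name}: {tf:.2f} TF")
```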

Also gotta keep in mind that the CPU they're using is likely a modified version of the 4800U; that APU has a TDP of only 15W (which includes the on-board CUs), tho that is with the CPU at its base clock (something like 2.3GHz, or up to 2.7GHz, last I checked). So that could realistically give them room to push the GPU clocks to around 2GHz; while that looks bad for pure GPU TDP, it's balanced out by a relatively low-powered CPU (then again, assuming the XSX is using a similar CPU, would it not also be possible for them to push their GPU clocks to around 2GHz?).
There have been PS5 APU die measurements? Can you link me to them? I thought we had only seen the Xbox APU?

And that's the thing with the Xbox APU: based on the console design (chimney effect and a huge fan on top), they should be able to run the APU at 2 GHz. I don't think they would want to, because they wouldn't want to risk another RROD. But that's also why Sony probably has no interest in running something that fast and hot.

They did for the devkit, absolutely. That's why that thing looks the way it does and, according to a dev, sounds like a jet engine. But that's with overhead for all the dev tools and performance analysis. A 2 GHz APU in a consumer console would almost certainly require a liquid-cooled solution, which adds cost and causes issues years down the line when the hoses become porous. It also requires more energy for the radiator than a simple fan would. It just looks like a bad idea.

I would rather go with 40 CU at 1.8 GHz to reach 9.2 TF than 36 CU at 2 GHz. Yes, that results in lower yields, but you can recoup that money with a cheaper cooling solution and a lower likelihood of hardware failures. It also opens up TDP for the ray-tracing cores, which aren't cheap either; based on Nvidia hardware they add around 20W to the TDP, which takes away from what you have for the rest of the GPU.
 
Not sure why one needs 'to understand the technological dependencies' to decipher Jason's statement that both consoles are more powerful than an RTX 2080, which is an 11.4TF card. It's not as if he based it on a single source; that information came from his conversations with multiple developers.

We already know Series X is around the ~12TF mark, and PS5 would be in that territory too if you believe Jason's statement. Which, given his track record, should be obvious unless one has a spec sheet which disproves those claims.
Yeah, and why would Jason even cite the RTX 2080 if he "doesn't understand the technological dependencies", as Colbert is saying? Shouldn't Jason also not understand how powerful an RTX 2080 is?

It is really obvious that Colbert is spinning things and statements around just to fit his agenda or his ego. I don't know why he does this; he even put Kleegamefan on "ignore" when Klee stated that PS5 is as powerful or more powerful than XSX... That tells you everything about him 🤣🤣🤣
 
What about 11GB on nVidia cards and the like?

Typically the result of a bin, generally the deactivation of one or more memory controllers. The full design of the 2080 Ti was 12x1GB, but the decision was made to deactivate one 32-bit controller (likely due to yield issues). That's not something I would expect to see in an AMD design; they don't have as much flexibility in deactivating controllers due to their shader design. It would certainly be a waste in a console design.
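The arithmetic behind that bin is simple; a sketch, assuming the 2080 Ti's layout of 32-bit controllers with 1 GB of GDDR6 behind each:

```python
# Binning math from the post above: bus width and capacity both scale directly
# with the number of active 32-bit memory controllers (1 GB per controller on
# the 2080 Ti's die).

def bus_config(active_controllers: int, gb_per_controller: int = 1) -> tuple:
    """Returns (bus width in bits, memory capacity in GB)."""
    return (active_controllers * 32, active_controllers * gb_per_controller)

print(bus_config(12))  # (384, 12): the full die - 384-bit bus, 12 GB
print(bus_config(11))  # (352, 11): one controller fused off - the shipped 2080 Ti
```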
 