
Next-Gen PS5 & XSX |OT| Console Tech Thread

Status
Not open for further replies.
What I personally expect from next gen:
1) No noise. My PS4 Pro is amazing but... this can be achieved, I guess.
2) More new AAA IPs. I felt a bit disappointed with this gen in that regard.
3) A focus on local split screen. Since next gen will target 4K, and as bigger TVs become the norm (>=65 inch), local split screen with 4 players @1080p each should offer plenty of clarity.
4) An app store extending beyond games. I'd like to see an app to replace my Popcorn Hour.
5) I'd love a system that would bring retro compatibility by buying some rights or doing a profit-share deal. Imagine my PS5 with C64, CPC 6128, Atari ST, and Amiga game players. Totally targeted at middle-aged guys...
6) Some remasters / follow-ups of:
- Motorstorm Pacific Rift
- Einhander
- Xenon 2
 
“I did QA testing for RDR 2 and GTA 6, the latter starting preproduction in 2014 I believe. I was using a PS5 devkit. The game entered large scale development sometime around late 2016 during the timeframe Rockstar began marketing Red Dead Redemption 2.”

Imagine being able to fast travel from any point on the map without loading times. :messenger_open_mouth:
 


After reading this post, now I also want HBM to happen! He said HBM with a storage-class memory is perfect, and ReRAM is a storage-class memory.
 
This is also what the game dev from B3D told me in a PM: bad latency is the reason ray tracing is expensive.

ReRAM being only 10 times slower than main memory sounds awesome, but remember that even main RAM is usually considered much too slow.
Accessing main memory from the GPU (speaking about the dedicated memory on a discrete PC GPU here) often means a latency of 800(!) cycles.
GPUs (try to) solve this with caches and massive hyper-threading, so other work can be done while waiting.
Neither works well with RT, which is the major technical reason it is so expensive (aside from the problematic time complexity of the algorithm in general).
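The latency-hiding point in that quote can be illustrated with a toy model (the numbers and the one-request-per-cycle assumption are illustrative only, not real hardware behavior):

```python
# Toy model of GPU latency hiding: each memory request takes LATENCY
# cycles, but the scheduler can keep many warps in flight, overlapping
# their waits. Illustrative simplification, not a real memory system.

LATENCY = 800      # cycles per dedicated-VRAM access (figure from the quote)
REQUESTS = 10_000  # memory requests to issue

def total_cycles(warps_in_flight: int) -> int:
    """Cycles to finish all requests when `warps_in_flight` requests
    can overlap; assumes one request retires per cycle once full."""
    if warps_in_flight >= LATENCY:
        # latency fully hidden: after the first wait, one request per cycle
        return LATENCY + REQUESTS - 1
    # otherwise the pipeline drains in batches of `warps_in_flight`
    batches = -(-REQUESTS // warps_in_flight)  # ceiling division
    return batches * LATENCY

print(total_cycles(1))    # serial: 10,000 * 800 = 8,000,000 cycles
print(total_cycles(800))  # fully hidden: ~10,799 cycles
```

The gap between the two runs is the whole reason GPUs lean on massive thread counts; incoherent RT rays defeat this because they also thrash the caches.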

Now I want HBM to happen too. :messenger_grinning_smiling:
 
Since no insider has mentioned the PS5 using GDDR6, maybe osirisblack is correct and it'll be HBM2 with DDR4.

Osirisblack's mention of an "extra storage" could also hint at ReRAM.

Based on that new Reddit post, it looks like HBM + storage-class memory is perfect. ReRAM is a storage-class memory.
 

psorcerer

Banned
After reading this post, now I also want HBM to happen! He said HBM with a storage-class memory is perfect, and ReRAM is a storage-class memory.

Here we go again.
Everything above is a half truth.
Yes, because RT uses random sampling it is essentially a random memory access.
No, you cannot solve that.
Again, THERE IS NO SOLUTION.
"Low latency CPU memory controllers" won't help you: a) they are not really "low latency", they just hide it well; b) they have the exact same problem with random access.
But you can employ some clever tricks to make it suck less, like ray-coherency hashing algorithms (in Nvidia cards).
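The ray-coherency hashing trick mentioned above can be sketched roughly like this (a hypothetical toy version with made-up names; real implementations sort on the GPU and hash origin as well as direction):

```python
# Sketch of ray-coherency sorting: bucket rays by a hash of their
# quantized direction, so rays in the same bucket tend to traverse
# similar BVH nodes and touch nearby memory. Illustrative only.
from collections import defaultdict

def direction_hash(direction, bits=4):
    """Quantize each component of a unit direction into 2**bits cells
    and pack the cell indices into one integer key."""
    scale = (1 << bits) - 1
    key = 0
    for c in direction:
        cell = int((c * 0.5 + 0.5) * scale)  # map [-1, 1] -> [0, scale]
        key = (key << bits) | cell
    return key

def sort_rays_for_coherence(rays):
    """Group rays sharing a direction-hash key, then emit the groups
    back-to-back so the tracer processes coherent batches."""
    buckets = defaultdict(list)
    for ray in rays:
        buckets[direction_hash(ray["dir"])].append(ray)
    ordered = []
    for key in sorted(buckets):
        ordered.extend(buckets[key])
    return ordered

rays = [{"dir": (0.0, 1.0, 0.0)}, {"dir": (1.0, 0.0, 0.0)},
        {"dir": (0.0, 0.99, 0.1)}]
print([r["dir"] for r in sort_rays_for_coherence(rays)])
```

Note the two nearly parallel rays end up adjacent in the output while the perpendicular one is pushed to the end; that adjacency is what recovers some memory coherence.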
 
Here we go again.
Everything above is a half truth.
Yes, because RT uses random sampling it is essentially a random memory access.
No, you cannot solve that.
Again, THERE IS NO SOLUTION.
"Low latency CPU memory controllers" won't help you: a) they are not really "low latency", they just hide it well; b) they have the exact same problem with random access.
But you can employ some clever tricks to make it suck less, like ray-coherency hashing algorithms (in Nvidia cards).

I still want HBM to happen because I'm sure it'll help. We haven't seen Sony and AMD's implementation of RT yet.
 

Aceofspades

Banned


After reading this post, now I also want HBM to happen! He said HBM with a storage-class memory is perfect, and ReRAM is a storage-class memory.


HBM is simply better than GDDR6: smaller, faster, more efficient; the only downside was cost. If Sony managed HBM2 on the PS5, that would be an excellent engineering decision.

I'm skeptical about this whole HBM setup, but I hope we get it in the end.
 

R600

Banned
It's 16GB of GDDR6 on a 256-bit bus at 16-18Gbps clocks + 4GB DDR4 for the OS.

So we're looking at 528-544GB/s on a nice and narrow bus, but at very high chip clocks, therefore a simpler and smaller SoC, although quite a performer.

Add 36-40 active CUs at 2.0GHz and you get a 9-10TF console, with RT+VRS, Zen 2, an SSD, and a lot of RAM for a possible $399 at a relative loss ($60-70).

IMO this would be a megaton, as a $299 Lockhart would look VERY unattractive against it, and Anaconda for $100 more would only get you 20% more TF with all else staying the same. Doesn't sound like such a great deal, knowing the X had 50% more TF than the Pro and yet very few could spot the difference.

Also remember Ryan saying they are looking to have the fastest user transition ever with the PS5 gen? Well, I doubt $500 would make it as fast as they are projecting. Also, at $500, Lockhart does look like a good deal, but not at $399.
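For what it's worth, the arithmetic behind those figures checks out, though the quoted 528-544GB/s actually corresponds to roughly 16.5-17Gbps per pin on a 256-bit bus rather than the full 16-18 range:

```python
# Back-of-envelope math behind the bandwidth and TF figures in the post
# (a sanity check on the claimed numbers, not inside information):

def gddr6_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: pin count * per-pin rate / 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8

def rdna_tflops(cus: int, ghz: float) -> float:
    """FP32 TFLOPS: 64 lanes per CU * 2 ops per clock (FMA)."""
    return cus * 64 * 2 * ghz / 1000

print(gddr6_bandwidth_gbs(256, 16.5))  # 528.0 GB/s
print(gddr6_bandwidth_gbs(256, 17.0))  # 544.0 GB/s
print(rdna_tflops(36, 2.0))            # 9.216 TF
print(rdna_tflops(40, 2.0))            # 10.24 TF
```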
 
Last edited:

DJ12

Member
It's 16GB of GDDR6 on a 256-bit bus at 16-18Gbps clocks + 4GB DDR4 for the OS.

So we're looking at 528-544GB/s on a nice and narrow bus, but at very high chip clocks, therefore a simpler and smaller SoC, although quite a performer.

Add 36-40 active CUs at 2.0GHz and you get a 9-10TF console, with RT+VRS, Zen 2, an SSD, and a lot of RAM for a possible $399 at a relative loss ($60-70).

IMO this would be a megaton, as a $299 Lockhart would look VERY unattractive against it, and Anaconda for $100 more would only get you 20% more TF with all else staying the same. Doesn't sound like such a great deal, knowing the X had 50% more TF than the Pro and yet very few could spot the difference.

Also remember Ryan saying they are looking to have the fastest user transition ever with the PS5 gen? Well, I doubt $500 would make it as fast as they are projecting. Also, at $500, Lockhart does look like a good deal, but not at $399.
Here he goes again, speaking in 'facts' when in reality he knows nothing.

Even if you believe Gonzalo was an iteration of the PS5's APU (which is based solely on the ASSUMPTION of the Twitter guy), it's not indicative of the final PS5 or its memory type.
 

R600

Banned
Here he goes again, speaking in 'facts' when in reality he knows nothing.

Even if you believe Gonzalo was an iteration of the PS5's APU (which is based solely on the ASSUMPTION of the Twitter guy), it's not indicative of the final PS5 or its memory type.
Except it is. I mean, continue on with HBM2 and the cheap deals Sony got from Samsung, but don't get butthurt when someone posts something you don't like.

Oberon, Gonzalo, Flute... Prospero? All having the same GPU ID (Ariel 13E9). You got anything better than the guys who datamine AMD codenames, chips, and benchmarks 24/7?
 
Last edited:

pawel86ck

Banned
Hey there, I am a fresh member of the NeoGAF community, mostly interested in console tech, and I was lurking a looong time before finally registering a couple of days ago.
With that being said, Nvidia had a panel at GTC China where we saw their CEO announcing their slick new Max-Q gaming laptop products. Here is the timestamp I'm talking about.
It is shown on the big screen:



RTX 2080 (powered)
>Next Gen Console

So my question would be:

Is this basically a confirmation of next gen being weaker than an RTX 2080? With Nvidia's connections within the industry, they could probably obtain information about next-gen dev kits. Or, the other way around: if they dare to show this on a big screen, must they not be pretty sure about it?
The Windows Central piece about the next-gen Xbox did point to 12TF RDNA, which would be faster than an ordinary RTX 2080; it would be on par with an RTX 2080 Super, I guess. Richard Leadbetter also assumed 12TF RDNA in his piece.
Jason Schreier mentioned in his podcast that next gen would be on par with an RTX 2080. So, with all those rumors, what should we think about this claim from Nvidia? Would they claim such a thing even if they are not sure about it, or are they straight-out lying?
What do you guys think?
With all those rumors about next gen being similarly powerful to an RTX 2080, I would be uncomfortable with next gen being weaker than a Max-Q laptop...

The GeForce RTX 2080 Max-Q is around a 6TF GPU :messenger_tears_of_joy:. It would be so funny seeing next-gen consoles at 6TF, especially when people now expect 12-14TF.

What's interesting is that the Nvidia chart mentions next-gen "console", not "consoles", so that's probably the 4TF Lockhart, and Nvidia marketing at its worst.
 
Last edited:

DJ12

Member
Except it is. I mean, continue on with HBM2 and the cheap deals Sony got from Samsung, but don't get butthurt when someone posts something you don't like.

Oberon, Gonzalo, Flute... Prospero? All having the same GPU ID (Ariel 13E9). You got anything better than the guys who datamine AMD codenames, chips, and benchmarks 24/7?
The same guy that adds "?" to the end of all his guesswork?

That finds data even you don't agree with, e.g. 2.0GHz.

Do me a favour. Lol

These aren't facts, so stop bleating on like they are.

It's just your opinion, which has been proven wrong several times; I'm guessing you hope no one is going to trawl through this thread for specific examples.
 

R600

Banned
The same guy that adds "?" to the end of all his guesswork?

That finds data even you don't agree with, e.g. 2.0GHz.

Do me a favour. Lol

These aren't facts, so stop bleating on like they are.

It's just your opinion, which has been proven wrong several times; I'm guessing you hope no one is going to trawl through this thread for specific examples.
Jesus you are a brainlet.
 

SmokSmog

Member
Something is on the way. AMD? CES?

Sapphire makes exclusive GPUs for AMD, like EVGA does for Nvidia.
[attached image]
 
What's the point of having an SSD at 4GB/s-10GB/s (some people's estimation) if the data to be read from the SSD is compressed? Can a cost-effective ASIC decompression chip even process that amount of data in real time? If it can't, then what's the point of having that much bandwidth? And if we assume your decompressor can, how fast can it really go, considering the data has to travel back and forth, eat a lot of bandwidth, and then end up in main RAM?

An ultra-fast SSD only makes sense if you can read from it directly. But that entails that your data has to be stored decompressed, and now you have a problem of storage capacity. Therefore, a cache approach makes a lot of sense.

If you're doing a cache approach, then you only really have 4 options due to write endurance:

a. DDR3/4 - expensive
b. SLC NAND - relatively slow
c. 3D XPoint PCM - exclusive to Intel
d. ReRAM - Sony's own baby

It's hard to think that they would choose another technology when they have one they need to promote, provided that it's ready and cost-effective. If not, SLC NAND looks to be the cheapest and most viable.
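The decompressor-as-bottleneck worry in the post above can be put in a tiny throughput model (all numbers are illustrative assumptions, not leaked specs):

```python
# Toy throughput model: if SSD reads are compressed, the decompressor
# becomes a pipeline stage, and the slower stage sets the delivered rate.

def effective_read_gbs(ssd_gbs: float, decompress_gbs: float,
                       compression_ratio: float) -> float:
    """GB/s of *uncompressed* data delivered to RAM. The SSD delivers
    compressed bytes, which expand by `compression_ratio`; the
    decompressor caps the rate of expanded output."""
    expanded_from_ssd = ssd_gbs * compression_ratio
    return min(expanded_from_ssd, decompress_gbs)

# A 5 GB/s SSD with 2:1 compression could feed 10 GB/s of game data,
# but only if the decompression block keeps up:
print(effective_read_gbs(5.0, 10.0, 2.0))  # 10.0 -> decompressor keeps up
print(effective_read_gbs(5.0, 4.0, 2.0))   # 4.0  -> decompressor is the wall
```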
 
AMD had their high-end/server cards run on HBM. Navi 12 is supposed to be a 40CU max product, not the big chip coming later.

That means it might be in production at scale now, considering they're starting to commoditize it.

That means HBM might have a big customer ordering in bulk.
 

xool

Member
Estimation/speculation for the bill of materials for PS5, taken from Reddit:

[image: PS5 bill-of-materials estimate]


Any thoughts?

No complaints.

I did an estimate here https://www.neogaf.com/threads/next...-leaks-thread.1480978/page-165#post-254837746 https://www.neogaf.com/threads/next...-leaks-thread.1480978/page-201#post-255135835

Without ReRAM I got to $500 with a top-end $165 APU. Other estimates are in a very similar ballpark (slightly different grouping).

A long time ago we did some APU cost estimates [here], [revised]. We really don't know APU costs, though, or 7nm yields - that's the big mystery. Everything else is fairly well known.

(I don't think ReRAM next year, though.)
 
Last edited:

SmokSmog

Member
No complaints.

I did an estimate here https://www.neogaf.com/threads/next...-leaks-thread.1480978/page-165#post-254837746

Without ReRAM I got to $500 with a top-end $165 APU. Other estimates are in a very similar ballpark (slightly different grouping).

A long time ago we did some APU cost estimates [here], [revised]. We really don't know APU costs, though, or 7nm yields - that's the big mystery. Everything else is fairly well known.

(I don't think ReRAM next year, though.)
7nm TSMC defect rate: 0.09 defects/cm² at this moment.

A 400mm² APU gives almost 100 fully functional dies per wafer.
Wafer cost $7.5-10k?


Samsung: 1.4 defects/cm² (not viable for production).
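Those numbers can be sanity-checked with the standard Poisson defect-yield model, assuming a 300mm wafer and a 400mm² die (the edge-loss formula below is the common textbook approximation):

```python
# Checking the "almost 100 good dies" claim:
# gross dies ~ pi*(d/2)^2/S - pi*d/sqrt(2S), yield = exp(-D0 * A).
import math

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough gross die count per wafer, with an edge-loss correction."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """Fraction of defect-free dies: exp(-D0 * A), area in cm^2."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

gross = gross_dies(300, 400)
good = gross * poisson_yield(0.09, 400)
print(gross, round(good))  # ~143 gross, ~100 good at TSMC's 0.09/cm^2
print(round(gross * poisson_yield(1.4, 400)))  # Samsung's 1.4/cm^2 rate
```

At TSMC's quoted defect density the model lands on roughly 100 good 400mm² dies per wafer, matching the post; at 1.4 defects/cm² it yields barely a single good die, which is why that rate is called non-viable.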

 
Last edited:

xool

Member
7nm TSMC defect rate: 0.09 defects/cm² at this moment.

A 400mm² APU gives almost 100 fully functional dies per wafer.
Wafer cost $7.5-10k?
The original assumption was a ~$10,000 cost per 7nm 300mm wafer (here, and here).

So $10,000 / 100 is $100 per 400mm² die, better than the $165 we had before.
[edit typos stuff]
 
Last edited:

R600

Banned
The original assumption was a ~$10,000 cost per 7nm wafer (here, and here).

So $10,000 / 100 is $100 per 400mm² die, better than the $165 we had before.
Yep, though they will also be paying licensing to AMD. I don't know how much it is per chip, but I guess $15-20 for Zen 2 + Navi is not out of the question. Maybe more...
 

Haxxor777

Neo Member
Here he goes again, speaking in 'facts' when in reality he knows nothing.

Even if you believe Gonzalo was an iteration of the PS5's APU (which is based solely on the ASSUMPTION of the Twitter guy), it's not indicative of the final PS5 or its memory type.
Well, actually, the Gonzalo thing is the most probable/believable as of right now. Even DF did a whole vid on it. So I wouldn't be so negative about it if I were you.
 

quest

Not Banned from OT
Yep, though they will also be paying licensing to AMD. I don't know how much it is per chip, but I guess $15-20 for Zen 2 + Navi is not out of the question. Maybe more...
Do we know what it was last generation? I assume it would be much more this time, since Zen 2 and, so far, Navi actually look good. No discount for bottom-of-the-barrel parts like last time. The Zen 2 IP is worth something.
 

R600

Banned
Do we know what it was last generation? I assume it would be much more this time, since Zen 2 and, so far, Navi actually look good. No discount for bottom-of-the-barrel parts like last time. The Zen 2 IP is worth something.
Yeah, that is why I said it could be higher than that. AMD is now in a good place financially, with a great portfolio and competitive products. I doubt they will be taken to the cleaners on this one...
 

ksdixon

Member
What do all these terms/methods mean for the end gamer?

No loading times? Less time for fast travel? The OS/menus being more fluid and not lagging? Faster installing of games to storage?
 

bitbydeath

Member
Well, actually, the Gonzalo thing is the most probable/believable as of right now. Even DF did a whole vid on it. So I wouldn't be so negative about it if I were you.

Even if it were true, we haven't seen what's final, and Prospero, also part of the same rumour, is said to be the biggest compute leap ever made.
 
What's the point of having an SSD at 4GB/s-10GB/s (some people's estimation) if the data to be read from the SSD is compressed? Can a cost-effective ASIC decompression chip even process that amount of data in real time? If it can't, then what's the point of having that much bandwidth? And if we assume your decompressor can, how fast can it really go, considering the data has to travel back and forth, eat a lot of bandwidth, and then end up in main RAM?

An ultra-fast SSD only makes sense if you can read from it directly. But that entails that your data has to be stored decompressed, and now you have a problem of storage capacity. Therefore, a cache approach makes a lot of sense.

If you're doing a cache approach, then you only really have 4 options due to write endurance:

a. DDR3/4 - expensive
b. SLC NAND - relatively slow
c. 3D XPoint PCM - exclusive to Intel
d. ReRAM - Sony's own baby

It's hard to think that they would choose another technology when they have one they need to promote, provided that it's ready and cost-effective. If not, SLC NAND looks to be the cheapest and most viable.

3D XPoint is not exclusive to Intel; Micron also produces it. And Sony isn't the only company working on ReRAM. In fact, no one has yet produced a ReRAM IC at even 8Gb (1GB), or even 512MB last I checked, that has been commercialized at mass scale, and we're already heading into 2020.

Also, SLC NAND is not a good choice, in that it lacks the bit/byte-addressable write performance of the other three; data still has to be erased in blocks, like the lower-quality NANDs. NOR flash has the very distinct advantage of byte-addressable reads, but writing to NOR is incredibly slow.

Out of those four choices, DDR3/4 and 3D XPoint PCM are the most likely if either company wants an in-between cache offering DRAM-style read and write performance and alterability. As for the SSDs holding compressed or uncompressed data, it will ultimately come down to how developers code their games and how much bloat they have in terms of assets. Games with smart design will find ways to keep asset data sizes relatively small and use matrix transformations on the pixels of assets (stored in arrays) to change values in real time. GPUs can handle that, since they excel at massive parallelism in data calculation, much like DSPs.

That way, more of the game's data can be placed uncompressed on the SSD, so it doesn't need to be constantly decompressed on access by whatever method the system uses.
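The "matrix transformation on stored assets" idea above, in a minimal sketch (hypothetical names; just the math of deriving variants from one stored copy, not any console's actual pipeline):

```python
# Store one copy of an asset's vertices and derive variants at runtime
# with a 4x4 homogeneous transform, instead of storing each variant.

def transform_vertex(vertex, matrix):
    """Apply a 4x4 row-major homogeneous transform to an (x, y, z) vertex."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    out = [sum(matrix[row][col] * v[col] for col in range(4))
           for row in range(4)]
    return tuple(out[:3])

def transform_asset(vertices, matrix):
    """Transform every vertex of a stored asset (what a GPU would do in
    parallel across thousands of vertices at once)."""
    return [transform_vertex(v, matrix) for v in vertices]

# Uniform 2x scale: every coordinate doubles.
SCALE_2X = [[2.0, 0.0, 0.0, 0.0],
            [0.0, 2.0, 0.0, 0.0],
            [0.0, 0.0, 2.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]
quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
print(transform_asset(quad, SCALE_2X))
```

One stored quad, any number of scaled/rotated/translated instances derived on the fly; that is the trade the post describes between on-disk size and runtime compute.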
 