Next-Gen PS5 & XSX |OT| Console tEch threaD


Nice job with the screencap; looks like they pulled the article. That's probably ~220W average gaming consumption for the 5700 XT and ~175W for the 5700 Pro. We'll have to wait for TPU and Guru3d for nuanced average and peak measurements, but that's essentially their listed TBP. A tuned and power-capped 5700 Pro with +8GB GDDR6 (16W), a downclocked Zen 2 8c/16t CPU (30-40W), and everything else would still be over 200W, while the PS4 Pro was ~155W and the X1X ~175W total. :pie_thinking:
Note that the full system power of a 570 is higher than the X, even though the 570 is not as good as what the X is packing.

Also, a 9900K with boost is considerably more power hungry than a 3.2GHz Zen 2, I would say by at least ~50W. Add to that the system memory that is already there, and consoles with a ~8.5TF GPU would be able to fit inside a 200W limit.
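To make that arithmetic concrete, here is a minimal sketch of the power budget being described. Every figure is either a rough estimate from the posts above or a plain guess, not a measurement.

```python
# Back-of-the-envelope console power budget (all numbers are rough
# estimates from the discussion above, or outright guesses).
components_w = {
    "tuned_5700_pro_gpu": 150,  # power-capped 5700 Pro class GPU (assumed)
    "extra_8gb_gddr6":     16,  # +8GB GDDR6, per the post above
    "zen2_8c16t_cpu":      35,  # downclocked Zen 2, middle of the 30-40W range
    "everything_else":     25,  # SSD, I/O, fans, PSU losses (pure guess)
}

total = sum(components_w.values())
print(f"Estimated total: {total}W")  # ~226W with these guesses
print("Inside the 200W limit" if total <= 200 else "Over the 200W limit")
```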
 
Funny if true.

Gemüsepizza said:
Saw this string today in Unity.exe (2019.2b/3a only):

[screenshot of the string; filename: xboxtwov1kvk.jpg]



Don't know if this is actually something related to next-gen, but if it is, that's probably just a placeholder name.
 
random words
what does any of this have to do with Samsung 7nm euv and nvidia?
you don't even know what HYBRID RT is, you clearly don't pay attention to my posts; AGAIN, AMD is using SHADERS for RT.
Neither do you, apparently.
if you want backward compat it's easier with RDNA 1
That's just your uneducated speculation.
Saying that RDNA 1 is flexible and can handle Hybrid RT if devs want it
Further speculation on your part, and wrong at that.
She's saying RDNA is scalable and can adopt new features such as HW RT; next-gen RDNA (RDNA2) is the next step for that.
A much more carefully thought-out report on that May 21st strategy meeting, from Yahoo Finance:
Sony starts talking PS5 strategy
Some key points
We plan to do that by further improving the computational power of the console, measured in TFLOPS

Yep, Sony will mention TFs.
11TF RDNA2 minimum, likely pushing 12TF+ to claim over 3x the computational power of the PS4 Pro.
 
So Gonzalo was a pachinko machine this whole time? :messenger_tears_of_joy:
It's just a bad translation; at the end it's talking about both the PS5 and the Xbox 2.
Just a random Chinese article speculating on rumors.
IF either of the next consoles is on EUV like many here hope, then I would contend the decision would have been made back in 2015. I really doubt they chose EUV back then, as it would've been a huge risk, but I won't complain if they somehow pull it off.
Cerny claimed 6 years of development for the PS4 (2 planning/choosing the chip and 4 engineering).
He said the chip has to be mostly locked 2 years into engineering development (2018); that gives plenty of time to include it in the plans.
 
Yep, Sony will mention TFs.
11TF RDNA2 minimum, likely pushing 12TF+ to claim over 3x the computational power of the PS4 Pro.
They could claim that 9 PS5 TFLOPS = 12 PS4-style GCN TFLOPS; they can spin it in marketing terms. Just a thought.
 
They could claim that 9 PS5 TFLOPS = 12 PS4-style GCN TFLOPS; they can spin it in marketing terms. Just a thought.
No. They can't.

There is no difference between flops at all... GCN FLOPS are identical to Intel FLOPS, Navi FLOPS, nVidia FLOPS.

Navi indeed has less compute power (FLOPS) than Vega, and that is why Vega 7nm (Radeon VII) is still the top card from AMD and GCN will continue being the architecture used in prosumer, HPC, etc. cards.
 
Really, people are saying that since Navi offers like 1.25x more performance per watt than GCN, 9 = 12.5 or something like that.
Perf. per watt has nothing to do with FLOPS.

Navi has better IPC than Vega, so that translates to better perf. per watt.

Vega has way more compute power than Navi... aka FLOPS.
 
FLOPS is a measurable metric for any silicon processor... it just means how many single-precision floating-point operations a processor can do per second... the calculation, the metric, is the same for all processors in the world.

Vega at ~13TFs means it can do 13 trillion FP32 operations per second.
Navi at ~9TFs means it can do only 9 trillion FP32 operations per second.

Vega is strongly compute focused while Navi is not.
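As a worked example of that metric, here is a minimal sketch of the standard peak-FLOPS calculation: each shader ALU retires one fused multiply-add (2 FP32 ops) per clock. The shader counts and clocks below are the commonly cited figures for these cards, not numbers from this thread.

```python
def peak_fp32_tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: shaders x 2 ops (one FMA) per clock."""
    return shaders * 2 * clock_ghz / 1000.0

# Commonly cited (approximate) figures:
print(peak_fp32_tflops(3840, 1.75))   # Radeon VII (Vega 20): ~13.4 TF
print(peak_fp32_tflops(2560, 1.755))  # 5700 XT (Navi 10):    ~9.0 TF
```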
 
I thought the multiplier was at least 1.4? Or do you think AMD is being too optimistic?
I went with less for two reasons:
  1. It's based on cherry-picked benchmarks.
  2. Vega is bottlenecked at the frontend and shows less performance than its raw numbers suggest, compared to Polaris.
Interesting quote. SSDs usually don't impact graphics rendering speed at all, other than reducing pop-in just by being so much faster than HDDs. But to mention it specifically as an assist to the GPU...
It refers to the Spider-Man demo, where the faster traversal was explained by the increase in rendering speed.
Navi XT (with more CUs and lower clocks) is the absolute max they can go.
Not really.
On 7nm DUV:
  • 54CUs @ 1400MHz = 9.67TF (lowball)
  • 54CUs @ 1500MHz = 10.36TF (very likely)
  • 54CUs @ 1550MHz = 10.7TF
  • 54CUs @ 1592MHz = 11TF (best-case scenario)
On 7nm EUV:
  • 60CUs @ 1500MHz = 11.5TF
  • 60CUs @ 1600MHz = 12.28TF
  • 60CUs @ 1693MHz = 13TF
EUV will drop power use by 20%. That should keep them just under 200W with a GPU close to the 5700 XT.
That and the density increase will allow for a more power-efficient, wider-and-slower design.
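Those figures all follow from the standard 64-ALUs-per-CU layout of GCN/RDNA; a quick sketch that reproduces the list above (minor rounding aside):

```python
# TF = CUs x 64 ALUs x 2 FP32 ops (one FMA) per clock x clock.
def tflops(cus: int, clock_mhz: int) -> float:
    return cus * 64 * 2 * clock_mhz / 1_000_000

for cus, mhz in [(54, 1400), (54, 1500), (54, 1550), (54, 1592),
                 (60, 1500), (60, 1600), (60, 1693)]:
    print(f"{cus} CUs @ {mhz} MHz = {tflops(cus, mhz):.2f} TF")
```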
 
You really notice how shitty the Xbox One CPU is when you get 5 fps in this game.

And just now the game went down to 1 fps and moved at a snail's pace just because I built something a bit elaborate. Just feels ghetto af.
This new gen can't come fast enough.
 
I wonder how Dwarf Fortress (without any performance-improvement hacks) would run on the 8th-gen consoles' CPUs, lol. Simplest-looking game in the world, but it's all CPU.
 
That is not true anymore... PC APIs today are much closer to console APIs, so the delta is not 2x anymore... it is way lower than that.

If I had to guess, I believe a console API can give you a performance boost of 20-30% over PC APIs, and that is with first-party exclusives taking full advantage of the console API.
Multiplatform games will perform close enough on similar PC hardware.

You are one of my favourite posters, but do we have a more authoritative source for this claim than 'neogaf poster Ethomaz'?
 
Dunno if an API can give even more low-level access than the current Vulkan/DirectX 12/console ones, but surprise me.
 
The advantage consoles have over PC when it comes to low-level APIs is a fixed spec target they can optimize their code around (strengths and weaknesses) and, for the more experienced, HW-specific tricks and hacks. This level of "to the metal" optimization won't be done on PC with its multiple configurations, and it's mostly seen in first-party games or from ambitious 3rd parties (Rockstar).
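As a toy illustration of that point (plain Python, nothing console-specific, every number made up): with a fixed spec, tuning constants are baked in once instead of probed and branched on at runtime.

```python
# Toy example: fixed-target tuning vs multi-config tuning.
def sum_squares_blocked(xs, block):
    """Process data in blocks; the best block size depends on the hardware."""
    total = 0
    for i in range(0, len(xs), block):
        total += sum(x * x for x in xs[i:i + block])
    return total

# PC-style: probe whatever machine we landed on, pick a safe heuristic.
def run_on_pc(xs, cache_kb):
    block = max(64, cache_kb // 4)  # generic guess for unknown hardware
    return sum_squares_blocked(xs, block)

# Console-style: the hardware never changes, so the constant is tuned
# once (strengths and weaknesses known) and the probing disappears.
CONSOLE_BLOCK = 512
def run_on_console(xs):
    return sum_squares_blocked(xs, CONSOLE_BLOCK)

data = list(range(1000))
assert run_on_pc(data, cache_kb=1024) == run_on_console(data)
```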

Optimizing the PS4 version of The Crew, once the team did manage to get the code compiling, required some serious work in deciding what data would be the best fit for each area of memory.
"One issue we had was that we had some of our shaders allocated in Garlic but the constant writing code actually had to read something from the shaders to understand what it was meant to be writing - and because that was in Garlic memory, that was a very slow read because it's not going through the CPU caches. That was one issue we had to sort out early on, making sure that everything is split into the correct memory regions otherwise that can really slow you down."

A more crucial issue is that, while the PS4 toolchain is designed to be familiar to those working on PC, the new Sony hardware doesn't use the DirectX API, so Sony has supplied two of their own.
"The graphics APIs are brand new - they don't have any legacy baggage, so they're quite clean, well thought-out and match the hardware really well," says Reflections' expert programmer Simon O'Connor.
"At the lowest level there's an API called GNM. That gives you nearly full control of the GPU. It gives you a lot of potential power and flexibility on how you program things. Driving the GPU at that level means more work."
"Most people start with the GNMX API which wraps around GNM and manages the more esoteric GPU details in a way that's a lot more familiar if you're used to platforms like D3D11. We started with the high-level one but eventually we moved to the low-level API because it suits our uses a little better," says O'Connor, explaining that while GNMX is a lot simpler to work with, it removes much of the custom access to the PS4 GPU, and also incurs a significant CPU hit.
A lot of work was put into the move to the lower-level GNM, and in the process the tech team found out just how much work DirectX does in the background in terms of memory allocation and resource management. Moving to GNM meant that the developers had to take on the burden there themselves, as O'Connor explains:
"But the PS4 is a console, not a PC, so a lot of things that are done for you by D3D on PC you have to do yourself. It means there's more DIY to do, but it gives you a hell of a lot more control over what you can do with the system."
Simon O'Connor did point out that Reflections considers its work on The Crew to be much more than a simple, feature-complete port. This is an opportunity to explore what the new hardware is capable of, and there's a sense that the PlayStation 4's graphics hardware is not being fully exploited.
"The PS4's GPU is very programmable. There's a lot of power in there that we're just not using yet. So what we want to do are some PS4-specific things for our rendering but within reason - it's a cross-platform game so we can't do too much that's PS4-specific," he reveals.
"There are two things we want to look into: asynchronous compute where we can actually run compute jobs in parallel... We [also] have low-level access to the fragment-processing hardware which allows us to do some quite interesting things with anti-aliasing and a few other effects."
 
Early Navi results seem kind of all over the place. Sometimes the XT basically matches a VII and beats the 2070S; other times it can't even match a 2060 (non-S). Interestingly, Wolfenstein used to love AMD hardware but contributes a fair bit to its drubbing here.

If this is another case of launch-day AMD drivers, they really need to get that together. People can "FineWine" meme all they want, but if they leave a lot of performance on the table on launch day, that's what most interested people will be reading and basing purchases on, not what drivers mature into over 6 months.

If this does rapidly improve, it does seem RDNA has some potential under the hood. In a console context this may all be well and good (tailored OS, API, driver, and not out for another year and a half), but I do hope for AMD's sake they didn't underestimate the importance of day-1 performance again.
 
Yes... real benchmarks.

I mean something like that Carmack quote to trot out occasionally.

So a PC with a 2012 Jag + 7850 (or whatever is in the PS4) would be running games with only a 20-30% penalty in 2019? If so, PC port devs are putting in work.
 
The advantage consoles have over PC when it comes to low-level APIs is a fixed spec target they can optimize their code around (strengths and weaknesses) and, for the more experienced, HW-specific tricks and hacks. This level of "to the metal" optimization won't be done on PC with its multiple configurations, and it's mostly seen in first-party games or from ambitious 3rd parties (Rockstar).

Low-level APIs are strong but difficult to work with. I know a single fixed configuration puts optimization in another world, but that's the weakness of using PC. Heaven and hell, I guess. Money means less effort gets put into that.
Strange how DirectX 12 was a bit of a fail.
It's kinda sad AMD didn't continue with Mantle. It was a good start at least.
 
The lower-level the API, the harder it is to work with, but also the more performance it lets you wring out of the same hardware.

Mantle became Vulkan, so it did not die out completely.
 
Strange how DirectX 12 was a bit of a fail.

Any PC API would be pretty much a failure there, for the obvious reason that it is one abstraction over multiple GPUs. Not to mention that coding for unified memory is totally different from dual pools, and PC APIs must hide that.
Other things that current high-level APIs hide: memory limits (virtual memory and allocations), buffer formats, shader formats, texture formats (partially), the synchronization pipeline, etc.
 
I have a feeling they will dive deep into numbers; every single component shows a generational leap. Sony/MS are not Nintendo.

Every single number... except TF... which will maybe... be double the Xbox One X. Oh, and RAM, which may be double the X. Oh, and hard disk space... oh, and CPU frequency...

Um... actually I have no idea what you're talking about.
 
For Sony, 11TF (the max possible on DUV) would be a big multiplier over the Pro, especially if they spin in an added 1.25-1.3x multiplier.
For MS, 11TF would still be a good number, but they'd be more comfortable pushing 12TF (2x Scorpio) plus the 1.25-1.3x multiplier spin.
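For scale, a quick sketch of how those multipliers stack up, using the usual 4.2 TF (PS4 Pro) and 6 TF (Scorpio/X1X) figures and the speculative 1.25-1.3x "effective TF" factor discussed above:

```python
# Speculative marketing math: raw TF times an assumed RDNA-vs-GCN factor.
PRO_TF, SCORPIO_TF = 4.2, 6.0

for tf in (11.0, 12.0):
    for mult in (1.25, 1.30):
        eff = tf * mult
        print(f"{tf:.0f} TF x {mult} = {eff:.1f} effective TF "
              f"-> {eff / PRO_TF:.1f}x Pro, {eff / SCORPIO_TF:.1f}x X1X")
```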
 
Zen 3000 with 8 cores / 16 threads @ 3.2 GHz
Radeon Navi, 72 CUs (36x2), 4608 shader units @ 1550 MHz
14.2 TFLOPS
16 GB / 24 GB of GDDR6
Hardware ray tracing
2TB SSD




Follow the source. The "insider" is never named. May as well be a pastebin, imo. The specs are on the very optimistic end even for 499, though possible.
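A quick sanity check of the leaked GPU line with the same peak-FLOPS formula used earlier in the thread:

```python
# 72 CUs x 64 ALUs = 4608 shaders (as the leak states), 2 FP32 ops per clock.
shaders = 72 * 64
tf = shaders * 2 * 1550e6 / 1e12
print(f"{tf:.2f} TF")  # ~14.28 TF, consistent with the leaked 14.2 figure
```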


 
Just make it 599 and deliver a beast, Sony. If people can't afford it, they can easily wait or buy the PS4 Slim or Pro.
It's gonna fly off the shelves anyway, be it 399 (forget it!), 499 (my prediction), or 599 (I would still buy it). Gamers on PC pay way over 300 $/€ for mid-range cards, and that's just the GPU alone. I think Sony can ask 599 for a console without getting as much flak as they did back in 2006.
 
Beast confirmed. It has to be EUV, right SonGoku?
Beast? More like KAIJU Category V.
Assuming the leak is legit, yeah, EUV is the only way to fit 72 CUs (enabled) while keeping die size reasonable, ~390-400mm2.

For those worried a $500 launch price might affect sales *cough* ArabianPrynce *cough*:
The reason the PS3 did so badly is that it took 3 years to match the 360's launch price; it's impressive how much brand recognition carried the PS3 through its early years.
 
I hope you're on the money. Some people on ResetEra think that
an 8-core Zen 2,
an SSD,
and 9 TFLOPS are worth 499, lmao.
 
This is just a leak, so huge grain of salt as always.
But for $500 I expect consoles to push a minimum of 10TF (closer to 11) on 7nm DUV and 11-13TF on 7nm EUV.
14TF+ would enter MEGATON status.
 
Dual 36 CUs???? How can that fit when the chip is 250mm2 or something like that? God, I hope they use 7nm+.
More like 80 CUs total with 8 disabled:
493.41mm2 on 7nm DUV
~390-400mm2 on 7nm EUV

I want to believe, but at the same time I'm very skeptical; the leak reads like someone who read the GAF/Ree next-gen threads.
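For what it's worth, the EUV estimate is consistent with simply applying a ~20% area reduction to the DUV number; the 20% factor is an assumption, in line with the 20% power figure quoted earlier.

```python
# Scale the estimated 80 CU die from 7nm DUV to 7nm EUV, assuming the
# extra EUV density shrinks the die by roughly 20% (assumption).
duv_area_mm2 = 493.41
euv_area_mm2 = duv_area_mm2 * 0.80
print(f"~{euv_area_mm2:.0f}mm2 on 7nm EUV")  # ~395mm2, i.e. the ~390-400 range
```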
 