> New leak with the price of the 3080 and 3070 putting a slightly better than 2070 card at $500 and a slightly (but maybe more slightly) better than 2060 card at $400. That can't bode well for the prices of these consoles... if accurate, those prices are nearly $200 more than the initial leaks...
Retail pricing of discrete cards is irrelevant to console APUs.
Arcturus
> Navi.
Arcturus
> PlayStation 5 prototype spotted.
> Note: Retail version will likely be smaller
Hah!
> Retail pricing of discrete card is irrelevant to Console APUs
What matters is die size and TDP.
Price on the GPU consoles use depends on die size entirely.

Explain
> Price on the GPU consoles use depends on die size entirely
TDP determines what can be fitted into a console (180W max)
> Arcturus
Navi.
> Can someone translate this to layman's terms please? Does it confirm we don't need super high TFs for better performance? So AMD doesn't need 13TF to perform as well as Nvidia's 9TF, for example? Am I understanding this correctly?
Means that AMD cards are getting more efficient.
> Means that AMD cards are getting more efficient.
Can you explain the efficiency part, if you don't mind?
> Can you explain the efficiency part, if you don't mind?
Basically, the more SEs you have in charge of the CUs (or the fewer CUs per SE), the better utilization of those CUs you can achieve, which translates into fewer idle resources and the card coming closer to its peak performance in real game scenarios.

Another thing is the change from a single cluster of 4x16 SPs per CU (with a 64-wave HW scheduler, and only ¼ of the SPs utilized per cycle) to 2x32 SIMDs per CU. Thus each CU gets better hardware scheduling (a 32-wave HW scheduler) to prevent under-utilization, and the SIMD pipelines are allowed to run simultaneously.
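To make the scheduling point concrete, here is a toy back-of-the-envelope model (my own sketch, not anything from the leak): it only counts issue slots, ignores memory latency and multiple waves in flight, and compares how long one wavefront takes to get through the same instruction stream on a 4x SIMD16 GCN-style CU versus the new 2x SIMD32 layout.

```python
# Toy model: cycles for ONE wavefront to issue n instructions.
# On GCN, a 64-wide wave runs on a 16-lane SIMD, so each instruction
# occupies that SIMD for 64/16 = 4 cycles; a 32-wide wave on a 32-lane
# SIMD issues in a single cycle.
def cycles_for_one_wave(n_instructions, wave_size, simd_width):
    cycles_per_instruction = wave_size // simd_width
    return n_instructions * cycles_per_instruction

gcn_cycles = cycles_for_one_wave(100, wave_size=64, simd_width=16)
new_cycles = cycles_for_one_wave(100, wave_size=32, simd_width=32)
print(gcn_cycles, new_cycles)  # 400 vs 100 cycles: 4x lower latency per wave
```

With enough waves in flight both designs can fill every lane, which is why peak TFLOPs don't change; the win is that far fewer waves (and less clever code) are needed to keep the SIMDs busy.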
> Yeah, I know that. But on a basic, fundamental economic level, wouldn't the prices of GPUs going up drive up the prices of other GPUs with similar technology? Just sayin'
There could be other factors driving prices up, like poor initial yields; and limited stock means it's gonna sell out anyway.
$500+ is looking more likely today than it was yesterday, can you really deny that?
Damn! Wow. If true, wouldn't this place Microsoft's design engineers into the realm of time travelers? I mean, how could they otherwise have possibly known about these theoretical Navi hurdles before even AMD? Then to wisely choose to brute-force Vega into submission, making it more powerful than AMD could even have hoped for. Well, shit, this is incredible if true. I think I will believe this one! I also bet they are actually the ones that nabbed that CELL2 secret sauce too. Yep. It's all becoming too clear now.

I hate feeding into this garbage but, I'll bite.
Third party developer here with contacts at AMD from the old days. Just got our - Pastebin.com
> Point is, it won't make any big difference besides pop-in; you'd see the same assets in a single frame. We need larger memory sizes for next gen so we can have more assets in a frame; memory size is always an issue in consoles. I swear to you, if the PS5 has 8 or 12 GB then there is no point making it.
I mean, that's the point.
So at the same clock (1.3 GHz) we can get over 10 billion triangles. If that random-ass 1.8 GHz is true, we're looking at 14 billion.

Here is a very detailed post I found there that helps understand the implications of this change:
This is the specific layout of a graphics card... a block diagram, you might say. All chips have a layout and design, with different parts organized to do different tasks. (Think of how you would organize a kitchen with 10 people doing different things to make a meal) While Zen is AMD's CPU design, Navi is their graphics card. This post has to do with AMD's Navi Graphics card (it's an upcoming card, expected to launch in Quarter 3 of this year).
For the last 6 or 7 years, AMD was really close to bankruptcy. They had decent graphics cards, but they'd made some critical mistakes in the CPU market, and Intel was hammering them to death. So AMD had this one old graphics card design called GCN that was really powerful when it came out (early GCN was better known to the public as the Radeon HD 7970. It was faster than ANYTHING Nvidia had... kind of like the 2080 Ti of its time).

But as the company ran out of money and piled up debt, they had less and less money to make better designs. So the Tahiti design (7970) was changed only slightly when it became Hawaii and Tonga; AMD doubled the number of triangles it could draw. (Remember, everything in graphics is made of triangles.) But after that, Polaris and Vega were both stuck at 4 triangles per clock.

A clock is each "cycle" -- 1 Hertz is 1 cycle per second, 1 Megahertz is 1 million cycles per second, 1 GHz is 1 billion cycles per second. Because the AMD GPUs were stuck at 4 triangles per clock, and designs like Polaris ran around 1.3GHz, AMD was stuck with only 5.2 billion triangles per second of performance. Vega raised the clock speed, but NOT the number of triangles the card could draw per clock. Meanwhile Nvidia cards manage 11 triangles per cycle... THIS is why the 1060 and 1080 pulled SO far ahead of Polaris and Vega.

This is one of the first AMD graphics designs in years that shows a lot of promise, because AMD has reorganized it and given us 8 geometry units, meaning it can now draw 8 triangles per clock. This "widens" the pipeline, letting the card do more triangle work per second.
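The triangle-rate arithmetic above is simple enough to check for yourself (numbers are the ones quoted in the thread; treating "triangles per clock" as one per geometry unit per cycle):

```python
# Peak triangle rate = triangles drawn per clock x clock cycles per second.
def triangles_per_second(tris_per_clock, clock_ghz):
    return tris_per_clock * clock_ghz * 1e9

polaris = triangles_per_second(4, 1.3)  # ~5.2 billion/s, as stated above
navi    = triangles_per_second(8, 1.3)  # ~10.4 billion/s at the same clock
print(f"{polaris:.2e} {navi:.2e}")
```

At the rumoured 1.8 GHz, 8 per clock would give ~14.4 billion/s, matching the "14 billion" figure quoted earlier in the thread.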
Traditional GCN design is split into 4 blocks, where data is put into the pipe at the top, and travels through a shader and geometry engine, and then into a Compute Unit. Each Compute Unit has 64sp's, but they were split into 4 rows of 16.
The scheduler would put data in one of those pipes, but while that was happening, the other pipes would be waiting for their next instruction. So the big problems with GCN were:

- It was data starved. There was never enough data to keep all the SPs running at once (only 16 of 64 were running at any given time), unless the coders were very clever and specifically wrote their game / code / application to use all the pipes.

- Because the frontend of geometry and shaders was only 4-wide, there was a bottleneck. The card could not do shading or draw triangles quickly enough to keep everything fed.

This new design WIDENS the front, and then makes 2 rows of 32 that can run simultaneously, with multiple datasets at once, all following the same instructions. So this new design can draw triangles faster, do more shading, and make more frames quickly, leading to better performance.
Basically, this new design which leaked is a huge change to the AMD Graphics cards of the last few years, meaning that performance is expected to be very good.
I hope this explanation helps you get a grasp of what's going on. I did simplify a few things, and I may not have explained everything well. Please ask questions and I'll try to answer them.
welcome to /r/hardware. You'll learn a LOT here. (I know I still do)
Apart from that, more SEs means a doubling of geometry & rasterization performance vs. prior GCN designs.
Turing is more efficient than Pascal
1080TI: 11.34TF
RTX 2080: 10TF
Both cards have roughly the same performance in old games, with the RTX having the advantage of newer tech (RT, async compute and VRS) potentially giving it a big edge in future games.
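One crude way to express that efficiency gap (illustrative only: the roughly equal game performance is the claim above, and the TF figures are the ones quoted):

```python
# If two cards hit roughly the same frame rates, relative performance per
# theoretical TFLOP shows which architecture turns FLOPs into frames better.
tflops = {"GTX 1080 Ti (Pascal)": 11.34, "RTX 2080 (Turing)": 10.0}
same_relative_perf = 1.0  # assumption from the post: ~equal in old games
for card, tf in tflops.items():
    print(f"{card}: {same_relative_perf / tf:.3f} relative perf per TFLOP")
```

Under that assumption, Turing is getting roughly 13% more done per theoretical TFLOP (11.34 / 10 ≈ 1.13), which is the same kind of per-FLOP efficiency argument being made for Navi vs. older GCN.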
I can see $500 happening with a small loss or breaking even.
DemonCleaner, please use the regular text colour for the forum. I have the dark theme on currently, and your text is grey so I can't even read it. Thanks.
> didn't change that. i think my browser is screwing with me. edited the last post. does it work now?
Yep, it's fixed. Thank you.
> Just watching Love, Death and Robots - The Secret War. I wonder if this is the sort of graphics we will be getting next-gen.
There are a few episodes in there that made me think "now why do we have actors again?" Uncanny valley and all that, but CGI is reaching a threshold where it manages to escape it.
Again, PC fanboys have said there will be no arch changes in Navi because it's still GCN, and that 12.9TF in the PS5 will only translate into GTX 1070 performance territory. And don't even dream about 24 GB RAM and HW RT, because it's a cheap console, so people will be lucky to see even 16 GB in the PS5; and since software RT in Minecraft can run at 30fps 720p on a GTX 1070, the PS5 supposedly won't need a HW RT solution for 1440p-4K resolutions.
> if the APU was ~320mm² they should be able to sell at $500 with a small profit
Launch PS4 was 348mm² and the Xbone over 350mm².
BTW he is saying 8 Shader Engines with 5 CUs each... that means 40 CUs total (Polaris' number of units).
They changed from SIMD-16 to SIMD-32... that means the limit of 16CUs per SE is now 8 CUs per SE.
Still the same 64CUs max.
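The CU arithmetic in the last few posts, spelled out (just a sanity check on the numbers above, not new information):

```python
# Leaked part: 8 shader engines x 5 CUs = 40 CUs (a Polaris-class count).
assert 8 * 5 == 40
# New per-SE ceiling of 8 CUs still tops out at the familiar 64 CUs...
assert 8 * 8 == 64
# ...the same maximum the old 4-SE x 16-CU GCN layout reached.
assert 4 * 16 == 64
print("CU counts check out")
```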
Im confused, is the limit 40CUs or 64CUs?
> Im confused, is the limit 40CUs or 64CUs?
64 CUs... 8 SEs with 8 CUs each the new way; 4 SEs with 16 CUs each the old way.
and if there are 5 CUs for every SE, how do we get to 64? does the last SE have 4 CUs???
> Im confused, is the limit 40CUs or 64CUs?
I think he/she was just quoting a specific part - so it was a 40CU chip (5 is not the new limit).
> I think he/she was just quoting a specific part - so it was a 40CU chip (5 is not the new limit)
Yep, 8 CUs per SE is the new limit.
> so 5 CUs is not the limit for every SE... interesting, but how sure are we about this?
The arch is not changing... they are changing how the scheduler/waves work internally.
> so 5 CUs is not the limit for every SE... interesting, but how sure are we about this? the leak said 5 CUs for every SE, no mention of more.
I can't find the original.
The Navi improvements are actually extremely interesting for us because it means that even if the PS5 has a somewhat "low" number of TFLOPs, which is mostly dictated by our skewed perception, it could still be a huge improvement. An 8TF Navi PS5 could be better than, let's say, a 10TF Vega PS5.
Really excited about this
> Let's speculate for a moment. Let's pretend the PS5 has Navi with 8 SE * 8 CUs and the Xbox2 has Vega with 4 SE * 16 CUs, same clocks obviously. How much difference will there be in graphics/power? Anything noticeable?
In graphical processing power? None... both chips will have the same peak capabilities.
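That "same peak capabilities" claim follows directly from the FLOP formula, since both hypothetical layouts come to 64 CUs; a quick sketch (the 1.8 GHz clock and 64 SPs per CU are assumptions for illustration, not leaked figures):

```python
# Peak FP32 = SEs x CUs-per-SE x SPs-per-CU x 2 FLOPs (FMA) x clock.
def peak_tflops(shader_engines, cus_per_se, clock_ghz, sps_per_cu=64):
    return shader_engines * cus_per_se * sps_per_cu * 2 * clock_ghz / 1000.0

navi_like = peak_tflops(8, 8, 1.8)   # 8 SE x 8 CU
vega_like = peak_tflops(4, 16, 1.8)  # 4 SE x 16 CU
assert navi_like == vega_like        # identical on paper (~14.7 TF here)
print(navi_like)
```

The real-world difference between the two layouts would come from utilization of those CUs, as discussed earlier in the thread, not from peak throughput.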
> yes, but educating the public about this is going to be very hard. marketing will have its hands full.
If they have games to show the increased power, it's doable.
> The Navi improvements are actually extremely interesting for us because it means that even if the PS5 has a somewhat "low" number of TFLOPs, which is mostly dictated by our skewed perception, it could still be a huge improvement. An 8TF Navi PS5 could be better than, let's say, a 10TF Vega PS5.
> Really excited about this
Indeed, or a 12TF Navi could be comparable to a 1080Ti/RTX 2080.
> yes, but educating the public about this is going to be very hard. marketing will have its hands full.
Shouldn't be, as long as PC benchmarks show how it performs compared to previous AMD cards and Nvidia.