Farrell55
> It is 100% obvious it won't have anything remotely close to 24TF.

What will I get when I win? Very, very unlikely, but what will you get when you win?
> What will I get when I win? Very, very unlikely, but what will you get when you win?

I'm not the one betting with you...
GTX 1080-level performance is perfect for the next generation, in my opinion, if the console is not to exceed $499.
> Before I waste time watching this, are they basing their speculation on a 40-48CU chip?

40CU Navi with 4 disabled for yields.
> A certain poster here has a very similar way of typing...

Call them out.
> 40CU Navi with 4 disabled for yields.

LOL, DF are shameless.
It's 8TF Austin, it was always 8TF.gif
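The TF numbers being thrown around all come from the standard GPU arithmetic: CUs × 64 shaders per CU × 2 FLOPs per shader per clock × clock speed. A minimal sketch of that math in Python; the 36CU/1.8GHz pairing is the thread's speculated Gonzalo config, not a confirmed spec:

```python
# Standard GPU FLOPS formula: CUs * 64 shaders/CU * 2 FLOPs per shader per clock.
def teraflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # /1000 converts GFLOPS to TFLOPS

# Speculated 40CU chip with 4 disabled, at the rumored 1.8GHz Gonzalo clock:
print(f"36CU @ 1.8GHz: {teraflops(36, 1.8):.2f} TF")  # ~8.29 TF -- the '8TF' meme
# Sanity check against announced cards:
print(f"5700 XT, 40CU @ 1.905GHz boost: {teraflops(40, 1.905):.2f} TF")  # ~9.75
print(f"5700, 36CU @ 1.725GHz boost:    {teraflops(36, 1.725):.2f} TF")  # ~7.95
```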
Which is quite funny because Lisa was praising how good Cerny is.
Before I waste time watching this, are they basing their speculation on a 40-48CU chip?
> LOL, DF are shameless.

They have a PCMR-esque mindset, and of course that type of mindset doesn't apply to consoles and their philosophy of doing things in the most efficient manner.
Btw, 8TF isn't even the worst part. The worst part of their thought process is building a chip around 7nm's weakness (clocks) instead of its strength (density increase).
> Quote?

What she said about Cerny:
> Yes, 40 CU. Navi CU > Vega CU.

Even then it's not enough, and it's wasting the 7nm density increase.
> 40CU Navi with 4 disabled for yields.

I didn't see that video yet. Is their prediction 40 CUs with 4 disabled for both systems?
> Maybe the SoC has RT on it?

Even accounting for RT bits, a 64CU APU would be around 380 mm², and a 72CU one around 400 mm².
With a 6nm node shrink in the near future enabling cost reductions, it would be shortsighted to go with a small chip.
> LOL, DF are shameless.

They base it around APU size.
> I didn't see that video yet. Is their prediction 40 CUs with 4 disabled for both systems?

If you've been following Richard's coverage, he has a theory that PS5's BC, similar to PS4 Pro's, is more hardware-based than Xbox's, so the PS5 would need to be built in a manner that accommodates that. Additionally, he believes that Gonzalo is PS5's APU, and Navi Lite. Lastly, based on the recent deal between Sony and MS for Azure to host PS4/PS5 PS Now games in the future, he has theorized that PS5 games can basically run on MS hardware. Thus, a GCN/RDNA hybrid... Navi Lite.
What she said about Cerny:
> I think 64 with 4 disabled would be the magic number for one of them.

With RDNA's DCU config you must disable 8 CUs.
> They base it around APU size.

I do too:
| Launch die size at 7nm | "6nm" die size (15% reduction) |
|---|---|
| 400 mm² | 340 mm² |
| 390 mm² | 331.5 mm² |
| 380 mm² | 323 mm² |
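The "6nm" column above is just a flat 15% area reduction applied to the 7nm launch sizes; the 15% figure is this thread's assumption, not an official number. The arithmetic:

```python
# Flat 15% die-area reduction assumed for a "6nm" reshrink (thread assumption).
for die_7nm in (400, 390, 380):
    print(f"{die_7nm} mm2 -> {die_7nm * 0.85:.1f} mm2")
# 400 -> 340.0, 390 -> 331.5, 380 -> 323.0
```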
> I'm going by mid-gen consoles' peak system power consumption compared to Navi 10's TBP and the expected card-only average gaming consumption.

A bigger, lower-clocked chip is the best option to hit consoles' power-consumption sweet spot.
> If you've been following Richard's coverage, he has a theory that PS5's BC, similar to PS4 Pro's, is more hardware-based than Xbox's, so the PS5 would need to be built in a manner that accommodates that.

Sony wouldn't make the Wii U's mistake of sacrificing performance for backwards compatibility.
| Node | 16 nm | 10 nm | 7 nm | 7nm/10nm Δ |
|---|---|---|---|---|
| Gate | 90 nm | 66 nm | 57 nm | 0.86x |
| Min Metal | 64 nm | 42 nm | 40 nm | 0.95x |
Transistor Profile

| Node | 10 nm | 7 nm | Δ |
|---|---|---|---|
| Fin Pitch | 36 nm | 30 nm | 0.83x |
| Fin Width | 6 nm | 6 nm | 1.00x |
| Fin Height | 42 nm | 52 nm | 1.24x |
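For what it's worth, the pitch numbers above can be turned into a rough density estimate: cell area scales roughly with gate pitch × minimum metal pitch, so these tables imply only about a 1.2x logic density gain from 10nm to 7nm by this crude measure. Real density also depends on cell height and track count, so treat this as a ballpark sketch, not a foundry figure:

```python
# Crude first-order density estimate from the pitch tables above.
gate_10, gate_7 = 66, 57    # gate pitch in nm
metal_10, metal_7 = 42, 40  # minimum metal pitch in nm

area_scale = (gate_7 / gate_10) * (metal_7 / metal_10)
print(f"area scale, 10nm -> 7nm: {area_scale:.2f}x")   # ~0.82x
print(f"implied density gain:    {1 / area_scale:.2f}x")  # ~1.22x
```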
5-6 more months (MS/Sony respectively) of speculation to go.
> Before I waste time watching this, are they basing their speculation on a 40-48CU chip?

Yes.
> With RDNA's DCU config you must disable 8 CUs.

How do they end up with a 40CU 5700 XT and a 36CU 5700 Pro?
> Sony wouldn't make the Wii U's mistake of sacrificing performance for backwards compatibility.

Like they did with PS4 Pro?
They are more likely to dump 5 million into an emulation team to create a software layer that assists hardware-based emulation.
> How do they end up with a 40CU 5700 XT and a 36CU 5700 Pro?

2 SEs.
> Like they did with PS4 Pro?

PS4 Pro didn't sacrifice performance for BC, and even if it did, it's a mid-gen refresh meant to make enhancements as costless as possible for devs, not a next-gen machine.
> I've yet to hear an adequate explanation of how you plan to go from a 1.6GHz 36CU 5700 Pro that consumes a baseline 180W TBP (probably more like 190W average gaming / 200W peak)...

I could ask you the same question, lol: how do you plan to go from a 1.6GHz 36CU 5700 Pro that consumes a baseline 180W TBP to a 36CU system that consumes even less at 1.8GHz?
> Full chip, I believe.

2 full chips at 40CU and 36CU, with no option for salvaged chips? We'll see. That would be odd compared to AMD's normal modus operandi.
> PS4 Pro didn't sacrifice performance for BC, and even if it did, it's a mid-gen refresh meant to make enhancements as costless as possible for devs, not a next-gen machine.

I was talking about their approach in contrast to MS's. Responding to this...
> They are more likely to dump 5 million into an emulation team to create a software layer that assists hardware-based emulation.

See... like PS4 Pro. Not hardware BC as in putting old hardware in the system; hybrid BC, where a certain hardware config and clocks are required to accommodate BC. We're talking about the same thing. Xbox is further abstracted and virtualized. They used, and had to use, a different approach with Xbox.
> I could ask you the same question, lol: how do you plan to go from a 1.6GHz 36CU 5700 Pro that consumes a baseline 180W TBP to a 36CU system that consumes even less at 1.8GHz?

You can't ask that question while pushing a config that consumes ~100W more than what I'm talking about. Hovis and a power cap with "opportunistic" clock PR is how I would go about it. A simpler explanation is that Gonzalo's 1.8GHz is a "boost"/opportunistic/high-quality-bin clock, and a lower clock is coming in the retail version.
> 2 full chips at 40CU and 36CU, with no option for salvaged chips? We'll see. That would be odd compared to AMD's normal modus operandi.

Look at my edited post. 2 SEs; you disable a DCU from each.
> See... like PS4 Pro. Not hardware BC as in putting old hardware in the system; hybrid BC, where a certain hardware config and clocks are required to accommodate BC. We're talking about the same thing. Xbox is further abstracted and virtualized. They used, and had to use, a different approach with Xbox.

It doesn't apply in the slightest, because the Pro was meant to be a revision of the same console. Cerny even said their approach for next gen would be different.
> You can't ask that question while pushing a config that consumes ~100W more than what I'm talking about. Hovis and a power cap with "opportunistic" clock PR is how I would go about it. A simpler explanation is that Gonzalo's 1.8GHz is a "boost"/opportunistic/high-quality-bin clock, and a lower clock is coming in the retail version.

I asked because you are using 1.8GHz as part of your prediction.
Here we go
And he continues:
> Before I waste time watching this, are they basing their speculation on a 40-48CU chip?

40 with 4 disabled, like the Pro, which guarantees easy full BC like the Pro did.
> 40 with 4 disabled, like the Pro, which guarantees easy full BC like the Pro did.

This is so dumb, lol. Might as well call it the PS4 U.
> Look at my edited post. 2 SEs; you disable a DCU from each.

So are you saying these are 2 full chips, or a 40CU XT with 4 disabled for a 36CU Pro?
For bigger chips, a 4SE config will be needed.
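If the disable-one-DCU-per-SE reasoning holds, the salvage configs fall out mechanically. A minimal sketch, assuming Navi 10's publicly described 2 SE × 10 DCU × 2 CU layout; the 4SE variants are this thread's speculation, not announced parts:

```python
# CU counts when one dual-CU (DCU) is disabled per shader engine (SE).
def salvage(ses: int, dcus_per_se: int, disabled_per_se: int = 1):
    total = ses * dcus_per_se * 2                    # 2 CUs per DCU
    active = ses * (dcus_per_se - disabled_per_se) * 2
    return total, active

print(salvage(2, 10))  # (40, 36): Navi 10 -> 5700 XT full die, 36CU salvage
print(salvage(4, 8))   # (64, 56): hypothetical 4SE chip, 8 CUs lost to salvage
print(salvage(4, 9))   # (72, 64): hypothetical 4SE chip
```

Note this matches the earlier "with 64 CUs you must disable 8" claim: one DCU per SE across 4 SEs is 8 CUs.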
> It doesn't apply in the slightest.

It's precedent; of course it applies.
> And I repeat again... Pro didn't sacrifice performance for BC.

This is your strawman; don't involve me.
> I asked because you are using 1.8GHz as part of your prediction.

This is what I'm waiting for you to explain. I'm really not following, sorry. I've written literal pages explaining how I power cap, overclock, and undervolt my RX 480 to get X1X's results under 160W, even without X1X's doubled cache and 326GB/s memory bandwidth versus the ~277GB/s on the RX 480.
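Both sides of this power argument lean on the same first-order model: dynamic power scales as P ∝ f × V², and past the sweet spot voltage has to rise roughly with frequency, so power grows close to f³. A toy model; the 180W baseline is the thread's 5700-class TBP figure, and the exponent and undervolt margin are illustrative assumptions, not measurements:

```python
# First-order dynamic power: P ~ f * V^2; past the sweet spot V rises ~linearly
# with f, so P grows roughly as f^3. Toy numbers for illustration only.
def scaled_power(p_base: float, f_base: float, f_new: float, exp: float = 3.0) -> float:
    return p_base * (f_new / f_base) ** exp

print(f"180W @ 1.6GHz pushed to 1.8GHz: {scaled_power(180, 1.6, 1.8):.0f} W")  # ~256W
# Undervolting attacks the V^2 term directly, which is why a power-capped,
# undervolted chip can hold a clock at far less than the naive estimate:
print(f"same, with a -10% undervolt: {scaled_power(180, 1.6, 1.8) * 0.9**2:.0f} W")  # ~208W
```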
> The specs may never be revealed. Won't that be hilarious?

Nah, only Nintendo does that.
Btw, do you have inside sources?
> If MS decides against revealing the TF number in case it backfires, then it comes down to the games to land the knockout blows.

I hope neither Sony nor MS goes this route.
> 40CU XT with 4 disabled for a 36CU Pro.

This.
> It's precedent; of course it applies.

It's a flawed comparison... the Pro is a revision of the same console.
> This is your strawman; don't involve me.

A 36CU chip would be compromising performance.
> That going from 36CU to 40CU with a ~200MHz lower core clock results in some fantastical perf/watt uplift.

It is, though, because the X is undervolted to hit a lower stable clock.
> Show me how your theory works in relation to the 2016 RX 470/480, the 2017 RX 580, and the Vega line when it comes to perf/watt. P.S. I already looked.

My theory doesn't apply to the RX 580, since it's a higher-clocked version without increased CUs.
> It's a flawed comparison... the Pro is a revision of the same console.

Still precedent, because Sony has to deal with PS4 BC. 36CU for BC would just be one of many factors converging: BC, power consumption, die size, CUs disabled for yields.
PS5 is a next-gen, clean-slate console (Cerny's words).
> A 36CU chip would be compromising performance.

Remember my "how about lower clocks and fewer CUs?" comment. I provided precedent with Polaris, where the perf/watt sweet spot was year one, and moving towards fewer CUs gave better perf/watt. Compare the following year, when the 570/580's higher clocks and Vega's wider/slower approach plus architectural advantages both either only matched the 470/480's perf/watt or were worse.
> My theory doesn't apply to the RX 580, since it's a higher-clocked version without increased CUs.

This example fails because you claim node maturity (RX 580) plus wider/slower (Vega) will result in better perf/watt. That remains to be seen, especially with regard to my example above.
If anything, the RX 480 to RX 580 proves my point: a couple hundred MHz can mean a world of difference for perf/watt once you push past the sweet spot and hit diminishing returns.
> Still precedent, because Sony has to deal with PS4 BC. 36CU for BC would just be one of many factors converging: BC, power consumption, die size, CUs disabled for yields.

Cerny already went on record that PS5 will be a clean slate, and you can emulate 36 CUs by disabling CUs in the emulator anyway.
> Remember my "how about lower clocks and fewer CUs?" comment.

So 6TF then? lol
> The 570/580's higher clocks and Vega's wider/slower approach plus architectural advantages both either only matched the 470/480's perf/watt or were worse.

An undervolted Vega kicks Polaris's ass; AMD clocked those cards way past their comfort zone.
> This example fails because you claim node maturity (RX 580) plus wider/slower (Vega) will result in better perf/watt.

The RX 580 is clocked higher... for my claim to apply, it would have to be undervolted to match RX 480 clocks.
> This is so dumb, lol. Might as well call it the PS4 U.

You remember my dream...
If that were the case, they could just disable CUs for the PS4 emu.
Or go with a 72CU chip (2x36)
80CU (total) dream redeemed
> You remember my dream...

Here we go.
I actually now think that >10TF is unlikely for these machines. AMD's recent RX 5700 XT and RX 5700 cards are doing up to 9.75 TFLOPs and 7.95 TFLOPs.
9-10TF for both PS5 and Scarlett seems more realistic.
Unless I've missed something?
> I actually now think that >10TF is unlikely for these machines. AMD's recent RX 5700 XT and RX 5700 cards are doing up to 9.75 TFLOPs and 7.95 TFLOPs. 9-10TF for both PS5 and Scarlett seems more realistic. Unless I've missed something?

Those are small chips, 36-40CU.
It was confirmed those GPUs don't support ray tracing, so they won't be used in PS5 / Scarlett.
AMD ray-tracing hardware strategy overview:

"AMD also briefly touched on its vision for real-time ray-tracing. To begin with, we can confirm that the 'Navi 10' silicon has no fixed-function hardware for ray-tracing such as the RT core or tensor cores found in NVIDIA 'Turing' RTX GPUs. For now, AMD's implementation of DXR (DirectX..." (www.neogaf.com)
But wait, it was only confirmed that PS5 supports ray tracing via the GPU.
I’m kidding, I’m sure they both will.
> Xbox One X is already 50% more powerful than PS4 Pro, and the Sony fans are fine with that.

What's interesting is that both PS4 Pro and Xbox One X are based on the Polaris architecture, but the Xbox One X GPU was clearly more capable thanks to MS's customization. PS4 Pro has 4.2TF, yet many games on Xbox One X render 2x as many pixels with just 1.8TF more. In games like Wolfenstein II, even an RX 580 can't match Xbox One X's results: even at dynamic resolution and minimum settings you get 45-55 fps, while Xbox One X runs the same game at 55-60 fps at even higher settings. If MS customizes the Xbox Scarlett GPU in a similar way, their GPU will be clearly faster than PS5's even if both consoles use the same 12TF GPU. Also, CPU performance on Xbox Scarlett should be better if MS uses the same DX12 tech as on Xbox One X (according to MS, their DX12 tech was able to reduce draw calls drastically, and as a result the Xbox One X CPU was up to 50% faster). Maybe PS5 will have a faster SSD, but from a pure CPU and GPU power perspective I really think MS will customize their console with better results.
Those are small chips, 36-40CU. Consoles will use 56-64CU.
> What's interesting is that both PS4 Pro and Xbox One X are based on the Polaris architecture, but the Xbox One X GPU was clearly more capable thanks to MS's customization. PS4 Pro has 4.2TF, yet many games on Xbox One X render 2x as many pixels with just 1.8TF more. [...]

Pro is bandwidth-limited. The X is not. Hence the higher pixel count.
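The pixel claim is easy to sanity-check with public spec-sheet numbers: 4K has 2.25x the pixels of 1440p, while the X's compute advantage over the Pro is only about 1.43x, so the roughly 1.5x memory bandwidth advantage has to carry much of the difference:

```python
# Public spec-sheet numbers for Xbox One X vs PS4 Pro.
pixels_4k = 3840 * 2160
pixels_1440p = 2560 * 1440
print(f"pixel ratio, 4K vs 1440p: {pixels_4k / pixels_1440p:.2f}x")  # 2.25x

tf_ratio = 6.0 / 4.2    # 6.0TF vs 4.2TF compute
bw_ratio = 326 / 218    # 326GB/s vs 218GB/s memory bandwidth
print(f"compute ratio: {tf_ratio:.2f}x, bandwidth ratio: {bw_ratio:.2f}x")
# 1.43x compute alone can't explain 2.25x the pixels; the bandwidth gap is
# what leaves the Pro bandwidth-limited where the X is not.
```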
The PlayStation GPU "will support" ray tracing; Xbox next has "hardware-accelerated ray tracing."
You could read that as the Xbox GPU having dedicated hardware and the Sony GPU using shaders... but not really the other way around, sorry.
Most likely they both have the same thing.
> Farell55, is that you, pal?

No, who would write such nonsense?