Next Xbox Reveal Set For May 21, 10:00 A.M. PST, Livestreamed

DAE then proceeded to say that the leaked specs that he released to Kotaku were accurate...which are lower than PS4's specs.

I don't find him to be a trustworthy source, and he even says openly that he does not like Sony.

Accurate, but maybe incomplete?

Anyway, I don't believe in any megaton in regards to specs.
 
No, they aren't. DMEs use the same memory paths. Their function is to alleviate the R/W work from the GPU and CPU (whichever is desired).

ESRAM is for latency.

They could take the same transistor budget and build 256 MB of EDRAM with a throughput of 6 TB/s, but they chose latency over bandwidth.

The first statement is basically exactly what I said - it alleviates the bandwidth deficiencies of DDR3.

And ESRAM directly increases bandwidth as well, so it's not just latency, which isn't even much of an issue to begin with for rendering tasks.
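To put numbers on that bandwidth argument, here's a quick sketch using the leaked figures (the 256-bit/DDR3-2133 setup and the 102 GB/s ESRAM number are rumors, not confirmed specs):

```python
# Back-of-the-envelope bandwidth math for the leaked Durango memory setup.
# Bus width, transfer rate, and the ESRAM figure are all rumored, not confirmed.

ddr3_bus_width_bits = 256
ddr3_transfer_rate_mts = 2133  # DDR3-2133, mega-transfers per second

ddr3_bw_gbs = ddr3_bus_width_bits / 8 * ddr3_transfer_rate_mts / 1000
print(f"DDR3 bandwidth: {ddr3_bw_gbs:.1f} GB/s")  # ~68.3 GB/s

esram_bw_gbs = 102.0  # leaked ESRAM bandwidth figure

# Peak combined figure if both pools are accessed simultaneously:
print(f"Combined peak: {ddr3_bw_gbs + esram_bw_gbs:.1f} GB/s")  # ~170.3 GB/s
```

Whether real workloads can actually hit both pools at once is exactly the kind of thing these arguments are about.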
 
Durango 100% efficient confirmed.
teh secret sauce

Well, it's efficient engineering. Elegance. GPU makers just brute force their problems away.

If you can get more actual performance out of something without increasing your TDP or thermals, it's really a job well done.

Fewer short-term problems, more long-term savings. (Yield)
 
No, they aren't. DMEs use the same memory paths. Their function is to alleviate the R/W work from the GPU and CPU (whichever is desired).

ESRAM is for latency.

They could take the same transistor budget and build 256 MB of EDRAM with a throughput of 6 TB/s, but they chose latency over bandwidth.
Wouldn't having 256 MB of EDRAM make the die too big and hot to put into a box? It seems that ESRAM is on the die.
If I'm honest, that's what I would go with as well, but I still have hope, so I'm going with 1.6 TFLOPS and the rest the same as yours. (Even though that was called into question because a 12 CU config would run too hot, and I don't know about the feasibility of a 14 CU setup due to die space.)
Yeah, I think the GPU had to be lower because the ESRAM is on the die to offer the best results, and with it the die would be too huge for a closed box.
 
The first statement is basically exactly what I said - it alleviates the bandwidth deficiencies of DDR3.

And ESRAM directly increases bandwidth as well, so it's not just latency, which isn't even much of an issue to begin with for rendering tasks.

No, it isn't. DME bandwidth takes away from the bandwidth of the pathways; it doesn't add to it.

What it is doing is taking that function out of the hands of the GPU and CPU.

This results in an increase in threads being processed, but it also means the bandwidth is being used more. (As a result of more work being done)
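That offload argument can be sketched with a toy model (the copy fraction below is made up purely for illustration):

```python
# Toy model of DMA offload: if the shader cores previously spent some
# fraction of their time issuing reads/writes, handing that work to a
# dedicated move engine frees them for compute, while the copies still
# consume the same bus bandwidth. The fraction is purely illustrative.

copy_fraction = 0.15  # hypothetical share of CU time spent moving data

without_dme = 1.0 - copy_fraction  # compute throughput relative to peak
with_dme = 1.0                     # copies now overlap with compute

speedup = with_dme / without_dme
print(f"Compute uplift from offloading copies: {speedup:.2f}x")  # ~1.18x
```

The point the model captures: the win is in freed-up CU cycles, not in any extra bandwidth, which is consistent with both sides of the exchange above.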


Wouldn't having 256 MB of EDRAM make the die too big and hot to put into a box? It seems that ESRAM is on the die.

Yeah, I think the GPU had to be lower because the ESRAM is on the die to offer the best results, and with it the die would be too huge for a closed box.

From what I understand, the ESRAM would be separate, unlike the EDRAM which was linked to the GPU logic.

EDRAM wouldn't be that hot, just expensive to produce (multiple layers and all that jazz).
 
Well, it's efficient engineering. Elegance. GPU makers just brute force their problems away.

If you can get more actual performance out of something without increasing your TDP or thermals, it's really a job well done.

Fewer short-term problems, more long-term savings. (Yield)

If there was some magical performance boost from going this route, GPU manufacturers would have already implemented it in their designs.

But there's no point other than to make up for the deficiencies of DDR3, which is a non-issue for makers of high-end GPUs because they go with higher-spec'd GDDR5.
 
Well, it's efficient engineering. Elegance. GPU makers just brute force their problems away.

If you can get more actual performance out of something without increasing your TDP or thermals, it's really a job well done.

Fewer short-term problems, more long-term savings. (Yield)

Both consoles have substantial efficiency boosts over the standard GCN architecture. I'm curious to see if stuff like the extra ALUs and the simultaneous compute/rendering calculations that the PS4 GPU has make it into the Durango GPU. Those offer a pretty sizeable efficiency boost that will help the GPU in the long term.
 
I didn't mean to imply that at any point, and I don't see where I did. Either way, it wasn't my intent.


I think you did. I also think any of the people who feel the need to come into an MS thread and take a hard stance that the PS4 will be significantly more powerful deserve all the crow they get if it proves to be untrue. If an Xbox fan was doing the same in a PS4 thread, he would deserve the same. Luckily for PS fans, those threads seem to be free of the BS that goes on in 720-related ones.
 
Oh jesus, again this nonsense about Xbox having a "super efficient architecture" and PS4 being a "brute power design". The PS4 also has improvements to its architecture, for example the asynchronous compute ability. Both consoles have increased efficiency. And I don't see how an unusual design with ESRAM will result in better yields; the Xbox 3 processor die will probably be of similar size to the PS4 processor die.
 
I think you did.
Good thing that I made it explicit that nothing you inferred from my writing was what I meant. There is nothing more I can do than be explicit that it shouldn't be understood as whatever you read into it.

Or rewrite my sentence to encompass what I explicitly meant (as I've clarified in the follow-up) without your implicit reading, and I'll edit it in. As I said, I don't know where I was supposed to have done it, so I can't edit it out myself.
 
If there was some magical performance boost from going this route, GPU manufacturers would have already implemented it in their designs.

But there's no point other than to make up for the deficiencies of DDR3, which is a non-issue for makers of high-end GPUs because they go with higher-spec'd GDDR5.

No they wouldn't have.

ESRAM works wonders for compute and certainly helps a bit with b/w, but you can't invest that much transistor budget in just a GPGPU performance increase... That's why they brute force their problems away: it's more beneficial to add more CUs (which benefits both graphical and compute functions)... Remember, thermals and TDP aren't exactly a worry for AMD/NV.

DMEs are a creation of Microsoft, not AMD, and they are patented.


Gemüsepizza;57433098 said:
Oh jesus, again this nonsense about Xbox having a "super efficient architecture" and PS4 being a "brute power design". The PS4 also has improvements to its architecture, for example the asynchronous compute ability. Both consoles have increased efficiency. And I don't see how an unusual design with ESRAM results in better yields; the Xbox 3 processor die will probably be of similar size to the PS4 processor die.

Durango has asynchronous compute ability too. Thread priority is looser on PS4 though, afaik.

ACEs aren't something new
 
No they wouldn't have.

ESRAM works wonders for compute and certainly helps a bit with b/w, but you can't invest that much transistor budget in just a GPGPU performance increase... That's why they brute force their problems away: it's more beneficial to add more CUs (which benefits both graphical and compute functions)... Remember, thermals and TDP aren't exactly a worry for AMD/NV.

DMEs are a creation of Microsoft, not AMD, and they are patented.

Performance/watt is ALWAYS a worry, and always will be. Again, if there were some performance gains to be had from this approach, they would be on an AMD/NV roadmap...but they're not. If they could ratchet up the performance on higher end GPUs with this approach, they would.

It all comes down to them being there for DDR3.

Also, are you now changing your tone and saying that ESRAM and DME are not for rendering, but for compute?
 
Surely their plan with the ESRAM is exactly the same as with the EDRAM in the Xbox 360. Why would they give it big fat bandwidth if it's supposed to be just for latency-sensitive pixel shaders or something?
 
Performance/watt is ALWAYS a worry, and always will be. Again, if there were some performance gains to be had from this approach, they would be on an AMD/NV roadmap...but they're not. If they could ratchet up the performance on higher end GPUs with this approach, they would.

It all comes down to them being there for DDR3.

Also, are you now changing your tone and saying that ESRAM and DME are not for rendering, but for compute?

Holy shit man

ESRAM is for compute and latency and, to a smaller degree, rendering... Compute is incredibly latency-sensitive; rendering is mainly bandwidth-sensitive.

DMEs are there to supplant the R/W process... This frees up the CUs to do more rendering or compute.

Both used in combination can be very powerful, but there are limits.

My tone has not changed.

Wattage isn't that much of a worry when they are pushing >400 W cards that take up 3 slots for the heat sink and fans; consoles don't have that luxury.
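A simple transfer-time model makes the latency-vs-bandwidth split concrete (the latency and bandwidth figures below are illustrative, not real numbers for either console):

```python
# Simple model: time to fetch a block = fixed latency + size / bandwidth.
# Small dependent accesses (typical of GPGPU compute) are dominated by
# latency; large streaming reads (typical of rendering) by bandwidth.
# The figures below are illustrative, not measured numbers for any console.

def fetch_time_ns(size_bytes, latency_ns, bandwidth_gbs):
    return latency_ns + size_bytes / bandwidth_gbs  # 1 GB/s == 1 byte/ns

small = 64            # one cache line, e.g. a dependent pointer chase
large = 1_000_000     # a large streamed chunk, e.g. texture data

for name, lat, bw in [("low-latency pool", 20, 100), ("DRAM pool", 200, 170)]:
    print(f"{name}: 64 B in {fetch_time_ns(small, lat, bw):.1f} ns, "
          f"1 MB in {fetch_time_ns(large, lat, bw):.0f} ns")
```

Small fetches come out faster from the low-latency pool even though its bandwidth is lower here, while the big streamed read favors raw bandwidth; that's exactly the trade-off being argued about.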
 
Gemüsepizza;57433098 said:
Oh jesus, again this nonsense about Xbox having a "super efficient architecture", and PS4 being a "brute power design".

Durango 100% efficient!
PS4 53% efficient!

Shoot me now, same stupid shit, new page.
 
We should just have an official spec war thread and a poll for making an official guess at the relative performance of the consoles before the 21st. Then we can have easier-to-find material for an official crow-serving thread.
 
Call it stupid all you want

Doesn't make it any less true.

So, let's assume everything is true: 8 GB of DDR3, a 1.2 TFLOP GPU, and 32 MB of ESRAM vs 8 GB of GDDR5 and a 1.8 TFLOP GPU.

Is the former going to have better performance than the latter because of the magical efficiency gain of ESRAM + DMEs vs the "retard strength" of the PS4?
 
Call it stupid all you want

Doesn't make it any less true.

How do you factor libGCM vs. Xbox 3 API overhead into your 100% efficiency number?

Cidd said:
Let's wait until after May 21st; it should be more entertaining when all the secret sauces are out in the open.
Let's have a thread and a poll for guessing if the secret sauce will be spilled on the 21st.
 
Call it stupid all you want

Doesn't make it any less true.
You seem to be forgetting that the PS4 has substantial tweaks to help with the latency issue, which is a much smaller concern than you're making it out to be. Both consoles are well designed to mitigate their deficiencies: bandwidth on Durango and latency on PS4.
 
You seem to be forgetting that the PS4 has substantial tweaks to help with the latency issue, which is a much smaller concern than you're making it out to be. Both consoles are well designed to mitigate their deficiencies: bandwidth on Durango and latency on PS4.

Nah man, he's an engineer, he knows better than Mark Cerny, a veteran dev with 30 years of experience under his belt, and the Sony/AMD engineers.

They threw 8 GB of GDDR5 in there for shits and giggles.
 
From one side or the other... on day 21 we will have rivers of tears

unless Microsoft just says

"Yo guize, we got an AMD GPU, 8 CORE CPU, and 8 GB of RAM Yo, let's now talk about games and services"

in which case, we will never know until even more leaks are revealed...or digital foundry starts doing their comparisons
 
So, let's assume everything is true: 8 GB of DDR3, a 1.2 TFLOP GPU, and 32 MB of ESRAM vs 8 GB of GDDR5 and a 1.8 TFLOP GPU.

Is the former going to have better performance than the latter because of the magical efficiency gain of ESRAM + DMEs vs the "retard strength" of the PS4?

I don't know where the "retard strength" came from, but w/e.

As is... if the DDR3 remains at 68 GB/s (same number of chips) and the clocks remain the same,

PS4 would most definitely be more powerful, as at 1.2 vs 1.8 efficiency can't close that much of a gap. There are still other things we need to learn about inside the system, like the 360 SoC and/or secondary processors. (Likewise with PS4)

At present though, it would appear Durango would be more powerful than the PS4 on the CPU side (because of DDR3 and the secondary APU handling OS and audio offload), going by the collection of rumors.

Microsoft could increase the clocks to make it virtually identical to PS4's capability (at about 900-950 MHz... potentially too hot), but they'd have to do something about the DDR3, like doubling the chips (widening the bus) or going beyond the JEDEC specifications.
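The TFLOP figures being thrown around all follow from the standard GCN peak-throughput formula; here's the arithmetic, assuming the rumored CU counts and clocks:

```python
# GCN peak throughput: CUs x 64 ALUs per CU x 2 ops per cycle (FMA) x clock.
# CU counts and clocks below are the rumored configs, not confirmed specs.

def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(f"Durango (12 CU @ 800 MHz): {gcn_tflops(12, 0.8):.2f} TFLOPS")  # ~1.23
print(f"PS4     (18 CU @ 800 MHz): {gcn_tflops(18, 0.8):.2f} TFLOPS")  # ~1.84

# Clock a 12 CU part would need to match the rumored PS4 figure:
clock_needed = gcn_tflops(18, 0.8) / (12 * 64 * 2) * 1000  # in GHz
print(f"12 CUs would need ~{clock_needed * 1000:.0f} MHz")  # ~1200 MHz
```

That's only the raw peak gap; the efficiency arguments in this thread are about how much of that peak each design actually sustains.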
 
Call it stupid all you want

Doesn't make it any less true.

Other than the fact that the entire concept is nonsensical, okay. That, and the entire rumor was based on people misinterpreting what others were talking about. The whole "100% efficient" thing came from comparing GCN to Xenos (1-lane SIMD vs 5-lane SIMD) in regards to specific shader utilization. Then many people on the internet saw "100% efficient" and went "errmahgawd! Durango is 100% efficient!" without understanding the context at all, and thus we're here having this conversation.

The entire thing is just lulz worthy.
 
You seem to be forgetting that the PS4 has substantial tweaks to help with the latency issue, which is a much smaller concern than you're making it out to be. Both consoles are well designed to mitigate their deficiencies: bandwidth on Durango and latency on PS4.

Well, Durango's move engines help a lot...

Also the audio block helps a lot, saving lots of compute power... the audio mixing of a normal game on the X360 takes 1/3 of the CPU power...
 
Other than the fact that the entire concept is nonsensical, okay. That, and the entire rumor was based on people misinterpreting what others were talking about. The whole "100% efficient" thing came from comparing GCN to Xenos (1-lane SIMD vs 5-lane SIMD) in regards to specific shader utilization. Then many people on the internet saw "100% efficient" and went "errmahgawd! Durango is 100% efficient!" without understanding the context at all, and thus we're here having this conversation.

The entire thing is just lulz worthy.

ohhhhhhhhh kayyyyyy then

has nothing to do with that, but great job at over-sensationalizing it!

and it isn't nonsensical.
 
Other than the fact that the entire concept is nonsensical, okay. That, and the entire rumor was based on people misinterpreting what others were talking about. The whole "100% efficient" thing came from comparing GCN to Xenos (1-lane SIMD vs 5-lane SIMD) in regards to specific shader utilization. Then many people on the internet saw "100% efficient" and went "errmahgawd! Durango is 100% efficient!" without understanding the context at all, and thus we're here having this conversation.

The entire thing is just lulz worthy.
That was Reiko...

Tensions are RISING...

We are getting closer to the core. I mean the reveal.
hello bud!
 
No, that was a shitload of GAF members who, to be blunt, don't know what the fuck they're talking about. Clearly the fact that people still believe it despite it having been cleared up many times only furthers that point. That's where this entire "100% efficiency" and all its typical banana phone derivatives came from: that one singular gross misinterpretation.

ohhhhhhhhh kayyyyyy then

has nothing to do with that, but great job at over-sensationalizing it!

and it isn't nonsensical.

Good to know you couldn't rebut with anything knowledgeable. Makes it easy for me to gloss over your posts in the coming months.
 
Tensions are RISING...

We are getting closer to the core. I mean the reveal.

 