VGLeaks rumor: Durango CPU Overview

Fewer steps of translation between what you write and what happens on the hardware doesn't necessarily make it better or give better performance.

But to say this you should have done some profiling with the same code on both dev-kits and compared them. Otherwise this is just a wild guess. So, did you profile your code on both dev-kits?
 
But to say this you should have done some profiling with the same code on both dev-kits and compared them. Otherwise this is just a wild guess. So, did you profile your code on both dev-kits?

No. It's an assumption, but I think it's a decent one to make.

PSGL is built on libgcm after all.
 
Fewer steps of translation between what you write and what happens on the hardware doesn't necessarily make it better or give better performance.

You are arguing in a circle. If fewer steps don't make it better or better performing, what was the point of your original argument?
 
No. It's an assumption, but I think it's a decent one to make.

PSGL is built on libgcm after all.

It's an incorrect "assumption" is what I've been saying. You don't have enough information about the DX API on 360 to be able to formulate an assumption like that.

If you don't know what you are talking about you shouldn't be making assumptions.
 
Well, there's a difference in performance if the API does not expose something. Also, not going through the API does let you get rid of a bit of the cruft that it adds. It's not like it's a giant difference, but it is still there.

Wow, guys on GAF can be dicks, especially the hardcore fans :/. I doubt you even know a thing about programming, but whatever.

I never said anything about the performance of the API, just how thin it was.

Fewer steps of translation between what you write and what happens on the hardware doesn't necessarily make it better or give better performance.

Sounds like you didn't know what you were arguing. You were talking performance, then you weren't, and then you were, etc. Your entire argument was based on making it seem like Sony's APIs were better than the competition. I think we proved it's not a "fact" as you put it once; it's more an "assumption" as you now admit (and as other developers on B3D have noted, it's an incorrect assumption).
 
No. It's an assumption

Thanks, that nails it. No more "facts", please. It's all based on assumption and that's it. You have read enough replies now that show you how bad your argumentation was. No offense, but it is quite common for fans to go into "the other side's" threads and try to make them look bad so that their preferred product shines.
 
Thanks, that nails it. No more "facts", please. It's all based on assumption and that's it. You have read enough replies now that show you how bad your argumentation was. No offense, but it is quite common for fans to go into "the other side's" threads and try to make them look bad so that their preferred product shines.

Okay, I'll stop sharing all the information I have altogether, that's for sure.
 
Okay, I'll stop sharing all the information I have altogether, that's for sure.

It's great to share "information", as long as it's correct and/or factual, not just based on opinion. Nothing wrong with admitting one is wrong, but let's cut the bullshit when we don't know something or don't have a clear picture of both sides.
 
Awesome. So industry insider KidBeta (according to Reiko) doesn't even know the basics about either API and gets all his shady info from the fricking Pirate Bay, where he tells others to go as well. Seriously, dude?
 
Awesome. So industry insider KidBeta (according to Reiko) doesn't even know the basics about either API and gets all his shady info from the fricking Pirate Bay, where he tells others to go as well. Seriously, dude?

Uh, only the PS3 SDK is on the Pirate Bay (it actually is). And I sure as hell know about the PS3 API. For the 360, it's only what I've been able to find, which is not much.

The other stuff I got from other places.

Can confirm with mods if wanted.
 
Awesome. So industry insider KidBeta (according to Reiko) doesn't even know the basics about either API and gets all his shady info from the fricking Pirate Bay, where he tells others to go as well. Seriously, dude?

I never said he was an industry insider.

I said he knows a little bit more than Brad Genz.
 
Oh Lord, another 'insider'. Will it ever end?

Guess it won't. I am also wondering why people make an account on GAF and post almost only in one thread, the way some people do. What is the intention? And, as is obvious now, it quickly becomes clear that these people don't know what they are talking about and that their argumentation is false. Why do they try to convince others of their own "facts"?
 
Guess it won't. I am also wondering why people make an account on GAF and post almost only in one thread, the way some people do. What is the intention? And, as is obvious now, it quickly becomes clear that these people don't know what they are talking about and that their argumentation is false. Why do they try to convince others of their own "facts"?

c0de, Junior member, 27 posts. 19 of which in this very thread. Or are you just trying to make fun of yourself?
 
c0de, Junior member, 27 posts. 19 of which in this very thread. Or are you just trying to make fun of yourself?

No :) It's not the quantity of posts but what is inside them. I just don't like how some people try to argue. Just look at who I replied to. Whether you agree with my points or not doesn't matter. But I don't like people judging things they either don't know about or just want to downplay. And Gemüsepizza and I also know each other from another forum; I just wanted to say "hi" :)
 
I hope people realise that they're not 'insiders' unless they've received non-public information from someone identifiable within the industry. Anonymous messages and forum posts of any description don't qualify.
 
For all this talk about "core gamer", I don't think anybody can even identify what that term really means. One thing I can say for certain is that GAF is not representative of the core gamer. I'd say GAF represents a very minuscule slice of the overall core gamer demographic, whatever that is.


I can't find the article, but some marketing guy said 'core gamer' was defined by the amount of money spent on gaming, the amount of time spent gaming, and the amount of time spent reading about gaming. It had nothing to do with types of games or whatever.
 
TFLOPS don't matter, man! They literally tell you zip about how capable a system is.
This is literally wrong. Particularly when comparing two almost identical GPU architectures, GFLOPs tell you a whole lot about their relative capabilities. Not everything of course, but also not "zip", not by a long shot.

It's great to share "information", as long as it's correct and/or factual, not just based on opinion.
Indeed.
 
This is literally wrong. Particularly when comparing two almost identical GPU architectures, GFLOPs tell you a whole lot about their relative capabilities. Not everything of course, but also not "zip", not by a long shot.

Indeed.

Indeed, and they aren't just almost identical either; I reckon it is worth betting that they are nigh on identical down to all but some minor details (more ACEs on the PS4). Which would mean a direct comparison via FLOPS is incredibly relevant, as you have pointed out.

Whilst FLOPS may only give us peak performance, and that peak may not be reached often (if ever), it is a good way to see how capable a given GPU is, even more so nowadays with the modern AMD/NV archs, where the peak FLOPS figure isn't off in la-la land and completely off the mark. (Yay scalar archs.)
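For anyone who wants the back-of-envelope maths behind those peak figures, here's a rough sketch, assuming the standard GCN layout of 64 lanes per CU and one FMA (2 FLOPs) per lane per clock. The CU counts and clocks are just the rumoured numbers from this thread (12 CU @ 800MHz for Durango, and the widely reported 18 CU @ 800MHz behind the PS4's 1.84TF figure), not confirmed specs:

def peak_gflops(cus, clock_mhz):
    # Theoretical peak single-precision GFLOPS for a GCN-style GPU:
    # CUs x 64 lanes per CU x 2 FLOPs (one FMA) per lane per clock x clock
    return cus * 64 * 2 * clock_mhz / 1000

print(peak_gflops(12, 800))   # rumoured Durango: 12 CUs @ 800MHz -> 1228.8 (~1.23TF)
print(peak_gflops(18, 800))   # rumoured PS4:     18 CUs @ 800MHz -> 1843.2 (~1.84TF)

That's all a TF figure is: CU count times clock times a constant. It says nothing about whether you can keep those ALUs fed, which is the rest of the argument.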
 
Yes, the company that destroyed all the competition in the prior generation managed to "keep up" in the current generation with the 2nd-place console (a console that had to pay out a billion dollars in rebate support due to RROD), and in the process of doing so lost more money this gen than they made in the previous two generations combined. Ask Sony's board how they feel about that minor "victory". People keep saying Sony has gotten their shit together, but where is this being evidenced? Their insurance business? Yes. Electronics? Nope. I'm anxiously waiting to see how they handle their next console, as well as TVs, computers, and music hardware. I'm not one of those people who will dismiss the Vita as a lark. No way. The Vita epitomizes Sony.

Not really, not in 2013 anyway. Sony circa 2013 is very different from Sony circa 2011.

First, there is Kaz himself. A liberal arts guy who rose through the ranks of Sony's entertainment division isn't your typical Sony executive. He has also replaced all but two of Sony's senior management: huge, huge leadership changes.

Next, Vita is a very small component of gaming, which itself is only one pillar of their core businesses, which are:
  • financial services
  • digital imaging (from broadcast to medical)
  • entertainment (Sony Pictures is the #1 studio right now)
  • gaming
  • mobile

And yes, TVs and many types of consumer electronics in general are missing from that list. So if you are waiting for great new Sony "TVs, computers, and music hardware" to appear, you'll be waiting a long time. The TV business has 2 years to return to profitability (well, 14 months as of today) and could possibly rejoin that core list, but it is doubtful. Amplifiers, headphones, Walkmans, etc. have all been relegated to side businesses; Sony will just milk the name for the over-40 crowd who grew up on Sony gear.

So whatever the Vita is, it isn't the epitome of Sony in 2013. It's a small part, a part that was designed between 2009 and 2010.

But all of that aside, probably the biggest change on the horizon for Sony is the coming devaluation of the Yen (Abe's new BoJ head takes office on Mar. 14, iirc). The strong Yen has been slowly murdering Japan's export economy. If for no other reason than this, Sony will trudge on.

Will Kaz succeed? I have no clue. But if he fails it will be for very different reasons than why Sony has been failing from 2006-2012.
 
I don't really get how people can't separate the design philosophy of the PSV from its business failings, and can't see how much it deviates from the Kutaraging that brought about the PS3.

A high-end and/or $250 dedicated handheld is a fundamentally flawed product, whether that's the PSV or 3DS. And in reality in the not too distant future, a dedicated handheld will probably be a fundamentally flawed product for the market.

That doesn't mean that the actual design/engineering philosophies engaged in making the Vita were bad. They used essentially off-the-shelf components, they consulted with the software side considerably (ensuring ease of development), and they built towards a specific target price. They did everything they didn't do with the PS3, and not doing those things is what made the PS3 a clusterfuck.
 
TFLOPS don't matter, man! They literally tell you zip about how capable a system is. The only people throwing TFLOPS around as a measuring stick are Sony and peeps that don't know one way or the other.

This is patently false.

Both XB3 and PS4 are using GCN GPUs (Cape Verde and Pitcairn derivatives respectively). They are both making similar modifications to extract greater performance by increasing efficiency. So it looks like, with all else in the GPU being equal, the TF advantage exists for the PS4 atm. However, how this difference will manifest itself (maybe AA, resolution, and most likely first-party titles over the coming years) remains to be seen.

This is literally wrong. Particularly when comparing two almost identical GPU architectures, GFLOPs tell you a whole lot about their relative capabilities. Not everything of course, but also not "zip", not by a long shot.

This. I really do not get why people feel so threatened about accepting a fact that is not up for debate (unless MS changes the GPU).
 
This is patently false.

Both XB3 and PS4 are using GCN GPUs (Cape Verde and Pitcairn derivatives respectively). They are both making similar modifications to extract greater performance by increasing efficiency. So it looks like, with all else in the GPU being equal, the TF advantage exists for the PS4 atm. However, how this difference will manifest itself (maybe AA, resolution, and most likely first-party titles over the coming years) remains to be seen.



This. I really do not get why people feel so threatened about accepting a fact that is not up for debate (unless MS changes the GPU).

Both seem to be Pitcairn derivatives to me; the Durango GPU can do 2 triangles/verts a clock.

If you look here:
http://www.amd.com/jp/Documents/GCN_Architecture_whitepaper.pdf
you'll see that Cape Verde only has 1 setup engine, which afaik means 1 triangle/vert a clock.
 
This is literally wrong. Particularly when comparing two almost identical GPU architectures, GFLOPs tell you a whole lot about their relative capabilities. Not everything of course, but also not "zip", not by a long shot.

Yet again, arguing which architecture is more powerful by simply relying on a metric like "floating point operations per second" when you REALLY don't know just how "identical" they are? MS hasn't announced anything yet, correct me if I'm wrong?

Raw peak numbers like that are meaningless unless you have a better idea of the complete picture.
 
KidBeta said:
PSGL is built on libgcm after all.
That wasn't always the case though.

Anyway, LibGcm has one differentiating advantage on PS3: it comes with source, so if you have an issue with its thickness, performance, or interface, you can always roll your own.
IIRC that's what people did early on anyway to do push-buffer building on SPEs.

But from an API usability perspective that's not exactly a great argument, else PS2 would be the ultimate winner there...

oldergamer said:
when you REALLY don't know just how "identical" they are? MS hasn't announced anything yet, correct me if I'm wrong?
Well to be fair these are just rumor discussion threads. There wouldn't be much room for public debate if everyone agreed to just wait for official details...
 
Yet again, arguing which architecture is more powerful by simply relying on a metric like "floating point operations per second" when you REALLY don't know just how "identical" they are? MS hasn't announced anything yet, correct me if I'm wrong?

Raw peak numbers like that are meaningless unless you have a better idea of the complete picture.

Whilst MS has not announced anything, I believe that vgleaks is accurate; they seem to have multiple sources and their information has been backed up by others and by other documents. If vgleaks is correct it means they are very comparable :).

That wasn't always the case though.

Anyway, LibGcm has one differentiating advantage on PS3: it comes with source, so if you have an issue with its thickness, performance, or interface, you can always roll your own.
IIRC that's what people did early on anyway to do push-buffer building on SPEs.

But from an API usability perspective that's not exactly a great argument, else PS2 would be the ultimate winner there...

Interesting, was PSGL for the PS3 always built on libgcm, or are you talking about the PS2 version? Also, if you have worked on both, could you comment on their different levels of abstraction?
 
Whilst MS has not announced anything, I believe that vgleaks is accurate; they seem to have multiple sources and their information has been backed up by others and by other documents. If vgleaks is correct it means they are very comparable :).

It's slightly possible the eSRAM could be used to jack up the efficiency of the Durango GPU by virtue of being a low-latency cache. Maybe a long shot, and unknown how much it could help, but still.

That would be an example of differing architectures coming into play.

This is likely the best candidate for Durango's "special sauce" if any exists imo, not the DMEs. But nobody seems to mention it or acknowledge the possibility it even exists except me (and I only know because ERP on B3D wrote a series of speculative posts on it a few weeks ago).

ERP seems to think shaders are often memory-latency bound even in the most modern GPUs, and having a 32 MB low-latency cache sitting on the GPU could theoretically improve efficiency. Another example of this that has been cited is Nvidia GPUs being more efficient flop for flop than AMD's, partly because they apparently have more/lower-latency caches.
 
Yet again, arguing which architecture is more powerful
Where did he argue "which architecture was more powerful"? He was assuming similar or identical architectures (which, given the leaks so far, seems to be the case apart from the memory set-up), and stating that, given that similarity, yes, FLOPS are a useful metric for comparison.
 
It's slightly possible the eSRAM could be used to jack up the efficiency of the Durango GPU by virtue of being a low-latency cache. Maybe a long shot, and unknown how much it could help, but still.

That would be an example of differing architectures coming into play.

It might help a little, but modern GPU archs are designed around dealing with high-latency RAM, so I wouldn't, for example, expect anything over single-digit gains aside from corner cases.
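To put some rough numbers behind that: a GCN CU hides memory latency by switching between resident wavefronts, so lower-latency memory mostly pays off when there aren't enough wavefronts (or enough independent ALU work) in flight to cover a miss. A toy sketch, where every figure is an illustrative placeholder rather than a real Durango/PS4 spec:

def latency_hidden(waves_in_flight, alu_cycles_per_load, mem_latency_cycles):
    # While one wavefront waits on memory, the SIMD can run ALU work
    # from the other resident wavefronts.
    return (waves_in_flight - 1) * alu_cycles_per_load >= mem_latency_cycles

print(latency_hidden(10, 40, 300))  # high occupancy: a ~300-cycle miss is fully hidden -> True
print(latency_hidden(3, 40, 300))   # register-heavy shader, few wavefronts resident -> False

Only the second kind of case is where a low-latency pool like the eSRAM would claw much back, which is why I'd expect the gains to stay in the single digits outside corner cases.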
 
Both seem to be Pitcairn derivatives to me; the Durango GPU can do 2 triangles/verts a clock.

If you look here:
http://www.amd.com/jp/Documents/GCN_Architecture_whitepaper.pdf
you'll see that Cape Verde only has 1 setup engine, which afaik means 1 triangle/vert a clock.

So you are stating that, because the triangle setup rate is around 1.6B/sec @ ~800MHz as opposed to the 7770's 1B/sec @ 1000MHz, it is a Pitcairn derivative. You could be right.
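Just to spell out where those two numbers come from (assuming, as suggested above, one triangle per setup engine per clock; a rough sketch based on the rumoured clocks, not an official spec):

def tris_per_sec(setup_engines, clock_mhz):
    # One triangle per setup engine per clock.
    return setup_engines * clock_mhz * 1_000_000

print(tris_per_sec(2, 800))    # rumoured Durango: 2 engines @ 800MHz -> 1.6 billion/sec
print(tris_per_sec(1, 1000))   # HD 7770 (Cape Verde): 1 engine @ 1000MHz -> 1.0 billion/sec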
 
This is patently false.
Both XB3 and PS4 are using GCN GPUs (Cape Verde and Pitcairn derivatives respectively). They are both making similar modifications to extract greater performance by increasing efficiency. So it looks like, with all else in the GPU being equal, the TF advantage exists for the PS4 atm. However, how this difference will manifest itself (maybe AA, resolution, and most likely first-party titles over the coming years) remains to be seen.

Patently false? Sorry, but you are incorrect there. Again, nothing has been announced yet, so there's no way it can be "false" yet. What I said stands: the media (most of which haven't reported on the rumor because there's no proof yet), forum dwellers, and Sony are the only people going around saying 1.8 is better than 1.2 (or whatever it's rumored at).

FLOPS don't tell you whether peak performance is achievable (if at all), they don't tell you how optimal the design is, and they don't tell you about any overhead that could impact the performance of a game. The PS3 had a higher FLOP rating than the 360, yet its peak wasn't achievable. Peak numbers are 98% of the time theoretical and meaningless when comparing against a competing product released in the same time frame. Adding up the FLOP values of all components is just as stupid as adding up bandwidth numbers without understanding how the components are connected. No game is ever going to use 100% of a system to calculate floating point operations; if you did that you wouldn't be playing a game! Fact.

This. I really do not get why people feel so threatened about accepting a fact that is not up for debate (unless MS changes the GPU).

Again, why are you using the word "fact" when it's clearly not true?
 
Patently false? Sorry, but you are incorrect there. Again, nothing has been announced yet, so there's no way it can be "false" yet. What I said stands: the media (most of which haven't reported on the rumor because there's no proof yet), forum dwellers, and Sony are the only people going around saying 1.8 is better than 1.2 (or whatever it's rumored at).

FLOPS don't tell you whether peak performance is achievable (if at all), they don't tell you how optimal the design is, and they don't tell you about any overhead that could impact the performance of a game. The PS3 had a higher FLOP rating than the 360, yet its peak wasn't achievable. Peak numbers are 98% of the time theoretical and meaningless when comparing against a competing product released in the same time frame. Adding up the FLOP values of all components is just as stupid as adding up bandwidth numbers without understanding how the components are connected. No game is ever going to use 100% of a system to calculate floating point operations; if you did that you wouldn't be playing a game! Fact.

The last confirmed stuff we had was from last year, but from what we know not much is going to change. The 1.2 TFLOPS figure is from the Durango conference last year, which showed a stock-standard 12 CU GCN1 GPU @ 800MHz.
 
It might help a little, but modern GPU archs are designed around dealing with high-latency RAM, so I wouldn't, for example, expect anything over single-digit gains aside from corner cases.

ERP addressed that very counterpoint :)

The CUs are only latency tolerant to a point; when cache misses are common they just sit idle. The maximum number of in-flight threads is limited by the register pool, and it's certainly not enough to offset common cache misses.
It's very easy to write memory-bound compute jobs; it's not uncommon for first attempts to run slower than a CPU with 1/100th of the FP performance. And I suspect, given how many textures are read for the average pixel these days, that a lot of modern shaders are more memory bound than they are ALU bound. I guess the easy way to check is to swap all the input textures for 1x1 textures and see if they run faster.
But overall it's hard to quantify how much of an advantage it would really give you.

The last confirmed stuff we had was from last year, but from what we know not much is going to change. The 1.2 TFLOPS figure is from the Durango conference last year, which showed a stock-standard 12 CU GCN1 GPU @ 800MHz.

It seems somewhat possible for MS to jack the clock up to 1GHz. Wishful thinking, but yeah.

That would give Durango the equivalent of 15 CUs @ 800MHz instead of 12.

Theoretically of course Sony could do the same thing, but my thinking is MS would be a lot more desperate for something to close the performance gap after Sony's 8GB announcement.

I see 7770s on Newegg factory-overclocked to 1100 and 1120MHz at no price premium, and 1GHz is the standard clock of the 7770. It may be a stretch, but 1GHz would seem doable. I suppose cooling, power draw, and yields would be the concerns.
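As a quick sanity check on that equivalence (ALU throughput scales with CUs x clock, so for raw FLOPS a clock bump is interchangeable with extra CUs, even though fill rate, bandwidth, etc. scale differently; the 1GHz figure is purely hypothetical):

cus, base_mhz, boosted_mhz = 12, 800, 1000   # rumoured 12 CUs; hypothetical 1GHz bump
print(cus * boosted_mhz / base_mhz)          # 15.0 -> 12 CUs @ 1GHz ~ 15 CUs @ 800MHz
print(cus * 64 * 2 * boosted_mhz / 1000)     # 1536.0 GFLOPS peak at the hypothetical 1GHz clock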
 
Well to be fair these are just rumor discussion threads. There wouldn't be much room for public debate if everyone agreed to just wait for official details...

The KidBeta posts really weren't rumor; he's clearly a fan of Sony and likes to argue for their perspective. I see a trend of people throwing the word "fact" around a lot when talking about rumors, and that irks me when we don't have any actual facts just yet (for both sides).
 
ERP addressed that very counterpoint :)

Interesting, so pretty much we don't know. :D I have a feeling that if it made any huge difference we would be seeing eSRAM on desktop cards, though.

The KidBeta posts really weren't rumor; he's clearly a fan of Sony and likes to argue for their perspective. I see a trend of people throwing the word "fact" around a lot when talking about rumors, and that irks me when we don't have any actual facts just yet.

Well, this is a rumor thread, so deal with it.
 
Patently false? Sorry, but you are incorrect there. Again, nothing has been announced yet, so there's no way it can be "false" yet. What I said stands: the media (most of which haven't reported on the rumor because there's no proof yet), forum dwellers, and Sony are the only people going around saying 1.8 is better than 1.2 (or whatever it's rumored at).

FLOPS don't tell you whether peak performance is achievable (if at all), they don't tell you how optimal the design is, and they don't tell you about any overhead that could impact the performance of a game. The PS3 had a higher FLOP rating than the 360, yet its peak wasn't achievable. Peak numbers are 98% of the time theoretical and meaningless when comparing against a competing product released in the same time frame. Adding up the FLOP values of all components is just as stupid as adding up bandwidth numbers without understanding how the components are connected. No game is ever going to use 100% of a system to calculate floating point operations; if you did that you wouldn't be playing a game! Fact.



Again, why are you using the word "fact" when it's clearly not true?

So you want to play the "official announcement" game. If so, then all this discussion is utterly pointless and a waste of forum space until that rumoured time in April when XB3 is finally unveiled.

Now, going by the rumour, while peak theoretical figures may be meaningless in a vacuum, they are not when comparing two GPUs that share the same design philosophy to state which one is better.

1.84TF > rumoured 1.23TF. I really do not know how much simpler it can get than that. If you are trying to claim that there would not be any discernible real-world performance "difference", then one only needs to look at the real-world performance differences between any two graphics cards on PC based on the same architecture to see how untrue that is (7770 vs 7750, 7870 vs 7850, 7970 vs 7950, etc.).

What this rumoured ~600GFLOPS difference would manifest as in a console environment, where unlike on PC the fps is generally locked at either 60 or 30, remains to be seen.
 
So you want to play the "official announcement" game. If so, then all this discussion is utterly pointless and a waste of forum space until that rumoured time in April when XB3 is finally unveiled.

Now, going by the rumour, while peak theoretical figures may be meaningless in a vacuum, they are not when comparing two GPUs that share the same design philosophy to state which one is better.

1.84TF > rumoured 1.23TF. I really do not know how much simpler it can get than that. If you are trying to claim that there would not be any discernible real-world performance "difference", then one only needs to look at the real-world performance differences between any two graphics cards on PC based on the same architecture to see how untrue that is (7770 vs 7750, 7870 vs 7850, 7970 vs 7950, etc.).

What this rumoured ~600GFLOPS difference would manifest as in a console environment, where unlike on PC the fps is generally locked at either 60 or 30, remains to be seen.


These threads have displayed to me that people will believe what they want, even in the face of overwhelming evidence.
 