WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

It is explained a couple of pages earlier, which is where I saw how to calculate the polygon numbers. It is very easy actually: it is 1 polygon per clock per engine. This is for modern AMD GPUs; GCN and Cayman (HD 6900) handle 2 polygons per cycle.
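For anyone following along, here's the back-of-envelope version of that rule of thumb. These are theoretical setup ceilings only; the dual-engine case is just the speculation in this thread, not a confirmed spec for Latte.

# Rough peak triangle setup from "1 polygon per clock per setup engine".
def peak_tris_per_sec(clock_hz, tris_per_clock=1, engines=1):
    return clock_hz * tris_per_clock * engines

MHZ = 1_000_000
print(peak_tris_per_sec(550 * MHZ))                    # single engine at 550 MHz: 550M/s
print(peak_tris_per_sec(550 * MHZ, engines=2))         # speculative dual engine: 1.1B/s
print(peak_tris_per_sec(880 * MHZ, tris_per_clock=2))  # Cayman (HD 6970) at 2/clock: 1.76B/s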

If I'm reading this right, there is an actual component within the GPU with its own clock for polygons? Sorry, this just doesn't add up to me. If that was how it worked, then wouldn't the PS2 have been capable of 147 million, the GC 162 million, and the Xbox1 233 million?

By what you're telling me, the PS3's RSX should be outputting more than the 360 because it's clocked at 550 MHz, but the numbers I found for it were 333 million max.
 
Also, how large would 8-bit be on a 40nm chip?
Forgot about this part. I have no idea, because that's all we know. Like I mentioned, my guess is that it's the Command Processor.

I recall the method of doubling the polygons per hertz being mentioned earlier in this thread, and I asked if this GPU could be using it. It would be interesting if it did. It would certainly explain the Bayonetta 2 numbers.

If it is the case I look forward to when games take advantage of it.

I keep hearing that Ninja Gaiden has models with 122k polygons. If that is the case, 192k would be likely if they are going after the same effect with all weapons and clothing like the NG model. Though I haven't looked into the numbers myself, I just keep seeing them posted, including in this thread pages back.

Also BG, that is interesting. I do think the GPU is probably custom. I'm not saying it has to be odd, and I don't think they have moved away from VLIW, because from a coding standpoint that would be very easy for devs to figure out and we would probably have heard something about it by now.

Ideaman's info, which I heard about a couple weeks ago (from him), probably explains some of the launch games' performance and why developers complained about the CPU. While I don't think they would call it particularly amazing, I do think it was probably enough to handle ports from the 360 without much problem, as long as SIMD tasks were passed along to the GPU when needed. It also suggests that using early launch ports for comparisons is dishonest or pointless in many cases.

Yeah I just saw Ideaman's info. It does make one wonder, like he mentioned, how widespread this problem was.

I still have my doubts about Latte having a VLIW architecture. Like you said earlier, there are inefficiencies to it. In fact, I finally found the article I had been looking for on and off for about a year; I came across it once, didn't bookmark it, and could not remember how I found it. This article is how I learned that Xenos was not VLIW-based.

http://techreport.com/review/8342/details-of-ati-xbox-360-gpu-unveiled

I asked Feldstein whether the shaders themselves are, at the hardware level, actually more general than those in current graphics chips, because I expected that they would still contain a similar amount of custom logic to speed up common graphics operations. To my surprise, he said that the shaders are more general in hardware. At the outset of the project, he said, ATI hired a number of compiler experts in order to make sure everything would work right, and he noted that Microsoft is no slouch when it comes to compilers, either. Feldstein said Microsoft "made a great compiler for it."

At this point, Feldstein paused quickly to note that this GPU was not a VLIW machine, apparently reminded of all of the compiler talk surrounding a certain past competitor. (The GeForce FX was, infamously, a VLIW machine with some less-than-desirable performance characteristics, including an extreme sensitivity to compiler instruction tuning.) He was quite confident that the Xbox 360 GPU will not suffer from similar problems, and he claimed the relative abundance of vertex processing power in this GPU should allow objects like fur, feathers, hair, and cloth to look much better than past technology had allowed. Feldstein also said that character skin should look great, and he confirmed to me that real-time subsurface scattering effects should be possible on the Xbox 360.

I'm not thinking anything too crazy either. I don't see suggesting Latte is a non-VLIW GPU with a dual engine as being way out there when looking at the die shot along with past and current designs that passed on VLIW architecture.
 
BG only back one day and we got the special sauce flowing...lol

Man you guys never learn. hahahahaha

Yeah, it's gonna be difficult to search for. I'll try, but here are some posts that I think refer to that prior rumor.

This one is posting news discounting the power of the Wii U:

http://www.neogaf.com/forum/showpost.php?p=36586328&postcount=13566

And here is a response from one of our favorite semi-insiders in the WUST thread

http://www.neogaf.com/forum/showpost.php?p=36586657&postcount=13588

Wow, good find. Goes with the 160 shaders, and that is very, very likely what the Wii U has.

The comments after that story are just golden. Too funny...
 
I still don't understand how these calculations work.

I've never known polygon counts to scale with clock rate the way you're listing it. I always thought there was something else to it. This has confused me since earlier in the thread. Is this a more modern thing?
There's all kinds of stuff that might slow you down. An obvious one is vertex shaders; you're not going to push very many polygons through if you're doing a ton of calculations for each vertex.

Peak triangle throughput rates should be much higher than average rates so that you never, ever bottleneck on triangle setup when you're pushing triangles. If the rest of the GPU is ready to start chomping on a polygon, you don't want to be wasting time trying to throw a polygon at it.

The PS2's clock was 147.456 MHz, but its max polygon capability was 50 million, and the most ever achieved in an actual game was 10 million.

The GC's graphics clock was 162 MHz, but its peak polygon count was 110 million, and the highest achieved was 20 million at 60 FPS.

The Xbox1's graphics clock was 233 MHz, but its peak polygon count was 120 million, and the most ever achieved in a game was 12 million at 30 FPS.
You should really be listing it as polygons/s; the way you've listed it, it sounds like you're claiming the GameCube was pushing 1,200,000,000 polygons per second, which is ridiculous. (By the way, where do people get solid polygon throughput numbers in games? Those aren't figures that get stated openly very often, and resources about particular models are sparse even for anything that isn't openly moddable.)
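To make the ambiguity concrete, here's the arithmetic both readings imply, using the 20 million GC figure above:

# "20 million at 60 FPS" read two ways:
polys = 20_000_000
fps = 60
print(polys * fps)  # if 20M is per frame: 1,200,000,000 polys/s
print(polys)        # if 20M is already per second: 20,000,000 polys/s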
 
There's all kinds of stuff that might slow you down. An obvious one is vertex shaders; you're not going to push very many polygons through if you're doing a ton of calculations for each vertex.

Peak triangle throughput rates should be much higher than average rates so that you never, ever bottleneck on triangle setup when you're pushing triangles. If the rest of the GPU is ready to start chomping on a polygon, you don't want to be wasting time trying to throw a polygon at it.


You should really be listing it as polygons/second, and possibly dropping the framerate, as it's not relevant to the actual throughput; the way you've listed it, it sounds like you're claiming the GameCube was pushing 1,200,000,000 polygons per second, which is ridiculous. (By the way, where do people get solid polygon throughput numbers in games? Those aren't figures that get stated openly very often, and resources about particular models are sparse even for anything that isn't openly moddable.)

I have it stated that the first figures were peak maximums. The PS2 numbers were given by Sony and the Xbox1 numbers by Microsoft. I got the GC numbers from a writeup that Lostinblue linked me to a while back.

For the secondary numbers I was listing what was actually achieved in games, and the FPS is there because the frame rate makes a big difference in performance. The reason I don't have the FPS listed for the PS2 number is that, as I recall, the frame rate was very unstable in the area of the game where it was achieved.
 
That article was prior to the spec bump and the API improvements. At the time, it was likely true.

Let me guess, was this from a firmware update too? lol jk

Do we have a real source and not some fake insider source on this upgrade?

Anyway, a spec bump wouldn't increase the shader count unless they redesigned the chips in the console. Not having as many shaders as the PS3/360 would match what we are seeing today.

"There aren't as many shaders, it's not as capable. Sure, some things are better, mostly as a result of it being a more modern design. But overall the Wii U just can't quite keep up."
So maybe the spec bump was ~100MHz and it's still the same chip, but now its performance is better.
 
Let me guess, was this from a firmware update too? lol jk

Do we have a real source and not some fake insider source on this upgrade?

Anyway, a spec bump wouldn't increase the shader count unless they redesigned the chips in the console. Not having as many shaders as the PS3/360 would match what we are seeing today.

So maybe the spec bump was ~100MHz and it's still the same chip, but now its performance is better.

This has been covered many times. The clock bump is not enough to achieve the results we've seen in games with a shader count that low. That would require some actual "secret sauce" like you keep stating.

Nothing short of magic will produce the shading enhancements that have been made to ports from other consoles and in games like ZombiU if it has that many fewer shaders.

I just don't see you getting this out of 160 shaders.
http://www.youtube.com/watch?v=6OHUwDShrD4
You know, I just noticed that from 2:15 onward in this video, you can see some of that ZombiU-style lighting.
 
Let me guess, was this from a firmware update too? lol jk

Do we have a real source and not some fake insider source on this upgrade?

Anyway, a spec bump wouldn't increase the shader count unless they redesigned the chips in the console. Not having as many shaders as the PS3/360 would match what we are seeing today.

So maybe the spec bump was ~100MHz and it's still the same chip, but now its performance is better.

That article was from over a year ago. At some point in mid 2012, the specs went from 1Ghz CPU/400Mhz GPU to 1.24Ghz CPU/550Mhz GPU, and the API was literally painful to use until just before launch, like right round the time launch software was going gold.

I don't think developers commenting in April of 2012 had a full picture of what the Wii U is capable of. I also think that at the time they were probably correct.
 
This has been covered many times. The clock bump is not enough to achieve the results we've seen in games with a shader count that low. That would require some actual "secret sauce" like you keep stating.

Nothing short of magic will produce the shading enhancements that have been made to ports from other consoles and in games like ZombiU if it has a shader reduction on that scale.

I do not agree with any of those statements at all. I think it matches perfectly with the results we have seen in Wii U games.

Those improvements are "mostly as a result of it being a more modern design," as has been stated many times in this thread and in that quote from a dev.

The tech in the PS3/360 is from 2005! Think about that for a second...

That article was from over a year ago. At some point in mid-2012, the specs went from a 1GHz CPU/400MHz GPU to a 1.24GHz CPU/550MHz GPU, and the API was literally painful to use until just before launch, right around the time launch software was going gold.

I don't think developers commenting in April of 2012 had a full picture of what the Wii U is capable of. I also think that at the time they were probably correct.
Sounds like just newer dev kits. As I said, the statement about power might be incorrect, but the shader count would not have changed unless the chips were redesigned. Based on everything we have found, it matches what the dev stated. The chips were not redesigned, and we have a 160-shader part that is less than the PS3/360.
 
Let me guess, was this from a firmware update too? lol jk

Do we have a real source and not some fake insider source on this upgrade?

Anyway, a spec bump wouldn't increase the shader count unless they redesigned the chips in the console. Not having as many shaders as the PS3/360 would match what we are seeing today.

So maybe the spec bump was ~100MHz and it's still the same chip, but now its performance is better.

I do not agree with any of those statements at all. I think it matches perfectly with the results we have seen in Wii U games.

Those improvements are "mostly as a result of it being a more modern design," as has been stated many times in this thread and in that quote from a dev.

The tech in the PS3/360 is from 2005! Think about that for a second...

"mostly as a result of it being a more modern design." That doesn't mean anything.


How does it being more modern tech make it achieve such results? What aspects of its modernness cause 160 shaders to outperform 215 in such a way with only a 50MHz clock difference? Please explain this to me.

Being a modern design does not always equal better performance. More often than not, older designs get better performance per numerical measurement. Like how a fixed-function shader can get better watt-for-watt performance than a modern one, or how a Pentium 3 gets better performance watt for watt than a Pentium 4, or how Espresso gets better performance watt for watt than Xenon and Cell. Older tech usually allows you to do more with smaller numbers.
 
I do not agree with any of those statements at all. I think it matches perfectly with the results we have seen in Wii U games.

Those improvements are "mostly as a result of it being a more modern design," as has been stated many times in this thread and in that quote from a dev.

The tech in the PS3/360 is from 2005! Think about that for a second...

Think about what? You keep blustering and arm-waving about what we've seen like it's the best the console will ever produce, as if there weren't real, debilitating problems with Nintendo's development environment. We get it, the Wii U is nowhere near as powerful as PS4 and Durango. I think most people here are over it. What we're trying to accurately gauge is exactly what's in it, because it's kind of fascinating to try to deduce something from little to no information.

I find BG's theory about a dual graphics engine interesting, but I still think Fourth Storm's analysis is the closest to plausible, unless we're wrong about the densities and this is a 320-shader part. Plus, one shader from 7 years ago is not the same as one from a newer architecture. Even if it were 160, it should still outperform both the 360 and the PS3 when the memory architecture is taken into account. It will likely shine in first- and second-party output, where engines are designed around the machine. Even Unity-based games will probably perform better than what we saw from UE3 at launch.
 
"mostly as a result of it being a more modern design." That doesn't mean anything.


How does it being more modern tech make it achieve such results? What aspects of its modernness cause 160 shaders to outperform 215 in such a way with only a 50MHz clock difference? Please explain this to me.

Being a modern design does not always equal better performance. More often than not, older designs get better performance per numerical measurement. Like how a fixed-function shader can get better watt-for-watt performance than a modern one, or how a Pentium 3 gets better performance watt for watt than a Pentium 4, or how Espresso gets better performance watt for watt than Xenon and Cell. Older tech usually allows you to do more with smaller numbers.

Xenos and RSX were hampered by having architectural quirks that limited their performance. In both cases, it was memory related. PS3 had split memory pools that required much more complicated resource management, and the limited size and bus dynamics of the EDRAM in the 360 meant frequent reading and writing out to main memory. Both situations caused increased latency and wasted cycles on both GPUs. In theory at least, Wii U should have better throughput when the cache setup is leveraged.
 
Xenos and RSX were hampered by having architectural quirks that limited their performance. In both cases, it was memory related. PS3 had split memory pools that required much more complicated resource management, and the limited size and bus dynamics of the EDRAM in the 360 meant frequent reading and writing out to main memory. Both situations caused increased latency and wasted cycles on both GPUs. In theory at least, Wii U should have better throughput when the cache setup is leveraged.

I thought the running theory was that the Wii U had a lot of issues with properly utilizing its hardware compared to the other consoles, on top of devs being less familiar with the architecture, and that those issues are still being ironed out? Trine 2 was doing better shading than what was seen on the 360/PS3 GPUs before launch, and that was before any performance bumps and before problems were fixed.

No matter how I leverage the math, 160 pulling off what we've seen to date just seems ludicrous, with the exception of Fourth Storm's fixed-function theory.
 
Right. How about you tell us your view on the die shot instead of making these types of posts?

And I'm not talking about just ALUs, TMUs, and ROPs.

Besides the parts that matter? lol of course...

I do not believe in the "special sauce!" The GPU die houses a lot more than just the GPU. It houses the north/southbridge functionality, the USB controller, the DSP, and other parts just for BC. People have wrongly assumed that everything on the die is just for the GPU.

The main parts are the ALUs, TMUs, and ROPs, just like any other GPU.
 
Right. How about you tell us your view on the die shot instead of making these types of posts?

And I'm not talking about just ALUs, TMUs, and ROPs.

I was going to say, everything in this thread is speculation. It is up to individuals to evaluate what makes the most sense.
Your idea is interesting and somewhat makes sense looking at the die shot. Does the average GPU have doubles of anything? Some of the doubles also have their own unique characteristics (is it normal for doubles not to look exactly the same?). Also, another question: are you positive your labels of the GPU sections are accurate? (It is very interesting how you, Fourth, and others are able to identify parts like that. I'm not the biggest tech head, but this thread is interesting.)
 
^ With the blocks I've seen, ignoring ALUs, ROPs, and caches, no. And unfortunately there is no die shot of Cayman to confirm or disprove the idea.

When it came to labeling, I (and others) looked at block sizes and SRAM layouts to the best of our ability. That's how I decided the blocks were what they were. With no one really having labeled these GPUs in the past, it's really just guesswork.

Besides the parts that matter? lol of course...

I do not believe in the "special sauce!" The GPU die houses a lot more than just the GPU. It houses the north/southbridge functionality, the USB controller, the DSP, and other parts just for BC. People have wrongly assumed that everything on the die is just for the GPU.

The main parts are the ALUs, TMUs, and ROPs, just like any other GPU.

I don't believe in "special sauce" either. I don't like that term. I didn't like it when it was being used for Xbox 3 either. That doesn't mean customizations can't be speculated on.

So where are the TMUs and ROPs in your opinion? What are all the duplicate blocks to you? I'd rather engage in discussion than take jabs at others' views.
 
Besides the parts that matter? lol of course...

I do not believe in the "special sauce!" The GPU die houses a lot more than just the GPU. It houses the north/southbridge functionality, the USB controller, the DSP, and other parts just for BC. People have wrongly assumed that everything on the die is just for the GPU.

The main parts are the ALUs, TMUs, and ROPs, just like any other GPU.

So you think the SPUs just have 30-40% more room than needed for 160 ALUs, just because? I was under the impression that you thought a 30-32 ALUs per SPU fit, though you were told it was unlikely.

If they are using something other than VLIW5 (VLIW4, for instance, or some sort of VLIW5 setup that hasn't been used before), or have even moved away from VLIW altogether, that would allow for this. Especially since we can't be sure of the density of the register files.
 
I thought the running theory was that the Wii U had a lot of issues with properly utilizing its hardware compared to the other consoles, on top of devs being less familiar with the architecture, and that those issues are still being ironed out? Trine 2 was doing better shading than what was seen on the 360/PS3 GPUs before launch, and that was before any performance bumps and before problems were fixed.

No matter how I leverage the math, 160 pulling off what we've seen to date just seems ludicrous, with the exception of Fourth Storm's fixed-function theory.

I think the biggest hurdles with Wii U are unfamiliarity and the tools. I wish a developer could comment, so we didn't have to read so far into Criterion's comments, but it seems a lot better tools were available post launch.

I'm interested to see how an engine performs when it's built from the ground up to the system's strengths, much like all the work Epic did for optimizing UE3 on Xbox 360.
 
I was going to say, everything in this thread is speculation. It is up to individuals to evaluate what makes the most sense.
Your idea is interesting and somewhat makes sense looking at the die shot. Does the average GPU have doubles of anything? Some of the doubles also have their own unique characteristics (is it normal for doubles not to look exactly the same?). Also, another question: are you positive your labels of the GPU sections are accurate? (It is very interesting how you, Fourth, and others are able to identify parts like that. I'm not the biggest tech head, but this thread is interesting.)

Yes, other modern AMD GPUs have doubles of a lot of things, and the hand layout of the die would account for them not looking identical. We can see this in the SPU blocks looking different from each other, especially the top-right one.
 
Right. How about you tell us your view on the die shot instead of making these types of posts?

And I'm not talking about just ALUs, TMUs, and ROPs.


BG, with all these processes and numbers being thrown around, what about the 160 vs 320 shaders debate? Which one is most likely in your opinion?

In my view, the Wii U can't have that many "secret" functions that Nintendo is not sharing with developers at this point. Nintendo should be showing Third Parties exactly how to get the most out of the GPU. Simply showing how great their First Party software looks is not enough.
 
I don't believe in "special sauce" either. I don't like that term. I didn't like it when it was being used for Xbox 3 either. That doesn't mean customizations can't be speculated on.

So where are the TMUs and ROPs in your opinion? What are all the duplicate blocks to you? I'd rather engage in discussion than take jabs at others' views.
Doesn't matter what I think the blocks are or not. It's all silly guesswork. It only leads to baseless speculation. After all these months, many people a lot smarter than me have looked at this thing, and we are no closer than what Fourth Storm found and posted many weeks ago.

It would be one thing if anyone here had any idea what most of these could even be, or could prove it correct. The big-picture things are the easiest to work with [ALUs, TMUs, or ROPs], and those are the things you don't seem to want to debate.
So you think the SPUs just have 30-40% more room than needed for 160 ALUs, just because? I was under the impression that you thought a 30-32 ALUs per SPU fit, though you were told it was unlikely.

If they are using something other than VLIW5 (VLIW4, for instance, or some sort of VLIW5 setup that hasn't been used before), or have even moved away from VLIW altogether, that would allow for this. Especially since we can't be sure of the density of the register files.
Yes, but now I do not think that is right. It doesn't match the other factors of the GPU that Fourth Storm has found. On Beyond3D it was shot down by the very next post in that thread. It wasn't even debated at all...

The simplest solution is that they spread out the ALUs to reduce heat. Occam's razor and all... this is the most likely reason. Now we have a comment directly from a dev stating the Wii U has fewer shaders. If this is correct, then there is nothing against it being a 160-ALU part. Everything we see just fits.
 
BG, with all these processes and numbers being thrown around, what about the 160 vs 320 shaders debate? Which one is most likely in your opinion?

In my view, the Wii U can't have that many "secret" functions that Nintendo is not sharing with developers at this point. Nintendo should be showing Third Parties exactly how to get the most out of the GPU. Simply showing how great their First Party software looks is not enough.

He's stated that 256 ALUs is the minimum in his opinion. That would mean each SPU has 32 ALUs. Personally, 160 ALUs seems too few at 550MHz to outperform the 240 ALUs in Xenos. We might never know the answer, but at the moment it is all guesswork.
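For reference, here's the usual back-of-envelope shader math behind that comparison, assuming the common "ALUs x 2 flops (MADD) per clock" accounting. The 160 and 320 ALU figures for Latte are the hypotheses being debated, not confirmed specs.

# Rough programmable-shader throughput (GFLOPS) under the usual MADD accounting.
def gflops(alus, clock_mhz, flops_per_alu_per_clock=2):
    return alus * flops_per_alu_per_clock * clock_mhz / 1000

print(gflops(240, 500))  # Xenos: 240 ALUs @ 500 MHz -> 240 GFLOPS
print(gflops(160, 550))  # hypothetical 160-ALU Latte @ 550 MHz -> 176 GFLOPS
print(gflops(320, 550))  # hypothetical 320-ALU Latte @ 550 MHz -> 352 GFLOPS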
 
BG, with all these processes and numbers being thrown around, what about the 160 vs 320 shaders debate? Which one is most likely in your opinion?

In my view, the Wii U can't have that many "secret" functions that Nintendo is not sharing with developers at this point. Nintendo should be showing Third Parties exactly how to get the most out of the GPU. Simply showing how great their First Party software looks is not enough.

The answer is: neither. AzaK posted BG's theory a few pages back.
 
Doesn't matter what I think the blocks are or not. It's all silly guesswork. It only leads to baseless speculation. After all these months, many people a lot smarter than me have looked at this thing, and we are no closer than what Fourth Storm found and posted many weeks ago.

It would be one thing if anyone here had any idea what most of these could even be, or could prove it correct. The big-picture things are the easiest to work with [ALUs, TMUs, or ROPs], and those are the things you don't seem to want to debate.

Yes, but now I do not think that is right. It doesn't match the other factors of the GPU that Fourth Storm has found.

The simplest solution is that they spread out the ALUs to reduce heat. Occam's razor and all... this is the most likely reason. Now we have a comment directly from a dev stating the Wii U has fewer shaders. If this is correct, then there is nothing against it being a 160-ALU part. Everything we see just fits.

Which dev said that? Considering the low clock on a 40nm process, I don't think spreading the ALUs out would gain much of a benefit. Maybe if they were pushing for high clocks it would matter, but it doesn't make much sense considering how tightly everything else is packed, and the low clock of the GPU points to that being just a bad assumption.
 
Which dev said that? Considering the low clock on a 40nm process, I don't think spreading the ALUs out would gain much of a benefit. Maybe if they were pushing for high clocks it would matter, but it doesn't make much sense considering how tightly everything else is packed, and the low clock of the GPU points to that being just a bad assumption.

It was just reposted today.


"There aren't as many shaders, it's not as capable. Sure, some things are better, mostly as a result of it being a more modern design. But overall the Wii U just can't quite keep up."

http://www.gamesindustry.biz/articl...ess-powerful-than-ps3-xbox-360-developers-say

The comment is so direct, and that's rare. Most of the time it's just "the GPU is better" or "worse" or whatever. No, it stated it had fewer shaders. Now look where we are at with the debate... Only ONE solution makes this statement correct.

If you look up a couple of posts, people said the dev kits' GPU was clocked at 400MHz when this was written. Now, based on the other facts we have found, it just makes perfect sense.
That article was from over a year ago. At some point in mid-2012, the specs went from a 1GHz CPU/400MHz GPU to a 1.24GHz CPU/550MHz GPU, and the API was literally painful to use until just before launch, right around the time launch software was going gold.

I don't think developers commenting in April of 2012 had a full picture of what the Wii U is capable of. I also think that at the time they were probably correct.

After the 150MHz boost, the Wii U GPU's performance would change by about 37% if it scaled with the clock rate increase. I think if we downclocked the Wii U by 37% we would see worse results than what the PS3/360 could do...
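Worked out, the clock change looks like this (note the percentage isn't symmetric in the two directions):

# 400 MHz -> 550 MHz dev-kit bump, expressed both ways.
old_clock, new_clock = 400, 550
print((new_clock - old_clock) / old_clock * 100)  # +37.5% going up from 400 MHz
print((new_clock - old_clock) / new_clock * 100)  # ~27.3% reduction going back down from 550 MHz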
 
Beside the part that matters? lol of course...

i do not believe in the "special sauce!" The gpu die houses a lot more than just the gpu. It houses the north/southbridge functionality, usb controller, DSP and other part just for BC. People have wrongly assume that everything on die is just for the GPU.

The main parts are the ALUs, TMUs and ROPs just like any other GPU.

Speaking of "special sauce", I still don't understand how a 160sp 400/550MHz GPU would outperform Xenos going by those benchmarks you posted earlier. The clock speeds of the chips in those benchmarks are dissimilar to the ones for Latte and RSX, and I believe you are downplaying how weak the RSX is compared to Xenos. Xenos has twice the tri-setup and ROPs, for example.

Perhaps you are just confident that Nintendo and AMD were able to make Latte that much more efficient in SP performance. If that's the case, your theory is that the "efficiency" is Latte's "special sauce." ;)
 
Speaking of "special sauce", I still don't understand how a 160sp 400/550MHz GPU would outperform Xenos going by those benchmarks you posted earlier. The clock speeds of the chips in those benchmarks are dissimilar to the ones for Latte and RSX, and I believe you are downplaying how weak the RSX is compared to Xenos. Xenos has twice the tri-setup and ROPs, for example.

Perhaps you are just confident that Nintendo and AMD were able to make Latte that much more efficient in SP performance. If that's the case, your theory is that the "efficiency" is Latte's "special sauce." ;)

It just comes down to the GPU being the sum of its parts. It has things we know are a lot better than the PS3/360: a lot more eDRAM, and you can do a lot more with it compared to the 360's. A lot more RAM too, though it's also slower. People forget the 360's GPU was pretty much a one-of-a-kind chip. AMD moved away from that design and improved on it. The Wii U GPU is based on a much newer design compared to the PS3/360.

I don't think it comes down to one thing. If it was one thing, I would point to the eDRAM. Really, that is the one thing that lets the Wii U punch above its weight.

If we go by the benchmark posted earlier, the Wii U GPU should have AT LEAST a 15% performance advantage without factoring in the eDRAM and the extra memory. I have yet to see anything that backs up a 160 part being beaten by the PS3/360, besides people just looking at the number of shaders and saying x > y so it's impossible. I have shown my work...
 
It was just reposted today.




http://www.gamesindustry.biz/articl...ess-powerful-than-ps3-xbox-360-developers-say

The comment is so direct, and that's rare. Most of the time it's just "the GPU is better" or "worse" or whatever. No, it stated it had fewer shaders. Now look where we are at with the debate... Only ONE solution makes this statement correct.

If you look up a couple of posts, people said the dev kits' GPU was clocked at 400MHz when this was written. Now, based on the other facts we have found, it just makes perfect sense.


After the 150MHz boost, the Wii U GPU's performance would change by about 37% if it scaled with the clock rate increase. I think if we downclocked the Wii U by 37% we would see worse results than what the PS3/360 could do...

I don't put much faith in shady sources; however, I do think 160 ALUs is possible, just not VLIW5. I think it would have to be pretty different: either they beefed up all the shaders, and that is why the SPUs are so big while housing so little, or they went with a design closer to VLIW4 and fit 32 ALUs in each SPU, which personally I find more likely. It could also be VLIW5 with 30 ALUs in each unit. That would be fairly custom.

It just comes down to the GPU being the sum of its parts. It has things we know are a lot better than the PS3/360: a lot more eDRAM, and you can do a lot more with it compared to the 360's. A lot more RAM too, though it's also slower. People forget the 360's GPU was pretty much a one-of-a-kind chip. AMD moved away from that design and improved on it. The Wii U GPU is based on a much newer design compared to the PS3/360.

I don't think it comes down to one thing. If it was one thing, I would point to the eDRAM. Really, that is the one thing that lets the Wii U punch above its weight.

If we go by the benchmark posted earlier, the Wii U GPU should have AT LEAST a 15% performance advantage without factoring in the eDRAM and the extra memory. I have yet to see anything that backs up a 160 part being beaten by the PS3/360, besides people just looking at the number of shaders and saying x > y so it's impossible. I have shown my work...
That 15% is over RSX, right? It would put it well under the PS3 when Cell is taken into account. There is no VLIW5 chip with 160 ALUs that beats out Xenos. The 6450 plays the same games worse than the 360, and it is clocked higher than the Wii U. eDRAM isn't a magic bullet either; Xenos has 10MB of its own, and on top of that the 360 has much faster main memory than the Wii U. It doesn't add up correctly; you'd have to move away from VLIW5 to make 160 ALUs work.
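On the "much faster main memory" point, the commonly reported bus widths and data rates work out like this (these are the widely circulated figures, not anything verifiable from the die shot):

# Peak main-memory bandwidth from bus width and effective data rate.
def bandwidth_gb_s(bus_bits, effective_mt_s):
    return bus_bits / 8 * effective_mt_s / 1000  # bytes per transfer * MT/s -> GB/s

print(bandwidth_gb_s(128, 1400))  # 360 GDDR3 (700 MHz, DDR): 22.4 GB/s
print(bandwidth_gb_s(64, 1600))   # Wii U DDR3-1600: 12.8 GB/s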
 
I don't put much faith in shady sources; however, I do think 160 ALUs is possible, just not VLIW5. I think it would have to be pretty different: either they beefed up all the shaders, and that is why the SPUs are so big while housing so little, or they went with a design closer to VLIW4 and fit 32 ALUs in each SPU, which personally I find more likely. It could also be VLIW5 with 30 ALUs in each unit. That would be fairly custom.

If that is the case, why start with an R700 base? It just doesn't make sense at all...

Instead of changing the info we know and have confirmed, why not try to make the info we have fit? We can play what-if forever...

If it's 160 ALUs, what would be the reason for them changing the size to be larger than what we've seen in other AMD GPU designs on 40nm?

That 15% is over RSX, right? It would put it well under the PS3 when Cell is taken into account.
Really it's over the 7900GT, which seems to be more powerful than RSX. Also, the Wii U GPU should be more powerful since that card was only a 160:8:4. It was closer to 25%, but I took some off since that card is a little newer than the R700. Maybe we would bring in Cell for first-party games, but really, do we have any idea how many third parties were using Cell for GPU tasks? I doubt many, if any at all...
 
It was just reposted today.




http://www.gamesindustry.biz/articl...ess-powerful-than-ps3-xbox-360-developers-say

The comment is so direct, and that's rare. Most of the time it's just "the GPU is better" or "worse" or whatever. No, it stated it had fewer shaders. Now look where we are at with the debate... Only ONE solution makes this statement correct.

If you look up a couple of posts, people said the dev kits' GPU was clocked at 400MHz when this was written. Now, based on the other facts we have found, it just makes perfect sense.


After the 150MHz boost, the Wii U GPU's performance would change by about 37% if it scaled with the clock rate increase. I think if we downclocked the Wii U by 37% we would see worse results than what the PS3/360 could do...

This has already been contradicted. Why do you keep stating the same points over and over again when they have been discredited by the same facts?

Trine 2 was confirmed to have better shading than the PS3/360 were capable of before the console was even out. That would mean it was done while the Wii U was still in its underclocked, under-performing, poorly supported state. That completely rules out the possibility of it only achieving better shading because of a later boost.
 
This has already been contradicted. Why do you keep stating the same points over and over again when they have been discredited by the same facts?

Trine 2 was confirmed to have better shading than the PS3/360 were capable of before the console was even out. That would mean it was done while the Wii U was still in its underclocked, under-performing, poorly supported state. That completely rules out the possibility of it only achieving better shading because of a later boost.

Better shading? What does that even mean? How does that discredit the statement? Those comments were from way before E3. Wasn't that game not even announced until E3?

They started with an R700 because that's when the engineering effort began.

And we don't have anything that says they moved on from that. R800 has been out for years. It's not like they couldn't have started on something else. Besides, only the R700 features match the leaked dev docs.
 
I truly wonder if there is a way to get significantly more, if not double performance out of the Wii U by coding a game solely for the use of the Classic controller pro, or Wii remote. Once you get to the title screen of the game with your game pad and press start with your pro controller, gamepad goes completely on standby. Screen off, communication with the console on pause until you exit the game, the whole shebang.

Chances are, Nintendo built in purposeful defenses against such things though. lol...
 
I truly wonder if there is a way to get significantly more, if not double performance out of the Wii U by coding a game solely for the use of the Classic controller pro, or Wii remote. Once you get to the title screen of the game with your game pad and press start with your pro controller, gamepad goes completely on standby. Screen off, communication with the console on pause until you exit the game, the whole shebang.

Chances are, Nintendo built in purposeful defenses against such things though. lol...

No. The gamepad does not eat that many resources to begin with, as the hardware was "designed" to output to it. There are components in the console (on the GPU, I believe) specifically made for communicating with the gamepad. Not using it would mostly just mean wasting those parts; there is nothing else you could do with them. You might get a 5% boost in performance by not using the gamepad, at best.

Where do people get this idea that the gamepad is crippling the console? I see so many people suggest that Nintendo should use the less-featured controllers instead, like it would provide some kind of benefit.
 
And we don't have anything that says they moved on from that. R800 has been out for years. It's not like they couldn't have started on something else. Besides, only the R700 features match the leaked dev docs.
Since it appears you've researched those things, in what way does R800 not match the leaked docs?
 
I truly wonder if there is a way to get significantly more, if not double performance out of the Wii U by coding a game solely for the use of the Classic controller pro, or Wii remote. Once you get to the title screen of the game with your game pad and press start with your pro controller, gamepad goes completely on standby. Screen off, communication with the console on pause until you exit the game, the whole shebang.

Chances are, Nintendo built in purposeful defenses against such things though. lol...
Sitting around by itself, the gamepad probably doesn't tax the other components of the Wii U in any significant way. Even if developers are required to send something to the gamepad, they could probably just leave a static image floating around in an appropriate location and never worry about it.
 
The game was announced to be in development for the Wii U at E3 2011.
http://www.videogamer.com/wiiu/trin...ed_to_be_scaled_back_on_xbox_360_and_ps3.html

He made the comment that it would need downscaling to run on the 360/PS3 at the beginning of October 2012.

It was E3 2012; it was announced well after that comment.



http://www.ign.com/articles/2012/06/05/e3-2012-mass-effect-3-more-coming-to-wii-u

Since it appears you've researched those things, in what way does R800 not match the leaked docs?
It's been a while; I'm sure you could look at the changes made from R700 to R800 ;)

AMD's website should have them listed somewhere...
 
I have more to add to the discussion, but I believe the simplest explanation for those Bayonetta 2 numbers is that they are being used to bake the normal maps. It's so far above even the numbers we have for PS4 games (KZ) that it must be ruled out. And the Wii U and PS4 are getting their graphics IP from the same vendor, with Sony's chip clocked higher. It just seems very, very unlikely those are final in-game polycounts, even if it were a dual setup engine config.

The only problem with that idea is that character models created as normal-map sources are usually considerably higher-poly than 192k (more on that figure later). Naughty Dog, for instance, had posted job listings which included the task of "making million poly models game ready".

I can't picture P* creating a hi-poly model to capture normals of the wrinkles in the character's leather clothing, for example, with just 192k polygons. Let's assume it's possible to do, but only with some extremely efficient modeling techniques. Why even burden an artist with such a trivial yet difficult limitation when they KNOW they can easily let loose on the polys with modeling software like Maya?

Take a careful look at the video again. Let's use this video as a reference:

Looking at the video at 0:15, the model is very dense in polygons, especially the 4 guns and the character's hands! (I imagine the face and hair will use a significant chunk of polys also.) The model is shaded in the next scene, but notice: there's no geometry for wrinkles in the clothing. Wrinkles would have been a prime target for normal mapping. This is likely the in-game model we're seeing. Whether they're going to use it for all in-game stuff or just cut-scenes is another matter, but the normal-mapping source models would have been considerably more detailed.

I think there's a correction to be made, also (unless my eyes deceive me). After very careful observation, it looks to me like, at 0:19, the model is exactly 131,282 polygons. The second digit could be mistaken for a 2, but I'm certain it's not a 9. If a 190k character model wasn't already pretty low to grab normals from, 130k is almost certainly a waste of time; what decent normals are they hoping to grab from that? 130k is right next to the realm of in-game models to begin with.
 

Seems I confused its E3 2011 coverage with the E3 2011 announcement of the Wii U, but that was never even part of the point to begin with. The point was that it had better shading on the Wii U before any of the enhancements and optimizations came, which you ignored again.

The only problem with that idea is that character models created as normal-map sources are usually considerably higher-poly than 192k (more on that figure later). Naughty Dog, for instance, had posted job listings which included the task of "making million poly models game ready".

I can't picture P* creating a hi-poly model to capture normals of the wrinkles in the character's leather clothing with just 192k polygons. Let's assume it's possible to do, but with some extremely efficient modeling techniques. Why even burden an artist with pulling that off when they can easily let loose with super-detailed models in Maya?

Take a careful look at the video again. Let's use this video as a reference:

Looking at the video at 0:15, the model is very dense in polygons, especially the 4 guns and the character's hands! (I imagine the face and hair will use a significant chunk of polys also.) The model is shaded in the next scene, but notice: there's no geometry for wrinkles in the clothing. Wrinkles would have been a prime target for normal mapping. This is likely the in-game model we're seeing. Whether they're going to use it for all in-game stuff or just cut-scenes is another matter, but the normal-mapping source models would have been considerably more detailed.

I think there's a correction to be made, however (unless my eyes deceive me). After very careful observation, it looks to me like, at 0:19, the model is exactly 131,282 polygons. The second digit could be mistaken for a 2, but I'm certain it's not a 9. If a 190k character model wasn't already pretty low to grab normals from, 130k is almost certainly a waste of time; what decent normals are they hoping to grab from that? 130k is right next to the realm of in-game models to begin with.

Yeah, the normal map theory really doesn't make sense when you analyze it, because I was sure normal maps were usually made using CG.
 
Seems I confused its E3 2011 coverage with the E3 2011 announcement of the Wii U, but that was never even part of the point to begin with. The point was that it had better shading on the Wii U before any of the enhancements and optimizations came, which you ignored again.

What exactly is this "better shading", and when did it have it? Was this before the changes to the development hardware? Do we even know when these changes were made to the development hardware?

Do you have anything to back this up?
 
What exactly is this "better shading", and when did it have it? Was this before the changes to the development hardware? Do we even know when these changes were made to the development hardware?

Do you have anything to back this up?

You are begging the question. This has been gone over numerous times.
 
No. The gamepad does not eat that many resources to begin with, as the hardware was "designed" to output to it. There are components in the console (on the GPU, I believe) specifically made for communicating with the gamepad. Not using it would mostly just mean wasting those parts; there is nothing else you could do with them. You might get a 5% boost in performance by not using the gamepad, at best.

Where do people get this idea that the gamepad is crippling the console? I see so many people suggest that Nintendo should use the less-featured controllers instead, like it would provide some kind of benefit.

It depends on what you're doing. If the dev is just using the screen as a low-graphics interface or just mirroring the main screen, it won't take many resources. If you are displaying a 3D game image from a different perspective, though, that will take roughly twice the polygon rendering, shading, etc.

Which brings me to a question: how does the Wii U handle splitting its resources when the main display and the touch screen are displaying different complex 3D images? Bgassassin's theory does make a lot of sense in that situation, as it is similar to how the DS has two 2D graphics cores, one per screen. What other ways are they doing this? With the 160sp theory, would it be strong enough to handle such a resource split while still displaying a current-gen+ image on the TV?

IIRC, for example, Call of Duty would need to split its GPU resources if it has a multiplayer mode where one player is playing on the Wii U GamePad.
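For a rough sense of the extra fill cost of a second view (assuming a 1280x720 main render target and the GamePad's 854x480 panel; the real cost depends heavily on what each view actually draws):

# Extra pixels to shade per frame when rendering a separate view for the pad.
main_pixels = 1280 * 720   # 921,600
pad_pixels = 854 * 480     # 409,920
print(pad_pixels / main_pixels * 100)  # ~44% more pixels, before any extra geometry/CPU cost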
 
Dunno if I mentioned it, but I found a Siggraph 2008 presentation a few days ago in which a CMU professor suggested using four-way multithreading on GPUs, as the shader units apparently stall a lot. Just a thought.
 
No. The gamepad does not eat that many resources to begin with, as the hardware was "designed" to output to it. There are components in the console (on the GPU, I believe) specifically made for communicating with the gamepad. Not using it would mostly just mean wasting those parts; there is nothing else you could do with them. You might get a 5% boost in performance by not using the gamepad, at best.

Where do people get this idea that the gamepad is crippling the console? I see so many people suggest that Nintendo should use the less-featured controllers instead, like it would provide some kind of benefit.

If I remember correctly, an interview with a Crytek developer said they could get more out of the Wii U (granted, what they had already was great, I think 720p or 1080p at a solid 30fps) if they could turn the gamepad off, but there wasn't any way to do it.
 