WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

Regarding Bayonetta 2: How do we know if it will run at 60 fps constantly? I've read this claim often, yet it's quite common that devs state that they "aim for xx fps" but don't reach it in the end. I think we should wait until the game is released before making any definite claims.

After E3 I looked for info on Bayo 2 and performance. All I found were impressions of the game, but most mentioned it held 60fps with no noticeable dips during the whole demo. Pretty impressive if you ask me, and it shows lots of potential, but like you said we will have to wait for an in-depth analysis after the game comes out.
 
Tri Ultimate was listed as 1080p 60FPS by Capcom, though one analysis claimed it was 1080p 45FPS. From the video I saw of it, the frame rate was all over the place and looked to be constantly increasing as the game ran. Also, I believe it was the European version that was tested, which would have a target of 50Hz and thus a 50 FPS max frame rate.

Games aren't developed with different framerate targets for different regions anymore. And the Japanese version was tested anyway:
http://www.youtube.com/watch?v=c2ah3zDB-sM

It's not 45fps, it has an uncapped framerate but is almost always 30-40fps.

This was also prior to the update that caused a large number of games that were experiencing slowdown and other performance issues to no longer have them (something completely ignored by everyone trying to use performance issues in launch games to put a ceiling on the console). I have heard numerous complaints about Lego City having slowdowns, but I have not experienced one. I didn't get it till after the first big performance update. If the U.S. version of MH Tri: Ultimate were tested now, I believe it would give different results.

There's no evidence of a system update affecting game performance.

Maybe resolution is linked to bandwidth more than flops. And Wii U has the bandwidth in its eDRAM, which in my opinion is much higher than PS4's.

An opinion based on...?

Wii U eDRAM is likely ~70GB/s which is easily outperformed by PS4.
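For what it's worth, that ~70GB/s figure is usually a back-of-the-envelope derivation rather than anything confirmed. A minimal sketch, assuming a 1024-bit bus running at the GPU's 550MHz clock (both assumptions, not known specs):
Code:
# Rough eDRAM bandwidth estimate. Bus width and clock are assumptions,
# not confirmed Wii U specs.
bus_width_bits = 1024          # assumed eDRAM bus width
clock_hz = 550e6               # assumed clock (550 MHz, same as the GPU)
bandwidth_gbs = (bus_width_bits / 8) * clock_hz / 1e9
print(f"~{bandwidth_gbs:.1f} GB/s")   # ~70.4 GB/s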
 
I haven't got 101, but I did test it with, IIRC, NSMBU. It was a flat 33 watts all the way from boot to menu screen to in-game.

If you ever get W-101 it would be great if you could test it again :). Did you ever try it with more complex-looking 3D games like AC III or ME3, perhaps? I doubt a 2D Mario would push it past 33w, if it can go above that at all.
 
I have MH3U and 30-40fps sounds about right. It's definitely nowhere near 60fps, which is pretty disappointing considering it's just a Wii game running at 1080p :(.

It's Capcom and it's a port. They don't have the best record for those kinds of things. It's a simple, relatively cheap cash-in.
 
I have MH3U and 30-40fps sounds about right. It's definitely nowhere near 60fps, which is pretty disappointing considering it's just a Wii game running at 1080p :(.

You mean 3DS game? The back of the box lists MT Framework Mobile. Still, it's a port that could clearly use more optimization for better performance.
 
Haha, yeah nice job, Krizz! Forget the PS4 launch! How dare our GPU banter be interrupted! :P

You can't compare TDPs to actual power draw. TDP is the top wattage the chip is designed to handle, and chips almost never use that wattage level; it's almost always 2/3rds to 3/4ths of that number, meaning this card's GDDR5 version would run between 26 and 30 watts on average. Stuff like Furmark can overheat a card and push it beyond its TDP, but that doesn't happen in normal use like you'd see on a console, especially one targeting a specific low clock, which usually brings an efficiency gain with a lower power draw for the same performance. The same is true for the MCM, which could shave another 1-2 watts thanks to sharing the package with a CPU that is also drawing less than 8 watts (they would consume some of that power together). The E6760, for instance, uses 35 watts, comes with 1GB of GDDR5 and has 480 shaders clocked at 600MHz. I love how everyone says this is a binned part, yet it is used in millions of devices worldwide, from PoS units to casino machines.
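To spell out the arithmetic behind that 2/3rds-to-3/4ths rule of thumb, here's a quick sketch against an assumed 40W board TDP; this is just an illustration of the rule the post describes, not a measurement, but it's where a 26-30 watt range comes from:
Code:
# The "typical draw is 2/3 to 3/4 of TDP" rule of thumb from the post,
# applied to an assumed 40 W board TDP. Illustrative only.
tdp_watts = 40.0
low, high = tdp_watts * 2 / 3, tdp_watts * 3 / 4
print(f"typical gaming draw: {low:.1f}-{high:.1f} W")   # ~26.7-30.0 W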

I am aware that max TDP is not necessarily indicative of gameplay power draw, but to claim that it is completely irrelevant is disingenuous, given the data we have. If a TDP describes a worst case scenario (or rather, how much the card would be required to cool under max load), we must ask how close this is to the real life peaks in AMD’s GPUs. The facts are these: in gameplay tests run by TechPowerup (linked to previously), the Radeon HD 5550 peaked at 37watts – that’s within 2 watts of the figure AMD listed as max TDP.

AMD’s embedded lines are not relevant for comparison. People keep saying they are binned parts because they are binned parts. They are identical to the chips on laptop cards, only stuck on a BGA package with some RAM.
AnandTech said:
As we mentioned previously, the E6760 is based on the Turks GPU. Specifically, AMD’s embedded video cards are based on their mobile products, so the E6760 is derived from the 6700M series both in performance and naming.
http://www.anandtech.com/show/4307/amd-launches-radeon-e6760
That they are widely used is beside the point – laptops are a pretty big market these days too!

I actually think we could be looking at the E6460, which is a 160-shader part. Nintendo would likely want to use an existing chip and modify it; in 2009 this chip wasn't ready, so they built their kits on the high-end (at the time) AMD HD 4800 series GPU and simply downclocked it until it was usable. I'm almost positive there are no TEVs in the shader blocks, thanks to the Wii U Iwata Asks where the engineers who built Wii U said that they integrated the Gamecube's design into the AMD GPU they used. This could be an interesting aspect though, because they may have had to add ALUs to the shader blocks to emulate TEV freely. Also, something that has been bugging me is that the Gamecube had 3 rasterizers; I'm not an expert at this, so maybe someone like blu could answer this. Does that cause a problem with the idea that Latte only has 1? Or could it be hiding a couple more without the full 5-block duality that the HD 6970 and HD 7000 series display?

I really don’t think there’s any point in positing that Nintendo used a stock HD 6000 series as a base. If that were the case, we wouldn’t be seeing things like “stripped down from DirectX 11” in developer notes, DirectX 10 on the Unity slide, or “based on R700 series” in the leaked specs sheet (which numerous independent sources on this board and elsewhere have confirmed as the final specs, but many in this thread want to discount as an unreliable rumor).

And I've long since abandoned the idea of any TEVs being on Latte. According to Marcan, BC runs via shim, so there are likely some extra transistors to do the translation, but not full units.

Fourth Storm, while you have answered some questions definitively, the truth is you also are not an expert on AMD designs and you could simply be seeing what you want to see. That is why I keep bringing up the idea that you could be wrong, because, to be frank, you probably are about quite a few of the things you think you know. I'm sure plenty of people who might have insider information have helped you through PMs, but you need to realize that most of those people are probably fake, and the people that do have an idea might not actually know anything. I've been fine with the idea of 160 shaders in Latte for a long time; it really doesn't make any difference whether it is 160, 256 or 320, but it is quite possible that we have this wrong and it is a bit different.

Also, just because the chip is VLIW (if it is), it doesn't mean that it has to be VLIW5 or VLIW4. VLIW has been around a lot longer than the HD 2000 series; it's not even ATI's idea AFAIK, it's just another instruction type. Nintendo could have worked with AMD and their tech team to have something that is a bit different, which could explain why the blocks are 90% bigger than 20-ALU blocks, which is a huge, huge difference. And while you are sure that Renesas is going to make it bigger, there are certainly things they should be able to make smaller, especially since it is a hand layout, which usually improves on density over machine layouts like we see in Brazos. You are just too sure to be taken seriously, and maybe that is because you know some things you can't divulge, but since I don't know those things, it just looks silly that you are so sure.

Wow z0m, that certainly was a full frontal assault. I know I may have questioned your level of expertise in the past, and I’m sorry if it came off as a personal attack. I’d like to keep it civil from here on out, though. Attack the argument, not the poster. No, I don’t have a degree in AMD graphics cards, but I have committed myself to researching them since about the time of the Wii U’s announcement and I’m learning more every day.

Regardless, I am not asking anyone to take my word for it on these matters. All the info I have found is freely available for people who take the time to research. I’ve merely shared my opinions and then stated my reasoning behind them, presenting evidence where applicable. Yes, I'm pretty damn sure of certain aspects, because I've seen enough evidence to satisfy.

I have not been reliant on any inside sources, except for the 45nm tidbit, and that came from a reputable channel. Even so, a 5nm difference isn’t going to factor in as much as differing fab houses will. A process node does not set a mandatory size for the hardware blocks – we can only look at what’s typical for one product from one manufacturer. Other than that, all bets are off.

To suggest that there's some type of custom VLIW architecture at work is a bit wild, imo. We have dev comments that state that it's a pretty standard design. The posts talking about how Nintendo/AMD/Renesas worked their asses off on this chip, while citing R&D numbers, underestimate how much it takes to design a new memory subsystem, integrate all the components on an SoC, get BC running, put everything on an MCM, and so on. It's not just slapping the blocks together and calling it a day. Btw, who said it's a hand layout?

PS: let's throw out the notion that Wii U games are even using all of the GPU's resources and thus pushing its limits as far as TDP goes. I would guess we will see 37-38 watts becoming the norm in the future, which could give 3+ watts to the GPU. I really don't think 360 ports are going to push an AMD GPU with even 160 shaders to its full power potential.

In a series of gameplay tests run by Digital Foundry, Wii U peaked at just over 33 watts. Mind you, just because the graphics in those launch games left something to be desired, it doesn’t follow that the GPU itself was slacking off.
Digital Foundry said:
One thing that did stand out from our Wii U power consumption testing - the uniformity of the results. No matter which retail games we tried, we still saw the same 32w result and only some occasional jumps higher to 33w. Those hoping for developers to "unlock" more Wii U processing power resulting in a bump higher are most likely going to be disappointed, as there's only a certain amount of variance in a console's "under load" power consumption.
http://www.eurogamer.net/articles/digitalfoundry-wii-u-is-the-green-console

Let's not forget, games like BLOPS2, ACIII, MEIII are no slouches when it comes to taxing a system. I'm sure that future games will be able to display nicer eye candy and more efficiently utilize resources, but I see no indicator that Wii U will ever consume over 34 watts.
 
Does it really matter if they are binned parts? Everything is binned. CPUs, desktop GPUs, it's all binned. But to say the GPU in the Wii U can't be compared to one is, I think, wrong. The point is that if there is a GPU it could possibly be compared to, laptop or not, I think that's fair. The chip is custom and I'm sure it's on a mature process, but it could have been created to perform comparably.
 
Games aren't developed with different framerate targets for different regions anymore. And the Japanese version was tested anyway:
http://www.youtube.com/watch?v=c2ah3zDB-sM

It's not 45fps, it has an uncapped framerate but is almost always 30-40fps.



There's no evidence of a system update affecting game performance.



An opinion based on...?

Wii U eDRAM is likely ~70GB/s which is easily outperformed by PS4.

And that number is based on what? No one knows the bus width, except for devs. Shin'en continues to praise the Wii U's high-speed eDRAM bandwidth, and now they're praising the shader core clock, saying the shaders are really fast. Almost forgot the comment about render targets fitting in eDRAM.
 
Does it really matter if they are binned parts? Everything is binned. CPUs, desktop GPUs, it's all binned. But to say the GPU in the Wii U can't be compared to one is, I think, wrong. The point is that if there is a GPU it could possibly be compared to, laptop or not, I think that's fair. The chip is custom and I'm sure it's on a mature process, but it could have been created to perform comparably.

But it really does matter, because console chips are not binned. They are designed for one machine, and you cannot offset the cost of terrible yields by using the chip in other products.

There are so many factors that just completely rule out 320 ALUs. The blocks are too small, the power consumption is too low, and the other things Fourth Storm found on the chip do not fit. He went into detail about it many times.
 
But it really does matter, because console chips are not binned. They are designed for one machine, and you cannot offset the cost of terrible yields by using the chip in other products.

There are so many factors that just completely rule out 320 ALUs. The blocks are too small, the power consumption is too low, and the other things Fourth Storm found on the chip do not fit. He went into detail about it many times.

True, but 160 ALUs doesn't fit either, because the blocks are too big.
 
I am aware that max TDP is not necessarily indicative of gameplay power draw, but to claim that it is completely irrelevant is disingenuous, given the data we have. If a TDP describes a worst case scenario (or rather, how much the card would be required to cool under max load), we must ask how close this is to the real life peaks in AMD’s GPUs. The facts are these: in gameplay tests run by TechPowerup (linked to previously), the Radeon HD 5550 peaked at 37watts – that’s within 2 watts of the figure AMD listed as max TDP.

You actually are taking power consumption from the Wii U and comparing it to the TDP of desktop cards. It's an apples-to-oranges comparison, and that was my point.

AMD’s embedded lines are not relevant for comparison. People keep saying they are binned parts because they are binned parts. They are identical to the chips on laptop cards, only stuck on a BGA package with some RAM. http://www.anandtech.com/show/4307/amd-launches-radeon-e6760
That they are widely used is beside the point – laptops are a pretty big market these days too!
This is also my point: millions of these embedded "binned parts" exist. How exactly is that possible when they are hand-selected from a much higher quantity? Where does that quantity come from? One reason "binned" parts can't be used is because they are hand-selected parts and thus limited in quantity. The lower clock speeds mean lower voltage, which means lower power draw; it's not rocket science, and it isn't a stretch either.

I really don’t think there’s any point in positing that Nintendo used a stock HD 6000 series as a base. If that were the case, we wouldn’t be seeing things like “stripped down from DirectX 11” in developer notes, DirectX 10 on the Unity slide, or “based on R700 series” in the leaked specs sheet (which numerous independent sources on this board and elsewhere have confirmed as the final specs, but many in this thread want to discount as an unreliable rumor).

And I've long since abandoned the idea of any TEVs being on Latte. According to Marcan, BC runs via shim, so there are likely some extra transistors to do the translation, but not full units.
A DirectX comparison can only be made to GX2, not the hardware itself. That is the big failure with people thinking the card is capable of DirectX version X: there is no such thing with Latte, it's running OpenGL, and any DirectX programmer who has also worked on OpenGL will tell you that DirectX 10.1 and DirectX 11 cards are mostly a software thing except for the new tessellation units. (That really is it.)

You quoted me saying that I don't believe there is any TEV logic in Latte, and then decided to lecture me on why there isn't?
Wow z0m, that certainly was a full frontal assault. I know I may have questioned your level of expertise in the past, and I’m sorry if it came off as a personal attack. I’d like to keep it civil from here on out, though. Attack the argument, not the poster. No, I don’t have a degree in AMD graphics cards, but I have committed myself to researching them since about the time of the Wii U’s announcement and I’m learning more every day.

Regardless, I am not asking anyone to take my word for it on these matters. All the info I have found is freely available for people who take the time to research. I’ve merely shared my opinions and then stated my reasoning behind them, presenting evidence where applicable. Yes, I'm pretty damn sure of certain aspects, because I've seen enough evidence to satisfy.

I have not been reliant on any inside sources, except for the 45nm tidbit, and that came from a reputable channel. Even so, a 5nm difference isn’t going to factor in as much as differing fab houses will. A process node does not set a mandatory size for the hardware blocks – we can only look at what’s typical for one product from one manufacturer. Other than that, all bets are off.

To suggest that there's some type of custom VLIW architecture at work is a bit wild, imo. We have dev comments that state that it's a pretty standard design. The posts talking about how Nintendo/AMD/Renesas worked their asses off on this chip, while citing R&D numbers, underestimate how much it takes to design a new memory subsystem, integrate all the components on an SoC, get BC running, put everything on an MCM, and so on. It's not just slapping the blocks together and calling it a day. Btw, who said it's a hand layout?

Read it over and over again until you realize it isn't a personal attack but simple facts about your reasoning and assumptions about where your info is coming from. I respect you and anyone can get tunnel vision, I'm saying you are too closed minded about this topic and you'll never be able to say this is just a simple off the shelf design unless you were to get your hands on it.

IIRC the hand layout claim came from Chipworks, the company that gave us the pictures, but it is fairly obvious anyway, as the parts are not all standard shapes. The custom VLIW idea isn't some fantasy multi-billion-dollar design, so just stop the nonsense; it's quite clear that it can be a standard chip while having unique attributes. As I've said, VLIW isn't some closed-box architecture; it's the type of "CPU" that these ALUs probably are, and it could come in a custom layout that isn't quite what you'd find in off-the-shelf parts.
http://en.wikipedia.org/wiki/Very_long_instruction_word
"VLIWs also gained significant consumer penetration in the GPU market, though both Nvidia and AMD have since moved to RISC architectures in order to improve performance on non-graphics workloads."

Nintendo has a graphics hardware team, I don't think they just sat on their hands while this chip was being made and that is my opinion, it has nothing in facts to back it up, but Nintendo certainly spent enough money on R&D to do something like this, as the 500 million USD I posted was from just 2007 and 2008.

In a series of gameplay tests run by Digital Foundry, Wii U peaked at just over 33 watts. Mind you, just because the graphics in those launch games left something to be desired, it doesn’t follow that the GPU itself was slacking off. http://www.eurogamer.net/articles/digitalfoundry-wii-u-is-the-green-console

Let's not forget, games like BLOPS2, ACIII, MEIII are no slouches when it comes to taxing a system. I'm sure that future games will be able to display nicer eye candy and more efficiently utilize resources, but I see no indicator that Wii U will ever consume over 34 watts.

Different games usually have different power requirements; it is the reason why some games would make the 360 RRoD more often than others. It is also a well-known fact that some graphics workloads push a GPU harder than others. Just look at Furmark: the entire reason it was designed was to test a GPU's temperatures (max them out) by drawing more power through the GPU. You even concede that it could draw an extra watt, but that we shouldn't expect 3 or 4. I find that ridiculous.

This is not a personal attack; if you were an expert in this field, maybe it could be seen as one, but other people who have also researched GPUs and Latte have disagreed with you, BGassassin being one of them. There have also been other techies who know more than either of us who have pointed out where you have been wrong in the past. Don't take it personally; the only reason I even mention your name is because you "know" what Latte is, and you obviously can't.
 
BGassassin is the reason most people thought the GPU would be at least 600 GFLOPS. lol

You have to remember that console hardware is pushed harder than a desktop GPU running games. It's like running benchmarks on them.

True, but 160 ALUs doesn't fit either, because the blocks are too big.
There are many reasons why it would be bigger. Fourth Storm has already posted about that.

So say the reason the blocks are bigger is that it's using a different foundry that cannot make the blocks as small as AMD's usual partners can, since they haven't dealt with these designs for years.

Then everything just fits. I would love some more tech data, not screenshots or other fanboy nonsense. I have seen nothing that points away from 160 ALUs given the tech data at hand.
 
I'm not saying that the Wii U GPU is a binned part. I'm saying it's possible to make a part perform like one of those laptop or embedded GPUs without it being binned.
 
So it was krizzx who broke it..

I didn't know the PlayStation livestream was today. I wrote the message and it didn't go through, so I kept resubmitting; then I looked on another site and saw that Sony was having their Gamescom conference.
 
Games aren't developed with different framerate targets for different regions anymore. And the Japanese version was tested anyway:
http://www.youtube.com/watch?v=c2ah3zDB-sM

It's not 45fps, it has an uncapped framerate but is almost always 30-40fps.



There's no evidence of a system update affecting game performance.




An opinion based on...?

Wii U eDRAM is likely ~70GB/s which is easily outperformed by PS4.

No evidence? It has already been confirmed by many personally in this thread including myself. I have observed the performance increase.

As I said, many people will attest to frequent slowdown in Lego City. I had none. The big spring update made a lot of games run better. I'm guessing the optimizations that made the OS faster removed whatever was making the games incur slowdown. This is why I would be interested in seeing what the FPS for MH3 Ultimate would be if it were tested now.
 
But it really does matter, because console chips are not binned. They are designed for one machine, and you cannot offset the cost of terrible yields by using the chip in other products.

There are so many factors that just completely rule out 320 ALUs. The blocks are too small, the power consumption is too low, and the other things Fourth Storm found on the chip do not fit. He went into detail about it many times.


What information do we have to suggest we should accept Fourth Storm's analysis as the closest thing to confirmation?
 
If I remember correctly, Fourth Storm's stance, stated several times is that he can understand why some people may not be satisfied with his conclusions, and that he's always open to discussion and new information, but as of now, and in all likelihood from here on out, he's convinced of his conclusions. Quite a fair stance imo. He's not trying to make anyone accept his findings.
 
No evidence? It has already been confirmed by many personally in this thread including myself. I have observed the performance increase.

That's not evidence.

There is no definitive info on the bandwidth, so who knows...

I thought there were only a few options for what the bandwidth could be, and Wii U's performance in bandwidth intensive scenarios so far indicates it's on the low side..
 
You actually are taking power consumption from the Wii U and comparing it to the TDP of desktop cards. It's an apples-to-oranges comparison, and that was my point.

I just compared the measured power consumption of a graphics card under gameplay conditions to the measured power consumption of the entire Wii U console under gameplay conditions, and Wii U still draws less. 37 watts vs 33 watts. How is that apples to oranges?

This is also my point: millions of these embedded "binned parts" exist. How exactly is that possible when they are hand-selected from a much higher quantity? Where does that quantity come from? One reason "binned" parts can't be used is because they are hand-selected parts and thus limited in quantity. The lower clock speeds mean lower voltage, which means lower power draw; it's not rocket science, and it isn't a stretch either.

Binning is common knowledge. I have no clue what the figures are for how many parts AMD produce and how many of them end up binned. If you're not convinced, find the figures and report back.

A DirectX comparison can only be made to GX2, not the hardware itself. That is the big failure with people thinking the card is capable of DirectX version X: there is no such thing with Latte, it's running OpenGL, and any DirectX programmer who has also worked on OpenGL will tell you that DirectX 10.1 and DirectX 11 cards are mostly a software thing except for the new tessellation units. (That really is it.)

It's not just software. Changes were made to the hardware as well - the way interpolation is handled being one of them. Research it. Maybe it's not a paradigm shift, but there are differences. DirectX11 has an OpenGL equivalent. Devs just refer to DX version because it's an easy barometer for folks to understand.
You quoted me saying that I don't believe there is any TEV logic in Latte, and then decided to lecture me on why there isn't?

You seemed to believe that I thought it was on there (otherwise why would you bring it up), so I posted a one sentence summation of why I have come to believe differently. That's lecturing?

Read it over and over again until you realize it isn't a personal attack but simple facts about your reasoning and assumptions about where your info is coming from. I respect you and anyone can get tunnel vision, I'm saying you are too closed minded about this topic and you'll never be able to say this is just a simple off the shelf design unless you were to get your hands on it.

You basically write off my analysis because I don't have credentials, without explaining how my analysis is off. You state that I come off as silly and shouldn't be taken seriously. You state that I think I know a lot but am "probably wrong." You question my judgement multiple times, implying that I am getting my info from insiders who are probably lying to me. Sorry, that's not the case. Most of the analysis is my own. Until you can present an "expert" opinion on how what appear to be 8 TMUs on Latte actually aren't, and how Nintendo have gotten around cutting the registers in the SPUs, I would argue that you are the one who is just seeing what you want to see. You say you are fine with 160 shaders, yet it seems like you are wishing for more...or at least for me to be wrong.

I have worded my posts very carefully. When I'm just guessing or not sure of something, I word my statements to reflect that. In fact, I can't recall ever saying that Latte is definitely 160 shaders - only that it appears to be and why. Then, others will go around saying "fourth storm confirms!" and whatnot. If people choose to accept my analysis, that is on them, not me.
IIRC the hand layout claim came from Chipworks, the company that gave us the pictures, but it is fairly obvious anyway, as the parts are not all standard shapes. The custom VLIW idea isn't some fantasy multi-billion-dollar design, so just stop the nonsense; it's quite clear that it can be a standard chip while having unique attributes. As I've said, VLIW isn't some closed-box architecture; it's the type of "CPU" that these ALUs probably are, and it could come in a custom layout that isn't quite what you'd find in off-the-shelf parts.
http://en.wikipedia.org/wiki/Very_long_instruction_word
"VLIWs also gained significant consumer penetration in the GPU market, though both Nvidia and AMD have since moved to RISC architectures in order to improve performance on non-graphics workloads."

Nintendo has a graphics hardware team, I don't think they just sat on their hands while this chip was being made and that is my opinion, it has nothing in facts to back it up, but Nintendo certainly spent enough money on R&D to do something like this, as the 500 million USD I posted was from just 2007 and 2008.

Chipworks never said it was a hand layout. As for your assertions that we are looking at a completely new VLIW-style design, the burden of evidence is on you.


Different games usually have different power requirements; it is the reason why some games would make the 360 RRoD more often than others. It is also a well-known fact that some graphics workloads push a GPU harder than others. Just look at Furmark: the entire reason it was designed was to test a GPU's temperatures (max them out) by drawing more power through the GPU. You even concede that it could draw an extra watt, but that we shouldn't expect 3 or 4. I find that ridiculous.

I provided you an expert opinion that the power consumption is unlikely to rise. DF tested the whole launch lineup, which included numerous state-of-the-art AAA titles. The damn thing wouldn't budge! I gave it an extra watt because of one remark that there was one part where the power consumption got to just over 33 watts. Of course there's going to be some variation, but in Wii U's case it looks to be very slight.

This is not a personal attack; if you were an expert in this field, maybe it could be seen as one, but other people who have also researched GPUs and Latte have disagreed with you, BGassassin being one of them. There have also been other techies who know more than either of us who have pointed out where you have been wrong in the past. Don't take it personally; the only reason I even mention your name is because you "know" what Latte is, and you obviously can't.

I am confident in some of my assertions, but again, I word my posts to reflect my varying degrees of certainty. Indeed, I made up my mind a while back on the shader count, TMUs, and ROPs. If someone comes up with some good evidence that suggests otherwise, I'm always open to discussion. But all I've seen is a bunch of subjective screen shot analysis and "what ifs."
 
Considering the bolded part... is it possible Wii U is doing everything it's doing with 3rd parties basically without even trying? Like no optimization or anything, just throwing 360 code onto Wii U hardware and it basically running on par or better (native 720p and 60FPS) without devs even giving a shit to try and get more out of the console.

"It should also be not forgotten that many current gen games don’t even run at 720p, but at lower resolutions which are scaled up (not to mention that most also only run at 30fps)."

No, it's not possible. 360 code won't work at all on Wii U.

This part should not be forgotten... most Wii U games, even this early in its life, are native 720p and 60FPS.

Discounting eShop titles how many can you name? NSMBU, Nintendo Land(?)....?

The bolded part: that leaves half of the eDRAM available after the framebuffer for 1080p. What the hell are other developers not doing? I'm not saying Wii U is on par with PS4 and XB1, but it should be a more than capable console. It's going to be a shame if Shin'en's next game comes out at 1080p 60fps with all the bells and whistles the "next gen" consoles are supposed to have.

Just because a 1080p framebuffer can fit in eDRAM doesn't mean it's free.
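For a sense of scale, here's some rough framebuffer math, assuming 32MB of eDRAM and 4 bytes per pixel for color and for depth (real layouts vary with MSAA, extra render targets and buffering):
Code:
# Rough 1080p framebuffer sizing against an assumed 32 MB eDRAM pool.
# 4 bytes/pixel for color and for depth is an assumption, not a Wii U spec.
width, height = 1920, 1080
bytes_per_pixel = 4
color_mb = width * height * bytes_per_pixel / 2**20
total_mb = color_mb * 2            # color + depth
print(f"color: {color_mb:.1f} MB, color+depth: {total_mb:.1f} MB of 32 MB")
# ~7.9 MB color, ~15.8 MB with depth - it fits, but fitting says nothing
# about whether there's fill rate and shader time to spare at 1080p.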
 
I don't think Latte is a hand-laid-out chip. Here is a die shot of an Apple A6:
http://cdn.arstechnica.net/wp-content/uploads/2012/09/apple_a6_dieshot.jpg
The Apple A6's ARM cores are hand laid out. Its GPU is computer laid out. The difference is clearly visible on the die shot. Latte looks much more like the A6's GPU than its CPU.
The results obtained through computer layout are much more compact. I don't know why the Apple A6's CPU is hand laid out; maybe because it is the first Apple homemade CPU.

With a hand layout, there is some void space between the different logical parts. If Latte were hand laid out, we could count the number of ALUs directly on the die shot and the biggest mystery of this thread would be solved.
 
I just compared the measured power consumption of a graphics card under gameplay conditions to the measured power consumption of the entire Wii U console under gameplay conditions, and Wii U still draws less. 37 watts vs 33 watts. How is that apples to oranges?



Binning is common knowledge. I have no clue what the figures are for how many parts AMD produce and how many of them end up binned. If you're not convinced, find the figures and report back.



It's not just software. Changes were made to the hardware as well - the way interpolation is handled being one of them. Research it. Maybe it's not a paradigm shift, but there are differences. DirectX11 has an OpenGL equivalent. Devs just refer to DX version because it's an easy barometer for folks to understand.


You seemed to believe that I thought it was on there (otherwise why would you bring it up), so I posted a one sentence summation of why I have come to believe differently. That's lecturing?



You basically write off my analysis because I don't have credentials, without explaining how my analysis is off. You state that I come off as silly and shouldn't be taken seriously. You state that I think I know a lot but am "probably wrong." You question my judgement multiple times, implying that I am getting my info from insiders who are probably lying to me. Sorry, that's not the case. Most of the analysis is my own. Until you can present an "expert" opinion on how what appear to be 8 TMUs on Latte actually aren't, and how Nintendo have gotten around cutting the registers in the SPUs, I would argue that you are the one who is just seeing what you want to see. You say you are fine with 160 shaders, yet it seems like you are wishing for more...or at least for me to be wrong.

I have worded my posts very carefully. When I'm just guessing or not sure of something, I word my statements to reflect that. In fact, I can't recall ever saying that Latte is definitely 160 shaders - only that it appears to be and why. Then, others will go around saying "fourth storm confirms!" and whatnot. If people choose to accept my analysis, that is on them, not me.


Chipworks never said it was a hand layout. As for your assertions that we are looking at a completely new VLIW-style design, the burden of evidence is on you.




I provided you an expert opinion that the power consumption is unlikely to rise. DF tested the whole launch lineup, which included numerous state-of-the-art AAA titles. The damn thing wouldn't budge! I gave it an extra watt because of one remark that there was one part where the power consumption got to just over 33 watts. Of course there's going to be some variation, but in Wii U's case it looks to be very slight.



I am confident in some of my assertions, but again, I word my posts to reflect my varying degrees of certainty. Indeed, I made up my mind a while back on the shader count, TMUs, and ROPs. If someone comes up with some good evidence that suggests otherwise, I'm always open to discussion. But all I've seen is a bunch of subjective screen shot analysis and "what ifs."

I don't see the point in these long combative posts anymore. If you didn't understand my reasoning in the last post, I don't think you ever will. Wii U's ALU count could be 160; I've pointed to this myself and was actually the first person to suggest it, if you look at the first page. It's mostly thanks to Xenos that I don't care what Wii U is packing, that and what next gen turned out to be (lighter than expected, with XB1 even matching Wii U's initial "hopes").

I also think 160 ALUs can make a lot of sense if Nintendo's goal is to spin this architecture off into their future handhelds, because that is about right for a mobile part in ~2016. I'm fine with being wrong about any assumption I've made, and to that point I'd like to ask about the power consumption figure you are looking at: what game was that, and was Wii U running the same settings? Because that is the closest you are going to get to an accurate comparison of that desktop GPU's power consumption vs Wii U's. Launch games have long been known to mostly push 360 resolutions, AA and whatnot. We know this isn't actually an issue with Wii U except for the original bottlenecks due to unfinished hardware, because many 360/Wii U games have been shown to look and run as good as or better than the 360 version (except for some RGB problems in some games).

Edit: We will know very soon if you are right about the manufacturer making a large difference. Every console sold in 2014 should be produced in TSMC's factories, so if the GPU die is significantly smaller, you'd certainly be right. However, if it more or less stays the same size, it would be clear that your assumption is wrong, as die size is the most expensive part of making a chip. AFAIK every chip made for Wii U this year was done by TSMC, but since it is selling so badly, we will need to wait for holiday sales to clear old stock before we can really be sure we have a new revision. I haven't bought mine yet, so if I end up waiting until next year, I'll open it up myself and measure the GPU for this thread. It would be an easy case closed if the GPU shrinks significantly.
 
Question:
Does the Wii U render all games twice all of the time, or only in some special cases?
i.e.: In CoD, player 1 can play on the TV with the Wiimote while player 2 plays on the Wii U GamePad (and there are no details lacking or anything).
Wouldn't that be immensely demanding?
 
Question:
Does the Wii U render all games twice all of the time, or only in some special cases?
i.e.: In CoD, player 1 can play on the TV with the Wiimote while player 2 plays on the Wii U GamePad (and there are no details lacking or anything).
Wouldn't that be immensely demanding?

1. The GamePad will mirror the TV to avoid using any extra resources for things like off-TV play.

2. CoD: it lacks some detail; shadows coming down in quality is the most noticeable change. However, it is definitely a nice feature to have, and in this case the Wii U really is "drawing" the game twice.
 
From what we've seen at Gamescom, the only title that looks remotely "next gen" on the U is X, and it's doubtful that it is internally rendered at 1080p.
 
So I was in the Miiverse snapshot thread and these 3 Pikmin 3 shots blew my mind:

[three Pikmin 3 Miiverse screenshots]

I'm simply blown away. I'm not a big graphics guy, but these pics really got me.
 
From what we've seen at Gamescom, the only title that looks remotely "next gen" on the U is X, and it's doubtful that it is internally rendered at 1080p.

We know Wii U doesn't pack "next gen" power, even XB1 is questionable in this regard IMO. Also to run a 720p 30FPS game in 1080p 60FPS, you basically need ~5 times the raw performance of whatever that first device is. Wii U just doesn't have that sort of "juice"
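For reference, the ~5x figure is just pixel count times frame rate, assuming perfectly linear scaling, which real games rarely show:
Code:
# Where the "~5x" figure comes from: pixel ratio times frame-rate ratio.
# Assumes perfectly linear scaling, which is a rough upper-bound estimate.
pixels_720p = 1280 * 720
pixels_1080p = 1920 * 1080
scale = (pixels_1080p / pixels_720p) * (60 / 30)
print(f"~{scale:.2f}x")   # 4.50x, i.e. roughly 5x the raw performance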
 
We know Wii U doesn't pack "next gen" power, even XB1 is questionable in this regard IMO. Also to run a 720p 30FPS game in 1080p 60FPS, you basically need ~5 times the raw performance of whatever that first device is. Wii U just doesn't have that sort of "juice"

Yeah, not a surprise. I remember in the WUST threads how many, including myself, held out for the Wii U being able to easily run a complex AAA current gen game which ran sub 720 p at 1080 p when ported. Instead we get these modest bumps to 720 p and keep the framerate at 30. Any complex game is fated to run at 720 p. So what are we talking here, 2X current gen with 2X RAM?
 
Yeah, not a surprise. I remember in the WUST threads how many, including myself, held out for the Wii U being able to easily run a complex AAA current gen game which ran sub 720 p at 1080 p when ported. Instead we get these modest bumps to 720 p and keep the framerate at 30. Any complex game is fated to run at 720 p. So what are we talking here, 2X current gen with 2X RAM?
Plus more modern shaders and a tessellator that may get used sometimes.
 
Yeah, not a surprise. I remember in the WUST threads how many, including myself, held out for the Wii U being able to easily run a complex AAA current gen game which ran sub 720 p at 1080 p when ported. Instead we get these modest bumps to 720 p and keep the framerate at 30. Any complex game is fated to run at 720 p. So what are we talking here, 2X current gen with 2X RAM?

More complex than that, but the shaders on Wii U make up a lot of ground. The 360 can run a very ugly game (DX9, no advanced effects) maybe almost as well as Wii U, but when it tries to do something like Crysis 3 on consoles, it will eat up a lot of that extra shader power found in Xenos (the 216 GFLOPS), which is why even if Wii U is 176 GFLOPS, it won't really matter: the game will almost certainly look better on Wii U every time (better lighting, higher-res textures, far fewer bottlenecks, no screen tearing), all while likely pushing a solid frame rate. That is the bare minimum Wii U offers. Of course, this comparison only holds when development resources are equal for both versions of the same game.

Edit: The reason Xenos would still be inferior to Latte is simply efficiency. Xenos has around 60% average efficiency, meaning only around 130 GFLOPS seem to be used on average. Expecting at least 80% efficiency on Wii U would mean about 141 GFLOPS being used; though I use 80% efficiency, it's likely 90%+, giving it around ~160 GFLOPS on average. These numbers are based on some very basic math that tries to solve a very complex problem with a very general idea.
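Spelling that back-of-the-envelope math out (the 216/176 GFLOPS peaks and the 60/80/90% efficiency figures are all assumptions from the post, not measured values):
Code:
# Effective-throughput arithmetic from the post, taken at face value.
xenos_peak = 216           # assumed Xenos peak GFLOPS
latte_peak = 176           # assumes 160 ALUs * 2 ops * 550 MHz
print(xenos_peak * 0.60)   # ~130 GFLOPS effective (Xenos at 60%)
print(latte_peak * 0.80)   # ~141 GFLOPS effective (Latte at 80%)
print(latte_peak * 0.90)   # ~158 GFLOPS effective (Latte at 90%)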

What I find the most amusing about the WUST threads is that we originally expected XB1-level specs for Wii U and said that would be a stopgap, not quite the "next gen" leap we were expecting.

So if you were to expect DX11-type effects in every game from here on out, Wii U could possibly double the performance over Xenos even with a lower shader count. While DX10.1-capable cards (I'm still assuming the minimum here) would have to waste some power to do some DX11 effects, the vast majority of those effects were already found in DX10.1, and the jump between Shader Model 3, Shader Model 4.1 and Shader Model 5 is vastly bigger between SM3 and SM4 than between SM4 and SM5.

http://en.wikipedia.org/wiki/High-level_shader_language this has a great list of features and you'll notice SM4 and SM5 share all of the same ones.
 
Has any game appeared to make use of it? Tessellators have been around forever and it's not until very recently that they've been used on PC.
I know the 360's only got used twice (I think?), which is why I said "may get used". Shin'en's new game will be using it, they said.
2014 will reveal the truth.... can't say much more than that... but it's going to be surprising.
Is this based on something you know and can't say?
 
Nope, it was based on very similar SRAM structures found in the 8 TMU Brazos die.




Mind linking to your source for those figures? The effective power draw of GDDR5 chips is tough to find info on, but from what I've seen, we're not talking about a significant increase over DDR3 - maybe a few watts, but nothing that's going to close the gap we're looking at here. The numbers you see in whole system benchmarks are not good indicators, because if there is an increase in performance, that means the GPU is working harder (the RAM is getting more out of it), and that will produce more heat. You would have to find benchmarks with exactly identical performance in order to observe the true difference between DDR3 and GDDR5 power draw. (edit: saw where you got your figures and it's for the graphics card only, so nevermind the part about whole system benchmarks)

I have no reason to believe AMD's reference number to be merely theoretical, nor do I have any reason to believe that Wii U is going to all of a sudden start sucking a significantly higher amount of juice. Its max TDP appears to be about 33 watts, granted no USB ports are in use.

Edit: Were you getting your figures here? If so, you'll see the max TDP for the HD 5550 was 40 watts (and that's what we're comparing for now, the max TDP). The max TDP of the HD 5570 with GDDR3 and a higher clock was 50 watts - 10 watts higher. Pretty much as expected.

As you've just pointed out these cards only hit those numbers in synthetic benchmarks, not in game. During gameplay for instance the 5570 hit only 31w average and occasionally peaked at 37w, but in synthetic benchmarks it hit 50w. The WiiU tests are not synthetic/theoretical, they're in game. So they aren't comparable, which was my point.

At no point have I said WiiU will significantly increase its power draw. But the fact is as shown consistently throughout console power usage testing power draw does increase as more complex games are released. The fact is 33w is a gameplay result from launch games (I haven't even seen any tests on newer games) and not a max TDP as you keep referring to it. Whether any of this "closes the gap" isn't really the issue. The issue is with accuracy, and comparing apples to apples.

As to DDR3 vs GDDR5, I think the difference in power draw is quite large (relative to the power draw of DDR3), and to be honest those tests suggest that. I mean, I don't think only 25% higher power draw should be expected from a card with 25% more ALUs, 25% more TMUs, a 25% higher clock and 100% more RAM. But I'll look into more specific numbers.
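As a rough illustration of why 25% feels low, here's what a naive first-order estimate looks like if core power simply tracked unit count times clock, with voltage held equal and memory ignored (both strong assumptions):
Code:
# Naive first-order power scaling: more units * higher clock, voltage and
# memory ignored. Purely illustrative, not a model of the actual cards.
unit_scale = 1.25       # 25% more ALUs/TMUs
clock_scale = 1.25      # 25% higher clock
print(f"core power scale: ~{unit_scale * clock_scale:.2f}x")   # ~1.56x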
 
But it really does matter, because console chips are not binned. They are designed for one machine, and you cannot offset the cost of terrible yields by using the chip in other products.

There are so many factors that just completely rule out 320 ALUs. The blocks are too small, the power consumption is too low, and the other things Fourth Storm found on the chip do not fit. He went into detail about it many times.

Yes but the design is driven by the fact it has a single well defined set of operational parameters. You reduce the yield issues by designing to a single set of parameters. The very fact that you know that you are running at a fixed low clock allows you to make changes to improve density, reduce power consumption etc etc whilst maintaining acceptable yields.

Certainly in the field I currently work in this is pretty common. We often have versions of a "product" running a binned "component" at various performance points and a couple of low power variants running a custom non-binned variant of that component. The performance of the low power variants overlaps with the bottom two-thirds of the standard range. The power consumption is ~60% though.
 
It took me a hell of a long time to get my account verified on GAF and become a member... I won't put that at risk just to try and make a point about graphics and processors. It's basically just my personal opinion and I won't say anything more than that. 2014 will reveal the truth one way or the other.

That's pretty dangerous talk. I assume there were a lot of bans handed out during E3, and the mods would not like to deal with the likes of that again. It would probably be best if you keep it to yourself for the time being, whether it be true or not.
 
I know the 360's only got used twice (I think?), which is why I said "may get used". Shin'en's new game will be using it, they said.

Is this based on something you know and can't say?

The tessellator in the 360 isn't even comparable to the one in the Wii U though. The one in the 360 was very basic. It would be incapable of something like the Froblins demo by any stretch.

So I was in the Miiverse snapshot thread and these 3 Pikmin 3 shots blew my mind.

I'm simply blown away. I'm not a big graphics guy, but these pics really got me.

Seems that game does indeed look better than I thought. Most of what I've seen of the game were low-res ground textures that looked like they hadn't been improved since the Wii version. You definitely won't get that type of graphical fidelity out of the 360.
 
As you've just pointed out these cards only hit those numbers in synthetic benchmarks, not in game. During gameplay for instance the 5570 hit only 31w average and occasionally peaked at 37w, but in synthetic benchmarks it hit 50w. The WiiU tests are not synthetic/theoretical, they're in game. So they aren't comparable, which was my point.

At no point have I said WiiU will significantly increase its power draw. But the fact is as shown consistently throughout console power usage testing power draw does increase as more complex games are released. The fact is 33w is a gameplay result from launch games (I haven't even seen any tests on newer games) and not a max TDP as you keep referring to it. Whether any of this "closes the gap" isn't really the issue. The issue is with accuracy, and comparing apples to apples.

As to DDR3 vs GDDR5, I think the difference in power draw is quite large (relative to the power draw of DDR3), and to be honest those tests suggest that. I mean, I don't think only 25% higher power draw should be expected from a card with 25% more ALUs, 25% more TMUs, a 25% higher clock and 100% more RAM. But I'll look into more specific numbers.

How much do you expect it to increase? Even post performance update, we didn't see any changes in power consumption.
 
How much do you expect it to increase? Even post performance update, we didn't see any changes in power consumption.

Why would we? The game is running the same code, with the same number of polygons, shaders and textures in play, so where would an increase in power consumption come from with the same software?
 
Isn't it supposedly running a higher clock frequency or something?

Oh, were you one of the people who bought into the CPU running at a higher clock now? (The rumor was 3.5GHz, not even believable.) That was debunked. When it came up, I as well as a few other members told people to check whether there was a new power consumption figure; obviously a clock bump would increase the consumption by at least a watt. The exact same consumption meant no overclock. It's not impossible that Nintendo would do this in the future, but I do think it would be unlikely, and you'd only see 100MHz for the CPU at max and 50MHz for the GPU, so it wouldn't be worth it since the GPU is likely designed for the 550MHz frequency.
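A minimal sketch of why even a modest bump should show up at the wall, assuming dynamic power scales roughly linearly with frequency at fixed voltage, a hypothetical ~15W GPU share of the ~33W total, and a hypothetical 600MHz target:
Code:
# Dynamic power scales roughly with frequency at constant voltage.
# The 15 W GPU share and the 600 MHz target are assumptions, not specs.
gpu_share_w = 15.0
old_clock, new_clock = 550e6, 600e6
bumped = gpu_share_w * (new_clock / old_clock)
print(f"~{bumped - gpu_share_w:.1f} W extra")   # ~1.4 W, enough to show on a wall meter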
 
It took me a hell of a long time to get my account verified on GAF and become a member... I won't put that at risk just to try and make a point about graphics and processors. It's basically just my personal opinion and I won't say anything more than that. 2014 will reveal the truth one way or the other.

You either know something or you don't; it's a simple question. If you do, a mod will talk to you. If you don't, then I would be very careful about such remarks.

My guess is you are talking about the release of Kart, Bayo 2, Smash and X in 2014. We have all seen them in action, they look very nice (esp Kart and Smash for Nintendo games) but nothing spectacular, a modest jump over PS360 which is exactly what the console hardware seems to be.
 
Oh, were you one of the people who bought into the CPU running at a higher clock now? (The rumor was 3.5GHz, not even believable.) That was debunked. When it came up, I as well as a few other members told people to check whether there was a new power consumption figure; obviously a clock bump would increase the consumption by at least a watt. The exact same consumption meant no overclock. It's not impossible that Nintendo would do this in the future, but I do think it would be unlikely, and you'd only see 100MHz for the CPU at max and 50MHz for the GPU, so it wouldn't be worth it since the GPU is likely designed for the 550MHz frequency.

No, I was thinking GPU. That CPU clock rumor was just crazy. 1) That rumored (ie. bullshit) clock increase was almost double the original clock 2) No other CPU in that family has ever been clocked remotely near that.


Anyway, do we not know what exactly happened with the performance update?
 
No, I was thinking GPU. That CPU clock rumor was just crazy. 1) That rumored (ie. bullshit) clock increase was almost double the original clock 2) No other CPU in that family has ever been clocked remotely near that.


Anyway, do we not know what exactly happened with the performance update?

AFAIK it was an OS update that mostly dealt with how the system's memory was used, as well as allowing interaction before a page is 100% ready. The GPU hasn't changed clock speeds, or there would probably be a bump in power consumption, but I guess we could ask a hacker who has already gotten at the clocks before.
 
No, I was thinking GPU. That CPU clock rumor was just crazy. 1) That rumored (ie. bullshit) clock increase was almost double the original clock 2) No other CPU in that family has ever been clocked remotely near that.


Anyway, do we not know what exactly happened with the performance update?

No other CPU in the Nintendo CPU family has ever achieved clocks as high as Espresso. The highest achieved with overclocks was 1.1 GHz, I believe, and the performance increase is always small, with a huge increase in heat production and power consumption. This is why it was always annoying when people insisted that the Wii was an overclocked Gamecube. The hardware doesn't overclock well at all.

We do not know the entirety of Espresso's design, so we can't say it absolutely can't reach a 2x overclock, since it could very well be underclocked for all we know.

I've heard nothing of a GPU power increase.
 