WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

I think the fans revving higher now are intended for improved cooling = improved stability, that's all. The Wii U might have had its OS footprint reduced, so more RAM = more performance. I don't believe the 2 GHz overclock at all.
 
I think the fans revving higher now are intended for improved cooling = improved stability, that's all. The Wii U might have had its OS footprint reduced, so more RAM = more performance. I don't believe the 2 GHz overclock at all.

You have logic here for sure, but did you ever feel the back of the fan on the U BEFORE the update? That thing was like a freaking air conditioner. I really don't think they needed to add more cooling unless they were preparing for better clock speeds to SOME extent (not to the extent of 3.24 GHz, though).
 
From what I see, no one in this thread believes the clock was bumped this high. Some people believe the clock could be bumped slightly in the future, and are discussing that possibility, while others believe the original rumor was a typo and there has already been a small clock bump and are checking infamously poorly running games for improvement. The other people in this thread are making fun of everyone for believing the rumor.

Why is that a requirement for understanding how a CPU architecture works?
There's an Arthur Gies joke in there somewhere.
 
for your viewing pleasure, some "comments" from HamSandMan77, the user who wrote this in the first place:

http://tvtropes.org/pmwiki/article_history.php?article=Main.WiiU

"Seeing that people are having mixed views on said clock speeds (even for me when I first seen said clock speeds), I'm placing them as rumors. "

"Homedrewers have said that that is what the CPU & GPU's clock crystal is clocked at and that with said update it up clocked to that spead, also, the Wii U uses a Power6 base chip & the updated speads are placed back into rumor mode. "
 
8 gigs of DDR5 was not just some pipe dream.

It was always feasible, even during my cries of "2 gigs in the PS4 most likely." It was just expensive to do, and could have ended up with an overly complicated board. Those concerns mattered much less than I would have thought: neither price nor the potential complexity stopped Sony from going the extra mile.

GDDR5

You made the Wii U CPU? You know what it was designed for? It obviously was designed for at least 1.24 GHz; it isn't just three Broadways duct taped together.
The fact that in Wii mode it just downclocks one of the cores to the Wii's clock speed, runs the code natively, and disables the other cores does mean that it is the same as Broadway.

The higher clock just comes from the newer process node.

for your viewing pleasure, some "comments" from HamSandMan77, the user who wrote this in the first place:

http://tvtropes.org/pmwiki/article_history.php?article=Main.WiiU

"Seeing that people are having mixed views on said clock speeds (even for me when I first seen said clock speeds), I'm placing them as rumors. "

"Homedrewers have said that that is what the CPU & GPU's clock crystal is clocked at and that with said update it up clocked to that spead, also, the Wii U uses a Power6 base chip & the updated speads are placed back into rumor mode. "

Lol, it is not Power6!
 
Now I finally got to play NBA 2k13 (Demo). I don't know how it performed before the update, but it's smooth as hell. No frame rate dips whatsoever. Seems like Nintendo really improved the performance. Might be software related though.
 
OK, I just played through what is by far the worst area of slowdown in NG3:RE (the end of day 2), and there's definitely improvement. It's not what I would describe as even slightly good performance, but the opening area of the unfinished skyscraper was so bad before that even the music slowed down, and that's not happening anymore.

I would bet that this is just a benefit of the OS patch and not any kind of boost from overclocking though.
 
GDDR5


The fact that in Wii mode it just downclocks one of the cores to the Wii's clock speed, runs the code natively, and disables the other cores does mean that it is the same as Broadway.

The higher clock just comes from the newer process node.

Broadway downclocked itself to the GameCube clock speed, but it did have some other improvements over Gekko.
 
for your viewing pleasure, some "comments" from HamSandMan77, the user who wrote this in the first place:

http://tvtropes.org/pmwiki/article_history.php?article=Main.WiiU

"Seeing that people are having mixed views on said clock speeds (even for me when I first seen said clock speeds), I'm placing them as rumors. "

"Homedrewers have said that that is what the CPU & GPU's clock crystal is clocked at and that with said update it up clocked to that spead, also, the Wii U uses a Power6 base chip & the updated speads are placed back into rumor mode. "

If he really "seen said clock speeds", how come he doesn't just put up proof?
 
Changing the clock speed like that is not plausible, and assumes gross incompetence on Nintendo's part.

1) Underclocking scenario: Nintendo tested every Wii U at the factory running safely at over 3 GHz, then proceeded to gimp the launch lineup by underclocking it to 1.24 GHz. Devs were kept in the dark. If the system is certified to run at 3 GHz, why release it any lower?

2) Overclocking scenario: Nintendo did not test every Wii U, and rolls out a live update to millions of consumers. Fireballs be damned.
 
If we're going to start basing generations on power, then yes, my newest gaming PC is a generation ahead of the upcoming consoles. It's not my criterion, but many who want to deride Nintendo's weak hardware decisions relegate them to a previous generation just because their products are released with hardware that doesn't greatly eclipse Microsoft's and Sony's.

That said, the sarcasm in this thread is needless. Even if in some hypothetical update Nintendo frees up more memory or does a minor firmware-based overclock to any of the parts (again, not unheard of; Sony and Nintendo have done this before), that doesn't change the fact that the Wii U is orders of magnitude weaker than the upcoming Microsoft and Sony consoles. Nothing can or will change that, because physics never, ever, lies.

The "circle jerk," or whatever GAF calls it nowadays, on either side of the spectrum isn't necessary.

The fact that the GPU in this console is a marked improvement over the previous generation's GPUs isn't a stretch, and the fact that the CPU isn't being overclocked to 3+ GHz is extremely, incredibly obvious to anyone who knows the type of CPU that's in the console. The end.

Your computer is 10x the PS4? 20TFLOPS?

Impressive bruh.
 
So the CPU is a 1.24GHz tri-core?
Isn't that even weaker than the 360's three cores?

How's the GPU?
CPU is based on PowerPC 7xx which is an old design (same as Gamecube/Wii). Iwata admits the console can't run everything perfectly and some developers complained about it. One developer has said they're ok with it.

GPU is better than PS3/360 but probably not by much. There's still some secrecy to the chip but don't expect dramatic improvements over what the Xbox 360 can do (compared to PS4/720).
 
CPU is based on PowerPC 7xx which is an old design (same as Gamecube/Wii). Iwata admits the console can't run everything perfectly and some developers complained about it. One developer has said they're ok with it.

GPU is better than PS3/360 but probably not by much. There's still some secrecy to the chip but don't expect dramatic improvements over what the Xbox 360 can do (compared to PS4/720).

This is the reason why companies don't like to share specs of custom silicon. Look at the Apple A5 (or even better, the A5X) chip and then compare it to the Tegra 3. The Tegra 3 has three or so times as many "cores" and faster clock speeds all around. On paper the Tegra 3 is a beast, but then the "weaker" custom Apple chips beat it out in most categories. I feel that the Wii U can have the same potential, being an MCM and all. Once people figure out the architecture and the GPGPU I feel like it will hold its own just fine.
 
This is the reason why companies don't like to share specs of custom silicon.
Their loss. If interpretation is such a problem, they can give a disclaimer about how the specs are supposed to be read.

Once people figure out the architecture and the GPGPU I feel like it will hold its own just fine.
Compared to the PS3/360? Sure, but that's not saying much (2012 vs 2005 tech).
 
This is the reason why companies don't like to share specs of custom silicon. Look at the Apple A5 (or even better, the A5X) chip and then compare it to the Tegra 3. The Tegra 3 has three or so times as many "cores" and faster clock speeds all around. On paper the Tegra 3 is a beast, but then the "weaker" custom Apple chips beat it out in most categories. I feel that the Wii U can have the same potential, being an MCM and all. Once people figure out the architecture and the GPGPU I feel like it will hold its own just fine.

No, the A5 is dual core; the Tegra 3 is quad core.
They use the same CPU cores! It's just that the Tegra 3 has 2x the number and a higher clock speed.
The Tegra 3 is faster CPU-wise no matter what.
Broadway downclocked itself to the GameCube clock speed, but it did have some other improvements over Gekko.

I am reading the IBM docs for both Gekko and Broadway and so far they both look the same.
 
No, the A5 is dual core; the Tegra 3 is quad core.
They are the same cores! It's just that the Tegra 3 has 2x the number and a higher clock speed.
The Tegra 3 is faster CPU-wise no matter what.

Tegra 3: quad-core CPU at 1.6 GHz, 12-core GPU = 16 total cores

A5: dual-core CPU at 1 GHz, dual-core GPU = 4 total cores

FOUR TIMES as many cores.

I'll give you credit that the Tegra's CPU is a BIT faster, though not by much at all (according to AnandTech), and the A5 DESTROYS the Tegra, which has 6 times the GPU cores. The A5 chip is INCREDIBLY efficient: better-designed silicon with "less powerful" specs. @JordanN
THIS is why specs don't matter; real-world performance is really what matters.
 
Tegra 3: quad-core CPU at 1.6 GHz, 12-core GPU = 16 total cores

A5: dual-core CPU at 1 GHz, dual-core GPU = 4 total cores

FOUR TIMES as many cores.

I'll give you credit that the Tegra's CPU is a BIT faster, though not by much at all (according to AnandTech), and the A5 DESTROYS the Tegra, which has 6 times the GPU cores. The A5 chip is INCREDIBLY efficient: better-designed silicon with "less powerful" specs. @JordanN
THIS is why specs don't matter; real-world performance is really what matters.

You cannot just lump CPU cores and GPUs together!

The CPU cores in the Tegra 3 are the same cores from ARM! It just has more of them, at a higher clock speed!

The GPU specs, on the other hand, are better on the A5 than on the Tegra 3, and the "cores" are not comparable: one has unified shader pipelines, the other does not!
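To make that point concrete, here's a deliberately hypothetical sketch in Python (the per-core throughput numbers are made up for illustration and stand in for no real chip) showing why adding up "cores" across different architectures tells you nothing about actual throughput:

# Hypothetical illustration: a bigger "core" count does not mean more work done
# when the cores being counted are not comparable units.
chip_many_small = {"cores": 16, "work_per_core": 1.0}  # made-up numbers
chip_few_wide = {"cores": 4, "work_per_core": 5.0}     # made-up numbers

def total_throughput(chip):
    return chip["cores"] * chip["work_per_core"]

print(total_throughput(chip_many_small))  # 16.0
print(total_throughput(chip_few_wide))    # 20.0 -- fewer "cores", more work done

The same caveat applies to unified versus non-unified shader "cores": the units being counted simply aren't doing the same amount of work per clock.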
 
Now I finally got to play NBA 2k13 (Demo). I don't know how it performed before the update, but it's smooth as hell. No frame rate dips whatsoever. Seems like Nintendo really improved the performance. Might be software related though.

I mean, I've had NBA 2K13 since I got the console back in December... I don't remember it having any performance issues. I'll have to try it post-update, but the game was already really smooth performance-wise.
 
You cannot just lump CPU cores and GPUs together!

The CPU cores in the Tegra 3 are the same cores from ARM! It just has more of them, at a higher clock speed!

The GPU specs, on the other hand, are better on the A5 than on the Tegra 3, and the "cores" are not comparable: one has unified shader pipelines, the other does not!

OK... not sure how that makes sense, because even if we are just talking about CPU cores, the Tegra is still a "newer" quad core at 1.6 GHz and the A5 a dual core at 1 GHz, yet the Tegra is barely better at most benchmarks. The point I'm trying to prove is that you don't have to look at the core specs to "determine" real-world performance. I'm just giving an example to show that maybe, MAYBE, the Wii U is not as bad off as we think, and that it really might be very efficient at what it does because of its custom silicon. Yeah, devs will have to figure out how to get the best out of the chipset, but I just don't think people should rule it out as being so "weak" yet.
 
OK... not sure how that makes sense, because even if we are just talking about CPU cores, the Tegra is still a "newer" quad core at 1.6 GHz and the A5 a dual core at 1 GHz, yet the Tegra is barely better at most benchmarks. The point I'm trying to prove is that you don't have to look at the core specs to "determine" real-world performance. I'm just giving an example to show that maybe, MAYBE, the Wii U is not as bad off as we think, and that it really might be very efficient at what it does because of its custom silicon. Yeah, devs will have to figure out how to get the best out of the chipset, but I just don't think people should rule it out as being so "weak" yet.

Don't know why he's debating you so adamantly: I'm pretty sure what you just said here has been consensus for, like, 3000 posts.
 
OK... not sure how that makes sense, because even if we are just talking about CPU cores, the Tegra is still a "newer" quad core at 1.6 GHz and the A5 a dual core at 1 GHz, yet the Tegra is barely better at most benchmarks. The point I'm trying to prove is that you don't have to look at the core specs to "determine" real-world performance. I'm just giving an example to show that maybe, MAYBE, the Wii U is not as bad off as we think, and that it really might be very efficient at what it does because of its custom silicon. Yeah, devs will have to figure out how to get the best out of the chipset, but I just don't think people should rule it out as being so "weak" yet.

The CPU benchmarks in that article that compare the two are browser-based and thus stupid to use for comparing CPU performance.
They rely heavily on how fast the browser is, and they do not benchmark multi-core performance.

GPU benchmarks have just about nothing to do with the CPU!

Don't know why he's debating you so adamantly: I'm pretty sure what you just said here has been consensus for, like, 3000 posts.

Because he is damn wrong about Tegra 3's CPU and is comparing GPUs based only on "cores"!
 
Honestly, anyone trying to say it's outright weaker than the PS3/360 isn't worth listening to. It would not be running ports at anything close to parity if a single part of the system were that huge of an issue. The CPU has comparatively limited SIMD functionality, but there are co-processors to offload specific tasks to, freeing a modicum of system resources. That is a change; well, the entire system is. It's literally taking a 4-stage-pipeline CPU from a decade ago... no, three of them! Slapping them together with refinements and using the smaller process to gain speed.

It's so weird. I mean, just from the minus points I've covered you can see why some PS3/360 code won't work well on it. Those CPUs, when used effectively (effectively being the keyword), are crunching numbers that aren't even within the realm of possibility on Espresso. Meanwhile it has a newer GPU, an interesting memory architecture to say the least, and co-processors to offload to.

The entire thing screams "How can we make our development changes as painless as possible?"

By changing very little in your development pipeline.
 
Your computer is 10x the PS4? 20TFLOPS?

Impressive bruh.

1.843 TFLOPS x 10 = 18.43 TFLOPS.
Four HD 7970s clocked at 1.2 GHz = 19.661 TFLOPS, so it isn't as impossible as you are making it out to be. The reality is that CrossFire doesn't hold 100% efficiency; however, it is possible to be a generation ahead with a PC setup, in a similar manner to the PS4 being a generation ahead of the Wii U, which is his point.
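For anyone who wants to check those numbers, here's a rough sketch of the usual theoretical-FLOPS arithmetic (shader count x 2 ops per clock x clock speed); the shader counts and clocks below are the commonly cited ones for these parts, and the script is just an illustration:

# Theoretical single-precision throughput: shaders * 2 ops/clock * clock in GHz = GFLOPS.
def gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

ps4 = gflops(1152, 0.8)      # ~1843 GFLOPS, i.e. the 1.843 TFLOPS figure above
hd7970 = gflops(2048, 1.2)   # ~4915 GFLOPS per card when overclocked to 1.2 GHz
print(ps4 * 10)              # ~18430 GFLOPS needed for "10x PS4"
print(hd7970 * 4)            # ~19660 GFLOPS from four such cards (the 19.661 TFLOPS above)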
 
1.843 TFLOPS x 10 = 18.43 TFLOPS.
Four HD 7970s clocked at 1.2 GHz = 19.661 TFLOPS, so it isn't as impossible as you are making it out to be. The reality is that CrossFire doesn't hold 100% efficiency; however, it is possible to be a generation ahead with a PC setup, in a similar manner to the PS4 being a generation ahead of the Wii U, which is his point.

Oh my god, I almost posted a comment saying there's no fucking way his PC was 10x current gen unless he had like 4x 7970s.

Didn't know how close I was o.o
 
Oh my god, I almost posted a comment saying there's no fucking way his PC was 10x current gen unless he had like 4x 7970s.

Didn't know how close I was o.o

Hah, yeah. Also, to be as far ahead of the PS4 as the PS4 is from the 360, it would only need 7.6x the FLOPS (this is all BS anyway); that is only three 7970s clocked just below 1.2 GHz.
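Rough arithmetic behind that 7.6x figure, taking the commonly cited ~240 GFLOPS theoretical number for the Xbox 360's Xenos GPU (a sketch, not gospel):

# How many TFLOPS a PC would need to lead the PS4 by the same factor the PS4 leads the 360.
xbox360 = 0.240                            # commonly cited theoretical TFLOPS for Xenos
ps4 = 1.843
ratio = ps4 / xbox360                      # ~7.7x
needed = ps4 * ratio                       # ~14.2 TFLOPS
three_7970s = 3 * 2048 * 2 * 1.15 / 1000   # ~14.1 TFLOPS at just under 1.2 GHz
print(ratio, needed, three_7970s)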

The Wii U's performance issues come from a lack of software; otherwise you could look at it like the average entry-level gaming laptop, which is already going to play stuff like BF4 (obviously not on high).
 
CPU is based on PowerPC 7xx which is an old design (same as Gamecube/Wii). Iwata admits the console can't run everything perfectly and some developers complained about it.
Where did Iwata say that, and why have some developers (the ones who actually made good games for it) said the exact opposite?

GPU is better than PS3/360 but probably not by much.

How did you conclude this?
 
Honestly, anyone trying to say it's outright weaker than the PS3/360 isn't worth listening to. It would not be running ports at anything close to parity if a single part of the system were that huge of an issue. The CPU has comparatively limited SIMD functionality, but there are co-processors to offload specific tasks to, freeing a modicum of system resources. That is a change; well, the entire system is. It's literally taking a 4-stage-pipeline CPU from a decade ago... no, three of them! Slapping them together with refinements and using the smaller process to gain speed.
What co-processors? The audio DSP is official, but what else?

A fucking GBA is good for mixing 16 channels of CD-frequency audio. What's a dedicated sound DSP going to save you in this day and age, 2% CPU load? There's been a 121.something MHz figure floating around for it. How fast do you think such a DSP can be, in terms of ops/second or any other metric?

Point being, it's a drop in the bucket for the relative performance standings. If you argue for task offloading, both the PS3 and Xbox 360 have room to spare for offloading whatever CPU-ish tasks, especially the PS3, and you're still writing code for the main processor. And that's definitely easier and more fruitful than supporting a custom DSP architecture for every little class of subtasks. I'd wager a single SPE outperforms the entire Wii U minus GPU in stream processing, DSPs or not.
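To put a number on that "2% CPU load" point, a back-of-envelope sketch; the ops-per-sample figure is a loose assumption for a simple mix (resample, volume, pan, accumulate), not a measured value:

# Rough cost of software-mixing 16 voices of 48 kHz audio on one CPU core.
voices = 16
sample_rate = 48_000       # samples per second per voice
ops_per_sample = 20        # assumed per-voice work: resample + volume + pan + accumulate
ops_per_second = voices * sample_rate * ops_per_sample  # ~15.4 million ops/s
core_hz = 1.24e9           # one Espresso core at ~1.24 GHz
print(100 * ops_per_second / core_hz)  # ~1.2% of a single core, give or take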


PS: What's interesting about the memory architecture?
 