Rumor: Wii U final specs

Isn't that pretty much the definition of eDRAM?

In most cases yes, but the eDRAM for the Xbox 360's GPU was on a separate die, so that's a possibility for the Wii U as well.

Also, it's unlikely that it will be used for GPGPU stuff, as access from the CPU will likely be much slower than main RAM, if it's even possible.

As Blu says, due to BC requirements, the eDRAM has to provide the same sort of access for the CPU as the 1T-SRAM did for the GC and Wii. Basically, we're probably looking at these sorts of characteristics for access to the eDRAM:

CPU: High bandwidth, very low latency
GPU: Very high bandwidth, extremely low latency

Which conforms pretty well with what you'd need for a GPGPU scratchpad.
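
To make the "GPGPU scratchpad" idea a bit more concrete, here's a minimal C sketch of the tiling pattern that a small pool of fast, low-latency memory is good for. The 2 MB pool size and the trivial "square everything" pass are made-up numbers purely for illustration, not anything from the dev docs.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical fast local pool, standing in for an eDRAM/1T-SRAM style
   scratchpad. The 2 MB size is made up for the example. */
#define SCRATCH_BYTES (2u * 1024u * 1024u)
#define TILE_ELEMS    (SCRATCH_BYTES / sizeof(float) / 2) /* in-tile + out-tile */

/* Toy "compute pass": square every element of one tile. */
static void process_tile(const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * in[i];
}

int main(void)
{
    const size_t total = 8u * 1024u * 1024u;   /* 8M floats living in "main RAM" */
    float *src = malloc(total * sizeof *src);
    float *dst = malloc(total * sizeof *dst);
    static float scratch[2][TILE_ELEMS];       /* stand-in for the fast pool */

    if (!src || !dst)
        return 1;
    for (size_t i = 0; i < total; i++)
        src[i] = (float)i;

    /* Stream tiles through the small fast pool: copy in, crunch, copy out.
       The point of a high-bandwidth, low-latency pool is that the crunch
       step never has to touch slow main memory. */
    for (size_t off = 0; off < total; off += TILE_ELEMS) {
        size_t n = (total - off < TILE_ELEMS) ? total - off : TILE_ELEMS;
        memcpy(scratch[0], src + off, n * sizeof(float));
        process_tile(scratch[0], scratch[1], n);
        memcpy(dst + off, scratch[1], n * sizeof(float));
    }

    printf("dst[1234] = %.0f\n", dst[1234]);   /* 1234^2 = 1522756 */
    free(src);
    free(dst);
    return 0;
}
```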
 
To my knowledge, there has never been a precedent where Nintendo has made a stock part do less after modifying it.
Didn't they underclock the graphics chip in the 3DS for thermal/battery life reasons? I thought I heard such a thing anyways. The Wii U is a pretty tiny device. Reliability, yields, thermal performance, power consumption, cost: all things that could conceivably lead to the Wii U version not being at least as powerful as the stock part.

I mean, I'm not losing sleep over it either way. I think Wii U games look good graphically even if not on par with high-end PC stuff. Whatever's inside it, it's doing a good enough job for me, even if I'd like better.
 
I was playing Mario Kart Wii with my little boy last night (3yo). He even managed to almost get around a track by himself, for the first time ever. We played 8 races, had a laugh, got our arses kicked on some levels and served up our own arse kickings on others. High fives flew whenever we did something cool and we both went to bed happy.

Unfortunately it was all ruined by the SD graphics and underpowered GameCube tech with stereo sound coming from the TV.

I don't think I really give a shit about Wii U specs anymore.

You don't really need to give a shit about Wii U at all. Mario Kart Wii provides endless fun for the whole family.
 
Weren't the GameCube's clock speeds lower at first, and then right before launch Nintendo improved them?

I believe they increased the clock speed of the CPU and lowered the clock speed of the GPU.

Going straight from memory, the original clock speeds were CPU 400 MHz, GPU 200 MHz. Then they changed it to CPU 486 MHz, GPU 162 MHz. I loved those days when gamers were kept up to date with tech developments in consoles.
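
Taking those numbers at face value (they're from memory, so treat them as approximate), the back-of-the-envelope on what the change did to the balance looks like this: the CPU went up about 21%, the GPU came down about 19%, and the final clocks sit at an exact 3:1 ratio.

```c
#include <stdio.h>

int main(void)
{
    /* Reported GameCube clocks (MHz) before and after the late change,
       taken from the post above (quoted from memory, so approximate). */
    const double cpu_before = 400.0, gpu_before = 200.0;
    const double cpu_after  = 486.0, gpu_after  = 162.0;

    printf("before: CPU/GPU ratio = %.2f\n", cpu_before / gpu_before); /* 2.00 */
    printf("after:  CPU/GPU ratio = %.2f\n", cpu_after  / gpu_after);  /* 3.00 */
    printf("CPU clock %+.1f%%, GPU clock %+.1f%%\n",
           100.0 * (cpu_after / cpu_before - 1.0),   /* +21.5% */
           100.0 * (gpu_after / gpu_before - 1.0));  /* -19.0% */
    return 0;
}
```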
 
Didn't they underclock the graphics chip in the 3DS for thermal/battery life reasons? I thought I heard such a thing anyways. The Wii U is a pretty tiny device. Reliability, yields, thermal performance, power consumption, cost: all things that could conceivably lead to the Wii U version not being at least as powerful as the stock part.

I mean, I'm not losing sleep over it either way. I think Wii U games look good graphically even if not on par with high-end PC stuff. Whatever's inside it, it's doing a good enough job for me, even if I'd like better.

The 3DS GPU is clocked pretty high. Not the full 400 MHz, which I think was the max clock for the PICA obviously, but still in the upper range. And quite remarkably, it's actually clocked the same as the CPU.
 
Going straight from memory, the original clock speeds were CPU 400 MHz, GPU 200 MHz. Then they changed it to CPU 486 MHz, GPU 162 MHz. I loved those days when gamers were kept up to date with tech developments in consoles.

I wonder if Nintendo will upgrade the CPU clock speed and keep the GPU the same right before launch.
 
Probably through actual development and whatever details they do have. You see they also tend to clarify that they are still learning the hardware.

The dev team probably ran some code that was not completely optimized for Wii U's CPU and realized that it was running slower compared to Xenon and/or Cell.
 
I wonder if Nintendo will upgrade the CPU clock speed and keep the GPU the same right before launch.

If anything I think they'll increase the CPU clock and decrease the GPU clock, just like it's being reported they did with the GC. I don't think they can afford to increase the clock of the CPU without taking something from somewhere else. If they did this, it would make the console a little more balanced, correct?
 
I wonder if Nintendo will upgrade the CPU clock speed and keep the GPU the same right before launch.

The Wii U should be in production now, so Nintendo have already done all the hardware optimizations that they are going to do. Upgraded firmware, software tools and more experience, though, will let developers get more out of the hardware. Ideaman pointed out that some major improvements in a game's performance were due to that.


not how it works, but okay.

*shrug* I'm open for clarifications.
 
If anything I think they'll increase the CPU clock and decrease the GPU clock, just like it's being reported they did with the GC. I don't think they can afford to increase the clock of the CPU without taking something from somewhere else. If they did this, it would make the console a little more balanced, correct?

I don't know too much about tech, so someone correct me if I'm wrong, but I think it was because of the size and the architecture of the GameCube that they had to decrease the GPU clock speed. Since the Wii U is bigger and because of its architecture, I think it's possible to keep the GPU clock speed while upgrading the CPU clock speed. Not sure though.
 
I wonder if Nintendo will upgrade the CPU clock speed and keep the GPU the same right before launch.

If I weren't lazy at the moment I'd look up the stories, but IIRC 400/200 was too unbalanced with the GPU being able to do more than the CPU could keep up with. So they did the change to 486/162 to make it balanced. I recall reading developers were very happy about the change. Also one of the reasons the GameCube was considered so well balanced.

With the developers complaints about the Wii U's CPU it sounds more like a new architecture learning curve, but I of course have no way of knowing. Either way I wouldn't complain if Nintendo upped the clock on the CPU, or at least made it an option for certain games.

EDIT: Oh yeah, the PSU is already locked, so an increase in the CPU's power would have to come from somewhere else. The GameCube's change I thought came earlier, but it probably wouldn't have mattered anyway, given the decrease in the GPU's clock speed.
 
I don't know too much about tech, so someone correct me if I'm wrong, but I think it was because of the size and the architecture of the GameCube that they had to decrease the GPU clock speed. Since the Wii U is bigger and because of its architecture, I think it's possible to keep the GPU clock speed while upgrading the CPU clock speed. Not sure though.

Yeah, I'm not sure either. I was thinking about the power budget. Need to get an engineer in here to tell us what's possible.
 
If I weren't lazy at the moment I'd look up the stories, but IIRC 400/200 was too unbalanced with the GPU being able to do more than the CPU could keep up with. So they did the change to 486/162 to make it balanced. I recall reading developers were very happy about the change. Also one of the reasons the GameCube was considered so well balanced.

With the developers complaints about the Wii U's CPU it sounds more like a new architecture learning curve, but I of course have no way of knowing. Either way I wouldn't complain if Nintendo upped the clock on the CPU, or at least made it an option for certain games.

EDIT: Oh yeah, the PSU is already locked, so an increase in the CPU's power would have to come from somewhere else. The GameCube's change I thought came earlier, but it probably wouldn't have mattered anyway, given the decrease in the GPU's clock speed.

Isn't this generation kinda different because the GPU is now doing the majority of the workload instead of the CPU?
 
*shrug* I'm open for clarifications.

For starters, I don't think any code can ever be completely optimized for hardware unless you're writing directly to it.

But clock speeds aren't directly comparable between CPUs; that said, the expectation is that the CPU will be running more instructions per cycle. If they're using "unoptimized" code, it wouldn't matter, because there's no particular reason why a newer CPU couldn't process the instruction sets written for older CPUs, particularly ones sharing similar designs, unless it was running at a lower clock speed.
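
To put the clock-vs-IPC point in rough numbers, here's a throwaway sketch; the clocks and IPC values are invented purely for illustration and aren't real figures for Xenon or the Wii U CPU.

```c
#include <stdio.h>

int main(void)
{
    /* Toy numbers only: neither IPC figure is a real measurement of Xenon
       or of the Wii U's CPU; they just illustrate why MHz alone says little. */
    const double clock_a_mhz = 3200.0, ipc_a = 0.5;  /* narrow in-order core, high clock */
    const double clock_b_mhz = 1200.0, ipc_b = 1.5;  /* wider out-of-order core, low clock */

    /* Millions of instructions retired per second = clock (MHz) x IPC. */
    printf("A: %.0f MHz x %.1f IPC = %.0f MIPS\n", clock_a_mhz, ipc_a, clock_a_mhz * ipc_a);
    printf("B: %.0f MHz x %.1f IPC = %.0f MIPS\n", clock_b_mhz, ipc_b, clock_b_mhz * ipc_b);
    return 0;
}
```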

the "lazy developer" argument is always a faulty one, these guys tend to know more then me or most others on this topic. If they say it's slower, then for all intents and purposes one should trust their opinion on the subject.
 
The 3DS GPU is clocked pretty high. Not the full 400 MHz, which I think was the max clock for the PICA obviously, but still in the upper range. And quite remarkably, it's actually clocked the same as the CPU.

Maybe you should update the 3DS Wikipedia page because the info there is wrong if this is correct.
 
Isn't this generation kinda different because the GPU is now doing the majority of the workload instead of the CPU?

*Paging bgassassin* As I understand it yes.

Maybe you should update the 3DS Wikipedia page because the info there is wrong if this is correct.

I don't see any info on the wikipedia page about clock speeds. That's an improvement since I recall they had the GPU clock speed at 800 MHz and the CPU at 400 MHz. I thought I read they were 266 MHz though, could of course be very wrong.
 
For starters, I don't think any code can ever be completely optimized for hardware unless you're writing directly to it.

But clock speeds aren't directly comparable between CPUs; that said, the expectation is that the CPU will be running more instructions per cycle. If they're using "unoptimized" code, it wouldn't matter, because there's no particular reason why a newer CPU couldn't process the instruction sets written for older CPUs, particularly ones sharing similar designs, unless it was running at a lower clock speed.

The "lazy developer" argument is always a faulty one; these guys tend to know more than me or most others on this topic. If they say it's slower, then for all intents and purposes one should trust their opinion on the subject.

I need to rephrase my statements, as I didn't mean to imply that the developers are being lazy. Wii U's balance between the CPU and GPU appears to be a bit different from the 360's and the PS3's. The Wii U's CPU is not meant to do some of the tasks that Xenon did. The Wii U has a DSP to handle sound and music, and Iwata specifically said that "the graphics processor can be used to handle various tasks aside from just graphics." While we are not sure how good Wii U's GPU is at GPGPU tasks yet, it is probably safe to assume that Nintendo wants the GPU to take over some of the tasks that the CPU usually did on current-gen consoles. One thing the Wii U's CPU may be superior to Xenon at is general processing, but most games would be designed to work around Cell and Xenon's issues with that.
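
For a feel of what "the DSP handles sound and music" buys you, here's a bare-bones software mixing loop in C. On a console without a dedicated DSP this kind of per-sample grind runs on the main CPU every audio frame; the voice count, volumes and buffer size here are made up for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define VOICES  32      /* hypothetical number of active sounds */
#define SAMPLES 1024    /* samples per audio frame, made up */

/* Mix VOICES 16-bit mono streams into one output buffer with per-voice
   volume. This per-sample work is exactly the sort of thing a dedicated
   DSP takes off the main CPU. */
static void mix(int16_t in[VOICES][SAMPLES],
                const uint8_t vol[VOICES],      /* 0..255 */
                int16_t out[SAMPLES])
{
    for (int s = 0; s < SAMPLES; s++) {
        int32_t acc = 0;
        for (int v = 0; v < VOICES; v++)
            acc += (in[v][s] * (int32_t)vol[v]) >> 8;
        /* clamp to the 16-bit range */
        if (acc >  32767) acc =  32767;
        if (acc < -32768) acc = -32768;
        out[s] = (int16_t)acc;
    }
}

int main(void)
{
    static int16_t in[VOICES][SAMPLES];
    static int16_t out[SAMPLES];
    static uint8_t vol[VOICES];

    for (int v = 0; v < VOICES; v++) {
        vol[v] = 128;                       /* half volume */
        for (int s = 0; s < SAMPLES; s++)
            in[v][s] = (int16_t)((v + 1) * 100);
    }
    mix(in, vol, out);
    printf("out[0] = %d\n", out[0]);
    return 0;
}
```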
 
*Paging bgassassin* As I understand it yes.



I don't see any info on the wikipedia page about clock speeds. That's an improvement since I recall they had the GPU clock speed at 800 MHz and the CPU at 400 MHz. I thought I read they were 266 MHz though, could of course be very wrong.

very close
 
I need to rephrase my statements, as I didn't mean to imply that the developers are being lazy. Wii U's balance between the CPU and GPU appears to be a bit different from the 360's and the PS3's. The Wii U's CPU is not meant to do some of the tasks that Xenon did. The Wii U has a DSP to handle sound and music, and Iwata specifically said that "the graphics processor can be used to handle various tasks aside from just graphics." While we are not sure how good Wii U's GPU is at GPGPU tasks yet, it is probably safe to assume that Nintendo wants the GPU to take over some of the tasks that the CPU usually did on current-gen consoles. One thing the Wii U's CPU may be superior to Xenon at is general processing, but most games would be designed to work around Cell and Xenon's issues with that.

If anything, Xenon is better at handling general processing calls. Corrinne Yu discussed how Xenon was being underutilized in one of her interviews, and how it could be used for graphics processing, like Cell. This might be wrong, but from my limited understanding, a GPGPU usually requires a higher clock speed, otherwise it's going to bottleneck.
 
For starters, I don't think any code can ever be completely optimized for hardware unless you're writing directly to it.
It can surely be optimised to a point where it favors architecture A over architecture B, without writing a single line of assembly.

But clock speeds aren't directly comparable between CPUs; that said, the expectation is that the CPU will be running more instructions per cycle. If they're using "unoptimized" code, it wouldn't matter, because there's no particular reason why a newer CPU couldn't process the instruction sets written for older CPUs, particularly ones sharing similar designs, unless it was running at a lower clock speed.
Do you expect U-CPU to feature SIMD resources in the capacity found in Xenon? That's an honest question.

the "lazy developer" argument is always a faulty one, these guys tend to know more then me or most others on this topic. If they say it's slower, then for all intents and purposes one should trust their opinion on the subject.
The 'lazy developers' is a bastardisation of a perfectly valid argument referring to the harsh realities of game development - companies don't always have the resources to get their creations to the best levels of performance possible on the hw, in the given timeframes.
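
A small concrete example of "optimised to favor architecture A over B without writing a single line of assembly" (my own toy code, not anything from a Wii U or 360 codebase): the same particle update written two ways in C. The SoA version auto-vectorizes nicely on a CPU with wide SIMD units, while the AoS version leans more on cache and out-of-order execution; the point being that the "right" layout depends on the hardware you're targeting.

```c
#include <stddef.h>
#include <stdio.h>

/* Array-of-structures: comfortable for a cache-friendly, out-of-order core,
   but awkward for SIMD because x/y/z are interleaved in memory. */
struct particle { float x, y, z, vx, vy, vz; };

static void update_aos(struct particle *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; i++) {
        p[i].x += p[i].vx * dt;
        p[i].y += p[i].vy * dt;
        p[i].z += p[i].vz * dt;
    }
}

/* Structure-of-arrays: each loop is a straight run over contiguous floats,
   which a compiler can auto-vectorize for a wide-SIMD CPU (VMX-style units)
   without a single hand-written intrinsic. */
struct particles_soa { float *x, *y, *z, *vx, *vy, *vz; };

static void update_soa(struct particles_soa *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; i++) p->x[i] += p->vx[i] * dt;
    for (size_t i = 0; i < n; i++) p->y[i] += p->vy[i] * dt;
    for (size_t i = 0; i < n; i++) p->z[i] += p->vz[i] * dt;
}

int main(void)
{
    enum { N = 100000 };
    static struct particle aos[N];
    static float x[N], y[N], z[N], vx[N], vy[N], vz[N];
    struct particles_soa soa = { x, y, z, vx, vy, vz };

    for (size_t i = 0; i < N; i++) {
        aos[i].vx = vx[i] = 1.0f;
        aos[i].vy = vy[i] = 2.0f;
        aos[i].vz = vz[i] = 3.0f;
    }
    update_aos(aos, N, 1.0f / 60.0f);
    update_soa(&soa, N, 1.0f / 60.0f);
    printf("aos[0].x = %f, soa.x[0] = %f\n", aos[0].x, soa.x[0]);
    return 0;
}
```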
 
Good fucking god.

People need to stop contacting AMD tech support and posting their replies as if it means shitall.

If it turns out to be an E6760, it turns out to be an E6760.

But AMD tech support is not going to know what's in the Wii U. To believe they would is frankly all sorts of utterly stupid.


Like I said; it's a bit of fun.


If you'd rather I went back to going round in circles and arguing over things we simply don't know, then so be it.

You've killed the child in me, shinra. I hope you're happy :(
 
Like I said; it's a bit of fun.

If you'd rather I went back to going round in circles and arguing over things we simply don't know, then so be it.

You've killed the child in me, shinra. I hope you're happy :(
I'm off to the mall to pull off Santa beards.

There's about 5 (?) weeks to go? And I'm guessing that review consoles and games will presumably go out earlier. It really won't be long until we know what's actually inside.

And I'd actually prefer random (informed) speculation and tech talk - since it's interesting to learn stuff.
 
I'm off to the mall to pull off Santa beards.

There's about 5 (?) weeks to go? And I'm guessing that review consoles and games will presumably go out earlier. It really won't be long until we know what's actually inside.

And I'd actually prefer random (informed) speculation and tech talk - since it's interesting to learn stuff.


Lol. You monster!
 
I'm honestly surprised (though in a way not really) that the Wii U CPU is so weak. That bird demo looked better than anything currently on the PS360.
 
On a Pikmin scale. The Wii CPU had the strength of 12 purple Pikmin. The Wii U CPU's strength is 21 purple Pikmin, 3 red, 3 yellow and 2 white Pikmin.

That's total rubbish.
Anyone who knows anything knows it's at the VERY LEAST 3.5 yellow and 3 white.
Fool
 
On a Pikmin scale. The Wii CPU had the strength of 12 purple Pikmin. The Wii U CPU's strength is 21 purple Pikmin, 3 red, 3 yellow and 2 white Pikmin.
Informed speculation says otherwise. I think we're looking at 36 purple Pikmin. Don't expect details at Nintendo Direct on how fast they are, because Reggie and Iwata are mad Miyamoto had more nectar to feed his.
 
It can surely be optimised to a point where it favors architecture A over architecture B, without writing a single line of assembly.


Do you expect U-CPU to feature SIMD resources in the capacity found in Xenon? That's an honest question.


The 'lazy developers' is a bastardisation of a perfectly valid argument referring to the harsh realities of game development - companies don't always have the resources to get their creations to the best levels of performance possible on the hw, in the given timeframes.

Although 'lazy developers' does apply in some cases, I think the term should be changed to 'constrained developers'. However, that doesn't roll off the tongue as well.
 
Take this for what it's worth, but the developer documents do not talk about an E6760. It's almost line-for-line identical to R700 GPU specifications (hence why people are saying that's the base prior to customization).
 
Take this for what it's worth, but the developer documents do not talk about an E6760. It's almost line-for-line identical to R700 GPU specifications (hence why people are saying that's the base prior to customization).


Yeah, I think it's widely accepted that it's not an E6760, nor is it based on it.

We've just been having fun with AMD CS reps. It's just funny to see how these end up in all corners of the internet :)
 