Rumor: Wii U final specs

Why are we dealing with nanoseconds when delays that small aren't a factor? In milliseconds, though, it's obvious: when you buy a display you can see what its maximum rates are, and the same goes for GPUs or the drivers that introduce these delays. So I don't need a study when a simple search for Nvidia or ATI cards and input lag can inform you far more than the nonexistent studies you want me to show you.

The reason you have a hardware scaler or something external take over is so that your TV receives a signal it doesn't have to process itself.

You do control these things; it's called not buying crap or substandard products. You're dealing with this problem after the fact. I'm saying that, as a gamer, if you buy a display with this potential issue, a console or a PC isn't going to fix it.

You are talking about upscaling as a cause of noticeable input lag. I am saying that upscaling does not produce noticeable input lag. I'm not sure how you can demonstrate that a console's scaler or a TV's scaler is creating unacceptable input lag.
 
You are talking about upscaling as a cause of noticeable input lag. I am saying that upscaling does not produce noticeable input lag. I'm not sure how you can demonstrate that a console's scaler or a TV's scaler is creating unacceptable input lag.

Bigger HDTVs are horrible for PC gaming on this issue, and a YouTube video on it can easily show your eyes the problem.

Save yourself the grief; it's a real issue.

Sticking your head in the sand or playing Baghdad Bob doesn't change the fact that people with timers have shown that monitors and the like aren't equal.

For the record, the way to do what you're asking is to stick a CRT next to a flat panel running the same test. The differences are very apparent. Just because you don't know doesn't mean others don't.

Great scalers don't have this problem; average ones do, and it's extremely hard to get clear information on this issue when buying HDTVs.
 
Bigger HDTVs are horrible for PC gaming on this issue, and a YouTube video on it can easily show your eyes the problem.

Save yourself the grief; it's a real issue.

Sticking your head in the sand or playing Baghdad Bob doesn't change the fact that people with timers have shown that monitors and the like aren't equal.

For the record, the way to do what you're asking is to stick a CRT next to a flat panel running the same test. The differences are very apparent. Just because you don't know doesn't mean others don't.

Input lag is attributable to many factors caused by the display. You specifically mentioned upscaling as one cause of noticeable input lag. All of my replies have explicitly been about upscaling and input lag. And I've never once denied that input lag exists.

I've said all along that the specific act of upscaling is simply not the biggest culprit. I'm not sure why you want to dump the entire issue of input lag into this when you were talking only about upscaling affecting input lag. Unless we have different technical definitions of "upscaling".

EDIT: I'm not going to discuss this anymore. Through several posts, I've made my point abundantly clear as a response to post #6690. I know full well what input lag is and what it looks like. It is caused by a combination of things, each of which can be measured. Upscaling is one of many factors, and a small one.
 
Input lag is attributable to many factors caused by the display. You specifically mentioned upscaling as one cause of noticeable input lag. All of my replies have explicitly been about upscaling and input lag. And I've never once denied that input lag exists.

I've said all along that the specific act of upscaling is simply not the biggest culprit. I'm not sure why you want to dump the entire issue of input lag into this when you were talking
about upscaling.

This is the beauty of quoting:

I am saying that upscaling does not produce noticeable input lag.

Yet in the post above, you are now arguing that it can.

I was pointing it out because I despise the idea of upscaling due to the variety of disadvantages it can bring, such as possible input lag. There should be no argument about how upscaling can screw with the image in terms of color accuracy or reproduction versus a native image, which is why I even began arguing about these various subjects in relation to the bigger argument: native resolution vs. upscaled. I know people put up with upscaling, but to be as clear as possible, as a paying customer on console platforms this is BS I shouldn't have to put up with at all, and neither should you, considering they are now charging PC-level (or, in the case of electronics, high-end) prices for these machines. For low-end machines with a low price, Nintendo, Sony, and MS can do what they like. For what we are now paying at launch, which is $300 at minimum, I expect features on par with GPUs or other video products I buy of similar value.

I'd rather avoid the issue when possible. Instead, as a console gamer, I have to deal with this potential problem despite the fact that it's now 2012 and we have the tech to do games much better than in the '90s or the last decade, yet we are still dealing with this. It's ironic how upscaling is held up as a benefit in gaming when it's really a hack to deal with the fact that a machine, in this case the HD twins or WiiU, doesn't have enough juice to render an image at the native resolution of the TV/display.

Gamers deserve better than the FUD that console devs and manufacturers continue to spread on this subject, and it's beyond infuriating, especially considering how it's being used to sell systems that aren't actually producing native results.
 
I don't know why people keep trying to attribute high tech to Nintendo's design. Nintendo is optimizing cost, not performance.

These dies do not appear to be on an MCM for any performance reason. The advantage I see is in manufacturing and QC. The dies are fabbed by different vendors, IBM and TSMC(?), and then eventually sent over to Foxconn or whoever does the final build. The middle step is for them to go to an OSAT (outsourced assembly and test) house. The OSAT tests the dies, assembles the working ones together into one package, then sends this single part off. It's cheaper and more efficient, and it makes final product assembly cheaper as well. It also makes temperature control easier, as only one heat sink is needed. In a product with margins as low as a console's, saving a few bucks and some space is a huge driver of the design.

The MCM used here is just a PCB. There is no interposer or TSV or anything that gives you the latency gain you can get by putting the chips close together; the advantage there comes from the type of interconnect more than the proximity. Each of these chips is still bumped and attached to the MCM PCB. They don't have the wire bonding or individual packages, but they are still worlds apart.
I guess you didn't read the latest Iwata Asks? Takeda outright said they went with an MCM for both cost and performance reasons:

This time we fully embraced the idea of using an MCM for our gaming console. An MCM is where the aforementioned Multi-core CPU chip and the GPU chip are built into a single component. The GPU itself also contains quite a large on-chip memory. Due to this MCM, the package costs less and we could speed up data exchange among two LSIs while lowering power consumption. And also the international division of labor in general, would be cost-effective.
http://iwataasks.nintendo.com/interviews/#/wiiu/console/0/0
 
Well, what worries me is not the CPU inside the MCM, it's the GPU.

Here's a picture of a Core i5 with integrated graphics. The smaller chip is the CPU and the larger one is the GPU, just like the WiiU:

big_k42f-motherboard.jpg




And here's a picture of the R700 die on a dual setup:

IMG_0500.jpg




And again, a picture of the WiiU die:

slide004.jpg


**I know that the images aren't on the same scale, but you can guesstimate the actual sizes.

Notice the following:
- The size of the WiiU CPU compared against the mobile Core i5 in the first picture: it is considerably smaller.
- The size of the WiiU GPU is considerably smaller than the R700 GPU in the second picture.
- Summing up the first two observations, both WiiU dies are at most around half the size of the Core i5 CPU chip and the R700 GPU chip respectively, so there is no way it will be more than 2-3 times the performance of the current gen.

The R700 is actually quite a large GPU: 956 million transistors on a 55nm process and 260mm2 in size, with about 5x the shader performance of the 360/PS3. But there are GPUs out there on 40nm (the minimum we can expect from WiiU's GPU) that are less than half the size with the same performance. For instance, the HD7670 (a 40nm GPU) is 118mm2 and just as fast as an HD4870 (R700).

I don't disagree that shader performance won't be more than 3x current gen. I just think there are better comparisons than an R700 die, like any 40nm GPU (WiiU's GPU may be on an even smaller process, but I think we know it won't be anything larger than 40nm).
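As a rough sanity check on that die-size argument (my own back-of-the-envelope numbers, assuming an ideal area shrink, which real processes never quite hit), scaling the RV770 from 55nm to 40nm looks like this:

```python
# Ideal die-area scaling between process nodes: area scales with (new/old)^2.
rv770_area_55nm = 260.0              # mm^2, R700/RV770 at 55nm (figure from the post above)
scale_factor = (40.0 / 55.0) ** 2    # ~0.53
rv770_area_40nm = rv770_area_55nm * scale_factor

print(f"Hypothetical RV770 shrunk to 40nm: ~{rv770_area_40nm:.0f} mm^2")  # ~138 mm^2
```

So a ~118mm2 40nm part landing in the same performance neighbourhood as a 260mm2 55nm part is roughly what you'd expect, before even counting architectural improvements.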
 
eh?

7670 -> 480:24:8 (SPs:TMUs:ROPs) @800MHz, 128-bit GDDR5 @ 1GHz
4870 -> 800:40:16 (SPs:TMUs:ROPs) @750MHz, 256-bit GDDR5 @ 900MHz

But they both perform very similarly in benchmarks. Maybe "just as fast" is an exaggeration, but they're pretty close in the DX10.1/11 benchmarks I've seen.
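Just to put those configs in raw-throughput terms (my own quick math, using the usual 2 FLOPs per stream processor per clock for these VLIW5 parts):

```python
# Theoretical single-precision shader throughput = SPs * 2 FLOPs (MADD) * clock in GHz
def gflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz

print(f"HD7670: {gflops(480, 0.800):.0f} GFLOPS")  # 768 GFLOPS
print(f"HD4870: {gflops(800, 0.750):.0f} GFLOPS")  # 1200 GFLOPS
```

On paper the 4870 has a clear lead; how close the two actually land in games comes down to where the real bottleneck sits (drivers, bandwidth, fillrate), which is what the benchmark argument below is about.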
 
But they both perform very similarly in benchmarks. Maybe "just as fast" is an exaggeration, but they're pretty close in the DX10.1/11 benchmarks I've seen.

Ummm, they are not even close in benchmarks; even the much later HD7770 struggles against an HD4870.

It's one reason I've had trouble replacing my 4870: there's no value-range card that makes moving on from the 4870 an upgrade at all, let alone a worthwhile one.
 
Ummm, they are not even close in benchmarks; even the much later HD7770 struggles against an HD4870.

It's one reason I've had trouble replacing my 4870: there's no value-range card that makes the upgrade from the 4870 worthwhile.

I noticed this as well... I gave them the benefit of the doubt when moving to DX11... then again when they went with the die shrink in the 6000s... but now, even in the 7000s, it's getting ridiculous that there is no clear, good upgrade path for my card... though in some ways I guess it's a blessing, because it has saved me on hardware costs the last couple of years <_<
 
Mind linking them?

http://www.guru3d.com/articles_pages/radeon_hd_6670_review,4.html

It's a 6670, but the 6670 and 7670 are exactly the same GPU.

I can't link pics from it, but as a summary:

Far Cry 2 1920x1200 8xAA:-

4870 - 32fps
6670 - 29fps (91% of 4870)

DX10 3DMark Vantage P-score:-

4870 - 10496
6670 - 9148 (87% of 4870)

DX10 3DMark Vantage GPU score:-

4870 - 8888
6670 - 7599 (86% of 4870).

Fair enough, I haven't seen many benchmarks, but the performance looks very close, especially in the game benchmark, and especially for a GPU that's only about 45% the size of the 4870 with far less memory bandwidth.
 
Ummm, they are not even close in benchmarks; even the much later HD7770 struggles against an HD4870.

It's one reason I've had trouble replacing my 4870: there's no value-range card that makes moving on from the 4870 an upgrade at all, let alone a worthwhile one.

Could you show me some benchmarks please, especially DX10.1/11 ones? Because the ones I've seen show otherwise (a rather limited set of comparisons, I admit, but unless I see differently, it's all I can base things on).

I seriously doubt that the HD7770 struggles against the HD4870, BTW.
 
AnandTech: 4870 vs 6670

Hardware Compare: 4870 vs 6670
Memory Bandwidth:
4870: 115200 MB/sec
6670: 64000 MB/sec

Texel Rate:

4870: 30000 Mtexels/sec
6670: 19200 Mtexels/sec

Pixel Rate:

4870: 12000 Mpixels/sec
6670: 6400 Mpixels/sec

I seriously doubt that the HD7770 struggles against the HD4870, BTW.

Clock for clock, out of the box the 7770 just creeps past the 4870; however, the 4870 overclocks like a beast, whereas the 7770 has much less overclocking headroom.
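For anyone wondering where those Hardware Compare numbers come from, they fall straight out of the published specs; here's my own recomputation (GDDR5 moves data at 4x the quoted memory clock):

```python
# Memory bandwidth = (bus width in bytes) * effective memory clock
def bandwidth_mb_s(bus_bits, effective_mhz):
    return bus_bits // 8 * effective_mhz

# Texel rate = TMUs * core clock; pixel rate = ROPs * core clock
def texel_rate(tmus, core_mhz):
    return tmus * core_mhz

def pixel_rate(rops, core_mhz):
    return rops * core_mhz

# HD4870: 256-bit GDDR5 @ 900MHz (3600MHz effective), 40 TMUs, 16 ROPs, 750MHz core
print(bandwidth_mb_s(256, 3600), texel_rate(40, 750), pixel_rate(16, 750))
# -> 115200 MB/s, 30000 Mtexels/s, 12000 Mpixels/s

# HD6670: 128-bit GDDR5 @ 1000MHz (4000MHz effective), 24 TMUs, 8 ROPs, 800MHz core
print(bandwidth_mb_s(128, 4000), texel_rate(24, 800), pixel_rate(8, 800))
# -> 64000 MB/s, 19200 Mtexels/s, 6400 Mpixels/s
```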
 

Thanks for the link, I've picked out the relevant benchmarks (the only game benchmarks provided) for comparison:

StarCraft 2 - HD6670 82% of HD4870 performance.

Metro 2033 - HD6670 77% of HD4870 performance.

Crysis Warhead - HD6670 75% of HD4870 performance.

Texel/pixel fillrate, bandwidth etc. aren't really worth comparing.

Together with the benchmarks I found, it seems the HD6670 is around 80% of the performance of the HD4870, so not "just as fast" as I said at first, but certainly quite similar.
 
The R700 is actually quite a large GPU: 956 million transistors on a 55nm process and 260mm2 in size, with about 5x the shader performance of the 360/PS3. But there are GPUs out there on 40nm (the minimum we can expect from WiiU's GPU) that are less than half the size with the same performance. For instance, the HD7670 (a 40nm GPU) is 118mm2 and just as fast as an HD4870 (R700).

I don't disagree that shader performance won't be more than 3x current gen. I just think there are better comparisons than an R700 die, like any 40nm GPU (WiiU's GPU may be on an even smaller process, but I think we know it won't be anything larger than 40nm).

We already have a 40nm R700 (the 4770). Using any GPU core but the R700 for comparison is silly at this point.
 
We already have a 40nm R700 (the 4770). Using any GPU core but the R700 for comparison is silly at this point.

Fabrication processes mature over time, and this is a custom GPU, so I wouldn't say any 40nm GPU comparison is silly for our purposes (comparing die size and guesstimating possible transistor count). It's also a worst case, because for all we know GPU7 could be using a process smaller than 40nm.

If you want to create a comparison based on WiiU's GPU vs an HD4770, then go for it; I'd consider that a valuable contribution. Doing a quick comparison myself, GPU7 does look very similar in size to an HD4770 GPU, probably a bit bigger (to gauge perspective I had to compare the size of a USB port on WiiU's board to the DVI socket on the HD4770).
 
Ummm, they are not even close in benchmarks; even the much later HD7770 struggles against an HD4870.

It's one reason I've had trouble replacing my 4870: there's no value-range card that makes moving on from the 4870 an upgrade at all, let alone a worthwhile one.

Thanks for the link, I've picked out the relevant benchmarks (the only game benchmarks provided) for comparison:

StarCraft 2 - HD6670 82% of HD4870 performance.

Metro 2033 - HD6670 77% of HD4870 performance.

Crysis Warhead - HD6670 75% of HD4870 performance.

Texel/pixel fillrate, bandwidth etc. aren't really worth comparing.

Together with the benchmarks I found, it seems the HD6670 is around 80% of the performance of the HD4870, so not "just as fast" as I said at first, but certainly quite similar.

That was with old drivers and only in benchmarks; the 7770 performs much better with the latest drivers. I have both cards and just swapped a 4890 and a 4870 for two 7770s (yay eBay), and in real-world applications, not benchmarks, they are faster than the 4890 on Windows 7 and Windows 8. Best $10 I've spent, since I sold the other two cards for $140. I also calculated that I save $75 a year on electricity across the two cards by swapping to the 7770s, so over two years I'll save quite a bit, which made selling them worth it, plus they're much cooler and quieter at idle. The 7750 is closer to the 4870 in performance, and the 7770 and 7750 overclock much better than the 4870 or 4890 can, but those are 28nm parts. The 7670 is 40nm and performs at 70-80% of the 4870; maybe it's closer to the 4770.
 
Thanks for the link, I've picked out the relevant benchmarks (the only game benchmarks provided) for comparison:

StarCraft 2 - HD6670 82% of HD4870 performance.

Metro 2033 - HD6670 77% of HD4870 performance.

Crysis Warhead - HD6670 75% of HD4870 performance.

Texel/pixel fillrate, bandwidth etc. aren't really worth comparing.

Together with the benchmarks I found, it seems the HD6670 is around 80% of the performance of the HD4870, so not "just as fast" as I said at first, but certainly quite similar.
Fillrate matters increasingly as you go up in resolution. I'm not sure if that's what we're seeing here, but it's possible considering the fillrate differences between them. The tests there are at 1680x1050, almost double the pixels of 1280x720.
 
The R700 is actually quite a large GPU: 956 million transistors on a 55nm process and 260mm2 in size, with about 5x the shader performance of the 360/PS3. But there are GPUs out there on 40nm (the minimum we can expect from WiiU's GPU) that are less than half the size with the same performance. For instance, the HD7670 (a 40nm GPU) is 118mm2 and just as fast as an HD4870 (R700).

I don't disagree that shader performance won't be more than 3x current gen. I just think there are better comparisons than an R700 die, like any 40nm GPU (WiiU's GPU may be on an even smaller process, but I think we know it won't be anything larger than 40nm).
I still believe that the old HD5770 would be a perfect card for Nintendo.

A budget card from 2009, with DX11 compliance, respectable performance and low power consumption. It's a cheap card, and a custom 28nm design would lower the already low power consumption and heat levels even more.

The HD5770 would offer a respectable bump in performance over the HD twins, as well as somewhat future-proof the console for the years to come. After all, the Wii U is going to be around until at least 2017.
 
I still believe that the old HD5770 would be a perfect card for Nintendo.

A budget card from 2009, with DX11 compliance, respectable performance and low power consumption. It's a cheap card, and a custom 28nm design would lower the already low power consumption and heat levels even more.

An HD5770 would offer a respectable bump in performance over the HD twins, as well as somewhat future-proof the console for the years to come. After all, the Wii U is going to be around until at least 2017.

A 5770 would provide excellent performance for a next-gen console, with shader performance 6-7x that of any current-gen console (1.36 TFLOPS); basically the kind of performance we're expecting from the Xbox 3 (1.2-1.3 TFLOPS). It's too big and hot for the WiiU though, especially with 32MB of eDRAM added, even on 28nm. Maybe a 28nm 5750 downclocked to 486MHz (4x the DSP clock) would have been possible. But we really don't know what Nintendo have ended up with yet, so it's hard to compare.
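For reference, here's where those TFLOPS figures come from (my own arithmetic from the public shader counts; the 486MHz 5750 is just the hypothetical from the post above, not a leaked clock):

```python
# Theoretical shader throughput = SPs * 2 FLOPs (MADD) per clock * clock in GHz
def tflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz / 1000

print(f"HD5770, 800 SPs @ 850MHz:            {tflops(800, 0.850):.2f} TFLOPS")  # 1.36
print(f"Hypothetical 5750, 720 SPs @ 486MHz: {tflops(720, 0.486):.2f} TFLOPS")  # ~0.70
```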
 
A 5770 would provide excellent performance for a next-gen console, with shader performance 6-7x that of any current-gen console (1.36 TFLOPS); basically the kind of performance we're expecting from the Xbox 3 (1.2-1.3 TFLOPS). It's too big and hot for the WiiU though, especially with 32MB of eDRAM added, even on 28nm. Maybe a 28nm 5750 downclocked to 486MHz (4x the DSP clock) would have been possible. But we really don't know what Nintendo have ended up with yet, so it's hard to compare.

The 5770 is 3(?) years old by now, so maybe AMD adjusted some things to get it to run cooler with a lower TDP. I dunno. Wait till 2014; then we shall see what this little box can pump out.
 
Does an MCM mean less capacity for die shrinks and cost reductions down the line, because it's already shrunk to a degree? And does it also mean compromising on power to get something that'll fit on one package?
 
I still believe that the old HD5770 would be a perfect card for Nintendo.

A budget card from 2009, with DX11 compliance, respectable performance and low power consumption. It's a cheap card, and a custom 28nm design would lower the already low power consumption and heat levels even more.

The HD5770 would offer a respectable bump in performance over the HD twins, as well as somewhat future-proof the console for the years to come. After all, the Wii U is going to be around until at least 2017.

Consoles don't use video cards.
 
Does an MCM mean less capacity for die shrinks and cost reductions down the line, because it's already shrunk to a degree?

An MCM shouldn't really matter; they come in various sizes, and laptop MCMs tend to be even bigger and also typically include the GPU RAM.

The main hindrance to die shrinks is the physical I/O to external memory (or other devices). So, assuming a 128-bit bus on the GPU, they should be able to pull off one more shrink, although it doesn't look particularly practical if the chip ends up below ~100mm^2.

And does it also mean compromising on power to get something that'll fit on one package?

The main compromise on power would be the cooling solution and the noise target, both of which will be constrained by the chassis size.
 
Does an MCM mean less capacity for die shrinks and cost reductions down the line, because it's already shrunk to a degree? And does it also mean compromising on power to get something that'll fit on one package?
I don't remember Nintendo ever doing die shrinks to begin with.
 
Funny thing: I called this stuff long ago when I saw the size of the console. Back then people were saying the LOW end was 600 GFLOPS and the high end was crazy at over a TFLOP. It's going to be around 350-450 GFLOPS at most.

I seem to remember you originally saying you didn't even believe the WiiU was more powerful than current-gen consoles :) 350-450 GFLOPS is a bit more reasonable, though I'm still expecting 400-500 personally.

Now you just need to remove the "it's an R700 because it started development from the R700" idea from your head, and our expectations may finally overlap.
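Purely to illustrate what a 350-500 GFLOPS figure would mean in terms of shader configurations (these ALU counts and clocks are hypothetical examples, not leaked numbers):

```python
def gflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz  # 2 FLOPs per SP per clock

# A few hypothetical configurations that land in the discussed range
for sps, clk in [(320, 0.550), (320, 0.700), (400, 0.600), (480, 0.550)]:
    print(f"{sps} SPs @ {int(clk * 1000)}MHz -> {gflops(sps, clk):.0f} GFLOPS")
# 352, 448, 480 and 528 GFLOPS respectively
```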
 
So does anyone know the specs of the Wii U? It's close to release; has anyone gotten an idea of the processing power or the type of GPU/CPU it uses? It just seems like the specs I see out there are very vague, with no specifics.
 
So does anyone know the specs of the Wii U? It's close to release; has anyone gotten an idea of the processing power or the type of GPU/CPU it uses? It just seems like the specs I see out there are very vague, with no specifics.

Basically - no.
 
So does anyone know the specs of the Wii U? It's close to release; has anyone gotten an idea of the processing power or the type of GPU/CPU it uses? It just seems like the specs I see out there are very vague, with no specifics.

Nope. The only things that we know for certain:

IBM Power-architecture multi-core CPU with eDRAM on die
AMD Radeon HD GPGPU with eDRAM on die
2GB of RAM; 1GB for games and 1GB for the OS, MiiVerse, Web Browser and Nintendo TVii
Nintendo Proprietary discs (25GB single layer, 50GB (hypothetical, but very possible) dual layer) with a read speed of 22.5MB/s.
8/32GB of flash storage.

In other words, what you'd find on Wikipedia...
 
Nope. The only things that we know for certain:

IBM Power-architecture multi-core CPU with eDRAM on die
AMD Radeon HD GPGPU with eDRAM on die
2GB of RAM; 1GB for games and 1GB for the OS, MiiVerse, Web Browser and Nintendo TVii
Nintendo Proprietary discs (25GB single layer, 50GB (hypothetical, but very possible) dual layer) with a read speed of 22.5MB/s.
8/32GB of flash storage.

In other words, what you'd find on Wikipedia...

Well, we do know the eDRAM amount is 32MB, and that the CPU has 3MB of cache, with 2MB for the main core and 512KB for each of the other two cores. But other than that, yeah, it's all just speculation.
 
Well, we do know the eDRAM amount is 32MB, and that the CPU has 3MB of cache, with 2MB for the main core and 512KB for each of the other two cores. But other than that, yeah, it's all just speculation.

The amount of eDRAM on each die itself is speculation. Nintendo and IBM themselves have said that both the CPU and GPU have "a large amount of eDRAM".

IBM_eDRAM_Full.jpg

This is the image that IBM used for their Wii U press release. They said it was a "test chip", but I'm not entirely convinced (more tech-savvy people may be able to ascertain the amount of RAM on this particular die). The amount seems kinda large to me, but again, I know little about how to tell how much RAM is on a die.
 
The amount of eDRAM on each die itself is speculation. Nintendo and IBM themselves have said that both the CPU and GPU have "a large amount of eDRAM".

Well, I don't consider the leaked WiiU spec sheets to be speculation; I think it's pretty obvious that they're genuine.
 
I don't consider the leaked WiiU spec sheets to be speculation. If that's considered speculation, then we'll never know anything other than speculation about WiiU's hardware, because the only place we'll get specifics is from leaks like that.

I doubted the legitimacy of that website after Nintendo said that the Wii U had 2048MB (that's 2GB) of total RAM, whereas VGleaks said it was 1056MB (that's 1GB + 32MB of eDRAM). It has eDRAM, yes, but we don't know how much (except for that ever-so-vague figure of "a lot").
 
I doubted the legitimacy of that website after Nintendo said that the Wii U had 2048MB (that's 2GB) of total RAM, whereas VGleaks said it was 1056MB (that's 1GB + 32MB of eDRAM). It has eDRAM, yes, but we don't know how much (except for that ever-so-vague figure of "a lot").

I'm talking about the original leaked spec sheet that VGleaks got hold of, which was confirmed by everyone with inside info. That document had been going around for quite some time before VGleaks got their hands on it, and it didn't mention 1056MB of RAM; it just mentioned the then-current dev kit amount of 3GB:

http://www.vgleaks.com/world-premiere-wii-u-specs/

I think you're referring to the "leak" at the beginning of this thread, which I agree seems a bit more speculative in parts (the whole enhanced-Broadway thing, for instance). That article mentions 1GB of "mem2" for applications and 32MB of "mem1". Note, however, that the system is currently confirmed to have 1GB for games, so that's not really inaccurate.
 
I'm talking about the original leaked spec sheet that VGleaks got hold of, which was confirmed by everyone with inside info. That document had been going around for quite some time before VGleaks got their hands on it, and it didn't mention 1056MB of RAM; it just mentioned the then-current dev kit amount of 3GB. You might be thinking of the "leak" at the beginning of this thread, which mentions 1GB of "mem2" for applications and 32MB of "mem1". Note, however, that the system is currently confirmed to have 1GB for games.

True, but it never mentions "1GB reserved for background system applications".
 
How much eDRAM did Xenos and RSX (if it did) have?
10MB and 0MB, respectively. The Xenos eDRAM was off-die and a dedicated framebuffer, though. The Wii U eDRAM, on the other hand, appears to be a small pool of very fast on-die RAM that can be used in many different ways.
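A quick illustration of why the sizes matter (straightforward framebuffer arithmetic; the 32-bit colour + 32-bit depth/stencil layout is just a typical example, not a confirmed Wii U render path):

```python
def framebuffer_mib(width, height, samples=1, bytes_color=4, bytes_depth=4):
    # colour + depth/stencil stored per sample
    return width * height * samples * (bytes_color + bytes_depth) / 1024**2

print(f"720p, no AA:   {framebuffer_mib(1280, 720):.1f} MiB")     # ~7.0  -> fits in Xenos' 10MB
print(f"720p, 2x MSAA: {framebuffer_mib(1280, 720, 2):.1f} MiB")  # ~14.1 -> needs tiling on 360
print(f"1080p, no AA:  {framebuffer_mib(1920, 1080):.1f} MiB")    # ~15.8 -> fits easily in 32MB
```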
 