Rumor: Wii U final specs

Are you referring to additional SIMD instructions by 'toys'? lol.

It's quite clear what he's referring to.

The DSP, I/O processor, etc.

The Xenon and Cell processors lose a significant portion of their processing power and efficiency as they're tasked with audio, I/O, encoding/decoding, compression/decompression, etc.

Nintendo have gone down a route where, wherever possible, tasks like the above get dedicated silicon to process them more efficiently.

DSP = audio

I/O = I/O

GPU = dedicated portions for encoding/decoding, compression/decompression, floating point and SIMD, plus programmable units allowing developers to utilize its power as efficiently as possible.

CPU = out-of-order execution and large L1-L3 caches, which both carry their own benefits for general-purpose processing

The Xenon and Cell did so much other stuff that only a portion of their real power was free for developers to tap.

The 360, PS3, PS2, Xbox and Gamecube all made use of media extensions on their gpu to some extent.

Very limited compared to today's modern GPU architectures. I'm not sure why you keep going on about this. How many programmable units do the Xbox 360 and PS3 have compared to even the cheapest and nastiest GPUs on the market from AMD and Nvidia 2 years ago?

It isn't a new thing in the console space. 'SIMD' is essential in parallel processing, and isn't just something tacked onto CPUs. I'm not exactly sure what you're trying to prove...

Again you go on about something that is irrelevant. We're not arguing or debating what SIMD is used for, only where it should be used. In modern architectures, routines like these have long been shifted onto GPUs.
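To make the terminology concrete: SIMD just means one instruction operating on several data elements at once. Here's a minimal sketch of the same gain loop written scalar and with x86 SSE intrinsics; the console CPUs expose the same idea through AltiVec/VMX instead, and the function names and alignment assumptions here are purely illustrative.

```c
#include <xmmintrin.h>  /* x86 SSE intrinsics; AltiVec/VMX is the PowerPC equivalent */

/* Scalar version: one sample per iteration. */
void scale_scalar(float *buf, float gain, int n)
{
    for (int i = 0; i < n; ++i)
        buf[i] *= gain;
}

/* SIMD version: four samples per iteration in a 128-bit register.
 * Assumes n is a multiple of 4 and buf is 16-byte aligned. */
void scale_simd(float *buf, float gain, int n)
{
    __m128 g = _mm_set1_ps(gain);
    for (int i = 0; i < n; i += 4)
        _mm_store_ps(buf + i, _mm_mul_ps(_mm_load_ps(buf + i), g));
}
```

The argument in this thread is only about where loops like that should run: on the CPU's vector units, on a GPU's shader ALUs, or on dedicated silicon.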

Also, a thread doesn't equal a certain percentage over a second, because if so it would be literally impossible for single-core CPUs to multitask.

What do you think Hyperthreading and SMT are for?

You can't just use part of a thread...

If a thread finishes processing well before the end of a clock cycle, yes you can. That's wasteful and an inefficient use of the power.

If you have a better example, link me, because honestly the audio DSP you are referring to is probably less than a tenth of the capability of the current-gen CPUs...

Think about it this way

Creative's X-Fi DSP is 400MHz, give or take. Look at what it can do with that 400MHz compared to the best any Xbox 360 or PS3 game has ever output.

A 400MHz dedicated Creative DSP vs Xenon's 3.2GHz tri-core CPU.

The DSP wins hands down: everything from voices per channel, number of channels, 3D audio positioning, 192kHz/24-bit sampling, to the quality and quantity of audio compression and decompression, etc. This little 400MHz chip mops the floor with Xenon and Cell's best.

Dedicated silicon is always more efficient than tacking the task onto a general-purpose CPU. That's why DSPs, storage controllers, northbridges and other buses, memory controllers, etc. tend to be dedicated bits of silicon. They offer the best performance per watt, run cooler, and are cheaper and more efficient.
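For a sense of what tacking audio onto a general-purpose CPU actually looks like, here is a rough sketch of a plain software mixer in C; the frame size and sample format are made-up illustrative choices, not any console's real audio pipeline.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FRAME_SAMPLES 256            /* samples mixed per audio callback (illustrative) */

typedef struct {
    const int16_t *pcm;              /* decoded source samples */
    float          gain;             /* per-voice volume */
    size_t         pos;              /* read cursor into pcm */
} voice_t;

/* Mix nvoices sources into one output buffer. Every voice is read, scaled
 * and accumulated by the general-purpose CPU; at 48kHz that is
 * nvoices * 48,000 multiply-adds per second per channel, before any
 * resampling, filtering or 3D panning, all streaming through the caches.
 * A DSP does the same work on dedicated hardware without touching the CPU. */
void mix_frame(float *out, voice_t *voices, size_t nvoices)
{
    memset(out, 0, FRAME_SAMPLES * sizeof(float));
    for (size_t v = 0; v < nvoices; ++v) {
        for (size_t i = 0; i < FRAME_SAMPLES; ++i)
            out[i] += (voices[v].pcm[voices[v].pos + i] / 32768.0f) * voices[v].gain;
        voices[v].pos += FRAME_SAMPLES;
    }
}
```

Every one of those multiply-adds and cache misses comes out of the same budget the game logic wants, which is the whole case for a separate DSP.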

But I'm not going to continue to argue with you about it. You've clearly shown that you have next to no real knowledge of the Xenon, Cell, computer architecture, etc. Comparing x86/x64 to Xenon and Cell was a joke, as is your inability to understand the difference between the GPU architecture in the Xbox 360 and PS3 and modern architectures with their implementation of programmable units, GPGPU, etc.
 
LMAO.

If an instruction finishes before the end of a clock cycle, it starts another in the same clock cycle?

Show me one popular game that uses more than the typical 5 or so percent of CPU. Show me where the additional cost of a custom DSP, in logic, heat, price and whatnot, is worth the CPU offloading, and get back to me.

Nintendo likely had those additional chips for backwards compatibility rather than CPU performance. It was spun into CPU improvement or something, but the effect is negligible at best.
 
LMAO.

If an instruction finishes before the end of a clock cycle, it starts another in the same clock cycle?

It's not that simple. Read up on the Xenon's SMT and architecture and you'll see it's quite limited in what it can do within a single clock cycle. That's why developers spend so much time optimising and prioritising their code to ensure it efficiently loads the SMT threads and the Xenon CPU. It's not even half as simple as you make it; it's quite limited in reality.

Show me one popular game that uses more than the typical 5 or so percent of CPU. Show me where the additional cost of a custom DSP, in logic, heat, price and whatnot, is worth the CPU offloading, and get back to me.

Intel and AMD x64 and x86 CPUs are not even remotely close to the Xenon and Cell processor. Stop trying to compare them!

Also stop comparing modern architecture to the 2004 era tech inside the Xbox 360 and PS3.

Nintendo likely had those additional chips for backwards compatibility rather than CPU performance.

They added it to the GameCube for a reason... There's no question the Gekko CPU could have done audio; Nintendo opted for a DSP for a reason: performance, efficiency, power consumption.

It was spun into CPU improvement or something, but the effect is negligible at best.

Eh....
 
Think about it this way

Creative's X-Fi DSP is 400MHz, give or take. Look at what it can do with that 400MHz compared to the best any Xbox 360 or PS3 game has ever output.

A 400MHz dedicated Creative DSP vs Xenon's 3.2GHz tri-core CPU.

The DSP wins hands down: everything from voices per channel, number of channels, 3D audio positioning, 192kHz/24-bit sampling, to the quality and quantity of audio compression and decompression, etc. This little 400MHz chip mops the floor with Xenon and Cell's best.

Funny you should mention a 400MHz DSP.

I was thinking that maybe Nintendo is using a DSP with a similar speed.
Possibly even Renesas's (if they are not supplying RAM or doing the FAB):

SH-4A CPU core with a maximum operating frequency of 400MHz, realizing a processing performance of 720 MIPS... SH-4 - used whenever high performance is required, such as in car multimedia terminals, video game consoles (Sega), or set-top boxes

At any rate, applying Nintendo's Wii & GameCube multipliers (1, 2, 3, 4) to the WiiU
gives us: DSP = 400MHz, GPU = 800MHz, CPU = 2400MHz, MEM = 1600MHz

Where GameCube to Wii was a 1.5x leap in power (on paper), Wii to WiiU is above 3x.
 
Are you referring to additional SIMD instructions by 'toys'?


The 360, PS3, PS2, Xbox and Gamecube all made use of media extensions on their gpu to some extent. It isn't a new thing in the console space. 'SIMD' is essential in parallel processing, and isn't just something tacked onto CPUs. I'm not exactly sure what you're trying to prove...

Also, a thread doesn't equal a certain percentage over a second, because if so it would be literally impossible for single-core CPUs to multitask. You can't just use part of a thread...

And often the sound effects you hear are uncompressed, or the CPU does spend time decompressing them, but that's a RAM issue. I've not seen actual proof of a game with only 256 or so channels taking up that much CPU time on Cell or Xenon. If you have a better example, link me, because honestly the audio DSP you are referring to is probably less than a tenth of the capability of the current-gen CPUs...
Wtf am I reading. I can't use a part of a thread?
 
LMAO.

If an instruction finishes before the end of a clock cycle, it starts another in the same clock cycle?

Show me one popular game that uses more than the typical 5 or so percent of CPU. Show me where the additional cost of a custom DSP, in logic, heat, price and whatnot, is worth the CPU offloading, and get back to me.

Nintendo likely had those additional chips for backwards compatibility rather than CPU performance. It was spun into CPU improvement or something, but the effect is negligible at best.

I'd be interested to see your source that the typical 360 game uses 5% CPU time for audio.

The point of something like a DSP is that it's designed for one particular task and so can provide better performance at that task, for a given complexity, than a general-purpose CPU, meaning more performance per watt and per dollar.
 
LMAO.

If an instruction finishes before the end of a clock cycle, it starts another in the same clock cycle?

Show me one popular game that uses more than the typical 5 or so percent of CPU. Show me where the additional cost of a custom DSP, in logic, heat, price and whatnot, is worth the CPU offloading, and get back to me.

Nintendo likely had those additional chips for backwards compatibility rather than CPU performance. It was spun into CPU improvement or something, but the effect is negligible at best.
ikioi is blowing the thing a bit out of proportion, but you're severely underestimating the impact of audio processing:

On the XBox 360, audio mixing for a normal game can use as much as two full hardware threads, 1/3 of the CPU dedicated to audio. That's not even considering complex games like car racers, where each vehicle can have dozens of voices and complex filters. That's hugely wasteful in terms of cost. A general purpose CPU is just not optimised for audio processing.

The difference is that a CPU core costs you dollars, and a DSP core costs you pennies. That's why almost all mobile architectures have dedicated audio silicon. By far the most popular request my team gets from game devs is "Make audio cheaper!". Just running a single good reverb will completely blow the L1 and L2 cache, and require ridiculous amounts of memory bandwidth. We support 320 simultaneous voices on the 360 (that's how many the XMA hardware will decode at a time). AAA games use all of them. Even Plants vs. Zombies uses over 100 and most of a core, _just for audio_. If we were to have the choice of adding a full core to the 360 or a DSP that can handle the same load, we'd probably choose the DSP, since it would be vastly cheaper in BOM and essentially give devs an extra 50% CPU for game logic.
This was posted by a Microsoft engineer on B3D.
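To put some rough numbers behind the "a single good reverb will completely blow the L1 and L2 cache" point, here is a toy direct-form convolution reverb in C. Real engines use partitioned FFT convolution, and the buffer sizes below are illustrative assumptions, but the data volume is the issue: a 2-second impulse response at 48kHz is about 96,000 float taps (~375KB), plus the input history it walks over, which dwarfs a 32KB L1 and eats a large share of a 1MB shared L2.

```c
#include <stddef.h>

/* Toy convolution reverb: y[n] = sum_k ir[k] * x[n-k].
 * The impulse response plus the input history it walks over is several
 * hundred KB, so every output sample streams data that cannot stay
 * resident in L1/L2, which is why reverbs are cache- and bandwidth-heavy.
 * (Only in-block history is used here to keep the sketch self-contained.) */
void convolve_block(float *out, const float *in, size_t n,
                    const float *ir, size_t ir_len)
{
    for (size_t i = 0; i < n; ++i) {
        float acc = 0.0f;
        size_t taps = (i + 1 < ir_len) ? i + 1 : ir_len;
        for (size_t k = 0; k < taps; ++k)
            acc += ir[k] * in[i - k];
        out[i] = acc;
    }
}
```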
 
Funny you should mention a 400MHz DSP.

I was thinking that maybe Nintendo is using a DSP with a similar speed.
Possibly even Renesas's (if they are not supplying RAM or doing the FAB):



At any rate, applying Nintendo's Wii & GameCube multipliers (1, 2, 3, 4) to the WiiU
gives us: DSP = 400MHz, GPU = 800MHz, CPU = 2400MHz, MEM = 1600MHz

Where GameCube to Wii was a 1.5x leap in power (on paper), Wii to WiiU is above 3x.

I believe the DSP in WiiU is still 121.5MHz, but of course multipliers need not be limited to such low numbers, so that doesn't limit the GPU/CPU clock. Having said that, my own prediction is a 1822MHz CPU and a 486MHz GPU.
 
I believe the DSP in WiiU is still 121.5MHz, but of course multipliers need not be limited to such low numbers, so that doesn't limit the GPU/CPU clock. Having said that, my own prediction is a 1822MHz CPU and a 486MHz GPU.

Are you using your own multipliers to get those numbers? If so, what are they?
Because, using the typical Nintendo multipliers, a 1800MHz CPU should be paired with a 600MHz GPU.
 
Yeah... I don't really belong here.


Well, it is difficult to determine, because many saw that the Wii could have been an HD* machine, but it was gimped to 480p.


*edit to add: Don't forget that the original Xbox supported the following resolutions: 480i, 480p, 576i, 576p, 720p, 1080i.
Wii is at least slightly more powerful. So why did Nintendo not have it support higher resolutions?
 
Well, it is difficult to determine, because many saw that the Wii could have been an HD* machine, but it was gimped to 480p.


*edit to add: Don't forget that the original Xbox supported the following resolutions: 480i, 480p, 576i, 576p, 720p, 1080i.
Wii is at least slightly more powerful. So why did Nintendo not have it support higher resolutions?
eDRAM.
 
Are you using your own multipliers to get those numbers? If so, what are they?
Because, using the typical Nintendo multipliers, a 1800MHz CPU should be paired with a 600MHz GPU.

The only reason those multipliers are typical for Nintendo in recent times is that they were used in the GameCube and, by extension, the Wii (all clocks were increased by 50%, so the multipliers stayed the same). But there's no reason to consider any particular multiplier typical for Nintendo going forward.

The first leak we got for WiiU mentioned a 120MHz DSP (likely 121.5MHz to be exact, since that's what Wii's DSP was clocked at).

Using the DSP clock speed as the base (purely because it's the only one we seem to have a good idea of), I think x15 (1822MHz) for the CPU seems likely, since we've heard it's closer in clock speed to Broadway than to Xenon (I suppose 1944MHz is still technically closer to Broadway than Xenon, but it's right on the line). For the GPU, x5 would give us 607MHz (the same ratio used in GC and Wii), which seems too high. The next step down is 486MHz.

Though to be honest, since I've realised that GC's multipliers shouldn't really be considered typical for Nintendo going forward, I suppose whole-number multipliers aren't necessarily a given :)

Like I said, it's all pure speculation based on the only clock speed we know of (the DSP's).
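For reference, here is the arithmetic behind those guesses spelled out as a small sketch; the 121.5MHz base comes from the leak mentioned above, and the multipliers are pure speculation, not confirmed Wii U specs.

```c
#include <stdio.h>

int main(void)
{
    /* Speculative clocks derived from the one leaked figure, a Wii-style
     * 121.5MHz DSP. The multipliers are guesses, not confirmed specs. */
    const double dsp_mhz = 121.5;
    const int cpu_mults[] = { 15, 16 };  /* 1822.5 and 1944 MHz */
    const int gpu_mults[] = { 4, 5 };    /* 486 and 607.5 MHz */

    for (int i = 0; i < 2; ++i)
        printf("CPU x%d = %.1f MHz\n", cpu_mults[i], dsp_mhz * cpu_mults[i]);
    for (int i = 0; i < 2; ++i)
        printf("GPU x%d = %.1f MHz\n", gpu_mults[i], dsp_mhz * gpu_mults[i]);
    return 0;
}
```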
 
Why are people debating whether or not hardware dedicated and designed for a specific task is better at that task than general hardware?

The answer is yes. Lol.

Don't forget that the original Xbox supported the following resolutions: 480i, 480p, 576i, 576p, 720p, 1080i.
Wii is at least slightly more powerful. So why did Nintendo not have it support higher resolutions?

Does "support" mean just having an upscaler in the system instead of the TV, or does "support" mean "actually render the game internally at X resolution."

The Wii doesn't have the power or memory to render a 1080 image internally at a reasonable speed. And hey, neither do the 360 or PS3, really.
 
Resolution drops are entirely unrelated to CPU performance. In fact, I'd be hard-pressed to think of anything more unrelated ;)

On the PS3, the CPU in many games handles a lot of the work that a GPU would/should traditionally do, including operations that depend heavily on resolution (full-screen effects etc.), so in that case dropping the resolution due to CPU performance is certainly plausible.
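To put numbers on it: if a full-screen pass costs a fixed amount of work per pixel, its cost scales directly with resolution whether it runs on the GPU or on the SPUs, which is the only way a resolution drop buys back "CPU" time. A tiny sketch with example resolutions, not any particular game's figures:

```c
#include <stdio.h>

int main(void)
{
    /* Per-frame pixel counts for a full-screen pass at example resolutions.
     * A pass that costs a fixed amount of work per pixel scales with this. */
    const int res[][2] = { {1280, 720}, {960, 540} };
    for (int i = 0; i < 2; ++i) {
        long px = (long)res[i][0] * res[i][1];
        printf("%dx%d: %ld pixels/frame, %ld pixels/s at 30fps\n",
               res[i][0], res[i][1], px, px * 30L);
    }
    return 0;
}
```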
 
It's for I/O and audio, not just audio alone, and it was improved slightly over the years, but yeah, it is not the most efficient method. Kind of puts the whole "729MHz Broadway core in the Wii being only 20% slower than a Xenon core" thing into perspective. A 900MHz unenhanced tri-core Broadway would probably have given Xenon a run for its money back in the day. A tri-core enhanced Broadway (enhanced to run over 1GHz) at under 2.2GHz would run rings around Xenon, but I highly doubt the CPU is going to be over 2.2GHz.

Where are you getting those numbers from?
 
Why are people debating whether or not hardware dedicated and designed for a specific task is better at that task than general hardware?

The answer is yes. Lol.



Does "support" mean just having an upscaler in the system instead of the TV, or does "support" mean "actually render the game internally at X resolution."

The Wii doesn't have the power or memory to render a 1080 image internally at a reasonable speed. And hey, neither do the 360 or PS3, really.

There are a good number of 720p games on Xbox 1.
 
Based on what I recall, most of the 720p games on Xbox 1 didn't sport the nicer visuals possible on Xbox. Stuff like Soul Calibur 2 was not visually upgraded over the arcade version, which ran on less powerful hardware. Games with Xbox-level visuals like Dead or Alive 3 or Ninja Gaiden stuck with SD resolution. Almost all the HD games were PS2 ports.

That said, I always thought Nintendo made a big error by not aiming for even slightly more powerful hardware for Wii, to enable 720p for HD compatibility. Wouldn't be surprised if, as on the Xbox 1, Wii could have pushed some games at 720p on its own. But not the impressive stuff like Galaxy or Metroid Prime 3.
 
Where are you getting those numbers from?

http://gbatemp.net/threads/retroarch-a-new-multi-system-emulator.333126/page-7

LibretroRetroArc said:
I believe if you program only against one main CPU (like we do for pretty much most emus), you would find that the PS3/Xenon CPUs in practice are only about 20% faster than the Wii CPU.

I've ported the same code over to enough platforms by now to state this with confidence - the PS3 and 360 at 3.2GHz are only (at best - I would stress) 20% faster than the 729MHz out-of-order Wii CPU without multithreading (and multithreading isn't a be-all end-all solution and isn't a 'one size fits all' magic wand either). That's pretty pathetic considering the vast differences in clock speed, the increase in L2/L1 cache and other things considered - even for in-order CPUs, they shouldn't be this abysmally slow and should be totally leaving the Wii in the dust by at least 50/70% difference - but they don't.

BTW - if you search around on some of the game development forums you can hear game developers talking amongst themselves about how crap the 360/PS3 CPUs were to begin with. They were crap from the very first minute the systems were launched - with MS hardware executives (according to some 360 'making of' book) allegedly freaking out when IBM told them they would be getting in-order CPUs for their new console - which caused them to place an order to have three 'cores' instead of one because one core would be totally pathetic (pretty much like the PS3 then where you only have one main processor and 6/7 highly specialized 'vector' SIMD CPUs that are very fast but also very low on individual RAM and essentially have to be able to do some heavy code weightlifting for you to gain anything). Without utilizing multithreading, you're essentially looking at the equivalent of Pentium 4-spec consoles that have to be helped along by lots of vector CPUs (SPUs) and/or reasonably mid-specced, highly programmable GPUs (which the Wii admittedly lacks)



To be fair though, game developers have learned to go multithreading heavy and games these days are all about fancy polygonal graphics models and post-processing shaders - stuff where lots of parallel SIMD processors and a fancy GPU are the main crux of the system - and the PS3/360 are capable there - so they've managed to work around the main CPU being so utterly weak. That and the fact that with HDTVs - most HDTVs cannot be guaranteed to have zero input lag so you don't have to worry about running your games at 60fps since you would cut out a large percentage of your potential audience because they wouldn't own a TV capable of handling the input to action response time to be able to play it properly - so what they do instead is run it at 30fps - which leaves enough wiggle room for even the worst HDTVs with lots of post-processing filters going on that slow down the response time..

I don't know if this might help
 
Originally Posted by LibretroRetroArc:
I believe if you program only against one main CPU (like we do for pretty much most emus), you would find that the PS3/Xenon CPUs in practice are only about 20% faster than the Wii CPU.

I've ported the same code over to enough platforms by now to state this with confidence - the PS3 and 360 at 3.2GHz are only (at best - I would stress) 20% faster than the 729MHz out-of-order Wii CPU without multithreading (and multithreading isn't a be-all end-all solution and isn't a 'one size fits all' magic wand either). That's pretty pathetic considering the vast differences in clock speed, the increase in L2/L1 cache and other things considered - even for in-order CPUs, they shouldn't be this abysmally slow and should be totally leaving the Wii in the dust by at least 50/70% difference - but they don't.

BTW - if you search around on some of the game development forums you can hear game developers talking amongst themselves about how crap the 360/PS3 CPUs were to begin with. They were crap from the very first minute the systems were launched - with MS hardware executives (according to some 360 'making of' book) allegedly freaking out when IBM told them they would be getting in-order CPUs for their new console - which caused them to place an order to have three 'cores' instead of one because one core would be totally pathetic (pretty much like the PS3 then where you only have one main processor and 6/7 highly specialized 'vector' SIMD CPUs that are very fast but also very low on individual RAM and essentially have to be able to do some heavy code weightlifting for you to gain anything). Without utilizing multithreading, you're essentially looking at the equivalent of Pentium 4-spec consoles that have to be helped along by lots of vector CPUs (SPUs) and/or reasonably mid-specced, highly programmable GPUs (which the Wii admittedly lacks)



To be fair though, game developers have learned to go multithreading heavy and games these days are all about fancy polygonal graphics models and post-processing shaders - stuff where lots of parallel SIMD processors and a fancy GPU are the main crux of the system - and the PS3/360 are capable there - so they've managed to work around the main CPU being so utterly weak. That and the fact that with HDTVs - most HDTVs cannot be guaranteed to have zero input lag so you don't have to worry about running your games at 60fps since you would cut out a large percentage of your potential audience because they wouldn't own a TV capable of handling the input to action response time to be able to play it properly - so what they do instead is run it at 30fps - which leaves enough wiggle room for even the worst HDTVs with lots of post-processing filters going on that slow down the response time..

Wait... Broadway has out-of-order execution? Why has no other website reported this?
 
After reading up a little more on SIMD last night, the question should never have been why the WiiU CPU is offloading so many functions to sub-processors, but why the Xenon/Cell weren't.

I realize that circa 2005 Sony thought the Cell was powerful enough to fuel the US space program, but why would MS not have opted to have a sub-processor handle SIMD functions? It sounds as if they could have had a smaller, cooler processor and avoided all the nasty RRoD issues the Xenon caused.
 
After reading up a little more on SIMD last night, the question should never have been why the WiiU CPU is offloading so many functions to sub-processors, but why the Xenon/Cell weren't.

I realize that circa 2005 Sony thought the Cell was powerful enough to fuel the US space program, but why would MS not have opted to have a sub-processor handle SIMD functions? It sounds as if they could have had a smaller, cooler processor and avoided all the nasty RRoD issues the Xenon caused.

Perhaps it's because MS thought they had one up on Sony by having pretty much the same PPU in Xenon that Sony had in the Cell, just with three of them instead of the SPUs. Perhaps they fell for the Cell hype too.
 
Perhaps it's because MS thought they had one up on Sony by having pretty much the same PPU in Xenon that Sony had in the Cell, just with three of them instead of the SPUs. Perhaps they fell for the Cell hype too.
Yes, I think we all fell for that hype. I remember at one point in 2006 thinking that Sony/IBM/Toshiba actually had a realistic chance of taking over the consumer CPU market from Intel/AMD.

That seems pretty silly now looking back but that partnership did work out really well for IBM.

I have learned a lot from the last few pages of the thread. I never realized the single core of Broadway was that advanced. It goes a long way to explain why we haven't heard as many negatives from devs about the "enhanced Broadway cores" as you would expect to hear about any enhanced aspect of the Wii architecture.
Cell has 8 smaller, cooler processors fantastically capable of handling SIMD processing.
(7/6 on PS3)
Well, the Cell was specifically designed for multimedia processing, so I probably shouldn't have grouped it in with Xenon. I guess it wouldn't have saved Sony much to split those SPEs out into separate chips as opposed to cooling them all under one heat sink.
 
Well, the Cell was specifically designed for multimedia processing, so I probably shouldn't have grouped it in with Xenon. I guess it wouldn't have saved Sony much to split those SPEs out into separate chips as opposed to cooling them all under one heat sink.

Not really. Having the SPEs on die with the PPE was pretty much the whole point, as you had a nice high bandwidth, low latency ring bus between them all. The problem was that the SPEs weren't capable of performing the full graphical workload themselves, and neither was the RSX, really. This means you've got to split your graphics work across two completely different architectures on two different chips, which is far from convenient when you're trying to write a graphics engine.
 
Not really. Having the SPEs on die with the PPE was pretty much the whole point, as you had a nice high bandwidth, low latency ring bus between them all. The problem was that the SPEs weren't capable of performing the full graphical workload themselves, and neither was the RSX, really. This means you've got to split your graphics work across two completely different architectures on two different chips, which is far from convenient when you're trying to write a graphics engine.
I didn't even consider the bandwidth issue, and I keep forgetting about the impact the late addition of the RSX had on Sony's design plans. It does make me more sympathetic about the difficult dev environment the PS3 ended up with, as I imagine it would have been considerably streamlined had Cell been able to do everything as originally planned.

I would guess that the WiiU CPU/GPU combo chip is closer to what Sony had in mind (in terms of functionality) than what they ended up with in the PS3.
 
[image: Idea-signal.jpg]
 
rofl, amazing :)

For the GPU, I don't have the GFLOP count, but according to the impressions of my sources, based on the raw "performance" (a mix of framerate + resolution + what is rendered on screen) of multiple projects on Wii U, plus considering intricate use of the GamePad on top of that, it's pretty safe to assume the Wii U GPU demonstrates at least 2x Xenon's capabilities. Now, I'm talking about what it demonstrates. So maybe it will be 390 GFLOPs on paper, but with some more modern architecture there, a few fixed functions here, optimizations, and better ties with the other components, it could "fake" a 500/600 GFLOP Xenon. And even then, it's more complicated than just GFLOPs. We talked about it already pretty extensively in previous WUSTs. It was my educated + sourced guesstimation back then; it may be a tad better now after the reports of huge improvements I relayed earlier (which are explained more by developers getting a better grasp of the dev kit + SDK than by a boost in hardware, by the way).

No, really, it seems to be a well-balanced and quite capable system, with a huge learning curve, so it's promising for the future.
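For anyone wondering where "paper" GFLOP figures like that come from, the usual back-of-envelope is stream processors x 2 ops per clock (one multiply-add) x clock speed. The ALU counts and clocks below are hypothetical examples chosen only to show the arithmetic, not leaked Wii U or 360 numbers.

```c
#include <stdio.h>

/* Back-of-envelope peak throughput: ALUs * 2 FLOPs (one multiply-add) * GHz.
 * The parts below are hypothetical examples, not actual console specs. */
static double peak_gflops(int alus, double ghz)
{
    return alus * 2.0 * ghz;
}

int main(void)
{
    printf("320 ALUs @ 0.60 GHz -> %.0f GFLOPs\n", peak_gflops(320, 0.60));
    printf("240 ALUs @ 0.50 GHz -> %.0f GFLOPs\n", peak_gflops(240, 0.50));
    return 0;
}
```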
 
Hey Ideaman, do you have any information on the Wii U's tessellation performance or, even better, whether any developer is using it in their games?

It's a subject not brought up a lot here.
 
ikioi is blowing the thing a bit out of proportion, but you're severely underestimating the impact of audio processing:

I was only blowing it out of proportion to drive home reality: the Xenon and Cell processors were not some godly beasts of silicon, but rather CPUs that could do a couple of things really well (vertex, SIMD, floating point calcs) but were terrible at basically everything else (audio, I/O, general-purpose work, etc.).

There's a reason why the Wii U, Durango, and Orbis will not be using CPUs with similar architecture. The architecture is archaic.

This was posted by a Microsoft engineer on B3D.

You watch, he'll stop posting in this thread now that he's been completely beaten.

He's had how many people tell him now that the Xbox 360 is heavily taxed just doing audio and I/O? Four or five individuals by my count.

Yet he still believes audio processing is piss easy and DSPs are somehow more expensive than using a general-purpose CPU. Not to mention the crap he was talking about regarding SMT processing on the Xenon and it being able to just 'start' another thread. Xenon is not that simple, but he clearly is.
 
Hey Ideaman, do you have any information on the Wii U's tessellation performance or, even better, whether any developer is using it in their games?

It's a subject not brought up a lot here.

It was one of the first questions I asked a year ago; no precise answer. I guess it's part of the recurring veiled comment about "more modern architecture that allows us to do things we weren't able to do on PS360" (although the current-gen HD consoles could do tessellation, they weren't efficient enough at it, in contrast to modern GPUs).
 
More perceptible graphical improvement over time. It makes the console look less old in the long run before being replaced. At least that's what it's meant to do, in Sony's theory.

This is one of the most stupid things on earth...
So instead of having a huge, full library of games with max graphics, we will only have 3-4 games of that quality at the end of the console's life, when maybe no one cares about it anymore...
Yeah, fuck this and whoever thinks that is a good thing.
 
This is one of the most stupid things on earth...
So instead of having a huge, full library of games with max graphics, we will only have 3-4 games of that quality at the end of the console's life, when maybe no one cares about it anymore...
Yeah, fuck this and whoever thinks that is a good thing.
It's not exactly a good thing but it's not the awful thing that you're making it out to be here either. Think of it in the context of how 360/PS3 games looked progressively better as the gen went on.

If a year from now everyone (aside from the crazies) has the same opinion of the quality of WiiU games that we're seeing now, then that's reason to worry.
 