So the immediate stupid question that pops into my head is: what score would the Xbox 360 GPU get?
I actually googled it but did not really find anything concrete.
What does that mean exactly?
We probably won't find the answer, but the discussion on the E6760 a couple of days ago does say a few things about what we might see as the Wii U GPU, and maybe even about how the CPU, while weak on paper, might actually be part of a fairly strong package thanks to its advanced architecture and efficiency. Remember, this is all just speculation on my part about what is possible for the Wii U on a 75 W power supply, based on the information we have around, and is not indicative of anything final.
Let's look at the current-gen console GPUs and their PC equivalents at the time.
http://www.technobuffalo.com/gaming/console-wars-round-5-xbox-360-vs-ps3-gpu/
The G70 (the 7800 GTX) was around 200 GFLOPS, and the R520 (Radeon X1800 XT) performed similarly.

Xenos is 240 GFLOPS and is comparable to the X1800 XT and the 7800 GTX, but it actually performs better at lower resolutions because of its more advanced unified-shader architecture, which was later used in the HD 2xxx series. The 7800 GTX is what was used as the basis for the RSX.

That architectural advantage over its X1800 XT equivalent is what made Xenos a better GPU than the RSX (though Cell makes up for it on the PS3 side). Even though Xenos is a little slower in raw clock speed than the X1800 XT, the newer architecture made it perform better in real-world games.
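As a rough sanity check, here is how the theoretical figure for Xenos is usually derived; a minimal sketch, and note that FLOP-counting conventions vary between vendors, so treat these as ballpark numbers:

[code]
# Theoretical shader throughput = units x FLOPs-per-unit-per-clock x clock.
def gflops(units, flops_per_clock, clock_mhz):
    return units * flops_per_clock * clock_mhz / 1000.0

# Xenos: 48 unified ALUs, each co-issuing a vec4 + scalar MAD
# (5 lanes x 2 FLOPs = 10 FLOPs per clock), at 500 MHz.
print(gflops(48, 10, 500))  # 240.0 GFLOPS
[/code]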
What does this say about the Wii U dev kit and the 55 nm HD 4850, if it was actually in there? A ~1 TFLOP card released back in 2008 might have been used in the devkit as a buddy card to simulate the same raw speed as the Wii U's GPU. The difference with the real GPU, however, is that its architecture will be a lot more advanced, probably closer to Southern Islands in features. I wouldn't hesitate to say it might even be more advanced in features, having had an extra year of development.
The HD 4850 is around 4.2x the power of Xenos in raw numbers, and even more efficient thanks to its more advanced architecture; you could say it is closer to 5x Xenos in real-world applications. In real terms, it can run the same game at 1920x1080 at 60 fps with slightly higher detail than the Xbox 360 running it at 720p at 30 fps, and from far away you might not even see the difference, especially in still pictures.
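A quick back-of-the-envelope check on both halves of that claim; the HD 4850 numbers are its standard published specs, and the pixel maths counts raw throughput only, ignoring AA and effects:

[code]
xenos = 240.0
hd4850 = 800 * 2 * 625 / 1000.0  # 800 SPs x 2 FLOPs (MAD) x 625 MHz = 1000.0
print(hd4850 / xenos)            # ~4.17, the "~4.2x" above

# Raw pixel throughput for the resolution/framerate comparison:
px_1080p60 = 1920 * 1080 * 60    # 124,416,000 pixels/s
px_720p30  = 1280 * 720 * 30     #  27,648,000 pixels/s
print(px_1080p60 / px_720p30)    # 4.5x the pixels per second
[/code]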
So the early Wii U devkits may have had a base card that was 5x Xenos on paper, with a much better architecture arriving when the final Wii U GPU replaces it in the devkit sometime in the next few weeks. How does this tie in with the E6760, a card released over a year ago in 2011? The E6760 on a 40 nm process may only be 576 GFLOPS, but it handily beats a Radeon HD 4850, and at 35 W. It has been said that the Wii U GPU is about ~600 GFLOPS, and if that is anything to go by, an architecture a year more advanced than the E6760's might push real-world GPU performance well past what the HD 4850, or even the E6760 itself, can output: something like 6x Xenos.
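The E6760's paper figure and the efficiency gap work out like this, using its published specs (480 stream processors at 600 MHz, 35 W TDP) against the HD 4850's 110 W:

[code]
e6760  = 480 * 2 * 600 / 1000.0  # 480 SPs x 2 FLOPs x 600 MHz = 576.0 GFLOPS
hd4850 = 800 * 2 * 625 / 1000.0  # 1000.0 GFLOPS
print(e6760 / 35)                # ~16.5 GFLOPS per watt
print(hd4850 / 110)              # ~9.1 GFLOPS per watt
[/code]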
If this is anything to go by, a Wii U GPU with 720 GFLOPS at 30 W on a 32 nm process may actually be possible and may fit within a 75 W power supply limit. A Wii U GPU with 720 GFLOPS and 2012 architecture is not just 3x Xenos; it may well be more than double that. This does not mean the Wii U as a whole will be 7.5x Xenos, as the Wii U CPU is reportedly the weak part of the equation, and a lot of the processing will be offloaded to the GPU. If the Wii U CPU is only ~10 W and loosely based on an IBM 476FP processor, I wouldn't think it would be much more powerful than the Cell processor in the PS3; at best, twice the processing power. What this means for real-world gaming would be maybe 3.5x-4.5x the power of the Xbox 360, and with around 4x the RAM of the Xbox 360, that would make sense.
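To show the 75 W ceiling isn't crazy, here is a hypothetical budget; every figure below is an assumption lifted from the speculation above, not a confirmed spec, and a console wouldn't run its supply at 100% anyway:

[code]
# Hypothetical Wii U power budget; all figures are speculative assumptions.
budget_watts = {
    "GPU (speculated, 32 nm)": 30,
    "CPU (speculated 476FP-based)": 10,
    "RAM, I/O, optical drive, wireless, fans": 20,
}
print(sum(budget_watts.values()), "W of a 75 W supply")  # 60 W, with headroom
[/code]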
What does 3.5x-4.5x Xbox 360 power get you? You can probably get 720p at 60 fps plus one tablet at 576p at 60 fps, all with 4x AA, slightly more effects, and much higher quality textures thanks to the RAM. Does this make it catch up to the PS4 and the next Xbox? HELL NO, but the good thing is that the architecture will allow the Wii U to get multiplatform titles without a complete overhaul of development, compared to how it is with the Wii. Another bonus on the Orbis and Durango side of things is that these advanced architectures will also benefit the next-gen Sony and MS systems: 1.8 TFLOPS with 2013 architecture is going to murder the PS3 and the Xbox 360 as a whole, even if the CPUs in these new machines follow a similar route to the Wii U and lean on something like GPGPU capabilities.
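Counting raw pixels again, and assuming widescreen 1024x576 for the tablet (my assumption, since the controller's actual resolution isn't confirmed):

[code]
px_tv     = 1280 * 720 * 60      # 720p60 on the TV
px_tablet = 1024 * 576 * 60      # assumed widescreen 576p60 on the tablet
px_360    = 1280 * 720 * 30      # a typical 360 title at 720p30
print((px_tv + px_tablet) / px_360)  # ~3.3x the raw pixel throughput,
                                     # before the 4x AA and extra effects
[/code]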
An HD 6570-class GPU optimised for a console and on 32 nm will be good enough for 720p 30 fps 2x AA next-gen gaming, or optimised current-gen gaming (720p 60 fps 4x AA plus the 576p tablet at 60 fps 4x AA).

I am, however, expecting 720p 60 fps 4x AA and more effects on the PS4 and the next Xbox, or 3-4x more performance. Basically an HD 7850 optimised for consoles, which would deliver something along the lines of a 7870's performance on PC. I also expect these consoles to be $399 at launch.
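For scale, here are the raw ratios of an HD 7850-class part (about 1.76 TFLOPS on its published specs) against the current-gen GPUs discussed above; paper numbers only:

[code]
next_gen = 1024 * 2 * 860 / 1000.0  # HD 7850: 1024 SPs x 2 FLOPs x 860 MHz ~= 1761 GFLOPS
print(next_gen / 240)               # ~7.3x Xenos on paper
print(next_gen / 200)               # ~8.8x the RSX's G70 basis
[/code]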
Is the Wii U a half-generation leap in graphics compared to the leap from PS2 to PS3?

Probably a little less than half by PS3 standards, but we have to remember that the PS3 was arguably a generation and a half ahead due to its price point and the money it lost at release. If the Wii had actually been 8x the GameCube, it would probably have launched at $299 instead of $249. The Wii U may be going back to its GameCube roots in power increase, in that it will be two proper generations up from the GameCube if you count the years since release, or possibly 64 GameCubes duct-taped together. Still, this might only be a third of the eventual power of Orbis and Durango when all is said and done.
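Funnily enough, the duct-tape figure lines up with the GFLOPS speculation, if you take Nintendo's official ~10.5 GFLOPS system figure for the GameCube at face value:

[code]
gamecube = 10.5       # Nintendo's published system figure, in GFLOPS
print(64 * gamecube)  # 672.0, right in the speculated ~600-720 GFLOPS range
[/code]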
Some caveats: what I have said assumes the Wii U is on a 32 nm process. At 40 nm things change, not dramatically, but expect at least 10% less performance than at 32 nm. We also do not know whether the 4850 was actually used in the devkits, or whether the devkit now has a ~600 GFLOP GPU in there. The only things we know are that development of the GPU started in 2009 and that it is not 2005 tech like some people here keep saying. Still, it is interesting how three years of development let a 35 W TDP card at 40 nm beat a 110 W card at 55 nm in performance; at 32 nm and with extra development, who knows what we can get. It certainly won't be 1.5x Xenos as some might claim, that's for sure. Also, a ~600 GFLOP GPU in a console will not perform the same as a ~1200 GFLOP GPU in a PC, so those hoping for a miracle might be disappointed. But architecture advancements do help, and it showed with Xenos compared to the X1800 XT.

Here is a question for anyone in the know: is final silicon for the Wii U GPU in any devkit yet, or is that still to happen? If not, when is it going to happen?