Right, I understand that, and if the GPU were going to be more traditional, I would be more in line with the notion of it being 1TFLOP. But a 1TFLOP GPU with the extra functions would pretty much put it on par (at least for 1st-party and 3rd-party exclusives) with the PS4 and Xbox 3. That's one of the main reasons we can eliminate that idea. I also don't see AMD making a non-GCN GPU at 28nm. And to be honest, it sounds like you're treating GCN as something other than what it is. In this part you say "cut out GCN function," but you can't cut out GCN function. That would be akin to saying "cut out the VLIW5 function." GCN is the architecture (i.e. the compute unit); GCN is just the name AMD gave to it. That might explain why you've been saying what you have regarding the PS4 and Xbox 3. They will be using compute units. There's really no arguing against that.
What I mean by cutting out the "GCN function" is simply not designing a chip for compute, the way GCN so obviously is. If it were designed to ignore compute in favor of pure gaming performance (of course it would still be able to compute), you would have a card closer to the GTX 680 (closer in philosophy, not performance). That means it would outperform GCN per shader, per Hz, per watt in games, while leaving GCN's compute functionality on the sideline in Nintendo's version of a 28nm or even 32nm chip.
I also don't care about the PS4's and Xbox 3's architecture, but if it is GCN, Nintendo can make up a lot of ground, because GCN is a heavy architecture (size/performance suffers in order to focus on compute units).
But you're talking about something that nothing points to existing on Nintendo's side. Bringing up the 7770 is irrelevant because that is a 1GHz GPU. The 2.25x refers to pixels only. And considering Nintendo is relying on eDRAM, I expect the bandwidth of the Xbox 3's memory to be much larger even if they did end up with the same amount. The power gap won't be smaller than Xbox/PS2; a similar gap is the best-case scenario.
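To spell out that 2.25x (assuming, as I read it, it's the raw pixel-count ratio between a 1080p target and a 720p target, nothing more), the quick check is:

```python
# Pixel-count ratio between 1080p and 720p render targets.
# (Assumes the 2.25x figure refers to resolution, not overall performance.)
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_720p  = 1280 * 720    #   921,600
print(pixels_1080p / pixels_720p)  # 2.25
```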
You yourself believe that Nintendo could be using a 640sp chip, yet you give it a clock of 600MHz for no reason other than you think it should be around there, and I've shown that it could be as high as 800MHz if it were designed similarly to the E6760 chip. I've also conceded that it might not be as high as 800MHz, but there is nothing locking it in at 600MHz. If it's a smaller chip, as we both assume, they should at the very least be able to push 650-700MHz on a 32nm process.
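For reference, here's the rough peak-throughput math behind those numbers (a back-of-the-envelope sketch assuming AMD's usual 2 FLOPs per shader per clock; the 640sp count and the 600-800MHz range are the figures we're debating, not confirmed specs):

```python
# Peak shader throughput for a hypothetical 640sp part at various clocks.
# Assumes 2 FLOPs (one multiply-add) per shader per clock, as on AMD GPUs.
def peak_gflops(shaders, clock_mhz, flops_per_clock=2):
    return shaders * flops_per_clock * clock_mhz / 1000.0

for mhz in (600, 650, 700, 800):
    print(f"640sp @ {mhz}MHz -> {peak_gflops(640, mhz):.0f} GFLOPS")
# 600MHz gives 768 GFLOPS; 800MHz gives 1024 GFLOPS (~1TFLOP).
```

So the clock alone swings a 640sp chip between roughly 0.77 and 1 TFLOP, which is why the 600MHz-vs-800MHz question matters so much.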
And as for the power gap between the PS2 and Xbox, yes, it will be smaller. You and I both expect the same functionality between the Wii U and the other consoles (as far as graphical effects go). We also both know this:
EatChildren said:
That generation is a weird one. Compared to the Xbox, the PS2 and GCN lacked the modern GPU architecture that allowed developers to easily use shaders and what have you. People often forget, but when the Xbox launched it really was a very capable PC in the form of a console. Developers could do a lot of things quicker and easier on the Xbox than they could on even the Wii. DirectX driver libraries will do that for you.
I agree. Unfortunately, due to how big the PS2 was, we didn't see enough games truly exploiting the difference.
It's not based on cost, but on trying to think the way Nintendo thinks (which helped with my earlier guesses). I don't see Nintendo pursuing a high-clocked GPU. That's not in their nature.
His post wasn't just about cost; it also points to heat. A smaller chip produces less heat, and the Wii U's chip will be much smaller than the Xbox 3's/PS4's without losing more than half the power, thanks to "losing" GCN in favor of gaming performance.
"Can you tell me more about that?"
I think you disappeared on us for a moment during that time. But it originates from the third thread. Then a few days later a poster on B3D pretty much corroborated the idea.
My post, near the tail end of the discussion:
http://www.neogaf.com/forum/showthread.php?p=36485259&#post36485259
Li Mu Bai's post:
http://forum.beyond3d.com/showthread.php?p=1634440#post1634440
Having fixed-function hardware would be absolutely great, and if so, there is no way that Skyrim video should be taken seriously at all. My E-350, whose GPU is roughly an 80GFLOPs part, can play Skyrim at those settings (~20fps), while the card needed to run Skyrim on ultra is a 6970, a 3TFLOPs+ card. The difference between those graphics processors is so much larger than where we put the Wii U and Xbox 3 that the comparison comes off as a huge joke...
I think you have gathered a lot of information, but you simply don't know how to use it; you aren't comparing it properly against the hardware you're expecting out of these boxes. Even if the boxes were 4x the performance and we tossed out fixed functionality, you'd end up with medium to ultra settings on Skyrim.
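To put that in perspective, here's a quick back-of-the-envelope comparison using the figures from my post above (the ~80GFLOPs E-350 GPU, a roughly 3TFLOPs 6970, and the hypothetical 4x gap we've been arguing about; all of these are the rough numbers in this thread, not measured specs):

```python
# Compare the Skyrim low-vs-ultra hardware gap with the assumed Wii U vs
# Xbox 3 gap. All figures are the rough ones quoted in the post above.
e350_gpu_gflops = 80        # runs Skyrim at those settings (~20fps)
hd6970_gflops   = 3000      # ballpark figure for an ultra-settings card
assumed_nextgen_gap = 4     # hypothetical Xbox 3 : Wii U ratio

skyrim_gap = hd6970_gflops / e350_gpu_gflops
print(f"Skyrim low-to-ultra gap: ~{skyrim_gap:.0f}x")        # ~38x
print(f"Assumed Wii U-to-Xbox 3 gap: {assumed_nextgen_gap}x")
```

A ~38x hardware gap versus a 4x gap is the whole point: that video is comparing across a far wider range than anything we're actually debating here.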