Rumor: Wii U final specs

Cool, so now that we've been through nearly 18 months of speculation, we know:

It's a GPU.


:P
Stop beating around the bush, man. The question is:

Is it a GPU with a penis or without a penis?
 
First the bolded: of course it's another GPU, because it's customized and it's supposed to hit the performance of an 800-shader R700 at a certain clock speed (certainly not the stock 625MHz; my guess is 360MHz or 480MHz btw, i.e. 3x-4x the DSP clock). So yes, GPU7 will be an "enhanced" R700 if you want to use that term.

You've changed what you know quite a few times over the last year, and the examples are various: RAM sizes, ALUs in the GPU, wattage used, and probably the easiest to point out is you having a talk with wsippel or another insider where you came to the conclusion that the Wii U likely only had ~320GFLOPs. Every time a rumor comes in, you try to see how it fits into what you have (this is something we all do), and this changes what you "know". On the last page, for instance, "matt's" info changed what you knew about the system and you were then unsure about what the performance would be.

The second thing here, about R700: you are basically saying GPU7 is R700, which is simply not the case, and your own info would clearly point this out to you. That link there says "GX2 is a 3D graphics API for the Nintendo Wii U system (also known as Cafe). The API is designed to be as efficient as GX(1) from the Nintendo GameCube and Wii systems. Current features are modeled after OpenGL and the AMD r7xx series of graphics processors. Wii U’s graphics processor is referred to as GPU7."

GPU7 exceeds R700 features, according to multiple sources, including the source of this OP. I appreciate your work, but it doesn't mean you have been flawless in navigating these rumors. In fact, we know Arkam and where he got his info; do we know if this source has been updated? Has Arkam gotten info about final dev kits? He doesn't work on the Wii U himself and he is not a programmer, so his knowledge is limited and it's also fed to him by someone else. If these "specs" are from first dev kits (and he had an outdated one at the time), then it could very well be an R700 in his box.

I don't mean to attack you btw, but you act like you know stuff when you just have a bunch of educated guesses, and those are based on sources you likely can't even confirm. When I was throwing around the ~600GFLOPs rumor here from my source (a Nintendo engineer btw), I received a few PMs from other "insiders" who said it was 800GFLOPs. I know how it works, BG, and it's not pretty, and it's impossible to just KNOW.

This is just getting crazy now. Nintendo engineer? Who do you think you're fooling?
 
This is just getting crazy now. Nintendo engineer? Who do you think you're fooling?

It's not a lie; some people in here know about him, though some might not have known he worked for Nintendo, or in what division.

Edit: I won't be talking about this in any more detail. Take it as fake if you wish, or ask around with some of the WUST people; they can confirm it, just do it in PMs.
 
It's all a balance, I guess. If you have lots of GPGPU capacity, aren't you sacrificing normal GPU capacity, as it's stealing space on the die? And what sort of thing do games programmers need to do that would benefit from GPGPU but wouldn't be suited for a multicore CPU to chew over?

I just think there is a risk that the system can become unbalanced, with the GPU being asked to do too much.
Apologies for the late reply - I missed your post the other day.

Do GPGPU and 'straight' GPU priorities conflict sometimes? Yes, they do. Do GPGPU priorities make a GPU bad per se? Admittedly that's a broad generalisation, but even so, I don't think so. The thing is, even a GPU with minimal GPGPU provisions can be a viable GPGPU unit. Heck, we were doing GPGPU back before the term was even coined. Things have only been improving in this aspect since then.

Multi-core CPUs are good (actually, they're inevitable), but arming them with fat SIMD units for work, 90% of which other fat (actually, many times fatter) SIMD units sitting elsewhere in the system could do just as well (or better), is dubious budgeting. The reasoning 'but let's keep two agents of fat SIMD in the system, so we can do much more work' is a lost cause; that's not how things work. You pick the more efficient of the two and you multiply that in the system. That said, I'm in no way against CPUs having SIMD units per se. I'm against designing them for workloads that are better suited to a modern GPU.
 
Apologies for the late reply - I missed your post the other day.

Do GPGPU and 'straight' GPU priorities conflict sometimes? Yes, they do. Do GPGPU priorities make a GPU bad per se? Admittedly that's a broad generalisation, but even so, I don't think so. The thing is, even a GPU with minimal GPGPU provisions can be a viable GPGPU unit. Heck, we were doing GPGPU back before the term was even coined. Things have only been improving in this aspect since then.

Multi-core CPUs are good (actually, they're inevitable), but arming them with fat SIMD units for work, 90% of which other fat (actually, many times fatter) SIMD units sitting elsewhere in the system could do just as well (or better), is dubious budgeting. The reasoning 'but let's keep two agents of fat SIMD in the system, so we can do much more work' is a lost cause; that's not how things work. You pick the more efficient of the two and you multiply that in the system. That said, I'm in no way against CPUs having SIMD units per se. I'm against designing them for workloads that are better suited to a modern GPU.


Hmm, so the question would then be as follows: do you think Nintendo would, after having decided they want GPGPU to do a lot of work, go and add those extra SIMD units so as not to burden a "standard" GPU?

My concern, if you could call it that, is that Nintendo took their base GPU, gimped the CPU and didn't compensate the other side with enough GPGPU capability, thereby taking what would be a GPU above current gen and effectively turning it into a current-gen GPU (maybe a bit more) in real-world terms, because it's doing lots of compute work now. If that makes sense.
 
Hmm, so the question would then be as follows: do you think Nintendo would, after having decided they want GPGPU to do a lot of work, go and add those extra SIMD units so as not to burden a "standard" GPU?

My concern, if you could call it that, is that Nintendo took their base GPU, gimped the CPU and didn't compensate the other side with enough GPGPU capability, thereby taking what would be a GPU above current gen and effectively turning it into a current-gen GPU (maybe a bit more) in real-world terms, because it's doing lots of compute work now. If that makes sense.

When you look at the CPU, you have to look at all the other components. Right now developers are taking their 360 games and shoehorning them onto the Wii U, but the way the console is designed is to leave the CPU with as little to do as possible. The DSP is there for sound, there is an ARM CPU for the OS, there is the I/O controller, and there is a GPGPU. There are also supposedly fixed-function shaders for lighting, and 32MB for basically free 4xAA (though I am guessing the 360 ports have to waste 10MB of it on the function it serves in Xenos).

In the end you have somewhere around 600GFLOPs in the GPU. Some of that, say 100GFLOPs, will be used to offload the CPU, and depending on what the Pad is doing, you could use up to 154GFLOPs on the Pad to render an identical screen at the lower resolution, leaving 346GFLOPs for the TV (still over 60% more processing power than Xenos), not to mention at least 4 GPU generations between Xenos and GPU7, newer effects, and right now twice the RAM.

I really don't think there is much to be worried about when you look at the console a little deeper.
 
When you look at the CPU, you have to look at all the other components. Right now developers are taking their 360 games and shoehorning them onto the Wii U, but the way the console is designed is to leave the CPU with as little to do as possible. The DSP is there for sound, there is an ARM CPU for the OS, there is the I/O controller, and there is a GPGPU. There are also supposedly fixed-function shaders for lighting, and 32MB for basically free 4xAA (though I am guessing the 360 ports have to waste 10MB of it on the function it serves in Xenos).

In the end you have somewhere around 600GFLOPs in the GPU. Some of that, say 100GFLOPs, will be used to offload the CPU, and depending on what the Pad is doing, you could use up to 154GFLOPs on the Pad to render an identical screen at the lower resolution, leaving 346GFLOPs for the TV (still over 60% more processing power than Xenos), not to mention at least 4 GPU generations between Xenos and GPU7, newer effects, and right now twice the RAM.

I really don't think there is much to be worried about when you look at the console a little deeper.

How do you know all of this? Are you a developer?
 
There are people saying it's a (modified) E6760, but I don't see how this is supported beyond the silly emails. Everything we've heard is DX10.1+, R700 base, GDDR3, afaik. From what I can tell, the E6760 is derived from the Northern Islands series, uses GDDR5 and is DX11 compliant. The originators of the notion of the E6760 as something that could be achieved by AMD, as a reference for what they may do for Nintendo (bgassassin, Fourth Storm), don't seem to actually think it's the E6760.

There were people (sane or not) assuming it actually was a POWER7 CPU. Despite how ridiculous this would be.

I don't think any consumer product should ever actually use the maximal power output of its power supply; that sounds like a recipe for product failure.


Yeah, agreed, and again: the discussion that sparked that dude to call it a circle jerk was just about it ending up with similar properties to the E6760.

Anyone who said it was a P7 was corrected pretty quickly, so there was no reason for him to pile in and comment on that too. He just seemed a little overly worked up about it when the rest of us were having a friendly discussion!

You're right, it won't draw close to its max output for long periods (as Iwata stated). But that doesn't change the fact that 75W is the PSU's max output, not the console's actual power draw or anything to do with its efficiency. That guy brought up the PSU debate as if he had been proven correct, we were wrong, and we weren't reading all the info, only what we wanted to hear. If he'd read all the info, he'd have realised he was mistaken about it and shouldn't be using it to prove a point about us ignoring the facts etc.

Anyway, water under the bridge now. I just don't like people dive-bombing threads with nothing valuable to add to the subject except vaguely insulting other people. He was perfectly civil before that and had some interesting insights etc., so it's nothing personal.

/rant.
 
When you look at the CPU, you have to look at all the other components. Right now developers are taking their 360 games and shoehorning them onto the Wii U, but the way the console is designed is to leave the CPU with as little to do as possible. The DSP is there for sound, there is an ARM CPU for the OS, there is the I/O controller, and there is a GPGPU. There are also supposedly fixed-function shaders for lighting, and 32MB for basically free 4xAA (though I am guessing the 360 ports have to waste 10MB of it on the function it serves in Xenos).

In the end you have somewhere around 600GFLOPs in the GPU. Some of that, say 100GFLOPs, will be used to offload the CPU, and depending on what the Pad is doing, you could use up to 154GFLOPs on the Pad to render an identical screen at the lower resolution, leaving 346GFLOPs for the TV (still over 60% more processing power than Xenos), not to mention at least 4 GPU generations between Xenos and GPU7, newer effects, and right now twice the RAM.

I really don't think there is much to be worried about when you look at the console a little deeper.



*little Zelda intro*
z0m3le, he come to this thread !
come to save ! the console Wii UuuUu !
enthusiasts are in dismay
'cause of wat specs bitchers say
But they will be ok, when z0m3le save the day !
 
Those are just wild assumptions, numbers pulled out of nowhere and best case scenarios.

Well, the numbers are based on pushing pixels. I am not a developer... well, I build iOS and Android games, but only as a hobby for now. I took the 921,600 pixels a 720p screen uses and the Upad's 409,920 pixels, and did some dirty math with the 500GFLOPs left over from the GPU... of course this isn't how it works, but it gives a rough idea of what Wii U devs have to work with, just splitting the TV and GamePad resources in proportion to their pixel counts.
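
A minimal sketch of that dirty math, assuming (as in the posts above, not as confirmed specs) ~600GFLOPs total, ~100GFLOPs set aside for CPU offloading, and a purely pixel-proportional split between the TV and the GamePad:

```python
# Pixel-proportional GFLOPs split using the speculative numbers from the
# posts above (not confirmed Wii U specs).
TV_PIXELS  = 1280 * 720   # 921,600 (720p)
PAD_PIXELS = 854 * 480    # 409,920 (GamePad)

gpu_total   = 600.0       # assumed total GPU GFLOPs
cpu_offload = 100.0       # assumed GFLOPs spent offloading the CPU
budget      = gpu_total - cpu_offload

pad_share = budget * PAD_PIXELS / (TV_PIXELS + PAD_PIXELS)
tv_share  = budget - pad_share
print(f"Pad: {pad_share:.0f} GFLOPs, TV: {tv_share:.0f} GFLOPs")
# -> Pad: 154 GFLOPs, TV: 346 GFLOPs
```

That reproduces the 154/346 split quoted earlier; it's only a budgeting illustration, not how a GPU actually allocates work.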



*little Zelda intro*
z0m3le, he come to this thread !
come to save ! the console Wii UuuUu !
enthusiasts are in dismay
'cause of wat specs bitchers say
But they will be ok, when z0m3le save the day !

lol thanks Ideaman, I'm just throwing around numbers, but it's pretty clear that games are going to improve on the Wii U over the next couple of years as PS360 development dies off.

PS: this is the coolest corner of GAF right now, with so many cel-shaded Links around.
 
Those are just wild assumptions, numbers pulled out of nowhere and best case scenarios.

What we do know for a fact is that the Wii U is indeed stronger than the PS3/Xbox 360. It has 2GB of RAM and its GPU is newer than the GPU found in the Xbox 360 and PS3. Anything else, I feel, is just hearsay.
 
What we do know for a fact is that the Wii U is indeed stronger than the PS3/Xbox 360. It has 2GB of RAM and its GPU is newer than the GPU found in the Xbox 360 and PS3. Anything else, I feel, is just hearsay.

I agree with this. I'm not much for "KNOWING" anything; the console launches in less than 2 months and I don't even have it preordered (I don't get paid until Friday :( ). Right now I just really hope I can snag one at launch.
 
When you look at the CPU, you have to look at all the other components. Right now developers are taking their 360 games and shoehorning them onto the Wii U, but the way the console is designed is to leave the CPU with as little to do as possible. The DSP is there for sound, there is an ARM CPU for the OS, there is the I/O controller, and there is a GPGPU. There are also supposedly fixed-function shaders for lighting, and 32MB for basically free 4xAA (though I am guessing the 360 ports have to waste 10MB of it on the function it serves in Xenos).

In the end you have somewhere around 600GFLOPs in the GPU. Some of that, say 100GFLOPs, will be used to offload the CPU, and depending on what the Pad is doing, you could use up to 154GFLOPs on the Pad to render an identical screen at the lower resolution, leaving 346GFLOPs for the TV (still over 60% more processing power than Xenos), not to mention at least 4 GPU generations between Xenos and GPU7, newer effects, and right now twice the RAM.

I really don't think there is much to be worried about when you look at the console a little deeper.

The Wii U uses an ARM for its OS? That is both a relief and something that reflects interestingly upon this debate.

Anyway, numbers time! The CPU in the X360 does both OS and audio work, but let's forget about that for a second, or at least just assume that the SIMD units are only used to perform tasks that the Wii U GPU is doing, and that we are only interested in fp16. Each SIMD in the X360 is 128 bits wide, yes? So we'll at most get (128/16) * 3 (cores) * 3.2GHz = 76.8GFLOPS out of the SIMD units. You may mumble about the fp unit, but then you'd have to weigh the combined fp/SIMD part of the Wii U CPU against the fp unit in the X360 CPU, which may or may not tip things in the Wii U's favour FLOPs-wise depending on its layout. (And something something max flops != usable, blah blah other considerations.)
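
For reference, a quick sketch of that back-of-the-envelope peak, using only the poster's assumptions above (128-bit SIMD treated as eight 16-bit lanes, one op per lane per cycle, 3 cores at 3.2GHz; none of this is a measured figure):

```python
# Back-of-the-envelope SIMD peak from the post above (poster's assumptions,
# not a measured or official number).
simd_width_bits = 128
lane_bits       = 16     # fp16 lanes, as assumed above
cores           = 3
clock_ghz       = 3.2

lanes = simd_width_bits // lane_bits          # 8 lanes per SIMD unit
peak_gflops = lanes * cores * clock_ghz       # one op per lane per cycle
print(f"{peak_gflops:.1f}")                   # -> 76.8
```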
 
There are people saying it's a (modified) E6760, but I don't see how this is supported beyond the silly emails. Everything we've heard is DX10.1+, R700 base, GDDR3, afaik. From what I can tell, the E6760 is derived from the Northern Islands series, uses GDDR5 and is DX11 compliant. The originators of the notion of the E6760 as something that could be achieved by AMD, as a reference for what they may do for Nintendo (bgassassin, Fourth Storm), don't seem to actually think it's the E6760.

There were people (sane or not) assuming it actually was a POWER7 CPU. Despite how ridiculous this would be.

I don't think any consumer product should ever actually use the maximal power output of its power supply; that sounds like a recipe for product failure.

The people saying it's an E6760 are all people I haven't seen before in any of the WUST threads. Everything we've heard is that it has certain features lifting it beyond DX10.1, with one developer even saying "DX11-level features, DX9-level performance". GDDR3? Link?

As I thought, and what Alstrong suggested, the main features making a difference for DX11 compared to DX10.1 are a better tessellator and GPGPU features... both things explicitly mentioned in the leaked devkit spec list.

People were not expecting a POWER7; everybody was asking what it could mean. Yes, when an official Twitter account from the manufacturer keeps insisting it's a custom POWER7, has POWER7 technology, etc., you HAVE to wonder what it COULD mean. How are "POWER7 features" possible in combination with a beefed-up Broadway? What could "custom POWER7" mean in the same vein? Turns out they were bullshitting or didn't know, which is something I find unforgivable for an official IBM spokesperson. The guy should be fired on the spot IMO.
 
Well, the numbers are based on pushing pixels. I am not a developer... well, I build iOS and Android games, but only as a hobby for now. I took the 921,600 pixels a 720p screen uses and the Upad's 409,920 pixels, and did some dirty math with the 500GFLOPs left over from the GPU... of course this isn't how it works, but it gives a rough idea of what Wii U devs have to work with, just splitting the TV and GamePad resources in proportion to their pixel counts.

You're putting far too much faith in FLOPS as an accurate measure of performance. There are guaranteed to be bottlenecks that prevent parts of the hardware from being used to capacity, or at all, depending on what is being asked of it. And that's ignoring that most game ports are probably being done by a team of under 10 people in a few months, who don't even get time to utilise the hardware that does play nice.
 
You're putting far too much faith in FLOPS as an accurate measure of performance. There are guaranteed to be bottlenecks that prevent parts of the hardware from being used to capacity, or at all, depending on what is being asked of it. And that's ignoring that most game ports are probably being done by a team of under 10 people in a few months, who don't even get time to utilise the hardware that does play nice.

I'm just comparing it to Xenos, which would also have bottlenecks of a similar or worse nature. I'm also throwing out ~20% of what I believe the Wii U's GPU to be (using 500GFLOPs) and going with the assumption that GPU7, flop for flop, is at least equal to a GPU mass-produced in 2005 (Xenos). I think those are fair assumptions and they give us a fairly good understanding of what Wii U should be capable of.
 
What are the advantages of creating a private graphics API such as PSGL or GX2? While I can understand Microsoft using a sort of DirectX in their machine, since it's known and already widely used, I don't understand why Sony and Nintendo are doing this... wouldn't it be much better to let developers use something they already know, like OpenGL?
 
Why would the console have to render the screen again for the pad? Unless it's a different angle, shouldn't the same picture just be sent without having to render it twice?
 
If you want native res on the pad, then two renders are needed I think.

I think this is one thing about the GPU that will be totally bespoke. Talking about things like this in terms of PCs, yes, maybe it would work like that, but I'm sure GPU7 is made so that streaming to the pad is uber-efficient and tailored to the task without calling on two sets of assets.
 
I think this is one thing about the GPU that will be totally bespoke. Talking about things like this in terms of PCs, yes, maybe it would work like that, but I'm sure GPU7 is made so that streaming to the pad is uber-efficient and tailored to the task without calling on two sets of assets.

Yes, my example was just how the Wii U would split resources in the worst case (maximum fidelity to both devices); it could be another screen, sure. I was just painting a picture.
 
I highly doubt that is going to happen. If it needs to be rendered twice, you might as well show another angle of the scene and add to the experience.

I'm assuming there are latency reasons for not simply beaming a video feed of the original render.

What are the advantages of creating a private graphics API such as PSGL or GX2? While I can understand Microsoft using a sort of DirectX in their machine, since it's known and already widely used, I don't understand why Sony and Nintendo are doing this... wouldn't it be much better to let developers use something they already know, like OpenGL?

One would think that a custom API would be necessary to get the most out of the box and its feature set.

While I'm sure there's a learning curve, both GX and SonyGL2 are based on OpenGL and are likely familiar to anyone with experience in OpenGL.
 
I'm assuming there are latency reasons for not simply beaming a video feed of the original render.

Such as? Reports so far have stated the video feed on the GamePad is quicker than that on the TV. Also, I don't see how rendering the image twice would negate such a problem.

If res is the only difference between the TV image and the pad image, then they'd use downscaling rather than re-rendering the whole scene.

Not to mention the fact that any game with less than optimal AA would likely look a lot better this way.
 
What are the advantages of creating a private graphics API such as PSGL or GX2? While I can understand Microsoft using a sort of DirectX in their machine, since it's known and already widely used, I don't understand why Sony and Nintendo are doing this... wouldn't it be much better to let developers use something they already know, like OpenGL?

More than PSGL, I think developers used lower-level libraries like libGCM: http://www.jonolick.com/uploads/7/9/2/1/7921194/gdc_07_rsx_slides_final.pdf
(lower level than standard OpenGL/PSGL, but probably higher level/more powerful than many developers had on PS2 perhaps)

The advantages of a custom graphics API, maybe modelled after an existing popular graphics API, are primarily developers' familiarity with that API and the ability to tailor its implementation and feature set to exactly what is inside that particular box, which in turn allows developers to fully use the platform's features and face a much lower driver overhead, because the API is not designed for portability or to abstract all implementation details away. Such libraries do not have the goal of targeting a wide variety of hardware platforms (portability, not efficiency, is the top goal of general-purpose APIs like OpenGL and DirectX, IMHO).
 
It seems like the press releases below keep getting ignored, no?

In my view, the Wii U GPU has been right in front of our faces, and has been for a while.

First,

Green Hills Software's MULTI Integrated Development Environment Selected by Nintendo for Wii U Development
http://www.ghs.com/news/20120327_ESC_Nintendo_WiiU.html

Second,

In May of this year, it was announced that the AMD Radeon E6760 would use Green Hills Software.
http://www.altsoftware.com/press-ne...gl-graphics-driver-architecture-embedded-syst
So is this where the E6760 rumor came from? Then it seems most people don't understand the press release. A company called Alt Software wrote OpenGL drivers for several GPUs, including the E6760, for systems running the Integrity operating system, because AMD doesn't provide such drivers themselves. Green Hills is not involved in the driver development, the GPU doesn't use Integrity, there's no relationship between Alt Software and Nintendo, and the Wii U doesn't use OpenGL, so Nintendo would have no use for those drivers to begin with.
 
One would think that a custom API would be necessary to get the most out of the box and its feature set.

While I'm sure there's a learning curve, both GX and SonyGL2 are based on OpenGL and are likely familiar to anyone with experience in OpenGL.

More than PSGL, I think developers used lower-level libraries like libGCM: http://www.jonolick.com/uploads/7/9/2/1/7921194/gdc_07_rsx_slides_final.pdf
(lower level than standard OpenGL/PSGL, but probably higher level/more powerful than many developers had on PS2 perhaps)

The advantages of a custom graphics API, maybe modelled after an existing popular graphics API, are primarily developers' familiarity with that API and the ability to tailor its implementation and feature set to exactly what is inside that particular box, which in turn allows developers to fully use the platform's features and face a much lower driver overhead, because the API is not designed for portability or to abstract all implementation details away (portability, not efficiency, is the top goal of general-purpose APIs like OpenGL and DirectX, IMHO).

thanks.
 
If you want native res on the pad, then two renders are needed I think.

You would actually be better off not doing that. If you take a higher-resolution image and shrink it down, you basically get free AA, and the image ends up looking crisper and sharper. If they want to show the same thing on the pad as on the TV, they're better off rendering once and shrinking that higher-res image down for the Pad.
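
A minimal sketch of that idea, using a plain box-filter downscale (just one simple way to get the supersampling effect described above; how the real hardware would do it, if at all, is unknown):

```python
import numpy as np

def box_downscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Shrink an HxWxC image by an integer factor by averaging each
    factor x factor block. Averaging blends edge pixels with their
    neighbours, which is the 'free AA' effect described above."""
    h, w, c = image.shape
    h, w = h - h % factor, w - w % factor       # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# Example: halve a 720p render. (A real GamePad target of 854x480 would need
# a non-integer scale, so this only illustrates the principle.)
frame = np.random.rand(720, 1280, 3)            # stand-in for a rendered frame
print(box_downscale(frame, 2).shape)            # -> (360, 640, 3)
```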
 
So... is this still supposed to be 40nm?
If so, are we safe to assume that its peak performance is 576 GFLOPS?
Or are we assuming that Nintendo spent several hundred million to get AMD to make a smaller E6760 to squeeze out about 18 GFLOPS per watt (assuming it's 35W)?
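
As a quick sanity check on those per-watt numbers (576 GFLOPS and 35W are the stock E6760 figures already quoted in this thread; the rest is just arithmetic, not inside information):

```python
# Perf-per-watt arithmetic for the stock E6760 figures quoted above.
stock_gflops = 576.0
tdp_watts    = 35.0

print(round(stock_gflops / tdp_watts, 1))   # -> 16.5 GFLOPS per watt at stock
print(18.0 * tdp_watts)                     # -> 630.0 GFLOPS needed for 18 GFLOPS/W at 35W
```

So hitting ~18 GFLOPS per watt at 35W would imply a part performing somewhat above the stock E6760, which is presumably the point of the question.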
 
So... is this still supposed to be 40nm?
If so, are we safe to assume that its peak performance is 576 GFLOPS?
Or are we assuming that Nintendo spent several hundred million to get AMD to make a smaller E6760 to squeeze out about 18 GFLOPS per watt (assuming it's 35W)?

BG thinks it will outperform the E6760, likely meaning it exceeds 576 GFLOPs.

I am just going to stick to ~600 because it really doesn't matter what the exact performance is. However, there is good reason to assume that the Wii U's GPU is 32nm or 28nm. A rumor of a yield problem popped up earlier in the year, which would be highly unlikely on such a "mature" process, and TSMC is moving away from 40nm and focusing on their 28nm and 32nm processes (staying on a node that's being wound down isn't something that makes a lot of sense if you plan to produce 10 million+ GPUs).
 
Meanwhile I look at the actual games and assume that many of the flops have leaked out of one of the vents or got lost on their way to the TV.

I could seriously do without your nonstop BS cynical, biased, thread-derailing comments every two pages of every decent thread. It's getting old. It's been old.

I don't understand how you can reward posters who put time, energy, thought, and logic into their posts with unfunny, GAF-cliché one-liners about things that we all know god damn well have explanations.

You know why these launch games don't look as advanced as later games will. We all know. Stop shitting everything up. Stop dismissing sincere efforts by other posters to come to decent conclusions with what I would call essentially trolling. You don't know anything, and these people obviously know something to some degree. Every time these spec threads start to lift their head from a puddle of drool, people like you make these kinds of ridiculous posts that do NOTHING to elevate the conversation and undermine the effort it takes for the people who are trying to have a real discussion.

Save it, man; everyone is already aware of the reasoning behind why some of these launch games aren't going to show what the console is capable of. We already determined the "whys" months and months ago. Posts like yours drag the thread down into the trash and make people with real information and tech experience less likely to devote energy to actual discussion.
 
BG thinks it will outperform the E6760, likely meaning it exceeds 576 GFLOPs.

I am just going to stick to ~600 because it really doesn't matter what the exact performance is. However, there is good reason to assume that the Wii U's GPU is 32nm or 28nm. A rumor of a yield problem popped up earlier in the year, which would be highly unlikely on such a "mature" process, and TSMC is moving away from 40nm and focusing on their 28nm and 32nm processes (staying on a node that's being wound down isn't something that makes a lot of sense if you plan to produce 10 million+ GPUs).

Has anyone got a link to anything about these yield problems?
 
TSMC is moving away from 40nm and focusing on their 28nm and 32nm processes (staying on a node that's being wound down isn't something that makes a lot of sense if you plan to produce 10 million+ GPUs).

The old fab lines don't just magically disappear. They are certainly capable of much higher volume than 28/32nm at the moment, which have yet to fully ramp up.
 
The old fab lines don't just magically disappear. They are certainly capable of much higher volume than 28/32nm at the moment, which have yet to fully ramp up.

TSMC's website uses a picture of the Wii to advertise its 32/28nm process, and it's not very likely at all that they'd have any yield problems at 40nm.
 
BG thinks it will outperform the E6760, likely meaning it exceeds 576 GFLOPs.

I missed that post. But if he did say that, I think he might mean that it will do more with less. Because of the likely Wii U-specific customisation, doing flop-count comparisons is even more pointless than normal.

To use a car analogy, a McLaren MP4-12C is faster around the Top Gear test track than the much more powerful Bugatti Veyron Super Sport.
 
So is this where the E6760 rumor came from? Then it seems most people don't understand the press release. A company called Alt Software wrote OpenGL drivers for several GPUs, including the E6760, for systems running the Integrity operating system. Because AMD doesn't provide such drivers themselves. Green Hills is not involved in the driver development, the GPU doesn't use Integrity, there's no relationship between Alt Software and Nintendo, and the Wii U doesn't use OpenGL, so Nintendo would have no use for those drivers to begin with.

???
What else would it use? It can't use DirectX.
 
It feels like you're arguing just for the sake of arguing. To the latter point first: I didn't say I was the first to mention it. I said the talk about the E6760 being in the Wii U originated from our discussions here using it as a comparison GPU. If it had originated with B3D, then the E6760 being the Wii U's GPU would have started much sooner. And only those who have been putting down what the Wii U might be capable of have been harping on the GPU still essentially being an R700.

Development for the GPU (and CPU) started mid-2009. We discovered that in the first WUST thanks to wsippel. Secondly, the target specs publicly leaked, confirming the R700 foundation.

http://www.vgleaks.com/world-premiere-wii-u-specs/

See the features section. All (the majority) of those listed come from the R700 line.

To me it seems like you're coming in on the tail end of all the discussion and findings, and holding what you missed against everyone else.

That said, I think what Nintendo/AMD made will be as good as, if not better than, the performance of an E6760.

I apologize if you think I'm coming off as someone who just wants to argue. I can assure you that I don't.

I just didn't like the fact that certain things were being stated definitively when most everything in these topics has been rumors and speculation compounded with "confirmations" from 2nd-, 3rd- and sometimes even 4th-hand sources.

Anything obtained by way of second- or third-hand, anonymous or incognito sources is not a confirmation, even if 10 different anonymous/2nd-hand sources say the same things. VGLeaks is not a source for real confirmations since, once again, their sources are anonymous.

Interesting that you believe the final product will probably outperform the E6760. It still seems to me that it will share some of the same technology.
 