Rumor: Wii U final specs

Bah. The "CJ" comment appears again.



Arkam only started posting this year and even indicated in this thread it's not a "full DX11 equivalent". Also Nintendo's not going to scrap years of R&D just like that.

Just for reference, Bgassassin is referring to this post by Arkam.

If you're reading this, Arkam, we are still waiting on those details. ;)



Seriously, don't get in trouble for us. We will endure.
 
Maybe it's the average, or the High preset on AMD's page? Idk.

The 3DMARK Vantage test was done using Performance preset level with this config (from AMD's Product Brief):
System configuration: 1280x1024, AMD Phenom II X4 965 @ 3.4GHz, MSI 890FXA-GD70, Corsair XMS3 8GB (4x2GB) 1333MHz 9-9-9-24 (TW3X4G1333C9A G), Windows® 7 64bit Ultimate

Most HD4850s score ~7000 on the same preset. The "1280x1024" and the "P" mean the Performance preset.
 
Yeah I've seen differing results for stock 4850s as well.

Yeah... See, I'm finding it hard to believe that (or to understand how) a 600MHz 480 ALU part can outperform a slightly higher clocked piece of hardware sporting 67% more ALUs in what is basically a DX10 test. Turks is still based on Evergreen VLIW5 architecture, so I'm not even sure where this magical performance is supposed to come from. ;)

With the TMUs being tied to the SIMD count, the 4850 already has an advantage in texture filtering (10 vs 6 SIMDs), and there's also a bandwidth advantage (~60GB/s vs the E6760's 51GB/s).

It's also very likely that the E6760 is only sporting 8 ROPs (in line with Turks), whereas the 4850 has 16. That won't necessarily result in double the performance, since it's excess fillrate given the bandwidth anyway, but there shouldn't be a performance detriment from this aspect either.

:p
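A rough back-of-the-envelope comparison of the two parts makes the point above concrete. This is only a sketch using the commonly cited paper figures for each card (peak FLOPS taken as ALUs × 2 ops × clock for these VLIW5 parts), not vendor-verified data:

```python
# Paper-spec comparison of HD 4850 vs E6760 (commonly cited figures; approximate).
def gflops(alus, clock_mhz):
    # VLIW5 parts: 1 MAD (2 flops) per ALU per clock
    return alus * 2 * clock_mhz / 1000

hd4850 = {"alus": 800, "clock_mhz": 625, "bandwidth_gbs": 63.6, "rops": 16}
e6760  = {"alus": 480, "clock_mhz": 600, "bandwidth_gbs": 51.2, "rops": 8}

for name, gpu in (("HD 4850", hd4850), ("E6760", e6760)):
    print(f"{name}: {gflops(gpu['alus'], gpu['clock_mhz']):.0f} GFLOPS, "
          f"{gpu['bandwidth_gbs']} GB/s, {gpu['rops']} ROPs")
# HD 4850 -> ~1000 GFLOPS, E6760 -> ~576 GFLOPS: the 4850 leads on every raw
# metric, which is why the Vantage result above is hard to believe.
```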
 
I don't really think it makes sense to use the 4850 or a variant. First, the 4850 is designed with a bigger process in mind. It's not like you can just flip a switch and have it manufactured on a smaller process immediately.

Secondly, by sticking with an older architecture you lose all the efficiency improvements, which means for the same power budget you get less performance. Nintendo likes to use low-power parts, and efficiency is certainly important.

Finally, is picking an older architecture really going to save them money? The manufacturing cost is probably going to be about the same, and so is the customization cost.

You can verify the above easily by comparing the Radeon 4770 to the Radeon 6670. The 4770 is a special chip that is based on the R700 architecture but built on a 40nm process. The 6670 is also a 40nm part, but based on the newer architecture:

Performance: 6670 is faster
TDP: 4770 is 80W, 6670 is 66W
Die size: 4770 is 137mm^2, 6670 is 118mm^2

So a GPU based on the 6000 series is going to be smaller (cheaper to make), use less power (fits in the small case) and have higher performance. Not to mention all the new features the 6000 series gets over the 4000 series. It just doesn't make sense for Nintendo to use an inferior part that costs more.
 
I don't really think it makes sense to use the 4850 or a variant. First, the 4850 is designed with a bigger process in mind. It's not like you can just flip a switch and have it manufactured on a smaller process immediately.

Mobile 4830/4860 is based off of rv740. Given the rv7xx rumour, that would have been the most likely chip in the devkits last year.

Yes, there is absolutely no point in using the 55nm variant in 2011. Of course, there shouldn't have been any reason to hear about the rv7xx line at all in 2011, which is what is maddening in the first place, regardless.

The performance of the alpha kit isn't going to matter as much because all it needs to be is close to the features expected in the final unit - of course, they don't want to overshoot perf. The performance stuff comes later when the design is actually final and they can figure out TDP/yields. You don't want to hit a moving target, and really what the devs want is a stable feature set.
 
Mobile 4830/4860 is based off of rv740. Given the rv7xx rumour, that would have been the most likely chip in the devkits last year.

Yes, there is absolutely no point in using the 55nm variant in 2011. Of course, there shouldn't have been any reason to hear about the rv7xx line at all in 2011, which is what is maddening in the first place, regardless.

The performance of the alpha kit isn't going to matter as much because all it needs to be is close to the features expected in the final unit - of course, they don't want to overshoot perf. The performance stuff comes later when the design is actually final and they can figure out TDP/yields. You don't want to hit a moving target, and really what the devs want is a stable feature set.
What if one of the features is a massive-BW superb-latency edram pool? ;p
 
Bah. The "CJ" comment appears again.



Arkam only started posting this year and even indicated in this thread it's not a "full DX11 equivalent". Also Nintendo's not going to scrap years of R&D just like that.

What R&D are Nintendo scrapping exactly? We have no idea what Nintendo has been working towards for the final Wii U GPU except that it is a GPGPU, and has very low heat usage. Everything else has been speculation based on early dev kit rumors.

I have no doubt that early dev kits were most likely based around r700 but simply as a place holder. Just like how I believe that PS4/720 games are being worked on using older chips which will not be present in the actual final hardware.
 
So, we can probably assume that it's going to be the weakest console out of the three by quite a margin, and if the trend continues it will be the console that sells the most units of hardware by quite a margin.
 
Given the latest rumours, would it be relatively meaty? Obviously not as powerful as the PS4/720, but certainly not another Wii situation, no?

We won't have another Wii situation in terms of power difference unless future PS4/720 owners don't mind paying over a grand for a console the size of a PC. Although Nintendo's choice of GPU was the major thing stopping the Wii getting ports this gen, the U should have no problems either receiving down-ports or supplying up-ports from or to the PS4 and 720 next gen, albeit with a graphical downgrade.

If the target specs that Bgassassin had a while back are accurate all 3 consoles are going to be in the same sort of ballpark in terms of power. The 720 should be around 2 to 3 times more powerful than the U, but some of that power is going to be reserved for Microsoft's strange obsession with being more of a media center than a games machine.
 
how do you get by quite a margin when we don't even know what's going into the next xbox and ps?

can i have that crystal ball?
 
This is slightly rhetorical. How did people start to think that this GPU all of a sudden became a God-send? Wasn't the peak output of the Wii U supposed to be 75w?
 
This is slightly rhetorical. How did people start to think that this GPU all of a sudden became a God-send? Wasn't the peak output of the Wii U supposed to be 75w?

Not a godsend by any means, but better than thought. The GPU will be close in performance to (but not really based on) an E6760 (35 watts) and will have all the bells and whistles needed to produce the desired DirectX 11 effects, even if at a lesser degree than the next Xbox/PS4.
 
What R&D are Nintendo scrapping exactly? We have no idea what Nintendo has been working towards for the final Wii U GPU except that it is a GPGPU, and has very low heat usage. Everything else has been speculation based on early dev kit rumors.

I have no doubt that early dev kits were most likely based around r700 but simply as a place holder. Just like how I believe that PS4/720 games are being worked on using older chips which will not be present in the actual final hardware.


Of course the R700 they used was a placeholder, but the custom GPU has R700 features. R&D started in 2009. And they wouldn't drop the work on that to use an E6760.
 
This is slightly rhetorical. How did people start to think that this GPU all of a sudden became a God-send? Wasn't the peak output of the Wii U supposed to be 75w?
That GPU was mostly used as an example to show that AMD had the tech to produce a 500+ GFLOPS GPU in a low power envelope with high production yields. But the argument went out of control.

tl;dr The question was "how can the Wii U even be that powerful" and the answer was "by using something like that". Somehow it transformed into "so, the Wii U will use that!".
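For reference, the 500+ GFLOPS figure is simply the E6760's paper spec. A quick sketch of the arithmetic, assuming the published figures (480 ALUs at 600 MHz, 35 W TDP) and the standard VLIW5 peak-FLOPS formula:

```python
# E6760 paper specs (AMD's published figures): 480 ALUs at 600 MHz, 35 W TDP.
alus, clock_ghz, tdp_w = 480, 0.6, 35

peak_gflops = alus * 2 * clock_ghz   # 2 flops (1 MAD) per ALU per clock
print(peak_gflops)                   # 576.0 GFLOPS
print(peak_gflops / tdp_w)           # ~16.5 GFLOPS per watt
# The point being made: AMD can ship ~576 GFLOPS in a 35 W embedded part, so a
# similar performance target inside the Wii U's power budget looks plausible.
```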
 
Of course the R700 they used was a placeholder, but the custom GPU has R700 features. R&D started in 2009. And they wouldn't drop the work on that to use an E6760.

This is a gigantic assumption that you are making. What R700 features are supposed to be in the final Wii U GPU that are being abandoned?

We have no idea what GPU Nintendo were doing their R&D on, nor do we know how long they have been planning to use said GPU. I don't think it's safe to claim we know for sure that the final Wii U GPU was always supposed to be some low wattage/low heat, feature rich, 40nm, highly modified R700 variation with the performance of a 4850 and general purpose functionality.

If that is actually what is inside of the Wii U retail unit then I definitely have to tip my hat to Nintendo for taking the long way to the finish line. I just don't want us pretending that we know without a shadow of a doubt that Nintendo and AMD have been designing a custom R700 variant for their final GPU design non-stop since 2009.
 
This is a gigantic assumption that you are making. What R700 features are supposed to be in the final Wii U GPU that are being abandoned?

We have no idea what GPU Nintendo were doing their R&D on, nor do we know how long they have been planning to use said GPU. I don't think it's safe to claim we know for sure that the final Wii U GPU was always supposed to be some low wattage/low heat, feature rich, 40nm, highly modified R700 variation with the performance of a 4850 and general purpose functionality.

If that is actually what is inside of the Wii U retail unit then I definitely have to tip my hat to Nintendo for taking the long way to the finish line. I just don't want us pretending that we know without a shadow of a doubt that Nintendo and AMD have been designing a custom R700 variant for their final GPU design non-stop since 2009.

Haha. This post suggests you weren't around for any of the first Wii U thread. But you did say you weren't familiar with the meaning of WUST so that's most likely the case.

As for the question I don't understand what you're asking. It sounds contradictory.
 
I don't think it's safe to claim we know for sure that the final Wii U GPU was always supposed to be some low wattage/low heat, feature rich, 40nm, highly modified R700 variation with the performance of a 4850 and general purpose functionality.

Aside from it being a 40nm part (I'm still betting on 32nm), all of those things have been confirmed by multiple reputable sources. Well, perhaps not the "performance of a 4850", but lots of sources are saying the console's GPU has been derived from the R700 architecture, with more advanced features and a focus on GPGPU functionality. The low wattage/low heat we know from the size of the console and the typical 40W power draw.
 
Here's a question for BG or anyone else in the know.

From what I've read, it sounds like Nintendo are still being incredibly secretive about the Wii U's hardware specifications. Even developers with licenses and Wii U dev kits still seem to be in the dark about the internal architecture and feature set of the Wii U.

This secrecy must also extend to the development kit software, compilers, and middleware tools provided by Nintendo. If Nintendo had provided developers details of all the CPU's instruction sets, the GPU's supported OpenGL/NintendoGL compliant APIs and hardware, thorough documentation, etc, developers and insiders would have a far better idea of the Wii U's capabilities and limitations.

A few developers have made comments suggesting that the only effective way for them to discover the Wii U's capabilities right now is via trial and error: throwing things at the Wii U and seeing what doesn't work. That to me sounds like even the dev kit software is very limited, undocumented, and incomplete. As such, developers are taking a very manual approach with current Wii U development; they're playing in the dark and discovering things as they go along.

The above might also explain why developers are having issues getting respectable performance out of the Wii U's CPU in particular. I don't see the logic in Nintendo including 2-4x the memory, a 2x as powerful GPU, 3x the EDRAM, a DSP and I/O controllers, and a faster ROM drive, yet gimping the system with a very average CPU. Nintendo traditionally have been very thorough with their console hardware to ensure it's all complementary to one another. Could a lot of this poor performance be due to immature and limited dev kit tools and resources?

I also get the feeling that Nintendo, right now, are not focused on their dev kits and 3rd party developers. Nintendo's attention seems to be fixed on getting the Wii U's operating system and online services up and running. The dev kit tool set might be suffering as a result of this secrecy and inattention by Nintendo?
 
That GPU was mostly used as an example to show that AMD had the tech to produce a 500+ GFLOPS GPU in a low power envelope with high production yields. But the argument went out of control.

tl;dr The question was "how can the Wii U even be that powerful" and the answer was "by using something like that". Somehow it transformed into "so, the Wii U will use that!".

That explains it. It's just laughable that we went from 75w max output to something that was super powerful somehow.
 
Here's a question for BG or anyone else in the know.

From what I've read, it sounds like Nintendo are still being incredibly secretive about the Wii U's hardware specifications. Even developers with licenses and Wii U dev kits still seem to be in the dark about the internal architecture and feature set of the Wii U.

This secrecy must also extend to the development kit software, compilers, and middleware tools provided by Nintendo. If Nintendo had provided developers details of all the CPU's instruction sets, the GPU's supported OpenGL/NintendoGL compliant APIs and hardware, etc, developers and insiders would have a far better idea of the Wii U's capabilities and limitations.

A few developers have made comments suggesting that the only effective way for them to discover the Wii U's capabilities right now is via trial and error: throwing things at the Wii U and seeing what doesn't work. That to me sounds like even the dev kit software is very limited, undocumented, and incomplete. As such, developers are taking a very manual approach with current Wii U development; they're playing in the dark and discovering things as they go along.

I also get the feeling that Nintendo right now are not focused on their dev kits and 3rd party developers. Nintendo's attention seems to be fixed on getting the Wii U's operating system and online services up and running.

Thoughts?

Devs know the features so they are able to design their games. But the things us "armchair hardware designers" would like to know seem to be what Nintendo isn't giving. If we had the clock speed and ALU count we could have a decent understanding of the GPU's peak performance. Things like that are what Nintendo is apparently sitting on. The stuff in your second paragraph is known by developers. So what devs seem to have to learn on their own is the peak performance of the hardware. Hopefully saying it in that manner clears it up. They've supposedly done well in working with (certain) 3rd parties, and the dev kits for the launch period have been finalized.

That explains it. It's just laughable that we went from 75w max output to something that was super powerful somehow.

For more context, go here.

http://www.neogaf.com/forum/showpost.php?p=42408109&postcount=4998
 
- We're not saying it's an E6760
- There's a lot of rational discussion as to why the final Wii U GPU could be very similar to an E6760 (which is all we're discussing here)
- I don't think any sane person actually thought it was a POWER7 CPU. In fact, I don't think anyone ever said that; they were merely discussing the tweets which said it used a POWER7 chip. Most of that discussion ended in the conclusion "The tweets mean nothing. Maybe it's something loosely based on POWER7, but probably not."
- I seem to recall you were one of the people who ended up wrong about the PSU, saying it was rated for 75w and therefore could only output around 60% of that. That's false. I shouldn't have to explain why it's false again, but in a nutshell: it outputs up to 75w (which is what the pre-E3 photo of the PSU confirmed). As Iwata himself said in the Nintendo Direct, the console will consume up to 75w of power at times, with normal use being about 40w. blu and others have backed that up, afaik. That's why there was a lot of discussion, because people were incorrectly claiming as fact that it'll only output ~45w (60% of 75w) max, as they confused PSU efficiency/output/rating etc.
There are people saying it's a (modified) E6760, when I don't see how this is supported beyond the silly emails. Everything we've heard is DX10.1+, R700 base, GDDR3, afaik. From what I can tell the E6760 is derived from the Northern Islands series, uses GDDR5 and is DX11 compliant. The originators of the notion of the E6760 as something that could be achieved by AMD, as a reference for what they may do for Nintendo (bgassassin, Fourth Storm), don't seem to actually think it's the E6760.

There were people (sane or not) assuming it actually was a POWER7 CPU, despite how ridiculous this would be.

I don't think any consumer product should ever actually use the maximum power output of its power supply; that sounds like a recipe for product failure.
 
Devs know the features so they are able to design their games. But the things us "armchair hardware designers" would like to know seem to be what Nintendo isn't giving. If we had the clock speed and ALU count we could have a decent understanding of the GPU's peak performance. Things like that are what Nintendo is apparently sitting on. The stuff in your second paragraph is known by developers. So what devs seem to have to learn on their own is the peak performance of the hardware. Hopefully saying it in that manner clears it up. They've supposedly done well in working with (certain) 3rd parties, and the dev kits for the launch period have been finalized.



For more context, go here.

http://www.neogaf.com/forum/showpost.php?p=42408109&postcount=4998

That's quite clear and astonishing really. Devs really don't have clock speeds and such? Surely there are ways...

And we've heard of the feature set the most really. Based on R700 but we don't know performance of said features. Some Dx11 and SM5 level effects but perhaps only sparingly...you get the picture. And when things don't seem to make sense, just remember it's all about that eDRAM.
 
The grand irony is that, afaict and someone more versed should correct me if mistaken, the E6760 doesn't really seem "super powerful."

It just seems like a nice low power GPU, with modern features and a good performance/TDP ratio?
 
The grand irony is that, afaict and someone more versed should correct me if mistaken, the E6760 doesn't really seem "super powerful."

It just seems like a nice low power GPU, with modern features and a good performance/TDP ratio?

Yep, that's exactly what it is too.


In regards to this being tied to the Wii U, the form factor & performance is what seems to be similar, not necessarily the GPU itself being based on the e6760.

Remember the Gamecube specs? For the modest raw power it had, the system could produce (at the time) next gen results more advanced than PS2 and on par with Xbox with the help of Factor 5.....
 
Bah. The "CJ" comment appears again.



Arkam only started posting this year and even indicated in this thread it's not a "full DX11 equivalent". Also Nintendo's not going to scrap years of R&D just like that.

Sorry, I forgot when exactly it was, just that we were discussing the E3 2011 Wii U at that time. It was also before the dev kit "bump" early this year. Also, Nintendo isn't scrapping anything in my assumption; using an HD4850 in the early dev kits is something I've heard from you. I know that what you know changes with the next rumor that comes in, but there is no reason Nintendo would be "locked" into DX10.1 if they were customizing GPU7. There is also no reason that a DX11 compatible GPU couldn't be used in the final dev kits if an HD4850 was in the first ones, as developers building games on DX10.1 cards would not lose any functionality in the move to DX11.

Here is the question I really want to ask though: did Arkam say that these "specs" were from final dev kits? And did he again get it second hand? Because there really isn't any change here from what he said 6-9 months ago.
 
Haha. This post suggests you weren't around for any of the first Wii U thread. But you did say you weren't familiar with the meaning of WUST so that's most likely the case.

As for the question I don't understand what you're asking. It sounds contradictory.

My point was that we don't know what Nintendo was targeting for their final GPU design, nor do we know how long they have been targeting this anonymous GPU. To suggest that we knew for a fact that the final GPU design would be a highly modified R700 derivative is an assumption.

Everything else I said was me being sarcastic about exactly how much Nintendo would actually have to modify an R700 in order to have all of the specifications the final Wii U GPU has been confirmed to have, just to end up with what is essentially what we know about the E6760.

I'm not saying that it is a modified E6760 variant, but let's not pretend that we know, without a shadow of a doubt, that it isn't, only because the early dev kits were said to have had an R700 variant placeholder.

Btw BG, you were not the first to mention the E6760 in association with the Wii U. At least not on GAF. The earliest I found was in early March on B3D. Not that it has any relevance as to whether or not it is based on the E6760 anyway. Nintendo is not looking at the forums going "Uh oh, someone mentioned our GPU in the same paragraph as the Wii U. People will think he's the source of the idea to use it and we can't have that. Abort!".

Aside from it being a 40nm part (I'm still betting on 32nm), all of those things have been confirmed by multiple reputable sources. Well, perhaps not the "performance of a 4850", but lots of sources are saying the console's GPU has been derived from the R700 architecture, with more advanced features and a focus on GPGPU functionality. The low wattage/low heat we know from the size of the console and the typical 40W power draw.
That's exactly what I was pointing out. For all of the specs that are confirmed to be in the Wii U GPU but are not standard in the R700 family, that is a heck of a lot of customization and modification just to reach a finish line that is so similar to a card that already exists. Nintendo may have taken the long route, but I don't think we can confidently say that they did at this point.

The grand irony is that, afaict and someone more versed should correct me if mistaken, the E6760 doesn't really seem "super powerful."

It just seems like a nice low power GPU, with modern features and a good performance/TDP ratio?

If the Wii U is using a GPU based around the E6760, then what it does mean is that most 3rd parties would not exclude a Wii U version simply based upon performance. It's close enough in features and performance to any reasonably affordable GPU that Sony or MS would use that a down-port wouldn't be that much trouble for them.

This is a scenario that, apparently, a lot of people do not want.
 
That's quite clear and astonishing really. Devs really don't have clock speeds and such? Surely there are ways...

And we've heard of the feature set the most really. Based on R700 but we don't know performance of said features. Some Dx11 and SM5 level effects but perhaps only sparingly...you get the picture. And when things don't seem to make sense, just remember it's all about that eDRAM.

No, we really don't. "DX11 features but DX9 capabilities" is also a reliable rumor. So, something between 500-620 GFLOPS that has moved beyond DX10.1 and SM4; we don't know how far, and neither do most developers. If they are saying anything, it's THAT.

Most likely what fits is a GPU that doesn't use R700-series shaders at all (less efficient, more power hungry, lacking any features beyond SM4 and DX10.1). So Nintendo and AMD got together, spent billions on R&D, and started to evolve the R700 into its own new GPU. Whether they went Evergreen (R800, and thus similar to the E6760) or "N800" ("Nintendo" 800 series), no one here can tell you, and likely we just won't know. It is safe to assume, however, that it is not R700 at this point, and it's also reasonable to assume that the E6760 is close to the performance result of the "GPU7".

So let's drop all the stupid drama and just accept that it's not really DX10.1, that its performance is shy of 1TFLOP R700 cards but likely close to the 576 GFLOPS E6760 "Turks" card, and that we don't know the tessellation performance or the GPGPU performance. The E6760 doesn't outperform the HD4850 anyway; I've done a lot of research today, and the closest you can do is look for underclocked HD6670 cards and use that as a performance marker... Effectively the Wii U has a decent low-end GPU putting it 2-3 times Xenos, with effects current gen consoles cannot match. Future consoles from PS4 and XB3 will be 2-3 times Wii U but likely don't have important features that Wii U cannot match.
 
If the Wii U is using a GPU based around the E6760, then what it does mean is that most 3rd parties would not exclude a Wii U version simply based upon performance. It's close enough in features and performance to any reasonably affordable GPU that Sony or MS would use that a down-port wouldn't be that much trouble for them.

If publishers feel there is money to be made on the Wii U, the ports will happen regardless of whether there is an E6760 in there or not. Which makes all this E6760 obsession a bunch of nonsense.
So let's drop all the stupid drama and just accept that it's not really DX10.1, that its performance is shy of 1TFLOP R700 cards but likely close to the 576 GFLOPS E6760 "Turks" card, and that we don't know the tessellation performance or the GPGPU performance. The E6760 doesn't outperform the HD4850 anyway; I've done a lot of research today, and the closest you can do is look for underclocked HD6670 cards and use that as a performance marker... Effectively the Wii U has a decent low-end GPU putting it 2-3 times Xenos, with effects current gen consoles cannot match. Future consoles from PS4 and XB3 will be 2-3 times Wii U but likely don't have important features that Wii U cannot match.

smh at this post. From the 1TFLOP right down to the bullshit multipliers. It's like you pulled these numbers out of nowhere & just repeated them to yourself over & over until you started to believe them. Seriously, why?
 
If publishers feel there is money to be made on the Wii U, the ports will happen regardless of whether there is an E6760 in there or not. Which makes all this E6760 obsession a bunch of nonsense.


smh at this post. From the 1TFLOP right down to the bullshit multipliers. It's like you pulled these numbers out of nowhere & just repeated them to yourself over & over until you started to believe them. Seriously, why?

I see you actually don't understand what the heck I said. The 1TFLOP that the HD4850 produces is a hard number; it's not "pulled" from anywhere. Of course the dev kits were underclocked, which is why I said the Wii U would not hit that number. As for the multiplier, it is based on the 500-620 GFLOPS that GPU7 is likely to produce, compared to Xenos' 240 GFLOPS. It's true that you can't just put a multiplier on a console's overall performance, but you can put a multiplier based on facts or logic when you are directly comparing two GPUs' performance: GPU7 at 500-620 GFLOPS would be roughly 2x-3x Xenos. Hopefully this helps you avoid the pitfall of seeing something that is popular belief (like "multipliers are BS") and jumping to the conclusion that it is always the case. (Here is a hint: depending on context, multipliers are completely logical, and if you don't believe me, go read a GPU launch and its 30%-50% performance increase every time a new GPU generation comes out.)
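Spelling the multiplier claim out with the numbers used in the post (the poster's own 500-620 GFLOPS range for GPU7 and the commonly quoted 240 GFLOPS peak for Xenos; paper FLOPS only, ignoring architectural efficiency):

```python
# The "2x-3x Xenos" claim, worked out from the figures quoted in the post.
xenos_gflops = 240               # commonly quoted Xbox 360 GPU peak
gpu7_low, gpu7_high = 500, 620   # the poster's assumed range for GPU7

print(gpu7_low / xenos_gflops)   # ~2.1x
print(gpu7_high / xenos_gflops)  # ~2.6x
# So the range works out to roughly 2x-2.6x Xenos on paper; "2x-3x" is the
# post's rounding, and paper FLOPS say nothing about real-world efficiency.
```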
 
It seems like the press releases below keep getting ignored, no?

In my view, the Wii U GPU has been right in front of our faces, and has been for a while.

First,

Green Hills Software's MULTI Integrated Development Environment Selected by Nintendo for Wii U Development
http://www.ghs.com/news/20120327_ESC_Nintendo_WiiU.html

Second,

In May of this year, it was announced that the AMD Radeon E6760 would use Green Hills Software.
http://www.altsoftware.com/press-ne...gl-graphics-driver-architecture-embedded-syst


The first supported graphics processor will be the AMD Radeon E6760 GPU, integrated with the Green Hills INTEGRITY RTOS, and targeted for deployment in a next generation situational awareness avionics display system as part of a United States Department of Defense procurement.

OMG it's like the PS2 missile scare, except this time the Wii U is actually designed as a jet fighter targeting system.
 
Even if the Wii U GPU is an E6760 variant, we just don't know the variant, so it basically tells us almost nothing, right?

It could be downclocked 50%, have ALUs removed, or anything.
 
Sorry, I forgot when exactly it was, just that we were discussing the E3 2011 Wii U at that time. It was also before the dev kit "bump" early this year. Also, Nintendo isn't scrapping anything in my assumption; using an HD4850 in the early dev kits is something I've heard from you. I know that what you know changes with the next rumor that comes in, but there is no reason Nintendo would be "locked" into DX10.1 if they were customizing GPU7. There is also no reason that a DX11 compatible GPU couldn't be used in the final dev kits if an HD4850 was in the first ones, as developers building games on DX10.1 cards would not lose any functionality in the move to DX11.

Here is the question I really want to ask though: did Arkam say that these "specs" were from final dev kits? And did he again get it second hand? Because there really isn't any change here from what he said 6-9 months ago.

To the bold, no you don't. Don't be absurd. The fact that I'm still referring to an R700 as the foundation even now shows that was absurd. But I'd like to see these changes you mention. Don't sit there and say I've done something without legitimate proof.

Also there is a reason why that doesn't work. The final dev kit has the actual silicon going in the retail unit, not "another" GPU. The bump was due to tweaking the final silicon, not due to using another GPU (in this case an E6760) instead of an RV770.

And of course there wasn't much change. That's the idea of setting target specs for the final hardware. You're not going to see a big deviation from beginning to end because that's what's on paper.

My point was that we don't know what Nintendo was targeting for their final GPU design, nor do we know how long they have been targeting this anonymous GPU. To suggest that we knew for a fact that the final GPU design would be a highly modified R700 derivative is an assumption.

Everything else I said was me being sarcastic about exactly how much Nintendo would actually have to modify an R700 in order to have all of the specifications the final Wii U GPU has been confirmed to have, just to end up with what is essentially what we know about the E6760.

I'm not saying that it is a modified E6760 variant, but let's not pretend that we know, without a shadow of a doubt, that it isn't, only because the early dev kits were said to have had an R700 variant placeholder.

Btw BG, you were not the first to mention the E6760 in association with the Wii U. At least not on GAF. The earliest I found was in early March on B3D. Not that it has any relevance as to whether or not it is based on the E6760 anyway. Nintendo is not looking at the forums going "Uh oh, someone mentioned our GPU in the same paragraph as the Wii U. People will think he's the source of the idea to use it and we can't have that. Abort!".

It feels like you're arguing just for the sake of arguing. To the latter first. I didn't say I was the first to mention it. I said the talk about E6760 being in Wii U originated from our discussions here using it as a comparison GPU. If it originated with B3D then the E6760 being Wii U's GPU would have started much sooner. And only those who have been putting down what Wii U might be capable of have been harping on the GPU still essentially being an R700.

Development for the GPU (and CPU) started mid-2009. We discovered that in the first WUST thanks to wsippel. Secondly the target specs publicly leaked confirming the R700 foundation.

http://www.vgleaks.com/world-premiere-wii-u-specs/

See the features section. All (the majority) of those listed come from the R700 line.

To me it seems like you're coming in on the tail end of all the discussion and findings, and applying what you missed on everyone else.

That said I think what Nintendo/AMD made will be as good as, if not better, than the performance of an E6760.
 
To the bold, no you don't. Don't be absurd. The fact that I'm still referring to an R700 as the foundation even now shows that was absurd. But I'd like to see these changes you mention. Don't sit there and say I've done something without legitimate proof.

Also there is a reason why that doesn't work. The final dev kit has the actual silicon going in the retail unit, not "another" GPU. The bump was due to tweaking the final silicon, not due to using another GPU (in this case an E6760) instead of an RV770.

And of course there wasn't much change. That's the idea of setting target specs for the final hardware. You're not going to see a big deviation from beginning to end because that's what's on paper.
http://www.vgleaks.com/world-premiere-wii-u-specs/
First, the bolded: of course it's another GPU, because it's customized and it's supposed to hit the performance of an 800-shader R700 at a certain clock speed (certainly not the stock 625MHz; my guess is 360MHz or 480MHz btw, or 3x-4x the DSP). So yes, GPU7 will be an "enhanced" R700 if you want to use that term.
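As a rough sketch of what those guessed clocks would mean on paper, assuming the 800-ALU figure and the usual VLIW5 peak-FLOPS math (these are the post's guesses, not known clocks):

```python
# Paper GFLOPS for an 800-ALU VLIW5 part at the clocks guessed above.
alus = 800
for clock_mhz in (360, 480, 625):          # the two guesses, plus stock RV770
    gflops = alus * 2 * clock_mhz / 1000   # 2 flops (1 MAD) per ALU per clock
    print(clock_mhz, "MHz ->", gflops, "GFLOPS")
# 360 MHz -> 576, 480 MHz -> 768, 625 MHz -> 1000 GFLOPS; the lower guess lands
# right in the 500-620 GFLOPS range discussed earlier, while stock clocks would
# put it near the full 1 TFLOP of a desktop 4850.
```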

You've changed what you know quite a few times over the last year; examples are various: RAM sizes, ALUs in the GPU, wattage used, and probably the easiest to point out is you having a talk with wsippel or another insider where you came to the conclusion that the Wii U likely only had ~320 GFLOPS. Every time a rumor comes in, you try to see how it fits into what you have (this is something we all do), and this changes what you "know". On the last page, for instance, "matt's" info changed what you knew about the system and you were then unsure about what the performance would be.

The second thing here about R700, you are basically saying GPU7 is R700, which is simply not the case, and your own info would clearly point this out to you. That link there says "GX2 is a 3D graphics API for the Nintendo Wii U system (also known as Cafe). The API is designed to be as efficient as GX(1) from the Nintendo GameCube and Wii systems. Current features are modeled after OpenGL and the AMD r7xx series of graphics processors. Wii U’s graphics processor is referred to as GPU7."

GPU7 exceeds R700 features, from multiple sources, including the source of this OP. I appreciate your work, but it doesn't mean you have been flawless in navigating these rumors. In fact, we know Arkam and where he got his info, but do we know if this source has been updated? Has Arkam gotten info about final dev kits? He doesn't work on the Wii U himself and he is not a programmer, so his knowledge is limited and it's also fed to him by someone else. If these "specs" are from first dev kits (and he had an outdated one at the time) then it could very well be an R700 in his box.

I don't mean to attack you btw, but you act like you know stuff, when you just have a bunch of educated guesses and those are based on sources you likely can't even confirm. When I was throwing around the ~600GFLOPs rumor here from my source (a Nintendo engineer btw) I received a few PMs from other "insiders" that said it was 800GFLOPs. I know how it works BG, and it's not pretty, and impossible to just KNOW.
 