Rumor: Wii U final specs

I missed that post. But if he did say that, I think he might mean that it will do more with less. Because of the likely Wii U-specific customisation, doing flop-count comparisons is even more pointless than normal.

To use a car analogy, a McLaren MP4-12C is faster round the Top Gear test track than the much more powerful Bugatti Veyron Super Sport.

http://www.neogaf.com/forum/showpost.php?p=42443021&postcount=5151 last line of this post (it's on the last page)

I apologize if you think I'm coming off as someone who just wants to argue. I can assure you that I don't.

I just didn't like the fact that certain things were being stated definitively when almost everything in these topics has been rumor and speculation compounded with "confirmations" from 2nd-, 3rd- and sometimes even 4th-hand sources.

Anything obtained by way of second- or third-hand, anonymous or incognito sources is not a confirmation, even if 10 different anonymous/2nd-hand sources say the same things. Vgleaks is not a source for real confirmations since, once again, their sources are anonymous.

Well said! This is why I've stopped posting in the WUSTs. Arkam, btw, is a second-hand source with limited knowledge of what he heard as well.
 
I don't think otherwise; I got the impression from your post that you did.

When I said "yet to ramp up" I was referring to 28/32nm.

It may be that the GPU is that, but there's enough demand from other companies that it would be both very expensive and (relatively) low volume for Nintendo.
 
The old fab units don't just magically disappear. Those are certainly capable of much higher volume than 28/32nm at the moment, which have yet to fully ramp up.

Old fabs get replaced by newer ones. There is no reason in the world to keep running 40nm fab units when 28nm is the future now.
 
When I said "yet to ramp up" I was referring to 28/32nm.

It may be that the GPU is that, but there's enough demand from other companies that it would be both very expensive and (relatively) low volume.

You also said the old fabs don't magically disappear; that's where we got our wires crossed lol
 
You would actually be better off not doing that. If you take a higher resolution image and shrink it down you basically get free AA, and the image ends up looking crisper and sharper. If they want to show the same thing on the pad as the TV they're better off rendering once and shrinking down that higher res image for the Pad.
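The "free AA" there is just supersampling: average each block of rendered pixels down to one output pixel. A minimal sketch of the idea in Python/numpy, assuming a hypothetical render at exactly twice the pad's output resolution (illustration only, not how the actual hardware scaler works):

    import numpy as np

    def downsample_2x(img: np.ndarray) -> np.ndarray:
        # Average each 2x2 block into one pixel: a box-filter supersample,
        # which is where the "free AA" comes from.
        h, w, c = img.shape
        return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

    # Hypothetical numbers: a 1280x720 render shrunk to 640x360 for the pad.
    frame = np.random.rand(720, 1280, 3).astype(np.float32)
    pad_frame = downsample_2x(frame)  # shape (360, 640, 3), edges smoothed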

You are right!
 
Old fabs get replaced by newer ones. There is no reason in the world to keep having 40nm fab units when 28nm is the future now.

I'll just leave this here:
[chart: TSMC wafer revenue split by process node - tsmc_wafer_revenue_split.png]
 
Here is what he said:

That fits perfectly with my car analogy. He didn't say it would likely have more flops.
He didn't, I did. See, the e6760 performs around 40% better flop for flop than the r700. This means either an architecture greatly exceeding r700, more flops, or both.
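To put rough numbers on that (a sketch only: 576 GFLOPs is the e6760's paper spec, and the 1.4x factor is just the flop-for-flop claim above, not a measured figure):

    # If the e6760 does ~40% more real work per flop than an r700,
    # how many r700-class flops would it take to match it on paper?
    E6760_GFLOPS = 576.0      # paper spec: 480 ALUs * 2 ops/cycle * 0.6 GHz
    EFFICIENCY = 1.4          # the ~40% flop-for-flop claim above

    r700_equiv = E6760_GFLOPS * EFFICIENCY
    print(f"~{r700_equiv:.0f} GFLOPs of r700 to match a 576 GFLOP e6760")
    # -> ~806 GFLOPs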
 
???
What else would it use? It can't use DirectX
Proprietary APIs, just like Sony. OpenGL (or DirectX for that matter - the Xbox doesn't really use "DirectX" either) has far too much overhead to be worthwhile in a console environment where compatibility with different GPUs isn't an issue. The Wii U one is supposedly called GX2, which makes sense, as GX was the GameCube and Wii API.
 
First, the bolded: of course it's another GPU, because it's customized, and it's supposed to hit the performance of an 800-shader R700 at a certain clock speed (certainly not the stock 625MHz | my guess is 360MHz or 480MHz btw, or 3x-4x the DSP). So yes, GPU7 will be an "enhanced" R700 if you want to use that term.
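For reference, here's the usual peak-flops arithmetic behind those clock guesses (a sketch: the 800-ALU count and the clocks are the speculation above, not confirmed specs):

    # AMD VLIW5 peak throughput: ALUs * 2 ops/cycle (multiply-add) * clock
    def peak_gflops(alus: int, clock_mhz: float) -> float:
        return alus * 2 * clock_mhz / 1000.0

    for mhz in (360, 480, 625):  # the guesses above, plus the stock clock
        print(f"800 ALUs @ {mhz} MHz -> {peak_gflops(800, mhz):.0f} GFLOPs")
    # -> 576, 768 and 1000 GFLOPs respectively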

Then what are we debating about here? Because with all the E6760 talk, for you to respond in the manner you did, I'm left to assume you are also saying an E6760 or a variation of it replaced it.

You've changed what you know quite a few times over the last year; examples are various: RAM sizes, ALUs in the GPU, wattage used, and probably the easiest to point out is you having a talk with wsippel or another insider where you came to the conclusion that the Wii U likely only had ~320GFLOPs. Every time a rumor comes in, you try to see how it fits into what you have (this is something we all do), and this changes what you "know". On the last page, for instance, "matt's" info changed what you knew about the system and you were then unsure about what the performance would be.

ROFL. Now I know you haven't been paying attention. You're crossing my speculation with my knowledge of actual things and saying it's all changed. And even then it hasn't all changed.

The RAM amount never changed, because I was shown how much games would have access to (1-1.5GB), even though at the time I didn't know that was only for games. I know for a fact I've never deviated from 640 ALUs being a part of my speculated target. The 320GFLOP figure was wsippel's take on a discussion we had. I even made a clarification post on that, or did you forget? And even with Matt's info or StevieP's, what I "know" hasn't changed.

The second thing here about R700, you are basically saying GPU7 is R700, which is simply not the case, and your own info would clearly point this out to you. That link there says "GX2 is a 3D graphics API for the Nintendo Wii U system (also known as Cafe). The API is designed to be as efficient as GX(1) from the Nintendo GameCube and Wii systems. Current features are modeled after OpenGL and the AMD r7xx series of graphics processors. Wii U’s graphics processor is referred to as GPU7."

GPU7 exceeds R700 features, per multiple sources, including the source of this OP. I appreciate your work, but it doesn't mean you have been flawless in navigating these rumors. In fact, we know Arkam and where he got his info, but do we know if that source has been updated? Has Arkam gotten info about the final dev kits? He doesn't work on the Wii U himself and he is not a programmer, so his knowledge is limited, and it's also fed to him by someone else. If these "specs" are from the first dev kits (and he had an outdated one at the time) then it could very well be an R700 in his box.

Again, you haven't been paying attention to anything I've said in this thread alone (e.g. my debate with USC) if you think I'm saying it's "just an R700".

And the OP info does come from the final dev kit. I was shown the info in the OP at the end of August and was specifically told it's from the final kit.

I don't mean to attack you btw, but you act like you know stuff, when you just have a bunch of educated guesses, and those are based on sources you likely can't even confirm. When I was throwing around the ~600GFLOPs rumor here from my source (a Nintendo engineer btw), I received a few PMs from other "insiders" that said it was 800GFLOPs. I know how it works, BG, and it's not pretty; it's impossible to just KNOW.

I can rather easily confirm those sources. But I'm not, just like you wouldn't confirm the people you are mentioning in this section of your post, so bringing that up is pointless. If all I've made are a "bunch of educated guesses", then you missed some of my responses to Shaheed in this thread. You "attacking" me (which I'm not bothered by) is only being done by making assumptions and arguing against those assumptions. If you actually read my posts, as you claim, you would know what you're saying isn't totally correct. Just comparing what I posted in this thread with what you are saying now shows you don't read all of them. I don't "act" like I know. When asked certain questions on things, I've even said I don't know. I don't know how much clearer that can be. Please don't make assumptions and argue against them when the available facts show those assumptions are wrong.

BG thinks it will outperform the E6760, likely meaning exceeding 576 GFLOPs.

I am just going to stick to ~600 because it really doesn't matter what the exact performance is. However, there is good reason to assume that the Wii U's GPU is 32nm or 28nm. A rumor of a yield problem popped up earlier in the year, which is highly unlikely with such a "mature" process as 40nm, and TSMC is moving away from 40nm to focus on their 28nm and 32nm processes (and sticking with a node being wound down isn't something that makes a lot of sense if you plan to produce 10 million+ GPUs).

Not just that, but the other customizations too.
 
When I said "yet to ramp up" I was referring to 28/32nm.

It may be that the GPU is that, but there's enough demand from other companies that it would be both very expensive and (relatively) low volume for Nintendo.
You're correct of course, but I would think Nintendo and TSMC signed the contracts at least a year ago, probably closer to two years. And TSMC expected much higher volumes and better yields for 28nm back then if I remember correctly. So it's certainly possible that Nintendo planned to go with 28nm and designed the TDP around that, and couldn't really change that anymore - which would explain the rumored manufacturing issues.
 
You're correct of course, but I would think Nintendo and TSMC signed the contracts at least a year ago, probably closer to two years. And TSMC expected much higher volumes and better yields for 28nm back then if I remember correctly. So it's certainly possible that Nintendo planned to go with 28nm and designed the TDP around that, and couldn't really change that anymore - which would explain the rumored manufacturing issues.

Fair enough, and I want to make it clear I'm not saying it's impossible, just that there are caveats.

What concerns me more is the eDRAM because volume production typically lags the latest node.
 
He didn't, I did. See, the e6760 performs around 40% better flop for flop than the r700. This means either an architecture greatly exceeding r700, more flops, or both.

Ok, then I'll just say that in practice the Wii U GPU will outperform the e6760, even if its flop count is lower or the same, due to Nintendo's/AMD's customisations.

The sad thing is we're unlikely to ever have the full details of the chip even when it is stripped down by iSuppli.
 

*little Zelda intro*
z0m3le, he come to this thread !
come to save ! the console Wii UuuUu !
enthusiasts are in dismay
'cause of wat specs bitchers say
But they will be ok, when z0m3le save the day !

This thread is getting a bit insane. Let's all take a deep breath.

Okay...time to catch up on more of the arguing.

R700 or not R700? I still don't get the debate. What's so hard to understand? Nintendo started with an R700. Then they changed it. Now, it is no longer an R700. It is GPU7. Bueno?
 
Ok, then I'll just say that in practice the Wii U GPU will outperform the e6760, even if its flop count is lower or the same, due to Nintendo's/AMD's customisations.

The sad thing is we're unlikely to ever have the full details of the chip even when it is stripped down by iSuppli.

I clarified his response in the above post. :)

This thread is getting a bit insane. Let's all take a deep breath.

Okay...time to catch up on more of the arguing.

R700 or not R700? I still don't get the debate. What's so hard to understand? Nintendo started with an R700. Then they changed it. Now, it is no longer an R700. It is GPU7. Bueno?

The sooner some people drop the idea of the E6760 being the GPU, the better.
 
I clarified his response in the above post. :)



The sooner some people drop the idea of the E6760 being the GPU, the better.

Indeed. And I should have been clearer in that last post when I said, "they changed it." I meant to say they modified it, not switched it out for something completely different. Of course, the chip will resemble the e6760's GPU in some ways. I believe they are both VLIW5 (would Nintendo tinker that deeply with the core architecture, though? Perhaps...).

I also don't know what kind of hardware changes would be made to enable whatever DX11/SM5-level effects they have chosen to include beyond the DX 10.1 base.

I'm looking at instances of convergent evolution in nature as an analogy. It's like how bats and birds both have the ability to fly, but they achieve it with different anatomy, movements, and efficiency. I suppose if I had the choice, I'd rather be a bird. But I'd rather be a bat than a rat. (Let's see if anyone makes sense of that haha)

No, we really don't. "DX11 features but DX9 capabilities" is also a reliable rumor. So: something between 500-620GFLOPs that has moved beyond DX10.1 and SM4; we don't know how far, and neither do most developers. If they are saying anything, it's THAT.

What most likely fits is a GPU that doesn't use R700-series shaders at all (less efficient, more power hungry, lacking any features beyond SM4 and DX10.1). So Nintendo and AMD got together, spent billions on R&D, and started to evolve the R700 into its own new GPU; whether they went Evergreen (R800 - and thus similar to the e6760) or "N800" ("Nintendo" 800 series), no one here can tell you, and likely we just won't know. It is safe to assume, however, that it is not an R700 at this point, and it's also reasonable to assume that the e6760 is close to the performance target of "GPU7".

So let's drop all the stupid drama and just accept that it's not really DX10.1, that its performance is shy of 1TFLOP R700 cards but likely close to the 576GFLOPs e6760 "Turks" card, and that we don't know the tessellation performance or the GPGPU performance. The E6760 doesn't outperform the HD4850 anyway; I've done a lot of research today, and the closest you can do is look for underclocked HD6670 cards and use those as a performance marker... Effectively the Wii U has a decent low-end GPU putting it 2-3 times Xenos, with effects current-gen consoles cannot match. Future consoles from PS4 and XB3 will be 2-3 times the Wii U but likely won't have important features that the Wii U cannot match.
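For anyone wanting to sanity-check the "2-3 times Xenos" shorthand, here's the raw-flops version (a sketch only, using the commonly cited 240 GFLOP peak for Xenos and the rumor band from this thread):

    XENOS_GFLOPS = 240.0  # commonly cited: 48 ALUs * 5 * 2 ops * 0.5 GHz
    for wiiu_gflops in (500.0, 576.0, 620.0):  # rumor band discussed above
        print(f"{wiiu_gflops:.0f} GFLOPs = "
              f"{wiiu_gflops / XENOS_GFLOPS:.1f}x Xenos")
    # -> 2.1x, 2.4x, 2.6x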

I don't understand what you're disagreeing with me about in the first bolded statement. That Nintendo may have taken some bullet-point features of DX11 and included support for them in GPU7, but that they may not run as efficiently, is what I took from that Kotaku article. As we've discussed, DX11 has no specification for raw performance, so I read that dev comment in relation to said features. We do know R700 includes features such as GPGPU and tessellation. I believe it possible Nintendo has "upgraded" these two aspects of their GPU, but likely in a different way than, say, an SI card. For GPGPU in particular, I think they might be using the eDRAM as L2 or L3 cache. I'm sticking w/ my prediction that the eDRAM is actually embedded on the same chip as the GPU. Seeing as how SRAM is expensive and Nintendo struck it from the CPU L2 design, I wonder if it's possible they did the same with the GPU.

But on to your second bolded comment: we simply don't know any of that (except the added effects, I'll give yah). There is absolutely nothing about the e6760 that points to a very similar GPU being included in the Wii U, other than the fact that it's an impressively featured card with a low power draw, and it would be nice to have.
 
Let's start again:


News: The Wii U GPU is 'GPU7'. It's a custom GPGPU built by AMD and Nintendo. R&D started in 2009 using an rv7xx series gpu as a starting point.

Uses the proprietary GX2 API, which is based on at least a dx10.1-equivalent and shader model 4.0+ feature set, plus further improvements to both by way of custom fixed-function hardware.

Will have an extremely low TDP and will offer a performance/watt ratio similar to AMD's Northern Islands based embedded GPGPU, the e6760.

Will be coupled with at least 1GB of RAM (very likely GDDR3).

Discuss.
 
I'm looking at instances of convergent evolution in nature as an analogy. It's like how bats and birds both have the ability to fly, but they achieve it with different anatomy, movements, and efficiency. I suppose if I had the choice, I'd rather be a bird. But I'd rather be a bat than a rat. (Let's see if anyone makes sense of that haha)

That's not confusing, nor that much different from how people talk about the ways to race a car or a motorcycle. It is not senseless or irrational; it is quite insightful.
 
Indeed. And I should have been clearer in that last post when I said, "they changed it." I meant to say they modified it, not switched it out for something completely different. Of course, the chip will resemble the e6760's GPU in some ways. I believe they are both VLIW5 (would Nintendo tinker that deeply with the core architecture, though? Perhaps...).

I also don't know what kind of hardware changes would be made to enable whatever DX11/SM5-level effects they have chosen to include beyond the DX 10.1 base.

I'm looking at instances of convergent evolution in nature as an analogy. It's like how bats and birds both have the ability to fly, but they achieve it with different anatomy, movements, and efficiency. I suppose if I had the choice, I'd rather be a bird. But I'd rather be a bat than a rat. (Let's see if anyone makes sense of that haha)

My take has been that, at worst, Nintendo and AMD addressed in some fashion the inefficiencies a 10.1-level GPU would have with DX11-level effects. As for the resemblance to the E6760, for me it was that they would both emphasize GPU general processing, an embedded design, and low TDP while still having nice graphical output.

Let's start again:

News: The Wii U GPU is 'GPU7'. It's a custom GPGPU built by AMD and Nintendo. R&D started in 2009 using an rv7xx series gpu as a starting point.

Uses the proprietary GX2 API, which is based on at least a dx10.1-equivalent and shader model 4.0+ feature set, plus further improvements to both by way of custom fixed-function hardware.

Will have an extremely low TDP and will offer a performance/watt ratio similar to AMD's Northern Islands based embedded GPGPU, the e6760.

Will be coupled with at least 1GB of RAM (very likely GDDR3).

Discuss.

Other than DDR3 being more likely due to GDDR3's lower density, this sounds good to me.

EDIT: Missed the fixed function part. The emphasis on GPU compute would replace that hypothetical.
 
Let's start again:


News: The Wii U GPU is 'GPU7'. It's a custom GPGPU built by AMD and Nintendo. R&D started in 2009 using an rv7xx series gpu as a starting point.

Uses the proprietary GX2 API, which is based on at least a dx10.1-equivalent and shader model 4.0+ feature set, plus further improvements to both by way of custom fixed-function hardware.

Will have an extremely low TDP and will offer a performance/watt ratio similar to AMD's Northern Islands based embedded GPGPU, the e6760.

Will be coupled with at least 1GB of RAM (very likely GDDR3).

Discuss.

Don't know much about specs so is this good?
 
Fourth Storm is spot-on with the convergent evolution comparison.

If Nintendo/AMD started out the development of the Wii U's GPU on the Radeon HD4850 or whatever several years ago, I highly doubt they'd set aside a group of engineers to work on the development of said GPU in a technological vacuum. It's a lot more likely that, as AMD developed their other GPU lines alongside the Wii U GPU, they exchanged technologies and design principles and applied them to the Wii U GPU where possible.

Thus, you can have a modified e6760, as the Wii U GPU might very well have enough e6760 components for it to be called a modified e6760 GPU.
 
Let's start again:


News: The Wii U GPU is 'GPU7'. It's a custom GPGPU built by AMD and Nintendo. R&D started in 2009 using an rv7xx series gpu as a starting point.

Uses the proprietary GX2 API, which is based on at least a dx10.1-equivalent and shader model 4.0+ feature set, plus further improvements to both by way of custom fixed-function hardware.

Will have an extremely low TDP and will offer a performance/watt ratio similar to AMD's Northern Islands based embedded GPGPU, the e6760.

Will be coupled with at least 1GB of RAM (very likely GDDR3).

Discuss.

Jesus Christ no.

I think one of the points of contention here is that people who have info from sources aren't laying it out on the table for everyone to examine, because they can't, or won't.

But even the people arguing amongst themselves who have inside info can't agree, or are still playing with theories, so basically, if people are still doing that, then no fecker knows what the facts are.

Even Arkam initially had the opinion that the Wii U was a bit of a bag of shit, and now he seems to have done a U-turn on that opinion, so even the developers don't know what it is or isn't yet.

Not much point in speculating any more until there is more info to go on, as I see it.

But of course everyone will, even long after the Wii U is dead and buried; I mean, people have been talking about how powerful the N64 was recently!!

It is basically insanity.
 
I'll just leave this here:
[chart: TSMC wafer revenue split by process node - tsmc_wafer_revenue_split.png]

Makes sense; there are probably tons of embedded systems etc. that don't need small processes. In comparison to 28nm, 150nm is like scratching a circuit with a sharp stick.

That shows quite a polarised picture though, with the volumes being at the large and small ends. Look how slowly 40nm ramped up, but now it's where the volume is, so I expect 32/28nm to ramp up similarly over the next year or so as the processes mature.
 
This thread is getting a bit insane. Let's all take a deep breath.

Okay...time to catch up on more of the arguing.

R700 or not R700? I still don't get the debate. What's so hard to understand? Nintendo started with an R700. Then they changed it. Now, it is no longer an R700. It is GPU7. Bueno?

Well, this is why I made that little song, to relax the atmosphere a bit. I think the mood isn't great, considering we're talking about the hardware of an entertainment product.

And I'm a dinosaur; I lived through the 80s and 90s console tech wars, but they were more enjoyable, less bitter than here (surely because they were based mainly on detailed spec sheets, unlike the Wii U case, where a lot is left to speculation, plus an undercurrent of anxiety brought on by Nintendo's hardware choices for the Wii).
 
Let's start again:


News: The Wii U GPU is 'GPU7'. It's a custom GPGPU built by AMD and Nintendo. R&D started in 2009 using an rv7xx series gpu as a starting point.

Uses the proprietary GX2 API, which is based on at least a dx10.1-equivalent and shader model 4.0+ feature set, plus further improvements to both by way of custom fixed-function hardware.

Will have an extremely low TDP and will offer a performance/watt ratio similar to AMD's Northern Islands based embedded GPGPU, the e6760.

Will be coupled with at least 1GB of RAM (very likely GDDR3).

Discuss.
I wouldn't be surprised if GPU7 had fixed-function hardware, but where did you get that from?
 
Fourth Storm is spot-on with the convergent evolution comparison.

If Nintendo/AMD started out the development of the Wii U's GPU on the Radeon HD4850 or whatever several years ago, I highly doubt they'd set aside a group of engineers to work on the development of said GPU in a technological vacuum. It's a lot more likely that, as AMD developed their other GPU lines alongside the Wii U GPU, they exchanged technologies and design principles and applied them to the Wii U GPU where possible.

Thus, you can have a modified e6760, as the Wii U GPU might very well have enough e6760 components for it to be called a modified e6760 GPU.

Well, first, that would go against the actual development process, and secondly, the E6760's target applications aren't all gaming-related. Nintendo and AMD can take the R700 line and design something much better for gaming, which is what I expect.

I need to state more clearly that when I say it's not an E6760, I also expect that what GPU7 actually is will be a better performer for a gaming console than a modified version of a GPU that wasn't designed exclusively for gaming.
 
Well, this is why I made that little song, to relax the atmosphere a bit. I think the mood isn't great, considering we're talking about the hardware of an entertainment product.

And I'm a dinosaur; I lived through the 80s and 90s console tech wars, but they were more enjoyable, less bitter than here (surely because they were based mainly on detailed spec sheets, unlike the Wii U case, where a lot is left to speculation, plus an undercurrent of anxiety brought on by Nintendo's hardware choices for the Wii).

Haha, it's all good. I appreciated your little ditty. And yes, the 90s console wars were a bit different, although the internet was a lot younger back then, and I wasn't on it, so everything I recall was really between friends.

I really think that even if Nintendo have "dropped out of the hardware race," they should act a bit more proud of the GPU they have put together. Give us an interview with one of their engineers explaining the advances they made and how they'll benefit games. Talk a bit more about the eDRAM and the low TDP. Iwata was trumpeting the RAM amount recently - now give us a little more. That they have disclosed so little makes it seem like they have something to hide. People will assume it's worse than it really is, and meanwhile some Nintendo fans may feel confident that it's better than it really is; by not being clearer on the matter (as with the whole POWER7 debate), Nintendo puts those fans in an embarrassing spot and breeds resentment.

Come on, Iwata. Get that swagger back. Show some pride in your product!
 
How is it "very likely" to be 1GB of GDDR3? Has anything been announced regarding the RAM speed?

My take has been that, at worst, Nintendo and AMD addressed in some fashion the inefficiencies a 10.1-level GPU would have with DX11-level effects. As for the resemblance to the E6760, for me it was that they would both emphasize GPU general processing, an embedded design, and low TDP while still having nice graphical output.



Other than DDR3 being more likely due to GDDR3's lower density, this sounds good to me.


Yay! We're discussing ;)

I thought GDDR3 purely because they used it in the Wii and it's lower power than GDDR5. Is DDR3 really a better option?
 
I wouldn't be surprised if GPU7 had fixed-function hardware, but where did you get that from?

I believe that came from a hypothetical scenario posed awhile back.

Yay! We're discussing ;)

I thought GDDR3 purely because they used it in the Wii and it's lower power than GDDR5. Is DDR3 really a better option?

In this case, yes. GDDR3 and DDR3 both have pros and cons that don't really outweigh each other from a performance perspective, from what I've seen. So DDR3 gains the advantage due to its much larger density and, in turn, cost.
 
Yay! We're discussing ;)

I thought GDDR3 purely because they used it in the Wii and it's lower power than GDDR5. Is DDR3 really a better option?

Yup, GDDR3 is being phased out if it hasn't been already. Meanwhile, with DDR3, you can get up to 2 GB using 8 chips. For GDDR3, you'd need 16.
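The chip-count math behind that, as a quick sketch (assuming the commonly available densities of the time: 2 Gbit per DDR3 chip versus 1 Gbit per GDDR3 chip):

    # Chips needed for a given capacity at a given per-chip density.
    def chips_needed(total_gbytes: int, chip_gbits: int) -> int:
        return total_gbytes * 8 // chip_gbits

    print(chips_needed(2, 2))  # 2 GB of 2 Gbit DDR3  -> 8 chips
    print(chips_needed(2, 1))  # 2 GB of 1 Gbit GDDR3 -> 16 chips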
 
My take has been that, at worst, Nintendo and AMD addressed in some fashion the inefficiencies a 10.1-level GPU would have with DX11-level effects. As for the resemblance to the E6760, for me it was that they would both emphasize GPU general processing, an embedded design, and low TDP while still having nice graphical output.



Other than DDR3 being more likely due to GDDR3's lower density, this sounds good to me.

EDIT: Missed the fixed function part. The emphasis on GPU compute would replace that hypothetical.

I wouldn't be surprised if GPU7 had fixed-function hardware, but where did you get that from?


Thought that was mooted before in earlier WUSTs. If not, I've learned something! Through the power of casual discussion :D
 
Four chips. That's why it's so likely.


So what would this mean performance wise?

Any chance/reason for a split pool?


Edit:

NewNews:

Wii U GPU is 'GPU7'. It's a custom GPGPU built by AMD and Nintendo. R&D started in 2009 using an rv7xx series gpu as a starting point. Includes compute shader support and a tessellation unit.

Uses the proprietary GX2 API, which is based on at least a dx10.1-equivalent and shader model 4.0+ feature set.

Will have an extremely low TDP and will offer a performance/watt ratio similar to AMD's Northern Islands based embedded GPGPU, the e6760.

Will be coupled with at least 1GB of RAM (very likely DDR3).
 
Four chips. That's why it's so likely.

That's true. I figured they might be in limited supply at the moment, so they might start at 8. Micron seems to have the 4-gigabit, 32-bit-interface chips they'd need if they want to reduce the chip count down to 4, either now or later. Don't know about other manufacturers like Samsung. And the clock speed is still up in the air. I want to say 960 MHz, but 720 also feels possible.
 
^ Don't forget Samsung.

So what would this mean performance wise?

Any chance/reason for a split pool?

See my response to you above about performance. And I don't see the 2GB being split. They would only have to deal with one type of memory and one bus for that memory.
 
Indeed. And I should have been clearer in that last post when I said, "they changed it." I meant to say they modified it, not switched it out for something completely different. Of course, the chip will resemble the e6760's GPU in some ways. I believe they are both VLIW5 (would Nintendo tinker that deeply with the core architecture, though? Perhaps...).

I also don't know what kind of hardware changes would be made to enable whatever DX11/SM5-level effects they have chosen to include beyond the DX 10.1 base.

I'm looking at instances of convergent evolution in nature as an analogy. It's like how bats and birds both have the ability to fly, but they achieve it with different anatomy, movements, and efficiency. I suppose if I had the choice, I'd rather be a bird. But I'd rather be a bat than a rat. (Let's see if anyone makes sense of that haha)



I don't understand what you're disagreeing with me about in the first bolded statement. That Nintendo may have taken some bullet-point features of DX11 and included support for them in GPU7, but that they may not run as efficiently, is what I took from that Kotaku article. As we've discussed, DX11 has no specification for raw performance, so I read that dev comment in relation to said features. We do know R700 includes features such as GPGPU and tessellation. I believe it possible Nintendo has "upgraded" these two aspects of their GPU, but likely in a different way than, say, an SI card. For GPGPU in particular, I think they might be using the eDRAM as L2 or L3 cache. I'm sticking w/ my prediction that the eDRAM is actually embedded on the same chip as the GPU. Seeing as how SRAM is expensive and Nintendo struck it from the CPU L2 design, I wonder if it's possible they did the same with the GPU.

But on to your second bolded comment: we simply don't know any of that (except the added effects, I'll give yah). There is absolutely nothing about the e6760 that points to a very similar GPU being included in the Wii U, other than the fact that it's an impressively featured card with a low power draw, and it would be nice to have.

First, I'll get out of the way the point I disagreed with: I don't believe it's easy or clear to see what anyone means by "DX11 features but not DX11 capable", or "SM5-like features, but only used sparingly"... How does that make sense to anyone? The changes from DX10.1 to DX11 are very small. It is flat out simply about moving tessellation into the unified shaders, as well as GPGPU targeting that same advancement. There is no long list of DX11 features that they might or might not have; it's literally those 2 things beyond DX10.1, but I'm all ears if you have a list. SM5, well, either the chip has it or it doesn't; it's a specification that Nintendo would likely not even mess with.

Now to move on: it seems like I'm rocking the boat or whatever, but how can that be when you, BG and I are plainly saying the same thing? BG is right to ask where he and I disagree, because we don't. He said "No" to there being a different GPU than what was in the early dev kits, but I think that was mostly misunderstood: I simply meant that a final silicon chip was put into the final dev kits, replacing the R700 card originally used. Beyond that, I said it's possible that that chip now supported DX11 (and for OUR purposes, any changes to improve GPGPU and tessellation would mean this).

As for saying "we don't know" if the Wii U's GPU is 2-3X Xenos, I think it's plain that it falls in that range; BG basically agrees with me and I don't see you disagreeing with him. We know that the Wii U can render 360 graphics and beyond, and we also know that it can render a second scene from the game onto the pad (Batman did this during E3); we know that Batman isn't exactly pushing the hardware because it's an unoptimized port.

Now even 28nm is being taken seriously because someone else said that it's possible. The reason I don't post here much anymore is exactly this: reasonable discussion is halted, if not by trolls, then by people who just don't read or understand what others are saying.

I'd love to know where either of you disagree with me about the Wii U specs. I'd love to have that debate instead, rather than one about what is "known" and what is "guessed" at... As for the spec leak in the OP, it's clearly second-hand information from Arkam (he has said as much), so let's try not to treat it like a fact sheet.

BTW: I completely disagree with people trying to crush the E6760 rumor, because most people know it's a custom part, so of course it's not literally an E6760, but it seems we all agree that the target performance is fairly close, and that is all people really care about. Until someone can tell me a DX11 feature they don't expect in GPU7, out of the two choices there really are to pick from, it makes no difference whether people look at the E6760 or at an imaginary custom card with the same performance. GPU7 won't use DX11, so I hardly see the point in telling people it's not an E6760, because it would have all the same characteristics of whatever GPU7 turns out to be, EXCEPT Nintendo's customizations of course.
 
^ Don't forget Samsung.



See my response to you above about performance. And I don't see the 2GB being split. They would only have to deal with one type of memory and one bus for that memory.

I'm not, but if they want a 128-bit bus, they need those 32-bit I/O chips. They may very well have them or be planning them, but my searches haven't turned up anything conclusive. Micron has 800 MHz DDR3 that fits the bill. I'm thinking the 32-bit I/O is rare and not necessary in most instances, since you're typically fitting the chips on a 64-bit DIMM in quantities of 8 or more, so all the individual chip interfaces add up. With the Wii U, they want to keep the number of chips low but still need decent bandwidth. Hmmm...

To add to your response concerning a unified pool, it also bears noting that it's likely Nintendo is designing this console to be very much like the GameCube. In that case, the memory controller will be on the GPU, and the CPU will access the main system memory via the GPU. So one bus from the CPU to the GPU and one bus from the GPU to the system memory. Very elegant IMO.
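Pulling the bus arithmetic from the last few posts together (a sketch: the 4-chip, 32-bit-per-chip layout and the 720/800/960 MHz clocks are the speculation above, with the usual two transfers per clock for DDR):

    # Peak bandwidth for a DDR memory bus.
    def bandwidth_gb_s(clock_mhz: float, bus_bits: int) -> float:
        # DDR = 2 transfers/clock; /8 for bits -> bytes, /1000 for MB -> GB
        return clock_mhz * 2 * (bus_bits / 8) / 1000.0

    chips, width = 4, 32
    bus_bits = chips * width  # 4 x 32-bit chips = 128-bit bus
    for mhz in (720, 800, 960):
        print(f"{bus_bits}-bit @ {mhz} MHz -> "
              f"{bandwidth_gb_s(mhz, bus_bits):.1f} GB/s")
    # -> 23.0, 25.6 and 30.7 GB/s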
 
I'm still of the opinion that Nintendo are hedging their bets by setting aside 1GB for themselves from the outset.

- Current multiplats are going to be made with the 512MB of the 360/PS3 in mind. Even at the starting 1GB, the Wii U already offers twice the memory space of that hardware. XBLA/PSN digital titles ported to the Wii U will also be made with this in mind, and into the future, as long as XBLA/PSN are a thing. Even the majority of the first wave of PS4/720 games are going to be 360/PS3 projects ported up.

- They don't want to write themselves into a corner by freeing up space to devs, then not have enough memory to implement a feature the competition debuts, much like Sony and cross-game voice chat.

- With the previous rumors that MS may be dedicating 1GB or more to their OS, Nintendo wouldn't want to be left out in the cold on any features that allows MS to offer.

I'd say if Sony and MS dedicate 2GB or more to games on their next hardware, Nintendo will start opening up the rest of the Wii U's memory. If they decide to ride with 1 or 1.5GB for games, Nintendo will stay where they are or just free up a bit more for games.



BTW, I'm convinced they have a similar strategy regarding the Wii U's price. The Wii U is coming out at $300, so it'll be on par with the desirable versions of the PS3 and 360. If the PS4 and 720 come out at $400 or $500, Nintendo doesn't need to move the Wii U's price.
 
BTW BG, no, of course I didn't read the entire thread or the last WUST, and why would anyone read a debate with USC-fan? The guy KNOWS what the Wii U is; there is no debate. I did skim through everything, though, and someone pointed out to you that a brick wall would be easier to talk to.
 
Now to move on: it seems like I'm rocking the boat or whatever, but how can that be when you, BG and I are plainly saying the same thing? BG is right to ask where he and I disagree, because we don't. He said "No" to there being a different GPU than what was in the early dev kits, but I think that was mostly misunderstood: I simply meant that a final silicon chip was put into the final dev kits, replacing the R700 card originally used. Beyond that, I said it's possible that that chip now supported DX11 (and for OUR purposes, any changes to improve GPGPU and tessellation would mean this).

As for saying "we don't know" if the Wii U's GPU is 2-3X Xenos, I think it's plain that it falls in that range; BG basically agrees with me and I don't see you disagreeing with him. We know that the Wii U can render 360 graphics and beyond, and we also know that it can render a second scene from the game onto the pad (Batman did this during E3); we know that Batman isn't exactly pushing the hardware because it's an unoptimized port.

...

I'd love to know where either of you disagree with me about the Wii U specs. I'd love to have that debate instead, rather than one about what is "known" and what is "guessed" at... As for the spec leak in the OP, it's clearly second-hand information from Arkam (he has said as much), so let's try not to treat it like a fact sheet.

I do agree there has been a decent amount of saying the same thing from different perspectives.

And I don't think we really disagree, other than on features we don't have full info on. I also felt it was plausible GPU7 could have been modified in a similar way to what you are saying, but as it stands, nothing really backs that up. And just as certain things may be "missing" in comparison to DX11, there could be other things it does better.

Now even 28nm is being taken seriously because someone else said that it's possible. The reason I don't post here much anymore is exactly this: reasonable discussion is halted, if not by trolls, then by people who just don't read or understand what others are saying.

True, but as to the bolded part, the hope would be that those people will work to understand each other. :)

BTW: I completely disagree with people trying to crush the E6760 rumor, because most people know it's a custom part, so of course it's not literally an E6760, but it seems we all agree that the target performance is fairly close, and that is all people really care about. Until someone can tell me a DX11 feature they don't expect in GPU7, out of the two choices there really are to pick from, it makes no difference whether people look at the E6760 or at an imaginary custom card with the same performance. GPU7 won't use DX11, so I hardly see the point in telling people it's not an E6760, because it would have all the same characteristics of whatever GPU7 turns out to be, EXCEPT Nintendo's customizations of course.

I can't agree with that. That would be like saying it's okay to call a Toyota Corolla a Honda Civic because they have the same or similar performance.

I'm not, but if they want a 128-bit bus, they need those 32-bit I/O chips. They may very well have them or be planning them, but my searches haven't turned up anything conclusive. Micron has 800 MHz DDR3 that fits the bill. I'm thinking the 32-bit I/O is rare and not necessary in most instances, since you're typically fitting the chips on a 64-bit DIMM in quantities of 8 or more, so all the individual chip interfaces add up. With the Wii U, they want to keep the number of chips low but still need decent bandwidth. Hmmm...

To add to your response concerning a unified pool, it also bears noting that it's likely Nintendo is designing this console to be very much like the GameCube. In that case, the memory controller will be on the GPU, and the CPU will access the main system memory via the GPU. So one bus from the CPU to the GPU and one bus from the GPU to the system memory. Very elegant IMO.

The Samsung comment was me trying to do multiple things at one time and not completely reading your post. My apologies.

I think what I was remembering came from a B3D post that had this link.

http://techon.nikkeibp.co.jp/english/NEWS_EN/20120222/205615/

Of course we'll then find out Nintendo decided to go with DDR4 due to future cost savings. >_>

BTW BG, no, of course I didn't read the entire thread or the last WUST, and why would anyone read a debate with USC-fan? The guy KNOWS what the Wii U is; there is no debate. I did skim through everything, though, and someone pointed out to you that a brick wall would be easier to talk to.

XD

For me, I just want you to be certain I actually said something before attributing it to me, as that's how I try to be, even though I do still err in that regard (see my response to Durante). I have no problem owning up to being wrong about something.

I'm still of the opinion that Nintendo are hedging their bets by setting aside 1GB for themselves from the outset.

- Current multiplats are going to be made with the 512MB of the 360/PS3 in mind. Even at the starting 1GB, the Wii U already offers twice the memory space of that hardware. XBLA/PSN digital titles ported to the Wii U will also be made with this in mind, and into the future, as long as XBLA/PSN are a thing. Even the majority of the first wave of PS4/720 games are going to be 360/PS3 projects ported up.

- They don't want to write themselves into a corner by freeing up space to devs, then not have enough memory to implement a feature the competition debuts, much like Sony and cross-game voice chat.

- With the previous rumors that MS may be dedicating 1GB or more to their OS, Nintendo wouldn't want to be left out in the cold on any features that allows MS to offer.

I'd say if Sony and MS dedicate 2GB or more to games on their next hardware, Nintendo will start opening up the rest of the Wii U's memory. If they decide to ride with 1 or 1.5GB for games, Nintendo will stay where they are or just free up a bit more for games.



BTW, I'm convinced they have a similar strategy regarding the Wii U's price. The Wii U is coming out at $300, so it'll be on par with the desirable versions of the PS3 and 360. If the PS4 and 720 come out at $400 or $500, Nintendo doesn't need to move the Wii U's price.

I agree with your points.
 
All I wanted to do was play Nintendo games in HD. I never meant for it to cause all this madness.


Can't Wii All Just Get Along?
 
I'll be laughing my ass off if Sony or MS expect me to buy their next consoles for $400-$500.

$299 is my limit. $350 if they bundle all sorts of goodies plus a game.
 