WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

Binned mobile part? There's no such thing as a non-mobile Jaguar part, as it's inherently designed for 4-25W TDPs. The A4 series isn't exactly a premium chip either.

Try and find some actual power consumption numbers for that 15W CPU.

I know, it's just funny. Jaguar has two designs, Kabini and Temash. That is how it spans 4-25W TDPs. The part is still binned...

Yeah, I will get right on that... lol
 
I don't think they quite did it this time, but they have been known to do this...
Look at the GameCube compared to the PS2 and Xbox... The bolded pretty much describes that generation from a computational standpoint... other hardware mistakes notwithstanding.

Hardly, the GameCube was more powerful than the PS2 on paper and less powerful than the Xbox on paper. That translated 1:1 in games.

You could not come to a conclusion unless it has some kind of basis in fact. What features are you referring to, and what basis do you have for believing them to be unusable on WiiU given what we know of its performance?

Unusable? I don't think any single feature is unusable. However, given the GPU is pre-GCN and it has an incredibly low GPU FLOP count in comparison to the other next-gen consoles, that alone is enough. You could read some of the links I posted to learn a bit about GPGPU, for example; there's a reason VLIW5 is no longer part of AMD's architecture.

Do you think everything is in place for the Wii U to achieve what the PS4 will achieve? It seems so crazy to consider that given what we know of its processing capabilities and the tech it's based on. And even on the PS4 the use of GPGPU is up in the air, with developers expecting it to only really come into play further down the generation. Wii U is also not based on AMD's APU HSA design.

I don't see what's so surprising about my comments. You need power and design to achieve certain desired results. If all you needed was a 33-watt machine with a VLIW5 GPU outputting 200/300 GFLOPS to achieve what a 150-watt machine with a GCN GPU outputting 1.8 TFLOPS can... do you see where I'm going with this?
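To put rough numbers on that comparison, here's a quick perf-per-watt sketch using the figures quoted above (whole-system wattages against claimed peak GFLOPS, so strictly ballpark):

```python
# Back-of-the-envelope GFLOPS-per-watt from the figures quoted above.
# Wattages are whole-system draws and the FLOP counts are rough claims,
# so the output is ballpark only.

systems = {
    "Wii U (VLIW5, ~33 W)": (33, 300),    # upper end of the "200/300 GFLOPS" claim
    "PS4 (GCN, ~150 W)":    (150, 1840),  # the ~1.8 TFLOPS figure quoted above
}

for name, (watts, gflops) in systems.items():
    print(f"{name}: {gflops} GFLOPS / {watts} W = {gflops / watts:.1f} GFLOPS per watt")
```

Even taking those numbers at face value, the efficiency gap between the two designs is real but far smaller than the raw ~6x FLOPS gap.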

2-3x what? Also, tiny compared to what? WiiU's GPU clearly has significantly more transistors than 360's GPU, for instance. Also, it's incorrect to suggest that newer features (DX10.1/11 etc.) would necessarily require more performance to use. A lot of the features in these new APIs are not just about making something better looking but also about doing something faster and more efficiently.

Really? Why is it that AMD went from VLIW to GCN then? Why do new GPUs keep offering more power more efficiently?

It hardly seems intelligent to ignore such factors. Especially when you are using them as an advantage of the Wii U over the 360...
 
I think everyone has posted their view numerous times; I don't see how me posting mine numerous times is any different apart from the fact that it disagrees with most of you.

Anyway, I don't wish ill will towards any of you; it's just a difference of opinions. Some of your opinions are much more valid than mine because of your tech knowledge, and that is fine.

I love my WiiU; it's everything I thought it would be after we found out the basic specs, and I'm sure there will be many impressive-looking Nintendo games released for it. I just don't think it will ever wow people (especially the tech-minded) more than the average PS360 game.

I really do hope that when it comes time for the next Nintendo console they start from scratch, though, and go with an x86-based system, as it will be the industry standard and will help them get third-party support and port parity.

The stuff the poster (sorry can't recall your name) was saying yesterday about this architecture being the future of Nintendo for the next 10 years made me die a little inside if it turns out to be true :(.

Edit -

Poster's name was z0m3le; how terrible of me to forget, as I've enjoyed reading your posts on here a lot :).

Just catching up on the thread. Glad you enjoy my posts. Wii U's architecture isn't really bad, and the GPU can be upgraded further to bring it to DX11 or 12 or whatever is current at the time while still using the same set-up (VLIW5/4, whatever). PPC seems to be the hang-up here, but I just don't understand that. Developers are very familiar with PPC; the 360 was a PPC(6?)-based chip, and most developers and publishers have already built tools into their engines to quickly port between x86 and PPC.

What Nintendo needs to do to avoid droughts in the next cycle, or at least greatly reduce them, isn't to move to a new architecture so that they have to learn something new just so 3rd parties can largely ignore them again for some other reason (let's face it, the real reason most publishers that have backed off the Wii U have done so is because their games didn't sell on it). What they need to do is use Wii U as a way to update their developers and get them all to learn programmable shaders and how to leverage the hardware they have now.

If they release a handheld with this architecture in some configuration of performance, they will be able to quickly take advantage of the hardware, but more importantly they will be able to have each team support both platforms in the same pipeline. Instead of choosing which platform to devote resources to, their console or their handheld, they could build a game, release it on one platform, and have a second game in the same series ready 6 months to a year afterwards for the other platform, rather than wasting time creating a whole new engine for the other release.

3rd parties would also be able to release their one game (with one set of code but possibly two different levels of assets, as they do for PC) to both platforms at once. Even with the early failures of the 3DS and Wii U, that would be an installed base of nearly 40 million units targeted at once, much, much higher than just hitting Nintendo's home console.

Personally, if they don't do this I'll be disappointed, because Espresso doesn't even matter. The next-gen consoles from Sony and Microsoft are using pretty weak CPUs, so going forward I really don't think CPUs will matter much at all; if this wasn't true, Sony and Microsoft would have opted for A-series AMD processors at the very least.
 
Hardly, the GameCube was more powerful than the PS2 on paper and less powerful than the Xbox on paper. That translated 1:1 in games.



Unusable? I don't think any single feature is unusable. However, given the GPU is pre-GCN and it has an incredibly low GPU FLOP count in comparison to the other next-gen consoles, that alone is enough. You could read some of the links I posted to learn a bit about GPGPU, for example; there's a reason VLIW5 is no longer part of AMD's architecture.

Do you think everything is in place for the Wii U to achieve what the PS4 will achieve? It seems so crazy to consider that given what we know of its processing capabilities and the tech it's based on. And even on the PS4 the use of GPGPU is up in the air, with developers expecting it to only really come into play further down the generation. Wii U is also not based on AMD's APU HSA design.

I don't see what's so surprising about my comments. You need power and design to achieve certain desired results. If all you needed was a 33-watt machine with a VLIW5 GPU outputting 200/300 GFLOPS to achieve what a 150-watt machine with a GCN GPU outputting 1.8 TFLOPS can... do you see where I'm going with this?



Really? Why is it that AMD went from VLIW to GCN then? Why do new GPUs keep offering more power more efficiently?

It hardly seems intelligent to ignore such factors. Especially when you are using them as an advantage of the Wii U over the 360...

We are actually not sure if Wii U has vanilla VLIW5. The GPU's internals look like they were at least partially influenced by AMD's Brazos, which was released in 2011, so it may not be.
 
We are actually not sure if Wii U has vanilla VLIW5. The GPU's internals look like they were at least partially influenced by AMD's Brazos, which was released in 2011, so it may not be.

At best it would be VLIW4 which would still be inefficient for GPGPU when compared to GCN. But then it wouldn't make sense to start off with a VLIW5 design as the base.

Also, Brazos is an APU design, not a GPU design. The GPUs in Brazos are in fact VLIW5 designs. Considering that the likes of Kingdom Hearts III and Final Fantasy XV aren't coming to the Wii U specifically because of a lack of DX11-level support, I would say we're actually dealing with an R700 core base through and through.

More than likely most of the heavy customization has to be directly connected to the main design philosophy behind the system. Low power consumption is clearly a priority here, and you don't get there by beefing up specs all over the place.
 
Unusable? I don't think any single feature is unusable. However, given the GPU is pre-GCN and it has an incredibly low GPU FLOP count in comparison to the other next-gen consoles, that alone is enough. You could read some of the links I posted to learn a bit about GPGPU, for example; there's a reason VLIW5 is no longer part of AMD's architecture.
So you were referring to the usability of GPGPU, I see. You almost fooled me with the 'most features' broad sweep there..

Funny, I'm typing this on a VLIW5-equipped machine that does GPGPU quite adequately. FYI, VLIW (4 *and* 5) was part of AMD's lineup up until quite recently. VLIW4 was indeed intended to improve on the GPGPU compute density of the VLIW architecture (one ALU being 'transcendental' does not help with the PE count), but it was rather short-lived and superseded by GCN. The early (circa R700) bad rep that the AMD architecture got WRT GPGPU had little to do with the architecture being VLIW and everything to do with the inadequate size of the LDS - something that could make or break a GPGPU platform. Come R800 and, eventually, Cayman, VLIW was doing GPGPU quite happily (BTW, check out what bitcoiners think of GCN some day). Until AMD hit a compiler brick wall, or so they say.
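To make the LDS point concrete, here's a toy occupancy sketch; the per-SIMD/CU LDS sizes (16 KB for R700, 32 KB for Evergreen, 64 KB per CU for GCN) are from memory and the 8 KB kernel footprint is an arbitrary example, so it's an illustration of the principle rather than hard data:

```python
# Toy occupancy sketch: how LDS capacity limits how many workgroups can be
# resident on one SIMD/CU at a time. LDS sizes are approximate figures from
# memory; the 8 KB kernel footprint is an arbitrary example.

LDS_BYTES = {
    "R700 (per SIMD)": 16 * 1024,
    "R800/Evergreen (per SIMD)": 32 * 1024,
    "GCN (per CU)": 64 * 1024,
}

KERNEL_LDS_FOOTPRINT = 8 * 1024  # hypothetical workgroup needing 8 KB of LDS

for arch, lds in LDS_BYTES.items():
    resident = lds // KERNEL_LDS_FOOTPRINT
    print(f"{arch}: {lds // 1024} KB LDS -> up to {resident} workgroups resident")
```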

Do you think everything is in place for the Wii U to achieve what the PS4 will achieve? It seems so crazy to consider that given what we know of its processing capabilities and the tech it's based on.
Pardon the recurring pattern, but are you saying that from the position of a seasoned GPU programmer?

And even on the PS4 the use of GPGPU is up in the air, with developers expecting it to only really come into play further down the generation. Wii U is also not based on AMD's APU HSA design.
GPGPU will be a factor as soon as the first gen of titles for the new platforms, which gen started life predominantly as ps360 tech, gets out of the door. FYI, the latest iterations of the desktop 3D Graphics APIs (i.e. DX and OGL) do GPGPU seamlessly, without the need for external APIs (i.e. CUDA, OpenCL). GPGPU is getting tightly integrated into the featureset of *each and every* modern GPU supported by the latest iterations of the 3D graphics APIs, *irrespective* of its performance grade.
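For instance, here's a minimal sketch of GPGPU through the graphics API itself: an OpenGL 4.3 compute shader dispatched via the Python moderngl package, no CUDA/OpenCL anywhere. The buffer size and the doubling kernel are arbitrary, and it assumes a driver exposing GL 4.3+:

```python
# Minimal GPGPU-through-the-graphics-API sketch: an OpenGL 4.3 compute shader
# (no CUDA/OpenCL) that doubles an array of floats. Assumes the 'moderngl'
# package and a driver exposing OpenGL 4.3 or newer.
import struct
import moderngl

COMPUTE_SRC = """
#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data { float values[]; };
void main() {
    uint i = gl_GlobalInvocationID.x;
    values[i] *= 2.0;
}
"""

ctx = moderngl.create_context(standalone=True, require=430)  # headless GL context
prog = ctx.compute_shader(COMPUTE_SRC)

n = 256  # arbitrary example size, a multiple of the 64-wide workgroup
buf = ctx.buffer(struct.pack(f"{n}f", *range(n)))
buf.bind_to_storage_buffer(0)      # expose the buffer as SSBO binding 0
prog.run(group_x=n // 64)          # dispatch n/64 workgroups
ctx.finish()                       # make sure the dispatch has completed

print(struct.unpack(f"{n}f", buf.read())[:4])  # (0.0, 2.0, 4.0, 6.0)
```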

I don't see what's so surprising about my comments. You need power and design to achieve certain desired results. If all you needed was a 33-watt machine with a VLIW5 GPU outputting 200/300 GFLOPS to achieve what a 150-watt machine with a GCN GPU outputting 1.8 TFLOPS can... do you see where I'm going with this?
No, I don't. You tried to discard a feature set based entirely on vague, broadly-generalized performance considerations. That's not exactly how things work in this field.
 
What generation GPU is the Wii U based on again? And what clock speeds did that tech attain in its PC iterations? Or is that still speculation and this is a more customized part?
 
Just catching up on the thread. Glad you enjoy my posts. Wii U's architecture isn't really bad, and the GPU can be upgraded further to bring it to DX11 or 12 or whatever is current at the time while still using the same set-up (VLIW5/4, whatever). PPC seems to be the hang-up here, but I just don't understand that. Developers are very familiar with PPC; the 360 was a PPC(6?)-based chip, and most developers and publishers have already built tools into their engines to quickly port between x86 and PPC.

What Nintendo needs to do to avoid droughts in the next cycle, or at least greatly reduce them, isn't to move to a new architecture so that they have to learn something new just so 3rd parties can largely ignore them again for some other reason (let's face it, the real reason most publishers that have backed off the Wii U have done so is because their games didn't sell on it). What they need to do is use Wii U as a way to update their developers and get them all to learn programmable shaders and how to leverage the hardware they have now.

If they release a handheld with this architecture in some configuration of performance, they will be able to quickly take advantage of the hardware, but more importantly they will be able to have each team support both platforms in the same pipeline. Instead of choosing which platform to devote resources to, their console or their handheld, they could build a game, release it on one platform, and have a second game in the same series ready 6 months to a year afterwards for the other platform, rather than wasting time creating a whole new engine for the other release.

3rd parties would also be able to release their one game (with one set of code but possibly two different levels of assets, as they do for PC) to both platforms at once. Even with the early failures of the 3DS and Wii U, that would be an installed base of nearly 40 million units targeted at once, much, much higher than just hitting Nintendo's home console.

Personally, if they don't do this I'll be disappointed, because Espresso doesn't even matter. The next-gen consoles from Sony and Microsoft are using pretty weak CPUs, so going forward I really don't think CPUs will matter much at all; if this wasn't true, Sony and Microsoft would have opted for A-series AMD processors at the very least.

The thing is, since when has PPC had a serious update?
 
What generation GPU is the Wii U based on again? And what clock speeds did that tech attain in its PC iterations? Or is that still speculation and this is a more customized part?

I recall the initial leaks said R700, which reached a core clock of 850 MHz for the HD 4890. The Wii U wouldn't be clocked anywhere near that though.

EDIT: Turns out we know Latte is clocked at 550 MHz.
 
2-3x what? Also, tiny compared to what? WiiU's GPU clearly has significantly more transistors than 360's GPU, for instance.

Just to add a bit of information: RV710 (80 SPUs) has 242 M transistors and RV730 (320 SPUs) has 514 M. If we believe in the theory that Latte has 160 SPUs and that possible additional functionality for BC etc. doesn't add significantly to that number, it should be somewhere in between.
For comparison: Xenos (360's GPU) has 232 M transistors.
Whether that's a significant leap or not is in the eye of the beholder.

Also, it's incorrect to suggest that newer features (DX10.1/11 etc.) would necessarily require more performance to use. A lot of the features in these new APIs are not just about making something better looking but also about doing something faster and more efficiently.

True. However, in the context of the transistor budget, implementing all DX10.1 features won't come for free. In other words, if the GPU were only DX9 it could probably fit more raw power in the same number of transistors. Of course, not so much more that it'd invalidate the advantages of a more modern feature set, but especially for small low-power GPUs I wouldn't take it out of the equation entirely.
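Just to make the "somewhere in between" estimate explicit, here's the linear interpolation on the RV710/RV730 figures quoted above; it ignores eDRAM, BC logic and any other customization, so it's only a rough bound on the logic portion:

```python
# Rough linear interpolation of transistor count vs. SPU count, using the
# RV710/RV730 figures quoted above. Ignores eDRAM, BC logic and other
# customization, so it only bounds the logic portion.

rv710 = (80, 242e6)    # (SPUs, transistors)
rv730 = (320, 514e6)

def interpolate(spus):
    (x0, y0), (x1, y1) = rv710, rv730
    return y0 + (y1 - y0) * (spus - x0) / (x1 - x0)

for spus in (80, 160, 320):
    print(f"{spus} SPUs -> ~{interpolate(spus) / 1e6:.0f} M transistors")
# Xenos (360's GPU) is ~232 M transistors for comparison.
```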
 
I recall the initial leaks said R700, which reached a core clock of 850 MHz for the HD 4890. The Wii U wouldn't be clocked anywhere near that though.

EDIT: Turns out we know Latte is clocked at 550 MHz.

Would it be crazy to assume that the only reason it's not clocked at 800 MHz and above is to keep power draw lower and have a system with a small fan? Meaning, if given a bigger fan and power draw, the chipset would be fine running at 800 MHz?
 
Would it be crazy to assume that the only reason it's not clocked at 800 MHz and above is to keep power draw lower and have a system with a small fan? Meaning, if given a bigger fan and power draw, the chipset would be fine running at 800 MHz?

That's a very likely scenario imo. A high clock rate might also decrease yields a bit, in that not all produced chips reach the high clock rate without problems or excessive voltages. But on the other hand, we're looking at a very mature manufacturing process here, so I doubt there would be much trouble.
The Wii U is clearly designed to be small and low-power. If that wasn't a concern in Nintendo's eyes, the console could be more powerful without really being more expensive.

edit: 850 MHz might be a bit too much though. Most R700 GPUs were clocked between 600 and 750 MHz.
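For what it's worth, the usual dynamic-power relationship (P roughly proportional to C·V²·f) shows why a clock drop plus the voltage reduction it allows buys so much; the voltages below are made-up illustrative values, not anything measured from Latte:

```python
# Back-of-the-envelope dynamic power scaling: P is roughly proportional to
# C * V^2 * f. The voltages are illustrative guesses, not measured values.

def relative_power(freq_mhz, volts, base_freq_mhz, base_volts):
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

base = (800, 1.10)   # hypothetical higher-clock operating point
low  = (550, 0.95)   # hypothetical lower-clock, lower-voltage point

ratio = relative_power(*low, *base)
print(f"550 MHz @ 0.95 V draws ~{ratio:.2f}x the dynamic power of 800 MHz @ 1.10 V")
# -> roughly half the dynamic power from the clock/voltage drop alone
```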
 
Considering that the likes of Kingdom Hearts III and Final Fantasy XV aren't coming to the Wii U specifically because of a lack of DX11-level support, I would say we're actually dealing with an R700 core base through and through.

It's been stated time after time, in this thread alone that Square Enix did not actually say that about Wii U and DX 11, and it's been shown time after time with quotes and dev logs, etc. that the Wii U does support DX 11 features.
 
The thing is, since when has PPC had a serious update?

PPC? Well, POWER8 comes out soon. Espresso is a pretty big change over the GameCube's and Wii's CPUs; I mean, those CPUs were single core, for one thing. Nintendo could continue to have it developed just like they have done with Espresso. In fact, the weak point of this CPU that clearly puts it behind Jaguar is simply not using SIMD units, which could have been added to the CPU; however, Nintendo likes paired singles, and anyone who understands what a GPGPU is knows that there will be little point in simply pushing flops on a CPU when what you really want is fast efficiency, which is why the 360's CPU actually pushes more flops than the XB1/PS4's CPU iirc (they are pretty close, but both fall far short of Cell).
 
Brazos is VLIW5. The Radeon HD 7340 is branded 7xxx series but it's just another R700 GPU like the Wii U.
Brazos is an R800, more precisely Cedar. It's a successor to R700, but it does have certain architectural advancements. In this regard, Latte is another successor to R700, so you could say Latte and Brazos are cousins, but neither of them is an R700.
 
Just to add a bit of information: RV710 (80 SPUs) has 242 M transistors and RV730 (320 SPUs) has 514 M. If we believe in the theory that Latte has 160 SPUs and that possible additional functionality for BC etc. doesn't add significantly to that number, it should be somewhere in between.
For comparison: Xenos (360's GPU) has 232 M transistors.
Whether that's a significant leap or not is in the eye of the beholder.



True. However, in the context of the transistor budget, implementing all DX10.1 features won't come for free. In other words, if the GPU were only DX9 it could probably fit more raw power in the same number of transistors. Of course, not so much more that it'd invalidate the advantages of a more modern feature set, but especially for small low-power GPUs I wouldn't take it out of the equation entirely.

Here I have already corrected someone about the transistor count of Wii U's Latte, so I'll just quote myself, quoting the original estimate of what Latte's transistor count is.

4.3 billion transistors for the HD 7900 series (32 compute units or 2048 ALUs), the HD 7800 series is 2.8 billion transistors (20 compute units or 1280 ALUs), and the HD 7700 series is 1.5 billion transistors (10 compute units or 640 ALUs).

Going by those families, XB1 has 12 compute units but should otherwise fall into the HD 7700 series with 768 ALUs, making the transistor count close to ~1.7 billion. PS4 has 18 compute units and should fall into the HD 7800 series (between the 7850 and 7870), making the count ~2.5 billion transistors.

Microsoft's 5 billion transistor claim is about all the silicon, and in case anyone was wondering, Wii U's GPU should be around ~750 million transistors if Thraktor's measurements from earlier in the year were correct, not including the eDRAM, which takes up around 220 million transistors: http://www.neogaf.com/forum/showpost.php?p=45095173&postcount=836

Altogether, Wii U's MCM is around 1.2 billion transistors.

The Xenos chip is often said to have over 300 million transistors, but that includes the 10MB eDRAM; without it, you are right about the 232 million transistors for Xenos' GPU. This is one key point that never made sense for the 160 ALU speculation for Latte: the ARM core is somewhere around 40 million transistors, but Wii U's GPU still takes up ~3x the transistor count of Xenos. While a lot of that difference can be explained by the higher feature set and GPGPU, the ALUs shouldn't be as big as VLIW5 R800's, and they certainly won't come close to GCN's, which are the fattest ALUs AMD has ever made, mostly to speed up GPGPU calculations further beyond VLIW4. To be perfectly honest, I feel we have tied Wii U's GPU down to its launch performance, and if you look at the Beyond3D thread where the 160 ALUs came from, you can see that it was based on Need For Speed U's performance vs 160 ALU PC parts, which were clocked at 675MHz and performed about as well. The really ridiculous thing is how small a budget the developers had for this game; it was EA's closing-door game on Wii U and was expected to sell very poorly, so they didn't have the resources to push the console and really just took advantage of the updated features and larger RAM cache. They didn't have to push extra performance from anywhere.

I don't really care what Wii U's ALU count is; I only originally cared because I wondered how it stacked up to Xenos. But Xenos was so poor in efficiency that once I dug into its performance and found it was almost half as powerful as I thought it was, Wii U's bare-minimum spec would clearly be able to win in performance, so I found little reason to care beyond that. All the people throwing around 160 ALUs only do so because the majority of people following these threads have taken that number as fact, when in reality it is a poor guess based on the performance of one game with a small budget that wasn't trying to push the console in any real way. I still think 320 ALUs makes more sense, and one reason Fourth Storm decided it didn't is because he heard from someone (a rumor) that Latte was actually a 45nm process chip. I find that very odd, since no AMD GPU has ever been 45nm afaik, and even the R700 line was produced at 40nm and was found to be significantly more efficient. Going with 45nm goes against Nintendo's design here and makes very little sense when the MCM was needed to hit more exact performance targets than anything Nintendo has made in the past.

TL;DR: I think Wii U's GPU could end up being 352 GFLOPS and that no developer has actually used its performance well. It would make sense when you consider the performance advantage we see in titles like Bayonetta 2, which started its life as a 360/PS3 game. Every argument I've heard basically says "it doesn't have to be that to achieve such and such", nothing about how this is impossible when we know that it simply isn't. As for those that say Wii U's power consumption is too low for that, you only need to look at the MCM and the low clock, which is known to have drastic savings on power consumption, combined with similar AMD GPUs that use 25-watt TDPs while also adding 1GB of higher-clocked GDDR5. It's just silly to say this is just a binned part when these chips are actually mass-produced and used in casino machines in the millions.
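For reference, the FLOP figures being thrown around here follow directly from ALU count x 2 ops (multiply-add) x clock; the Latte rows use the two ALU counts debated in this thread at the known 550 MHz clock, and the others are the commonly cited specs:

```python
# Peak single-precision GFLOPS = ALUs * 2 ops per clock (multiply-add) * GHz.
# Latte rows use the two ALU counts debated in this thread at the known
# 550 MHz clock; the other rows are the commonly cited specs.

def gflops(alus, ghz):
    return alus * 2 * ghz

gpus = {
    "Latte @ 160 ALUs, 550 MHz": gflops(160, 0.55),   # 176
    "Latte @ 320 ALUs, 550 MHz": gflops(320, 0.55),   # 352
    "Xenos (240 ALUs, 500 MHz)": gflops(240, 0.50),   # 240
    "XB1 (768 ALUs, 853 MHz)":   gflops(768, 0.853),  # ~1310
    "PS4 (1152 ALUs, 800 MHz)":  gflops(1152, 0.80),  # ~1843
}

for name, value in gpus.items():
    print(f"{name}: ~{value:.0f} GFLOPS")
```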
 
Not at all. The PS2 had more than twice the power consumption of the GameCube, and the Xbox had more than three times the power consumption.

Do you have a source for that?

I can't seem to find decent-looking comparisons on the net anymore. The best I found was this, which states 23 W for the GameCube and 32 W for the PS2, but it's possible that they used a "fat" PS2 revision which already had a lower power consumption than the first revisions. I don't remember reading anywhere that the PS2 was ever > 46 W though.

Here I have already corrected someone about the transistor count of Wii U's Latte, so I'll just quote myself, quoting the original estimate of what Latte's transistor count is.

Then maybe I'm off with my estimation. Either that, or Latte isn't 40 nm as suggested in Thrakier's post but indeed 55 nm. In that case it'd be around 400 M transistors according to his calculations.
Or has the 55 nm theory already been debunked for certain? In that case I'm sorry to have brought it up again.
 
Something interesting from an interview with Teku Studios:

“We will ADAPT our DirectX11 features to Wii U, not that it supports them natively. However, we are very happy with Nintendo and its console, and we think that it well deserves that extra effort.”

Source

Did you read that Square Enix?
 
Do you have a source for that?

I can't seem to find decent-looking comparisons on the net anymore. The best I found was this, which states 23 W for the GameCube and 32 W for the PS2, but it's possible that they used a "fat" PS2 revision which already had a lower power consumption than the first revisions. I don't remember reading anywhere that the PS2 was ever > 46 W though.



Then maybe I'm off with my estimation. Either that, or Latte isn't 40 nm as suggested in Thrakier's post but indeed 55 nm. In that case it'd be around 400 M transistors according to his calculations.
Or has the 55 nm theory already been debunked for certain? In that case I'm sorry to have brought it up again.

55nm is debunked because the 32MB eDRAM couldn't fit if it was 55nm; at the time, the 32MB wasn't hard-confirmed.
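Roughly, the area argument works out like the sketch below; the ~1 MB/mm² eDRAM macro density at 40nm is a ballpark assumption and the scaling to 55nm is ideal, with the ~146.5 mm² figure being the commonly quoted Chipworks die size:

```python
# Rough area check for 32 MB of eDRAM at 40nm vs 55nm.
# ASSUMPTIONS: ~1 MB/mm^2 macro density at 40nm (ballpark guess), ideal
# (node ratio)^2 area scaling, and the commonly quoted ~146.5 mm^2 Latte die.

DIE_AREA_MM2 = 146.5
EDRAM_MB = 32
DENSITY_40NM_MB_PER_MM2 = 1.0   # assumed ballpark, not a measured figure

area_40 = EDRAM_MB / DENSITY_40NM_MB_PER_MM2
area_55 = area_40 * (55 / 40) ** 2   # ideal scaling to the larger node

for node, area in (("40nm", area_40), ("55nm", area_55)):
    print(f"{node}: ~{area:.0f} mm^2 for 32 MB "
          f"({area / DIE_AREA_MM2:.0%} of a ~{DIE_AREA_MM2} mm^2 die)")
```

Under those assumptions the eDRAM alone would roughly double its share of the die at 55nm, which is the gist of why 55nm across the whole chip doesn't add up.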
 
Something interesting from an interview with Teku Studios:



Source

Did you read that Square Enix?

EDIT: You can write DX11 shaders to run (not as well) on DX10 cards.

It's not using the same DX11 shaders; it's using shaders rewritten from DX11 for WiiU, the same as many other multiplatform games like BF3 or Crysis 3.

Seems to me like they use the DX11 renderer in Unity, which then compiles it to DX10.

From Unity site:
Enabling DirectX 11
To enable DirectX 11 for your game builds and the editor, set "Use DX11" option in Player Settings. Unity editor needs to be restarted for this to take effect.

Note that DX11 requires Windows Vista or later and at least a DX10-level GPU (preferably DX11-level). Unity editor window title has "<DX11>" at the end when it is actually running in DX11 mode.

Also, the game is "2D." It uses flat layered polygons and they slap the textures on the polygons. It creates a 2D look while allowing an easy parallax effect. Not exactly impressive considering there are only a couple of dynamic lights on screen at once.
 
So you were referring to the usability of GPGPU, I see. You almost fooled me with the 'most features' broad sweep there..

I specifically mentioned that those features would not be feasible in tandem.

Funny, I'm typing this on a VLIW5-equipped machine that does GPGPU quite adequately.
FYI, VLIW (4 *and* 5) was part of AMD's lineup up until quite recently. VLIW4 was indeed intended to improve on the GPGPU compute density of the VLIW architecture (one ALU being 'transcendental' does not help with the PE count), but it was rather short-lived and superseded by GCN. The early (circa R700) bad rep that the AMD architecture got WRT GPGPU had little to do with the architecture being VLIW and everything to do with the inadequate size of the LDS - something that could make or break a GPGPU platform. Come R800 and, eventually, Cayman, VLIW was doing GPGPU quite happily (BTW, check out what bitcoiners think of GCN some day). Until AMD hit a compiler brick wall, or so they say.

The scalar ALU works in parallel with the 64-element vector ALU. It is possible to make a loop that wastes only 1 cycle on the loop management code. On VLIW, the loop overhead can even be 10-40 cycles long.
AMD suggests that Radeon HD 7970 can achieve up to a 7.5x peak theoretical compute performance improvement over the Radeon HD 6970 due to higher utilization. - Tomshardware
The fundamental issue moving forward is that VLIW designs are great for graphics; they are not so great for computing. - Anandtech
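As a toy illustration of what that overhead gap means for utilization, here's a quick calculation using the cycle counts mentioned above; the 10 cycles of useful work per iteration is an arbitrary assumption:

```python
# Toy utilization estimate: fraction of cycles doing useful work per loop
# iteration for different loop-management overheads. The overhead values are
# the ones mentioned above; the 10 cycles of useful work is an arbitrary pick.

USEFUL_CYCLES = 10

for label, overhead in [("GCN-style scalar loop control", 1),
                        ("VLIW, light loop overhead", 10),
                        ("VLIW, heavy loop overhead", 40)]:
    utilization = USEFUL_CYCLES / (USEFUL_CYCLES + overhead)
    print(f"{label}: {utilization:.0%} of cycles spent on useful work")
```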

You know what's funny? I'm typing this from a VLIW4 machine, and it's shit in GPGPU scenarios, as seen by the horrid horrid performance in Tomb Raider using TressFX. And that's just hair on one character.

Essentially you have nothing to go on to even suggest that an architecture that was introduced in 2006 is efficiently designed for compute.

Pardon the recurring pattern, but are you saying that from the position of a seasoned GPU programmer?

I do not need to be a seasoned physicist to understand the concept of physics. You keep baiting, and either you want me to ask you the same, or you seem to believe that you need a PhD to be able to use your brain.

GPGPU will be a factor as soon as the first gen of titles for the new platforms, which gen started life predominantly as ps360 tech, gets out of the door. FYI, the latest iterations of the desktop 3D Graphics APIs (i.e. DX and OGL) do GPGPU seamlessly, without the need for external APIs (i.e. CUDA, OpenCL). GPGPU is getting tightly integrated into the featureset of *each and every* modern GPU supported by the latest iterations of the 3D graphics APIs, *irrespective* of its performance grade.

This does not mean at all that developers are using it as seen by the games released thus far, and how resource intensive it is when actually applied.

No, I don't. You tried to discard a feature set based entirely on vague, broadly-generalized performance considerations. That's not exactly how things work in this field.

Vague and broad? Completely wrong. The fact that I can turn on tessellation effects and GPGPU physics on my VLIW4 GPU doesn't mean they will run well, nor that the architecture itself was designed specifically with those features in mind; it therefore might not be efficient at them at all.

Brazos is an R800, more precisely Cedar. It's a successor to R700, but it does have certain architectural advancements. In this regard, Latte is another successor to R700, so you could say Latte and Brazos are cousins, but neither of them is an R700.

Wrong, Brazos is an APU. In 2011 it came with a Radeon HD 6310 or Radeon HD 6250 (Wrestler - VLIW5). In either case these are low-power GPUs with very low performance. It's actually based on the same graphics architecture as the Mobility HD 4300 series, which is R700.
 
What generation GPU is the Wii U based on again? And what clock speeds did that tech attain in its PC iterations? Or is that still speculation and this is a more customized part?

The GPU is a custom design that uses parts from different architectures, from all that can be told.

The only reason people even cling to the R700 thing is because the Wii U was using an underclocked stock 4850/4870 before launch (clocked at 400MHz, I believe). It clearly isn't now, so even the R700 thing should not be considered anywhere close to fact. A lot of people just like to cling to it because it's older and sounds lower-end, just like the smaller SP hypothesis.

As for the 4850 (assuming it is still that tech primarily), look at the Froblins demo.
http://www.youtube.com/watch?v=EFWOab9MO4Q

Of course, thanks to various devs and more recently the Project C.A.R.S. changelog, we know that it has DX11 capability, so there is no way that it is an R700, as that wasn't even possible in that series. It may still have some components from them, but I don't see them on the die.

Then of course, there is the fact that you can't get performance out of a stock PC GPU like you can from a console. Just try playing PC ports of 360/PS3 games on an ATI X1800 or an underclocked Nvidia 7900. See if you can match the consoles' performance. The consoles always get better performance than their PC GPU counterparts because there is no overhead like there is with a PC.
 
Essentially you have nothing to go on to even suggest that an architecture that was introduced in 2006 is efficiently designed for compute.
VLIW5 wasn't introduced in 2006; VLIW4 came in 2010, iirc. Wii U's GPU is obviously custom, and thus we have no real idea about its GPGPU performance; however, it is obviously going to outperform the CPU from the 360 and most likely Cell as well (fire shield up). VLIW obviously won't be as good at threading as an architecture designed for threading like GCN; VLIW uses more basic instructions and thus needs to be treated differently.

Wrong, Brazos is an APU. In 2011 it came with a Radeon HD 6310 or Radeon HD 6250 (Wrestler - VLIW5). In either case these are low-power GPUs with very low performance. It's actually based on the same graphics architecture as the Mobility HD 4300 series, which is R700.
It's actually still R800; AMD's R900 series is VLIW4, not VLIW5. Brazos is a bit of a hybrid of ideas, but its roots are decidedly still based on R800's design, similar to how the HD 6770 is a slightly modified HD 5770.
 

Nice find. This should go in the CPU thread, though, as the GPU has nothing to do with the PPC other than the MCM connectivity, which I would like to see explored more. The CPU thread does not get enough attention.

Seems graphics are all that matters to most people. I guess some things will never change.
 
55nm is debunked because the 32MB eDRAM couldn't fit if it was 55nm; at the time, the 32MB wasn't hard-confirmed.

Everybody seemed to acknowledge that eDRAM at 55nm was impossible given what we see in the die shot. But iirc a couple of people tried to claim it could be 40nm eDRAM mixed with a 55nm process on the rest of the die. I don't remember where that discussion went. Is that even possible?
 
VLIW5 wasn't introduced in 2006; VLIW4 came in 2010, iirc. Wii U's GPU is obviously custom, and thus we have no real idea about its GPGPU performance; however, it is obviously going to outperform the CPU from the 360 and most likely Cell as well (fire shield up). VLIW obviously won't be as good at threading as an architecture designed for threading like GCN; VLIW uses more basic instructions and thus needs to be treated differently.

I believe it was. VLIW4 came after VLIW5 (introduced with the Radeon HD 2000 series). It's one of the reasons why I say, for example, that just because it has hardware tessellation doesn't mean it will be efficient in-game.

Every GPU in these consoles is custom. It doesn't mean what you imply, that they forego the core architecture and build something completely different. The Wii U having a more modern architecture and more power than the 360 is common knowledge. I wouldn't need to debate that.

It's actually still R800; AMD's R900 series is VLIW4, not VLIW5. Brazos is a bit of a hybrid of ideas, but its roots are decidedly still based on R800's design, similar to how the HD 6770 is a slightly modified HD 5770.

R800 designs are DX11 from the get-go, so while you might be right, I can't help but be confused when devs say it doesn't support DX11 features natively:

“We will ADAPT our DirectX11 features to Wii U, not that it supports them natively. However, we are very happy with Nintendo and its console, and we think that it well deserves that extra effort.” There are clearly some pieces missing from the puzzle still.

It's been stated time after time, in this thread alone that Square Enix did not actually say that about Wii U and DX 11, and it's been shown time after time with quotes and dev logs, etc. that the Wii U does support DX 11 features.

I wasn't aware of that, do you have a link?
 
Another dev states that they are using DX11 for a Wii U game
http://playeressence.com/candle-ind...ect-x11-and-is-confirmed-for-wii-u/#idc-cover

Now let the spinning of facts and twisting of statements begin... (cues anti-N squad)

From the comments section:

Hi guys! Teku Studios here.

Just to clarify, we will ADAPT our DirectX11 features to Wii U, not that it supports them natively. However, we are very happy with Nintendo and its console, and we think that it well deserves that extra effort :)

Thanks for your comments!


So no. The Wii U does not support DirectX 11 natively.
 
From the comments section:

Hi guys! Teku Studios here.

Just to clarify, we will ADAPT our DirectX11 features to Wii U, not that it supports them natively. However, we are very happy with Nintendo and its console, and we think that it well deserves that extra effort :)

Thanks for your comments!


So no. The Wii U does not support DirectX 11 natively.

Dat spin!!!

jk

Another dev states that they are using DX11 for a Wii U game
http://playeressence.com/candle-ind...ect-x11-and-is-confirmed-for-wii-u/#idc-cover

Now let the spinning of facts and twisting of statements begin... (cues anti-N squad)
Already posted several times, and sorry, there is no DX11 support.
 
Dat spin!!!

jk


Already posted several times, and sorry, there is no DX11 support.

It already uses DX11 features in Project C.A.R.S., and a few other devs have stated that the GPU has them.

Please, stop. Your assumptions are constantly contradicting known facts.
 
“We will ADAPT our DirectX11 features to Wii U, not that it supports them natively. However, we are very happy with Nintendo and its console, and we think that it well deserves that extra effort.”

What does adapt mean? Are they shaping their game to fit Wii U's hardware? I remember Crytek said they were bringing DX11 stuff to the PS3/360 even though they don't support them.

Edit: Aw, I was beaten.
 
What does adapt mean? Are they shaping their game to fit Wii U's hardware? I remember Crytek said they were bringing DX11 stuff to the PS3/360 even though they don't support them.

Edit: Aw, I was beaten.

No, they didn't say that. They said they would have DirectX 11-"like" graphics effects in Crysis 3. They are quite specific in that link you just posted. You removed their hyphen.

Maybe it doesn't support them natively because it doesn't use DirectX...

That is correct, but as I said above, its going to be twisted/spinned.

The Wii U doesn't "support" Direct anything, but can produce the same effects as they are being used in known games.
 
No, they didn't say that. They said they would have DirectX 11-"like" graphics effects in Crysis 3. They are quite specific in that link you just posted. You removed their hyphen.
Are you really going to argue semantics? I don't think there's a difference.
 
What "known facts"?

Like the one you just trimmed out of the post you are quoting.

Are you really going to argue semantics? I don't think there's a difference.

That is not semantics at all. You are making the statement say something completely different from what they intended.

It would be like me saying I'm going to tweak and repaint my car to make it look "like" a Ferrari and you saying that I'm making a or have a Ferrari.

Saying something is "like" something else and saying it "is" something are making 2 entirely different statements.
 
Just to add a bit of information: RV710 (80 SPUs) has 242 M transistors and RV730 (320 SPUs) has 514 M. If we believe in the theory that Latte has 160 SPUs and that possible additional functionality for BC etc. doesn't add significantly to that number, it should be somewhere in between.
For comparison: Xenos (360's GPU) has 232 M transistors.
Whether that's a significant leap or not is in the eye of the beholder.

Bit of a vague way to estimate the number of transistors in Latte though. Better just to look at the size of the chip and the manufacturing process and compare that to other AMD parts on a similar process. If you do that, it's clear the GPU is well above 500M transistors, more like twice that amount. Obviously embedded memory will take up a few hundred million transistors, but the GPU logic should still be well over 500M.
 
It already uses DX11 features in Project C.A.R.S., and a few other devs have stated that the GPU has them.

Please, stop. Your assumptions are constantly contradicting known facts.

PS3 and 360 were said to run "DX11" shaders when they sure as hell can't, but you can get similar results by messing with things and approaching the shaders differently.
 
Dat spin!!!

jk


Already posted several times, and sorry, there is no DX11 support.

Nope, not on PS4 either, though. It's a Microsoft proprietary API...
It would have to be adapted to its OpenGL/(whatever Nintendo's API is called) equivalent.
I'm still not saying that it is using DX11-equivalent features, but that quote means nothing. For all we know they were avoiding the litigation involved with claiming to use one company's API on another's hardware...

I'm leaning more toward it being adapted to DX10.1 equivalent features, though...
 
Why don't Nintendo just release the specs & get someone to make some stunning effects/power/graphics demo to show what the WiiU can do, something that really pushes it, because the 2011 E3 WiiU graphics demo looks very dated already!

http://www.youtube.com/watch?v=Shch7LNkVXw

Is the problem with the WiiU hardware not that it's not DX11, but that it's not x86?
 
Like the one you just trimmed out of the post you are quoting.



That is not semantics at all. You are making the statement say something completely different from what they intended.

It would be like me saying I'm going to tweak and repaint my car to make it look "like" a Ferrari and you saying that I'm making a or have a Ferrari.

Saying something is "like" something else and saying it "is" something are making 2 entirely different statements.

Oh, you mean the fact that Wii U does not support DirectX 11.
 
Nope, not on PS4 either, though. It's a Microsoft proprietary API...
It would have to be adapted to its OpenGL/(whatever Nintendo's API is called) equivalent.
I'm still not saying that it is using DX11-equivalent features, but that quote means nothing. For all we know they were avoiding the litigation involved with claiming to use one company's API on another's hardware...

The Wii U API is called GX2. I think in previous pages of this thread, it was said to be similar to OpenGL 3.3 with tessellation support and perhaps some newer features added on. Honestly, all the dick swinging about DX10- or DX11-equivalent features probably matters little to what you'll end up seeing on screen. You can fake just about anything you want with pre-baked effects and they still look pretty good. You would need to really pick nits to even care. The PS3 and Xbox 360 ended up supporting much more modern effects in this manner than the hardware would otherwise indicate.
 