WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

Power consumption is also something that points at 160 ALUs. This would match the sub-15 watts for the GPU.

We don't have anything at all that says they moved from the R700 base. Saying they moved to GCN or whatever just muddies the water.

While it's unlikely to be much over that, I don't think the GPU is sub-15W.
 
But they do have 8 cores, though? Aren't they basically netbook cores?

Jaguar cores are netbook cores (or super-low-power cores), and they have longer pipelines than the Wii U's CPU, likely 2 or 3 times as long, so the Wii U CPU's clock speed could actually produce better per-core performance for general-purpose tasks. They do not have as drastic an advantage in FLOP performance either, which all 3 designs will mostly push onto their GPUs anyway. Having the extra cores will help keep frame rates high, but very few games are CPU-limited, and fewer are expected to come into existence over the course of the next 5 years.
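
To put a rough shape on that per-core argument: effective general-purpose throughput is roughly clock x IPC, so a shorter pipeline and out-of-order execution can offset a clock deficit. A minimal sketch, where every number (including the Jaguar clock and both IPC values) is a made-up placeholder just to show the shape of the claim:

```python
# Toy model: per-core general-purpose throughput ~ clock * IPC.
# Every number below is a placeholder assumption, not a real spec.
def per_core_perf(clock_ghz, ipc):
    return clock_ghz * ipc

espresso = per_core_perf(1.24, 1.5)  # short pipeline, OoO -> assume higher IPC
jaguar   = per_core_perf(1.60, 1.0)  # assumed clock, baseline IPC

print(round(espresso / jaguar, 2))   # ~1.16 with these placeholder numbers
```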

Power consumption is also something that points at 160 ALUs. This would match the sub-15 watts for the GPU.

We don't have anything at all that says they moved from the R700 base. Saying they moved to GCN or whatever just muddies the water.

The SPUs are far too big to house only 20 ALUs; if that were the case, they wouldn't be so misshapen. They are obviously not just R700 cores if they are in fact only 20 ALUs per unit. As we have both thought, it is likely they moved to 32 ALUs and used a more custom design; basically, VLIW5 isn't a good candidate for TEV emulation.

Also, 160 ALUs @ 550MHz on a low-power 40nm process would consume drastically less power than that, unless it were a high-power design cut down, which isn't what we are seeing at all. (You are talking about ~10 watts' worth of silicon, by the way, which is far too little for the GPU.)
 
Power consumption is also something that points at 160 ALUs. This would match the sub-15 watts for the GPU.

We don't have anything at all that says they moved from the R700 base. Saying they moved to GCN or whatever just muddies the water.

We don't have proof it's R700 based aside from rumors based on early development kits. It looks almost nothing like an R700.
 
But they do have 8 cores, though? Aren't they basically netbook cores?

Now you are, like most others, factoring in other details, which you didn't do before in the case of the Wii U CPU. Just the same, there is far more to it than clock speed.

Generally, the PowerPC processors get more performance per cycle than any other CPU out there.

You can't compare this CPU clock per clock because the architecture is so different. Marcan, the person who actually uncovered the clocks, stated this as well, but people just omitted the rest of his comments.

The actual real world performance of the Wii U CPU is around the same as the 360 give or take.
 
The two don't cancel out of what? In power they could have been seen as the same, like the 360 and PS3, yet the GC got much less notable third-party support even though it was more capable than the PS2. This is what we are debating, no?

You need to stop acting like Gamecube was the only powerful console. Again, I said developers supported consoles with power. Did developers not support PS2? Did consoles weaker than PS2 get that support? No. There's nothing more to be said from that.

Gamecube got less. So what? There are also games the 360 didn't get that the PS3 has; that doesn't mean Microsoft abandoning specs will make them more likely to get games.



Mihael Mello Keehl said:
That a powerful console will have third-party support regardless, that's a fact, right? I'm not sure what else you had to say that even added to the discussion. And yes, Nintendo made a console underpowered compared to its next-gen rivals... Who on GAF does not know this? Why are you telling me this as if I don't know?
I'd say Krizz.
krizzx said:
Ah, then you are another who is under the misconception that Nintendo isn't getting all of the big third-party titles because of lower power, but that was always a lie.

He ponies up N64/GC as proof but then I said
JordanN said:
Nintendo doing other alienating stuff like picking cartridges over CD's doesn't change it.
Keyword: Alienating. Those consoles didn't miss out on support because they weren't powerful enough. All future Nintendo consoles did however.

The Wii U is basically in between graphical generations: its raw power is not too far from current-gen, but it is closer to the new consoles in GPU features and architecture (which are generally a lot closer to the previous gen than has traditionally been the case). This opens up some interesting options for developers... if they care enough, but the system's biggest issue is not its power at this time. We will see if Nintendo turns that around.
The Wii U being closer to 360/PS3 in power is what jeopardizes all this. Only if Nintendo had made the console closer to PS4/720 in power would we really see its architecture being used instead of what's likely to be 360/PS3 ports.

M°°nblade said:
Well, then that shouldn't be the center of the debate because it's not a fact.

Technical hurdles don't rule out the presence of third-party games. There are also other aspects (e.g. financial, human resources) to be considered for that matter. But they do affect them. What sounds plausible is that the Gamecube hypothetically would have received fewer third-party games if the hardware had been inferior to the PS2, the lead platform for which most games in that era were developed.
I was never saying it was the only reason. What you said in the second-to-last part is what I'm referring to.

If Gamecube was a NES, developers wouldn't give a damn.
 
You need to stop acting like Gamecube was the only powerful console. Again, I said developers supported consoles with power. Did developers not support PS2? Did consoles weaker than PS2 get that support? No. There's nothing more to be said from that.

Gamecube got less. So what? There are also games the 360 didn't get that the PS3 has; that doesn't mean Microsoft abandoning specs will make them more likely to get games.




I'd say Krizz.


He ponies up N64/GC as proof but then I said

Keyword: Alienating. Those consoles didn't miss out on support because they weren't powerful enough. All future Nintendo consoles did however.

What you are saying is basically that power wouldn't lead the console to any 3rd-party support: an arbitrary amount of power is required to receive support, but Nintendo wouldn't get it anyway because other platforms that 3rd parties like more will exist. Power plays no real role in support, or the PS2 would have been dropped for the easier-to-develop, higher-powered Xbox.

Gamecube didn't receive support, so Nintendo can't rely on 3rd parties to give them games if they burn more money on hardware. That is another answer for Nintendo, though, and based on the Wii U's design I think that is what they are going after.

Also, PCs have the same architecture as the PS4/XB3 and a very similar one to the Wii U; in fact, PCs will actually have a more balanced architecture than all 3 consoles thanks to a much stronger CPU in general. But the point is that there are PCs (laptops) whose hardware will run these next-gen titles while actually being weaker than the Wii U's, so the reality is that performance no longer matters to anyone but those who care about numbers. All the game designs and games announced can be created on the Wii U without issue. And again, you don't even understand the debate you are having here, or you would be better able to defend your stance that the Wii U's lack of hardware is causing it to miss out on current-gen titles.
 
Power consumption is also something that points at 160 ALUs. This would match the sub-15 watts for the GPU.

We don't have anything at all that says they moved from the R700 base. Saying they moved to GCN or whatever just muddies the water.

Power consumption is another strong indication, yes. As is function's observation on Beyond3D that we would at least be seeing 720p resolution as standard if Latte had a shader advantage over Xenos/RSX. Yet games like Sonic Racers, BLOPS2, and Tekken sacrifice resolution just like the other console versions. On a 320-shader machine, that would not be necessary, as resolution is pretty much entirely dependent on GPU strength.
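
To make the resolution argument concrete: if a game is pixel-shader bound, the shading throughput it needs scales roughly with pixel count. A back-of-envelope sketch, where the sub-HD resolution is just an illustrative pick and Xenos' commonly cited 240 ALUs is the reference point:

```python
# Pixel-shader-bound work scales roughly with pixel count (illustrative figures).
sub_hd = 1024 * 600   # an illustrative last-gen sub-HD render target
hd_720 = 1280 * 720
print(round(hd_720 / sub_hd, 2))  # ~1.5x the pixels of the sub-HD example

# Raw ALU-count ratios being argued in the thread, vs Xenos' ~240 ALUs
# (ignoring clocks and per-ALU efficiency):
for alus in (160, 240, 320):
    print(alus, round(alus / 240, 2))
```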

We don't have proof it's R700 based aside from rumors based on early development kits. It looks almost nothing like an R700.

It just wouldn't make sense for Nintendo to document the chip as being based on R700 in the leaked feature sheet (which has been confirmed by insiders) and then change it to something else entirely. Going from VLIW5 to VLIW4 or GCN would require devs to do major code revisions in order to optimize for those architectures.

The shader blocks themselves also look pretty similar to those of RV770, even though the layout of the chip as a whole is pretty different. Same goes for the T blocks, which I identify as TMUs. They look a lot like those on the RV770.
 
Jaguar cores are netbook cores (or super-low-power cores), and they have longer pipelines than the Wii U's CPU, likely 2 or 3 times as long, so the Wii U CPU's clock speed could actually produce better per-core performance for general-purpose tasks. They do not have as drastic an advantage in FLOP performance either, which all 3 designs will mostly push onto their GPUs anyway. Having the extra cores will help keep frame rates high, but very few games are CPU-limited, and fewer are expected to come into existence over the course of the next 5 years.



The SPUs are far too big to house only 20 ALUs; if that were the case, they wouldn't be so misshapen. They are obviously not just R700 cores if they are in fact only 20 ALUs per unit. As we have both thought, it is likely they moved to 32 ALUs and used a more custom design; basically, VLIW5 isn't a good candidate for TEV emulation.

Also, 160 ALUs @ 550MHz on a low-power 40nm process would consume drastically less power than that, unless it were a high-power design cut down, which isn't what we are seeing at all. (You are talking about ~10 watts' worth of silicon, by the way, which is far too little for the GPU.)
Now redo the math at 55nm. Everything on the GPU falls in line with what we are seeing...
 
What you are saying is basically that power wouldn't lead the console to any 3rd-party support: an arbitrary amount of power is required to receive support, but Nintendo wouldn't get it anyway because other platforms that 3rd parties like more will exist. Power plays no real role in support, or the PS2 would have been dropped for the easier-to-develop, higher-powered Xbox.

Gamecube didn't receive support, so Nintendo can't rely on 3rd parties to give them games if they burn more money on hardware. That is another answer for Nintendo, though, and based on the Wii U's design I think that is what they are going after.
Sigh, you people still don't get it.

PS2 set a benchmark. It was the move from one powerful console generation (PS1/N64/DC) to the next. I'm sure Gamecube and Xbox had their own benchmarks too. A game like Doom 3 was probably never going to come to PS2/Gamecube because of its shader requirements. What would you expect id Software to do? Squeeze blood from a stone?

We also saw Wii miss out on PS3/360 games for several years so I'm not sure why it's turned to denial at this point.

z0m3le said:
you would be more able to defend your stance that Wii U's lack of hardware is causing it to miss out on current gen titles.
What? I think you need to start from the beginning because I did not say Wii U is weaker than current gen.
 
Power consumption is another strong indication, yes. As is function's observation on Beyond3D that we would at least be seeing 720p resolution as standard if Latte had a shader advantage over Xenos/RSX. Yet games like Sonic Racers, BLOPS2, and Tekken sacrifice resolution just like the other console versions. On a 320-shader machine, that would not be necessary, as resolution is pretty much entirely dependent on GPU strength.



It just wouldn't make sense for Nintendo to document the chip as being based on R700 in the leaked feature sheet (which has been confirmed by insiders) and then change it to something else entirely. Going from VLIW5 to VLIW4 or GCN would require devs to do major code revisions in order to optimize for those architectures.

The shader blocks themselves also look pretty similar to those of RV770, even though the layout of the chip as a whole is pretty different. Same goes for the T blocks, which I identify as TMUs. They look a lot like those on the RV770.

Your assumptions here are a bit off. The wattage of 160 ALUs @ 550 MHz on 40nm, designed in a bubble, would be ~10 watts; not to mention it is on an MCM, and a low-power 40nm process is extremely likely.
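
A minimal back-of-envelope for that wattage figure, scaling dynamic power from a reference part roughly as P ~ ALUs * frequency * V^2; every reference value below is a placeholder assumption, not a measured spec:

```python
# Rough dynamic-power scaling on the same process: P ~ ALUs * f * V^2.
# All reference values are placeholder assumptions for illustration only.
def scale_power(ref_watts, ref_alus, ref_mhz, ref_v, alus, mhz, v):
    return ref_watts * (alus / ref_alus) * (mhz / ref_mhz) * (v / ref_v) ** 2

# Hypothetical 40nm reference part: 320 ALUs @ 650 MHz, 1.1 V, ~25 W core power.
latte_guess = scale_power(25.0, 320, 650, 1.1, alus=160, mhz=550, v=1.0)
print(round(latte_guess, 1))  # ~8.7 W -- in the ~10 W ballpark argued above
```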

Another problem is the document: final hardware didn't exist in time for launch development, so in order to make the launch games, stand-in hardware needed to be used. The Wii U's dev kits from early on were R700, which made sense to continue to code around; these games were "designed in a bubble," meaning the code they wrote had to work on the dev kits that existed at the time. VLIW5 code works beautifully on VLIW4 and even better on GCN. If you want to test this, pull a VLIW5-designed demo and run it on the newer cards; even at the same specs, it will run better on the later designs. Your assumptions here are misguided, and they don't fit with the reality we see in the die photo.

Did you take a look at the GCN memory setup? I think the major problem with using R700 as the final spec is that that document was written for the dev kits as received. I talked to a source around that time, and he mentioned that the document kept being updated, which points to a changing environment. I haven't had the chance to speak to him since September, but it is likely that by November the final hardware no longer resembled what was in the dev kit. That could also be a strong reason why NFS was so easily impressive while launch software failed to do anything beyond the last-gen consoles: the dev kits were simply not close enough to final hardware.

P.S. That would also explain the FPS drops we see in COD: BLOPS 2.

Now redo the math at 55nm. Everything on the GPU falls in line with what we are seeing...
Did the math; 55nm wouldn't work, as the embedded memory is far too small to be 55nm. Also, a while ago there were rumors that the Wii U was being built in a 40nm chip factory NEC used to own (they're now Renesas, or whatever they are called).
 
Never saw where it was completely ruled out that it couldn't be 55nm. Looking at the GPU parts, it's the only thing that would work.


If that's the only part that doesn't fit at 55nm, we should look harder at the eDRAM.

Or we accept it's very custom, realise we ain't really getting anywhere comparing it to off-the-shelf silicon, and instead get a posse together to go kidnap Iwata and apply the thumbscrews till we get answers.
 
The pipeline of the CPU is just too damn short for 3+ GHz.

I will agree that a clock bump to "3+ GHz" is out; I think we have all agreed on that. But that does not altogether rule out the possibility of a clock bump in general, and certainly not when the CPU does support voltage stepping.

Or we accept it's very custom, realise we ain't really getting anywhere comparing it to off-the-shelf silicon, and instead get a posse together to go kidnap Iwata and apply the thumbscrews till we get answers.

You will likely get little more. Iwata is not a programmer anymore, and I doubt he knows the ins and outs of the hardware. You would have to kidnap someone from AMD or Renesas, as they made the GPU, not Nintendo.
 
Never saw where it was completely ruled out that it couldn't be 55nm. Looking at the GPU parts, it's the only thing that would work.


If that's the only part that doesn't fit at 55nm, we should look harder at the eDRAM.
32MB of Renesas eDRAM at 55nm would be almost as big as the whole Latte die. Also, what's "everything on the GPU" supposed to mean? We have no fucking idea what most of it is, let alone how big any of that stuff should be at any given node.
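
For anyone who wants the scaling behind that: under ideal linear shrink, area goes with the square of the node ratio, so the same macro is roughly 1.9x bigger at 55nm than at 40nm. A sketch with a placeholder block size (not a measurement):

```python
# Ideal area scaling between nodes: area ~ (node ratio)^2.
scale_55_over_40 = (55 / 40) ** 2
print(round(scale_55_over_40, 2))  # ~1.89x bigger at 55nm

# Placeholder example: if the 32MB eDRAM block measured ~40 mm^2 at 40nm,
# the same macro at 55nm would come out around:
edram_40nm_mm2 = 40.0  # hypothetical figure for illustration only
print(round(edram_40nm_mm2 * scale_55_over_40, 1))  # ~75.6 mm^2
```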
 
I find this thread and the CPU thread awesome, because I'm learning a lot. My question, though: why are these specs so hard to determine? I remember knowing what was inside the X360 well before it came out, and it seems like people know everything there is to know about the PS4. Why are the details of the Wii U so secret?
 
Now you are, like most others, factoring in other details, which you didn't do before in the case of the Wii U CPU. Just the same, there is far more to it than clock speed.

Generally, the PowerPC processors get more performance per cycle than any other CPU out there.
The architecture is 15 years old. If you think it can hang with modern designs just because PPC, you're sorely mistaken.
You've read Marcan's findings, good. It's all but confirmation that nothing has changed. SIMD width would have been the easiest thing to bump up, but they didn't even do that.
Until some verifiable, rational argument based on evidence pops up, I'm continuing to assume it's a triple-core Gekko at thrice the clocks.

Which means: dual-issue unified int/FP scheduler, peaking out at two ops per clock on the integer side, and two SP FLOPs.

Jaguar OTOH can issue two integer plus two FP, and sustains 8 SP FLOPs per clock (Bobcat could do 4).

These dinky netbook CPUs in the upcoming Sony/MS consoles will basically outperform Espresso by a factor of 10.

The comparison with contemporary smartphone CPUs is much more apt.
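
For reference, the arithmetic behind that factor-of-ten claim, using this post's per-clock figures and an assumed 1.6 GHz Jaguar clock (not a confirmed spec):

```python
# Peak single-precision GFLOPS = cores * SP FLOPs per clock * GHz,
# using the per-clock figures above; the Jaguar clock is an assumption.
espresso = 3 * 2 * 1.24  # 7.44 GFLOPS
jaguar   = 8 * 8 * 1.6   # 102.4 GFLOPS
print(espresso, jaguar, round(jaguar / espresso, 1))  # ~13.8x on paper
```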

krizzx said:
You can't compare this CPU clock per clock because the architecture is so different. Marcan, the person who actually uncovered the clocks, stated this as well, but people just omitted the rest of his comments.
Your interpretation of what Marcan said is pretty ... optimistic.

krizzx said:
The actual real world performance of the Wii U CPU is around the same as the 360 give or take.
Doubtful. Clock gap is too large. The SIMD performance gap is also huge.
 
I will agree that a clock bump to "3+ GHz" is out; I think we have all agreed on that. But that does not altogether rule out the possibility of a clock bump in general, and certainly not when the CPU does support voltage stepping.



You will likely get little more. Iwata is not a programmer anymore, and I doubt he knows the ins and outs of the hardware. You would have to kidnap someone from AMD or Renesas, as they made the GPU, not Nintendo.

he probably still knows the specs and we can get him to greenlight metroid dread before we release him
 
32MB of Renesas eDRAM at 55nm would be almost as big as the whole Latte die. Also, what's "everything on the GPU" supposed to mean? We have no fucking idea what most of it is, let alone how big any of that stuff should be at any given node.

Agreed. Last I checked, this is still what the chip analysis looks like, despite statements by DF and anyone else.

[Annotated die shots: wiiudie_blocks.jpg, latte_annotated.jpg]

USC-fan said:
Care to fill in those letters for us?
 
32MB of Renesas eDRAM at 55nm would be almost as big as the whole Latte die. Also, what's "everything on the GPU" supposed to mean? We have no fucking idea what most of it is, let alone how big any of that stuff should be at any given node.

Thanks. If that doesn't end the idea behind 55nm, I don't know what will. Obviously this chip is custom, and the wattage doesn't fit with a simple 160-ALU R700 @ 40nm.

I'd say our earlier wisdom that the Wii U's GPU would be custom has left us because we were told the dev kits were R700. That is true, they were AFAIK, but final hardware is not set in stone by the dev kits that preceded it. They are there to give an approximation of performance, not final silicon.
 
Your interpretation of what Marcan said is pretty ... optimistic.

No, it was literal.

Marcan said:
The Espresso is an out of order design with a much shorter pipeline. It should win big on IPC on most code, but it has weak SIMD.

It's worth noting that Espresso is *not* comparable clock per clock to a Xenon or a Cell. Think P4 vs. P3-derived Core series.

No hardware threads. One per core. No new SIMD, just paired singles. But it's a saner core than the P4esque stuff in 360/PS3.

And I'm sure it's not an "idle" clock speed. 1.24G is exactly in line with what we expected for a 750-based design.

So yes, the Wii U CPU is nothing to write home about, but don't compare it clock per clock with a 360 and claim it's much worse. It isn't.
 
32MB of Renesas eDRAM at 55nm would be almost as big as the whole Latte die. Also, what's "everything on the GPU" supposed to mean? We have no fucking idea what most of it is, let alone how big any of that stuff should be at any given node.
Where are those numbers coming from, and was it confirmed to be Renesas eDRAM to begin with? Is there other eDRAM that would fit at 55nm?

Also, it means the ALU blocks would match what they should be at 55nm.
 
Where are those numbers coming from, and was it confirmed to be Renesas eDRAM to begin with? Is there other eDRAM that would fit at 55nm?

Also, it means the ALU blocks would match what they should be at 55nm.

I believe the people who actually gave us the die shot in the first place confirmed it was Renesas.
 
It just wouldn't make sense for Nintendo to document the chip as being based on R700 in the leaked feature sheet (which has been confirmed by insiders) and then change it to something else entirely. Going from VLIW5 to VLIW4 or GCN would require devs to do major code revisions in order to optimize for those architectures.

The shader blocks themselves also look pretty similar to those of RV770, even though the layout of the chip as a whole is pretty different. Same goes for the T blocks, which I identify as TMUs. They look a lot like those on the RV770.

I read a rumor that early Wii U dev kits had an RV770LE in them. The thing that had me considering a move to VLIW4 or GCN was the tessellator. I've been reading on the evolution of AMD's hardware, and all I've read says that the tessellators on AMD GPUs were essentially worthless until Cayman (HD69xx), yet the leaked feature sheet for the Wii U says it's usable, and Shin'en tweeted about using it for their next game. It just made me wonder if they may have used a newer tessellator than what would have been standard for an R700. If they did, it could be that other hardware may have been altered as well. FWIW, the assembler for VLIW4 is supposedly much simpler to code for than VLIW5, and GCN is even easier than that. I would think the move away from VLIW5 would be welcome in that regard.
 
I will agree that a clock bump to "3+ GHz" is out; I think we have all agreed on that. But that does not altogether rule out the possibility of a clock bump in general, and certainly not when the CPU does support voltage stepping.

Probably the wrong thread for this, but I read/post a lot of news. Does no one else remember that, prior to E3 last year, the stories were that Nintendo had underclocked the CPU in the dev kits, and then later the rumours came out that Nintendo had underclocked the CPU to prevent it overheating in the small case? Google it if you don't remember.

The U runs almost cold after hours of play; maybe they've tested it and found it can handle more heat than they thought? The PSU being 75W and the system running at an almost constant 33W seems wiggy.

Criterion said:

"Wii U is a good piece of hardware, it punches above its weight. For the power consumption it delivers in terms of raw wattage it's pretty incredible. Getting to that though, actually being able to use the tools from Nintendo to leverage that, was easily the hardest part."

On the CPU they said:

"I think a lot of people have been premature about it in a lot of ways because while it is a lower clock-speed, it punches above its weight in a lot of other areas"

"So I think you've got one group of people who walked away, you've got some other people who just dived in and tried and thought, 'Ah... it's not kind of there,' but not many people have done what we've done, which is to sit down and look at where it's weaker and why, but also see where it's stronger and leverage that. It's a different kind of chip and it's not fair to look at its clock-speed and other consoles' clock-speed and compare them as numbers that are relevant. It's not a relevant comparison to make when you have processors that are so divergent. It's apples and oranges."
 
I read a rumor that early Wii U dev kits had an RV770LE in them. The thing that had me considering a move to VLIW4 or GCN was the tessellator. I've been reading on the evolution of AMD's hardware, and all I've read says that the tessellators on AMD GPUs were essentially worthless until Cayman (HD69xx), yet the leaked feature sheet for the Wii U says it's usable, and Shin'en tweeted about using it for their next game. It just made me wonder if they may have used a newer tessellator than what would have been standard for an R700. If they did, it could be that other hardware may have been altered as well. FWIW, the assembler for VLIW4 is supposedly much simpler to code for than VLIW5, and GCN is even easier than that. I would think the move away from VLIW5 would be welcome in that regard.

The only reason the tessellator in the older AMD cards was unusable was that it wasn't DirectX compatible, though the ones in the newer cards are better anyway.
 
Probably the wrong thread for this, but I read/post a lot of news. Does no one else remember that, prior to E3 last year, the stories were that Nintendo had underclocked the CPU in the dev kits, and then later the rumours came out that Nintendo had underclocked the CPU to prevent it overheating in the small case? Google it if you don't remember.

The U runs almost cold after hours of play; maybe they've tested it and found it can handle more heat than they thought? The PSU being 75W and the system running at an almost constant 33W seems wiggy.

Criterion said:

"Wii U is a good piece of hardware, it punches above its weight. For the power consumption it delivers in terms of raw wattage it's pretty incredible. Getting to that though, actually being able to use the tools from Nintendo to leverage that, was easily the hardest part."

On the CPU they said:

"I think a lot of people have been premature about it in a lot of ways because while it is a lower clock-speed, it punches above its weight in a lot of other areas"

"So I think you've got one group of people who walked away, you've got some other people who just dived in and tried and thought, 'Ah... it's not kind of there,' but not many people have done what we've done, which is to sit down and look at where it's weaker and why, but also see where it's stronger and leverage that. It's a different kind of chip and it's not fair to look at its clock-speed and other consoles' clock-speed and compare them as numbers that are relevant. It's not a relevant comparison to make when you have processors that are so divergent. It's apples and oranges."

Thank you. I forgot all about the rest of their statement about the CPU, specifically the bolded part, which is something I've been saying since the beginning.
 
Thank you. I forgot all about the rest of their statement about the CPU, specifically the bolded part, which is something I've been saying since the beginning.

The parts you bolded are a perfect example of PR lingo; if you read it carefully, it doesn't mean anything specific.
 
A couple of old but still relevant posts from the Beyond3D thread. Take them as clues (and please don't hound the guy to give more info and endanger whatever job/source he's working from). I don't believe every so-called insider on the internet, but my bullshit meter is pretty fine-tuned at this point, and this guy doesn't show any of the traits of a conman in his posts.

Bobblehead said:
This is true of all the recent AMD GPUs. You all should really not get hung up on finding an exact retail GPU version of the WiiU. It doesn't exist. The specific combination of the different components in WiiU does not match any other instance of the family it is based on. Anyone who says "it's clearly an rv750!" or "no way, it's obviously an rv720!" is wrong.

Both of these statements are equally true:
rv710 is a variant of rv770.
rv770 is a variant of rv710.

If the WiiU is a 7xx, then does it really matter if it started with 710 and a few numbers were increased or if it started with 770 and a few numbers were reduced?


Nintendo will never release the exact config or clockspeed, but both will eventually be unofficially figured out via developer leaks and die shot analysis.
http://forum.beyond3d.com/showpost.php?p=1682228&postcount=3523

Bobblehead said:
710 won't help you. The ratios it used weren't magic, it was just appropriate for the design targets. As you pointed out, not all of the 7xx chips have the same ratios either.

The ratios for WiiU are appropriate for WiiU, and not necessarily dependent on other instances of the family it came from.

Some developer will eventually leak the clock speed. If they are really curious they will also write a bunch of simple test cases to work out the numbers you're asking about. Some surely have already, they just don't want to violate NDAs. But eventually it will get out.
http://forum.beyond3d.com/showpost.php?p=1682422&postcount=3552
 
The architecture is 15 years old. If you think it can hang with modern designs just because PPC, you're sorely mistaken.
You've read Marcan's findings, good. It's all but confirmation that nothing has changed. SIMD width would have been the easiest thing to bump up, but they didn't even do that.
Until some verifiable, rational argument based on evidence pops up, I'm continuing to assume it's a triple-core Gekko at thrice the clocks.

Which means: dual-issue unified int/FP scheduler, peaking out at two ops per clock on the integer side, and two SP FLOPs.

Jaguar OTOH can issue two integer plus two FP, and sustains 8 SP FLOPs per clock (Bobcat could do 4).
Come on, Rolf, for somebody who's run the code you could at least check the posted results ; )

Bobcat has no SP FLOPS advantage over 750CL. Neither theoretical nor practical - it's one 2-way simd with 4 FLOPs throughput vs another 2-way simd with 4 FLOPs throughput. Actually, on that particular test 750CL demonstrates a sufficiently better clock-normalized performance, to the extent that the projected Espresso performance (per core) @ 1.24GHz would be sufficient to almost exactly match a Bobcat @1.6GHz.
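
Spelling that out with the figures above (both cores treated as a 2-wide SIMD with multiply-add, i.e. 4 SP FLOPs per clock, and the 1.6 GHz Bobcat clock used in that test):

```python
# Per-core theoretical peak at 4 SP FLOPs/clock for both designs.
espresso_core = 4 * 1.24  # 4.96 GFLOPS at 1.24 GHz
bobcat_core   = 4 * 1.60  # 6.4 GFLOPS at 1.6 GHz

# For Espresso @1.24 GHz to match Bobcat @1.6 GHz in practice, it needs about
# this much better clock-normalized throughput on that test:
print(round(1.60 / 1.24, 2))  # ~1.29x
```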
 
The only reason the tessellator in the older AMD cards was unusable was that it wasn't DirectX compatible, though the ones in the newer cards are better anyway.

Correct, but its use in OpenGL came at a stiff performance penalty due to how slow the scheduling is on VLIW5 cards in comparison to the more efficient VLIW4 and exceptionally more efficient GCN.
 
Thank you. I forgot all about the rest of their statement about the CPU, specifically the bolded part, which is something I've been saying since the beginning.

It's a very insightful interview and explains why a lot of launch software wasn't up to scratch.

http://www.eurogamer.net/articles/digitalfoundry-need-for-speed-most-wanted-wii-u-behind-the-scenes

"The starting point is always, let's just get some running software and see what it's like - get something that's running and playable. When you start you're at some sort of frame-rate or other... you take out absolutely everything you can that's optional, get something playable, tune what you've got and get that up to an acceptable frame-rate, and then put more and more back in," he reveals.

"The difference with Wii U was that when we first started out, getting the graphics and GPU to run at an acceptable frame-rate was a real struggle. The hardware was always there, it was always capable. Nintendo gave us a lot of support - support which helps people who are doing cross-platform development actually get the GPU running to the kind of rate we've got it at now. We benefited by not quite being there for launch - we got a lot of that support that wasn't there at day one... the tools, everything."

 
Jaguar cores are netbook cores (or super-low-power cores), and they have longer pipelines than the Wii U's CPU, likely 2 or 3 times as long, so the Wii U CPU's clock speed could actually produce better per-core performance for general-purpose tasks. They do not have as drastic an advantage in FLOP performance either, which all 3 designs will mostly push onto their GPUs anyway. Having the extra cores will help keep frame rates high, but very few games are CPU-limited, and fewer are expected to come into existence over the course of the next 5 years.



The SPUs are far too big to house only 20 ALUs; if that were the case, they wouldn't be so misshapen. They are obviously not just R700 cores if they are in fact only 20 ALUs per unit. As we have both thought, it is likely they moved to 32 ALUs and used a more custom design; basically, VLIW5 isn't a good candidate for TEV emulation.

Also, 160 ALUs @ 550MHz on a low-power 40nm process would consume drastically less power than that, unless it were a high-power design cut down, which isn't what we are seeing at all. (You are talking about ~10 watts' worth of silicon, by the way, which is far too little for the GPU.)

This is the first time I've seen somebody talking about Jaguar pipeline depth. Espresso was, what, 6 stages? Whereas PS360 has 18+? Anybody have the actual Jaguar number?
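
To put those stage counts in rough time terms, treating a branch mispredict as roughly a full pipeline refill (a simplification, and the stage counts are just the approximate figures above):

```python
# Approximate mispredict cost in nanoseconds: stages / clock (GHz).
def mispredict_ns(stages, clock_ghz):
    return stages / clock_ghz

print(round(mispredict_ns(6, 1.24), 1))  # Espresso-ish: ~4.8 ns
print(round(mispredict_ns(18, 3.2), 1))  # Xenon/Cell PPE-ish: ~5.6 ns
```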
 
Power consumption is another strong indication, yes. As is function's observation on Beyond3D that we would at least be seeing 720p resolution as standard if Latte had a shader advantage over Xenos/RSX. Yet games like Sonic Racers, BLOPS2, and Tekken sacrifice resolution just like the other console versions. On a 320-shader machine, that would not be necessary, as resolution is pretty much entirely dependent on GPU strength.



It just wouldn't make sense for Nintendo to document the chip as being based on R700 in the leaked feature sheet (which has been confirmed by insiders) and then change it to something else entirely. Going from VLIW5 to VLIW4 or GCN would require devs to do major code revisions in order to optimize for those architectures.

The shader blocks themselves also look pretty similar to those of RV770, even though the layout of the chip as a whole is pretty different. Same goes for the T blocks, which I identify as TMUs. They look a lot like those on the RV770.

But are you forgetting to take into account memory usage, and whether devs had enough time to optimize code for a memory configuration like the Wii U's?


There's no unified memory architecture here, which would have made it easier to just port over with only the necessary optimization.
 
The parts you bolded are a perfect example of PR lingo; if you read it carefully, it doesn't mean anything specific.

No aspect of that sounds like PR. It sounds like a professional dev's analysis and recap of their experience developing on the hardware.

Why do people spin all positive/progressive comments about the Wii U as PR talk? Devs claim the Wii U has DX11 shading capabilities = "Oh, that's just PR talk." Yet if someone says something negative about the Wii U hardware, they take it at face value as absolute 100% fact without questioning it and spread it to every corner of the internet they can find, like with the recent comments from Insomniac and about the Frostbite engine.
 
But are you forgetting to take into account memory usage, and whether devs had enough time to optimize code for a memory configuration like the Wii U's?

Believe me, I did everything I could to find ways around function's analysis. Code that is unoptimized for Wii U's unique memory config could explain things like lower framerates, but resolution seems to be dependent on the SPUs almost exclusively. I mean, it's conceivable that there are a myriad of small factors which prevent Wii U games from getting a resolution bump, but Occam's Razor can't be ignored either.
 
Believe me, I did everything I could to find ways around function's analysis. Code that is unoptimized for Wii U's unique memory config could explain things like lower framerates, but resolution seems to be dependent on the SPUs almost exclusively. I mean, it's conceivable that there are a myriad of small factors which prevent Wii U games from getting a resolution bump, but Occam's Razor can't be ignored either.

Well, we don't have to assume much to know what has been already confirmed, which is that the dev tools prior to launch were hot garbage. I don't think that can be overstated in our ongoing analysis.
 
Wait, so does that mean it takes the Jaguar over 4 times as long to complete a pass?
It's called 'instruction latency' (versus 'instruction throughput').

Believe me, I did everything I could to find ways around function's analysis. Code that is unoptimized for Wii U's unique memory config could explain things like lower framerates, but resolution seems to be dependent on the SPUs almost exclusively. I mean, it's conceivable that there are a myriad of small factors which prevent Wii U games from getting a resolution bump, but Occam's Razor can't be ignored either.
No ROPs whatsoever? ; )
 
Wait, so does that mean it takes the Jaguar over 4 times as long to complete a pass?

Yes, but the throughput won't suffer as badly as PS360 since it supports OoOE. This isn't going to make Espresso stronger than 8xJaguars, but the pipeline depth should massively close the gap over what you'd expect just looking at clock speeds and core counts under certain circumstances.

8xJaguar outperforming Espresso by over 10x as claimed a few posts ago is probably a dubious assertion.
 
Wait, are we back to "Shoddy ports" now rather than "underperforming hardware"?

I don't even think you can call them shoddy ports. More like shoddy tools that forced the hands of developers to release poorly optimized ports.

Also, the hardware is what it is. Fourth Storm has the strongest theory about what it is right now, and it puts it in line or perhaps slightly stronger than current gen (depending on efficiency) with much more modern feature support.
 
I don't even think you can call them shoddy ports. More like shoddy tools that forced the hands of developers to release poorly optimized ports.

Also, the hardware is what it is. Fourth Storm has the strongest theory about what it is right now, and it puts it in line or perhaps slightly stronger than current gen (depending on efficiency) with much more modern feature support.

Based on what everyone has pieced together, some of the launch ports running essentially "on par" is a halfway decent accomplishment. BO2 is completely on par with 360 in online (other than some jaggy shadows). AC3 and ME3 pull ahead of the PS3 versions in many cases.

We're looking at a "weak" console, sure. But we won't have a better indication of anything exact until we see the next wave of games, 1st party included.
 