WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

Yup, but you've got to remember that Nintendo must have a dog's bollocks compression algorithm going by what developers managed to squeeze into 40MB for WiiWare games last gen. I don't think that the developer would have mentioned it if it was just your bog standard texture compression found with your average GPU.

One thing that people might also remember is Ancel praising the RAM setup; he said that the Wii U had almost unlimited RAM if I remember correctly, so that combined with the above leads me to believe that Nintendo have something going on (or maybe plural somethings) to stop what was then believed to be a 2GB difference in RAM versus the PS4 and a 6GB difference in RAM versus the One from being a huge problem for developers porting between the different machines.

I did toy with the idea of Nintendo using onboard flash as swap space, which would explain where some of the 5GB of flash disappeared to after that first update, but I can't see that happening because a) it wouldn't be fast enough to be of much use and b) constant writing, deleting and rewriting would degrade the flash too quickly; give it a year or two and it would be pretty much buggered.

I know everyone rolls their eyes whenever someone mentions 'secret sauce' but there has to be something going on with the GPU that we're not aware of, because Bayonetta 2, particularly the Gomorrah boss fight, shouldn't be possible on a GPU pushing the lower number of flops that some people are attributing to it.

I'd be curious if you can find the translation somewhere.

I'm still somewhat careful with regard to that comment. Predicting the data you need in the next frame is much easier in a sidescroller than in a fully 3D game. You can't have 1GB of raw data in a frame and then turn around and have a separate 1GB of data in the next frame without flushing the first. And if that second gigabyte of data isn't in RAM then it needs to be read from disk.

Bear in mind that GPU texture compression has existed for the better part of 15 years. Even with improvements, I wouldn't expect it to be leaps and bounds better.
 
What makes you think it's a 4600 part? Nothing from the die shot supports that possibility, if I remember correctly.

Educated guess. If it were HD4700/4800 based then it would be in the 700 to 900 GFLOPS range at 550MHz. That's 3x to 4x faster than the 360 GPU and 4x to 5x faster than the PS3 GPU. It's possible, but I'm very skeptical.

To put it in context, Black Ops 2 runs at 880x720 on 360 and 832x624 on PS3. 1920x1080 is 3x higher than the 360 resolution and 4x higher than the PS3 resolution. And that game I mentioned (whose name escapes me) that got bumped to 720p would likely have been bumped higher if it could have been.

Edit:

On a happier note, it might also mean that Xbox One games that run at 1080p become possible at similar resolutions on Wii U, insofar as CPU speed and memory size aren't issues. 1300 GFLOPS on Xbox One to 352 GFLOPS on Wii U equals a 3.7x performance differential.
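For anyone wondering where numbers like that come from, here's a minimal sketch of the usual shaders x 2 ops x clock rule of thumb for AMD parts of that era; the shader counts are the commonly assumed figures from this thread, not confirmed specs.

```python
# Rough GFLOPS for AMD VLIW-era GPUs: shaders * 2 FLOPs (multiply-add) * clock.
# Shader counts below are commonly assumed figures, not confirmed specs.
def gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000

print(gflops(800, 550))  # HD4800-class at 550MHz -> 880 GFLOPS
print(gflops(640, 550))  # HD4700-class at 550MHz -> 704 GFLOPS
print(gflops(320, 550))  # HD4600-class at 550MHz -> 352 GFLOPS
print(gflops(160, 550))  # 160 shaders at 550MHz  -> 176 GFLOPS
print(1300 / gflops(320, 550))  # vs ~1300 GFLOPS Xbox One -> ~3.7x differential
```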
 
Basically, Nintendo needs to do what's necessary to secure 3rd party support in the future. If that means making their consoles a bit more like their competitors, then that's what needs to happen.

How will Nintendo know what their competitors are building?
And why shouldn't the competition design their consoles more like Nintendo's?

Anyway, I don't want to see homogenized consoles for the sake of third parties.
 
What does DirectX have to do with the Wii U?

Nothing in terms of software. But they share the same hardware history.

It was also just the example I used. I don't remember what version it was, but OpenGL compliancy has demanded texture compression since forever.
 

No, but that was also my point. The fact that Black Ops 2 doesn't render at the higher resolution suggests that the GPU isn't strong enough to do so.

But I also don't think there's any question that the Wii U GPU is significantly faster than the 360/PS3 GPUs. 50% or 100% faster can make for far more beautiful games at 720p. But it's not enough to take those same games to 1080p because the performance needed is so much more than that.
 
When people say DX_, it's for simplicity's sake, to say "it has the feature sets of".

I understand why it happens, but I think it confuses and diminishes understanding of the philosophy and technology of the console. To this day people, apparently even developers, are confused about TEV in the Wii and GC.
 
No, but that was also my point. The fact that Black Ops 2 doesn't render at the higher resolution suggests that the GPU isn't strong enough to do so.

But I also don't think there's any question that the Wii U GPU is significantly faster than the 360/PS3 GPUs. 50% or 100% faster can make for far more beautiful games at 720p. But it's not enough to take those same games to 1080p because the performance needed is so much more than that.

I have no idea how one relates to the other. I'm not quite sure what point you're making...
 
I understand why it happens, but I think it confuses and diminishes understanding of the philosophy and technology of the console. To this day people, apparently even developers, are confused about TEV in the Wii and GC.

I know what you mean but generalized hardware compliancy is pretty much the only thing we have to go with.

For example, it's almost a certainty that Nintendo's API supports hardware features that exist in the HD4000-series (like tessellation) that don't exist in the DX10 feature set. That doesn't make it a 'DX11 feature' because it isn't DX11 compliant, but it simplifies things and achieves a similar effect. It's also the reason that people think Wii U supports 'some DX11 effects' when it doesn't. Or that it's 'some parts HD4000 and some parts HD6000'. Or that there's any great practical difference between Shader Model 4 and 5.

There isn't exactly a lot of difference between DX10 and 11 but people tend to confuse compliancy with capability. Capability in terms of brute force is far more important than any hardware compliancy and a fast DX10 part will vastly outperform a slow DX11 part.
 
I'd be curious if you can find the translation somewhere.

I'm still somewhat careful with regard to that comment. Predicting the data you need in the next frame is much easier in a sidescroller than in a fully 3D game. You can't have 1GB of raw data in a frame and then turn around and have a separate 1GB of data in the next frame without flushing the first. And if that second gigabyte of data isn't in RAM then it needs to be read from disk.

Bear in mind that GPU texture compression has existed for the better part of 15 years. Even with improvements, I wouldn't expect it to be leaps and bounds better.


The problem has always been memory bandwidth, and speed of decompression. Texture compression has been around a long time, but advancements have been made. ASTC is the latest in texture compression, doing more than BCn.
 
How will Nintendo know what their competitors are building?
And why shouldn't the competition design their consoles more like Nintendo's?

Anyway, I don't want to see homogenized consoles for the sake of third parties.

You can't know exactly, but it doesn't take a lot to get a ballpark estimate of what the other console manufacturers are going to do. You just look at past trends, economic conditions, current hardware prices and their trends. For example, I managed to make a pretty accurate estimate of the number of shaders in the PS4/X1 about eighteen months ago. I simply estimated how much these companies would be willing to spend, and roughly where prices would be by the time they launched.

As for why Nintendo has to adopt hardware more similar to the competition rather than the other way around, that's because of economic considerations.
The era of esoteric hardware is simply over. Development is too expensive to build multiple, highly optimized versions of a single multiplatform title. 3rd party support for Microsoft and Sony is essentially a given these days, at least at launch. The same cannot be said for Nintendo, and this has been the case for a while.

So building an architecturally similar box that's roughly in the same weight class as the competition is a necessity if you're gonna attract third party support. If a company wants to differentiate themselves, they need to do it through services and controller innovations.
 
How will Nintendo know what their competitors are building?
And why shouldn't the competition design their consoles more like Nintendo's?

Hahahahaaaaa.

Wait, maybe these were serious questions. Listen to devs, they know what's up. Additionally, the competition doesn't want to release hardware destined for Wii U sales numbers.
 
I have no idea how one relates to the other. I'm not quite sure what point you're making...

A game that runs at 1280x720 needs exactly 2.25x more performance to run at the exact same settings at 1920x1080. Most data suggests that the Wii U GPU is 1.5x faster than the 360 GPU and 2x faster than the PS3 GPU. If the Wii U can't run a game at 1920x1080 when using the same graphical settings as a game that runs at 1280x720 on 360/PS3 then it suggests that it isn't 2.25x faster but something less.

In other words, that performance differential is better spent on making games prettier at 720p than uglier at 1080p.
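To make that scaling explicit, here's a quick pixel-count sketch (it assumes rendering cost scales roughly linearly with resolution at identical settings, which is a simplification but fine for ballpark math):

```python
# Pixel-count ratios between target resolutions.
def pixels(w, h):
    return w * h

print(pixels(1920, 1080) / pixels(1280, 720))  # 2.25x more pixels than 720p
print(pixels(1920, 1080) / pixels(880, 720))   # ~3.3x vs Black Ops 2 on 360
print(pixels(1920, 1080) / pixels(832, 624))   # ~4.0x vs Black Ops 2 on PS3
```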
 
A couple of things of note:

PC games are almost always 32-bit applications, which have a hard limit of ~2GB of memory usage. DX10/11 reduce memory usage significantly (compared to DX9) to the point where a game like Crysis would be bumping against the 2GB ceiling in DX9 mode vs. ~1.3GB in DX10 mode with everything maxed out. Even where PCs have 8GB of RAM in them, the games themselves haven't been using that much memory. The more realistic comparison is of ~450MB to ~2GB of memory while disregarding overheads.

It otherwise just depends on how the game allocates its memory. A game that allocates 512MB to gameworld content and 4.5GB to textures could probably be cut down for the Wii U, while a game that allocates 3GB to the gameworld and 2GB to textures would most likely not. The former might be a Call of Duty while the latter could be a Grand Theft Auto.

Regarding texture compression, it has been a requirement for DirectX compliancy since DirectX 6 or 7.

Edit: I also highly doubt that the Latte is a 176 GFLOPS part. I've seen at least one case where a Wii U version runs at 720p while both 360/PS3 variants are sub-HD. 352 GFLOPS seems like a shoo-in for an HD4600-based part.

Thanks for sharing that, interesting. Still, that is 3 to 4 times what is in the PS360, very similar to the difference between Wii U and PS4/XB1. The problem is Nintendo needed a big start to capitalize on the current-gen ports and early (first 2 years) next-gen ports.

You can't know exactly, but it doesn't take a lot to get a ballpark estimate of what the other console manufacturers are going to do. You just look at past trends, economic conditions, current hardware prices and their trends. For example, I managed to make a pretty accurate estimate of the number of shaders in the PS4/X1 about eighteen months ago. I simply estimated how much these companies would be willing to spend, and roughly where prices would be by the time they launched.

As for why Nintendo has to adopt hardware more similar to the competition rather than the other way around, that's because of economic considerations.
The era of esoteric hardware is simply over. Development is too expensive to build multiple, highly optimized versions of a single multiplatform title. 3rd party support for Microsoft and Sony is essentially a given these days, at least at launch. The same cannot be said for Nintendo, and this has been the case for a while.

So building an architecturally similar box that's roughly in the same weight class as the competition is a necessity if you're gonna attract third party support. If a company wants to differentiate themselves, they need to do it through services and controller innovations.

It has been stated earlier in the thread. One thing is that the Wii U is architecturally different; another is the misconception that it's hard to develop for. Most ports, some not optimized, others surpassing PS360, were relatively cheap and developed in a short time.

The hardware does have a learning curve in terms of how to use it to maximize efficiency and performance, but what machine doesn't?

Off topic, but regarding 3rd party software Nintendo is not doing so badly right now; the big misses are the FPS and sports games not coming, including FIFA, PES and NBA 2K. I hope these games come next year.
 
Haven't been part of this discussion, but here's my two cents.

I don't blame Nintendo for trying to make a small and quiet console, but I figure they sacrificed just a little too much for this goal. I tend to figure that the Wii U needed to at least be able to match PS4/X1 visuals at 720p. This is just a guesstimate, but they probably could have done this had they been willing to go up to 100 watts or so. It would've been a bit larger and louder, but probably still would've been the smallest and quietest of the three machines.

Granted, it might've cost a bit more if they went this route, and there's no guarantee a bit of extra horsepower would've made a difference in how things have played out.
This is sort of how I think too. With the polarisation of gaming, the length of the generation and how the Wii lost its lustre in the last few years, I felt that technical considerations were more important than Nintendo thought. Nintendo could have got away with what they did if they had lots and lots of games, but it seemed obvious to me where third parties wanted things to go and Nintendo just wasn't asking or listening. Higher end tech is a selling point for a large portion of gamers and it's a good marketing gimmick too.

It's made even worse by two new powerhouses coming out and the string of fuck ups Nintendo have made with games, OS and selling people on the device.

It just really feels like a sad contender by a company struggling to compete but too arrogant and narrow-minded to see the forest for the trees. RIM comes to mind when I think of Nintendo.
 
The problem has always been memory bandwidth, and speed of decompression. Texture compression has been around a long time, but advancements have been made. ASTC is the latest in texture compression, doing more than BCn.

Ya, but it's also a question of how much more it's doing than BCn. Getting 20% better compression at the same performance is huge technologically speaking, but in terms of real-world numbers it's the difference between 1GB of data and 1.2GB of data.
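For a rough sense of the magnitudes involved, here's a sketch of how compressed texture sizes fall out of bits-per-pixel rates; the bpp figures are the standard ones for these fixed-rate formats, and the 2048x2048 texture is just an arbitrary example.

```python
# Compressed size of a single texture at various fixed-rate encodings.
# BC1 is 4 bpp, BC7 is 8 bpp; ASTC block sizes span 8 down to ~0.89 bpp.
def texture_mb(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / (1024 * 1024)

print(texture_mb(2048, 2048, 8))  # BC7 or ASTC 4x4 -> 4.0 MB
print(texture_mb(2048, 2048, 4))  # BC1             -> 2.0 MB
print(texture_mb(2048, 2048, 2))  # ASTC 8x8        -> 1.0 MB
```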
 
No, but that was also my point. The fact that Black Ops 2 doesn't render at the higher resolution suggests that the GPU isn't strong enough to do so.

But I also don't think there's any question that the Wii U GPU is significantly faster than the 360/PS3 GPUs. 50% or 100% faster can make for far more beautiful games at 720p. But it's not enough to take those same games to 1080p because the performance needed is so much more than that.
That claim doesn't make much sense in this discussion, given the data we actually have. This is no PC, where every GPU has its own memory pool and where making a game run at a higher resolution happens almost automatically on stronger GPUs.

The WiiU has a MEMORY ARCHITECTURE (and memory is the BASE of everything) that works really differently compared to the one seen on PS3/360 (even those two had nothing in common in that regard).
If the game wasn't programmed well enough, and that's absolutely probable considering that the dev kits weren't finished at that time and the game had to be out at launch, then it wouldn't matter if the GPU is 1.5, 2 or 3 times more powerful, because you would hit a bottleneck in the form of 12.8GB/s of bandwidth from the big pool of RAM on the WiiU, compared to the 22.4GB/s of bandwidth from the Xbox 360's main memory.

Note that the 2GB pool of memory is not the main pool, but only "the big" one, because the main pool of memory is precisely those 32MB of eDRAM running at who knows how many GB/s.
And that's without considering the other 2 extra pools (2+1 MB) that are without any doubt HUGE caches by themselves, even by today's standards.
Or the enormous caches on the CPU (2MB of cache for a core with only 4 pipeline stages is absolutely crazy in terms of size, and the other ones have a very respectable 512KB of L2, more than enough considering how short the pipeline of that processor is).

If the game wasn't re-programmed to take advantage of that memory layout, then it rendering at the same or even lower resolution wouldn't be that strange.
I suppose that if those 12.8GB/s were enough to run those games, it's because the 360's memory has higher latencies than the one used on the WiiU, and maybe also because the bigger caches on the CPU could have helped a lot to reduce the data traffic between the big pool and the CPU.

Caches are something automatic that you can benefit from even when you don't plan your code around squeezing them, but those 32MB of eDRAM are different: they seem like a normal pool of RAM (not limited to cache functionality) completely in the programmer's hands. And that means that unless you do something with it, it won't do anything for you, and considering that this is the main pool of RAM according to Nintendo's own schematics...
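For reference, those headline numbers are simply bus width times data rate; a minimal sketch, assuming the commonly reported 64-bit DDR3-1600 setup for the Wii U's main RAM and the 360's known 128-bit GDDR3 at 1400MT/s:

```python
# Peak DRAM bandwidth in GB/s = (bus width in bytes) * (transfer rate in MT/s) / 1000.
def peak_bw_gbs(bus_bits, mega_transfers_per_s):
    return bus_bits / 8 * mega_transfers_per_s / 1000

print(peak_bw_gbs(64, 1600))   # Wii U main RAM (assumed DDR3-1600, 64-bit) -> 12.8 GB/s
print(peak_bw_gbs(128, 1400))  # Xbox 360 main RAM (GDDR3-1400, 128-bit)    -> 22.4 GB/s
```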
 
Thanks for sharing that, interesting. Still, that is 3 to 4 times what is in the PS360, very similar to the difference between Wii U and PS4/XB1. The problem is Nintendo needed a big start to capitalize on the current-gen ports and early (first 2 years) next-gen ports.

Yep, but there's still that overhead that I mentioned. PCs tend to have discrete video cards with their own dedicated video memory and there is some duplication going on between video and system RAM that doesn't exist in the consoles. I don't know to what extent that is, but it's there. Also, bear in mind that ~2GB is the maximum that PC games are allowed. Crysis used to crash when it exceeded that and most games try to keep well below that limit. The more realistic limit that I see on most PC exclusives tends to hover in the 1-1.5GB range.

Personally, I think there's every chance that the Wii U will get its share of ports from the higher end systems and that there won't be that big a difference. Just contrast Crysis 3 at 'very high' vs. 'low' in these comparison shots. I don't see why that game couldn't be scaled down to run at 720p and somewhat lowish settings on the Wii U. I've no doubt that the differences will become more visible as time goes on but I'm honestly hard pushed to tell the difference in those particular shots.
 
I think other important things to remember when comparing the launch multiplatform games to their PS3 and 360 counterparts are that those ports are from machines that are 'CPU heavy' with regard to floating point work, that Cell and Xenon are both in-order CPUs with multithreading, and that those CPUs also deal with sound.

When you add the above to constantly changing SDKs and dev kits, unfinished/inadequate tools, mostly small teams or work being outsourced to other developers, and the pressure involved in getting a title ready for launch day, I'd say that having those launch titles available with very few performance issues was pretty impressive. It's unrealistic to expect a major resolution bump imo.
 
This is sort of how I think too. With the polarisation of gaming, the length of the generation and how the Wii lost its lustre in the last few years, I felt that technical considerations were more important than Nintendo thought. Nintendo could have got away with what they did if they had lots and lots of games, but it seemed obvious to me where third parties wanted things to go and Nintendo just wasn't asking or listening. Higher end tech is a selling point for a large portion of gamers and it's a good marketing gimmick too.

It's made even worse by two new powerhouses coming out and the string of fuck ups Nintendo have made with games, OS and selling people on the device.

It just really feels like a sad contender by a company struggling to compete but too arrogant and narrow-minded to see the forest for the trees. RIM comes to mind when I think of Nintendo.

I don't think Nintendo as a company is in mortal danger just yet, but it's OBVIOUS that it's time for a different management team to take over. To be fair, the company still makes some very good games, but they just seem tone-deaf when it comes to the console market. They need a team that both understands how the market has changed and understands how to play nice with third party developers, particularly the western developers.

That doesn't mean they cannot turn the Wii U around, at least to some extent. Next time around, though, they NEED to take a different approach, and I just don't think the current management team can do that.
 
Yep, but there's still that overhead that I mentioned. PCs tend to have discrete video cards with their own dedicated video memory and there is some duplication going on between video and system RAM that doesn't exist in the consoles. I don't know to what extent that is, but it's there. Also, bear in mind that ~2GB is the maximum that PC games are allowed. Crysis used to crash when it exceeded that and most games try to keep well below that limit. The more realistic limit that I see on most PC exclusives tends to hover in the 1-1.5GB range.

Personally, I think there's every chance that the Wii U will get its share of ports from the higher end systems and that there won't be that big a difference. Just contrast Crysis 3 at 'very high' vs. 'low' in these comparison shots. I don't see why that game couldn't be scaled down to run at 720p and somewhat lowish settings on the Wii U. I've no doubt that the differences will become more visible as time goes on but I'm honestly hard pushed to tell the difference in those particular shots.

Differences seem to be less lighting in the scene.
 
That claim doesn't make much sense in this discussion, given the data we actually have. This is no PC, where every GPU has its own memory pool and where making a game run at a higher resolution happens almost automatically on stronger GPUs.

The WiiU has a MEMORY ARCHITECTURE (and memory is the BASE of everything) that works really differently compared to the one seen on PS3/360 (even those two had nothing in common in that regard).
If the game wasn't programmed well enough, and that's absolutely probable considering that the dev kits weren't finished at that time and the game had to be out at launch, then it wouldn't matter if the GPU is 1.5, 2 or 3 times more powerful, because you would hit a bottleneck in the form of 12.8GB/s of bandwidth from the big pool of RAM on the WiiU, compared to the 22.4GB/s of bandwidth from the Xbox 360's main memory.

Note that the 2GB pool of memory is not the main pool, but only "the big" one, because the main pool of memory is precisely those 32MB of eDRAM running at who knows how many GB/s.
And that's without considering the other 2 extra pools (2+1 MB) that are without any doubt HUGE caches by themselves, even by today's standards.
Or the enormous caches on the CPU (2MB of cache for a core with only 4 pipeline stages is absolutely crazy in terms of size, and the other ones have a very respectable 512KB of L2, more than enough considering how short the pipeline of that processor is).

If the game wasn't re-programmed to take advantage of that memory layout, then it rendering at the same or even lower resolution wouldn't be that strange.
I suppose that if those 12.8GB/s were enough to run those games, it's because the 360's memory has higher latencies than the one used on the WiiU, and maybe also because the bigger caches on the CPU could have helped a lot to reduce the data traffic between the big pool and the CPU.

Caches are something automatic that you can benefit from even when you don't plan your code around squeezing them, but those 32MB of eDRAM are different: they seem like a normal pool of RAM (not limited to cache functionality) completely in the programmer's hands. And that means that unless you do something with it, it won't do anything for you, and considering that this is the main pool of RAM according to Nintendo's own schematics...

Yes, I am aware of that. And it was a simplification.

But both 360 and Wii U have eDRAM to work with, so it's also not a matter of 22.4GB/s vs 12.8GB/s. I'll certainly grant you that Black Ops 2 could simply be a bad port, but it's also not the only example. Even that game that actually did improve resolution over its 360 counterpart only went from whatever sub-HD resolution it was to 720p. That simply wouldn't be possible if it were just a matter of base bandwidth.

If it were a matter of 22.4GB/s vs. 12.8GB/s then the Wii U just flat out wouldn't be able to perform at 720p. 12.8GB/s literally isn't enough bandwidth for 60fps in Call of Duty at that resolution.
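For a sense of scale, here's a very rough sketch of just the framebuffer traffic at that resolution and framerate; the overdraw and depth-pass figures are illustrative assumptions, and textures, geometry and CPU accesses would all have to share whatever bandwidth is left.

```python
# Ballpark framebuffer traffic for a 60fps shooter at 1280x720,
# ignoring texture fetches, geometry and CPU accesses that share the same bus.
def traffic_gbs(w, h, fps, bytes_per_px, passes):
    return w * h * bytes_per_px * passes * fps / 1e9

colour = traffic_gbs(1280, 720, 60, 4, 3)  # RGBA8 writes, ~3x overdraw -> ~0.66 GB/s
depth  = traffic_gbs(1280, 720, 60, 4, 6)  # Z reads + writes           -> ~1.33 GB/s
print(colour + depth)  # ~2 GB/s before a single texture is sampled
```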
 
Yes, I am aware of that. And it was a simplification.

But both 360 and Wii U have eDRAM to work with, so it's also not a matter of 22.4GB/s vs 12.8GB/s. I'll certainly grant you that Black Ops 2 could simply be a bad port, but it's also not the only example. Even that game that actually did improve resolution over its 360 counterpart only went from whatever sub-HD resolution it was to 720p. That simply wouldn't be possible if it were just a matter of base bandwidth.

If it were a matter of 22.4GB/s vs. 12.8GB/s then the Wii U just flat out wouldn't be able to perform at 720p. 12.8GB/s literally isn't enough bandwidth for 60fps in Call of Duty at that resolution.


The eDRAM uses are totally different; you can't run shaders or post-processing effects against data in the X360's eDRAM. Its eDRAM is very limited and wasn't on the main die.
 
Btw, just to add to the discussion from before: some GPU architectures provide roughly X FLOPS per watt, so knowing the wattage of a part on a given architecture can give you a rough idea of its performance.
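A minimal sketch of that kind of estimate; the GPU power budget and the GFLOPS-per-watt values below are purely illustrative assumptions, not measurements of Latte.

```python
# Ballpark performance from a power budget: GFLOPS ~= watts * (GFLOPS per watt).
def est_gflops(gpu_watts, gflops_per_watt):
    return gpu_watts * gflops_per_watt

# Assuming, for illustration only, ~15W of the console's power budget goes to the GPU:
for efficiency in (10, 15, 20, 25):   # a spread of plausible GFLOPS/W values
    print(efficiency, est_gflops(15, efficiency))
```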
 
The WiiU has a MEMORY ARCHITECTURE (and memory is the BASE of everything) that works really differently compared to the one seen on PS3/360 (even those two had nothing in common in that regard).
If the game wasn't programmed well enough, and that's absolutely probable considering that the dev kits weren't finished at that time and the game had to be out at launch, then it wouldn't matter if the GPU is 1.5, 2 or 3 times more powerful, because you would hit a bottleneck in the form of 12.8GB/s of bandwidth from the big pool of RAM on the WiiU, compared to the 22.4GB/s of bandwidth from the Xbox 360's main memory.

Note that the 2GB pool of memory is not the main pool, but only "the big" one, because the main pool of memory is precisely those 32MB of eDRAM running at who knows how many GB/s.
And that's without considering the other 2 extra pools (2+1 MB) that are without any doubt HUGE caches by themselves, even by today's standards.
Or the enormous caches on the CPU (2MB of cache for a core with only 4 pipeline stages is absolutely crazy in terms of size, and the other ones have a very respectable 512KB of L2, more than enough considering how short the pipeline of that processor is).

If the game wasn't re-programmed to take advantage of that memory layout, then it rendering at the same or even lower resolution wouldn't be that strange.
I suppose that if those 12.8GB/s were enough to run those games, it's because the 360's memory has higher latencies than the one used on the WiiU, and maybe also because the bigger caches on the CPU could have helped a lot to reduce the data traffic between the big pool and the CPU.

Caches are something automatic that you can benefit from even when you don't plan your code around squeezing them, but those 32MB of eDRAM are different: they seem like a normal pool of RAM (not limited to cache functionality) completely in the programmer's hands. And that means that unless you do something with it, it won't do anything for you, and considering that this is the main pool of RAM according to Nintendo's own schematics...

If you're implying that launch games didn't use eDRAM, there's zero chance that was the case. They would run significantly worse if they were restricted to 12.8GB/s.
 
If you're implying that launch games didn't use eDRAM, there's zero chance that was the case. They would run significantly worse if they were restricted to 12.8GB/s.

I don't think that's what he was implying.
But are you implying all launch games using 12.8GB/s would run worse? Why is that?
 
Yes, I am aware of that. And it was a simplification.

But both 360 and Wii U have eDRAM to work with, so it's also not a matter of 22.4GB/s vs 12.8GB/s. I'll certainly grant you that Black Ops 2 could simply be a bad port, but it's also not the only example. Even that game that actually did improve resolution over its 360 counterpart only went from whatever sub-HD resolution it was to 720p. That simply wouldn't be possible if it were just a matter of base bandwidth.

If it were a matter of 22.4GB/s vs. 12.8GB/s then the Wii U just flat out wouldn't be able to perform at 720p. 12.8GB/s literally isn't enough bandwidth for 60fps in Call of Duty at that resolution.
And that may explain the sub-HD resolution and the framerate drops that those games had.
It all depends on how the game was programmed. For example, the eDRAM on Xbox 360 was write-only from the GPU's point of view, which means that the only parts that could access that memory were the ROPs and the extra specialized logic to apply "free" anti-aliasing (which wasn't possible without tile rendering due to there being only 10MB of eDRAM).

On the WiiU the eDRAM is the main RAM, which means that it can be written and read not only by the whole GPU, but also by the CPU.
On Xbox 360, if you wanted to read something from the eDRAM you had to write it out to the 512MB of main RAM and then read it from there, and it's only logical to think that if launch ports were programmed like this, the 12.8GB/s could be enough in most situations (thanks to the fact that the much bigger CPU caches and the 1+2MB of extra caches on the GPU surely alleviated the impact of some operations on the big pool of memory) but not enough to upgrade the game to 1080p or true 720p.
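To put a rough number on what that 360-style "resolve to main RAM, then read it back" pattern costs, here's a small sketch; the buffer size and per-frame pass count are illustrative assumptions.

```python
# Main-RAM traffic incurred by resolving an eDRAM render target and reading it back,
# versus reading it directly from eDRAM. Numbers are purely illustrative.
def resolve_traffic_gbs(w, h, bytes_per_px, fps, passes_per_frame):
    buffer_gb = w * h * bytes_per_px / 1e9
    # each pass costs one write to main RAM (the resolve) plus one read back
    return buffer_gb * 2 * passes_per_frame * fps

# e.g. a 1280x720 RGBA8 target resolved and re-read 3 times per frame at 60fps
print(resolve_traffic_gbs(1280, 720, 4, 60, 3))  # ~1.3 GB/s of main-RAM traffic
```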

NBtoaster said:
If you're implying that launch games didn't use eDRAM, there's zero chance that was the case. They would run significantly worse if they were restricted to 12.8GB/s.
You can't know that for sure. There are things like the automated GPU and CPU caches (2+1MB + 3MB) that, even if not used to the maximum because of a lack of software planning, are of course alleviating the bandwidth impact on the main memory pool. If you couple this with the framerate problems those games had compared to the Xbox 360 versions, the fact that we can't know whether the 22.4GB/s was used to its maximum or whether there was still some room left, and the fact that the 12.8GB/s of the WiiU's big pool of memory is closer to its theoretical peak due to the lower latencies (reduced memory latencies, reduced distance between northbridge and memory, and also much reduced distance between CPU and northbridge), then I think this scenario is, in my opinion, the most plausible one.

I don't doubt that there's something stored in the 32MB of eDRAM in those games, but having something there and making decent use of it are two really different matters.
To use the 32MB efficiently, tons of things would have had to be changed or adapted.
And honestly, if even exclusive games like Nano Assault Neo, made by Shin'en (a team of tech gurus and one of the best in the world at taking advantage of a given piece of hardware), were only properly using one CPU core (the other two had a marginal use processing audio functions... which in their next engine will be handled by the console's DSP), without even optimizing the program for the bigger caches, or even the GPU shaders, and with many other known limitations, I can't even start to imagine how poorly optimized ports of much bigger games may be when made by people with less experience than those demo-scene veterans and, of course, with incomplete devkits that limited the use of the CPU (don't we have some confirmation that there were whole CPU cores idling because it wasn't possible to access them until a devkit update arrived, even after the console's launch?).
 
The problem is, it probably had limited use.

At minimum they were storing the framebuffer in eDRAM like 360.

I don't think that's what he was implying.
But are you implying all launch games using 12.8GB/s would run worse? Why is that?

Because it's extremely slow.

You can't know that for sure. There are things like the automated GPU and CPU caches (2+1MB + 3MB) that, even if not used to the maximum because of a lack of software planning, are of course alleviating the bandwidth impact on the main memory pool. If you couple this with the framerate problems those games had compared to the Xbox 360 versions, the fact that we can't know whether the 22.4GB/s was used to its maximum or whether there was still some room left, and the fact that the 12.8GB/s of the WiiU's big pool of memory is closer to its theoretical peak due to the lower latencies (reduced memory latencies, reduced distance between northbridge and memory, and also much reduced distance between CPU and northbridge), then I think this scenario is, in my opinion, the most plausible one.

The most plausible scenario is that the eDRAM was being used for the framebuffer and whatever else they could fit, probably some textures, though it probably wasn't that efficiently optimised. Using eDRAM for the framebuffer because you otherwise have low bandwidth is not something that needs to be learned.

The existence of the eDRAM was leaked way before release, and if smart memory management was so integral to the console they wouldn't have left the tools around it underdeveloped for so long.
 
At minimum they were storing the framebuffer in eDRAM like 360.



Because it's extremely slow.



The most plausible scenario is that the eDRAM was being used for the framebuffer and whatever else they could fit, probably some textures, though it probably wasn't that efficiently optimised. Using eDRAM for the framebuffer because you otherwise have low bandwidth is not something that needs to be learned.

The existence of the eDRAM was leaked way before release, and if smart memory management was so integral to the console they wouldn't have left the tools around it underdeveloped for so long.
Yes, and that's what I'm saying. To use the eDRAM in the same way it is used on the 360 (and this is the most probable scenario in rushed ports) is a waste that limits the whole capability of the console, because you're under-utilizing the main memory bank, or to put it in other words, the base from which both the CPU and the GPU get everything they need in order to work.

Regarding textures, yes, some of them could be stored there instead of in the big RAM, but again, this is not the ideal use of it, and you have to consider whether re-engineering their engine to store even some simple things in the eDRAM, beyond what was stored there on the Xbox 360, was something they could afford.

All in all, my point was that there are too many unknown factors regarding launch titles and their use of the hardware for us to say that the GPU couldn't render those games at higher resolutions just because they weren't rendered that way. This is no PC GPU, where a better GPU comes with (architecturally comparable) bigger and faster memory and where changes like those (AA, AF or increased resolution) are pretty much a given. This is a totally different beast that has to deal with things in its own way.
Maybe a 128-bit bus to the big pool of RAM (which would be 25.6GB/s) would have been much better for allowing ports from the current gen, but Nintendo designed this to be future-proof, and I'm sure that increase in bus size wouldn't be very useful relative to its cost for games designed around this architecture, so they ruled that possibility out.

It sounds slow when you say 12.8 vs 22.4, but when you put it as 12.8 read or write vs 11.4 read + 11.4 write it doesn't sound so bad.
This is not how it works. Xbox 360 is 22.4GB/s read or write as well; it's only the CPU that's limited to 11.4 + 11.4.
All in all, the one with the best memory architecture is by far the WiiU. Those huge caches (6MB in total), plus the main pool of 32MB of eDRAM, and the fact that even those 12.8GB/s are much closer to being achievable than the 22.4GB/s of the Xbox 360 memory, are things that will have a determining influence on future projects.
 
Interview from NintendoLife: Black Forest Games on Bringing Giana Sisters: Twisted Dreams to Wii U

From the development side, it wasn’t more effort to get Giana running on the Wii U than it was to port her to the X360. The TCR side was a bit simpler, but the low level engine side was harder. The Wii U has some PC-like dx10/dx11-like systems like geometry shaders, however, when we started working on the port, we already had our game released on the X360, and going from a console release to Wii U is easier since the consoles have very similar performance characteristics. Another helpful factor is the amount of overlap when it comes to controller support, interfaces, user management and suchlike (even though every single engine system needs to be rewritten to support it).

The effort we spent porting to the WiiU was comparable to migrating to some other major platforms, even though we had much more experience with those. All in all, the experience was very similar to doing an Xbox 360 port early on in the console’s life cycle.

Our “time to triangle”, which is the time it takes to get good performance with the final rendering techniques, was around 1 month for 2 programmers. This included doing the porting work needed for every other major system such as sound, input, etc. Doing all of the lotcheck requirements took about an additional 1-2 months for 2 developers (1 programmer and 1 tester on average).

On the tech side, having DX11 level features is pretty useful, which makes large parts of the system compatible with our other platforms. Overall the hardware itself is slightly faster than the other current-gen consoles. One very nice advantage the console has is the additional usable memory you have. Having that extra memory made development much easier compared to other current-gen platforms.

http://www.nintendolife.com/news/20...ringing_giana_sisters_twisted_dreams_to_wii_u
 
Because it's extremely slow.

Slow compared to what? The PS3 and 360 RAM was bottlenecked to below 12 GB/s of performance, on top of being higher latency and there simply not being even "half" as much (both the PS3 and 360 had RAM reserved for sound and the OS).
 
Interview from NintendoLife: Black Forest Games on Bringing Giana Sisters: Twisted Dreams to Wii U

Plus side, this argument about whether the WiiU does or does not have DX11 features can be put to rest.
Minus side, it sounds like they are approaching the WiiU as they would approach developing for the X360 - meaning not looking at the eDRAM as the main RAM.
But then again, maybe for this particular game it's not necessary, which makes porting so easy.
 
Slow compared to what? The PS3 and 360 RAM was bottlenecked to below 12 GB/s of performance, on top of being higher latency and there simply not being even "half" as much (both the PS3 and 360 had RAM reserved for sound and the OS).

No, Freezamite is correct. Must we keep repeating the same myths that have been debunked dozens of times in this thread, often to the same people who keep repeating them after they've been informed otherwise? The GPU can access the full bandwidth; the CPU (which always has lower bandwidth requirements) accesses around 10GB/s of it.

This is not how it works. Xbox 360 is 22.4GB/s read or write as well; it's only the CPU that's limited to 11.4 + 11.4.
All in all, the one with the best memory architecture is by far the WiiU. Those huge caches (6MB in total), plus the main pool of 32MB of eDRAM, and the fact that even those 12.8GB/s are much closer to being achievable than the 22.4GB/s of the Xbox 360 memory, are things that will have a determining influence on future projects.
(What 6MB of cache, though? Are you counting the ones on the GPU, presumably for Wii BC, and adding them to the CPU ones?)

360, note the GPU bandwidth:
[Xbox 360 bandwidth diagram]

PS3:
[PS3 memory bandwidth table]
 
(What 6MB of cache, though? Are you counting the ones on the GPU, presumably for Wii BC, and adding them to the CPU ones?)
Yes, since I was talking about "automatic" features that don't require any intervention from the programmers, I added them together even knowing that their functions are really different.
It's 3MB for the CPU (compared to 1MB on Xbox 360) and 2MB of eDRAM + 1MB of eSRAM on the GPU that are not under the programmer's control.
Of course, they will also act as caches (it wouldn't make any sense to leave those idle while in WiiU mode), so they are something to consider when speaking about the memory layout.
 
No, Freezamite is correct.

I never said otherwise.

I added to it. The PS3/360 are filled with so many bottlenecks that even if most of the hardware wasn't used in the Wii U, it could still outperform them in most areas.

The only port that didn't show some degree of better performance on the Wii U at launch (when the dev kits were at their worst) is Darksiders 2, and that is only because THQ had gone bankrupt, leaving the game no budget, and there were only 4 or 5 people working on the port (from what I heard). That's not including things we've been made aware of, like the CPU cores not all being used in a lot of games.

As I've said, no dev has ever complained about a problem with the Wii U memory, not even an inch. All it has received is praise for how much there is and how fast it is. The RAM is not, nor has it ever been, "slow". At its worst it would still be doing better than the others in real-world scenarios.

You can post those screenshots that show the theoretical potential of the 360 memory all day, but we know better at this point. Just as the 360 CPU could only actually use 2 cores at once for gaming, as the other was used for the OS and sound, which could sometimes take a core and a half. I learned all of that from the other big contributors in this thread.
 
Well, that's a bog-standard R700 chip put to bed then. This seems more and more like a hybrid chip design, a little bit of one and a little bit of something else. Of course we knew that already, but it's nice to have it confirmed.
 
I never said otherwise.

I added to it. The PS3/360 are filled with so many bottlenecks that even if most of the hardware wasn't used in the Wii U, it could still outperform them in most areas.

The only port that didn't show some degree of better performance on the Wii U at launch (when the dev kits were at their worst) is Darksiders 2, and that is only because THQ had gone bankrupt, leaving the game no budget, and there were only 4 or 5 people working on the port (from what I heard). That's not including things we've been made aware of, like the CPU cores not all being used in a lot of games.

As I've said, no dev has ever complained about a problem with the Wii U memory, not even an inch. All it has received is praise for how much there is and how fast it is. The RAM is not, nor has it ever been, "slow". At its worst it would still be doing better than the others in real-world scenarios.

You can post those screenshots that show the theoretical potential of the 360 memory all day, but we know better at this point. Just as the 360 CPU could only actually use 2 cores at once for gaming, as the other was used for the OS and sound, which could sometimes take a core and a half. I learned all of that from the other big contributors in this thread.



I was replying to something very specific, the myth that the 360 bandwidth is halved due to the read/write thing. The theoretical bandwidth to the GPU is still 22GB/s. Yes, they hit below those theoreticals, and yes, the Wii U probably has a more efficient controller, but we're talking all theoretical numbers here. In terms of theoreticals, saying 11GB/s for the 360 is just a myth that's been debunked many times and keeps popping up.

If you meant the other thing, going from 22 to 11 just with inefficiencies, that number is pulled out of your behind, without citations.

And you'll notice the PS3 one I provided does have measured, not theoretical, numbers.
 
I was replying to something very specific, the myth that the 360 bandwidth is halved due to the read/write thing. The theoretical bandwidth to the GPU is still 22GB/s. Yes, they hit below those theoreticals, and yes, the Wii U probably has a more efficient controller, but we're talking all theoretical numbers here. In terms of theoreticals, saying 11GB/s for the 360 is just a myth that's been debunked many times and keeps popping up.

If you meant the other thing, going from 22 to 11 just with inefficiencies, that number is pulled out of your behind, without citations.

And you'll notice the PS3 one I provided does have measured, not theoretical, numbers.

It not having bottlenecks was the myth. I've never seen a single person debunk the findings of its bottlenecks, as they were the result of real-world analysis of the 360 after the fact. Debunking them would require newer technical facts discovered after those findings were made. All you've posted are screenshots of on-paper stats that are already known.

As I said, you can post those photos all day long, but we know better at this point. They've been posted every time this subject has come up that I've seen, but the hardware doesn't work like that in reality. You are posting the very things that have been disproved to try to disprove their disproof.

Bottlenecks don't magically disappear. No matter how grand everything else is, it will come to a crawl when that bottleneck is hit. That will be the limit of the performance in most situations. From what I heard from the pros, the high numbers in the 360/PS3 were actually introduced as a means to get around the bottlenecks that were already present in the design.
 
It not having bottlenecks was the myth. I've never seen a single person debunk the findings of its bottlenecks, as they were the result of real-world analysis of the 360 after the fact. Debunking them would require newer technical facts discovered after those findings were made. All you've posted are screenshots of on-paper stats.

As I said, you can post those photos all day long, but we know better at this point. They've been posted every time this subject has come up that I've seen, but the hardware doesn't work like that in reality. You are posting the very things that have been disproved to try to disprove their disproof.

Bottlenecks don't magically disappear. No matter how grand everything else is, it will come to a crawl when that bottleneck is hit. That will be the limit of the performance in most situations. From what I heard from the pros, the high numbers in the 360/PS3 were actually introduced as a means to get around the bottlenecks that were already present in the design.

We've been talking theoretical figures up to this point, so why bring up these real-world analyses in the first place? Can you link to them? You're making a claim to the contrary; surely it isn't "begging" when you're the one claiming figures that don't seem to hold up.
 
Thanks for the reply. I understand better what you mean about using multiplatform titles as a reference, but I still think it's disingenuous to do so for a couple of reasons. Please allow me to explain.

First off, you say you acknowledge the difference in experience between Wii U and PS360 hardware. In doing so we are looking at two different architectural methodologies, GPU-centric vs CPU-centric respectively. That might not be a problem since they can accomplish a similar end, but it leads me to the second point of why it may not be an apples-to-apples comparison.

Consider this snippet of an excellent post a couple pages back comparing GC and Xbox

Combine the architectural differences (which likely result in greater effort for porting) with the comparatively minuscule Wii U install base and the arguably lower buying power of Wii U's demographic, and I can't think of a single reason why devs would do anything above and beyond for a multiplatform Wii U release. As I quoted above, in the past it was the same with Xbox. The largest incentive (even if that incentive was Microsoft paying out) got the prettiest version of a multiplatform game. Critical to my point here, the prettiest version was NOT simply a matter of the most powerful hardware.
So Microsoft paid out to have the prettiest version of multiplatform games during the PS2/GC/Xbox era? That story sounds entirely unfounded and like it was intended to discard the inevitable question of why the Xbox, unlike the Wii U, actually did get superior multiplatform versions despite its small install base compared to the PS2. The explanation that MS simply had the better (more powerful, developer-friendly) hardware, allowing developers to produce better-looking versions without the need for any incentives (financial or not) because it didn't require much additional effort, seems much more plausible.
Finally, as you say, comparing Halo 4 to Uncharted 3, I agree it would be meaningless. As you say, different design etc. But that's not a good analogy. We were originally comparing Bayo 1 to Bayo 2. Similar design, art style, developer; not sure what engine it is on, but it may share a lot there too.
Then use the difference between Halo 3 and 4 as an analogy. Those games are similar in design, art style, developer, etc. as well, and the graphical progression has nothing to do with running on more powerful hardware. The fact that there's an (imo) bigger variability between Halo 3 & 4 on the same hardware basically automatically invalidates the use of the difference between B1 & 2 as an argument for 'significantly more powerful hardware'.
Now I don't mean to come across as pugnacious here or anything, but I'd say that while Bayo 1 to Bayo 2 may not be a perfect comparison, for the reasons I mention above it's probably better than a PS360 multiplat or port. I still think the question is deciding to what extent Platinum can take knowledge of PS360 and Bayo 1 and apply it to Wii U Bayo 2. Sort of to your point, I still would be curious to see a PS4/XBone multiplat (without a PS3/360 version) ported to Wii U. I reckon the similar feature sets and architectures would result in a prettier Wii U version than if the game had originally targeted PS360. How do you feel about that hypothetical scenario?
I think that the outcome of a hypothetical scenario that avoids direct comparisons would convince no one. If the Wii U is much, much more powerful than PS360 hardware, then the multiplatform cross-generation games will show that over time. If there's an incentive to create a better looking DX11-feature version on the Xbone and PS4, then there's an incentive to create a better looking Wii U version as well, if the hardware allows it. There's no need to discard the validity of comparing multiplatform versions because they rather indicate the opposite for the moment, force even more flawed hypothetical situations, or use downsampled prerelease bullshot Bayonetta screens. Time always tells.
 
It not having bottlenecks was the myth. I've never seen a single person debunk the findings of its bottlenecks, as they were the result of real-world analysis of the 360 after the fact. Debunking them would require newer technical facts discovered after those findings were made. All you've posted are screenshots of on-paper stats that are already known.

As I said, you can post those photos all day long, but we know better at this point. They've been posted every time this subject has come up that I've seen, but the hardware doesn't work like that in reality. You are posting the very things that have been disproved to try to disprove their disproof.

Bottlenecks don't magically disappear. No matter how grand everything else is, it will come to a crawl when that bottleneck is hit. That will be the limit of the performance in most situations. From what I heard from the pros, the high numbers in the 360/PS3 were actually introduced as a means to get around the bottlenecks that were already present in the design.

Kindly read my post again. I was strictly talking about theoreticals, and about the myth that the read/write thing halves the bandwidth. I acknowledged that the actual bandwidth figures are lower.
 