WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

That is true, though technically, if the game does not continually store the procedurally generated data, then you could run an entire game like that in less than 200 MB of memory. You would just have to clear the cache as you proceed.

Generally, the only way you will "need" more than 1GB is if all of the textures in the game are the highest resolution possible and the code isn't clean (i.e. it doesn't overwrite data for things that aren't on screen anymore).
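As a purely illustrative sketch of that "clean streaming" idea (not code from any actual engine, and every name here is made up): keep a small resident cache, regenerate or reload chunks on demand, and evict whatever the player has left behind.

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

// Hypothetical streaming cache: keeps at most `capacity` chunks resident,
// evicting the least-recently-used chunk once the player has moved on.
class ChunkCache {
public:
    explicit ChunkCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns the chunk's data, regenerating (or re-loading) it if it was evicted.
    const std::vector<std::uint8_t>& get(int chunkId) {
        auto it = cache_.find(chunkId);
        if (it != cache_.end()) {
            lru_.remove(chunkId);          // mark as most recently used
            lru_.push_front(chunkId);
            return it->second;
        }
        if (cache_.size() >= capacity_) {  // evict data that is no longer needed
            cache_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(chunkId);
        return cache_[chunkId] = generate(chunkId);
    }

private:
    // Stand-in for procedural generation or a disc read.
    static std::vector<std::uint8_t> generate(int chunkId) {
        std::vector<std::uint8_t> data(1 << 20);  // 1 MB per chunk
        for (std::size_t i = 0; i < data.size(); ++i)
            data[i] = static_cast<std::uint8_t>((chunkId * 31 + i) & 0xFF);
        return data;
    }

    std::size_t capacity_;
    std::unordered_map<int, std::vector<std::uint8_t>> cache_;
    std::list<int> lru_;
};
```

With a cache like that, the resident footprint is bounded by the capacity you pick, not by the total amount of content in the game.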

A dev would have to make an intentional effort to make a game require more than 2GB of RAM.

Maybe when we reach the point where we can game in 4K, then I could see us absolutely needing more than 2GB.

I wouldn't go quite that far but it's interesting to note that Crysis (the original) was running near the 32-bit limit of 2 gigabytes of memory in DX9 mode. DX9 is highly wasteful of memory compared to DX10/11 and running it in DX10 mode would reduce memory usage down to 1.3 gigabytes or so. And that was with two discrete memory pools so that there was always some duplication going on that isn't necessary on consoles. And, since Crysis maps are rather huge, I really don't think that memory will be an issue on Wii U in terms of map sizes.

While I don't want to predict what PS4 games will look like in 2017 when using the full 5-6 gigabytes of memory (or how they will compare to Wii U games using 1GB of memory), I do agree that in many cases the difference will be largely superficial. PC games are still 32-bit and it remains to be seen what happens once they finally go 64-bit. Maybe it will be largely superficial (in terms of texture sizes) and maybe it will be genuinely useful (in terms of increased in-game content). For all we know, the memory size differential may not become readily apparent until Nintendo is ready to launch their next console.
 
Assembly has always been my favorite means of programming.

It is the only way to truly get the most out of hardware.
It's also considered bad code by today's standards.

And for good reason.
I wonder if Shin'en program some of their game components in assembly.
Of course they do, or did.

They could never have dreamed of doing that on the GBA, or of doing "shaders" on the DS, otherwise.
They get such a high quality model with so little data put into use. Also, on another note, the Wii hardware had some really nice occlusion capability.
Stop, just stop.

One game claimed to be using it; the "one game claimed to use it, it must be in the spec AND FUN FACT: IT'S REALLY GOOD AT DOING IT" mentality has to stop; it clearly wasn't.

First of all, you're looking at the model passes; not all passes must be done by the hardware; one can pre-bake ambient occlusion. If you had read the same articles you're drawing your conclusions from, you'd know as much:

In our proprietary Game editor we place all of the geometry, bake ambient occlusion, setup lighting conditions and so on.
Source: http://jettrocket.wordpress.com/2010/06/21/part-three-from-vision-to-wii/
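To make the baked-versus-realtime distinction concrete, here is a minimal, hypothetical sketch of what a tool-side ambient occlusion bake can look like: the expensive visibility sampling happens once, offline, per vertex, and at runtime the stored factor is just multiplied into the lighting, so no special occlusion hardware is involved. The castRay callback and the sampling pattern are stand-ins, not anything from Shin'en's editor.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Crude deterministic direction: a point on a golden-angle spiral over the
// unit sphere, flipped into the hemisphere of the normal (illustrative only).
static Vec3 sampleHemisphere(const Vec3& n, int s) {
    float a = 2.399963f * s;
    float z = 1.0f - (s + 0.5f) / 64.0f;
    float r = std::sqrt(1.0f - z * z);
    Vec3 d{ r * std::cos(a), r * std::sin(a), z };
    float dot = d.x * n.x + d.y * n.y + d.z * n.z;
    if (dot < 0.0f) { d.x = -d.x; d.y = -d.y; d.z = -d.z; }
    return d;
}

// Offline pass (editor/tool side): estimate how much of the hemisphere above
// each vertex is blocked by geometry and store it as a per-vertex factor.
// `castRay` stands in for whatever ray/scene intersection the tool provides.
std::vector<float> bakeAmbientOcclusion(const std::vector<Vec3>& positions,
                                        const std::vector<Vec3>& normals,
                                        bool (*castRay)(const Vec3&, const Vec3&)) {
    const int kSamples = 64;
    std::vector<float> occlusion(positions.size(), 1.0f);
    for (std::size_t v = 0; v < positions.size(); ++v) {
        int blocked = 0;
        for (int s = 0; s < kSamples; ++s)
            if (castRay(positions[v], sampleHemisphere(normals[v], s))) ++blocked;
        // Shipped with the mesh (e.g. packed into vertex colors); at runtime it
        // is simply multiplied into the lighting, which fixed-function hardware
        // handles with no dedicated occlusion feature at all.
        occlusion[v] = 1.0f - static_cast<float>(blocked) / kSamples;
    }
    return occlusion;
}
```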

This plays to the Wii's strengths, which is texturing; real-time ambient occlusion is a completely different implementation that would be way too taxing and would require software manipulation to achieve (in fact, perhaps even this baked pass required as much, as I imagine it being done via normal mapping, which wasn't in the spec, but what I'm saying is that a real-time version would be suicide), jeez. That means it's not showing off any "really nice occlusion capability".

Stop... jumping... to... conclusions!


And you've gotta quit comparing small teams that use proprietary tools on a limited scope, and whose heritage is the demoscene, to big studios. Shin'en deserves lots of praise, yes, but they're also only capable of doing it because they're small and their games are small in scope (they say as much; being small is an option, and there's reason to be so). Their source code tree structures are probably pretty complex due to them actually going assembly and the like, but still manageable because they wrote it and there aren't 100 dudes messing with the code or adding to it. It's a luxury.

Also, note what I said at the beginning of the post: assembly is considered bad code by today's standards because if you go assembly you'll be making ports a nightmare. Shin'en isn't tackling huge scope, nor are they doing multiplatform, and that's the reason. Of course a "big studio" can't go there; they would have to be mad to, or at least have a very big incentive (a huge userbase on said platform) to do so.

Thankfully though, they don't have to, and you're just comparing apples to oranges.
 
Froblins runs at 22fps on an HD 4890, by the way. Not really sure how that's helping Wii U, especially since R700 is still outdated and that's a much more powerful card (almost Xbox One level?).

Also, that's a tech demo. Everyone knows games play differently. There was never a game made with it that I know of.

It also has tons of characters running around a very large area and using GPGPU to control their AI. Not to mention tessellating the terrain.

It's not really meaningful in terms of general games running on Wii U. There is a point of diminishing returns in increasing geometric detail and, by and large, the difference only becomes apparent when you're up close and personal with the object.

Edit:

Speaking of ambient occlusion, I couldn't help noticing it in Wind Waker HD. It's presumably screen-space, but it's still no mean feat considering it's running at 1080p/30. Having low geometric detail probably helps, though.
 
Speaking of ambient occlusion, I couldn't help noticing it in Wind Waker HD. It's presumably screen-space, but it's still no mean feat considering it's running at 1080p/30. Having low geometric detail probably helps, though.

Yeah, looks like screen-space.

Mind you, SSAO isn't particularly taxing (vs HDAO/HBAO). I believe Crytek got their C2/C3 implementations down to ~1ms frametime for PS360.
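For context, a rough CPU-side sketch of the basic SSAO idea (in practice it runs in a pixel shader, and the sample pattern and names here are made up): each pixel only needs a handful of depth-buffer taps, which is why the simpler SSAO variants stay cheap compared to HDAO/HBAO.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Rough reference implementation of basic screen-space ambient occlusion.
// Per pixel: sample a few nearby depths and count how many are noticeably
// closer to the camera, suggesting geometry hanging over this pixel.
std::vector<float> ssao(const std::vector<float>& depth, int width, int height,
                        int radius = 4, float bias = 0.002f) {
    static const int kOffsets[8][2] = {
        {1, 0}, {-1, 0}, {0, 1}, {0, -1}, {1, 1}, {-1, -1}, {1, -1}, {-1, 1}};
    std::vector<float> ao(depth.size(), 1.0f);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float center = depth[y * width + x];
            int occluded = 0;
            for (const auto& o : kOffsets) {
                int sx = std::clamp(x + o[0] * radius, 0, width - 1);
                int sy = std::clamp(y + o[1] * radius, 0, height - 1);
                if (depth[sy * width + sx] < center - bias) ++occluded;
            }
            ao[y * width + x] = 1.0f - occluded / 8.0f;  // 0 = fully occluded
        }
    }
    return ao;  // multiply into the ambient term when shading
}
```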
 
That is an extreme exaggeration of my point and a far cry from what I was trying to say.
Not really. Any game can fit onto a DVD if you compress the assets enough and make cuts to the resolution and fidelity of those assets -- just like any game can run comfortably with 2GB of memory, if the same things hold true.

Though, why are all of you raging about this? It's just my opinion. If you don't like it, you don't have to believe it. Goodness.
I'm not raging, I'm disagreeing with you.

Generally, the only way you will "need" more than 1GB is if all of the textures in the game are the highest resolution possible and the code isn't clean (i.e. it doesn't overwrite data for things that aren't on screen anymore).

A dev would have to make an intentional effort to make a game require more than 2GB of RAM.

Maybe when we reach the point where we can game in 4K, then I could see us realistically needing more than 2GB.
Playing Tomb Raider at high spec pushed me past 2GB of VRAM usage, and that's a game that was made for and targeting current-gen systems. As the generation wears on, a low amount of memory will become a larger and larger issue -- just like it has with current-gen consoles.

You don't need 4k resolutions to blow through VRAM.

And you've gotta quit comparing small teams that use proprietary tools on a limited scope, and whose heritage is the demoscene, to big studios. Shin'en deserves lots of praise, yes, but they're also only capable of doing it because they're small and their games are small in scope (they say as much; being small is an option, and there's reason to be so). Their source code tree structures are probably pretty complex due to them actually going assembly and the like, but still manageable because they wrote it and there aren't 100 dudes messing with the code or adding to it. It's a luxury.
Well put; took the words from my mouth. Different developers have different methodologies and different requirements and objectives. Shin'en, coming from the demo scene, is focused on technical brilliance and optimization. That's great for them, but they're not a ruler to measure other teams against.
 
Monsters University used 20GB of memory per frame.

Toy Story 2 (from 1999) was already up to 4GB a frame.

Resogun dedicates more than the entire memory of the PS3 to just the voxels/cubes.

Doesn't seem like there's ever enough memory.
 
One game claimed to be using it; the "one game claimed to use it, it must be in the spec AND FUN FACT: IT'S REALLY GOOD AT DOING IT" mentality has to stop; it clearly wasn't.

First of all, you're looking at the model passes; not all passes must be done by the hardware; one can pre-bake ambient occlusion. If you had read the same articles you're drawing your conclusions from, you'd know as much:

Source: http://jettrocket.wordpress.com/2010/06/21/part-three-from-vision-to-wii/

This plays to the Wii's strengths, which is texturing; real-time ambient occlusion is a completely different implementation that would be way too taxing and would require software manipulation to achieve (in fact, perhaps even this baked pass required as much, as I imagine it being done via normal mapping, which wasn't in the spec, but what I'm saying is that a real-time version would be suicide), jeez. That means it's not showing off any "really nice occlusion capability".

Stop... jumping... to... conclusions!


And you've gotta quit comparing small teams that use proprietary tools on a limited scope, and whose heritage is the demoscene, to big studios. Shin'en deserves lots of praise, yes, but they're also only capable of doing it because they're small and their games are small in scope (they say as much; being small is an option, and there's reason to be so). Their source code tree structures are probably pretty complex due to them actually going assembly and the like, but still manageable because they wrote it and there aren't 100 dudes messing with the code or adding to it. It's a luxury.

Also, note what I said at the beginning of the post: assembly is considered bad code by today's standards because if you go assembly you'll be making ports a nightmare. Shin'en isn't tackling huge scope, nor are they doing multiplatform, and that's the reason. Of course a "big studio" can't go there; they would have to be mad to, or at least have a very big incentive (a huge userbase on said platform) to do so.

Thankfully though, they don't have to, and you're just comparing apples to oranges.

I wasn't basing that statement solely on Jett Rocket. It just reminded me. That is why I said "on another note" (i.e. aside from Jett Rocket). I was speaking in general after that. Wii games often had global illumination, which is one of the most common uses of ambient occlusion. Games like Mario Galaxy and Metroid Prime 3 seemed to use it extensively, or at least that's what it looked like.

Though, why are you so up in arms about this? The TEV on the Wii GPU could pull off nearly any last gen texture effect to a decent degree.
 
Playing Tomb Raider at high spec pushed me past 2GB of VRAM usage, and that's a game that was made for and targeting current-gen systems. As the generation wears on, a low amount of memory will become a larger and larger issue -- just like it has with current-gen consoles.
Converting assets is not so hard, provided the base game can run. Of course there are sweet spots, and compressing too much is never optimal (cough Zelda TP cough), but it can be done; lots of developers did so this generation on console conversions via batch conversion, and although that's not a professional way to do things, no one will notice unless they had a more optimized version in hand and therefore knew it was possible.

People forget one thing though: the Wii U really was designed this way. More RAM could be very counterproductive for it, as the other two consoles have been designed with reliance on a hard drive in mind; there's no such thing with the Wii U, where the drive is plugged in via USB 2.0, so you can only dream of getting roughly ~30 MB/s. This means you have to keep loads short, because you have 22 MB/s on the Blu-ray drive and not much more via USB 2.0; it makes things more WYSIWYG for devs, but it also means the system can't live beyond its means; you have to make data as light as possible.
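Rough back-of-the-envelope numbers for those transfer rates (the load sizes below are hypothetical):

```cpp
#include <cstdio>

int main() {
    const double discMBps = 22.0;   // quoted Wii U optical drive rate
    const double usbMBps  = 30.0;   // rough ceiling for USB 2.0 storage
    const double loadMB[] = {256.0, 512.0, 1024.0};  // hypothetical load sizes

    for (double mb : loadMB)
        std::printf("%6.0f MB: ~%4.0f s from disc, ~%4.0f s over USB 2.0\n",
                    mb, mb / discMBps, mb / usbMBps);
    return 0;
}
// Filling the full 1 GB game-accessible pool in one go would take roughly
// 45-50 s from disc, which is why data has to stay light and loads have to
// be broken up or streamed.
```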

GTAV, for instance, is living above its means on that hardware, streaming from optical disc and hard drive, and not supporting Xbox 360s without a hard drive; it's mostly standard on the 360 to have some form of storage, though. Nintendo should take the cue fast and discontinue the 8 GB version "with 3 GB available" to the user; make it at least 16 GB with 12 GB accessible.

Nevertheless, it might be the platform with consistently shorter "plug and play" loads because of it, just like the GC was; that stems from the lower peak bandwidth and RAM the developers can use (the GTAV method being the "future" for newer platforms, seeing how big a bottleneck 27 MB/s becomes if you have 6 GB of RAM). The Wii U, though, has inflicted the GTAV absence upon itself (I don't think it would be out for it even if they had a proper hard drive as standard, but it would be better suited for it, if that makes sense).

As is, I don't think an 8 GB model could pull it off, and that's just sad; 4 GB Xbox 360 sad, minus the capability of putting an internal SATA HDD into it (not bound to 30 MB/s).
I wasn't basing that statement solely on Jett Rocket. It just reminded me. Wii games often had global illumination, which is one of the most common uses of ambient occlusion. Games like Mario Galaxy and Metroid Prime 3 seemed to use it extensively.
You're just confusing the fact it could do 8 hardware lights for free with whatever you want to conclude (occlusion, is it?). Or perhaps you think the rim shading/fairy shader on Mario Galaxy is occlusion?

I won't even try to elaborate further, I made my point; please drop it.
 
I won't even try to elaborate further, I made my point; please drop it.

Just drop it? You're the one who made a big issue out of it. Though, whatever.

Playing Tomb Raider at high spec pushed me past 2GB of VRAM usage, and that's a game that was made for and targeting current-gen systems. As the generation wears on, a low amount of memory will become a larger and larger issue -- just like it has with current-gen consoles.

You don't need 4k resolutions to blow through VRAM.

I played through all of Tomb Raider (that game was such a disappointment, by the way) on my PC with the settings at the highest possible. Tessellation and all. My 560 Ti does not even have 2 GB of RAM. I ran into no major issues. It ran pretty fluidly.

I can think of no circumstance where not having more than 1GB would absolutely prevent a game from being able to run effectively at the moment. The days of memory/storage capacity bottlenecks are behind us.
 
Just drop it? You're the one who made a big issue out of it. Though, whatever.
I only did because you made me.

You try to pose theories you pulled out of nowhere as facts.


And you never backtrack on them or admit to being wrong. Like I said before, it's damaging to the thread, because we have to waste our time debunking them for someone who might just bring them up as fact two pages later. You seem to believe things become true through repetition, your repetition.

Hell, I'll shut up, I was trying to not say anything, but it was too much.
 
I only did because you made me.

You try to pose theories you pulled out of nowhere as facts.

Where did I ever promote any of that as fact?

That was just pure theory. You took it as fact apparently, but I made no such claim.

I said it had good occlusion capability (as in potential), nothing more.

And you never backtrack on them or admit to being wrong. Like I said before, it's damaging to the thread, because we have to waste our time debunking them for someone who might just bring them up as fact two pages later. You seem to believe things become true through repetition, your repetition.

Hell, I'll shut up, I was trying to not say anything, but it was too much.

You mean I never admit to doing something that I didn't do. As I just said, my exact words were "occlusion capability". I was speaking of theoretical capability. The only way that would be wrong is if it were impossible to do occlusion on the Wii at all.

Is it impossible to do ambient occlusion on the Wii?

If not, then you are the only one who is in the wrong and refuses to admit it here. Don't throw dirt and run when you get called out on your mistake. That is so immature.
 
Where did I ever promote any of that as fact?

That was just pure theory. You took it as fact apparently, but I made no such claim.

I said it had good occlusion capability (as in potential), nothing more.



You mean I never admit to doing something that I didn't do. As I just said, my exact word was "capability". The only way that would be wrong is if it were impossible to do occlusion on the Wii at all.

Is it impossible? If not, then you are the only one who is in the wrong and refuses to admit it here. Don't throw dirt and run when you get called out on your mistake. That is so immature.
Can I laugh as an answer? Otherwise we'll just get into quote wars you'll dance your way around, as always.

You should try politics, or fanfics.

Anyway, this is getting offtopic, I'll drop it since you won't.

EDIT: There:

Also, on another note, the Wii hardware had some really nice occlusion capability.
Not promoted as a fact? LOL.


Extra points for making it backwards "I'm not wrong, you're the one in the wrong!". Notice I actually pulled "facts" and a link to counter your crazy assumptions I call theories; you provided me none, and please don't. You ought to have better things to do, I know I do.
 
I played through all of Tomb Raider (that game was such a disappointment, by the way) on my PC with the settings at the highest possible. Tessellation and all. My 560 Ti does not even have 2 GB of RAM. I ran into no major issues. It ran pretty fluidly.
I'm not sure what you're saying here, that the game can't or doesn't go above 2GB VRAM usage because your card ran it? Because it certainly can; as I said, it did for me. I don't have my exact settings on hand, but I cranked up everything as high as I could except, IIRC, shadows and AA due to performance issues before new drivers had hit. I was above 2GB VRAM used.

I can think of no circumstance where not having more than 1GB would absolutely prevent a game from being able to run effectively at the moment. The days of memory/storage capacity bottlenecks are behind us.
Again, I could say the same thing of storage media. Just because developers can do without it doesn't mean they should. Games are all the better for pushing technological boundaries, not making do with what we (almost always incorrectly) assume is the limit we can feasibly use.
 
Can I laugh as an answer? Otherwise we'll just get into quote wars you'll dance your way around, as always.

You should try politics.


Anyway, this is offtopic on my part, I'll drop it since you won't.

I am dancing around nothing. You were mistaken, and you are refusing to admit your mistake.


You jumped to conclusions and went into a rant about what you "thought" I did. I know exactly what I said and what I did. No aspect of saying "capability" is a lie unless the capability is not there. I said nothing more than "occlusion capability". The post is still there for anyone to read: http://www.neogaf.com/forum/showpost.php?p=83002109&postcount=10535. Do you know the definition of the word capability? It's not a claim of fact, it's a claim of potential, which is exactly how I meant it, as that entire post was about hardware "potential".

You assumed and made up the rest entirely on your own, and now you are trying to play it off to save face and run.
 
What mistake? Seriously, I must be talking to a wall.

You're the one saving face, you're just not doing it very well.

What mistake? The one I just called you out on in 3 posts in a row.

The one where you took my statement about the Wii having good occlusion capability to be a claim of grandeur about Jett Rocket. I said "on another note" prior to the statement to specify that I was making a general statement about the Wii hardware.
 
I don't know anything about occlusion, but the spat is at this stage:
Jett Rocket's occlusion got defeated by lostinblu showing it was baked. krizzx then said global illumination is an example of it and was used in Galaxy and Metroid Prime 3 (no evidence, but I don't know what it is or how obvious/well known it was), so I guess that needs discussion/refuting/recategorising.
Also, you said good capability, not just capability.
 
I don't know anything about occlusion, but the spat is at this stage:
Jett Rocket's occlusion got defeated by lostinblu showing it was baked. krizzx then said global illumination is an example of it and was used in Galaxy and Metroid Prime 3 (no evidence, but I don't know what it is or how obvious/well known it was), so I guess that needs discussion/refuting/recategorising.
Also, you said good capability, not just capability.

I said they were used in Metroid Prime 3 and Mario Galaxy, "or at least that's what it looked like." I'm not 100% certain, so I'm not claiming that it absolutely is, hence those last few words.

I was never trying to make any claim as a matter of fact, as I don't have enough information to draw such a conclusion. The only thing I claimed as fact is that the Wii had good occlusion capability.
 
I said they were used in Metroid Prime 3 and Mario Galaxy "from what I can tell".

So... based on you saying "on another note" in the edit, you first of all didn't provide the examples/proof there, but then later cite something you aren't sure of? :p (Note this is meant to show "good capability" per your claim.) Note again: I know nothing of these things, just picking you up on bad critical writing.
 
So... based on you saying "on another note" in the edit, you first of all didn't provide the examples/proof there, but then later cite something you aren't sure of? :p (Note this is meant to show "good capability" per your claim.) Note again: I know nothing of these things, just picking you up on bad critical writing.

That statement was about "the Wii", not Jett Rocket.

Though, Jett Rocket did use global illumination on the environment, for the record.
[Image: p3-lighting-1.jpg]

Whether or not it was baked, I cannot say (that would be a lot of baking), but it used it.

I try to write things as concisely as possible, and when people misunderstand, I try to clarify, but they always run with their misunderstanding. No matter what I say, they will try to make it into me spinning it when I'm just trying to make what I meant clear. They never ask or check. They jump to conclusions and run with it. That's what leads to these big, stupid and pointless arguments.

This thread turns into a brawl every 10 posts.


If they would just ask or verify before making an assumption, then we wouldn't keep having these big arguments. Thing is, they are intentionally trying to make issues out of what I say, as if they have some vendetta against me for reasons that are beyond my knowledge. They don't care about clarification because they want there to be argument. It's the reason a lot of them are even in this thread. They certainly don't try to contribute anything to the analysis (not naming anyone specific).
 
I'm still confused by what you were saying in the post I quoted above, regarding Tomb Raider. Could you clarify for me?

Sure.

I'm not sure what you're saying here, that the game can't or doesn't go above 2GB VRAM usage because your card ran it? Because it certainly can; as I said, it did for me. I don't have my exact settings on hand, but I cranked up everything as high as I could except, IIRC, shadows and AA due to performance issues before new drivers had hit. I was above 2GB VRAM used.


Again, I could say the same thing of storage media. Just because developers can do without it doesn't mean they should. Games are all the better for pushing technological boundaries, not making do with what we (almost always incorrectly) assume is the limit we can feasibly use.

No, the bolded is not what I was saying at all.

I was saying that even 2 GB was not necessary to run that game at its highest settings. The game will still run fine even if you don't have 4 GB. 4GB is just overkill at this point in time. It's more insurance than anything else.

Faster memory is far more beneficial than large amounts now. Consoles aren't PCs that need to dedicate memory to system tasks and other things all of the time. They have pure access.

A lot of people claimed that the Wii U having only 1 GB of accessible memory (currently) while the Xbox One and PS4 have 4-5 GB will prevent games from being ported within reason, but I disagree. No optimized code would make that volume of RAM an absolute requirement.
 
As for Wii U, we know it fished above R700 on stuff like the Eyefinity implementation.
Actually, this was not confirmed.

Multiple display support (that was in the press release) doesn't have to imply Eyefinity (which is just being able to hook up to 3-6 displays). You can still output to more than one monitor on older GPU cards.

Regardless though, I don't think it means much considering Wii U only needs to output to one screen (and was never explicitly stated to do more).

Edit: Was referring to the gamepad. But lo and behold, Nintendo was only pushing for one display (off-TV), so I'm still kinda right!
 
I was saying that even 2 GB was not necessary to run that game at its highest settings.
And I agree, as I never said it was necessary. I used it as an example specifically because it can reach that amount, and it's a current gen game targeting 7-8 year old hardware. If a game optimized for current gen hardware can reach 2GB of VRAM usage, what does that illustrate going forward as assets increase in quality/resolution and size? That's my point: 2GB may seem like a lot today, even if there are games that get there already, but it won't be out of reach for long as everything trends upwards in size, as it inevitably and always does.

Consoles aren't PCs that need to dedicate memory to system tasks and other things all of the time. They have pure access.

A lot of people claimed that the Wii U having only 1 GB of accessible memory (currently) while the Xbox One and PS4 have 4-5 GB will prevent games from being ported within reason, but I disagree.

We're talking about VRAM, however, so I'm not sure why you're discussing system tasks and other running applications, since that affects RAM more than VRAM (if at all).

Because back to Tomb Raider, it's using up to 2GB+ of only VRAM, in addition to whatever it holds in regular RAM. The WiiU has only 1GB, and it has to act as the pool for both. Thus, my point.
 
Actually, this was not confirmed.

Multiple display support (that was in the press release) doesn't have to imply Eyefinity (which is just being able to hook up to 3-6 displays). You can still output to more than one monitor on older GPU cards.

Regardless though, I don't think it means much considering Wii U only needs to output to one screen (and was never explicitly stated to do more).

The Wii U can output to 3 screens.
 
Eyefinity is for sending video wirelessly, not simply for multiple monitor setups, I thought. I was sure I read that somewhere.

The Wii U can output to 3 screens.

Actually, 5, but it can only effectively do it to two because of the 60 Hz limit. If the Wii U has Bluetooth capability, then it may be possible to output to 5 without the absolute frame rate splitting, I think.

The problem with the multiple GamePad thing is that it streams data at 60 Hz. You can't raise that, to my knowledge. So if you output to two GamePads, each would get a maximum of 30 Hz (60/2).

At least, that is what I remember from earlier in the thread.
 
Edit: Was referring to the gamepad. But lo and behold, Nintendo was only pushing for one display (off-TV), so I'm still kinda right!

No, you're still wrong. Support for two GamePads was confirmed at E3 2012.

TV + GamePad 1 + GamePad 2 = three screens.
 
And I agree, as I never said it was necessary. I used it as an example specifically because it can reach that amount, and it's a current gen game targeting 7-8 year old hardware. If a game optimized for current gen hardware can reach 2GB of VRAM usage, what does that illustrate going forward as assets increase in quality/resolution and size? That's my point: 2GB may seem like a lot today, even if there are games that get there already, but it won't be out of reach for long as everything trends upwards in size, as it inevitably and always does.



We're talking about VRAM, however, so I'm not sure why you're discussing system tasks and other running applications, since that affects RAM more than VRAM (if at all).

Because back to Tomb Raider, it's using up to 2GB+ of only VRAM, in addition to whatever it holds in regular RAM. The WiiU has only 1GB, and it has to act as the pool for both. Thus, my point.

That is incorrect. The Wii U has 1 GB currently available to devs (we don't know all that the other 1GB does, but some could be freed for devs in the future) and 32 MB of extremely fast, low-latency eDRAM.

By Sony's own admission, eDRAM pushes the performance WAY beyond what it would be at a glance. http://www.gamechup.com/ps4-sony-earlier-thought-about-slow-gddr5-edram-1088gbs/

The 32 MB of eDRAM is the thing that really makes the Wii U next-gen, performance-capability-wise.
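As a rough, purely arithmetical illustration of why 32 MB is a useful size (this says nothing about how any real game actually lays out memory on the hardware): the render targets that generate the most bandwidth traffic fit comfortably inside it.

```cpp
#include <cstdio>

int main() {
    struct Target { const char* name; int w, h; } targets[] = {
        {"720p",  1280, 720},
        {"1080p", 1920, 1080},
    };
    for (const auto& t : targets) {
        double colorMB = t.w * t.h * 4 / (1024.0 * 1024.0);  // 32-bit color
        double depthMB = t.w * t.h * 4 / (1024.0 * 1024.0);  // 32-bit depth/stencil
        std::printf("%s: color %.1f MB + depth %.1f MB = %.1f MB (of 32 MB eDRAM)\n",
                    t.name, colorMB, depthMB, colorMB + depthMB);
    }
    return 0;
}
// ~7 MB at 720p and ~16 MB at 1080p, so the buffers that get read and written
// every single pixel could in principle live in the eDRAM rather than main memory.
```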
 
No, you're still wrong. Support for two GamePads was confirmed at E3 2012.

TV + GamePad 1 + GamePad 2 = three screens.
It's actually two. Nintendo wanted to "remove the TV" or whatever was the reason why they promoted the gamepad. But we're still only seeing one gamepad being used for time being (even though it supports 2).
 
I really don't quite understand what you've posted
I said it didn't matter in the end how many displays the Wii U supports. In totality, it only outputs to the TV and the GamePad so far, but Nintendo's philosophy for this thing also pushed for no TV, making it one GamePad.
 
No, you're still wrong. Support for two GamePads was confirmed at E3 2012.

TV + GamePad 1 + GamePad 2 = three screens.

Isn't multiple GamePad support done by selective frame skipping?
If that is the case, then the GPU is still outputting to two 'screens'; it's just that the recipients of this second feed are making it seem like more.
 
That is incorrect. The Wii U has 1 GB currently available to devs (we don't know all that the other 1GB does, but some could be freed for devs in the future) and 32 MB of extremely fast, low-latency eDRAM.

By Sony's own admission, eDRAM pushes the performance WAY beyond what it would be at a glance. http://www.gamechup.com/ps4-sony-earlier-thought-about-slow-gddr5-edram-1088gbs/
Yes, devs have access to 1GB currently; thus, why I said 1GB. I was excluding the 32MB eDRAM because I'm speaking of total volume, not speed or bandwidth. Regardless, neither point addresses what my concern is.

Until we know the WiiU's real-world performance of its eDRAM, saying that it pushes performance way beyond is speculative.
 
Yes, devs have access to 1GB currently; thus, why I said 1GB. I was excluding the 32MB eDRAM because I'm speaking of total volume, not speed or bandwidth.

Until we know the WiiU's real-world performance of its eDRAM, saying that it pushes performance way beyond is speculative.

Shin'en have attested to the huge speed boost that the eDRAM allows.
 
And this is why I scoff at statements that try to make issues out of meager RAM capacity differences over 1GB. No game "needs" more than 2GB of RAM to make a high quality game. Having more than that just allows for more laziness in development.
That's not how it works. Procedural generation leads to small file sizes. That's pretty much it. It still requires tons of raw performance and RAM. Ember in particular is extremely demanding in both regards, even though the executable is only 1kB.
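As a toy illustration of that point (nothing to do with Ember itself; the noise function and sizes are arbitrary): the generator below is a few lines of code, so it costs almost nothing on disc, but the data it expands into at runtime still has to live in RAM and still has to be computed.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Tiny deterministic "noise" function: the entire asset description is this
// formula plus a seed, which is why procedural content barely takes any storage.
static std::uint8_t value(std::uint32_t x, std::uint32_t y, std::uint32_t seed) {
    std::uint32_t h = x * 374761393u + y * 668265263u + seed * 2246822519u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return static_cast<std::uint8_t>(h >> 24);
}

int main() {
    const int size = 4096;  // hypothetical 4096x4096 RGBA texture
    std::vector<std::uint8_t> texture(static_cast<std::size_t>(size) * size * 4);
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
            for (int c = 0; c < 4; ++c)  // the expanded result still needs 64 MB here
                texture[(static_cast<std::size_t>(y) * size + x) * 4 + c] =
                    value(x, y, 7u + c);
    std::printf("generated %zu MB from a ~10-line generator\n",
                texture.size() >> 20);
    return 0;
}
```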
 
I said it didn't matter in the end how many displays the Wii U supports. In totality, it only outputs to the TV and the GamePad so far, but Nintendo's philosophy for this thing also pushed for no TV, making it one GamePad.

So because no game currently supports two GamePads and the system supports off-TV play, you're reducing the screen output to one...

...and what is your point exactly with that?

And I have seen a screenshot somewhere from Wii U Unity where you select between the TV, GamePad 1 and GamePad 2 as the display.

Unity demo, I think it was GDC?
 
That's not how it works. Procedural generation leads to small file sizes. That's pretty much it. It still requires tons of raw performance and RAM. Ember in particular is extremely demanding in both regards, even though the executable is only 1kB.

I"m aware. I was just saying that because people make to much fuss out of memory and storage space these days.

RAM will not be the deciding fact in where or not game gets ported. Not unless we are dealing with the laziest devs on Earth.

Though, wasn't that 64kb demo made to run on pretty archaic hardware? I remember seeing that things years and years ago.
 
Shin'en have attested to the huge speed boost that the eDRAM allows.
That says it's performing its duty, acting as a high-speed place to move data back and forth; it does not give any measure of how fast it's doing that. It certainly does not imply 1TB/s speeds. That's an inference.

Regardless, it doesn't address what I was saying, again. I'm talking about the issues 1GB of unified VRAM/RAM can have when games are already capable of hitting 2GB+ in VRAM usage alone, and speaking about the eDRAM's speed is tangential to that. Please address what I'm talking about, which is not speed but capacity.
 
Can we not do this again please?
The thread was getting productive for a little while at least... What was in the past is in the past and all that jazz
 
So because no game currently supports two GamePads and the system supports off-TV play, you're reducing the screen output to one...

...and what is your point exactly with that?
That it didn't matter if the Wii U were displaying 6 screens. You were never going to see that many.
 