Wii U Speculation Thread of Brains Beware: Wii U Re-Unveiling At E3 2012

Technically, this is all IBM have officially said on the subject:
http://www-03.ibm.com/press/us/en/pressrelease/34683.wss
http://www-03.ibm.com/press/us/en/photo/34681.wss

Curiously enough, in the regurgitated newsbit making the rounds on the internet, the quantifier "a lot" has been used in conjunction with eDRAM. But we have no credible source for that, AFAIK. IBM's own claim is that, in comparison to SRAM, they can put 3x the amount of memory in the form of eDRAM. So where you'd have 4-8MB of on-die L2/L3 in other designs, you could in theory have 12-24MB of similar-purpose memory in IBM's. That tells us next to nothing about how much eDRAM Nintendo could afford to put on a separate die (a la Xenos) if they decided to.

Engadget
Details were scarce about the IBM silicon Nintendo's new HD powerhouse was packing, but we did some digging to get a little more info. IBM tells us that within the Wii U there's a 45nm custom chip with "a lot" of embedded DRAM (shown above). It's a silicon on insulator design and packs the same processor technology found in Watson, the supercomputer that bested a couple of meatbags on Jeopardy awhile back. Unfortunately, IBM wouldn't give us the chip's clock speeds, but if it's good enough to smoke Ken Jennings on national TV, we imagine it'll do alright against its competition from Sony and Microsoft.
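
Purely as a back-of-the-envelope illustration of that density claim (the 3x factor is IBM's stated figure; the SRAM budgets are just the typical L2/L3 sizes mentioned above, not anything confirmed about Wii U):

# IBM's claim: roughly 3x the capacity in the same die area with eDRAM vs SRAM.
EDRAM_DENSITY_FACTOR = 3

for sram_mb in (4, 8):           # typical on-die L2/L3 budgets, illustrative only
    print(f"{sram_mb} MB of SRAM area -> ~{sram_mb * EDRAM_DENSITY_FACTOR} MB as eDRAM")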
 
Thanx Azure & BG
I must have skipped over those responses, as there was a lot of reading to do between the two threads.

Now it's nice to know what a crazy performance boost that would provide, if it were even possible to accomplish.
 
We understand that it is absurd that that much eDRAM would be possible (especially financially), but what if it were true?
RAM access is characterized by two factors - latency and bandwidth. RAM limitations along both of those axes are the bane of high-performance computing today. It does not matter how many jiggaflops of ALU you have at your fingertips: if you cannot feed in data at a steady rate, i.e. if you starve your pipeline, you'll never get the full power of your ALUs. Most experienced coders would tell you that getting high performance out of your (CPU) silicon today is an exercise in optimal data access (usually the sentence ends with '...an exercise in caching', but we can generalize that beyond CPUs).

One of the reasons GPUs are so good at their job is that the job is (normally) constrained to a very favorable set of tasks in terms of data access. As a result, the GPU's work parallelizes well - GPUs excel at scalability, and thus the one memory parameter they mainly care about is sheer BW (to feed their expansive parallelism). Not so with 'general computing' and CPUs. For the latter, both BW and latency are very important (perhaps latency more so). Now, consider that GPUs tend to do more and more general computing these days, and you'll see that GPUs' inherent 'immunity' to the latency problem is slowly dissolving. General computing simply cares about latency, and there's no way around that (when was the last time you saw a CPU without a cache?).

Now, whereas most 'fast' (in the eyes of the general public) RAM today focuses on the BW problem, embedded DRAM actually helps with both latency and BW. So 1GB of eDRAM would be nothing short of a game changer.
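
To make the latency point concrete, here's a minimal, purely illustrative sketch (nothing to do with Wii U hardware specifically): both loops do the same amount of arithmetic, but the shuffled walk defeats the prefetcher and the caches. Exact numbers vary per machine, and CPython's interpreter overhead blunts the effect, but the random walk typically comes out noticeably slower.

import random
import time

N = 1 << 22                      # ~4M elements, enough to spill out of CPU caches
data = list(range(N))

seq_idx = list(range(N))         # cache/prefetch-friendly access order
rand_idx = list(range(N))
random.shuffle(rand_idx)         # same indices, latency-hostile order

def walk(indices):
    start = time.perf_counter()
    total = 0
    for i in indices:            # identical work in both cases
        total += data[i]
    return time.perf_counter() - start

print(f"sequential walk: {walk(seq_idx):.3f}s")
print(f"random walk:     {walk(rand_idx):.3f}s")   # usually the slower one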

Does that answer your question?
 
Just wondering, assuming everything else is literally the same from the dev kit leaks to the final hardware, which should we be more excited for:

- A 768Mbit (96MB) - 1Gbit (128MB) pool of eDRAM complementing a 1GB GDDR3/5 pool of general-purpose shared RAM between the CPU & GPU

- A 32MB pool of eDRAM (as per the subtle hints from brain_stew) complementing a 1.5 - 2GB GDDR3/5 pool of general-purpose shared RAM between the CPU & GPU

TL;DR - Which one do/should we want more, higher eDRAM counts to "low" general memory counts or "decent" (re: lower than the max presented but enough for some tricks) eDRAM amounts to high(er) general memory between the CPU & GPU?
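
For anyone following the unit conversion behind those two options (this assumes, as discussed further down the thread, that the leak's "768MB" was really 768 Mbit):

# 1 byte = 8 bits, so a figure quoted in megabits is 8x smaller in megabytes.
def mbit_to_mb(mbit):
    return mbit / 8

print(mbit_to_mb(768))     # 768 Mbit           -> 96.0 MB
print(mbit_to_mb(1024))    # 1 Gbit = 1024 Mbit -> 128.0 MB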
 
blu explained it very well, but to simplify it even further: even the fastest RAM is slow. No matter how fast your processor is, if all it does is wait for data, all that speed is useless. Like, it's cool if you are damn fast at folding origami cranes, but if the guy handing you the sheets of paper is slow, your speed is worthless.
 
I was going to post something about Project Cars WiiU but for some reason the website name is completely blocked out.

Site is banned.

Here is a quote from the actual preview at Eurogamer.

How that finished product makes it to the market remains in-flux. For now, it's set to be a free-to-play PC game, powered by micro-transactions that will, Slightly Mad says, be competitively pitched, undercutting its competitors in what they're calling a supermarket pricing philosophy. There's the intention of having it come to console too, with Xbox 360, PlayStation 3 and Wii U versions currently slated - which suggests a boxed copy when the project nears completion in 2013, an idea that Slightly Mad isn't entirely averse to.

http://www.eurogamer.net/articles/2011-11-24-project-cars-preview
 
Just wondering, assuming everything else is literally the same from the dev kit leaks to the final hardware, which should we be more excited for:

- A 768Mbit (96MB) - 1Gbit (128MB) pool of eDRAM complementing a 1GB GDDR3/5 pool of general-purpose shared RAM between the CPU & GPU

- A 32MB pool of eDRAM (as per the subtle hints from brain_stew) complementing a 1.5 - 2GB GDDR3/5 pool of general-purpose shared RAM between the CPU & GPU

TL;DR - Which one do/should we want more, higher eDRAM counts to "low" general memory counts or "decent" (re: lower than the max presented but enough for some tricks) eDRAM amounts to high(er) general memory between the CPU & GPU?

I'd take the former
 
blu explained it very well, but to simplify it even further: even the fastest RAM is slow. No matter how fast your processor is, if all it does is wait for data, all that speed is useless. Like, it's cool if you are damn fast at folding origami cranes, but if the guy handing you the sheets of paper is slow, your speed is worthless.

Jesus that was an amazing analogy.
 
So why has this news been given three pages of credence? Just look at it. The site has been a simple (and very occasional) copy-paster of larger news media, showing no connections whatsoever and no real insight at all. Now they supposedly, suddenly, got some insider information - from a Japanese insider, no less. Furthermore, their news goes against both what we know (especially about the CPU) and plain common sense, suggesting a way too humongous piece of embedded eDRAM that, despite being embedded, is somehow still shared between the CPU and GPU.

Come on. This isn't even a good try.
 
All demos were running on Wii assets. It was clear Nintendo was not ready to show anything for the system. I expect EAD Studio Group 1 (Takao Shimizu's team) and Monster Games to work 24h/day for the next E3.

I'd add Retro to that list as well. They're one of the first studios you'd want on tap to demo the hardware's visual prowess.
 
So why has this news been given three pages of credence? Just look at it. The site has been a simple (and very occasional) copy-paster of larger news media, showing no connections whatsoever and no real insight at all. Now they supposedly, suddenly, got some insider information - from a Japanese insider, no less. Furthermore, their news goes against both what we know (especially about the CPU) and plain common sense, suggesting a way too humongous piece of embedded eDRAM that, despite being embedded, is somehow still shared between the CPU and GPU.

Come on. This isn't even a good try.

Everyone just loves to dream along, that's all.
 
Just wondering, assuming everything else is literally the same from the dev kit leaks to the final hardware, which should we be more excited for:

- A 768Mbit (96MB) - 1Gbit (128MB) pool of eDRAM complementing a 1GB GDDR3/5 pool of general-purpose shared RAM between the CPU & GPU

- A 32MB pool of eDRAM (as per the subtle hints from brain_stew) complementing a 1.5 - 2GB GDDR3/5 pool of general-purpose shared RAM between the CPU & GPU

TL;DR - Which one do/should we want more, higher eDRAM counts to "low" general memory counts or "decent" (re: lower than the max presented but enough for some tricks) eDRAM amounts to high(er) general memory between the CPU & GPU?

I'll take the second one. 32MB is still a fantastic amount to work with, and more main RAM allows for more complex/larger worlds and more detailed models/textures.

96MB sounds great, but perhaps overkill for what the rest of the system is rumoured to have.
 
So why has this news been given three pages of credence? Just look at it. The site has been a simple (and very occasional) copy-paster of larger news media, showing no connections whatsoever and no real insight at all. Now they supposedly, suddenly, got some insider information - from a Japanese insider, no less. Furthermore, their news goes against both what we know (especially about the CPU) and plain common sense, suggesting a way too humongous piece of embedded eDRAM that, despite being embedded, is somehow still shared between the CPU and GPU.

Come on. This isn't even a good try.

We're info starved and ready to discuss just about anything....?

We are ready to.... Believe?
 
After playing Pilotwings Resort I really would welcome a Monster Games developed PW for Wii U. But give it the full-blown treatment with five different islands and more vehicles (including the cannon ball).
 
So why has this news been given three pages of credence? Just look at it. The site has been a simple (and very occasional) copy-paster of larger news media, showing no connections whatsoever and no real insight at all. Now they supposedly, suddenly, got some insider information - from a Japanese insider, no less. Furthermore, their news goes against both what we know (especially about the CPU) and plain common sense, suggesting a way too humongous piece of embedded eDRAM that, despite being embedded, is somehow still shared between the CPU and GPU.

Come on. This isn't even a good try.

But you know that some people are going to constantly quote it, saying, "A reliable source has confirmed that Wii U only has 768MB of RAM, and that RAM is probably just so it can use the controller, so it's not more powerful than current gen! It might even end up behind current gen!" This fake leak is something we'll have to deal with until we get the real specs.
 
I'll take the second one. 32MB is still a fantastic amount to work with, and more main RAM allows for more complex/larger worlds and more detailed models/textures.

96MB sounds great, but perhaps overkill for what the rest of the system is rumoured to have.

I was under the impression that about 32MB was necessary to run games at 1080p with FSAA. If the CPU and GPU had to share that, then wouldn't that pose a problem?
If so, I would go for the 96.
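
For a rough sanity check on where a figure like 32MB comes from, here's a naive framebuffer estimate (my own back-of-the-envelope assumptions: 32-bit colour plus 32-bit depth/stencil per sample, no compression or tiling tricks, which real GPUs do use):

# Naive framebuffer size: (colour + depth/stencil) bytes per pixel, times the
# MSAA sample count. Real hardware compresses/tiles, so actual needs are lower.
def framebuffer_mb(width, height, msaa_samples=1, colour_bytes=4, depth_bytes=4):
    return width * height * (colour_bytes + depth_bytes) * msaa_samples / (1024 * 1024)

for samples in (1, 2, 4):
    print(f"1080p, {samples}x MSAA: {framebuffer_mb(1920, 1080, samples):.1f} MB")
# -> roughly 15.8 MB, 31.6 MB and 63.3 MB respectively

By that naive count, 1080p with 2x MSAA lands right around 32MB, which is presumably the ballpark being quoted.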
 
I was under the impression that about 32MB was necessary to run games at 1080p with FSAA. If the CPU and GPU had to share that, then wouldn't that pose a problem?
If so, I would go for the 96.

The 32 MB would be L3 cache, purely for the CPU. It wouldn't have anything to do with graphical output.

According to the article, the 768MB of on-die memory would be shared by the CPU and GPU, so it's obviously bullshit.
 
But you know that some people are going to constantly quote it, saying, "A reliable source has confirmed that Wii U only has 768MB of RAM, and that RAM is probably just so it can use the controller, so it's not more powerful than current gen! It might even end up behind current gen!" This fake leak is something we'll have to deal with until we get the real specs.

Right, which is why we should discuss it and debunk it.
Now the next time it gets brought up, everyone can tell that person why it's not gonna happen, instead of the few debunkings getting lost in the sea of responses.
 
The 32 MB would be L3 cache, purely for the CPU. It wouldn't have anything to do with graphical output.

According to the article, the 768MB of on-die memory would be shared by the CPU and GPU, so it's obviously bullshit.

Weren't there comments strongly hinting that any EDRAM on this system's final design will be its own separate pool of memory to be used between the CPU & GPU? I remember that much coming from brain_stew & wsippel a while back as they thought of how it would be implemented in this system's design.
 
The 32 MB would be L3 cache, purely for the CPU. It wouldn't have anything to do with graphical output.

According to the article, the 768MB of on-die memory would be shared by the CPU and GPU, so it's obviously bullshit.

We are discussing this under the assumption that the article actually made a mistake and meant 768Mbit being shared, therefore 96MB.

But yes, the 32MB would be for the CPU in the second case; it's not discussed whether the GPU would have any eDRAM of its own. So I assume the framebuffer comes from the main pool of memory, like on the Xbox.
 
It sounds like Nintendo will get a huge leap beyond Wii by going with a modern shader architecture (SM4.1 at least) combined with a lot of CPU & GPU embedded memory, for greater speeds than the PS3 can manage.
 
It sounds like Nintendo will get a huge leap beyond Wii by going with a modern shader architecture (SM10.1 at least) combined with a lot of CPU & GPU embedded memory, for greater speeds than the PS3 can manage.


I thought Shader Models didn't get this high yet. Isn't 4.1/5 the current hot shit?
 
It sounds like Nintendo will get a huge leap beyond Wii by going with a modern shader architecture (SM10.1 at least) combined with a lot of CPU & GPU embedded memory, for greater speeds than the PS3 can manage.
I just went from Skyrim to SS, and while SS' art style is definitely growing on me more as I play, I cannot wait to see what their artists are going to make with better lighting tech.

Also, SS is the first time I've thought to myself "this looks like a GameCube game." I guess the shock of going from Skyrim to this put me in Nintendo troll mode or something.
 
Don't get thrown off by the BS rumor, guys. We've gotten more reliable information on these boards. The person who made it up didn't even have any comprehension of what he was writing about, beyond knowing a few buzzwords and numbers that have been tossed around for months in the speculation.

And yeah, if anything the 32 MB eDRAM will be on the GPU.
 
I just went from Skyrim to SS, and while SS' art style is definitely growing on me more as I play, I cannot wait to see what their artists are going to make with better lighting tech.

Also, SS is the first time I've thought to myself "this looks like a GameCube game." I guess the shock of going from Skyrim to this put me in Nintendo troll mode or something.

TP didn't do that too? Btw, you should get that reaction from most Wii games...
 
I just went from Skyrim to SS, and while SS' art style is definitely growing on me more as I play, I cannot wait to see what their artists are going to make with better lighting tech.

Also, SS is the first time I've thought to myself "this looks like a GameCube game." I guess the shock of going from Skyrim to this put me in Nintendo troll mode or something.

GameCube would run SS at 15-25FPS at best, and doesn't SS use some normal mapping?
 
I just went from Skyrim to SS, and while SS' art style is definitely growing on me more as I play, I cannot wait to see what their artists are going to make with better lighting tech.

Also, SS is the first time I've thought to myself "this looks like a GameCube game." I guess the shock of going from Skyrim to this put me in Nintendo troll mode or something.

Yeah, I know what you mean. At the same time, there were moments when I'd definitely think to myself, "this looks way better than TP"
 
TP didn't do that too? Btw, you should get that reaction from most Wii games...
TP is a 6 year old game. It looked pretty great at the time, especially considering it was a cube game.

GameCube would run SS at 15-25FPS at best, and doesn't SS use some normal mapping?
I doubt the Cube could even run SS, and I don't know about normal mapping. I'm just saying I got that reaction while playing it because I had been playing Skyrim. SS is a good-looking game considering the hardware it's on.

Yeah, I know what you mean. At the same time, there were moments when I'd definitely think to myself, "this looks way better than TP"
Oh, it's definitely better than TP. Especially the animations.
 
There were two responses giving hypotheticals.

I know Ace suggested that a couple of times a while back. I wouldn't be capable of starting a thread like that.

The problem is that the real tech people aren't exactly regulars on this site.
I mean, Brain Stew does what he can, and there are a few others, but with how much there is to teach, it's just not possible.
So, the rest of us basically just pick up what we can and try to explain it.
 
TP is a 6 year old game. It looked pretty great at the time, especially considering it was a cube game.
That's what I meant... it actually was a cube game.

But anyway, I agree with you. I don't really understand why people praise the graphics of Wii games beyond "good for what they are." I only own a Wii, after having had a PS2 and GC, and on the rare occasions that I play one of the HD Twins it always strikes me as odd that people ever bother getting worked up over Wii graphics. It's like telling a little kid he's good at something when he's not, but he's a kid so you don't want to be mean.

EDIT: This has nothing to do with art style/direction.
 
No. A console CPU with 32MB of L3 cache that the GPU can't access is one of the more absurd ideas proposed in this thread.

You think Reggie knows what the internal rendering resolution of any Nintendo game is?

The demo was 720p without AA.

You would rather believe a group that made their analysis based on a video of a video - the same group that, if I remember correctly, started the rumor that Skyrim on PS3 had a resolution downgrade after the 1.2 patch, based on nothing more than an incorrect observation by a poster on their site.
 
That's what I meant... it actually was a cube game.

But anyway, I agree with you. I don't really understand why people praise the graphics of Wii games beyond "good for what they are." I only own a Wii, after having had a PS2 and GC, and on the rare occasions that I play one of the HD Twins it always strikes me as odd that people ever bother getting worked up over Wii graphics. It's like telling a little kid he's good at something when he's not, but he's a kid so you don't want to be mean.

EDIT: This has nothing to do with art style/direction.

For me, if a game looks nice, it looks nice, and if it doesn't, it doesn't. Tech isn't the most important thing to me if it isn't used well. I'd take SS or Galaxy over Black Ops any day, for example, because, in spite of its tech advantage, it's a fucking ugly game.

This is why I'd be excited for Nintendo games on Wii U even if it's only powerful enough to render Wii games in HD with better textures and IQ. The ONLY reason I care about power is because I think it'll have extremely poor sales without ports.
 
For me, if a game looks nice, it looks nice, and if it doesn't, it doesn't. Tech isn't the most important thing to me if it isn't used well. I'd take SS or Galaxy over Black Ops any day, for example, because, in spite of its tech advantage, it's a fucking ugly game.

This is why I'd be excited for Nintendo games on Wii U even if it's only powerful enough to render Wii games in HD with better textures and IQ. The ONLY reason I care about power is because I think it'll have extremely poor sales without ports.

I feel exactly the same.
 
For me, if a game looks nice, it looks nice, and if it doesn't, it doesn't. Tech isn't the most important thing to me if it isn't used well. I'd take SS or Galaxy over Black Ops any day, for example, because, in spite of its tech advantage, it's a fucking ugly game.

This is why I'd be excited for Nintendo games on Wii U even if it's only powerful enough to render Wii games in HD with better textures and IQ. The ONLY reason I care about power is because I think it'll have extremely poor sales without ports.

While I agree with this, the problem I have is that if Nintendo doesn't get up to par, they'll once again get no third party support.

I'd rather not have to buy another console for multi-plat games.
 
That's what I meant... it actually was a cube game.

But anyway, I agree with you. I don't really understand why people praise the graphics of Wii games beyond "good for what they are." I only own a Wii, after having had a PS2 and GC, and on the rare occasions that I play one of the HD Twins it always strikes me as odd that people ever bother getting worked up over Wii graphics. It's like telling a little kid he's good at something when he's not, but he's a kid so you don't want to be mean.

EDIT: This has nothing to do with art style/direction.

It has everything to do with art style/direction. There are some games on the Wii which genuinely look great, and not just because I'm giving them a free pass because of the HW limitations. Games like Wario Land: Shake It! to me look as good as pretty much anything released this gen.
 
It has everything to do with art style/direction. There are some games on the Wii which genuinely look great, and not just because I'm giving them a free pass because of the HW limitations. Games like Wario Land: Shake It! to me look as good as pretty much anything released this gen.
Nobody's mentioning Kirby's Epic Yarn? The game that stole the show for "Best Graphics at E3" from GameTrailers?
 
Weren't there comments strongly hinting that any EDRAM on this system's final design will be its own separate pool of memory to be used between the CPU & GPU? I remember that much coming from brain_stew & wsippel a while back as they thought of how it would be implemented in this system's design.

If it's a separate pool of memory, shared between CPU and GPU, then it isn't eDRAM, just RAM. eDRAM needs to be on the same die as either the CPU or the GPU. For gaming I think we'd prefer it on the GPU, as a framebuffer etc.; on the CPU you'd more normally refer to it as cache.
 