Wii U Speculation Thread of Brains Beware: Wii U Re-Unveiling At E3 2012

BurntPork said:
Only the high-end 6000 cards (6950/70/90) use the VLIW4 arch. The rest all use VLIW5.

Trinity APUs will use VLIW5.

Trinity is an HD7000 series chip, which is a modified HD6000 design. They won't be changing the architecture until the HD8000 series, when they shrink to 28nm and move to an all-new architecture.

Trinity would most likely use VLIW4, since it will be part of the HD7000 series and the HD6900 cards are the indicator of where the HD7000 line is headed. Some HD7000 parts will simply be carried over from the HD6000 series, the way the HD5770 became the HD6770, but Trinity is a custom chip and is likely to change along with the HD7000 line.

Trinity has been confirmed to be 50% faster than Llano:
Llano: 400 SPs / 5 (VLIW5) = 80 units. Trinity: 480 SPs / 4 (VLIW4) = 120 units. That's 80 for Llano and 120 for Trinity, which lines up perfectly with the 50% faster statement. Trinity is very likely VLIW4.
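
To make that arithmetic explicit, here's a quick sketch (the 400/480 SP counts and the 50% figure are just the rumored numbers quoted above, not confirmed specs):

[CODE]
# Sanity check of the VLIW-unit arithmetic above (rumored figures, not confirmed specs).
llano_sps, llano_width = 400, 5      # VLIW5: 5 SPs per issue unit
trinity_sps, trinity_width = 480, 4  # VLIW4: 4 SPs per issue unit

llano_units = llano_sps // llano_width        # 80
trinity_units = trinity_sps // trinity_width  # 120

speedup = trinity_units / llano_units - 1.0   # 0.5, i.e. "50% faster"
print(llano_units, trinity_units, f"{speedup:.0%}")
[/CODE]

Of course, this only lines up if unit count is the right thing to compare, which the replies below push back on.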
 
sarusama said:
Sorry to put you on the spot there z0m3le, but your post is another example of why parsing these threads for information is difficult. I understand that you might not be a native English speaker and that that could factor into it. However:

VLIW stands for Very Long Instruction Word. It refers to a processor technology aimed at enabling parallel instruction execution using a Multiple Instructions Multiple Data (MIMD) paradigm (see http://en.wikipedia.org/wiki/Very_long_instruction_word for a start). Saying that a chip is 5 VLIWs or 4 VLIWs doesn't make much sense. Does 5 and 4 refer to the number of "VLIW" units or the width of the long instruction word? If I assume the numbers refer to the width of the word, i.e., 5 or 4 instructions in one cycle, you could be right with your efficiency statement in the case that compilers are not able to utilize the extra instruction in the 5-wide case (compilers have to generate an instruction flow that can be spread across the n words of the VLIW, which highly depends on the serial dependencies of the computation being done). In that case you'd have wasted/underutilized resources, and yes, 4 might be more efficient. But if the number reflects the width of the VLIW, that also means that in theory:

HD4000 can do 800 x 5 = 4000 instructions per clock
HD6000 can do 640 x 4 = 2560 instructions per clock

I guess I must not be understanding something right. Could you clarify why dividing the number of SPs by the width of the word means anything?
The IPC doesn't change. One stream processor does one instruction per cycle. It's about the way stream processors are organized. Five SPs make up one VLIW5 unit (R700), four SPs make up one VLIW4 unit (HD69xx). VLIW4 is supposedly more efficient because, according to AMD, only 3.4 SPs per unit are typically utilized on average, so on a R700, 1.6 SPs per unit would do absolutely nothing most of the time. Various benchmarks have shown that VLIW5 GPUs are faster in many cases, though. While it might be true that VLIW5 "wastes" SPs, the few cases that require five components seem to bog down VLIW4 GPUs considerably, to the point where it's actually beneficial to have that one "spare" SP.

Well, from what I understand at least...
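
To put rough numbers on that utilization argument (a toy illustration, using the ~3.4 SPs-per-unit average AMD quoted, per the post above; not a simulation of real hardware):

[CODE]
# Toy illustration of average VLIW slot utilization, using AMD's quoted ~3.4
# independent ops per bundle (taken from the post above, not measured here).
avg_ops_per_bundle = 3.4

for width in (5, 4):  # VLIW5 (R700 era) vs VLIW4 (HD69xx)
    utilization = min(avg_ops_per_bundle, width) / width
    print(f"VLIW{width}: average slot utilization ~ {utilization:.0%}")

# Roughly 68% for VLIW5 vs 85% for VLIW4 -- though, as noted above, workloads
# that really do need five components per clause can still favor the spare slot.
[/CODE]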
 
Anasui Kishibe said:
what doesn't make sense here is the fact that Nintendo would promote TWO mainline Zelda games..heck, no company would do that

that's why I said they should never have shown the U at this E3. But their goal was to make the casuals drool, because that's who bought the Wii

They just did that with Mario two years ago. And like I said, forget Zelda, they didn't show any game, period, and there's something wrong, since this type of event should have happened 6-8 months ago with as little as they've worked on. There's no reason why a company as big as Nintendo would literally only have two main console games in two years, with Kirby last year and Zelda this year (not counting the games that have sat on the back burner that they decide to throw out or localize at the end of the Wii's life cycle).

It's not about showing or not showing the Wii U at this E3, it's about the fact that they're not further along making games and coming out with this system when they should have been, instead of apparently sitting around with their thumbs up their asses. They haven't released crap for a year, and it's going to take them another year, maybe longer, to release more than one console game?
 
z0m3le said:
Trinity is an HD7000 series chip, which is a modified HD6000 design. They won't be changing the architecture until the HD8000 series, when they shrink to 28nm and move to an all-new architecture.

Trinity would most likely use VLIW4, since it will be part of the HD7000 series and the HD6900 cards are the indicator of where the HD7000 line is headed. Some HD7000 parts will simply be carried over from the HD6000 series, the way the HD5770 became the HD6770, but Trinity is a custom chip and is likely to change along with the HD7000 line.

Trinity has been confirmed to be 50% faster than Llano:
Llano: 400 SPs / 5 (VLIW5) = 80 units. Trinity: 480 SPs / 4 (VLIW4) = 120 units. That's 80 for Llano and 120 for Trinity, which lines up perfectly with the 50% faster statement. Trinity is very likely VLIW4.
Except that it doesn't work like that. Each of AMD's SIMDs has 16 units, so VLIW5 SIMDs have 80 SPs each, and VLIW4 SIMDs have 64 each. (This is the real reason why the 6970 is faster than the 5870; 5870 has 20 SIMDs while 6970 has 24.) 64 doesn't go into 480 evenly, so it can't be using VLIW4. However, it's evenly divisible by 80. Clearly, Trinity's GPU is based on the Turks chip, which is used in the 6570 and 6670.

EDIT: Wait, jumped to conclusions there. There's nothing to confirm that it has 480 SPs yet, is there?
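
Here's the divisibility argument spelled out (480 is just the rumored SP count being questioned in the EDIT; 16 lanes per SIMD is the figure from the post above):

[CODE]
# Divisibility check behind the "480 SPs can't be VLIW4" argument above.
# 480 is the rumored Trinity SP count; 16 is the lanes-per-SIMD figure quoted above.
rumored_sps = 480
lanes_per_simd = 16

for width, name in ((5, "VLIW5"), (4, "VLIW4")):
    sps_per_simd = lanes_per_simd * width        # 80 for VLIW5, 64 for VLIW4
    simds, remainder = divmod(rumored_sps, sps_per_simd)
    verdict = "possible" if remainder == 0 else "fractional SIMD count"
    print(f"{name}: {rumored_sps} / {sps_per_simd} = {simds} SIMDs, remainder {remainder} ({verdict})")
[/CODE]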
 
z0m3le said:

That was a long read. I admit I skimmed the last couple of pages. The gist of it as I see it is that AMD is moving towards the architecture Nvidia has been using for a couple of generations now (since NV80?). I feel the article is a little bit misrepresenting the SIMD approach relative to the VLIW one: in SIMD you need to have lots of threads (or data) that will do the same thing. Everything in a wavefront (Nvidia calls them warps...) has to execute the same instruction. That is fine if you have a bunch of fragments, for example, that will run the same shader. But you need that data-parallel workload in order for it to be efficient.

wsippel said:
The IPC doesn't change. One stream processor does one instruction per cycle. It's about the way stream processors are organized. Five SPs make up one VLIW5 unit (R700), four SPs make up one VLIW4 unit (HD69xx). VLIW4 is supposedly more efficient because, according to AMD, only 3.4 SPs per unit are typically utilized on average, so on a R700, 1.6 SPs per unit would do absolutely nothing most of the time. Various benchmarks have shown that VLIW5 GPUs are faster in many cases, though. While it might be true that VLIW5 "wastes" SPs, the few cases that require five components seem to bog down VLIW4 GPUs considerably, to the point where it's actually beneficial to have that one "spare" SP.

OK, now things make more sense to me. I was a bit confused about what the SP was. I suppose the term was misused in that the 800 and 640 numbers represent the number of ALUs, in which case 800/5 = 160 SPs where each SP is 5 ALUs.

Would it be fair to summarize the VLIW approach as one that has n-pipelines per processor? So shouldn't you be able to fold a data parallel problem into that the same way as you would on a SIMD? E.g., take the c = a + b instruction in a shader, and let's say a, b, and c are 4-component vectors:

VLIW5:
c0=a0+b0; c1=a1+b1; c2=a2+b2; c3=a3+b3; X (fifth slot idle)

SIMD16:
run c stream = a stream + b stream
0 -> c0=a0+b0
...
15 -> c15=a15+b15

so the SIMD16 would compute the sum for 4 fragments (4 fragments x 4-wide vectors) in one cycle. The VLIW5 would do the sum for 1 fragment in 1 cycle and waste 1 unit as I scheduled it above. However, if I know I have 4 fragments, can't I schedule the last pipe to compute c0=a0+b0 of the next fragment? Or is this not possible because it's statically scheduled and the compiler can't assume there will be more than one fragment?
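
As a toy illustration of that packing question (purely illustrative: it just assumes the per-component adds are independent, which real compilers/drivers can't always guarantee):

[CODE]
# Toy packing of independent per-component adds (c = a + b on 4-component vectors)
# into 5-wide VLIW bundles. Purely illustrative; real schedulers have far more constraints.
def pack_vliw(num_fragments, components=4, width=5):
    ops = [(f, c) for f in range(num_fragments) for c in range(components)]
    return [ops[i:i + width] for i in range(0, len(ops), width)]

for bundle in pack_vliw(num_fragments=4):
    print("[" + ", ".join(f"frag{f}.c{c}" for f, c in bundle) + "]")
# 16 independent adds fill 4 bundles (three full, one with a single op), so
# cross-fragment packing is possible in principle when the ops really are independent.
[/CODE]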
 
BurntPork said:
Except that it doesn't work like that. Each of AMD's SIMDs has 16 units, so VLIW5 SIMDs have 80 SPs each, and VLIW4 SIMDs have 64 each. (This is the real reason why the 6970 is faster than the 5870; 5870 has 20 SIMDs while 6970 has 24.) 64 doesn't go into 480 evenly, so it can't be using VLIW4. However, it's evenly divisible by 80. Clearly, Trinity's GPU is based on the Turks chip, which is used in the 6570 and 6670.

EDIT: Wait, jumped to conclusions there. There's nothing to confirm that it has 480 SPs yet, is there?

http://www.anandtech.com/Show/Index...-core-next-preview-amd-architects-for-compute - Third paragraph:
Anandtech: "What we know for a fact is that Trinity – the 2012 Bulldozer APU – will not use GCN, it will be based on Cayman’s VLIW4 architecture. Because Trinity will be VLIW4, it’s likely-to-certain that AMD will have midrange and low-end video cards using VLIW4 because of the importance they place on being able to Crossfire with the APU."

I am not certain that Nintendo will go with something similar to Trinity, but it would make a lot of sense from a design perspective, having the GPU/CPU on 1 chip in an APU configuration would be very interesting, and would allow for better cooling in the case and lower power usage as well. AMD and IBM already do this with the Xbox 360 Slim.

sarusama said:
so the SIMD16 would compute the sum for 4 fragments (4 fragments x 4-wide vectors) in one cycle. The VLIW5 would do the sum for 1 fragment in 1 cycle and waste 1 unit as I scheduled it above. However, if I know I have 4 fragments, can't I schedule the last pipe to compute c0=a0+b0 of the next fragment? Or is this not possible because it's statically scheduled and the compiler can't assume there will be more than one fragment?

Yeah, you're right to assume that the scheduler wouldn't be able to do that. GCN is going to change that (not the GAMECUBE NINTENDO, but GRAPHICS CORE NEXT, i.e. the HD8000 series), but for now coders would have to manually schedule for VLIW5, which just doesn't happen; VLIW4 is faster to code for, from what I gather.

All these posts mean is that I think we are looking at a Fusion part similar to Trinity, but with Nintendo's team working on it as well. Before all the rumor confirmations of R700, I was pretty sure Nintendo would go with a Trinity GPU modified to fit their needs, as it should be a 20W chip that can do roughly the HD4830's numbers... The dev kits point to around this GPU's power, as you could draw a 50% power difference between the PS360 and it if you take into account off-the-shelf parts and underclocking. Final units being modified Trinity units really does make a lot of sense if you take all of those things into account, especially VLIW4 vs VLIW5.
 
z0m3le said:
http://www.anandtech.com/Show/Index...-core-next-preview-amd-architects-for-compute - Third paragraph:
Anandtech: "What we know for a fact is that Trinity – the 2012 Bulldozer APU – will not use GCN, it will be based on Cayman’s VLIW4 architecture. Because Trinity will be VLIW4, it’s likely-to-certain that AMD will have midrange and low-end video cards using VLIW4 because of the importance they place on being able to Crossfire with the APU."

I am not certain that Nintendo will go with something similar to Trinity, but it would make a lot of sense from a design perspective, having the GPU/CPU on 1 chip in an APU configuration would be very interesting, and would allow for better cooling in the case and lower power usage as well. AMD and IBM already do this with the Xbox 360 Slim.
God, I hope they don't. Low-end GPUs don't have enough ROPs or a wide enough bus. It would barely be twice the current gen. Granted, we still don't know how many SPs it has, so I could be wrong.

Still, I really don't like all of this Trinity talk. AMD's APUs are extremely overrated, IMO. Seeing everyone act like they're the most monumental innovation in PC tech ever gets really annoying after a while.
 
The timeframe for when Trinity comes out completely rules it out; hell, it would have to be out by now to even have a chance.
 
BurntPork said:
God, I hope they don't. Low-end GPUs don't have enough ROPs or a wide enough bus. It would barely be twice the current gen. Granted, we still don't know how many SPs it has, so I could be wrong.

Still, I really don't like all of this Trinity talk. AMD's APUs are extremely overrated, IMO. Seeing everyone act like they're the most monumental innovation in PC tech ever gets really annoying after a while.

It has 480 SPs, and APUs being overrated is correct when talking about raw power: it would be more powerful to have an HD4850 than Trinity. But if you take wattage into account, an APU vs. a separate GPU/CPU means a lot less power, and you're cooling only one unit instead of drawing power for two and cooling two small heaters instead of one.

Even Trinity is still ~2.5x to 3.5x more powerful than the PS360's GPUs. Neither company is going to blow tons of money like before, but even if they end up with systems 10x what they currently have, they will only be ~3x Wii U's power and come 2 to 3 years late to next gen, leaving them in a similar position to where Wii U will be next year, but without the Wii/Nintendo customers and super-strong IPs. I think Nintendo sees this too, and just thinks that a modified Trinity-like APU would be fine for 5-6 years.

AceBandage said:
Pretty sure we ruled out Trinity ages ago.

I'm not saying Trinity, I'm saying a GPU based on the HD6000 series sharing a chip with the CPU, like the Xbox 360 Slim. Remember, Nintendo has their own graphics team, so anything would be highly modified, and Nintendo, coming to AMD three years ago, would have been presented with the HD6000 architecture to work from. The R700 in the dev kits makes a lot of sense simply for price reasons, and there really isn't anything in the HD6000 series that sits at the HD4830's performance.
 
z0m3le said:
It has 480 SPs, and APUs being overrated is correct when talking about raw power: it would be more powerful to have an HD4850 than Trinity. But if you take wattage into account, an APU vs. a separate GPU/CPU means a lot less power, and you're cooling only one unit instead of drawing power for two and cooling two small heaters instead of one.

Even Trinity is still ~2.5x to 3.5x more powerful than the PS360's GPUs. Neither company is going to blow tons of money like before, but even if they end up with systems 10x what they currently have, they will only be ~3x Wii U's power and come 2 to 3 years late to next gen, leaving them in a similar position to where Wii U will be next year, but without the Wii/Nintendo customers and super-strong IPs. I think Nintendo sees this too, and just thinks that a modified Trinity-like APU would be fine for 5-6 years.



I'm not saying Trinity, I'm saying a GPU based on the HD6000 series sharing a chip with the CPU, like the Xbox 360 Slim. Remember, Nintendo has their own graphics team, so anything would be highly modified, and Nintendo, coming to AMD three years ago, would have been presented with the HD6000 architecture to work from. The R700 in the dev kits makes a lot of sense simply for price reasons, and there really isn't anything in the HD6000 series that sits at the HD4830's performance.
If it has 480 SPs and is VLIW4, then it has 7.5 SIMDs. That's impossible. You can't have a fraction of an SIMD. Either one or the other is wrong.
 
Supa said:
How many bits is the Wii U?
I lost track after the N64 days! :-D
Hmm, google isn't giving me a straight answer but I think the Power7 architecture is available in both 32 and 64 bit so I guess it'll probably be 32 bit.
 
BurntPork said:
If it has 480 SPs and is VLIW4, then it has 7.5 SIMDs. That's impossible. You can't have a fraction of an SIMD. Either one or the other is wrong.

I'm not a GPU engineer, but I don't really understand why you couldn't use a set number of SIMD and just have one functioning at half efficiency?

Supa said:
How many bits is the Wii U?
I lost track after the N64 days! :-D

lol, well, the GPU is likely to have a 256-bit memory bus... but the CPU is likely 64-bit.

Luigiv said:
Hmm, google isn't giving me a straight answer but I think the Power7 architecture is available in both 32 and 64 bit so I guess it'll probably be 32 bit.

I'll stick with 64-bit, as both the GameCube's and Wii's CPUs are 64-bit, so it would be easier for backwards compatibility.
 
z0m3le said:
I'm not a GPU engineer, but I don't really understand why you couldn't use a set number of SIMD and just have one functioning at half efficiency?



lol, well, the GPU is likely to have a 256-bit memory bus... but the CPU is likely 64-bit.



I'll stick with 64-bit, as both the GameCube's and Wii's CPUs are 64-bit, so it would be easier for backwards compatibility.

Haha wow thanks for looking into it you two, I wasn't expecting a real answer, more something like "about 1 million bits!"
But that all sounds pretty fast. Definitely picking up the Wii U next year, not the least for the HD retro game downloads. :-)
 
z0m3le said:
I'm not a GPU engineer, but I don't really understand why you couldn't use a set number of SIMD and just have one functioning at half efficiency?



lol, well, the GPU is likely to have a 256-bit memory bus... but the CPU is likely 64-bit.



I'll stick with 64-bit, as both the GameCube's and Wii's CPUs are 64-bit, so it would be easier for backwards compatibility.
Actually, Gekko and Broadway are 32bit (hence my guess).
 
Luigiv said:
Hmm, google isn't giving me a straight answer but I think the Power7 architecture is available in both 32 and 64 bit so I guess it'll probably be 32 bit.
Power7 is always PPC64, as is PowerPC A2/ PowerEN. The PowerPC 4xx line are the only "modern" 32bit PowerPC CPUs IBM offers these days, and A2 is supposed to replace those in many applications as far as I can tell.
 
wsippel said:
Power7 is always PPC64, as is PowerPC A2/ PowerEN. The PowerPC 4xx line are the only "modern" 32bit PowerPC CPUs IBM offers these days, and A2 is supposed to replace those in many applications as far as I can tell.
Oh, fair enough. Guess that's going to hurt the efficiency of the BC mode a bit, but I guess that's not a big deal when the clock is 5x faster.
 
Anasui Kishibe said:
the something they showed was a tech demo, something to make people go wooh at U's graphical capabilities. We will almost certainly never see that game, the next Zelda will have a different style (but if we did, I'd be the happiest man in town)...and how do you think some people would react at the announcement of Zelda HD with such graphics? They'd immediately call SS graphics a piece of shite and avoid the game

Almost every Zelda has had a direct sequel of some sort, I'd be really happy if we got one for Twilight Princess.
 
sarusama said:
My ranting was particularly about how comments were made as opposed to arguing for one or the other speculation. For what it's worth, I agree that it seems unlikely they built an animation playback component from scratch just for this demo. As I was mentioning earlier, it seems quite possible they reused some of their existing animation and/or cut-scene scripting components from before. Depending on how the game engine was built, specifically if it is designed with appropriate layers of hardware abstraction (e.g., OpenGL is such an abstraction layer), it is quite possible it could be quickly ported to the new hardware, especially if the same or similar abstractions are directly supported. This would make sense from the point of view of producing something quickly that might not take advantage of new hardware features or be properly optimized for the new hardware architecture.

So I agree that they probably used their existing engine to produce this. But just because the demo used assets that are from (or maybe even only inspired by) TP doesn't mean anything about what engine it is running on. It certainly doesn't mean it's "_the_ TP engine" and such a thing might not even exist (as opposed to the EAD Group1 engine or the EAD engine).



It really depends on what you want to show. The Samaritan demo people refer to all the time is most probably just a canned animation running in real-time with kick-ass surface shaders, lots of geometry and whatnot. It's a graphical showcase, highlighting what the GPU could possibly do. I doubt it is doing any game simulation (AI, input device processing... I would be surprised if it even did any data loading from storage as opposed to pre-loading everything onto the video RAM). So is that demo representative of what real games will be able to do?

It would actually be difficult to fully characterize the "power" of the Wii U. Specs only tell you so much, but performance is dependent on many, many factors. E.g., if you access data in a nice pattern where you use neighboring data next, caches are going to do wonders for you. But if your data is spread all over the place, their usefulness could be limited.



It's plausible that they used the same assets as the foundation. Those are independent of the engine. For example you could use Unreal Engine and script the same animation using the actual TP assets and you'd conceivably get the same animation shown at E3. That doesn't make UE the TP engine.

I think now is probably a good time to clarify that we're calling it the "TP engine" because we don't know the official name for it like we would for Unreal, Frostbite, etc. It's kinda like how some people call all sodas Coke. TP is the game we are seeing and we've just associated it accordingly.

Now I was always under the impression that once games got away from live and CG cut scenes that all of that was a part of what was handled by the engine itself and not separate from it. I mean Unreal Engine 3 is used in things that aren't even related to gaming.

http://en.wikipedia.org/wiki/Unreal_Engine_3#Non-gaming_use

You'll see things mentioned that would be the same as what we saw with the demo. So it would seem to me that the engine would handle the animation aspects as well when other gaming components are not being utilized, instead of it being done separately. Especially when looking at the real-time effects that were utilized.

I understand that they can apply the assets to different engines, but that doesn't sound like a Nintendo method. I don't know how much you lurk so you may not have seen me say it, but Nintendo applies two sayings (IMO):

If it ain't broke, don't fix it. If it is broke, try it a little longer till we know for sure.

Whatever engine they used for TP the game, I just believe it was used for this demo, since the assets would already be there. But all of this was a conclusion based on inductive reasoning. Once they get ready to make the real Wii U Zelda, I would expect/hope they build something new to better utilize the hardware.

Obviously we won't know what they did with that demo unless they tell us.

sarusama said:
Wow, I beat TP, but I totally forgot about this boss fight. I didn't remember it at all. It's as if I saw this for the first time. Too bad Skyward Sword is coming out soon or I'd go and replay TP right now. I wonder what else I don't remember about it. And before anyone says there's plenty of time before the release... I'd never make it in time or if I did I'd probably be a little burnt out from Zelda and it would diminish my enjoyment of SS (which imho is looking stellar!!)

Considering how many years it's probably been since you played it, I can see why. These days I probably would forget what I played just a few days ago.
 
artwalknoon said:
Someone translate this fast! But judging from the photos the article looks like it mostly just recaps Nintendo's E3.
Doesn't say anything other than that the AMD source said it's based on the HD4000 series. No further details.
 
bgassassin said:
I think now is probably a good time to clarify that we're calling it the "TP engine" because we don't know the official name for it like we would for Unreal, Frostbite, etc. It's kinda like how some people call all sodas Coke. TP is the game we are seeing and we've just associated it accordingly.

Now I was always under the impression that once games got away from live and CG cut scenes that all of that was a part of what was handled by the engine itself and not separate from it. I mean Unreal Engine 3 is used in things that aren't even related to gaming.

http://en.wikipedia.org/wiki/Unreal_Engine_3#Non-gaming_use

You'll see things mentioned that would be the same as what we saw with the demo. So it would seem to me that the engine would handle the animation aspects as well when other gaming components are not being utilized, instead of it being done separately. Especially when looking at the real-time effects that were utilized.

I understand that they can apply the assets to different engines, but that doesn't sound like a Nintendo method. I don't know how much you lurk so you may not have seen me say it, but Nintendo applies two sayings (IMO):

If it ain't broke, don't fix it. If it is broke, try it a little longer till we know for sure.

Whatever engine they used for TP the game, I just believe it was used for this demo, since the assets would already be there. But all of this was a conclusion based on inductive reasoning. Once they get ready to make the real Wii U Zelda, I would expect/hope they build something new to better utilize the hardware.

Obviously we won't know what they did with that demo unless they tell us.



Considering how many years it's probably been since you played it, I can see why. These days I probably would forget what I played just a few days ago.

IIRC Nintendo has been using the game 'engine', meaning 'rasterizer', for all of their games since the N64; they just customize it for each game and constantly update it. I'm pretty sure I remember hearing that the Twilight Princess engine was technically the same as the Wind Waker engine and the Super Mario Sunshine engine.

A rasterizer is a very basic piece of math; it doesn't make much sense to rewrite it more than you need to, especially if you've already got an optimized one running on your hardware. If you've got a toolset you like to use and mountains of established codebase, why toss it all away and rewrite everything?

Of course this gets to the point that saying something uses this or that 'engine' is meaningless except for companies who sell engines, but that should be obvious by now.
 
antonz said:
Doesn't say anything other than that the AMD source said it's based on the HD4000 series. No further details.

I don't know much Japanese, but I think it says that the GPU inside the Wii U is HD4000-based, on RV770. The source is "Marc Diana" from AMD.

Which would mean HD4830-HD4890, all of which are more powerful than the Trinity-like card I was assuming... Hmm, well, if they can fit it in the box and cool it properly, and knowing Nintendo they won't screw up with something like an RROD console (looks over at dead console :( ...), then I, for one, will enjoy the extra power.
 
artwalknoon said:
Has this been the rumor thus far and so just old "info" or is this new?


It seems to be the same info a few other Japanese sites ran a while back. Based on the 4000 series. It mentions RV770, but I think it's in regard to that being the original codename for the HD4000 family.

Most sources do say it's likely based on the 770 line, but I don't think this article is speaking in that context.
 
TheNatural said:
Appreciate the response. In your first part, I'm not really talking about how they handled it, I'm asking WHAT the heck have they been working on for this to be so far off and have so little to show? As I've said, there have been basically no major internal releases for the DS and Wii in the past year. Retro made DKC, Team Ninja made Other M, and outside of that there's been what - Kirby's Epic Yarn since Super Mario Galaxy 2? And we know from what Miyamoto said at E3 2009 that SMG 2 was basically done THEN and being held off to space out releases.

What the heck has happened to Nintendo's output is what I'm wondering. I know they have a few 3DS games they're working on, and there's Zelda, but that's still not much at all. Also, Zelda development is horribly timed: this is the second console Zelda in EIGHT years, and both games have been timed at the end/start of a new console lifecycle? It's just odd that they haven't really been putting out any console games for the past year, and for the foreseeable future, the next year or so, there's nothing outside of Zelda either.

And I'll disagree with your comments about analog. I mean, it's been 25 years or so since the NES D-pad and it's alive and well; analog hasn't replaced it totally, and motion or touch won't replace analog. It's just a different way to play. And by forcing you to play it, I mean making it the only way to play a game (with motion in the past) and also forcing you to buy it with the system, when it's probably a very sizable chunk of the total cost of this. They should have two SKUs, just like the 360 with Kinect. If you don't want a Kinect, you don't have to buy one to get a 360, you don't have to use it in games, and you don't have to play the Kinect-only games. I'm sure a lot of games will need it because of the way the game is designed, but if it's just used as a map/item selection replacement - which a LOT of games, likely the multiplatform versions, will use it for - why not just have a normal control option out there as well so it will be cheaper and more people will buy it?

Oh, OK. That's really a Nintendo thing, as that has happened with the last few consoles. And when looking at the following console, they showed more with the Wii U than they did with the Wii. Here is a link to the E3 2005 press conference; go to around 33:20.

http://video.google.com/videoplay?docid=5740279523739566758#

That said, just because they've chosen not to say anything doesn't mean they aren't doing anything. They've been one of the most secretive gaming companies we've seen, if not the most.


With the control part, it wasn't about replacing it completely, but about becoming the primary control method. The D-pad was no longer the primary method after Nintendo came out with the N64. Had they been implemented properly, motion controls should have done the same thing to analog, but we're going to see a delay in that shift now because they were targeted towards non-gamers. Motion controls seem to me like the next logical step in control methods.

And I still believe "forcing" is the wrong choice of words, because that's putting a negative connotation on what has always been done. Heck, the NES would be a prime example of "forcing" based on that: you had no choice but to use the D-pad. And with Kinect, I expect it to come with all SKUs next gen with the way sales have gone for it. Making the controls optional kinda defeats the purpose of even putting them out there if you want folks to adapt. People, and in this case core gamers, can be very status quo. Unless you force them to change, they won't. This generation is proof of that, with the backlash we have seen thanks to motion controls being optional on the PS360.

And why be just like everyone else? That's what caused them to make the Wii in the first place. I'm not a fan of the amount of compromise they made with the Upad. The screen might be two steps forward, but the dual analog setup is a step backwards to me. And I think you forget (or didn't know) that the Classic Controller Pro can be used as well, so there's the traditional option you're looking for. I think Nintendo is making too many compromises, even though I see why they are.

iamblades said:
IIRC Nintendo has been using the game 'engine', meaning 'rasterizer', for all of their games since the N64; they just customize it for each game and constantly update it. I'm pretty sure I remember hearing that the Twilight Princess engine was technically the same as the Wind Waker engine and the Super Mario Sunshine engine.

A rasterizer is a very basic piece of math; it doesn't make much sense to rewrite it more than you need to, especially if you've already got an optimized one running on your hardware. If you've got a toolset you like to use and mountains of established codebase, why toss it all away and rewrite everything?

Of course this gets to the point that saying something uses this or that 'engine' is meaningless except for companies who sell engines, but that should be obvious by now.

Now THAT sounds like Nintendo. It'd be nice to see them finally move on from that though.
 
bgassassin, I think at this point we're just talking past each other because we use terms to mean different things.

bgassassin said:
Now I was always under the impression that once games got away from live and CG cut scenes that all of that was a part of what was handled by the engine itself and not separate from it.

What do you mean by separate from it? The "engine" can play back live/CG cut-scenes. Do you mean that the cutscene is first rendered externally as a video and then played back in the engine?

bgassassin said:
So it would seem to me that the engine would handle the animation aspects as well when other gaming components are not being utilized, instead of it being done separately. Especially when looking at the real-time effects that were utilized.

Sorry I couldn't parse the meaning of this paragraph. Are you saying that in the case the "engine" is underused (by not having to handle AI, etc.) it could utilize idle resources to render the animation as opposed to playing back a pre-rendered movie of the animation?

To be clear, I didn't mean to imply I didn't believe the Zelda demo was being rendered in real-time. I gathered that from the fact that you could control various viewing and rendering parameters. What I was commenting on was the fact that, from what we've seen, we can't infer what produced it. Just as likely are:

1. Tweak the existing engine to build and run on the new hardware. Throw together a quick real-time cutscene to show off.
2. Decide to start a new engine "from scratch". For the E3 demo, make sure the real-time cutscene management component and some basic rendering are functional. Throw together a quick demo for E3.

The reason one might do 2, is to perform a serious refresh of the underlying software framework. Sometimes it's not easy to keep updating an existing framework because when you first designed it the assumptions for its use might have been drastically different then as opposed to now. Some assumptions on how things should collaborate might have been reasonable at the time but inefficient now.

bgassassin said:
I understand that they can apply the assets to different engines

(Nitpicking, but you wouldn't write "apply assets to engines" but "engines use assets")

bgassassin said:
Whatever engine they used for TP the game, I just believe it was used for this demo, since the assets would already be there. But all of this was a conclusion based on inductive reasoning

This is where we are talking past each other. Engines and assets are orthogonal components that are used in games. They are usually reasonably separate and independent parts. What you just wrote is inferring in the following way:

1. "since the assets would already be there" is tying assets to the engine used for TP
2. because you are seeing the same assets you infer it's the engine used for TP

There are several leaps of faith here:
1. assets were re-used based on what you see. I would admit this is a fair assumption, but I'd have to look at some of the assets in more detail (side by side) and compare. I wouldn't be surprised if they had a modeller just make these real quick based on existing designs.
2. you are connecting assets to engines.

I guess at this point we both understand each other, but I always reply because I can't resist arguing your desire to connect assets and engines and use that to infer something.

iamblades said:
IIRC Nintendo has been using the game 'engine', meaning 'rasterizer' ...

A rasterizer is a very basic piece of math; it doesn't make much sense to rewrite it more than you need to, especially if you've already got an optimized one running on your hardware.

engine != rasterizer, not by far. Even though you could write a software rasterizer, this hasn't been done since Castle Wolfenstein. Rasterizers specifically are one of the few pieces of non-programmable functionality left in the graphics pipeline. I believe they are implemented using custom circuitry, as opposed to using the ALUs of the GPU "cores" (at least it used to be that way for the longest time, and I think it still is). It's a tiny, minuscule part of a "game engine" and is most certainly hidden under layers and layers of abstractions/APIs.
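
For what it's worth, here's roughly what that fixed-function step does, as a tiny software sketch (an edge-function triangle fill; purely illustrative, not how any real GPU implements it):

[CODE]
# Tiny illustrative software rasterizer: fills one triangle using edge functions.
# A sketch of the fixed-function step discussed above, not any real GPU's implementation.
def edge(ax, ay, bx, by, px, py):
    # Signed test: which side of the edge (a -> b) the point p falls on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at pixel centers
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside, for counter-clockwise winding
                covered.append((x, y))
    return covered

pixels = rasterize_triangle((1, 1), (12, 3), (5, 10), width=16, height=12)
print(len(pixels), "pixels covered")
[/CODE]

In hardware this sits behind the API, which is the point: it's a small, fixed piece of a much larger "engine".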
 
I would imagine that if it is the Wind Waker engine, it is 2.0 and not 1.4 or whatever, meaning basically a complete revamp of the engine to allow for programmable unified shaders. Calling it WW engine, or TP engine is sort of missing the point, Skyward sword would be using the newest engine version, and it's still too far out of date to really be useful today.

It's likely a brand new Zelda engine with the 3D models from TP put in. If you look at the lighting, it's pretty spectacular, shows off a lot of what I want to see from the Wii U to be honest, nice sharp picture without fog and blur... To me, it looks like a new engine, if you want to call it WW 2.0 that is fine, but you might as well call WW, OoT 2.0, I like just calling it a new Zelda engine.

As long as we have real looking fog and blur only when it ADDS to the game and not take it away, I'll be happy. So far blur has been this generation's fog, but that nature demo's DoF and Zelda's lack of blur gives me high hopes that we are moving away from blurring every motion.
 
I swear I read somewhere that the WW engine is a reworked Mario Sunshine engine and TP is still on this engine too? And/or the Galaxy engine is derived from the WW one. :/ I'm confused... and a tad drunk.
 
bgassassin said:
Glad to see you here to give exact info stew. Is it possible to achieve a consistent 1080p/60FPS based on what we know about possible large amounts of eDRAM, or is that still out of this console's range?

Obviously it would be up to the devs to pursue it, but would that be theoretically possible?

Both the PS3 and 360 can do that already and have games that demonstrate it. It's purely a software problem, though yes, being able to keep an entire 1080p framebuffer in eDRAM certainly won't hurt the console's ability to do that.

BurntPork said:
That 256-bit bus should be a huge help. I think it means at least double the 360's bandwidth.

Even with GDDR3 on a 128-bit bus, you're going to get more bandwidth, as the GDDR3 in the 360 runs very slow compared to modern standards. Although I don't really see how GDDR3 makes all that much sense, as there'll be no real cost savings over the console's lifespan. GDDR5 on a 64-bit bus makes a lot more sense if you want to cheap out in this area.
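
To put a number on "an entire 1080p framebuffer in eDRAM" (a back-of-the-envelope sketch; the pixel formats are assumptions, and Wii U's actual eDRAM amount isn't known):

[CODE]
# Back-of-the-envelope size of a 1080p framebuffer, for the eDRAM point above.
# Formats are assumptions (32-bit color, 32-bit depth/stencil); Wii U's eDRAM size is unknown.
width, height = 1920, 1080
bytes_per_pixel_color = 4   # e.g. RGBA8
bytes_per_pixel_depth = 4   # e.g. D24S8

color_mb = width * height * bytes_per_pixel_color / 2**20   # ~7.9 MB
depth_mb = width * height * bytes_per_pixel_depth / 2**20   # ~7.9 MB
print(f"color {color_mb:.1f} MB, color+depth {color_mb + depth_mb:.1f} MB")

# For comparison, the 360's 10 MB of eDRAM can't hold color+depth at 1080p
# without tiling, which is why "lots of eDRAM" matters for this question.
[/CODE]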
 
z0m3le said:
I would imagine that if it is the Wind Waker engine, it is 2.0 and not 1.4 or whatever, meaning basically a complete revamp of the engine to allow for programmable unified shaders. Calling it WW engine, or TP engine is sort of missing the point, Skyward sword would be using the newest engine version, and it's still too far out of date to really be useful today.

It's likely a brand new Zelda engine with the 3D models from TP put in. If you look at the lighting, it's pretty spectacular, shows off a lot of what I want to see from the Wii U to be honest, nice sharp picture without fog and blur... To me, it looks like a new engine, if you want to call it WW 2.0 that is fine, but you might as well call WW, OoT 2.0, I like just calling it a new Zelda engine.

As long as we have real looking fog and blur only when it ADDS to the game and not take it away, I'll be happy. So far blur has been this generation's fog, but that nature demo's DoF and Zelda's lack of blur gives me high hopes that we are moving away from blurring every motion.
Actually all the 3D models in the demo look to be brand new, just based off the same designs.

Case in point;

TP's Armoghoma (apologies for the 3D but this is the best image I could find)
[IMG]http://i21.photobucket.com/albums/b263/Luigi_V/zelda3d___armoghoma_by_darkml-d326se4.jpg[/IMG]


Demo's Armoghoma
[IMG]http://h5.abload.de/img/untitled-1h886r8gc.jpg[/IMG]


As you can see, there are some noticeable differences in the design (most obviously, the arrangement of the eyes) that just wouldn't make sense if it were merely an upgraded model.
 
Luigiv said:
Actually all the 3D models in the demo look to be brand new, just based off the same designs.

Case in point;
:Pix:

As you can see, there are some noticeable differences in the design (most obviously, the arrangement of the eyes) that just wouldn't make sense if it were merely an upgraded model.

Nice, that is good to know, I figured they wouldn't spend a lot of time with new models, but NeoGAF educates me daily, one of the reasons I idle here so much.
 
SaintMadeOfPlaster said:
I'm no techie, but we all know how Nintendo likes to cut costs. Would the difference in production costs be minimal? If not, then we shouldn't be surprised.

RAM prices can actually increase over time as an older standard is no longer produced in mass quantities. So even if GDDR5 is slightly more expensive today, over the console's lifespan there's no real cost savings to be made here; you might end up increasing costs. Ever wonder why, despite the Wii otherwise being antiquated 1990s technology, Nintendo still went and used cutting-edge GDDR3?

Forgetting that, RAM can't be accounted for in isolation. Nintendo will have a certain bandwidth target for main memory and GDDR5 can get there at half the bus width. Reducing the bus width reduces die size, it reduces motherboard complexity and means that (at least at some point, if not by launch) you can use fewer RAM chips.

GDDR3 on a 256 bit bus delivers around the same bandwidth as GDDR5 on a 128 bit bus. The latter is already cheaper now (it even was when AMD first introduced GDDR5 to a consumer device several years ago when GDDR5 carried a much higher premium) and will prove significantly cheaper as time goes by.

TL;DR GDDR5 is actually the cheaper option for any given amount of bandwidth.
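
As a rough illustration of that equivalence (the data rates below are example figures for the era, picked as assumptions for illustration, not Wii U specs):

[CODE]
# Rough illustration of bus width vs. memory type. Data rates are example figures
# for the era (assumptions for illustration only, not anything confirmed for Wii U).
def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    # GB/s = (bus width in bytes) * (billions of transfers per second)
    return (bus_width_bits / 8) * data_rate_gt_s

print(bandwidth_gb_s(128, 1.4))  # 360-style GDDR3 at 1.4 GT/s on a 128-bit bus -> ~22.4 GB/s
print(bandwidth_gb_s(256, 2.0))  # GDDR3 at 2.0 GT/s on a 256-bit bus           -> ~64 GB/s
print(bandwidth_gb_s(128, 4.0))  # GDDR5 at 4.0 GT/s on a 128-bit bus           -> ~64 GB/s
print(bandwidth_gb_s(64, 4.0))   # GDDR5 at 4.0 GT/s on a 64-bit bus            -> ~32 GB/s
[/CODE]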
 
z0m3le said:
I'm not a GPU engineer, but I don't really understand why you couldn't use a set number of SIMD and just have one functioning at half efficiency?
Perhaps, but to my knowledge AMD has never done that before.

And man, that Japanese article got my hopes up about RV770, but it seems that they're not sure after all. In other words, that article has no real new info. :(
 
TheNatural said:
They just did that with Mario two years ago. And like I said, forget Zelda, they didn't show any game, period, and there's something wrong, since this type of event should have happened 6-8 months ago with as little as they've worked on. There's no reason why a company as big as Nintendo would literally only have two main console games in two years, with Kirby last year and Zelda this year (not counting the games that have sat on the back burner that they decide to throw out or localize at the end of the Wii's life cycle).

It's not about showing or not showing the Wii U at this E3, it's about the fact that they're not further along making games and coming out with this system when they should have been, instead of apparently sitting around with their thumbs up their asses. They haven't released crap for a year, and it's going to take them another year, maybe longer, to release more than one console game?


NSMB wasn't a mainline Mario; it was primarily focused on multiplayer despite having an excellent single player. It was the casual-oriented Mario Nintendo needed to drive holiday sales.

Nintendo obviously abandoned the Wii, and the U's unveiling was premature, to say the least. We agree on that



Gravijah said:
Almost every Zelda has had a direct sequel of some sort, I'd be really happy if we got one for Twilight Princess.


yes, I'm just saying we will probably never see a Zelda game with that graphical style
 
TheNatural said:
They just did that with Mario two years ago. And like I said, forget Zelda, they didn't show any game, period, and there's something wrong, since this type of event should have happened 6-8 months ago with as little as they've worked on. There's no reason why a company as big as Nintendo would literally only have two main console games in two years, with Kirby last year and Zelda this year (not counting the games that have sat on the back burner that they decide to throw out or localize at the end of the Wii's life cycle).

It's not about showing or not showing the Wii U at this E3, it's about the fact that they're not further along making games and coming out with this system when they should have been, instead of apparently sitting around with their thumbs up their asses. They haven't released crap for a year, and it's going to take them another year, maybe longer, to release more than one console game?

While I agree that the unveiling of the Wii U left a lot to be desired, I don't think anyone here is in a position to say authoritatively whether they will have nothing come the launch of the new system.
 
brain_stew said:
RAM prices can actually increase over time as an older standard is no longer produced in mass quantities. So even if GDDR5 is slightly more expensive today, over the console's lifespan there's no real cost savings to be made here; you might end up increasing costs. Ever wonder why, despite the Wii otherwise being antiquated 1990s technology, Nintendo still went and used cutting-edge GDDR3?

Forgetting that, RAM can't be accounted for in isolation. Nintendo will have a certain bandwidth target for main memory and GDDR5 can get there at half the bus width. Reducing the bus width reduces die size, it reduces motherboard complexity and means that (at least at some point, if not by launch) you can use fewer RAM chips.

GDDR3 on a 256 bit bus delivers around the same bandwidth as GDDR5 on a 128 bit bus. The latter is already cheaper now (it even was when AMD first introduced GDDR5 to a consumer device several years ago when GDDR5 carried a much higher premium) and will prove significantly cheaper as time goes by.

TL;DR GDDR5 is actually the cheaper option for any given amount of bandwidth.


IIRC, in 2008 the 55nm Radeon 4870 and 4850 (RV770) were introduced, with GDDR5 on a 256-bit bus.

Hopefully Wii U uses a 128-bit bus with GDDR5 and some embedded memory on the GPU (we already know the CPU is getting "a lot").
 
TheNatural said:
They just did that with Mario two years ago.
I was going to bring up the Mario thing, but it's clearly pretty different than if they'd put Super Mario Galaxy 2 and Super Mario Galaxy Wii U side-by-side. I think only Square has been that kind of crazy, like announcing FF IX, X, and XI together and having actual imagery for IX and X.
 
Anasui Kishibe said:
NSMB wasn't a mainline Mario; it was primarily focused on multiplayer despite having an excellent single player. It was the casual-oriented Mario Nintendo needed to drive holiday sales.

Don't listen to this man. He knows not of what he speaks.
 
sarusama said:
1. assets were re-used based on what you see. I would admit this is a fair assumption, but I'd have to look at some of the assets in more detail (side by side) and compare. I wouldn't be surprised if they had a modeller just make these real quick based on existing designs.

Luigiv said:
Actually all the 3D models in the demo look to be brand new, just based off the same designs.

TP's Armoghoma (apologies for the 3D but this is the best image I could find)
[IMG]http://i21.photobucket.com/albums/b263/Luigi_V/zelda3d___armoghoma_by_darkml-d326se4.jpg[/IMG]

Demo's Armoghoma
[IMG]http://h5.abload.de/img/untitled-1h886r8gc.jpg[/IMG]

Thanks for posting these. I assumed it was something along the lines of remodeling.

z0m3le said:
I figured they wouldn't spend a lot of time with new models

It wouldn't necessarily take a modeller too long to do. They might have started with the one they made for TP and "touched it up". That might also be a reason why many people think the demo is not impressive: it would be a case similar to Ocarina of Time 3D, where they remastered it but were only able to do so to a limited capacity. I.e., it would be relatively straightforward to change the textures of a game (e.g., with high-resolution versions), but geometry can be more tricky, with all the dependent rigging for animation purposes and whatnot. I realize they took the time to update the Link model in OoT3D, but you'll notice that the touch-ups mostly don't modify the geometry.
 
sarusama said:
Thanks for posting these. I assumed it was something along the lines of remodeling.



It wouldn't necessarily take a modeller too long to do. They might have started with the one they made for TP and "touched it up". That might also be a reason why many people think the demo is not impressive: it would be a case similar to Ocarina of Time 3D, where they remastered it but were only able to do so to a limited capacity. I.e., it would be relatively straightforward to change the textures of a game (e.g., with high-resolution versions), but geometry can be more tricky, with all the dependent rigging for animation purposes and whatnot. I realize they took the time to update the Link model in OoT3D, but you'll notice that the touch-ups mostly don't modify the geometry.

Yeah, looking at it more closely, those "hairs" on the spider haven't changed much at all; they are uniformly in the same spots from what I can see, and the model hasn't changed much either. Link's clothes look much better (the texture work), but his model is only slightly upgraded from the TP model. It looks good, strikingly so for the small amount of change, but both models are hardly taxing the system... However, that lighting is amazing.
 
sarusama said:
engine != rasterizer, not by far. Even though you could write a software rasterizer, this hasn't been done since Castle Wolfenstein. Rasterizers specifically are one of the few pieces of non-programmable functionality left in the graphics pipeline. I believe they are implemented using custom circuitry, as opposed to using the ALUs of the GPU "cores" (at least it used to be that way for the longest time, and I think it still is). It's a tiny, minuscule part of a "game engine" and is most certainly hidden under layers and layers of abstractions/APIs.

But that's my point: not all game engines are full suites of development tools like Unreal and Crytek's. Sometimes you can start with just a basic graphics engine that handles translating the draw calls to the API and still say something uses the same engine. If you have a graphics engine that is efficient and fast on your hardware, why rewrite it when it is really easy to add support for whatever new features the next hardware supports? It's just like you can say that CoD uses the Quake 3 engine; you can probably say that that Zelda demo used the TP engine. It doesn't mean anything, though.
 
sarusama said:
bgassassin, I think at this point we're just talking past each other because we use terms to mean different things.

I guess at this point we both understand each other, but I always reply because I can't resist arguing your desire to connect assets and engines and use that to infer something.

Ha. I always reply because an opportunity has presented itself to learn more. I'm a logic person. I'm about obtaining the best level of accuracy possible over settling with fallacies and running with them as truth. So when I can correct and/or add to the logical steps I've used to obtain a conclusion I will pursue it. In this case till I have the level of understanding I'm looking for or until you get tired of responding. You messed up by posting and it sounds like your nature makes it tough to stop in situations like this.


So while we might be talking past each other, that's only because I'm trying to get everything applied in the context you are using it in, while still coming from the perspective I currently have. This is one of the ways I learn. I say it, you respond, and then I try to adapt properly. I'm going to be rearranging your post some because of this.


sarusama said:
(Nitpicking, but you wouldn't write "apply assets to engines" but "engines use assets")

OK. It's just that, with the way things have sounded (and what I've read), it seemed a fair application of that phrase. But this is also due to my current understanding. So I need to know what you are labeling as assets. Also, how are these assets created/what is used to create them? An answer to this will make a big difference for me.

sarusama said:
What do you mean by separate from it? The "engine" can play back live/CG cut-scenes. Do you mean that the cutscene is first rendered externally as a video and then played back in the engine?

Yeah that sentence sounded better when I typed it than it does now because you obviously won't know my reasoning for the way it sounds. But that was speaking under my previous understanding before it became clearer about assets and engines. What I'm assuming is that the live and CG cutscenes are not considered assets in the sense of what would be used by an engine as opposed to what a real-time demo would be. Hope that sounds better.


sarusama said:
Sorry I couldn't parse the meaning of this paragraph. Are you saying that in the case the "engine" is underused (by not having to handle AI, etc.) it could utilize idle resources to render the animation as opposed to playing back a pre-rendered movie of the animation?

To be clear, I didn't mean to imply I didn't believe the Zelda demo was being rendered in real-time. I gathered that from the fact that you could control various viewing and rendering parameters. What I was commenting on was the fact that, from what we've seen, we can't infer what produced it. Just as likely are:

1. Tweak the existing engine to build and run on the new hardware. Throw together a quick real-time cutscene to show off.
2. Decide to start a new engine "from scratch". For the E3 demo, make sure the real-time cutscene management component and some basic rendering are functional. Throw together a quick demo for E3.

The reason one might do 2, is to perform a serious refresh of the underlying software framework. Sometimes it's not easy to keep updating an existing framework because when you first designed it the assumptions for its use might have been drastically different then as opposed to now. Some assumptions on how things should collaborate might have been reasonable at the time but inefficient now.

First off, point #1 is what I've believed: that it was a modified engine adapted to better utilize the hardware. That goes back to what I was talking about with Nintendo's "if it ain't broke" philosophy. But I've believed the engine was old, and some here have indicated that it's even older, which gimps how well the assets could look or run (your response on asset creation may affect how this should be worded). Kinda like reusing a 1995 Camry engine in newer car bodies and modifying the engine to run as well as possible. Now we have a 2010 Lexus using that '95 engine, and while it looks prettier, it's still hindered by the power of that engine. That would be why the demo "didn't look as good" as it could. This is part of my reasoning for that specific engine being used. So it sounds like we see that the same way.

Now that I have that out of the way, I think the way you are using "animation" is where the misunderstanding is coming from. This is how the context comes off to me in your posts. Not saying this is what you mean, just what it sounds like.

1. That animation is separate from the engine.

This I understand in the proper context now.

2. That the animation is not an asset.

This is where the confusion is for me, because your posts made it seem like this isn't part of the assets. First was:

What we've seen is a video showing some scenery that bears close resemblance to something that could fit into a TP style. There is no reason to believe there even is an "engine" powering any of this. For all we know it could simply be playing back an animation with some camera and rendering parametrization control. You don't need an "engine" for that.

And mainly, with the Samaritan demo you mentioned, you said:

It really depends on what you want to show. The Samaritan demo people refer to all the time is most probably just a canned animation running in real-time with kick-ass surface shaders, lots of geometry and whatnot. It's a graphical showcase, highlighting what the GPU could possibly do. I doubt it is doing any game simulation (AI, input device processing... I would be surprised if it even did any data loading from storage as opposed to pre-loading everything onto the video RAM). So is that demo representative of what real games will be able to do?

I get the impression from this that the animation is not an asset. This is why I posted the link showing non-gaming usage of UE3. Those things don't utilize the game simulations you mention, but they do use the engine to perform animation and other aspects. This is also another area where your response about assets will help, because putting all the info I have together right now leads me to believe you have to have the engine to run the assets, and that animation is included under assets. Going back to the earlier analogy, that'd be like having the body of a car and no engine inside of it. It might look pretty, but it's not going anywhere. Saying it's possible that it could just be an animation doesn't sound logical for what Nintendo and Epic would be trying to achieve, and for what I'm referring to.

sarusama said:
This is where we are talking past each other. Engines and assets are orthogonal components that are used in games. They are usually reasonably separate and independent parts. What you just wrote is inferring in the following way:

1. "since the assets would already be there" is tying assets to the engine used for TP
2. because you are seeing the same assets you infer it's the engine used for TP

There are several leaps of faith here:
1. assets were re-used based on what you see. I would admit this is a fair assumption, but I'd have to look at some of the assets in more detail (side by side) and compare. I wouldn't be surprised if they had a modeller just make these real quick based on existing designs.
2. you are connecting assets to engines.

I guess at this point we both understand each other, but I always reply because I can't resist arguing your desire to connect assets and engines and use that to infer something.

I believe I hit on the real area of concern, so just addressing this part of the post and the first two points:

1. What I mean by that is that they've already used that engine for those assets, so it would seem easier to make the assets look better and just use the engine you're already familiar with.
2. That is part of it, with the other part being how Nintendo tends to go about these things.

As for the leaps of faith, that's what inductive and deductive reasoning are about, and as I mentioned, I used the inductive side. And for the two points:

1. I definitely believe they were reused. Like in the picture that Luigiv posted, the only real difference I see is the front eyes being changed.
2. This goes back to the previous #2, but that's how Nintendo tends to work. So in this case it's very hard to separate the two, as opposed to a licensed engine like UE3.

Nintendo is taking a new step in technology, and they need to adapt on the software side accordingly. So I'm hoping that by the time Zelda, Mario, and whatever else uses the engine arrive, they will have a new version of it that properly uses the hardware. I say hope, but I actually expect them to, since they always have. After all, Nintendo's games tend to look the best on their consoles, which after these discussions is most likely because their engine is optimized for the hardware, as opposed to other engines that are trying to cover as many bases as possible. Now not only will we see other engines run on this console thanks to the better hardware those engines were designed for, but we'll see one from Nintendo that's optimized for the new hardware. After seeing that Zelda demo and Nintendo's history with those demos, my body might not be ready for how the real thing will look. I'll have to train it and get it ready.
 
brain_stew said:
Both the PS3 and 360 can do that already and have games that demonstrate it. It's purely a software problem, though yes, being able to keep an entire 1080p framebuffer in eDRAM certainly won't hurt the console's ability to do that.

When you say software problem, what would that mean? The software can't handle it properly, the developers don't try to push it, or is it something else? And since we keep seeing mentions about lots of eDRAM from both the CPU and recently the GPU, will the high amounts help with that problem or is there other reasoning to this?

brain_stew said:
RAM prices can actually increase over time as an older standard is no longer produced in mass quantities. So even if GDDR5 is slightly more expensive today, over the console's lifespan there's no real cost savings to be made here; you might end up increasing costs. Ever wonder why, despite the Wii otherwise being antiquated 1990s technology, Nintendo still went and used cutting-edge GDDR3?

Forgetting that, RAM can't be accounted for in isolation. Nintendo will have a certain bandwidth target for main memory and GDDR5 can get there at half the bus width. Reducing the bus width reduces die size, it reduces motherboard complexity and means that (at least at some point, if not by launch) you can use fewer RAM chips.

GDDR3 on a 256 bit bus delivers around the same bandwidth as GDDR5 on a 128 bit bus. The latter is already cheaper now (it even was when AMD first introduced GDDR5 to a consumer device several years ago when GDDR5 carried a much higher premium) and will prove significantly cheaper as time goes by.

TL;DR GDDR5 is actually the cheaper option for any given amount of bandwidth.

What about GDDR5's latency? Would it be safe to say that "bandwidth's pro > latency's con"?
 
bgassassin said:
When you say software problem, what would that mean? The software can't handle it properly, the developers don't try to push it, or is it something else?

What about GDDR5's latency? Would it be safe to say that "bandwidth's pro > latency's con"?

Basically a design choice: 1080p/60fps limits the amount of flashy visuals you can do, so most devs choose to ignore it.

I don't know if this is reliable or useful; it's a Google-translated page comparing GDDR3 against various GDDR5 GPU/memory frequencies on the 4870.
http://www.madshrimps.be/vbulletin/f22/ati-hd4870-gddr5-vs-gddr3-45988/
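
To give a rough sense of why devs treat 1080p/60 as a trade-off, here's the simple pixel-rate arithmetic (nothing console-specific, just resolution times frame rate):

[CODE]
# Pixels shaded per second at different targets -- why 1080p/60 eats into "flashy visuals".
def pixels_per_second(w, h, fps):
    return w * h * fps

p720_30 = pixels_per_second(1280, 720, 30)     # ~27.6 million/s
p1080_60 = pixels_per_second(1920, 1080, 60)   # ~124.4 million/s
print(p1080_60 / p720_30)  # -> 4.5x the per-pixel work budget consumed
[/CODE]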
 