WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

At this point I think it's safe to say the performance will be lower than 350 GFLOPS. This would make a lot more sense, since there was never enough power.

wat

Based on what?

And the evidence for 55nm is undeniably far weaker than the evidence for 40nm at the moment.

In terms of power consumption, 55nm would make so little sense it's not even funny, considering the power budget the GPU is likely working with here.

Serious question: why do you (and abominable snowman) seem to want so badly for the system to be as weak as possible? What stake do you guys have in this?

Not trolling; serious question.
 
At this point I think it's safe to say the performance will be lower than 350 GFLOPS. This would make a lot more sense, since there was never enough power.



If 40nm was a mature process at this point, why would they have gone with 55nm when it produces more heat and draws more power? That goes against the design philosophy of the entire system.
 
Thoughts on this hypothesis posed on B3D, anyone?

I have a hard time believing this. Mainly because it doesn't make sense imo.

And while many were expecting launch games to look noticeably better than on PS360, they also didn't look that much worse. Mass Effect 3 and Assassin's Creed 3 ran nearly as well as on 360. Had the hardware been this "botched", I really doubt that would have been possible for graphically demanding multiplatform games at launch, while 360 was the lead platform and developers were inexperienced with the Wii U and likely didn't get many resources and/or much time. So either this hypothesis doesn't make any sense, or a lot of that "mystery" logic on the GPU is doing some important stuff.
 
Presumably referring to the "weirdness", for lack of a better word, of the shader blocks and the implications that explanations of them may have.


Thoughts on this hypothesis posed on B3D, anyone?

I don't buy that at all. I don't think it would be as cheap as he suggests it would've been, and Nintendo have made several mentions of investments into hardware R&D for a 'next gen console', which they probably wouldn't do if they were using a design from before 2010.

RE: scaling, I was under the impression that if you moved a config from one fab process to a smaller one, it shrinks roughly linearly, though not by as much as it would have if it had been designed for that destination fab process.
 
What about the theory posed by z0m3le above, i.e. VLIW4 with 32 SPs per block - since at 40nm the blocks seem too dense for 40 SPs and too sparse for 20?

Is it even a feasible possibility?
 
Presumably referring to the "weirdness", for lack of a better word, of the shader blocks and the implications that explanations of them may have.


Thoughts on this hypothesis posed on B3D, anyone?

To be frank, as a hypothesis it falls flat in one very obvious way: how does a 160 GFLOP part allow games like Trine 2 to use shader effects not possible on 360/PS3? R700-like shaders should certainly be more efficient than the 360's or PS3's, but not by that much. Also, we know that this GPU started out from R700. Everything from its base feature set to Nintendo's own documentation makes that very clear. A number referencing R600 (very likely simply legacy) is a pretty loose thread to be tugging at, IMO.

The whole SIMD comparison thing seems to be getting very convoluted. Latte's blocks seem to look about right for a 40nm block of R700 shaders. Why people are then comparing to other, further removed GPUs I'm not really sure; I can't see how it helps, rather it just muddies the water.
 
What about the theory posed by z0m3le above, i.e. VLIW4 with 32 SPs per block - since at 40nm the blocks seem too dense for 40 SPs and too sparse for 20?

Is it even a feasible possibility?

I don't know enough to be able to tell if it's feasible or not, but I was considering something like this a couple days ago. Hopefully not because it would probably indicate a weaker part than we otherwise would have had on our hands.

What? 55nm now? Oh boy.

Highly unlikely, based on the performance and power consumption (especially) we are seeing. But I guess very little can be totally ruled out right now, per the experts.
 
That is one far-fetched hypothesis, as I already mentioned.
Didn't realise it had already been posted/addressed, sorry.

To be frank, as a hypothesis it falls flat in one very obvious way: how does a 160 GFLOP part allow games like Trine 2 to use shader effects not possible on 360/PS3? Also, we know that this GPU started out from R700. Everything from its base feature set to Nintendo's own documentation makes that very clear. Some odd number being connected to R600 is a pretty loose thread to be tugging at, IMO.

The whole SIMD comparison thing seems to be getting very convoluted. Latte's blocks seem to look about right for a 40nm block of R700 shaders. Why people are then comparing to other, further removed GPUs I'm not really sure; I can't see how it helps, rather it just muddies the water.
Was there a to-scale comparison with a different R700 die instead of more removed components?
 
The working hypothesis I have (emphasis on hypothesis, since this is my opinion, and judging by certain reactions I apparently need to go further in clarifying that) goes back to the past, when at one point we did look at Flipper/Hollywood influence on Latte's design.

Like, for example, what if some of the GPU is a "modern transform and lighting unit"? The previous GPUs had a fixed-function T&L unit, and in PC cards back in the day this was a staple until the arrival of programmable vertex and pixel shaders. So from a modern perspective, the lighting side would be obvious. On the transform side, I never put much learnin' into understanding a T&L unit back then and forgot everything I knew, but this time around I found an old nVidia article about it.

http://www.nvidia.com/object/transform_lighting.html

How does transform work?
Transform performance dictates how precisely software developers can "tessellate" the 3D objects they create, how many objects they can put in a scene and how sophisticated the 3D world itself can be.

Maybe Nintendo allocated resources to ensure the stability and predictability of tessellation and lighting for future games and in turn these were also modified for BC purposes.
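
As a rough illustration of the "transform" half of T&L described in that nVidia snippet, here's a minimal sketch in Python/NumPy. The vertex and matrices are arbitrary example values, nothing Wii U specific; the point is just that a fixed-function transform stage (or a modern vertex shader) does exactly this kind of matrix work per vertex.

import numpy as np

# Minimal sketch of the "transform" in T&L: project a 3D vertex to clip space.
# The matrices and vertex below are arbitrary example values, not Wii U specifics.

def perspective(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                          0.0],
        [0.0,        f,    0.0,                          0.0],
        [0.0,        0.0,  (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ])

def translate(x, y, z):
    # Translation matrix (here standing in for a full model-view transform).
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

model_view = translate(0.0, 0.0, -5.0)      # push the object 5 units away from the camera
projection = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)

vertex = np.array([1.0, 1.0, 0.0, 1.0])     # object-space position (homogeneous coords)
clip = projection @ model_view @ vertex     # the "transform" step
ndc = clip[:3] / clip[3]                    # perspective divide

print("clip space:", clip)
print("normalized device coordinates:", ndc)

More tessellation and more objects per scene just means more of these per frame, which is why the article ties transform performance to scene complexity.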

lol, guess bg is back. ;)



http://iwataasks.nintendo.com/interviews/#/wiiu/console/0/2

hmmmm..... Maybe this is why we are getting some signs of r6xx in the gpu.

For a little while at least, haha.

I don't understand the reasoning behind bringing up R6xx, since the Wii's architecture was released long before unified shader architectures came out. Why use an architecture unrelated to Wii for Wii BC?

Well, i was being sarcastic.

Saying it should have been hand drawn and the only other thing given didn't indicate sarcasm to you? :P

Or 32, and we are looking at VLIW4... I wonder if R700 could have been modified in this way.

It does make some sense given this measurement, and you'd have virtually the same performance in a PC versus 40, since the R700 through "900" parts were ultimately only efficient up to the 4th stream processor in a unit. Of course, for a console, having the extra stream processors might have gained some performance, but at that point I am talking about increasing the size of the chip as well.

BTW, just a word to general GAF - this is a tech thread, so I have to point this out. 360 had 48 stream processors, split into 3 SIMDs of 16 clusters each... I keep hearing 360 had 240, and that is just the computational performance in GFLOPS when those 48 stream processors are clocked at the 500MHz Microsoft designed for.

Everything from R700 on uses many, many more stream processors; the architectures are very different. Crowd performance theory :)

The reason you hear that is because those 48 stream processors were a 5-wide design, meaning each stream processor had 5 ALUs. 48 SPs x 5-wide = 240 ALUs, and at 500MHz that works out to 240 GFLOPS (240 ALUs x 2 FLOPs per multiply-add x 0.5 GHz). It used to confuse me too, because that information is rarely mentioned. Xenos is not an actual VLIW5 design, but has 5 ALUs per SP. I once found an article that talked to one of the designers about the reasoning behind not using VLIW5. That's how I learned about it not being VLIW5 (something to do with efficiency, I believe), but I wish I had bookmarked it because I haven't been able to find it since.

This to me goes back to the "not all ALUs are created equal" discussion.
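
To make that arithmetic explicit, here's a tiny Python sketch of the peak-FLOPS calculation just described; the Xenos counts are the ones quoted above, and the factor of 2 is the usual multiply-add convention for peak figures.

# Peak-FLOPS arithmetic as described above: SPs x ALUs-per-SP x 2 (multiply-add) x clock.
# The Xenos numbers are the ones quoted in this thread; the formula itself is generic.

def peak_gflops(stream_processors, alus_per_sp, clock_ghz, flops_per_alu_per_cycle=2):
    return stream_processors * alus_per_sp * flops_per_alu_per_cycle * clock_ghz

xenos = peak_gflops(stream_processors=48, alus_per_sp=5, clock_ghz=0.5)
print(f"Xenos: 48 SPs x 5 ALUs x 2 FLOPs x 0.5 GHz = {xenos:.0f} GFLOPS")  # prints 240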
 
The working hypothesis I have (emphasis on hypothesis, since this is my opinion, and judging by certain reactions I apparently need to go further in clarifying that) goes back to the past, when at one point we did look at Flipper/Hollywood influence on Latte's design.

Like, for example, what if some of the GPU is a "modern transform and lighting unit"? The previous GPUs had a fixed-function T&L unit, and in PC cards back in the day this was a staple until the arrival of programmable vertex and pixel shaders. So from a modern perspective, the lighting side would be obvious. On the transform side, I never put much learnin' into understanding a T&L unit back then and forgot everything I knew, but this time around I found an old nVidia article about it.

http://www.nvidia.com/object/transform_lighting.html



Maybe Nintendo allocated resources to ensure the stability and predictability of tessellation and lighting for future games and in turn these were also modified for BC purposes.

Welcome back, bg! Neogaf is a hell of a drug, ain't it? Hope all has been well on your end.

Now that formalities are out of the way, on to your hypothesis. Something everyone should know about me is that there is nothing so far-fetched that I won't at least give it a fair shot. I love thinking outside the box, and when it comes to Nintendo, one almost has to get used to that way of thought.

That being said, here's how I look at the possibility of Nintendo designed fixed function/dedicated silicon blocks on Latte. I broke it down into a little "for and against." Anyone please feel free to add your own.

FOR:

-Wii BC blocks might be present - why not upgrade them for Wii U functionality?
-BG's source and Li Mu Bai have both hinted at such a possibility
-3DS' "Maestro" extensions are fixed function
-Predictable performance
-Low power consumption

Against:

-Not present in leaked features list - this is not something like clock speed or ALU count that would have tipped off the competition or disappointed fans. If there is something like "free per-pixel lighting," such a feature is exactly the type we would expect to be listed with the others in the initial vgleaks specs.

-Not heard of by Ideaman or mentioned by any developers in interviews.

-In an investor Q&A some time back, Iwata was questioned on why they chose fixed function pixel shaders for 3DS. What much of it boiled down to was that they were appropriate, at the time, for a portable device. Wii U is a different scenario.

-If these custom blocks exist, where did Nintendo get them from? On 3DS, they licensed the technology from DMP. Put plainly, I don't know if Nintendo's engineers are in the business of designing hardware blocks from scratch. Would AMD have really been a help with something this different from the current trend?

-Something this exotic would be in documentation. This is not like clock speeds and ALU count, where developers can easily gauge performance on their own. Further, Nintendo told them basically all they need to know in that the chip is based on the R700 ISA and uses a similar API as OpenGL. Devs know how to use that stuff. Not so with any dedicated silicon. Would Nintendo really sabotage 3rd parties this way?

-Legacy Wii hardware blocks (RAM) seem to be locked off by Nintendo. It's likely that if they included legacy Wii logic, they have done the same, rather than try to shoehorn it into a modern graphics pipeline, where it has no place.

Thoughts, anyone?
 
Welcome back, bg! Neogaf is a hell of a drug, ain't it? Hope all has been well on your end.

Now that formalities are out of the way, on to your hypothesis. Something everyone should know about me is that there is nothing so far-fetched that I won't at least give it a fair shot. I love thinking outside the box, and when it comes to Nintendo, one almost has to get used to that way of thought.

That being said, here's how I look at the possibility of Nintendo designed fixed function/dedicated silicon blocks on Latte. I broke it down into a little "for and against." Anyone please feel free to add your own.

FOR:

-Wii BC blocks might be present - why not upgrade them for Wii U functionality?
-BG's source and Li Mu Bai have both hinted at such a possibility
-3DS' "Maestro" extensions are fixed function
-Predictable performance
-Low power consumption

Against:

-Not present in leaked features list - this is not something like clock speed or ALU count that would have tipped off the competition or disappointed fans. If there is something like "free per-pixel lighting," such a feature is exactly the type we would expect to be listed with the others in the initial vgleaks specs.

-Not heard of by Ideaman or mentioned by any developers in interviews.

-In an investor Q&A some time back, Iwata was questioned on why they chose fixed function pixel shaders for 3DS. What much of it boiled down to was that they were appropriate, at the time, for a portable device. Wii U is a different scenario.

-If these custom blocks exist, where did Nintendo get them from? On 3DS, they licensed the technology from DMP. Put plainly, I don't know if Nintendo's engineers are in the business of designing hardware blocks from scratch. Would AMD have really been a help with something this different from the current trend?

-Something this exotic would be in documentation. This is not like clock speeds and ALU count, where developers can easily gauge performance on their own. Further, Nintendo told them basically all they need to know in that the chip is based on the R700 ISA and uses a similar API as OpenGL. Devs know how to use that stuff. Not so with any dedicated silicon. Would Nintendo really sabotage 3rd parties this way?

-Legacy Wii hardware blocks (RAM) seem to be locked off by Nintendo. It's likely that if they included legacy Wii logic, they have done the same, rather than try to shoehorn it into a modern graphics pipeline, where it has no place.

Thoughts, anyone?




As you said, if there were something here, it would be documented. Unless Nintendo thinks it's funny to hide things from developers, like ways to handle effects "for free".
 
-Something this exotic would be in documentation. This is not like clock speeds and ALU count, where developers can easily gauge performance on their own. Further, Nintendo told them basically all they need to know in that the chip is based on the R700 ISA and uses a similar API as OpenGL. Devs know how to use that stuff. Not so with any dedicated silicon. Would Nintendo really sabotage 3rd parties this way?

Thoughts, anyone?

TwoTribes stumbled over hardware texture compression on the Wii U GPU which maybe was NOT documented at that time...

http://gengame.net/2012/10/two-tribes-saves-memory-with-new-wii-u-hardware-feature/

https://twitter.com/TwoTribesGames/status/260385727286751233

https://twitter.com/UtopiaOpera/status/260396673451319296
 
-Legacy Wii hardware blocks (RAM) seem to be locked off by Nintendo. It's likely that if they included legacy Wii logic, they have done the same, rather than try to shoehorn it into a modern graphics pipeline, where it has no place.

Thoughts, anyone?

Nintendo said that they didn't just include Wii hardware, but instead found a way to modify the Wii U hardware to allow for Wii backwards compatibility. So it doesn't sound like there's just Wii logic on the chip that's only enabled in Wii mode. Also, even if that extra embedded memory is locked off from developers, that doesn't mean it isn't being used in Wii U mode.
 
TwoTribes stumbled over hardware texture compression on the Wii U GPU which was NOT documented at that time...

http://gengame.net/2012/10/two-tribes-saves-memory-with-new-wii-u-hardware-feature/

Of course, new hardware tricks and such are found frequently. It's part of learning a console's intricacies. For all we know, it may be something implicit in the R700 architecture that not even Nintendo was aware of.

It just seems that something like these fixed function units would be on a different level. We have it from multiple parties that Nintendo's documentation is poor, but to purposefully hide a supposedly major aspect of your hardware's architecture goes way beyond that.

Nintendo said that they didn't just include Wii hardware, but instead found a way to modify the Wii U hardware to allow for Wii backwards compatibility. So it doesn't sound like there's just Wii logic on the chip that's only enabled in Wii mode. Also, even if that extra embedded memory is locked off from developers, that doesn't mean it isn't being used in Wii U mode.

It sounded like some parts might have been added "1:1." It's unclear. However, we could just as easily say that perhaps he was talking about having TEV instructions go to the pixel shaders, having the old rasterizer's instructions go to the new one, or writing a vertex shader to take care of the hardware T&L. It's just not clear what specific parts Iwata was speaking of and what kind of modifications we're talking about. If anything, updated fixed-function hardware seems more like the opposite: changing Wii parts to work for Wii U, not the other way around.
 
Of course, new hardware tricks and such are found frequently. It's part of learning a console's intricacies. For all we know, it may be something implicit in the R700 architecture that not even Nintendo was aware of.

It just seems that something like these fixed function units would be on a different level. We have it from multiple parties that Nintendo's documentation is poor, but to purposefully hide a supposedly major aspect of your hardware's architecture goes way beyond that.

I didn't find the proof of it being undocumented. I thought I read that. Well, it's been 5 months, so...

I think the most reasonable thing may be that the Wii's fixed functions were changed to work for Wii U games as well. I mean, why not use them when you need them in your GPU anyhow?
 
Of course, new hardware tricks and such are found frequently. It's part of learning a console's intricacies. For all we know, it may be something implicit in the R700 architecture that not even Nintendo was aware of.

It just seems that something like these fixed function units would be on a different level. We have it from multiple parties that Nintendo's documentation is poor, but to purposefully hide a supposedly major aspect of your hardware's architecture goes way beyond that.



It sounded like some parts might have been added "1:1." It's unclear. However, we could just as easily say that perhaps he was talking about having TEV instructions go to the pixel shaders, having the old rasterizer's instructions go to the new one, or writing a vertex shader to take care of the hardware T&L. It's just not clear what specific parts Iwata was speaking of and what kind of modifications we're talking about. If anything, updated fixed-function hardware seems more like the opposite: changing Wii parts to work for Wii U, not the other way around.

They did this on the N64; I'm not sure why people are surprised by it. Nintendo still hasn't grown up in this area the way one could easily say Sony and MS have, especially in light of Nintendo's history with this aspect of development. They won't police themselves and don't listen to devs or to their fans with knowledge; it's pretty clear they only care about things in their own bubble.
 
I think it's more likely that they modified the programmable shaders so that they could function in a type of "compatibility" mode. So, when programming for Wii U, you might be able to use it as programmable or fixed, depending on the code thrown at it...

Does that sound remotely feasible? Or would the fixed functions of these theoretical shaders only be available when the clock speed is dropped in Wii mode?
 
I think it's more likely that they modified the programmable shaders so that they could function in a type of "hardware compatibility" mode. So, when programming for Wii U, you might be able to use it as programmable or fixed, depending on the code thrown at it...

Does that sound remotely feasible? Or would the fixed functions of these theoretical shaders only be available when the clock speed is dropped in Wii mode?

I believe what you just described as "compatibility mode" is basically writing a pixel shader. That is the benefit of programmable hardware. It's "fixed function" only so much as it does anything you tell it to do...well, optimally :P
 
In case somebody missed it, marcan changed his mind re which interface is the Espresso bus and which is the DDR3 bus. IOW, Chipworks were right.
 
I believe what you just described was basically writing a pixel shader. That is the benefit of programmable hardware. It's "fixed function" only so much as it does anything you tell it to do...well, optimally :P
I meant TEV functionality being tacked on, but having that part of the silicon disabled at higher clock speeds.

Edit: full disclosure, I have no idea what I'm talking about. But N4 has extra silicon... Just theorizing what it may be for.
 
In case somebody missed it, marcan changed his mind re which interface is the Espresso bus and which is the DDR3 bus. IOW, Chipworks were right.

Round and round we go!

Edit: I'm still not so sure. He seems somewhat shaky on what's going on there. I'd like to see the BGA myself - not that I'm any type of expert.
 
Or 32, and we are looking at VLIW4... I wonder if R700 could have been modified in this way.

It does make some sense given this measurement, and you'd have virtually the same performance in a PC versus 40, since the R700 through "900" parts were ultimately only efficient up to the 4th stream processor in a unit. Of course, for a console, having the extra stream processors might have gained some performance, but at that point I am talking about increasing the size of the chip as well.

BTW, just a word to general GAF - this is a tech thread, so I have to point this out. 360 had 48 stream processors, split into 3 SIMDs of 16 clusters each... I keep hearing 360 had 240, and that is just the computational performance in GFLOPS when those 48 stream processors are clocked at the 500MHz Microsoft designed for.

Everything from R700 on uses many, many more stream processors; the architectures are very different. Crowd performance theory :)
Well, this was because the fifth processor was different, but VLIW5 on a closed platform makes more sense to me than VLIW4.

If densities can go up thanks to a relatively low clock, the lack of double-precision instructions, 40nm being much more mature now than in 2007-2008, and whatever customization Nintendo could have made to the SPs, then 40 SPs is not out of the picture by any means.
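
For reference, here's a quick back-of-the-envelope comparison in Python of the per-block configurations being debated. It assumes eight shader blocks and the ~550MHz clock figure that has been floated for Latte, counts "SPs" the way AMD counts them for R700 (one ALU lane each), and is purely illustrative, not a confirmed spec.

# Rough peak-FLOPS comparison of the per-block configurations being debated.
# Assumes eight shader blocks and a ~550MHz clock, and counts "SPs" the R700 way
# (one ALU lane each, 2 FLOPs per cycle via multiply-add). Illustrative only.

CLOCK_GHZ = 0.55
BLOCKS = 8

configs = {
    "20 SPs/block (e.g. 4 x VLIW5)": 20,
    "32 SPs/block (e.g. 8 x VLIW4)": 32,
    "40 SPs/block (e.g. 8 x VLIW5)": 40,
}

for name, sps_per_block in configs.items():
    total_sps = sps_per_block * BLOCKS
    gflops = total_sps * 2 * CLOCK_GHZ
    print(f"{name}: {total_sps} SPs total -> ~{gflops:.0f} GFLOPS peak")

Under those assumptions the 20-SP and 40-SP cases land roughly where the low (~160-176 GFLOPS) and high (~350 GFLOPS) figures thrown around in this thread come from, with the VLIW4 guess sitting in between.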
 
I know that AMD and nVidia will work with certain developers, take their shaders and hand-optimize them to run better on their hardware. Perhaps Nintendo did this with the fixed functions from the Wii. The game engine makes a call to the fixed function, the Wii system intercepts this call and replaces it with the optimized shader. The inputs would be the same, and the output would be as well. As far as the engine knows, it called the function built into Hollywood. Even for games that code down to the metal, I would imagine it's possible. There's obviously some level of obfuscation on the hardware anyway, so there's nothing to say that the obfuscation layer can't do this.
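
Purely as a conceptual sketch of that "intercept and substitute" idea, here's what such a shim could look like in Python. To be clear, the stage names, combine modes and shader strings below are all invented for illustration; none of this is from actual Nintendo or AMD documentation.

# Purely conceptual sketch of the "intercept and substitute" idea described above:
# a fixed-function state setting is caught and mapped to an equivalent shader.
# Every name, mode and shader string here is invented for illustration.

# Hypothetical table mapping a fixed-function combine mode to a pre-written shader body.
TEV_MODE_TO_SHADER = {
    "MODULATE": "out_color = texture_color * vertex_color;",
    "ADD":      "out_color = texture_color + vertex_color;",
    "REPLACE":  "out_color = texture_color;",
}

class FixedFunctionShim:
    # Pretends to be the old fixed-function API; actually selects shader snippets.

    def __init__(self):
        self.active_shader = None

    def set_tev_combine(self, mode):
        # The game "thinks" it configured a fixed-function TEV stage; the shim
        # instead picks an equivalent programmable-pixel-shader snippet.
        self.active_shader = TEV_MODE_TO_SHADER[mode]

    def draw(self):
        print("Drawing with generated shader body:", self.active_shader)

gpu = FixedFunctionShim()
gpu.set_tev_combine("MODULATE")   # same inputs and outputs as the old call, new backend
gpu.draw()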
 
The Radeon 9700 included no fixed function shading hardware. OpenGL 1.x and DirectX 8 and earlier were emulated with pixel shaders. Emulating the TEVs is not really an issue at all.
 
I know it's all speculation at the moment, but how much better is the Wii U GPU looking compared to the 360/PS3?

Similar power with relatively modern features?
 
I know it's all speculation at the moment, but how much better is the Wii U GPU looking compared to the 360/PS3?

Similar power with relatively modern features?

I wouldn't say so. From what I'm reading, I just think that with some development time the Wii U will separate itself from the 360/PS3.
 
I know it's all speculation at the moment, but how much better is the Wii U GPU looking compared to the 360/PS3?

Similar power with relatively modern features?


I won't venture to give an "x times this" number, but I think at this point most of us would agree that it's likely a fair bump but still very much in the same ballpark.


Someone asks this every couple of pages by the way, anyone new to this thread should at least try to read some of it, or at least the OP.
 
Round and round we go!

Edit: I'm still not so sure. He seems somewhat shaky on what's going on there. I'd like to see the BGA myself - not that I'm any type of expert.
The more we see the less we know.
I know that AMD and nVidia will work with certain developers, take their shaders and hand-optimize them to run better on their hardware. Perhaps Nintendo did this with the fixed functions from the Wii. The game engine makes a call to the fixed function, the Wii system intercepts this call and replaces it with the optimized shader. The inputs would be the same, and the output would as well. As far as the engine knows, it called the function built into Hollywood. Even for games that code down to the metal, I would imagine it's possible. There's obviously some level of obfuscation on the hardware anyway, so there's nothing to say that the obfuscation layer can't do this.
I've imagined it's a combination of something like this (whether at the hardware level or somewhere higher in the software stack) and just duplicated (and/or upgraded) hardware for whatever is left that couldn't be done with the newer parts alone. It seems like the most plausible scenario that fits with what Shiota said.
 
I know it's all speculation at the moment, but how much better is the Wii U GPU looking compared to the 360/PS3?

Similar power with relatively modern features?

We're not really sure yet.

The floor seems to be on par with PS360 (a tad higher due to modern features).

The ceiling seems to be up to 2x the "power" (as imprecise as that metric is).

Within that where it lies is anybody's guess.
 
I won't venture to give an "x times this" number, but I think at this point most of us would agree that it's likely a fair bump but still very much in the same ballpark.

You know, this question has been asked so many times it's starting to get annoying. All we can give are speculative answers, so can we stop asking and just speculate?
 
The more we see the less we know.

After thinking about it some more, I'm sticking to my guns and disputing the identity of the DDR3 and 60x I/O shown in the OP.

It makes no sense for the Wii's texture cache to be that far removed from the DDR3 interface. Similarly, it makes little sense for the USB interface to be on the opposite side of where the traces to those inputs appear on the motherboard!

Then again, a lot of things don't make sense to me these days - especially when it comes to Nintendo. :P
 
I won't venture to give an "x times this" number, but I think at this point most of us would agree that it's likely a fair bump but still very much in the same ballpark.

Orbis and Durango are still within the same order of magnitude of the previous iterations. The PS3 and 360 were at least an order of magnitude more powerful than their predecessors. It's the primary reason I think Nintendo went the route they did. They're almost psychotically cost conscious, for one, but at the same time graphics performance is running into a diminishing returns scenario.

I'm not a neo-luddite, mind you; I see the benefits that increased performance brings, but count me among those who don't think the next consoles are going to be as far separated from the previous generation as some expect.
 
the next consoles are going to be as far separated from the previous generation as some expect.

I seem to remember this sentiment being reflected toward the end of a few generations, at least the last one for sure. The thing is, we don't get to see how much more developers can do with new hardware until it has been out for a few years. The PC side of things sometimes gives us a small glimpse, but even that is mostly games made with console limitations in mind, with a few extra fancy effects thrown in.

Need I remind people what the start of this generation looked like?
 
Orbis and Durango are still within the same order of magnitude of the previous iterations. The PS3 and 360 were at least an order of magnitude more powerful than their predecessors. It's the primary reason I think Nintendo went the route they did. They're almost psychotically cost conscious, for one, but at the same time graphics performance is running into a diminishing returns scenario.

I'm not a neo-luddite, mind you; I see the benefits that increased performance brings, but count me among those who don't think the next consoles are going to be as far separated from the previous generation as some expect.

I expect a lot of damage control around here and elsewhere once the new consoles are revealed and they fail to completely and utterly blow away the best this gen has to offer.


EDIT @tipoo: Sure, but this time around, even the alleged raw numbers tell us that the jump is much smaller. As was said, this time it's less than an order of magnitude by most measures, whereas last time it was more than an order of magnitude by most measures.
 
Also works if you rotate the die shot 180 degrees though. Plus the larger (longer) I/O interface would then line up with all them traces going to the top and left side of the MCM (probably doesn't matter). That's what I was thinking anyway - mainly because I figured Chipworks had used the same orientation as their exterior shots of the GPU.


I know exactly diddly-squat about this though so it's pure conjecture/guesstimation!

As Marcan tweeted, there are 280 pins in that long top left I/O - much more than needed for 60x to CPU and much longer than needed for 4x DDR3.

But what if (as the chip labeling suggests) the GPU design was finished a couple years back? RAM is one of the last system aspects to fall in place. Back in 2010/2011, it may not have been clear that 4 gigabit DDR3 chips would be available. In that case, they would have required 8 DDR3 chips (or 8 GDDR3 chips if they only planned on 1 GB of RAM). In that case, it would have been wise to design a wider I/O just in case.
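
As a back-of-the-envelope check on why the chip count matters, here's a small Python sketch of the memory bandwidth for the two layouts being discussed, assuming 16-bit-wide DDR3 chips at a DDR3-1600 data rate (both assumptions on my part, not confirmed figures).

# Back-of-the-envelope bandwidth for the memory-interface widths being discussed.
# Assumes 16-bit-wide DDR3 chips and a DDR3-1600 (1600 MT/s) data rate; the 8-chip
# case is the hypothetical "designed-in headroom" scenario from the post above.

def ddr3_bandwidth_gbs(num_chips, bits_per_chip=16, transfers_per_sec=1600e6):
    bus_width_bits = num_chips * bits_per_chip
    return bus_width_bits / 8 * transfers_per_sec / 1e9   # bytes per transfer x rate, in GB/s

for chips in (4, 8):
    print(f"{chips} x 16-bit chips = {chips * 16}-bit bus -> {ddr3_bandwidth_gbs(chips):.1f} GB/s")

Under those assumptions, four chips give a 64-bit bus at about 12.8 GB/s, while an eight-chip layout would double the width and bandwidth, which is the kind of headroom a wider I/O block would make room for.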
 
EDIT @tipoo: Sure, but this time around, even the alleged raw numbers tell us that the jump is much smaller. As was said, this time it's less than an order of magnitude by most measures, whereas last time it was more than an order of magnitude by most measures.


I don't disagree, but even a conservative six times more power does give developers a lot more to work with. Plus it's not just raw power, it's newer shader models, more memory (which is the number one thing developers seemed to hate this generation) etc, and that's not even counting the rumored dedicated GPGPU graphics hardware which may at the very least make a larger portion of games have more PhysX like effects that were promised so long ago.

It may not be as visible a change due to not being an SD-HD like jump again, but realism in games can still go a long way forward.
 
Orbis and Durango are still within the same order of magnitude of the previous iterations. The PS3 and 360 were at least an order of magnitude more powerful than their predecessors. It's the primary reason I think Nintendo went the route they did. They're almost psychotically cost conscious, for one, but at the same time graphics performance is running into a diminishing returns scenario.

I'm not a neo-luddite, mind you; I see the benefits that increased performance brings, but count me among those who don't think the next consoles are going to be as far separated from the previous generation as some expect.

The thing is, the same thing was said at the end of last generation (Xbox vs Xbox 360), except to a larger degree, yet we see how things turned out. The best-looking game released this gen, imo, is Halo 4, and Halo 4 has so much room for improvement just in IQ and texture quality.

There's a sizeable gap between every tech demo shown thus far and the games we get now. That's gotta be a ballpark of where devs are aiming, at the least. I expect games rolling out soon after next gen that at least match the GTAIV mod in technical effect usage, since they're pretty easy to manipulate and implement. Is that a decent step up, in your opinion?
 
I seem to remember this sentiment being reflected toward the end of a few generations, at least the last one for sure. The thing is, we don't get to see how much more developers can do with new hardware until it has been out for a few years. The PC side of things sometimes gives us a small glimpse, but even that is mostly games made with console limitations in mind, with a few extra fancy effects thrown in.

I should clarify. I don't mean to say there will be no improvement. I just don't think it will be the kind of improvement that will have the gaming mainstream lining up to spend $500 or whatever.

It's easy to shit on Nintendo's strategy because they made some absolutely terrible mistakes marketing the Wii U and getting software ready, but at the same time I think a lot of people are delusional to think Durango and Orbis are going to have rockets on their back out of the gate. Especially when they likely will be at the same price points as tablets.
 
As Marcan tweeted, there are 280 pins in that long top left I/O - much more than needed for 60x to CPU and much longer than needed for 4x DDR3.

But what if (as the chip labeling suggests) the GPU design was finished a couple years back? RAM is one of the last system aspects to fall in place. Back in 2010/2011, it may not have been clear that 4 gigabit DDR3 chips would be available. In that case, they would have required 8 DDR3 chips (or 8 GDDR3 chips if they only planned on 1 GB of RAM). In that case, it would have been wise to design a wider I/O just in case.

Do we have any indication of when we will get CPU shots to line them up?
 
I don't disagree, but even a conservative six times more power does give developers a lot more to work with. Plus it's not just raw power, it's newer shader models, more memory (which is the number one thing developers seemed to hate this generation) etc, and that's not even counting the rumored dedicated GPGPU graphics hardware which may at the very least make a larger portion of games have more PhysX like effects that were promised so long ago.

It may not be as visible a change due to not being an SD-HD like jump again, but realism in games can still go a long way forward.

No arguments there. As you said I don't think the jump last time (SD to HD, advent of programmable shaders) will be matched, but there's definitely lots of room for improvement.

I still think there are some who will be in shock and/or denial when they see what PS4/720 is capable of (or not capable of) versus PS360 (especially early on). We'll see.

I expect games rolling out soon after next gen that at least match the GTAIV mod in technical effect usage, since they're pretty easy to manipulate and implement. Is that a decent step up, in your opinion?

You may not see this as I believe you said you put me on ignore. However, in my personal opinion, while it IS definitely a "decent" step up, it is nowhere close to what some people seem to be expecting out of PS4/720. And it's not enough to make a total joke out of PS360WiiU (also in my opinion).
 
There's a sizeable gap between every tech demo shown thus far and the games we get now. That's gotta be a ballpark of where devs are aiming, at the least. I expect games rolling out soon after next gen that at least match the GTAIV mod in technical effect usage, since they're pretty easy to manipulate and implement. Is that a decent step up, in your opinion?

Oh sure. It will be a step up. I have a nearly $2000 gaming PC. I play Far Cry 3 in 1080p on my TV with shit turned all the way up. My wife first started playing video games 8 years ago, and when she watches me play she doesn't really see why I spent so much money on the PC vs just getting the game for my 360. I think most of the gaming population will probably have that reaction to next Gen. It's better, but not $xxx better.

Will people on this forum jizz in their pants no matter what? I'm sure, but that's probably not a good indication of broad market appeal.
 
In case somebody missed it, marcan changed his mind re which interface is the Espresso bus and which is the DDR3 bus. IOW, Chipworks were right.

Seems my OP policy was a wise one :)

As Marcan tweeted, there are 280 pins in that long top left I/O - much more than needed for 60x to CPU and much longer than needed for 4x DDR3

I'm not entirely sure that they would be using 60x any more. Given the move to SMP, change to the cache structure, etc., I'd have thought that they may as well move to something like CoreConnect instead. BC might restrict them on this, though.
 
As Marcan tweeted, there are 280 pins in that long top left I/O - much more than needed for 60x to CPU and much longer than needed for 4x DDR3.

But what if (as the chip labeling suggests) the GPU design was finished a couple years back? RAM is one of the last system aspects to fall in place. Back in 2010/2011, it may not have been clear that 4 gigabit DDR3 chips would be available. In that case, they would have required 8 DDR3 chips (or 8 GDDR3 chips if they only planned on 1 GB of RAM). In that case, it would have been wise to design a wider I/O just in case.

Is it possible that the extra I/O is for the development kits, which likely have more RAM chips, probably of similar density?
 