Wait, so does that mean the DS, Vita, 3DS are tablets? That would mean that Nintendo started the tablet boom. Who would have thought.
Why has this EA tweet thing spread to the technical discussion threads? He said nothing useful to technical discussion whatsoever.
No, of course. But it's hard to believe you can't figure out why someone who looks at a piece of electronics that's 70% covered by a touchscreen would be tempted to call it a tablet. It looks like one.
Based on Mickey Mouse and Mario/Sonic. Also, this could be the reason why they can't just dump all the Wii Virtual Console games onto the Wii U at once. They might need optimization. I experienced some terrible framerate drops in that Kirby game, for example. How the hell does this happen? It's an NES game.
Edit: Also, wasn't there a trailer analysis for Pikmin 3, and it turned out it was locked at 30 fps?
He did comment on the power of the system, given the discussion at hand about 160 ALUs...
He also talked like an 11-year-old. Do you really want to use that to try to back up your argument?
It was a simple tweet.
Is it just a bad comparison pic or does the truck in the Wii U version lack the self-shadowing in the bed?
Hard to tell. The environmental light distribution looks different between those images.
You don't have much of a sense of humour about this, do you?
Hating Nintendo is srs bzness.
BG, with all these processes and numbers being thrown around, what about the 160 vs. 320 shaders debate? Which one is most likely in your opinion?
The Wii U, from my point of view, can't have that many "secret" functions that Nintendo is not sharing with developers at this point. Nintendo should be showing Third Parties exactly how to get the most out of the GPU. Simply showing how great their First Party software looks is not enough.
Doesn't matter what I think the blocks are or not. It's all silly guesswork that only leads to baseless speculation. After all these months, with many people a lot smarter than me having looked at this thing, we are no closer than what fourth storm found and posted many weeks ago.
It would be one thing if anyone here had any idea what most of these blocks could even be, or could prove their guesses correct. The big-picture things [ALUs, TMUs, or ROPs] are the easiest to work with, and those are the things you don't seem to want to debate.
L2 caches: U blocks. 32(?) kB each. Seem to resemble the L2 on RV770 to an extent. I'm only counting the long SRAM banks as actual storage space. There should be a fair amount of other logic/SRAM for cache functionality, so it's hard to say how much is for actual texture data.
ROPs: W blocks. They look like the ROPs on RV770, and their positioning next to the L2 and DDR3 I/O makes sense.
TMUs: T blocks. Again, I've already explained why I think this, but it also makes sense for them to be close to the DDR3 I/O.
L1 caches: S blocks. 8 kB each.
Only problem is that the 160 shaders has little to do with power. That has to do with shading capability, which is usually the only thing that is really improved in Wii U ports besides texture detail.
160 shaders does not produce these types of results.
You what? Processor count in a GPU is a massive determinant of power. What do you think Crossfire/SLI is? It's also a little strange how you believe you can determine from a screenshot the number of shader processors required to render it. The lighting changes in the NFS screens aren't especially drastic and look to be variable tweaks more than anything else. If the 360's GPU can do it, a 160-ALU Latte could be able to. The update to Trine was a fix for a gamma issue, I believe, not a real increase in rendering fidelity.
I think the more obvious comparison is 360 games that have been ported to Wii U with identical graphics, while splitting resources to the gamepad as well as dealing with the Wii U development environment I mentioned in my last post. Something like AC3, for instance, being on Wii U does prove, IMO, that the Wii U is more capable than the 360. Power isn't the discussion we are having; it is whether or not 160 ALUs is enough to achieve that with unfinished hardware and bad development tools and APIs. It does have to overcome quite a bit of a disadvantage. Some say that SM4 with VLIW5 is enough to achieve that, based on higher-clocked SM5 GPUs with VLIW5 running similar but different software without AA and with different settings, but I am personally hesitant to use that example as what the Wii U could do with 160 ALUs unless it is fairly different from VLIW5 as it stands in R700.
Krizzx said ALU count has to do with shading capability but little to do with power, which is meaningless nonsense. A debate on whether 160 ALUs is enough is a debate about power: if you don't believe devs with bad tools could produce AC3 on a 160 shader Latte you believe the chip must be more powerful than that.
I'm not coming down on one side or the other with regard to the ALU count, because I don't know. Personally, I believe the results we've seen so far imply the GPU is a bit beefier than Xenos/RSX, but that could be wrong; it could just be the memory architecture allowing whatever small improvements have been made.
"The Wii U has had a bit of a bad rap - people have said it's not as powerful as 360, this, that and the other. That, by and large, has been based on apples to oranges comparisons that don't really hold water. Hopefully we'll go some way to proving that wrong," he says.
A selection of in-game screenshots taken from the Wii U version of Most Wanted. High-res PC textures are the headline addition, but Criterion has also improved night-time lighting after employing new staff who previously worked in the motion picture business. In contrast to the other versions, the game begins at night perhaps to highlight the difference.
Verified improved shading, whether you want to acknowledge it or not.
"The difference with Wii U was that when we first started out, getting the graphics and GPU to run at an acceptable frame-rate was a real struggle. The hardware was always there, it was always capable. Nintendo gave us a lot of support - support which helps people who are doing cross-platform development actually get the GPU running to the kind of rate we've got it at now."
Does this not mirror the issues brought up with Mario and Sonic at the Olympics and Epic Mickey 2?
I did? I thought my post was solely about shading capabilities and this constant insistence by some posters on proving the claim made by the person identified as "an anonymous dev" who said the Wii U had fewer shaders and was less capable.
You did. Perhaps you worded the sentence poorly.
Shading capability is power. If the anonymous dev meant that it has 160 ALUs to the Xenos' 240, then that's a comment on the chip's power. Similarly, if they had said that it had more shaders (320), that would imply that Latte has more processing power (352 GFLOPS). However, I would take all such comments with a grain of salt, and even if the ALU count is 160 (a figure which is looking more and more concrete), I don't think it alone tells the full story.
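For anyone keeping score on the GFLOPS figures being thrown around: they come straight from ALU count x 2 FLOPs per cycle (one multiply-add) x clock. A quick sketch, assuming the commonly reported ~550 MHz Latte clock and the 500 MHz Xenos clock; these are theoretical peaks and say nothing about real-world utilization:

    # Theoretical peak GFLOPS = ALUs * 2 FLOPs/cycle (MAD) * clock in GHz.
    # Clocks assumed: Latte ~550 MHz (reported), Xenos 500 MHz.
    def peak_gflops(alus, clock_mhz):
        return alus * 2 * clock_mhz / 1000.0

    configs = {
        "Xenos (240 ALUs @ 500 MHz)": (240, 500),
        "Latte, 160-ALU theory (@ 550 MHz)": (160, 550),
        "Latte, 320-ALU theory (@ 550 MHz)": (320, 550),
    }
    for name, (alus, mhz) in configs.items():
        print(f"{name}: {peak_gflops(alus, mhz):.0f} GFLOPS")
    # -> 240, 176, and 352 GFLOPS respectively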
Once again, such improvements are entirely possible by changing existing lighting variables without overhauling the shading system. Not saying they haven't, but if the improvements were more dependent on tech than they were on art, I imagine they would have trumpeted the fact more. I'd also imagine that the differences between the 360/Wii U screens would be a bit more impressive.
Do I need to bring up Deus Ex Director's Cut as well? Though I will quote the dev as making the statement, "We could have easily ported the game, washed our hands and that's it." I have no doubt in my mind that this was the case with the majority of the Wii U ports.
http://www.denofgeek.com/games/deus-...ind-the-scenes
Just pointing out, I didn't mention Krizzx and wasn't talking to his points, it was to show that Wii U is likely more powerful than Xenos, the current debate in this thread is whether or not 160 ALUs is enough to achieve that, and my statement is that it depends on how efficient Xenos' 240 ALUs are. The clocks don't speak to much of a difference either way, so you have to put forward the architectures to see if R700 can achieve superior performance to Xenos with only 2/3rd of the ALU count.
Idk.. on that same page, Durante appeared to have doubts about that.
He thought they were talking about the PS4 compared to the X720. That's why he said the same AMD base.
From this thread: http://www.neogaf.com/forum/showthread.php?t=559196
Silicon Studio comments on developing for the Wii U. Not sure if you guys have read it yet. They said, "...Wii U has very specific characteristics. Some game designers will like it. Some others will have a hard time to port their game. There are pros and cons. We are very close to Nintendo, so we were working on Wii U for a long time. We almost got the maximum performance with the hardware...".
Precisely. I have dropped the logic that "X way of doing things has been shown to be demonstrably better than Y, thus Nintendo must have gone with X in Latte!" This applies to things like hardware interpolation and the VLIW5 architecture in general. It might not make sense to us as technology enthusiasts, but it's not unfathomable that it made sense to Nintendo at some point in time, for whatever Nintendo reasons they have (price, simplicity, "good enough" mentality).
This is an interesting observation, and it hints at why tech speculation regarding the WiiU has been so wrong again and again. The main problem is the huge time gap between the R700 base and today. It's 2013, and we're applying years of experience and second-guessing Nintendo's engineering decisions. AMD has gone from VLIW5 to VLIW4 to GCN and is already laying out GCN 2.0.
We have the benefit of both hindsight (lessons learned about the efficiency of VLIW5 versus VLIW4) and foresight (looking ahead to PS4/XB720 and the needs of future games). Using that information, it's hard to imagine coming up with what Nintendo did. But they didn't know then what we know now!
Everything is a tradeoff, and engineering constraints certainly change with the times. In 2008, VLIW5 was pretty good. To compete with PS3/X360, it was a sensible decision. VLIW5-style vertex shaders are ideally suited to DirectX 9 era games. For DX10+, not so much. And for GPGPU, it's very troublesome compared to SIMD GCN (a toy sketch after this post illustrates the slot-packing problem).
Timing matters. Gamecube was an efficient and powerful design due in no small part to having the 180nm process node ready. PS2 had to launch with 250nm design rules and the huge eDRAM left little room on the GS for advanced effects. GC was eDRAM done right. NEC could fit twice the RAM at 180nm in the same space as Sony could at 250nm. WiiU could have been much more if designed for a 32nm or 28nm node.
It seems to me like something went wrong with Nintendo's project management. WiiU's hardware hints at a console that should have launched earlier than holiday 2012. IMO, it was feature-locked much earlier than past console designs. Maybe Nintendo had trouble coming up with a gameplay hook and went through many ideas before committing to the gamepad. Or they expected another Wii phenomenon and wanted to ensure enough units could be produced.
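To make the VLIW5 point a couple of paragraphs up a bit more concrete, here is a toy slot-packing model (purely illustrative, nothing specific to Latte): each R700-style shader unit issues a 5-slot VLIW bundle per cycle, and the compiler has to find independent operations to fill those slots. Vec4-plus-scalar DX9-style work fills them easily; scalar, dependency-chained GPGPU-style work mostly doesn't.

    # Toy VLIW5 packing model: each cycle the compiler can issue up to 5
    # independent scalar ops per shader unit. Feed it a stream of "groups"
    # of independent ops and count how full the bundles end up.
    SLOTS = 5

    def vliw5_utilization(independent_ops_per_group):
        issued_cycles = 0
        useful_slots = 0
        for ops in independent_ops_per_group:
            while ops > 0:
                issue = min(ops, SLOTS)   # pack up to 5 ops into one bundle
                useful_slots += issue
                issued_cycles += 1
                ops -= issue
        return useful_slots / (issued_cycles * SLOTS)

    # DX9-style vertex work: vec4 + scalar math -> ~5 independent ops at a time.
    dx9_like = [5] * 100
    # Serial, dependency-chained GPGPU-style work: 1 independent op at a time.
    scalar_like = [1] * 100

    print(f"vec4+scalar workload: {vliw5_utilization(dx9_like):.0%} of slots used")     # 100%
    print(f"serial scalar workload: {vliw5_utilization(scalar_like):.0%} of slots used") # 20%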
I apologize if I have been a bit touchy lately in this thread. GPU speculation is nothing more than a fun hobby, after all. You could even call it a somewhat bizarre (nerdy) offshoot of gaming in general. Admittedly, it is slightly frustrating that the possibility of a 160 shader Latte is dismissed so hastily (without proposing some type of massive architectural overhaul). I, personally, would love for there to be more going on there, but the deeper I have dug, the less likely it seems.
I just proposed an architectural change that would work very well with the 160 SU theory: thread interleaving, running 320 or 640 concurrent threads on 160 shader units. See this presentation, starting at page 31: http://s08.idav.ucdavis.edu/fatahalian-gpu-architecture.pdf
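A minimal sketch of what that interleaving buys, in the spirit of the latency-hiding slides in that presentation (the compute and latency numbers below are made up for illustration): with only one thread group per shader unit, every memory stall idles the ALUs; with several groups resident, the scheduler can swap in ready work and hide most of the stall.

    # Toy latency-hiding model: each thread group alternates between `compute`
    # ALU cycles and a memory stall of `mem_latency` cycles. With `groups`
    # thread groups resident per shader unit, the scheduler switches to
    # another ready group whenever one stalls.
    def alu_utilization(groups, compute=20, mem_latency=300):
        # Fraction of time the ALUs have work, assuming ideal round-robin.
        return min(1.0, groups * compute / (compute + mem_latency))

    for groups in (1, 2, 4, 8, 16):
        print(f"{groups:2d} groups in flight -> {alu_utilization(groups):.0%} ALU busy")
    # 1 group ~6%, 4 groups ~25%, 16 groups 100% (with these made-up numbers)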
Had Nintendo been managing their projects correctly, the follow-up to Wii would have been released only four years after the Wii. Yes, at the height of its popularity. Instead Nintendo waited until the Wii's corpse was already cold and the PS4/Nextbox were just on the horizon. No lessons were learned from the Dreamcast there; you do NOT launch a console one year out of sync with the cycle! Especially one which only apparently brings you to parity with what exists in the current cycle, just as the next cycle is about to bring a significant hardware upgrade!
I half agree, but releasing early did nothing but good for the 360. You're right about everything else, though, and it should have been released earlier (2010 maybe).
160 shaders does not produce these types of results.
why not?
Honestly? Because you're talking about no single part of the console being at parity with the 360, with every part coming up lacking in comparison to some degree. The thing would struggle to run a 360 port at all, let alone outpace it.
At minimum, the Wii U has 160 ALUs or more that perform over 40% better than Xenos ALUs. Assuming they have a 50% increase over Xenos, we are looking at the equivalent of 264 Xenos ALUs. That should more or less be the bare minimum of what the Wii U can have, IMO, considering exact ports while also pushing extra shader resources to the gamepad (the arithmetic is sketched out below).
What's wrong with this?
Is that not possible? I'm just trying to understand why 160 is impossible.
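To put numbers on the scenario above (assuming the commonly reported ~550 MHz Latte clock against Xenos' 500 MHz; the 50% per-ALU efficiency gain is the poster's assumption, not a known figure):

    # "Xenos-equivalent ALUs" = Latte ALUs * assumed per-ALU efficiency gain
    #                           * clock ratio (Latte ~550 MHz vs Xenos 500 MHz).
    # The 50% efficiency gain is an assumption from the post above.
    def xenos_equivalent(latte_alus=160, efficiency_gain=0.50,
                         latte_clock=550, xenos_clock=500):
        return latte_alus * (1 + efficiency_gain) * (latte_clock / xenos_clock)

    print(xenos_equivalent())                      # 264.0 "Xenos ALUs"
    print(xenos_equivalent(efficiency_gain=0.36))  # ~240, i.e. rough parity with Xenos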
The line of thinking seems to be that, if the Wii U version of a GPU heavy game outperforms the Xbox360 version, it needs at least as many GFLOPS. And to get there, one would need a certain amount of shader units. Makes sense, right?
Except it doesn't, because traditional GPUs are apparently quite inefficient. Reportedly this is mostly a result of branching issues, which can reduce the overall real-world performance of a GPU by as much as ~85%, and stalls during texture reads, which can take hundreds or thousands of cycles. That's the problem with GFLOPS figures - they're highly theoretical and nowhere near the actual performance you'll get under real workloads. So essentially, if Nintendo managed to eliminate just one of those bottlenecks, the GFLOPS comparison becomes pretty much meaningless.
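For what it's worth, the branching penalty is easy to see with a toy wavefront model (the 64-wide wavefront and the even split below are illustrative assumptions, not Latte specifics): when threads in the same wavefront take different branches, the hardware serializes the paths and masks lanes off, so divergent code wastes most of the theoretical FLOPS.

    # Toy SIMD divergence model: a wavefront of `width` threads executes in
    # lockstep. If threads split across `paths` branch targets, the hardware
    # runs the paths one after another, and on each path only the matching
    # lanes do useful work.
    def simd_efficiency(width=64, paths=1):
        lanes_per_path = width / paths      # assume an even split for simplicity
        useful = paths * lanes_per_path     # useful lane-cycles
        total = paths * width               # lane-cycles actually spent
        return useful / total

    for paths in (1, 2, 4, 8):
        print(f"{paths} divergent paths -> {simd_efficiency(paths=paths):.0%} of peak")
    # 1 path 100%, 2 paths 50%, 8 paths ~12%, in the same ballpark as the
    # ~85% real-world losses mentioned above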
Depends how far you think 2x as much RAM, 3x as much eDRAM and a more modern featureset takes you.
To a place with better texturing, potentially better transparency effects, and better lighting solutions, while coming up short in triangles and shadow resolution (while still being higher precision); all around similar performance in a purely visual sense, but potentially rendering to a second set of 307,000 pixels in need of shading, visual effects, yada. That could potentially kneecap the console.
Been thinking about the RAM bandwidth (and I dare say this thought's already been brought up before, but I'm bored). The 360's GDDR3 is a lot faster than the Wii U's DDR3, but the 360's is split 50/50 between read and write, whereas the Wii U's is completely flexible. In real-world situations wouldn't there be far more reads from it than writes (though the 360's limited eDRAM would necessitate extra writes the Wii U doesn't need), thus negating most (if not all) of this bandwidth issue?
Probably why it never really comes up as an issue.
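For reference, the raw main-memory numbers behind that question, using the widely reported configurations (360: 128-bit GDDR3 at 1400 MT/s; Wii U: 64-bit DDR3-1600):

    # Peak main-memory bandwidth = transfer rate (MT/s) * bus width in bytes.
    # Widely reported figures, not official specs.
    def peak_bandwidth_gb_s(mt_per_s, bus_bits):
        return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

    print(f"Xbox 360 GDDR3: {peak_bandwidth_gb_s(1400, 128):.1f} GB/s")  # 22.4 GB/s, shared read/write
    print(f"Wii U DDR3:     {peak_bandwidth_gb_s(1600, 64):.1f} GB/s")   # 12.8 GB/s

Whether the read/write mix and the eDRAM close that gap in practice is exactly the open question in the post above.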