cyberheater
PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
So w-we...we won't get this??
Which explains why there are only 2 characters in the demo.
Those exist because people haven't been able to take advantage of what's changed since last generation. We know for a fact that i7s are much better than i5s because of how they perform in workloads like encoding, but game tech doesn't make use of the CPU the way those workloads do.
Games need to grow up themselves. We waste far too much for the results we want.
If the next Zelda game is contained in a single room, yeah, we could.
Hector Martin ‏@marcan42
It's worth noting that Espresso is *not* comparable clock per clock to a Xenon or a Cell. Think P4 vs. P3-derived Core series.
Well Zelda does load one room at a time, so yay!
Which is pretty bad in this day and age.
Which explains why there are only 2 characters in the demo.
In that resolution.
The performance difference between CPUs in PC games is generally less than 5%. Offloading more work to GPUs is unlikely to change that.
Hector Martin ‏@marcan42
The Espresso is an out of order design with a much shorter pipeline. It should win big on IPC on most code, but it has weak SIMD.
Hector Martin ‏@marcan42
No hardware threads. One per core. No new SIMD, just paired singles. But it's a saner core than the P4esque stuff in 360/PS3.
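To put the IPC vs. clock speed point in concrete terms, here's a back-of-the-envelope sketch in C++. The IPC figures are invented purely for illustration (no real per-workload numbers for these chips are public), but they show how a lower-clocked core with a short pipeline and out-of-order execution can get as much useful work done per second as a higher-clocked, long-pipeline one:

```cpp
// Back-of-the-envelope only: the IPC numbers here are invented to illustrate
// the point, not measured or published figures for any real chip.
#include <cstdio>

int main() {
    // A long-pipeline, in-order core at a high clock...
    double clock_a = 3.2, ipc_a = 0.5;    // assumed IPC on branchy game code
    // ...versus a short-pipeline, out-of-order core at a much lower clock.
    double clock_b = 1.24, ipc_b = 1.5;   // assumed IPC on the same code

    // Useful work per second is roughly clock * instructions-per-clock.
    std::printf("3.2 GHz in-order:      %.2f G instructions/s\n", clock_a * ipc_a);
    std::printf("1.24 GHz out-of-order: %.2f G instructions/s\n", clock_b * ipc_b);
    return 0;
}
```

Same reason raw clock tells you very little once pipeline depth and IPC differ this much.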
Also, as an aside, I think it's somewhat amusing how the XBox360's CPU clock speed is taken as some sort of golden standard. The XBox360 was released right at the very height of clock-speed chasing by CPUs. And I mean literally: The XBox360 was released in NA on the 16th November 2005. Just two days before that, on the 14th of November, Intel released the Pentium 4 HT 672, with a clock speed of 3.8GHz. They haven't released a CPU clocked that high in the 7 years since.
After seeing this, coupled with all of the other Wii U issues and Nintendo screw-ups in general (poor decisions regarding their online network, localization, marketing, and products like the 3D emphasis on the 3DS and the Wii Mini), I am really beginning to lose faith in Iwata's leadership.
It's easy to blame Reggie for all of this as he is constantly in the spotlight, but Reggie is a figurehead and doesn't hold any real power over these decisions. He basically has to clean up the mess Japan hands him. Iwata has a background in development, he endured programming on the N64, so he should KNOW the kind of burden this underpowered hardware places on third parties trying to port. He should KNOW that lack of third-party support handicapped his previous system. He approved all of this, and he should be taken to task for not learning his lessons from the 3DS. Games sell the system.
Kenka mentioned me a few pages back, so I might as well give my two cents.
First, it's worth keeping in mind that the general expectation until very recently was a CPU around 2GHz (many estimates around the 1.8GHz mark) and a GPU 500MHz or under (my guess was 480MHz).
The main take-home from the real clock speeds (higher clocked GPU than expected, lower clocked CPU than expected) is that the console is even more GPU-centric than expected. And, from the sheer die size difference between the CPU and GPU, we already knew it was going to be seriously GPU centric.
Basically, Nintendo's philosophy with the Wii U hardware is to have all Gflop-limited code (ie code which consists largely of raw computational grunt work, like physics) offloaded to the GPU, and keep the CPU dedicated to latency-limited code like AI. The reason for this is simply that GPUs offer much better Gflop per watt and Gflop per mm² characteristics, and when you've got a finite budget and thermal envelope, these things are important (even to MS and Sony, although their budgets and thermal envelopes may be much higher). With out-of-order execution, a short pipeline and a large cache the CPU should be well-suited to handling latency-limited code, and I wouldn't be surprised if it could actually handle pathfinding routines significantly better than Xenon or Cell (even with the much lower clock speed). Of course, if you were to try to run physics code on Wii U's CPU it would likely get trounced, but that's not how the console's designed to operate.
The thing is that, by all indications, MS and Sony's next consoles will operate on the same principle. The same factors of GPUs being better than CPUs at many tasks these days apply to them, and it looks like they'll combine Jaguar CPUs (which would be very similar to Wii U's CPU in performance, although clocked higher) with big beefy GPUs (obviously much more powerful than Wii U's).
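To make the Gflop-limited vs. latency-limited split concrete, here's a generic C++ sketch (nothing Wii U specific; the function names and data layouts are made up). The first loop is the kind of wide, branch-free math that maps naturally onto a GPU compute kernel; the second is the pointer-chasing, branch-heavy pathfinding work that the post above keeps on the CPU:

```cpp
// A rough sketch of the split described above, in plain C++ (no real GPU API
// used; all names and layouts here are illustrative assumptions).
#include <cstddef>
#include <queue>
#include <vector>

// Gflop-limited work: regular, branch-free arithmetic over big arrays.
// Every particle is independent, so thousands can run in parallel -- the
// shape of code that maps well onto a GPU compute kernel.
void integrate_particles(std::vector<float>& pos, std::vector<float>& vel,
                         const std::vector<float>& acc, float dt) {
    for (std::size_t i = 0; i < pos.size(); ++i) {
        vel[i] += acc[i] * dt;   // pure math, no data-dependent branches
        pos[i] += vel[i] * dt;
    }
}

// Latency-limited work: a breadth-first search over a walkability grid, the
// core of simple pathfinding. Progress depends on chasing indices through
// memory and taking data-dependent branches, which is where an out-of-order
// core with a short pipeline and a large cache earns its keep.
int bfs_path_length(const std::vector<int>& walkable, int w, int h,
                    int start, int goal) {
    std::vector<int> dist(walkable.size(), -1);
    std::queue<int> frontier;
    dist[start] = 0;
    frontier.push(start);
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        int cur = frontier.front();
        frontier.pop();
        if (cur == goal) return dist[cur];
        int cx = cur % w, cy = cur / w;
        for (int d = 0; d < 4; ++d) {
            int nx = cx + dx[d], ny = cy + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            int n = ny * w + nx;
            if (walkable[n] && dist[n] < 0) {   // data-dependent branch
                dist[n] = dist[cur] + 1;
                frontier.push(n);
            }
        }
    }
    return -1;  // goal unreachable
}
```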
Well, it has no vector units for starters. And the clock speed is pretty low. So yeah, it's probably pretty slow for certain workloads like physics or crowd AI. Unless developers manage to move those workloads to the GPU.
It's a 1.2 GHz processor. And while it's OoE, it's not an ILP monster like an i7 either. It's fair to say it's slow for all workloads.
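A quick back-of-the-envelope on the vector-unit point. The widths and clocks below are assumptions for illustration only (a 2-wide "paired singles" style FPU versus a 4-wide VMX-style unit), not official specs, but they show why weak SIMD caps peak number-crunching throughput no matter how good the core is on general code:

```cpp
// Hypothetical peak-throughput arithmetic, to show why "weak SIMD" matters
// for physics-style code. All figures are illustrative assumptions, not
// measured or official numbers for any of these chips.
#include <cstdio>

int main() {
    struct Simd { const char* desc; double clock_ghz; int cores; int lanes; int flops_per_lane; };

    Simd configs[] = {
        // 2-wide "paired singles" style FPU, fused multiply-add (2 flops/lane)
        {"3 cores, 2-wide FMA @ 1.24 GHz", 1.24, 3, 2, 2},
        // 4-wide VMX-style vector unit, fused multiply-add
        {"3 cores, 4-wide FMA @ 3.2 GHz",  3.20, 3, 4, 2},
    };

    for (const Simd& s : configs) {
        // Peak GFLOPS = clock * cores * vector lanes * flops per lane per cycle
        double gflops = s.clock_ghz * s.cores * s.lanes * s.flops_per_lane;
        std::printf("%-35s ~%.1f GFLOPS peak\n", s.desc, gflops);
    }
    return 0;
}
```

Which is exactly why physics and crowd simulation are the workloads people expect to be moved to the GPU on this kind of design.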
I remember back when clock speeds were the way most people compared CPUs, AMD actually put an ad out explaining that clock speeds aren't everything.
Thank you for these explanations. Reading you, it seems Nintendo had five goals in mind when they designed the console:
- launch before competition
- be able to play current HD gen games
- backward compatible with the Wii
- low power consumption
- porting from the future consoles made easy by similarities in architecture, and maybe power
If they can fulfil the fifth point, then I'd agree that they were smart. If not, then I'd question my eventual purchase.
Wait what? Wii I understand, but the DS wasn't gimped in hardware.
It's a 1.2 GHz processor. And while it's OoE, it's not an ILP monster like an i7 either. It's fair to say it's slow for all workloads.
Impossible to tell. We don't know how many execution units there are, for example. And even Marcan seems to think that it's actually faster than CELL and Xenon for pretty much anything that isn't SIMD, thanks to a significantly better IPC performance.
Also, as an aside, I think it's somewhat amusing how the XBox360's CPU clock speed is taken as some sort of golden standard. The XBox360 was released right at the very height of clock-speed chasing by CPUs. And I mean literally: The XBox360 was released in NA on the 16th November 2005. Just two days before that, on the 14th of November, Intel released the Pentium 4 HT 672, with a clock speed of 3.8GHz. They haven't released a CPU clocked that high in the 7 years since.
Which tells us a lot about how CPU development has changed since then.
I wasn't disagreeing with this. In reality, there isn't nearly as much for the CPU to do in games. Most of what needs to be done in a game is handled by the GPU, and handled better at that. Remember that x86 CPUs were designed as general-purpose processors, while GPUs have always been designed specifically to cover what 3D gaming needs.
I think the point is Shin'en are one of the only developers who actually knew what they were doing on the Wii and actually put effort into making graphical showcases. So we should probably listen when they talk about technical stuff, because they understand that type of architecture best, while the other dev that said the Wii U is horrible had a game already built for a different type of architecture and tried to port it.
Impossible to tell. We don't know how many execution units there are for example. And even Marcan seems to think that it's actually faster than CELL and Xenon for pretty much anything that isn't SIMD, thanks to a significantly better IPC performance.
Source: https://twitter.com/marcan42/status/274181216054423552
Marcan said:
Hector Martin ‏@marcan42
@ZuelaBR I don't know how it compares at the actual clock speeds, but at the same clock the 750 wins hands down except on pure SIMD.
Told you all you're getting Wii'd again.
Having an HD console with next-gen processing power and 4x the RAM of current gen in 2012 is a far better trade-off than the Wii with its 2001 API, SD output and 2/1.5x increase in RAM in 2006.
Compared to hardware that launched at the same time, the PSP, yes it was. Of course, looking back, Nintendo made great decisions on the DS. My point is they are not interested in an arms race with Sony/MS.
I actually find it amazing that Nintendo can do this. Iwata is a very shrewd businessman. I think, however, that Nintendo is blowing a huge chance to be truly dominant for years by not looking forward just a little bit more on the tech side.
All I can say is what I've been playing on the wiiU the last couple weeks looks awesome....
For 5 minutes, then I'm completely into the game and forget about how detailed everything is.
Right. Crowd AI/pathfinding isn't. And that's something GPUs are really damn good at.
Not only that, but it's not like Shin'en works on huge, ambitious, CPU-demanding games like Skyrim. I'm sure Shin'en is more than fine with what the Wii U offers in terms of performance.