Wii U CPU |Espresso| Die Photo - Courtesy of Chipworks

Xenon and Cell are in-order, so, anticipating this problem (not seen since the Pentium 1), they added two hardware threads (2-way SMT) just so that when one thread is stalling the other can keep moving nonetheless.
A bit of trivia regarding Intel and the 'ever since' part: Intel's next 'big thing' - Knights Corner (aka Larrabee's successor) is essentially the P5's pipeline with a fat vector unit on top and 4-way SMT. All that, packed by the dozens, while running at a relatively low clock.

This is oversimplifying things, though. Normal multithreading on out-of-order CPUs is meant to take advantage of unused execution resources, which usually amounts to a 30% boost at best. But in these CPUs' case, because they simply stall whenever what is executing has to wait for something from memory (and don't concede that turn so something smaller can run), the second thread sometimes has 100% of the processor to itself, which makes it even more important to have multithreaded code.
Actually, SMT on in-order CPUs is usually more important, just because the stalls there for any given thread are more frequent. Out-of-order CPUs might still find something to do even with a pipeline or two stalled; in-order CPUs have nothing to do but change the subject.
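To make the software side of that concrete, here is a minimal sketch - plain C with pthreads, with the list layout and the two-way split being arbitrary choices of mine rather than anything console-specific - of the kind of latency-bound work where a second thread really can get the core almost for free while the first one is waiting on memory:

```c
/* Illustrative only: a latency-bound pointer chase split across two threads.
 * On an in-order SMT core, while one hardware thread stalls on a cache miss
 * the other can keep issuing instructions. Node count and the pthread split
 * are hypothetical, just to show the shape of the idea. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NODES (1 << 20)                 /* big enough to spill out of cache */

typedef struct node { struct node *next; long payload; } node_t;

/* Walk a linked list; each hop is likely a cache miss, so an in-order core
 * spends most of its time stalled waiting on memory. */
static void *chase(void *arg)
{
    node_t *n = arg;
    long sum = 0;
    while (n) { sum += n->payload; n = n->next; }
    return (void *)sum;
}

int main(void)
{
    node_t *pool = malloc(sizeof(node_t) * NODES);
    /* Two interleaved lists over one pool (a real test would also shuffle
     * the order so the hardware prefetcher can't hide the latency). */
    for (long i = 0; i < NODES; ++i) {
        pool[i].payload = i;
        pool[i].next = (i + 2 < NODES) ? &pool[i + 2] : NULL;
    }

    pthread_t t;
    pthread_create(&t, NULL, chase, &pool[1]);   /* odd-indexed list  */
    long even = (long)chase(&pool[0]);           /* even-indexed list */
    void *odd;
    pthread_join(t, &odd);
    printf("%ld\n", even + (long)odd);
    free(pool);
    return 0;
}
```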
 
I don't think it was your post. Someone else was talking about this without limiting the scope of the discussion the way you have. Unless you also think the overall picture is more favorable to 360 -> Wii U ports than 360 -> PS3 ports. If you do think that, do you have a response for lostinblue's post?

I don't really have any opinion on that. I'm more at ease talking about graphics scaling than optimizing for different CPU architectures. Porting CPU code from 360 to PS3 may be easier, but taking the graphics pipeline from the 360's unified architecture to the (far) more disjointed one in the PS3 is a rather different subject. The RSX is at an enormous disadvantage compared to the 360 (and probably has less than half the performance of Latte), and it's up to the SPUs to make up for that differential.

Somewhat more on topic, the creator of Geekbench posted his PS3 results under Linux here. It's only the PPU (and not the SPUs), but it should give a rough estimate of how the different CPUs compare in both int and float, if you browse around a bit.
 
A bit of trivia regarding Intel and the 'ever since' part: Intel's next 'big thing' - Knights Corner (aka Larrabee's successor) is essentially the P5's pipeline with a fat vector unit on top and 4-way SMT. All that, packed by the dozens, while running at a relatively low clock.
Larrabee/Knights Corner is not the next big thing; it's simply something they did and have been trying to fit somewhere.

It's essentially an experiment, and in the end it was released as Intel MIC/Xeon Phi: a standalone card product to boost the floating-point performance of things like supercomputers; it's as much of a novelty as using Cells in supercomputer clusters.

I doubt it has much of a future outside of that "market" at this point, not in this or foreseeable generations, at least.

And even if it does, perhaps it's more suited to the GPU portion of their CPUs than to the CPU itself.
Actually, SMT on in-order CPUs is usually more important, just because the stalls there for any given thread are more frequent. Out-of-order CPUs might still find something to do even with a pipeline or two stalled; in-order CPUs have nothing to do but change the subject.
Yes, I believe that was my point: the fact that with in-order you have to rely on that feature.
Somewhat more on topic, the creator of Geekbench posted his PS3 results under Linux here. It's only the PPU (and not the SPUs), but it should give a rough estimate of how the different CPUs compare in both int and float, if you browse around a bit.
I find this more enlightening:

-> http://web.archive.org/web/20100804...kpatrol.ca/2006/11/playstation-3-performance/
 
Now this is what I'm talking about with the people who keep trying to thump clocks as a meaningful measure of power for modern hardware.

http://www.neogaf.com/forum/showpost.php?p=81613985&postcount=1

The Core i5-3317U just dwarfs the A10 all around, at a significantly lower clock and energy consumption. This is how I view Espresso versus the last-gen CPUs.

Also, how much difference does it make in power draw when only 1 core of Espresso is being used as opposed to 3? I'd imagine that most devs haven't been harnessing the entirety of its power, since it doesn't auto-delegate tasks.
 
^ Probably not much.

At most you could shut them off, seeing as this design doesn't have any power gating going on. Even the ability to cut them off energy-draw-wise is dubious.
 
Larrabee/Knights Corner is not the next big thing; it's simply something they did and have been trying to fit somewhere.
It was quite a substantial business effort on their part, with some dramatic consequences in both product-roadmap and internal political terms. The reason they are indeed 'trying to fit it somewhere' today, as you say, is the internal political fallout from Larrabee - the GPU killer that did not deliver. But rest assured you have not seen the last of this branch of Intel's endeavors. Actually, Intel recently (as in this year) acquired even more IP and manpower to throw at the problem.

It's essentially an experiment, and in the end it was released as Intel MIC/Xeon Phi: a standalone card product to boost the floating-point performance of things like supercomputers; it's as much of a novelty as using Cells in supercomputer clusters. I doubt it has much of a future outside of that "market" at this point, not in this or foreseeable generations, at least.
This card is sitting in more than a few desktops around the globe at the time of this post ; ) Stay tuned is all I can say on this subject. I mean, it could all crash and burn tomorrow, but my gut feeling today tells me otherwise.

And even if it does, perhaps it's more suited to the GPU portion of their CPUs than to the CPU itself.
We haven't even seen the tip of the iceberg that GPGPU is. And Intel is aiming right in the heart of that market with the MIC.

Yes, I believe that was my point: the fact that with in-order you have to rely on that feature.
Ok, I must have misread you then, my bad.
 
It was quite a substantial business effort on their part, with some dramatic consequences in both product-roadmap and internal political terms. The reason they are indeed 'trying to fit it somewhere' today, as you say, is the internal political fallout from Larrabee - the GPU killer that did not deliver. But rest assured you have not seen the last of this branch of Intel's endeavors. Actually, Intel recently (as in this year) acquired even more IP and manpower to throw at the problem.
I think it's healthy to keep some ideas on the table, and that might be just that. But 'many Pentium 1 cores' is certainly a novelty idea to go with, and I'm sure they looked at their catalog and found the most evolved simple-pipeline CPU in their back catalog. Since they're investing so heavily in it, though, they'll also have to evolve it quite a bit further, to the point where you really can't call it a slightly modified Pentium 1 anymore - that's the feeling I get. I mean, we aren't calling Haswell something along the lines of a souped-up Pentium 3.

Plus, I don't know how it's done, but I remember Intel apologizing a few years ago for the Pentium D just being two Pentium 4 cores slotted together; they had things repeated on the die that didn't have to be. This implementation might be similar to that, I dunno, but it sounds inefficient. I certainly remember the Pentium 1 not being SMP-enabled (no boards with dual CPU sockets). On top of it all, using x86 units repeated like that all over also seems inefficient to me, as they have all sorts of legacy things going on that could and should be stripped; it could still be x86, but certainly not a Pentium 1 which you could theoretically pull out and still manage to run Windows 98 on.

I could see a many-core architecture using some variant of Atom - which itself unshelved the Pentium 1 and went from there - but not a Pentium 1, no.

That directly correlates with what we've been saying for months about the PPC750, actually: it still has merit despite being such an old implementation; it's pretty minimalist for what it is, but it's certainly not modern. The P54C is older, and not as minimalist in my book (I mean: reduced to the essential yet still effective).
This card is sitting in more than a few desktops around the globe at the time of this post ; ) Stay tuned is all I can say on this subject. I mean, it could all crash and burn tomorrow, but my gut feeling today tells me otherwise
Cells also were, at some point. The question really is whether Intel can make it truly useful for something more than folding - something that current GPUs can't catch up with in a few months - and through that disseminate it into the world.

As things stand, I remember even John Carmack being confused about it a few months ago.
We haven't even seen the tip of the iceberg that GPGPU is. And Intel is aiming right in the heart of that market with the MIC.
Is there really a market there, though? We live in the era of convergence, yet MIC is not convergence: it's being used as a standalone FLOP board, not meant for graphics and not meant for commanding a system either.

If it fails as a CPU and it fails as a GPU, I really can't see it getting off the ground - hence the Cell comparison.

They're certainly not crazy for trying, but it's like a dish that looks edible if need be yet seems to be lacking something pretty instrumental at this point; perhaps they need a more conventional CPU serving as a front end, or perhaps they need to solve the fact that it's not a normal GPU implementation in any way, shape or form.


In Portuguese we have a saying that goes along the lines of "it's neither meat nor fish", a way of saying that something is really undefined - similar to "jack of all trades, master of none". I feel it describes Intel MIC's existential crisis quite well.
 
Frankly, I feel a bit awkward championing Larrabee's offspring here, but I think there's a great deal of misunderstanding about what this tech is aimed at, and why it does the things it does. So I might carry on with the conversation for a while longer. But let me change the approach to a top-down one ; )

GPGPU is partially about convergence, but that's not its major aspect, IMO.

There are roughly three consumer-relevant (or soon-to-be) CPU-related markets:
1. Desktop
2. Mobile
3. Compute

While the first two are pretty self-explanatory and well established, the third one is not (mainly because it's emerging). Compute stands for maximum compute density - where the ALU is king. How does it relate to consumers outside of the HPC domain? Pretty simple - GPGPU is its foray there; it all started with games, and look where we are today - people run all kinds of ALU-hungry tasks on equipment the ranks of which you could find only in the Top500 a mere 10 years ago.

The thing is, the more GPGPU becomes relevant in the consumer market (and it does), the better the chances that 'FLOP boards' find a place in people's desktops/game boxes. Now, the vendors' problem there is this: how much compute can you get for the consumer buck (which is not the same as the HPC buck, which is largely academia and corporate money)? GPUs have been the established norm there, and they get the extra benefit of convergence (i.e. improved bang for the consumer buck). But we get all kinds of contenders coming from the other two domains, trying to put their foot in the compute door via various propositions. We get things like masses of mobile CPUs lumped together, with or without extra sauce. We get mobile-class dedicated FLOP chips. We get GPUs that are decidedly less GPU and more of a 'FLOP board'. So it's largely an emerging market - consumers are not quite sure yet what meets their needs and how their buying power is best applied. But we are getting there.

Ok, gotta go now, to be continued.
 
I understand where you're coming from; the thing is, compute seems like a new sub-category at a time when such markets seem to be condemned to niches, so I'm skeptical. In a lot of ways it's not so much about the proposition as it is about the time and era in which it's being proposed.

I even see desktop fading away into a professional niche, justifying its existence by the fact that it can simply be a souped-up take on the mobile parts rather than a different implementation altogether, hence diluting R&D and production costs.


As for this, as of now it's a dedicated specialized part, much like Ageia PhysX or... say, the Nvidia Quadro line.

I remember a few years ago modding a GeForce 6600 to the equivalent Quadro spec (it was software-locked, before they laser-cut the logic), and that meant unlocking enormous potential in ray-tracing calculations - enough to comfortably beat a GeForce 7800 doing the very same thing. But that didn't make it any better for daily use, hence most people don't need Quadros; or, better put, all the same GeForce 6600 disadvantages against a better GPU still applied, minus one.

That's the battle Intel MIC/Larrabee faces: niche markets are niche, and sometimes they can't even justify their existence if the technology isn't shared with lower-end parts; they need to reach the consumer and developer alike. Intel is certainly not thinking about putting the top-range configuration in consumer hands - probably something more along the lines of the stream-processor counts in GPU boards, keeping a lower-specced version as the standard "embedded-into-something-else" solution - but that's proving to be a significant challenge, and time is of the essence.

We'll see (and I'm definitely interested in what else you have to say on the matter), but I don't think I'm prepared to give it more than a free pass at this point, unless I'm missing something major - a surefire reason why it'll be sought after.
 
Phew, all those typos. I should stop posting from the driver's seat of my car. So, where were we...

You are right, MIC is essentially a specialized part today - if nothing else, for the fact that it is priced for the HPC market, not for the consumer. But the thing is, GPGPUs are also specialized! Tons of transistors and design trade-offs are committed there which don't help graphics, at least not in the traditional sense. And yet GPU vendors are producing those parts in droves. Certainly the convergence factor is at play there. But a market where consumers would choose GPUs for their general-purpose-ness is already emerging, and that's beyond the HPC market (mainly referring to academia here). Well, Intel had to respond to the trend, and their response was MIC. Now, how good their proposition is is another matter. Apparently they went straight for the GPGPU designation, and largely failed the GPU part. But they sort of got the GP part right. Remember, the end goal here is not GPGPU per se, but the big compute picture.

Ok, enough mumble from me for tonight. As I mentioned, just keep an eye on the development of that lineup, as things are just getting started there.
 
I want to get back to the more practical side of the CPU.

I have a few basic questions.

First, how much difference does it make when using integers to compute physics as opposed to floating point?

Second, is floating point needed for A.I. at all?
 
I want to get back to the more practical side of the CPU.

I have a few basic questions.

First, how much difference does it make when using integers to compute physics as opposed to floating point?

Second, is floating point needed for A.I. at all?

While I'm not an expert, I don't believe AI usually uses floating-point code.
 
I want to get back to the more practical side of the CPU.

I have a few basic questions.

First, how much difference does it make when using integers to compute physics as opposed to floating point?

Second, is floating point needed for A.I. at all?
There's no simple yes/no answer.

Floating point provides dynamic range. Basically, you get nicely uniform precision only between adjacent powers of the base (powers of two for normal floats, powers of 10 for decimal floats) - ergo the term 'floating point' - while having quite a few powers to go either way (macro or micro - up or down), depending on the size of the float. Integers can also vary in size, but they always provide uniform precision across their entire span - i.e. their range is static; once you assign a precision to their ULP, that's it - your space is measured and boxed.

The rule of thumb these days is that floats are used wherever the range of the problem is hard to tell in advance - i.e. it could vary dynamically. That includes most spatial information (mainly in terms of geometry, but color spaces as well), but also scenarios where we want to be able to focus deeply on a locale in our domain, at the expense of losing precision at the periphery, i.e. far away from our focus. Integers have no such property - they make no difference between focus and periphery. Conversely, integers' perfect uniformity could be an advantage in some scenarios (as I said, you can't focus on details in the periphery of your floating-point world), but until we adopt, say, 128-bit integers (which, for reference, could span the known universe at a few-nm granularity), they will always pose an 'I want to go a bit further and I can't because of the darn integers!' hazard.
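A tiny illustration of that trade-off (the 1e8 figure is just a convenient example value, nothing to do with any particular workload):

```c
/* Illustrative: dynamic range vs. uniform precision.
 * A 32-bit float has ~24 bits of mantissa, so near 100,000,000.0f the
 * representable step is 8.0, and adding 1.0f simply rounds away.
 * A 32-bit integer keeps the same step of 1 across its whole range. */
#include <stdio.h>

int main(void)
{
    float big_f = 100000000.0f;   /* exactly representable (multiple of 8) */
    int   big_i = 100000000;      /* int assumed 32-bit here               */

    float f = big_f + 1.0f;       /* rounds back to 1e8: the +1 is lost */
    int   i = big_i + 1;          /* exactly 100000001                  */

    printf("float: %.1f (delta %.1f)\n", f, f - big_f);   /* delta 0.0 */
    printf("int:   %d   (delta %d)\n",   i, i - big_i);   /* delta 1   */

    /* Conversely, near 0.0f a float can resolve steps many orders of
     * magnitude finer than 1 - the 'focus' integers can never give you. */
    return 0;
}
```

The same few lines also hint at why physics code usually sticks with floats: once the world grows past the scale you budgeted for, fixed-point either overflows or loses the fine end, while floats degrade gracefully.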
 

blu, or anyone else that has experience.

1. Does it help run the OS in the Wii U?

2. Can we determine its clock? I remember reading that Starlet on the Wii was 200MHz.

3. Do we know how much power draw it adds to the Wii U hardware?

Generally, I'd like to know how many resources it uses. I've been wondering about its impact on the GPU and CPU since it is constantly active and must use the same buses.

Also, I wanted to ask a question about the MCM
MCM.jpg
I'll probably look stupid for asking, but are the rectangular components to the side just fuses and transistors, and what is that small square to the southwest?
 
blu, or anyone else that has experience.

1. Does it help run the OS in the Wii U?

2. Can we determine its clock? I remember reading that Starlet on the Wii was 200MHz.

3. Do we know how much power draw it adds to the Wii U hardware?

Generally, I'd like to know how many resources it uses. I've been wondering about its impact on the GPU and CPU since it is constantly active and must use the same buses.

Also, I wanted to ask a question about the MCM

I'll probably look stupid for asking, but are the rectangular components to the side just fuses and transistors, and what is that small square to the southwest?
I'm assuming that your question is related to the Wii U's ARM within Latte.

1) I'm unsure if we know this one. At the very least, it would do what Starlet did for the Wii.

2) Starlet ran at the same speed as the GPU (243MHz), IIRC. Maybe the ARM in the GPU runs at 550MHz, but the Wii U's clocks don't seem to be kept in sync the way the GCN's and Wii's were.

3) My guess is that it would be extremely low. Probably a lot less than a watt.
 
I'll probably look stupid for asking, but are the rectangular components to the side just fuses and transistors, and what is that small square to the southwest?

The third chip is an EEPROM chip from Renesas, as we found out months ago.
 
Huh, I thought it not needing to use a core for audio like the 7th-gen consoles was [made out to be] a big deal, but it appears it does use one?



http://hdwarriors.com/general-impression-of-wii-u-edram-explained-by-shinen/
Software audio isn't typically that hardware-intensive nowadays. At least not for simple playback of audio streams. Reverbs, occlusion, and wave tracing would be more CPU-intensive, but most audio engines don't do any of those even on PC. All of these are wonderful features to have, but they aren't exactly necessary for good audio.
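To put 'simple playback' into perspective, this is roughly all a plain software mixer has to do per audio block - a sketch in C, with the voice count and block size being arbitrary numbers of mine, not taken from any actual console audio engine:

```c
/* Illustrative software mixer: sum N voices into one stereo block.
 * Per sample this is one multiply-add per voice, which is why plain
 * stream playback costs very little CPU; it's effects like reverb,
 * occlusion and wave tracing that get expensive. Sizes are arbitrary. */
#include <string.h>

#define FRAMES 512      /* samples per channel per audio block */
#define VOICES 32       /* active voices being mixed           */

void mix_block(const float voice[VOICES][FRAMES * 2],   /* interleaved L/R */
               const float gain[VOICES],
               float out[FRAMES * 2])
{
    memset(out, 0, sizeof(float) * FRAMES * 2);
    for (int v = 0; v < VOICES; ++v)
        for (int s = 0; s < FRAMES * 2; ++s)
            out[s] += gain[v] * voice[v][s];   /* one FMA per voice/sample */
}
```

At 48 kHz that inner loop works out to about three million multiply-adds per second for 32 stereo voices - pocket change for any of the CPUs being discussed here.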

While I haven't played the game, so I don't know what the audio is like, we know Shin'en do a lot of procedurally generated graphics; perhaps they did some procedurally generated audio too? Maybe.

Procedural generation is just randomized data according to a few instructions and it works for visuals because so much of the natural world is generated randomly. Procedurally generated audio would just be noise.
 
Software audio isn't typically that hardware-intensive nowadays. At least not for simple playback of audio streams. Reverbs, occlusion, and wave tracing would be more CPU-intensive, but most audio engines don't do any of those even on PC. All of these are wonderful features to have, but they aren't exactly necessary for good audio.



Procedural generation is just randomized data according to a few instructions and it works for visuals because so much of the natural world is generated randomly. Procedurally generated audio would just be noise.

There were many games on the 360 (racing games, for example) that used 1 entire core (out of 3) strictly for audio. Having audio chips in all of the 8th-gen consoles is a boon to their relatively smaller CPUs.
 
There were many games on the 360 (racing games, for example) that used 1 entire core (out of 3) strictly for audio.

Can you provide a source for that? I've only ever heard of this once, and I think they were talking about 1 thread.
The last time I remember a dev talking about audio in current-gen games, they said it would only use a few percent of CPU time (maybe I can provide a link later).
 
Procedural generation is just randomized data according to a few instructions and it works for visuals because so much of the natural world is generated randomly. Procedurally generated audio would just be noise.

While my suggestion was just a completely uninformed stab in the dark, the whole point of procedural generation is to create patterns; I can't see why you couldn't apply it to audio too.
 
Can you provide a source for that? I've only ever heard of this once, and I think they were talking about 1 thread.
The last time I remember a dev talking about audio in current-gen games, they said it would only use a few percent of CPU time (maybe I can provide a link later).

Many (possibly even most?) 360 games used 1 thread for audio, and it is well reported that some games even used a whole core; that's a fairly well-known fact in these threads.

edit - what stevie said

Wow, a core and a half - that's insane.
 
Can you provide a source for that? I've only ever heard of this once, and I think they were talking about 1 thread.
The last time I remember a dev talking about audio in current-gen games, they said it would only use a few percent of CPU time (maybe I can provide a link later).

Former audio engineer at Microsoft:
http://forum.beyond3d.com/showpost.php?p=1731306&postcount=2956
bkilian said:
On the 360, there is hardware for decoding XMA files, which is a much simpler subset of WMA. XAudio2 allows decoding of xWMA files too, but that's CPU side software only. The XMA decoder chip is rated at 320 channels, but in reality it generally maxes out lower than that. The 256 audio channels was calculated using a full core I believe, and that's using a very simple linear interpolation SRC, and possibly a filter and volume per channel.

All audio on the 360, other than XMA decompression, is software and uses the main CPU. Party chat, including codecs and mixing, happen in the system reservation. Game Chat, Kinect MEC and voice recognition, and all game audio happen in the game process and use game resources, including memory and CPU. Game audio frequently uses an entire hardware thread, and I've seen games where it uses 3 hardware threads. Car racing games, in particular, can use upwards of a hundred voices on a single car.

That's 3 out of 6 hardware threads in particularly demanding games, half the CPU power of the 360.
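For reference, the 'very simple linear interpolation SRC' bkilian mentions boils down to something like the loop below - a rough sketch where the function name and parameters are my own, not anything from XAudio2 or the 360's SDK:

```c
/* Illustrative linear-interpolation sample rate converter: for each output
 * sample, take the two neighbouring input samples and blend them by the
 * fractional position. Per output sample it's two multiplies and an add,
 * which is why hundreds of channels of this fit on one core. */
#include <stddef.h>

/* ratio = in_rate / out_rate, e.g. 48000.0 / 44100.0 */
size_t resample_linear(const float *in, size_t in_len,
                       float *out, size_t out_len, double ratio)
{
    size_t produced = 0;
    for (size_t o = 0; o < out_len; ++o) {
        double pos  = (double)o * ratio;
        size_t i    = (size_t)pos;
        double frac = pos - (double)i;
        if (i + 1 >= in_len)
            break;                       /* ran out of input samples */
        out[o] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
        ++produced;
    }
    return produced;
}
```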
 
Interesting, thanks.
I'll see if I can still find the source that I had in mind (if I don't misremember, it came from a Crytek dev, so no racing game for sure ;)).

edit:
Not really what I thought I had read but I found this:

Supposedly on 360 it needed 30% [CPU performance for audio]
With respect, that's utter nonsense.
(source (german))

He's a programmer at Crytek. Just his opinion of course, probably depends heavily on the game.
 
While my suggestion was just a completely uninformed stab in the dark, the whole point of procedural generation is to create patterns; I can't see why you couldn't apply it to audio too.

I guess it would somewhat depend on what kind of audio we were talking about. But consider that procedurally generated music already exists in the form of General MIDI. And it would never replace voice acting because acting depends on emotion and not just the words said.

Still, reverbs and other processed audio are technically procedural, so it's already happening in that regard. And I did see some mention of GPGPU being used for wave tracing in the next-gen consoles, which would fall under the same hat.
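As a toy counterexample to the 'it would just be noise' worry, here is a sketch that turns a simple rule into a perfectly listenable pattern - the scale, note length and raw PCM output are arbitrary choices for illustration, not how any shipping engine does it:

```c
/* Illustrative procedural audio: a tiny rule ("walk a pentatonic scale,
 * one fading note per half second") deterministically generates samples
 * with no recorded data at all. Writes headerless 16-bit mono PCM. */
#include <math.h>
#include <stdio.h>

#define RATE 44100
static const double PI = 3.14159265358979323846;

int main(void)
{
    const double scale[5] = { 220.00, 261.63, 293.66, 329.63, 392.00 };
    FILE *f = fopen("procedural.raw", "wb");
    if (!f) return 1;

    for (int note = 0; note < 10; ++note) {
        double freq = scale[note % 5];
        for (int s = 0; s < RATE / 2; ++s) {               /* half a second */
            double env = 1.0 - (double)s / (RATE / 2);     /* linear fade   */
            short sample = (short)(env * 12000.0 * sin(2.0 * PI * freq * s / RATE));
            fwrite(&sample, sizeof sample, 1, f);
        }
    }
    fclose(f);
    return 0;
}
```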

Former audio engineer at Microsoft:
http://forum.beyond3d.com/showpost.php?p=1731306&postcount=2956


That's 3 out of 6 hardware threads in particularly demanding games, half the CPU power of the 360.

Wow. I did not know that.

Makes perfect sense though. I wasn't even thinking of things like voice chat.

Edit:

If anyone knows, I'm curious how much of an effect in-order vs. out-of-order has on audio in terms of CPU performance. PCs have seen an enormous trend towards software-based audio over the past decade without much apparent effect on game performance. Does anyone know why the discrepancy?
 
This bit of 'info' came from a link cited in the Latte thread, and probably suits this thread better.

Other information about the Wii U version that isn’t widely known is that the game appears to only be using 1 out of the 3 cores available for use on the Wii U, something Shin’en Multimedia has admitted to only be using for it’s Wii U eShop game, Nano Assault Neo.

“Set main thread to (normal priority + 1) so that normal pri threads get
cpu time as WiiU threads don’t time slice. This is a temp solution until
proper thread balancing and core affinity is set
”
http://www.cinemablend.com/games/Pr...e-Shadows-Multi-Threaded-Shadowing-59659.html

This seems to be somewhat speculative, more than a confirmation, and we don't currently know how the build runs in its current state (with supposedly 1 core), or how many cores will be put to use in the final build. Anyway, to anyone with an understanding of the matter, my question is: is that limited bit of info indeed suggestive of only one CPU core currently being used?

[I personally don't know how to interpret that part of the log, but at the very least it does sound like the team hasn't even begun tweaking the cores yet. This may bode well for the Wii U version, since a lot of CPU work/features (including physics) is already being implemented in this 'unoptimized' state.]
 
I don't see how that indicates it's only using one core.

Just so you know, the bold text in the change log was mine, not the site's editor's. It's possible that the first portion of the log was more critical in reaching their conclusion. I bolded the second part because it, at least, does imply that the team still hasn't gotten around to tweaking how work is shared among the cores, and which core is best suited for certain jobs. Hopefully someone can chime in. Blu, anyone?

*crickets*
 
“Set main thread to (normal priority + 1) so that normal pri threads get
cpu time as WiiU threads don’t time slice. This is a temp solution until
proper thread balancing and core affinity is set”

That means threads in one process (any user process?) don't get preempted. They are picked by the scheduler based on priority (apparently the higher the numerical priority - the lower the logical priority) and they run until they yield. That's called cooperative multithreading.
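For illustration, this is the kind of pattern a long-running worker ends up needing under that model - written with plain POSIX threads as a stand-in, since the actual Wii U thread API isn't something we can quote here; the point is simply that without preemptive time slicing, same-priority threads only progress when the running one blocks or yields:

```c
/* Illustrative only: under cooperative scheduling, a compute-heavy thread
 * must yield by hand or it starves every other thread at its priority.
 * POSIX calls are a stand-in; the real Wii U thread API differs. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    const char *name = arg;
    for (int chunk = 0; chunk < 8; ++chunk) {
        /* ... do one slice of real work here ... */
        printf("%s finished chunk %d\n", name, chunk);
        sched_yield();   /* hand the core to any other ready thread */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "thread A");
    pthread_create(&b, NULL, worker, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

That also matches the (normal priority + 1) trick in the log: if the higher number really does mean lower logical priority, demoting the main thread guarantees the normal-priority workers aren't starved while proper balancing and core affinity are still missing.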
 
I don't know if we can or can't discuss homebrew in this thread, but the part I wanted to point out is a bit of detail on the CPU, not the homebrew part itself. Hope that's OK, then.

http://fail0verflow.com/blog/2013/espresso.html

This appeared possible initially, but unfortunately, it turned out that a few critical hardware registers were irreversibly disabled in Wii mode. However, due to the design of the Wii U’s architecture, a few things can be re-enabled. One of those is the multicore support of the Espresso CPU.


It appears parts of the chip can be enabled or disabled by software. Just mildly interesting, I thought. There are a few more details on Espresso security too.
 
Hmm, associativity is low for L2. I wonder if that's a measure to counter eDRAM latencies. Anyhow, that should be affecting the eviction rate if the dev is not careful.
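To spell out what 'not careful' can look like: in a set-associative cache, addresses that share a set index compete for only as many slots as there are ways. The geometry below (512 KiB, 4-way, 64-byte lines) is an assumption picked for round numbers, not a claim about Espresso's actual configuration:

```c
/* Illustrative: why low associativity bites if buffers line up badly.
 * Assume (hypothetically) a 512 KiB, 4-way cache with 64-byte lines:
 *   sets = 512 KiB / (64 B * 4 ways) = 2048, so addresses 128 KiB apart
 *   map to the same set. Walking more than 4 such buffers in lockstep
 *   means the set can't hold them all and lines get evicted every pass. */
#include <stdio.h>

#define LINE        64
#define WAYS        4
#define CACHE_SZ    (512 * 1024)
#define SETS        (CACHE_SZ / (LINE * WAYS))   /* 2048    */
#define SET_STRIDE  ((unsigned long)LINE * SETS) /* 128 KiB */

static unsigned set_index(unsigned long addr)
{
    return (unsigned)((addr / LINE) % SETS);
}

int main(void)
{
    /* Six buffers laid out exactly SET_STRIDE apart all land in the same
     * sets; with only 4 ways, a lockstep walk over all six keeps evicting
     * lines it is about to need again (conflict thrashing). */
    for (int buf = 0; buf < 6; ++buf) {
        unsigned long addr = 0x10000000UL + (unsigned long)buf * SET_STRIDE;
        printf("buffer %d -> set %u\n", buf, set_index(addr));
    }
    return 0;
}
```

The usual fix is equally mundane: pad or offset the buffers by a line or two so they stop sharing a set, and the eviction rate drops right back down.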
 
It's marcan; he got that (unsure if just the image or the whole thing - he posted an image) from some anon in his emails, after a Twitter spat with someone else about what Espresso is based on.
 
WOW at Espresso being even more underpowered than anyone could imagine. I mean, we at least thought that Espresso emulated Broadway like Broadway emulated Gekko - that is, deactivating any additional logic and running at the same speed.
But thanks to that we know that L2 associativity is now HALF what it was on Broadway (Broadway's L2 cache was 8-way set-associative), which means that even if it has somewhat bigger caches, their utilization is so low compared to what was possible on normal CPUs with eSRAM that it won't make even the slightest difference (if it isn't even worse).

Man, I'm starting to think that Nintendo has messed up to a point that's even difficult to understand. So now it turns out that the memory subsystem of the console is, comparatively speaking, even worse than what was found on the Wii!!!! XD
 
Freezamite

Broadway (like Gekko before it) had a 2 way set associative L2 cache. It was the L1 cache that was 8 way.

So Espresso has much better associativity than Broadway.
 
No, Broadway (like Gekko before it) had a 2 way set associative L2 cache. It was the L1 cache that was 8 way.
As far as I knew, one of the improvements between Gekko and Broadway was that Broadway's L2 cache was 8-way set associative.

If it was 2-way set-associative like on Gekko, then Espresso would be able to emulate it perfectly, and my statement - although still true in the sense that 4-way set-associative is not even close to enough nowadays if you want any kind of good cache utilization - would be completely wrong.
 
WOW at Espresso being even more underpowered than anyone could imagine. I mean, we at least thought that Espresso emulated Broadway like Broadway emulated Gekko - that is, deactivating any additional logic and running at the same speed.
But thanks to that we know that L2 associativity is now HALF what it was on Broadway (Broadway's L2 cache was 8-way set-associative), which means that even if it has somewhat bigger caches, their utilization is so low compared to what was possible on normal CPUs with eSRAM that it won't make even the slightest difference (if it isn't even worse).

Man, I'm starting to think that Nintendo has messed up to a point that's even difficult to understand. So now it turns out that the memory subsystem of the console is, comparatively speaking, even worse than what was found on the Wii!!!! XD

Most of us have known that the Wii U has a "horrible, slow CPU" for a while.

Nintendo needs to ditch BC and just go x86 next gen if they make a dedicated home console.
 
Most of us have known that the Wii U has a "horrible, slow CPU" for a while.

Nintendo needs to ditch BC and just go x86 next gen if they make a dedicated home console.

That's poor trolling, even for you.

Anyway that "Horrible, slow CPU" helps throw around some nice graphics so it ain't that bad.
 