Wii U Speculation Thread of Brains Beware: Wii U Re-Unveiling At E3 2012

No, I mean that the mobile version of Juniper is super-high quality. Most chips Nintendo would try to produce would end up more like the desktop version. Hardly any would have that low power consumption, which is why the chip is so much more expensive than Juniper.
I know. But this problem is not as relevant for small chips (less silicon needs to be of high quality). Even if yields were a problem, it would be resolved fairly quickly as the 40nm node keeps maturing.

There's nothing stopping Nintendo from using a chip comparable to Juniper, although I do think that Juniper itself is a bit too big and hot for Wii U. It's probably a 640-SPU VLIW5 part or a 512-SPU VLIW4 part.
 
Perhaps that's the reason nintendo has been stuck with old and overheating cards in the dev kits?
They aren't quite ready to add a 7xxx card to the set-up, and are considering delaying the console 3-4 months (a December 2012 release instead of a May 2012 release) in order to boost its power.

Nintendo never makes sudden changes like that.
 
I wonder what the OS footprint is assuming 1.5 gigs of RAM. 128 MB? A little large compared to past consoles.

This is what I've been thinking... maybe it's rather large because Nintendo plan to centralise their services and whatnot. Cloud storage, etc. Meh, I'm super speculating, but it's certainly possible. One Nintendo hub for all your systems.
 
Chips like those are extremely low-yield. Nintendo would only be able to make 2-4 million every year at most.

You are right about such ULV parts, but with around 800 SUs (or 768 when using VLIW4) they could clock it even lower in the Wii U (say 500 MHz), and it would probably use less power than a 640 SU chip @ 600MHz.

The question is what's more economical: the smaller die on 28nm or high yields on a mature 40nm process. We're probably talking about a 100mm² difference, so I guess 28nm would make sense (at least in the long run).
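To put rough numbers on the wider-but-slower idea, here's a minimal back-of-envelope sketch. It assumes peak throughput is SPUs x 2 FLOPs x clock, and that the lower clock would let them drop the voltage (dynamic power scales roughly with frequency x voltage squared, so the voltage drop is what would actually save power). The SPU counts and clocks are just the rumoured figures from the posts above, not confirmed specs.

```c
/* Back-of-envelope: rumoured 800 SU @ 500 MHz vs 640 SU @ 600 MHz.
 * Assumes peak GFLOPS = SPUs * 2 FLOPs (one MAD) * clock, and that dynamic
 * power scales roughly with SPUs * f * V^2. Figures are speculation, not specs. */
#include <stdio.h>

static double gflops(int spus, double mhz)
{
    return spus * 2.0 * mhz / 1000.0;
}

int main(void)
{
    /* Relative switching activity (SPUs * clock), ignoring the voltage term. */
    double wide_activity   = 800 * 500.0;
    double narrow_activity = 640 * 600.0;

    printf("800 SU @ 500 MHz: %4.0f GFLOPS, relative activity %.2f\n",
           gflops(800, 500), wide_activity / narrow_activity);
    printf("640 SU @ 600 MHz: %4.0f GFLOPS, relative activity %.2f\n",
           gflops(640, 600), 1.0);
    return 0;
}
```

So the wider chip comes out slightly ahead on paper throughput; whether it also wins on power depends entirely on how far the voltage can drop at 500 MHz.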
 
Apologies if I don't understand the tech talk, but do the rumours give us a big upgrade or will it be an Xbox 360 in Nintendo disguise?
Thanks.
 
I think that by this point we should know better.
Nintendo has made some pretty sudden changes the last couple of years (releasing the Wii, the 3DS price drop+free games).

Perhaps, but a GPU change like that would require a ton of other changes that, at that point, would likely force a 6+ month delay of the system. It would be a nightmare for everyone involved.

They make those changes after the console is released; it's better that way.

Older than your mother.

Wasn't the Pico GPU in the 3DS a rather late change?


That was made at least 9 months before launch. The fact that devs are still using the same GPU they've been using since the alpha kit makes that highly unlikely. Besides, Pico was likely a back-up plan from the start.
 
Perhaps that's the reason nintendo has been stuck with old and overheating cards in the dev kits?
They aren't quite ready to add a 7xxx card to the set-up, and are considering delaying the console 3-4 months (a December 2012 release instead of a May 2012 release) in order to boost its power.
The simple and obvious reason is that Nintendo uses a custom GPU that's been in the works since 2007 or something. Switching to a more recent architecture would mean throwing years of work and millions of dollars out of the window. Not to mention efficiency didn't go up, so it doesn't really matter. In some ways, it's actually gone down due to the switch to VLIW4.
 
The simple and obvious reason is that Nintendo uses a custom GPU that's been in the works since 2007 or something. Switching to a more recent architecture would mean throwing years of work and millions of dollars out of the window. Not to mention efficiency didn't go up, so it doesn't really matter. In some ways, it's actually gone down due to the switch to VLIW4.

I really want to see you explain this one.
 
Can I just ask what these 'recent rumours' were?
As far as I'm concerned, we've learnt nothing since E3.
lherre's posts have been too vague imo, and I question his reliability... then again, he's all we've had for a long time, so.

You can ask some of the neogaf mods to check reliability :P

I can't post all the specs here, maybe nintendo ninjas will be in my home when I get back from work if I do it XD.
 
The GCN's CPU/GPU clockspeed boost was made pretty late (only about 5-6 months before launch IIRC!)

Boosting clockspeed isn't a big deal. Changing the entire GPU is.

lherre, back me up. How likely do you think Nintendo is to change the GPU at this stage? Also, if Nintendo goes third-party and ports their planned Wii U games to PC, should I buy a low-end GPU or a mid-range (more than $100) one? ;)
 
Nintendo never makes sudden changes like that.

Perhaps, but a GPU change like that would require a ton of other changes that, at that point, would likely force a 6+ month delay of the system. It would be a nightmare for everyone involved.
Where are you getting all this expert information from, BurntPork?

Apologies if I don't understand the tech talk, but do the rumours give us a big upgrade or will it be an Xbox 360 in Nintendo disguise?
Thanks.
All we know is that it's probably going to be an upgrade. If the rumours about the development kit specifications are true, and the final console achieves at least the same performance, we're going to get a console that will at the very least be able to produce the same graphics as the PS360 at 1080p. All the discussion we actually have is just trying to figure out how the random pieces of information fit together and what they would imply for the Wii U's power.
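For a rough sense of what "the same graphics at 1080p" demands, here's a trivial sketch of the pixel math. It only counts render-target pixels (assuming the typical PS360 target of 1280x720), which is a lower bound on the extra GPU work rather than a full performance model.

```c
/* Pixel count of a 1080p frame vs a typical 720p PS360 frame. Pushing the
 * same scene at 1080p means shading/filling ~2.25x the pixels, before
 * counting anything else (AA, bandwidth, geometry). */
#include <stdio.h>

int main(void)
{
    const double px_720p  = 1280.0 * 720.0;
    const double px_1080p = 1920.0 * 1080.0;
    printf("1080p / 720p pixel ratio: %.2f\n", px_1080p / px_720p);  /* 2.25 */
    return 0;
}
```

That 2.25x is in the same ballpark as the "roughly 2-3x Xbox 360" target mentioned later in the thread.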
 
Perhaps, but a GPU change like that would require a ton of other changes that, at that point, would likely force a 6+ month delay of the system. It would be a nightmare for everyone involved.
But the WiiU doesn't have a set release date yet. They could change the GPU entirely and still be within the "fiscal year 2012" range (which is the only confirmed release window).
 
You can ask some of neogaf mods to check reliability :P

I can't post all the specs here, maybe nintendo ninjas will be in my home when I back from work if I do it XD.

Hm, I worded it wrong - I'm not questioning your reliability, just the reliability of the information, and how recent/subject to change it is...

Can you say anymore on the GPU?
Seeing as the CPU rumours all coincide pretty well, what's the situation with the graphical capabilities?
I'd just love to know what it's comparable to, and if it's planned to change anytime soon, for better or worse :|
 
So this thread is like the Garlic Jr. saga?

The Freeza/Goku fight, when the planet was going to blow up in five minutes. The re-unveiling will be when it finally explodes.

By the way, as far as I understand, all POWER7 CPUs sold have eight cores, but a number of the cores are deactivated. You can have them activated later by buying a license key from IBM and installing it on the CPU itself, which then unlocks the additional cores. Yeah, I'm not making that up - that's how it used to be with POWER CPUs. The chip is so low volume that it's cheaper for IBM to produce just a single version.

This would definitely explain what I saw awhile back since they disable the cores by column.

The simple and obvious reason is that Nintendo uses a custom GPU that's been in the works since 2007 or something. Switching to a more recent architecture would mean throwing years of work and millions of dollars out of the window. Not to mention efficiency didn't go up, so it doesn't really matter. In some ways, it's actually gone down due to the switch to VLIW4.

I thought you said since 2009?
 
Where are you getting all this expert information from, BurntPork?

Here, I guess? :p

But the WiiU doesn't have a set release date yet. They could change the GPU entirely and still be within the "fiscal year 2012" range (which is the only confirmed release window).

I really doubt that they want to risk another 3DS, so if they miss the holidays, they'll miss the fiscal year and launch in mid 2013.

wsippel made a good point. Nintendo's not going to throw away that R&D. We're getting an R700. It sucks, but it's what we'll be stuck with.
 
The simple and obvious reason is that Nintendo uses a custom GPU that's been in the works since 2007 or something. Switching to a more recent architecture would mean throwing years of work and millions of dollars out of the window. Not to mention efficiency didn't go up, so it doesn't really matter. In some ways, it's actually gone down due to the switch to VLIW4.

a) I don't think it was specified in 2007, maybe in the concept-finding stages.
b) The current AMD architecture wasn't developed in one day either, so all the developments that found their way (at least) into the 2011 (if not 2012) lineup will be considered in Nintendo's custom chips. Neither Nintendo nor AMD are idiots; they know well before anybody else (besides Nvidia, Microsoft and Sony) what will be available and what makes sense to implement.
 
Not to mention efficiency didn't go up, so it doesn't really matter. In some ways, it's actually gone down due to the switch to VLIW4.

Yup. VLIW4 was a concession for GPGPU, which isn't necessary in a games console.

AMD's architecture in recent years was extremely well suited to consoles, with massive math power, while Nvidia had much less raw power but better computation abilities and efficiency.
In a console, AMD's designs would have had a huge edge on Nvidia's, because all that math power would actually have been used, since on a console the software is tailored to the hardware rather than vice versa.

In recent times AMD has started turning its architecture more towards compute as well though, so by the time the PS4/720 come out, it may be a moot point.
 
a) I don't think it was specified in 2007, maybe in the concept-finding stages.
b) The current AMD architecture wasn't developed in one day either, so all the developments that found their way (at least) into the 2011 (if not 2012) lineup will be considered in Nintendo's custom chips. Neither Nintendo nor AMD are idiots; they know well before anybody else (besides Nvidia, Microsoft and Sony) what will be available and what makes sense to implement.
Work started in 2007 if I remember correctly and continued until a few months ago, if it's even done yet. And while current AMD designs weren't developed overnight, whatever Nintendo uses is a parallel development branch (which means it very well might use some more recent stuff found in AMD's PC GPUs).

Also, as I wrote many times in this and other Wii U tech threads, Nintendo has an internal chip design team. Implementing their ideas and making everything fit doesn't happen overnight either, so they had to start somewhere and branch out a few years ago. We'll see how much their final design will have to do with any known AMD GPU.
 
Found it.

Not much, but whatever: From what I found digging around, the Wii U GPU was in development between mid 2009, possibly earlier, and April 2011 or later. Just to put "based on R700" into perspective - we're definitely not looking at some off-the-shelf 2007 PC part. Other than that, it supports UVD (not a big deal, considering UVD was introduced many years ago) and uses an AMBA v2 bus (which seems a bit weird - might be there for BC stuff).

I knew I remembered you mentioning 2009.
 
Yup. VLIW4 was a concession for GPGPU, which isn't necessary in a games console.

AMD's architecture in recent years was extremely well suited to consoles, with massive math power, while Nvidia had much less raw power but better computation abilities and efficiency.
In a console, AMD's designs would have had a huge edge on Nvidia's, because all that math power would actually have been used, since on a console the software is tailored to the hardware rather than vice versa.

In recent times AMD has started turning its architecture more towards compute as well though, so by the time the PS4/720 come out, it may be a moot point.

The next Xbox is coming out next year, the PS4 in 2013.

BurntPork said:
We're getting an R700. It sucks, but it's what we'll be stuck with.

Why such optimism? I recently lent my Radeon 9800 Pro to Nintendo, so I think they may base their design on that.

lherre said:
I can't post all the specs here, maybe nintendo ninjas will be in my home when I get back from work if I do it XD.

They're already watching you.

DCking said:
If the CPU is indeed 'based on POWER7', that means it's a new CPU built around the POWER7 core (or a modified version of it). Although code execution on the Wii U CPU would be similar to a POWER7 CPU, the CPU package will be a completely different design, and can therefore use any number of cores Nintendo and IBM want. The reason we can be quite certain it's a completely different design is that the POWER7 CPU has many features designed for enterprise servers rather than gaming consoles, and it would draw too much power for a small console as well.

We have so many conflicting rumours on the CPU, some of which stem from IBM themselves with the Watson/Power7 comments. Having an asymmetrical Cell-ish design flies in the face of any traditional Power7 setup, even one that has been stripped down for console use.
 
You can ask some of the neogaf mods to check reliability :P

I can't post all the specs here, maybe nintendo ninjas will be in my home when I get back from work if I do it XD.


You can dump them in my PM box. I'll launder the info then slowly release it into the wild. No one will ever trace it back to you.
 
You can ask some of the neogaf mods to check reliability :P

I can't post all the specs here, maybe nintendo ninjas will be in my home when I get back from work if I do it XD.

How about just posting the number of SPUs used in the Wii U GPU? That would narrow things down a bit but still wouldn't give us the exact performance or feature set. Which should be vague enough to keep the Ninjas in Iwata's closet for another day.
 
^ Correct JJ.

How about just posting the number of SPUs used in the Wii U GPU? That would narrow things down a bit but still wouldn't give us the exact performance or feature set. Which should be vague enough to keep the Ninjas in Iwata's closet for another day.

Better question that shouldn't put him in too much trouble. Does the number end in a zero?
 
That was a question I had been wanting to ask for awhile. It would let us know whether or not the GPU is VLIW5.

Then again if it's 640, then that kills that idea. :P
 
So coding extraordinaire, does a POWER7 core with two threads dedicated to OS and two to gaming sound plausible? And by plausible I mean splitting the threads in that manner.
I'm not familiar with power7's SMT details/gotchas, but at prima vista, I don't see why not.

And the latter part of your post made me think of an article linked to in a GC webpage I mentioned awhile back. Surprisingly the link is still alive.

http://www.eetimes.com/electronics-news/4166704/GameCube-clears-path-for-game-developers
What can I say, Gekko was that good of a CPU design ; )

Seriously, though, I don't know when IBM first introduced cache partitioning with a dedicated DMA engine for the locked part (first time I've seen it was in Gekko, otherwise cache locking per se has been around for ages in the ppc), but it looks like a darn efficient shortcut allowing sw to work around some of the inherent hurdles with classic caching schemes. The best part is, it is seamless and does not break the normal course of work of the cache as a whole; the locked part of the cache still produces cache hits as valid as those coming from the unlocked part. It's basically a clever and efficient 'deus ex machina' shortcut (i.e. a means to allow higher-level intelligence to interfere in the workings of the low-level algorithms) allowing for smarter caching than what the vanilla schemes can achieve. In comparison, the traditional cache preload/warmup means found in many CPUs today are much more subtle/less powerful (and of course, power/ppc has had those for eons as they are part of the early ISA standards).

To get a hint how Gekko's cache locking/partitioning works, check out the corresponding section in this 750CL's manual: https://www-01.ibm.com/chips/techli...$file/To CL - CL Special Features 6-22-09.pdf
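To make the locked-cache idea a bit more concrete, here's a minimal conceptual sketch in C. The lc_* helpers are hypothetical stand-ins (implemented here as plain memory operations so the sketch actually compiles and runs); on Gekko/750CL they would be the locked-cache DMA facilities the manual describes, so treat this as an illustration of the pattern rather than real SDK code.

```c
/* Conceptual sketch of the locked-cache + DMA pattern described above.
 * The lc_* helpers are hypothetical stand-ins, not a real API. */
#include <stdio.h>
#include <string.h>

#define LOCKED_CACHE_BYTES (16 * 1024)   /* half of a 32 KB L1 D-cache */
static float locked_cache[LOCKED_CACHE_BYTES / sizeof(float)];

static void *lc_lock_half(void)  { return locked_cache; }                       /* lock half the cache */
static void  lc_dma_read(void *dst, const void *src, size_t n) { memcpy(dst, src, n); } /* main RAM -> locked cache */
static void  lc_dma_wait(void)   { /* wait for the DMA queue to drain */ }

/* Transform a batch whose vertex data has been streamed into the locked half,
 * so the inner loop hits in L1 without evicting the rest of the working set. */
static void process_batch(const float *vertices_in_ram, float *out, size_t count)
{
    float *scratch = lc_lock_half();
    lc_dma_read(scratch, vertices_in_ram, count * 3 * sizeof(float));
    lc_dma_wait();
    for (size_t i = 0; i < count * 3; i++)
        out[i] = scratch[i] * 2.0f;      /* stand-in "transform" */
}

int main(void)
{
    float in[12] = { 1, 2, 3,  4, 5, 6,  7, 8, 9,  10, 11, 12 }, out[12];
    process_batch(in, out, 4);
    printf("first transformed vertex: %.1f %.1f %.1f\n", out[0], out[1], out[2]);
    return 0;
}
```

The point is the one made above: the DMA engine fills the locked half explicitly, so the hot loop gets guaranteed L1 hits without the streamed data evicting the rest of the working set.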
 
As long as the Wii U runs current Wii games at 1080p with tessellation, I will be happy.
It's weird, but I never expect the hardware performance from Nintendo that I would expect from MS or Sony, and I'm happy with that.
 
As long as the Wii U runs current Wii games at 1080p with tessellation, I will be happy.
It's weird, but I never expect the hardware performance from Nintendo that I would expect from MS or Sony, and I'm happy with that.

Funny how 1 generation shapes people's visions. As an almost-exclusively PC gamer, I can tell you I never turned on my Wii for its visuals, despite the few visual delights it *does* have. Funny thing is, the same can be said of my Playstation.
 
Funny how 1 generation shapes people's visions. As an almost-exclusively PC gamer, I can tell you I never turned on my Wii for its visuals, despite the few visual delights it *does* have. Funny thing is, the same can be said of my Playstation.

I agree. I sold my gaming PC last month, and while the 720p, 30FPS visuals on the Xbox 360 upset me, the Wii's never do.
 
Not much, but whatever: From what I found digging around, the Wii U GPU was in development between mid 2009, possibly earlier, and April 2011 or later. Just to put "based on R700" into perspective - we're definitely not looking at some off-the-shelf 2007 PC part. Other than that, it supports UVD (not a big deal, considering UVD was introduced many years ago) and uses an AMBA v2 bus (which seems a bit weird - might be there for BC stuff).
2009 was the year when Evergreen came out and the Northern Islands series was on the drawing board, and 2011 when Southern Islands was being finalized. This could still mean anything, but I doubt we can label the GPU as 2007 tech.

If the GPU design is finished, why do we still have an RV770LE in the devkits? Or has it been replaced already?
We have so many conflicting rumours on the CPU, some of which stem from IBM themselves with the Watson/Power7 comments. Having an asymmetrical Cell-ish design flies in the face of any traditional Power7 setup, even one that has been stripped down for console use.
We don't really know what POWER7 cores can and can't do. There's only one chip we know of that has them. To be honest, I don't know if it's difficult to put something like that in an asymmetrical design, but I doubt it's mindblowingly difficult. Besides, we don't know if it is a full-fledged asymmetrical design at all, or if it's just a cache difference.
Shadaneus said:
So, VLIW5 is bad?
Even if we know the GPU has 640 cores, we still don't know whether it's VLIW4 or VLIW5, because VLIW4 parts come in multiples of 64 and VLIW5 parts in multiples of 80, and 640 happens to be both ;)
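A quick sketch of that divisibility point, just to show how few shader counts are actually ambiguous (the 16-units-per-SIMD figure is the usual AMD arrangement, so 64 and 80 SPs per SIMD for VLIW4 and VLIW5 respectively):

```c
/* VLIW4 SIMDs hold 16 units x 4 lanes = 64 SPs, VLIW5 SIMDs 16 x 5 = 80 SPs.
 * Only counts divisible by both (multiples of lcm(64, 80) = 320) are ambiguous. */
#include <stdio.h>

int main(void)
{
    for (int sp = 64; sp <= 960; sp += 16)
        if (sp % 64 == 0 && sp % 80 == 0)
            printf("%d SPs could be either: 64 x %d or 80 x %d\n",
                   sp, sp / 64, sp / 80);
    return 0;
}
```

So a leaked figure of, say, 512 or 720 SPs would settle the question immediately, while 640 settles nothing.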
 
Although the CPU architecture is weird, I think blu's theory makes a lot of sense. The GPU is really puzzling me though. lherre pointed out that the Wii U should be released within the year, so you'd think the GPU would be near final. That's why I'm wondering why there still seems to be an RV770LE included in the devkits.

Although it's a good GPU that would send the 360 and PS3 crying to their moms, I can't wrap my head around why Nintendo would even consider putting it in the final unit. First of all, it's an outdated chip on a 55nm process. To fit it into the power requirements, as well as fit in the brain_stew-confirmed eDRAM, you would presume they'd customize it and shrink it down. Why they would choose to customize a chip that ships with faulty (disabled) shader units, and is therefore more complex than necessary, is beyond me. What's more, Nintendo doesn't have to choose a 2008 chip. 2008 chips were actually designed in 2006 - the Southern Islands architecture AMD is releasing in the coming months should actually have been finished at least halfway through last year. Even if Nintendo is being conservative, they still could have picked the Northern Islands (HD6xxx) architecture as the foundation for their chip. That one was on the drawing board in 2009.

So basically, it would make absolutely no sense to use the RV770LE. But if it's still used in the devkits, what's going on?
The ability to dissipate heat actually increases not with volume, but with surface area, which should be even greater (?). So yeah, I think the Wii U has enough room for decent components. EDIT: The HD6470 has a 27W TDP only in the form of a PCI-e card. This chip should come in a more compact, embedded form factor, and the power draw of the chip itself is probably only about half of that figure.
Yeah. If the APIs stay the same, pretty much all of the code can be reused. From what I understand, VLIW4 is an efficiency improvement, and should only make a difference in performance, not in functionality or programmability. Only close-to-the-metal things, which I guess should be discouraged in the devkit phase anyway, might need to be rewritten. It is important, though, that any new GPU should be able to deliver as much as the old one in every department.

I believe it's possible for some aspects of the GPU to draw on the more recent lines of AMD cards. Of course, both AMD and Nintendo would know what would be available probably a couple years back. Don't know much about VLIW4. Perhaps someone can break it down for me?

Basically, the only reason I think they are using an RV770LE in dev kits is because there are no other off-the-shelf components that fall in that ballpark of performance. I don't believe AMD makes any other GPUs that run between 500 and 600 MHz with 640 SPUs. Thus, while performance may increase in the final unit due to advances in technology, Nintendo has a set target of SPUs and clock rate. Does that make any sense?

Additionally, that card utilizes GDDR3 on a 256-bit bus. AFAIK that would make for a pretty high bandwidth and low latency memory solution, especially when combined with copious amounts of eDRAM.
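For reference, here's a rough peak-bandwidth figure for that setup, assuming the retail RV770LE's (HD 4830's) stock memory clock of ~900 MHz GDDR3 (1800 MT/s effective) on the 256-bit bus. Nothing here is a confirmed devkit number; it just shows what that class of memory interface delivers.

```c
/* Peak theoretical bandwidth of 1800 MT/s GDDR3 on a 256-bit bus. */
#include <stdio.h>

int main(void)
{
    const double transfers_per_s = 1800e6;        /* effective data rate */
    const double bytes_per_xfer  = 256.0 / 8.0;   /* 256-bit bus = 32 bytes */
    printf("Peak bandwidth: %.1f GB/s\n",
           transfers_per_s * bytes_per_xfer / 1e9);   /* ~57.6 GB/s */
    return 0;
}
```

That's roughly 2.5x the Xbox 360's 22.4 GB/s of main memory bandwidth, before any eDRAM enters the picture.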
 
I'm not familiar with power7's SMT details/gotchas, but at prima vista, I don't see why not.


What can I say, Gekko was that good of a CPU design ; )

Seriously, though, I don't know when IBM first introduced cache partitioning with a dedicated DMA engine for the locked part (first time I've seen it was in Gekko, otherwise cache locking per se has been around for ages in the ppc), but it looks like a darn efficient shortcut allowing sw to work around some of the inherent hurdles with classic caching schemes. The best part is, it is seamless and does not break the normal course of work of the cache as a whole; the locked part of the cache still produces cache hits as valid as those coming from the unlocked part. It's basically a clever and efficient 'deus ex machina' shortcut (i.e. a means to allow higher-level intelligence to interfere in the workings of the low-level algorithms) allowing for smarter caching than what the vanilla schemes can achieve. In comparison, the traditional cache preload/warmup means found in many CPUs today are much more subtle/less powerful (and of course, power/ppc has had those for eons as they are part of the early ISA standards).

To get a hint how Gekko's cache locking/partitioning works, check out the corresponding section in this 750CL's manual: https://www-01.ibm.com/chips/techli...$file/To CL - CL Special Features 6-22-09.pdf

"Prima vista" was about all I was looking for thanks. :)

Nice read (for me at least). I'll check out that link when I get the chance.

So, VLIW5 is bad?

It's not bad, so to speak. It's true that VLIW4 is supposed to be an attempt by AMD to improve the GPGPU capabilities of their previous architecture. However, one of the things that prompted AMD to switch is that once DX10 came out, they were finding poor utilization of their ALUs. Trying to remember off the top of my head, but each VLIW5 stream core has 5 units (4 simple and 1 complex). Once DX10 came out, it was said that games on average used only 3.x (can't remember what x was) of the ALUs in each core. So what they did was remove the complex unit and improve the functionality of the 4 simple units (VLIW4), which leads to the next benefit: removing the complex unit reduces the number of transistors needed by about 10%, while supposedly giving the same performance as a comparable VLIW5 chip.

I think the issue now is that games are still designed more around the VLIW5 stream core, since Cayman is the only non-VLIW5 GPU in AMD's line so far. So that would be the main reason why (as of now) I would see Nintendo sticking with VLIW5. But if this is part of the future direction AMD wants to go with some of their GPUs, then it's just a matter of time before devs convert to it, and I would like to think Nintendo would pursue this direction on top of the other believed benefits.

Although compute units over VLIW4/5 would be even nicer. ;)
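To put the utilisation argument in plain numbers, here's a trivial sketch using the ~3.6 average recalled a couple of posts down; it deliberately ignores that VLIW4 handles transcendental ops by ganging up several simple units, so it's only the first-order picture.

```c
/* If the average shader bundle fills ~3.6 slots, a 5-wide VLIW unit idles
 * more of its hardware than a 4-wide one. First-order estimate only. */
#include <stdio.h>

int main(void)
{
    const double avg_slots = 3.6;   /* rough DX10-era average quoted in the thread */
    printf("VLIW5 slot utilisation: %.0f%%\n", 100.0 * avg_slots / 5.0);  /* prints 72% */
    printf("VLIW4 slot utilisation: %.0f%%\n", 100.0 * avg_slots / 4.0);  /* prints 90% */
    return 0;
}
```

Which is the gist of why dropping the fifth slot can save transistors without costing much in typical DX10-era shader workloads.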
 
Once DX10 came out, it was said that games on average used only 3.x (can't remember what x was) of the ALUs in each core.
3.6 is the number that pops in my head.

So what they did was remove the complex unit and improve the functionality of the 4 simple units (VLIW4), which leads to the next benefit: removing the complex unit reduces the number of transistors needed by about 10%, while supposedly giving the same performance as a comparable VLIW5 chip.
There are clear cases where the old design would outperform the new one, though. There was a very nice analysis thread on B3D, with code snippets and all, which I can't find now.

I think the issue now is that games are still designed more around the VLIW5 stream core, since Cayman is the only non-VLIW5 GPU in AMD's line so far. So that would be the main reason why (as of now) I would see Nintendo sticking with VLIW5. But if this is part of the future direction AMD wants to go with some of their GPUs, then it's just a matter of time before devs convert to it, and I would like to think Nintendo would pursue this direction on top of the other believed benefits.
AMD want to go with SIMD. VLIW4 is as obsolete as VLIW5 on AMD's roadmaps.
 
3.6 is the number that pops in my head.

Cool. I kept thinking 3.2, but I knew it wasn't that low.


There are clear cases where the old design would outperform the new one, though. There was a very nice analysis thread on B3D, with code snippets and all, which I can't find now.

Hey man, you quoted a key sentence in the wrong spot. :P I'm going with poor optimization till you find that info, lol.
AMD want to go with SIMD. VLIW4 is as obsolete as VLIW5 on AMD's roadmaps.

Hence the last sentence in that post. ;)
 
Here, I guess? :p



I really doubt that they want to risk another 3DS, so if they miss the holidays, they'll miss the fiscal year and launch in mid 2013.

wsippel made a good point. Nintendo's not going to throw away that R&D. We're getting an R700. It sucks, but it's what we'll be stuck with.

What's wrong with a modded R700? There have been some very educational posts in this thread as to how more recent AMD cards wouldn't make much of a difference. It's obvious Nintendo is targeting roughly "2-3x" Xbox 360. The system, like Wii, will be more about the package as a whole. And an R700 fitted into the kind of package they showed at E3 is pretty damn impressive.


That was my original reaction, but what some other guys have been suggesting makes more sense: this OS-running core could be a 'vanilla' core just with "a tad" more cache, and that would make it suitable for being the OS core as well as also running some game code. The key is in the extra cache - one of the main performance hurdles when running entirely different processes (e.g. an OS and some game) on the same physical core is how such processes tend to screw each other's cache states, also known as cache thrashing. Giving this "hetero-software" core some extra cache (and using a bit of cache partitioning - something a POWER7 core might do natively) could solve the cache thrashing problem and get this core to perform about on par with the 'dedicated game' cores.

Very intriguing. Thanks for the analysis. So am I to believe the three cores would be basically the same, except for the OS core having more L2 cache? And this would probably mean throwing out the idea of any cores running four threads. But perhaps this would be softened by the existence of an audio DSP and a Starlet-esque processor?
 
The sad thing is I'll be forced to buy a Wii U as soon as they release their flagship Mario/Zelda game on the system, even though I have no interest in it otherwise. *Sigh*, Nintendo owns my ass.
Aaannndd I have no fucking idea what anyone is talking about in this thread, so time to bail out.
 
There's nothing wrong with an R700. If they give us an RV740-like chip, with some decent modern tessellation power and a nice chunk of eDRAM slapped on, there's nothing more to wish for. The R700 series has been the foundation of all AMD graphics to this day, and a 'more modern' design wouldn't make much of a difference at all in terms of shader architecture or other basic GPU functionality, as far as I understand.

The only thing they simply can't ignore is a tessellation unit. If it doesn't have a tessellator, it can 'only' produce the same graphics as the PS360, although probably significantly upgraded. If it has a proper tessellator, it can actually do new stuff the PS360 can't, and do every trick the next Xbox and PS4 can. It therefore has the potential to make the difference between last gen and next gen, although it probably won't be as significant a difference as programmable shaders were.

Reading wsippel's posts, it seems to me the GPU won't be anything like an off-the-shelf part. The period it was designed in was a period when tessellators were a standard GPU feature for AMD, too. I'm not too worried.
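For anyone who hasn't run into tessellation before: the core idea is just amplifying coarse geometry into finer geometry on the GPU. Here's a toy CPU-side sketch of one subdivision step; real hardware tessellation is driven by per-edge tessellation factors and hull/domain shaders, so this only illustrates the geometric idea, not the actual pipeline.

```c
/* Toy illustration of what a tessellator does: amplify coarse geometry by
 * splitting each triangle into four via its edge midpoints. On real hardware
 * the new vertices would then be displaced (e.g. by a displacement map) to
 * add the actual detail; that part is omitted here. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 midpoint(Vec3 a, Vec3 b)
{
    Vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return m;
}

/* One subdivision step: triangle (a, b, c) -> 4 triangles, 12 vertices out. */
static void subdivide(Vec3 a, Vec3 b, Vec3 c, Vec3 out[12])
{
    Vec3 ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    Vec3 tris[12] = { a, ab, ca,   ab, b, bc,   ca, bc, c,   ab, bc, ca };
    for (int i = 0; i < 12; i++)
        out[i] = tris[i];
}

int main(void)
{
    Vec3 a = {0, 0, 0}, b = {1, 0, 0}, c = {0, 1, 0};
    Vec3 out[12];
    subdivide(a, b, c, out);
    printf("1 input triangle -> 4 output triangles after one step\n");
    printf("after n steps: 4^n triangles from the same source mesh\n");
    return 0;
}
```

Combined with displacement mapping, that's how a low-poly mesh gains real geometric detail (rounded silhouettes, hair, fur) instead of just normal-mapped shading.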
 
There's nothing wrong with an R700. If they give us an RV740-like chip, with some decent modern tessellation power and a nice chunk of eDRAM slapped on, there's nothing more to wish for. The R700 series has been the foundation of all AMD graphics to this day, and a 'more modern' design wouldn't make much of a difference at all in terms of shader architecture or other basic GPU functionality, as far as I understand.

The only thing they simply can't ignore is a tessellation unit. If it doesn't have a tessellator, it can 'only' produce the same graphics as the PS360, although probably significantly upgraded. If it has a proper tessellator, it can actually do new stuff the PS360 can't, and do every trick the next Xbox and PS4 can. It therefore has the potential to make the difference between last gen and next gen, although it probably won't be as significant a difference as programmable shaders were.

Reading wsippel's posts, it seems to me the GPU won't be anything like an off-the-shelf part. The period it was designed in was a period when tessellators were a standard GPU feature for AMD, too. I'm not too worried.

Ok, I admit I had no friggin clue what tessellation was until a moment ago, when I looked up a demonstration on YouTube using Unreal technology. I said "wow."
 
Ok, I admit I had no friggin clue what tessellation was until a moment ago, when I looked up a demonstration on YouTube using Unreal technology. I said "wow."
Yeah. I'm not too impressed with it yet, because it's just better at doing small details. Given that tessellation isn't used in many games because the PS360 can't do it (properly), I think it will become more impressive in the future. The Samaritan demo is pretty, but tessellation is only a small part of that. It's mostly other DX10/11 features running on way too powerful hardware :)

The most important thing tessellation can bring about is that it can actually do some great work for hair and fur. That means we can finally see the flood of bald space marine games die :)
 
I thought tessellation was only used to make blocks rounder; guess there's more to it, which would definitely make it useful. Considering they've even brought tessellation into the 3DS, Nintendo must have determined it's of some value.

Aside from that, what do today's latest chips have that isn't present in the R700? So far those weird dynamic lighting schemes, focus schemes and other such stuff seem to be only graphical gimmicks (funny how people label 3D as a "gimmick" when it's a much, much more immediately noticeable graphical feature) that you have to be told exist to even notice. Even that tessellation video is forced to stop the action and point to the places where improvements occur -- great for fine-tuning screenshots, but whether it really makes for a better game, I'm not sure.

The whole thing about "Nintendo always makes low powered hardware" just goes to show how fast people come in and move out of this hobby. Tons of kids in my generation never touched an Atari or Colecovision, and many kids now probably never knew Sega used to make consoles.
 
So I think I lost my mind last night when I posted that they would have to use 28nm to get the GPU cool enough. Looking through the Radeon line, I believe they could definitely create a chip that runs at around 600MHz with 640 shader units and get it to run at around 40 watts at peak.
 
^ Correct JJ.



Better question that shouldn't put him in too much trouble. Does the number end in a zero?

My earlier question is just as good!

I'll even revise it:

If Nintendo goes third-party before the Wii U releases and ports their games to PC, should I get a low-end graphics card or a mid-range card? (real mid-range, as in a card that launched at over $100)


Our questions combined can paint a pretty picture while he reveals almost nothing directly. IT'S FOOL PROOF!

in before "you should know"

Considering they've even brought tessellation into the 3DS

Huh? Is that a Pico feature?
 