WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

I'm thinking the same. No other review I read mentioned drops. They are confused.

Others do, like
nintendolife.com said:
If there's one minor gripe that can be levelled at the game it's the fact that the frame rate still drops now and again when things get a little hectic. It's not damaging to the gameplay in any way, but it's just a shame that such slowdown is present despite Wind Waker being on a much more advanced system.

But without footage, it's hard to know what exactly is happening.
 
Not random. But obviously not using the same calculation for bandwidth as you are using. I seem to remember blu and yourself talking about that. How are you getting those numbers again?
550,000,000 * 2048 / 8 / 1024^3

And as I mentioned: According to Nintendo, latency was a huge concern. (Pseudo-)static RAM gives you perfect single cycle latency - if (and only if) the RAM is clocked at the exact same frequency as the GPU.
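Written out (Python for clarity), with both the 2048-bit and 1024-bit readings of the macros, since that's the open question:

```python
CLOCK_HZ = 550_000_000  # assumed GPU/eDRAM clock from the thread

def edram_bandwidth_gib(bus_bits):
    """Peak bandwidth in GiB/s for a bus of the given width at CLOCK_HZ."""
    return CLOCK_HZ * bus_bits / 8 / 1024**3

print(f"{edram_bandwidth_gib(2048):.1f} GiB/s")  # 2048-bit reading: 131.1
print(f"{edram_bandwidth_gib(1024):.1f} GiB/s")  # 1024-bit reading: 65.6
```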

If you add up the columns in a macro, you get 256, yes. The thing that throws me off and prevents me from saying one way or another is that they are in a "sandwich" pattern. The bottom of the columns (the lighter portions) being on either side of what must be some type of path out to the interface on the left of the modules. If we just count horizontal and assume that each "sandwich" is sharing the same path, it would be 128. I'm really struggling to describe my thoughts here. lol
I thought about that as well, which is why I wouldn't rule out 65.5GB/s. But there are both sandwiched and non-sandwiched designs, and the MEM0 macros are not sandwiched. So I guess the MEM1 macros just look like they are.

Something I've been thinking about. Do we know if the GPU downclocks in Wii mode? I know the CPU def does, but I don't think we've gotten word on Latte. In trying to figure out the bandwidth of the eDRAM pools, this info might help. If the eDRAM is playing the part of Wii's 24MB 1t-SRAM, then it would have to be on a different clock than the rest of the GPU...if they are doing BC that way.
They have to downclock it; only Wii U's MEM1 can emulate Wii's MEM1. There are nine PLLs on Latte.
 
550,000,000 * 2048 / 8 / 1024^3

And as I mentioned: According to Nintendo, latency was a huge concern. (Pseudo-)static RAM gives you perfect single cycle latency - if (and only if) the RAM is clocked at the exact same frequency as the GPU.

Well, seeing as Nintendo place so much emphasis on the eDRAM, they must surely have gone with 131GB/s; it would make no sense to bottleneck it if they consider it such an important part.
 
The other consoles offer "similar" bandwidths. Keep in mind ROPs are another thing that determines what these things can put out.

Yes, to be honest 720p with better shaders and effects is pretty good for me. Also it would seem to me 131 is pretty great for 720p60 games.

I was not trying to return to the 720p vs 1080p discussion, as it's pretty clear that it depends on many aspects and 1080p by itself tells us nothing.

Is Pikmin 3 the best use of shaders right now on Wii U? I know it is not technically consistent across the board, but the objects and fruits are pretty well done. That certainly shows a lot of promise.
 
Halo could unlock its frame rate, yes. But model animations were locked to 30. It's rather jarring to play because of that.
Yes, but the question stands: did Halo Anniversary solve it?

There are procedural animation methods; ragdoll physics and the like have used them for years. They've been used loosely in games where you don't often see the character moving, as the result tends to look weird due to warping with no notion of or respect for human anatomy; it also usually creates seams between the end of the manipulation and the cue at which the motion-captured animation has to resume.

Inverse kinematics animation is also procedural, but it usually applies to legs snapping to variable terrain heights; it doesn't mess with the torso animations themselves, so it's "easier".


Then you have methods that take the cue for two animations that don't mesh correctly and create intermediate animation frames; Uncharted 2 and 3 famously used it, seeing as most SPEs would just be idling otherwise.

But that's too big of a structure to implement on a pre-existing game.


Doing an intermediate frame of animation in real time also seems like it could add an extra 16.67 ms of input lag (since, in real time, the game needs the next frame's coordinates to compute an intermediate one), meaning a 60 frames per second game could have ~83.33 ms input lag rather than the standard ~66.67 ms. Still not too bad of a tradeoff, but not something you could be doing pre-emptively.

Or perhaps it would retain the ~133.33 ms input lag despite being 60 frames per second and add an extra ~16.67 ms, as it feels a lot like doing the original 30 frames and then adding 30 variants in between; cues for action and reaction would be the same... I dunno.


Or you could do those intermediate frames offline and store them as new data, but I'm uncertain of that being easily automated and working out well enough.
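The frame-time arithmetic in that post can be sketched; the 4-frame baseline below is an assumption chosen to reproduce the post's ~66.67/~133.33 ms figures, not a measured number:

```python
def frame_ms(fps):
    """Duration of one frame in milliseconds."""
    return 1000.0 / fps

# Assumed baseline of 4 frames of pipeline latency (matches the post's
# figures: ~66.67 ms at 60fps, ~133.33 ms at 30fps).
PIPELINE_FRAMES = 4

base_60 = PIPELINE_FRAMES * frame_ms(60)   # ~66.67 ms
# Interpolating needs the *next* frame's coordinates before it can emit
# the in-between frame, so it waits one extra 60fps frame:
with_interp = base_60 + frame_ms(60)       # ~83.33 ms
print(round(base_60, 2), round(with_interp, 2))
```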
Yes, to be honest 720p with better shaders and effects is pretty good for me. Also it would seem to me 131 is pretty great for 720p60 games.

I was not trying to return to the 720p vs 1080p discussion, as it's pretty clear that it depends on many aspects and 1080p by itself tells us nothing.

Is Pikmin 3 the best use of shaders right now on Wii U? I know it is not technically consistent across the board, but the objects and fruits are pretty well done. That certainly shows a lot of promise.
No such thing as too much. :)

But Nintendo tends to aim for appropriate (or what they think appropriate is), so going higher would have to pose palpable benefits beyond being able to be spammed someday by 2-3 teams/games in order to "punch above" its weight.
 
Inverse kinematics animation is also procedural, but it usually applies to legs snapping to variable terrain heights; it doesn't mess with the torso animations themselves, so it's "easier".

That's not what Inverse Kinematics, also known as IK, is. Feet snapping to variable terrain heights is something different. IK is just another way of controlling and setting keys on a skeleton. With Forward Kinematics, or FK, you're setting rotational values on each joint individually, so to move a hand you move and set keys on the shoulder, elbow, and wrist. With Inverse Kinematics you're moving the end effector of a skeletal chain, and it uses math to figure out how those joints need to rotate to place the end of the chain where the end effector is. The mathematical principles and basis for IK were developed by NASA for controlling robot arms.

Character rigs can have all kinds of mixes and matches of FK and IK parts. I've used rigs with FK spines but with the option of FK or IK legs and arms. I've used rigs that are all FK, or even all IK.
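The "math to figure out how those joints need to rotate" is, for a two-joint limb, a textbook law-of-cosines solve. A minimal 2D sketch (standard analytic two-bone IK, not any particular engine's implementation):

```python
import math

def two_bone_ik(tx, ty, l1, l2):
    """Analytic 2-bone IK: joint angles placing the chain's end at (tx, ty).

    l1, l2 are the bone lengths (e.g. thigh and shin). Returns the root
    angle and the bend angle at the middle joint, in radians."""
    d2 = tx*tx + ty*ty
    # clamp cos to [-1, 1] so unreachable targets just fully extend the chain
    c2 = max(-1.0, min(1.0, (d2 - l1*l1 - l2*l2) / (2.0*l1*l2)))
    theta2 = math.acos(c2)  # elbow/knee bend
    theta1 = math.atan2(ty, tx) - math.atan2(l2*math.sin(theta2),
                                             l1 + l2*math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """FK check: end-effector position produced by the joint angles."""
    return (l1*math.cos(theta1) + l2*math.cos(theta1 + theta2),
            l1*math.sin(theta1) + l2*math.sin(theta1 + theta2))
```

Running FK on the IK result recovers the target, which is exactly the "reverses this calculation" relationship the Wikipedia quote below describes.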
 
30 fps because Nintendo keyframed every animation. Putting it up to 60 fps would just make the animations cycle through faster. It'd look really weird. Getting around it would require redoing the animations. No simple fix.
How are you sure that they are keyframing every frame (I assume you meant every frame, since all animations have "key frames")? And how do you know that's why they limited it to 30fps? I'm asking because if you're getting frame drops at 30fps, it's highly unlikely that it could be pushed to 60fps without massive frame drops.

Also, 3d animation is trivial to shift from 30fps to 60fps, or whatever FPS you want. Even if EVERY frame was keyframed (which is a horrible way to do things, and I would say there's a 100% chance of that not being the case), you can still tween between key frames; it's not as smooth, but possible. Another fallback (if the unlikely were happening) would be to duplicate animation frames, which would lead to 30fps character animation but 60fps camera/player movement in the world and everything else, though I doubt this would ever be necessary (Rayman does it because it's 2D animated, and every frame is "drawn" separately).


how does that work? does it simply add frames with coordinates averaged from the previous and next frame for all data points, or is there some clever way of doing it that accounts for inverse kinematics, center of mass etc?
This depends on the engine, but to answer the question you have to go back to how 3d art/animation is created.

1) 3d models are created in a program like 3dstudio/maya/the like, you get a 3d mesh.
2) Then things that are complex (like NPCs) have bones created, which essentially give you the ability to move/animate fewer control points. They're virtual, the player never sees them; it beats having to animate every vertex on a complicated 3d mesh directly.
3) So you have a smaller number of control points that you can "animate". You can decide to do forward or inverse kinematics on them. Forward is the most basic, which means you rotate at the joints to make your animation. Inverse is more complicated: you connect multiple joints in a virtual group, and you can move the end point (say the hand), and the rest of the joints below will move to allow that joint to be placed where it needs to be (the elbow and wrist will bend automatically). It's good for walking, as you can keep feet from sliding around. This is the key to your question, though. These "angles" or "positions" in forward or inverse kinematics are "key framed". So the animator will pick key points in time where the animation is right. For an arm moving, you would have a key frame at the beginning of the animation and then one at the end. The engine/animation player will calculate the in-between moves. To add more "flair" you can add as many key frames in between as you want that deviate from the automatic "tween".

What Thunder_monkey is suggesting is that Nintendo opted not to use tweening and hand-animated every frame @ 30fps, and thus the engine can't handle tweening to 60fps. I guess another option is that the source files are "exported" to the engine, which could strip out any tweening and key frame every frame, but this would only be a problem if Nintendo didn't have any of the source animation files.

Tweening can be rudimentary, averaging and whatnot, but it can also be complicated where you can have ease in/ease out and even curves to better control the motion.

My opinion is that animation isn't a factor in why it's 30fps... But I'm just looking from the outside, just doesn't make sense to me. But feel free to prove me wrong.
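The key-frame plus tween workflow described above can be sketched in a few lines. This is a minimal linear tween with illustrative data (real rigs use easing and curves, as the post notes):

```python
def tween(keys, t):
    """Linearly interpolate a keyframed channel at time t.

    keys: sorted list of (time, value) pairs, e.g. joint rotations.
    Times outside the keyed range clamp to the first/last key."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)
    return keys[-1][1]

# A channel keyed at frames 0 and 10 can be sampled at any rate; a 60fps
# player just evaluates it at twice as many points as a 30fps one:
keys = [(0, 0.0), (10, 90.0)]   # e.g. elbow rotation in degrees
print(tween(keys, 5))           # halfway between the keys: 45.0
```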
 
The 70 and 103GB/s figures are based on randomly made up clock frequencies.

Nah, it's just a difference in where 1000 or 1024 was used during the calculations, at least for the 70GB/s figure. :]

550MHz * 16 bytes (4 bytes read + write + colour + depth) * 8ROPs -> 70,400 "MB"/s, (65.5GiB/s), which simply corresponds to a 1024-bit bus using the same 1000 divisor, as you know.
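Spelled out, reading the "16 bytes" as 4 bytes each for colour read, colour write, depth read and depth write per ROP per clock:

```python
clock_hz = 550_000_000
bytes_per_rop = 16   # 4B colour read + 4B colour write + 4B depth read + 4B depth write
rops = 8

bw = clock_hz * bytes_per_rop * rops   # bytes/s
print(bw / 1000**3)   # 70.4 "GB/s" with the decimal divisor
print(bw / 1024**3)   # ~65.6 GiB/s with the binary divisor
```

Same number, two unit conventions; which is the whole 70 vs 65.5 discrepancy.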
 
Nah, it's just a difference in where 1000 or 1024 was used during the calculations, at least for the 70GB/s figure. :]

550MHz * 16 bytes (4 bytes read + write + colour + depth) * 8ROPs -> 70,400 "MB"/s, (65.5GiB/s), which simply corresponds to a 1024-bit bus using the same 1000 divisor, as you know.

Would that be considered a bottleneck?
 
Would this explain why most ports have been quite awful?

Switching a buffer to the DDR3 RAM and not using the larger space available on the WiiU's eDRAM because it's easier/has been commonplace to do so?

Cheers
Just to make sure I was not misunderstood: I was referring to the scheme where the back buffer (i.e. the fb the GPU is currently working on) sits in edram, and the front buffer(s) (i.e. the fb transmitted on the output) sit in main RAM. The BW expense for resolving such a back buffer to main RAM is the size of the color buffer (after any downsampling from FSAA) times the framerate. I.e. for a 720p@60 game, the expense would be 1280 * 720 * 3 * 60 ≈ 158MB/s. Flipper was even smarter there, as it supported on-the-fly conversion to YUV color space during the resolve. Basically, when the system is designed for it, the price can be really minor.
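The resolve-cost arithmetic checks out directly (the 158 figure falls out in MiB/s):

```python
width, height = 1280, 720
bytes_per_pixel = 3    # 24-bit colour after the resolve
fps = 60

resolve_bw = width * height * bytes_per_pixel * fps   # bytes/s
print(resolve_bw / 1024**2)   # ~158.2 MiB/s, tiny next to main-RAM bandwidth
```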
 
That's not what Inverse Kinematics, also known as IK, is. Feet snapping to variable terrain heights is something different. IK is just another way of controlling and setting keys on a skeleton. With Forward Kinematics, or FK, you're setting rotational values on each joint individually, so to move a hand you move and set keys on the shoulder, elbow, and wrist. With Inverse Kinematics you're moving the end effector of a skeletal chain, and it uses math to figure out how those joints need to rotate to place the end of the chain where the end effector is. The mathematical principles and basis for IK were developed by NASA for controlling robot arms.

Character rigs can have all kinds of mixes and matches of FK and IK parts. I've used rigs with FK spines but with the option of FK or IK legs and arms. I've used rigs that are all FK, or even all IK.
You're clearly more in the know regarding that than I am, and that's good because I mean to add to the discussion rather than subtract, but I have to insist:

Inverse kinematics is what I've been hearing for years is used for legs; Virtua Fighter 2 on Saturn, the Zeldas from OoT onwards, etc.

It was also talked about recently because of that Wind Waker retrospective:



Source: http://www.polycount.com/forum/showthread.php?t=104415

Of course Uncharted wasn't the first, so it feels "silly", but precisely due to that, the ensuing comments here on GAF and other places have gone out of their way to show games that used it; none questioned the inverse kinematics method being applied.


Another write-up:

If you’ve ever stopped, looked down at your character’s feet, and thought “wow, that’s some good connection with the ground my character has” then you have Inverse Kinematics to thank for that. Inverse Kinematics, you ask? Well, it’s the opposite of Forward Kinematics! Forward Kinematics is, for the simplest explanation, a process we do mentally every day. We think to ourselves “I’d like my finger to be there, so that I can type the ‘f’ key” and so we move our muscles so that our finger is there. In a computing sense, the idea is that you work out the position of something(like a finger) based on all the previous joints.

Inverse Kinematics, is of course, the opposite. Inverse Kinematics ask “if my finger is here, where would every other relevant part of me be?” Or, in most cases, it asks about characters feet. In fact, their legs were so specific that an early paper outlining Inverse Kinematics in animation referenced legs specifically (Computational modeling for the computer animation of legged figures). Personally, I started thinking about this recently, after noticing the effect in The Legend of Zelda: The Wind Waker, which was a particularly good implementation, especially for when it was written.

(...)

Ocarina of Time featured Inverse Kinematics. Remarkably, OoT wasn’t even the first game to feature IK. (...)
Source: http://www.polycount.com/forum/showthread.php?t=104415
 
I'm on the second sage dungeon in WWHD. The only "slowdown" I've seen is during sword strikes and bomb/cannon explosions, when it pauses for a couple of frames. It's the same effect from the original release... unless I'm just totally missing something.
 
I'm on the second sage dungeon in WWHD. The only "slowdown" I've seen is during sword strikes and bomb/cannon explosions, when it pauses for a couple of frames. It's the same effect from the original release... unless I'm just totally missing something.

Same here. I haven't noticed any distracting slowdown. Perhaps I'm just not as sensitive to it as others. I notice it more in Wonderful 101, but even then it's generally only for a second or two.
 
I played the game all weekend as well. I haven't come across any framerate issues. The intentional contact pause for a sense of tactile combat is very acceptable and welcome as far as I'm concerned.
 
Just to make sure I was not misunderstood: I was referring to the scheme where the back buffer (i.e. the fb the GPU is currently working on) sits in edram, and the front buffer(s) (i.e. the fb transmitted on the output) sit in main RAM. The BW expense for resolving such a back buffer to main RAM is the size of the color buffer (after any downsampling from FSAA) times the framerate. I.e. for a 720p@60 game, the expense would be 1280 * 720 * 3 * 60 ≈ 158MB/s. Flipper was even smarter there, as it supported on-the-fly conversion to YUV color space during the resolve. Basically, when the system is designed for it, the price can be really minor.

What is expected to be, or should be, stored in eDRAM? And if Latte is a 160/176-shader part, would 65 or 70GB/s of bandwidth be enough to render what is expected from Nintendo next year?
 
You're clearly more in the know regarding that than I am, and that's good because I mean to add to the discussion rather than subtract, but I have to insist:

Inverse kinematics is what I've been hearing for years is used for legs; Virtua Fighter 2 on Saturn, the Zeldas from OoT onwards, etc.

It was also talked about recently because of that Wind Waker retrospective:



Source: http://www.polycount.com/forum/showthread.php?t=104415

Of course Uncharted wasn't the first, so it feels "silly", but precisely due to that, the ensuing comments here on GAF and other places have gone out of their way to show games that used it; none questioned the inverse kinematics method being applied, though.


Another write-up:

Source: http://www.polycount.com/forum/showthread.php?t=104415

It's people improperly using the term.

Inverse kinematics refers to the use of the kinematics equations of a robot to determine the joint parameters that provide a desired position of the end-effector.[1] Specification of the movement of a robot so that its end-effector achieves a desired task is known as motion planning. Inverse kinematics transforms the motion plan into joint actuator trajectories for the robot.
The movement of a kinematic chain whether it is a robot or an animated character is modeled by the kinematics equations of the chain. These equations define the configuration of the chain in terms of its joint parameters. Forward kinematics uses the joint parameters to compute the configuration of the chain, and inverse kinematics reverses this calculation to determine the joint parameters that achieves a desired configuration.[2][3][4]
For example, inverse kinematics formulas allow calculation of the joint parameters that position a robot arm to pick up a part. Similar formulas determine the positions of the skeleton of an animated character that is to move in a particular way.

You can see where this would apply in CG. Inverse Kinematics has NOTHING inherent about it that automatically makes feet snap to the terrain. It is, however, the only system you could really use to accomplish that, since forward kinematics wouldn't work for it. Terrain snapping would snap the end effector of a character's foot IK chain, or hand chain in another instance, to the ground. The system actually doing the snapping is something else altogether; Inverse Kinematics is not the system that does that, but that system does USE IK to accomplish what it wants to do.

In the WW image above you're reading the IK use there incorrectly. It's saying that Link's IK, which controls his feet, dynamically adjusts to the terrain; not that the IK itself is making it snap and adjust. It's like saying his boots dynamically adjust to the terrain: they're not the system that's doing that.

You can have an IK system driving any aspect of the character you want, from hands to feet to spine to tail. Without an IK system you wouldn't be able to have feet dynamically adjust and stick to the terrain, but that doesn't mean that's what an IK chain does; it's just one of the benefits of an IK chain.

Hopefully that explains it a bit better. The misunderstanding seems to me to come from people outside of the animation world using terms they've heard developers and animators use improperly. I see it a lot from game journalists.
 
Nah, it's just a difference in where 1000 or 1024 was used during the calculations, at least for the 70GB/s figure. :]

550MHz * 16 bytes (4 bytes read + write + colour + depth) * 8ROPs -> 70,400 "MB"/s, (65.5GiB/s), which simply corresponds to a 1024-bit bus using the same 1000 divisor, as you know.
I've never seen a bus that deals with base 10 figures, though - and as I said, it wouldn't make sense to begin with, as MEM1 is not a dedicated framebuffer. If it's 550MHz, it simply can't be 70GB/s. It's just not possible. Why even entertain the thought? It makes very little sense and is almost certainly wrong.
 
I've never seen a bus that deals with base 10 figures, though - and as I said, it wouldn't make sense to begin with, as MEM1 is not a dedicated framebuffer. If it's 550MHz, it simply can't be 70GB/s. It's just not possible. Why even entertain the thought? It makes very little sense and is almost certainly wrong.

I'm not sure what the problem is? It's just a simple calculation issue between 1024 vs 1000 in terms of converting between kilo/mega/giga.

http://en.wikipedia.org/wiki/Mebibyte

I'm not disagreeing with the 65.5GiB/s at all. It's just a difference in 1000 vs 1024.
 
I assume the original did have them.

I assume StevieP is right and they don't know it's an animation of the game and not a drop.


Nah. The frame rate drops are very real. Famed Wind Waker speed runner Cosmo noted many of them in his 24 hour marathon Twitch session a few days ago. They were quite apparent watching the stream too.

Considering his encyclopedic knowledge of the original, he would know a frame rate drop from an animation.
 
It's people improperly using the term.



You can see where this would apply in CG. Inverse Kinematics has NOTHING inherent about it that automatically makes feet snap to the terrain. It is, however, the only system you could really use to accomplish that, since forward kinematics wouldn't work for it. Terrain snapping would snap the end effector of a character's foot IK chain, or hand chain in another instance, to the ground. The system actually doing the snapping is something else altogether; Inverse Kinematics is not the system that does that, but that system does USE IK to accomplish what it wants to do.

In the WW image above you're reading the IK use there incorrectly. It's saying that Link's IK, which controls his feet, dynamically adjusts to the terrain; not that the IK itself is making it snap and adjust. It's like saying his boots dynamically adjust to the terrain: they're not the system that's doing that.

You can have an IK system driving any aspect of the character you want, from hands to feet to spine to tail. Without an IK system you wouldn't be able to have feet dynamically adjust and stick to the terrain, but that doesn't mean that's what an IK chain does; it's just one of the benefits of an IK chain.

Hopefully that explains it a bit better. The misunderstanding seems to me to come from people outside of the animation world using terms they've heard developers and animators use improperly. I see it a lot from game journalists.
True, but if it doesn't get explained/debunked it will just keep being used. I haven't taken it all in just yet (it's too technical to just click in my brain), but I'll take your word for it and I'm gonna read more on the subject as soon as I can.

What would be the proper name for it then? Skeletal animation rather than IK? Not necessarily a technical term but something that isn't improper to say when referring to that.


Nevertheless, what I meant in my original post is that this kind of animation/positioning in relation to terrain is clearly procedural.
 
The problem is that I've never, ever seen a base 10 bus. UX8LD eDRAM certainly isn't base 10.
It's not the bus width, it's the 2^10 vs 10^3 discrepancy in the kilo/mega/giga units.

Code:
$ echo "scale=2; 550 * 10^6 * 1024 / 8 / 1000^3" | bc
70.40
In the above 1024 is the bus width, and 1000^3 is a GB, vs 1024^3 in your calculation.
 
It's not the bus width, it's the 2^10 vs 10^3 discrepancy in the kilo/mega/giga units.

Code:
$ echo "scale=2; 550 * 10^6 * 1024 / 8 / 1000^3" | bc
70.40
In the above 1024 is the bus width, and 1000^3 is a GB, vs 1024^3 in your calculation.
Didn't realize 70GB is just base 10 - but nobody uses base 10. Outside of hard disk manufacturers, of course (and Apple nowadays). When we're talking Xbone or PS4 bandwidth, it's all gibibytes as well.
 
Well, seeing as Nintendo place so much emphasis on the eDRAM, they must surely have gone with 131GB/s; it would make no sense to bottleneck it if they consider it such an important part.

Exactly, why invest so much in eDRAM when they could have gone with 2 gigs of GDDR5 and be done with it?
 
Yeah, but nobody uses base 10. Outside of hard disk manufacturers, of course. When we're talking Xbone or PS4 bandwidth, it's all gibibytes as well.
Anand uses base 10: http://www.anandtech.com/show/6972/xbox-one-hardware-compared-to-playstation-4/3

BTW, when I first used the 70.4 figure that was an honest mistake on my part - I just did the MB computation as 550 * 1024 / 8, which effectively equates MB to 10^6. I didn't bother to correct that afterwards as everybody else (except you ; ) was doing the same 'mistake' (see Anand's link for reference).
 
Didn't realize 70GB is just base 10 - but nobody uses base 10. Outside of hard disk manufacturers, of course (and Apple nowadays). When we're talking Xbone or PS4 bandwidth, it's all gibibytes as well.


Well... actually, 102.4GB/s -> 800MHz * 128bytes per clock.

Similarly, 176GB/s -> 5.5GHz * 32bytes per clock...

It's not that uncommon really. Honestly, I don't know why there's such a big fuss anyway as long as it's understood... *shrug* There are probably more important matters to discuss than this... surely. :(
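Those quoted figures all follow the same clock × bytes-per-clock pattern with a decimal divisor; a quick sketch (the bus-width glosses in the comments are the usual readings of those numbers, not from this thread):

```python
def bandwidth_gb(clock_hz, bytes_per_clock):
    """Peak bandwidth in decimal GB/s: clock rate times bytes moved per clock."""
    return clock_hz * bytes_per_clock / 1000**3

print(bandwidth_gb(800e6, 128))   # 102.4 (e.g. a 1024-bit bus at 800MHz)
print(bandwidth_gb(5.5e9, 32))    # 176.0 (e.g. a 256-bit bus at 5.5Gbps effective)
```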

Would that be considered a bottleneck?

Outside of MSAA, it's pretty much the worst-case scenario for 8 ROPs @ 550MHz. Things are a bit trickier with MSAA because there's MSAA compression, so it's a bit unclear how much bandwidth one needs on average.

I'd be pleasantly surprised if they did take care of the worst-case MSAA scenario, but I'm not sure that's something they'd do as there are implications to HW I/O overhead and power consumption (aside from expectations on developer usage as there are storage costs to consider or even shading per sample for the correct result).

That doesn't make MSAA impossible (e.g. Call of Duty or other forward renderers), and they certainly don't need to apply AA at all stages of the rendering pipeline (e.g. transparencies/blending, post-processing).
 
Correct, that pause is intentional during the combat, and the instruments reflect it.

You are just assuming they are talking about that when they talk about the frame rate drops in the review though. They probably aren't. They are probably talking about the real ones, which are not infrequent.

Watch Cosmo's 24-hour marathon WW Twitch stream and note how many times he mentions it. In some areas, like Forsaken Fortress 2, he talks about them with some frequency, from the time he bombs down the gate, through the fight with Phantom Ganon, and into the traversal after the rescue cutscene; that's like three instances where he calls out the frame rate drops in less than 10 minutes.

Someone with Cosmo's level of knowledge would know the difference. I can't imagine how many hundreds if not thousands of hours he's put into WW.
 
Maybe because Sony's idea wasn't all that brilliant after all?
I'm not sure whether it's intentional or not, but that reads like a weird fanboy sneer founded on, well, nothing. Both memory architectures have their merits, so what's the dig for?

We'll probably find out within 12 months after launch of the Xbox One/PS4 which approach developers prefer anyways. If one is difficult to work with, people will complain -- they did so vocally and often with the PS2 and PS3.
 
and that would have broken backwards compatibility

If the memory set-up would have broken backwards compatibility, they could have simply added that small pool of RAM and eDRAM the Wii had onto the MCM.

The same could be said for the XBONE. I wonder what the reasoning was behind these choices; if Sony did it, why not the competitors?

I thought the reasoning was that MS is relying heavily on non-game applications to sell the XB1, and GDDR5 would not be the best choice for that.
 
The same could be said for the XBONE. I wonder what the reasoning was behind these choices; if Sony did it, why not the competitors?

The machine was designed with 8GB of RAM in mind at the outset, and it was a necessity to design it this way. The vision of the machine involved heavy multitasking/multipurpose use, and this was the only certain way of achieving it during its design stages.

Sony's machine was designed with 2GB in mind. They simply lucked out.
 
Not chiming in for anything, but I don't think it was that brilliant on Sony's part; and I don't mean from a technical standpoint; the jury is still out on that one, and I believe they made the improvements they thought they needed to negate possible downsides of the choice.

I don't think it's all that clever for a few reasons. Sony tends to shoot for the stars; the PS Vita was lauded as a change in hardware design approach by Sony because it dropped past proprietary tendencies like the Emotion Engine and Cell, or even the PSP's proprietary GPU+Media Engine; they went OEM. But that's only half the equation, as they didn't drop the "so cutting edge it's as damn powerful as it's possible to make" premise, and that premise is not only outdated, it never amounted to much when it came to consoles.

That clearly came back to bite them in the ass with the Vita, as it did with the PSP (initially very expensive) and the PS3 (losing as much as $300 per unit during the launch window). Sony thinks people will simply see the value, but value comes with software; something Sony ultimately delivers on home consoles, and rarely on portables; but I digress. They put themselves in difficult positions, and they didn't really change any of that with the PS4.


The Vita would probably have been a checkmate if it were something akin to the PSP Go, with half the PS Vita's performance (dual-core A9 + SGX543MP2 and half the screen resolution; smaller form factor and price, and still more powerful than the 3DS); instead, they over-engineered and went up to the next "development cost" step; this after doing the same with the PSP a few years back. They left Nintendo with the now-accepted-as-necessary PSP-level development costs and soared higher; Nintendo should be thankful.


The PS4 needlessly put itself in a difficult position for two reasons. First is obviously the price proposition; guts alone, it's hands down the most expensive upcoming platform to produce. Second, DDR3 benefits from being the most produced type of RAM; it's not cutting edge, it's cheap, it's produced by loads of companies, and it has stock. GDDR5 in the "cutting edge" double-density variation really slims down production availability, and it leaves Sony with Hynix, Samsung and Elpida.

This is an improvement over the Rambus RAM and XDR situations, but it's not that much better. And that was proven this month, actually, when a Hynix factory caught fire and GDDR5 prices climbed 15% overnight.

Thankfully production capacity wasn't severely impaired, just delayed; otherwise Sony really could have had a nightmare on its hands.


Sony's PS4 was a risky proposition; it just benefited from Microsoft apparently being infinitely clueless. Had the response been "okay, our platform's GPU is less powerful and we use DDR3, but hey, we are totally-not-adding-shitty-Kinect-bullcrap nor HDMI-in shenanigans just-so-you-can-see-TV-on-the-console-without-switching-channels, and *thus* the console is $150/$200 cheaper than the PS4", we'd all be saying "touché" and tough luck on Sony's account.

I'm glad it worked out for them, I really am, but I don't think they're any less silly for putting themselves in that potential situation; it was just lucky that Microsoft was that stupid.
 
True, but if it doesn't get explained/debunked it will just keep being used. I haven't taken it all in just yet (it's too technical to just click in my brain), but I'll take your word for it and I'm gonna read more on the subject as soon as I can.

What would be the proper name for it then? Skeletal animation rather than IK? Not necessarily a technical term but something that isn't improper to say when referring to that.


Nevertheless, what I meant in my original post is that this kind of animation/positioning in relation to terrain is clearly procedural.

I would imagine the name for it would depend on the team and engine. I don't know if I would technically call it procedural. What most of them most likely do is just take the terrain height at the foot's position; call it A (assuming Y is up, as it is in most 3D apps, though that's not always the case across engines). Then, from the foot's start or end frame of the loop until the foot lifts into the air, its new Y value is just Y +/- A.

I don't know if I would call that procedural, because the walking animation is still based on already-established keys; it's just offsetting that data. Though I guess some of that depends on the engine/programmers, and what kind of blending they're doing.
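The offset scheme described above can be sketched in a few lines. This is a hypothetical illustration, not any engine's actual code: `terrain_height` stands in for a heightmap query, and the authored walk-cycle key still drives the motion; we only shift it by the terrain height A while the foot is planted.

```python
# Hypothetical sketch of the foot-offset idea: shift the keyframed foot
# height by the terrain height sampled under the foot, rather than
# generating the motion procedurally.

def terrain_height(x, z):
    """Stand-in terrain query; a real engine would sample a heightmap."""
    return 0.25 * x  # a simple slope, for illustration only

def adjusted_foot_y(keyframed_y, foot_x, foot_z, foot_planted):
    """Offset the authored foot Y by terrain height A while the foot is
    planted; leave the authored animation untouched while it's in the air."""
    a = terrain_height(foot_x, foot_z)
    return keyframed_y + a if foot_planted else keyframed_y

# The authored walk cycle still drives the motion; we only offset it.
print(adjusted_foot_y(0.0, 4.0, 0.0, True))   # planted foot follows terrain
print(adjusted_foot_y(0.3, 4.0, 0.0, False))  # airborne foot unchanged
```

In practice an engine would also blend the offset in and out around the plant/lift frames, which is where the "what kind of blending they're doing" question comes in.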
 
Although it is off-topic, while the 8GB of DDR3 is much cheaper than 8GB of GDDR5, I believe Microsoft's chip is more expensive to fab than Sony's for a few reasons. I think they will both be taking a loss, but Sony moreso obviously.
 
I would imagine the name for it would depend on the team and engine. I don't know if I would technically call it procedural. What most of them most likely do is just take the terrain height at the foot's position; call it A (assuming Y is up, as it is in most 3D apps, though that's not always the case across engines). Then, from the foot's start or end frame of the loop until the foot lifts into the air, its new Y value is just Y +/- A.

I don't know if I would call that procedural, because the walking animation is still based on already-established keys; it's just offsetting that data. Though I guess some of that depends on the engine/programmers, and what kind of blending they're doing.
Points taken; I've said this plenty of times here (usually to validate some points I'm making), but I'm trained to do 3D modeling in various software suites... Having recently finished my degree, I'm not employed yet, so I can't say it's my job (yet). I don't usually delve into animation, since my degree is in industrial design, so I mostly do inanimate stuff like objects and buildings/spaces.

Nevertheless I'm interested in understanding as much as possible about it, as I do feel it's a professional obligation that just happens to overlap videogames and CG. And one never knows what he might end up doing, truth be told.

Anyway, I do feel like I should be grasping this way better than I am, even if, as you say, concepts between offline rendering and real-time implementations can vary quite a bit; not getting the CG side of things is honestly frustrating me right now, but I'd obviously rather stand corrected than insist on being wrong. Thanks for the heads up. ;)
Although it is off-topic, while the 8GB of DDR3 is much cheaper than 8GB of GDDR5, I believe Microsoft's chip is more expensive to fab than Sony's for a few reasons. I think they will both be taking a loss, but Sony moreso obviously.
Due to the eDRAM?

I reckon the total DDR3+eDRAM cost is probably still lower than GDDR5, especially in the beginning, not just in chip prices but also the PCB (fewer pins; DDR3 also has higher density, so perhaps fewer chips too); and the XBone GPU should already be compensating for that added silicon/chip cost, seeing as, memory aside, it has fewer transistors than the PS4's.

Both consoles will cost the same to the end customer, but Kinect is costing Microsoft $150 if the news reports are accurate. Had they used that money to go lower on price instead of imposing a novelty nobody wants, Sony would be feeling a lot more heat. That was my point: the 8 GB GDDR5 scenario being viable, and not having been met with a checkmate, is a fluke.


I'm really happy for Sony, I'm not gonna be buying any of those platforms at launch... I don't even have a Wii U yet, but I really feel Sony needed to do well after PS3, Vita, and PSP (to some extent); I just think they're gonna reap benefits out of others' ineptitude to cash in on their competitive advantages (namely Microsoft's).

And that's never a good position to be in: if Microsoft flunks this gen, they're bound to get smarter next time around (just like they did with the X360), and Sony is too formulaic (actually, all three are pretty formulaic at this point), so they're bound to pin future PS4 success stories on their own choices; I think that'll be misplaced on their part.
 
Exactly, why invest so much in eDRAM when they could have gone with 2 gigs of GDDR5 and be done with it?

It's presumably a matter of cost. Assuming 130GB/s for the eDRAM, the bus would need to be 192-bit wide to match bandwidth with an accordingly more complex PCB.
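The 192-bit figure checks out with some back-of-the-envelope math. Assuming (my numbers, not the poster's) a GDDR5 per-pin data rate of 5.5 Gbps, common for that era:

```python
# Back-of-the-envelope check: how wide must a GDDR5 bus be to match
# ~130 GB/s of eDRAM bandwidth? Both figures below are assumptions.

EDRAM_BW_GBPS = 130.0   # target bandwidth in GB/s (from the post above)
PIN_RATE_GBPS = 5.5     # assumed GDDR5 per-pin data rate in Gbit/s

bits_needed = EDRAM_BW_GBPS * 8 / PIN_RATE_GBPS      # ~189 bits
# Round up to the next multiple of 32 (GDDR5 chips have 32-bit interfaces):
bus_width = -(-int(bits_needed) // 32) * 32          # -> 192 bits
print(bus_width, bus_width / 8 * PIN_RATE_GBPS)      # 192-bit -> 132.0 GB/s
```

So a 192-bit bus at 5.5 Gbps lands at 132 GB/s, just over the assumed eDRAM figure, at the cost of a more complex PCB and more memory chips.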
 
Just to make sure I was not misunderstood: I was referring to the scheme where the back buffer (i.e. the fb the GPU is currently working on) sits in eDRAM, and the front buffer(s) (i.e. the fb transmitted on the output) sit in main RAM. The BW expense for resolving such a back buffer to main RAM is the size of the color buffer (after any downsampling from FSAA) times the framerate. I.e. for a 720p@60 game, the expense would be 1280 * 720 * 3 * 60 = 158MB/s. Flipper was even smarter there, as it supported on-the-fly conversion to YUV color space during the resolve. Basically, when the system is designed for it, the price can be really minor.

Thanks for this; I've misunderstood quite a few things!

Regards
 
The machine was designed with 8GB of RAM in mind from the outset, and it was a necessity to design it this way. The vision for the machine involved heavy multitasking/multipurpose use, and this was the only certain way of achieving it during the design stages.

Sony's machine was designed with 2GB in mind. They simply lucked out.

You mean 4GB?

Saw this cross quoted in another thread. Funny im having this same design debate in another thread.

I agree Sony got very lucky with GDDR5 and MS got unlucky with DDR4.
 