Rumored Chinese forum Xbox 720 specs: 8-core CPU, 8GB RAM, HD 8800 GPU, W8, 640GB HDD

1080p is all you need! When 4K is right around the corner.
32MB of ram is all anyone will need! No one will ever use more than that.

Stereo 1080p or 4k.

Looking at that whole conversation, I'm not sure why aegies is discouraging comparison of the GPUs by pointing at that slide, or by suggesting the numbers being put out about Durango and Orbis are as airy-fairy as those in that slide.

The raw teraflop numbers for either console aren't going to tell you much about what they're actually capable of.
 
DDR3 on a 384-bit bus would be crazy [~quad the speed of the X360's main RAM pool]... but the math is correct.
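For anyone who wants to check that math, here's a minimal sketch of the usual back-of-envelope bandwidth formula. The DDR3-2133 transfer rate and the ~22.4GB/s figure for the X360's main pool are assumptions used purely for illustration, not confirmed specs.

```python
# Back-of-envelope memory bandwidth: (bus width in bytes) * (transfers per second).
# DDR3-2133 and the X360's ~22.4 GB/s main pool are illustrative assumptions.

def peak_bandwidth_gb_s(bus_width_bits: int, transfers_per_sec: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return (bus_width_bits / 8) * transfers_per_sec / 1e9

ddr3_384bit = peak_bandwidth_gb_s(384, 2133e6)  # ~102 GB/s
ddr3_128bit = peak_bandwidth_gb_s(128, 2133e6)  # ~34 GB/s
x360_main_pool = 22.4                           # GB/s, 128-bit GDDR3 @ 1400 MT/s

print(f"384-bit DDR3-2133: {ddr3_384bit:.1f} GB/s (~{ddr3_384bit / x360_main_pool:.1f}x X360 main pool)")
print(f"128-bit DDR3-2133: {ddr3_128bit:.1f} GB/s")
```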

As for ESRAM: theoretically it's slower than eDRAM, but it has better latencies. Maybe it will have 102GB/s after all...




I know, I'm just posting "math" that fits. I fully expect 128-bit DDR3.

ESRAM is faster than eDRAM but more expensive, which is why eDRAM is used more.
 
It's not even eDRAM, apparently. It's the sister; yeah, I don't know if she's any hotter.

I genuinely lol'd.

Btw, can anyone be kind enough to explain in layperson's terms what exactly the function of ESRAM or eDRAM is? I mean, is it the last stop for rendering data before it actually gets shown on screen, with the empty ESRAM/eDRAM being refilled with the next frame's final rendering data every 30th or 60th of a second? Does it mean that the memory consumed by each frame of an image, including mesh data, physics data, animation data, lighting data and texture data, can amount to 32MB at most, ergo resolution and AA solutions would have to be adjusted to ensure staying within that limit?
 
If I had to guess, this won't be UMA. OS RAM will be OS RAM (2GB of slow DDR3? Maybe for Kinect and the OS), and app RAM will be app RAM (6GB of faster DDR3?).

Everything points to UMA, though. Why add complexity? Actually, maybe this is a stupid question, but is there any reason for them to not simply use DIMMs for the main RAM pool?
 
Well, was anybody expecting more than 1080p? Sounds like that res will be the ceiling, at least for big games.
Well, I think that's enough for 2xMSAA "for free," but I doubt it would be used for AA; instead it'd just be for getting timely renders out without bogging down the system.


Well, I believe you, but you could explain, MAN!
I don't know how they'd add up, but the ESRAM would be what communicates with the CPU/GPU, while the DDR3 would be communicating with the CPU and transferring data to the ESRAM. So you can't just add those two numbers together.
 
You could've left the sarcasm out.
The point here is:

Why settle for something that should be standard in 2013? 1080p @ 60fps should be the minimum going forward, or the console will already be yesterday's news in 2014. And I don't mean that in the "something is always going to be better" sense, because in technology it's pretty much already outdated as soon as you buy it.

Stereo 1080p or 4k.
Ideally, this is what we need to be shooting for.
 
The point here is:

Why settle for something that should be standard in 2013? 1080p @ 60fps should be the minimum going forward, or the console will already be yesterday's news in 2014. And I don't mean that in the "something is always going to be better" sense, because in technology it's pretty much already outdated as soon as you buy it.


Ideally, this is what we need to be shooting for.

It won't happen. Most developers favored graphics over gameplay this gen.
 
The raw teraflop numbers for either console aren't going to tell you much about what they're actually capable of.

These numbers are not like the '2 TFLOP' PS3 number, though; I wouldn't dismiss them as similar. They are a reflection of programmable resources, of what will actually run the shaders programmers write. Nothing in isolation tells you anything about performance, but that part of a GPU is responsible for the performance of a large part of the render pipeline. If we are comparing GCN compute units to GCN compute units, or even, to a slightly lesser extent, GCN to GCN2, those parts of the GPUs will be more directly comparable than in any two console GPUs before.
 
Not considering bottlenecks... I guess 170GB/s looks very, very close to 192GB/s. On paper.

Where is that pic of the Princess Bride guy? You can't have one pool read/write into a smaller one and add their bandwidths together. If you did it right, you could get to 102GB/s by not stalling it with the 64GB/s pool.
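To put rough numbers on why the two pools don't simply add, here's a toy sketch. The 102GB/s and 68GB/s figures are the rumored ones, and the assumption that a chunk of the fast pool's traffic first has to be staged in over the slow pool is purely illustrative, not a claim about how Durango actually moves data.

```python
# Toy model: why two pools' bandwidths don't just sum to a usable total.
# Illustrative assumption: a fraction of the ESRAM traffic must first be copied
# in from DDR3 during the same frame, and that copy eats into the DDR3 budget.

ESRAM_BW = 102.0  # GB/s (rumored fast on-chip pool)
DDR3_BW = 68.0    # GB/s (rumored main pool)

naive_sum = ESRAM_BW + DDR3_BW  # the headline "170 GB/s"

for staged_fraction in (0.0, 0.25, 0.5):
    staging_traffic = staged_fraction * ESRAM_BW
    usable_ddr3 = max(0.0, DDR3_BW - staging_traffic)
    print(f"staged {staged_fraction:.0%}: usable total ~{ESRAM_BW + usable_ddr3:.0f} GB/s "
          f"(naive sum {naive_sum:.0f} GB/s)")
```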
 
Really dumb question, but tessellation: will the Orbis have anything similar to it?

The Orbis GPU fully supports DX11-class features. Obviously Sony will use OpenGL instead, but it's the same thing.

Yes, since they both seem to be based on the same line of GPUs.

Isn't Durango's GPU GCN2? DX 11.1 or whatever.

The point here is:

Why settle for something that should be standard in 2013? 1080p @ 60fps should be the minimum going forward, or the console will already be yesterday's news in 2014. And I don't mean that in the "something is always going to be better" sense, because in technology it's pretty much already outdated as soon as you buy it.

That just shows that you don't know anything about game development. Do you think that a GTX 680 on the PC runs the latest game at 60 FPS because the game was designed for it?

No. High-end cards on PC run games at that speed because, effectively, they are ahead of what the game is trying to do.
 
I genuinely lol'd.

Btw, can anyone be kind enough to explain in layperson's terms what exactly the function of ESRAM or eDRAM is? I mean, is it the last stop for rendering data before it actually gets shown on screen, with the empty ESRAM/eDRAM being refilled with the next frame's final rendering data every 30th or 60th of a second? Does it mean that the memory consumed by each frame of an image, including mesh data, physics data, animation data, lighting data and texture data, can amount to 32MB at most, ergo resolution and AA solutions would have to be adjusted to ensure staying within that limit?

How about very basic:
It's like a memory supercharger, allowing very small, super-fast bursts of speed to the output. If used correctly and wisely, it lets additional effects be applied to the image before the frame hits the screen.
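To put the 32MB in context, here's a rough sketch of what common 1080p render targets cost. The formats and buffer counts are assumptions for illustration only, not anything confirmed about how Durango games will actually lay out the ESRAM.

```python
# Rough render-target sizes against a 32 MB on-chip budget.
# 32-bit color and depth formats are illustrative assumptions.

WIDTH, HEIGHT = 1920, 1080
MB = 1024 * 1024

def target_mb(bytes_per_pixel: int, samples: int = 1) -> float:
    """Size of one full-screen render target in MB."""
    return WIDTH * HEIGHT * bytes_per_pixel * samples / MB

color = target_mb(4)                 # ~7.9 MB
depth = target_mb(4)                 # ~7.9 MB
color_2x = target_mb(4, samples=2)   # ~15.8 MB
depth_2x = target_mb(4, samples=2)   # ~15.8 MB

print(f"1080p color + depth, no AA:  {color + depth:.1f} MB of 32 MB")
print(f"1080p color + depth, 2xMSAA: {color_2x + depth_2x:.1f} MB of 32 MB")
```

The point being: only the render targets (and whatever intermediates a developer chooses) need to squeeze into the 32MB; textures, meshes, animation and the rest stay in main RAM, at least in the X360-style setup.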
 
I just really hope E3 is exciting. I remember being glued to gametrailers.com during E3 2006. Every night I'd download all of the videos and just watch them for hours on end. I was fucking blown away by Rainbow Six Vegas, and that game ended up totally delivering for me (unfortunately, the sequel did not).
 
Does it go for both PS4 and XB3?

It would go for anything, really.

TF numbers on modern GPUs just tell you the raw throughput the compute shaders in a GPU are capable of crunching at 100% efficiency, at the rated clock speed, and in single precision. They tell you nothing else. This is why going by TF score isn't the best thing to do. It gives you a general idea of how powerful the GPU can be, but that's about it. It's pretty vague. You can have a GPU with a high TF rating be outperformed by one with a lower rating in games because other factors come into play, like architecture efficiency, memory bandwidth, more advanced shader models, better tessellation engines, etc.
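As a concrete example of where a TF number comes from, here's a minimal sketch. The shader counts and the 800MHz clock are just the rumored figures, used purely as an illustration.

```python
# Peak single-precision throughput: shaders * 2 ops/cycle (fused multiply-add) * clock.
# Shader counts and clock are rumored figures, used only as an illustration.

def peak_tflops(shader_count: int, clock_ghz: float) -> float:
    return shader_count * 2 * clock_ghz / 1000.0

print(f"768 shaders  @ 800 MHz: {peak_tflops(768, 0.8):.2f} TFLOPS")   # ~1.23
print(f"1152 shaders @ 800 MHz: {peak_tflops(1152, 0.8):.2f} TFLOPS")  # ~1.84
```

That's a 100%-utilization ceiling; none of the other factors listed above show up in it.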
 
I just really hope E3 is exciting. I remember being glued to gametrailers.com during E3 2006. Every night I'd download all of the videos and just watch them for hours on end. I was fucking blown away by Rainbow Six Vegas, and that game ended up totally delivering for me (unfortunately, the sequel did not).

Ya, the sequel was a $60 map pack for the first game. I traded the game back in two days later.
 
How about very basic:
It's like a memory supercharger, allowing very small, super-fast bursts of speed to the output. If used correctly and wisely, it lets additional effects be applied to the image before the frame hits the screen.

So the final render need not go through the eDRAM/ESRAM; instead it can hold post-processing effects, sync with the final renders, and simultaneously bequeath those effects to them as they come on screen. Is my dumb ass close?

Also, thank you for taking the pains to explain to me. I really appreciate it.
 
Sorry to disappoint, no ninjas here.
I had a feeling Jehova was lying, and I did what I did to expose him.
Next time, don't believe random dudes.
How does this work?
You've told him his source was in trouble. If he was lying and he doesn't have any source, how could your message be effective in any way?

Unless I'm missing something, it seems more believable the info was indeed from some "legit" source (but the info could be wrong, anyway).
 
1080p is all you need! When 4K is right around the corner.
32MB of ram is all anyone will need! No one will ever use more than that.
1080p with 4xSSAA requires about 4x the pixel processing of plain 1080p, i.e. roughly the same as 4K with no AA, and has better IQ in all likelihood. Let's get there first before we worry about bumping up the resolution.
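For what it's worth, the shaded-sample counts come out essentially identical, assuming 4K means UHD (3840x2160):

```python
# Shaded samples: 1080p with 4x supersampling vs. native UHD "4K" with no AA.
samples_1080p_4xssaa = 1920 * 1080 * 4  # 8,294,400 shaded samples
pixels_4k_uhd = 3840 * 2160             # 8,294,400 pixels

print(samples_1080p_4xssaa == pixels_4k_uhd)  # True: same shading workload
```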
 
How does this work?
You've told him his source was in trouble. If he was lying and he doesn't have any source, how could your message be effective in any way?

Unless I'm missing something, it seems more believable the info was indeed from some "legit" source (but the info could be wrong, anyway).

Now that you think about it... yeah, you're right. Explain yourself, deanos.
 
I don't think 4k gaming is something you're going to see much of. Not natively, anyway. 4K is next gen's 1080p.

So Sony will support it more? (I'm sure they would anyway... nothing major by any means.) I wonder if they can manage GT6 in 4K at 60fps, at least as an option... Surely the PS4 is at least more powerful than four PS3s networked together.
 
So Sony will support it more? (I'm sure they would anyway... nothing major by any means.) I wonder if they can manage GT6 in 4K at 60fps, at least as an option... Surely the PS4 is at least more powerful than four PS3s networked together.

GT6 at 4K is possible. But would graphics fidelity in that mode be compromised?
 
So Sony will support it more? (I'm sure they would anyway... nothing major by any means.) I wonder if they can manage GT6 in 4K at 60fps, at least as an option... Surely the PS4 is at least more powerful than four PS3s networked together.

What? GT6, a graphics powerhouse running at 4K?

Such a big bump in res will have to be traded for a loss of detail in the game world.
 
These numbers are not like the '2 TFLOP' PS3 number, though; I wouldn't dismiss them as similar. They are a reflection of programmable resources, of what will actually run the shaders programmers write. Nothing in isolation tells you anything about performance, but that part of a GPU is responsible for the performance of a large part of the render pipeline. If we are comparing GCN compute units to GCN compute units, or even, to a slightly lesser extent, GCN to GCN2, those parts of the GPUs will be more directly comparable than in any two console GPUs before.

As far as I know, Orbis' compute units are GCN, not a GCN2 derivative.
 
What? GT6, a graphics powerhouse running at 4K?

Such a big bump in res will have to be traded for a loss of detail in the game world.

How big of a drop?

[image: Virtua Racing screenshot]


I was thinking alpha effects could be compromised.
 
What? GT6, a graphics powerhouse running at 4K?

Such a big bump in res will have to be traded for a loss of detail in the game world.

GT6 at 4K is possible. But would graphics fidelity in that mode be compromised?

Of course it would. Can't run full models like that at 4k. GT5 at 4k is beautiful though, so I can't imagine PD implementing a "lite" render mode in 4k that is more of an upgraded GT5 (graphically).


Exactly. My logic here is that surely the PS4 is more powerful than four PS3s.
 
I don't think 4k gaming is something you're going to see much of. Not natively, anyway. 4K is next gen's 1080p.

Well, the trend is games like Temple Run and Angry Birds being a lot more popular than games with over-the-top graphics, and with the newer consoles having better NUI that could emulate touch controls, I think we will see a lot of games that could support 4K.
 
As far as I know, Orbis' compute units are GCN, not a GCN2 derivative.

I think I covered that in my post.

If they're both GCN the numbers would be apples-to-apples as representations of the shading resources on each chip. If one is GCN2 we'll have to look inside the compute units and see what's changed. But they'd still be more easily comparable than any prior pair of console systems on this front.
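A minimal sketch of that apples-to-apples comparison, assuming both chips use standard GCN compute units (64 lanes, 2 ops per cycle). The CU counts and clock below are the rumored figures, used only as an illustration.

```python
# If both GPUs use the same GCN compute unit, the TFLOPS ratio reduces to the
# ratio of (CUs * clock), i.e. a direct comparison of shading resources.
# CU counts and clock are rumored figures, used purely as an illustration.

def gcn_peak_tflops(compute_units: int, clock_ghz: float) -> float:
    return compute_units * 64 * 2 * clock_ghz / 1000.0

a = gcn_peak_tflops(12, 0.8)  # ~1.23 TFLOPS
b = gcn_peak_tflops(18, 0.8)  # ~1.84 TFLOPS
print(f"{b / a:.2f}x the peak shader throughput")  # 1.50x, same as the 18:12 CU ratio
```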
 