VGLeaks: Details multiple devkits evolution of Orbis

You should check real world data. Increased bandwidth has a definite performance impact that can be exploited. I will simply leave the data below:

[image: gpu-memory-perf-3.png]


As you can see in the top two lines (and all the rows shaded in blue), the HD5850 has higher theoretical performance in every respect except memory bandwidth, which is halved (since its memory interface width is halved), yet the HD4870 performs better in 3DMark despite being theoretically weaker. The Orbis GPU is faster than the HD4870, and you are telling me it is unable to exploit the extra bandwidth? Needless to say, I am extremely skeptical.

http://forums.anandtech.com/showthread.php?t=2243171
 
did developers get their kits earlier this time around compared to ps3/360?
 
True, but that is when we are dealing with 7GB games. Next gen we could have 50GB games with amazing textures as well as other data. I can see them using 4+ GB of data, especially for large games with uncompressed audio and textures.

I've wondered about this. Then again, your thoughts seem to be the Orbis' worst case scenario vs the rumored Durango. Are there no pros at all in using GDDR5 then?
 
True, but that is when we are dealing with 7GB games. Next gen we could have 50GB games with amazing textures as well as other data. I can see them using 4+ GB of data, especially for large games with uncompressed audio and textures.

But once again, you are not going to be loading all the data at once. It is the same way that a 50GB BD for MGS4 worked on a system with around 470MB of RAM (including less than 256MB of VRAM). And IIRC, the PS3 has already done uncompressed audio.

It is about managing assets within two specified parameters, resolution and framerate, and as mentioned, both of them by today's standards consume less than what Cape Verde (1GB), let alone the PS4, has to offer (unless you are pushing a very high amount of resource-intensive AA, detail, etc.). The biggest consumer of RAM and bandwidth would probably be the next gen lighting system.

While you can never truly have enough RAM, ~3.5GB of GDDR5 is, at this point, more than sufficient for the expected results. As the bar rises over the years, resolution would be the first thing to be sacrificed. By their twilight years, say 2018-19, the XB3 and PS4 will show what they were meant to do. By then another generation, with another set of requirements and subsequent compromises, will start everything over again.

that math doesn't add up for me. or did you mean GB/s?

Indeed.
 
http://www.overclockers.com/forums/showthread.php?t=715660

considering the 7870 has much more raw GPU performance, the gain from increasing the memory clock is minimal. As I said, it gets to a point where memory gives very little return.

From what you said, you suggest that a memory clock increase would lead to diminishing returns, but that is not what the data in your post suggests:

1200MHz / 1300MHz
min 46 - max 87 - avg 60.992
1200MHz / 1400MHz
min 48 - max 88 - avg 62.584
1200MHz / 1500MHz
min 50 - max 92 - avg 64.486

The first increment gives an FPS increase of 1.59.
The second increment gives an FPS increase of 1.902. So not only is there no indication of diminishing returns, it shows the exact opposite. Based on your data, the performance may in fact scale linearly with bandwidth. A doubling of bandwidth, assuming the relationship is linear, would therefore give an increase of roughly 20.8 fps, about a 30% gain in performance, completely within expectations.
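For what it's worth, the arithmetic can be redone in a few lines of Python (a naive linear extrapolation over the quoted 7850 averages, not a real performance model):

```python
# Quoted 7850 results: memory clock (MHz) vs average fps.
mem_clocks_mhz = [1300, 1400, 1500]
avg_fps = [60.992, 62.584, 64.486]

# fps gained per 100 MHz memory clock step
gains = [round(b - a, 3) for a, b in zip(avg_fps, avg_fps[1:])]
print(gains)  # [1.592, 1.902]

# Naive projection: doubling the memory clock (1300 -> 2600 MHz)
# at the first step's rate of improvement.
doubling_gain = gains[0] * (2600 - 1300) / 100
print(round(doubling_gain, 1))  # 20.7 fps
```

Obviously three data points cannot confirm linearity, but over this range the gain per step is not shrinking.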

In fact, I would like to mention that a stock 7850's bandwidth starts at ~150GB/s, and with the clock increase above it would end up at ~187GB/s, which is in the ballpark of Orbis' rumoured specs. This implies that the bandwidth of the GDDR5 in Orbis is not so high as to be unexploitable. The performance difference will only grow more stark as next gen engine feature sets grow and bandwidth requirements increase.
 
From what you said, you suggest that a memory clock increase would lead to diminishing returns, but that is not what the data in your post suggests:



The first increment gives an FPS increase of 1.59.
The second increment gives an FPS increase of 1.902. So not only is there no indication of diminishing returns, it shows the exact opposite. Based on your data, the performance may in fact scale linearly with bandwidth. A doubling of bandwidth, assuming the relationship is linear, would therefore give an increase of roughly 20.8 fps, about a 30% gain in performance, completely within expectations.

The FPS is barely increasing with bandwidth; unless you are proposing that you can get FPS without bandwidth, it means it has passed saturation and the small variations are within the margin of error.

by your logic we would still have 50% performance at 0 bandwidth.
 
In fact, I would like to mention that a stock 7850's bandwidth starts at ~150GB/s, and with the clock increase above it would end up at ~187GB/s, which is in the ballpark of Orbis' rumoured specs. This implies that the bandwidth of the GDDR5 in Orbis is not so high as to be unexploitable.

That makes sense. I don't think 180-190GB/s is too high for the expected performance potential. I remember a common complaint against the RSX was that it had half the bandwidth of the PC 7800. As you say, that matches closely with the most equivalent PC video card. Also, I could be wrong, but that's not all for graphics: the CPU has to share that bandwidth to access its allocation of the UMA.
 
The FPS is barely increasing with bandwidth; unless you are proposing that you can get FPS without bandwidth, it means it has passed saturation and the small variations are within the margin of error.

by your logic we would still have 50% performance at 0 bandwidth.

That is just an estimate based on limited data, and your data suggests it is linear over this range (~150 to ~187GB/s). That is completely acceptable for a ballpark figure. I would have expected a 20% increment from a doubling of bandwidth, but 30% is not too bad as a ballpark figure.

As I have mentioned, even if the so-called 'insignificant' change in FPS continues as we decrease the bandwidth, the performance increase from employing GDDR5 over DDR3 is ~27%: if the game runs at 60fps on GDDR5, it would run at 47fps on DDR3. That is not insignificant to me. As you have no doubt noticed, since you mentioned it, the real-world increase should be much higher, seeing as performance has to reach zero when bandwidth reaches zero; the performance difference per increment has to increase as we go down the scale. A 27% increase in performance of GDDR5 over DDR3 due only to bandwidth is very, very conservative.
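To spell out where the ~27% comes from (the 60fps and 47fps figures are hypothetical values from the argument above, not measurements):

```python
# Hypothetical frame rates for the same GPU on the two memory types.
fps_gddr5 = 60.0
fps_ddr3 = 47.0

# Relative gain of GDDR5 over DDR3, in percent.
gain_pct = (fps_gddr5 / fps_ddr3 - 1) * 100
print(round(gain_pct, 1))  # 27.7
```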
 
That is just an estimate based on limited data, and your data suggests it is linear over this range (~150 to ~187GB/s). That is completely acceptable for a ballpark figure. I would have expected a 20% increment from a doubling of bandwidth, but 30% is not too bad as a ballpark figure.

As I have mentioned, even if the so-called 'insignificant' change in FPS continues as we decrease the bandwidth, the performance increase from employing GDDR5 over DDR3 is ~27%: if the game runs at 60fps on GDDR5, it would run at 47fps on DDR3. That is not insignificant to me. As you have no doubt noticed, since you mentioned it, the real-world increase should be much higher, seeing as performance has to reach zero when bandwidth reaches zero; the performance difference per increment has to increase as we go down the scale. So a 27% increase in performance of GDDR5 over DDR3 due only to bandwidth is very, very conservative.

I never said moving to DDR3 was a good idea. I said that over 100GB/s it starts to saturate, and over 160GB/s you start to get very diminished returns.
 
It's probably just thousand island.

Damn you!


Seriously though, this is what DF claims :

However, there's a fair amount of "secret sauce" in Orbis and we can disclose details on one of the more interesting additions. Paired up with the eight AMD cores, we find a bespoke GPU-like "Compute" module, designed to ease the burden on certain operations - physics calculations are a good example of traditional CPU work that are often hived off to GPU cores. We're assured that this is bespoke hardware that is not a part of the main graphics pipeline but we remain rather mystified by its standalone inclusion, bearing in mind Compute functions could be run off the main graphics cores and that devs could have the option to utilise that power for additional graphical grunt, if they so chose.
 
If you can combine bandwidths, I reckon the XB3's bandwidth per flop is better than the PS4's, no?
Although not many look at it like that.
 
Even though you can't simply add up the bandwidth of Durango's DDR3 and eSRAM, with the rumoured 176GB/s for the PS4 the two just got a lot closer together. I do see a slight advantage now with Durango because of the doubled memory size. Too bad that all it took was 3W to go from superior to on par.
 
oversitting said:
The FPS is barely increasing with bandwidth
FPS is a poor metric for that - for all you know the shaders are still idling waiting on memory while one of the 1001 other factors limits the FPS.
 
Do you all realise that the UMA's 170GB/s - 192GB/s (whichever you suspect) isn't all for the GPU? So you can't compare directly to a PC graphics card; remember the CPU has/needs access to the UMA, so you need to deduct that from the bandwidth. :) For example, if the CPU can use up to 50GB/s, you would need to deduct that from a total bandwidth of 190GB/s, so the GPU would have 140GB/s.
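The bandwidth budget being described is simple enough to sketch (all figures here are rumoured or assumed, not confirmed specs):

```python
# Hypothetical UMA bandwidth budget for a shared CPU/GPU memory pool.
total_bw_gbs = 190  # rumoured total GDDR5 bandwidth
cpu_bw_gbs = 50     # assumed worst-case CPU share of the bus

# Whatever the CPU consumes is unavailable to the GPU.
gpu_bw_gbs = total_bw_gbs - cpu_bw_gbs
print(gpu_bw_gbs)  # 140
```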
 
I'm still betting on some type of SPURS engine with SPUs. It could do sound and video processing and calculate physics. It would also allow for backwards compatibility.

There was a poster called Giggzy in an earlier thread who gave us hope of BC, and I really wanted it to be true. But it was so convoluted that I couldn't give him the benefit of the doubt. I've just suspended judgement till we know more...

BTW Cell isn't x86.
 
There was a poster called Giggzy in an earlier thread who gave us hope of BC, and I really wanted it to be true. But it was so convoluted that I couldn't give him the benefit of the doubt. I've just suspended judgement till we know more...

BTW Cell isn't x86.

True, but I thought Sony patented some SPU cluster that could work with x86? I could be wrong though.
 
So, what are the odds of BC?

PS3? Won't happen. If you want to play PS3 stuff via your PS4, I think there might be some system for connecting a PS3 to a PS4 over a local network in a sort of streaming setup, or connecting to a remote server (Gaikai). But not local execution of PS3 content on the PS4 itself.

Do you all realise that out of the UMA 170GB/s - 192GB/s which ever you suspect isnt all for the GPU... so you cant compare directly to PC graphics card... remeber the CPU has/needs access to the UMA so you need to deduct that from bandwidth :) for for example if the CPU can use upto 50GB/s you would need to deduct that from total bandwidth of 190GB/s so the GPU would have 140GB/s.


Just as a reference, the original VGLeaks document talked about the CPU having 12GB/s of cache-coherent access to memory. The CPU design has changed, but I doubt the intended bandwidth usage has changed much.
 
Just as a reference, the original VGLeaks document talked about the CPU having 12GB/s of cache-coherent access to memory. The CPU design has changed, but I doubt the intended bandwidth usage has changed much.

Isn't that the Vita's VRAM speed?

Maybe the numbers got mixed up?

On that note, maybe they've learnt some new things from the Vita architecture.
 
PS3? Won't happen. If you want to play PS3 stuff via your PS4, I think there might be some system for connecting a PS3 to a PS4 over a local network in a sort of streaming setup, or connecting to a remote server (Gaikai). But not local execution of PS3 content on the PS4 itself.
No. Expect full BC. With all the PSN content this gen, going in without BC would be suicide.
 
No. Expect full BC. With all the PSN content this gen, going in without BC would be suicide.

Agreed.

Edit:

People would want the games they purchased online to carry over. And there's probably little difference between PS3 games bought digitally and games on disc, so I hope that there is PS3 BC.
 
No. Expect full BC. With all the PSN content this gen, going in without BC would be suicide.

The likely price repercussions would be far more damaging.

Sony will be migrating more and more PSN and PS3 content onto their cloud service and will talk up the value of having device-neutral access to that content indefinitely into the future. I know it is not functionally the same thing, but that will be their PR 'out'.
 