eSRAM question

SRAM has much faster access times than DRAM. Each bit of DRAM uses only one transistor and one capacitor, while an SRAM cell uses four or six transistors depending on the type. Because DRAM stores its bit as charge on a capacitor, it also needs to be refreshed periodically.

So why would a CPU cache use a type of SRAM and not DRAM, if DRAM can end up being faster?

The 6T/8T cell is a big disadvantage for SRAM. At smaller sizes (<10 MB), eSRAM would be faster by virtue of its simpler access mechanism and static storage. However, the SRAM cell has a lot more parasitic capacitance and resistance, so as the size of the SRAM array increases, the bandwidth drops off a cliff. The 1T DRAM cell, on the other hand, doesn't load the access lines much, and so it becomes faster for larger memory pools. This is the reason why IBM chose eDRAM for the huge L3 cache on POWER8, and also why Intel chose eDRAM for the L4 on Haswell.
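Rough back-of-the-envelope numbers, just to illustrate the 6T-vs-1T point above (my own figures, ignoring tags, ECC, sense amps and all the other overhead that matters in a real design):

# Cell-count comparison for a 32 MB on-die pool (illustrative only).
# Uses the 6T SRAM vs 1T1C eDRAM cell figures quoted above.
POOL_BYTES = 32 * 1024 * 1024
bits = POOL_BYTES * 8

sram_transistors  = bits * 6   # 6T cell
edram_transistors = bits * 1   # 1T + 1 trench capacitor

print(f"bits: {bits:,}")
print(f"6T eSRAM transistors: ~{sram_transistors:,}")   # ~1.6 billion
print(f"1T eDRAM transistors: ~{edram_transistors:,}")  # ~0.27 billion

That cell-count gap is why a 32 MB SRAM pool eats so much die area compared to the same capacity in eDRAM.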
 
In all seriousness, I feel something like this (just mocked it up) is a better water based analogy:



In either system you can get to the 'water' just as quickly; it just takes a bunch of extra forward planning on the left. There will be cases better suited to each scenario.

[edit] the pipes are not to scale :P

Except you don't have parity.
 
Care to explain? A 1080p color buffer + depth buffer + frame buffer = about 23 MB of eSRAM. That is less than the 32 MB that's on the Xbox One.
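For reference, a rough version of that math (assuming 4 bytes per pixel per target, e.g. RGBA8 color and a 32-bit depth format, no MSAA; real layouts vary):

# Back-of-the-envelope size of the buffers mentioned above at 1920x1080.
width, height = 1920, 1080
bytes_per_pixel = 4

one_target_mb = width * height * bytes_per_pixel / (1024 * 1024)  # ~7.9 MB
total_mb = 3 * one_target_mb                                       # ~23.7 MB

print(f"one 1080p target: {one_target_mb:.1f} MB")
print(f"color + depth + frame buffer: {total_mb:.1f} MB (vs 32 MB eSRAM)")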

The problem devs are having is figuring out how to implement AA techniques - for example, putting MSAA in eSRAM. That won't fit alongside the above buffers. Last gen, the 360 had eDRAM which was mainly used just for AA. It's very possible that this will happen this gen as well. In reality, AA shouldn't be pooled into eSRAM, as you really only want the resulting image in the buffer.

The PS4 has a stronger GPU, which is really what is causing the parity issues.

TL;DR: Devs are still figuring out the systems and SDKs. 1080p is possible. This gen is just starting.

Deferred rendering techniques can have much bigger framebuffers, and 32 MB of eSRAM makes using them difficult at 1080p. You also have to fit other assets in there, and it seems many XB1 devs are shrinking the resolution to shrink the frame buffer and make room for those assets. 1080p is possible on the XB1, but you have to make sacrifices like cardboard-cutout crowds and pre-baked lighting. As the generation progresses, you'll probably see fewer 1080p games and more effects at lower resolutions on the XB1, and likely the PS4; not more.
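To put a number on "much bigger framebuffers", here's a sketch with an assumed G-buffer layout (the render-target count and formats are my own example, not any particular engine's):

# Illustrative G-buffer sizing at 1080p for a deferred renderer.
# Assumes four 32-bit render targets (albedo, normals, material params,
# emissive/etc.) plus a 32-bit depth buffer; real engines differ.
width, height = 1920, 1080
bpp = 4          # bytes per pixel per target
targets = 4      # G-buffer render targets
mb = lambda b: b / (1024 * 1024)

gbuffer = targets * width * height * bpp
depth   = width * height * bpp
total   = gbuffer + depth

print(f"G-buffer: {mb(gbuffer):.1f} MB, depth: {mb(depth):.1f} MB, "
      f"total: {mb(total):.1f} MB vs 32 MB of eSRAM")
# ~31.6 + 7.9 = ~39.6 MB, which is why 1080p deferred doesn't fit comfortably.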
 
Well, PS Now offloads 100% of game processing to the cloud (basically, streaming is 100% server-side, while the Xbox One approach - in theory, never in practice - is more like a 50-50 split, with some parts local and the rest on the server).
The language around Gaikai was about streaming games on demand and playing older PS3 titles. The titles are processed on the server and streamed to the console as if it were a dumb set-top box. But they didn't talk about making Uncharted 4 look even better than the PS4 hardware is theoretically capable of; at least I don't remember that they did.

Meanwhile, the marketing claim for MS Azure is that processing of current-generation games can be partially offloaded to the cloud, so that current-generation games can have better visuals. It was touted in Titanfall as well, I believe.
 
I don't really think that MS would screw up the Xbox with a mistake as stupid as useless eSRAM. The console has been in the works for years; I don't think they decided on their components at the last second (especially since eSRAM takes up a lot of die space and is expensive, so that money could have gone to faster and better RAM/GPU, like the PS4).


I BELIEVE IN DA CLOUD!

 
MS really painted themselves (or their engineers) into a corner with the combination of the amount of RAM needed and the requirement for cost savings forcing a single die.

The former meant DDR3, which needs embedded RAM to support it. Not the end of the world; it worked OK on the Xbox 360, PS2, etc.

But the second point forces the choice of eSRAM instead of eDRAM. So you don't get any speed advantage worth talking about (it is around the same speed as the GDDR5 in the PS4, whereas if it were eDRAM it'd be 4-6x faster than that), and the huge size of the eSRAM eats a ton of your APU die area.


But you couldn't skip embedded RAM and keep DDR3 - you'd have a GPU on par with the PS4's, but it'd be so bandwidth-starved it would perform worse than the actual Xbox One with its fewer CUs.

And you couldn't use GDDR5, because that would compromise MS's requirements for the switchable OS.

And you couldn't use nice, fast, small eDRAM, because MS wanted shrinkability and cost savings down the line.

In hindsight, I think they should have gone with the daughter-die/eDRAM approach and put 64-128 MB of eDRAM in the system, allowing a full-sized GPU.
 
Deferred rendering techniques can have much bigger framebuffers, and 32 MB of eSRAM makes using them difficult at 1080p. You also have to fit other assets in there, and it seems many XB1 devs are shrinking the resolution to shrink the frame buffer and make room for those assets. 1080p is possible on the XB1, but you have to make sacrifices like cardboard-cutout crowds and pre-baked lighting. As the generation progresses, you'll probably see fewer 1080p games and more effects at lower resolutions on the XB1, and likely the PS4; not more.

Deferred rendering is quickly being phased out by forward rendering again, since forward gives you much better performance on your alphas, and tile-based solutions are covering most of the dynamic lighting issues. I'd even argue that tiled forward rendering (and physically based rendering) offers greatly enhanced GI as well. (It's one of my biggest complaints about Killzone; the game looks really "flat" in terms of lighting.)
 
I don't really think that MS would screw up the Xbox with a mistake as stupid as useless eSRAM. The console has been in the works for years; I don't think they decided on their components at the last second (especially since eSRAM takes up a lot of die space and is expensive, so that money could have gone to faster and better RAM/GPU, like the PS4).

There has been a lot of previous discussion on the subject, here and on other forums. The general consensus is that the MS engineers produced the best console they could under the constraints they were given (time, cost, RAM amount). It wasn't a "stupid mistake"; it was the only decision.
 
Deferred rendering is quickly being phased out by forward rendering again, since forward gives you much better performance on your alphas, and tile-based solutions are covering most of the dynamic lighting issues. I'd even argue that tiled forward rendering (and physically based rendering) offers greatly enhanced GI as well. (It's one of my biggest complaints about Killzone; the game looks really "flat" in terms of lighting.)

Really, the only game I know of that uses forward rendering on current-gen consoles is Forza 5. If you think the lighting in KZ looks flat, I don't know how you would describe the lighting in F5 (flatter than flat, or flattest, are probably good qualifiers for it). There is no such phasing-out of deferred rendering going on; such a claim can be made when CryEngine, UE and Unity drop support for deferred rendering. The only hardware that gains from forward rendering is most smartphone SoCs, which implement hardware-based TBDR. Current-gen desktop and console GPUs do not see a meaningful improvement in the handling of light sources by shifting to a forward renderer.
 
There has been a lot of previous discussion on the subject, here and on other forums. The general consensus is that the MS engineers produced the best console they could under the constraints they were given (time, cost, RAM amount). It wasn't a "stupid mistake"; it was the only decision.

Totally agree with this. I hate it when people start dissing MS hardware engineers, when they probably did the best they could within the constraints of price, power, TVTVTV, and multi-tasking.
 
Totally agree with this. I hate it when people start dissing MS hardware engineers, when they probably did the best they could within the constraints of price, power, TVTVTV, and multi-tasking.

If your boss told you to make a car with square wheels, the failure is probably not your fault.
 
Offloading 100% of processing is much easier than offloading 50%, as it requires virtually no developer input. Any game can be played from the cloud; not every game can be partially rendered in the cloud.

I agree that partial offloading is harder, since you would need to develop specific APIs to handle that scenario. Not impossible, but quite challenging.
100% offloading is challenging too. From an infrastructure perspective, it would require (I guess) massive investment in datacenters at a minimum, with specific hardware to handle gaming (probably the reason why Sony bought Gaikai, since they have the know-how).
 
Running the grunts...

Are they smarter and do they react faster now? Just wondering, since the videos I saw a couple of months ago made them seem pretty much worthless. Besides, games were doing server-side bots years ago.

From what I understand, this picture is simplified to a fault. Things aren't so simple.

The picture doesn't explain the situation; it just explains what a bottleneck is, which most people know. eSRAM is not entirely a bottleneck on all fronts.

Of course it's simplified and not accurate. I just posted it because it was being brought up; I didn't know it had also been revised.
 
Really, the only game I know of that uses forward rendering on current-gen consoles is Forza 5. If you think the lighting in KZ looks flat, I don't know how you would describe the lighting in F5 (flatter than flat, or flattest, are probably good qualifiers for it). There is no such phasing-out of deferred rendering going on; such a claim can be made when CryEngine, UE and Unity drop support for deferred rendering. The only hardware that gains from forward rendering is most smartphone SoCs, which implement hardware-based TBDR. Current-gen desktop and console GPUs do not see a meaningful improvement in the handling of light sources by shifting to a forward renderer.

The lighting in Killzone feels painted onto the surfaces rather than propagated from the materials in a credible way.

IIRC Ryse is using a physically based rendering model ( http://www.makinggames.de/index.php/magazin/2391_ryse__the_transition_to_physically_based_shading ), and so are new Unreal games like Fable Legends (and some older ones like Remember Me https://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/ ). Unity is adding more support for it, and since Black Ops 2 I think CoD has been using a PBR model as well.

Deferred rendering just has too many pitfalls and looks baaaaaaaaaaaaad in a lot of cases.
 
eDRAM is a 1T, single-trench cell. SRAM cells can be 6T or 8T. In terms of density and access speed/bandwidth, eDRAM is aeons better than eSRAM. However, eSRAM is more power-efficient and does not need to be periodically refreshed. Most standard CMOS processes do not allow eDRAM cells. The only example of on-die eDRAM that I know of is the IBM POWER8.
IBM have been doing on-die eDRAM for a while now (since POWER7 in 2009). Also, Nintendo's Latte GPU features 32 MB of on-die eDRAM.

Where can I go to learn about this stuff? I'm very new to all this but would love to get a better grasp.
here
 
The 6T/8T cell is a big disadvantage for SRAM. At smaller sizes (<10 MB), eSRAM would be faster by virtue of its simpler access mechanism and static storage. However, the SRAM cell has a lot more parasitic capacitance and resistance, so as the size of the SRAM array increases, the bandwidth drops off a cliff. The 1T DRAM cell, on the other hand, doesn't load the access lines much, and so it becomes faster for larger memory pools. This is the reason why IBM chose eDRAM for the huge L3 cache on POWER8, and also why Intel chose eDRAM for the L4 on Haswell.
Where can I go to learn about this stuff? I'm very new to all this but would love to get a better grasp.
 
The lighting in Killzone feels painted onto the surfaces rather than propagated from the materials in a credible way.

IIRC Ryse is using a physically based rendering model ( http://www.makinggames.de/index.php/magazin/2391_ryse__the_transition_to_physically_based_shading ), and so are new Unreal games like Fable Legends (and some older ones like Remember Me https://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/ ). Unity is adding more support for it, and since Black Ops 2 I think CoD has been using a PBR model as well.

Deferred rendering just has too many pitfalls and looks baaaaaaaaaaaaad in a lot of cases.

KZ: SF used physically based rendering as well though...

http://www.slideshare.net/guerrillagames/lighting-of-killzone-shadow-fall
 
Anyone know what GDDR RAM prices have done over the past year?

Not sure on the size, but this claims to have an almost four-month estimate of retail prices.

http://www.aliexpress.com/gddr5-memory_price.html

Seems to have spiked during the PS4 launch, settled, then rose again for the Japan launch. No data after Jan 2014.

DDR3 prices have risen in the short term.

http://www.kitguru.net/components/m...-remain-stable-for-now-but-future-is-unclear/

This article suggests price increases are coming due to less competition through attrition and many manufacturers gearing up for DDR4, pushing DDR3 prices higher.

Hard to find news on GDDR5 prices, probably because it's a smaller market, fewer people are keenly interested in a commodity price for it, and it's not a stand-alone retail product. People more attuned to that industry might have more info.
 
The cloud means dedicated machines for... hosting multiplayer servers or doing some minor A.I. stuff. Nothing that enhances graphics or sound, or has a big gameplay impact.

It's the biggest and dumbest PR shit I've heard in years. They can fool people without computer knowledge.

"In the land of the blind, the one-eyed man is king."

eSRAM is a fast memory type, but the Xbone doesn't have enough of it for proper 1920x1080 rendering.
 
In hindsight, I think they should have gone with the daughter-die/eDRAM approach and put 64-128 MB of eDRAM in the system, allowing a full-sized GPU.

I'm sure this has been pointed out before, but AMD doesn't produce DRAM. This is why they went with SRAM, which is what they already use for caches. Perhaps they could've used a different company to produce the daughter die, but that might complicate the process even more. I could be wrong though.
 
Where can I go to learn about this stuff? I'm very new to all this but would love to get a better grasp.

AnandTech is the best source of hardware info if you are not an EE/CS engineer. IEEE Spectrum also does some good, easy-to-understand articles/podcasts on hardware (check out a somewhat old article on Resistance: Fall of Man and the PS3, http://spectrum.ieee.org/consumer-electronics/gaming/the-insomniacs - it's quite good).

I get most of my info from various conference proceedings and journal/magazine articles (ISSCC, VLSI Symposium, IEEE TCAS, IEEE Spectrum, etc.) and from attending talks at my university. Two days back I attended a talk by the architect of the Cell SPEs, Dr. H. Peter Hofstee (IBM Austin Research Labs), who talked at length about memory hierarchy and how having a unified memory map is great for big data. It was quite good; if you are at a university, you could try attending EE seminars that interest you.
 
Are they smarter and do they react faster now? Just wondering, since the videos I saw a couple of months ago made them seem pretty much worthless. Besides, games were doing server-side bots years ago.

Nope - you are exactly right - they are just server-side bots, and yes, they have been in MMORPGs for years. But an MMORPG is just an excellent example of "cloud computing"... but but but... it's "just server-side code"... erm, yeah, precisely... kinda why "cloud computing" is a marketing buzzword and actually remarkably meaningless in technology terms.

Well, actually, I kinda fib. Cloud computing generally offers a standardised set of specific services which can be utilised remotely by your software, whereas MMORPGs run custom code served to a custom client. But I imagine the flexibility of cloud computing these days means this difference is close to zero. It is just a "services"-based approach to server-side code rather than anything really very different. Just a way for Amazon, Google & MS to sell server cycles easily.

Streaming and cloud processing are very different things.

?! Really, why? If you mean streaming movies, I agree. But for streaming games it is pretty much the same. The hardware may be different server-side, but it is effectively doing the same thing: listening to client-side code for input (in this case lightweight control inputs), carrying out compute (in this case a whole game rather than some supplementary services), and then providing an output (in this case a frame buffer rather than a bunch of physics or some pretty cloud patterns). The model is the same.

Actually, I do wonder if Sony will add some general-purpose service APIs so other games can run code on their servers when they're not being used for emulating games - effectively turning the PS Now infrastructure into a massive cloud capability. The only real difference is how you use the available server-based computing; a video stream output could just as easily be physics or AI data. I imagine the hardware is general-purpose enough to facilitate this; if not, Sony have shot themselves in the foot. But they could just use Azure :)
 
I don't know a lot about this stuff, so bear with me. I know there is a lot of controversy about MS going with eSRAM. My question is: does eSRAM work better for future "cloud gaming"? Is that why they went with eSRAM? It sounds like Microsoft may be showing off the potential of the cloud at E3. I guess we will see. Thanks for reading.

I still believe it is impossible that Microsoft missed the mark by such a margin. I mean, it's not like they started from a blank sheet of paper, is it?
I think the way the Xbone is designed will work better when the new graphics API is completed.
This ultra-compressed-batches stuff and the new optimization techniques in DX12 - I think they will play their role in DirectXbone 12 when it's done.

That being said, Forza 5 was proof enough for me last year, but I am looking to this E3 for further stuff.
 
Yeah... you worded that better... it was a bet on the future... if the 512 MB (4 Gbit) GDDR5 modules weren't ready on schedule, the PS4 could have launched with 4GB.

So basically they went all in pre-flop and caught the nuts on the river.

Crazy. 4GB would have been a fucking disaster.

I can't wait to read about the development of both of these consoles in 10 years.
 
I still believe it is impossible that Microsoft missed the mark by such a margin. I mean, it's not like they started from a blank sheet of paper, is it?
I think the way the Xbone is designed will work better when the new graphics API is completed.
This ultra-compressed-batches stuff and the new optimization techniques in DX12 - I think they will play their role in DirectXbone 12 when it's done.

That being said, Forza 5 was proof enough for me last year, but I am looking to this E3 for further stuff.

It's been said 1000 times: they compromised on the design to strike a balance between games, media, and Kinect. DX12 is not going to help much, as the bottleneck is a weak GPU and very slow bandwidth to system RAM. Tiled resources, etc. are just hacks to get around the limitations. The shortest path between two points is a straight line.

Forza 5 sacrificed a lot to hit its goal of 1080p / 60fps. The fact that it's a racing game moving at 100mph on screen masks a lot of the issues. If Forza were an FPS, the bad stuff would be a lot more apparent.
 
To laugh this off is a bit ignorant. MS clearly had cloud computing in mind in the hardware requirements, with the dedicated offload engines for compression/decompression of payloads for that purpose. This hardware lets the box compress, send, receive, and store the result back in RAM without a single CPU cycle being used. It's serious business.

Regarding the eSRAM, I feel for the engineers, because the requirements and the direction the console was taken in from the very top mean there is hardware in there that was built to support a direction the console isn't necessarily being taken in now. Sony also got lucky with GDDR5 and MS got unlucky with the DDR fire. Also, from the leaked roadmaps, MS presumed DDR4 would be fully commercial by its release, which would have made for a much better RAM solution.

DX12 will help the eSRAM astronomically with RAM management and descriptor heaps/tables with bundles which are explained here:
http://blogs.msdn.com/b/directx/archive/2014/03/20/directx-12.aspx

This essentially means the API can automatically manage the DMZs and pull different 'bundles' from DDR3 into eSRAM for partial rendering before output. This could allow for some very fancy rendering techniques without filling up the eSRAM and with no involvement from the CPU.

For me personally, this point, along with the better management of DX threads across cores, is the biggest performance win among what DX12 will bring.
 
MS clearly had cloud computing in mind in the hardware requirements, with the dedicated offload engines for compression/decompression of payloads for that purpose. This hardware lets the box compress, send, receive, and store the result back in RAM without a single CPU cycle being used. It's serious business.

Erm... taking data in at a rate 2000-10000x slower than the bus and unpacking it "ultra fast" does not seem terribly well thought out, to be honest. The hardware compression/decompression is probably intended to unpack textures from RAM, not anything to do with the cloud - something that may actually benefit from being done fast. Equating cloud capability to eSRAM as some sort of amazing plan is wishful thinking; they could not be further apart. Pure PR bull to think otherwise.


For me personally, this point, along with the better management of DX threads across cores, is the biggest performance win among what DX12 will bring.

DX12 will bring performance improvements, but not of the magnitude many are hoping for. Nothing that could close the performance gap to the PS4 in any significant way, which is what many hope.
 
To laugh this off is a bit ignorant. MS clearly had cloud computing in mind in the hardware requirements, with the dedicated offload engines for compression/decompression of payloads for that purpose. This hardware lets the box compress, send, receive, and store the result back in RAM without a single CPU cycle being used. It's serious business.

You realize that compressing on the server side and decompressing on the client side just adds additional latency to what will already be a brutal round trip for anything that is supposed to be interactive, right? It's a dumb idea. The PS4 has decompression logic too, and it mostly uses it to speed up installs and loads, which is what the Xbox One is probably actually doing.
 
To laugh this off is a bit ignorant. MS clearly had cloud computing in mind in the hardware requirements, with the dedicated offload engines for compression/decompression of payloads for that purpose. This hardware lets the box compress, send, receive, and store the result back in RAM without a single CPU cycle being used. It's serious business.

[reaction GIF]
 
I love getting people's attention.

Erm... taking data in at a rate 2000-10000x slower than the bus and unpacking it "ultra fast" does not seem terribly well thought out, to be honest. The hardware compression/decompression is probably intended to unpack textures from RAM, not anything to do with the cloud - something that may actually benefit from being done fast. Equating cloud capability to eSRAM as some sort of amazing plan is wishful thinking; they could not be further apart. Pure PR bull to think otherwise.
That's not true; the offload engines are just there to keep the CPU dedicated to its primary purpose in the console. Of course dedicated compression/decompression blocks have other reasons for being there, but using them to compress computational data is a very sensible idea to pair with cloud processing - something I'm sure the APIs (if there is ever a standard library for doing something like this on the box) would take care of.
DX12 will bring performance improvements, but not of the magnitude many are hoping for. Nothing that could close the performance gap to the PS4 in any significant way, which is what many hope.
Why do people always try to bring the PS4 into it when I'm discussing performance benefits for the Xbox, and the Xbox alone? Obviously if I were going to compare these boxes I'd have to take into account a higher-grade GPU. Come on.
The biggest performance benefits are purely theoretical, but DX12 brings massive changes to the graphics pipeline, and that's always an excellent thing. There's also discussion around the queues in the system. For example, the PS4 has two queues, and they're solely dedicated to game/system tasks, whereas on the XB1 the queues are negotiable, with GPGPU for Kinect and high-priority tasks (game commands) plugging the holes between queues. With DX12 there's talk that developers could utilise both of these queues, which would allow asynchronous compute on the GPU. This would allow things like shader computation while the ROPs are being used for fill, for example.

When you dig down into specifics there are some very technical points which could translate to huge performance gains. The point I mentioned above is how SLI/Crossfire works, for example.
You realize that compressing on the server side and decompressing on the client side just adds additional latency to what will already be a brutal round trip for anything that is supposed to be interactive, right? It's a dumb idea. The PS4 has decompression logic too, and it mostly uses it to speed up installs and loads, which is what the Xbox One is probably actually doing.
Man, latency is way exaggerated. For example, with these compressors you're looking at barely a millisecond being added to the time. With people on average getting 20-30ms pings to their local Azure DCs and a ballpark figure of 8ms for computation time, you're looking at roughly a 38ms round trip in total, which means 0.038 seconds (plus render time for the frame) before the event that was sent appears on screen. Think of that in terms of what MS showed at Build.
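Taking those figures at face value, the arithmetic being argued looks roughly like this (the ping and compute numbers are the ones quoted above, not measurements):

# Round-trip budget for a cloud-assisted result, using the numbers quoted
# above (20-30 ms ping to a nearby Azure DC, ~8 ms server compute).
# Purely illustrative; ignores compression, queuing and jitter.
ping_ms    = 30      # worst of the quoted 20-30 ms range
compute_ms = 8
round_trip_ms = ping_ms + compute_ms          # 38 ms

frame_ms_60fps = 1000 / 60                    # ~16.7 ms per frame
frames_late = round_trip_ms / frame_ms_60fps  # ~2.3 frames

print(f"round trip: {round_trip_ms} ms, about {frames_late:.1f} frames at 60 fps")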

Add to that the fact that you'd obviously not use computational power like that for things like collision detection. Imagine a fully online co-op Crackdown 3 where you have four people in a game and they share computational power in the cloud, which allows for a fully destructible world, for example.
 
Erm... taking data in at a rate 2000-10000x slower than the bus and unpacking it "ultra fast" does not seem terribly well thought out, to be honest. The hardware compression/decompression is probably intended to unpack textures from RAM, not anything to do with the cloud - something that may actually benefit from being done fast. Equating cloud capability to eSRAM as some sort of amazing plan is wishful thinking; they could not be further apart. Pure PR bull to think otherwise.

DX12 will bring performance improvements, but not of the magnitude many are hoping for. Nothing that could close the performance gap to the PS4 in any significant way, which is what many hope.

Many hope that DX12 and tool improvements will bring parity between the systems? Then many people are delusional, I guess. Things on the Xbone will get better in the future, especially on exclusives - that's a given - though I doubt we'll see any noticeable improvements in multiplatform games unless things change and the Xbone becomes the lead platform (which I highly doubt), but that's it.

People need to accept at this point that the PS4 is the stronger machine, but I guess there will always be some people that don't lose "hope". I remember some of my friends in the PS2 days who couldn't accept that the Xbox was much more powerful; they are still waiting to see the PS2's secret power unleashed.
 
Man, latency is way exaggerated. For example, with these compressors you're looking at barely a millisecond being added to the time. With people on average getting 20-30ms pings to their local Azure DCs and a ballpark figure of 8ms for computation time, you're looking at roughly a 38ms round trip in total, which means 0.038 seconds (plus render time for the frame) before the event that was sent appears on screen. Think of that in terms of what MS showed at Build.

1. You are talking about best-case scenarios. What about the poor bastards in Australia, 200ms from the nearest Azure server?

2. You are using milliseconds. Memory latencies in a computer are measured in nanoseconds. Waiting several orders of magnitude longer to get a result from a server somewhere is usually a bad idea, especially in single-player experiences where the economics leave you upside down on the transaction.

Add to that the fact that you'd obviously not use computational power like that for things like collision detection. Imagine a fully online co-op Crackdown 3 where you have four people in a game and they share computational power in the cloud, which allows for a fully destructible world, for example.

Oh, you mean like a dedicated server for a multiplayer game? We've had those for decades, and they have nothing to do with magic decompression hardware or eSRAM. In any case, I don't think your non-interactive version of destruction will be impressive to anyone.
 
Sony and MS should have used a dedicated GPU instead of the APU nonsense AMD was selling. Both CPUs could have been faster, and you wouldn't have such a strict limit on CUs from everything having to be crammed onto one die. For example, they could have used an i3 and practically any mid-range GPU and come away with far more performance.
 
Sony and MS should have used a dedicated GPU instead of the APU nonsense AMD was selling. Both CPUs could have been faster, and you wouldn't have such a strict limit on CUs from everything having to be crammed onto one die. For example, they could have used an i3 and practically any mid-range GPU and come away with far more performance.

And both would have cost $200 more and been much worse at GPGPU.
 
1. You are talking about best-case scenarios. What about the poor bastards in Australia, 200ms from the nearest Azure server?

2. You are using milliseconds. Memory latencies in a computer are measured in nanoseconds. Waiting several orders of magnitude longer to get a result from a server somewhere is usually a bad idea, especially in single-player experiences where the economics leave you upside down on the transaction.
Those poor bastards who got a purpose-built set of servers for Titanfall, with a full Azure deployment coming soon? Australia is a massive market for that business, so it's not like they're going to hold back.

You're talking about tasks which are built on the assumption that they're going to be waited on. You could build out so much in-game functionality that isn't player-affected and that could make a game prettier. For example, leaves blowing in the wind on trees.

This is a new concept for games, something that probably won't get utilized that much. In the same vein, computational offloading has been used in scientific/mathematical applications for years. People act like it's such a radical concept, but it's rather straightforward - just a lot to flesh out in an engine.
Oh, you mean like a dedicated server for a multiplayer game? We've had those for decades, and they have nothing to do with magic decompression hardware or eSRAM. In any case, I don't think your non-interactive version of destruction will be impressive to anyone.
You can label it what you want, but a dedicated server which also provides full physics calculations is something that would be equally amazing and hasn't been done before. That was one example; it could be applied to anything.

Anything in 'the cloud' for any application could be labelled a dedicated server. You have a server which holds and serves database requests? Oh, that's a dedicated server.
 
Those poor bastards who got a purpose-built set of servers for Titanfall, with a full Azure deployment coming soon? Australia is a massive market for that business, so it's not like they're going to hold back.

That's a temporary solution for one game in one place. Not everywhere they sell Xbox Ones is within 20ms of an Azure datacenter, and that will always be the case. What about bandwidth, network congestion, peak loads, intermittent Wi-Fi interference? Developers have to build their games around worst-case scenarios, not best-case scenarios. That's the whole point of TCRs. If a game can't gracefully handle an unfortunate set of circumstances, it is refused publication.

You're talking about tasks which are built on the assumption that they're going to be waited on. You could build out so much in-game functionality that isn't player-affected and that could make a game prettier. For example, leaves blowing in the wind on trees.

If it's not time-sensitive, then it is also faster and cheaper simply to pre-calculate a bunch of animation sets for those things. Why go to the enormous expense of engineering not only a cloud-based atmospheric simulator and foliage animation system, but also a client that must broadcast inputs and then integrate that data at irregular intervals, when you can get 99% of the same results with an offline simulation that bakes a set of animations for a variety of conditions before you ever press the disc? I get leaves blowing in the wind when I play Dragon's Dogma on a PS3 right now, and that certainly didn't take any cloud compute to accomplish. MS has a vested interest in selling the idea of offloading mundane tasks to the cloud - they are selling both the server time and a client at a hardware disadvantage - but almost every use they put forth as an example would be better accomplished locally with some skillful fakery.
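As a concrete (entirely hypothetical) version of that kind of fakery - picking the nearest pre-baked animation set for the current conditions instead of asking a server - the variant table and parameters below are made up for illustration:

# Hypothetical sketch: choose a pre-baked foliage animation variant by
# wind speed and time of day instead of simulating it remotely.
baked_variants = {
    # (wind_speed_bucket, time_of_day) -> animation clip name
    (0, "day"):   "leaves_calm_day",
    (1, "day"):   "leaves_breezy_day",
    (2, "day"):   "leaves_windy_day",
    (0, "night"): "leaves_calm_night",
    (1, "night"): "leaves_breezy_night",
    (2, "night"): "leaves_windy_night",
}

def pick_variant(wind_speed_ms: float, hour: int) -> str:
    bucket = min(2, int(wind_speed_ms // 5))   # 0-4, 5-9, 10+ m/s
    tod = "day" if 6 <= hour < 20 else "night"
    return baked_variants[(bucket, tod)]

print(pick_variant(7.5, 14))   # -> "leaves_breezy_day", no network involved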

You can label it what you want, but a dedicated server which also provides full physics calculations is something that would be equally amazing and hasn't been done before. That was one example; it could be applied to anything.

Do you seriously not understand this is stuff games like Battlefield are already doing with huge numbers of players?
 
That's a temporary solution for one game in one place. Not everywhere they sell Xbox Ones is within 20ms of an Azure datacenter, and that will always be the case. What about bandwidth, network congestion, peak loads, intermittent Wi-Fi interference? Developers have to build their games around worst-case scenarios, not best-case scenarios. That's the whole point of TCRs. If a game can't gracefully handle an unfortunate set of circumstances, it is refused publication.
I know it's a temporary solution for one game, but considering the division is missing out on a huge market by not having local Australian servers, I'm sure a billion-dollar division of a corporation can afford the investment to go there.

People tend to say these things but then go back to watching Netflix, which will happily maintain a stream running at over 2 Mbps for hours. Those variables aren't much of a concern in an application that isn't bandwidth-constrained. The backbone of the internet is well capable of this these days; you don't have to worry so much about error rates and retransmission anymore. That's why concepts like jumbo frames, larger RWINs and bigger MTUs came about, given the reliability of the internet these days. There's even an RFC for selective acknowledgements in TCP, owing to the low error rates and packet loss on the internet. You're worrying about things that simply shouldn't be worried about. If these were genuine concerns, then dedicated servers would never have been a thing, because the variables you're discussing would have interrupted the game too much.
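For scale, here's what a stream of that size actually delivers per rendered frame (plain arithmetic using the 2 Mbps figure above; overhead and bursts ignored, and not a claim about any particular game):

# Data a sustained 2 Mbps connection delivers per frame at 60 fps.
bitrate_bps = 2_000_000
fps = 60

bytes_per_second = bitrate_bps / 8           # 250 KB/s
bytes_per_frame  = bytes_per_second / fps    # ~4.2 KB per frame

print(f"{bytes_per_second/1000:.0f} KB/s, ~{bytes_per_frame/1000:.1f} KB per frame")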

If it's not time-sensitive, then it is also faster and cheaper simply to pre-calculate a bunch of animation sets for those things. Why go to the enormous expense of engineering not only a cloud-based atmospheric simulator and foliage animation system, but also a client that must broadcast inputs and then integrate that data at irregular intervals, when you can get 99% of the same results with an offline simulation that bakes a set of animations for a variety of conditions before you ever press the disc? I get leaves blowing in the wind when I play Dragon's Dogma on a PS3 right now, and that certainly didn't take any cloud compute to accomplish. MS has a vested interest in selling the idea of offloading mundane tasks to the cloud - they are selling both the server time and a client at a hardware disadvantage - but almost every use they put forth as an example would be better accomplished locally with some skillful fakery.
It was an example; I'm giving examples to flesh out the concept. Even if this were pre-animated, calculating it in the cloud could mean different variables could be passed in to make it dynamic: wind speed, time of day, etc. It was one example.

Do you seriously not understand this is stuff games like Battlefield are already doing with huge numbers of players?
BF4 does this?
http://channel9.msdn.com/Blogs/AndrewParsons/Cloud-Assist-Demo-from-Build-2014

Okay.

I understand the concept of a basic client/server model. Having the server perform more of the work because of the resources available, and deploying it globally with the elasticity Azure provides, is something that hasn't been done before.
 
That's a temporary solution for one game in one place. Not everywhere they sell Xbox Ones is within 20ms of an Azure datacenter, and that will always be the case. What about bandwidth, network congestion, peak loads, intermittent Wi-Fi interference? Developers have to build their games around worst-case scenarios, not best-case scenarios. That's the whole point of TCRs. If a game can't gracefully handle an unfortunate set of circumstances, it is refused publication.

As a proud Xbox One owner, the one thing I don't understand is how MS planned to sell the Xbox outside of its main Azure server regions if certain games were planned to be reliant upon the cloud for core gameplay-related processing.

There are lots of answers - "We wouldn't sell those types of games in all territories", etc. - but I can't think of a credible one.
 
OK, what I remember reading around the announcement time - and I do think the logic makes sense here - was that they had decided the cost of 8 GB of GDDR5 was too prohibitive for the memory target they had. So they decided to build some eSRAM onto the die, for three reasons.

1) The speed and location of the memory would make up for a lot of the bandwidth lost when trading data between the CPU and GPU cores. The PS4's CPU and GPU have to access their data over a bus capable of supporting the GDDR5 (more chips = more money), while the on-die eSRAM doesn't need separate control hardware because it is accessed directly.

2) Because they were building it on die, the cost of the memory would go down over time due to advances in production (think going from 45nm to 22nm: less heat, less power, and more dies per wafer = lower cost per chip).

3) It was a more or less proven technology at the time. Microsoft, not wanting to repeat the mistakes of the last generation, were more careful in their hardware design, leading to longer hardware iteration cycles. Meaning once they had selected an architecture, they would be unlikely to change it.

In general, integrated components - while initially more expensive to produce - get cheaper over time. The "gamble" they made was that GDDR5 was not going to come down in price in time for them to make their launch window. Sony was "lucky" in that even though they had designed the system to use (for the sake of argument) 4x 1GB GDDR5 chips, 2GB chips came out at an acceptable price point that were a pin-for-pin replacement. (Take the old ones out, pop the new ones in, BAM, double the memory.)

EDIT:


Yep, and most of the reports at the time were that even with the XB1's lower memory bandwidth, the display targets both consoles are trying to hit could be satisfied with even lower memory bandwidth.

EDIT: Woops!

Found that Eurogamer article that points out the bandwidth: eSRAM 206GB/s, GDDR5 176GB/s, DDR3 68GB/s.
Think of the XB1 as a souped-up box truck working with a train vs. a fleet of 18-wheelers. Both will get your data there; the PS4 is "faster" when you consider the average data transfer rate over time, while the XB1 is faster only with a few MB of data - otherwise it relies on its slower DDR3.
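Rough per-frame numbers from the figures quoted above (these are peak rates, so treat them as upper bounds; sustained bandwidth is lower for all three):

# Per-frame bandwidth budget at 60 fps for the peak figures quoted above.
peaks_gb_s = {"eSRAM": 206, "GDDR5": 176, "DDR3": 68}
fps = 60

for name, gb_s in peaks_gb_s.items():
    per_frame_gb = gb_s / fps
    print(f"{name}: ~{per_frame_gb:.2f} GB per 60 fps frame")

# The catch: the eSRAM figure only applies to a 32 MB pool, so most data
# still has to come over the 68 GB/s DDR3 bus.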

First of all, the PS4 is less expensive to make, so the "cheaper" point is wrong. The second point is that the XO/PS4 APUs are already at 28nm.
 
That's a temporary solution for one game in one place. Not everywhere they sell Xbox Ones is within 20ms of an Azure datacenter, and that will always be the case. What about bandwidth, network congestion, peak loads, intermittent Wi-Fi interference? Developers have to build their games around worst-case scenarios, not best-case scenarios. That's the whole point of TCRs. If a game can't gracefully handle an unfortunate set of circumstances, it is refused publication.

u wot m8

what does TCRs stand for? my nick stands for Tom Clancy's Rainbow Six.. tfw no more rainbow six :/
 
It's been said 1000 times: they compromised on the design to strike a balance between games, media, and Kinect. DX12 is not going to help much, as the bottleneck is a weak GPU and very slow bandwidth to system RAM. Tiled resources, etc. are just hacks to get around the limitations. The shortest path between two points is a straight line.

Forza 5 sacrificed a lot to hit its goal of 1080p / 60fps. The fact that it's a racing game moving at 100mph on screen masks a lot of the issues. If Forza were an FPS, the bad stuff would be a lot more apparent.

Since I am one of those who already bought the Xbone, it is only logical that I am aware of what you say. However, I still say that their target was pretty clear, and bringing something to market that will not be able to fill the pixels on my television is not something I think Microsoft would ever go ahead with.


Furthermore, I totally disagree with your Forza 5 statement, and since I could easily call myself a Forza veteran, I can tell you that going from Forza 4 to Forza 5 I found enough upgraded things to be 100% satisfied. In fact, I did not find even one thing downgraded (aside from the smaller number of cars/tracks, which is 100% reasonable).

If you have an opinion that Forza 5 is lesser in some way, then either provide a concrete example I can test and agree/disagree on, or accept my opinion (with thousands of hours backing it) that Forza 5 is an upgrade for Forza, and nothing else.
 
u wot m8

what does TCRs stand for? my nick stands for Tom Clancy's Rainbow Six.. tfw no more rainbow six :/

Title Certification Requirements. It's the final testing Sony or MS or Nintendo do before allowing a game to be published. They are designed to make sure games adhere to platform guidelines for certain UI things, to ensure games won't damage the system or destroy data if a user does something weird and unexpected like ejecting the disc at a weird time or pulling the network cable unexpectedly.

It was an example; I'm giving examples to flesh out the concept. Even if this were pre-animated, calculating it in the cloud could mean different variables could be passed in to make it dynamic: wind speed, time of day, etc. It was one example.

Yeah, it could offer infinite variations done in the cloud, but if it costs you 5% as much to "settle" for only 100 variations in a 5MB archive on the retail disc, why wouldn't you save the money when no one will notice the difference?
 
This is what I've gathered from my research on the topic:

The PS4 originally had only 4GB of GDDR5. As prices came down and manufacturing capacity for GDDR5 chips increased, Sony decided to increase it to 8GB because it was a plug-and-play change. Since the PS4 uses GDDR5 it does not need any on-die memory such as eDRAM or eSRAM, which allowed them to use an APU with more GPU compute units.

During XB1 development, MS determined that 8 GB of RAM was required for their media features, Kinect, etc. DDR4 was not a viable option, and 8 GB of GDDR5 was cost-prohibitive at the time. That's when they decided to go with DDR3 and some type of on-die memory. AMD couldn't produce an APU with eDRAM on the die, so they went with the slower eSRAM. This was fine in MS's eyes because it resembled the hardware of the 360, which had eDRAM. Unfortunately the eSRAM took up a large amount of space on the die, which restricted them to a GPU with fewer compute units. DX12 will help devs manage the eSRAM bandwidth more efficiently, which could possibly increase performance, but it will NOT make up for the smaller number of compute units in the XB1.

The differences between the two can be seen here:
[image: die shot comparison of the two APUs]
 
I don't really think that MS would screw up the Xbox with a mistake as stupid as useless eSRAM. The console has been in the works for years; I don't think they decided on their components at the last second (especially since eSRAM takes up a lot of die space and is expensive, so that money could have gone to faster and better RAM/GPU, like the PS4).


I BELIEVE IN DA CLOUD!


Is this real life?
 