eSRAM question

Kind of a weird question, but can anyone clear up why the RAM speed is so important anyway? I know with PC gaming, it used to be the case (still is?) that RAM speed didn't really matter, it was like a 2 or 3 FPS difference at the most. Is the difference here because this RAM is also being used by the GPU? Does RAM speed matter more with GPU's than it does for normal system RAM?

I think you're confusing system RAM with GPU RAM. The switch from DDR3 to GDDR3 to GDDR5 made huge differences in how much throughput you had, and thus how much data you could output per frame. It's been very important for the increase in resolution over the years. System RAM, on the other hand, doesn't need as much throughput, since the CPU doesn't need as much bandwidth as the GPU.
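To put a rough number on "data per frame" (back-of-the-envelope only, using the commonly quoted peak bandwidth figures; real utilization is lower):

```python
# Back-of-the-envelope: how much data could move per frame at a given
# peak bandwidth. Peak figures only; real-world utilization is lower.

def gb_per_frame(peak_gb_per_s: float, fps: int = 60) -> float:
    """Data budget per frame, in GB, at a given peak bandwidth."""
    return peak_gb_per_s / fps

for name, bw in [("Xbox One DDR3-2133, 256-bit", 68.3),
                 ("PS4 GDDR5-5500, 256-bit", 176.0)]:
    print(f"{name}: ~{gb_per_frame(bw):.2f} GB per 60 Hz frame")
```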
 
I'm here to clear up the misconceptions people have about eSRAM. When someone asks, "So which console has the faster memory?" you have to immediately say Xbox One with its eSRAM, and it's not even up for debate. That's the single fastest piece of tech in either console. It's faster than anything in Amazon's console as well. The truth is people like to downplay it because it's exclusive to Xbox One. True developers like Carmack know that when you need to do something fast in a game, like shaders and bump maps, eSRAM is the bee's knees. This is why Crytek chose to bring Ryse only to Xbox One, and why Call of Duty was able to sustain a silky smooth frame rate. Why jump through unnecessary hoops?

The cloud is a way to further future-proof the console, even though eSRAM should be more than enough once Microsoft grants developers access to code to the metal. It's the same reason people get insurance on their cars, you know? In case it's needed one day, not because the car won't work without it. We got a taste of how great cloud gaming can be on the PC with the launches of SimCity and Diablo 3. That's what Microsoft wants to bring to comfy couches around the world, and people hate them for it; I don't know why. So imagine in five years, when everyone wants the newest consoles, Microsoft says bam, you don't need one, because every city now has Google Fiber; pair it with Xbox One (hint hint) and you get a whole new console called Xbox One Cloud that will last you another five or so years. Nintendo just realized this, so their next console will be a copy of Xbox cloud tech. It's too late for Sony to implement it, haha.

I started reading this and became disappointed in it... am I the only one on this site with computer science classes under my belt?

The real thing about eSRAM is its memory compression and extremely quick decompression: being able to load huge, highly compressed files into memory and unload them extremely quickly. Theoretically you can store a lot of compressed files and then stream them from the memory itself.
 
I started reading this and became disappointed in it... am I the only one on this site with computer science classes under my belt?

The real thing about eSRAM is its memory compression and extremely quick decompression: being able to load huge, highly compressed files into memory and unload them extremely quickly. Theoretically you can store a lot of compressed files and then stream them from the memory itself.

I've been wondering if the framebuffer can be compressed, or if it's limited to shaders, textures, and other game code.
 
They chose it because it saves on energy consumption, is more affordable, is easier to manufacture compared to EDRAM, and it's a much faster chunk of memory that is crucial to helping out the slower main memory.

I also think a bit of familiarity played into it being chosen, because it seems like a pretty obvious extension and evolution of the 360's eDRAM. There were complaints about some of the limitations of the Xbox 360's eDRAM, and eSRAM on Xbox One seems to remove those limitations.
 
I think you're confusing system RAM with GPU RAM. The switch from DDR3 to GDDR3 to GDDR5 made huge differences in how much throughput you had, and thus how much data you could output per frame. It's been very important for the increase in resolution over the years. System RAM, on the other hand, doesn't need as much throughput, since the CPU doesn't need as much bandwidth as the GPU.

I dunno how true this is, I imagine it depends heavily on the game and engine used. You can push a lot of data for the CPU to crunch to help improve GPU work, like all the good PS3 engines offloading work to the Cell to compensate for a weak GPU.

The thing with data is there are a lot of ways to optimize it around your hardware architecture: either precompute and put the workload on your streaming-from-disc system, stuff it into RAM, or have the CPU deal with it, since you can't keep all the data you need in memory. With GPGPU you have even more to balance.

Hell, all I'm hearing about DX12 is that the CPU is going to pick up a lot of slack for the GPU, which is where the Xbone is supposed to benefit, since its CPU is comparable to the PS4's while its GPU is underpowered. This can only be done by crunching data into a form the GPU can handle more efficiently.
 
Cloud computing can be beneficial. It can be used to enhance gameplay experiences. Titanfall showed that.

eSRAM is a tool used to shore up the slower main RAM and increase peak performance.

Trying to equate the two in any way into a meaningful effect for gaming is sheer nonsense. Just look at the numbers...
A really good Internet connection is 100 Mbps (about 12.5 MB/s).
The eSRAM operates at 192 GB/s.

That's a factor of roughly 15,000. Stretch one 60Hz frame (16.7 ms) by that factor and it becomes over four minutes. The magnitude of the difference means that both may be useful, but not for the same task.

Now count latency and so on and it becomes even more ridiculous! Who cares if the hardware is designed to unpack "packets of information" from the cloud really fast when the fastest connection you can get takes 100 ms round trip (that is over 6 frames) and the data trickles in thousands of times slower on a good day.

Cloud computing *could* impact gameplay if you are prepared to wait around 10 frames for the results. So this immediately limits the uses of this capability.
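Putting rough numbers on the last few paragraphs (a quick sketch using the figures quoted above):

```python
# Rough check of the bandwidth/latency argument above, using the quoted figures.

esram_gb_per_s = 192                       # eSRAM peak bandwidth (GB/s)
net_mbit_per_s = 100                       # a very good home connection (Mbit/s)
net_gb_per_s = net_mbit_per_s / 8 / 1000   # = 0.0125 GB/s

print(f"bandwidth ratio: ~{esram_gb_per_s / net_gb_per_s:,.0f}x")        # ~15,360x

frame_ms = 1000 / 60                       # one 60 Hz frame = ~16.7 ms
rtt_ms = 100                               # an optimistic round trip to a server
print(f"100 ms round trip = {rtt_ms / frame_ms:.1f} frames of waiting")  # ~6
```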

But then look at this from a business perspective... are you going to create a game for your console that says on the box "ONLY for people with a 50 Mbit internet connection and 50 ms latency or lower"? No... it would be commercial suicide. So all your cloud compute decisions have to be made to operate in far-from-perfect conditions... 250 ms latency and 1 Mbit.

So the limitations of what you can actually do with cloud compute begin to pile up. And of course, if your game relies on "cloud compute" to work at all, you immediately dismiss any offline customer.

All of these reasons and more are why all the promises of the cloud are largely bullshit. And as soon as a platform holder touts it as amazing, you can be sure there is even more bullshit in the equation, since any cloud computing initiative can be provided to every platform. Indeed, how does PS Now work?!
 
Having a small pool of fast, on-chip scratchpad memory paired with a larger pool of slower memory is a design that's been used by at least one of the competing systems in every generation since the PS2.... Going with an improved extrapolation of those previous designs was a perfectly reasonable and understandable decision.
I wasn't downplaying the general design idea of "small fast pool and large slow pool". Past designs should indeed always be available for reiteration and improvement; ratcheting like this is the way progress is made within the otherwise intractably large space of all possible designs. I'm not claiming Microsoft chose an approach that's clearly bad on the face of it.

Rather, I'm disbelieving about the narrative that looking to the past was the only step taken before the design was decided upon. This is how natural selection works, but not human engineering. Even analogous methods sometimes used (e.g. simulated annealing) are aimed toward an endstate that fulfills a particular function, even if they don't optimally approach it. Engineered products are meant to be of use.

Put another way, we're looking for a plausible answer to the "Why?" question of eSRAM inclusion. I think "It's similar to what some other consoles have previously done" is woefully insufficient as a sole response. After all, many past consoles used other approaches. Surely the decision arose not merely from tradition, but in order to meet other, specific considerations. And I can't conceive of a scenario where those goals weren't teleological: the actual use cases for video games, and for OS and app functions.

The alternative is Microsoft designers saying "We've chosen a setup that we know is a functional game console, though we didn't bother to check what software will actually be run on it." Like I said, I find that possibility incredible.
 
I dunno how true this is, I imagine it depends heavily on the game and engine used. You can push a lot of data for the CPU to crunch to help improve GPU work, like all the good PS3 engines offloading work to the Cell to compensate for a weak GPU.
It depends on the CPU as well. If console makers intended to do heavy graphics workloads on the CPU, they should've gone with a Cell 2.0 or Xeon Phi.

EDIT: It's a waste of silicon if you ask me since a better GPU would be preferable.
 
I dunno how true this is, I imagine it depends heavily on the game and engine used. You can push a lot of data for the CPU to crunch to help improve GPU work, like all the good PS3 engines offloading work to the Cell to compensate for a weak GPU.

The thing with data is there are a lot of ways to optimize it around your hardware architecture: either precompute and put the workload on your streaming-from-disc system, stuff it into RAM, or have the CPU deal with it, since you can't keep all the data you need in memory. With GPGPU you have even more to balance.

Hell, all I'm hearing about DX12 is that the CPU is going to pick up a lot of slack for the GPU, which is where the Xbone is supposed to benefit, since its CPU is comparable to the PS4's while its GPU is underpowered. This can only be done by crunching data into a form the GPU can handle more efficiently.

I agree that GPGPU is going to become a bigger deal this generation, but I was responding to the claim that RAM speed didn't make much of a difference in PC gaming. That is completely untrue. If you look at benchmarks of the same card with GDDR3 vs. GDDR5, the GDDR5 version will vastly outperform the GDDR3 one, even with less RAM.

And the fact is, the PS4 is still more optimized for GPGPU than the XB1 (I'll believe the DX12 claims when I see proof) and will most likely see more performance gains once this technique becomes more common. The eSRAM has nothing to do with it.
 
Nope, but it's more complex to work with.
???

In what sense? In and of itself, SRAM memories can actually have much simpler characteristics than DRAM memories. Not that game developers will notice, as that's an extremely low-level observation. At the level we're discussing, the complexities of use will be a high-level choice that relates at most indirectly to the type of the memory cells.

6T-SRAM versus DRAM:

+ 6T-SRAM has lower latencies for extremely small pools (might not be true in XB1's case)
+ 6T-SRAM can be less power-hungry than DRAM in some applications
+ 6T-SRAM can be manufactured by any process that can build the CPU/GPU, so you don't have to use specialized foundries and processes (this is especially nice when you decide to die shrink)
- 6T-SRAM uses 6 transistors for each memory bit, which means that it's HUGE, and thus very expensive.
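A rough sense of scale for that last point (a ballpark estimate that ignores decoders, sense amps and redundancy):

```python
# Why 32 MB of 6T-SRAM is so big: roughly 6 transistors per bit,
# ignoring decoders, sense amps and redundancy.

esram_bytes = 32 * 1024 * 1024
transistors = esram_bytes * 8 * 6
print(f"~{transistors / 1e9:.2f} billion transistors for the cell array alone")
# ~1.61 billion -- a large slice of the roughly 5-billion-transistor SoC.
```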
 
We all agree it is a salient fact that eSRAM was introduced to band-aid the slow DDR3, since they knew Sony was going for GDDR5 and next gen needs more than DDR3 can offer.

What baffles me, though:

If it was supposed to band-aid the DDR3, did Microsoft not test this configuration prior to approval, to determine whether or not the band-aid was working as planned? The fact is that:
1) it fails as a band-aid for the DDR3
2) it complicates things for devs
3) it does not fit a 1080p deferred framebuffer

It just fails in every single aspect, this eSRAM.

*edit: DDR3, not GDDR3
 
It's been revised:

[image: the revised water analogy diagram]


(eSRAM itself has a slightly slower bus than GDDR5)

In all seriousness, I feel something like this (just mocked it up) is a better water-based analogy:


In either system, you can get to the 'water' just as quickly; it just takes a bunch of extra forward planning on the left. There will be cases better suited to each setup.

[edit] the pipes are not to scale :P
 
Nope, but it's more complex to work with.

EDRAM is a 1T cell with a trench capacitor. SRAM cells can be 6T or 8T. In terms of density and access speed/bandwidth, EDRAM is aeons better than ESRAM. However, ESRAM is more power efficient and does not need to be periodically refreshed. Most standard CMOS processes do not allow EDRAM cells. The only example of on-die EDRAM that I know of is IBM's POWER8.

Theoretically it should be easy to die-shrink an ESRAM-based design. However, in real life the standard cells needed for the shrink may not be available, since the only GF/TSMC customer that needs a 32MB ESRAM macro is MS(?). Frankly speaking, having worked at Intel for a few years, I can state that the daughter-die EDRAM idea is not bad for scalability. It allows you to use older, cheaper processes for the EDRAM (e.g. Intel Haswell's EDRAM daughter die) and decouples the die shrinks from the hard requirement of a 32MB ESRAM macro.
 
I don't know a lot about this stuff so bear with me. I know there is a lot of controversy about MS going with eSRAM. My question is does eSRAM work better for the future "Cloud gaming"? Is that why they went with eSRAM? It sounds like Microsoft may be showing off the potential of the cloud at E3. I guess we will see. Thanks for reading.

Sony went all in and the gamble paid off - Samsung managed to produce 4 Gbit GDDR5 chips, which were needed to equip the console with 8 GB of GDDR5 (it was touch and go for quite some time, since Sony's hopes rested on Samsung's ability to produce the new chip in time).

Microsoft played it safe and went with cheaper and widely available DDR3 RAM. However, DDR3 lacked bandwidth, so eSRAM was introduced as a bandwidth savior. This brought about a situation in which the XO has a split memory pool and still has lower total bandwidth than the PS4. Not to mention that the eSRAM is reported to be huge in die size and heat production (hence the relatively large XO case). Also, management at Microsoft probably thought that more and more people would watch/listen to media on the XO instead of playing games, and therefore underestimated the need for more powerful hardware.

If fortune favors the bold, it definitely favored Sony, at least in the RAM department.
 
If Microsoft had increased the size of the eSRAM, the Xbox One wouldn't be struggling so much with 1080p.

Care to explain? A 1080p color buffer + depth buffer + frame buffer = about 23 MB of eSRAM. That is less than the 32 MB that's on the Xbox One.
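Putting numbers on that 23 MB figure (a quick sketch; assuming 32-bit colour and 32-bit depth/stencil formats):

```python
# Sanity check on the "~23 MB" figure: three full-size 1080p surfaces
# (colour, depth/stencil, and a second colour/frame target) at 4 bytes/pixel.

width, height, bytes_per_pixel = 1920, 1080, 4
surface_mb = width * height * bytes_per_pixel / (1024 * 1024)

print(f"one 1080p 32-bit surface: {surface_mb:.1f} MB")           # ~7.9 MB
print(f"three surfaces: {3 * surface_mb:.1f} MB of 32 MB eSRAM")  # ~23.7 MB
```

A full deferred G-buffer with several more render targets is what blows past the 32 MB, which is the case people keep pointing at.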

The problem devs are having is figuring out how to implement AA techniques, for example putting MSAA in eSRAM; it won't fit alongside the above buffers. Last gen the 360 had eDRAM, which was mainly used just for AA. It's very possible that this will happen this gen as well. In reality, AA shouldn't be pooled in eSRAM, as you really only want the resulting image in the buffer.

The PS4 has a stronger GPU, which is really what is causing the parity issues.

TL;DR: Devs are still figuring out the systems and SDKs. 1080p is possible. This gen is just starting.
 
???

In what sense? In and of itself, SRAM memories can actually have much simpler characteristics than DRAM memories. Not that game developers will notice, as that's an extremely low-level observation. At the level we're discussing, the complexities of use will be a high-level choice that relates at most indirectly to the type of the memory cells.

It is more complicated to use relative to the EDRAM in the 360 because the 360 design included some very strict prescriptions defining how it could be used. The ESRAM in Xbox One is more flexible, so it's up to devs to figure out how best to use it, which could mean very different techniques than may have worked on 360. Additionally, both the size and bandwidth ratios relative to the GPU power and main memory are much worse on the Xbox One than they were on the 360. Even without the EDRAM, the GDDR3 in the 360 was quite fast relative to the GPU and CPU needs, and the 10MB EDRAM was about 2% of the size of main memory. On Xbox One the ESRAM is only about 0.4% of main memory, and the DDR3 bandwidth is only about 3 times what the 360 had for a GPU roughly 8 times as powerful.
 
I've been wondering if the framebuffer can be compressed, or if it's limited to shaders, textures, and other game code.

I have no idea; I would have to see the language in order to figure that out. I guess if you were to program to the metal you could do more, but the idea is to have the AI offloaded to the cloud so your RAM is entirely freed up. A lot of people who aren't in my field don't understand that bandwidth can actually compensate for a lack of memory when it comes to things like AI. Hence the thinking behind cloud servers. Anyone with some experience will tell you this:

Memory can compensate for a lack of computation,
bandwidth can compensate for a lack of memory, and
computation can compensate for a lack of bandwidth, and obviously we can switch these around.

At least in theory.
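If it helps, here's the first of those trades as a minimal sketch (purely illustrative, nothing console-specific):

```python
import math

# "Memory can compensate for computation": precompute a sine table once,
# then each call is a cheap lookup instead of a math.sin() evaluation.

TABLE_SIZE = 4096
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lookup(x: float) -> float:
    """Approximate sin(x) from the table: trades memory for computation."""
    idx = int((x % (2 * math.pi)) / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[idx]

def sin_compute(x: float) -> float:
    """Evaluate sin(x) directly: trades computation for memory."""
    return math.sin(x)

print(sin_lookup(1.0), sin_compute(1.0))   # close, but not identical
```

The other two trades work the same way at a system level: stream data you didn't store (bandwidth for memory), or regenerate it procedurally instead of fetching it (computation for bandwidth).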
 
I have no idea; I would have to see the language in order to figure that out. I guess if you were to program to the metal you could do more, but the idea is to have the AI offloaded to the cloud so your RAM is entirely freed up. A lot of people who aren't in my field don't understand that bandwidth can actually compensate for a lack of memory when it comes to things like AI. Hence the thinking behind cloud servers. Anyone with some experience will tell you this:

Memory can compensate for a lack of computation,
bandwidth can compensate for a lack of memory, and
computation can compensate for a lack of bandwidth, and obviously we can switch these around.

At least in theory.

The frame buffer or front buffer is just the final image you see. You "can't" compress this, as you need the full 1080p (or whatever resolution) image to be rendered.

Tiled resources like textures, shaders, etc. will certainly help both systems this gen in terms of memory consumption.
 
Frankly speaking, having worked at Intel for a few years, I can state that the daughter-die EDRAM idea is not bad for scalability. It allows you to use older, cheaper processes for the EDRAM (e.g. Intel Haswell's EDRAM daughter die) and decouples the die shrinks from the hard requirement of a 32MB ESRAM macro.
Seems like Intel has a different goal, maybe to keep their older fabs humming along when they move to BW/Sky.
 
So why didn't MS do 8 GB of DDR3 system RAM and 4 GB of GDDR5 video RAM? What would the cost per unit have been?

They would have had to cut the bus width of both in half, which would have left you with 4 GB of DDR3 that only runs at 34 GB/s and 2-4 GB of GDDR5 that runs at 88 GB/s. It would have been more complex to build and to program for (see all the complaints about the PS3's split RAM last gen) and would not provide much in the way of performance advantages. The problem is the physical space needed on a chip to run all the traces to the memory. Both PS4 and Xbox One use 256-bit buses, which is at the practical limit of what they could use with an eye towards shrinking the chips in the future.
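For reference, those bandwidth numbers fall out of bus width times effective data rate (rough sketch, commonly quoted configurations):

```python
# Peak bandwidth is roughly bus width (bits) / 8 * effective data rate (MT/s).

def peak_gb_per_s(bus_bits: int, mega_transfers_per_s: int) -> float:
    return bus_bits / 8 * mega_transfers_per_s / 1000

print(f"XB1 DDR3-2133, 256-bit : {peak_gb_per_s(256, 2133):.1f} GB/s")   # ~68
print(f"PS4 GDDR5-5500, 256-bit: {peak_gb_per_s(256, 5500):.1f} GB/s")   # ~176
print(f"halved, 128-bit buses  : {peak_gb_per_s(128, 2133):.1f} and "
      f"{peak_gb_per_s(128, 5500):.1f} GB/s")                            # ~34 / ~88
```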
 
It is more complicated to use relative to the EDRAM in the 360 because the 360 design included some very strict prescriptions defining how it could be used. The ESRAM in Xbox One is more flexible, so it's up to devs to figure out how best to use it, which could mean very different techniques than may have worked on 360. Additionally, both the size and bandwidth ratios relative to the GPU power and main memory are much worse on the Xbox One than they were on the 360. Even without the EDRAM, the GDDR3 in the 360 was quite fast relative to the GPU and CPU needs, and the 10MB EDRAM was about 2% of the size of main memory. On Xbox One the ESRAM is only about 0.4% of main memory, and the DDR3 bandwidth is only about 3 times what the 360 had for a GPU roughly 8 times as powerful.

The Xbox 360 EDRAM is quite limited in usage; however, it is not a good example of EDRAM in SoCs. Check out IBM's POWER8-based design or Intel's Haswell design, in which the EDRAM is used as a cache. Inherently, EDRAM is a much more cost-effective, compact and faster solution (from a hardware engineer's perspective). From a software engineer's perspective, cache operation is completely transparent and they do not need to be bothered with the low-level details. And this is the biggest problem with the ESRAM in the Xbone: it is not a cache (automatically filled and emptied, etc.) but a scratchpad. For programmers not well-versed in low-level architecture, using this scratchpad efficiently is not easy; most simply put the render target in it and call it a day.
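To illustrate the cache-versus-scratchpad difference in the abstract (a toy sketch only, not the actual Xbox One SDK; every name here is made up):

```python
# Toy model only: a transparent cache fills itself on a miss, while a
# scratchpad makes the programmer decide what fits in the fast pool.

class TransparentCache:
    """Cache-style fast memory: the 'hardware' decides what lives in it."""
    def __init__(self, backing: dict):
        self.backing = backing
        self.lines = {}

    def read(self, key):
        if key not in self.lines:          # miss: line is filled automatically
            self.lines[key] = self.backing[key]
        return self.lines[key]

class Scratchpad:
    """Scratchpad-style fast memory: data must be placed explicitly."""
    def __init__(self, capacity_mb: float):
        self.capacity_mb = capacity_mb
        self.contents = {}                 # name -> (size_mb, data)

    def place(self, name: str, size_mb: float, data=None):
        used = sum(size for size, _ in self.contents.values())
        if used + size_mb > self.capacity_mb:
            raise MemoryError(f"{name} does not fit in the scratchpad")
        self.contents[name] = (size_mb, data)

# The "put the render target in it and call it a day" pattern:
esram = Scratchpad(capacity_mb=32)
esram.place("colour_target_1080p", 7.9)
esram.place("depth_target_1080p", 7.9)
```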
 
If it was supposed to band-aid the DDR3, did Microsoft not test this configuration prior to approval, to determine whether or not the band-aid was working as planned? The fact is that:
1) it fails as a band-aid for the DDR3
2) it complicates things for devs
3) it does not fit a 1080p deferred framebuffer
The thing is, the eSRAM isn't really a failure. It does band-aid the DDR3, or else the Xbox One would be much further behind the PS4 than it already is. As for the complexity, and the inability to use it for certain very large framebuffers, you have to keep in mind that Microsoft wasn't working with the knowledge we have now. They (and generally everyone else) were expecting a PS4 with faster but much less RAM. Differences might've still existed, but they'd be much smaller and in some cases would break Xbox's way. It was Sony's ability to match memory size while still maintaining their speed advantage that cast the One in a bad light.
 
Is PS Now BS then? I don't think so.
Different contexts for both.

MS Azure initially claimed (?) to offload processing and offer performance/visuals above what the raw console specs suggested. Hence why it is brought up again in a performance-related thread.

Gaikai promised games streaming (though backwards compatibility with PS3 games got thrown into the mix).
 
Seems like Intel has a different goal, maybe to keep their older fabs humming along when they move to BW/Sky.

You are right, it helps keep old fabs operating at moderate to high utilization, benefiting Intel's bottom line; however, older technology nodes are cheaper for end users too (not just fab owners). For example, most university projects are still implemented on TSMC/UMC's 130nm and larger nodes because they are cheaper and more predictable. The truth is it is quite economical to use an older node for the EDRAM, regardless of whether you are a fab or a customer.

Some have professed that ESRAM is easily scalable. This notion is completely divorced from reality. Logic cells can be scaled down without issues; memory cells, on the other hand, need tons of optimization. A 32MB ESRAM shrink will need a significant investment of engineering resources.
 
Different contexts for both.

MS Azure initially claimed (?) to offload processing and offer performance/visuals above what the raw console specs suggested. Hence why it is brought up again in a performance-related thread.

Gaikai promised games streaming (though backwards compatibility with PS3 games got thrown into the mix).

Well, PS Now offloads 100% of the game processing to the cloud (basically we could say streaming is 100% server-side, while the Xbox One approach, in theory but never in practice, is more like 50-50, with some parts local and the rest on the server).
 
Care to explain? A 1080p color buffer + depth buffer + frame buffer = about 23 MB of eSRAM. That is less than the 32 MB that's on the Xbox One.

The problem devs are having is figuring out how to implement AA techniques, for example putting MSAA in eSRAM; it won't fit alongside the above buffers. Last gen the 360 had eDRAM, which was mainly used just for AA. It's very possible that this will happen this gen as well. In reality, AA shouldn't be pooled in eSRAM, as you really only want the resulting image in the buffer.

The PS4 has a stronger GPU, which is really what is causing the parity issues.

TL;DR: Devs are still figuring out the systems and SDKs. 1080p is possible. This gen is just starting.

So that leaves 9 MB unused on the ESRAM then?
What can that be used for, assuming you want to use it entirely?
 
The frame buffer or front buffer is just the final image you see. You "can't" compress this, as you need the full 1080p (or whatever resolution) image to be rendered.

Tiled resources like textures, shaders, etc. will certainly help both systems this gen in terms of memory consumption.

That's the nice thing about eSRAM: you don't have to load it until you need it.
 
The eSRAM is here to compensate for the slow DDR3. A clunky way to do it compared to the PS4's unified GDDR5.

The limited size of the eSRAM isn't optimal for handling high resolutions with many render targets and deferred renderers. Hence why many games are below 1080p.

What's crazy about this is that Sony's solution of directly using fast GDDR5 is probably less expensive (DDR3 prices have stagnated, and the eSRAM complicates the APU design).
 
The eSRAM is here to compensate for the slow DDR3. A clunky way to do it compared to the PS4's unified GDDR5.

The limited size of the eSRAM isn't optimal for handling high resolutions with many render targets and deferred renderers. Hence why many games are below 1080p.

What's crazy about this is that Sony's solution of directly using fast GDDR5 is less expensive (DDR3 prices have stagnated, and the eSRAM complicates the APU design).

Yes, but it is optimal for switching between the OSes and having a stable OS. The OS on the Xbox One is simply too heavy and needs to be thinned out. I say that as someone who owns and loves my Xbox One.
 
Care to explain? A 1080p color buffer + depth buffer + frame buffer = about 23 MB of eSRAM. That is less than the 32 MB that's on the Xbox One.

The problem devs are having is figuring out how to implement AA techniques, for example putting MSAA in eSRAM; it won't fit alongside the above buffers. Last gen the 360 had eDRAM, which was mainly used just for AA. It's very possible that this will happen this gen as well. In reality, AA shouldn't be pooled in eSRAM, as you really only want the resulting image in the buffer.

The PS4 has a stronger GPU, which is really what is causing the parity issues.

TL;DR: Devs are still figuring out the systems and SDKs. 1080p is possible. This gen is just starting.

http://www.neogaf.com/forum/showthread.php?t=819298

Something something, bits per pixel, 792p..... I will leave that stuff up to the tech geniuses good at math.

Either way, if it were something so simple there would be a lot more 1080p titles released for the One. There is a reason even some exclusives were less than 1080p and some MP titles as low as 792p, and it seems to be about the technical specs more than an issue with the SDK.

Different contexts for both.

MS Azure initially claimed (?) to offload processing and offer performance/visuals above what the raw console specs suggested. Hence why it is brought up again in a performance-related thread.

Gaikai promised games streaming (though backwards compatibility with PS3 games got thrown into the mix).

There was no BC. It is all about streaming. The only concern is bandwidth and latency so the minimum aim always seemed to be PS3.

Yes, but it is optimal for switching between the OSes and having a stable OS. The OS on the Xbox One is simply too heavy and needs to be thinned out. I say that as someone who owns and loves my Xbox One.

Which I find funny, because isn't the PS4's reserved memory about 3 gigs as well?
 
It's not exactly 176 GB/s sustained; 176 is the peak, and it's not sustained even if only the GPU is using it. And when the memory is serving both the CPU and the GPU, each GB/s the CPU consumes takes more than 1 GB/s away from the GPU.

That's one of the biggest advantages of embedded memory: there's no contention from anyone else; all the bandwidth is available to the GPU at all times (which doesn't mean it's sustained at 204 GB/s either).

The article referenced says they hit 133 GB/s on an ideal data set with a very bandwidth-friendly operation. 176 GB/s on the GDDR5 isn't going to be sustainable, but 204 GB/s isn't even possible; it's theoretical, based on playing perfect operation Tetris on perfect data. Sort of like saying a Dodge minivan could theoretically go 200 mph... if dropped off an Antonov An-225 at 30,000 ft. Sure, it's true, but the conditions negate any meaning in the claim.
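Put as a ratio, using the figures quoted above:

```python
# Measured vs. theoretical peak, using the figures quoted above.

measured_esram = 133    # GB/s, the hand-picked "ideal data set" result
peak_esram     = 204    # GB/s, theoretical combined read+write peak
peak_gddr5     = 176    # GB/s, PS4 GDDR5 theoretical peak

print(f"eSRAM efficiency on an ideal workload: {measured_esram / peak_esram:.0%}")
print(f"for comparison, GDDR5 peak: {peak_gddr5} GB/s (also not sustained)")
# ~65% of peak, and that's with a workload chosen to flatter the number;
# neither pool sustains its headline figure in real games.
```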
 
http://www.neogaf.com/forum/showthread.php?t=819298

Something something, bits per pixel, 792p..... I will leave that stuff up to the tech geniuses good at math.

Either way, if it were something so simple there would be a lot more 1080p titles released for the One.



There was no BC. It is all about streaming. The only concern is bandwidth and latency so the minimum aim always seemed to be PS3.



Which I find funny, because isn't the PS4's reserved memory about 3 gigs as well?
Something like that, but all those Snap features use up a lot of memory, especially when you don't bother to close them.
 
The opposite... eDRAM is faster... eDRAM can reach over 1TB/s of bandwidth.

MS chose eSRAM for other advantages, like better compatibility with the GPU manufacturing process... or even lower power consumption.

SRAM has much faster access times than DRAM. Each bit of DRAM uses only 1 transistor and 1 capacitor, while SRAM uses 4 or 6 transistors per bit depending on the type. Because DRAM stores its bits in capacitors, it also needs to be refreshed periodically.

Why would a CPU cache use a type of SRAM and not DRAM, if DRAM were faster?
 
Well, PS Now offloads 100% of the game processing to the cloud (basically we could say streaming is 100% server-side, while the Xbox One approach, in theory but never in practice, is more like 50-50, with some parts local and the rest on the server).

Offloading 100% of the processing is much easier than offloading 50%, as it requires virtually no developer input. Any game can be streamed from the cloud; not every game can be partially rendered in the cloud.
 