So Microsoft has made some lofty claims regarding "cloud computing". It's been a rather vague affair, and separating the reality from the PR fluff has been tricky, especially with grand claims such as their Cloud offering 4x more power than the Xbox One itself. So how much is PR fluff, and how much truth is there to it? How much is simply down to the benefits of Dedicated Servers versus P2P? And how much is just there to mask an agenda otherwise built around DRM and always-online requirements?
Obviously cloud computing is not a new phenomenon; it's been happening for some time, notably on the storage front, but also in low-latency computing. Plus we have OnLive and Gaikai (now owned by Sony) offering Cloud-based game streaming systems. So it's there, and ever evolving, but what about Microsoft's promoted Cloud offering?
The basic gist is that there's a massive difference between getting the cloud to compute latency-sensitive loads and latency-insensitive loads. It's the latter (insensitive) ones that could theoretically be offloaded to the Cloud for computation, but not without a vast array of potential issues and logistical obstacles, especially given current average internet connections and speeds, as well as the cost of the server arrays.
The notion of latency-sensitive loads being dynamically computed by the Cloud, however, is a pipe dream, and not something that will be viable any time in the foreseeable future.
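To make that split concrete before getting into the quotes below, here's a minimal, purely illustrative sketch (Python) of how a scheduler might decide which loads could even be candidates for the Cloud. The load names, the staleness figures and the 150ms round-trip threshold are my own assumptions, not anything Microsoft has published.

```python
# Purely illustrative: a crude way a frame scheduler might classify workloads.
# The load names, staleness figures and the 150 ms round trip are assumptions,
# not anything Microsoft has published.

LOADS = {
    "player input & collisions":   0.016,  # must be reflected next frame (~16 ms at 60 fps)
    "hit detection in a shooter":  0.016,
    "ambient occlusion bake":      2.0,    # tolerates seconds of staleness
    "volumetric fog precompute":   5.0,
    "background world simulation": 10.0,
}

CLOUD_ROUND_TRIP = 0.150  # optimistic request/response time, in seconds

def where_to_run(tolerable_staleness):
    """A load can only leave the box if it tolerates at least one round trip of delay."""
    return "cloud candidate" if tolerable_staleness >= CLOUD_ROUND_TRIP else "must stay local"

for load, staleness in LOADS.items():
    print(f"{load:30s} -> {where_to_run(staleness)}")
```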
So, a few titbits to help in understanding Microsoft's Cloud promises.
----
EDIT: Update to OP with Digital Foundry article. Quotations taken from GAF thread.
Digital Foundry | In Theory: Can the Xbox One cloud transform next-gen gaming?
NeoGAF | Digital Foundry's evidence-based analysis on Xbox Cloud potential
Latency said:
To put this in perspective, when the logic circuits of a CPU want some data, they have to wait a few nanoseconds (billionths of a second) to retrieve it from its cache... If the CPU were to ask the cloud to calculate something, the answer won't be available for potentially 100ms or more, depending on internet latency - some 100,000,000 nanoseconds!

Bandwidth said:
The PS4 memory system allocates around 20,000MB/s for the CPU of its total 176,000MB/s. The cloud can provide one twenty-thousandth of the data to the CPU that the PS4's system memory can. You may have an internet connection that's much better than 8mbps of course, but even superfast fibre-optic broadband at 50mbps equates to an anaemic 6MB/s.

Economy said:
Beyond the technical considerations of what is and isn't possible due to bandwidth and latency constraints, there are of course economic considerations. Running a server to provide a solo player game is extremely expensive. It makes far more sense to use the servers to run multiplayer games.

Lies said:
When you play Battlefield 3 on your Xbox 360, do you have the equivalent power of a dozen Xbox 360s because the server is notionally that powerful? Microsoft's claims seem pretty wishy-washy against such a comparison, and without the explicit clarification that they are literally installing four teraflops of server power for each and every Xbox One bought, the claims of that power target can only be considered bogus PR hand-waving to try and detract from the performance deficit with their rival.

Deceit said:
What's obvious at this point is that the concept of cloud computing looks uncertain and unlikely, and Microsoft needs to prove its claims with actual software. Yet based on what we've been told, the firm itself isn't sure of what uses to put it to, while the limitations of latency and bandwidth severely impede the benefits of all that computing power.

Put up or Shut Up said:
Microsoft needs to prove its position with strong ideas and practical demonstrations. Until then, it's perhaps best not to get too carried away with the idea of a super-powered console, and there's very little evidence that Sony needs to be worried about its PS4 specs advantage being comprehensively wiped out by "the power of the cloud".
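Just to put those Latency and Bandwidth figures side by side, here's a quick back-of-the-envelope calculation of my own (the 5ns cache hit and 100ms round trip are the ballpark numbers Digital Foundry uses above; nothing here comes from Microsoft):

```python
# Rough numbers lifted from the Digital Foundry quotes above.
cache_latency_ns = 5          # "a few nanoseconds" to fetch from a CPU cache
internet_rtt_ms  = 100        # ~100 ms round trip to a cloud server
cpu_mem_bw_mb_s  = 20_000     # PS4 CPU's share of memory bandwidth (MB/s)
broadband_mbps   = 50         # "superfast" fibre connection, in megabits per second

internet_rtt_ns = internet_rtt_ms * 1_000_000
broadband_mb_s  = broadband_mbps / 8          # megabits -> megabytes per second

print(f"A cloud round trip is ~{internet_rtt_ns / cache_latency_ns:,.0f}x slower than a cache hit")
print(f"50Mbps fibre delivers ~{broadband_mb_s:.2f}MB/s, roughly "
      f"1/{cpu_mem_bw_mb_s / broadband_mb_s:,.0f} of the CPU's local memory bandwidth")
```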
---
Ars Technica | How the Xbox One draws more processing power from cloud computing
Ars said:
While Tuesday's Xbox One presentation answered some questions about Microsoft's upcoming system, it left just as many or more unsettled. Luckily, Ars got a chance to sit down with General Manager of Redmond Game Studios and Platforms Matt Booty to try to get more answers. While he wasn't able to answer some of the most pressing questions about the system, he was able to dive deep into some of the technical details.
Our first question had to do with the 30,000-server cloud architecture that Microsoft says the Xbox One will use to help support "latency-insensitive computation" in its games. What does that mean exactly, and can laggy cloud data really help in a video game where most things have to be able to respond locally and immediately?
"Things that I would call latency-sensitive would be reactions to animations in a shooter, reactions to hits and shots in a racing game, reactions to collisions," Booty told Ars. "Those things you need to have happen immediately and on frame and in sync with your controller. There are some things in a video game world, though, that don't necessarily need to be updated every frame or don't change that much in reaction to what's going on."
"One example of that might be lighting," he continued. "Let’s say you’re looking at a forest scene and you need to calculate the light coming through the trees, or you’re going through a battlefield and have very dense volumetric fog that’s hugging the terrain. Those things often involve some complicated up-front calculations when you enter that world, but they don’t necessarily have to be updated every frame. Those are perfect candidates for the console to offload that to the cloud—the cloud can do the heavy lifting, because you’ve got the ability to throw multiple devices at the problem in the cloud."
Booty added that things like physics modeling, fluid dynamics, and cloth motion were all prime examples of effects that require a lot of up-front computation that could be handled in the cloud without adding any lag to the actual gameplay. And the server resources Microsoft is putting toward these calculations will be much greater than a local Xbox One could handle on its own. "A rule of thumb we like to use is that [for] every Xbox One available in your living room we’ll have three of those devices in the cloud available," he said.
While cloud computation data doesn't have to be updated and synced with every frame of game data, developers are still going to have to manage the timing and flow of this cloud computing to avoid noticeable changes in graphic quality, Booty said. “Without getting too into the weeds, think about a lighting technique like ambient occlusion that gives you all the cracks and crevices and shadows that happen not just from direct light. There are a number of calculations that have to be done up front, and as the camera moves the effect will change. So when you walk into a room, it might be that for the first second or two the fidelity of the lighting is done by the console, but then, as the cloud catches up with that, the data comes back down to the console and you have incredibly realistic lighting."
Does that mean that Xbox One games will feature graphics that suddenly get much more realistic as complex data finally finishes downloading from the cloud? "Game developers have always had to wrestle with levels of detail... managing where and when you show details is part of the art of games," Booty said. "One of the exciting challenges going forward is a whole new set of techniques to manage what is going to be offloaded to the cloud and what’s going to come back.”
And what about those times when a gamer doesn't have an active Internet connection to make use of the cloud's computational power? Microsoft has confirmed that single-player games don't have to be online to work, but all this talk of cloud computing seems to suggest that these games might not look or perform as well if they don't have access to a high-speed connection.
"If there’s a fast connection and if the cloud is available and if the scene allows it, you’re obviously going to capitalize on that," Booty told Ars. "In the event of a drop out—and we all know that Internet can occasionally drop out, and I do say occasionally because these days it seems we depend on Internet as much as we depend on electricity—the game is going to have to intelligently handle that." Booty urged us to "stay tuned" for more on precisely how that intelligent handling would work, stressing that "it’s new technology and a new frontier for game design, and we’re going to see that evolve the way we’ve seen other technology evolve."
---
And another discussion from the Xbox One Microsoft Tech Panel.
Xbox One Architecture Panel
Microsoft tech panel said:
Dan Greenwalt: It's also connected to the cloud, this gives us creators the ability to offload some of the processing that we would do on this powerful box, and also do processing that we can't do even on this powerful box, because of the power of the cloud. Because we can move things, physics, AI, worlds, we can move incredible rendering capabilities to the cloud, and that means this box is going to evolve. So this is a radically different way of thinking about how we creators work with the box.
Todd(?): With Xbox Live, when Mark was talking about the number of machines that were added, this is a big deal. Next gen isn't just about having lots of transistors local, it's also about having transistors in the cloud, and the best way I can explain it is that to me next gen is about change. I've got these games to stay the same, I've got these apps that are changing, but now you start throwing in servers that are just one hop away, that can start doing things like, hmmm, you know you look at a game, and there's latency-sensitive loads and there's latency-insensitive loads, let's start moving those insensitive loads over to the cloud, freeing up local resources, and effectively over time your box gets more and more powerful. This is completely unlike previous generations: you've got a fixed number of transistors in your house, and a variable number of transistors in the cloud, and as we get smarter about which loads we can move into the cloud, that frees up local resources to do things about the here and now, and that's really exciting.
Matt(?): But now what we get is the power we can tap directly into, to offload processes, and again do the low latency processing we want to put out there. So now we have the best of both worlds: we have a stable platform that we can create from, and we have an ever-evolving world that we can tap into.
Todd(?): I think that really is a fundamental difference between this generation and the last generation. In the last one, that box was fixed, and the game was all about optimise, optimise, optimise. The games that we see now on the 360 look tremendously better than the games we saw at launch on the 360 because we deeply understand that chip. That's going to happen in this generation, but add to it the number of transistors in the cloud, that are really not that far away, that you can start to move those loads onto. You can start to have bigger worlds, start to have lots of players together, but can also take maybe some of the things that are done locally and push them out. This generation is about improving and embracing change and growth, whilst still maintaining the predictability the game developer needs. This is a balancing act that we have to achieve.
---
Could Microsoft's (or Sony's) cloud systems really be used to compute the kind of latency-insensitive loads described above, which would traditionally be processed locally? Or is this PR fluff?
The thing that strikes me from the quotes above is that this becomes a very different, non-viable beast as soon as the computation required is dynamic to the game. Matt Booty mentions that lighting does not necessarily have to be updated every frame and therefore could be pushed to the cloud; however, I would have assumed that most next-gen games would use dynamic lighting, which would potentially need to be recalculated every frame. The same could be said for physics calculations and even AI, if a particular scene required them to be dynamic on a frame-by-frame basis.
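As a rough illustration of why per-frame dynamic work can't make that round trip, using the ~100ms figure quoted above (my own arithmetic, nothing more):

```python
# How late would a per-frame result be if it had to make a cloud round trip?
round_trip_ms = 100                  # ballpark internet round trip from the DF piece above

for fps in (30, 60):
    frame_ms = 1000 / fps
    frames_late = round_trip_ms / frame_ms
    print(f"At {fps}fps a cloud-computed result arrives ~{frames_late:.0f} frames too late")
```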
I have no doubt, however, that with Dedicated Servers more players could be added to an online game, or that maps and worlds could be larger. But that has less to do with the cloud and more to do with the low-latency benefits of Dedicated Servers versus P2P, something that has always been the case.
Thoughts?
EDIT: Updated OP with Digital Foundry findings.