This is pretty huge. Amazon going beast mode on gaming.
There are people making UE4 games completely using Blueprint.
$1.50 per 1000 DAU. http://aws.amazon.com/gamelift/pricing/
So, (11,000,000 / 1,000) × $1.50 = $16.5K instead of millions of dollars for those 11 million users...
Titanfall, Gears UE, and Halo 5 all use Azure properly, with true elasticity. Respawn is the best example of this, with their solution documented. It's a very safe bet to say EA implements the same technology in the servers they provide for their games. It'd be more of a shock if they didn't.
Too many people rely on the big names to provide SaaS services, whereas I could set my own up in about half a day with tools like Puppet and Xen. That's probably what a lot of third parties do, as they'll already have the hardware to support it.
That doesn't really change the pros and cons of managed versus unmanaged code. Blueprint is basically just C++ code snippets wrapped in a GUI.
The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.

In C++ it's rare to have to "worry" about memory management; there's no need to use anything but smart pointers with reference counting if you don't want to. The most interesting memory-management task I've had in C++ was a long time ago, on a Wii game where we had to avoid memory fragmentation, so we tracked all the memory allocations of all systems and used memory pools where needed.
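A minimal sketch of the kind of fixed-size-block pool described above (the names and sizes are hypothetical; the actual Wii implementation isn't public). Because every block is the same size and comes out of one contiguous arena, the heap can't fragment:

```cpp
#include <cstddef>
#include <vector>

// Fixed-size block pool: one up-front allocation, O(1) alloc/free,
// and no fragmentation because every block has the same size.
class BlockPool {
public:
    BlockPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize < sizeof(void*) ? sizeof(void*) : blockSize),
          storage_(blockSize_ * blockCount) {
        // Thread every block onto an intrusive free list.
        for (std::size_t i = 0; i < blockCount; ++i) {
            void* block = storage_.data() + i * blockSize_;
            *static_cast<void**>(block) = freeList_;
            freeList_ = block;
        }
    }

    void* allocate() {
        if (!freeList_) return nullptr;          // pool exhausted
        void* block = freeList_;
        freeList_ = *static_cast<void**>(block); // pop the free-list head
        return block;
    }

    void deallocate(void* block) {
        *static_cast<void**>(block) = freeList_; // push back onto the list
        freeList_ = block;
    }

private:
    std::size_t blockSize_;
    std::vector<char> storage_;  // single contiguous arena
    void* freeList_ = nullptr;   // intrusive singly linked free list
};
```

Each subsystem gets its own pool sized at boot, so nothing churns the global heap mid-game.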
Maybe because most studios realize their game will be gone once the game isn't financially supporting server costs anymore and they want their love child to live on for more than two years?
Or they remember the opposition Microsoft felt with their always online plans? People love to be able to play offline. So aside from multiplayer games it doesn't make sense to rely on cloud services. You're just binding yourself to someone without gaining anything in return.
A non-cloud game can sell for years and years. A cloud game can sell for two, maybe three years. Cloud is the solution to a problem the consoles created when they launched with weak hardware that wasn't even good enough to give average PCs a run for their money for even one year.
So why exactly should a developer be willing to pay 50k - 200k a month to a cloud service when he can do everything locally and ends up with a million more profit?
They don't have the money to either buy the hardware or manage it 24/7. That is extremely costly and who knows if you will even need that. Who would want to drop a couple hundred thousand dollars in hardware and staff two sys ops when you can run on AWS at 1/10th the cost?
You could always plan to swap. Soft launch in a beta, see your stress levels and demand. Get an idea of cost and go from there to continue with cloud or migrate to self managed.
The elasticity of the cloud is also handy beyond the first three or four months after release, when a lot of people have moved on.
Look at Destiny (they're using automatic instancing), though I'm not sure if they're using cloud providers, and Star Citizen, which is using Google Compute Engine.
Star Citizen: I'm excited to see where that game goes in general, so I'll see if I can do some digging for information on how they're handling server-side stuff. *nervously crosses fingers*
What I'm saying is that companies like EA aren't going to suddenly shift to using AWS when they already have the resources available. It's not bloody hard or even that expensive to do this stuff. The most expensive bit will be the cost of hiring and paying the people to manage it, if you don't have them.

Depending on the popularity of your game, buying enough hardware to build a cluster of machines running Xen VMs that could handle millions of gamers (plus the crew to install and configure everything) would be so much more costly than building that infrastructure somewhere like AWS. If you already have a lot of the hardware, it may make a little more sense.
But again, just having VMs isn't the solution. It's what you can do with the myriad tools available for VMs. Rule-based auto-scaling is the big key here. Things like Puppet, Chef, and especially Terraform can go a long way toward simplifying and orchestrating what gets scaled, and it's part of why they're such great tools.
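As a rough illustration of the kind of rule I mean (the group name, policy name, and numbers are all made up, and I'm using the AWS C++ SDK's Auto Scaling client for the sketch; in practice you'd more likely declare the same rule in Terraform):

```cpp
#include <aws/core/Aws.h>
#include <aws/autoscaling/AutoScalingClient.h>
#include <aws/autoscaling/model/PutScalingPolicyRequest.h>
#include <iostream>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::AutoScaling::AutoScalingClient client;

        // An "add two game-server instances" policy, meant to be fired
        // by a CloudWatch alarm on player-facing load (CPU, sessions...).
        Aws::AutoScaling::Model::PutScalingPolicyRequest request;
        request.SetAutoScalingGroupName("game-servers");  // hypothetical group
        request.SetPolicyName("scale-out-on-load");
        request.SetAdjustmentType("ChangeInCapacity");
        request.SetScalingAdjustment(2);                  // +2 instances
        request.SetCooldown(300);                         // settle for 5 minutes

        auto outcome = client.PutScalingPolicy(request);
        if (!outcome.IsSuccess()) {
            std::cerr << outcome.GetError().GetMessage() << std::endl;
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}
```

The point isn't the specific API; it's that capacity changes become a standing rule the platform enforces, not a ticket a human handles.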
Titanfall only used Azure for AI, if I remember correctly, and it didn't really seem to make gameplay noticeably better compared to other titles.
I'm unaware of what Gears is doing or what Halo 5 did, so I'll have to look into that. It would be nice to see some good examples out there other than just the new Crackdown. =D
Bungie bought a datacenter in Vegas and rents space at other global data centers to run Destiny. They have something like 100 systems engineers managing that infrastructure. Given that they've admitted "a lot" or "most" of that insane budget was spent on this infrastructure, I would be absolutely astonished if they had an auto-scaling system on their hands. Similarly, considering the salary cost of 100 polished systems engineers, plus the datacenter buyout, the additional rentals, and any hardware they bought, I find it extremely hard to believe that it was less money and time than doing this somewhere like AWS.
I do not know much about networking and server infrastructure, but SC uses Google's service to spin up cloud servers per instanced game (at the moment). I am pretty sure the eventual plan builds on this so that everyone is eventually in the same universe under a master server, with different zones having multiple player instances to handle high-traffic areas (spun off to localised servers). Players are then transferred across instances in areas of dead space, for example, to match them up with real players who happen to be on different instanced servers in those same physical spaces. So the master server handles all the metadata about the game's systems and general zone locations, and the localised servers do all the very player-visible interactions.
Otherwise you would be spending most of your time flying around trillions of km of space and only encountering NPCs. Though there will be a PvP slider of sorts, which would filter who, when, and how often such real-player interactions occur.
I am pretty sure this is how they have been explaining their plan for quite some time.
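A toy sketch of the zone-instancing scheme described above (all names and capacities are hypothetical, and this only shows the master server's assignment bookkeeping, not any real netcode):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// One running server process simulating a copy of a zone.
struct Instance {
    int id;
    int playerCount;
};

// Master-server-side bookkeeping: a zone spills into a new instance
// once its existing ones fill up ("spin up per instanced game").
class ZoneDirectory {
public:
    explicit ZoneDirectory(int capacityPerInstance)
        : capacity_(capacityPerInstance) {}

    // Pick (or create) the instance a joining player should land on.
    int assign(const std::string& zone) {
        auto& instances = zones_[zone];
        for (auto& inst : instances) {
            if (inst.playerCount < capacity_) {
                ++inst.playerCount;
                return inst.id;
            }
        }
        // Every instance of this zone is full: spin up another one.
        instances.push_back({nextId_++, 1});
        return instances.back().id;
    }

private:
    int capacity_;
    int nextId_ = 0;
    std::unordered_map<std::string, std::vector<Instance>> zones_;
};

int main() {
    ZoneDirectory directory(50);
    // 120 players entering the same high-traffic zone end up
    // spread across three instances of it.
    for (int i = 0; i < 120; ++i) {
        directory.assign("crusader-landing");
    }
    return 0;
}
```

The handoff in dead space would then just be the master server reassigning a player between two instances of the same zone.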
I sincerely don't have any rude intentions behind this question, but: did you actually read what I wrote? I know it's an insanely massive wall of ranting text, but I very definitely answered your question.
My question is: what the hell is taking the gaming industry so long to adopt this technology?
Using AWS ≠ using AWS properly. Just having virtualized instances instead of physical servers doesn't mean your infrastructure is better. I keep saying you, but this isn't directed at *you*, I promise. I just mean that I know of a lot of game studios that have virtualized servers living in places like AWS, but they aren't actually architecting an infrastructure that makes use of all the tools available to them by doing so.
Even more terrifying, if there *are* studios that claim to be making use of all these tools (again, not just having VMs in the cloud), then how are they doing such a piss poor job of it?
What I'm saying is that companies like EA aren't going to suddenly shift to using AWS when they already have the resources available. It's not bloody hard or even that expensive to do this stuff. The most expensive bit will be the cost of hiring and paying the people to manage it, if you don't have them.
Regarding the bolded part: that was on a platform which expanded according to scaling rules. Gears and Halo 5 used Azure, which, again, uses the platform in a dynamically expanding (and shrinking) state.
It's done, and probably has been for a while.
Just because Crackdown is doing server-side physics doesn't mean that every other game isn't expanding on load.
You should know that Blueprint isn't actually C++, nor does it compile down to C++. UE4 does also do garbage collection, if you desire (it seems to do it by default).
Yeah, but they had one of the, if not the, smoothest online launches of any modern game to date.
Say what you will about that game, it launched smoothly and runs pretty smoothly on a daily basis. I guess it is to be expected if they staffed that many people on their infrastructure.
Halo 5 is dynamically provisioning resources for pretty much everything. Theater, forge, co-op, multiplayer, etc. Perhaps you were reading about Halo 4, which did only use Azure for analytics.
Oh, I see. You read that blog post about them using the Azure NoSQL stuff. That does not mean they did not use Azure for anything else; that is just one thing they used it for.
I'm not sure what blog post you're referring to; if you'd care to share it, I'd be down to read it though.
I'm definitely talking about Halo 5, not 4. The only things I could find with any detail (from Halo devs or Azure folks) about scaling were the event pipeline and the company tracking.
Sorry, on mobile. If you google Halo 5 Azure, the first item is a blog post about them using Azure DocumentDB to manage the company stuff like you mention.
Read this blog post from Respawn too:
www.respawn.com/news/lets-talk-about-the-xbox-live-cloud
Halo 5 is using this same stuff.
So they built this powerful system to let us create all sorts of tasks that they will run for us, and it will scale up and down automatically as players come and go.
With the Xbox Live Cloud, we don't have to worry about estimating how many servers we'll need on launch day.
These quotes definitely imply they are scaling the number of multiplayer server instances up and down based on player count.
As I'm sure you know, stuttering is predominantly the result of GC, as Unity uses C# and so never has direct control over memory management.
The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.
If you have the time and inclination to manually control memory and pointers to ensure consistent "to the metal" performance for every platform you release on (and their own individual quirks), Unity is probably a bad choice.
If you don't have the time or inclination, want something that will run on multiple platforms (including the often-neglected Linux and Mac machines), and are fine with, or can design around, the required GC, Unity is probably a good choice.
It's swings and roundabouts; there's definitely a reason Unity is still a popular choice, and it's the traditional trade-off between ease of use and performance.
Idk, it's not like devs were exactly lining up to use CryEngine, were they?
So I take it you have not played Ori?

I have played Ori. I even put it on my top 10 last year.
As I'm sure you know, stuttering is predominantly the result of GC, as Unity uses C# and so never has direct control over memory management.

The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.

If you have the time and inclination to manually control memory and pointers to ensure consistent "to the metal" performance for every platform you release on (and their own individual quirks), Unity is probably a bad choice.

In modern C++, you can get extremely predictable memory allocation and deallocation behaviour without anything I would call a "to the metal" implementation effort.
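A small illustration of what "predictable" means here (a generic sketch, not from any particular engine): with RAII and smart pointers, the exact point of deallocation is visible in the code, and there's no collector to pause the frame:

```cpp
#include <iostream>
#include <memory>
#include <vector>

struct Texture {
    std::vector<char> pixels;
    explicit Texture(std::size_t bytes) : pixels(bytes) {}
    ~Texture() { std::cout << "texture freed\n"; }
};

void loadScreen() {
    // unique_ptr: freed deterministically at the closing brace below,
    // not whenever a garbage collector decides to run.
    auto background = std::make_unique<Texture>(4 * 1024 * 1024);
    // ... draw the loading screen ...
}   // <- "texture freed" prints exactly here, every run

int main() {
    loadScreen();
    std::cout << "back in main, memory already returned\n";
    return 0;
}
```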
A plug-in for Maya/Max is probably just as good as exporting in FBX technically, but that surely leaves a ton of practical problems.
Question: what happens if you want to work with Blender/Modo/Cinema4D/Houdini instead of Maya?
Actually, Unity 5 uses a much more up-to-date GC when using the IL2CPP compiler.
https://aws.amazon.com/service-terms/
57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.
So Lumberyard can legally be used for life-critical or safety-critical systems only in the event that the events of "The Walking Dead" happen in real life?
The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.
Yes. You can still have leaks in these garbage-collected languages if it runs for a long time and you don't release references. And it can be worse, since you aren't always actively thinking about it. I prefer the option of manual memory management.

I find that it tends to be the opposite, meaning you really do have to worry about memory management often even more (in terms of time spent on it) than in, say, C++ when using GC languages for high-performance stuff, but instead of having full control you are now dealing with a black box and will never be able to get the necessary performance whatever you do. I have no idea how Unity is supposed to make it in VR with their current engine. Even the slightest stutter is extremely disruptive to the experience.
Yes. You can still have "leaks" in these garbage collected languages if it runs for a long time and you don't release references. I always preferred manual memory management. The option is nice.
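For the C++ flavour of the same problem: reference counting leaks too if you leave a cycle behind, which is exactly what std::weak_ptr is for (a minimal sketch):

```cpp
#include <iostream>
#include <memory>

struct Node {
    std::shared_ptr<Node> next;  // strong reference
    std::weak_ptr<Node> prev;    // weak reference: breaks the cycle
    ~Node() { std::cout << "node destroyed\n"; }
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;  // a keeps b alive
    b->prev = a;  // does NOT keep a alive

    // If prev were a shared_ptr, a and b would hold each other's
    // refcounts above zero forever and neither destructor would run.
    // With weak_ptr, both "node destroyed" lines print at scope exit.
    return 0;
}
```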
The reason Unity's GC stutters is that it uses a rather outdated GC along with an outdated version of Mono, which the devs have acknowledged is a problem, but they can't just immediately fix it due to the numerous other potential issues that would result.
That being said, getting around the GC is really just a matter of good coding practice and making sure collections only happen when they won't inconvenience the user. Ori is very good about this; the stutters I've noticed are rare and pretty much always during between-area transitions. The GC isn't exactly a big issue for my own projects, either.
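The "good coding practice" in question is mostly "don't allocate per frame". The pattern looks the same in any language; sketched here in C++ to stay consistent with the rest of the thread, it's just reusing one preallocated buffer instead of churning allocations:

```cpp
#include <vector>

struct Particle { float x, y, vx, vy; };

// Wasteful: a fresh vector every frame means a heap allocation (and,
// in a managed language, future GC work) 60 times a second.
std::vector<Particle> stepAllocating(const std::vector<Particle>& in) {
    std::vector<Particle> out = in;  // new allocation each call
    for (auto& p : out) { p.x += p.vx; p.y += p.vy; }
    return out;
}

// Steady: one buffer allocated up front and reused; steady-state
// frames do zero heap allocations, so nothing piles up for a collector.
void stepInPlace(std::vector<Particle>& particles) {
    for (auto& p : particles) { p.x += p.vx; p.y += p.vy; }
}

int main() {
    std::vector<Particle> particles(10000, {0, 0, 1, 1});
    for (int frame = 0; frame < 600; ++frame) {
        stepInPlace(particles);  // no allocations inside the loop
    }
    return 0;
}
```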
I thought it was also because Unity and Xamarin don't want to negotiate a new license or whatever, which is why we've been stuck on some old-as-shit version of MonoDevelop for years. I feel like whatever the root cause is, it's going to be Unity's eventual undoing.
Not when they already have it. EA will have all of these things, with a team doing ops 24/7, or at least pass that work onto a third party. You still need these things if you're running on AWS. AWS just gives you hardware on tap; it doesn't manage it for you. They'd still need a team of engineers to manage and monitor it, exactly like what I'm currently doing as I'm typing this. They still need to create their own rules for elasticity, define their own firewall rules, templates, and builds, and have it monitored round the clock. Amazon doesn't do that for you. EA will have the hardware and the people. To pay a premium for that, when they already have it, is stupid.

Why wouldn't they? The cost of an engineering team to manage a physical server cluster (whether they run VMs or not) plus the cost of the hardware itself is faaaaar greater than the fraction of people you'd need to build an intelligent, self-monitoring, real-time-reactive systems infrastructure that lives somewhere like GCE or AWS. I've done the transition multiple times now, and the biggest pain in the ass is the initial build. You need to re-create your infrastructure in the new environment and then optimize it with the new tools you have at your disposal. Then you schedule a cutover. There's a greater incurred cost during the time you have them both running in tandem, but once you cut over, the savings are quite stunning.
They use it, plain and simple, for dedicated servers. Dedicated servers which spawn and are destroyed on demand, the exact same thing AWS does.

Looking into it, Halo 5 used Azure for two things.
1) Companies, and the social features and tracking of the people contained in those companies. A cool use of cloud DB structures, but nothing revolutionary. My gripe is more with game studios not scaling out to cater to connection/session load in real time (rather than in hours, days, or weeks). Like when you can't connect to a multiplayer game because the servers are overloaded, which shouldn't be a thing we still see in this day and age for more than a few minutes with, and I hate to be a broken record, things like auto-scaling.
2) They used Azure services for event/logic/telemetry tracking and processing. While this is absolutely crucial for making information-based decisions about how to tweak the game itself and the servers running multiplayer operations, it's only part of the solution. The metric/event data gathered by this pipeline can and should be used to fire off real-time events that adjust the size and scope of the infrastructure automatically, on the fly, with no manual intervention required. Again, that's only one side of this. It seems like a lot of this data was gathered for single-server performance tweaking and game-engine tweaking to make the experience smoother for people, but seemingly wasn't used on the systems-infrastructure side of things, at least not in real time. Event data was collected for the explicit use of later tweaking, not real-time reaction. The telemetry pipeline itself could scale automatically, but there was seemingly nothing in place on the server-side infrastructure for handling load at scale.
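To make the "telemetry should drive the infrastructure" point concrete, here's a toy control loop (every function here is a hypothetical stub; the shape of it is just metrics feeding scaling decisions instead of only dashboards):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical hooks, stubbed for illustration: a real version would
// read the telemetry pipeline and call the cloud provider's scaling API.
static int g_servers = 4;
int  currentSessions()     { return 750; }  // live player sessions
int  currentServerCount()  { return g_servers; }
void setServerCount(int n) { std::printf("scaling to %d servers\n", n); g_servers = n; }

int main() {
    const int sessionsPerServer = 100;  // capacity assumption
    const int headroomServers   = 2;    // absorb sudden spikes

    for (int tick = 0; tick < 3; ++tick) {
        // The same metrics gathered for after-the-fact tuning can drive
        // scaling decisions in real time, with no human in the loop.
        int needed = currentSessions() / sessionsPerServer + headroomServers;
        if (needed != currentServerCount()) {
            setServerCount(needed);
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    return 0;
}
```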
polarizeme said: so why the fuck has no one charged head-first into this?

You're asking several different questions rolled into one, so you'd need to be a lot more specific.
In modern C++, you can get extremely predictable memory allocation and deallocation behaviour without anything I would call a "to the metal" implementation effort.
I find that it tends to be the opposite, meaning you really do have to worry about memory management often even more
It's not that time consuming unless you're trying to do crazy stuff on very limited resources.
Hmm... so it seems like they're at least modularizing instances with game worlds, but that would still mean that, especially with only one master server handling the load of all the metadata pertaining to the state of the game-world instances, they could run into load issues if there are too many players in the same gamespace. I'll have to read up on it more, but it seems like they're on the right path, though they're bottlenecking themselves by saying one game world has to equal one VM instead of architecting the software to scale across multiple instances and allowing some form of auto-scaling. Still happy to see that they're at least being more creative with the infra, though. Baby steps. *insert Bill Murray face*
i don't see many valid pros for gc memory when it comes to game engines. except "i don't have to learn stuff", which really is self-deception, not a pro.

Any discussion about managed versus unmanaged is going to have proponents on both sides, because there absolutely are valid pros and cons for each, and how pro a pro is or how big a con is is entirely dependent on the project. That's especially true in production environments, where you can have multiple programmers of different skill levels and different language familiarity all working on interlinking systems, often without the luxury of peer review or even commented code.