
Amazon releases new free game engine Lumberyard (based on CryEngine)


1) 1,000,000 DAU @ $1.50 per 1,000 = $1,500, not $1.5m

2) You can only use GameLift if you use Lumberyard (according to Amazon's verbiage). The functionality of GameLift exists outside of that product; you just need to engineer it yourself. The tools are there. So even if GameLift were crazy expensive, which it isn't even close to being, you could still architect a good auto-scaling infrastructure for multiplayer games.
 
Titanfall, Gears UE and Halo 5 all use Azure properly, with true elasticity. Respawn is the best example of this, with their solution documented. It's a very safe bet that EA implements the same technology in the servers they provide for their games. It'd be more of a shock if they didn't.

Too many people rely on the big names to provide SaaS services, whereas I could set my own up in about half a day with tools like Puppet and Xen. Which is probably what a lot of 3rd parties do, as they'll already have the hardware to support it.

Depending on the popularity of your game, buying enough hardware to build a cluster of machines running Xen VMs that could handle millions of gamers (plus the crew to install and configure everything) would be so much more costly than building that infrastructure somewhere like AWS. If you already have a lot of the hardware it may make a little more sense.

But again, just having VMs isn't the solution. It's what you can do with the myriad tools available for VMs. Rule-based auto-scaling is the big key here. Things like Puppet, Chef and especially Terraform go a long way in simplifying and orchestrating what gets scaled, and that's part of why they're such great tools.
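To make "rule-based auto-scaling" concrete, the kind of rule those tools let you express boils down to something like this toy sketch (the function, thresholds and defaults are all made up for illustration, not any provider's actual API):

```cpp
#include <algorithm>

// Hypothetical scaling rule: pick a target instance count from the
// fleet's average CPU load. Real systems add cooldowns, multiple
// metrics, etc. -- this just shows the core decision.
int desired_instances(int current, double avg_cpu,
                      int min_inst = 2, int max_inst = 100) {
    if (avg_cpu > 0.75)                          // under pressure: scale out
        return std::min(current * 2, max_inst);
    if (avg_cpu < 0.25 && current > min_inst)    // mostly idle: scale in
        return std::max(current / 2, min_inst);
    return current;                              // within band: hold steady
}
```

The same rule handles both directions, which is the "not a one-way street" point: the fleet shrinks when demand drops just as it grows when demand spikes.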

Titanfall only used Azure for AI, if I remember correctly, and it didn't really seem noticeably better in gameplay than any other titles.

I'm unaware of what Gears or Halo 5 did, so I'll have to look into that. It would be nice to see some good examples out there other than just the new Crackdown. =D
 

tuxfool

Banned
That doesn't really change the pros and cons of managed versus unmanaged code. Blueprint is basically just C++ code snippets wrapped in a GUI.

You should know that Blueprint isn't actually C++, nor does it compile down to C++. UE4 does also do garbage collection, if you desire (it seems to do it by default).
 
The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.
In C++ it's rare to have to "worry" about memory management; you don't need anything beyond smart pointers with reference counting if you don't want to. The most interesting memory-management task I've had in C++ was a long time ago, on a Wii game where we had to avoid memory fragmentation, so we tracked all the memory allocations of every system and used memory pools where needed.
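For instance, plain reference counting with std::shared_ptr gives deterministic cleanup with essentially no ceremony (a toy example; the Texture type is made up):

```cpp
#include <memory>
#include <vector>

struct Texture { int id; };

// Shows shared_ptr reference counting: the Texture is freed the moment
// the last owner goes away -- no manual delete, no GC pause.
long share_and_count() {
    auto tex = std::make_shared<Texture>(Texture{42});  // count == 1
    std::vector<std::shared_ptr<Texture>> scene;
    scene.push_back(tex);                               // count == 2
    long owners = tex.use_count();
    scene.clear();                                      // back to 1
    return owners;
}   // last owner gone here: Texture destroyed deterministically
```

The destruction point is knowable from the code alone, which is exactly what a tracing GC can't promise.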
 
Maybe because most studios realize their game will be gone once the game isn't financially supporting server costs anymore and they want their love child to live on for more than two years?

But everything you said is *exactly* why services like AWS are the answer. Auto-scaling infrastructure isn't a one-way street. If your user base numbers drop, the infrastructure scales down so that you don't have more in place than necessary. This is why tying performance monitoring to reactive scaling is so helpful.

Or they remember the opposition Microsoft felt with their always online plans? People love to be able to play offline. So aside from multiplayer games it doesn't make sense to rely on cloud services. You're just binding yourself to someone without gaining anything in return.

I'm not talking about single player games. I'm not a huge fan of forcing people to connect online to play single player. I'm specifically talking about the trainwreck that is online game launches. I honestly can't think of a massively successful multiplayer game in the last ___ years that didn't eat shit on day one because infrastructure couldn't keep up with demand/traffic. It's shameful. GTA was hit with issues at all three of its launches. Hell, even The Last of Us had issues with multiplayer when it re-launched on PS4. It has seemingly become the norm, not the exception.

In terms of single player stuff, there are absolutely ways to leverage services like AWS, Azure and GCE on the development side, too. That "cloud" aspect doesn't have to bleed into the release and play of the game itself if you don't want it to, unless it's to help serve up things like downloadable content or something.

A non cloud game can sell for years and years. A cloud game can sell for two, maybe three years. Cloud is the solution to a problem the consoles created when they launched with weak hardware, that wasn't even good enough to give average PCs a run for the money for one year.

So why exactly should a developer be willing to pay 50k - 200k a month to a cloud service when he can do everything locally and ends up with a million more profit?

I think there's some miscommunication. I'm not advocating games that live "in the cloud" and have all the calculation and horsepower off-site somewhere. And the problem I'm talking about has absolutely nothing to do with console hardware. It has everything to do with dev studios and publishers serving up multiplayer and online-connected games in a very archaic way when there are tons of services and tools that can do a better job of it for cheaper with less manpower. They wouldn't be losing money, they'd be saving a shitload of it. I speak from experience on the matter.
 
They don't have the money to either buy the hardware or manage it 24/7. That is extremely costly and who knows if you will even need that. Who would want to drop a couple hundred thousand dollars in hardware and staff two sys ops when you can run on AWS at 1/10th the cost?

You could always plan to swap. Soft launch in a beta, see your stress levels and demand. Get an idea of cost and go from there to continue with cloud or migrate to self managed.

The elasticity of the cloud is also handy after the first 3~4 months following release, when a lot of people have moved on.

Yup! <3
 

Deku Tree

Member
https://aws.amazon.com/service-terms/

57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.

So Lumberyard can legally be used for life-critical or safety-critical systems only in the event that the events in "The Walking Dead" happen in real life?
 
Look at Destiny (they're using automatic instancing) though I'm not sure if they're using cloud providers, Star Citizen which is using Google Compute Engine.

Bungie bought a datacenter in Vegas and rents space at other global data centers to run Destiny. They have something like 100 systems engineers managing that infrastructure. Given that they've admitted "a lot" or "most" of that insane budget was spent on this infrastructure, I would be absolutely astonished if they had an auto-scaling system on their hands. Similarly, between the salary cost of 100 seasoned systems engineers + the data center buyout + the additional rentals + any hardware they bought, I find it extremely implausible that it was less money and time than doing this somewhere like AWS.

Take a look at Netflix. I've said it before, but they are a shining beacon of why this is so much more affordable than doing things the old way.

Star Citizen: I'm excited to see where that game goes in general, so I'll see if I can do some digging for information on how they're handling server-side stuff. *nervously crosses fingers*
 
Star Citizen: I'm excited to see where that game goes in general, so I'll see if I can do some digging for information on how they're handling server-side stuff. *nervously crosses fingers*

I do not know much about networking and server infrastructure, but SC uses Google's service to spin up cloud servers per instanced game (ATM). I am pretty sure the eventual plan is to build up off of this so that everyone is eventually in the same universe under a master server, with different zones having multiple player instances to handle high-traffic areas (spun off to localised servers). Players would then be transferred across instances (in areas of dead space, for example) to match them up with real players that happen to be on different instanced servers in those same physical spaces. So the master server handles all the metadata about the game's systems and general zone locations, and the localised servers do all the very player-visible interactions.

Otherwise you would be spending most of your time flying around trillions of km of space and only encountering NPCs. Though there will be a PvP slider of sorts which would filter who and when and how often such real player interactions occur.

I am pretty sure this is how they have been explaining their plan for quite some time.
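To be clear, that's all speculation about their plan, but the zone/instance idea can be sketched roughly like this (everything here is made up for illustration, definitely not CIG's actual code; the cap is an arbitrary number):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: a master process tracks instances per zone and
// routes a joining player to the first instance with room, spinning up
// a new one when every existing instance for that zone is full.
struct Instance { int players = 0; };
constexpr int kCap = 50;  // illustrative per-instance player cap

int route_player(std::map<std::string, std::vector<Instance>>& zones,
                 const std::string& zone) {
    auto& list = zones[zone];
    for (std::size_t i = 0; i < list.size(); ++i)
        if (list[i].players < kCap) {       // room in an existing instance
            ++list[i].players;
            return static_cast<int>(i);
        }
    list.push_back(Instance{1});            // all full: spin up a new one
    return static_cast<int>(list.size()) - 1;
}
```

The master only tracks which instances exist and how full they are; the gameplay itself lives on whatever instance the player lands in, which matches the "metadata upstairs, interactions downstairs" split described above.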
 
Q

Queen of Hunting

Unconfirmed Member
Amazon have devs from many huge studios, and recently a founder of ArenaNet jumped to Amazon. Kind of impressive, all the names they have and what they are building.
 

leeh

Member
Depending on the popularity of your game, buying enough hardware to build a cluster of machines running Xen VMs that could handle millions of gamers (plus the crew to install and configure everything) would be so much more costly than building that infrastructure somewhere like AWS. If you already have a lot of the hardware it may make a little more sense.

But again, just having VMs isn't the solution. It's what you can do with the myriad tools available for VMs. Rule-based auto-scaling is the big key here. Things like Puppet, Chef and especially Terraform go a long way in simplifying and orchestrating what gets scaled, and that's part of why they're such great tools.

Titanfall only used Azure for AI, if I remember correctly, and it didn't really seem noticeably better in gameplay than any other titles.

I'm unaware of what Gears or Halo 5 did, so I'll have to look into that. It would be nice to see some good examples out there other than just the new Crackdown. =D
What I'm saying is that companies like EA aren't going to suddenly shift to using AWS when they already have the resources available. It's not bloody hard or even that expensive to do this stuff. The most expensive bit will be the cost of hiring and paying the people to manage it, if you don't have them.

Regarding the bolded: it's on a platform which expands according to scaling rules. Gears and Halo 5 used Azure, which, again, uses the platform in a dynamically expanding (and shrinking) state.

It's done, and probably has been for a while.

Just because Crackdown is doing server side physics, doesn't mean that every other game isn't expanding on load.
 

tuxfool

Banned
Bungie bought a datacenter in Vegas and rents space at other global data centers to run Destiny. They have something like 100 systems engineers managing that infrastructure. Given that they've admitted "a lot" or "most" of that insane budget was spent on this infrastructure, I would be absolutely astonished if they had an auto-scaling system on their hands. Similarly, between the salary cost of 100 seasoned systems engineers + the data center buyout + the additional rentals + any hardware they bought, I find it extremely implausible that it was less money and time than doing this somewhere like AWS.

Yeah, but they had one of the, if not the, smoothest online launches of any modern game to date.

Say what you will about that game, it launched smoothly and runs pretty smoothly on a daily basis. I guess it is to be expected if they staffed that many people on their infrastructure.
 
I do not know much about networking and server infrastructure, but SC uses Google's service to spin up cloud servers per instanced game (ATM). I am pretty sure the eventual plan is to build up off of this so that everyone is eventually in the same universe under a master server, with different zones having multiple player instances to handle high-traffic areas (spun off to localised servers). Players would then be transferred across instances (in areas of dead space, for example) to match them up with real players that happen to be on different instanced servers in those same physical spaces. So the master server handles all the metadata about the game's systems and general zone locations, and the localised servers do all the very player-visible interactions.

Otherwise you would be spending most of your time flying around trillions of km of space and only encountering NPCs. Though there will be a PvP slider of sorts which would filter who and when and how often such real player interactions occur.

I am pretty sure this is how they have been explaining their plan for quite some time.

Hmm... so it seems like they're at least modularizing instances with game worlds, but that would still mean that, especially with only one master server handling the load of all metadata pertaining to the state of the game world instances, they could run into load issues if there are too many players in the same gamespace. I'll have to read up on it more, but it seems like they're on the right path, though they're bottlenecking themselves by saying one game world has to equal one VM instead of architecting the software to scale across multiple instances and allowing some form of auto-scaling. Still happy to see that they're at least being more creative with the infra though. Baby steps. *insert Bill Murray face*
 

JaggedSac

Member
I sincerely don't have any rude intentions behind this question, but: did you actually read what I wrote? I know it's an insanely massive wall of ranting text, but I very definitely answered your question.

My question is: what the hell is taking the gaming industry so long to adopt this technology?

Using AWS ≠ using AWS properly. Just having virtualized instances instead of physical servers doesn't mean your infrastructure is better. I keep saying you, but this isn't directed at *you*, I promise. I just mean that I know of a lot of game studios that have virtualized servers living places like AWS, but they aren't actually architecting an infrastructure that makes use of all the tools available to them by doing so.

Even more terrifying, if there *are* studios that claim to be making use of all these tools (again, not just having VMs in the cloud), then how are they doing such a piss poor job of it?

You should check out Titanfall. They're doing great stuff with their cloud-based implementation. Also, MS' entire matchmaking/party system relies on scalable resources. Hell, if you start a party chat, you've got a dedicated server instance spun up/allocated for you.
 
What I'm saying is that companies like EA aren't going to suddenly shift to using AWS when they already have the resources available. It's not bloody hard or even that expensive to do this stuff. The most expensive bit will be the cost of hiring and paying the people to manage it, if you don't have them.

Why wouldn't they? The cost of an engineering team to manage a physical server cluster (whether they run VMs or not) + the cost of the hardware itself is faaaaar greater than the fraction of people you'd need to build an intelligent, self-monitoring, real-time reactive systems infrastructure that lives somewhere like GCE or AWS. I've done the transition multiple times now, and the biggest pain in the ass is the initial build. You need to re-create your infrastructure in the new environment and then optimize it with the new tools you have at your disposal. Then you schedule a cutover. There's a greater incurred cost during the time you have both running in tandem, but once you cut over the savings are quite stunning.

Regarding the bolded: it's on a platform which expands according to scaling rules. Gears and Halo 5 used Azure, which, again, uses the platform in a dynamically expanding (and shrinking) state.

It's done, and probably has been for a while.

Just because Crackdown is doing server side physics, doesn't mean that every other game isn't expanding on load.

Looking into it, Halo 5 used Azure for two things.

1) Companies, and the social abilities and tracking of the people contained in those companies. Cool use of cloud DB structures, but nothing revolutionary. My gripe is more with game studios not scaling out to cater to the load demand of connections/sessions in real time (rather than hours, days or weeks). Like when you can't connect to a multiplayer game because the servers are overloaded, which shouldn't be a thing we still see in this day and age for more than a few minutes with, and I hate to be a broken record, things like auto-scaling.

2) They used Azure services for event/logic/telemetry tracking and processing. While this is absolutely crucial in making information-based decisions about how to tweak the game itself and the servers running multiplayer operations, it's only part of the solution. Some of the metric/event data gathered by this pipeline can and should be used to fire off real-time events that adjust the size and scope of infrastructure automatically, on the fly, with no manual intervention required. Again, that's only one side of this. It seems like a lot of this data was gathered for single-server performance tweaking and game engine tweaking to make the experience smoother for people, but seemingly wasn't used on the systems infrastructure side of things, at least not in real time. It seems like event data was collected for the explicit use of later tweaking, not real-time adjustment. The telemetry pipeline could scale automatically, but there was seemingly nothing in place on the server-side infrastructure for handling load at scale.
 

LordRaptor

Member
You should know that Blueprint isn't actually C++, nor does it compile down to C++. UE4 does also do garbage collection, if you desire (it seems to do it by default).

I'm not familiar enough with Blueprint to dispute this categorically, but it seems weird to me that, if they're not prepackaged C++ code snippets, the method for creating a custom Blueprint node is to write C++ code.
 

JaggedSac

Member
Halo 5 is dynamically provisioning resources for pretty much everything. Theater, forge, co-op, multiplayer, etc. Perhaps you were reading about Halo 4, which did only use Azure for analytics.

Oh I see. You read that blog post about them using the Azure NoSQL stuff. That does not mean they did not use Azure for anything else. That is just one thing they used it for.
 
Yeah, but they had one of the if not the smoothest online launches of any modern game to date.

Say what you will about that game, it launched smoothly and runs pretty smoothly on a daily basis. I guess it is to be expected if they staffed that many people on their infrastructure.

Oh, no doubt. Absurdly smooth by comparison to most. I'm just saying that they could have achieved the same thing with a DevOps crew of 5 and a fraction of the cost. I actually wish I was exaggerating when I said that, because it makes me sad to think about how much money, physical hardware and physical space they spent on something that didn't need to cost nearly as much to do the same thing. It's the equivalent of hemi engine trucks instead of better vehicle efficiency for power and longevity. =/
 
Halo 5 is dynamically provisioning resources for pretty much everything. Theater, forge, co-op, multiplayer, etc. Perhaps you were reading about Halo 4, which did only use Azure for analytics.

Oh I see. You read that blog post about them using the Azure NoSQL stuff. That does not mean they did not use Azure for anything else. That is just one thing they used it for.

I'm not sure what blog post you're referring to; if you'd care to share it, I'd be down to read it though.

I'm definitely talking about Halo 5, not 4. The only things I could find with any detail (from Halo devs or Azure folks) about scaling were the event pipeline and the company tracking.
 

JaggedSac

Member
I'm not sure what blog post you're referring to; if you'd care to share it, I'd be down to read it though.

I'm definitely talking about Halo 5, not 4. The only things I could find with any detail (from Halo devs or Azure folks) about scaling were the event pipeline and the company tracking.

Sorry, on mobile. If you google Halo 5 Azure, the first item is a blog post about them using Azure DocumentDB to manage the company stuff like you mention.

Read this blog post from Respawn too:

www.respawn.com/news/lets-talk-about-the-xbox-live-cloud


Halo 5 is using this same stuff.
 
Sorry, on mobile. If you google Halo 5 Azure, the first item is a blog post about them using Azure DocumentDB to manage the company stuff like you mention.

Read this blog post from Respawn too:

www.respawn.com/news/lets-talk-about-the-xbox-live-cloud


Halo 5 is using this same stuff.

Ah, sweet. Thanks for the link! I did see the one post you mentioned (DocumentDB) but it was definitely not the only thing I read. =]

Reading the Respawn post, it seems like they're definitely on the right track, though still seemingly suffering from a lot of misconceptions. The fact that the price of AWS is even remotely compared to the price of RackSpace is laughable; as much as I hate this saying, it's comparing apples and oranges even if they meant RackSpace Cloud.

I knew about the additional AI and physics and stuff, but it's interesting to see that they actually used VMs for game world scaling. I'm a little shocked that it was such a milestone for them, given the small size of most of the maps. I'm curious to know if they used this same auto-scaling for connection load, given that any time after launch that I tried to play Titanfall on PC, I was constantly waiting around for server connections to function properly so that matchmaking could begin.

I think I said this earlier, but my concern for most of this stuff is in the realm of multiplayer; I'm actually not sure I'm too fond of how much XB1 titles are pushing this type of thing for singleplayer specifically because it means you either need internet for singleplayer or you're going to get a shittier experience offline.

Maybe one day in 15 years when everyone has gigabit fiber at home... =P
 

JaggedSac

Member
So they built this powerful system to let us create all sorts of tasks that they will run for us, and it will scale up and down automatically as players come and go.

With the Xbox Live Cloud, we don't have to worry about estimating how many servers we'll need on launch day.

These quotes definitely imply they are scaling the number of multiplayer server instances up and down based on player count. When I get home I will paste some items from the Xbone SDK related to dynamic scaling of MP assets. A lot of it is built into the SDK.

I'm sure Amazon offers similar things with their SDK. It's only good news to see everyone going in that direction.
 
These quotes definitely imply they are scaling the number of multiplayer server instances up and down based on player count.

It's definitely possible. The quote seemed just vague enough, though, that it's hard to tell if they're auto-scaling instances that handle general load and connectivity, or if they're only auto-scaling instances that do the events, AI logic, physics calculations, etc., based on logged-on player count and demand.

If it's option #2, it's an awesome use of scaling but matters less if the base world instances aren't scaling with connection requests in the first place.

It's strange to me that, if they were auto-scaling based on actual connection load/demand, they made such a public spectacle of the AI & physics calculations in the cloud instead of touting "Hey! You will actually be able to connect and play on day 1 without getting lost forever in connection queues!"

Haha. That's the more interesting problem that needs immediate solving to me. Using cloud scaling for physics is an awesome idea in MP games, but it doesn't matter if people are having a shitty/impossible time connecting to play in the first place.

The good news is: people are learning.
The bad news is: this problem could have been solved years ago and it's absolutely batshit crazy to me, as someone who does this for a living, that it's not more widely in use.
 
As I'm sure you know, stuttering is predominantly the result of GC, as Unity uses C# and so never has direct control over memory management.

The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.

If you have the time and inclination to manually control memory and pointers to ensure consistent "to the metal" performance for every platform you release on (and their own individual quirks), Unity is probably a bad choice.
If you don't have the time or inclination, and want something that will run on multiple platforms - including often neglected Linux / Mac machines and you're fine with or can design around required GC, Unity is probably a good choice.

It's swings and roundabouts; there's definitely a reason Unity is still a popular choice, and it's the traditional trade-off between ease of use and performance.

This is patently false for any game of significant complexity. Otherwise articles like this wouldn't exist:

http://www.gamasutra.com/blogs/Wend...nagement_for_Unity_Developers_part_1_of_3.php

There's a lot of hidden bullshit in developing in Unity that can really add up if you don't know what you're doing. Like the nightmarish way Unity handles foreach loops, ugh.
 

TheSeks

Blinded by the luminous glory that is David Bowie's physical manifestation.
Idk, it's not like devs were exactly lining up to use CryEngine, were they?

Exactly my first thought on "Amazon licenses CryEngine to allow developers to use it for free."

I'm like "people use that over Unreal!?"
 

Durante

Member
So I take it you have not played Ori?
I have played Ori. I even put it on my top 10 last year.

However,
1) I wouldn't exactly qualify it as a large-scale, intensive game. Yes, it's probably one of the - if not the - most graphically intensive 2D game ever made, but that's still not comparable to a large-scale high-end 3D game.
2) Even Ori does the trademark intermittent Unity stutter. Just a lot less frequently than most Unity games.

As I'm sure you know, stuttering is predominantly the result of GC, as Unity uses C# and so never has direct control over memory management.

The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.

If you have the time and inclination to manually control memory and pointers to ensure consistent "to the metal" performance for every platform you release on (and their own individual quirks), Unity is probably a bad choice.
In modern C++, you can get extremely predictable memory allocation and deallocation behaviour without anything I would call a "to the metal" implementation effort.

And if there is one particular data type or use case which does require a bit more effort (for great performance payoffs), then C++ scales very neatly from no/minimal effort on memory management all the way to controlling exactly where each byte resides and at which time, with plenty of stops in between.
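As a toy example of the controlled end of that spectrum, here's a minimal bump ("arena") allocator: one upfront block, pointer-bump allocation, everything freed at once. It's a sketch only; real engine allocators also handle alignment, thread safety and so on.

```cpp
#include <cstddef>
#include <vector>

// Bump allocator: O(1) allocation, zero fragmentation, and you decide
// exactly when memory comes back (e.g. reset once per frame or level).
class Arena {
    std::vector<std::byte> buf_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t bytes) : buf_(bytes) {}
    void* alloc(std::size_t n) {
        if (used_ + n > buf_.size()) return nullptr;  // out of space
        void* p = buf_.data() + used_;
        used_ += n;                                   // just bump the offset
        return p;
    }
    void reset() { used_ = 0; }   // frees everything at once
    std::size_t used() const { return used_; }
};
```

Twenty lines buys fully predictable allocation timing, which is the point: the effort scales with how much control you actually need.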
 
The reason Unity's GC stutters is that it uses a rather outdated GC along with an outdated version of Mono, which the devs have acknowledged is a problem but can't immediately fix due to the numerous other potential issues that would result from it.

That being said, getting around the GC is really just a matter of good coding practice and making sure to only allocate when it won't inconvenience the user. Ori is very good about this; the stutters I've noticed are rare and pretty much always during between-area transitions. The GC isn't exactly a big issue for my own projects, either.
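The practice itself is language-agnostic: allocate up front, reuse per frame, and save any unavoidable allocation for moments the player won't notice, like a loading transition. Sketched here in C++ (the particle buffer is a made-up example, but the same reuse pattern is what kills per-frame GC garbage in C#):

```cpp
#include <cstddef>
#include <vector>

struct Particle { float x, y, vx, vy; };

// One buffer reused across frames instead of allocating per frame.
// clear() keeps the vector's capacity, so once the buffer has grown
// to its peak size, steady-state frames allocate nothing at all.
class ParticleBuffer {
    std::vector<Particle> live_;
public:
    void begin_frame() { live_.clear(); }        // keeps capacity
    void spawn(const Particle& p) { live_.push_back(p); }
    std::size_t count() const { return live_.size(); }
    std::size_t capacity() const { return live_.capacity(); }
};
```

In a GC'd engine the payoff is that nothing becomes garbage mid-gameplay, so the collector has nothing to pause for.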
 

Parham

Banned
A plug-in for Maya/Max is probably just as good as exporting in FBX technically, but that surely leaves a ton of practical problems.

Question: what happens if you want to work with Blender/Modo/Cinema4D/Houdini instead of Maya?

Yeah, as it stands, if you use anything other than Maya or Max, you're SOL.
 

Gattsu25

Banned
https://aws.amazon.com/service-terms/

57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.

So Lumberyard can legally be used for life-critical or safety-critical systems only in the event that the events in "The Walking Dead" happen in real life?

Yes, it's an awfully altruistic move of them
 

Skinpop

Member
The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.

I find that it tends to be the opposite: when using GC languages for high-performance stuff, you really do have to worry about memory management, often even more (in terms of time spent on it) than in, say, C++. But instead of having full control you are now dealing with a black box, and you will never be able to get the necessary performance whatever you do. I have no idea how Unity is supposed to make it in VR with their current engine; even the slightest stutter is extremely disruptive to the experience.
 
I find that it tends to be the opposite: when using GC languages for high-performance stuff, you really do have to worry about memory management, often even more (in terms of time spent on it) than in, say, C++. But instead of having full control you are now dealing with a black box, and you will never be able to get the necessary performance whatever you do. I have no idea how Unity is supposed to make it in VR with their current engine; even the slightest stutter is extremely disruptive to the experience.
Yes. You can still have leaks in these garbage collected languages if it runs for a long time and you don't release references. And it can be worse since you aren't always actively thinking about it. I prefer the option of manual memory management.
 

Skinpop

Member
Yes. You can still have "leaks" in these garbage collected languages if it runs for a long time and you don't release references. I always preferred manual memory management. The option is nice.

Yeah, and people vastly overrate how much time you spend on manual memory management. It's not that hard and it's not that time-consuming unless you are trying to do crazy stuff on very limited resources.
 

Quasar

Member
It's curious... Crytek providing a license to let Amazon fork it and give it away.

Makes me wonder how far along they are in making a next-gen CryEngine.
 
The reason Unity's GC stutters is that it uses a rather outdated GC along with an outdated version of Mono. The devs have acknowledged this is a problem, but they can't just fix it immediately because of the numerous other issues that would result.

That being said, getting around the GC is really just a matter of good coding practice: making sure allocations (and thus collections) only happen when they won't inconvenience the player. Ori is very good about this; the stutters I've noticed are rare, and pretty much always during between-area transitions. The GC isn't exactly a big issue for my own projects, either.
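The "good coding practice" here mostly boils down to reusing objects instead of allocating per frame. A C# snippet would need Unity itself, so here's the same object-pool pattern sketched in plain C++ (the Bullet type and pool sizes are hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical projectile; any small game object works the same way.
struct Bullet { float x = 0, y = 0; bool alive = false; };

// Fixed-size pool: objects are recycled, never allocated mid-game, so
// there are no per-frame allocations (and, in C#, no GC pressure).
class BulletPool {
public:
    explicit BulletPool(std::size_t n) : bullets_(n) {}

    Bullet* spawn() {
        for (auto& b : bullets_)
            if (!b.alive) { b.alive = true; return &b; }
        return nullptr;  // pool exhausted: drop the effect, don't allocate
    }

    void despawn(Bullet* b) { b->alive = false; }

    std::size_t liveCount() const {
        std::size_t n = 0;
        for (const auto& b : bullets_) n += b.alive;
        return n;
    }

private:
    std::vector<Bullet> bullets_;
};
```

The same idea in Unity is pre-instantiating a pool of GameObjects and toggling them active/inactive instead of calling Instantiate/Destroy during play.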

I thought it was also because Unity and Xamarin don't want to negotiate a new license or whatever, which is why we've been stuck on some old as shit version of MonoDevelop for years. I feel like, whatever the root cause is, it's going to be Unity's eventual undoing.
 
I'm considering downloading this and getting started on doing some simple 2.5D games with it.. I have no past game developing experience and this will be a weekend hobby for me. Where can I find pointers towards coding tutorials and free assets to use with CryEngine/Lumberyard?
 
I thought it was also because Unity and Xamarin don't want to negotiate a new license or whatever, which is why we've been stuck on some old as shit version of MonoDevelop for years. I feel like, whatever the root cause is, it's going to be Unity's eventual undoing.

Well, Xamarin is a big issue as well; apparently they want Unity to pay millions to use a newer Mono. The guys at Unity are looking at Microsoft's .NET Core instead, but they're worried that if they upgrade to the newest .NET version and then to .NET Core soon after, everyone's going to be pissed at them, and there will be splits in the userbase and on the Asset Store, etc.
 

leeh

Member
Why wouldn't they? The cost of an engineering team to manage a physical server cluster (whether they run VMs or not) + the cost of the hardware itself is faaaaar greater than the fraction of people you'd need to build an intelligent, self-monitoring, IRT reactionary systems infrastructure that lives somewhere like GCE or AWS. I've done the transition multiple times now and the biggest pain in the ass is the initial build. You need to re-create your infrastructure in the new environment and then optimize it with the new tools you have at your disposal. Then you schedule a cutover. There's a greater incurred cost during the time you have them both running in tandem, but once you cut over the savings are quite stunning.
Not when they already have it. EA will have all of these things, with a team doing ops 24/7, or at least passing that work to a third party. You still need all of that if you're running on AWS; AWS just gives you hardware on tap, it doesn't manage it for you. They'd still need a team of engineers to manage and monitor it, exactly like what I'm doing as I type this. They'd still need to create their own rules for elasticity, define their own firewall rules, templates, and builds, and have it all monitored around the clock; Amazon doesn't do that for you. EA already has the hardware and the people. Paying a premium for something they already have would be stupid.

Looking into it, Halo 5 used Azure for two things.

1) Companies, plus the social features and tracking of the people in those companies. A cool use of cloud DB structures, but nothing revolutionary. My gripe is more with game studios not scaling out to meet connection/session load in real time (rather than in hours, days, or weeks). When you can't connect to a multiplayer game because the servers are overloaded, that shouldn't be something we still see in this day and age for more than a few minutes, given (and I hate to be a broken record) things like auto-scaling.

2) They used Azure services for event/logic/telemetry tracking and processing. While this is absolutely crucial for making data-informed decisions about how to tweak the game itself and the servers running multiplayer operations, it's only part of the solution. Some of the metric/event data gathered by this pipeline can and should be used to fire off real-time events that adjust the size and scope of the infrastructure automatically, on the fly, with no manual intervention required. Again, that's only one side of this. It seems like a lot of this data was gathered for single-server performance tweaking and game-engine tweaking to make the experience smoother, but seemingly wasn't used on the systems-infrastructure side, at least not in real time. Event data appears to have been collected for later tweaking, not real-time reaction. The telemetry pipeline could scale automatically, but there was seemingly nothing in place on the server-side infrastructure for handling load at scale.
They use it, plain and simple, for dedicated servers: dedicated servers that spawn and are destroyed on demand, the exact same thing AWS does.
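For the curious, a spawn/destroy-on-demand policy can be as dumb as a threshold rule. Here's a toy sketch in C++; every number, and the policy itself, is illustrative rather than any real provider's rules.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Toy rule-based scaling decision: given the current player count and
// the capacity of one dedicated-server instance, return how many
// instances should be running. The headroom factor keeps spare capacity
// warm so a spike doesn't leave players waiting on a cold boot.
std::size_t desiredServers(std::size_t players,
                           std::size_t playersPerServer,
                           double headroom = 0.2,
                           std::size_t minServers = 1) {
    double needed = static_cast<double>(players) * (1.0 + headroom)
                    / static_cast<double>(playersPerServer);
    auto n = static_cast<std::size_t>(std::ceil(needed));
    return std::max(n, minServers);
}
```

A real setup evaluates something like this every few minutes against live metrics (player counts, CPU, queue depth) and asks the cloud API to launch or terminate instances to match; the decision logic itself is this simple.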
 

Fafalada

Fafracer forever
polarizeme said:
so why the fuck has no one charged head-first into this?
You're asking several different questions rolled into one, so you'd need to be a lot more specific.
For instance, "AAA" companies have used scalable infrastructure for a good long time (e.g. Ubisoft has been using it in most of their games since at least 2009). Big companies value operating it internally because a big part of the added value from online is ownership of your user data, something that drives large parts of Amazon's business model, for that matter.
What you seem to be ignoring is that infrastructure as such is only a relatively small part of the problem, and writing arbitrarily scalable software is NOT a solved problem (if it were, GPU manufacturers would own the world right about now), even when you're using sane compute abstractions for your infrastructure. These things remain hard work, and they get harder as you add more complexity to your online compute. There's a good reason why embarrassingly parallel problems are the usual showcase for moving local compute to the cloud, a la Crackdown.

And there are other parts to this: lumping all manner of game bugs together and blaming "online" for them is convenient, but it ignores the fact that most of these games are fundamentally not well written, due to the circumstances of their development, so the infrastructure is often the least of their problems (although it does occasionally compound them).
 

LordRaptor

Member
In modern C++, you can get extremely predictable memory allocation and deallocation behaviour without anything I would call a "to the metal" implementation effort.
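To illustrate that claim with a contrived sketch (not anyone's production code): in C++, destruction happens at the end of scope, in reverse construction order, every single run. No collector decides when.

```cpp
#include <memory>
#include <string>
#include <vector>

// A shared event log so the test below can observe lifetime order.
std::vector<std::string>& log() {
    static std::vector<std::string> events;
    return events;
}

// Hypothetical resource type: logs acquisition and release.
struct Resource {
    std::string name;
    explicit Resource(std::string n) : name(std::move(n)) {
        log().push_back("acquire " + name);
    }
    ~Resource() { log().push_back("release " + name); }
};

void frame() {
    auto tex = std::make_unique<Resource>("texture");
    Resource mesh("mesh");
    // ... use them ...
}   // mesh, then tex, destroyed here: deterministic, reverse order
```

Calling `frame()` always logs acquire texture, acquire mesh, release mesh, release texture, at exactly the point the scope closes; a GC gives you neither the ordering nor the timing guarantee.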

I find that it tends to be the opposite, meaning you really do have to worry about memory management often even more

Any discussion about managed versus unmanaged is going to have proponents on both sides, because there absolutely are valid pros and cons for each. How big any given pro or con is depends entirely on the project, especially in production environments where you can have multiple programmers of different skill levels and different language familiarity all working on interlinked systems, often without the luxury of peer review or even commented code.

I'm absolutely not saying Unity is better than Unreal (or that managed is always better than unmanaged), but I will say it serves a specific niche that UE doesn't (and nor does Lumberyard from all appearances) and there are absolutely valid reasons for choosing one over another.

it's not that time consuming unless you are trying to do crazy stuff on very limited resources

That might as well be the dictionary definition of indie game development, heh.
 
Hmm... so it seems like they're at least modularizing instances with game worlds, but that would still mean that, especially with only one master server handling the load of all metadata pertaining to the state of the game-world instances, they could run into load issues if there are too many players in the same gamespace. I'll have to read up on it more, but it seems like they're on the right path, though they're bottlenecking themselves by saying one game world has to equal one VM, instead of architecting the software to scale across multiple instances and allowing some form of auto-scaling. Still happy to see that they're at least being more creative with the infra. Baby steps. *insert Bill Murray face*

Not to correct my original statement, but as an addendum: I'm quite a layperson regarding this, and I imagine your concerns are probably their concerns as well. The persistent universe is a long-term labor of love; I imagine they're considering a lot of finer details to make it work in a mostly bottleneck-free way.
 

Skinpop

Member
Any discussion about managed versus unmanaged is going to have proponents on both sides, because there absolutely are valid pros and cons for each. How big any given pro or con is depends entirely on the project, especially in production environments where you can have multiple programmers of different skill levels and different language familiarity all working on interlinked systems, often without the luxury of peer review or even commented code.
i don't see many valid pros for gc memory when it comes to game engines, except "i don't have to learn stuff", which really is self-deception, not a pro.

indie games almost never do the crazy stuff. they could benefit a lot by investing a few days into doing the memory management themselves. but there is this mentality today that the tool should do everything for you, and when it breaks you deflect responsibility because you've fooled yourself into thinking it isn't up to game designers to care about performance and software engineering.
 