
Amazon releases new free game engine Lumberyard (based on CryEngine)

tuxfool

Banned
Full source code access already makes this better than Unity. Interesting that it has no announced scripting language that I can see. I wonder how hard it will be to roll Lua/JS bindings.

If it is based on CryEngine, scripting is almost certainly handled by Lua.
 
I apologize for the possible length of this post. Also, I say fuck a lot when I'm angry or excited. Doubly when I'm both. Sorry.

...plus I've had a lot of coffee today.

OH MY GOD, FINALLY.

I've been wanting to start a new thread about this on GAF for ages, but I'm a lowly man who cannot create threads yet. Today is my magic day to talk about this subject. And by "talk" I mean "rant."

AAA game development studios (and the large publishers that own/deal with most of them) are fucking dumb when it comes to the world that Lumberyard and GameLift were just announced into. I know that sounds harsh, but I'll explain and hopefully it will shed light on my frustration.

Background
Building public-facing and private systems infrastructure is what I do for a living. I've been at it for, I don't know, ~8 years. I've spent a lot of time screaming and being unbelievably frustrated with traditional hosting solutions like the massive Rackspace, as well as helping them test Rackspace Cloud in its infancy. It's terrible. They're terrible. The benefit you get for the cost is atrociously bad. We're talking about a world where requesting a new web server to account for increased load takes a week depending on how specific your needs are. A week. A fucking week! What happens if you get hit unexpectedly by a huge traffic spike? You're screwed, that's what. Having locally hosted hardware (like if Sony has their own server farm) obviously improves that response time, but we're still talking hours or days. That's preposterous.

My old job's infrastructure was pretty large. Not Netflix large or anything (not even close), but our monthly bill at Rackspace fluctuated between $75k-175k/mo depending on the time of year and what we were expecting for load. Now, when you're paying that much damn money for a managed infrastructure ("managed" in the sense of support when you're asleep or whatever), you'd expect that things would break less often. Sadly, we were constantly hit with downtime from technician muck-ups, bad deployments (of servers, not code), bad, untested OS updates that we never asked for... you name it. I digress... my point is that getting caught off-guard by high-traffic times was a huge pain in the ass simply due to the nature of dealing with physical servers running everything. Serving traffic, doing load balancing, SQL masters and slaves, whatever. Any change in capacity meant a TON of planning with Rackspace to make sure they had the hardware available. That's not acceptable these days. Like at all. Not for any industry that constantly needs to deal with surprise peak traffic. Hell, even for planned peak traffic it's a shitty system.

What the answer isn't
Relying on the cloud and virtual machines.

What the answer IS
Relying on the cloud and virtual machines.

I know that sounds silly, but I'll explain.

You can't pretend that just moving your physical servers to VMs (virtual machines) is going to fix all your problems. It can't and it won't. It's not a 1:1 solution at all because it's nowhere near that simple.

That said, AWS (and hopefully someday Azure *laughs* and Google's Compute Engine) is absolutely the answer. Not because it's in the cloud, not because it's virtualized, but because it is both of those things + a massively robust and capable set of tools and features that can and will make your infrastructure a fucking monster when it has to be and a tiny mouse when it can be.

At my old, aforementioned job, we eventually moved *everything* to AWS. We optimized code on our end to take advantage of the insanely useful ecosystem that AWS offers and we saw the following:
  • Cost reduction (down to ~$12k/mo)
  • 3-4x performance on one web app
  • 2-3x performance increase on an older, antiquated web app
Performance being pageload times, response times, you name it. Data access from any of our databases was hilariously faster; the list goes on and on...

...but that list doesn't even include the things I loved most (and the things most relevant to the gaming world) like the ability to modularize your services and functions to eliminate single points of failure or automatically horizontally scale your infrastructure with demand in real time if you make it/let it.

What that means is this:
Say 20,000 concurrent users is the most your infrastructure can handle before it completely shits the bed. In a typical server infrastructure, physical or VM, the most you can really do to put out the fire is add more servers. Physically, as I mentioned, this takes time and a lot of it. With VMs you at least have the pleasure of easily spinning up a new VM (especially if you have templates on hand), but you still run into the same dumb shit in the end: it's all manual, it all takes time, and if you are hosting your own VMs, you run the risk of no longer having the hardware available to spin up new VMs.

^^ that's exactly what almost all game studios and their publishers are doing. They have locally and remotely hosted physical boxes, locally or remotely hosted virtual boxes (AWS, Azure, Dreamhost [maybe that was a joke, but I can't tell] or wherever) or a combination of those options. It's shitty, it's slow for responding to crises or dealing with issues and, in this day and age, it's fucking lazy and irresponsible.

Taking that same 20k concurrent user limit into account, the scenario on AWS is much, much different. As a very small example, you could set up a cluster of servers that operate on a set of rules. Those rules define that once the load on those servers reaches a limit, new servers are automatically created to compensate. Traffic slows down? Automatically decommission those servers and let the active connections drain so that new connections hit the remaining pool of servers. Traffic dies down even more? Decommission more automatically. You can set limits on this stuff, too, so that you set standards for a base amount of servers and also a ceiling amount if you're worried about cost going through the roof.
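To make that concrete, here's a toy sketch of that kind of scaling rule in Python. The thresholds, multipliers, and the floor/ceiling numbers are all made up for illustration; in real AWS this lives in Auto Scaling group policies and CloudWatch alarms, not hand-rolled app code.

```python
# Toy sketch of the scaling rules described above -- not real AWS API calls.
# Thresholds, multipliers, and instance counts are invented for illustration.

def desired_capacity(current: int, load_pct: float,
                     floor: int = 4, ceiling: int = 40) -> int:
    """Return the new instance count for a cluster given its average load.

    Scale out when the fleet runs hot, drain instances when it cools off,
    and never go below the base amount or above the cost ceiling.
    """
    if load_pct >= 75.0:          # hot: add capacity
        current = current * 2
    elif load_pct <= 25.0:        # quiet: decommission half
        current = current // 2
    return max(floor, min(ceiling, current))

# Traffic spike: 10 servers at 80% load -> double to 20
print(desired_capacity(10, 80.0))   # 20
# Traffic dies down: 20 servers at 20% load -> drain to 10
print(desired_capacity(20, 20.0))   # 10
# The ceiling protects your bill: 30 servers at 90% -> capped at 40
print(desired_capacity(30, 90.0))   # 40
```

The floor/ceiling clamp is exactly the "base amount of servers" and "ceiling amount" mentioned above.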

That's amazing as hell and, if utilized properly, could save the gaming studios SO much money with crazy amounts of added benefit. Hey, you know all those shitty times that services or MP games go down because of high load? *cough*

But that's just the tip of the iceberg.

You can do the exact same shit with databases, not just webservers.

You can do the same thing with logic tasks.

You can do the same with notification delivery and email delivery.

You can do the same with security. Hell, you can even easily isolate which applications, services and servers have access to each other to mitigate security risks with REALLY easy rulesets.

Then there are the availability zones. Need high-availability for your application or service? Have it live in multiple availability zones so that devices and connections can fail over to each other if something takes a dump. You can make it so that load balancers can balance connections across these zones so you can be making use of all zones simultaneously. You can deploy in different regions of the world and bake in connection logic so that people are routed to the nearest region. It's glorious!
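A toy sketch of that failover/routing idea, with invented zone names and latency numbers. In the real world this is Route 53 latency-based routing plus load balancer health checks, not application code:

```python
# Toy stand-in for latency-based routing with health-aware failover.
# Zone names and latencies are made up for illustration.

def route(client_latencies_ms: dict, healthy_zones: set) -> str:
    """Send the client to the nearest zone that is still healthy."""
    candidates = {zone: ms for zone, ms in client_latencies_ms.items()
                  if zone in healthy_zones}
    if not candidates:
        raise RuntimeError("no healthy zones -- page someone")
    return min(candidates, key=candidates.get)

latencies = {"us-east-1a": 20, "us-east-1b": 25, "eu-west-1a": 90}

# All zones healthy: the nearest one wins
print(route(latencies, {"us-east-1a", "us-east-1b", "eu-west-1a"}))  # us-east-1a
# us-east-1a takes a dump: traffic fails over to the next nearest zone
print(route(latencies, {"us-east-1b", "eu-west-1a"}))                # us-east-1b
```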

Deploy shitty code or a change that broke something? Roll it back. You can keep things versioned and store recent versions of machine images to quickly roll back to. You can keep versions of configuration of server clusters that you can roll back to. You can fix a shitty deploy in minutes, and when downtime costs you money that's a whole lot of money you're saving.
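The rollback idea can be sketched like this. The image IDs and the keep-5 window are invented, and in practice the version history lives in your deploy tooling (machine images, cluster configs), not a Python class:

```python
# Toy sketch of versioned deploys with one-step rollback.
# Image IDs and the retention window are made up for illustration.

class DeployHistory:
    def __init__(self, keep: int = 5):
        self.keep = keep
        self.versions = []                           # newest last

    def deploy(self, image_id: str) -> str:
        self.versions.append(image_id)
        self.versions = self.versions[-self.keep:]   # keep only recent versions
        return image_id

    def rollback(self) -> str:
        """Drop the current (bad) version and return to the previous one."""
        if len(self.versions) < 2:
            raise RuntimeError("nothing to roll back to")
        self.versions.pop()
        return self.versions[-1]

h = DeployHistory()
h.deploy("ami-v1")
h.deploy("ami-v2")
h.deploy("ami-v3-broken")
print(h.rollback())   # ami-v2 -- back on a known-good version in one step
```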

If you account for these features in your development and deployment pipeline, you can even use services that will know what size and kind of server you want spun up based on which application is running. Again, I'm not even getting into the nitty gritty here; this is all just basic "shit you can do with cloud infrastructure."

There's a reason that Netflix uses the holy bejesus out of AWS. They have tens of thousands of virtual instances serving their content to your home. They have many hundreds of instances just to handle logging of your issues when they occur. Can you imagine doing something that robust in a traditional hosted world? Fucking barf.

The gaming industry should have been smart enough to take note of the shining beacon of "holy shit, that's possible?!" scaling infrastructure that has been Netflix, and they should also be ashamed that they didn't.

That being said...

I'm not a huge XB1 fan (sorry!) but what Microsoft Studios is doing with the new Crackdown game is only a fraction of what's possible in the world of server and computing infrastructure in the cloud right now. It's more cost effective, more capable, and easier to monitor and manage than anything the traditional server world can offer... so why the fuck has no one charged head-first into this? It's absolutely insane to me that seemingly no one/almost no one in the gaming studio world has gotten out in front of this yet.

And I don't mean Microsoft's initial "everyone is online all the time, so we just always use 'the power of the cloud' to make games better" so much as I'm referring to common sense shit like capacity planning and management so that multiplayer games aren't a steaming sack of shit on day one. Remember every single launch of GTA V (last gen, current gen, PC)? Every single time was a shit show of wondering if you were going to connect to servers or if you'd be booted from sessions, stuck in loading hell, or unable to play in general. Why do we live in a world where a development team of 1,000+ people didn't have the foresight to spend some of those development resources on a better online infrastructure? That blows my mind and makes me so sad as both a gamer and a technologist.

Can you imagine a world where your multiplayer sessions on consoles no longer run like soggy dick because the development and engineering teams were smart enough to take advantage of all these tools?

Have I mentioned how cheap this is compared to the old hat way of doing things?

Okay... I'm going to shut up in a minute.

The point is that I'm really, really excited for what Amazon just announced, even if just primarily because I'm so damn relieved to see that someone is addressing this gaping anus in the game development world. It sucks that GameLift is only available for developers using Lumberyard, but there's good news: it doesn't fucking matter. All the stuff that GameLift does can be done already. Today. There's an AWS SDK for almost any modern programming language in use today, so this sort of functionality doesn't have to mean studios moving to an entirely new engine; they can very, very easily bake it into their own.

So if there are any AAA game devs on this thread and you happen to read this post:
Where the fuck have you guys been? Is there a legitimate reason other than investment of time and initial cost that this isn't being more widely and quickly adopted in the industry? How are things like better reliability, better uptime and massive cost savings not a crazy motivational force to the CEOs and CIOs and CTOs to do this?

Okay bye. I love you. I'm sorry this was so long and I said fuck and anus.

Coffee.
 
What? You're asking why this engine that was released today isn't more widely used by AAA developers? Or are you saying AWS? Because a lot of games use AWS for their servers. I've used AWS in a mobile game developed with Unity.
 

tuxfool

Banned

This entire rant is a bit redundant. Just about everybody is actually moving to Cloud providers. AFAIK Sony uses AWS, and most instanced multiplayer games are actually doing things in the cloud. Instanced game servers map extremely well to scalable cloud providers and only legacy MMOs and games these days are even considering running their own server farms.
 
What? You're asking why this engine that was released today isn't more widely used by AAA developers? Or are you saying AWS? Because a lot of games use AWS for their servers. I've used AWS in a mobile game developed with Unity.

I sincerely don't have any rude intentions behind this question, but: did you actually read what I wrote? I know it's an insanely massive wall of ranting text, but I very definitely answered your question.

My question is: what the hell is taking the gaming industry so long to adopt this technology?

Using AWS ≠ using AWS properly. Just having virtualized instances instead of physical servers doesn't mean your infrastructure is better. I keep saying you, but this isn't directed at *you*, I promise. I just mean that I know of a lot of game studios that have virtualized servers living in places like AWS, but they aren't actually architecting an infrastructure that makes use of all the tools available to them by doing so.

Even more terrifying, if there *are* studios that claim to be making use of all these tools (again, not just having VMs in the cloud), then how are they doing such a piss poor job of it?
 
This entire rant is a bit redundant. Just about everybody is actually moving to Cloud providers. AFAIK Sony uses AWS, and most instanced multiplayer games are actually doing things in the cloud. Instanced game servers map extremely well to scalable cloud providers and only legacy MMOs and games these days are even considering running their own server farms.

Having a server in the cloud or having a VM in the cloud doesn't magically make your infrastructure better. Are there benefits? Absolutely, but that's not the magic solution.

There are myriad tools and services that open up for use when you have VMs in the cloud, though, and that's where all the magic lives. I see absolutely no evidence of those tools being properly used in the gaming world. My guess is that Amazon saw the same gap in the industry or they wouldn't have bothered with the things they announced today.

Running the risk of sounding rude again, which I promise is not my intention, I address all of this in my massive tirade.
 

tuxfool

Banned
Having a server in the cloud or having a VM in the cloud doesn't magically make your infrastructure better. Are there benefits? Absolutely, but that's not the magic solution.

Trust me. Game developers are aware of the differences. Why they're doing a *piss poor* job (using your words) is because game servers have much more stringent performance requirements than the average service. Also, they have much more sudden changes in instance burden.
 
Trust me. Game developers are aware of the differences. Why they're doing a *piss poor* job (using your words) is because game servers have much more stringent performance requirements than the average service. Also, they have much more sudden changes in instance burden.

Mostly this is a thing that systems people and DevOps (ugh, I hate that term) people/teams address, not software developer teams. So if there's no one at a studio to raise their hand and say, "yo, we should be doing this other stuff instead" then my guess is that it just doesn't happen. Change in large organizations is notoriously frustrating. It takes time and costs money and leadership is usually not okay with that notion. =/

Again I point to my GTAV mention. If a dev team of over 1k people and the money that must've been tossed at that project can't do a better job of an infrastructure that easily auto-scales to handle load, then what hope is there that 200 or 300 person AAA studios will address it with a smaller crew and budget to kick it off?

As for performance requirements per server and quick changes in instance burden... it doesn't matter. It really, really doesn't. If you build an infrastructure that uses metrics and monitoring to automatically scale with load and strain, then it doesn't matter that it's hard on a single instance because another one (or 5 or 12 or 100 or 1,000) pops on over to say, "hey dude, no worries. I've got your back. let's do this!"

Like that's exactly the purpose of auto-scaling, especially when it's triggered by monitored metrics. It's also the beauty of it. It doesn't matter what the application purpose is. It doesn't matter what purpose the service on the VM has. It's very, very applicable to the gaming industry.
 

Skinpop

Member
Again I point to my GTAV mention. If a dev team of over 1k people and the money that must've been tossed at that project can't do a better job of an infrastructure that easily auto-scales to handle load, then what hope is there that 200 or 300 person AAA studios will address it with a smaller crew and budget to kick it off?
probably better. throwing resources at problems is not a good solution. in these projects with hundreds of devs/artists working on the product there are going to be extremely few (if any) people with a good understanding of the whole thing.
 
probably better. throwing resources at problems is not a good solution. in these projects with hundreds of devs/artists working on the product there are going to be extremely few (if any) people with a good understanding of the whole thing.

Haha. Good point. The irony here is that if studios were really, truly using the tools available to them (speaking purely on systems infra, not software development) they'd need less in terms of resources, not more.

But yeah... you're spot on. Simply chucking more money and people at development problems has, in my experience, just led to deeper problems. Siloing, fatigue, miscommunication, carelessness, etc.
 

Skinpop

Member
Haha. Good point. The irony here is that if studios were really, truly using the tools available to them (speaking purely on systems infra, not software development) they'd need less in terms of resources, not more.

But yeah... you're spot on. Simply chucking more money and people at development problems has, in my experience, just led to deeper problems. Siloing, fatigue, miscommunication, carelessness, etc.

yeah and it affects game design as well. why do you think these aaa studios put so much focus on open world and "content" (generic quests, 3d assets, collectibles and so on)? Because it scales better than throwing resources at creating a well designed game(which can be done by a small team or even just one person). This makes the infrastructure incredibly rigid. Imagine the horror of a talented designer trying to improve the gameplay in a game at this scale. the people you have to talk to and the hierarchies you have to traverse to do so. there is too much noise to cut through.

imo content based game design is the biggest cancer of modern gaming.
 
yeah and it affects game design as well. why do you think these aaa studios put so much focus on open world and "content" (generic quests, 3d assets, collectibles and so on)? Because it scales better than throwing resources at creating a well designed game(which can be done by a small team or even just one person). This makes the infrastructure incredibly rigid. Imagine the horror of a talented designer trying to improve the gameplay in a game at this scale. the people you have to talk to and the hierarchies you have to traverse to do so. there is too much noise to cut through.

imo content based game design is the biggest cancer of modern gaming.

While I think there are certainly exceptions, I agree with this wholeheartedly.

I think you can create content based games that still deliver on things like diversity, creativity and longevity-from-more-than-just-sheer-amount, but those titles are sadly few and far between.
 

Skinpop

Member
While I think there are certainly exceptions, I agree with this wholeheartedly.

I think you can create content based games that still deliver on things like diversity, creativity and longevity-from-more-than-just-sheer-amount, but those titles are sadly few and far between.
sure, there is a happy middle ground and there are aaa-games that are developed by small(~100 person) talented teams. these tend to have much less focus on "content".
 

tuxfool

Banned
As for performance requirements per server and quick changes in instance burden... it doesn't matter. It really, really doesn't. If you build an infrastructure that uses metrics and monitoring to automatically scale with load and strain, then it doesn't matter that it's hard on a single instance because another one (or 5 or 12 or 100 or 1,000) pops on over to say, "hey dude, no worries. I've got your back. let's do this!"

Like that's exactly the purpose of auto-scaling, especially when it's triggered by monitored metrics. It's also the beauty of it. It doesn't matter what the application purpose is. It doesn't matter what purpose the service on the VM has. It's very, very applicable to the gaming industry.

I think you'll find two things.

1) performance metrics and the speed that cloud services scale at are still extremely hard to nail down, especially for applications that have real time and low latency requirements.

2) Computation cost on servers. Guaranteeing CPU time occupancy in order to reduce costs and at the same time provide a good experience is hard. This ties into point 1): you can allow your server code to expand to n cores in order to overcompensate for load, but you won't always need all those cores on a given instance. How do you have an infrastructure that scales wide globally and spins up instances quickly without wasting money on over-provisioning? This is the classic trade-off in impulse response.
 

RiverBed

Banned
They already have a platform/console with FireTV. Not particularly successful so far, but I wouldn't be surprised to see them try at least another iteration or two.

I thought that was just an Android platform, so those might as well be considered phone games. Did they make FireTV games but not offer them for phones?
 
Interesting, but this thing is most certainly inheriting CryEngine's various problems, and it'll take a while for Amazon to actually catch up.

Unity trembles

A bit late, but I don't think Lumberyard is really gonna eat into Unity's market share. Unity has a lower rendering overhead (which is great for devs who don't give a shit about making really pretty graphics), ports to everything under the sun, and has fantastic documentation and an amazing asset store. Not to mention it's pretty much THE engine of choice for 2D games outside of Construct 2 and Game Maker.

Hilariously, neither engine can match the level/map building tools id Software had back in the 90s for Quake, etc. UE does have a BSP editor, but it's very bad compared to Quake's BSP editor, for example. Blows my mind that neither engine has a BSP editor that even comes close to that.

Gaffer Turfster made a "Quake 1/2 MAP and Source VMF to Unreal Engine 4 plugin" called HammUEr: https://gumroad.com/l/jNucW#

It's hilarious though, because reading discussions about gamedev years ago, Hammer and its id equivalent (can't remember the name) were always described as archaic and clunky, but in other places not so long ago I found modders/mappers discussing them rather fondly compared to newer stuff.

There's actually a BSP-like modelling tool on the Unity asset store. I haven't bought it yet (I really should, considering I struggle with traditional modelling, and the BSP workflow looks really appealing), but apparently it's really good.

There's also ProBuilder, which is a more 'traditional' modelling setup for those who prefer that sort of thing, and the devs for that also released companion assets for tile-based design and zbrush-esque mesh deformation and vertex painting.
 

Biff

Member
Calling it now: Amazon is extending the Kindle self-publishing strategy to video games. They will offer digital distribution to compete with Steam while simultaneously offering best-in-class development tools.

This company is going to take over the world.
 

Somnid

Member
Calling it now: Amazon is extending the Kindle self-publishing strategy to video games. They will offer digital distribution to compete with Steam while simultaneously offering best-in-class development tools.

This company is going to take over the world.

They kinda do this already. There's no client but Amazon has always sold PC games and DLC and has several gaming related services.
 
I think you'll find two things.

1) performance metrics and the speed that cloud services scale at are still extremely hard to nail down, especially for applications that have real-time and low-latency requirements.

2) Computation cost on servers. Guaranteeing CPU time occupancy in order to reduce costs and at the same time provide a good experience is hard. This ties into point 1): you can allow your server code to expand to n cores in order to overcompensate for load, but you won't always need all those cores on a given instance. How do you have an infrastructure that scales wide globally and spins up instances quickly without wasting money on over-provisioning? This is the classic trade-off in impulse response.

1) I'm not sure I follow. Performance metrics on a server, vm or not, are extremely easy to monitor. Down to the minute if we're talking something like CloudWatch. DB read/write, disk I/O, ram utilization, cpu load, disk space, network, logging, application performance... you name it. It's just as easy to build reactionary logic that responds to those metrics and executes tasks like spinning up additional instances to help. It's part of what I do for a living. Forecasting the up/down scale time of real-time applications is exactly part of why this is so easy right now. I can tell you, within a 15-30 second window, how long it will take most of our applications to scale up or down at any given time from adding/subtracting servers from their cluster.

2) Computational cost on VMs is nowhere near the cost in manpower, man hours and hardware that is needed by the alternative, old way of doing something like this, especially as game worlds get bigger and engines get chunkier. Again, given that it's part of my job, they are costs I've had to pay close attention to over the last near-decade. I'm not talking about server code needing access to more virtual cores on a VM in order to overcompensate; I'm talking about a system (one that already exists) that allows you to have a reactionary ecosystem that spins up new virtual machines in a matter of minutes (or less) to handle increased load and then, just as easily, decommissions those virtual machines when that load is low enough that additional machines are not necessary. This is exactly the point of what I'm saying. You don't have to over-provision. You don't have to under-provision. You simply make use of the plethora of tools that allow you to scale on-demand automatically. Smarter tools for a smarter infrastructure serving up your application. I promise I'm not making this up.
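For what it's worth, the "reactionary logic" from point 1 really is this simple at its core. Here's a toy stand-in for a CloudWatch-style alarm (made-up metric values, threshold, and period count), where breaching for N consecutive periods triggers a scale-up task instead of reacting to a single blip:

```python
# Toy version of a metric-driven alarm: fire only after the metric breaches
# its threshold for N consecutive periods. All numbers here are invented.

def alarm_fires(samples, threshold: float, periods: int = 3) -> bool:
    """True if the last `periods` samples are all above `threshold`."""
    recent = samples[-periods:]
    return len(recent) == periods and all(s > threshold for s in recent)

cpu = [40, 45, 82, 85, 88]              # per-minute CPU load, percent

print(alarm_fires(cpu, 80.0))           # True  -> kick off a scale-up task
print(alarm_fires([40, 85, 40], 80.0))  # False -> one blip, don't flap
```

Requiring consecutive breaches is what keeps the fleet from flapping up and down on every momentary spike.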
 
1) I'm not sure I follow. Performance metrics on a server, vm or not, are extremely easy to monitor. Down to the minute if we're talking something like CloudWatch. DB read/write, disk I/O, ram utilization, cpu load, disk space, network, logging, application performance... you name it. It's just as easy to build reactionary logic that responds to those metrics and executes tasks like spinning up additional instances to help. It's part of what I do for a living. Forecasting the up/down scale time of real-time applications is exactly part of why this is so easy right now. I can tell you, within a 15-30 second window, how long it will take most of our applications to scale up or down at any given time from adding/subtracting servers from their cluster.

2) Computational cost on VMs is nowhere near the cost in manpower, man hours and hardware that is needed by the alternative, old way of doing something like this, especially as game worlds get bigger and engines get chunkier. Again, given that it's part of my job, they are costs I've had to pay close attention to over the last near-decade. I'm not talking about server code needing access to more virtual cores on a VM in order to overcompensate; I'm talking about a system (one that already exists) that allows you to have a reactionary ecosystem that spins up new virtual machines in a matter of minutes (or less) to handle increased load and then, just as easily, decommissions those virtual machines when that load is low enough that additional machines are not necessary. This is exactly the point of what I'm saying. You don't have to over-provision. You don't have to under-provision. You simply make use of the plethora of tools that allow you to scale on-demand automatically. Smarter tools for a smarter infrastructure serving up your application. I promise I'm not making this up.
Really, the big problem is one of cost - actual monetary cost. Take GTA V for example - it's basically the most extreme example anyone could give. The game sold 11.2 million copies on day 1! If only 1 million of those gamers were able to log in on that first day, Amazon would charge $1.5 million, and that's not counting instances or data. All 11 million, $16.5 million. No publisher will agree to that, not when the only price of not doing so is annoying a few customers for the first few days. Very few people return great games just because they have login problems the first few days. See, I think you are assuming they aren't using smart infrastructure just because you are judging the most popular games by their first days, but I think many of them are using smart networking infrastructures and just don't want to pay for everyone to have a smooth experience on day 1. These days most games have stellar network experiences after the first week.
 
Really, the big problem is one of cost - actual monetary cost. Take GTA V for example - it's basically the most extreme example anyone could give. The game sold 11.2 million copies on day 1! If only 1 million of those gamers were able to log in on that first day, Amazon would charge $1.5 million, and that's not counting instances or data. All 11 million, $16.5 million. No publisher will agree to that, not when the only price of not doing so is annoying a few customers for the first few days. Very few people return great games just because they have login problems the first few days. See, I think you are assuming they aren't using smart infrastructure just because you are judging the most popular games by their first days, but I think many of them are using smart networking infrastructures and just don't want to pay for everyone to have a smooth experience on day 1. These days most games have stellar network experiences after the first week.

I'm not sure what cost you're calculating, but if AWS charged $1.50 per connection + instance costs, no one in their right mind would use AWS. I wouldn't. My current and former employers sure as hell wouldn't.

Would you mind providing the data you used that states AWS charges $1.50 per connection?
 
Interesting, but this thing is most certainly inheriting CryEngine's various problems, and it'll take a while for Amazon to actually catch up.



A bit late, but I don't think Lumberyard is really gonna eat into Unity's market share. Unity has a lower rendering overhead (which is great for devs who don't give a shit about making really pretty graphics), ports to everything under the sun, and has fantastic documentation and an amazing asset store. Not to mention it's pretty much THE engine of choice for 2D games outside of Construct 2 and Game Maker.
I am conflicted here. Unity has an obviously lower overhead overall (DX9 support, rudimentary rendering), yet it also scales extremely poorly as soon as you start being more ambitious (unlike CE/Lumberyard, Unity currently has huge performance problems with basic things like complex meshes, real-time shadows, or many world objects). Unity also lacks basic, performant current-gen graphical systems, so one has to check out plugins on the store :/

I agree with everything else though, and I think the added benefits of usability and documentation could perhaps cut down on the above-mentioned problems. Unity is easy to recommend for low-end projects.

I have no idea how this release will affect the current engine market for devs, but I do look forward to games having the graphical prowess and performance of CE. Crytek must have been crazy desperate around sale time for any cash flow to allow this at all.

1. Crytek was not outright bought.
2. What they sold is their own exact product, which now directly competes with what they offer.

I honestly think Crytek can only go downhill from here on in, unless they gain massive traction in their game development or VR dev. They quite literally cloned their most prized chicken and gave it away to the richest guy in town.
 

Durante

Member
While I somewhat agree, sexy graphics are a thing, and they're one of the reasons I'd take UE4 over Unity any day (strictly talking about 3D games). It's been years and I have yet to see Unity games that blow me away compared to the competition. That, combined with the apparently really bad console performance, is a reason I would not go with Unity. And looking at the adoption rate of UE4 in the indie/AAA space just seems to support that. But to each their own, obviously.
Forget about graphics, I'd consider Unity a serious contender only when I finally see the first large-scale Unity game without intermittent stuttering issues.
 
So is this the product that Garnett Lee (of Weekend Confirmed fame) left games media for Amazon for? He mentioned that he transitioned from being in games media to being on the development side of gaming when he took the job at Amazon....

Just pure speculation on my part.
 
So is this the product that Garnett Lee (of Weekend Confirmed fame) left games media for Amazon for? He mentioned that he transitioned from being in games media to being on the development side of gaming when he took the job at Amazon....

Just pure speculation on my part.

Probably more of an actual game, not the engine.
 

KOCMOHABT

Member
Tried it out, and I think it's missing some of the newest CryEngine features, but it's basically that plus some extra tools (a UI Editor, for example).

I would like to hear some statements from Crytek about what they think of this, tbh; preferably unofficial ones.

For the big devs like CIG, Warhorse, etc. it's probably nice to still have support from Crytek directly, but I wonder why any big AAA would choose CryEngine over Lumberyard in the future, assuming they'd choose either over Unreal at all.

Seems like Amazon plans on supporting high-end devs properly.
 
So let's say I have little to no experience coding; where would I start if I were interested in downloading one of these engines and just tinkering around with them? What would be easiest to "pick up and play"?
 

Durante

Member
So let's say I have little to no experience coding; where would I start if I were interested in downloading one of these engines and just tinkering around with them? What would be easiest to "pick up and play"?
Unity might be the easiest to pick up and play, but UE4 has source access (which is very valuable for learning), good genre-specific examples, and actually stable performance in large-scale projects. I'm unfamiliar with CryEngine (and by extension this) but its documentation is universally described as the worst of the three, and its lack of integration with lower-cost authoring tools would seem to make it less suitable for beginners.
 

leeh

Member
Having a server in the cloud or having a VM in the cloud doesn't magically make your infrastructure better. Are there benefits? Absolutely, but that's not the magic solution.

There are myriad tools and services that open up when you have VMs in the cloud, though, and that's where all the magic lives. I see absolutely no evidence of those tools being properly used in the gaming world. My guess is that Amazon saw the same gap in the industry, or they wouldn't have bothered with the things they announced today.

Running the risk of sounding rude again, which I promise is not my intention, I address all of this in my massive tirade.
Titanfall, Gears UE, and Halo 5 all use Azure properly, with true elasticity. Respawn is the best example of this, with their solution documented. It's a very safe bet that EA implements the same technology in the servers they provide for their games. It'd be more of a shock if they didn't.

Too many people rely on the big names to provide SaaS services, whereas I could set my own up in about half a day with tools like Puppet and Xen. Which is probably what a lot of third parties do, as they'll already have the hardware to support it.
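The "true elasticity" credited to those Azure titles boils down to a control loop like the one below. A toy sketch with made-up thresholds and names, not how Respawn or anyone else actually configures it; real setups would lean on AWS Auto Scaling or Azure's equivalent:

```python
# Toy autoscaler: pick a server count from observed average load.
# All thresholds and limits here are invented for illustration.
def desired_instances(current: int, avg_cpu_pct: float,
                      scale_up_at: float = 70.0,
                      scale_down_at: float = 25.0,
                      min_instances: int = 2,
                      max_instances: int = 100) -> int:
    if avg_cpu_pct > scale_up_at:
        target = current * 2           # double capacity under pressure
    elif avg_cpu_pct < scale_down_at:
        target = max(current // 2, 1)  # shed idle capacity
    else:
        target = current               # load is in the comfortable band
    return max(min_instances, min(max_instances, target))

print(desired_instances(4, 85.0))  # 8 (launch spike -> scale out)
print(desired_instances(8, 10.0))  # 4 (players moved on -> scale in)
```

The point of the sketch is just that capacity follows demand in minutes, not the days or weeks the rant above complains about with traditional hosting.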
 

darkinstinct

...lacks reading comprehension.
I sincerely don't have any rude intentions behind this question, but: did you actually read what I wrote? I know it's an insanely massive wall of ranting text, but I very definitely answered your question.

My question is: what the hell is taking the gaming industry so long to adopt this technology?

Using AWS ≠ using AWS properly. Just having virtualized instances instead of physical servers doesn't mean your infrastructure is better. I keep saying "you", but this isn't directed at *you*, I promise. I just mean that I know of a lot of game studios that have virtualized servers living in places like AWS, but they aren't actually architecting an infrastructure that makes use of all the tools available to them by doing so.

Even more terrifying, if there *are* studios that claim to be making use of all these tools (again, not just having VMs in the cloud), then how are they doing such a piss poor job of it?

Maybe because most studios realize their game will be gone once it stops financially supporting server costs, and they want their love child to live on for more than two years? Or they remember the opposition Microsoft faced with their always-online plans? People love being able to play offline. So aside from multiplayer games, it doesn't make sense to rely on cloud services; you're just binding yourself to someone without gaining anything in return. A non-cloud game can sell for years and years. A cloud game can sell for two, maybe three. Cloud is the solution to a problem the consoles created when they launched with weak hardware that wasn't even good enough to give average PCs a run for their money for one year.

So why exactly should a developer be willing to pay $50k to $200k a month to a cloud service when they can do everything locally and end up with a million more in profit?


Originally Posted by Durante

Forget about graphics, I'd consider Unity a serious contender only when I finally see the first large-scale Unity game without intermittent stuttering issues.

So I take it you have not played Ori?
 

element

Member
So why exactly should a developer be willing to pay 50k - 200k a month to a cloud service when he can do everything locally and ends up with a million more profit?
They don't have the money to either buy the hardware or manage it 24/7. That is extremely costly and who knows if you will even need that. Who would want to drop a couple hundred thousand dollars in hardware and staff two sys ops when you can run on AWS at 1/10th the cost?

You could always plan to swap. Soft launch in a beta, see your stress levels and demand. Get an idea of cost and go from there to continue with cloud or migrate to self managed.
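That soft-launch-then-decide approach is basically a break-even calculation. A rough sketch with invented dollar figures loosely echoing the numbers thrown around in this thread (none of them are real quotes):

```python
# Illustrative only: self-hosting pays hardware up front plus monthly ops
# staff; cloud pays a (usually larger) monthly bill with nothing up front.
def months_to_break_even(hardware_cost: float, monthly_self: float,
                         monthly_cloud: float) -> float:
    """Months after which self-hosting becomes cheaper than the cloud."""
    if monthly_cloud <= monthly_self:
        return float('inf')  # cloud is cheaper every month; never breaks even
    return hardware_cost / (monthly_cloud - monthly_self)

# e.g. $200k of hardware + $20k/month of ops vs. a $100k/month cloud bill:
print(months_to_break_even(200_000, 20_000, 100_000))  # 2.5
```

Which is the whole argument in one line: if your demand curve is long and predictable, self-hosting wins quickly; if it spikes at launch and collapses, you never recoup the hardware.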
 
They don't have the money to either buy the hardware or manage it 24/7. That is extremely costly and who knows if you will even need that. Who would want to drop a couple hundred thousand dollars in hardware and staff two sys ops when you can run on AWS at 1/10th the cost?

You could always plan to swap. Soft launch in a beta, see your stress levels and demand. Get an idea of cost and go from there to continue with cloud or migrate to self managed.

The elasticity of the cloud is also handy after the first 3-4 months post-release, when a lot of people have moved on.
 
I think the big thing for me will be the single player games this could help power, especially for small studios. Really interesting news. Look forward to seeing how this turns out in the future.
 
On one hand it's based on CryEngine, and all my instincts are to run very far away.

OTOH it is backed by Amazon, who know what they are doing.

I guess I'll have a play around. How hard can Lua be? Hehe.
 

LordRaptor

Member
Forget about graphics, I'd consider Unity a serious contender only when I finally see the first large-scale Unity game without intermittent stuttering issues.

As I'm sure you know, stuttering is predominantly the result of GC, as Unity uses C# and so never has direct control over memory management.

The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.

If you have the time and inclination to manually control memory and pointers to ensure consistent "to the metal" performance for every platform you release on (and their own individual quirks), Unity is probably a bad choice.
If you don't have the time or inclination, and want something that will run on multiple platforms - including often neglected Linux / Mac machines and you're fine with or can design around required GC, Unity is probably a good choice.

It's swings and roundabouts; there's definitely a reason Unity is still a popular choice, and it's the traditional trade-off between ease of use and performance.
 

tuxfool

Banned
As I'm sure you know, stuttering is predominantly the result of GC, as Unity uses C# and so never has direct control over memory management.

The flipside is that developing with Unity means you never have to worry about memory management like you have to with C++.

If you have the time and inclination to manually control memory and pointers to ensure consistent "to the metal" performance for every platform you release on (and their own individual quirks), Unity is probably a bad choice.
If you don't have the time or inclination, and want something that will run on multiple platforms - including often neglected Linux / Mac machines and you're fine with or can design around required GC, Unity is probably a good choice.

It's swings and roundabouts; there's definitely a reason Unity is still a popular choice, and it's the traditional trade-off between ease of use and performance.

There are people making UE4 games completely using Blueprint.
 