Stupid Cell Idea #1: Free MMORPGs?

Reading the arguments in another thread over the feasibility of distributed rendering using Cell, something occurred to me: what if the grid computing model were used by an MMORPG not for graphics purposes, but as the server itself? Currently, MMORPGs require big server farms with very high-bandwidth network connections to maintain the game world and handle the load of all the player connections, but what if that load was transferred to the gestalt of all the players' machines connected to the game?

If the need for the expensive servers/connections could be reduced to around the level needed for basic, inexpensive peer-to-peer matchmaking, then monthly fees for the game could be eliminated. Additional created content could be handled with micropayments as needed, or by periodic downloadable expansions with a one-time fee. This could open up the market for an MMORPG to more potential players who are currently put off by the idea of paying a continuous fee for a single game (like me, for instance :P). And since the pool of players would be larger, the grid computing resources of the game network would be greater, resulting in better overall performance.

Would something like this be feasible?
 
You'd still need a hardcore database server to pull character info / location / etc. from, wouldn't you? I wouldn't want a database shared on user-end PS3s. :O

Edit: plus you got guild wars :)
 
You got the stupid part right--without a database server that's out of the hands of the gaming populace, any MMO using the quasi-P2P technology you're thinking of would be hacked, cracked, and completely fucking worthless.

edit: Razoric, I love you. Guild Wars! (although I don't think it has a mac ver)
 
Sea Manky said:
Reading the arguments in another thread over the feasibility of distributed rendering using Cell, something occurred to me: what if the grid computing model were used by an MMORPG not for graphics purposes, but as the server itself? Currently, MMORPGs require big server farms with very high-bandwidth network connections to maintain the game world and handle the load of all the player connections, but what if that load was transferred to the gestalt of all the players' machines connected to the game?

If the need for the expensive servers/connections could be reduced to around the level needed for basic, inexpensive peer-to-peer matchmaking, then monthly fees for the game could be eliminated. Additional created content could be handled with micropayments as needed, or by periodic downloadable expansions with a one-time fee. This could open up the market for an MMORPG to more potential players who are currently put off by the idea of paying a continuous fee for a single game (like me, for instance :P). And since the pool of players would be larger, the grid computing resources of the game network would be greater, resulting in better overall performance.

Would something like this be feasible?

The expense of the servers has become a small percentage of the overall cost of maintenance, customer support, additional content, and continued advertising & development.

The only way you're going to see a free MMORPG is if it can somehow be managed via users (think open-source methodology), or they commercialize the world you're in via advertising.

Your character eats at McDonalds, sleeps at Motel 8, buys his clothes from Old Navy, buys his guns from Smith & Wesson.
 
sonycowboy said:
The only way you're going to see a free MMORPG is if it can somehow be managed via users (think open-source methodology), or they commercialize the world you're in via advertising.

Your character eats at McDonalds, sleeps at Motel 8, buys his clothes from Old Navy, buys his guns from Smith & Wesson.

:lol Fat chance. EA's been the one pioneering this, and their game prices haven't dropped one bit.
 
Hmmm, there would have to be a central matchmaking server anyway to allow connections to the complete network; would grabbing the user's character info when connecting and then updating at intervals be that big a load? One thing that occurred to me was that if sections of the computing grid go down, some data loss is possible, so the master server would need a base template of the world to seed as needed. However, since such losses would hopefully be few and far between, restoring needed fragments back to the grid wouldn't require as powerful a connection as a standard MMORPG. Naturally, the game design itself would have to be made to accommodate issues like these.
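The seeding idea could look something like this; a toy sketch in Python, with all region names and data structures invented purely for illustration:

```python
# Toy sketch of the "base template" idea: the central server keeps a
# baseline copy of every world region and re-seeds the grid whenever a
# region's live copy is lost. All names here are hypothetical.
baseline = {"forest": {"trees": 100}, "town": {"npcs": 12}}
live = {"forest": {"trees": 97}}  # "town" was lost when grid nodes dropped

def reseed(live_world, template):
    """Restore any region missing from the live world from the template."""
    restored = []
    for region, state in template.items():
        if region not in live_world:
            live_world[region] = dict(state)  # fall back to the base copy
            restored.append(region)
    return sorted(restored)

restored = reseed(live, baseline)  # only "town" needs restoring
```

Since only the lost fragments move over the wire, the master server's bandwidth needs stay small as long as node losses are rare.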

As for the hacking worries, the point behind the grid computing idea is that each individual machine would be used by the network transparently as a CPU resource, so there's no way for any given machine to control what part of the server gestalt it's processing at any given time. It could be possible for someone to hack their machine so that it returns bad data, but if a distributed computing system can't manage to identify malfunctioning subsystems and cut them out of the loop, then it's not worth its salt, and none of the ideas for grid computing on insecure networks are worth a damn. :P Any given hacker could fuck up his machine, but he wouldn't be able to control any data on the "server" by intent.
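The "cut malfunctioning subsystems out of the loop" idea is roughly what volunteer-computing projects do: send the same work unit to several nodes and majority-vote the results. A minimal sketch (node IDs and answers are made up):

```python
from collections import Counter

def majority_vote(results):
    """Return the consensus answer and the set of nodes that disagreed.

    results: dict mapping node_id -> reported answer for one work unit.
    Hypothetical illustration of how a grid could isolate nodes that
    return bad data by running each unit redundantly.
    """
    counts = Counter(results.values())
    consensus, _ = counts.most_common(1)[0]
    bad_nodes = {node for node, ans in results.items() if ans != consensus}
    return consensus, bad_nodes

# One work unit sent to three nodes; node "c" has been tampered with.
answer, banned = majority_vote({"a": 42, "b": 42, "c": 9999})
```

The obvious cost is that redundancy multiplies the work, which cuts into whatever capacity the grid was supposed to save in the first place.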
 
sonycowboy said:
The expense of the servers has become a small percentage of the overall cost of maintenance, customer support, additional content, and continued advertising & development.

Okay, I was under the impression that the servers were a big chunk of it, with additional content being the other big chunk. I figured that as content was created, it could be priced to cover the costs of the P2P server and therefore continue to make money. Guess it won't work under the current method, but I still have to wonder if it's doable with a different business model.
 
There's the small problem of handling the loot that drops, individual characters, and mob spawns. None of these can be done client-side or it would get hacked to pieces.
 
DonasaurusRex said:
There's the small problem of handling the loot that drops, individual characters, and mob spawns. None of these can be done client-side or it would get hacked to pieces.

But like I was saying, the whole concept behind the Cell distributed computing thing is that every processor connected up is utilized by the network on an ad-hoc basis. There would really be no way for you to control which part of the "server" was being processed by your particular machine at any given moment. Therefore there would be no way to hack things in a meaningful way other than to just cause general network damage, and if the network can't handle and isolate bad responses from its processing elements, then the grid computing model itself is just a bad idea in general for gaming applications.
 
Sea Manky said:
But like I was saying, the whole concept behind the Cell distributed computing thing is that every processor connected up is utilized by the network on an ad-hoc basis. There would really be no way for you to control which part of the "server" was being processed by your particular machine at any given moment. Therefore there would be no way to hack things in a meaningful way other than to just cause general network damage, and if the network can't handle and isolate bad responses from its processing elements, then the grid computing model itself is just a bad idea in general for gaming applications.

Normally when people talk about performant grid computing, they are talking about large-scale calculations spread across large numbers of machines to solve a problem faster - but that time is generally not 'fast enough' on a scale useful for games. There are grids for games, but they are confined to fiber-optic connections between the servers, because if you have a high-latency connection (and the internet is very high latency compared to local access), the amount of time it's going to take to get your response will defeat the point of distributing the work.

Gridding works well because it assumes that the machines on the grid can be quickly repurposed to do general computational work from an IDLE state. A game console should NEVER be in an idle state - game developers should be using as much of the CPU as humanly possible at all times. When I plop in GT5, the machine should be spending all its energy doing complex physics and rendering. Can you spread some other 'non real time' work over to other machines? Sure - but you'll find that if you could wait 10 minutes to get the response:

1) you should have sent it to a central game oriented server
2) if there are other machines that have free CPU, then yours should too, and you should have just done the computation locally to begin with

When you start assuming that there are other machines with CPU to spare, then you have to ask yourself: am I sure I can't just spawn a thread and do this computation locally?
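Phoenix's latency point can be put in rough numbers. All figures below are assumptions for illustration, not measurements of any real hardware or network:

```python
# Back-of-envelope: offloading a small task over the internet vs. doing
# it locally. Every number here is an illustrative assumption.
LOCAL_FLOPS = 200e9   # floating-point ops per second available locally
RTT = 0.100           # 100 ms internet round trip
task_flops = 1e6      # a small unit of work

local_time = task_flops / LOCAL_FLOPS         # a few microseconds
remote_time = RTT + task_flops / LOCAL_FLOPS  # dominated by the round trip

slowdown = remote_time / local_time  # four-plus orders of magnitude
```

The round trip dwarfs the compute time for any small task, which is exactly why offloading only pays off for work big enough (or slow enough) to amortize the network cost.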
 
Phoenix said:
Normally when people talk about performant grid computing, they are talking about large-scale calculations spread across large numbers of machines to solve a problem faster - but that time is generally not 'fast enough' on a scale useful for games. There are grids for games, but they are confined to fiber-optic connections between the servers, because if you have a high-latency connection (and the internet is very high latency compared to local access), the amount of time it's going to take to get your response will defeat the point of distributing the work.

Gridding works well because it assumes that the machines on the grid can be quickly repurposed to do general computational work from an IDLE state. A game console should NEVER be in an idle state - game developers should be using as much of the CPU as humanly possible at all times. When I plop in GT5, the machine should be spending all its energy doing complex physics and rendering. Can you spread some other 'non real time' work over to other machines? Sure - but you'll find that if you could wait 10 minutes to get the response:

1) you should have sent it to a central game oriented server
2) if there are other machines that have free CPU, then yours should too, and you should have just done the computation locally to begin with

When you start assuming that there are other machines with CPU to spare, then you have to ask yourself: am I sure I can't just spawn a thread and do this computation locally?

The idea I was thinking of was that the game itself would intentionally be written to leave the additional cycles needed by the server network. One of the benefits of using a single architecture would be being able to tweak this amount to get the best performance both for the local end user and for the grid itself, for a given minimum number of machines connected. As for deciding whether or not to do a computation locally, you bring up a good point about latency and the advantages of processing locally, but the problem is that the server program itself is larger than a single console could handle, and the grid idea would be to just transfer that load from a dedicated server farm onto the client machines to save on costs.

Now if a particular client couldn't get a response fast enough from the grid to get a needed object in time, then of course it wouldn't work. I'm just wondering if the new architecture would be better at handling that sort of thing, or if the server program could be written in such a way that the objects were replicated in a dynamic fashion across nodes for faster access. And sonycowboy already mentioned that the server/connection costs for an MMORPG weren't as large as I had thought, so it wouldn't work from a business standpoint anyway.

I still think using the grid idea for something similar would be interesting - maybe not for MMORPGs, but perhaps for persistent worlds in other online games or something. I guess what I'm trying to get at is moving away from the idea of a client program that just tries to steal cycles from other machines to do what it does, and more towards the idea of a completely separate program running within the larger grid that the clients communicate with as if it were a central server. I think there could be a lot of potential there.
 
sonycowboy said:
The expense of the servers has become a small percentage of the overall cost of maintenance, customer support, additional content, and continued advertising & development.

Ok, you set up a server farm and have 200,000+ people connect to it 24/7 transferring gigs of bandwidth 24/7 while reading from the hard drives 24/7. On top of the high bandwidth costs, you also have hard drives that need to be completely replaced every couple of months.
 
Sea Manky said:
The idea I was thinking of was that the game itself would intentionally be written to leave the additional cycles needed by the server network. One of the benefits of using a single architecture would be being able to tweak this amount to get the best performance both for the local end user and for the grid itself, for a given minimum number of machines connected.


The problem is more a factor of the speed of the network and the mechanics of synchronizing over it. These are the things that make it impractical, but not impossible. We're talking about doing things an order of magnitude slower than just doing them locally in the average case, so it makes more sense to just do it locally rather than have a grid for the sake of having a grid... not to mention that adding broadband gridding to our application will suck for the millions of people in our target market who don't actually have broadband.
 
Phoenix said:
The problem is more a factor of the speed of the network and the mechanics of synchronizing over it. These are the things that make it impractical, but not impossible. We're talking about doing things an order of magnitude slower than just doing them locally in the average case, so it makes more sense to just do it locally rather than have a grid for the sake of having a grid... not to mention that adding broadband gridding to our application will suck for the millions of people in our target market who don't actually have broadband.

Oh, I'm not suggesting this as a panacea for games in general, as if it's going to magically make everything better. I'm talking about a situation where the game would need something outside the scope of any individual client console, something that would normally require a central server. One possibility could be a very large, persistent, yet mutable world that a single console couldn't keep track of. As long as the grid could provide object data to each client fast enough to keep the nearby surroundings updated - something which shouldn't require rendering-speed latency - it would work, and would remove the need for that central server. And in either case, non-broadband people are screwed anyway. :P
 
Sea Manky said:
Oh, I'm not suggesting this as a panacea for games in general, as if it's going to magically make everything better. I'm talking about a situation where the game would need something outside the scope of any individual client console, something that would normally require a central server. One possibility could be a very large, persistent, yet mutable world that a single console couldn't keep track of. As long as the grid could provide object data to each client fast enough to keep the nearby surroundings updated - something which shouldn't require rendering-speed latency - it would work, and would remove the need for that central server. And in either case, non-broadband people are screwed anyway. :P

Think of it this way - if your console can't keep track of it, that means no other individual console can keep track of it either. So solving the problem requires more than one console, so you spread the work across those other consoles. At the same time, those other consoles want to spread their work across other consoles as well. What you have is a scenario where everyone in the grid needs more capacity than the grid is capable of delivering (in this scenario).

Let's break this down into a couple of scenarios:

1) My console can't do it, but another console has spare cycles.

We'd have to assume that they aren't playing the same game, nor any game as capable, in order to assume that they have spare cycles. If they are - then they need cycles too. So a single console can't help me, because if a single console had the cycles, then it would logically follow that MY console should have the cycles.

2) My console can't do it, and it would require more than one console with spare cycles to do the work.

Say there are 4 consoles on the network and my work requires two additional consoles to help. My console could consume the free cycles of consoles 2 and 3; console 4 needs work but no one else can help. Consoles 2 and 3 are also in the same situation, but since they are donating their idle CPU they can neither help themselves nor anyone else. In this scenario the grid will deadlock itself because the demand will be higher than the supply.

These are the two scenarios that we've talked about thus far (and there are definitely others) and in neither do we even have to get into the network or the type of game. At the design/architecture level it just doesn't work.
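Scenario 2 reduces to simple supply-and-demand arithmetic. A sketch of the four-console example above, with the per-console numbers assumed for illustration:

```python
# Phoenix's scenario 2 as arithmetic: every console needs two consoles'
# worth of extra help, but each can donate at most its own idle slice.
# All numbers are illustrative assumptions.
consoles = 4
extra_needed = 2   # consoles' worth of help each machine needs
spare = 1          # idle slices each machine can donate (optimistic)

demand = consoles * extra_needed  # slices wanted across the grid
supply = consoles * spare         # slices actually available
deadlocked = demand > supply      # the grid cannot satisfy itself
```

Adding more consoles doesn't help when every new console brings more demand than supply; the inequality scales with it.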

You may want to check out Grid Computing
 
Phoenix said:
Think of it this way - if your console can't keep track of it, that means no other individual console can keep track of it either. So solving the problem requires more than one console, so you spread the work across those other consoles. At the same time, those other consoles want to spread their work across other consoles as well. What you have is a scenario where everyone in the grid needs more capacity than the grid is capable of delivering (in this scenario).

I think you're misunderstanding me. I'm not talking about the game program on the console needing more power to do what it does; I'm talking about a completely separate server program running on the collective Cell grid. The addition of more consoles doesn't add more complexity to the program, it just adds more available CPU cycles for it. The game world in the hasty example I gave doesn't get bigger the more people join; it just happens to be bigger to start with than any one console can track. Currently, we use central servers to handle this kind of situation; I'm just outlining a way they could be cut down significantly by letting the large numbers of connected machines carry the load. Now whether the latency issues you mention can be worked around is a different story, but considering there's real talk about shared rendering in some contexts, I don't think it's too unrealistic.
 
Sea Manky said:
I think you're misunderstanding me. I'm not talking about the game program on the console needing more power to do what it does; I'm talking about a completely separate server program running on the collective Cell grid. The addition of more consoles doesn't add more complexity to the program, it just adds more available CPU cycles for it. The game world in the hasty example I gave doesn't get bigger the more people join; it just happens to be bigger to start with than any one console can track. Currently, we use central servers to handle this kind of situation; I'm just outlining a way they could be cut down significantly by letting the large numbers of connected machines carry the load. Now whether the latency issues you mention can be worked around is a different story, but considering there's real talk about shared rendering in some contexts, I don't think it's too unrealistic.

Do you really think that a grid of PS3s is going to be that idle? The problem with all these ideas revolves around three flaws:

1) the transport cost of data/functions is not zero
2) No plugged in/powered on PS3 should be that idle - it is a game machine, it should be using ALL its resources for running the game that the owner purchased
3) A grid spanning across the PS3 community would have to have an API that every developer would agree to. Everyone would have to agree to give up a portion of their console to participate in a grid with no real benefit

It is impractical for a consumer electronics device outside of maybe your fridge to be that idle CPU wise.
 
Sea Manky said:
I think you're misunderstanding me. I'm not talking about the game program on the console needing more power to do what it does; I'm talking about a completely separate server program running on the collective Cell grid. The addition of more consoles doesn't add more complexity to the program, it just adds more available CPU cycles for it. The game world in the hasty example I gave doesn't get bigger the more people join; it just happens to be bigger to start with than any one console can track. Currently, we use central servers to handle this kind of situation; I'm just outlining a way they could be cut down significantly by letting the large numbers of connected machines carry the load. Now whether the latency issues you mention can be worked around is a different story, but considering there's real talk about shared rendering in some contexts, I don't think it's too unrealistic.

Separate server program. Really? That doesn't change anything; it would still take up computing power no matter how separate the program is.

You are seriously underestimating the issues here. How exactly is this real talk "more real"? These aren't things he's regurgitating from some other tech site. These are issues you can take to anyone studying computer science who specializes in high-performance parallel computing. A lag on the internet in your Halo game isn't bad, you say? Yeah, it cost you the game and gave you some delays. But at the level we are talking about here, you will cripple, if not paralyze, the computing structure. You have to see it, study it, and live it for yourself. There is a reason people are paid tons of cash to program large computing clusters. Interestingly enough, you assume the logic behind synchronization, scheduling, etc. concerning processes is perfect.

Think of it this way. Think of the number of operations a CPU can perform in a second. Now add a lag between each operation. The lag problem increases at an absurd rate. Say you have 8 CELLs with an extra CELL to keep track of all the internet processes; you will not only cripple it, but the same machine using all 9 CELLs locally will outperform it by an embarrassing margin. Why even bother putting so much fast memory in the PS3 - let it all stream from one central server instead. I mean, do you trust the internet enough to have your memory run to and from a central server? It's fast enough for CELL to do computations to and from, so the memory shouldn't be a problem, no? CELL can perform operations faster than memory can be read, doesn't it?


Go ahead and quote me on the first paragraph. I don't need CELL assumptions to back it up. Computing power doesn't solve everything, and in this case, not even close.
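The 8-vs-9 CELL comparison above can be roughed out numerically. All figures are illustrative assumptions, not measurements of real hardware:

```python
# Even one round of network lag swamps the gain from nearly equal raw
# compute. Every number here is an assumption for illustration only.
op_time = 1e-9    # 1 ns per operation on one processor
ops = 1_000_000   # operations in one batch of work
lag = 0.050       # 50 ms of network lag per remote batch

local_9 = ops * op_time / 9        # 9 processors, all local
remote_8 = ops * op_time / 8 + lag # 8 remote processors + one lag round

margin = remote_8 / local_9  # the networked split loses by a wide margin
```

One 50 ms round trip costs more than the entire local batch, so the machine keeping all nine processors local wins by orders of magnitude despite having barely more raw compute.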
 
Phoenix said:
Do you really think that a grid of PS3s is going to be that idle? The problem with all these ideas revolves around three flaws:

1) the transport cost of data/functions is not zero
2) No plugged in/powered on PS3 should be that idle - it is a game machine, it should be using ALL its resources for running the game that the owner purchased
3) A grid spanning across the PS3 community would have to have an API that every developer would agree to. Everyone would have to agree to give up a portion of their console to participate in a grid with no real benefit

It is impractical for a consumer electronics device outside of maybe your fridge to be that idle CPU wise.

Maybe I need to be more clear. I'm not talking about this hypothetical game taking advantage of every PSX3 out there, just the ones running this game. I also mentioned earlier that the game design would necessarily require some built-in CPU slack for exactly this reason. So picture the game being designed to use all but a small percentage of the CPU, and the Cell chip being told by the program to use its idle cycles to connect to and be a part of the distributed server program.
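The "built-in slack" idea amounts to a fixed per-frame reservation. A hypothetical sketch, with the budget and share values invented for illustration:

```python
# Hypothetical frame-budget scheduler: the game is designed to use all
# but a reserved slice of each frame, and the slice goes to grid work.
FRAME_MS = 16.6    # one 60 Hz frame
GRID_SHARE = 0.05  # 5% reserved for the distributed server (assumed)

def schedule_frame(game_work_ms):
    """Clamp game work so the reserved grid slice always survives."""
    grid_ms = FRAME_MS * GRID_SHARE
    game_ms = min(game_work_ms, FRAME_MS - grid_ms)
    return game_ms, grid_ms

# Even when the game wants more than a full frame, the slice is kept.
game_ms, grid_ms = schedule_frame(20.0)
```

This is exactly the trade-off Phoenix objects to: that reserved slice is CPU the game could have spent on itself, every frame, on every connected console.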
 
Sea Manky said:
Maybe I need to be more clear. I'm not talking about this hypothetical game taking advantage of every PSX3 out there, just the ones running this game. I also mentioned earlier that the game design would necessarily require some built-in CPU slack for exactly this reason. So picture the game being designed to use all but a small percentage of the CPU, and the Cell chip being told by the program to use its idle cycles to connect to and be a part of the distributed server program.


Alright.. so... not every XBOX is connected to XBOX LIVE playing Halo 2, yet Halo 2 sold how much again? How many people are playing it online again? It only takes one PS3 connected over the net to a CELL server cluster to bring the whole thing down. :)
 
marsomega said:
You are seriously underestimating the issues here. How exactly is this real talk "more real"? These aren't things he's regurgitating from some other tech site. These are issues you can take to anyone studying computer science who specializes in high-performance parallel computing. A lag on the internet in your Halo game isn't bad, you say? Yeah, it cost you the game and gave you some delays. But at the level we are talking about here, you will cripple, if not paralyze, the computing structure. You have to see it, study it, and live it for yourself. There is a reason people are paid tons of cash to program large computing clusters. Interestingly enough, you assume the logic behind synchronization, scheduling, etc. concerning processes is perfect.

Don't get me wrong, I'm purely speculating with all of this, and that's why I'm feeling around the idea. The reference to the other discussion was in regards to developers talking about real work being done on the subject, and considering that's on such a time-critical function as graphics rendering, I thought that something a bit less latency intensive could be feasible.

Think of it this way. Think of the number of operations a CPU can perform in a second. Now add a lag between each operation. The lag problem increases at an absurd rate. Say you have 8 CELLs with an extra CELL to keep track of all the internet processes; you will not only cripple it, but the same machine using all 9 CELLs locally will outperform it by an embarrassing margin. Why even bother putting so much fast memory in the PS3 - let it all stream from one central server instead. I mean, do you trust the internet enough to have your memory run to and from a central server? It's fast enough for CELL to do computations to and from, so the memory shouldn't be a problem, no? CELL can perform operations faster than memory can be read, doesn't it?

Again, I'm not talking about the entire game's logic and graphics being run off this grid; that would be preposterous. The game logic, rendering, and so on would all be handled locally, by the client program. Currently, that's how server-based games like MMORPGs work. The server doesn't poll your controller for input remotely and run all the game logic itself, but what it does do is provide information about the environment to your client as you go.

Now, does it matter to the client program if it's attaching to a single server or a distributed computing network if it gets the same results? And if not, does it really matter if a small portion of the CPU of the client machine is helping run that distributed computer grid?

Just to emphasize, I am not talking about the pie-in-the-sky stuff where your PSX3 grabs cycles from your neighbors to make your games look better. Absolutely not.
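The "does it matter to the client" question is really about an interface boundary. A toy sketch of that abstraction, with all class and method names invented:

```python
# The client talks to a "world service" interface; whether a dedicated
# farm or a peer grid implements it is invisible to the client code.
class CentralServer:
    def get_region(self, region_id):
        return {"region": region_id, "source": "dedicated farm"}

class GridServer:
    def get_region(self, region_id):
        return {"region": region_id, "source": "peer grid"}

def update_surroundings(world, region_id):
    # Identical client-side call either way; only the backend differs.
    return world.get_region(region_id)

a = update_surroundings(CentralServer(), 7)
b = update_surroundings(GridServer(), 7)
```

The interface hides *where* the answer comes from, but as the rest of the thread argues, it can't hide *how fast* or *how reliably* it arrives; that's where the two backends stop being interchangeable.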
 
marsomega said:
Alright.. so... not every XBOX is connected to XBOX LIVE playing Halo 2, yet Halo 2 sold how much again? How many people are playing it online again? It only takes one PS3 connected over the net to a CELL server cluster to bring the whole thing down. :)

Okay, I'm totally not following you here. The program wouldn't just use any Cell devices for this grid, just the ones running the game.
 
Then I'm really not getting you, or I misunderstood. But apparently Phoenix is in the same boat as I am, because judging from his response he's thinking the same thing I was.

I'm going to put up a disclaimer of some type. :lol
I'm not attacking you, I'm discussing it with you. I don't hate you. :)

Sea Manky said:
Okay, I'm totally not following you here. The program wouldn't just use any Cell devices for this grid, just the ones running the game.


There is a lot more to it than you think. I'll talk to you later on IRC if you're on. I might swing by, we'll have a chat. Mmmkay? :D
 
marsomega said:
Then I'm really not getting you, or I misunderstood. But apparently Phoenix is in the same boat as I am, because judging from his response he's thinking the same thing I was.

I'm going to put up a disclaimer of some type. :lol
I'm not attacking you, I'm discussing it with you. I don't hate you. :)

I'll talk to you later on IRC if you're on. I might swing by, we'll have a chat. Mmmkay? :D

Oh, absolutely, I'm feeling the same kind of disconnect with what Phoenix is saying, but there's nothing hostile about the discussion at all. I think I'm just looking at things from a different direction. I'll check you out on IRC then. :)
 
Sea Manky said:
Maybe I need to be more clear. I'm not talking about this hypothetical game taking advantage of every PSX3 out there, just the ones running this game. I also mentioned earlier that the game design would necessarily require some built-in CPU slack for exactly this reason. So picture the game being designed to use all but a small percentage of the CPU, and the Cell chip being told by the program to use its idle cycles to connect to and be a part of the distributed server program.

2) My console can't do it, and it would require more than one console with spare cycles to do the work.

Say there are 4 consoles on the network and my work requires two additional consoles to help. My console could consume the free cycles of consoles 2 and 3; console 4 needs work but no one else can help. Consoles 2 and 3 are also in the same situation, but since they are donating their idle CPU they can neither help themselves nor anyone else. In this scenario the grid will deadlock itself because the demand will be higher than the supply.


This distributed server program running across all of these machines comprises one distributed task with no real controller. But let's assume for a minute that Sony sets up a Fable-PS3-style world manager, and the world itself is distributed across all of the PS3s playing this Fable PS3 game (this will be our hypothetical scenario). So now we have a process separate from the game running in the background while our game is running in the foreground. This metaserver, we will assume, updates the weather in our world. Over time the controlling server sends out the global world state in pieces to all of the PS3s playing the game so they can make some contribution to the game. This makes some reasonable sense until you start looking at the bigger picture - you aren't just losing CPU, you're losing memory as well, because in order to store the units of work and the datasets you have to use memory on your local machine. Doable? Certainly. Practical? No.

Does it make sense for me as a Fable PS3 dev to simply stack an array of Cell servers in a cluster at my office and distribute the results to the PS3s Xbox Live style, or does it make sense for me to distribute the workload over a growing and shrinking number of PS3 consoles with limited resources and a now (since it's only related to a single game and set of data) hackable game environment?

Nothing that you're listing is impossible - it's all doable, but given the economies of scale it is very impractical, as the end nodes (PS3s) won't *really* have a lot of idle CPU to contribute. If I'm just being cheap, yes, I can offload the world to the players and minimize my investment in equipment. However, I've now replaced that with a much more error-prone and hackable system that has to be much more tolerant of faults in the end nodes than if I'd just done it locally, without the synchronization nightmare that results.

If you are writing a PS3 game and have CPU to spare, don't 'share it out' - that's inefficient on so many levels. Add more enemies, give your physics engine more time to run, add more effects, do better skinning, synthesize voices, etc. Use the CPU if you've got CPU to spare!
 
FIXED

Phoenix said:
Nothing that you're listing is impossible - it's all doable, but given the economies of scale it is very impractical, as the end nodes (PS3s) won't *really* have a lot of idle CPU to contribute. If I'm just being cheap, yes, I can offload the world to the players and minimize my investment in equipment. However, I've now replaced that with a much more error-prone and hackable system that has to be much more tolerant of faults in the end nodes than if I'd just done it locally, without the synchronization nightmares that result.

If you are writing a PS3 game and have CPU to spare, don't 'share it out' - that's inefficient on so many levels. Add more enemies, give your physics engine more time to run, add more effects, do better skinning, synthesize voices, etc. Use the CPU if you've got CPU to spare!
 