golem said:Quake 4 isn't due till next year?
It was clear that his Xbox 360 development is for id's next game.
border said:It's not a matter of him saying that "physics are useless" like some hyper-reactive people are claiming after reading a one-line summary of something he took at least 5-10 minutes to explain. He just seems to think that the performance tradeoff is currently not worth it, and that it's better to concentrate on the graphics side of things while physics tech improves behind the scenes.
Just remember consoles last 5 to 10 years, a PC changes every 6 months.
Nonsense. New video cards are released every six months, but the core concepts at work in PCs are largely the same as ever. Major DirectX upgrades are years apart. The concept of multiprocessing, Carmack's focus here, has been around for a very long time, and id's been doing it since before it became mandatory with the PS2.
tenchir said:Hate to correct you again, but Maze Wars also had networked deathmatch. It was in there at the beginning too, not added later on.
Guy LeDouche said:The nice people at Team Kojima are going to do some mind-blowing shit. I know its sooo far away, but MGS4 is already making my front pantal region moist.
The next-gen consoles are about as powerful as current high-end PCs, but their CPU power is roughly half that of a current PC processor.
hukasmokincaterpillar said:If his team isn't able to tap the new hardware then I personally won't take that as the sign for everyone else to pack their bags and call it a day, and I doubt the rest of the console development industry will either.
Again, it's not about "We can't tap this hardware". It's about a tradeoff -- "Is it worth all the extra money and manpower to tap this hardware?" I guess you can call the guy a hypocrite given id's exploitation of only two core franchises, but he seems legitimately concerned about rising budgets stifling creativity. He talks about what fun it's been working on cell phone games, because there was room to experiment and no pressure to put out a mega-hit or spend years on a project.
CaptainABAB said:I'm still respectfully disagreeing about MazeWar being an FPS (it's a dungeon crawler), and it might be the first "multi-person networked game," but that's not the same as true deathmatching.
The First First-Person Shooter
If you think id Software created first-person 3D gaming, think again: In the early 1970s, a group of engineers created a game known at various points as The Maze Game, Maze Wars, or simply Maze. It was a networked, first-person shooter with 3D graphics, and this is its story. By Alex Handy
border said:He says that PS3 will be the best "base" platform for developers that want to do multiplatform games, and that PC will be the worst. This isn't because he thinks PS3 is the best development environment, but rather because it will be easier to port Cell code to X360 than it would be to do it the other way around.
Doesn't that also mean there could be a lot more X360 exclusives? I wonder if MS planned this on purpose.
Doesn't that also mean there could be a lot more X360 exclusives?
Not really -- PS2 was in the exact same situation this generation, except even worse because it was also the weaker hardware.
---- said:Haven't both Sony and Microsoft embraced the Unreal Engine 3 technology thereby pretty much screwing over John Carmack? With the way the Doom 3 Engine has been passed over so completely I'm not surprised Carmack is bashing both systems.
No, id screwed themselves over by making an engine that was only suitable for mid-generation technology. The Doom 3 engine is too low-end for next-gen and too high-end for current gen (outside of Xbox). He's not bashing UE3 at all, anyway; he's mildly bashing the choice of an in-order multiprocessor setup. Carmack is so insanely rich that I doubt he cares too much about losing out on licensing deals. If he weren't such a humble and self-effacing type you could say it's about power or ego, but that doesn't seem likely either. If this were Derek Smart or Itagaki or something you might suspect ulterior motives, but I tend to give Carmack a pretty high credibility rating.
I don't always agree with JC's choices or opinions, but they are always valid and well thought out.
Nobody can argue that in-order multi-core programming will make games more expensive and difficult to make. JC is just saying that maybe we jumped one generation too soon. From a technical point of view I disagree, but from a production point of view I'm not so sure. The problem isn't whether we can get better theoretical performance from multi-core architectures; it's whether we have the tools and staff to get near that theoretical performance.
It's fine for the JCs of the world; we can lap this stuff up in our sleep. But if we are only 5-10% of the programming staff, how good is the overall code base going to be?
JC isn't a lone coder anymore; he's the lead of a team, and that's his concern, not his personal skill set. I'm sure JC could sit down and write the most awesome game on PS3 without blinking an eye, but nobody buys games written by a single guy anymore.
border said:Again, it's not about "We can't tap this hardware". It's about a tradeoff -- "Is it worth all the extra money and manpower to tap this hardware"? I guess you can call the guy a hypocrite given id's exploitation of only 2 core franchises, but he seems legitimately concerned about rising budgets stifling creativity. He talks about what fun it's been working on cell phone games, because there was room to experiment and no pressure to put out a mega-hit or spend years on a project.
border said:One thing he is "negative" about is that he thinks the difficulty associated with in-order multiprocessing is never going to get better, and that it's worth questioning whether the performance boost is worth the extra layers of complexity that X360 and PS3 have given us.
Parallel programming, when you do it like this, is more difficult. And anything that makes the game development process more difficult is not a terribly good thing. So the decision that has to be made there is: is the performance benefit you get out of this worth the extra development time? Sony sort of takes this position where, "OK, so it's going to be difficult, maybe it's going to suck to do this, but the really good game developers will just suck it up and make it work." And there's some truth to that. There will be the developers that go ahead and have a miserable time and do get good performance out of some of these multi-core approaches -- and Cell is worse than others in some respects here. But I do somewhat question whether we might have been better off in this generation having an out-of-order main processor rather than splitting it all up into these multi-core processor systems. It's probably a good thing for us to be getting with the programme now. The first-generation games for both platforms will not be anywhere close to taking advantage of all this extra capability, but maybe by the time the next generation of consoles rolls around, developers will be a little bit more comfortable with all this and be able to get more benefit out of it. But it's not a problem that I actually think is going to have a solution; I think it's going to stay hard. I don't think there's going to be a silver bullet for parallel programming. There have been a lot of very smart people, researchers and so on, who have been working on this problem for 20 years, and it doesn't really look any more promising than it was before.
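For reference, the *easy* end of what Carmack is describing looks like the sketch below: C with POSIX threads (all names here are invented for illustration), where each worker owns a disjoint slice of the input and a private output slot, so no locking is needed. The hard part he's talking about begins the moment tasks have to share mutable state.

```c
#include <pthread.h>
#include <stddef.h>

#define N        1000000
#define WORKERS  4

static double data[N];
static double partial[WORKERS];

struct range { int lo, hi, id; };

/* Each worker sums a disjoint slice into its own slot: no shared
 * mutable state between workers, hence no locks. */
static void *sum_range(void *arg)
{
    struct range *r = arg;
    double s = 0.0;
    for (int i = r->lo; i < r->hi; i++)
        s += data[i];
    partial[r->id] = s;
    return NULL;
}

double parallel_sum(void)
{
    pthread_t tid[WORKERS];
    struct range r[WORKERS];

    for (int w = 0; w < WORKERS; w++) {
        r[w] = (struct range){ w * (N / WORKERS), (w + 1) * (N / WORKERS), w };
        pthread_create(&tid[w], NULL, sum_range, &r[w]);
    }

    double total = 0.0;
    for (int w = 0; w < WORKERS; w++) {
        pthread_join(tid[w], NULL);
        total += partial[w];
    }
    return total;
}
```

Compile with `-lpthread`. The moment two workers must update the same object, you trade this simplicity for locks or message passing; that coordination cost is exactly the complexity the in-order multi-core designs impose on every part of a game.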
So that was one thing that I was pretty surprised about when talking to some of the IBM developers of the Cell processor. I think that they made, to some degree, a misstep in their analysis of what the performance would actually be good for, where one of them explicitly said, basically, "now that graphics is essentially done, what we have to be using this for is physics and AI." Those are the two poster children for how we're going to use more CPU power. But the contention that graphics is essentially done, I really think, is way off base. First of all, you can just look at it from the standpoint of "are we delivering everything a graphics designer could possibly want to put into a game, with as high a quality as they could possibly want?" And the answer is no. We'd like to be able to do Lord of the Rings quality rendering in realtime. We've got orders of magnitude of performance that we can actually soak up in doing all of this. What I'm finding personally in my development now is that, with the interfaces we've got to the hardware and the level of programmability we've got, you can do pretty close to whatever you want as a graphics programmer. But what you find, more so now than before, is that you get a clever idea for a graphics algorithm that will look really awesome and make a cool new feature for a game; you can go ahead and code it up, make it work, make it run on the graphics hardware, but too often I'm finding that this works great but it's half the speed that it needs to be, or a quarter of the speed. Or I start thinking about something: "well, this would be really great, but that's going to be one tenth the speed of what we'd really like to have there." So I'm looking forward to another order of magnitude or two in graphics performance, because I'm absolutely confident we can use it. We can actually suck that performance up and do something that will deliver a better experience for people.
Whereas if you say, "well, here's 8 cores, or later it's going to be 64 cores or whatever; do some physics with this that's going to make a game better," or even worse, "do some AI that'll make the game better" -- the problem with both of those is that both fields have been much more bleeding-edge than graphics has been. To some degree that's exciting: people in the games industry are doing very much cutting-edge work in many cases; it is THE industrial application for a lot of the research that goes on. But it's been tough to actually sit down and think how we'll turn this into a real benefit for the game -- how do we use these however-many gigaflops of processing performance to do some clever AI that, you know, winds up using it fruitfully?
And especially in AI, it's one of those cases where most of what happens, especially in single-player games, is much more of a director's view of things. It's not a matter of getting your enemies to think for themselves; it's a matter of getting them to do what the director wants and putting the player in the situation you are envisaging in the game. Multiplayer-focussed games do have much more of a case -- you do want better bot intelligence, which is more of a classic AI problem -- but with the bulk of games still being single-player, it's not at all clear how you use incredible amounts of processing power to make a character do something that makes the gameplay experience better. I keep coming back to examples from the really early days of Doom, where we would have characters doing this incredibly crude logic that fits inside a page of C code or something, and characters are just kind of bobbing around doing stuff, and you get people playing the game who believe the monsters have devious plans, that they're sneaking up on you and lying in wait -- and this is all just people taking these minor, minor cues and incorporating them in their heads into what they think is happening in the game. And the sad thing is, you could write incredibly complex code that does have monsters sneaking up on you, hiding behind corners, and it's not at all clear that that makes the gameplay better than some of the happenstance things that come out of emergent behaviour.
It's only when you get into games like The Sims or MMO games that you really do want these sorts of autonomous agent AIs running around doing things. But then that's not really even a client problem, that's more of a server problem, and that's not where the multi-core consumer CPUs are going to be a big help.
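The "page of C code" monster logic Carmack refers to is roughly a tiny state machine. A sketch of that idea (the states, thresholds, and grid model here are invented for illustration; this is not id's actual code):

```c
#include <stdlib.h>

typedef enum { IDLE, CHASE, ATTACK } ai_state;

typedef struct {
    ai_state state;
    int x, y;          /* monster position on a tile grid */
} monster;

/* One "think" tick: crude rules, yet players read intent into them. */
ai_state monster_think(monster *m, int px, int py, int can_see_player)
{
    int dist = abs(m->x - px) + abs(m->y - py);   /* Manhattan distance */

    if (!can_see_player) {
        m->state = IDLE;
    } else if (dist <= 1) {
        m->state = ATTACK;
    } else {
        /* Step one tile toward the player; the sign trick maps
         * "player is ahead/behind" to -1, 0, or +1. */
        m->state = CHASE;
        m->x += (px > m->x) - (px < m->x);
        m->y += (py > m->y) - (py < m->y);
    }
    return m->state;
}
```

Three states and one distance check, yet in play this reads as "it saw me, it's coming for me, it's backing off" -- which is Carmack's point about players supplying the intelligence themselves.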
Now, physics is sort of the other poster child of what we're going to do with all this CPU power, and there's some truth to that. Certainly some of the things we've been doing for physics have gotten a lot more intensive on the CPU: things like ragdoll physics and all these different objects moving around, which is one of these "raise the bar" issues -- every game now has to do this -- and it takes a lot of power. And it makes balancing some of the game things more difficult when we're trying to crunch things to get our performance up. The problem with physics is that it's not scalable with levels of detail in the way graphics are. Fundamentally, when you're rendering an image of a scene, you don't have to render everything to the same level. Essentially what we have in graphics is a nice situation where there's a large number of techniques we can fall back on to degrade gracefully. Physics doesn't give you that situation in the general case. If you're doing physical objects that affect gameplay, you need to simulate pretty much all of them, all the time. You can't have cases where you start knocking some things over, turn your back, and the engine stops updating the physics or drops to some lower fidelity -- because then you get situations where, if you hit something and turn around and run away, the objects land one way, and if you watch them they land a different way. And that's a bad thing for game development. This problem is fairly fundamental. If you try to use physics for a simulation that impacts gameplay -- things that will block passage and the like -- it's difficult to see how we can add the level of richness to the physical simulation of the world that we have for graphics without adding a whole lot more processing power.
And it tends to reduce the robustness of the game, and bring on some other problems. So what winds up happening, in the demos and things you'll tend to see on PS3 and the physics accelerator hardware, is that you see a lot of stuff that is effectively non-interactive physics. This is the safe, robust thing to do, but it's a little bit disappointing when people think about "I want to have this physical simulation of the world." It makes good graphics: instead of smoke clouds that clip into the floor, as we've seen for ages, you get smoke that pours around all the obstructions; you get liquid water that actually splashes, bounces out of pools, and reflects on the ground. This is neat stuff, but it remains kind of non-core to the game experience. An argument can be made that we've essentially done that with graphics, where all of it is polish on top of a core game, and that's probably what will happen with physics, but I don't expect any really radical changes in the gameplay experience from this. And I'm not really a physics-simulation guy; a lot of people are like "damn this software for making us spend all this extra time on graphics," and I'm one of those people who's like "damn all this software for making us spend all this extra time on here." I realise that things like basic boxes falling down, knocking things off, bouncing around the world, ragdolls -- that's all good stuff for games. But I do think it's a mistake for people to go overboard and try to do a real simulation of the world, because it's a really hard problem, and you're not going to get that much real benefit to the actual gameplay. You'll tend to make a game that may be fragile, may be slow, and you'd better have done some really, really neat things with your physics to make it worth all that pain and suffering.
And I know there are going to be some people looking at the processing stuff with Cell and the multi-core designs and saying, "well, this is what we've gotta do; the power is there, we should try and use it for this." But I think we're probably going to be better served just making sure all of the gameplay elements we want can run at a rapid rate, with respectably low variance. Personally, I would rather see our next generation run at 60 frames per second on a console than add a bunch more physics stuff. I actually don't think we'll make it; I think we will be at 30fps on the consoles for most of what we're doing. Anyways, we're going to be soaking up a lot of CPU just for the normal housekeeping types of things.
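The graphics-versus-physics asymmetry Carmack describes above can be sketched in a few lines of C: rendering can pick a cheaper representation by distance, while any body that can affect gameplay has to be stepped every tick regardless of who is looking. (The tier boundaries and the toy integrator below are invented for illustration.)

```c
/* Graphics side: pick a cheaper representation as distance grows.
 * The engine can always fall back to something coarser. */
typedef enum { LOD_FULL, LOD_MEDIUM, LOD_BILLBOARD } lod;

lod pick_lod(float distance)
{
    if (distance < 10.0f) return LOD_FULL;
    if (distance < 50.0f) return LOD_MEDIUM;
    return LOD_BILLBOARD;
}

/* Physics side: no such escape hatch.  If a body can block passage,
 * it must be integrated every tick whether or not anyone is looking,
 * or its final resting state becomes view-dependent. */
typedef struct { float pos, vel; } body;

void physics_step(body *bodies, int n, float dt)
{
    for (int i = 0; i < n; i++) {          /* all n bodies, always */
        bodies[i].vel += -9.8f * dt;       /* gravity */
        bodies[i].pos += bodies[i].vel * dt;
        if (bodies[i].pos < 0.0f) {        /* crude floor collision */
            bodies[i].pos = 0.0f;
            bodies[i].vel = 0.0f;
        }
    }
}
```

`pick_lod` can return a cheaper answer without changing what the player would notice; `physics_step` can only get cheaper by changing *outcomes*, which is exactly the "land a different way if you watch them" problem described in the transcript.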
Zaptruder said:I think the most relevant issue is that Cell will be flexible enough for designers to assign power to AI, physics, or graphics as they see fit; that's its primary strength, even if it does require a little more effort to achieve what's ultimately possible.
mckmas8808 said:Look, JC will get respect from me on the PC side, but not on the console side. If he and the Half-Life guy want to bitch and complain about next-gen hardware, then they can sit and watch hungry devs fly past them.
Just because they made beautiful games on the PC this gen doesn't mean they can speak out against the next-gen consoles' choice of hardware and be right. Hungry devs like Bizarre Creations (PGR3), Epic (Gears of War), Guerrilla (Killzone), DigiGuys (WarDevil), Bandai (Gundam), and many more will spank the pants off of the Half-Life and Doom games.
Keep crying; I don't care, I spend my money on hungry devs. Even EA has devs that make next-gen-worthy games (just watch the Fight Night Round 3 demo). And JC said that games will cost $100 million. Yeah, right.
My new slogan for PC devs might be. STOP CRYING!! STOP CRYING!! IF YOU DON'T YOUR NEXT-GEN GAMES WILL BE DYING!! :lol
gofreak said:It's interesting, because one of the things that can be done to improve graphics generally (in motion) is to tie physics more closely to it, but he does recognise that at least.
Guy LeDouche said:The nice people at Team Kojima are going to do some mind-blowing shit. I know its sooo far away, but MGS4 is already making my front pantal region moist.
Same here, and probably everyone else on GAF. I can't wait for TGS just for MGS4. Even if we just get a few pictures, hopefully we'll get to see some kick-ass trailer that will leave us all thinking "OMG MGS4 will be da best game eva!!!!" It will probably be like what happened when everyone saw the first trailer of MGS2.
BirdySky said:It's safe to say the first MGS4 trailer will shake the industry to its core.
This is what I dislike about the industry too. John is a very talented man and deserves the accolades, but few people could name the extremely talented folk at Team Kojima -- the coders who, on the PS2, have produced graphics above and beyond anything anyone else has (in the real-time cutscenes at least). The coders will never be interviewed, never remembered.
Just punch in, punch out...
Shame.
gofreak said:1) He did not say that IBM made a misstep with the Cell design, as is being reported by some; he said he takes issue with IBM's contention of how that power should be used. They say it should be used for physics and AI since graphics is "done". Carmack obviously disagrees. And if Carmack wanted to use any CPU's power for graphics, I think he'd be better off with Cell regardless. But I don't think he's saying he wants to do that; he was simply using that comment as a jumping-off point to assert the primary importance of graphics.
John Carmack said:So that was one thing that I was pretty surprised about when talking to some of the IBM developers of the Cell processor. I think that they made, to some degree, a misstep in their analysis of what the performance would actually be good for, where one of them explicitly said, basically, "now that graphics is essentially done, what we have to be using this for is physics and AI." Those are the two poster children for how we're going to use more CPU power. But the contention that graphics is essentially done, I really think, is way off base.
Kleegamefan said:The original MGS also caused a big stir when it was first shown on PSOne, too (though not to the degree of MGS2, which was just mind-blowing at the time).
I am sure Kojima knows everybody in the world is waiting to see what he will show of MGS4 on PS3......in a way, I'd hate to be him right now....
He is the benchmark...
Vince said:Or, God-forbid, if they followed his recommendation and went with an OOOE CPU, are you going to be better off with that when running visualization tasks than, say, Cell?
He's mentioned that MGS4 will be more about animation and making everything around very interactive.
Has he now? That's great news. If anyone would realize how important animation is, it would be Kojima (and his team).
dark10x said:Has he now? That's great news. If anyone would realize how important animation is, it would be Kojima (and his team).
PSM2: How will Snake look on PS3?
HK: We're not trying to make Snake look like a real person. We want to concentrate on the movement and animations, making him feel more natural, like an actual life-form.
Apenheul said:Yes, game developers would be better off with OOOE, but the thing is that these RISC CPUs, like Cell, are very, very cheap. For consoles to stay reasonably affordable you have to put these budget CPUs inside.
At 234 million transistors, Cell can't be that cheap.
border said:Yeesh, some people are dense. Are people so defensive about their Platform-Of-Choice(tm) that any criticism gets shrugged off as "whining", no matter how respected the source or how well-justified the critique?
sangreal said:I find it pretty funny because if you watch the video it's actually quite positive. The same people that are dismissing his opinion as worthless PC developer crap would probably be defending it if they watched it. The out-of-context sensationalist summary in the first post really isn't representative of the actual speech, in my opinion.
Where could we find this speech? I'd like to take a look at it...
dark10x said:Where could we find this speech? I'd like to take a look at it...
TAJ said:How is this NOT whining? Please tell me. He sounds like fucking Lorne Lanning talking about PS2.
It's a pretty well-measured critique, where he considers whether it's wise to add horsepower at the cost of adding complexity, and it's done in a fairly thoughtful and diplomatic way. How IS it whining? Or are you just of the opinion that any complaint is "whining"? Generally people would say that "whining" is incessantly complaining about trivial matters, but none of this is trivial at all -- it's the backbone of next-generation hardware.
radioheadrule83 said:As players, most of us are only really concerned with what is in our line of sight. With costs already prohibitively high in making a video game, should developers be overly concerned with what's going on outside of that line of sight when it costs so much performance? We've seen doors and walls crumble into bits; we can throw things about and they'll have a realistic weight to them and interact with other world elements correctly... this is all well and good. You can think up gameplay scenarios that would take advantage of it. But thinking about it -- does a goddamn video game need to be a simulation to the extent that it's constantly managing and updating superfluous aspects of a world that don't really affect the play much? The focus should indeed, most definitely, be on the gameplay.