
John Carmack Keynote

Just remember: consoles last 5 to 10 years, while a PC changes every 6 months. Console devs take time to learn the hardware and eventually exploit every facet of it. You can't do this on a PC because 1) you have to cater to a multitude of platforms and 2) those platforms are constantly evolving. Again, look at the difference between 1st-gen PS2 games and 2nd- and 3rd-gen PS2 games: huge. Just relax and give it time, and eventually the hardware will be understood. John and id are in one of the toughest spots, working with unknown hardware, like any launch-title studio. It's the hardest and biggest hurdle. But their mistakes are what make the next iteration of games even better.
 
border said:
It's not a matter of him saying that "physics are useless" like some hyper-reactive people are claiming after reading a one-line summary of something he took at least 5-10 minutes to explain. He just seems to think that the performance tradeoff is currently not worth it, and that it's better to concentrate on the graphics side of things while physics tech improves behind the scenes.

I guess since I'm the one that quoted the "out of context bullet point", I'm one of the hyper-reactive dudes you're referring to. My take is he's entitled to his opinion that the performance tradeoff is currently not worth it (and your breakdown is roughly what I inferred he was getting at from said out-of-context bullet point). He could also be wrong. Sony (and to a lesser extent MS) certainly think he is, as they're the ones actually manufacturing the hardware with this "tradeoff" in mind. And maybe they've missed the mark; we'll certainly have plenty of time to find that out. But John Carmack isn't the only talented developer in the gaming industry. If his team isn't able to tap the new hardware, then I personally won't take that as the sign for everyone else to pack their bags and call it a day, and I doubt the rest of the console development industry will either. But that's just me.
 
People have used all these bits-n-pieces of summary to paint it as though he has a very negative view of current technology. But again, if you actually watch the video, you see Carmack say he's happier than he's ever been with the way things are progressing.
Just remember consoles last 5 to 10 years, a PC changes every 6 months.
Nonsense. New video cards are released every six months, but the core concepts at work in PCs are largely the same as ever. Major DirectX upgrades are years apart. The concept of multiprocessing, Carmack's focus here, has been around for a very long time... and id's been doing it since before it became mandatory with the PS2.
 
tenchir said:
Hate to correct you again, but Maze Wars also had networked deathmatch. It was in there at the beginning too, not added later on.

I'm still respectfully disagreeing about Maze War being an FPS (it's a dungeon crawler), and it might be the first "multi-person networked game", but that's not the same as true deathmatching.
 
Guy LeDouche said:
The nice people at Team Kojima are going to do some mind-blowing shit. I know it's sooo far away, but MGS4 is already making my front pantal region moist.

It's safe to say the first MGS4 trailer will shake the industry to its core.

This is what I dislike about the industry too. John is a very talented man and deserves the accolades, but the extremely talented folks at Team Kojima, the coders who on the PS2 have produced graphics above and beyond anything anyone else has (in the real-time cutscenes at least)... yet few people could name them. The coders will never be interviewed, never remembered.

Just punch in, punch out...

Shame.
 
The next-gen consoles are about as powerful as current high-end pc's, but their cpu power is current PC processor / 2.

Umm, I'm not sure how accurate that can be when you consider that Cell can perform 100 million dot products per second, while the best P4 (single-core, at least) can perform 4 million a second.

Cell is a powerhouse. Carmack may dismiss it, but he does so wrongly... IMO.
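For anyone who wants to sanity-check the arithmetic, here's a trivial sketch using the figures claimed above. Note that `CELL_DOTS_PER_SEC` and `P4_DOTS_PER_SEC` are the poster's numbers, not verified hardware specs:

```python
# Back-of-envelope check of the throughput figures claimed above.
# The two constants are the poster's claims, not verified specs.

def dot3(a, b):
    """A single 3-component dot product: 3 multiplies + 2 adds."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

CELL_DOTS_PER_SEC = 100_000_000  # claimed for Cell
P4_DOTS_PER_SEC = 4_000_000      # claimed for the best P4

# If both figures were right, Cell would be 25x faster at this workload.
RATIO = CELL_DOTS_PER_SEC / P4_DOTS_PER_SEC
```

Even taken at face value, that's a 25x gap on one specific vectorizable operation, not a general 25x CPU advantage.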
 
hukasmokincaterpillar said:
If his team isn't able to tap the new hardware, then I personally won't take that as the sign for everyone else to pack their bags and call it a day, and I doubt the rest of the console development industry will either.
Again, it's not about "We can't tap this hardware". It's about a tradeoff -- "Is it worth all the extra money and manpower to tap this hardware"? I guess you can call the guy a hypocrite given id's exploitation of only 2 core franchises, but he seems legitimately concerned about rising budgets stifling creativity. He talks about what fun it's been working on cell phone games, because there was room to experiment and no pressure to put out a mega-hit or spend years on a project.

One thing he is "negative" about is that he thinks the difficulty associated with in-order multiprocessing is never going to get better, and that it's worth questioning whether the performance boost is worth the extra layers of complexity that the X360 and PS3 have given us.
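To make those "extra layers of complexity" concrete, here's a minimal sketch of the kind of restructuring a multicore console pushes on you: partition the work, run it on worker threads, then pay an explicit synchronization point before anything downstream can read the results. Names like `update_chunk` and `parallel_update` are illustrative, not from any real engine:

```python
from concurrent.futures import ThreadPoolExecutor

def update_chunk(chunk):
    # Stand-in for per-entity work (physics step, animation, ...).
    return [e * 2 for e in chunk]

def parallel_update(entities, workers=4):
    # Partition entities across cores. Keeping the chunks truly
    # independent (no shared mutable state) is where most of the
    # real engineering effort goes.
    chunks = [entities[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(update_chunk, chunks))
    # Explicit sync point: rendering may not read anything until every
    # worker is done. Sorting just restores a deterministic order here.
    return sorted(e for chunk in results for e in chunk)
```

The single-worker path has to produce identical results to the four-worker path, which is exactly the constraint that makes porting between a multicore console and a single-processor PC painful.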
 
CaptainABAB said:
I'm still respectfully disagreeing about Maze War being an FPS (it's a dungeon crawler), and it might be the first "multi-person networked game", but that's not the same as true deathmatching.

Disagree all you want, but most people would disagree with you. Oh yeah, the Computer Games Magazine article on Maze Wars is the July edition.

http://www.cgonline.com/content/view/952/59/

The First First-Person Shooter
If you think id Software created first-person 3D gaming, think again: In the early 1970s, a group of engineers created a game known at various points as The Maze Game, Maze Wars, or simply Maze. It was a networked, first-person shooter with 3D graphics, and this is its story. By Alex Handy
 
He says that PS3 will be the best "base" platform for developers that want to do multiplatform games, and that PC will be the worst. This isn't because he thinks PS3 is the best development environment, but rather because it will be easier to port Cell code to X360 than it would be to do it the other way around.
 
Haven't both Sony and Microsoft embraced the Unreal Engine 3 technology thereby pretty much screwing over John Carmack? With the way the Doom 3 Engine has been passed over so completely I'm not surprised Carmack is bashing both systems.

border said:
He says that PS3 will be the best "base" platform for developers that want to do multiplatform games, and that PC will be the worst. This isn't because he thinks PS3 is the best development environment, but rather because it will be easier to port Cell code to X360 than it would be to do it the other way around.
Doesn't that also mean there could be a lot more X360 exclusives? I wonder if MS planned this on purpose.
 
Doesn't that also mean there could be a lot more X360 exclusives?
Not really - PS2 was in the exact same situation this generation, except even worse because it was also the weaker hardware.
In the end more exclusives will go to where more money is made.
 
---- said:
Haven't both Sony and Microsoft embraced the Unreal Engine 3 technology thereby pretty much screwing over John Carmack? With the way the Doom 3 Engine has been passed over so completely I'm not surprised Carmack is bashing both systems.

Yeah, the Doom 3 Engine can't hold a candle to the UE3 engine. Call Doom 3 an older engine if you want, but Quake 4 and Gears of War are coming out during the same year.
 
Haven't both Sony and Microsoft embraced the Unreal Engine 3 technology thereby pretty much screwing over John Carmack? With the way the Doom 3 Engine has been passed over so completely I'm not surprised Carmack is bashing both systems.
No, id screwed themselves over by making an engine that was only suitable for mid-generation technology. The Doom 3 engine is too low-end for next-gen and too high-end for current gen (outside of the Xbox). He's not bashing UE3 at all, anyway... he's mildly bashing the choice of an in-order multiprocessor setup. Carmack is so insanely rich that I doubt he cares too much about losing out on licensing deals. If he wasn't such a humble and self-effacing type, you could say it's about power or ego, but that doesn't seem likely either. If this were Derek Smart or Itagaki or something you might suspect ulterior motives, but I tend to give Carmack a pretty high credibility rating.

And let's not rule out the Doom 3 engine for next-generation stuff anyhow. Their engine has certainly not been pushed to its maximum, and the UE3 engine is very expensive to license. Isn't it like $1 million per game? All they have to do is price themselves under the competition and they will pick up some good deals. Doom 3 had to be a corridor game because hardware couldn't run anything bigger than that... but with better hardware it might be able to compete, financially if not visually, in more open-ended genres. The Q3A engine was mid-generation tech, but you still saw games like Call of Duty and Jedi Outcast/Jedi Academy using it years after it was released. For whatever reason, the people that made those games chose an id engine despite the fact that it was not cutting-edge. While nobody out there would say that those games were really stunners, nobody would say that they were real stinkers either. If everyone is right and the jump from generation to generation is less significant than it was last time, then id is in even better shape to license out their engine to low-budget or mid-budget games. It seems rather unclear what their plans for the Doom 3 engine are, though. They haven't really said anything about porting it to all the next-gen consoles. Obviously they are porting it to X360 for their Quake titles, but it remains unclear whether or not they will stick with it for their next major in-house project.

I think they definitely are frustrated that the PC is now going to be the worst platform to develop for if you want to make a multiplatform game. From the video you can tell Carmack really hates the licensing structure of the console world and really hates multiprocessor programming, but as things currently are that's where they will have to focus. As he states it, it will be difficult to port PC projects to consoles (50% speed reduction unless you do a major overhaul of the code) and it will be difficult to port console projects to PC (making a multi-processor console game support single-processor PC setups). He likes the open PC platform, but right now it's the last place you want to be if you want to appeal to a broad audience.
 
Some sensible comments from DeanoC on B3D that might put Carmack's concerns into some perspective:

I don't always agree with JC's choices or opinions, but they are always valid and well thought out.

Nobody can argue that in-order multi-core programming won't make games more expensive and difficult to make. JC is just saying that maybe we jumped one generation too soon. From a technical point of view, I disagree, but from a production point of view I'm not so sure. The problem isn't whether we can get better theoretical performance from multi-core architectures; it's whether we have the tools and staff to get near that theoretical performance.

It's fine for the JCs of the world; we can lap this stuff up in our sleep. But if we are only 5-10% of the programming staff, how good is the overall code base going to be?

JC isn't a lone coder anymore; he's the lead of a team, and that's his concern, not his personal skill set. I'm sure JC could sit down and write the most awesome game on PS3 without blinking an eye, but nobody buys games written by a single guy anymore.
 
border said:
Again, it's not about "We can't tap this hardware". It's about a tradeoff -- "Is it worth all the extra money and manpower to tap this hardware"? I guess you can call the guy a hypocrite given id's exploitation of only 2 core franchises, but he seems legitimately concerned about rising budgets stifling creativity. He talks about what fun it's been working on cell phone games, because there was room to experiment and no pressure to put out a mega-hit or spend years on a project.

I understand that concern, and it's certainly a valid one as the industry continues to grow. Publishing and funding for games is probably due for a major shift in philosophy sometime down the road if gaming wants to continue to expand at this pace. I suppose even an impending crash has to be considered if you want to be all doomy and gloomy about it. But from a hardware standpoint, multicore seems more like a natural growing pain that had to happen eventually. Perhaps Sony and MS jumped the gun, but obviously a wall has been hit and things need to change from a cost/performance perspective. The fact that the loudest moaning about the impending cramps is coming from pillars of the PC industry is rather instructive, IMO.

border said:
One thing he is "negative" about is that he thinks the difficulty associated with in-order multiprocessing is never going to get better, and that it's worth questioning whether the performance boost is worth the extra layers of complexity that the X360 and PS3 have given us.

That's rather negative, yes. :)
 
Heh, I have a good time watching people question John Carmack while taking everything Kutaragi / Iwata / Moore say as gospel :)
 
I posted this on B3D, but I'll post it here too.

After listening to the video, I don't think a lot of the chopped quotes and so forth floating around are fully or accurately representing what he was saying. I think it's better to have the full quotes, so for anyone just reading the quotes and not watching the video, here's the part of the speech that centered around multicore console processors and physics/ai, that seems to have generated some controversy:

Parallel programming when you do it like this is more difficult. And anything that makes the game development process more difficult is not a terribly good thing. So the decision that has to be made there is: is the performance benefit you get out of this worth the extra development time? And there's sort of an inclination to believe that, and there's some truth to it. Sony sort of takes this position where, "OK, so it's going to be difficult, maybe it's going to suck to do this, but the really good game developers will just suck it up and make it work." And there's some truth to that. There will be developers that go ahead and have a miserable time and do get good performance out of some of these multi-core approaches -- and Cell is worse than others in some respects here. But I do somewhat question whether we might have been better off in this generation having an OoO main processor rather than splitting it all up into these multicore processor systems. It's probably a good thing for us to be getting with the programme now. The first-generation games for both platforms will not be anywhere close to taking advantage of all this extra capability. But maybe by the time the next generation of consoles rolls around, the developers will be a little bit more comfortable with all this and be able to get more benefit out of it. But it's not a problem that I actually think is going to have a solution; I think it's going to stay hard. I don't think there's going to be a silver bullet for parallel programming. There have been a lot of very smart people -- researchers and so on -- that have been working on this problem for 20 years, and it doesn't really look any more promising than it was before.

So that was one thing that I was pretty surprised by when talking to some of the IBM developers of the Cell processor. I think that they made, to some degree, a misstep in their analysis of what the performance would actually be good for, where one of them explicitly said, basically, "now that graphics is essentially done, what we have to be using this for is physics and AI." Those are the two poster children for how we're going to use more CPU power. But the contention that graphics is essentially done, I really think is way off base. First of all, you can just look at it from the standpoint of "are we delivering everything a graphics designer could possibly want to put into a game, with as high a quality as they could possibly want?" And the answer is no. We'd like to be able to do Lord of the Rings quality rendering in realtime. We've got orders of magnitude of performance that we can actually soak up in doing all of this. What I'm finding personally in my development now is that with the interfaces we've got to the hardware, the level of programmability we've got, you can do really pretty close to whatever you want as a graphics programmer. But what you find more so now than before is that you get a clever idea for a graphics algorithm that will look really awesome and make a cool new feature for a game; you can go ahead and code it up, make it work, make it run on the graphics hardware, but ultimately, too often, I'm finding that, well, this works great, but it's half the speed that it needs to be, or a quarter of the speed. Or I start thinking about something: "well, this would be really great, but that's going to be one tenth the speed of what we'd really like to have there." So I'm looking forward to another order of magnitude or two in graphics performance, because I'm absolutely confident we can use it. We can actually suck that performance up and do something that will deliver a better experience for people.

Which is, if you say, "well, here's 8 cores -- or later it's going to be 64 cores or whatever -- do some physics with this that's going to make a game better," or even worse, "do some AI that'll make the game better." The problem with both of those is that both fields have been much more bleeding-edge than graphics has been. And to some degree that's exciting, where people in the games industry are doing very much cutting-edge work in many cases -- it is THE industrial application for a lot of that research that goes on -- but it's been tough to actually sit down and think about how we'll turn this into a real benefit for the game. Let's go ahead: how do we use this however-many gigaflops of processing performance to try and do some clever AI that, you know, winds up using it fruitfully? And especially in AI, it's one of those cases where most of the stuff that happens, especially in single-player games, is much more of a director's view of things. It's not a matter of getting your enemies to think for themselves; it's a matter of getting them to do what the director wants and putting the player in the situation you are envisaging in the game. Multiplayer-focused games do have much more of a case -- you do want better bot intelligence, which is more of a classic AI problem -- but with the bulk of games still being single-player, it's not at all clear how you use incredible amounts of processing power to make a character do something that's going to make the gameplay experience better. I mean, I keep coming back to examples from the really early days of Doom, where we would have characters that are doing this incredibly crude logic that fits inside a page of C code or something, and characters are just kind of bobbing around doing stuff, and you get people playing the game that are believing that they have devious plans and they're sneaking up on you and they're lying in wait, and this is all just people taking these minor, minor cues and incorporating them in their heads into what they think is happening in the game. And the sad thing is, you could write incredibly complex code that does have monsters sneaking up on you, hiding behind corners, and it's not at all clear that that makes the gameplay better than some of these sort of happenstance things that happen with emergent behaviour.
So until you get into cases where you think of games like The Sims or MMO games, where you really do want these sorts of autonomous agent AIs running around doing things -- but then that's not really even a client problem, that's more of a server problem, and that's not really where the multicore consumer CPUs are going to be a big help.

Now, physics is sort of the other poster child of what we're going to do with all this CPU power, and there's some truth to that. Certainly some of the things we've been doing on CPUs for the physics stuff have gotten a lot more intensive, where we find that things like ragdoll physics and all these different objects moving around -- which is one of these "raise the bar" issues, every game now has to do this -- take a lot of power. And it makes balancing some of the game things more difficult when we're trying to crunch things to get our performance up, because the problem with physics is, it's not scalable with levels of detail in the way graphics are. Fundamentally, when you're rendering an image of a scene, you don't have to render everything to the same level. It'd be like forward texture mapping, which some old systems did manage to do. But essentially what we have in graphics is a nice situation where there's a large number of techniques we can fall back on to degrade gracefully. Physics doesn't give you that situation in the general case. If you're trying to do physical objects that affect gameplay, you need to simulate pretty much all of them all the time. You can't have cases where you start knocking some things over and you turn your back on them, and you stop updating the physics, or even drop to some lower fidelity, where you get situations where you know that if you hit this and turn around and run away, they'll land in a certain way, and if you watch them they'll land in a different way. And that's a bad thing for game development. And this problem is fairly fundamental. If you try to use physics for a simulation that's going to impact the gameplay -- things that are going to block passage and things like that -- it's difficult to see how we're going to be able to add a level of richness to the physical simulation of the world like we have for graphics without adding a whole lot more processing power.
And it tends to reduce the robustness of the game, and bring on some other problems. So what winds up happening in the demos and things you'll tend to see on PS3 and the physics accelerator hardware is that you'll wind up seeing a lot of stuff that is effectively non-interactive physics. This is the safe, robust thing to do, but it's a little bit disappointing when people think about "I want to have this physical simulation of the world." It makes good graphics when you can do things like, instead of the smoke clouds that clip into the floor that we've seen for ages, you get smoke that pours around all the obstructions; you get liquid water that actually splashes and bounces out of pools and reflects on the ground. This is neat stuff, but it remains kind of non-core to the game experience. An argument can be made that we've essentially done that with graphics, where all of it is polish on top of a core game, and that's probably what will happen with the physics, but I don't expect any really radical changes in the gameplay experience from this. And I'm not really a physics simulation guy, so this is one of those things where a lot of people are like "damn this software for making us spend all this extra time on graphics" -- I'm one of those people who's like "damn all this software for making us spend all this extra time on here." But I realise things like the basic boxes falling down, knocking things off, bouncing around the world, ragdolls -- that's all good stuff for the games. But I do think it's a mistake for people to try and go overboard and do a real simulation of the world, because it's a really hard problem, and you're not going to give that much real benefit to the actual gameplay. You'll tend to make a game that may be fragile, may be slow, and you'd better have done some really, really neat things with your physics to make it worth all of that pain and suffering.
And I know there are going to be some people looking at the processing stuff with Cell and the multicore stuff and saying "well, this is what we've gotta do, the power is there, we should try and use it for this," but I think that we're probably going to be better served trying to just make sure all of the gameplay elements that we want to do, we can accomplish at a rapid rate, with respectably low variance in a lot of ways. Personally, I would rather see our next generation run at 60 frames per second on a console rather than add a bunch more physics stuff. I actually don't think we'll make it; I think we will be 30fps on the consoles for most of what we're doing. Anyways, we're going to be soaking up a lot of CPU just for the normal housekeeping type of things we'll be doing.

I think he's been taken out of context and misrepresented with some of the reports and quotes going around. Some quick thoughts:

1) He did not say that IBM made a misstep with the Cell design, as is being reported by some; he said he takes issue with IBM's contention of how that power should be used. They say it should be used for physics and AI since graphics is "done". Carmack obviously disagrees. And if Carmack wanted to use any CPU's power for graphics, I think he'd be better off with Cell regardless. But I don't think he's saying he wants to do that. He was simply using that comment as a jumping-off point to assert the primary importance of graphics.

2) He also did not say that physics was unimportant or unnecessary as such for games, or at least not in the way it was being portrayed. He's saying that if you want to take a pure simulation route, you're going to find it much more difficult to control what happens, and thus to ensure a good game experience. Some people earlier were making the point that physics can contribute to the eye candy, so why would Carmack think it wasn't important if he thinks graphics and presentation are important? He actually does say it can be used in that manner to make things look better. Physically based visualisation doesn't have to upset the apple cart as far as game design is concerned, and he points to that -- liquid water physics, smoke that behaves realistically, etc. So tying physics to visuals is useful as far as he's concerned. But he does tend to make it seem less important than graphics alone, which is flatly contradictory IMO. He talks physics down to a degree as being mostly relegated to that -- unless you're feeling lucky/ambitious -- but graphics is "just" about presentation too, and he hammers home at the beginning how important presentation is to the game, and why he's not apologetic about their technology-orientated approach. He somewhat admits the contradiction and concedes that point, although he glosses over it quickly, and he does admit he's not a physical simulation guy. He also does admit that this is something that requires a lot of power regardless of whether you go a pure simulation route or not. And of course, even if Carmack doesn't feel comfortable making physics a linchpin of the gameplay, others may (and others arguably have already).

3) With regard to AI, I think his focus is limited to the types of games he's making. I think he's right in that it's as much about what the player perceives as what the characters are actually doing. But purely directorial approaches just don't work, or at least his own examples don't. Doom 3's "directed" AI was horrible IMO. Maybe he thinks most people don't notice, but I do, and I'm sure I'm not alone.

4) His comment about perhaps being in a better position next-gen with multi-core etc. is of course true, but if the current systems were all OoO as he ponders, that wouldn't be the case. You gotta start sometime.

I also thought his comments on HD were interesting. He flat-out said that while enforced minimum resolutions may be OK for now with Quake 4 etc., with his next-gen rendering tech he'd prefer to do more complex per-pixel rendering at a lower resolution vs. having to cut that back to meet a higher resolution. It'll be interesting to see Sony's policy on enforcing minimum resolutions or not. Nintendo might also get some credibility out of that as well ;)

edit - he also later goes on to say, when talking about where he'd like hardware to evolve, "The quibbles that I make about the exact division of the cpus on the consoles and so on, they're really just essentially quibbles, the hardware is really great, everyone is making really great hardware."
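His point that physics can't degrade gracefully the way graphics can is easy to demonstrate with a toy integrator. The numbers and the `drop` function here are purely illustrative: step a falling object at a fixed 1/60s timestep, then cover the same stretch of game time with a crude half-rate "physics LOD", and the object ends up in a different place -- exactly the lands-differently-if-you-watch-it problem he describes.

```python
GRAVITY = -10.0  # toy units
DT = 1.0 / 60.0  # fixed timestep

def drop(height, steps, dt=DT):
    """Semi-implicit Euler integration of a falling point mass."""
    y, vy = height, 0.0
    for _ in range(steps):
        vy += GRAVITY * dt
        y += vy * dt
    return y

# Full-rate simulation vs. a crude half-rate "LOD" covering the same
# two seconds of game time: same forces, different final positions.
FULL = drop(10.0, 120)
COARSE = drop(10.0, 60, dt=2 * DT)
```

Graphics can swap in a cheaper technique and the scene still looks right; here, cheaper stepping visibly changes the outcome, which is why gameplay-relevant physics has to run everything all the time.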
 
Very interesting comments on physics/AI versus graphics, but I think what he's saying can go both ways. He's saying that falling objects and flowing liquid don't really improve gameplay. The same can be said about graphics, though, if not more so. Games like HL2 and Psi-Ops make excellent use of physics in their gameplay.

In fact, what he says about monsters seeming more intelligent with small code can be flipped around by saying that the better graphics get, the more OFF it looks compared to reality.

Sucks to hear that the next-gen consoles won't be much better than top-of-the-line PCs and will be surpassed very soon. I was hoping for a bigger gap.
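The "page of C code" observation holds up surprisingly well: a couple of trivial rules are enough for players to read intent into a monster. Here's a toy sketch of that idea (obviously not Doom's actual monster logic; `monster_think` and the state names are made up for illustration):

```python
import random

def monster_think(monster, player_visible, rng=random):
    """One tick of deliberately crude monster logic."""
    if player_visible:
        monster["state"] = "chase"
    elif monster["state"] == "chase":
        # Lost line of sight: just wander. Players tend to read this
        # as the monster circling around to flank them.
        monster["state"] = "wander"

    if monster["state"] == "chase":
        monster["x"] += 1  # stride toward the player's last position
    elif monster["state"] == "wander":
        monster["x"] += rng.choice([-1, 0, 1])
    return monster["state"]
```

Two states and a random walk, yet the emergent behavior (disappearing after losing sight, reappearing elsewhere) is what players describe as "devious plans".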
 
I think the most relevant issue is that, CELL will be flexible enough for the designers to assign power to AI, physics or graphics as they see fit; that's its primary strength, even if it does require a little more effort to achieve what's ultimately possible.
 
Zaptruder said:
I think the most relevant issue is that, CELL will be flexible enough for the designers to assign power to AI, physics or graphics as they see fit; that's its primary strength, even if it does require a little more effort to achieve what's ultimately possible.

Yeah, it seemed ironic that it was a Cell engineer's comment that sparked that discussion, when if he wanted more power for graphics, theoretically Cell should be able to afford him that more than other CPUs could. Which is why I don't think the "misstep" comment was being assigned to the Cell design or Cell at all, simply to the notion/comment that graphics is not a "sponge" for power anymore (which just happened to come from a Cell engineer).

I think, on that engineer's comment, while graphics is not "done" certainly, I think we do need greater steps up elsewhere now (not vs graphics, but vs the steps up we've seen before in those areas), if games are to behave as nicely as they look in static screenshots. It's interesting, because one of the things that can be done to improve graphics generally (in motion) is to tie physics more closely to it, but he does recognise that at least.
 
mckmas8808 said:
Look, JC will get respect from me on the PC side, yet not on the console side. If he and the Half-Life guy want to bitch and complain about next-gen hardware, then sit and watch hungry devs fly past you.

Just because they made beautiful games on the PC this gen doesn't mean they can speak out against next-gen consoles' choice of hardware and be right. Hungry devs like Blizzard (PGR3), Epic (Gears of War), Guerrilla (Killzone), DigiGuys (WarDevil), Bandai (Gundam), and many more will spank the pants off of Half-Life and Doom games.

Keep crying; I don't care, I spend my money on hungry devs. Even EA has devs that make next-gen worthy games (just watch the Fight Night 3 demo). And JC said that games will cost $100 million. Yeah, right.

My new slogan for PC devs might be: STOP CRYING!! STOP CRYING!! IF YOU DON'T, YOUR NEXT-GEN GAMES WILL BE DYING!! :lol


I'm sorry? Uh, what exactly is the difference between a console and a PC? Seriously, besides the fact that most consoles don't have some type of mass storage, a console is an integrated-circuit PC. It's not some GRAND MYSTERY for a PC programmer, let alone a programmer of Carmack's level of experience, to understand these platforms and write a game for them. He would actually be gaining peace of mind developing on a console; it gets rid of tons of compatibility issues. As was said in previous posts, Carmack has done an SMP engine for Quake 3 and has experience with multi-core programming. He just doesn't KNOW if RIGHT NOW it's WORTH IT TIME-WISE to HEAVILY INVEST IN MULTI-THREADED CODE, BECAUSE IT TAKES MORE TIME, WHICH AIN'T FREE, AND IT MIGHT NOT ALWAYS WORK BETTER GOING ACROSS MULTIPLE CORES AS OPPOSED TO ONE. Geezus, how hard is that to understand? The man knows multi-core will be the future; NO one doubts this, especially not JC. After a few years, all the hassle included with multi-core development now may be alleviated. So RIGHT NOW, like 2005 RIGHT NOW, he doesn't know if it IS worth it, not if it WILL be worth it -- of course it WILL; who knows what 2006 will bring. But RIGHT NOW it could be Pandora's box and end up costing a dev a crapload, and for what? So that 10 months later they could've done their game multithreaded with 1/10th the hassle and more performance.
 
gofreak said:
It's interesting, because one of the things that can be done to improve graphics generally (in motion) is to tie physics more closely to it, but he does recognise that at least.

Exactly... I mean, who would prefer to see 50,000 strands of immaculately modelled hair that remain static vs 200 'strands' that react dynamically to movement and clip realistically against themselves and other objects?

Then apply that line of thought to clothing, character limbs, debris, water, explosions, etc, etc.
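For the curious, that kind of dynamic strand is usually done with something like Verlet integration plus distance constraints (the classic position-based approach). A toy 2D sketch, assuming a pinned root that follows the character; every name and number here is illustrative, not from any real engine:

```python
# Minimal Verlet-integrated "hair strand": a chain of points with
# gravity, inertia, and fixed-length segment constraints.
# All names and constants are illustrative only.

GRAVITY = -9.8
SEG_LEN = 1.0
DT = 1.0 / 60.0

def make_strand(n):
    # Points hang straight down from a root at (0, 0).
    return [[0.0, -i * SEG_LEN] for i in range(n)]

def step(curr, prev, root):
    """Advance one frame. Returns (new_positions, old_positions)."""
    nxt = []
    for (x, y), (px, py) in zip(curr, prev):
        # Verlet integration: velocity is implicit in (curr - prev).
        vx, vy = x - px, y - py
        nxt.append([x + vx, y + vy + GRAVITY * DT * DT])
    nxt[0] = list(root)  # root is pinned to the character
    # Relax segment-length constraints so the strand doesn't stretch.
    for _ in range(10):
        for i in range(len(nxt) - 1):
            ax, ay = nxt[i]
            bx, by = nxt[i + 1]
            dx, dy = bx - ax, by - ay
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = (d - SEG_LEN) / d * 0.5
            if i == 0:  # root never moves; push only the free end
                nxt[i + 1] = [bx - dx * corr * 2, by - dy * corr * 2]
            else:
                nxt[i] = [ax + dx * corr, ay + dy * corr]
                nxt[i + 1] = [bx - dx * corr, by - dy * corr]
    return nxt, curr

strand = make_strand(5)
prev = [p[:] for p in strand]
for frame in range(120):
    # Drag the root sideways; the rest of the strand trails behind.
    strand, prev = step(strand, prev, (frame * 0.05, 0.0))
```

The constraint pass is what keeps this stable even with a dumb integrator, which is why position-based schemes like this became the standard trick for hair and cloth.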
 
Guy LeDouche said:
The nice people at Team Kojima are going to do some mind-blowing shit. I know its sooo far away, but MGS4 is already making my front pantal region moist.
Same here, and probably everyone else on GAF. I can't wait for TGS just for MGS4, even if we just get a few pictures, but hopefully we will get to see some kick-ass trailer that will leave us all thinking OMG, MGS4 will be da best game eva!!!! It will probably be like what happened when everyone saw the first trailer of MGS2 :D
 
BirdySky said:
It's safe to say the first MGS4 trailer will shake the industry to its core.

This is what I dislike about the industry too. John is a very talented man and deserves the accolades, but the extremely talented folk at Team Kojima, the coders who on the PS2 have produced graphics above and beyond anything anyone else has (in the real-time cutscenes at least)... few people could name them. The coders will never be interviewed... never remembered.

Just punch in, punch out...

Shame.

I agree 100%, it really is a shame...
 
BirdySky said:
It's safe to say the first MGS4 trailer will shake the industry to its core.

This is what I dislike about the industry too. John is a very talented man and deserves the accolades, but the extremely talented folk at Team Kojima, the coders who on the PS2 have produced graphics above and beyond anything anyone else has (in the real-time cutscenes at least)... few people could name them. The coders will never be interviewed... never remembered.

Just punch in, punch out...

Shame.

They'll be remembered as the first team that made the PS2 do what it was never supposed to do, according to many.

"Let's wait for gameplay videos... that trailer was all CG."

MGS4 is in a position to cause a similar effect to what MGS2 did when it was first shown.
 
I haven't read any of this apart from the opening comments. Basically he can fuck off.

If he's saying the X360 is slower than an AMD/P4, then the cores probably are. Especially if you don't code for them properly; if you just port your code across, it'll be shit.


Dissing Cell is just stupid, IMO. Whether you have to give yourself a migraine working out how to program its multiple SPEs doesn't matter; the fact is you'll have to do it to sell to the largest games market in the world, i.e. PS3 owners.

Simple economics. Bitch and moan all you like, Carmack, but unless you get a lovely moneyhat from MS, you'll have your PowerPoint like all the rest, showing how great your code is on PS3.
 
The original MGS also caused a big stir when it was first shown on the PSOne too (though not to the degree of MGS2, which was just mind-blowing at the time)...

I am sure Kojima knows everybody in the world is waiting to see what he will show of MGS4 on PS3......in a way, I'd hate to be him right now....

He is the benchmark...
 
gofreak said:
1) He did not say that IBM made a misstep with the Cell design, as is being reported by some; he said he takes issue with IBM's contention of how that power should be used. They say it should be used for physics and AI since graphics is "done". Carmack obviously disagrees. And if Carmack wanted to use any CPU's power for graphics, I think he'd be better off with Cell regardless. But I don't think he's saying he wants to do that. He was simply using that comment as a jumping-off point to assert the primary importance of graphics.

I agree with him on this point, although I think IBM etc. are pushing AI/physics as examples of "moving games forward". I expect most devs to use 75% or more of Cell for graphics. It's almost a perfect vertex-crunching machine. And you can use 1-2 SPEs for AI/physics and still have a massive increase compared to doing a bit here and there on your main processor, which is the current situation.
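In code terms, that kind of split is just static task partitioning: give graphics a fixed majority share of the cores and let physics/AI run concurrently on the rest. A toy Python sketch with a thread pool standing in for the SPEs; the 6/2 split and every function name here are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for SPE-style partitioning: 6 of 8 workers crunch
# vertices while the rest handle a physics job. All names and
# numbers are illustrative only.

def transform(verts, matrix):
    # Apply a 2x2 matrix to each 2D vertex (a stand-in for real T&L work).
    (a, b), (c, d) = matrix
    return [(a * x + b * y, c * x + d * y) for x, y in verts]

def step_physics(bodies, dt):
    # Trivial Euler integration for the "gameplay" objects.
    return [(x + vx * dt, vx) for x, vx in bodies]

verts = [(float(i), float(i % 7)) for i in range(6000)]
rot90 = ((0.0, -1.0), (1.0, 0.0))  # 90-degree rotation
bodies = [(0.0, 1.0)] * 100

with ThreadPoolExecutor(max_workers=8) as pool:
    # Split the vertex work into 6 chunks: the "graphics" share.
    chunks = [verts[i::6] for i in range(6)]
    gfx_jobs = [pool.submit(transform, c, rot90) for c in chunks]
    # Physics gets its own worker and runs alongside the graphics jobs.
    phys_job = pool.submit(step_physics, bodies, 1.0 / 60.0)

    transformed = [v for job in gfx_jobs for v in job.result()]
    bodies = phys_job.result()
```

On real Cell hardware the split would be done with SPE programs and DMA transfers rather than threads; the sketch only shows the scheduling idea, that graphics and physics can own fixed shares of the cores and overlap.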
 
Carmack is confusing me as he stated:

John Carmack said:
So that was one thing I was pretty surprised by when talking to some of the IBM developers of the Cell processor. I think that they made, to some degree, a misstep in their analysis of what the performance would actually be good for, where one of them explicitly said, basically, "Now that graphics is essentially done, what we have to be using this for is physics and AI." Those are the two poster children for how we're going to use more CPU power. But the contention that graphics is essentially done, I really think, is way off base.

OK, so here's my problem with this statement, which almost seems inconsistent: if we assume he's correct in saying that graphics isn't "essentially done," and you will need additional performance from the CPU to assist, which system (PS3, X360 or a PC) is going to be better equipped for that?

Specifically, exactly what are you going to do concerning graphics, physics or AI on the X360 CPU that you can't do (noticeably better) on Cell? Or, God forbid, if they had followed his recommendation and gone with an OOOE CPU, would you be better off with that when running visualization tasks than with, say, Cell?
 
Kleegamefan said:
The original MGS also caused a big stirr when it was first shown on PSOne too (though not to the degree of MGS2, which was just mind blowing at the time)..

I am sure Kojima knows everybody in the world is waiting to see what he will show of MGS4 on PS3......in a way, I'd hate to be him right now....

He is the benchmark...

I still love the original MGS trailers. They should've cut some Twin Snakes trailers like that, especially the stair scene, which was an awesome sequence in the first MGS trailers.

But you're right, the series has set a benchmark on each of the PlayStation consoles. I'm expecting the same for this one. It's a step in the right direction that, from the interviews, he doesn't seem to be focusing on what a lot of other developers are at the moment for next-gen games (realistic looks and tons of enemies on screen). He's mentioned that MGS4 will be more about animation and making everything around you very interactive.

Depending on how some reports pan out, we'll get to see the engine either later this month (Games Convention) or next month (TGS).
 
Vince said:
Or, God-forbid, if they followed his recommendation and went with an OOOE CPU, are you going to be better off with that when running visualization tasks than, say, Cell?

Yes, game developers would be better off with OOOE, but the thing is that these RISC CPUs, like Cell, are very, very cheap. For consoles to stay reasonably affordable you have to put these budget CPUs inside.
 
He's mentioned that MGS4 will be more about animation and making everything around you very interactive.
Has he now? That's great news. If anyone would realize how important animation is, it would be Kojima (and his team).
 
I had only read the bullet points before. After taking in the actual address, I think those bullet points were actually too detailed. All they needed was one. In his QuakeCon address, John Carmack said, "Waaaaaah! The new consoles are too much work. I won't be able to drive my Ferraris and build rockets as much if I want to stay competitive."
 
dark10x said:
Has he now? That's great news. If anyone would realize how important animation is, it would be Kojima (and his team).

It was in a recent PSM, here's the quote

PSM2: How will Snake look on PS3?

HK: We’re not trying to make Snake look like a real person. We want to concentrate on the movement and animations, making him feel more natural, like an actual life-form.
 
Apenheul said:
Yes, game developers would be better off with OOOE, but the thing is that these RISC CPUs, like Cell, are very, very cheap. For consoles to stay reasonably affordable you have to put these budget CPUs inside.
At 234 million transistors, Cell can't be that cheap.
 
Yeesh, some people are dense. Are people so defensive about their Platform-Of-Choice(tm) that any criticism gets shrugged off as "whining", no matter how respected the source or how well-justified the critique?
 
Thanks for the partial transcript. Pretty helpful.

Why do all these idiots think they can correctly paraphrase his speech when they don't know the first fucking thing? Jesus.

OK, I'm done.
 
I like what he's saying about AI and physics to be honest.

As players, most of us are only really concerned with what is in our line of sight. With costs already prohibitively high in making a video game, should developers be overly concerned with what's going on outside of that line of sight when it costs so much performance? We've seen doors and walls crumble into bits; we can throw things about and they'll have a realistic weight to them and interact with other global elements correctly... this is all well and good. You can think up gameplay scenarios that would take advantage of this. But thinking about it: does a goddamn video game need to be a simulation to the extent that it's managing every element and constantly updating what would be superfluous aspects of a world, when it doesn't really affect the play that much? The focus should indeed, most definitely, be on the gameplay.

This isn't to say you can't explore emergent behaviour and incredible physics simulation in your game. I think some of the things that Peter Molyneux went overboard with when hyping Fable, for example, can actually become a reality... and it'll be cool. I just think all he's really trying to say is that developers should be wary of the over-emphasis created by IBM and others peddling industry hype with their products. To some extent, gamers are already satisfied and challenged by the coded routines and experiences that exist today. You can refine them, and given an inordinate amount of time and money you could do everything possible to make your game world more like the real world. But would that make it any more fun? Developers should be starting with the concept of play and enhancing play... what makes their game fun?

For anyone who doubts the importance of the gameplay idea, can you tell me why this generation's most popular game was made on RenderWare? For a guy who codes graphics engines, John Carmack knows how important the interactivity is. I remember his comments on the importance of story fondly. I hope a few people take his comments on board.
 
border said:
Yeesh, some people are dense. Are people so defensive about their Platform-Of-Choice(tm) that any criticism gets shrugged off as "whining", no matter how respected the source or how well-justified the critique?

I find it pretty funny, because if you watch the video it's actually quite positive. The same people that are dismissing his opinion as worthless PC-developer crap would probably be defending it if they'd watched it.

The out-of-context, sensationalist summary in the first post really isn't representative of the actual speech, in my opinion.
 
sangreal said:
I find it pretty funny, because if you watch the video it's actually quite positive. The same people that are dismissing his opinion as worthless PC-developer crap would probably be defending it if they'd watched it.

The out-of-context, sensationalist summary in the first post really isn't representative of the actual speech, in my opinion.
Where could we find this speech? I'd like to take a look at it...
 
>>>Parallel programming, when you do it like this, is more difficult. And anything that makes the game development process more difficult is not a terribly good thing. So the decision that has to be made there is: is the performance benefit you get out of this worth the extra development time? And there's sort of an inclination to believe that, and there's some truth to it; Sony sort of takes this position where, "OK, so it's going to be difficult, maybe it's going to suck to do this, but the really good game developers will just suck it up and make it work." And there's some truth to that. There will be the developers that go ahead and have a miserable time and do get good performance out of some of these multi-core approaches<<<

How is this NOT whining? Please tell me. He sounds like fucking Lorne Lanning talking about the PS2.
See that look in those console programmers' eyes, John? You gotta get that look back, John. Eye of the tiger, man.
 
Kojima and his team need to concentrate on locking the frame rates in their games. I mean, if you are going to make a Rambo-like action game, at least lock the frame rate so that the game doesn't come to a crawl when you throw a grenade at a group of enemies. Unstable frame rate is the main problem in Kojima's games.
 
TAJ said:
How is this NOT whining? Please tell me. He sounds like fucking Lorne Lanning talking about PS2.
It's a pretty well-measured critique, where he considers whether it's wise to add horsepower at the cost of adding complexity. It's done in a fairly thoughtful and diplomatic way... how IS it whining? Or are you just of the opinion that any complaint is "whining"? Generally people would say that "whining" is incessantly complaining about trivial matters, but none of this is trivial at all; it's the backbone of next-generation hardware.

The fact that one of the most brilliant coders working today says things are going to be very difficult is bad news no matter how you spin it.
 
radioheadrule83 said:
As players, most of us are only really concerned with what is in our line of sight. With costs already prohibitively high in making a video game, should developers be overly concerned with what's going on outside of that line of sight when it costs so much performance? We've seen doors and walls crumble into bits; we can throw things about and they'll have a realistic weight to them and interact with other global elements correctly... this is all well and good. You can think up gameplay scenarios that would take advantage of this. But thinking about it: does a goddamn video game need to be a simulation to the extent that it's managing every element and constantly updating what would be superfluous aspects of a world, when it doesn't really affect the play that much? The focus should indeed, most definitely, be on the gameplay.

He entertains two ideas of physics in his speech. One is the type that affects the gameplay deeply, and that HAS to be simulated whether you're looking at it or not; he cautions against going overboard with that. Then there's the "looks nice but doesn't impact the gameplay" type of physics: the smoke that wraps around objects, the water that flows, the cloth that bends, etc., as a character moves. In the latter case you can just worry about what the player is viewing at the time.

The thing is, he talks all this down a little, even the physics that adds polish. And he goes on and on, questioning what benefit it brings to the core game. But at the very beginning of the speech, he talks about the importance of graphics and presentation to a game, and why it's a major focus of id! That's certainly a little contradictory. And he actually acknowledges that somewhat too (although he glosses over it really quickly).
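In code terms, that two-tier split (always-simulated gameplay physics vs. view-gated cosmetic physics) is something like this; all names are hypothetical, with 1D positions for brevity:

```python
# Sketch of the two-tier idea: gameplay physics always steps;
# cosmetic effects step only when the camera can see them.
# Every name here is illustrative, not from any real engine.

class PhysicsObject:
    def __init__(self, x, gameplay):
        self.x = x             # 1D position, for simplicity
        self.gameplay = gameplay
        self.steps = 0         # how many times we actually simulated

    def step(self, dt):
        self.x += 1.0 * dt     # pretend integration at 1 unit/sec
        self.steps += 1

def in_view(obj, cam_min, cam_max):
    # Stand-in for a real visibility/frustum test.
    return cam_min <= obj.x <= cam_max

def simulate(objects, dt, cam_min, cam_max):
    for obj in objects:
        if obj.gameplay or in_view(obj, cam_min, cam_max):
            obj.step(dt)
        # off-screen cosmetic objects are skipped entirely

door = PhysicsObject(50.0, gameplay=True)    # affects play: always simulated
smoke = PhysicsObject(50.0, gameplay=False)  # eye candy: only when visible
for _ in range(60):  # one second of frames, camera looking elsewhere
    simulate([door, smoke], 1.0 / 60.0, cam_min=0.0, cam_max=10.0)
```

After that second of frames the off-screen smoke has never been stepped while the gameplay-relevant door has; that skipped work is the whole saving he's weighing against the visual payoff.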
 
Well, the problem is that physics is both gameplay and graphics... perhaps mostly graphics. Acting realistic is part of looking realistic. The most advanced render in the world isn't convincing if it moves like a drunken robot, and water that doesn't move right doesn't look like water. So maybe physics isn't a boon to gameplay, but static figures are going to be static figures, no matter how shiny.

...and if you're arguing for core gameplay then raw horsepower isn't really relevant.
 