
John Carmack Keynote

Hitokage said:
Well, the problem is that physics is both gameplay and graphics... perhaps mostly graphics. Acting realistic is part of looking realistic. The most advanced render in the world isn't convincing if it moves like a drunken robot, and water that doesn't move right doesn't look like water. So maybe physics aren't a boon to gameplay, but static figures are going to be static figures, no matter how shiny.

...and if you're arguing for core gameplay then raw horsepower isn't really relevant.

Yup, completely agree. Given his focus on graphics, his downplaying of physics is all the more strange when you consider the benefit it can bring to the visuals, outside of the impact on gameplay.

That said, it felt like he was addressing the issue of physics to try and flesh out the argument, rather than as a core part of it. He did qualify his statements by basically saying the physics "issue" wasn't as bad as the AI one when it comes to "core gameplay", and that it does and will require a lot of power. I think he knew that part of the argument was weaker.
 
Apenheul said:
Yes, game developers would be better off with OOOE, but the thing is that these RISC cpu's, like Cell, are very very cheap. For consoles to stay reasonably affordable you have to put these budget cpu's inside.

Uh, the question was rhetorical. The answer is an emphatic No. As we've already seen, Cell outperforms a generic x86 OOOE core at computationally expensive tasks by a very wide margin; these tasks are a specific subset that maps mostly onto the functions used in games, such as its fast Fourier transform rate, MPEG decoding ability, raycasting/raytracing, etc. The problem is that, unlike the more generalized x86 cores we're used to, the current-generation console microarchitectures are specifically geared towards running only a subset of the tasks the former encounter and, thus, you need to alter your development strategies. Performance-wise, by any meaningful metric (e.g. flops/watt, flops/gate, flops/mm2, etc.) the console CPUs utterly outclass x86.

Cell was benchmarked at over 50 times the normalized performance of a G5 running IBM's raycasting-based Terrain Rendering Engine.

In addition, Cell is hardly a "very, very cheap" ASIC. In fact, much to the contrary, it's an extremely large IC (250mm2 vs a Pentium 4's 112mm2) that's fabricated on a more expensive PD-SOI substrate. Your comments are entirely fallacious and unfounded; in fact, they are plain dumb as fuck.
 
>>>It's a pretty well-measured critique, where he considers if it's wise to add horsepower at the cost of adding complexity. <<<

Console design is about getting the most power you can into a box that you can launch at $200-300 for acceptable losses. For the most part, people buying console games don't know or care how hard it was to get the game looking how it does. If you can be developer-friendly too, fine, but if you emphasize it over power, you'll be ignored by gamers.
 
- 3:35: In comparison to high-speed AMD/Intel system, X360 runs about half as fast
Out of curiosity, how did the Xbox CPU compare to the high-speed CPUs available at the time of its release?
 
dorio said:
Out of curiosity, how did the xbox cpu compare to high-speed cpu available at the time of its release?

There were 2GHz CPUs out at that time.

So you could say that maps neatly to the 2x difference Carmack is seeing with his PC code.

The difference here is that, harnessed properly, with certain workloads developers could get more out of the X360's CPU than they could out of current high-end CPUs - the same wasn't true of the Xbox CPU, which was exactly like the PC CPUs, except clocked much slower than the state of the art. Carmack's comparison here is using x86 code, which isn't very fair (although that is the situation he has found himself in).
 
dorio said:
Out of curiosity, how did the xbox cpu compare to high-speed cpu available at the time of its release?

Well, it was a 733MHz Celeron-class CPU released in late 2001, when 2GHz P4s were available, so I imagine it didn't compare very favorably.

That being said, JC's actual comment is:

"If you just take code designed for an x86 that is running on a P4/Athlon and run it on either of the PPCs for the new consoles it'll run at half of the speed, and that is because they are in-order processors..."
 
TAJ said:
If you can be developer-friendly too, fine, but if you emphasize it over power, you'll be ignored by gamers.
Except that the point here is that the power gain is not great enough to justify the complexity gain. The progression of single-core CPUs has halted, but you can still put out a pretty monstrously good one that can at least compete with a multi-core solution. In 5 years when single-core CPUs have not advanced at all, then the multi-core stuff will make more sense to use.
 
border said:
The fact that one of the most brilliant coders working today says things are going to be very difficult is bad news no matter how you spin it.

Listening to Carmack talk about the new multicore systems is like listening to a very gifted coachman with decades of experience driving horse-drawn carriages talk about the potential pitfalls of the first automobiles. :p He is a brilliant coder, yes, but his talents were honed on x86 PC's, and years spent coding for that platform has definitely shaped his outlook. To use another analogy, learning these new systems is akin to learning to drive a car with a stickshift--guys like Carmack and Newell who have never driven anything but an automatic are going to have a much harder time adjusting than some young kid who's learned on a stick is. Likewise, I think that the next generation of developers are the ones who'll get the greatest performance out of these new platforms. Cutting their teeth on those multicore systems, they'll come to think in terms of multiple cores instinctively when mentally breaking down programming tasks, rather than trying to translate PC coding techniques that have become second nature to them into a new environment where they won't always apply.
 
Tellaerin said:
He is a brilliant coder, yes, but his talents were honed on x86 PC's, and years spent coding for that platform has definitely shaped his outlook.
Except that he has more experience working with a multi-processor setup than almost anyone else working on console games. id has supported multiprocessing since at least Quake 3 in 1999....only the hardcore Saturn developers have been doing it longer.

And he has more experience with the PowerPC architecture than almost anyone else working on console games. id has ported their stuff to Mac for the better part of 10-13 years.

I thought we managed to shuffle off the whole "Oh, he just doesn't know about this field" bit on Page 1. To say he's just some kind of one-trick pony that can't get out of a single-CPU x86 mindset is crazy. He has demonstrated a really wide range of proficiency. He's not a part of the "Old Guard" on the verge of extinction....hell, the supposed "New Guard" only exists in people's minds at this point.
Cutting their teeth on those multicore systems, they'll come to think in terms of multiple cores instinctively when mentally breaking down programming tasks, rather than trying to translate PC coding techniques that have become second nature to them into a new environment where they won't always apply.
Like he says in the keynote, you can hold your breath until your face turns blue waiting for some kind of magical solution, but the problems at the heart of things have not been solved even though they've existed for 2 decades.
 
trippingmartian said:
Multi-core systems are nothing new. As JC says in the keynote, it's a problem that's been around for 20 years.

Yes, they've been around... as fodder for academics, not deployed in mass-market consumer goods. It's only at the mass-market scale, with a correspondingly larger community of coders competing with each other to exploit the power of these platforms to the fullest extent possible, that the true potential (and pitfalls) of coding for multicore systems will become apparent.
 
border said:
Except that the point here is that the power gain is not great enough to justify the complexity gain.
That's where I'd disagree. Sony could have bitten the bullet and put something in the class of a single-core, high-end A64 in the box, giving them similar thermal characteristics to the PS3 Cell.
They'd get improved single-threaded performance that way, but also at least an order of magnitude loss on any specialized tasks.
From a hw design standpoint it's a rather poor tradeoff.

Anyway, the thing is that rising costs in the industry are mostly not due to the technology side, so while I completely sympathize that there'll be more initial difficulty with this transition, it's not what will be the decisive factor in increasing costs further.
 
border said:
I thought we managed to shuffle off the whole "Oh, he just doesn't know about this field" bit on Page 1. To say he's just some kind of one-trick pony that can't get out of an single-CPU x86 mindset is crazy. He has demonstrated a really wide range of proficiency.

Proficiency doesn't eliminate biases or make personal preference disappear. I still contend that the hardware a person learns to code for first shapes their perspective on subsequent systems, and Carmack has always evinced a fondness for the linear, incrementally-improved x86 PC's over all else. That's where his roots are, and it does show. So regardless of what you may feel has been 'shuffled off', I'm far from convinced that his perspective on the new systems should automatically be taken as law, regardless of his occasional forays into multiple-processor coding or how visually impressive the engines he's developed have been.

border said:
He's not a part of the "Old Gaurd" on the verge of extinction....hell the supposed "New Guard" only exists in people's minds at this point.

You're right. A legitimate 'New Guard' will come into being once the young guns of the coding world start cutting their teeth on these platforms. It's the difference between someone learning a foreign language as an adult and learning it in the cradle. I daresay you'll see a definite jump in software quality once that happens.

border said:
Like he says in the keynote, you can hold your breath until your face turns blue waiting for some kind of magical solution, but the problems at the heart of things have not been solved even though they've existed for 2 decades.

And he'd rather have everyone keep holding their breath and stick with PC legacy architecture in the interim, for at least another console generation. I thought magical solutions wouldn't come if you waited? Better to start making that transition now.
 
Tellaerin,
For your statement to work, Carmack would have to be crying for everyone to go back to the Apple II! Carmack's own dev system for the longest time was a high-end SGI box, one that he had shipped to Hawaii for his honeymoon!
 
Tellaerin said:
I still contend that the hardware a person learns to code for first shapes their perspective on subsequent systems, and Carmack has always evinced a fondness for the linear, incrementally-improved x86 PC's over all else.
So far no developer has ever shown a preference for PowerPCs AND multi-core CPUs. So they're all just biased now? Nearly everyone's roots are in single-threaded applications and most of their roots are on non-PPC architectures. Carmack has shown far more preference for multiprocessing and PPC than probably 90% of console developers. If he is now "biased" against them then everyone else is biased in a much more extreme way.

I don't think his decrees should be taken as law, but I'm a helluva lot more willing to listen to him than the people actually trying to peddle this new hardware. Who do you think would be a more credible person to evaluate all of this?
It's the difference between someone learning a foreign language as an adult and learning it in the cradle. I daresay you'll see a definite jump in software quality once that happens.
Are all these analogies really accurate? Won't most people have to become proficient with single-threaded code before they can even tackle the multi-threaded stuff? Are current CS majors really required to even do that much in-order multi-threading? The sort of revolution you're talking about may require a big shift in CS curricula that won't happen for years.
And he'd rather have everyone keep holding their breath and stick with PC legacy architecture in the interim, for at least another console generation. I thought magical solutions wouldn't come if you waited? Better to start making that transition now.
OTOH You'd have us wait with budget-bloating hardware while we pray for some new generation of supercoders that can work with the same level of efficiency as the current ones. Why is it better to start the transition now, if it spirals costs upwards and doesn't produce a huge performance differential?

Carmack seems to be making the case for waiting another generation, and waiting until the multiple cores are capable of out-of-order execution. He's not down on the idea of multi-cores.....just their current implementation.
 
In theory, if they could have gotten an A64- or P4-equivalent OOOE chip that gave them the brute horsepower they'll have in the X360 and PS3, they would have.

Simple point is that they couldn't. They went with the best option available. Multicore, relatively simple chips.

That adds complexity, true. But the potential benefits must surely outweigh any disadvantages brought about by the complexity. The sheer amount of horsepower available is more than an order of magnitude higher than any desktop PC out there. It will remain significantly higher throughout the main lifetime of the next-gen consoles (I don't see Intel hitting 50-100 GFlops anytime soon).

Yes, it will be easier to get to 90% of the potential power of a P4 than it will be to get to 90% of a CELL. But then 90% of a P4 can be beaten with less than 10% of CELL, so even crap coders will find benefits in the choices of CPU by MS and Sony.

border said:
Carmack seems to be making the case for waiting another generation, and waiting until the multiple cores are capable of out-of-order execution. He's not down on the idea of multi-cores.....just their current implementation.

Sure, but if you wait a generation, Sony may well have multicore CPUs with OOOE, and you'll have a generation of coders used to it. Why not start learning now? On PC there is no incentive to learn to code for multiple cores. On PS3 and X360 there is - a clear financial incentive (funding from publishers based on your ability to extract performance from these devices)

Although I'd guess that next gen they'd still go without OOOE, as that seems a relative constant in consoles, and people will be grounded in it.

Spin the argument round. I'm looking forward to some hotshot console coder turning round in 5 years time and complaining how OOOE is holding back PC CPUs and making console-PC ports increasingly difficult.
 
border - it's arguable that PS2 devs have been dealing with a rather rough-around-the-edges and "mini" version of Cell for the last 5 years (it is an asymmetric, multi-"core" design: MIPS core + VU0 + VU1, and I believe in-order too, as a lot of console chips have been). I don't think it's true to say everyone's experience and inclination toward multi-core, or specifically the next-gen console CPUs, is the same. For some devs this will be more natural than for others.
 
We will see who is right and who is wrong. Hey guys, read carefully and think while you are doing so.


Who should we care more about when it comes to making console games for the PS3: John Carmack or Hideo Kojima? Should we pay more attention to Quake 4 on the X360 at TGS (if it's there) or MGS4 for the PS3? Both are great devs on their respective hardware of choice, correct?

Which game overall, whether in graphics, physics, sound, AI, epicness (is that a word?), etc., will show the true future of next-gen console games?

A. Metal Gear Solid 4 for PS3

or

B. Quake 4 for X360
 
complaining how OOOE is holding back PC CPUs and making console-PC ports increasingly difficult.
If the PC game market continues shrinking at its current rate, I doubt that'll ever happen :P
To be fair though, OOOE is not without drawbacks. It's actually possible for it to cause unpredictable FPU computations on existing x86 CPUs, which can be more than a minor annoyance in some cases.
 
mrklaw said:
Sure, but if you wait a generation, Sony may well have multicore CPUs with OOOE, and you'll have a generation of coders used to it. Why not start learning now? On PC there is no incentive to learn to code for multiple cores. On PS3 and X360 there is - a clear financial incentive (funding from publishers based on your ability to extract performance from these devices)
The reason not to start now would be that it pushes us toward a more violent and debilitating transition, when you could wait for something that at least has enough of a power differential to make it worth the transition.

Multi-core out-of-order CPUs are shipping on PCs right now....why not go with that for the interim?
Spin the argument round. I'm looking forward to some hotshot console coder turning round in 5 years time and complaining how OOOE is holding back PC CPUs and making console-PC ports increasingly difficult.
Enjoy the wait =)
I don't think it's true to say everyone's experience and inclination toward multi-core etc. is the same. For some devs this will be more natural than for others.
Everyone's inclination is not the same, but it doesn't seem like many people are that inclined towards the multi-core setup. Kind of obnoxious how people just chant "Kojima" to answer concerns, as if one guy who turns out a huge-budget game every 3 years is really encouraging.
 
mckmas8808 said:
A. Metal Gear Solid 4 for PS3

or

B. Quake 4 for X360

Q4 is running on a ported year-old engine that began development in 2000. At the very earliest, MGS4 will be released a year from now. This is something of an apples-to-carburetor comparison.
 
I found these comments interesting, I'm surprised they haven't gotten more discussion:

(warning: this is my rough transcription, it is in no way grammatically correct and there may be a few minor errors)

There is a fallacy which has been made over and over again, which is being made yet again this console generation, and that's that procedural synthesis is going to be worth a damn. People have been making this argument forever: that this is going to be how we use all of this great CPU power, that we are going to synthesize our graphics. It just never works out that way. Over and over and over again. The strategy is to bet on data rather than sophisticated calculations, and it's won over and over again. You basically want to unleash your artists and designers more and more. You don't want to have your programmer trying to design something in an algorithm.

If you want to do the best on all platforms, you would unfortunately probably try to program towards the Sony cell model, which is isolated worker threads that work on small little nuggets of data rather than kind of peer threads, because you can take threads like that and run them on the 360. You won't be able to get as many of them, but you can still run... You've got 3 processors with 2 threads or 3 cores with 2 threads, so you can go ahead and make a game that has half a dozen little worker threads that run on the cell processor there and run as just threads on the 360, and a lot of the PC specs will have HT, the processor is already twice as fast. If you just let the threads run it'll probably work out ok on the PC, although the OS scheduling might be a little dodgy for that, which might actually be something that Microsoft improves in Longhorn.

That's kind of an unfortunate thing, that that would be the best development strategy to go there, because it's a lot easier to do a better job if you followed the peer thread model that you would have on the 360, but then you're going to have pain and suffering porting to the cell. I'm not completely sure which way we're going to go. The plan of record is that it's going to go the Microsoft model, where we have the game renderer running as two primary threads, and then we have targets of opportunity for render surface optimization, physics work going on in the spare processors or threads, which will be amenable to moving to the cell. But it's not clear yet how much of the hand feeding of the graphics processor on the renderer, how well we're going to be able to move that to a cell processor. That's probably going to be a little bit more of an issue because the graphics interface on the PS3 is a little bit more heavyweight. You're closer to the metal on the Microsoft platform and we do expect to have a little bit lower driver overhead. People that program directly on the PC as the first target are going to have a significantly more painful time, though it'll essentially be like porting a game like we did with Doom on the Xbox. Lots of pain and suffering. You take a game that is designed for 2GHz or something and try to run it on an 800MHz-ish processor, you have to make a lot of changes and improvements to get it cut down like that. That is one of the real motivators for why we're trying to move some of our development to the consoles, to sort of make those decisions earlier.
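As a rough illustration of the "isolated worker threads working on small nuggets of data" model Carmack describes: below is a purely hypothetical sketch in portable C++ (std::thread and a hand-rolled job queue), not id's code and not the actual Cell SPU or Xbox 360 APIs. The point is only that each job owns its input and output, so the same worker logic could in principle run as an SPU program, a 360 hardware thread, or an ordinary PC thread.

[code]
// Hypothetical sketch of the "isolated worker threads on small nuggets of
// data" model: a shared queue of self-contained jobs, drained by a pool of
// workers. Portable std::thread, not the real SPU/360 APIs.
#include <condition_variable>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A "nugget": a small, self-contained unit of work. The worker only touches
// data owned by the job, which is what would let the same work be DMA'd
// into an SPU's local store instead.
struct Job {
    std::vector<float> input;   // copied in, never shared while running
    std::vector<float> output;  // written by exactly one worker
};

class JobQueue {
public:
    void push(Job* j) {
        { std::lock_guard<std::mutex> lock(m_); jobs_.push(j); }
        cv_.notify_one();
    }
    // Returns nullptr once the queue is closed and fully drained.
    Job* pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return closed_ || !jobs_.empty(); });
        if (jobs_.empty()) return nullptr;
        Job* j = jobs_.front();
        jobs_.pop();
        return j;
    }
    void close() {
        { std::lock_guard<std::mutex> lock(m_); closed_ = true; }
        cv_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Job*> jobs_;
    bool closed_ = false;
};

// Worker loop: pull a nugget, crunch it in isolation, repeat.
void worker(JobQueue& q) {
    while (Job* j = q.pop()) {
        j->output.resize(j->input.size());
        for (std::size_t i = 0; i < j->input.size(); ++i)
            j->output[i] = j->input[i] * 2.0f;  // stand-in for real work
    }
}

int main() {
    JobQueue q;
    std::vector<std::thread> pool;
    for (int i = 0; i < 6; ++i)              // "half a dozen little worker threads"
        pool.emplace_back(worker, std::ref(q));

    std::vector<Job> jobs(64);
    for (Job& j : jobs) { j.input.assign(256, 1.0f); q.push(&j); }

    q.close();                               // no more work coming
    for (std::thread& t : pool) t.join();
    std::printf("processed %zu jobs\n", jobs.size());
    return 0;
}
[/code]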
 
blackadde said:
Q4 is running on a ported year-old engine that began development in 2000. At the very earliest, MGS4 will be released a year from now. This is something of an apples-to-carburetor comparison.


My point exactly. He is talking as if he is speaking for the devs of the world, when he is really speaking for his own dev team. Shouldn't he be saying, "Me and my team don't think that the CELL is blah blah blah"? He makes it sound like only a few devs will be able to harness the power of the PS3 and X360. THIS IS NOT TRUE.

The truth is teams like Bandai that made games that looked like this on the PS2.

gundam_screen022.jpg


Are making games that look like this on the PS3.

25215gr.jpg
 
That’s probably going to be a little bit more of an issue because the graphics interface on the PS3 is a little bit more heavyweight. You’re closer to the metal on the Microsoft platform and we do expect to have a little bit lower driver overhead.
:( :( :(
 
mckmas8808 said:
My point exactly. He is talking as if he is speaking for the devs of the world. When he is really speaking for his own dev team. Shouldn't he be saying, "Me and my team don't think that the CELL is blah blah blah"? He makes it sound like only a few devs will be able to harness the power of the PS3 and X360. THIS IS NOT TRUE.

The truth is teams like Bandai that made games that looked like this on the PS2.

gundam_screen022.jpg


Are making games that looked like this on the PS3.

25215gr.jpg

holy shit @ the graphical jump! :O
 
border said:
I don't think his decrees should be taken as law, but I'm a helluva lot more willing to listen to him than the people actually trying to peddle this new hardware. Who do you think would be a more credible person to evaluate all of this?

Someone who doesn't have decades worth of vested interest in the existing paradigm, who isn't predisposed to look at the new systems with a jaundiced eye because said hardware's going to render a good amount of their legacy code and coding techniques obsolete, maybe? I'm more inclined to trust the opinions of young, bright coders who are still relatively new to the hardware (and to software development in general) than deeply-entrenched industry icons like Carmack or Gabe Newell who made their fortunes coding single-threaded applications.

border said:
Are all these analogies really accurate? Won't most people have to become proficient with single-threaded code before they can even tackle the multi-threaded stuff? Are current CS majors really required to even do that much in-order multi-threading? The sort of revolution you're talking about may require a big shift in CS curricula that won't happen for years.

Oh, there's no question that the transition's going to take time, though I also think many programmers' perspectives will be shaped more through actual experience in the industry than the time they spend in CS courses. Even if the next crop of up-and-coming coders isn't forced to deal extensively with multicore CPU's in university, having to come to grips with them at work from day one is going to result in those programmers acquiring a more intuitive grasp of the ins and outs of multicore development than a longtime PC developer would have. That's the transitional period. Once education catches up to the new tech, that's when things'll really take off. :)

border said:
OTOH You'd have us wait with budget-bloating hardware while we pray for some new generation of supercoders that can work with the same level of efficiency as the current ones. Why is it better to start the transition now, if it spirals costs upwards and doesn't produce a huge performance differential?

That's the thing. You seem to be looking at this from the standpoint of, 'John Carmack is a programming genius. If he says that these systems will be difficult to develop for, it means you'd have to be some kind of supercoder to write code for them.' I don't think that's the case. I think part of the potential difficulty he foresees arises from his personal biases when it comes to coding. Writing single-threaded x86 applications comes to him as naturally as breathing - he's been doing it for decades. Sidestepping the common issues and pitfalls native to this environment is probably second nature by now. Though Carmack also has experience writing multiprocessor code, I suspect it's not as deeply-ingrained a skill set--not as instinctive to him--and probably feels that much more like work as a result. There may be additional challenges involved in developing for these new platforms, yes, but I believe much of the supposed difficulty is just a matter of established coders trying to approach problems in a fashion that's not natural to them. The coders destined to get the most out of these multicore systems won't necessarily be better than Carmack, they'll just be the ones who've learned to think differently--the men and women who have internalized the new paradigm and apply it intuitively to problems when coding.

border said:
Carmack seems to be making the case for waiting another generation, and waiting until the multiple cores are capable of out-of-order execution. He's not down on the idea of multi-cores.....just their current implementation.

Be that as it may, multicore platforms are already being deployed in the consumer marketplace. I doubt it's a passing fad, so it might be in developers' best interests to start making the best of the situation. :)

EDIT: Should've said 'are already being deployed on a massive scale' (as in the upcoming next-gen console launches), but I think you know what I meant.
 
What does Carmack's speech have to do with a next generation game looking better than a last generation game? Posting comparison pics is mostly stupid because it's a question of how much time and money went into each project. Shitty Gundam games are probably not going to sell any better than they used to, but they will continue to cost more.
 
Tellaerin said:
I'm more inclined to trust the opinions of young, bright coders who are still relatively new to the hardware (and to software development in general) than deeply-entrenched industry icons like Carmack or Gabe Newell who made their fortunes coding single-threaded applications.
How can you trust someone new to the business to really weigh the value of old paradigm versus new paradigm? If all they know is programming multi-threaded applications then how are they any less biased or invested in a particular technology?

Carmack's focus has always been PCs, but let's not even pretend like that's been static. Things have moved from DOS to Windows, from LAN to Internet, software renderers to hardware accelerators, Glide to OpenGL/D3D, etc. Considering how many radical transformations there's been, it just seems short-sighted to say Carmack is just "biased". He's adapted happily to any number of new technologies, and even sought out new ones to implement in id games.

At the end of the day, the advocate in favor of waiting on complex multithreading is an industry veteran who has always been enthusiastic about tackling new problems and working with bleeding edge technology. The advocate for doing complex multithreading today is purely hypothetical, and might not exist for years. I dunno, maybe after this Sony and MS will try to get some important names to evangelize multithreading and there will be someone real making good counterpoints.
 
sangreal said:
I thought that line was weird because I'm pretty sure he says the opposite later in the speech
Thing is, that's something that had me genuinely worried in regards to the new consoles for a while now. People can say what they want about not wanting to deal with hardware too directly, but I'm far more used to running into issues with the wacky behaviour of abstraction/API interfaces than with the behaviour of hardware. As an old saying goes - "hardware never lies".
And frankly, given the history of some company's library performance, I see far more negative than positive about being Forced to use abstraction layers for everything.
 
border said:
How can you trust someone new to the business to really weigh the value of old paradigm versus new paradigm? If all they know is programming multi-threaded applications then how are they any less biased or invested in a particular technology?

Someone who's split their development efforts roughly equally between the PC and a machine like the PS2 (which is a crude precursor to the architecture of the next-generation consoles in several respects, as gofreak mentioned earlier) would be a more qualified judge than someone who's heavily invested in the PC development scene and dabbles with multicore stuff on the side.

border said:
Carmack's focus has always been PCs, but let's not even pretend like that's been static. Things have moved from DOS to Windows, from LAN to Internet, software renderers to hardware accelerators, Glide to OpenGL/D3D, etc. Considering how many radical transformations there's been, it just seems short-sighted to say Carmack is just "biased". He's adapted happily to any number of new technologies, and even sought out new ones to implement in id games.

That doesn't change the fact that the underlying PC architecture on which all those things were built has been relatively static, extended incrementally over the years. None of the things you've described has required programmers to rethink the very fundamentals of how they construct their applications the way the switch from single to multiple cores will. And that seems to be the sticking point for the veteran PC coders when it comes to the new generation of consoles.

border said:
At the end of the day, the advocate in favor of waiting on complex multithreading is an industry veteran who has always been enthusiastic about tackling new problems and working with bleeding edge technology.

Except when it's forcing him to totally rethink how he codes. Then he seems a little less than enthusiastic. :) I still feel that it's apples and oranges, and that a master of single-threaded coding isn't the best person to look to for balanced opinions regarding multithreading and multiple cores. I guess we'll have to agree to disagree here.

border said:
The advocate for doing complex multithreading today is purely hypothetical, and might not exist for years. I dunno, maybe after this Sony and MS will try to get some important names to evangelize multithreading and there will be someone real making good counterpoints.

I think you'll see advocates arise once more people begin learning the hardware. Whether or not you'll be willing to accept their opinions when those opinions are at odds with the oldschool PC development crowd (whom you evidently respect greatly) is another question entirely. ;)
 
I was thinking about how physics can play a role in gameplay. Imagine if MGS4 took place in the desert, how the sand would be affected by wind and form landscapes right in front of you. Possibilities are endless.
 
I find the number of people second-guessing Carmack in this thread just abso-fucking-lutely hilarious. I mean, it's not like he's just mindlessly bitching; he's presenting his case with rather in-depth quotes on how it works and why he thinks another approach would work better. If anything, I would say that his judgement is clouded because he's used to working with ultra high-end PC equipment that retails for $500+ a chip (be it graphics cards or processors). But it's not like the GAF sideline experts know any better.
 
Tellaerin said:
Except when it's forcing him to totally rethink how he codes. Then he seems a little less than enthusiastic.
His last two engines supported multiple processors. They didn't have to. Nobody is going to not buy id titles because they aren't multithreaded, and less than 2% of buyers can take advantage anyhow. There was no real financial benefit to doing this, yet he still did it anyway. I don't get how you can paint this guy as interested in keeping old tech around because it fits his skillset and is in his financial best interests. He's been making multithreaded games since long before there was even any financial incentive to do so. Why would he all of a sudden turn against the idea when there IS a financial incentive?

You make it out like this whole thing is just a matter of training and experience. Nothing is actually more difficult than another thing, you just have to be versed in a particular paradigm and you'll work just as well as using anything else. But is it ever that way in Computer Science?

You've likened learning multithreading to learning a new programming language. Okay, but people don't just program in a particular language because that's what they're the most familiar with. Somebody with 20 years of Assembly experience and only 5 years of C++ experience doesn't try to write games in Assembly just because he's "invested" more time in it. He knows it's damned difficult to do it in assembly, and if you ask him which language he'd rather work in then he'd probably still tell you C++. It's not always a matter of building education and experience....some ways of doing things are inherently more difficult, more detail-oriented, less forgiving of sloppiness, and just all-around more time consuming. Not to say that the difference between multithreading and single-threading will be as wide as the difference between assembly and C++....but it's just an example of how you can still lose efficiency when switching paradigms, even if your coders are very well versed in the more esoteric choice.

I don't pretend to be a programmer, so to form my opinion it's a matter of listening to the experts to understand whether multithreading is an issue that can be solved with training or whether it's something that will just require increased manpower now and for the foreseeable future. So far most everyone has said that familiarization and education will help, but it's something that will just permanently require more time and effort to be put into games. That part of Carmack's speech doesn't even seem to be in contention....the more controversial bit is the implication that we shouldn't have moved to multithreading so quickly. I expect that will be a point of contention for some time to come....though it's largely a moot point since there's really no turning back on the design of each console. I will be interested if Nintendo goes with a single-core solution though.
 
While it is true that the majority of game code (i.e. flow, state control, etc.) is done on one thread, any dependent processes that are designed correctly can be dumped off onto another thread or another core (HW or SW).

One such process could be the renderer, the sound engine, the particle system, etc. The game thread can queue up render calls, sound effect hits, etc. in a buffer to be processed on their respective cores at the end of the frame. Deferred processing is NOT new to console programmers.
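A minimal, hypothetical sketch of that deferred-processing idea (illustrative names only, not taken from any particular engine): the game thread records commands into one buffer during the frame, then at the end of the frame the buffers swap and a second thread drains what was just recorded.

[code]
// Minimal sketch of deferred processing via a double-buffered command list.
// The game thread records work; a consumer thread executes the recorded
// buffer at the end of the frame. Names and structure are hypothetical.
#include <cstdio>
#include <functional>
#include <thread>
#include <utility>
#include <vector>

using Command = std::function<void()>;

int main() {
    std::vector<Command> bufferA, bufferB;       // double-buffered command lists
    std::vector<Command>* record  = &bufferA;    // game thread writes here
    std::vector<Command>* execute = &bufferB;    // consumer thread drains this

    for (int frame = 0; frame < 3; ++frame) {
        // Game thread: queue up render calls, sound hits, etc. instead of
        // executing them inline.
        record->push_back([frame] { std::printf("draw calls for frame %d\n", frame); });
        record->push_back([frame] { std::printf("sound hits for frame %d\n", frame); });

        // End of frame: swap buffers and let another thread (core) drain
        // the just-recorded list.
        std::swap(record, execute);
        std::thread consumer([execute] {
            for (Command& cmd : *execute) cmd();
            execute->clear();
        });

        // A real engine would overlap the consumer with the next frame's
        // simulation; joining immediately keeps this sketch simple and race-free.
        consumer.join();
    }
    return 0;
}
[/code]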
 
If the worst case is about 1/2 the perf of a regular high-end PC chip, but with the potential for more than an order of magnitude increase in performance at several relevant tasks over such, it definitely was worth it, no ifs or buts. I trust the decisions of the hundreds of engineers involved in the design of these two CPUs more than a single dev, no matter how famous or great he is. NO single man can do what such a collective intelligence has achieved.

Devs that want to stick with the old paradigm can simply take the halving hit; given Moore's law, it's not much in a 5-10yr console lifespan... and those few blessed with divine grace beyond that of mere mortals can show stuff that'd take more than an order of magnitude increase in performance on traditional CPUs, which is significant in the long term... and that is what's important for closed/static h/w.

PS

It's folly to limit the potential of those who're amongst the brightest simply to ease the pain of those whose light shines not as bright. To limit the maximum potential of the brightest stars of an entire generation just to give a relatively small boost in performance and to ease the work of the average, that is so disgraceful.

PPS

[Inference heavy mode on]

A game like HS is said to run at about 5fps right now; the CPU, IIRC, is holding it back. With a standard high-end CPU gaining twice the performance, 10fps would be marvelous for some... but the devs are trying to hit 60fps, IIRC, and they believe that with Cell it's not outside the realm of possibility... with a standard high-end CPU it seems like it would be, and that would be a shame.

[/inference heavy mode off]

 
Oh! My Car! said:
Years ago I thought synchronization between threads would kill any advantage of multiple cores, I suppose that has changed or?
Me too, I can't imagine how that can be handled. God bless those like Carmack who are tackling the problem.
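For what it's worth, the usual answer to that worry is to keep the synchronization coarse. A rough, hypothetical sketch (plain std::thread, illustrative names): give each thread its own disjoint slice of the data and synchronize only once, with a join, so the locking cost doesn't grow with the amount of work being done.

[code]
// Coarse-grained parallelism sketch: each thread owns a disjoint slice of
// the particle arrays, so no locks are needed while working; the only
// synchronization is the join at the end. Names are illustrative.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const std::size_t count = 1000000;
    std::vector<float> positions(count, 0.0f);
    std::vector<float> velocities(count, 1.0f);
    const float dt = 1.0f / 60.0f;

    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = count / n_threads;
    std::vector<std::thread> pool;

    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end   = (t + 1 == n_threads) ? count : begin + chunk;
        // Each worker integrates only its own [begin, end) range.
        pool.emplace_back([&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                positions[i] += velocities[i] * dt;
        });
    }
    for (std::thread& t : pool) t.join();   // the single synchronization point

    std::printf("integrated %zu particles on %u threads\n", count, n_threads);
    return 0;
}
[/code]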
 
Where's Yu Suzuki in all this techno talk? If there's one guy who until recently had been at the forefront of pushing technology and its capabilities, it's him. He would give a very accurate picture of what this new technology means to the game industry.
 
Do you guys actually know what an order of magnitude is or is it just some number that gets thrown around to mean "Uhhh, something really large"?

Carmack notes a 40% speed increase when working with OOE multiprocessing. Hint: 40% != Order of magnitude
 
border said:
Do you guys actually know what an order of magnitude is or is it just some number that gets thrown around to mean "Uhhh, something really large"?

Carmack notes a 40% speed increase when working with OOE multiprocessing. Hint: 40% != Order of magnitude

An order of magnitude is a 10 fold increase.
 
You guys dogging Carmack are seriously confused and need to take a look around the industry. That GPU in the PS3 was inspired by JC; most of the features on it wouldn't be there if JC hadn't asked for them, and the same goes for the Xbox 360. No one else in the industry can make nVidia or ATi jump like Carmack can.
 
Let's wrap this thread up

Industry Outsiders and armchair coders are arguing and questioning the sanity and technical knowledge of a man who lives for coding, has destroyed or made the name of 3D card makers in the past by the power of a single .plan update, a man who has been sitting on the design boards of many makers... those people are questioning this man when he expresses doubts about claims of performance by certain hardware makers?


Just another day on GAF!
 
Naked Shuriken said:
Let's wrap this thread up

Industry Outsiders and armchair coders are arguing and questioning the sanity and technical knowledge of a man who lives for coding, has destroyed or made the name of 3D card makers in the past by the power of a single .plan update, a man who has been sitting on the design boards of many makers... those people are questioning this man when he expresses doubts about claims of performance by certain hardware makers?


Just another day on GAF!

Yes, John Carmack is a very important figure in the world of PC gaming. His opinion hasn't done jack to 'destroy or make the name of' any console hardware manufacturers I know of, and I doubt this is liable to change any time soon. This is something for which I'm grateful, since the last thing I'd want is for engineers to hobble their vision in order to appease a man who feels that change is coming 'too soon'. (Not to mention one whose priorities are somewhat at odds with where I feel the industry should be going - I'd hate to see hardware being tailored to suit a guy who thinks AI and physics in games now are 'good enough', and would rather concentrate on maxing out visuals. Games built around setpieces and monster closets are always going to be dull, no matter how much you pretty them up. :p )

I'm as incredulous over all the bowing and scraping I'm seeing as some of you seem to be over the fact that I don't accept Carmack's word as law. A new generation of console hardware's coming. Meanwhile, Carmack - a man whose bread and butter is not machines like this - is dwelling on the difficulty of developing for them. How dare we not bow to his superior knowledge?! So what if coding for single- and multicore systems require fundamentally different approaches? He's a coder, amirite? If he's good at coding for one type of architecture, he's gotta be just as talented with all the others! After all, he's a genius! Who the hell are these GAF peons to question? He's sending rockets into space, man! HE IS THE GOD CARMACK! Why not build the man a temple while you're at it? :p Sorry, but I don't share your weird compulsion to declare him an expert on development for the new consoles because he's created some impressive PC 3D engines, despite his dabbling with multicore systems in the past. If anything, I'm more interested in the opinions of the Japanese devs out there, guys who are already used to learning fundamentally different architectures from one console generation to the next and aren't quite as heavily invested in keeping things relatively static. (They're also the guys whose console output I actually care about - I'm not buying the next gen consoles to play the latest id title. I have a perfectly serviceable PC for things like that.)

Yeah, let's wrap this thread up. John Carmack's your god, and you're free to worship him to your heart's content. Fanboys will be fanboys, after all - it's just another day at GAF. Just don't disparage me because I don't stand in awe of the Mighty Carmack too. :p
 
mckmas8808 said:
The truth is teams like Bandai that made games that looked like this on the PS2.

gundam_screen022.jpg


Are making games that looked like this on the PS3.

25215gr.jpg

Isn't this more a case of Bandai this gen vs Namco Bandai next gen?
 