John Carmack Keynote

Fight for Freeform said:
Japanese developers' main strengths lie in art direction and execution, whereas American/Western developers' strengths lie in creating engines
You have the approach where you design the technology as a solution to your game design, and then you have the approach where you design a game as an answer to your technology.
Western games are usually more like the latter (though of course this is a simplified way of looking at things; nothing is black & white in the real world).
 
One thing you have to see is that the Carmack keynote is by far the most frank, in-depth, stream-of-consciousness talk that you will ever get from any programmer at a large game software house, ever. John Carmack is unique in that he is a tech head with complete freedom to say whatever he wants, whenever he wants. He is very frank about his distrust of PR speak, and I can guarantee you that every other technical game dev is severely restricted by their PR department in any interview they give. There is no tech head in the industry who has Carmack-level freedom. Gabe Newell has that kind of freedom, but he's not a programmer; he's a business guy, and all of his comments are based on cost.

And don't even bring up Japanese developers. Japanese programmers don't get any respect compared to their Western counterparts, and there's no one even remotely as in-depth, articulate, and willing to talk as Carmack. Itagaki certainly has the background, but the most in-depth he ever gets about tech details is 'Team Ninja can code the pants off of anybody else'; he's certainly never aired his views on the benefits of OOOE as opposed to multicore, for example.

In summary: denigrating Carmack because Epic programmers and Japanese devs haven't made the same comments makes no sense, because none of them have the kind of soapbox that Carmack is afforded. ALSO, Carmack never said the problems were insurmountable; he said that there were other ways to go that he would have preferred, ways that would have provided a much more immediate and significant boost (things like full virtualisation of texture mapping resources), and he provided his reasoning.
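(To make the texture-virtualisation point concrete, here's a rough toy sketch - my own illustration, not anything from id - of the core idea Carmack was pushing, which later became MegaTexture: treat one enormous texture like virtual memory, keep only the tiles actually being sampled resident, and translate tile coordinates through a page table.)

    #include <cstdint>
    #include <memory>
    #include <unordered_map>

    struct Tile { uint32_t x, y; };  // stand-in for a 128x128 block of texels

    class VirtualTexture {
        // Page table: virtual tile coordinate -> resident tile. A real system
        // bounds residency by evicting the least-recently-sampled tiles.
        std::unordered_map<uint64_t, std::unique_ptr<Tile>> resident_;
        static uint64_t Key(uint32_t x, uint32_t y) { return (uint64_t(y) << 32) | x; }
    public:
        // Sampling path: a page-table hit returns the tile; a miss (a "page
        // fault") would queue an async disk read and sample a coarser mip in
        // the meantime. This toy version just pages the tile in synchronously.
        Tile& Touch(uint32_t x, uint32_t y) {
            auto& slot = resident_[Key(x, y)];
            if (!slot) slot = std::make_unique<Tile>(Tile{x, y});
            return *slot;
        }
    };

    int main() {
        VirtualTexture vt;
        vt.Touch(3, 7);  // only tiles that actually get sampled ever become resident
        return 0;
    }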

Plus, equating the comments of Newell to Carmack as you do isn't particularly justified. Like I said, Newell is a business guy. Carmack is a programmer, and when Newell says "Multicore apps require much more costly talent, and you have to start from scratch", it is *not* equivalent to Carmack saying "Multicores are not going to be well exploited for certain in the early years of next-gen, academia hasn't been able to provide good techniques for the problems in decades, and there are technical avenues to explore that would provide much more immediate performance benefits".

Again: don't equate the comments from the two individuals to create a trend; they're coming from very different places. Newell is *right* when he talks about multicore games being much more expensive, but he never goes further than that. He's just concerned with Valve's balance sheet and milestone deadlines. Carmack's comments are the comments of a top-tier software programmer/hardware expert expounding on the (immediate) future.

The problem I have with your posts is that you aren't attacking Carmack's conclusions in the keynote based on the reasoning and explanations that he provides in his talk. You are attacking the conclusions partly based on what you think he is subconsciously doing, which is a lousy argument in any case.

Now, your saying that the Epic guys disagree has some merit, but not much, because you are not them. I would be EXTREMELY interested in hearing a discussion between Epic's head of programming for PS3 middleware and Carmack, as that would be an actual TECHNICAL discussion. You, alas, are a poor substitute.

In fact, one of the reasons the Epic comments gain more merit in your eyes makes them less reliable in mine, to wit:
they've positioned themselves as a major middleware provider for PC and next-generation console hardware
Epic programmers can't go out and say 'you know what, multicore compilation is really, really tough', because Epic as a company HAS to sell itself as the best technology to license for all kinds of development, and having your code monkeys say something like that is bad, bad PR. Epic technical staff do NOT have the freedom to be frank, as they do not have the stature that Carmack does.

ESPECIALLY since Carmack goes on to characterise all of his less-than-enthusiastic comments on the CPU division as 'quibbles' and to say that everyone is making great hardware.
 
Epic programmers can't go out and say 'you know what, multicore compilation is really, really tough', because Epic as a company HAS to sell itself as the best technology to license for all kinds of development

Sorry, but that's a poor argument. It seems to me that Epic might have something here. Would people be willing to use it for a million dollars if it wasn't that great? Don't get mad because Epic might have figured out how to make multicore programming easier and your god has not.
 
mckmas8808 said:
Sorry, but that's a poor argument. It seems to me that Epic might have something here. Would people be willing to use it for a million dollars if it wasn't that great? Don't get mad because Epic might have figured out how to make multicore programming easier and your god has not.
What the hell? I never said that Carmack was a god; I said that he has more freedom to be frank than Epic programmers do, and your entire response to that is 'It seems to me that Epic might have something here'... WHAT? What does that have to do with how much leeway Epic tech heads do or do not get from their PR and business bosses? In fact, I said that any disagreement that the Epic guys might have with the Carmack keynote would be extremely interesting to know about, and I'd like to be a fly on the wall in any such discussion (something that Tellaerin does not provide).
 
Tellaerin said:
Considering that most of the counterarguments I've heard here so far have amounted to little more than, 'He's a genius, so we must all defer to his opinion', I'm inclined to stand by my position.
Well, what possible counterargument can there be? You're still just going to hop up and down and call him "biased". He says something you don't like, and there's no way you can discredit him with your own logic or knowledge, and there's no one of similar credibility evangelizing multicore systems, so the best you can do is force the argument to a stalemate with the Bias Card. You choose to assume the worst, and no one can stop you if that's what you choose... even if it flies in the face of the way the guy has worked for years.

As it is, though, even the Bias Card seems painstakingly tailored to fit one or two elements of his development career. Carmack just doesn't like writing multithreaded games! You say he's been writing multithreaded games for at least 7 years now? Well, then he's just scared of moving outside of the x86 instruction set! You say he's been involved in console projects and Mac projects? Well, he hasn't worked on enough stuff outside of x86! For every counterargument you just increment your standards to fit the picture you like. You can use this line of thought to assail pretty much any developer who dislikes some new tech you're rooting for, which kind of shows how flawed it is.
 
Azih said:
What the hell? I never said that Carmack was a god; I said that he has more freedom to be frank than Epic programmers do, and your entire response to that is 'It seems to me that Epic might have something here'... WHAT? What does that have to do with how much leeway Epic tech heads do or do not get from their PR and business bosses? In fact, I said that any disagreement that the Epic guys might have with the Carmack keynote would be extremely interesting to know about, and I'd like to be a fly on the wall in any such discussion (something that Tellaerin does not provide).


Actually, Tellaerin had a lot of great stuff to say. For some reason you chose not to read what he is saying. So this is the way I'm starting to see things around the boards.

A. If you have anything good to say, then you are just doing that to sell something. Or doing it for moneyhats.

B. If you say something bad about the next-gen systems, then you are being honest and are proving more and more that the next-gen systems are nothing but hype.

Let me ask you a question, Azih.

Do you think that the next-gen consoles are hype in any way at all? If so, please explain.
 
Tellaerin said:
The lighting effects and normal mapping in Riddick were very impressive, yes, but on the other hand, it's not like the engine had to handle huge draw distances, either. I'm just as impressed by TN's ability to render large, complex environments at rock-solid framerates (and without the occasional lapses into lo-res grainy-vision you'd get from time to time in Riddick). Now, do you really want me to sit here and get into which other Japanese devs I feel have managed to get impressive performance out of consoles this gen (particularly on the PS2), or can we set aside the 'Western engine coders > Japanese engine coders' thing as the gross and inaccurate overgeneralization it is?

It's completely accurate. Name one graphically impressive PC game from a Japanese developer that compares to something like F.E.A.R. Heck, name a few Japanese-developed titles that use, say, normal mapping.

When Coder A's saying, 'Hey, we're getting all these good results from this new hardware and we're really psyched about what we can do with it,' and showing impressive results, while Coder B's saying, 'There are all these inherent pitfalls to writing code for these machines that academics haven't been able to solve for years now and I think we're moving to this new architecture too soon', I consider that a contradiction. You might be able to reconcile the two to your satisfaction after some mental gymnastics, but my personal opinion is that it's a big stretch.

Coder A is saying "Hey, we're getting good results" and Coder B is saying "the results could be better if the design were a bit different."

You're RE-INTERPRETING what Carmack has said into something he hasn't said.

I'm sure you don't, which is pretty telling.

Quite telling about the person who I'm talking to. :)

Fafalada said:
You have the approach where you design the technology as a solution to your game design, and then you have the approach where you design a game as an answer to your technology.
Western games are usually more like the latter (though of course this is a simplified way of looking at things; nothing is black & white in the real world).

That's an excellent way of putting it.

The fact of the matter is, the only way to prove that Carmack is wrong is to have concrete evidence otherwise, which can include evidence from other developers.

Otherwise, it's far more reasonable to take Carmack's word over that of someone who posts on a forum and has seen neither piece of hardware.

It's like some scientist at NASA saying, "I think going in such and such direction is wrong" and some forum posters going "WTF, HE"S BAISED11! ALL THE OHTER ASSTRONOTS ARE DOING JUST FINE!".
 
I think the biggest point missed here is that Carmack can actually say he's working on the hardware in question, with a strong body of work to support the credibility of his statements, which are, by his own admission, early takes on its viability in the whole scheme of things. Can anyone who disagrees with him say they have as much hands-on experience with the hardware in question? Do they have a software technology background to strengthen their position on his statements? His statements don't indicate that the CPU designs chosen are worthless or even very poor... he just thinks there may have been better choices available to balance the effort-to-output ratio.
 
Let's get some actual quotes about in-order multicore vs. OOOE single-core FROM Mark Rein and the Epic people, or shut the fuck up about it. Have they said it's more or less difficult/expensive than the current paradigm? The fact that they are "showing results" with the UE3 engine is totally irrelevant. When people even bring it up, it shows that they've missed the point to some extent. Nobody is saying you can't get results with a multicore setup, just that the cost of doing so is much greater. Carmack states this explicitly.

On one hand it benefits Epic to say that multicore is tough, since it will encourage publishers to consider licensing their engine, but on the other hand they are in bed with both Sony and MS right now and don't want to bash their hardware. So I guess people who wanna yell "biased" can work things in either direction :D
 
The fact of the matter is, the only way to prove that Carmack is wrong is to have concrete evidence otherwise, which can include evidence from other developers.

No, the way to prove it is to look at the console games that come out in 2006 versus the PC games that come out in 2006. THAT'S IT.

Will Quake 4 look better than Gears of War? I guess we will have to wait and see which one is better.
 
Aside from the logically inconsistent remarks on physics, there really isn't much to object to in his remarks. Carmack himself said it's all basically "you know, this is fantastic hardware, but I would be a little happier if they did X".
 
mckmas8808 said:
No, the way to prove it is to look at the console games that come out in 2006 versus the PC games that come out in 2006. THAT'S IT.

Will Quake 4 look better than Gears of War? I guess we will have to wait and see which one is better.
In the previous post, for what must be at least the 5th time in the thread, I explained that this debate is about the FINANCIAL COST of making great-looking games with a multicore setup, not whether or not they can be made. Some asinine comparison of two games BOTH CODED FOR MULTICORE SETUPS is not going to prove anything.
 
Fight for Freeform said:
It's completely accurate. Name one graphically impressive PC game from a Japanese developer that compares to something like F.E.A.R. Heck, name a few Japanese-developed titles that use, say, normal mapping.

What kind of comparison is that? Do any of the Japanese developers even create game engines from the ground up for the PC? Most, if not all, of their games are created with the PS2 as the lead format, and they're simply ported to the PC.
 
border said:
In the previous post, for what must be at least the 5th time in the thread, I explained that this debate is about the FINANCIAL COST of making great-looking games with a multicore setup, not whether or not they can be made. Some asinine comparison of two games BOTH CODED FOR MULTICORE SETUPS is not going to prove anything.


Well then, if that's your point, you're right. My bad. But wouldn't the cost be justified by a more remarkable game?
 
BigBoss said:
What kind of comparison is that? Do any of the Japanese developers even create game engines from the ground up for the PC? Most, if not all, of their games are created with the PS2 as the lead format, and they're simply ported to the PC.

EXACTAMUNDO.

There is no comparison when it comes to experience using the latest graphics technologies.
 
border said:
Well, what possible counterargument can there be? You're still just going to hop up and down and call him "biased". He says something you don't like, and there's no way you can discredit him with your own logic or knowledge, and there's no one of similar credibility evangelizing multicore systems, so the best you can do is force the argument to a stalemate with the Bias Card. You choose to assume the worst, and no one can stop you if that's what you choose... even if it flies in the face of the way the guy has worked for years.

As it is, though, even the Bias Card seems painstakingly tailored to fit one or two elements of his development career. He just doesn't like writing multithreaded games! What -- he's been writing multithreaded games for at least 7 years now? Well, then he's just scared of moving outside of the x86 instruction set! What -- he's been involved in console projects and Mac projects! Well, he hasn't worked on enough stuff outside of x86! For every counterargument you just increment your standards to fit the picture you like.

He's been writing multithreaded games for at least seven years now - yet the console world isn't ready for machines based on multicore CPUs for at least another hardware generation? Large logic hole there. Likewise, the fact that he's 'been involved in' console and Mac projects fails to prove much - what's been his primary target platform all this time, Border? Did he make his reputation as the god of Mac engine design? (I'd say that honor would probably fall to Bungie, though I'll admit that I'm far from a Mac expert.) Trying to claim that I'm shifting the goalposts is a load of crap. My core contention from the outset has been that the man is primarily an x86 PC developer, and has made a reputation for himself by pushing the architecture he knows intimately to its limits. To hear you talk, you'd think the man was equally proficient with every piece of hardware he's ever laid his hands on, and loves them all equally.

Personally, I'm inclined to believe that the paradigm a coder knows best, the one he automatically thinks in terms of when mentally breaking down the process of developing a cutting-edge new engine, is also going to be the one he prefers. I honestly believe that Carmack's view of the new hardware is filtered through that particular lens. I think that he's overstating the potential problems inherent to working with these consoles, and that this is the most likely reason why. (I also disagree with his views on the importance of visual performance vs. AI/physics, but right now, it's the multicore business that seems to have taken over the thread.)

Only time will tell whether or not I'm right about that. If I'm wrong, I'm sure we'll see ample evidence on both sides of the Pacific as frustrated developers vent their spleens to the press, however tactfully. Should that happen, I'll gladly admit I was mistaken, and that developing for these multicore machines is every bit as tough as Carmack and Gabe Newell have implied. In the interim, though, you (and a couple of the other posters here, Fight for Freeform in particular) might want to put the long knives away - debating a point vigorously is one thing, but the impassioned rhetoric seems to be drifting towards bile and vitriol now. You don't have to like me or my opinions, but I'd rather try to preserve at least some semblance of civility here.
 
mckmas8808 said:
Well then if that's your point then you're right.
It's not my point so much as it is Carmack's. Nobody has said that the multicore systems are less powerful than PCs or single-core consoles. The point was that their performance edge comes at a big expense.
But wouldn't the cost be justified by a more remarkable game?
Extra costs mean less ability to experiment, innovate, offer new IPs, etc. That seems to be the main concern.
 
Extra costs mean less ability to experiment, innovate, offer new IPs, etc. That seems to be the main concern.

Hasn't the bigger part of this next-gen cost gone to art rather than to programmers? New IPs? I've seen so many new IPs for the next-gen consoles that I've lost track of them.

WarDevil, Gears of War, John Woo's new game, Heavenly Sword, I-8, Eyedentify, MotorStorm, etc., etc. New IPs aren't something that I'm worried about right now.
 
Tellaerin said:
He's been writing multithreaded games for at least seven years now - yet the console world isn't ready for machines based on multicore CPUs for at least another hardware generation? Large logic hole there.
The main problem seems to be that consoles will not support OOOE multithreading.
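(To make that concrete, here's a minimal sketch - my own illustration, nothing from the keynote - of the kind of overlap at stake. An out-of-order core finds this instruction-level parallelism in hardware; the in-order cores in the new consoles only get it when the compiler manages to schedule independent work between a load and its first use.)

    #include <cstddef>
    #include <vector>

    // The load of big[i] may miss cache and cost hundreds of cycles. An OOOE
    // core keeps executing the independent adds while the load is in flight;
    // an in-order core stalls at the first use of x unless the compiler has
    // already placed independent work in between, as happens to be done here.
    int Sample(const std::vector<int>& big, std::size_t i, int a, int b, int c, int d) {
        int x = big[i];    // potentially cache-missing load
        int sum = a + b;   // independent of x
        sum += c + d;      // still independent
        return sum + x;    // first use of x: the in-order stall point
    }

    int main() {
        std::vector<int> big(1 << 20, 1);
        return Sample(big, 12345, 1, 2, 3, 4);
    }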

I wouldn't say things have reached a totally vitriolic pitch, but you kind of bring it on yourself, because you're here boasting that no one has offered valid counterpoints. I just thought it was worth pointing out that your line of thinking is tailored to be unassailable. It's based on a significant presumption that you either buy into or do not buy into. Beyond that, there is not much that can be done. You (general "you", not specifically Tellaerin) home in on one or two constants in a person's career, and when he speaks out against a change in those constants you get to call him biased... it doesn't matter that he has been willing and happy to deal with many other previous paradigm shifts, because you've isolated his bias to something rather specific and current. You can do it to almost anybody, because nobody is proficient in all styles, paradigms, and languages.
 
mckmas8808 said:
WarDevil, Gears of War, John Woo's new game, Heavenly Sword, I-8, Eyedentify, MotorStorm, etc., etc. New IPs aren't something that I'm worried about right now.
How many of those are from developers that pretty much have to offer new IPs because they don't have any existing ones, or their current ones have been worn bare? Unreal has no cred on consoles, so Epic makes Gears of War. Insomniac makes I-8 because Ratchet & Clank will not survive four sequels in five years. Heavenly Sword is Ninja Theory's first game - no existing IP.

Launch time is the best time to start new IPs so you will see many companies taking a crack at it. After that, things will be much more "safe". You already saw this trend in the current generation.
 
Insomniac makes I-8 because Ratchet & Clank will not survive four sequels in five years. Heavenly Sword is Ninja Theory's first game - no existing IP.

Ummm, what??? Ninja Theory has actually created a game before. Do some research quick before you lose credibility.

Launch time is the best time to start new IPs so you will see many companies taking a crack at it. After that, things will be much more "safe". You already saw this trend in the current generation.


So what's the point in saying that extra cost will hurt devs making new IPs?
 
border said:
The main problem seems to be that consoles will not support OOOE multithreading.

I wouldn't say things have reached a totally vitriolic pitch, but you kind of bring it on yourself, because you're here boasting that no one has offered valid counterpoints. I just thought it was worth pointing out that your line of thinking is tailored to be unassailable. It's based on a significant presumption that you either buy into or do not buy into. Beyond that, there is not much that can be done. You home in on one or two constants in a person's career, and when he speaks out against a change in those constants you get to call him biased... it doesn't matter that he has been willing and happy to deal with many other previous paradigm shifts, because you've isolated his bias to something rather specific and current. You can do it to almost anybody, because nobody is proficient in all styles, paradigms, and languages.


I wouldn't call it 'boasting', though I suppose I can see how it might have seemed that way. It was more a matter of me expressing irritation at all the people throwing out statements like 'But Carmack's a genius!', as if that alone should have been enough to shut me up, and who the hell am I to question? To me, that's not an answer - being a genius doesn't make you infallible, and seeing that kind of unquestioning acceptance when someone who's idolized by the public speaks has always rubbed me the wrong way.

As far as my thinking being predicated on certain assumptions goes, you're right. I believe I'm correct based on things like the coders I've known personally and my experience of how people learn - how familiarity with particular tools and environments will subtly predispose people toward things that function in a similar fashion, and leave them inclined to look for flaws in the ones that don't - but all of that is empirical. All I can do is explain what led me to make those assumptions (which I've done here); whether or not someone else draws the same conclusions depends largely on whether or not they proceed from the same notions of human nature, of how people think. I didn't intentionally set out to create some kind of unassailable closed-logic construct or anything. :p (Apologies if that's a little incoherent - I'm fast approaching the 20-hours-without-sleep mark here.)
 
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2377&p=3

Some Tim Sweeney quotes from the above article:

AnandTech: Programming multiple threads can be complex. Wasn't it very hard to deal with the typical problems of multithreaded programming, such as deadlocks, racing and synchronization?

Tim Sweeney: Yes! These are hard problems, certainly not the kind of problems every game industry programmer is going to want to tackle. This is also why it's especially important to focus multithreading efforts on the self-contained and performance-critical subsystems in an engine that offer the most potential performance gain. You definitely don't want to execute your 150,000 lines of object-oriented gameplay logic across multiple threads - the combinatorial complexity of all of the interactions is beyond what a team can economically manage. But if you're looking at handing off physics calculations or animation updates to threads, that becomes a more tractable problem.

We also see middleware as one of the major cost-saving directions for the industry as software complexity increases. It's certainly not economical for hundreds of teams to write their own multithreaded game engines and tool sets. But if a handful of companies write the core engines and tools, and hundreds of developers can reuse that work, then developers can focus more of their time and money on content and design, the areas that really set games apart.


You can expect games to take advantage of multi-core pretty thoroughly in late 2006 as games and engines also targeting next-generation consoles start making their way onto the PC.

Writing multithreaded software is very hard; it's about as unnatural to support multithreading in C++ as it was to write object-oriented software in assembly language. The whole industry is starting to do it now, but it's pretty clear that a new programming model is needed if we're going to scale to ever more parallel architectures. I have been doing a lot of R&D along these lines, but it's going slowly.

It seems to me that Tim isn't really saying anything different from JC *in essence*.
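(And to make the 'thread the self-contained subsystems' advice concrete, here's a minimal toy sketch - my own code using modern threading primitives, not actual UE3 or id source: the sprawling gameplay logic stays serial, while a self-contained, performance-critical subsystem like physics is handed to a worker thread and joined before the frame is used.)

    #include <thread>

    struct World {
        float physics_state = 0.0f;   // hypothetical physics-only data
        float gameplay_state = 0.0f;  // hypothetical gameplay-only data
    };

    // Self-contained subsystem: touches only physics state, so it can run on
    // a worker thread without racing the gameplay code.
    void StepPhysics(World& w, float dt) { w.physics_state += dt; }

    // The "150,000 lines of object-oriented gameplay logic" stays serial.
    void RunGameplayLogic(World& w, float dt) { w.gameplay_state += dt; }

    void Frame(World& world, float dt) {
        std::thread physics([&] { StepPhysics(world, dt); });
        RunGameplayLogic(world, dt);  // runs concurrently with the physics step
        physics.join();               // sync point: rendering sees a consistent world
    }

    int main() {
        World world;
        for (int i = 0; i < 60; ++i) Frame(world, 1.0f / 60.0f);
        return 0;
    }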
 
mckmas8808 said:
Ummm, what??? Ninja Theory has actually created a game before. Do some research quick before you lose credibility.
What games have they done? I did my research and only see one. Maybe you can suggest a better research method.

http://games.ign.com/objects/714/714847.html

IGN's info page lists Heavenly Sword as their only title.

The "About Us" page on their website says they are made up of the people who made Kung Fu Chaos, but that seems largely irrelevant to the point I was making. I don't even know who owns Kung Fu Chaos since a search of USPTO.gov turns up nothing. It's probably owned by Microsoft though. I believe they took control over most of the IPs on games they funded/published from 3rd parties (Oddworld aside). At any rate, Kung Fu Chaos is not a marketable IP for next generation, so they came out with something new.

WarDevil also appears to be from a newbie developer.
So what's the point in saying that extra cost will hurt devs making new IPs?
Because it still will hurt them in the long run?
It was more a matter of me expressing irritation at all the people throwing out statements like 'But Carmack's a genius!', as if that alone should have been enough to shut me up, and who the hell am I to question?
Well, we are really talking about two different logical fallacies here. One is the "Bias Card" -- "This guy's analysis is wrong because he is somehow biased". The problem there is that even if you can prove that someone is biased, that doesn't necessarily mean they're wrong. Someone who's coded in C++ his whole life may laugh at the idea of writing a game in Java or Assembly, but that doesn't necessarily mean his outlook is incorrect. Those languages are not particularly suited for game development. There's a more proper term for it than "Bias Card", but I'm not remembering it at the moment.

On the other side, there is "Appeal to Authority" -- "This guy is automatically right on this issue because he has degrees, experience, more knowledge, whatever". This logical fallacy is not necessarily always bad, though. You can't reliably get a definitive analysis by deferring to an authority, but I'd say it works better than just doing a bias analysis.
Tim Sweeney: Yes! These are hard problems, certainly not the kind of problems every game industry programmer is going to want to tackle. This is also why it's especially important to focus multithreading efforts on the self-contained and performance-critical subsystems in an engine that offer the most potential performance gain. You definitely don't want to execute your 150,000 lines of object-oriented gameplay logic across multiple threads - the combinatorial complexity of all of the interactions is beyond what a team can economically manage. But if you're looking at handing off physics calculations or animation updates to threads, that becomes a more tractable problem.
This is pretty much what I'd expect to see out of the UE3 people. They admit that it's difficult, in a diplomatic way that doesn't totally bash the foundation of the hardware (thus upsetting MS and Sony), but they still admit problems as a way of selling themselves as middleware developers. "This hardware is very tough -- but hey, if you don't like it, you can always buy our product!" People interested in playing the Bias Card can go either way: either they want to present the hardware as too difficult to work with so they can sell their engine to third parties, or they want to downplay the difficulty so they can maintain a positive relationship with MS and Sony. Thanks to the vagueness, they can do both... it still seems like they at least partially agree with Carmack in that multicore programming is definitely harder. I doubt they will go out on a limb far enough to say the performance differential makes the added complexity unjustified.
 
As I said a few pages ago in this here thread, it's indeed difficult to harness such might. But would we rather settle for something weak but a bit easier to code for?

Last time around, the PS2 was very difficult to program and we had a somewhat similar scenario; when you looked at the performance of the PCs of the time, at least using the same ratings (even the comparison HYPE graphs showed so), the expected performance compared to a traditional CPU in the PC space was only a few fold more.

Today, the performance compared to traditional CPUs in the PC/Mac arena can reach an order of magnitude greater, and in some applications can be even dozens upon dozens of times greater than some of these CPUs. Might it be hard to obtain? Yeah. Should we rather remove that potential from the box entirely to appease programmers? NO, it doesn't make sense: it's a closed box, and as others have said, you have to cram in the most performance possible even if it's not easy to obtain.

There will be some devs that manage to obtain excellent performance; the rest will still get decent performance. If most code were going to run at a 4th to a 10th of the speed of a conventional CPU in the PC space, I'd agree it was a mistake, but given that even when not coded properly (that is, code written for OoOE CPUs) you still get half the performance, it's the best choice.
 