Sorry, I'm sometimes dense and try not to use too much inferred information to parse posts. I read what is there and try to make minimal assumptions, because those assumptions can be false. As I said before, I'm not necessarily arguing the points so much as the presentation.
bgassassin said:
Well when you take it out of context you lose what I was saying. I explained it right after that.
Sorry, I fail to see how the following sentences support the task being "hard". If you're saying you have to do something, as opposed to nothing, to get it working, then I wouldn't call that "hard". Also, particularly when it comes to assets, I can say for a fact that a gargantuan amount of work goes into building content pipelines. These depend on two things: 1. the producer and 2. the consumer. In our case, 1. the producer is a varied set of tools, like Maya for models/animations, audio composition software and what have you. These software suites have a much longer development cycle and typically change incrementally. 2. The consumer is the game engine, which needs to be able to understand the assets and process them properly. Although the refresh cycle of these engines might be more dramatic than that of the producer software, what assets are and how they are described usually doesn't change much. Also, no matter what your rendering special effects are, a mesh is a mesh, so to say, and its description is fairly stable. 90% of a game (if not more) is assets, and optimizing asset management, making sure you get the most out of your artists, is important. All of this supports the following proposition: developers are likely to separate how an asset is managed from the other components of the engine, such as the rendering engine. And that in turn supports this: they are likely to reuse their asset pipeline without loss of functionality or "power". So finally, I would rather say that the task of using "old assets" with a "new engine" is likely not "hard".
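To make that separation concrete, here's a minimal C++ sketch (the names MeshAsset, Vertex and Renderer are mine, purely for illustration, not taken from any real engine): the asset description and the tools that produce it stay the same, and a new rendering backend only has to implement a narrow interface over it.

[CODE]
// Hypothetical sketch: a mesh description that is independent of any renderer.
// All names here are illustrative, not from an actual engine.
#include <cstdint>
#include <string>
#include <vector>

struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
};

// Produced by the content pipeline (e.g. exported from Maya).
struct MeshAsset {
    std::string name;
    std::vector<Vertex> vertices;
    std::vector<std::uint32_t> indices;
};

// The renderer only sees assets through this narrow interface, so the asset
// pipeline doesn't care whether the backend is fixed-function or shader-based.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void upload(const MeshAsset& mesh) = 0;  // copy to GPU-side buffers
    virtual void draw(const MeshAsset& mesh) = 0;    // issue the draw call
};
[/CODE]

A new console or a new shader model means writing a new Renderer implementation, while MeshAsset and the exporters that produce it, i.e. the bulk of the pipeline, stay untouched.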
bgassassin said:
But that has nothing to do with the point I was making. Gears of War was made with UE3. Those two engines aren't capable of producing that level of visuals, therefore the visuals of GoW would be gimped by using the older engines. There's plenty of info out their to show differences in the engine versions. It comes off that you're intentionally "playing dumb" to prove your point.
And you are also missing my point (most likely because I don't express it clearly enough). I'm not saying the capabilities of UE2 vs. UE3 aren't different, or that running a game on one vs. the other is not going to "gimp" it. I was talking about how fundamentally different an Engine6 is as opposed to an Engine5 (using UE as an illustration, not that I'm going to spend half a day googling and contrasting their internals): if, say, 90% of Engine6 uses the same code as Engine5, would you call it "new"? The most obvious changes from a user's point of view are the graphics, but that is only one component of the engine. The point is that you don't know what the capabilities of Nintendo's engine are, nor how much needs to change for it to fully exploit the capabilities of the new hardware.
bgassassin said:
Since the mistakes Nintendo made with the N64, they made a fundamental change in certain things they did. Making simpler games, making hardware that's easier to develop for, making hardware that's about efficiency and not all out power, and separate from that using the same engine for multiple titles. Then there was continually using friend codes despite their broad unpopularity. And these are just off the top of my head.
Those are just a few of the things that lead to those conclusions.
Thanks, that's all I wanted to see: something to support the claims. I wouldn't say the evidence is convincing to me, but that is a different story. At least with these I can put your other comments into proper context.
bgassassin said:
Yes I would agree with what you have said, but would you also agree that if the management system is unable to produce what the modified code wants then the management system would have to upgraded?
Yes.
bgassassin said:
There's enough information available because that type of reasoning is based on observation of facts and draws a conclusion that is not 100% fact. You're forgetting the conclusion part that identifies it as inductive. So yes there is enough available for the conclusions I draw. Speaking strictly from a TP perspective, the engine was at best created and at worst modified for the Gamecube hardware since that was originally what TP was going to be release on. Referring back to your example, Wii U should at worse use Shader Model 3.3. Gamecube used Shader Model 0.0 because as it's been said here before the GC's TEV had no programmable pixel and vertex shaders. That alone calls for a new build. There are a lot of limitations with that hardware and an engine optimized for that extremely limited hardware can only go so far when considering the huge hardware leap. I'm sure even you would agree with that. Continuing with a UE comparison, why use UE2 when UE3 is designed for more current hardware. I'm expecting Nintendo to do the same with theirs.
Shit, I think how anal I am stems from having to deal with too many academic papers that are trying to pass BS by you.
1. (underlined above by itself) Your conclusions have to be convincing though. That depends on the evidence you supply and the knowledge of your audience; it's what differentiates a weak induction from a strong one. IMO the evidence was not strong enough for the conclusion.
2. (bold above) A new build of what? The rendering engine? Sure, fine, I'll give you that. But that makes up maybe 20% of what I consider a game engine, and I wouldn't call rewriting 20% constructing something new. Now again, remember, I'm not arguing that they won't build a new engine. Something that has changed with the new hardware, and that is much more fundamental, is the level of parallelism exposed: multiple CPU threads and lots of GPU cores (depending on how much those are exposed to general compute, i.e. stuff that isn't synthesizing an image). Effectively making use of parallelism is a much, much bigger beast than the evolution of the shader model from 0 to 3. Why? 1. Because shaders still mostly fit into the concept of geometry transformation and fragment shading, which you have even with model 0. You are not changing the fundamental operations, and the pipelining concept your engine is based on could still be perfectly relevant: just update it a bit to add binding of the programmable parts. 2. The paradigm shift from mostly sequential execution to algorithms that must be conceived to expose parallelism in order to be effective is currently a big issue in software development. It requires a different way of thinking about things, and producing such algorithms is much harder. Sorry, I don't have time to find specific resources for that (talks at AMD's Fusion 11 conference might be good -- C++ AMP -- and you can always start with Wikipedia).
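To illustrate the difference with a toy example I made up (the Entity type and update function are hypothetical, not from any actual engine): going multi-core forces you to restructure the loop itself, partition the work explicitly, and convince yourself the chunks are independent, rather than just binding a shader program where fixed state used to be.

[CODE]
// Toy illustration of the sequential-to-parallel shift; Entity and update()
// are made up for the example.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Entity { float x = 0.0f, vx = 1.0f; };

// Independent per-entity work.
void update(Entity& e, float dt) { e.x += e.vx * dt; }

// Old single-core style: one loop, implicit ordering, trivial to reason about.
void update_all_sequential(std::vector<Entity>& entities, float dt) {
    for (Entity& e : entities) update(e, dt);
}

// Multi-core style: the same work has to be partitioned explicitly, and the
// algorithm must guarantee the chunks are independent (no shared writes).
void update_all_parallel(std::vector<Entity>& entities, float dt, unsigned workers) {
    if (workers == 0) workers = 1;
    std::vector<std::thread> threads;
    const std::size_t chunk = (entities.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = std::min(entities.size(), begin + chunk);
        threads.emplace_back([&entities, dt, begin, end] {
            for (std::size_t i = begin; i < end; ++i) update(entities[i], dt);
        });
    }
    for (std::thread& t : threads) t.join();
}
[/CODE]

The per-entity work here happens to be trivially independent; real engine systems with shared state (physics, AI, the scene graph) are where the rethinking gets hard, which is exactly why I'd call parallelism the bigger beast.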