
A new J Allard interview on Xbox 360 hardware - chip cost reductions, etc.

Credit to 'one' on B3D for translating this from Watch Impress

http://www.beyond3d.com/forum/viewtopic.php?t=23595
http://pc.watch.impress.co.jp/docs/2005/0602/kaigai184.htm

one wrote:
"Hiroshige Goto did an interview with Allard again, this is my translation for the first half. (the second half will be uploaded to PC Watch few days later)"




Goto: I'd heard beta devkits for XBOX 360 would be out some time after GDC. Now it seems they are late.

Allard: Devkits are still at the alpha stage (at E3). They are based on dual-processor PowerPC G5 systems, but soon (after E3) they will migrate to beta hardware. The beta devkits are very close to the final hardware components.

I think the current completion rate of the system is like that of XBOX 1. At E3 2001 we were still on XBOX 1 alpha devkits, so the launch of 360 will be a success just like XBOX 1.

Also, even before supplying devkits, we were trying to offer all the info developers need. We are supplying the target spec and migration software. Because we provided alpha kits early, content development is proceeding very well. About 160 titles are in development now. A very good figure.


G: XBOX 360 seems to be very costly. For example, the number of memory chips increased from 4 to 8. The GPU die size was 150mm2 before, but now it's a combination of a far bigger chip and an eDRAM chip. You could cheaply procure a conservative Pentium 3 for the CPU, but this time it's a newly designed 3-core custom CPU. Is that OK in terms of cost?

A: Your point is correct in a way. XBOX 360 is far more expensive than XBOX 1 in silicon cost per die area at launch, though in exchange the performance is impressive...

Let me explain our approach for this round. The point is that this time we can pay more for the silicon because we control manufacturing and design more than ever. Silicon cost can be reduced more than anything else (in a game console), so it's OK to have expensive chips at launch. We will be able to make them cheaper (by shrinking them).

In the future, the DRAM capacity will be raised, and the process will shrink from 90nm to 65nm or 45nm. We can combine multiple chips on the mobo too. That will reduce power consumption and the cost of the power supply. So we expect the curve for the price and the cost will be very nice.


G: It's been announced that the CPU is a symmetric 3-core CPU. Is this core equivalent to the PowerPC 970/G5, or a simplified one?

A: Simpler, and more advanced. Basically we adopted the same CPU core as the PowerPC G5. It's based on the PowerPC G5, but we removed unimportant features from it. For example, instead of having an L2 cache for each core, we adopted an L2 cache shared by all 3 cores.

G: In a game console you won't need 2 double-precision ALUs either.

A: Right. Since we don't run general-purpose code, there were many features unnecessary for us beyond those already mentioned. After removing them we added some features. For example, we added a crossbar to share the cache, and special security hardware. We also refined it to achieve the 3.2GHz target clockspeed. While the PowerPC G5 chip hasn't reached 3.2GHz, our CPU can get there.

G: It's been a long time since Microsoft withdrew Windows NT for PowerPC. Is the XBOX 360 OS kernel new, or is it based on the old PowerPC WinNT kernel?

A: At the beginning of the first XBOX project we ported part of the NT kernel to XBOX 1. This time we ported that to PowerPC. It's not that we went back to the Windows code base; we used the XBOX codebase from the beginning. It's important for developers that they can use the same type of API that they were accustomed to in XBOX 1.

Of course it's not a straight port; the kernel is more advanced. The XBOX 360 kernel supports 6 hardware threads with multiprocessing. The I/O system is advanced too. But even with such enhancements, the XBOX 360 OS is compact and has high performance.

G: IIRC XBOX 1 has a small loader in its embedded ROM and loads the OS itself from an optical disc. Where are the OS components in XBOX 360? Are they loaded from an optical disc too?

A: This time it's a bit different from XBOX 1. XBOX 1 had a file a bit larger than a simple boot loader, something like a small kernel. In this generation, more technology is put into the ROM. For example, the networking stack is not on a disc but in the ROM. The user interface too. Many important pieces of software and features are in the ROM.

Basically it's the same strategy that XBOX 1 took, but on a larger scale. One of the goals is to minimize the software that has to be loaded into RAM. However, game libraries will still ship on the game disc. That's for compatibility.

G: AFAICS, given the GPU architecture, the graphics API side also has to be extended beyond DirectX 9. How can you secure compatibility with PC graphics?

A: It's difficult to completely preserve compatibility between the 2 platforms. There's always a fundamental difference between the PC platform and the XBOX platform, although we are making efforts to keep them as alike as possible for partners. At the very least, I believe writing a game for the 2 platforms, XBOX 360 and PC, will be simpler than for any other combination of next-gen platforms. We strive to make it as easy as it can be, but it's hardly free.

G: We heard you'd achieve compatibility between XBOX 360 and XBOX 1 in software. But performance deteriorates badly under software emulation, especially between architectures with opposite byte orders like x86 and PowerPC.

A: The solution for compatibility is partially in the silicon, but most of it is software. The Microsoft product Virtual PC is an emulator that runs Windows on the Macintosh, executing x86 code on PowerPC. The team that developed it is inside Microsoft, so we've had experience with this matter from the beginning. Besides, XBOX 360 has a very large performance margin, so software emulation is no problem.

Actually, x86 simulation is the easy part if you ask me, as it's emulation of a simple instruction set. Graphics, XBOX Live, and I/O are more difficult to emulate.

G: I suspect graphics will be very difficult, since software often reaches the hardware directly.

A: First we have to keep the board cost down (so we can't include a compatibility chip). Because of that, we face the problem of how to deal with different graphics pipes and compression technologies. We allowed game developers to hack the NVIDIA chip at the register level (in XBOX 1), so we had to do some special work for the register combiners and so on.

G: NVIDIA has patents on things like the shadow buffer, so ATI can't use them. How do you solve that?

A: For that matter we secured an agreement with NVIDIA. It allows developers to fully use the functions they used in the NVIDIA architecture.

It's not only graphics; compatibility for Live is also very difficult. It's especially complicated this time because we integrate Live functions into the core system.
 
A nice read. :) It is good to see an interviewer focus on the console rather than the console wars - information beats hype and spin any day of the week.
 

mrklaw

MrArseFace
G: In a game console you won't need 2 double-precision ALUs either.

A: Right. Since we don't run general-purpose code, there were many features unnecessary for us

busted Mr. 'XeCPU is way better than CELL at the essential general purpose code needed in games' Nelson.
 

HokieJoe

Member
mrklaw said:
busted Mr. 'XeCPU is way better than CELL at the essential general purpose code needed in games' Nelson.


I don't think those statements are necessarily mutually exclusive. IOW, in terms of general-purpose code: G5>360>Cell. However, I think Allard needs to clarify that point a bit.
 

gofreak

GAF's Bob Woodward
HokieJoe said:
I don't think those statements are necessarily mutually exclusive. IOW, in terms of general-purpose code: G5>360>Cell. However, I think Allard needs to clarify that point a bit.

X360's CPU is quite lean. About the same as the PPE in Cell from what I gather; less lean than an SPE, of course. But all the huffing and puffing about "general purpose processing" is ridiculous given that neither CPU is particularly well suited to it. A lot of code that would run well on a regular desktop CPU, taken over as-is to X360 or PS3, will see a big performance cut. You'll have to code to both specifically, and a lot of the challenges for both will be shared. It'll be interesting to see if X360's CPU is caught in an uncomfortable middle ground - not really very general purpose, and not benefiting as much as it could on certain key workloads if it were a bit more focused. I guess we'll see with final silicon.

Also hilarious to see that DP floating point capability was apparently cut back dramatically, or completely, on XeCPU, given Major Nelson's attempt to take the shine off Cell by pointing out its reduced DP performance versus SP. Not that it matters anyway, since DP is of fairly marginal use in games currently.
 

aaaaa0

Member
When J Allard is talking about not running general purpose code, he's talking about X2CPU running "Microsoft Office" vs "hand optimized game code".

Of course X2CPU is not intended to run "Microsoft Office", it's intended to run GAMES.

When Major Nelson is talking about X2CPU being better for general purpose code than CELL, he's talking about PPC cores being better for "integer branch heavy memory bound code" vs SPUs which are optimized for "streaming floating point code".

If you analyse modern game engines, you will find the instruction streams are dominated by "integer branch heavy memory bound code" as opposed to "streaming floating point code".

The key observation behind the balance of processing power in the x360 is that the largest proportion of "streaming floating point code" is in the graphics pipeline (shaders and so on), not the game engine itself, and hence the most cost-effective place for it is on the GPU, not the CPU.

Allard and Nelson are talking about two different things, and so they're both reasonably correct.
 

Particle Physicist

between a quark and a baryon
"It's important for developers that they can use the same type of API that they were accustomed to in XBOX 1."


uh oh. xbox 360 to be only minutely more powerful than xbox1 !! confirmed!

;P






..that was a pretty good interview.
 

gofreak

GAF's Bob Woodward
aaaaa0 said:
When Major Nelson is talking about X2CPU being better for general purpose code than CELL, he's talking about PPC cores being better for "integer branch heavy memory bound code" vs SPUs which are optimized for "streaming floating point code".

There's a running assumption amongst MS people, or seems to be, that the SPEs in Cell aren't capable of integer ops. This couldn't be further from the truth. Their integer performance should be almost as good as their floating point performance.

aaaaa0 said:
If you analyse modern game engines, you will find the instruction streams are dominated by "integer branch heavy memory bound code" as opposed to "streaming floating point code".

Or, if you believe what MS tells you. Looking at my own code, the most-called functions are floating point heavy. Anyway, you should optimise based on time spent in code, not on what proportion of the code is of that type in terms of number of lines or whatever.

Next-gen games will be very floating point heavy - if you want a big leap in physics, that is. Physics alone can hog a large proportion of frame processing time... accelerating that alone makes sense, and Cell should provide in that area (this is also why the case is being made for dedicated physics chips in PCs - simply because the proportion of CPU time games will spend on physics going forward will become so huge).
 

aaaaa0

Member
gofreak said:
There's a running assumption amongst MS people, or seems to be, that the SPEs in Cell aren't capable of integer ops. This couldn't be further from the truth. Their integer performance should be almost as good as their floating point performance.

Please read what I wrote.

I did not say SPEs cannot process integer code.

I said that SPEs are not good at branch heavy memory bound integer code, which is true.

SPEs cannot touch main memory directly. They have no cache, just local store. To read anything from main memory you must schedule a DMA. This means you have to very carefully optimize your main memory accesses, otherwise your performance will really suck. (Well you do on a cache architecture as well, but it should be a lot easier.)

This is a pain to do if the algorithm you're trying to run on the SPE is branch heavy, and references structures all over main memory. This kind of stuff comprises a lot of game code.

PPCs are a load/store architecture and have L2 cache, which helps mitigate the slowness of main memory and makes this type of pointer heavy, branch heavy code easier to write. It's not perfect (cache misses will kill performance), but it's much easier and potentially faster than SPE code.

Or, if you believe what MS tells you. Looking at my own code, the most-called functions are floating point heavy.

The fact that you called a function means you just executed a call instruction. On an SPE, you would have to make sure you schedule a DMA to upload the function you're about to call into SPE LS. Then you have to wait for that upload to finish. That will destroy performance. So you need to structure your code into overlays that you can swap in and out of SPE LS. Then you need to make sure the functions you use together get swapped in and out together, so you do as few DMAs as possible. If you want to get fancy, you need to know in advance which functions you might want to call, so you can pre-queue a DMA and have the function you want in LS before you need it. Oh, but that means you need to reserve a chunk of LS for this, hm, which means you can fit less other stuff into LS, which means you might need to DMA more often for other stuff. Hmm.

So it turns into a big balancing act if you want to run the kind of code a normal CPU runs, and introduces a whole load of new stuff you have to worry about that you don't on a normal CPU.

Anyway, you should optimise based on time spent in code, not what proportion of code is made up of that type in terms of number of lines or whatever.

That's the point. Code is almost always dominated by loads, stores, tests, and branches, because execution time is almost always dominated by memory access time.

Next-gen games will be very floating point heavy - if you want a big leap up in physics that is. Physics alone can hog a large proportion of frame processing time..accelerating that alone makes sense, and Cell should provide in that area (this is also why the case is being made for dedicated physics chips in PCs - simply just because the proportion of CPU time spent in physics with games going forward would become so huge).

Physics is float heavy, yes, but even in a hard-core physics engine a lot of the instruction streams still look vaguely like this: "load this address, dereference this pointer, load some other address, test if a bit is set, branch, load this address, test value, fpmath, branch, write some address", etc.

It's only when you get into the innermost loops that the instruction streams turn into "load, fmadd, write result, loop", and that kind of work is almost always graphics related stuff -- maybe that kind of stuff is best left to the GPU (or something like it) to handle, since it is totally optimized for this kind of stream processing. Cell would be pretty good here, but given a GPU in the system, I'm pretty sure that code like this isn't going to be 80% of your engine's execution time.
 

Panajev2001a

GAF's Pleasant Genius
aaaaa0 said:
Please read what I wrote.

I did not say SPEs cannot process integer code.

I said that SPEs are not good at branch heavy memory bound integer code, which is true.

SPEs cannot touch main memory directly. They have no cache, just local store. To read anything from main memory you must schedule a DMA. This means you have to very carefully optimize your main memory accesses, otherwise your performance will really suck. (Well you do on a cache architecture as well, but it should be a lot easier.)

This is a pain to do if the algorithm you're trying to run on the SPE is branch heavy, and references structures all over main memory. This kind of stuff comprises a lot of game code.

PPCs are a load/store architecture and have L2 cache, which helps mitigate the slowness of main memory and makes this type of pointer heavy, branch heavy code easier to write. It's not perfect (cache misses will kill performance), but it's much easier and potentially faster than SPE code.

True, but then you have a catch-22: one thing you can do is split the same workload over several threads running on the processor cores to maximize L2 cache hits without thrashing it, but that can become a synchronization hell too, as multi-threaded code has some headaches ready for you. Or you can assign physics to one thread, A.I. to another, OS and DirectX driver processing to another and so on, each thread working on something separate, but this way you also reduce the amount of data the CPU cores have in common. 6 threads working on a 1 MB cache, each wanting different data from main RAM, might be a bit worse than 2 threads working on 512 KB of cache (3x the cores using only 2x the L2 cache).

It can all be worked around, but if you want to push an advanced game engine and complex physics and A.I., it won't be a piece of cake on Xbox 360 either.

A lot of execution time in physics calculations is spent on linear algebra: processing tons of vectors and doing a great deal of FP-heavy operations on an enormous number of objects. Batching is one thing I see pushed far on the Broadband Engine. Sure, you sometimes have to re-think how you write your code, but it might yield very good benefits and may be more manageable than synchronizing a multitude of threads performing various functions.
 

HokieJoe

Member
I assume that if the SPE has to fetch something from main memory, then the central processor will have to schedule it? If so, I wonder how many machine cycles it would take? One, two, four? I'm sure I'm totally off base here because my reference point is a Z80. :)
 

teiresias

Member
HokieJoe said:
I assume that if the SPE has to fetch something from main memory, then the central processor will have to schedule it? If so, I wonder how many machine cycles it would take? One, two, four? I'm sure I'm totally off base here because my reference point is a Z80. :)

I thought the main system RAM hangs off the same EIB bus, via a memory controller, that the SPEs are attached to; therefore an SPE access isn't scheduled by the PPC but by the memory controller via a DMA request on the EIB bus. I could be completely wrong, it's been a while since I had time to look at the Cell architecture.
 

aaaaa0

Member
teiresias said:
I thought the main system RAM was hanging off the same EIB bus via a memory controller that the SPE's are attached to, therefore an SPE access isn't scheduled by the PPC but instead by the memory controller via the DMA request on the EIB bus. I could be completely wrong, it's been a while since I had time to look at the Cell architecture.

It seems SPEs can schedule DMAs themselves. But without intervention by the PPC, I find it really hard to believe that you'll be able to write any sort of synchronization objects, since DMAs are not atomic.

Simple example:

Say, in main memory you have a queue implemented as a linked list of blocks of work that you want 2 SPUs to work on:

head->A->B->C->D->...

How does SPE1 pick up block A and reassign head to point to block B safely, so that SPE2 doesn't corrupt the list if it's running at the same time? I think the only choice is to tell the PPC you want some work, and have the PPC do the synchronization and set up the DMA for you.

But I don't know exactly how CELL works, so this is just guessing.

On xenon or any SMP, this is a trivial structure to create. Just make a mutex object, and make sure you grab it before modifying the list, and release it afterwards. (If you want to get fancy, use a lock-free list.)

With a shared L2 cache, this might be even faster, since there's no cache coherence overhead -- if the mutex is heavily contended between the cores, it will end up staying in an L2 cache line instead of main memory, right?
 

nitewulf

Member
heh, gofreak is on constant defense mode since e3. are you a bit disappointed at the ps3 specs? personally i was on target, i dont know why you guys were expecting a shit lot more. being engineers, you should be more practical as well.
 

3rdman

Member
nitewulf said:
heh, gofreak is on constant defense mode since e3. are you a bit disappointed at the ps3 specs? personally i was on target, i dont know why you guys were expecting a shit lot more. being engineers, you should be more practical as well.

Hey leave gofreak alone...he may be a-wee-bit biased, but he's been nothing if not patient and respectful. Personally, I wish we could stop with these comparisons until we really know more about their differences, but I guess human nature is too strong to resist.
 

rastex

Banned
Awesome analysis aaaa0, thanks a lot. This is the type of analysis I've been waiting for, ever since Cell was introduced.
 

shpankey

not an idiot
nitewulf said:
heh, gofreak is on constant defense mode since e3. are you a bit disappointed at the ps3 specs? personally i was on target, i dont know why you guys were expecting a shit lot more. being engineers, you should be more practical as well.
sounds a little more to me like you just don't like what you're hearing.
 

nitewulf

Member
3rdman said:
Hey leave gofreak alone...he may be a-wee-bit biased, but he's been nothing if not patient and respectful. Personally, I wish we could stop with these comparisons until we really know more about their differences, but I guess human nature is too strong to resist.
i didnt attack him.

sounds a little more to me like you just don't like what you're hearing.
with respect to what, ie, which hardware?
 

mrklaw

MrArseFace
6 threads on XeCPU sharing 1MB L2 cache Vs 7 SPEs with 256K dedicated local memory.

with more memory available per thread, I think you'd have fewer problems with constant fetching of program data. And considering how simply some developers used VU1 and VU0 in PS2, they'll probably just set up the SPEs as static processors with fixed code running on them, at least initially. You have enough SPEs that you can get away with, say, 2 for vertices, 1 for AI, 2 for physics, 1 for collisions, 1 for culling vertices before processing, and the PPE for game code.

I don't think you'll see much dynamic reallocation of SPEs in early games, especially when you are pretty controlled in what you do, and repeat it over and over each frame.
 

Fafalada

Fafracer forever
aaaaa0 said:
Physics is float heavy, yes, but even in a hard-core physics engine a lot of the instruction streams still look vaguely like this: "load this address, dereference this pointer, load some other address, test if a bit is set, branch, load this address, test value, fpmath, branch, write some address", etc.
While that's true, the most time intensive physics components (collision detection, matrix solvers etc.) still come down to tight math loops in the end. If they didn't, stuff like PPU wouldn't really work either.
If you try hard enough, a lot of it can be broken down small enough to even fit into teeny-micro memories like VU0 has.
 
PhoncipleBone said:
Nvidia. This was announced some time ago.

The Nvidia-Sony partnership was formally announced in late 2004 - but it was mentioned as far back as early 2003 that Nvidia was working with Sony on the GPU - actually, going to provide a GPU for PS3 - when the CEO of Nvidia started talking about the Cell processor and PS3 while at the same time distancing himself from Xbox 2.
 

gofreak

GAF's Bob Woodward
nitewulf said:
heh, gofreak is on constant defense mode since e3. are you a bit disappointed at the ps3 specs? personally i was on target, i dont know why you guys were expecting a shit lot more. being engineers, you should be more practical as well.

I'm not sure if I posted my pre-E3 spec expectations re. PS3 here (I certainly left suggestions as to my expectations if I didn't explicitly outline them), but I had a pretty good idea of what was coming. Pretty much everything met my expectations, and some parts in fact exceeded them (I certainly didn't think it a lock that 512MB of RAM would be in PS3). I had the benefit of some little birdies though, so perhaps I was cheating.

I'm not sure why you think I was expecting too much - lots of people, in fact, teased me about being too conservative with my expectations (I often talked down the possibility of 512MB of RAM or 8 SPEs in PS3).

edit - here are some links to my pre-e3 comments on PS3 specs:

Clockspeed speculation:

http://www.ga-forum.com/showthread.php?p=1331349&highlight=#post1331347

Loads of "hints" about SPE count:

http://www.ga-forum.com/showpost.php?p=1259089&postcount=35
http://www.ga-forum.com/showpost.php?p=1270528&postcount=22
http://www.ga-forum.com/showpost.php?p=1283185&postcount=60
http://www.ga-forum.com/showpost.php?p=1283126&postcount=15
http://www.ga-forum.com/showpost.php?p=1331423&postcount=155

"On the money" Floating point performance prediction - ~2x X360:
http://www.ga-forum.com/showpost.php?p=1307390&postcount=16

Whose expectations were overblown again? Get your facts straight.
 

nitewulf

Member
gofreak said:
Whose expectations were overblown again? Get your facts straight.
i recall on quite a few occasions that you expected cell to run at 4.0 GHz, and therein was the problem. i never expected it to run at that clockspeed, and i was certain it would be lower in production. i didnt say anything because you were defensive at that point as well, from a "sony cant do wrong" perspective, as you are doing now.
we all knew about the 1-8 configuration from various birdies, and we also knew one SPE would be reserved for the OS. as it turns out, one would be locked. i wonder if you knew that?
excuse me for not following every single post though, gofreak.
 

gofreak

GAF's Bob Woodward
nitewulf said:
i recall on quite a few occasions that you expected cell to run at 4.0 GHz, and therein was the problem.

I've just pointed you to posts wherein I stated my expectation - 3-3.5GHz. My expectations may or may not have changed since the initial Cell unveiling in February, but for some months prior to E3, 4GHz certainly wasn't part of the picture.

nitewulf said:
we all knew about the 1-8 configuration from various birdies, and we also knew one SPE would be reserved for the OS. as it turns out, one would be locked. i wonder if you knew that? excuse me for not following every single post though, gofreak.

I suggested for a long time that one would be reserved for the OS and one would be unavailable completely. I touched on this in MANY posts running up to E3, sometimes more explicitly than other times.

My expectations for PS3's spec prior to E3 were pretty much spot-on, if not on the low end in some instances. If you're going to call me out specifically, you'd better be sure you can back up what you're saying - you simply can't, and I've already proven my position prior to E3 above.

What is it with the specific personal references in threads like this recently? I can't believe I'm even having to make posts like this. If you've a problem with something I'm saying, then point it out and make your case. Stop harassing people and painting them in a misleading light - as you're clearly doing above - just because they disagree with you and you don't like what they're saying. It's just weak.
 

thorns

Banned
translation pt 2:
Goto: How do you look at the PS3 architecture? It takes a very different approach from that of XBOX 360.

Allard: I was asked who the winner of the next gen is. The answer is simple: IBM. IBM, which designs all the next-gen game consoles, is the biggest winner (laugh)

In various ways, I'd suspected Sony's announcement would have something more. Yet basically it was only the technical spec. As for the technical spec, I think it has some misleading points in the comparison of the 2 systems' performance. You'll find an interesting thing if you look at those specs more macroscopically, by comparing the transistor counts.

The transistor counts for the two are almost the same. We and Sony arrange the same number of transistors at the same time. Since the transistor counts are the same, the relative performance isn't much different. The only question is how you arrange the transistors.

I think (between Sony and MS) there's a fundamental difference in priorities. The biggest difference is that we give priority to what game developers want. For instance, we didn't take a split memory architecture. We adopted the 512MB unified memory architecture, because no developer wants memory split into 256MB pools like PS3's. This flexibility can be a strength of XBOX 360.

G: But because of its heterogeneous architecture of simple CPU cores, Cell has twice the floating-point power of the XBOX 360 CPU.

A: They are right in that claim. Indeed, their FP performance is twice ours in system totals, because their hardware is designed for FP operations. But what you forget is that in today's game programs FP operations are 20%, and the remaining 80% are general integer operations or operations such as branching. They ignore that part. Integer operations are the most computation-cycle-demanding part of game programs. Now, XBOX 360 has 3 times the integer processing performance of Cell.

One more important point is the eDRAM in the ATI part. This is another plus against PS3. Both XBOX 360 and PS3 have 512MB of memory, so it seems 10MB of eDRAM makes no difference. However, it becomes a different story when it's put on the graphics chip. If a super-wide-bandwidth memory is connected to the graphics chip, it allows a shader program to access memory with very low latency.

You could conclude that XBOX 360 equals PS3 by transistor count. But as for the arrangement of transistors, I think we can win by optimizing it to the needs of game developers. For a system like a game console, which is complicated and highly refined, the true key is software. It's indisputable that we can deliver better software.

G: How do you look at the difference that the XBOX 360 CPU is a symmetric multicore and Cell is an asymmetric multicore?

A: I think it's an advantage, as programmers have already experienced multiprocessing. On the PC side, Intel is migrating to the same type of architecture. So even if a programmer is not accustomed to multiprocessing, he can get as much information as he wants, since all the books, tools, and lectures at universities focus on this style of programming.

As a computer geek I like the spec of PS3, and from the viewpoint of an electronics engineer it's very interesting; it's interesting from a computer-science standpoint too. But the important things are games, and developers.

For developers, it's not so easy to understand a whole architecture, so it becomes important to offer software. And with the more complicated hardware, Sony has to offer software tools better than ours. But we'll be able to offer better software, as we are experienced in such things.

G: In the CPU business, processors dedicated to stream processing of data, such as Cell SPE, are catching attention.

A: For the Cell SPEs, synchronization can be a problem. The cache will be a problem too. I think they have not disclosed all the details of the cache architecture. In any case, these problems will be bottlenecks for the system, as they were in PS2.

G: SCE says, "let's create whole computer entertainment beyond games." It's a vision of creating a wonderful platform for computer entertainment on which you can also play games. Furthermore, they hope to change the computing paradigm with an innovative architecture.

A: Are they serious? (laugh)

I think the focus at E3 should properly be set on games. Our press conference was about games, too.

However, it's not that we ignore entertainment beyond games. I briefly mentioned such experiences on XBOX 360 too. For example, we talked about the innovation that lets you hear any music you want during gameplay. You can have fun connecting USB devices like music players and digital cameras, make photo slide shows, and voice chat with friends over the internet.

If you want to deliver such experiences, not just technologies, to people, you have to explain them more concretely. How does Gigabit Ethernet relate to entertainment? How do two HDMI ports supporting 1080-line HDTVs relate to entertainment? I'm interested in what experiences those design decisions by Mr. Kutaragi lead to.

My decision is the fusion of hardware and software services, and it offers users unbelievable experiences. We focus on experiences and design the hardware to realize those experiences. That's different from making hardware first and thinking about services afterwards.

G: SCE's approach of presenting the vision rather than the games themselves got a favorable reception.

A: You remember PS2 had an IEEE1394 port. It had an HDD bay and a USB port. And do you remember what they said about PS2 in '98-'99? They talked about a big dream of changing the computing paradigm, and about the future of comprehensive computer entertainment. Internet browser, LCD screen, mouse... they talked about various things, but almost none of it was realized.

Why? It's simple. The important thing is the experience, namely games. Unless you focus on that, it makes no sense. That's why we design a product for games.

G: How do you look at the vision of Cell computing by SCE?

A: I want to create an amazing game. No customer wants to buy a new distributed computing paradigm (laugh)

G: The more real game graphics become, the more important the realistic behavior of in-game characters and objects gets, because a realistic picture that doesn't move realistically is unnatural. So I expect physics simulation will become more important down the road...?

A: There are many factors that will make next-gen games successful. If you focus only on realism, then your view that physics is important is right. Among the other factors, I'd point out that lighting is also important. The ability to generate textures and geometry in realtime, which I call "procedural synthesis," is important too. Even if you create a forest with perfect lighting and a perfect design, it will look unnatural if all the trees are the same. The reason we write a program to generate a tree, and a program to generate the textures pasted onto trees, is so that they all look different.

These things are also important for realism, I think. So at E3 we made a 90-second trailer focused on how this synthesis looks visually.
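The procedural synthesis idea Allard describes can be sketched in a few lines: store only a seed per tree and regenerate its parameters deterministically on demand, instead of storing every mesh. The parameter names and ranges here are invented for illustration and have nothing to do with the actual XBOX 360 toolchain:

```python
import random

def synthesize_tree(seed):
    """Derive a unique-looking tree from a single integer seed.

    The game stores only the seed and regenerates the parameters
    (height, branch count, lean) whenever the tree is needed -- the
    core idea behind procedural synthesis. Same seed, same tree.
    """
    rng = random.Random(seed)  # deterministic per-tree generator
    return {
        "height_m": round(rng.uniform(4.0, 12.0), 2),
        "branches": rng.randint(5, 30),
        "lean_deg": round(rng.uniform(-8.0, 8.0), 1),
    }

# A forest of 1000 distinct trees costs 1000 seeds, not 1000 meshes.
forest = [synthesize_tree(i) for i in range(1000)]
```

The same scheme extends to textures (seeded noise instead of stored bitmaps), which is why the technique trades a little CPU time for a large saving in memory and disc space.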

But, I want to repeat, from such a technical viewpoint both platforms, XBOX 360 and PS3, are very alike. If you look at total performance, both can do the same things.

G: Do you think the high performance of next-gen consoles will drive game innovation? There are voices saying that hardware specs alone will make no difference.

A: Looking back, there have been four important game ideas in the last 10 years: Pokemon, Grand Theft Auto, the Sim series, and Halo. None of these had a special 90-second trailer. Halo is the only one with amazing graphics, physics, and realism.

Therefore I think we should expand our idea of what to take on. Realism and physics are important factors. But you should not forget that what drives this industry is great innovation.

A platform and the innovation on it have a close relationship. The PC was the right platform for the Sim series' innovation. If you look at the combination of Game Boy and Pokemon, you know that was the right platform too. We did all kinds of things to make Halo a success on the XBOX platform; XBOX was the platform that enabled Halo's innovation. But for GTA, graphics weren't important. Its innovation was a new game metaphor.

Looking at things this way, you can see why you should be cautious about the view that specs drive gaming. I believe we designed a super-high-performance system competitive enough to surpass Sony's.

But our task is to make game developers think about the future. For that purpose, experience is more important than technology. What kinds of new game experiences can we promote, and what new services can we offer? With those services, developers can innovate and offer users new ideas.

A new idea may be something with unbelievable realism, graphics, and physics that exploits XBOX 360's power, like Halo on XBOX 1. Or it may just be a wonderful idea, like GTA. To produce a variety of innovative games, not only the technical specs but also the hardware, software, and services should all be offered as one. That's what we think is important.
 

nitewulf

Member
gofreak said:
What is it with the specific personal references in threads like this recently? I can't believe I'm even having to make posts like this. If you've a problem with something I'm saying, then point it out and make your case. Stop harassing people and painting them in a misleading light - as you're clearly doing above - just because they disagree with you and you don't like what they're saying. It's just weak.
this is mostly because of your visibility rather than anything else.
your clockspeed post was confusing for me, i thought you were referencing the xenon cpu, for instance. because mostly i don't read threads sequentially, i jump from poster to poster due to time constraints. aside from that i didn't get many of your hints either, mostly you were winking a lot, and i mean A LOT. so it was tough to tell whether you were saying something legit with relevance to hardware or just happy or what. :lol
 

Agent Icebeezy

Welcome beautful toddler, Madison Elizabeth, to the horde!
Mrbob said:
Allard is the man. He always seems to have well thought out responses with little hyperbole.

For the way he's been getting grilled, he's a perfect person to rep your product
 
Mrbob said:
Allard is the man. He always seems to have well thought out responses with little hyperbole.

I enjoy his interviews because while he's obviously going to take shots at Sony (that's part of his job, of course), he doesn't come off like a total ass in the process. That's actually pretty rare around a console launch.
 

GhaleonEB

Member
Mrbob said:
Allard is the man. He always seems to have well thought out responses with little hyperbole.

I also like that he granted Sony the FP point; I seem to recall the engineers Major Nelson interviewed waffling on that, basically saying "Sony's was an aggressive figure, ours a conservative one" without actually answering the question. Less BS = good.
 

Agent Icebeezy

Welcome beautful toddler, Madison Elizabeth, to the horde!
GhaleonEB said:
I also like that he granted Sony the FP point; I seem to recall the engineers Major Nelson interviewed waffling on that, basically saying "Sony's was an aggressive figure, ours a conservative one" without actually answering the question. Less BS = good.

In reading up on him since this system came to the forefront, I've learned he's had a lot of influence on how I spend my days and nights. He's the one who convinced Bill Gates to take on the internet. Shit, he's the perfect person to lead the charge. Smart guy, much better than Seamus was.
 
GhaleonEB said:
I also like that he granted Sony the FP point; I seem to recall the engineers Major Nelson interviewed waffling on that, basically saying "Sony's was an aggressive figure, ours a conservative one" without actually answering the question. Less BS = good.

agreed.

Allard seems a bit more forthcoming than MN.
 

mrklaw

MrArseFace
Looking back, there were 4 important game ideas in these 10 years. Pokemon, Grand Theft Auto, Sim Series, and Halo

Fucking hell, J. Not backward in coming forward, are you?

I also like that he granted Sony the FP point

I don't, because he used it as the intro to a PR slap: "they are 2x faster in FP, but we are 3 times faster in integer, and that's what's most important." I'm assuming he was calculating 3x XeCPU cores vs. 1x CELL core, and handily ignoring the SPEs. I would expect CELL's overall integer performance to be better than XeCPU's.
 
Looking back, there were 4 important game ideas in these 10 years. Pokemon, Grand Theft Auto, Sim Series, and Halo.

If he's talking about game ideas that sold best, I guess I'll agree. Weird statement, anyway.
 