EDGE: "Power struggle: the real differences between PS4 and Xbox One performance"

FUFUFUFUFUFU you made me spill my milk
Well it would be a remarkable oversight for them to invent a kind of memory specifically for use in GPUs, and to iterate it through 5 generations of the technology if its design was not a good fit for GPU use.
 
I never said anything about the numbers. I know the numbers; you can't avoid them, since they've been shoved in people's faces constantly since E3. I questioned the credibility of an article that relies on anonymous sources when there are statements to the contrary, and when the author has spread FUD from anonymous sources before, so the article's credibility is in question. Being told "BUT LOOK AT THE NUMBERS. THERE IS A 141 2/3rd PERCENT CHANCE OF WINNING. THE NUMBERS DON'T LIE" doesn't answer my doubts about the article's credibility.

So it basically comes down to which numbers a person wants to believe. Since there have been more reports of a 30%-50% difference and the system specs are out there, I can see why most don't see it your way.
 
Does this not suggest it's hardware?

"Modern GPU
- DirectX 11.2+/OpenGL 4.4 feature set
- With custom SCE features"

Again, from an official PlayStation source.

That just means the API is there for tiled resources on that GPU; the implementation will be either software or hardware depending on the tier level.
 
I'm not saying the Xbox will dominate, far from it. I'm saying it will be very close. The Xbox brand is very powerful in the US and UK, I don't see that changing. Microsoft's mistakes will cost them the lead they would otherwise have after such a successful generation but that is all. People who expect a PS4 domination are just letting their feelings cloud their judgment.

The MS brand is not that powerful in the UK; the country went with the PS1, PS2 and Wii (and the 360, if you don't count the Wii).
It's a price-sensitive market, and unlike last gen the PS4 is coming out at the same time and at a cheaper price.
It having the better hardware is the icing on the cake for them.
 
I'm not saying the Xbox will dominate, far from it. I'm saying it will be very close. The Xbox brand is very powerful in the US and UK, I don't see that changing. Microsoft's mistakes will cost them the lead they would otherwise have after such a successful generation but that is all. People who expect a PS4 domination are just letting their feelings cloud their judgment.

In the UK it might struggle outside the hardcore Xbox fans. MS apparently thought, "you know that price point Sony tried with the PS3 and struggled at? Let's go above that and see how it goes." £429 is too much for a games console in a country where people are still watching what they spend. The PS4 is £349, and that price point will help Sony in a market that normally favours the cheaper console when the power isn't so different.
 
Ok I'll ask this again for the new page of people wishing to discuss latencies.

What are the exact latencies of XBO's DDR3 and PS4's GDDR5? Why are these specific numbers never used in these spec arguments or articles when every other number each system has is being discussed?

What are the latency numbers?
 
It's by no means a level playing field. That's just patently disingenuous.



"most powerful console ever" is far easier sell than "control your cable".
You didn't read what I wrote. There is nothing stopping MS from saying they're the most powerful. If they want to be more truthful... they could say "Most powerful Xbox ever" while Sony says "Most powerful console ever". People will hear "most powerful" from both and it'll be a marketing wash. Unless Sony can somehow draw a meaningful comparison with their hardware advantage, I don't see how it'll resonate in a sound-bite advertisement.
 
Yes. Didn't we settle, in other threads, that this is a non-issue?! Or am I imagining things?

The latency thing is pretty well-worn territory, but I don't think I've seen so many claims of non-linear CU performance scaling before Penello's claim on the matter.
 
Ok I'll ask this again for the new page of people wishing to discuss latencies.

What are the exact latencies of XBO's DDR3 and PS4's GDDR5? Why are these specific numbers never used in these spec arguments or articles when every other number each system has is being discussed?

What are the latency numbers?

We simply don't know. Only the folks with dev kits and NDAs know.
 
Well it would be a remarkable oversight for them to invent a kind of memory specifically for use in GPUs, and to iterate it through 5 generations of the technology if its design was not a good fit for GPU use.

Oh, I understand, it was just simple and funny. It shouldn't even be an issue with this being a game console. This setup wouldn't work well on a PC, but it technically makes sense for a gaming machine.
 
You didn't read what I wrote. There is nothing stopping MS from saying they're the most powerful. If they want to be more truthful... they could say "Most powerful Xbox ever" while Sony says "Most powerful console ever". People will hear "most powerful" from both and it'll be a marketing wash. Unless Sony can somehow draw a meaningful comparison with their hardware advantage, I don't see how it'll resonate in a sound-bite advertisement.

Well then we'll have to just disagree. It's quite easy to discern "most powerful" from "most powerful Xbox". All Sony has to do is nail that in every ad, while Xbox has to prove to the masses that they want Kinect again, when they've already moved on once.
 
All I know about power is that both will have a switch that turns them on so I can play games and that is awesome.

 
So people are coming here saying DDR3 has the advantage when they have no clue one way or the other?

Because when you go and look at the specs of DDR3 and GDDR5 memory, latency is better on DDR3 at the same clock rate.
 
It's by no means a level playing field. That's just patently disingenuous.

Maybe here on GAF the new consoles are not considered to be on a level playing field (at least, that is the consensus I am seeing).

However, in terms of "Joe Q. Public" (at least in North America), I think it still very much is a level playing field. At least initially, the deciding factors will be based upon Marketing, as well as the consumer's preference towards each console's Launch game portfolio.

However, in the longer run, other factors will come into play. Certainly, at this point, for the long-term outlook, it would seem as though Sony is holding the high cards...but if video game history is any indicator, the success of any given console is very hard to predict from initial conditions.
 
I was referring to the memory write performance being better on Xbox One, which speaks to latency.
I'm referring to the round-trip time, excluding the time it takes to get/send the data. Like ping versus bandwidth. GDDR has a higher one because it was designed with that in mind. Like many people have said here, GPUs don't really care about this because they can swap tasks like pimps do hoes. CPUs don't have that privilege, however, and it's literally throwing clock cycles out of the window. Hence why DDR is still used in general-purpose PCs to this day.

I don't want to give the impression that I'm downplaying the PS4, because it's got an immense GPU, far more powerful than the X1's. It's just the overall architecture which, cliché as it is, makes them more of a level playing field. You can't really stamp percentages on each; there are just too many factors.
 
I'm not saying the Xbox will dominate, far from it. I'm saying it will be very close. The Xbox brand is very powerful in the US and UK, I don't see that changing. Microsoft's mistakes will cost them the lead they would otherwise have after such a successful generation but that is all. People who expect a PS4 domination are just letting their feelings cloud their judgment.

The Playstation brand was utterly dominant globally (including the US and UK) before the PS3 but still ended up in dead last place for most of the generation. The Xbone will probably do well, but I don't think it will be "very close".
 
Modern game engines are built to take advantage of whatever is available in the hardware. Even games that have been mocked for having similar graphics year after year, now start with high resolution graphics and then scale down to what's necessary. If there is a difference between the PS4 and the X1 you will probably be able to notice it. But it's not like the gameplay will be much different, no dev is gonna have different frames per second, or add more levels or anything like that. Games will be equally fun.
 
I'm not saying the Xbox will dominate, far from it. I'm saying it will be very close. The Xbox brand is very powerful in the US and UK, I don't see that changing. Microsoft's mistakes will cost them the lead they would otherwise have after such a successful generation but that is all. People who expect a PS4 domination are just letting their feelings cloud their judgment.

Actually, the PS4 and Xbox One will be close in sales in the United States... however, in the rest of the world Sony always dominates, which is why the majority of gamers expect the PS4 to sell a lot more consoles than the Xbox One next gen. This gen, in the United States, the Xbox 360 and Nintendo Wii dominated (especially the Wii), but the PS3 was still able to overtake the Xbox 360 due to global sales. Just imagine what's going to happen when the PS4 and Xbox One have close sales in the US and Sony still dominates globally (Japan, UK, Australia, etc.).
 
That just means the API is there for tiled resources on that GPU; the implementation will be either software or hardware depending on the tier level.

Can you elaborate a little more on what that means? If the API supports it and the GPU supports it, do you mean it's a design choice as to which method is used? Like the Nvidia PhysX software/hardware variants.

It all seems to add up perfectly to using 16MB of tiled resources plus a 16MB frame buffer in its eSRAM. I hope the PS4 supports it also; at least the GPU does (it supports DX11.2/OpenGL 4.4).

Slightly off topic, but an interesting article where Valve is actually saying OpenGL is faster than DirectX (surprise :P):

http://www.extremetech.com/gaming/133824-valve-opengl-is-faster-than-directx-even-on-windows
 
  • 6 more CUs mean the whole console is 50% faster, does it? It's a very well known fact that extra CUs dramatically decrease the efficiency of multi-threading tasks and of the shader cores themselves. It's not a linear performance gain; it depends on many factors. I'm not saying the PS4 GPU hasn't got more CUs, which it has. What if I say the PS4 GPU is going to have a lot more to work on outside of games compared to the X1? This includes video encoding, video decoding and, as Mark Cerny said, a lot of the audio tasks will be offloaded to the GPU, due to the fact that the GPU is a parallel processing unit which isn't affected by GDDR latency in the same way as the CPU is. Those extra CUs start to count for less and less without the custom architecture to back them up. Oh, and the developers have a lot more legwork managing the threading and task handling of the GPU.

What a load of crap.

Extra CUs don't "dramatically decrease the efficiency". If a certain GPU task scales to 768 cores (X1), it will also scale to 1152 cores (PS4). The gain is almost linear when you don't have other limiting factors. Look at benchmarks.

Video encoding on PS4 is done via a dedicated chip as has been said 1000 times. It takes literally zero GPU resources.

PS4 also has a sound processor. Cerny certainly didn't say "a lot of audio tasks" will need to be done via the GPU; he talked about audio raycasting, which is a very exotic feature that I'm pretty sure the X1 sound processor can't do in hardware either.
 
If GDDR's latency is so bad for GPUs, why was it specifically designed for GPUs?

Size and bandwidth problems are cheaper to solve; latency issues are expensive.
With graphics you work on a lot of big data chunks, so latency is not that big of an issue.
And thread switching on a GPU is as good as free, whereas on a CPU a thread switch is expensive and takes quite a while, if I'm not mistaken.
So another compute thread can run on the GPU while the other waits for its data to arrive.
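To put rough numbers on why cheap thread switching hides latency (purely illustrative figures, not either console's actual specs: a 240 GB/s bus, ~12 ns access latency, 64-byte requests), Little's law gives the number of memory requests that have to stay in flight to keep the bus busy:

```python
# Illustrative latency-hiding estimate (made-up but plausible numbers).
bandwidth = 240e9   # bytes per second the bus can deliver
latency = 12e-9     # seconds for one access to come back
request = 64        # bytes per request (one cache line)

# Little's law: concurrency needed to saturate the bus.
in_flight = bandwidth * latency / request
print(in_flight)    # ~45 requests must be outstanding at all times
```

A GPU juggling thousands of threads keeps that many loads in flight just by switching to another thread; a CPU core with only a handful of outstanding misses can't, so it stalls instead.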
 
Actually, the PS4 and Xbox One will be close in sales in the United States... however, in the rest of the world Sony always dominates, which is why the majority of gamers expect the PS4 to sell a lot more consoles than the Xbox One next gen.

Judging by last gen, yes. Judging by pre-orders and the general polls/feeling for next gen, the PS4 will likely get a handle on the US and UK, both of which used to have a strong 360 userbase.
 
I'm referring to the round-trip time, excluding the time it takes to get/send the data. Like ping versus bandwidth. GDDR has a higher one because it was designed with that in mind. Like many people have said here, GPUs don't really care about this because they can swap tasks like pimps do hoes. CPUs don't have that privilege, however, and it's literally throwing clock cycles out of the window. Hence why DDR is still used in general-purpose PCs to this day.

I agree with you, although I have no proof to back up my supposition.
 
Let me ask: did DirectX or OpenGL ever get upgraded after the consoles got released? These drivers are final at manufacturing, and both companies have had theirs final for a very long time, which makes me believe this article is just capitalistic journalism at its best.
It doesn't work like that on consoles. Drivers are included with the game, and as new ones are ready they can be made available at any time to be used with any new game from there on. The reason for this is to ensure that older games will work 100% for sure, regardless of what drivers are made available later on.

You are also wrong on several other points in your original post, if not all of them. For example, the PS4 has dedicated video encoding and decoding hardware, as well as audio hardware. The GDDR latency thing has been discussed ad nauseam, and people with very intimate knowledge of the matter don't think its latency is big enough nowadays to affect anything.
 
Finally got my account approved on NeoGAF. Cheers admins!

Welcome!

How can drivers in a console still be unfinished and the hardware not final when they've gone into mass production?
The drivers will probably never be finished, as both companies will continue to optimize their consoles and tools for developers. As it stands now the PS4 has the better tools, from what we have heard from developers; I forget the exact name of the studio, but I am sure someone will pull up the quote (I think it was Avalanche, but I'm not sure). The hardware not being final is something that Microsoft keeps repeating, so it would be better to ask them about that.
DirectX has been on the platform since the start; it's not buggy or "poor", it just works thanks to the shared codebase. They also released their mono driver during E3, which is the specially optimised version of DirectX for the platform. So saying they have been late with drivers is flat-out wrong.
Again, this isn't some random conjecture we created in our minds; this type of stuff comes from developers, who have said the PS4 is easier to code for and has more mature development tools. We don't know how good the drivers are on either side, but I think we can confidently say both will be very optimized.
6 more CUs mean the whole console is 50% faster, does it? It's a very well known fact that extra CUs dramatically decrease the efficiency of multi-threading tasks and of the shader cores themselves. It's not a linear performance gain; it depends on many factors. I'm not saying the PS4 GPU hasn't got more CUs, which it has. What if I say the PS4 GPU is going to have a lot more to work on outside of games compared to the X1? This includes video encoding, video decoding and, as Mark Cerny said, a lot of the audio tasks will be offloaded to the GPU, due to the fact that the GPU is a parallel processing unit which isn't affected by GDDR latency in the same way as the CPU is. Those extra CUs start to count for less and less without the custom architecture to back them up. Oh, and the developers have a lot more legwork managing the threading and task handling of the GPU.
The problem is that 6 more CUs are not the only thing the PS4 has over the Xbox One. It also has 32 ROPs versus the 16 in the Xbox One, and it has optimizations specifically for GPGPU. The main point here is that these two GPUs come from the same product family but from different parts of the product stack. The GPU in the Xbox One is more akin to a mainstream product (7700 series), while the GPU in the PS4 is a mid-range enthusiast card (7800 series).
Memory reads are 50% faster? Faster than what? I can tell you as a fact that if it's the CPU doing the memory read, it would be a heck of a lot slower. Even if it's the GPU doing the read, if the developer doesn't implement task switching while waiting for the GDDR return, then it'll still be slower. It depends on how deep the OpenGL wrapper goes.
This is wrong; GDDR5 doesn't inherently have any more latency than DDR3. In fact, both are pretty much the same memory, just that one is optimized for latency while the other is optimized for bandwidth. The reason PCs use DDR3 for system RAM is that there are so many applications hitting the CPU at the same time, so the low latency helps there; but on closed console hardware, what is going to be calling on the CPU while you are gaming? Nothing. Another thing is that Jaguar is an out-of-order CPU, so it doesn't have to wait around doing nothing while data is being retrieved.
By no means am I saying the PS4 doesn't have more of a GPU, because it does. The thing is, though, it needs that GPU when you've got a CPU crippled by GDDR latency. Audio processing (not to be confused with the audio encoder in the PS4) will have to be offloaded to the GPU, and a lot of the physics will be handled by the GPU. Those extra CUs start counting for less and less, and when you've got a CPU you have to think a lot about because they've put GDDR in there, then you're starting to see what Albert Penello is saying.
Audio will not be offloaded to the GPU; Mark Cerny was talking about the future, when developers harnessing the power of GPGPU will be able to offload some audio tasks to the GPU, like audio raycasting. As far as I remember the Xbox One doesn't have a PPU either, so it's also going to be running physics on the GPU; that's not a PlayStation-specific problem. As for the CPU being "crippled", I have already addressed that above.
 
Size and bandwidth problems are cheaper to solve; latency issues are expensive.
With graphics you work on a lot of big data chunks, so latency is not that big of an issue.
And thread switching on a GPU is as good as free, whereas on a CPU a thread switch is expensive and takes quite a while, if I'm not mistaken.
So another compute thread can run on the GPU while the other waits for its data to arrive.

It's just a matter of timing. If bandwidth is fine, it's not a big deal.
 
Can you elaborate a little more on what that means? If the API supports it and the GPU supports it, do you mean it's a design choice as to which method is used? Like the Nvidia PhysX software/hardware variants.

It all seems to add up perfectly to using 16MB of tiled resources plus a 16MB frame buffer in its eSRAM. I hope the PS4 supports it also; at least the GPU does (it supports DX11.2/OpenGL 4.4).

No, it is a hardware thing as to whether tiled resources need to be implemented via software or via hardware. There was a thread on Beyond3D where an AMD guy came in and spoke about Tier 1 and Tier 2 hardware. I am attempting to look it up now. We don't know which tier either of the consoles falls into, though.

Edit: Found it on this thread: http://forum.beyond3d.com/showthread.php?t=64206&page=9

A relevant post is here:

Originally Posted by MJP:
I know I'm a few days late to the party, but I spent a half hour or so trying to figure out what the difference is between the TIER1 and TIER2 feature levels exposed by DX11.2 for tiled resources. Unfortunately there's no documentation yet for the enumerations or the corresponding feature levels (or at least none that I could find), so all I have to go off is the sample code provided by MS. These are the major differences illustrated by the code:

- TIER2 supports MIN and MAX texture sampling modes that return the min or max of 4 neighboring texels. In the sample they use this when sampling a residency texture that tells the shader the highest-resolution mip level that can be used when sampling a particular tile. For TIER1 they emulate it with a Gather.
- TIER1 doesn't support sampling from unmapped tiles, so you have to either avoid it in your shader or map all unloaded tiles to dummy tile data (the sample does the latter).
- TIER1 doesn't support packed mips for texture arrays.
- TIER2 supports a new version of Texture2D.Sample that lets you clamp the mip level to a certain value. They use this to force the shader to sample from lower-resolution mip levels if the higher-resolution mip isn't currently resident in memory. For TIER1 they emulate this by computing what mip level would normally be used, comparing it with the mip level available in memory, and then falling back to SampleLevel if the mip level needs to be clamped. There's also another overload for Sample that returns a status variable that you can pass to a new "CheckAccessFullyMapped" intrinsic that tells you if the sample operation would access unmapped tiles. The docs don't say that these functions are restricted to TIER2, but I would assume that to be the case.

Aside from those things, all of the core hardware functionality appears to be available with TIER1.

Originally Posted by sebbbi (a RedLynx employee who knows his shit):
Thanks for the info. It's pretty much as I expected. I only had the DX11.2 online documentation, and it didn't have any details about the differences between tier 1 and tier 2. I am likely getting a copy of Win 8.1 next month, so I can do some experiments of my own.

Basically, on tier 1 hardware this means that a similar method needs to be used to detect page faults as was used for software virtual texture implementations. But that's pretty much the most efficient way to do it, so the missing CheckAccessFullyMapped shouldn't hurt performance at all. The missing min/max filtering and the missing LOD clamp for the sampler mean that tier 1 hardware needs quite a few extra ALU instructions in the pixel shader. It shouldn't be a big deal for basic use scenarios, but when combined with per-pixel displacement techniques (POM/QDM/etc.) the extra ALU cost will start to hurt. And obviously if you use tiled resources for GPU SVO rendering, the extra ALU cost might hurt tier 1 hardware even more. But still, this is quite positive news. Tier 1 only requires some small changes in pixel shaders, and all the important parts are supported.

http://forum.beyond3d.com/showthread.php?t=64206&page=6

My understanding is:

Southern Islands is not a single GCN architecture. All of them are DX11.2 compatible after a driver update:
GCN 1.0 -> Tier 1
GCN 1.1 -> Tier 2

The differences between GCN versions may not be huge, but they are there.

So yes, they are all DX11.2 compatible, but one at a hardware level, the other emulating some functions in the driver, etc.

AMD guy in response to the post above:

That was like pulling teeth!

In reference to people trying to figure out his vague posts.
 
What a load of crap.

Extra CUs don't "dramatically decrease the efficiency". If a certain GPU task scales to 768 cores (X1), it will also scale to 1152 cores (PS4). The gain is almost linear when you don't have other limiting factors. Look at benchmarks.

Video encoding on PS4 is done via a dedicated chip as has been said 1000 times. It takes literally zero GPU resources.

PS4 also has a sound processor. Cerny certainly didn't say "a lot of audio tasks" will need to be done via the GPU; he talked about audio raycasting, which is a very exotic feature that I'm pretty sure the X1 sound processor can't do in hardware either.
Unfortunately it's not that linear. I hate falling back on PC comparisons because the platforms aren't really comparable, but in this instance it's not too bad.

Look at the Titan compared to the GTX 780. The Titan has a whole 0.5 TFLOPS more, which is a 12.5% theoretical performance gain. Unfortunately, they're both very similar in benchmark results. For example:
http://www.videocardbenchmark.net/high_end_gpus.html
 
1080p, better effects, better image quality, stable 60fps, a mouse, cheaper, free online. You can have friends on PC too. Yeah, much more enjoyable.

You're saying all this as if you've been to the future.

I don't like playing with a mouse. I hate it in fact. That right there alone would make playing the game way less enjoyable.
 
I really don't see that as being possible. It's just how GDDR "works" and is structured. Very much like saying you couldn't get an x86 CPU to do PowerPC's tricks.

If I see evidence, then obviously this makes this invalid. I honestly don't see it happening though.

How about you read this: http://www.reddit.com/r/Games/comments/1h2qxn/xbox_one_vs_ps4_memory_subsystems_compared/caqjldw
It shows that GDDR5 latency is almost the same as, or maybe even better than, DDR3's.

I'll try to copy it here.
Here's a write-up I did a while ago that I'll just paste here:

My background is in VLSI design, but I know nothing about graphics programming. So here goes my analysis of the chip- and system-level design.

1) There are low to mid-range video cards that have both a DDR3 and GDDR5 version. This provides a direct comparison point. In the very low end, I found benchmarks showing that the GDDR5 version is only 10-20% faster. However, in mid-high range cards, the GDDR5 version can be almost 50% faster. http://ht4u.net/reviews/2012/msi_ra...39.php&usg=ALkJrhi1G4TxkhzXnvN1ZfRJ3KdXukbpQQ

This makes sense. In low end cards, the GPU does not have enough processing power to be significantly bottlenecked by memory bandwidth, but in faster cards it definitely can.

So bandwidth is critical for GPU tasks. That's why high-end video cards use 384-bit wide interfaces to memory while CPU memory interfaces are only 64 bits wide (per channel)! It certainly is not cheap to dedicate that many IO pins to memory, so they do it for a good reason.
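A quick sketch of how much that bus width is worth (illustrative parts: GDDR5 at the 5 Gbps per pin mentioned below, on a 384-bit card, versus a single 64-bit DDR3-1600 channel):

```python
# Peak theoretical bandwidth = bus width in bytes * per-pin data rate.
def peak_gb_per_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(peak_gb_per_s(384, 5.0))  # 240.0 GB/s - GDDR5 card with a 384-bit bus
print(peak_gb_per_s(64, 1.6))   # 12.8 GB/s  - one DDR3-1600 CPU channel
```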

Memory bandwidth and latency is not too critical for most CPU tasks though. For the PC builders out there, you can find some benchmarks comparing different memory timings and speeds and in most cases you'd be better off buying a faster video card instead of spending money on better RAM.

2) GDDR5 having much higher latency than DDR3 is a myth that's been constantly perpetuated with no source to back it up. Go look up datasheets of the actual chips and you'll see that the absolute latency has always been the same, at around 10ns. It has been around that since DDR1. Since the data rates have been increasing, the latency in clock cycles has increased but the absolute latency has always been the same. Anyone who wants to argue with me should dig through datasheets to back their claims up.

From Wikipedia: DDR3 PC3-12800 @ IO frequency 800MHz has typical CAS latency 8. This means the absolute latency is 10ns. DDR2 PC2-6400 runs at IO frequency 400MHz, with CAS latency 4. This is also 10 ns.

Here's a typical GDDR5 chip datasheet: http://www.hynix.com/datasheet/pdf/graphics/H5GQ1H24AFR(Rev1.0).pdf

The table showing CAS latency vs frequency is on page 43 of the datasheet.


The data rates are a factor of 4x faster than the memory clock. So at a typical 5.0 Gbps output data rate, the memory runs at 1.25 GHz (source: page 6 of the datasheet) and supports a CAS latency of 15. This is 15 / (1.25 GHz) = 12 ns.
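Both of those figures are easy to sanity-check (a minimal worked example of the same arithmetic: absolute latency = CAS cycles / IO clock):

```python
# Absolute CAS latency in nanoseconds = CAS cycles / IO clock frequency.
def cas_ns(cas_cycles, io_clock_hz):
    return cas_cycles / io_clock_hz * 1e9

print(cas_ns(8, 800e6))    # DDR3 PC3-12800, CL8 @ 800 MHz  -> 10.0 ns
print(cas_ns(15, 1.25e9))  # GDDR5 @ 5 Gbps, CL15 @ 1.25 GHz -> 12.0 ns
```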

3) The Xbox One has additional SRAM cache to improve its bandwidth. However, they needed to dedicate additional silicon area and power budget to the cache and the cache controller. This is definitely a big cost-adder in terms of yield and power budget, but probably not as much as using GDDR5 chips. Chips these days are limited only by the amount of power they can dissipate, and everything is a trade-off. By adding complexity in one area, the designer must remove it from another. So Microsoft spent some of their power budget on implementing a cache, while Sony could use it to actually increase the number of GPU cores. And it shows.

Who knows how well the Xbox One's cache system will work to catch up to PS4's bandwidth advantage. But it is certainly not going to be _faster_ or simpler. When you're streaming in the huge textures needed for next-gen 720p to 1080p graphics, a 32MB cache is not big enough to constantly provide enough bandwidth.

Also, since the PS4 has more GPU power, it will definitely need all the bandwidth it can get.
 
Ok so you do buy less power for more money. Check.

Lol... the point was I don't really care about "power" when it comes to a PC. Now get back on topic and talk about consoles.

I will be interested to see what comes of the situation with next gen power...and how that translates to goals for next-next gen if there are big consequences to the choices for xb1/ps4.
 
Unfortunately it's not that linear. I hate falling back on PC comparisons because the platforms aren't really comparable, but in this instance it's not too bad.

Look at the Titan compared to the GTX 780. The Titan has a whole 0.5 TFLOPS more, which is a 12.5% theoretical performance gain. Unfortunately, they're both very similar in benchmark results. For example:
http://www.videocardbenchmark.net/high_end_gpus.html

The GTX 780 and Titan are both based on GK110. The PS4's and Xbox One's GPUs are not based on the same chip.
 
The MS brand is not that powerful in the UK; the country went with the PS1, PS2 and Wii (and the 360, if you don't count the Wii).
It's a price-sensitive market, and unlike last gen the PS4 is coming out at the same time and at a cheaper price.
It having the better hardware is the icing on the cake for them.

What? The 360 will probably end up the best-selling console ever in the UK.
 