
Interesting 'new' public PS3 GPU facts emerge

From what I read in other articles, RSX *probably* has 24 pixel pipelines and most likely 8 vertex pipelines, and this can't be directly compared to the Xbox 360 Xenos' 48 unified pipes.
 
Vince said:
This irks me. They're using the same process (CMOS4) for RSX in 2006 that the PSP and [EE+GS] have been using, the latter since 2H2003. And those two designs have eDRAM to boot... CELL I can understand being at 90nm, as it's a highly-custom sSOI design, but WTF happened here. CMOS5 (65nm) has been sampling since 2004; somewhere, some group fucked up.

And does anyone know the NV40's die size? It has to be huge, having 220M transistors at 130nm. [130nm] => [90nm] roughly doubles transistor density, and RSX is just over 300M... I'm guessing NV40 is huge and nVidia|Sony are banking on absolute performance.


It irritates me too. I was expecting a 500 million or more transistor GPU on 65nm, with eDRAM.


Actually, if things had paralleled the PS2, the PS3 GPU would've had roughly 4x the transistor count of the CPU.
 
Izzy said:
It means, as amazing as Heavenly Sword was, it was hardly using Cell power at all. Only the main core (PPE) - SPEs (7 of 'em) were idle. o.O

Doesn't this contradict previous reports that Cell would handle dynamic allocation to the SPEs? So you don't need to explicitly set code to them as the PPE determines which SPEs get which tasks and distributes them accordingly (while performing some processing on its own). Unless they specifically locked down the 7 SPEs, which I don't see a reason why they would, I highly doubt they were idle.
 
rastex said:
Doesn't this contradict previous reports that Cell would handle dynamic allocation to the SPEs? So you don't need to explicitly set code to them as the PPE determines which SPEs get which tasks and distributes them accordingly (while performing some processing on its own). Unless they specifically locked down the 7 SPEs, which I don't see a reason why they would, I highly doubt they were idle.

I'm not Deano, but here's what he had to say on the subject:

Deano Calver said:
We haven't made the jump to a multi-threaded game architecture yet. Almost everything sits on a single thread...
 
Q: So, Deano, once you guys start to multithread, what advantages do you think the game will have? Will it improve graphics, physics, AI, etc.?

A: Graphics and framerate are the low-hanging fruit. Just threading up the animation system and procedural graphics (hair, cloth, flags etc.) will give us a large amount of CPU time back for the game. The army needs this the most.

Longer term, Physics and AI services are obvious candidates.

As for what improves? That's a good question. The priority is to move the heavyweight stuff off the main game thread; hopefully doing this will provide lots more time for the game code. That should improve the gameplay in lots of ways.

Whether we will achieve all this and keep the code easy to develop with is the big question. Lots of designers and coders (especially the more junior members of the team) aren't used to dealing with threads, DMA and C-like code. Keeping a balance between the high-level and the harder stuff is the biggest challenge. I don't want a level designer having to worry about threads, but at the same time I don't want him/her coding in such a way that it's totally serialised...
 
One thing that has been bugging me these recent days is the Floating Point Calculations which Cell almost doubles Xbox360's PowerPC processor in terms of performance. What does this account for exactly in terms of games?
 
Izzy said:
I'm not Deano, but here's what he had to say on the subject:

Yes, as has been mentioned before - the idea is to have a multithreaded game where the threads are working on 'in-order' data. Then you can have a thread per SPE (at least one, assuming it's running at 100% peak) doing something. If you only have one thread, work cannot be allocated across the SPEs because you're only telling the machine to do one thing at a time. When you thread it, you're telling the machine to do multiple things at once (and this is where you rapidly fall from the theoretical numbers down to the actual numbers - that's the same for any MP system). When there is more than one thing to do, only then can work be spread out in producer:consumer fashion to the architecture.

In layman's terms, it's like having 7 UPS delivery trucks but only one package to deliver, versus having 7 UPS delivery trucks with more than one package to deliver. In the latter case you use more trucks.
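The producer:consumer pattern described above can be sketched in ordinary code. This is only an illustrative toy (Python threads standing in for SPEs, a queue standing in for the PPE handing out work; none of this is from the thread itself): with one "package" only one worker ever does anything, no matter how many workers exist.

```python
import threading
import queue

def run_producer_consumer(packages, num_workers=7):
    """Toy model of the trucks analogy: the producer (PPE) feeds a queue,
    num_workers consumers (the 'SPEs') pull from it. Returns how many
    workers actually received any work."""
    work = queue.Queue()
    used_workers = set()
    lock = threading.Lock()

    def worker(worker_id):
        while True:
            item = work.get()
            if item is None:              # poison pill: shut this worker down
                work.task_done()
                return
            with lock:
                used_workers.add(worker_id)   # this "SPE" saw real work
            work.task_done()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(num_workers)]
    for t in threads:
        t.start()
    for p in packages:                    # producer side: enqueue the packages
        work.put(p)
    work.join()                           # wait until all packages are handled
    for _ in threads:                     # then shut everyone down
        work.put(None)
    for t in threads:
        t.join()
    return len(used_workers)
```

With a single package only one truck rolls (`run_producer_consumer(["pkg"])` is 1); with many packages, anywhere between 1 and 7 workers get used, depending on scheduling — exactly the gap between theoretical and actual utilisation mentioned above.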
 
JMPovoa said:
One thing that has been bugging me these recent days is the Floating Point Calculations which Cell almost doubles Xbox360's PowerPC processor in terms of performance. What does this account for exactly in terms of games?

Floating point calculations are used for everything. A floating point number is a number with decimals - a float. So assume that you wanted to draw rain. If you wanted to model it you might have the position (x, y, z) as an ordered tuple of three floats. Now when you want to move it, you need to perform operations on these x, y, and z values so the rain falls to the ground (simple case of a stupid particle system). The transformation of this position on each axis will be a floating point operation.
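That rain example can be written out concretely. This is a minimal sketch (the function name and constants are made up for illustration; the point is just to show where the floating point operations land):

```python
def step_rain(particles, dt=1.0 / 60.0, gravity=-9.8):
    """Advance each raindrop by one frame. Every particle costs a handful of
    floating point multiplies and adds, so the particle count you can afford
    scales directly with available FLOPs."""
    out = []
    for (x, y, z), (vx, vy, vz) in particles:
        vy = vy + gravity * dt                              # integrate velocity
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt     # integrate position
        if y < 0.0:                                         # hit the ground:
            y = 10.0                                        # respawn at the top
            vy = 0.0
        out.append(((x, y, z), (vx, vy, vz)))
    return out
```

Roughly eight float operations per drop per frame; a million drops at 60fps is already ~500 MFLOP/s before anything else in the game runs.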

So, theoretically, the more floating point operations you can do, the more complex and realistic a system you can create. This is particularly true in a physics simulation (and a particle system is just one of the simpler physics simulations you can do). The more floating point ops you can do, the more realistic you can make the simulation without hurting the frame rate. There is a finite amount of time between each VSYNC flip on the TV and you have pretty straightforward falloff from 60->30->15->etc. So let's say you're currently running at 15fps and you want to make the game run at 30 - you start finding ways to optimize or cut down the sophistication of the simulation so that you can post the frame to be rasterized (put in your face). Unlike PCs, there aren't 17FPS and such. If you miss the retrace - you have to wait :)
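The 60->30->15 falloff follows directly from the retrace arithmetic. A small sketch (illustrative only, not from the post): with vsync, a frame can only be shown on a retrace boundary, so the effective rate is always the refresh rate divided by a whole number of intervals.

```python
import math

def effective_fps(frame_time_ms, refresh_hz=60):
    """With vsync, a finished frame waits for the next retrace, so frame rate
    snaps to refresh_hz / n for integer n: 60, 30, 20, 15, 12... never 45 or 17."""
    vsync_ms = 1000.0 / refresh_hz
    intervals = max(1, math.ceil(frame_time_ms / vsync_ms))  # retraces consumed
    return refresh_hz / intervals
```

So a frame that takes 16ms posts at 60fps, but one that takes 20ms - barely over budget - drops straight to 30fps, which is exactly why missing the retrace is so costly.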
 
So, theoretically, the more floating point operations you can do, the more complex and realistic a system you can create. This is particularly true in a physics simulation (and a particle system is just one of the simpler physics simulations you can do). The more floating point ops you can do, the more realistic you can make the simulation without hurting the frame rate. There is a finite amount of time between each VSYNC flip on the TV and you have pretty straightforward falloff from 60->30->15->etc. So let's say you're currently running at 15fps and you want to make the game run at 30 - you start finding ways to optimize or cut down the sophistication of the simulation so that you can post the frame to be rasterized (put in your face). Unlike PCs, there aren't 17FPS and such. If you miss the retrace - you have to wait

But even if the Cell processor can make 218 floating point operations (per second?), is this some sort of peak number, or is it always achievable, independent of other things the hardware might be stressed with?

I mean, is it possible that even though in this regard, Cell is better than Xbox 360's Power PC processor, it may in a real world situation underperform against it, taking into account everything a certain game might be doing in terms of graphics?
 
Your speculation is correct: it is a peak value and depends on other things. Sort of game, quality of developer, Sony support to the developer, etc.
Yes, in a real-world case there will be times when the X360 CPU is probably better for a job than Cell. Though this is true of a lot of different architectures, the number of times this occurs is of course dependent on the aforementioned factors.
 
Flop numbers are always theoretical peak. You could write a dummy program that does nothing but floating point operations in a loop, and then you would be able to reach that number, but in a real-world situation you never achieve that - not on the PS3, not on the Xbox 360, nor on any other hardware. Now how close can you get? That is hard to answer, as different things use flops differently all the time. To say that it's easier on the Xbox 360 to come close to the max is true in a way, because you have fewer cores to split the workload across, but once your code is written for the parallelism of the Cell architecture, at that point it's more important what you actually do than how many cores a chip has. In other words, an engine optimized for the Cell architecture will come as close to its peak as an engine optimized for the Xbox 360, but an unoptimized engine will have it easier on the 360. Actually, an unoptimized, non-threaded engine will get nearly the same flops on both machines, as it could only use one thread of one PE.
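Fredi's "dummy loop" is easy to make concrete. A minimal sketch (illustrative; an interpreted language lands orders of magnitude below hardware peak, which only reinforces the point that sustained numbers are nothing like the datasheet):

```python
import time

def measured_flops(seconds=0.1):
    """Run nothing but float math for roughly `seconds` and report the
    sustained FLOP/s of this loop - the absolute best case for this code,
    yet still far below any theoretical peak figure."""
    x = 0.5
    ops = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        for _ in range(10_000):
            x = x * 0.999999 + 0.000001   # 2 floating point ops per iteration
        ops += 20_000
    return ops / seconds
```

Even this loop, with no memory traffic, no branches that matter, and no other work, measures a fraction of what the marketing peak implies; a real game loop does far worse.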

Fredi
 
I'm probably stating the obvious, but it sounds like there's going to be some growing pains with the move to multiple threads in the development process, for all platforms. I doubt most first-gen titles will take much advantage of the CPUs at their disposal. Especially the synergy that might be accomplished between Cell/RSX - who knows what sort of funky shit may end up being finagled out of that tag team when it's all said and done.
 
hukasmokincaterpillar said:
I'm probably stating the obvious, but it sounds like there's going to be some growing pains with the move to multiple threads in the development process, for all platforms. I doubt most first-gen titles will take much advantage of the CPUs at their disposal. Especially the synergy that might be accomplished between Cell/RSX - who knows what sort of funky shit may end up being finagled out of that tag team when it's all said and done.

This is why middleware will be VERY popular this generation. It will be written thread-aware and tailored to the 2 boxes. You'll get packages for physics simulation (Ageia), AI, collision engines, audio engines, and of course, graphics (which covers a LOT of ground).

Using middleware will make multiplatform games much easier and it will mean that a game on one system could run much better than a game on another system if hardware allows. The Middleware engines are going to be very, very finely tuned and in this case, having multiple processors makes the job, much, much easier.

Otherwise, you'd have a scheduler that tries to portion CPU time to the same processor for disparate middleware engines.

You will see a difference.
 
This is why middleware will be VERY popular this generation. It will be written thread-aware and tailored to the 2 boxes. You'll get packages for physics simulation (Ageia), AI, collision engines, audio engines, and of course, graphics (which covers a LOT of ground).

And in your opinion, which console will benefit the most? (you may consider this a rhetorical question, but I am curious :D)
 
JMPovoa said:
And in your opinion, which console will benefit the most? (you may consider this a rhetorical question, but I am curious :D)

Both. But since Microsoft has a better development environment and since Sony has more "processing units", I'd say PS3 will benefit more, but that doesn't mean that PS3 would have the highest performance on those systems.
 
Middleware makes me frown a little bit, just for the whole sameness aspect to it, at least in the PS2/Xbox/Cube era. I understand its need, especially for multiplatform, and it's cranked out some great titles. But as far as first or second party is concerned, I think dedicated engines gave us the best stuff this past gen: Halo, Metal Gear, GT, RE4, Metroid, NG etc. Do you see that changing next gen? Will middleware be a must due to general cost and efficiency? Will that be the real bottleneck perhaps?
 
Izzy said:
More from this article.

David Kirk (nVIDIA) said:

1) RSX can use XDR-RAM(256MB) as VRAM too.

2) 7 SPEs and RSX can work together as a total GPU: SPE as vertex shader, post-processing a rendering result from RSX, etc...

BTW - nVidia's RSX 128-bit HDR implementation is rumoured to be exceptional. :)

This thing is a monster waiting to awaken.
 
sonycowboy said:
This is why middleware will be VERY popular this generation. It will be written thread aware and tailored to the 2 boxes.

It will be more than thread aware/thread safe. Most next-gen middleware will likely be threaded at the core.
 
JMPovoa said:
And in your opinion, which console will benefit the most? (you may consider this a rhetorical question, but I am curious :D)

Depends on which version of that middleware is more efficient. It's nearly impossible to quantify, as it REALLY depends on the middleware and what it's optimized to do. For example, you wouldn't in your right mind use Unreal 3 for a flight simulator; that would be a poor use of the engine and it likely would result in poor performance compared to another middleware product designed specifically to render open environments.
 
hukasmokincaterpillar said:
Middleware makes me frown a little bit, just for the whole sameness aspect to it, at least in the PS2/Xbox/Cube era. I understand its need, especially for multiplatform, and it's cranked out some great titles. But as far as first or second party is concerned, I think dedicated engines gave us the best stuff this past gen: Halo, Metal Gear, GT, RE4, Metroid, NG etc. Do you see that changing next gen? Will middleware be a must due to general cost and efficiency? Will that be the real bottleneck perhaps?

Companies that don't have dedicated engine teams and currently use middleware are almost 100% guaranteed to continue to do so. For example, Elder Scrolls: Oblivion uses Gamebryo. Game looks great and is one of the few next-generation titles shown that is actually running on the showfloor WORKING. Now, the guys who roll their own engines will undoubtedly make stuff that is more efficient than Gamebryo could ever hope to be, because they don't intend to be multi-platform and as such can optimize specifically for the platform they're on. But at the end of the day, what you 'see' is as much game design and art as it is the actual engine and hardware performance itself. You can have the best hand-coded engine in the world, but if you let me - a development guy - do your modelling, it's going to look ASS. .... guaranteed :)
 
Phoenix said:
Yes, as has been mentioned before - the idea is to have a multithreaded game where the threads are working on 'in-order' data. Then you can have a thread per SPE (at least one, assuming it's running at 100% peak) doing something. If you only have one thread, work cannot be allocated across the SPEs because you're only telling the machine to do one thing at a time. When you thread it, you're telling the machine to do multiple things at once (and this is where you rapidly fall from the theoretical numbers down to the actual numbers - that's the same for any MP system). When there is more than one thing to do, only then can work be spread out in producer:consumer fashion to the architecture.

In layman's terms, it's like having 7 UPS delivery trucks but only one package to deliver, versus having 7 UPS delivery trucks with more than one package to deliver. In the latter case you use more trucks.

This still doesn't mean that the SPEs aren't being used. Even if threads aren't coded into the engine explicitly, the provided compilers can parallelize things on their own; there are compilers out there that already do that, so I'd assume that the current Cell compilers do that as well. Having the compiler take care of everything for you won't be as efficient as doing it by hand, but it'll get you a good way there.
 
Phoenix said:
Now, the guys who roll their own engines will undoubtedly make stuff that is more efficient than Gamebryo could ever hope to be because they don't intend to be multi-platform and as such can optimize specifically for the platform they're on.

I think you'll see the big middleware guys really, really make incredibly efficient engines that are tailored to each platform, and provide regular updates throughout the life of the consoles as they make iterative changes.

It will be very difficult for any developer to make an engine better than the Middleware guys this generation. It's like every other industry out there, you'll want to choose the best of breed and use parts that have been refined dozens of times for hundreds of clients as opposed to relying on in-house staff to solve problems that have already been solved, refined, and redeveloped numerous times.

That's a big reason why EA bought Renderware. They recognized that they would spend millions and millions of dollars to develop various engines that have already been done better by others. And that's from a company that develops over 100 SKUs a year and could afford to develop in-house engines that could be leveraged by many systems.
 
sonycowboy said:
That's a big reason why EA bought Renderware. They recognized that they would spend millions and millions of dollars to develop various engines that have already been done better by others. And that's from a company that develops over 100 SKUs a year and could afford to develop in-house engines that could be leveraged by many systems.

This is a little interesting. I guess it works great for EA since they are such a large company, but what has surprised me is how well adopted Unreal Engine 3.0 has become. I'm starting to think it is going to supplant Renderware as the middleware of choice for these next gen platforms.
 
Mrbob said:
This is a little interesting. I guess it works great for EA since they are such a large company, but what has surprised me is how well adopted Unreal Engine 3.0 has become. I'm starting to think it is going to supplant Renderware as the middleware of choice for these next gen platforms.

It will. EA am future cry
 
seismologist said:
Yeah, you're right. I was thinking more along the lines of having a dedicated unit for processing graphics on the CPU.
This is something the X360 lacks, unless they allocate 1/3 of the CPU cores to process graphics. Right?
Before I get accused of misleading, let me correct what I said. I was wrong: the GIF bus from the EE to the GS apparently isn't one-way. Deano recently said something about seeing the EE interact with the GS on a few occasions, and IIRC, Faf alluded to this years ago while the tech discussions were still heavy here. So I would like to say that it's probably 2-way, but due to the bandwidth problems (sending vertices and uncompressed textures to the GS), it probably isn't a very viable option for most games. 360 and PS3 should be much more capable in this regard. PEACE.
 
rastex said:
This still doesn't mean that the SPEs aren't being used. Even if threads aren't coded into the engine explicitly, the provided compilers can parallelize things on their own; there are compilers out there that already do that, so I'd assume that the current Cell compilers do that as well. Having the compiler take care of everything for you won't be as efficient as doing it by hand, but it'll get you a good way there.

It's very difficult to parallelize things for in-order processors simply by using the compiler. This is different from just rolling something for a G5 or an AMD, where you can fork the process, do pieces, and then join the pieces later. These stupidly fast cores are not out-of-order processors; they expect to receive their instructions in order, and as such, if you don't feed the processors, there isn't a whole lot that the compiler can tell them to do.
 
sonycowboy said:
It will be very difficult for any developer to make an engine better than the Middleware guys this generation.

I would actually be willing to put money AGAINST that :) If you're saying that SCEA can't write more optimal code for their own machine than the Renderware and Gamebryo folks... Building a specific purpose shader set for doing very specific effects is always going to be faster than the general purpose sets done by middleware. That has always been the case and I don't see any reason why that would be any different this generation.
 
Phoenix said:
I would actually be willing to put money AGAINST that :) If you're saying that SCEA can't write more optimal code for their own machine than the Renderware and Gamebryo folks... Building a specific purpose shader set for doing very specific effects is always going to be faster than the general purpose sets done by middleware. That has always been the case and I don't see any reason why that would be any different this generation.

I should reword that. They can, the issue is whether it's cost efficient to do so. This generation, Renderware had 100's of games using their engine, and the numbers will go up this generation. Many developers are going to have a difficult time taming the CELL and the Xbox 360 CPU, and the same is true of the RSX and the Xenos.

The absolute top developers (KCEJ, Tecmo, PD) will certainly push the hardware to the limits more than a middleware engine will. However, even then, the middleware engine will be much more efficient than years past.
 
Phoenix said:
It's very difficult to parallelize things for in-order processors simply by using the compiler. This is different from just rolling something for a G5 or an AMD, where you can fork the process, do pieces, and then join the pieces later. These stupidly fast cores are not out-of-order processors; they expect to receive their instructions in order, and as such, if you don't feed the processors, there isn't a whole lot that the compiler can tell them to do.

Well here's what I'm thinking, depending on how smart the IBM compiler is. On one of the passes it'll figure out the data dependencies of a given block of code and split that block up into independent sub-blocks. Then on execution with special instructions (inserted by the compiler) the PPE will delegate out those sub-blocks to the SPEs.

Further clarification: So say you have Block A. The compiler will reorder and all that and in the end it'll be divided up into Tasks 1-8. So when the program runs the PPE will assign Task 1 to SPE01, Task 2 to SPE02 etc etc. Obviously this is a trivial example, and maybe I'm expecting WAY too much out of the compiler and head PPE unit, but I'd be surprised if there isn't anything AT ALL going on like this.
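rastex's hypothetical can be sketched in a few lines. To be clear, this is illustrative only (nothing in the thread says the Cell compiler does this; the function and task names are made up): a "block" that has already been split into independent sub-tasks, farmed out by a dispatcher to a fixed pool of workers standing in for the SPEs.

```python
from concurrent.futures import ThreadPoolExecutor

def run_block(sub_tasks, num_spes=7):
    """Toy model of the PPE-as-dispatcher idea: 'Block A' has (hypothetically)
    already been cut by a compiler into independent sub-tasks, which a pool of
    num_spes workers executes. This only works because the sub-tasks share no
    data dependencies - exactly the hard part a real compiler must prove."""
    with ThreadPoolExecutor(max_workers=num_spes) as pool:
        # map preserves input order, like reassembling the block's results
        return list(pool.map(lambda task: task(), sub_tasks))

# Usage: eight independent sub-tasks spread over seven "SPEs"
tasks = [lambda i=i: i * i for i in range(8)]
results = run_block(tasks)
```

The catch, as the follow-up reply notes, is that real instruction streams rarely decompose this cleanly, and the merge/transfer costs aren't free.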
 
Good read. That explains why they didn't go with embedded RAM... since both Cell and RSX handle rendering, it wouldn't make much sense.
 
rastex said:
Well here's what I'm thinking, depending on how smart the IBM compiler is. On one of the passes it'll figure out the data dependencies of a given block of code and split that block up into independent sub-blocks. Then on execution with special instructions (inserted by the compiler) the PPE will delegate out those sub-blocks to the SPEs.

Further clarification: So say you have Block A. The compiler will reorder and all that and in the end it'll be divided up into Tasks 1-8. So when the program runs the PPE will assign Task 1 to SPE01, Task 2 to SPE02 etc etc. Obviously this is a trivial example, and maybe I'm expecting WAY too much out of the compiler and head PPE unit, but I'd be surprised if there isn't anything AT ALL going on like this.


It depends entirely on the algorithm - there are some things that are conducive to being broken up into pieces and some things that aren't. Since we wouldn't be breaking up tasks over the SPEs, you have to look at more granular operations - like algorithms, which are comprised of instructions. Many of these are not well suited to being broken up into fragments for in-order processors. We could actually run through a few of the graphics ones (like visibility/occlusion), but I think you understand what I'm saying, as I do understand what you're saying. What ends up really happening is that when you look at what is really being assigned to the processor, especially the average embedded in-order RISC processor, you don't achieve much parallelism without splitting stuff into threads.

You also have to remember the penalties involved with moving data across the bus to merge stuff together spread out across the processors. You can certainly have it scheduled and it will work, but you of course have latencies to stage the next set of things while you're waiting for data to come back. Anyway, it's not as simple as leaving it to the compiler.
 
Basic Cell question: I'm assuming that the SPEs should be able to time-multiplex various processes themselves, right, like any multitasking processor? Or is it more a matter of finding some non-interruptible block of inline code to execute on the SPEs?
 
sonycowboy said:
I think you'll see the big middleware guys really, really make incredibly efficient engines that are tailored to each platform, and provide regular updates throughout the life of the consoles as they make iterative changes.

Efficient? I dunno.
 
As soon as the performance analyzer is out, games will start to look absolutely amazing. The Cell chip is built in a way that the PA can watch every SPE and so optimize multicore code.

Fredi
 
For the people saying that the Getaway demo was done all on CELL... I remember Sony playing a graphics demo done completely by Cell during the press conference. Why didn't that look even 1/1000000000000000000000000000000000000 as good as the Getaway demo, while much less was going on in it? Why didn't Sony use the Getaway demo instead to pimp how great of graphics Cell can put out on its own without the GPU? You guys are full of it.
 
Cerebral Palsy said:
For the people saying that the Getaway demo was done all on CELL... I remember Sony playing a graphics demo done completely by Cell during the press conference. Why didn't that look even 1/1000000000000000000000000000000000000 as good as the Getaway demo, while much less was going on in it? Why didn't Sony use the Getaway demo instead to pimp how great of graphics Cell can put out on its own without the GPU? You guys are full of it.

Uhm, strange... that demo was doing complex math - complete procedural generation of all materials and of the landscape - and was not meant as a pure graphics showcase. Strange it did not look as good as the London rendering demo; still, Chatani must be out of the loop ;).
 
Cerebral Palsy said:
For the people saying that the Getaway demo was done all on CELL... I remember Sony playing a graphics demo done completely by Cell during the press conference.

Which demo was this? The landscape demos? They're not exactly comparable; they have very different requirements.
 
Cerebral Palsy said:
For the people saying that the Getaway demo was done all on CELL... I remember Sony playing a graphics demo done completely by Cell during the press conference. Why didn't that look even 1/1000000000000000000000000000000000000 as good as the Getaway demo, while much less was going on in it? Why didn't Sony use the Getaway demo instead to pimp how great of graphics Cell can put out on its own without the GPU? You guys are full of it.

They were completely different demos.
One was Cell generating a landscape in a completely procedural way, without using textures stored in memory.
The Getaway was Cell dealing with graphics in the traditional approach.
 
Okay then. Funny that Sony didn't go out of their way to pimp the fact that CELL was doing the Getaway demo almost all on its own, but they did for the landscape demo. People had to come to this conclusion that the Getaway demo was all done by CELL from the words of Sony marketing, words that could easily be misread. Whatever makes you guys happy, though.

AAAAAAAALLLLLLLLLL ABOOOOOOOOOOOOOOOOARD THE HYPE TRAIN!
 
Even looking at The Getaway shots, maybe it's just me, but they seem to have a different quality to them than you might expect from something running on RSX (?) The textures look a little "soft", not as filtered as you'd expect on a GPU, for instance.

[Image: the-getaway-ps3-20050519100623263.jpg]


Look at the underground sign... look how much less readable the text is on the far side of it versus the near side.

Funny that Sony didn't go out of their way to pimp the fact that CELL was doing the Getaway demo almost all on its own, but they did for the landscape demo. People had to come to this conclusion that the Getaway demo was all done by CELL from the words of Sony marketing, words that could easily be misread.

I agree, it is strange. And I'm not sure if I fully believe it myself yet. But the confirmation didn't come from marketing, it was Masayuki Chatani, their technology officer.

It's definitely something I'd like the media to pick up on and clarify - forget Killzone, we need absolute clarification as to whether The Getaway was being rendered by Cell or not! (or a mix of Cell and RSX!)
 
The gas station explosion was not 'as pretty' as the Getaway one, but then it was showing the simulation aspects.

And the entire explosion was calculated using CELL - no scripted animations; all the volumetric gas stuff, all calculated in realtime.


I think the Getaway was mostly CELL. It couldn't have been 'bringing the city alive' - what, a couple of buses running along preset routes?

I think most of the SCEE stuff was amazing, and perhaps they had been concentrating on CELL for a long time in case it had to act as the GPU as well?


So CELL on its own is fantastic. RSX and one PPE is fantastic (Heavenly Sword).

RSX+PPE+7xSPE is going to own us all. Yum.
 
gofreak said:
Even looking at The Getaway shots, maybe it's just me, but they seem to have a different quality to them than you might expect from something running on RSX (?) The textures look a little "soft", not as filtered as you'd expect on a GPU, for instance.

[Image: the-getaway-ps3-20050519100623263.jpg]


Look at the underground sign... look how much less readable the text is on the far side of it versus the near side.

And Motorstorm was realtime because of the V-sync issues, right? Or Killzone 2 because there were some slight graphical glitches seen? :lol

Keep looking! Keep believing!
 
Cerebral Palsy said:
And Motorstorm was realtime because of the V-sync issues, right? Or Killzone 2 because there were some slight graphical glitches seen? :lol

Keep looking! Keep believing!

See above, I don't necessarily believe it entirely myself yet. I'm not quite sure what to make of it, but Chatani was pretty explicit about it and went into detail.
 
What’s wrong, Cerebral Palsy? You seem kinda, I dunno, worried about something? Take a chill pill, I’m sure the X360 will be an awesome system too. :)
 
gofreak said:
See above, I don't necessarily believe it entirely myself yet. I'm not quite sure what to make of it, but Chatani was pretty explicit about it and went into detail.

Haven't had the chance to see the Chatani interview. I only caught Phil Harrison on G4.


Forsete said:
What’s wrong, Cerebral Palsy? You seem kinda, I dunno, worried about something? Take a chill pill, I’m sure the X360 will be an awesome system too. :)

Not worried. I'm sure both systems will be awesome. It's the people driving me mad! It's the people I tell you! Seriously, things were so nice and quiet before the pre-E3 press conferences. E3 hadn't even started and the needle had been buried on the graphicwhore-o-meter. Now we have everyone and their dog talking out of their asses trying to prove this and that.


McFly said:
Keep trolling, not my problem.

Fredi

Not trying to troll. Just pointing out the desperate lengths people have gone to keep their hopes alive. People who mostly have no clue what they are talking about.
 