Rumor: Wii U final specs

I think it's a fair assumption on your part. What could be interesting is whether reaching 720p would actually make such a significant difference. I can tell you that if it wasn't for the pixel counters, not many people would know that the COD games were running at sub-HD resolutions.

Agreed. But those same people would cry "last gen" if the Wii U version wasn't 1080p.
 
An MCM has nothing to do with what parts are on the same silicon. It's about multiple dies on the same package substrate.

Quick question. Why is it considered better to have an MCM as opposed to just putting the units on the mobo? They look to be far apart anyway, so you could put them just as close together on the mobo. I'm obviously missing something, but why is it better? Is it something to do with the traces between the units?
 
Quick question. Why is it considered better to have an MCM as opposed to just putting the units on the mobo? They look to be far apart anyway, so you could put them just as close together on the mobo. I'm obviously missing something, but why is it better? Is it something to do with the traces between the units?
Actually, no, you might not put them just as close on the PCB - on the MCM you may be looking at bare dies whereas on the PCB those chips would have to sit in their own (large pin/ball-count) packages.

Basically, the proximity of dies improves as follows, depending on the packaging tech:
1. separate packages
2. naked dies on MCM (note: it could also be packages on MCM, but in the more advanced case it's bare dies)
3. 2.5D stacking
4. 3D stacking

Beyond that, yes, the traces themselves have different power properties on the PCB versus on a substrate - on the latter you can transmit a signal using less electricity. As a result, the entire IO segments of the dies consume less power, which improves the overall power characteristics of the die - fewer power lines, lower TDP.

For a general idea, check these two articles:
http://www.eetimes.com/design/programmable-logic/4370596/2D-vs--2-5D-vs--3D-ICs-101
http://www.eetimes.com/design/progr...-ICs-are-more-than-a-stepping-stone-to-3D-ICs
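
To put rough numbers on the trace-power point, here is a minimal sketch (Python) of the usual dynamic-power estimate P ≈ activity × C × V² × f. The capacitance, bus width, voltage and clock figures below are made up purely for illustration - they are not Wii U data:

# Rough illustration of why shorter, on-package traces save I/O power.
# Dynamic switching power per signal line: P ~ activity * C * V^2 * f.
# All numbers below are assumed ballpark values, not measured figures.

def io_power_watts(activity, capacitance_farads, voltage, frequency_hz, lines):
    """Dynamic power burned driving 'lines' signal traces."""
    return activity * capacitance_farads * voltage**2 * frequency_hz * lines

bus_lines = 64               # hypothetical CPU<->GPU bus width
freq = 1.0e9                 # assumed 1 GHz signaling rate
activity = 0.25              # assumed fraction of cycles a line toggles

pcb_trace_farads = 10e-12    # ~10 pF: long PCB trace plus two package pins (assumed)
mcm_trace_farads = 2e-12     # ~2 pF: short substrate trace between bare dies (assumed)

print("PCB:", io_power_watts(activity, pcb_trace_farads, 1.2, freq, bus_lines), "W")
print("MCM:", io_power_watts(activity, mcm_trace_farads, 1.2, freq, bus_lines), "W")
# Same bus, same clock - the on-package version burns a fraction of the power,
# which is the point above about the IO segments and TDP.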
 
The clutching at straws in this thread is stupid.

Just because some things are shared between the R700 and some later GPUs does not mean that the Wii U GPU being called "R700 based" means that it is some later much more advanced GPU.

No one would call a newer GPU "R700 based" when telling devs about it if it were not.
If they were using a newer architecture they would have said it was Evergreen- or NI-based, etc.

They are not trivially different!



Using "DX10" "DX11" when talking about GPUs is fine, due to the fact they have min feature set requirement and because Nvidia and AMD build their GPUs around it and actively plot it's course (in addition to MS), get over it.
 
Quick question. Why is it considered better to have an MCM as opposed to just putting the units on the mobo? They look to be far apart anyway, so you could put them just as close together on the mobo. I'm obviously missing something, but why is it better? Is it something to do with the traces between the units?

Centralized heat (easier to deal with getting rid of it), and lower latency between CPU and GPU (which was a massive advantage the GameCube had over both the PS2 and Xbox).
 
I think it's a fair assumption on your part. What could be interesting is whether reaching 720p would actually make such a significant difference. I can tell you that if it wasn't for the pixel counters, not many people would know that the COD games were running at sub-HD resolutions.

1080p would be really nice though.

Playing way below all of our TVs' native resolutions blows.
 
Once the Wii U is out, will we know for sure what's inside it after a teardown?

No. We won't know clock speeds or the internals of the GPU and CPU.

But with a full teardown, we will learn more than we know now, once people who know what they're looking at (not me) get a look.

I'd say Nintendo's pre-teardown spoiled like 70% of the surprise, but more can be learned with an up-close look. Actually measuring the dies, finding out more info such as manufacturers or part numbers on the RAM chips, etc.
 
The clutching at straws in this thread is stupid.

Just because some things are shared between the R700 and some later GPUs does not mean that the Wii U GPU being called "R700 based" means that it is some later much more advanced GPU.

No one would call a newer GPU "R700 based" when telling devs about it if it were not.
If they were using a newer architecture they would have said it was Evergreen- or NI-based, etc.

They are not trivially different!



Using "DX10" "DX11" when talking about GPUs is fine, due to the fact they have min feature set requirement and because Nvidia and AMD build their GPUs around it and actively plot it's course (in addition to MS), get over it.

I don't think anyone was actually trying to argue that. It was only a counter-argument to the people claiming that R700-based means it's an exact R700 with no changes whatsoever. R700-based can mean pretty much anything, and there is no way we can know how big the changes are. This is a GPU that will be manufactured somewhere in the 50-100 million unit range over the next 5 years; I'm guessing AMD would be quite willing to put some resources into meeting any specifications Nintendo might have.
 
Gemüsepizza;43148967 said:
Yes. Here is a nice link describing how people can find out what's in a chip:

http://www.ifixit.com/Teardown/Apple+A4+Teardown/2204/1

Frequencies will probably be leaked by some devs; the production process (45nm vs 28nm) can be identified, as can how the GPU is constructed (via X-ray etc.). Then they can compare it to similar chips.

Frequencies haven't been leaked by devs yet, which is troubling, but yeah, hopefully at some point. Then of course there's the issue of determining that the leaks in question are trustworthy. It'll probably boil down to having to decide whether some guy on a forum is trustworthy.

I also think hackers can get the frequencies by running, like, Linux or something. This requires it to be hacked though, and I'm not sure the info would get back to us (for example, I think it's happened on the 3DS, and the only way I found out about it for the PS3 was Alstrong linking some backwater hacker forum post from like 2008). That also requires the Wii U to be hacked, which could take a while.

Also, from your link:

It's quite challenging to identify block-level logic inside a processor, so to identify the GPU we're falling back to software: early benchmarks are showing similar 3D performance to the iPhone, so we're guessing that the iPad uses the same PowerVR SGX 535 GPU.

They use benchmarks to determine what it is. But you can't run benchmarks on a console (unless it's hacked, maybe), so...

Also, I've never seen a console chip X-rayed. That's a lot of trouble and expense. So I don't expect that.

They may be X-rayed, but I bet you have to pay a lot to see it.
 
Also, I've never seen a console chip X-rayed. That's a lot of trouble and expense. So I don't expect that.

They may be X-rayed, but I bet you have to pay a lot to see it.
I have an electron microscope with a digital camera at work. It allows 3k-8k magnification.

But I'm not sure if the chip will fit. We usually use 5mm grids. If I break the chip in two parts, I destroy the structure.
 
No. We won't know clock speeds or the internals of the GPU and CPU.

But with a full teardown, we will learn more than we know now, once people who know what they're looking at (not me) get a look.

I'd say Nintendo's pre-teardown spoiled like 70% of the surprise, but more can be learned with an up-close look. Actually measuring the dies, finding out more info such as manufacturers or part numbers on the RAM chips, etc.

Why is it so difficult though? We know everything that's inside the PS3 and 360.

Gemüsepizza;43148967 said:
Yes. Here is a nice link describing how people can find out what's in a chip:

http://www.ifixit.com/Teardown/Apple+A4+Teardown/2204/1

Frequencies will probably be leaked by some devs; the production process (45nm vs 28nm) can be identified, as can how the GPU is constructed (via X-ray etc.). Then they can compare it to similar chips.

That's great.
 
Frequencies haven't been leaked by devs yet, which is troubling, but yeah, hopefully at some point. Then of course there's the issue of determining that the leaks in question are trustworthy. It'll probably boil down to having to decide whether some guy on a forum is trustworthy.

I also think hackers can get the frequencies by running, like, Linux or something. This requires it to be hacked though, and I'm not sure the info would get back to us (for example, I think it's happened on the 3DS, and the only way I found out about it for the PS3 was Alstrong linking some backwater hacker forum post from like 2008). That also requires the Wii U to be hacked, which could take a while.

Also, from your link:



They use benchmarks to determine what it is. But you can't run benchmarks on a console (unless it's hacked, maybe), so...

Also, I've never seen a console chip X-rayed. That's a lot of trouble and expense. So I don't expect that.

They may be X-rayed, but I bet you have to pay a lot to see it.

Weren't the 3DS memory chips X-rayed? I seem to remember that's how we got it confirmed as 128MB.
 
Well, what worries me is not the CPU inside the MCM, it's the GPU.

[big pics here]

Notice the following:
- The Wii U CPU is considerably smaller than the mobile Core i5 in the first picture.
- The Wii U GPU is considerably smaller than the R700 GPU in the second picture.
- Summing up the first two observations, both units are at most half the size of the Core i5 CPU die and the R700 GPU die. So there is no way it will be more than 2-3 times the performance of this current gen.
From some searching around, that i5 is 81mm^2 at 32nm and the R700 is 256mm^2 at 55nm.

Wii U's CPU looks like it might be around 30mm^2 at 45nm(?). Not sure if Jaguar cores are still rumored for the others, but Anandtech mentions those as 3mm^2 per core at 28nm.

The GPU looks like it's around 160mm^2 at who knows what nm. And who knows what tweaks they've made. We have an idea of wattage from the system, but without knowing more details about the chips it'd be hard to say what they're capable of outside of a loose performance range depending on how efficient everything is or isn't (...what that range would be, I have no clue).
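
As a rough sanity check on comparing die sizes across different processes, here's a quick sketch (Python) that normalizes each die to a common node, assuming area scales with the square of the feature size. That's an idealization real chips never quite reach, and the Wii U GPU's node below is just a guess:

# Crude normalization of the die sizes mentioned above to a single node.
# Assumes ideal area scaling with (node ratio)^2 - real designs fall short of this.

def area_at_node(area_mm2, node_nm, target_nm):
    return area_mm2 * (target_nm / node_nm) ** 2

dies = {
    "mobile Core i5 (81mm^2 @ 32nm)":   (81.0, 32),
    "RV770 / R700 (256mm^2 @ 55nm)":    (256.0, 55),
    "Wii U CPU est. (~30mm^2 @ 45nm)":  (30.0, 45),
    "Wii U GPU est. (~160mm^2 @ 40nm?)": (160.0, 40),  # node is a guess
}

for name, (area, node) in dies.items():
    print(f"{name}: ~{area_at_node(area, node, 45):.0f} mm^2 if built at 45nm")
# Point being: raw die area at different nodes isn't directly comparable.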

For another size comparison, I came across this one on Anandtech. Xenos (left) vs 9400M (right), both at 65nm:

[die comparison image]

Gemüsepizza;43149112 said:
You are right, it's probably a little bit more difficult than I imagined. But I hope we will get a better impression of how capable the Wii U is.
The main useful bits would probably be the structural stuff, like how much space the eDRAM and everything else is taking up, and the process size to give context to the overall die size. I think the main problem with recognizing things for comparison is that they might be customized enough that it's hard to make a good comparison, and/or there's a limited number of comparison points to begin with (like, how many other Radeon GPUs have been examined/diagrammed like that?).

Another example is the A6 teardown (step 11), because the CPU cores were a complete mystery. They can identify most of the other stuff on the chip, but there's only so much they can say about the CPU. Their findings are basically that it looks manually laid out (which apparently can be faster but more expensive to do), so they figure it's a new Apple design as rumored. But without the software benchmarks, that'd just be a guess about the performance from examining the physical structure.
 
Something that I have done.

[screenshot]

Doing this, I have observed that the system uses 2 different types of memory for its main RAM: 2 GDDR5 modules (256MB each) and 2 DDR3 modules (768MB each).

Interesting. Looking at the pictures again, there is a difference in the shape of the memory modules, even when accounting for the perspective. I just think your numbers are off, as well as the memory types. Where is the flash sitting, by the way? Near the front?
 
Means a whole lot. This is still the core design of the system. Console customization does not rebuild the core design; otherwise, why would you buy the design in the first place?
Xenos was based on the R500. Using your logic, that should've resulted in a ~120-150 GFLOPS part.
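
For reference, here's the back-of-envelope math behind those GFLOPS numbers (Python). The Xenos figures are the commonly cited ones (48 unified ALUs at 500 MHz, each doing a vec4+scalar MADD, i.e. 10 flops per clock); the point is just how much the customization changes the result versus the "base" architecture:

# Peak shader throughput: ALUs * flops-per-ALU-per-clock * clock.

def peak_gflops(alus, flops_per_alu_per_clock, clock_mhz):
    return alus * flops_per_alu_per_clock * clock_mhz / 1000.0

# Xenos: 48 unified ALUs, vec4+scalar MADD = 10 flops/clock, 500 MHz
print("Xenos:", peak_gflops(48, 10, 500), "GFLOPS")   # -> 240.0

# A stock R500-class derivation would land around the ~120-150 GFLOPS figure
# mentioned above - far below what the customized part actually delivered.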



Pretty much, although I still think we should believe Rayman is 1080p until specifically told otherwise, as there's footage of the game running at 1080p.

http://wiiunews.at/wp-content/gallery/rayman_legends_1080/rayman-legends-wallpaper-1.jpg <- screen

and

http://gamersyde.com/download_rayman_legends_trailer_fr_-28017_en.html <- video
There's also 1080p media for Nano Assault Neo iirc.
 
I think it's a fair asumption on your part. What could be interesting is if reaching the 720P resolution would make such a significant difference. I could tell you that if it wan't for the pixel counters, not many people would know that the COD games were running in sub HD resolutions.

Well, not knowing what sub-HD is and thinking that the image is a bit blurry are two different things. But with that argument you could also feed people shit and say "well, as long as they don't know there is better food out there...".
 
Actually, no, you might not put them just as close on the PCB - on the MCM you may be looking at bare dies whereas on the PCB those chips would have to sit in their own (large pin/ball-count) packages.

Basically, the proximity of dies improves as follows, depending on the packaging tech:
1. separate packages
2. naked dies on MCM (note: it could also be packages on MCM, but in the more advanced case it's bare dies)
3. 2.5D stacking
4. 3D stacking

Beyond that, yes, the traces themselves have different power properties on the PCB versus on a substrate - on the latter you can transmit a signal using less electricity. As a result, the entire IO segments of the dies consume less power, which improves the overall power characteristics of the die - fewer power lines, lower TDP.

For a general idea, check these two articles:
http://www.eetimes.com/design/programmable-logic/4370596/2D-vs--2-5D-vs--3D-ICs-101
http://www.eetimes.com/design/progr...-ICs-are-more-than-a-stepping-stone-to-3D-ICs

Awesome, thanks.

Frequencies haven't been leaked by devs yet,
Although from what Matt said it's a little under 600. What "a little under" means I don't know, but I would assume between 550 and 600.
 
Actually, no, you might not put them just as close on the PCB - on the MCM you may be looking at bare dies whereas on the PCB those chips would have to sit in their own (large pin/ball-count) packages.

Basically, the proximity of dies improves as follows, depending on the packaging tech:
1. separate packages
2. naked dies on MCM (note: it could also be packages on MCM, but in the more advanced case it's bare dies)
3. 2.5D stacking
4. 3D stacking

Beyond that, yes, the traces themselves have different power properties on the PCB versus on a substrate - on the latter you can transmit a signal using less electricity. As a result, the entire IO segments of the dies consume less power, which improves the overall power characteristics of the die - fewer power lines, lower TDP.

For a general idea, check these two articles:
http://www.eetimes.com/design/programmable-logic/4370596/2D-vs--2-5D-vs--3D-ICs-101
http://www.eetimes.com/design/progr...-ICs-are-more-than-a-stepping-stone-to-3D-ICs

I don't know why people keep trying to attribute high tech to Nintendo's design. Nintendo is optimizing cost, not performance.

These dies do not appear to be on an MCM for any performance reason. The advantage I see is in manufacturing and QC. The dies are fabbed by different vendors, IBM and TSMC(?), and then eventually sent over to Foxconn or whoever does the final build. The middle step is for them to go to an OSAT (outsourced assembly and test). The OSAT tests the dies, assembles working ones together into one package, then sends this single part off. It's cheaper and more efficient, and it makes final product assembly cheaper as well. It also makes temperature control easier, as only one heat sink is needed. In a product with margins as low as a console's, saving a few bucks and some space is a huge driver for design.

The MCM used here is just a PCB. There is no interposer or TSVs or anything that gives you the latency gain you can get by putting the chips close together. The advantage there comes from the type of interconnect more than the proximity. Each of these chips is still bumped and attached to the MCM PCB. They don't have the wire bonding or individual packages, but they are still worlds apart.
 
I don't know why people keep trying to attribute high tech to Nintendo's design. Nintendo is optimizing cost, not performance.

These dies do not appear to be on an MCM for any performance reason. The advantage I see is in manufacturing and QC. The dies are fabbed by different vendors, IBM and TSMC(?), and then eventually sent over to Foxconn or whoever does the final build. The middle step is for them to go to an OSAT (outsourced assembly and test). The OSAT tests the dies, assembles working ones together into one package, then sends this single part off. It's cheaper and more efficient, and it makes final product assembly cheaper as well. It also makes temperature control easier, as only one heat sink is needed. In a product with margins as low as a console's, saving a few bucks and some space is a huge driver for design.

The MCM used here is just a PCB. There is no interposer or TSVs or anything that gives you the latency gain you can get by putting the chips close together. The advantage there comes from the type of interconnect more than the proximity. Each of these chips is still bumped and attached to the MCM PCB. They don't have the wire bonding or individual packages, but they are still worlds apart.

There might be some latency advantage. They have obviously met the under-10ns prerequisite from CPU to GPU/eDRAM that allows Wii emulation. Also, wouldn't they be able to get away with a wider, cheaper bus between the two chips on the MCM? It must be at least comparable to the 30-something GB/s from Xenos to its daughter die.
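
For what it's worth, the link-bandwidth math is simple - (bus width / 8) × transfer rate - and a quick sketch (Python, with purely illustrative numbers, nothing confirmed for Wii U) shows how a wide on-package bus can reach the figure usually quoted for the Xenos daughter-die link without a high clock:

# Simple link bandwidth: (bus width in bits / 8) * transfers per second.
# Widths and rates below are illustrative assumptions, not known Wii U figures.

def bandwidth_gb_s(bus_width_bits, transfers_per_sec):
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# The Xenos -> daughter-die link is usually quoted around 32 GB/s.
print(bandwidth_gb_s(256, 1.0e9), "GB/s")   # 256-bit link at 1 GT/s -> 32 GB/s
print(bandwidth_gb_s(512, 0.5e9), "GB/s")   # same 32 GB/s from a wider, slower link

# On an MCM the extra traces for a wide link are cheap compared to routing them
# across a motherboard, which is why "wider but slower" is plausible there.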
 
Frequencies haven't been leaked by devs yet, which is troubling, but yeah, hopefully at some point. Then of course there's the issue of determining that the leaks in question are trustworthy. It'll probably boil down to having to decide whether some guy on a forum is trustworthy.

I also think hackers can get the frequencies by running, like, Linux or something. This requires it to be hacked though, and I'm not sure the info would get back to us (for example, I think it's happened on the 3DS, and the only way I found out about it for the PS3 was Alstrong linking some backwater hacker forum post from like 2008). That also requires the Wii U to be hacked, which could take a while.

Also, from your link:



They use benchmarks to determine what it is. But you can't run benchmarks on a console (unless it's hacked, maybe), so...

Also, I've never seen a console chip X-rayed. That's a lot of trouble and expense. So I don't expect that.

They may be X-rayed, but I bet you have to pay a lot to see it.

The iFixit fellas either X-rayed or MRI'd the FCRAM in the 3DS to find out what it was, so it does happen lol.
 
The iFixit fellas either X-rayed or MRI'd the FCRAM in the 3DS to find out what it was, so it does happen lol.

Was it behind a paywall though?

I mean you can get reports on stuff, for a few thousand lol.


Wii U's CPU looks like it might be around 30mm^2 at 45nm(?). Not sure if Jaguar cores are still rumored for the others, but Anandtech mentions those as 3mm^2 per core at 28nm.

I've been throwing around 3mm^2 per Jaguar core too, but I'm starting to get pretty skeptical; it just sounds too good to be true, and when I was researching, say, Zacate die sizes, there was nothing really to indicate something so small. A quad-core Zacate with a GPU is over 100mm^2 easy IIRC, I want to say even 140mm^2. Of course it's hard to separate CPU and GPU, etc.

I'm guessing in reality an 8-core Jaguar will take up 70mm^2 or more, possibly significantly more. The 3mm^2 figure may not include any cache or anything.

As it should be. I'd really rather not have my Durango CPU die taking up a whopping 40mm^2 lol. It's not enough.

Edit: A Zacate, which has only 2 Bobcat cores (the predecessor to Jaguar) and 80 SPs, is 75mm^2 on 40nm. http://www.anandtech.com/show/4134/the-brazos-review-amds-e350-supplants-ion-for-miniitx

So yeah, even with a shrink I'm not seeing 3mm^2 per core for Jaguar being realistic.
 
Was it behind a paywall though?

I mean you can get reports on stuff, for a few thousand lol.




I've been throwing around 3mm^2 per Jaguar core too, but I'm starting to get pretty skeptical; it just sounds too good to be true, and when I was researching, say, Zacate die sizes, there was nothing really to indicate something so small. Of course it's hard to separate CPU and GPU, etc.

I'm guessing in reality an 8-core Jaguar will take up 70mm^2 or more. The 3mm^2 figure may not include any cache or anything.

Nope, they said they snuck it into a hospital and did it apparently...I'm guessing they knew someone that worked in that department.
 
I have my PC hooked up to my tv (which isn't massive) and the difference between 720p and 1080p is night and day. It's not just jaggies, there is a ton of texture detail simply being lost running at 720p.

PC games just stretch the native image to fit your HDTV/monitor, which makes the difference much more noticeable; consoles have built-in upscalers which are fantastic and are the main reason why games like Black Ops on PS3, which runs natively at 960x544, can still look decent on a 1080p HDTV.

All Wii U games should be 720p native resolution, which is a jump over PS360 games in itself, as a decent number of PS360 games are SD native, upscaled to 720p and then again to 1080p if you have a 1080p HDTV.

I highly doubt PS4 or 720 will run games at 1080p native either, because the developers would much rather use the increased power for graphical effects and AI/map size improvements than simply increasing the resolution, a difference 99% of casual gamers won't even notice.

All in all, running a PC game at 720p on a 1080p HDTV is a LOT different from a console game being upscaled to 1080p; it looks far better on console.
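
For a sense of scale on the resolution talk, here's the raw pixel math (Python; the Black Ops figure is the one quoted above) - shading/fill cost scales roughly with the number of pixels rendered per frame:

# Pixel counts behind the resolution debate.

resolutions = {
    "Black Ops (PS3)": (960, 544),
    "720p":            (1280, 720),
    "1080p":           (1920, 1080),
}

base = 1280 * 720
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px:,} pixels ({px / base:.2f}x the cost of 720p)")
# 1080p is 2.25x the pixels of 720p, which is why devs often prefer to spend
# that budget on effects instead of resolution.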
 
I've been throwing around 3mm^2 per Jaguar core too, but I'm starting to get pretty skeptical; it just sounds too good to be true, and when I was researching, say, Zacate die sizes, there was nothing really to indicate something so small. A quad-core Zacate with a GPU is over 100mm^2 easy IIRC, I want to say even 140mm^2. Of course it's hard to separate CPU and GPU, etc.

I'm guessing in reality an 8-core Jaguar will take up 70mm^2 or more, possibly significantly more. The 3mm^2 figure may not include any cache or anything.

As it should be. I'd really rather not have my Durango CPU die taking up a whopping 40mm^2 lol. It's not enough.

Edit: A Zacate, which has only 2 Bobcat cores (the predecessor to Jaguar) and 80 SPs, is 75mm^2 on 40nm. http://www.anandtech.com/show/4134/the-brazos-review-amds-e350-supplants-ion-for-miniitx

So yeah, even with a shrink I'm not seeing 3mm^2 per core for Jaguar being realistic.
I'd guess the 3mm^2 figure is entirely just the core, and yeah, you can't really use the integrated ones for comparison without knowing what part is CPU vs GPU and everything else in there. I found this pic (although I also found another site saying the AMD one isn't an actual shot of the die, but I'd guess the Intel one is real at least):
4.6mm^2 at 40nm for Bobcat and 9.7mm^2 at 45nm for Atom (or 8mm^2 and 14.1mm^2 if you include L2 cache).
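
Out of curiosity, here's the same idealized shrink math applied to those figures (Python; assumes area scales with the square of the node, which real designs don't fully achieve, and treats Jaguar as a straight Bobcat shrink, which it isn't):

# Scale the Bobcat figures above from 40nm down to 28nm to sanity-check
# the 3mm^2-per-Jaguar-core claim. Ideal scaling only - a rough upper bound.

def scaled_area(area_mm2, from_nm, to_nm):
    return area_mm2 * (to_nm / from_nm) ** 2

bobcat_core = 4.6        # mm^2 at 40nm, core only (figure from the post above)
bobcat_with_l2 = 8.0     # mm^2 at 40nm including L2 (figure from the post above)

print("core only @28nm:", round(scaled_area(bobcat_core, 40, 28), 1), "mm^2")     # ~2.3
print("core + L2 @28nm:", round(scaled_area(bobcat_with_l2, 40, 28), 1), "mm^2")  # ~3.9

# So ~3mm^2 looks plausible for a bare core, but an 8-core cluster with cache
# would still be several tens of mm^2 - roughly in line with the 70mm^2+ guess.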
 
The GameCube had a very strange design to it: much smaller than the PS2, more powerful, and $100 cheaper. I wouldn't look at size for Nintendo consoles to judge how powerful a console is.
Was the GameCube much smaller? Thinking about the total amount of circuit boards etc. In the GameCube they stack on top of each other.
 
Was the GameCube much smaller? Thinking about the total amount of circuit boards etc. In the GameCube they stack on top of each other.


The GameCube's mobo isn't that much smaller than the original PS2's. I have taken both apart on a few occasions and was surprised how close in size they actually are.

Also, a few things to remember are that the PS2 has a larger drive, an HDD bay, and an internal PSU.
 
Tech has changed considerably in the last 11 years, so the GameCube/PS2 comparison isn't a good example. Keeping consoles cool was a much easier task back then.
 
PC games just stretch the native image to fit your HDTV/monitor, which makes the difference much more noticeable; consoles have built-in upscalers which are fantastic and are the main reason why games like Black Ops on PS3, which runs natively at 960x544, can still look decent on a 1080p HDTV.

All Wii U games should be 720p native resolution, which is a jump over PS360 games in itself, as a decent number of PS360 games are SD native, upscaled to 720p and then again to 1080p if you have a 1080p HDTV.

I highly doubt PS4 or 720 will run games at 1080p native either, because the developers would much rather use the increased power for graphical effects and AI/map size improvements than simply increasing the resolution, a difference 99% of casual gamers won't even notice.

All in all, running a PC game at 720p on a 1080p HDTV is a LOT different from a console game being upscaled to 1080p; it looks far better on console.

Not everyone uses scaling on their GPUs, nor is it necessary if you let your display handle it or, as in my case, use a hardware-based scaler. Also, PC-based and external hardware scalers shit on anything you're going to find in a TV, let alone a monitor, as they often cost the price of a high-end TV/display. The quality spectrum for scaling on PC allows for more than a console, yet if you're on the low end it's going to be garbage no matter what.

Gamers, especially console gamers aware of input lag, should tell devs to fuck off with upscaling. No matter how you slice it, it adds input lag and is not genuine native res.
 
Not everyone uses scaling on their GPUs, nor is it necessary if you let your display handle it or, as in my case, use a hardware-based scaler. Also, PC-based and external hardware scalers shit on anything you're going to find in a TV, let alone a monitor, as they often cost the price of a high-end TV/display. The quality spectrum for scaling on PC allows for more than a console, yet if you're on the low end it's going to be garbage no matter what.

Gamers, especially console gamers aware of input lag, should tell devs to fuck off with upscaling. No matter how you slice it, it adds input lag and is not genuine native res.

Most TVs may wind up processing any video input regardless, and you can't control this. Upscaling really does not add any real noticeable, measurable lag. I doubt you could produce any studies or tests that identify, in ns or ms, how much upscaling done by an HDTV or console adds to input lag.
 
What does this mean, are you an insider?

Very doubtful. 80% of "insiders" on GAF are just some asshole on a keyboard in a dark room pulling out random things to make it seem like they know something. I could say that I am an insider and (if I didn't make this post) nobody would be able to tell if I am telling the truth.
 
Most TVs may wind up processing any video input regardless, and you can't control this. Upscaling really does not add any real noticeable, measurable lag. I doubt you could produce any studies or tests that identify, in ns or ms, how much upscaling done by an HDTV or console adds to input lag.

Why are we dealing with ns when something that small isn't a factor? In terms of ms, though, it's obvious when you buy displays what their max rates are. The same goes for GPUs or the drivers that induce these things. So I don't need a study when a simple search for Nvidia or ATI cards and input lag can inform you far more than some nonexistent studies you want me to show you.

The reason you have a hardware scaler or something external take over is so that it feeds your TV a signal it doesn't have to process further.

You do control these things; it's called not buying crap or substandard products. You're dealing with this problem after the fact. I'm saying that, as a gamer, if you're buying a display that has this potential issue, a console or a PC isn't going to fix it.
 
Very doubtful. 80% of "insiders" on GAF are just some asshole on a keyboard in a dark room pulling out random things to make it seem like they know something. I could say that I am an insider and (if I didn't make this post) nobody would be able to tell if I am telling the truth.

You'd be very wrong though, although Wrathful (always preferred that nick Doug :P) works for another of the big three, not Nintendo. :P
 
Very doubtful. 80% of "insiders" on GAF are just some asshole on a keyboard in a dark room pulling out random things to make it seem like they know something. I could say that I am an insider and (if I didn't make this post) nobody would be able to tell if I am telling the truth.

I'm pretty sure you can get banned if you say you're an insider when you're not; don't people have to prove it to the mods or something?

Even though I do agree that some people take people's comments on GAF and assume they are confirmed.
 
I'm pretty sure you can get banned if you say you're an insider when you're not; don't people have to prove it to the mods or something?

Even though I do agree that some people take people's comments on GAF and assume they are confirmed.

I think the admins/mods know Drinky's bona fides quite well by now; he was, after all, an admin here for a long time. :P
 
Yes, admin, and banned? You sure about that?
another one falls into the trap :P
 