While on the topic,
Blu, how exactly was the Wii GPU different, if at all, from the GameCube's?
Was Hollywood simply a souped-up Flipper?
What work did ATI do on the GPU to warrant renaming it? Was it a die shrink? Did they integrate something else? Use a newer manufacturing process?
Outside of the generally known "it's a faster GameCube with more RAM", I've always wondered how exactly the Wii's guts differed. That is, not how they were similar, but how they were different.
From my very worn-out memory (read: take it with a grain of salt):
Hollywood's TEV had a few advances (and, surprisingly, a few minor, unpopular Flipper features got dropped too):

- Extra parameters to the per-stage shading equation - IIRC an extra interpolant feeding the equation, which turned out quite handy.
- Z-buffer shadow maps, plus a texture sampling mode for their easy use (very few Wii games ever used it, but it was used).
- An extra matrix stage in the fixed-function TnL block.
- Anisotropic filtering up to 4x. Curiously, this was meant to be in Flipper too, but was broken in the final Flipper silicon and thus unusable; IIRC, Corruption used it on some very tricky oblique textures in Sky City, to gorgeous effect.
- Last but not least, the dual memory-pool interface - the original Flipper could access only the 1T-SRAM pool.
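To unpack what "extra parameters to the per-stage shading equation" means: each TEV stage evaluates a small fixed combine of four selectable inputs, and an extra interpolant is one more value that can be routed into that combine. Below is a minimal sketch of the commonly documented form of the per-stage equation, written in plain C rather than the actual GX API - the struct and function names are mine, purely illustrative:

```c
#include <stdio.h>

/* Sketch of the fixed-function per-stage TEV combine as commonly
 * documented for Flipper/Hollywood. The real hardware evaluates this
 * per RGB channel, with a separate alpha pipeline; one scalar channel
 * is shown here for clarity. */

typedef struct {
    float a, b, c, d; /* inputs selected from texture/raster/constant/previous */
    float bias;       /* -0.5, 0.0 or +0.5 on the real hardware */
    float scale;      /* 0.5, 1.0, 2.0 or 4.0 on the real hardware */
    int   subtract;   /* op select: 0 = add, 1 = subtract */
} TevStage;

static float clampf(float x, float lo, float hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* One stage: out = clamp((d op lerp(a, b, c) + bias) * scale) */
static float tev_combine(const TevStage *s)
{
    float lerp = (1.0f - s->c) * s->a + s->c * s->b;
    float out  = s->subtract ? s->d - lerp : s->d + lerp;
    return clampf((out + s->bias) * s->scale, 0.0f, 1.0f);
}

int main(void)
{
    /* The classic "modulate" setup (texture * vertex colour) for one
     * channel: with a = 0 and d = 0, the lerp collapses to c * b,
     * i.e. rasterised colour times texel. */
    TevStage s = { .a = 0.0f, .b = 0.8f, .c = 0.5f, .d = 0.0f,
                   .bias = 0.0f, .scale = 1.0f, .subtract = 0 };
    printf("stage output: %.2f\n", tev_combine(&s)); /* prints 0.40 */
    return 0;
}
```

With only four input slots per stage, multi-term effects routinely burned extra stages just to route one more value into the chain, which is presumably why an additional interpolant was genuinely handy.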