Not even the most hopeful here are suggesting that. But I get your point, especially at 45 and 40nm processes rather than 28 like the other two, that's two fabrication process generations away.
Regarding the Tori Tori dev on texture compression: as we already discussed dozens of pages ago, every modern Radeon (and, separately, GeForce) has something like that. The Wii U may well have such a feature, but that developer didn't say whether the other two lack it (and I think that team only works on Wii U anyway). Really, Nintendo would be in the GPU business if half of this wishing were true.
Well, there's also something that has to be pointed out. When people talk about "Nintendo specific features", they don't mean graphical features only known or discovered by Nintendo, but techniques PRIORITIZED by Nintendo.
In other words, when either AMD or Nvidia launches a line of GPUs, they have to be VERY cautious about what the industry expects, and the features that will land in the next DirectX iteration largely determine the whole feature set of a given GPU. AMD knows this very well: they were the ones pushing for tessellation, and they shipped three generations of GPUs with a tessellation unit that wasn't used beyond some demos made by ATI themselves.
Nintendo, as a company that owns the whole "production chain" (in the sense that both the hardware and the software APIs are made by them), can ask for customizations that no one would make in the PC space, for fear of wasting silicon on something not implemented in the general graphics API used by everyone.
They may want to prioritize the graphical effects they feel are most important, making a "hardware unit" specialized in them, and that goes from texture compression algorithms to some shaders implemented directly in the GPU.
The 3DS is a pretty clear example of that. To gain efficiency they ditched the programmability of the fragment shaders entirely and coupled the vertex shaders with a number of fixed functions that they considered the most important and good enough to produce good graphics. They ended up with a solution that is much more limited in programmability, but much better in graphical power per transistor.
Nintendo loves to customize their hardware, but on the other hand, all the 3rd parties want the most PC-like solution possible, to reduce the difficulty of porting their games between platforms.
Regarding the dual-engine issue, I don't know if the Wii U has it, but what I have read is that the GC had 3 rasterizers (which would match its higher polygon output compared to the original Xbox, if the Xbox only had one rasterizer to work with), and the same goes for the Wii.
If the Wii U emulates the Wii at the same clocks (and this has been confirmed), and if there was an optimal case where each GC rasterizer could rasterize 1 polygon per clock, then there's no way the Wii U has fewer than 3 rasterizers (modern rasterizers come closer to the 1 tri/clk ideal, but if you want more than 1 tri/clk you need more than one rasterizer).
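Just to make the arithmetic behind that argument explicit, here's a back-of-the-envelope sketch. The 3-rasterizer count is the speculation from this thread, not a confirmed spec; the only hard number is Flipper's 162 MHz clock, and the "1 tri/clk per rasterizer" figure is the idealized best case, not a sustained in-game rate:

```python
# Ideal-case setup rate: each rasterizer emits 1 triangle per clock,
# so peak throughput is simply rasterizers * clock.

def peak_triangles_per_second(rasterizers: int, clock_hz: float) -> float:
    """Theoretical best-case triangle setup rate (1 tri/clk per rasterizer)."""
    return rasterizers * clock_hz

# GameCube's Flipper GPU ran at 162 MHz; 3 rasterizers is the
# figure discussed above (speculation, not a confirmed spec).
gc_peak = peak_triangles_per_second(3, 162e6)
print(f"GC ideal peak: {gc_peak / 1e6:.0f} M tris/s")  # 486 M tris/s
```

The point of the argument is the converse: a GPU capped at 1 tri/clk per rasterizer, running at the same clock, can only match that ideal peak if it also has at least 3 rasterizers.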
This could be something interesting and worth a more in-depth discussion.
Regards!