There's no need to use 1T-SRAM for backwards compatibility: if they have 32MB of EDRAM, they can use that to emulate the low latency of 1T-SRAM as well.
I never found that MoSys rumour too convincing. Could anybody link to the source? lherre felt it was most probable there would be 1 GB in the final unit, which probably meant there wasn't a significant amount of 1T-SRAM (or anything else exotic) in the devkit. I wonder if that's going to stay the same.
Sorry, I thought that MoSys comment was from an actual investor conference call. The real source (a post by some guy on the Beyond3D forum) is quite a lot less reliable. I can't claim to speak for the characteristics of modern eDRAM versus 1T-SRAM, but I'm a little wary of rumours regarding the use of eDRAM in the console, given the potential confusion between eDRAM used as a cache in the CPU (confirmed by IBM) and eDRAM used as a framebuffer in the GPU (as yet unconfirmed, but likely).
I should also clarify that I was merely speculating about the use of the framebuffer as VRAM in Wii mode; I don't know whether it's technically possible (probably not).
From what I've read, the PS3's CPU can actually access the VRAM. It's just not very fast, and it would have been quite an expense to make it fast.
Yeah, it's obviously simpler from a technical perspective to have a single unified memory pool. Not impossible to have two, though, especially if the CPU is designed specifically for the purpose (which CELL wasn't).
There's no such thing as 100% reliability in wireless technology, and Nintendo doesn't even need it. When transmitting what is, for practical purposes, video, the odd bad pixel, out-of-order frame or bit of lost data is gone from the screen within the blink of an eye. Nintendo may have high standards, but it doesn't take 100% reliability for this to work. Far from it, even.
The WirelessHD standard pretty much defines everything Nintendo needs here. It uses around 7 GHz of spectrum up in the 60 GHz band (where it doesn't interfere with other common radios), it specifies a video codec (although Nintendo may well customize that), and it's even rated for 10 meters without line of sight. The specification was finalized in 2008, which is why I called it a solved problem.
Wireless equipment manufacturers always, always quote best-case scenarios for performance. It's basically the first rule of wireless telecommunications. Companies and standards setting bodies* use the results of lab experiments, or even theoretical calculations, to advertise their technologies, because it generally sounds a hell of a lot better than the truth. You know why you haven't seen a load of TVs and set-top boxes with technologies like WirelessHD and WHDI in the real world? Because the small number of devices released with these technologies don't work nearly as well as advertised. I know people who work in the high-end AV industry, and it's pretty well accepted that these technologies are unreliable, that they don't work over the distances advertised, and that, given the very short ranges over which they do work, there's zero advantage over just using a cable. This corresponds pretty much exactly to what I'd expect from what I've been taught in telecommunications and information theory. There also remains the considerable issue that, even if they did work perfectly, these standards are for transmission between static devices, which is a completely different thing from transmission to a moving object like a Wii U pad. All other things being equal, you can expect a wireless transmission to a moving device to be an order of magnitude less reliable than one to a static device.
To illustrate what I meant by 100% reliability, let's do a few calculations. Assume that Nintendo is using a simple wireless technology to transfer the video to the Wii U pad with a bit error rate of 1 in 1000, that is, "99.9% reliability" (this is actually absurdly optimistic in real-world terms, but what the hell). We'll also assume that the bit errors are evenly distributed (also highly optimistic). A single frame for the Wii U pad will be 854x480x24 bits, i.e. 9,838,080 bits. We can therefore estimate that there will be 9,838 bit errors per frame. I'll be generous once again in assuming that people will only notice changes in the two most significant bits of a colour component of any given pixel. A change to the most significant bit would be, for example, the change from bright red to dark red, which we can classify as very noticeable, and a change to the second most significant bit would be half that, which we can classify as just noticeable. Doing the maths, in each frame we're going to get 409 very noticeable pixel errors and a further 409 noticeable pixel errors. If that's not bad enough, consider it in terms of the number of errors per second. In the very optimistic scenario I'm describing, we're going to see a total of 24,558 noticeable pixel errors every single second, of which 12,273 will be very noticeable. Included in that figure are an extra 12 pixels every second that will have bit errors in the most significant bits of two colour components, which I think I can fairly describe as jarringly noticeable. On top of that, roughly once every 80 seconds we'll get a pixel with the most significant bit flipped in all three of its colour components, which I can't even think of a name for.
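For anyone who wants to check or play with those numbers, here's a quick Python sketch of the same back-of-envelope estimate. The 30 fps frame rate and the deliberately conservative counting (one "very noticeable" and one "noticeable" bit position per 24-bit pixel) are my own assumptions, so treat the outputs as ballpark figures rather than exact matches for the totals above:

```python
# Back-of-envelope reproduction of the pixel-error estimate above.
# Assumptions: 854x480 @ 24 bpp, 30 fps, a uniform bit error rate of 1e-3,
# and independent bit errors. None of this is confirmed hardware detail.

WIDTH, HEIGHT, BPP = 854, 480, 24
FPS = 30                             # assumed frame rate
BER = 1e-3                           # "99.9% reliability"

pixels_per_frame = WIDTH * HEIGHT               # 409,920 pixels
bits_per_frame = pixels_per_frame * BPP         # 9,838,080 bits
bit_errors_per_frame = bits_per_frame * BER     # ~9,838 flipped bits

# Conservatively treat only one bit position in 24 as "very noticeable"
# (a most significant bit) and one as "noticeable" (a second bit).
very_noticeable_per_frame = bit_errors_per_frame / 24    # ~410
noticeable_per_frame = bit_errors_per_frame / 24         # another ~410

print(f"bit errors per frame:        {bit_errors_per_frame:,.0f}")
print(f"very noticeable per second:  {very_noticeable_per_frame * FPS:,.0f}")
print(f"noticeable per second:       {noticeable_per_frame * FPS:,.0f}")

# Rarer, uglier events: a pixel with the MSBs of a given pair of colour
# components flipped in the same frame, or with all three MSBs flipped.
two_msb_hits_per_sec = pixels_per_frame * BER**2 * FPS   # ~12 per second
all_three_per_sec = pixels_per_frame * BER**3 * FPS
print(f"two-component MSB hits per second: {two_msb_hits_per_sec:.1f}")
print(f"all three MSBs flipped: roughly one every {1 / all_three_per_sec:.0f} s")
```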
My point is that even with a seemingly high data accuracy rate like 99.9%, you get an image which is totally unacceptable in a consumer product. In order to get an image that I'd consider just about acceptable (no more than 1 pixel with a most significant bit flipped every 10 seconds), you'll need an effective data accuracy rate of 99.9999997% or higher, even in the worst-case scenario for interference. Of course you can decrease the effective error rate by including error-correcting codes with the data, but an awful lot of redundancy will be needed in this case, probably pushing the actual amount of data transferred closer to 400Mbps than the 300Mbps of the video data itself. There is a possibility that the video will be compressed, but it would have to be a very basic compression scheme to avoid a noticeably laggy screen, so the compression ratios won't be anywhere near what we're used to with H.264 and the like.
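Here's the same sort of sketch for the raw bitrate and that accuracy target, again assuming 854x480 at 24 bits per pixel and 30 fps (my assumption, not a confirmed figure):

```python
# Raw bitrate of the uncompressed stream and the bit error rate needed to
# hit "at most one MSB-flipped pixel every 10 seconds". Same assumptions
# as before: 854x480 @ 24 bpp, 30 fps.

WIDTH, HEIGHT, BPP, FPS = 854, 480, 24, 30

raw_bitrate = WIDTH * HEIGHT * BPP * FPS           # bits per second
print(f"uncompressed video: {raw_bitrate / 1e6:.0f} Mbps")     # ~295 Mbps

# Each pixel has three colour-component MSBs that must arrive intact.
msb_bits_per_second = WIDTH * HEIGHT * 3 * FPS     # ~36.9 million per second
target_errors_per_second = 1 / 10                  # one bad pixel per 10 s
required_ber = target_errors_per_second / msb_bits_per_second
print(f"required bit error rate: {required_ber:.1e}")           # ~2.7e-9
print(f"required accuracy: {(1 - required_ber) * 100:.7f}%")    # ~99.9999997%
```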
While wireless video streaming devices may technically exist, doing what Nintendo need to do (stream video to a moving device with no noticeable lag and good picture quality, even under very heavy interference) is far from a "solved problem"; in fact, it's probably one of the most difficult problems in short-range wireless telecommunications. I trust Nintendo have managed it for an 854x480 screen, because they wouldn't have announced the product otherwise, but I don't doubt that the main reason they didn't announce support for multiple Wii U controllers is that there simply isn't feasible technology to do the same for multiple screens at a time.
*I'll exempt IEEE and ITU from my criticism of standards setting bodies, as they're fairly rigorous and transparent in terms of their procedures. You still won't get quite the quoted speed on something like WiFi most of the time, but it's nowhere near the disparity you get with technologies designed by private groups.
With the Wii U, this isn't as trivial as it is with normal graphics cards. The Wii U's framebuffer will probably be of a fixed size (32MB of EDRAM, we've heard), with a special range dedicated to the tablet screen. In that scenario you can't 'just' allocate more room like you do on a normal graphics card, as the location of each pixel on each screen is predetermined in hardware. It's not impossible to get this to work, but fitting the extra screen(s) into EDRAM will cause a performance hit.
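To put rough numbers on that, here's an illustrative eDRAM budget in Python. The 720p main target, the 4 bytes per pixel and the idea that a depth buffer sits in eDRAM too are all guesses on my part; the actual Wii U memory layout obviously isn't public:

```python
# Purely illustrative eDRAM budget. All resolutions and formats here are
# assumptions, not known Wii U specifications.

EDRAM_MB = 32          # rumoured eDRAM pool
BYTES_PER_PIXEL = 4    # e.g. 8:8:8:8 colour, or a 32-bit depth value

def buffer_mb(width, height, bytes_per_pixel=BYTES_PER_PIXEL):
    """Size of a single screen-sized buffer in MB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

main_colour = buffer_mb(1280, 720)   # ~3.5 MB, assumed 720p colour buffer
main_depth  = buffer_mb(1280, 720)   # ~3.5 MB, assumed 32-bit depth buffer
pad_colour  = buffer_mb(854, 480)    # ~1.6 MB per tablet screen

used = main_colour + main_depth + pad_colour
print(f"720p colour + depth + one pad screen: {used:.1f} MB")
print(f"each additional pad screen:           {pad_colour:.1f} MB")
print(f"left over for everything else:        {EDRAM_MB - used:.1f} MB")
```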
I should have been clearer there. I meant that Nintendo could have changed the specs to include a larger framebuffer, which would still have been quite feasible after E3, when there was over a year to go until launch.