You can take as a fact that Xbox had S3TC.
S3TC was implemented as a compliance requirement for DirectX 6:
Source:
http://en.wikipedia.org/wiki/S3_Texture_Compression
So of course a DirectX 8 part is fully compatible with it.
It is not the same as the S3 Graphics compression the GameCube used; the texture compression that Nintendo developed with S3 is exclusive to Nintendo hardware. The S3TC that was implemented in DirectX 6 was for Windows, not for the original Xbox, which used custom firmware and an API created in collaboration between NVIDIA and Microsoft.
http://en.wikipedia.org/wiki/Xbox_(console)
The Xbox runs a custom operating system which was once believed to be a modified version of the Windows 2000 kernel. It exposes APIs similar to APIs found in Microsoft Windows, such as DirectX 8.1. The system software may have been based on the Windows NT architecture that powered Windows 2000; it is not a modified version of either.
DirectX 8.0a, 4.08.00.0400 (RC14): last supported version for Windows 95 (February 5, 2001)
DirectX 8.1, 4.08.01.0810: Windows XP, Windows XP SP1, Windows Server 2003, and Xbox exclusive (October 25, 2001)
DirectX 8.1, 4.08.01.0881 (RC7): the version for the down-level operating systems (Windows 98, Windows Me and Windows 2000)
So it is not clear which APIs were actually available on the original Xbox, because its software was custom made. If the S3TC that the GameCube used had been available with DirectX 6 on Windows, it would have been available on the Dreamcast as well, which it is not, and I am fairly certain the Dreamcast was using a custom Windows CE firmware.
DirectX 6.0, 4.06.00.0318 (RC3): Windows CE as implemented on Dreamcast (August 7, 1998)
Maybe the original Xbox was using some kind of S3TC, but nowhere near as efficiently as the GameCube, because the Xbox GPU had other priorities. Plus the GameCube's memory was more efficient, producing double the amount of fillrate data compared to the Xbox.
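For reference, here is a minimal sketch of where the commonly quoted 6:1 ratio for S3TC (the DXT1 variant) comes from: a 4x4 block of 24-bit texels is reduced to two 16-bit reference colours plus a 2-bit index per texel. These numbers describe the standard DXT1 block layout, not anything specific to the Xbox or GameCube implementations.

```python
# Sketch of the standard S3TC/DXT1 block layout, just to show where the
# often-quoted 6:1 texture compression ratio comes from.

TEXELS_PER_BLOCK = 4 * 4                     # DXT1 works on 4x4 texel blocks
UNCOMPRESSED_BYTES = TEXELS_PER_BLOCK * 3    # 24-bit RGB, 3 bytes per texel = 48 bytes

COLOR_ENDPOINT_BYTES = 2 * 2                 # two RGB565 reference colours, 16 bits each
INDEX_BYTES = TEXELS_PER_BLOCK * 2 // 8      # 2-bit index per texel = 4 bytes
COMPRESSED_BYTES = COLOR_ENDPOINT_BYTES + INDEX_BYTES   # 8 bytes per block

print(f"{UNCOMPRESSED_BYTES} bytes -> {COMPRESSED_BYTES} bytes "
      f"({UNCOMPRESSED_BYTES / COMPRESSED_BYTES:.0f}:1)")   # 48 -> 8, i.e. 6:1
```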
http://en.wikipedia.org/wiki/DirectX
There is no question that the GC's 1T-SRAM main memory is much more efficient than the Xbox's DDR SDRAM, as the latency of the GC's 1T-SRAM is 10 ns, while the average latency of 200 MHz DDR SDRAM is estimated to be around 30 ns.
Memory efficiency is largely driven by data streaming. What that means is that developers can optimize their data accesses so that they are more linear and thus suffer less latency. Latency is highest on the first page fetch, and lower on subsequent linear accesses. It is random accesses that drive memory efficiency down, as more latency is introduced by all the new page fetches.
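As a rough illustration of that point, here is a toy model. The 30 ns cost for a fresh page fetch and 10 ns for a subsequent in-page access are assumptions chosen only to echo the latencies quoted above, not measured figures; the takeaway is just that the more linear the access pattern, the closer the average latency gets to the streaming case.

```python
# Toy model: average memory latency as a function of how linear the access
# pattern is. 30 ns per new page fetch and 10 ns per in-page (streaming)
# access are illustrative assumptions, not measured console specs.

PAGE_MISS_NS = 30.0   # first access to a new page
PAGE_HIT_NS = 10.0    # subsequent linear accesses within the open page

for linear_fraction in (0.2, 0.5, 0.8, 0.95):
    avg_ns = linear_fraction * PAGE_HIT_NS + (1 - linear_fraction) * PAGE_MISS_NS
    print(f"{linear_fraction:.0%} linear accesses -> average latency {avg_ns:.1f} ns")
```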
It has been brought up that DDR SDRAM is only 65 percent effective, but that figure comes from comparing an SDRAM-based GeForce2 graphics card with a DDR-based GeForce2 graphics card. The Xbox's main memory should be around 75 percent effective if one considers that the GeForce3 has a much better memory controller than the GeForce2 chipsets. You can see the efficiency of the GeForce3 memory controller versus the GeForce2 in AnandTech's GeForce3 review, where fill rate is compared, which is a good measure of memory effectiveness. That comparison does not just highlight the effectiveness of the GeForce3's Lightspeed Memory Architecture (memory controller), but also the effectiveness of the texture cache and the visibility subsystem.
The GC's 1T-SRAM main memory is speculated to be 90 percent effective. A significant difference between the two memories!
So the GameCube was built to take advantage of its strengths.
Frame Buffer and Z-Buffer Accesses
The GC has a 2 MB on-chip frame (draw) buffer and z-buffer, so reads and writes to that on-chip buffer do not affect the main memory bandwidth. The GC still has to send the frame buffer out to memory for display each frame.
The Xbox stores its frame buffer and z-buffer in main memory, and it supports z-buffer compression at a 4:1 ratio, so a 32-bit z-buffer value is only 8 bits in size when compressed. The decompression and compression of z-buffer data, to and from memory, is handled automatically by the Xbox GPU.
Xbox: 640 x 480 (resolution) x 5 bytes (24-bit frame buffer write (3 bytes) + z-buffer read (1 byte) + z-buffer write (1 byte)) x 3 (overdraw) x 60 FPS = ~277 MB/sec or 0.277 GB/sec. So 4.05 GB/sec - 0.277 GB/sec = 3.77 GB/sec.
GC: Only has to write out the frame buffer each frame, which at 60 FPS is roughly 55 MB/sec or 0.055 GB/sec. So 1.44 GB/sec - 0.055 GB/sec = 1.39 GB/sec.
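A quick sketch of that arithmetic, for anyone who wants to check it. The 4.05 GB/s (Xbox) and 1.44 GB/s (GC) effective main-memory figures are taken from the segatech comparison as given, not derived here.

```python
# Reproduces the frame-buffer / z-buffer traffic arithmetic quoted above.

WIDTH, HEIGHT, FPS = 640, 480, 60

# Xbox: 3-byte colour write + 1-byte compressed z read + 1-byte compressed z
# write, all hitting main memory, with the assumed overdraw factor of 3.
xbox_gbs = WIDTH * HEIGHT * (3 + 1 + 1) * 3 * FPS / 1e9      # ~0.277 GB/s
print(f"Xbox frame/z traffic: {xbox_gbs:.3f} GB/s, leaving {4.05 - xbox_gbs:.3f} GB/s")

# GC: colour and z live in the 2 MB on-chip buffer, so main memory only sees
# the final 3-byte-per-pixel copy-out of the finished frame.
gc_gbs = WIDTH * HEIGHT * 3 * FPS / 1e9                      # ~0.055 GB/s
print(f"GC copy-out traffic:  {gc_gbs:.3f} GB/s, leaving {1.44 - gc_gbs:.3f} GB/s")
```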
So if the original Xbox had S3TC, why did the GPU have to do that extra compression and decompression, hurting framebuffer performance?
What is known:
The GC's cache is 4 to 8 times larger than the Xbox's (which is either 128 KB or 256 KB).
The Xbox can feed its cache with 3 times more data per second than the GC.
There is also speculation that the GC cache can hold compressed textures and the Xbox cache cannot; if so, that can make a huge difference in the comparison, as with a 6:1 compression ratio the cache can hold 6 times more data. 6 MB of data for the GC compared to 128 KB or 256 KB for the Xbox is a huge difference.
Since there is so much speculation on the two different caches for each GPU, and there is no clear calculation for an accurate comparison, the cache will not be included in our result below.
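Even though the caches are left out of the result, the 6:1 speculation is easy to put into numbers. This is a sketch only: the 1 MB figure for the GC texture cache is inferred from the "4 to 8 times larger than 128 KB or 256 KB" statement above, and the 6:1 ratio is assumed, not confirmed for either machine.

```python
# Speculative effective texture-cache capacity, assuming (as the post above
# speculates) that the GC's ~1 MB cache holds S3TC-compressed textures at 6:1
# while the Xbox's 128-256 KB cache holds only uncompressed texels.

GC_CACHE_KB = 1024          # inferred: 4-8x the Xbox's 128-256 KB
XBOX_CACHE_KB = (128, 256)  # exact size disputed, so both are shown
S3TC_RATIO = 6              # assumed 6:1 compression

print(f"GC effective capacity:   {GC_CACHE_KB * S3TC_RATIO / 1024:.0f} MB of texel data")
for kb in XBOX_CACHE_KB:
    print(f"Xbox effective capacity: {kb} KB of texel data")
```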
That's why it is not clear whether the Xbox had the same S3TC technology as the GameCube or a different approach to the matter.
http://segatech.com/technical/consolecompare2/
Anyway, that is not the point. The point is that Latte has not yet been put to the test by taking advantage of its GPGPU features; the differences in power would become visible in multiplatform titles that hit all three consoles (PS4, Xbox One, Wii U) at the same time.
Do you have those screenshots yet?
I am trying to find a good video capture card that supports 1080p at 60 fps. I was looking into the Digital Foundry equipment, and the funny part is that they do not say what equipment they are using, so I could at least buy the same gear and make the same comparison. All the cards I can find here in my country do 720p at 60 fps or 1080i at 30 fps. If I am going to make an investment for my personal use, I am going to buy the best value for my money. My friend's capture card tops out at 720p, so posting something captured with that would be stupid and dishonest on my part. Be a little more patient; I have not forgotten you.
Why is it so hard to comprehend that eDRAM running at 1 TB/s isn't nearly the same as the eDRAM on either the Xbox One or the Wii U? We're talking about 1 TB/s vs ~200 GB/s at most.
That is exactly my point: WE DO NOT KNOW what bandwidth the Wii U's eDRAM is producing. Maybe it is 100, 150, 200, or 1000 GB/s for all I know; that's why we need to see more games instead of judging early ports, for god's sake. Some of the posters here do not want to find out the reality about Latte; they would rather bash the Wii U as a shitty, low-tech console. I only want to see what the machine is capable of, not wage console wars or graphics contests so that I can feel vindicated in my purchase of the strongest console.
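To show why the figure is genuinely unknown rather than just unpublished: eDRAM bandwidth is essentially bus width times clock times transfers per cycle, and the Wii U's internal eDRAM bus width has not been confirmed. The 550 MHz clock and the bus widths below are assumptions made purely for the sake of the arithmetic, not leaked specs.

```python
# Illustrative only: how the assumed internal bus width swings the eDRAM
# bandwidth figure. 550 MHz and one transfer per clock are assumptions.

GPU_CLOCK_HZ = 550e6
for bus_bits in (512, 1024, 2048, 4096):
    gbps = bus_bits / 8 * GPU_CLOCK_HZ / 1e9   # bytes per clock x clocks per second
    print(f"{bus_bits:>4}-bit bus @ 550 MHz -> {gbps:.0f} GB/s")
```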