That second paragraph was as if you read my mind and broke it down for me. Yes, that's exactly what I wanna learn more about, though I'm not all that educated in the matter.
My logic behind the question was: if the game is already there (its size already occupied on the SSD), and the size is the same, why double-compress when the assets might already be compressed to begin with? Even zlib might sound better here, as it's also lossless AFAIK.
To me it sounds like ordering a meal and, instead of eating it all, leaving leftovers. It's better to have something you can fully eat/finish (the final result on screen) so you don't pay extra money or waste the extra food (storage).
Many of the things in your post go technically deeper than my knowledge, which I like even if I don't fully understand it all yet, but I get the overall idea. Thanks a lot for your time, and glad to have you here among us.
From reading your comment, it feels like you are asking me: why did they do BCPack? What's the point when they could have used zlib? And that is the crux of the matter, really.
Since the beginning of GPU-accelerated block compression with S3TC/DXT, the idea has been to either accept lower texture quality to save on VRAM, RAM footprint, and transfer IO (at the same texture dimensions), or to use the same resources as the RAW texture would but spend the compression saving on a step up in texture dimensions, which, depending on circumstances, may result in better texturing overall in spite of the errors introduced by the lossy compression.
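To put rough numbers on that trade-off, here's a quick sketch (my own figures, using the standard per-block sizes of BC1 and BC7, not anything from a specific title):

```python
# RAW RGBA8 is 4 bytes/texel; BC1 packs a 4x4 texel block into 8 bytes
# (0.5 bytes/texel, 8:1 vs RAW); BC7 packs a 4x4 block into 16 bytes (4:1).

def bc_texture_bytes(width, height, bytes_per_block, block_dim=4):
    """Size of one block-compressed mip level (no mip chain, no padding)."""
    blocks_w = (width + block_dim - 1) // block_dim
    blocks_h = (height + block_dim - 1) // block_dim
    return blocks_w * blocks_h * bytes_per_block

def raw_texture_bytes(width, height, bytes_per_texel=4):
    return width * height * bytes_per_texel

print(raw_texture_bytes(2048, 2048))       # 16 MiB RAW RGBA8
print(bc_texture_bytes(2048, 2048, 8))     # 2 MiB as BC1
print(bc_texture_bytes(2048, 2048, 16))    # 4 MiB as BC7
# The "go up a dimension" option: a 4096x4096 BC1 texture is still only
# half the size of the 2048x2048 RAW one.
print(bc_texture_bytes(4096, 4096, 8))     # 8 MiB
```

So at the same dimensions you bank a large saving, or you can double the resolution and still come out ahead of RAW.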
Multi-texturing also provided different options, as you could blend two compressed textures in the storage of one RAW texture, etc. It probably wasn't until shader-based multi-texturing (like two-tone speckled paint) and the ability to decompress zlib'd RAW textures on PS3 that lossy texture formats stopped being seen as an automatic win (IMHO).
Once zlib decompression was possible on the GPU or SPUs, RAW textures became viable for more things again. And by default, block-compressed textures don't compress well with zlib-based compression AFAIK, so pairing DXT with zlib offered limited extra gains on top of its inferior image quality.
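You can see why in miniature: block-compressed output looks close to random (high entropy), so zlib finds little redundancy in it, while smooth RAW texture data deflates very well. A toy demonstration, using a repeating gradient as a stand-in for smooth RAW data and random bytes as a stand-in for BC output (both are just illustrative proxies, not real texture data):

```python
import os
import zlib

# Stand-in for smooth RAW texture data: a repeating horizontal gradient.
raw_like = bytes(x % 256 for _ in range(256) for x in range(1024))
# Stand-in for block-compressed output: high-entropy pseudo-random bytes.
bc_like = os.urandom(len(raw_like))

raw_ratio = len(zlib.compress(raw_like, 9)) / len(raw_like)
bc_ratio = len(zlib.compress(bc_like, 9)) / len(bc_like)
print(f"gradient 'RAW' ratio: {raw_ratio:.3f}")  # tiny: big extra savings
print(f"random 'BC' ratio:    {bc_ratio:.3f}")   # ~1.0: almost nothing saved
```

Real textures sit somewhere between these extremes, but the direction of the effect is the point: the lossy block step eats most of the redundancy zlib would otherwise exploit.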
(IIRC) Prior to DirectX 10 there was quite a bit of research in the games industry on ways to use shaders to speed up generating optimal DXTn textures, and on techniques for moving to/from different colour spaces (using shaders) to retain detail where it mattered most and place the errors where they mattered least (a little like comparing a CCD sensor to a CMOS sensor in photography).
DX10 incorporated DXTn as BC (I'm not sure if they enhanced the compression beyond DXT), but this is where BCPack comes in, with a potential 2-3x storage saving over and above regular S3TC/DXT/BC. BCPack presumably works by looking for block repetition, or similar-enough repetition, between texture blocks, so that BCPack can share index tables and/or blending values across multiple blocks.
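To make that speculation concrete: a BC1/DXT1 block is 8 bytes — two 16-bit endpoint colours plus a 32-bit index table (2 bits per texel of the 4x4 block). The sketch below counts how often an index table repeats across blocks, which is the kind of redundancy a scheme like the one I'm describing could exploit. To be clear, the BCPack format itself is undocumented; this is purely my guess at the principle:

```python
import struct
from collections import Counter

def count_shared_index_tables(bc1_data: bytes):
    """Count how many 8-byte BC1 blocks reuse an index table seen earlier."""
    seen = Counter()
    for offset in range(0, len(bc1_data), 8):
        # Each block: colour0 (u16), colour1 (u16), 32-bit index table.
        c0, c1, indices = struct.unpack_from("<HHI", bc1_data, offset)
        seen[indices] += 1
    blocks = len(bc1_data) // 8
    duplicates = blocks - len(seen)  # blocks matching an earlier table
    return blocks, duplicates

# Toy data: 16 flat-colour blocks, so every index table is 0 (all texels
# pick endpoint colour 0). A packer could store that table once and share it.
flat = struct.pack("<HHI", 0xF800, 0x001F, 0) * 16
print(count_shared_index_tables(flat))  # (16, 15)
```

On real game textures the repetition is far less extreme than this flat-colour toy, but large areas of similar detail do produce recurring or near-recurring tables, which is where a 2-3x figure becomes at least plausible.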
I suspect that, with textures as big as they are in today's games, a gamer will see little perceptible difference between a BCPack texture, a BC one, or a RAW texture on a model in a current-gen title, and that, I believe, is why Xbox has backed this strategy.
ps. Thanks for the nice comment - I've been really enjoying my brief time here, and love the upbeat positive vibe you and others bring about the excitement of new games and tech in this thread.