krejlooc's explanation is half right; the numbers are completely wrong.
You aren't missing anything.
You are almost right.
Some remarks:
- The PS1 framebuffer can be set to 15-bit (2 bytes) or 24-bit (3 bytes) color
- PS1 textures are never better than 15 bits
- the PS1's 24-bit color only applies to the internal GPU shading computation
It goes as follows (see the C sketch below the steps):
-> GPU sees polygon with 15-bit texture
-> GPU computes Gouraud shading in 24-bit and combines it with the texture color
-> if framebuffer == 24-bit: copy 24-bit GPU color to 24-bit framebuffer
-> if framebuffer == 15-bit && dither_flag == false:
--> discard the lower 3 bits of each GPU 8-bit color component (=> banding)
--> copy to 15-bit framebuffer
-> if framebuffer == 15-bit && dither_flag == true:
--> apply dithering to the GPU 24-bit color down to 15-bit (not 8-bit) color
--> copy 15-bit dithered color to 15-bit framebuffer
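A minimal C sketch of that write-out step. The 4x4 offset table is the dither table documented for the PS1 GPU in the psx-spx docs; the function name and interface are made up for illustration:

    #include <stdint.h>

    /* 4x4 ordered-dither offsets as documented for the PS1 GPU
       (psx-spx); added to each 8-bit component before truncation */
    static const int dither_table[4][4] = {
        { -4,  0, -3,  1 },
        {  2, -2,  3, -1 },
        { -3,  1, -4,  0 },
        {  3, -1,  2, -2 },
    };

    static int clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /* hypothetical write-out of one 24-bit GPU color (r,g,b in 0..255)
       as a 15-bit pixel at screen position (x,y) */
    uint16_t to_15bit(int x, int y, int r, int g, int b, int dither_flag)
    {
        if (dither_flag) {               /* spread the rounding error */
            int d = dither_table[y & 3][x & 3];
            r = clamp8(r + d);
            g = clamp8(g + d);
            b = clamp8(b + d);
        }
        /* either way, only the top 5 bits of each component survive */
        return (uint16_t)((r >> 3) | ((g >> 3) << 5) | ((b >> 3) << 10));
    }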
Result:
- the RGB color cube in 15-bit mode is 32Rx32Gx32B = 32768 colors
- plenty of shades for shading 3D objects with arbitrarily colored lights
- dithering from 256 down to 32 shades per color component is sufficient!
But why does it look so bad at times?
Because the textures aren't dithered by the GPU (already 15-bit).
Only the shades from the lighting computation (24-bit) will be dithered.
The textures are 15-bit from the get-go, from the perspective of the GPU.
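To make that concrete, a rough sketch of the texel-shading step. The modulation rule (0x80 acting as 1.0) is how psx-spx documents the PS1's texture blending; the function itself is hypothetical:

    #include <stdint.h>

    static int clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /* hypothetical texel shading: expand the 5:5:5 texel to 8 bits per
       component, then modulate by the interpolated Gouraud color
       (0x80 acts as 1.0). However precise the multiply is, the texel
       itself only ever contributes 32 levels per component. */
    void shade_texel(uint16_t texel, int gr, int gg, int gb,
                     int *r, int *g, int *b)
    {
        int tr = (texel & 0x1F) << 3;
        int tg = ((texel >> 5) & 0x1F) << 3;
        int tb = ((texel >> 10) & 0x1F) << 3;

        *r = clamp8(tr * gr / 128);   /* 24-bit intermediate results, */
        *g = clamp8(tg * gg / 128);   /* which the dither step later  */
        *b = clamp8(tb * gb / 128);   /* rounds down to 15-bit        */
    }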
Now what?
Now how did the texture become 15-bit in the first place?
Via quantization.
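In the simplest case, something like this at asset-conversion time (plain truncation; the name is made up):

    #include <stdint.h>

    /* hypothetical offline conversion of 24-bit source art to a 15-bit
       texel by plain truncation: 256 shades per component collapse to
       32, so smooth gradients in the art become visible bands */
    uint16_t quantize_texel(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint16_t)((r >> 3) | ((g >> 3) << 5) | ((b >> 3) << 10));
    }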
Now what 2?
Quantized 15-bit (banded) texture + GPU 15-bit dithered shade = meh.
Now what 3?
Quantized and dithered 15-bit texture + 15-bit dithered shade = ugly hell.
Why? Because double dithering leads to strong artifacts.
Without texturing, just shading, the dithering would look quite good.
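For reference, "quantized and dithered" means something like the sketch below at conversion time: an ordered-dither offset (a classic Bayer matrix here, purely illustrative) is added before the truncation. That pattern gets baked into the texture, and at runtime the GPU overlays its own pattern on the shading; the two patterns don't line up, hence the artifacts.

    #include <stdint.h>

    static const int bayer[4][4] = {    /* classic 4x4 Bayer matrix */
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    static int clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    /* hypothetical asset-time conversion with ordered dithering: an
       offset in 0..7 (half the Bayer value, matching the step size of
       the 8->5 bit truncation) is added per pixel, so the dither
       pattern is baked into the texture itself */
    uint16_t quantize_texel_dithered(int x, int y,
                                     uint8_t r, uint8_t g, uint8_t b)
    {
        int d = bayer[y & 3][x & 3] / 2;        /* 0..7 */
        int rr = clamp8(r + d) >> 3;
        int gg = clamp8(g + d) >> 3;
        int bb = clamp8(b + d) >> 3;
        return (uint16_t)(rr | (gg << 5) | (bb << 10));
    }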
Bottom line:
24-bit textures would alleviate the problem, but would also cost 50% more RAM (e.g. a 256x256 texture grows from 128 KB to 192 KB, out of the PS1's 1 MB of VRAM).
(the texture unit would also need to work with more bits, increasing cost)
Edit:
But you can also have finely dithered textures while skipping dithered shades.
(the textures would look good, but the shading would band across the screen)