It is all still up in the air. The thread I referred to is 50 pages long and still no consensus. Maybe you could check out the latest page?
ETA: Here is the blurb on Velocity Architecture from the Xbox website.
Just took a look at the last page and it's as I thought. Someone obsessed over the word choice in some marketing blurb.
What "instantly" in the context it's used is referring to not needing to cache everything in RAM any more and being able to stream it into RAM just-in-time.
This marketing blurb is somehow being seen through the lens of a fanboy comparing it to PS5, when in fact it's Microsoft comparing it to Xbox One and mechanical HDDs that weren't capable of just-in-time asset loading and needed tricks to predict where you were going or tricks to slow you down etc.
That's what's new.
There is no "instantly" about it. Sequentially read data will be hitting RAM at an average ~4.8GB/s where it can then be used by the CPU/GPUs.
The idea of a GPU directly addressing an SSD to crunch data is laughable. It's orders of magnitude slower than getting it from RAM, not just in raw bandwidth but in all the tiny random reads it would be issuing to fill its working caches. Utter gibberish that could only be dreamed up by someone who doesn't really know how things work even on a basic level.
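As a rough illustration of why, here's a small sketch (Python; every number is an assumed ballpark figure, not a measurement of any specific console) comparing what small random reads cost against NVMe flash versus GDDR6 RAM:

```python
# Ballpark comparison of servicing small random reads from an NVMe SSD
# versus GDDR6 RAM. All numbers are assumed orders of magnitude, not
# measurements.

nvme_latency_s = 80e-6        # ~tens of microseconds per random 4 KB read
ram_latency_s = 200e-9        # ~hundreds of nanoseconds to RAM
nvme_bandwidth_gb_s = 2.4     # raw sequential throughput of the drive
ram_bandwidth_gb_s = 560      # peak bandwidth of the fast GDDR6 pool

print(f"Latency penalty:   ~{nvme_latency_s / ram_latency_s:,.0f}x slower per access")
print(f"Bandwidth penalty: ~{ram_bandwidth_gb_s / nvme_bandwidth_gb_s:,.0f}x less throughput")

# A GPU filling its caches issues enormous numbers of small reads; multiply
# the per-access latency gap by that request count and the SSD falls
# hopelessly behind - which is why data still gets staged through RAM first.
```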
The 100 GB figure is either thrown around as an example game package size (which, in the context of a casual marketing blurb, makes sense) or, if it really is technical and something to be analysed in detail, it probably refers to the logical-to-physical flash address mapping range that can be cached to help with random reads.
The host (CPU/game) doesn't use physical addresses to get data from flash NAND, but logical ones that the flash controller translates.
A mapping table covering 100 GB of flash would need around 100 MB of storage close to the flash controller for best performance.
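Here's a quick sketch of where that ~100 MB comes from (Python; the page size and entry size are the usual assumptions for a flash translation layer, not anything confirmed about this drive):

```python
# Flash translation layer (FTL) back-of-envelope: how much fast memory the
# controller needs to keep the logical-to-physical mapping for 100 GB of
# flash resident. Assumes the common 4 KB mapping granularity and 4-byte
# entries; real controllers vary.

mapped_range_bytes = 100 * 10**9   # 100 GB of logical address space
page_size_bytes = 4 * 1024         # typical mapping granularity
entry_size_bytes = 4               # one physical address per entry

entries = mapped_range_bytes // page_size_bytes
table_size_mb = entries * entry_size_bytes / 10**6

print(f"Entries needed: {entries:,}")
print(f"Mapping table:  ~{table_size_mb:.0f} MB")

# Roughly 100 MB - the familiar rule of thumb of ~1 MB of DRAM per GB of
# flash, and the reason a DRAM-less design has to cap or page this table.
```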
The Sony storage patent linked here earlier discusses ways of getting around this limitation, as well as minimising latency by using SRAM to do the job.
If the 100 GB wasn't just a bit of marketing blurb, then as a limitation it probably refers to how big this lookup table can be in their setup.
If they're using a DRAM-less SSD, as rumoured in the LinkedIn post, then maybe whatever solution they have for mitigating random read overheads is limited in this way.