Let's do some basic math here...
We know the demo was running mostly at 1440p
We know the demo was running at 30 fps
We know the demo was using raw uncompressed data
We know the demo had roughly one triangle per pixel
I'll assume 32-bit colors.
I'll assume RAM doesn't exist (no caching; everything is streamed every frame).
Based on the above, you have 2560 x 1440, which is 3,686,400 triangles/pixels.
Running at 30 fps means you have 33.3 ms to render a frame.
If you stream everything on the fly, you have to process all 3,686,400 pixels in under 33.3 ms.
32 bits x 3,686,400 pixels / 8 = 14,745,600 bytes = 14.7456 MB.
Vertices are a better indicator of throughput cost than triangles, but since the two counts are generally close, rounding up should be fine. You would have to output about 15 MB of data per frame; multiplied by 30 fps, that's roughly 442 MB/s, call it 450 MB/s. That is what you would need to stream if you were literally sending data from the SSD to the GPU with no processing in between. That is too fast to be streamed from an HDD, but very doable for any SSD.
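The arithmetic above can be sketched in a few lines of Python (my own illustration of the estimate, not anything from the actual demo):

```python
# Back-of-envelope bandwidth estimate: one 32-bit value per pixel,
# 1440p resolution, 30 fps, no caching and no reuse (the assumptions above).
BITS_PER_PIXEL = 32
WIDTH, HEIGHT = 2560, 1440
FPS = 30

pixels = WIDTH * HEIGHT                        # 3,686,400 pixels (~= triangles)
bytes_per_frame = pixels * BITS_PER_PIXEL // 8 # 14,745,600 bytes
mb_per_frame = bytes_per_frame / 1_000_000     # 14.7456 MB per frame
mb_per_second = mb_per_frame * FPS             # ~442 MB/s sustained

print(f"{pixels:,} pixels -> {mb_per_frame} MB/frame -> {mb_per_second:.1f} MB/s")
```

Rounding 442 MB/s up to 450 MB/s just keeps the numbers conservative.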
Note that this uses output data as the reference, not input. The input is inevitably higher, but how much higher is unclear. It all depends on how efficiently you read from the SSD, i.e. whether you're loading full textures and culling them later, or loading primarily what you need and ignoring the rest.
Additionally, this completely ignores reused assets, and assumes that every triangle/pixel is completely unique, which generally is not the case.
Bottom line is, even if the input streamed from the SSD is 5 times larger than the output estimate, you would still be below 2.4 GB/s.
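To sanity-check that headroom claim, here is the same estimate with the 5x input multiplier applied (again just my own sketch of the numbers above):

```python
# Worst-case check: if SSD input is 5x the ~442 MB/s output estimate,
# does it still fit under 2.4 GB/s?
output_mb_s = 2560 * 1440 * 32 / 8 * 30 / 1_000_000  # ~442.4 MB/s output
worst_case_gb_s = output_mb_s * 5 / 1000             # ~2.21 GB/s input
print(f"worst case: {worst_case_gb_s:.2f} GB/s (limit: 2.4 GB/s)")
```

2.21 GB/s is comfortably below the PS5 SSD's rated raw throughput, which is the whole point.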
Based on this extremely rough estimate of what would need to be streamed, I doubt the full capacity of the PS5 SSD is being used, and I strongly doubt that the XSX or a PC would be incapable of it.
It's the engine that is the 'hero' here. They've completely changed the way things are rendered.