Before file I/O subsystems (at both the hardware and software level) were significantly redesigned, if a game had lots of data the player MIGHT see but you weren't sure they WOULD see, you had to pre-cache big chunks of it in RAM to prevent stalls, hangs, or significant pop-in. In other words, only a portion of the RAM was actually serving what the player was immediately doing. And since you can only squeeze so much data into RAM, that amount still acted as a hard limit.
With SSDs, their accompanying hardware I/O subsystems, and redesigned file I/O software, RAM that would previously have been allocated as a cache is freed up, meaning games get more RAM capacity for immediately pertinent graphics and asset data. You no longer need to reserve 1 GB or whatever for data the player might reach 30 seconds from now; that 1 GB can now serve the area the player is actually in, then be quickly replaced with new data as it's needed. That wasn't possible before SSDs and, more importantly, revamped memory I/O subsystems and restructured file I/O.
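To put rough numbers on that argument: how long it takes to swap out 1 GB of RAM-resident assets is what decides whether you must pre-cache or can stream on demand. The throughputs below are illustrative assumptions, not benchmarks of any specific hardware.

```python
# Back-of-the-envelope arithmetic for the streaming argument above.
# Throughput figures are illustrative assumptions.

def refill_time_s(chunk_gb: float, throughput_mb_s: float) -> float:
    """Seconds to replace `chunk_gb` of RAM-resident assets at a given drive speed."""
    return (chunk_gb * 1024) / throughput_mb_s

hdd = refill_time_s(1.0, 100)    # last-gen HDD, no decompression offload
ssd = refill_time_s(1.0, 5000)   # NVMe SSD feeding a hardware decompressor

print(f"HDD: {hdd:.2f} s to swap 1 GB")  # 10.24 s -> must pre-cache ahead of time
print(f"SSD: {ssd:.2f} s to swap 1 GB")  # 0.20 s -> can stream on demand
```

At ~10 seconds per gigabyte, the HDD simply can't react within the 30-second window the player gives you, which is exactly why that gigabyte had to sit reserved in RAM.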
Basically, just go rewatch Mark Cerny's Road to PS5 talk if you want a short-but-simple overview of what these new technologies can actually do (and in parts are already helping to enable: trackside detail in the new Forza Motorsport is much better than the extremely simple, basic detail in Forza 7 not just because of GPU/CPU power increases, but because the memory subsystem and file I/O can refresh bigger chunks of RAM orders of magnitude faster than the previous generation of consoles could. A powerful GPU and CPU don't mean much if the bottleneck stalling your pipeline is an HDD barely doing 100 MB/s with no decompression support of any kind).
Anyway, to answer the OP's question: I personally don't think any of that stuff is the future of storage. The posters basically saying things will stay the same but the drives will get faster are probably correct. Other subtle parts will improve as well, but the real improvements will come from the technologies leveraging SSDs:
-True cache coherency over PCIe with some form of CXL (preferably 3.0)
-Standardization of decompression ICs for offloading decompression tasks from the CPU (will be vital for lower-powered devices)
-Opening up decompression IC access to peripherals other than just SSDs (microSD cards and USB drives, for example)
-Potentially leveraging some form of more advanced PNM (Processing-Near-Memory) for higher-tier SSDs, where the drive has its own block of integrated RAM and processing logic to process data before sending it over a CXL 3.0-layered PCIe link (taking the stress of decompression and data processing off the CPU/GPU, maybe with the decompression IC built into the SSD itself, decentralizing the storage I/O process more or less completely from the device accessing the storage)
Those things, should they happen, will have a much more meaningful impact than just increasing capacity tenfold (which would likely bring worse performance, and certainly lower write-cycle endurance). I'm hoping these things, and in particular serious PNM and especially PIM (Processing-In-Memory) architecture designs, become prevalent in at least one of the 10th-gen consoles, where, along with chiplets and a switch to better memory (HBM-based, maybe with some NVRAM thrown in for specific caches), they'll help bring big performance increases without blowing those systems up into 400-watt monstrosities just to be competitive. But that's another type of conversation altogether.