Let's take a look at how you can be GPU/CPU efficient with 5.5 GB/s raw and 9 GB/s compressed asset throughput. As Mark says, if you can turn around in about 0.5 seconds, you'll be able to load roughly 4 GB of compressed assets in that time! Here at 10:00
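Just to sanity-check the arithmetic on those quoted figures (the rates and the 0.5 s turn time are from the talk; the script itself is mine):

```python
# Assumed figures from the presentation: 5.5 GB/s raw, ~9 GB/s
# effective after compression, ~0.5 s to turn the camera around.
raw_rate = 5.5         # GB/s, uncompressed throughput
compressed_rate = 9.0  # GB/s, effective throughput with compression
turn_time = 0.5        # seconds

raw_budget = raw_rate * turn_time              # data streamable if stored raw
compressed_budget = compressed_rate * turn_time  # effective data streamable

print(f"raw: {raw_budget:.2f} GB, compressed-effective: {compressed_budget:.2f} GB")
# 5.5 * 0.5 = 2.75 GB raw; 9 * 0.5 = 4.5 GB effective, which is
# where the ~4 GB ballpark comes from.
```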
This was a good part of the presentation, but there are still caveats, mainly in how data is stored on SSDs. If the data is compressed, it has to be decompressed before it can be used, and space on the drive has to be made available to write the decompressed data back.
Then the data has to be read, but at page level, since a page is the smallest readable unit of data on NAND. A standard page size is about 4KB, so one of the smallest levels of granularity for data on NAND is 4KB. By comparison, the smallest level of granularity for data read from volatile memory like GDDR6 is 1 byte. That's roughly four thousand times smaller (4,096x, to be exact).
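A minimal sketch of what that page-level granularity means in practice (the 4 KB page size is the assumption stated above; the helper function is just illustrative):

```python
PAGE_SIZE = 4096  # bytes; typical NAND page size, per the 4KB figure above

def pages_touched(offset, length, page_size=PAGE_SIZE):
    """Number of whole pages a byte-range read actually pulls off NAND.
    Even a 1-byte read costs a full page, unlike byte-addressable GDDR6."""
    first = offset // page_size
    last = (offset + length - 1) // page_size
    return last - first + 1

# Reading a single byte still costs one full 4 KB page:
print(pages_touched(10, 1))    # -> 1 page (4096 bytes off the NAND)
# A 5-byte read straddling a page boundary costs two pages:
print(pages_touched(4094, 5))  # -> 2 pages (8192 bytes off the NAND)
```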
Also, in the case of writing the data back to the SSD, it has to be done in blocks, which are MUCH larger than a 4KB page (speaking of which, if the GPU is reading NAND data in its decompressed state, the amount of data it can read in a single pass depends on how wide its bus is. It also depends on the bus width of the flash memory controller and of the NAND ICs in the custom storage. This is why knowing the bandwidth figures and bit widths of the chip buses is as important as knowing the overall speed. TBF, both Sony and MS have only mentioned speed rather than bandwidth, which is annoying). So a lot of that write speed might be spent constantly replacing data that doesn't actually need to be replaced on its own account, but has to be by necessity because it sits in the same NAND block as data that DOES need to be replaced.
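The block-vs-page cost described above can be sketched like this. The 64-pages-per-block figure is an assumption for illustration (real NAND blocks vary, often 64-256 pages), and the naive read-erase-reprogram model ignores FTL tricks real drives use:

```python
PAGE_SIZE = 4096       # bytes per page (assumption, as above)
PAGES_PER_BLOCK = 64   # assumption; real parts differ

def rewrite_cost(dirty_pages):
    """Bytes physically rewritten when only `dirty_pages` pages changed.
    NAND erases whole blocks, so a naive in-place update must read,
    erase, and reprogram every page in the block, clean or not."""
    logical = dirty_pages * PAGE_SIZE        # data that actually changed
    physical = PAGES_PER_BLOCK * PAGE_SIZE   # data the drive must rewrite
    return logical, physical, physical / logical

logical, physical, amp = rewrite_cost(1)
print(logical, physical, amp)
# changing one 4 KB page rewrites 256 KB of block here: 64x write
# amplification, i.e. most of the write bandwidth goes to untouched data.
```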
So texture data that doesn't need to be modified much and is fairly uniform in size relative to the NAND page size is where streaming textures as a virtual cache will be MOST beneficial. Even then, this mainly applies to decompressed data on the drive. Otherwise there will have to be programming tricks, such as duplicating altered copies of modestly changed texture data at a safe "near proximity" to the player, to be read from when needed (and it has to be read and decompressed at least once and then written back to the SSD), or a combination of that plus placing a texture cache in GDDR6 for data that is expected to be frequently altered at the bit-and-byte level (or even in cases where that level of granularity isn't needed, but the speed of GDDR6 will be more beneficial).
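As a rough sketch of the placement strategy described above. Everything here is hypothetical: the function name, the write-frequency threshold, and the three-tier policy are invented for illustration, not anything Sony or MS has described:

```python
# Hypothetical policy: rarely-modified, page-aligned textures stream from
# the SSD as virtual cache; heavily-churned data lives in GDDR6; modestly
# edited data gets a duplicated altered copy kept "near" the player.

def place_texture(writes_per_sec, size_bytes, page_size=4096):
    page_aligned = size_bytes % page_size == 0
    if writes_per_sec == 0 and page_aligned:
        return "stream-from-SSD"        # ideal virtual-cache candidate
    if writes_per_sec > 10:             # invented threshold
        return "GDDR6-cache"            # byte-level churn favors VRAM
    return "duplicate-near-player"      # modest edits: keep altered copy handy

print(place_texture(0, 8192))    # -> "stream-from-SSD"
print(place_texture(50, 4096))   # -> "GDDR6-cache"
print(place_texture(2, 5000))    # -> "duplicate-near-player"
```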
Where did I say "big"? Try reading better. "Big" devs won't take sides on news like this, but plenty of devs have taken to Twitter and apparently told media sources that they're excited about the PS5. The XSX? Not so much. Also love the "it must be first-party devs in disguise" conspiracy.
So literally what I said. You ok, bro? Need a break?
Geez, calm down, dude. It was an honest mistake, okay? But my points generally still stand. And it's not a "conspiracy" to imply it could be first-party devs making such statements. They are also developers, are they not? So there's nothing wrong with putting them in the mix, since the devs in question aren't actually specified.
Also no, that's not what you said. That's what you could have implied, but you left it open-ended enough to mean a lot of things. I simply clarified your comment, and apparently you don't like that.
You didn’t mention raytracing performance.
Sorry, it slipped. I'll add it to the OP (did the numbers for it last night and the numbers check out).