Assuming the same block size and compression are used, the I/O difference would be marginal at most. The main trade-off is some extra compute cost for visibility determination and manual handling of filtering edge cases in shaders (a rough sketch of that fallback is below).

Now my original statement that led to all this was only this: I'm unclear on how much more efficient SFS is than PRT, because I haven't been able to find any comparisons.
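To make the "filtering edge cases" part concrete: with PRT-style virtual texturing the shader can't assume the mip it wants is resident, so it clamps the lookup to the finest mip whose page is actually in memory (usually via an indirection/residency texture). The sketch below is a hypothetical CPU-side Python illustration of that clamp, not anyone's actual shader code; the page size, mip count and residency-set layout are made up for the example.

```python
# Hypothetical residency info: a set of (mip, page_x, page_y) tuples
# describing which texture pages are currently in memory.
PAGE_TEXELS = 128   # assumed page width/height in texels
MAX_MIP = 10        # coarse mip tail assumed to be always resident

def finest_resident_mip(desired_mip, texel_x, texel_y, resident):
    """Walk toward coarser mips until the page covering this texel is resident."""
    for mip in range(desired_mip, MAX_MIP + 1):
        px = (texel_x >> mip) // PAGE_TEXELS
        py = (texel_y >> mip) // PAGE_TEXELS
        if (mip, px, py) in resident:
            return mip
    return MAX_MIP  # fall back to the always-resident tail

# Example: mip 2 of this region isn't loaded yet, so sampling drops to mip 3.
resident = {(3, 0, 0), (4, 0, 0)}
print(finest_resident_mip(2, 500, 700, resident))  # -> 3
```

Hardware trilinear/aniso filtering can still straddle pages with different residency, which is where the manual blending mentioned above comes in.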
The amount of 'possible' visible detail changes, so yes, RAM is a factor, but only up to a point. As long as you have more RAM than what can possibly be visible in a given frame (which has a hard limit set by screen resolution * unique texels per sample), you're all set.

Thanks for the other info, but I do find this a rather odd statement... either way I think you got my point. The level of detail possible is certainly limited by RAM, considering on the PS5 they are likely using 20 times the amount of RAM the PS3 had just to render what is on screen lol
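For a sense of scale on that "hard limit": a back-of-envelope figure is just screen pixels x texture layers sampled per pixel x bytes per texel. The numbers below (8 layers, 1 byte per block-compressed texel) are assumptions for illustration, not measurements.

```python
# What the post calls the 'hard limit': texel data a 4K frame can actually sample.
width, height = 3840, 2160     # 4K output
layers_per_sample = 8          # assumed number of textures sampled per pixel
bytes_per_texel = 1            # assumed block-compressed (BC) texel

sampled_mb = width * height * layers_per_sample * bytes_per_texel / 2**20
print(f"~{sampled_mb:.0f} MB of texel data actually sampled per 4K frame")  # -> ~63 MB
```

What actually stays resident is several times that (64KB page granularity, resident mip tails, prefetch), which is roughly where the few-hundred-MB figure quoted below comes from; either way it is a small slice of a modern console's RAM.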
Below is a contrived example, but it illustrates the point better than theory:
The SFS demo shows around 300-500MB in use after its visibility determination. That's at 4K, with a number of textures per pixel (say 8? - fairly standard for the PS4 gen).
Back on PS3, Rage was at 720p, with a single texture. That's ~9x fewer pixels and 8x fewer layers, so roughly 70x less memory per frame in use - in this example that'd be ~4-7MB.
Proportionally, the % of memory used is in the same ballpark on both machines, and not a limiting factor on either.
You might be asking - what if Rage used multiple textures per sample? Yes, that would use more memory - but I/O would have become a bottleneck far sooner than memory (drives in that era were in the 10-20MB/s range).
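Putting the example's numbers in one place (the resolutions, layer counts and demo figures are those assumed above; the RAM totals are the machines' nominal specs; treat it all as back-of-envelope):

```python
# Back-of-envelope comparison using the figures from the example above.
sfs_demo_mb = (300, 500)                 # resident texture data reported by the SFS demo
px_4k = 3840 * 2160
px_720p = 1280 * 720
layer_ratio = 8 / 1                      # ~8 textures per pixel now vs Rage's single megatexture lookup

scale = (px_4k / px_720p) * layer_ratio  # ~72x more per-frame texel data today
rage_mb = tuple(mb / scale for mb in sfs_demo_mb)
print(f"scale factor ~{scale:.0f}x, Rage-era equivalent ~{rage_mb[0]:.0f}-{rage_mb[1]:.0f} MB")

# As a share of each machine's total RAM (512MB on PS3, 16GB on Series X):
print(f"PS3:      ~{100 * rage_mb[1] / 512:.1f}% of RAM")
print(f"Series X: ~{100 * sfs_demo_mb[1] / (16 * 1024):.1f}% of RAM")
```

Either way it works out to a few percent of total RAM at most, which is the point of the example.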
No - you store your data in blocks that can be fetched separately. E.g. if the approach is to load on texture-page boundaries (say, 64KB or so), your textures would be stored as a series of compressed 64KB blocks.

Curious how the SF partial reading of textures works with the on-disk compression. How do you get part of a texture from a compressed file? Do you need to decompress it, read the small portion somewhere along the I/O chain, and then send it on to RAM?
The above isn't something that's new to this gen - block storage has had benefits all the way back to the PS2 era, for load-speed efficiency etc. The idea of treating assets as 'files' and compressing them individually was never particularly optimal, aside from the small size reduction it yields.
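A minimal sketch of that layout, assuming a 64KB page size and plain zlib (a real pipeline would target whatever the platform's hardware decompressor expects, e.g. Kraken on PS5 or BCPack on Xbox, but the layout principle is the same): each page is compressed independently, and an offset table lets you seek to, read, and decompress just the page you need.

```python
import io
import struct
import zlib

PAGE = 64 * 1024  # uncompressed block (texture page) size

def pack_blocks(raw: bytes) -> bytes:
    """Compress each 64KB page independently and prepend an offset table."""
    pages = [raw[i:i + PAGE] for i in range(0, len(raw), PAGE)]
    blobs = [zlib.compress(p) for p in pages]
    # Header: page count, then (offset, compressed size) per block.
    header_size = 4 + 8 * len(blobs)
    offsets, pos = [], header_size
    for b in blobs:
        offsets.append((pos, len(b)))
        pos += len(b)
    out = io.BytesIO()
    out.write(struct.pack("<I", len(blobs)))
    for off, size in offsets:
        out.write(struct.pack("<II", off, size))
    for b in blobs:
        out.write(b)
    return out.getvalue()

def read_page(packed: bytes, index: int) -> bytes:
    """Fetch and decompress a single page without touching the rest of the file."""
    off, size = struct.unpack_from("<II", packed, 4 + 8 * index)
    return zlib.decompress(packed[off:off + size])

# Usage: request page 3 of a dummy 1MB texture.
texture = bytes(range(256)) * 4096           # 1MB of dummy texel data
packed = pack_blocks(texture)
page3 = read_page(packed, 3)
assert page3 == texture[3 * PAGE:4 * PAGE]
```

The trade-off versus compressing the whole file as one stream is a slightly worse compression ratio, in exchange for random access at page granularity.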