I think 15-20% with so little info is too generous, and probably in the wrong direction. The reason the PS3's and PS4's prowess in compute was important for comparison was that the PlayStation consoles matched or bettered their Xbox rivals in the other metrics, so the difference really looked usable. The PS4 and Xbox One aren't very exotic (ESRAM is a bit exotic), so even without minute details, comparing effective capability was pretty straightforward. The PS3 was very exotic, but because the IBM Cell SDK and all the design/development info was fully documented and available to everyone, it was easy to compare it fairly to a tri-core, 2-way PPC chip (Power Mac or an IBM entry-level web server).
At the moment TF doesn't represent a realistic comparison of average throughput for the XsX, because the memory setup and the absence of an I/O complex tell us, instinctively, that there is going to be a large gap between its peak and average TF of real work done. The PS5 info, by contrast, points to a narrower CU count, a high clock and constant power use to push for optimal utilisation and work done.
So even if the PS5 drops clocks to the point of 9.2 TF of optimum work done (it won't; in all likelihood it will sit just under 10 TF), the PS5 is going to get more real work done in TF than the XsX throughout the next gen, and almost certainly by some margin, based on Cerny's talk. I would certainly reconsider that view if Xbox could show the bandwidth graphs of Gears and RT Minecraft on the XsX to prove they are getting close to 75% of max memory bandwidth (0.75 × 560 GB/s ≈ 420 GB/s), and show they can sustain 75% or more utilisation of their 12 TF in gameplay. But it seems they are happier to have the assumed hardware victory for marketing rather than a real one, which is a shame, because more people from the PlayStation ranks, like me, would probably buy both next-gen consoles if the XsX really is the more impressive hardware.
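To make the arithmetic behind that argument explicit, here's a minimal sketch in Python. The peak TF figures are the published specs; the utilisation percentages are assumptions I've picked purely to illustrate how a narrower, higher-clocked GPU could close the paper gap, not measured numbers.

```python
# Sketch of the "real work done" argument. Utilisation values are
# illustrative assumptions, not measured figures.

def effective_tflops(peak_tflops: float, utilisation: float) -> float:
    """Effective throughput = peak compute x sustained utilisation."""
    return peak_tflops * utilisation

ps5_peak, xsx_peak = 10.28, 12.15   # published peak TFLOPS
ps5_util, xsx_util = 0.90, 0.75     # assumed sustained utilisation

print(f"PS5 effective: {effective_tflops(ps5_peak, ps5_util):.2f} TF")  # ~9.25 TF
print(f"XsX effective: {effective_tflops(xsx_peak, xsx_util):.2f} TF")  # ~9.11 TF

# The bandwidth target mentioned above: 75% of the XsX's 560 GB/s pool
print(f"75% of 560 GB/s = {0.75 * 560:.0f} GB/s")                        # 420 GB/s
```

Under those assumed utilisation figures the two machines end up roughly level, which is the whole point: the published peak numbers only settle it if the XsX can actually sustain a high fraction of them.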