1.) The PS5 likely doesn't support the hardware-based VRS solution which the Xbox Series and RDNA2 include.
It's possible to implement similar techniques in software, sometimes quite efficiently or even with advantages, but that approach isn't universally better, and it was already achieved on the last-gen consoles.
Furthermore, one can use and combine both solutions; the Xbox Series has this option, the PS5 likely doesn't.
How large is the advantage? Nobody knows without benchmarks, so claiming that it's in general much better, the same, or worse is pure speculation without good data and arguments backing it up.
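To make the idea of software-based coarse shading concrete, here is a minimal, purely illustrative CPU-side sketch (not how any console or engine actually implements it; every name here is made up): shading is evaluated once per 2x2 block and reused, trading detail for roughly a quarter of the shading work.

```cpp
// Illustrative only: coarse 2x2 shading on the CPU to show the basic idea
// behind software VRS-style techniques. Real implementations run on the GPU
// and choose the shading rate per tile/draw; all names here are invented.
#include <vector>

struct Color { float r, g, b; };

// Placeholder for an expensive per-pixel shading function (assumption).
Color shade(int x, int y) { return { x * 0.001f, y * 0.001f, 0.5f }; }

// Shade once per 2x2 block and reuse the result: roughly 4x less shading
// work, at the cost of detail, similar in spirit to a 2x2 shading rate.
void coarse_shade(std::vector<Color>& image, int width, int height) {
    for (int y = 0; y < height; y += 2)
        for (int x = 0; x < width; x += 2) {
            Color c = shade(x, y);  // one evaluation per block
            for (int dy = 0; dy < 2 && y + dy < height; ++dy)
                for (int dx = 0; dx < 2 && x + dx < width; ++dx)
                    image[(y + dy) * width + (x + dx)] = c;
        }
}
```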
2.) I've said it already on this forum, but you really need to know the specifications and features of "Primitive Shaders" on the PS5 to know what the difference is, if there is any, in comparison to "Mesh Shaders".
Those terms are just arbitrary names otherwise.
Personally I don't think there is a significant difference.
3.) You can support Machine Learning on nearly every advanced processor, but you can't train or execute ML networks fast enough for real-time rendering on every piece of hardware.
There are many ML models which don't need high precision mathematics, so one can use FP16, INT8, INT4 or even INT1.
What's important for real time rendering is that the throughput is high enough.
It wouldn't be surprising to me if the PS5 only has packed math for FP16 and INT16; that's quite a bit worse than the mixed-precision dot-product instructions which the Xbox Series and RDNA2 GPUs offer.
They can multiply 2xFP16, 4xINT8 or 8xINT4 data elements and add the results to an FP32 or INT32 accumulator in one execution step.
This is more precise and/or a lot faster than just working with single precision and packed math operations.
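As a purely illustrative scalar reference (instruction mnemonics and exact behaviour vary per GPU), this is the arithmetic that a 4x INT8 dot product with INT32 accumulation performs in a single hardware step:

```cpp
// Illustrative scalar reference: what a 4x INT8 dot product with INT32
// accumulation computes. Hardware with such an instruction does all of this
// in one execution step per lane; the loop here is just the math.
#include <cstdint>

int32_t dot4_i8_i32(const int8_t a[4], const int8_t b[4], int32_t acc) {
    for (int i = 0; i < 4; ++i)
        acc += int32_t(a[i]) * int32_t(b[i]);  // widen before multiply-accumulate
    return acc;  // the wide INT32 accumulator avoids overflow/precision loss
}
```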
You can tell precisely that the floating-point register file was cut in half; that's enough to know that the PS5 can't execute 256-bit floating-point instructions as fast as vanilla Zen 2 cores.
What's harder or impossible to tell is if, and in what way, the digital logic on the execution paths was cut down.
I think Nemez's fifth tweet is talking about theoretical TFLOPs and that in real-world terms more factors are important, putting both closer together in practice.
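For context on the CPU side, a 256-bit AVX/FMA operation like the one below is the kind of instruction affected: on vanilla Zen 2 it maps to a single 256-bit micro-op, while a core with a halved FP register file would presumably have to split it into two 128-bit operations (an illustrative sketch, not based on any PS5-specific documentation).

```cpp
// Illustrative only: a 256-bit AVX2/FMA operation. On vanilla Zen 2 this is
// one 256-bit micro-op; on a core with a halved FP register file it would
// presumably be split into two 128-bit operations, i.e. roughly half the
// per-instruction throughput. Compile with e.g. -mavx2 -mfma.
#include <immintrin.h>

__m256 fma8(__m256 a, __m256 b, __m256 c) {
    return _mm256_fmadd_ps(a, b, c);  // 8x FP32 multiply-add in one instruction
}
```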
The raster pipeline on the PS5 is, from a high-level view, structured like on RDNA1 GPUs.
This may even be a performance advantage, since AMD's own RDNA2 GPUs rebalanced execution resources and made some cutbacks.
They have fewer Primitive Units and Depth ROPs per Shader Engine.
From a Compute Unit perspective, for most operations, there isn't a real difference between RDNA1 or RDNA2.
Smart Shift is basically "firmware magic".
For multiple generations now, AMD has had the necessary hardware built in to control and set voltage levels, clocks, thermal limits, etc.
It's all very programmable and can be set as desired.
Sony went for a clock budget based on activity counters, which is also shared between the CPU and GPU, while MS wanted fixed clock rates.
AFAIK the way Sony is doing it is very similar to how AMD first implemented variable frequencies with Cayman (HD 6970/6950, 2010/11 GPUs), where AMD opted for no clock differences between chips of the same GPU SKU.
That's different now: AMD uses real-time measurements and lets each chip behave optimally according to its own unique characteristics.
Every chip is different in quality and behaves a bit differently, something you certainly don't want for a gaming console, which is why every PS5 is modeled after a reference behaviour based on activity counters.
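As a toy illustration of that difference (all names, constants and the model itself are invented for the example; this is not Sony's or AMD's actual algorithm): a deterministic controller derives clocks from a reference power model fed by activity counters, so every unit behaves identically, whereas a measurement-driven controller would let each chip drift according to its own silicon quality.

```cpp
// Illustrative toy model only: deterministic, activity-counter-based clock
// control under a shared CPU/GPU power budget. All names, constants and the
// power model are invented; real designs are far more involved.
#include <algorithm>

struct Activity { float cpu; float gpu; };      // normalized counters, 0..1
struct Clocks   { float cpu_mhz; float gpu_mhz; };

Clocks model_based_clocks(Activity a, float budget_watts) {
    // Hypothetical reference power model: identical inputs give identical
    // clocks on every unit, regardless of individual silicon quality.
    float cpu_w = 2.0f  + 20.0f  * a.cpu;
    float gpu_w = 10.0f + 150.0f * a.gpu;
    float scale = std::min(1.0f, budget_watts / (cpu_w + gpu_w));
    return { 3500.0f * scale, 2230.0f * scale }; // clocks only drop when the
}                                                // modeled demand exceeds budget
```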
That's a high-level diagram from GCN2, showing how the power/thermal/clock control is laid out, with programmable software layers.