There is a lot to unpack here.
Nvidia's tensor cores were originally designed for AI training/inference workloads. In Turing they are used to denoise the ray-traced lighting effects. The RT cores in Turing are what accelerate the actual ray work, i.e. BVH traversal and ray-triangle intersection tests (the rays themselves are spawned by shaders).
RTX is a hybrid ray tracing solution. It still uses standard rasterisation for the final image, but introduces limited ray tracing for lighting, shadows etc., and then uses tensor cores to denoise the result so that it doesn't appear a garbled mess.
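To make the denoising step concrete, here's a toy sketch (my own illustration, not Nvidia's actual algorithm — the real denoiser is a trained neural filter, not a box blur): a low ray count gives you a noisy Monte Carlo estimate of lighting per pixel, and a spatial filter pass trades that noise for blur.

```python
import random

def noisy_lighting(n, true_value=0.5, noise=0.3, seed=1):
    # Few rays per pixel -> noisy Monte Carlo estimate of the lighting.
    # (1D "scanline" of pixels to keep the sketch short.)
    rng = random.Random(seed)
    return [true_value + rng.uniform(-noise, noise) for _ in range(n)]

def denoise(samples, radius=2):
    # Toy box filter standing in for the tensor-core denoiser:
    # each pixel becomes the mean of its neighbourhood.
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

noisy = noisy_lighting(64)
clean = denoise(noisy)
```

Averaging can't push a pixel further from the true value than its worst neighbour was, so the filtered image is never noisier than the raw one — that's the whole bargain: fewer rays, then smooth the error away.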
Afaik, AMD's patent describes ray intersection engines in the texture mapping units. I believe there are 4 ray tracing cores per TMU. There is also a BVH intersection engine, but I don't recall how AMD are handling that.
AMD's solution is also hybrid.
There is no 100% real time ray tracing. Even with the dedicated RT cores and denoising algorithms, the hardware is just far too slow to pull that off.
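Some rough arithmetic shows why. The resolution/fps numbers below are my own assumed example, and 1 ray per pixel is absurdly conservative — real path tracing needs many samples per pixel plus shadow/reflection/GI bounces on top:

```python
# Back-of-envelope ray budget for "full" real-time ray tracing.
# Assumption: 1080p @ 60 fps, 1 primary ray per pixel, zero bounces.
width, height, fps = 1920, 1080, 60
primary_rays_per_second = width * height * fps
print(primary_rays_per_second)  # 124416000 ray queries/s, before any
# shadow, reflection or GI rays, which multiply this several times over
```

So even the floor is well over a hundred million ray queries per second, and every extra sample or bounce multiplies it. That's why everything shipping today traces a handful of effects and denoises, rather than path tracing the whole frame.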
And of course, you can handle ray tracing purely in software on the FP32 compute units, but it'll either be slow as absolute shit if you want a usable image, or it'll produce an unusably noisy one if limited to the frame-time window.
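For a sense of what that software path actually computes, here's a minimal ray-sphere intersection test (my own sketch — just the textbook quadratic, the kind of FP32 arithmetic a compute-shader tracer grinds through millions of times per frame, on top of BVH traversal which dedicated RT hardware accelerates):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    # Returns the nearest positive hit distance, or None on a miss.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# A ray fired straight down -z hits a unit sphere centred at z=-5:
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```

One of these per ray per object (or per BVH leaf) is cheap in isolation; it's the sheer count, multiplied by traversal and shading, that buries a pure-compute implementation.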
We don't know exactly how PS5 will handle ray tracing, but if Cerny says there is some form of hardware acceleration then I'm not sure how you can really argue against it.