I really appreciate your heads-up about SDFs, and your far superior knowledge on the subject, so thank you for that. I also sympathise with the last part of your post, and on the balance of probabilities suspect you are probably correct - that LoDs are still being used in the Nanite part of the rendering, even if it is using SDFs.
That said, based on the introductory SDF info I got from the two links below, I'm still leaning more towards there being no LoDs in Nanite, for a few reasons.
Raymarching SDFs (Signed Distance Fields, or Functions sometimes)
Examples of basic SDF geometry
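For anyone reading along who hasn't clicked those links, here's roughly the sort of thing they cover: an exact sphere SDF, a scene built as a union (min of distances), and a basic sphere-tracing raymarch loop. This is just my own minimal C++ sketch of the textbook idea, nothing to do with UE5; every name and constant in it is made up for illustration.

```cpp
// Minimal sketch of the ideas in the two links above: an exact sphere SDF,
// a scene built by union (min), and a basic sphere-tracing raymarch.
// Purely illustrative - the scene contents and constants are invented.
#include <cmath>
#include <cstdio>
#include <algorithm>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Exact signed distance to a sphere: negative inside, zero on the surface.
float sdSphere(const Vec3& p, const Vec3& centre, float radius) {
    return (p - centre).length() - radius;
}

// Signed distance to a horizontal plane at height y = h.
float sdPlane(const Vec3& p, float h) { return p.y - h; }

// The scene is the union of its parts: the closest surface wins.
float sceneSDF(const Vec3& p) {
    float d = sdSphere(p, {0.0f, 1.0f, 4.0f}, 1.0f);
    d = std::min(d, sdPlane(p, 0.0f));
    return d;
}

// Sphere tracing: step along the ray by the distance the SDF guarantees is free.
bool raymarch(const Vec3& origin, const Vec3& dir, float& tHit) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sceneSDF(origin + dir * t);
        if (d < 0.001f) { tHit = t; return true; }  // close enough: hit
        t += d;                                     // safe step size
        if (t > 100.0f) break;                      // ray left the scene
    }
    return false;
}

int main() {
    float t;
    Vec3 origin{0.0f, 1.0f, 0.0f}, dir{0.0f, 0.0f, 1.0f};
    if (raymarch(origin, dir, t))
        std::printf("hit at distance %.3f\n", t);   // expect ~3.0 (front of the sphere)
}
```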
But just in case our definitions of "no LoDs" differ, I'll first make my distinction clear. IMO a LoD level - much like a mipmap - is a discrete, self-contained/standalone version of the full item, just at a lower level of detail.
By comparison, I don't consider signals built up from multiple parts - e.g. JPEG, MP3, etc. - to have LoD levels, primarily because the only level that represents the signal by itself is the lowest-detail base signal; all the other higher-order components that need to be added to reconstruct the full signal don't represent the signal on their own, just some aspect of it.
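To make that distinction concrete, here's a toy 1-D sketch (again nothing to do with UE5 - the arrays and helpers are invented) contrasting a mip/LoD chain, where every level is a standalone copy of the data, with a base-plus-residual decomposition in the JPEG/MP3 spirit, where only the base is usable on its own:

```cpp
// Toy illustration of the distinction above, using a 1-D "signal".
#include <cstdio>
#include <vector>

// LoD-style: each entry is a complete, standalone version of the data,
// just at a coarser resolution (like mip levels of a texture).
std::vector<std::vector<float>> buildLods(const std::vector<float>& full) {
    std::vector<std::vector<float>> lods{full};
    while (lods.back().size() > 1) {
        const auto& prev = lods.back();
        std::vector<float> next;
        for (size_t i = 0; i + 1 < prev.size(); i += 2)
            next.push_back(0.5f * (prev[i] + prev[i + 1]));  // average pairs
        lods.push_back(next);
    }
    return lods;  // any lods[k] can be used on its own
}

// JPEG/MP3-style: one coarse base plus detail residuals. Only the base is
// usable alone; each residual only has meaning when added back onto the base.
struct Decomposed {
    std::vector<float> base;      // standalone low-detail signal
    std::vector<float> residual;  // per-pair correction, not a signal itself
};

Decomposed decompose(const std::vector<float>& full) {
    Decomposed d;
    for (size_t i = 0; i + 1 < full.size(); i += 2) {
        float avg = 0.5f * (full[i] + full[i + 1]);
        d.base.push_back(avg);
        d.residual.push_back(full[i] - avg);  // detail needed to rebuild both samples
    }
    return d;
}

std::vector<float> reconstruct(const Decomposed& d) {
    std::vector<float> full;
    for (size_t i = 0; i < d.base.size(); ++i) {
        full.push_back(d.base[i] + d.residual[i]);  // base + detail
        full.push_back(d.base[i] - d.residual[i]);
    }
    return full;
}

int main() {
    std::vector<float> signal{1, 3, 2, 6, 5, 5, 0, 4};
    auto lods = buildLods(signal);    // every lods[k] is self-contained
    auto parts = decompose(signal);   // only parts.base is self-contained
    auto back = reconstruct(parts);   // lossless: base + residuals == signal
    std::printf("levels: %zu, base size: %zu, rebuilt size: %zu\n",
                lods.size(), parts.base.size(), back.size());
}
```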
Anyhow, my main reasoning for why I think an SDF-based Nanite solution wouldn't have any LoDs is as follows.
SDFs have seemingly been around for quite some time, and I don't think simply packaging the best existing real-time SDF techniques for the new, more powerful consoles, plus a toolchain with a two-way converter for Megascans and AtomView assets, is enough to justify Unreal Engine 5's sales pitch. And even if it were, doesn't it seem odd that the UE5 Nanite code is still under NDA if the techniques are already in the public domain? The way they've bullishly suggested that Nanite could have handled massively more geometry, rendered at full 4K60 without slowdown if Lumen weren't the bottleneck, also leans me towards thinking Nanite is a very simple solution that doesn't care what it is encoding and rendering.
On the technical side, to achieve that, I would expect Epic to have looked at procedurally representing Megascans/AtomView assets with just one repeating, general-purpose SDF primitive - a triangle - and, like a JPEG encoder, to be (3D) quantising the (normalised model-space) asset with that SDF primitive to generate a base signal: starting at the furthest visible distance - in some standard frustum setup with a 4K viewport - and continuing towards the near frustum clip plane, so each successive pass only adds detail that becomes visible closer in.
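To show what I mean, here's a speculative sketch of what such an encoding might look like: a single repeating triangle-distance primitive (the standard exact distance-to-triangle construction popularised by Inigo Quilez) grouped into coarse-to-fine "bands" keyed by view distance. To be clear, this is purely my guess at the shape of the thing, not Epic's code; every type, name and number here is invented.

```cpp
// Speculative sketch: an asset stored as bands of one repeating SDF primitive
// (a triangle), coarse bands usable on their own at distance, finer bands only
// evaluated closer in - loosely analogous to JPEG's base signal plus
// higher-order coefficients. Not Nanite; all structures are invented.
#include <cmath>
#include <cstdio>
#include <vector>
#include <algorithm>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
float dot2(const Vec3& v) { return dot(v, v); }
float clamp01(float t) { return std::max(0.0f, std::min(1.0f, t)); }

// Exact unsigned distance from point p to triangle (a, b, c).
float udTriangle(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 ba = b - a, pa = p - a;
    Vec3 cb = c - b, pb = p - b;
    Vec3 ac = a - c, pc = p - c;
    Vec3 nor = cross(ba, ac);
    auto sgn = [](float v) { return v < 0.0f ? -1.0f : 1.0f; };
    bool outsideEdge =
        sgn(dot(cross(ba, nor), pa)) + sgn(dot(cross(cb, nor), pb)) +
        sgn(dot(cross(ac, nor), pc)) < 2.0f;
    if (outsideEdge) {
        // Closest feature is an edge (or vertex): project onto each edge segment.
        float d = std::min({dot2(ba * clamp01(dot(ba, pa) / dot2(ba)) - pa),
                            dot2(cb * clamp01(dot(cb, pb) / dot2(cb)) - pb),
                            dot2(ac * clamp01(dot(ac, pc) / dot2(ac)) - pc)});
        return std::sqrt(d);
    }
    // Closest feature is the triangle's interior: distance to its plane.
    return std::sqrt(dot(nor, pa) * dot(nor, pa) / dot2(nor));
}

struct Triangle { Vec3 a, b, c; };

// One "band" of the encoding: triangles that only matter once the viewer is
// closer than maxViewDistance (like higher-order JPEG coefficients only
// mattering at higher quality settings).
struct DetailBand {
    float maxViewDistance;            // beyond this, the band is skipped
    std::vector<Triangle> triangles;  // the one repeating primitive, many times over
};

// Asset SDF for a viewer at viewDistance: only the relevant bands contribute
// to the union (min of distances).
float assetSDF(const Vec3& p, float viewDistance, const std::vector<DetailBand>& bands) {
    float d = 1e9f;
    for (const auto& band : bands) {
        if (viewDistance > band.maxViewDistance) continue;  // too far away to matter
        for (const auto& t : band.triangles)
            d = std::min(d, udTriangle(p, t.a, t.b, t.c));
    }
    return d;
}

int main() {
    Triangle base{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};
    Triangle detail{{0, 0, 0}, {0.5f, 0, 0}, {0, 0.5f, 0}};
    std::vector<DetailBand> bands;
    bands.push_back({1e9f, {base}});     // base band: always evaluated
    bands.push_back({10.0f, {detail}});  // detail band: only within 10 units
    Vec3 p{0.25f, 0.25f, 1.0f};
    std::printf("near: %.3f  far: %.3f\n",
                assetSDF(p, 5.0f, bands),    // pays for both bands
                assetSDF(p, 50.0f, bands));  // base band alone, much cheaper
}
```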
Naturally, if they have been able to achieve something like a JPEG-style signal representation, then I would expect the computational cost of rendering complex items at a distance to be automatically tiny, and to become massive as they fill the screen. However, because the union with the other SDFs in the scene would eliminate those items' higher-order workloads wherever they are occluded, I would expect the overall computational cost to mostly even out - that would be my way of thinking, if that is indeed the solution.
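Put as a toy example, my reading of that cost argument looks something like this: each item exposes a cheap coarse distance and an expensive fine one, and in the union only the items whose coarse bound could still win the min ever pay their fine/detail cost - anything sitting behind a closer surface along that ray is skipped. Again, invented types and numbers, purely to illustrate the speculation:

```cpp
// Toy version of the cost-evening argument: in a union (min) of several SDF
// items, cheap coarse bounds are evaluated for everything, and the expensive
// fine/detail evaluation is only paid by items that could still be the
// closest surface. Occluded items never pay their detail cost on this ray.
#include <cstdio>
#include <vector>
#include <functional>
#include <algorithm>

struct SdfItem {
    std::function<float(float)> coarse;  // cheap lower bound on the distance
    std::function<float(float)> fine;    // expensive full evaluation
};

// Union of items at parameter t along a ray, with coarse-first pruning.
// 'margin' allows for how far the fine result may differ from the coarse bound.
float unionSDF(float t, const std::vector<SdfItem>& items, float margin, int& fineEvals) {
    // Pass 1: cheap bounds for everything.
    std::vector<float> bounds;
    for (const auto& it : items) bounds.push_back(it.coarse(t));
    float best = *std::min_element(bounds.begin(), bounds.end());

    // Pass 2: only refine items that could still win the min; the result stays
    // a conservative (lower) bound, which is safe for sphere tracing.
    float d = best;
    for (size_t i = 0; i < items.size(); ++i) {
        if (bounds[i] > best + margin) continue;  // occluded/behind: skip detail
        d = std::min(d, items[i].fine(t));
        ++fineEvals;
    }
    return d;
}

int main() {
    // Two fake items: one near the ray origin, one much further away.
    std::vector<SdfItem> items = {
        {[](float t) { return 2.0f - t; },  [](float t) { return 1.9f - t; }},
        {[](float t) { return 40.0f - t; }, [](float t) { return 39.5f - t; }},
    };
    int fineEvals = 0;
    float d = unionSDF(0.0f, items, 0.5f, fineEvals);
    std::printf("distance %.2f, fine evaluations paid: %d of %zu\n",
                d, fineEvals, items.size());  // only the near item paid its detail cost
}
```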