You are right that SDFs in the mathematical sense have unlimited resolution, but almost all of the big implementations store them discretized in a grid-like volume texture.
The voxel size is what effectively constitutes the LOD level: with bigger voxels you take bigger steps when raymarching, but the volume has less resolution.
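To make that concrete, here is a minimal Python sketch of what I mean (purely illustrative, not anyone's engine code; the grid, origin and voxel_size names are mine). The field is just a dense 3D array of distances, and the voxel size caps both the detail you can represent and how finely you step along a ray:

```python
import numpy as np

def sample_sdf(grid, origin, voxel_size, p):
    """Floor-to-voxel lookup into a discretized SDF volume.
    `grid` is a dense 3D array of signed distances; `origin` and `p`
    are np.array of shape (3,)."""
    idx = np.clip(((p - origin) / voxel_size).astype(int),
                  0, np.array(grid.shape) - 1)
    return grid[tuple(idx)]

def sphere_trace(grid, origin, voxel_size, ro, rd, max_steps=128, eps=1e-3):
    """March a ray (origin ro, normalized direction rd) through the volume.
    Coarser voxels let us take bigger steps, but any surface detail smaller
    than the voxel size is simply gone from the field."""
    t = 0.0
    for _ in range(max_steps):
        d = sample_sdf(grid, origin, voxel_size, ro + t * rd)
        if d < eps:
            return t                       # hit
        t += max(d, 0.5 * voxel_size)      # never step by less than ~half a voxel
    return None                            # miss
```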
I used polygonal meshes in the example described in the post you quote because I think folks are more used to polygons and it would be easier to understand. And even if Nanite used SDFs, they are not really well suited for everything. Animated characters, for example, would be hard, so I'm sure it would be a mixed approach where some meshes are rendered via SDFs and others aren't. That last kind of mesh will still be using LODs, IMHO.
Having said that, I can obviously be wrong. I think it aligns quite well with the restrictions Epic claims for Nanite and with Brian Karis' prior research, but it might be my personal bias towards that kind of rendering.
Even if that weren't the case, though, I think we can all agree that "having no LOD system" is bullshit and that traditional LOD methods won't be removed (except in a few simple cases) in the near future. Improving progressive-mesh-like techniques to further refine a LOD, making transitions between levels almost invisible, and not having to author LODs manually does not mean LODs won't be used, which is the initial claim I was debunking.
Let me take advantage of this post to ask you something in return: I'd love it if the few forum posters here with more graphics knowledge, like you, wouldn't let these misconceptions spread. It's great to speculate about what they are doing, but seeding technical discussions with things that are obviously wrong only brings confusion to the table and gets used as fuel for trolling (see all the "R&C is still using traditional LOD systems", "PS5 won't use LODs" or "Xbox can only render 5x fewer polygons than the PS5" posts across the forum in other threads).
I completely appreciate your heads-up about SDFs and your far superior knowledge on the subject, so thank you for that. I empathise with the last part of your post, and on the balance of probabilities I suspect you are probably correct about LoDs still being used in the Nanite part of the rendering, even if it is using SDFs.
Having said that, from the introductory SDF info I got from the two links below, I'm probably leaning more towards no LoDs in Nanite, for various reasons.
Raymarching SDFs (Signed Distance Fields, or sometimes Functions)
Examples of basic SDF geometry
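(For anyone skimming, the "basic SDF geometry" in that second link boils down to analytic distance functions like the ones below. This is just a toy Python transcription of the standard sphere and box formulas, nothing Nanite-specific.)

```python
import math

def sd_sphere(p, r):
    """Signed distance from point p = (x, y, z) to a sphere of radius r at the origin."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

def sd_box(p, b):
    """Signed distance to an axis-aligned box with half-extents b = (bx, by, bz)."""
    q = [abs(p[i]) - b[i] for i in range(3)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q[0], q[1], q[2]), 0.0)
    return outside + inside
```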
But just in case our definitions of "no LoDs" differ, I'll first make my distinction. IMO a LoD level - much like a mipmap - is a discrete, self-contained/standalone version of the full item, just at a lower level of detail.
By comparison, I don't consider signals made up of multiple parts - e.g. JPEG, MP3, etc. - to have LoD levels, primarily because the only level that represents the signal by itself is the lowest-detail base signal; all the other higher-order components that need to be added to reconstruct the full signal don't represent the signal, just some aspect of it.
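To illustrate that distinction with a toy example (a crude Haar-style decomposition in Python; nothing to do with JPEG's actual DCT or with Nanite): only the base is a standalone low-detail version of the signal, while each detail layer is meaningless without the level beneath it.

```python
def decompose(signal, levels):
    """Split a 1D signal into a coarse base plus detail layers.
    len(signal) must be divisible by 2**levels."""
    details = []
    current = list(signal)
    for _ in range(levels):
        base = [(current[i] + current[i + 1]) / 2 for i in range(0, len(current), 2)]
        detail = [current[i] - base[i // 2] for i in range(0, len(current), 2)]
        details.append(detail)
        current = base
    return current, details[::-1]   # base signal, then details from coarse to fine

def reconstruct(base, details):
    """Add the detail layers back, coarse to fine, to recover the full signal."""
    current = list(base)
    for detail in details:
        nxt = []
        for b, d in zip(current, detail):
            nxt.extend([b + d, b - d])
        current = nxt
    return current
```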
Anyhow, my main reasoning behind why I think an SDF Nanite solution wouldn't have any LoDs is as follows.
SDFs have seemingly been around for quite some time, and I don't think just taking the best existing real-time SDF techniques, applying them to powerful new consoles, and putting them in a toolchain with a two-way converter for Megascans and AtomView assets would be enough for Unreal Engine 5's sales pitch. And even if it were, does it not seem odd to you that the UE5 Nanite code is still under NDA for techniques that would already be in the public domain? The way they've bullishly suggested that Nanite could have handled massively more geometry, rendered at full 4K60 without slowdown if Lumen weren't the bottleneck, also leans me towards thinking Nanite is a very simple solution that doesn't care about what it is encoding and rendering.
On the technical side, to achieve that I would expect Epic to have looked at procedurally representing Megascans/AtomView assets with just one repeating, general-purpose SDF primitive - a triangle - and, like a JPEG encoder, (3D) quantising the (normalised model-space) asset with that SDF primitive to generate a base signal, starting at the furthest visible distance - in some standard frustum setup with a 4K viewport - and continuing towards the near frustum clip plane.
Naturally, if they have been able to achieve something like a JPEG-style signal representation, then I would expect the computational cost of rendering complex items at a distance to be automatically tiny, and to become massive as they fill the screen; but because the union with other SDFs in the scene would eliminate the higher-order workload of SDF items that end up occluded, the expected computational cost would mostly even out. That would be my way of thinking if that were the solution.
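If it helps make the union part concrete: the standard SDF union is just a min over the primitives, and a sphere tracer over that union stops at the first surface it reaches, so it never keeps marching into whatever sits behind it. That bit at least is established SDF practice rather than speculation about Epic's code; the toy Python below only illustrates that mechanism.

```python
def union(*sdfs):
    """Combine SDF primitives into one scene: the distance at p is the min over all of them."""
    return lambda p: min(f(p) for f in sdfs)

def march(scene, ro, rd, max_steps=128, eps=1e-3, t_max=100.0):
    """Sphere-trace the combined scene. The ray terminates at the nearest
    surface along its path, so geometry occluded behind that hit is never
    marched into, regardless of how detailed it is."""
    t = 0.0
    for _ in range(max_steps):
        p = [ro[i] + t * rd[i] for i in range(3)]
        d = scene(p)
        if d < eps:
            return t
        t += d
        if t > t_max:
            break
    return None

# e.g. a sphere next to a box, using the primitives sketched earlier:
# scene = union(lambda p: sd_sphere(p, 1.0),
#               lambda p: sd_box([p[0] - 3.0, p[1], p[2]], (1.0, 1.0, 1.0)))
```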