Yeah, that's how it works.
Per-scanline (1D), per-field (2D), and per-multiple-fields (3D) comb filtering.
http://www.neogaf.com/forum/showpost.php?p=188331485&postcount=11840
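For illustration, here's a minimal sketch of the 2D (line-delay) case (Python/NumPy, my own toy example, not from the link): in NTSC the chroma subcarrier sits at 227.5x the line frequency, so its phase flips 180° from one transmitted scanline to the next, and summing/differencing adjacent lines separates luma from chroma as long as the picture content is similar line to line.

```python
import numpy as np

# Toy line-delay comb: adjacent composite scanlines carry the chroma carrier
# with inverted phase (subcarrier = 227.5 cycles per line), so their average
# keeps the luma and their difference keeps the chroma.

SAMPLES_PER_LINE = 910          # illustrative sampling, ~4x the subcarrier

def composite_line(line_no, luma=0.5, chroma_amp=0.2):
    """One synthetic composite scanline: flat luma plus a chroma carrier
    whose phase advances by half a cycle per line."""
    t = np.arange(SAMPLES_PER_LINE) / SAMPLES_PER_LINE
    phase = np.pi * line_no                    # the per-line 180 deg flip
    return luma + chroma_amp * np.cos(2 * np.pi * 227.5 * t + phase)

line_a = composite_line(0)
line_b = composite_line(1)      # next scanline, (nearly) identical content

luma_est   = (line_a + line_b) / 2    # chroma cancels
chroma_est = (line_a - line_b) / 2    # luma cancels

print("luma estimate   :", round(float(luma_est.mean()), 3))               # ~0.5
print("chroma leakage  :", round(float(np.abs(luma_est - 0.5).max()), 6))  # ~0
print("chroma envelope :", round(float(np.abs(chroma_est).max()), 3))      # ~0.2
```

The 1D case just band-splits within a single scanline, and the 3D case uses whole field/frame delays instead of a single line delay, trading more memory for cleaner separation on static content.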
I'm asking myself if interlacing could appear again in the future, given that
LCD displays are now starting to mimic CRTs by using BFI (black frame
insertion) techniques and stuff, and also given that we get much higher
refresh rates, which would reduce some further issues with interlacing. BFI
helps the eye by countering the sample-and-hold characteristic of common LCD
displays, which leads to perceived motion blur. For, if you blank an
illuminated moving spot fast enough (like a CRT does), the eye does its job of
predicting its motion ahead, which is basically why interlacing works on CRTs,
i.e. the eye extrapolates the motion of the previous field, weaving it with
the current field and producing an almost proper frame (synced in time). And
using a higher refresh rate could help to reduce some of the annoying
interlacing issues, i.e. interline/edge flickering. With a display refresh
rate of >= 2x60Hz (60Hz NTSC), edge flickering will be greatly reduced: with
2x (120Hz) the edges will flicker at a rate of 60Hz, which should be
sufficient, esp. considering you won't sit close to the TV. Sure, with 3x
(180Hz) the now-90Hz flickering will be gone for humans.
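A quick back-of-the-envelope sketch of that arithmetic (Python, purely illustrative, assuming the interlaced scanning itself runs at the higher field rate, so a given line is still only lit on every other field):

```python
# Edge/interline flicker arithmetic: with interlacing, any given line (and
# hence a one-pixel-high horizontal edge) is only drawn on every other field,
# so it flickers at half the field/refresh rate.
BASE_FIELD_RATE_HZ = 60  # NTSC field rate

for multiplier in (1, 2, 3):
    refresh = multiplier * BASE_FIELD_RATE_HZ
    edge_flicker = refresh / 2               # line lit on every other field
    print(f"{multiplier}x -> {refresh} Hz refresh, edge flicker ~{edge_flicker:.0f} Hz")
# 1x ->  60 Hz refresh, edge flicker ~30 Hz  (clearly visible)
# 2x -> 120 Hz refresh, edge flicker ~60 Hz  (mostly fine at TV distance)
# 3x -> 180 Hz refresh, edge flicker ~90 Hz  (above flicker fusion)
```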
Of course, you wouldn't want all this on the studio end, but on the consumer
end, with everything adjusted to the perceptual characteristics of humans, you
can gain some savings like cutting the necessary transmission bandwidth in
half, power savings, etc., or you can get double the temporal resolution for
the same bandwidth.
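To put a rough number on the bandwidth argument (illustrative raw pixel rates only, ignoring blanking and compression):

```python
# "Half the bandwidth": raw active-pixel rate of 1080-line video at 60 Hz,
# progressive vs. interlaced (illustrative, no blanking, no compression).
width, lines, rate = 1920, 1080, 60

progressive = width * lines * rate            # 60 full frames per second
interlaced  = width * (lines // 2) * rate     # 60 fields of 540 lines each

print(f"1080p60: {progressive / 1e6:.1f} Mpixel/s")   # ~124.4
print(f"1080i60: {interlaced / 1e6:.1f} Mpixel/s")    # ~62.2, i.e. half
```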
Don't know if we will see it again. But there is a trend of adapting all the
backends that target humans to their perceptual characteristics, now that we
know more about them, like these adapted RGBW displays.
The basic principle on which most of these things rely is that with every
increase in resolution etc. the redundancy increases as well, which can be
seen by making a spectral analysis of many such signals. For example, for a TV
with a high resolution the lines will look quite similar. Making a spectral
analysis of a b/w video signal reveals (a) that the spectrum is discrete due
to its periodic nature (scanning) and (b) that the multiples of the
fundamental frequency (the horizontal/line frequency) aren't modulated that
much, since the scanlines look quite similar from line to line. This leaves a
lot of space in the luminance video spectrum, which translates into
redundancy. And this was basically the reason why the principle of color TV
(NTSC/PAL composite video) works at all, i.e. it utilizes this redundancy in
the luminance spectrum to merge the color information into these free slots.
Clever! And this wasn't obvious from the beginning, as can be seen in how RCA
struggled to fit the color information into the same 5MHz b/w video bandwidth
in the early '50s.
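To make the frequency-interleaving point concrete, a small sketch (Python/NumPy, my own illustration): the spectrum of a line-scanned signal clusters at multiples of the line frequency f_H, and NTSC parks the colour subcarrier at 227.5 x f_H, i.e. exactly halfway between two luma harmonics, in one of those free slots.

```python
import numpy as np

# The interleaving arithmetic: the subcarrier sits midway between luma harmonics.
F_H = 15_734.264            # NTSC line frequency, Hz
F_SC = 455 / 2 * F_H        # colour subcarrier, ~3.579545 MHz

harmonics = np.array([227, 228]) * F_H
print("nearest luma harmonics:", harmonics / 1e6, "MHz")
print("chroma subcarrier     :", F_SC / 1e6, "MHz (exactly in between)")

# Crude spectral demo: identical scanlines (the extreme of line-to-line
# similarity) give a spectrum with energy only at multiples of the line rate.
samples_per_line, lines = 256, 64
x = np.linspace(0, 1, samples_per_line, endpoint=False)
one_line = 0.5 + 0.3 * np.cos(2 * np.pi * 3 * x)   # some horizontal detail
signal = np.tile(one_line, lines)                  # repeat line to line
spectrum = np.abs(np.fft.rfft(signal))
peaks = np.flatnonzero(spectrum > 1.0)             # ignore numerical noise
print("significant bins:", peaks, "-> all multiples of", lines)
```

With real picture content the lines aren't identical, so each harmonic smears into a narrow cluster, but the gaps in between stay largely empty, and that's where the chroma lives.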