Exactly how do HDTVs upconvert 480i to 1080i?

Fatghost

Gas Guzzler
I'm not really familiar with how this process works, but as I understand it, many HDTVs upconvert 480i to 1080i. How does the TV figure that out though?


Can someone fill me in on how upscalers/deinterlacers work?


I'm completely clueless when it comes to TV technology.

Thanks.
 
Well... a de-interlacer takes interlaced video and, as best it can, generates progressive-scan video frames that look as if they had been created or televised as progressive scan from the very beginning. Because a progressive-scan frame has twice the number of scan lines as an equivalent interlaced field, the terms de-interlacer and line doubler are often used interchangeably. The simplest doubler just outputs each scan line twice in the time it took the original scan line to arrive. A tripler generates output frames with three times the scan lines, a quadrupler generates output frames with four times the scan lines, and so on. If the source is interlaced video, a quadrupler should do good de-interlacing first, then redouble the lines. A tripler starting off with interlaced video should do good de-interlacing followed by scaling at a 3:2 ratio, rather than simply tripling the lines.
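To make the "simplest doubler" above concrete, here's a toy sketch in Python. The function name and the use of strings as stand-in scan lines are my own invention; a real doubler works on pixel data and usually interpolates between lines rather than blindly repeating them.

```python
# Toy sketch of the simplest "line doubler": each input scan line is
# emitted twice, turning an N-line field into a 2N-line frame.
def double_lines(field):
    """field: list of scan lines; returns a frame with twice the lines."""
    frame = []
    for line in field:
        frame.append(line)  # original scan line
        frame.append(line)  # same line repeated in the same time slot
    return frame

print(double_lines(["A", "B"]))  # ['A', 'A', 'B', 'B']
```

Repeating lines is cheap but blocky; better doublers average adjacent lines (or do full motion-adaptive de-interlacing) instead of repeating.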

A line doubler is a kind of scaler. More specifically a line doubler is intended to create output video frames with exactly twice the number of scan lines while a scaler usually has several ratios of output scan lines to input scan lines per frame.

Complicating things further, there are *TWO* different sources of moving images... Video (usually in multiples of 30 frames per second, and either interlaced or progressive) and Film (almost always progressive, in multiples of 24 frames per second)... and those numbers are for the NTSC format [never twice the same color] :) whereas PAL runs in multiples of 25 fps (50 interlaced fields per second). The digital ATSC [always twice the same color] :D standard is a great big umbrella that covers a whole family of interlaced and progressive formats at different resolutions and frame rates.

At any rate, "Video" mode is the "normal" mode for a de-interlacer, in which the incoming content is not expected to have any special sameness from one field to the next. The de-interlacer must use its most sophisticated analysis in constructing the progressive frames. In film mode, a de-interlacer takes into account the 3-2-3-2 repeat sequence (3:2 pulldown, also written 2:3 pulldown) of subject matter in successive NTSC video fields produced from 24 frame-per-second film source. By keeping track of this pattern, the de-interlacer can quickly find the matching field (the one immediately preceding or following) to weave when constructing the progressive-scan video frames. (For PAL video of a 24 fps film, or for NTSC renditions of the few 30 fps films, the repeat pattern goes 2-2-2-2, called 2:2 pulldown.)

Notice I say "usually" a lot, because when DVDs are created, the video is transferred from one of two sources: film or a video camera, and this is where we get both interlaced and progressive sources of Video...

When you go to the movie theaters, the film you watch has 24 frames per second of information. The video you watch at home, however, is presented at 60 fields per second. To fit 24 frames/sec into 60 fields/sec, a process called telecine is used. Telecine breaks each film frame into 2 fields (call them A and B), and produces a regular cadence (a repeated pattern) to reach 60 fields per second. The end result is that the 48 fields/sec (generated from 24 frames/sec) is "expanded" to 60 fields/sec by repeating certain fields. The key concept here is that the two fields used to produce the cadence are from the SAME frame of the image (and thus can be recombined later to reproduce the original frame).
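The telecine cadence described above can be sketched in a few lines. This is a simplified illustration with made-up names; real telecine works on actual top/bottom fields of picture data, but the 3-2-3-2 bookkeeping is the same: alternating film frames contribute 3 fields and 2 fields, so 4 film frames become 10 video fields (24 fps → 60 fields/sec).

```python
# Toy sketch of 3:2 pulldown (telecine). Each film frame is split into
# a top and bottom field; alternating frames contribute 3 fields (one
# repeated) or 2 fields, producing the 3-2-3-2 cadence.
def telecine_32(frames):
    fields = []
    for i, f in enumerate(frames):
        top, bottom = (f, "top"), (f, "bottom")
        if i % 2 == 0:
            fields += [top, bottom, top]   # 3 fields: top repeated
        else:
            fields += [top, bottom]        # 2 fields
    return fields

# 4 film frames -> 10 video fields, i.e. 24 frames/sec -> 60 fields/sec
print(len(telecine_32(["F1", "F2", "F3", "F4"])))  # 10
```

Note every field is still tagged with the film frame it came from, which is exactly why a film-mode de-interlacer can pair them back up perfectly.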

When video is produced from a video camera, however, the output is already 60 fields per second. The difference from film (besides the rate) is that each field is taken from a different point in time: field A is captured at one instant, and field B 1/60th of a second later. The key concept here is that the two fields are taken from DIFFERENT "frames" of the image, thus it is impossible to reconstruct a single frame for any given point in time.

When a deinterlacer recombines a signal, it can recombine it in two modes: film and video. When in film mode, the deinterlacer is smart enough to recognize the cadence produced by the telecine process. It can see that the two fields originated in the same frame, and recombine them to produce an exact replica of the original frame. When in video mode, however, it is not possible to recombine two fields into an original frame since each field was captured at a different instance of time. The resulting deinterlaced image can suffer from combing in areas of the image which moved from one field to the next.
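The "weave" operation mentioned above is just interleaving the two fields back into one frame. A minimal sketch (strings stand in for scan lines, and the names are invented): when both fields came from the same film frame, weaving reproduces it exactly; when they came from different instants, the woven frame shows the comb teeth described above.

```python
# Weave: interleave top-field (even) and bottom-field (odd) scan lines
# back into a single progressive frame.
def weave(top_field, bottom_field):
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame += [t, b]
    return frame

# Film mode: both fields were split from one frame, so weaving them
# reconstructs that frame exactly.
film_frame = ["even0", "odd0", "even1", "odd1"]
top, bottom = film_frame[0::2], film_frame[1::2]
print(weave(top, bottom) == film_frame)  # True
```

In video mode the two fields are 1/60th of a second apart, so a moving edge sits in a different place on the even lines than on the odd lines, and the same weave produces combing instead of a clean frame.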

Thus, deinterlacers have two modes: film and video. In film mode the original frame of film can be perfectly recovered, whereas in video mode it is impossible to recover an original frame. Some companies have technology which attempts to interpolate between different video fields (such as Faroudja's DCDi and Key Digital's ClearMatrix scalers/deinterlacers). Many deinterlacers will switch between these two modes automatically; however, some do not.

As a side note, it is still important to have a good video mode on a deinterlacer even when watching films. Sometimes a bad cut or edit made when producing the DVD interrupts the cadence of fields. In this case, the deinterlacer cannot reconstruct the original frame of film from the fields presented, and has no choice except to revert to its video mode. Good deinterlacers will do this automatically to prevent excessive combing (Faroudja calls it "motion adaptive deinterlacing") and will switch back into film mode as soon as the cadence is recovered. Poor deinterlacers will stay in film mode too long and recombine fields from different film frames, causing artifacts...
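That automatic fallback can be sketched too. Everything here is invented for illustration: `same_frame` stands in for the real motion/cadence analysis a deinterlacer performs, and fields are just tagged with the film frame they came from. While the cadence holds, fields are woven (film mode); when a bad edit pairs fields from different frames, the sketch drops to interpolation (video mode) for that frame.

```python
# Toy sketch of motion-adaptive fallback: weave only while the cadence
# holds, otherwise interpolate to avoid combing.
def deinterlace(field_pairs, same_frame):
    """same_frame(a, b) stands in for real cadence/motion detection."""
    out = []
    for top, bottom in field_pairs:
        if same_frame(top, bottom):
            out.append(("weave", top, bottom))        # film mode
        else:
            out.append(("interpolate", top, bottom))  # video-mode fallback
    return out

# Second pair simulates an edit that broke the cadence: its fields come
# from different film frames (F1 vs F2).
pairs = [(("F1", "top"), ("F1", "bot")),
         (("F1", "top"), ("F2", "bot"))]
modes = [m for m, _, _ in deinterlace(pairs, lambda a, b: a[0] == b[0])]
print(modes)  # ['weave', 'interpolate']
```

A "poor" deinterlacer in the post's terms is one whose `same_frame` check trusts the expected cadence instead of verifying it, so it weaves mismatched fields anyway.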

Check out this site for a good visual on how all this crap works :D

http://members.rogers.com/tholbrook/other/pgscan.html

Hope all this helps clear it up for you :lol
 
This is the real answer on how it works...

[attached image: sill.jpg]
 
My guess is it either duplicates the field or interpolates it, a la Photoshop, in real time.

lachesis
 
Now the other question: how do you know if your HDTV upconverts or not? I have a Sony KF-42WE610; my display choices for non-progressive content are High Definition, Progressive, and CineMotion, which is a 3:2 pulldown. I guess that means it upconverts it by force? Is that good or not? And which resolution does it upconvert it to?
 
Buggy Loop said:
Now the other question: how do you know if your HDTV upconverts or not? I have a Sony KF-42WE610; my display choices for non-progressive content are High Definition, Progressive, and CineMotion, which is a 3:2 pulldown. I guess that means it upconverts it by force? Is that good or not? And which resolution does it upconvert it to?

If you have a Sony Grand Wega projection LCD, then the screen can actually only output stuff at one resolution (its "native" resolution). I believe for Sony this is something unusual (like 1368 x 800 something), so it's actually converting all the signals that come into the TV.
 