This makes more sense in the context of video. Take your smartphone camera and move it from a dark area to a bright area.
Eh, smartphone cameras probably capture between 3 and 5 f-stops of dynamic range.
For reference, your eyes
can see anywhere from 10-14 f-stops of instantaneous dynamic range. Instantaneous, meaning if you DON'T allow your pupils to adjust.
If you take pupils adjusting to different lighting conditions into account, then your eyes have a range exceeding 24 f-stops.
A TV should be able to display around 8 f-stops.
For reference:
Compact cameras capture 5-7 stops.
DSLRs capture 8-12 stops. If you shoot in RAW, you'll see that you can increase or decrease exposure in post, so that stuff that appeared as pitch black/solid white on your monitor becomes visible. The detail has been captured; it's just outside your TV/monitor's range.
A scene with a dynamic range of 3 f-stops has a white that is 8X as bright as its black (2^3 = 2x2x2 = 8)
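The arithmetic generalizes: each additional stop doubles the white-to-black ratio. A quick sketch (the function name is just for illustration):

```python
def contrast_ratio(stops: float) -> float:
    """Each f-stop doubles the light, so the white/black ratio is 2**stops."""
    return 2 ** stops

print(contrast_ratio(3))   # 8 -> white is 8x as bright as black
print(contrast_ratio(14))  # 16384 -> rough ceiling of the eye's instantaneous range
```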
HDR photography means taking multiple pictures at different exposures to capture the entire range.
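A toy sketch of that merging step, assuming a linear sensor response and made-up clipping thresholds (real HDR merging is considerably more involved): for each pixel, keep only the exposures where it isn't blown out or crushed, then divide by exposure time to estimate relative scene brightness.

```python
def merge_brackets(shots):
    """Estimate relative scene radiance for one pixel from bracketed shots.

    Each shot is (exposure_time_seconds, pixel_value 0-255).
    Values near 0 or 255 are treated as clipped and discarded.
    """
    usable = [(t, v) for t, v in shots if 5 < v < 250]
    if not usable:
        return None  # pixel was clipped in every exposure
    # radiance ~ pixel_value / exposure_time (linear-sensor assumption)
    return sum(v / t for t, v in usable) / len(usable)

# A bright pixel: blown out in the long exposure, fine in the short one,
# so only the short exposure contributes.
print(merge_brackets([(1/30, 255), (1/500, 200)]))
```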
Now, for your CG scene, you need a much larger ratio between maximum and minimum light intensities than your TV can display at once. So it becomes inevitable that the game needs to adjust to the range that should be visible on your TV at any given time. It cannot display the inside of a tunnel and a bright noon sky at the same time; not if their luminosity values are physically accurate. And you want them to be accurate, because they affect how lighting is calculated.
I suppose you could post-process your image so that all values are crushed into the 0-255 range before displaying it.