I wonder then why this approach hasn't been used more frequently in the past.
It takes processing power and a good data pipe. I'm also guessing it looks like turds at non-1:1 pixel resolutions, since the blurring and artifacts would be compounded by upscaling.
The first post's example doesn't seem to have it right either. From my reading, every other frame does the half-resolution render plus interpolation with the next full frame:
1080p frame 1 > (960 columns rendered + 960 columns calculated between frames 1 and 3) > 1080p frame 3
So you're getting two out of every three frames at native 1080p, and the one in between is a best guess built from half a frame's worth of fresh pixels plus the two good frames on either side. It might also kick in dynamically based on performance, and in the future it could get much more sophisticated (maybe only certain sections/tiles of a frame are done this way, or only certain shaders).
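To make the idea concrete, here's a minimal sketch of that reconstruction step. Everything here is an assumption for illustration (the function name, the 50/50 blend, and treating the half-render as every other pixel column); the real engine would use motion vectors, not a plain average:

```python
import numpy as np

W, H = 1920, 1080  # full 1080p frame

def reconstruct_inbetween(half_cols, prev_full, next_full):
    """Hypothetical sketch: the 'in-between' frame renders only the even
    pixel columns (960 of 1920); the odd columns are guessed by blending
    the matching columns of the full frames before and after it."""
    frame = np.empty((H, W), dtype=np.float32)
    frame[:, 0::2] = half_cols  # the 960 freshly rendered columns
    # Missing columns: naive average of the two surrounding full frames
    frame[:, 1::2] = 0.5 * (prev_full[:, 1::2] + next_full[:, 1::2])
    return frame
```

With static content this reconstructs perfectly; with motion, the averaged columns lag behind the rendered ones, which is exactly where the ghosting/artifact complaints would come from.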
Shame on GG for not being upfront, but can you blame them with the DERP going on in this thread? Even the title is wrong.