Andrew addresses this question in some depth. I won't reproduce his explanation here; it's worth reading in its entirety, and it covers the issues of pipelining, buffering, and CPU/driver-GPU interactions. Interestingly, Andrew believes that even in the case of latency spikes, buffered solutions may produce smooth frame delivery to the display. However, even if that's the case, the timing of the underlying animation is disrupted, which is just as bad:
"This sort of "jump ahead, then slow down" jitter is extremely visible to our eyes, and demonstrated well by Scott's follow-up video using a high speed camera. Note that what you are seeing are likely not changes in frame delivery to the display, but precisely the affect of the game adjusting how far it steps the simulation in time each frame. . . . A spike anywhere in the pipeline will cause the game to adjust the simulation time, which is pretty much guaranteed to produce jittery output. This is true even if frame delivery to the display (i.e. rendering pipeline output) remains buffered and consistent. i.e. it is never okay to see spikey output in frame latency graphs."
Disruptions in the timing of the game simulation, he argues, are precisely what we want to avoid in order to ensure smooth gameplay, and Fraps writes its timestamps at a critical point in the process:
"Games measure the throughput of the pipeline via timing the back-pressure on the submission queue. The number they use to update their simulations is effectively what FRAPS measures as well."
In other words, if Fraps captures a latency spike, the game's simulation engine likely sees the same thing, and the result is disrupted timing and less-than-smooth animation.
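As a rough illustration of why the two measurements coincide, here's a sketch of a Fraps-style timing point. Fraps's internals aren't public, so the hook plumbing and names here are assumptions; the idea is simply that timestamping each Present call captures the same back-pressure the game's timestep does:

#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

static Clock::time_point g_lastPresent;
static bool g_havePrevious = false;

// Imagined to be called from a hook wrapped around the game's Present
// call; the interception plumbing is omitted and hypothetical.
void onPresent() {
    auto now = Clock::now();
    if (g_havePrevious) {
        double ms = std::chrono::duration<double, std::milli>(now - g_lastPresent).count();
        // A spike in this interval implies a matching spike in the dt the
        // game feeds its simulation, since both are timed at the point
        // where submission-queue back-pressure is felt.
        std::printf("frame time: %.2f ms\n", ms);
    }
    g_lastPresent = now;
    g_havePrevious = true;
}

int main() {
    // Stand-in for three frames' worth of Present calls.
    for (int i = 0; i < 3; ++i) onPresent();
    return 0;
}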