No, they certainly can't match it at first.
Like I said, you're taking him out of context and missing the intricacies of what he's saying, namely what latency in outputting something actually means.
He's referring to how PCs have bottlenecks due to their nature. The reasons for those bottlenecks vary a lot: the complex OS running in the background; the fact that developers can't possibly optimize for every single GPU in particular, so they have to keep the game optimized in abstract terms (and build optimizations on top); the GPU not being soldered right next to the CPU; ports, drivers, and the darned APIs (namely DirectX) and software getting in the way; and memory banks that tend to be bigger but often don't consist of RAM that's as good (at least at the time a generation starts). Also, commercial GPUs lack a dedicated framebuffer, meaning the main video RAM bank gets tapped for outputting the image (which also happens on the X360, since 10 MB isn't enough, and the PS3 doesn't have an embedded eDRAM framebuffer), and that RAM certainly isn't as fast, or as dedicated, as one could hope.
That doesn't mean the console will have double the performance in practical terms, though, just that there'll be less latency to it. How to explain this... the Xbox didn't have a dedicated framebuffer; it used RAM from the main bank for the z-buffer and framebuffer, which you could consider a bottleneck against the PS2, which had 4 MB of 2560-bit eDRAM. The Xbox is still regarded as the more powerful one despite the fact that this hindered its efficiency (and made hitting 60 frames harder), because the PS2 couldn't match what was going on even though it had reduced latency when outputting.
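To put rough numbers on the framebuffer point (a back-of-envelope sketch using the commonly cited bus widths and clocks for the PS2 Graphics Synthesizer eDRAM and the Xbox's unified DDR pool; treat the exact figures as approximations):

```python
# Rough peak-bandwidth comparison of where the framebuffer lives.
# Figures are the commonly cited specs, so this is approximate.

def bandwidth_gb_s(bus_width_bits, clock_mhz, transfers_per_clock=1):
    """Peak bandwidth = bus width in bytes * clock * transfers per clock."""
    return (bus_width_bits / 8) * clock_mhz * 1e6 * transfers_per_clock / 1e9

# PS2 Graphics Synthesizer: 4 MB eDRAM on a 2560-bit bus at ~150 MHz,
# dedicated to the framebuffer/z-buffer/textures
ps2_edram = bandwidth_gb_s(2560, 150)                      # ~48 GB/s

# Xbox: framebuffer sits in the 64 MB unified DDR pool,
# 128-bit bus at ~200 MHz, double data rate, shared with the CPU
xbox_unified = bandwidth_gb_s(128, 200, transfers_per_clock=2)  # ~6.4 GB/s

print(f"PS2 eDRAM:        {ps2_edram:.1f} GB/s")
print(f"Xbox unified RAM: {xbox_unified:.1f} GB/s")
```

That huge gap in output-side bandwidth is exactly the kind of low-latency advantage being described, and yet the Xbox still pushed more detail per frame, because the rest of its compute was stronger.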
What you do before outputting, with raw power, has less to do with latency (the latency is in outputting it/storing it for output), meaning you can't possibly match the detail a 3 GFlop GPU is pulling with a 1.5 GFlop machine, only fake it with tricks. The same goes for resolution and stuff like effects and AA, because that's dependent on computing power, not latency.
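If it helps, think of it per frame. A toy calculation (the 3 and 1.5 GFlop figures are just the ones from the example above, not any real GPU):

```python
# Toy per-frame budget: output latency doesn't change how much work
# you can actually do in the ~16.7 ms you get at 60 fps.

TARGET_FPS = 60
FRAME_TIME_S = 1.0 / TARGET_FPS

def flops_per_frame(gflops):
    """Raw operations available per frame at the target frame rate."""
    return gflops * 1e9 * FRAME_TIME_S

big_gpu   = flops_per_frame(3.0)   # ~50 million ops per frame
small_gpu = flops_per_frame(1.5)   # ~25 million ops per frame

# Shaving output latency means each finished frame reaches the screen
# sooner; it doesn't hand the 1.5 GFlop machine the missing ~25 MFLOPs
# of shading/geometry work, which is why detail, resolution and AA
# track compute, not latency.
print(f"3.0 GFlop GPU: {big_gpu/1e6:.0f} MFLOPs per frame")
print(f"1.5 GFlop GPU: {small_gpu/1e6:.0f} MFLOPs per frame")
```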
Saying it's equivalent to more GFlops is a big fallacy and shouldn't even be implied, given what it means in practical terms. Consoles are more efficient as closed boxes, yes, but the extent to which they are is very limited; they can still only output what their computation power allows them to (just like PC GPUs, actually), and beyond that they can only try to reduce the difference through optimization and roundabout ways.