Not quite. It represents a difference in raw floating-point throughput; how that throughput translates into actual performance is a whole different ball game.
And that's before we even get into efficiency or differences between the architectures... Your workload simply might not need the extra operations. Say, for example, all you want to do is draw big black images as fast as you can. Performance there won't be determined by the FLOP rate at all.
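To put the black-image example in rough numbers, here's a tiny back-of-envelope sketch in Python. The framebuffer size and bandwidth figures are assumptions for illustration only, not the specs of either console:

```python
# Back-of-envelope sketch: how fast can a GPU "draw big black images"?
# All numbers below are illustrative assumptions, not real console specs.

BYTES_PER_PIXEL = 4            # assume a 32-bit RGBA render target
WIDTH, HEIGHT = 1920, 1080     # assume a 1080p framebuffer

def max_clears_per_second(bandwidth_gb_s: float) -> float:
    """Clearing a frame to black is essentially a pure memory write:
    zero useful FLOPs per pixel, so only bandwidth sets the ceiling."""
    bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
    return (bandwidth_gb_s * 1e9) / bytes_per_frame

# Two hypothetical GPUs with very different FLOP ratings but the same
# memory bandwidth hit exactly the same ceiling on this workload.
for name, bw in [("GPU A (more FLOPs)", 170.0), ("GPU B (fewer FLOPs)", 170.0)]:
    print(f"{name}: ~{max_clears_per_second(bw):,.0f} black frames/s")
```

The point being: on a workload like that, the FLOP column of a spec sheet tells you nothing.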
Of course, there's not much use for big black images, but not every game is going to be held back by ALU performance. Heck, even for some fairly high-profile games this generation, developers have come out and said they still have processing power sitting unused on 7-year-old hardware (360 and PS3), and that they can't put it to good use because they're held back by memory.
My point is: not knowing all the details of the architectures means we can't say precisely where each of them is going to have an advantage over the other, and not knowing what kinds of games these consoles are going to run means we can't say much of anything about their final performance. Take a hypothetical scenario where Orbis's massive bandwidth gives it an immense edge over Durango in deferred rendering. But for some reason developers decide to stick to forward rendering (be it the lowest common denominator, GPGPU being used in a way that gives them the advantages of forward rendering and deferred at the same time, etc.). In forward rendering Orbis's extra bandwidth doesn't make much of a difference, while Durango's memory setup lets it compensate for Orbis's floating-point advantage and then some, though by a smaller margin than deferred would have yielded, so developers stick with that for parity's sake.
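To make the deferred-vs-forward angle a bit more concrete, here's another quick Python back-of-envelope. The G-buffer size, overdraw, and light count are made-up illustrative values, not anything from the leaked specs:

```python
# Back-of-envelope: why classic deferred rendering leans on memory bandwidth.
# G-buffer size, overdraw, and light count below are made-up illustrative values.

WIDTH, HEIGHT = 1920, 1080
GBUFFER_BYTES_PER_PIXEL = 16   # assume a few render targets: albedo, normals, depth...
FPS = 60

def gbuffer_traffic_gb_s(write_passes: float, read_passes: float) -> float:
    """GB/s spent just writing the G-buffer and reading it back for lighting."""
    bytes_per_frame = WIDTH * HEIGHT * GBUFFER_BYTES_PER_PIXEL * (write_passes + read_passes)
    return bytes_per_frame * FPS / 1e9

# Best case: write the G-buffer once, read it once.
print(f"minimal deferred: ~{gbuffer_traffic_gb_s(1, 1):.1f} GB/s")

# With some overdraw on the geometry pass and one G-buffer read per light
# (non-tiled deferred), the traffic balloons and bandwidth becomes the limiter.
print(f"overdraw + many lights: ~{gbuffer_traffic_gb_s(2.5, 30):.0f} GB/s")
```

A forward renderer skips the fat G-buffer entirely, shifting the bottleneck toward shader/ALU throughput instead of raw bandwidth, which is exactly why the choice of rendering technique can flip which console's memory setup looks better on paper.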