Sorry, that's the thing, we don't really know.
Have you ever considered that OpenCL and GPGPU are derived from CUDA? We don't know how optimized AMD/ATi cards are when it comes to the API and drivers. Take this as an example: on PC, Nvidia's OpenGL drivers are still better than AMD/ATi's, but that doesn't seem to have played a part on consoles. We also don't know how code written specifically for these architectures behaves. ATi actually changed VLIW5 to VLIW4 not because VLIW5 was ineffective, but because the fifth slot was almost never used, so they'd rather spend the silicon freed up by that extra lane on more stream processors. There's no telling (at least I don't know) whether on a closed, 5-wide platform that slot would still go to waste; I mean, it's still there, and code tuned for the platform could conceivably keep it busy.
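To make that slot-utilization point concrete, here's a toy C model; the numbers are invented for illustration, not real R7xx scheduling statistics:

```c
#include <stdio.h>

/* Toy model, not real R7xx scheduling: each VLIW bundle issues one
 * instruction per lane per cycle.  Typical shader math is vec3/vec4
 * MAD-style work, so the fifth "T" lane (transcendentals: sin, rcp, ...)
 * sits idle in most bundles. */
#define BUNDLES     1000
#define TRANS_EVERY 20   /* assumption: ~1 transcendental per 20 bundles */

int main(void) {
    int used = 0, total = 0;
    for (int i = 0; i < BUNDLES; i++) {
        used  += 4;                       /* x/y/z/w lanes kept busy     */
        used  += (i % TRANS_EVERY == 0);  /* T lane only occasionally    */
        total += 5;
    }
    printf("VLIW5 lane utilization: %.1f%%\n", 100.0 * used / total);
    /* ~81%% here; dropping the T lane (VLIW4) and spending the silicon
     * on more SIMDs is the trade AMD made. */
    return 0;
}
```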
A closed architecture lets you cater specifically to the platform in question; and there's not much you can do on DirectX 11 that you flat-out can't do on DirectX 10, for example. (DirectX 11 is more about better/faster execution than it is about letting you do stuff you couldn't.)
This is also the main difference between shader model 4 and 5.
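For illustration, this is roughly how a single D3D11 code base scales down to DX10-class hardware through feature levels; a minimal sketch, nothing exotic:

```c
#include <d3d11.h>

/* Minimal sketch: one D3D11 code base, scaled by feature level.
 * On DX10-class silicon the device simply comes back as 10_x and
 * the app skips the SM5-only paths (hull/domain shaders, cs_5_0). */
int create_device(ID3D11Device **dev, ID3D11DeviceContext **ctx)
{
    D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0,  /* Shader Model 5.0 class */
        D3D_FEATURE_LEVEL_10_1,  /* Shader Model 4.1 class */
        D3D_FEATURE_LEVEL_10_0,  /* Shader Model 4.0 class */
    };
    D3D_FEATURE_LEVEL got;

    HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                   wanted, 3, D3D11_SDK_VERSION,
                                   dev, &got, ctx);
    /* Returns nonzero only when full SM5 hardware is present. */
    return SUCCEEDED(hr) && got >= D3D_FEATURE_LEVEL_11_0;
}
```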
And yes, there's a relation to GPGPU functions because it all happens in the same place (the stream processors), but having compute shaders doesn't really mean the chip is now "awesome" at running general-purpose code; it's like saying a CPU is good at general-purpose work because its Whetstone score is good. The GPU could be crap at OpenCL and still be very respectable at SM5.0. Apples to oranges; you have to put everything into perspective unless it means exactly that. Good compute shader performance doesn't mean it's good at taking over general-purpose tasks from the CPU; I don't see my CPU doing shaders either. These units were first and foremost, and still are, meant for graphics, not general-purpose work; it just happens that the expansion of their power and feature set opened the door to executing such code in a more viable way.
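For perspective, a compute kernel is just C-style code running on those same stream processors; a minimal OpenCL C sketch (the kernel and its names are mine, purely illustrative):

```c
/* Minimal OpenCL C kernel (OpenCL C is a C dialect); the name "saxpy"
 * and the buffer names are illustrative, not from any real code base.
 * It executes on the very same stream processors that run pixel and
 * vertex shaders -- which is the point: being *able* to run this says
 * nothing about running it well. */
__kernel void saxpy(__global float *y,
                    __global const float *x,
                    const float a)
{
    size_t i = get_global_id(0);  /* one work-item per element */
    y[i] = a * x[i] + y[i];
}
```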
Not really. What I said is that the VLIW architecture didn't stop at R7xx, and for all intents and purposes, around the time Nintendo was known to be using it there wasn't much difference in GFLOPS per watt across that architecture, even if the feature set was a little behind. Point being, putting those features in wouldn't be that hard, since later chips added them to this very same architecture anyway. Do you even know the differences between the previous shader models? I haven't read the SM5 documentation, but I'm guessing it's a forward evolution with more instruction slots and registers (going from SM2 to SM3, for instance, was largely a jump from a 64-instruction arithmetic limit to a 512-instruction minimum); the usual.
Those changes are like making a custom CPU with extra-wide registers or something; certainly doable in a custom design.
What I said is that even if the chip wasn't revised for it, most of the things we're talking about are viable; after all, it's not like Shader Model 5/DirectX 11 is regarded as groundbreaking. It's more a matter of optimizing for the architecture in question.
Remember how BioShock required SM3?
It ended up getting backported without much effort; the team just didn't do it themselves, but they easily could have if it had been for a console (closed platform) release. Point being: it'll still be a modern GPU, and thus stuff can be scaled down for it.
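The PC way of handling that is a caps check with multiple shader paths; something like this D3D9 sketch, where the two path helpers are hypothetical. On a closed platform you skip all of this and ship the one path the GPU likes best:

```c
#include <d3d9.h>

void use_sm3_path(void);      /* hypothetical: full effects, long shaders  */
void use_sm2_fallback(void);  /* hypothetical: scaled-down effect variants */

/* Sketch of a PC-style shader-model fallback.  On a closed platform
 * there's exactly one GPU, so you'd ship a single tuned path instead. */
void pick_shader_path(IDirect3DDevice9 *dev)
{
    D3DCAPS9 caps;
    IDirect3DDevice9_GetDeviceCaps(dev, &caps);  /* C COM-macro form */

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
        use_sm3_path();
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        use_sm2_fallback();
}
```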
Call it a pre-emptive strike if you will, because unlike you I'm not gonna claim it's SM5, or how likely that is. I don't really care to make a guess, so I'm covering both bases.

Being R7xx-based doesn't tell us much tbh; everything excluding the GCN architecture (77xx and 79xx) can be considered R7xx-based. It's not that clear-cut; the generations in between were more about adding peripheral stuff on top of what was there than about changing the core.