I'm a little confused when I see people citing stuff like 'I'm expecting a 5870', or a 6870, 6950, 6970, 6990, 7990 ... whatever. What exactly is meant by that? Do you actually think they'll just take a PC part and throw it in a console? I just don't think it makes sense in the console space. The older cards are missing features, and the upcoming cards have too much performance. Consoles have different requirements than PCs, and I think people are picking the wrong priorities.
When designing a system, Sony and MS obviously want something that's going to last as long as possible. There are a lot of considerations and priorities pulling against an arms race for raw power, however. The goal seems to be an efficient, balanced system that utilizes as much of its theoretical performance as possible (few bottlenecks), while leaning towards the things that will let it remain competitive graphically and feature-wise.
For older-generation cards, sure, you can die-shrink them for improved efficiency, but then you're missing features. And in some cases, even these older cards can be seen as wasting your transistor budget if they're pushing things that don't need to be pushed in the console space. Similarly, you can't expect them to just grab a high-end part from the then-current GPU lines: it's simply too much die size, heat, power draw, and cost. Some sort of custom design is what makes the most sense.
The way I see it, one of the most important things needed to stay competitive with PC peers is feature set. You want to be capable of the same graphics effects/features and compute features for as long as possible, even if the performance doesn't actually match. This is of course the opposite of what many are requesting: taking a current or high-powered last-gen design and shooting for raw performance on current features. In my mind, even if such a design can keep up on things like polygons or texturing, everything instantly looks dated once you start seeing new features regularly utilized on PCs ... a new graphics effect, or some more efficient rendering method (e.g. tessellation) that makes your raw performance capabilities lose relevance.
With that in mind, I would imagine they'd derive their design from Southern Islands ... and include some features expected in the next shader model (obviously this depends on the status of DX at the time). Basically, it will be as up to date as possible. What it won't have is the raw performance of a higher-end PC card - because it isn't needed. A console is not a PC, and therefore has different requirements. It doesn't need as much fill-rate or RAM because it's limited to a single 1080p display for most titles. Like it or not, console users don't have the same expectations for IQ, so it doesn't need to do 16x AA/AF. You get the picture. What I think it does need is to take advantage of what raw power it has. That means things like memory speed and bandwidth are important. Less, but faster, VRAM or use of eDRAM - basically whatever gives the best bang for the buck.
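To put some rough numbers on the "less, faster VRAM" point and the fixed 1080p target, here's a quick back-of-envelope sketch. All figures are purely illustrative assumptions on my part (not specs of any real or rumored console); the point is just that a narrower bus with faster memory can deliver the same peak bandwidth as a wider, slower one, and that a fixed 1080p output caps the pixel throughput you actually need:

```python
# Back-of-envelope only: every number here is an illustrative assumption,
# not a real or rumored console spec.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) times per-pin
    data rate (Gbps), divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# A wide bus with slower memory vs. a narrow bus with faster memory:
wide_slow   = bandwidth_gbs(256, 3.2)  # hypothetical 256-bit @ 3.2 Gbps/pin
narrow_fast = bandwidth_gbs(128, 6.4)  # hypothetical 128-bit @ 6.4 Gbps/pin
print(wide_slow, narrow_fast)          # both come out to 102.4 GB/s

# A fixed 1080p target bounds the pixel throughput needed per frame:
pixels_per_frame = 1920 * 1080         # 2,073,600 pixels
print(pixels_per_frame * 60)           # 124,416,000 pixels/s at 60 fps
```

The narrow-and-fast option trades pin count (board/package cost) for memory speed, which is exactly the kind of bang-for-the-buck decision I mean.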
Basically, it all comes down to making the most balanced design that targets 1080p and has the effects and features needed to not appear 'dated' for as long as possible - even if that means giving up raw performance. I feel such a design would be: a custom part derived from the next-gen GPUs, including any features they can pack in from the next shader model, with less VRAM but higher bandwidth, and pared-down shader units, ROPs, etc. versus a high-end card.