Digital Foundry - Metro Redux (Console Analysis)

This is false. I own both systems and played Destiny on both; the difference is not very noticeable in actual gameplay. Most things are not very noticeable during gameplay... which is why DF has tools and other ways to pick games apart.
You are also comparing Destiny before the patch that takes advantage of the resources freed up from Kinect. That's like everyone comparing the PS4 and X1 versions of Diablo now, when there is a patch coming that will fix an issue with the PS4 version. Just another fanboy. If everyone preferred 1080p you would be a PC gamer. How many PS4 games run at a solid 1080p and 60fps? Exactly, and I think people would obviously prefer a solid frame rate over 1080p.

First, I never said I saw a difference in gameplay between the versions, only that I could tell the PS4 version was a clearer, crisper image. Secondly, the point of talking about them pre-patch was to support the statement I made, not to knock the Xbox version. Thirdly, fanboy? Am I a PS4 fanboy or an Xbox fanboy? I hope you're not accusing me of being a PS4 fanboy, considering I'm quite admittedly a huge Xbox person, but I call it as I see it. The 1080p version of a game > the 900p version of the game.

And my saying most people would prefer the 1080p version has nothing to do with PC gaming, as I never stated that's the only thing people care about; but given the choice between the PS4 or X1 version, most if not all would pick the PS4 version. That's not a knock on the Xbox version, just a benefit of the PS4 version.

Can't even be honest/constructive without being labeled a fanboy.
 
This gen has a ways to go... but it has been shown that a 7850-7870 can run games similarly at PS4-like settings.

Especially when Ryse comes out we will know more about the X1 equivalents. But the PC release of Dead Rising 3 has shown that 720p at lower settings (X1 settings) requires a pretty low-end GPU.

Yes, I'm not surprised one bit by Dead Rising 3's steep requirements, considering that the Xbox One version is 720p.
Personally, I think a 7870 is a bit on the optimistic side if you want PS4 parity. The reason I'm saying this is that compute isn't used that much for now, but later those 8 ACEs will be put to use, and I suspect a beefier GPU will be required just to stay on par. But by then I suppose entry-level hardware will provide the necessary grunt.

As you say Ryse will be a very interesting case.
 
This gen has a ways to go... but it has been shown that a 7850-7870 can run games similarly at PS4-like settings.

Especially when Ryse comes out we will know more about the X1 equivalents. But the PC release of Dead Rising 3 has shown that 720p at lower settings (X1 settings) requires a pretty low-end GPU.

Was going to mention Ryse; it will be interesting to see a similar-power PC on the GPU/CPU side. I'm guessing the CPU will be hard to match, as it's a low-clocked 8-core and I don't know of a desktop 8-core like that. I'd also guess overheads and early dev tools would be an issue when comparing, but either way it will be interesting.
 
33.5% more pixels rendered does not make for anything like a 33.5% better-looking image, though. As resolution goes up, pixel counts grow with the square of the linear resolution, all the while the return for the extra pixels diminishes. Rapidly, in many cases.

It's a number-shock argument rather than anything that meaningful. It says more about the machine running it than about the actual comparison in visuals.
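
To put a rough number on that diminishing return, here is a quick sketch (plain arithmetic, using the 33.5% figure quoted above and ignoring scaler/AA quality):

```python
# Pixel totals grow with the square of the linear resolution, so a
# "percent more pixels" figure overstates the visible gain per axis.
extra_pixels = 1.335                # the 33.5% figure quoted above
per_axis = extra_pixels ** 0.5      # linear scaling per screen axis
print(f"{(per_axis - 1) * 100:.1f}% more pixels along each axis")  # ~15.5%
```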

And you know all this, too...


Others have described what it is, but you may also have heard it called 'shimmering'.

Stop making sense. It apparently amounts to a 33% increase in self e-penis perception for some people, as they keep bringing up this percentage every single time, even though, because pixel count scales with the square of resolution, it doesn't translate into 33% better visuals.

As someone who already owns both consoles, I can say a lot of good things about the PS4, but being 40% more powerful is probably the one thing that has made practically no difference whatsoever. Games look pretty much the same on both systems... If anything, what Sony is doing with developer and publisher relationships is far more important, IMO.
 
Yes, I'm not surprised one bit by Dead Rising 3's steep requirements, considering that the Xbox One version is 720p.
Personally, I think a 7870 is a bit on the optimistic side if you want PS4 parity. The reason I'm saying this is that compute isn't used that much for now, but later those 8 ACEs will be put to use, and I suspect a beefier GPU will be required just to stay on par. But by then I suppose entry-level hardware will provide the necessary grunt.

As you say Ryse will be a very interesting case.

For games that use compute, this could well become reality. For the time being, though, a 7870 runs higher settings at a higher framerate.

A big reason why only a 7870 is required for this is the higher RAM and GPU clock speeds on PC, thanks to greater thermal headroom. While the PS4 GPU runs at 800MHz with a flat 176GB/s of RAM bandwidth, a 7870 is unlocked and runs in excess of 1000MHz quite easily.
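
To illustrate the clock point with some back-of-the-envelope numbers (a sketch using the commonly cited 18 CUs @ 800MHz for the PS4 GPU and 20 CUs @ 1000MHz for a reference 7870; real performance depends on much more than peak ALU throughput):

```python
def gcn_tflops(compute_units, clock_ghz):
    # GCN: 64 ALUs per CU, 2 FLOPs per ALU per clock (fused multiply-add)
    return compute_units * 64 * 2 * clock_ghz / 1000.0

print(gcn_tflops(18, 0.8))   # PS4 GPU, 18 CUs @ 800MHz   -> ~1.84 TFLOPS
print(gcn_tflops(20, 1.0))   # HD 7870, 20 CUs @ 1000MHz  -> ~2.56 TFLOPS
```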
 
Was going to mention Ryse; it will be interesting to see a similar-power PC on the GPU/CPU side. I'm guessing the CPU will be hard to match, as it's a low-clocked 8-core and I don't know of a desktop 8-core like that. I'd also guess overheads and early dev tools would be an issue when comparing, but either way it will be interesting.

Here is what sebbi from RedLynx has to say on the matter:

High end quad core PC CPUs aren't the problem, since a single 3+ GHz Haswell core can run tasks of two lower clocked Jaguar cores in the same allocated time slot (16.6 ms)
http://forum.beyond3d.com/showthread.php?t=64968&page=24

I assume the CPU won't be much of a problem. VRAM and compute will be the major points of contention, I reckon.
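
The cycle budget behind sebbi's point works out roughly like this (assuming ~1.6GHz Jaguar cores as in the PS4 and a 3.2GHz Haswell core; his claim also leans on Haswell's much higher IPC, which this ignores):

```python
frame_s = 1 / 60                       # the 16.6ms time slot he mentions

jaguar_cycles  = 1.6e9 * frame_s       # ~26.7M cycles per Jaguar core per frame
haswell_cycles = 3.2e9 * frame_s       # ~53.3M cycles per Haswell core per frame

# On clock speed alone, one Haswell core covers two Jaguar cores' cycle budget.
print(haswell_cycles / jaguar_cycles)  # ~2.0
```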

For games that use compute, this could well become reality. For the time being, though, a 7870 runs higher settings at a higher framerate.
A big reason why only a 7870 is required for this is the higher RAM and GPU clock speeds on PC, thanks to greater thermal headroom. While the PS4 GPU runs at 800MHz with a flat 176GB/s of RAM bandwidth, a 7870 is unlocked and runs in excess of 1000MHz quite easily.
Thanks for clearing up things for me. ;)

I feel constrained and very uncomfortable with my current GPU (a 770 with only 2GB of onboard memory). Even though the memory bandwidth is decent, I want more VRAM, at least 4GB.

Not really. There is no evidence or research so far that you need more than two ACEs to use asynchronous compute effectively.
This raises the question, then: why did Sony go the extra mile and shove eight into the console's GPU? There must be a good reason.
The R9 290X has 8 ACEs, so I'm led to believe this will greatly improve performance. My point is that the hardware required right now to match the PS4 1:1 in multiplats will not cut it in the next 18 months or so. I can't believe developers have really hit the ceiling just yet.
 
The reason I'm saying this is that compute isn't used that much for now, but later those 8 ACEs will be put to use, and I suspect a beefier GPU will be required just to stay on par

Not really. There is no evidence or research so far that you need more than two ACEs to use asynchronous compute effectively.
 
Stop making sense. It apparently amounts to a 33% increase in self e-penis perception for some people, as they keep bringing up this percentage every single time, even though, because pixel count scales with the square of resolution, it doesn't translate into 33% better visuals.

As someone who already owns both consoles, I can say a lot of good things about the PS4, but being 40% more powerful is probably the one thing that has made practically no difference whatsoever. Games look pretty much the same on both systems... If anything, what Sony is doing with developer and publisher relationships is far more important, IMO.

Exactly how I feel. I own both but still buy most games on XB1, and will for Metro as well.
 
I...I'm sorry. I didn't want to open a can of worms.
My question is genuine, I don't want to derail the thread or anything.



So, according to you, what PC are we looking at for PS4-equivalent settings in 8th-gen games?
Am I just too pessimistic with my earlier estimate?


That is true in my experience, hence why I was curious.

I don't see why it would be much different from previous generations. Heck, I don't understand why the idea of console optimisation is such a problematic reality for some PC users.

Because they are talking about specific implementations of algorithms and DX9, not full games.
There is not a single game in the last 10 years that required twice the performance on PC compared to a console at similar settings. Not one.
Then again weren't we hearing that PC hardware was up to 2 generations ahead of console hardware?
 
I feel constrained and very uncomfortable with my current GPU (a 770 with only 2GB of onboard memory). Even though the memory bandwidth is decent, I want more VRAM, at least 4GB.
Good texture streaming should help. I say "good" because the recent, poorly designed streaming in games like Titanfall and Watch Dogs also makes me worry.

At worst, you have to turn down the texture resolution/streaming cache... but you still get much higher framerates than the console version, with better FX quality.

Nvidia got away with some pretty dastardly VRAM amounts in the GK104 family. Heck, even 3GB on the 780Ti is waaaaaaay too little for the asking price.
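
Some rough footprint arithmetic shows why 2GB gets tight (a sketch assuming block-compressed textures at ~1 byte per texel, e.g. DXT5/BC3, with a full mip chain adding about a third; the 400-texture figure is purely illustrative):

```python
def texture_mib(width, height, bytes_per_texel=1.0, mips=True):
    base = width * height * bytes_per_texel
    return (base * 4 / 3 if mips else base) / 2**20

print(texture_mib(2048, 2048))               # ~5.3 MiB per 2K DXT5 texture
print(400 * texture_mib(2048, 2048) / 1024)  # ~2.1 GiB for 400 of them, before
                                             # render targets, shadow maps, geometry
```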
 
Not really. There is no evidence or research so far that you need more than two ACEs to use asynchronous compute effectively.

AMD isn't upping ACE quantity in its newer chips for nothing. Context switching between compute and graphics has been slow in older gen GPUs. To the point that AMD previously recommended using one GPU (or APU) for compute loads and a separate one for rendering if you wanted to maximise use of resources in mixed-task loads, rather than context switching on one chip. Longer context switching = wasted resources.
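
A toy timing model of that last point (made-up numbers, not real GPU code): if graphics and compute have to share the chip serially via context switches the frame pays for both, whereas async compute lets part of the compute work hide in idle "bubbles" of the graphics workload:

```python
graphics_ms, compute_ms, idle_bubble_ms = 12.0, 4.0, 3.0   # illustrative only

serial_ms = graphics_ms + compute_ms                             # 16.0 ms
async_ms  = graphics_ms + max(0.0, compute_ms - idle_bubble_ms)  # 13.0 ms
print(serial_ms, async_ms)
```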
 
Overbrightening can make things look soft, but why did DF only apply it to the PS4 version? As you can see, the contrast in the X1 version is very similar to the older PS4 shot. Regardless of what's being done, everything in the newer PS4 shot looks blurrier than the older one.
The new one isn't overbright; brightness looks about the same between the two images. The only difference that leaps out at me is that the second image has pretty low contrast, which does make it look a little washed out on this screen. It's likely that they were captured differently.

*A couple minutes later.*

I checked the image's luminance histograms. Aside from the "PS4" on the image (which is pure white), the data is all within the 16-240 region. It looks like DF was interpreting limited-range incoming video data as full range.

So yeah, the newer images are messed up. The data should have been presented over a larger luminance range.
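
For reference, a minimal sketch of the levels expansion that would have been needed, assuming standard 8-bit limited-range luma (16-235 mapping to 0-255; chroma tops out at 240, which fits the 16-240 spread noted above):

```python
def limited_to_full(y):
    # Expand 8-bit limited-range luma to full range, clamped
    return max(0, min(255, round((y - 16) * 255 / 219)))

print(limited_to_full(16), limited_to_full(235))  # 0 255
```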
 
Good texture streaming should help. I say "good" because the recent, poorly designed streaming in games like Titanfall and Watch Dogs also makes me worry.
At worst, you have to turn down the texture resolution/streaming cache... but you still get much higher framerates than the console version, with better FX quality.
Nvidia got away with some pretty dastardly VRAM amounts in the GK104 family. Heck, even 3GB on the 780Ti is waaaaaaay too little for the asking price.

Titanfall does not use streaming at all; it loads all the textures at once, the reason given being the frantic pace of the game. Watch Dogs uses ludicrous amounts of VRAM, while the textures themselves don't seem to justify those requirements.
I know 2GB won't cut it for much longer; 4GB, even on a 256-bit bus, should be enough.

I also agree with you regarding the 780 Ti; it's crazy what Nvidia can get away with sometimes.
 
Titanfall does not use streaming at all; it loads all the textures at once, the reason given being the frantic pace of the game. Watch Dogs uses ludicrous amounts of VRAM, while the textures themselves don't seem to justify those requirements.
I know 2GB won't cut it for much longer; 4GB, even on a 256-bit bus, should be enough.

I also agree with you regarding the 780 Ti; it's crazy what Nvidia can get away with sometimes.

That was their excuse, but some of the fastest-paced games ever made have used texture streaming.

It is a bad tech decision... nothing more :D
 
AMD isn't upping ACE quantity in its newer chips for nothing. Context switching between compute and graphics has been slow in older gen GPUs. To the point that AMD previously recommended using one GPU (or APU) for compute loads and a separate one for rendering if you wanted to maximise use of resources in mixed-task loads, rather than context switching on one chip. Longer context switching = wasted resources.

The people from Oxide who were experimenting with asynchronous compute in Mantle did not have a problem with two ACEs.

----
I'm guessing Mantle, DX12 and the new NVIDIA drivers are bringing a myth over to PCs, then?

Go and check what they are really bringing and to what components.
 
I'm guessing Mantle, DX12 and the new NVIDIA drivers are bringing a myth over to PCs, then?

No, of course not. They will enable PCs to handle amazing draw call counts and open up new anti-aliasing and lighting methods.

No console game is currently pushing ridiculous draw call counts at all, and I question whether the consoles have the power, despite their ability to batch draws better, to take advantage of that.
 
I'm guessing Mantle, DX12 and the new NVIDIA drivers are bringing a myth over to PCs, then?

They'll widen the gap even more and allow for better utilization of powerful hardware. As we've already seen with Mantle, the biggest gains are found with low-end CPUs or ultra-high-end GPUs; in typical cases the benefits are quite small.
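
A toy model of why the gains cluster at those extremes (made-up numbers; the frame simply takes as long as the slower side):

```python
def frame_ms(cpu_submit_ms, cpu_game_ms, gpu_ms):
    # CPU work (game logic + draw submission) and GPU work overlap;
    # the frame is bounded by whichever finishes last.
    return max(cpu_submit_ms + cpu_game_ms, gpu_ms)

# GPU-bound mid-range rig: halving submission overhead changes nothing.
print(frame_ms(6, 8, 20), frame_ms(3, 8, 20))    # 20 20
# Weak CPU, strong GPU: the same saving shows up in full.
print(frame_ms(12, 10, 14), frame_ms(6, 10, 14)) # 22 16
```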

Edit: What Dictator93 said.
 
Does that mean they could not benefit from more?

No, this doesn't mean they are useless, but it means that for most use cases two ACEs shouldn't be a bottleneck in compute-related tasks.
It's not like we haven't used compute in the past; plenty of games have already used it extensively.
 
No, of course not. They will enable PCs to handle amazing draw call counts and open up new anti-aliasing and lighting methods.

No console game is currently pushing ridiculous draw call counts at all, and I question whether the consoles have the power, despite their ability to batch draws better, to take advantage of that.
They've certainly helped bring optimisation over to the PC side, so it's good news.

However, what's with the mass denial of console optimisation from a section of the PC gamers here? After all, those things are only trying to emulate the fixed-hardware nature of consoles.
 
They've certainly helped bring optimisation over to the PC side, so it's good news.

However, what's with the mass denial of console optimisation from a section of the PC gamers here? After all, those things are only trying to emulate the fixed-hardware nature of consoles.

Historical precedent, mainly.
 
No, this doesn't mean they are useless, but it means that for most use cases two ACEs shouldn't be a bottleneck in compute-related tasks.
It's not like we haven't used compute in the past; plenty of games have already used it extensively.

Thanks for the clarification. I know compute has already been used, but not exactly on a wide scale. Codemasters used it for their GI in DiRT Showdown and GRID 2/Autosport, Sleeping Dogs uses it for HDAO, 4A Games for the DOF, and the same goes for Irrational Games in BioShock Infinite.

I wonder what practical use developers will make of compute now that all three of their target platforms support it.

However, what's with the mass denial of console optimisation from a section of the PC gamers here? After all, those things are only trying to emulate the fixed-hardware nature of consoles.
Careful here. I don't think they believe low-level access brings nothing valuable in return, but rather that the impact is overstated.
 
They've certainly helped bring optimisation over to the PC side, so it's good news.

However, what's with the mass denial of console optimisation from a section of the PC gamers here? After all, those things are only trying to emulate the fixed-hardware nature of consoles.

It's not denial, my good man; it is the complete and utter lack of proof. I would be more than happy to accept the awesome benefits of console optimization if someone would show some real-world examples of said optimization in action. When I read Digital Foundry articles and see the 7790 beating the Xbox One and the 7870 beating the PS4, I can't accept it as gospel.
 
The people from Oxide who were experimenting with asynchronous compute in Mantle did not have a problem with two ACEs.

Define 'did not have a problem'..?

Mantle exposes control over the ACEs, and thus helps devs manage things better. That certainly presents an improvement over the situation they were in before, but it does not mean two ACEs present an optimal scenario. It's also probably something that depends on what you're loading the chip with.

You're appealing to authority in Oxide, but forgive me if I appeal to AMD's authority on this, and their presentations about heterogeneous computing and the need for more ACE hardware in future chips to improve things further.

It's not like we haven't used compute in the past; plenty of games have already used it extensively.

I wouldn't say extensively. GPU compute has mostly been limited to 'out of loop' tasks to date (i.e. 'fluff' effects work). There are a number of steps needed to improve the versatility and efficiency of GPU compute, and they have only started to emerge recently. I wouldn't call it a mature field yet.
 
However, what's with the mass denial of console optimisation from a section of the PC gamers here? After all, those things are only trying to emulate the fixed-hardware nature of consoles.
It's mostly because most people here who talk about 'coding to the metal' or 'console optimisations' think that PC hardware is running games at 50% efficiency, which is a ridiculous concept from the get-go.

Mantle, OpenGL and DX12 are big steps for PC, not because they will enable twice the throughput, but because they will:
- eliminate bottlenecks at the high end [multi-GPU and multi-CPU scaling] and the low end [CPU optimizations]
- enable new techniques to be used universally across all platforms, like access to hardware-based AA not only via API calls but also via shader or compute calls, different types of rasterization, lower-level access to the tessellators, different handling of OIT or transparency shadowing
- allow for asynchronous compute

---
You're appealing to authority in Oxide, but forgive me if I appeal to AMD's authority on this, and their presentations about heterogeneous computing and the need for more ACE hardware in future chips to improve things further.
Which haven't seen a single use to this day, and on the contrary Nvidia showed that they didn't need to change their compute call handling for their Flex and FlameWorks systems, which rely not only heavily on compute but also on low latency and synchronization.
 
I really hate the word 'optimisation'. It's like people just throw it around in graphics discussions without giving anything remotely technical or in-depth as to what they mean by it.
 
It's not denial, my good man; it is the complete and utter lack of proof. I would be more than happy to accept the awesome benefits of console optimization if someone would show some real-world examples of said optimization in action. When I read Digital Foundry articles and see the 7790 beating the Xbox One and the 7870 beating the PS4, I can't accept it as gospel.
Then surely console games have never looked better over the course of the gen, or am I a console pleb for saying otherwise? The rhetoric has been used before, I assure you, and time and time again it's by the same people who swear their PC hardware will outperform consoles for the rest of the gen.

Then again, do you call the improvement from Gears 1 to Judgment a lie, or The Witcher 2 running at respectable graphics settings on a console from 2005 a lie, or is it just insecure denial?
 
Which haven't seen a single use to this day, and on the contrary Nvidia showed that they didn't need to change their compute call handling for their Flex and FlameWorks systems, which rely not only heavily on compute but also on low latency and synchronization.

Let's not confuse need with desirability. As far as nVidia hardware goes, I'm not so well versed with their pipeline - maybe it has for some time been well set up to manage context switching between render and compute tasks to really minimise any downtime in the switch between task types.

But I'm not saying you NEED more ACE hardware to do compute, or even impressive levels of compute work, on a given GPU. I'm saying that AMD's research and roadmaps suggest that improvements in ACE setups will help to more efficiently utilise a given chip in mixed workloads.

As for seeing this pan out in the real world, I guess we should wait to compare compute-heavy games on hardware that is otherwise similar except for the ACE setup. But you were suggesting there was no research to say there was a benefit... I'm guessing AMD's decision to significantly beef up this hardware isn't coming purely from a hunch.
 
Then surely console games have never looked better over the course of the gen, or am I a console pleb for saying otherwise? The rhetoric has been used before, I assure you, and time and time again it's by the same people who swear their PC hardware will outperform consoles for the rest of the gen.

Then again, do you call the improvement from Gears 1 to Judgment a lie, or The Witcher 2 running at respectable graphics settings on a console from 2005 a lie, or is it just insecure denial?

We've already been over this. PC hardware that ran Gears 1 better than the Xbox 360 also ran games from 2011/2012 better.
There are examples on YouTube for games like Battlefield 3, Tomb Raider, Crysis 2 and Mass Effect 2/3. The optimizations that hit consoles also benefited PC hardware at a similar rate.

----
But I'm not saying you NEED more ACE hardware to do compute, or even impressive levels of compute work, on a given GPU. I'm saying that AMD's research and roadmaps suggest that improvements in ACE setups will help to more efficiently utilise a given chip in mixed workloads.
That's why I said they are not useless, but they shouldn't decide performance in multiplatform games a few years from now.
 
Yes slightly, by rendering 33.5% more pixels on screen.


And the result is a slight difference visually. Pixel count isn't everything that makes up next-gen visuals; in fact it plays a really small role compared to the advances in lighting, textures, subsurface scattering, post-processing effects and all of the assets that make next-gen games look next-gen. As you can see with both games side by side, it's not the 33.5% more pixels that differentiates them from the last-gen versions.
 
Then surely console games have never looked better over the course of the gen, or am I a console pleb for saying otherwise? The rhetoric has been used before, I assure you, and time and time again it's by the same people who swear their PC hardware will outperform consoles for the rest of the gen.

Then again, do you call the improvement from Gears 1 to Judgment a lie, or The Witcher 2 running at respectable graphics settings on a console from 2005 a lie, or is it just insecure denial?

Look, I really don't want to get into this argument again when the other side consistently fails to provide evidence to support its claims. Read this article, I'm out.

http://www.eurogamer.net/articles/digitalfoundry-2014-r7-260x-vs-next-gen-console
 
Look, I really don't want to get into this argument again when the other side consistently fails to provide evidence to support its claims. Read this article, I'm out.

http://www.eurogamer.net/articles/digitalfoundry-2014-r7-260x-vs-next-gen-console

If evidence is to be had, it'll most likely emerge more dramatically in the second half of the gen rather than in a launch title comparison. At least in previous generations the relationship between console and PC performance of a similar spec hasn't stayed static over the course of a gen, far from it. People say this gen will be different, but we'll see I guess.
 
If evidence is to be had, it'll most likely emerge more dramatically in the second half of the gen rather than in a launch title comparison. At least in previous generations the relationship between console and PC performance of a similar spec hasn't stayed static over the course of a gen, far from it. People say this gen will be different, but we'll see I guess.

This gen is different, but HSA and GPGPU features will still allow it to grow over time. I do think the comparisons to PC hardware will play out substantially differently than before, but that doesn't mean games won't continue to improve as the gen goes on.
 
If evidence is to be had, it'll most likely emerge more dramatically in the second half of the gen rather than in a launch title comparison. At least in previous generations the relationship between console and PC performance of a similar spec hasn't stayed static over the course of a gen, far from it. People say this gen will be different, but we'll see I guess.
Examples?
Because I have one:
https://www.youtube.com/watch?v=jHWPGmf_A_0
 
Mantle, OpenGL and DX12 are big steps for PC, not because they will enable twice the throughput, but because they will:
- eliminate bottlenecks at the high end [multi-GPU and multi-CPU scaling] and the low end [CPU optimizations]
- enable new techniques to be used universally across all platforms, like access to hardware-based AA not only via API calls but also via shader or compute calls, different types of rasterization, lower-level access to the tessellators, different handling of OIT or transparency shadowing
- allow for asynchronous compute
You're right, these improvements are being brought over to PCs, but they are also the very things you can do much more easily when you have completely fixed hardware, which just so happens to be what consoles are. Mantle, DX12, OpenGL etc. can only do so much when the hardware is different for every setup.

Look, I really don't want to get into this argument again when the other side consistently fails to provide evidence to support its claims. Read this article, I'm out.

http://www.eurogamer.net/articles/digitalfoundry-2014-r7-260x-vs-next-gen-console
You're out, yet you're confident enough to claim there is no proof.
 
Then again weren't we hearing that PC hardware was up to 2 generations ahead of console hardware?

The high-end is even further ahead and the mid-range is doing very well, something that wasn't necessarily the case in 2005. Things have changed quite a bit.

Then surely console games have never looked better over the course of the gen, or am I a console pleb for saying otherwise? The rhetoric has been used before, I assure you, and time and time again it's by the same people who swear their PC hardware will outperform consoles for the rest of the gen.

On the other side are people who are apparently convinced that PCs lose performance over time and can't deal with increased graphical quality without investing big amounts of money because 'no optimization'.
 
The high-end is even further ahead and the mid-range is doing very well, something that wasn't necessarily the case in 2005. Things have changed quite a bit.



On the other side are people who are apparently convinced that PCs lose performance over time and can't deal with increased graphical quality without investing big amounts of money because 'no optimization'.
You are wasting your time arguing against someone who read a few Twitter comments by Carmack and now thinks they are a tech expert.
 
Specs of the system in your vid:

Intel C2D E6500 2.93 GHz
ATi Radeon X1950
4GB RAM

That's hardware that's 3-4x more powerful than a 360, and it's only on par? We haven't even factored in cost!

On the other side are people who are apparently convinced that PCs lose performance over time and can't deal with increased graphical quality without investing big amounts of money because 'no optimization'.
Glad I'm not one of them.
 
Specs of the system in your vid:

Intel C2D E6500 2.93 GHz
ATi Radeon X1950
4GB RAM

That's hardware that's 3-4x more powerful than a 360, and it's only on par? We haven't even factored in cost!

Glad I'm not one of them.

An E6600 and an X1950 are definitely not 3-4x more powerful than Xenos and the six-threaded PPC in the 360...

The point isn't about cost...
 
But then you induce all of the artifacting (as well as over-softening) which occurs.

Also, where does the TLOU remaster use volumetric lights? I would not mind seeing some screens of it!



As I stated earlier, comparing the volumetric lighting in KZSF to Metro is disingenuous. One looks much better than the other.

Artifacting, yes, but TLoU and Uncharted did it well without inducing much of it, especially the remaster, where it's pretty much non-existent. Also, volumetric lighting should look soft, because it's basically light interacting with dust particles, and dust particles give it a soft edge; I always had a personal dislike of how sharp it looked in Metro 2033.

In TLoU, all enemy flashlights are volumetric, plus a lot of other lights.
[image: EWnWjmY.jpg]

Example of an enemy flashlight interacting with the player.
http://i.cubeupload.com/hfYtmz.jpg

Here you can see the shadow it produces despite the light being off-screen (screen-space light shafts don't produce shadowed shafts... or whatever they're called). This isn't the best shot I could take in this location, but it's apparent near his head.
[image: YfnQgbh.jpg]

[image: pi5rPLu.jpg]

[image: FCDlIse.jpg]


http://i.imgur.com/uFsFTkt.jpg
http://i.imgur.com/qJrWTjC.jpg

Any artifacts you see (like on Joel's T-shirt in the last pic) are due to Facebook.
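
For what it's worth, a minimal sketch of why a true volumetric light casts shadowed shafts even when the source is off-screen (not how Naughty Dog actually implement it; shadow_visibility stands in for a hypothetical shadow-map lookup):

```python
def volumetric_scattering(origin, direction, max_dist, steps, shadow_visibility, density=0.02):
    # March along the view ray and accumulate in-scattering only where the
    # light actually reaches the air; screen-space shafts can't do this,
    # since they only smear what is already bright on screen.
    step_len = max_dist / steps
    scattered = 0.0
    for i in range(steps):
        t = (i + 0.5) * step_len
        p = tuple(origin[k] + direction[k] * t for k in range(3))
        scattered += shadow_visibility(p) * density * step_len  # 1.0 lit, 0.0 occluded
    return scattered
```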
 
Those TLOU shots really don't look at all comparable to what was posted of Metro 2033 earlier in the thread, volumetric lighting wise. They could almost just be a few screen-aligned sprites.

That is surprising to me; why would he make such claims, then?
The problem is a lack of exactness when people talk about this (greatly exacerbated by the age of Twitter communication and its byte-sized quotables).

When you run a given HLSL shader on a console GPU and an equivalent PC GPU, they'll perform the same. If the PC GPU is twice as fast the shader will run twice as fast. How could it be different? The same shader compiler made by the same company is compiling it, and the same hardware is running it. Now, game developers may spend more time low-level optimizing a shader specifically for a single console GPU than they do for every PC GPU, but how much of a difference such hardware-specific optimizations really make greatly depends on the situation, and it's a huge time investment.
 

Compare that GPU vs a 360 with 2005/2006-era titles vs 2011+ era multiplats. It would have chewed Oblivion up, but the same couldn't be said for Skyrim, for example. I'm not sure it would even run some of the latest stuff. It would have been fine with CoD 2006, but Ghosts? Assassin's Creed (2007), but Black Flag?

'Spec inflation' on multiplats and console ports as the last gen wore on was a thing. Maybe it won't be this gen, but I wouldn't be terribly confident betting on that.
 