Digital Foundry: Hands-on with the Forza Horizon 2 demo

Going from no AF to 16x AF in Skyrim costs at most 1 FPS, even on low-end GPUs. It is so trivial today that there is no reason not to have it; for the cost, it adds a lot to the IQ of the scene. Not sure why devs have struggled with it in so many releases so far this gen.

Since almost every dev on every console seems not to be implementing it at high-quality settings, I'd imagine it's more likely you're mistaken about it having no performance cost, considering the modicum of effort it would take to enable.
 
I was amazed at the image quality of the demo; it's probably the first game this gen that has really impressed me graphically.
 
I would really like to know how they are handling the data on the xbone and how they manage it between DDR3 and ESRAM... in my understanding, the ESRAM works like a cache, but you have more influence over how it behaves and what it does for you.

MS's guidelines for performance on the xbone are actually the opposite: MS says everything should happen in ESRAM, and you should use DDR3 as a big cache and the move engines as a way to move data in and out of ESRAM ahead of time.

Most of it is probably NDA'd, but they do have some public patents for streaming systems using pretty much the xbone setup, which give a better picture of how this scenario could work.

But basically, according to MS, whether it's drawing pixels, processing vertex data, or reading data for filtering or shading, pretty much everything is faster out of ESRAM than out of main RAM.
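As a rough illustration of that guideline, here is a minimal sketch of a frame loop that keeps its render target and working data in ESRAM while the move engines stream the next chunk in from DDR3. Every name in it (allocate_esram, dma_copy_async, and so on) is a hypothetical placeholder, since the real XDK API is under NDA; only the ordering matters, i.e. kicking off the copy for the next chunk before drawing the current one, so the DMA overlaps GPU work.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch only: ESRAM as the hot working set, DDR3 as the big backing store.
// None of these types or functions are the real XDK API; they stand in for "a 32 MB fast
// pool", "the DMA/move engines", and "GPU work submission".

struct EsramBuffer { void* ptr; std::size_t size; };   // lives in the 32 MB ESRAM pool
struct Ddr3Buffer  { void* ptr; std::size_t size; };   // lives in main DDR3 memory
struct DmaFence    { int id; };                        // signalled when a move-engine copy completes

EsramBuffer allocate_esram(std::size_t bytes);                  // hypothetical suballocator
DmaFence    dma_copy_async(EsramBuffer dst, Ddr3Buffer src);    // hypothetical move-engine copy
void        wait_for(DmaFence f);                               // hypothetical fence wait
void        render_chunk(const EsramBuffer& rt, const EsramBuffer& assets);
void        resolve_to_ddr3(const EsramBuffer& rt, Ddr3Buffer backbuffer);

constexpr std::size_t RENDER_TARGET_BYTES = 1920u * 1080u * 4u;  // e.g. one 32-bit 1080p target
constexpr std::size_t CHUNK_BYTES         = 4u * 1024u * 1024u;  // e.g. one streamed asset chunk

void render_frame(const std::vector<Ddr3Buffer>& chunks, Ddr3Buffer backbuffer)
{
    if (chunks.empty())
        return;

    // The render target stays resident in ESRAM for the whole frame, so blending,
    // depth and MSAA traffic hits the fast pool instead of DDR3.
    EsramBuffer rt = allocate_esram(RENDER_TARGET_BYTES);

    // Double-buffered staging area for streamed-in assets.
    EsramBuffer staging[2] = { allocate_esram(CHUNK_BYTES), allocate_esram(CHUNK_BYTES) };

    DmaFence inflight = dma_copy_async(staging[0], chunks[0]);   // prefetch the first chunk

    for (std::size_t i = 0; i < chunks.size(); ++i)
    {
        wait_for(inflight);                                      // chunk i is now in ESRAM

        // Kick the move engines on chunk i+1 *before* drawing chunk i,
        // so the copy overlaps GPU work instead of stalling it.
        if (i + 1 < chunks.size())
            inflight = dma_copy_async(staging[(i + 1) % 2], chunks[i + 1]);

        render_chunk(rt, staging[i % 2]);
    }

    resolve_to_ddr3(rt, backbuffer);   // the finished image goes back to DDR3 for scan-out
}
```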
 
With the environment style they went with, things could have ended up looking rough without good IQ. That 4xAA makes the valley scenery look fantastic.

 
And I'm speaking to my arcade gaming self from the 80s: "And then it will be 2014, and the framerate in racing games will be..." 80s self: "...I BET WE WILL RUN AT FUCKING 240FPS WITH ALL 3D AND VR HELMETS." Today's self: "No, but in 2014 half the framerate will be considered an amazing job, and games will look OK. You can drive off the track if you feel like it." 80s self: "-.'-"
 
MS's guidelines for performance on the xbone are actually the opposite: MS says everything should happen in ESRAM, and you should use DDR3 as a big cache and the move engines as a way to move data in and out of ESRAM ahead of time.

Most of it is probably NDA'd, but they do have some public patents for streaming systems using pretty much the xbone setup, which give a better picture of how this scenario could work.

But basically, according to MS, whether it's drawing pixels, processing vertex data, or reading data for filtering or shading, pretty much everything is faster out of ESRAM than out of main RAM.

Oh, any details on this? This sounds wrong, especially when dealing with big data, but I don't know the current state of compression techniques and data partitioning.
So the move engines are there to accelerate data transfers between ESRAM and DDR3, OK. Still, MS puts the memory pyramid upside down, it seems.

Edit: found something on dualshockers, thanks.
 
It's the higher bandwidth requirement that might be the bottleneck here. AF with uncompressed textures can be very bandwidth heavy. You don't need additional space in RAM, AFAIK, as AF works by reading those textures.
Uncompressed textures?

What confuses me is the odd variation in texture sharpness close to the camera here.

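To put a rough number on the "bandwidth heavy" point, here is a back-of-envelope comparison of how many bytes a single worst-case 16x AF sample can touch with an uncompressed RGBA8 texture versus a block-compressed one. The figures (4 bytes per texel for RGBA8, 0.5 for BC1, 16 trilinear probes of 8 texels each) are standard texture-format math, not measurements from this game.

```cpp
#include <cstdio>

// Back-of-envelope only: worst-case texel footprint of one 16x anisotropic sample,
// comparing an uncompressed 32-bit texture against a BC1/DXT1 block-compressed one.
int main()
{
    const int   texels_per_sample = 16 * (2 * 4);   // 16 taps x (2 mip levels x 2x2 bilinear footprint)
    const float bytes_rgba8       = 4.0f;           // uncompressed RGBA8: 4 bytes per texel
    const float bytes_bc1         = 0.5f;           // BC1: a 4x4 block stored in 8 bytes

    std::printf("worst-case bytes per sample, RGBA8: %.0f\n", texels_per_sample * bytes_rgba8); // 512
    std::printf("worst-case bytes per sample, BC1:   %.0f\n", texels_per_sample * bytes_bc1);   // 64

    // Neighbouring pixels reuse most of those texels through the TMU caches, so real
    // traffic is far lower, but the 8x per-texel difference is why uncompressed
    // textures make anisotropic filtering bandwidth-heavy.
    return 0;
}
```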
 
And I'm speaking to my arcade gaming self from the 80s: "And then it will be 2014, and the framerate in racing games will be..." 80s self: "...I BET WE WILL RUN AT FUCKING 240FPS WITH ALL 3D AND VR HELMETS." Today's self: "No, but in 2014 half the framerate will be considered an amazing job, and games will look OK. You can drive off the track if you feel like it." 80s self: "-.'-"

Your 80s self would pass out if it saw FH2 in motion on a 1080p set.

Your 80s self would call you a liar for calling real footage a "game", when waking back up.
 
Since almost every dev on every console seems not to be implementing it at high-quality settings, I'd imagine it's more likely you're mistaken about it having no performance cost, considering the modicum of effort it would take to enable.

I doubt it very much; AF is AF. There is nothing special about it that would suddenly make it cost 10x more than it does on PC, especially considering how close the consoles are to closed-box PCs with a low-overhead OS and APIs. That should make it even easier to implement, but for some reason devs just are not bothering with it, and it is baffling.
 
Oh, any details on this? This sounds wrong, especially when dealing with big data, but I don't know the current state of compression techniques and data partitioning.
So the move engines are there to accelerate data transfers between ESRAM and DDR3, OK. Still, MS puts the memory pyramid upside down, it seems.

Edit: found something on dualshockers, thanks.

Some slides regarding ESRAM from a conference:
 
PGR4 sold almost 2 million copies...

Yeah, but it was bundled in Europe and it was also bundled with the Live messenger kit. The initial sales weren't great at all in comparison to previous entries in the series. It was a poor decision by Microsoft to release it so close to Halo 3.
 
Your 80s self would pass out if it saw FH2 in motion on a 1080p set.

Your 80s self would call you a liar for calling real footage a "game", when waking back up.

Sure, and he would then touch the controller and say "Damn, it may look great, but that feels like someone programmed some bubble gum into the physics."
 
And I'm speaking to my arcade gaming self from the 80s: "And then it will be 2014, the framerate in racing games will be..." 80s-self: "...I BET WE WILL RUN AT FUCKING 240FPS WITH ALL 3D AND VR HELMETS." Todays self: "No, but in 2014, half the framerate will be considered as an amazing job and games shall look ok. You can drive off track if you feel like it." 80s-self:"-.'-"

You want the same graphics difference that there was between arcades and home consoles in the 80s? Sure, just pay eight thousand dollars for custom-made hardware (not the rebuilt PC they use now in arcades) that only plays ten or so games written specifically for it, sucks electricity like a champion, and takes up as much space as a small car.

Sure, and he would then touch the controller and say "Damn, it may look great, but that feels like someone programmed some bubble gum into the physics."

Compared to what? OutRun?
 
Forza Horizon 1 already had MSAA (2x, I think). That's why it's so nice looking.

Anti-Aliasing is so important.

They had this stupid orange filter that made this game look better than it was.

But still, oh my gawd, I fcking loved FH1. I want to play 2 but don't have the money for an X1 ㅠ.ㅠ
 
We've had dynamic MSAA in games like Resident Evil 5; couldn't you max out texture filtering at lower speeds/rendering loads?
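If anyone wants to picture what that would look like, here is a minimal sketch of the idea, assuming an engine that already tracks GPU frame time: scale the sampler's max anisotropy up when there is headroom and back down when over budget, the same way dynamic MSAA or dynamic resolution is driven. The function and the frame-time hook are made up for illustration; only the MaxAnisotropy sampler field is a real D3D11 setting.

```cpp
#include <algorithm>

// Hypothetical sketch: drive the AF level from rendering load, like dynamic MSAA.
// choose_max_anisotropy() and the frame-time source are assumptions, not a real engine API.
int choose_max_anisotropy(float gpu_frame_ms, float target_ms, int current_af)
{
    // Plenty of headroom: step the AF level up (1 -> 2 -> 4 -> 8 -> 16).
    if (gpu_frame_ms < target_ms * 0.90f)
        return std::min(current_af * 2, 16);

    // Over budget: step back down before touching resolution or frame rate.
    if (gpu_frame_ms > target_ms)
        return std::max(current_af / 2, 1);

    return current_af;   // within budget, leave it alone
}

// Assumed usage, once per frame:
//   af_level = choose_max_anisotropy(last_gpu_ms, 16.6f, af_level);
//   sampler_desc.MaxAnisotropy = af_level;   // real D3D11_SAMPLER_DESC field; recreate the sampler after changing it
```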
 
This is perhaps the most impressive aspect.

They pulled it off on 360 and now they've done it again.

And to think, this is only their second game....

Wizards, yo.

Well, it's not technically their 2nd game; Playground Games is more or less Bizarre Creations, so they are thoroughbreds in racing games.
 
I doubt it very much; AF is AF. There is nothing special about it that would suddenly make it cost 10x more than it does on PC, especially considering how close the consoles are to closed-box PCs with a low-overhead OS and APIs. That should make it even easier to implement, but for some reason devs just are not bothering with it, and it is baffling.

Do you think the people at all these different development houses are just dumb? I sort of mean that seriously: the lack of AF is pretty common across the board, including titles that are skillfully made in most other technical areas. I'm more inclined to think there's something we don't understand about the implementation that makes it a trade-off, than to think so many people forgot to "flip a switch."
 
Do you think the people at all these different development houses are just dumb? I sort of mean that seriously: the lack of AF is pretty common across the board, including titles that are skillfully made in most other technical areas. I'm more inclined to think there's something we don't understand about the implementation that makes it a trade-off, than to think so many people forgot to "flip a switch."

Exactly. It's not as if they will see this thread and think, "Oh look, the genius on this forum told us what we forgot!"
 
With the environment style they went with, things could have ended up looking rough without good IQ. That 4xAA makes the valley scenery look fantastic.



While playing, there are many times when the game looks damn near photorealistic, depending on your camera view. Kudos to the team, really.
 
No trilinear filtering in this case.

In fact, I would say that's trilinear filtering kicking in.

What Marlenus has to learn is that evidence of his PC forcing AF with minimal performance cost in one game means nothing for another game, and even less for another platform.

In the first place, dedicated logic can sit idle on your PC GPU if AF is not used, so enabling it might not affect performance at all in a game that wasn't aware those resources were there to start with. Or your PC can be unbalanced, with some other element (CPU, bus) as the limiting factor, so going crazy with GPU settings won't change an already bottlenecked frame rate.

16x AF isn't free on any current GPU. It takes bandwidth and eats TMU caches for breakfast.
 
Microsoft needs to buy them before EA, Ubisoft, or Activision does. That, or face another loss of third-party talent like BioWare and Bizarre Creations.

Don't forget the more recent situation they found themselves in, where Amazon bought Double Helix while DH was working on Killer Instinct.

Iron Galaxy looks to be filling the role nicely so far, but still.

Couldn't be quoted enough... Preach!!!
 
Do you think the people at all these different development houses are just dumb? I sort of mean that seriously: the lack of AF is pretty common across the board, including titles that are skillfully made in most other technical areas. I'm more inclined to think there's something we don't understand about the implementation that makes it a trade-off, than to think so many people forgot to "flip a switch."

That would explain the lack of AF on the PS4 version of Thief, or the decision to use 900p + 4xMSAA on UFC.

The lack of AF is just stupid. No card since the DDR3 version of the Radeon 5570 has had an issue with 8x AF, and even then that card only saw a 7 FPS drop in worst-case scenarios. That card also had only 28.6 GB/s of memory bandwidth, so it was amazingly slow, yet it could handle AF quite well.

Since then, AMD has improved bandwidth efficiency in its cards and has also improved AF performance (both speed and quality).
 
I absolutely hate the fact that this game has no AF, but sadly, with consoles, sacrifices have to be made. The game looks absolutely gorgeous! While I am gutted by the lack of AF, I will let them off for the sheer technical achievement of the rest of it.
 
That would explain the lack of AF on the PS4 version of Thief, or the decision to use 900p + 4xMSAA on UFC.

The lack of AF is just stupid. No card since the DDR3 version of the Radeon 5570 has had an issue with 8x AF, and even then that card only saw a 7 FPS drop in worst-case scenarios. That card also had only 28.6 GB/s of memory bandwidth, so it was amazingly slow, yet it could handle AF quite well.

Since then, AMD has improved bandwidth efficiency in its cards and has also improved AF performance (both speed and quality).
Doesn't the PS4 have a 50% TMU advantage? And I can't see how memory bandwidth is an issue in a game like Strider, which is pretty basic looking all round.
 
That would explain the lack of AF on the PS4 version of Thief, or the decision to use 900p + 4xMSAA on UFC.

The lack of AF is just stupid. No card since the DDR3 version of the Radeon 5570 has had an issue with 8x AF, and even then that card only saw a 7 FPS drop in worst-case scenarios. That card also had only 28.6 GB/s of memory bandwidth, so it was amazingly slow, yet it could handle AF quite well.

Since then, AMD has improved bandwidth efficiency in its cards and has also improved AF performance (both speed and quality).

You couldn't be more wrong.

AF depends on the TMUs' L1 cache (which is higher speed than VRAM, by the way). Trilinear costs twice as much as bilinear, and AF costs the number of taps times bilinear. And that isn't free by any means, since filtering textures isn't the only thing a TMU does.
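Spelling those multipliers out with the standard tap counts (a worked illustration, not figures measured on any particular GPU):

```cpp
#include <cstdio>

// Rough illustration of the tap math above; standard filtering footprints, not measured data.
int main()
{
    const int bilinear  = 4;              // one 2x2 footprint
    const int trilinear = 2 * bilinear;   // two mip levels, so 2x the bilinear cost
    const int aniso_taps[] = { 2, 4, 8, 16 };

    std::printf("bilinear: %d texels, trilinear: %d texels\n", bilinear, trilinear);
    for (int taps : aniso_taps)
    {
        // N-tap AF is up to N trilinear probes along the axis of anisotropy.
        std::printf("%2dx AF: up to %3d texels (%2dx a bilinear fetch)\n",
                    taps, taps * trilinear, (taps * trilinear) / bilinear);
    }

    // Most of those texels overlap between neighbouring taps and pixels, so the TMU L1
    // cache absorbs the bulk of the reads; the cost shows up as cache pressure and extra
    // filtering cycles rather than a 32x jump in VRAM traffic.
    return 0;
}
```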
 