Fallout 4 PC Performance Thread

Yeah, in firefights with lots of enemies and effects my framerate tanks to the 20-30s

I have an overclocked i5 3570k and I was sure PhysX was forced on GPU.

1440p with everything maxed out.

Same here. Ultra PhysX in BL2 and BL Pre-Sequel tanks my FPS in certain scenarios.

It only happens in these games; I personally think it's a bug with that engine or those games.
 
My load times have increased quite a bit since this patch. A quick google shows that I'm not the only one. Any of you guys experience this?
 
My load times have increased quite a bit since this patch. A quick google shows that I'm not the only one. Any of you guys experience this?

Someone just said earlier in this thread that it had to do with the new PhysX effects? Set them to High and try loading a game again. (Weapon debris)

Apparently "ultra" causes long load times.
 
Reading around, it seems HBAO+ can have a pretty large impact on FPS, which is a shame as I've gotten performance just where I want it. Not sure I'm prepared to sacrifice up to 15fps (as others are reporting) for slightly better IQ. Hopefully the debris effect isn't too impactful on settings lower than Ultra.

Once they fix the load time problem I'll give it a try for myself.
 
Reading around, it seems HBAO+ can have a pretty large impact on FPS, which is a shame as I've gotten performance just where I want it. Not sure I'm prepared to sacrifice up to 15fps (as others are reporting) for slightly better IQ. Hopefully the debris effect isn't too impactful on settings lower than Ultra.

Once they fix the load time problem I'll give it a try for myself.

Where have you seen HBAO+ having such a large performance hit?
 
You make a valid point, but I think it's disingenuous to describe it as "solely because". Strangely enough, the most heavy-hitting areas seem to be places that do not give you base building features, like Diamond City and the Corvega Assembly Plant, and pretty much anywhere in the wilderness if you're looking through a scope. Diamond City could have a ton of its objects and shaders merged since it is a very static place.

Bethesda has indirectly admitted there's a problem, with their patch notes mentioning improvements to Corvega and scope performance; there's clearly more to it than "this is how the game has to be, because it is the way it is designed to be".
Yeah, "solely because" is probably a bit of an exaggeration, even though I was talking solely about the number of draw calls, not the total frametime. I should mention that, as far as I know, the base-building-like features exist all around the world and the runtime has access to individual decorations outside of the set base building areas. Not to say that's the most efficient way of doing things (though I can see how it would make level production a little easier, less risky, and better for modding stability), but it should add quite a bit of overhead in the number of draw calls.

I also definitely wouldn't say it's "the way it should be", since I haven't got even a fraction of the information needed to have a say in the bigger picture of the technology and production. But as with any game technology there are always pros and cons, and in the case of this engine it comes with superior modding support compared to pretty much any AAA game out there right now.

In any case, really glad to see HBAO+ implemented in the patch, since the AO was very much lacking without it. I've not had many qualms about performance (other than that I would love to up the shadow distance, but since that too is limited by the number of individual objects to be drawn, I'm not too hopeful for improvements there), nor have I had any stability issues or bugs, so I am really eager to see the new toolkit and learn more about how the technology has evolved.
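The draw-call overhead argument above can be made concrete with a toy model. This Python sketch uses entirely made-up numbers (per-call cost, object counts); it is not measured Fallout 4 data, just an illustration of why merging static geometry into batches reduces CPU submission cost:

```python
# Toy model of draw-call overhead: submitting every decoration as its own
# draw call vs. merging static geometry into a handful of batches.
# All numbers here are illustrative assumptions, not Fallout 4 measurements.

CALL_OVERHEAD_US = 20  # assumed CPU cost to issue one draw call, in microseconds

def cpu_submit_time_ms(num_draw_calls: int) -> float:
    """CPU time spent just issuing draw calls for one frame, in milliseconds."""
    return num_draw_calls * CALL_OVERHEAD_US / 1000.0

# Every decoration individually addressable (one draw call per object):
individual_objects = 6000
print(f"unbatched: {cpu_submit_time_ms(individual_objects):.1f} ms/frame")

# The same scene with static geometry merged into large batches:
merged_batches = 150
print(f"batched:   {cpu_submit_time_ms(merged_batches):.1f} ms/frame")
```

At these assumed numbers, keeping every object individually drawable costs 120 ms of CPU submission per frame versus 3 ms when batched, which is why a very static place like Diamond City looks like a candidate for merging.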
 
Seems a little longer, but not really noticeable (my load times were never all that great to begin with). I've got particles on High.

Performance seems about the same as it did before. Whatever improvements they made are offset by the HBAO+, but that's a good thing; it feels like HBAO+ doesn't actually hurt performance all that much, at least on my rig.
 
I am getting a solid 60 with an overclocked i7 6700K and a 980 Ti at 2K on Ultra. Wish I could play at 4K; the framerate drops to the low 40s when I try that.
 
How do I turn on HBAO+ and PhysX settings? I don't see them in game or in launcher and I am running 1.3 beta.
 
How do I turn on HBAO+ and PhysX settings? I don't see them in game or in launcher and I am running 1.3 beta.

Nvidia exclusive. Presumably they are CUDA-based, hence the exclusivity.
Edit: this is true for the weapon debris; HBAO+ runs on AMD cards.
 
Something has increased my loading times. A lot. Before the new patch it was like 10 seconds from boot or fast traveling, which was reasonable. But now it feels like Bloodborne's load times at launch: at least 40 seconds, maybe a minute. The first time the game loaded from the main menu I saw FOUR hints on the screen; I'd never seen anything like that before. Context: I was playing in borderless fullscreen, which I've never done before, but why would that increase the loading times? My save files have also not grown significantly since the last time I played; they're around 11 MB. Edit: I see most of you have longer load times too, but I haven't changed any of the graphics settings.

Also, a few of my mods are disabled, like Pleasant UI. I use the custom config menu, and I clicked Archive Invalidation to see what it did (it erased my string edits). Do I still need to use string edits to use mods? NMM warned me about something involving string edits. AFAIK most of them work, so maybe I should just reinstall the ones I notice are missing.
 
Yeah, in fire fights with lot's of enemies and effects my framerates tank to the 20-30's

I have an overclocked i5 3570k and I was sure PhysX was forced on GPU.

1440p with everything maxed out.

This happens to everyone. Some people simply don't care about the framerate dropping and frametimes fluctuating wildly, and say the game is fine.
 
HBAO+ performance impact evaluated:


http://www.overclock3d.net/reviews/gpu_displays/fallout_4_retested_hbao_performance_impact/1

It's early in the morning and my eyes are blurry. Is this saying Nvidia takes a greater performance hit with HBAO+ on than AMD?

Something has increased my loading times. A lot. Before the new patch it was like 10 seconds from boot or fast traveling, which was reasonable. But now it feels like Bloodborne's load times at launch: at least 40 seconds, maybe a minute. The first time the game loaded from the main menu I saw FOUR hints on the screen; I'd never seen anything like that before. Context: I was playing in borderless fullscreen, which I've never done before, but why would that increase the loading times? My save files have also not grown significantly since the last time I played; they're around 11 MB. Edit: I see most of you have longer load times too, but I haven't changed any of the graphics settings.

Also, a few of my mods are disabled, like Pleasant UI. I use the custom config menu, and I clicked Archive Invalidation to see what it did (it erased my string edits). Do I still need to use string edits to use mods? NMM warned me about something involving string edits. AFAIK most of them work, so maybe I should just reinstall the ones I notice are missing.

Same here. AMD card user, so no PhysX or any of that. I did turn on HBAO+, but my load times went from 10-20 seconds to about a minute. I'm on an SSD.
 
The long load times were unacceptable for me, so I rolled back. HBAO+ looks amazing, but I don't want to waste time sitting through loading screens.

I am running from an SSD and am pretty sure it's the new patch causing the long load times, since after rolling back it's been fast and smooth.
 
It's early in the morning and my eyes are blurry. Is this saying Nvidia takes a greater performance hit with HBAO+ on than AMD?

I don't see why it can't, considering that HBAO+ is a postprocessing pass dependent on Z fetches and framebuffer blurring, and Fury has gobs of bandwidth, so it should do well here.
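As a back-of-the-envelope illustration of why such a pass leans on memory bandwidth, here is a Python sketch. The sample count, buffer format, and pass structure are assumptions for illustration, not NVIDIA's actual HBAO+ internals:

```python
# Rough read-bandwidth estimate for a screen-space AO pass at 1080p.
# All parameters below are assumed for illustration only.

width, height = 1920, 1080
depth_bytes = 4          # one 32-bit depth (Z) fetch
samples_per_pixel = 16   # assumed AO sample count per pixel
blur_reads = 2           # assumed extra reads per pixel for the blur passes

pixels = width * height
bytes_per_frame = pixels * (samples_per_pixel + blur_reads) * depth_bytes

fps = 60
gb_per_s = bytes_per_frame * fps / 1e9
print(f"~{gb_per_s:.1f} GB/s of texture reads at {fps} fps")
```

Even with these modest assumptions the pass is pure memory traffic with almost no arithmetic, which is why a card with lots of spare bandwidth (like Fury's HBM) can absorb it more easily.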
 
I decided to re-enable textures to allow for more colors, but doing so somehow changed the behavior of the mipmapping setting. When I did not have any textures enabled, effectively making most characters and objects purple, the mipmapping setting could be changed easily with noticeable results. This is what I used (in Fallout4Prefs.ini) with textures disabled:

iTexMipMapSkip=65536

This made foliage, fire, etc. appear as simple blocks. But the same setting with textures enabled does not change the foliage at all; it appears as normal. My intention is to turn Fallout 4 into what I made Fallout 3 look like (blockiness most apparent in the first image):



Am I missing something here? I find it very strange that it works with textures disabled, but not when they are enabled.
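For reference, a mip-skip setting is easy to reason about: skipping N levels halves the texture's resolution N times. The Python sketch below is a toy model of that; whether the engine clamps or simply ignores an out-of-range value like 65536 is my guess, and could explain the inconsistent behavior:

```python
# What a mip-skip setting conceptually does: drop the top N mip levels,
# so a texture is sampled from a much smaller version of itself.

def effective_size(base_px: int, skip: int) -> int:
    """Texture resolution after skipping `skip` mip levels (min 1 texel)."""
    return max(1, base_px >> skip)

for skip in (0, 2, 8):
    print(f"skip={skip}: 2048px texture -> {effective_size(2048, skip)}px")
```

A skip of 8 already turns a 2048px texture into an 8px smear, so any value beyond a texture's actual mip count (11 levels for 2048px) buys nothing more and may be treated as invalid.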
 
Quick question: I felt something odd in the mouse movement the first two days I played, so I switched to the controller afterwards.

Has that been resolved?
 
That looks bad; can only the high-end cards achieve a stable 60fps at 1080p? Or is there a crazy amount of AA applied?

They're doing something wrong in their settings. My 680 is getting better than what that chart says, and I'm running 1440p.

I can see using benchmarks like that as a simple comparison tool, but it completely fails if you're trying to use it to estimate what your actual performance will be like. Turn down a couple of settings and bam, another 20-30fps. They should make a version of those with "optimized play settings" or some such, the settings that players will actually use (like turning god rays down), and then run the comparison.
 
They're doing something wrong in their settings. My 680 is getting better than what that chart says, and I'm running 1440p.

I can see using benchmarks like that as a simple comparison tool, but it completely fails if you're trying to use it to estimate what your actual performance will be like. Turn down a couple of settings and bam, another 20-30fps. They should make a version of those with "optimized play settings" or some such, the settings that players will actually use (like turning god rays down), and then run the comparison.

They are just testing at max settings.
 
This game has some very terrible performance overall. Areas in downtown Boston can absolutely destroy performance, and setting shadow distance to High or Ultra will absolutely kill FPS in denser areas if you don't have a very powerful CPU. You can go from 60FPS to 30FPS very quickly, and sometimes it doesn't seem like scene complexity increases at all.
 
A 7870 right there with a 770, a 280X beating a 780 Ti. More and more games.

Funny how just a month ago it was "a game running on an old engine", which is why it favored NV, and now it's part of that "more and more games" mantra of yours.

This is a benchmark done on a specific scene on a beta version of patch 1.3. Before jumping to conclusions you should at least wait for the patch to be released, though I'd prefer to have a couple of other sites run the test as well so there's more data.

I don't think that's likely to change the situation for Nvidia; the architecture isn't expected to change from Maxwell.

The Pascal architecture will change from Maxwell. Not that it needs to, as Maxwell is perfectly fine in comparison to GCN.
 
The benchmark is flawed. No amount of power can hold a 60+FPS experience at all times; even a $10,000 PC will dip to the 40s and 50s when maxing the game out. The engine is a steaming pile of shit that has been rotting since 2005.
 
The benchmark is flawed. No amount of power can hold a 60+FPS experience at all times; even a $10,000 PC will dip to the 40s and 50s when maxing the game out. The engine is a steaming pile of shit that has been rotting since 2005.



truth
 
That looks bad; can only the high-end cards achieve a stable 60fps at 1080p? Or is there a crazy amount of AA applied?


Makes sense if all settings (especially shadow distance and god rays) are maxed out, along with PhysX and HBAO+. I get 10-15fps drops from a vsynced 60fps in Diamond City on my 970 if shadow distance is above Medium.
 
Funny how just a month ago it was "a game running on an old engine", which is why it favored NV, and now it's part of that "more and more games" mantra of yours.

This is a benchmark done on a specific scene on a beta version of patch 1.3. Before jumping to conclusions you should at least wait for the patch to be released, though I'd prefer to have a couple of other sites run the test as well so there's more data.



The Pascal architecture will change from Maxwell. Not that it needs to, as Maxwell is perfectly fine in comparison to GCN.


Nvidia themselves have released slides saying Pascal = Maxwell + HBM and mixed-precision support, the latter of which will likely only be used for deep learning. I'd expect its use in games to be minimal, if it happens at all.

With regard to Fallout 4: yes, it is on a very outdated engine, but with this new beta patch it's now one of an ever-increasing number of games where Nvidia lags behind. I don't see how you can deny this trend. Mind showing me a bunch of recent games where a 770 hangs with a 290 and 290X? That would be the corollary to the variety of games I've shown you where a 280X hangs with a 780 and 780 Ti. Every 6 months AMD's chips move up in performance compared to Nvidia; checking TechPowerUp's performance summaries over the years shows huge changes in GPU positioning.
 
Nvidia themselves have released slides saying Pascal = Maxwell + HBM and mixed-precision support, the latter of which will likely only be used for deep learning. I'd expect its use in games to be minimal, if it happens at all.
NV released slides showcasing the biggest changes from their previous architecture. They do this all the time, and it was never as simple as "previous arch + new features" in the end. Pascal will be different from Maxwell down to its shader SIMDs; that much you can be certain of.

As for mixed precision, its usage will depend on how much performance benefit it has in real-world applications. Games can certainly use it, as they already do on mobile hardware.

With regard to Fallout 4: yes, it is on a very outdated engine, but with this new beta patch it's now one of an ever-increasing number of games where Nvidia lags behind. I don't see how you can deny this trend. Mind showing me a bunch of recent games where a 770 hangs with a 290 and 290X? That would be the corollary to the variety of games I've shown you where a 280X hangs with a 780 and 780 Ti. Every 6 months AMD's chips move up in performance compared to Nvidia; checking TechPowerUp's performance summaries over the years shows huge changes in GPU positioning.
Well, there is no trend, as you're still using your selective benchmark approach, and this FO4 case is a perfect highlight of how, in your own words, it was "old engine" before and thus didn't count, and is "part of a trend" now based on just one benchmark of a beta patch. All in all this is just bullshit.

Mind showing me a bunch of recent games where a 770 hangs with a 290 and 290X?
Do your homework and search for benchmarks yourself. There are plenty of recent games where this exact situation happens (when the 770 isn't VRAM-limited, which it usually is in recent games; I think I've shown you this already in technicolor). You just choose not to see them because they go against that "trend" theory of yours.
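The mixed-precision trade-off mentioned above is easy to demonstrate. This Python sketch uses the stdlib `struct` module's half-precision format ('e') as a stand-in for GPU FP16; it shows generic IEEE FP16 rounding behavior, not a claim about any particular GPU or game:

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

# FP16 halves storage and bandwidth but loses precision: fine for many
# shading terms, risky for long accumulations or large coordinate values.
x64 = 1.0 + 0.0001                              # double precision keeps the increment
x16 = to_fp16(to_fp16(1.0) + to_fp16(0.0001))   # FP16 rounds it away entirely

print("fp64:", x64)
print("fp16:", x16)
```

The spacing between adjacent FP16 values near 1.0 is about 0.001, so any smaller increment simply vanishes, which is why mixed-precision code keeps sensitive accumulations in FP32.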
 
NV released slides showcasing the biggest changes from their previous architecture. They do this all the time, and it was never as simple as "previous arch + new features" in the end. Pascal will be different from Maxwell down to its shader SIMDs; that much you can be certain of.

As for mixed precision, its usage will depend on how much performance benefit it has in real-world applications. Games can certainly use it, as they already do on mobile hardware.


Well, there is no trend, as you're still using your selective benchmark approach, and this FO4 case is a perfect highlight of how, in your own words, it was "old engine" before and thus didn't count, and is "part of a trend" now based on just one benchmark of a beta patch. All in all this is just bullshit.


Do your homework and search for benchmarks yourself. There are plenty of recent games where this exact situation happens (when the 770 isn't VRAM-limited, which it usually is in recent games; I think I've shown you this already in technicolor). You just choose not to see them because they go against that "trend" theory of yours.

So gamegpu.ru, pcgameshardware.de and TechPowerUp benchmarks are me using benchmarks selectively?

[TechPowerUp relative performance summary charts at 1920×1080 and 2560×1440, one pair each for May 2013, September 2014, June 2015, and January 2016]
 