Next-Gen PS5 & XSX |OT| Console tEch threaD

Potential box art of XseX games from Amazon

[two box art images]
 
Great breakdown, highlighting why the SSD in PS5 is such a big deal and comparing it in graphic form to the Series X.
Also illustrates why having a chip dedicated to audio is essential to achieve what they want on PS5.
Pretty balanced; it does say that ray tracing will be better on Series X, as the CPU and GPU do that 'on the fly'.
Also highlights Xbox's backwards compatibility advantage.

 
Honestly, atm ray tracing is a waste of power on consoles for not a massive payoff. In a few years, sure, and if MS/Sony can do a form of DLSS 2.0, then maybe ray tracing can be worth it.
 
Honestly, atm ray tracing is a waste of power on consoles for not a massive payoff. In a few years, sure, and if MS/Sony can do a form of DLSS 2.0, then maybe ray tracing can be worth it.
That's what the mid-gen refresh is for!
There must be something to chase, some tech of the moment that is easy to market and sell. So ray tracing should probably be the focus of that 2023/2024 upgrade.
 
Honestly, atm ray tracing is a waste of power on consoles for not a massive payoff. In a few years, sure, and if MS/Sony can do a form of DLSS 2.0, then maybe ray tracing can be worth it.

Like, even a 2080 Ti ain't doing proper Ray Tracing very well. I'll be surprised as hell if any console hardware is more capable even with all of the advantages it has of being a fixed target to develop for.
 
Potential box art of XseX games from Amazon

[two box art images]


This "optimized for" seems like a bad marketing, it means other games that doesn't have the logo aren't "optimized for" which will be looked negative from people, consumer need to think in his brain that every game is optimized for his console.

Hope Sony won't follow this; imo it's not a good marketing move.
 
This "optimized for" seems like a bad marketing, it means other games that doesn't have the logo aren't "optimized for" which will be looked negative from people, consumer need to think in his brain that every game is optimized for his console.

Hope Sony won't follow this; imo it's not a good marketing move.

"Optimized For" is a really bad turn of phrase in my opinion. Makes it sound like they didn't even test the shit to run on lower-end hardware. Maybe it's just me.

Should've just said, "Better on Series X" or "Series X Enhanced"
 
Big BRUHHHH, if a cross-gen title can't do 60fps without RT we are screwed 🤦🤦🤦

Such a big disappointment.

I can't see that being true, and 1080p@60fps is just a joke. I haven't seen anything impressive so far, and if AC is running full RT then 30fps would make sense, but there's no reference to that. Or it could be that the Windows-based API isn't in good shape.
 
Like, even a 2080 Ti ain't doing proper Ray Tracing very well. I'll be surprised as hell if any console hardware is more capable even with all of the advantages it has of being a fixed target to develop for.

Ray tracing and teraflops are this gen's buzzwords

In the real world, the tech is still in beta form and isn't a game changer yet. I'm sure it'll speed up dev time in the future, since there's no baked lighting, but that's still years away (imo)
 
I can't see that being true, and 1080p@60fps is just a joke. I haven't seen anything impressive so far, and if AC is running full RT then 30fps would make sense, but there's no reference to that. Or it could be that the Windows-based API isn't in good shape.
I don't know, man; they mentioned that a few titles are 60fps, but AC Valhalla wasn't one of them
 
60fps is a reality on a simple GTX 1660 (medium/very high 1440p) :
No, it's not, because you're thinking within current-gen games and specs.

Now imagine the next Assassin's Creed: a fully ray-traced graphics engine, every window and surface with true reflections, cities full of lifelike people, far beyond what AC: Unity tried to do with crowds (thousands of people on screen), everything physics-simulated (light, clothes, wind, leaves, water, volumetric smoke and fog), graphics like the Quixel tech demos, and so on, so real.

And now imagine that it is possible only on next-gen systems, but at, say, 1800p checkerboard/DLSS/whatever @ 30 fps, which can still look great. And even 1080p is not simple enough to do 60 fps with that much detail. You can go lower, but there might be other bottlenecks that prevent the system from reaching a steady 60 fps with so many features on screen.

So it is not so simple to get a game running at 60 fps at a lower resolution when it was designed to use the full potential of a next-gen system at 30 fps.

(Sorry for my bad English)
 
I've always assumed this generation would be using checkerboard, DLSS and/or variable resolution to achieve fake 4K@60, or 4K@30 with RT on... which is perfectly fine imho.
I think there is no point in chasing resolution if you can achieve the same image quality at a lower resolution.

In that spirit, and as a developer myself, I think pushing boundaries on I/O and sound is muuuch smarter than CU swinging.
You simply don't need that many TFs anymore.
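
As a tangent on the "variable resolution" part: dynamic resolution is basically a feedback loop on GPU frame time. Below is a toy sketch of that idea only; every name and number is invented for illustration and isn't taken from any real engine.

Code:
# Toy dynamic-resolution heuristic: nudge the render scale each frame so the
# GPU stays inside the frame-time budget (all values are made up).
TARGET_MS = 16.7                    # 60 fps budget
MIN_SCALE, MAX_SCALE = 0.6, 1.0     # e.g. ~1296p up to native 2160p

def next_render_scale(current_scale: float, last_frame_ms: float) -> float:
    """Shrink when over budget, grow when there's headroom, with damping."""
    error = TARGET_MS / last_frame_ms
    new_scale = current_scale * error ** 0.5
    return max(MIN_SCALE, min(MAX_SCALE, new_scale))

scale = 1.0
for gpu_ms in (20.1, 18.4, 16.2, 15.0):     # pretend GPU timings
    scale = next_render_scale(scale, gpu_ms)
    print(f"frame took {gpu_ms} ms -> render at {scale:.2f}x of target res")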
 
I'm not that worried about Valhalla; it's coming from the BF and Origins dev team, two of the best AC games imo, and the most gorgeous. I replayed AC:BF just recently and it's still quite cool looking for a PS4/XO launch game released in 2013.

"Optimized For" is a really bad turn of phrase in my opinion. Makes it sound like they didn't even test the shit to run on lower-end hardware. Maybe it's just me.

Should've just said, "Better on Series X" or "Series X Enhanced"

Yeah, "Enhanced" already seems like a better name for it.
 
I don't think it works that way. Otherwise, what would refrain Sony from calling their system a 560 GB/s system rather than a 448 GB/s one?
The number of lanes (i.e. bus width) connected to the chips is important. RAM has a certain transfer rate per pin, the general max being 14 Gbps, which is what the consoles are using. GDDR6 RAM chips can theoretically reach 16 Gbps, but that would most likely be pushing things too hard for the consoles.
All that aside, each chip has a max number of lanes that can be connected to it, so the more chips you have, the more lanes you require. The whole reason MS went with their weird configuration was to increase the number of chips so that they could have more lanes, i.e. increase the bus width. If they could have achieved the same bandwidth using 2GB chips, they would have done it, since that is cheaper in all ways.
The only way to achieve higher bandwidth with the same chips is to increase the data rate above 14 Gbps. The PS5, for example, would be able to reach 512 GB/s of bandwidth with its 8x 2GB chips if the chips were 16 Gbps instead of 14 Gbps.

I was just talking about XSX memory only; the PS5 having a 256-bit bus is obviously 448 GB/s. Here's a comparison below:


[memory configuration comparison chart]
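
For anyone who wants to sanity-check those figures, here's a minimal sketch of the peak-bandwidth arithmetic (bus width in bits times per-pin rate in Gbps, divided by 8). The configurations are just the ones quoted above; nothing here is an official spec sheet.

Code:
# Peak GDDR6 bandwidth = bus width (bits) * per-pin rate (Gbps) / 8
def peak_bandwidth_gb_s(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits * per_pin_gbps / 8

print(peak_bandwidth_gb_s(256, 14))  # PS5: 8 chips x 32-bit @ 14 Gbps -> 448.0 GB/s
print(peak_bandwidth_gb_s(320, 14))  # XSX wide pool: 10 chips x 32-bit @ 14 Gbps -> 560.0 GB/s
print(peak_bandwidth_gb_s(256, 16))  # hypothetical 16 Gbps chips on a 256-bit bus -> 512.0 GB/s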
 
This "optimized for" seems like a bad marketing, it means other games that doesn't have the logo aren't "optimized for" which will be looked negative from people, consumer need to think in his brain that every game is optimized for his console.

Hope Sony won't follow this; imo it's not a good marketing move.

I agree, and that "optimized" logo is ugly as sin and very off-putting. It just doesn't look like it belongs.
 
How many times more powerful is the PlayStation 4 compared with the GameCube, and it still runs Resident Evil 1 at 30 fps?

I never get tired of posting this, but unfortunately it's up to devs. We're probably gonna get more 60fps games, or games that already run at 60 but are more stable this time.
Especially with last-gen ports like the Uncharted collection, God of War 3 and TLoU all running at a locked 60fps (or very close; I couldn't tell if there were drops).
 
60fps is a reality on a simple GTX 1660 (medium/very high 1440p) :




30fps is so 2013


Per Anandtech's testing, at 4k/Very high (What people want from next gen) a 2080 (which is a decent comp for XSX) can only manage upper-mid 40s. For a game 2 years newer and even more demanding, XSX will either have to tone down settings or live with 30. Even a 2080Ti fails to hit a perfect 4k/60, so one would assume for AC:V, a 3070 will be necessary for 4k/60, at minimum. So, 30 FPS is only "so 2013" if you live at 1440, which is what's truly "so 2013"

[Anandtech 4K benchmark chart]
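
Put in frame-time terms, the gap is bigger than it sounds. A quick back-of-envelope using the mid-40s figure above:

Code:
# Frame-time math: mid-40s fps vs a locked 60 fps target
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

current_fps, target_fps = 45, 60
print(frame_time_ms(current_fps))    # ~22.2 ms per frame at 45 fps
print(frame_time_ms(target_fps))     # 16.7 ms budget for 60 fps
print(target_fps / current_fps)      # ~1.33x more GPU throughput needed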
 
Especially with last-gen ports like the Uncharted collection, God of War 3 and TLoU all running at a locked 60fps (or very close; I couldn't tell if there were drops).

Kingdom Hearts 2 is by far my favorite because I played it on PS2, and those games really show the difference. I have God of War 3, TLoU and the Uncharted collection, so those who played them last gen can feel the difference.
 
Kingdom Hearts 2 is by far my favorite because I played it on PS2, and those games really show the difference. I have God of War 3, TLoU and the Uncharted collection, so those who played them last gen can feel the difference.
I do; I meant that I can't tell if TLoU has drops or not (I play on the Pro at 4K).
 
I don't think it works that way. Otherwise, what would refrain Sony from calling their system a 560 GB/s system rather than a 448 GB/s one?
The number of lanes (i.e. bus width) connected to the chips is important. RAM has a certain transfer rate per pin, the general max being 14 Gbps, which is what the consoles are using. GDDR6 RAM chips can theoretically reach 16 Gbps, but that would most likely be pushing things too hard for the consoles.
All that aside, each chip has a max number of lanes that can be connected to it, so the more chips you have, the more lanes you require. The whole reason MS went with their weird configuration was to increase the number of chips so that they could have more lanes, i.e. increase the bus width. If they could have achieved the same bandwidth using 2GB chips, they would have done it, since that is cheaper in all ways.
The only way to achieve higher bandwidth with the same chips is to increase the data rate above 14 Gbps. The PS5, for example, would be able to reach 512 GB/s of bandwidth with its 8x 2GB chips if the chips were 16 Gbps instead of 14 Gbps.
I've had a look into the physical hardware side of things and I'm still not convinced – but am less inclined to fully rule out your claim, having looked extensively at the AMD Infinity Fabric stuff, the Zen2 floor plan, and the shot of the exposed XsX APU – which is all looking very inconclusive.

Conventionally, a CPU core will try to offload its outer cache (L3 in this case) through the cache-attached memory controller to the pool of memory it is always scheduled to read and write to. That pool is also the one that provides the lowest latency because of wiring length, which in this case is the 6GB. It is convention to do it this way to avoid data-starving or blocking the cores by being unable to offload or fill the outer cache (which then cascades the copying through the cache hierarchy). The idea of starving and blocking the CPU and GPU at once just to avoid an asynchronous copy from the 6GB to the 10GB doesn't sound good for parallel processing, latency, or utilisation (IMHO), especially when the L3 cache is typically used far more for data destined for the 6GB, and that cache data would be stuck waiting for the portion destined for the 10GB to get the memory controller before it could be offloaded to, or filled from, the 6GB.
(AFAIK) Conventionally the memory controllers are initialized by the low-level BIOS/UEFI system prior to bootstrapping (hence why the Raspberry Pi and laptops with iGPUs fix the memory split before bootstrapping), and in that initialization the controller(s) is set up to enqueue commands from the outer CPU cache, so it's certainly not likely that a to-the-metal approach would let developers change how data gets to each memory pool.

But in saying all that, the Infinity Fabric design of Zen 2 and Vega, which I assume the XsX uses, or the early Infinity Architecture (because of the old-style 60 CU GPU of the XsX), looks like it might be able to decouple the memory controllers (for all chiplets) behind the Infinity Fabric to make CPU memory access to the 10GB trivial, but the split bandwidths and different bus widths suggest something hasn't been decoupled for that optimal setup. A normal Zen 2 would have two 64-bit memory controller units (MCUs), one for each L3 cache (and one L3 cache per 4C/8T module), and we know that the XsX has a 192-bit bus width for the Zen 2 side, so it has 3 MCUs (and logically three L3 caches), while RDNA2 AMD GPUs will have either a 256-bit or 384-bit bus, i.e. 4 MCUs (2 per side) or 6 MCUs (3 per side). Neither of these setups fits with the XsX's 5 MCUs.

If the XsX's ten 32-bit chips are interfaced with 3 MCUs in the Zen 2 and another 5 MCUs in the GPU, then what you are claiming seems highly plausible, as the MCUs would likely manage the complexity. However, if the memory is connected to 3 MCUs in the Zen 2 and only 2 MCUs live in the GPU, then for latency reasons alone I would expect the CPU to pass all data to the GPU by copying to the 6GB, and then to the 10GB. The picture of the exposed XsX chip has 5 bright silver units on its north edge, a cluster of 3 in the middle and one further out on each side. I think they could be the MCUs, because they sit before the black moat that I assume is the Infinity Fabric wiring to all 10 GDDR6 modules, and that matches the Zen 2 type design (AFAIK).

I don't know if either of you read the previous comment I made (above), but the XsX memory access isn't about the memory chips as such (IMHO); it is more to do with the memory controller units (where, and how many?) and how they interconnect through whatever version of AMD's Infinity Fabric/Infinity Architecture.
Hopefully someone here has been tracking AMD developments since AMD introduced HyperTransport (of which Infinity Fabric and Infinity Architecture are modern supersets) and can explain the wiring setups and differences between each version. AFAIK it has been suggested on a PC website that AMD is ditching the large-CU GPU solution in the future and going with 36-40 CU GPUs in an everything-everywhere approach, using AMD's Infinity Architecture with their Epyc chips as a proper CrossFire replacement, allowing them to scale CPU and GPU performance with multiple Architecture-interconnected chiplets and devices like SSDs, all interfacing with a shared unified pool of RAM. The solution is apparently part of them winning an exascale contract that rolls out in 2022.
My hunch is that Fabric is a 16nm top-layer cross-connecting moat, Architecture is at 16nm with either Sony's recent queue patent or a ring bus like the Element Interconnect Bus from the Cell, and the final solution is probably 7nm with a 3D-stacked arrangement of Infinity Architecture layers.

Everyone has been wondering about the RDNA versions of the GPUs, but judging by the differences in XsX and PS5 bus widths, CU counts, clocks and I/O solutions, I'm wondering if the real difference in the APUs is which version of Infinity Fabric or Architecture each has gone with. I'm guessing Infinity Fabric and 5 MCUs total for the XsX, and the Infinity Architecture 2022 version and 8 MCUs for the PS5 solution.
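
To make the MCU counting above concrete, here is the same arithmetic written out, using only the post's own assumptions (32-bit GDDR6 chips, 64-bit memory controller units):

Code:
# Back-of-envelope MCU count from bus width (assumptions from the post above)
CHIP_WIDTH_BITS = 32   # one GDDR6 chip
MCU_WIDTH_BITS = 64    # one memory controller unit

def mcus_for_chips(chip_count: int) -> float:
    total_bus_bits = chip_count * CHIP_WIDTH_BITS
    return total_bus_bits / MCU_WIDTH_BITS

print(mcus_for_chips(10))            # XsX: ten 32-bit chips -> 320-bit bus -> 5 MCUs
print(192 / MCU_WIDTH_BITS)          # the 192-bit Zen 2 width mentioned -> 3 MCUs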
 
AC: Valhalla could be 60 fps easily on the next gen console, even in lower resolution... but it won't.

And you know why?

Not because the developers are lazy.
Not because the new consoles aren't powerful enough.

But because 60 fps is a PROMISE.

A promise that all the next AC titles will also be 60 fps.
Because once you go 60 fps in a cyclic series, there's no turning back.
And it's not an easy promise, especially for open-world games.

Can you imagine if, after so many years, another Call of Duty only went for 30 fps? What a flame war it would be across all media.

No, go back to the year 2013 when AC: Black Flag came out.
This cross-gen title could also have run on PS4/XO at 60 frames per second. Easily.
But it didn't, for a simple reason.

Because Ubisoft already knew at that time that with the more ambitious projects they were planning for AC: Unity and later titles like AC: Origins/Odyssey, they wouldn't be able to deliver 60 fps for the current generation.

Can you imagine how much AC: Unity and later AC: Origins/Odyssey would have had to be downgraded if they had to deliver 60 fps on the current generation?
Yeah. And it would even have had a negative impact on sales if these games were graphically much worse than other titles from the competition aiming for 30 fps.

And Ubisoft knows very well that it doesn't matter that they now have enough power to make AC: Valhalla run at 60 fps on next-gen;
they also know that a 60 fps target would really limit the ambitions for truly next-gen Assassin's Creeds.

So that's why I think Ubisoft (and many other companies) don't want to close the 30 fps door, so that they can deliver much more ambitious things in the future; and the promise that all sequels will also always be 60 fps may just not be kept.

Unless they're pushing the graphical fidelity to absurd new levels, I don't see why they would fail to hit 60 fps or even higher, considering that this is a cross-gen game. For example, racing games are going to be 120fps.
 
And it's not an easy promise, especially for open-world games.
You obviously never played the last-gen ports released on the PS4 (TLoU, Uncharted, God of War 3... among others)... All 60fps, and none prevented the next installment from targeting 30fps (Uncharted 4 even had to change it during development).

To be clear, I disagreed with the choice to go back to 30; I would gladly take at least an unlocked mode in Uncharted 4 on the Pro, like God of War on PS4 did... I'll pick my poison.
 
AC: Valhalla could be 60 fps easily on the next gen console, even in lower resolution... but it won't.

And you know why?

Not because the developers are lazy.
Not because the new consoles aren't powerful enough.

But because 60 fps is a PROMISE.

A promise that all the next AC titles will also be 60 fps.
Because once you go 60 fps in a cyclic series, there's no turning back.
And it's not an easy promise, especially for open-world games.

Can you imagine if, after so many years, another Call of Duty only went for 30 fps? What a flame war it would be across all media.

No, go back to the year 2013 when AC: Black Flag came out.
This cross-gen title could also have run on PS4/XO at 60 frames per second. Easily.
But it didn't, for a simple reason.

Because Ubisoft already knew at that time that with the more ambitious projects they were planning for AC: Unity and later titles like AC: Origins/Odyssey, they wouldn't be able to deliver 60 fps for the current generation.

Can you imagine how much AC: Unity and later AC: Origins/Odyssey would have had to be downgraded if they had to deliver 60 fps on the current generation?
Yeah. And it would even have had a negative impact on sales if these games were graphically much worse than other titles from the competition aiming for 30 fps.

And Ubisoft knows very well that it doesn't matter that they now have enough power to make AC: Valhalla run at 60 fps on next-gen;
they also know that a 60 fps target would really limit the ambitions for truly next-gen Assassin's Creeds.

So that's why I think Ubisoft (and many other companies) don't want to close the 30 fps door, so that they can deliver much more ambitious things in the future; and the promise that all sequels will also always be 60 fps may just not be kept.

Digital Foundry has a great video on why AC wasn't 60fps, it was because of the anaemic CPUs on the PS4 and Xone. It's that simple.
 
Big BRUHHHH, if a cross-gen title can't do 60fps without RT we are screwed 🤦🤦🤦

Such a big disappointment.

Bruh, I swear framerates are not that serious lol. I'll take 30fps and increased shadows/lighting, particle FX, destruction, dynamic features, etc., any day of the week, but that's just me lol. 30 frames also feels closer to what I see in films or general viewing anyway 🤣🍻.

It's that they were marketed and hyped poorly; and presented as something they were not.

I don't really remember Sony hyping up the GDC presentation other than a blog post right before it took place unless I'm mistaken? When Sony's blog post launched explaining the "deep dive" I was under no impressions or assumptions that they'd reveal the console. I figured it was a remote possibility but that's it, with a grain of salt. But I was certainly expecting a tech talk of some kind.
 
They might, but I don't. 60 FPS locked and anything over 1440p upscaled, and they can do whatever else they want.

At 4K I don't notice a difference between native, 1600p, checkerboard or a temporal solution on the Pro; I bet most others won't either.

But frame rate is glaring: 30 vs 60.

My friend, the difference between all of these is "massive", but 1600-1800p with 60fps is not a deal breaker to me anyway. Any person with healthy eyesight will notice it from at least 2-5 meters away when they're put side by side. 16K is near the max limit the eye can notice, but we're still waiting for a PS5 Pro for checkerboard/native 8K gaming by 2023-2024 :messenger_sunglasses:

For Borderlands 3, I went with crappy resolution for higher framerates on PS4 Pro. For FPV it must be 60fps minimum anyway.
 
Ray tracing and teraflops are this gen's buzzwords

In the real world, the tech is still in beta form and isn't a game changer yet. I'm sure it'll speed up dev time in the future, since there's no baked lighting, but that's still years away (imo)
Ray tracing is the holy grail of graphics.
 
Digital Foundry has a great video on why AC wasn't 60fps, it was because of the anaemic CPUs on the PS4 and Xone. It's that simple.

For sure this is it, but it completely makes a mockery of, and undermines, the 'Optimized for Xbox Series X' badge. Or worse, the whole thing is a lie.....
 
Let me say this. Those who are happy with just a simple logo reveal and a picture of a controller have no right to criticize the amount of games we saw. Yes the games are not for everyone but it beats the shit out of a logo and a controller.

just my opinion :)
NOPE.

One does not justify the other.

Plus with the logo and controller reveals, they never overhyped them to the moon and back and misled people.

We all laughed at the logo reveal (for different reasons). So let's please laugh at MS' blunder now.
 
For sure this is it, but it completely makes a mockery of, and undermines, the 'Optimized for Xbox Series X' badge. Or worse, the whole thing is a lie.....

Yes, it's absurd that they haven't clarified why Valhalla is not 4K 60. Maybe they want to push it higher?? Or is it going to be 30?? It's weird, because it should have been easy to hit 4K 60 considering this is a cross-gen game.
 
We told you guys that the Xbox Series X doesn't have any "secret sauce" in its GPU, or any "secret sauce" in its SSD architecture; the PS5 has everything the Xbox has in terms of GPU, with maybe some differences at the API level.

That's it.

PS5's SSD DESTROYS XSX's SSD!! No comparison, anyone who says otherwise is a Phil Spencer taint lover.
The PS5 SSD is far superior to the XSX's, far superior. But the accelerators/software to remove bottlenecks are 50/50 at this point. Until we know the full details of XVA, we can only say the PS5 will have much higher throughput.
 
MS has already shown their hand. They won't go until July. Sony can't afford to wait that long, even if that was their plan.

I think they believe MS is well over $499 and want to see what they will price the Series X at. Lockhart is a red herring imo; I don't think it launches on day 1. I think MS threw that out there to make Sony think twice before launching at $499 and downgrade their specs, which, as we see from the RAM bandwidth downgrade, actually kinda worked.

Sony has no choice but to reveal the console and price in June. They kinda screwed themselves by going with a smaller die, and now they simply can't launch at the same price point as the Series X. But if the Series X is $549 or $599, then they can go with a $500 PS5.

If the PS5 was $399, Sony would have zero reservations, but I think that expensive SSD is the reason why Sony is going over $399. The question is how much smaller that die really is, especially with the repurposed 3D audio CU, those I/O processors, and the coherency engines. If it's around 300mm2, they will likely save $20 compared to the Xbox, or $40 if prices have doubled from last gen. How much cheaper is 448 GB/s RAM compared to the 560 GB/s MS is going with? 25% more bandwidth for 25% more cost? So $25, if PS5 RAM is $100?

But again, MS was smart and only ordered 10 GB of ultra-fast RAM and 6 GB of slower RAM, so I bet it cost them the same as Sony. Best-case scenario for Sony: their 825GB SSD costs the same as MS's 1TB SSD, their RAM costs $25 less, and their chip is around $40 less. Their cooling solution is inexpensive, like Cerny said, so that's likely another $10-15 in savings. That's almost $80 in savings that can help them launch $50 cheaper.

Worst-case scenario: their SSD is more expensive, their die size is the same, their RAM costs are the same, and the cooling solution costs only a fraction less than MS's vapor chamber cooling. Then they have a console with 20% fewer TFLOPS and the same BOM as MS. They will have to launch at $500 and hope for the best.
I don't think any company will sell above $499 based off the lessons from the PS3. Phil has already said the price will be competitive.
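
Just tallying the rough per-component guesses in the quoted post (these are that poster's speculative upper-end numbers, not real BOM data):

Code:
# Summing the quoted post's own best-case savings estimates (speculative)
savings_estimates = {
    "smaller die (upper estimate)": 40,
    "448 vs 560 GB/s RAM": 25,
    "simpler cooling (upper estimate)": 15,
}
total = sum(savings_estimates.values())
print(f"best-case estimated savings: ~${total}")  # ~$80, as the post concludes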
 
I see where you're coming from, but a cross-gen title should easily do 60fps, because in 1 or 2 years we will see many games with RT, which is very demanding. It seems 60fps on open-world games is just a dream after all 😞
The type of RT these consoles will support is likely pretty "low tier". We'll get basic support for most games, with less visually demanding games like Minecraft showing off higher tiers of RT, as we've seen.
 
Yes, it's absurd that they haven't clarified why Valhalla is not 4K 60. Maybe they want to push it higher?? Or is it going to be 30?? It's weird, because it should have been easy to hit 4K 60 considering this is a cross-gen game.

Let's see where it ends up. I really don't get it, though, as a point was made about how few resources and how little time it took to implement 4K graphics on e.g. Xbox One X. IIRC Playground said they got the Chuckwalla dev kit and within a day had Horizon 3 up and running in 4K. 60FPS+ should be just as easy to implement?
 
Lol I forgot how triggered NeoGaf can get.

I'm sorry guys, I forgot that in this realm the controllers and logos are what's most important.

I think it's shitty to downplay what some of these indie devs are working on as trash. There are some very good games on show, but I guess most of you only want to see exclusive AAA games.

I will go discuss the games in other forums where discussion is civil and doesn't boil down to "BuT ThEy HAvE MoRE LiKEs oN InSTAgRAm AnD THE MEaninG BeHInD thE LoGO is VeRy SeNTiMEntal sLoLOL AaA GamES ARe THe OnLY GaMeS WORth TAlKinG AboUT"
Man, how do you deflect this onto others? First you deflect MS' blunder, then you deflect your own triggered status onto others :messenger_tears_of_joy:

Listen, it's MS to blame, and they admitted to their mistake. So no need to defend them on this one. Call it what it is.

About the indie games: that's not the devs' fault. No one is blaming them for anything. It's just that MS hyped the crap out of these average, current-gen-looking indie games as next-gen games. If the same games had been advertised as current-gen games, no one would've said anything. Actually, I don't think anyone blasted them for anything apart from not looking next-gen at all. And it's true. That's not an insult. So of course this backfired on MS.
 
Well heh!... I expected Guerrilla to crush toes... SWWS will lead the pack once again next gen... These guys always juggle the best graphics between them all gen anyway.

Not surprised, it's been the case since 1994. With GG's smart engine (Decima) they made magic with Horizon Zero Dawn at 1800p@30fps. With all the unprecedented tech in the PS5 we're gonna faint when the first trailer hits :lollipop_tears_of_joy:

[Horizon Zero Dawn screenshots]
 
Per Anandtech's testing, at 4k/Very high (What people want from next gen) a 2080 (which is a decent comp for XSX) can only manage upper-mid 40s. For a game 2 years newer and even more demanding, XSX will either have to tone down settings or live with 30. Even a 2080Ti fails to hit a perfect 4k/60, so one would assume for AC:V, a 3070 will be necessary for 4k/60, at minimum. So, 30 FPS is only "so 2013" if you live at 1440, which is what's truly "so 2013"

[Anandtech 4K benchmark chart]
I don't like this analogy, personally. I watched a lot of tech YouTube channels and did research on different websites when I was starting to play AC Odyssey on my PC, and almost all of them agree that very high settings in this case are just a waste of resources and not worth the return in graphical fidelity. I think this shows more that lazy development/non-optimization can lead to horribly wasted resources. Ubisoft has a bad rep in the PC community for lazy optimization, from my experience researching the best settings for my PC setup. It's actually funny, a lot of the time people just say you'd be better off playing on consoles since they had a smoother experience with less headache. IMHO even the cheapest TVs and monitors nowadays are 60 Hz, so game companies should set the bar there and develop around maintaining that, and the new CPUs in the PS5 and XsX should help achieve that easily compared to that joke Jaguar.
 