While the original article may be misleading clickbait, there is still a worthwhile discussion to be had about whether or not supporting older hardware holds a game back.
Absolutely!
I think the key point being missed here is this: does the graphical or technical effect you are trying to achieve inform the gameplay? In other words, if you got rid of your massive crowds, advanced physics, fluid dynamics or whatever, would the hypothetical game break?
I agree. The thing is that core gameplay usually, though not always, takes up a relatively small share of your CPU resources. That has been true on most platforms throughout history, with things like working out what to draw and where, issuing commands to the GPU, animating models, drawing particles, non-gameplay physics effects, decompressing assets and so on taking comparatively much more. And many of those things scale well, via LOD, frame rate, animation quality and the like.
But core gameplay does have a point beyond which dropping performance just breaks the game: collisions fail, physics misbehaves, the game stutters and hangs, you lose sync with other machines and can't catch up, and so on.
And then you just have to hack away at core gameplay, which I think is the point no-one wants to reach. Especially not the people making the game!
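To make that concrete, here's a rough sketch of the classic fixed-timestep loop (function names are invented, not from any particular engine): rendering can be dialled down per platform, but the core simulation has to complete its fixed ticks, and the moment the CPU can't manage that, you're at exactly the breaking point described above.

```cpp
// Minimal fixed-timestep sketch (illustrative only; names are not from a real engine).
#include <chrono>

constexpr double kSimStep = 1.0 / 60.0; // fixed gameplay tick: collisions, AI, net sync
constexpr int kMaxCatchUpSteps = 5;     // beyond this, the sim is falling behind real time

void SimulateGameplay(double /*dt*/) { /* placeholder: core rules, collisions, sync */ }
void RenderFrame()                   { /* placeholder: LOD, particles, hair physics */ }

int main() {
    using Clock = std::chrono::steady_clock;
    auto previous = Clock::now();
    double accumulator = 0.0;

    for (int frame = 0; frame < 600; ++frame) {   // bounded so the sketch terminates
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        int steps = 0;
        while (accumulator >= kSimStep) {
            SimulateGameplay(kSimStep);           // must run in full, every tick
            accumulator -= kSimStep;
            if (++steps >= kMaxCatchUpSteps) {
                // The hardware can no longer run the core sim in real time:
                // this is where collisions fail and multiplayer desyncs.
                accumulator = 0.0;
                break;
            }
        }

        RenderFrame();  // everything here can be scaled down on a lower-spec platform
    }
    return 0;
}
```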
Let's take a hypothetical multiplayer shooter. This shooter has tall grass that players can hide in. It also has distortion effects that make it difficult for the player to see if they are hit.
If you port that game to a platform that can't render those effects at a reasonable frame rate, either the game doesn't get ported or you have to remove those effects from the higher-spec platform too. Otherwise the matches are unfair.
Any game planning to also release on PC already needs gameplay that's independent of that kind of scaling. Any time your game is on PC, or supports more than one console (e.g. MS and Sony, or base and Pro), you already have to plan in some resilience to scaling. Normally the game becomes unplayable due to performance before it actually breaks - though console vendors' certification conditions should prevent performance dropping that low, meaning the game isn't allowed to release.
What we see today is toned-down graphical effects on the lower-spec platform that have little to nothing to do with gameplay. You can turn off Lara's bouncy hair. You can lower the resolution. Lower-quality textures can be used. And so on.
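In settings code that separation might look roughly like this (all names are hypothetical): cosmetic knobs get a per-platform preset, while anything gameplay-relevant, like the grass and hit distortion from the shooter example above, is deliberately kept identical everywhere.

```cpp
// Hypothetical settings split (names invented for illustration). Cosmetic options
// vary per platform; gameplay-relevant ones do not, so a low-spec player isn't
// hiding in grass that a high-spec player can't see.
struct CosmeticSettings {
    int  renderScalePercent;     // output resolution scaling
    int  textureQuality;         // 0 = low, 2 = high
    bool hairPhysics;            // the "bouncy hair" class of extras
};

struct GameplaySettings {
    float grassDrawDistance;     // decides who can hide where: never scaled per platform
    float hitDistortionStrength; // affects reaction time: never scaled per platform
};

enum class PlatformTier { LowSpec, HighSpec };

CosmeticSettings CosmeticsFor(PlatformTier tier) {
    switch (tier) {
        case PlatformTier::LowSpec:  return {70, 0, false};
        case PlatformTier::HighSpec: return {100, 2, true};
    }
    return {100, 2, true};  // unreachable fallback to keep compilers happy
}

// One gameplay configuration for every platform, by construction.
constexpr GameplaySettings kGameplay{150.0f, 1.0f};
```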
But if you have a game that is trying to do advanced AI, track hundreds of unique objects on a realistically rendered ocean, and host 100 players on a server, that's going to be a tall order for the lower-spec platform, and the game might never enter development because of it.
You can certainly hit that point. One of the problems of tuning super simulation-heavy games is making them fun to play and also balanced. Increasing scale is one aspect that has proven very popular, with 100-player deathmatch games being very successful. But these games are often cloud-powered and have server clusters to thank for that.
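As a very rough illustration of why the server clusters matter (names invented, and real netcode is far more involved than this): the heavy simulation stays on the server, and the client mostly just interpolates between the authoritative state snapshots it's handed, so client CPU demand is mostly about rendering.

```cpp
// Illustrative-only sketch: the client doesn't run the heavy simulation at all,
// it blends between state snapshots the server sends each tick.
#include <cstdint>
#include <map>

struct EntityState { float x, y, z; };                  // one of the hundreds of tracked objects
using Snapshot = std::map<std::uint32_t, EntityState>;  // entity id -> state for one server tick

// Client-side: interpolate between the two most recent snapshots instead of simulating.
EntityState Interpolate(const EntityState& a, const EntityState& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}
```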
I'm sure you could make a Scarlett game that couldn't physically run in any meaningful way on X1 from launch day, but would that also be a hit game that pushed the platform, or would it be more of a tech demo...?
It will be interesting to see if anything more ambitious in gameplay scope than RDR2 or Cyberpunk launches in the first 12 months of the next gen!
I think we've reached diminishing returns on the GPU side. It's time to see what developers can do with some beefed up CPUs.
Certain types of game hit diminishing returns on the CPU side a long time ago, I think - like platformers. Others are still hungry for everything that they can get ... like Star God Damn Citizen.
You also have to factor development time and budget into how far you can push new systems early on, I reckon.
Sorry for the wall of words, but as said, it's an interesting topic!