A bit earlier tonight I made a post in a thread about the Switch 2, about how little I cared about the relative merits of upscaling tech. It was a little snarky, and a bit self-deprecating insofar as I mentioned that when I was a young'un, when a TV was the only screen you ever watched anything on, we got by pretty well with "blurry" SD resolutions.
The thing is, though, it really snapped into focus (pardon the pun) for me how we've all been conned into thinking about graphics in a certain way. A way I believe is pretty unhealthy, and which mainly exists to keep us buying generation after generation of new hardware and software.
First things first, and I'm going to try and keep this brief: to demonstrate the negatives, the positives need to be understood first. Because graphics *do* matter.
The growth in resolution we've seen over the years was important because raster graphics are essentially just a 2-dimensional pixel array. The more dots you have to play with, the more text you can show, the more identifiable objects you can have flying around without everything turning into an indistinguishable mess, and so forth.
Some styles of game are impractical to make without decently high resolutions, because the key thing is that the player can see what they need to.
Detail and shading are important too, because they allow us to identify smaller but still useful bits of visual information beyond basic shape and form: material, texture, unique identifying details and so forth.
And obviously motion/animation being smooth and responsive is critical for "feel".
So, at this point you're probably thinking: that's a *LOT* of positives, where's the negative you promised, jackass? Where's the "con"?
Ok; here it comes:
The "con" is the illusion that improvements in the dimensions I just laid out stretch ever onward towards infinity. They don't. They simply extend to the maximum point of utility.
Resolution is an interesting one, as it's the easiest dimension to demonstrate the limitations of, and yet it has been successfully pushed over and over.
Anyone old enough to recall the transition from CRT displays to flat screens will no doubt remember with horror how utterly fucking horrible older games suddenly looked, and why modern emulators and upscalers need to massage the image so much to make things look the way we remember them! Obviously there are differences in the properties of different types of display media, but the end result is that sharpness, i.e. resolution, is as much a curse as a blessing.
The size of displays needs to be factored in too, and of course the ratio of physical screen size to pixel density matters. Nowadays we can see detailed imagery on everything from displays that fit in our palms to ones that fill entire walls... which is nice... but that's more about accessibility than actual improvement.
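For the curious, "pixel density" is just pixels along the panel's diagonal divided by its diagonal size, which is why the same 4K-ish pixel count can be razor sharp in your palm and chunky on a wall. A quick back-of-envelope sketch (the screen sizes here are made-up examples, not real products):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Same ballpark pixel count, wildly different density (sizes assumed for illustration):
phone = ppi(2160, 3840, 6.1)   # pocket screen: ~722 PPI
tv = ppi(3840, 2160, 65)       # wall screen:   ~68 PPI
```

Same dots, roughly tenfold difference in density. The "improvement" is in where you can put the screen, not in what it shows.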
Perhaps moving on to detail illustrates my point better: these days we have game characters so detailed we can discern individual pores in their skin. But I ask you, what fucking use is that?
It's a novelty. The "big picture", i.e. what's happening holistically, is where the meaning is.
Again, my argument is that "more" isn't necessarily better, yet this is the lie we're constantly being asked to accept.
The obvious example being the push to 8K, when only an inconsequential fraction of all the media in the history of the world was ever recorded at that level of detail! It's just bullshit!
Bear in mind actual reality has an infinite amount of complexity and detail to discover, should we choose to focus in on it. Yet we only rarely do, because it's inconsequential to our daily lives. It's not the detail and shading and texture that matters... it's the relevance of it.
So tell me why, when it comes to games and gaming hardware, it's suddenly important how many hairs are on our character's head? How the sub-surface scattering makes their skin look natural? How the irises of their eyes refract light, and so on?
What I'm getting at is that so much is done in pursuit of verisimilitude and "realism" for remarkably little gain.
To close this rant up, I obviously have to address the thing that really sparked my train of thought off in this direction in the first place:
Fake resolution through upscaling, and fake frame-rate improvement through motion smoothing. Both are essentially computational tricks to give the illusion that the hardware is capable of doing more than it's actually able to do.
These are good tricks, but aren't they just serving to mitigate the rod devs created for their own backs by indulging in excessive, irrelevant detail? The big picture stays the same; more dots are just being algorithmically created to fill in the gaps.
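To be fair to the simplest version of the trick: basic spatial upscaling literally just invents neighbouring dots from the ones it already has. A toy nearest-neighbour sketch (real upscalers like DLSS use far smarter filters and ML prediction, but the principle of outputting pixels that were never actually rendered is the same):

```python
def upscale_nn(img, factor):
    """Nearest-neighbour upscale: every output pixel simply copies the
    closest source pixel -- no new information is created, only more dots."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in img for _ in range(factor)]

small = [[1, 2],
         [3, 4]]
big = upscale_nn(small, 2)
# big == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Four pixels in, sixteen out, and not one of the twelve extras tells you anything the original four didn't.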
Same deal with frame interpolation. Historically games were single-threaded, so if one part slowed down, everything displayed slower. Hence we got the idea that more framerate equals more responsiveness, even though input polling and logic could always be un-hooked from the display rate with crafty coding. Now, through the snake-oil logic of frame-gen, we get MORE frames even when the logic is running at the exact same rate as if it weren't there, and we get shysters and half-wits talking about it "improving latency", when, generating said frames being an extra computational task, it has to be a net loss to some degree!
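That un-hooking trick is just the classic fixed-timestep loop, by the way, nothing exotic. A minimal sketch with a simulated clock (the function name and the rates are made up for illustration):

```python
def run(frames, logic_hz=60, render_hz=30):
    """Fixed-timestep loop: game logic ticks at logic_hz no matter how
    often we render. Returns how many logic ticks happened across the
    given number of display frames."""
    dt = 1.0 / logic_hz            # one logic step
    frame = 1.0 / render_hz        # one display frame
    acc = 0.0
    ticks = 0
    for _ in range(frames):        # one pass per rendered frame
        acc += frame
        while acc >= dt - 1e-9:    # run every logic tick that fits
            ticks += 1             # (input polling + simulation go here)
            acc -= dt
    return ticks

# run(30) -> 60: a second's worth of 30 fps rendering still gets 60 Hz logic
```

The logic ticks at 60 Hz however rarely we draw, which is exactly why framerate and responsiveness were never the same thing.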
Bottom line: the "lie" is that we're getting more, when really it's just the same old same old with a bit of visual spit and polish on top.
It's a con, the same way saying a big-ass TV brings "the cinema experience back home" is a con, when the real difference with the cinematic experience was that it was a night out, with an audience, creating a communal event.
Sorry guys, but the big "more" always seems, in actuality, to be just more money for the same stuff!
tl;dr:
More is not the same as better. Stop chasing numbers and instead focus on meaningful improvement. It's all just a sales pitch to keep you spending.