... Most of the time, the kind of scuzzy/dirty look of the game is by choice; I don't like things to be so perfect for some reason.
No need to justify yourself. I'm not talking about design choices. That's
not my job. I don't even like it when people tell the artist how he or she
should do things in his/her game. Even if you used a massive blur covering
the whole screen beyond recognition of any detail, or whatever, it would be
entirely up to you. Right? I like to play a game the way the artist had it
in mind, not how I envision it in my mind. Anyhow. I just wanted to let you
know that from over here the game looks a bit blurry (going by the given
material), so that you can consider why this might be the case. Going by
the (video) material, it is impossible to say how sharp the images are once
they have been blurred or touched in any way (by whatever process, e.g. by
YouTube or other forms of transcoding). So the question of which blur is
real and which was introduced via transcoding etc. can only be answered
with the originals, which is why I asked for them, to get a better picture
on my side. And I have to say, the originals do look much better, and the
slight blur here and there perfectly matches the overall atmosphere, by
design, as you said. Well, you may perhaps post a native image now and then
to show people how the game will in all likelihood look on their end. :+1:
I'm not the best at this sort of thing, but it's probably a shader or camera effect. Using an object like a lens (a circle), you capture the screen with a second camera that has a fisheye effect, culling the lens object, then overlay that capture as a texture or sprite on the "lens" object and mask it to the bounds of the object with a shader.
This is similar to how you would set up multiple cameras to reproject what they are seeing onto in-game "screens" as textures every frame on a monitor object, to simulate "security camera" footage.
It would be easiest using in-game tools, but nothing is stopping you from rolling your own effects for exacting results (see the sketch below).
Edit: you can probably set the secondary camera to a narrower FOV when you only need to capture a small section, to reduce overhead.
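To make that concrete, here is a minimal Unity-style sketch of the setup
described above. The class and field names (LensEffect, lensCamera,
lensObject) are made up for illustration, and the actual fisheye distortion
and circular masking would live in the lens material's shader:

```csharp
using UnityEngine;

// Sketch (assumed names): a secondary camera renders the scene into a
// RenderTexture, which is then overlaid on a "lens" object in the view.
public class LensEffect : MonoBehaviour
{
    public Camera lensCamera;   // secondary camera doing the capture
    public Renderer lensObject; // the circle/quad acting as the lens
    public float lensFov = 30f; // narrower FOV to capture just a small section

    RenderTexture capture;

    void Start()
    {
        capture = new RenderTexture(512, 512, 16);
        // Render into the texture instead of the screen.
        lensCamera.targetTexture = capture;
        // Narrower FOV, per the edit above, to reduce overhead.
        lensCamera.fieldOfView = lensFov;
        // Cull the lens object itself so it doesn't appear in its own capture.
        lensCamera.cullingMask &= ~(1 << lensObject.gameObject.layer);
        // Overlay the capture on the lens object; a fisheye/masking shader
        // on this material would do the distortion.
        lensObject.material.mainTexture = capture;
    }
}
```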
Indeed, render to texture would do it. What I was asking about requires a
deferred approach, similar to deferred rendering, because with a z-buffer
engine the required information is already lost, or rather was never even
computed. When an opaque window, for example, partially covers a button,
the remaining visible pixels of the button won't be enough to apply a
post-processing lens to. That's an issue I had with my UI: a button gets
clipped by (the bounds of) a window and then rendered. I can now defer the
clipping, blending etc. such that I have all the information available. For
sure, it requires more resources ... which is why I wanted to know whether
Unity may have found a better way with their new UI. A rough sketch of the
idea is below.
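As a rough illustration of that deferred idea (all names hypothetical; this
is not any engine's actual API), each widget could be rendered whole to its
own surface, with clipping and blending postponed to a composite pass, so a
lens effect can still sample pixels that the final image would occlude:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: widgets are drawn complete onto their own surfaces;
// clipping/blending happens later, so a lens effect can still read pixels
// that the final composite would hide behind a window.
struct UiRect { public int X, Y, W, H; }

class WidgetSurface
{
    public UiRect Bounds;  // full, unclipped widget area
    public byte[] Pixels;  // complete pixel data, even where occluded
}

class DeferredUiRenderer
{
    readonly List<(WidgetSurface Surface, UiRect Clip)> commands = new();

    // Pass 1: record the widget with its clip rect instead of clipping now.
    public void Draw(WidgetSurface widget, UiRect clipAgainstWindow)
        => commands.Add((widget, clipAgainstWindow));

    // Effects run here, before clipping, with full widget data available.
    public void ApplyLens(UiRect lensArea)
    {
        foreach (var (surface, _) in commands)
        {
            // e.g. warp surface.Pixels wherever Bounds intersects lensArea;
            // the occluded button pixels still exist at this point.
        }
    }

    // Pass 2: clip against the recorded rects and blend into the framebuffer.
    public void Composite()
    {
        foreach (var (surface, clip) in commands)
        {
            // Blend only the intersection of surface.Bounds and clip.
        }
        commands.Clear();
    }
}
```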