Game Graphics Technology | 64bit, procedural, high-fidelity debating

Dreams converts SDF data to point clouds for rendering, while Claybook ray-traces the SDFs directly.
I'm sure there are a lot of differences.
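For anyone unfamiliar with the second approach, here's a minimal sphere-tracing sketch in Python (purely illustrative; this is the textbook technique, not how either engine actually implements it):

```python
# Minimal sphere-tracing sketch (textbook technique, not either engine's
# actual code). The SDF value at a point is always a safe distance to step
# along the ray.
import math

def sdf_sphere(x, y, z, r=1.0):
    # Signed distance from (x, y, z) to a sphere of radius r at the origin.
    return math.sqrt(x*x + y*y + z*z) - r

def sphere_trace(ox, oy, oz, dx, dy, dz, max_steps=64, eps=1e-4, t_max=100.0):
    t = 0.0
    for _ in range(max_steps):
        d = sdf_sphere(ox + t*dx, oy + t*dy, oz + t*dz)
        if d < eps:
            return t        # close enough to the surface: report a hit
        t += d              # step by the distance to the nearest surface
        if t > t_max:
            break
    return None             # ray escaped: miss

print(sphere_trace(0.0, 0.0, -3.0, 0.0, 0.0, 1.0))  # hits the sphere at t = 2.0
```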

Very impressive tech.

Interesting. This probably accounts for the fact that Dreams seems to allow more flexibility in the kinds of "brushes" it employs.
 
Not sure if this was posted already, but it's a really interesting talk, so well worth posting again.

D3D12 Future, VR and Beyond - Oxide Games at Capsaicin 2017.
Some very nice ideas on rendering for VR and how decoupled shading can help.
Rendering only one eye at 30fps and using the result for rasterization at 90fps for both eyes. (Dropping frames at the shading stage instead of the rasterization stage to keep the visible framerate smooth.)
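
In case it helps, here's a toy Python sketch of that decoupling (my own simplification of the idea, not Oxide's actual implementation):

```python
# Toy sketch of decoupled shading rates: the expensive shading pass runs at
# a third of the presentation rate, and the cheap per-frame pass reuses the
# cached result. Not Oxide's actual implementation.
import numpy as np

SHADE_EVERY = 3                        # e.g. shade at 30Hz, present at 90Hz
shade_cache = np.zeros((64, 64, 3))    # last shading result

def expensive_shading(frame):
    # Stand-in for the costly lighting/material evaluation.
    return np.full((64, 64, 3), np.sin(frame * 0.1) * 0.5 + 0.5)

def rasterize(cache):
    # Stand-in for the cheap per-frame pass that would re-rasterize the
    # cached shading with up-to-date camera/eye transforms.
    return cache.copy()

for frame in range(9):
    if frame % SHADE_EVERY == 0:       # frames are dropped at the shading
        shade_cache = expensive_shading(frame)   # stage, never at rasterization
    image = rasterize(shade_cache)
    print(frame, round(float(image[0, 0, 0]), 3))
```
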
Thx. Well, I'm wondering whether this shade-skipping will also work for quite fast games. Sure, one won't notice with slow-paced ones like an RPG, but a fast one may be much more difficult to get right, I guess. Btw, where is this "Not Enough Bullets" demo showing all that stuff?
 
Thx. Well, I'm wondering whether this shade-skipping will also work for quite fast games. Sure, one won't notice with slow-paced ones like an RPG, but a fast one may be much more difficult to get right, I guess. Btw, where is this "Not Enough Bullets" demo showing all that stuff?
Jerky shadows cast on the ground by fast-moving objects might be the most obvious telltale sign that the shading framerate isn't the same as the rasterization framerate.
If an object rotates or translates in any way, the result doesn't look identical between frames anyway, so the artifact may be harder to notice.

It should also be possible to have several different shading framerates within a scene, so distant scenery could be shaded quite rarely (and even blended between shading states) while the most obvious areas are shaded every frame.
It might also be possible to do what amounts to temporal AA for shading, i.e. improve the shading result over a period of time.

What is certain is that high rasterization resolutions should be quite cheap, as the per-pixel work is reduced to very simple shading.
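
A toy Python sketch of that temporal-accumulation idea (my own illustration, assuming a simple exponential moving average as the accumulation scheme):

```python
# Toy sketch of "temporal AA for shading": one cheap, noisy shading sample
# per frame converges to a stable result over time. Assumes an exponential
# moving average as the accumulation scheme.
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 0.1          # blend weight of the newest sample
accum = None

for frame in range(100):
    noisy = 0.5 + rng.normal(0.0, 0.2, size=(32, 32))  # 1 cheap sample/frame
    accum = noisy if accum is None else ALPHA * noisy + (1 - ALPHA) * accum

# Mean stays near the true value (0.5) while the noise is heavily damped.
print(round(float(accum.mean()), 3), round(float(accum.std()), 3))
```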


Pretty sure the demo hasn't been released into the wild.
 
So apparently Prey 2 was going to be rather amazing on a technical level before, well, you know (fuck Bethesda).

Link.
1. Virtual texturing, super high res HDR lightmaps (64k x 64k), PBR before it was popular, deferred shading, SSR in 2ms on 360 & PS3

Link.
2. Auto-focus DOF, approx area lights, precomputed soft shadows (UE4 stationary lights). We nailed the standard tech of this gen last gen
 
This is really fascinating.

Neural-network-modelled ambient occlusion - it approaches ground truth in a pretty cool way.
http://theorangeduck.com/media/uploads/other_stuff/nnao.pdf
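The gist, as I understand it: a small fully-connected network is trained to map a per-pixel neighborhood of depth/normal features to an AO value. A toy inference sketch in Python (the weights here are random placeholders and the 64-wide feature vector is a made-up example; the paper trains its four-layer network against ground-truth AO):

```python
# Toy inference sketch of the NNAO idea. Weights are random placeholders;
# in the paper they are trained against ground-truth AO, and the input
# features come from a neighborhood of camera-space depths/normals.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Four fully-connected layers mapping per-pixel features -> one AO value.
sizes = [(64, 32), (32, 16), (16, 8), (8, 1)]
W = [rng.normal(0.0, 0.1, s) for s in sizes]
b = [np.zeros(s[1]) for s in sizes]

def nnao_pixel(features):
    h = features
    for Wi, bi in zip(W[:-1], b[:-1]):
        h = relu(h @ Wi + bi)
    return 1.0 / (1.0 + np.exp(-(h @ W[-1] + b[-1])))  # squash AO into [0, 1]

print(nnao_pixel(rng.normal(size=64)))
```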
 
EVE Valkyrie update adds improved visuals as well as Multi-Sample G-Buffer Anti-Aliasing (MSGAA)

http://graphics.cs.williams.edu/papers/AggregateI3D15/Crassin2015Aggregate-lowres.pdf

is all I could find that's similar. Anyone have any info?

I don't think that it's the same as the AGAA described in this paper, as they don't seem to use MSAA h/w anywhere in their approach.

It's also interesting to note that MSGAA seems to require 175% of output resolution to function - I wonder if some kind of decoupled shading is at play here again.

Edit: Or it may actually be the case - I wonder why the renaming though:
We use a high multi-sampling rate (e.g., 8x MSAA, which is natively supported by the GPU, and up to 32 samples per pixel with emulation) to ensure the geometry buffer captures the fine-scale geometric details.
 
I don't think that it's the same as the AGAA described in this paper, as they don't seem to use MSAA h/w anywhere in their approach.

It's also interesting to note that MSGAA seems to require 175% of output resolution to function - I wonder if some kind of decoupled shading is at play here again.

Edit: Or it may actually be the case - I wonder why the renaming though:

When's the next GDC-type tradeshow?
 
SIGGRAPH 2017 is next week, though I'm not sure what you mean by tradeshow.

http://openproblems.realtimerendering.com/s2017/index.html

There's some of the stuff people will be presenting, and then there is of course everyone's favorite, Advances in Real-Time Rendering, though they haven't revealed its content yet.
Looks like an interesting lineup. Hope there's some stuff on various reconstruction techniques as well. Hoping more devs put R&D into that area.
 
Looks like an interesting lineup. Hope there's some stuff on various reconstruction techniques as well. Hoping more devs put R&D into that area.

You might be interested in the "Rendering of COD: Infinite Warfare" PDF linked above. They have a pretty cool multi-resolution technique.
 
Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination

http://research.nvidia.com/publication/2017-07_Spatiotemporal-Variance-Guided-Filtering:

A new technique that takes a single-spp input from a path tracer and approaches the reference image using spatiotemporal filters.
The results are quite amazing. Even if it's obviously not perfect (especially when the input doesn't provide enough information, such as in very dark/badly lit scenes), I'm very surprised to see such a stable, denoised final image running in 10ms at 1920x1080.
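
A heavily simplified Python sketch of the two core ingredients, as I read the paper (my own toy reduction, not the authors' code): temporal accumulation of the 1spp input, plus an edge-stopping spatial blur whose strength is driven by the per-pixel variance estimate:

```python
# Greatly simplified sketch of the SVGF idea (toy reduction, not the paper's
# actual filter): temporally accumulate 1spp samples, then blur more where
# the variance estimate is high and less where it is low.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
mean = np.zeros((H, W))   # temporal mean of radiance
m2 = np.zeros((H, W))     # temporal mean of radiance^2 (for variance)

for frame in range(16):
    sample = 0.5 + rng.normal(0.0, 0.3, (H, W))   # noisy 1spp input
    a = max(1.0 / (frame + 1), 0.2)               # temporal blend factor
    mean = (1 - a) * mean + a * sample
    m2 = (1 - a) * m2 + a * sample**2

var = np.maximum(m2 - mean**2, 0.0)

# One cross-shaped, variance-guided filter pass (real SVGF uses multiple
# a-trous passes with depth/normal edge-stopping weights too).
filtered = mean.copy()
wsum = np.ones_like(mean)
for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
    neighbor = np.roll(mean, (dy, dx), axis=(0, 1))
    w = np.exp(-np.abs(neighbor - mean) / (np.sqrt(var) + 1e-4))
    filtered += w * neighbor
    wsum += w
filtered /= wsum

print(round(float(mean.std()), 4), round(float(filtered.std()), 4))
```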


EDIT: SIGGRAPH time!
 
Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination

http://research.nvidia.com/publication/2017-07_Spatiotemporal-Variance-Guided-Filtering:

A new technique that takes a single-spp input from a path tracer and approaches the reference image using spatiotemporal filters.
The results are quite amazing. Even if it's obviously not perfect (especially when the input doesn't provide enough information, such as in very dark/badly lit scenes), I'm very surprised to see such a stable, denoised final image running in 10ms at 1920x1080.


EDIT: SIGGRAPH time!

That's awesome, though it's really performance-intensive. 10ms on a Titan X for 1080p is quite a lot, but at 30Hz it could work.
 
That's awesome, though it's really performance-intensive. 10ms on a Titan X for 1080p is quite a lot, but at 30Hz it could work.

It might be "quite a lot", but this is still a massive improvement for bringing path-tracing techniques to real-time rendering (as opposed to merely interactive rendering).
If the cost of the technique scales linearly, it would be trivial in two GPU generations, especially as the techniques themselves are improving rapidly.

Another similar technique published at nearly the same time is this one, and if there is something I find problematic, it's that the denoising process (especially in this last technique, in my opinion) seems to lose a lot of detail in the materials, making things look kind of... flat/overly smooth? That loses one of the advantages of using path tracing in the first place, but it might just be that the scenes themselves aren't very detailed to begin with.
 
It might be "quite a lot", but this is still a massive improvement for bringing path-tracing techniques to real-time rendering (as opposed to merely interactive rendering).
If the cost of the technique scales linearly, it would be trivial in two GPU generations, especially as the techniques themselves are improving rapidly.

Another similar technique published at nearly the same time is this one, and if there is something I find problematic, it's that the denoising process (especially in this last technique, in my opinion) seems to lose a lot of detail in the materials, making things look kind of... flat/overly smooth? That loses one of the advantages of using path tracing in the first place, but it might just be that the scenes themselves aren't very detailed to begin with.

I mean, I fully agree with you. I'm just looking at the near future :)
High-end Volta should reduce it to 5ms, and I think we could easily calculate lighting at 1080p, then reproject to 4K, and output everything else at 4K to save performance at higher resolutions.
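
One common way to do that kind of resolution split is depth-guided (nearest-depth/bilateral) upsampling of the low-res lighting buffer. A toy Python sketch of the idea (my own illustration, not any shipped implementation):

```python
# Toy sketch of lighting at low res + depth-guided upsampling to full res
# (general "nearest-depth" idea, not a shipped implementation).
import numpy as np

rng = np.random.default_rng(0)
depth_hi = rng.uniform(1.0, 10.0, (8, 8))    # full-res depth buffer
depth_lo = depth_hi[::2, ::2]                # half-res depth
light_lo = 1.0 / depth_lo                    # half-res lighting (toy values)

light_hi = np.zeros_like(depth_hi)
for y in range(depth_hi.shape[0]):
    for x in range(depth_hi.shape[1]):
        # Candidate low-res texels around this full-res pixel.
        ys = [min(y // 2 + o, depth_lo.shape[0] - 1) for o in (0, 1)]
        xs = [min(x // 2 + o, depth_lo.shape[1] - 1) for o in (0, 1)]
        # Take the candidate whose depth best matches the full-res depth,
        # so lighting doesn't bleed across silhouette edges.
        ly, lx = min(((ly, lx) for ly in ys for lx in xs),
                     key=lambda c: abs(depth_lo[c] - depth_hi[y, x]))
        light_hi[y, x] = light_lo[ly, lx]

print(light_hi.shape)  # full-res lighting reconstructed from half-res input
```
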
 
So the GG presentation on Anti-Aliasing, Checkerboarding, spherical area lights and height fog was posted.

The AA and checkerboarding sections are rather awesome. They note that using full-resolution hints (like a triangle index buffer, depth, or alpha-tested stuff) was too expensive for them to get 2160c running at 30fps in Horizon (probably bandwidth-bound), so they opted to just do everything at checkerboarded resolution... albeit rotated and rearranged so that FXAA and TAA make it look the way it does in the end, while staying efficient.

Quite fascinating.

It also explains the detail pattern you see in Horizon screenshots at 4K, where it doesn't resolve anything smaller than 2x2 pixel pairs. Kinda awesome.
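
For reference, the basic checkerboard idea looks something like this toy Python sketch (generic checkerboarding with a naive hole-fill, not Guerrilla's rotated/rearranged pipeline):

```python
# Toy checkerboard reconstruction: shade half the pixels per frame in an
# alternating checkerboard pattern, fill the other half from the previous
# frame. A real resolve reprojects the history using motion vectors; this
# naive version assumes a static image.
import numpy as np

H = W = 8
yy, xx = np.mgrid[0:H, 0:W]

def shade(frame):
    # Stand-in for the expensive shading of every pixel.
    return np.sin(0.5 * frame + 0.1 * (yy + xx))

prev = np.zeros((H, W))
for frame in range(4):
    mask = (yy + xx + frame) % 2 == 0              # alternate parity per frame
    resolved = np.where(mask, shade(frame), prev)  # shade half, fill the rest
    prev = resolved

print(resolved.round(2))
```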
 
So the GG presentation on Anti-Aliasing, Checkerboarding, spherical area lights and height fog was posted.

The AA and checkerboarding sections are rather awesome. They note that using full-resolution hints (like a triangle index buffer, depth, or alpha-tested stuff) was too expensive for them to get 2160c running at 30fps in Horizon (probably bandwidth-bound), so they opted to just do everything at checkerboarded resolution... albeit rotated and rearranged so that FXAA and TAA make it look the way it does in the end, while staying efficient.

Quite fascinating.

It also explains the detail pattern you see in Horizon screenshots at 4K, where it doesn't resolve anything smaller than 2x2 pixel pairs. Kinda awesome.

Surprised they use FXAA at all. Is there any mention of fp16 or ID buffer use/performance?
 
Surprised they use FXAA at all. Is there any mention of fp16 or ID buffer use/performance?

No ID buffer usage for their checkerboarding at all, but they didn't mention whether they used fp16 on PS4 Pro specifically. If I were to conjecture, I would say the full-res ID buffer being generated in hardware is not their problem; rather, as they mention, it's not having the bandwidth for it while getting the image up to 2160c. They could have done 1800c using a full-res ID buffer, but chose a different path and didn't use it at all in order to get to 2160c.

They mention that SMAA (1x) was too expensive for them, and they were in fact using it in production at some point. FXAA is a good fit for soft images (pixel debrightening, and it handles diagonals wonderfully, hence the cool tangram idea they came up with), and they were already targeting a soft image. They do end up using some post-sharpening, as they mention. FXAA still sucks, though, for long near-horizontal/near-vertical edges.
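
To make the FXAA part concrete, here's a crude Python sketch of the core idea (not the real algorithm, which also estimates edge direction and searches along it): detect edges from local luma contrast and blend toward the neighborhood only there:

```python
# Crude sketch of the core FXAA idea: detect edges from local luma contrast
# and blend the pixel toward its neighborhood only there. The real algorithm
# also estimates edge direction and searches along it.
import numpy as np

luma = np.zeros((8, 8))
luma[:, 4:] = 1.0                      # a hard vertical edge

blur = luma.copy()
for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
    blur += np.roll(luma, (dy, dx), axis=(0, 1))   # wrap-around is fine for a toy
blur /= 5.0                            # small cross-shaped average

contrast = np.abs(blur - luma)         # high next to the edge, zero elsewhere
out = np.where(contrast > 0.1, blur, luma)
print(out.round(2))
```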
 
So the GG presentation on Anti-Aliasing, Checkerboarding, spherical area lights and height fog was posted.

The AA and checkerboarding sections are rather awesome. They note that using full-resolution hints (like a triangle index buffer, depth, or alpha-tested stuff) was too expensive for them to get 2160c running at 30fps in Horizon (probably bandwidth-bound), so they opted to just do everything at checkerboarded resolution... albeit rotated and rearranged so that FXAA and TAA make it look the way it does in the end, while staying efficient.

Quite fascinating.

It also explains the detail pattern you see in Horizon screenshots at 4K, where it doesn't resolve anything smaller than 2x2 pixel pairs. Kinda awesome.

This is really, really clever.

I'm shocked Horizon is using FXAA as well. I've never seen such good results from it before. I guess that tangram pattern works well with it.
 
No ID buffer usage for their checkerboarding at all, but they didn't mention whether they used fp16 on PS4 Pro specifically. If I were to conjecture, I would say the full-res ID buffer being generated in hardware is not their problem; rather, as they mention, it's not having the bandwidth for it while getting the image up to 2160c. They could have done 1800c using a full-res ID buffer, but chose a different path and didn't use it at all in order to get to 2160c.

They mention that SMAA (1x) was too expensive for them, and they were in fact using it in production at some point. FXAA is a good fit for soft images (pixel debrightening, and it handles diagonals wonderfully, hence the cool tangram idea they came up with), and they were already targeting a soft image. They do end up using some post-sharpening, as they mention. FXAA still sucks, though, for long near-horizontal/near-vertical edges.

Damn. I've been hoping to get some devs giving their real-world results on how useful the ID buffer really is compared to "software" solutions.
 
No ID buffer usage for their checkerboarding at all, but they didn't mention whether they used fp16 on PS4 Pro specifically. If I were to conjecture, I would say the full-res ID buffer being generated in hardware is not their problem; rather, as they mention, it's not having the bandwidth for it while getting the image up to 2160c. They could have done 1800c using a full-res ID buffer, but chose a different path and didn't use it at all in order to get to 2160c.

They mention that SMAA (1x) was too expensive for them, and they were in fact using it in production at some point. FXAA is a good fit for soft images (pixel debrightening, and it handles diagonals wonderfully, hence the cool tangram idea they came up with), and they were already targeting a soft image. They do end up using some post-sharpening, as they mention. FXAA still sucks, though, for long near-horizontal/near-vertical edges.

This part of it fascinates me, because it makes me think there's still a lot that could be done to improve reconstruction on better hardware. I've been expecting PS5 and such to still utilize it, so it'll be neat to see how these techniques evolve.
 
Damn. I've been hoping to get some devs giving their real-world results on how useful the ID buffer really is compared to "software" solutions.

Yeah, still waiting on that, I guess. But heck, if you think the result looks good in Horizon... then we have a pretty good idea of how ID-buffer-less checkerboarding can look.
This part of it fascinates me, because it makes me think there's still a lot that could be done to improve reconstruction on better hardware. I've been expecting PS5 and such to still utilize it, so it'll be neat to see how these techniques evolve.
Considering GG were hardware-limited here in making a specific version of checkerboarding (one with full-resolution helper buffers), and ended up coming up with this successful alternative, it should be interesting to see what devs like DICE, Ubi, and others produce once they are less bandwidth/hardware-bound in the next few years (on PC). It also makes me wonder what other cool software solutions we'll see! Perhaps more expensive solutions on AMD hardware on PC could point in the conceptual direction next-gen consoles will take reconstruction once they have more bandwidth to play with.

Very exciting!
 
Yeah, still waiting on that, I guess. But heck, if you think the result looks good in Horizon... then we have a pretty good idea of how ID-buffer-less checkerboarding can look.

Yeah, it looks great. I wonder if there are API limitations on the PC platform resulting in the poor implementations we currently have.
 
Considering GG were hardware-limited here in making a specific version of checkerboarding (one with full-resolution helper buffers), and ended up coming up with this successful alternative, it should be interesting to see what devs like DICE, Ubi, and others produce once they are less bandwidth/hardware-bound in the next few years (on PC). It also makes me wonder what other cool software solutions we'll see! Perhaps more expensive solutions on AMD hardware on PC could point in the conceptual direction next-gen consoles will take reconstruction once they have more bandwidth to play with.

Very exciting!

I would love to see this sort of thing become more commonplace on PC. I think it would put an end to a lot of people complaining about it not being "real" 4K when they haven't seen it in action.

I really hope Insomniac discusses their temporal injection technique next year once Spider-Man is released. It's hugely promising.

Yeah, it looks great. I wonder if there are API limitations on the PC platform resulting in the poor implementations we currently have.

It could be something that's tough to implement in a GPU-agnostic way. It seems like you need to know precisely how the GPU will handle various elements in order to make it work well.
 
I would love to see this sort of thing become more commonplace on PC. I think it would put an end to a lot of people complaining about it not being "real" 4K when they haven't seen it in action.

Options are always good, I'm all for it.
Regarding real 4K: you also need to consider that on PC you sit closer to the screen, so artifacts are more visible than from a typical TV distance.
 
Options are always good, I'm all for it.
Regarding real 4K: you also need to consider that on PC you sit closer to the screen, so artifacts are more visible than from a typical TV distance.

That's definitely the major hurdle for this sort of thing gaining traction on PC, but it would be a great option for those who want to use their TVs for PC gaming.
 
Options are always good, I'm all for it.
Regarding real 4K: you also need to consider that on PC you sit closer to the screen, so artifacts are more visible than from a typical TV distance.

But monitors are generally much smaller than TVs, so doesn't it cancel out?
 
But monitors are generally much smaller than TVs, so doesn't it cancel out?

If you were sitting the same distance away from the screen, yes, but I don't know many people who sit six or more feet away from their monitor.
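
It doesn't fully cancel out, as a quick pixels-per-degree calculation shows (the sizes and distances below are hypothetical but typical numbers):

```python
# Quick check of the "does screen size cancel out distance?" question:
# pixels per degree for a 4K monitor at desk distance vs a 4K TV on a couch.
import math

def pixels_per_degree(h_pixels, screen_width_m, distance_m):
    # Horizontal field of view subtended by the screen, in degrees.
    fov = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return h_pixels / fov

print(pixels_per_degree(3840, 0.60, 0.7))   # 27" monitor at 0.7m: ~83 ppd
print(pixels_per_degree(3840, 1.22, 2.5))   # 55" TV at 2.5m:     ~140 ppd
```

So the couch setup actually packs more pixels into each degree of vision, which is why reconstruction artifacts are more forgiving on a TV.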
 