New Brigade 3 video (real-time path tracing, running on two GTX Titans)

Update: The full presentation is now online:
http://nvidia.fullviewmedia.com/gtc2014/S4766.html

Gaming relevant (12:15):
* Curved rendering for Oculus/Morpheus
* Lightfield rendering
* Foveated rendering
* Temporal reprojection + prediction
* Noise filtering
* Cloud gaming via Amazon EC2

My opinion: Well, at least they are ambitious :)
bj6v0jrceaapofe72u1f.jpg

OTOY said:
@JonOlick @ID_AA_Carmack #VR / #oculus display is our focus. Noise filter + dev SDK soon: http://bit.ly/1hKAZyz pic.twitter.com/rUzWf1fQxQ

Original Post
What is path tracing?
Path tracing uses stochastic simulation to render three-dimensional scenes.
It physically accurately simulates global illumination, depth of field, motion blur, caustics and ambient occlusion: effects that otherwise have to be added manually and/or are very hard to achieve.
It is used in offline rendering, e.g. for motion pictures, advertising or architecture.

Why the noise?
Path tracing uses a random-sampling ray tracing method (Monte Carlo). It needs a very large number of rays to reduce noise, just like with photons in real photography.
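For anyone curious, here's a tiny toy sketch of that (not Brigade's code, just a made-up Monte Carlo estimator for a single pixel) showing how the noise shrinks as you throw more random samples at it:

```python
import random
import statistics

def sample_radiance():
    """Toy stand-in for tracing one random path through a pixel;
    the true pixel value here is 0.5."""
    return random.random()

def render_pixel(num_samples):
    """Monte Carlo estimate: the average of num_samples random path samples."""
    return sum(sample_radiance() for _ in range(num_samples)) / num_samples

# Measure the noise (standard deviation of the pixel estimate) as samples grow.
for n in (1, 4, 16, 64, 256):
    estimates = [render_pixel(n) for _ in range(2000)]
    print(f"{n:4d} samples/pixel -> noise ~ {statistics.stdev(estimates):.4f}")
```

Every 4x jump in samples roughly halves the measured noise, which is exactly the "four times the power, half the noise" rule below.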

What about the frame rate?
Path tracing is an "unbiased" renderer: rendering a scene twice as long gives the same result as averaging two renders of the same scene.
So it always runs at whatever frame rate you set it to; more rendering power just reduces noise levels. Four times the power, half the noise.
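And a quick numerical check of that averaging claim, reusing the same kind of toy estimator (again just an illustrative sketch, nothing from the actual renderer):

```python
import random
import statistics

def render_pixel(num_samples):
    # Toy unbiased estimator: each sample is uniform on [0, 1), true value 0.5.
    return sum(random.random() for _ in range(num_samples)) / num_samples

trials = 5000
double_time = [render_pixel(128) for _ in range(trials)]                           # one render, 2N samples
averaged = [(render_pixel(64) + render_pixel(64)) / 2 for _ in range(trials)]      # two N-sample renders, averaged

print("2N-sample render:   mean %.4f  noise %.4f" % (statistics.mean(double_time), statistics.stdev(double_time)))
print("avg of two renders: mean %.4f  noise %.4f" % (statistics.mean(averaged), statistics.stdev(averaged)))
```

Both come out with the same mean and essentially the same noise, which is what "unbiased" buys you: you can accumulate work across frames or across GPUs and it all adds up the same way.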

What is Brigade?
An extremely fast GPU path tracer in development by OTOY, specifically designed for real-time applications like video games.
More info here: http://brigade3.com/

Why I think it's important now
Perfect for virtual reality. Path tracing eliminates the need for distortion shaders and achieves life-like graphics.
Rendering cost reductions thanks to foveated rendering could make it viable even for modest GPUs.
Though that needs eye tracking inside head-mounted displays, which is still an area of research.
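To give a feel for what foveated rendering could save, here's a purely hypothetical sample-budget sketch (all numbers invented; a real implementation would drive this from actual eye-tracking data):

```python
import math

def samples_per_pixel(px, py, gaze_x, gaze_y, full_spp=64, min_spp=1, fovea_radius=100.0):
    """Spend the full ray budget near the tracked gaze point and
    progressively fewer samples toward the periphery."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= fovea_radius:
        return full_spp
    # Halve the budget for every additional fovea_radius of distance.
    falloff = 0.5 ** ((dist - fovea_radius) / fovea_radius)
    return max(min_spp, int(full_spp * falloff))

# Example: gaze at the centre of a 1920x1080 frame.
for x in (960, 1200, 1500, 1900):
    print(x, samples_per_pixel(x, 540, 960, 540))
```

In this toy falloff the periphery of a 1080p frame drops to a handful of samples per pixel while the region around the gaze point keeps the full budget, which is where the bulk of the cost reduction would come from.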

The new video (released Mar 28 at GTC 2014)
Link: https://www.youtube.com/watch?v=BpT6MkCeP7Y
 
GPUs from 2015 or maybe 2016 will do this in real time... we'll see something incredible in the next few years! O_____________O
 
I've been keeping track of this for a long time.

Holy shit. Looking better with every video they release. The level of quality retained during motion is an indication of how far both the software and the hardware have advanced. Once the noise is eliminated, which of course may not be for a very long time, a lot of scenes may be indistinguishable from reality to the layman. That prospect is incredibly exciting.
 
Looks like a substitute for software rendering in many applications. Its game utility looks a bit suspect at the moment.
 
Noise makes stuff harder to encode / compress - something that would have been a flat color or a cleanly-defined edge now has a bunch more detail that the compression algorithm has to attempt to account for. I imagine that most lossy video compression techniques aren't tuned to handle effectively random noise very well, since most things that you'd want to make a video of (real life or animated) aren't this noisy.

The lighting in this is neat, and far more realistic than what I see in most modern games. It's hard not to notice that it's all stationary environments at this point, though. Where are the skinned and animated meshes that they mention on their site?
 
Now if they can just cut it down to one Titan. One of the programmers already said that the PS4 is capable of some path-tracing effects, but obviously nothing like color bleeding and the extensive DoF.
 
Is it just me or does the noise also make encoding appear worse?

Still looking fine as hell, though.

I agree. I think they should've exported the video at 4K, since apparently even if your video is lower-res than that, it ends up less compressed/artifacted.

Of course, that depends on whether the render or YouTube was at fault.
 
Noise makes stuff harder to encode / compress - something that would have been a flat color or a cleanly-defined edge now has a bunch more detail that the compression algorithm has to attempt to account for. I imagine that most lossy video compression techniques aren't tuned to handle effectively random noise very well, since most things that you'd want to make a video of (real life or animated) aren't this noisy.

The lighting in this is neat, and far more realistic than what I see in most modern games. It's hard not to notice that it's all stationary environments at this point, though. Where are the skinned and animated meshes that they mention on their site?

There is a dancing demo they released a while ago, I'll look for it.

Edit: This one? https://www.youtube.com/watch?v=Ckupsw6B_48
 
GPUs from 2015 or maybe 2016 will do this in real time... we'll see something incredible in the next few years! O_____________O

If NVIDIA's roadmap stays on track for 2016/2017, then I think Pascal has a shot at running the engine decently combined with a simple game.

The progress the team has made has been incredible thus far.
 
I thought a lot of the footage was real life for a second. I would LOVE to use VR in the night street area. Those graphics, night time, people chasing you, and 3d sounds! hnnnnnnnnngggg
 
I've been keeping track of this for a long time.

Holy shit. Looking better with every video they release. The level of quality retained during motion is an indication of how far both the software and the hardware have advanced. Once the noise is eliminated, which of course may not be for a very long time, a lot of scenes may be indistinguishable from reality to the layman. That prospect is incredibly exciting.
What's really exciting is how much easier the engine will make the lives of programmers and artists. Think of an indie title like Outlast with this type of graphics. No more approximating anything!
 
That's pretty much photo-realistic. Amazing time to be a video game aficionado. I can't wait to see what games will look like in March 2024.
 
Path tracing is an "unbiased" renderer: rendering a scene twice as long gives the same result as averaging two renders of the same scene.
So it always runs at whatever frame rate you set it to; more rendering power just reduces noise levels. Four times the power, half the noise.

So this means that they render different pixels each frame and blend them in time? When the camera pauses it takes a few seconds for the noise to become hard to spot, and many more for it to disappear completely...

But let's say it takes only a second for the image to be perfect: you'd still need 60 times the power for a 30 fps scene with every frame being noise-free, and 120 times the power for a 60 fps scene, correct?

If that's the case, then well... This is just many years off if it really takes 100 times 2 Titans to render properly XD

Now I'm depressed :(
 
One inexorable advancement after another, the Metaverse is inching closer to reality.

If that's the case, then well... This is just many years off if it really takes 100 times 2 Titans to render properly XD

Now I'm depressed :(

Don't be, that's only about 10-15 years away. :)
 
So this means that they render different pixels each frame and blend them in time? When the camera pauses it takes a few seconds for the noise to become hard to spot, and many more for it to disappear completely...

But let's say it takes only a second for the image to be perfect: you'd still need 60 times the power for a 30 fps scene with every frame being noise-free, and 120 times the power for a 60 fps scene, correct?

If that's the case, then well... This is just many years off if it really takes 100 times 2 Titans to render properly XD

Now I'm depressed :(

I wonder whether Moore's law will end before we get to that. CPUs are already tapering off, no?
 
If that's the case, then well... This is just many years off if it really takes :(
Not quite.
Noise is proportional to 1/√n, so 9 times the rendering power will only reduce noise by a factor of three.
On the other hand, if you accept 4 times the noise you only need 1/16th the rendering power. That's why it takes so long to converge (i.e. "clean up").

But that doesn't include the constant factor, which can vary by orders of magnitude.
For example, huge improvements can be made by making ray distribution more efficient.
(Images: same rendering time)
fig62wkxo.jpg
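To illustrate what "making ray distribution more efficient" can buy, here's a tiny importance-sampling toy (a 1D integral standing in for the light transport integral; nothing Brigade-specific, and the integrand is made up):

```python
import random
import statistics

def f(x):
    # Toy integrand standing in for the light arriving along direction x:
    # most of the contribution is near x = 1. True integral over [0, 1] is 1.
    return 3.0 * x * x

def uniform_rays(n):
    # Uniform ray distribution: pick x uniformly, average f(x).
    return sum(f(random.random()) for _ in range(n)) / n

def importance_rays(n):
    # Smarter distribution: draw x with pdf p(x) = 2x (more rays where f is large)
    # and weight each sample by f(x) / p(x) so the estimate stays unbiased.
    total = 0.0
    for _ in range(n):
        x = (1.0 - random.random()) ** 0.5   # inverse-CDF sample of p(x) = 2x, x in (0, 1]
        total += f(x) / (2.0 * x)
    return total / n

trials, n = 3000, 64
uni = [uniform_rays(n) for _ in range(trials)]
imp = [importance_rays(n) for _ in range(trials)]
print("uniform rays:    mean %.4f  noise %.4f" % (statistics.mean(uni), statistics.stdev(uni)))
print("importance rays: mean %.4f  noise %.4f" % (statistics.mean(imp), statistics.stdev(imp)))
```

Same number of samples, noticeably less noise, simply because more samples are spent where the light actually comes from; smarter ray distributions are exactly that constant factor.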


So we are much closer than you think. :)
 
The noise is indeed a problem, but this should revolutionize CGI rendering: with a consumer-grade GPU you can get a WYSIWYG preview by just leaving the camera still for a second or two.

Also, engines like Unity and Unreal could allow you to preview the lightmaps in real-time before baking them.

So this means that they render different pixels each frame and blend them in time? When the camera pauses it takes a few seconds for the noise to become hard to spot, and many more for it to disappear completely...

But let's say it takes only a second for the image to be perfect: you'd still need 60 times the power for a 30 fps scene with every frame being noise-free, and 120 times the power for a 60 fps scene, correct?

If that's the case, then well... This is just many years off if it really takes 100 times 2 Titans to render properly XD

Now I'm depressed :(

The image is never perfect, since the rays are shot at random. It converges to perfection, but it would take infinite time to be 100% noise-free. However, not even digital photography is noise-free, so once the noise drops to comparable levels the resulting image could be considered "finished".
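A little sketch of that convergence behaviour when the camera holds still: each new noisy frame gets folded into a running average, and the error keeps shrinking without ever hitting exactly zero (toy numbers, not the engine's actual accumulation buffer):

```python
import random

TRUE_VALUE = 0.5          # the "converged" pixel value
accumulated = 0.0          # running sum of the noisy frames
frames = 0

for frame in range(1, 301):                                    # ~10 seconds at 30 fps
    noisy_frame = TRUE_VALUE + random.uniform(-0.25, 0.25)     # one low-sample frame
    accumulated += noisy_frame
    frames += 1
    average = accumulated / frames                             # what you'd display while still
    if frame in (1, 10, 30, 100, 300):
        print(f"frame {frame:3d}: error = {abs(average - TRUE_VALUE):.4f}")
```

Once the remaining error drops below something like camera-sensor noise you'd call the image finished; any camera movement resets the accumulation, which is why the grain comes back during motion.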
 
Update: The full presentation is now online: (new thread?)
http://nvidia.fullviewmedia.com/gtc2014/S4766.html

Gaming relevant (12:15):
* Curved rendering for Oculus/Morpheus
* Lightfield rendering
* Foveated rendering
* Temporal reprojection + prediction
* Noise filtering
* Cloud gaming via Amazon EC2

My opinion: Well, at least they are ambitious :)
bj6v0jrceaapofe72u1f.jpg

OTOY said:
@JonOlick @ID_AA_Carmack #VR / #oculus display is our focus. Noise filter + dev SDK soon: http://bit.ly/1hKAZyz pic.twitter.com/rUzWf1fQxQ
 
It's gonna be crazy when you don't need huge render farms to crank out an amazing-looking FX piece. Definitely getting there. Can't wait to see where GPU rendering is in 10 years.
 
Path tracing is and will always be the future of computer graphics.

Seriously though, this seems great and more practical than conventional ray tracing techniques.
 
Couldn't they do shortcuts where they path trace in specific areas of interest in the scene, and then extrapolate the shading for adjacent pixels?
 
Couldn't they do shortcuts where they path trace in specific areas of interest in the scene, and then extrapolate the shading for adjacent pixels?
That's the idea behind noise filtering (or noise reduction).
They haven't shown any of it yet, though, AFAIK.
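A very rough sketch of that kind of shortcut, under big simplifying assumptions: trace only every 4th pixel of a scanline and linearly interpolate the rest. Real noise filters are edge-aware and far smarter than this, but it shows the basic trade:

```python
def shade_sparse(trace_pixel, width, step=4):
    """Path-trace only every `step`-th pixel of a scanline and fill the gaps
    by linear interpolation. `trace_pixel(x)` is the expensive call."""
    traced = {x: trace_pixel(x) for x in range(0, width, step)}
    # Make sure the last pixel is traced so interpolation has a right endpoint.
    traced.setdefault(width - 1, trace_pixel(width - 1))
    keys = sorted(traced)

    row = [0.0] * width
    for i in range(len(keys) - 1):
        x0, x1 = keys[i], keys[i + 1]
        for x in range(x0, x1 + 1):
            t = (x - x0) / (x1 - x0)
            row[x] = (1 - t) * traced[x0] + t * traced[x1]
    return row

# Example with a fake "expensive" shading function.
print(shade_sparse(lambda x: (x / 15.0) ** 2, width=16))
```

The obvious catch is that naive interpolation smears shading straight across geometric edges, which is why practical filters also look at depth/normal buffers to decide which neighbours are safe to blend.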
 
Could a system like Killzone SF's MP work, in which the game fills in the noise with pixels from the previous frame, therefore cutting the performance draw in half? Ghosting and blur would be side effects, but it would allow this tech to advance quicker until our GPUs are ready.
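Something along those lines (temporal blending with the previous frame) might look like this in spirit; everything here is hypothetical, including the blend factor, and the reprojection step is skipped by pretending the camera is static:

```python
import random

def temporal_blend(current, previous, alpha=0.2):
    """Blend the new noisy frame with the previous frame's pixels
    (which a real renderer would first reproject to the new camera).
    Lower alpha = less noise but more ghosting/blur."""
    return [alpha * c + (1.0 - alpha) * p for c, p in zip(current, previous)]

# Toy usage: a flat grey "scene" rendered with heavy per-frame noise.
truth = [0.5] * 8
history = [random.random() for _ in range(8)]        # whatever the first frame happened to be
for frame in range(60):
    noisy = [t + random.uniform(-0.3, 0.3) for t in truth]
    history = temporal_blend(noisy, history)          # static camera, so no reprojection needed
print([round(p, 3) for p in history])                 # settles near 0.5, with residual shimmer
```

Lower alpha means smoother results but more ghosting, exactly the trade-off mentioned above; with a moving camera the previous frame would first have to be reprojected using motion vectors, and disocclusions are where the ghosting shows up.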
 
After 20nm GPU cards arrive [or their refresh], this kind of engine could be used for a game in which you play a robot or a person who has received an implant that enables them to see for the first time. That would explain to the gamer why they are seeing "graining" in the image.

Anyhow, great example of ray tracing in real time. Still not perfect [IQ-wise], but we are getting there.
 
I just watched the presentation from the OP's link, and it's great. They are planning to implement everything from Octane [film rendering suite] into Brigade [game engine], everything unified. Games will get full film-quality assets and effects.

Very ambitious stuff.
 
Getting better every time I see it; still too much noise. But at this rate I wouldn't be surprised to see it in high quality in only a few years.
 
Will this run on PS4?


Yes I'm kidding. lol


Realistically I would say that games are at least 10 years off from actually having that level of visual fidelity. That being said, it's definitely exciting to see. This combined with VR could be astounding.
 
But that doesn't include the constant factor, which can vary by orders of magnitude.
For example, huge improvements can be made by making ray distribution more efficient.
(Images: same rendering time)
fig62wkxo.jpg

Yeah, with a lot of indirect light or complicated paths, path tracing can become very inefficient, so something like MLT (Metropolis Light Transport) would be useful. I asked about MLT on one of their videos but they didn't say anything. Maybe it doesn't reach "steady state" fast enough, who knows (in Metropolis sampling one often does "burn-in" by discarding the initial samples to get rid of any bias caused by the initial state). Maybe they are working on it.

But I'm sure that when we have a bit more power and/or foveated rendering becomes a reality, the necessary statistical methods will be developed.
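For reference, the "burn-in" idea looks roughly like this in a generic 1D Metropolis-Hastings toy (not MLT itself, which mutates whole light paths, but the same principle of discarding early samples):

```python
import math
import random

def target(x):
    # Unnormalised target density (a stand-in for "path contribution").
    return math.exp(-0.5 * (x - 3.0) ** 2)

def metropolis(num_samples, burn_in=1000, step=0.5):
    x = 0.0                      # arbitrary (and badly chosen) starting state
    samples = []
    for i in range(burn_in + num_samples):
        proposal = x + random.gauss(0.0, step)            # small mutation of the current state
        if random.random() < min(1.0, target(proposal) / target(x)):
            x = proposal                                   # accept the mutation
        if i >= burn_in:                                   # discard burn-in to forget the start state
            samples.append(x)
    return samples

s = metropolis(20000)
print("sample mean ~", sum(s) / len(s))   # lands near 3.0 despite starting at 0.0
```

The chain starts in a deliberately bad spot and still ends up sampling the right distribution once the burn-in samples are thrown away; the open question for real-time use is how much of each frame's budget that warm-up would eat.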
 
A nighttime asylum horror game with this tech on a good VR headset with full motion tracking = crap in your pants.
 
Impressive demo. It seems like the noise during perspective changes should be masked by motion blur to a degree when running VR simulations. Is this correct?
 
In the presentation they weren't talking about Oculus and Sony HMD support for real-time implementations, but for 3D movies.

Just watched it.

It's the same video from YouTube. :/
Ehh yeah, like -SD- said: "1 gig MOV of the latest video:"
 