Not as pricey as expected, but still not much of a consolation for people who expected this to work on their previous-generation graphics card. I'd love to see it running on my laptop's 1050 Ti.
Nice Twitter thread about performance
It's because reflections are a single ray; AO requires multiple. Doing it with fewer rays per pixel at more pixels leaves more raw data for the post-filtering to work with. If they can spend the memory on a larger buffer, they should even supersample the thing. As shifty said, with ray tracing you mostly pay for the rays; how you store their results is mostly just a matter of having the memory for it.
PICA PICA does the reflections ray tracing at half-res.
I'm sure many games will do RTAO at a quarter-res, just like SSAO.
Resolution is still very important in ray tracing. You have to trace rays for every pixel you want to shade, so more pixels = more rays. That cost should be reduced as much as possible. It benefits the performance of the filtering as well.
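As a rough back-of-the-envelope sketch (the numbers are illustrative, not taken from any of the demos), the ray budget scales with pixel count times rays per pixel, which is why half-res reflections or quarter-res AO save so much:

```python
# Illustrative ray-budget arithmetic: cost scales with pixels * rays per pixel.
def rays_per_frame(width, height, rays_per_pixel, scale=1.0):
    """scale is a per-axis resolution factor (0.5 = half-res, 0.25 = quarter-res)."""
    return int(width * scale) * int(height * scale) * rays_per_pixel

full = rays_per_frame(1920, 1080, 1)               # full-res, 1 ray/pixel
half_reflections = rays_per_frame(1920, 1080, 1, 0.5)   # half-res: 1/4 the rays
quarter_ao = rays_per_frame(1920, 1080, 4, 0.25)   # quarter-res AO, 4 rays/pixel

print(full, half_reflections, quarter_ao)
```

Note that quarter-res AO at 4 rays/pixel costs the same total rays as half-res reflections at 1 ray/pixel, which is the "fewer rays per pixel at more pixels" tradeoff described above.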
Thank you, I needed that! I actually blew coffee out of my nose at my monitor reading it, and I don't think I've even smiled in a day.
Maybe they were counting the projected price with crypto-mining market impacts?
Really surprised he thinks they might be able to reach 30 fps real-time rendering for the demo (I assume again with those two next-gen GPUs, but he is not clear) without visible noise by the end of the year/early next year; I would have expected full path tracing to take much longer than that.
Otoy's Unity talk:
Skip to 26:45 to watch the magic: unbiased path tracing with multiple bounces at 1080p, 50spp, 1 second, running on two (undisclosed) GPUs. Basically a generation ahead of the RTX demos.
It's a neat demo, but I'm not sure what you mean by "a generation ahead" here. The demo is running two orders of magnitude slower than any of the DXR demos, even assuming those GPUs are Titan Vs and not prototypes of something newer. Indeed, it is getting significantly fewer rays/second than any of the DXR demos, although that is of course to be expected to some extent with path tracing.
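For a sense of scale, here is my own arithmetic on the figures quoted above (1080p, 50 spp, 1 second); the bounce and shadow-ray counts are assumptions for illustration, not numbers from the talk:

```python
# Back-of-the-envelope throughput for the quoted Octane figure: 1080p, 50 spp, 1 s.
width, height, spp, seconds = 1920, 1080, 50, 1.0
samples_per_sec = width * height * spp / seconds
print(f"{samples_per_sec:.3e} camera samples/sec")  # ~1.0e8

# Each camera sample traces a multi-bounce path. With, say, 4 bounces and one
# shadow ray per bounce (hypothetical path configuration), total rays/sec is
# several times the sample rate.
bounces, shadow_rays_per_bounce = 4, 1
rays_per_sample = bounces * (1 + shadow_rays_per_bounce)
print(f"{samples_per_sec * rays_per_sample:.3e} rays/sec (rough)")
```

So ~10^8 path samples/sec, or on the order of 10^8-10^9 total rays/sec under these assumptions, which is the basis for comparing against the ray rates the DXR demos claim.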
"It's basically a physics engine for light. There's [sic] no shortcuts. It just renders reality as it should be."
So if we modelled the two-slit experiment we'd get...? ;-)
Not trying to downplay path-tracing demos, but these sorts of demos running that far from real time are nothing new, really. Very useful for the target market (production), but the perf/quality tradeoff doesn't make sense for games and won't for a long time, even with DXR.
AFAIK path-tracing can't simulate that.
Just listened to the start.
Listening to the rest of it now.
FWIW: Remembered there's some early info on the light map bakery here:
Interesting to see a mention of Imagination/PowerVR's Wizard (running at 2 Watts) at 18:25, which is amusing given the description, in the preceding minute or so, of the multi-monster-GPU systems being employed.
I think Andrew knows what the PicaPica DXR demo is doing to get the frame rate it did.
Consider this:
PicaPica has a pure path-tracing mode for testing purposes (ground truth), so we know how much slower path tracing is versus the hybrid approach that PicaPica uses, and the visual tradeoff it makes to get that rendering speed.
Also, as Simon pointed out, the idea that a path tracer is physically correct or unbiased is pure PR. All renderers (outside a uni physics department) choose a variety of simplifications to meet the demands of their users. A path tracer goes for fairly high quality for most materials in real-world scenes; it chooses to throw away various things that usually don't matter.
The slit experiment is one example where you could see this, but in fact every real material is rendered wrong and biased, due to the assumption of instantaneous energy transfer in almost every renderer. One of the common sayings is that unbiased renderers are 'energy conserving', as a good thing. In reality, however, surfaces aren't energy conserving in an instant: many materials absorb and re-emit energy over time (the reason black things get hotter than white things), and that isn't modeled in any path tracer that I know of, as it's mostly apparent outside the visible spectrum. However, if you rendered in the IR wavelengths or chose a fluorescent material, it would look totally wrong in a path tracer.
Or to put it simply: it is all smoke and mirrors, even in offline path tracers. Demos like PicaPica choose a variety of techniques to keep the framerate high. Having RT allows different strategies with different quality/time tradeoffs for both real-time and offline renderers, but it doesn't magically make anything physically correct.
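A toy illustration of the fluorescence point above (an entirely hypothetical material model, not from any real renderer): a conventional path-tracer reflectance scales each wavelength of incoming light independently and instantaneously, so a material that absorbs at one wavelength and re-emits at another cannot be expressed by any per-wavelength albedo:

```python
# Toy per-wavelength material response, for illustration only.
# A conventional path tracer evaluates each wavelength/channel independently:
# out[w] = albedo[w] * in[w], with no coupling between wavelengths.
def standard_reflectance(incoming, albedo):
    return {w: albedo.get(w, 0.0) * e for w, e in incoming.items()}

# Fluorescence couples wavelengths: energy absorbed in the 'uv' band re-emits
# in the 'green' band. No per-wavelength albedo can produce this cross-term.
def fluorescent(incoming):
    out = {w: 0.1 * e for w, e in incoming.items()}  # weak direct reflection
    out["green"] = out.get("green", 0.0) + 0.8 * incoming.get("uv", 0.0)
    return out

light = {"uv": 1.0, "green": 0.2}
print(standard_reflectance(light, {"uv": 0.1, "green": 0.5}))
print(fluorescent(light))
```

In the fluorescent case the green output exceeds the green input, which a per-wavelength albedo in [0, 1] can never do; a spectral renderer needs an explicit re-radiation matrix (and a time-resolved one for phosphorescence) to capture it.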
Sure, no renderer matches reality 100% but Octane gets pretty close. The point is that while on the surface it runs much slower than the DXR demos the reality is that it's also doing FAR more. I think that applying some optimizations like reduced resolution, lower number of spp and less light bounces you could get performance comparable to the DXR demos while still being quite ahead of them not only in quality but in rendering features.
Could be wrong though. I guess we'll see by the end of the year.
Apparently that was running on 80 GPUs at the time. They've now fully integrated Brigade into Octane proper. Supposedly the priority now is to optimize it to get it running at real-time speeds.
I think if they were using that temporal neural-net AI denoise filter, they would be way ahead of anything else here.
Look at the Brigade engine from 4 years ago.