Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Things we don’t know:

1. How fast an optimized RT + tensor denoising implementation runs on Turing
2. The area cost of RT + Tensor hardware

It will be very interesting to see how performance evolves on BFV, SOTR and Exodus in the next few months. I think it’s too early to deny the feasibility of RT acceleration in next gen consoles.
 
Similar to how rasterization hardware has changed greatly over the years, I suspect the same will happen with RT.
What we see with Turing today is just the beginning, inefficient compared to what will be coming in the future.

That's why I'm hopeful that in three more years we may get something in the console space.
 
No one doubts RT is the future. The question is whether it is a good trade-off in terms of silicon for a mainstream console coming out in 2020.
Even a reasonably small number of accelerated rays could be beneficial for many techniques (to help in the cases where rasterization sucks the most).
 
Consoles are coming by 2020 on 7nm; by that time an RTX 2070-equivalent GPU will be much smaller and cheaper to produce. If RT is incorporated into the refresh consoles (say by 2023), it will be even cheaper. Also, consoles don't have strict fps requirements; they will make do with 30fps at 1080p just fine. You then upscale that 1080p output to whatever your heart desires, and a 2070 will be capable of a minimum of 30fps at that resolution. By 2020 we should also have a 3070, which will be even more capable. Consoles can also involve their new, powerful Ryzen CPU cores in the RT process, making things a bit easier.

Perhaps you don’t game on consoles much but I can assure you that literally nobody wants to see 1080p in 2020. Even today, if I’m honest.

The minimum will be some kind of checkerboarding/reconstruction technique up to 4K, which is of course more costly than 1080p but not as costly as native 4K. Native 4K isn't really needed across the whole screen anyway, when 90% of games smear a huge portion of the edges of the image with ungodly amounts of "cool" post-process effects like Chromatic Abomination, motion blur and whatnot.
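To put rough numbers on that cost gap, here is a back-of-envelope pixel-count comparison in Python, assuming a classic checkerboard scheme that shades roughly half of the target pixels each frame (actual implementations vary in how much they save):

```python
# Back-of-envelope pixel budgets, assuming a checkerboard scheme that shades
# roughly half of the target pixels each frame (real implementations vary).
pixels_1080p      = 1920 * 1080             # ~2.07 M shaded pixels per frame
pixels_4k_native  = 3840 * 2160             # ~8.29 M shaded pixels per frame
pixels_4k_checker = pixels_4k_native // 2   # ~4.15 M shaded pixels per frame

print(pixels_4k_checker / pixels_1080p)      # ~2.0x the shading work of 1080p
print(pixels_4k_checker / pixels_4k_native)  # 0.5x the shading work of native 4K
```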

And on a side note, we in the Console Minor Race have spent the last few years asking the Console gods (small g) for 60fps on more than a few token examples, but it's clear now that that's not gonna happen.
 
I think reconstruction is definitely the way forward here. Even RT itself requires some form of denoising, so... perhaps you're not all that far off the mark.

Few people understand our love for 4K HDR. It is one of those "get used to it, hard to go back" type of things.
 
4K HDR isn't at all common on the PC, right? HDR monitors are few and far between. If the PC eventually transitions to 4K HDR 120+ Hz monitors, 1080p may be frowned upon. That said, 1080p120 HDR might be better in many cases than 4K60 HDR thanks to motion resolution, yada yada, heard it all before. Either way, there are definite compromises to be made in incorporating RT into hardware at the moment.
 
Standards for HDR are also pretty messed up for some reason. Only consoles appear to be doing HDR correctly.
 
I love how we're back in the days of the Amiga raytracing demo "Juggler". Primary rays ... hard shadows ... perfect reflections ...

We've been in the era of unbiased path tracers for quite a while; we approximate distributed raytracing when we calculate AO in screen space (integrals, that is), or when we convolve cubemaps. One shouldn't underestimate the scales of raytracing: 10x more rays is peanuts, as light falls off inverse-quadratically.
10 GRays/s is about 45 rays per pixel at 2560x1440@60fps with which to approximate an integral through point-sampling. Need more bounces? Your integrals can hardly be called that anymore; no NN is going to make all the cool stuff we actually have (soft shadows, DOF, MB, roughness, GI, etc. pp.) out of 15 rays, three bounces deep, over the hemisphere at a surface point.
Ok, let's reduce the resolution to 1280x720; now we've got ourselves 60 rays ... hmmm. Good luck.
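For anyone who wants to check the arithmetic, here is the same budget worked out in a few lines of Python; the 10 GRays/s throughput, the resolutions and the three-bounce split are the numbers from the post above, the rest is just division:

```python
# Ray budget per pixel per frame for a given throughput, resolution and frame
# rate; the 10 GRays/s figure and the three-bounce split come from the post.
def rays_per_pixel(grays_per_s, width, height, fps, bounces=1):
    pixels_per_second = width * height * fps
    return (grays_per_s * 1e9) / pixels_per_second / bounces

print(rays_per_pixel(10, 2560, 1440, 60))             # ~45 rays/pixel/frame
print(rays_per_pixel(10, 2560, 1440, 60, bounces=3))  # ~15 per bounce level
print(rays_per_pixel(10, 1280, 720, 60, bounces=3))   # ~60 per bounce level
```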

Novel cone-tracing hardware would have been ... more useful ... maybe ... more interesting, definitely.
 
Agreed, reconstruction does look like the way forward here.

Well, I think most of us can understand the love for 4K/HDR, but loving it doesn't necessarily mean you have to think a lower standard is something one can't bear. The same could be said when we got HD, and I assure you I still enjoy PS1 games, and even older ones. ;-)
 
Non-dynamic, tech demo, single light source, garbage reflections (worse than a cubemap).

Addendum: We haven't even talked (or heard) about participating media yet (glass, skin, non-uniform transmissivity). ... The BVH is the best effort regarding geometric complexity, which isn't the principal problem in the first place. Something innovative would look at the integral problem (which the screen-space hack-algorithms look at a lot!). Raytracing is brute forcing, and 40 years of science have not changed that. It scales worse than most algorithms (the science and the algorithm ;) ). It's used because you can throw more hardware at it, like 1000x more hardware, a factor 100x wider and a factor 1000x faster than in the beginning. I'd buy it for offline rendering any day. But good luck to RTX in realtime.
 
Obviously first gen RTRT isn't going to be as good as current offline path tracers, come on :LOL:

On the topic of transparency and translucency, here's SEED's Siggraph 2018 update on PICA PICA:

http://on-demand.gputechconf.com/si...re-brisebois-pica-pica-and-nvidia-turing.html
 
This presentation is lovely, as it shows alternative ways of using tracing to get good quality within a limited budget.
Even with tracing, all sorts of probe, texture and splat techniques will be viable options, especially as one can now 'easily' sample the surrounding environment from them.


Funnily enough, Turing brought variable rate shading, which should be lovely for easing the burden of better screen-space FX. (Render a larger FOV than the screen and shade the outside area in a very coarse manner.)
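As a purely illustrative sketch of that idea (not real D3D12/Vulkan VRS API code), one could build a per-tile shading-rate map for a render target larger than the visible screen, full rate inside the screen region and coarse in the guard band; the tile size, the 4x4 coarse rate and the 10% oversize below are made-up example values:

```python
# Illustrative sketch only (not real VRS API code): build a per-tile
# shading-rate map for a render target larger than the visible screen,
# shading the guard band outside the screen region very coarsely.
def shading_rate_map(render_w, render_h, screen_w, screen_h, tile=16):
    cols = (render_w + tile - 1) // tile
    rows = (render_h + tile - 1) // tile
    x0 = (render_w - screen_w) // 2   # offset of the visible screen region
    y0 = (render_h - screen_h) // 2
    rates = []
    for r in range(rows):
        row = []
        for c in range(cols):
            cx, cy = c * tile + tile // 2, r * tile + tile // 2
            inside = x0 <= cx < x0 + screen_w and y0 <= cy < y0 + screen_h
            row.append(1 if inside else 4)   # 1 = full rate, 4 = 4x4 coarse
        rates.append(row)
    return rates

# e.g. render ~10% more FOV than a 1080p screen and shade the border coarsely
rates = shading_rate_map(2112, 1188, 1920, 1080)
```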
 
I only compare RT vs. what we currently have in rasterizer space. Proposing raytracing as a "superior" alternative to existing practical or impractical algorithms, which are analytic and/or integral based, is poor. Calling raytracing hardware innovative for a sector where you can't smash the problem with sheer scale is poor.

If you come forward and say you solved the geometry submission problem (overdraw, z-buffer, etc.) by using a BVH, that's honest and fine. If you say, hey, now you can have irregular z-buffers, that's fine. And then also saying, well, we're at 20% (whatever the exact number, because of economy of silicon, concurrency and non-coherence) of the speed of what a rasterizer can do for the screen - and you're free to ask for any other rays which are not raster-based - that's very nice. But because it's at 20% speed it's kinda academic at the moment. They put a BVH in there and it's not going to get faster anytime soon (they already pulled the trump card, their highly optimized OptiX implementation), and you can't easily give the chip 5x more silicon. Non-coherent memory access is the specter that haunts memory hierarchies, pointer-chasing of all things, and then GDDR6, which is optimized for long bursts of continuous I/O, with horrible latency.
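To make the pointer-chasing point concrete, here is a toy BVH traversal sketch in Python; the node layout and the slab test are simplified stand-ins (not RTX internals), but the key property shows: which node gets fetched next is only known after the previous node has been loaded and tested.

```python
# Toy BVH traversal to illustrate the dependent, pointer-chasing access
# pattern; the node layout and ray/box slab test are simplified stand-ins.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    bounds: Tuple[tuple, tuple]           # (min_xyz, max_xyz) of an AABB
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    triangles: tuple = ()                 # leaf payload

def hits(bounds, origin, inv_dir):
    # Standard slab test of a ray against an axis-aligned box.
    tmin, tmax = -float("inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, *bounds):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmax >= max(tmin, 0.0)

def traverse(root, origin, inv_dir):
    # Each iteration must load and test a node before the addresses of its
    # children are known -- a chain of dependent memory accesses.
    stack, leaves = [root], []
    while stack:
        node = stack.pop()
        if node is None or not hits(node.bounds, origin, inv_dir):
            continue
        if node.triangles:
            leaves.extend(node.triangles)
        else:
            stack += [node.left, node.right]
    return leaves
```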

Where is the scaling supposed to come from, just to get to parity with rasterization? I'll immediately drop my scepticism if I see a way around some really hard-to-beat universal problems in computing.

We don't get what we need, which is first and foremost solutions for unsolved problems in primary casts (area integrals). It's bad to propose an algorithm to end-users which is so processing-hungry that it devours the pockets of generations of consumers to come. Maybe you've heard of Jevons' paradox. The technological advance in this case isn't used to let better algorithms run faster (or become practical); it's used to run more of the inferior algorithms. Fixed-function stuff has the ability to be much faster and much more energy efficient, so please Nvidia, just give us some generalized FF compute stuff for notorious real-world problems. Look at the math and the code, generalize it, pick the winner, and ship it. Raytracing ain't that, IMHO.
 