Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Actually, simple 2011 version was a good HD release...


..it's the 2013 disneycrapremasteredshit (C) -like that ruined everything.

On the topic of color grading, this video highlights something I've always noticed. Ignoring the stupid modern color changes, the side-by-side does expose how the original had a palette where most highlights tend toward a baby blue or pinkish color. It's something I noticed in most 90's movies. Just like the aesthetic fad now is this teal and orange, the 90's had this baby blue and baby pink one. Albeit I imagine back then this was more correlated to the actual chemistry of the film used, but even then such chemistry was balanced with intent to hit the notes they deemed best.
 
UE apparently still has very low-quality chromatic aberration; colors should never separate like that. (Sorry, pet peeve of mine.)

This sort of aberration is actually correct, banding and all, if you're looking at something emitting discrete wavelengths, such as an LCD. I'm actually trying it right now with my glasses. You can also get strange banding from fluorescent light sources, again due to their emission spectra.

Where you won't have banding is in sunlight, such as in the demo. But maybe the developers haven't been outside for a while?
 
The blog is arguing for low contrast and colour science, not colour correction. The whole blog post is basically an argument against what you're talking about, and even uses a movie example from the Hobbit of what looks bad.
The blogger is arguing in favor of color correction. He just likes the results to be soft.

Nowadays devs and filmmakers just color grade for the sake of it. This trend is a blight on the modern visual arts. Lighting and set design > color grading.

Actually, simple 2011 version was a good HD release...


..it's the 2013 disneycrapremasteredshit (C) -like that ruined everything.
Yeah, the 2013 version is an abomination.

On the topic of color grading, this video highlights something I've always noticed. Ignoring the stupid modern color changes, the side-by-side does expose how the original had a palette where most highlights tend toward a baby blue or pinkish color. It's something I noticed in most 90's movies. Just like the aesthetic fad now is this teal and orange, the 90's had this baby blue and baby pink one. Albeit I imagine back then this was more correlated to the actual chemistry of the film used, but even then such chemistry was balanced with intent to hit the notes they deemed best.
A defect of film stock. Digital solves it; look at the difference in color between SW Episode I (shot on film) and Episodes II and III (shot digitally). Also contrast that with the modern SW films and the awful color grading they use.
 
This sort of aberration is actually correct, banding and all, if you're looking at something emitting discrete wavelengths, such as an LCD. I'm actually trying it right now with my glasses. You can also get strange banding from fluorescent light sources, again due to their emission spectra.

Where you won't have banding is in sunlight, such as in the demo. But maybe the developers haven't been outside for a while?
Interesting, thanks.
Sadly it's very common to just shift the RGB channels outward without any radial blending/blur.

This reminded me that Weta has its spectral raytracer Manuka nowadays.
The rabbit hole toward better quality light simulation just keeps getting deeper.
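For reference, the naive channel-shift approach being criticized can be sketched like this (a NumPy toy, hypothetical function name; real implementations do this in a post-process shader, but the artifact is the same: each channel is offset as a whole, with no blending across intermediate wavelengths, so you get hard bands):

```python
import numpy as np

def chromatic_aberration(img, strength=0.01):
    """Naive CA: sample each channel at a slightly different radial
    scale around the image center. Red is pulled slightly toward the
    center, blue pushed slightly outward, green left untouched."""
    h, w, _ = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    out = np.empty_like(img)
    for c, scale in enumerate((1.0 - strength, 1.0, 1.0 + strength)):
        sy = np.clip(cy + (ys - cy) * scale, 0, h - 1).astype(int)
        sx = np.clip(cx + (xs - cx) * scale, 0, w - 1).astype(int)
        out[..., c] = img[sy, sx, c]
    return out
```

At any high-contrast edge away from the center, the three channels land on different texels, which is exactly the "colors separating" complaint above.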
 
I wonder how compatible raytracing GI is with foveated rendering ... when the already low sample rate has to go even lower for the periphery I don't think it's going to work out well.
 
Interesting, thanks.
Sadly it's very common to just shift the RGB channels outward without any radial blending/blur.

This reminded me that Weta has its spectral raytracer Manuka nowadays.
The rabbit hole toward better quality light simulation just keeps getting deeper.

I mean, I'd argue that if your chromatic aberration filter is so wide that the color banding is visible, you're abusing the feature. ;)

Simulating low-quality camera optics has always baffled me. I mean, it makes sense in a movie where you're compositing in with video from a real physical camera and you want the optics to be uniform across the frame, but standalone?

Spectral rendering is something of a special purpose thing. The sorts of effects it captures are present in every day life to an extent, but are usually undesirable, or at very least unintuitive. Like a paint that looks different in daylight versus artificial light due to the way the emission and absorption lines in the spectra line up. It's not clear if you even want to simulate this in a virtual world. However, I can think of some applications where it would be quite valuable. Maybe you want a program to visualize various paint mixes under various types of lighting for painting a room.
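The paint example can be sketched numerically: integrate a reflectance spectrum against two illuminants and toy sensor curves. All spectra below are made up for illustration; a real spectral renderer would use CIE observer curves and measured illuminant data.

```python
import numpy as np

# Coarse 10 nm wavelength grid over the visible range.
wl = np.arange(400, 701, 10, dtype=float)

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy RGB sensitivities (stand-ins for real observer curves).
sens = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 40)])

# A paint reflectance with an absorption dip around 570 nm (hypothetical).
paint = 0.8 - 0.5 * gaussian(570, 15)

# Smooth daylight-like illuminant vs a spiky fluorescent-like one.
daylight = np.ones_like(wl)
fluorescent = 0.2 + 2.0 * (gaussian(435, 5) + gaussian(545, 5) + gaussian(610, 5))

def render(illum):
    """Integrate reflectance * illuminant against each sensor curve,
    normalized so a perfect white reflector comes out as (1, 1, 1)."""
    raw = sens @ (paint * illum)
    white = sens @ illum
    return raw / white

print(render(daylight))
print(render(fluorescent))
```

The two rendered colors differ because the fluorescent spikes land differently relative to the paint's absorption dip, which is exactly the effect an RGB renderer cannot capture.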
 
I wonder how compatible raytracing GI is with foveated rendering ... when the already low sample rate has to go even lower for the periphery I don't think it's going to work out well.
Should be fine. It'd be the same as tracing the image at lower resolution.
 
I wonder how compatible raytracing GI is with foveated rendering ... when the already low sample rate has to go even lower for the periphery I don't think it's going to work out well.

It means you can bias your sampling to get better accuracy in the section of the screen your eye is looking at.

On the other hand, undersampling tends to lead toward flickering type artifacts, which your peripheral vision is actually more sensitive to than your center of gaze. Experimentation is needed.
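A first cut of that sampling bias could look like this (a sketch under the assumption of a simple Gaussian falloff; real foveated renderers would use measured visual-acuity curves, and the names here are made up):

```python
import numpy as np

def sample_budget(h, w, gaze, base=1, peak=16, sigma=0.15):
    """Per-pixel ray budget for foveated rendering: spend up to 'peak'
    samples near the gaze point, falling off to a 'base' floor in the
    periphery. 'gaze' is (x, y) in normalized [0, 1] screen coords."""
    ys, xs = np.mgrid[0:h, 0:w]
    dy = ys / (h - 1) - gaze[1]
    dx = xs / (w - 1) - gaze[0]
    falloff = np.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
    return np.maximum(base, np.rint(peak * falloff)).astype(int)
```

The open question raised above is whether the `base` floor can go low enough to be worth it without the periphery starting to flicker.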
 
I mean, I'd argue that if your chromatic aberration filter is so wide that the color banding is visible, you're abusing the feature. ;)
Absolutely, and yet it's quite commonly used that way.
For the default view it should never be that strong.
For special cases like security or drone cameras it should be OK. (And classic cases of shrooms and such effects.)
It means you can bias your sampling to get better accuracy in the section of the screen your eye is looking at.

On the other hand, undersampling tends to lead toward flickering type artifacts, which your peripheral vision is actually more sensitive to than your center of gaze. Experimentation is needed.
For flickering one should already filter specular mipmaps appropriately.
Perhaps adjusting mipmap bias and some blur at locations where shading amount changes?

And yes, interesting times ahead.
Texture/object space shading should have some advantage in general stability and allow variable shading.
 
http://c0de517e.blogspot.com/2019/03/an-unbiased-look-at-real-time-raytracing.html

A blog post from the ex-technical director of rendering at Activision; he was working on the R&D rendering team.

Seems he has some ideas similar to JoeJ's:

DO - Invest in caching and temporal accumulation ideas. Beyond screen-space. These will likely be more effective, and useful for a wide variety of effects. Also, do think about finer-grained solutions to launch work / update caches / update on demand. For this, real-time raytracing might help indirectly, because in order to be performant it needs the ability to launch shader work from other shaders. That general ability, if implemented in hardware, and exposed to programmers, could be useful in general, and it's one of the most interesting things to think about when we think of hardware raytracing.

Edit: very funny that I find this blog post now, because he's changing jobs, going from Activision's Vancouver team to California, and he'll do a blog post to announce where he works.
 
Texture/object space shading should have some advantage in general stability and allow variable shading.

Stability isn't the only important thing; smooth convergence is too. When you suddenly have to get close to the final solution inside the foveation point, you still can't afford to flicker from the previous solution ... you should go from blurred to sharp, and a poorly "denoised", aliased image isn't necessarily the same as blurred.
 
Absolutely, and yet it's quite commonly used that way.
For the default view it should never be that strong.
For special cases like security or drone cameras it should be OK. (And classic cases of shrooms and such effects.)

For flickering one should already filter specular mipmaps appropriately.
Perhaps adjusting mipmap bias and some blur at locations where shading amount changes?

And yes, interesting times ahead.
Texture/object space shading should have some advantage in general stability and allow variable shading.
PICA PICA uses texture space shading for transparent/translucent objects.
 
Stability isn't the only important thing; smooth convergence is too. When you suddenly have to get close to the final solution inside the foveation point, you still can't afford to flicker from the previous solution ... you should go from blurred to sharp, and a poorly "denoised", aliased image isn't necessarily the same as blurred.
I would solve it this way: shade higher mip map levels if out of focus (or out of view, occluded, etc.). When it comes into focus (or view), interpolate the lower mip maps to fill the higher ones, and increase detail over time with a simple exponential average filter.
I do it this way with my GI stuff and it works fine for this kind of low-frequency data, but I don't know how it would work for the full image, especially for specular.
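The "increase detail over time" step is just an exponential moving average per texel; a minimal sketch (scalar here for clarity, but it applies per channel):

```python
def refine(cached, target, alpha=0.2, steps=1):
    """Blend freshly shaded values ('target') into the cached texel.
    Each step closes a fixed fraction (alpha) of the remaining gap,
    so detail fades in smoothly instead of popping or flickering."""
    for _ in range(steps):
        cached = cached + alpha * (target - cached)
    return cached
```

After n steps the remaining error shrinks by a factor of (1 - alpha)^n, so at alpha = 0.2 about 1% of the gap is left after 20 steps; alpha trades convergence speed against temporal noise.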

The topic 'object space shading' is very broad. (I adopt the term over texture space shading because most people use it now after the Oxide talk.)
I see those options:

* Store just irradiance and combine with material when building the frame, or store radiance with material already applied?
The former is surely better in this discussed foveated scenario, but also allows lower shading than texture resolution in general. With normal maps being the highest resolution texture usually, that's also quite a loss of detail. If you have denoising in mind however, it's the only option.

* Store just the stuff in frustum, or store the full environment around the camera?
I think the former is the usual assumption, e.g. in Sebbi's overview given here some time back, leading to a guess of 1.3 times the shading area.
But the latter could still use lower LOD behind the camera. The information would still be guaranteed to be there if requested.
The latter also becomes more interesting if shading is really expensive. It is what I have in mind when I talk about it, but the memory / shading requirement being maybe up to 8 times more makes it so unattractive.

* Store just the diffuse term, or diffuse and specular?
Can specular be cached at all without looking bad? Using high res frustum model for specular and low res environment model for diffuse?
Gains complexity, but starts to make sense...

This separation also makes sense if we think about cloud gaming. For a multiplayer game the diffuse part could be shared; multiple servers could calculate accurate GI more easily.
Btw, my personal vision of cloud gaming always was this: Stream diffuse lightmaps and texture / model data, but build the final frame on a thin client (smartphone class low cost HW). This way the latency problem could be solved.
I still think this would be 'cloud gaming done right', and it would also enable VR/AR, but the problem is how to calculate specular on a thin client? There are surely options, but likely no photorealism is possible.
There is a common belief that diffuse GI is much more expensive than specular reflections, but this is true only for special cases like perfect mirrors or no specular at all. I think specular will turn out more expensive in the long run.
This is also the main argument that could convince me about a need for FF RT, and a point where I disagree with many game developers who say reflections are not soooo important or could be faked / approximated.
 
I think specular will turn out more expensive in the long run.
What really sucks about this is: you may have a view where almost no reflections are apparent, but then you turn around and there is a big car, and now there are reflections all over the screen.
So if reflections are the most expensive part, we need a way to keep frame times constant nevertheless. Object space could help with this. It could end up doing more work than necessary, but it prevents frame drops in worst cases.
 
But that's true of graphics in general, and you don't aim for constant frametimes. You could be looking out over simple fields at a sunset and skybox, and then turn to see a dense and busy city, and the frametime drops from 220 fps to 48 fps.
 
But that's true of graphics in general and you don't aim for constant frametimes.

That's a half-truth. Graphics engineers do favor algorithms and architectural solutions that have more constant cost. That was one of the biggest drivers for deferred rendering, for example.
And especially when it comes to ray tracing, the fact that you can all of a sudden have a reflective object covering most of the screen has historically been the most frequent example of why hybrid raytracing was impractical for games. I've heard it dozens of times through the years. And when you look at DICE's solution in BFV, it was wholly conceptualized around that very problem. The very first step is analysing the screen and allocating a constant number of rays across the parts that need them most. Trying to keep the frametime constant very much is a heavy consideration. If it happens to drop when you stare at the sky or a wall, that's a happy occurrence, but every other real-world scenario should have as small a variance in frame cost as technologically possible. That is what devs strive for.
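The fixed-budget idea can be sketched like this (hypothetical function; BFV's actual heuristic is more involved, weighting tiles by things like roughness and screen-space visibility, but the principle of a constant total ray count is the same):

```python
import numpy as np

def allocate_rays(tile_weight, total_rays):
    """Distribute a fixed ray budget across screen tiles in proportion
    to an importance weight (e.g. estimated reflectivity), so total
    cost stays constant no matter how reflective the view is."""
    w = np.asarray(tile_weight, dtype=float)
    if w.sum() == 0:
        return np.zeros_like(w, dtype=int)
    rays = np.floor(total_rays * w / w.sum()).astype(int)
    # Hand the rounding remainder to the highest-weight tiles.
    remainder = total_rays - rays.sum()
    for i in np.argsort(-w)[:remainder]:
        rays[i] += 1
    return rays
```

When the big reflective car fills the screen, the weights go up everywhere, but the budget only gets redistributed; the frame cost stays bounded at the price of fewer rays (more noise) per reflective pixel.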
 
Definitely you want to minimise spikes and crashes, but you aren't generally aiming for a locked 30/60 fps no matter what and games have variable framerates between worst and best cases. If a sudden full-screen reflective vehicle can drop the framerate, it's more important to consider how frequently that happens and how important it is to engineer around than to work with that as your worst-case and build everything around that. If 99% of the time reflections add little impact, just go with that solution and tolerate the high-impact events just as you do all the other sudden high-impact events that drop framerates.

Basically, I don't see that raytracing has any special considerations in that regard versus rasterisation. If a dev wants a locked framerate for any renderer, they can choose that, but RT isn't a special case. If the argument is that RT can tank the framerate, like down to single digits when the screen is filled with reflections, then I agree a maximal framerate impact needs to be designed for (no matter what, don't go below 20 fps). A constant rendering time isn't really (more) necessary though.
 
More Shadow of the Tomb Raider RTX comparisons; shots with more shadows are RTX On.


https://i.postimg.cc/ZqQPwYM6/Webp-net-gifmaker.gif
https://i.postimg.cc/VL6VL4Fz/Webp-net-gifmaker-2.gif
https://i.postimg.cc/NjgxMzX2/Webp-net-gifmaker-1.gif
https://i.postimg.cc/BbD8jf92/Webp-net-gifmaker-2.gif
https://i.postimg.cc/YCt4CPXm/Webp-net-gifmaker-3.gif
https://i.postimg.cc/QC32Wc5t/Webp-net-gifmaker-2.gif
https://i.postimg.cc/j2yJMYLc/Webp-net-gifmaker-5.gif
https://i.postimg.cc/QMxW0z1r/Webp-net-gifmaker-7.gif
https://i.postimg.cc/c1Vw0XyQ/Webp-net-gifmaker-8.gif
 