Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Discussion in 'Rendering Technology and APIs' started by Scott_Arm, Aug 21, 2018.

  1. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,995
    Likes Received:
    2,563
    On the topic of color grading, this video highlights something I've always noticed. Ignoring the stupid modern color changes, the side-by-side does expose how the original had a palette where most highlights tend toward a baby blue or pinkish color. It's something I've noticed in most 90's movies. Just as the aesthetic fad now is teal and orange, the 90's had this baby blue and baby pink one. Albeit I imagine back then this was more correlated with the actual chemistry of the film stock used, but even then that chemistry was balanced with intent to hit the notes they deemed best.
     
    #1601 milk, Mar 30, 2019
    Last edited: Mar 30, 2019
  2. Dictator

    Newcomer

    Joined:
    Feb 11, 2011
    Messages:
    129
    Likes Received:
    310
    Hmmmm. I sure would like them to release the cinematic demo outside the editor, to test its performance on a number of GPUs.
     
    pharma likes this.
  3. keldor

    Newcomer

    Joined:
    Dec 22, 2011
    Messages:
    74
    Likes Received:
    107
    This sort of aberration is actually correct, banding and all, if you're looking at something emitting discrete wavelengths, such as an LCD. I'm actually trying it right now with my glasses. You can also get strange banding from fluorescent light sources, again due to their emission spectra.

    Where you won't have banding is in sunlight, such as in the demo. But maybe the developers haven't been outside for a while?
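    A quick way to see why discrete spectra band while continuous ones don't: here's a tiny 1D toy model (entirely my own construction, not from the thread), treating dispersion as a linear wavelength-dependent pixel offset and rendering a white point lit by an LCD-like spectrum versus a sunlight-like one.

    ```cpp
    // Toy model: dispersing a discrete emission spectrum (three narrow
    // LCD-like primaries) yields separated bands, while a dense continuous
    // spectrum (sunlight-like) smears into one smooth gradient.
    #include <cstdio>
    #include <vector>

    int main() {
        const int kWidth = 40;            // 1D "image" strip
        const double kDispersion = 0.05;  // pixels of offset per nm from 550 nm

        auto render = [&](const std::vector<double>& wavelengthsNm) {
            std::vector<double> strip(kWidth, 0.0);
            for (double wl : wavelengthsNm) {
                // A white point at x = 20, displaced per wavelength.
                int x = 20 + (int)(kDispersion * (wl - 550.0) + 0.5);
                if (x >= 0 && x < kWidth) strip[x] += 1.0;
            }
            for (double v : strip) std::putchar(v > 0.0 ? '#' : '.');
            std::putchar('\n');
        };

        // LCD-like source: three narrow primaries -> three separated bands.
        render({450.0, 550.0, 620.0});

        // Sunlight-like source: dense spectrum -> one continuous smear.
        std::vector<double> sun;
        for (double wl = 400.0; wl <= 700.0; wl += 5.0) sun.push_back(wl);
        render(sun);
    }
    ```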
     
  4. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    922
    Likes Received:
    881
    The blogger is arguing in favor of color correction. He just likes the results to be soft.

    Nowadays devs and filmmakers just color grade for the sake of it. This trend is a blight on the modern visual arts. Lighting and set design > color grading.

    Yeah, the 2013 version is an abomination.

    A defect of film stock. Digital solves it; look at the difference in color between SW Episode I (shot on film) and Episodes II and III (shot digitally). Also contrast that with the modern SW films and the awful color grading they use.
     
  5. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,343
    Likes Received:
    443
    Location:
    Finland
    Interesting, thanks.
    Sadly, it's very common to just shift the RGB channels outward without any radial blending/blur.
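    Something like the following is what I mean (a sketch of my own, with a made-up scene and parameters): instead of sampling each channel at one hard-shifted position, each channel integrates a few taps along the radial shift, so the fringe blends smoothly instead of showing three discrete copies.

    ```cpp
    // Radially blended chromatic aberration: per channel, average several
    // taps along the radial offset instead of taking one shifted sample.
    #include <cmath>
    #include <cstdio>

    // Toy scene: a bright disc on black, standing in for a source image.
    static double scene(double x, double y) {
        double dx = x - 0.5, dy = y - 0.5;
        return (std::sqrt(dx * dx + dy * dy) < 0.2) ? 1.0 : 0.0;
    }

    // Sample one channel with a per-channel radial scale and multi-tap blur.
    static double sampleChannel(double x, double y, double scale, int taps) {
        const double cx = 0.5, cy = 0.5;  // optical centre
        double sum = 0.0;
        for (int i = 0; i < taps; ++i) {
            // Blend between no shift and the full per-channel shift.
            double t = 1.0 + (scale - 1.0) * (i + 0.5) / taps;
            sum += scene(cx + (x - cx) * t, cy + (y - cy) * t);
        }
        return sum / taps;
    }

    int main() {
        double x = 0.702, y = 0.5;  // a point just outside the disc's edge
        // Per-channel radial scales; 8 taps smooth the fringe.
        std::printf("R %.3f  G %.3f  B %.3f\n",
                    sampleChannel(x, y, 1.02, 8),
                    sampleChannel(x, y, 1.00, 8),
                    sampleChannel(x, y, 0.98, 8));  // partial (blended) blue fringe
    }
    ```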

    This reminded me that Weta has the spectral raytracer Manuka nowadays.
    The rabbit hole toward better quality light simulation just keeps getting deeper.
     
  6. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,808
    Likes Received:
    473
    I wonder how compatible raytraced GI is with foveated rendering ... when the already-low sample rate has to go even lower in the periphery, I don't think it's going to work out well.
     
    Heinrich4 likes this.
  7. keldor

    Newcomer

    Joined:
    Dec 22, 2011
    Messages:
    74
    Likes Received:
    107
    I mean, I'd argue that if your chromatic aberration filter is so wide that the color banding is visible, you're abusing the feature. :wink:

    Simulating low-quality camera optics has always baffled me. I mean, it makes sense in a movie where you're compositing with footage from a real physical camera and you want the optics to be uniform across the frame, but standalone?

    Spectral rendering is something of a special-purpose thing. The sorts of effects it captures are present in everyday life to an extent, but are usually undesirable, or at the very least unintuitive. Like a paint that looks different in daylight versus artificial light due to the way the emission and absorption lines in the spectra line up. It's not clear you'd even want to simulate this in a virtual world. However, I can think of some applications where it would be quite valuable. Maybe you want a program to visualize various paint mixes under various types of lighting for painting a room.
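    The paint example can be made concrete with a few lines of code (all numbers below are invented for illustration): the perceived color is the per-band product of the paint's reflectance spectrum and the illuminant's emission spectrum, so a spiky fluorescent spectrum can hit or miss reflectance peaks that a smooth daylight spectrum covers evenly.

    ```cpp
    // Toy spectral shading: reflected spectrum = paint reflectance x light
    // emission, per band. The same paint reads very differently under a
    // smooth vs. a spiky illuminant of equal total power.
    #include <cstdio>

    int main() {
        const int kBands = 6;  // coarse 50 nm bands covering 400-700 nm
        // Hypothetical bluish-green paint reflectance per band.
        double paint[kBands]       = {0.2, 0.6, 0.8, 0.5, 0.2, 0.1};
        // Smooth daylight vs. spiky fluorescent emission (equal total power).
        double daylight[kBands]    = {1.0, 1.0, 1.0, 1.0, 1.0, 1.0};
        double fluorescent[kBands] = {0.2, 2.4, 0.2, 2.8, 0.2, 0.2};

        const double* lights[2] = {daylight, fluorescent};
        const char* names[2]    = {"daylight", "fluorescent"};
        for (int l = 0; l < 2; ++l) {
            // Collapse the reflected spectrum into crude B/G/R sums.
            double bgr[3] = {0.0, 0.0, 0.0};
            for (int i = 0; i < kBands; ++i)
                bgr[i / 2] += paint[i] * lights[l][i];
            std::printf("%-11s B %.2f  G %.2f  R %.2f\n",
                        names[l], bgr[0], bgr[1], bgr[2]);
        }
        // An RGB renderer collapses each spectrum to three numbers up front
        // and cannot reproduce this illuminant-dependent shift.
    }
    ```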
     
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,737
    Likes Received:
    11,213
    Location:
    Under my bridge
    Should be fine. It'd be the same as tracing the image at lower resolution.
     
  9. keldor

    Newcomer

    Joined:
    Dec 22, 2011
    Messages:
    74
    Likes Received:
    107
    It means you can bias your sampling to get better accuracy in the part of the screen your eye is looking at.

    On the other hand, undersampling tends to lead to flickering-type artifacts, which your peripheral vision is actually more sensitive to than your center of gaze. Experimentation is needed.
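    As a strawman of what such biasing could look like (the falloff shape and constants are my own guesses): rays per pixel decay with eccentricity from the gaze point, with a floor in the periphery precisely so that undersampling flicker stays bounded.

    ```cpp
    // Foveated ray budget: sample density falls off with angular distance
    // from the gaze centre, but never below a peripheral floor.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Rays per pixel as a function of eccentricity (degrees from gaze).
    double raysPerPixel(double eccentricityDeg) {
        const double kFoveal  = 4.0;   // rays/pixel inside the fovea
        const double kFloor   = 0.25;  // peripheral floor (1 ray per 4 px)
        const double kHalving = 10.0;  // degrees per halving of density
        double rpp = kFoveal * std::exp2(-eccentricityDeg / kHalving);
        return std::max(rpp, kFloor);
    }

    int main() {
        for (double e = 0.0; e <= 60.0; e += 10.0)
            std::printf("%4.0f deg -> %.2f rays/pixel\n", e, raysPerPixel(e));
    }
    ```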
     
    Dictator and milk like this.
  10. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,343
    Likes Received:
    443
    Location:
    Finland
    Absolutely, and yet it's quite commonly used that way.
    For the default view it should never be that strong.
    For special cases like security or drone camera views it should be OK. (And the classic cases of shrooms and similar effects.)
    For flickering, one should already be filtering specular mipmaps appropriately.
    Perhaps adjust the mipmap bias and add some blur at locations where the shading rate changes?
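    The mip-bias part of that has a natural form (my reading of the idea, not a quote): every halving of the shading rate pushes texture sampling one mip level down, so texture frequency stays below what the reduced shading rate can resolve.

    ```cpp
    // Mip bias from shading rate: shading at 1/2^n the full rate needs
    // roughly +n mip levels of extra texture filtering to avoid flicker.
    #include <cmath>
    #include <cstdio>

    double mipBias(double shadingRateFraction) {  // 1.0 = full rate
        return -std::log2(shadingRateFraction);   // e.g. 0.25 -> +2 mips
    }

    int main() {
        for (double r : {1.0, 0.5, 0.25, 0.125})
            std::printf("shading rate %.3f -> mip bias +%.1f\n", r, mipBias(r));
    }
    ```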

    And yes, interesting times ahead.
    Texture/object space shading should have some advantage in general stability and allow variable shading.
     
    #1610 jlippo, Mar 31, 2019
    Last edited: Mar 31, 2019
    milk likes this.
  11. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    3,520
    Likes Received:
    2,133
    Location:
    Barcelona Spain
    http://c0de517e.blogspot.com/2019/03/an-unbiased-look-at-real-time-raytracing.html

    A blog post from the ex-technical director of rendering at Activision; he was working on the R&D rendering team.

    He shares some of the same ideas as JoeJ:

    Edit: funnily enough, I found this blog post because he is changing jobs, moving from the Activision team in Vancouver to California, and he will do a blog post to announce where he works.
     
    #1611 chris1515, Mar 31, 2019
    Last edited: Mar 31, 2019
    JoeJ, Heinrich4, OCASM and 1 other person like this.
  12. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,808
    Likes Received:
    473
    Stability isn't the only important thing; smooth convergence is too. When you suddenly have to get close to the final solution inside the foveation point, you still can't afford to flicker from the previous solution ... you should go from blurred to sharp. Poorly "denoised" aliasing isn't necessarily the same as blurred.
     
  13. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    922
    Likes Received:
    881
    PICA PICA uses texture space shading for transparent/translucent objects.
     
  14. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    463
    Likes Received:
    557
    I would solve it this way: shade higher mip map levels when out of focus (or out of view, occluded, etc.). When something comes into focus (or view), interpolate the lower mip maps to fill the higher ones, and increase detail over time with a simple exponential average filter.
    I do it this way with my GI stuff and it works fine for this kind of low-frequency data, but I don't know how it would work for the full image, especially for specular.
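    A minimal sketch of that update rule (the blend factor and values are my guesses, not JoeJ's numbers): a texel coming into focus is seeded from the coarser mip, then converges to the freshly shaded value with an exponential average, so detail fades in rather than popping or flickering.

    ```cpp
    // Exponential-average fill-in: seed from the coarse mip, then converge
    // toward the newly shaded value by a fraction alpha per frame.
    #include <cstdio>

    struct Texel { double value; bool initialised; };

    void updateTexel(Texel& t, double shaded, double coarseMipValue, double alpha) {
        if (!t.initialised) {           // texel just came into focus/view:
            t.value = coarseMipValue;   // start from the interpolated lower mip
            t.initialised = true;
        }
        t.value += alpha * (shaded - t.value);  // exponential average
    }

    int main() {
        Texel t{0.0, false};
        const double coarse = 0.40, shaded = 1.00, alpha = 0.3;
        for (int frame = 0; frame < 8; ++frame) {
            updateTexel(t, shaded, coarse, alpha);
            std::printf("frame %d: %.3f\n", frame, t.value);  // 0.40 -> 1.00
        }
    }
    ```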

    The topic of 'object space shading' is very broad. (I adopt that term over 'texture space shading' because most people use it now, after the Oxide talk.)
    I see these options:

    * Store just irradiance and combine it with the material when building the frame, or store radiance with the material already applied? (See the sketch after this list.)
    The former is surely better in the discussed foveated scenario, and it also allows shading at lower than texture resolution in general. With normal maps usually being the highest-resolution textures, though, that's also quite a loss of detail. If you have denoising in mind, however, it's the only option.

    * Store just the stuff in frustum, or store the full environment around the camera?
    I think the former is the usual assumption, e.g. in Sebbis overview given here some time back, leading to a guess of 1.3 times the shading area.
    But the latter could still use less LOD on the cameras back. The information would be still guaranteed to be there if requested.
    The latter also becomes more interesting if shading is really expensive. It is what i have in mind when i talk about it, but the memory / shading requirement being maybe up to 8 times more makes it so unattractive.

    * Store just the diffuse term, or diffuse and specular?
    Can specular be cached at all without looking bad? Maybe use a high-res frustum model for specular and a low-res environment model for diffuse?
    That gains complexity, but starts to make sense...
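    To make the first option above concrete, here's a toy contrast (the names and the trivial Lambertian combine are mine, purely illustrative) between caching irradiance and applying full-resolution material data per frame, versus caching final radiance with the material baked in at cache resolution.

    ```cpp
    // Option A: cache irradiance only; combine with full-res material data
    // (albedo, normal-mapped detail) when composing the frame.
    // Option B: cache outgoing radiance with the material already applied;
    // cheaper per frame, but material detail is frozen at cache resolution.
    #include <cstdio>

    struct Material { double albedo; };  // stands in for full-res texture data

    double shadeFromIrradianceCache(double cachedIrradiance, const Material& m) {
        return cachedIrradiance * m.albedo;  // Lambertian combine at frame rate
    }

    double shadeFromRadianceCache(double cachedRadiance) {
        return cachedRadiance;  // just a lookup
    }

    int main() {
        double irradiance = 2.0;        // low-res cached lighting
        Material perPixel{0.35};        // albedo sampled at screen resolution
        double cacheResAlbedo = 0.30;   // what option B baked in, coarser
        std::printf("A: %.2f  B: %.2f\n",
                    shadeFromIrradianceCache(irradiance, perPixel),
                    shadeFromRadianceCache(irradiance * cacheResAlbedo));
    }
    ```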

    This separation also makes sense if we think about cloud gaming. For a multiplayer game the diffuse part could be shared, and multiple servers could calculate accurate GI more easily.
    Btw, my personal vision of cloud gaming has always been this: stream diffuse lightmaps and texture/model data, but build the final frame on a thin client (smartphone-class, low-cost HW). This way the latency problem could be solved.
    I still think this would be 'cloud gaming done right', and it would also enable VR/AR, but the problem is how to calculate specular on a thin client. There are surely options, but likely no photorealism is possible.
    There is a common belief that diffuse GI is much more expensive than specular reflections, but this is true only for special cases like perfect mirrors or no specular at all. I think specular will turn out to be more expensive in the long run.
    This is also the main argument that could convince me of a need for FF RT, and a point where I disagree with the many game developers who say reflections are not soooo important or could be faked/approximated.
     
    jlippo, AlBran and BRiT like this.
  15. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    463
    Likes Received:
    557
    What really sucks about this is: you may have a view where almost no reflections are apparent, but then you turn around and there is a big car, and now there are reflections all over the screen.
    So if reflections are the most expensive part, we need a way to keep frame times constant nevertheless. Object space could help with this. It could end up doing more work than necessary, but it prevents frame drops in the worst cases.
     
  16. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,737
    Likes Received:
    11,213
    Location:
    Under my bridge
    But that's true of graphics in general, and you don't aim for constant frametimes. You could be looking out over simple fields at a sunset and a skybox, then turn to see a dense and busy city, and the framerate drops from 220 fps to 48 fps.
     
    DavidGraham likes this.
  17. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,995
    Likes Received:
    2,563
    That's a half-truth. Graphics engineers do favor algorithms and architectural solutions that have a more constant cost. That was one of the biggest drivers behind deferred rendering, for example.
    And especially when it comes to ray tracing, the fact that you can all of a sudden have a reflective object covering most of the screen has historically been the most frequent example of why hybrid raytracing was impractical for games. I've heard it dozens of times through the years. And when you look at DICE's solution in BFV, it was wholly conceptualized around that very problem. The very first step is analysing the screen and allocating a constant number of rays across the parts that need them most. Trying to keep the frametime constant very much is a heavy consideration. If it happens to drop when you stare at the sky or a wall, that's a happy occurrence, but every other real-world scenario should have as small a variance in frame cost as technologically possible. That is what devs strive for.
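    In rough terms (the importance heuristic and numbers are my own, not DICE's actual implementation), that kind of fixed-budget scheme looks like this: score screen tiles by how much they need reflection rays, then split a constant total ray count proportionally, so cost stays flat even when reflective surfaces fill the frame.

    ```cpp
    // Fixed ray budget split across screen tiles by importance, so total
    // reflection cost stays constant regardless of scene content.
    #include <cstdio>
    #include <vector>

    std::vector<int> allocateRays(const std::vector<double>& tileImportance,
                                  int totalRayBudget) {
        double sum = 0.0;
        for (double w : tileImportance) sum += w;
        std::vector<int> rays(tileImportance.size(), 0);
        if (sum <= 0.0) return rays;  // nothing reflective on screen
        for (size_t i = 0; i < rays.size(); ++i)
            rays[i] = (int)(totalRayBudget * tileImportance[i] / sum);
        return rays;
    }

    int main() {
        // Four tiles: a mirror-like car, rough asphalt, sky, foliage.
        std::vector<double> importance = {0.9, 0.3, 0.0, 0.1};
        std::vector<int> rays = allocateRays(importance, 100000);
        for (size_t i = 0; i < rays.size(); ++i)
            std::printf("tile %zu: %d rays\n", i, rays[i]);
    }
    ```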
     
    JoeJ and Scott_Arm like this.
  18. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,737
    Likes Received:
    11,213
    Location:
    Under my bridge
    Definitely you want to minimise spikes and crashes, but you aren't generally aiming for a locked 30/60 fps no matter what, and games have variable framerates between worst and best cases. If a sudden full-screen reflective vehicle can drop the framerate, it's more important to consider how frequently that happens and how important it is to engineer around than to take that as your worst case and build everything around it. If 99% of the time reflections add little impact, just go with that solution and tolerate the high-impact events, just as you do all the other sudden high-impact events that drop framerates.

    Basically, I don't see that raytracing has any special considerations in that regard versus rasterisation. If a dev wants a locked framerate for any renderer, they can choose that, but RT isn't a special case. If the argument is that RT can tank the framerate, like down to single digits when the screen is filled with reflections, then I agree a maximum framerate impact needs to be designed for (no matter what, don't go below 20 fps). A constant rendering time isn't really (more) necessary, though.
     
  19. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,514
    Likes Received:
    8,717
    Location:
    Cleveland
    Please don't inline several large GIFs.
     