RayTracing?

Discussion in 'Beginners Zone' started by Davros, Jul 20, 2019.

  1. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,307
    Nvidia has released a tech demo
    NVIDIA Apollo 11 Moon Landing RT Tech Demo
    Here's some of the blurb:
    With RTX, each pixel on the screen is generated by tracing, in real time, the path of a beam of light backwards into the camera (your viewing point), picking up details from the objects it interacts. That allows artists to instantaneously see accurate reflections, soft shadows, global illumination and other visual phenomena.

    Can someone explain the bold part? That's not backwards: if I film something, photons don't come out of the camera and reflect off objects until hitting a light source.
     
  2. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,276
    Likes Received:
    3,725
    The path of the beam of light is traced backwards from the camera, rather than forwards from the light source. Ray tracing always works relative to the camera rather than the light sources.
     
  3. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,307
    As someone who doesn't know how it's done in graphics, only how light works in real life, that seems backwards.
    How do you even know there is a ray to trace, or whether there is a light source, or its intensity or colour? Do you calculate all possible rays
    that could hit the camera and discard those that don't end at a light source?
    And when you get to the light source and discover it's red, do you have to adjust all your previous calculations to take that into account?
     
  4. pcchen

    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,750
    Likes Received:
    127
    Location:
    Taiwan
    Basically, if a photon can go from A to B, then it can go from B to A. So it does not matter to the end result whether it's calculated forwards or backwards.
    Now, if you do it forwards, it's very likely that a lot of the light coming out of a light source never actually gets into the camera, so it's wasteful to compute it. On the other hand, when shooting the ray from the camera, it's easier to compare the relative location of the point where the ray hits an object and a light source, so it's easier to determine how the light source affects the color of that point.
    There are some bi-directional ray tracing algorithms for handling specific situations, but they're probably not fast enough for real-time rendering.
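
    A minimal sketch of that comparison at a camera-ray hit point (the hit point, normal and point light below are made-up values, and the occluded flag stands in for a shadow-ray test):

    import math

    def direct_light(hit_point, normal, light_pos, light_color, occluded):
        # Direction and distance from the shaded point to the light source.
        to_light = [l - p for l, p in zip(light_pos, hit_point)]
        dist = math.sqrt(sum(c * c for c in to_light))
        to_light = [c / dist for c in to_light]
        # Lambert's cosine term: how directly the surface faces the light.
        cos_theta = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
        if occluded or cos_theta == 0.0:
            return (0.0, 0.0, 0.0)            # the point is in shadow
        falloff = 1.0 / (dist * dist)         # inverse-square attenuation
        return tuple(c * cos_theta * falloff for c in light_color)

    # A hit point facing straight up, lit by a white light two units above it.
    print(direct_light((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 2.0, 0.0), (1.0, 1.0, 1.0), occluded=False))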
     
  5. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,307
    I'm sure it seems perfectly logical to you, but I'm struggling with it.
    Let's take this: "when shooting the ray from the camera"
    1: How do you know there is a ray to trace?
    2: How do you know what colour and intensity it is?
    3: How do you know what angle it hit the camera at?
     
  6. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,307
    Asking just out of pure curiosity:
    If I raytraced this scene,
    would a or b appear on my monitor?
    [attached image]
     
    milk likes this.
  7. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,986
    Likes Received:
    2,557
    1: What does that mean? We are trying to solve what light got to each pixel. As such, there is at least one ray per pixel; if it is completely occluded, then that's our result: no light ever reaches our pixel.
    2: Shaders. The thing about physics is that, for a fully known system you have a snapshot of, with all position, mass, speed, acceleration, etc. variables known, you can predict both future and past states with perfect accuracy using the very same equations. Same for pathtracing. The same equations work in reverse.
    3: A proper camera does not let light in from random directions. It tries to approximate a pin-hole effect. So does your eye, by the way. As such, for each pixel, you only have to trace initial rays at the angle that gets through our virtual pin-hole position. After the first hit, more rays may be spawned though.
    A naive raytracer will just spawn thousands of rays randomly from the first hit, to see if they hit a light or not, then shoot another thousand random rays from the secondary hits, and so on. Of course that shoots a bunch of useless rays, and as the resulting image is being constructed it starts out nearly black (very few paths have ultimately found a light) and noisy (the paths that did find lights vs. the ones that didn't are very randomly distributed). That's why nobody uses a completely naive path tracer.
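
    A minimal sketch of that naive loop (assuming a hypothetical scene.intersect() helper that returns the nearest hit with its point, normal, albedo, and emission if it is a light):

    import math, random

    def random_hemisphere_direction(normal):
        # Pick a uniformly random direction, flipped into the hemisphere around the normal.
        while True:
            d = [random.uniform(-1.0, 1.0) for _ in range(3)]
            length = math.sqrt(sum(c * c for c in d))
            if 0.0 < length <= 1.0:
                d = [c / length for c in d]
                if sum(a * b for a, b in zip(d, normal)) < 0.0:
                    d = [-c for c in d]
                return d

    def naive_path(origin, direction, scene, depth=0, max_depth=5):
        hit = scene.intersect(origin, direction)   # hypothetical nearest-hit query
        if hit is None or depth >= max_depth:
            return (0.0, 0.0, 0.0)                 # escaped the scene, or gave up: black
        if hit.is_light:
            return hit.emission                    # this path finally found a light
        # Otherwise bounce in a random direction and keep going (this is why it is so noisy).
        bounce = random_hemisphere_direction(hit.normal)
        incoming = naive_path(hit.point, bounce, scene, depth + 1, max_depth)
        return tuple(a * c for a, c in zip(hit.albedo, incoming))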

    Enter the world of pathtracing optimisations.

    The first, more obvious one: after you've found your first hit, first and foremost, trace against all primary light sources; this quickly gives you initial lighting info right from the start. If you wanna be even more optimal, you can prioritize the nearest sources, and maybe have an acceleration structure that allows you to discard occluded sources early, before even shooting a ray. Of course, most path tracers work with area lights, and those need hundreds or thousands of rays to approximate properly, so this optimization alone is not enough for those. These will benefit more from the same kind of optimizations that benefit secondary illumination and GI in general.
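
    A rough sketch of that first-hit light tracing, here for a rectangular area light sampled at random points (the light layout and the scene.visible() shadow-ray query are hypothetical stand-ins):

    import math, random

    def sample_area_light(hit_point, normal, light, scene, num_samples=16):
        # 'light' is assumed to be a rectangle: a corner point plus two edge vectors.
        total = [0.0, 0.0, 0.0]
        for _ in range(num_samples):
            u, v = random.random(), random.random()
            sample = [light.corner[i] + u * light.edge_u[i] + v * light.edge_v[i] for i in range(3)]
            if not scene.visible(hit_point, sample):   # hypothetical shadow-ray test
                continue                               # this sample of the light is blocked
            to_light = [s - p for s, p in zip(sample, hit_point)]
            dist2 = sum(c * c for c in to_light)
            cos_theta = max(0.0, sum(n * c for n, c in zip(normal, to_light)) / math.sqrt(dist2))
            for i in range(3):
                total[i] += light.emission[i] * cos_theta / dist2
        # Average the shadow-ray samples; more samples mean softer, cleaner shadows.
        return [c / num_samples for c in total]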

    Second most obvious optimization: use the material's BRDF to prioritize where your rays are shot first. Hit a mirror? Easy, your secondary bounce ray goes straight in the specular reflection direction. Hit a diffuse surface? Bad luck, your ray can go anywhere. Hit something in between? Prioritize your random rays near the specular reflection direction, but not exactly at it.
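
    A crude sketch of that prioritization, blending between the mirror direction and a random hemisphere direction by a roughness value (a simplification, not a real importance-sampled BRDF):

    import math, random

    def reflect(direction, normal):
        # Mirror the incoming direction about the surface normal.
        d = 2.0 * sum(a * b for a, b in zip(direction, normal))
        return [a - d * b for a, b in zip(direction, normal)]

    def sample_bounce(direction, normal, roughness):
        mirror = reflect(direction, normal)
        # Random unit direction in the hemisphere around the normal (rejection sampling).
        while True:
            r = [random.uniform(-1.0, 1.0) for _ in range(3)]
            length = math.sqrt(sum(c * c for c in r))
            if 0.0 < length <= 1.0:
                r = [c / length for c in r]
                if sum(a * b for a, b in zip(r, normal)) < 0.0:
                    r = [-c for c in r]
                break
        # roughness = 0 -> pure mirror; roughness = 1 -> anywhere in the hemisphere.
        blended = [(1.0 - roughness) * m + roughness * c for m, c in zip(mirror, r)]
        length = math.sqrt(sum(c * c for c in blended))
        return [c / length for c in blended]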

    Third most obvious optimization: keep track of how much potential energy your path could carry. Say you are following a path that has bounced off of 10 different surfaces by now. You've solved their BRDF function at every step along the way, so you know by now that even if your ray ends up hitting a supernova, the resulting added energy won't carry enough light over to the final pixel to increase its intensity by more than 1%. Kill it. Forget that path for now, and prioritize paths that have more chances of influencing your render meaningfully.
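
    A small sketch of that bookkeeping, assuming the path's accumulated throughput is carried along with it; the Russian-roulette style survival test is the usual way to kill dim paths without biasing the image:

    import random

    def should_terminate(throughput, cutoff=0.01):
        # 'throughput' is the product of all the BRDF/cosine factors picked up so far:
        # the most this path can still add to the pixel, even if it hits something very bright.
        potential = max(throughput)
        if potential >= cutoff:
            return False
        # Killing dim paths outright would slightly darken the image; Russian roulette
        # stays unbiased by letting them survive with probability 'potential' (the caller
        # then divides the survivor's throughput by that probability, not shown here).
        return random.random() > potential

    # Example: after ten bounces off dark surfaces the throughput is tiny.
    print("kill path?", should_terminate([0.004, 0.003, 0.002]))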

    Fourth obvious optimization: You've already solved many paths. Your framebuffer already has an understandable but still noisy image showing up. Analyse where the most noise is in screenspace, and prioritize shooting rays from those problem pixels.

    Those are just a taste of what path tracers do, to give you an idea. You can read the hundreds of articles and papers on the subject if you wanna go in-depth. It's truly fascinating.

    Path tracing does shoot many rays that end up being useless, but still only a fraction of the useless rays you'd get from what you propose instead. Suppose we do the opposite. Suppose we shoot rays from the light sources towards the world, even for a tiny-ass scene like the Sponza atrium. You have a world of, say, a couple hundred square metres by a couple dozen metres high, with rays from the sun bouncing across all parts and all directions of this space. How many of those end up actually hitting our small camera in the middle of all this? A fraction. A 0.00x kind of fraction, if not less. Probably much less, actually.
    That's why path tracers start paths from the camera.


    In realtime land, though, they have been focusing a lot on optimizations that have only recently entered the toolset of offline renderers. Especially: spatio-temporal screen-space filtering/denoising of the final framebuffer. It comes from the assumption that neighbouring pixels probably have similar lighting conditions, so one can filter in a bit of the lighting from a pixel's neighbours to get some extra paths for cheap.
    When you are doing that, distributing your rays so that they produce a more evenly distributed dither across the screen also becomes more useful, because it produces easier-to-filter results.
     
    #7 milk, Jul 21, 2019
    Last edited: Jul 21, 2019
  8. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,986
    Likes Received:
    2,557
    a

    Light moves like rays in ray-tracing.
     
  9. imerso

    Newcomer

    Joined:
    Jul 18, 2010
    Messages:
    58
    Likes Received:
    44
    Davros: in standard ray tracing, things happen in reverse. Instead of shooting rays from the light sources, rays are shot from the camera.

    In reality, the screen (yes, the flat 2D screen plane) is scanned from top left to bottom right, pixel by pixel, and one ray per pixel is shot from the camera into the scene.

    Those rays have no color yet. They are tested for intersections against the world; if an object is hit, then a new ray is shot in the direction of a light source, to see if that point can "see" the light. If it sees the light, then it is lit by that light, else it is in shadow.

    Then, if the object's material is reflective, the ray is bounced (in a new direction obtained by reflecting the original direction about the surface normal), and if it hits another object, it adds that other object's color.

    When it finishes traveling, we have that pixel's color and can start calculating the next pixel.

    This is a rough explanation of Whitted raytracing, but I hope you understood that each pixel on the screen becomes a ray, shot from the camera plane in the camera direction.
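
    A minimal sketch of that per-pixel scan, generating one primary ray per pixel from a pinhole camera (camera at the origin looking down -z, with an arbitrary field of view, purely for illustration):

    import math

    def primary_ray(px, py, width, height, fov_degrees=60.0):
        # Map the pixel to a point on a virtual image plane one unit in front of the
        # camera; the ray direction is then simply camera -> that point.
        aspect = width / height
        half = math.tan(math.radians(fov_degrees) * 0.5)
        x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
        y = (1.0 - 2.0 * (py + 0.5) / height) * half
        direction = (x, y, -1.0)                       # camera looks down -z
        length = math.sqrt(sum(c * c for c in direction))
        origin = (0.0, 0.0, 0.0)                       # pinhole camera at the origin
        return origin, tuple(c / length for c in direction)

    # Scan the screen top-left to bottom-right, one ray per pixel.
    for py in range(2):
        for px in range(3):
            print((px, py), primary_ray(px, py, 3, 2))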
     
    Davros likes this.
  10. imerso

    Newcomer

    Joined:
    Jul 18, 2010
    Messages:
    58
    Likes Received:
    44
    Let's not confuse raw ray tracing (Whitted ray tracing) with path tracing, though. Ray tracing won't shoot thousands of rays from the first hit! Path tracing will, to achieve global illumination.

    Path tracing is still a very slow process, and it is the reason for the noisy GI, while raw ray tracing is faster but does not include GI.
     
  11. imerso

    Newcomer

    Joined:
    Jul 18, 2010
    Messages:
    58
    Likes Received:
    44
    Milk: you're confusing ray tracing with path tracing.
     
    milk likes this.
  12. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,986
    Likes Received:
    2,557
    Yeah, I was describing path tracing there, and used the term ray tracing instead of it a couple of times.
     
    imerso likes this.
  13. hughJ

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    734
    Likes Received:
    263
    You can think of ray tracing in similar terms to rasterization. With ray tracing you've got the camera and scene geometry in a world coordinate system, you apply 3D transformations to your camera and its projected rays, and you perform ray->object intersection tests to generate color values in your frame buffer. With rasterization you're instead applying a 3D transform to move the contents of the world into the camera's coordinate system, along with the camera's frustum parameters (perspective/FOV), such that the unclipped elements of the scene get squeezed into a rectangular box with parallel sides. At this point the polygon->raster conversion sampling is equivalent to shooting parallel rays from the screen's pixels into that rectangular box, and the Z-test is equivalent to the nearest-intersection comparison that's done in ray tracing. The coloring, shading and material properties you deal with after completing those intersections are the same thing either way (ray tracing can use the Phong shading model, refraction, reflection, Schlick's approximation, etc., just as you see in any D3D/OGL pixel shader code). In this sense you can view the advent of SGI, OpenGL and consumer 3D rasterization as the result of an algorithm optimization that maximizes the sampling rate of that initial intersection test by reducing the problem space, but as a result it's poorly suited to tasks that fall outside that problem space.

    In other words: what makes ray tracing powerful is the fact that you're transforming the camera's rays rather than the world -- this means you're able to perform further intersection tests from arbitrary perspectives with no added overhead, whereas with rasterization you would need to re-transform the geometry of the world into a new perspective frustum for every vantage point of reflection, refraction, etc. of any surface position and surface normal.
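
    A tiny sketch of that "transform the rays, not the world" point: the same camera-space ray is re-aimed from any vantage point with one rotation plus an origin offset (the matrix and camera position below are made-up illustrative values):

    def transform_ray(origin, direction, cam_to_world_rot, cam_position):
        # Rotate both the ray origin and direction from camera space into world space
        # (3x3 matrix * vector), then offset the origin by the camera's world position.
        world_dir = [sum(cam_to_world_rot[r][c] * direction[c] for c in range(3)) for r in range(3)]
        world_origin = [sum(cam_to_world_rot[r][c] * origin[c] for c in range(3)) + cam_position[r]
                        for r in range(3)]
        return world_origin, world_dir

    # Identity rotation, camera sitting at (0, 2, 5): the direction is unchanged and only
    # the starting point moves. A reflection or refraction ray can be set up the same way
    # from any surface point, without re-transforming the scene geometry itself.
    rot = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    print(transform_ray([0.0, 0.0, 0.0], [0.0, 0.0, -1.0], rot, [0.0, 2.0, 5.0]))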

    The hard part of ray tracing is the fact that you need to sort your world geometry (and continuously re-sort the dynamic geometry) in an acceleration structure, have that structure's entire contents resident in fast VRAM, and have a fuck-ton of compute to hit a particular threshold of samples per pixel per second. And even if you do manage to achieve all of that, you're still stuck with the fact that 99.999% of consumers are well below that threshold and still will be a decade from now.
     
    iroboto likes this.
  14. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,307
    Thanks, so that means ray tracing doesn't take into account constructive and destructive interference.
     
  15. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    445
    Likes Received:
    523
    You shoot one ray from the eye through each pixel of the screen. (Totally independent of scene / lights - just a matter of camera projection.)
    When this ray hits something, you trace a second ray from the hitpoint to the light source. If the light is not occluded, you shade the hitpoint accordingly.

    This is very basic Whitted raytracing, but there are methods that shoot rays from the lights as well:
    Bidirectional Path Tracing combines paths originating from both the camera and the lights.
    Photon Mapping shoots rays from the lights, stores the hitpoints as "photons", and later you gather nearby photons around camera ray hitpoints to get global illumination effects.
    Although those methods are typical for offline rendering, we already see some similar ideas in games and demos.

    Also worth mentioning: in games, the first step of shooting rays through screen pixels is replaced by rasterization because it's faster. (Quake RTX being the only exception AFAIK.)
     
    pharma and chris1515 like this.
  16. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,986
    Likes Received:
    2,557
    Yes. Light moves like particles in ray tracing. If a cat is inside a box, it is always dead, because a computer cat is not a real cat. It has no soul.
     
  17. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    3,487
    Likes Received:
    2,120
    Location:
    Barcelona Spain
    https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-overview

    Good definition of raytracing

     
    pharma likes this.