Photon Mapping

Discussion in 'Rendering Technology and APIs' started by chris1515, Jun 7, 2019.

  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,414
    Likes Received:
    10,784
    Location:
    Under my bridge
    How do you design hardware to trace a limited ray count? That's like suggesting a GPU is designed to only shade a certain number of pixels to make it faster at shading that number of pixels. You have a number of compute units, and these process pixel shaders and shade pixels, whatever the resolution. You can't make a compute unit faster by limiting it to 1080p framebuffers. The ROPs draw the pixels, however many you want, as quickly as they can. You can't make a ROP faster by limiting it to 1080p framebuffers.

    Likewise, with ray tracing, you cast rays, however many you choose: a handful for AI, billions for total scene illumination. Once your hardware has traced all those rays, whether on CPU or compute or accelerated HW, you have your data to use however you want, such as constructing an image. The process of tracing a ray is independent of screen size.

    I am unable to envision a hardware design that traces only a fixed number of rays, unless you literally have 2 million sampling units that can each trace one ray per frame for a 1080p image. Realistically, HWRT is going to be a form of processor that takes workloads and produces results as quickly as it can, to be used however they are used.
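
    To illustrate (a minimal CPU-side sketch of my own, not any actual HWRT API), the batch handed to the tracer is just a number of rays, with no connection to any framebuffer:

        #include <vector>

        struct Ray { float origin[3]; float dir[3]; };
        struct Hit { float t; int primitive; };

        // Placeholder intersector: a real one would walk a BVH, or be
        // replaced entirely by fixed-function hardware. Returns "miss"
        // here just so the sketch compiles.
        Hit trace(const Ray&) { return Hit{ -1.0f, -1 }; }

        // The tracer consumes a batch of rays as fast as it can. Nothing
        // here knows or cares whether the batch came from a 1080p image,
        // a 4K image, an AI visibility query, or total scene illumination.
        std::vector<Hit> trace_batch(const std::vector<Ray>& rays) {
            std::vector<Hit> hits;
            hits.reserve(rays.size());
            for (const Ray& r : rays)   // batch size is a free parameter
                hits.push_back(trace(r));
            return hits;
        }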

    Perhaps, thinking aloud as ideas come to me, the RT process is coarse-grained, not tracing down to the geometry level, making it suitable for lighting but not sharp reflections? That would involve less memory and so allow caches to be more effective. Hardware cone-tracing? Well, no, it's called ray tracing in the slide.
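
    For what the coarse-grained idea is worth, it would look less like triangle intersection and more like marching a low-resolution occupancy structure. A hypothetical voxel-proxy sketch (my own, fixed-step rather than a proper DDA): cheap and cache-friendly, but far too blunt for mirror reflections.

        constexpr int N = 64;            // coarse 64^3 occupancy grid
        bool occupied[N][N][N] = {};     // would be baked from scene geometry

        // March a ray in fixed steps through the coarse grid. A hit only
        // says "something is here": fine for occlusion and GI, far too
        // blunt for a sharp reflection.
        bool coarse_trace(const float o[3], const float d[3], float max_t) {
            for (float t = 0.0f; t < max_t; t += 1.0f) {   // ~one voxel per step
                int x = int(o[0] + d[0] * t);
                int y = int(o[1] + d[1] * t);
                int z = int(o[2] + d[2] * t);
                if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
                    return false;        // left the grid: miss
                if (occupied[x][y][z])
                    return true;         // coarse hit
            }
            return false;
        }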
     
  2. Dictator

    Newcomer

    Joined:
    Feb 11, 2011
    Messages:
    108
    Likes Received:
    248
    I was not talking about the HW limiting the ray count or something (well, AMD does actually limit the tessellation factor, manually or automatically, in its PC driver, as an example), sorry if it came off that way. Rather, I was talking about the implementation in theoretical games: the devs limit the ray count, or AMD-sponsored RT titles would choose how many rays to cast based upon specific AMD HW limitations/features.
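
    A hypothetical sketch of what I mean (the tiers and counts are made up, not any real vendor API): the title picks its per-pixel ray budget from the detected hardware, much like tessellation factors get clamped.

        // Hypothetical: a game selects its ray budget per hardware tier,
        // analogous to a driver clamping tessellation factors.
        enum class RTTier { None, Low, High };

        int rays_per_pixel(RTTier tier) {
            switch (tier) {
                case RTTier::None: return 0;  // fall back to raster tricks
                case RTTier::Low:  return 1;  // 1 spp plus heavy denoising
                case RTTier::High: return 4;  // cleaner GI and shadows
            }
            return 0;
        }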
     
  3. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,414
    Likes Received:
    10,784
    Location:
    Under my bridge
    Not sure I understand. Devs can and should be free to cast however many rays they want. We already see RTX scaling ray counts depending on which box you have, and distributing rays based on importance.
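
    Importance-driven distribution is conceptually just splitting a fixed budget. A toy sketch (my own; "importance" stands in for whatever heuristic the engine uses, e.g. variance or material roughness; real schemes also clamp, jitter, and reuse samples temporally):

        #include <cstddef>
        #include <vector>

        // Split a fixed total ray budget across pixels in proportion to a
        // per-pixel importance weight.
        std::vector<int> distribute_rays(const std::vector<float>& importance,
                                         int total_budget) {
            float sum = 0.0f;
            for (float w : importance) sum += w;
            std::vector<int> counts(importance.size(), 0);
            if (sum <= 0.0f) return counts;   // nothing to prioritise
            for (std::size_t i = 0; i < importance.size(); ++i)
                counts[i] = int(total_budget * importance[i] / sum);
            return counts;
        }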
     
  4. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    394
    Likes Received:
    475
    Of course you always adjust to what the hardware can do, but just take the Crytek demo, which gets good results even without HW RT. Seems there is no need to worry much.
    If their RT were so weak that it only made sense with help from the cloud, they would just drop it, I guess.
    Dealing with limitations/decisions like 'can barely trace the characters' would cause a lot of complexity for little benefit; they would drop it as well.
     
