Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Discussion in 'Rendering Technology and APIs' started by Scott_Arm, Aug 21, 2018.

  1. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,601
    Likes Received:
    643
    Location:
    New York
    We get that you have proof of your great RT implementation that you're not able to share with us — an implementation nobody else on the planet is able to match.

    The point is that if real-time RT were viable on general compute architectures, we would know about it, since the actual experts in the field aren't quite as shy about sharing their accomplishments.
     
  2. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    921
    Likes Received:
    874
    Early on, what you need is speed and ease of implementation. That means fixed-function acceleration and triangles. DXR itself is very programmable, so devs can experiment with it all they want, even without hardware acceleration. And they will, since thanks to RTX there is now consumer demand for real-time ray tracing.
     
    pharma and eloyc like this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,783
    Likes Received:
    10,800
    Location:
    The North
    Guys, I hate to back-seat mod here, but attack his arguments, not the person posting. JoeJ offers one of many perspectives on the current landscape.
     
    Billy Idol likes this.
  4. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    981
    Likes Received:
    1,108
    I'd planned it for current-gen consoles, but I've already missed that. If I succeed, I can't say when results can be shown publicly. If I fail, I can tell you that in about half a year, maybe.

    If I could compete with RTX using a compute-based RT implementation, I would not hang out here at the forum. I would sit with a laptop at AMD headquarters, haha :)
    Some posts above I said myself that classical RT on GPU requires at least 'changes' in hardware. All I criticize is black boxes and fixed function.
    I never said I'm an expert in classical RT myself, or that I have a great classical RT implementation (I have none).

    I see this all may sound contradictory, but unfortunately I'm not a researcher who gets paid or earns a reputation for publications.
    When I say RT can perform well on regular compute, I mean raytracing alternative geometry, not industry-standard triangle meshes with custom material shaders, which I refer to as 'classical'.
    It's difficult to argue when you have to keep secrets, and it's even more difficult not to sound like a 'know it all', which I certainly don't intend.

    My risk of failure is very high, and I'm not Bruce Bell, who thinks avoiding a perspective divide is a great invention. To be clear, I do not aim to replace classical RT or triangle meshes either.

    The attention you guys spend on me is totally unexpected.
    I'm just one out of hundreds or thousands or more who are experimenting with unconventional ideas. Maybe 1% of those guys succeed, so don't take me too seriously, but don't rip me apart either.
    I don't have time for so much self-defense, searching for public code on GitHub etc. just to prove every single word I say, even things that really everyone knows.
    This forum is a great resource for news about RTX, so I'll stay, and thanks for this. But I'll stay as a quiet observer, and hopefully I manage to swallow the next provocation in silence, sigh... ;)
     
    Billy Idol, milk and eloyc like this.
  5. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,028
    Location:
    Under my bridge
    Do you know about Claybook? It's raytraced via compute using SDFs. Look up posts by sebbbi on this board, as he gives a decent amount of info.

    As for the attacks: you've made a significant claim that AMD GPUs are vastly superior to nVidia for compute, and you've said the implementation of Radeon Rays isn't particularly optimal, which are not insignificant claims. The way you've worded yourself can come across as arrogant to some readers - not me, but the written word doesn't carry the voice which wrote it, and the words you've used could be read with an air of superiority, which seems to be what trinibwoy is reacting to.

    Don't take it too hard. If you can't post details, you'll just have to agree to disagree, but you can't expect to have a proper conversation with people based on knowledge you have which we don't and which you won't share. ;)
     
    #225 Shifty Geezer, Oct 3, 2018
    Last edited: Oct 3, 2018
    pharma, OCASM and Malo like this.
  6. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,740
    Location:
    Barcelona Spain
    Claybook is raytraced, SDF-based...
     
  7. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    981
    Likes Received:
    1,108
    Ok... :)
    Yes, I know Claybook. Ask Mr. Aaltonen what he has to say (if he wants) about GCN compute performance. He should know.
    Ask the developers of RadeonRays about their goals. I'm pretty sure they target content creation and need a framework to test out realtime approaches.
    Sounding arrogant: ok, my fault. It's not the first time I've heard this. It's not intended. Maybe it's because I learned English by reading computer science teaching material. Probably I sound like I'm teaching and 'knowing better' all the time, I guess. It's really not intended.

    I don't take it hard. I'm not angry at anyone here. Like I said, I like the forum.
    But I disagree about proper conversation. I can have proper conversations with others while talking around secrets, and I'm used to that. Others have their own. We exchange ideas, discuss advantages and disadvantages, make proposals... all this works while maintaining secrets. An AMD and an NV guy can still have a drink and discuss GPUs.
    But here this is not the case. There is no exchange; I do not learn from the other people here - they just keep asking me questions. And when the secrets become a central topic, it is time to leave.

    But I'll be back when I have questions myself. :) Thanks for the moderation work and the honest words, man!
     
  8. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,601
    Likes Received:
    643
    Location:
    New York
    I agree 100%. I thought it was clear I was addressing the repeated unfounded claims and not the person making them. Apologies if it came off otherwise... Shifty already articulated the issue with the posts.
     
  9. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    21,705
    Likes Received:
    7,342
    Location:
    ಠ_ಠ
    Speaking of textures, I was reminded of Okami HD's "super-resolution"

    http://www.capcom-unity.com/gregaman/blog/2012/11/05/okami-hd-powered-by-technical-innovation-love
    ---
    Anyways, pretty interesting to see nV & MS work together on a bunch of things lately.

    :mrgreen:
     
    #229 TheAlSpark, Oct 3, 2018
    Last edited: Oct 3, 2018
  10. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,601
    Likes Received:
    643
    Location:
    New York
    That’s fair but we should critique RT in realistic terms. The world is currently made up of triangles and we need to acknowledge the realities of chip manufacturing technology and limited developer and artist budgets.

    The only relevant question is how best to use available resources to improve games / graphics. If there are other techniques that produce similar or better results within those limits then of course we would want to learn more about that.

    I recently reread some of Anandtech’s old articles from the Voodoo / Riva / early Geforce days. It’s amazing how far we’ve come but it’s been due to steady progress over many years and every single feature started off as a much slower, less flexible version of what it would eventually become. RT will be no different.
     
    dobwal and OCASM like this.
  11. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    If you have a die shot, I’d love to see it!

    Because right now, all we have are artistic impressions that are anything but.
     
  12. Ethatron

    Regular Subscriber

    Joined:
    Jan 24, 2010
    Messages:
    869
    Likes Received:
    277
    You know, this is the wrong argument. Sometimes we can only have a proper discussion if one has written, say, SM6.0 compute shaders. The "knowledge" is the public API documentation? No, it's experience. That's unshareable and very individual. In the working space there is, first and foremost, respect and acceptance, and an understanding that results vary, dramatically at times, because of the diverging experience and algorithms used.

    Let's call it a professional discussion, where everyone tries to learn from the others' experience and thoughts (it's super rare to find people lecturing or evangelizing or selling; it's overwhelmingly just asking questions of each other), and in the end a discussion is something enriching in itself, because it makes you think and talk about the stuff you like to explore.

    There is no need to prove, because there is no need to lie, and if mistakes are made there is no need to blame, only to learn.
     
    Silent_Buddha, iroboto and milk like this.
  13. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,441
    Likes Received:
    3,319
    Yeah, I for one was finding the guy's info interesting to hear about. I don't see why he or anyone would make up posts like those. Why take the trouble to fabricate something that specific, in a group this niche, for no apparent purpose? Who would do this falsely, and what for? For heaven's sake, guys... chill a bit. Give him the benefit of the doubt and let's hear what he has to say. Take salt to taste with it if you wish, but go easy on the accusative tone. This place is supposed to be fun and good times.
     
    #233 milk, Oct 4, 2018
    Last edited: Oct 4, 2018
  14. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    981
    Likes Received:
    1,108
    Thanks for all the backing, guys! Like I said, I don't take anything personally, and I see that what I've said seems to be just polarizing, which is not bad. But all this has also led to some doubts about myself.

    So I looked up some more kernels of RadeonRays, but my impression does not change. Basically, I rule them all out early because they use a binary tree, which results in jumping around in memory like crazy, and GCN performs badly with this. NV is much more forgiving of bad memory access patterns.
    (Unrelated info: NV is also more forgiving of unoptimized code. Or, if you prefer: AMD rewards optimization much more.)
    I don't say the code here is unoptimized or bad, but in my opinion a binary tree is the worst choice. Using a tree with larger branching factors (e.g. 8 or 16 children per node) allows child nodes to be read from coherent memory, and also to be processed in parallel if desired. The tree also has far fewer levels, which limits divergence.
    Add this to my previous suggestion (which can reduce bandwidth by a factor of up to 64!), and you see why I am not impressed if we talk about realtime RT. I would say RadeonRays is 'high performance', but I would not say it is 'realtime'.
    That's just my personal opinion. But my criticism is just a response to you mentioning RadeonRays in the context of realtime RT or even hardware acceleration. That makes no sense to me. RR is perfectly fine for content creation, because it does not require CUDA.
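    To illustrate the depth argument with a minimal sketch (hypothetical numbers, not RadeonRays code): the branching factor directly shrinks tree depth, and depth bounds the longest chain of dependent memory fetches a ray must make.

```cpp
// Levels needed for a tree over `leaves` primitives at a given branching
// factor: ceil(log_branching(leaves)), computed without floating point.
// A binary tree over 1M primitives implies 20 dependent node fetches on
// the deepest path; an 8-wide tree implies 7, and each node's children
// sit in one contiguous block that a GPU wavefront can load coherently.
int levels(long long leaves, int branching) {
    int lv = 0;
    long long span = 1;
    while (span < leaves) {
        span *= branching;
        ++lv;
    }
    return lv;
}
```

    For example, `levels(1000000, 2)` is 20 while `levels(1000000, 8)` is 7, so a wide tree cuts the longest chain of dependent reads by nearly 3x, at the cost of testing more boxes per node.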

    My personal pessimism and doubt about AMD's raytracing experience may be similarly out of place! It's not fact, just a personal guess! AMD has managed to surprise by beating their competitors more than once.


    The second point, vendor compute performance, is something I can't prove, but I see it appears exaggerated, even to many experienced programmers.
    So I'll repeat some points: most recent GPUs are not included. GCN needs more optimization work and careful design of memory access patterns (some pitfalls easily go unnoticed).
    Further, some personal impressions: the game industry has still not learned to utilize compute - they think in triangles and pixel shaders. Other industries have already been won over by NV's CUDA and have no need to optimize.
    This is why we see the insane compute power of GCN so rarely. But it's there; my numbers are real. Notice that all my optimizations work for NV too. I do not optimize exclusively for AMD, and I maintain different codepaths for both vendors where the best choices differ. (Which luckily seems unnecessary with more recent GPUs.)


    If I wanted to criticize myself, I would really pick other points, likely:
    Accusing NV of using black boxes and fixed function to protect their RT lead at the cost of limiting general progress and innovation. <- Why did nobody react to this? That's a really exaggerated insinuation, maybe. But you go havoc on my performance analysis, which is real (though you have to add it to others' results for an average).
    Also, my apologies to Bruce Bell. That was really out of place.
    ... probably I'm wrong about other things too. I'm often wrong, like everybody else.

    I was not aware that what I've seen is not real. It showed Tensor and RT cores having the same area as shader cores.
    So you're right, and I may have drawn wrong conclusions.


    I agree about triangles, but not because they are state of the art. They are just the most efficient way to approximate geometry in practice. (The exception is something diffuse like a branchy bush with many leaves.)
    But I disagree with the optimism in the rest of your comment.
    You are just wrong: the core of rasterization (ROPs) is still fixed function. Can you draw a curved triangle, or do occlusion culling while rendering back to front like Quake does? No, you can't. All you can do is early-Z and occlusion culling, and both require drawing the entire triangle.
    Now you can argue that's no problem - today we cull stuff at larger granularity etc. - and you are right.
    But raytracing is different. Rasterizing a triangle is simple enough that you can select one out of two possible options, make it fixed function, and be done. Raytracing, however, is still an open problem, on both CPU and GPU. Now all research on this open problem is entirely in the hands of a profit-oriented minority.
    Maybe that's just the kind of specialization our time demands, but maybe it is just too early to close this topic to public research.
    In any case, I doubt the core will ever become programmable. The harm may have already happened and may be irreversible. We cannot be sure about that.



    About SDF, well... we cannot compare this to anything discussed here. If we want RT GI, I personally think we have to rule it out, together with Voxel Cone Tracing, Light Propagation, etc.
    The problem is that if we talk about lighting surfaces, doing this with a volume data structure requires more memory and more samples, no matter how good your compression is.
    Volume data also looks attractive for implementing simple algorithms, and simple is good. But the truth is that it's just brute force. Sphere tracing is brute force, and memory is limited and slow. Not a good choice if we need to relate every point in space to every other, with a visibility test in between. We cannot solve an O(n^3) problem using brute force. And even if we could, we should choose the better approach just to save energy.
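    For reference, the brute force being described: a minimal sphere-tracing loop over a hypothetical single-sphere scene (not Claybook's code). Each iteration advances the ray by the distance to the nearest surface, so thin features and grazing rays can cost many dependent SDF evaluations.

```cpp
#include <cmath>

// Signed distance to a unit sphere at the origin (the whole "scene" here).
float sceneSDF(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// Sphere tracing: march along the ray in steps of the SDF value, which is
// the radius of the largest empty sphere around the current point.
// Returns the hit distance t, or -1 on a miss.
float sphereTrace(float ox, float oy, float oz,
                  float dx, float dy, float dz) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {   // fixed step budget per ray
        float d = sceneSDF(ox + t * dx, oy + t * dy, oz + t * dz);
        if (d < 1e-4f) return t;      // close enough to the surface: hit
        t += d;                       // safe step: cannot overshoot geometry
        if (t > 100.0f) break;        // ray escaped the scene
    }
    return -1.0f;
}
```

    A ray from (0, 0, -3) straight along +z hits the unit sphere at t = 2; a ray offset well above the sphere keeps marching until it leaves the scene and returns -1.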

    Personal opinion - I've failed with volume data approaches - others are still working on them and achieving results. Personal. And not meant as criticism of Claybook or SDFs in general. I only talk about their application to full GI.
    For example, I like this work here, which seems to be a volume-based diffusion approach:
    I've experimented with this too, years ago, but the problem is: with reduced volume resolution, light leaks just like crazy. Volume data is not a good approximation at low resolution (you cannot express multiple walls within a single voxel, not even a single wall well). Voxel Cone Tracing has the same problem, SDF too.
    Surfels can be tuned to cause overocclusion instead, which is acceptable, and there is no global spatial limitation like a grid. At some point each approach breaks down, hopefully at a distance far enough from the camera. (See the Many LODs paper I've mentioned in the other thread if you're interested: )


    So, that's it. I need to continue work now. It costs too much time to do introductions to GI ;) ... see ya!
     
  15. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,601
    Likes Received:
    643
    Location:
    New York
    I thought this video was very well done. A nice walk through 3D history and a peek at a promising ray/path-traced future.

     
    OCASM, Malo and pharma like this.
  16. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,786
    Likes Received:
    3,740
    Location:
    Barcelona Spain
    For Quantum Break, Remedy chose not to use voxel cone tracing because of the wall problem...
     
    pharma likes this.
  17. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    921
    Likes Received:
    874
    CryEngine uses sparse voxel octree global illumination. It's hardly perfect but much better than nothing:

    https://docs.cryengine.com/pages/viewpage.action?pageId=25535599
     
    pharma and Lightman like this.
  18. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,575
    Likes Received:
    2,294
    Porsches, Storm Troopers, and Ray Tracing: How NVIDIA and Epic are Redefining Graphics
    October 1, 2018


    Question 1: If “Speed of Light” had been made through traditional techniques like light baking or lightmaps, how would the end result differ? How would the development time change?
    Answer [Ignacio Llamas, NVIDIA]: The entire demo is about fully dynamic studio lighting with multiple area lights. Using lightmaps would not be an option for this kind of setup. When everything is dynamic, traditional rasterization-based techniques are simply insufficient to correctly capture area light shadows, correct artifact-free reflections and the diffuse light interactions. There was simply no way to do this without real-time ray tracing and the performance of Turing GPUs.

    Question 2: “Speed of Light” delivered on the promise of photorealistic results from ray tracing. Can you give us a sense of how long it took to produce that clip? How big was the team that worked on it?

    Answer [Ignacio Llamas, NVIDIA]: From a technology standpoint we started from where the “Reflections” demo left off — which means ray traced area lights, reflections and ambient occlusion. We had about three months to push the technology to the next level to meet the higher demand of fully dynamic ray traced lighting in this demo. To accomplish this we had about eight rendering engineers across NVIDIA and Epic involved to various degrees.

    Answer [Francois Antoine, Epic Games]: The “Speed of Light” demo is actually made of two components — the cinematic ‘attract mode’ and the interactive lighting studio, and they both use the exact same vehicle asset. If I were to break it down by sub-project, we had two people working on the Speedster asset, three people working on the interactive lighting studio and about five people working on the cinematic. The production of the entire “Speed of Light” project took about eight weeks and happened in parallel with the development of new ray-traced rendering features.

    Question 3: Is “Speed of Light” using cinematic-quality assets, or in-game quality assets?

    Answer [Ignacio Llamas, NVIDIA]: The original CAD model is as detailed as you can get. The in-engine version, tessellated to either 10 or 40 million polygons, is in the range that we can consider cinematic quality. In addition to the polygon count, the other thing that makes the model cinematic quality is the physically-based materials, which have an amazing amount of detail and closely match reference samples and car photography.

    Answer [ Francois Antoine, Epic Games]: The Porsche Speedster asset used in the “Speed of Light” was directly tessellated in Unreal Engine’s DataSmith using Porsche’s actual CATIA CAD manufacturing files. The first iteration of the Speedster was 40 million polygons, which we then selectively re-tessellated down to just shy of 10 million polygons. Ignacio had surprised us by saying that this optimization would not significantly impact the performance when rendering using RTX and that proved to be the case. The project performance was very similar with either version of the car! This is a real plus for the visualization of large enterprise datasets.
    ...
    Question 4: The materials in the demo were highly varied, with a strong focus on reflective and translucent surfaces… how did you build those materials for the demo?

    Answer [ Francois Antoine, Epic Games]: Indeed, when we first got wind of Turing’s ray-tracing feature set, we immediately thought of an automotive-focused project. Cars are all about smooth, curvy reflections — what we call “liquid lines” in industry speak — and we couldn’t think of any other subject that could benefit more from Turing’s ray-tracing. In order to get these reflective and translucent materials to look as accurate as possible, we emphasize the use of high quality real-world reference, in some cases going as far as ordering car parts and disassembling them to better understand their internal structures and how they interact with light. This is exactly what we did with the Speedster’s tail lights — this new found understanding coupled with the more physically accurate behavior of ray-tracing allowed us to achieve much more realistic taillights than we previously could.

    Question 5: Is the entire demo ray-traced, or have some rasterization techniques been used?

    Answer [Juan Canada, Epic Games]: The demo uses hybrid techniques where a raster base pass is calculated first. On top of that, ray traced passes are launched to calculate complex effects that would be very hard to achieve with traditional raster techniques.

    Question 6: There’s so much to take in watching the “Speed of Light” clip. Are there any little details in the sequence that people might be prone to miss? Which moments show off ray tracing most effectively?

    Answer [Francois Antoine, Epic Games]: There is a lot more innovative tech there than you will notice — and that is a good thing! The tech shouldn't attract your attention, it should just make the image look more plausible. For example, the light streaks reflecting in the car are not coming from a simple texture on a plane (as would traditionally be done in rendering), but are instead animated textured area lights with ray-traced soft shadows. This mimics how these light streaks would be created in a real photo studio environment, with light affecting both the diffuse and specular components of the car's materials and creating much more realistic light behavior. Oh, and it's amazing to finally have proper reflections on translucency, thanks to ray-tracing!
    ....
    Question 9: From a game development perspective, what are the long-term advantages to supporting ray-tracing in your development pipeline today?

    Answer [Juan Canada, Epic Games]: Ray tracing will not only allow simulating more sophisticated optical phenomena than what has been seen to date in real-time graphics. It also brings remarkable advantages to the workflow. Ray tracing is more predictable and generates fewer visual artifacts than other techniques. Also, code tends to be simpler: while there will be a transition period where mixing raster and ray tracing will require advanced machinery to get both worlds working together, in the long term ray tracing will lead to code that is easier to maintain and expand.

    Question 10: Do you have suggestions for how developers can use rasterization and ray tracing together to maximize the efficiency of their art pipeline? It seems like we’re experiencing a best-of-both-worlds moment — rasterization is still great for its efficient geometry projection and depth calculation, while ray tracing makes lighting faster.

    Answer [Ignacio Llamas, NVIDIA]: For developers, my advice is to use the best tool for each problem. Rasterization is great at handling the view from the camera with efficient texture LOD computation. As long as the geometric complexity is below some point, it is still the right answer. If the amount of geometry goes up significantly, ray tracing can be more efficient. Then use ray tracing to solve all those problems that it is best at, such as dynamic area light shadows, reflections, ambient occlusion, diffuse GI, translucency with physically correct absorption and refraction, or caustics. For artists, the choice regarding using rasterization or ray tracing may already be made for them by the engine developer. I think what’s important for artists and their pipeline is making sure their flows adapt to enable the best possible quality that can be achieved now that ray-tracing is enabling looks that were not possible before. This means learning about the range of options they have, such as correct area lights and reflections, and making informed decisions on material parameters based on this. It may also mean for example ensuring that materials have new physically based parameters, such as correct absorption and index of refraction, which may have been ignored before.




    See link for more Q & A ....
    https://news.developer.nvidia.com/nvidia-epic-games-and-real-time-ray-tracing/

     
    #238 pharma, Oct 7, 2018
    Last edited: Oct 7, 2018
    Lightman and OCASM like this.
  19. Samwell

    Newcomer

    Joined:
    Dec 23, 2011
    Messages:
    127
    Likes Received:
    154
    Nice small devblog post with some Non-Triangle Raytracing acceleration numbers:

    https://devblogs.nvidia.com/my-first-ray-tracing-demo/#disqus_thread

    The author's system had a Titan V, so compare 173 FPS on the 2080 Ti vs 60 on the Titan V, or 80-90 vs 30 FPS in the second scene. Nearly 3x faster than the Titan V doesn't sound so bad.
     
    OCASM, pharma, eloyc and 1 other person like this.
  20. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,575
    Likes Received:
    2,294
    What's also interesting from that link is how easy it is to add soft shadows.
     
    OCASM and eloyc like this.