Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Teraflop monsters, however, are NOT inadequate at real-time RT. I know this from personal experience. I don't need to prove this to myself; I only need to look at my screen here. I will not prove it to you.

We get that you have proof, which you're not able to share with us, of your great RT implementation that nobody else on the planet is able to match.

The point is that if real-time RT were viable on general compute architectures, we would know about it, since the actual experts in the field aren't quite as shy about sharing their accomplishments.
 
If we are talking about consoles adopting it, then the actual implementation matters a lot, as console devs will inevitably get APIs (or versions of them, in the case of MS) that expose more of the underlying operations under the hood. If the hardware implementation is hardly programmable, that's a whole generation of missed opportunities for inventive alternative solutions and experimentation with different optimizations.
Early on, what you need is speed and ease of implementation. That means fixed-function acceleration and triangles. DXR itself is very programmable, so devs can experiment with it all they want even without hardware acceleration. And they will, since thanks to RTX there's now consumer demand for RTRT.
 
Guys, I hate to back-seat mod here, but attack the arguments, not the person posting. JoeJ offers a perspective on much of the current landscape.
 
Sorry if you already said it, but when do you think you will be able to show us your stuff?
I'd planned it for current-gen consoles, but I've already missed that. If I succeed, I can't say when results can be shown publicly. If I fail, I can tell you that in about half a year, maybe.


If I could compete with RTX using a compute-based RT implementation, I would not hang out here on the forum. I would be sitting with a laptop at AMD headquarters, haha :)
A few posts above I said myself that classical RT on GPU requires at least 'changes' in hardware. All I criticize is black boxes and fixed function.
I never said I was an expert in classical RT myself, or that I have a great classical RT implementation (I have none).

I see this may all sound contradictory, but unfortunately I'm not a researcher who gets paid or earns reputation for publications.
When I say RT can perform well on regular compute, I mean ray tracing alternative geometry representations, not industry-standard triangle meshes with custom material shaders, which I refer to as 'classical'.
It's difficult to argue when you have to keep secrets, and it's even more difficult not to sound like a 'know-it-all', which I certainly don't intend.

My risk of failure is very high, and I'm not Bruce Bell, who thinks avoiding a perspective divide is a great invention. I do not aim to replace classical RT or triangle meshes either, to be clear.

The attention you guys are spending on me is totally unexpected.
I'm just one of hundreds or thousands or more who are experimenting with unconventional ideas. 1% of those guys succeed, so don't take me so seriously, but don't rip me apart either.
I don't have time for so much self-defense, searching for public code on GitHub etc. just to prove every single word I say, even things that really everyone knows.
This forum is a great resource for news about RTX, so I'll stay, and thanks for that. But I'll stay as a quiet observer, and hopefully I'll manage to swallow the next provocation in silence, sigh... ;)
 
Do you know about Claybook? It's ray traced via compute using SDFs. Look up posts by sebbbi on this board, as he gives a decent amount of info.

As for the attacks, you've made a significant claim that AMD GPUs are vastly superior to nVidia for compute, and you've said the implementation of Radeon Rays isn't particularly optimal, which are not insignificant claims. The way you've worded yourself can come across as arrogant to some readers - not me, but the written word doesn't carry the voice which wrote it, and the words you've used could be read with an air of superiority, which seems to be what trinibwoy is reacting to.

Don't take it too hard. If you can't post details, you'll just have to agree to disagree, but you can't expect to have a proper conversation with people based on knowledge you have, which we don't, and which you won't share. ;)
 

Claybook's raytracing is SDF-based...
 
Do you know about Claybook? It's ray traced via compute using SDFs. Look up posts by sebbbi on this board, as he gives a decent amount of info.

As for the attacks, you've made a significant claim that AMD GPUs are vastly superior to nVidia for compute, and you've said the implementation of Radeon Rays isn't particularly optimal, which are not insignificant claims. The way you've worded yourself can come across as arrogant to some readers - not me, but the written word doesn't carry the voice which wrote it, and the words you've used could be read with an air of superiority, which seems to be what trinibwoy is reacting to.

Ok... :)
Yes, I know Claybook. Ask Mr. Aaltonen what he has to say (if he wants) about GCN compute performance. He should know.
Ask the developers of RadeonRays about their goals. I'm pretty sure they target content creation, and they need a framework to test out realtime approaches.
Sounding arrogant: OK, my fault. It's not the first time I've heard this. It's not intended. Maybe it's because I learned English by reading computer science teaching material. I probably sound like I'm lecturing and 'knowing better' all the time, I guess. It's really not intended.

Don't take it too hard. If you can't post details, you'll just have to agree to disagree, but you can't expect to have a proper conversation with people based on knowledge you have, which we don't, and which you won't share. ;)

I don't take it hard. I'm not angry at anyone here. Like I said, I like the forum.
But I disagree about proper conversation. I can have a proper conversation with others while talking around secrets, and I'm used to that. Others have their own. We exchange ideas, discuss advantages/disadvantages, make proposals... all of this works while maintaining secrets. An AMD guy and an NV guy can still have a drink and discuss GPUs.
But here this is not the case. There is no exchange; I do not learn from the other people here - they just keep asking me questions. And when the secrets become a central topic, it is time to leave.

But I'll be back when I have questions myself. :) Thanks for the moderation work and the honest words, man!
 
Guys, I hate to back-seat mod here, but attack the arguments, not the person posting. JoeJ offers a perspective on much of the current landscape.

I agree 100%. I thought it was clear I was addressing the repeated unfounded claims and not the person making them. Apologies if it came off otherwise... Shifty already articulated the issue with the posts.
 
Thanks.

A good super-resolution method could be lovely for cases like Rage, as it uses bicubic upsampling for scaling virtual textures.
Speaking of textures, I was reminded of Okami HD's "super-resolution"

http://www.capcom-unity.com/gregaman/blog/2012/11/05/okami-hd-powered-by-technical-innovation-love
the original image is blown up via a multitude of different algorithms, producing an impression of statistical data about the silhouette of the image. This statistical impression allows them to then procedurally produce a high-quality approximation of the original image in beautiful HD.
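For what it's worth, here's a minimal sketch of what plain bicubic upsampling (as mentioned for Rage's virtual textures above) amounts to, using a Catmull-Rom kernel on a single-channel image. Purely illustrative; not id's or Capcom's actual code, and Okami-style "super-resolution" goes further by inferring silhouette statistics rather than filtering alone.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Catmull-Rom weight for one 1D tap; a common choice of bicubic kernel.
static float cubicWeight(float x) {
    x = std::fabs(x);
    if (x < 1.0f) return 1.5f * x * x * x - 2.5f * x * x + 1.0f;
    if (x < 2.0f) return -0.5f * x * x * x + 2.5f * x * x - 4.0f * x + 2.0f;
    return 0.0f;
}

// Sample a grayscale image at a fractional pixel coordinate using a 4x4 footprint.
// 'img' is row-major with width 'w' and height 'h'. Upscaling by a factor s just
// evaluates this at (dstX / s, dstY / s) for every destination pixel.
static float sampleBicubic(const std::vector<float>& img, int w, int h, float x, float y) {
    const int ix = (int)std::floor(x);
    const int iy = (int)std::floor(y);
    float sum = 0.0f, wsum = 0.0f;
    for (int j = -1; j <= 2; ++j) {
        for (int i = -1; i <= 2; ++i) {
            const int px = std::clamp(ix + i, 0, w - 1);   // clamp at image borders
            const int py = std::clamp(iy + j, 0, h - 1);
            const float wgt = cubicWeight(x - float(ix + i)) * cubicWeight(y - float(iy + j));
            sum  += wgt * img[py * w + px];
            wsum += wgt;
        }
    }
    return sum / wsum;   // normalize so border clamping keeps brightness correct
}

int main() {
    const std::vector<float> img = { 0.f, 1.f, 1.f, 0.f };   // tiny 2x2 source image
    const float v = sampleBicubic(img, 2, 2, 0.5f, 0.5f);    // sample between the texels
    return v >= 0.0f ? 0 : 1;
}
```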

---
Anyways, pretty interesting to see nV & MS work together on a bunch of things lately.

:mrgreen:
 
I see this may all sound contradictory, but unfortunately I'm not a researcher who gets paid or earns reputation for publications.
When I say RT can perform well on regular compute, I mean ray tracing alternative geometry representations, not industry-standard triangle meshes with custom material shaders, which I refer to as 'classical'. It's difficult to argue when you have to keep secrets, and it's even more difficult not to sound like a 'know-it-all', which I certainly don't intend.

That’s fair but we should critique RT in realistic terms. The world is currently made up of triangles and we need to acknowledge the realities of chip manufacturing technology and limited developer and artist budgets.

The only relevant question is how best to use available resources to improve games / graphics. If there are other techniques that produce similar or better results within those limits then of course we would want to learn more about that.

I recently reread some of Anandtech’s old articles from the Voodoo / Riva / early Geforce days. It’s amazing how far we’ve come but it’s been due to steady progress over many years and every single feature started off as a much slower, less flexible version of what it would eventually become. RT will be no different.
 
To see how much GPGPU 'flops' is lost, just take a die shot and look at the area the RT and Tensor cores take.
If you have a die shot, I’d love to see it!

Because right now, all we have are artistic impressions that are anything but.
 
Don't take it too hard. If you can't post details, you'll just have to agree to disagree, but you can't expect to have a proper conversation with people based on knowledge you have, which we don't, and which you won't share. ;)

You know, this is the wrong argument. Sometimes we can only have a proper discussion if one has written SM6.0 compute shaders (for example). Is the "knowledge" the public API documentation? No, it's experience. That's unshareable and very individual. In professional circles, there is first and foremost respect and acceptance, and an understanding that results vary, dramatically at times, because of the diverging experience and algorithms used.

Let's call it a professional discussion, where everyone tries to learn from the others' experience and thoughts (it's super rare to find people lecturing or evangelizing or selling; it's overwhelmingly just asking questions of each other), and in the end a discussion is something enriching in itself, because it makes you think and talk about the stuff you like to explore.

There is no need for proof, because there is no need to lie, and if mistakes are made there is no need to blame, only to learn.
 
Yeah, I for one was finding the guy's info interesting to hear about. I don't see why he or anyone would lie in posts like those. Why take the trouble to fabricate something that specific for a group this niche, for no apparent purpose? Who would do this falsely, and what for? For heaven's sake, guys... Chill a bit. Give him the benefit of the doubt and let's hear what he has to say. Take it with salt to taste if you wish, but go easy on the accusative tone. This place is supposed to be fun and good times.
 
Thanks for all the backing, guys! Like I said, I don't take anything personally, and I see that what I've said seems to be polarizing, which is not bad. But all this has also led to some self-doubt.

So I looked up some more RadeonRays kernels, but my impression does not change. Basically I rule them all out early because they use a binary tree, which results in jumping around in memory like crazy, and GCN performs badly with this. NV is much more forgiving of bad memory access patterns.
(Unrelated info: NV is also more forgiving of unoptimized code. Or, if you prefer: AMD rewards optimization much more.)
I don't say the code here is unoptimized or bad, but in my opinion a binary tree is the worst choice. Using a tree with a larger branching factor (e.g. 8 or 16 children per node) allows child nodes to be read from coherent memory, and also to be processed in parallel if desired (see the sketch below). The tree also has far fewer levels, which limits divergence.
Add this to my previous suggestion (which can reduce bandwidth by a factor of up to 64!), and you see why I am not impressed if we talk about realtime RT. I would say RadeonRays is 'high performance', but I would not say it is 'realtime'.
That's just my personal opinion. But my criticism is just a response to you mentioning RadeonRays in the context of realtime RT or even hardware acceleration. This makes no sense to me. RR is perfectly fine for content creation, because it does not require CUDA.
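To illustrate the branching-factor point, here is a minimal sketch of an 8-wide BVH node and its traversal (illustrative only; this is not RadeonRays' or any vendor's actual layout). All eight child boxes sit contiguously, so one traversal step reads one coherent chunk of memory, and the tree is roughly three times shallower than a binary BVH over the same primitives.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative 8-wide BVH node: child bounds and links stored contiguously,
// so one node fetch pulls all eight child AABBs from adjacent memory.
struct WideNode {
    float    boundsMin[8][3];
    float    boundsMax[8][3];
    uint32_t child[8];    // child node index, or leaf payload if isLeaf[i] != 0
    uint8_t  isLeaf[8];   // 0 = inner child, 1 = leaf (e.g. a triangle range)
    uint8_t  count;       // number of valid children
};

struct Ray { float org[3], dir[3], tMax; };

// Standard slab test of a ray against one child AABB.
static bool hitAABB(const Ray& r, const float mn[3], const float mx[3]) {
    float t0 = 0.0f, t1 = r.tMax;
    for (int a = 0; a < 3; ++a) {
        const float inv = 1.0f / r.dir[a];
        float tn = (mn[a] - r.org[a]) * inv;
        float tf = (mx[a] - r.org[a]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// Depth-first traversal: each iteration tests all children of one node in a
// tight loop over contiguous data (on a GPU, lanes could each take one child).
// Leaf hits are collected for later primitive intersection tests.
static void traverse(const std::vector<WideNode>& nodes, const Ray& ray,
                     std::vector<uint32_t>& leafHits) {
    uint32_t stack[64];
    int sp = 0;
    stack[sp++] = 0;  // root node index
    while (sp > 0) {
        const WideNode& n = nodes[stack[--sp]];
        for (int i = 0; i < n.count; ++i) {
            if (!hitAABB(ray, n.boundsMin[i], n.boundsMax[i])) continue;
            if (n.isLeaf[i]) leafHits.push_back(n.child[i]);
            else             stack[sp++] = n.child[i];
        }
    }
}

int main() {
    WideNode root = {};
    root.count = 1;
    root.isLeaf[0] = 1;  // single leaf child covering a unit box around the origin
    for (int a = 0; a < 3; ++a) { root.boundsMin[0][a] = -1.f; root.boundsMax[0][a] = 1.f; }
    const std::vector<WideNode> nodes = { root };
    const Ray ray = { {0.f, 0.f, -5.f}, {0.f, 0.f, 1.f}, 100.f };
    std::vector<uint32_t> hits;
    traverse(nodes, ray, hits);
    return hits.empty() ? 1 : 0;   // expect one leaf hit
}
```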

My personal pessimism and doubt about AMD's raytracing experience may be similarly out of place! It's not a fact, just a personal guess! AMD has managed to surprise by beating their competitors more than once.


The second point, vendor compute performance, is something I can't prove, but I see that it appears exaggerated even to many experienced programmers.
But I repeat some points: the most recent GPUs are not included. GCN needs more optimization work and careful design of memory access patterns (some pitfalls easily go unnoticed).
Further, some personal impressions: the game industry has still not learned to utilize compute - they think in triangles and pixel shaders. Other industries have already been won over by NV's CUDA and have no need to optimize.
This is why we see the insane compute power of GCN so rarely. But it's there; my numbers are real. Notice that all my optimizations work for NV too. I do not optimize exclusively for AMD, and I maintain different codepaths for both vendors where the best choices differ. (Which luckily no longer seems necessary with more recent GPUs.)


If I wanted to criticize myself, I would really pick other points, likely:
Accusing NV of using black boxes and fixed function to protect their RT lead at the cost of limiting general progress and innovation. <- Why did nobody react to this? That is perhaps a genuinely exaggerated insinuation. But you go wild over my performance analysis, which is real (although you have to average it with others' results).
Also, my apologies to Bruce Bell. That was really out of place.
... probably I'm wrong about other things too. I'm often wrong, like everybody else.

If you have a die shot, I'd love to see it!

Because right now, all we have are artistic impressions that are anything but.

I was not aware that what I'd seen was not real. It showed the Tensor and RT cores having the same area as the shader cores.
So you're right, and I may have drawn wrong conclusions.


That’s fair but we should critique RT in realistic terms. The world is currently made up of triangles and we need to acknowledge the realities of chip manufacturing technology and limited developer and artist budgets.

The only relevant question is how best to use available resources to improve games / graphics. If there are other techniques that produce similar or better results within those limits then of course we would want to learn more about that.

I recently reread some of Anandtech’s old articles from the Voodoo / Riva / early Geforce days. It’s amazing how far we’ve come but it’s been due to steady progress over many years and every single feature started off as a much slower, less flexible version of what it would eventually become. RT will be no different.

I agree about triangles, but not because they are state of the art. They are just the most efficient way to approximate geometry in practice. (An exception is something diffuse like a branchy bush with many leaves.)
But I disagree with the optimism in the rest of your comment.
You are just wrong: the core of rasterization (ROPs) is still fixed function. Can you draw a curved triangle, or do occlusion culling while rendering back to front like Quake does? No, you can't. All you can do is early Z and occlusion queries, and both require drawing the entire triangle.
Now you can argue that's no problem - today we cull stuff at larger granularity etc. - and you are right.
But raytracing is different. Rasterizing a triangle is simple enough that you can select one of two possible options, make it fixed function, and be done. Raytracing, however, is still an open problem, on both CPU and GPU. Now all research on this open problem is entirely in the hands of a profit-oriented minority.
Maybe that's just the kind of specialization our time demands, but maybe it is just too early to close this topic to public research.
In any case, I doubt the core will ever become programmable. The harm may have already been done and may be irreversible. We cannot be sure about that.



About SDFs, well... we cannot compare this to anything discussed here. If we want RT GI, I personally think we have to rule it out, together with Voxel Cone Tracing, Light Propagation Volumes, etc.
The problem is that if we talk about lighting surfaces, doing this with a volume data structure requires more memory and more samples, no matter how good your compression is.
Also, volume data appears attractive because it enables simple algorithms, and simple is good. But the truth is that it's just brute force. Sphere tracing is brute force, and memory is limited and slow (see the sketch below). It's not a good choice if we need to relate every point in space to every other point, with a visibility test in between. We cannot solve an O(n^3) problem using brute force. Or even if we could, we should choose the better approach just to save energy.
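For readers unfamiliar with the term, here is a minimal sketch of sphere tracing against an SDF (a single analytic sphere here, purely illustrative): each step advances by the distance the field reports, which is exactly the per-ray marching cost discussed above.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float length(Vec3 a)       { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Example signed distance field: a unit sphere at the origin. A real scene
// would combine many primitives or sample a stored distance volume instead.
static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

// Sphere tracing: step along the ray by the current distance to the surface.
// Returns true and the hit distance if the ray gets within 'eps' of the surface.
static bool sphereTrace(Vec3 org, Vec3 dir, float tMax, float& tHit) {
    const float eps = 1e-3f;
    float t = 0.0f;
    for (int i = 0; i < 128 && t < tMax; ++i) {   // fixed step budget per ray
        const float d = sceneSDF(add(org, mul(dir, t)));
        if (d < eps) { tHit = t; return true; }   // close enough: report a hit
        t += d;                                   // safe step: cannot skip past the surface
    }
    return false;
}

int main() {
    float t;
    if (sphereTrace({0.f, 0.f, -3.f}, {0.f, 0.f, 1.f}, 10.0f, t))
        std::printf("hit at t = %.3f\n", t);      // expect ~2.0 for the unit sphere
}
```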

Personal opinion - I've failed with volume data approaches, while others are still working on them and achieving results. Personal, and not meant as criticism of Claybook or SDFs in general. I'm only talking about their application to full GI.
For example, I like this work here, which seems to be a volume-based diffusion approach:
I experimented with this too, years ago, but the problem is: with reduced volume resolution, light leaks like crazy. Volume data is not a good approximation at low resolution (you cannot express multiple walls within a single voxel, not even a single wall well). Voxel Cone Tracing has the same problem; SDFs too.
Surfels can be tuned to cause over-occlusion instead, which is acceptable, and there is no global spatial limitation like a grid. At some point each approach breaks down, hopefully at a distance far enough from the camera. (See the Many LODs paper I mentioned in the other thread if you're interested:
)


So, that's it. I need to continue working now. It costs too much time to do introductions to GI ;) ... see ya!
 

For Quantum Break, Remedy chose not to use voxel cone tracing because of the wall problem...
 
CryEngine uses sparse voxel octree global illumination. It's hardly perfect but much better than nothing:

https://docs.cryengine.com/pages/viewpage.action?pageId=25535599
 
Porsches, Storm Troopers, and Ray Tracing: How NVIDIA and Epic are Redefining Graphics
October 1, 2018
We had the chance to ask Ignacio Llamas, Senior Manager of Real-Time Ray Tracing Software at NVIDIA, Juan Cañada, Ray Tracing Lead at Epic Games, and Francois Antoine, Director of Embedded Systems at Epic Games, some questions about the work they've done on ray tracing:
Question 1: If “Speed of Light” had been made through traditional techniques like light baking or lightmaps, how would the end result differ? How would the development time change?
Answer [Ignacio Llamas, NVIDIA]: The entire demo is about fully dynamic studio lighting with multiple area lights. Using lightmaps would not be an option for this kind of setup. When everything is dynamic, traditional rasterization-based techniques are simply insufficient to correctly capture area light shadows, correct artifact-free reflections and the diffuse light interactions. There was simply no way to do this without real-time ray tracing and the performance of Turing GPUs.

Question 2: “Speed of Light” delivered on the promise of photorealistic results from ray tracing. Can you give us a sense of how long it took to produce that clip? How big was the team that worked on it?

Answer [Ignacio Llamas, NVIDIA]: From a technology standpoint we started from where the “Reflections” demo left off — which means ray traced area lights, reflections and ambient occlusion. We had about three months to push the technology to the next level to meet the higher demand of fully dynamic ray traced lighting in this demo. To accomplish this we had about eight rendering engineers across NVIDIA and Epic involved to various degrees.

Answer [Francois Antoine, Epic Games]: The “Speed of Light” demo is actually made of two components — the cinematic ‘attract mode’ and the interactive lighting studio, and they both use the exact same vehicle asset. If I were to break it down by sub-project, we had two people working on the Speedster asset, three people working on the interactive lighting studio and about five people working on the cinematic. The production of the entire “Speed of Light” project took about eight weeks and happened in parallel with the development of new ray-traced rendering features.

Question 3: Is “Speed of Light” using cinematic-quality assets, or in-game quality assets?

Answer [Ignacio Llamas, NVIDIA]: The original CAD model is as detailed as you can get. The in-engine version, tessellated to either 10 or 40 million polygons is in the range that we can consider cinematic quality. In addition to the polygon count, the other thing that makes the model cinematic quality is the physically-based materials, which have an amazing amount of detail and closely match reference samples and car photography.

Answer [ Francois Antoine, Epic Games]: The Porsche Speedster asset used in the “Speed of Light” was directly tessellated in Unreal Engine’s DataSmith using Porsche’s actual CATIA CAD manufacturing files. The first iteration of the Speedster was 40 million polygons, which we then selectively re-tessellated down to just shy of 10 million polygons. Ignacio had surprised us by saying that this optimization would not significantly impact the performance when rendering using RTX and that proved to be the case. The project performance was very similar with either version of the car! This is a real plus for the visualization of large enterprise datasets.
...
Question 4: The materials in the demo were highly varied, with a strong focus on reflective and translucent surfaces… how did you build those materials for the demo?

Answer [ Francois Antoine, Epic Games]: Indeed, when we first got wind of Turing’s ray-tracing feature set, we immediately thought of an automotive-focused project. Cars are all about smooth, curvy reflections — what we call “liquid lines” in industry speak — and we couldn’t think of any other subject that could benefit more from Turing’s ray-tracing. In order to get these reflective and translucent materials to look as accurate as possible, we emphasize the use of high quality real-world reference, in some cases going as far as ordering car parts and disassembling them to better understand their internal structures and how they interact with light. This is exactly what we did with the Speedster’s tail lights — this new found understanding coupled with the more physically accurate behavior of ray-tracing allowed us to achieve much more realistic taillights than we previously could.

Question 5: Is the entire demo ray-traced, or have some rasterization techniques been used?

Answer [Juan Canada, Epic Games]: The demo uses hybrid techniques where a raster base pass is calculated first. On top of that, ray traced passes are launched to calculate complex effects that would be very hard to achieve with traditional raster techniques.

Question 6: There’s so much to take in watching the “Speed of Light” clip. Are there any little details in the sequence that people might be prone to miss? Which moments show off ray tracing most effectively?

Answer [Francois Antoine, Epic Games]: There is a lot more innovative tech there than you will notice — and that is a good thing! The tech shouldn't attract your attention, it should just make the image look more plausible. For example, the light streaks reflecting in the car are not coming from a simple texture on a plane (as would traditionally be done in rendering), but are instead animated, textured area lights with ray-traced soft shadows. This mimics how these light streaks would be created in a real photo studio environment, with light affecting both the diffuse and specular components of the car's materials and creating much more realistic light behavior. Oh, and it's amazing to finally have proper reflections on translucency, thanks to ray-tracing!
....
Question 9: From a game development perspective, what are the long-term advantages to supporting ray-tracing in your development pipeline today?

Answer [Juan Canada, Epic Games]: Ray tracing will not only allow us to simulate more sophisticated optical phenomena than what has been seen to date in real-time graphics; it also brings remarkable advantages to the workflow. Ray tracing is more predictable and generates fewer visual artifacts than other techniques. Also, code tends to be simpler: while there will be a transition period where mixing raster and ray tracing will require advanced machinery to get both worlds working together, in the long term ray tracing will lead to code that is easier to maintain and extend.

Question 10: Do you have suggestions for how developers can use rasterization and ray tracing together to maximize the efficiency of their art pipeline? It seems like we’re experiencing a best-of-both-worlds moment — rasterization is still great for its efficient geometry projection and depth calculation, while ray tracing makes lighting faster.

Answer [Ignacio Llamas, NVIDIA]: For developers, my advice is to use the best tool for each problem. Rasterization is great at handling the view from the camera with efficient texture LOD computation. As long as the geometric complexity is below some point, it is still the right answer. If the amount of geometry goes up significantly, ray tracing can be more efficient. Then use ray tracing to solve all those problems that it is best at, such as dynamic area light shadows, reflections, ambient occlusion, diffuse GI, translucency with physically correct absorption and refraction, or caustics. For artists, the choice regarding using rasterization or ray tracing may already be made for them by the engine developer. I think what’s important for artists and their pipeline is making sure their flows adapt to enable the best possible quality that can be achieved now that ray-tracing is enabling looks that were not possible before. This means learning about the range of options they have, such as correct area lights and reflections, and making informed decisions on material parameters based on this. It may also mean for example ensuring that materials have new physically based parameters, such as correct absorption and index of refraction, which may have been ignored before.




See link for more Q & A ....
https://news.developer.nvidia.com/nvidia-epic-games-and-real-time-ray-tracing/

 
Nice small devblog post with some non-triangle raytracing acceleration numbers:

https://devblogs.nvidia.com/my-first-ray-tracing-demo/#disqus_thread

Those shadows are sharp while moving around, not noisy. For this view I get about 60 FPS – lower than before, since I’m shooting three shadow rays instead of just the random one for the hemispherical light. If I zoom in, so that everything is reflecting and spawning a lot of reflection rays (and fewer shadow rays, as I don’t compute shadows for entirely reflective surfaces), it drops to no lower than 30 FPS.

I mentioned these stats to Tomas Akenine-Möller, who had just received an RTX 2080 Ti. I wasn’t sure this new card would help much. RT Cores accelerate bounding volumes, sure, but they also include a dedicated triangle intersector that won’t get used. He runs the demo at 173 FPS for the view above, and 80-90 FPS for a zoomed-in view, such as the one in figure 5:

The author's system had a Titan V, so it's 173 FPS on the 2080 Ti vs 60 FPS on the Titan V, or 80-90 vs 30 FPS in the second scene. Nearly 3x faster than the Titan V doesn't sound bad.
 
What's also interesting from that link is how easy it is to add soft shadows.
The images seen here are the results after a few seconds of convergence. The softer the shadow, the more rays that need to be fired, since the initial result is noisy (that said, researchers are looking for more efficient techniques for shadows). I did not have access in these demos to denoisers. Denoising is key for interactive applications using any effects that need multiple rays and so create noise.

What impressed me about adding soft shadows is how trivial it was to do. There have been dozens of papers about creating soft shadows with a rasterizer, usually involving clever sampling and filtering approximations, and few if any of them being physically accurate and bullet-proof. Adding one of these systems to a traditional renderer can be person-months of labor, along with any learning curve for artists understanding how to work with its limitations. Here, I just changed a few lines of code and done.
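As a rough sketch of what "a few lines of code" looks like in practice (illustrative only, with a hypothetical visible() stub standing in for the blog's actual shadow-ray trace): soft shadows fall out of averaging several shadow rays aimed at random points on the area light, and the ray count directly trades noise against penumbra quality.

```cpp
#include <random>

struct Vec3 { float x, y, z; };

// Stub visibility test: a real renderer would trace a shadow ray from 'p'
// toward 'target' through its acceleration structure and return whether the
// ray reaches the light unoccluded.
static bool visible(const Vec3& p, const Vec3& target) {
    (void)p; (void)target;
    return true;
}

// Fraction of a rectangular area light (corner + two edge vectors) visible
// from shading point 'p', estimated with 'numRays' shadow rays. More rays
// give a smoother penumbra; fewer rays give the noise that denoisers clean up.
static float areaLightVisibility(const Vec3& p, const Vec3& corner,
                                 const Vec3& edgeU, const Vec3& edgeV,
                                 int numRays, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    int unoccluded = 0;
    for (int i = 0; i < numRays; ++i) {
        const float u = uni(rng), v = uni(rng);   // random point on the light
        const Vec3 sample = { corner.x + u * edgeU.x + v * edgeV.x,
                              corner.y + u * edgeU.y + v * edgeV.y,
                              corner.z + u * edgeU.z + v * edgeV.z };
        if (visible(p, sample)) ++unoccluded;     // one shadow ray per sample
    }
    return float(unoccluded) / float(numRays);
}

int main() {
    std::mt19937 rng(1234);
    // Three shadow rays per shading point, as in the figures described above.
    const float vis = areaLightVisibility({0, 0, 0}, {-1, 5, -1}, {2, 0, 0}, {0, 0, 2}, 3, rng);
    return vis > 0.0f ? 0 : 1;
}
```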
 