Next gen lighting technologies - voxelised, traced, and everything else *spawn*

The Dreams approach certainly has benefits but performance isn't one of them. You say huge worlds but so far we've only seen small scenes (AFAIK). As you say it's noisy and doesn't support the standard PBR workflow. It works for what it's trying to accomplish but I wouldn't recommend it for the majority of games.

That's not quite accurate. The performance of splatting is totally dynamic. You can think of it as a form of rasterization, but with a built-in LOD mechanism.
In Dreams they do this by generating points on the surface with a uniform but irregular distribution. At distance they just skip over a subset of the points. (Instead of drawing triangles, think of drawing only the texels of their textures as points; at distance you go up one mip level, so the point count drops to a quarter.)
They have videos where people copy-paste parts of the scene to build a city in very little time, with seemingly little FPS drop. (I'm puzzled by that myself, because I doubt they use any form of hidden surface removal - just insane brute-force compute power?)
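The mip-style point skipping described above can be sketched in a few lines. This is my own illustration, not code from the Dreams paper: the function name and the assumption that points are stored pre-shuffled (so any prefix is a uniform-but-irregular sampling) are mine.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of mip-style point LOD: if points are stored
// pre-shuffled so that any prefix of the array samples the surface
// uniformly, then at mip level m we simply draw the first count/4^m
// points, mirroring how each texture mip has a quarter of the texels
// of the previous one.
std::size_t pointsToDraw(std::size_t totalPoints, unsigned mipLevel) {
    std::size_t n = totalPoints;
    for (unsigned m = 0; m < mipLevel && n > 0; ++m)
        n /= 4; // each level keeps a quarter of the points
    return n > 0 ? n : 1; // always keep at least one point
}
```

Selecting a LOD is then just picking `mipLevel` from the projected screen size and drawing a prefix of the point array - no mesh simplification or stitching involved.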

Having no support for LOD is really the main limitation of triangle-based rasterization and RT. Computer graphics is mainly about two problems: visibility and LOD. The rise of GPUs has pushed the latter out of our attention a bit, but it is still important, especially if we aim for GI in realtime.
This is the main reason why I am not so convinced about fixed-function hardware: LOD is an open problem everywhere, and solving it always conflicts with FF HW.

PBR works with splatting without any issue. I only meant that for Dreams they did not do it, because the content is made by the players, who would not want to place environment probes manually, and the devs have very different goals for the art style anyway (painterly).
If you read the paper, the programmer experimented with all kinds of awesome high tech, but it was the artist who pushed him towards 'boring' splatting, which finally gave the artistic results as intended.


I agree it's not meant to replace triangles for games yet, but at increasing detail levels it would beat triangle rasterization at some point without any doubt. (A graph of triangles is a very complex data structure compared to a point hierarchy, but both are just approximations.)
I expected rasterization HW to become deprecated and finally removed from GPUs. Just compute would remain, and texture filters of course. No limitations. I still think it will happen this way, and that RT cores will disappear again too... but I see it will take much longer than I hoped for :)
I might sound unrealistic here, but we made raster HW to put triangles on screen fast. Now that is the smallest problem we have. For GI we need to 'render' the scene from any point, not just from the eye. We have a very different problem now than we had 20 years ago.
 

I can't find the tweet now, but it seems Dreams rendering has evolved: point splatting had too many holes, so now they mix point splatting and raymarched cubes (it was a tweet by Alex Evans).

And it seems they use hybrid raytraced shadows / shadow maps.
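For readers unfamiliar with raymarching against distance fields, here is a minimal sphere-tracing sketch against a single cube SDF. The scene and all names are made up for illustration; Dreams' actual renderer is of course far more involved.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Signed distance from p to the surface of an axis-aligned cube
// centred at the origin (the standard box-SDF formula).
static float cubeSDF(Vec3 p, float halfSize) {
    float qx = std::fabs(p.x) - halfSize;
    float qy = std::fabs(p.y) - halfSize;
    float qz = std::fabs(p.z) - halfSize;
    float ax = std::max(qx, 0.0f), ay = std::max(qy, 0.0f), az = std::max(qz, 0.0f);
    float outside = std::sqrt(ax*ax + ay*ay + az*az);
    float inside  = std::min(std::max(qx, std::max(qy, qz)), 0.0f);
    return outside + inside;
}

// March along origin + t*dir, advancing by the SDF value each step
// (safe, because the SDF bounds the distance to the nearest surface).
// Returns true and the hit distance t if we get within eps of it.
static bool sphereTrace(Vec3 origin, Vec3 dir, float& t) {
    t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p{origin.x + t*dir.x, origin.y + t*dir.y, origin.z + t*dir.z};
        float d = cubeSDF(p, 1.0f);
        if (d < 1e-4f) return true; // close enough: hit
        t += d;
        if (t > 100.0f) break;      // escaped the scene
    }
    return false;
}
```

A ray starting at z = -5 and pointing at a unit-half-size cube converges in very few steps, which is what makes distance-field marching attractive for secondary rays and shadows.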

 
point splatting had too many holes, so now they mix point splatting and raymarched cubes
Uuuh... I remember the imperfect shadow maps / many-LODs papers proposed just hacky solutions for this as well. :(
until it's released in a matter of weeks
I have a better solution, and Alex Evans might be interested in discussing it. (I would just share it - I need some contact with the game industry anyway.)
How could I approach him quickly? Filling out a job-application form on the MM website? Creating a Twitter account and sending him a PM? Would that work?
 
You might have a better solution, but newbies talking to top-of-the-field experts saying, "I'm better than you," never goes down well. A tweet will likely be read, but as an unknown with no prior work and no links to a blog showing pioneering tech etc., you'll probably get overlooked in all the noise. Openly discussed tech will not be patentable, and ideas are generally shared freely to advance the industry.

You need a demo. You then need to work hard getting anyone to even look at your demo. If it looks unbelievable, you may be lucky and get plenty of attention from industry vets calling it out as BS. But eventually you'll get attention, and then you can try to do something with it: either starting a company to license the tech (Euclideon's "Unlimited Detail", which has ended up in arcade machines), or securing gainful employment with a dev or engine.

I recently watched a film, "The Man Who Knew Infinity", on Netflix, about an unknown mathematician from India who far excelled his peers in ability, but of course no-one would listen to him. He had to slowly prove himself (and died from TB before he ever had the chance...). Having an idea is a tiny part of getting anywhere. It's far more about salesmanship and making noise. There are plenty of folk who have done well with just salesmanship and noise without any meaningful inventiveness, and many, many great idea men who've faded into obscurity unheard.
 

And another thing: Alex Evans did not work on the rendering for the last two years; he was working on the audio part (audio tools). Simon Brown was refining the rendering, and I think they used raymarched cubes mixed with point splatting because that was used in one of the failed prototypes.
 
The devil is in the details. A lot of ideas get discarded due to details or practicalities. When you need a team of hundreds of people to create a triple-A game, the constraints are very different from those of a lone wolf doing tech at home. There is a lot of inertia from existing solutions, and changes happen slowly, especially when things need to ship on a certain date for the company to make money.

I wonder if ray tracing could gain popularity on the artist-productivity benefit alone, even if some performance were lost along the way.
 
nailed it!
I just want to second this commentary: if you really want to profit from these concepts, you need to expose them fully for criticism and peer evaluation. The value of the technology is _not_ in the technology itself, but in the experience you gain developing it. This is why certain studios/teams keep getting funded: the expertise and knowledge they have in the field is worth more than the product itself. That is where your worth is. If you cling to the idea alone, you miss the fact that it's execution that makes a great product, and to execute well you need the experience of others in the field who have executed a great deal, to see where your product may not address the core concerns of other developers.
 
Ok, thanks for all this, guys! It makes sense, and it's not so different from the industry I come from.
Working hard on the demo, and the rest will then be hard too... :)
 
The fact is we haven't seen anything from Dreams that can rival the best of the AAA space in terms of fidelity. All they've shown are very small, limited scenes. Also, NVIDIA experimented with a fully programmable rasterization pipeline a few years ago, and it turned out to be an order of magnitude slower than the fixed-function pipeline.
 
Another thing: after the Claybook announcement, Alex Evans said it could have been interesting to try going full compute raytracing, but I think they did not have the time to try it.

EDIT: It was compatible with the user-created-content approach, unlike the current rasterization hack...
 
The fact is we haven't seen anything from Dreams that can rival the best of the AAA space in terms of fidelity.
Really a matter of opinion. They can do many things impossible for Call of Duty, and the other way around. But we will never see a fair comparison:
Dreams has no occlusion system; CoD has Umbra or something.
User-generated content <-> professional artists.
Both have completely different visual goals.

NVIDIA experimented with a fully programmable rasterization pipeline a few years ago and it turned out to be an order of magnitude slower than the fixed function pipeline
Likely they did triangles?
The point is: splatting a pixel is just a single atomic op. Try to render that many points by drawing a triangle for each - would that make sense? No, and the same holds the other way around.
Dreams has proven it can fill the whole screen with individual points. One could add PBR and occlusion culling without issues.
Do you think we need HW triangle rasterization forever, even though filling the screen is no longer our primary performance problem?
If so, why do you think NV does such experiments at all? Maybe they are preparing for the upcoming need to emulate it?
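To make the "single atomic op" point concrete, here is a hedged CPU stand-in for the usual GPU trick: pack depth into the high bits and colour into the low bits of one 64-bit word, so a single atomic min performs the depth test and the colour write together (compute shaders do the same with a 64-bit atomicMin on an image; the class and names here are mine, for illustration only).

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <vector>

// One 64-bit word per pixel: depth in the high 32 bits, colour in the
// low 32 bits. Because depth occupies the high bits, an ordinary
// integer min on the packed value keeps the nearest splat.
struct SplatBuffer {
    std::vector<std::atomic<uint64_t>> pixels;
    explicit SplatBuffer(std::size_t n) : pixels(n) {
        for (auto& p : pixels) p.store(~0ull); // far depth, no colour
    }
    // Depth test + colour write as one atomic min (emulated via CAS).
    void splat(std::size_t idx, uint32_t depth, uint32_t color) {
        uint64_t packed = (uint64_t(depth) << 32) | color;
        uint64_t cur = pixels[idx].load();
        while (packed < cur &&
               !pixels[idx].compare_exchange_weak(cur, packed)) {}
    }
    uint32_t color(std::size_t idx) const {
        return uint32_t(pixels[idx].load() & 0xffffffffu);
    }
};
```

Splatting a point is then one packed atomic per pixel, regardless of how many points contend for it, which is why point budgets scale so directly with compute throughput.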

I mean, we're really offtopic now, and a thread about Dreams will likely come up. Also, I'm not one of those who want to see triangles replaced. But it IS a good example of how FF HW always becomes questionable, at least after some time.
Claybook is another. Its performance is jaw-dropping. And you could add stuff to this as well...

it could have been an interesting solution to try go full compute raytracing
Hmmm... would this make cone tracing easy? I don't like raymarching, but I want cone tracing... I need to think about this...
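For context, the standard trick that makes cone-like queries cheap over an SDF is to compare the field value against the cone's radius while marching and keep the running minimum of their ratio, as in SDF soft shadows. A toy sketch, with a made-up one-sphere scene and hypothetical names:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Stand-in scene: a single sphere of radius 1 centred at (0, 0, 5).
static float sceneSDF(float x, float y, float z) {
    float dz = z - 5.0f;
    return std::sqrt(x*x + y*y + dz*dz) - 1.0f;
}

// March a cone from (ox,oy,oz) along (dx,dy,dz). At parameter t the
// cone's radius is t * tanHalfAngle; the ratio d/radius estimates how
// much of the cone's cross-section is still clear. Returns 1 for a
// fully unoccluded cone, down to 0 for fully blocked.
static float coneOcclusion(float ox, float oy, float oz,
                           float dx, float dy, float dz,
                           float tanHalfAngle, float maxT) {
    float occ = 1.0f;
    float t = 0.05f; // start slightly off the apex
    for (int i = 0; i < 64 && t < maxT; ++i) {
        float d = sceneSDF(ox + t*dx, oy + t*dy, oz + t*dz);
        float radius = t * tanHalfAngle;
        occ = std::min(occ, std::max(d, 0.0f) / radius);
        if (occ <= 0.0f) return 0.0f; // cone fully blocked
        t += std::max(d, 0.01f);      // usual SDF safe step
    }
    return occ;
}
```

The appeal is that one march gives an area query (soft shadow, rough occlusion) at roughly the cost of a single ray, with no extra samples.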
 
Sure, the Dreams approach has its benefits, but let's not pretend it doesn't have a huge cost.

Maybe NVIDIA did the experiment to prove how important fixed function is for performance :p
 
Maybe NVIDIA did the experiment to prove how important fixed function is for performance :p
Hihi, sure! So that's why they ended up 10x slower ;)
But seriously - why would they want to, when they have the lead in HW rasterization? No no, wait 10 years, and then they will say: "We have 10 years of experience in removing rasterization HW. See our new invention: SSS 5080 Ss" :D
 
The fact is we haven't seen anything from Dreams that can rival the best of the AAA space in terms of fidelity. All they've shown are very small limited scenes. Also, NVIDIA experimented with a fully programmable rasterization pipeline a few years ago and it turned out to be an order of magnitude slower than the fixed function pipeline.
I didn't reply before because this thread is getting way OT, I'm not sure how to manage it, and I wanted to avoid more noise. Dreams has shown large levels in races. We also know the maths of SDFs means levels can be huge. If levels are constrained in Dreams, it's likely a design choice so users aren't overwhelmed.

Sure, the Dreams approach has its benefits but lets not pretend it doesn't have a huge cost.
What costs? The main issue voiced so far has been a lack of tools. We need the game to compare performance.
 
Hihi, sure! So that's why they ended up 10x slower ;)
But seriously - why would they want to, when they have the lead in HW rasterization? No no, wait 10 years, and then they will say: "We have 10 years of experience in removing rasterization HW. See our new invention: SSS 5080 Ss" :D
You never know.

I didn't reply before because this thread is getting way OT, I'm not sure how to manage it, and I wanted to avoid more noise. Dreams has shown large levels in races. We also know the maths of SDFs means levels can be huge. If levels are constrained in Dreams, it's likely a design choice so users aren't overwhelmed.

What costs? The main issue voiced so far has been a lack of tools. We need the game to compare performance.
Where's the footage of those large levels? If proof is presented I'll concede the point.
 
I just tried the "Insane" SSR reflections in Gears 4 on my 2080. @1440p Ultra graphics, my fps got slashed instantly from 102 to 60! And for what? Insane just expands the number of reflected objects and affected surfaces; it also increases the resolution of the reflections and tries to simulate them realistically in water. Nothing earth-shattering, and they are still SSR.

Worse yet, in areas with a lot of puddles I can't maintain a locked 60fps @1440p! Meanwhile I can do 1440p60 @Ultra DXR in many areas with heavy reflections in Battlefield's Rotterdam map. This just shows how much rasterization has hit a wall in terms of extracting image quality from a given performance target. Many effects exert a massive toll on the hardware without a proportionate increase in IQ.
 
Where's the footage of those large levels? If proof is presented I'll concede the point.
Most of their videos are on their Twitch channel: twitch.tv/media_molecule

I don't have the time to watch them all again now, but I think that in videos such as World Building, Little Big Planet and Made by the Molecules you can find relatively big levels, such as the asteroid game. Also, there are videos where they begin with a little area, then zoom out a lot and start pasting that same area over and over to create a huge one, and they do it without apparent slowdowns.
 
Where's the footage of those large levels? If proof is presented I'll concede the point.
Unless the track is a tiny loop, it has to be a far bigger level. But as I said, even if the levels in Dreams are constrained, that doesn't mean the tech is.
 