Next gen lighting technologies - voxelised, traced, and everything else *spawn*

I'm not working in the game industry, but I assume your concerns don't apply to most.
Smaller indie teams use some U engine, and the programming effort spent on a U engine is shared by a whole lot of games. So it's worth it.
AAA studios use their own engines, with few programmers in comparison to many artists. Hiring one more programmer to focus on RT won't hurt the budget, but it helps them remain competitive. So it's worth it.

My feeling is you are overly optimistic about this, especially if the low level API were to require tweaks for every GPU configuration. There aren't that many games with ray tracing even now, when the barrier to entry is minimal from the API point of view.

Consoles could drive a low level API, as they are the environment where the platform owner might be willing to go all in on one specific piece of hardware for a very long period of time.
 
Went back through a couple of the posts but I'm still unclear.

When you say low level and high level API, what do you mean?
DX12 vs DX11?
DX12 vs an engine like Unity or Unreal?
 

Low level would be something where the developer has to implement their own BVH generation/update and traverse that BVH with their own logic. The current API is high level: BVH generation and traversal are black boxes. The developer just hands geometry to the API and a BVH appears. Similarly, the developer just shoots a ray against the BVH and gets results back without knowing how exactly the traversal is implemented. The current API is pretty much the simplest, easiest-to-adopt API that could have been designed, and yet there are very few ray tracing games available. My gut feeling is that raising the barrier to entry further would leave us with even fewer games with ray tracing.
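
To make the distinction concrete, here is a minimal CPU-side sketch (illustrative only; none of this is DXR or any real API). The high-level model hides everything behind one call, while the low-level model hands the developer the node layout and the traversal loop:

```cpp
#include <algorithm>
#include <vector>

struct Ray  { float o[3], d[3]; };
struct AABB { float lo[3], hi[3]; };
struct Node { AABB box; int left, right, prim; }; // prim >= 0 marks a leaf

// High-level model (roughly what DXR exposes today): build and traversal
// are black boxes inside the driver; the app only sees something like
//   Hit hit = TraceRay(accelerationStructure, ray);

// Low-level model: the developer owns the traversal. A ray/box slab test...
bool hitsBox(const AABB& b, const Ray& r) {
    float t0 = 0.0f, t1 = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.d[a];
        float tn = (b.lo[a] - r.o[a]) * inv;
        float tf = (b.hi[a] - r.o[a]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// ...and an explicit stack walk. Custom logic (LOD selection, streaming,
// skipping subtrees) could be inserted anywhere in this loop, which is
// exactly what the current black-box API forbids.
int traverse(const std::vector<Node>& bvh, const Ray& r) {
    int stack[64], sp = 0, hit = -1;
    stack[sp++] = 0; // node 0 is the root
    while (sp > 0) {
        const Node& n = bvh[stack[--sp]];
        if (!hitsBox(n.box, r)) continue;
        if (n.prim >= 0) { hit = n.prim; continue; } // leaf (no nearest-hit sort here)
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
    return hit;
}
```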
 

It depends on your requirements. If high level gives you all you need, you'll use it. But if not, working around limitations is more work and gives a worse result than having more options to choose from and doing the right thing from the start.
Adding more options to an API, or even offering a separate lower level API, does not change the entry barrier for those who decide to go higher level.
And whom are we talking about? It seems only two options are left: AAA, where experts won't care so much about ease of entry (don't confuse this with the effort to port an engine once from high to low level, which is an issue), or using a U engine, where you don't need to care at all. Who needs easy entry, aside from learning purposes?
 
Considering that it's only the RTX range of Nvidia GPUs that supports RT, I personally think there's been a reasonable number of games, and that will only expand once next gen comes out and RDNA2 cards are released.

These initial games were important, as a lot was learned from them that will feed directly into the games and engines that get RT implemented next.
 
It depends on your requirements. If high level gives you all you need, you'll use it. [...]

Try to think of this from the HW point of view. HW is not like software. You lock the architecture years before shipping, the design is ready way before the chip is in stores, and you have to balance the transistors used to fit what you predict the future will be. It's entirely possible to fck up the design and end up with a chip that doesn't work or works poorly, and then it takes 2-3 years before you can fix it in the next iteration. Or perhaps you are too early, and you only get complaints that the new features suck because there are no games, so please just make old games run faster. By the time the games are there, your HW is already obsolete, and you might have to support that obsolete thing for 10 more years.

Against that background it makes sense to balance risk and start simple, then evolve as needed. Once you give low level access, you have to support it forever, even if it turns out to have been a mistake. If you try to force a vendor specific API, the situation is even more unfortunate. HW takes many, many years from idea to shipping, and things don't happen like they do in the SW realm.
 

Well, all those years of development, the risk of failure in function or adoption, work on stuff that ends up a mistake... all of this really applies to me as a software developer just as much.
So we could look at this as a conflict of interest between software and hardware progress.
But in the end we all pull on the same rope, and it's requests for new features that often bring them into reality. So if many devs request features needed to implement LOD, and the HW could already support them, the risk of it turning out to be a mistake in the future is small. Deprecating and emulating always remain options too.
 
Blog post from the co-founder and Technical Director of Wolcen Studio.

The need for Lumen - Simon Majar

Damn, I'm more impressed with the raytraced test scene he set up than with Lumen, at least in terms of image quality. That's some amazingly detailed lighting; I'm kind of surprised I notice the extra detail even in the diffuse bounces, but I do. And a long-assed trace distance, so no worries in this scene about bouncing around corners/into interiors. Too bad about the performance.


But really, if one can set up a combination of tracing, tracing an SVO or distance field for most of the scene and only spawning triangle tracing ray threads when detail is actually going to be hit, you'd get the best of both worlds. E.g. what CryEngine is doing with their upcoming update this year, what they showed off for reflections with the Neon Noir demo. Reminder: while the PS5 was running the UE5 demo at 1440p 30, that demo was running detailed reflections on a Vega 56. Still, as long as UE5 finds some kind of workaround for Nanite, Lumen should be extensible to something like this as well.
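
For what it's worth, the hand-off described above fits in a few lines. Everything here is hypothetical (sceneSDF and traceTriangles are assumed stand-ins for a coarse distance field and an exact triangle tracer, and the constants are made up): sphere-trace the cheap representation through empty space, and only pay for triangle rays once the ray is close enough to a surface that fine detail matters:

```cpp
#include <cmath>

float sceneSDF(float x, float y, float z);                // assumed: coarse distance field
bool  traceTriangles(const float o[3], const float d[3],  // assumed: exact but expensive
                     float tNear, float tFar, float* tHit);

bool hybridTrace(const float o[3], const float d[3], float tMax, float* tHit) {
    const float detailRadius = 0.05f; // below this distance, the coarse SDF misses detail
    float t = 0.0f;
    while (t < tMax) {
        float p[3] = { o[0] + t*d[0], o[1] + t*d[1], o[2] + t*d[2] };
        float dist = sceneSDF(p[0], p[1], p[2]);
        if (dist < detailRadius)
            // Close to a surface: hand off to the precise (and costly) path,
            // but only for the short remaining interval around the hit.
            return traceTriangles(o, d, t, std::fmin(t + 10.0f * detailRadius, tMax), tHit);
        t += dist; // classic sphere-tracing step: safe to advance by the SDF value
    }
    return false; // empty space all the way: the triangle tracer was never invoked
}
```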

 
I find it kind of funny that just as hardware raytracing is trying to return 3D APIs to a higher level of abstraction, with the underlying computational resources hidden from developers (at least the way NVIDIA does it), Sweeney's long promised compute based rendering engine finally goes into production.
 
Although the weaknesses of Sweeney's compute based solution haven't been exposed yet, so we have no way to compare approaches. There is a lot missing from the demo; how do Nanite and Lumen cope with those things versus HW RTRT?
 
Although the weaknesses of Sweeney's compute based solution haven't been exposed yet, so we have no way to compare approaches.
You mean, other than the explicit "can't handle small geometry correctly"? I.e. don't get the wrong idea and think you can, for example, do a physically correct lampshade with Lumen, while that does work flawlessly with full RTRT. Small objects with luminous elements still need a baked light map. Only fixed scale global illumination is handled correctly.

Also, Lumen appears to operate on a world-aligned, static grid. It's unclear whether dynamic objects can even contribute to GI, and if they do, whether there is any significant cost to updating the illumination model on the fly.
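
As a thought experiment only (this sketches the general class of technique, not Lumen's actual data structure, which we don't know), a world-aligned irradiance grid makes both limitations easy to see: the fixed cell size fixes the scale of GI, and dynamic objects only matter if something re-injects them every frame:

```cpp
#include <cmath>

struct Grid {
    float origin[3];
    float cellSize;        // fixed world-space cell size => fixed GI scale:
                           // anything much smaller than a cell is invisible to GI
    int   dim;             // dim^3 cells
    const float* radiance; // RGB per cell, updated by some separate pass
};

// Nearest-cell lookup of bounced light at world position p.
void sampleGI(const Grid& g, const float p[3], float out[3]) {
    int idx[3];
    for (int a = 0; a < 3; ++a) {
        float c = (p[a] - g.origin[a]) / g.cellSize; // quantize to the static grid
        int i = (int)std::floor(c);
        if (i < 0) i = 0;
        if (i >= g.dim) i = g.dim - 1;
        idx[a] = i;
    }
    const float* cell = g.radiance + 3 * ((idx[2] * g.dim + idx[1]) * g.dim + idx[0]);
    out[0] = cell[0]; out[1] = cell[1]; out[2] = cell[2];
    // A moving character only affects 'radiance' if some pass re-voxelises it
    // into the grid each frame, which is exactly the update cost in question.
}
```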
 
The demo missed lots of dynamic elements like characters. How does it fare with trees with wind-blown branches? What about dynamic shaders? In Uncharted, a rock's appearance changes when it gets wet. That's not apparent in this demo during the run through the water. Is that just something that didn't make the cut, or can the engine not cope?

Having just watched the demo again, I've just realised the camera cuts! These give a good look at the LOD (which someone linked to earlier, but I didn't notice the context or insight). At 7:08 we cut to the statues and see this:

[attachment: screenshot of the statues at 7:08]

At 7:11, it's resolved to:

[attachment: screenshot of the statues at 7:11]

This is the LOD in action. I don't see it in any other cuts though, so perhaps at greater zoom-outs the lower LODs are sufficient? They also mention "no authored LODs" for the Z-brush statue, which leaves room for baked submeshes, or everything living in an object hierarchy.
 
Having just watched the demo again, I've just realised the camera cuts! These give a good look at the LOD (which someone linked to earlier, but I didn't notice the context or insight). At 7:08 we cut to the statues and see this:
Looks more like lighting detail accumulating in screen space than changing LOD?
There is a lot of the former to see: AO, shadows, and even bounce lighting fade in over a few frames.
 

Yes, someone else noted this is not LOD but missing shadows. One of the Epic guys said that without proper shadowing, geometry looks like normal maps.
 
Although the weaknesses of Sweeney's compute based solution haven't been exposed yet, so we have no way to compare approaches.
But that's not why it's funny.

Perhaps if Sweeney had moved to compute sooner, ray tracing would have been introduced differently. This isn't about ray tracing vs. rasterization... this is about finally climbing out of the hell hole that no-LOD triangle soups keep us stuck in (and which RTX, as it currently stands, cements in place).
 
Dynamic object GI contribution:

Before the final flying segment, the camera gets really close to the girl, sitting right behind her shoulder. There you can see some light bouncing off her shoulder onto her chin. I'm assuming that's all screen space though.

For a glimpse at curious GI artifacts:

Seconds before the aforementioned scene, when she is running towards the door to the outside, about to leave the statue room, look at the ceiling of the passageway she is running towards. You see some splotchy bounced lighting that resolves better as she approaches. The way it fades into place betrays the temporal accumulation, and the round, stain-like shape it forms tells me there's spatial-filtering denoising.

I also bet they integrated SS and whatever world space GI methods they have into a single ray trace, like they did for shadows, because that just seems like the most elegant way to do it without opening up a combinatorial explosion of hacks to solve double lighting/shadowing.
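
That fade-into-place is the signature of plain temporal accumulation: an exponential moving average between the current noisy sample and the reprojected history. A generic sketch (not Epic's actual filter; the blend factor is a typical made-up value):

```cpp
struct Color { float r, g, b; };

// Blend this frame's noisy GI sample with the reprojected history.
// The fade-in described above is exactly this blend converging over frames.
Color accumulate(Color history, Color current, bool historyValid) {
    // Low alpha = heavy reliance on history = smooth but slow to converge;
    // after a camera cut the history is invalid, so lighting visibly
    // "resolves" over roughly the next 1/alpha frames.
    const float alpha = historyValid ? 0.1f : 1.0f;
    return { history.r + alpha * (current.r - history.r),
             history.g + alpha * (current.g - history.g),
             history.b + alpha * (current.b - history.b) };
}
```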
 
Having just watched the demo again, I've just realised the camera cuts! These give a good look at the LOD. At 7:08 we cut to the statues... at 7:11, it's resolved. [...]

Thanks! And yeah, while it's the lighting that hasn't accumulated, I'd also say there's latency/lag for both geometry and texturing, worse than I thought too. Though maybe they can optimize it and shave a second or so off the resolve.
 
May 22, 2020
Until now, developers used another technique called rasterisation.
It first appeared in the mid-1990s, is extremely quick, and represents 3D shapes in triangles and polygons. The one nearest the viewer determines the pixel.
Then, programmers have to employ tricks to simulate what lighting looks like. That includes lightmaps, which calculate the brightness of surfaces ahead of time, says Mr Ronald.
But these hacks have limitations. They're static, so fall apart when you move around. For example you might zoom in on a mirror and find that your reflection has disappeared.
...
But with these workarounds, "pretty quickly you lose that realism in a scene," observes Kasia Swica, Minecraft's senior program manager, based in Seattle.

One "fiendish problem" for ray tracing has involved how shaders can call on other shaders if two rays interact, says Andrew Goossen, a technical fellow at Microsoft who works on the Xbox Series X.
GPUs work on problems like rays in parallel: making parallel processes talk to each other is complex.
Working out technical problems for improving ray tracing will be the main tasks "in the next five to seven years of computer graphics, at least," says Mr Ronald.

In the meantime games companies will use other techniques to make games look slicker.

Earlier this month Epic Games, the makers of Fortnite, released its latest game architecture, the Unreal Engine 5.
It uses a combination of techniques, including a library of objects that can be imported into games as hundreds of millions of polygons, and a hierarchy of details treating large and small objects differently to save on its demands on processor resources.

For most game makers such "hacks and tricks" will be good enough, says Mr Walton.
https://www.bbc.com/news/business-52541218
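
The article's "nearest one determines the pixel" rule is the classic depth buffer. A minimal sketch of the idea (generic, not any particular engine's implementation):

```cpp
#include <cstdint>
#include <limits>
#include <vector>

struct Framebuffer {
    int w, h;
    std::vector<float>    depth; // distance to the nearest surface seen so far
    std::vector<uint32_t> color;

    Framebuffer(int w_, int h_)
        : w(w_), h(h_),
          depth(w_ * h_, std::numeric_limits<float>::infinity()),
          color(w_ * h_, 0) {}

    // Called for every pixel each rasterised triangle covers.
    void writeFragment(int x, int y, float z, uint32_t rgba) {
        int i = y * w + x;
        if (z < depth[i]) {  // nearer than everything drawn before?
            depth[i] = z;    // yes: this triangle now owns the pixel
            color[i] = rgba;
        }                    // no: the fragment is simply discarded
    }
};
```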
 