Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Not sure it makes all that much sense given all the CPU cores most people have; it's not like audio takes significant processing power in this day and age.
 
Actually, the gaps between the leaves produce a pinhole effect:

[Attached photo: pinhole-effect shadows cast by sunlight through tree leaves]


A way more complicated task than just casting shadows. Maybe in 100 years, and for PC only...
Very easy and fast to fake, though.
Sample the shadow map, or ray trace, with the sample points distributed over whatever shape you want instead of the full disk of the sun. (Pretty sure it would not capture all the funky and subtle effects that happen with layers upon layers, though.)
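A rough sketch of what I mean, with a hypothetical traceShadowRay() visibility callback and made-up names - nothing here is from an actual engine, just the cone-sampling idea:

// Soft sun shadows by averaging visibility over sample points placed inside an
// arbitrary 2D "light shape" instead of the full solar disk.
// angularRadius is in radians (~0.0044 for the sun's ~0.25 degree half-angle).
#include <cmath>
#include <functional>
#include <utility>

struct Vec3 { float x, y, z; };

// Orthonormal basis around the (normalized) direction towards the sun.
static void buildBasis(const Vec3& d, Vec3& t, Vec3& b)
{
    t = std::fabs(d.z) < 0.9f ? Vec3{ -d.y, d.x, 0.0f } : Vec3{ 0.0f, -d.z, d.y };
    float len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = { t.x / len, t.y / len, t.z / len };
    b = { d.y * t.z - d.z * t.y, d.z * t.x - d.x * t.z, d.x * t.y - d.y * t.x };
}

// shapePoints: 2D offsets in [-1,1]^2 describing the light shape (full disk, crescent
// during an eclipse, whatever). traceShadowRay(origin, dir) returns true if unoccluded.
float sunVisibility(const Vec3& surfacePos, const Vec3& dirToSun, float angularRadius,
                    const std::pair<float, float>* shapePoints, int count,
                    const std::function<bool(Vec3, Vec3)>& traceShadowRay)
{
    Vec3 t, b;
    buildBasis(dirToSun, t, b);
    float visible = 0.0f;
    for (int i = 0; i < count; ++i)
    {
        // Tilt each shadow ray by a tiny angle so the ray set covers the light shape.
        float u = shapePoints[i].first  * angularRadius;
        float v = shapePoints[i].second * angularRadius;
        Vec3 dir = { dirToSun.x + u * t.x + v * b.x,
                     dirToSun.y + u * t.y + v * b.y,
                     dirToSun.z + u * t.z + v * b.z };
        if (traceShadowRay(surfacePos, dir))
            visible += 1.0f;
    }
    return visible / count; // fraction of the light shape that is unoccluded
}

For a shadow map the same offsets would just become filter taps instead of ray directions.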
Those are real-life photos, so how does real life fail to cast shadows?!

To compute the softening of shadows, you use the distance between the receiving point and the casting point, and the distance to the light source.
If the distance to the sun is so big that it can be ignored, that should make you realize how small the softening of shadows is on a sunny day without clouds.

Softening will happen mostly due to clouds.
From Earth, the sun is visible as a disk of ~0.5 degrees, without taking possible participating media into account.
So ~8 cm of blur at 10 meters distance from occluder to a surface directly facing the sun?
 
From Earth, the sun is visible as a disk of ~0.5 degrees, without taking possible participating media into account.
So ~8 cm of blur at 10 meters distance from occluder to a surface directly facing the sun?

I tried to verify:

// angular diameter of the sun (~0.5334 degrees), converted to radians
float solidAngle = 0.5334f / 180 * PI;
// penumbra width per unit of occluder-to-receiver distance
float penumbraWidthPerUnit = tan(solidAngle);
SystemTools::Log("penumbraWidthPerUnit = %f\n", penumbraWidthPerUnit); // prints 0.009310

So assuming the air widens the angle, a number of 0.01 is easy to remember :)

Seems right - the shadow of a 4 m high roof outside has an approx. 4 cm penumbra on the ground.

Edit: Actually the roof is more likely 3-3.5 m high, so 0.01 is still a bit too small?
Oops: we need to double the number, because the solid angle is given from axis to cone, so I get 0.018621.

Edit 2: Going out again - no, the first number fits what I see. I'll stick with 0.01 if I ever need it :)
 
Yes. The GI of Metro skips whole objects too.
Can you post some screenshots of the exact objects you mean? I have never once found a single object that skips GI.

I mean a direct screenshot of the object in question, preferably with objects around it that do contribute and accept GI, to show the contrast between an object that skips it and one that does not.
 
I did a lot of whining about missing functionality in current game APIs here. To correct myself, we have this now already: https://www.khronos.org/registry/vu...vkspec.html#vkCmdBeginConditionalRenderingEXT
It does not yet allow generating work on the GPU, but it does allow skipping over future commands that turn out to be useless during GPU execution (almost on par with what Mantle has).
This should give me most of the benefit I'm after. :love: :D I'll report what difference it makes; it should also help me with RT...
(Currently looking for a bug that has been causing random crashes for three full days now! Damn low-level APIs...)
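For reference, a minimal sketch of how that extension gets used - the device, command buffer, predicate buffer and pipeline are assumed to exist already, and only the entry points themselves come from the spec linked above, so take it as illustration rather than working engine code:

// Sketch only: assumes VK_EXT_conditional_rendering was enabled on the device and
// that 'device', 'cmd', 'predicateBuffer' and 'computePipeline' were created elsewhere.
#include <vulkan/vulkan.h>

void recordConditionalDispatch(VkDevice device, VkCommandBuffer cmd,
                               VkBuffer predicateBuffer, VkPipeline computePipeline)
{
    // Extension entry points are fetched at runtime.
    auto vkCmdBeginConditionalRenderingEXT = (PFN_vkCmdBeginConditionalRenderingEXT)
        vkGetDeviceProcAddr(device, "vkCmdBeginConditionalRenderingEXT");
    auto vkCmdEndConditionalRenderingEXT = (PFN_vkCmdEndConditionalRenderingEXT)
        vkGetDeviceProcAddr(device, "vkCmdEndConditionalRenderingEXT");

    // The GPU reads a 32-bit predicate from this buffer at execution time:
    // zero means the enclosed commands are skipped, non-zero means they run.
    // An earlier pass on the GPU can have written the predicate, which is exactly
    // the "skip commands that turn out useless during GPU execution" case above.
    VkConditionalRenderingBeginInfoEXT condInfo = {};
    condInfo.sType  = VK_STRUCTURE_TYPE_CONDITIONAL_RENDERING_BEGIN_INFO_EXT;
    condInfo.buffer = predicateBuffer;
    condInfo.offset = 0;
    condInfo.flags  = 0; // or VK_CONDITIONAL_RENDERING_INVERTED_BIT_EXT to invert the test

    vkCmdBeginConditionalRenderingEXT(cmd, &condInfo);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, computePipeline);
    vkCmdDispatch(cmd, 64, 1, 1); // skipped entirely if the predicate is zero
    vkCmdEndConditionalRenderingEXT(cmd);
    // Note: pipeline barriers recorded inside this scope are not predicated,
    // which is the limitation that comes up in the next post.
}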
 
...oh no. Conditional rendering includes compute dispatches, but again the limitation is that it does not include pipeline barriers.
No barriers, no control flow. So it's useless for me, and the whining stands. :(
 
NVIDIA has VK_NVX_device_generated_commands ... AMD only exposes something similar on the XBOX One I think (Frostbite devs mentioned it).
 
Yeah, Mantle solved this quite easily. Its conditional command buffer execution can even run a loop, so you could write a solver that breaks once the error is small enough. Impossible with VK (but DX12 can't do it either, to be fair).
I guess all those guys can't agree on more than a common denominator - how could they? Maybe they'll add an extension to the extension if I'm lucky, and in a few years we have a second OpenGL. :(

If it all goes wrong again with low-level APIs, what's next? Vendor APIs seem like the only option. DX etc. could still be maintained on top of those.
 
What I find most surprising about raytracing in games is that it still looks like games. Raytracing in offline rendering tends very much towards photorealism, but RT'd games aren't anything like that and instead look like current games with some improvements. I wonder what it'll take to change so that games start to look like RT'd CGI in terms of photographic quality?
 
That video honestly does not spend nearly enough time with the camera focusing on what the RT effects are doing in the game - I wonder if a programmer was responsible for the video or just someone else on the team/NV. Like in the area where they showed off reflections - they had the camera parallel to the ground and such. They should have had it look directly at the ground to show how it has off-screen reflections. And the diffuse GI portion should have had a side-by-side!

I think it will be quite the big difference in control.

If you have to specifically point out RTX effects because they aren't immediately noticeable, is it really that impressive? I.e., most people running around in a game are going to be looking parallel to the ground, not constantly staring directly at it.

That's been my experience. Currently, unless you are looking specifically for RT effects that you know are there, the impact in games (well, Metro: Exodus is the only one I've had direct experience with; my friend who has an RTX 2080 doesn't have BFV because he isn't into that type of game) isn't generally obvious except in relatively rare circumstances.

In other words, people who are expecting to see RT will look for it and find it impressive. People (like your average gamer) who aren't necessarily expecting it or don't know what to look for will have a hard time finding it uniformly better than the best rasterized games. There will be times when it's quite noticeable, but in general it's just kinda better, not to a really noticeable degree.

Regards,
SB
 
Some of the demos are very CGI though. Is it a lack of horsepower in the current RTX cards that's holding back the look, so demos can push more photorealism than games will ever get? Or a lack of ground-up design for RT, so rasterising is holding it back? Or devs just not knowing how to use it yet?

The one real major difference is lighting. RT games haven't got RT lighting to look like RT lighting. Some demos, including voxel lighting, have managed to get pretty close to CGI in lighting look, but these RT games aren't even close. The lighting in Quake II's latest is pretty good, though. That one is perhaps let down by the materials. However, towards the end it looks pretty mediocre.

 
It probably takes time to get the art looking good, just like it took time for devs to go beyond the "UE3" standard / programmer-art look.
 
What I find most surprising about raytracing in games is that it still looks like games. Raytracing in offline rendering tends very much towards photorealism, but RT'd games aren't anything like that and instead look like current games with some improvements. I wonder what it'll take to change so that games start to look like RT'd CGI in terms of photographic quality?
Probably because most offline rendering is meant for static scenes where everything is tailored for a single, well-defined camera framing.
 
Looking at the UE4 RT video, I think it exposes the performance limitations well, but also the software limitations.
UE is a very traditional engine, and although there has been remarkable progress in TAA, PBS, tools and ease of use, I do not expect wonders with performance and raytracing. We can surely expect a bit more from Frostbite, and even more from 4A, because those guys do not need to maintain compatibility with thousands of users and their workflows.
On one hand, UE / Unity being so easy to use nowadays helps with adoption; on the other hand, those engines likely make it hard to fully utilize the potential of such new hardware, even if there were a big market. I expect more from custom engines, and I hope they will still dominate AAA game development.

Looking at the UE4 GI implementation, it is too slow to be used in games, but it is also the only correct one using RTX we have seen in a game engine so far. They say it needs faster hardware and is a thing for the future. In the way they implement it, this is surely true.
Without GI there is zero reason to expect photorealism. Soft shadows and reflections do not help with this, even if they add a wow here and an ohhh there. It will keep looking like games.

So the question is still: Can we solve the GI problem, now, using RTX?
Sure! But can we do it fast enough?

Morgan McGuire has announced this, but I think it has not been shown broadly yet: https://morgan3d.github.io/articles/2019-04-01-ddgi/
More pictures here, and he says the probes can be calculated in 1 ms: https://twitter.com/casualeffects (I don't know anything about lag or scene-size limits, but that seems like great performance!)
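To illustrate what such a probe grid roughly does at shading time, here is a heavily simplified sketch I put together: a single irradiance value per probe, blended with trilinear weights plus a normal-facing term. This is not McGuire's DDGI code, and it omits what makes DDGI robust (octahedral-encoded probe textures and the visibility / Chebyshev test against light leaking):

// Simplified illustration of a probe-grid irradiance lookup (NOT the real DDGI code).
// Assumes a dense grid of already-updated irradiance probes; all names are made up.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

struct ProbeGrid
{
    Vec3  origin;      // world-space position of probe (0,0,0)
    float spacing;     // distance between neighboring probes
    int   nx, ny, nz;  // grid resolution
    Vec3* irradiance;  // one irradiance value per probe (flattened, x-major)

    Vec3 probeIrradiance(int x, int y, int z) const
    {
        x = std::clamp(x, 0, nx - 1); y = std::clamp(y, 0, ny - 1); z = std::clamp(z, 0, nz - 1);
        return irradiance[(z * ny + y) * nx + x];
    }

    // Diffuse GI at a surface point: trilinear blend of the 8 surrounding probes,
    // each down-weighted if it lies behind the surface normal.
    Vec3 sampleIrradiance(Vec3 p, Vec3 n) const
    {
        float gx = (p.x - origin.x) / spacing;
        float gy = (p.y - origin.y) / spacing;
        float gz = (p.z - origin.z) / spacing;
        int ix = (int)std::floor(gx), iy = (int)std::floor(gy), iz = (int)std::floor(gz);
        float fx = gx - ix, fy = gy - iy, fz = gz - iz;

        Vec3 sum = { 0, 0, 0 };
        float wsum = 0.0f;
        for (int c = 0; c < 8; ++c)
        {
            int ox = c & 1, oy = (c >> 1) & 1, oz = (c >> 2) & 1;
            // trilinear weight of this corner probe
            float w = (ox ? fx : 1 - fx) * (oy ? fy : 1 - fy) * (oz ? fz : 1 - fz);

            // direction from surface point to probe; probes behind the surface get less weight
            Vec3 probePos = { origin.x + (ix + ox) * spacing,
                              origin.y + (iy + oy) * spacing,
                              origin.z + (iz + oz) * spacing };
            Vec3 d = { probePos.x - p.x, probePos.y - p.y, probePos.z - p.z };
            float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z) + 1e-6f;
            float facing = std::max(0.0f, (d.x * n.x + d.y * n.y + d.z * n.z) / len);
            w *= facing + 0.05f; // small bias so the weights never all collapse to zero

            sum = add(sum, mul(probeIrradiance(ix + ox, iy + oy, iz + oz), w));
            wsum += w;
        }
        return mul(sum, 1.0f / std::max(wsum, 1e-6f));
    }
};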

Does it look real? Nope. Can it be improved if we use 5 or 10ms? Will it look realistic? I don't know.

But after one year of RTX, and not really any news about it at GDC, we likely have no reason to think wonders will happen.
RT is the future, as it always was. That's fine. But the assumption that it would bring realism may be much more enthusiasm and wishful thinking than ground truth. I think many people, including myself, are currently in the process of realizing this.

I'm still optimistic that realism is doable right now, but I am (or was) willing to sacrifice a lot of detail. RT has raised the bar regarding those details. My definition of realism may no longer be enough to fulfill people's expectations.
Also, we see great demos each year at GDC or elsewhere - demos that show IQ that is not practical to achieve in games. This is how the industry shoots itself in the foot, because in the long run it can only cause disappointment after a short wow.
It is very bad marketing. And the same applies to the suggestion 'rasterization is wrong and RT is right', which seems to come up in an unintended way. Both are useful tools, but neither is the solution for realistic realtime graphics. 'It just works' is just wrong.

But we'll get there... :D
 
What I find most surprising about raytracing in games is that it still looks like games. Raytracing in offline rendering tends very much towards photorealism, but RT'd games aren't anything like that and instead look like current games with some improvements. I wonder what it'll take to change so that games start to look like RT'd CGI in terms of photographic quality?

The CGI industry has had decades of synthetic-looking ray traced images.
The modern photorealistic look of current CGI is a result of PBR, which games already approximate, and ray-traced path tracing, shooting thousands of rays per pixel. With the great strides we've made in high-quality spatio-temporal denoising, we might be able to approach the same perceptual result with just a couple dozen rays per pixel, but we ain't even there yet. That's probably just a generation away though. Fingers crossed.
 
That is what I believe, too. We are probably one or two generations away from perceptually photo-realistic real-time rendering.

Also, most photo-realistic CGI consists of overlays rendered on top of real footage. And it is not real-time! That makes things "easier", let's say.

I remember when the first low-poly Gouraud-shaded toruses appeared... then two generations later, scenes with thousands of Gouraud-shaded polygons were commonplace.

Seems like we will see this happening again...
 
Also, most photo-realistic CGI consists of overlays rendered on top of real footage. And it is not real-time! That makes things "easier", let's say.
Yeah, most scenes are composites too: they render the environment and characters separately and composite them on top of each other later; many scenes are even retouched frame by frame in a post-process manner.

Games will continue to look like games as long as we are in the hybrid RT-Rasterization era, IMO.
 