I keep mentioning this, and I don't think it has really registered with the tech press/common folks yet, but... building BVHs and TLASs for large, dynamic games is currently still an unsolved problem. The limitations I foresee here have a lot less to do with tracing and shading rays and a whole lot more to do with maintaining these expensive data structures. Any problems that Nanite has with various forms of geometry, raytracing has even worse.
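To make the maintenance cost concrete, here is a toy sketch of just the front end of a per-frame top-level rebuild (illustrative types only, nothing from a real driver or engine): every instance has to be touched to get world-space bounds and a spatial sort key before a single node can be emitted, so the work scales with total instance count rather than with what survives culling.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy front end of a per-frame TLAS rebuild (illustrative only): transform every
// instance's bounds, derive a Morton key from the centroid, sort. The point is
// that this touches *all* instances every frame, unlike raster where culling
// shrinks the working set before any heavy lifting happens.
struct Aabb { float min[3], max[3]; };

struct Instance {
    Aabb  localBounds;    // bounds of the BLAS this instance references
    float transform[12];  // 3x4 object-to-world matrix (row-major)
};

// Conservative world-space bounds of a transformed AABB.
Aabb TransformBounds(const Aabb& b, const float m[12]) {
    Aabb out;
    for (int row = 0; row < 3; ++row) {
        float lo = m[row * 4 + 3], hi = lo;  // start from the translation
        for (int col = 0; col < 3; ++col) {
            float a = m[row * 4 + col] * b.min[col];
            float c = m[row * 4 + col] * b.max[col];
            lo += std::min(a, c);
            hi += std::max(a, c);
        }
        out.min[row] = lo;
        out.max[row] = hi;
    }
    return out;
}

// Spread the low 10 bits of v two positions apart (standard Morton helper).
uint32_t ExpandBits(uint32_t v) {
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// Per-frame work before any tree can be built: O(N) transforms + O(N log N) sort.
// (A real build would carry instance indices along with the keys.)
std::vector<uint32_t> BuildTlasSortKeys(const std::vector<Instance>& instances) {
    std::vector<uint32_t> keys(instances.size());
    for (std::size_t i = 0; i < instances.size(); ++i) {
        Aabb w = TransformBounds(instances[i].localBounds, instances[i].transform);
        uint32_t code = 0;
        for (int axis = 0; axis < 3; ++axis) {
            // Centroid quantized to 10 bits; a real build normalizes to scene bounds.
            float centroid = 0.5f * (w.min[axis] + w.max[axis]);
            uint32_t q = static_cast<uint32_t>(std::min(std::max(centroid, 0.0f), 1023.0f));
            code |= ExpandBits(q) << axis;
        }
        keys[i] = code;
    }
    std::sort(keys.begin(), keys.end());
    return keys;
}
```

Refitting an existing tree instead of rebuilding helps, but quality degrades for anything that moves a lot, which is exactly the "large, dynamic" case.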
I'm not sure if Cyberpunk is actually tracing primary rays even on "path tracing" mode... it's kind of unclear, since some of the tech press around it and the SIGGRAPH presentation contradict each other in multiple places.
To be clear, that's not what I was saying. The look is achieved by the shaky cam, independent hand and head movement, and amazing animations. But IMO it's built on a foundation of mouse controls. Because without that, the basic camera movement itself would be far too smooth, even, and slow to be convincing as a true body cam, regardless of how much you shake it.
> Does anyone happen to know of a game that manages hundreds of thousands or millions of instances with RT currently?

The Matrix city demo, Dying Light 2, The Witcher 3, Cyberpunk 2077, Icarus, Metro Exodus, Watch Dogs Legion, Crysis 1/2/3 Remastered, Spider-Man Remastered, Spider-Man Miles Morales, Chernobylite, Hitman 3, A Plague Tale: Requiem, Warhammer 40K: Darktide, Far Cry 6, Battlefield 2042, and Hogwarts Legacy are the current games that did RT well on either open worlds or semi-open very large levels.
> The Matrix city demo, Dying Light 2, The Witcher 3, Cyberpunk 2077, Icarus, Metro Exodus, Watch Dogs Legion, Crysis 1/2/3 Remastered, Spider-Man Remastered, Spider-Man Miles Morales, Chernobylite, Hitman 3, A Plague Tale: Requiem, Warhammer 40K: Darktide, Far Cry 6, Battlefield 2042, and Hogwarts Legacy are the current games that did RT well on either open worlds or semi-open very large levels.

That's not the question though... none of these games actually have anywhere near that complexity in the raytracing structure. They use simplified scenes with fairly aggressive HLOD to get down to reasonable numbers because there's just no way (at least on PC currently) to handle those instance counts dynamically in DXR right now. This works well for secondary effects but is obviously not good enough for primary visibility and probably not good enough for direct shadowing.
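As a sketch of the kind of trade-off involved (names like `Cluster` and `kProxyDistance` are made up for illustration, this is not any engine's actual API): the ray-traced scene swaps whole clusters of instances for a single merged proxy once they are far enough away, so the TLAS sees thousands of entries instead of millions, while the raster path can keep drawing full detail.

```cpp
#include <vector>

// Hypothetical grouping of many source instances under one merged proxy mesh.
struct Cluster {
    std::vector<int> sourceInstances;  // full-detail instances (could be thousands)
    int              proxyInstance;    // one merged, simplified stand-in
    float            distanceToCamera; // updated per frame
};

// Illustrative threshold: beyond this distance the RT scene only sees the proxy.
constexpr float kProxyDistance = 50.0f;

// Decide what actually goes into the TLAS this frame. The raster path might
// still draw full detail (e.g. via Nanite), which is exactly why RT shadows
// and reflections can mismatch primary visibility.
std::vector<int> SelectRtInstances(const std::vector<Cluster>& clusters) {
    std::vector<int> rtInstances;
    for (const Cluster& c : clusters) {
        if (c.distanceToCamera > kProxyDistance) {
            rtInstances.push_back(c.proxyInstance);          // 1 TLAS entry
        } else {
            rtInstances.insert(rtInstances.end(),
                               c.sourceInstances.begin(),
                               c.sourceInstances.end());     // N TLAS entries
        }
    }
    return rtInstances;
}
```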
Fortnite also does it on a very large map.
> The marketing and technical materials that I've seen have been consistent on that point. Primary visibility is rasterized to a gbuffer. However light sampling for shading those primary visibility hits is done via raytracing. Secondary bounces are raytraced as well. What were the contradictions that you noticed?

I mean... calling it "path tracing" in the first place is a bit weird if primary visibility is still rasterized. And then I'm even more confused that folks are trying to draw a fundamental distinction between what that path is doing and what something like Lumen and other GI systems are doing.
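For reference, the frame structure that quoted description implies looks roughly like this (placeholder types and function names, my paraphrase rather than anyone's actual code):

```cpp
// Sketch of the hybrid frame structure described above: raster primary
// visibility, ray trace light sampling and secondary bounces. All types and
// function names here are placeholders, not a real engine API.
struct GBuffer   { /* depth, normals, albedo, roughness from the raster pass */ };
struct Reservoir { /* per-pixel light sample kept by ReSTIR-style resampling */ };
struct Frame     { /* per-pixel radiance, pre-denoise */ };

GBuffer   RasterizePrimaryVisibility() { return {}; }         // no rays cast here
Reservoir ResampleDirectLights(const GBuffer&) { return {}; } // RT visibility rays to chosen lights
Frame     TraceSecondaryBounces(const GBuffer&, const Reservoir&) { return {}; } // RT GI/reflections
Frame     Denoise(const Frame& f) { return f; }

Frame RenderFrame() {
    GBuffer   g = RasterizePrimaryVisibility();   // "path tracing" mode still starts here
    Reservoir r = ResampleDirectLights(g);        // direct lighting via ray traced sampling
    Frame     f = TraceSecondaryBounces(g, r);    // indirect bounces are genuinely ray traced
    return Denoise(f);
}
```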
> None of these games actually have anywhere near that in the raytracing structure.

Does that include the Matrix City as well?
> but then I'm not sure why it doesn't support emissive geometry like RTXDI advertises.

But it does, who said it doesn't?
> Does that include the Matrix City as well?

Yes, of course. It uses highly simplified geometry for the RT scene.
> And why does that have to matter though? The end result is that you can integrate multiple ray tracing effects in open worlds, BVH is not a barrier to that.

Right, but that's what we've been doing since RT hardware came out; lower frequency secondary effects that are mostly near the viewer. It's a (big) barrier if the goal is to do (I don't even know what to call it now) "actual" path tracing for primary and shadow visibility, including things like translucent stuff, participating media, caustics, etc.
> But it does, who said it doesn't?

One of the developer interviews IIRC. But also I would expect it to look a lot more different between the modes if all of the neon geometry was actually emissive in one but not the other.
I've been playing Cyberpunk a bit and it is interesting because it has fairly complex lighting but very simple geometry, all things considered (both from a triangle count and instance count perspective). This post (https://chipsandcheese.com/2023/03/22/raytracing-on-amds-rdna-2-3-and-nvidias-turing-and-pascal/) pegs the instance count at ~50k, which in UE5 terms is more like Fortnite than something like CitySample (assuming a full BVH, which obviously it cannot afford currently).
> That does not make it simple geometry. It can also be better optimized via culling etc. GTA V often has 130,000 events per frame and still it cannot hold up visually.

To be clear, when I say simple geometry I'm just saying it has far less geometric density than something like CitySample, both in terms of triangles and instances. I'm not applying any sort of art direction opinion here, just stating the fact that it has a lot less.
Other bits of confusion between the SIGGRAPH presentation and what I've seen online were the former saying neural radiance cache is being used (which TBH further blurs the line on whether or not you should be calling this path tracing I think... I don't know that anyone would consider irradiance caching or similar to be path tracing, of course), but others saying it is not. Furthermore, some media reference geometric light sources and emissives (as was the main selling point of RTXDI at one point) whereas others say it is still using analytic lights and the former are not yet supported.
> Do you think it would be worth IHVs overhauling the geometry and rasterization hardware to support efficient rendering of tiny polygons? Would this be a benefit over doing it in software as Nanite currently does?

I'm not sure if it's worth it at this stage to be honest. Regardless of where we are on the curve, I do think the priority is still to get faster and more general BVH construction and raytracing. How much fixed function that ends up being is still up in the air of course. I suspect rasterization will be with us for a long while yet, but honestly Nanite does a good enough job of that in the meantime. The real challenge is that with Nanite moving the bar on geometric detail, now we really want to get that level of geometric detail in raytracing as well...
> I wouldn't say caching invalidates path tracing. Neither does denoising. We're probably going to need both for a very long time. All of these GI techniques are doing some form of tracing, caching and sample reuse. Lumen is tracing SDFs and caching radiance in surface cache tiles. ReSTIR GI is tracing the BVH and caching radiance in screen space buffers at pixel granularity. Neural radiance caching is tracing the BVH and using the attributes of the hit point to train the network which can be used as a world space cache.

I wouldn't say any of these things "invalidate" it either, but the more they layer the more the term is blurred for me especially when they introduce significant bias. Why is NRC still path tracing but a surface cache is not? Are surfels path tracing too? But then why is irradiance caching not conventionally considered path tracing? Of course some of this is semantic and unimportant, but it does feel like the term has now been pulled into marketing to a level that we're going to have to make up something new to explain to people why what an offline, reference path tracer (whether biased or unbiased) does is different still from these games.
> CDPR and Nvidia have said that RTXDI is picking up nearly all emissive geometry in CP077. Haven't seen anyone say otherwise.

There's a slide from the GDC presentation (I think this one? https://schedule.gdconf.com/session...nto-the-night-city-presented-by-nvidia/894332) that has "emissive geometry" (and NRC) as "potential future work". Perhaps they got to one but not the other between the time the presentation was made and release?
ReSTIR and friends are still a good step toward enabling more complex lighting of course, but I'm not sure why we're picking this point to call it path tracing rather than just calling it the name the paper already has.
Previously, shadowing used to be a very painful part of the process. In games, we use a shadow budget, meaning that we can only cast shadows from carefully selected lights in our game. In many cases, we also use very inaccurate shadow maps. So imagine a busy street, with only a few lights capable of casting a shadow, while most of them just light through objects, making them look very unrealistic and disconnected from the surroundings. With RTXDI, we get up to a thousand or even more lights casting super realistic soft shadows on the screen. Practically every light you see casts shadows. That is a fundamental change in realism, depth of the scene, and dimensionality. But it is also a critical element if we assume that every light source, be it a lamp, neon, or screen, is emitting an Indirect Illumination, meaning light that bounces off of surfaces in the world. Without proper shadows in the Direct Illumination part, the Indirect Illumination part would feel totally off and unrealistic.
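The scaling trick that makes "practically every light casts shadows" feasible is resampled importance sampling: per pixel, stream over candidate lights, keep one with probability proportional to its estimated contribution, and spend the single shadow ray on the survivor. A minimal single-reservoir sketch of that idea (toy types, no temporal or spatial reuse, not RTXDI's actual code):

```cpp
#include <random>
#include <vector>

// Minimal weighted reservoir sampling, the core trick behind ReSTIR/RTXDI-style
// direct lighting. Types and the weight function are simplified placeholders.
struct Light { float intensity; float distance; };

// Unshadowed contribution estimate used as the resampling weight.
float TargetWeight(const Light& l) {
    return l.intensity / (l.distance * l.distance);
}

struct Reservoir {
    int   selected  = -1;   // index of the light we kept
    float weightSum = 0.f;  // running sum of candidate weights

    void Update(int lightIndex, float w, std::mt19937& rng) {
        weightSum += w;
        std::uniform_real_distribution<float> u(0.f, 1.f);
        if (weightSum > 0.f && u(rng) < w / weightSum)
            selected = lightIndex;  // keep this candidate with probability w / weightSum
    }
};

// Pick one light out of many for a single pixel. With temporal and spatial
// reuse (not shown) the effective candidate count per pixel grows much larger.
int PickLight(const std::vector<Light>& lights, std::mt19937& rng) {
    Reservoir r;
    for (int i = 0; i < static_cast<int>(lights.size()); ++i)
        r.Update(i, TargetWeight(lights[i]), rng);
    return r.selected;  // cast exactly one shadow ray toward this light
}
```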
> One of the developer interviews IIRC. But also I would expect it to look a lot more different between the modes if all of the neon geometry was actually emissive in one but not the other.

Here on my end, all the emissives in the game are casting light and shadows, I could provide some screenshots.
> I'm worried that AMD is too far behind to catch up before the next generation of consoles releases IMO.

As long as they can progress at the same pace that Nvidia is progressing, they will be a generation behind in performance, which is fine. It's very difficult to ask a company that is behind a generation or two in R&D time to catch up to a group that is pouring all their resources into this. But keeping pace is a reasonable goal, ensuring that gap doesn't continue to widen over the generations. They have their strengths as well, and they'll find a way to make use of them.
> Using importance sampling for all BRDFs is a classic hallmark of PT in the offline world. Area lights and emissive geometry are another hallmark of PT.

Traditionally, (Whitted style) ray tracing refers to tracing a ray from the camera, finding a hit, and then making bespoke decisions whether to spawn rays towards lights, refraction rays, GI rays and reflection rays. This then may be continued at the next vertex if it's recursive ray tracing.
> RenderMan was originally based on the Reyes scanline rendering algorithm [Cook et al. 1987], and was later augmented with ray tracing, subsurface scattering, point-based global illumination, distribution ray tracing with radiosity caching, and many other improvements. But we have recently rewritten RenderMan as a path tracer. This switch paved the way for better progressive and interactive rendering, higher efficiency on many-core machines, and was facilitated by the concurrent development of efficient denoising algorithms.

RenderMan with ray tracing had area lights, ray traced GI, ray traced reflections, importance sampling, recursive ray tracing, various caches and denoising, and while it was able to deliver some amazing visuals like Davy Jones in the Pirates of the Caribbean, production was quite complicated due to various bespoke solutions and their complicated interactions.
> The reasons for the switch to path tracing are that it is a unified and flexible algorithm, it is well suited for progressive rendering, it is faster to first pixel, it is a single-pass algorithm (unlike the point-based approaches that require pre-passes to generate point clouds), and geometric complexity can often be handled through object instancing.

At first glance this may seem like a relatively small difference, but it's what transformed the entire movie industry when Arnold proved that path tracing isn't just a theoretical algorithm.
> I think the main reason is that light samples in ReSTIR and NRC are actual multi-bounce paths. In Lumen and prior methods each sample is a single bounce and multiple bounces are simulated by self-querying the cache over multiple frames. NRC self-queries too but its input samples already have multiple bounces.

I'm not sure I totally buy the distinction you are making... feedback algorithms for multi-bounce are taking samples that are "multi-bounce" as well after the first frame. And if you want to draw a line between doing that without taking additional visibility samples, so do ReSTIR, NRC and friends with both temporal and neighbor path reuse.
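For what it's worth, the two update rules being contrasted look roughly like this (toy types and placeholder trace functions, not any engine's code):

```cpp
#include <vector>

// Contrast sketch of the two sampling styles discussed above.
struct Hit { int cell; float direct; float albedo; };

// Placeholder tracing functions so the sketch is self-contained.
Hit   TraceOneBounce(int pixel)             { return {pixel, 1.0f, 0.5f}; }
float TraceFullPath(int pixel, int bounces) { (void)pixel; (void)bounces; return 1.0f; }

// (a) Feedback-style cache (Lumen surface cache / classic irradiance caching):
// each new sample carries one bounce; further bounces accumulate over later
// frames because the sample reads whatever last frame's cache stored at the hit.
void UpdateFeedbackCache(std::vector<float>& cache,
                         const std::vector<float>& prevCache, int pixel) {
    Hit h = TraceOneBounce(pixel);
    cache[h.cell] = h.direct + h.albedo * prevCache[h.cell];
}

// (b) Path-sample style (ReSTIR GI / NRC training samples): the sample that is
// cached or reused is already a multi-bounce path estimate for this frame.
float PathSample(int pixel) {
    return TraceFullPath(pixel, /*bounces=*/4);
}
```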
> Nowhere near full-blown path tracing of course, but enough to distinguish them from prior art.

Oh of course, they are great techniques. They just already have names, and calling them "path tracing" adds some amount of confusion, as this conversation demonstrates.
> Interesting. CDPR has done a few interviews and while they didn't explicitly mention emissive geometry it would be strange if emissives were excluded when they say "every light source".

Actually they very carefully say "almost every light source" in the interviews I've heard, but TBH I assume that's some irrelevant detail.
The neon signs in CP2077 must be emissive, right? Or can they be done with area lights?
> Here on my end, all the emissives in the game are casting light and shadows, I could provide some screenshots.

Hard to tell on those first two if that's "just" reflections or similar. The third one might be emissive but it could also be an analytic light there. Can you get comparison shots of "path tracing" to max raytracing without that on in a case where it's clear that we are missing *light*, not just shadows (because the latter isn't necessarily a fair comparison since they could almost certainly afford RT shadows on most/all analytic lights at the performance level on a 4090, etc)?
> Traditionally, (Whitted style) ray tracing refers to tracing a ray from the camera, finding a hit, and then making bespoke decisions whether to spawn rays towards lights, refraction rays, GI rays and reflection rays. This then may be continued at the next vertex if it's recursive ray tracing.

I don't know what you mean here. The PDFs are a part of the BRDF, which is per-pixel.
Path tracing is traditionally referring to a specific light transport algorithm, which traces a full path from camera, where each vertex is based on pure probability.
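A minimal sketch of that formulation (Lambertian only, no next-event estimation, placeholder scene functions) to make the contrast with the Whitted-style branching described above concrete:

```cpp
#include <random>

// One stochastic continuation per vertex, direction drawn from the BRDF's pdf,
// rather than bespoke shadow/reflection/refraction branches at each hit.
struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { bool valid = false; Vec3 position, normal, albedo, emission; };

// Stubs so the sketch stands alone; a real renderer intersects the scene and
// does proper cosine-weighted hemisphere sampling here.
Hit  Intersect(const Ray&) { return {}; }
Vec3 SampleCosineHemisphere(const Vec3& n, std::mt19937&) { return n; }

Vec3 TracePath(Ray ray, std::mt19937& rng) {
    Vec3 radiance, throughput{1, 1, 1};
    std::uniform_real_distribution<float> u(0.f, 1.f);
    for (int bounce = 0; bounce < 8; ++bounce) {
        Hit h = Intersect(ray);
        if (!h.valid) break;

        // Emissive surfaces contribute whenever a path happens to hit them;
        // in the naive formulation there is no separate light list at all.
        radiance.x += throughput.x * h.emission.x;
        radiance.y += throughput.y * h.emission.y;
        radiance.z += throughput.z * h.emission.z;

        // Lambertian BRDF with cosine-weighted sampling: the cosine and pdf
        // terms cancel, so the throughput update is just the albedo.
        throughput.x *= h.albedo.x;
        throughput.y *= h.albedo.y;
        throughput.z *= h.albedo.z;

        // The single continuation ray, chosen purely by the BRDF's pdf.
        ray = Ray{h.position, SampleCosineHemisphere(h.normal, rng)};

        // Russian roulette terminates long paths without introducing bias.
        const float survive = 0.8f;
        if (u(rng) > survive) break;
        throughput.x /= survive;
        throughput.y /= survive;
        throughput.z /= survive;
    }
    return radiance;
}
```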
> Renderman's history is a useful reference, as they started from various hybrids with ray tracing and later switched to path tracing:

I wouldn't be that strict on the term personally. The GPUs today don't have enough bandwidth to implement the offline world's techniques. I'm OK with realtime path-tracing definitions for games. They certainly can't be done the same way and will need tricks/hardware/special drivers/etc. in order to get something looking fairly decent.
RenderMan with ray tracing had area lights, ray traced GI, ray traced reflections, importance sampling, recursive ray tracing, various caches and denoising, and while it was able to deliver some amazing visuals like Davy Jones in the Pirates of the Caribbean, production was quite complicated due to various bespoke solutions and their complicated interactions.
This is where the switch to path tracing comes in, trading performance for a single unified algorithm that is guaranteed to resolve all lighting effects if you wait long enough.
At first glance this may seem like a relatively small difference, but it's what transformed the entire movie industry when Arnold proved that path tracing isn't just a theoretical algorithm.
When you have separate solutions for direct illumination and GI you start to blur this line. Where do you place emissive surfaces in this setup? Are they direct or indirect? Both can handle them, but they will produce different results as they have their own custom assumptions.
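For reference, in a single unified path tracer the usual answer is multiple importance sampling: both the light-sampling ("direct") strategy and the BSDF-sampling ("indirect") strategy are allowed to see the emitter, and the MIS weights keep the combination from double counting. A sketch of just the weighting (parameter names are made up):

```cpp
// Balance heuristic (Veach): weight for the strategy that generated the sample.
float MisWeight(float pdfThisStrategy, float pdfOtherStrategy) {
    return pdfThisStrategy / (pdfThisStrategy + pdfOtherStrategy);
}

// Light-sampling ("direct") side: we explicitly picked a point on the emitter.
float EmitterViaLightSample(float emittedRadiance, float brdfCosine,
                            float pdfLight, float pdfBsdfForSameDirection) {
    return MisWeight(pdfLight, pdfBsdfForSameDirection) *
           emittedRadiance * brdfCosine / pdfLight;
}

// BSDF-sampling ("indirect") side: the bounce ray happened to hit the emitter.
float EmitterViaBsdfSample(float emittedRadiance, float brdfCosine,
                           float pdfBsdf, float pdfLightForSameDirection) {
    return MisWeight(pdfBsdf, pdfLightForSameDirection) *
           emittedRadiance * brdfCosine / pdfBsdf;
}
```

Once direct and GI are separate systems with separate caches, there is no single place to apply this weighting consistently, which is exactly the blurred line described above.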
Which paths do you resolve using diffuse GI and which using reflections? Diffuse GI can use heavy denoising and can reuse nearby paths through ReSTIR spatial reuse or NRC. Reflections can't, as you would immediately notice any lag or duplicated pixels in mirror reflections. Depending on your choice of heuristic you get a different discontinuity between those two effects.
What about other custom per-ray shortcuts, like the lack of local light shadowing in rays calculating diffuse GI in path traced Quake 2?
If we start blurring this line, then why aren't Battlefield, Spider-Man, Metro or Lumen path tracing? At the end of the day everyone traces some rays from pixels using importance sampling, caches some data and runs spatiotemporal denoising at the end.