> Actually, it doesn't apply equally since with RT you can do adaptive super sampling. You only pay the cost for a few pixels instead of rendering the whole screen at a higher resolution.

Huh?
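The adaptive idea in the quote above can be sketched in a few lines: measure local contrast and only pay the extra-sample cost where it's high, instead of supersampling the whole frame. This is a toy illustration only; the function names, the 3x3 variance metric, and the threshold are all invented for the example, not how any shipping engine does it.

```python
def luminance_variance(img, x, y):
    """Variance over the 3x3 neighbourhood around (x, y), clamped at borders."""
    h, w = len(img), len(img[0])
    vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    mean = sum(vals) / 9.0
    return sum((v - mean) ** 2 for v in vals) / 9.0

def adaptive_pass(img, threshold, extra_samples, shade):
    """Re-shade only high-contrast pixels, with extra_samples samples each."""
    out = [row[:] for row in img]      # variance is measured on the input frame
    resampled = 0
    for y in range(len(img)):
        for x in range(len(img[0])):
            if luminance_variance(img, x, y) > threshold:
                # pay the supersampling cost for just this pixel
                out[y][x] = sum(shade(x, y) for _ in range(extra_samples)) / extra_samples
                resampled += 1
    return out, resampled

# A flat frame with one hard edge: only the pixels near the edge get resampled.
frame = [[0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0]]
out, n = adaptive_pass(frame, threshold=0.1, extra_samples=4, shade=lambda x, y: 0.5)
print(n)  # 6 of 9 pixels resampled
```

The point of contention in the thread survives the sketch: the variance test and re-shading are independent of how `shade` produces a colour, so the "few pixels" saving applies wherever per-pixel shading is expensive.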
> You need multiple rays for tracing these things. Reflections need coherent rays to preserve the quality of the source. Ambient lighting (simulating surface roughness) needs scattered rays to sample a wide area. Shadows need rays per light, with soft shadows needing multiple rays per light to produce better shadowing with less noise. How well your denoising works mitigates some of those requirements, reducing sample quality on shadow rays etc.

oh okay. so similar in nature then. Multiple passes required to do different things. Is this the spp count we're referring to?
> oh okay. so similar in nature then. Multiple passes required to do different things. Is this the spp count we're referring to?

I have no idea! the 1spp examples above are only for lighting including shadows for those light sources. I presume bounces are included in that sample, so one sample at several iterations, but I've no idea how many rays. Lighting and reflections combined will definitely need multiple samples, as will multiple light sources, each one needing a sample. One of the demos explained they pick the three most significant light-sources in a scene for tracing shadows. It should be something like 1 sample for each light source including ambient, so two for outside with sky-light and sun, and one for reflections. And then you should be adding one for each iteration of transparency, so a glass sphere outside would add one ray per pixel additional to trace the refraction. I don't know if they count that in their sample count or ray count. And that all changes with a hybrid renderer too.
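As a sanity check on that tally, using only the counting scheme described in the post (one sample per traced light, one for reflections, one extra ray per transparency iteration), here's a purely illustrative helper; the function and its parameters are invented for the example:

```python
def rays_per_pixel(lights, reflections=False, refraction_steps=0):
    """Toy tally of samples per pixel from the scheme described above."""
    rays = lights                 # one shadow/lighting sample per traced light
    if reflections:
        rays += 1                 # one sample for reflections
    rays += refraction_steps      # one extra ray per transparency iteration
    return rays

# Outdoor scene: sky-light + sun, reflections on, one glass sphere in view.
print(rays_per_pixel(lights=2, reflections=True, refraction_steps=1))  # 4
```

Which is why the "1spp" figures quoted in demos are hard to compare: whether that 4 counts as four samples or one sample with four rays depends on what the vendor decided to call a sample.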
> Multiple sources of lights, multiple shadows. Quality is very good imo.

I don't understand. That's showing what an RTX 2080 can do raytracing. How well would that scene be rendered with an RTX 2080's worth of silicon focussed on rasterising using volumetric lighting? That's our comparison point that's missing. Earlier you were saying that we needed ray-tracing for things like having proper room illumination. My vids show that we can have that sort of lighting without RT, but we don't have any examples of how far rasterisation can be pushed, although we do have some examples of realtime GI showing it's attainable at least.
> I have no idea! the 1spp examples above are only for lighting including shadows for those light sources. I presume bounces are included in that sample, so one sample at several iterations, but I've no idea how many rays. Lighting and reflections combined will definitely need multiple samples, as will multiple light sources, each one needing a sample. One of the demos explained they pick the three most significant light-sources in a scene for tracing shadows. It should be something like 1 sample for each light source including ambient, so two for outside with sky-light and sun, and one for reflections. And then you should be adding one for each iteration of transparency, so a glass sphere outside would add one ray per pixel additional to trace the refraction. I don't know if they count that in their sample count or ray count. And that all changes with a hybrid renderer too.

hmmm worth exploring.
I have to ask, how many of you pro RT folks
> I'll be honest and guess that our ideal comparison will never happen

I agree. As a result though, I don't think anyone should be making broad claims like RT is better quality at the same cost, unless they can present compelling evidence. At the moment we know RT has shown better quality and should theoretically be better overall where performance is no limit. However, we don't know how far voxelised solutions can go, and we should also acknowledge that, as a technology, voxelised solutions are a comparable solution for realtime rendering because they work on integrals instead of guessing these from noisy data. We have real-world examples of 4 TF of compute beautifully lighting some scenes which 4 TF of raytracing on compute can't match, and possibly 4 TF of compute+RT hardware can't match either.

With cone tracing, you refine the size of the cone to get more detail. With RT, you increase the number of samples. Both require more processing to improve quality, both will scale differently, and both will be imperfect when using the fastest solutions. So the take-home, for me at least, is that next-gen lighting, the real differentiator for next-gen visuals, doesn't need raytracing hardware, shifting the value consideration for RT hardware to the value of reflections and the ease of implementation and integration into engines.
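For anyone unfamiliar with the distinction being drawn: cone tracing marches a single widening cone through a pre-filtered (mip-mapped) voxel grid, sampling coarser levels as the footprint grows, rather than averaging many stochastic rays. A minimal sketch of that idea follows; the dictionary-based data layout and the constants are invented for illustration, not taken from any engine:

```python
import math

def trace_cone(mips, origin, direction, half_angle, max_dist):
    """Accumulate occlusion along one cone.

    mips[l] maps integer voxel coords to occlusion at mip level l (0 = finest,
    each level doubling the voxel size); direction is a unit vector. The point
    is the level selection and the step size growing with the cone radius.
    """
    occlusion, dist = 0.0, 1.0
    while dist < max_dist and occlusion < 1.0:
        radius = dist * math.tan(half_angle)           # cone footprint here
        level = min(len(mips) - 1, int(math.log2(max(radius, 1.0))))
        cell = tuple(int((o + d * dist) // (1 << level))
                     for o, d in zip(origin, direction))
        sample = mips[level].get(cell, 0.0)
        occlusion += (1.0 - occlusion) * sample        # front-to-back compositing
        dist += max(radius, 1.0)                       # wider cone -> bigger steps
    return min(occlusion, 1.0)

# One opaque voxel one unit along +x: the cone finds it in a single step.
mips = [{(1, 0, 0): 1.0}, {}, {}]
print(trace_cone(mips, (0, 0, 0), (1, 0, 0), math.atan(0.5), 10.0))  # 1.0
```

It makes the trade-off in the post concrete: the cone's quality knob is the half-angle and the grid resolution (integrating pre-filtered data), whereas RT's knob is the number of rays averaged, with noise in between.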
> I'm not ignoring it. You just haven't qualified it. They improved the denoising for 1spp tracing. Okay. Now qualify how you get better results at the same cost. What's your reference data for voxelised lighting?

1) CryEngine's SVOGI and the UE4 VXGI demos. Unless you mean this type of voxel lighting: https://forum.beyond3d.com/posts/2046759/
Definitely. It doesn't prove quality is better than voxelised lighting at the same cost though. For that, you need comparable data on the alternative.
> Huh?

1080p with ray tracing is better because you can use adaptive super sampling AA giving you much better IQ than rasterization alone.
> I agree. As a result though, I don't think anyone should be making broad claims like RT is better quality at the same cost, unless they can present compelling evidence. At the moment we know RT has shown better quality and should theoretically be better overall where performance is no limit. However, we don't know how far voxelised solutions can go, and we should also acknowledge that, as a technology, voxelised solutions are a comparable solution for realtime rendering because they work on integrals instead of guessing these from noisy data. We have real-world examples of 4 TF of compute beautifully lighting some scenes which 4 TF of raytracing on compute can't match, and possibly 4 TF of compute+RT hardware can't match either.
>
> With cone tracing, you refine the size of the cone to get more detail. With RT, you increase the number of samples. Both require more processing to improve quality, both will scale differently, and both will be imperfect when using the fastest solutions. So the take-home, for me at least, is that next-gen lighting, the real differentiator for next-gen visuals, doesn't need raytracing hardware, shifting the value consideration for RT hardware to the value of reflections and the ease of implementation and integration into engines.

RT is already part of DXR and Vulkan. It's here to stay. Just embrace it
I don't see it at all right now under the $400 mark. Embrace that!
Possibly why it may never make it to consoles .... the fallback approach will work just fine for $400 consoles.
However, it is possible there might be a $500+ RT-capable console in the works that many will buy.
In the back of my head I'm wondering: since RT does AO, GI, reflections, and soft shadows, could it calculate all of them using a single ray? More load per ray, but another ray may not need to be cast, in the sense that each feature wouldn't need its own ray pass.
So 1 spp to, say, 10 spp. Is the 10 spp doing dramatically more things, like shadows, AO, etc.?
Vega is a 14/16 nm part. There’s going to be a large size reduction in the transition to 7nm by itself. 16FF to 7nm SoC scales down 70% in area. If HPC only scales down 50%, you could fit Vega 64 and have 100mm^2 left for CPU and non-memory I/O. Power scales 60% per TSMC, so the 50% may even be conservative, assuming power density is kept constant.
I suspect we can assume that the power delivery and cooling will be at least as good as last gen. Hopefully as good as the X.
For reference, Zeppelin die minus memory controller is 198mm^2 in 14nm. That would fit inside the leftover budget from above after a 50% shrink.
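Putting rough numbers on that: the die sizes below are approximate public figures, the 50% shrink is the conservative HPC assumption from the posts above, and the ~350 mm^2 SoC budget is an assumption for a console-class die, so treat this as a sanity check rather than a prediction.

```python
vega10_die = 486.0       # mm^2, Vega 10 at 14nm (approx. public figure)
zeppelin_no_imc = 198.0  # mm^2, Zeppelin minus memory controller, 14nm
hpc_shrink = 0.50        # conservative HPC area scaling to 7nm (SoC quote is ~70% reduction)

soc_budget = 350.0       # mm^2, assumed console-class SoC size

vega_7nm = vega10_die * hpc_shrink         # ~243 mm^2
leftover = soc_budget - vega_7nm           # ~107 mm^2 for CPU and non-memory I/O
cpu_7nm = zeppelin_no_imc * hpc_shrink     # ~99 mm^2

print(f"GPU {vega_7nm:.0f} mm^2, leftover {leftover:.0f} mm^2, CPU fits: {cpu_7nm <= leftover}")
```

Even with only a 50% shrink, a Vega 64-class GPU plus a Zeppelin-class CPU lands inside the assumed budget, which is the basis of the 12 TF argument below.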
It’s for these reasons I’m assuming at least one of the next gen consoles will be 12TF or greater.
After seeing RDR2, especially on XB1-X, we can survive another generation without RT.
> 1) CryEngine's SVOGI and the UE4 VXGI demos. Unless you mean this type of voxel lighting:

Have you a link to these engines being pushed on GTX 1080 hardware? I can only find examples running on mid-range and older cards (4 TF).
> 2) Where are the voxel cone tracing demos that look and run as good?

The fact no-one's got a demo of them doesn't show the quality is inferior. The existing demos show great quality, and show the quality can be ramped up versus performance. Ergo, until we see a demo with a large rectangular light source and vertical rails or similar comparable set-up on a GTX 1080 to compare the shadowing versus the RT example, nothing is proven.
> 1080p with ray tracing is better because you can use adaptive super sampling AA giving you much better IQ than rasterization alone.

That's a totally independent feature from upscaling. You can do that whether you upscale or not. When it comes to turning 1080p pixels into 4K pixels, reducing the number of pixel samples needing to be drawn, all rendering methods can use ML-based solutions or algorithmic solutions.
> RT is already part of DXR and Vulkan. It's here to stay. Just embrace it

Again, we're trying to have an actual discussion here. If all you want to do is blow RT's trumpet and say it's better at everything, you're just generating noise.
I'm not stating that as a claim, but from what I'd noticed in the demos, they weren't running 4K or high framerate. Apparently some are, so I stand corrected, but that's where people providing data and more detailed discussion rather than one-liners like "better quality at same cost" really helps discussions along.
> Yeah, when you look at RDR2 you don't think "this would look half decent with ray traced lighting", you think "how did they perform this witchcraft?"

I think all of us have not appreciated how far some lighting solutions have come. The VXGI stuff is several years old and DX11 based, but includes soft specular reflections which can't do RT's perfect mirroring; overall, though, the solution provides realtime GI with ambient occlusion, soft shadowing, and soft reflections. Yet no-one here knew about it. A game designed for XB1X using this technique would be spectacular, and if it weren't for the inclusion of RTX in nVidia's latest pro-focussed GPUs, we'd be talking about different solutions with a unified view on their occlusion and game engines' short-term future. We'd be looking at BFV showing cone-traced specular highlights that run on all GPUs instead of RTX-specific ray-traced reflections.
> I think all of us have not appreciated how far some lighting solutions have come. The VXGI stuff is several years old and DX11 based, but includes soft specular reflections which can't do RT's perfect mirroring; overall, though, the solution provides realtime GI with ambient occlusion, soft shadowing, and soft reflections. Yet no-one here knew about it. A game designed for XB1X using this technique would be spectacular, and if it weren't for the inclusion of RTX in nVidia's latest pro-focussed GPUs, we'd be talking about different solutions with a unified view on their occlusion and game engines' short-term future. We'd be looking at BFV showing cone-traced specular highlights that run on all GPUs instead of RTX-specific ray-traced reflections.

I don't think anyone has discounted today's lighting solutions. But they are either baked, or have a static form of GI. There are very few games that have global dynamic GI, and those that do still have hard limitations. The performance is not great.
This is where console tech wants to be versatile, to enable acceleration of different solutions. If the RT hardware can be used to accelerate volume traversal and perform alternative shape tracing, its inclusion is more valuable.