Next Generation Hardware Speculation with a Technical Spin [2018]

You need multiple rays for tracing these things. Reflections need coherent rays to preserve the quality of the source. Ambient lighting (simulating surface roughness) needs scattered rays to sample a wide area. Shadows need rays per light, with soft shadows needing multiple rays per light to produce better shadowing with less noise. How well your denoising works mitigates some of those requirements, letting you reduce the sample count on shadow rays etc.
Oh okay, so similar in nature then. Multiple passes are required to do different things. Is this the spp count we're referring to?

So 1 spp versus, say, 10 spp. The 10 spp is doing dramatically more things like shadows, AO, etc.?
 
I have no idea! ;) The 1 spp examples above are only for lighting, including shadows for those light sources. I presume bounces are included in that sample, so one sample over several iterations, but I've no idea how many rays. Lighting and reflections combined will definitely need multiple samples, as will multiple light sources, each one needing a sample. One of the demos explained they pick the three most significant light sources in a scene for tracing shadows. It should be something like one sample for each light source including ambient, so two for outside with sky-light and sun, and one for reflections. And then you should be adding one for each iteration of transparency, so a glass sphere outside would add one additional ray per pixel to trace the refraction. I don't know if they count that in their sample count or ray count. And that all changes with a hybrid renderer too.
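To make that tally concrete, here's a toy sketch of the per-pixel ray budget under the assumptions above (shadow rays per light, scattered AO rays, one reflection sample, one extra ray per transparency iteration). The function and its numbers are illustrative only, not from any actual renderer:

```python
# Toy per-pixel ray budget under the assumptions discussed above.
def rays_per_pixel(num_lights, shadow_rays_per_light=1, ao_rays=0,
                   reflective=False, refraction_depth=0):
    rays = num_lights * shadow_rays_per_light  # shadow rays per light source
    rays += ao_rays                            # scattered ambient samples
    rays += 1 if reflective else 0             # one reflection sample
    rays += refraction_depth                   # one ray per transparency hop
    return rays

# Outdoor scene: sky-light + sun, reflections, glass sphere with 2 refraction hops
print(rays_per_pixel(num_lights=2, reflective=True, refraction_depth=2))  # 5
```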

Multiple light sources, multiple shadows. Quality is very good imo.
I don't understand. That's showing what an RTX 2080 can do with raytracing. How well would that scene be rendered with an RTX 2080's worth of silicon focussed on rasterising using volumetric lighting? That's the comparison point that's missing. Earlier you were saying that we needed ray-tracing for things like proper room illumination. My vids show that we can have that sort of lighting without RT, but we don't have any examples of how far rasterisation can be pushed, although we do have some examples of realtime GI showing it's attainable at least.
 
hmmm worth exploring.
I think the point is that you can't separate what's there. The game demos shown with RTX are hybrid; the whole setup is meant to be hybrid. There is overlap/a lot of double usage of the hardware going on to make this scene happen. But the quality of what's shown is very good without the usage of, say, the tensor cores. I'll be honest and guess that our ideal comparison will never happen. I just don't see it happening. DXR is a bolt-on API; it's not a re-write of the renderer and it doesn't muck around with anyone's existing engine.

From a budget perspective, many companies are not willing to move games or rewrite engines because the cost of labour in rebuilding content from the ground up often outstrips the cost of retrofitting the engine (see RDR2, for instance). They chose a substandard reconstruction technique, but that's likely because the engine has some other limitations. With DXR, it looks like a simple bolt-on: it's easy to do and it'll work for all vendors who support it.

Nvidia's VXGI implementation is a GameWorks plugin, so they can just fire and forget. I don't know if developers are interested in each creating their own voxel techniques vs going straight for RT. Anyway, it's OT. I'll look more into how the RT spp is calculated.
 
Stop trying to drag the console wars into it. This is about the viability of RT and its importance to the next-gen consoles (can they live without it, or is it essential?). No-one's even 'bashing' raytracing, if you actually follow the discussion. Some people are utterly in love with the idea, and some are more questioning; that's as polarised as this discussion has got, with platforms not even entering into it. We've zero idea what Sony and MS are putting in their next boxes, so how the hell could anyone be choosing between the Sony and non-Sony solutions??
 
I'll be honest and guess that our ideal comparison will never happen
I agree. As a result, though, I don't think anyone should be making broad claims like "RT is better quality at the same cost" unless they can present compelling evidence. At the moment we know RT has shown better quality and should theoretically be better overall where performance is no limit. However, we don't know how far voxelised solutions can go, and we should also acknowledge that, as a technology, voxelised solutions are a comparable solution for realtime rendering because they work on integrals instead of guessing them from noisy data. We have real-world examples of 4TF of compute beautifully lighting some scenes which 4TF of raytracing on compute can't match, and possibly 4TF of compute+RT hardware can't match either.

With cone tracing, you refine the size of the cone to get more detail. With RT, you increase the number of samples. Both require more processing to improve quality, both will scale differently, and both will be imperfect when using the fastest solutions. So the take-home for me, at least, is that next-gen lighting, the real differentiator for next-gen visuals, doesn't need raytracing hardware, which shifts the value consideration for RT hardware to the value of reflections and the ease of implementation and integration into engines.
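To illustrate the two quality knobs being compared, here's a toy voxel cone trace in Python: the cone aperture determines which pre-filtered mip level gets sampled, so narrowing the cone (or adding cones) is the quality dial, where a ray tracer would instead add samples per pixel. Every name and number here is my own illustration, not from any of the engines mentioned:

```python
import numpy as np

def build_mips(occupancy):
    """Pre-filter a 3D occupancy grid into a mip chain by 2x averaging."""
    mips = [occupancy.astype(np.float32)]
    while mips[-1].shape[0] > 1:
        g = mips[-1]
        mips.append(g.reshape(g.shape[0]//2, 2, g.shape[1]//2, 2,
                              g.shape[2]//2, 2).mean(axis=(1, 3, 5)))
    return mips

def sample(mips, level, p):
    """Nearest-neighbour sample of mip `level` at a point in the unit cube."""
    g = mips[level]
    i = np.clip((np.asarray(p) * g.shape[0]).astype(int), 0, g.shape[0] - 1)
    return g[tuple(i)]

def cone_trace_occlusion(mips, origin, direction, aperture, max_dist=1.0):
    """March along the cone; as it widens, sample coarser mips whose voxel
    size matches the cone diameter. The aperture is the quality knob,
    analogous to the sample count for a ray tracer."""
    finest = 1.0 / mips[0].shape[0]                 # finest voxel size
    occ, dist = 0.0, finest
    while dist < max_dist and occ < 1.0:
        diameter = max(2.0 * dist * np.tan(aperture / 2.0), finest)
        level = int(np.clip(np.log2(diameter / finest), 0, len(mips) - 1))
        s = sample(mips, level, np.asarray(origin) + dist * np.asarray(direction))
        occ += (1.0 - occ) * s                      # front-to-back blending
        dist += diameter * 0.5                      # step scales with the cone
    return occ

# Toy scene: a solid slab occupying the top quarter of a unit cube
grid = np.zeros((32, 32, 32)); grid[:, :, 24:] = 1.0
mips = build_mips(grid)
print(cone_trace_occlusion(mips, (0.5, 0.5, 0.1), (0.0, 0.0, 1.0),
                           aperture=np.radians(30)))
```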
 
If gen 9 from MS and Sony launches in 2020, we'll hopefully have a pretty good idea about RT around GDC 2019. We might get leaks from devs making games for 2020 and beyond. And even if leaks are light, the GDC talks often foretell what is coming in the next two years. Is Navi still H1 2019?
 
I'm not ignoring it. You just haven't qualified it. They improved the denoising for 1spp tracing. Okay. Now qualify how you get better results at the same cost. What's your reference data for voxelised lighting?
Definitely. It doesn't prove quality is better than voxelised lighting at the same cost though. For that, you need comparable data on the alternative.
1) CryEngine's SVOGI and the UE4 VXGI demos. Unless you mean this type of voxel lighting: https://forum.beyond3d.com/posts/2046759/

2) Where are the voxel cone tracing demos that look and run as good? ;)

1080p with ray tracing is better because you can use adaptive super sampling AA giving you much better IQ than rasterization alone.

RT is already part of DXR and Vulkan. It's here to stay. Just embrace it :p
 
Possibly why it may never make it to consoles... the fallback approach will work just fine for $400 consoles.
However, it's possible there might be a $500+ RT-capable console in the works that many will buy.
 

After seeing RDR2, especially on XB1-X, we can survive another generation without RT. However, if the Cyberpunk 2077 PC version totally massacres the next-generation console editions with gobs of gorgeous RT lighting, shading, and reflections... then I will find a local wormhole, hop in, and beat up my past self as he/I was writing this post. :yep2:
 
In the back of my head I'm wondering: since RT does AO, GI, reflections, and soft shadows, is it likely able to calculate all of them using a single ray? More load per ray, but another ray may not need to be cast, in the sense that each feature doesn't need its own ray pass.


In path tracing, spp basically means the number of paths traced to light per pixel.
You shoot a ray and bounce that single ray around, storing BRDF contributions from surfaces until you hit a light. (So without some sort of cut-off point, the ray can bounce an infinite number of times before hitting a light source; think of a camera inside a mirror ball.)

One spp can thus involve huge amounts of shading, with rays shot in pretty much random directions (if no caching or deferred-shading approach is used).
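A minimal toy sketch of that loop in Python, purely to show the control flow: the "scene" is a stand-in (a fixed chance of hitting the light each bounce, not real intersection code), one call traces one path, i.e. one sample, and MAX_BOUNCES is the cut-off point mentioned above:

```python
import random
from dataclasses import dataclass

MAX_BOUNCES = 8  # the cut-off: a path can't bounce forever (mirror-ball case)

@dataclass
class Hit:
    is_light: bool
    emission: float = 0.0
    brdf: float = 0.0  # scalar albedo as a stand-in for a full BRDF

def toy_intersect():
    """Stand-in for real scene intersection: each bounce has a 30%
    chance of hitting the light, otherwise a grey diffuse surface."""
    if random.random() < 0.3:
        return Hit(is_light=True, emission=1.0)
    return Hit(is_light=False, brdf=0.7)

def trace_path():
    """One call traces one path, i.e. one sample; N calls per pixel = N spp."""
    throughput, radiance = 1.0, 0.0
    for _ in range(MAX_BOUNCES):
        hit = toy_intersect()
        if hit.is_light:
            radiance = throughput * hit.emission  # path reached a light
            break
        throughput *= hit.brdf  # accumulate the surface's BRDF contribution
    return radiance

spp = 64
pixel = sum(trace_path() for _ in range(spp)) / spp  # average the samples
print(pixel)
```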
 
Vega is a 14/16nm part. There's going to be a large size reduction in the transition to 7nm by itself. 16FF to 7nm SoC scales down 70% in area. If HPC only scales down 50%, you could fit Vega 64 and have 100mm^2 left for CPU and non-memory I/O. Power scales down 60% per TSMC, so the 50% may even be conservative, assuming power density is kept constant.

I suspect we can assume that the power delivery and cooling will be at least as good as last gen. Hopefully as good as the X.

For reference, Zeppelin die minus memory controller is 198mm^2 in 14nm. That would fit inside the leftover budget from above after a 50% shrink.

It’s for these reasons I’m assuming at least one of the next gen consoles will be 12TF or greater.
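A quick sanity check of that arithmetic. The ~490mm^2 Vega 10 die size and the ~350mm^2 console APU budget (roughly XB1X-sized) are my assumptions; the 198mm^2 Zeppelin figure is from the post, and "scales down 70%" is read as the new area being 0.3x:

```python
VEGA10_14NM  = 490   # mm^2, Vega 64 die (approximate assumption)
APU_BUDGET   = 350   # mm^2, hypothetical console APU, roughly XB1X-sized
ZEPPELIN_CPU = 198   # mm^2, Zeppelin minus memory controller (from the post)

for label, factor in [("optimistic 0.3x", 0.3), ("conservative 0.5x", 0.5)]:
    gpu = VEGA10_14NM * factor
    print(f"{label}: GPU ~{gpu:.0f} mm^2, ~{APU_BUDGET - gpu:.0f} mm^2 left over")

# Even in the conservative case, the ~105 mm^2 leftover roughly covers a
# half-shrunk Zeppelin CPU block:
print(f"Zeppelin at 0.5x: ~{ZEPPELIN_CPU * 0.5:.0f} mm^2")
```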

Forgot they showed this at Computex, but the 2x numbers were right on the money. The 1.35x performance seems to be on top of the 2x numbers, but I'm not even banking on that clock boost.

https://www.overclock3d.net/news/gp...-_increases_investment_in_graphics_hardware/1



Many enthusiasts also regard Vega's stock voltage as set too high, and show it's a little bandwidth starved. I think APU fine-tuning and GDDR6 may help if Navi turns out no different.
 
1) CryEngine's SVOGI and the UE4 VXGI demos. Unless you mean this type of voxel lighting:
Have you a link to these engines being pushed on GTX 1080 hardware? I can only find examples running on mid-range and older cards (4 TF).
2) Where are the voxel cone tracing demos that look and run as good? ;)
The fact no-one's got a demo of them doesn't show the quality is inferior. The existing demos show great quality, and show the quality can be ramped up versus performance. Ergo, until we see a demo with a large rectangular light source and vertical rails, or a similarly comparable set-up, on a GTX 1080 to compare the shadowing versus the RT example, nothing is proven.

If your statement is fact, please back it with supporting evidence. If it's just your opinion, please qualify it as such with a prefix like, "I expect..." or "I would assume..."

1080p with ray tracing is better because you can use adaptive super sampling AA giving you much better IQ than rasterization alone.
That's a totally independent feature from upscaling. You can do that whether you upscale or not. When it comes to turning 1080p pixels into 4K pixels, reducing the number of pixel samples that need to be drawn, all rendering methods can use ML-based solutions or algorithmic solutions.
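For scale, the pixel-count arithmetic behind that (simple resolution maths, nothing renderer-specific):

```python
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k    = 3840 * 2160   # 8,294,400
print(pixels_4k / pixels_1080p)  # 4.0: upscaling 1080p to 4K means
                                 # shading only a quarter of the pixels
```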

RT is already part of DXR and Vulkan. It's here to stay. Just embrace it :p
Again, we're trying to have an actual discussion here. If all you want to do is blow RT's trumpet and say it's better at everything, you're just generating noise.
 
I'm not stating that as a claim, but from what I'd noticed in the demos, they weren't running at 4K or high framerates. Apparently some are, so I stand corrected, but that's where people providing data and more detailed discussion, rather than one-liners like "better quality at same cost", really helps discussions along.

Most of the demos had substantially more than 30fps. The Star Wars demo was scaled back in quality to get a bit more than 24fps on the Volta GPUs; the goal there was not gaming but a demo of pre-visualization and production tools for film and television. With the jump from Volta to Turing, the demo that formerly needed four GPUs now even runs on a single GPU. In the end, the interplay of shading and intersections is crucial and content dependent. Raytracing in gaming can be used arbitrarily beyond pixel shading. Large parts of the console and UHD/144fps fraction may be rightly sceptical about this technology, but one also needs to consider the late game in technical developments and their evolution. Raytracing also addresses universal problems such as divergence and spatial data structures.

For AO and GI I still see solutions apart from raytracing, but not for reflections and shadows. There is no single "raytracing effect" anyway. One can do thousands of things with raytracing, but not everything will be possible with the current generation.
 
Yeah, when you look at RDR2 you don't think "this would look half decent with ray traced lighting" you think "how did they perform this witchcraft?".
I think all of us have underappreciated how far some lighting solutions have come. The VXGI stuff is several years old and DX11 based, and includes soft specular reflections which can't do RT's perfect mirroring, but overall the solution provides realtime GI with ambient occlusion, soft shadowing, and soft reflections. Yet no-one here knew about it. ;) A game designed for XB1X using this technique would be spectacular, and if it weren't for the inclusion of RTX in nVidia's latest pro-focussed GPUs, we'd be talking about different solutions with a unified view on their adoption and game engines' short-term future. We'd be looking at BFV showing cone-traced specular highlights that run on all GPUs instead of RTX-specific ray-traced reflections.

This is where console tech wants to be versatile, to enable acceleration of different solutions. If the RT hardware can be used to accelerate volume traversal and perform alternative shape tracing, its inclusion is more valuable.
 
I don't think anyone has discounted today's lighting solutions. But they are either baked, or use a static form of GI. There are very few games that have globally dynamic GI, and those that do still have hard limitations. The performance is not great.

The idea that all voxel-based GI is equal in quality or performance is false. LPV GI worked on an Xbox One with Fable Legends. The one in CryEngine only works for one large source. That's hardly the same as multiple light sources in a scene, all of them contributing GI everywhere.

Most of these GI solutions work best outdoors where it's straightforward, but in a room, that's another story. Nor are they going to work well with every engine.
 