AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Status
Not open for further replies.
Does it matter, though? It ought to be more than enough for 4K

The only thing that matters to me is figuring out why the fillrate/compute ratio changed so drastically compared to previous generations.
It could be that 64ROPs are plenty for 4K, and AMD isn't predicting an uptick in highend monitor resolution anytime soon, like you say.
 
Chip - ALUs - ROPs - ALU:ROP
Tahiti - 2048 - 32 - 64:1
Hawaii - 2816 - 64 - 44:1
Fiji - 4096 - 64 - 64:1
Vega 10 - 4096 - 64 - 64:1
Vega 20 - 4096 - 64 - 64:1
Navi 21 - 5120 - 64 - 80:1
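The ratios in the list are just ALU count divided by ROP count; a quick sketch recomputing them from the figures above:

```python
# Recompute the ALU:ROP ratios quoted above (shader ALUs, ROPs).
chips = {
    "Tahiti":  (2048, 32),
    "Hawaii":  (2816, 64),
    "Fiji":    (4096, 64),
    "Vega 10": (4096, 64),
    "Vega 20": (4096, 64),
    "Navi 21": (5120, 64),
}
for name, (alus, rops) in chips.items():
    print(f"{name}: {alus // rops}:1")
```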
 
Actually I think Marbles RTX did have caustics on the marble; the thing is that the denoiser erases them in motion. Still shots from the demo have caustics on the marble, for example.

I think that might just be the specular reflection under the metallic surface, and there's another segment in the video showing that as well, which explains the lighting conditions in the first scene and why the marble caustics are entirely absent during the dynamic sequences ...
 
Hmm... so 64 ROPs @ 2GHz = 1TB/s for FP16 writes or INT8 blending (8 bytes per pixel). Until GDDR6/HBM2, there wouldn't have been enough bandwidth to feed more fillrate anyway (at Hawaii and Vega clocks on GDDR5/HBM1), compute needs aside.

Even with a 384-bit bus, that gives about 768 GB/s @ 16Gbps. I kind of wonder if the clocks will be modest anyway just to keep power consumption in check, so maybe Navi-dad is more in the 16-17 TF range.
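A back-of-the-envelope check of both numbers above (assuming 8 bytes per pixel for FP16 writes, and a 384-bit bus at 16 Gbps per pin):

```python
# ROP write-bandwidth demand: ROPs * clock (GHz) * bytes per pixel.
rops, clock_ghz, bytes_per_pixel = 64, 2.0, 8   # FP16 RGBA write = 8 bytes
rop_bw_gbs = rops * clock_ghz * bytes_per_pixel
print(f"ROP demand: {rop_bw_gbs:.0f} GB/s")     # 1024 GB/s, i.e. ~1 TB/s

# Memory-bandwidth supply: bus width (bits) / 8 * per-pin rate (Gbps).
bus_bits, gbps_per_pin = 384, 16
mem_bw_gbs = bus_bits / 8 * gbps_per_pin
print(f"GDDR6 supply: {mem_bw_gbs:.0f} GB/s")   # 768 GB/s
```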

Perhaps then there would be a Pro Card with HBM2 to push things further while keeping bus-power in check.


I guess? :p
 
I think that might just be the specular reflection under the metallic surface, and there's another segment in the video showing that as well, which explains the lighting conditions in the first scene and why the marble caustics are entirely absent during the dynamic sequences ...

Nvidia did come up with a decently efficient raytraced caustics solution, at least for direct lighting. No idea if they used it in the demo though.

Regardless, most engines are mixed deferred/forward today anyway, even the ones that say they're deferred or forward. So ROPs are still relevant, though how important they are, and to which engine, is hard to say. Raymarched volumetrics are growing more and more important; I wouldn't be surprised to see them start to replace alpha translucency for particles pretty soon. After all, most alpha-based translucent particle systems, even with a bunch of modern tricks slapped on, still stick out as blurry undersampled junk nowadays, especially with temporal AA supersampling everything else but the already undershaded particles.
 
Nvidia did come up with a decently efficient raytraced caustics solution, at least for direct lighting. No idea if they used it in the demo though.

It's important to remember that the affected lighting conditions in some of these more complex cases can have different contributions from the multiple ray paths. That being said, I do not think caustics are contributing to what we're seeing: of the few transparent objects I've seen, such as the empty jar or the eyeglass, none seem to exhibit transmittance or non-uniform shadowing. While it is possible that denoising can introduce bias as mentioned, and thus potentially darken the concentrated patches of light, the demo fails in ALL cases with the other objects, which suggests that caustics aren't handled at all. There's also not nearly as much freedom with the materials in the demo, given the lack of translucent materials, so it's more than likely that caustic effects were omitted in their entirety because it probably wasn't designed to handle those materials well ... (would've been a pretty bad showcase if they tried)

Regardless most engines are mixed deferred/forward today anyway, even the ones that say they're deferred or forward. So ROPs are still relevant, though how important they are, to which engine, is hard to say. Raymarched volumetrics are growing more and more important, I wouldn't be surprised to see them start to replace alpha translucency for particles pretty soon. After all most alpha based translucent particle systems, even with a bunch of modern tricks slapped on, still stick out as blurry undersampled junk nowadays, especially with temporal aa supersampling everything else but the already undershaded particles.

I don't think ray marching will be all that popular either. It has many of the same issues as ray tracing does. It might even be more problematic with acceleration structures like voxels, which have tons of empty space, meaning a lot of rays are tested against the small area of the voxel grid actually covered by geometry. This causes a lot of rays to miss, wasting work.

ROPs are still probably going to be the best bet for handling transparency well into the coming generation ...
 
likely that caustic effects were omitted in their entirety because it probably wasn't designed to handle those materials well ... (would've been a pretty bad showcase if they tried)
"Not handled well" in the sense that it would be several magnitudes too expensive to render correctly. Realistically speaking, how many bounces does the rest of the scene require, to handle anything up to plausible ambient occlusion? How many samples and bounces do you need to get a stable solution for caustics? By far too many. See e.g. https://www.interstation3d.com/tutorials/mray_caustics/caustics_example01.htm last paragraph for a visual comparison how quickly caustics degenerate with reasonable sample counts.
 
I don't think ray marching will be all that popular either. It has many of the same issues as ray tracing does. It might even be more problematic with acceleration structures like voxels, which have tons of empty space, meaning a lot of rays are tested against the small area of the voxel grid actually covered by geometry. This causes a lot of rays to miss, wasting work.

ROPs are still probably going to be the best bet for handling transparency well into the coming generation ...

Eh, you just begin and end the raymarch in some tight AABB that defines each particle system you have. Skips empty space pretty handily and you can merge the particle system to lower octree mips over distance to keep performance relatively linear. Still the real trick is to have the particle system moving and high frequency while also using temporal aa to upsample it in some manner that's actually cheap enough to be a benefit.

That being said, for flat translucent surfaces like windows, traditional rendering with ROPs will probably stick around for quite a while; right now it's good enough.
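The "begin and end the raymarch in a tight AABB" idea above is just a ray/box slab test used to clamp the march interval. A minimal sketch of that test (pure Python, function and parameter names are my own for illustration):

```python
def ray_aabb_interval(origin, inv_dir, box_min, box_max):
    """Return (t_enter, t_exit) of a ray against an AABB via the slab
    test, or None on a miss. inv_dir holds 1/direction per axis."""
    t_enter, t_exit = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
    if t_enter > t_exit:
        return None          # ray never touches the particle system
    return t_enter, t_exit   # march only inside [t_enter, t_exit]
```

Marching only between `t_enter` and `t_exit` skips all the empty space outside the particle system's bounds; the octree-mip merging over distance would then reduce the step count inside the interval.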
 
8K resolution is completely irrelevant, even at the highest end. As long as you can do 4K, it's enough

An AMD engineer said last year that 8k gaming "is not as far away as we might think" and that he "thinks that with the next-generation of cards, multiple cards, you'll be able to do 8k".
 
"Not handled well" in the sense that it would be several magnitudes too expensive to render correctly. Realistically speaking, how many bounces does the rest of the scene require, to handle anything up to plausible ambient occlusion? How many samples and bounces do you need to get a stable solution for caustics? By far too many. See e.g. https://www.interstation3d.com/tutorials/mray_caustics/caustics_example01.htm last paragraph for a visual comparison how quickly caustics degenerate with reasonable sample counts.

That example was a fairly tame case involving caustics. There are nightmare scenarios exhibiting chains of complex transmittance->specular->transmittance->specular paths, which will pose a challenge for just about any sampling method to converge on the correct solution.

I think we should start dropping the 'bounce' terminology and start speaking in terms of 'paths', since that will be more meaningful in the future when we discuss light-scattering effects with ray tracing. Light does more than just reflect off a diffuse/specular surface; it can also enter and exit a medium ...

Rendering caustics and participating media in real time will be for another generation to come. The latter will prove to be a big 'if', since it could determine whether or not we give up on the idea of being able to render realistic clouds/fog/smoke altogether ...
 