Optimizations of PC Titles *spawn*

Strange Brigade in particular is an interesting one, as back in the Vega vs Pascal days it was always held up as an example of a title that heavily favoured AMD:

https://www.guru3d.com/news-story/amd-radeon-graphics-with-strange-brigade.html
https://www.guru3d.com/articles-pages/amd-radeon-vii-16-gb-review,15.html

It was one of the few good DX12 implementations back then, and one that also went the extra mile to squeeze some extra performance out of the AMD cards using async compute.

Of course, back then, the AMD cards were heavy on compute resources compared to their price-equivalent Nvidia cards - with Ampere's huge number of FP32 cores, the situation is now reversed.
That's not a conspiracy, Nvidia just had a huge generational leap in FP32 compute, and that's reflected in a game that was optimized to make use of it well.
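
For a rough sense of the scale of that leap, here's a back-of-the-envelope FP32 calculation using commonly cited reference shader counts and boost clocks (approximate figures; real game performance obviously doesn't scale linearly with theoretical FLOPS):

```python
# Back-of-the-envelope peak FP32 throughput: shaders * 2 ops/clock (FMA) * clock.
# Shader counts and boost clocks are the commonly cited reference-spec numbers,
# so treat the results as approximate.
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * 2 * boost_ghz / 1000.0

cards = {
    "RTX 2080 Ti (Turing)": (4352, 1.545),
    "RTX 3080 (Ampere)":    (8704, 1.71),
    "RTX 3090 (Ampere)":    (10496, 1.70),
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: ~{fp32_tflops(shaders, clock):.1f} TFLOPS peak FP32")
```

That's roughly a 2-2.5x jump in peak FP32 within one generation, which is exactly the kind of resource a compute-heavy title like Strange Brigade can soak up.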

Every GCN-optimized game runs superbly on Ampere. I think that makes sense because those games are compute-oriented and lean less on the fixed-function side (no tessellation, low geometry, etc.).
 
They don't. And Nvidia doesn't either (outside their research group).
You know, we've had image quality metrics in image compression for roughly 40 years, and pseudo-visual (perceptual) ones for maybe 20. They're pretty standard in offline rendering evaluation too, and in respectable BRDF approximation papers. I'm fairly sure there is one, and only one, way to teach a learning, adaptive algorithm about "correct" information ... though ... c'mon, let us humans not be lied to by some incorruptible, objective, hard math. Pfff. Doesn't sell.

Sorry ;)

Image quality in this case is literally more art than science. An objective quantitative measure of IQ isn't very useful at all.
 
They don't. And Nvidia doesn't either (outside their research group).
You know, we've had image quality metrics in image compression for roughly 40 years, and pseudo-visual (perceptual) ones for maybe 20. They're pretty standard in offline rendering evaluation too, and in respectable BRDF approximation papers. I'm fairly sure there is one, and only one, way to teach a learning, adaptive algorithm about "correct" information ... though ... c'mon, let us humans not be lied to by some incorruptible, objective, hard math. Pfff. Doesn't sell.

Sorry ;)
https://research.nvidia.com/publication/2020-07_FLIP

I watched the related presentation at HPG 2020 recently. It's pretty interesting and I think we may well have escaped the crappy local minima of SSIM and the like.


Image quality in this case is literally more art than science. An objective quantitative measure of IQ isn't very useful at all.
You can run a "reference" ray trace pass for game engine graphics (which will take some time) to see what the image quality should be, and then use that reference image to assess the quality of the real time ray tracing code.
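
As a minimal sketch of what that comparison could look like in practice, assuming the offline reference and a real-time frame have been dumped to disk as images (filenames are placeholders), something like scikit-image's classic PSNR/SSIM works; FLIP's reference implementation linked above slots in the same way:

```python
# Minimal sketch: score a real-time frame against an offline "ground truth" render
# with two classic full-reference metrics. Requires: pip install scikit-image imageio
import imageio.v3 as iio
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = iio.imread("reference_pathtraced.png")  # placeholder: slow reference pass
test      = iio.imread("realtime_rt.png")           # placeholder: real-time RT output

psnr = peak_signal_noise_ratio(reference, test)
ssim = structural_similarity(reference, test, channel_axis=-1)  # skimage >= 0.19

print(f"PSNR: {psnr:.2f} dB   SSIM: {ssim:.4f}")
```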
 
Developers will decide that, not Intel or Nvidia. It's just shader code after all, just like Nvidia's HBAO.
I was under the impression that Nvidia gave developers a "GameWorks" DLL to bundle with the game. Maybe that's no longer the case?

Dunno how the NVAPI DLL relates to "Gameworks":

https://developer.nvidia.com/nvapi
https://docs.nvidia.com/gameworks/content/gameworkslibrary/coresdk/nvapi/group__dx.html

Requires NDA Edition for full control of this feature.

Oh...
 
https://research.nvidia.com/publication/2020-07_FLIP

I watched the related presentation at HPG 2020 recently. It's pretty interesting and I think we may well have escaped the crappy local minima of SSIM and the like.

Yeah, I met Tomas at HPG 2018, awesome guy. I wish I'd had such a professor at uni way back. I'm also friends with the other Thomas from SSIM. Basically my neighbourhood. :)
There are crazy ideas floating around there, and in machine learning in general; they look like sick magic, but they're real, hardcore math and logic, with proven bounds and predictable behaviour and all. It's sad that PR gives in to the lower instincts (or whatever) instead of trying to educate, explain, and lift everyone a step up, mentally.
 
You can run a "reference" ray trace pass for game engine graphics (which will take some time) to see what the image quality should be, and then use that reference image to assess the quality of the real time ray tracing code.

Yeah, I'm not saying it's not possible, just that the output won't be very useful. Such an objective measure will, for example, give horrendous scores to screen-space reflections and cube maps, which we already know are quite acceptable. If it's used as a measure of how many rays are good enough for a given effect, it might be somewhat handy. If it's used to decide which of several competing effects (shadows, AO, reflections) should get a higher ray allocation, it's really useless.
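
For the first use, a rough sketch of what "how many rays are good enough for a given effect" could look like against an offline reference; `render_effect`, the budgets and the error threshold are all hypothetical placeholders, not any engine's actual API:

```python
# Rough sketch: sweep samples-per-pixel for one effect (shadows, AO, reflections)
# and pick the smallest budget whose error against an offline reference is acceptable.
# render_effect() and ERROR_THRESHOLD are hypothetical placeholders.
import numpy as np

ERROR_THRESHOLD = 0.01  # arbitrary; would be tuned per effect and per title

def render_effect(effect: str, spp: int) -> np.ndarray:
    """Placeholder: render one effect's buffer at the given samples per pixel."""
    raise NotImplementedError

def mean_abs_error(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32))))

def pick_budget(effect: str, budgets=(1, 2, 4, 8, 16), reference_spp=1024) -> int:
    reference = render_effect(effect, reference_spp)  # slow "ground truth" pass
    for spp in budgets:
        if mean_abs_error(render_effect(effect, spp), reference) < ERROR_THRESHOLD:
            return spp
    return budgets[-1]  # never hit the threshold; fall back to the largest budget
```

Note this only tells you when a single effect stops improving; it says nothing about which of several effects deserves the rays, which is the point above.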
 
How anyone can unironically complain about a perceived lack of optimisation — or intentional performance gimping — for Nvidia cards on AMD sponsored titles, after over a decade of “The Way It’s Meant to be Played” or GameWorks, is honestly baffling.

If you want to talk about needless compute for no IQ gains, take a look at the tessellation shenanigans Nvidia was pulling just to penalise AMD's GCN uarch.
 
How anyone can unironically complain about a perceived lack of optimisation — or intentional performance gimping — for Nvidia cards on AMD sponsored titles, after over a decade of “The Way It’s Meant to be Played” or GameWorks, is honestly baffling.

If you want to talk about needless compute for no IQ gains, take a look at the tessellation shenanigans Nvidia was pulling just to penalise AMD's GCN uarch.
Yeah, and in the meantime a one-man-studio game following the MS DX12 RT API, without help from anybody, produces this:
[attached benchmark screenshot: Bright Memory RT, RTX 3070 vs RX 6800]

Edit: sarcasm aside, looking at Minecraft RTX or DF's analysis of Watch Dogs: Legion console vs PC (where the XBSX quality setting is equivalent to what an RTX 2060 can do), it's very clear that RDNA2 is far behind RTX 30 when talking RT. Nothing for AMD to be ashamed of, it's their first generation, but please stop this nonsense about gimped titles, that's not what's happening this gen...
 
Yeah, and in the meantime a one-man-studio game following the MS DX12 RT API, without help from anybody, produces this:
[attached benchmark screenshot: Bright Memory RT, RTX 3070 vs RX 6800]

Edit: sarcasm aside, looking at Minecraft RTX or DF's analysis of Watch Dogs: Legion console vs PC (where the XBSX quality setting is equivalent to what an RTX 2060 can do), it's very clear that RDNA2 is far behind RTX 30 when talking RT. Nothing for AMD to be ashamed of, it's their first generation, but please stop this nonsense about gimped titles, that's not what's happening this gen...
Are you really comparing Nvidia performance with DLSS to AMD performance at native?
 
Yeah, and in the meantime a one-man-studio game following the MS DX12 RT API, without help from anybody, produces this:
[attached benchmark screenshot: Bright Memory RT, RTX 3070 vs RX 6800]

Edit: sarcasm aside, looking at Minecraft RTX or DF's analysis of Watch Dogs: Legion console vs PC (where the XBSX quality setting is equivalent to what an RTX 2060 can do), it's very clear that RDNA2 is far behind RTX 30 when talking RT. Nothing for AMD to be ashamed of, it's their first generation, but please stop this nonsense about gimped titles, that's not what's happening this gen...

I checked out that video and it was interesting, thanks (link if anyone's interested). Of note, it was an RTX 2060 Super they were using, not a vanilla 2060, and it was compared to a 52 CU RDNA2 GPU at 1.8 GHz with no Infinity Cache. A 72/80 CU, 2.2 GHz RDNA2 GPU with Infinity Cache will likely perform better; from preliminary tests/blurbs/news etc. it appears to land at least at RTX 3070 level, if not higher (game optimization/driver maturity dependent, of course).

It seems fairly clear that Nvidia is ahead in ray tracing performance, and given this is the second generation of their marquee feature, I don't think anyone's surprised. However, it's still early and there are significant optimizations possible on both sides for various games, so give it some time and wait for actual results instead of jumping to conclusions. If RT is a priority for you and you want to buy RTX cards, go ahead, but this incessant noise in literally every thread that AMD's ray tracing is bad/horrible/whatever is getting quite ridiculous, frankly.
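
For what it's worth, a quick back-of-the-envelope on the numbers above (CU count × clock only; it deliberately ignores Infinity Cache, bandwidth and anything RT-specific, so it's a ceiling, not a prediction):

```python
# Naive theoretical scaling from the figures quoted above: CU count * clock.
# Ignores Infinity Cache, memory bandwidth and RT-specific behaviour, so it is
# only a rough upper bound on how far the big Navi 21 parts can pull ahead.
baseline = 52 * 1.8  # console-class RDNA2 part used in the DF comparison

navi21 = {
    "72 CU @ 2.2 GHz": 72 * 2.2,
    "80 CU @ 2.2 GHz": 80 * 2.2,
}

for name, value in navi21.items():
    print(f"{name}: {value / baseline:.2f}x the console part (CU*clock only)")
```

That works out to roughly 1.7-1.9x the console part on paper, which is why extrapolating desktop RDNA2's RT performance from a 2060 Super vs console comparison is shaky.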
 
Hey, it seems RDNA2 finally shows its true power in ray tracing. And look at the SAM gains!
Thanks for the optimizations, AMD. Hell, AMD has optimized it sooo well that even the RTX 2080 Ti is capable of beating that 3090!

It seems the Valhalla and Dirt 5 optimizations are really portable and help leverage the potential of RDNA to its full extent!
 
If RT is a priority for you and you want to buy RTX cards, go ahead, but this incessant noise in literally every thread that AMD's ray tracing is bad/horrible/whatever is getting quite ridiculous, frankly.

Also, why is there this general assumption that Navi 21's RT performance will be stagnant and equal to the performance on release day?

When Turing released, RT performance on the first implementations was pretty terrible while IQ gains were almost negligible.
But AMD releases a new card that is running RT on games optimized exclusively for nvidia's hardware and we should assume this is the best we'll ever see from said new card?

Why does one get the benefit of the doubt for years, while the other gets a lifetime sentence?
Because a couple of developers said nvidia cards are faster at incoherent rays? Perhaps games and game engines should be made to avoid incoherent rays (secondary/tertiary bounces, if I understood correctly) as much as possible, since from what I'm reading those are pretty slow on nvidia hardware (and slower on RDNA2, reportedly).
It seems this performance problem has been on the radar for more than a couple of years now.
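
To illustrate what "avoiding incoherent rays" can mean in practice, here's a toy sketch of one common mitigation, binning secondary rays by direction so that rays processed together are more likely to take similar paths through the BVH. This is purely illustrative, not any engine's or vendor's actual code.

```python
# Toy illustration of ray binning: group secondary rays by direction octant so
# that rays processed together are more likely to take similar BVH paths.
# Real implementations do this on the GPU (e.g. sorting compacted ray buffers by
# a direction/origin key) and with much finer keys than a single octant.
import numpy as np

def direction_octant(directions: np.ndarray) -> np.ndarray:
    """3-bit key from the sign of each direction component (8 octants)."""
    signs = (directions > 0.0).astype(np.uint32)  # shape (N, 3), values 0/1
    return (signs[:, 0] << 2) | (signs[:, 1] << 1) | signs[:, 2]

def bin_rays(origins: np.ndarray, directions: np.ndarray):
    """Return origins/directions reordered so same-octant rays are contiguous."""
    order = np.argsort(direction_octant(directions), kind="stable")
    return origins[order], directions[order]

# Usage: generate some random secondary rays and bin them.
rng = np.random.default_rng(0)
origins = rng.uniform(-1.0, 1.0, size=(1024, 3))
directions = rng.normal(size=(1024, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

binned_origins, binned_directions = bin_rays(origins, directions)
```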


Unless, of course, we're about to see a subpixel-triangles / Hairworks saga 2.0, which is nvidia pushing developers to kill performance on their own cards just because it kills performance on the competition disproportionately more.
And if that's the case, expect to see lots of opposite-facing mirrors everywhere, just so people can see triple-mirrored rays/images that put the 3080 at 40 FPS, but it's OK because the 6800 XT won't go above 10 FPS.
 
How exactly "Ampere is twice as fast" best case scenario for AMD? I'm pretty sure 3080 isn't beating 6800XT in most RT games 2:1
It's not a statement, it's wishful thinking.
(it was a tongue-in-cheek comment on my side)

Just to be clear: I think Ampere (GA102) being "at least twice as fast" at RT as Navi 21 is just what nvidia's PR will try to sell everywhere from now on, since they just fell significantly behind on rasterization perf/watt. Their message will change radically from "faster at gaming" to "faster at raytracing", and all of a sudden raytracing will be the most important thing to ever happen in gaming.
 
Just to be clear: I think Ampere (GA102) being "at least twice as fast" at RT as Navi 21 is just what nvidia's PR will try to sell everywhere from now on, since they just fell significantly behind on rasterization perf/watt. Their message will change radically from "faster at gaming" to "faster at raytracing", and all of a sudden raytracing will be the most important thing to ever happen in gaming.

You’re assuming that faster at gaming and faster at raytracing are going to be 2 different things this generation from a typical gamer’s perspective. Nvidia’s marketing team has a pretty easy job if big titles continue to embrace RT.

Practically speaking Nvidia has no reason to change course. AMD has a very compelling product, but it's not the first time that's happened and we know how that usually goes in terms of market penetration. All the other stuff that helps Nvidia stay ahead - brand recognition, software, feature set, developer relations - is still there. Of course, if Nvidia's advantage in RT solidifies they will beat us over the head with it, but that's expected.
 
Unless, of course, we're about to see a subpixel-triangles / Hairworks saga 2.0, which is nvidia pushing developers to kill performance on their own cards just because it kills performance on the competition disproportionately more.
And if that's the case, expect to see lots of opposite-facing mirrors everywhere, just so people can see triple-mirrored rays/images that put the 3080 at 40 FPS, but it's OK because the 6800 XT won't go above 10 FPS.

I think this is exactly how this and the next generation will play out.

64x tessellation for underwater assets, anyone?
 