Optimizations of PC Titles *spawn*

Techpowerup doesn't always retest all older GPUs with new drivers. HUB always does. You also have to factor in the performance variances due to differing benchmark scenes.

That's fair and all, I just wanted to point out the fact that not all the reviews use the latest drivers. Also, some games are tested with the integrated benchmark, so I'd assume those runs are testing the same thing.
 
That's fair and all, I just wanted to point out the fact that not all the reviews use the latest drivers. Also, some games are tested with the integrated benchmark, so I'd assume those runs are testing the same thing.
Both reviewers sometimes use their own gameplay scenes rather than built-in benchmarks.
 
These are titles that Nvidia sponsored and that are even linked on their site. I don't know exactly what AMD bundled in the past, but recent AMD bundles included Rainbow Six: Siege and AC: Valhalla, with one of them not being on the Nvidia sponsorship list and the other being an AMD-sponsored title. I own several of the titles on the list and they all show the Nvidia advertising at start. About the ratings: they include exactly these titles, so they reflect the state of the optimization (and that's why I feel the claims of "not being fair to Nvidia" are ridiculous). One thing I don't understand is the difference between scores for the same game taken from different sites, e.g. Techpowerup's scores don't line up with recent Hardware Unboxed tests of the latest Navi 10 drivers:


What I can think of is Techpowerup and Computerbase using an older WHQL driver (or older numbers) instead of the most recent one - e.g. in the reviews I see 20.8.3 WHQL and 20.7.2, while at the moment 20.11.1 is the latest and 20.9.1 is the WHQL on AMD's site - and frankly 20.9.1 has been there since September (before the Ampere launch).
That 7% slower on average at 1440p you linked to aligns rather well with TPU's "average fps" at 1440p and also somewhat with Computerbase's -10% in their rating.
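
Part of the spread between sites can also come down to how the per-game numbers are rolled up into a single figure. Purely as a toy illustration with made-up fps numbers (and assuming one site averages raw fps while another takes a geometric mean of per-game ratios - not a claim about what TPU, CB or HUB actually do), the same data can yield slightly different headline percentages:
Code:
/* Toy illustration: same per-game results, two common summary methods. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* fps of card A vs card B in four hypothetical games */
    double a[] = {  90.0, 60.0, 140.0, 45.0 };
    double b[] = { 100.0, 58.0, 160.0, 50.0 };
    int n = 4;

    double sum_a = 0.0, sum_b = 0.0, log_ratio = 0.0;
    for (int i = 0; i < n; i++) {
        sum_a += a[i];
        sum_b += b[i];
        log_ratio += log(a[i] / b[i]);
    }

    /* "average fps" style summary vs. geometric mean of per-game ratios */
    printf("avg-fps ratio: %.1f%%\n", 100.0 * (sum_a / n) / (sum_b / n));
    printf("geomean ratio: %.1f%%\n", 100.0 * exp(log_ratio / n));
    return 0;
}
Add a different game selection and different test scenes on top of that, and a few percentage points of divergence between sites is not surprising.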

From the list you gave, I only own Strange Brigade (from said AMD bundle) and for me at least, it does not show any Nvidia art. I googled a bit to see if memory serves well, and in this case it did: https://www.dsogaming.com/news/resi...de-will-be-optimized-for-amds-graphics-cards/
„According to Scott Herkelman, VP and GM at AMD Radeon Gaming, The Division 2, Resident Evil 2 Remake and Strange Brigade will be optimized for AMD’s graphics cards.“
 
HUB tests on 14 games with new drivers gave the 5700 XT a very slight advantage at 1080p and a slight disadvantage at 1440p and 4K compared to the 2070 Super, while TPU and CB show a 7-10% disadvantage even at 1080p. Btw, I don't own Strange Brigade, but several titles on that list had Nvidia advertising in them, from The Witcher to Company of Heroes, World of Tanks, the Assassin's Creed saga before Valhalla and so on. It is no mystery that Nvidia had very good ties with developers in the past - like it's no mystery that both Control and Cyberpunk 2077 are heavily Nvidia-sponsored titles, like many others in the past. Heck, this was even discussed TONS of times in articles and reviews. Nvidia itself was even proud of that. What I can see, and what can be understood from Herkelman's statement, is that now (finally) AMD is doing the same by helping developers optimize their games for AMD hardware, too. And it was expected, as AMD hardware is in both consoles as well as a minority of the PC market.
 
HUB tests on 14 games with new drivers gave the 5700 XT a very slight advantage at 1080p and a slight disadvantage at 1440p and 4K compared to the 2070 Super, while TPU and CB show a 7-10% disadvantage even at 1080p.
Here's (in the picture I attached) the 1440p result from the video you linked, complete with timecode. It's noted at the top: -7% for the 5700 XT compared to the 2070S, which was my point. No need to juggle "slight" and "very slight" and turn to hard numbers only for TPU and CB, which you seem not to take as reliable sources in this instance. Not sure why, though.

Radeon Software 20.9.1 landed on September 16th and the video you linked was uploaded on September 12th, so the drivers are possibly not too different; at least Computerbase redid everything with 20.7.1 and 20.8.1 for Horizon Zero Dawn.

Just wondering about the points you raise on the viability of individual tests. I am not debating optimizations on one card being good and on the other being bad, or vice versa. It's pretty clear that effects YOU (as a developer) design would be a good fit for YOUR hardware. Whether they are engineered to specifically hamper the other guys' hardware has been analyzed to death with Crysis 2's roadblocks-o'-triangles, which of course were ridiculous. But after that?

Edit: Apparently, I grabbed a graph out of the video that showed previous results the YouTuber had obtained. Please disregard the screenshot and the corresponding parts of my comment above!
 

Attachments

  • Unbenannt.PNG (293.2 KB)
Here's (in the picture I attached) the 1440p result from the video you linked, complete with timecode. It's noted at the top: -7% for the 5700 XT compared to the 2070S, which was my point. No need to juggle "slight" and "very slight" and turn to hard numbers only for TPU and CB, which you seem not to take as reliable sources in this instance. Not sure why, though.

Radeon Software 20.9.1 landed on September 16th and the video you linked was uploaded on September 12th, so the drivers are possibly not too different; at least Computerbase redid everything with 20.7.1 and 20.8.1 for Horizon Zero Dawn.

Just wondering about the points you raise on the viability of individual tests. I am not debating optimizations on one card being good and on the other being bad, or vice versa. It's pretty clear that effects YOU (as a developer) design would be a good fit for YOUR hardware. Whether they are engineered to specifically hamper the other guys' hardware has been analyzed to death with Crysis 2's roadblocks-o'-triangles, which of course were ridiculous. But after that?

Ehm... it seems you did not hear what the video said. They said those were the results from the LAST time they tested the cards, before the new drivers. Their current results are at minute 12:33.
By the way, my point is that optimization for one or the other architecture is something that has always happened and will always happen (especially with ray tracing, unfortunately) for economic reasons. I unfortunately accept this as a fact; my hope is that developers will always give their best shot for both vendors. But people whining that a few games running well on AMD architecture is "unfair competition", after years of seeing the same or worse on the other side, make me laugh.
 
Ehm... it seems you did not hear what the video said. They said those were the results from the LAST time they tested the cards, before the new drivers. Their current results are at minute 12:33.
That's right. I skipped to the graphs, because I cannot be bothered to watch feature-length videos all day long.
So you're saying -7% is the old result, and they did not give a new one in the same manner as the (apparently older) graph I grabbed from a minute later? Then I'll update my post above to clarify.


By the way, my point is that optimization for one or the other architecture is something that has always happened and will always happen (especially with ray tracing, unfortunately). I unfortunately accept this as a fact; I can only hope developers will give their best shot for both vendors. But people whining that a few games running well on AMD architecture is "unfair competition", after years of seeing the same or worse on the other side, make me laugh.
Ok, fair point. But why is optimization happening with raytracing unfortunate?
 
Oh, I just noticed you think I take TPU and CB as unreliable sources. That's not the case (otherwise I would not have linked TPU's results in a previous post). I said that they probably tested with older drivers and that newer drivers are likely to give better results, that's all.
 
Ok, fair point. But why is optimization happening with raytracing unfortunate?

Not optimization itself, but ray tracing will almost surely be the "next war" over optimization. As Chris' post above says, the way AMD and Nvidia do ray tracing in hardware differs quite a bit, and this leads to quite different ways of optimizing. This should mainly be taken care of in the drivers, but of course the application side has a lot to say about it too. It's not ray tracing in itself that is "unfortunate"; the next optimization war will be, in my opinion.
 
With Intel entering the fray, it will be interesting to see how this turns out. Apparently they have no problem implementing Nvidia's in-house ray tracing extensions on their hardware.
Wait a sec, VK_KHR_ray_tracing (KHR = Khronos) is NOT an NV in-house extension; VK_NV_ray_tracing is the NV-specific one.

That ANV commit points to the KHR one; there is no mention of the NV-specific extension.
Code:
   case SpvExecutionModelRayGenerationKHR:
      return MESA_SHADER_RAYGEN;
   case SpvExecutionModelAnyHitKHR:
      return MESA_SHADER_ANY_HIT;
   case SpvExecutionModelClosestHitKHR:
      return MESA_SHADER_CLOSEST_HIT;
   case SpvExecutionModelMissKHR:
      return MESA_SHADER_MISS;
   case SpvExecutionModelIntersectionKHR:
      return MESA_SHADER_INTERSECTION;
   case SpvExecutionModelCallableKHR:
      return MESA_SHADER_CALLABLE;
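
If anyone wants to see what their own driver exposes, here is a minimal sketch (plain Vulkan C, nothing Mesa-specific; the helper name is made up for illustration) that enumerates the device extensions and looks for the two extension strings mentioned above:
Code:
/* Sketch: report which of the two ray tracing extensions a device exposes.
 * Instance/physical-device setup is omitted; caller passes the device in. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vulkan.h>

void print_rt_extensions(VkPhysicalDevice dev)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(dev, NULL, &count, NULL);

    VkExtensionProperties *props = malloc(count * sizeof(*props));
    vkEnumerateDeviceExtensionProperties(dev, NULL, &count, props);

    int has_khr = 0, has_nv = 0;
    for (uint32_t i = 0; i < count; i++) {
        /* Cross-vendor Khronos extension vs. the older NV-specific one. */
        if (strcmp(props[i].extensionName, "VK_KHR_ray_tracing") == 0)
            has_khr = 1;
        if (strcmp(props[i].extensionName, "VK_NV_ray_tracing") == 0)
            has_nv = 1;
    }
    free(props);

    printf("VK_KHR_ray_tracing: %s\n", has_khr ? "yes" : "no");
    printf("VK_NV_ray_tracing:  %s\n", has_nv ? "yes" : "no");
}
An engine that supports both would typically prefer the KHR path when it is present and only fall back to the NV one on drivers that lack it.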
 
Wait a sec, VK_KHR_ray_tracing (KHR = Khronos) is NOT an NV in-house extension; VK_NV_ray_tracing is the NV-specific one.

That ANV commit points to the KHR one; there is no mention of the NV-specific extension.
Code:
   case SpvExecutionModelRayGenerationKHR:
      return MESA_SHADER_RAYGEN;
   case SpvExecutionModelAnyHitKHR:
      return MESA_SHADER_ANY_HIT;
   case SpvExecutionModelClosestHitKHR:
      return MESA_SHADER_CLOSEST_HIT;
   case SpvExecutionModelMissKHR:
      return MESA_SHADER_MISS;
   case SpvExecutionModelIntersectionKHR:
      return MESA_SHADER_INTERSECTION;
   case SpvExecutionModelCallableKHR:
      return MESA_SHADER_CALLABLE;
They are currently using the Khronos extensions, but mentioned they would have no issues using the NV extensions. So it seems their hardware would have no problem with it.
Intel plans to keep using Khronos' open ray tracing extensions as much as possible. However, they might look at implementing Nvidia's in-house extensions if they find more and more games using Nvidia's extensions rather than Khronos' open solution.
 
Not optimization itself, but ray tracing will almost surely be the "next war" over optimization. As Chris' post above says, the way AMD and Nvidia do ray tracing in hardware differs quite a bit, and this leads to quite different ways of optimizing. This should mainly be taken care of in the drivers, but of course the application side has a lot to say about it too. It's not ray tracing in itself that is "unfortunate"; the next optimization war will be, in my opinion.
A fun thing just came to mind: if this turns out to be true and different optimizations are implemented with the intent to hurt performance on the competitor's cards, first-gen RT titles might actually be interesting for comparing performance, since with them no one would have had an idea of what hurts AMD's implementation the most.
 
HUB tests on 14 games with new drivers gave the 5700 XT a very slight advantage at 1080p and a slight disadvantage at 1440p and 4K compared to the 2070 Super, while TPU and CB show a 7-10% disadvantage even at 1080p. Btw, I don't own Strange Brigade, but several titles on that list had Nvidia advertising in them, from The Witcher to Company of Heroes, World of Tanks, the Assassin's Creed saga before Valhalla and so on. It is no mystery that Nvidia had very good ties with developers in the past - like it's no mystery that both Control and Cyberpunk 2077 are heavily Nvidia-sponsored titles, like many others in the past. Heck, this was even discussed TONS of times in articles and reviews. Nvidia itself was even proud of that. What I can see, and what can be understood from Herkelman's statement, is that now (finally) AMD is doing the same by helping developers optimize their games for AMD hardware, too. And it was expected, as AMD hardware is in both consoles as well as a minority of the PC market.

Strange Brigade in particular is an interesting one, as back in the day when it was Vega vs Pascal it was always used as an example of a title that heavily favoured AMD:

https://www.guru3d.com/news-story/amd-radeon-graphics-with-strange-brigade.html
https://www.guru3d.com/articles-pages/amd-radeon-vii-16-gb-review,15.html

It was one of the few good DX12 implementations back then that also went the extra mile to squeeze some extra performance out of the AMD cards using Async compute.

Of course, back then, the AMD cards were heavy on compute resources compared to their price-equivalent Nvidia cards - with Ampere's huge number of FP32 cores, the situation is now reversed.
That's not a conspiracy, Nvidia just had a huge generational leap in FP32 compute, and that's reflected in a game that was optimized to make use of it well.
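
As a side note on the async compute part: on PC this usually comes down to submitting compute work on a queue that can overlap with the graphics queue. A minimal sketch in Vulkan terms (not claiming this is what Strange Brigade does internally; the function name is made up for illustration) would look for a compute-capable queue family without the graphics bit:
Code:
/* Sketch: find a dedicated compute queue family, the usual vehicle for
 * "async compute" work that overlaps with graphics. Returns -1 if none. */
#include <stdlib.h>
#include <vulkan/vulkan.h>

int find_async_compute_family(VkPhysicalDevice dev)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(dev, &count, NULL);

    VkQueueFamilyProperties *families = malloc(count * sizeof(*families));
    vkGetPhysicalDeviceQueueFamilyProperties(dev, &count, families);

    int result = -1;
    for (uint32_t i = 0; i < count; i++) {
        VkQueueFlags flags = families[i].queueFlags;
        /* Compute-capable but without graphics: work submitted here can
         * run alongside the graphics queue. */
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT)) {
            result = (int)i;
            break;
        }
    }
    free(families);
    return result;
}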
 
What's unclear to me is how developers are currently assessing the ray-budget versus image quality question, when deciding how to use ray tracing in their games.

They don't. And Nvidia doesn't either (outside their research group).
You know, we have had image quality metrics for about 40 years in image compression, and even pseudo-visual (perceptual) ones for about 20. They're pretty standard in offline rendering evaluation too, and in respectable BRDF approximation papers. I'm fairly sure there is one, and only one, way to teach a learning, adaptive algorithm what "correct" information is... though, c'mon, let us humans not be lied to by some incorruptible, objective hard math. Pfff. Doesn't sell.

Sorry ;)
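
To make the "image quality metrics" remark concrete: the classic objective metric from image compression is PSNR, while SSIM and friends are the perceptual ("pseudo-visual") kind. A minimal sketch of the former over two 8-bit buffers (function name and sample values made up for illustration):
Code:
/* PSNR between a reference image and a test image, both 8-bit grayscale. */
#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

double psnr_8bit(const uint8_t *ref, const uint8_t *img, size_t n)
{
    double mse = 0.0;
    for (size_t i = 0; i < n; i++) {
        double d = (double)ref[i] - (double)img[i];
        mse += d * d;
    }
    mse /= (double)n;
    if (mse == 0.0)
        return INFINITY;              /* identical images */
    return 10.0 * log10(255.0 * 255.0 / mse);
}

int main(void)
{
    uint8_t ref[4] = { 10, 200, 30, 128 };
    uint8_t img[4] = { 12, 198, 33, 127 };
    printf("PSNR: %.2f dB\n", psnr_8bit(ref, img, 4));
    return 0;
}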
 
Strange Brigade in particular is an interesting one, as back in the day when it was Vega vs Pascal it was always used as an example of a title that heavily favoured AMD:

https://www.guru3d.com/news-story/amd-radeon-graphics-with-strange-brigade.html
https://www.guru3d.com/articles-pages/amd-radeon-vii-16-gb-review,15.html

It was one of the few good DX12 implementations back then that also went the extra mile to squeeze some extra performance out of the AMD cards using Async compute.

Of course, back then, the AMD cards were heavy on compute resources compared to their price-equivalent Nvidia cards - with Ampere's huge number of FP32 cores, the situation is now reversed.
That's not a conspiracy, Nvidia just had a huge generational leap in FP32 compute, and that's reflected in a game that was optimized to make use of it well.

But in fact the point of my post is that if one wants to find an example of "unfair competition" just from results, one can claim whatever one wants. To point out unfair competition one cannot simply say "according to my perception it runs badly, so it was intentionally crippled"; one must point out in which way and how. It may simply be a case of an application that runs better on certain hardware, and it may stay that way or it may change once that vendor's driver team starts to work on that title, or it may be a poorly optimized game (Ubisoft, I am speaking about you). In the past the over-tessellated sea was clearly an unfair attempt, but recently I have not heard of similar behavior. I fear this will happen again with ray tracing, as said above.
 