Optimizations of PC Titles *spawn*

Discussion in 'Architecture and Products' started by pjbliverpool, Nov 16, 2020.

  1. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
Doesn't this make hardware performance comparisons difficult if RT shadows don't have the same quality or intensity across games?
     
  2. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    That's a lame excuse. I'm sure AMD was in bed with console game developers for years.
     
  3. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
Outside of the required work with Sony and Microsoft, I doubt there was much, if any, collaboration between AMD and console developers. Also, isn't all of AMD's software tech open source? No top-secret NDAs required, like with Nvidia?
     
    Krteq likes this.
  4. OlegSH

    Regular

    Joined:
    Jan 10, 2010
    Messages:
    797
    Likes Received:
    1,624
Surprising that it runs much slower only on Nvidia's hardware:
    Odyssey - https://tpucdn.com/review/evga-gefo.../images/assassins-creed-odyssey-2560-1440.png
    RTX 2080 - 63 FPS
    RX 5700 XT - 55.4 FPS

    Valhalla - https://tpucdn.com/review/assassins...nce-analysis/images/performance-2560-1440.png
    RTX 2080 - 55.1 FPS
    RX 5700 XT - 53.3 FPS

AC Valhalla is the first attempt to port Anvil to DX12, and it seems to be heavily skewed towards RDNA simply because AMD was working with them.
Wonder why they didn't keep the DX11 path. Also, scaling is worse in DX12 for Nvidia GPUs as well: the RTX 2080 Ti is just 15% faster than the RTX 2080, while on average it's 21% faster at 1440p.
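To put those FPS figures in relative terms, a quick sketch (the numbers are copied from the charts linked above; nothing else is assumed):

```python
# Relative gap between RTX 2080 and RX 5700 XT at 2560x1440,
# using the TPU figures quoted above.

def pct_faster(a, b):
    """How much faster card A is than card B, in percent."""
    return (a / b - 1) * 100

odyssey = pct_faster(63.0, 55.4)    # AC Odyssey (DX11 Anvil)
valhalla = pct_faster(55.1, 53.3)   # AC Valhalla (DX12 Anvil)

print(f"Odyssey:  2080 leads by {odyssey:.1f}%")   # ~13.7%
print(f"Valhalla: 2080 leads by {valhalla:.1f}%")  # ~3.4%
```

The lead shrinking from roughly 14% to roughly 3% on the same GPU pair is what makes the DX12 port look skewed.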

    Is there any evidence of the better lighting?

I actually like it more than native + TAA in WDL because it's much more temporally stable in the game.
The default TAA in WDL leaves tons of shimmering on edges, but that's simply a trade-off with the game's TAA; they could have made it much more stable at the cost of additional blurriness.
DLSS should not be tuned to match the TAA appearance in any game, it has its own strengths.
     
    DegustatoR, T2098 and DavidGraham like this.
  5. SimBy

    Regular

    Joined:
    Jun 21, 2008
    Messages:
    700
    Likes Received:
    391
    The only vendor with a proven history of intentionally gimping performance for 0 IQ gain for not just AMD but their own older cards is Nvidia. #concreteslab #crysis2
     
    Wesker, Krteq, CeeGee and 1 other person like this.
  6. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
  7. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
  8. Leoneazzurro5

    Regular

    Joined:
    Aug 18, 2020
    Messages:
    335
    Likes Received:
    348
Again, what is your point? That developers prioritize optimizing for their sponsors first? It was the same, or worse, with Nvidia.
Here's something:

    https://www.nvidia.com/it-it/geforce/games/

This is the VERY LONG list of Nvidia-sponsored titles; the vast majority of them ran quite badly on AMD hardware at launch. And some still run noticeably better on Nvidia cards, "without any reason", as you say. Just look at TPU reviews. So again, why is this OK for you but not the opposite?

    Just look at the reviews, or at the game.
     
    Wesker likes this.
  9. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Radeon Rays ... Custom AABB, GPU BVH Optimization, API backends.
     
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
Huh? Why would it cause that? All cards using RT acceleration will use that library, not just AMD's.
What he meant is that they're not using UE4's built-in library, but their own. They could have named it SuperLaserTracer9000.dll, but decided to go with amdrtshadows.dll (it probably comes straight from AMD, and I wouldn't be surprised to see it pop up at GPUOpen sooner or later).

It was never meant to be an "exclusive solution", but they can't advertise others, and so far they've only validated it to work with the Ryzen 5000 + 500-series chipset + RX 6000 combo. No other vendor had at that point communicated plans to implement Resizable BAR support, so at the time of the press release it was exclusive to that combo.
     
    Wesker, Kej, CeeGee and 2 others like this.
  11. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
If that's how you want to view how they advertised the feature, okay. There are definitely more neutral interpretations of what was said.
    https://www.tomshardware.com/news/nvidia-amd-smart-access-memory-tech-ampere
     
  12. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
You're calling that a "more neutral interpretation"? One needs quite a stretch of imagination to think AMD is pushing for a walled all-AMD gaming PC garden by enabling a standard PCIe feature they've finally validated on a consumer card too.
It's not like AMD has been silent about working towards Resizable BAR support; hell, they implemented it in Linux. And they of course know that every other company knows about the feature, its benefits, etc. too. Why no one else did it before, who knows. Perhaps there's a reason it has so far been validated on only one platform.
     
    Ethatron, CeeGee, Lightman and 2 others like this.
  13. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Yeah, it does take quite a stretch.

    https://www.tweaktown.com/news/76238/nvidia-has-its-own-smart-access-memory-like-amds-new-rdna-2-coming/index.html

     
    DegustatoR, OlegSH and PSman1700 like this.
  14. OlegSH

    Regular

    Joined:
    Jan 10, 2010
    Messages:
    797
    Likes Received:
    1,624
That's not prioritization, that's crippling performance for 100% of high-end hardware and 80% of discrete-GPU users, with an API where there are literally infinite ways to make things slow without any visible benefit and no opportunity to fix the broken stuff in the driver, because the devs chose to cater to a minority of their audience for whatever reason.

And it's certainly not the same with Nvidia. I don't see how slow tessellation, slow geometry processing, and now, it seems, slow ray tracing on AMD hardware are Nvidia's fault. Nobody blamed AMD for slow compute shaders in Dirt and other titles with forward+ renderers due to shared-memory atomics usage instead of a prefix sum (which Nvidia suggested as an optimization for Kepler); that was just GCN's "future-proof" architecture.
     
  15. Leoneazzurro5

    Regular

    Joined:
    Aug 18, 2020
    Messages:
    335
    Likes Received:
    348
Sorry to say, but you are simply saying, again, that if a developer optimizes for an architecture and that architecture is Nvidia's, it's a legitimate optimization, but if it's AMD's, it's unfair competition. Nvidia has its own history of "unfair" optimizations: cases were reported in this very thread by someone other than me, and there are many more. And no, amplifying tessellation beyond any reasonable amount only to cripple the competitor's performance, without any gain in image quality, is NOT a fair optimization by any means. And again, we are not even speaking about optimizations for GCN. The vast majority of the Nvidia-sponsored titles I linked run better on Nvidia cards with lower specs than the comparable RDNA cards, "without any reason", to use your words, and RDNA is by no means the same mess GCN is from an optimization point of view.

    From RECENT techpowerup reviews:

    Control RTX off
    https://www.techpowerup.com/review/msi-geforce-rtx-3070-gaming-x-trio/10.html

    Divinity Original Sin 2
    https://www.techpowerup.com/review/msi-geforce-rtx-3070-gaming-x-trio/14.html

    Gears 5 original PC release (and we saw this running quite well on RDNA2 consoles)
    https://www.techpowerup.com/review/msi-geforce-rtx-3070-gaming-x-trio/18.html

    Hitman2
    https://www.techpowerup.com/review/msi-geforce-rtx-3070-gaming-x-trio/19.html

    Metro Exodus RTX off
    https://www.techpowerup.com/review/msi-geforce-rtx-3070-gaming-x-trio/20.html

    Strange Brigade
    https://www.techpowerup.com/review/msi-geforce-rtx-3070-gaming-x-trio/26.html

These are all Nvidia-sponsored titles where the 5700 XT runs on par with or worse than a 2070, despite having better basic specs (and we are not even putting ray tracing on the scale here) and no "reason" to perform worse. And there are more games where the 5700 XT performs significantly worse than the 2070 Super even though they have similar specs. Some of these don't have stunning visuals, either.

So in this case "it's all OK", but if something happens that reverses this perception, "it's not fair". You seem desperate to imply that the RDNA architecture is simply worse than Turing or even Pascal, without any evidence of this.
     
    Wesker and Lightman like this.
  16. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    He mentions slow geometry processing not being Nvidia's fault. I don't see how slow compute is AMD's fault.
     
    xpea likes this.
  17. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    [compacted it a little]
Not chiming in on the fair-vs-unfair optimization discussion, but wondering: are those really all Nvidia-sponsored titles? I looked at some and they seem to perform according to the general ratings on TPU and ComputerBase, where the 2070S is 10-ish % faster than the 5700 XT.
Also, I was under the impression AMD even bundled Strange Brigade with their cards (and at least some game out of the Hitman franchise, though I don't remember if it was Hitman 2 specifically). Would they do so with an Nvidia-sponsored title?
     
    DegustatoR and pharma like this.
  18. Leoneazzurro5

    Regular

    Joined:
    Aug 18, 2020
    Messages:
    335
    Likes Received:
    348
These are titles that Nvidia sponsored and that are even linked on their site. I don't know exactly what AMD bundled in the past, but recent AMD bundles included Rainbow Six: Siege and AC: Valhalla, one of which is not on the Nvidia sponsorship list while the other is an AMD-sponsored title. I own several of the titles on the list and they all show the Nvidia advertising at start-up. About the ratings: these include exactly these titles, so they reflect the state of the optimization (and that's why I feel the claims of "not being fair to Nvidia" are ridiculous). One thing I don't understand is the difference between scores for the same game taken from different sites; e.g., TechPowerUp's scores don't line up with recent Hardware Unboxed tests of the latest Navi 10 drivers.

What I can think of is TechPowerUp and ComputerBase using an older WHQL driver (or older numbers) instead of the most recent: in the reviews I see 20.8.3 WHQL and 20.7.2, while at the moment 20.11.1 is the latest and 20.9.1 the latest WHQL on AMD's site, and frankly 20.9.1 has been there since September (before the Ampere launch).
     
  19. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
In real time you have to compromise on the count of rays used. So, for example, reflections only cover a short distance from the camera (so you get pop-in as the camera moves), and reflections are blurry due to the low sample count.

It's similar to how there's a limited count of filtered texels you can fit into a frame (bandwidth and filtering throughput restrictions apply, as well as the quality of latency hiding). Ray casts/bounces are notionally "budgeted" per frame, with additional problems to deal with like BVH build/update.

In spending the "ray budget", you then have to decide which effects you want to use. At the same time you have to decide how much quality each effect gets, and how to make quality scalable (low, medium, high settings).
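As a toy illustration of that kind of budgeting (the effect names, shares, and ray counts below are all invented for illustration, not taken from any real engine):

```python
# Hypothetical sketch: split a fixed per-frame ray budget between
# effects, scaled by a quality preset. All numbers are made up.

PRESET_SCALE = {"low": 0.25, "medium": 0.5, "high": 1.0}

# fraction of the full budget each effect gets at the "high" preset
EFFECT_SHARE = {"reflections": 0.5, "gi": 0.3, "shadows": 0.2}

def rays_per_effect(total_rays_per_frame, preset):
    scale = PRESET_SCALE[preset]
    return {effect: round(total_rays_per_frame * share * scale)
            for effect, share in EFFECT_SHARE.items()}

budget = rays_per_effect(2_000_000, "medium")
print(budget)  # half the full budget, split 50/30/20 across effects
```

In a real engine the shares would shift per scene (a mirror-heavy level wants more reflection rays), which is exactly the quality-allocation decision described above.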

Something like GI, which shows the greatest count of rays in the image you posted, is missing a typical real-time rendering technique: temporal accumulation. E.g., 1 sample per 25 pixels per frame is enough, and over 5 or 15 frames that will provide a "high quality" result (using jittered sampling, say). That's why I compared ray-traced GI with SVOGI in CryEngine: there are games that already do high-quality GI without using hardware-accelerated ray tracing.
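A minimal sketch of the accumulation idea, assuming a 5x5 tile (1 sample per 25 pixels per frame) and a simple exponential blend; a real renderer would use low-discrepancy jitter sequences and reproject the history buffer across camera motion:

```python
import random

# Toy sketch of temporal accumulation for ray-traced GI: trace only
# 1 ray per 5x5 pixel tile per frame at a jittered position, and
# blend the result into a persistent history buffer. Illustrative
# only; every name and number here is invented.

TILE = 5                                # 5x5 tile -> 1 sample / 25 px / frame

def expensive_gi_sample(x, y):
    # stand-in for tracing a GI ray at this pixel
    return (x + y * TILE) / (TILE * TILE - 1)

history = [[None] * TILE for _ in range(TILE)]

# jitter order: a shuffled permutation guarantees full tile coverage
order = list(range(TILE * TILE))
random.Random(0).shuffle(order)

for frame in range(2 * TILE * TILE):    # two full passes over the tile
    idx = order[frame % (TILE * TILE)]
    x, y = idx % TILE, idx // TILE
    s = expensive_gi_sample(x, y)
    if history[y][x] is None:
        history[y][x] = s               # first sample seeds the buffer
    else:
        # exponential blend keeps the buffer responsive to changes
        history[y][x] = 0.9 * history[y][x] + 0.1 * s

covered = sum(v is not None for row in history for v in row)
print(f"pixels with a GI sample: {covered}/{TILE * TILE}")
```

After one pass every pixel in the tile has a GI sample even though each frame only paid for one ray per 25 pixels, which is the amortization being described.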

So this tells us that while the ideal for GI requires a high ray count, in a real-time game the ray count would be cut down substantially. And it still looks really excellent.

It seems that reflections have become the focus of ray tracing in most games because they're really easy to show when marketing the game. Though Duke Nukem might have something to say about the history of high-quality reflections in games.



The shadows in Call of Duty are really nice, and they required a lot of R&D to make practical; it seems it wasn't as simple as they were hoping. Battlefield's reflections have problems with pop-in and depend heavily on screen-space imposters. Watch Dogs: Legion also has pop-in problems and falls back to screen-space reflections a lot of the time.

    What's unclear to me is how developers are currently assessing the ray-budget versus image quality question, when deciding how to use ray tracing in their games.
     
    Lightman, pharma and CarstenS like this.
  20. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    Techpowerup doesn't always retest all older GPUs with new drivers. HUB always does. You also have to factor in the performance variances due to differing benchmark scenes.
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.