AMD Radeon RDNA2 Navi (RX 6700 XT, RX 6800, 6800 XT, 6900 XT) [2020-10-28, 2021-03-03]

Discussion in 'Architecture and Products' started by BRiT, Oct 28, 2020.

  1. HLJ

    HLJ
    Regular Newcomer

    Joined:
    Aug 26, 2020
    Messages:
    385
    Likes Received:
    633
    Who won what market?
     
    PSman1700 likes this.
  2. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    309
    Likes Received:
    348
    This isn't a surprise, seeing as how AMD managed to pass basic design features like the binding model, pipeline state objects, and command buffers from Mantle into D3D12. I imagine Nvidia were banking really hard on bindless buffers/textures and command lists/ExecuteIndirect/device-generated commands taking off in an alternate world ...

    The former concept is a foregone conclusion since D3D12 doesn't expose NV-style bindless functionality and instead features descriptor indexing. NV hardware has a single global table pool for textures and samplers, so how they're even implementing descriptor indexing with multiple descriptor tables is a total mystery. I won't go into details again about D3D12's other hazardous characteristics in its binding model, which were covered elsewhere, and I don't think NV envisioned it being used the way it currently is.
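    Roughly, descriptor indexing on the API side means binding one large (or unbounded) descriptor table and letting the shader pick the slot at runtime. A minimal sketch of the root-signature half, assuming a pixel shader that declares an SM 5.1-style unbounded array such as Texture2D tex[] (my own illustration, not from any shipping code, error handling omitted):

```cpp
// Sketch: a root signature exposing a single unbounded SRV range that a
// shader can index dynamically (descriptor indexing). Not production code.
#include <d3d12.h>
#include <wrl/client.h>
#include <climits>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> CreateIndexedSrvRootSignature(ID3D12Device* device)
{
    // One descriptor range covering "all" SRVs: UINT_MAX marks the range as
    // unbounded, so the shader chooses the actual descriptor at runtime.
    D3D12_DESCRIPTOR_RANGE range = {};
    range.RangeType          = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    range.NumDescriptors     = UINT_MAX;   // unbounded range
    range.BaseShaderRegister = 0;          // t0 and up
    range.RegisterSpace      = 0;
    range.OffsetInDescriptorsFromTableStart = 0;

    D3D12_ROOT_PARAMETER table = {};
    table.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    table.DescriptorTable.NumDescriptorRanges = 1;
    table.DescriptorTable.pDescriptorRanges   = &range;
    table.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 1;
    desc.pParameters   = &table;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    return rootSig;
}
```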

    D3D12 also features indirect rendering via ExecuteIndirect, which maps nicely to NV hardware since it's a subset of device-generated commands, so this is one of the few upsides for them there. On AMD, changing the resource bindings in the command signature could potentially add overhead, so the API should only be used with draw or dispatch commands to hit the fast path in the hardware.
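    The fast-path case is a command signature that encodes nothing but the draw (or dispatch) arguments, with no resource-binding changes baked in. A minimal sketch, assuming plain DrawInstanced-style records in the argument buffer (my own illustration, error handling omitted):

```cpp
// Sketch: an ExecuteIndirect command signature that only contains draw
// arguments. Because it changes no bindings, no root signature is required
// and the driver can treat it like a plain multi-draw. Not production code.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandSignature> CreateDrawOnlyCommandSignature(ID3D12Device* device)
{
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;          // just DrawInstanced args

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride       = sizeof(D3D12_DRAW_ARGUMENTS);  // one draw per record
    desc.NumArgumentDescs = 1;
    desc.pArgumentDescs   = &arg;

    ComPtr<ID3D12CommandSignature> sig;
    // The root signature parameter may be null since no bindings are changed.
    device->CreateCommandSignature(&desc, nullptr, IID_PPV_ARGS(&sig));
    return sig;
}

// Usage sketch:
//   cmdList->ExecuteIndirect(sig.Get(), maxDraws, argBuffer, 0, countBuffer, 0);
```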
     
    DavidGraham, Lightman, T2098 and 3 others like this.
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,617
    Location:
    msk.ru/spb.ru
    Most of their DX11 results are the opposite of what they see in DX12, and if they imply that the same thing is happening in DX11 too, then their explanation for why it is happening makes no sense. When a game shows the same behavior in different APIs, why are we blaming the APIs for it?

    APIs are designed according to h/w, not the other way around.

    I don't know what you mean by "bet" here. Care to explain?

    You really don't. APIs are designed after the h/w. Some APIs are better suited to some h/w than others, but that is an API issue, not a h/w one.

    They never "emulated" async compute. And it wouldn't matter anyway because async compute doesn't happen in the driver.

    Which should make you think about how relevant explanations that mention any kind of scheduling or async compute really are.
     
    PSman1700 likes this.
  4. tsa1

    Newcomer

    Joined:
    Oct 8, 2020
    Messages:
    52
    Likes Received:
    52
    Who is blaming whom? I was sure people in this thread understood what a 'trade-off' is and why nVidia opted for this. I can offer you an even better spin for it: you can just say that nVidia became the victim of its own prowess; its driver is so efficiently multithreaded that it can hog the whole CPU for itself and leave little for everything else. Cue the DPC latency issue and other stuff that has happened in the last 5-7 years or so ("driver reset" if your kernel keeps the GPU busy for more than 2 seconds).
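    For context, that "driver reset" is Windows TDR, and the default timeout (TdrDelay) is 2 seconds unless it has been overridden in the registry. A small sketch that just reads the override, if any (my own example, not from the thread):

```cpp
// Sketch: read the Windows TDR timeout ("driver reset" window) override.
// If the TdrDelay value is absent, the OS default of 2 seconds applies.
#include <windows.h>
#include <cstdio>

int main()
{
    DWORD tdrDelay = 0;
    DWORD size = sizeof(tdrDelay);
    const LSTATUS status = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\GraphicsDrivers",
        L"TdrDelay",
        RRF_RT_REG_DWORD, nullptr, &tdrDelay, &size);

    if (status == ERROR_SUCCESS)
        std::printf("TdrDelay override: %lu seconds\n",
                    static_cast<unsigned long>(tdrDelay));
    else
        std::printf("No TdrDelay override; the 2 second OS default applies.\n");
    return 0;
}
```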
     
  5. Putas

    Regular Newcomer

    Joined:
    Nov 7, 2004
    Messages:
    533
    Likes Received:
    176
    APIs are designed both after and before h/w. The future has to be considered as well.
    What issues it causes differs case by case and likely depends on your point of view.
     
  6. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,139
    Likes Received:
    1,291
    I still think it would be easier and much more efficient to have an API per vendor these days.
    MS and Khronos seem more like an obstacle and a collection of black boxes than a help.
    Maintaining BC surely becomes a problem over time, but I don't see how that's worse than piling up one driver hack per game anyway.
     
  7. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,617
    Location:
    msk.ru/spb.ru
    If something differs case by case then it's not a general issue and likely is due to how an application is using the API+h/w.
     
    PSman1700 likes this.
  8. BlackAngus

    Newcomer

    Joined:
    Apr 2, 2003
    Messages:
    133
    Likes Received:
    25
    This just makes more work for developers and will lead to "sponsored games" with even more performance disparity than today.
    Games will take longer to make and be more expensive. That's not what anyone wants.
     
  9. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,139
    Likes Received:
    1,291
    Sounds like this is a response to my vendor API request two posts up?

    Well, here are my arguments (but I'm not in the games industry, so this contains a lot of guesses):
    Indie company: can still use U engines or DX/GL/VK. They are not affected.
    AAA company: if it's indeed more work, they can stick to those general APIs too.
    But I doubt it is more work. I assume engine developer costs are peanuts in comparison to content creation, and working around limitations or issues takes more time than learning APIs, which probably end up mostly similar. Likely they could achieve performance targets faster.
    I definitely would. Glide was easier to use than OpenGL. Mantle was simpler than Vulkan, but had important features still missing elsewhere.
    Also, since I'm here, I'll whine about restricted RTX and DXR: black-boxed BVH, no exposure of AMD's RT flexibility, no device-side enqueue or anything similar. I do so for a reason, and if there were vendor APIs, I just would not have this problem with any vendor. Pretty sure of that.
    I could develop more efficient software in less time, even if I had to learn 3 APIs instead of just one. Granted, I guess this would help some other people as well.

    I don't see a problem with game companies dealing with NV or AMD for support and marketing either. Maybe this means leaving some things behind; maybe we would not have as many RT games yet if this didn't happen. Why should vendor APIs affect this? Likely it just stays the same as it is.

    > That's not what anyone wants.
    There was a time when I would have fully agreed. Sadly it is gone, due to increased complexity on all ends. Trying to hold common standards over differing things gets harder the more we try to squeeze the best out of them / the more complex those things become.

    So I cannot agree with any of your points, although that's the usual response I get for my opinion.
    I really think the only problem is backwards compatibility. That's a big one, and hard to predict. Probably too early before a transition to chiplets, but after that, maybe the idea comes up once more... It's not that I'm totally sure here, but we should not rule it out for all time.
     
  10. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    742
    Likes Received:
    439
    It would be much harder to maintain backwards compatibility across generations.
     
    pjbliverpool likes this.
  11. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,906
    Likes Received:
    409
    Location:
    Taiwan
    I think we have been through this multiple times. It's one of the "wheels of reincarnation" in the computer industry.
    The problem with a vendor-specific API is that it tends to accumulate various caveats for historical reasons. Some might be small bugs or undefined behavior that somehow gets used by a popular title (or worse, titles); then it's stuck and can't be fixed without causing serious problems. After a few years you have something very ugly, with a lot of pitfalls, and likely much less efficient.
    A general API is like a gravity field: specific vendors' implementations might still have bugs, but they'll have to fix them to adhere to the common standard, instead of saying "it is not a bug, it's a feature." This way, after a few years the implementations from most vendors are going to behave more consistently. They all fall into the gravity field of compatibility.
    Another way is vendor-specific extensions, which allow vendors to explore new features. On paper it sounds like a good idea, but in practice vendor-specific extensions tend to behave like a vendor-specific API. Of course, it's probably better, as once a technology is mature enough it can be incorporated into the general API and people can leave the specific extensions behind.
     
  12. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,139
    Likes Received:
    1,291
    Hmmm, yeah... maybe I'm constructing arguments here, and my true goal is not really efficiency but flexibility. I do not think we need to maximize performance at all costs; current tech feels more overpowered than restricted, tbh.
    There is also some dissatisfaction about Vulkan and its greater-than-DX12 complexity spanning from mobile to desktop. I'd hope to get a simpler API. Not really a problem, but still a burden.
    It's not that the current situation is that bad, and maybe seeing vendors struggling with APIs too is a form of consolation ;)
     
  13. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,798
    Likes Received:
    1,981
    Location:
    Earth
    It doesn't help that there is a big difference in memory access between Nvidia and AMD. Infinity Cache probably helps more at lower resolutions than at higher ones. Without seeing what the actual bottleneck is, it's quite impossible to say from the outside what the culprit is for this or that. Maybe plotting fps versus resolution on the same card, while also playing with texture resolutions, could give an indication of whether Infinity Cache is relatively effective at 1080p or not.
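    As a rough illustration of that kind of check (the fps numbers below are invented, only the arithmetic matters): if pixels-per-second throughput falls noticeably at 4K relative to 1080p on the same card, that would hint the card is increasingly bandwidth/cache limited as the Infinity Cache hit rate drops.

```cpp
// Sketch with made-up numbers: compare how fps scales with pixel count.
// Relative pixel throughput well below 1.0 at 2160p would hint that
// effective bandwidth (e.g. Infinity Cache hit rate) is falling off.
#include <cstdio>

int main()
{
    struct Sample { const char* name; double pixels; double fps; };
    const Sample samples[] = {
        { "1080p", 1920.0 * 1080.0, 160.0 },  // hypothetical measurements
        { "1440p", 2560.0 * 1440.0,  92.0 },
        { "2160p", 3840.0 * 2160.0,  36.0 },
    };

    const double base = samples[0].pixels * samples[0].fps;  // 1080p reference
    for (const Sample& s : samples) {
        const double pixelRate = s.pixels * s.fps;            // pixels shaded per second
        std::printf("%s: %5.1f fps, relative pixel throughput %.2f\n",
                    s.name, s.fps, pixelRate / base);
    }
    return 0;
}
```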
     
  14. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,617
    Location:
    msk.ru/spb.ru
  15. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    9,792
    Likes Received:
    3,959
    Location:
    Finland
    Nothing odd about it when you only count cases where there are no CPU bottlenecks, like raytracing and top-end hardware.
    With rasterization, even with top-end CPUs the 6900 XT and 3090 are close competitors, with game selection and to some extent resolution deciding which comes out on top.
    If you instead throw in a low-end CPU, even generations-old AMD cards beat NVIDIA's RTX 30 offerings (obviously raytracing excluded).
     
  16. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,072
    Likes Received:
    7,034
    I wonder if this is tied to console development. After all, the mid-gens had pretty decent Polaris and "Vegaris" iGPUs that had to be paired with 2 GHz Jaguars. Over at Nvidia, I think few people thought of optimizing >$1000 graphics cards to work well on <$200 CPUs.



    -
    Feed not. Report and add to Ignore, or do not. There is no feed.
    (imagine this in Yoda's voice and it'll get funnier, I promise!)

    I can't see half the content being posted in the RDNA2 thread, and it's fabulous because I know I'm not missing a drop of valid discussion / information, and I'm not being bothered by the usual social media marketing agents I mean professional trolls I mean some users anymore.
     
  17. Rodéric

    Rodéric a.k.a. Ingenu
    Moderator Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,060
    Likes Received:
    955
    Location:
    Planet Earth.
    I got rid of all the noise. If you don't agree with one another, go chat privately; don't bother other people who want on-topic information.
     
    manux, PSman1700, Malo and 1 other person like this.
  18. dskneo

    Regular

    Joined:
    Jul 25, 2005
    Messages:
    650
    Likes Received:
    193
    I beg your pardon, but it was fully on-topic, and it wasn't about agreeing or disagreeing like two children as you are offensively suggesting. Thank you very much.

    It was about a constant attempt to landscape the presented evidence with wrong, uninformed or downright truth-bending, biased arguments. It's any forum member's responsibility to expose wrongdoing and do the right thing, even if it looks ugly. Cheers.
     
    ToTTenTranz likes this.
  19. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,072
    Likes Received:
    7,034
    https://videocardz.com/newz/amd-con...-of-infinity-cache-while-vangogh-apu-lacks-it

    Navi 23 comes with a 2MB L2 cache and 32MB of Infinity Cache; Van Gogh is an APU with a 1MB L1 cache and no Infinity Cache for the iGPU.

    I imagine that if the iGPU of an APU were to ever use Infinity Cache, it would probably be more efficient/effective to just increase the L3 on the CPU complex and put both CPU and GPU as clients of that pool.
    IIRC it's what Intel has been doing for a while, or at least since Gen9.


    So if Van Gogh's iGPU were to ever get access to a large(r) L3, I think it would be as a client to the CCXs' L3.



    The message is the same: don't engage, don't feed, don't help perpetuate an ultimately pointless discussion. Just hit report and add them to the ignore list.
    Usually the best moment to leave a conversation is not when you "win the argument" (good luck winning arguments on the Internet, BTW). It's when you reach the conclusion that the other side isn't willing to discuss their points, but would rather enter the usual cycle of moving goalposts or being dishonest to eventually drive the discussion into shit-slinging. That is their purpose, their goal, and your goal must be to not fall into it.

    Besides, someone being wrong on the internet isn't the end of the world.
     
  20. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,129
    Likes Received:
    510
    Nah, real SLCs for APUs are coming.
    Earlier than that iirc.
     
    Lightman, ethernity and ToTTenTranz like this.