AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

And don't go all sad about that decision. They won their bet; they won the market. But the fight never stops, and AMD had been designing for DX12 since 2012. Eventually the market would turn to DX12, and all it cost AMD to bet on DX12 so early was everything.

Who won what market?
 
This isn't a surprise, seeing as how AMD managed to carry basic design features like the binding model, pipeline state objects, and command buffers over from Mantle into D3D12. I imagine Nvidia were banking really hard on bindless buffers/textures and command lists/ExecuteIndirect/device generated commands taking off in an alternate world...

The former concept is a foregone conclusion, since D3D12 doesn't expose NV-style bindless functionality and instead features descriptor indexing. NV hardware has a single global table pool for textures and samplers, so how they're even implementing descriptor indexing with multiple descriptor tables is a total mystery. I won't go into detail again about the other hazardous characteristics of D3D12's binding model, which were covered elsewhere, and I don't think NV envisioned it being used the way it currently is.
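For context, descriptor indexing on the API side usually takes the shape of a root signature with an unbounded descriptor range that shaders then index dynamically. Below is a minimal, illustrative C++ sketch of that pattern; the function name, register assignments, and flags are my own assumptions rather than anything from the thread, and it presumes a device whose resource binding tier permits unbounded ranges:

```cpp
// Minimal sketch: a root signature exposing one unbounded SRV table, which is
// how "descriptor indexing" is usually expressed on the D3D12 API side.
// Error handling is omitted for brevity; names and counts are illustrative.
#include <climits>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> CreateIndexableSrvRootSignature(ID3D12Device* device)
{
    // One SRV range with NumDescriptors = UINT_MAX (unbounded): shaders can
    // index into it with any integer they compute, e.g. a per-draw material ID.
    D3D12_DESCRIPTOR_RANGE range = {};
    range.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    range.NumDescriptors = UINT_MAX;          // unbounded
    range.BaseShaderRegister = 0;             // t0 and up
    range.RegisterSpace = 0;
    range.OffsetInDescriptorsFromTableStart = 0;

    D3D12_ROOT_PARAMETER table = {};
    table.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    table.DescriptorTable.NumDescriptorRanges = 1;
    table.DescriptorTable.pDescriptorRanges = &range;
    table.ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 1;
    desc.pParameters = &table;
    desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    return rootSig;
}
```

On the shader side this would pair with an unbounded HLSL array (e.g. Texture2D textures[] : register(t0)) indexed by a value the shader computes, which is where the descriptor-indexing behaviour discussed above actually surfaces.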

D3D12 also features indirect rendering via ExecuteIndirect, which maps nicely to NV hardware since it's a subset of device generated commands, so this is one of the few upsides for them there. On AMD, changing the resource bindings in the command signature could potentially add overhead, so the API should only be used with draw or dispatch commands to hit the fast path in the hardware.
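As a concrete illustration of that fast path, here is a minimal, hedged C++ sketch of a draw-only ExecuteIndirect setup: the command signature holds a single draw argument and changes no resource bindings, so the argument buffer is just a tightly packed array of D3D12_DRAW_ARGUMENTS records. The function names and the optional culling-pass count buffer are assumptions for illustration, not taken from the thread:

```cpp
// Minimal sketch of a draw-only ExecuteIndirect setup. Because the command
// signature changes no bindings, the stride is just sizeof(D3D12_DRAW_ARGUMENTS)
// and no root signature needs to be supplied when creating it.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandSignature> CreateDrawOnlyCommandSignature(ID3D12Device* device)
{
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;   // draw only, no binding changes

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride = sizeof(D3D12_DRAW_ARGUMENTS); // one record per draw
    desc.NumArgumentDescs = 1;
    desc.pArgumentDescs = &arg;

    ComPtr<ID3D12CommandSignature> signature;
    // The root signature parameter can be null because no root arguments change.
    device->CreateCommandSignature(&desc, nullptr, IID_PPV_ARGS(&signature));
    return signature;
}

// Usage: issue up to maxDraws draws from a GPU-visible argument buffer; the
// optional count buffer (e.g. written by a GPU culling pass) limits the count.
void RecordIndirectDraws(ID3D12GraphicsCommandList* cmdList,
                         ID3D12CommandSignature* signature,
                         ID3D12Resource* argumentBuffer,
                         ID3D12Resource* countBuffer,   // may be nullptr
                         UINT maxDraws)
{
    cmdList->ExecuteIndirect(signature, maxDraws,
                             argumentBuffer, 0,
                             countBuffer, 0);
}
```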
 
How so? You probably haven't watched the video till the end; it's specifically mentioned there. And who are "you people", might I ask? And what is the issue, for that matter...
Most of their DX11 results are the opposite of what they see in DX12, and if they imply that the same thing is happening in DX11 too, then their explanation for why it is happening makes no sense. When some game shows the same behavior in different APIs, why are we blaming the APIs for this?

You design your architecture and software stack according to what API you think the market will favour in the present/near future. You don't become DX12 compliant by simply printing the words "DX12 support" on the box.
APIs are designed according to h/w, not the other way around.

Nvidia bet on DX11 with their hardware design.
I don't know what you mean by "bet" here. Care to explain?

When you design to be very fast on one specific API, you make concessions in others.
You really don't. APIs are designed after the h/w. Some APIs are better suited to some h/w than others, but that is an API issue, not a h/w one.

And to clarify, today we are not surprised that they have DX12 overhead issues with Pascal (they emulated features like async compute)
They never "emulated" async compute. And it wouldn't matter anyway because async compute doesn't happen in the driver.

we are surprised because the same overhead remains in Turing and Ampere
Which should make you think about how relevant the explanations mentioning any kind of scheduling or async compute really are.
 
When some game shows the same behavior in different APIs, why are we blaming the APIs for this?
Who is blaming whom? I was sure people in this thread understand what a 'trade-off' is and why nVidia opted for this. I can offer you an even better spin for this - you could just say that nVidia became the victim of its own prowess - its driver is so efficiently multithreaded that it can hog the whole CPU for itself and leave little for everything else. Cue the DPC latency issue and other stuff that happened in the last 5-7 years or so ("driver reset" if your kernel is keeping the GPU busy for more than 2 seconds).
 
You really don't. APIs are designed after the h/w. Some APIs are better suited to some h/w than others, but that is an API issue, not a h/w one.

APIs are designed both after and before h/w; the future has to be considered as well.
What issue it causes differs case by case and likely depends on point of view.
 
I still think it would be easier and much more efficient to have an API per vendor these days.
MS and Khronos seem more of an obstacle and a collection of black boxes than helpful.
Maintaining BC surely becomes a problem with time, but I don't see how that's worse than piling up one driver hack per game anyway.
 
This just makes more work for developers and will lead to "sponsored games" with even more performance disparity than today.
Games will take longer and be more expensive. That's not what anyone wants.
 
This just makes more work for developers and will lead to "sponsored games" with even more performance disparity than today.
Games will take longer and be more expensive. That's not what anyone wants.
Sounds like this is a response to my vendor API request two posts up?

Well, here are my arguments (but I'm not in the games industry, so this contains a lot of guesses):
Indie Company: can still use the U engines or DX/GL/VK. They are not affected.
AAA Company: if it's indeed more work, they can stick to those general APIs too.
But I doubt it is more work. I assume engine developer cost is peanuts in comparison to content creation, and working around limitations or issues takes more time than learning APIs, which would probably end up mostly similar. Likely they could hit performance targets faster.
I definitely would. Glide was easier to use than OpenGL. Mantle was simpler than Vulkan, but had important features still missing elsewhere.
Also, since I'm here, I'll whine about restricted RTX and DXR, the black-boxed BVH, the missing exposure of AMD's RT flexibility, and the missing ability to do device-side enqueue or something similar. I do so for a reason, and if there were vendor APIs, I just would not have a problem with any vendor. Pretty sure of that.
I could develop more efficient software in less time, even if I had to learn 3 APIs instead of just one. Granted, I guess this would help some other people as well.

I don't see a problem with game companies dealing with NV or AMD for support and marketing either. Possibly this means leaving some things behind; possibly we would not have as many RT games yet if this did not happen. Why should vendor APIs affect this? Likely it just stays the same as it is.

> That's not what anyone wants.
There was a time when I would have fully agreed. Sadly, it is gone, due to increased complexity on all ends. Trying to have common standards over differing things becomes harder the more we try to squeeze the best out of them / the more complex those things become.

So I cannot agree with any of your points, although that's the usual response I get for my opinion.
I really think the only problem is backwards compatibility. That's a big one, and hard to predict. Probably too early before a transition to chiplets, but after that, maybe the idea comes up once more... It's not that I'm totally sure here, but we should not rule it out for all time.
 
I really think the only problem is backwards compatibility. That's a big one, and hard to predict.
It would be much harder to maintain backwards compatibility across generations.
 
I could develop more efficient software in less time, even if I had to learn 3 APIs instead of just one.

I think we have been through this multiple times. It's one of those "wheels of reincarnation" in the computer industry.
The problem with vendor-specific APIs is that they tend to accumulate various caveats for historical reasons. Some might be small bugs or undefined behavior somehow relied upon by a popular title (or worse, titles); then it's stuck and can't be fixed without causing serious problems. After a few years you have something very ugly, with a lot of pitfalls, and likely much less efficient.
A general API is like a gravity field: specific vendors' implementations might still have bugs, but they'll have to fix them to adhere to the common standard, instead of saying "it is not a bug, it's a feature." This way, after a few years the implementations from most vendors are going to behave more consistently. They all fall into the gravity field of compatibility.
Another way is vendor-specific extensions, which allow vendors to explore new features. On paper it sounds like a good idea, but in practice vendor-specific extensions tend to behave like vendor-specific APIs. Of course, it's probably better, as once a technology is mature enough it can be incorporated into the general API, and people can then put the specific extensions behind them.
 
Hmmm yeah... maybe I'm constructing arguments here, and my true goal is not really efficiency but flexibility. I do not think we need to maximize performance at all costs - current tech feels more overpowered than restricted, tbh.
There is also some dissatisfaction with Vulkan and its beyond-DX12 complexity, spanning from mobile to desktop. I'd hope to get a simpler API. Not really a problem, but still a burden.
It's not that the current situation is that bad, and maybe seeing vendors struggle with APIs too is a form of consolation ;)
 
It doesn't help that there is a big difference in memory access between Nvidia and AMD. Infinity Cache probably helps more at lower resolutions than at higher ones. Without seeing what the actual bottleneck is, it's quite impossible to say from the outside what the culprit is for this or that. Maybe plotting fps versus resolution on the same card, while also playing with texture resolutions, could give an indication of whether Infinity Cache is relatively good at 1080p or not.
 
Odd that NVIDIA can beat AMD being so "limited"...
Nothing odd about it, when you only count cases where there are no CPU bottlenecks, like raytracing and top-end hardware.
With rasterization, even with top-end CPUs the 6900 XT and 3090 are close competitors, with game selection and to some extent resolution deciding which comes out on top.
If you instead throw in a low-end CPU, even generations-old AMD cards beat NVIDIA's RTX 30 offerings (obviously raytracing excluded).
 
If you instead throw in a low-end CPU, even generations-old AMD cards beat NVIDIA's RTX 30 offerings (obviously raytracing excluded).

I wonder if this is tied to console development. After all, the mid-gens had pretty decent Polaris and "Vegaris" iGPUs that had to pair with 2GHz Jaguars. Over at Nvidia I think few people thought of optimizing >$1000 graphics cards to work well on <$200 CPUs.


In this entire conversation on this forum about the subject, you:

- Made up arguments against things nobody said (limited reading comprehension at the very least!)
- Lied about never having said things you said ten times over in this very thread.
- Lied about others saying things they never said.
- Posted 4 comments refuting the findings of a video you only admitted to watching 5 posts later (this = zero credibility).

Feed not. Report and add to Ignore, or do not. There is no feed.
(imagine this in Yoda's voice and it'll get funnier, I promise!)

I can't see half the content being posted in the RDNA2 thread, and it's fabulous because I know I'm not missing a drop of valid discussion / information, and I'm not being bothered by the usual social media marketing agents I mean professional trolls I mean some users anymore.
 
I got rid of all the noise. If you don't agree with one another, go chat privately; don't bother other people wanting on-topic information.
I beg your pardon, but it was fully on-topic, and it wasn't about agreeing or disagreeing like two children as you are offensively suggesting. Thank you very much.

It was about a constant attempt to reframe the presented evidence with wrong, uninformed or downright truth-bending, biased arguments. It's every forum member's responsibility to expose the wrongdoing and do the right thing, even if it looks ugly. Cheers.
 
https://videocardz.com/newz/amd-con...-of-infinity-cache-while-vangogh-apu-lacks-it

Navi 23 comes with 2MB of L2 cache and 32MB of Infinity Cache; Van Gogh is an APU with 1MB of L1 cache and no Infinity Cache for the iGPU.

I imagine that if the iGPU of an APU were ever to use Infinity Cache, it would probably be more efficient/effective to just increase the L3 on the CPU complex and make both CPU and GPU clients of that pool.
IIRC that's what Intel has been doing for a while, or at least since Gen9.


So if Van Gogh's iGPU were ever to get access to a large(r) L3, I think it would be as a client of the CCXs' L3.



I beg your pardon, but it was fully on-topic, and it wasn't about agreeing or disagreeing like two children as you are offensively suggesting. Thank you very much.

It was about a constant attempt to reframe the presented evidence with wrong, uninformed or downright truth-bending, biased arguments. It's every forum member's responsibility to expose the wrongdoing and do the right thing, even if it looks ugly. Cheers.
The message is the same: don't engage, don't feed, don't help perpetuate an ultimately pointless discussion. Just hit report and add them to the ignore list.
Usually the best moment to leave a conversation is not when you "win the argument" (good luck winning arguments on the Internet, BTW). It's when you reach the conclusion that the other side isn't willing to discuss their points, but would rather enter the usual cycle of moving goalposts or being dishonest to eventually drive the discussion into shit-slinging. That is their purpose, their goal, and your goal must be not to fall into it.

Besides, someone being wrong on the internet isn't the end of the world.
 
I imagine that if the iGPU of an APU were ever to use Infinity Cache, it would probably be more efficient/effective to just increase the L3 on the CPU complex and make both CPU and GPU clients of that pool.
Nah, real SLCs for APUs are coming.
IIRC that's what Intel has been doing for a while, or at least since Gen9.
Earlier than that, iirc.
 