AMD RDNA3 Specifications Discussion Thread

It's not my intention to stretch this topic here, but I remember reading an article about a cost-efficient method of making carbon nanotubes from regular coal, by blowing it through a pressurized chamber with high consistency, but unfortunately I can't find that article.
But anyway, my point was/is: seeing as the cost of silicon wafers is reaching stupidly high levels as well, can we really confidently say it wouldn't be cheap? As for ubiquitous, it can be made from coal, so how is that not cheap or ubiquitous?
Silicon makes up 27.7% of the earth's crust while carbon is only 0.025%.
 
They should demand that a certain IHV stop dragging its feet on a working solution in this area. If Nvidia is so concerned about developers abusing APIs in ways it doesn't want, then maybe Microsoft should start unilaterally giving either AMD or Intel an API advantage for once instead, so that Nvidia is properly motivated to implement a real solution or else lose out ...
Remember this?

The D3D12 binding model causes some grief on Nvidia HW. Microsoft forgot to include STATIC descriptors in RS 1.0, which then got fixed with RS 1.1, but no developers use RS 1.1, so in the end Nvidia likely has app profiles or game-specific hacks in their drivers.
You seem to admit that Microsoft already gave AMD an API advantage with DX12, which caused overhead and problems for non-AMD hardware. So I don't know why you are so upset about this (DXR) and not the other (DX12)?

Microsoft is known to be sloppy and slow with these things. You say they fixed the DX12 situation with SM6.6 (after so many years), so maybe DXR could be fixed down the road in a similar manner?
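For reference, here is roughly what the RS 1.1 fix looks like from the application side. This is only a minimal C++ sketch, not code from any game or driver, assuming a single SRV table that the app promises not to modify while it is bound; error handling and the rest of device setup are omitted.

// RS 1.1: descriptor ranges can carry flags such as DATA_STATIC, which RS 1.0 could
// not express, letting the driver optimize for HW that prefers static bindings.
#include <d3d12.h>

ID3DBlob* SerializeRs11()
{
    D3D12_DESCRIPTOR_RANGE1 range = {};
    range.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    range.NumDescriptors = 4;                                      // t0..t3
    range.Flags = D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC;         // the RS 1.1-only hint
    range.OffsetInDescriptorsFromTableStart = D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND;

    D3D12_ROOT_PARAMETER1 param = {};
    param.ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    param.DescriptorTable.NumDescriptorRanges = 1;
    param.DescriptorTable.pDescriptorRanges = &range;
    param.ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

    D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = {};
    desc.Version = D3D_ROOT_SIGNATURE_VERSION_1_1;
    desc.Desc_1_1.NumParameters = 1;
    desc.Desc_1_1.pParameters = &param;

    ID3DBlob* blob = nullptr;
    ID3DBlob* error = nullptr;
    D3D12SerializeVersionedRootSignature(&desc, &blob, &error);    // then CreateRootSignature
    return blob;
}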
 
What do you believe are the biggest roadblocks to getting most of what you want from the API?
1. Convince IHVs that they have to specify their data structures. If they come up with a new one, they also must support older ones if the application requests them.
2. The API has to add related queries and versioning.

At this point all flexibility is given, but we need a specialized implementation per vendor.
Likely we would stop at this, or we may continue this way:

3. Convince MS that vendor extensions are now absolutely required. Khronos already has this.
4. IHVs come up with their own extensions, like maybe:
a) To precompute the BVH and cache it to disk on the client, similar to current shader and pipeline caches.
b) To make it easier to create / modify the BVH from compute shaders at runtime.

5. After it becomes clear that IHVs do similar things and devs actually use the stuff, the API could try to come up with generic abstractions, like: a template BVH data structure, which drivers then convert into their specific HW formats, and a generic compute API which allows creating / modifying BVHs at runtime, with the vendor compiler translating generic statements into specific instructions. (A toy sketch of this follows after the list.)

How easy this is depends on how similar the IHV data structures are. If no treelets for compression are in use, it's dead simple. If only one vendor uses this, maybe the others should be allowed and encouraged to adopt it.
Eventually IHVs might even be happy to converge on similar formats, benefiting from the work of competitors. I doubt there are any inventions worth keeping secret here.

6. Distribution platforms like Steam could support background processes, which game devs could use to precompute the BVH while downloading. Or per-IHV versions of the game. Just an idea, in case.

7. Celebrate better perf, helping RT become practical and a success: ray tracing detailed and richer scenes, with the PC platform now on par with consoles, so the related innovation is worth investing in.
It's a bit of a chicken-and-egg problem. But Epic's interest alone would already be enough to justify the effort. A related question is: do other engine makers hold back their Nanite equivalents until RT is ready, or are they already working on them?
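To make point 5 a bit more concrete, here is a purely hypothetical sketch of what a vendor-neutral "template" BVH node array could look like, together with a toy median-split builder. Nothing here matches any real API or driver format; the node layout and all names are made up for illustration, and a driver would be expected to translate such an array into its own HW format.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Aabb { float mn[3], mx[3]; };

// Hypothetical "generic" node: an array of these is what the API could standardize,
// leaving the conversion into vendor-specific formats to the driver.
struct GenericBvhNode {
    Aabb     bounds;
    uint32_t children[2]; // interior node: indices into the node array
    uint32_t firstPrim;   // leaf: first index into the primitive order array
    uint32_t primCount;   // 0 = interior node
};

static Aabb Merge(const Aabb& a, const Aabb& b) {
    Aabb r;
    for (int i = 0; i < 3; ++i) {
        r.mn[i] = std::min(a.mn[i], b.mn[i]);
        r.mx[i] = std::max(a.mx[i], b.mx[i]);
    }
    return r;
}

// Toy recursive median-split builder over primitive indices [begin, end).
static uint32_t Build(std::vector<GenericBvhNode>& nodes, const std::vector<Aabb>& prims,
                      std::vector<uint32_t>& order, uint32_t begin, uint32_t end) {
    Aabb bounds = prims[order[begin]];
    for (uint32_t i = begin + 1; i < end; ++i) bounds = Merge(bounds, prims[order[i]]);

    uint32_t nodeIndex = (uint32_t)nodes.size();
    nodes.push_back({bounds, {0, 0}, begin, end - begin});
    if (end - begin <= 2) return nodeIndex;                // small leaf, done

    float ext[3];                                          // split on the widest axis
    for (int i = 0; i < 3; ++i) ext[i] = bounds.mx[i] - bounds.mn[i];
    int axis = (ext[1] > ext[0]) ? 1 : 0;
    if (ext[2] > ext[axis]) axis = 2;

    uint32_t mid = (begin + end) / 2;                      // median split by centroid
    std::nth_element(order.begin() + begin, order.begin() + mid, order.begin() + end,
                     [&](uint32_t a, uint32_t b) {
                         return prims[a].mn[axis] + prims[a].mx[axis] <
                                prims[b].mn[axis] + prims[b].mx[axis];
                     });

    nodes[nodeIndex].primCount = 0;                        // becomes an interior node
    uint32_t left  = Build(nodes, prims, order, begin, mid);
    uint32_t right = Build(nodes, prims, order, mid, end);
    nodes[nodeIndex].children[0] = left;
    nodes[nodeIndex].children[1] = right;
    return nodeIndex;
}

int main() {
    std::vector<Aabb> prims = { {{0,0,0},{1,1,1}}, {{2,0,0},{3,1,1}},
                                {{0,2,0},{1,3,1}}, {{4,4,4},{5,5,5}} };
    std::vector<uint32_t> order = {0, 1, 2, 3};
    std::vector<GenericBvhNode> nodes;
    Build(nodes, prims, order, 0, (uint32_t)prims.size());
    std::printf("built %zu generic nodes\n", nodes.size());
    return 0;
}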
 
That's certainly an admirable goal, but the minimum requirements for Stalker 2 are listed as an RX 580. Even if AMD SoCs can match an M1 in perf/watt, an admirable and reachable goal, that would still be a good deal below the requirements at 10 W, as it'd need another 50% fps from somewhere. Now maybe with platform-specific optimizations and low enough settings it could work.

So if there's enough Steam Deck platform optimization it could hit the minimum requirements. But otherwise Valve wants to cut down on weight, which means not adding any more heavy and bulky batteries, as well as improving battery life on heavier games, where it can be sub 2 hours, which could mean targeting a ~10 W TDP. A lower TDP would also save weight in the cooling system.
With an RX 580 I could already match Portal RTX lighting very easily, assuming you wouldn't miss the sharp reflections.
That's why I want a desktop APU. A little box with proper cooling, drawing more power than an M1 but with a proper GPU for games.
Bulky handhelds are just the bait to get us there...

The biggest roadblock is arguably a political one, since Microsoft seemingly refuses to let either AMD or Intel enjoy an API advantage at the expense of Nvidia ...
I do not see an API advantage for NV, just a common disadvantage for all. What do you mean?
The API is just classical RT, assuming static models. Standard and ancient stuff at a high level. NV needs NVAPI to expose improvements as well.
The problem is that there are no custom APIs from AMD, imo. And there are no VK extensions either. Somewhat lazy and lacking in self-confidence. They need to spend more on software.
I guess Intel will do better if they get some market share.
 
Most of this goes directly against the whole idea of a common API and what developers want. You begin by saying that PC APIs are somehow holding back consoles, and then propose to split the landscape not only between PCs and consoles but between different PC IHVs as well. It is a fully counterproductive idea.
 
UE5's shadowing solution makes all of this very much irrelevant though. So again, a problem which doesn't really exist in practice.
Idk how UE5 does its soft shadows, but it does not really work. There are workarounds, but they suck. Which is why we want RT in the first place.

You do not even hesitate to depict RT as useless just to come up with baseless arguments against... what, precisely?
I'll stop right here.
 
You begin by saying that PC APIs are somehow holding back consoles, and then propose to split the landscape not only between PCs and consoles but between different PC IHVs as well. It is a fully counterproductive idea.
Sure. It is counterproductive. And it sucks.
But that's all that's left for a fix after some self-appointed experts came up with an API replicating the simplified concepts of offline workflows you would read in an RT introduction textbook. (Offline rendering does way more flexible things than DXR enables, btw.)

The damage has been done, and the potential fix now isn't nice. But that's not my fault. It's theirs.
 
This presumes that what's good for professional rendering is precisely what's good for real time graphics.

Nvidia seems to be fine exposing capabilities that aren’t supported by DXR. Ampere added motion blur acceleration. Ada added shader execution reordering, displaced micromeshes and opacity micromaps. And they’re apparently the ones holding back the industry.

What's stopping AMD from doing the same for the things they care about? Do they actually have a vision for faster RT, or do they just want a level playing field where every implementation is equally slow (but more flexible)? This is really a question for @Lurkmass.
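As one concrete illustration of vendor features surfacing outside base DXR: on the Vulkan side, opacity micromaps are exposed through the VK_EXT_opacity_micromap extension, so an app can at least detect and opt into them. Below is only a minimal C++ detection sketch, assuming a Vulkan SDK recent enough to know the extension; instance/device setup, feature enabling, and error handling are omitted.

#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if the physical device advertises VK_EXT_opacity_micromap.
bool HasOpacityMicromap(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());

    for (const VkExtensionProperties& e : exts)
        if (std::strcmp(e.extensionName, "VK_EXT_opacity_micromap") == 0)
            return true;   // OMM path available; otherwise fall back to any-hit shaders
    return false;
}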
 
Sure. It is counterproductive. And it sucks.
But that's all that's left for a fix after some self-appointed experts came up with an API replicating the simplified concepts of offline workflows you would read in an RT introduction textbook. (Offline rendering does way more flexible things than DXR enables, btw.)

The damage has been done, and the potential fix now isn't nice. But that's not my fault. It's theirs.
That was the plan from the very beginning: take over the industry. Jensen and the band play a bit of a dirty game here, and MS allowed them to do so.
 
Nvidia seems to be fine exposing capabilities that aren’t supported by DXR. Ampere added motion blur acceleration. Ada added shader execution reordering, displaced micromeshes and opacity micromaps. And they’re apparently the ones holding back the industry.

What's stopping AMD from doing the same for the things they care about? Do they actually have a vision for faster RT, or do they just want a level playing field where every implementation is equally slow (but more flexible)? This is really a question for @Lurkmass.
they are one round behind
 
Sure. It is counterproductive. And it sucks.
But that's all that's left for a fix after some self-appointed experts came up with an API replicating the simplified concepts of offline workflows you would read in an RT introduction textbook. (Offline rendering does way more flexible things than DXR enables, btw.)

The damage has been done, and the potential fix now isn't nice. But that's not my fault. It's theirs.
Wow. These "self-appointed experts" released Battlefield 5, which was the first time that real-time ray tracing was used in a practical way in a game.
 
If you want to talk about Switch 2, please take the discussion to the existing Switch 2 thread.

If you want to talk about RT API evolution, please create a new thread and move the discussion there.

If you want to talk about something else, please create a new thread and move the discussion there.

If you want to talk about RDNA3 specifications, please remain seated...
 
RDNA4, RDNA4!

We know what RDNA 3 is, specs-wise. The mid chip is 60 CUs, the low chip is 32, laptop parts look to be 12/24(?). RAM will be 16 GB/8 GB, etc. Future chips will have much higher clock speeds; a 7950 is a possibility within the next year or so. RDNA3 specs: done.
 
I wonder how AMD will scale the frontend. It is interesting that the 4090 with 11 GPCs did not improve over the 3090 with 7 GPCs on the frontend side. Does anybody know why AMD is bad at MultiDrawInstance, and what is the most relevant value here?

I did some linear interpolation to compare both at 2 GHz; I think it would scale linearly with clock speed.

It is interesting that the 4090 is not scaling with clock in the normal pipeline. Basix reports that a clock increase of about 6.7% (2.55 to 2.72 GHz) results in a 20% gain in the old pipeline, while the new pipeline with mesh shaders only gains about that same ~6.7%. So does the normal pipeline have another clock domain?

Values from basix and Langly from 3dcenter.
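For clarity, this is the kind of normalization being described: scale each result to a common 2.0 GHz reference under the linear-with-clock assumption. The 2.55 and 2.72 GHz clocks are the ones quoted from Basix above; the measured value is just an illustrative placeholder.

#include <cstdio>

int main()
{
    const double clockLow = 2.55, clockHigh = 2.72;               // GHz, from the post above
    std::printf("clock increase: %.1f%%\n",
                (clockHigh / clockLow - 1.0) * 100.0);            // ~6.7%, vs. the 20% perf gain

    // Normalize a (hypothetical) result measured at 2.72 GHz down to 2.0 GHz,
    // assuming performance scales linearly with clock.
    const double measuredFps = 100.0;                             // placeholder value
    std::printf("normalized to 2.0 GHz: %.1f fps\n", measuredFps * (2.0 / clockHigh));
    return 0;
}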
 