AMD RDNA3 Specifications Discussion Thread

upcoming games that will not offer a setting to disable ray tracing
Which games?
Do the demands of RT scale with resolution, or is it a fixed cost across all resolutions?
An example of the latter would be RTXGI, which traces per probe, not per pixel.
But most things do scale with resolution.
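A back-of-the-envelope sketch of the difference; the probe grid size and rays per probe below are made-up illustrative numbers, not RTXGI's actual defaults:

```python
# Rough cost model: per-probe tracing (RTXGI-style) vs. per-pixel tracing.
# All numbers are illustrative assumptions, not real engine defaults.

def rays_per_frame_per_probe(grid=(16, 16, 8), rays_per_probe=256):
    """Fixed cost: independent of output resolution."""
    nx, ny, nz = grid
    return nx * ny * nz * rays_per_probe

def rays_per_frame_per_pixel(width, height, rays_per_pixel=1):
    """Scales linearly with pixel count."""
    return width * height * rays_per_pixel

for w, h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    print(f"{w}x{h}: per-pixel {rays_per_frame_per_pixel(w, h):>10,} rays, "
          f"per-probe {rays_per_frame_per_probe():>10,} rays (constant)")
```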
And if AMD thought they even had a shot at the 4090 in raster, they would have put it in the charts, like they did with the 6900 XT vs. the 3090.
They wanted to, but couldn't afford it.
A one-man developer uses ray tracing in his game: https://store.steampowered.com/app/1186640/Pumpkin_Jack/

One person makes a 533mm^2 chip obsolete.
What RT effects is he using? Raytraced blob shadows?

As it is, it's far more likely that I'll grab an RTX 3070 for 300 USD once it gets that low (used or new) or a 6800 XT for 400 USD if it gets that low (used or new).
Same here. I'll wait till some 6800 gets below 500 EUR, I guess.
I feel like the new generation of GPUs doesn't provide enough of an uplift for me to really want it.
But that's not because they're bad; it's because the old generation seems more than good enough for the next few years.
So far I haven't played a single game where I've missed having an RT GPU at all. Some games I liked got an RT patch only after I was long done with them, which is pretty pointless.

I guess that's what most people think. Maybe a flood of sub-30 fps UE5 games will change this, but I'd rather assume they'll still have graphics options.
 
That is where I'm at. I will bitch and moan when we get details about exactly what they are doing.
I have been upset about a bunch of GPU changes over the years.
It started with the vendor-based post-processing stuff; suddenly we couldn't have exact performance comparisons.
Then dynamic GPU clocks in ~2012, so you didn't know exactly what performance to expect when buying the same product (obviously not as big of a deal as I thought).
Then we got DLSS and other features that can dynamically change IQ to boost performance, so the entire rendered frame is no longer a direct comparison.
Now we're getting fake frames injected for more FPS. Yay, interpolation...
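To put a number on the interpolation complaint: the presented frame rate doubles, but input is only sampled on real rendered frames. A toy model, which even ignores the extra latency from holding back a frame to interpolate against (that makes things slightly worse):

```python
# Toy model of frame interpolation: presented FPS doubles, but responsiveness
# still tracks the rendered frame rate. Simplified; ignores the added latency
# from buffering a frame for interpolation.

def with_interpolation(rendered_fps):
    presented_fps = rendered_fps * 2            # one generated frame per real frame
    real_frametime_ms = 1000.0 / rendered_fps   # input cadence still tracks this
    return presented_fps, real_frametime_ms

for fps in (30, 60, 120):
    presented, frametime = with_interpolation(fps)
    print(f"rendered {fps:>3} fps -> presented {presented:>3} fps, "
          f"but ~{frametime:.1f} ms between input samples")
```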

Well, that's marketing for you. Actual reviews, which you've always had to wait for to get an accurate picture, tend to disaggregate this stuff. You're not going to see reviews of the 7900 based just on running FSR.
 
Having no acceleration structure is arguably more elegant in the case I described above, as opposed to dealing with the warts of acceleration structures, such as the need to reduce LoDs or limit the distance of geometry included ...
Having no acceleration structure doesn't make the doubling, tripling, quadrupling and so on of geometry with planar reflections any easier to handle. Planar reflections simply don't scale in practice for exactly that reason: they have no acceleration structure, not to mention all their other limitations.
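A sketch of why the geometry multiplies: each reflective plane re-renders the whole scene from a mirrored camera, with no structure to cull against. The names here are illustrative stand-ins, not any particular engine's API:

```python
# Sketch of why planar reflections multiply geometry cost: every reflective
# plane needs a full extra geometry pass from a mirrored camera.

def render_scene(camera, triangle_count):
    """Stand-in for a full geometry pass; returns triangles processed."""
    return triangle_count

def mirror(camera, plane):
    """Stand-in for reflecting the camera about the plane."""
    return camera

def frame_cost(camera, triangle_count, reflection_planes):
    cost = render_scene(camera, triangle_count)          # main view
    for plane in reflection_planes:
        mirrored = mirror(camera, plane)
        cost += render_scene(mirrored, triangle_count)   # one full pass per plane
    return cost

SCENE = 2_000_000  # triangles in view, illustrative
for n in range(4):
    print(f"{n} plane(s): {frame_cost(None, SCENE, range(n)):,} triangles per frame")
```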
 
Well, you can still turn most of that shit off, so there's that.

Cost.
The whole gimmick of RDNA3 is that it's cheap.
Focus on raw PPA metrics above all else.
What happened to the entirely new compute unit? Since when does tacking on dual-issue FP32 instructions qualify as that? And the PPA is lower than the competition's.
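For context on what that dual-issue buys on paper, a back-of-the-envelope check of the 7900 XTX's headline FP32 figure (the clock is the approximate marketed boost clock). The peak only materializes when the compiler can actually pair instructions, which is part of why game performance doesn't double:

```python
# Back-of-the-envelope check of RDNA3's headline FP32 number.
# 7900 XTX: 96 CUs x 64 lanes, dual-issue FP32, FMA counts as 2 FLOPs.
cus, lanes, dual_issue, flops_per_fma = 96, 64, 2, 2
boost_clock_ghz = 2.5  # approximate marketed boost clock

tflops = cus * lanes * dual_issue * flops_per_fma * boost_clock_ghz / 1000
print(f"~{tflops:.0f} TFLOPS peak FP32")  # ~61 TFLOPS, the marketing figure
```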
 
I guess thread bans will need to be handed out in order to keep this thread about AMD RDNA3.

Stop turning everything into a competition pissing match.
 
A BVH allows you to handle all that stuff
Not really, because you're giving up the spatio-temporal stability of being able to render perfect reflections with a single sample per pixel. Any realistic BVH in games is going to have to weigh the cost of rebuilding versus refitting, and will often cut out some distant geometry ...

Rebuilding the acceleration structure every frame while including all geometry would be extremely prohibitive for any modern game ...
I just don't get why somebody would want to get rid of the BVH and bolt on tons of hacks instead.
If you have to cheat in the BVH, such as by using lower LoDs and cutting out geometry, then it's not a clear-cut increase in quality over planar reflections. Sometimes hacks can result in higher quality in specific scenarios ...
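The rebuild-versus-refit tradeoff being discussed, sketched out; the costs and threshold are made up for illustration, real engines tune these per scene:

```python
# Sketch of the per-frame BVH update decision: refitting is cheap but the
# tree degrades as geometry deforms; a rebuild restores quality but costs
# far more. All numbers are illustrative, not from any real engine.

REFIT_COST_MS, REBUILD_COST_MS = 0.3, 4.0
LOOSENESS_PER_REFIT = 0.05      # how much the bounds degrade each refit

def update_bvh(refits_since_rebuild, deformation):
    looseness = refits_since_rebuild * LOOSENESS_PER_REFIT * deformation
    if looseness > 0.3:          # traversal getting too slow: pay for a rebuild
        return "rebuild", REBUILD_COST_MS, 0
    return "refit", REFIT_COST_MS, refits_since_rebuild + 1

refits = 0
for frame in range(10):
    action, cost_ms, refits = update_bvh(refits, deformation=1.0)
    print(f"frame {frame}: {action} ({cost_ms} ms)")
```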
 
That stuck out to me during the presentation... they mentioned area and power savings at points where you would typically talk about performance.
This will go into quite a plethora of iGPUs, so the focus is blindingly evident, isn't it?
I mean, for fuck's sake, AMD poached an entire Imagination team that was making cheapskate GPUs.
 
Lumen could improve even further on the quality of its software RT by implementing planar reflections for flat, highly specular surfaces ...
I've thought about analyzing the framebuffer to find a best fit of, say, 6 planes, then using SSR methods to reduce the error. I wonder how well this would work.
It's really ugly, though.
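For what it's worth, a minimal sketch of the plane-fitting idea, assuming world-space positions have already been reconstructed from the depth buffer. The RANSAC-style loop and all thresholds are illustrative choices, not a known implementation:

```python
import numpy as np

# Toy RANSAC-style fit of one dominant plane to world-space positions
# reconstructed from a depth buffer. Repeating this ~6 times, removing
# inliers each round, gives the "best fit of 6 planes" idea.

def fit_plane(p0, p1, p2):
    n = np.cross(p1 - p0, p2 - p0)
    n = n / (np.linalg.norm(n) + 1e-8)
    return n, -np.dot(n, p0)          # plane: dot(n, x) + d = 0

def ransac_plane(points, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
    best_inliers, best_plane = 0, None
    for _ in range(iters):
        i = rng.choice(len(points), 3, replace=False)
        n, d = fit_plane(*points[i])
        inliers = np.sum(np.abs(points @ n + d) < thresh)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Synthetic test: a noisy floor plane (y ~= 0) plus random clutter.
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(-5, 5, 500), rng.normal(0, 0.01, 500), rng.uniform(-5, 5, 500)]
clutter = rng.uniform(-5, 5, (200, 3))
(n, d), count = ransac_plane(np.vstack([floor, clutter]))
print(f"plane normal ~ {np.round(n, 2)}, {count}/700 inliers")
```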
 
I think the value for the cards looks pretty good, assuming you're not interested in ray tracing. Right now there's a real split on who wants ray tracing and who doesn't. These cards have their place. Personally I wish AMD had a more interesting ray tracing story. I think in the very long run having a more programmable ray tracing api will be better, but right now the more restrictive hardware accelerated path is getting the best results. Personally, my current gpu is more than enough for playing multiplayer games, so the only reason for me to upgrade would be to be able to play single player games with all of the RT nonsense turned on.
 
Performance estimates:
Doom Eternal: 7900 XTX 160 fps, 4090 200 fps, 4080 125 fps
Metro EE: 7900 XTX 55 fps, 4090 95 fps, 4080 60 fps
RE Village: 7900 XTX 205 fps, 4090 199 fps, 4080 125 fps
MWII: 7900 XTX 200 fps, 4090 120 fps, 4080 75 fps

Those are the titles that matter. It obviously destroys the competition in MWII, but that's probably not a good indicator for the future.

In the more relevant titles it's enough above a 4080, at a lower price, that it should be a good deal.
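Putting rough numbers on that, using the estimates above and the announced launch MSRPs (7900 XTX at 999 USD, 4080 16GB at 1199 USD; street prices will differ):

```python
# Rough perf-per-dollar using the fps estimates quoted above and launch MSRPs.
prices = {"7900 XTX": 999, "4080": 1199}
fps = {  # from the estimates above
    "Doom Eternal": {"7900 XTX": 160, "4080": 125},
    "Metro EE":     {"7900 XTX": 55,  "4080": 60},
    "RE Village":   {"7900 XTX": 205, "4080": 125},
    "MWII":         {"7900 XTX": 200, "4080": 75},
}
for game, results in fps.items():
    ratio = results["7900 XTX"] / results["4080"]
    value = ratio * prices["4080"] / prices["7900 XTX"]
    print(f"{game:12s}: {ratio:.2f}x the 4080's fps at {value:.2f}x its perf/$")
```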

Edit: On the future: this die can obviously go much, much faster than it is right now. Whether we'll see a 25% performance bump with faster RAM in the future, I don't know. It seems like the RT engine and the RAM are holding it back? Doubling the SIMDs per CU while limiting the RT engine... it feels weird. I wonder how far the split clock domain can be extended.
 
OK, just got to watch the presentation and the recaps from Gamers Nexus and Hardware Unboxed.

Time for my own knee-jerk reactions.

Not too sure what I was expecting, but this left me "whelmed". Not overwhelmed or underwhelmed.

Objectively, I think they did a good enough job.

-The 7900 series appears to be priced well compared to its competition (the 4080 16GB)
-They stepped up their software suite with easy-to-use features and are trying to feature-match the competition
-They stepped up their encoding hardware, appear to be competitive with the current NVENC, and addressed a major area where they were lacking
-They are iterating quickly on FSR with the announcement of FSR 2.2 and FSR 3 [referred to on stage as "fluid motion" :)]
-They made some multi-year technical partnerships to ensure that future AAA titles run well on their hardware

When recapped like that, it seems "fine". They did what they were supposed to do.

Having said that, it still doesn't feel like enough. At this point they are barely keeping pace with NV, and that isn't enough to win over people on the fence.

Realistically, it just doesn't seem like AMD's Radeon team has the resources to be neck-and-neck competitive with NV's GeForce team.

A thought: I wonder if their best work will come when they're co-developing hardware alongside other partners (the Xbox team, the PlayStation team, Valve, etc.), since those can provide more resources, such as money and people.
 
-They stepped up their encoding hardware, appear to be competitive with the current NVENC, and addressed a major area where they were lacking
This doesn't really belong in this thread, but they've been "good enough" since early this year, and actually in exactly the same ballpark as Intel and NVIDIA in H.264 encoder quality for a while already. H.265 was always fine. For whatever reason no one uses the new SDKs; this seems to be changing now thanks to the OBS cooperation. Hopefully it means older generations get to use the improvements they actually already have, too.
 