AMD: RDNA 3 Speculation, Rumours and Discussion

Improving RT performance is just a question of how many transistors you spend on it. There is no magic here. RT is pure brute force.
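To put the "brute force" in concrete terms, here is a minimal sketch (plain C++, names mine, not any vendor's API) of the ray/AABB slab test that BVH traversal hardware evaluates over and over per frame; spending more transistors on RT largely means running more of these box tests, plus ray/triangle tests, in parallel:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

// Sketch only: the ray/AABB "slab test" at the heart of BVH traversal.
struct Ray  { float ox, oy, oz; float dx, dy, dz; };  // origin, direction
struct AABB { float minv[3], maxv[3]; };

bool intersects(const Ray& r, const AABB& b) {
    float o[3] = { r.ox, r.oy, r.oz };
    float d[3] = { r.dx, r.dy, r.dz };
    float tmin = 0.0f, tmax = INFINITY;
    for (int a = 0; a < 3; ++a) {          // one slab per axis
        float inv = 1.0f / d[a];           // IEEE inf covers axis-parallel rays
        float t0  = (b.minv[a] - o[a]) * inv;
        float t1  = (b.maxv[a] - o[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;                   // slabs overlap -> ray hits the box
}
```

There is no clever shortcut hiding in there; throughput scales with how many of these tests the hardware can issue at once.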
 
Pictured: https://videocardz.com/newz/amd-rad...pictured-two-8-pin-power-connectors-confirmed
This would still fit into my case, and my PSU is still compatible.
We can already clearly see this is an old-fashioned product. It probably has just dot-product acceleration. How the hell am I supposed to do my matrix multiplies for magic AI with lame dot products? So lame.
At least they have an xtXXX model, to keep the dying enthusiast inside me happy.
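For what it's worth, a matrix multiply decomposes into one dot product per output element, so dot-product instructions do cover the "magic AI" case, just with more instructions than a dedicated matrix engine that chews through whole tiles per instruction. A toy sketch (plain C++, names hypothetical):

```cpp
#include <cstddef>

// Sketch: C = A * B for row-major matrices, built purely from dot products.
// Hardware dot-product instructions (e.g. 4-wide int8/fp16 dots) accelerate
// the inner loop; a full matrix engine would process entire tiles at once.
void matmul(const float* A, const float* B, float* C,
            std::size_t M, std::size_t K, std::size_t N) {
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t j = 0; j < N; ++j) {
            float acc = 0.0f;                  // one dot product per C element
            for (std::size_t k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
}
```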
 
The 9700 Pro and 6800 Ultra less than doubled transistors over the 8500/FX 5900, yet were much more powerful in shader-bound applications. The 6800 Ultra could see a 4X or greater improvement in DX9 games.
Neither the 9700 Pro nor the 6800 Ultra doubled the performance of its predecessor in actual games, much less tripled or quadrupled it (which is apparently what people are expecting RDNA 3 to do) outside of corner cases.
 
The 9800 Pro alone is 3.8X faster than the FX 5900 XT in this HL2 benchmark:


We also see that the 9700 (non-Pro) more than doubled performance over the 9550 (8500 equivalent) in DirectX 8.

In this Far Cry 1.3 benchmark, the 6800 GT is 2.85X faster than the FX 5900 Ultra. (Edit: rising to 3.2X at 1600x1200)


Even in Halo, you could see greater than a 2X boost:

 
So basically corner cases. Far Cry 1.3 is using features the FX series doesn't even support, which invalidates it for the purposes of this argument. Not to mention the FX series was a hardware disaster when it came to anything DX9 to begin with.
 
HL2 is a corner case? Halo is a corner case? And even granting the 6800 a ~20% boost from SM3 does little to change the huge multipliers involved: 2.85X divided by 1.2 is still roughly 2.4X.

As you said, the FX series was a disaster for DirectX 9, just like RDNA 2 is arguably a disaster for ray tracing, at least in the heaviest titles. That's how you get these huge improvements from generation to generation.
 
The 9800 Pro alone is 3.8X faster than the FX 5900 XT in this HL2 benchmark: [...]

The FX series was basically broken on DX9, so it's not quite a fair comparison. It just didn't work well. So the 6800, which fixed this, shows a skewed increase in performance in some specific DX9 games, Halo and Far Cry included. But I wouldn't look at the FX generation as a relevant example. It was so out there, that gen.

 

The point of the comparison is not to say that such improvements are "normal", but that they can happen when a vendor has a suboptimal implementation of a particular feature (in this case DirectX 9) that then gets fixed. That's exactly what we're talking about with ray tracing, where RDNA 2 suffers a disproportionately large performance penalty in the heaviest titles. If AMD "fixes" this, 2.5X+ boosts are not out of the question.
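To illustrate with made-up numbers (not measurements from any review): if a heavy RT title costs the old part 75% of its raster frame rate while a fixed implementation only costs 40%, then even a modest raster uplift compounds into a large RT-on multiplier:

```cpp
#include <cstdio>

int main() {
    // Purely illustrative figures, chosen only to show how the math compounds.
    float raster_old = 100.0f, rt_hit_old = 0.75f;  // 25 fps with RT on
    float raster_new = 150.0f, rt_hit_new = 0.40f;  // 90 fps with RT on
    float fps_old = raster_old * (1.0f - rt_hit_old);
    float fps_new = raster_new * (1.0f - rt_hit_new);
    std::printf("RT-on speedup: %.2fx\n", fps_new / fps_old);  // prints 3.60x
    return 0;
}
```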
 
HL2 is a corner case? Halo is a corner case? [...]
A corner case to me is a rare occurrence. It doesn't mean the title in question is obscure.
 
AMD cannot fix it without spending transistors. So is AMD able to increase transistor density beyond the benefits of the node jump? CDNA 2, for example, is a disaster. So just spending cheap transistors on compute units is not a solution.
 
A corner case to me is a rare occurrence. It doesn't mean the title in question is obscure.
So AnandTech benchmarked 5 different HL2 levels and saw similar results. That doesn't indicate it was a rare occurrence. AnandTech and Guru3D also saw similar results in Halo. In the video linked by Phantom88, we see the 5800 Ultra fall significantly behind the 9700 Pro in Halo, Far Cry and FEAR.
 
5 benchmarks of the same game are 1 occurrence. I also disagree that RDNA 2 RT is comparable to NV30's shoddy DX9 support. A better example would be GCN tessellation compared to Kepler.
 
AMD cannot fix it without spending transistors. So is AMD able to increase transistor density beyond the benefits of the node jump? CDNA 2, for example, is a disaster. So just spending cheap transistors on compute units is not a solution.
Xtor density is a very odd blanket metric that's hardly ever relevant to how modern designs work.
You don't even have a semblance of an idea as to how your big gfx11 WGP is budgeted.
 
5 benchmarks of the same game are 1 occurrence.
So what would it take to "prove" the FX 5900 has terrible performance in DirectX 9 under HL2? Or are you saying HL2 itself is a corner case, because it's just 1 title? (Ignoring Halo and Far Cry, and the linked video above showing the 9700 Pro getting 2.86X the performance of the 5800 Ultra in FEAR.)
 
I’m saying the 6800 Ultra generally did not provide double the performance of its predecessor. Same for the 9700 Pro. There were corner-case games that were the exception and not the rule. I’m also saying that RDNA 2 isn’t in the same boat as NV30. It’s my opinion that RDNA 3 performance in RT games will generally be at the level of Ampere and not Ada.
 
I'm talking specifically about shader-bound DirectX 9 titles. Such titles are not corner cases after 2004/2005, but the norm. But sure, in UT2k4 you're not going to see such a big advantage, just like AMD's ray-traced performance is fine in Dirt 5.

The greater-than-2X performance improvement is expected in heavily RT-bound titles where AMD is already far behind (Cyberpunk, Control, etc.).
 