AMD Execution Thread [2024]

I don't think any 'approach' matters if AMD simply doesn't deliver on architectural performance. RDNA2 was brilliant in this regard, which is what had many hopeful for RDNA3, but RDNA3 turned out to be the biggest dud they've ever produced instead. I really don't think people appreciate how much of a lead weight this put on the GPU market. AMD had started making real progress and momentum was very much within reach, but RDNA3 killed it. Dead. In its tracks. All the positivity and expectations AMD had built up as a potential challenger were gone, and now nobody believes in AMD on the GPU side whatsoever, and not unjustifiably so. I'll say it again - RDNA3 is beyond bad. It's one of the worst architectures AMD has ever produced.

AMD's chiplet strategy could have been competitive and perhaps significant in changing the market had RDNA3 been good.
'Nobody', you say? Maybe so for the PC enthusiast audience, but console vendors and some graphics programmers clearly don't share that same view ...

I don't know about "worst architecture" when it's the first to feature proper and performant support for Work Graphs (the biggest API change since compute shaders!), which could potentially propel it to new heights in terms of performance ...
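For anyone wondering what the Work Graphs fuss is about: the pitch is that shaders can enqueue records for other shaders and the GPU schedules them itself, instead of the CPU (or pre-recorded ExecuteIndirect plumbing) sitting between every stage. Here's a tiny CPU-side toy model of that producer/consumer idea; to be clear, this is not the D3D12 API, and every name in it (WorkGraph, Record, emit, the "Classify"/"Shade" nodes) is made up purely for illustration.

```cpp
// Toy CPU model of the Work Graphs idea: "nodes" enqueue records for other
// nodes and a scheduler drains the queue with no host round trips in between.
// NOT the real D3D12 API -- all names here are invented for illustration.
#include <cstdio>
#include <deque>
#include <functional>
#include <string>
#include <unordered_map>

struct Record { std::string node; int payload; };   // a node input record

class WorkGraph {
public:
    using NodeFn = std::function<void(const Record&, WorkGraph&)>;

    void addNode(const std::string& name, NodeFn fn) { nodes_[name] = std::move(fn); }

    // What a node does instead of asking the host to launch another dispatch.
    void emit(const std::string& node, int payload) { queue_.push_back({node, payload}); }

    void dispatch(const std::string& entry, int payload) {
        emit(entry, payload);
        while (!queue_.empty()) {                    // stand-in for the GPU-side scheduler
            Record r = queue_.front(); queue_.pop_front();
            nodes_.at(r.node)(r, *this);
        }
    }

private:
    std::deque<Record> queue_;
    std::unordered_map<std::string, NodeFn> nodes_;
};

int main() {
    WorkGraph g;
    // "Classify" amplifies work: each input record spawns a variable number of
    // "Shade" records, decided on the fly rather than baked into a command list.
    g.addNode("Classify", [](const Record& r, WorkGraph& wg) {
        for (int i = 0; i < r.payload % 3 + 1; ++i) wg.emit("Shade", r.payload * 10 + i);
    });
    g.addNode("Shade", [](const Record& r, WorkGraph&) {
        std::printf("shading item %d\n", r.payload);
    });
    for (int tile = 0; tile < 4; ++tile) g.dispatch("Classify", tile);
}
```

The real feature runs that scheduling on the GPU itself, which is what removes the CPU round trip between the producing and consuming stages.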
 
Fuck no, it's still passable. Nothing like Vega.
Just a weird PPA miss.

It's ok.
OK enough for them to maintain strict, sane power targets, even.
Don't agree at all. Vega had the excuse of being hamstrung by GCN fundamentals, a super late example of AMD trying to wring the last bit out of GCN by that point. Vega 56/64 were also produced on GlobalFoundries 14nm, which was notably inferior to TSMC 16nm. It also came only a year after Polaris.

RDNA3 had no excuses at all. It was a big architectural change, it had lots of time in development with a long gap after RDNA2, it had a large node jump to the best (practically) available process node of the time, and it was coming off the back of Radeon's biggest success since Hawaii.

Unlike previous disappointments from Radeon, RDNA3 was simply a dud. It is very literally borked and not working as AMD intended.
 
Vega had the excuse of being hamstrung by GCN fundamentals, a super late example of AMD trying to wring the last bit out of GCN by that point. Vega 56/64 were also produced on GlobalFoundries 14nm, which was notably inferior to TSMC 16nm. It also came only a year after Polaris.
It was a dud that wasn't even ballpark passable.
All RDNA3 parts are OK at worst.
That's the thing.
It is very literally borked and not working as AMD intended.
It's a weird fuckup that behaves weirdly, but it's still passable enough.
Anyway, Strixes soon(tm).
 
'Nobody', you say? Maybe so for the PC enthusiast audience, but console vendors and some graphics programmers clearly don't share that same view ...

I don't know about "worst architecture" when it's the first to feature proper and performant support for Work Graphs (the biggest API change since compute shaders!), which could potentially propel it to new heights in terms of performance ...
Buzzword nonsense.

You're also trying to speak for people who aren't actually speaking, which is silly. Where are 'console vendors' spreading the gospel of RDNA3, exactly?

I'm an F1 fan, and this 'potential' talk has me laughing, cuz I hear it so much. "Oh, it has the potential to be great, we just need to keep waiting for that potential to be UNLOCKED!" - say the hopium sniffers who almost always end up disappointed.
 
It was a dud that wasn't even ballpark passable.
All RDNA3 parts are OK at worst.
That's the thing.

It's a weird fuckup that behaves weirdly, but it's still passable enough.
Anyway, Strixes soon(tm).
You're not saying anything.

RDNA3 parts are only 'ok' in a consumer sense cuz AMD has priced them accordingly. In terms of actual expected improvements given all its available advantages, RDNA3 is outright disastrous.
 
Buzzword nonsense.

You're also trying to speak for people who aren't actually speaking, which is silly. Where are 'console vendors' spreading the gospel of RDNA3, exactly?

I'm an F1 fan, and this 'potential' talk has me laughing, cuz I hear it so much. "Oh, it has the potential to be great, we just need to keep waiting for that potential to be UNLOCKED!" - say the hopium sniffers who almost always end up disappointed.
Are you here to have an actual discussion on technical merits, or are you more interested in hurling ad hominems at everyone who disagrees with you?
 
Fuck no, it's still passable. Nothing like Vega.
Just a weird PPA miss.

It's ok.
OK enough for them to maintain strict, sane power targets, even.
I would argue R600 was easily their biggest dud. Vega was late and power-hungry, but it was feature-competitive and a performance tie/win vs the competition.
 
How can you be absolutely certain of your last statement, especially in the latter case, when perf/logic complexity matters more than ever in an era where the leading integrated graphics designer showed us just yesterday that they're still stuck on the very same process node technology as before?

I’m not looking into a crystal ball. I’m referring to what’s happening right now. Intel and Nvidia are all in on dedicated RT transistors. AMD is already doing intersection in hardware.

But let’s follow your train of thought. If transistor budget is becoming more precious over time then perf/transistor will be that much more important. For RT specifically with current APIs it’s an easy bet that dedicated transistors will win in that metric.
 
For RT specifically with current APIs it’s an easy bet that dedicated transistors will win in that metric.
Frankly, the issue with RT is both opportunity cost and IQ incrementalism.
The last big move before that was what, PBR and various deferred GI schemes at the start of the 8th gen? But that was at 28nm, where xtors were still getting measurably cheaper, so the perf hits were manageable.

RT needs more than just dedicated BVH walkers. You'd need more memory bandwidth, bigger caches, and other things that no longer quite grow on trees.
 
Frankly, the issue with RT is both opportunity cost and IQ incrementalism.
The last big move before that was what, PBR and various deferred GI schemes at the start of the 8th gen? But that was at 28nm, where xtors were still getting measurably cheaper, so the perf hits were manageable.

RT needs more than just dedicated BVH walkers. You'd need more memory bandwidth, bigger caches, and other things that no longer quite grow on trees.

Agreed, but those things are also true if you're trying to run RT in software. They're probably even more important, as running RT traversal on a 32-wide SIMD is inherently less cache-friendly than Nvidia's (and presumably Intel's) dedicated MIMD hardware.

Software wins when the APIs evolve to be too flexible to build hardware around. That future isn’t here yet.
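
To make the cache-friendliness point concrete, here's a scalar sketch of the traversal loop a software path has to run per ray, i.e. per SIMD lane (toy node layout and ray test, not any vendor's actual acceleration structure format): every lane keeps its own stack and chases its own data-dependent pointers, so lanes in a 32-wide wavefront quickly diverge and scatter their node fetches across memory.

```cpp
// Scalar sketch of software BVH traversal -- the loop each SIMD lane would run.
// Toy node layout and ray test, not any vendor's real acceleration structure.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Ray  { float o[3], invDir[3]; };
struct AABB { float lo[3], hi[3]; };
struct Node { AABB box; int left, right; int firstTri, triCount; }; // leaf if triCount > 0

static bool hitAABB(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {                 // slab test on each axis
        float t0 = (b.lo[a] - r.o[a]) * r.invDir[a];
        float t1 = (b.hi[a] - r.o[a]) * r.invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::fmax(tmin, t0);
        tmax = std::fmin(tmax, t1);
    }
    return tmin <= tmax;
}

// Each ray walks its own path with its own stack; neighbouring lanes end up on
// different branches touching unrelated cache lines, which is the divergence /
// cache-unfriendliness being argued about above.
int countLeafVisits(const std::vector<Node>& bvh, const Ray& ray) {
    int visits = 0;
    int stack[64]; int sp = 0;
    stack[sp++] = 0;                              // start at the root
    while (sp > 0) {
        const Node& n = bvh[stack[--sp]];
        if (!hitAABB(ray, n.box)) continue;       // per-lane branch -> divergence
        if (n.triCount > 0) { ++visits; continue; } // leaf: triangle tests would go here
        stack[sp++] = n.left;                     // data-dependent pointer chasing
        stack[sp++] = n.right;
    }
    return visits;
}

int main() {
    // Tiny two-leaf tree, just enough to exercise the loop.
    std::vector<Node> bvh = {
        { {{0,0,0},{2,2,2}}, 1, 2, -1, 0 },       // root
        { {{0,0,0},{1,1,1}}, -1, -1, 0, 1 },      // leaf A
        { {{1,1,1},{2,2,2}}, -1, -1, 1, 1 },      // leaf B
    };
    Ray r{ {0.5f, 0.5f, -1.0f}, {1e6f, 1e6f, 1.0f} }; // shooting roughly +Z through leaf A
    std::printf("leaf visits: %d\n", countLeafVisits(bvh, r));
}
```

Dedicated traversal units essentially hide that stack and the pointer chasing behind fixed-function hardware, which is the perf/transistor argument being made above.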
 
I’m not looking into a crystal ball. I’m referring to what’s happening right now. Intel and Nvidia are all in on dedicated RT transistors. AMD is already doing intersection in hardware.

But let’s follow your train of thought. If transistor budget is becoming more precious over time then perf/transistor will be that much more important. For RT specifically with current APIs it’s an easy bet that dedicated transistors will win in that metric.
@Bold Are you somehow implying that RT perf will be paramount in the future? Because that would very much contradict your claim of "not looking into a crystal ball" ...

Just because AMD implemented ray intersection testing in hardware doesn't mean they have identical reasons to Intel/Nvidia for doing so. They implemented 'tessellation' and 'ROVs' too, but does anyone genuinely think they believed these rendering features would take off?

Sometimes features aren't implemented for "performance" reasons; they're implemented purely for the purpose of getting the "API supported" rubber stamp ...
 
They're probably even more important, as running RT traversal on a 32-wide SIMD is inherently less cache-friendly than Nvidia's (and presumably Intel's) dedicated MIMD hardware.
Obviously.
Software wins when the APIs evolve to be too flexible to build hardware around. That future isn’t here yet.
Kinda the issue: nothing exposes traversal cores in any capacity to the programmer yet.
DXR as an API is extremely rudimentary.
Are you somehow implying that RT perf will be paramount in the future?
If we want to push the IQ, yeah.
The question is always incrementalism.
PBR was a huge bump at a reasonable opportunity cost, which RTRT so far is not.
Just because AMD implemented ray intersection testing in hardware
They're gonna do a lot more than that.
It's on MS/Khronos to make an RTRT API that's worth shit.

(AMD's also gonna bolt matrix crunchers onto RDNA5 SIMDs because MS said so. ISVs rule the world!)
 
@Bold Are you somehow implying that RT perf will be paramount in the future? Because that would very much contradict your claim of "not looking into a crystal ball" ...

Didn't realize we were debating the future of RT. But yes, I think casting rays into a triangle soup is the future of game rendering. Everything else is a hack.

Just because AMD implemented ray intersection testing in hardware doesn't mean they have identical reasons to Intel/Nvidia for doing so. They implemented 'tessellation' and 'ROVs' too, but does anyone genuinely think they believed these rendering features would take off?

Sometimes features aren't implemented for "performance" reasons; they're implemented purely for the purpose of getting the "API supported" rubber stamp ...

That would be sad if true. It implies AMD is fumbling in the dark with no clear vision for the future. The last time they made noise about something was Mantle, which (while it didn't exactly turn out that great) at least showed some form of leadership.
 