AMD RDNA3 Specifications Discussion Thread

Something seems off about RDNA3. I have the feeling they wanted to do at least one dual-GCD card to compete at the top but couldn't get it to work properly.
 
So from these slides, how does Raster and RT compare to the 4090 and 3090Ti?
Difficult to say. If it's really 1.5x a 6950 XT in raster, it should be more or less at 4080 16GB level (it will depend a lot on the engine). RT is worse, but there's more memory, a lower price, and likely a better video engine (software is a question mark though).
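Purely as a back-of-envelope sketch of that comparison (only the 1.5x claim comes from AMD's slides; the baseline fps and the 4080 figure below are hypothetical placeholders to swap for real benchmark numbers later):

```python
# Back-of-envelope for the "1.5x a 6950 XT ~= 4080-level" guess.
# Only the 1.5x multiplier comes from AMD's slides; the baseline fps and the
# 4080 figure are hypothetical placeholders, not benchmark data.
baseline_6950xt_fps = 100.0        # hypothetical 4K raster baseline
claimed_uplift = 1.5               # AMD's "up to 1.5x" raster claim
hypothetical_4080_fps = 140.0      # placeholder, not a real benchmark

estimated_new_card_fps = baseline_6950xt_fps * claimed_uplift
print(f"Estimated new card: {estimated_new_card_fps:.0f} fps "
      f"vs hypothetical 4080: {hypothetical_4080_fps:.0f} fps")
```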
 
MSRP is a little lower than expected, I didn't think either would be under $1k. 20GB of GDDR6 for $899 is impressive on that front at least.

But damn, they would have to be. I was pessimistic regarding RT gains but apparently not pessimistic enough - even these improvements came in below my low estimates; we're looking at likely 3080-level RT performance for a $1k card. I'm not nearly as gung-ho on RT as others on here, but those are some pretty damn weak gains.

A 4090 is absolutely worth 50% more, without question.

A % price increase/decrease doesn't scale linearly with 'value' though. It's one thing to be 50% more expensive than your competitor at $300; it's another when you're starting at $1600 - that is simply an insurmountable barrier to the vast majority of GPU buyers. $900/$1k is a damn high wall to climb for many too, but it's at least not completely in the netherworld like a $1600 card is; the "% increase" is largely irrelevant to me at that price.
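To put rough numbers on the value argument (a hedged sketch: the MSRPs are the announced $999/$1599, but the ~0.875x raster ratio is just the speculation from later in this thread, not measured data):

```python
# Rough perf-per-dollar sketch of the value argument above.
# MSRPs are the announced $999 (7900 XTX) and $1599 (RTX 4090); the raster
# ratio is the speculative ~0.85-0.9x estimate from this thread, not data.
msrp = {"7900 XTX": 999, "RTX 4090": 1599}
relative_raster = {"7900 XTX": 0.875, "RTX 4090": 1.0}   # midpoint of the 0.85-0.9x guess

for card in msrp:
    perf_per_dollar = relative_raster[card] / msrp[card] * 1000
    print(f"{card}: {perf_per_dollar:.2f} relative raster perf per $1000")
```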
 
Oh no, they're gonna gut ray tracing in every engine and game they partner with. Snowdrop engine, so Avatar, and he mentions the Splinter Cell remake.
I'm going to assume these partnerships are just easy money for these game companies. They are already doing all this work for the Xbox versions of the games; now they just extend that to the PC hardware.
 
I'm really disappointed by the chiplet layout. I thought they would make small CU dies and one big memory die. To me this makes no sense at all.

This lets them offload some of the big-area stuff (which doesn't scale as well on newer process nodes) that was previously right on the GPU die, which allows for more of everything else than would otherwise be possible. This design was also rumored for a long time, so it isn't a surprise for most.
Separating the main GPU itself into multiple chiplets is something AMD, Nvidia, and Intel are all pursuing, but they haven't worked it out yet.
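A rough back-of-envelope of what that buys, using the approximate Navi 31 die sizes from the announcement (ballpark numbers, not exact specs):

```python
# Rough area sketch of what gets offloaded. Die sizes are the approximate
# figures AMD gave for Navi 31 (ballpark, not exact specs).
gcd_area_mm2 = 300      # graphics compute die, on the newer N5 node (approx.)
mcd_area_mm2 = 37       # each memory/cache die, on cheaper N6 (approx.)
num_mcds = 6

total_silicon = gcd_area_mm2 + num_mcds * mcd_area_mm2
print(f"GCD: {gcd_area_mm2} mm^2 (N5), MCDs: {num_mcds} x {mcd_area_mm2} mm^2 (N6)")
print(f"Total ~{total_silicon} mm^2 of silicon, only {gcd_area_mm2} mm^2 of it on the expensive node")
```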
 
Great prices. Not quite enough to seriously compete with the 4090, but at those prices there's no need. These are much more normal prices for high-end cards, prices people can actually afford, while NVIDIA is trying hard to push $2000 high-end GPUs as the norm.

Really? The 7900 XT is almost 40% more expensive than its direct predecessor (6800 XT) for what appears to be roughly the same gain, without actually fixing the already-broken RT performance. If this were still the RDNA2 generation that would be interesting, but for a new generation of GPU, where that level of performance increase is expected, I just see this as a massive price hike. Sure, the 7900 XTX is giving the expected generational price/performance jump over the 6900 XT, but those products are always horrible value anyway.
 
They bought Xilinx so they could catch up?
They only ever really had problems with H.264 quality, and that's actually competitive with NVIDIA & Intel already (it's just that no-one uses the new SDKs with the updates). H.265 was always fine.
 
So from these slides, how does Raster and RT compare to the 4090 and 3090Ti?
It's undoubtedly going to be faster than AD103 in rasterization performance, because the slightly higher transistor count and the better scaling of rasterization hardware on recent AMD GPUs will dictate that end result. As for RT, it could be either faster or slower than GA102 depending on the specific game/benchmark ...
 
It's a bit confusing what that actually is. Is it FSR2 + ??
Probably Radeon Anti-Lag (which has been available longer than NVIDIA has had any similar tech, though for whatever reason no-one noticed it before Reflex (which wasn't NVIDIA's first step into it either)).
 
So from these slides, how does Raster and RT compare to the 4090 and 3090Ti?
Raster-wise it appears to be in the 0.85-0.9x range of the 4090.
RT-wise it appears to be a bit faster with FSR2 than the 4090 without DLSS.

Now the question is what FSR2 setting they used for those RT benchmarks.
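For reference, a small sketch of why that setting matters, using FSR2's standard per-axis scale factors and assuming a 4K output target:

```python
# Why the FSR2 preset matters for those RT numbers: the quality mode sets the
# internal render resolution, and RT cost scales roughly with rendered pixels.
# Scale factors below are FSR2's standard per-axis ratios.
fsr2_scale = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0, "Ultra Performance": 3.0}

output_w, output_h = 3840, 2160     # assuming a 4K output target
for mode, s in fsr2_scale.items():
    render_w, render_h = round(output_w / s), round(output_h / s)
    pixel_ratio = (render_w * render_h) / (output_w * output_h)
    print(f"{mode:>17}: {render_w}x{render_h} internal ({pixel_ratio:.0%} of native pixels)")
```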
 
how does Raster and RT compare to the 4090 and 3090Ti?
Maybe a little better than Ampere (3090 Ti)? Or at the same level?
In RE Village (very heavy RT effects for AMD) there's a 50% performance improvement.
For comparison, the 4090 showed a much greater improvement in performance.
 