AMD RDNA3 Specifications Discussion Thread

For some, RT performance is all that matters, and this "only" seems to match last gen's fastest card, if even that.
That, and the view that if performance was higher then AMD would’ve focused on benchmarks throughout their presentation.
 
For some, RT performance is all that matters, and this "only" seems to match last gen's fastest card, if even that.
I can appreciate the sentiment, but what I'm trying to figure out is whether we really have enough info to make that determination right now. It seems like a lot of people are doing a lot of guessing and making predictions based on very little data.
 
I can appreciate the sentiment, but what I'm trying to figure out is whether we really have enough info to make that determination right now. It seems like a lot of people are doing a lot of guessing and making predictions based on very little data.
Well, AMD isn't really promising more either. We know 6950 XT performance, so we can estimate pretty well where the 7900 XTX lands.
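
To make that concrete, here's a minimal back-of-the-envelope sketch. It just scales placeholder 6950 XT results by the "up to 1.5x-1.7x at 4K" uplift AMD quoted in its slides; the baseline fps values are made up for illustration, not measurements.

```python
# Rough ballpark of where the 7900 XTX might land, scaling known 6950 XT
# results by AMD's quoted "up to 1.5x-1.7x at 4K" uplift.
# The baseline fps values below are placeholders, not measured numbers.
baseline_6950xt_fps = {
    "Game A (4K, max settings)": 90.0,  # placeholder
    "Game B (4K, max settings)": 60.0,  # placeholder
}

claimed_uplift = (1.5, 1.7)  # AMD's "up to" range vs. the 6950 XT

for game, fps in baseline_6950xt_fps.items():
    low, high = fps * claimed_uplift[0], fps * claimed_uplift[1]
    print(f"{game}: 6950 XT {fps:.0f} fps -> 7900 XTX ~{low:.0f}-{high:.0f} fps")
```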
 
Again, did I miss something? How can y'all be doom and glooming about this part when we really haven't seen any kind of real performance evaluation of it?

I can appreciate the sentiment, but what I'm trying to figure out is whether we really have enough info to make that determination right now. It seems like a lot of people are doing a lot of guessing and making predictions based on very little data.

Sure, we haven't seen full reviews, but AMD put out plenty of "official" performance numbers (and details of the systems that gave those numbers) from which folks can at least ballpark the performance of the 7900 series. I can't think of a reason AMD would have underplayed or sandbagged its numbers given Nvidia has already launched the 4090 and put out its own "official" performance numbers for the 4080. AMD would want to put its card in the best light to excite gamers (the group targeted by the presentation, "Hello Gamers!") into holding off on Nvidia's cards and buying AMD's cards when they come in December. I mean, when the cards release in December, there's no reason to believe the 7900 series' relative performance against the 6900 series, as shown at the presentation, will have changed appreciably, right? And we know how the 6900 series lines up against Ampere and the 4090 (and, to a much lesser extent, the 4080). So, even if we're all getting a little ahead of ourselves, we're not likely that far off in the big picture.

The doom and gloom comes from what appears to be missed expectations (well-founded or not). That probably starts with AMD's apparent failure to close the gap on RT, but as discussed in this thread, there are other expectations that do not seem to be met or exceeded. We'll see!
 
Well, AMD isn't really promising more either. We know 6950 XT performance, so we can estimate pretty well where the 7900 XTX lands.

Perhaps, although didn't one slide have subtext that mentioned testing was for 7900 XT versus 6900 XT? The performance uplift going from 7900 XT to 7900 XTX should be greater than what we got with 6900 XT to 6950 XT.

If so, the performance delta between 7900 XTX and 6950 XT should be greater.
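
A rough paper-spec comparison illustrates why. The CU counts and game clocks below are the listed specs as I recall them, and raw throughput won't translate 1:1 into fps, so treat this purely as a sketch of the relative step sizes.

```python
# Crude comparison of paper compute throughput (CUs x game clock) to show why
# the 7900 XT -> XTX step should be bigger than the 6900 XT -> 6950 XT step.
# Specs are as I recall them; real-world fps will not scale 1:1 with this.
cards = {
    "6900 XT":  (80, 2.015),  # (CUs, game clock in GHz)
    "6950 XT":  (80, 2.100),
    "7900 XT":  (84, 2.000),
    "7900 XTX": (96, 2.300),
}

def paper_throughput(name):
    cus, clock_ghz = cards[name]
    return cus * clock_ghz  # arbitrary relative units

step_6950 = paper_throughput("6950 XT") / paper_throughput("6900 XT") - 1
step_xtx = paper_throughput("7900 XTX") / paper_throughput("7900 XT") - 1
print(f"6950 XT over 6900 XT:  {step_6950:+.1%}")
print(f"7900 XTX over 7900 XT: {step_xtx:+.1%}")
```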

Regards,
SB
 
I can't think of a reason AMD would have underplayed or sandbagged its numbers
Honestly the first thing that comes to mind for me is they don't know the real numbers yet. A month of driver development can make an enormous difference, and it wouldn't be the first time they weren't sure of the performance at the announcement.

Y'all could be right and I'm way off base, I'm just not passing judgement until the real reviews start coming out.
 
1 rasterised prim per shader engine per clock (same rate as RDNA1), and doubled cull rates, so that's 4 primitives culled per SE per clock? Could be interesting with Nanite; maybe N33 will have slightly better than average perf increases vs N23. The big L0 and L1 increases will be very important for APUs, since they already added LPDDR5-6400 for Rembrandt.

More accurately, it can rasterise 33B tris/s and transform/cull 198B/s.
That's because it's 1 PolyMorph engine per TPC, and they cull 0.5x prims per PM engine per clock, IIRC? Then it's 1 triangle rasterised per GPC per clock.
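
For anyone wanting to sanity-check figures like these, here's a tiny sketch of how such throughput numbers fall out of per-clock rates; the unit counts and clock in the example are placeholders, not confirmed specs for any particular GPU.

```python
# throughput = number_of_units * primitives_per_unit_per_clock * clock
def geometry_rate_b_per_s(units, prims_per_unit_per_clock, clock_ghz):
    """Throughput in billions of primitives per second."""
    return units * prims_per_unit_per_clock * clock_ghz

# Hypothetical part: 11 raster units at 1 tri/clock and 64 front-end units
# culling 1 prim/clock, all at 3.0 GHz -- placeholder figures only.
print(f"rasterised: {geometry_rate_b_per_s(11, 1.0, 3.0):.0f}B tris/s")
print(f"culled:     {geometry_rate_b_per_s(64, 1.0, 3.0):.0f}B prims/s")
```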
 
The main thrust of my argument was that if we are to say RT performance doesn't matter because we'll just turn it off anyway, then you are accepting that on your $900 GPU, you are getting a lesser experience in some respects than even an Xbox Series S gamer.

I guess I just struggle to relate to this hypothetical person that would always use the RT mode on console games (which in many titles, can mean you're locked at 30fps or an unlocked framerate below 60), yet will also never use it on a card that will give them ~3X the performance with the same settings.

Like sure, I haven't taken an extensive survey, and perhaps it's a very biased example as it's based on people that use Internet forums. But when the topic of RT comes up, far more often than not, the biggest pushback against its value that I see comes from console gamers. I expect more often than not, when given the choice, it's just not enabled due to the resolution/performance hit.

So in reality, I don't think the individual who thinks in this binary of "RT is worthless" vs. "RT means the best graphics" is really that common. I think most make their judgment on a title-by-title basis: for some, the more physically accurate lighting/shadows is worth the performance/resolution hit; for others, the scenes where these enhancements are actually noticeable are too rare to justify the drawbacks to the rest of the presentation, which is ever-present. The person who would disable RT on their 7900 is sure as hell not going to enable it on their console titles if given the choice, so it's not that 'they're getting worse graphics' - they're getting far better graphics and performance vs. the mode they would also use on the consoles.

Don't forget, too, that where the Ampere/Ada architecture really shines with RT is when the higher precision settings are used. When the lower settings that consoles employ are also used in RT titles, RDNA naturally doesn't suffer quite as catastrophically by comparison. Whether those lowered settings make RT pointless, though, will of course be a matter of debate - but if they do, then the "but consoles will use RT" argument makes even less sense.

On the other hand, if you turn RT on, then the AMD GPU is much slower than its competing Nvidia GPU. Both options are bad IMO.

And on the other-other-hand, for the equivalent price bracket, the Nvidia GPU could end up being much slower than the 7900 in rasterization, which is still important for a massive number of games, especially when you value a high frame rate experience.

But when RT comes into play - as it does in many big titles and will in many more moving forwards

I guess the contention here is always going to be what 'into play' means, as it's such a variable impact depending on the title and the particular precision level used. The main argument is that in the future, this impact will be far more significant, as much more of the rendering pipeline will be enhanced by, or fully based on, RT. OK then, but you can't expect people to fully embrace a hypothetical future - they're going to buy the hardware they feel can enhance the games they're playing now.

I mean the extra outlay you spend on a PC over a console is justified for many reasons, but one of those is certainly that all of your existing library is enhanced immediately rather than hopes and prayers for it to be fully taken advantage of in an indeterminate future.

potentially only competing with a 3080Ti which can be picked up for much less

Well here, the cheapest 3080ti is $1350 CAD, perhaps that's colouring my perception of the value the 7900 might offer too. Outside of the US unfortunately, these large price drops due to the crypto collapse are far more rare, with the exception of the truly boutique cards like the 3090ti. That's another factor - will they actually be able to sell these at MSRP? If so, the 7900XTX will be priced in 3090/3080ti territory here, which is certainly a different comparison vs the US when Newegg in your neck of the woods is selling 3080ti's for $750.

I do think the 7900 XT is overpriced though and an obvious upsell to the XTX; it should be $799. Well, hell, considering the mindshare Nvidia has, both should be lower to really threaten to make a serious dent. We'll see how Nvidia's prices end up in 2023 when they finally burn through Ampere stock and adjust accordingly; while it would likely still lose a fair bit in rasterization, a potential $999 4080 16GB makes the case for a 7900 XTX far weaker, even for people who don't value RT as highly as some here.
 
Again, did I miss something? How can y'all be doom and glooming about this part when we really haven't seen any kind of real performance evaluation of it?

This GPU should be able to clock much higher than it does; that's official info too. Ray tracing performance should have seen larger gains than just keeping pace with the non-ray-traced gains. Thus the disappointment and confusion.

It's been implied that there's a silicon bug holding back the GPU. That could mean the rest of RDNA3 turning out better, or even this GPU being replaced by a fixed one next year. Verification and other such steps are getting really fast thanks to updated tools.
 
I’m curious if AMD is basing their claim of this being the most advanced GPU solely on the basis of it being a chiplet design.
 
I’m curious if AMD is basing their claim of this being the most advanced gaming GPU solely on the basis of it being a chiplet design.
Question fixed, adding an important detail of their slogan.
To me it's:
<1000: gaming
>1000: content creation
Maybe that's partly how they mean it too.
At the moment the claim seems valid, although it's hard to swallow even the slightest disappointment at the price.
The problem is that the disappointment originates from comparisons with competing products at a much higher price.
The higher the prices, the larger the disappointment, no matter the value on the plus side.

Things would look so much better if they launched entry-level/midrange first.
I see they can't right now due to stock, but maybe next time...
 
Nanite is mostly pure compute, doesn't use GPU geometry h/w. They are still looking into using mesh shaders there AFAIR.

It uses mesh shaders/primitive shaders for bigger triangles. I think Nanite's software rasteriser only handles triangles covering less than 4 pixels.
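
Something like the following toy sketch captures the idea being described; it's not Epic's actual heuristic or code, just an illustration of picking a rasterisation path by projected triangle size, with the ~4-pixel cutoff taken from the post above.

```python
# Toy illustration only: tiny triangles go through a compute (software)
# rasteriser, larger ones fall back to the hardware path
# (mesh/primitive shaders). The threshold is an assumption from the post above.
PIXEL_THRESHOLD = 4.0  # approximate screen-coverage cutoff

def pick_raster_path(projected_area_px: float) -> str:
    """Choose a rasterisation path from a triangle's projected screen area."""
    if projected_area_px < PIXEL_THRESHOLD:
        return "software (compute) rasteriser"
    return "hardware path (mesh/primitive shaders)"

for area_px in (0.5, 2.0, 16.0):
    print(f"{area_px:>5.1f} px -> {pick_raster_path(area_px)}")
```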
 