AMD Navi Product Reviews and Previews: (5500, 5600 XT, 5700, 5700 XT)

And "RTX is the devil", and RT offers no benefits in current games that showcase it, just because the feature is missing in Navi?
Edit: Let's get back on topic and leave personal opinion and likes/dislikes for another thread.

I'm not seeing much of "RTX is the devil" in this thread. I am seeing a bunch of people who aren't interested in the trade-offs that currently exist (at the current level of RT in games) if you want to enable RT. And it isn't like we're alone; otherwise Turing would be selling like crazy.

That's a whole lot different from thinking that RT offers no benefits or that RTX is the devil.

There are people who think RT as offered in RTX is worth it, and there are people who don't. Most of us don't try to convince the people who like the current implementations of RT in games that they are wrong. So why are the RTX people trying to convince those who have experienced RT on RTX cards that they are wrong not to want to invest in the current implementation?

Respect for viewpoints goes both ways.

Regards,
SB
 
Are we talking about me? I don't even own a Turing card. I played Metro Exodus on my 1080 Ti and thought the lighting was great. I got into the game after NV released the driver to run RTX on Pascal. It ran acceptably (for me) at 1080p.

I think other than that I've only briefly tried Quake 2 RTX. So I don't know about other RTX game examples.
 
Are we talking about me? I don't even own a Turing card. I played Metro Exodus on my 1080 Ti and thought the lighting was great. I got into the game after NV released the driver to run RTX on Pascal. It ran acceptably (for me) at 1080p.

I think other than that I've only briefly tried Quake 2 RTX. So I don't know about other RTX game examples.

Well, at least I'm not talking about you. :) I just see someone who likes RT in any form.

Regards,
SB
 
When Nvidia releases their 7nm cards, AMD will already have their big Navi out, probably for a while. They will reduce prices on the 5700 to compete when they have to.

AMD is behind even while using the 7nm process; when both sides are on 7nm we will see how things go.

Nvidia is not making anything out of the ordinary with their cards. They are just battling a company that hasn't put a tenth of the money into its GPU division, compared to Nvidia, over the last 5 years. It's no surprise; before 2014 both were nip and tuck for more than a decade, but obviously when your company is hanging by a thread for much of a decade, it's hard to battle your competitors on two fronts.

I really hope next year's "big Navi" will offer more competition, as we very much need it. AMD's CPUs do very well at least; they don't beat Intel yet, but they offer a very good price/performance ratio.

Respect for viewpoints goes both ways.

I see Turing's RTX and AMD's RT as the same thing really, as they probably will be about the same thing (Turing and RDNA2). Both GPU companies and even the consoles are going to have this "limited RT tech". Nvidia was first out with it last year, and here's hoping Turing's successor improves in the RT department.
 
AMD is behind even while using the 7nm process; when both sides are on 7nm we will see how things go.

Yes, these mid-range cards are behind in absolute performance vs cards costing hundreds of dollars more, but in performance per dollar they are very competitive. What do you want us to do, wait till next fall when Nvidia releases their 7nm chips just to compare them?

Like I said, by that point they will have big Navi out, and if they rebrand the 5700s they will reduce prices to remain competitive with Nvidia, as they have always done.

I really hope next year's "big Navi" will offer more competition, as we very much need it. AMD's CPUs do very well at least; they don't beat Intel yet, but they offer a very good price/performance ratio.


AMD is beating Intel in every metric now, single-threaded and multi-threaded; Intel only wins in a handful of applications.

I see Turing's RTX and AMD's RT as the same thing really, as they probably will be about the same thing (Turing and RDNA2). Both GPU companies and even the consoles are going to have this "limited RT tech". Nvidia was first out with it last year, and here's hoping Turing's successor improves in the RT department.


Yes, there we agree. I'm sure in the future it will have mainstream adoption and much better performance, but until then it's not a must-have feature. With AMD hardware being in the consoles, it stands to reason that RT in console ports will run just fine on AMD hardware. Maybe not on the 5700s, but next gen for sure.

I'm sure Nvidia will improve on it, but it isn't the point of this thread.
 
I think it's even more interesting that AMD is able to fit 2560 stream processors in 10.3 billion transistors, while Nvidia needs 10.6 billion transistors for 2304 stream processors. If we subtract the transistors needed for the RT cores (~3%), the resulting counts (~10.3 billion for both GPUs) are even closer and make such a comparison even more valid.
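To make that concrete, here's a minimal back-of-envelope sketch; the ~3% RT-core share is the estimate from this post, not a confirmed figure:

```python
# Back-of-envelope on the transistor counts quoted above. The ~3% RT-core
# share is the post's own estimate, not a confirmed figure.
navi10 = 10.3e9   # Navi 10 (RX 5700 XT): 2560 stream processors
tu106  = 10.6e9   # TU106 (RTX 2070): 2304 CUDA cores

tu106_no_rt = tu106 * (1 - 0.03)                            # strip the assumed RT-core share
print(f"TU106 minus RT cores: {tu106_no_rt / 1e9:.2f}B")    # ~10.28B
print(f"Per ALU, Navi 10: {navi10 / 2560 / 1e6:.2f}M")      # ~4.02M transistors
print(f"Per ALU, TU106  : {tu106_no_rt / 2304 / 1e6:.2f}M") # ~4.46M transistors
```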

Anyway, I believe that AMD could have been a bit bolder and used the same configuration as Hawaii: 2816 stream processors (+64 ROPs), which would be a bit more balanced.

Turing has 2x the instruction schedulers / dispatchers compared to Navi. For every 64 stream processors Turing issues 4 instructions per clock to Navi’s 2. Not to mention the separate INT pipeline. That wasn’t free and could explain some of the overhead. But still very, very close. Makes for some really interesting comparisons.

The 5700 XT and 2070 Super are almost perfectly matched spec-wise.
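To show just how closely they line up, a quick sketch using the public reference specs (boost clocks are vendor figures, so treat the TFLOPS numbers as nominal):

```python
# Side-by-side headline specs (public reference figures; boost clocks are
# vendor numbers and real boards vary).
specs = {
    "RX 5700 XT":     dict(alus=2560, rops=64, bus_bits=256, mem_gbps=14, boost_mhz=1905),
    "RTX 2070 Super": dict(alus=2560, rops=64, bus_bits=256, mem_gbps=14, boost_mhz=1770),
}
for name, s in specs.items():
    bandwidth = s["bus_bits"] // 8 * s["mem_gbps"]   # GB/s of GDDR6 bandwidth
    tflops = 2 * s["alus"] * s["boost_mhz"] / 1e6    # FP32, FMA counts as 2 ops/clk
    print(f"{name}: {s['alus']} ALUs, {s['rops']} ROPs, {bandwidth} GB/s, {tflops:.2f} TFLOPS")
# -> both 448 GB/s; ~9.75 vs ~9.06 TFLOPS at reference boost
```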
 
Going through the numbers...

Congrats to AMD, you advanced all the way to re-architecting the efficiency of Polaris, almost. Going by the numbers, using transistor count as a rough estimate (trying to compare silicon node advances is a fucking nightmare), the RX 590 is about half the transistor count of an RX 5700 XT (screw you, AMD GPU marketing!). If you go by performance numbers in GTA V, Total War: Three Kingdoms, etc., the 5700 is about twice as fast as a Polaris RX 590. And when it comes to power consumption, well, that's similar to Polaris for double the performance! Except the 7nm node pretty much accounts for all of that advancement, as it nominally takes half the power. To sum up: if you doubled a Polaris 10 and taped it out on 7nm, the performance overall, per mm², and per watt would look a lot like Navi, or whatever the 5700 is.
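Spelling those ratios out (transistor counts are the public die figures; the ~2x performance and the equal board power are this post's own estimates):

```python
# The ratios behind the argument above. Transistor counts are the public die
# figures; the ~2x performance multiplier is the post's own benchmark estimate.
rx590    = dict(xtors=5.7e9,  perf=1.0, watts=225)   # Polaris 30, 12nm GF
rx5700xt = dict(xtors=10.3e9, perf=2.0, watts=225)   # Navi 10, 7nm TSMC

xtor_gain = (rx5700xt["perf"] / rx5700xt["xtors"]) / (rx590["perf"] / rx590["xtors"])
watt_gain = (rx5700xt["perf"] / rx5700xt["watts"]) / (rx590["perf"] / rx590["watts"])
print(f"perf per transistor: {xtor_gain:.2f}x")  # ~1.11x -- barely moved
print(f"perf per watt      : {watt_gain:.2f}x")  # ~2.00x -- roughly the nominal 7nm power halving
```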

Polaris was released three years ago. So, sure, AMD rolled back the de-advancement (game-wise) of Vega. And the larger caches and a few extra features will help RDNA performance-wise going forward. But close to no efficiency gains in terms of performance per watt or performance per transistor after three years, and no advanced hardware features the competition already has, feels apathetic at best. The only way I can call RDNA a success is if it's a stepping stone towards chiplets. Going by the diagrams released, that very well could be the intention. Just like with Zen 2, there are self-contained blocks that have their own cache and don't need access to each other. In fact, each block has its own memory controller, an advancement over Zen 2 that would be needed for GPU chiplets.

But are chiplets actually going to come? Is Infinity Fabric going to get such a massive bandwidth boost within a year or two that AMD can produce a chiplet-based GPU? I don't know. It would be worth it, no doubt of that. Design costs would plummet, and scaling would only be bound by bandwidth and power supply constraints (600 watt GPU, anyone?). Most importantly, scaling to bigger GPUs would become much closer to linear cost-wise, allowing AMD to drastically undercut Nvidia and Intel on price. Yet getting enough bandwidth for a GPU chiplet is a huge, huge, huge issue. Getting to CPU chiplets was already hard; AMD is the first to do it. But GPUs are yet another, even larger mountain. And unless AMD plans on scaling it soon, or RDNA has some easy issues holding back performance, it just doesn't feel like much of an advancement.
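To put a rough number on why bandwidth is the mountain here, a sketch using commonly cited figures (the per-link Infinity Fabric width is the widely reported Zen 2 number; treat all of this as approximate):

```python
# Rough sense of the bandwidth gap for GPU chiplets. The 32 B/clk read width
# per IFOP link is the commonly cited Zen 2 figure -- treat it as approximate.
fclk_ghz  = 1.467             # typical Zen 2 fabric clock (DDR4-2933 / 2)
if_read   = 32 * fclk_ghz     # one IF link: ~47 GB/s read
navi10_bw = 448               # RX 5700 XT GDDR6 bandwidth, GB/s
print(f"one IF link      : ~{if_read:.0f} GB/s")
print(f"Navi 10 DRAM     : {navi10_bw} GB/s")
print(f"links to match it: ~{navi10_bw / if_read:.0f}")  # ~10, before any cache/coherency traffic
```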
 
Turing has 2x the instruction schedulers / dispatchers compared to Navi. For every 64 stream processors Turing issues 4 instructions per clock to Navi’s 2. Not to mention the separate INT pipeline. That wasn’t free and could explain some of the overhead. But still very, very close. Makes for some really interesting comparisons.

The 5700 XT and 2070 Super are almost perfectly matched spec-wise.

There are also the Tensor cores taking up additional room and transistors. We have no idea what percentage of the transistors they or the RT cores actually use (where does 3% even come from? If it's based on area, it's not valid).
 
The 5700 XT and 2070 Super are almost perfectly matched spec-wise.
Yep, in fact, they are much closer spec-wise than the RTX 2070 vs the 5700, because the RTX 2070 has 3 GPCs and can rasterize only 48 pixels per clock; thus the 64 ROPs' throughput in the 2070 is limited by the rasterizers.
The major difference between the 2070 Super and the 5700 XT is again the number of GPCs (though there might be downsides with load-balancing and work distribution for partially empty GPCs): the 2070 Super has a higher triangle setup rate and can rasterize more triangles that are smaller than 16 pixels on screen. Other than that, ROP-throughput-wise both cards are equal, so it's still a much closer comparison than RTX 2070 vs 5700.
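The arithmetic behind that rasterizer bottleneck, using the figures from this post (3 GPCs at 16 pixels per clock each):

```python
# Why the vanilla RTX 2070's 64 ROPs can't all be fed, per the figures above:
# 3 GPCs, each rasterizer emitting 16 pixels per clock.
gpcs, px_per_gpc, rops = 3, 16, 64
raster_px = gpcs * px_per_gpc   # 48 px/clk out of the rasterizers
print(f"rasterizer output: {raster_px} px/clk")
print(f"ROP capacity     : {rops} px/clk")
print(f"effective fill   : {min(raster_px, rops)} px/clk (rasterizer-bound)")
```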
 
And when it comes to power consumption, well, that's similar to Polaris for double the performance! Except the 7nm node pretty much accounts for all of that advancement, as it nominally takes half the power. To sum up: if you doubled a Polaris 10 and taped it out on 7nm, the performance overall, per mm², and per watt would look a lot like Navi, or whatever the 5700 is.
Your entire argument hinges on the assumption that shifting from 14nm GF to 7nm TSMC "should" provide a factor of two increase in performance/W.

Not to mention the usual caveats regarding performance/W as a metric.
 
Going through the numbers...

Congrats to AMD, you advanced all the way to re-architecting the efficiency of Polaris, almost. Going by the numbers, using transistor count as a rough estimate (trying to compare silicon node advances is a fucking nightmare), the RX 590 is about half the transistor count of an RX 5700 XT (screw you, AMD GPU marketing!). If you go by performance numbers in GTA V, Total War: Three Kingdoms, etc., the 5700 is about twice as fast as a Polaris RX 590. And when it comes to power consumption, well, that's similar to Polaris for double the performance! Except the 7nm node pretty much accounts for all of that advancement, as it nominally takes half the power. To sum up: if you doubled a Polaris 10 and taped it out on 7nm, the performance overall, per mm², and per watt would look a lot like Navi, or whatever the 5700 is.

Polaris was released three years ago. So, sure, AMD rolled back the de-advancement (game-wise) of Vega. And the larger caches and a few extra features will help RDNA performance-wise going forward. But close to no efficiency gains in terms of performance per watt or performance per transistor after three years, and no advanced hardware features the competition already has, feels apathetic at best. The only way I can call RDNA a success is if it's a stepping stone towards chiplets. Going by the diagrams released, that very well could be the intention. Just like with Zen 2, there are self-contained blocks that have their own cache and don't need access to each other. In fact, each block has its own memory controller, an advancement over Zen 2 that would be needed for GPU chiplets.

But are chiplets actually going to come? Is Infinity Fabric going to get such a massive bandwidth boost within a year or two that AMD can produce a chiplet-based GPU? I don't know. It would be worth it, no doubt of that. Design costs would plummet, and scaling would only be bound by bandwidth and power supply constraints (600 watt GPU, anyone?). Most importantly, scaling to bigger GPUs would become much closer to linear cost-wise, allowing AMD to drastically undercut Nvidia and Intel on price. Yet getting enough bandwidth for a GPU chiplet is a huge, huge, huge issue. Getting to CPU chiplets was already hard; AMD is the first to do it. But GPUs are yet another, even larger mountain. And unless AMD plans on scaling it soon, or RDNA has some easy issues holding back performance, it just doesn't feel like much of an advancement.
They still have to improve their power consumption (but we'll know more next year when both will be on the same node), but perf/TFLOP is exactly at Turing's level now. I think this is impressive, and that's only the first gen of RDNA.

[chart: performance per TFLOP, Navi 10 vs Turing cards]


https://www.computerbase.de/2019-07/radeon-rx-5700-xt-test/#update1
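For anyone wanting to reproduce that kind of chart, the method is just a benchmark index divided by nominal FP32 throughput. A minimal sketch with hypothetical index values (the real numbers are in the linked ComputerBase data):

```python
# How a perf-per-TFLOP comparison works: divide a benchmark index by the
# card's nominal FP32 throughput. The index values below are hypothetical
# placeholders, not ComputerBase's actual numbers.
cards = {
    "RX 5700 XT":     dict(tflops=2 * 2560 * 1905e6 / 1e12, index=100),  # ~9.75 TFLOPS
    "RTX 2070 Super": dict(tflops=2 * 2560 * 1770e6 / 1e12, index=100),  # ~9.06 TFLOPS
}
for name, c in cards.items():
    print(f"{name}: {c['index'] / c['tflops']:.2f} index points per TFLOP")
```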
 
They are impressed by AMD's new software features: the driver-level image sharpening apparently works pretty well, and the anti-lag is also decent.

So apparently there are clock limits on Navi cards, at 1850 and 2150 MHz. So the 2.1 GHz that the German Tom's got wasn't the ceiling, but the boost mechanism keeping it there.
 