AMD Navi Product Reviews and Previews: (5500, 5600 XT, 5700, 5700 XT)

The upscaling method Navi uses is great. It's extremely hard to pinpoint the difference between native and upscaled.

Fantastic performance for encoding with Navi + Zen 2, and better streaming than the 2080 Ti.
Thanks. Currently the streaming results with NVIDIA cards might be compromised with Zen 2, though, since for whatever reason NVIDIA cards are performing worse on Zen 2 than they should (i.e. in games GeForce + Zen 2 is slower than GeForce + Intel, while Radeon + Zen 2 is faster than Radeon + Intel; this doesn't necessarily apply universally to all games, but there are some strange quirks that still need investigating).
 
Thanks. Currently the streaming results with NVIDIA cards might be compromised with Zen 2, though, since for whatever reason NVIDIA cards are performing worse on Zen 2 than they should (i.e. in games GeForce + Zen 2 is slower than GeForce + Intel, while Radeon + Zen 2 is faster than Radeon + Intel; this doesn't necessarily apply universally to all games, but there are some strange quirks that still need investigating).
Related to this discussion.

https://forum.beyond3d.com/posts/2076790/
 
Yeah, that one has been bugging me. Unfortunately I don't have a good explanation right now. I consistently get those results. It may be another driver oddity.
FWIW, I'm getting 99.5 GPix/s on the XT and 88.9 on the vanilla 5700, which seems in line with the RX Vega 64 at 96.9 and the Radeon VII at 110.1.
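For context, figures like these can be sanity-checked against theoretical pixel fillrate (ROPs × boost clock). The ROP counts and boost clocks below are public specs; the measured values are the ones quoted above:

```python
# Rough sanity check: theoretical pixel fillrate = ROPs * boost clock (GHz).
# ROP counts and boost clocks are public specs; measured GPix/s from the post.
cards = {
    # name: (ROPs, boost clock in GHz, measured GPix/s)
    "RX 5700 XT": (64, 1.905, 99.5),
    "RX 5700":    (64, 1.725, 88.9),
}
for name, (rops, clk, measured) in cards.items():
    theoretical = rops * clk
    efficiency = measured / theoretical
    print(f"{name}: theoretical {theoretical:.1f} GPix/s, "
          f"measured {measured} ({efficiency:.0%} of peak)")
```

Both cards land at roughly 80% of their theoretical peak, so the measured gap between the XT and the vanilla 5700 tracks the clock difference.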

Thank you realy much for this Information CarstenS. will we see this value in a C't article? I realy miss your deep dive articles
Thanks - it's not going to be in the first article in the mag, but I'm planning to sneak it in soon. :)
 
Yeah, that one has been bugging me. Unfortunately I don't have a good explanation right now. I consistently get those results. It may be another driver oddity.

The part that I find very, very strange is Navi's poor performance in colour compression. It's supposed to have better DCC, applied almost everywhere, but the compression ratio is barely above 1 in your tests. Any idea what's going on?

@Rys, based on what you know about Navi and the B3D suite, any clue?
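For readers unfamiliar with how such a ratio is usually derived: synthetic tests typically compare throughput on highly compressible data against incompressible data. This is a generic sketch of that idea, not the B3D suite's actual methodology, and the numbers are hypothetical:

```python
# Sketch of how a fillrate test can infer a DCC compression ratio (generic
# idea, NOT the B3D suite's actual methodology): compare throughput when
# writing compressible data (e.g. a solid colour) vs. incompressible
# random data. A ratio well above 1 means DCC is reducing memory traffic.
def compression_ratio(gpix_compressible, gpix_random):
    """Throughput gain on compressible targets relative to random data."""
    return gpix_compressible / gpix_random

# Hypothetical illustration of the complaint above: a ratio barely above 1.
print(compression_ratio(101.0, 99.5))  # ~1.015 -> compression barely helping
```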
 
Perf/TFLOPS is exactly at Turing level now. I think this is impressive, and that's only the first generation of RDNA.
Like @Bondrewd said, we need to see 64+ CU scaling to see the whole picture.

Anyway, here is my lame attempt to plot AMD's perf/TFLOPS development (based on computerbase.de reviews):
[Attached chart: gcn_scalinglikvs.png, AMD perf/TFLOPS development across generations]
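For reference, a perf/TFLOPS chart like this is built by dividing a relative gaming performance index by each card's peak FP32 throughput. The TFLOPS figures below are public specs, but the performance indices are placeholders, not the actual computerbase.de data:

```python
# How a perf/TFLOPS comparison is computed. TFLOPS values are public peak
# FP32 specs; the relative performance indices are PLACEHOLDERS for
# illustration, NOT the actual computerbase.de review data.
gpus = {
    # name: (relative gaming perf index [hypothetical], peak FP32 TFLOPS)
    "RX Vega 64": (100.0, 12.7),
    "Radeon VII": (125.0, 13.4),
    "RX 5700 XT": (120.0, 9.75),
}
baseline = gpus["RX Vega 64"][0] / gpus["RX Vega 64"][1]
for name, (perf, tflops) in gpus.items():
    ratio = (perf / tflops) / baseline
    print(f"{name}: perf/TFLOPS = {ratio:.2f}x relative to Vega 64")
```

The point of normalizing to a baseline card is that absolute perf/TFLOPS numbers are unit-dependent; the ratio between generations is what shows architectural efficiency gains.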
 
Does anyone have details on how the AMD Anti-Lag feature works? What's the difference between this feature and the Nvidia Control Panel setting "Max pre-rendered frames = 1"?
 
Does anyone have details on how the AMD Anti-Lag feature works? What's the difference between this feature and the Nvidia Control Panel setting "Max pre-rendered frames = 1"?
The info I have gathered, IIRC: "the CPU tells the GPU which frames need to be rendered in Anti-Lag mode", so you technically lose a few percent of FPS but gain the CPU synchronizing inputs and frames with pre-rendered frames = 1 (this I am unsure of) at the same time. So it's a bit like pre-rendered frames = 1, but with extra sauce. The info is scant.
 
The info I have gathered, IIRC: "the CPU tells the GPU which frames need to be rendered in Anti-Lag mode", so you technically lose a few percent of FPS but gain the CPU synchronizing inputs and frames with pre-rendered frames = 1 (this I am unsure of) at the same time. So it's a bit like pre-rendered frames = 1, but with extra sauce. The info is scant.

So this guy tested a GTX ?070 against an RX 5700 and found that the AMD card had lower input lag with the feature enabled in CS:GO, but I don't know if Nvidia Control Panel sets max pre-rendered frames to 1 for CS:GO by default. From what I've read, Anti-Lag and max pre-rendered frames sound like pretty much the same thing.

 
So this guy tested a GTX ?070 against an RX 5700 and found that the AMD card had lower input lag with the feature enabled in CS:GO, but I don't know if Nvidia Control Panel sets max pre-rendered frames to 1 for CS:GO by default. From what I've read, Anti-Lag and max pre-rendered frames sound like pretty much the same thing.

If you look at the video above from Level1Techs at 12:35, you'll get some info.

Also look at the video below at 22:55.
 
So this guy tested a GTX ?070 against an RX 5700 and found that the AMD card had lower input lag with the feature enabled in CS:GO, but I don't know if Nvidia Control Panel sets max pre-rendered frames to 1 for CS:GO by default. From what I've read, Anti-Lag and max pre-rendered frames sound like pretty much the same thing.
It was suspected at first, but it's not the same thing according to AMD. (5:45 mark)
 
Yet getting enough bandwidth for a GPU chiplet is a huge, huge, huge issue.
Not necessarily, if chiplets run independent tasks? Rasterization, lighting, physics, audio, AI: that's a lot of work that could run with little need for communication.
Although this would lead to underutilization of various fixed-function hardware, I'd know how to get rid of that problem ;)
 
It was suspected at first, but it's not the same thing according to AMD. (5:45 mark)

I'm still not sure I understand the difference. It must be subtle; maybe it's the synchronization part. All it sounds like they're doing is limiting the number of frames that can be queued, but that's been available in Nvidia and AMD drivers for a while.
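To make the queue-limiting part of the discussion concrete, here is a toy latency model (my own illustration, not AMD's or NVIDIA's actual mechanism): with a deeper pre-rendered frame queue, the input sampled for a frame is older by the time that frame reaches the display, so added input lag grows with queue depth.

```python
# Toy model of pre-rendered frame queues (an illustration, NOT the actual
# driver mechanism): each frame sitting in the queue delays the display of
# the frame that sampled your input by roughly one frame time.
def input_lag_ms(queue_depth, frame_time_ms):
    """Approximate input lag added by the render queue alone."""
    return queue_depth * frame_time_ms

frame_time = 1000 / 60  # 60 fps -> ~16.7 ms per frame
for depth in (3, 1):    # a typical default queue vs. "pre-rendered frames = 1"
    print(f"queue depth {depth}: ~{input_lag_ms(depth, frame_time):.1f} ms added lag")
```

Under this model, capping the queue at 1 cuts queue-induced lag by two frame times at 60 fps; whatever extra Anti-Lag does with input synchronization would be on top of that.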
 

Thank you for sharing, but when linking to a source of information, especially a video, please include a short summary of its contents, even just a single sentence. Otherwise, it's difficult to ascertain whether it's worth the time to watch it.
 
Thank you for sharing, but when linking to a source of information, especially a video, please include a short summary of its contents, even just a single sentence. Otherwise, it's difficult to ascertain whether it's worth the time to watch it.
Will do!

Temperature, OC and performance of the 5700 XT with an Arctic Accelero Extreme IV:
Temp under gaming load with the cooler swapped: 65 °C, hotspot at 75 °C, stable boost 1800+ MHz.
OC temp (extra fan at the back): 60 °C, hotspot at 90 °C, boost to ca. 2040 MHz.

Default power draw with the new cooler: 180 W
OC power draw: 240-271 W; too short to gauge an average in the video.

 
I didn't see the "s" on your first statements.

Anyway, I find it high for a "small" 7nm chip.

I wonder if it's a situation like Vega, where they overvolt the GPUs, sometimes by a lot, by default.

This indeed seems to be the case with Navi. In my samples of the Radeon RX 5700 and XT I managed to lower the voltage by a whole 0.1 V. As a result, power consumption dropped by 13 W and 7 W respectively. So much for matching the voltage to the individual GPU to avoid unnecessary overhead, which AMD was talking about back when Polaris launched.
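A back-of-envelope check of why a 0.1 V drop saves that much: dynamic power scales roughly with f·V². This sketch considers dynamic power only (leakage and fixed board power ignored), and the 1.2 V baseline is an assumption for illustration:

```python
# Rough dynamic-power model, P ~ f * V^2, at constant clock. Leakage and
# fixed board power are ignored, and the 1.2 V baseline is an ASSUMPTION
# for illustration, not a measured value from the post.
def dynamic_power_scale(v_new, v_old):
    """Fraction of original dynamic power after a voltage change."""
    return (v_new / v_old) ** 2

scale = dynamic_power_scale(1.1, 1.2)  # dropping 0.1 V from an assumed 1.2 V
print(f"expected dynamic power: {scale:.0%} of original")  # ~84%
```

A ~16% reduction in the dynamic portion of a 150-180 W card is in the same ballpark as the 13 W drop reported above, which is why undervolting headroom translates into real power savings.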
 