AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Liquid metal is also super nice for CPU temps, and CPU hotspots are a big thing this gen (since, well, it's the first gen in a while with real CPUs).
First gen *ever* for PS actually!
Second for Xbox (the OG had a P3).

(Assuming "real" == "out-of-order superscalar" which is a perfectly fine definition in my book. Also yes I know Jaguars are OOO-superscalar but I'll find some other excuse to disqualify them.)
 
Assuming "real" == "out-of-order superscalar" which is a perfectly fine definition in my book
More of "used the same cores as the respective vendor's higher end client and server parts" which it does.
Also yes I know Jaguars are OOO-superscalar but I'll find some other excuse to disqualify them
Those were genuine netbook/tablet cores.
Not quite the gear one might find good for gaming purposes.
 
Rumors seem to point to a 2.6 GHz clock on some 6700 XT cards, which is interesting in that not all 6800 XT/6900 XT cards can reach that clock. So either AMD are juicing them with more than the 1.2 V used on the 6900 XT, or they have improved their chip design for even higher clocks.
 
It says on the spec page for the 6700 XT that it boosts up to 2581 MHz. And we all know how much even the 6900 XT already exceeds its "max" boost. So why is 2.6 GHz, or even a little more, surprising?


edit: This is not to say that AMD's engineers did not do a great job on a design that clocks this high without letting power go through the roof, mind you.
 
It says on the spec page for the 6700 XT that it boosts up to 2581 MHz. And we all know how much even the 6900 XT already exceeds its "max" boost. So why is 2.6 GHz, or even a little more, surprising?

The RX 6800 claims a game clock of 1815 MHz and a boost up to 2105 MHz, but in reality it averages 2200 MHz or more.
The declared game and boost clocks on RDNA2 cards have so far been very sandbagged.

If we assume a similar proportion of real-life clock boosts, then the RX 6700 XT might actually be closer to 2700 MHz than to 2600 MHz.
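As a rough sketch, assuming the same "sandbagging" ratio the RX 6800 shows (the input figures are the ones quoted above; the resulting 6700 XT number is only an illustrative estimate, not a measurement):

```python
# Rough sketch: scale the 6700 XT's spec boost by the same ratio by which
# the RX 6800 exceeds its own spec boost. Illustrative only.
rx6800_spec_boost   = 2105  # MHz, advertised boost
rx6800_observed     = 2200  # MHz, typical observed average
rx6700xt_spec_boost = 2581  # MHz, advertised boost

sandbag_ratio     = rx6800_observed / rx6800_spec_boost      # ~1.045
rx6700xt_estimate = rx6700xt_spec_boost * sandbag_ratio      # ~2697 MHz
print(f"Estimated real-world 6700 XT average: {rx6700xt_estimate:.0f} MHz")
```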


Fast forward from people thinking 2.23 GHz on the PS5 would be impossible to maintain to now, when we have a GPU clocking a lot higher that is similar in everything but Infinity Cache.
Typical power consumption might not even be that much higher (150 W PS5 iGPU vs ~200 W Navi 22?). I'd guess the mobile variants of Navi 22 will clock similarly to the PS5's.
 
Fast forward from people thinking 2.23 GHz on the PS5 would be impossible to maintain
I feel like this is a mixture, or perhaps we diverged from the original argument here; honestly, there was a lot of talk about things before the price was released. So I'm not going to say you wouldn't find posts that I wrote that were dead wrong.

However.

The issue wasn't whether 2.23 GHz could be maintained, though it became that; originally it was whether it could be maintained at the $399 price point.
When the specs were released, it was suggested by many here that Sony would respond by placing themselves in between the Series S and X, under the thinking that the prices would be $299, $399, and $499 respectively.

When you look back at the arguments, a lot of them talked about binning, yield, temperatures, and the fixed power draw. Many here believed that only a slightly more expensive cooler would be sufficient to cool the PS5 and that it would not add to the price point. That claim was not 100% correct, but not 100% wrong either. The PS5 is $499, the same price as the XSX, with less of everything: fewer memory chips and storage chips, a smaller die. Yet with a larger heatsink, a more expensive TIM, and a more expensive process, it still ends up at the same price. So it's not that it was impossible for Sony to hit those numbers; it just wasn't likely for a mainstream consumer device. They did manage to get to $399 though, which I didn't expect them to do, and we know removing a disc drive doesn't account for $100 of that, but reports still have the device losing money at $499.

Comparing the characteristics of a high-performance GPU to those of a mainstream console is not quite apples to apples either. GPUs are thermally and power regulated: as long as there is power to draw and the thermals keep the chip cool, the boost will stay up. That's very divergent from the PS5. If power draw goes up, there is a hard limit it shares with the CPU. The PS5 is not thermally regulated for its clock speed; with the binning on the PS5 being wide, they should be regulating its clock with respect to the lowest silicon they are willing to accept within that bin. So the idea that it can 'maintain' average boost characteristics in the same vein as a 6800 is unlikely. Given the 6700 XT's price point of $479, I don't think the PS5 will have clocking characteristics like it.
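To make the contrast concrete, here's a toy sketch of the two regulation schemes being described (all numbers and function names are made up for illustration; this is not how AMD's boost firmware or Sony's power-sharing scheme actually works internally):

```python
# Toy model only: illustrative numbers, not real firmware behaviour.

def dgpu_clock(temp_c, board_power_w, temp_limit_c=90, power_limit_w=230,
               base_mhz=2400, boost_mhz=2581):
    """dGPU-style regulation: hold boost as long as thermals and board
    power allow, otherwise fall back toward the base clock."""
    if temp_c < temp_limit_c and board_power_w < power_limit_w:
        return boost_mhz
    return base_mhz

def console_gpu_clock(cpu_draw_w, shared_budget_w=200,
                      gpu_peak_draw_w=150, gpu_max_mhz=2230):
    """Console-style regulation: the GPU shares one fixed power budget
    with the CPU and gives up clock when the CPU needs more of it."""
    gpu_budget_w = shared_budget_w - cpu_draw_w
    scale = min(1.0, max(0.0, gpu_budget_w / gpu_peak_draw_w))
    return int(gpu_max_mhz * scale)

print(dgpu_clock(temp_c=70, board_power_w=210))  # 2581: boost held
print(console_gpu_clock(cpu_draw_w=40))          # 2230: full GPU clock
print(console_gpu_clock(cpu_draw_w=70))          # lower: GPU yields to the CPU
```

The point is just that the dGPU's clock is bounded by headroom, while the console's is bounded by a fixed budget shared with the CPU.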
 

Well, the PS5 doesn't maintain it all of the time either; it actually downclocks the GPU or trades with the CPU. So no, it can't maintain 2.23 GHz all of the time. How often it can or can't maintain those speeds we don't really know (yet). It probably depends on game load for both the CPU and GPU. I can imagine the GPU downclock happening before the CPU needs to.

Comparing the characteristics of a high-performance GPU to those of a mainstream console is not quite apples to apples either. GPUs are thermally and power regulated: as long as there is power to draw and the thermals keep the chip cool, the boost will stay up. That's very divergent from the PS5. If power draw goes up, there is a hard limit it shares with the CPU. The PS5 is not thermally regulated for its clock speed; with the binning on the PS5 being wide, they should be regulating its clock with respect to the lowest silicon they are willing to accept within that bin. So the idea that it can 'maintain' average boost characteristics in the same vein as a 6800 is unlikely. Given the 6700 XT's price point of $479, I don't think the PS5 will have clocking characteristics like it.

And yet, I see some doing that here on the forums. For one, high-performance dGPUs do not share the same variable clock boosting that the PS5 uses (for god's sake). It's kind of different: dGPUs usually have advertised minimum-performance base clocks, with the ability to clock higher, not lower. With, say, an RTX 2080 Ti you get 13.x TFs worth of Turing as advertised by NV, but the actual clocks go way up from there if the PSU/temps allow for it, which gaming PCs easily will, considering higher-end parts aren't paired with under-powered PSUs or poor airflow.
Still, we see the occasional 'PC GPUs have been doing this for years' kind of comment, which just isn't really true. Also, dGPUs usually don't have to trade with the CPU when things get tight.

The 6700 XT is a much more capable GPU than the PS5's (if the tech specs are correct). It's a 40 CU, 2.4 GHz base clock part that boosts all the way to 2.6 GHz, with 13.21 TFs of raw compute at its disposal (a whole TF more than the XSX). It also doesn't have to contend for its bandwidth (384 GB/s) with the CPU or the rest of the system, isn't constrained by a small PSU, and packs 12 GB of GDDR6 all to itself. The 6700 XT is a 3060 Ti competitor in normal rendering. Not to forget it's a full RDNA2 product.
The PS5 is going to land much closer to a 6700 non-XT.
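For reference, the raw-compute numbers in that comparison fall straight out of CU count times clock; a quick sketch using the publicly quoted specs:

```python
# FP32 throughput for RDNA-style GPUs:
#   CUs * 64 shaders/CU * 2 FLOPs per shader per clock * clock (MHz)
def rdna_tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz / 1e6

print(f"6700 XT: {rdna_tflops(40, 2581):.2f} TF")  # ~13.21 TF at max boost
print(f"XSX:     {rdna_tflops(52, 1825):.2f} TF")  # ~12.15 TF, locked clock
print(f"PS5:     {rdna_tflops(36, 2230):.2f} TF")  # ~10.28 TF at peak clock
```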

Where the PS5 is up to par is the NVMe department, where it may even have an advantage due to DirectStorage not being ready yet. Regarding raw speeds though, there are already 7 GB/s solutions before RTX IO/DirectStorage come into play. If NV wasn't lying, we can expect 14 GB/s and much higher on current RTX Ampere hardware.

The 2013 consoles at least sported higher-tier mid-range GPUs, with much more RAM to boot than PC GPUs at the time, in a time when ray-tracing-like next-gen features and DLSS didn't even exist.
Their CPUs were shitty, but at least they sported 8 cores, as opposed to most PCs sporting four.
 
I feel like this is a mixture, or perhaps we diverged from the original argument here; honestly, there was a lot of talk about things before the price was released. So I'm not going to say you wouldn't find posts that I wrote that were dead wrong.

However.

I don’t think people need to justify being wrong about the PS5 specs.
If we had been given a glimpse of RDNA2 dGPU clocks by March 2020, then no one would have batted an eye at the claimed 2.23 GHz on the PS5.

Instead, at the time all we had were RDNA1 clocks and the Series X (which arguably uses RDNA1 clocks).
There's no shame in being wrong or suspicious of the PS5's final GPU clocks.



The PS5 is $499, the same price as the XSX, with less of everything: fewer memory chips and storage chips, a smaller die. Yet with a larger heatsink, a more expensive TIM, and a more expensive process, it still ends up at the same price.
The moment MSRP != BOM+Assembly, I think this is an exercise in futility.
Both are probably subsidizing the hardware (maybe not the Series S), but one may be subsidizing more than the other and AFAIK there's no official report on this (other than Sony admitting to it in fiscal reports).


Given the 6700 XT's price point of $479, I don't think the PS5 will have clocking characteristics like it.
The 6700 XT's price point comes from a period of dGPU drought caused by a pandemic and crypto craze, where its direct competitors, the 3060 Ti and 3070, have much higher street prices, and it's substantially faster than the 3060 that is selling for the same $479.

Its clocks have little to do with the price, and mostly with the chip's power/clock curves.
 
Its clocks have little to do with the price, and mostly with the chip's power/clock curves.
Yeah, I think if the 6700 XT had been announced at 2580 MHz before the PS5 was announced at 2230 MHz, that would have settled the argument earlier, or there may not have been much discussion about it at all. It would fall in line with the separation between the top bin (XT) and something I would expect from a console.

I think clocks still have something to do with price; that's just a matter of parametric yield. I don't know what the price differential between the top and bottom bins would be, but typically, as I understand it, consoles employ redundancy and reduced/fixed clock speeds to bring usable yield up and ultimately bring console prices down.
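As a rough illustration of the redundancy half of that (a toy binomial defect model with a made-up per-CU defect rate, not actual AMD yield data):

```python
import math

def usable_die_fraction(total_cus, enabled_cus, p_cu_defect):
    """Fraction of dice usable when up to (total - enabled) defective CUs
    can be fused off. Toy binomial model, purely illustrative."""
    spares = total_cus - enabled_cus
    return sum(
        math.comb(total_cus, k) * p_cu_defect**k * (1 - p_cu_defect)**(total_cus - k)
        for k in range(spares + 1)
    )

p = 0.02  # assumed chance any given CU is defective (made up)
print(f"Full 40-CU part (6700 XT-like): {usable_die_fraction(40, 40, p):.1%}")  # ~45%
print(f"36 of 40 CUs (PS5-like):        {usable_die_fraction(40, 36, p):.1%}")  # ~99.9%
```

Clock binning works the same way in spirit: accepting a lower fixed clock widens the range of silicon that qualifies for the console bin.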
 
2589 MHz avg. at stock
2809 MHz avg. with OC

It's just mental gymnastics for me how they got such a high clock speed there OC'd, with total system consumption only being 15 W or so more, looking at it.

It really shows how conservative MS went with its choice of locked clock. They should have dared to dream a little bigger here.
 
It's just mental gymnastics for me how they got such a high clock speed there OC'd, with total system consumption only being 15 W or so more, looking at it.

It really shows how conservative MS went with its choice of locked clock. They should have dared to dream a little bigger here.

The thing is, we have no idea how stable their OC is across all games on the platform (PC), and more importantly, how stable that OC would be across all retail cards featuring that chip. There are likely retail cards that can't hit that OC. And we don't know what yield cut-off MS was targeting. Without the possibility of salvage chips, they would have to go more conservative with clocks.

Regards,
SB
 
Remember, they've got to leave some room for mid-gen updates...

How useful are those going to be for these SKUs though? Next year RDNA3 hits, and these are out this year. The bigger die obviously has a lot to work with if they can swap the memory controller out and release them towards the end of this year. I can see 2.6 GHz/20 Gbps for the "6950 XT" or whatever, and 2.4 GHz/18 Gbps or so for the non-XT. But are they going to launch a mid-tier RDNA2 refresh maybe six months before RDNA3 comes out?

Well, maybe if RDNA3 hits only the high end for a while.
 
How useful are those going to be for these SKUs though?

I should have made it clear I was talking about a Microsoft Xbox mid-gen upgrade, since it was mentioned because of the seemingly conservative clocks.

On the PC side I expect whatever RDNA 2.5 / 3 turns out to be to drive AMD's next broader product offerings.
 