AMD RX 7900XTX and RX 7900XT Reviews

AMD tried to sell you a six-core CPU for $299...
At least it's faster than the 5600X for slightly less money (looking at launch MSRPs and inflation). And Intel's 13th-gen stuff is a nice improvement in perf/$. The 4080 is clearly worse than the 3080 in this respect: it is more expensive than it is faster.
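
To put numbers on the 4080 point, a quick sketch; the MSRPs are launch prices, but the relative performance figure is just an illustrative placeholder, not a benchmark result:

```python
# Perf-per-dollar sketch. MSRPs are launch list prices; the relative
# performance number is an illustrative guess, not a measurement.
def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    return relative_perf / price_usd

rtx_3080 = perf_per_dollar(1.00, 699)   # baseline
rtx_4080 = perf_per_dollar(1.25, 1199)  # ~1.25x perf at ~1.72x the price

print(f"3080: {rtx_3080:.5f} perf/$")
print(f"4080: {rtx_4080:.5f} perf/$")   # worse value despite being faster
```

With those placeholder numbers the price goes up ~72% for ~25% more performance, which is exactly "more expensive than it is faster".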
 
This makes me wonder why CPU prices are not skyrocketing. Possible reasons are: Intel has no advanced manufacturing :mrgreen: and AMD is using the premium process sparingly with chiplets. Also I guess CPU performance is not really increasing at a comparable rate.

If chiplets are the answer, too bad they fucked it up with N31. Whether it was a bug or just lackluster design, N31 is offering a similar value proposition to the abysmal 4080. This won't be earning AMD any marketshare.

Also can anyone source Jen-Hsun on that conference call talking about how they were trying to get GPU prices back to pandemic/mining levels? I know he said it on an investor call but I can't find it.

One big thing to come out of the Ryzen 7x00 release is the increase in platform cost. Many complained about how high the new chips were priced, along with the increased prices of motherboards and DDR5. I think CPU price increases are more incremental than on the graphics side. I also imagine that many users buy new video cards more often than CPUs, and that video cards deliver a more visible improvement in performance.
 
I am curious if the increased platform cost is temporary. It always works this way with RAM. Hopefully the motherboards come down.

But there are other factors. Even today most games are limited by 1 or 2 threads, so a $330 13600K will pretty much get you maximum gaming performance. GPUs just don't work this way.
 

AMD tries to use cutting-edge nodes for their CPUs (Client and Data Center) whenever possible, as those are the major profit drivers for the company: they use advanced nodes and carry the highest margins, with Data Center slightly ahead of Client. Most of AMD's advanced-node wafer starts go to CPUs because that's the most profitable, highest-margin segment for them, despite the relatively low prices of their Client CPUs.

The higher cost of their Ryzen CPUs is a combination of increased manufacturing cost and greater market success versus Intel, which allows them to price their CPUs higher in order to increase margins. I.e., the increase in cost for Ryzen CPUs is only partially due to increased fab costs.

AMD splits Gaming (client GPU and semi-custom, e.g. consoles) between cutting-edge and older nodes. This has the lowest margins of all their segments. One major factor affecting AMD's margins is that in most market segments where they compete with NV cards on price, they include significantly more memory, and memory currently costs far more than the GPU itself; board costs outside of memory and GPU are significantly lower. The high cost of memory is the largest reason margins for AMD client GPUs are so anemic compared to NV's.

Regards,
SB
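
To put SB's margin point in rough numbers, a back-of-envelope sketch; every dollar figure below is invented, since actual BOM costs aren't public:

```python
# Toy card-margin model. All dollar figures are invented purely to
# show how a large memory BOM compresses gross margin at a set price.
def gross_margin(price, gpu_die, memory, board_other):
    bom = gpu_die + memory + board_other
    return (price - bom) / price

# Hypothetical card that competes on price but ships with more memory:
more_mem = gross_margin(price=999, gpu_die=200, memory=240, board_other=120)
# Hypothetical competitor at the same price with less memory on board:
less_mem = gross_margin(price=999, gpu_die=280, memory=130, board_other=120)

print(f"more-memory card: {more_mem:.0%} gross margin")  # ~44%
print(f"less-memory card: {less_mem:.0%} gross margin")  # ~47%
```

The absolute numbers mean nothing, but the shape matches the argument: at a matched price, the card carrying more (and currently expensive) memory gives up margin.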
 
I think AMD has slashed about $50 off the CPUs, but the motherboards are still a bit high. For AMD in particular, when Ryzen first launched the chips were noticeably cheaper than Intel's, the boards were also noticeably cheaper, and the RAM was equal. Now the chips are priced on par with or above Intel's, the motherboards are priced similarly, and unlike Intel you have to run DDR5 with the new chips. So it was a shock for many who may have entered Ryzen on a cheaper board, since so many generations of Ryzen could live on the original chipsets.

Also, I think more and more games need more threads today. With the consoles having what really amounts to a Ryzen 3700, I think it's going to become more of an issue for games targeting the current generation. You might even want a 12-core in the next few years to make up for the increased overhead of Windows versus the console OSes.
 
That memory-cost point is interesting; I had no idea about the cost of GDDR. What about GDDR6 vs GDDR6X? Would 16GB of GDDR6 be comparable in price to 10GB of GDDR6X? 24GB of GDDR6 vs 16GB of GDDR6X? Also, I'm noticing GDDR6 is hardly any slower than GDDR6X, which is confusing to me.
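
For what it's worth, the comparison is easy to sketch once you pick $/GB figures; the values below are pure guesses, since GDDR contract pricing isn't public:

```python
# Memory-cost comparison at different capacities. The $/GB values are
# guesses; real GDDR6 / GDDR6X contract pricing is not public.
PRICE_PER_GB = {"GDDR6": 8.0, "GDDR6X": 12.0}  # hypothetical $/GB

def memory_cost(kind: str, gigabytes: int) -> float:
    return PRICE_PER_GB[kind] * gigabytes

for kind, gb in [("GDDR6", 16), ("GDDR6X", 10), ("GDDR6", 24), ("GDDR6X", 16)]:
    print(f"{gb:>2} GB {kind:<6}: ${memory_cost(kind, gb):>5.0f}")
```

With those invented prices, 16GB of GDDR6 ($128) lands near 10GB of GDDR6X ($120), and 24GB of GDDR6 matches 16GB of GDDR6X ($192 each), but the real answer depends entirely on contract pricing.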
 
On why CPU prices aren't skyrocketing: CPU dies are *tiny* compared to GPUs, so the marginal increase in BOM cost hurts a lot less than it does for GPUs.
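
The standard dies-per-wafer approximation shows why; the wafer price below is a placeholder:

```python
import math

# Classic dies-per-wafer approximation (ignores yield and scribe
# lines): pi*r^2/A - pi*d/sqrt(2*A) for wafer diameter d, die area A.
def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

WAFER_COST = 17_000  # hypothetical $ per leading-edge wafer

for name, area in [("~70 mm^2 CPU chiplet", 70), ("~300 mm^2 GPU die", 300)]:
    n = dies_per_wafer(area)
    print(f"{name}: {n} dies/wafer, ~${WAFER_COST / n:.0f} each")
```

At that (made-up) wafer price, a small CPU chiplet comes out around $18 of silicon while a big GPU die is closer to $86, so the same percentage jump in wafer cost adds far more absolute dollars to the GPU, before yield differences widen the gap.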
 
This really seems to have upset Hardware Unboxed. For a while now they seemed to be saying RT is not important, FSR is as good as DLSS, etc., and as of this review they have gone nuclear, using the very things they previously deemed unimportant to sink the boot in. In a reply to that tweet he said Nvidia didn't really lie because of frame generation, and when asked about AIB cards he brushed it off with "I've got other things to work on before I bother with them." They were one of the places I expected to be a bit more upbeat about the 7900 XTX.
You clearly don’t watch HUB as that is not their messaging.
 
Isn't the vast majority of the BOM for a CPU the die? I don't see why it matters that they are smaller if most of the cost is in the silicon; we're talking about the % increase in cost over the previous generation.

Well, I guess with chiplets it matters, especially if the CCDs are smaller than the IODs.
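
Extending the die-cost arithmetic to the chiplet split: only the small CCD pays the leading-edge wafer price, while the larger IOD sits on a cheaper node. Wafer prices below are again invented:

```python
import math

# Chiplet vs monolithic silicon cost. Wafer prices are invented; the
# point is only that the larger IOD avoids the expensive node.
N5_WAFER, N6_WAFER = 17_000, 10_000  # hypothetical $/wafer

def die_cost(area_mm2, wafer_cost, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    dies = (math.pi * r * r / area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * area_mm2))
    return wafer_cost / dies

chiplet = die_cost(70, N5_WAFER) + die_cost(120, N6_WAFER)  # CCD + IOD
monolithic = die_cost(190, N5_WAFER)  # same total area, all on N5

print(f"chiplet (N5 CCD + N6 IOD): ~${chiplet:.0f}")    # ~$37
print(f"monolithic on N5:          ~${monolithic:.0f}") # ~$53
```

So even before yield, putting the bigger die on the older node blunts a leading-edge price increase, which fits the argument that chiplets let AMD use the premium process sparingly.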
 
The problem only exists on the AMD-designed boards so far. Those are the black-and-red cards that are sold by AMD as well as some of its partners. What is happening is there’s a huge delta between the temperature of the main compute die and an adjacent hotspot. The delta is so large that it’s beyond the spec designed by AMD. This is causing GPUs to throttle thermally as the hotspot reaches over 100C. For example, on custom cards from Asus, XFX, and Sapphire the delta between temps isn’t bigger than 20C. However, on some of the AMD-designed boards, it’s as high as 53C. That means the Graphics Compute Die (GCD) is 56C, and the hotspot is 109C.
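
In code form the check being described is trivial; the temperatures are the figures quoted above, treating a >100C hotspot as the throttle trigger per that description:

```python
# Edge-vs-hotspot delta using the figures quoted above.
gcd_temp_c = 56            # Graphics Compute Die temperature
hotspot_c = 109            # adjacent hotspot on the worst reference cards
custom_delta_c = 20        # Asus/XFX/Sapphire stay within roughly this

delta = hotspot_c - gcd_temp_c
print(f"reference delta: {delta}C vs ~{custom_delta_c}C on custom cards")
print(f"thermal throttling: {hotspot_c > 100}")  # hotspot past ~100C
```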
 
Nothing. The inability to hit clocks is claimed to be due to a bug, but that remains to be confirmed if/when they do an A1 revision for a 7950 refresh.
If a retape is needed, it would be a new base metal revision and would be a "Bx", most likely a "B0".
If that indeed happens, it could be seen as early as the end of Q2, since they likely would have known and started the process in late Sept/Oct.

I'm still expecting Navi 32 either late Q1 or early Q2.
 
If a retape is needed, yes, but if a metal spin is enough it would be A1 (which apparently still takes 3-6 months).
 
Also, when it can't dual-issue from the same wavefront, why can't they make it fill the other ALU from another wavefront that is ready to go?

Seems like a no-brainer at a high level, so there must be some difficult technical hurdle to overcome to make it work.
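
As a toy model of the suggestion (made-up names and structure, not RDNA 3's actual scheduler): when the leading wavefront has no co-issuable pair, borrow the second slot from another ready wavefront.

```python
# Toy dual-issue picker illustrating the suggestion above. This is a
# made-up model, not a description of RDNA 3's real hardware.
from dataclasses import dataclass, field

@dataclass
class Wavefront:
    name: str
    ready_ops: list = field(default_factory=list)  # hazard-free ops

def issue_cycle(waves):
    """Issue up to two ops: prefer a pair from one wavefront, else
    fill the second ALU from a different ready wavefront."""
    primary = next((w for w in waves if w.ready_ops), None)
    if primary is None:
        return []
    issued = [(primary.name, primary.ready_ops.pop(0))]
    if primary.ready_ops:                  # same-wavefront co-issue
        issued.append((primary.name, primary.ready_ops.pop(0)))
    else:                                  # borrow from another wavefront
        other = next((w for w in waves if w.ready_ops), None)
        if other is not None:
            issued.append((other.name, other.ready_ops.pop(0)))
    return issued

waves = [Wavefront("w0", ["v_fma"]), Wavefront("w1", ["v_add", "v_mul"])]
print(issue_cycle(waves))  # [('w0', 'v_fma'), ('w1', 'v_add')]
```

One plausible hurdle: the second slot presumably shares operand routing and register file ports sized around a single wavefront, so sourcing operands from a different wavefront's registers in the same cycle may not be free.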
 