AMD Execution Thread [2023]

A deeper analysis of N33, especially in comparison to N31.

At the top end of the clock speed range, big RDNA 3 often clocks the frontend more than 10% faster than the shader engine. Even at more moderate clock speeds, the frontend clocks 5-10% faster. In contrast, Navi 33 runs the frontend slower than the shader array. Perhaps the analysis AMD presented at the RDNA 3 launch presentation applies to large GPUs with gigantic shader arrays, but the situation is reversed with a GPU of the RX 7600's size.

Wasn't a lower shader clock a reason for power saving?
 
https://videocardz.com/newz/amd-con...ature-zen5-cpu-and-navi-3-5-gpu-architectures

[Image: AMD Ryzen 8000 / AM5 slide]
 
Wasn't a lower shader clock a reason for power saving?
Seems they deemed it not worth the implementation cost of decoupling the clocks? N33 is on a slower 6 nm node, after all.

RX 7600 doesn’t fully benefit from RDNA 3’s decoupled frontend and shader clocks. The two clock domains end up running at around the same frequency, possibly because the RX 7600’s smaller shader array can be well fed at lower shader clocks. In any case, the RX 7600 typically doesn’t reduce power by clocking down the shader array when the frontend’s the bottleneck.
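For intuition on the power question above: dynamic power scales roughly as C·V²·f, and since voltage tends to track frequency across the DVFS range, power ends up scaling roughly cubically with clock. A back-of-envelope sketch (the cubic model and the 90% figure are my own assumptions, not from the article):

```cpp
#include <cstdio>
#include <cmath>

// Rough dynamic-power model: P ~ C * V^2 * f. Assuming voltage scales
// about linearly with frequency in the DVFS range, P ~ f^3.
double relative_power(double clock_ratio) {
    return std::pow(clock_ratio, 3.0);
}

int main() {
    // If the frontend is the bottleneck and the shader array could drop
    // to 90% of the frontend clock, its dynamic power would fall to
    // roughly 0.9^3 = ~73% of the coupled-clock value.
    printf("shader array power at 90%% clock: %.0f%%\n",
           relative_power(0.9) * 100.0);
    return 0;
}
```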

They also appear to cut costs in other areas.

But the RX 7600 goes even further to cut costs. It loses big RDNA 3's bigger vector register file. It uses TSMC's 6 nm process, which won't provide the performance and power savings that the cutting-edge 5 nm process would.

In prior generations, AMD fabbed x60 and x600 series cards on the same cutting-edge node as their higher-end counterparts, and used the same architecture too. However, they clearly felt a sharp need to save costs this generation, compromising the RX 7600 a bit more than would be expected for a midrange GPU.
 
A bit reassuring, given that AMD originally only stated that AM5 would be supported 'through' 2025, which left a fair bit of ambiguity as to when support would actually end. Saying it will scale into 2026 suggests that Zen 6 is more likely on AM5, making buying into the platform now a bit more appealing.

Less exciting: it seems Zen 5 desktop APUs will not be built with RDNA 4.
 
They haven't so far. AMD's software strategy is so boneheaded it makes me mad.
I'd assume they started with the CPU side first, since that was the segment that led to them being profitable, and will hopefully then hire up on the graphics side.
 
I'd assume they started with the CPU side first, since that was the segment that led to them being profitable, and will hopefully then hire up on the graphics side.

You don't have the same SW needs on the CPU side. It's the investment mindset that is wrong at AMD.
 
I'd assume they started with the CPU side first, since that was the segment that led to them being profitable, and will hopefully then hire up on the graphics side.
Supporting OpenCL is what severely crippled the development of their compute stack. ROCm isn't even a decade old, while their top competitor's project is well over TWICE the age of AMD's current leading project. But make no mistake, they are indeed doubling down ever since their acquisition of Xilinx: Xilinx ceased all development on its OpenCL/SYCL compute stack and is redirecting resources into ROCm/HIP integration ...

There's real progress with AMD's compute stack, as seen with the Blender project, whether people like to admit it or not ...
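For anyone curious what the ROCm/HIP side actually looks like in practice, here's a minimal HIP vector add, more or less the stack's hello world (kernel name and sizes are my own; builds with hipcc):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Minimal HIP kernel: one thread per element.
__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc(&da, bytes);
    hipMalloc(&db, bytes);
    hipMalloc(&dc, bytes);
    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

    // CUDA-style triple-chevron launch works under hipcc.
    vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
    printf("c[0] = %.1f (expect 3.0)\n", hc[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```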
 
Seems ROCm still needs major fixes and reworks.
Sure, he uses a consumer-grade RDNA3 card, but yeah, that guy sums it up pretty well.

AMD/ATi has never had a software-first mindset. It is not a software company.

The horrible software culture gets mentioned several times:
* Being open source all the way on the outside, but relying on tons of crucial closed-source internals.
* Critical bugs not being addressed.
* Lacking communication.
* Half-assed public documentation efforts (the ISA, and that's it).
* The self-documenting commit messages...
 
Harder to sell cards at elevated pricing without COVID or crypto mining to prop up those prices. So, reduce consumer GPU shipments to keep margins up and shift silicon wafer starts to more profitable, in-demand products.

Regards,
SB
 
Some suggest the guy isn't worth his fame these days; he has apparently done some dirty tricks.
 
Some suggest the guy isn't worth his fame these days; he has apparently done some dirty tricks.
Hackers talking about hackers, nothing more.
 
Harder to sell cards at elevated pricing without COVID or crypto mining to prop up those prices. So, reduce consumer GPU shipments to keep margins up and shift silicon wafer starts to more profitable, in-demand products.

Regards,
SB

Margins only go so far and don't help with fixed costs, R&D, etc. If this pricing trend continues, revenue will become a problem.
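To put toy numbers on that point (all figures invented): the margin percentage can improve while the gross-profit dollars that actually cover fixed costs like R&D shrink.

```cpp
#include <cstdio>

int main() {
    // Invented figures: fewer units at a higher price can raise the
    // margin percentage yet still leave less gross profit to cover
    // fixed costs (R&D, drivers, etc.), which don't scale with volume.
    const double fixed = 300e6;

    // Scenario A: high volume, lower margin (33%).
    double gross_a = 5e6 * (300.0 - 200.0);  // $500M gross
    // Scenario B: low volume, higher margin (44%).
    double gross_b = 2e6 * (500.0 - 280.0);  // $440M gross

    printf("A: gross $%.0fM, after fixed costs $%.0fM\n",
           gross_a / 1e6, (gross_a - fixed) / 1e6);
    printf("B: gross $%.0fM, after fixed costs $%.0fM\n",
           gross_b / 1e6, (gross_b - fixed) / 1e6);
    return 0;
}
```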
 
Margins only go so far and don't help with fixed costs, R&D, etc. If this pricing trend continues, revenue will become a problem.

Revenue should go up as you're using the same wafers for more expensive, higher-margin parts. At least until demand for AI silicon (NV) or CPU silicon (AMD) dies off.

As it is, revenue at the current high pricing of video cards isn't sustainable at COVID-lockdown/crypto-mining levels (pricing + volume), so it's either greatly reduce prices or ship fewer cards.

Unlike previous generations, outside of the 4090, no GPU has really been sold out for weeks after launch; everything has been easily obtainable, reflecting demand that is significantly less than supply. Usually launch represents the highest demand:supply ratio, due to lower supply combined with high demand (anticipation).

Regards,
SB
 
I don't really agree with the logic here. Yes, using Zen 5c cores would 'reduce gaming performance' to some degree, but some percentage subtracted off full Zen 5 performance is still going to mean perfectly potent gaming performance (especially if Zen 5 is supposed to be a larger step up than Zen 4 was as a general architecture). If that's how AMD can justify putting in a proper 'discrete-level' integrated GPU, then I think that makes more sense.

I mean, AMD is arguably already fine with hobbling gaming performance to some degree in their desktop APUs via sizeable reductions in the L3.

Or put another way, I don't at all buy that they'll include a 24+ CU iGPU while also packing in full-sized cores when there's a better alternative that lets them keep the die size down while still performing well. We're still not talking about a high-end GPU here, so I imagine that in the vast majority of situations users will be GPU-limited, making any 10% or even 20% reduction in CPU performance pretty meaningless.
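To illustrate the GPU-limited point with a toy model (all numbers invented): per-frame time is roughly the slower of the CPU and GPU portions, so a CPU-side slowdown is invisible until the CPU becomes the bottleneck.

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: in a pipelined renderer, frame time is dominated by the
// slower of the CPU and GPU portions of the frame.
double fps(double cpu_ms, double gpu_ms) {
    return 1000.0 / std::max(cpu_ms, gpu_ms);
}

int main() {
    const double gpu_ms = 16.0;          // GPU-limited: ~62 FPS ceiling
    const double cpu_full = 8.0;         // hypothetical full Zen 5 cores
    const double cpu_dense = 8.0 * 1.2;  // hypothetical 20% slower 5c cores

    printf("full cores:  %.1f FPS\n", fps(cpu_full, gpu_ms));   // 62.5
    printf("dense cores: %.1f FPS\n", fps(cpu_dense, gpu_ms));  // 62.5
    // Identical FPS: at a 16 ms GPU frame, even a 20% CPU-side
    // slowdown doesn't show up until the CPU becomes the bottleneck.
    return 0;
}
```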
 
Unlike previous generations, outside of the 4090, no GPU has really been sold out for weeks after launch; everything has been easily obtainable, reflecting demand that is significantly less than supply. Usually launch represents the highest demand:supply ratio, due to lower supply combined with high demand (anticipation).
Being sold out is hardly an indication of anything besides momentary demand being higher than supply. Most GPUs haven't been "sold out" after their launch if we're looking at historical data.
 