AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Status
Not open for further replies.
https://www.anandtech.com/show/15104/amd-adds-radeon-rx-5300m-to-mobile-gpu-lineup

Ryan made a nice writeup. It seems the Pro SKUs differ a little bit, as per usual: the 5300M Pro features 22 CUs instead of 20, but with the memory bus cut down from 128-bit to 96-bit and a maximum of 3 GB of VRAM. For what purpose, though? What's the benefit?

I've been looking forward to this for a long time. Might be time to upgrade my MacBook at long last. My little trooper from 2011 needs retirement. I had been looking casually at the Vega 20 specced MacBook Pro but balked at the price for that option. The new 5300M PRO of the "standard" specification seems to be stronger still for a much more agreeable sum.
 
The 5300M Pro features 22 CUs instead of 20, but with the memory bus cut down from 128-bit to 96-bit and a maximum of 3 GB of VRAM.

Radeon Pro 5300M has 20 CUs + a 128-bit bus, while Radeon RX 5300M has 22 CUs + a 96-bit bus. And Radeon Pro 5500M has 24 CUs at up to 1300 MHz, while Radeon RX 5500M has 22 CUs at up to 1645 MHz. Pro SKUs are power/performance optimized, non-Pro SKUs are price/performance optimized. The different configurations allow AMD to sell every manufactured Navi 14 GPU (there are models with disabled CUs, with disabled ROPs, and with a disabled memory channel, so many different defects can be "cured"). Apple seems to get exclusively fully functional GPUs with 24 CUs (like they got Polaris 11 with all 16 CUs while the PC segment got only 14).
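As a rough sanity check on what those bus widths mean in practice, here's a minimal sketch of the peak-bandwidth arithmetic. The 14 Gbps GDDR6 data rate is an assumption (a typical speed for Navi 14 boards), not something stated above:

```python
# Peak GDDR6 bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte.
# The 14 Gbps rate is an assumption typical of Navi 14 boards, not a confirmed spec.
def gddr6_bandwidth_gb_s(bus_width_bits, data_rate_gbps=14):
    """Peak memory bandwidth in GB/s for a given bus width."""
    return bus_width_bits * data_rate_gbps / 8

print(gddr6_bandwidth_gb_s(128))  # 128-bit bus -> 224.0 GB/s (Pro 5300M)
print(gddr6_bandwidth_gb_s(96))   #  96-bit bus -> 168.0 GB/s (RX 5300M)
```

So the RX part's extra two CUs come paired with a 25% cut in peak bandwidth, which fits the price-optimized binning described above.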
 
The 5300M Pro features 22 CUs instead of 20, but with the memory bus cut down from 128-bit to 96-bit and a maximum of 3 GB of VRAM. For what purpose, though? What's the benefit?
Save money on a lower-end SKU by using one fewer 8 Gbit GDDR6 chip.
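The chip-count arithmetic follows from each GDDR6 package having a 32-bit interface; a quick sketch (8 Gbit per chip is from the post above, the 32-bit-per-package figure is standard GDDR6):

```python
# Each GDDR6 package exposes a 32-bit interface; here each chip is 8 Gbit = 1 GB.
def gddr6_config(bus_width_bits, chip_gbit=8):
    """Return (chip count, total VRAM in GB) for a given bus width."""
    chips = bus_width_bits // 32       # one package per 32 bits of bus width
    vram_gb = chips * chip_gbit // 8   # 8 Gbit per chip = 1 GB
    return chips, vram_gb

print(gddr6_config(128))  # (4, 4): four chips, 4 GB
print(gddr6_config(96))   # (3, 3): three chips, 3 GB -- one fewer chip
```

That's where the 3 GB cap comes from: a 96-bit bus simply has nowhere to attach a fourth package.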
 
Huh, the ability to cut down the memory bus is interesting; how do they do that exactly? The functional blocks were represented as 64-bit buses apiece, but it always looked like disabling one would cut bandwidth to an entire block of WGPs.

Obviously it's possible with current RDNA some way or another. Could mean we'll see further binned RDNA cards in the future: an RX 5600 with a 192-bit bus/6 GB of RAM for $200 or something. Or maybe they'll wait for RDNA 2 and the 6XXX cards; 12 GB of RAM would probably be easily within min spec for next-gen games (maybe?)
 
Huh, the ability to cut down the memory bus is interesting; how do they do that exactly? The functional blocks were represented as 64-bit buses apiece, but it always looked like disabling one would cut bandwidth to an entire block of WGPs.

Obviously it's possible with current RDNA some way or another. Could mean we'll see further binned RDNA cards in the future: an RX 5600 with a 192-bit bus/6 GB of RAM for $200 or something. Or maybe they'll wait for RDNA 2 and the 6XXX cards; 12 GB of RAM would probably be easily within min spec for next-gen games (maybe?)
Even though the simplified block diagrams talk about "64-bit memory controllers", they're actually 16-bit if I'm not mistaken. As seen in this multilevel cache hierarchy slide, there are 16 memory controllers, which translates to 16-bit controllers, since 16 × 16 = 256-bit.
[attached slide: RDNA multilevel cache hierarchy]

edit:
Isn't this dictated by GDDR6 anyway? Each GDDR6 chip has two independent 16-bit channels
(late edit: fixed stupid mistake, slide of course, not die :D )
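The arithmetic in the two posts above lines up; here's a quick sketch tying the 16 controllers on the slide to the two-channels-per-chip structure of GDDR6 (controller count from the slide, channel width from the GDDR6 spec):

```python
# Navi 10's 256-bit bus decomposed into 16-bit channels.
CHANNEL_BITS = 16              # each GDDR6 chip exposes two independent 16-bit channels
controllers = 16               # per the cache-hierarchy slide

total_bus = controllers * CHANNEL_BITS   # 16 x 16 = 256-bit aggregate bus
chips = total_bus // 32                  # 8 GDDR6 packages, 2 channels each

print(total_bus, chips)  # 256 8
```

If controllers really map one-to-one onto 16-bit channels, the finest theoretical granularity for a cut-down bus would be 16 bits, not 64.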
 
Isn't this dictated by GDDR6 anyway? Each GDDR6 chip has two independent 16-bit channels
Can they decouple the 2×16-bit channels in a single GDDR6 module, though? I'd assume they split the 32-bit channel for electrical reasons.

I'm just wondering if they could achieve a bandwidth granularity below 32-bit.
 
Can they decouple the 2×16-bit channels in a single GDDR6 module, though? I'd assume they split the 32-bit channel for electrical reasons.

I'm just wondering if they could achieve a bandwidth granularity below 32-bit.
As far as I know (and a quick Google run confirms), they are truly completely independent channels with their own command/address and data buses, as in "2 memory chips in 1".
 
If so, I would assume it's on the RDNA 2 architecture. I hope so, anyway. It just doesn't seem financially sound to release a high-end, non-RT part if A) it's immediately hampered by lack of RT against the current competition, B) it faces imminent internal competition from the release of RDNA 2 with RT, and C) a new nVidia arch with even stronger RT chops than Turing is possibly coming soon-ish.
 
Or they just have a really big Navi. ¯\_(ツ)_/¯

If so, I would assume it's on the RDNA 2 architecture. I hope so, anyway. It just doesn't seem financially sound to release a high-end, non-RT part if A) it's immediately hampered by lack of RT against the current competition, B) it faces imminent internal competition from the release of RDNA 2 with RT, and C) a new nVidia arch with even stronger RT chops than Turing is possibly coming soon-ish.

Supposedly it's a 16 GB version of the Radeon Pro W5700, which was just announced as an 8 GB variant, but of course it could be "big Navi" too.
https://www.amd.com/en/press-releas...7nm-professional-pc-workstation-graphics-card
 
Apple have lost their collective marbles and gone off the deep end.
Those prices are absolutely absurd.
Why? A 32 GB HBM2 professional card isn't exactly cheap from anyone, and it includes the Apple Tax. You're not exactly buying a gaming GPU with it.
 