> The difference is, Navi was developed from compute-heavy GCN while NVIDIA has been focusing comparatively more on the graphics side since Kepler or so.

Turing went more compute (and graphics?), Navi the other way around.
> Polaris 30 is ~7 TFlops with 5.7 bln transistors and 225 W TBP. That's ~70% of Navi 10's TFlops in ~60% of the transistor budget.

Polaris lacks double-rate fp16 (packed math), which could affect some of the benchmark results seen.
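Quick sanity check on those ratios (spec figures are approximate, from public spec sheets, and I'm taking the RX 5700 XT as the Navi 10 reference, which the post doesn't state):

    # Rough perf-per-transistor check of the numbers above.
    # Figures are approximate; Navi 10 assumed to be the RX 5700 XT.
    polaris30 = {"tflops": 7.1, "transistors_bln": 5.7}
    navi10 = {"tflops": 9.75, "transistors_bln": 10.3}

    flops_ratio = polaris30["tflops"] / navi10["tflops"]                   # ~0.73
    xtor_ratio = polaris30["transistors_bln"] / navi10["transistors_bln"]  # ~0.55
    print(f"{flops_ratio:.0%} of the TFlops in {xtor_ratio:.0%} of the transistors")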
I never used Polaris, but I did use two or three older GCN generations. There was no big difference: performance always scaled with CU count and clock as expected.
> Well, there are probably some other reasons too, like GCN using narrower SIMDs compared to RDNA and spending relatively less of its die budget on graphical features. But the main reason is likely simple - Vega 20 is faster than Navi 10 in compute right now... which would mean GCN is only 'good for compute' because larger GCN chips are what's available right now.

That's pretty much what I would think too.
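To make the "narrower SIMDs" point concrete, here's a toy issue-latency model (my own illustration, not from the post): GCN feeds a 64-lane wave through a 16-lane SIMD over four cycles, while RDNA runs a 32-lane wave on a 32-lane SIMD in a single cycle.

    # Toy model of wave issue on GCN vs RDNA (illustrative only).
    def cycles_per_wave(wave_size: int, simd_width: int) -> int:
        # Lanes are fed through the SIMD in wave_size / simd_width passes.
        return wave_size // simd_width

    print("GCN:  wave64 on SIMD16 ->", cycles_per_wave(64, 16), "cycles")  # 4
    print("RDNA: wave32 on SIMD32 ->", cycles_per_wave(32, 32), "cycle")   # 1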
> In the professional visualization segment, no card is a match for an RTX Quadro; there are dozens of applications that are RTX-accelerated right now. A Quadro RTX is the standard to get now.

Tough market with a $799 price. The price differential between the Quadro RTX 4000 and the Radeon Pro W5700 is $100 on average, and you can find products with a $60 differential.
So here is Navi 12. Thus the high-end SKUs are left to Navi 2x - hopefully featuring RDNA2.
5600XT spotted:
https://videocardz.net/amd-radeon-rx-5600-xt/
GIGABYTE Radeon RX 5600 XT 6GB GAMING OC
GV-R56XTGAMING OC-6GD
GIGABYTE Radeon RX 5600 XT 6GB OC
GV-R56XTOC-6GD
192-bit bus and 6GB VRAM
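As a sanity check, the 192-bit/6GB pairing falls straight out of standard GDDR6 packaging; the 14 Gbps pin speed below is my assumption, not part of the leak:

    # 192-bit bus populated with common 8 Gb (1 GB), 32-bit GDDR6 packages.
    bus_width = 192          # bits
    device_bus = 32          # bits per GDDR6 package
    device_gb = 1            # GB per 8 Gb package
    speed_gbps = 14          # Gbps per pin - assumed, not confirmed by the leak

    devices = bus_width // device_bus        # 6 packages
    vram = devices * device_gb               # 6 GB
    bandwidth = bus_width * speed_gbps / 8   # 336.0 GB/s
    print(devices, "packages ->", vram, "GB VRAM,", bandwidth, "GB/s")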
> They're actually 16-bit memory controllers.

A 192-bit bus means 6x 32-bit GDDR6 memory controllers. It's highly likely there is a 30 CU part in the 5600 series, but if the 5600 series is a cut-down Navi 10, e.g. Navi 10 LE, it could go as high as 36 CU in steps of two, though that's unlikely. From a business perspective I doubt 36 CU, due to cannibalization of the RX 5700 at 1080p and 1440p.
Navi 10 runs 2.5 WGP per 32-bit memory controller (20 WGP over eight controllers); Navi 14 has 3 WGP per controller (12 over four). If this is Navi 12, a ratio in between - say ~2.7 WGP per controller across six controllers - would give 16 WGP, i.e. 32 CU. But an extra chip in between 158 mm^2 and 251 mm^2? I doubt that makes much sense strategically, given yields, wafer costs and the number of wafers AMD can get at TSMC right now.
I'm strongly leaning towards this being a cut-down part...
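Working those ratios through (known configs: Navi 10 = 20 WGP on eight 32-bit controllers, Navi 14 = 12 on four), a 192-bit part bracketed by the two known chips spans exactly the 30-36 CU range being speculated about; a rough sketch:

    # WGP per 32-bit GDDR6 controller on known Navi parts, plus a 192-bit guess.
    navi10 = {"wgp": 20, "mc32": 8}  # 40 CU, 256-bit
    navi14 = {"wgp": 12, "mc32": 4}  # 24 CU, 128-bit

    for name, chip in (("Navi 10", navi10), ("Navi 14", navi14)):
        print(name, chip["wgp"] / chip["mc32"], "WGP per 32-bit MC")  # 2.5, 3.0

    # A 192-bit part (6 controllers) at ratios bracketing the known chips:
    for ratio in (2.5, 2.67, 3.0):
        wgp = round(ratio * 6)
        print(f"{ratio} WGP/MC -> {wgp} WGP -> {2 * wgp} CU")  # 30, 32, 36 CU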
> If anything it would probably be 4x 16-bit grouped as 64-bit, since AMD uses 64-bit at the highest level of the block diagram. But regardless of that, they list 16 MCs (x16 = 256-bit) in the RDNA cache hierarchy slide.

Can they be decoupled into two (16+16-bit) channels though, from any practical standpoint? As far as I've read on GDDR6 they can't; they can be joined in pseudo-channel mode with a penalty. But I am not an engineer by trade. So even if one channel is 16-bit, the bus width of the controller to the memory module is 2x 16-bit, aka 32-bit.
https://www.jedec.org/standards-documents/docs/jesd250b
The quote is from here, page 22: https://gpuopen.com/wp-content/uploads/2019/08/RDNA_Architecture_public.pdf
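To make the channel bookkeeping concrete (GDDR6 per JESD250 exposes two independent 16-bit channels per device; the bus widths are the ones discussed above):

    # GDDR6: each device exposes two 16-bit channels (32 bits per package).
    CH_BITS = 16      # bits per GDDR6 channel
    CH_PER_DEV = 2    # channels per GDDR6 device (per JESD250)

    def layout(bus_bits: int) -> tuple[int, int]:
        devices = bus_bits // (CH_BITS * CH_PER_DEV)
        return devices, devices * CH_PER_DEV

    print("256-bit:", layout(256))  # (8, 16) -> the "16 MCs x16" on AMD's slide
    print("192-bit:", layout(192))  # (6, 12) -> the rumoured 5600-series bus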