AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Status
Not open for further replies.
IIRC, Fury X was a bit above the ideal frequency/voltage curve for Fiji XT, and the partner cards for Fiji XL had overblown power limits. The Nano was the best product there, with its major negative point being too loud under full load.
 
that's why they call it RDNA
They already have RDNA2 on the roadmaps, so it's going through the same cycle as GCN.

"Radeon DNA" is a great name, and it should designate a pool of technologies that can be used in specific APUs/GPUs rather than a specific iteration of the GPU architecture.

I.e. 'RDNA compute' with a choice of GCN-based instruction set, a combination of full/simplified compute units, special AI/tensor/whatever units, double precision for professional cards, etc.; 'RDNA bandwidth compression' includes color compression, hierarchical z-buffer compression, primitive culling, geometry binning, etc.; 'RDNA image enhancement' includes temporal antialiasing, image sharpening, WCG/10-bit and 4K upscaling, etc.; and so on.
 
Just a note: scaling of RDNA gaming performance past 40 CUs to 64 or even 80 CUs remains to be seen.

That is a bit obvious. The point we are making is that Dr Su mentioned it is coming: the Navi chip she held up, she said, was just the start. We also know that big Navi is coming, so speculating on how big, or how many CUs per mm², is a worthy discussion. And, I think, a discussion some people are looking to avoid.

Navi with added SPU and 64 CUs... how big of a chip would that be? (~365 mm²?)
 
Navi with added SPU and 64 CUs... how big of a chip would that be? (~365 mm²?)
Depends if they scale (384 bit) or change the memory interface (HBM).
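For what it's worth, the die-size question above can be napkin-mathed. This is only a sketch under loud assumptions: Navi 10's 251 mm² figure is public, but the split between the CU array and everything else (front end, memory PHYs, display/media, I/O) is my guess, and a wider GDDR6 bus or HBM PHYs would add area on top.

```python
# Very rough die-size extrapolation for a hypothetical 64-CU Navi.
# Assumptions (mine, not measured): roughly half of Navi 10's 251 mm^2
# is the 40-CU shader array; the remaining "uncore" area barely scales
# with CU count.  Memory-interface growth is ignored here.
navi10_mm2  = 251.0
cu_fraction = 0.5                              # assumed CU-array share of die
cu_area     = navi10_mm2 * cu_fraction / 40    # mm^2 per CU (assumed)
uncore      = navi10_mm2 * (1 - cu_fraction)   # non-CU area (assumed fixed)

big_navi_64 = uncore + 64 * cu_area
print(f"~{big_navi_64:.0f} mm^2 for a 64-CU part under these assumptions")
```

That lands near 326 mm² before accounting for a wider memory interface, so the ~365 mm² guess in the thread is at least in the right ballpark if the bus grows too.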
 
That max power consumption is an unrealistic scenario, as it is achieved through a power virus; the Fury non-X is probably being limited through its BIOS from drawing that much power. It's a worthless metric.
It is not. It shows that it's possible to limit power consumption significantly without affecting performance too much. ComputerBase measured almost a 100 W difference (power consumption was tested in Ryse: Son of Rome, no power virus), and Hardware.fr also measured a 101-116 W difference (depending on the Fury X sample, measured in Anno 2070), putting the Nano at 61-64% of the Fury X's power consumption (100%). So, reducing the 5700 XT's power consumption from 225 W to 150 W (67%) without losing more than 10-15% of performance should be possible (if the power scaling of Navi is similar to the power scaling of Fiji).
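The two ratios in that argument check out against the figures cited in the post. A minimal sketch, using the 275 W Fury X TDP and the ~100 W ComputerBase delta quoted above (the Navi projection simply assumes Fiji-like scaling, as the post does):

```python
# Power-scaling arithmetic from the figures cited in this post.
fury_x_tdp = 275.0    # Fury X TDP, as quoted later in the thread
nano_delta = 100.0    # ~100 W delta measured by ComputerBase in Ryse

fiji_ratio = (fury_x_tdp - nano_delta) / fury_x_tdp
print(f"Nano runs at ~{fiji_ratio:.0%} of Fury X power")

navi_ratio = 150.0 / 225.0
print(f"A 150 W Navi 10 would be {navi_ratio:.0%} of the XT's 225 W TBP")
```

Both come out near two thirds, which is why the post treats the 225 W → 150 W cut as analogous to the Fury X → Nano case.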
 
So, reducing the 5700 XT's power consumption from 225 W to 150 W (67%) without losing more than 10-15% of performance should be possible (if the power scaling of Navi is similar to the power scaling of Fiji).
Do you expect the same physics to apply to others?
 
So, reducing the 5700 XT's power consumption from 225 W to 150 W (67%) without losing more than 10-15% of performance should be possible (if the power scaling of Navi is similar to the power scaling of Fiji).
Now who is blindly speculating?

And no, it shouldn't: one chip (Fury X) is obviously pushed way past its operational efficiency, to the point it needed water cooling to even be a viable product, while the other (5700 XT) is relatively comfortable with its clocks and didn't need water cooling.

On a different note, the 5700 XT was originally called RX 690 before AMD chose to change course and call it 5700.

https://www.pcgamesn.com/amd/radeon-rx-5700-xt-690-graphics-card-e3
 
[…]while the other (5700 XT) is relatively comfortable with its clocks and didn't need water cooling.
IMHO, that remains to be seen. I am not so sure at what point on the clock/voltage curve the RX 5700 and XT sit. That the higher-clocked XT 50th AE has the same TDP as the XT and is selected through binning might indicate that Navi 10 does indeed sit in a still-comfortable position.

But even if so, we would have a hypothetical 7nm, 80-CU product with HBM2 consuming 300 watts at roughly 2080 Ti performance, the latter being on 12nm, using GDDR6, and specced at 40-50 watts less. I am not sure that comparison and armchair architecting would make much sense.
 
And no, it shouldn't: one chip (Fury X) is obviously pushed way past its operational efficiency, to the point it needed water cooling to even be a viable product…
The simple fact that a product is equipped with a water cooler doesn't mean it needs one. In fact, Fury X had a lower TDP (275 W) than most of AMD's high-end models (300 W for HD 7970 GE, 294 W for R9 290X, 295 W for Vega 64, 300 W for Radeon VII).

while the other (5700 XT) is relatively comfortable with it's clocks
Is that a fact (source?), or just speculation (based on what?)? I would say it's the second most power-demanding ~250 mm² GPU ever released. I can't remember a graphics card with a 250 mm² GPU that had a 225 W or higher TDP, with the exception of the Radeon RX 590, which runs way beyond its comfortable clocks. Does the 5700 XT really look to be clocked within its comfort zone? I don't think so. It could be clocked a bit higher, obviously, but it seems to be quite far beyond the sweet spot.
 
I would say it's the second most power-demanding ~250mm² GPU ever released.
Because it's been pushed to its limit, as seems to have been an AMD/RTG habit for a couple of years now... That doesn't give much hope for an efficient big SKU based on the RDNA architecture.
 
Can somebody explain how the new front end is working? We have a geometry processor? It is written that all shaders can take data from the geometry processor without calculating it in the "prim units".

Although the basic structure of Navi 10 with RDNA is similar to Polaris 10 and Vega 10 with GCN, it still differs in some aspects. For example, RDNA has only one geometry processor, compared to four on Vega, but it has become much more powerful. In addition, it works in conjunction with four new "prim units" (one per shader block), which have similar tasks, although all shader blocks can get data directly from the geometry processor without having to calculate it in the prim units.

https://www.computerbase.de/2019-06/navi-radeon-rx-5700-xt/
 
The Navi block diagrams don't show the DSBR. Is that dropped, or is it just part of the geometry processor and not labelled?
 
There are blocks labelled rasterizer in the Navi diagram, but at present the organization and duties of the geometry units and the geometry pipeline for Navi is a little unclear.
Those rasterizers could support the DSBR functionality, but that may not warrant mention if the functionality hasn't changed. I have not seen specific mention of managing the binning hardware in driver changes like there were for Vega yet, but the public-facing changes took some time to surface. Since the DSBR can default to standard rasterization function, it wasn't as urgent to get the enablement out with the initial changes.
 
Does 5700 XT really look to be clocked within its comfort zone? I don't think so. It could be clocked a bit higher, obviously, but it seems to be quite far beyond the sweet spot.
If we take the TBPs at face value, we get 20-25% more TFLOPS (depending on whether you compare the vanilla card with the XT or the XT AE) for 25% more TBP.
 