AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Only Sony and Microsoft know exactly what arch the PS5/Scarlett will be on, and even they are probably still weighing tradeoffs and exact specs. Is it going to be easier to go with 7nm+ next year, or 7nm? What's the node upgrade path at TSMC like for bringing costs down: is 6nm, which is cell-compatible with 7nm, better than 7nm+? There's no indication 7nm+ has slipped, in fact it's probably in oversupply. It needs EUV scanners, which TSMC bought almost all of this year, seemingly in anticipation of Apple paying up for the newest node as always. But Apple (reportedly) balked, so TSMC might have a ton of extremely expensive EUV scanners sitting around doing nothing. Will this lower the price and timeline for RDNA2? I dunno.

Even the very terms "RDNA 2" and "Next Gen" seem intentionally vague. Are they still considered "Navi" for the purposes of "PS5/Scarlett have Navi GPUs"? AMD also has reason to be vague; they're answerable to investors. If they can't guarantee RDNA2 will launch next year, changing a chart to vaguely show it might somehow launch in 2021 is just covering their ass legally, as well as in PR terms. The only conclusions that can solidly be drawn are "Navi or something very close, with raytracing, exists somewhere, on some node, and will launch in consoles next year, and for consumers by 2021 or earlier".
Apple reportedly balked at 7nm+ pricing. And with 6nm offering a path to EUV for 7nm, I don’t see why consoles would want 7nm+.
 
Apple reportedly balked at 7nm+ pricing. And with 6nm offering a path to EUV for 7nm, I don’t see why consoles would want 7nm+.

6nm doesn’t offer better density or power efficiency than 7nm+. It’s an upgrade path for 7nm products that’s cheaper than migrating to 5nm or 7nm+.
 
Apple reportedly balked at 7nm+ pricing. And with 6nm offering a path to EUV for 7nm, I don’t see why consoles would want 7nm+.
Let's add some more information.
As far as I can see, the source of this is probably this article at Motley Fool, who in turn quoted Bluefin Research Partners, who wrote this:
Our latest checks suggest that AAPL is not interested in paying more for 7nm+ shrink for their next generation iPhone models in 2019, given the early price increases that TSM has quoted. It is our opinion that the EUV costs due to the inefficient throughput is one of the key contributors to the projected price increase.
Note that there is nothing about actual manufacturing choices in that quote, only a claim that TSMC had wanted a price premium for 7nm+ wafers that Apple found a bit much. Ashraf Eassa (at Motley Fool) then went through options for Apple - mainly stay at their current process or negotiate the pricing.
Can anyone bring any other source to the table that actually claims that manufacturing of Apple's next SoC will not use EUV for some layers?
 
Only Sony and Microsoft know exactly what arch the PS5/Scarlett will be on... There's no indication 7nm+ has slipped, in fact it's probably in oversupply.
For all we know, the architecture will be 'Navi' and it will include raytracing. I'm assuming hardware BVH tree traversal, like in the DirectX Raytracing pipeline. That would align the release of the RDNA2 desktop GPU part with the console APU part (or parts).
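
To make 'BVH tree traversal' concrete, here's a minimal software sketch of the loop such hardware would accelerate: walk a tree of bounding boxes, skip whole subtrees the ray misses, and only test triangles in the leaves it reaches. The Node layout and function names are my own illustration, not AMD's hardware or the actual DXR API:

```python
# Minimal software sketch of BVH traversal -- the loop that DXR-style
# ray/box and ray/triangle units are meant to accelerate. Layout and
# names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]
Tri = Tuple[Vec3, Vec3, Vec3]

def sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a: Vec3, b: Vec3) -> Vec3:
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a: Vec3, b: Vec3) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class Node:
    lo: Vec3                                        # AABB min corner
    hi: Vec3                                        # AABB max corner
    children: List["Node"] = field(default_factory=list)
    triangles: List[Tri] = field(default_factory=list)  # leaves only

def hit_aabb(orig: Vec3, inv_d: Vec3, lo: Vec3, hi: Vec3) -> bool:
    """Slab test: does the ray pass through the box at all?"""
    tmin, tmax = 0.0, float("inf")
    for k in range(3):
        t1 = (lo[k] - orig[k]) * inv_d[k]
        t2 = (hi[k] - orig[k]) * inv_d[k]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def hit_triangle(orig: Vec3, d: Vec3, tri: Tri) -> Optional[float]:
    """Moeller-Trumbore ray/triangle test; returns hit distance or None."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < 1e-9:
        return None                       # ray parallel to triangle
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > 1e-9 else None

def closest_hit(node: Node, orig: Vec3, d: Vec3, inv_d: Vec3) -> Optional[float]:
    """Recursive traversal: prune by box test, test triangles at leaves."""
    if not hit_aabb(orig, inv_d, node.lo, node.hi):
        return None                       # whole subtree skipped
    if not node.children:                 # leaf node
        hits = [t for t in (hit_triangle(orig, d, tri) for tri in node.triangles)
                if t is not None]
        return min(hits) if hits else None
    hits = [t for t in (closest_hit(c, orig, d, inv_d) for c in node.children)
            if t is not None]
    return min(hits) if hits else None

if __name__ == "__main__":
    # One root box with a single leaf holding one triangle at z = 5.
    tri = ((-1.0, -1.0, 5.0), (1.0, -1.0, 5.0), (0.0, 1.0, 5.0))
    leaf = Node(lo=(-1.0, -1.0, 5.0), hi=(1.0, 1.0, 5.0), triangles=[tri])
    root = Node(lo=(-1.0, -1.0, 5.0), hi=(1.0, 1.0, 5.0), children=[leaf])
    orig, d = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
    inv_d = tuple(1.0 / c if abs(c) > 1e-12 else 1e12 for c in d)
    print(closest_hit(root, orig, d, inv_d))   # prints ~5.0
```

The expensive parts are the box and triangle tests plus the pointer-chasing through the tree, which is exactly why this workload leans so hard on caches and memory latency once it goes to hardware.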

Console APUs have to use the most current process available in early 2020 - AMD needs to build the inventory for the November launch, so they can't wait for the '7nm+' process to mature.

Even the very terms "RDNA 2" and "Next Gen" seem intentionally vague. Are they still considered "Navi" for the purposes of "PS5/Scarlett have Navi GPUs"?
I take it as 'Navi plus hardware raytracing based on BVH tree traversal'.

The only conclusions that can solidly be drawn are "Navi or something very close, with raytracing, exists somewhere, on some node, and will launch in consoles next year, and for consumers by 2021 or earlier".
I would be perfectly OK with vague references to the architecture used. I assume they have a selection of IP building blocks to suit a specific application.

Looking at the changes from GCN to RDNA, the improvements are mainly in task scheduling, memory caches, and fixed-function blocks for color compression and geometry culling. The compute front-end - the register model and the instruction set of GCN compute units - did not change much; it's really a hybrid of GCN and 'next gen'.

Hardware raytracing puts a lot of stress on memory bandwidth, so we will probably see further improvements to caches, memory, and other fixed-function blocks. This would warrant the 'RDNA2' / 'next-gen' designation, even if the compute blocks don't change much.


IMHO there was too much emphasis on 'GCN' in AMD marketing. It's cool to know every little detail of the architecture down to instruction mnemonics, but the focus on compute units downplays other significant changes in the architecture. They need to start marketing specific GPU generations, even when the changes between them are incremental, so we don't go like 'Oh, they made yet another GCN part... meh.'
 
6nm doesn’t offer better density or power efficiency than 7nm+. It’s an upgrade path for 7nm products that’s cheaper than migrating to 5nm or 7nm+.
No, but it’s an optical shrink compatible with 7nm. That’s what makes it attractive, given that the 7nm-to-7nm+ gains are meager.

This is the path the 590 took with 12nm.

Let's add some more information.
As far as I can see, the source of this is probably this article at Motley Fool, who in turn quoted Bluefin Research Partners, who wrote this:

Note that there is nothing about actual manufacturing choices in that quote, only a claim that TSMC had wanted a price premium for 7nm+ wafers that Apple found a bit much. Ashraf Eassa (at Motley Fool) then went through options for Apple - mainly stay at their current process or negotiate the pricing.
Can anyone bring any other source to the table that actually claims that manufacturing of Apple's next SoC will not use EUV for some layers?

Ming-Chi Kuo has not stated which process they’re on. He’s the only source I’d trust unfailingly.
 
IMHO there was too much emphasis on 'GCN' in AMD marketing. It's cool to know every little detail of the architecture down to instruction mnemonics, but the focus on compute units downplays other significant changes in the architecture. They need to start marketing specific GPU generations, even when the changes between them are incremental, so we don't go like 'Oh, they made yet another GCN part... meh.'

that's why they call it RDNA ;)
 
RDNA is scalable. There is nothing stopping AMD from merging 4 units instead of 2, or changing the ratios of compute vs. rasterization, or tying in more special-function units, etc.

Imagine those Navi ring buses extending out to more "cores", making RDNA chips as big as one would want. Something like 80 CUs would not use double the space and would be knocking on the 2080 Ti's door (in games).
 
RDNA is scalable. There is nothing stopping AMD from merging 4 units instead of 2, or changing the ratios of compute vs. rasterization, or tying in more special-function units, etc.

Imagine those Navi ring buses extending out to more "cores", making RDNA chips as big as one would want. Something like 80 CUs would not use double the space and would be knocking on the 2080 Ti's door (in games).
Pretty sure an 80 CU RDNA would beat a 2080 Ti by a margin, barring scaling issues (bottlenecks, bandwidth).
 
No, but it’s an optical shrink compatible with 7nm. That’s what makes it attractive, given that the 7nm-to-7nm+ gains are meager.

This is the path the 590 took with 12nm.

Ming-Chi Kuo has not stated which process they’re on. He’s the only source I’d trust unfailingly.

Yes. 6nm makes sense when you have a 7nm product and you don’t want to make the investment of going to 7nm+ or 5nm.

Going from 7nm to 6nm is cheaper than going from 7nm to 7nm+, but that doesn’t mean the 7nm-then-6nm path is cheaper than 7nm+ overall.

Apple may have scoffed at 7nm+ pricing, but Apple doesn’t want 7nm+ in 2020. It wants it now, which comes at a premium if you want to have one of the first products on a leading-edge node.
 
Then it wouldn't beat the 2080ti. It is a simple matter of power efficiency and max power consumption. Pick any max power target, the design with greater perf/w will always provide greater performance at equal consumption.
Perf/Watt changes with clocks. Clocks are a linear scaling. Power scales somewhere between the square and cube of the voltage.
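
To put some rough numbers on that: under the usual first-order assumption that performance scales with clock and dynamic power scales with f·V², perf/W depends on the chosen operating point rather than being a single fixed property of a design. The clock/voltage pairs below are invented purely for illustration, not measured Navi or Turing data:

```python
# Back-of-envelope sketch: why perf/W moves with the chosen clock.
# Assumes performance ~ clock and dynamic power ~ f * V^2 (first-order
# CMOS model); the V/F points below are made up for illustration.

def rel_power(f_rel: float, v_rel: float) -> float:
    """Dynamic power relative to the baseline operating point."""
    return f_rel * v_rel ** 2

def rel_perf_per_watt(f_rel: float, v_rel: float) -> float:
    """Perf per watt relative to baseline (perf ~ clock => 1 / V^2)."""
    return f_rel / rel_power(f_rel, v_rel)

# (relative clock, relative voltage) -- hypothetical V/F curve points
operating_points = [
    (1.00, 1.00),   # stock
    (0.90, 0.93),   # modest downclock lets voltage drop
    (0.80, 0.87),   # deeper downclock
]

for f, v in operating_points:
    print(f"clock {f:.2f}x  voltage {v:.2f}x  "
          f"power {rel_power(f, v):.2f}x  perf/W {rel_perf_per_watt(f, v):.2f}x")

# In this toy model, dropping 10-20% clock cuts power by ~22-39% and
# lifts perf/W by ~16-32% -- which is why a wide, low-clocked chip can
# win at a fixed board power.
```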
 
Radeon Fury Nano offered 86-89 % of Fury X performance (ComputerBase, 4k-1440p) and consumed 45-75 % of the X's power (depending on load type, TechPowerUp, card only). If Navi behaves in a similar manner, losing 10-15 % of its performance should lower the power consumption of 40 CUs + 64 ROPs from 225 to 150 watts. A hypothetical product with 80 CUs + 128 ROPs should then offer 170-180 % of the 5700 XT's performance. Using HBM, which would reduce die size and power consumption a bit, maybe a few % more. GeForce RTX 2080 Ti has 165 % of the performance of GeForce RTX 2070 (4k, ComputerBase), so a 300W Navi could beat it by 3-12 % (if the 5700 XT and RTX 2070 perform identically). For a high-end product with high margins AMD could also use a bit more aggressive binning and pick low-leakage chips operating well at lower voltage.

Anyway, such a product released in summer 2020 would be quite late; Nvidia's 7nm generation will likely offer this level of performance at sub-200W TDP.
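
For anyone who wants to poke at the assumptions, here is the same back-of-envelope math in a few lines; every input is just the rough figure quoted above (0.875 is the midpoint of the 85-90 % retained performance), nothing new:

```python
# Rough reproduction of the estimate above, using the figures quoted in
# the post (ComputerBase / TechPowerUp numbers) -- no new data.
# Baseline: 5700 XT (40 CUs, 64 ROPs) at ~225 W board power, perf = 1.00.

nano_style_perf    = 0.875    # keep ~85-90 % of the performance...
nano_style_power_w = 150      # ...to drop a 40 CU chip to ~150 W

# Double CUs/ROPs at that lower operating point, assuming near-linear
# scaling (the optimistic part of the argument):
big_power_w = 2 * nano_style_power_w          # ~300 W
big_perf    = 2 * nano_style_perf             # ~1.75x of a 5700 XT

rtx2080ti_perf = 1.65   # 2080 Ti vs RTX 2070 at 4k per ComputerBase,
                        # treating the 2070 as ~equal to the 5700 XT

print(f"Hypothetical 80 CU part: ~{big_power_w} W, {big_perf:.2f}x a 5700 XT")
print(f"Margin over a 2080 Ti: {(big_perf / rtx2080ti_perf - 1) * 100:.0f}%")
# ~6% with the midpoint numbers; the 3-12% spread comes from using
# 1.70-1.80x instead of 1.75x and adding a few % for the HBM assumption.
```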
 
Perf/Watt changes with clocks.
Doesn't matter. Perf/w is Perf/w. There is no getting around what I wrote; the design with the best perf/w will provide the most performance at a given power budget.

Clocks are a linear scaling.
Yeah, I am aware. I can add and subtract, sometimes even multiply and divide when I get real fancy.
Power scales somewhere between the square and cube of the voltage.
Also already aware of that..... perhaps something I don't know. But it wouldn't matter anyway.
 
Doesn't matter. Perf/w is Perf/w. There is no getting around what I wrote; the design with the best perf/w will provide the most performance at a given power budget.

Yeah, I am aware. I can add and subtract, sometimes even multiply and divide when I get real fancy.
Also already aware of that..... perhaps something I don't know. But it wouldn't matter anyway.
And the 2080 Ti is a fixed entity. It has one perf/Watt point, and that’s the clock and voltage profile already chosen by Nvidia.
 
Radeon Fury Nano offered 86-89 % of Fury X performance (ComputerBase, 4k-1440p) and consumed 45-75 % of the X's power (depending on load type, TechPowerUp, card only). If Navi behaves in a similar manner, losing 10-15 % of its performance should lower the power consumption of 40 CUs + 64 ROPs from 225 to 150 watts. A hypothetical product with 80 CUs + 128 ROPs should then offer 170-180 % of the 5700 XT's performance. Using HBM, which would reduce die size and power consumption a bit, maybe a few % more. GeForce RTX 2080 Ti has 165 % of the performance of GeForce RTX 2070 (4k, ComputerBase), so a 300W Navi could beat it by 3-12 % (if the 5700 XT and RTX 2070 perform identically). For a high-end product with high margins AMD could also use a bit more aggressive binning and pick low-leakage chips operating well at lower voltage.

Anyway, such a product released in summer 2020 would be quite late; Nvidia's 7nm generation will likely offer this level of performance at sub-200W TDP.
Just a note: scaling of RDNA gaming performance past 40 CUs to 64 or even 80 CUs remains to be seen.
 