AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Hm... well, I don't agree with her that "it is still very early" (regarding RT). "Early", maybe, but not "very early". Not at this point. RT is already a real thing for many, many developers, either because they've already launched games with some sort of RT options or because they're planning to do so soon.

I wonder why so much mystery...

That's the only answer she could give. Even if RDNA2 has kickass RT capabilities, it's not like she can hype ray tracing right now with AMD having nothing on the market. It would be a free advertisement for Turing.
 
One turn of phrase I noticed coming up in a number of these statements is "you should expect", which isn't the same thing as saying AMD will.
I think there's a very good chance these things will come to pass this year, but it looks like the CEO doesn't want to get AMD pinned down if something goes awry.

The question about the power efficiency enhancements gave me pause:
LS: When we put Vega into a mobile form factor with Ryzen 4000, we learned a lot about power optimization. 7nm was a part of it sure, but it was also a very power optimized design of that architecture. The really good thing about that is that what we learned is all applicable to Navi as well. David’s team put a huge focus on performance per watt, and that really comes out of the mobile form factor, and so I’m pleased with what they are doing. You will see a lot of that technology will also impact when you see Navi in a mobile form factor as well.

I'm interpreting this as: AMD learned a lot about mobile-focused power optimization, and those optimizations are going into a mobile Navi.
At least that seems more charitable than interpreting it as AMD not having thought much about power efficiency until just now.
Even so, I'd ask whether they hadn't at least explored avenues down that path before now. Maxwell benefited significantly from Nvidia's mobile efforts, and those benefits struck directly at AMD's competitiveness. Wouldn't that have hinted to AMD that this would be a good direction to explore? And how seriously should we take all those pie-in-the-sky HPC APU papers and patents about advanced power-efficiency measures if, years later, AMD started touting what it just learned about this "power-efficiency" idea?
 
I don't think there was anything vague about her statement.
To me (as a non-native speaker), Su's statement sounds very vague and suggestive -> „You should expect that our discrete graphics as we go through 2020 will also have ray tracing" [my bold], and she repeats that a few times. Maybe it's just for legal reasons, so AMD cannot be held responsible if, for some reason, „our expectations" should not come to pass. :)
 
It's the first time I've noticed that RDNA2 is only mentioned in the context of 7nm+. I'm guessing RDNA2 will only appear when AMD starts shipping some products on 7nm+.
First time? It was marked as "7nm+" before there even was "RDNA" or "RDNA2", back when it was still just "Next gen" next to Vega and Navi mentioned by name.
 
The question about the power efficiency enhancements gave me pause:


I'm interpreting this as: AMD learned a lot about mobile-focused power optimization, and those optimizations are going into a mobile Navi.
At least that seems more charitable than interpreting it as AMD not having thought much about power efficiency until just now.
Even so, I'd ask whether they hadn't at least explored avenues down that path before now. Maxwell benefited significantly from Nvidia's mobile efforts, and those benefits struck directly at AMD's competitiveness. Wouldn't that have hinted to AMD that this would be a good direction to explore? And how seriously should we take all those pie-in-the-sky HPC APU papers and patents about advanced power-efficiency measures if, years later, AMD started touting what it just learned about this "power-efficiency" idea?

They are talking about the Vega IGP in the 15w TDP mobile APUs.

So, they are talking about the dozens of optimizations that would save maybe 1-2w. Stuff you don't think about when you have a 50w TDP chip.
Just like they probably don't know about all the stuff Nvidia learned with Tegra or like what Samsung will have to do with their AMD IP, to save that .5w

Comes down to low hanging fruit and ROI.
 
They are talking about the Vega IGP in the 15w TDP mobile APUs.

So, they are talking about the dozens of optimizations that would save maybe 1-2w. Stuff you don't think about when you have a 50w TDP chip.
Just like they probably don't know about all the stuff Nvidia learned with Tegra or like what Samsung will have to do with their AMD IP, to save that .5w

Comes down to low hanging fruit and ROI.
That is true, but I would have interpreted the impact Maxwell had as a sign that the ROI needed to be re-evaluated.
I wouldn't discount a 1-2 W saving found on a vastly smaller GPU; I read the broad improvement with Vega as a hint that some fraction of it could be applied more generally, and applying those savings across a GPU 10x the size adds up to something more noticeable.
Power efficiency is already a brutal slog of incremental improvements over as broad a front as possible, and this goes back to my question about the HPC fluff AMD put out, where even those fractions of a watt would have mattered.
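To put some rough numbers on that scaling intuition, here's a back-of-envelope sketch. Every figure in it is a hypothetical placeholder (the IGP power share, the transferable fraction, the discrete board power), not anything AMD has stated:

```python
# Back-of-envelope: scaling mobile power optimizations up to a larger discrete GPU.
# All numbers below are hypothetical placeholders, not AMD figures.

igp_power_w = 10.0           # assumed GPU share of a 15 W mobile APU's budget
igp_saving_w = 1.5           # the "1-2 W" class of optimizations discussed above
transferable_fraction = 0.5  # assume only half of those tricks apply to a big die

dgpu_power_w = 225.0         # assumed board power of a large discrete part
scale = dgpu_power_w / igp_power_w

# If the savings scale roughly in proportion to the power being managed,
# a small per-block win multiplies across a chip with ~20x the power budget.
dgpu_saving_w = igp_saving_w * transferable_fraction * scale
print(f"Estimated saving on the discrete part: ~{dgpu_saving_w:.0f} W "
      f"({dgpu_saving_w / dgpu_power_w:.0%} of board power)")
```

The proportional-scaling assumption is the weak link, of course; it's only meant to show why a 1-2 W win on a tiny IGP isn't automatically negligible on a big die.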
 
They are talking about the Vega IGP in the 15w TDP mobile APUs.

So, they are talking about the dozens of optimizations that would save maybe 1-2w. Stuff you don't think about when you have a 50w TDP chip.
Just like they probably don't know about all the stuff Nvidia learned with Tegra or like what Samsung will have to do with their AMD IP, to save that .5w

Comes down to low hanging fruit and ROI.

There weren't only power savings in Vega mobile; there was also a 59% increase in performance...
 
There weren't only power savings in Vega mobile; there was also a 59% increase in performance...
The increase in performance is actually a higher 3DMark score. Some of it comes from higher clocks, some from faster memory, but some is still unaccounted for.
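As a rough illustration of that decomposition (with made-up clock and bandwidth ratios, purely to show the arithmetic), you can factor the headline gain into a clock term and a memory-bandwidth term and see what residual is left over:

```python
# Rough decomposition of a headline performance gain into known factors.
# The clock and bandwidth ratios below are made-up placeholders, not measured values,
# and the model is deliberately naive (it just multiplies the two factors).

headline_gain = 1.59     # the "59% higher performance" figure being discussed

clock_ratio = 1.25       # assumed GPU clock increase
bandwidth_ratio = 1.15   # assumed memory bandwidth increase

explained = clock_ratio * bandwidth_ratio
residual = headline_gain / explained  # what clocks and bandwidth alone don't cover

print(f"Explained by clocks x bandwidth: {explained:.2f}x")
print(f"Unaccounted-for residual:        {residual:.2f}x")
```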
 
Honestly, it's probably just a 56% efficiency improvement over the Vega 8 in the 12nm Picasso, which can be directly attributed to the process shrink.

The Vega 20 didn't get that much of an efficiency improvement over Vega 10 because it had twice as many HBM2 memory controllers and the CUs were changed to support 1:2 FP64 throughput.
 
Lisa Su doesn't talk about 59% higher efficiency, but about 59% higher performance:
And that refers to a 3DMark graphics score; I can't remember exactly which model was compared or which 3DMark, though.
 
[Attached image: AMD's statement on the 59% figure]
Here is a statement AMD posted on reddit about what they mean by "59% more efficient". I'm going to guess most of the improvement comes from the increase in memory bandwidth and clock speed over anything else, even as AMD themselves like to attribute some of it to "secret sauce".
 
It's expensive as all hell. 8GB of GDDR6 is $25, or around that. Back in 2017 HBM2 cost $80 for 4GB. I mean, even if the price has halved or more, the choice between the two isn't that hard.

Who is selling 8GB of GDDR6 for $25? No one.... you couldn't get 8GB of GDDR5 for <$60 when it was massively supplied.
Cost of the entire memory subsystem for Vega10 was $100-$150.
I don't know where you are getting your numbers from, but they are significantly off.

Next you are going to tell us that Vega10 BOM was >$300 and Vega20 BOM was >$800....

Edit: Also, Nvidia switched to GDDR5X, which had a 20-30% premium compared to GDDR5.
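Just to lay the thread's numbers side by side, here's a quick per-GB comparison using only the figures quoted above; none of them are verified prices, they're just the claims being argued over:

```python
# Per-GB cost comparison using the figures quoted in this thread.
# None of these prices are verified; they are simply the two posters' claims.

claims = {
    "GDDR6, 8GB (claim: ~$25 total)":                              25 / 8,
    "GDDR5, 8GB (claim: >= $60 total)":                            60 / 8,
    "HBM2, 4GB in 2017 (claim: $80 total)":                        80 / 4,
    "Vega10 8GB HBM2 subsystem (midpoint of $100-150 claim)":     125 / 8,
}

for label, per_gb in claims.items():
    print(f"{label:<60} ~${per_gb:.2f}/GB")
```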
 
Why has AMD abandoned HBM2?
It hasn't been abandoned. AMD is selling GPUs with HBM2 right now.

They just don't have any 7nm Navi offering at the high end that would justify having HBM2. The highest-end RX 5700 XT uses a chip that is just as big as the RX 480/580, and it uses the same number of memory channels.
Down the road, it can probably scale down to $200 or less, and at that point you wouldn't want HBM2 keeping you from driving those prices down.
 