Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

That graph looks like revenue, as in $$, not units. Given that Turing card prices were considerably higher than Pascal's, wouldn't this be a given? Or am I reading this wrong? Also, isn't it combined with both the 2080 Ti and the non-Ti at launch, compared to only the 1080 for Pascal?

Revenue and margins are what counts for corporations as they drive profits.
 
Revenue and margins are what counts for corporations as they drive profits
Well of course, but that graph is a marketing piece, not internal or investor material. It gives the impression that they sold 45% more Turing than Pascal, and there are already articles with that kind of wording, because some websites and people merely parrot whatever marketing provides.

The power of graphs.
 
But didn't NVIDIA's margins crash pretty hard after the Turing launch?

The topic is about Turing only, and no, Turing margins did not crash.

What crashed was sales of mid-to-low-end Pascal cards, which were awash in excess inventory at AIBs because those AIBs missed the memo that crypto was crashing and ordered too many in the previous quarters. Because of that, they didn't order any new cards last quarter.
 
Well of course, but that graph is a marketing piece, not internal or investor material. It gives the impression that they sold 45% more Turing than Pascal, and there are already articles with that kind of wording, because some websites and people merely parrot whatever marketing provides.

It is not a marketing piece as you state. It is directly from an NVIDIA Investor Day held for analysts.

NVIDIA held its Investor Day recently, and one of the more interesting declarations at the event was that the company actually sold a lot more Turing GPUs than Pascal in the first 8 weeks of desktop revenue.

https://wccftech.com/nvidia-turing-...e-than-pascal-in-first-eight-weeks-of-revenue
 
The topic is about Turing only, and no, Turing margins did not crash.

What crashed was sales of mid-to-low-end Pascal cards, which were awash in excess inventory at AIBs because those AIBs missed the memo that crypto was crashing and ordered too many in the previous quarters. Because of that, they didn't order any new cards last quarter.
So, let me get this straight: AIBs are to blame for the post-Turing crash? Because they flooded the channel with lower-performing Pascal cards and did not buy higher-cost Turing cards, which they would allegedly have been able to just move through inventory? I fail to see the reasoning here.
 
So, let me get this straight: AIBs are to blame for the post-Turing crash? Because they flooded the channel with lower-performing Pascal cards and did not buy higher-cost Turing cards, which they would allegedly have been able to just move through inventory? I fail to see the reasoning here.


That is not what I said at all.


Turing card revenues were higher than Pascal card revenues for the first 8 weeks of sales, as shown in the chart presented at the recent Investor Day.

Turing cards are selling at a lower volume than Pascal did because of the higher prices, but even with the lower volume, price times quantity sold is higher for Turing than for Pascal.
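The price-times-volume point can be sketched with purely hypothetical numbers (the thread compares revenue only; actual unit volumes and average selling prices are not given):

```python
# Hypothetical figures for illustration only -- not actual NVIDIA data.
pascal_units, pascal_price = 100_000, 600    # assumed launch volume and ASP
turing_units, turing_price = 80_000, 1_000   # fewer units at a higher price

pascal_revenue = pascal_units * pascal_price  # 60,000,000
turing_revenue = turing_units * turing_price  # 80,000,000

# Lower volume, yet higher revenue:
assert turing_units < pascal_units
assert turing_revenue > pascal_revenue
```

With these assumed numbers, a 20% drop in units sold is more than offset by a 67% higher price, so revenue still rises.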

And yes, AIBs are to blame for NVIDIA's revenue shortfall last quarter, because they ordered way too many lower-performing Pascal cards in the previous quarters.

How hard is it to understand that too many widgets in inventory results in no more widget orders until the widget inventory gets reduced?
 
Pretty much an expected timeframe for a 7nm EUV chip, which could be announced at GTC 2020 in March 2020.

They used to pride themselves on being able to tape out a design with only a couple of metal spins, though it obviously failed a fair few times. Still, a year seems a long timeframe.

I'd just take their word for it and assume they really are avoiding 7nm for now; I fully expect it to still be a clusterfuck a year from now.
 
They used to pride themselves on being able to tape out a design with only a couple of metal spins, though it obviously failed a fair few times. Still, a year seems a long timeframe.

I'd just take their word for it and assume they really are avoiding 7nm for now; I fully expect it to still be a clusterfuck a year from now.

1 year is the normal timeframe from tapeout to product for a big chip on a new process. GP100 also took 1 year, and Vega took nearly a year on a known process. Polaris was faster, I think just 9 months, as was Turing. But Polaris was a simpler chip, and NVIDIA had very good process experience for Turing, which probably saved them some metal spins.
 

It appears to be in the same vein as other interconnected-"neuron" deep learning chips. The hypothesis is that learning is local (short range) and neural nets only connect locally, so you can scale up cheaply with chiplets (or similar) that only have local interconnects, because they don't need direct access to chiplets that are further away. That has nothing to do with how GPUs work in the graphics sense, as graphics needs far too much universal bandwidth, and especially ray tracing, which needs the opposite of local interconnects.

So yeah, that has zero to do with multi-GPU, but it may have something to do with NVIDIA's future GPUs: if they find a non-monolithic design to be efficient, at least in terms of cost versus performance, they might ditch GPUs for deep learning altogether. That might be good news for video game consumers, who would no longer need to buy questionably useful deep-learning cores on their gaming GPUs.
 
Will Ampere succeed both Volta and Turing, or just Volta in the HPC space? I hope Ampere will do both, because Turing is not that great and needs to be replaced as soon as possible.

There was a long gap between Pascal and Turing, and it was still a long time between Volta for HPC and Turing for GeForce/consumer cards.
 