Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

Status
Not open for further replies.
Because Ampere seems to be a rather recent addition; if you look at the roadmaps, it was not mentioned in the past.
There are no public roadmaps beyond Volta. Literally the only thing we can go by is Heise picking up a name.

Even if they are right, how would you know it’s a recent addition? It could have existed as an internal name for years for all we know.
 
LOL, so a chip almost 2 years past Pascal is "unlikely" to be the "real" next generation? What?

Yeah, unlikely this is a real new generation. My bet is on a Pascal shrink to 12nm. The naming is probably a diversion. Why? Because it saves money, and I bet they think they don't need to put any more effort into the gaming sector. And they are probably right, until they get caught cold by AMD (Navi) or Intel (whatever they are cooking).

I am going to go out on a limb and assume that there are a few people working there who are not interested in being "caught cold" à la Intel.
 
What's the point of Pascal on 12nm or Volta on 12nm? Too large a die, too expensive. They could just drop the price of GP104 to $300 and still make a killing.

The Ampere 10nm story makes sense.

What? Too large for what? What do you expect? There will not be anything anywhere near as big as the GV100 die. Everything new they do costs money, and there is no reason for them to invest a cent more than they have to; Pascal is running great for them as it is.
 
Pascal isn't going to generate a new influx of revenue however, even a "12nm" version. If they want to continue pushing the gaming market and maintain/strengthen their dominance, they'll be advancing the tech.
 
Assuming the Ampere codename for next GeForce GPUs is true, I expect Ampere to simply be Volta minus the Tensor cores, NVLink 2, and other bits. Will use GDDR6 memory and be focused on graphics performance for GeForce and Quadro lines. Will start launching cards in Q2 shortly after the March GTC announcement.

Flagship GA104 (GeForce Ampere) GTX 1180

Enthusiast & Ultra Enthusiast GA100 or GA102 in GTX 1180 Ti and Titan Ampere some time later.

Midrange GA106 / GTX 1160 in Q3 2018
 
Ampere seems like a codename like Volta, rather than a branding name. I don't see Nvidia suddenly having different codenames for what amounts to a cut-down Volta, unlike what they normally have been doing.
 
TSMC didn't create a custom Nvidia 12nm FFN (as in FinFET Nvidia) process just so Nvidia could transition to 7nm as soon as humanly possible.

I expect Nvidia will create at least a full lineup (pro + consumer cards) on 12FFN in 2018 before they move to 7nm.



https://www.anandtech.com/show/1136...v100-gpu-and-tesla-v100-accelerator-announced
AFAIK 12FFN is exactly the same process as "regular" 12nm (aka improved 16nm); NVIDIA just bought the early (risk?) capacity and gave it their name.
 
LOL, so a chip almost 2 years past Pascal is "unlikely" to be the "real" next generation? What?

I'm reading that on different forums from time to time, and I'm unsure whether these are just AMD fanboys believing this, or people who have never worked at companies that develop products, or ever heard anything about product development.
Or maybe people believe that a 128-shader/SM chip has to be a Pascal refresh, but underestimate that Volta's perf/W improvement will definitely find its way into the next gen.
 
Google Translate:

nVidia skips "Volta" in the gaming sector and will bring the "Ampere" generation in Q2 / 2018
This is the same rumor regurgitated again, as we heard a week or two ago elsewhere. It might mean something or it might mean nothing. In and of itself it adds nothing new, though, other than as an example of the Great Internet Echo Chamber. :)

Everything new they do costs money, and there is no reason for them to invest a cent more than they have to
Bah. Everything in cutting-edge semiconductors costs money. Nvidia claimed at the time of Pascal's unveiling to have spent a billion dollars developing it. According to you they wouldn't actually have, though, because it'd be expensive? Lolwut.
 
This is the same rumor regurgitated again, as we heard a week or two ago elsewhere. It might mean something or it might mean nothing. In and of itself it adds nothing new, though, other than as an example of the Great Internet Echo Chamber. :)

But is it really? 10nm in particular.
When put this way, 10nm sounds like a no-brainer:

New nVidia graphics chips after the Pascal generation have effectively always required a new, real full production node; with anything else, either the performance or the power consumption targets would not be met.
In addition, the delay of a new graphics chip generation at nVidia (the time span of 18 months to date, and probably ~24 months in total after the release of the current generation, is comparatively long for nVidia) definitely indicates that they are waiting for something that will be ready by a certain time, such as 10nm production for large graphics chips.

For smartphone SoCs, 10nm has already been in mass production since spring 2017, but the first months are usually blocked exclusively for large orders from Apple and Samsung, and after that the new process first has to mature to the point where the much larger graphics chips can be produced at a meaningful yield.
One year later than the first corresponding SoCs is a rule of thumb that has worked well in recent years, and for TSMC's 10nm production it fits well with the second quarter of 2018.
 
But is it really? 10nm in particular.
When put this way, 10nm sounds like a no-brainer:

New nVidia graphics chips after the Pascal generation have effectively always required a new, real full production node; with anything else, either the performance or the power consumption targets would not be met.
In addition, the delay of a new graphics chip generation at nVidia (the time span of 18 months to date, and probably ~24 months in total after the release of the current generation, is comparatively long for nVidia) definitely indicates that they are waiting for something that will be ready by a certain time, such as 10nm production for large graphics chips.

For smartphone SoCs, 10nm has already been in mass production since spring 2017, but the first months are usually blocked exclusively for large orders from Apple and Samsung, and after that the new process first has to mature to the point where the much larger graphics chips can be produced at a meaningful yield.
One year later than the first corresponding SoCs is a rule of thumb that has worked well in recent years, and for TSMC's 10nm production it fits well with the second quarter of 2018.
It's just speculation based on the assumption that Nvidia can do no wrong and can't have any issues whatsoever (damn, people have short memories). Next-gen GeForces will almost certainly use "12nm" like GV100 does.
 
Assuming the Ampere codename for next GeForce GPUs is true, I expect Ampere to simply be Volta minus the Tensor cores, NVLink 2, and other bits. Will use GDDR6 memory and be focused on graphics performance for GeForce and Quadro lines. Will start launching cards in Q2 shortly after the March GTC announcement.

Flagship GA104 (GeForce Ampere) GTX 1180

Enthusiast & Ultra Enthusiast GA100 or GA102 in GTX 1180 Ti and Titan Ampere some time later.

Midrange GA106 / GTX 1160 in Q3 2018

Given recent history this seems like the most reasonable expectation until there's more concrete evidence that Nvidia are doing something different this time.
 
Well, getting rid of the Tensor Cores may require a different chip/SM layout... maybe that's the Ampere they're talking about.
They've had this before without any such distinction of two families or some such (FP32/FP64 CUDA core configuration completely different within a family).
 
But is it really? 10nm in particular.
When put this way, 10nm sounds like a no-brainer:

New nVidia graphics chips after the Pascal generation have effectively always required a new, real full production node; with anything else, either the performance or the power consumption targets would not be met.
In addition, the delay of a new graphics chip generation at nVidia (the time span of 18 months to date, and probably ~24 months in total after the release of the current generation, is comparatively long for nVidia) definitely indicates that they are waiting for something that will be ready by a certain time, such as 10nm production for large graphics chips.

For smartphone SoCs, 10nm has already been in mass production since spring 2017, but the first months are usually blocked exclusively for large orders from Apple and Samsung, and after that the new process first has to mature to the point where the much larger graphics chips can be produced at a meaningful yield.
One year later than the first corresponding SoCs is a rule of thumb that has worked well in recent years, and for TSMC's 10nm production it fits well with the second quarter of 2018.

10nm is a half-step node designed solely for mobile SoCs and doesn't have the specs for large chips. Just like with 20nm, both Nvidia and AMD will be skipping it. For AMD, at least by appearances, their relationship with GlobalFoundries will allow them to get to "7nm" by sometime next year, though probably only for small chips, as GF will be upgrading to better yields / planned EUV insertion (somewhere) by 2019; that, and new nodes have low enough yields that you need smaller chips to amortize the failure rate anyway. Meanwhile TSMC, which is who Nvidia uses, has a roadmap that is somewhat behind GF's. Thus the switch to the "improved 16nm" 12nm. The two designs are compatible; you can just move chip tapeouts from one to the other without much trouble.

At this point Nvidia just has to take what it can get. But other than mobile chips, AMD's node advantage over Nvidia probably won't last too long. By the time AMD is likely to get larger GPUs out on 7nm, Nvidia should be right there with them, or at least not terribly far behind. I.e., next year don't be surprised if AMD and Intel are the major laptop/(Windows) mobile device players with Nvidia more left out than they usually are, but other than that window they'll be back in soon enough.
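The "smaller chips to amortize failure rate" point above can be sketched with a first-order Poisson yield model (an illustration only; the defect density figure is an assumption, not foundry data, and real foundries use more elaborate fits):

```python
import math

def die_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies expected to have zero defects.

    Y = exp(-A * D), with die area A in cm^2 and defect density D in
    defects per cm^2. A rough first-order approximation.
    """
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

# Assumed defect density for an immature node (illustrative value).
D = 0.5  # defects per cm^2

small = die_yield(150, D)  # midrange-class die
large = die_yield(600, D)  # GV100-class die

print(f"150 mm^2 die yield: {small:.1%}")  # exp(-0.75) ~ 47%
print(f"600 mm^2 die yield: {large:.1%}")  # exp(-3.0)  ~ 5%
```

With the same defect density, the big die's yield collapses roughly exponentially with area, which is why early-node production favors small chips.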
 
At this point Nvidia just has to take what it can get. But other than mobile chips, AMD's node advantage over Nvidia probably won't last too long. By the time AMD is likely to get larger GPUs out on 7nm, Nvidia should be right there with them, or at least not terribly far behind. I.e., next year don't be surprised if AMD and Intel are the major laptop/(Windows) mobile device players with Nvidia more left out than they usually are, but other than that window they'll be back in soon enough.
The question is whether there will be a high-performance-oriented 7nm for NVIDIA; GloFo's 7nm is supposed to be optimized for high-performance parts, but there's no word on TSMC doing the same.
 
10nm is a half-step node designed solely for mobile SoCs and doesn't have the specs for large chips. Just like with 20nm, both Nvidia and AMD will be skipping it. For AMD, at least by appearances, their relationship with GlobalFoundries will allow them to get to "7nm" by sometime next year, though probably only for small chips, as GF will be upgrading to better yields / planned EUV insertion (somewhere) by 2019; that, and new nodes have low enough yields that you need smaller chips to amortize the failure rate anyway. Meanwhile TSMC, which is who Nvidia uses, has a roadmap that is somewhat behind GF's. Thus the switch to the "improved 16nm" 12nm. The two designs are compatible; you can just move chip tapeouts from one to the other without much trouble.

At this point Nvidia just has to take what it can get. But other than mobile chips, AMD's node advantage over Nvidia probably won't last too long. By the time AMD is likely to get larger GPUs out on 7nm, Nvidia should be right there with them, or at least not terribly far behind. I.e., next year don't be surprised if AMD and Intel are the major laptop/(Windows) mobile device players with Nvidia more left out than they usually are, but other than that window they'll be back in soon enough.

Qualcomm's Centriq 2400 series server chips are 398 mm². That would be quite adequate for a high-end GPU.
https://www.qualcomm.com/news/relea...logies-announces-commercial-shipment-qualcomm
 
Qualcomm's Centriq 2400 series server chips are 398 mm². That would be quite adequate for a high-end GPU.
https://www.qualcomm.com/news/relea...logies-announces-commercial-shipment-qualcomm
It's not about die size; it's about performance.
10LPE is okay for server clocks, but totally not okay for high-performance, speed-demon consumer GPUs.
I would be surprised to see any product shipping to consumers on 7nm in 2018.
A11X/A12.
Yours, Apple.
 