Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

Status
Not open for further replies.
I have not heard Nvidia say that their entire "Ampere" portfolio is on the same process node. Nvidia has said their next datacenter/AI/server-farm GPU would be on 7nm; they did not say that their gaming GPUs and smaller dies would be on 7nm. There is mention of Nvidia shopping TSMC's nodes other than 7nm, though.

And since binning datacenter Ampere down into gaming GPUs would take 2-3 months from Ampere's August release, that would put Ampere for gaming too far away. So I personally think Jensen has a SUPER+ refresh incoming with an Ampere-like architecture, just on an advanced TSMC 12nm node: overclocked hard, thanks to really good 12nm+ thermals and efficiency, and priced just right for gaming.
They said specifically that most chips will be made by TSMC, which indicates most gaming chips too. Most likely it's a case like Pascal - TSMC for everything but the lowest end chips which are made by Samsung
 
TSMC 7nm capacity is used up. That is why I feel Nvidia might release Ampere for gaming on a refined 12nm node. And use 7nm for GA100 and GA102 dies for their Business sector GPUs. 12nm+ makes sense if you stop and think about the cost, power and efficiency aspects of it all. Specially, if Nvidia was put off guard by Navi (rdna), so close after Vega20.
 
We don't know whether NVIDIA will use N7 or N7+. N7+ hasn't had any reported capacity issues so far AFAIK, and N7 capacity issues should resolve themselves as Apple moves to 5nm.
 
Haven't they basically been confirmed to be using Samsung 7nm? Releasing on yet another 12nm variant just seems like a way to get ruined, with Xe coming up and RDNA2 seemingly being a large improvement over RDNA1.

They announced as much nearly a year ago. Unless Samsung's process is in such deep shit that they can't deliver almost anything, the claim that TSMC is producing everything just seems like a baseless rumor.
 
Yes, they have confirmed they will use Samsung to make some chips. They've also reiterated that TSMC will still make most of their chips.
 
Confirmation that the HPC version of Ampere is coming this summer:
Indiana University is the proud owner of the first operational Cray “Shasta” supercomputer on the planet. The $9.6 million system, known as Big Red 200 to commemorate the university’s 200th anniversary and its school colors, was designed to support both conventional HPC as well as AI workloads. The machine will also distinguish itself in another important way, being one of the world’s first supercomputers to employ Nvidia’s next-generation GPUs.
Full Story at the source:
https://www.nextplatform.com/2020/0...rst-production-cray-shasta-supercomputer/amp/
 
NVIDIA’s Next-Gen GPU Is Up To 75% Faster Than Current-Gen – Will Be Deployed in Big Red 200 Supercomputer This Summer

https://wccftech.com/nvidia-next-gen-ampere-gpu-75-percent-faster-existing-gpus

It is also mentioned that Big Red 200 gained an additional 2 petaflops of performance even though it uses fewer GPUs than the Volta V100 based design. The reason for going with a smaller number of next-generation GPUs is simply that they offer 70-75% better performance than existing parts.

Volta and Turing share lots of similarities, with Turing being a more refined version of Volta. If the 70-75% performance increase figures are close to the real thing, then we can definitely see close to a 50% performance gain or even higher in the consumer variants. Do note that previous rumors had already said that NVIDIA's next-generation GPUs would offer 50% more performance while being twice as efficient as Turing GPUs.
 
What memory config should we expect on the flagship gaming GPU? GDDR6 clock speeds were basically maxed out this generation.

16 Gbps on a 384-bit bus gets to 768 GB/s. That's about 25% more bandwidth than the 2080 Ti (616 GB/s). Doesn't seem like enough, especially for the resolutions and framerates we're chasing these days.

Return of 512-bit buses or will Nvidia finally bring HBM2 to consumer cards?
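For reference, peak GDDR6 bandwidth is just per-pin data rate times bus width; a quick sketch of the arithmetic above (the 2080 Ti's 14 Gbps on a 352-bit bus is its published spec, while the 16 Gbps/384-bit config is the hypothetical one being discussed here):

```python
# Peak memory bandwidth in GB/s: per-pin rate (Gbps) x bus width (bits) / 8 bits per byte
def gddr_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    return pin_speed_gbps * bus_width_bits / 8

# Hypothetical next-gen config from the post above: 16 Gbps GDDR6 on a 384-bit bus
print(gddr_bandwidth_gbs(16, 384))  # 768.0 GB/s
# RTX 2080 Ti for comparison: 14 Gbps GDDR6 on a 352-bit bus
print(gddr_bandwidth_gbs(14, 352))  # 616.0 GB/s
# Relative uplift: roughly 25%
print(gddr_bandwidth_gbs(16, 384) / gddr_bandwidth_gbs(14, 352))  # ~1.25
```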
 
Worth noting is that the comparison is against Volta, not Turing, and they don't specify on what metrics they expect it to be 70-75% faster. Also, it's about the Tesla line, which may or may not even use the same architecture as the next-gen consumer GPUs, but certainly won't share a chip with any GeForce.

Also, the 3x power efficiency (+50% speed at half the power) which WCCFTech is referring to is obviously not happening unless NVIDIA found the holy grail or something similar.
 
We can't downplay Nvidia yet, not until the new product is out there.
Who's downplaying them? 3x power efficiency would be the same as a card performing like a 2080 Ti while consuming less than a 1650 Super; that kind of progress would be unheard of.
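To make the objection concrete, relative perf/watt is just the speedup divided by the power ratio; a minimal sketch of that arithmetic, plugging in the rumored figures:

```python
# Relative perf-per-watt: (new perf / old perf) divided by (new power / old power)
def perf_per_watt_ratio(speedup: float, power_ratio: float) -> float:
    return speedup / power_ratio

# The rumored claim being discussed: +50% performance at half the power draw
print(perf_per_watt_ratio(1.5, 0.5))  # 3.0, i.e. the "3x efficiency" figure
```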
 
According to Igor'sLab, NVIDIA's board partners are gearing up for a massive overhaul of their PCB designs in anticipation of NVIDIA's next-gen cards. A method called "backdrilling" will be used to allow much higher operating frequencies than are possible with current board designs; it's also more expensive. The interpretation is that NVIDIA is gearing up to introduce really fast new GPUs.

Nvidia now relies on the backdrill method for Ampere
This automatically results in a significantly lower bit error rate, avoids jitter, and reduces signal attenuation. Conversely, this naturally leads to higher channel bandwidth and higher data rates, which brings us back to the starting point: the expected performance increase. According to Samsung, there is currently no GDDR6 significantly faster than 16 Gbps, so Nvidia's change in board manufacturing technology must be viewed as a blanket measure for all areas where signal integrity and bandwidth are important.

And I think it's also the first time that board partners have been trained and tested this way in advance. So we can expect something really fast, that's for sure. It doesn't really matter whether it's the 50% figure that's circulating, or a compromise between increasing performance and reducing power consumption. Such preparations to ensure a successful market launch are only made if you really have something presentable in your pocket. That's a fact.
https://www.igorslab.de/en/why-the-...er-the-practice-phase-igorslab-investigative/
 
NVIDIA is making a killing in quarterly profits right now, and with no competition, I don't think prices will stabilize to lower levels anytime soon.
Erm, last time I checked AMD provides ample competition in everything but the very highest end
 
The halo effect is important, that's one: NVIDIA has 4 GPUs above AMD's highest Navi choice (2070S, 2080, 2080S, 2080 Ti), not counting the Titan RTX of course.

Secondly, looking at the market right now, that's not true. AMD still doesn't provide competition when their GPUs lack essential hardware features; NVIDIA has way more options than them at every price point and is selling way more GPUs. The 5500 XT had a poor reception, same for the 5600 XT, and the 5700 series is being outmatched by the Super series in sales, especially with the current driver woes. The 5700 XT is the only successful Navi choice for AMD right now, but the recent driver problems have cast a big shadow over it.

Thirdly, on the process front AMD is a node behind NVIDIA as well, that doesn't matter to consumers right now, but it matters a hell of a lot more next gen. It gives NVIDIA headroom to experiment and push their advantage further.
 
While it's factually incorrect to say there is no competition, I'd say that AMD still has far from enough mind- and market share for NVIDIA to feel much of a dent from it. AMD is moving forward impressively, but NVIDIA has certainly won the marketing game. Pre-builts, laptops, supercomputers et al. still mainly offer NVIDIA GPU solutions, so the result is effectively the same.

AMD will need a strong counter to NVIDIA's next product lineup for them to really feel the burn, I suspect. They're shifting prices around a little now, because they have to in the low-to-mid range.
 
While the halo effect is important, I don't think it's the be-all and end-all. Volkswagen flourished on the Beetle and Golf (Rabbit in the US); a well-made product at the right price is key. Not to say it's unimportant, just that I think it gets overstated. And as to the performance of the 5500 and 5600 cards, we just can't say yet. They've been out for far too little time to make an adequate judgement on their success. Though they've had a few hurdles at launch, I think it's rash to disregard them as trash.

Lastly, AMD is on 7nm with RDNA, and NVIDIA is still on 12nm, though this will change too: 7nm with the new lineup. So I don't know where you got the process disadvantage from? Or have I missed something and NVIDIA is jumping to 5nm?
 