Nvidia Volta Speculation Thread

Yes, it looks like they scrapped Volta, and Maxwell is now an architecture being used for over two years, like Kepler before it.

The default hypothesis was that Volta is not very different from Maxwell.
And even Kepler is not that different. I tend to see Kepler/Maxwell/Pascal a bit like Radeon HD 2000/3000/4000 (we had G80 derivatives for ages too).
 
I wonder what has happened to Nvidia's furthest-off GPU architecture, Einstein, which AFAIK was meant to follow Volta / Pascal in 2018+. And wasn't Einstein meant to be the GPU architecture for Echelon, their exascale supercomputer project?
 
http://www.legitreviews.com/nvidia-gtc-2014-opening-keynote-highlights_138267

An updated GPU roadmap was then shown that showed that Pascal would be the successor to Maxwell. This came as a shock to us as we thought that Volta would be the successor to Maxwell. We asked NVIDIA PR and they said that Volta is still out there and will be the architecture after Pascal. From what we gather Pascal will be the ideal solution for smaller form factor solutions and that Volta will be used across the board.
 
I was responding to legitreviews' assertion that Pascal would be for small form factors and only Volta would be used across the board.
 
There's small form factors as in mobile form factors, or small form factors as in high-density server compute cards.
It could be the latter.
 
I read it as the latter also. Giant interposers and TSV RAM stacks seem like they will make current Tesla pricing a bargain. Am I being too pessimistic in not expecting this tech to be feasible from a cost perspective in the consumer space by 2016?
 
Ashraf Eassa from Motley Fool writes about Volta and muses whether it will use 16nm or 10nm TSMC. His point is that NVidia has to supply Volta Teslas for the Summit supercomputer in 2017, and there's a chance TSMC's 10nm process may not be ready to provide the chips in time, so what's NVidia's risk-management strategy?

The fact that an analyst asked such an oddly specific question about different Volta chips using different processes makes me think they already knew some inside info. So we may see GV100 on TSMC 16nm, and follow-on chips on 10nm. NVidia has mixed process nodes on the same architecture before (40/55/65 nm for GTxxx) but not since then.
 

TSMC 10nm will be ready in 2017, but at what capacity and yield? That's the question. IMHO the only mainstream product on TSMC 10nm in 2017 will be the Apple A11.
 
From The Motley Fool: "NVIDIA Corporation's Volta Rumored to Be a Monster."

The Motley Fool said:
According to a translation of USG Ishimura's ["a known leaker on Baidu"] post, the performance of GV104 -- the Volta-based successor to the recently released GP104 chip (which powers the GeForce GTX 1070 and GeForce GTX 1080 graphics cards) -- will offer "really strong" performance.

This performance enhancement, the user indicates, is due to the fact that the "sm structure [has] changed" relative to the prior-generation Pascal architecture (in NVIDIA's graphics architecture, "SM" is short for "streaming multiprocessor").
[…]
According to USG Ishimura, NVIDIA is actually planning not two, but three high-end gaming graphics processors based on Volta this time around: GV104, GV110, and GV102.
Perhaps the GV110 and GV102 correspond to the GP100 and GP102 respectively in terms of DP and HPC segmentation, although I don't think there's anything in the numbers that says things have to be that way. Also, the CUDA DLL from last year with the Pascal codenames also mentioned a GV100. Maybe GV100 became GV110 since then?
 
Might be a dual adapter setup featuring NVLink. Once they get HBM2 going the footprint should be small enough to pull it off. NVLink could also connect to a separate onboard memory pool like AMD's SSG if there are no patent issues there. They've already done the FINFET move so unless they're going 10nm already the transistor count shouldn't be changing that much.
 

Yes, what happened to Einstein? Doesn't NV usually show roadmaps every year? What's coming after Volta? What replaced Einstein in the roadmap? If I'm not mistaken, it was Maxwell -> Einstein -> Volta originally.
 
I can't remember seeing such a roadmap, but I do remember Pascal appearing in place of Volta without warning.
 
No official architecture beyond Volta has been announced. As for Pascal, yeah, it was added to the public roadmap at GTC 2014, a year after Volta had been announced.
 