Nvidia Turing Speculation thread [2018]

Remove all these things from Volta and it's a pretty great gaming architecture. Is this what Turing/Ampère is? If so, does it warrant a new name? Does that make it a new architecture? I fear we may be venturing into semantics, here.

By that view you could just as well say Pascal is semantically the same as Maxwell, but we know it is not.
Turing will be on a 12 nm process versus 16 nm for Pascal, and of course there will also be some architectural improvements, just as Pascal had over Maxwell.
 
One more reason why using the architecture name as the series name is a terrible idea. NVIDIA has in the past used chips from previous architectures within the same series, albeit only the very low-end ones.
Actually both vendors have done it several times, and not just at the low end (NVIDIA's G92(b), for example, was used in a total of four series, one of which was OEM-only; AMD has used at least two different generations of GCN in every generation since the HD 7000 series).
 
And by that logic the P100 shouldn't be a Pascal, the GK210 shouldn't be a Kepler, and so on - you see where I'm going with this?
 
There are no gaming versions of Volta; there is a reason the Volta Titan is called Titan V and not Titan XV.
Volta is a non-gaming architecture: HBM2 (gaming will use GDDR6), double precision, NVLink, and high mixed-precision tensor-core TFLOP/s (for AI/ML training). Volta can be used for gaming, but from a price/performance point of view it is not optimal.

Ummm, Pascal also fits that.
  • Versions with HBM2 and versions with GDDR
  • Versions with NVLink and versions without.
  • Versions with PureVideo and versions without.
  • Versions with double rate FP16 and versions without.
  • The list goes on.
https://en.wikipedia.org/wiki/Pascal_(microarchitecture)

GP100 (Pascal) was significantly different from consumer Pascal. So there's no reason that consumer Volta wouldn't be able to do the same thing. The only significant difference would be having to remove the tensor cores, otherwise it'd be the same as how NVidia differentiated between GP100 and the rest of the GP stack.

That doesn't mean that NV wouldn't choose to diverge from the naming scheme used for Pascal and instead call consumer Volta something else. They certainly could have called consumer Pascal something other than Pascal if they'd wanted, and likely no one would have batted an eyelash.

Regards,
SB
 
Right, but there was no Titan based on the GP100, whereas from the V100 there already is the Titan V.
So it would be a bit confusing, marketing-wise, if there were a future Titan XV based on a simplified consumer version of Volta. It would make more sense to give it another name, like Turing.
 
That's all for the marketing team to decide. From an engineering standpoint Volta and whatever consumer GPU comes out based on Volta could be similar to P100 versus P10x. Marketing may want different names, but from an engineering standpoint Volta versus "Turing" may be basically the same as P100 versus P10x.

Regards,
SB
 
I think we can agree that P100 and GP102 are quite different from an engineering point of view.
 
I think there should be more cards than just the actual big Volta chip with tensor cores.
Software engineering could speed up if consumer hardware with those cores were available.

Is there a way of using ray tracing in a kind of low-end, entry-level style, or does an engine have to go all-in?
What I mean is: is there a way to use it at much lower fidelity, just to take work off the programmers, or possibly as error correction for lighting, reflections and shadows?
Like an AI auto-correction with just a few rays?
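
Just to make that "few rays plus correction" idea concrete, here is a toy sketch in CUDA (entirely hypothetical, and using a plain spatial average rather than the AI correction being speculated about): trace one ray per pixel for, say, reflections or shadows, then smooth the noisy result.

Code:
// Toy sketch: reconstruct a low-fidelity ray-traced term from very few rays.
// "noisy" holds one-sample-per-pixel results; a 5x5 box filter stands in for
// the smarter (AI) reconstruction discussed above.
__global__ void box_denoise(const float* noisy, float* out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float sum = 0.0f;
    int count = 0;
    for (int dy = -2; dy <= 2; ++dy)
        for (int dx = -2; dx <= 2; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                sum += noisy[ny * w + nx];
                ++count;
            }
        }
    out[y * w + x] = sum / count;
}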

Besides that, I'm very curious how Nvidia will maximize FPS and energy efficiency, lower off-chip traffic, and how narrow the memory interfaces will get with those 12 Gbps GDDR6 chips.
 
That doesn't mean that NV wouldn't choose to diverge from the naming scheme used for Pascal and instead call consumer Volta something else. They certainly could have called consumer Pascal something other than Pascal if they'd wanted, and likely no one would have batted an eyelash.
I think there are things more telling than HBM yes/no or NVLink yes/no. GP100 is compute capability 6.0, while GP102 and later are compute capability 6.1. One thing I want to point out is that the later chips have the dp4a and dp2a integer dot product instructions, and NV specifically said these would have been in GP100 as well, but they were not ready in time. And P100 launched only about two months before the 1080. V100 has been on the shelves for close to a year now, so I think it's a pretty safe bet that there are going to be architectural changes in the GPUs coming to market beyond HBM/NVLink/... hence Turing.
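
For concreteness, here is a minimal sketch of what dp4a looks like from the CUDA side, using the standard __dp4a intrinsic (the kernel and variable names are illustrative, not from the thread). It is the packed 8-bit dot product that arrived with compute capability 6.1 (GP102 and later) and is absent on GP100 (6.0); build with -arch=sm_61 or newer.

Code:
// Each 32-bit operand of __dp4a is treated as four packed signed 8-bit values;
// the four products are summed and added to the 32-bit accumulator argument.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void int8_dot(const int* a, const int* b, int* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __dp4a(a[i], b[i], 0);   // 4-way int8 dot product, int32 accumulate
}

int main()
{
    // 0x01010101 = four bytes of 1; dotted with four bytes of 2 gives 4 * (1*2) = 8.
    int ha = 0x01010101, hb = 0x02020202, hout = 0;
    int *da, *db, *dout;
    cudaMalloc((void**)&da, sizeof(int));
    cudaMalloc((void**)&db, sizeof(int));
    cudaMalloc((void**)&dout, sizeof(int));
    cudaMemcpy(da, &ha, sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, &hb, sizeof(int), cudaMemcpyHostToDevice);
    int8_dot<<<1, 1>>>(da, db, dout, 1);
    cudaMemcpy(&hout, dout, sizeof(int), cudaMemcpyDeviceToHost);
    printf("dp4a result: %d\n", hout);    // expected: 8
    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}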
 
Right, but there was no Titan based on the GP100, whereas from the V100 there already is the Titan V.
The reason there's no Titan based on GP100 is that the GP102 performs better for rendering and is substantially cheaper to make.

Had Nvidia kept their initial guidelines for the Titan line (max performance + max memory amount + uncapped compute features), then we might have seen a GP100 Titan.
But since the Titan line eventually settled into "you pay twice as much for 10% more", with Nvidia going as far as trying to dictate how their customers use the GPUs they own, I'd look at the Titan V as the odd duck here.
 
Trolling would be NVidia cutting deep learning HW from Volta and calling the result Turing.
You mean tensor cores? That's actually a highly likely option (along with cutting DP speed as usual) - it was the most likely option until they revealed RTX, which seems to utilize tensor cores.
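
As a hedged illustration of what "utilizing tensor cores" means at the programming level, here is a minimal sketch using CUDA's WMMA API (the kernel name and launch shape are my own, not from the thread). Each warp cooperatively performs a 16x16x16 FP16 multiply with FP32 accumulation - the mixed-precision operation Volta's tensor cores execute - and it requires compute capability 7.0+ (-arch=sm_70).

Code:
// One warp computes D = A * B + C for 16x16 tiles: FP16 inputs, FP32 accumulator.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_16x16x16(const half* a, const half* b, float* c)
{
    // Per-warp fragments for the A tile, B tile and the accumulator.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);                 // start from C = 0
    wmma::load_matrix_sync(a_frag, a, 16);             // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);    // the tensor-core op
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

// Launched with a full warp, e.g. wmma_16x16x16<<<1, 32>>>(dA, dB, dC);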
 
Wondering what architectural improvements the next generation consumer NV GPU (let's call it Turing) will bring.
Pascal compared to Maxwell brought:
- GDDR5X
- An unexpectedly high bump in GPU clock, to about 2 GHz
- More memory bandwidth savings: framebuffer compression/decompression
- Better async compute
- Further acceleration of VR
- dp2a/dp4a, good for NN inferencing

So what would Turing speculatively bring?
- GDDR6 sure
- Tensor cores, likely
- Something compute related to increase flexibility, possibly
- Something new related to accelerate graphics, likely
- Fixed function DLA, for high speed/low power inferencing, possibly
... ???
 
FP16 support the same way as Vega / PS4 Pro?

It's not a terrible idea. Some things are already done in FP16 regardless of double-rate hardware support, just to cut down on VGPR usage, and with the PS4 Pro and Vega supporting it there's a bit more usage coming in from developers here and there. Parts of Far Cry 5's water pipeline use FP16, and it's probably used for other things as well.

Part of the reason AMD has it on Vega is to cut down on GPU design/tapeout costs, which are growing prohibitive with newer silicon nodes, and so they unified their consumer and pro GPU lines. While Nvidia doesn't need to do that yet, that doesn't mean it's not useful for games.
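
To make the "double rate FP16" point concrete, here is a minimal sketch of packed half2 math in CUDA, assuming the standard cuda_fp16.h intrinsics (the kernel itself is illustrative). Two FP16 values share one 32-bit register, so a single instruction operates on both lanes and register pressure drops accordingly - the VGPR-saving effect mentioned above.

Code:
// Packed FP16 a*x + y: each half2 holds two values, so one __hfma2 processes
// two elements per thread. Requires compute capability 5.3+ for half arithmetic.
#include <cuda_fp16.h>

__global__ void saxpy_fp16x2(const half2* x, const half2* y, half2* out, float a, int n2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) {
        half2 a2 = __float2half2_rn(a);      // broadcast the scalar into both halves
        out[i] = __hfma2(a2, x[i], y[i]);    // fused multiply-add on two halves at once
    }
}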
 
Probably full DX12 support as well, on par with Vega.
Does Resource Heap Tier 2 support require some drastic changes? It's the only thing missing from Volta compared to GCN5 (besides FP16 support, but we know they already have chips with that).
 