Nvidia Turing Speculation thread [2018]

If G-Sync is demonstrably better than Adaptive Sync, it would live on. If it isn't, then yes, it would quickly die once Nvidia supported Adaptive Sync.

If quality were important, they could still have Nvidia-certified Adaptive Sync monitors. At that point it'd be up to the user whether to use a certified display or not. Heck, I have an Adaptive Sync monitor from Korea that isn't AMD FreeSync certified, and it worked fine when I tried it with my 290 a couple of years back.

Regards,
SB

Yeah, G-Sync is useless at this point. It's basically a tax on Nvidia users, a tax they pay because Nvidia in turn does such a great job at marketing. "Hey look, we made realtime raytracing possible! Us, no one else! You don't know that other realtime things, including entire games, are doing it much faster, nor that this is a Microsoft-spec API, so you don't care. Praise us!"

I do admire the job the marketers do, but it's bad for consumers, which includes me.

Vega was the great white hope, the dream, the crusher of Nvidia! Turns out it was just late. AMD must have shat their pants when Nvidia launched the 1080 way back in May 2016.

But I digress. No doubt a second-hand 1080 Ti would be a great buy; the current BS is that Pascal is overpriced, yet GP102 may well go EOL unanswered.

Vega was great! In theory. In practice it was only great for crypto mining, and only if you tweaked the hell out of custom drivers. The back end is there, and it can even scale down to an incredibly efficient TDP, as the Raven Ridge parts show. But instead of ensuring all that theoretical performance translated into in-game performance, Vega had to throw a bunch of resources at costly tech like HBM and at weird experiments like primitive shaders and the high-bandwidth cache.

Hopefully Navi, with all the time it is taking and a new development head, will give a much better showing. And apparently a 7nm showing, which Ampere certainly isn't. I still find the timing of Ampere odd: 7nm is right there, and it's a huge improvement. Why is Nvidia showing up with a new generation right now?
 

You don't need to tweak anything in the drivers for crypto mining. Just install the newest driver. The harder part is tweaking Windows to get persistent results 24/7.
 
Haven't looked into crypto mining for a while now, just because it tied me to the oldish special blockchain driver. So you're saying new(er?) drivers now give the same performance, specifically 1800+ H/s in CryptoNightV7? If that's so, maybe I should look into it again. :)
 
Yup, that's been the case since version 18.4-something.
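
On the "persistent results 24/7" point above, here is a minimal watchdog sketch for keeping an eye on the hash rate over long runs. It assumes a miner that exposes its current hash rate as JSON over a local HTTP endpoint; the URL, port, and field name below are placeholders and will differ between miners.

```python
# Minimal hash-rate watchdog sketch for long unattended mining runs.
# Assumes the miner serves its stats as JSON over local HTTP; the endpoint
# and the "hashrate" field name are placeholders, not a specific miner's API.
import json
import time
import urllib.request

STATS_URL = "http://127.0.0.1:4048/summary"  # placeholder endpoint
TARGET_HS = 1800                             # expected H/s (the CryptoNightV7 figure mentioned above)
POLL_SECONDS = 60

def current_hashrate():
    """Fetch the miner's reported hash rate from its local stats endpoint."""
    with urllib.request.urlopen(STATS_URL, timeout=5) as resp:
        stats = json.load(resp)
    return float(stats["hashrate"])          # placeholder field name

while True:
    try:
        hs = current_hashrate()
        if hs < 0.9 * TARGET_HS:
            print(f"WARNING: hash rate dropped to {hs:.0f} H/s")
        else:
            print(f"OK: {hs:.0f} H/s")
    except Exception as exc:                 # miner not responding, malformed stats, etc.
        print(f"ERROR: could not read miner stats: {exc}")
    time.sleep(POLL_SECONDS)
```

Something like this only tells you when the rig has drifted off its expected rate; what you do about it (restart the miner, reset clocks, reboot) is still the Windows-tweaking part.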
 
Hopefully this next gen continues to prioritize power efficiency. The 960 in my HTPC is a little long in the tooth. Looking forward to an x50 part with 1060 performance.
 
What if there is no small Turing part? It might be possible to postpone GT107/108 and launch them directly on 7nm. 7nm is not far away, and the chips would be small enough even for a process that isn't yielding well yet. They would also make good test chips for the process.
 
Aren't there elements that don't scale down with smaller process geometries as well as SRAM or logic do? If I remember that correctly, then the smaller the chip was in the first place, the less there is to gain from a process shrink, because of those necessary non-scaling components.
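
A back-of-envelope sketch of that point, using made-up numbers rather than real die measurements: if logic and SRAM shrink well while analog, PHY and I/O blocks barely do, a die with a larger share of those non-scaling blocks gains less from the same node jump.

```python
# Back-of-envelope: why small dies tend to benefit less from a shrink.
# All figures are illustrative assumptions, not real die data.

def shrunk_area(total_mm2, io_fraction, logic_scale=0.6, io_scale=0.9):
    """Estimate post-shrink die area.

    io_fraction -- share of the die taken by analog/PHY/I/O that barely scales
    logic_scale -- assumed area factor for logic + SRAM on the new node
    io_scale    -- assumed area factor for the non-scaling parts
    """
    io = total_mm2 * io_fraction
    logic = total_mm2 - io
    return logic * logic_scale + io * io_scale

# Hypothetical big GPU (small I/O share) vs. small GPU (large I/O share)
for name, area, io_frac in [("big die", 450, 0.15), ("small die", 120, 0.35)]:
    new = shrunk_area(area, io_frac)
    print(f"{name}: {area} mm^2 -> {new:.0f} mm^2 ({new / area:.0%} of original)")
```

With these made-up ratios the big die ends up at roughly 65% of its original area, while the small die only gets down to roughly 70%, so the relative payoff of the shrink is smaller for the small chip.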
 
For what it's worth, at least HWiNFO indicates only GV102 and GV104 being released at first, since they're only adding support for those two in their next version.
 
That doesn't mean anything. After that version a new one will come, which might support other chips; not all GPUs are released in the same month. I'm sure at least an 1160 will come, as it's the segment with the biggest competition and one of the best-selling segments. But the lower segments? They have the 1050 Ti, which competes very well against the RX 560. They could just reduce prices a bit and wouldn't need a new GPU there. But it might be that notebook makers are desperate for a new 1150.
 
Hmm, virtualized GPU resources, no memory duplication and driver overhead for the masses?
Those have nothing to do with the physical link (or the encoding used) between GPUs AFAIK
Edit:
GPU resources have been virtualized for some time already, if I'm not mistaken (when used as a multiuser GPU). Avoiding memory duplication depends on API support for a proper multi-GPU mode plus software support for specific rendering modes, and I just can't see how a different encoding or physical link would have anything to do with driver overhead.
 
NVIDIA AIB Manli registers GA104-400 - Ampere? And Lists GeForce GTX 2070 and 2080
The Guru3D news article mentions EEC certification for Manli Technology Group's Ampere products, and also shows registrations for the 2070 and 2080 GeForce products.

https://www.guru3d.com/news-story/n...ga104-400-ampere-and-lists-2070-and-2080.html
 

If you notice, immediately preceding the "GeForce GTX 2070...2080" entries on that list are the entries for the rest of the GeForce *10* series, with the 1070, 1080, and 1080 Ti parts completely missing. Pretty sure that's just a typo. As for the GA104 bit: who knows at this point. We know GV104 and GV102 exist, although those could be intended for Quadro parts, with Ampere or Turing codenames used for the GPUs intended for the GeForce brand.
 
Manli has contacted media, including us, to remove all references to the 2070 and 2080 etc., stating that they are not behind the registrations and that they're now investigating how it's possible someone could register them under Manli's name.
 
Guess that seals it. I’m really curious as to what GV104 will bring to the table that would match or beat GP102 at a considerably lower die size and power consumption. Given what we’ve seen from Titan V, Volta performance per flop in games isn’t encouraging.

GDDR6 will help with bandwidth, but it seems GV104 will need ~450 mm² to deliver a useful upgrade.
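
Quick arithmetic on the bandwidth side. The GTX 1080 figure is its launch spec; the GV104 configuration (256-bit bus with 14 Gbps GDDR6) is an assumption for the sake of the comparison, not a confirmed spec.

```python
# Peak memory bandwidth: launch GTX 1080 vs. a hypothetical GDDR6 GV104.
# The GV104 bus width and data rate below are assumptions.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

gp104_gddr5x = bandwidth_gb_s(256, 10)  # GTX 1080 at launch: 256-bit, 10 Gbps GDDR5X
gv104_gddr6 = bandwidth_gb_s(256, 14)   # assumed: 256-bit bus, 14 Gbps GDDR6

print(f"GP104 (GDDR5X): {gp104_gddr5x:.0f} GB/s")
print(f"Hypothetical GV104 (GDDR6): {gv104_gddr6:.0f} GB/s")
```

That would be 320 GB/s versus 448 GB/s, a 40% bandwidth bump from the memory alone without widening the bus.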
 
My guess is the GV104 will be quite a bit larger than GP104.

But let me indulge in some wishful thinking for a second... weren't Tesla P100 and GTX 1080 released in relatively quick succession? I think it's been over a year since the V100 release, so maybe the architectural differences between the data-center and gaming lines of Volta are greater than for Pascal. From what I remember, the ray-tracing demos we've seen so far were running on multi-GPU V100 workstations with obscene price tags. At the same time, it's not really clear how much hardware V100 dedicates to accelerating ray-tracing. If the answer is "not much", and gaming-Volta could bring that sort of image quality within consumer reach, it would certainly motivate me to upgrade, even if the performance uplift in other scenarios were unimpressive.
 