Nvidia Turing Speculation thread [2018]

676 mm^2? LOL if true.
I know this was a little while ago, but since it wasn't responded to:

I doubt they can determine the die size from that picture. The measured area is likely close to the package size, and the die is usually far smaller than the package.
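For what it's worth, this is roughly how those photo-based estimates get made: scale pixels against a feature of known physical size (usually the package), then convert the measured die rectangle to mm^2. Every number in the sketch below is a made-up placeholder, not a measurement from that picture:

Code:
# Back-of-the-envelope die-size estimate from a photo. All values here are
# hypothetical placeholders, not measurements of any real picture.
KNOWN_PACKAGE_WIDTH_MM = 45.0            # assumed physical package width (scale reference)
package_width_px = 900                   # hypothetical pixel width of the package in the photo
die_width_px, die_height_px = 520, 500   # hypothetical pixel extents of the die itself

mm_per_px = KNOWN_PACKAGE_WIDTH_MM / package_width_px
die_area_mm2 = (die_width_px * mm_per_px) * (die_height_px * mm_per_px)
print(f"estimated die area: {die_area_mm2:.0f} mm^2")   # ~650 mm^2 with these placeholders
# Measuring the package outline instead (~45 mm x 45 mm if it's square) would
# give ~2000 mm^2, which is how "die size" numbers end up absurdly large.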
 
One thing I'm really interested in with this generation: will nVidia make the shift to supporting DisplayPort 1.2a's Adaptive Sync? If they ever had any designs on doing this, this next generation would be the one to do it. It would allow all FreeSync monitors to work with nVidia GPUs as well.

The main benefit of G-Sync over FreeSync is that G-Sync displays must meet rigid quality standards. nVidia could easily keep those quality standards in place for certification without requiring that displays use the hardware module. But supporting Adaptive Sync would eliminate one huge selling point that AMD cards currently have over nVidia's.

For nVidia, I can see two reasons to resist this:
1. Lock-in. Having an incompatible display standard makes it potentially more difficult to switch from nVidia to AMD.
2. Revenue from the G-Sync component.

I'd be willing to bet that (2) is a minor contributor to the overall argument. Those components require a lot of investment, after all. And they can still market them for high-end displays which support features that Adaptive Sync displays simply can't support.

Lock-in makes more sense as an organizational argument, but even that argument is pretty weak when it eliminates lock-in for not only nVidia but AMD as well. Even though AMD is a much smaller fraction of the overall GPU market, FreeSync displays are far more common than G-Sync displays. Thus, I would tend to think that nVidia has more to gain from eliminating lock-in by supporting Adaptive Sync than not.

So overall I suspect that there are good reasons to start supporting Adaptive Sync (and continue with G-Sync for high-end displays). I'll be curious to see if they do so.
 
My guess is Nvidia enables Adaptive Sync over HDMI only, where it would conflict less with G-Sync, and limits the refresh range to something more suitable for TVs. Lock it down by vendor or certified model. As for rigid standards, FreeSync HDR technically does that as well, but Nvidia can't get royalties from that. They could make their own version of Adaptive Sync certification, though, and try to charge for it. G-Sync will disappear quickly if they universally enable adaptive sync. I doubt Nvidia is relevant enough for Samsung and other large display makers to bother with a license on all displays. It might be viable for higher-tier models, however.
 
G-Sync will disappear quickly if they universally enable adaptive sync.

If G-Sync is demonstrably better than Adaptive Sync it would live on. If it isn't then, yes, it would quickly die once NVidia supported Adaptive Sync.

If quality was important, they could still have NVidia certified Adaptive Sync monitors. At that point it'd be up to the user whether to use a certified display or not. Heck, I have an Adaptive Sync monitor from Korea that isn't AMD FreeSync certified and it worked fine when I tried it with my 290 a couple years back.

Regards,
SB
 
My guess is Nvidia enables Adaptive Sync over HDMI only, where it would conflict less with G-Sync, and limits the refresh range to something more suitable for TVs. Lock it down by vendor or certified model. As for rigid standards, FreeSync HDR technically does that as well, but Nvidia can't get royalties from that. They could make their own version of Adaptive Sync certification, though, and try to charge for it. G-Sync will disappear quickly if they universally enable adaptive sync. I doubt Nvidia is relevant enough for Samsung and other large display makers to bother with a license on all displays. It might be viable for higher-tier models, however.
Sort of. AMD is moving in this direction with FreeSync 2, but there really wasn't any significant certification bar for monitor manufacturers to claim FreeSync support originally. AMD tries to manage the chaos that situation creates by maintaining a page which lists supported monitors (and their features):
https://www.amd.com/en/products/freesync-monitors

Makers of G-Sync displays have to clear their displays with nVidia first, and they have to adhere to much stricter quality standards. nVidia could retain that arrangement in the future through a certification program, even if the displays only support standards-compliant Adaptive Sync rather than G-Sync. The proprietary hardware is part of the G-Sync picture, but nVidia uses that proprietary hardware as a wedge for quality control. That means there is a "G-Sync tax", but also that there's no way to get a really cheap G-Sync display.

My hope is that nVidia will genuinely support standards-compliant Adaptive Sync, with G-Sync certification still existing in two forms:
1) Quality bar to ensure the Adaptive Sync-compliant display has certain critical features. Users could still use standards-compliant displays that don't meet this quality bar, but may get a warning in the control panel stating that this is an unsupported display.
2) Continued hardware sales for high-end displays (that is, displays similar in concept to this one: https://www.asus.com/us/Monitors/ROG-SWIFT-PG27UQ/). Who knows? They might try to break into monitor hardware more generally. That would be an interesting way for nVidia to diversify, though it's a high-volume, low-margin space, so they may just stick to the high end.
 
If G-Sync is demonstrably better than Adaptive Sync it would live on. If it isn't then, yes, it would quickly die once NVidia supported Adaptive Sync.
Better yes, but is the cost worthwhile?

If quality was important, they could still have NVidia certified Adaptive Sync monitors. At that point it'd be up to the user whether to use a certified display or not. Heck, I have an Adaptive Sync monitor from Korea that isn't AMD FreeSync certified and it worked fine when I tried it with my 290 a couple years back.
I don't see quality being the issue as much as protecting the revenue stream. Any gamer going with Adaptive Sync instead of G-Sync would be lost revenue. The only catch would be offsetting a potential loss in GPU sales from customers choosing AMD just for Adaptive Sync support.
 
Which standards? What do you mean?
This article goes into a fair amount of detail:
https://www.techspot.com/article/1454-gsync-vs-freesync/

One example is low-framerate compensation (LFC), which all G-Sync displays support. Every adaptive-sync display has a minimum refresh rate it supports (typically 30-48 Hz). Low-framerate compensation repeats frames when the framerate drops too low, keeping the display's refresh within the supported range. Failing to do this causes serious problems when framerates drop below that minimum, and it was a real issue for early FreeSync displays (and probably still is for cheap ones).
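To make that concrete, here's a toy sketch of the frame-repetition idea behind LFC. The panel range and the multiplier policy are illustrative assumptions, not how any particular monitor or driver actually implements it:

Code:
# Toy model of low-framerate compensation: when the game's frame rate falls
# below the panel's minimum refresh, show each frame multiple times so the
# panel keeps refreshing inside its supported variable range.
PANEL_MIN_HZ = 48.0   # assumed lower bound of the panel's VRR range
PANEL_MAX_HZ = 144.0  # assumed upper bound

def lfc_repeats(game_fps):
    """Return (repeats per frame, resulting panel refresh in Hz)."""
    repeats, refresh = 1, game_fps
    # Keep multiplying the frame until the effective refresh is back in range.
    while refresh < PANEL_MIN_HZ and game_fps * (repeats + 1) <= PANEL_MAX_HZ:
        repeats += 1
        refresh = game_fps * repeats
    return repeats, refresh

for fps in (100, 40, 20):
    n, hz = lfc_repeats(fps)
    print(f"{fps} fps -> each frame shown {n}x, panel refreshes at {hz:.0f} Hz")

With these placeholder numbers, 40 fps gets shown twice per frame for an 80 Hz refresh and 20 fps three times for 60 Hz, so the panel never has to drop below its minimum.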

Part of the discrepancy just comes from the fact that the hardware that nVidia sells for use in these monitors supports these features, but as I understand it they also require a certain quality bar to be met for the display's other components.

Anyway, I really do hope that we move away from the proprietary hardware, but retain a certification system (similar to AMD's FreeSync 2).
 
I'm so tired of waiting. I just need to know if the new cards will have more VRAM than the current generation. If not, I'll just buy a used 1080 Ti, put it under water, and live with that until whatever comes after Volta/Turing/Ampere/whatever this generation is called.
 
I'm so tired of waiting. I just need to know if the new cards will have more VRAM than the current generation. If not, I'll just buy a used 1080 Ti, put it under water, and live with that until whatever comes after Volta/Turing/Ampere/whatever this generation is called.

No kidding. I thought we would have had new cards 12 months ago. At this point, my expectations for increased performance are fairly high, especially if the rumored price increases are true. If NVIDIA fails to deliver, either through sub-par performance gains or excessive pricing, I may just sit out until the next generation of GPUs. I've waited this long; what's a little longer...
 
Vega was the great white hope, the dream, the crusher of Nvidia! Turns out it was just late. AMD must have shat their pants when Nv launched the 1080 way back in May 2016.

But I digress. No doubt a second-hand 1080 Ti would be a great buy; the current complaint is that Pascal is overpriced, yet GP102 may well go EOL unanswered.
 
I'm so tired of waiting. I just need to know if the new cards will have more VRAM than the current generation. If not, I'll just buy a used 1080 Ti, put it under water, and live with that until whatever comes after Volta/Turing/Ampere/whatever this generation is called.
I'm sure they will. 16GB would seem to be reasonable at this point.
 
Compared to the past, the next generation isn’t later than usual.

We’re currently at 25 months between gp104 and g?1xx.

The time between gk104 and gm204 was 28 months. Gm204 to gp104 was 21 months, and from gf100 to gk104 was 24 months (though it would have been much longer if gf100 hadn't been seriously delayed).

In a way, the current cycle is very unusual, with only around 15 months between gp100 and gv100, despite the significant architectural changes between the two.
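If anyone wants to sanity-check those gaps, a throwaway script like this does it. I'm using approximate launch months for the first consumer card on each chip, so depending on whether you count announcement or retail availability the numbers can shift by a month or two from the figures above:

Code:
from datetime import date

# Approximate launch months of the first consumer card on each chip.
launches = {
    "GF100": date(2010, 3, 1),   # GTX 480
    "GK104": date(2012, 3, 1),   # GTX 680
    "GM204": date(2014, 9, 1),   # GTX 980
    "GP104": date(2016, 5, 1),   # GTX 1080
}

def months_between(a, b):
    return (b.year - a.year) * 12 + (b.month - a.month)

names = list(launches)
for prev, cur in zip(names, names[1:]):
    gap = months_between(launches[prev], launches[cur])
    print(f"{prev} -> {cur}: {gap} months")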
 
Yes, the timing is not unusually different from before. The only difference from earlier generations is that, starting with Maxwell, Nvidia cancelled the refresh generation. But those small incremental bumps were pretty useless anyway.

The only reason the GP100-to-GV100 timeframe is so short was Summit/Sierra. They had their contracts and needed to fulfill them. This might also have pushed the desktop parts back a bit, as bringing up two big chips in such a short timeframe is very resource intensive.

I'm very curious what the next gen brings. Turing even taped out 18 months after Volta, if we take erinyes' comment into account. Its timeframe is closer to Ampere than to Volta, so it might already have features from Ampere.
 
Compared to the past, the next generation isn’t later than usual.

We’re currently at 25 months between gp104 and g?1xx.

The time between gk104 and gm204 was 28 months. Gm204 to gp104 was 21 months, and from gf100 to gk104 was 24 months (though it would have been much longer if gf100 hadn't been seriously delayed).

In a way, the current cycle is very unusual, with only around 15 months between gp100 and gv100, despite the significant architectural changes between the two.

I do not fully agree with your assessment that the current cycle is "normal" compared to recent cycles. The Kepler to Maxwell cycle took longer than usual because TSMC's 28nm node was delayed. Maxwell to Pascal took longer than usual as well, thanks to everyone skipping the poor-performing 20nm node to wait for 16nm. The common cause then was manufacturing, a factor which is outside Nvidia's control. For the current generation we do not have the same problem. 12nm is the rumored process and has been available for quite some time.

The most likely cause, given all the market factors (crypto crap, lack of competition from RTG) is that NV is just plain sitting on the next gen.
 
I do not fully agree with your assessment that the current cycle is "normal" compared to recent cycles. The Kepler to Maxwell cycle took longer than usual because TSMC's 28nm node was delayed. Maxwell to Pascal took longer than usual as well, thanks to everyone skipping the poor-performing 20nm node to wait for 16nm. The common cause then was manufacturing, a factor which is outside Nvidia's control. For the current generation we do not have the same problem. 12nm is the rumored process and has been available for quite some time.

The most likely cause, given all the market factors (crypto crap, lack of competition from RTG) is that NV is just plain sitting on the next gen.

No, Maxwell and Kepler were on the same node and still had a very long time between them. You could say that the redesign of Maxwell from 20nm back to 28nm took some time, but then you could make the same argument for Turing, which might have been planned for 10nm first. Two years is the standard time between generations. Sometimes a bit less, sometimes a bit more.

No company ever sits on finished products for a long time. It just makes zero sense. Maybe they delayed it 2-3 months because of mining, but definitely not more. Sitting on a finished product just means losing money.
 
I don't think they would sit on it. If they did, leaks would surely occur. They may have felt they had more time to tweak for improved yields or something, though. I do hope they support adaptive sync. If they do, I will buy it. If not, I will probably just wait at this point. My desire for good graphics cards has mostly been crushed by now.
 
The Kepler to Maxwell cycle took longer than usual because TSMC's 28nm node was delayed.
They are both using 28nm... Did you mean GF100 to GK104? If so, I don't remember 28nm being delayed, to be honest. 28nm was very much a no-drama node.

Maxwell to Pascal took longer than usual as well, thanks to everyone skipping the poor-performing 20nm node to wait for 16nm.
Given that 20nm was always going to be an ugly duckling anyway that didn’t have a lot going for it, isn’t it more likely that Nvidia never planned to use 20nm at all?

The common cause then was manufacturing, a factor which is outside Nvidia's control. For the current generation we do not have the same problem. 12nm is the rumored process and has been available for quite some time.
Kepler and Maxwell had huge architectural improvements compared to their predecessors. You can't just ignore those. If the next gen is 12nm, then they'll need something compelling for people to buy it. (Of course, Volta is a very attractive architecture, so I expect that to be a major part of it.)

The most likely cause, given all the market factors (crypto crap, lack of competition from RTG) is that NV is just plain sitting on the next gen.
When the next gen was on the drawing board, Nvidia didn’t know AMD would decide to stop showing up for a while. They had already started the process of investing billions. Furthermore, you don’t develop something unless it has a clear benefit over your previous generation.

If you already have something that’s better anyway, why would you not release it? The lack of competition doesn’t suddenly invalidate those new benefits.

This is even more so with Navi being 7nm: this could compress the useful lifetime of the next gen.
 