> Useless prerelease hype clickbait articles are ramping up now, I see.

HWiNFO isn't one of those, though.
I know this was a little while ago, but since it wasn't responded to:

> 676 mm^2? LOL if true.
G-Sync will disappear quickly if they universally enable adaptive sync.
Sort of. AMD is moving in this direction with FreeSync 2, but there really wasn't any significant certification bar for monitor manufacturers to claim FreeSync support originally. AMD tries to manage the chaos that situation creates by maintaining a page which lists supported monitors (and their features):

My guess is Nvidia enables Adaptive Sync over HDMI only where it would be less in conflict with G-Sync, and limits the refresh range to something more suitable for TVs. Lock it down by vendor or certified model. As for rigid standards, FreeSync HDR technically does that as well, but Nvidia can't get royalties from that. They could make their own version of Adaptive Sync certification, though, and try to charge for it.

> G-Sync will disappear quickly if they universally enable adaptive sync.

I doubt Nvidia is relevant enough for Samsung and other large display makers to bother with a license on all displays. It might be viable for higher-tier models, however.
...and they have to adhere to much stricter quality standards.
Better, yes, but is the cost worthwhile?

If G-Sync is demonstrably better than Adaptive Sync, it would live on. If it isn't, then yes, it would quickly die once Nvidia supported Adaptive Sync.
I don't see quality being the issue as much as protecting the revenue stream. Any gamer choosing Adaptive Sync over G-Sync would be lost revenue. The only catch would be offsetting a potential loss in GPU sales from customers choosing AMD just for Adaptive Sync support.

If quality was important, they could still have Nvidia-certified Adaptive Sync monitors. At that point it'd be up to the user whether to use a certified display or not. Heck, I have an Adaptive Sync monitor from Korea that isn't AMD FreeSync certified, and it worked fine when I tried it with my 290 a couple of years back.
> Which standards? What do you mean?

This article goes into a fair amount of detail:
I'm so tired of waiting. I just need to know if the new cards will have more VRAM than the current generation. If not, I'll just buy a used 1080 Ti, put it under water, and live with that until whatever comes after Volta/Turing/Ampere/whatever this generation is called.
I'm sure they will. 16GB would seem to be reasonable at this point.
Compared to the past, the next generation isn’t later than usual.
We’re currently at 25 months between GP104 and G?1xx.
The time between GK104 and GM204 was 28 months, GM204 to GP104 was 21 months, and GF100 to GK104 was 24 months (though that last gap would have been much longer if GF100 hadn’t been seriously delayed).
In a way, the current cycle is very unusual, with only around 15 months between GP100 and GV100, despite the significant architectural changes between the two.
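For what it's worth, here's a quick back-of-the-envelope check on those gaps. This is just a minimal Python sketch using approximate launch/announce dates I'm assuming from memory (they're not from this thread), and depending on whether you count the announce or the retail date, each figure can shift by a couple of months relative to the numbers above:

```python
from datetime import date

# Approximate launch/announce dates (assumptions, not taken from this thread);
# announce vs. retail availability can shift each figure by a month or two.
launches = {
    "GF100": date(2010, 3, 26),  # GeForce GTX 480
    "GK104": date(2012, 3, 22),  # GeForce GTX 680
    "GM204": date(2014, 9, 18),  # GeForce GTX 980
    "GP104": date(2016, 5, 27),  # GeForce GTX 1080
    "GP100": date(2016, 4, 5),   # Tesla P100 announcement
    "GV100": date(2017, 5, 10),  # Tesla V100 announcement
}

def months_between(a: date, b: date) -> int:
    """Whole calendar months from date a to date b."""
    return (b.year - a.year) * 12 + (b.month - a.month)

pairs = [("GF100", "GK104"), ("GK104", "GM204"),
         ("GM204", "GP104"), ("GP100", "GV100")]
for older, newer in pairs:
    gap = months_between(launches[older], launches[newer])
    print(f"{older} -> {newer}: ~{gap} months")

# "Currently at 25 months": assuming mid-2018 as the time of writing.
print(f"GP104 -> today: ~{months_between(launches['GP104'], date(2018, 6, 30))} months")
```

Either way it comes out to roughly two years between consumer generations, with GP100 to GV100 as the short outlier.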
I do not fully agree with your assessment that the current cycle is "normal" compared to recent cycles. The Kepler to Maxwell cycle took longer than usual because TSMC's 28nm node was delayed. Maxwell to Pascal took longer than usual as well, thanks to everyone skipping the poor-performing 20nm node to wait for 16nm. The common cause then was manufacturing, a factor which is outside Nvidia's control. For the current generation we do not have the same problem. 12nm is the rumored process and has been available for quite some time.
The most likely cause, given all the market factors (crypto crap, lack of competition from RTG), is that NV is just plain sitting on the next gen.
> The Kepler to Maxwell cycle took longer than usual because TSMC's 28nm node was delayed.

They are both using 28nm... Did you mean GF100 to GK104? If so: I don’t remember 28nm being delayed, to be honest. 28nm was very much a no-drama node.
> Maxwell to Pascal took longer than usual as well, thanks to everyone skipping the poor-performing 20nm node to wait for 16nm.

Given that 20nm was always going to be an ugly duckling that didn’t have a lot going for it anyway, isn’t it more likely that Nvidia never planned to use 20nm at all?
> The common cause then was manufacturing, a factor which is outside Nvidia's control. For the current generation we do not have the same problem. 12nm is the rumored process and has been available for quite some time.

Kepler and Maxwell had huge architectural improvements compared to their predecessors. You can’t just ignore those. If the next gen is 12nm, then they’ll need something compelling for people to buy it. (Of course, Volta is a very attractive architecture, so I expect that to be a major part of it.)
> The most likely cause, given all the market factors (crypto crap, lack of competition from RTG), is that NV is just plain sitting on the next gen.

When the next gen was on the drawing board, Nvidia didn’t know AMD would decide to stop showing up for a while. They had already started the process of investing billions. Furthermore, you don’t develop something unless it has a clear benefit over your previous generation.