Nvidia Pascal Announcement

You have to assume that Nvidia picked the stock clock for a reason, and that's most likely a sweet spot for efficiency. While +30% still appears plausible, it's certainly not going to happen at the same efficiency level.

I'm not sure that the chosen clocks are in the sweet spot for efficiency.

In terms of TDP (W) / billion transistors:
980: 165/5.2 = 32
1080: 180/7.2 = 25
980 Ti: 250/8.0 = 31

GP100 is compute-only, uses HBM, has significant I/O, and is not fully enabled, but it's included for reference:
GP100: 300/15.3 = 20
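
A minimal sanity check of those ratios (the TDPs and transistor counts are the published figures quoted above):

```python
# TDP (W) per billion transistors, using the published figures quoted above.
cards = {
    "980":    (165, 5.2),
    "1080":   (180, 7.2),
    "980 Ti": (250, 8.0),
    "GP100":  (300, 15.3),  # compute part, HBM, not fully enabled; reference only
}

for name, (tdp, btrans) in cards.items():
    print(f"{name:7s} {tdp}/{btrans} = {tdp / btrans:.0f} W per B transistors")
```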

It's true that even though the process promises 2x better power efficiency at the same circuit performance, there can be confounding factors like memory and other non-ASIC power consumption.
I tried to adjust for the memory subsystem. Using some GDDR5 numbers from Fury's memory comparison, roughly in line with earlier estimates that memory takes 20-30% of the power budget (I split the difference), I used the Ti's wider GDDR5 bus to derive a cost of 63 W for a 384-bit interface.
The first three, with their GDDR5-ish interfaces adjusted for width, come out to 24, 19, 23. This assumes the GDDR5X bus is closer to the 980's 256-bit GDDR5 bus in power consumption than to the 980 Ti's 384-bit bus. Using the latter's value puts the 1080 at a better ~16, although that might not be flattering for GDDR5X.
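
Spelled out as a sketch, under the assumptions above (63 W for a 384-bit GDDR5 interface, power scaling linearly with bus width, GDDR5X treated like 256-bit GDDR5):

```python
# Subtract an estimated memory-interface cost before dividing by transistor count.
BUS_COST_384 = 63.0  # W, derived above for the 980 Ti's 384-bit GDDR5 bus

def interface_watts(bus_bits):
    # Assumption: interface power scales linearly with bus width.
    return BUS_COST_384 * bus_bits / 384

cards = {
    # name: (TDP in W, billion transistors, bus width in bits)
    "980":    (165, 5.2, 256),
    "1080":   (180, 7.2, 256),  # GDDR5X assumed comparable to 256-bit GDDR5
    "980 Ti": (250, 8.0, 384),
}

for name, (tdp, btrans, bus) in cards.items():
    adjusted = (tdp - interface_watts(bus)) / btrans
    print(f"{name:7s} ({tdp} - {interface_watts(bus):.0f}) / {btrans} = {adjusted:.0f}")

# Pessimistic case: charge the 1080's GDDR5X bus the full 384-bit rate.
print(f"1080 (pessimistic GDDR5X): {(180 - BUS_COST_384) / 7.2:.0f}")
```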
 
Sure, the transition to a new manufacturing process was way overdue. Still, I have higher expectations for Volta, when the 16nm FF process should be better utilized and more mature.
16FF is already pretty mature, and Pascal is far from conservative in terms of perf/W and perf/mm², achieving both at the same time. Remember the TSMC 16FF+ page we used to reference here? The one that talked about X% performance increase OR Y% power decrease. The 1080 is not quite AND, but it seems to go a long way toward it if Nvidia's stated numbers hold up.
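
To make the OR-vs-AND point concrete, a back-of-the-envelope sketch (the 1.6x performance factor here is purely an illustrative assumption, not a measured number; the TDPs are the official ones):

```python
# Hypothetical: suppose the 1080 is ~1.6x a 980 at their official TDPs.
perf_gain = 1.6          # illustrative assumption, not a benchmark result
power_ratio = 180 / 165  # official TDPs: 1080 vs 980

print(f"power up:    {(power_ratio - 1) * 100:+.0f}%")   # +9%
print(f"perf/W gain: {perf_gain / power_ratio:.2f}x")    # ~1.47x
# More performance AND better perf/W at once, rather than the pure
# either/or framing of TSMC's "X% perf OR Y% power" numbers.
```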

What expectations do you have about Volta?
I have none, other than that it will be faster with better power efficiency. But I don't think we'll see the kind of improvements we saw with Maxwell or Pascal. You can only go for the kill so many times in terms of architectural improvement (diminishing returns and all that), and the transition to FinFET is behind us now.
 
GDDR5X defaults to 1.35 V, GDDR5 to 1.5 V. Maybe factor that in? Quadratically, as for ASICs?
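
If I/O power does scale roughly with V² (a first-order assumption, as for core logic), the lower GDDR5X voltage alone would look like this, reusing the 42 W 256-bit estimate from earlier:

```python
# First-order assumption: interface power scales with voltage squared.
V_GDDR5, V_GDDR5X = 1.5, 1.35

scale = (V_GDDR5X / V_GDDR5) ** 2
print(f"quadratic scale factor: {scale:.2f}")   # 0.81

gddr5_256bit_w = 42.0  # W, the 63 W / 384-bit estimate scaled to 256-bit
print(f"256-bit GDDR5X estimate: {gddr5_256bit_w * scale:.0f} W")  # ~34 W
```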
 
According to the table, it isn't. The overclocked 1080 is 3x as fast, but stock to stock it is about 2.43x as fast as a 970.

True, but I intend to get an OC version :) And compared to my current 670, even a stock 970 is like lightning.
 
So one real-world game performance result we have for the 1080, albeit still just summary info, goes back to the announcement last week.
They showed Doom, obviously using beta drivers, running with Vulkan and the game on absolute top settings (Nightmare), ranging from 130 fps to a brief peak near 190 fps.
pcgameshardware has done a recent review of Doom, and the results are interesting even at 1920x1080 (which was the presentation's setting as well); they used the setting below Nightmare, so less shadow detail.
In their test, an AIB 980 Ti (Palit Super Jetstream) had a minimum of 125 fps and an average of 158.7 fps.

So that is a surprising result. OK, the 1080 was using Vulkan, but considering the optimisation benefits have yet to translate into a healthy boost in DX12 for NVIDIA, this still looks impressive (or at least Vulkan is working better than DX12, lol), especially as the 1080 FE was on the very top setting while the 980 Ti was a notch below, and the 1080 FE was also on beta drivers.
Fingers crossed pcgameshardware will repeat their test once the Vulkan patch is rolled out for a better comparison.
http://www.pcgameshardware.de/Doom-2016-Spiel-56369/Specials/Benchmark-Test-1195242/

But it looks like it should raise some eyebrows as an initial real-world result for the Founders Edition 1080; anyway, it's better information IMO than any supposed "leaked" results popping up on the internet.
Cheers
 
Average 158.7 fps. Just sayin'.
Thanks,
I was on autopilot when I typed max :)

Edit:
I should also have said that it's worth checking out the reviews/comments for just how much overhead setting everything to max/Nightmare visuals has.
Even in the pcgameshardware review, they have a quick click tab showing 76 fps for Ultra and 56 fps for Nightmare - a quick and dirty comparison they provided just to show how much it can hurt.
According to the review, max Nightmare settings also seem to need over 6 GB of VRAM.

Cheers
 
Reading this above, they say the 100% baseline was already running at ~1.85 GHz.
Amazingly, the 14% faster 2.11 GHz clock results in a 24% speedup, with no need even for overclocked memory.

Hmmm, according to that link, it's running 381 MHz faster than the non-overclocked one, or ~22% faster on the core clock. Still impressive to get 24% more performance somehow from a 22% core overclock. Quantum physics? :p I'm taking it with a huge grain of salt. :) It's even less believable if it truly was only 14% higher.
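
For reference, the arithmetic both ways (a sketch, assuming the 1080's 1733 MHz advertised boost clock as the baseline for the +381 MHz figure, and the ~1.85 GHz sustained clock mentioned above):

```python
# Two ways to read the overclock, depending on which baseline you pick.
boost_mhz = 1733         # advertised boost clock
sustained = 1850         # ~1.85 GHz the stock card reportedly ran at
oc_mhz    = 1733 + 381   # = 2114 MHz, the demoed overclock

print(f"vs advertised boost: +{(oc_mhz / boost_mhz - 1) * 100:.0f}%")  # ~22%
print(f"vs sustained clock:  +{(oc_mhz / sustained - 1) * 100:.0f}%")  # ~14%
# Either way, the reported 24% speedup is superlinear vs the core clock.
```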

Regards,
SB
 
"We have expanded the capacity of our filter network and optimized the PCB's power delivery for low impedance. As a result, we improved the efficiency of the voltage supply by 6 percent compared to the GTX 980 and reduced the amplitude of the voltage spikes from 209 mV to 120 mV, to allow better overclockability."
So a portion of the better efficiency isn't due to on-die optimizations, but to an optimized and horizontally scaled-up voltage regulator setup with lower impedance, and hence smaller voltage swings and less overall noise.
(~10% relative gains in efficiency? Possibly more if this stabilized power supply has knock-on effects on-die, as the plotted efficiency appears to be only the voltage regulator's efficiency.)

The rest is just praising the optimized airflow of the FE's radiator design at the expense of shielding everything, and reasoning about the up-charge.
 
Still impressive to get 24% more performance somehow from a 22% core overclock. Quantum physics? :p I'm taking it with a huge grain of salt. :)

A huge grain of salt is indeed appropriate. Running that benchmark 1.5x faster than the Titan X, all with memory at 320 GB/s vs. 336 GB/s.
Questionable, unless there is again a very big improvement in framebuffer compression.
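
The arithmetic behind that skepticism, as a quick sketch using the numbers quoted above:

```python
# 1.5x the performance with slightly less raw memory bandwidth.
perf_ratio = 1.5    # claimed speedup vs Titan X in that benchmark
bw_1080    = 320.0  # GB/s
bw_titanx  = 336.0  # GB/s

bw_ratio = bw_1080 / bw_titanx
print(f"bandwidth ratio:           {bw_ratio:.2f}x")               # 0.95x
print(f"implied effective-BW gain: {perf_ratio / bw_ratio:.2f}x")  # ~1.6x
# ~60% more work per byte of bandwidth - hence the suspicion that only a
# big step in framebuffer compression could explain it.
```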
 
First waterblock compatible with the 1080, from BYKSI (less than $100):

[Image: BYSKI GeForce GTX 1080 waterblock]

http://videocardz.com/59916/byksi-announces-first-waterblock-for-geforce-gtx-1080

and on a side note, custom watercooled 1080 boards will be announced at Computex with 2.5 GHz clocks :runaway::runaway::runaway:
 
That's weird. Any chance the API is enumerating a single TPC as a multiprocessor, since Pascal pairs two SMs into a TPC?
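
One way to check would be to query what the driver actually enumerates; a hedged sketch using pycuda (assuming a CUDA-capable setup, and assuming 20 SMs is the right expectation for GP104):

```python
# Query how many "multiprocessors" the CUDA driver reports for device 0.
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)
sm_count = dev.get_attribute(cuda.device_attribute.MULTIPROCESSOR_COUNT)
print(f"{dev.name()}: {sm_count} multiprocessors reported")

# A GTX 1080 (GP104, 2560 cores) should report 20 SMs; seeing half that
# would suggest the API counts TPCs (SM pairs) rather than individual SMs.
```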
 