Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

So is this only useful for games that don't have framerate caps?
 
Does anyone have any idea what "SILK Smoothness" is? It's displayed in the video as one of the Nvidia control panel options (at 10:53).
 
Full list, thx to vcards

NVIDIA GeForce GTX 1070 Reviews:

NVIDIA GeForce GTX 1070 Video Reviews:
 
Not so long ago, some took it as gospel that HBM was The Only True Way for GPUs to move forward. Today, we see a GPU with just 256 GB/s match or outperform the previous top end, with what was considered a puny 336 GB/s back then. And with only 15 out of 20 SMs enabled, that's not even scraping the bottom of the barrel.

What a difference a year makes...

I think it's safe to stick with the prediction that next-generation midrange GPUs will continue to soldier on with GDDR5(X). Even the high end still has a 40% upside going from 10 to 14 Gbps.
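For anyone who wants to sanity-check those figures, the bandwidth numbers fall straight out of pin speed times bus width; a minimal Python sketch (the helper name is my own):

```python
# Memory bandwidth from per-pin data rate and bus width:
# GB/s = pin speed (Gbps) * bus width (bits) / 8 bits per byte.
def bandwidth_gb_s(pin_speed_gbps: float, bus_width_bits: int) -> float:
    return pin_speed_gbps * bus_width_bits / 8

print(bandwidth_gb_s(8, 256))   # GTX 1070: 8 Gbps GDDR5, 256-bit   -> 256.0 GB/s
print(bandwidth_gb_s(7, 384))   # 980 Ti / Titan X: 7 Gbps, 384-bit -> 336.0 GB/s
print(bandwidth_gb_s(10, 256))  # GTX 1080: 10 Gbps GDDR5X, 256-bit -> 320.0 GB/s
# The high-end upside mentioned above: 10 -> 14 Gbps on the same bus.
print(bandwidth_gb_s(14, 256) / bandwidth_gb_s(10, 256) - 1)  # 0.4 -> 40%
```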

Delta colour compression is the gift that just keeps on giving. I'm impressed by how effective it is, and curious as to why it wasn't implemented earlier. Was it too costly in silicon for an era of relatively abundant bandwidth and expensive transistors?
I think that, in the past, high BW for the same price could be taken for granted. Necessity is the mother of invention...
 
Reading some reviews, the 1070 is close to but slightly behind the Titan X in games, more so at high resolution.
Remarkably, it has a complete GPC disabled, meaning 3 triangles and 48 pixels per clock (vs. 6 and 96 for GM200).
 

So:
Stock: 1070 > 980 Ti
OC: 980 Ti > 1070

The OC he used on the 980 Ti is pretty easy to achieve unless you have a pure dud.

What it shows me is that the 1070 is really just a fantastic card! Not just for the price, but overall. Can't wait for the 1080 Ti.

He also warmed the cards up (heat-soaked them) for 20 minutes before testing, so that's great to see. Hope that becomes a trend for all testers.
 
Can someone tell whether Computerbase used the EVGA Precision OC tool or did this manually?
It would be interesting to know the maximum voltage-to-frequency Boost 3.0 profile that can be set in EVGA Precision.
I understand that anyone who wants to use Afterburner needs the new beta version for it to work properly with Pascal.
http://www.guru3d.com/files-details/msi-afterburner-beta-download.html

Fingers crossed some publications will compare both utilities when trying to push the envelope of OC on the 1080/1070.

Also, to get past the 125% power target in Boost 2.0/Maxwell, did that need an unsupported firmware update or something else?

Cheers
 
Looks like the higher-end custom 1080 cards aren't getting the overclocking potential some were hoping for.
It seems that's because the GPU is limited to 1.25V and no one seems to get past that:

http://videocardz.com/60631/asus-rog-strix-geforce-gtx-1080-offers-poor-overclocking

Apart from people using LN2, no one is reaching the 2114MHz core clock that was shown during the reveal.
I guess that's why some of the GTX 1080 custom cards will have 2 or 3 different switchable BIOSes.
 
Fingers crossed some publications will compare both utilities when trying to push the envelope of OC on the 1080/1070.

If you want to see OC performance, you'd better keep an eye on this: http://www.overclock.net/t/1601288/gtx-1080-owners-club-and-leaderboard

Some members already have 1080s on water, and soon BIOS-modded cards too.
 
Are there any compact GTX 1070 SKUs? (a la that one Gigabyte 970 model)

So far, everything we've seen of the 1070 uses the exact same PCB as the 1080, though it's possible we might see one, given that the 1080/1070 have lower TDPs than the 970/980.
 
Reading some reviews, the 1070 is close to but slightly behind the Titan X in games, more so at high resolution.
Remarkably, it has a complete GPC disabled, meaning 3 triangles and 48 pixels per clock (vs. 6 and 96 for GM200).

I wonder whether NV truly disabled a GPC in the 1070 or just disabled 5 SMs. The reason: at lower resolutions (say 1080p) the GPU should be bottlenecked more by triangle throughput than by the pixel shader engine. This is one of the primary reasons we saw the 980 Ti/Titan X outperform the Fury X @ 1080p and even 1440p but begin to lose out at 4K.

Given the suggested 33% triangle-unit deficit for the 1070 vs. the 1080, and factoring in the ~3% boost clock deficit, this should produce a performance gap as large as 37% (1.33 × 1.03 ≈ 1.37). Yet when we look at performance in games, the gap aligns more often with (i.e. falls within the bounds of) the shader performance differential (25% units × 3% clocks: 1.25 × 1.03 = 1.2875, i.e. 28.75% aggregate) than with the suggested triangle performance differential. In fact, of the 12 games plus 1 synthetic benchmark tested @ 1080p, only one exceeds the expected shader-performance gap (The Division, at 31.7%). The average performance difference is 19.7%, only about half of what it could be if each 1070 had a disabled GPC.

Compare that to the 980 Ti with its 6 triangles/clock rate, supposedly 2x the 1070's per-clock rate: it should come away winning at least some benchmarks @ 1080p, but it wins precisely none (the Titan X wins exactly one benchmark vs. the 1070, again The Division, by all of 1 fps). Factoring in the clock speed differential, we would expect the following triangle throughput difference:
GM200 (980 Ti): 1075 MHz × 6 tri/clk = 6.45 Gtri/s
GP104 (1070): 1683 MHz × 3 tri/clk = 5.05 Gtri/s
difference: 27.7%

This difference should be observable at the lower resolution of 1080p, but again, it is never once observed! In fact, the Titan X, with a triangle rate identical to the 980 Ti's, actually does win one benchmark vs. the 1070, so the triangle rate cannot explain why we observe different performance results. However, if the 1070 does not in fact have a disabled GPC and has the full 4 tri/clk rate expected of GP104, that changes the picture entirely: it would have a triangle throughput advantage over GM200 rather than a deficit. In that case we would see the following triangle throughput difference (both scenarios are recomputed in the sketch below):
GM200 (980 Ti): 1075 MHz × 6 tri/clk = 6.45 Gtri/s
GP104 (1070): 1683 MHz × 4 tri/clk = 6.73 Gtri/s
difference: 4.3%
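To make the two scenarios easy to replay, here is a minimal Python sketch of the same arithmetic (the helper name is my own; clocks and per-clock rates as assumed above):

```python
# Triangle throughput from boost clock and setup rate:
# Gtri/s = clock (MHz) * triangles-per-clock / 1000.
def tri_rate_gtri_s(clock_mhz: float, tris_per_clock: int) -> float:
    return clock_mhz * tris_per_clock / 1000

gm200      = tri_rate_gtri_s(1075, 6)  # 980 Ti: 6.45 Gtri/s
gp104_cut  = tri_rate_gtri_s(1683, 3)  # 1070, one GPC disabled: ~5.05 Gtri/s
gp104_full = tri_rate_gtri_s(1683, 4)  # 1070, all setup units: ~6.73 Gtri/s

print(gm200 / gp104_cut - 1)   # ~0.277 -> 27.7% deficit vs. the 980 Ti
print(gp104_full / gm200 - 1)  # ~0.044 -> ~4.3% advantage vs. the 980 Ti
```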

Now, all that being said, I realize triangle throughput is not the only bottleneck within a given frame rendered @ 1080p; however, it should be the predominant one. I hope someone with both a 1080 and a 1070 can run the good old B3D suite and compare triangle throughput. If I'm wrong, I'm wrong. I just want to know what's going on.

Data sourced from the Guru3D review, all tests @ 1080p, high/highest settings (a quick recomputation follows the list):

Rise of the Tomb Raider DX12
1080: 131fps
1070: 108fps
difference: 21.2%

Hitman DX12
1080: 107fps
1070: 86fps
difference: 24.4%

Doom OpenGL
1080: 176fps
1070: 142fps
difference: 23.9%

FarCry Primal DX11
1080: 110fps
1070: 94fps
difference: 17%

Anno 2205 DX11
1080: 120fps
1070: 100fps
difference: 20%

Fallout 4 DX11
1080: 134fps
1070: 129fps
difference: 3.8%

GTA V DX11
1080: 159fps
1070: 152fps
difference: 4.6%

The Division DX11
1080: 108fps
1070: 82fps
difference: 31.7%

Thief DX11
1080: 125fps
1070: 104fps
difference: 20.2%

The Witcher III DX11
1080: 105fps
1070: 83fps
difference: 26.5%

Battlefield Hardline DX11
1080: 124fps
1070: 101fps
difference: 22.8%

Alien Isolation DX11
1080: 186fps
1070: 157fps
difference: 18.5%

3dmark 11 X Score
1080: 10085
1070: 8290
difference: 21.7%
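If anyone wants to replay these numbers, here is a small Python sketch that recomputes each gap and the average from the fps pairs above:

```python
# Guru3D @ 1080p: (GTX 1080, GTX 1070) results copied from the list above.
results = {
    "Rise of the Tomb Raider": (131, 108),
    "Hitman": (107, 86),
    "Doom": (176, 142),
    "FarCry Primal": (110, 94),
    "Anno 2205": (120, 100),
    "Fallout 4": (134, 129),
    "GTA V": (159, 152),
    "The Division": (108, 82),
    "Thief": (125, 104),
    "The Witcher III": (105, 83),
    "Battlefield Hardline": (124, 101),
    "Alien Isolation": (186, 157),
    "3DMark 11 X score": (10085, 8290),
}

gaps = {name: a / b - 1 for name, (a, b) in results.items()}
for name, gap in gaps.items():
    print(f"{name}: {gap:.1%}")
print(f"average: {sum(gaps.values()) / len(gaps):.1%}")  # 19.7%, matching the figure above
```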
 
I wonder whether NV truly disabled a GPC in the 1070 or just disabled 5 SMs.
Nvidia quite explicitly (and willingly, I might add) said so. No maneuvering around and hiding in the mist.
That being said, triangle setup rate should not, outside explicitly directed tests, limit the general performance of any card, GeForce or Radeon; we are talking about billions of triangles here. What might fit better than geometry (which is a function of the SMs at any rate), at least in part, is rasterizer performance (and thus pixel fill), which indeed is cut by a third, plus the clock difference. And of course tessellation, which is limited by things other than pure triangle rate in more recent Radeon cards.

I hope someone with both a 1080 and a 1070 can run the good old B3D suite and compare triangle throughput. If I'm wrong, I'm wrong. I just want to know what's going on.
Here you go:
http://www.pcgameshardware.de/Nvidi.../GTX-1070-Benchmarks-Test-Preis-1196360/2/#a2
 
Nvidia quite explicitly (and willingly, I might add) said so. No maneuvering around and hiding in the mist.

Doing some more research now (looking at another review that used different settings in some of the same games and came up with a greater performance advantage for the 1080).



Excellent! Thank you.
 
Using Hardware Unboxed's 1070 review, we do see a greater performance differential @ 1080p, more in line with expectations for a disabled GPC.

Quick examples:

Battlefield 4:
1080: min: 127 avg: 152
1070: min: 98 avg: 116
difference: min: 29.6% avg: 31%

Crysis 3:
1080: min: 83 avg: 100
1070: min: 55 avg: 67
difference: min: 50.9% avg: 49.3%

Anno 2205:
1080: min: 97 avg: 129
1070: min: 70 avg: 86
difference: min: 38.6% avg: 50%

Of course, now we seem to have the opposite problem, where a GPU that should see at most a 37% performance advantage over another sees ~50% in some tests. Not sure what's going on there; a driver issue with the 1070 perhaps, or can anyone think of a likely architectural limitation?
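As a quick check, here is a sketch comparing those gaps against the two ceilings discussed earlier in the thread, the shader-only cut and the fully disabled GPC, both compounded with the ~3% clock deficit (the data layout is my own):

```python
# Theoretical 1080-over-1070 ceilings, as computed earlier in the thread.
shader_ceiling = 1.25 * 1.03 - 1     # ~0.2875 (the 25% units * 3% clocks figure)
gpc_ceiling    = (4 / 3) * 1.03 - 1  # ~0.373 (one of four GPCs disabled)
print(f"shader ceiling: {shader_ceiling:.1%}, GPC ceiling: {gpc_ceiling:.1%}")

# Hardware Unboxed @ 1080p: ((min 1080, min 1070), (avg 1080, avg 1070)).
hu_results = {
    "Battlefield 4": ((127, 98), (152, 116)),
    "Crysis 3": ((83, 55), (100, 67)),
    "Anno 2205": ((97, 70), (129, 86)),
}
for name, pairs in hu_results.items():
    min_gap, avg_gap = (a / b - 1 for a, b in pairs)
    verdict = "exceeds even the GPC ceiling" if max(min_gap, avg_gap) > gpc_ceiling else "within bounds"
    print(f"{name}: min {min_gap:.1%} / avg {avg_gap:.1%} -> {verdict}")
```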

Thanks CarstenS for making me take a second look at this!
 
Make sure you consider the variable boost as well. Mostly, you cannot really be sure what clock rates are being compared, and some people are really sloppy regarding their benchmarks' reproducibility and comparability.
 
So far, everything we've seen of the 1070 uses the exact same PCB as the 1080.
GDDR5X chips have 190 ball contacts, but GDDR5 chips have 170. Wouldn't that need a PCB change? Or are they using GDDR5 chips with updated packaging so the PCB can be reused for both?
 