I think that, in the past, high bandwidth for the same price could be taken for granted. Necessity is the mother of invention... Delta colour compression is the gift that keeps on giving. I'm impressed by how effective it is, and curious as to why it wasn't implemented earlier. Was it too costly in silicon for an era of relatively abundant bandwidth and expensive transistors?
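For anyone wondering what the technique actually buys: here's a toy sketch of the basic idea (my own illustration in Python, not NVIDIA's actual on-chip format), where a tile of similar pixels is stored as one anchor value plus small per-pixel deltas that fit in far fewer bits:

```python
# Toy sketch of delta colour compression -- NOT NVIDIA's real format.
# A tile of similar pixel values is stored as one anchor plus small
# deltas, which fit in far fewer bits than full 8-bit values.

def delta_encode(tile):
    """Return (anchor, deltas) for a flat list of pixel values."""
    anchor = tile[0]
    return anchor, [p - anchor for p in tile[1:]]

tile = [200, 201, 199, 200, 202, 201, 200, 198]  # smooth region
anchor, deltas = delta_encode(tile)
bits_per_delta = max(abs(d) for d in deltas).bit_length() + 1  # + sign bit

print(f"anchor={anchor}, deltas={deltas}")
print(f"{bits_per_delta} bits/delta instead of 8 bits/pixel")
```

Smooth regions like sky or UI compress dramatically, which is where the effective bandwidth gain comes from.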
I guess that's why some of the GTX 1080 custom cards will have 2 or 3 different switchable BIOSes.

Looks like the higher-end custom 1080 cards aren't getting the overclocking potential that some were hoping for.
It seems to be because the GPU is limited to 1.25 V, and no one seems to get past that:
http://videocardz.com/60631/asus-rog-strix-geforce-gtx-1080-offers-poor-overclocking
Apart from people using LN2, no one is reaching the 2114MHz core clock that was shown during the reveal.
Can someone tell whether ComputerBase used the EVGA Precision OC tool or did this manually?
It would be interesting to know the maximum voltage-to-frequency Boost 3.0 profile that can be set in EVGA Precision.
I understand that anyone who wants to use Afterburner needs the new beta version for it to work properly with Pascal.
http://www.guru3d.com/files-details/msi-afterburner-beta-download.html
Fingers crossed some publications will compare both utilities when trying to push the OC envelope on the 1080/1070.
Also, to get past the 125% power target with Boost 2.0 on Maxwell, did that require an unsupported firmware update or something else?
Cheers
Agreed; I'm just pointing out that the mainstream publications may be jumping the gun by saying it is impossible to hit that frequency, or that buyers of custom AIB cards will not be extreme OCers. If you want to see OC performance, better to keep an eye on this: http://www.overclock.net/t/1601288/gtx-1080-owners-club-and-leaderboard

Some members there already have 1080s on water, and soon BIOS-modded ones too.
Are there any compact GTX 1070 SKUs (à la that one Gigabyte 970 model)?
Reading some reviews, the 1070 is close to but slightly behind the Titan X in games, more so at high resolutions.
Remarkably, it has a complete GPC disabled, meaning 3 triangles and 48 pixels per clock (vs. 6 and 96 for GM200).
Nvidia quite explicitly (and willingly, I might add) said so. No maneuvering around and hiding in the mist.

I wonder whether NV truly disabled a GPC in the 1070 or just disabled 5 SMs. The reason being that at lower resolutions (say 1080p) the GPU should be bottlenecked more by triangle throughput than by the pixel shader engine. This is one of the primary reasons we saw the 980 Ti/Titan X outperform the Fury X at 1080p and even 1440p but begin to lose out at 4K.

Given the suggested triangle functional unit deficit of 33% for the 1070 vs. the 1080, while also factoring in the ~3% boost clock speed deficit, this should produce a performance gap as large as 37%. Yet when we look at the performance in games, the gap seems to align more often with (i.e. fall within the bounds of) the shader performance differential (25% units * 3% clocks = 28.75% aggregate) than with the suggested triangle performance differential.
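To make the arithmetic explicit, a quick sketch (the 33%, 25%, and 3% figures are the ones from my post above, simply compounded, not measured independently):

```python
# Compound a unit-count differential with a clock differential.
# The 33% triangle, 25% shader, and 3% clock figures are taken
# from the post above, not verified here.

def combined_gap(unit_deficit, clock_deficit):
    return (1 + unit_deficit) * (1 + clock_deficit) - 1

print(f"triangle-limited gap: {combined_gap(0.33, 0.03):.1%}")  # ~37%
print(f"shader-limited gap:   {combined_gap(0.25, 0.03):.2%}")  # 28.75%
```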
Here you go:

I hope someone with both a 1080 and a 1070 can run the good old B3D suite and compare triangle throughput. If I'm wrong, I'm wrong. I just want to know what's going on.
Nvidia quite explicitly (and willingly, I might add) said so. No maneuvering around and hiding in the mist.
That being said, apart from explicitly directed tests, triangle setup rate should not limit the general performance of any card, GeForce or Radeon. We are talking about billions of triangles here. What might fit better than geometry (which is a function of the SMs at any rate), at least in part, is rasterizer performance (and thus pixel fill), which indeed is cut by a third, plus the clock difference. And of course tessellation, which is limited by other things than pure triangle rate on more recent Radeon cards.
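As a rough sanity check on "cut by a third plus clock diff", a minimal sketch, assuming 16 pixels per clock per rasterizer/GPC (as on recent NVIDIA parts) and the reference boost clocks of 1733/1683 MHz:

```python
# Peak pixel fillrate, assuming 16 pixels/clock per GPC and the
# reference boost clocks; both are assumptions for illustration.

def pixel_fill_gpix(gpcs, clock_mhz):
    return gpcs * 16 * clock_mhz / 1000  # Gpixels/s

gtx_1080 = pixel_fill_gpix(4, 1733)  # ~110.9 Gpix/s
gtx_1070 = pixel_fill_gpix(3, 1683)  # ~80.8 Gpix/s

print(f"1080 advantage in pixel fill: {gtx_1080 / gtx_1070 - 1:.0%}")  # ~37%
```

which lands right on the ~37% figure discussed above.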
GDDR5X chips have 190 ball contacts, but GDDR5 chips have 170. Wouldn't that need a PCB change? Or are they using GDDR5 chips in updated packaging so the PCB can be reused for both?

So far, everything we've seen of the 1070 uses the exact same PCB as the 1080.