> 2 - DisplayPort 1.2 Certified, DisplayPort 1.3/1.4 Ready.
> http://www.geforce.com/hardware/10series/geforce-gtx-1080

This is their way of saying "we don't want to stop making money with G-Sync yet."
> Not sure what they mean by Founders Edition anyway (reference?)... Anyway, the 1070 is priced quite aggressively ($220 less than the 1080).

I think both products are very well priced. The difference with the Founders Edition may be the boost clock, since Jen-Hsun said it would have crazy overclockability... I think the demo may have been using the GTX 1080 Founders Edition with the 2114 MHz clock.
So the 2.1GHz is either an OC'ed speed or an error in the frequency reading?
> Superb Craftsmanship. Increases in bandwidth and power efficiency allow the GTX 1080 to run at clock speeds never before possible -- over 1700 MHz -- while consuming only 180 watts of power. New asynchronous compute advances improve efficiency and gaming performance. And new GPU Boost™ 3 technology supports advanced overclocking functionality.
> (Source: http://nvidianews.nvidia.com/news/a...eforce-gtx-1080)
> This is their way of saying "we don't want to stop making money with G-Sync yet."

Not at all. Adaptive Sync will always be an optional part of the DP spec: there is no reason to require it for business monitors etc. I think it's because the spec is too new for certification testing. If DP is anything like other telecom specs, a major part of certification testing is interop testing. I don't think there are DP 1.3 monitors on the market right now, let alone DP 1.4.
> EVGA PrecisionX OC
> GPU Clock: 2114 MHz
> Memory Clock: 5508 MHz
> GPU Temp: 67 °C

2.1 GHz without liquid cooling. So much for 2 GHz being a stretch...
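As a rough sketch of how the reported memory clock relates to the quoted per-pin rate: GDDR5X tools such as PrecisionX conventionally report half the effective per-pin data rate, so a 5508 MHz reading corresponds to roughly 11 Gbps. The 5508 MHz figure and the GTX 1080's 256-bit bus are from this thread; the ×2 convention and the bandwidth formula are my assumptions for illustration.

```python
# Hedged back-of-envelope, assuming the tool reports half the
# effective GDDR5X per-pin rate (the usual convention).
reported_mhz = 5508                 # PrecisionX readout from the post above
effective_gbps = reported_mhz * 2 / 1000   # per-pin data rate in Gbps

bus_width_bits = 256                # GTX 1080 memory bus width
bandwidth_gb_s = effective_gbps * bus_width_bits / 8

print(effective_gbps)               # ~11.016, i.e. the "11Gbps" figure
print(bandwidth_gb_s)               # ~352.5 GB/s at that overclock
```

This is only arithmetic on the posted readout, not a measured bandwidth figure.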
> Roughly 25% faster than 980TI.
> 2.1 GHz GPU clock, 11 Gbps memory clock.

What? Only 20% faster at almost 100% more clock rate? Is that a joke?
> What? Only 20% faster at almost 100% more clock rate? Is that a joke?

The guaranteed clock rates are about 50% higher, and the Titan X has 3072 cores vs. 2560 cores for the 1080.
> A question arises: what will the big graphics Pascal be, given the 15B transistor budget?
> Will it be 2×GP104, i.e. 80 SMs, 5120 cores, 320 TMUs?
> I'm hoping for the latter.

Wouldn't it be *a little bit* oversized? With "HBM2" and everything...
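The speculated "2×GP104" numbers above are internally consistent if the big chip uses GP100-style 64-core SMs. GP104's 2560 cores and 160 TMUs are implied by the quoted figures; the 64-cores-per-SM assumption is mine, and is why 5120 cores maps to 80 SMs rather than 2×20.

```python
# Hedged sketch of the speculated "big Pascal" as 2x GP104.
gp104_cores, gp104_tmus = 2560, 160   # per-chip figures implied by the post

big_cores = 2 * gp104_cores           # 5120 cores
big_tmus = 2 * gp104_tmus             # 320 TMUs
sms = big_cores // 64                 # 80 SMs, assuming GP100-style 64-core SMs

print(big_cores, big_tmus, sms)
```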
> Wouldn't it be *a little bit* oversized? With "HBM2" and everything...

HBM2 isn't on-die, and the HBM2 controller is reportedly smaller than a GDDR5 controller. So the die size would still be somewhat plausible when cutting down the register files compared to GP100.
> Does anyone know what this "Multi-Projection" support actually includes? Sounds like it batches geometry for stereo projection prior to rasterization, so even the 2x perf is probably just an absolute best case for a synthetic, geometry-limited scene? Probably achieves the savings only during tessellation and geometry shader evaluation?

GeForce can render the same triangle to multiple viewports without GS amplification costs. See GL_NV_viewport_array2 and ...