Nvidia Pascal Announcement

Well, the equally clocked 980ti using LN2 is about 50% faster than the 1080 in Fire Strike Extreme: 15,121 graphics score versus 10,102.
I would expect the scores to be closer, which is why I am not sure about the Fire Strike Extreme score; it comes from someone else, not the same person who did the 3DMark Performance run.
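For reference, here's the arithmetic behind that ~50% figure, using just the two graphics scores quoted above (a quick Python sanity check, nothing more):

Code:
# Ratio of the two leaked Fire Strike Extreme graphics scores quoted above
score_980ti_ln2 = 15121
score_1080 = 10102
print(f"{(score_980ti_ln2 / score_1080 - 1) * 100:.1f}% faster")  # -> 49.7% faster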
Cheers
 
Does this mean that the 1080 is probably much cheaper than the 980Ti in production (while not significantly faster), and yet Nvidia will try to sell it for a new record price?

Regardless, equipping the 1070 with GDDR5 doesn't sound right. A 192-bit memory controller for the GP104 does sound plausible, but the first step downwards would be to reduce the interface to 128-bit while still staying on GDDR5X.

12GB GDDR5X for the 1080, 8GB for the 1070.

I'm not expecting GDDR5 again before an (underclocked and heavily cut-down) 1060, which would then be back on a 192-bit interface, optionally (vendor's choice) down to 128-bit.

For a total of 6/8GB, depending on the memory configuration.

Vendors are probably going to try to equip the GP104-150 with GDDR5X as well, but I don't expect that to work all too well. Overclocked GDDR5 might still work, though, so the 1060 is the most likely candidate for early "OC" models.
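To illustrate the 6/8GB split above, here's a minimal sketch of how capacity follows from bus width, assuming the common 8 Gb (1 GB) chips with one chip per 32-bit channel; the per-chip density is my assumption, and a 12GB card would need denser chips or clamshell mode:

Code:
# VRAM capacity = (bus width / 32 bits per chip) * capacity per chip
def vram_gb(bus_bits, gb_per_chip=1):
    return (bus_bits // 32) * gb_per_chip

print(vram_gb(256))  # 8 GB on a 256-bit bus
print(vram_gb(192))  # 6 GB on a 192-bit bus -> the 6/8GB split above
print(vram_gb(128))  # 4 GB on a 128-bit bus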
 
Does this mean that the 1080 is probably much cheaper than the 980Ti in production (while not significantly faster), and yet Nvidia will try to sell it for a new record price?
Regardless, equipping the 1070 with GDDR5 doesn't sound right......
It's faster going by those scores, although the Fire Strike Extreme score doesn't sit as high in its chart as the 3DMark Performance one does.

But what to compare against?
Some want to see a clock-for-clock comparison, adjusted for the reduced core count.
It could also be debated whether a reference design should only be compared to another reference design rather than to an OC AIB card (which is already overclocked and comes with better cooling, capacitors, etc.).
The reference 980ti is pretty average compared to the AIBs.
Unfortunately there is no way to tell whether the same is the case for the 1070/1080 until we see both the reference and AIB cards benchmarked.

BTW, how are you arriving at a 192-bit bus?
Most rumours and calculations seem to come back to a 256-bit bus for the 1070/1080 (with only the 1080 getting GDDR5X) and a 192-bit bus for the 1060(ti).
It also seems the GDDR5X is the 10 Gbps version (worked out from the effective memory clock), making a 192-bit bus even more unlikely for the 1080.
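A quick sanity check of that reasoning (the 10 Gbps figure is the rumoured effective data rate, so treat these as illustrative):

Code:
# Peak memory bandwidth = bus width (bits) / 8 * effective data rate (Gbps)
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth_gbs(256, 10.0))  # 320.0 GB/s: 256-bit GDDR5X @ 10 Gbps
print(bandwidth_gbs(192, 10.0))  # 240.0 GB/s: a 192-bit bus would be a big step down
print(bandwidth_gbs(256, 8.0))   # 256.0 GB/s: 256-bit GDDR5 @ 8 Gbps (1070 rumour)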

Cheers
 
Hmm, the LN2 base clock is 1800 MHz, and the boost clocks are even higher.
Ah, good point.
Also, we do not know whether the 1080 is throttling just like the reference 980ti would.
That said, who uses a 1080 with a 3770 CPU? Bit strange, that.
At least the 3DMark Performance result was using a 5820.

Cheers
 
Does this mean that the 1080 is probably much cheaper than the 980Ti in production (while not significantly faster), and yet Nvidia will try to sell it for a new record price?

Regardless, equipping the 1070 with GDDR5 doesn't sound right......
Just to say, it's worth remembering this is meant to use the 8 Gbps Samsung GDDR5 memory rather than the previous 7 Gbps.
Cheers
 
Does this mean that the 1080 is probably much cheaper than the 980Ti in production (while not significantly faster), and yet Nvidia will try to sell it for a new record price?
16FF per-wafer cost is inevitably higher than 28nm's. Defect density is higher as well. We don't know anything yet about how many units are disabled. And there is only one GDDR5X supplier, so no dual sourcing for this part. So will the production cost be *much* lower today? I doubt it. 28nm is just a fantastic process. 12 months from now? Definitely. But the selling price may be lower then as well, due to AMD competition.
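For what it's worth, the usual first-order way to reason about that cost question is a die-yield model. A minimal sketch using the classic Poisson model; the defect densities and die areas below are made-up illustrative numbers, not actual TSMC/Nvidia figures:

Code:
import math

# Classic Poisson yield model: yield = exp(-defect_density * die_area)
def die_yield(defects_per_cm2, die_area_cm2):
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Hypothetical numbers: a smaller die on early 16FF can still yield worse
# than a big die on mature 28nm if the defect density is high enough.
print(die_yield(0.10, 6.0))  # mature 28nm, ~600 mm^2 die -> ~0.55
print(die_yield(0.40, 3.0))  # early 16FF,  ~300 mm^2 die -> ~0.30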

I haven't seen any credible rumors about pricing, so where does this "new record price" come from? And what are you comparing it against? There's quite a range between a Titan Z and a GTX 980...

But since AMD decided not to show up for another 6 months at the high end, it stands to reason that Nvidia will try to reap some benefits from that. ;)

Regardless, equipping the 1070 with GDDR5 doesn't sound right.
Why not? On the 970, they disabled 19% of the SMs and cut bandwidth by 13%. Going from GDDR5X to GDDR5 cuts bandwidth by 20%, so they can do something similar with the SMs.
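Spelling those percentages out (assuming the 13% refers to the 970's well-known 196 vs 224 GB/s memory partitioning; that reading is mine):

Code:
# GTX 970 vs 980: 13 of 16 SMs enabled
print(f"SM cut: {(1 - 13/16) * 100:.0f}%")      # 19%
# GTX 970 fast-segment bandwidth vs the full 256-bit bus (assumed reading)
print(f"BW cut: {(1 - 196/224) * 100:.1f}%")    # 12.5%, rounds to 13%
# GDDR5X 10 Gbps -> GDDR5 8 Gbps
print(f"5X -> 5 cut: {(1 - 8/10) * 100:.0f}%")  # 20%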

A 192-bit memory controller for the GP104 does sound plausible, but the first step downwards would be to reduce the interface to 128-bit while still staying on GDDR5X.
Why? And why?
 
On that same token, I must say that looking at that Kingpin world record result and drawing clock-for-clock comparisons from it is not really smart. The Kingpin card itself is a little different from other special 980Tis, having better memory chips not used anywhere else. His result is much better than those of other, even higher clocked 980Tis, so clock for clock it makes other 980Tis look bad too :) There are all kinds of tweaks you can do to get a better score in a 3DMark test.
 
On that same token, I must say that looking at that Kingpin world record result and drawing clock-for-clock comparisons from it is not really smart......
Yeah, true,
although the other guy who posted the 3DMark Performance result using a 5820 showed quite a competitive graphics result when I did a search on Kingpin in that section.
Need to set to single GPU: http://www.3dmark.com/search#/?mode...3dm11/P/1042/500000?minScore=0&gpuName=NVIDIA GeForce GTX 980 Ti K|ngp|n.
The 1080 score: http://www.3dmark.com/3dm11/11223744

Yeah, I agree the Fire Strike Extreme results do seem exotic in terms of setup, and as another mentioned earlier, the BIOS as well.
Still, the results from 3DMark 11 Performance seemed to sit better in the chain when looking specifically at the graphics score.
Although both scores are probably dubious anyway :)
The leaked ones before the AMD Fury release were far from correct, if I remember rightly.
Cheers
 
Yeah, true,
although the other guy who posted the 3DMark Performance result using a 5820 showed quite a competitive graphics result......

Haha, I was browsing through your link and wondering why I wasn't seeing Kingpin anywhere, then I realized I was looking for the user, not the card. The Extreme preset world record was done by the guy the card is named after :) and his result is much better than others' running the same card. There can be any number of reasons for this, but yeah, looking at the top 10 results in the world might not be the best comparison for some random 3DMark run with a new card.

Also, I must embarrassingly admit, after doing more research on Maxwell overclocking, that actually disabling the boost clocks is quite rare now... So the real boost clocks in that 980Ti world record were actually quite a bit higher than 1880 MHz, around 2.1 GHz :oops:, but yeah, still with a very custom BIOS and the card drawing over 1000 W.
 
I plan on using a 1070 with my stock 3770K. Don't think it will be a problem at all.
For sure, but my point was pertaining to someone having a Qual/Engineering-sample GPU and testing it with a 3770K rather than a more modern generation; if they can get hold of that card already, then it is likely they also have a current Enthusiast or i7 available on their test-development-QA bench.
Just like what we saw with the person providing the 3DMark Performance result (a 5820 with the CPU-motherboard probably left at basic settings, which might also explain the very average physics score in that test).
Although it is possible the results were faked.
Cheers
 
For sure, but my point was pertaining to someone having a Qual/Engineering-sample GPU and testing it with a 3770K rather than a more modern generation......
Maybe he brought the thing home with him and tested it on his personal PC? I dunno, there are lots of possible explanations, not the least of which is that he figured, "well, this testbed isn't in use and should suit my purpose just fine".
 
Maybe he brought the thing home with him and tested it on his personal PC?......
If you go down that road, you risk making excuses for any rumour/leak/benchmark....
TBH, having been responsible for some sensitive hardware in the past myself, I would be rather leery of taking it out of the secure lab-office when you're officially under an NDA and also responsible for it.
Cheers
 
GTX 1080 is about 5-10% more energy efficient than GTX 980, and promised to be faster than GTX 980 SLI.
 
Roughly 25% faster than the 980Ti.

2.1 GHz GPU clock, 11 Gbps memory data rate.
 
http://www.geforce.com/hardware/10series/geforce-gtx-1080


GPU Engine Specs:
NVIDIA CUDA® Cores: 2560
Base Clock (MHz): 1607
Boost Clock (MHz): 1733
Memory Specs:
Memory Speed: 10 Gbps
Standard Memory Config: 8 GB GDDR5X
Memory Interface Width: 256-bit
Memory Bandwidth (GB/sec): 320
Technology Support:
Multi-Projection: Yes
VR Ready: Yes
NVIDIA Ansel: Yes
NVIDIA SLI® Ready: Yes - SLI HB Bridge Supported
NVIDIA G-SYNC™-Ready: Yes
NVIDIA GameStream™-Ready: Yes
NVIDIA GPU Boost™: 3.0
Microsoft DirectX: 12 API with feature level 12_1
Vulkan API: Yes
OpenGL: 4.5
Bus Support: PCIe 3.0
OS Certification: Windows 7-10, Linux, FreeBSD x86
Display Support:
Maximum Digital Resolution [1]: 7680x4320@60Hz
Standard Display Connectors: DP 1.4 [2], HDMI 2.0b, DL-DVI
Multi Monitor: Yes
HDCP: 2.2
Graphics Card Dimensions:
Height: 4.376"
Length: 10.5"
Width: 2-Slot
Thermal and Power Specs:
Maximum GPU Temperature (in C): 94
Graphics Card Power (W): 180 W
Recommended System Power (W): 500 W [3]
Supplementary Power Connectors: 8-Pin

1 - 7680x4320 at 60 Hz RGB 8-bit with dual DisplayPort connectors, or 7680x4320 at 60 Hz YUV420 8-bit with one DisplayPort 1.3 connector.
2 - DisplayPort 1.2 Certified, DisplayPort 1.3/1.4 Ready.
3 - Recommendation is made based on a PC configured with an Intel Core i7 3.2 GHz processor. A pre-built system may require less power depending on system configuration.
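As a cross-check, the official 320 GB/sec figure follows directly from the listed interface width and memory speed:

Code:
# 256-bit bus / 8 bits per byte * 10 Gbps effective data rate
print(256 / 8 * 10)  # 320.0 GB/s, matching the spec sheet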
 