Nvidia Pascal Announcement

You probably need to balance efficiency vs having enough performance for enthusiasts to cycle out their older cards.

For example, I have absolutely no reason to replace my 980 Ti with a 1080 Ti if it's only 20% faster but 60% more power efficient. It means nothing. For me to spend money on a new card, I'd rather they make it 60% faster and 20% more efficient.

While reduced power, cooling and all that is great to have, they take a backseat to outright performance at the enthusiast level. Since enthusiast level = highest margins, you need to ensure your performance bump is enticing enough for people to make the jump.

I think that leaves out various optimization points that are less consumer-focused but still important from an economic and practical standpoint.
Nvidia's positioning of the product actually does make things closer to 60% faster and 20% more efficient, given that it's positioned as a successor to the GTX 980 (non-Ti).
Area-wise, it's more in that range than the die area taken up by the 980Ti.

Did Nvidia label the 1080 a Ti? I thought it was non-Ti, in the tradition of a card that someday settles into the mid-tier lead.

Risk management, given the process uncertainties and unknown competition, would also encourage a design where going all-out on area is discouraged. One can push clocks and lose efficiency later in the game to position a product, but those mm² get locked in much earlier, along with any trouble in cost or yield that might cause.

There's also a desire to leave some room to grow on what might be a long-lived node.
 
For example, I have absolutely no reason to replace my 980 Ti with a 1080 Ti if it's only 20% faster but 60% more power efficient. It means nothing. For me to spend money on a new card, I'd rather they make it 60% faster and 20% more efficient.

You don't need to replace your graphics card every generation. nvidia probably knows this as well. AFAIK, they plan their marketing efforts to target people with 2/3 year-old graphics cards, not 1 year-old ones.
 
Wouldn't it be possible for AMD or Nvidia to make a mid-range card with HBM2 and without the need for an external power connector?
 
"Quite probably" most GP104 chips will allow a 33% overclock? Wow..
Could actually be the case.

You have to assume that Nvidia picked the stock clock for a reason, and that's most likely a sweet spot for efficiency. While +30% still appears plausible, it's certainly not going to happen at the same efficiency level.

That reference board is also only half-populated with voltage regulators, plus the option to add a second 8-pin connector. So I personally must assume that perhaps 2.5 GHz or so is reachable with sufficient cooling. And while we are going to see cards that are highly overclocked by factory default, that's not going to leave much of the efficiency Nvidia likes to associate with Pascal.

You don't need to replace your graphics card every generation. nvidia probably knows this as well. AFAIK, they plan their marketing efforts to target people with 2/3 year-old graphics cards, not 1 year-old ones.
I actually do expect that upgrading from a Maxwell card to a Pascal card will make a difference this time. Maxwell isn't going to age well, and neither are any of the previous archs, given how much more flexible Pascal is in certain respects. Sooner or later this means that driver- and application-side optimizations for the previous archs are going to be phased out, and this time with a much worse impact than e.g. Kepler to Maxwell had.
 
"Quite probably" most GP104 chips will allow a 33% overclock? Wow..

I said they would get too close, not that they all hit the same frequency. The announced GTX 1080 has a boost clock of 1733 MHz, with the actual clock being somewhere in the region of 1850 MHz. I believe that most of them will clock to at least 2100-2200 MHz, and if they let you play with voltages and power limits, you can move that range a bit higher.
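As a quick sanity check on those numbers (my own arithmetic, not from the post; the helper below is purely illustrative):

```python
# Rough overclocking-headroom arithmetic for the quoted GTX 1080 clocks.
# All clock figures come from the post above.

boost_clock = 1733       # MHz, announced boost clock
actual_clock = 1850      # MHz, typical observed clock per the post
oc_range = (2100, 2200)  # MHz, expected overclock range

def headroom(target, baseline):
    """Overclock headroom as a percentage over a baseline clock."""
    return (target / baseline - 1) * 100

for target in oc_range:
    print(f"{target} MHz is +{headroom(target, boost_clock):.0f}% over boost, "
          f"+{headroom(target, actual_clock):.0f}% over the actual clock")
# 2100 MHz works out to about +21% over boost, +14% over the actual clock;
# 2200 MHz to about +27% and +19%.
```

So the "33% overclock" framing only holds against the announced boost clock at the very top of that range; measured against the clocks the cards actually run at, the headroom is more modest.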

If they used higher binned chips, better voltage regulation hardware and an AiO cooler, what makes you think it wouldn't be clocking much further?

You and many other people often seem to have very unrealistic expectations of what binning lets you do. The variance in these chips isn't that great, and what variance there is doesn't really show up as higher clocks on air or water cooling. Some chips consume less power but don't really clock higher; some chips clock a little higher but need a lot of power to do so.
There have been various overclocking-focused SKUs from third parties over the years with supposedly binned chips, but in the end the ceilings aren't far apart, and to push through them you need more voltage, and the power consumption gets out of hand quickly.

The variance isn't enough to create a new performance bracket.
 
The only compelling feature in the consumer Pascal is indeed what the 16nm FF process provides -- clock speed and power efficiency, the most straightforward performance boost. The architecture itself is mostly rehashed Maxwell, and even the stated differences in the ISA revisions with Pascal are marginal. Most of the signature features -- HBM2, NVLink, free ECC and 1/2-speed DP are all in the GP100. No wonder Nvidia leaned so hard on a rather niche feature like SMP rendering for VR and multi-monitor (where the impressive advantage over Titan X was stated repeatedly), since there's little left for the pedestrian Pascal to shine with, besides the boosted clock rates.
 
That reference board is also only half-populated with voltage regulators, plus the option to add a second 8-pin connector
Previous reference designs also had similar provisions, mostly used in the professional space where the same board design is kept, but the other 8-pin connector is used for airflow reasons. I don't think that in the past you could use both of them simultaneously to greater effect.

Maxwell isn't going to age well, and neither are any of the previous archs, given how much more flexible Pascal is in certain respects. Sooner or later this means that driver- and application-side optimizations for the previous archs are going to be phased out, and this time with a much worse impact than e.g. Kepler to Maxwell had.
From what's been published until now, I don't see many optimizations for general gaming that would apply to Pascal but not to Maxwell. Quite the contrary.
 
This time we only have FireStrike scores. According to the leaker, the GTX 1080 was running at a 2114 MHz clock, which is roughly 381 MHz more than the stock 1733 MHz boost clock. This is the first time a new Pascal GP104-based graphics card shows its true potential. We are observing a substantial boost in performance: the GTX 1080 has 152/161/161% of stock GTX TITAN X performance in FireStrike Performance/Extreme/Ultra respectively.

[Image: NVIDIA GeForce GTX 1080 overclocking 3DMark performance chart]

[Image: GTX 1080 clocks chart]


http://videocardz.com/59882/nvidia-geforce-gtx-1080-3dmark-overclocking-performance
 
Reading the above, they say the 100% baseline was already running at ~1.85 GHz.
Amazingly, the 14% faster 2.11 GHz clock results in a 24% speedup, with no need even for overclocked memory.
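For what it's worth, that clock-vs-speedup arithmetic checks out (a quick sketch using the leaked figures quoted above; treat them as approximate):

```python
# Clock-vs-speedup arithmetic for the leaked GTX 1080 FireStrike run.
# All figures are from the posts above and should be treated as approximate.

stock_clock = 1850       # MHz, roughly the actual boost clock at stock
oc_clock = 2114          # MHz, leaked overclocked core clock
reported_speedup = 0.24  # ~24% FireStrike gain at that overclock

clock_gain = oc_clock / stock_clock - 1
print(f"clock gain: {clock_gain:.1%}")  # ~14.3%

# The reported speedup exceeds the core-clock gain even though the memory
# was not overclocked, which is what makes the result surprising.
print(f"speedup vs clock gain: {reported_speedup:.0%} vs {clock_gain:.1%}")
```

Super-linear scaling from a core overclock alone would be unusual, which is why the baseline clock matters so much in interpreting these leaks.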
 
Reading the above, they say the 100% baseline was already running at ~1.85 GHz.
Where does it say that? The 100% is running at the regular base clock; it may only boost up to 1733 MHz, and it's not even locked to that.

And we have yet to see the power consumption at 2.1 GHz.
 
Where does it say that? The 100% is running at the regular base clock; it may only boost up to 1733 MHz, and it's not even locked to that.

And we have yet to see the power consumption at 2.1 GHz.
What is the thinking behind looking at power consumption at the higher OC: potential limits of the power supply, or more to do with efficiency/TDP?
From an efficiency/TDP perspective I just shrug when it comes to extreme OC, and I apply this to AMD products too; although it is good to know the practical performance headroom available to most consumers (air cooling solutions).
Cheers
 
Supposedly, it's also roughly 3 times as expensive (a little less maybe, 970s go for ~280 EUR here and falling).

When does price scale in a linear fashion at the high end in anything?

I'm just happy to see the performance being pushed. Will be interesting to see where the OC ranges for air and water come in at.

Aftermarket 980 Tis were around 1450-1500 MHz, with outliers on each side.

My take is that an OC'd 980Ti will be slightly ahead of a stock 1080. The thermal performance is really impressive though.
 
The only compelling feature in the consumer Pascal is indeed what the 16nm FF process provides -- clock speed and power efficiency, the most straightforward performance boost.
Which happens to be the reason why people upgrade their GPUs.

The last major hardware addition to the graphics pipeline was tessellation. That was 7 years ago?

What kind of new features were you expecting?

The architecture itself is mostly rehashed Maxwell, and even the stated differences in the ISA revisions with Pascal are marginal.
You consider that's a good thing, right? It means that driver improvements for Pascal should also apply for Maxwell.

Most of the signature features -- HBM2, NVLink, free ECC and 1/2-speed DP are all in the GP100.
None of those are compelling features for gaming workloads. (ECC is never free, BTW.)

No wonder Nvidia leaned so hard on a rather niche feature like SMP rendering for VR and multi-monitor (where the impressive advantage over Titan X was stated repeatedly), since there's little left for the pedestrian Pascal to shine with, besides the boosted clock rates.
How was that different from the Fermi to Kepler transition? Or Kepler to Maxwell? Go back to those introductions and observe how they also showed off some niche features.
 
You consider that's a good thing, right?
Sure, the transition to a new manufacturing process was way overdue. Though, I have higher expectations for Volta, when the 16nm FF process should be better utilized and more mature.
It means that driver improvements for Pascal should also apply for Maxwell.
I hope so, since Nvidia doesn't have to stretch its driver support with yet another too radically different architecture this time.
 
Sure, the transition to a new manufacturing process was way overdue. Though, I have higher expectations for Volta, when the 16nm FF process should be better utilized and more mature.
Is it confirmed Volta will be on 16nm FF?
 