Nvidia Pascal Announcement

[Image: 01-Clock-Rate_w_600.png (clock-rate chart)]


They should've called it the SnapForce 810 instead.
 
Btw, this means every review is wrong: they're showing unrealistic numbers that won't be achievable by anything unless you live at the North Pole. I'm hoping to see a review that uses a sustained frequency, let's say 1.68GHz, and compares it to custom 980 Tis. Only then will we see a real-world scenario.
 
Hexus's article says otherwise; it's more like 25% from the process and 75% from actual design.

And it was a straight answer from the lead designer of Pascal (minus the %'s)

http://hexus.net/tech/reviews/graph...gtx-1080-founders-edition-16nm-pascal/?page=2

We've spoken about the positive attributes of moving down manufacturing processes, and one such goodness is the ability to drive higher frequencies. One would expect the 16nm geometry to offer a reasonable bump in frequency, though not as high as Nvidia has achieved.

Jonah Alben, who oversaw Pascal, said to us that a huge amount of work had been done to minimise the number of 'violating paths' that stand in the way of additional frequency. This is critical-path analysis by another name, where engineers pore over the architecture to find and eliminate the worst timing paths that actually limit chip speed. If successful, as appears to be the case here, the frequency potential is shifted to the right, or higher. Alben reckoned that Nvidia managed a good 'few hundred megahertz' by going down this path, if you excuse the pun, so Pascal is Maxwell refined to within an inch of its life.
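The 'violating paths' idea in the quote is just critical-path analysis: the maximum clock is set by the slowest register-to-register path, so shaving the worst path raises fmax. A toy sketch (the pipeline stages and gate delays below are invented for illustration, not real Pascal figures):

```python
# Toy critical-path analysis: fmax is limited by the longest delay
# through the logic network; fixing the worst path raises fmax.

def longest_path(graph, delays):
    """Return the worst (longest) delay through a DAG of logic stages."""
    memo = {}

    def walk(node):
        if node in memo:
            return memo[node]
        succs = graph.get(node, [])
        memo[node] = delays[node] + (max(map(walk, succs)) if succs else 0.0)
        return memo[node]

    return max(walk(n) for n in graph)

# Hypothetical stage graph (edges point downstream) with delays in ns.
graph = {"fetch": ["decode"], "decode": ["alu", "agu"],
         "alu": ["wb"], "agu": ["wb"], "wb": []}
delays = {"fetch": 0.10, "decode": 0.15, "alu": 0.30, "agu": 0.22, "wb": 0.08}

worst = longest_path(graph, delays)  # fetch -> decode -> alu -> wb = 0.63 ns
print(f"critical path: {worst:.2f} ns -> fmax ~ {1 / worst:.2f} GHz")

# Shaving the ALU path (fixing one "violating path") lifts fmax:
delays["alu"] = 0.24
print(f"after fix: fmax ~ {1 / longest_path(graph, delays):.2f} GHz")
```

In this toy case one fix moves fmax from roughly 1.59GHz to roughly 1.75GHz, which is the shape of the "few hundred megahertz" claim.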
 
There is one consistent anomaly I have noticed from multiple reviews.
The Anno games (2070 and 2205) with the default boost profiles seem to sit at the 1607MHz base clock (the temp/power target needs to be modified before they boost)....
I wonder if any publications will delve into what is happening and use it as a good test case, since it's quite consistent.

Cheers
 
There is one consistent anomaly I have noticed from multiple reviews.
The Anno games (2070 and 2205) with the default boost profiles seem to sit at the 1607MHz base clock (the temp/power target needs to be modified before they boost)....
I wonder if any publications will delve into what is happening and use it as a good test case, since it's quite consistent.
It's not an anomaly. The Anno games have been doing this for years now. Go look through the reviews at Hardware.fr. It seems you're new at this.
 
Hexus's article says otherwise; it's more like 25% from the process and 75% from actual design.
Didn't the article you linked say 'a few hundred megahertz'? That sounds like 200 to 300MHz, which is a 50% improvement.

I find it really hard to believe, because Maxwell already achieved 1.5GHz +/- with an overclock on 28nm. That's 15% lower frequency than Pascal. And the diagram doesn't show much improvement, really.
Why would you compare overclocked to stock clocks?
 
Didn't the article you linked say 'a few hundred megahertz'? That sounds like 200 to 300MHz, which is a 50% improvement.


Why would you compare overclocked to stock clocks?


A few is greater than 2 but less than 7, so yeah, it could be 200MHz or it could be higher. Let's find out what the max frequencies are: the 980 Ti and Titan X max out at 1500MHz, and we don't know what the 1080 maxes out at, but it looks to be above 2000MHz, doesn't it? With the thermal and power barriers, we can't know for sure.
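A quick sanity check on the numbers tossed around in this thread. These are the posters' estimates (1500MHz Maxwell ceiling, a speculated 2000MHz for the 1080), not measurements:

```python
# Posters' estimated overclock ceilings (MHz), not measured data.
maxwell_max = 1500  # claimed 980 Ti / Titan X ceiling
pascal_max = 2000   # speculated GTX 1080 ceiling

gain = pascal_max - maxwell_max
print(f"gain: {gain} MHz ({gain / maxwell_max:.0%} over the Maxwell ceiling)")
```

So if both ceilings hold, the gap is 500MHz, or about a third more than Maxwell's best, which is indeed "a few hundred megahertz" and more than the 200-300MHz reading above.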
 
It's not an anomaly. The Anno games have been doing this for years now. Go look through the reviews at Hardware.fr. It seems you're new at this.
LOL gee thanks :)
Actually, not quite, because earlier NVIDIA cards (specifically the 980 Ti, as it has the same cooler, was the last Maxwell enthusiast card, and the 1080's power phase/regulation is closer to it than to the 980's) did NOT remain at base clock and still boosted somewhat; I checked several sites.
And yes, I did check for that: the 980 Ti still boosts in the Anno games, so it's not the same behaviour as the 1080 sitting at just the base clock across multiple review sites.

Edit:
To expand on this point, but only in response to your post:
ComputerBase shows three games that remain at base clock on the 1080; however, two of those (The Witcher 3 and The Talos Principle) were also tested on the 980 Ti, and there they boosted, albeit not greatly, but still above the boost spec.
http://www.computerbase.de/2015-06/...-nvidia-titan/3/#abschnitt_die_turbotaktraten
http://www.computerbase.de/2016-05/...bschnitt_bis_zu_1785_mhz_takt_unter_dauerlast

I'm just using this in response to your post, not in the context of the Anno anomaly on Pascal, which is replicated by multiple reviews (so better substantiated).

Edit 2:
OK, I can see how this argument could go full circle, as it could be argued I should use the other cards rather than the 980 Ti, and I accept that.
Although this card is higher priced, uses the same cooling solution as the 980 Ti, and has improved and beefier power phase/regulation over the reference 980, I understand where you're coming from, Jawed, in wanting to compare it to the 980/970/earlier cards.
Cheers
 
Sorry if I missed it being linked earlier, but oh man, the Q&A about the FE's pricing, and what exactly makes it better than a previous reference design, is, err, rather painful to watch for the NVIDIA staff on stage.
You need to go to 3min50sec for the FE questions.

I would demand a raise if I were the NVIDIA employee on stage having to justify the FE and its pricing to hundreds of reviewers and publications :)

I think the person in the background immersing themselves in VR had the right idea of escaping it all: just pretend none of it happened :D
Cheers
 
Sorry if I missed it being linked earlier, but oh man, the Q&A about the FE's pricing, and what exactly makes it better than a previous reference design, is, err, rather painful to watch for the NVIDIA staff on stage.
You need to go to 3min50sec for the FE questions.

I would demand a raise if I were the NVIDIA employee on stage having to justify the FE and its pricing to hundreds of reviewers and publications :)

I think the person in the background immersing themselves in VR had the right idea of escaping it all: just pretend none of it happened :D
Cheers

And yet I still don't know who exactly will be selling Pascal at the lower MSRP. It's obvious Nvidia won't with the FE. Did Nvidia force their AIB partners to sell at least one SKU at the lower MSRP? If not, I see no sane reason why any partner would sell their custom boards below the FE price point.
 
Anyhow, if this behaviour is frequent (i.e. happens in several other games), I find the fix pretty easy: NV should raise the minimum fan speed to prevent the card from dropping under the advertised boost clock.
 
If that turns out to be true (big IF), then it's the first time in years Nvidia hasn't released a >500mm^2 die for their high-end GPU. I also expected to see HBM2 in the Titan to differentiate it from the Ti.

As for the boost clock thing, you can easily modify the BIOS to completely disable it (at least under load; you don't want to do that at idle :p). I've done that on my 970 (forcing constant voltage/core clock) because I was getting driver crashes in less demanding titles. We should be able to do the same thing on Pascal.
 
Haven't the CUDA core counts traditionally been either lower than or the same as on the Tesla (both M and K series) counterpart?
This would break that pattern.
Cheers
 
Right, but before, you didn't have an extraordinarily expensive memory configuration in the Tesla series either. That Titan / 1080 Ti is clearly still a consumer product with its GDDR5X memory. Plus, feature-wise, it likely shares the same shader configuration as GP104, rather than the improved half-precision rate sported by GP100. It also provides only a 1/32 DP-to-SP ratio this time.

All in all, it looks as if Nvidia decided to ditch the "professional" label originally assigned to the Titan series and stepped down to marketing it only as the fastest consumer card, but nothing more.

Guess if AMD really wanted to hurt Nvidia, all they would need to do is give Polaris and Vega the improved half-precision rate already, and promote its use aggressively. After all, that DOES mean doubled performance in every respect when you don't need full SP precision. If I'm not mistaken, it would allow keeping up with hardware up to ~50% stronger just through the reduced effective computational cost.
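The ~50% claim can be sanity-checked with an Amdahl-style toy model: the FP16-amenable part of a workload runs at 2x, the rest at 1x. The workload fractions below are assumptions for illustration:

```python
# Amdahl-style toy model: overall speedup from double-rate FP16
# as a function of how much of the workload tolerates half precision.

def effective_speedup(fp16_fraction):
    """Overall speedup when the FP16 part runs at 2x and the rest at 1x."""
    return 1.0 / ((1.0 - fp16_fraction) + fp16_fraction / 2.0)

for f in (0.0, 0.5, 2.0 / 3.0, 1.0):
    print(f"{f:.0%} of work in FP16 -> {effective_speedup(f):.2f}x")
```

Matching hardware that is ~50% stronger needs a 1.5x overall speedup, which this model reaches when about two-thirds of the work runs in FP16; the full 2x only arrives if everything does.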
 