Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

Nvidia GTX 1060 breaks world record with 3GHz+ core clock


Nvidia’s Pascal graphics chips have proved dominant in this GPU generation, and that goes for clock speeds as much as raw power. At the finals of the GALAX GOC 2016 overclocking series in Wuhan, China, several teams of ‘clockers were able to push the GP106 – the chip used in GTX 1060 cards – close to 3GHz, while another team burst right through that ceiling.

The card in question was the Galaxy GTX 1060 HOF model, which, as MobiPicker points out, was the same card that broke the 2.8GHz core barrier last month. This is a hefty overclock, with the stock core clock sitting at just 1,620MHz – though it does boost up to 1,847MHz. The overclock ultimately pushed the GTX 1060 in question to higher pixel fill rates than a stock GTX 1080.
http://www.kitguru.net/components/g...060-breaks-world-record-with-3ghz-core-clock/
 
http://hothardware.com/reviews/nvidia-quadro-p6000-and-p5000-workstation-gpu-reviews?page=6

Anyone got an idea what the hell is going on there? How can the Quadro P6000 hit over twice the minimum FPS of the Titan X in Hitman, when Hitman at that resolution takes around 5GB of VRAM max, and the Titan X already has 12GB?

Honestly, I don't really understand minimum fps.

Like, the thing you care about is the distribution of frame times. I've never understood why it's important to arbitrarily group frame times together into little bundles whose total time happens to equal one second.
 
http://hothardware.com/reviews/nvidia-quadro-p6000-and-p5000-workstation-gpu-reviews?page=6

Anyone got an idea what the hell is going on there? How can the Quadro P6000 hit over twice the minimum FPS of the Titan X in Hitman, when Hitman at that resolution takes around 5GB of VRAM max, and the Titan X already has 12GB?
First: why do you assume it has anything even remotely to do with VRAM? What if the game just needs 10 new complicated shaders and does the compilation, and the Quadro performs better because its shader compilation settings are more relaxed than the ones on the Titan X, which aim at generating the most efficient shader possible for the architecture?
Even if it is VRAM, why do you think the game will use all of it? Don't you think there's probably a cap somewhere in the game? It has already been pointed out quite a few times around here how DX11 was quite nice about oversubscription while DX12 is a bit of a pain (reservations in heaps; the driver doesn't really have control over which part of the heap will be accessed). And as there might be hidden resources eating away at your total available memory, you'll find it really tricky to hit the max available memory wall.

The point is there's just no way to tell unless someone makes a complete API capture and a whole analysis of what's going on around those frames. It's just guessing, and I think even without the info that we're dealing with a 12GB and a 24GB card, most developers wouldn't start thinking about memory by default.

Also, as ImSpartacus said: twice the minimum FPS? There's one frame that took 70ms. How does one sample point make an average?
 
Honestly, I don't really understand minimum fps.

Like, the thing you care about is the distribution of frame times. I've never understood why it's important to arbitrarily group frame times together into little bundles whose total time happens to equal one second.

It is still the next best thing after frametimes.
 
It is still the next best thing after frametimes.

Is it? And honestly, what is minimum fps? I wasn't kidding when I said I didn't understand it, lol.

I have a great handle on frame time distributions, their various percentiles and other metrics (I like what Tech Report does).
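For reference, this is the kind of percentile metric I mean (a rough sketch with made-up frame times, not Tech Report's exact methodology):

# Rough sketch of a percentile frame-time metric (made-up numbers, not Tech Report's exact method).
# The 99th-percentile frame time is the frame time that 99% of frames come in under.
def percentile_frame_time(frame_times_ms, pct):
    ordered = sorted(frame_times_ms)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100.0))
    return ordered[index]

frame_times_ms = [16.7] * 195 + [35.0] * 4 + [90.0]   # hypothetical 200-frame run
print(percentile_frame_time(frame_times_ms, 99))      # -> 35.0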

I also understand average FPS. You're totaling up frames and then dividing by the number of seconds in that given benchmark. It's imperfect, but it's clear how you calculate it.
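In code, that's just this (a minimal sketch with hypothetical frame times in milliseconds):

# Minimal sketch of average FPS: total frames divided by total benchmark time.
frame_times_ms = [16.7, 16.7, 33.4, 16.7, 70.0, 16.7]    # hypothetical recorded frame times
total_seconds = sum(frame_times_ms) / 1000.0
average_fps = len(frame_times_ms) / total_seconds
print(f"Average FPS: {average_fps:.1f}")                 # ~35.3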

To me, minimum FPS sounds like you're arbitrarily splitting up the sequence of frames into "bundles" that are one second long and then picking the bundle that has the fewest frames. That is ridiculous because it inherits the exact same limitations as average FPS. When Wasson titled his famous article "Inside The Second", the meaning behind that title was that we needed to look "inside" those arbitrary 1-second bundles of frames. The act of bundling them together will mask frame time issues, because of the chance of a shitty long frame getting bundled with "good" short frames (remember this used to be notoriously rampant in CrossFire). It doesn't matter whether you're looking at average FPS or "minimum FPS" (if I'm interpreting it correctly); you're still seeing the same limitations because they're both gimped by the same bundling bullshit.

Honestly, the explanation for Kaotik's original question was probably just the ridiculous per-benchmark variance that a "minimum FPS" metric suffers from. Maybe one run pairs two nearby long frames in the same "second", and that second easily takes the crown for "minimum FPS", while another run sees those long frames split across two seconds, so now the "minimum FPS" doesn't appear to be as bad. Same frame distribution, just different arbitrary bundling of "seconds".
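If I'm reading the metric right, the calculation would look something like this (a pure guess at the method, with made-up frame times):

# Sketch of "minimum FPS" as I understand it: bundle frames into consecutive
# 1-second windows and report the window with the fewest frames.
def minimum_fps(frame_times_ms):
    frames_per_second = []
    elapsed_ms, frames_in_window = 0.0, 0
    for ft in frame_times_ms:
        elapsed_ms += ft
        frames_in_window += 1
        if elapsed_ms >= 1000.0:           # the current "second" bundle is full
            frames_per_second.append(frames_in_window)
            elapsed_ms -= 1000.0
            frames_in_window = 0
    return min(frames_per_second)

# One 70 ms spike in a run of 16.7 ms frames: the window containing the spike
# still holds 60 frames, and the reported minimum (57) comes from the *next*
# window purely because of where the 1-second boundaries happen to fall.
sample = [16.7] * 59 + [70.0] + [16.7] * 120
print(minimum_fps(sample))                 # -> 57

Which is exactly the arbitrariness I'm complaining about: shift where the capture starts by half a frame and the identical frame sequence can spit out a different "minimum FPS".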
 
To me, minimum FPS sounds like you're arbitrarily splitting up the sequence of frames into "bundles" that are one second long and then picking the bundle that has the fewest frames. That is ridiculous because it inherits the exact same limitations as average FPS. When Wasson titled his famous article "Inside The Second", the meaning behind that title was that we needed to look "inside" those arbitrary 1-second bundles of frames. The act of bundling them together will mask frame time issues, because of the chance of a shitty long frame getting bundled with "good" short frames (remember this used to be notoriously rampant in CrossFire). It doesn't matter whether you're looking at average FPS or "minimum FPS" (if I'm interpreting it correctly); you're still seeing the same limitations because they're both gimped by the same bundling bullshit.
So you do understand it, but you evaluate it only from the perspective of a single issue. It is not at all ridiculous once you look beyond the stutter problems that CrossFire had/has.
 
First: why do you assume it has anything even remotely to do with VRAM? What if the game just needs 10 new complicated shaders and does the compilation, and the Quadro performs better because its shader compilation settings are more relaxed than the ones on the Titan X, which aim at generating the most efficient shader possible for the architecture?
Even if it is VRAM, why do you think the game will use all of it? Don't you think there's probably a cap somewhere in the game? It has already been pointed out quite a few times around here how DX11 was quite nice about oversubscription while DX12 is a bit of a pain (reservations in heaps; the driver doesn't really have control over which part of the heap will be accessed). And as there might be hidden resources eating away at your total available memory, you'll find it really tricky to hit the max available memory wall.

The point is there's just no way to tell unless someone makes a complete API capture and a whole analysis of what's going on around those frames. It's just guessing, and I think even without the info that we're dealing with a 12GB and a 24GB card, most developers wouldn't start thinking about memory by default.

Also, as ImSpartacus said: twice the minimum FPS? There's one frame that took 70ms. How does one sample point make an average?

Besides, if framebuffer size played any role here, the Fury X is doing more than just fine with its 4GB.
 
Both minimum fps and average fps are a lossy (compressed) way of depicting the actual data, i.e. frametimes.

However, minimum fps as a metric is vastly more sensitive to outliers, thus being much poorer at summarizing user experience.

Hence FPS >> minimum FPS. What else is there to say?
 
However, minimum fps as a metric is vastly more sensitive to outliers, thus being much poorer at summarizing user experience.

Hence FPS >> minimum FPS. What else is there to say?

Using your reasoning, one could say something more: FPS >> minimum FPS >> frametimes.
Damn outliers, what do they have to do with user experience?
 
Using your reasoning, one could say something more: FPS >> minimum FPS >> frametimes.
Damn outliers, what do they have to do with user experience?

No, that's not what my post said. Frametime is not comparable with the other metrics, especially since having it available means you can infer both FPS & min FPS. ;)
 
Using your reasoning, one could say something more: FPS >> minimum FPS >> frametimes.
Damn outliers, what do they have to do with user experience?

I know you're joking, but just so others don't get confused, there are two kinds of outliers being discussed (that's the joke).
  • In one sense, it's absolutely critical to detect frame time outliers (i.e. unusually high frame times) because they are ultimately the actual cause of a subjectively poor gaming experience.
  • In the other sense, when you're benchmarking a game, you often run the benchmark several times in case there are statistical outliers that cause some iterations of your test to yield unrealistic results. Unfortunately, "minimum FPS" is painfully susceptible to statistical outliers because of how frames happen to be bundled in a given benchmark run.
 
When Wasson titled his famous article "Inside The Second", the meaning behind that title was that we needed to look "inside" those arbitrary 1-second bundles of frames. The act of bundling them together will mask frame time issues, because of the chance of a shitty long frame getting bundled with "good" short frames (remember this used to be notoriously rampant in CrossFire). It doesn't matter whether you're looking at average FPS or "minimum FPS" (if I'm interpreting it correctly); you're still seeing the same limitations because they're both gimped by the same bundling bullshit.
When Scott was interviewed by GamersNexus, he liked their approach of 1% and 0.1% FPS figures for minimum behaviour and felt they were applicable, unlike the min FPS figure usually given alongside the average FPS in some reviews.
Cheers
 
When Scott was interviewed by GamersNexus, he liked their approach of 1% and 0.1% FPS figures for minimum behaviour and felt they were applicable, unlike the min FPS figure usually given alongside the average FPS in some reviews.
Cheers

I would agree with Wasson. Based on my interpretation of their explanation video below, GN is taking the bottom 1% & 0.1% of frames and calculating average FPS using those pools of frames (as opposed to calculating avg FPS from 100% of frames, which they also do). It seems like a good compromise that captures most of the indicators regarding whether or not there was a poor experience.
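If that reading is right, the math is roughly this (a sketch of my interpretation, not GN's actual scripts; the frame times are made up):

# Sketch of 1% / 0.1% "low" FPS as I interpret GN's explanation:
# take the slowest X% of frames and compute average FPS over just that pool.
def low_fps(frame_times_ms, fraction):
    slowest_first = sorted(frame_times_ms, reverse=True)
    pool = slowest_first[:max(1, int(len(slowest_first) * fraction))]
    return len(pool) / (sum(pool) / 1000.0)   # frames in pool / seconds in pool

frame_times_ms = [16.7] * 990 + [40.0] * 9 + [120.0]   # hypothetical 1000-frame run
print(low_fps(frame_times_ms, 0.01))    # 1% low   -> ~20.8
print(low_fps(frame_times_ms, 0.001))   # 0.1% low -> ~8.3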

 