Gigabyte GeForce GTX 1050 Ti 4GB OC EDITION
http://www.guru3d.com/articles-pages/gigabyte-geforce-gtx-1050-ti-g1-gaming-oc-review,1.html
http://www.kitguru.net/components/g...060-breaks-world-record-with-3ghz-core-clock/
Nvidia’s Pascal graphics chips have proved dominant in this GPU generation, and that goes for clock speeds as much as raw power. At the finals of the GALAX GOC 2016 overclocking series in Wuhan, China, several teams of ‘clockers were able to push the GP106 – the chip used in the GTX 1060 cards – to close to 3GHz, while another team burst right through that ceiling.
The card in question was the Galaxy GTX 1060 HOF model, which, as MobiPicker points out, was the same card that broke the 2.8GHz core barrier last month. This is a hefty overclock, with the stock core clock sitting at just 1,620MHz – though it does boost up to 1,847MHz. The overclock ultimately pushed the GTX 1060 in question to higher pixel fill rates than a stock GTX 1080.
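The fill-rate claim can be sanity-checked with back-of-envelope arithmetic: peak pixel fill rate is roughly ROP count times core clock. The ROP counts and the stock GTX 1080 boost clock below are my assumptions from memory of the Pascal lineup, not figures from the article.

```python
# Rough fill-rate comparison. ROP counts (GTX 1060: 48, GTX 1080: 64) and
# the GTX 1080 stock boost clock (~1733 MHz) are assumptions, not sourced
# from the article above.

def pixel_fill_gpix(rops, clock_mhz):
    """Peak pixel fill rate in gigapixels per second."""
    return rops * clock_mhz / 1000.0

print(pixel_fill_gpix(48, 3000))  # GTX 1060 at a ~3 GHz overclock
print(pixel_fill_gpix(64, 1733))  # GTX 1080 at stock boost
```

Under those assumptions, a ~3GHz GP106 does indeed come out ahead of a stock GTX 1080 on raw pixel throughput, despite having fewer ROPs.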
http://hothardware.com/reviews/nvidia-quadro-p6000-and-p5000-workstation-gpu-reviews?page=6
Anyone got any idea what the hell is going on there? How can the Quadro P6000 hit over twice the minimum FPS of the Titan X in Hitman, when Hitman at that resolution takes around 5GB of vram max, and the Titan X already has 12GB?
Honestly, I don't really understand minimum fps.
Like, the thing you care about is the distribution of frame times. I've never understood why it's important to arbitrarily group frame times together into little bundles whose total time happens to equal one second.
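The "bundling" complaint can be made concrete with a toy trace. The sketch below (frame times invented for illustration) groups frames into 1-second FPS buckets the way an FPS counter does, and shows that a 70 ms hitch disappears into a bucket that still counts plenty of frames, while the raw frame-time view keeps it visible.

```python
# Sketch: why 1-second FPS buckets hide what the frame-time distribution
# keeps. The frame times below are made up for illustration.

frame_times_ms = [16.7] * 55 + [70.0] + [16.7] * 4  # one 70 ms hitch

def fps_buckets(times_ms):
    """Group frames into buckets whose total time reaches one second."""
    buckets, count, elapsed = [], 0, 0.0
    for t in times_ms:
        count += 1
        elapsed += t
        if elapsed >= 1000.0:
            buckets.append(count)
            count, elapsed = 0, 0.0
    if count:
        buckets.append(count)  # partial trailing bucket
    return buckets

print(fps_buckets(frame_times_ms))  # the hitch vanishes into a healthy-looking bucket
print(max(frame_times_ms))          # the frame-time view still shows the 70 ms spike
```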
Or do as Gamers Nexus does and break this down to 1% and 0.1% lows, which is better than min fps IMO.
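One common recipe for such 1%/0.1% figures is to average the slowest X% of frames and convert that to FPS. This is a hedged sketch of that idea, not necessarily the exact method Gamers Nexus publishes; reviewers vary in the details.

```python
# Sketch of a "1% low" style metric: average the slowest pct% of frame
# times and express the result as FPS. One common recipe; exact methods
# vary between reviewers. The trace is invented for illustration.

def pct_low_fps(frame_times_ms, pct):
    """FPS figure from the slowest `pct` percent of frames."""
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst) * pct / 100))  # at least one frame
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

times = [16.7] * 990 + [33.3] * 9 + [100.0]  # mostly ~60 fps, a few slow frames
print(round(pct_low_fps(times, 1.0), 1))     # slowest 1% of frames
print(round(pct_low_fps(times, 0.1), 1))     # slowest 0.1% (here: one frame)
```

Because it averages over a tail of frames rather than one worst sample, the 1% figure is less hostage to a single outlier than a raw minimum.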
It is still the next best thing after frametimes.
Isn't that the definition of "Frames Per Second"?
To me, minimum FPS sounds like you're arbitrarily splitting up the sequence of frames into "bundles" that are one second long. And then you're picking the bundle that has the fewest frames. That is ridiculous because it inherits the exact same limitations as average FPS. When Wasson titled his famous article "Inside The Second", the meaning behind that title was that we needed to look "inside" those arbitrary 1-second bundles of frames. The act of bundling them together will mask frame time issues, because a shitty long frame can get bundled with "good" short frames (remember this used to be notoriously rampant in crossfire). It doesn't matter if you're looking at average FPS or "minimum FPS" (if I'm interpreting it correctly); you're still seeing the same limitations, because both are gimped by the same bundling bullshit.

So you do understand it, but you evaluate it only from the perspective of a single issue. It is not at all ridiculous once you look outside of stutter problems like the ones crossfire had/has.
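The "Inside The Second" point can be demonstrated with two invented traces that report the same frames-per-second count but feel completely different to play: one is perfectly smooth, the other packs its frames unevenly around one huge hitch.

```python
# Illustration of "Inside The Second": two made-up traces with essentially
# the same FPS but very different frame-time behaviour.

smooth = [16.7] * 60             # steady ~60 fps, worst frame 16.7 ms
stutter = [8.0] * 59 + [528.0]   # 59 fast frames plus one half-second hitch

for trace in (smooth, stutter):
    total_s = sum(trace) / 1000.0
    # Same ~60 FPS headline number; wildly different worst frame.
    print(round(len(trace) / total_s, 1), max(trace))
```

An FPS counter (average or per-second minimum) sees two ~60 fps runs; only the frame-time view distinguishes the smooth trace from the one with a 528 ms stall.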
First: why do you assume it has anything even remotely to do with vram? What if the game just needs 10 new complicated shaders and does the compilation, and the Quadro performs better because its shader compilation settings are more relaxed than the ones on the Titan X, which aim at generating the most efficient shader possible for the architecture?
Even if it is vram, why do you think the game will use all of it? Don't you think there's probably a cap somewhere in the game? It has already been pointed out quite a few times around here how DX11 was quite nice about oversubscription while DX12 is a bit of a pain (reservations in heaps; the driver doesn't really have control over which part of the heap will be accessed). And as there might be hidden resources eating away at your total available memory, you'll find it really tricky to hit the max-available-memory wall.
Point is, there's just no way to tell unless someone makes a complete API capture and does a whole analysis of what's going on around those frames. It's just guessing, and I think even without the info that we're dealing with a 12GB and a 24GB card, most developers wouldn't start thinking about memory by default.
Also, as ImSpartacus said: twice the minimum FPS? That's one frame that took 70ms. How does one sample point make an average?
Isn't that the definition of "Frames Per Second"?
Not really, nor do you need to drive for an hour to get your speed in miles per hour.

You might if you have a lot of water mixed in with your gasoline.
However, minimum fps as a metric is vastly more sensitive to outliers, and thus much poorer at summarizing user experience.
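The outlier sensitivity is easy to show: one bad frame in a thousand completely flips a minimum-FPS figure, while a percentile of the same data barely moves. The nearest-rank percentile helper and the traces below are my own illustration, not from the thread.

```python
# Sketch: the minimum is maximally sensitive to a single outlier, while a
# high percentile of frame times barely notices. Data invented.

def percentile(vals, p):
    """Simple nearest-rank percentile (illustrative, not NumPy-exact)."""
    s = sorted(vals)
    idx = min(len(s) - 1, int(p / 100.0 * len(s)))
    return s[idx]

base = [16.7] * 999            # a thousand-ish perfectly steady frames
spiky = base + [500.0]         # identical, plus one 500 ms outlier

# "Minimum FPS" collapses from ~60 to 2 on the strength of one frame...
print(1000.0 / max(base), 1000.0 / max(spiky))
# ...while the 99th-percentile frame time is unchanged.
print(percentile(base, 99), percentile(spiky, 99))
```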
Hence FPS >> minimum FPS. What else is there to say?
Using your reasoning, one could say something more: FPS >> minimum FPS >> frametimes
Damn outliers, what do they have to do with user experience?
When Scott was interviewed by GamersNexus, he liked their approach of 1% and 0.1% fps figures for the minimum behaviour and felt they were applicable, unlike the min fps figure usually given alongside the average fps in some reviews.
Cheers