Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

GTX 1060 “SLI” Benchmark – Outperforms GTX 1080 with Explicit Multi-GPU
sli-gtx-1060-ashes-1080p.png


sli-gtx-1060-ashes-4k.png


http://www.gamersnexus.net/guides/2519-gtx-1060-sli-benchmark-in-ashes-multi-gpu

Here’s the actual data for 3 resolutions across one high-end board from both AMD and NVIDIA. These charts show both the minimum and maximum scaling benefit an AMD or NVIDIA user with DX12 can expect over DX11 at each resolution.
The data shows that DX11 scaling does exist, but at both 1080 and 1440, DX12 achieves better scaling than DX11. Not only does DX12 have an extremely good maximum scaling factor win over DX11, its minimum scaling factor is also above the DX11 minimum. Explicit DX12 mGPU in this game is just uncompromisingly better at those resolutions.

At 4k, you can see that we are essentially at parity, within error tolerance. What’s not immediately clear is that DX12 retains potential wins over DX11 here. There are even more hidden wins not expressed by the data, and they have to do with unrealized CPU potential that game developers can take advantage of. The next section in this post will describe both how game developers can extract even more performance from these unrealized gains and why 4k does not show the same scaling benefits as lower resolutions in this title.

data_minimum3.png




data_maximum3.png
https://blogs.msdn.microsoft.com/di...rectx-12-multigpu-and-a-peek-into-the-future/
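For anyone wondering what "explicit multi-GPU" means in practice: under DX12 the engine, not the driver, enumerates and drives each adapter itself. Below is a minimal sketch of just the enumeration step, illustrative only and not the actual Ashes of the Singularity code path:

```cpp
// Minimal sketch of explicit multi-adapter (EMA) enumeration under D3D12.
// Illustrative only -- this is not the actual Ashes of the Singularity code path.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // Unlike implicit SLI/CrossFire, DX12 exposes every adapter individually;
    // the engine decides how to split work (e.g. AFR) across the devices it creates.
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"Adapter %u: %s (%zu MB VRAM)\n",
                    i, desc.Description, desc.DedicatedVideoMemory >> 20);
            devices.push_back(device);
        }
    }
    // With two GTX 1060s, 'devices' now holds two independent ID3D12Devices the
    // renderer can drive explicitly -- no SLI bridge or driver profile required.
    return 0;
}
```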
 
From Anandtech's review:





A 12% core + 10% memory overclock translates into 60W more power consumption. For a 180W TDP card, this is 33% more power in exchange for 12% higher clocks.
The GTX1070 only overclocked to 1700/1850MHz, and the power hit was much smaller at 20W.
Looks like the GP104's clock/power curve takes a dip after ~1800MHz.
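Spelling out that arithmetic (180W is the GTX 1080's TDP; the 13% figure later in the thread implies the GTX 1070's 150W TDP as the second baseline):

$$\frac{60\,\mathrm{W}}{180\,\mathrm{W}} \approx 33\%, \qquad \frac{20\,\mathrm{W}}{150\,\mathrm{W}} \approx 13\%$$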
It also depends upon the game, but importantly the latest generation (both 14nm and 16nm) has tighter performance envelopes/scaling.
Have you seen the power draw for a decent OC'd 480 when looking only at the GPU core, not the overall figure that includes memory/leakage/power loss?
It is pretty bad for a 110W GPU relative to the performance-voltage-frequency boost.

It is just as valid to look at it the way TomsHardware.de did, which is similar in approach to the way we get some of the performance-voltage scaling/headroom charts when characterising silicon.

01-Power-Consumption-vs-Gaming-Performance-FPS.png



Anyway, both AMD and Nvidia hit heavy power consumption once they go beyond their ideal performance envelopes with this generation. The core temp of the GPU also matters, and may be an important factor when trying to squeeze decent, actually usable performance out of an OC.
Here is the Tom's link if you're interested in their analysis.
http://www.tomshardware.de/gtx-1070...izienz-performance,testberichte-242131-3.html
Cheers
 
From Anandtech's review:





A 12% core + 10% memory overclock translates into 60W more power consumption. For a 180W TDP card, this is 33% more power in exchange for 12% higher clocks.
The GTX1070 only overclocked to 1700/1850MHz, and the power hit was much smaller at 20W.
Looks like the GP104's clock/power curve takes a dip after ~1800MHz.

GPU and memory are overclocked beyond stock values and power goes up! Shocker! Really? You were complaining, rightly, about the Polaris PCI Express issue being blown out of proportion by some members, but now you try to paint GP104 in a bad light as well when it's pushed out of official spec? Double standards much? The card IS a 180W TDP card within the official specs, obviously! What were you expecting? Did the card blow up when overclocked? No. So what's your point with "For a 180W TDP card, this is 33% more power in exchange for 12% higher clocks", really?
 
FFS will you please put down that fanboy-heated pitchfork?

Claiming the GP104's performance/power curve dips after 1800MHz was purely a technical statement. Polaris 10's performance/power curve probably dips below the RX 480's 1266MHz boost clocks, if RX470's efficiency claims are anything to go by.
The GP104's (and probably GP106's too) "ideal" clocks for performance/power efficiency are something like 600MHz above Polaris 10's.
There, I said something factual that puts nvidia in a better light. Happy now?


(as if clocks were the only factor that determines performance or efficiency..)
 
FFS will you please put down your fanboy-heated pitchfork?

Claiming the GP104's performance/power curve dips after 1800MHz was purely a technical statement. Polaris 10's performance/power curve probably dips below the RX 480's 1266MHz boost clocks, if RX470's efficiency claims are anything to go by.
The GP104's (and probably GP106's too) "ideal" clocks for performance/power efficiency are something like 600MHz above Polaris 10's.
There, I said something factual that puts nvidia in a better light. Happy now?

There is no fanboy pitchfork; you didn't see me say a single thing about Polaris, did you? On the other hand, you have accused other people of having an "agenda" several times already. Coming up with that sentence about a 180W TDP card pushing 33% more power when out of spec does not exactly show you as impartial. That is my single point.

EDIT - For clarity: that sentence, without the "For a 180W TDP card" remark, would be just a fact. It is the remark that shows "intention".
 
Coming up with that sentence about a 180W TDP card pushing 33% more power when out of spec does not exactly show you as impartial.

What the hell? You need to calm down, take a deep breath and read my post again.

The 180W number was there only to calculate how much the increased 60W meant as a percentage.
- The GTX 1080 was overclocked past 1800MHz and it took a 60W (33%) power increase.
- The GTX 1070 was overclocked up to 1800MHz and it took a much lower 20W (13%) power increase.

Therefore, the efficiency (performance/power) curve for GP104 seems to take a dip after it's clocked past 1800MHz.
How is that statement partial, and where did I even suggest it was somehow a bad thing? All GPUs see exponential increases in power consumption past a certain clock range; I just pointed out where that range seems to sit for GP104.
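To picture why: dynamic power scales roughly as P ≈ C·f·V², so once extra frequency starts demanding disproportionate voltage, power climbs much faster than clocks. A toy model follows; the voltage/frequency pairs are invented for illustration, not measured GP104 values:

```cpp
#include <cstdio>

// Toy model of the efficiency knee: dynamic power ~ C * f * V^2.
// The voltage/frequency pairs below are invented for illustration;
// they are NOT measured GP104 values.
int main()
{
    struct Point { double mhz, volts; };
    const Point curve[] = {
        {1600, 0.900}, {1700, 0.950}, {1800, 1.000}, // near-linear region
        {1900, 1.075}, {2000, 1.175},                // past the knee: V climbs fast
    };
    const double c = 1.0; // arbitrary switching-capacitance/activity constant

    const double basePower = c * curve[0].mhz * curve[0].volts * curve[0].volts;
    for (const Point& p : curve)
    {
        const double power = c * p.mhz * p.volts * p.volts;
        printf("%4.0f MHz @ %.3f V -> relative power %.2fx\n",
               p.mhz, p.volts, power / basePower);
    }
    // Past 1800MHz, power grows far faster than frequency: the same pattern
    // as the 1070's +13% vs the 1080's +33% power for similar clock bumps.
    return 0;
}
```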


OTOH it does feel like you had your hand on the holster waiting for the first opportunity to call me out on something.
 
Ok, my bad, I did read it the wrong way - e.g. like when some people tried to deny Maxwell's efficiency in the past using cherry-picked data points like peak consumption or compute tasks. Sorry.

Plus, RX480 is a better pick than GTX1060 anyway ;)
 
FFS will you please put down that fanboy-heated pitchfork?

Claiming the GP104's performance/power curve dips after 1800MHz was purely a technical statement. Polaris 10's performance/power curve probably dips below the RX 480's 1266MHz boost clocks, if RX470's efficiency claims are anything to go by.
The GP104's (and probably GP106's too) "ideal" clocks for performance/power efficiency are something like 600MHz above Polaris 10's.
There, I said something factual that puts nvidia in a better light. Happy now?


(as if clocks were the only factor that determines performance or efficiency..)
You miss the point: both have ideal clocks (in terms of voltage-performance scaling), and going beyond that envelope hurts efficiency more quickly/aggressively than on the previous 28nm technology; there is also the overall leakage/power loss associated with each design.
So it makes sense to mention Polaris as well, as both have similar constraints when pushed outside that ideal window.
It is probably around 1950MHz for Pascal and around 1230MHz for Polaris, but this is made more complex by the voltage and power/temp targets along with power draw.
However, another consideration is the die size and its characteristics when looking at the performance/Watts/frequency variables you're using; this behaviour seems to show that this time round the smaller dies are more efficient than the larger ones.
Let's keep it in perspective by using similar-sized dies.
That would be the 1060 compared to the 480, both with reasonably similar performance (give or take, depending upon the game).

And this is seen clearly with this set of charts:
Power-Consumption-vs.-Clock-Rate_w_727.png



Performance-vs.-Power-Consumption_w_728.png


Now, in terms of performance, other games will have the 480 posting much better fps (in certain ones faster than the 1060), but that would not change its position on the Watts used.
Anyway, as they conclude at Tom's Hardware (in the context of what you're discussing):
The GeForce GTX 1060 might not have a lot of overclocking headroom, but you should be able to get extra speed from the card, which translates to higher frame rates. At the other end of the spectrum, you could also save some power. Less-than-ideal frame times aren't great, but a card that maxes out at 62W is interesting, if only for exhibition purposes. It's clear to see how Nvidia gets away with just three power phases for its GP106.

AMD’s Radeon RX 480 represents a valid alternative for gamers who don’t place as much of an emphasis on power consumption. You'll just have to contend with more waste heat.
Cheers
 
EVGA GeForce GTX 1060 Video Card Review
Comments:
Question:
At stock settings, it appears that the EVGA SC card has a maximum clockspeed of 2012 MHz. However, how consistent is the clock speed over time? Does it remain constant at 2012MHz, or does it drop lower?

Answer:

Playing BF4 at 4K with Ultra presets shows that, with the overclock used in the review, the core clock sat at 2025MHz once the card reached full operating temperature. It started out at 2075MHz, dropped to 2062MHz at around 42C, then 2050MHz at 52C and 2037MHz at 57C, and once it hit 60C it settled at 2025MHz and stayed basically flat there.
http://www.legitreviews.com/nvidia-evga-geforce-gtx-1060-video-card-review_184301
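That behaviour is GPU Boost shaving the clock one ~13MHz bin at a time as temperature thresholds are crossed. Here is a rough sketch of the pattern using the thresholds reported above; the function is hypothetical, not NVIDIA's actual boost algorithm:

```cpp
#include <cstdio>

// Rough sketch of the stepping described above: GPU Boost 3.0 drops the core
// clock by ~12-13MHz bins as temperature thresholds are crossed. Thresholds
// and clocks mirror the Legit Reviews observations; the function itself is
// hypothetical, not NVIDIA's actual boost algorithm.
int clockForTemp(int tempC)
{
    if (tempC >= 60) return 2025; // settles here at full operating temperature
    if (tempC >= 57) return 2037;
    if (tempC >= 52) return 2050;
    if (tempC >= 42) return 2062;
    return 2075;                  // cold card: top overclocked boost bin
}

int main()
{
    for (int t : {35, 42, 52, 57, 60, 70})
        printf("%dC -> %dMHz\n", t, clockForTemp(t));
    return 0;
}
```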
 
Overclocking the GTX 1060 with PrecisionX OC
chart.jpg




We see some really good scaling in most of our games with this very good overclock of the GTX 1060 Founders Edition over stock settings. In some cases, the framerate increase makes a difference to the fluid playability of some of the games at 2560×1440, including DOOM and Star Wars Battlefront.

Hitman at 3840×2160 is the only benchmark that refused to launch with our maximum overclock, so we left the stock figure in for it; the rest of the games were stable at 2100MHz.
http://www.babeltechreviews.com/overclocking-gtx-1060-precisionx-oc/
 
WTF! The two reviews aren't comparable with each other... different game settings, and even the thermals are represented differently (deltas vs absolutes)!!!! What was KitGuru trying to do? They intentionally made both reviews this way... maybe a request by Asus? IDK...
 
WTF! The two reviews aren't comparable with each other... different game settings, and even the thermals are represented differently (deltas vs absolutes)!!!! What was KitGuru trying to do? They intentionally made both reviews this way... maybe a request by Asus? IDK...
Different authors with their own approaches, but agreed, they should be using a universal template/process.

On the plus side, looking at the results, all the cards in the list were retested for each review, as can be seen from how the GTX 1070 has different results between the two reviews.
Still makes it a nightmare to compare them well, though.
Cheers
 
Different authors with their own approaches, but agreed, they should be using a universal template/process.

On the plus side, looking at the results, all the cards in the list were retested for each review, as can be seen from how the GTX 1070 has different results between the two reviews.
Still makes it a nightmare to compare them well, though.
Cheers
I can see ppl claiming that the 1060 does 106 FPS in GTA while the 480 only does 78...
 
I can see ppl claiming that the 1060 does 106 FPS in GTA while the 480 only does 78...
Yep,
many ways for readers to skew the results without realising the settings are different, such as only one of those GTA review tests using 2xMSAA.
Surprised no one has told them, but it looks like Ryan Martin has only ever done the 1060 GPU reviews so far; they definitely need to agree on their testing before they both do more :)

Cheers
 