Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

I wouldn't put too much stock in any of the current DX12/Vulkan benchmarks. It's like this with every new API. You need to wait about a year to let everyone get their shit together.
Doom IMHO seems to have a well-done Vulkan implementation. I see no reason it wouldn't be a valid benchmark. Maybe because of the intrinsics, but I'm not sure Nvidia has enabled the functionality they require.

For Talos they've indicated it's still a work in progress with three steps planned. Last I heard they were only starting to optimize Vulkan prior to a likely switch. You shouldn't fault them for taking an existing game and implementing next-generation technology for users to try.

Just because there is a learning curve with Vulkan/DX12 doesn't mean the benefits should be dismissed. In general everyone admits it takes some learning, but that it may in fact be easier and cheaper to use in practice. I'm not aware of any devs that have said they don't even want to bother with the new APIs.
 
Doom IMHO seems to have a well-done Vulkan implementation. I see no reason it wouldn't be a valid benchmark. Maybe because of the intrinsics, but I'm not sure Nvidia has enabled the functionality they require.
It's 1 game in any case, so it's difficult to draw any meaningful conclusions from it.
 
So far, yes.
But then again, e.g. here in Germany, apart from the "Palit GeForce GTX 1060 Dual", essentially all custom models priced at MSRP are already sold out with no known resupply dates.
Seems like some new stock has arrived overnight for Zotac and some Gainward cards.
http://geizhals.de/?cat=gra16_512&xf=1439_GTX+1060#xf_top

Guess we will have another look at the "MSRP" in one or two weeks to see if it still holds.
That's why there's an "S" in MSRP, right? It's (supposed to be) a self-balancing economy after all, isn't it? No communism anymore.
 
MSI GeForce GTX 1060 GAMING X Review .... aimed at the mainstream segment with a 279 USD price tag.
So Afterburner is in a new beta development stage and voltage control is still pending. However, even without it you'll reach the 2100 MHz marker. The memory on our sample was insanely tweakable, as we reached 9.6 GHz.

http://www.guru3d.com/articles_pages/msi_geforce_gtx_1060_gaming_x_review,1.html
 
Has anyone verified that a GDDR5X 1080 can actually achieve over ~300GB/sec of bandwidth?

Is there a good OpenGL synthetic that can test this?

No one on the CUDA forums has broken ~230 GB/sec.

Maybe the 10x0 devices are similar to Maxwell v2 and also default to a stock MEM clock power state when executing compute ("C") processes?
No problem getting over 230 GB/s. About 277 GB/s pure read b/w and 275 GB/s pure write b/w on a FE GTX 1080. Getting over 300 GB/s would be pretty incredible as that would be well over 90% memory utilization, which is tough on any GPU I've seen.

Also, you can easily check the memory clock in GPU-Z.
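If you want to reproduce this yourself, a quick-and-dirty device-to-device copy timing along these lines gives an effective GB/s figure (the buffer size, iteration count, and missing error checks are my own shortcuts, not taken from the numbers above):

Code:
// Rough device-to-device bandwidth sketch; build with: nvcc -O3 bw.cu -o bw
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1ull << 30;   // 1 GiB per buffer
    const int iters = 50;

    void *src = nullptr, *dst = nullptr;
    cudaMalloc(&src, bytes);
    cudaMalloc(&dst, bytes);
    cudaMemset(src, 0, bytes);

    // Warm-up copy so the card ramps clocks before we start timing.
    cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    // A D2D copy reads and writes every byte once, so count the traffic twice.
    double gbps = 2.0 * bytes * iters / (ms / 1000.0) / 1e9;
    printf("Effective bandwidth: %.1f GB/s\n", gbps);

    cudaFree(src);
    cudaFree(dst);
    return 0;
}

While it runs you can watch the memory clock with nvidia-smi -q -d CLOCK (or GPU-Z, as noted above) to see whether the card drops to a lower memory power state for compute work.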
 
Couldn't you throw the 1060 in there too?
His 1060 review is coming in a few days (before the RX480 review, apparently).
Anyway, I would have preferred some more DX12 benchmarks like ROTR with the latest DX12 patch, Quantum Break, and Warhammer DX11 vs DX12. And why no Doom OGL/Vulkan bench either?
 
No problem getting over 230 GB/s. About 277 GB/s pure read b/w and 275 GB/s pure write b/w on a FE GTX 1080.

Perfect. Thanks, that answers the question.

And you're correct about 90%... ~86% seems to be the ceiling on my cards (which is exactly 275 out of 320).

So the CUDA folks will have to file some more bug reports.
 
All of that said, there is one new feature to the video decode block and associated display controller on Pascal that’s not present on GM206: Microsoft PlayReady 3.0 DRM support. The latest version of Microsoft’s DRM standard goes hand in hand with the other DRM requirements (e.g. HDCP 2.2) that content owners/distributors have required for 4K video, which is why Netflix 4K support has until now been limited to more locked-down devices such as TVs and set top boxes. Pascal is, in turn, the first GPU to support all of Netflix’s DRM requirements and will be able to receive 4K video once Netflix starts serving it up to PCs.
Does this mean AMD doesn't support PlayReady 3.0? Or does he mean the first GPU he's reviewed? This could be big for choosing my HTPC GPU.
 
Does this mean AMD doesn't support PlayReady 3.0? Or does he mean the first GPU he's reviewed? This could be big for choosing my HTPC GPU.
I am under the impression AMD said they'd support PlayReady 3.0 as well.

--
On the matter of sales and availability, counting today's numbers for the one German shop which publicly lists numbers sold, we now have:

GTX 1080 (from May 27th): 3845 (69.9/day)
GTX 1070 (from June 10th): 6385 (155.7/day)
Radeon RX480 (from June 29th): 2260 (102.7/day)
GTX 1060 (from July 19th): 645 (322.5/day)

Pretty interesting numbers, although they're only from a single e-tailer in a single country (and probably one with a higher than average tendency towards more expensive hardware).
 
I am under the impression AMD said they'd support PlayReady 3.0 as well.
It's the first GPU I've reviewed (in order) that supports PlayReady 3.0. Though for what it's worth, I asked AMD about this close to the RX 480 launch and was never able to get a concrete answer about whether they supported it. Their marketing materials related to streaming video have always focused on HDR rather than 4K.
 
It's the first GPU I've reviewed (in order) that supports PlayReady 3.0. Though for what it's worth, I asked AMD about this close to the RX 480 launch and was never able to get a concrete answer about whether they supported it. Their marketing materials related to streaming video have always focused on HDR rather than 4K.
Thanks @Ryan Smith. Quick searches indicate the latest AMD cards do NOT support the SL3000 level of PlayReady, and it's unknown whether it could be enabled later via an update, since it's a hardware component. Would be really great if you could get that confirmation in time for your RX480 review release :)
 
From Anandtech's review:


Ryan Smith said:
Finally, let’s see the cost of overclocking in terms of power, temperature, and noise. For the GTX 1080FE, the power cost at the wall proves to be rather significant. An 11% Crysis 3 performance increase translates into a 60W increase in power consumption at the wall, essentially moving GTX 1080FE into the neighborhood of NVIDIA’s 250W cards like the GTX 980 Ti. The noise cost is also not insignificant, as GTX 1080FE has to ramp up to 52.2dB(A), a 4.6dB(A) increase in noise.


A 12% core + 10% memory overclock translates into 60W more power consumption. For a 180W TDP card, this is 33% more power in exchange for 12% higher clocks.
The GTX 1070 only overclocked to 1700/1850 MHz and the power hit was much smaller at 20W.
Looks like GP104's clock/power curve gets noticeably steeper past ~1800 MHz.
 
A 12% core + 10% memory overclock translates into 60W more power consumption. For a 180W TDP card, this is 33% more power in exchange for 12% higher clocks.

It's more like going from 1.062 V to 1.093 V that increased the power consumption. They should have overclocked it at stock voltage and then perhaps compared that to the overvolted result. I'm almost certain that raising the voltage didn't give any significant advantage in overclocking here; it mostly just increased power consumption.
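Rough back-of-the-envelope (my own first-order estimate, not from the review): dynamic power scales roughly with f*V^2, so +12% core clock at 1.093 V instead of 1.062 V works out to about 1.12 x (1.093/1.062)^2 ≈ 1.19, i.e. roughly +19% for the GPU core alone. The gap up to the ~33% (60 W) measured at the wall would come from the memory overclock, leakage at the higher voltage, and PSU losses, so the clocks and the extra volts both seem to contribute.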
 