Nvidia Pascal Announcement

Nvidia Max-P adds 10 per cent performance
Moving on to the interesting new graphics component design from Samsung / Nvidia, it is claimed that the GeForce GTX 1060 Max-P graphics card "punches out at least 10 per cent more graphics power over 1060 Max-Q graphics". We have read about and tested various Max-Q laptops, so it will be interesting to get our hands on a Max-P design, especially as there is no product info available on the Nvidia site as yet.
http://hexus.net/tech/news/laptop/116906-samsung-notebook-odyssey-z-boasts-gtx-1060-max-p-graphics/
 
They mention 10% over Max-Q, so it seems more like a standard mobile 1060, maybe with a subtly tweaked power envelope for the OEM *shrug*.
 
Just a heads-up that the recent 397.31 driver seems to go into an endless reboot loop when installing on the GTX 1060:
https://forums.geforce.com/default/topic/1049931/geforce-drivers/397-31-will-not-install/1/

Strange one to get through QA. It will be interesting to see if any explanation is given for something so notable in terms of the number of customers impacted, and why it was not picked up before release.
Not just Win10 users are affected.

Edit:
Looks like it may tie into certain AIB partners: Gainward, PNY, Palit, Zotac:
Sora Geforce forum said:
1060 owners can stop providing information for now unless they have a card from a company other than Gainward, PNY, Palit or Zotac.

If you are outside of these and do have the issue, manuelg wants a dump of your vbios and the card info.
https://forums.geforce.com/default/...397-31-will-not-install/post/5332334/#5332334
 
According to Parsec, Nvidia's NVENC outperforms AMD's VCE by a factor of 2.5×; the data was extracted from 250K sessions using the RX 480, RX 470, GTX 1060 and GTX 1070, among others. Video encoding performance is fixed across GPUs since the engine is the same, so an RX 470 will perform the same as an RX 480, and a GTX 1060 will perform the same as a GTX 1070.

[Chart: NVENC vs VCE vs Quick Sync encoding latency]


We found that hardware plays a huge role in the performance of these connections. The most important part of that being the encoding latency of the video. Encoding latency is the amount of time it takes for the hardware on a GPU to compress a frame of video captured off of the GPU to prepare it to be shipped across the internet to the guest PC. Nvidia’s NVENC is approximately 2.59 times faster than AMD VCE and 1.89 times faster than Intel Quick Sync. The median encoding latency for an Nvidia card is 5.8 milliseconds; whereas, the median encoding latency on VCE is 15.06 milliseconds. This encoding latency is measured across all Co-Play sessions in Parsec, so there’s definitely a performance difference between newer generation cards than older generation cards, which we will examine in a future post.

https://blog.parsecgaming.com/nvidi...latency-in-parsec-co-op-sessions-713b9e1e048a
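As a rough sanity check (a sketch using only the median latencies quoted above, not Parsec's raw session data), the 2.59× figure falls straight out of the ratio of the two quoted medians:

```python
# Median per-frame encoding latencies (milliseconds) quoted in the
# Parsec blog post; the raw 250K-session data is not available here.
nvenc_ms = 5.8
vce_ms = 15.06

# Ratio of the medians roughly reproduces the quoted "2.59x faster" claim.
ratio = vce_ms / nvenc_ms
print(f"VCE / NVENC median latency ratio: {ratio:.2f}x")  # ~2.60x
```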

The problem seems to stem from the fact that AMD's VCE performance has regressed across generations: the R9 380/285, which use VCE 3.0, top out at 128fps at 1080p H.264 Quality, the Fury X drops to 77fps, and the RX 480/470 drop further to 55fps.

https://github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.0#r9-285
https://github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.0#r9-380
https://github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.0#r9-fury-x
https://github.com/Xaymar/obs-studio_amf-encoder-plugin/wiki/Hardware-VCE3.4#rx-470
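To put those throughput numbers in the same units as Parsec's latency figures, a quick sketch converting the fps values quoted from the wiki pages above into per-frame encode times:

```python
# 1080p H.264 "Quality" preset throughput, from the OBS AMF plugin wiki.
vce_fps = {
    "R9 380/285 (VCE 3.0)": 128,
    "Fury X (VCE 3.0)": 77,
    "RX 480/470 (VCE 3.4)": 55,
}

# At a given throughput, each frame takes 1000/fps milliseconds to encode,
# which makes the generational regression visible in ms/frame.
for card, fps in vce_fps.items():
    print(f"{card}: {fps} fps -> {1000 / fps:.1f} ms/frame")
```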


EDIT: Further encoding tests:

[Chart: further H.264 compression latency test results]


https://blog.parsecgaming.com/new-n...rds-on-h-264-compression-latency-d32784464b94
 
Nvidia might overhaul GeForce GTX 1060 with GP104 (a GTX 1070 GPU)
A new rumor just surfaced, fresh from Asia. Nvidia seems to be overhauling the GeForce GTX 1060 one more time. We’ve seen a 6GB 8 Gbps and then a 9 Gbps model, of course the 3GB model, heck even a 5GB model for the Asia region, and the latest rumor now indicates a GTX 1060 with a hacked GTX 1070 GPU.
...
That raises the question of why Nvidia would release these revised GTX 1060 cards. Perhaps clearing GPU stock to make room for new products? Then again, the 1070s have been selling like puppies with the recent mining craze, weird huh?
http://www.guru3d.com/news-story/nvidia-might-overhaul-geforce-gtx-1060-with-gp104-(1070).html
 
Maybe they're sitting on a whole bunch of GP104s that don't clock as high as the 1070. Or don't have as many shader units functioning. *shrug*

Anyway, NV's product offerings are getting kind of crowded, if this is true.
 
Maybe they're sitting on a whole bunch of GP104s that don't clock as high as the 1070. Or don't have as many shader units functioning. *shrug*

Anyway, NV's product offerings are getting kind of crowded, if this is true.
They already run GP104s at much lower clocks for efficiency in a particular Tesla model, hitting 50/75W and around 5.5 TFlops FP32.
Not sure what to make of it from a yield perspective, as GP104 already ships as a cut die with a large number of SMs disabled in the GTX 1070, which gives Nvidia a fair amount of flexibility in how it enables sections (another cost saving it benefits from more than AMD, and one that can influence product price flexibility/margin for the IHV).
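For reference, peak FP32 throughput is 2 FLOPs (one FMA) per CUDA core per clock. A minimal sketch, assuming the Tesla P4's published 2560 CUDA cores and ~1063 MHz boost clock (figures from Nvidia's spec sheet, not from this thread), lands near that 5.5 TFlops number:

```python
# Assumed Tesla P4 spec-sheet figures (not stated in this thread):
cores = 2560       # CUDA cores on the cut-down GP104
boost_ghz = 1.063  # boost clock in GHz

# Peak FP32 = 2 FLOPs per core per clock (one fused multiply-add).
tflops = 2 * cores * boost_ghz / 1000
print(f"~{tflops:.2f} TFLOPS FP32")  # ~5.44
```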

Edit:
I should have said, this could be stock/logistics clearance where they have too many dies for particular GPUs, in preparation for the next generation.

Edit2:
Clarified that the Tesla P4 already uses lower-clocked GPUs.
 
So far there are no GP104 products with memory channels or ROPs disabled, though, unless I'm missing something. Quite unlike GM204, so there could be some chips left over due to that.
 
So far there are no GP104 products with memory channels or ROPs disabled, though, unless I'm missing something. Quite unlike GM204, so there could be some chips left over due to that.
Interesting thought.
Relative to yield control via the SMs (we never see a full-complement P100 or V100 iteration), how great a problem, though, is the requirement to disable ROPs/memory controllers due to the manufacturing process?
The Titan V spec was chosen to differentiate it from the V100 Tesla and Quadro (which came later) rather than because of manufacturing process issues, and it launched reasonably early in the cycle; there was a need for a model to support the more expensive Tesla, but as we can see we have not got a 'GV102' GPU for a while, so they went that route.
If it is an issue then it works out well now for the Titan V strategy, but they did not look to release a GP100 with memory channels/ROPs disabled, and that was an expensive GPU to just discard.
The Quadro models, from what I understand, are not cut.

But then it seems this new model is just for the Chinese market, so it's quite specific and could fit what you say, or it's difficult to tell for sure (it's also a way to clear stock/logistics), as they do receive some unusual models from both IHVs.
 
If it is an issue then it works out well now for the Titan V strategy, but they did not look to release a GP100 with memory channels/ROPs disabled, and that was an expensive GPU to just discard.
The Quadro models, from what I understand, are not cut.

It seems you're forgetting the GP100 12GB models. Of course these have ROPs disabled. I'm thinking of the exact same reason as mczak: after a while you just accumulate enough chips with defects in the ROPs/memory interface. Better than to just throw them away.
 
It seems you're forgetting the GP100 12GB models. Of course these have ROPs disabled. I'm thinking of the exact same reason as mczak: after a while you just accumulate enough chips with defects in the ROPs/memory interface. Better than to just throw them away.
Yep, sorry, I did forget about them, as I focused only on the Quadro and primary HPC cards. But the Titan V launched too quickly to rely upon defects, given the scale of purchases beyond just consumers (quite a bit before the Quadro was available to clients).
That said, if mczak is right, the strategy would benefit the Titan V longer term from a manufacturing cost perspective. I must admit I have never really thought about how big the scale of memory controller defects is for these GPUs; interesting point.
Cheers
 
I don't know how likely defects in the ROPs/L2/MC are (probably not in the PHYs though; I don't think you could reroute those, and using differently placed memory chips would probably be impractical).
But don't forget the GM204 GTX 970/980, where the former had one such piece disabled (but still used a 256-bit physical interface). I don't think Nvidia did that just for product segmentation; they certainly didn't bother to tell anyone (which got them quite a lot of bad press; it probably would have been no big deal if they had just been more open about it, but in any case they didn't repeat it with the GTX 1070).
AMD, OTOH, generally doesn't seem to do cards with disabled ROPs (there are exceptions though).
 
I don't know how likely defects in the ROPs/L2/MC are (probably not in the PHYs though; I don't think you could reroute those, and using differently placed memory chips would probably be impractical).
But don't forget the GM204 GTX 970/980, where the former had one such piece disabled (but still used a 256-bit physical interface). I don't think Nvidia did that just for product segmentation; they certainly didn't bother to tell anyone (which got them quite a lot of bad press; it probably would have been no big deal if they had just been more open about it, but in any case they didn't repeat it with the GTX 1070).
AMD, OTOH, generally doesn't seem to do cards with disabled ROPs (there are exceptions though).

The bad press didn't come from "not bothering to tell anyone about the disabled ROPs" but from straight up lying about the card's configuration and capabilities (it wasn't just the ROPs).
Looks like GeForce.com only shows the 10-series these days, but when the 900-series was still listed they never actually corrected all the specs, if I'm not mistaken.

GTX 970 GM204 or GM206?
GTX 970 was always GM204.
 