Nvidia Pascal Announcement

When comparing the perf/W and perf numbers, keep in mind the feature disparity between each vendor's current generation.
Nvidia chose to limit double-rate half-precision (as well as the efficient pack/unpack instructions) to GP100 only. AMD did include this in Polaris.

This is pretty low-hanging fruit for developers to pick (definitely easier than fiddling with a custom packed structure), so it's unlikely that upcoming titles will show the same relative efficiency on Polaris as current ones do.
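
To make the pack/unpack point concrete, here's a minimal CUDA sketch of what the packed-FP16 path looks like (my own illustration, not from either vendor's material; the kernel name is made up): the __half2 type holds two FP16 values, so a single __hfma2 issues two half-precision FMAs per instruction, which is where the "double rate" comes from, and __floats2half2_rn does the packing you'd otherwise have to fiddle with by hand.

```cuda
#include <cuda_fp16.h>

// Packed FP16 "a*x + y": each __half2 holds two FP16 values, so one __hfma2
// issues two half-precision FMAs. Needs compute capability 5.3 or higher
// (e.g. compile with -arch=sm_60); on parts without fast FP16 the same code
// still runs, just without the 2x throughput.
__global__ void haxpy2(int n, __half2 a, const __half2 *x, __half2 *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                          // n counts half2 pairs, not scalars
        y[i] = __hfma2(a, x[i], y[i]);  // two FP16 FMAs in one instruction
}
```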

Plus, Nvidia is most likely picking the best case, not an average over multiple titles, when comparing the 1060 to the RX 480, whereas the power management of the RX 480 is currently simply broken in a couple of titles.
Unless Nvidia can back their efficiency claims against an RX 480 running the driver hotfix due in the next few days, and list in detail the setup in which they compared the cards, I wouldn't put any stock in their relative numbers.


Last but not least: given the "exceptional" availability of the 1080 and 1070 cards, does anyone really expect that stocks for the 1060 currently look any better?
Or is this just going to be yet another paper launch, with barely enough cards available to stir up some PR, but not to actually meet demand?
Heck, even if Nvidia intended an MSRP of $300 for the 6GB model, if they don't ramp up stock better this time, we are not going to see these cards for less than $350-400 at retail.
 
When comparing the perf/W and perf numbers, keep in mind the feature disparity between each vendor's current generation.
Nvidia chose to limit double-rate half-precision (as well as the efficient pack/unpack instructions) to GP100 only. AMD did include this in Polaris.
Polaris runs half-precision ops at the same rate as single-precision ops; there is no 2x rate. If you use half-precision math in DX/OpenGL, NVIDIA will run it at full precision on gaming parts, so there is no perf penalty.

Unless Nvidia can back their efficiency claims against an RX 480 running the driver hotfix due in the next few days, and list in detail the setup in which they compared the cards, I wouldn't put any stock in their relative numbers.
Are you saying that statements made by NVIDIA in the past should reflect future events? Are you serious?

Not to mention that the "Polaris hotfix" might negatively affect performance. We just don't know yet.
 
News spreading like wildfire on the web is that the upcoming GeForce GTX 1060 could possibly not be SLI compatible. Judging from leaked photos, it seems clear that the card does not have SLI connectors.

Obviously it could be possible that the card doesn't need an SLI bridge and moves its data over PCI-Express, much like AMD does these days. But that would be a totally different direction for Nvidia, as they have always required an SLI bridge, hence I do not find it plausible (over PCI-e). The photos that surfaced certainly indicate the lack of SLI connectors.

http://www.guru3d.com/news-story/no-sli-for-geforce-gtx-1060.html
 
Raja said in an interview that the Polaris targets were chosen about two years ago.
Still, given the seemingly genuine results of 1400+ and even 1500+ MHz RX 480 clocks, GDDR5X may be used to make some hideous 250W RX 490. Hopefully nothing as bad as the FX-9000 joke. :runaway::runaway:

Actually, that would be an RX 480X or RX 485X. AMD's new naming means an RX 485 would be the 2nd revision of the card. I don't think they'll bump the TDP up that crazily. And AMD claimed the RX 490 tier is for cards with a bus wider than 256-bit.
http://www.pcgameshardware.de/screenshots/1280x1024/2016/06/Radeon-RX-400-Nomenklatur-pcgh.png
Polaris runs half-precision ops at the same rate as single-precision ops; there is no 2x rate. If you use half-precision math in DX/OpenGL, NVIDIA will run it at full precision on gaming parts, so there is no perf penalty.


Are you saying that statements made by NVIDIA in the past should reflect future events? Are you serious?

Not to mention that the "Polaris hotfix" might negatively affect performance. We just don't know yet.
From what I have read, quite a few sites have managed to reduce power by over 10% with minimal performance loss, some even by 15-20% with <1% performance loss.
 
Finally, big Pascal is coming to gamers and, as expected, it will be crazy fast:
http://vrworld.com/2016/07/05/nvidia-gp100-titan-faster-geforce-1080/
so fast that in many cases it will be CPU bound...

My old 7970s (CFX) are still CPU bound in many scenarios, lol... Sorry, it was my funny addition.

And I can tell you that with my 4930K, 6 cores at 5GHz (H2O), I'm still CPU bound at 2560x1440 (sorry, the article was really good before I read the CPU-bound scenario).
 
Just to put some brakes on that: the site is BSN's Theo Valich's new site.
NVIDIA has also stated that the PCIe version of P100 is coming in Q4, so why on earth would they put out a lower-margin Titan first?
Also, the PCIe version of P100 is supposed to use the very same GP100 as the Mezzanine version, and now it's suddenly GP102?
 
Just to put some brakes on that: the site is BSN's Theo Valich's new site.
NVIDIA has also stated that the PCIe version of P100 is coming in Q4, so why on earth would they put out a lower-margin Titan first?
Also, the PCIe version of P100 is supposed to use the very same GP100 as the Mezzanine version, and now it's suddenly GP102?
Even more so: it's a direct referral from BSN's main page now.
 
So they think August 17th for the reveal.... I guess I'll hold off purchasing a 1080 for at least that long then!
They are doing what I thought.
According to VRWorld, it's releasing initially on Tesla and Quadro, and then after that coming to consumers as a Titan.
Could be anywhere from 2-6 weeks difference between releases.
Just mentioning in case that amount of time is a consideration.

Cheers
 
Maybe the Titan is a GP100-based GPU and the Ti is GP102-based. That would definitely give the Titan a purpose in the market.
 
Maybe the Titan is a GP100-based GPU and the Ti is GP102-based. That would definitely give the Titan a purpose in the market.
They have said GP100 is Tesla-only, which makes sense because Nvidia needs a slightly different design/requirement for the next tier down and carries that across all segments, similar to what they did with GK110.
Cheers
 