$999 for the Titan X.
For me, it seems they're anticipating that Fiji will put up quite a fight.
Haven't all the Titans (not the dual-GPU one) been released at the $999 price?
Is double precision somehow broken in the Maxwell architecture? Building a ~600mm² GPU monster with ~200 GFLOPS DP?

No, there's nothing broken. From what I heard, the Maxwell with double precision got extra features added and was renamed Pascal. Evidently Nvidia wanted the best possible gaming and deep-learning performance for Maxwell, and was content with letting the traditional HPC market wait a little longer.
Could there have been some internal fight over the superscalar approach used in Fermi Gen2 and Kepler, and that's why GK210 was produced?
- Claims 4x better FP16 performance than Maxwell. Probably means 2x more FP32 ALUs together with the theoretical 2x better FP16 performance from the mixed-precision capability.

Moving less data around should be the big win for FP16 and a reason why it resurfaces again.
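Rough illustration of the bandwidth argument: halving the element size halves the bytes moved for the same tensor. A minimal sketch using NumPy (the tensor shape is made up for illustration):

```python
import numpy as np

# Hypothetical activation tensor: batch of 128, 4096 features.
shape = (128, 4096)

fp32 = np.zeros(shape, dtype=np.float32)
fp16 = np.zeros(shape, dtype=np.float16)

# FP16 moves exactly half the bytes of FP32 for the same element count.
print(fp32.nbytes)                 # 2097152 bytes
print(fp16.nbytes)                 # 1048576 bytes
print(fp32.nbytes // fp16.nbytes)  # 2
```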
http://i.imgur.com/tdzmIb3.png
7 TFLOPS single / 0.2 TFLOPS double. So that'd be about a 1140 MHz clock, boost clock probably. GTC stream for anyone who wants it:
http://www.ustream.tv/channel/gpu-technology-conference-2015
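Back-of-the-envelope check of that clock estimate, assuming GM200's 3072 CUDA cores at 2 FLOPs per core per clock (my assumption, not stated in the thread):

```python
# Solve TFLOPS = cores * 2 FLOPs/clock * clock for the clock.
CORES = 3072          # Titan X / GM200 shader count (assumption)
FLOPS_TARGET = 7e12   # the 7 TFLOPS single-precision figure above

clock_hz = FLOPS_TARGET / (CORES * 2)
print(round(clock_hz / 1e6))  # ~1139 MHz, i.e. about the 1140 MHz boost clock
```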
It's not that they're "trying to spin" this. I work in deep learning, and nobody uses FP64. We've been doing FP16 experiments instead; so far they look promising. Fun fact: the Titan X has about the same DP throughput as a GeForce GTX 580.
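That fun fact roughly checks out if you assume GTX 580 specs of 512 CUDA cores at a 1544 MHz shader clock with GeForce Fermi's 1/8 DP rate (my numbers, not from the thread):

```python
# GTX 580: single precision = cores * 2 FLOPs/clock * shader clock,
# with DP capped at 1/8 of SP on GeForce Fermi parts.
sp_580 = 512 * 2 * 1544e6   # ~1.58 TFLOPS single precision
dp_580 = sp_580 / 8         # ~0.198 TFLOPS double precision

print(round(dp_580 / 1e9))  # ~198 GFLOPS, about the Titan X's quoted 0.2 TFLOPS
```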
I think the GK110 is "young enough" to be kept in the market a bit longer for DP, and being stuck at 28nm means they had to cut somewhere.
Now nVidia is trying to spin that FP32 and FP16 are spectacular for neural networks, which is why they spent 80% of the keynote talking about neural networks.
Yes they did. I was wrong in thinking that Nvidia would not degrade the Titan name by doing so.

Since when has Nvidia ever cared about brand nomenclature confusion?
This (Titan X) should not be a Titan at all, but rather the GTX 980 Ti (which we know would have gimped DP).
It's not that they're "trying to spin" this. I work in deep learning, and nobody uses FP64. We've been doing FP16 experiments instead; so far they look promising.

I understand that, but GTC has always been their place to brag about their GPU compute, and they used to dedicate a good part of their time to FP64 performance.
Also, they're not pushing GK110 for DP. Rather GK210, a really different chip (2x register file, 2x shared memory).

I'd bet that GK210 and GK110 come from the very same wafer, the only difference being a bit of laser trimming here and there.
In the configuration of the chip, with the low SP/DP ratio, could the "cancellation" of 20nm have played a role?

With Maxwell NV got rid of the superscalar structure of ALUs. 1:4 or even 1:8 should be very cheap, at least if you didn't make mistakes in the design. 1:32 is just ridiculous.
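For scale, here is what those DP:SP ratios would mean at the Titan X's 7 TFLOPS single precision (simple arithmetic, not vendor numbers):

```python
sp = 7.0  # TFLOPS single precision, the figure quoted earlier in the thread

for ratio in (4, 8, 32):
    print(f"1:{ratio} -> {sp / ratio:.3f} TFLOPS DP")
# 1:4  -> 1.750, 1:8 -> 0.875, 1:32 -> 0.219,
# the last being in line with the ~0.2 TFLOPS DP figure above
```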
I'd bet that GK210 and GK110 come from the very same wafer, the only difference being a bit of laser trimming here and there.

GK110 has been around since the end of 2012, so they needed 2 years to enable SM_37 with bigger caches? All the data (CUDA, A1 stepping, time-frame) says it's a new chip.