frameavenger
Newcomer
Not really. Announced at GTC. Only useful for some very specific compute use cases. Priced for no volume at all. But feel free to disagree, and express outrage if you prefer. It's of no importance.
I AGREE.
And if the rumors are true, it was planned to be released earlier and then delayed due to issues. I can't be bothered to look up exactly how long the delay was, but shit happens in engineering.
I think it was originally meant to appeal to consumers, but plans changed when the 295 X2 came out.
(Not that I understand what the problem is with paper launches either.)
Yeah, that's another thing I don't understand: the fascination with product names of which we don't know anything in the first place.

What I remember most about this year's GTC is that Volta disappeared off Nv's roadmap and Pascal was announced to be coming after Maxwell.
There's no reason for bigger Maxwells not to show the same perf/W and perf/mm² improvements we saw with the GTX 750 Ti. That should be sufficient to lift them quite a bit beyond current performance levels. 16nm will give them yet another lift, but that's still at least a year away, and probably more. There's no good reason to wait for that.

Just how much more performance can Nvidia get out of 28nm without making unrealistically huge chips? It seems to me that they can only adjust the architecture so much at 28nm before hitting a wall. If I were Nvidia, I'd consider holding Maxwell back until 16nm was available.
138 = 2·3·23 doesn't work well for basically any number of SMMs anywhere near 15, so I don't think that number is correct at all (as opposed to the TMU count being correct and the shader count being incorrect).

p.s.: the # of TMUs doesn't match the SMX's.
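The factor argument above can be checked quickly. This sketch assumes the GM107 layout of 8 TMUs per SMM (the rumored chip's actual ratio is not confirmed here):

```python
# Check whether the rumored 138 TMUs can be split evenly across ~15 SMMs,
# assuming 8 TMUs per SMM as in GM107. That per-SMM ratio is an assumption.

def factors(n):
    """Return all divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

rumored_tmus = 138
print(factors(rumored_tmus))   # [1, 2, 3, 6, 23, 46, 69, 138]

# 138 TMUs at 8 per SMM would mean a fractional SMM count:
print(rumored_tmus / 8)        # 17.25 -- not an integer, so the figure looks off

# 15 SMMs (3x GM107's 5) would instead give:
print(15 * 8)                  # 120 TMUs
```

None of 138's divisors line up with a plausible TMU-per-SMM ratio around 15 SMMs, which is the point of the post.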
15 SMM may be a reasonable possibility since they could be going for 3x GM107 in terms of SMM count.
Or it could be that GPUz isn't reading the core count correctly.
GK104 had 4x the cores of GK107. GF104/GF114 had 4x the cores of GF107. I think when it's all said and done, a fully enabled GM204 chip will have 2,560 Maxwell cores.
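The 4x pattern behind that guess works out like this, using the public Kepler and Maxwell core counts; the GM204 figure is the post's extrapolation, not a confirmed spec:

```python
# 4x scaling of the x04 chip over the x07 chip, per the post's pattern.
gk107, gk104 = 384, 1536       # public Kepler core counts
assert gk104 == 4 * gk107      # Kepler followed the 4x pattern

gm107 = 640                    # GM107: 5 SMMs x 128 cores
gm204_guess = 4 * gm107        # speculative GM204 count if the pattern holds
print(gm204_guess)             # 2560
```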
GP100? From the 35-watt figure, it's an approximate 35% improvement over GM200. Could even Pascal be on 28nm?

Hmmm, GM200 with twice the performance per watt of GK110 and a release by the end of 2014? Interesting indeed!
http://www.3dcenter.org/news/nvidias-big-chips-gk210-gm200-gp100-bestaetigt
With Kepler they weren't up against a theoretical upper bound on how many transistors they could squeeze into 28nm when GK110 reached 551mm². GM200, on the other hand, is bound by exactly that, and there still has to be a reasonable performance gap between GM200-based SKUs (salvage parts included) and any GM204-based SKUs.