NVIDIA Maxwell Speculation Thread


I find this graph much more interesting, for once :D

perfwatt.gif
 
Looks like the whole design is dictated by the power envelope as the top priority. Even partitioning the multiprocessors into sub-units was probably done with finer clock/power gating control in mind.
The performance gains will come mostly from the optimized memory pipeline, which was outlined as the main architectural target for this generation anyway. The feature-set upgrade is apparently left on the sidelines for now, with the exception of the usual CUDA revision iteration.

I think we have to wait for Maxwell 2.0 on a finer process node for more changes.
 
Fantastic turn for NVIDIA. Amazing that they can increase perf/W by so much on the same node.
 
http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750-17.html

Better grab your GM107 before the coin-mining crowd realizes this... :???:

I read that too, and it sure looks like Nvidia vastly improved their integer operations.

Of course, Bitcoin isn't the only cryptocurrency reliant on hashing. MaxCoin, for example, is a member of the SHA3 family, and it's supported in the latest version of CudaMiner. Curious as to how GeForce GTX 750 Ti sizes up to 650 Ti, we ran the following SHA2-based test in Sandra 2014:

crypto.png



There are big gains to be had from DirectX's Compute Shader, but throughput via CUDA is downright phenomenal. It's probable that Maxwell improves some of the integer operations that were slower on Kepler. Hopefully Nvidia opens up more about what the new architecture can do.
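For a rough feel of what such a hashing throughput test measures, here is a minimal CPU-side sketch in Python; the function name, buffer size, and timing window are my own choices for illustration, not Sandra's actual methodology (and a real GPU test would of course go through CUDA or a compute shader instead of hashlib):

```python
import hashlib
import time

def sha256_throughput(block_size=64 * 1024, duration=0.25):
    """Repeatedly hash a fixed buffer and return throughput in MB/s."""
    data = bytes(range(256)) * (block_size // 256)
    hashed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        hashlib.sha256(data).digest()  # one SHA-2 pass over the buffer
        hashed += len(data)
    return hashed / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    print(f"SHA-256: {sha256_throughput():.0f} MB/s")
```

The same loop structure (hash, count bytes, divide by elapsed time) is what any of these benchmarks boils down to; the architectural question is just how fast the integer rotate/add/xor core of SHA-2 runs on each chip.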
 
Comparing to the 265 is a joke considering it's a laughably inefficient 150W card versus a super-efficient 60W card.
Well, it's the same price point. And Nvidia doesn't win perf/price there. Not too bad though, given the huge lead in perf/W (Nvidia didn't win perf/price with Fermi either, and they were completely uncompetitive in perf/W there on top of it).
The architecture looks impressive to me though: the chip is slightly less complex than Bonaire and definitely faster (even more impressive considering the low memory speed), with a large perf/W advantage. And compute performance got a lot more competitive too.
Only the DP rate is a bit of a sore point, I guess; just when you think they can't get any lower, it now seems to be 1/32, at least according to the hardware.fr diagrams: http://www.hardware.fr/news/13568/nvidia-lance-geforce-gtx-750-ti-750-maxwell.html. But obviously, this is not an HPC chip :).
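To put that 1/32 ratio in numbers, here is a back-of-the-envelope sketch; it assumes the published GTX 750 Ti specs (640 CUDA cores, 1020 MHz base clock) and the 1/32 DP rate from the hardware.fr diagrams:

```python
# Theoretical throughput for GM107 (GTX 750 Ti), assuming published specs
# and the 1/32 DP rate reported by hardware.fr.
cuda_cores = 640
base_clock_ghz = 1.020
sp_gflops = cuda_cores * 2 * base_clock_ghz  # 2 FLOPs/cycle/core via FMA
dp_gflops = sp_gflops / 32                   # 1/32 DP-to-SP ratio
print(f"SP ~ {sp_gflops:.0f} GFLOPS, DP ~ {dp_gflops:.1f} GFLOPS")
```

So roughly 1.3 TFLOPS single precision but only about 41 GFLOPS double precision, which is why nobody would mistake this for an HPC part.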
 
I don't know if AMD has any significant presence left in the discrete notebook market, but if they do, this thing is not going to help.
 
Well, the question now is whether a 180-watt Maxwell chip on 28nm would be faster than a Titan... or, what amounts to the same thing, whether this efficiency translates to bigger chips.
 
Actually, for Bitcoin there, Bonaire still wins. That seems to be the exception though (at least in perf/W terms, though that's just a guess, as there's no power measurement for this benchmark).

Litecoin is the more relevant benchmark, as Bitcoin is now dominated by ASICs (from what I've heard). But AMD apparently retains a significant advantage in Litecoin.

I don't know if AMD has any significant presence left in the discrete notebook market, but if they do, this thing is not going to help.

There are still a few holes waiting to be filled in AMD's roadmap:

AMD-R9-M200-Mobile-Graphics-Roadmap-2013-2014.jpg


They could be rebrands, but I think they might be new chips. I mean, they're "coming soon". How long does it take to change a label? There's no reason to wait to rebrand chips, so I'm hoping GCN 2.0 is about to make an entrance.
 