NVIDIA Maxwell Speculation Thread

I understand that, but GTC has always been their place to brag about their GPU compute, and they used to dedicate a good part of their time to FP64 performance.
It's good that they found an area where they could brag about FP16 performance; otherwise it'd be really awkward to talk about a card with very low FP64 performance there.



I'd bet that GK210 and GK110 come from the very same wafer, the only difference being a bit of laser trimming here and there.
You're wrong. GK210 is actually the same size as GM200. The extra cache bumped up the die area by 50mm^2.
 
Happy to admit I was wrong about this being a compute-oriented card; apparently they traded off compute for gaming performance.

But I was dead right about the 12GB being utterly useless. Shadow of Mordor on Ultra at 4K, about the absolute maximum VRAM usage you're going to find, doesn't push the card much beyond its near-50% speed boost over a 290X, even though the latter has only a third of the RAM. Nvidia COULD have halved the RAM with no performance loss, but they just had to have that ridiculous, and meaningless, bullet point on the back of the virtual box, as it were.
 
Happy to admit I was wrong about this being a compute-oriented card; apparently they traded off compute for gaming performance.

But I was dead right about the 12GB being utterly useless. Shadow of Mordor on Ultra at 4K, about the absolute maximum VRAM usage you're going to find, doesn't push the card much beyond its near-50% speed boost over a 290X, even though the latter has only a third of the RAM. Nvidia COULD have halved the RAM with no performance loss, but they just had to have that ridiculous, and meaningless, bullet point on the back of the virtual box, as it were.
NVIDIA will release a slightly cut-down GM200 with 6GB in the near future. Really, I can't fault them for selling the 12GB Titan at $1K if there are people buying them. I sure would. Profitsssss
 
GK210 is not the same chip as GK110. It is known.

Is GK210 only found on a dual-GPU Quadro card, or am I wrong? (I suddenly have a doubt.)

On the "if peoples are buying it"; i allready see some diehard fan having order 1 or 2.." ( this said not so much as i could have expect, even if the gpu have been 10% faster of a 980 thoses guys will have order it ( and will non stop break my ears thoses next 3 months by justify their decision )
 
This is not a gaming card. Memory is the bottleneck in deep learning today. I would gladly use 32 GB on a card if I could get it.

That's my point! Why is there 12GB of this stuff when FP64 compute is so cut down? I don't doubt they'll put out a cut-down/salvage card called a "980 Ti" or whatever a little while after AMD releases the 390X at similar performance and $250 cheaper (or more). I suppose you can charge a thousand bucks for it and people who don't know any better will buy it, though. Profit margins indeed.
 
But I was dead right about the 12GB being utterly useless. Shadow of Mordor on Ultra at 4K, about the absolute maximum VRAM usage you're going to find, doesn't push the card much beyond its near-50% speed boost over a 290X, even though the latter has only a third of the RAM. Nvidia COULD have halved the RAM with no performance loss, but they just had to have that ridiculous, and meaningless, bullet point on the back of the virtual box, as it were.

Actually, there are other games that have higher VRAM usage...

Well, it is clearly an immense amount of memory, but Titan X works best at extreme resolutions and our 4K testing suggests that the 4GB found in the GTX 980 isn't quite enough to service 4K gaming on at least one of the games we tested. Meanwhile, other games use the memory as a vast cache - we spotted Call of Duty Advanced Warfare using up to 8.5GB of VRAM.
...
At 4K, the benchmark comparisons with the GTX 980 SLI set-up really show the card's strengths - frame-rates are competitive but as you can see from the videos (which also track frame-times - more indicative of the actual gameplay experience), the overall consistency in performance is significantly improved. Take Assassin's Creed Unity, for instance. GTX 980 SLI frame-rates are higher than the overclocked Titan X by nine per cent, but it comes at a cost - significant stutter. In this case, we suspect that ACU at 4K is tapping out the 4GB of RAM on the GTX 980, while Titan X has no real memory limitations at all.

http://www.eurogamer.net/articles/digitalfoundry-2015-nvidia-geforce-gtx-titan-x-review
 
NVIDIA is being dumber and dumber again, holding back clocks to hit ridiculous power targets. The least they could have done is clock the card the same as the GTX 980!
 
Happy to admit I was wrong about this being a compute-oriented card; apparently they traded off compute for gaming performance.

But I was dead right about the 12GB being utterly useless. Shadow of Mordor on Ultra at 4K, about the absolute maximum VRAM usage you're going to find, doesn't push the card much beyond its near-50% speed boost over a 290X, even though the latter has only a third of the RAM. Nvidia COULD have halved the RAM with no performance loss, but they just had to have that ridiculous, and meaningless, bullet point on the back of the virtual box, as it were.

I suppose some people who need good GPGPU performance but don't need DP might find that appealing. But you're probably right that it's mostly marketing—they didn't want to be "outdone" by an 8GB card from AMD.
 
I understand that, but GTC has always been their place to brag about their GPU compute, and they used to dedicate a good part of their time to FP64 performance.
I think Nvidia sees deep learning as the first GPU compute technology that covers a wide field of applications and speaks to the imagination as well. (It definitely speaks to mine: I find neural nets unusually fascinating, and I typed in the source code for one from a magazine sometime in the early nineties.) It may very well be a major revenue driver for them in the near future, and how wonderful that it doesn't need FP64!
Can't blame them for doubling down on that.

I'd bet that GK210 and GK110 come from the very same wafer, the only difference being a bit of laser trimming here and there.
I think you'd lose that bet, but I'm more annoyed by the fact that this "laser trimming" lingo is still in use. GPUs aren't the high-precision A/D converters of the eighties; a simple fuse will do just fine, thank you. (Nothing personal, just an irrational pet peeve of mine.)
 
Nvidia COULD have halved the RAM with no performance loss, but they just had to have that ridiculous, and meaningless, bullet point on the back of the virtual box, as it were.
That 12GB will come in handy for their deep learning stuff. So just like the original Titan, it will have some extra appeal for researchers. That's all it needs.
And some gamers will just buy it because they can... Why not?
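
For a sense of scale on the deep-learning point, here's a quick back-of-the-envelope sketch in Python. The layer widths and batch size are invented purely for illustration (not any particular published network), and real frameworks add their own overhead on top:

# Rough FP32 training-memory estimate for a hypothetical fully
# connected net. All sizes here are invented for illustration.
layers = [4096, 8192, 8192, 8192, 4096]   # hypothetical layer widths
batch = 256
bytes_per_fp32 = 4

# Weight matrices connect each pair of consecutive layers.
weights = sum(a * b for a, b in zip(layers, layers[1:]))
activations = batch * sum(layers)
# Assume SGD with momentum: gradients plus one extra buffer per weight.
total_bytes = (3 * weights + 2 * activations) * bytes_per_fp32

print(f"~{total_bytes / 2**30:.2f} GiB before framework overhead")

That toy net already lands in the gigabytes; scale the widths or the batch up a little and 12GB stops looking silly very fast.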
 
This is not a gaming card. Memory is the bottleneck in deep learning today. I would gladly use 32 GB on a card if I could get it.

If you need a pile of memory, buy a FirePro W9100 with 16GB on a 512-bit bus: 1/2 DP rate, 5.4 TFLOPS... it was available two years ago.

I don't understand it: for something as specific as that, why would you buy a gaming product with 12GB of memory but only 6.4 TFLOPS SP? If you work in this domain, even a $20K GPU is not a problem.
 
Single precision computation is very useful in fields like statistical analysis, machine learning, and data mining, but not so much in other areas that involve lots of iterative algorithms.
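
A quick sketch of what I mean about iterative algorithms: accumulate 0.1 a million times in both precisions (NumPy is only used here to get a genuine float32 accumulator):

import numpy as np

# Repeatedly add 0.1 one million times; the true answer is 100000.0.
# Each float32 addition rounds, and the error compounds step by step.
acc32 = np.float32(0.0)
acc64 = np.float64(0.0)
for _ in range(1_000_000):
    acc32 += np.float32(0.1)
    acc64 += np.float64(0.1)

print(acc32)  # drifts visibly away from 100000.0
print(acc64)  # agrees with 100000.0 to many digits

In machine learning that kind of noise mostly washes out; in a long iterative solver it doesn't.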

As for the 12GB of memory, it's very useful for CUDA applications.

If your application can live with single precision computation, then I would say Titan X is a good upgrade over GK110 (about 2x the performance, based on my experience with the GTX 980 and GK110), and AMD's products are out of the question: their ecosystem and toolchain are too weak, and even their BLAS routines are very poorly maintained.

On the 28nm node, NV had to make hard design decisions, and the decision to trade FP64 for better FP32 performance seems to give Intel a good chance to establish themselves in the heterogeneous computing market with their upcoming MIC2.
 
If you need memory stack, buy a Firepro W9100 with 16GB / 512bit... 1/2 DP rate, 5.4Tflops,.. it was available 2 years ago.
You should check out the vast body of neural network literature and libraries that use FirePro. Don't worry about it taking up too much of your time.
 
So TechReport is using the new Beyond3D Test Suite. Is there a front page where an article explaining this wonderful technology gets linked?

I like the black/random fillrate test. Very nice. And as I've long suspected (from before Maxwell), NVidia has been doing something to make fill more efficient.
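
You don't even need a GPU to see why black vs. random is such a good tell. If the hardware compresses render targets, a uniform surface costs almost no bandwidth while random data compresses not at all. A loose CPU-side analogy in Python, with zlib standing in for whatever per-tile scheme the hardware actually uses:

import os
import zlib

# A 1080p RGBA8 "framebuffer": uniform black vs. random noise.
size = 1920 * 1080 * 4
black = bytes(size)              # all zeros, like a cleared target
noise = os.urandom(size)         # incompressible worst case

for name, buf in (("black", black), ("random", noise)):
    ratio = len(zlib.compress(buf, 1)) / len(buf)
    print(f"{name}: {ratio:.4f} of original size")

Black shrinks to almost nothing while random stays at essentially full size, so a fillrate gap between the two is a strong hint that bandwidth, not ROP throughput, is being saved by compression.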
 
So TechReport is using the new Beyond3D Test Suite. Is there a front page where an article explaining this wonderful technology gets linked?

I like the black/random fillrate test. Very nice. And as I've long suspected (from before Maxwell), NVidia has been doing something to make fill more efficient.
Also, since when has the GTX 980 become a mid-range card?
The mid-range GTX 980
 