You're wrong. GK210 is actually the same size as GM200; the extra cache bumped the die area up by 50 mm².

I understand that, but GTC has always been their place to brag about their GPU compute, and they used to dedicate a good part of their time to FP64 performance.
It's good that they found an area where they could brag about FP16 performance; otherwise it'd be really awkward to talk about a card with very low FP64 performance there.
I'd bet that GK210 and GK110 come from the very same wafer, the only difference being a bit of laser trimming here and there.
NVIDIA will release a slightly cut down GM200 with 6GB in the near future. Really, I can't fault them for selling the 12GB Titan at $1K if there are people buying them. I sure would. Profitsssss

Happy to admit I was wrong about the compute-oriented card; apparently they traded off compute for gaming performance. But I was dead right about the 12GB being utterly useless. Shadow of Mordor on Ultra at 4K, the absolute max you're going to find for VRAM usage, doesn't push much of anything beyond the near 50% speed boost over a 290X, even though the latter has only a third the RAM. Nvidia COULD have halved the RAM with no performance loss, but they just had to have the ridiculous, and meaningless, bullet point on the back of the virtual box, as it were.
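If anyone wants to check claims like that themselves, here is a rough sketch of how you could log VRAM usage while a game runs. It assumes the nvidia-ml-py (pynvml) bindings are installed; the device index and polling interval are just placeholders.

    # Rough sketch: poll total VRAM usage once a second via NVML (pynvml assumed installed).
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU; adjust if you have several

    try:
        while True:
            info = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"{info.used / 2**20:8.0f} MiB used of {info.total / 2**20:.0f} MiB")
            time.sleep(1.0)
    finally:
        pynvml.nvmlShutdown()

Note this reports total memory in use on the device, not a per-game figure, so close other GPU-heavy applications before reading too much into the numbers.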
GK210 is not the same chip as GK110. It is known.
This is not a gaming card. Memory is the bottleneck in deep learning today. I would gladly use 32 GB on a card if I could get it.
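For a sense of scale, here is a back-of-envelope sketch of where the gigabytes go when training a convnet. Every layer shape and parameter count below is my own assumption for illustration, not anything measured or taken from this thread.

    # Back-of-envelope sketch: activations kept for backprop, plus weights, gradients
    # and optimizer state, add up fast. All numbers below are assumed, not measured.
    batch = 128                                    # assumed mini-batch size
    fp32 = 4                                       # bytes per value
    layer_shapes = [(64, 112, 112), (128, 56, 56), (256, 28, 28), (512, 14, 14)]
    act_bytes = sum(batch * c * h * w * fp32 for c, h, w in layer_shapes)
    weight_bytes = 138e6 * fp32                    # ~138M parameters (VGG-16-sized, assumed)
    total = act_bytes * 2 + weight_bytes * 3       # activations + their gradients; weights + grads + update state
    print(f"roughly {total / 2**30:.1f} GB, and that's only a handful of layers")

That comes out around 3 GB for just four representative layers at one batch size; a full network, a bigger batch, or framework overhead pushes it well past what 4-6 GB cards can hold, which is why the 12GB matters here.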
Well, it is clearly an immense amount of memory, but Titan X works best at extreme resolutions and our 4K testing suggests that the 4GB found in the GTX 980 isn't quite enough to service 4K gaming on at least one of the games we tested. Meanwhile, other games use the memory as a vast cache - we spotted Call of Duty Advanced Warfare using up to 8.5GB of VRAM.
...
At 4K, the benchmark comparisons with the GTX 980 SLI set-up really show the card's strengths - frame-rates are competitive but as you can see from the videos (which also track frame-times - more indicative of the actual gameplay experience), the overall consistency in performance is significantly improved. Take Assassin's Creed Unity, for instance. GTX 980 SLI frame-rates are higher than the overclocked Titan X by nine per cent, but it comes at a cost - significant stutter. In this case, we suspect that ACU at 4K is tapping out the 4GB of RAM on the GTX 980, while Titan X has no real memory limitations at all.
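For what it's worth, the render targets themselves are not where most of that memory goes. A quick back-of-envelope sketch (buffer counts and formats below are assumptions for illustration, not taken from the article):

    # Rough sketch of render-target memory at 4K; counts and formats are assumed.
    width, height = 3840, 2160
    mib = 2**20

    def target_mib(count, bytes_per_pixel):
        return width * height * bytes_per_pixel * count / mib

    gbuffer = target_mib(4, 4)          # assume four RGBA8 G-buffer targets
    hdr     = target_mib(1, 8)          # assume one FP16 HDR target
    depth   = target_mib(1, 4)          # assume a 32-bit depth buffer
    print(f"G-buffer ~{gbuffer:.0f} MiB, HDR ~{hdr:.0f} MiB, depth ~{depth:.0f} MiB")
    # A few hundred MiB in total - the gigabytes Advanced Warfare grabs are mostly
    # texture data being cached, not the frame buffers themselves.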
I think Nvidia sees deep learning as the first GPU compute technology that covers a wide field of applications and speaks to one's imagination as well. (It definitely speaks to mine: I find neural nets unusually fascinating, and typed in the source code for one from a magazine sometime in the early nineties.) It may very well be a major revenue driver for them in the near future, and how wonderful that it doesn't need FP64!
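To make the precision point concrete, here is a tiny illustrative snippet (my own, nothing from the thread) showing why half precision is usually enough for nets and double precision is overkill:

    # Illustrative only: weights stored in FP16 take half the space of FP32, and the
    # rounding error this introduces is tiny next to the noise SGD adds anyway.
    import numpy as np

    w32 = np.random.randn(4096, 4096).astype(np.float32)
    w16 = w32.astype(np.float16)
    print(w32.nbytes // 2**20, "MiB in FP32 vs", w16.nbytes // 2**20, "MiB in FP16")
    print("worst-case rounding error:", float(np.abs(w32 - w16.astype(np.float32)).max()))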
I think you'd lose that bet, but I'm more annoyed by the fact that this 'laser trimming' lingo is still in use. GPUs aren't the high-precision A/D converters of the eighties; a simple fuse will do just fine, thank you. (Nothing personal, just an irrational pet peeve of mine.)
That 12GB will come in handy for their deep learning stuff. So, just like the original Titan, it will have some extra appeal for researchers. That's all it needs.
You should check out the vast body of neural network literature and libraries that use FirePro. Don't worry about it taking too much of your time.

If you need memory, buy a FirePro W9100 with 16GB on a 512-bit bus... 1/2 DP rate, 5.4 TFLOPS... it was available 2 years ago.
Also, since when has the GTX 980 become a mid-range card?

So TechReport is using the new Beyond3D Test Suite. Is there a front-page article explaining this wonderful technology linked somewhere?
I like the black/random fillrate test. Very nice. And as I've long suspected (from before Maxwell) NVidia has been doing something to make fill more efficient.
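In case it's not obvious why a black target fills faster than a random one, here is a crude analogy of my own (plain CPU-side compression, not the actual Beyond3D test or NVIDIA's hardware scheme): a constant-colour surface compresses to almost nothing, so a bandwidth-limited fill pays far less per pixel.

    # Crude analogy only: framebuffer colour compression is a lossless, per-tile
    # hardware scheme, but the bandwidth argument is the same.
    import os
    import zlib

    black_tile  = bytes(1 << 20)          # 1 MiB of constant "black" pixels
    random_tile = os.urandom(1 << 20)     # 1 MiB of incompressible noise
    print("black tile compresses to: ", len(zlib.compress(black_tile)), "bytes")
    print("random tile compresses to:", len(zlib.compress(random_tile)), "bytes")
    # If the hardware can shrink what it writes to memory, the black fill stops being
    # bandwidth-bound - which is the gap the black/random comparison exposes.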
"The mid-range GTX 980"