NVIDIA Maxwell Speculation Thread

Nvidia probably doesn't want to fully disable the last 0.5 GB due to the legal implications at this point. All they seem to do is keep a blacklist of games that would use more than 3.5 GB and show stuttering, and cap VRAM use below that threshold for those titles.

This "blacklist" and 3.5 GB VRAM cap make absolutely no sense at all.

If the game does not require more than 3.5 GB, then the 970 runs it just fine.

If the game REQUIRES more than 3.5 GB, then not using the extra 0.5 GB will actually make the game stutter MORE, because the game will have to go to main memory, which is even slower than the 0.5 GB segment.
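For scale, the bandwidth gap this argument rests on can be put in rough numbers. These are ballpark figures (the slow-segment number comes from third-party measurements, not an official spec), so treat this as a sketch:

```python
# Approximate peak bandwidths relevant to the GTX 970 argument, in GB/s.
# The slow-segment figure is a rough review-measured number, not official.
FAST_SEGMENT = 196.0   # 3.5 GB segment, seven memory controllers
SLOW_SEGMENT = 28.0    # 0.5 GB segment, single 32-bit controller
PCIE3_X16    = 15.75   # theoretical one-way PCIe 3.0 x16 to system RAM

# Spilling to system RAM is slower than even the "slow" 0.5 GB segment:
assert SLOW_SEGMENT > PCIE3_X16
print(f"slow segment ~{SLOW_SEGMENT / PCIE3_X16:.2f}x faster than PCIe path")
```

So even in the worst case, staying on the card beats spilling over the bus.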
 

If the game doesn't require more than 3.5 GB but tries to use more than that anyway, it will stutter when it otherwise wouldn't.
 
I don't think that's necessarily true. With all the talk about virtual texturing, the app may use some kind of texture atlas sized according to the graphics card's memory. In that sense, from the driver's perspective it never runs out of memory, since resizing the atlas is the app's responsibility. If the texture atlas is now sized larger because it doesn't take the segmented memory into account, the driver might run out of "fast" memory and need to swap things around (presumably by copying between the fast and slow segments). And that could well have more impact than a smaller texture atlas would, because the driver doesn't have all the knowledge the app has, so it might not swap out the "right" bits.
I'm not sure, though, that a cleverer driver couldn't overcome those shortcomings. In any case, I highly doubt the additional software complexity is worth it for the meager 1/8 more (very slow) RAM it gets you.
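The swapping being described can be sketched as a toy model: the app sizes its atlas to the full reported memory, so the driver has to juggle which allocations live in the fast 3.5 GB segment, evicting the least-recently-used ones. All names and sizes here are illustrative, not how the actual driver works:

```python
# Toy model of driver-side swapping on segmented VRAM. Hypothetical names;
# the real driver's placement policy is not public.
from collections import OrderedDict

FAST_CAPACITY = 3.5  # GB in the fast segment

class SegmentedVram:
    def __init__(self):
        self.fast = OrderedDict()  # name -> size in GB, oldest-touched first
        self.slow = {}             # name -> size in GB

    def touch(self, name, size):
        """App touches a resource; driver keeps hot data in the fast segment."""
        self.slow.pop(name, None)
        self.fast.pop(name, None)
        self.fast[name] = size
        # Demote least-recently-used resources to the slow segment.
        while sum(self.fast.values()) > FAST_CAPACITY:
            victim, vsize = self.fast.popitem(last=False)
            self.slow[victim] = vsize  # this copy is the potential stutter

vram = SegmentedVram()
vram.touch("atlas_page_0", 2.0)
vram.touch("atlas_page_1", 1.0)
vram.touch("atlas_page_2", 1.0)  # exceeds 3.5 GB -> page_0 demoted to slow
```

The point of the sketch: because the driver only sees access recency, not the app's knowledge of what it will need next frame, it may demote the "wrong" pages.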
 
If the game doesn't require more than 3.5 GB but tries to use more than that, it will stutter when otherwise it wouldn't
This scenario would require the driver to report 3.5 GB of available memory to the game, and the game would have to know what to do with that. That's something that's testable.
 
And that is indeed my point.

Also, if there's ANY caching going on, which I'm sure there is, then limiting the RAM to 3.5 GB will also affect how much RAM gets used. The comparisons between the 970 and 980 already show the driver is doing something to this effect. In Mordor, the 980 reaches the 4 GB barrier more easily than the 970. But does the game actually require 4 GB? I sort of doubt it, since the 970 driver tries to avoid going over 3.5 GB as well.
 
A lot of games seem to put a bunch of stuff in VRAM that isn't strictly necessary. Maybe they just fill it up because why not. I can assure you the 970 performs fantastically in Shadow of Mordor at 1080p with all max settings.
 
Holy mother of...... that's my expectations for Elder Scrolls VI
 
GK110 was 7B transistors. So at 8B transistors, I'm expecting about the same number of ALUs as the GK110 but with more SMMs than GK110 had SMXs.
 
If the rumors are true that GM200 doesn't have fast DP, then perhaps it will have almost as many SP units as GK110 has SP and DP units combined.
 
GM204 had a 48.5% increase in transistor count over GK104.
GM107 had a similar 44% increase in transistor count over GK107.
GM200 only has a 12% increase over GK110. GM200 almost definitely has limited FP64 throughput, which is fine with me; I'm just pointing out the discrepancy.
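As a rough sanity check on those percentages, using commonly cited approximate transistor counts (in billions; treat both the inputs and the derived figures as ballpark numbers):

```python
# Commonly cited approximate transistor counts, in billions of transistors.
chips = {
    ("GK104", "GM204"): (3.5, 5.2),
    ("GK107", "GM107"): (1.3, 1.87),
    ("GK110", "GM200"): (7.1, 8.0),
}
for (kepler, maxwell), (old, new) in chips.items():
    increase = (new / old - 1) * 100
    print(f"{kepler} -> {maxwell}: +{increase:.1f}%")
```

Which lands close to the figures above: roughly 48.6%, 43.8%, and 12.7%.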

It should probably deliver about as much of an increase in raw graphics throughput over GM204 as GK110 did over GK104 (roughly 50%).
 
It's an interesting perspective that today's version of "power efficiency" means taking the maximum non-boutique power configuration.
At least it proves they aren't desperate enough to push beyond it for the non-custom variants, so that's an improvement.
I guess?
 
UPDATE: I ran into the TITAN X again at the NVIDIA booth and was able to confirm a couple more things. First, the GPU will only require 6+8-pin power connections, indicating that NVIDIA is still pushing power efficiency with GM200.

"Only"? Isn't that still 300 W? 8-pin + 6-pin + PCIe slot = 150 + 75 + 75, the same as the 290X (though some cards have more; there are some 8+8-pin configurations around).
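The arithmetic here just follows the per-connector limits in the PCIe specs (slot 75 W, 6-pin 75 W, 8-pin 150 W):

```python
# PCIe power budget per the spec limits, in watts.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150
board_limit = SLOT + SIX_PIN + EIGHT_PIN
print(board_limit)  # 300
```

So "only" a 6+8-pin card still has the same 300 W budget as a reference 290X; cards exceed it only via 8+8-pin layouts or by drawing out of spec.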
 