NVIDIA Maxwell Speculation Thread

So if you want to OC your laptop, either use a driver version prior to 347.29 or wait till the "hack" comes. :)

Nvidia has once again managed to irritate a vast number of end users: the latest driver update disables overclocking for the mobile 900 series. Until recently these parts could be overclocked or underclocked.

Now granted, I would not overclock or tweak a mobile GPU myself; thermals are far too important inside a laptop for that. But some people do like to, and many have been overclocking (and downclocking) their mobile GPUs for a long time.

In their forums Nvidia explained that overclocking on the GTX 900M series was enabled by accident and has since been disabled with the recent driver updates. To give you an idea of the revision numbers: starting with the GeForce R347 drivers (version 347.29), the functionality has been disabled on GeForce GTX 900M series mobile GPUs.

http://www.guru3d.com/news-story/nvidia-disables-overclocking-for-series-900-mobile-gpus.html
 
I have an ASUS G750JM notebook with an 860M, and I noticed the disabled overclocking. ASUS ships a version of GPU Tweak that adds 100MHz right out of the box.

However, another side to the Maxwell story is that some people have been having BSOD problems. I was suffering from them several months ago, but they seem solved, for me at least. Some people can't get their notebooks stable on drivers newer than the ones ASUS shipped last summer. ASUS and NV asked for full memory dumps from BSODs months ago. I have a feeling the disabled overclocking is connected to the BSOD fixes and to disabling the out-of-the-box overclocking programs.

If there's anything I've learned about gaming notebooks over the years, it's that they tend to be uniquely touchy because of the hardware variations. That's probably why NV and ATI used not to release notebook drivers directly.
 
Hmm, The Tech Report did an article detailing overclocking the 960, but didn't specify whether memory or core overclocking gave the bigger performance boost...

Does anybody know whether the GPU is more bandwidth-sensitive or more core-clock-sensitive? Given the 128-bit bus, I suspect the former, but...
 

I don't know, but the GTX 980 responds a bit better to core overclocking than to memory overclocking, and since the GTX 960 is pretty much exactly half a GTX 980 in every way, I imagine it behaves very similarly:

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/22
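The "half of everything" argument can be sanity-checked with some back-of-the-envelope arithmetic. A minimal sketch in Python, assuming reference specs (2048 vs 1024 shaders, 256-bit vs 128-bit bus, 7 Gbps GDDR5) and, for simplicity, the same core clock on both cards:

```python
# Rough bytes-per-FLOP comparison of GTX 980 vs GTX 960 at reference
# clocks. Spec numbers are approximate reference-card values and serve
# only to illustrate the "half of everything" argument.

def bytes_per_flop(shaders, core_mhz, bus_bits, mem_gbps):
    """Memory bytes available per FP32 FLOP (FMA = 2 FLOPs/shader/clock)."""
    flops = shaders * 2 * core_mhz * 1e6           # peak FP32 FLOP/s
    bandwidth = bus_bits / 8 * mem_gbps * 1e9      # bytes/s
    return bandwidth / flops

gtx980 = bytes_per_flop(shaders=2048, core_mhz=1126, bus_bits=256, mem_gbps=7.0)
gtx960 = bytes_per_flop(shaders=1024, core_mhz=1126, bus_bits=128, mem_gbps=7.0)

print(f"GTX 980: {gtx980:.4f} B/FLOP")
print(f"GTX 960: {gtx960:.4f} B/FLOP")
```

Both numerator and denominator are halved, so the bytes-per-FLOP ratio comes out identical; in practice the 960's higher typical boost clock would nudge it slightly toward being more bandwidth-starved.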
 
Yes, but the narrow 128-bit bus and the smaller VRAM amount should make the overclocking results quite different; you would be relieving more of a bottleneck.
 

The bandwidth is halved but so is everything else (shaders, TMUs, ROPs, etc.) so the balance should remain roughly the same.
 
Due to the TDP being much higher than half, the bottleneck could still shift a little (as the core clocks in practice may be higher). This will depend on the exact model, though.
FWIW, hardware.fr also has some GTX 960 core/mem overclocking numbers, and there it seems to scale roughly the same with core and memory clocks: out of four tests, two scale about the same with either clock, one scales almost only with the core clock, and one only with the memory clock (http://www.hardware.fr/articles/932-25/gm206-gtx-960-overclocking.html). I guess it's really going to be app- (and settings-) dependent, but overall it looks quite balanced.
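One way to read numbers like these is to divide the fps gain by the clock gain for each overclock applied in isolation: an efficiency near 1.0 means that clock is the limiter, near 0.0 means it isn't. A rough sketch, with made-up illustrative figures rather than hardware.fr's actual data:

```python
def scaling_efficiency(fps_stock, fps_oc, clk_stock, clk_oc):
    """Fraction of a clock increase that shows up as an fps increase.
    ~1.0 -> fully limited by that clock; ~0.0 -> not limited by it."""
    fps_gain = fps_oc / fps_stock - 1.0
    clk_gain = clk_oc / clk_stock - 1.0
    return fps_gain / clk_gain

# Hypothetical example: a +10% core OC yields +8% fps,
# while a +10% memory OC yields only +3% fps.
core_eff = scaling_efficiency(60.0, 64.8, clk_stock=1178, clk_oc=1296)
mem_eff = scaling_efficiency(60.0, 61.8, clk_stock=7010, clk_oc=7711)
print(f"core-limited: {core_eff:.2f}, memory-limited: {mem_eff:.2f}")
```

With real boost-clock behavior the "stock" core clock is itself variable, which is one reason published OC scaling numbers are so noisy.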
 

Interesting. It really seems very application-dependent. Some I/O as well as part of the front end must be identical between GM204 and GM206, which explains (at least in part) why the former's TDP is not quite double the latter's. But yes, since it can affect power management, it can also affect overclocking results.
 
So, GTX 970 really doesn't like 2560x1600 gaming on Evolve:

http://www.techspot.com/review/962-evolve-benchmarks/page4.html

The biggest surprise may be that the GTX 970 dropped to 44fps at 2560x1600, only 3fps faster than the HD 7970 GHz Edition. While the GTX 970 was just 10% slower than the GTX 980 at 1080p, it's 20% slower at 2560x1600. If it were 10% slower at 2560x1600 instead, the GTX 970 would have been on par with the GTX 780 Ti, which is what we'd expect to see.

It was recently exposed that the GTX 970's memory configuration has a 3584MB primary partition with a smaller 512MB partition. This significantly reduces performance once more than 3.5GB of VRAM is required and this would typically occur at 4K resolutions but it appeared at 2560x1600 in Evolve, which called for up to 4074MB of VRAM in our testing.

That said, the GTX 780 Ti only has a 3GB VRAM buffer yet it was much faster than the GTX 970. We aren't sure why that is. What we can say is that you'll need some serious horsepower at 2560x1600 or higher, and although the GTX 980 and R9 290X can deliver playable performance at 2560x1600, you'll definitely want to double up for 4K.
Presumably NVidia will tweak performance, since the 780 Ti "proves" (no frame-time data, so perhaps not) that card memory doesn't need to be 4GB.

I wonder if NVidia has tweaked this for 2560x1440 but not 2560x1600?
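The widely reported figures for the 970's split (roughly 196 GB/s for the fast 3.5GB segment, 28 GB/s for the 512MB segment, and the two not accessible in parallel) suggest a simple toy model of how effective bandwidth drops once an allocation spills past 3.5GB. A sketch, assuming uniform access across the whole allocation, which real drivers avoid by steering hot data into the fast segment:

```python
# Toy model of GTX 970 effective bandwidth once an allocation spills
# past the 3.5GB fast segment. Assumes the fast (196 GB/s) and slow
# (28 GB/s) partitions cannot be read in parallel, so access times add.
# This is an illustration, not NVIDIA's real memory-arbitration logic.

FAST_GBPS, SLOW_GBPS = 196.0, 28.0
FAST_MB = 3584.0

def effective_bandwidth(alloc_mb):
    """Average bandwidth (GB/s) for a uniform sweep over alloc_mb of VRAM."""
    fast = min(alloc_mb, FAST_MB)
    slow = max(alloc_mb - FAST_MB, 0.0)
    time = fast / FAST_GBPS + slow / SLOW_GBPS  # MB units cancel out
    return alloc_mb / time

print(f"3.5GB workload: {effective_bandwidth(3584):.0f} GB/s")
print(f"Evolve-like 4074MB workload: {effective_bandwidth(4074):.0f} GB/s")
```

Even this worst-case model leaves the 970 with more than 110 GB/s for the Evolve-sized allocation, which is why the slowdown shows up as stutter and lost scaling rather than an outright collapse.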
 
Question: how much bandwidth does each memory controller on the 780 Ti have access to compared to the memory controllers that are attached to the fast 3.5 gigs on the 970?
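At reference memory clocks the arithmetic works out to the same bandwidth per 32-bit channel on both cards; the 780 Ti simply has more channels (12 for its 384-bit bus, versus the 7 serving the 970's fast 3.5GB segment). A quick sketch, assuming 7 Gbps GDDR5 on both:

```python
# Per-32-bit-channel GDDR5 bandwidth at an assumed 7 Gbps effective
# data rate (reference clocks for both cards). Channel counts:
# GTX 780 Ti: 384-bit bus = 12 channels; GTX 970's fast 3.5GB
# segment is served by 7 of its 8 channels.

GDDR5_GBPS = 7.0  # effective data rate per pin

def channel_bw(bits=32, gbps=GDDR5_GBPS):
    """Bandwidth of one memory channel in GB/s."""
    return bits * gbps / 8

def total_bw(channels):
    return channels * channel_bw()

print(f"per channel: {channel_bw():.0f} GB/s")
print(f"GTX 780 Ti (12 channels): {total_bw(12):.0f} GB/s")
print(f"GTX 970 fast segment (7 channels): {total_bw(7):.0f} GB/s")
```

So per controller the two are even; the 780 Ti's advantage over the 970's fast segment is purely the extra channels (336 vs 196 GB/s).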
 
I'm more impressed by the performance of the 280X compared to the GTX 770...

This game seems to really like GCN. I wonder if it's the game itself or the latest version of the CryEngine (v4) that's GCN-friendly. Based on this Wikipedia entry, Ryse used the same version of the CryEngine, and it seemed to really like GCN as well:

http://www.pcgameshardware.de/Ryse-PC-259308/Specials/Test-Technik-1138543/

If this is a real trend for this engine, it could make a significant difference to the GPU market for a couple of years.
 
Nvidia Enables laptop overclocking again after critique

An interesting post was spotted on the GeForce forums: Nvidia enables laptop overclocking again after a bucket-load of critique. You need to read carefully, though, as it states that only certain models will allow it.

Nvidia will enable it again in a new driver release; that release, however, comes next month. If you want to overclock your laptop in the meantime, you can use the 344.75 driver.
http://www.guru3d.com/news-story/nvidia-enables-laptop-overclocking-again-after-critique.html
 
Any thoughts on the possibility of there being a Maxwell Mark 2 for mobile, i.e. MacBook Pros etc.?
 
http://blogs.nvidia.com/blog/2015/02/24/gtx-970/

Jen-Hsun said:
Hey everyone,

Some of you are disappointed that we didn’t clearly describe the segmented memory of GeForce GTX 970 when we launched it. I can see why, so let me address it.

We invented a new memory architecture in Maxwell. This new capability was created so that reduced-configurations of Maxwell can have a larger framebuffer – i.e., so that GTX 970 is not limited to 3GB, and can have an additional 1GB.

GTX 970 is a 4GB card. However, the upper 512MB of the additional 1GB is segmented and has reduced bandwidth. This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment.

Unfortunately, we failed to communicate this internally to our marketing team, and externally to reviewers at launch.

Since then, Jonah Alben, our senior vice president of hardware engineering, provided a technical description of the design, which was captured well by several editors. Here’s one example from The Tech Report.

Instead of being excited that we invented a way to increase memory of the GTX 970 from 3GB to 4GB, some were disappointed that we didn’t better describe the segmented nature of the architecture for that last 1GB of memory.

This is understandable. But, let me be clear: Our only intention was to create the best GPU for you. We wanted GTX 970 to have 4GB of memory, as games are using more memory than ever.

The 4GB of memory on GTX 970 is used and useful to achieve the performance you are enjoying. And as ever, our engineers will continue to enhance game performance that you can regularly download using GeForce Experience.

This new feature of Maxwell should have been clearly detailed from the beginning.

We won’t let this happen again. We’ll do a better job next time.

Jen-Hsun
 