I was happier with my laptop's GTX 1050Ti 4GB than I'll ever be with my desktop GTX 1060 3GB

Back in 2017 I purchased a gaming laptop (i7-7700HQ and GTX 1050 Ti 4GB), and while I didn't play many games between 2015 and 2021, I completed games like Doom (2016) on it without issues.

I also had an RX 570 and later a GTX 1080 -the best GPU I've ever had- which is now faulty.

Other than that, I have two GTX 1060 3GB cards, both gifts, and it's getting to the point where playing games on one of them is a pain. In fact, I end up uninstalling good games because of the technical issues the GPU causes.

Yesterday I finally downloaded Total War: Warhammer 3. Even at the lowest possible settings at 1440p, the GPU gets noisy and power consumption sits around 110-120W.

It took many hours to download, and the only way to get a decent framerate is dropping the resolution to 720p with the lowest settings. Even then, the card is noisy.

After all those hours spent downloading, I uninstalled it because of how much the GPU struggles.

Wolfenstein II is playable and at first runs like a charm, but then a message appears saying the GPU's memory budget is insufficient, and the framerate becomes horrible: the counter says 60fps, but the hiccups are continuous. Alas, I uninstalled that one too.

Halo Infinite used to tell me that textures and a couple of other effects couldn't be set to anything higher than Low. This seems to have been patched, and now it lets you pick whatever settings you want, although of course performance is low.

A message also appears when you launch the game saying the minimum recommended VRAM is 4GB, though the game launches just fine when you click Continue.

Doom Eternal runs like a charm; the performance of that game is miraculous. It doesn't let me get past Low settings, but the game looks good even at Low.

I uninstalled the first three games yesterday in a jiffy 'cos of my GPU.

Also, NIS via the drivers works very well with my GPU and looks good enough, but I enjoy enabling HDR, and thanks to nVidia policies the GTX 1xxx series can't enable NIS and HDR at the same time; the combination is only available on the RTX 2000 and 3000 series. :mad:

Sure, the HDR on my monitor is nothing to write home about, but even at only 519 nits it looks better than SDR.

Thankfully, I love CRPGs, ARPGs and similar games where combat is tactical and turn-based and this GPU is more than enough, so I can still play those without issues. But I'm getting tired of this graphics card; I'm just waiting to buy a new GPU when prices are reasonable, or I'll wait another year.

Although I'm leaning towards getting a new gaming laptop with a removable battery.
 
Try downvolting it a bit. You can also cap its power (but that can potentially reduce performance).
AFAIK, capping the power is actually a kind of undervolting. In the end the GTX 1060 is a good performer, but its age is really showing.
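
For reference, that power cap can even be scripted instead of set in Afterburner. A minimal Python sketch -assuming nvidia-smi is on the PATH, you have admin rights, and the board allows lowering its limit; the 90W figure is purely illustrative-:

    # Sketch: cap GPU 0's board power via nvidia-smi (admin rights required).
    # The allowed min/max range can be checked first with: nvidia-smi -q -d POWER
    import subprocess

    subprocess.run(["nvidia-smi", "-i", "0", "-pl", "90"], check=True)  # illustrative 90 W cap

The driver then enforces the cap by dropping clocks, which is why a plain power cap can cost more performance than a proper undervolt.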

First, it looks like you're already keenly aware of your serious memory deficit on that 3GB card. Running out of VRAM is obviously crushing performance on any game with even a moderate texture budget.

Second, Orangelupa is absolutely pointing you in the right direction with his suggestion of a downvolt, also called an undervolt. Forgive me while I tell you a slightly long-winded story to demonstrate what he's suggesting...

As part of my last PC build, I purchased an EVGA 3080 Ti FTW3 ULTRA (yes, ALL THE CAPS! haha). There's a watercooled model called the HYBRID, and there's a strictly air-cooled one (the one I own) called the GAMING. Even though it's air cooled, the factory firmware bumps the base power limit to something insane like 375W (or thereabouts), which then lifts the maximum power slider position to allow a whopping 450W! As you would expect, the firmware also bumps the boost clock to 1800MHz, along with base voltage up to (IIRC) 1.05v. Needless to say, a solid workload like Cyberpunk at max-everything settings gets this puppy STUPID hot no matter how hard you crank all the fans.

As any good PC tuner might, I decided to baseline the card's performance to see what extra might be wrought from it. I quickly discovered that Cyberpunk with all the features at full tilt BUT with DLSS turned off makes the card sweat like absolutely nothing else -- a perfect real-world stress test! Using the NVIDIA performance overlay to watch in real time, and MSI Afterburner to data log, I played a solid 30 minutes of Cyberpunk and reviewed the results. Somewhat unsurprisingly, the card would only very briefly touch the full boost clock of 1800MHz, for about five seconds, and then never come close to the same numbers at any point in the rest of my gaming session. In really demanding scenes, framerate would drop into the mid-40s and the GPU clock would end up stuck around 1640-1655MHz and not much more. An occasional spike into the low 1700s might happen, followed by a precipitous crash back into the 1600s once the GPU hit >80°C. In nearly all cases the GPU was thermally or power limited, and I basically never got to see my full boost clocks.

And so we begin: go into MSI Afterburner, press Ctrl+F to enter curve-editing mode, and drag the 750mv datapoint up to somewhere around 1500MHz. Use the shift key to select from just left of that 750mv datapoint to the very right edge of the window, then press shift+enter twice to flatten the curve. The card is now forced to boost no higher (in voltage) than 750mv, and no higher (in clockspeed) than 1500MHz. Go back to Cyberpunk, play for a while, record data, go back to Afterburner, slide the 750mv datapoint up to maybe 1600MHz, rinse/repeat until the app crashes.
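
By the way, the "record data" step doesn't need babysitting. Here's a minimal Python monitoring sketch -assuming the nvidia-ml-py (pynvml) bindings are installed and the card under test is GPU index 0-:

    # Sketch: log clock / power / temperature once a second while you play.
    # pip install nvidia-ml-py
    import time
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the card under test is GPU 0

    try:
        while True:
            clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)   # MHz
            power = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0                      # mW -> W
            temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)  # deg C
            print(f"{time.strftime('%H:%M:%S')}  {clock:4d} MHz  {power:6.1f} W  {temp:3d} C")
            time.sleep(1.0)
    finally:
        pynvml.nvmlShutdown()

Leave it printing in a second window while you play: if the clock stays pinned at your target and power/temperature stay flat, that curve point is a keeper.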

Here's what I found, in a nutshell: my GPU could stably maintain 1640MHz at 0.750v for hours on end while consuming literally half the power I observed earlier. Because the clock was capped at 1640MHz, the peak framerate was notably lower than stock -- but the average framerate was only down by a few frames at most. Multiple rounds of testing over the course of a weekend found that 800mv could reliably sustain 1715MHz at a paltry 240W, 850mv could maintain 1790MHz at an acceptable 280W, and 875mv would get me all the way to 1840MHz while pushing into the 330W max power consumption area.

And when I say sustained speeds, I actually mean sustained speeds. It wasn't the stock behavior of a brief glimpse of 1800MHz followed by a crash to the 1600s; it was legit sticking at 1715MHz, or 1790MHz, or even 1840MHz and not budging, because it was hitting neither the thermal limit nor the power limit.

Undervolting is a seriously underrated way to achieve better, and potentially much more consistent, performance.
Thanks for the tips and suggestions! Gotta try that. I just use the nVidia GeForce Experience overlay to check how the components are performing, so I hadn't installed MSI Afterburner, but noise is something that worries me.

In fact, I undervolted my Ryzen 7 3700X to 1.0875V and set the frequency to 4000MHz -up to 4100 or so is fine at that voltage, but 4GHz is just fine for me- and I couldn't be happier: temperatures rarely get past 41°C -45°C or so in the summer- and the thing stays silent 99.99% of the time.

Given that I'm not going to change the fact that this is a very low-tier GPU, undervolting is ideal for my goal, which is playing in silence and not expecting much from the GPU.

Now, with NIS at the driver level and so on -shame about not being able to have NIS + HDR- it's like giving this thing a second life while I decide whether to buy a new GPU when prices return to normal, or wait another year for the RTX 4000 series or some other GPU. Probably the laptop version of a future GPU.
 
Yeah, the blocking of NIS + HDR seems odd to me. It's not specifically relevant to my needs, yet it almost seems like a contrived case from NVIDIA to force an upgrade or something.

I'll tell you this for sure, and I've mentioned it elsewhere: the combination of capping the CPU boost of the i7-8750H and undervolting the GTX 1070 Max-Q in my Gigabyte Aero 15X v8 laptop makes an absolutely insane difference in sustained performance and noise level.
 
Undervolting laptop computers has to be a sight to behold.

My best friend IRL recently bought a laptop, one I recommended, 'cos it has absolutely great features like a removable battery, SD card support -like the Switch, say-, a 144Hz screen, and an i5-10500H along with an RTX 3060, for 899€! :oops:

I haven't told him yet, but yesterday I found this video showing it can be undervolted. :oops:


That laptop is a beast -and the best bang-for-the-buck device I've seen as of late- as shown in this video of the Gigabyte G5 running 25 modern games.

 
Laptops are very good value right now (thanks to ETH mining inflating desktop GPU prices). An RTX 3070-equipped laptop will give you desktop RTX 2070/Super-level performance in raw rasterization, more in ray tracing, and then you have DLSS on top. Zen 3/Alder Lake are very capable mobile CPUs as well; it's not like the laptops from a decade ago. You get a lot of performance for the money, all in a laptop.
 
Ah, the good ol' days, when I purchased Diablo 3 day one in 2012 and all I had was an i5-2500U -I think it was- with Intel's integrated HD 3000 graphics. The game ran like crap and I had to lower the settings as much as I could for it to be playable (30-40fps).

Another advantage of laptops like the one you describe nowadays, aside from the price, is that the dedicated GPU has all of its VRAM available for games. nVidia Optimus switches to the dedicated GPU when you run a game, while the integrated GPU handles the OS's video buffer and such from main RAM, which is nice.
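
You can actually watch that happen, by the way. A quick sketch with the same pynvml bindings as above -GPU index 0 assumed to be the discrete card-:

    # Sketch: check how much of the dGPU's VRAM is actually in use.
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the dGPU is index 0
    mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)
    # On an Optimus laptop idling on the iGPU, "used" should sit near zero,
    # because the desktop is composited from main RAM by the integrated GPU.
    print(f"VRAM: {mem.used / 2**20:.0f} / {mem.total / 2**20:.0f} MiB used")
    pynvml.nvmlShutdown()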
 
Done. Thanks! My GTX 1060 3GB also maxes out around 1800MHz; I followed your step-by-step undervolting instructions and it's working like a charm in games.

The performance loss is pretty negligible -not that I care much anymore; my GPU has run its course with dignity, and I'll get a new GPU or a laptop in the future- but this thing is now dead silent, power consumption is around 50-60W :mrgreen::LOL:, and temps rarely go above 40°C. Great stuff.

 
This makes me think...

Is that what the PS5 does on the fly -with much more detail and many more parameters considered- to keep the SoC running consistently inside its power budget?
 
Fantastic!!! Congrats on the new silence!! :love:

And I agree with you entirely btw; sometimes the utter silence is worth the two or three peak frames per second you might lose.
It's really good. Even the GPU-cycle-happy Divinity: Original Sin 2 -which for whatever reason used to draw 110-120W (the GTX 1060's max) at 1440p even locked at 60fps, so I would drop the resolution to 1768x992 and it still wasn't running silently- now runs at 60fps with half the power consumption. The GPU usage data is quite curious, though. I am loving this.

As for high framerates, 165/144fps isn't that hard to achieve on PC, even on mediocre GPUs like mine, but I save that for older games like Doom 3 or Quake, where the GPU is resting on its laurels the whole time.

Divinity: Original Sin 2 GPU data at 1440p: 60fps -57 in the screengrab, but it dips when you take a screenshot-, 0.7 volts, 54W, 41°C. :mrgreen:

 
It's insane how much GPU power consumption becomes its own worst enemy in these modern "smart self-clock-boosting" designs. I mean, we all knew it anyway, yet here we are purposely depowering a GPU and actually getting significantly better performance with it.

In theory, this design goal of minimum power for good performance is what NVIDIA's Max-Q program was supposed to promote. And even then, I too get measurably better clockspeeds out of the 1070 Max-Q in my laptop by undervolting it. Bleh.
 
Did you use 1500MHz as the starting frequency for the GPU undervolt? My best friend IRL purchased the laptop I mentioned a few messages above, and I wonder how he could easily undervolt the RTX 3060 without a big performance hit.
 
Honestly, I don't even remember. I use the laptop frequently, but I stopped paying attention to the GPU particulars once I had a reliable, performant undervolt profile built. Now it just boots, MSI Afterburner auto-starts and auto-applies my undervolt profile, ThrottleStop does the same for my CPU and chipset, and the laptop just does what it needs to do without fuss :)

Next time I power it on, I'll check the settings and report back.

Edit: underVOLT, not underCLOCK hehe. Fixed my wording...
 
Fired up the Gigabyte Aero 15 v8 as I promised. My undervolt on this 1070 Max-Q is 1709MHz at 0.800v. I actually hadn't played Cyberpunk on this setup, so I fired up Steam, downloaded the game (yay gig fiber to the house!), configured the High preset, and drove around for a while at 1920x1080 @60Hz with vsync on. Then I found out something interesting: the only metrics I can get out of the GeForce Experience overlay are CPU usage % and GPU usage %. Similarly, the FPS meter in MSI Afterburner also didn't register, so I don't have valid, verifiable FPS data to give you. So let's talk observations. With the aforementioned graphics at the High preset, I spent about 20 minutes walking around the little garden park near Corpo Plaza, both at stock GPU settings and with my undervolt config.

With the GPU at stock: the overlay and the MSI data logs both showed GPU utilization moving between 95 and 99%. The GPU clock moved around only a little, staying mostly around 1530-1545MHz, and the card consumed between 78 and 83W at a GPU temp of 78°C. The CPU temp was roughly the same, obviously, because it all shares the same heatpipe and fan assembly. CPU power consumption showed 21W, with clocks locked by ThrottleStop at 2.2GHz with a -155mv offset. Neither the CPU nor the GPU indicated any thermal throttling with these settings.

With the GPU undervolted: the overlay and the MSI data logs both showed GPU utilization now hovering around 80-99%. The GPU clock stayed put at 1709MHz the whole time, with power consumption down considerably at 63-71W and a GPU temp of 72°C. Interesting find: more than once, the GPU power limit was tripped during my play. The firmware on this laptop doesn't expose a way to modify the power limit, and interestingly enough the power limit indicator didn't show any negative effect on logged clockspeed. Strange. CPU temp also moved down to 72°C, with all other indications of power and clocks the same as before.

Framerate was pretty damned close to 60fps with both configurations, to my eye. Despite the GPU being pegged in the stock configuration, it didn't seem notably slower or choppier in gameplay. I did try the built-in game benchmark for both, and I got wacky results. Even with vsync on at 60Hz, I'd get indicated framerate highs beyond 80fps. Sometimes my minimum FPS would be 26fps, sometimes 40fps. The average for the undervolt hovered around 57fps; the average for stock clocks hovered around 54.
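
As an aside on those wacky benchmark numbers: one-off min/max values are noisy, which is why most reviewers report percentile lows computed from frametime logs instead. The arithmetic is trivial -a sketch below, with made-up sample data, assuming you've exported per-frame frametimes in milliseconds from your logger of choice-:

    # Sketch: average fps and 1% low from a frametime log (milliseconds).
    frametimes_ms = [16.7, 16.9, 16.6, 35.1, 16.8]  # made-up sample data

    fps_sorted = sorted(1000.0 / ft for ft in frametimes_ms)    # per-frame fps, worst first
    avg_fps = len(frametimes_ms) * 1000.0 / sum(frametimes_ms)  # time-weighted average
    one_pct_low = fps_sorted[int(len(fps_sorted) * 0.01)]       # frame at the 1st percentile

    print(f"avg {avg_fps:.1f} fps, 1% low {one_pct_low:.1f} fps")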
 
Haha! No worries. It seems easy to postulate that the missing frame counter is linked to the Optimus tech in this laptop.
 
Which thing are you talking about? If you mean the PS5 keeping its SoC inside the power budget: in the end it's closed hardware, so they're doing something similar, that's for sure, accounting for the so-called silicon lottery just in case -this is why AMD, for instance, keeps its CPUs at 1.5V max by default, among other reasons-.

Thanks for the detailed post! In fact, I created a second undervolt profile based on those numbers: 0.800V and 1705MHz, in my GPU's case.

The previous undervolt profile -0.750V + 1500MHz- worked very well in games like Divinity: Original Sin 2 or Art of Rally, but falls short in a game like Alien: Isolation -a game that has never been power-hungry, tbh; it runs like a charm on most machines-.

I think I'm going to use the 0.800V + 1700MHz profile more, 'cos power consumption and temperatures stay low and most games perform well on it. Alien: Isolation runs at 60fps with this profile, which is what I wanted, with HDR on -not Auto HDR, but the Special K mod in this case-.
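
Switching between the two profiles is easy to script, by the way: Afterburner can apply a saved profile slot from the command line, so each one can live behind its own desktop shortcut. A sketch -assuming the usual -ProfileN command-line switches, the default install path, and the 0.800V profile saved in slot 2; adjust all of those to taste-:

    # Sketch: apply MSI Afterburner profile slot 2 from a script or shortcut.
    import subprocess

    afterburner = r"C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe"  # default path assumed
    subprocess.run([afterburner, "-Profile2"], check=True)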

P.S. On a different note, undervolt profiles aside: yesterday I tried classic Quake -the remastered version- at 165fps, and the game ran perfectly fine, but I got motion sickness and had to go to bed earlier than expected.
 