nvidia/OEMs sneaking slower-clocked MX150 variants into ultrabooks without disclosure


Deleted member 13524

Guest
After AMD letting AIBs sell RX560 cards with fewer CUs enabled, it seems nvidia is doing something similar, selling the MX150 for certain ultrabooks with much lower clocks and more aggressive throttling (Max-Q versions?).

https://www.notebookcheck.net/Nvidi...12-variant-onto-some-Ultrabooks.289358.0.html

We've discovered two distinct versions of the GeForce MX150 with wide performance differences and power demands. The second version is notably slower and less demanding than the "standard" MX150 with underclocked clock rates, Boost rates, and VRAM not unlike a Max-Q GPU. This slower "MX150 Max-Q" can only be found on 13-inch Ultrabooks so far. We recommend being cautious if purchasing a notebook with the MX150 GPU as neither Nvidia nor the manufacturers have been explicitly advertising the slower GPU version.

It seems to have ~30% lower base/boost clocks and a 10W TDP (down from the original 25W). Notebookcheck is still trying to figure out which notebooks carry the Max-Q version, but so far they include the Lenovo 720S (which performs horribly in both the AMD and Intel+nvidia versions due to terrible thermals), HP Envy 13, Xiaomi Mi Notebook Air 13 and Asus Zenbook UX331UN/A.


In 3dmark11 Performance, the Acer Swift 3 (regular MX150 + 8250U) scores ~4700, whereas the Xiaomi Mi Notebook Air (Max-Q MX150 + 8250U) only does ~3500, making the regular version about 34% faster.


Curiously, ~3500 points in 3dmark11 Performance is also about what the Vega 8 scores in laptops with the Ryzen 5 2500U, such as the Acer Swift 3.

So what's happening is that Ryzen Mobile is being compared against Intel laptops with the full-fledged MX150, when in practice its power consumption and performance should be compared against the smaller laptops carrying the 4-core KBL-R and the MX150 Max-Q.
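
For reference, here's the quick arithmetic behind those numbers (a throwaway Python check, nothing more; the scores and TDPs are the approximate ones quoted above):

```python
# Quick sanity check on the scores and TDPs quoted above (approximate
# notebookcheck numbers, so treat everything as ballpark figures).
regular = 4700  # 3DMark11 Performance, Acer Swift 3 (regular MX150 + i5-8250U)
max_q = 3500    # 3DMark11 Performance, Xiaomi Mi Notebook Air (slow MX150 + i5-8250U)

print(f"Regular MX150 is {100 * (regular / max_q - 1):.0f}% faster")   # ~34%
print(f"Slow MX150 scores {100 * (1 - max_q / regular):.0f}% lower")   # ~26%

# TDP gap between the two variants: 25W standard vs 10W low-power.
print(f"Standard TDP is {25 / 10:.1f}x the low-power TDP")             # 2.5x
```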
 
As with the AMD issue, it's both Nvidia and the vendor at fault here. Though ultimately Nvidia would put it on the vendor, just like AMD did with the 560.
 
I just wonder why someone would need an MX150 either way on a slim and light laptop. They are usually expensive, and for the same kind of money you can get a proper gaming laptop with a GTX1050/1060, which aren't overly heavy anymore. What a waste of silicon...
 
As with the AMD issue, it's both Nvidia and the vendor at fault here. Though ultimately Nvidia would put it on the vendor, just like AMD did with the 560.

nVidia doesn't hide the fact that the implementation can vary between OEMs:
Note: The below specifications represent the GPU features available. Actual implementation may vary by OEM model. Please refer to OEM website for actual shipping specifications.
https://www.geforce.com/hardware/notebook-gpus/geforce-mx150/specifications
 
nVidia doesn't hide the fact that the implementation can vary between OEMs:
Thanks. It's not even a tiny footnote.

Is the Max-Q version official Nvidia nomenclature? Are they two distinct versions of the same chip, selected by a design switch that the OEM sets for their intended use?
 
As with the AMD issue, it's both Nvidia and the vendor at fault here. Though ultimately Nvidia would put it on the vendor, just like AMD did with the 560.

AMD has changed their website for the RX560's tech specs to include models with 14 CUs.

nvidia OTOH presents no tech specs whatsoever on the MX150 webpage, only vague features like DX12 support and PCI-Express 3.0.
Which is awfully convenient.
 
AMD has changed their website for the RX560's tech specs to include models with 14 CUs.

nvidia OTOH presents no tech specs whatsoever on the MX150 webpage, only vague features like DX12 support and PCI-Express 3.0.
Which is awfully convenient.

Probably because it's the end vendors that set the final performance parameters. If AMD sells a chip that has only 14 CUs, they'd better state that. If nVidia sells identical chips to different vendors who then choose the clock speeds that fit the system's price/performance envelope, it's up to the manufacturer to disclose that. The two cases are not even remotely the same; it would be akin to asking the IHV to list every single clock that card makers may choose.
 
Probably because it's the end vendors that set the final performance parameters. If AMD sells a chip that has only 14 CUs, they'd better state that. If nVidia sells identical chips to different vendors who then choose the clock speeds that fit the system's price/performance envelope, it's up to the manufacturer to disclose that. The two cases are not even remotely the same; it would be akin to asking the IHV to list every single clock that card makers may choose.

So you think selling the same GPU with ~33% less performance than the default versions is just fine and no naming differentiation should be enforced.
Ok.


Why stop at the MX150 then? Why not sell a GTX1080 as a "GTX1080 Ti 8GB"? The performance difference between the two is even less than 33%.
 
So you think selling the same GPU with ~33% less performance than the default versions is just fine and no naming differentiation should be enforced.
Ok.


Why stop at the MX150 then? Why not sell a GTX1080 as a "GTX1080 Ti 8GB"? The performance difference between the two is even less than 33%.

Since when is that new? When mobile GPUs were still using DDR3 (I think none are now?), you had plenty of SKUs sold with either DDR3 or GDDR5 under the exact same name, with the DDR3 version quite a bit slower (sometimes 50%) because of the low memory bandwidth. Is it correct to do that? Not really, but both AMD and NVIDIA were at fault for it.

Additionally, it is possible to overclock a GPU (at your own risk, of course, because of thermal constraints) to reach the original level of performance, while it is not possible to re-enable CUs; therefore the second situation is still worse, since there is nothing the user can do.
 
Since when is that new? When mobile GPUs were still using DDR3 (I think none are now?), you had plenty of SKUs sold with either DDR3 or GDDR5 under the exact same name, with the DDR3 version quite a bit slower (sometimes 50%) because of the low memory bandwidth. Is it correct to do that? Not really, but both AMD and NVIDIA were at fault for it.

Additionally, it is possible to overclock a GPU (at your own risk, of course, because of thermal constraints) to reach the original level of performance, while it is not possible to re-enable CUs; therefore the second situation is still worse, since there is nothing the user can do.

Not being new doesn't mean it's right.
Can we agree that consumers should have enough transparency regarding performance ratings aligned with the spec list of what they're purchasing?

Regardless, I don't remember DDR3 vs. GDDR5 versions ever having a 33% difference in final performance. I have a GT650M with DDR3 in my laptop, and what nvidia did at the time was significantly increase the core clock of the DDR3 version to somehow compensate for the lack of bandwidth relative to the GDDR5 version. IIRC, the GT650M GDDR5 has an 800MHz core clock while mine tops out at 950 or so. The final difference is about 15% or less between the two versions.
 
Not being new doesn't mean it's right.
Can we agree that consumers should have enough transparency regarding performance ratings aligned with the spec list of what they're purchasing?

Regardless, I don't remember DDR3 vs. GDDR5 versions ever having a 33% difference in final performance. I have a GT650M with DDR3 in my laptop, and what nvidia did at the time was significantly increase the core clock of the DDR3 version to somehow compensate for the lack of bandwidth relative to the GDDR5 version. IIRC, the GT650M GDDR5 has an 800MHz core clock while mine tops out at 950 or so. The final difference is about 15% or less between the two versions.

Quoting myself, since you seemed to avoid it:

Additionally, it is possible to overclock a GPU (at your own risk, of course, because of thermal constraints) to reach the original level of performance, while it is not possible to re-enable CUs; therefore the second situation is still worse, since there is nothing the user can do.

NVidia's situation is user modifiable, AMD's is not. That's a very significant difference.
 
NVidia's situation is user modifiable, AMD's is not. That's a very significant difference.

So let's put aside the fact that the "slow" MX150 is a 10W part while the normal one is 25W (150% difference), and that even trying to "overclock" the slow version to the values of the normal version will probably make the power distribution and cooling systems in those 1cm-thick ultrabooks go bonkers.

Does the fact that a user might be able to overclock the GPU make this situation any better?
 
So let's put aside the fact that the "slow" MX150 is a 10W part while the normal one is 25W (150% difference), and that even trying to "overclock" the slow version to the values of the normal version will probably make the power distribution and cooling systems in those 1cm-thick ultrabooks go bonkers.

Does the fact that a user might be able to overclock the GPU make this situation any better?

Then that is more the responsibility of the laptop vendor than nVidia. If nVidia gives them a 25W part and they take it down to 10W to fit their power/heat objectives in all but a checkbox exercise, it's them who are not respecting their customers.

But this discussion is really ridiculous, it makes no sense, TottenTranz. You are comparing what AMD did on a retail desktop add-in card, which has no power and heat constraints at all to justify the change in specs, to a mobile chip that will 99% of the time get some kind of customization by the OEM to fit the chassis and cooling-system design.
 
Then that is more the responsibility of the laptop vendor than nVidia. If nVidia gives them a 25W part and they take it down to 10W to fit their power/heat objectives in all but a checkbox exercise, it's them who are not respecting their customers.
So your opinion is that the fault lies solely with the laptop makers?


You are comparing what AMD did on a retail desktop add-in card, which has no power and heat constraints at all to justify the change in specs, to a mobile chip that will 99% of the time get some kind of customization by the OEM to fit the chassis and cooling-system design.
The comparison is that AMD allowed AIBs to launch two different cards with different performance levels under the exact same name (which is bad), and nvidia is now allowing the same with the MX150 (equally bad).


And what's interesting is the timing.
The MX150 has been around since May 2017, but these laptops with the MX150 "Max-Q" started to appear at the same time as the Ryzen Mobile laptops entered the market.
 
The comparison is that AMD allowed AIBs to launch two different cards with different performance levels under the exact same name (which is bad), and nvidia is now allowing the same with the MX150 (equally bad).

Tell me: since when are AIBs capable of fusing off CUs on a die by themselves? They just receive the dies as they are, build the cards and modify the BIOS. AMD did not allow them to do anything; AMD obviously pushed the chips with defects to them! Very different from simply setting different clocks in the BIOS in the MX150's case, which nVidia might not even have an opinion on.

EDIT - Yes, there were some models where flashing the BIOS would unlock the remaining shaders, but that did not happen on all the cards, so AMD had to be providing them with defective chips.

EDIT 2 - From the horse's mouth:

It’s correct that 14 Compute Unit (896 stream processors) and 16 Compute Unit (1024 stream processor) versions of the Radeon RX 560 are available. We introduced the 14CU version this summer to provide AIBs and the market with more RX 500 series options.
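
For scale, a quick check of how big that CU cut actually is, using the shader counts from AMD's statement:

```python
# Shader counts taken from AMD's statement quoted above.
full_rx560 = 1024  # 16 CU version
cut_rx560 = 896    # 14 CU version

print(f"The 14 CU card has {100 * (1 - cut_rx560 / full_rx560):.1f}% fewer shaders")  # 12.5%
```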
 
Tell me: since when are AIBs capable of fusing off CUs on a die by themselves? They just receive the dies as they are, build the cards and modify the BIOS. AMD did not allow them to do anything; AMD obviously pushed the chips with defects to them!
Are you saying the AIBs had no idea the chips were different?
 
Tell me: since when are AIBs capable of fusing off CUs on a die by themselves? They just receive the dies as they are, build the cards and modify the BIOS. AMD did not allow them to do anything; AMD obviously pushed the chips with defects to them! Very different from simply setting different clocks in the BIOS in the MX150's case, which nVidia might not even have an opinion on.

In the case of the 14CU RX560, they probably just had a bunch of RX460 chips lying around and the AIBs rebranded them as RX560.
We don't know whose hands these 14 CU chips were in (AMD's or the AIBs') when the decision was made to sell them as the 14CU RX560, so you can't really trace the (original) fault back to AMD or the AIBs.
You only know that AMD, in the end, allowed them to sell these cards as the RX560, which IMO is bad enough.


In the case of the 10W MX150 with ~30% lower performance, it's definitely not a case of laptop makers buying regular MX150 chips and downclocking them without nvidia knowing about it, because the two have different chip IDs.

This is the GPU-Z reading for the normal MX150:
[GPU-Z screenshot: standard MX150, device ID 10DE 1D10]



And this is the same reading for the lower-performing MX150:
[GPU-Z screenshot: lower-performing MX150, device ID 10DE 1D12]




nvidia is selling the "10DE 1D10" with very different base/boost clocks from the "10DE 1D12", as you can see.
So unless you think laptop makers are somehow changing the chip ID microcode, nvidia is definitely in on this.
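
For anyone who wants to check which variant their laptop actually shipped with without installing GPU-Z, something like this should do it on Linux (a rough sketch; the two device IDs are just the ones from the screenshots above, and other GP108 IDs may exist):

```python
# Rough check of which MX150 variant is installed, by reading the PCI
# vendor/device IDs straight from sysfs (Linux only).
from pathlib import Path

NVIDIA_VENDOR = "0x10de"
MX150_VARIANTS = {
    "0x1d10": "standard MX150 (~25W)",    # device ID from the first GPU-Z shot
    "0x1d12": "low-power MX150 (~10W)",   # device ID from the second GPU-Z shot
}

for dev in Path("/sys/bus/pci/devices").iterdir():
    vendor = (dev / "vendor").read_text().strip()
    if vendor != NVIDIA_VENDOR:
        continue
    device = (dev / "device").read_text().strip()
    print(dev.name, device, MX150_VARIANTS.get(device, "some other NVIDIA device"))
```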


And again: the timing of these laptops appearing with the 1D12 MX150 curiously matches the appearance of Raven Ridge solutions in the market.




The problem is not that nvidia released a 10W version of GP108 to counter Raven Ridge.
The problem is they're calling it MX150, so reviewers have been comparing Ryzen Mobile's gaming performance to laptops with the old, higher-performing and more power-consuming MX150.
Had reviewers been comparing the Ryzen 2700U/2500U models against the lower-clocked 1D12 MX150 (which is the model that fits the former's power/heat envelope), then the comparisons would probably look very different, as I pointed out in the first post.
 
No, I'm saying that AIBs do not change the CUs of an SKU by themselves, which is what TottenTranz was implying by saying that AMD allowed them to do it.
I thought he was simply saying that AMD provided the different chips under the same model number and allowed AIBs to sell them under the same model number.
 