nvidia/OEMs sneaking slower-clocked MX150 variants into ultrabooks without disclosure.

And now your theory is that nvidia is simply replacing the 1D10 with the 1D12, and the OEMs are the baddies who took advantage of that to underclock their GPUs.

Don't you see how desperate that sounds?!
What I am saying is that Nvidia is providing both the 1D10 and the newer 1D12, and that it is the OEMs/partners who would be moving to the 1D12 variant for anything below a certain size.
I did say both are at fault, just not in the same way, but I agree Nvidia needs to apologise for the confusion and mess it will create (and in future enforce a spec similar to the Max-Q branding), the same way AMD did with the 560.
 
IIRC, the GT650M GDDR5 has an 800MHz core clock while mine tops out at 950 or so. The final difference is about 15% or less between the two versions.
Kind of off topic, but I think this is actually a massive difference in bandwidth, since GDDR5 is QDR.
 
And it's GDDR5 from 2012, so that version worked at 4000MT/s (a long way from today's 6000-8000MT/s standard).
Regardless, the DDR3 version used 1800MT/s DDR3 that could comfortably be clocked towards 2000-2100MT/s, but that still leaves the GDDR5 version with close to 2x the bandwidth, which obviously makes a sizeable difference.
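For reference, a quick back-of-the-envelope check (a sketch assuming the commonly reported 128-bit bus on both GT650M variants; treat the exact transfer rates as assumptions):

```python
# Peak theoretical bandwidth = effective transfer rate (MT/s) x bus width (bytes).
# The MT/s figures already fold in GDDR5's quad data rate vs DDR3's double rate.

def bandwidth_gbps(mt_per_s: int, bus_width_bits: int) -> float:
    return mt_per_s * (bus_width_bits // 8) / 1000

gddr5 = bandwidth_gbps(4000, 128)  # GDDR5: 1000 MHz command clock, QDR -> 4000 MT/s
ddr3 = bandwidth_gbps(1800, 128)   # DDR3: 900 MHz clock, DDR -> 1800 MT/s

print(f"GDDR5: {gddr5:.1f} GB/s vs DDR3: {ddr3:.1f} GB/s ({gddr5 / ddr3:.1f}x)")
# GDDR5: 64.0 GB/s vs DDR3: 28.8 GB/s (2.2x)
```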

Then again, in a 2 SMX GPU with 16 ROPs, the difference would only be very noticeable above the laptop's native 1366*768 resolution.
 
To be honest I don't really see a problem here. Intel's been selling their mobile processors with OEM configurable TDPs for quite some time and those parts are named the same. Smaller form factor often necessitates a lower power consumption and performance level. It is still physically the same chip.
 
To be honest I don't really see a problem here. Intel's been selling their mobile processors with OEM configurable TDPs for quite some time and those parts are named the same. Smaller form factor often necessitates a lower power consumption and performance level. It is still physically the same chip.
It is consumer confusion, and the problem goes beyond Nvidia (who could have stopped this with a clear spec/brand/model, so they are at fault in that way) to anyone offering a large, flexible TDP range for their mobile parts without clear markings, where an OEM can set the part to either end of the scale.
The crux is that OEM products (notebooks/laptops/etc) should be categorised clearly for consumers when they are specifically manufactured to either of the specs (more so when spec'd and installed at the low-TDP efficiency end, as those will be strictly limited), and it remains to be seen whether other IHVs fall into this trap with their mobile parts.
OEMs want greater flexibility between ergonomics/performance/battery life, but IHVs need to ensure it is clear for consumers.
 
To be honest I don't really see a problem here. Intel's been selling their mobile processors with OEM configurable TDPs for quite some time and those parts are named the same.
You don't see a problem here; I see two problems.
And the first problem (Intel's cTDP variations non-disclosed by OEMs) does not excuse the problem mentioned in this thread.



Smaller form factor often necessitates a lower power consumption and performance level.
And there's nothing stopping companies from disclosing such variations.



It is still physically the same chip.
And so is the Geforce GTX1050 vs. GTX1050 Ti, or GTX1060 vs GTX1060 Max-Q, and the GTX1070 vs. GTX1070 Max-Q vs GTX1080 vs GTX1080 Max-Q.

For example:
- The performance difference between the laptop GTX1050 and GTX1050 Ti is approximately 20%.
- The difference between laptop GTX1060 and GTX1060 Max-Q is around 20%.
- Between laptop GTX1070 and GTX1070 Max-Q is around 20%.

Difference between MX150 and MX150: 30%.


So why is there a clear name difference between the same-chip variations on all the other nvidia GPUs but not this one?

And why did nvidia announce the MX150 in May 2017, and the Max-Q versions also in May 2017, but then ~6 months later silently creep in a performance/TDP-reduced MX150, at about the same time that Ryzen Mobile U variants with Vega 8/10 started coming up in the market?
 
Just curious.
When I looked at the implemented 15W and 25W AMD mobile Ryzen U variants, where do the OEMs show on the branding that it is the 15W ceiling-limited variant (not necessarily even default 15W TDP behaviour), restricted to 15W due to internal thermal designs/etc, versus the 25+W ones without restrictions? That changes both performance and battery life.
Notebookcheck mentions the difference (as they do with the MX150), but there seems to be nothing from the OEMs showing whether it is 15W default/15W configured, or one of the 25W ones that exist, which creates confusion for consumers.
Example of the 15W configured option for the 2500U in the Swift 3, explained by Notebookcheck:
Acer has set the configurable TDP to 15 watts, as evidenced by HWinfo's analysis during Cinebench and stress tests and the assignment of Intel's 15-watt chips to the same chassis

Lenovo, above 25W, not as efficient: https://www.notebookcheck.net/Lenovo-Ideapad-720S-Ryzen-2500U-Vega-8-Laptop-Review.289046.0.html
I can no longer find the 2500U on their site, but retailers show no indication of which spec it is, and neither does AMD.
The same can be said for the 2700U on Lenovo's site, which may be replacing the 2500U *shrug*

Acer Swift, 15W ceiling limit, efficient: https://www.notebookcheck.net/Acer-...Vega-8-256-GB-FHD-Laptop-Review.277262.0.html
No mention of 15W ceiling spec: https://www.acer.com/ac/en/US/content/model/NX.GV7AA.003
https://us-store.acer.com/laptops/u...acer.com&utm_campaign=CLM&utm_medium=referral

HP Envy, above 25W, not as efficient: https://www.pcper.com/reviews/Mobil...-Raven-Ridge/Power-Consumption-and-Software-C
No mention of 25W unrestricted spec: https://store.hp.com/us/en/Configur...Id=&catEntryId=3074457345618626318&quantity=1

A default of 15W unfortunately means little when one also gives OEMs a cTDP of 12W-25W with control of the TDP envelope.

AMD's own site lists the APU with a 'cTDP' range of 12W to 25W, but that is it; it is then configurable by OEMs (consumers may not even grasp the context of cTDP), and with such a range it will impact consumers in multiple ways.
For reference it is also branded by AMD as a default 15W component, but that ignores what OEMs can do with cTDP behaviour even at 15W.
Importantly, even AMD do not show the TDP implemented by the OEM on their own site, where you can click on the OEM model under the 2500U section, even when it is 25W unrestricted.
Consumers could therefore assume these are 15W, 12W, or dynamically flexible/efficient parts ranging between 12W and 25W (as an OEM product they do not behave that way, though), let alone how this also affects reviews and benchmarks.
Click 'see prices' for the specific OEM model, then details: https://www.amd.com/en/products/apu/amd-ryzen-5-2500u

The point is this comes back to my previous post (#47), and it is far from clear for consumers.
Nvidia is not sneaking this in; it comes back to OEMs wanting greater flexibility to design and select an envelope for a model with different ergonomics/performance/battery life, but IHVs need to ensure it is clear for consumers.
It is a trap that many of the IHVs are going to fall into in some way, IMO.
 
Just curious.
When I looked at the 15W and 25W AMD mobile Ryzen U variants, where do the OEMs show on the branding that it is the 15W-limited variant, restricted to 15W due to internal thermal designs/etc, versus the 25+W ones without restrictions? That changes both performance and battery life.

AFAIK, laptop makers who enable mXFR have the liberty to promote it with stickers, box advertisements, etc. for that model.
It's the OEMs' responsibility if they choose not to promote the fact that they're using a higher-end cooling solution that allows better performance.
Battery life will always be terrible while gaming or performing intensive CPU tasks, so mXFR will make little difference to battery life where it counts. Besides, I'll bet that mXFR doesn't even kick in when Windows is set to low-power mode (which it is by default when the device is powered by a battery).

The Ideapad 720S is a peculiar case because Lenovo is using single-channel DDR4 at 2133MHz (17GB/s total bandwidth) with a ~1TFLOPs iGPU. That's 2/3rds of the bandwidth present in the Switch, which has 1/3rd of the GPU throughput.
IMHO, AMD shouldn't even let laptop makers pair the 2500U or the 2700U with single-channel memory.
I guess with the 2200U with a Vega 3 it wouldn't make that much of a difference, but with the 2500U and up it's simply ridiculous.
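To put rough numbers on that comparison (a sketch; the ~1 TFLOPs Vega 8 and ~0.35 TFLOPs Switch GPU figures are ballpark assumptions):

```python
# Bandwidth available per unit of shader throughput, ballpark figures only.
systems = {
    "Ideapad 720S (single-channel DDR4-2133, 64-bit)": (17.1, 1.0),  # GB/s, TFLOPs
    "Switch (LPDDR4-3200, 64-bit)": (25.6, 0.35),
}
for name, (bw_gbps, tflops) in systems.items():
    print(f"{name}: {bw_gbps / tflops:.0f} GB/s per TFLOP")
# Ideapad 720S (single-channel DDR4-2133, 64-bit): 17 GB/s per TFLOP
# Switch (LPDDR4-3200, 64-bit): 73 GB/s per TFLOP
```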

Regardless, bringing up the matter of single-channel memory adoption and OEMs not advertising mXFR is also a problem worth discussing (in a thread you might want to create), but it's a strawman for this particular issue.

Though OEMs not advertising mXFR is to their own detriment, really... it's not like a customer will be pissed off at HP because they're getting more performance than what's advertised by AMD.



Nvidia is not sneaking this in
You're probably the only person on this planet claiming that nvidia isn't sneaking this in.
 
AFAIK, laptop makers who enable mXFR have the liberty to promote it with stickers, box advertisements, etc. for that model.
It's the OEMs' responsibility if they choose not to promote the fact that they're using a higher-end cooling solution that allows better performance.
Battery life will always be terrible while gaming or performing intensive CPU tasks, so mXFR will make little difference to battery life where it counts. Besides, I'll bet that mXFR doesn't even kick in when Windows is set to low-power mode (which it is by default when the device is powered by a battery).

The Ideapad 720S is a peculiar case because Lenovo is using single-channel DDR4 at 2133MHz (17GB/s total bandwidth) with a ~1TFLOPs iGPU. That's 2/3rds of the bandwidth present in the Switch, which has 1/3rd of the GPU throughput.
IMHO, AMD shouldn't even let laptop makers pair the 2500U or the 2700U with single-channel memory.
I guess with the 2200U with a Vega 3 it wouldn't make that much of a difference, but with the 2500U and up it's simply ridiculous.

Regardless, bringing up the matter of single-channel memory adoption and OEMs not advertising mXFR is also a problem worth discussing (in a thread you might want to create), but it's a strawman for this particular issue.

Though OEMs not advertising mXFR is to their own detriment, really... it's not like a customer will be pissed off at HP because they're getting more performance than what's advertised by AMD.




You're probably the only person on this planet claiming that nvidia isn't sneaking this in.
Any examples of this before a consumer buys the product? As I said, neither OEMs nor retailers nor even AMD differentiate between 15W, configurable 15W, and 25W unrestricted for real products at the point where consumers can actually purchase them.
The memory situation has nothing to do with this; it is about cTDP and true spec branding for consumers once OEMs, who are partners to the IHVs, get their hands on the components.
I gave some links supporting my point that a consumer cannot tell the difference.

You are defending AMD doing something very similar, which will cause consumer confusion not just in buying but also in benchmarks.
AMD in essence have created a product that can be either very efficient/lower-performance or the complete opposite, and OEMs will implement both separately, yet you feel it is OK in this instance to leave it up to the OEMs to decide how to make that clear to consumers, basically allowing AMD to wash their hands of the confusion.
OK, so why can't Nvidia also leave it up to the OEMs to brand/spec which MX150 they are using? It would be naive to think an OEM is not aware of which component they are using.

In reality no IHV can leave it up to OEMs, for the reasons I explained in my previous two posts.
I gave you clear examples where it is now impacting AMD's own 2500U, and probably the 2700U, which has similar configurable options for OEMs.
 
AMD in essence have created a product that can be either very efficient/lower-performance or the complete opposite, but you feel it is OK in this instance to leave it up to the OEMs to decide, basically allowing AMD to wash their hands of the confusion.

From the post you quoted:
IMHO, AMD shouldn't even let laptop makers pair the 2500U or the 2700U with single-channel memory.
I guess with the 2200U with a Vega 3 it wouldn't make that much of a difference, but with the 2500U and up it's simply ridiculous.
Regardless, bringing up the matter of single-channel memory adoptions and OEMs not advertising mXFR is also a problem to discuss (in a thread you might want to create)
(...)


¯\_(ツ)_/¯


You are defending AMD doing something very similar, which will cause consumer confusion not just in buying but also in benchmarks.
(...)
OK, so why can't Nvidia also leave it up to the OEMs to brand/spec which MX150 they are using? It would be naive to think an OEM is not aware of which component they are using.

Case A:
- nvidia SILENTLY introduces a downgrade in performance for what is essentially the same product in name, months after they introduced the original higher-performing version.

Case B:
- AMD OPENLY STATES ON RELEASE DAY that laptop makers can enable mXFR boost if the laptop's cooling solution passes certification for 25W cooling, and leaves the decision to promote it to the OEMs.


Do you get it now?
If not, here's a hint: the really obvious difference is written in blue, bolded and capital letters with an embedded hyperlink.
 
From the post you quoted:



¯\_(ツ)_/¯




Case A:
- nvidia SILENTLY introduces a downgrade in performance for what is essentially the same product in name, months after they introduced the original higher-performing version.

Case B:
- AMD OPENLY STATES ON RELEASE DAY that laptop makers can enable mXFR boost if the laptop's cooling solution passes certification for 25W cooling, and leaves the decision to promote it to the OEMs.


Do you get it now?
If not, here's a hint: the really obvious difference is written in blue, bolded and capital letters with an embedded hyperlink.
It is not quite like that though; you are relying on Anandtech's coverage of a launch presentation from AMD, which in reality is not reflected on the AMD 2500U page I linked earlier, nor in how OEMs actually use it.

Nvidia has always stated that for the MX150 configuration spec, consumers should refer to the OEM. Period. Yes, this is failing because OEMs are not actually bothering and Nvidia is not enforcing it.
AMD state a default TDP but then also a cTDP of 12W-25W, which is what most of the OEMs are implementing, even at 15W (a la Acer) - so OEMs are not actually bothering and AMD is not enforcing it either.
Look at the 2500U AMD link I gave in the post you're defending AMD against; this is the link consumers will go to, separate from the OEMs and retailers (which also do not provide a clear specification of behaviour).
https://www.amd.com/en/products/apu/amd-ryzen-5-2500u
Where does it clearly set out the mXFR boost/cooling support of the OEM laptops they show on that page?
Where does it even mention anything about the implications/specification of OEM cTDP?
Unfortunately, cTDP in the hands of OEMs makes your case B meaningless.

One major complaint earlier in the thread was how consumers' perception of benchmarks/reviews would be skewed by the different MX150 implementations; well, that is also applicable in this situation with the 2500U and 2700U, for the reasons and links I gave.
Unfortunately AMD cannot be defended on this, the same way Nvidia cannot, as both are passing this on to others rather than ensuring there is no consumer confusion.

If one trawls the AMD site for XFR/mXFR, you find the important, relevant info as a collapsed footnote that is easily missed; it carries massive caveats and, as seen with the reviews so far, is not used by OEMs:
  1. ”Premium notebook” chassis is defined here as incorporating thermal solutions capable of operating within the upper range of a product’s capabilities. Check with manufacturer to confirm AMD requirements for “Ultimate mXFR Performance” have been met. Not enabled on all notebook designs.
  2. mXFR enablement must meet AMD requirements. Not enabled on all notebook designs. Check with manufacturer to confirm “amplified mXFR performance” support. GD-125
It has no relation to the points raised, and you are using it as a defense to explain 15W, 15W configurable, 25W, and 25W unrestricted, where there is no spec from OEMs, nor are reviews really showing mXFR in effect or even referencing it.
AMD has passed responsibility onto OEMs without enforcing spec/TDP branding, the same way Nvidia did with the MX150, and I agree this is not right for consumers.
It comes back to OEMs wanting greater flexibility to design and select an envelope for a model with different ergonomics/performance/battery life, but IHVs need to ensure it is clear for consumers.
It is a trap that many of the IHVs are going to fall into in some way, IMO.
 
And so is the Geforce GTX1050 vs. GTX1050 Ti, or GTX1060 vs GTX1060 Max-Q, and the GTX1070 vs. GTX1070 Max-Q vs GTX1080 vs GTX1080 Max-Q.

For example:
- The performance difference between the laptop GTX1050 and GTX1050 Ti is approximately 20%.
- The difference between laptop GTX1060 and GTX1060 Max-Q is around 20%.
- Between laptop GTX1070 and GTX1070 Max-Q is around 20%.

Difference between MX150 and MX150: 30%.


So why is there a clear name difference between the same-chip variations on all the other nvidia GPUs but not this one?

And why did nvidia announce the MX150 in May 2017, and the Max-Q versions also in May 2017, but then ~6 months later silently creep in a performance/TDP-reduced MX150, at about the same time that Ryzen Mobile U variants with Vega 8/10 started coming up in the market?


The GTX 1050 Ti and 1050, and the GTX 1070 and its Max-Q version, do have different numbers of cores enabled. Max-Q is essentially branding related to high-end gaming; IMO nVidia hasn't wanted to drag that name lower than the 1060 level.

Ok, but why not some difference in name, like MX140?

It needs to be pointed out that there aren't just two specs for the MX150 parts. There are laptops out there that are somewhere in between, because the manufacturer has chosen to use those parameters for their product. The manufacturers actually do have the ability to configure these parts. The performance varies quite a bit between the laptops that have this chip; it's not just the 30% difference you mentioned.

This is from the Anandtech link you provided:

From a technical perspective, details on the GeForce MX150 are very limited. Traditionally NVIDIA does not publish much in the way of details on their low-end laptop parts, and unfortunately the MX150’s launch isn’t any different. We’re still in the process of shaking down NVIDIA for more information, but what usually happens in these cases is that these low-end products don’t have strictly defined specifications. At a minimum, OEMs are allowed to dial in clockspeeds to meet their TDP and performance needs.

This is basically what is taking place here. I personally don't think that adjusting the settings of a chip with the same number of active units requires a name change for that chip, at least in this context where we are moving to a different thermal environment. If you put the same car engine into a heavier car, it performs worse, but many times they don't sell it as a cheaper, lower alternative. It's not a lower-end part; it's just operating in a harder environment. You could have the 25W version in an ultrabook only to have it throttle down, giving the same 10W or worse uneven, unpredictable performance.
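As a toy illustration of why the same silicon lands roughly 30% slower under a much tighter power cap (a crude sketch only: it assumes dynamic power scales with V²·f and that voltage tracks frequency across the DVFS range, so power goes roughly as f³; it ignores static power, voltage floors, and the memory downclock):

```python
# Crude DVFS sketch: if power ~ f^3, the sustainable clock scales with the
# cube root of the power budget, and performance roughly tracks clock.
def sustained_clock_ratio(power_budget_w: float, reference_w: float) -> float:
    return (power_budget_w / reference_w) ** (1 / 3)

ratio = sustained_clock_ratio(10, 25)  # ~10W MX150 variant vs ~25W variant
print(f"~{(1 - ratio) * 100:.0f}% lower sustained clock")  # ~26% lower
```

That lands in the same ballpark as the ~30% deficit being reported, though the real numbers obviously depend on the actual voltage/frequency curve.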

Why is it a surprise, or suspect, that nVidia changes/expands its product portfolio when the competitive landscape changes? They want to be in the 13" ultrabook space as well, it seems, and that seems to have required a lower TDP configuration.

I think it should be up to the laptop manufacturer to announce the clock rates of their product on the product sheet in a case like this.
 
Why is it a surprise, or suspect, that nVidia changes/expands its product portfolio when the competitive landscape changes? They want to be in the 13" ultrabook space as well, it seems, and that seems to have required a lower TDP configuration.

I think it should be up to the laptop manufacturer to announce the clock rates of their product on the product sheet in a case like this.

The problem here is not that it happens; what you are saying was my initial assessment of this issue. The problem is that nVidia provided a different SKU, 1D10 vs 1D12, for this purpose. They might be the same chip at a high level, but at a low level they might not be. These might be chips that, for example, did not pass the tests to be clocked as high as the normal MX150, so they are therefore a different end product, even if the underlying architecture is the same.
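For what it's worth, anyone wanting to check which variant their laptop shipped with could try something along these lines (a sketch: it assumes nvidia-smi is installed and that these query fields are supported by the driver; the mapping of 1D10 to the original ~25W part and 1D12 to the slower ~10W part is the one reported in this thread):

```python
import subprocess

# Query the PCI device ID and power limit via nvidia-smi; per this thread,
# 0x1D10 should be the original MX150 and 0x1D12 the lower-TDP variant.
fields = "name,pci.device_id,power.limit,clocks.max.sm"
out = subprocess.check_output(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"], text=True
)
name, dev_id, power_limit, max_sm = (s.strip() for s in out.split(","))
print(f"{name}: device id {dev_id}, power limit {power_limit}, max SM clock {max_sm}")
```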
 
The problem here is not that it happens; what you are saying was my initial assessment of this issue. The problem is that nVidia provided a different SKU, 1D10 vs 1D12, for this purpose. They might be the same chip at a high level, but at a low level they might not be. These might be chips that, for example, did not pass the tests to be clocked as high as the normal MX150, so they are therefore a different end product, even if the underlying architecture is the same.
That is assuming it is binned to that extent, which we do not know but which is possible, as all parts are binned, including CPUs/APUs/etc; or it was done to give OEMs another level of TDP flexibility, matching a similar range to AMD's and Intel's APUs/CPUs (launched Q3'17).
While Nvidia has a different version (some reports say it is firmware-based), in essence what OEMs are doing with the 2500U/2700U is still splitting models based upon TDP configuration, and this gives very different results and still creates consumer confusion; AMD is not enforcing any spec branding with regard to use of the cTDP, nor is Nvidia, and I think both should be. One could bring Intel into such debates as well.
As an example, would it be right to call it a 15W model if it is hitting 45W dynamically, with no indicator in the spec for consumers?
How does a consumer tell the difference for the same component strictly implemented at 15W by an OEM?

Intel also offers theirs as configurable by OEMs, with a notable divergence, so this applies to them as well.

The 8250U has a configurable cTDP of 10W to 25W and a 'default' of 15W.
However, the test below makes clear that it is configured at 25W in the Acer Swift 3, with no mention of this anywhere for consumers; it also includes a comparison with the HP Envy 2500U.
Where that leaves consumer perception is a bit of a mess if there are multiple OEM models with cTDPs diverging to both ends of the TDP extremes.

[Image: power consumption chart from the PCPer review]


https://www.pcper.com/reviews/Mobil...-Raven-Ridge/Power-Consumption-and-Software-C
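If anyone wants to verify what a given laptop actually sustains rather than trusting the label, sampling the package energy counter under load works on Linux with Intel chips (a sketch; the RAPL sysfs path is platform-dependent and AMD exposes energy counters differently, so treat the path as an assumption):

```python
import time
from pathlib import Path

# Intel RAPL package energy counter in microjoules; sample it over an
# interval while the machine is under full load to get sustained watts.
RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def package_watts(interval_s: float = 5.0) -> float:
    e0 = int(RAPL.read_text())
    time.sleep(interval_s)
    e1 = int(RAPL.read_text())
    return (e1 - e0) / 1e6 / interval_s  # note: the counter wraps eventually

print(f"Sustained package power: {package_watts():.1f} W")
```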
 
The Anandtech announcement ties into MSI, which has the right spec: https://www.msi.com/Graphics-card/GeForce-GT-1030-2GD4-LP-OC/Specification
The Geforce link you gave is for the original 30W model, and yeah, you raise a very important point there: it does not look like Nvidia has the 20W model on their site, and they seem again to be leaving it down to partners/OEMs.
Definitely one to keep an eye on, to see if Nvidia specs this correctly with partners/OEMs.
So far the 3 named partners (MSI, Palit, Gigabyte) are differentiating the 1030 models between the 20W and the 30W with the correct spec, but Palit do not make it clear enough in the model name IMO, and consumers may not read down to the spec, so Nvidia probably needs to maintain a more stringent naming standard for OEMs on these lower-powered entry GPUs.
 
So you think selling the same GPU with ~33% less performance than the default version is just fine, and that no naming differentiation should be enforced.
Ok.


Why stop at the MX150 then? Why not sell a GTX1080 as a "GTX1080 Ti 8GB"? The performance difference between the two is even less than 33%.

I think it is worth raising this again after the discussion with Picao.
Which of the current 2500U laptops identify themselves accurately as 15W, 25W, or unrestricted mXFR up to 40W?
Picao made a mistake about what the cTDP was for a particular model, some reviewers have made that mistake, and other reviewers do not even reference mXFR, so what chance do consumers have when one 2500U is closer to 15W-20W and the next is 41W in gaming?
You also raised separately that such a large TDP performance range skews benchmark comparisons, albeit directed at the MX150 part.

And it is not just applicable to Nvidia/AMD; like I said earlier, Intel's 8250U also has a massive range of 15W-25W, and the PCPer test comparing the 8250U against the 2500U had both beyond their "official" 15W spec.
The point is these mobile devices seem to require an extreme option range for the OEMs, which none of the IHVs are enforcing, and it comes back to what Dr Evil said in this thread about OEMs wanting the option to control TDP themselves in a flexible way, whether that be for the MX150 or an APU.
I know you mention AMD presented mXFR at prelaunch with reporters, but where are OEMs being required to do the same, and how will that work if one model is 25-29W (some 2500U laptops will probably end up here) and another is truly unrestricted, averaging 41W in games?
Tech Report did not know about mXFR for the OEM product they reviewed: https://techreport.com/news/32912/amd-confirms-that-the-hp-envy-x360-uses-mobile-xfr
Other reviews make no mention of mXFR, or more relevantly of the cTDP 25W that can go quite a fair bit beyond that.

[Image: power consumption chart from the PCPer review]


This takes nothing away from the fact that Nvidia needs to enforce some kind of spec/brand that OEMs use, so consumers can differentiate mobile products with such a broad range of OEM-configurable TDP, since it influences benchmark/gaming performance and battery life in gaming; but it is equally applicable to the other IHVs, as seen in the power results above, where even the 8250U Swift 3 has an "official" TDP of 15W, so the cTDP 25W performance is hidden from consumers (albeit close to the cTDP 25W spec).

https://www.pcper.com/reviews/Mobil...-Raven-Ridge/Power-Consumption-and-Software-C

Worth noting, as we discussed earlier, that the 10W option for the MX150 only seems to have become available once the 15W-25W cTDP 8250U launched; people can make of that what they will, but again I agree Nvidia needs to change how they manage the OEMs with these parts, along with the other IHVs.

Anyway, regarding mXFR, it needs to be seen in the context of AMD's footnote and "check with manufacturer":
AMD footnote said:
  1. mXFR enablement must meet AMD requirements. Not enabled on all notebook designs. Check with manufacturer to confirm “amplified mXFR performance” support. GD-125

Here are two very different 2500U OEM products on Amazon, with very different TDPs, one with unrestricted mXFR (but known only because AMD confirmed it to Tech Report, rather than from the OEM):
https://www.amazon.com/dp/B078BC1YL2/
https://www.amazon.com/HP-Micro-edge-Flagship-Notebook-MultiTouch/dp/B079G52BDZ/
 