AMD Radeon finally back in laptops?

Without boost clocks, I'm sure it can get down that far, if not farther when downclocked; with boost clocks the 1050 Ti stays around 70 watts.

If you lock a 1060 at its base clocks, it goes down to 65 watts; a 1070 goes down to 75 watts.

They can cut power usage by 50% just by locking clocks (more specifically, locking voltage so boost clocks can't be used); there's a rough sketch of that arithmetic at the end of this post.

They don't need to downclock a 1050 or 1050 Ti to get to where an R460 is when downclocked by 25%.

I'm also pretty sure this is why the mobile Pascals seem a bit slower than the desktops: the laptops are still limited by a lower TDP than what the desktop Pascals can draw, so they don't boost as much.

Then factor in binning for lower-voltage chips, and there ya have it. Even gaming laptops have a cap on their TDP. For a 1060 gaming laptop I would expect a 70 W TDP, no more; a 1070 laptop probably 105 W.
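To put some rough numbers behind that clock/voltage-locking claim, here's a back-of-envelope sketch. It assumes dynamic power scales roughly with frequency times voltage squared and ignores static (leakage) power; the clock and voltage figures are illustrative guesses, not measurements:

```python
# Back-of-envelope estimate of the power saved by locking a card at base
# clocks (no boost). Assumes dynamic power ~ f * V^2 and ignores static
# (leakage) power, so it undershoots a little. The clock and voltage
# figures below are illustrative assumptions, not measured values.

def scaled_power(p_boost, f_boost, v_boost, f_base, v_base):
    """Scale a measured boost-clock power figure down to base clocks."""
    return p_boost * (f_base / f_boost) * (v_base / v_boost) ** 2

# ~120 W boosting at ~2.0 GHz / 1.06 V, locked down to ~1.5 GHz / 0.80 V:
print(f"{scaled_power(120, 2000, 1.06, 1506, 0.80):.0f} W")  # ~51 W
```

The measured figure lands a bit higher (around 62W, as discussed below) because leakage doesn't scale down with frequency, but the roughly 50% saving holds.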
 
For a 1060 gaming laptop I would expect a 70 W TDP, no more.
According to PCGamer:
Nvidia doesn't officially disclose their notebook GPU TDPs—and the notebook manufacturers have some flexibility in setting power targets—but based on testing I'd peg the notebook GTX 1060 at closer to 80W max (the same TDP as the GTX 970M).

And as I said before, there's no guarantee that lowering the clocks on the GP107 will make it go down as far as 35W for the full card.
Regardless, that chip wasn't ready in time for the Surface Book, so it probably wouldn't be ready in time for this MacBook Pro either.
 
According to PCGamer:


And as I said before, there's no guarantee that lowering the clocks on the GP107 will make it go down as far as 35W for the full card.
Regardless, that chip wasn't ready in time for the Surface Book, so it probably wouldn't be ready in time for this MacBook Pro either.
The discrete card at its base clock of 1508MHz (something like that anyway) uses 62W in game; at 2050-2100MHz it uses 120W. The higher 130W-140W figures are more to do with custom models and top clocks (both memory and core).
So you can extrapolate from that, and that is for the full discrete 6GB card.
Cheers
 
According to PCGamer:


And as I said before, there's no guarantee that lowering the clocks on the GP107 will make it go down as far as 35W for the full card.
Regardless, that chip wasn't ready in time for the Surface Book, so it probably wouldn't be ready in time for this MacBook Pro either.

They are guessing; they don't know. Other websites have stated 65 and 75 watts.

And hell yes it can get to 35 watts. Just search for voltage locks on the 1060 and 1070; people have done them on other forums, and they are getting the 50% power savings I have been talking about.

This is a moot point anyway, because we know Pascal kills GCN Polaris in perf/watt. You shouldn't even question that a 1050 or 1050 Ti could get down to 35 watts and still have better performance; the question is how much more performance it will have, and the end result is most likely that it retains base clocks, with no boost.

Now, if we want some data behind this: temps of the 1050 and 1050 Ti don't increase until the core starts boosting.

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1050-ti,4787-7.html

[Chart: GTX 1050 Ti clock rate (Tom's Hardware)]


Just looking at the temps, they increase by 35 degrees C. That alone tells us they can save 35 watts or so if they stop the card from boosting (this is a seriously overclocked card). So, just to point out, temps play a pretty important role in power usage when this card is overclocked.

Without the regular boost, the lower temps alone will save 10 watts or so. Then take into account the effect of the voltage change, which increases power usage considerably more than the temps do, and you are looking at a lot of savings just by disabling the boost voltage.
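If you want a feel for the leakage side of this, here's a toy model. Subthreshold leakage grows roughly exponentially with die temperature; the reference wattage and doubling interval below are made-up illustrative numbers, not measurements of any GPU:

```python
# Toy model of static (leakage) power vs. die temperature. Leakage grows
# roughly exponentially with temperature; p_ref, t_ref and doubling_c are
# illustrative assumptions, not measured values for any real chip.

def leakage_power(temp_c, p_ref=8.0, t_ref=50.0, doubling_c=25.0):
    """Leakage power in watts, doubling every doubling_c degrees C."""
    return p_ref * 2 ** ((temp_c - t_ref) / doubling_c)

for t in (50, 65, 85):
    print(f"{t} C: ~{leakage_power(t):.1f} W leakage")
# 50 C: ~8.0 W; 65 C: ~12.1 W; 85 C: ~21.1 W. A cooler, non-boosting
# card burns noticeably less power in leakage alone.
```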
 
The discrete card at its base clock of 1508MHz (something like that anyway) uses 62W in game; at 2050-2100MHz it uses 120W. The higher 130W-140W figures are more to do with custom models and top clocks (both memory and core).
So you can extrapolate from that, and that is for the full discrete 6GB card.
No, you cannot extrapolate "downwards" from whatever happens "upwards". It just doesn't work that way because power consumption with frequency is not linear.
I get that you often take those line charts from tomshardware as some kind of holy grail that tells you the exact power consumption at each exact frequency, which is a tremendous mistake IMO (and this is something the author himself says too in the article).

Just to show how ridiculous that would be, that GTX 1060 line shows a slope of 0.1 W/MHz (a 20W difference on the y-axis over a 200MHz difference on the x-axis, between 60 and 80W). With y = mx + b and the point (1500MHz, 60W), we get power = 0.1*frequency - 90.

This is fantastic, because according to your extrapolation, if the frequency of the GTX 1060 is 850MHz, then power = 0.1*850 - 90 = -5W.

Congratulations nvidia and @CSI PC , you just invented a graphics card that generates power out of nothing!
It's a miracle! Let's stop all the investment in renewable power sources and just start a way to harvest the power generated by those GP106 GPUs below 900MHz and we'll have free clean energy for all!
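If anyone wants to watch the absurdity happen, here's that naive linear model evaluated at lower clocks, using the slope and intercept from above:

```python
# The naive linear fit from the 60-80 W segment: power = 0.1 * f - 90.
# Extrapolating it below the measured range quickly turns nonsensical.

def naive_linear_power(freq_mhz, slope=0.1, intercept=-90.0):
    return slope * freq_mhz + intercept

for f in (1500, 1200, 900, 850):
    print(f"{f} MHz -> {naive_linear_power(f):+.0f} W")
# 1500 MHz -> +60 W
# 1200 MHz -> +30 W
#  900 MHz -> +0 W
#  850 MHz -> -5 W   <- the free-energy graphics card
```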


Now to the serious part, just because you can't see a horizontal asymptote down to 1500MHz (like you see the start of a vertical one close to 2GHz), it doesn't mean it won't be there at 1400MHz or 1300MHz.
As I stated before, there's a range of frequencies and voltages (not shown in the graph BTW) within which the chip behaves with that pretty linear slope. The chip was just binned for that. Outside those values, there will be little or no returns.


They are guessing; they don't know. Other websites have stated 65 and 75 watts.
They are testing and put the number in that ballpark.
Feel free to share these other websites that claim 65 to 75W through testing and not guessing.

And hell yes it can get to 35 watts. Just search for voltage locks on the 1060 and 1070; people have done them on other forums, and they are getting the 50% power savings I have been talking about.
Different chips made on a different foundry using a different process.
Feel free to share links of people claiming they got their GP107 cards to pull 35W total from 12V and 3.3V rails, and at what clocks.


you shouldn't even question that a 1050 or 1050 Ti could get down to 35 watts and still have better performance
Like the 1050 Ti would unquestionably reach 2 GHz on air with a 6pin connector?


In terms of power envelope, the GP107 is the successor of GM107 (GTX 750 Ti, 850M, 950M), and nvidia never had a single GM107 mobile card go down to 35W.
A smaller GM108 was developed to reach those values.
 
No, you cannot extrapolate "downwards" from whatever happens "upwards". It just doesn't work that way because power consumption with frequency is not linear.
I get that you often take those line charts from tomshardware as some kind of holy grail that tells you the exact power consumption at each exact frequency, which is a tremendous mistake IMO (and this is something the author himself says too in the article)

Just to show how ridiculous that would be, that GTX 1060 line shows a slope of 0.1 W/MHz (a 20W difference on the y-axis over a 200MHz difference on the x-axis, between 60 and 80W). With y = mx + b and the point (1500MHz, 60W), we get power = 0.1*frequency - 90.

This is fantastic, because according to your extrapolation, if the frequency of the GTX 1060 is 850MHz, then power = 0.1*850 - 90 = -5W.

Congratulations nvidia and @CSI PC , you just invented a graphics card that generates power out of nothing!
It's a miracle! Let's stop all the investment in renewable power sources and just start a way to harvest the power generated by those GP106 GPUs below 900MHz and we'll have free clean energy for all!

You don't seem to understand how leakage works and what happens with the increased voltages during boost. I suggest you look up things like that when you have free time before spouting about EE as if it couldn't be figured out with math.

Now to the serious part, just because you can't see a horizontal asymptote down to 1500MHz (like you see the start of a vertical one close to 2GHz), it doesn't mean it won't be there at 1400MHz or 1300MHz.
As I stated before, there's a range of frequencies and voltages (not shown in the graph BTW) within which the chip behaves with that pretty linear slope. The chip was just binned for that. Outside those values, there will be little or no returns.

Are you guessing on that? Because I don't see the data showing that...

They are testing and put the number in that ballpark.
Feel free to share these other websites that claim 65 to 75W through testing and not guessing.


Different chips made on a different foundry using a different process.
Feel free to share links of people claiming they got their GP107 cards to pull 35W total from 12V and 3.3V rails, and at what clocks.

I think you should search for them. You kinda put your foot in the fire here with some pretty absurd comments. One of the sites is ExtremeTech, so go look.

Like the 1050 Ti would unquestionably reach 2 GHz on air with a 6pin connector?

I think it can do it; it's just that the BIOS isn't letting them. We can see the cooling on most of these 1050 Tis isn't enough to reach 2GHz. We can also see that when overclocking without the 6-pin attached they still get to 1750MHz, which means that at the 75-watt max board supply they have a buttload of room to push power.
In terms of power envelope, the GP107 is the successor of GM107 (GTX 750 Ti, 850M, 950M), and nvidia never had a single GM107 mobile card go down to 35W.
A smaller GM108 was developed to reach those values.

Oh, I think you missed the Razer Blade ultrabook then... Yes they did.
 
The Razer Blade didn't miraculously halve the GTX 1060's TDP. It just uses very loud fans to dissipate the heat of an 80W graphics card, just like its 2015 predecessor.
If it was using a 35W CPU together with a 35W GPU, it wouldn't need a 165W power supply.


As for the bile, personal attacks and derailment, I'll just make good use of B3D's top tip of 2016.
 
You stated the 750 Ti; the 750 Ti variants that were used in notebooks had TDPs around 35 to 45 watts ;)

BTW, you were the one who stated you thought nV can't get to that power rating.

If you want me to quote it, here ya go:

Makes sense that the clocks would have to go down.
Apple decreased their GPU power budget of 45W from the previous M370X down to 35W.

The closest nvidia has at this TDP is GM108 with ~800 GFLOPs, and that chip needs to use DDR3 on a 64bit bus to reach those levels of power consumption.


By the way, the Pro 460 seems to be running at 911MHz. It's the same frequency as the PS4 Pro.
Probably not a coincidence.


Don't sit here and try to redirect your line of thought onto others when it was directly associated with what you now call derailment.

As for the rest of it, I did no such thing; I just stated you need to look some stuff up before you post, because you haven't looked for the information. Everyone knows I have no qualms about calling out what a person truly is, in exact words, when I mean to, and you don't see me posting like that here, so I didn't.

http://www.extremetech.com/gaming/1...nt-amd-and-nv-graphics-cards-by-a-huge-margin

The real kicker, however, is the claimed 45W TDP envelope for the new Maxwell. This would make sense, given that the desktop variant has a 60W TDP and Nvidia can bin the mobile parts for superior power consumption. If it’s true, this puts AMD in a very tight spot with respect to the laptop market. AMD’s 28nm GCN parts are well-aligned against Kepler in terms of rated TDPs — the GTX 770M is reportedly a 75W part. At 45W, Mobile Maxwell is in a class of its own.

The GTX 800 series (performance chips) used GM107 chips, not GM108; the GM107 parts are the ones talked about in the ExtremeTech article I just linked to.

The lower-end GTX 800 parts (I think they were the 840M and 830M) were the ones that used GM108 chips.

https://en.wikipedia.org/wiki/Maxwell_(microarchitecture)

GTX 850M/860M (GM107) and GeForce 830M/840M (GM108)
 
The radeon.pro page has a (partially obscured, off-angle, limited area in focus) visual representation of what appears to be the 460 die.

Assuming it's representative:
Without doing any visual manipulation, does it seem like the CUs were engineered to be narrower?
The SIMD portions are not symmetrical in these CUs, unlike Polaris 10 and others, but like some of the APU implementations like Carrizo. The inner SIMD section looks to be more heavily reworked.
Various other blocks seem to have been rotated and various SRAM blocks stacked along the length of the CU rather than width.
 
No, you cannot extrapolate "downwards" from whatever happens "upwards". It just doesn't work that way because power consumption with frequency is not linear.
I get that you often take those line charts from tomshardware as some kind of holy grail that tells you the exact power consumption at each exact frequency, which is a tremendous mistake IMO (and this is something the author himself says too in the article).

Just to show how ridiculous that would be, that GTX 1060 line shows a slope of 0.1 W/MHz (a 20W difference on the y-axis over a 200MHz difference on the x-axis, between 60 and 80W). With y = mx + b and the point (1500MHz, 60W), we get power = 0.1*frequency - 90.

This is fantastic, because according to your extrapolation, if the frequency of the GTX 1060 is 850MHz, then power = 0.1*850 - 90 = -5W.

Now to the serious part, just because you can't see a horizontal asymptote down to 1500MHz (like you see the start of a vertical one close to 2GHz), it doesn't mean it won't be there at 1400MHz or 1300MHz.
As I stated before, there's a range of frequencies and voltages (not shown in the graph BTW) within which the chip behaves with that pretty linear slope. The chip was just binned for that. Outside those values, there will be little or no returns.


They are testing and put the number in that ballpark.
Feel free to share these other websites that claim 65 to 75W through testing and not guessing.


Different chips made on a different foundry using a different process.
Feel free to share links of people claiming they got their GP107 cards to pull 35W total from 12V and 3.3V rails, and at what clocks.



Like the 1050 Ti would unquestionably reach 2 GHz on air with a 6pin connector?


In terms of power envelope, the GP107 is the successor of GM107 (GTX 750 Ti, 850M, 950M), and nvidia never had a single GM107 mobile card go down to 35W.
A smaller GM108 was developed to reach those values.
You know your point is bloody stupid and deliberately baiting/twisting my information.
I think you like just arguing with me, just like you insisted the Nintendo Switch was not Nvidia and was most likely DMP.

By extrapolate, I mean look at the figure at base clocks/0.8V and go from there in a more sensible way than the wrong figure quoted by PCGamer, not your pointless example of -5W at 850MHz, with a voltage figure that is stupid. There is a limit to how stable it will be at minimum voltage values, and yeah, it is obvious the linear behaviour only holds within the silicon node's optimal performance envelope; you have read enough of my posts to know I go on about that a lot in various threads.
If I have the choice between a figure that has no basis in measurements and an actual measured benchmark, albeit for the full discrete 1060 6GB, analysed for voltage-frequency-watts-performance from 0.8V to 1.1V, I know which I prefer.
So you're using the made-up estimate from PCGamer as fact to make a point, and you still want to argue about it...
Case in point, the article you use says:
PCGamer said:
but based on testing I'd peg the notebook GTX 1060 at closer to 80W max (the same TDP as the GTX 970M). 10 percent lower performance while using 30 percent less power is a good tradeoff.
If instead of arguing and posting a baiting/sarcastic response you had read it, you would know that at 1508MHz it ONLY USES around 62W when measured accurately.
I have posted the Tom's Hardware analysis many times, and ironically you also argued briefly against it in the past.
Here is part of it again, and please cut back on the deliberate baiting/taking my posts out of context.

[Chart: GTX 1060 power consumption vs. clock rate (Tom's Hardware)]


And then they checked their analysis against gameplay trend analysis as well. Anyway, it is far more precise and accurate than the PCGamer figure you want to argue about: http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1060-pascal,4679-7.html

I guess you also want to argue that Nvidia is wrong in stating that their Tesla P4, a GP104 (comparable to the 1080's architecture), is spec'd at 75W with 5.5 TFLOPs FP32; and yeah, that has a clock range of 810MHz-1114MHz (75W is at the top clock, not base).
Anyway, I am out of this discussion now, but I would not use the PCGamer figure; I would use the Tom's analysis to extrapolate the watts, and yes, I assume everyone here has the common sense to do that within reason.
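As a rough sketch of what extrapolating "within reason" looks like: stay inside the measured envelope and interpolate between points. The (MHz, W) pairs here are approximate readings off the Tom's chart, not exact published data:

```python
# Piecewise-linear interpolation *within* Tom's measured GTX 1060 curve,
# refusing to extrapolate outside it. The points are approximate readings
# of the published chart, not exact data.
measured = [(1506, 62), (1700, 80), (1900, 100), (2050, 120)]  # (MHz, W)

def interp_power(freq_mhz):
    if not measured[0][0] <= freq_mhz <= measured[-1][0]:
        raise ValueError("outside the measured range; refusing to extrapolate")
    for (f0, w0), (f1, w1) in zip(measured, measured[1:]):
        if f0 <= freq_mhz <= f1:
            return w0 + (w1 - w0) * (freq_mhz - f0) / (f1 - f0)

print(f"{interp_power(1600):.0f} W")  # ~71 W, inside the measured range
# interp_power(1000) raises ValueError instead of inventing a number
```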
Cheers
 
The radeon.pro page has a (partially obscured, off-angle, limited area in focus) visual representation of what appears to be the 460 die.

Assuming it's representative:
Without doing any visual manipulation, does it seem like the CUs were engineered to be narrower?
The SIMD portions are not symmetrical in these CUs, unlike Polaris 10 and others, but like some of the APU implementations like Carrizo. The inner SIMD section looks to be more heavily reworked.
Various other blocks seem to have been rotated and various SRAM blocks stacked along the length of the CU rather than width.

Have you seen this entry at radeon.com?
Perhaps it has something to do with this:

Jason Evangelho said:
It’s not enough to simply launch a professional mobile graphics processor that sips energy by running inside a 35W power envelope.
We’ve accomplished all this while simultaneously reducing the size of the very building blocks of this processing power. How? Through a complex process known as die thinning.
Die thinning is an elaborate process that reduces the thickness of each wafer – those thin slices of material used in circuits — from 780 microns to 380 microns, or 0.38 millimeters.



@CSI PC, the mere fact that you bolded some power/frequency values that I had actually written in the post you quoted shows how little you read it or paid attention to it.
I really don't care. Go ahead and use that graph as your holy grail for predicting power at clocks that are not within the values depicted in it. I just tried to explain how that doesn't make any sense, but whatever.

And then you go on mixing whatever average power values Tom's Hardware measured in a single test, claiming that it would be the same as validating that card's Thermal Design Power (which is kind of important and restrictive for small enclosures like a laptop). Even the author himself states those results are limited because of their own time constraints:
We have to limit ourselves to one representative metric because there are so many data points to generate.

Metro is 1 game, 1 engine in 1 API, probably even using a timedemo, which is 1 scripted run in 1 scenario, and all of this in 1 setting config (for 3 different resolutions, yay). For you, apparently it's everything. Hint: don't follow a Quality Assurance career.

I also think it's awfully childish to try to rub in some of my guesses that didn't come true but are completely unrelated to the matter at hand. It's called a strawman.
Yes, up until the last moment I thought the NX was more likely to have either an AMD SoC or a Nintendo SoC with a GPU by DMP. I thought that because a) AMD had yet to announce their 3rd semi-custom design win, b) DMP had just announced a new family of GPUs capable of scaling up to 1TFLOPs, c) nvidia never hinted they had a semi-custom design win during their investor calls, d) Nintendo had used Tegra for dev boards of the 3DS and this could be the case again, and e) IIRC Jen-Hsun Huang had stated that console wins weren't the deals with the kind of profits they were looking for (turns out he was full of shit).

My opinion had zero to do with arguing with you or anyone else. I don't remember nor care if my opinions about the NX's SoC were made through replies to your username or some other's.
You give yourself way too much importance.
 
Have you seen this entry at radeon.com?
Perhaps it has something to do with this:

Yeah, I'm pretty familiar with wafer thinning; Intel has been using it for quite some time now. This is nothing new.

@CSI PC, the mere fact that you bolded some power/frequency values that I had actually written in the post you quoted shows how little you read it or paid attention to it.
I really don't care. Go ahead and use that graph as your holy grail for predicting power at clocks that are not within the values depicted in it. I just tried to explain how that doesn't make any sense, but whatever.

And then you go on mixing whatever average power values Tom's Hardware measured in a single test, claiming that it would be the same as validating that card's Thermal Design Power (which is kind of important and restrictive for small enclosures like a laptop). Even the author himself states those results are limited because of their own time constraints:


Metro is 1 game, 1 engine in 1 API, probably even using a timedemo, which is 1 scripted run in 1 scenario, and all of this in 1 setting config (for 3 different resolutions, yay). For you, apparently it's everything. Hint: don't follow a Quality Assurance career.

I also think it's awfully childish to try to rub in some of my guesses that didn't come true but are completely unrelated to the matter at hand. It's called a strawman.
Yes, up until the last moment I thought the NX was more likely to have either an AMD SoC or a Nintendo SoC with a GPU by DMP. I thought that because a) AMD had yet to announce their 3rd semi-custom design win, b) DMP had just announced a new family of GPUs capable of scaling up to 1TFLOPs, c) nvidia never hinted they had a semi-custom design win during their investor calls, d) Nintendo had used Tegra for dev boards of the 3DS and this could be the case again, and e) IIRC Jen-Hsun Huang had stated that console wins weren't the deals with the kind of profits they were looking for (turns out he was full of shit).

My opinion had zero to do with arguing with you or anyone else. I don't remember nor care if my opinions about the NX's SoC were made through replies to your username or some other's.
You give yourself way too much importance.

Well, you are guessing when there is information available that means you don't need to guess, and that is what CSI_PC and I have been saying.

For a person saying a quality test engineer should follow more than one data point: it's a hell of a lot better than guessing in the first place, isn't it?
 
Have you seen this entry at radeon.com?
Perhaps it has something to do with this:

That seems to involve a manufacturing step more concerned with the Z axis of a chip, perpendicular to the plane the CUs are on.
Perhaps there are considerations for the implementation that can influence this to a limited extent, if for example there are mechanical reasons for specific choices.
With die thinning the big win would be shaving (ed: mechanically grinding) the non-patterned side of the wafer, which makes up most of the height.
 
I really don't care. Go ahead and use that graph as your holy grail for predicting power at clocks that are not within the values depicted in it. I just tried to explain how that doesn't make any sense, but whatever.

And then you go on mixing whatever average power values Tom's Hardware measured in a single test, claiming that it would be the same as validating that card's Thermal Design Power (which is kind of important and restrictive for small enclosures like a laptop). Even the author himself states those results are limited because of their own time constraints:

Metro is 1 game, 1 engine in 1 API, probably even using a timedemo, which is 1 scripted run in 1 scenario, and all of this in 1 setting config (for 3 different resolutions, yay). For you, apparently it's everything. Hint: don't follow a Quality Assurance career.

My opinion had zero to do with arguing with you or anyone else. I don't remember nor care if my opinions about the NX's SoC were made through replies to your username or some other's.
You give yourself way too much importance.
Sigh,
you do realise it was Tom's Hardware that proved the 480's TDP and power distribution figures were incorrect, using similar methodology and context, followed up by PCPer?
Funny that AMD corrected their card based on Tom's Hardware and PCPer, but hey, what do we all know. And Metro gave results good enough for the point back then; it is hard on power draw for both manufacturers, hence why it is used.
Yeah, time constraints, and yet they spent a solid two days just mapping the envelope of two GPUs. However, again you ignored that they double-checked by running the game and analysing default/minimum/max clocks to see how it correlated with their earlier model.
Anyway, enough; I had to respond as again you skewed the context and my knowledge. Well done.
 
Interestingly, even though Metro is an old game, it is demanding when it comes to power consumption, so what we are looking at is one of the more power-intensive games that give out those kinds of figures.
 
No, you cannot extrapolate "downwards" from whatever happens "upwards". It just doesn't work that way because power consumption with frequency is not linear.
I get that you often take those line charts from tomshardware as some kind of holy grail that tells you the exact power consumption at each exact frequency, which is a tremendous mistake IMO (and this is something the author himself says too in the article).

Just to show how ridiculous that would be, that GTX 1060 line shows a slope of 0.1 W/MHz (a 20W difference on the y-axis over a 200MHz difference on the x-axis, between 60 and 80W). With y = mx + b and the point (1500MHz, 60W), we get power = 0.1*frequency - 90.

This is fantastic, because according to your extrapolation, if the frequency of the GTX 1060 is 850MHz, then power = 0.1*850 - 90 = -5W.

Congratulations nvidia and @CSI PC , you just invented a graphics card that generates power out of nothing!
It's a miracle! Let's stop all the investment in renewable power sources and just start a way to harvest the power generated by those GP106 GPUs below 900MHz and we'll have free clean energy for all!


Now to the serious part, just because you can't see a horizontal asymptote down to 1500MHz (like you see the start of a vertical one close to 2GHz), it doesn't mean it won't be there at 1400MHz or 1300MHz.
As I stated before, there's a range of frequencies and voltages (not shown in the graph BTW) within which the chip behaves with that pretty linear slope. The chip was just binned for that. Outside those values, there will be little or no returns.



They are testing and put the number in that ballpark.
Feel free to share these other websites that claim 65 to 75W through testing and not guessing.


Different chips made on a different foundry using a different process.
Feel free to share links of people claiming they got their GP107 cards to pull 35W total from 12V and 3.3V rails, and at what clocks.



Like the 1050 Ti would unquestionably reach 2 GHz on air with a 6pin connector?


In terms of power envelope, the GP107 is the successor of GM107 (GTX 750 Ti, 850M, 950M), and nvidia never had a single GM107 mobile card go down to 35W.
A smaller GM108 was developed to reach those values.
I don't think anyone needs to point out that you pulled a very dull and banal strawman argument, with an irritating mocking tone to boot, but the hypocrisy of then accusing others' posts of being full of bile was really the cherry on the cake.
As for the bile, personal attacks and derailment, I'll just make good use of B3D's top tip of 2016.


Good God man, this is the kind of thing I expect on reddit...

Obviously I cannot speak for Razor or CSI but I am quite certain nothing they said suggested power and frequency scale linearly.

The graph posted above shows fairly linear behavior in the clock range tested.

I'm going to take care to be clear because I don't want you pulling another strawman; pretty soon this thread will be a barn.

1. Indisputable fact: the entirety of a GTX 1060 consumes 62 watts at 1500MHz; at 2000MHz it's ~120W.

2. Indisputable fact: the Tesla P4 is a full GP104 config with slower GDDR5, clocked at ~1GHz boost. The power draw claimed by nvidia is 50-75W for the whole board.

So what exactly is your problem?

The 1060 sees a ~50% reduction in power from a 25% reduction in clock, going from 2000MHz to 1500MHz.
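A quick sanity check on those two numbers, under the usual assumption that dynamic power goes as frequency times voltage squared (the voltage ratio below is derived from that assumption, not measured):

```python
# Clock scaling alone cannot explain 120 W -> 62 W; the rest must come
# from voltage. Assuming P ~ f * V^2, back out the implied voltage ratio.
p_hi, p_lo = 120, 62      # watts at 2000 MHz and 1500 MHz
f_hi, f_lo = 2000, 1500   # MHz

print(f"clock scaling alone: {p_hi * f_lo / f_hi:.0f} W")  # 90 W
v_ratio = ((p_lo / p_hi) / (f_lo / f_hi)) ** 0.5
print(f"implied voltage ratio: {v_ratio:.2f}")             # ~0.83
```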



I also think it's awfully childish to try to rub in some of my guesses that didn't come true but are completely unrelated to the matter at hand. It's called a strawman.

Seriously? Come on.
 
Original statement:

There's no GP107 in notebook form that we know of, yet. We don't know if it can scale down to 35W while maintaining its original power/performance curve.

Counter-arguments:

1 - Personal attacks
2 - Flamebait
3 - Mentioning completely unrelated stuff like my opinion on the Nintendo NX SoC to try to discredit one's opinion
4 - Putting "likes" in each of the BFFs' posts
5 - But GP106 (different chip on a different process) consumes 60W at 1500MHz in one test made by one website
6 - But GP104 (different chip on a different process in probably what are very high binned products for selling at huge margins) consumes 75W at 800MHz according to the IHV.


And now there's this 75-post guy starting his arguments by calling me irritating and a hypocrite.
With obvious likes from the BFFs.


I just wrote we don't know if GP107 can scale down while maintaining its original power/performance curve.
And all hell broke loose.
Let that sink in.
 
What happened to the post before you posted that, the one I quoted you on? Is that just not important anymore for this conversation?

I don't get you, Tots; you start something, but you don't like it when people show you the contrary of what you stated.

I showed you the GM107 did get down there. Yet you keep on going with your imaginary thinking...

BTW, I did no such things to you, other than stating you should look up some of the things you are posting about, hinting that you are not correct. Maybe next time I will call you out as you are? Would that make you feel better? At least that way, when you accuse me of personally attacking you, it will actually be real.
 
Now we have come down to Pokemon memes, lol! At least it's entertaining :)

BTW, from the way AMD has been presenting TDPs this gen in their manuals, the TDP figure for the laptop GPU is for the GPU only. So yeah, the question arises: how much is it with the memory included?
 