AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
From the PC Perspective article linked earlier:

A 70/30% split is fine. In terms of performance per watt AMD was behind like 25% at most, and it's not like Pascal is anything to crow about other than compute features. As we can see from the Fury X to the Titan X, the performance was close-ish in terms of performance per PCB mm^2 as well. If this is anything to go on it just means that both AMD and Nvidia will be roughly equal in performance per both MM and Watt.

The "they're using more transistors but less clock speed!" angle is irrelevant, as FinFET has a huge cliff for clockspeeds vs power usage, so there's no clock battling. Nvidia is going to need to shrink its own clockspeeds by just as much. Meaning this upcoming generation may well be fought on the yields of the two differing nodes and on profit margins: who can put out the biggest card at what price, how many chips end up being functional, etc.
 
I didn't understand why you said "FinFETs have a less impressive curve past their sweet spot versus 28nm, as one of the graphs showed," when it appears to be the inverse to me: Fmax increases to its maximum at nearly 3x the power cost of the sweet spot on 28nm, versus about 2.5x on FinFET.

The relative Fmax at equivalent power shows a very large improvement for FinFET versus 28nm in the region at or below the first tick on the x axis.
At the start of the 28nm curve, the improvement from 28nm to FinFET is around 150% at half a unit.
At roughly 3/4 of a unit it's an 80% improvement, and at 1 unit it's a 50% improvement in Fmax at the same power.
In the range of 1.6 to 2 units along the x axis, the relative improvement provided by FinFET is in the 25-30% range.

The early part of the curve benefits significantly from FinFET's trend line beginning at a power level too low for 28nm and staying relatively steep until around .6-.75, where it starts to flatten out; 28nm lacks the initial steep slope at the lower range but does not flatten out as early.
FinFET is still a good improvement even in the upper power range, it's just returning to more regular process scaling compared to a low end that is better than the theoretical scaling offered in the good old days, and then for a time offering better scaling than has been enjoyed for several nodes.
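To make the curve-reading above easier to follow, here's a toy model of the two Fmax-vs-power curves. The shapes (FinFET starting lower and steeper, then flattening earlier; 28nm starting later but flattening later) follow the description above, but the coefficients and the exponential form are invented for illustration, not taken from AMD's graph:

```python
# Toy Fmax-vs-power curves mimicking the shapes described above.
# FinFET: starts at very low power, climbs steeply, saturates early.
# 28nm: has a minimum power floor, gentler rise, flattens later.
# All coefficients are made up for illustration.
import math

def fmax_finfet(power):
    # Steep early rise, early saturation.
    return 6.0 * (1 - math.exp(-2.5 * power))

def fmax_28nm(power):
    # Nothing below a minimum power floor, then a gentler rise.
    return 0.0 if power < 0.4 else 5.0 * (1 - math.exp(-1.2 * (power - 0.4)))

for p in (0.5, 0.75, 1.0, 1.8):
    f_new, f_old = fmax_finfet(p), fmax_28nm(p)
    gain = (f_new / f_old - 1) * 100 if f_old else float("inf")
    print(f"P={p:.2f}: FinFET={f_new:.2f}, 28nm={f_old:.2f}, gain={gain:+.0f}%")
```

The printed gains won't match the 150/80/50/25-30% readings exactly, but the qualitative trend is the point: a very large FinFET advantage at low power that shrinks steadily as you move up the power axis.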

AMD chose to end the FinFET curve where it did for some reason, which might be something to look out for later.
For CPUs that can push things further, the margin of improvement does shrink somewhat as the curves flatten out. At some point, FinFET can start to worsen in terms of Fmax/W relative to the alternative, although designs would likely stop before getting to that point, and there are a host of other variables that can make it counterproductive to reach it.

GTX 950 has a base clock of 1024MHz and boosts to 1188MHz. AMD appears to be spending huge amounts of transistors to get "equivalent performance" at 850MHz. The NVidia chip has a 20% clock advantage, so AMD must be using more transistors for "par performance".
Somewhere around .5-.7, there's a point where AMD could spend 2x the transistors at 2/3 the overall Fmax, and given where 28nm reaches those speeds it might be a tie or a small win for FinFET in absolute power. That's about 4 units on the Y axis out of maybe a reasonable max of 6 for 28nm, when I think there is evidence from other GPUs that AMD has had trouble sustaining or getting much benefit from that portion.
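As a sanity check on that trade-off, naive arithmetic, assuming performance scales linearly with both unit count and clock (which real GPUs only approximate):

```python
# Naive throughput model: perf ~ units * clock. Assumes perfect scaling,
# which real GPUs only approximate (bandwidth, occupancy, etc. interfere).
def relative_perf(unit_ratio, clock_ratio):
    return unit_ratio * clock_ratio

# Spending 2x the transistors (units) at 2/3 the Fmax still nets a gain:
print(relative_perf(2.0, 2.0 / 3.0))  # ~1.33x the throughput
```

Under that (optimistic) assumption, 2x the transistors at 2/3 the clock is still about a third more throughput, which is why the trade can make sense if the power comes out roughly even.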

AMD might be taking advantage of the frame cap, if it can drop the GPU into the portion of the FinFET curve below 28nm's minimum, which might be lost if the frame rate were allowed to vary more.
 
I didn't understand why you said "FinFETs have a less impressive curve past their sweet spot versus 28nm, as one of the graphs showed," when it appears to be the inverse to me: Fmax increases to its maximum at nearly 3x the power cost of the sweet spot on 28nm, versus about 2.5x on FinFET.

I put the sweet spots at 1.5 and 1 units along Relative Power for 28nm and FinFET, respectively.

---

The 70/30 split and the 25-30% gain from process are both statements referenced to actual GPUs. The graph can be derived from an actual chip, since it's up and running.

GTX 950 has a base clock of 1024MHz and boosts to 1188MHz. AMD appears to be spending huge amounts of transistors to get "equivalent performance" at 850MHz. The NVidia chip has a 20% clock advantage, so AMD must be using more transistors for "par performance". GM108 has a base of 1029MHz, if we want to talk about dies of roughly the same size (NVidia doesn't need to crash to a lower clock for performance per watt at the low end).

This appears to be exactly what we see when comparing AMD and NVidia at 28nm.

Put another way: on 28nm NVidia has a graph that looks something like the yellow line (sweet spot is at a higher frequency and at less power) and AMD is on something like the dotted-blue line. It appears that we'll see the same situation repeat on FinFET.

Err, what? The Fmax curve just shows that frequency gains flatten out much faster against power draw, so I'd say he's right: clock speeds flatten fast, and overclocking/clock battles aren't going to be prevalent for either IHV, as you're just pushing exponentially more power for little gain.

Your math may also be quite a bit off on the 950 clockspeed advantage. Now, if 850MHz is the "base clock", then indeed, base clock to base clock, the 950 has a 21% clockspeed advantage. But if we're going by fastest clock speeds, that's a 40% clockspeed advantage for the 950, using AMD's card as the base. We also haven't a clue as to how many transistors are actually in the AMD card, as the only report so far is that the "card looks really small", and even if you could measure it with a tape measure, that wouldn't tell you the average transistor density used for this particular GPU. "Double the transistor density" is only a guideline that applies if you go for straight density with no regard to clockspeed/power draw.
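For what it's worth, the two clock ratios being argued over are easy to check, using the published GTX 950 clocks and the 850MHz figure cited above:

```python
# GTX 950 published clocks vs the 850 MHz cited for the AMD demo chip.
amd_clock = 850
gtx950_base, gtx950_boost = 1024, 1188

base_advantage = gtx950_base / amd_clock - 1    # ~20.5% base-to-base
boost_advantage = gtx950_boost / amd_clock - 1  # ~39.8% boost vs 850

print(f"base:  {base_advantage:.1%}")   # 20.5%
print(f"boost: {boost_advantage:.1%}")  # 39.8%
```

So both numbers are right; it just depends on whether you compare base-to-base or the 950's boost clock against the 850MHz figure.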

In truth, the presentation told us relatively little, though a few inferences can be made. That AMD improved the architecture (at least in gaming terms) by some solid but modest amount while Nvidia has not this year (at least in gaming terms) will help AMD. AMD wants to target the highly portable laptop market, and has a chip that can fit there (perhaps at a stretch) and easily best Nvidia's old chip. What Nvidia's new chips will do on the same or roughly the same nodes we have no reference for, so it's a bit irrelevant at the moment.
 
... performance per PCB mm^2 as well. ...
Well, there's a new metric!

The "they're using more transistors but less clock speed!" angle is irrelevant, as FinFET has a huge cliff for clockspeeds vs power usage, so there's no clock battling. Nvidia is going to need to shrink its own clockspeeds by just as much.
This makes no sense. Are you claiming that upcoming GPUs will have lower clock speeds than those of today?
 
Well, there's a new metric!


This makes no sense. Are you claiming that upcoming GPUs will have lower clock speeds than those of today?

Huh? Both Fury X and Titan X are roughly the same die size, which is the max die size for 28nm, so it seems a perfectly good comparison as far as that goes, since that's the limiting factor. And no, I'm just implying that with a sharper exponential "hockey stick" curve of frequency to power usage, the resulting difference in frequencies between Nvidia and AMD GPUs (which wasn't huge last time as it was) will shrink even more.
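The "hockey stick" follows from the textbook CMOS dynamic-power relation P ≈ αCV²f: near a process's frequency wall, voltage has to rise roughly in step with frequency, so power grows closer to the cube of clock speed. A rough sketch (the cubic exponent is a common rule of thumb, not a measured figure for either node):

```python
# Rule of thumb: near the frequency wall, V must scale roughly linearly
# with f, so dynamic power P ~ C * V^2 * f grows roughly as f^3.
def relative_power(clock_ratio, exponent=3.0):
    """Power multiplier for a given clock multiplier."""
    return clock_ratio ** exponent

# A 10% overclock costs ~33% more power; +30% clocks more than doubles it.
for oc in (1.10, 1.20, 1.30):
    print(f"+{(oc - 1):.0%} clock -> {relative_power(oc) - 1:+.0%} power")
```

Which is why, if both vendors are parked near the wall, neither can buy much frequency separation without blowing the power budget.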
 
Err, do those even use discrete GPUs? I don't remember any of those types of notebooks using discrete GPUs, at least not from Dell or HP or Apple.
Asus UX303 and Lenovo Yoga 3.

Why would they want to put a discrete GPU into those types of notebooks? There is no use for them there.
For the same reason every other laptop with a discrete GPU has one: performance is still better than mid-range integrated solutions, and it's less expensive than the premium that Intel charges for Iris Pro models.

And I think you are talking about ultra portable notebooks, not thin notebooks.
No, these are the thin-and-light laptops.
In dimensions and weight, your Razer Blade is close to a 15" MacBook Pro, Asus X5 series or Dell Inspiron/Precision series.

And there's a couple of other things that your laptop with the 970M has that can't compare to actual thin-and-light models:
When the fans are running though, the laptop is very loud. We measured 55.0 dBA at 1 inch from the system after one hour of gaming. It is very loud, and very noticeable. In my opinion, any gaming on the Razer Blade would necessitate headphones unless the sound of fan noise does not bother you. There is a lot of heat generated inside the small chassis, and the fans have to expel that.


So no, Razer didn't magically place a 75W GPU + 47W CPU inside a chassis that is typically made for 40W CPUs alone, or 15W CPUs + 30W GPU combos.
 
A 70/30% split is fine. In terms of performance per watt AMD was behind like 25% at most, and it's not like Pascal is anything to crow about other than compute features. As we can see from the Fury X to the Titan X, the performance was close-ish in terms of performance per PCB mm^2 as well. If this is anything to go on it just means that both AMD and Nvidia will be roughly equal in performance per both MM and Watt.
One large variable here, though, is that Maxwell relied upon compression to offset its narrower memory interface in comparison to AMD products, so at higher resolutions it seems this plays a part in its performance. How much, who knows, but I guess some extrapolation may be possible once Pascal is released (though not all NVIDIA products will use HBM2 or optimal GDDR5X).
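For a rough sense of the compression point: Maxwell's delta color compression lets a narrow bus punch above its raw numbers. The raw bandwidth figures below are the published GTX 980 and Fury X specs, but the 1.25x average compression factor is an assumed, workload-dependent value, not a measured one:

```python
# Illustrative only: effective bandwidth with lossless delta compression.
# Raw figures are the published GTX 980 (256-bit GDDR5 @ 7 Gbps) and
# Fury X (4096-bit HBM) numbers; the 1.25x compression factor is an
# assumed average, not a measured one, and varies per workload.
def effective_bandwidth(raw_gb_s, compression_factor):
    return raw_gb_s * compression_factor

gtx980_raw = 224.0  # GB/s raw
fury_x_raw = 512.0  # GB/s raw, no compression assumed in this sketch

print(effective_bandwidth(gtx980_raw, 1.25))  # 280.0 GB/s effective
print(fury_x_raw)  # HBM largely brute-forces it with raw bandwidth
```

Even with a generous compression factor, the gap to raw HBM bandwidth remains large, which is consistent with the high-resolution behavior noted above.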

Cheers
 

Just to add to the discussion.
Is this not an area for AMD with some kind of Zen APU/SoC?
Although I appreciate that may be late-ish 2016.
Cheers
 

So now you mention two other companies that you didn't bother mentioning before?

I'm not going to argue with you about this because you don't seem to understand what an ultra compact or ultrabook, thin and light, are and you are adding things that you didn't state before. Those are the three different classes of notebooks right now. The fourth one is the normal large size used for gaming notebooks.

Here is a list of ultraportables. Want to go through the list and point out what you said? I can pull up many of these lists, and yes, all the laptops you are talking about are listed as ultraportable, not thin and light.

http://www.digitaltrends.com/best-ultraportable-laptops/

And no, they will not be using discrete GPUs, since those cost more and cut into the margins of these products, and margins aren't that high for most of these.

Apple is probably the only one that would put them into theirs, since they have the margins to do it, but then again it will cut into theirs too unless they increase their price; to keep the same margin they have now, they'd have to increase it quite a bit.

The Surface is a different story altogether; MS is going after Apple's iPad and wants to replace all these crap Chromebooks, but at a cost.

If you just plaster it as a general statement, can they do it? Yes, they can. But will they do it? No, they won't.

And here are the specs of the Yoga 3, btw:

http://shop.lenovo.com/us/en/laptops/lenovo/yoga-laptop-series/yoga-3-pro-laptop/#features

Where do you see discrete graphics on that?

And the Asus is an ultrabook.
 
I'm not sure how the conversation led to a debate about the differences between thin & light notebooks and ultraportables,

When Razor1 tried to invalidate AMD's slide by arguing that a 75W discrete GPU fits into a thin-and-light laptop because his laptop has one. Even though said laptop weighs over 2kg and its fans make 55dBA of noise (more than my desktop with two 290X cards) when gaming, hardly fitting the description in AMD's slide.


Just to add to the discussion.
Is this not an area for AMD with some kind of Zen APU/SoC?
Although I appreciate that may be late-ish 2016.
Cheers

Yes, this performance bracket should be an area for an APU. Unfortunately, we won't see SoCs/APUs with Zen until 2017, and Carrizo simply isn't competitive with Skylake offerings.
I'm hoping for AMD to release an APU with Zen, HBM and 32+ CU GCN in 2017. Then I'll reform my laptop :)


So now you mention two other companies that you didn't bother mentioning before?
It's just that you didn't bother reading the previous post:
No one would ever mistake your Razer Blade for a Dell XPS 13 or an Asus UX305.

I'm not going to argue with you about this because you don't seem to understand what an ultra compact or ultrabook, thin and light,
Yes, it's totally my lack of understanding and not the delusions of grandeur you have about your laptop and where your 75W graphics card could fit.

are and you are adding things that you didn't state before. Those are the three different classes of notebooks right now. The fourth one is the normal large size used for gaming notebooks.
That's odd because in that case I named families of laptops and not specific models.
Regardless, if by "the fourth one" you mean the Dell Precision, then the Precision 5000 is 300g lighter and has 66% of the volume of your thin and light Razer Blade.

And no they will not be using discrete GPU's since they cost more, it cuts into the margins of these products, which margins aren't that high for most of these.
Were you even remotely aware of the price premium that Intel used to charge for models with Iris and Iris Pro, you'd know it was cheaper to get a low-end discrete GPU and an Intel model with a GT2, at least for Haswell. That's why laptop OEMs opted for Intel GT2 + GK107/GK208, and later GT2 + GM108.

And here are the specs of the yoga 3 btw
http://shop.lenovo.com/us/en/laptops/lenovo/yoga-laptop-series/yoga-3-pro-laptop/#features
where do you see discrete graphics on that?

Wow such google skills :rolleyes:
That's Yoga 3 Pro. The regular 14" Yoga 3 has the option for a discrete 940M.
 
.......
Yes, this performance bracket should be an area for an APU. Unfortunately, we won't see SoCs/APUs with Zen until 2017, and Carrizo simply isn't competitive with Skylake offerings.
I'm hoping for AMD to release an APU with Zen, HBM and 32+ CU GCN in 2017. Then I'll reform my laptop :)
.....
Ah man that sucks with it being so late :(
Shaking my head, because even coming late that year, it would have been a great revenue and platform base for AMD; it will be trickier against Intel in 2017, and I really want AMD to strongly succeed again in this market.
This would have been one of the easier wins IMO if it could have happened this year.
Timing seems to be the bane of AMD in the last few years.
I thought it would be easier for AMD to get these lower powered APU/SoCs out rather than the more powerful Zens for desktops and larger laptops.
Cheers
 
Wow such google skills :rolleyes:
That's Yoga 3 Pro. The regular 14" Yoga 3 has the option for a discrete 940M.

Dude, be more clear. You were saying 13 inch before, and now you're showing a 14 inch. Come on, am I supposed to just pick everything from under the sun to understand what you are saying?

Yeah, I didn't really google it; I did all that from memory and went straight to a 13 inch Yoga 3 Pro, because that is what I thought you were talking about! I can't see how you are so oblivious. Yes, I am saying it again, because you don't seem to want to be concise enough in your posts to make a point. Don't assume that everyone knows exactly what you are thinking, because that will never happen.

They are specifically talking about thin-and-light notebooks, not ultracompacts, which is what you were talking about, and now you have added ultrabooks into the mix; that is just all over the map. Stop that.
 
I thought it would be easier for AMD to get these lower powered APU/SoCs out rather than the more powerful Zens for desktops and larger laptops.
I also wonder why Zen wasn't an APU/SoC in its first iteration, since all mainstream offerings from Intel have embedded GPUs nowadays. It makes even less sense given AMD's investment in HSA for everything, yet its latest CPU is now presented without any GPU Compute Units.
My guess is Zen is being developed for server first, domestic user later. Like Bulldozer before it (which isn't a good sign at all).


Dude, be more clear. You were saying 13 inch before, and now you're showing a 14 inch. Come on, am I supposed to just pick everything from under the sun to understand what you are saying?
Yes, you are clearly confused and unable to follow through with the presented arguments. It's best that we stop here.
 
https://en.wikipedia.org/wiki/Subnotebook#2007.E2.80.93present

subnotebooks often use 18W TDP processors
At Computex 2011, Intel announced a new class of subnotebooks called Ultrabooks. The term is used to describe a highly portable laptop that has strict limits on size, weight and battery life and has tablet-like features such as instant-on functionality.


Does that fit your "let's use a discrete GPU too"? No, it doesn't.

https://en.wikipedia.org/wiki/Laptop#Classification

read it and learn.
 
:rolleyes: The slide says thin-and-light, not Ultrabook.
Alas, there really isn't much more to explain. Feel free to revisit the earlier posts if you want to understand why there aren't thin-and-light laptops with 75W graphics cards.


As for the topic at hand, from PCPer:

It is likely that this is the first Polaris GPU being brought up (after only 2 months I’m told) and could represent the best improvement in efficiency that we will see. I’ll be curious to see how flagship GPUs from AMD compare under the same conditions.

So.. March for Polaris Mini after all? Maybe we'll see lots of laptops with this GPU during Computex, ready for market?
 
Then why were you talking about ultraportables and ultrabooks? LOL

I even pointed that out to you in my second reply to yours: you were talking about ultraportables, and you still deny that?

Thin and light is a class of notebooks: desktop replacements that are thin and light. That's all they are, and they are usually gaming notebooks.
 
I also wonder why Zen wasn't an APU/SoC in its first iteration, since all mainstream offerings from Intel have embedded GPUs nowadays. It makes even less sense given AMD's investment in HSA for everything, yet its latest CPU is now presented without any GPU Compute Units.
My guess is Zen is being developed for server first, domestic user later. Like Bulldozer before it (which isn't a good sign at all).

It's not without some precedent, since it was only with the later Bulldozer descendants that there was something close to a syncing up of the CPU and GPU generations. This was after AMD gave up on deploying its new CPU cores anywhere but in an APU.

Zen seems to have been on a forced march to completion without entangling a new core and interconnect with GPU complications, and an APU is not going to give AMD silicon that can go towards validating Zen for non-consumer markets.
Size-wise, AMD's APUs are big compared to their price-equivalent Intel competitors, so a Zen APU starting earlier is going to run the risks related to its size on a process with uncertain price advantages and volume early on.
There's also the bang-for-buck argument that getting an APU out in the rather depressed price/mm2 client environment is not going to change AMD's fortunes as much as returning to a lucrative market where it is basically impossible for AMD to go anywhere but up. This is particularly true if a Zen APU is still weaker in CPU and stronger in GPU, as things (mostly) stand currently, which has helped AMD little. I am not sure Zen will be enough to do more than reduce AMD's performance shortfall to something more palatable, but little information exists right now.

To go with Zen's development seeming to be a more isolated project, the consolidation of the GPU side in AMD seems to point to the other side no longer striving as hard for such synergies, either.
 
As for the topic at hand, from PCPer:

So.. March for Polaris Mini after all? Maybe we'll see lots of laptops with this GPU during Computex, ready for market?

I think this might mean that AMD only got silicon back 2 months ago. I've read somewhere that Polaris would show up in mid-2016.
 