The AMD Execution Thread [2007 - 2017]

Err, no it's not. To disregard nV as a rather serious potential contender in that marketplace is nonsense, but to think of them as some kind of entrenched or even desirable option there is folly. NV has a pretty long trek ahead of it before it can claim to compete with the heavy hitters there, irrespective of what Jen-Hsun tells you about only them and Qualcomm existing.

Also, if the GPU is dead, NV is pretty damn doomed at this point, because Tegra + Tesla put together don't bring in as much as consumer does for them, and Tegra sure as hell can't serve as a petri dish for Tesla so as to make that look nicer by shouldering part of the R&D burden. Luckily for NV, the GPU is doing pretty fine currently, and will continue to be fine for quite a while, even if the POS bottom end gets phagocytized by CPU+IGP pairings. Consumer is pretty damn important; let's not pick up internet memes about its death too soon (not happening soon, BTW).

I was talking about brand power specifically with consumers. It is my belief that in a couple of years' time consumers will walk into Best Buy looking for a tablet and ask one of the employees "which one of these has Nvidia Tegra", not "which one of these has Qualcomm QSD875392". That is the brand power Nvidia are building with Tegra, and that is what will push them ahead of the competition and into the same league as Qualcomm.

The proclamations of death may seem early, but I was talking about 5-7 years from now, not next year. With Windows 8 being very tablet friendly and the next gen of consoles getting proper 1080p, I think the role of PC gaming (and therefore discrete GPUs) will be more limited than it is currently. We saw at the start of this console generation that PC gaming was losing steam, and though it has got back on its feet towards the end of the generation, the new consoles will put it back down into the doldrums again.

Just remember, the Android/Win8 tablet market may start out as a bunch of geeks who know the difference between OMAP4470, MSM8960 and Tegra 3/Kal-El, but the consumers of the future won't care. They will see the "Tegra Inside" sticker or whatever equivalent Nvidia come up with and just buy it based on that and the screen size. Whatever you may think about Nvidia and JHH, they never fail at marketing their products well.
 
How many consumers know what SoC is in their phone/tablet outside of these forums? I'd say you're seriously overvaluing the brand power of a chip.
 
I wonder whether that's true going forward with Restrictive Design Rules and especially 1D grid layouts. I'm not competent enough to judge, but I'd certainly expect the advantage to go down quite a bit. I feel like a fool for never asking Icera about their design methodology on 28nm... Then again, ironically, GF's 28nm process has more flexible design rules than TSMC's. I'd be curious to know how much custom Intel is using nowadays versus 10 years ago.

Intel has had RDR since (65nm?), and even with all the restrictions, I think it is a fair bet that experienced designers can do more than automated synthesis.
 
How many consumers know what SoC is in their phone/tablet outside of these forums? I'd say you're seriously overvaluing the brand power of a chip.

In the same way that Pentium became ubiquitous in the '90s for desktop computers and the Centrino platform for notebooks in the early '00s, I think Tegra will do the same in the tablet world, especially concerning Android, where the market is wide open and no single player has got a stranglehold like Intel has in x86.

I remember when my dad bought his first powerful computer; he wouldn't get anything other than a Pentium.

Really it's an extension of TWIMTBP, but aimed at a market which has never encountered such zealotry in marketing; like you said, who, other than geeks, knows about SoCs? That is what Tegra is about: melding a decent SoC with massively superior marketing aimed at consumers. Get consumers to ask for Tegra and the OEMs will have to provide. It's what Nvidia are banking on, and I think they will be successful because the rest of the market is wholly unprepared for it. The competition has never run direct marketing campaigns or funded co-marketing with publishers or OEMs. Nvidia have so much experience in that field to leverage, and it will push the Tegra branding into the mainstream even further than GeForce.
 
Intel has advertised extremely effectively over the years, in addition to building a continuous supply of world leading products. That's why laymen know who they are and trust them.

NVIDIA has a long way to go before they are anything like that. Their GeForce name doesn't mean much at all outside of gaming land. Tegra 1 didn't really go anywhere, so NV has only had one successful mobile product so far. It will be interesting to see how they hold up against the competition that is coming.
 
Tegra 2 scored mainly because it was the reference platform for Android.

Tegra 2+ will tell us just how successful they have been.
 
Non-enthusiast consumers don't know any chip companies outside of Intel and I doubt that will change as it costs a lot to market to these consumers.
 
In the same way that Pentium became ubiquitous in the '90s for desktop computers and the Centrino platform for notebooks in the early '00s, I think Tegra will do the same in the tablet world, especially concerning Android, where the market is wide open and no single player has got a stranglehold like Intel has in x86.

I remember when my dad bought his first powerful computer; he wouldn't get anything other than a Pentium.

Really it's an extension of TWIMTBP, but aimed at a market which has never encountered such zealotry in marketing; like you said, who, other than geeks, knows about SoCs? That is what Tegra is about: melding a decent SoC with massively superior marketing aimed at consumers. Get consumers to ask for Tegra and the OEMs will have to provide. It's what Nvidia are banking on, and I think they will be successful because the rest of the market is wholly unprepared for it. The competition has never run direct marketing campaigns or funded co-marketing with publishers or OEMs. Nvidia have so much experience in that field to leverage, and it will push the Tegra branding into the mainstream even further than GeForce.

If Nvidia is banking on brand power with average consumers, they are screwed, or they need to up their advertising by about 1,000,000%.
 
rpg.314 said:
I think it is a fair bet that experienced designers can do more than automated synthesis.
For custom design, yes, but the cost is huge. For standard cells, no: automated synthesis has long surpassed the abilities of us mortals.
 
For custom design, yes, but the cost is huge. For standard cells, no: automated synthesis has long surpassed the abilities of us mortals.

IIRC, nv said somewhere that an SM for their GPUs was full custom. It might have changed since then, but it was definitely in the context of G80 or newer.

So if any company can afford custom design at all, I would imagine Intel can.

The real question is: what is the leverage offered by custom vs. synthesized design in an RDR regime, if one were to ignore cost for a first (zeroth?) order answer?
 
IIRC, nv said somewhere that an SM for their GPUs was full custom. It might have changed since then, but it was definitely in the context of G80 or newer.

I'm pretty sure that was... err... one of the typical public statements NV tends to give through its non-engineering, marketing-filtered tendrils. I'm quite sure their SMs were not that custom, going by people who went to great pains to look at the chips' entrails.
 
rpg.314 said:
IIRC, nv said somewhere that an SM for their GPUs was full custom. It might have changed since then, but it was definitely in the context of G80 or newer.
If it were *full* custom, they'd be spectacularly bad at it. But it's unlikely it was, because they need to target different nodes and because there are many different versions (increasing CUDA features etc.)

So if any company can afford custom design at all, I would imagine Intel can.
Of course.

The real question is: what is the leverage offered by custom vs. synthesized design in an RDR regime, if one were to ignore cost for a first (zeroth?) order answer?
The difference would still be massive. I highly doubt that AMD doesn't use full custom anymore. But they probably have large parts synthesized too.
 
I'm pretty sure that was... err... one of the typical public statements NV tends to give through its non-engineering, marketing-filtered tendrils. I'm quite sure their SMs were not that custom, going by people who went to great pains to look at the chips' entrails.

IIRC, it was David Kirk's presentation. But my memory is hazy and PR is .....
 
If it were *full* custom, they'd be spectacularly bad at it. But it's unlikely it was, because they need to target different nodes and because there are many different versions (increasing CUDA features etc.)
Why would full custom imply a borked chip in nv's case?

Also, Intel regularly adds a feature or two with every tick/tock. They still do (almost full?) custom designs.

The difference would still be massive. I highly doubt that AMD doesn't use full custom anymore. But they probably have large parts synthesized too.
http://forum.beyond3d.com/showpost.php?p=1572954&postcount=748
 
rpg.314 said:
Why would full custom imply a borked chip in nv's case?
I didn't say borked.
But there's no point in doing full custom if you don't get incredible clock speeds or unbelievably low power, and G80 and later had neither. Its clock speed was not outrageously high for a carefully crafted standard-cell design with lots of pipeline stages.

Also, Intel regularly adds a feature or two with every tick/tock. They still do (almost full?) custom designs.
They didn't have half nodes; Nvidia did. The number of SM revisions is much higher than Intel's tick/tock cadence. They also have an order of magnitude fewer employees.
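
To illustrate the pipelining point with a toy model (all the delays below are made-up assumptions, not anything measured from G80): if the flop and clocking overhead per stage is fixed, splitting the same logic across more stages buys clock speed without any full-custom heroics.

# Toy model: cycle time = (total combinational delay / stages) + fixed
# flop/clock overhead per stage. All numbers are illustrative assumptions.

def f_max_ghz(total_logic_ns, n_stages, overhead_ns=0.08):
    stage_delay_ns = total_logic_ns / n_stages + overhead_ns
    return 1.0 / stage_delay_ns

for n in (5, 10, 20):
    print(f"{n:2d} stages -> ~{f_max_ghz(3.0, n):.2f} GHz")
# prints roughly 1.47, 2.63 and 4.35 GHz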
 
CEO and a spot on the BOD.
It looks like he didn't feel like being some new guy's subordinate.

edit: And president. That's 3 things AMD wasn't giving him.
 
Semiaccurate is claiming that Llano's woes are more from the CPU side than the GPU, and are especially problematic for the A8 level.

That could be the case. Looking at the A8-A4 lineup, the A8 and A6 100W and 65W power bands are determined by CPU clock speed, and by modest differences at that.
The GPU can be partly disabled and clocked lower, and all it takes is 100-200 MHz on the CPU to jam power up 50%.

The question is why, though.
Llano could be suffering variability problems, since power varies so greatly.
It is possible that the GPU is less affected because its clocks are lower and it may be able to get away with using slower and more variation-tolerant transistors.
The CPU needs higher performance, and its base design may not have included some of the new power-saving and variability-resistant circuits discussed for BD.

The (not yet substantiated) claim that Trinity is not affected as badly as Llano could mean that the fix is to have an architecture that was designed for challenges at these nodes.
On the other hand, it could be that the more heavily automated and synthesized route taken for BD works better because it fits better with the design rules for the GPU, and that Llano's CPU could possibly be better if the process weren't compromising between the two realms.
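
As a rough sanity check on the 100-200 MHz point, here is a back-of-envelope sketch assuming dynamic power scales roughly as f*V^2 and a purely hypothetical ~0.1 V bump to hold the higher clock; the numbers are illustrative, not measured Llano figures.

# Back-of-envelope only: dynamic power scales roughly as f * V^2.
# The clocks and voltages below are illustrative assumptions, not Llano data.

def dynamic_power_ratio(f_base, v_base, f_bin, v_bin):
    """Ratio of dynamic CPU power in the higher bin vs. the base bin."""
    return (f_bin / f_base) * (v_bin / v_base) ** 2

# Hypothetical 2.7 GHz @ 1.20 V part vs. a bin pushed 200 MHz higher
# that needs an extra ~0.10 V to stay stable at that clock.
ratio = dynamic_power_ratio(2.7, 1.20, 2.9, 1.30)
print(f"~{(ratio - 1) * 100:.0f}% more dynamic power")  # roughly +26%

# Leakage grows faster than quadratically with voltage, and a leaky
# (high-variability) die needs extra voltage on top of that, so the
# total increase can plausibly approach the ~50% range.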
 
Semiaccurate is claiming that Llano's woes are more from the CPU side than the GPU, and are especially problematic for the A8 level.

That could be the case. Looking at the A8-A4 lineup, the A8 and A6 100W and 65W power bands are determined by CPU clock speed, and by modest differences at that.
The GPU can be partly disabled and clocked lower, and all it takes is 100-200 MHz on the CPU to jam power up 50%.

The question is why, though.
Llano could be suffering variability problems, since power varies so greatly.
It is possible that the GPU is less affected because its clocks are lower and it may be able to get away with using slower and more variation-tolerant transistors.
The CPU needs higher performance, and its base design may not have included some of the new power-saving and variability-resistant circuits discussed for BD.

The (not yet substantiated) claim that Trinity is not affected as badly as Llano could mean that the fix is to have an architecture that was designed for challenges at these nodes.
On the other hand, it could be that the more heavily automated and synthesized route taken for BD works better because it fits better with the design rules for the GPU, and that Llano's CPU could possibly be better if the process weren't compromising between the two realms.

Seems like Zacate's smooth roll-out is some evidence in favor of ease of execution with this synthesized approach.
 