The AMD Execution Thread [2007 - 2017]

While I agree it was a bad move, you did notice that was in 2009, right?

I'm not sure things would have been any different if AMD had kept Imageon. Qualcomm would probably have continued to license Imageon graphics while the rest remained a mix of PowerVR, Mali, and nVidia with Tegra, and AMD would have earned in the range of tens of millions per year from licensing.
 
I'm not sure things would have been any different if AMD had kept Imageon. Qualcomm would probably have continued to license Imageon graphics while the rest remained a mix of PowerVR, Mali, and nVidia with Tegra, and AMD would have earned in the range of tens of millions per year from licensing.
There are two things you're overlooking. AMD would have continued to develop a low-power core and would have tried to license it to others. Second, the sale created a competitor that drains AMD of talent.
 
It's true that this sale occurred during very dark economic times in general, but it was still an extremely myopic move, and $65 million is practically nothing compared to the opportunity cost they incurred, the revenues they missed, and the royalties they could have received. Dirk Meyer's quote then about focusing on their "core strengths" probably meant spending money on the development and release of BD. They probably should have saved money by scuttling the release of the second gen BD and Piledriver (in addition to the first gen that never saw the light of day) and going with revised STARs cores until Steamroller was ready.

Had they not initially over-architected fusion in their cancelled Swift by going for BD-cores and used STARs from the get-go, they could have delivered Llano on time and with enough volume to land that Apple contract we heard about. In fact, that write-down for $100 million worth of unsold Llanos was probably a stockpile made in hopes of landing an Apple contract that wasn't quite big enough to satisfy the demand Apple foresaw. Had things gone slightly differently, we would have a very different AMD today, but hindsight is 20/20.
 
They probably should have saved money by scuttling the release of the second gen BD and Piledriver (in addition to the first gen that never saw the light of day) and going with revised STARs cores until Steamroller was ready.
There was a revised STARs core at 32nm. It was called Llano, the chip responsible for around $100 million in write-offs.
Just because Bulldozer wasn't that great doesn't mean the alternatives were necessarily better. The previous core design had been dragged several nodes past its freshness date already.

Had they not initially over-architected fusion in their cancelled Swift by going for BD-cores and used STARs from the get-go, they could have delivered Llano on time and with enough volume to land that Apple contract we heard about.
Which articles claimed Swift would have BD cores?
One argument against Apple switching to Llano, aside from not believing AMD could handle the volume, is that Llano's ISA support would have been a regression.

In fact, that write-down for $100 million worth of unsold Llanos was probably a stockpile made in hopes of landing an Apple contract that wasn't quite big enough to satisfy the demand Apple foresaw. Had things gone slightly differently, we would have a very different AMD today, but hindsight is 20/20.
AMD has a very long history of undershooting or wildly overshooting demand, so that screw-up would not be out of character for AMD even without any big Apple deal.
 
I'm well aware that there was a revised STARS core in Llano, but this was the backup plan for an initial fusion based on BD:

http://arstechnica.com/gadgets/2008/01/amd-ditches-dozer-taps-phenom-for-cpugpu-fusion-edit/

I'm commenting on the costly development of unreleased processors, which likely far exceeded the piddling $65 million added by the sale of Imageon to Qualcomm. With that sale they sold off, at a bargain price, a stake in a major new paradigm of digital devices. For their core businesses, choosing more conservative design iterations and spending more on execution and operations would have been wiser than planning to go with BD everywhere from the get-go.

I think an Apple contract would have been a significant foot in the door for AMD, since MacBook motherboards would be based on AMD sockets and chipsets. This would likely have affected future Apple design and supply-chain decisions, and would have had some halo effect for AMD at the very least from being chosen by Apple.
 
Old "Analyst Day" slides reaffirm this:

http://www.bit-tech.net/news/hardware/2007/07/28/amd_goes_modular/1

In particular

http://images.bit-tech.net/news_images/2007/07/amd_goes_modular/amd-slide6-l.jpg

Although I shouldn't have called it "Swift." My bad.

Edit: Swift was in fact another cancelled Fusion part, an MCM based on STARS cores. Llano was a single-silicon-die iteration of Swift.

Edit2: Consider what would have happened if this had come to pass:

http://www.forbes.com/sites/brianca...nvidia-about-acquisition-before-grabbing-ati/

The company would certainly have been in better hands had Jen-Hsun taken the reins from Ruiz/Meyer.
 
I'm pretty sure that the Llano write-down is purely down to GF having awful yields on it to start with, then spectacular yields too late in the day. I think AMD might be using this as leverage in the new wafer agreement as well; if not, they ought to be.

Trinity is a winner btw, they are flying off the shelves for me. One can only hope AMD has this under control.
 
There are two things you're overlooking. AMD would have continued to develop a low-power core and would have tried to license it to others. Second, the sale created a competitor that drains AMD of talent.

Given the amount of money that nVidia has lost on its Consumer Products Division, I'm certain that AMD would have been bankrupted by the cost of producing a custom ARM core from scratch. The best Imageon application processor launched in 2008 with a 300 MHz ARM11 core; the previous year the original iPhone launched with a 412 MHz ARM11 core, and the following year Qualcomm's custom Scorpion-based Snapdragon line launched at 1 GHz. There's a reason that companies like TI, Marvell, and Freescale have left or avoided the mobile/tablet market: competition is intense, the top dogs (Apple and Samsung) have their own solutions, and then you have Intel and nVidia joining in.
 
Given the amount of money that nVidia has lost on its Consumer Products Division, I'm certain that AMD would have been bankrupted by the cost of producing a custom ARM core from scratch. The best Imageon application processor launched in 2008 with a 300 MHz ARM11 core; the previous year the original iPhone launched with a 412 MHz ARM11 core, and the following year Qualcomm's custom Scorpion-based Snapdragon line launched at 1 GHz. There's a reason that companies like TI, Marvell, and Freescale have left or avoided the mobile/tablet market: competition is intense, the top dogs (Apple and Samsung) have their own solutions, and then you have Intel and nVidia joining in.

It's a tough space, but a modified Bobcat-style, Geode, or K6 core running at 200 MHz with a minimal OS could have worked too; ARM isn't a requirement, and AMD had the assets in-house.
 
It's a tough space, but a modified Bobcat-style, Geode, or K6 core running at 200 MHz with a minimal OS could have worked too; ARM isn't a requirement, and AMD had the assets in-house.

I had an Acer W510 up until a month ago. It was super fast with Windows 8 on it. The C-50 was a dual-core Bobcat at 1 GHz with a Radeon 6250. On the same process I believe it's down from 9 W to 4.5 W. I bet on 28nm they could have integrated the I/O stuff and gotten close to 1 W or even less. It would have made for a killer CPU/GPU for the $300-$500 tablet range and would have killed Windows RT. But of course AMD can't really do anything right anymore.

This is based on what? All Intel has done is be 4-5 months late with Ivy Bridge, and even then it's basically a 10% speed bump. AMD has closed the gap with Piledriver, which, considering how much of a mess the company is in, is pretty bizarre.

AMD's problems are all of their own making, no need to look elsewhere.

Ivy Bridge really tackled the TDP problems and drastically increased Intel's GPU performance.
 
I had an Acer W510 up until a month ago. It was super fast with Windows 8 on it. The C-50 was a dual-core Bobcat at 1 GHz with a Radeon 6250. On the same process I believe it's down from 9 W to 4.5 W. I bet on 28nm they could have integrated the I/O stuff and gotten close to 1 W or even less. It would have made for a killer CPU/GPU for the $300-$500 tablet range and would have killed Windows RT. But of course AMD can't really do anything right anymore.
At peak performance, something like a Snapdragon/Exynos/A6 in 28nm should consume something like 2W. There's no way you could make something faster with under a Watt in 28nm.
 
At peak performance, something like a Snapdragon/Exynos/A6 in 28nm should consume something like 2W. There's no way you could make something faster with under a Watt in 28nm.

AMD wouldn't need something faster, just something. It'd be seizing a small slice of an expanding pie.
 
At peak performance, something like a Snapdragon/Exynos/A6 in 28nm should consume something like 2W. There's no way you could make something faster with under a Watt in 28nm.

They are at 4.5 watts on 40nm. I'd think 28nm would be a good drop in TDP.
 
Voltage hasn't been dropping with each process node like it used to. Going from 4.5 W to 1 W because of a process shrink is unrealistic.

1W is probably pushing it, yes.

That said, Temash is more than just a move to 28nm; it's an actual SoC, whereas Hondo is a two-chip solution. This level of integration should bring its own power savings.

AMD also claims more efficient clock-gating for the CPU cores:

[Image: Jaguar power gating slide]


Granted, that's a much much smaller contribution to savings, but the point is that it's not all about process.

Apparently, AMD targets 3.6W and up:

[Image: AMD slide showing TDP targets]
 
Voltage hasn't been dropping with each process node like it used to. Going from 4.5 W to 1 W because of a process shrink is unrealistic.
Dropping down to 3W would already be a nice achievement. I can't think of any case where power has gone down by a factor of more than 4 purely because of a process step. Not in the long gone era of almost perfect scaling, so definitely not now.
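
To put rough numbers on that, here is a back-of-envelope sketch of dynamic power (P ≈ C·V²·f) across a 40nm → 28nm shrink. The scaling factors are illustrative assumptions for the sake of the argument, not measured foundry data:

Code:
# Back-of-envelope dynamic-power scaling, P ~ C * V^2 * f.
# The 40nm -> 28nm factors below are illustrative assumptions, not foundry data.

def scaled_power(p_old_w, cap_scale, volt_scale, freq_scale):
    """Scale dynamic power by capacitance, voltage^2 and frequency factors."""
    return p_old_w * cap_scale * (volt_scale ** 2) * freq_scale

p_40nm = 4.5  # W, the figure quoted above for the 40nm part

# Optimistic assumptions: ~30% less switched capacitance,
# ~10% lower supply voltage, clock frequency unchanged.
p_28nm = scaled_power(p_40nm, cap_scale=0.7, volt_scale=0.9, freq_scale=1.0)

print(f"Estimated 28nm dynamic power: {p_28nm:.2f} W")  # ~2.55 W, nowhere near 1 W

Even with fairly generous assumptions you land around 2.5 W before leakage and I/O are counted, which is roughly in line with the 3W-plus figures mentioned above.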
 
Why aren't they using 28nm GloFo? That would have made up a lot of ground at least in terms of power efficiency vs Ivy Bridge.
 