The AMD Execution Thread [2007 - 2017]

1. AMD could possibly flounder.

2. The vast majority of the silicon sold in the world doesn't need the kind of process Intel uses.

1. That possibility would be something of a miracle, don't you think?

2. But Intel is so big for some reason. So wouldn't it be profitable, and very competitive, if at least one other company spent its resources on the fab stage? There are lots of fabs around, and really big ones, so I would expect at least one to be researching and trying to make its own way ahead of Intel.
 
1. That possibility would be something of a miracle, don't you think?
There are a lot of companies and institutions that want AMD around, either because bankruptcy would endanger the vast sums of money already sunk into AMD, or because they see AMD as a useful source of leverage over deals with Intel.

AMD will most likely be around in some form, and AMD going bankrupt would have to happen after AMD exhausts a number of other alternatives.

2. But Intel is so big for some reason. So wouldn't it be profitable, and very competitive, if at least one other company spent its resources on the fab stage? There are lots of fabs around, and really big ones, so I would expect at least one to be researching and trying to make its own way ahead of Intel.
There are a number of companies trying that.
The process agreement between IBM and AMD and others is an example.
They collaborate because fab tech is expensive, and Intel is one of the few companies capable of doing it alone.

A lot of the other big companies with fabs have processes tailored to their markets. What good is it to have a fab that would churn out decent x86 processors if they sell digital camera CCDs?
There is no money to be made on a fab that has no market to address.
 
Bankruptcy is not necessarily death either. Given their debt service costs right now, I could imagine them going that route and then emerging later on. But there's still quite a lot of ground to cover before even wiping away their debts would allow the company to become profitable. I think it's around $88M/quarter right now.
 
Does anyone have the figure for R600 or G80's development costs? I've done some digging and can't seem to find these figures. :?:

Edit: Found G80's development cost to be $400 million... I believe over a 4-year cycle.

Ok, here goes the wild speculation.....brace yourselves....

What if AMD had to "delay", or even worse, "stop" development of future flagship GPUs to allow the company to sustain itself until 2H 2008 or 2009, when their product roadmaps seem to allow for better positioning and a return to healthy revenue? With debts as high as they are and cash as low as it is, can AMD realistically sustain concurrent development of R700, R800, etc.? If R700 and R800 each cost ~$400 million to develop like G80 and take roughly 4 years to bring to market (total speculation here), then we can somewhat safely assume there's a $100 million/yr development cost for each GPU.
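For anyone who wants to sanity-check that arithmetic, here's a rough back-of-envelope sketch using only the figures floated in this thread (the $400M-over-4-years number and the ~$88M/quarter debt service are speculation/recollection, not confirmed AMD figures, and the "two concurrent flagship projects" count is purely my assumption):

```python
# Back-of-envelope sketch of the figures speculated in this thread.
# All inputs are guesses from the thread, not confirmed AMD numbers.

DEV_COST_PER_GPU = 400e6         # ~$400M per flagship project (G80-style guess)
DEV_YEARS = 4                    # ~4-year development cycle (speculative)
CONCURRENT_PROJECTS = 2          # e.g. R700 and R800 in flight at once (assumption)
DEBT_SERVICE_PER_QUARTER = 88e6  # ~$88M/quarter, as quoted earlier in the thread

annual_rd_per_gpu = DEV_COST_PER_GPU / DEV_YEARS
annual_rd_total = annual_rd_per_gpu * CONCURRENT_PROJECTS
annual_debt_service = DEBT_SERVICE_PER_QUARTER * 4

print(f"R&D per flagship GPU per year: ${annual_rd_per_gpu / 1e6:.0f}M")
print(f"R&D for {CONCURRENT_PROJECTS} concurrent flagships:  ${annual_rd_total / 1e6:.0f}M/yr")
print(f"Debt service:                  ${annual_debt_service / 1e6:.0f}M/yr")
```

On those guesses, concurrent flagship R&D runs to a bit more than half of what debt service alone costs per year, which is why the question seems worth asking.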

Any thoughts?

(I'm not trying to start a flame war, I'm being serious here)
 
Any thoughts?

(I'm not trying to start a flame war, I'm being serious here)

A lot of the work and tech in the flagship products is also what is used for the lower products; you would only save a small portion of that if you just didn't make flagship products. If you're suggesting taking a 4 year break from the graphics market, any company would find it very hard to come back from that. You'd lose your place in the market, your OEM deals, your reputation, and probably all your engineers too.

I doubt AMD would do this, as they want to provide complete platforms, and that includes high end graphics platforms too, as well as future developments like Fusion.
 
...If you're suggesting taking a 4 year break from the graphics market, any company would find it very hard to come back from that...

I'm only suggesting that AMD would "halt" development on R800 (and possibly R900), as those two GPUs (and possibly 1 or 2 more) are likely in development. With R700 likely close to taping out in the relatively near future, it wouldn't make sense to stop efforts there. However, if pausing efforts on other future GPUs can save a few hundred million dollars, it might be an option AMD is financially forced to take...
 
However, if pausing efforts on other future GPUs can save a few hundred million dollars, it might be an option AMD is financially forced to take...

...if they want to ensure their company has no chance of a future.

I know things are very sticky for AMD at the moment but halting all work in R&D (even just in the graphics sector) would be just suicidal IMO.
 
I'm only suggesting that AMD would "halt" development on R800 (and possibly R900), as those two GPUs (and possibly 1 or 2 more) are likely in development. With R700 likely close to taping out in the relatively near future, it wouldn't make sense to stop efforts there. However, if pausing efforts on other future GPUs can save a few hundred million dollars, it might be an option AMD is financially forced to take...

But you can't just stop development for future products. AMD probably have already put significant work into R800 and R900, even though they are a couple of years away. You not only lose all the things I mentioned in my last post, but you also cede all those things to your competitors. They end up with a massive warchest with which to bash you later on.

Companies spend massive amounts on R&D because they need to make money by regularly bringing out products, defending their market share and mindshare with the customer. You can't just take a holiday from that. It's a life cycle and landscape that you are either part of, or you are not - there's no popping in and out of it for a few years. It takes time to build that business, and the capabilities to participate in that market in the future. Take a few years out, and it will take you a decade to get back in, and you'll end up spending ten times what you would have anyway, without any products/market/mindshare.

If AMD don't have the money to spend on a significant part of their business and strategy, then they don't actually have the money to run their business or safeguard its future in a market that is all about the next big thing you can develop and sell.
 
I definitely understand what you guys are saying, but to continue the role of Devil's Advocate I'll offer the following observation...

With R600 being aimed largely at NVIDIA's upper-end mainstream/performance card and not at the top of the line flagship range (8800 GTX and 8800 Ultra), one could argue that AMD is already starting to head down this road...

If AMD opted to concede the "flagship" level GPU business and instead focus on high volume mainstream parts ($349 and below), they could dedicate their time to lower cost GPU development, which would be much more tolerant of yield issues, cheaper to produce, have shorter production cycles, and still turn in solid revenue thanks to huge volume...

The flagship GPU is a big gamble since it is low volume....extremely risky from a production standpoint, very costly, etc...

Stopping development on R800 and R900 would not be such a crazy departure for a semiconductor company. Sometimes, you have to cut your losses or do something to "stop the bleeding". It's done a fair amount in the semi industry...The design efforts would likely not be wasted as a great deal of what was learned/created could be carried over to a more mainstream-focused design.
 
I don't think they had much choice with the R600 on what price it was going to come out at. Also, if they want good midrange and low end competition, they still need R&D expenditure, so how is that different from what they are doing now ;) ? The only money they would be saving would be on getting the high end out.
 
I definitely understand what you guys are saying, but to continue the role of Devil's Advocate I'll offer the following observation...

With R600 being aimed largely at NVIDIA's upper-end mainstream/performance card and not at the top of the line flagship range (8800 GTX and 8800 Ultra), one could argue that AMD is already starting to head down this road...

That was mostly a process issue. If R600 was hitting 1 GHz as supposedly originally planned, it would be a different story. Instead it leaks power all over the place.

Let me put it this way. If AMD had suspended development of R600, and all the high end tech like the ring bus memory controller or the high shader power, where would that leave them with their midrange parts?

Look at car development. Without the high end expensive cars getting traction control, paddle gearboxes, antilock brakes, crumple zones, electric windows, etc, as early adopters, do you think that technology would still have trickled down to mainstream cars?

Without the high end driving development and trying to be next big thing, there is nothing for the average joe to have next year in your mainstream segment. If you're not pushing the boundaries on your best products, if all you make is middle of the road average products, you're always going to be chasing after your competitors as they break new ground. ATI was already in this situation when they came back into mainstream and high end graphics cards, and it took them years of products like the 8500 before they were anywhere near a serious competitor for Nvidia.

Stopping development on R800 and R900 would not be such a crazy departure for a semiconductor company. Sometimes, you have to cut your losses or do something to "stop the bleeding". It's done a fair amount in the semi industry...The design efforts would likely not be wasted as a great deal of what was learned/created could be carried over to a more mainstream-focused design.

You might stop development on a product in favour of another, more promising product, but you don't just abandon a market for a short while unless you really want to get out of it for good. Sure, they could save money if they stopped high end graphics development, but then where will the innovation come from for their products for next year, and the year after, and the following year? These big companies plan and develop their products years in advance, and the infrastructure to do that can't be turned on and off like a tap.

What you're suggesting is to slaughter all your cows today so you don't have to feed them for the next year, but where are your meat and milk going to come from then? And when all your cow farmers have gone to work for the opposition, and when everyone has to go and buy their milk and beef from your opposition, how are you going to get your herd back up and going again? How are you going to get your customers back? How are your competitors going to react against you with the big pile of money they've got from all the business you sent to them?

Any company in the computing field that isn't investing in the future simply doesn't have one.
 
What you're suggesting is to slaughter all your cows today so you don't have to feed them for the next year, but where are your meat and milk going to come from then? And when all your cow farmers have gone to work for the opposition, and when everyone has to go and buy their milk and beef from your opposition, how are you going to get your herd back up and going again? How are you going to get your customers back? How are your competitors going to react against you with the big pile of money they've got from all the business you sent to them?

Any company in the computing field that isn't investing in the future simply doesn't have one.

Ah, a perfect analogy to help me convey what I'm trying to say here. I totally agree with what you're saying, and I want to point out that I'm not saying AMD/ATI should throw away all the work done on R800/R900/etc.

Using your analogy, let's add one more element and say that AMD is hurting financially and food for the cows is expensive. I'm suggesting that for the sake of keeping the company financially alive, AMD takes 50% of their cows and milks the hell out of them before slaughtering them and putting that meat in the freezer. The remaining 50% of the cows get exceptional treatment and attention to ensure they grow as large and juicy as possible. If AMD can start making good money again off these "excellent" quality cows, they might be able to afford more food and be able to have the "original" 100% number of cows.

Putting the above into more useful context, I'm suggesting the following could happen:
  • Halt development on R800/R900 and beyond projects and re-apply efforts towards applying those concepts to more mainstream GPUs (Note: R&D continues, though it is strictly for mainstream GPUs)
  • Pull teams off of any "flagship" ($399+) GPU development and re-assign them to aid with mainstream and value GPU development (shorten development time, decrease issues, add more polish, etc.)
  • If AMD can get back into the black (or come closer at least by 2H 2008 or 1H 2009) they can contemplate re-entering the "flagship" GPU business. Getting back in is always an option on the table...but it might not make financial sense for them if the above-approach proves successful.
Remember, when compared with flagship GPUs, a mainstream GPU is (a rough margin-vs-volume sketch follows this list):
  • Cheaper to produce (fewer transistors, smaller die, PCB with fewer layers, etc.)
  • Less complex (easier to design - though far from easy!)
  • Less power (less heat - TDP targets easier to reach)
  • Lower margin but typically MUCH higher volume
  • etc.
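To make the margin-vs-volume trade-off concrete, here's a toy comparison. Every unit count and per-unit margin below is invented purely for illustration; none of them are actual AMD or NVIDIA figures:

```python
# Toy margin-vs-volume comparison. All unit counts and per-unit margins
# below are hypothetical, chosen only to show how volume can offset margin.

segments = {
    # name:               (units per quarter, gross margin per unit in $)
    "flagship ($399+)":   (150_000,  120),
    "mainstream (<$349)": (2_500_000, 25),
}

for name, (units, margin) in segments.items():
    print(f"{name:20s} -> gross profit ~${units * margin / 1e6:.1f}M/quarter")
```

Of course, that toy sum ignores exactly the things BZB keeps raising: the halo effect, OEM leverage and mindshare, which is where the real disagreement lies.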
LOL....What the hell are we doing talking about cows and semiconductors in the same breath...hahaha....Only at B3D! ;)
 
Ah, a perfect analogy to help me convey what I'm trying to say here. I totally agree with what you're saying, and I want to point out that I'm not saying AMD/ATI should throw away all the work done on R800/R900/etc.

Using your analogy let's add one more element and say that AMD is hurting financially and food for the cows is expensive. I'm suggesting that for the sake of keeping the company financially alive, AMD takes 50% of their cows and milks the hell out of them before slaughtering them and putting that meat in the freezer. The remaining 50% of the cows

Only you can't do that with R&D - take your "frozen meat" out of the freezer even a year later, and it's no use - the market has passed you by and your competitors have moved the goalposts as they've moved forwards and you've stood still. And it's not just what you put in the freezer, it's all the projects that are on the go for the next few years, and when you do decide to go back in, you've got that time lag again as you take a couple of years to get your new projects up to speed - once you've rebuilt all your infrastructure and staffing.

get exceptional treatment and attention to ensure they grow as large and juicy as possible. If AMD can start making good money again off these "excellent" quality cows, they might be able to afford more food and be able to have the "original" 100% number of cows.

Leaving the cow analogy aside, R&D doesn't work like that. It has a critical mass, which is why there are so few companies competing in fields where that critical mass is very expensive, such as graphics. You can't grow a new crop of engineers from the half a department you kept. Workload, team dynamics, company growth and company culture all become harder to manage when you're trying to ramp back up for new projects.

You can't be a top leading technology company unless you're building top technology. That doesn't just mean top of the range graphics chips, it also means making the best midrange and low range chips, making the best profit from them via yields and designs. If AMD was to give up making top technology, they wouldn't still be able to make great mid-range products because that top technology edge would be gone. They did that in the 8500 era, and it didn't move them forwards until they moved into making top technology that could filter down to all their products.

Putting the above into more useful context, I'm suggesting the following could happen:
  • Halt development on R800/R900 and beyond projects and re-apply efforts towards applying those concepts to more mainstream GPUs (Note: R&D continues, though it is strictly for mainstream GPUs)


That technology may well be one and the same, so you gain nothing, but lose market and mind share. The technology you put "on hold" is lost, as it's of no use in 12 months' time.

  • Pull teams off of any "flagship" ($399+) GPU development and re-assign them to aid with mainstream and value GPU development (shorten development time, decrease issues, add more polish, etc.)

A common misconception. You can't accelerate high technology projects like this just by throwing more people at them. It's the standard "9 women making a baby in one month" analogy. That's why you need to have multiple teams working on staggered projects designed to come to fruition at different times, because the development time is far longer than the product life cycle.

It took a significant amount of time for Intel to come back from the Pentium 4 road, for ATI to get back to the mainstream and high end graphics market, for Microsoft to turn around when they decided that the Internet was more than just a fad. The reason is that you can't just throw lots of money at complex projects and have them done by tomorrow. It literally takes years. Look at how long Intel is going to take to get back into graphics with its Larrabee project, and if anyone can afford to pour massive amounts of money/resources into a project, it's Intel.
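The "9 women making a baby" point can also be shown with a crude Brooks's-Law-style model: assume each added engineer contributes a fixed amount of work per month, but every pair of engineers costs a little coordination overhead. The per-engineer output and per-pair overhead below are invented numbers; the only point is that output grows sublinearly and eventually falls.

```python
# Crude Brooks's-Law-style illustration: effective output vs. team size.
# The per-engineer output and per-pair coordination cost are invented
# numbers; the point is only that output grows sublinearly and can fall.

def effective_output(team_size, per_engineer=1.0, pair_overhead=0.02):
    channels = team_size * (team_size - 1) / 2   # communication pairs
    return team_size * per_engineer - channels * pair_overhead

for n in (10, 30, 60, 100):
    print(f"{n:3d} engineers -> effective output {effective_output(n):6.1f}")
```

Which is exactly why these companies run multiple staggered teams on overlapping projects rather than piling everyone onto one chip.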


  • If AMD can get back into the black (or come closer at least by 2H 2008 or 1H 2009) they can contemplate re-entering the "flagship" GPU business. Getting back in is always an option on the table...but it might not make financial sense for them if the above-approach proves successful.

As I said several times, you can't just stop the business for a couple of years. It's too expensive, you lose your critical staff and skills, you lose mindshare, you have to throw away all the work that was meant to be used during that period, your reputation goes down, your competitors get all your profits and OEM contracts, it takes you a couple of years to get back up to speed again.


Remember, when compared with flagship GPUs, a mainstream GPU is:
  • Cheaper to produce (fewer transistors, smaller die, PCB with fewer layers, etc.)
  • Less complex (easier to design - though far from easy!)
  • Less power (less heat - TDP targets easier to reach)
  • Lower margin but typically MUCH higher volume

A midrange product is also less desirable, makes your company less profit per unit, loses you the mindshare in the market by not having a flagship product, and weakens your standing with OEMs because you can't supply their requirements at the high end. Do you think AMD wants to hear their OEMs tell them that they only supply high end Intel/Nvidia systems, because AMD can't make a high end system as they don't have a high end graphics solution?

You can't make those great mid-range products without having the technology to make great top range products too. You can't maintain a leading position in the market without leading technology in the form of leading products. If all you want to do is follow the leaders, then you're always going to be behind and falling back.
 
LOL...Well BZB, you and I are apparently the only ones with time to kill today... ;)

" If AMD was to give up making top technology, they wouldn't still be able to make great mid-range products because that top technology edge would be gone."

Although I hear what you're saying in almost every respect, I don't agree that a "Flagship" GPU design is an absolute necessity for AMD to be successful. If ATI was still on its own, then you might be right with that assumption....However, AMD has graphics to aid the platform...and a flagship card isn't an "absolute must" (though it would be nice).

If AMD took its two design centers (Marlborough and Santa Clara) and had them each focus on a mainstream GPU design, they could create a solid product. Granted, this would be in stark contrast to the traditional approach of creating a "flagship" GPU and then finding ways to trickle the technology down to mainstream parts. I'm not saying AMD stops making GPUs......I'm saying they stop making $400+ flagship GPUs and focus their attention on cranking out high volumes of mainstream and performance segment cards...

Well, we now know where you and I stand on the subject....let's see if anyone else is bold enough to chime in with their $.02. :D
 
OK, look: it's the same technology used in the high end and the midrange, so the R&D expenditure will end up about the same even if they focus on midrange only (we aren't talking about DeltaChrome level here; that's pretty much the low end of the last generation, and something like that would take less expenditure). AMD will still have to focus on the same things in the midrange as they do in the high end: performance per watt, performance per mm². The only place where there will be a difference in expenditure is deployment of the product, and that's why the high end parts are more expensive.

And also, as BZB stated, you lose the entire halo effect.
 
I agree, they would be kneecapping themselves if they dropped R&D expenditure on the high end.

Didn't stop Matrox, S3, or PowerVR. Hey, the last one is actually more successful since quitting high end graphics.

Dropping high end support for ATI would make AMD's purchase of ATI entirely pointless, however.

Not to mention, their graphics business isn't doing anywhere near as poorly as Matrox's or S3's were when they pulled out of the high end, and high end graphics is a much larger and more substantial portion of their graphics income than it was for PowerVR.

Halt development on R800/R900 and beyond projects and re-apply efforts towards applying those concepts to more mainstream GPUs (Note: R&D continues, though it is strictly for mainstream GPUs)
Pull teams off of any "flagship" ($399+) GPU development and re-assign them to aid with mainstream and value GPU development (shorten development time, decrease issues, add more polish, etc.)

How much time do you think is spent developing the technology behind the GPUs, and how much time is spent developing "the costly one" versus "the cheap one"? It seems like the higher end GPUs often end up just being multiples of the hardware found in lower end GPUs, so would they really save that much? And generally, something that makes the high end product more competitive also makes the low end product more competitive.
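To put the "multiples of the same hardware" idea in concrete terms: derivative parts typically reuse one architecture and just vary the number of identical blocks, so most of the architecture R&D is shared across the range. Here's a toy sketch; the block contents and per-SKU counts are invented, not real ATI configurations:

```python
# Toy illustration of one architecture scaled into several SKUs.
# Block contents and per-SKU counts are invented, not actual ATI/AMD configs.

base_block = {"shader_alus": 16, "texture_units": 4, "transistors_m": 45}

skus = {
    "value":      2,   # copies of the base block per SKU (hypothetical)
    "mainstream": 6,
    "flagship":   16,
}

for name, copies in skus.items():
    total = {k: v * copies for k, v in base_block.items()}
    print(f"{name:10s}: {total['shader_alus']:3d} ALUs, "
          f"{total['texture_units']:2d} TMUs, ~{total['transistors_m']}M transistors")
```

On that view, what dropping the flagship saves is mostly the extra physical design, validation and tapeout for the biggest configuration, not the shared architecture work, which seems to be the point being made here.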
 
  • Cheaper to produce (fewer transistors, smaller die, PCB with fewer layers, etc.)
  • Less complex (easier to design - though far from easy!)
  • Less power (less heat - TDP targets easier to reach)
  • Lower margin but typically MUCH higher volume
  • etc.
LOL....What the hell are we doing talking about cows and semiconductors in the same breath...hahaha....Only at B3D! ;)
These days midrange and low end GPUs are not less complex or easier to design. As someone else said, the technology is basically the same, with the difference being the amount of it. An argument can be made for focusing resources away from the high end, but that seems to be more of a strategic issue rather than a cost issue. The midrange GPU will require approximately the same number of engineers to design it, and you'll already be paying for the tools. Cost savings would mainly come from laying off engineers that are no longer needed with one less product, and from saved tapeout costs. Of course, if you lay off engineers they won't be around should you decide to resume any delayed projects.

Didn't stop Matrox, S3, or PowerVR. Hey, the last one is actually more successful since quitting high end graphics.
Matrox and S3 are a fraction of what they once were, though you can make a strong argument for their troubles not being the lack of a flagship but the lack of competitive products across the range.
 
These days midrange and low end GPUs are not less complex or easier to design.

The complexity of the design is not any less; however, the design is far more tolerant (lower frequencies, lower power, fewer transistors, etc.).

Designs for mainstream GPUs typically don't have to deal with the Achilles' heel of flagship GPUs, which would be leakage and yield issues. Because the binning of these parts would likely result in significantly fewer "unacceptable" units, the margins would remain healthy. Given a high volume part, that would translate into a very solid revenue contribution with very little risk.
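On the yield point, the usual first-order way to see why smaller dies bin so much better is the classic Poisson defect model, Y ≈ exp(-D*A): the fraction of defect-free dies falls off exponentially with die area at a given defect density. The defect density and die areas below are illustrative guesses, not foundry or AMD data:

```python
# First-order Poisson yield model: Y = exp(-D * A).
# Defect density and die areas are illustrative guesses, not foundry data.
import math

defect_density = 0.5  # defects per cm^2 (hypothetical)

dies_mm2 = {"mainstream die": 190, "flagship die": 420}

for name, area_mm2 in dies_mm2.items():
    area_cm2 = area_mm2 / 100.0
    yield_fraction = math.exp(-defect_density * area_cm2)
    print(f"{name:15s} ({area_mm2} mm^2): ~{yield_fraction * 100:.0f}% of dies defect-free")
```

At the same defect density, the smaller die comes out with roughly three times the fraction of good candidates in this toy example, which is the "far more tolerant" argument in numbers.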

I firmly believe that AMD's focus is on Fusion and on applying ATI's knowledge of graphics to various embedded and general consumer electronic designs.

Per usual though, time will tell....
 
* Halt development on R800/R900 and beyond projects and re-apply efforts towards applying those concepts to more mainstream GPUs (Note: R&D continues, though it is strictly for mainstream GPUs)
* Pull teams off of any "flagship" ($399+) GPU development and re-assign them to aid with mainstream and value GPU development (shorten development time, decrease issues, add more polish, etc.)
* If AMD can get back into the black (or come closer at least by 2H 2008 or 1H 2009) they can contemplate re-entering the "flagship" GPU business. Getting back in is always an option on the table...but it might not make financial sense for them if the above-approach proves successful.

This is exactly what I think is happening internally right now. I see no other possibility for survival whatsoever.

EDIT: "frozen meat" is just money to me, not the resources or devs.
 