How Nvidia can benefit from AMD's situation - My opinion

Hello,

AMD's Q1 results aren't very encouraging, and if their upcoming products aren't good, then AMD is as good as dead. Nvidia, which has some really good GPUs but no CPU expertise and no x86 license, can really benefit from AMD's situation in the following ways:

1) AMD has said that they are willing to license their IP. If AMD really needs the cash, Nvidia could get an x86 license from them.

2) Pray that Barcelona isn't powerful enough to beat the competition, then acquire AMD and sell off ATI.

The first option isn't as risky as the second, but the second has more to offer: an x86 license, fabs and an experienced team of CPU engineers. They would sell off ATI since Nvidia already has GPU technology. They could even dismiss the FPU team, since Nvidia already has FPU designers.
 
AMD can't license x86 technology to third parties, unless it's for pure manufacturing purposes of their own branded processors (i.e. Chartered), and even then it can't exceed 20% of the global production volume (otherwise they would jeopardize the terms of the agreement with Intel).

The way I see it, Nvidia has only two options for gaining access to an x86 license:

- Just buy out VIA/S3 (the Cx line of processors, of which C7 is the latest).
- License it directly from Intel (highly unlikely).


Even then, I don't know if x86 would be of any interest to them.
 
As Inkster said, AMD cannot license x86 IP to NVIDIA. However, if AMD went out of business (unlikely, see below) then Intel would become a true monopoly, unless they got rid of the legal shit around x86 and encouraged other vendors including NV to compete.

Obviously, the industry dynamics if all hell broke loose at AMD would be quite interesting. Let's not be too hasty, however. One possibility would be private equity; I'm not convinced it's a very likely scenario at this point, but it's not impossible.

Besides that, the following companies would be interested in the company (or specific divisions), excluding Intel for obvious reasons: IBM, Samsung, Applied Materials (??), NVIDIA. Companies that are unlikely to want the entire company: Sun, Broadcom, Qualcomm, Nokia, Chartered.

So, all I'm saying there is that obviously NVIDIA wouldn't be the only company in the running *if* AMD had to sell the entire company. They also wouldn't be the only company in the running if AMD wanted to sell specific product groups or design centers.

In the end, though, AMD is far from doomed at this point. We'll see what happens. I think the right strategy for them would be to stop competing in terms of performance on the desktop/laptop fronts, and focus exclusively on perf/mm2 (and perf/watt). Fusion might or might not be a big enough step in that direction. They would also be required to have a completely different architecture for servers and HPC, obviously.

The question really is, how much can they cut costs without affecting their capability to deliver that? My guess is that it might be extremely difficult... And unless things get much better before the end of the year, AMD will imo run out of cash sometime in Q1 2008, even after this $1.5B in extra funding. For their own sake, the restructuring better be very significant. That would create its own set of longer-term problems, however...
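
Just to put back-of-the-envelope numbers on that (only the $1.5B raise is from above; the cash-on-hand and quarterly burn figures are purely hypothetical):

```python
# Back-of-the-envelope runway estimate. Only the $1.5B raise comes from the
# post above; the starting cash and quarterly burn are hypothetical.
def runway_quarters(cash_m, raise_m, burn_per_quarter_m):
    """How many full quarters the cash pile lasts at a constant burn rate."""
    return (cash_m + raise_m) // burn_per_quarter_m

# e.g. $1.2B on hand, $1.5B raised, burning ~$0.9B a quarter (all made up):
print(runway_quarters(1200, 1500, 900))  # -> 3 quarters, i.e. roughly Q1 2008 from mid-2007
```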

EDIT: Inkster: Don't forget code morphing! ;) (and I'm not sure VIA's license is transferable if ownership changes...)
 
I'm more and more convinced that buying ATI was a mistake, especially with the company in the middle of huge expenditures, like the "K10" R&D, building FAB 36, retro-fitting FAB 30 (will be called FAB 38 when done), the proposed upstate NY FAB, etc, etc.
Nvidia, for instance, doesn't like to buy companies with a manufacturing burden on their backs; it's too much of a financial risk. Hence I doubt very much they would ever buy AMD in its entirety.

They should either have bought a smaller chipset player to begin with (like SiS, VIA, etc.) or invested in in-house development (the AMD 8000 series, among others, showed that they too could design reliable chipsets if they really wanted to).

Aside from an immediate need for strong own-brand chipsets, I still to this day cannot discern exactly what their intentions were with high-end 3D graphics.

EDIT: Inkster: Don't forget code morphing! ;) (and I'm not sure VIA's license is transferable if ownership changes...)

Yeah, but even then... I don't know.
Transmeta wasn't exactly a shining star either, even with Sony backing them up with capital.
And Centaur (VIA's design team) is really small, too small to make an impact in a market where there are already two huge giants.
 
They should have either bought a smaller chipset player to begin (like SiS, VIA, etc)
I agree that buying VIA would have made a lot more sense. They had basic 3D (S3...) and tons of chipset expertise. Is their stuff as good as ATI's? No, probably not. But it would have cost a lot less and it actually tends to be pretty efficient (S3's GPUs are excellent in terms of perf/watt etc. and they're quite small) - also, going with VIA would have allowed them to position that ONLY in the low-end part of the market and perhaps not piss off NVIDIA so much.

If AMD's CPU division hadn't begun to bleed money, ATI would have been a much more acceptable decision. But as it is, buying them with cash was crazy... That might have been harder to say back then than it is today, however.
Yeah, but even then... I don't know.
Transmeta wasn't exactly a shining star either, even with Sony backing them up with capital.
The paradigm was decent and interesting, the implementation was not, imo. It really depends what your goal is. Doing an ultra-high-performance solution with code morphing would require a lot of expertise and R&D, but if your goal is to create a low-end solution only with good perf/mm2, it can be a very good proposition.
And Centaur (VIA's design team) is really small, too small to make an impact in a market where there are already two huge giants.
Personally, I've got a lot of respect for Centaur, and I can't wait to see what their next architecture will bring to the table.

You can claim that the C7's performance is due to a lack of engineering talent, but I wouldn't see it that way. It's also because they are just incredibly aggressive in terms of perf/mm2 and perf/watt. You can add a bunch of features to hit higher performance levels, but many of those will actually hurt your perf/watt and perf/mm2. I'm not convinced at all that the C7 wouldn't benefit from basic out-of-order techniques, but there is no good reason for them to try to get as aggressive as AMD/Intel there.

Consider this: The C7 has a die size of 30mm2 on 90nm SOI. The single-core K8 for AM2 platforms (aka Orleans) has a die size of 126mm2, and Prescott measured 109mm2. Obviously, the C7 also has less L2, but it'd still be much, much smaller even if you excluded cache. Furthermore, if you added Z-RAM, the cache amounts would become much more reasonable.

Consider what an updated architecture with 30-35% higher ILP and three cores would do on 45nm, along with a few megabytes of Z-RAM. That'd be an awesome chip for markets where the performance requirements aren't as high, and it'd only measure about 30mm2 too. If the x86 migrates towards being a commodity (that is, performance requirements go down and you nearly only compete on price), that is clearly a winning architecture. The commodity thing might be a big if, of course... But it does have a lot of potential.
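
As a rough sanity check on that ~30mm2 figure (only the 30mm2-at-90nm starting point comes from the numbers above; the shrink factor, ILP area penalty and Z-RAM density are my own guesses):

```python
# Rough sanity check of the ~30mm^2 claim. Only the 30mm^2 C7 die at 90nm is
# taken from the post; every scaling factor below is an assumption.
c7_area_90nm = 30.0          # mm^2, from the post
shrink_per_node = 0.5        # assumed ideal area scaling per full node
nodes = 2                    # 90nm -> 65nm -> 45nm
ilp_area_penalty = 1.3       # assume ~30% more logic area buys ~30-35% more ILP
cores = 3
zram_mb = 4                  # "a few megabytes" of Z-RAM (assumption)
zram_mm2_per_mb = 0.75       # assumed Z-RAM density at 45nm, mm^2 per MB

core_45nm = c7_area_90nm * (shrink_per_node ** nodes) * ilp_area_penalty
total = cores * core_45nm + zram_mb * zram_mm2_per_mb
print(f"per core: {core_45nm:.1f} mm^2, total: {total:.1f} mm^2")
# Lands in the low 30s of mm^2, i.e. the same ballpark as the claim above.
```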

EDIT: I originally said C7 had a die size of 40mm2, but it's actually only 30-31mm2. Obviously, that only makes my point stronger, heh.
 
AMD can't license x86 technology to third parties, unless it's for pure manufacturing purposes of their own branded processors (i.e. Chartered), and even then it can't exceed 20% of the global production volume (otherwise they would jeopardize the terms of the agreement with Intel).

The way I see it, Nvidia has only two options for gaining access to an x86 license:

- Just buy out VIA/S3 (the Cx line of processors, of which C7 is the latest).
- License it directly from Intel (highly unlikely).


Even then, I don't know if x86 would be of any interest to them.

Oh, I didn't know AMD was required to fab their own chips. I was wondering why AMD hadn't considered going fabless (after all, IBM alone probably has enough high-end fab capacity to take care of AMD, their fabs are just as advanced, and AMD hasn't been staying too far ahead of the Taiwanese companies lately). Sure, there are other reasons to want to keep their own fabs, but they're strapped for cash and I'd say the reasons for having their own fabs decrease every year.

Oh, and I thought that Stexar company Nvidia bought was rumored to have an x86 license?
Besides that, couldn't Nvidia go the Transmeta route?
 
Perhaps AMD could sell off ATI in times of need and then buy VIA and S3.

Side question: Does VIA totally own S3, or only some of their graphics assets?
 
The paradigm was decent and interesting, the implementation was not, imo. It really depends what your goal is. Doing an ultra-high-performance solution with code morphing would require a lot of expertise and R&D, but if your goal is to create a low-end solution only with good perf/mm2, it can be a very good proposition.
Transmeta showed that lower-end performance with decent power efficiency was doable.
All else being equal, however, there is no competing with hardware x86 support on performance.

Maybe if Transmeta had come about around 32nm, things would have been different. It's possible that a leveling off in circuit performance, and a shift to multicore that favors density over fast switching speeds, would have made things more favorable.

As it was, the significant process and timing disparity between the foundry processes Transmeta had to work with and Intel's march of new nodes basically doomed it.

At least up to 45nm, Intel's process lead can basically eat up most of the power savings a power-optimized code morpher can manage, and Intel will have much better economies of scale.
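
For reference, this is roughly what code morphing means: dynamic translation of guest (x86-like) code blocks into native ops, cached so hot loops only pay the translation cost once. The sketch below is purely illustrative and has nothing to do with Transmeta's actual implementation:

```python
# Toy illustration of code morphing: dynamic translation of guest (x86-like)
# blocks into "native" ops, cached so hot loops pay the translation cost once.
# Purely illustrative; this is not Transmeta's actual design.

def translate(block):
    """Pretend to turn a block of guest ops into native ops."""
    return [("native_" + op, arg) for op, arg in block]

translation_cache = {}  # guest block address -> translated native block

def execute(block_addr, guest_blocks, state):
    """Run one guest block, translating it only on first use."""
    if block_addr not in translation_cache:        # slow path: translate once
        translation_cache[block_addr] = translate(guest_blocks[block_addr])
    for op, arg in translation_cache[block_addr]:  # fast path: cached native code
        if op == "native_add":
            state["acc"] += arg
    return state

# A hot loop re-executes the same block; only the first pass pays for translation.
blocks = {0x100: [("add", 2), ("add", 3)]}
state = {"acc": 0}
for _ in range(4):
    execute(0x100, blocks, state)
print(state["acc"])  # 20
```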

Consider this: The C7 has a die size of 30mm2 on 90nm SOI. The single-core K8 for AM2 platforms (aka Orleans) has a die size of 126mm2, and Prescott measured 109mm2. Obviously, the C7 also has less L2, but it'd still be much, much smaller even if you excluded cache. Furthermore, if you added Z-RAM, the cache amounts would become much more reasonable.
I don't think Prescott is the standard the C7 should be compared against when it comes to die size.
VIA's future chips would have to be compared to a single Yonah or Merom core with stripped-down cache at least one process node smaller.

That's the direction Intel could take things, looking at its UMPC offerings.
Die size is also only a major concern if you don't have fab capacity to burn.
A consumer won't care if a chip has 4 times the die size if the chips cost the same.
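
To put a number on that, here is a crude cost-per-good-die comparison (the wafer cost, defect density and Poisson yield model are all assumptions picked for illustration):

```python
import math

# Crude cost-per-good-die comparison for a small die vs. a 4x larger one.
# Wafer cost, defect density and the Poisson yield model are all assumptions.
def good_dies_per_wafer(die_mm2, wafer_diameter_mm=300, defects_per_cm2=0.5):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross = wafer_area / die_mm2                             # ignores edge losses
    die_yield = math.exp(-defects_per_cm2 * die_mm2 / 100)   # simple Poisson yield
    return gross * die_yield

wafer_cost = 3000  # dollars per processed wafer (assumption)
for die in (30, 120):
    good = good_dies_per_wafer(die)
    print(f"{die} mm^2: ~{good:.0f} good dies/wafer, ~${wafer_cost / good:.2f} per good die")
```

With those made-up numbers, the 4x larger die ends up costing a bit over 6x as much per good die, which is why die size matters to the vendor's margins (and fab capacity) even if the consumer only ever sees the sticker price.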
 