The AMD Execution Thread [2007 - 2017]

Nvidia?

Lose:
Oh only about 60-80% of their motherboard sales.

Gain:
...
...
...
(Still thinking. A hat made of money and 72 virgins* from Intel as the compensation for such self-mutilation notwithstanding, I am drawing a blank)




*the hot girl ones, not just a herd of incoming interns.
 

Yes, yes. But what if Intel gives them access to the highly lucrative Xeon segment, in return for ceasing to support AMD on the desktop/workstation (which would also sell fewer processors due to the lack of competition at the high end)?
Wouldn't a CSI bus license imply access not only to future Core processors, but also to the future Xeon and Itanium "III" families?
Would a slice of the much bigger Intel share of the market eventually offset the loss of the Opteron and Athlon/Phenom businesses?
The motherboard business is still not their core business, and they have been focusing on the Intel market with new chipsets for the past 8 months.
AMD got... QuadFX/NF680a (which was really just two NF570 SLI SPPs tied together) as well as new low-end products (a signal of things to come regarding the Nvidia/AMD relationship?).
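The break-even question above can be sketched with back-of-the-envelope arithmetic. Every number below is a hypothetical placeholder (the post gives no real 2007 unit volumes); the point is only the shape of the trade-off:

```python
# Back-of-the-envelope break-even: what share of the (larger) Intel
# chipset market would replace a given share of the AMD market?
# All figures below are hypothetical placeholders, not real 2007 data.

amd_market_units = 30_000_000    # assumed annual AMD-platform chipset volume
intel_market_units = 90_000_000  # assumed Intel-platform volume (3x larger)
nvidia_amd_share = 0.60          # assumed Nvidia share of the AMD market

units_lost = amd_market_units * nvidia_amd_share
breakeven_intel_share = units_lost / intel_market_units

print(f"Units given up: {units_lost:,.0f}")
print(f"Break-even Intel share needed: {breakeven_intel_share:.1%}")
```

Under these made-up numbers, Nvidia would only need a 20% slice of the Intel market to break even on units, which is why the "what-if" isn't entirely crazy on volume alone (margins and Xeon access are another matter).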


To my understanding, no one this small ever got a license to build a 3rd party chipset for Xeon.
All the key players are much bigger (Intel, IBM, ServerWorks/Broadcom, Unisys, etc).
 
Considering how anal server people are about their chipsets, giving away the bulk of their chipset sales for a *chance* to break into that market is a classic case (to mangle a proverb) of trading two birds in the hand for one in the bush.
 

But the "Nvidia nForce Professional" brand is already well known among them, as the chipset inside the vast majority of their Opteron servers...
"Nvidia" in the server/workstation chipset market has what ? A mere 3 or 4 years of life, since they started basically from scratch ?
Well..., technically they've started designing x86 CPU chipsets with a game console (Xbox 1). ;)
 
Why would Nvidia want to give up all SLI sales to AMD processors?

At some point I could imagine them finding it not worthwhile to compete with AMD for entry-level mobos. But high-end mobos? I don't see them getting out of that.
 

Fewer AMD processors sold at the high end means less profit for high-end chipset manufacturers.
They need to pair their chipset designs with a strong processor, don't they?

At the rate of continued CPU discounting and "Phenom" uncertainty AMD has been serving us, it stands to reason that going to all the trouble and expense of designing a leading chipset for the non-leading CPU family may not be worth it for their bottom line.
Nvidia rarely dipped its toes into non-IGP low-end chipset parts in the AMD market. NF4 (not the Ultra or SLI flavors), NF550, etc. were mostly residual products, unlike the mainstream and high-end chipsets.
Perhaps they know something about future high-end Intel and AMD processors that we don't and are shifting accordingly.
 

Although I totally agree... let's play devil's advocate and look at the flip side of this...

(here comes a giant "what-if?")

What if Intel made an arrangement with NVIDIA and opted to concede a portion of their chipset market to NVIDIA? Given how significantly larger Intel's chipset volume is compared with AMD, the end result could work out to be very favorable to NVIDIA.

(here comes an even larger "what-if?")

What if Intel eventually backed out of the chipset market entirely (perhaps still addressing the entry level) and conceded the vast majority (if not all) of chipset production to NVIDIA? The percentage of chipsets made by NVIDIA would increase over time as Intel "tested" the market's reception and NVIDIA's execution... with an eventual buyout or merger stitching the two companies together...

What if?
 
What if nV just assists Intel in destroying the competition for both of them and not much beyond that?


"Destroying" would bring more harm than good to both companies (anti-trust legal issues).

No, if something like this was to be, they would aim at merely "weakening" the competition back to their previous levels of market share.
Nvidia might want to go back to pre-2003 days -when they lost the clear leadership to ATI-, while Intel might want to go back to the pre-Pentium 4 days -when they started eyeing the Athlon as a real threat-. ;)
 
Anandtech is laying into AMD today, after testing Barcelona:

http://www.anandtech.com/tradeshows/showdoc.aspx?i=3006

(first two pages)

The motherboard we tested on had minimal HT functionality and wouldn't run at memory speeds faster than DDR2-667; most video cards wouldn't even work in the motherboard. Memory performance was just atrocious on the system....

In the end, performance was absolutely terrible. We're beginning to understand why AMD didn't let us test Barcelona last month. It's not that AMD is waiting to surprise Intel; it's that the platform just isn't ready.

AMD simply hasn't gotten the process under control yet and after hearing our friends at the motherboard companies talk, AMD is close to near/total panic mode right now.
 
Some of AMD's most recent demonstrations of Barcelona have been touting it as a drop-in replacement that allows for equivalent performance at a lower power draw.
AMD seems to be positioning the initial push for Barcelona with the power angle, since performance gains are unlikely.

Rumors seem to indicate that the chip itself is functional to the point that it isn't disabling units.
The clocking problems don't seem to be thermally related, not at the low speeds we're talking about.

Perhaps the memory and HT problems point to possible issues with the independent clocking scheme or the crossbar.
HT isn't affected by changes to prefetchers or memory buffers, but HT, the IMC, and the cores do hang off of a common chip crossbar.
Traversing a crossbar between different and dynamically adjusted clock domains can be a hefty engineering challenge.
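A toy model of why crossing between dynamically adjusted clock domains is hard (a generic textbook illustration, not a description of Barcelona's actual crossbar): a signal launched in one domain must be re-sampled in the other, and the classic mitigation is a two-flop synchronizer, which trades two cycles of latency for letting a metastable first stage settle.

```python
# Toy illustration of a clock-domain crossing (CDC): a value from a
# source domain is sampled in the destination domain through a two-flop
# synchronizer. Generic technique only; not AMD's actual design.

def two_flop_synchronizer(samples):
    """Pass each destination-domain sample through two flip-flop stages.

    The output lags the input by two destination-clock cycles, which is
    the price paid for letting any metastable first stage settle.
    """
    ff1, ff2 = 0, 0
    out = []
    for raw in samples:
        out.append(ff2)      # value visible in destination domain this cycle
        ff1, ff2 = raw, ff1  # clock edge: shift the two-stage pipeline
    return out

# Destination-domain samples of a source-domain signal going 0 -> 1.
print(two_flop_synchronizer([0, 0, 1, 1, 1]))  # -> [0, 0, 0, 0, 1]
```

Multiply that by every link hanging off a shared crossbar (cores, IMC, HT), each potentially reclocking on the fly, and the verification burden grows quickly.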


The fact that AMD is having problems like this so close to launch is not good.
That's two product lines with issues and possible delays. Too bad AMD can't swallow an R600-type delay on its CPU lines.
 
"Destroying" would bring more harm than good to both companies (anti-trust legal issues).

No, I mean just by totally outpacing and outselling the competition (each) and regular strategic alliances with no dark side. That seems to suffice already.
 
Well, what if you're wrong? :) http://youtube.com/watch?v=6mmskXXetcg
Sorry, I couldn't resist linking to that excellent video nAo linked me to recently, it's just too splendid for me to pass up an opportunity like this! No offense whatsoever intended.

However, it is worth noting that this makes no sense. Zero. Not one single bit. And there's a very good technical reason behind that: What do you think the original nForce 570 is? It is a single MCP55 chip. What do you think the nForce 590 is? It's a MCP55 with a C51 next to it, the latter being the GeForce 6100 IGP along with PCI Express lanes; NVIDIA is disabling the GPU in this case for redundancy and keeping the PCI Express functionality for SLI. The nForce 4 IGPs are simply MCP51+C51, with the GPU not disabled.

And finally, what do you think the nForce 680i SLI and the nForce 650i Ultra are? MCP55+C55 and MCP51+C55 respectively. They simply are AMD chipsets with a northbridge chip (Intel FSB/Memory Controller/PCI Express) next to them. MCP72, the next-generation southbridge from NVIDIA, will be the exact same thing.
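The die reuse described above can be tabulated. The pairings are as stated in the post; the lookup helper is purely illustrative:

```python
# Chipset compositions as described in the post: NVIDIA reuses a small
# set of dies (MCP southbridges, C-series northbridge/IGP chips) across
# both its AMD and Intel product lines.
NFORCE_COMPOSITION = {
    "nForce 570":        ("MCP55",),
    "nForce 590":        ("MCP55", "C51"),  # C51 GPU disabled, PCIe kept for SLI
    "nForce 4 IGP":      ("MCP51", "C51"),  # GPU left enabled
    "nForce 680i SLI":   ("MCP55", "C55"),  # C55 adds Intel FSB/memory/PCIe
    "nForce 650i Ultra": ("MCP51", "C55"),
}

def shared_dies(product_a, product_b):
    """Return the dies two products have in common (illustrative helper)."""
    return set(NFORCE_COMPOSITION[product_a]) & set(NFORCE_COMPOSITION[product_b])

print(shared_dies("nForce 590", "nForce 680i SLI"))  # -> {'MCP55'}
```

Seen this way, "dropping AMD" saves almost no silicon design cost: the AMD-facing products are built from the same dies as the Intel-facing ones.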

And ironically enough, I wouldn't be surprised if we eventually saw a next-gen northbridge with the memory controller/FSB disabled for redundancy on AMD platforms, for 2x16 PCI Express Gen2 SLI. Another possibility is that they will mix it with a single-chip IGP (with the GPU disabled?) on the high-end for that, and get more of everything for the same price. That would be a very interesting design decision indeed, and I'm quite curious as to what will happen there.

Sorry for apparently going off topic, but this is clearly related to whether it makes sense for NVIDIA to drop AMD support completely. Their cost savings in doing so would effectively be zero, at least for MCP72, as that design has very obviously already been finalized. And even after that, the only real advantage is that they could create a one-chip non-IGP solution for the Intel mainstream segment. That part of the market isn't really under tremendous pricing pressure, however, so I'm not really sure what the point would be there either, to be perfectly honest.

About one year ago, it was rumoured that NVIDIA was working on "T65" to replace the C51 on the nForce 590 SLI. That chip would have provided 2x16 SLI on a single chip, thus providing slightly higher performance in theory (and more bragging rights!) for SLI. However, that chip never surfaced and it was evidently cancelled (or simply never existed, but I doubt that). This is most likely the consequence of Conroe's success; why make/sell/create an AMD-only chip aimed at the enthusiast segment today, let alone back then? It didn't make sense back then, and neither does it today.

So the only AMD roadmap change you'll likely ever see (pre-CSI) from NVIDIA already happened one year ago. Any other change would be exclusively political, and as mentioned previously, subject to legal issues.
 
From Intel's side, they also use chipsets to soak up (and further utilise the investment) process capacity that they are transitioning CPU's away from. What would they do with these processes instead?
 
That's an interesting idea. I suppose it depends on:

1). How cutthroat they want to be entering a new market vs. maximizing leverage of previous investment. Presumably their typical process lead puts them in a competitive place with, say, TSMC even if they are using "old" capacity for GPUs. Sure, it's easy for enthusiasts to say "go balls to the wall," but the bean counters and financial people will have a say.

2). How cutthroat they *need* to be entering a new market with an unproven architecture with little to no initial ISV and driver optimization investment at first. In other words, being a process generation ahead could help mask those issues temporarily with raw brute force that no one else could bring to the table in the same time frame.
 
While GPUs, made at external fabs, often trail CPUs on process node, the difference isn't that large (AMD is shipping 65nm GPUs already; Intel is still some months away from 45nm CPUs), and Intel doesn't have the advantage of the half-nodes. Given the indications so far, I'd suspect they would want the cutting edge to ensure the price/power/perf ratios; unlike chipsets, GPUs aren't small, so you want to maximise wafer usage.
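The "GPUs aren't small" point can be made concrete with the standard first-order dies-per-wafer approximation. The die sizes below are hypothetical round numbers, not any specific product:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Standard first-order approximation (ignores defects and scribe lines):
    gross dies ~= wafer area / die area, minus an edge-loss correction term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical sizes: a ~100 mm^2 chipset vs. a ~400 mm^2 high-end GPU
# on a 300 mm wafer. Yield aside, the big die yields far fewer candidates
# per wafer, so a process shrink matters much more for the GPU.
print(dies_per_wafer(300, 100))  # -> 640
print(dies_per_wafer(300, 400))  # -> 143
```

A full node shrink roughly halves die area, so for the big die it nearly doubles the candidates per wafer; for a small chipset die the absolute gain per wafer is much less economically interesting.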
 
Yonah was at 65nm and was launched first quarter 2006.

If the gap is not noticeably narrowed, that means that an Intel GPU will have a process advantage for nearly a year.

That's a long time to sweat.
 
It's interesting you bring that up... It should be noted that Paul Otellini recently said (in this conference call, I listened to it back then, not going to do so again now) that he wanted to reduce the number of things done on trailing processes, and iirc, in a significant way.

It's not clear to me what that implies. Moving northbridges/IGPs to the cutting-edge process and keeping the southbridge on older or even much older processes would make sense. It is not clear to me how that is influenced by Nehalem and CSI, however.

And the XScale stuff will move to TSMC in 2008, so that also reduces how much they could attribute to the older processes (which apparently wasn't such a bright idea for XScale anyway, according to Marvell/Fabtech!)

So that does bring up the good question of what Intel will want to use the older fabs for. Maybe they just want to sell the equipment faster and get them converted faster too? That doesn't seem very smart to me, but it's not impossible at all. And what about the Dalian fab, which will be coming online in 2010 on 90nm, no less than 3 process nodes behind everything else by that point.

My theory is that Intel plans to dedicate that fab to southbridges but, more importantly, to CMOS photonics. And yes, I just implied that I believe China will soon be the world's largest producer of CMOS photonics (albeit via Intel). If that is indeed the case, I'm not exactly surprised that Intel got such a generous deal with the government, and I wonder what the press's reaction will be once they realize it, if it is indeed correct.

Ah well, too much OT, sorry!
P.S.: WRT the process advantage Intel GPUs would have against AMD's or NVIDIA's, it should still be taken into consideration that TSMC half-nodes should ease the pain a bit there.
 
...

The fact that AMD is having problems like this so close to launch is not good.
That's two product lines with issues and possible delays. Too bad AMD can't swallow an R600-type delay on its CPU lines.

I wouldn't worry a whole lot about the tiny pieces of pre-production and engineering-sample hardware that sites like AnandTech have been able to *briefly* touch so far this year. A poster above goes so far as to claim that AnandTech "tested" said hardware, when AT's own comments conclusively debunk that idea completely. "Touched it" is a lot more apt than "tested it," imo. A couple of things to think about:

(1) Prior to AMD shipping Opteron (which they did prior to shipping the desktop A64, remember), there was much despairing commentary also written about Opteron, which was based on early pre-production and engineering-sample looks that various people had. They jumped to several erroneous conclusions about Opteron and Opteron's future, as a result. That's pretty much exactly what we're seeing here, yet again, with respect to Barcelona.

(2) Prior to Intel actually shipping Core 2 to review sites and then the public, and lifting the NDA's for Core 2, general commentary about the upcoming Core 2 was that most of these sites doubted it would be giving Opteron/A64 much in the way of competition. I remember seeing "too little, too late" very often in those days as words used to describe Core 2, words written by people who were not privy to the actual production-grade silicon even though it was obvious by what they wrote that they thought they knew a whole lot more about Core 2 than they actually did. We're seeing the same phenomenon yet again, oh-so-predictably, in relation to Barcelona. Strangely, many of these sites have conveniently forgotten some of the things they opined about relative to Core 2 during those pre-Core 2 days. (Strangely? Nah...;) I wouldn't want to admit it, either.)

In this case, too, you've got AnandTech repeating gossip it's heard from gosh knows where about how Barcelona "isn't scaling very well" (I assume they mean in MHz), yet in the same sentence they talk about AMD "being very optimistic" about Barcelona's scaling in MHz--but evidently they decide it's more prudent to listen to the rumors as opposed to the company that will be manufacturing and selling Barcelona. It's sad, but it is predictable. So, they give the rumors credence while implying with some gusto that AMD hasn't a clue about its own upcoming cpus...;) I think the record is clear that with Opteron/A64 AMD surely knew best, and so my money's on AMD being right about what they're going to ship this time, too.

My own opinion is that for some reason these sites are miffed about what they perceive as being left out of the loop when it comes to Barcelona, as, after all, they think, they are "the loop," so to speak...;) So then they reason that since they are so important in the scheme of things that the fact that they've been left out thus far means that AMD is just trying to hide some things--like Barcelona's scalability in Mhz--ahem, even though what AMD has officially told sites like AT is that AMD isn't worried at all about Barcelona's ability to scale.

Last, I want to comment very briefly on R600: while the pessimists and naysayers are already counting R600 down and out, I believe the fact is that the R600 saga is only just beginning. My own opinion is that AMD has a lot more knowledge of its own products than it is being given credit for by some sites, and that very possibly we should be paying more attention to AMD here than to sites like AT, which are, after all, pontificating on the future of products they have no input in with respect to either manufacturing or marketing.
 