Nvidia: "ATI's thrown in the towel"

My guess is that Intel can't build a sufficiently good integrated part for Vista and will license or contract with NVidia, PowerVR, or someone else. They've been trying for a long time and continue to fail technology-wise, although people still buy their chipsets (for other reasons). Likewise, Intel failed in the phone/PDA business, was forced to acquire third-party technology, and then had to sell it off.

AMD has some serious troubles to overcome. Core 2 is going to dominate performance until at least Q1'07, and all AMD has to fight back with is a price war, which will be even worse for their margins. On top of that, they've got to pull off the merger and make the synergy work; there are just so many "what ifs" and variables here. They certainly did not undertake this merger from a position of strength, as they are clearly on the defensive now. They had Intel on the ropes for a while with Intel's big NetBurst misstep, but I think they underestimated Intel and got complacent.

All the concern around NVidia's potential to be hurt in core MB chipsets is one thing, but keep in mind that NVidia is first and foremost a discrete graphics company. While the other markets are nice, as an investor I would prefer for them to remain laser-focused on graphics and not get too caught up in branching out of their core competency (hell, even the XBox1 was a distraction). The need for discrete solutions is not going to go away; it's where they make their bread and butter, and they would do well to remember that they must protect that market first and foremost.

The real threat comes not from losing the ability to produce/sell core chipsets, but from not being able to control the standards for integrating their discrete solutions. Thus, if a future GPU needs custom logic support on the MB, they are in a worse position if they don't have a significant share of the market, because then they'll have to convince VIA, SiS, Intel, et al. to implement their new proposed extensions. Whereas if nForce has high market penetration and brand recognition, they can force adoption simply by shipping a new chipset with support.

Right now, we don't know what such a custom bus would look like, but if AMD<->ATI provide some new CPU<->GPU linkage, and that linkage leads to readily visible and tangible benefits in games, Intel would be forced to follow. Intel's most likely play would be to create a SIG like they did with PCI, invite NVidia, and define a new "open standard" that does the same thing, with the benefit that the AMD/ATI solution would be "AMD proprietary". AMD then has two choices: a) open the interface to NVidia and others as well, which negates some of the advantage, or b) face a "slot war" with Intel and all of the companies in its gaggle of support. Remember PCIe vs. PCI-X, or PCI vs. VLB vs. MCA? We know who got burned in those fights.
 
RobertR1 said:
Neither CPU manufacturer is oblivious to this fact. In fact, as they see the forward trend, GPUs will continue to have more appeal (HTPCs, Vista, gaming), so it is critical that they find a way to integrate the GPU at a respectable level into their CPU. Imagine if tomorrow you could go out and buy a new Intel/AMD CPU for $700 that has the CPU power of a Conroe and the GPU power of an X1800 XT. It'd be a smash hit. Now imagine the entire lineup being as such, and it'd put the GPU IHVs in a world of hurt. If Intel and AMD are headed in that direction, I expect Nvidia to be merged/bought out at some point in the future as this gets closer to reality. Right now, Nvidia is better off maintaining their sole position for the next few years and maximizing profits.

Yes, if it happened tomorrow it would be valuable; unfortunately, it ain't gonna happen until 2008, or more likely 2009. In which case, the value of an integrated Conroe + X1800 XT would be like the value of an integrated Pentium III and Voodoo3. By 2009, CPUs and GPUs are going to be far, far more powerful, and a Conroe + X1800 XT is going to be a "value" low-end solution.

I'm going to be looking for 4-8 core CPUs by then, and I'll want a G90/R700 equivalent. What I won't want is half the CPU power and half the GPU power, sharing the same memory bus, in one chip, with no ability to upgrade them separately. Such a design is certainly going to lead to compromises. Integrated GPUs are never going to represent the mid-range and high-end solutions, IMHO.
 
Inane_Dork said:
Mid-range DX10 card is well above a "solid mid-range gfx-card this year." They need DX9 hardware support and tolerably good performance at it just for the OS.

I'm still not convinced anything higher than an X1300 would be required for Vista but maybe I haven't seen what the beast can throw at hardware.
 
DemoCoder said:
Yes, if it happened tomorrow it would be valuable; unfortunately, it ain't gonna happen until 2008, or more likely 2009. In which case, the value of an integrated Conroe + X1800 XT would be like the value of an integrated Pentium III and Voodoo3. By 2009, CPUs and GPUs are going to be far, far more powerful, and a Conroe + X1800 XT is going to be a "value" low-end solution.

I'm going to be looking for 4-8 core CPUs by then, and I'll want a G90/R700 equivalent. What I won't want is half the CPU power and half the GPU power, sharing the same memory bus, in one chip, with no ability to upgrade them separately. Such a design is certainly going to lead to compromises. Integrated GPUs are never going to represent the mid-range and high-end solutions, IMHO.


It's called an example... obviously it would be scaled to the times and the tech available...
 
RobertR1 said:
It's called an example... obviously it would be scaled to the times and the tech available...

It's not obvious to me at all. You're implying that future discrete products at the X1800XT's market segment in terms of performance will be candidates for on-die (or package) integration? What makes you think that? Don't forget that discrete will benefit from these same process and technology advances.
 
trinibwoy said:
It's not obvious to me at all. You're implying that future discrete products at the X1800XT's market segment in terms of performance will be candidates for on-die (or package) integration?

Yes. If they truly want this to work. People will not adopt it unless they see performance.

What makes you think that?

My post gives very basic reasoning for this and I can't really break it down anymore than that.

Don't forget that discrete will benefit from these same process and technology advances.

For the companies that still want to do standalone GPUs, sure, but if Intel and AMD/ATI shift their resources to a CPU/GPU integrated package, then that only leaves Nvidia. I don't think Nvidia can take on Intel and AMD/ATI if they're all focusing on the same market segment, but that's just my opinion.
 
Let's look at a simple example.

Let's say that by 2008, you can fit 1 billion transistors on a die using the latest semiconductor processes in volume manufacturing.

Option 1: produce a CPU using 1 billion transistors on a die, and a separate GPU with 1 billion transistors on another die.

Option 2: produce a CPU+GPU that share the same 1 billion transistor budget.

Now, which design do you think is going to have both more CPU and more GPU power?
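The budget argument above can be sketched as simple arithmetic, under the (admittedly crude, hypothetical) assumption that performance scales with the transistors devoted to each function:

```python
# Crude sketch of the two options: assume performance tracks the
# transistor budget devoted to each function (a big simplification).
BUDGET = 1_000_000_000  # transistors per die on the 2008-era process

# Option 1: separate dies, each function gets the full budget.
discrete_cpu = BUDGET
discrete_gpu = BUDGET

# Option 2: one fused die splits the same budget between the two.
fused_cpu = BUDGET // 2
fused_gpu = BUDGET // 2

# Under this assumption the discrete pair has twice the CPU and twice
# the GPU resources of the fused part.
assert discrete_cpu == 2 * fused_cpu
assert discrete_gpu == 2 * fused_gpu
```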

If AMD produces such a combo unit and passes it off as the high end of the market, then Intel is going to ship a CPU with twice the number of cores/power of AMD's "high end" chip, Nvidia is going to produce a GPU with twice the number of transistors of ATI's "high end", and the combined PCI Express Intel CPU + NVidia GPU will wipe the floor with AMD+ATI's integrated solution in the high end.

Moreover, for business desktop computing needs, Intel will be selling a chip with huge advantages over AMD's "high end" chip as well.

The integrated design makes absolutely no sense at the high end. It severely limits your memory bus design options, it constrains transistor budgets, it will most likely be harder to test and have worse yields, it will be harder to cool, and it constrains power delivery (now you've got to get power for a super-hungry GPU and a super-hungry CPU all through the same CPU socket!).

If AMD/ATI tried this, it really would be a gift to Nvidia in the high end market.
 
DemoCoder said:
If AMD produces such a combo unit and passes it off as the high end of the market, then Intel is going to ship a CPU with twice the number of cores/power of AMD's "high end" chip, Nvidia is going to produce a GPU with twice the number of transistors of ATI's "high end", and the combined PCI Express Intel CPU + NVidia GPU will wipe the floor with AMD+ATI's integrated solution in the high end.

Precisely. The only way that a CPU/GPU combo can compete in even the mid-range market is if AMD manufacturing tech far surpasses that of the third party semis. But that's doubtful since each process node is probably gonna be adopted more slowly in the future and everybody is gonna be playing with the same ball for long periods of time.

Not to mention that AMD may still be pushing discrete mid-range+ solutions as well. This will keep competition alive in the discrete space and relegate integrated to the low-end.
 
DemoCoder said:
If AMD/ATI tried this, it really would be a gift to Nvidia in the high end market.
That's exactly why it makes sense for ATI and AMD to continue targeting discrete high-end parts, just as they have for the last five years, and as they have said they will continue to do in the future (per today's interviews at various websites).

Working on a future integrated market to take money away from other manufacturers does not preclude them from continuing to provide the discrete solutions the top-end market demands.

The people who run ATI and AMD are not stupid, they realise exactly what you've outlined.
 
The danger at the high end in my mind was never from a sudden left turn. It's from the "death of a thousand cuts" over a period of years. But, y'know, it has always been true in the high-end graphics business that you have to keep putting it out there every few months... and then again... and then again... etc. Today they are saying the right things for my ears, and I'd certainly rather have them doing that than not. We'll have lots of opportunities to see how it's going.

I guess what I'm saying is there isn't anything they could say today that would 100% convince me that they'll be top-dog material five years from now (tho I appreciate them saying it anyway!). The thing is, there isn't anything NV could say today that would 100% convince me that they will either. It's just not that kind of business in the first place.
 
DemoCoder said:
Let's say that by 2008, you can fit 1 billion transistors on a die using latest semiconductor processes in volume manufacturing.

Option 1: produce a CPU using 1 billion transistors on a die, and a separate GPU with 1 billion transistors on another die.

Option 2: produce a CPU+GPU that share the same 1 billion transistor budget.

Option 3: produce two CPU+GPU chips

It seems quite silly to make a comparison where one of the alternatives uses twice the amount of resources (1 vs. 2) and draw a conclusion from it without taking that into account. Sure, a CPU+GPU chip would have different requirements on the memory system than we're used to, but there's nothing fundamentally inefficient about the idea on a conceptual level. If the memory bandwidth/latency issue is solved, using a shared memory pool could make GPU multiprocessing much more efficient than it is today, as there's no need to keep two copies of everything.
 
I think AMD/ATI can dominate the mainstream, which is where the money will be. Obviously Nvidia will have to put out good high-end stuff if ATI's solutions end up performing really well because of the AMD hookup (like 2009 fab access). In the short term, I don't see anything changing. R600 and its R650 (or whatever) refresh will lead us through 2007+.
 
Option 4: use a multi-chip module, like C1, Smithfield, NV45, IBM big iron etc.

Option 5: relegate integrated graphics to where it belongs, the bottom of the barrel, and make higher-performance graphics properly (= discrete).

For anything you want to call "high end" with any confidence you need the dedicated bandwidth, not to mention the dedicated power circuitry.

That being said I'm pretty puzzled and worried by this deal. I can't shed the feeling that AMD wanted someone else but couldn't reach that agreement.

What need does AMD satisfy here? IMO it's only the IGPs, not even the base chipset tech (which they have). Discrete graphics may constitute revenue but it's also a big risk, and anyway, it's simply not necessary for them.

Have they snatched up ATI to prevent that certain someone else from doing so?
 
Rur0ni said:
I think AMD/ATI can dominate the mainstream, which is where the money will be. Obviously Nvidia will have to put out good high-end stuff if ATI's solutions end up performing really well because of the AMD hookup (like 2009 fab access). In the short term, I don't see anything changing. R600 and its R650 (or whatever) refresh will lead us through 2007+.

Wow. That is optimistic given their current 20% market share, and the fact that they've only been above 20% in two quarters of the last five years.

Why do you think most of Intel's huge market share is going to give up on them now, when they have better CPUs after years of dominance with worse CPUs?
 
vember said:
Option 3: produce two CPU+GPU chips

It seems quite silly to make a comparison where one of the alternatives uses twice the amount of resources (1 vs. 2) and draw a conclusion from it without taking that into account.

It's not silly to assume that everyone has the same semiconductor process node and the same die size restrictions.

Your option, (0.5 CPU + 0.5 GPU) × 2, still doesn't alleviate the problems with power, heat, memory, and the cost of engineering a mainboard around this dual design.

Now your system needs a much more complicated memory system (I'm assuming you're not proposing both CPU+GPU chips share the same memory bus!), resulting in a substantially more expensive system that still isn't upgradable, and for dubious benefits. Not only that, but now the "system memory" of your PC has to be GDDR4/GDDR5 or some such, and if you want 2-4 GB of main RAM, it's going to be hugely freakin' expensive.

If the memory bandwidth/latency issue is solved, using a shared memory pool could make GPU multiprocessing much more efficient than it is today as there's no need to keep two copies of everything.

Uh huh, shared memory between two GPUs which typically max out their memory? Completely unworkable, and any scheme to impose an arbiter and/or cache coherency would just add to bus traffic and slow the whole thing down.

Two GPUs sharing the same memory bus, especially with the CPU sharing it as well, is a TERRIBLE idea.
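The contention argument can be put in rough numbers. All figures here are hypothetical, just to illustrate why a GPU that already saturates a dedicated bus suffers once it has to share one:

```python
# Hypothetical bandwidth arithmetic: clients contending for one bus
# each see, at best, an equal share of it (real arbitration and
# coherency overhead only make this worse).
def per_client_bandwidth(total_gb_s: float, clients: int) -> float:
    """Best-case share per client on a single shared memory bus."""
    return total_gb_s / clients

dedicated = per_client_bandwidth(64.0, 1)  # one GPU with its own bus
shared = per_client_bandwidth(64.0, 3)     # CPU + two GPUs on one bus

# A GPU that "typically maxes out" 64 GB/s gets roughly a third of
# that once a CPU and a second GPU contend for the same bus.
print(dedicated, round(shared, 1))  # → 64.0 21.3
```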
 
Rollo said:
Wow. That is optimistic given their current 20% market share, and the fact that they've only been above 20% in two quarters of the last five years.

Why do you think most of Intel's huge market share is going to give up on them now, when they have better CPUs after years of dominance with worse CPUs?

Most of that Intel market share is not really consumer-based in the sense that the purchaser of those systems is "buying Intel". Most of those consumers are buying "Dell", "HP", etc.

I don't think there is any question that AMD is entering a rough patch, tho I think they can fight it off with pricing appealing to folks who are in a position to upgrade within an existing AMD ecosystem. For a while, anyway. But by the time they are ready to compete strongly at the top again, they will also have a platform to do it with that is going to appeal to the Dells and HPs more strongly than it has, and that equates to market share.

It seems very clear to me that when Ruiz kept saying "customers are demanding" yesterday (and I don't remember how often they said it, but it was a lot), he did not mean enthusiasts who buy boxed AMD CPUs. Not in the least. He meant the Dells and HPs of the world. And that's what will drive market share, even if it takes a year or two.
 
It's pretty common knowledge that the R600 series chips, and quite probably their refresh, are so far along to completion that AMD wouldn't dare intervene in their launches. Hell, the merger isn't even going to be 100% complete until later this year, around when the cards launch. It's after 2007 that I personally worry a bit. The only other issue I wonder about is whether, if the R600 series flops, especially in the high end, against Nvidia's cores, that would lead to a decline in how much R&D ATI's division can put into any future high-end parts. AMD simply won't see it as very profitable.

And lastly, growth rate. Nvidia can spend more and more on R&D and normally does. This is what companies do to grow and succeed. I wonder if AMD will allow the same amount of growth in ATI's division when their profits are tied to each other. Again, there's the issue of pairing with a company that has historically been second best, living by the skin of its teeth in terms of profits/expenditures.


For us enthusiasts, I see a bleak future as far as ATI goes. I still can't believe this whole thing; it's like the company I know just dropped off the face of the earth. It's that hard to grasp. If only choices weren't so damn limited.


I do believe ATI pushed for this, by the way, which I still don't understand unless the company was in trouble and ran to a safe haven. If AMD wanted someone for integrated and low-end solutions while looking at new ways to maybe assimilate graphics into a central processor, there are a few other, much cheaper roads they could have gone down.
 
SugarCoat said:
It's pretty common knowledge that the R600 series chips, and quite probably their refresh, are so far along to completion that AMD wouldn't dare intervene in their launches. Hell, the merger isn't even going to be 100% complete until later this year, around when the cards launch. It's after 2007 that I personally worry a bit. The only other issue I wonder about is whether, if the R600 series flops, especially in the high end, against Nvidia's cores, that would lead to a decline in how much R&D ATI's division can put into any future high-end parts. AMD simply won't see it as very profitable.

And lastly, growth rate. Nvidia can spend more and more on R&D and normally does. This is what companies do to grow and succeed. I wonder if AMD will allow the same amount of growth in ATI's division when their profits are tied to each other. Again, there's the issue of pairing with a company that has historically been second best, living by the skin of its teeth in terms of profits/expenditures.

For us enthusiasts, I see a bleak future as far as ATI goes. I still can't believe this whole thing; it's like the company I know just dropped off the face of the earth. It's that hard to grasp. If only choices weren't so damn limited.

I do believe ATI pushed for this, by the way, which I still don't understand unless the company was in trouble and ran to a safe haven. If AMD wanted someone for integrated and low-end solutions while looking at new ways to maybe assimilate graphics into a central processor, there are a few other, much cheaper roads they could have gone down.

Exactly the way I'm thinking too: the short-term outlook won't change much at all, but in the longer run (a year or more, for the graphics industry) there are just too many loose ends where things can go wrong.

Yeah, it is very strange that ATI wanted the merger; it just seems like they were afraid of something.
 