Nvidia buys ULI

Yes, timing with IGP was an important factor; they were also available when Turion was released, so many of the Turion mobiles that have been announced are using XPRESS 200. Couple this with the fact that they are already advertising it as a Vista-ready IGP, and the OEMs kinda like it (turnaround times for lower-end / business SKUs are longer than for the high end, so something that will run Vista is appealing).
 
Jawed said:
Does NVidia have much OEM penetration with its chipsets?

Doesn't ATI's advantage also come from IGP? Something that NVidia seems to have ignored for a long time.

Jawed
I don't really think they ignored their IGP chipsets, but I think no one wanted them. As far as I can remember, NV's IGP first and last appeared on nForce2 with something like MX200 graphics (anyone can correct this, please), but surely at that time it was the most expensive IGP ever seen for a K7 board. If my memory serves me right, it was almost the same street price as an ASUS A7V at that time.
 
Sxotty said:
I think ATI was simply cheaper than Nvidia. Nvidia was always mighty proud of their chipsets, judging by the price tags they set. Perhaps they have learned their lesson and the ULi acquisition signals they will change their ways...
Personally, I think NVIDIA just didn't have enough engineers to properly focus on both the high end and the low end; recently they focused on the upper low end (C51), and you may easily notice they haven't released anything new on the high end in a long time, IMO mostly because of that and the Intel nForce4. The only "refresh" they had is the 2x16 SLI chipset, which obviously was a spinoff of that R&D, since they needed to move away from a single-chip design again to make integrated graphics possible (and it had benefits for their Intel market, too).

As was noted by geo & others, NVIDIA is much more focused on high-margin products. In fact, they are so focused on it that this is the simple reason they cancelled development of their Intel integrated graphics chipsets, even though it was confirmed six months or so ago that work was progressing nicely. Their official reason is that they felt Intel was too competitive in that segment for them to get any substantial profit out of it.

From my POV, as I said above, they didn't have the engineering NUMBERS to diversify sufficiently and still feel they'd stay competitive with high margins in all of those segments. Even if you've got the best engineers in the world, you can't expect two of them to work part-time on a big project and be done with it after a month.
Something that wasn't talked about much, though, is that even though ULi only cost NVIDIA about $52M, it is the largest acquisition in their history by the sheer number of employees gained through it. Indeed, ULi has 213 active employees according to http://www.corporateinformation.com/snapshot.asp?Cusip=C7609O800

Assuming even 50% of those are marketing/sales/etc. (which also matters, because that's key for NVIDIA's Asia strategy), that still means they got nearly 100 new engineers out of the deal. That's more than they got out of the 3DFX deal (because it was an IP purchase, and not an acquisition, many 3DFX employees were not given the option of joining NVIDIA, or didn't want to).

From my POV, the primary reason behind this acquisition, besides the Asia strategy (what happened to NV's Indian development center they announced, anyway?), is that this will let them diversify more in the chipset market without spreading themselves too thin. This should increase their number of chipset engineers by a good 30-40%, I think, although I don't have precise numbers on how many engineers NVIDIA's chipset division currently has.
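To make the back-of-the-envelope arithmetic explicit (the 50/50 split and the resulting headcount range are my own rough assumptions following the guesses above, not reported figures), here is a quick sketch:

    # Rough estimate of engineers gained from ULi, and what a 30-40% boost would imply.
    # Assumptions (mine, not reported figures): about half of ULi's 213 staff are
    # engineers, and the 30-40% increase is the guess made in the post above.
    uli_headcount = 213
    engineer_share = 0.5                      # assumed engineering vs. sales/marketing split
    uli_engineers = uli_headcount * engineer_share
    print(round(uli_engineers))               # ~106, i.e. "nearly 100 new engineers"

    # If ~106 engineers amounts to a 30-40% increase, NVIDIA's existing chipset
    # engineering team would be somewhere around this size:
    for boost in (0.30, 0.40):
        print(round(uli_engineers / boost))   # ~355 at 30%, ~266 at 40%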

The following is purely my personal opinion of what NVIDIA should do, but I feel they should make a very clearly divided product line with distinct low/mid/high-end products. The low end should use single-chip designs (no separate south/northbridges), capable of being connected to an integrated graphics chip, with a couple of other features on the die too (audio? more I/O? etc.). Then you'd have two different southbridges, one for the mid end and one for the high end, and two northbridges, one for AMD and one for Intel.

This would represent 2 single-chip designs for the low-end (1 for AMD, 1 for Intel), one chip for integrated graphics & minor improvements, two chips for northbridges (1 for AMD, 1 for Intel), and two chips for southbridges (1 high-end, 1 mid-end). That's 7 chips total, which is a lot, considering NVIDIA is currently nearer to 4-5 distinct chips, but I feel that's their only way to be competitive with high margins in all segments of the industry.
It also makes the lineup much more modular than their current solution: every time AMD/Intel introduce a new socket, they'd just make a new northbridge, with southbridges upgraded "when needed"; and the integrated graphics could be kept compatible and upgraded "when needed" every generation too. This would be a much better use of their assets than the current, IMO relatively inefficient, model that keeps them in the high-end segment of the market, and the mid-end one to a lesser extent too.
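Just to show where the count of seven distinct chips comes from, here is the hypothetical lineup restated as a quick sketch (purely an enumeration of my own proposal above, not anything NVIDIA has announced):

    # Hypothetical chipset lineup proposed above; the only point here is that it
    # adds up to seven distinct chips.
    lineup = {
        "single-chip low-end": ["AMD", "Intel"],           # IGP-capable, extra I/O on die
        "integrated graphics": ["shared IGP companion"],   # reused, upgraded "when needed"
        "northbridge":         ["AMD", "Intel"],           # redone for each new socket
        "southbridge":         ["mid-end", "high-end"],    # upgraded "when needed"
    }
    total_chips = sum(len(parts) for parts in lineup.values())
    print(total_chips)  # 7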


Uttar
 
digitalwanderer said:
I'm kind of freaking at the ATi numbers too, I still don't really consider them as "players" yet in the mobo market....but I'm usually a bit behind the times. (He says lovingly patting his AGP rig)

What's this about ATi's chipset margins? It isn't well known by me and I'd really appreciate it if you could bring me up to speed real quick.

ATI's IGP margins are single-digit. For a while, if memory serves, they were talking mid single digits, and now they're saying high single digits. Otoh, Dave Orton also said that to stay in business, across the company (i.e. blended over all the areas and parts 'n pieces), you need to be at 30%. So single digits is Very Not Good.

They do, they say, Have a Plan, however.
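To illustrate why single-digit chipset margins clash with a ~30% blended target, here is a toy blended-margin calculation (the revenue mix and the non-chipset margin are made-up numbers for illustration, not ATI's actual figures):

    # Toy blended-margin example: a low-margin IGP line drags the company average
    # toward, or below, the ~30% Orton cited. All numbers are illustrative only.
    segments = {
        "IGP chipsets":    {"revenue": 100.0, "margin": 0.07},  # "high single digits"
        "everything else": {"revenue": 400.0, "margin": 0.37},
    }
    gross_profit = sum(s["revenue"] * s["margin"] for s in segments.values())
    revenue = sum(s["revenue"] for s in segments.values())
    print(f"blended margin: {gross_profit / revenue:.1%}")     # ~31% with this mix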
 
They clearly outlined in the latest CC that chipset margins were being hurt by poor cost control, one of the key factors being that they were still being fabbed on 0.13u and that transition to 0.11u was now underway.
 
Sxotty said:
The X-bit one included Intel chipsets, where ATI got an enormous boost due to Intel not providing enough chipsets, so it doesn't mean too much. I find the Inquirer numbers questionable myself, but one would assume that they actually checked before posting...
All you have to do is look at Intel tanking and ATI rising at the same rate.

No, Fuad's numbers are way out of whack. Look at Nvidia's total unit chipset shipments for Q3/05 on that Xbit graph. They shipped over 5 million, and Fuad claims only 10% of that.

LOL, he just updated his article to correct the mistake, poor wretch. :LOL:
 
kemosabe said:
They clearly outlined in the latest CC that chipset margins were being hurt by poor cost control, one of the key factors being that they were still being fabbed on 0.13u and that transition to 0.11u was now underway.

Certainly changing process sounded like a key point, but I got the impression there are more strings to that bow that they didn't talk about. Hence my vagueness. Maybe someone will get a chance to ask them in more detail sometime soon.
 
I've lost track of which product segments Orton was referring to, but "substrate" and "packaging" were recurring themes when discussing disadvantageous cost structures. Crowley went as far as saying that the new chap hired to optimize operations (Dougherty) would be looking into possible changes in suppliers.

Something just struck me......NVDA and ULi vs. ATI and......Macci. ;)
 
geo said:
ATI's IGP margins are single-digit. For a while, if memory serves, they were talking mid single digits, and now they're saying high single digits. Otoh, Dave Orton also said that to stay in business, across the company (i.e. blended over all the areas and parts 'n pieces), you need to be at 30%. So single digits is Very Not Good.

They do, they say, Have a Plan, however.
Thanks, that makes sense.

I think Their Plan is to make up the difference on other parts until they can get the costs down/get their margins up, at least that's the only thing I can think of.
 
trinibwoy said:
Not much of a correction. This guy is quite a buffoon.
It was a mighty sweeping update huh?
Inq said:
Article Withdrawn

BTW, my laptop also has an ATI integrated chipset in it, but to be honest I have not been really happy with it, and I wasn't gaming or anything, just regular word processing and such. USB was flaky for me and it would crash whenever I took a USB drive out, even if I clicked the little stop-hardware icon and did everything perfectly. (On my nForce4 I never stop the hardware, I just pull it out ;) ) Anyway, I reinstalled Windows (and made it look ugly so it would be fast), downloaded ATI's newest drivers, and that behavior stopped at least. I have been satisfied since then, but it was pretty frustrating for a while.
 
I hope ATI can get ImgTec, instead of Nvidia..... Nvidia already has a company that made tile-based deferred renderers, GigaPixel. ATI could use ImgTec.

Nvidia would probably get Ageia though.
 
Megadrive1988 said:
I hope ATI can get ImgTec, instead of Nvidia..... Nvidia already has a company that made tile-based deferred renderers, GigaPixel. ATI could use ImgTec.

Nvidia would probably get Ageia though.
Oh man! Think of the consequences if NVIDIA had Spectre AND Series 5! It'd be like Sauron getting the One Ring!
 
At last check Imagination had a market cap of over 140M pounds, or about $250M US. Add a premium onto that and you've got a very large mouthful for either ATI or NVDA to swallow. Not sure their shareholders would approve, considering IMG isn't profitable yet (although it's making progress).
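For a rough sense of scale next to the ULi deal (the takeover premium here is just a typical range I'm assuming, not anything reported):

    # Rough scale comparison: a hypothetical ImgTec buyout vs. the ~$52M ULi deal.
    # The acquisition premium is an assumed, typical range, not a reported figure.
    img_market_cap_usd = 250e6                 # ~140M GBP at the time
    uli_price_usd = 52e6
    for premium in (0.25, 0.30):
        price = img_market_cap_usd * (1 + premium)
        print(f"{premium:.0%} premium -> ${price / 1e6:.0f}M, about {price / uli_price_usd:.0f}x the ULi deal")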
 
Megadrive1988 said:
I hope ATI can get ImgTec, instead of Nvidia..... Nvidia already has a company that made tile-based deferred renderers, GigaPixel. ATI could use ImgTec.

Nvidia would probably get Ageia though.

Wasn't ArtX ATI's acquisition of a company with TBDR IP?
 
There were never any reports about ArtX having TBDR IP as far as I know. Other than the Flipper GPU, they had core-logic expertise which helped ATI market their first integrated chipsets.
 
OH NOES, THE TBDR MYTHS RISE FROM THE GRAVE!!!!:)

Seriously, aren't we quite a bit over that stage? My guess would be that if having a TBDR-based architecture would have really meant such a huge leap, one of the big boys would've attempted to build one already, wouldn't you agree?
 