Intel to make bid for nVidia? **Reuters**

What about the future growth of Nvidia dominating the GPU *and* CPU market, becoming a major competitor to AMD and Intel? It's not so crazy; give it 5 years. Shareholders know their potential payoff is way bigger if they hold out and stay independent.

It's crazy because NV has no x86 patents or know-how.
How could they design a CPU without them, and with no factories of their own?
They cost a pretty penny, you know? ;)

Buying VIA? Their C3/C7 is anemic, to say the least.
 
Will this end the same way the AMD/ATI story ended? Everyone said "Nooo, it won't happen!" and then it happened?
 
It's crazy because NV has no x86 patents or know-how.
How could they design a CPU without them, and with no factories of their own?
They cost a pretty penny, you know? ;)

Buying VIA? Their C3/C7 is anemic, to say the least.

There are plenty of small startups that have x86 licensing and experience... perhaps if Nvidia were to buy one? Oh wait... http://www.theinquirer.net/default.aspx?article=34461

Anyway, these days a modern shader core is *very* similar to a CPU, with several of the same challenges. At an architecture level, Nvidia's engineers would be able to step up to a full general-purpose core relatively quickly. Take a look at the SM 4.0 spec: aside from the ISA being non-x86, there's almost nothing stopping completely generalized computation... all that experience with the shader cores puts them well within their ability to engineer an x86 core.

No factories? Yes, that is a problem; they would have to move above and beyond TSMC's capacity, but this can happen gradually. Nvidia can start with the high-end, high-margin, low-volume market and expand slowly. These days a dual-core x86 CPU is around 30% of the die size of a modern GPU. In the future, as transistors become commoditized, it's no stretch to imagine Nvidia spending 10% of their die to drop down an x86 core or four. Intel can only keep going with this multi-core thing for so long; thread parallelism is limited at best with normal user workloads. In 5 years, CPUs will have become a throwaway number of transistors, so no... it's not as crazy as it sounds.
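
To put rough numbers on that die-area argument, here's a quick back-of-the-envelope sketch; every figure in it (die sizes, shrink factor, core count) is an assumption for illustration, not a measured number:

```python
# Back-of-the-envelope die-area estimate. All figures are illustrative assumptions.
gpu_die_mm2 = 480.0         # assumed area of a high-end GPU die
dual_core_cpu_mm2 = 145.0   # assumed area of a contemporary dual-core x86 die

print(f"dual-core CPU vs. GPU die today: {dual_core_cpu_mm2 / gpu_die_mm2:.0%}")

# Assume each full process shrink roughly halves the area of the same logic.
shrinks = 2
one_core_future_mm2 = (dual_core_cpu_mm2 / 2) * 0.5 ** shrinks
cores = 4
budget_mm2 = cores * one_core_future_mm2
print(f"{cores} shrunk x86 cores: ~{budget_mm2:.0f} mm^2, "
      f"about {budget_mm2 / gpu_die_mm2:.0%} of a same-size GPU die")
```

With those made-up numbers you land in the same rough ballpark as the 10% figure above; the point is the trend, not the exact percentage.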
 
Will this end the same way the AMD/ATI story ended? Everyone said "Nooo, it won't happen!" and then it happened?

Highly doubtful. :LOL:

Well, that's the thing. You're both right. It is highly doubtful, but the previous experience means no one really has the cred to say it's impossible... unless Intel wants to, and they've already said they don't do that kind of thing.
 
There are plenty of small startups that have x86 licensing and experience... perhaps if Nvidia were to buy one? Oh wait... http://www.theinquirer.net/default.aspx?article=34461

Anyway, these days a modern shader core is *very* similar to a CPU, with several of the same challenges. At an architecture level, Nvidia's engineers would be able to step up to a full general-purpose core relatively quickly. Take a look at the SM 4.0 spec: aside from the ISA being non-x86, there's almost nothing stopping completely generalized computation... all that experience with the shader cores puts them well within their ability to engineer an x86 core.

No factories? Yes, that is a problem; they would have to move above and beyond TSMC's capacity, but this can happen gradually. Nvidia can start with the high-end, high-margin, low-volume market and expand slowly. These days a dual-core x86 CPU is around 30% of the die size of a modern GPU. In the future, as transistors become commoditized, it's no stretch to imagine Nvidia spending 10% of their die to drop down an x86 core or four. Intel can only keep going with this multi-core thing for so long; thread parallelism is limited at best with normal user workloads. In 5 years, CPUs will have become a throwaway number of transistors, so no... it's not as crazy as it sounds.

You begin to interest me. :smile:

Do you understand that a lot of the above is a darn good argument for Intel pursuing such, even as a hostile? How much *really good* shader core expertise is there in the world, particularly with the patent situation?

Still, I don't think it will happen either for reasons stated above. It is stupid to do a hostile on a company where the people are the #1 asset in the first place --because they can just walk, and likely will. They have to be wooed with a vision, because they can't really be acquired by force --slavery being illegal and all. :LOL:

Edit: Oh, and it seems unlikely that Intel would invest in PVR and buy NV in the same week. :p
 
You begin to interest me. :smile:

Do you understand that a lot of the above is a darn good argument for Intel pursuing such, even as a hostile? How much *really good* shader core expertise is there in the world, particularly with the patent situation?

Still, I don't think it will happen either for reasons stated above. It is stupid to do a hostile on a company where the people are the #1 asset in the first place --because they can just walk, and likely will. They have to be wooed with a vision, because they can't really be acquired by force --slavery being illegal and all. :LOL:

Yes, I agree that it would be in Intel's best interest to acquire Nvidia. As you pointed out, though, a hostile takeover isn't going to work, and as far as a non-hostile one goes... well, as I said, Nvidia has nothing to gain and everything to lose.

The next couple of years will be very interesting. Media and stream processing is just beginning to enter its prime with HD video, HD gaming, and mobile media, and there's *tons* of headroom with rising resolutions, rising bandwidth, and the growing ubiquity of media devices. I sense a power shift coming... from the giant CPU makers to the media processing companies. After all, how many applications are there that really stress CPUs nowadays? FP supercomputing farms, physics, AI, database sorting/processing... every single one of those maps wonderfully to GPUs... hmmm.
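
Just to illustrate why those workloads map so well: they all look like one independent operation applied across a huge data set, which is exactly the shape a wide shader array wants. A toy sketch (the per-element "physics" is deliberately trivial and purely illustrative):

```python
# Toy data-parallel workload: one independent update per element, no cross-element
# dependencies. On a CPU this is a serial loop; on a GPU each element is a thread.

def integrate(particle, dt=0.01, gravity=-9.81):
    x, v = particle
    v += gravity * dt      # identical arithmetic for every particle
    x += v * dt
    return (x, v)

particles = [(float(i), 0.0) for i in range(10000)]
particles = [integrate(p) for p in particles]
print(particles[:3])
```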
 
There are plenty of small startups that have x86 licensing and experience... perhaps if Nvidia were to buy one? Oh wait... http://www.theinquirer.net/default.aspx?article=34461

Anyway, these days a modern shader core is *very* similar to a CPU, with several of the same challenges. At an architecture level, Nvidia's engineers would be able to step up to a full general-purpose core relatively quickly. Take a look at the SM 4.0 spec: aside from the ISA being non-x86, there's almost nothing stopping completely generalized computation... all that experience with the shader cores puts them well within their ability to engineer an x86 core.

No factories? Yes, that is a problem; they would have to move above and beyond TSMC's capacity, but this can happen gradually. Nvidia can start with the high-end, high-margin, low-volume market and expand slowly. These days a dual-core x86 CPU is around 30% of the die size of a modern GPU. In the future, as transistors become commoditized, it's no stretch to imagine Nvidia spending 10% of their die to drop down an x86 core or four. Intel can only keep going with this multi-core thing for so long; thread parallelism is limited at best with normal user workloads. In 5 years, CPUs will have become a throwaway number of transistors, so no... it's not as crazy as it sounds.

There are several ex-AMD guys working at Nvidia.
In fact, not long ago an NV patent of relevance was discussed right here at B3D, with at least one of the authors being a former AMD engineer.
That doesn't mean they can use the patents they invented while at AMD in a new company. :D

The argument that Intel would spend 10 billion just to avoid their former engineers going NV's route is nonsense.
Why would they fire them all and close down the R&D center in the first place?

Lastly, the difference in scale between Intel's and NV's resources is enormous.
In head count, money, expertise, etc., it's just another league.


Wanna know what I think?

Which group has been hit hardest by the pink-slip spree over at Intel recently?
Mostly marketing guys and middle management.
Who has arguably one of the best marketing departments in the chip business?

and

Who doesn't have high-end and mainstream GPU expertise (come on, don't tell me ImgTec's technology is enough for the discrete market; even Real3D was bought from Lockheed)?
Who has not only said expertise, but also at least 3 major established brands worldwide (GeForce, nForce, SLI)?
 
Wanna know what I think?

Which group has been hit hardest by the pink-slip spree over at Intel recently?
Mostly marketing guys and middle management.
Who has arguably one of the best marketing departments in the chip business?

and

Who doesn't have high-end and mainstream GPU expertise (come on, don't tell me ImgTec's technology is enough for the discrete market; even Real3D was bought from Lockheed)?
Who has not only said expertise, but also at least 3 major established brands worldwide (GeForce, nForce, SLI)?

I think you're missing the point here. No one is arguing about Intel wanting Nvidia... I think it's a given that if Intel had the opportunity to acquire Nvidia, they would. What I'm saying is that Nvidia would never sell out to Intel, because they are situated to become a major player *on their own*. Also pointed out here was the fact that Intel would fail in a hostile takeover of Nvidia, so that's out too.

I never said that Nvidia would try to use those ex-Intel engineers' patents, of course not. However, there's nothing stopping them from developing new and innovative ways to come up with high-performance x86 cores, leveraging their engineers' past CPU architecture experience. There are *plenty* of patents in place for the high-performance shader core architecture that Nvidia currently uses; as I said before, there's a very small leap from the current ISA to uCode that can implement an x86 ISA. Most CPU patents deal with unique solutions to problems you run into trying to execute uCode ops quickly on any 'processing core'. Nvidia has already hit all those problems, and they've figured out their own way around them... your argument that it's just not possible to engineer a competitive core doesn't make sense; it didn't stop AMD, and it won't stop companies in the future.
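
To make the ISA-to-uCode point concrete, here's a toy decoder sketch. The instruction format and micro-ops are invented for illustration; they are not real x86 encodings or any vendor's actual micro-op set.

```python
# Toy sketch of cracking a CISC-style register-memory instruction into simple
# load/ALU micro-ops. Formats here are invented purely for illustration.

def decode(insn):
    op, dst, src = insn.split()
    uops = []
    if src.startswith("["):                   # memory source operand?
        uops.append(("LOAD", "tmp0", src))    # split the memory access out...
        src = "tmp0"
    uops.append((op.upper(), dst, dst, src))  # ...so the ALU op stays simple
    return uops

for insn in ["add eax [mem0]", "sub ebx eax"]:
    print(insn, "->", decode(insn))
```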
 
I think you're missing the point here. No one is arguing about Intel wanting Nvidia... I think it's a given that if Intel had the opportunity to acquire Nvidia, they would. What I'm saying is that Nvidia would never sell out to Intel, because they are situated to become a major player *on their own*. Also pointed out here was the fact that Intel would fail in a hostile takeover of Nvidia, so that's out too.

I never said that Nvidia would try to use those ex-Intel engineers' patents, of course not. However, there's nothing stopping them from developing new and innovative ways to come up with high-performance x86 cores, leveraging their engineers' past CPU architecture experience. There are *plenty* of patents in place for the high-performance shader core architecture that Nvidia currently uses; as I said before, there's a very small leap from the current ISA to uCode that can implement an x86 ISA. Most CPU patents deal with unique solutions to problems you run into trying to execute uCode ops quickly on any 'processing core'. Nvidia has already hit all those problems, and they've figured out their own way around them... your argument that it's just not possible to engineer a competitive core doesn't make sense; it didn't stop AMD, and it won't stop companies in the future.

I still believe AMD bought ATI primarily for the chipset and integrated GPU businesses.
It was a "quick way in", if you will.

But, as Intel showed at IDF recently, they are already working on highly parallel computation for the future.
Why would they buy NV for that?

And NV itself?
What possible gain could they tangibly get from a price war with both AMD and Intel in the CPU market, should they choose to go that way? None.

A simple "on the side" project (Xbox 1) was enough to have the GeForce FX almost ruin the company beyond the point of no return (relative to the competition with their rival, ATI). It was a distraction.
 
Mergers destroy a company just as often as they improve it. Mergers are done for a wide variety of reasons, not all of them having to do with synergy or combining resources. Some are done simply to get the accounts/customer Rolodex (e.g. Oracle acquiring PeopleSoft), some are done just to get employees, and some are done because a company was undervalued and buying it out acquires its pre-existing cash flow and assets.

The dangerous thing about mergers between large companies is the loss of focus. Instead of a synergistic amplification of resources, you get a company where every team is bogged down by the merger process, where differing corporate cultures lead to political turf wars, and where the company, which may previously have been laser-focused on one niche, bites off too grandiose a vision of entering every market sector with its newfound power.

The worrying thing about the ATI merger is the simple fact that the drive to build integrated bread-and-butter platforms and enter the mobile space may turn the high-end GPU race into an "R&D" project of less concern and status within the company, with future add-in-board high-end GPUs becoming AMD's version of Itanic or Intel's 80-core monster -- nice research projects to trot out to the press at conferences, but not their main focus.

You can crow all you want about the intent to keep ATI's business unit independent, but the board of directors and management is under no obligation to continue this arrangement ad infinitum. In 2001, I worked for a small company acquired by a Fortune 100. The CEO/CTO and management were promised they'd keep their independence. We could stay in our own office building. We could work on our own products, as long as they integrated with the parent's products and the branding, support, and marketing were coordinated. And this lovely arrangement continued for 2 years, but after 2 consecutive bad years for the parent company, our independence was cut short. We were rolled into the parent, we were forced to move to their HQ building, and our teams were broken up and redistributed to other managers. Many of our promising products were canceled in favor of those that were already making money or promised to make money the soonest.


All I'm saying is, the future impact of the AMD/ATI merger on the GPU market is uncertain, and it is by no means a slam dunk against Nvidia. I worry about premature "maturity" of the GPU market, of this merger turning the contest into a replay of the CPU market, with real innovation giving way to a treadmill of clocks and tweaks.

Yes, it may have been good for the shareholders, but what about the stakeholders? Just because a merger is good for the owners doesn't mean it's good for *us*. There are plenty of times society has chosen to disallow mergers because, even though the merger was financially profitable for the two companies involved, it was bad for the market overall.

I preferred the ol' days of ATI vs NVidia, mano a mano. I would hate to see this turn into Intel business unit vs AMD business unit. And ATI fanboys had better hope that AMD doesn't kill off NV and this turns into AMD vs Intel GMA*, because that will be the death knell of the high-end GPU market.
 
I still believe AMD bought ATI primarily for the chipset and integrated GPU businesses.
It was a "quick way in", if you will.

But, as Intel showed at IDF recently, they are already working on highly parallel computation for the future.
Why would they buy NV for that?

And NV itself?
What possible gain could they tangibly get from a price war with both AMD and Intel in the CPU market, should they choose to go that way? None.

A simple "on the side" project (Xbox 1) was enough to have the GeForce FX almost ruin the company beyond the point of no return (relative to the competition with their rival, ATI). It was a distraction.

Remember that this is all speculation, but I'm not saying Nvidia would attempt to enter the CPU market as a direct, discrete CPU competitor to AMD/Intel. You're right, there'd be no way they could compete on price/capacity.

However, what's to stop Nvidia from putting x86 cores on their own GPUs? In 2 years, a couple of x86 cores plus cache would take up a minuscule amount of real estate on the die compared to the GPU logic. What if people could buy a GPU and quad-core CPU on one die? No PCIe bottleneck, easy cache coherency, massive performance, true native shared memory. This *IS* the way things will be going in the future, no matter what... now the debate is whether Intel will put GPUs on a CPU die, or Nvidia will put CPUs on their die.
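
For a rough feel of the PCIe overhead being argued away here, a quick sketch; the bandwidth and per-frame data figures are assumptions, not measurements:

```python
# Rough cost of shuttling data over PCIe vs. keeping it on one die.
# All bandwidth and data-size figures below are illustrative assumptions.
data_gb = 16.0 / 1024        # assume ~16 MB of data moved per frame
pcie_gb_per_s = 4.0          # assumed effective PCIe bandwidth
on_die_gb_per_s = 80.0       # assumed local/on-die bandwidth

pcie_ms = data_gb / pcie_gb_per_s * 1000
local_ms = data_gb / on_die_gb_per_s * 1000
frame_budget_ms = 1000 / 60  # 60 fps frame budget

print(f"copy over PCIe : {pcie_ms:.2f} ms")
print(f"same data local: {local_ms:.2f} ms")
print(f"PCIe copy alone eats {pcie_ms / frame_budget_ms:.0%} of a 60 fps frame")
```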

I am saying that Nvidia is actually better positioned here... GPUs are much more complex than CPUs nowadays, and much bigger. Nvidia already knows how to work on the order of a billion transistors, something Intel (not counting cache, which is highly regular) does not. Additionally, the move from engineering GPU shader -> CPU is a much easier one than the reverse; there are dozens of ultra-complicated graphics subsystems that Intel would need to invent from scratch (e.g. EarlyZ, ROP, AA, texturing, sampling, clipping; the list goes on).

Nvidia already makes motherboard chipsets... that means they already make memory controllers, northbridges, everything on a southbridge, Ethernet, USB, RAID controllers, *everything*. They own this IP. They already make the most powerful processor in the computer, the GPU. They already have a massive amount of high-performance processor core experience... see where I'm going with this? Transistor real estate is becoming dirt cheap. What's to stop Nvidia (in 5+ years, mind you) from engineering an *entire computer* onto ONE die? I say nothing. They already own all the IP except the actual CPU, and really it wouldn't be that hard to make one. I say Nvidia is going to blow up into a huge company; that is my prediction.
 
Remember that this is all speculation, but I'm not saying Nvidia would attempt to enter the CPU market as a direct, discrete CPU competitor to AMD/Intel. You're right, there'd be no way they could compete on price/capacity.

However, what's to stop Nvidia from putting x86 cores on their own GPUs? In 2 years, a couple of x86 cores plus cache would take up a minuscule amount of real estate on the die compared to the GPU logic. What if people could buy a GPU and quad-core CPU on one die? No PCIe bottleneck, easy cache coherency, massive performance, true native shared memory. This *IS* the way things will be going in the future, no matter what... now the debate is whether Intel will put GPUs on a CPU die, or Nvidia will put CPUs on their die.

I am saying that Nvidia is actually better positioned here... GPUs are much more complex than CPUs nowadays, and much bigger. Nvidia already knows how to work on the order of a billion transistors, something Intel (not counting cache, which is highly regular) does not. Additionally, the move from engineering GPU shader -> CPU is a much easier one than the reverse; there are dozens of ultra-complicated graphics subsystems that Intel would need to invent from scratch (e.g. EarlyZ, ROP, AA, texturing, sampling, clipping; the list goes on).

Nvidia already makes motherboard chipsets... that means they already make memory controllers, northbridges, everything on a southbridge, Ethernet, USB, RAID controllers, *everything*. They own this IP. They already make the most powerful processor in the computer, the GPU. They already have a massive amount of high-performance processor core experience... see where I'm going with this? Transistor real estate is becoming dirt cheap. What's to stop Nvidia (in 5+ years, mind you) from engineering an *entire computer* onto ONE die? I say nothing. They already own all the IP except the actual CPU, and really it wouldn't be that hard to make one. I say Nvidia is going to blow up into a huge company; that is my prediction.

I believe you are on to something.

Indeed, if we look at the current trend, most highly parallel uses for desktop and workstation computers revolve around non-user-controlled applications (by that I mean things like offline rendering, speech recognition, Folding, etc.).

What all of those have in common is that they don't depend directly on real-time user control or interaction.
There is a physical limit to what an average human being can do simultaneously on a single computer in order to take advantage of multitasking:
- web browsing, watching digital video, listening to music, playing games, etc.

That's the point where there is more direct benefit in improving those areas through specialized computational power (transistors/specialized cores inside the CPU) than in simply multiplying the existing general-purpose cores.

Besides, DX10 does in fact place an emphasis on multi-threading and resource sharing at the GPU level, so you could, in theory at least, run a 3D game, a Folding client and hardware-accelerated HD video in a window.
This would take the focus off CPUs (even if they are still needed by then) at some point, and it could hurt sales for both Intel and AMD.
All NV would need is some kind of simple multi-core x86 CPU (like a multiplied VIA C7, for instance) controlling threads inside a complex GPU/FP processor.


Could IBM have been so farsighted when designing the CellBE?
This opens up many interesting paths...
 
The core business is still GPUs, and Nvidia would eventually get to design all Intel chipsets -- including the GMAs.
That would more (a lot more) than offset the loss of the AMD business...

That loss is still in the stars, though. First there would have to be some really competitive products from AMD/ATI out there (I assume you're talking about chipsets), and that can take a while. And who knows what nV will release by then; it might kick AMD/ATI's arse or totally suck, but there's no way to know until there are products on the shelves.
 
There are still options like Cell, etc. Or maybe nV could team up with IBM on CPU design/production. There are just too many possible options to make any accurate predictions.
 
Mergers destroy a company just as often as they improve it. Mergers are done for a wide variety of reasons, not all of them having to do with synergy or combining resources. Some are done simply to get the accounts/customer Rolodex (e.g. Oracle acquiring PeopleSoft), some are done just to get employees, and some are done because a company was undervalued and buying it out acquires its pre-existing cash flow and assets.

The dangerous thing about mergers between large companies is the loss of focus. Instead of a synergistic amplification of resources, you get a company where every team is bogged down by the merger process, where differing corporate cultures lead to political turf wars, and where the company, which may previously have been laser-focused on one niche, bites off too grandiose a vision of entering every market sector with its newfound power.

The worrying thing about the ATI merger is the simple fact that the drive to build integrated bread-and-butter platforms and enter the mobile space may turn the high-end GPU race into an "R&D" project of less concern and status within the company, with future add-in-board high-end GPUs becoming AMD's version of Itanic or Intel's 80-core monster -- nice research projects to trot out to the press at conferences, but not their main focus.

You can crow all you want about the intent to keep ATI's business unit independent, but the board of directors and management is under no obligation to continue this arrangement ad infinitum. In 2001, I worked for a small company acquired by a Fortune 100. The CEO/CTO and management were promised they'd keep their independence. We could stay in our own office building. We could work on our own products, as long as they integrated with the parent's products and the branding, support, and marketing were coordinated. And this lovely arrangement continued for 2 years, but after 2 consecutive bad years for the parent company, our independence was cut short. We were rolled into the parent, we were forced to move to their HQ building, and our teams were broken up and redistributed to other managers. Many of our promising products were canceled in favor of those that were already making money or promised to make money the soonest.


All I'm saying is, the future impact of the AMD/ATI merger on the GPU market is uncertain, and it is by no means a slam dunk against Nvidia. I worry about premature "maturity" of the GPU market, of this merger turning the contest into a replay of the CPU market, with real innovation giving way to a treadmill of clocks and tweaks.

Yes, it may have been good for the shareholders, but what about the stakeholders? Just because a merger is good for the owners doesn't mean it's good for *us*. There are plenty of times society has chosen to disallow mergers because, even though the merger was financially profitable for the two companies involved, it was bad for the market overall.

I preferred the ol' days of ATI vs NVidia, mano a mano. I would hate to see this turn into Intel business unit vs AMD business unit. And ATI fanboys had better hope that AMD doesn't kill off NV and this turns into AMD vs Intel GMA*, because that will be the death knell of the high-end GPU market.

If you had your reputation enabled (yes, I know it's a silly function), you'd sure get some for that post.

I'll make it quick and simple: IMHLO, if Intel really wants to incorporate GPUs into its future CPUs (which I consider highly likely), it would be many times cheaper to buy IMG instead.

I too share the feeling that, in the long run, none of these mergers are to the benefit of the high-end GPU market.
 
Remember that this is all speculation, but I'm not saying Nvidia would attempt to enter the CPU market as a direct, discrete CPU competitor to AMD/Intel. You're right, there'd be no way they could compete on price/capacity.

However, what's to stop Nvidia from putting x86 cores on their own GPUs? In 2 years, a couple of x86 cores plus cache would take up a minuscule amount of real estate on the die compared to the GPU logic. What if people could buy a GPU and quad-core CPU on one die? No PCIe bottleneck, easy cache coherency, massive performance, true native shared memory. This *IS* the way things will be going in the future, no matter what... now the debate is whether Intel will put GPUs on a CPU die, or Nvidia will put CPUs on their die.

I am saying that Nvidia is actually better positioned here... GPUs are much more complex than CPUs nowadays, and much bigger. Nvidia already knows how to work on the order of a billion transistors, something Intel (not counting cache, which is highly regular) does not. Additionally, the move from engineering GPU shader -> CPU is a much easier one than the reverse; there are dozens of ultra-complicated graphics subsystems that Intel would need to invent from scratch (e.g. EarlyZ, ROP, AA, texturing, sampling, clipping; the list goes on).

Nvidia already makes motherboard chipsets... that means they already make memory controllers, northbridges, everything on a southbridge, Ethernet, USB, RAID controllers, *everything*. They own this IP. They already make the most powerful processor in the computer, the GPU. They already have a massive amount of high-performance processor core experience... see where I'm going with this? Transistor real estate is becoming dirt cheap. What's to stop Nvidia (in 5+ years, mind you) from engineering an *entire computer* onto ONE die? I say nothing. They already own all the IP except the actual CPU, and really it wouldn't be that hard to make one. I say Nvidia is going to blow up into a huge company; that is my prediction.

Very nice post!

It's funny, because the last time I spoke with Jen-Hsun Huang in Cancun (Mexico), he told us that Nvidia will become a huge company, with the goal in mind of becoming the new Intel. Looking at the last 3 years of Nvidia's development, things become clear. They now have the IP portfolio to build a complete SoC, and I think the next step will be a SoC based on a MIPS or ARM core to compete with Freescale or TI in the highly lucrative phone market. Time to get more cash and continue to grow, to be ready to attack Intel with a massively parallel x86 SoC integrating CPU/GPU/IGP/MCP/whatever to crunch big GFLOPS numbers.
The next 5 years will be very fun. :cool:
 
However, what's to stop Nvidia from putting x86 cores on their own GPUs? In 2 years, a couple of x86 cores plus cache would take up a minuscule amount of real estate on the die compared to the GPU logic. What if people could buy a GPU and quad-core CPU on one die? No PCIe bottleneck, easy cache coherency, massive performance, true native shared memory. This *IS* the way things will be going in the future, no matter what... now the debate is whether Intel will put GPUs on a CPU die, or Nvidia will put CPUs on their die.

I am saying that Nvidia is actually better positioned here... GPUs are much more complex than CPUs nowadays, and much bigger. Nvidia already knows how to work on the order of a billion transistors, something Intel (not counting cache, which is highly regular) does not. Additionally, the move from engineering GPU shader -> CPU is a much easier one than the reverse; there are dozens of ultra-complicated graphics subsystems that Intel would need to invent from scratch (e.g. EarlyZ, ROP, AA, texturing, sampling, clipping; the list goes on).

Where Intel has an advantage over everyone is in their fabrication abilities. This should not be underestimated, as it is half the equation. As for a GPU shader being more difficult to engineer than a CPU, I think the two are really difficult to compare. IIRC there are a number of threads here where that topic has been broached. Some very knowledgeable people explained how the use of custom cell libraries vs. standard ones allows for greatly increased clocks in CPUs vs. GPUs. The number of transistors is not directly related to complexity.
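
To illustrate the custom-vs.-standard-cell point, a toy clock estimate; the FO4 delays and logic depths per stage are invented values chosen only to show the shape of the trade-off:

```python
# Toy clock estimate from gate speed and logic depth per pipeline stage.
# FO4 delays and stage depths are invented, illustrative values only.

def clock_ghz(fo4_delay_ps, fo4_per_stage):
    stage_delay_ps = fo4_delay_ps * fo4_per_stage  # logic delay per pipeline stage
    return 1000.0 / stage_delay_ps                 # ps per cycle -> GHz

# Hand-tuned custom datapath on an in-house process: fast gates, shallow stages.
print("custom cells  :", round(clock_ghz(14, 22), 2), "GHz")
# Standard-cell flow at a foundry: slower gates, much more logic per stage.
print("standard cells:", round(clock_ghz(20, 80), 2), "GHz")
```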
 
I don't know where I stand on this. The funny thing is my hardware allegiances were Intel in CPUs and ATI in GPUs.

So what do I do with these merged companies, LOL? Who's the good guy?

Nvidia is screwed either way. Either they get bought and appear to have lost, or they don't get bought and the ATI/AMD combo has more size to give them a hard time.

I just wish these damn mergers would stop; it's bad for the industry. Frankly, I blame ATi for all of it; they're the ones who stopped wanting to compete and forced the first merger.

...wow. You, my friend, are certifiable. :nope:
 
Remember that this is all speculation, but I'm not saying Nvidia would attempt to enter the CPU market as a direct, discrete CPU competitor to AMD/Intel. You're right, there'd be no way they could compete on price/capacity.
They also can't match the top x86 CPU manufacturers when it comes to implementation, process, and methodology.

Nobody is going to go against the entrenched x86 high end without their own top-of-the-line fab and man-centuries of design work. Intel very briefly toyed with the idea of doing it with Itanium (very early on, never seriously), and they didn't dare.

The way NVIDIA and ATI have approached GPU production is miles away from what it takes to make something as clunky as x86 perform.

NVIDIA and ATI are fabless companies, so their chips are physically designed around the rules and processes of the foundries. Even at the same process geometry, the extreme and repeatedly refined custom work put into the fab process at AMD and Intel allows for timings that can be several times better than what a foundry can offer.

x86 chips require a lot of custom design: custom cells, custom layouts, and other tweaks that engineers have picked up in the 20 years they've been forcing the pig to fly.

Along with a custom process, x86 CPUs go through countless tweaks throughout their lifetimes: not enough for a new core or even a full revision, but nitpicky tweaks for that one iffy transistor on an L2 path or a 0.5% increase in manufacturability. Full revisions and new cores come around once every 2-5 years. Whole changes in core philosophy take even longer.

The GPU companies don't do that endless refinement of the same thing over and over again. They do a number of steppings, and they regularly produce new cores, sometimes with wildly different philosophies. If Intel or AMD had run the GPU race, we'd be looking at 3GHz TNT2s right now.

The low-hanging fruit for performance in graphics takes GPU designers in a way different direction than it does CPU designers.

However, what's to stop Nvidia from putting x86 cores on their own GPUs? In 2 years, a couple of x86 cores plus cache would take up a minuscule amount of real estate on the die compared to the GPU logic. What if people could buy a GPU and quad-core CPU on one die? No PCIe bottleneck, easy cache coherency, massive performance, true native shared memory. This *IS* the way things will be going in the future, no matter what... now the debate is whether Intel will put GPUs on a CPU die, or Nvidia will put CPUs on their die.

Nvidia would be better off not going with x86.
With x86 on board, I get the feeling the performance we'd be looking at would be about where VIA and Transmeta were (or, in Transmeta's case, was recently). Only they would be reaching that level about four years from now.

I am saying that Nvidia is actually better positioned here... GPUs are much more complex than CPUs nowadays, and much bigger. Nvidia already knows how to work on the order of a billion transistors, something Intel (not counting cache, which is highly regular) does not.
Both CPUs and GPUs are complex in different ways. A GPU may have 24 pipelines, but they are very self-contained and autonomous compared to how the units in a CPU work. GPUs also don't have the same demands placed on them that CPUs have.

GPUs are given one task, one that is latency tolerant and highly parallel. They can afford to go wide because they can use more pipelines to capture more pixel ops. They have to, because there's no way they're going to clock any higher.

CPUs have a different workload, one that is much less latency tolerant and is quite often not nearly as parallel.
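
A toy model of that latency-tolerance difference; the latency and per-thread work figures are assumptions, just to show the shape of the argument:

```python
# Toy latency-hiding model. Numbers are illustrative assumptions.
mem_latency_cycles = 400    # assumed memory round trip seen by a shader core
alu_cycles_per_access = 8   # assumed ALU work per memory access, per thread

# Threads needed in flight so the ALUs never sit idle waiting on memory.
threads_needed = mem_latency_cycles / alu_cycles_per_access
print(f"~{threads_needed:.0f} independent threads keep the ALUs busy")

# A CPU running one latency-sensitive thread can't hide the stall this way,
# so it spends its transistors on caches and out-of-order logic instead.
```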

Additionally, the move from engineering GPU shader -> CPU is a much easier one than the reverse; there are dozens of ultra-complicated graphics subsystems that Intel would need to invent from scratch (e.g. EarlyZ, ROP, AA, texturing, sampling, clipping; the list goes on).
It's not like Nvidia doesn't have a number of things it has to catch up on, and it would have fewer engineers than Intel trying.

I'm sure that once Nvidia gets around to implementing full virtual memory, software permissions, interrupt handling, backwards compatibility with 20+ years of crud that has built up in the ISA, a wildly inconsistent instruction set, aggressive speculation, useful branching, branch speculation, precise exceptions, cache and result access within 2-3 cycles at 3+ GHz, industry-leading process manufacturing with a multibillion-dollar fab, wacky freaky circuit implementation details, and a whole host of other problems, they'd have a decent x86 chip for 1999 by 2009.

It would also need to eat the costs of its manufacturing screwups. You can't pay only for good dies if you own the fab. The margins in the CPU biz are lousy in the low and mid ends, but there is absolutely no way Nvidia can bluff its way into the high-end.

There's also a distinct lack of optimizing compilers, proprietary instructions, or safely ignorable approximations. You can't do adaptive filtering on bank records.

If they want to make a CPU, they'd be better off keeping it as far as they can from x86. At least some of those problems are reduced.

Nvidia already makes motherboard chipsets... that means they already make memory controllers, northbridges, everything on a southbridge, Ethernet, USB, RAID controllers, *everything*. They own this IP. They already make the most powerful processor in the computer, the GPU. They already have a massive amount of high-performance processor core experience... see where I'm going with this?
Don't underestimate the amount of IP the x86 manufacturers have in controllers and IO. They have enough.

Intel already makes graphics parts. Sure they suck for performance, but they're infinitely better than the 0 CPUs Nvidia is putting out. What's to stop them from making a kick-ass graphics chip? Obviously there are reasons.

The high-end of both fields is virtually unassailable to a newcomer. There is so much built up expertise and proprietary knowledge that any company trying to break in must either have more cash than anyone does right now, more time than anyone deserves to have, or an aversion to staying in business.

GPU manufacturing expertise will be helpful, but inadequate to break into the x86 market.
The methods are different, the demands are different, the silicon is different, the transistors are different, the costs are different, the risks are different, and the rewards extremely far away.

Transistor real estate is becoming dirt cheap. What's to stop Nvidia (in 5+ years, mind you) from engineering an *entire computer* onto ONE die? I say nothing. They already own all the IP except the actual CPU, and really it wouldn't be that hard to make one. I say Nvidia is going to blow up into a huge company; that is my prediction.
Transistors have been dirt cheap since the advent of VLSI. There's no physical or IP-based reason why Intel or AMD, or Nvidia for that matter, couldn't have done it back in the '90s. Actually, Intel tried; it physically worked, but there are other reasons why things don't pan out.

System-on-a-chip isn't a new idea. It does make it hard to have a flexible platform or to perform well.
 