Remember that this is all speculation, but I'm not saying Nvidia would attempt to enter the CPU market as a direct, discrete CPU competitor to AMD/Intel. You're right, there'd be no way they could compete on price/capacity.
They also can't match the top x86 CPU manufacturers when it comes to implementation, process, and methodology.
Nobody is going to go against the entrenched x86 high end without their own top-of-the-line fab and man-centuries of design work. Intel very briefly toyed with the idea of doing it with Itanium (very early on, never seriously), and even they didn't dare.
The way NVIDIA and ATI have approached GPU production is miles away from what it takes to make something as clunky as x86 perform.
As fabless companies, NVIDIA and ATI chips are physically designed around the rules and processes of the foundries. Even at the same process geometry, the extreme and repeatedly refined custom work put into the fab process at AMD and Intel allows for timings that can be several times better than what a foundry can offer.
x86 chips require a lot of custom design: custom cells, custom layouts, and all the other tweaks engineers have picked up over the 20 years they've been forcing the pig to fly.
Along with a custom process, x86 CPUs go through countless tweaks throughout their lifetimes: not enough for a new core or even a full revision, just endless nitpicky fixes for that one iffy transistor on an L2 path or a 0.5% increase in manufacturability. Full revisions and new cores come around once every 2-5 years. Whole changes in core philosophy take even longer.
The GPU companies don't do that endless refinement of the same thing over and over again. They do a number of steppings, and they regularly produce new cores, sometimes with wildly different philosophies. If Intel or AMD had run the GPU race, we'd be looking at 3GHz TNT2s right now.
The low-hanging fruit for performance in graphics takes GPU designers in a way different direction than it does CPU designers.
However, what's to stop Nvidia from putting x86 cores on their own GPUs? In two years, a couple of x86 cores plus cache would take up a minuscule amount of real estate on the die compared to the GPU logic. What if people could buy a GPU and a quad-core CPU on one die? No PCIe bottleneck, easy cache coherency, massive performance, true native shared memory. This *IS* the way things will be going in the future, no matter what... now the debate is whether Intel will put GPUs on a CPU die, or Nvidia will put CPUs on theirs.
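To put the "no PCIe bottleneck, true native shared memory" point in concrete terms, here's a minimal sketch using today's CUDA calls purely as illustration (the brighten kernel and the whole program are hypothetical, not anything Nvidia has said it will build): the first half is the discrete-card dance of copying across the bus, the second half is roughly what a coherent CPU+GPU die buys you.

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical per-pixel kernel: the work itself is irrelevant here,
// what matters is how the data reaches it.
__global__ void brighten(float* pixels, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pixels[i] *= 1.1f;
}

int main() {
    const int n = 1 << 20;

    // Discrete GPU today: every frame's data crosses the PCIe bus twice.
    float* host = (float*)malloc(n * sizeof(float));
    float* dev;
    cudaMalloc((void**)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // PCIe copy in
    brighten<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost); // PCIe copy out

    // CPU+GPU on one die with coherent shared memory: both sides touch the same
    // allocation, no explicit copies (illustrated here with managed memory).
    float* shared;
    cudaMallocManaged((void**)&shared, n * sizeof(float));
    for (int i = 0; i < n; ++i) shared[i] = 1.0f;   // CPU writes
    brighten<<<(n + 255) / 256, 256>>>(shared, n);  // GPU reads/writes the same pointer
    cudaDeviceSynchronize();
    printf("%f\n", shared[0]);                      // CPU reads the result back directly

    cudaFree(dev); cudaFree(shared); free(host);
    return 0;
}
```

The interesting bit isn't the kernel, it's that the second half has no explicit copies at all.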
Nvidia would be better off not going with x86.
With x86 on board, I get the feeling the performance we'd be looking at would be about where VIA and Transmeta were (or recently were, in Transmeta's case). Only Nvidia would be doing that well about four years from now.
I am saying that Nvidia is actually better positioned here... GPUs are much more complex than CPUs nowadays, and much bigger. Nvidia already knows how to work on the order of a billion transistors, something Intel (not counting cache, which is highly regular) does not.
Both CPUs and GPUs are complex in different ways. A GPU may have 24 pipelines, but they are very self-contained and autonomous compared to how the units in a CPU work. GPUs also don't have the same demands placed on them that CPUs have.
GPUs are given one task, one that is latency tolerant and highly parallel. They can afford to go wide because they can use more pipelines to capture more pixel ops. They have to, because there's no way they're going to clock any higher.
CPUs have a different workload, one that is much less latency tolerant and is quite often not nearly as parallel.
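A crude way to show the difference between the two workloads (both snippets below are my own toy examples, not production code):

```
// GPU-style work: every pixel is independent, so thousands of threads can be
// in flight at once and memory latency is hidden by switching to another pixel.
__global__ void shade_pixels(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 0.5f + 0.25f;   // no dependence between pixels
}

// CPU-style work: pointer chasing. Each load depends on the previous one,
// so there is nothing to run in parallel; you just eat the latency, serially.
struct Node { Node* next; int value; };

int sum_list(const Node* head) {
    int sum = 0;
    for (const Node* p = head; p != nullptr; p = p->next)
        sum += p->value;   // can't touch p->next->value until p->next arrives
    return sum;
}
```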
Additionally, the move from engineering a GPU shader to engineering a CPU is a much easier one than the reverse; there are dozens of ultra-complicated graphics subsystems that Intel would need to invent from scratch (e.g., early-Z, ROPs, AA, texturing, sampling, clipping; the list goes on).
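Just for flavor, here's the idea behind one of those subsystems, early-Z, as a toy sketch (the kernel and all the names in it are made up; real hardware does this hierarchically, per tile, in fixed-function logic):

```
// Toy early-Z: compare the incoming fragment's depth against the depth buffer
// before running the (expensive) shading. One thread per pixel, so no races.
__global__ void rasterize(const float* frag_depth, float* depth_buf,
                          float* color_buf, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (frag_depth[i] >= depth_buf[i])
        return;                           // early-Z reject: skip shading entirely

    depth_buf[i] = frag_depth[i];         // fragment is visible, update depth
    color_buf[i] = frag_depth[i] * 0.5f;  // stand-in for a costly shader
}
```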
It's not like Nvidia doesn't have a number of things it has to catch up on, and it would have fewer engineers than Intel trying.
I'm sure when Nvidia gets around to implementing fully virtual memory, software permissions, interrupt handling, backwards compatibility with 20+ years of crud that has built up in the ISA, a wildly inconsistent instruction set, aggressive speculation, useful branching, branch speculation, precise exceptions, cache and result access within 2-3 cycles at 3+GHz, industry-leading process manufacturing with a multibillion-dollar fab, wacky freaky circuit implementation details, and a whole host of other problems, they'd have a decent x86 chip for 1999 by 2009.
It would also need to eat the costs of its manufacturing screwups. You can't pay only for good dies if you own the fab. The margins in the CPU biz are lousy in the low and mid ends, but there is absolutely no way Nvidia can bluff its way into the high-end.
There's also a distinct lack of optimizing compilers, proprietary instructions, or safely ignorable approximations. You can't do adaptive filtering on bank records; CPU results have to be exact.
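To make the "safely ignorable approximations" point concrete, a throwaway contrast (both functions are mine, purely illustrative):

```
#include <cstdint>

// Graphics: a bilinear-ish filter. If the blend weights are off by a hair,
// nobody's eye will ever notice; the GPU is free to approximate.
float filter_texels(float a, float b, float c, float d, float wx, float wy) {
    float top = a * (1.0f - wx) + b * wx;
    float bot = c * (1.0f - wx) + d * wx;
    return top * (1.0f - wy) + bot * wy;
}

// Bank records: every cent must be exact and reproducible. There is no
// "close enough", so exact integer arithmetic, no fast-math shortcuts.
int64_t total_balance_cents(const int64_t* accounts, int n) {
    int64_t sum = 0;
    for (int i = 0; i < n; ++i) sum += accounts[i];
    return sum;
}
```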
If they want to make a CPU, they'd be better off keeping it as far as they can from x86. At least some of those problems are reduced.
Nvidia already makes motherboard chipsets... that means they already make memory controllers, northbridges, everything on a southbridge, Ethernet, USB, RAID controllers, *everything*. They own this IP. They already make the most powerful processor in the computer, the GPU. They already have a massive amount of high-performance processor core experience... see where I'm going with this?
Don't underestimate the amount of IP the x86 manufacturers have in controllers and IO. They have enough.
Intel already makes graphics parts. Sure they suck for performance, but they're infinitely better than the 0 CPUs Nvidia is putting out. What's to stop them from making a kick-ass graphics chip? Obviously there are reasons.
The high end of both fields is virtually unassailable to a newcomer. There is so much built-up expertise and proprietary knowledge that any company trying to break in must have either more cash than anyone does right now, more time than anyone deserves to have, or an aversion to staying in business.
GPU manufacturing expertise will be helpful, but inadequate to break into the x86 market.
The methods are different, the demands are different, the silicon is different, the transistors are different, the costs are different, the risks are different, and the rewards are extremely far away.
Transistor real estate is becoming dirt cheap. What's to stop Nvidia (in 5+ years, mind you) from engineering an *entire computer* onto ONE die? I say nothing. They already own all the IP except the actual CPU, and really it wouldn't be that hard to make one. I say Nvidia is going to blow up into a huge company; that is my prediction.
Transistors have been dirt cheap since the advent of VLSI. There's no physical or IP-based reason why Intel or AMD, or Nvidia for that matter, couldn't have done it back in the '90s. Actually, Intel tried; it physically worked, but there are other reasons why things don't pan out.
System-on-a-chip isn't a new idea. It does make it hard to have a flexible platform or to perform well.