NVIDIA Chipset Discussion (MCP55, Nehalem & Misc.)

Arun

Arun said:
And I very much doubt that northbridges are manufactured on a low-power process made with handhelds in mind!
Certainly not NVidia's with their ridiculous power consumption and temperature.
Sigh, don't you love this myth? There is one, and only ONE, chip in NVIDIA's entire line-up, ever, that had truly unacceptable power consumption: MCP55. The problem is they're crazy and stupid enough to keep reusing it again and again (nForce 550/570/590, nForce 680i, nForce 780i).

Pretty much all other chipsets in all of NVIDIA's history had perfectly acceptable power consumption. nForce1/2/3 were acceptable. The original nForce4 was fine. The nForce4 for Intel was roughly comparable to Intel's parts at the time: apparently higher idle power, but slightly lower load power.

All NVIDIA IGPs, including C51, MCP61, MCP68 and MCP73, have perfectly fine power consumption. In fact, MCP73 is more power efficient than Intel's chipsets. C55 is hot, but it's not that bad compared to Intel's northbridges either; it's mostly MCP55 giving it a bad reputation.

So why is MCP55 so power inefficient? Because it's on 140nm (you read that right) and is a very big chip, thanks to its 28 PCI Express lanes and two Gigabit Ethernet MACs. It's about 100-110mm2, which is huge for a chipset; but it also integrates a lot more than anything else out there.

So there are two reasons for the power inefficiency. First, it's a large chip that apparently isn't able to shut down many parts of itself when they're idling. Second, the process itself: 140nm is a quarter-node(TM), a shrink based on the 150nm process. Why use it? Because the cost per wafer (and thus per mm2) is the lowest out there: it doesn't have low-k dielectrics, and I suspect not even copper interconnects. Chipsets have a lot of analogue & I/O, so it makes sense to optimize for cost per wafer.
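
To make the cost-per-wafer argument concrete, here's a rough back-of-the-envelope sketch. The only figure taken from this post is the ~105mm2 die size; the 200mm wafer, the wafer prices and the yield are made-up placeholders, so treat it as an illustration of the trade-off rather than real numbers.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=200.0):
    """Common approximation: usable wafer area minus an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def cost_per_die(wafer_cost, die_area_mm2, yield_rate):
    return wafer_cost / (gross_dies_per_wafer(die_area_mm2) * yield_rate)

# ~105mm2 die from the post; wafer costs and yield are invented placeholders.
print(cost_per_die(wafer_cost=900.0, die_area_mm2=105.0, yield_rate=0.85))   # cheap wafer: ~$4.2/die
print(cost_per_die(wafer_cost=1200.0, die_area_mm2=105.0, yield_rate=0.85))  # pricier wafer: ~$5.5/die
```

A shrink to 130nm would of course reduce the die area somewhat and offset part of the wafer-cost difference, but for an analogue- and I/O-heavy chip the cheaper wafer still tends to win on cost per die.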

I suspect MCP55's power consumption would be a fair bit more reasonable if it had been produced on 130nm. Given the bad publicity alone (you're living proof of that ;)), I'd say the trade-off wasn't worth it. At the very least, they shouldn't have released that chip without more effective shutdown of idle parts.

Anyway, all their more recent chips are perfectly competitive power-wise; it's just sad they keep reusing MCP55... On the plus side, we've seen the last of it with the 780i, AFAIK. Good riddance. (P.S.: Slide 18 of http://www.itctestweek.org/recap/2006malachowsky.pdf - AFAIK, 140nm was also used for a widely used shrink of NV34.)

EDIT: And before anyone gets me wrong, I haven't been too impressed with most of NVIDIA's more recent chipsets; MCP68 is awful and I'd certainly pick an RS690 over it any day. MCP73 also isn't the kind of thing I would buy, but in its defense it's mostly aimed at VIA/SiS, which have products I'd be even less willing to buy, heh. As for MCP65/nForce 560, it's too boring. MCP78 looks nice, but that's about it; so far I'm far from incredibly impressed.

And yes, this post is damn long, but I wanted to type this once and for all so I could reference it in the future and not have to explain it again and again for no good reason.
 
I suspect that Intel's "Nehalem" might change all that.

Since both CPU manufacturers will relieve the chipset northbridge of its memory controller, the prospect of mixing and matching components, as Nvidia usually does now, suddenly becomes even more feasible (and perhaps even economically desirable).
 
I suspect that Intel's "Nehalem" might change all that.

Since both CPU manufacturers will relieve the chipset northbridge of its memory controller,
... and PCI Express main hub and integrated GPU ...

the prospect of mixing and matching components, as Nvidia usually does now, suddenly becomes even more feasible (and perhaps even economically desirable).
... NVidia will be left selling southbridges.

[I'd like post 44 to stay in this context if someone decides to split this subthread off, i.e. start with 45.]

Jawed
 
... and PCI Express main hub and integrated GPU ...


... NVidia will be left selling southbridges.

Funny that you say that, because you're assuming Nvidia will stand there doing nothing while Intel keeps adding GPU functionality to the CPU die, and that it will somehow be more efficient than a dedicated processor with a comparatively much larger transistor count, such as a discrete GPU...
Heck, they might as well just stop at the GeForce 8, as there's no hope left, right?

Just as CPUs evolve, so can a GPU design in the same time frame, don't you think?
Who knows? They might even integrate a basic x86 design as soon as...
 
Yes and no. What are the chances NVIDIA is going to allow SLI on Nehalem's integrated 1x16 port? The logical thing to do is to sell BR04-like solutions to motherboard manufacturers and/or Intel for that. However, they'll also want Hybrid SLI, which leads me to imagine a chip like this: G98 + 64x PCI Express 2.0 lanes + 1x HT3 + 2x CSI

And then you can use the same chip for both AMD and Intel Socket 1366 chipsets (along with a southbridge, obviously) with 4x16 Hybrid SLI, or sell it to motherboard manufacturers to pair with Intel southbridges for 3x16 Hybrid SLI.
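
As a trivial sanity check on the lane budget implied above: the 64-lane total and the 4x16 / 3x16 splits are just the hypothetical configuration from this post, not a real product, and the breakdown below is purely illustrative.

```python
# Hypothetical lane budget for the "G98 + 64x PCIe 2.0 + 1x HT3 + 2x CSI" chip sketched above.
TOTAL_LANES = 64

configs = {
    "4x16 Hybrid SLI (NVIDIA chipset for AMD or Socket 1366)": [16, 16, 16, 16],
    "3x16 Hybrid SLI (paired with an Intel southbridge)": [16, 16, 16],
}

for name, slots in configs.items():
    used = sum(slots)
    assert used <= TOTAL_LANES, f"{name} oversubscribes the lane budget"
    print(f"{name}: {used}/{TOTAL_LANES} lanes used, {TOTAL_LANES - used} spare")
```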

I'm not saying that's what NVIDIA will do. I don't know if it is. But there certainly are some things they could do, such as that, that could keep their chipset business very viable as long as their southbridges are high-quality. It also remains to be seen whether Intel's CPU-GPU MCM is a success or not, although I doubt it'll be a flop.

Inkster: I think Jawed meant chipset-wise.
 
Funny that you say that, because you're assuming Nvidia will stand there doing nothing while Intel keeps adding GPU functionality to the CPU die, and that it will somehow be more efficient than a dedicated processor with a comparatively much larger transistor count, such as a discrete GPU...
Heck, they might as well just stop at the GeForce 8, as there's no hope left, right?
Eh?

There should be a fairly large gap between IGP and discrete graphics for a long long time, well past the 5 year timeline that Intel seems to want to convince us will see the death of DirectX and the rise of ray tracing (sigh).

Jawed
 
Eh?

There should be a fairly large gap between IGP and discrete graphics for a long long time, well past the 5 year timeline that Intel seems to want to convince us will see the death of DirectX and the rise of ray tracing (sigh).

Jawed

I wasn't talking about "Larrabee" or "Fusion"/"Swift" either, but this.
 