Really off-the-wall question about video chips.

Chalnoth said:
The Baron said:
arjan de lumens said:
Cooling a hypothetical socketed GPU shouldn't be harder than cooling a modern CPU; AFAIK, Prescott draws more power than even the NV40.
I don't believe this is correct. Prescott tops out at what, 110W, and isn't NV40 quite a bit higher than that?
Keep in mind that the power draw from an NV40 includes more than just the chip.
How much of that extra stuff would still be required on the motherboard? How much of that is RAM, too?

Also, think about where the fans on current GPUs blow. They blow *downwards*, not straight into the CPU cooler. And the best coolers vent air out of the case (Dustbuster notwithstanding).
 
The Baron said:
arjan de lumens said:
Cooling a hypothetical socketed GPU shouldn't be harder than cooling a modern CPU; AFAIK, Prescott draws more power than even the NV40.
I don't believe this is correct. Prescott tops out at what, 110W, and isn't NV40 quite a bit higher than that?
Prescott TDP is 103W, NV40 Ultra _board_ power is 110W. However, the Prescott number is misleading: TDP is not the maximum it can dissipate; the maximum is 25% (or 20%? I think it was 25%, but Intel could have changed it) higher. The 110W figure for the NV40 is supposed to be the real maximum power draw, so that would be below a Prescott.
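A quick sanity check on those numbers (treating the 25% headroom and the exact wattages above as assumptions, since they're quoted from memory):

```python
# Back-of-the-envelope comparison of worst-case power draw.
# Assumes Prescott TDP = 103 W with ~25% headroom above TDP (the margin
# recalled above) and NV40 Ultra board maximum = 110 W.
prescott_tdp_w = 103
headroom = 0.25                      # assumed margin above TDP
prescott_max_w = prescott_tdp_w * (1 + headroom)

nv40_board_max_w = 110               # quoted as the real maximum board draw

print(f"Prescott worst case:  ~{prescott_max_w:.0f} W")   # ~129 W
print(f"NV40 Ultra board max:  {nv40_board_max_w} W")      # below Prescott worst case
```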
The problem is that you'd have to have two giant heatsinks in very close proximity, and you'd have to have some insane case cooling in order to keep the hot exhaust from one from blowing into the other.
BTX to the rescue :p.

mczak
 
The Baron said:
Also, think about where the fans on current GPUs blow. They blow *downwards*, not straight into the CPU cooler. And the best coolers vent air out of the case (Dustbuster notwithstanding).

That doesn't bother my Radeon 9700 Pro, which gets hellishly hot on top, just a few cm directly below the CPU cooler, which in turn gets quite hot at its bottom because of that.
 
The Baron said:
The problem is that you'd have to have two giant heatsinks in very close proximity, and you'd have to have some insane case cooling in order to keep the hot exhaust from one from blowing into the other.

But people with dual CPUs have had this for quite a while with no real problems. They use normal HSF units too.

Now, try a mobo-mounted GPU in a dual (or quad) processor system... ;)
 
arjan de lumens said:
Cooling a hypothetical socketed GPU shouldn't be harder than cooling a modern CPU; AFAIK, Prescott draws more power than even the NV40.

As a matter of fact wouldn't it tend to be easier to cool a socketed GPU? That way you could actually use a much larger and heavier heatsink versus being restricted to the weight the card and the slot can hold. I suppose the upside to being in a slot is that it's easier to bring in cool fresh air for the GPU (or vent the hot air out of the case).

So people mention memory being a problem, but what happens when GPUs start to have several MB of eDRAM, perhaps enough to store the color buffer and the Z-buffer? And if predictions hold true and procedural textures become more and more common, it might eventually be feasible to build a GPU socket, since the GPU will hopefully not need such an insane amount of bandwidth.
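Just to get a feel for how much eDRAM that would actually take, here's a rough sizing sketch (the resolution and per-pixel formats are my own illustrative assumptions):

```python
# Hypothetical on-chip framebuffer sizing: color buffer + Z/stencil in eDRAM.
# Resolution and formats are illustrative assumptions, not figures from the thread.
width, height = 1600, 1200
bytes_per_pixel_color = 4    # 32-bit RGBA
bytes_per_pixel_z = 4        # 24-bit Z + 8-bit stencil

pixels = width * height
total_bytes = pixels * (bytes_per_pixel_color + bytes_per_pixel_z)
print(f"{width}x{height} color+Z: ~{total_bytes / 2**20:.1f} MB of eDRAM")   # ~14.6 MB
```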
 
Killer-Kris said:
As a matter of fact wouldn't it tend to be easier to cool a socketed GPU? That way you could actually use a much larger and heavier heatsink versus being restricted to the weight the card and the slot can hold. I suppose the upside to being in a slot is that it's easier to bring in cool fresh air for the GPU (or vent the hot air out of the case).
Well, one problem is that you're able to cool both sides of a video card. The other is that with two hot chips on the motherboard, the motherboard itself is going to get quite hot.
 
Chalnoth said:
Well, one problem is that you're able to cool both sides of a video card. The other is that with two hot chips on the motherboard, the motherboard itself is going to get quite hot.

Now I know that some graphics boards do indeed have cooling on both sides, but aren't most reference boards designed with cooling on only the one side?

Besides, as people have already pointed out, dual-processor systems can be cooled just fine. Now, as someone else pointed out, a dual (or more) processor plus GPU system... yeah, you might start to run into trouble :). Though I really doubt that you'd have to worry about the PCB of the motherboard... there are quad (and more) Itanium 2 systems where each chip is around 130W ±20%. So I doubt that for a consumer system a 100W CPU and a 100W GPU would be much of a problem at all.
 
Killer-Kris said:
Now I know that some graphics boards do indeed have cooling on both sides, but aren't most reference boards designed with cooling on only the one side?
Even if there's not active cooling, it's still better than it being on the motherboard, as the back will get some air.

Besides, as people have already pointed out, dual-processor systems can be cooled just fine. Now, as someone else pointed out, a dual (or more) processor plus GPU system... yeah, you might start to run into trouble :). Though I really doubt that you'd have to worry about the PCB of the motherboard... there are quad (and more) Itanium 2 systems where each chip is around 130W ±20%. So I doubt that for a consumer system a 100W CPU and a 100W GPU would be much of a problem at all.
Whenever you have multi-CPU systems, heat is always a significant concern. You may have noticed that dual- and quad-CPU motherboards are typically quite expensive.
 
Chalnoth said:
Killer-Kris said:
Now I know that some graphics boards do indeed have cooling on both sides, but aren't most reference boards designed with cooling on only the one side?
Even if there's not active cooling, it's still better than it being on the motherboard, as the back will get some air.

I'm still not convinced that there needs to be much in the way of cooling on the back side of the boards. Just using the examples of Prescott and Madison, both of these chips produce well over 100 watts themselves, not including the memory, memory controllers, etc., and they are still fairly easily cooled without having to put heatsinks on, or airflow over, the back of them. I know it wouldn't hurt, but I'm not sure it would help either.

Whereas with current GPUs the entire board is consuming up to 100 watts, I believe. Most of this is obviously the GPU, but a not insignificant amount is also the memory. If the GPU were moved into a socket, cooling would certainly not be any more of an issue than it is for CPUs.

That's not to say that heat won't be an issue in the future, since both CPUs and GPUs are using more and more logic and being pushed to ever higher clock speeds. Something is going to need to be done, but it will likely need to be done the same way for both socket and slot designs.

Whenever you have multi-CPU systems, heat is always a significant concern. You may have noticed that dual- and quad-CPU motherboards are typically quite expensive.

I was under the assumption that the added cost of multiprocessor motherboards had more to do with validation and economies of scale than with added complexity, just like the difference in price between P4 and Xeon, Opteron and Athlon 64, Quadro and GeForce, FireGL and Radeon, etc. Of course that isn't exactly an apples-to-apples comparison, because unlike all the above examples, motherboards can't be based off an identical design the way chips can, but it should be fairly close.

Anyway, we're getting off topic. There's still the issue of having a unified socket for all chips, high -> mid -> low end. Anyone have any suggestions? Do we make the high end suffer by going with the lowest common denominator, or do we make the low end cost more by designing around the high end, or is there a better solution?

Now if we are indeed entering a time in which new features are introduced at a slower pace and improvements mostly come in the form of higher clocks and more pipelines, I can actually foresee it becoming much like the CPU realm at the moment: a new socket every 24-36 months, bus speed boosts semi-regularly, and higher-clocked chips every so often as well. And just like we put up with having to buy a new motherboard when AMD or Intel releases a new socket, we'll probably also just put up with NVIDIA or ATI releasing a new socket in the same manner. Now of course, where does this leave the motherboard manufacturers... do they have to make four different motherboards, one for each CPU/GPU socket combination?
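Just to spell out the combination count I have in mind (the two-vendors-each setup is only an illustrative assumption):

```python
# How many motherboard variants would be needed if every CPU socket had to be
# paired with every GPU socket? Socket names here are illustrative only.
from itertools import product

cpu_sockets = ["AMD socket", "Intel socket"]
gpu_sockets = ["NVIDIA socket", "ATI socket"]

boards = list(product(cpu_sockets, gpu_sockets))
print(len(boards), "board variants:", boards)   # 4 variants, growing multiplicatively
```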
 
I can't see why heat dissipation should be regarded as anything more than a small engineering issue to solve.

But I think before solving the problem, we should make sure there even is a problem. I'll split the topic into two relatively separate threads: first, whether the GPU chip should be pluggable into a socket, and second, whether the GPU should reside on the motherboard.

AFAICS the only reason to make the GPU use a socket (as opposed to soldering it directly onto the graphics board) is to reuse the infrastructure when upgrading. I think that a lot of, if not most, people don't upgrade their computers but buy new ones, making the added cost of separating the infrastructure from the chip useless. Or if they do upgrade, it might be too infrequently to reuse the previous infrastructure.

But let's assume that there is enough of a market for upgradeable graphics boards.

So what can be shared between different chips?
A very large portion of the cost of a graphics board is in the memory chips. If we want to share them between subsequent chips, the following issues crop up:
1) Different chips have different bandwidth requirements, so most of the time the infrastructure must be upgraded anyway (see the rough numbers after this list).
2) There's a loss of memory speed through a socket interface.
Using a multi-chip module (expensive?), the memory could be swapped along with the GPU, thus negating most of the cost advantage.
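To put rough numbers on point 1, this is the kind of generation-to-generation bandwidth jump I mean (the clock and bus-width figures are from memory and only meant as ballpark assumptions):

```python
# Rough peak-bandwidth estimates for two successive high-end GPUs, to show why
# socketed, shared memory would fall behind quickly. Figures are approximate.
def bandwidth_gb_s(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = effective memory clock * bus width in bytes."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(f"Radeon 9700 Pro (~620 MHz effective, 256-bit):   {bandwidth_gb_s(620, 256):.1f} GB/s")
print(f"GeForce 6800 Ultra (~1100 MHz effective, 256-bit): {bandwidth_gb_s(1100, 256):.1f} GB/s")
```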

Of what's left, the power circuitry can be shared, but again it must scale with the chips, making the low end more expensive or requiring infrastructure upgrades at the high end anyway.
Then there are some encoder chips, possibly a TV tuner and other supporting stuff that could be shared, assuming the GPU can be kept pin-compatible (thus limiting the HW vendor's ability to add new stuff).

Putting it all together: the (small) minority that upgrade can save the cost of power circuitry, physical board components and possibly external chips, provided the existing power circuitry is even enough for the new chip. That would come at the added expense of the socket interface, an MCM and extra engineering effort.
As far as I can quantify these factors, it just isn't worth it.


As to placing the GPU on the motherboard, excluding low-end solutions there is close to nothing that could be shared for cost advantages, and the manufacturing cost would be a LOT more than a socket interface requires. Though there could still be a market for a form factor where the graphics board is in the same plane as the motherboard, with the socket sitting at the edge of the motherboard.
 
You're right, it is a solvable problem, but remember that the only reason to go to a GPU socket on the motherboard would be cost. Dealing with heat adds cost.

But when you look at the situation, it should become apparent that in reality, putting a GPU on the motherboard would both decrease performance (by making it impossible to use memory that performs as well) and increase costs. So there's really no point.
 