GPU socket coming

_xxx_

Some tidbits from Anand:

Manufacturers seem to think G72 and G73 will be an easy tool-over from NV40/43, but another vendor claims NVIDIA has bigger plans. They claim that NVIDIA is working on flip-chip GPU sockets for motherboards. Apparently, NVIDIA's engineering teams have several prototypes in-house where the GPU, rather than the CPU, is the main focus of a motherboard with two sockets: one for the GPU and another for the CPU. Whether or not such a machine will ever see the light of day is difficult to say right now. However, the idea of pin-compatible GPUs already suggests that we are halfway to buying GPUs the same way we buy CPUs: as flip chips. We have plenty of questions, such as how the memory interface will work and how that will affect performance, but GPU sockets are likely less a question of "if" than of "when".

Just being able to upgrade GPUs the way we do CPUs now would make things much more interesting...
 
_xxx_ said:
Just being able to upgrade GPUs the way we do CPUs now would make things much more interesting...

Oh Please No. I can see it now.

'We've come out with a new GPU. Unfortunately, the new package has 128 more pins, so you need to buy a new mobo. The new mobos will only accept GDDR8 RAM, so you'll have to throw your GDDR7 out and buy the new stuff. Also, the new design produces more heat, so your old fan will be inadequate...'
 
Socketed GPUs may be an option for the ultra low-end market, where you can use system memory. Or, perhaps a bit further down the line, they'll start having on-board memory.
 
AlphaWolf said:
'We've come out with a new GPU. Unfortunately, the new package has 128 more pins, so you need to buy a new mobo. The new mobos will only accept GDDR8 RAM, so you'll have to throw your GDDR7 out and buy the new stuff. Also, the new design produces more heat, so your old fan will be inadequate...'
That's the situation today with graphics cards. Having a socket for the GPU will at least allow one major upgrade without throwing away all the rest.
 
IgnorancePersonified said:
You can do it from both vendors in notebooks now. I'm no expert, but I would think you would be limited to expansion within one generation of product only.
I see no reason why that would have to be the case. After all, the input and output interfaces are pretty standardized. I would expect that one could have a socket that lasts as long as PCI Express does, which would be much longer than CPU sockets last.

The caveat is, of course, the memory interface. But since it doesn't make sense to change out the GPU and not the memory, a socket interface would make the most sense with on-package or on-die memory.
 
AlphaWolf said:
Oh Please No. I can see it now.

'We've come out with a new GPU. Unfortunately, the new package has 128 more pins, so you need to buy a new mobo. The new mobos will only accept GDDR8 RAM, so you'll have to throw your GDDR7 out and buy the new stuff. Also, the new design produces more heat, so your old fan will be inadequate...'

Exactly... I don't see it as that interesting.
 
Nick said:
That's the situation today with graphics cards. Having a socket for the GPU will at least allow one major upgrade without throwing away all the rest.

That's how I see it too. At least one "big" upgrade, say from low-end to high-end within the same generation, is definitely something.

Chalnoth: AFAIK it's not just the chip that gets swapped in notebooks, but the whole module?
 
Yeah, it would be a bit hard to go from a 128-bit bus to a 256-bit bus or similar, though. I have no problem putting in a chip with limited bandwidth, but there's going to be a steep slope of diminishing returns. A better-featured chip for not much more money is always better if the price is right.
You might also need a BIOS update, or a "unified BIOS" spanning, say, a 6200 to a 6800, or a 7600. In the mobo world, a Thunderbird Athlon, while physically able to be plugged into an early-gen Socket A board, still wouldn't POST, and no BIOS update was ever made available to enable support. While I see that it's in the GPU makers' best interests to have this (more chip sales), and most boards are reference designs anyway, it sounds a bit hit-and-miss and rather reliant on finger crossing. Board makers want to sell boards as well, which is why I think the mobo in the example above never got the BIOS update it needed (it was an Abit; I can't remember the model name other than KT7-RAID something).
 
I'm seeing this more as an integrated graphics alternative than as any type of performant solution. I'm not sure about current products, but the X1300 can already operate with "no" local memory (i.e. rendering entirely to system RAM), and I would suspect that TurboCache parts will be able to as well.
 
Dave Baumann said:
I'm seeing this more as an integrated graphics alternative than as any type of performant solution. I'm not sure about current products, but the X1300 can already operate with "no" local memory (i.e. rendering entirely to system RAM), and I would suspect that TurboCache parts will be able to as well.

Wouldn't there be an extra cost for the socket? I can't imagine the low end is a big upgrade market. Would people pay extra for a 6150 board so they could upgrade the graphics to a 7150 in the future? Maybe they are specifically targeting Vista with this?
 
This has a couple of possibilities.

First, the mobo makers can target one motherboard at both OEMs and enthusiasts. Have a 16-lane PCIe slot and a 16-lane link to the GPU socket. The OEM can either drop a $20 GPU into the socket or put in a $200 add-in card. More options for the Dells of the world.

Also, if it's more or less open, like PCIe, then you'd have the possibility of using the onboard socket for a GPU while waiting for the next best thing to come out. Once you buy your new $600 top-of-the-line graphics card, the onboard GPU can be used as a physics co-processor, like ATI were talking about.

Also, it would be really handy for laptops. For instance, my laptop is stupidly overpowered in the CPU stakes, but getting a little slow in the GPU stakes (an Athlon 64 3.4gig and a Mobility Radeon 9700). If I could drop in an X1600 with on-package RAM, that would be great!

Ali
 
Chalnoth said:
The caveat is, of course, the memory interface. But since it doesn't make sense to change out the GPU and not the memory, a socket interface would make the most sense with on-package or on-die memory.
At which point you may as well slap all of that on some sort of "card" and plug it into a "slot".
 
Ali said:
Once you buy your new $600 top-of-the-line graphics card, the onboard GPU can be used as a physics co-processor, like ATI were talking about.
Or as a graphics co-processor for vertex shading and Z-first passes, as an NVIDIA patent describes.
 
A socket for a GPU (I presume high-end GPU, with memory not on the GPU package itself) on a motherboard sounds like a not-very-good idea for at least 2 technical reasons:
  • Capacitance. A CPU-type socket adds a bit of capacitance per pin, potentially limiting the data rate the memory bus can run at. This isn't a very big problem at 400-600 Mbit/s per pin (like the Athlon 64), but it becomes more of one at 1.5+ Gbit/s per pin (like today's high-end GPUs).
  • Board layers. A high-end GPU like the R520 presumably requires something like an 8- or 10-layer board (the large number of layers is mainly needed to route out the enormous number of traces directly beneath the GPU). Adding that many layers to a board as large as a standard motherboard will increase the price rather dramatically, perhaps by $50-$100.
Having a GPU package with on-package memory will solve these problems, but multi-chip modules are a bit expensive, so I don't see why a socketed GPU would be any cheaper or more flexible than today's slotted GPUs.
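To put very rough numbers on the bandwidth side of that argument, here is a back-of-envelope sketch; the bus widths and per-pin rates are illustrative assumptions in the spirit of the figures above, not actual specs:

```python
# Back-of-envelope bandwidth comparison (illustrative figures, not specs):
# a 256-bit GDDR3-style bus on an add-in card versus a narrower bus running
# at socket-friendly, Athlon 64-like signalling rates.

def bus_bandwidth_gb_s(width_bits: int, gbit_per_pin: float) -> float:
    """Peak bandwidth in GB/s of a parallel memory bus."""
    return width_bits * gbit_per_pin / 8  # bits -> bytes

card = bus_bandwidth_gb_s(256, 1.5)      # high-end add-in card: ~48 GB/s
socketed = bus_bandwidth_gb_s(128, 0.5)  # hypothetical socketed part: ~8 GB/s

print(f"add-in card  : {card:.0f} GB/s")
print(f"socketed GPU : {socketed:.0f} GB/s")
```

Even before socket capacitance is accounted for, halving the width and dropping to CPU-like per-pin rates costs most of the bandwidth, which is why on-package memory keeps coming up as the way to make a socket workable.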
 
This was bandied about years ago...IIRC it was "Socket-X" that was being pushed by Rendition / Micron.

I just don't see the real viability of it. I don't see a "middle ground" between integration with the north bridge at the low end and a cheap-o add-on one step up. That being said, the one possibility I could see for another standardized socket is an "FPU co-processor". Then let ATI, NVIDIA, Ageia, Intel, AMD, etc. have at it. ;)
 
See, now, given my current obsession, I was thinking this could work on a card too. Every PCB sold could hold two NV GPUs, but whether you get that second socket pre-filled or not is up to you. Then your upgrade cost is much less when you decide you want to go SLI later on.

The heatsink would be a problem, and I think the memory might be a problem too, even with the shared-memory patent.
 
arjan de lumens said:
A socket for a GPU (I presume high-end GPU, with memory not on the GPU package itself) on a motherboard sounds like a not-very-good idea for at least 2 technical reasons:
  • Capacitance. A CPU-type socket adds a bit of capacitance per pin, potentially limiting the data rate the memory bus can run at. This isn't a very big problem at 400-600 Mbit/s per pin (like the Athlon 64), but it becomes more of one at 1.5+ Gbit/s per pin (like today's high-end GPUs).
  • Board layers.


Well, today's Athlon 64s already run their HT links at 1 GHz double-pumped (2 Gbit/s per pin pair), so the pin speed is definitely doable. But you of course have a point about the bandwidth; it would be unfeasible to use a 256-bit bus interface in a socket IMO.

But! If you could lower the bandwidth requirements of the GPU by putting a chunk of RAM on the GPU itself (maybe not on the same die, but in the same package), connecting a GPU directly to a CPU through something like HT (i.e. shared-memory NUMA) would make sense IMO.

That would make the GPU a first-class citizen of the computer and vastly improve GPU<->CPU communication. That might be important if future GPUs are going to be as versatile as it appears (i.e. doing physics and all that). And your CPU would be able to utilize the extra bandwidth when the GPU isn't crunching.

Cheers
Gubbi
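For a sense of scale on that suggestion, here is a similar sketch; the 16-bit link width and the per-pin rates are assumptions taken from the figures quoted above, not measured numbers:

```python
# Rough comparison (assumed figures): a 16-bit HyperTransport-style link at
# 1 GHz double-pumped (2 Gbit/s per pin pair) versus a dedicated 256-bit
# GDDR3 bus at 1.5 Gbit/s per pin.

ht_gb_s = 16 * 2.0 / 8      # ~4 GB/s per direction over the socket link
gddr_gb_s = 256 * 1.5 / 8   # ~48 GB/s on a local graphics memory bus

print(f"16-bit HT link : {ht_gb_s:.0f} GB/s per direction")
print(f"256-bit GDDR3  : {gddr_gb_s:.0f} GB/s")
print(f"shortfall      : {gddr_gb_s / ht_gb_s:.0f}x")
```

The roughly order-of-magnitude shortfall is why the on-package RAM would have to absorb most of the memory traffic before a coherent CPU<->GPU link like this looks plausible.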
 
Joe DeFuria said:
This was bandied about years ago...IIRC it was "Socket-X" that was being pushed by Rendition / Micron.
Things have changed since then. And NVIDIA knows how to create the proper chipsets themselves...
I just don't see the real viability of it. I don't see a "middle ground" between integration with the north bridge at the low end and a cheap-o add-on one step up. That being said, the one possibility I could see for another standardized socket is an "FPU co-processor". Then let ATI, NVIDIA, Ageia, Intel, AMD, etc. have at it. ;)
It's just like SLI. People will do anything to buy the latest graphics card(s). But it costs a lot to create a whole new card that is only 20% faster. Producing only a new GPU is easier and an interesting upgrade. Plus they can sell all the different speed bins, with exponential pricing, just like CPUs. That's one of the possibilities I see, although cooling a high-end GPU on the motherboard seems tricky.

A separate socket for a co-processor, now that's a bad idea. Soon we'll all have multi-core CPUs, and there's still no application making good use of them...
 