GPU socket coming

Nick said:
A separate socket for a coprocessor, now that's a bad idea.

So, you think, for example, a physics processor in general is a bad idea?

Soon we'll all have multi-core CPUs...

Specifically, we'll have "more of the same old x86, built around a legacy of serial integer operation" cores. Not the floating-point monsters I'm talking about.
 
I don't see why a "Cell-like" architecture can't be constructed with an x86 core. That should be another thread, though.

Also, the Ageia PPU has been delayed until Q2. Whoops.

Jawed
 
Jawed said:
I don't see why a "Cell-like" architecture can't be constructed with an x86 core. That should be another thread, though.

They likely will be... eventually. Just like the "math co-processor" started off as a separate chip and was LATER integrated into the core, I don't see why the situation can't be similar this time around. At first, have a socket for the "FPU monster unit", and those people willing to pay the cost of adding a separate chip will do so. Later, when it makes financial sense to just integrate it into the core... then do it.
 
Joe DeFuria said:
So, you think, for example, a physics processor in general is a bad idea?
Yes.
Specifically, we'll have "more of the same old x86, built around a legacy of serial integer operation" cores. Not the floating-point monsters I'm talking about.
I think you greatly underestimate the power of a modern x86 CPU. The SSE instruction set allows four 32-bit floating-point numbers to be processed in parallel. Also, the clock frequency of CPUs is several times higher than any separate co-processor could ever reach. Last but not least, an extra core gives you 100% extra, not the 10% or 20% that you have to share with everything else. So if you want a 'floating-point monster' it would have to be highly parallel, run very hot, and be hard to develop for. By the time it's practically feasible to release a game for it, we'll have quad-core CPUs ready to process all the physics you can throw at them, and more.
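
To make that concrete, here's a minimal sketch (my own toy example, not code from any real physics engine) of how SSE lets a single instruction update four particle positions at once:

[code]
#include <xmmintrin.h>   /* SSE intrinsics */

/* Toy sketch: integrate particle positions four at a time.
   pos and vel are hypothetical arrays, assumed 16-byte aligned,
   with count a multiple of 4. */
void integrate_sse(float *pos, const float *vel, float dt, int count)
{
    __m128 vdt = _mm_set1_ps(dt);                 /* broadcast dt to all 4 lanes */
    for (int i = 0; i < count; i += 4) {
        __m128 p = _mm_load_ps(&pos[i]);          /* load 4 positions */
        __m128 v = _mm_load_ps(&vel[i]);          /* load 4 velocities */
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));    /* p += v * dt, all 4 lanes at once */
        _mm_store_ps(&pos[i], p);
    }
}
[/code]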

Anyway, back on topic, if there's any other processor suited for physics processing it's the GPU. Giving it a socket is definitely a step forward in making it cheaper to develop and to upgrade.
 
Nick said:

Well, then why integrate more FPU power into a CPU at all?

I think you greatly underestimate the power of a modern x86 CPU.

I don't think so.

When a GPU can clearly out-class the "power of a modern x86 CPU" in apps like protein folding...

x86 CPUs are great at what they do. Monster FPU power is currently not one of those things.

Anyway, back on topic, if there's any other processor suited for physics processing it's the GPU. Giving it a socket is definitely a step forward in making it cheaper to develop and to upgrade.

??

I see very little difference between a "GPU" and a "PPU". I'm calling them both "floating-point monsters" in a generic sense. As I said earlier: develop a socket, and then have ALL the companies (ATI, nVidia, Ageia, Intel, AMD...) develop an "FPU monster co-processor" for it.

ATI and nVidia can just slap in the same chip they use for graphics for all I care if it gets the job done.

What I do NOT see working, is having a separate socket exclusively for graphics processors.
 
Nick said:
Anyway, back on topic, if there's any other processor suited for physics processing it's the GPU.

So, a GPU (be it ATI or nVidia) will outpace a 4GHz Cell/BPA-derived IC?

PS. Wow, I can't remember the last time I fully agreed with Joe outside of the SocioPolitical forum :p
 
Nick said:
I think you greatly underestimate the power of a modern x86 CPU. The SSE instruction set allows four 32-bit floating-point numbers to be processed in parallel.
A "real" FP chip allows a lot more than that, though. Besides, neither inter-chip nor external bandwidth allows modern x86 chips to run at a sustained four 32-bit ops/clock. They're just not made for that kind of workload, sort of like taking an SUV off-road. :devilish:

Also, the clock frequency of CPUs is several times higher than any separate co-processor could ever reach.
That sounds a little narrow-minded to me.

Last but not least an extra core gives you 100% extra. Not 10% or 20% that you have to share with everything else.
There's almost no workload that scales completely linearly with the addition of a second x86 core, in part because both cores will share the same (quite small) memory bandwidth. In the future, both cores will even share the same L2.
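
A quick back-of-the-envelope sketch of both points, using my own assumed numbers (a ~3GHz core and roughly 6.4GB/s of dual-channel DDR2): a purely streaming 4-wide SSE kernel would want far more bandwidth than the platform has, and a second core only splits the same pool.

[code]
#include <stdio.h>

int main(void)
{
    double clock_hz        = 3.0e9;          /* assumed core clock */
    double bytes_per_clock = (4 + 4) * 4;    /* 4 float loads + 4 float stores per clock */
    double needed          = clock_hz * bytes_per_clock;  /* ~96 GB/s of traffic */
    double available       = 6.4e9;          /* assumed dual-channel DDR2 bandwidth */

    printf("needed: %.0f GB/s, available: %.1f GB/s\n", needed / 1e9, available / 1e9);
    printf("sustainable fraction of peak: %.1f%%\n", 100.0 * available / needed);
    /* Two cores hammering the same bus each get roughly half of that fraction. */
    return 0;
}
[/code]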

So if you want a 'floating-point monster' it would have to be highly parallel, run very hot, and be hard to develop for.
APIs do take care of the last bit...

Anyway, back on topic, if there's any other processor suited for physics processing it's the GPU.
Absolutely not. What a stupid idea; the GPU's job is drawing graphics. I don't need my game slowing down even more in busy scenes because a significant amount of GPU processing is stolen by physics simulation.

GPUs are already crazy expensive in $/FPS ratio. No need to make it even worse.
 
Guden Oden said:
Absolutely not. What a stupid idea; the GPU's job is drawing graphics. I don't need my game slowing down even more in busy scenes because a significant amount of GPU processing is stolen by physics simulation.

GPUs are already crazy expensive in $/FPS ratio. No need to make it even worse.

Completely agree. GPUs aren't well suited for this, and given that we don't have enough graphics power yet to push higher-level lighting models, it wouldn't be an ideal solution for physics now or in the near future.
 
It seems to me it is a marketing problem/issue as much as anything. Not everyone will want/need these. Who will? The biggest market will be gamers. Who has more credibility and experience selling to gamers and continually pushing the envelope for greater gaming performance in serious % increases, the CPU guys or the GPU guys? Therefore, whose logo has more credibility on a PPU to the major market for PPUs: Intel/AMD or NV/ATI? Which companies, Intel/AMD or NV/ATI, would see the volume of the PPU market as interesting in a business sense? Which companies, Intel/AMD or NV/ATI, would see themselves at those volumes as benefiting the most from "the halo effect" of being a leader in this new area?

YMMV, but that adds up in a very definite way in my eyes.
 
geo said:
It seems to me it is a marketing problem/issue as much as anything. Not everyone will want/need these. Who will? The biggest market will be gamers. Who has more credibility and experience selling to gamers and continually pushing the envelope for greater gaming performance in serious % increases, the CPU guys or the GPU guys? Therefore, whose logo has more credibility on a PPU to the major market for PPUs: Intel/AMD or NV/ATI? Which companies, Intel/AMD or NV/ATI, would see the volume of the PPU market as interesting in a business sense? Which companies, Intel/AMD or NV/ATI, would see themselves at those volumes as benefiting the most from "the halo effect" of being a leader in this new area?

YMMV, but that adds up in a very definite way in my eyes.

I personally am an avid supporter of SFF and other related forms of 'small' computing - the concept of GPU-on-motherboard has always appealed to me as one of the best ways to further reduce the size of current SFF computers.

Of course, this would also be the holy grail for laptop gaming (which is where I see the desktop going anyway).

See, lots of potential market :smile:
 
A176 said:
didn't 3dfx have a socket board in development?

I believe 3dfx produced a reference design with a discrete Banshee graphics chip directly on the motherboard. However, I don't think this was part of any standardized socket initiative. It was more of a "see, this is what you can do with Banshee"... in an attempt to appeal to OEMs.
 
I don't think a video socket on a motherboard makes sense unless the motherboard has the 512MB of DDR3/4/5 on board as well. For laptops, sure; for desktops, what's the point? And I seriously doubt you'd have a common socket standard between ATI and NVIDIA.

As for a socket on a video card, that might make some sense: if $400 worth of your $500 video card is expensive RAM, then upgrading just the GPU would be great. Or perhaps an empty socket to upgrade later to a dual-GPU card. Joe Blow would probably have a problem removing the heatsinks, refrigerators, heat pipes and vacuum cleaners from the GPU, though, and warranties would be an issue.

:)
 
AlphaWolf said:
Oh Please No. I can see it now.

'We've come out with a new GPU. Unfortunately, the new package has 128 more pins, so you need to buy a new mobo. The new mobos will only accept GDDR8 RAM, so you will have to throw your GDDR7 out and buy the new stuff. Also, the new design produces more heat, so your old fan will be inadequate...'

I hate it when that happens... that happened to me with my current rig.
 
Guden Oden said:
GPUs are already crazy expensive in $/FPS ratio. No need to make it even worse.

By the time we are actually able to use GPUs for physics in any real games (as in, by the time DirectX has a physics component), many people will have an old PCI Express card collecting dust somewhere. If that card can be plugged in and used for physics, and if power usage isn't that much of a concern, then it might be better than doing it on the CPU.
 
I don't think that'll be a viable option all that commonly. It would only be an option for the subset of the gaming population that owns SLI motherboards but does not have an SLI setup.
 
But you could use PCIe x1 or x4 slots, similar to Matrox's use of PCIe x1 for their G4x series; then again, that isn't giving your PPU much bandwidth.

As for interesting concepts, why not mimic 3DLabs or 3dfx and stick a geometry unit on the board, with a coprocessor alongside it to do the crunching? You could cut costs per chip by not producing these 300-400M transistor devices, and cut cooling down by spreading components out (this was shown to work even when moving components around on the same die; a University of Virginia experiment showed that principle of thermal management to be viable, and two separate dies in separate areas would disperse heat even more).

The only reason I consider this is that if you observe dual-core processors, and the soon-coming multi-cores, they aren't being fully utilized by games (both cores aren't). Why not make one of those 2.8-3.6GHz (or equivalent) cores do the number crunching for a slimmed-down NV40 or G70 design?

The processing power seems to be there; the only issue might be bandwidth between CPU and GPU in this solution, but with HT that shouldn't be a problem.

But if you did something like that, and used an on-motherboard socket for this geometry/graphics unit, you could (it would seem) upgrade more freely due to the cheaper package.
Then again, this is very similar to the concept of integrated graphics, and the RAM usage would go up, along with the IRQ requests made in order to perform calculations.

The only other viable socket option I could see would be a continued utilization of PCIe and boards with sockets on them: when the socketed board "expires", you upgrade to a new one.
RAM would be a potential issue due to speeds and bus width, however. Possibly a system with dynamically adjustable channels? The ability to re-configure the RAM any way needed, from a single or dual channel (32-64 bit) up to higher widths like 256-bit or 512-bit. You'd still have all the capacity, and your speed wouldn't have to be derived from pure clock speed.

If the design used, say, 1GHz GDDR3 that can scale from a 32-bit to a 512-bit bus, you can scale from 8GB/s to 128GB/s, easily enough bandwidth to handle anything from a GeForce3 Ti 200 to a Radeon X1800 XT (with bandwidth left over).
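
For reference, the arithmetic behind those numbers (assuming the GDDR3 transfers twice per 1GHz clock), just a rough sketch:

[code]
#include <stdio.h>

/* Sketch of the scaling above: 1GHz GDDR3, double data rate,
   so 2 transfers per pin per clock. */
int main(void)
{
    double clock_hz = 1.0e9;
    int widths[]    = { 32, 64, 128, 256, 512 };   /* bus width in bits */
    int n           = sizeof(widths) / sizeof(widths[0]);

    for (int i = 0; i < n; i++) {
        double gbytes = clock_hz * 2.0 * widths[i] / 8.0 / 1e9;
        printf("%3d-bit bus: %6.1f GB/s\n", widths[i], gbytes);
    }
    return 0;   /* prints 8 GB/s at 32-bit up to 128 GB/s at 512-bit */
}
[/code]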

The memory controller design for that, however, would be costly, to say the least.
Yet it would allow for a better upgrade cycle, and when 128GB/s is maxed out you would have to upgrade to a new board, maybe an XDR-II board @ 8.4GHz with full 64-bit to 1024-bit adjustment. Who knows?

IDK, just my opinions.
As for an on-motherboard socket, the cooling requirements that would add make me squirm... just no.
 