GeForce Chipset and CUDA

iwod

Newcomer
I wonder when we will see 64 shaders on an mGPU.
To be honest, I am more than happy to pay more for a better integrated GPU, like 10-20 dollars.

If Nvidia wants CUDA to succeed, they will need WIDE (or ultra wide) penetration in the Intel chipset market.

Therefore the motherboard GPU should be an investment used to drive CUDA, not a profit-making machine in itself. That money will be made back when CUDA gets more applications in the much more profitable high-end market.

To me, still being at only 16 shaders in an mGPU is rather slow improvement.
 
Power and heat are a concern, especially heat dissipation: how would you put a big heatsink and a fan on the motherboard's chipset, and where would the air go?
Then there's the bandwidth problem, which could make it next to useless.
 

It wouldn't be unheard of...
As for the bandwidth, it's true that shared memory isn't optimal, but they could *in theory* add a Sideport memory support scheme to the IGP, just like ATI did.
 
I think before they do that, they need the killer app for the common man; I'm not sure video transcoding is it.
 
Not necessarily. If everyone predicted the next lottery number, then in theory everyone loses, because the prize is split among all winners :p

Seriously though, I don't really think integrated graphics chips are going to be very useful for CUDA applications, except for perhaps a few cases, such as, maybe, cracking passwords?
 
I've run FFT3DGPU on my 780G. It can process 720x480 video at about 7 fps, compared to ~50fps for my 3870. ;) Actually, my X800XL can pull about 25fps. And the 8800GTX rocks along at almost double the rate of the 3870. These numbers are from expiring brain memory though....

Heat isn't any different than if you play games. IGP performance is pretty gimpy though, due to the GPU itself being like a 3450/8400 and because it has limited, shared bandwidth.
 

Hey, at least these new IGPs from AMD/nVIDIA are miles ahead of Intel's Graphics Media Decelerator... I mean Accelerator.

I think nVIDIA's MCP7A is a good start for them to get decent market penetration into Intel's IGP market. I don't think Intel was very happy when Apple decided to use nV chipsets.
 
Yes, he was! ;)
http://www.eetimes.com/showArticle....IMMLLAQSNDLQCKHSCJUNN2JVN?articleID=205205933
12/30/2007 said:
Joe Toste, vice president of marketing at Equus Computer Systems, said the irony is that the industry needs a strong AMD. "But this is the year that Intel shifts their paranoia from AMD to Nvidia. [Intel CEO Paul] Otellini was really upset that Apple notebooks went to Nvidia."
And yes, I've suspected Macbooks would be MCP7A-based ever since that point but didn't feel it would be appropriate to make such a scoop out of such a comment, since it could blow back at the poor guy. Anyhow, this has been quite a long time in the making at least! :)
 
Well, at this rate, I don't think integrated graphics could ever reach 4870 performance in my lifetime.

Upset? Why? Otellini should be ashamed of himself.

I guess that is what happened:

SJ trusted Intel, in that the X4500 would be SO MUCH better than the X3100, and that Intel graphics would be 10 times better than the X3100 in 3-4 years' time...

The X4500 came out, and SJ was disappointed... What Jen-Hsun said makes sense: 10x better than a pile of crap is still a pile of crap.

Nvidia makes good graphics, and CUDA was a good fit for OpenCL.

After a number of years listening to Intel, I have given up hope on their graphics department. I really hope Larrabee fails too.
 
Seriously though, I don't really think integrated graphics chips are going to be very useful for CUDA applications, except for perhaps a few cases, such as, maybe, cracking passwords?

It's the same chicken-and-egg problem that Intel faced with regard to multi-threading.

Until there is strong enough market penetration of a technology, few developers will actively code to make use of what it offers. But until developers are offering programs that take advantage of a technology, few people will upgrade to that technology.

This is part of the reason that Intel has moved back to SMT with Nehalem - it increases the number of logical cores that developers can assume are free (even on home/business-spec systems), and so more deeply threaded code should be produced. Because processors are simply a required component of a computer, no 'killer app' is even needed - just a two-year update cycle!

Similarly, a graphics system tends to be required, and that alone is why both Intel and nVidia are working on IGP offerings for GPGPU. The problem is that there is no commonly adopted standard to help programmers to make use of GPGPU offerings, unlike the Intel instruction set for multi-threading.

The end game really can be as mundane as a faster spreadsheet application for accountants, quicker javascript on websites, and live red-eye reduction for everyone.
 
And this is the reason why I think it is pretty cool that Apple is trying to force this development. They will be integrating OpenCL into their next OS release (Snow Leopard). This will give software developers the ability to integrate massively parallel routines into their programs. If there is no GPU present, fine – it will run on the CPU, as it would otherwise: no loss here. But with a GPU, there will be a (substantial) speedup. As all future Macs will provide this functionality, professional applications for video and audio editing will emerge soon, and they will spread out to the PC world too.
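
A minimal sketch of what that CPU fallback could look like with the OpenCL host API in C (hypothetical, untested host code; a real application would also build a program and enqueue kernels, with proper error handling):

#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id   device;
    cl_int         err;

    clGetPlatformIDs(1, &platform, NULL);

    /* Prefer a GPU device... */
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        /* ...and fall back to the CPU: the same kernels still run, just without the speedup. */
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
        if (err != CL_SUCCESS) {
            fprintf(stderr, "no OpenCL device found\n");
            return 1;
        }
    }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    /* ...build kernels and enqueue work against ctx as usual... */
    clReleaseContext(ctx);
    return 0;
}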
 
P.S.

The problem is that there is no commonly adopted standard to help programmers to make use of GPGPU offerings, unlike the Intel instruction set for multi-threading.

There is, or there will be soon; it is being created by Apple and is called OpenCL. All major GPU makers (Nvidia, ATI, dunno about Intel) have already announced their future support for it.
 
Well, at this rate, I don't think integrated graphics could ever reach 4870 performance in my lifetime.
Why? Are you suffering from a terminal illness? Integrated graphics today are about as fast as the $400 R300 from only 6 years ago.

I'm waiting for ATI to pack those tiny HD4000 series ALUs into a small process chipset and put it in a laptop or even a netbook. That would absolutely rule. Fusion in a laptop would be pretty good too.
 
Why? Are you suffering from a terminal illness? Integrated graphics today are about as fast as the $400 R300 from only 6 years ago.
Only if you go with a 790GX and a Phenom, or that new GF9300 IGP. Maybe then it would be like a 9700 Pro. I'm not entirely sure about that though. My 780G + A64 X2 isn't there. It has an unstable framerate due to issues with CnQ and the reduced HT bandwidth with an A64 X2. Even games like Elite Force 2 and NFS Hot Pursuit 2 are too much for it at 1680x1050.

Any ol' PCIe 3650/9500/X800/6800/insert-~$50-GPU-here would be cheap and would obliterate the best IGPs of today. And on AMD platforms you wouldn't need to buy a relatively expensive CPU (Phenom) to give your budget setup extra bandwidth.
 
It wouldn't be unheard of...
As for the bandwidth, it's true that shared memory isn't optimal, but they could *in theory* add a Sideport memory support scheme to the IGP, just like ATI did.

It's 32-bit! Make it a more decent size, such as 128-bit, and it gets expensive.
64-bit GDDR5 might be an option.
The thing is, I don't see what you gain with such a mobo vs. a normal mobo + a card, power- and cost-wise.
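
Some rough back-of-the-envelope numbers (the clocks below are assumed typical values, not vendor specs), in C for anyone who wants to plug in their own figures:

#include <stdio.h>

/* Peak bandwidth in GB/s = bus width in bytes * effective transfer rate in GT/s.
   The clocks here are assumptions for illustration, not measured specs. */
static double gbps(int bus_bits, double gtps) { return (bus_bits / 8.0) * gtps; }

int main(void)
{
    printf("32-bit SidePort DDR3-1333 : %4.1f GB/s\n", gbps(32, 1.333));
    printf("64-bit GDDR5 @ 3.6 GT/s   : %4.1f GB/s\n", gbps(64, 3.6));
    printf("128-bit shared DDR2-800   : %4.1f GB/s\n", gbps(128, 0.8));
    return 0;
}

The shared pool looks wide on paper, but the IGP has to split it with the CPU, whereas a SidePort or GDDR5 pool would be the GPU's alone.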


But, this has come from my memory: a weird mobo with a SiS chipset + SiS Xabre 200 with its own memory was made!
So I might be short-sighted and later proved wrong.
http://www.techwarelabs.com/reviews/motherboard/pcchips_m847lu/index.shtml

Actually, non-IGP onboard "GPUs" were also quite common, ages ago, such as that marvelous ATI Rage Pro 2 Turbo with 8MB of SDRAM on a friend's mobo (thank god there was the Voodoo2). It didn't take much space and had no heatsink; nothing like that Xabre. More common was integrated 1MB S3, Cirrus Logic, etc. 2D video.
 


Err, um, do we really want to go back to the era of ubiquitous Radeon "HD1xxxx" chips with 128MB of GDDR5 memory lasting the 5 or so years that the Rage chip lasted? (Exaggerating a little, but the original Mach 64 chip was introduced in 1995 and the last iteration, the Rage 128 Pro, with admittedly far improved features and performance, was introduced in 1999.)

With all due respect to the poster, the Xabre is not a chipset I would suggest as a prime example of onboard graphics to argue for adding dedicated memory.

I remember being excited by the Xabre at E3 2002. (In fact I still have the sample SiS sent to me at that time. I also remember firing up games and going ugh.)
 
The Rage 128 really was as good as its contemporaries though, aside from drivers. Maybe it had similar 2D compared to the Mach 64, but I honestly rather doubt that. It had DVD acceleration, for example, and the Mach64 sure did not have that. Neither did the G400 for that matter! (And maybe the TNT2 too?)

IGPs today are not all that different from a performance perspective relative to their ancient ancestors. They still suck for games. Nearly useless for anything newer than, say, 2005. (780G tweaker/overclocker here.) For example, the i810 IGP was not vastly different from a Voodoo2 released only ~two years earlier. There were boards with onboard TNT2s. I'm not sure I recall onboard Rage 128s.

If you want to play any semi-recent 3D games on your machine at a res above 800x600 with low detail levels, don't go with an IGP. It's as simple as that. There are gobs of ex-hot-stuff 3D cards out there that will dust them and can be had for cheap. Even brand new stuff is cheap now, with how crowded the sub-$100 range is.

On the other hand, my 780G is great for everything that's not 3D. It eats up H.264 and VC-1, for example.

CUDA isn't going to magically run better on an IGP than games do. You still have hardware equivalent to the lowest of the low in recent GPUs, strapped to a slow, shared RAM interface.
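
To put a very rough ceiling on that (a sketch with assumed bandwidth figures, not measurements): a streaming kernel like SAXPY moves about 12 bytes per element for only 2 flops, so shared-memory bandwidth caps throughput long before shader count matters.

#include <stdio.h>

/* SAXPY (y[i] = a*x[i] + y[i]) does 2 flops per element but moves 12 bytes
   (read x, read y, write y), so the achievable rate is roughly
   bandwidth * (2 flops / 12 bytes), regardless of how many ALUs sit behind it.
   Both bandwidth numbers below are assumptions for illustration. */
int main(void)
{
    double igp_bw  = 6.0;   /* GB/s of shared system memory an IGP might see (assumed) */
    double card_bw = 70.0;  /* GB/s on a mid-range discrete card (assumed)             */

    printf("IGP ceiling      : ~%.1f GFLOP/s\n", igp_bw  * 2.0 / 12.0);
    printf("Discrete ceiling : ~%.1f GFLOP/s\n", card_bw * 2.0 / 12.0);
    return 0;
}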
 