Best "old" Geforce card to buy for CUDA?

Hi,

I'm an engineering student who is looking to learn GPU programming, more specifically CUDA.
I'm looking for a used card that won't cost too much money.
I see that double-precision floating-point performance hasn't improved much over the past few years, at least not on Nvidia's cards. For example, the GeForce GTX 580 does 195 GFLOPS DP, while the GTX 980 does 165 GFLOPS DP.

Would there be any disadvantages to going for a GTX 580 instead of a card from the 6xx/7xx series (the 9xx series would be too expensive), given that I could hopefully find a GTX 580 3GB?

I would use CUDA mostly for finite-difference forward modelling of the wave equation, mainly in the context of exploration seismology.
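
To give an idea of the workload, here's a minimal sketch of the kind of 2D acoustic stencil kernel I'd be writing (second-order in space and time; grid setup, sources and absorbing boundaries left out, and the names are just placeholders):

// One explicit time step of the 2D acoustic wave equation:
// p(t+dt) = 2*p(t) - p(t-dt) + c^2 * dt^2 * laplacian(p(t))
__global__ void fd_step(float *p_next, const float *p_curr, const float *p_prev,
                        const float *vel, int nx, int ny, float dt, float dx)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix < 1 || iy < 1 || ix >= nx - 1 || iy >= ny - 1) return;

    int i = iy * nx + ix;
    // 5-point Laplacian of the current pressure field
    float lap = (p_curr[i - 1] + p_curr[i + 1] +
                 p_curr[i - nx] + p_curr[i + nx] - 4.0f * p_curr[i]) / (dx * dx);
    float c = vel[i];
    p_next[i] = 2.0f * p_curr[i] - p_prev[i] + c * c * dt * dt * lap;
}

The FP64 question basically comes down to whether I can run kernels like this in float or need double.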

Thank you and happy holidays to all!
 
Is FP64 important to you? It may be implied in your post but I wanted to make sure.
 
FP64-to-FP32 ratios have been steadily cut back on recent consumer GPUs because FP64 doesn't seem to be used in games at all. In fact, there's a trend to (re?)introduce dedicated FP16 functionality, because for some shaders there's no discernible visual difference from using 32-bit values.
Also, the fact that GPUs have been stuck on 28nm for over 3 years probably didn't help, so IHVs had to start cutting somewhere in order to dedicate more transistors to gaming-oriented features.
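
If you want to check what you're actually dealing with, the compute capability reported by the CUDA runtime tells you the architecture, and the FP64:FP32 ratio follows from that (roughly 1/8 on GeForce Fermi like the GTX 580, 1/24 on most consumer Kepler, 1/32 on Maxwell). A quick sketch:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // compute capability 2.x = Fermi, 3.x = Kepler, 5.x = Maxwell
        printf("%s: compute capability %d.%d, %d SMs, %.0f MHz, %.1f GB\n",
               prop.name, prop.major, prop.minor, prop.multiProcessorCount,
               prop.clockRate / 1000.0, prop.totalGlobalMem / 1073741824.0);
    }
    return 0;
}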


The Wikipedia lists have columns for theoretical SP and DP performance for all DX11+ GPUs:

https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units

As for the cards, 3GB versions of the GTX 580 are quite rare, so they'll probably be a lot more expensive than the regular 1.5GB ones on the used market.

I don't see any disadvantage to using a GTX 580 other than it being a rather old GPU with rather low efficiency, so even the old GK104 cards may get better real-world performance (despite their lower theoretical DP performance).


I do see somewhat of a disadvantage to using CUDA in your particular case of being a student who wants to learn GPU programming.
I reckon nVidia's documentation is better and easier to get into, but you're learning an SDK that is tied to a single GPU vendor, and as such it might not be around for much longer.

For example, OpenCL may not have the documentation and full functionality of CUDA (yet), but it works on both nVidia and AMD GPUs. If you can get by with SP, you could use GPGPU acceleration on pretty much every GPU released within the last 2-3 years from nVidia, AMD, Intel, ARM, PowerVR and Vivante.

Furthermore, if DP is really what you need, you could get your hands on an old Tahiti XT card (Radeon HD 7970 / R9 280X) for little money; it already comes with 3GB and has about 5x the DP performance of the GTX 580 at the same power consumption.
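
If you want to see the FP64 gap for yourself on whatever card you end up with, a throughput micro-benchmark is enough: time the same FMA loop in float and in double and compare. A rough sketch (not a rigorous benchmark, error checking omitted):

#include <cstdio>
#include <cuda_runtime.h>

// Compute-bound FMA loop; the final store keeps the compiler from removing it.
template <typename T>
__global__ void fma_loop(T *out, T a, T b, int iters)
{
    T x = (T)threadIdx.x;
    for (int i = 0; i < iters; ++i)
        x = x * a + b;
    out[blockIdx.x * blockDim.x + threadIdx.x] = x;
}

// Launches the loop for one type and returns the elapsed time in milliseconds.
template <typename T>
float time_kernel(int blocks, int threads, int iters)
{
    T *out;
    cudaMalloc(&out, blocks * threads * sizeof(T));
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    fma_loop<T><<<blocks, threads>>>(out, (T)1.0001, (T)0.0001, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    cudaFree(out);
    return ms;
}

int main()
{
    int blocks = 1024, threads = 256, iters = 1 << 16;
    float ms_f = time_kernel<float>(blocks, threads, iters);
    float ms_d = time_kernel<double>(blocks, threads, iters);
    printf("float:  %.2f ms\ndouble: %.2f ms (ratio %.1fx)\n",
           ms_f, ms_d, ms_d / ms_f);
    return 0;
}

On a card with a 1/24 or 1/32 ratio the double run should be dramatically slower; on something like Tahiti (1/4) the gap is much smaller.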
 
This is what I was getting at. If DPFP isn't important to you, then go with a GTX 960 4GB or something.
 
OpenCL would be useless. CUDA is where it's at.
 