NVIDIA GF100 & Friends speculation

Come on guys, that "dropping CUDA" is just trolling - I linked it for perspective on the likely bullshitty quality of that website as a whole.
 
NV won't drop CUDA. It will just get deprecated or de-emphasized. They have to support the legacy.

DK

Isn't that really the same thing? A company like that can't exactly "drop" CUDA, but they can stop developing it further, push OpenCL instead, and still support existing CUDA customers and fix issues. To me that is still dropping CUDA.
 
Isn't that really the same thing? A company like that can't exactly "drop" CUDA, but they can stop developing it further, push OpenCL instead, and still support existing CUDA customers and fix issues. To me that is still dropping CUDA.

Dropping is dropping, i.e. no more support. Ripped out of drivers. Ripped out of tool flows. No more bug fixes.
 
What they can do is drop the "CUDA core" marketing-speak. If there's a kernel of truth in this story, it might be that their marketing people are going to turn it down a notch (from 12 to 11) on the CUDA naming front.
 
Come on guys, that "dropping CUDA" is just trolling - I linked it for perspective on the likely bullshitty quality of that website as a whole.
I'd just like to point out that I was merely presenting a reasonable alternative to what was obviously an incorrect statement about nVidia and CUDA support on future hardware. Basically, some rube may well have heard that nVidia is dropping CUDA support in the future and interpreted it to mean something completely different, when what it really meant is that nVidia is transitioning to OpenCL, slowly de-emphasizing CUDA and then eventually dropping it entirely down the road.

Of course, this is just a guess that some true statement got distorted in transmission. But if the person wasn't just pulling bullshit out of thin air, I think this may be a reasonable interpretation.
 
There are. Do you want me to mail you some ketchup? :)
Henderson's relish if that's OK, but wait until I get a GF104 to play with first :smile:

Carsten, I'll shoot you some notes in private.
 
Nvidia dropping CUDA... :rolleyes: No way. But it would be interesting to know how much CUDA costs Nvidia per year.
Well, like I said, from a software developer's standpoint, from what I've heard from people who are programming on GPUs, there is just no reason to program in CUDA instead of OpenCL on nVidia hardware. And since programming in OpenCL lets you use other hardware as well, there's just no reason to use CUDA. So it makes perfect sense that nVidia will deprecate it.
 
Is there an OpenCL equivalent to Nvidia's CUDA runtime API? I remember seeing rumbles about the latter being a bit easier to deal with, but I guess it's not that big of a deal if you're a serious programmer. As long as CUDA doesn't expose features that OpenCL lacks, there really isn't a compelling argument for CUDA once the two APIs achieve performance parity - last I saw, Nvidia's OpenCL implementation was still lagging behind CUDA.
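For illustration, here's a minimal sketch of the runtime-API style I mean (kernel and variable names are made up, nothing official). The point is that the launch itself is a single line of C-like code, whereas the OpenCL host side needs a context, a command queue, a program build (clBuildProgram), a clSetKernelArg per argument and a clEnqueueNDRangeKernel call before you get anything similar:

[code]
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Toy kernel: scale every element of an array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // One-line launch via the runtime API.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("h[0] = %f\n", h[0]);   // expect 2.0

    cudaFree(d);
    free(h);
    return 0;
}
[/code]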
 
CUDA's strength is in leveraging their hardware more efficiently, and faster (in terms of time before features or abilities are exposed properly) than OpenCL, with a development process for the API that they control. It doesn't have to go through a round of design by committee and be passed by the rest of the Khronos OpenCL working group for yays and nays, myriad spec wording rewrites, vetoes, arguments and the rest of it.

As for there being no reason to program using CUDA rather than CL on NV hardware, you're either joking or you just haven't given it a go. The documentation, tooling, support and driver are pretty much light years ahead of the CL offering (anyone's CL offering, for that matter). Most of the difficulty in getting over the initial development curve with CL or CUDA isn't the API; it's the debugging, profiling, support and good docs.

It's not enough to just download the documentation, the headers and a compiler and assume that from then on it's just API vs API, with performance and features at parity. That's not the real world of parallel programming on GPUs.

Not that I don't enjoy CL or that I'm unproductive with it, but CUDA still wins at this point. It's no coincidence that a big push at Khronos right now is concerned with tooling and a settled API, so developers can warm to it quicker. As soon as that happens we can probably have a serious discussion about what NV will do next with CUDA to maintain their competitive advantage in the market. After all, that's why it's there.
 
Well, like I said, from a software developer's standpoint, from what I've heard from people who are programming on GPUs, there is just no reason to program in CUDA instead of OpenCL on nVidia hardware. And since programming in OpenCL lets you use other hardware as well, there's just no reason to use CUDA. So it makes perfect sense that nVidia will deprecate it.

Isn't CUDA a bit faster? At least that's what I've seen from the few benchmarks I've stumbled upon.

Edit: after reading Rys's post, I guess the difference may just come down to CUDA being used to leverage the hardware better.
 
I can't see OpenCL ever achieving "performance parity" with CUDA because LLVM is in the way.

What I'm intrigued to see is how CUDA programming on Fermi goes beyond what OpenCL can do, in terms of language constructs at the C/C++ level. OpenCL's memory model is quite restricted, e.g. the way pointers can be used. And other stuff - I'm pretty fuzzy to be honest. I'm interested to see what the "more advanced programming model" brings.
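To make the pointer point concrete, here's a toy example of my own (not something out of the Fermi docs): CUDA is happy with raw device pointers stored inside device-resident data structures, whereas under OpenCL 1.x a buffer's address isn't guaranteed to stay the same between kernel launches, so the same trick usually gets rewritten with integer offsets into one big buffer.

[code]
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical node type: a device pointer lives inside device memory.
struct Node
{
    float value;
    Node *next;
};

__global__ void sum_list(const Node *head, float *out)
{
    // One thread walks the chain; the point is the pointer usage, not speed.
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        float s = 0.0f;
        for (const Node *n = head; n != nullptr; n = n->next)
            s += n->value;
        *out = s;
    }
}

int main()
{
    const int count = 4;
    Node host_nodes[count];

    // Allocate the list as one device array, then link it up on the host
    // using device addresses - fine in CUDA because d_nodes doesn't move.
    Node *d_nodes;
    cudaMalloc(&d_nodes, count * sizeof(Node));
    for (int i = 0; i < count; ++i) {
        host_nodes[i].value = float(i + 1);
        host_nodes[i].next = (i + 1 < count) ? d_nodes + i + 1 : nullptr;
    }
    cudaMemcpy(d_nodes, host_nodes, count * sizeof(Node), cudaMemcpyHostToDevice);

    float *d_sum;
    cudaMalloc(&d_sum, sizeof(float));
    sum_list<<<1, 32>>>(d_nodes, d_sum);

    float sum = 0.0f;
    cudaMemcpy(&sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f\n", sum);   // expect 10.0

    cudaFree(d_nodes);
    cudaFree(d_sum);
    return 0;
}
[/code]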

I'm not referring to Fermi's better/more-efficient support of CUDA features, e.g. fast atomics or cache hierarchy - but specifically to the way that programmers craft algorithms. Whether new algorithms become possible (in a similar fashion to the algorithmic choice shared memory engenders), or merely easier to implement.
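By algorithmic choice I mean things like the classic block-wide reduction below (my own sketch, assuming a 256-thread, power-of-two block size): staging the data in shared memory is what makes the tree-reduction structure sensible at all, because the partial sums stay on chip instead of going back out to DRAM between steps.

[code]
// Sums one block's worth of input into block_out[blockIdx.x].
// Assumes blockDim.x == 256 and that it is a power of two.
__global__ void block_sum(const float *in, float *block_out, int n)
{
    __shared__ float tile[256];           // one element per thread in the block

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block, entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        block_out[blockIdx.x] = tile[0];
}
[/code]

You'd launch it with 256 threads per block and then finish off the per-block partials with a second pass or on the host; the interesting bit is that without shared memory the whole structure of the algorithm changes.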

I suppose I should add the caveat, that this is all within the confines of the GPU. OpenCL obviously blurs boundaries with CPU/GPU/co-processor and opens up algorithmic choices that aren't so clear-cut with CUDA.
 