Quote: I think he was referring to the rasterizers, not the ROPs.

Well, those too actually. If there's really four raster engines on a GF100 I'll eat some tasty headwear.
NV won't drop CUDA. It will just get deprecated or de-emphasized. They have to support the legacy.
DK
Isn't that really the same thing? A company like that can't exactly "drop" CUDA, but they can stop developing it further, push OpenCL instead, and still support existing CUDA customers and fix issues. To me that is still dropping CUDA.
Quote: Come on guys, that "dropping CUDA" is just trolling - I linked it for perspective on the likely bullshitty quality of that website as a whole.

I'd just like to point out that I was merely presenting a reasonable alternative to what was obviously an incorrect statement about nVidia and CUDA support on future hardware. Basically, some rube may well have heard that nVidia is dropping CUDA support in the future and interpreted it to mean something completely different, when what it really meant is that nVidia is transitioning to OpenCL instead, slowly de-emphasizing and then eventually dropping CUDA entirely down the road.
Rys said: Well, those too actually. If there's really four raster engines on a GF100 I'll eat some tasty headwear.

It's too bad there aren't more details from the Hardware.fr review. There's no test without culling and no details on triangle size.
Rys said: If there's really four raster engines on a GF100 I'll eat some tasty headwear.

There are. Do you want me to mail you some ketchup?
Quote: There are. Do you want me to mail you some ketchup?

Henderson's relish if that's OK, but wait until I get a GF104 to play with first :smile:
Quote: Nvidia dropping CUDA... No way. But it would be interesting to know how much CUDA costs Nvidia per year.

Well, like I said, from a software developer's standpoint, from what I've heard from people who are programming on GPUs, there is just no reason to program in CUDA instead of OpenCL on nVidia hardware. And since programming in OpenCL lets you use other hardware as well, there's just no reason to use CUDA. So it makes perfect sense that nVidia will deprecate it.
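To make the source-level part of that concrete, here's a hypothetical vector-add kernel (not from the thread, just an illustration) written both ways. At the kernel level it's close to a mechanical translation; the bigger differences are in the host API and tooling.

/* CUDA C version of a trivial vector add (hypothetical example). */
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

/* The same kernel in OpenCL C: only the qualifiers and the way the
   work-item index is obtained differ. */
__kernel void vec_add(__global const float *a, __global const float *b,
                      __global float *c, int n)
{
    int i = get_global_id(0);
    if (i < n)
        c[i] = a[i] + b[i];
}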
Quote: I can't see OpenCL ever achieving "performance parity" with CUDA because LLVM is in the way.

Not true. NV uses LLVM to compile PTX to ISA as well.
Quote: Not true. NV uses LLVM to compile PTX to ISA as well.

Do you have a source for that? I always thought PTX translation happened with a non-LLVM front end.
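For anyone who wants to poke at this themselves: NVIDIA's OpenCL implementation has, at least historically, handed back PTX text when you ask a built program for its "binary", which the same driver back end then lowers to ISA. A minimal sketch using the standard OpenCL 1.x host API (error handling mostly omitted, single GPU device assumed):

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* Trivial kernel source, just so there is something to compile. */
static const char *src =
    "__kernel void vec_add(__global const float *a, __global const float *b,\n"
    "                      __global float *c, int n)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);

    /* Ask for the program "binary". On NVIDIA's OpenCL this has usually been
       PTX text (assumption based on driver behaviour of that era); other
       vendors return an actual device binary. */
    size_t size = 0;
    clGetProgramInfo(prog, CL_PROGRAM_BINARY_SIZES, sizeof(size), &size, NULL);

    unsigned char *bin = malloc(size + 1);
    unsigned char *bins[] = { bin };
    clGetProgramInfo(prog, CL_PROGRAM_BINARIES, sizeof(bins), bins, NULL);
    bin[size] = '\0';

    printf("%s\n", bin);   /* dump whatever the driver handed back */
    free(bin);
    return 0;
}

Dumping that next to what nvcc --ptx produces for the equivalent CUDA kernel is a quick way to compare the two front ends on the same hardware.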