NVIDIA GF100 & Friends speculation

Discussion in 'Architecture and Products' started by Arty, Oct 1, 2009.

  1. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    Come on guys, that "dropping CUDA" is just trolling - I linked it for perspective on the likely bullshitty quality of that website as a whole.
     
  2. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    NV won't drop CUDA. It will just get deprecated or de-emphasized. They have to support the legacy.

    DK
     
  3. Rys

    Rys Graphics @ AMD
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,182
    Likes Received:
    1,579
    Location:
    Beyond3D HQ
    Well, those too actually. If there's really four raster engines on a GF100 I'll eat some tasty headwear.
     
  4. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,931
    Likes Received:
    5,533
    Location:
    Pennsylvania
Isn't that really the same thing? A company like that can't exactly "drop" CUDA, but they can stop developing it further, push OpenCL instead, and still support existing CUDA customers and fix issues. To me that is still dropping CUDA.
     
  5. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    Dropping is dropping. ie, no more support. Ripped out of drivers. Ripped out of tool flows. No more bug fixes.
     
  6. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,492
    Likes Received:
    979
    Location:
    en.gb.uk
What they can do is drop the "CUDA core" market-speak. If there's a kernel of truth in this story, it might be that their marketing people are going to turn it down a notch (from 12 to 11) on the CUDA naming front.
     
  7. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Care to share your feelings on this matter? :)
     
  8. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
I'd just like to point out that I was merely presenting a reasonable alternative to what was obviously an incorrect statement about nVidia and CUDA support on future hardware. Basically, some rube may well have heard that nVidia is dropping CUDA support in the future and interpreted it to mean something completely different, when what it really meant is that nVidia is transitioning to OpenCL, slowly de-emphasizing CUDA and eventually dropping it entirely down the road.

    Of course, this is just a guess that some true statement, through transmission, became distorted. But if the person wasn't just pulling bullshit out of thin air, I think this may be a reasonable interpretation.
     
  9. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
    It's too bad there aren't more details from the Hardware.fr review. There's no test with no culling and no details on triangle size.
     
  10. Bob

    Bob
    Regular

    Joined:
    Apr 22, 2004
    Messages:
    424
    Likes Received:
    47
    There are. Do you want me to mail you some ketchup? :)
     
  11. Arnold Beckenbauer

    Veteran Subscriber

    Joined:
    Oct 11, 2006
    Messages:
    1,756
    Likes Received:
    722
    Location:
    Germany
What I can imagine is that Nvidia could "drop" C/C# for CUDA in the same way AMD did with Brook+.
     
  12. Rys

    Rys Graphics @ AMD
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,182
    Likes Received:
    1,579
    Location:
    Beyond3D HQ
    Henderson's relish if that's OK, but wait until I get a GF104 to play with first :smile:

    Carsten, I'll shoot you some notes in private.
     
  13. Robert Varga

    Newcomer

    Joined:
    Jan 13, 2010
    Messages:
    26
    Likes Received:
    0
    Nvidia dropping CUDA... :roll: No way. But it would be interesting to know how much CUDA costs Nvidia per year.
     
  14. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
Well, like I said, from a software developer's standpoint, from what I've heard from people who are programming on GPUs, there is just no reason to program on CUDA instead of OpenCL on nVidia hardware. And since programming in OpenCL lets you use other hardware as well, there's just no reason to use CUDA. So it makes perfect sense that nVidia will deprecate it.
     
  15. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
Is there an OpenCL equivalent to Nvidia's CUDA runtime API? I remember seeing rumbles about the latter being a bit easier to deal with, but I guess it's not that big of a deal if you're a serious programmer. As long as CUDA doesn't expose features that OpenCL lacks, there really isn't a compelling argument for CUDA once the two APIs achieve performance parity; last I saw, Nvidia's OpenCL implementation was still lagging behind CUDA.
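The runtime-vs-host-API gap being asked about here can be sketched roughly as below. This is an illustrative comparison only: the kernel `scale` and its launch parameters are hypothetical, and the OpenCL call chain is the typical minimum for the 1.x era, not taken from any specific codebase.

```
// CUDA runtime API: context/device setup is implicit, and a kernel
// launch is a single chevron expression.
__global__ void scale(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}
// Host side, in its entirety:
//   cudaMalloc(&d, n * sizeof(float));
//   cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
//   scale<<<(n + 255) / 256, 256>>>(d, n);

// OpenCL: the same launch needs explicit platform, device, context,
// queue, program, and kernel objects before the first enqueue:
//   clGetPlatformIDs -> clGetDeviceIDs -> clCreateContext ->
//   clCreateCommandQueue -> clCreateProgramWithSource ->
//   clBuildProgram -> clCreateKernel -> clCreateBuffer ->
//   clSetKernelArg -> clEnqueueNDRangeKernel
```

The OpenCL path buys portability across vendors; the CUDA runtime path trades that away for far less host boilerplate, which is presumably the "easier to deal with" impression mentioned above.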
     
  16. Rys

    Rys Graphics @ AMD
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,182
    Likes Received:
    1,579
    Location:
    Beyond3D HQ
CUDA's strength is in leveraging their hardware more efficiently and faster (in terms of time before features or abilities are exposed properly) than OpenCL, with a development process for the API that they control. It doesn't have to do a round of design by committee and be passed by the other OpenCL ARB members for yays and nays, myriad spec wording rewrites, vetoes, arguments and the rest of it.

    As for there being no reason to program using CUDA rather than CL on NV hardware, you're either joking or you just haven't given it a go. The documentation and tooling and support and driver are pretty much light years ahead of the CL offering (and anyone's CL offering). Most of the difficulty in getting over the initial development curve with CL or CUDA isn't the API; it's the debugging and profiling and support and good docs.

    It's not enough to just download the documentation and headers and a compiler and from then on it's just API vs API because performance and features are at parity. That's not the real world with parallel programming on GPUs.

    Not that I don't enjoy CL or I'm unproductive with it, but CUDA still wins at this point. It's no coincidence that a big push at Khronos right now is concerned with tooling and a settled API, so developers can warm to it quicker. As soon as that happens we can probably have a serious discussion about what NV will do next with CUDA to maintain their competitive advantage in the market. After all, that's why it's there.
     
  17. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Isn't CUDA a bit faster? At least that's what I've seen from the few benchmarks I've stumbled upon.

    Edit: after reading Rys's post, I guess it may just be due to it being used better.
     
  18. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    I can't see OpenCL ever achieving "performance parity" with CUDA because LLVM is in the way.

    What I'm intrigued to see is how CUDA programming on Fermi goes beyond what OpenCL can do, in terms of language constructs at the C/C++ level. OpenCL's memory model is quite restricted, e.g. the way pointers can be used. And other stuff - I'm pretty fuzzy to be honest. I'm interested to see what the "more advanced programming model" brings.

    I'm not referring to Fermi's better/more-efficient support of CUDA features, e.g. fast atomics or cache hierarchy - but specifically to the way that programmers craft algorithms. Whether new algorithms become possible (in a similar fashion to the algorithmic choice shared memory engenders), or merely easier to implement.

    I suppose I should add the caveat, that this is all within the confines of the GPU. OpenCL obviously blurs boundaries with CPU/GPU/co-processor and opens up algorithmic choices that aren't so clear-cut with CUDA.
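The pointer restriction alluded to above can be made concrete with a small sketch. This assumes OpenCL 1.x rules, where every pointer in kernel code is tied to a named address space; the helper names are illustrative, not from any real codebase.

```
// OpenCL C (1.x): a pointer's address space is part of its type, so a
// helper that should work on both global and local memory has to be
// written once per space:
float sum_global(__global const float *p, int n);
float sum_local (__local  const float *p, int n);

// CUDA C on Fermi, by contrast, has a unified (generic) address space:
// one __device__ function works on pointers into global or shared
// memory alike, and the hardware resolves the space at access time.
__device__ float sum(const float *p, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += p[i];
    return s;
}
```

That generic-pointer model is one example of a language construct that makes pointer-heavy algorithms (linked structures, space-agnostic library code) easier to express in CUDA than under OpenCL's segmented model.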
     
  19. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
Not true. NV uses LLVM to compile PTX to ISA as well.
     
  20. Rys

    Rys Graphics @ AMD
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,182
    Likes Received:
    1,579
    Location:
    Beyond3D HQ
    Do you have a source for that? I always thought PTX translation happened with a non-LLVM front end.
     