How CUDA programming works

pharma

This is not a course on CUDA programming. It's a foundation on what works, what doesn't work, and why. We'll tell you how to think about a problem in a way that will run well on the GPU, and you'll see how the CUDA programming model is built to run that way. If you're new to CUDA, we'll give you the core background knowledge you need — getting started begins with understanding. If you're an expert, hopefully you'll face your next optimization problem with a new perspective on what might work, and why.
GTC 2022
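
For anyone who hasn't seen the session, here's a minimal sketch (my illustration, not code from the talk) of the kind of thinking it teaches: a grid-stride vector add, which maps a data-parallel problem onto CUDA's thread/block grid so that neighboring threads touch neighboring elements. All names and sizes here are arbitrary.

Code:
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Each thread starts at its global index and strides by the total
    // thread count, so any grid size covers any n, and adjacent threads
    // read adjacent elements (coalesced memory access).
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the example short; production code often
    // manages explicit host/device copies instead.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<256, 256>>>(a, b, c, n);  // 256 blocks of 256 threads
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);       // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

The point of the talk is exactly why this shape runs well: lots of independent threads, regular memory access, and enough parallelism to hide latency.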
 
Didn't Nvidia open source a portion of the CUDA compiler (in 2011) to provide a compatibility path for other manufacturers? I think that's what they're referring to with the "run well on the GPU" statement, maybe even more so now with architectures like ARM.
No.

"NVIDIA will not be releasing CUDA LLVM in a truly open source manner, but they will be releasing the source in a manner akin to Microsoft’s “shared source” initiative – eligible researchers and developers will be able to apply to NVIDIA for access to the source code."

Maybe you can read the article before you start posting links?
 
If you have issues with linked articles, that's your problem. Thanks for highlighting (part of) the relevant quote from the article. ;)

Edit:
"NVIDIA will not be releasing CUDA LLVM in a truly open source manner, but they will be releasing the source in a manner akin to Microsoft’s “shared source” initiative – eligible researchers and developers will be able to apply to NVIDIA for access to the source code. This allows NVIDIA to share CUDA LLVM with the necessary parties to expand its functionality without sharing it with everyone and having the inner workings of the Fermi code generator exposed, or having someone (i.e. AMD) add support for a new architecture and hurt NVIDIA’s hardware business in the process."
 
I think what you (Pharma) just quoted directly supports the statement that Tuna made initially, and refutes the one that you attempted to make. I'm not sure of any other way to interpret what I've read.

Or to say it another way: no, NVIDIA did not open up their CUDA source code to other manufacturers. They may decide, of their own volition, to permit specific groups to extend its functionality, so long as doing so never exposes the code generation components or lets anyone use the knowledge to build competing architectures.

Undoubtedly they would deny anyone who wanted to make CUDA run anywhere else.
 
Exactly. In my initial statement I explicitly said a "portion", which refers to "sharing" CUDA source code with specific groups. I know it's not fully open source, and I could have been a bit clearer about that.

I think Nvidia is intelligent enough not to allow full access to CUDA LLVM without having every "i" dotted and "t" crossed by developers, since it is still valuable IP.
 
Funny tidbit about CUDA ... it was initially considered a bad investment by Wall Street.

In 2006, the company made another huge bet, releasing a software toolkit called CUDA.

“For 10 years, Wall Street asked Nvidia, ‘Why are you making this investment? No one’s using it.’ And they valued it at $0 in our market cap,” said Bryan Catanzaro, vice president of applied deep learning research at Nvidia. He was one of the only employees working on AI when he joined Nvidia in 2008. Now, the company has thousands of staffers working in the space.

“It wasn’t until around 2016, 10 years after CUDA came out, that all of a sudden people understood this is a dramatically different way of writing computer programs,” Catanzaro said. “It has transformational speedups that then yield breakthrough results in artificial intelligence.”

Although AI is growing rapidly, gaming remains Nvidia’s primary business. In 2018, the company used its AI expertise to make its next big leap in graphics. The company introduced GeForce RTX based on what it had learned in AI.

“In order for us to take computer graphics and video games to the next level, we had to reinvent and disrupt ourselves, change literally what we invented altogether,” Huang said. “We invented this new way of doing computer graphics, ray tracing, basically simulating the pathways of light and simulate everything with generative AI. And so we compute one pixel and we imagine with AI the other seven.”
 