NVIDIA discussion [2024]

His answer seems focused on cloud inference, where Nvidia of course has a first-mover advantage. The edge inference market is still wide open, and Intel, AMD, Qualcomm, and Apple will be mixing it up there.
Meanwhile, cloud providers (at least Google) are saying it's actually CPUs doing most inference, not NVIDIA GPUs as Jensen claims.
 
Meanwhile, cloud providers (at least Google) are saying it's actually CPUs doing most inference, not NVIDIA GPUs as Jensen claims.

That's probably true, and it makes sense given the massive installed base of CPUs. The question is what cloud providers are buying today to run inference going forward.
 
The Nvidia effect:
Oracle (NYSE:ORCL) plans to release details next week of a partnership with tech titan Nvidia (NVDA).
Austin, Texas-based Oracle confirmed the partnership and upcoming announcement during the company's latest earnings call on Monday.
Oracle shares rocketed 13% during post-market trading.
 
Oracle’s cloud business seems to be doing well, which bodes well for Nvidia. A lot of the good news is already priced in, though.
 

Perfect explanation of why their graphics hardware designs keep ballooning in complexity ...
 
Sparse memory is a memory-management feature that lets an application reserve a large virtual range for a resource while committing physical memory only to the regions actually in use, which is more efficient than allocating everything up front and speeds up retrieval. Its adoption has increased recently as more gaming titles make it mandatory. In light of this, NVIDIA's NVK Vulkan driver has added support for the memory-management feature, ensuring a seamless experience for Linux gamers.
...
Phoronix reports that sparse memory support in NVIDIA's NVK driver will be a major contribution to the future of the platform for gaming, since several games in the Linux ecosystem, such as those run through Steam Play using DXVK and VKD3D-Proton, list sparse memory as a hard requirement. We have yet to see equivalent support from AMD's camp, but judging by how the open-source Mesa developers are proceeding with AMD's Vulkan driver, we expect it to arrive soon.
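For anyone unfamiliar with the feature being described, the core idea behind sparse resources can be sketched in plain Python: reserve a huge virtual range for a buffer, but commit backing pages only where data is actually written. This is a toy illustration of the concept only, not Vulkan API code; the class and the 64 KiB page size are invented for the example (Vulkan exposes the real mechanism through sparse binding at a similar block granularity).

```python
PAGE_SIZE = 65536  # 64 KiB pages, a typical sparse-block granularity

class SparseBuffer:
    """Toy sparse buffer: reserves a large virtual size but only
    commits physical pages for regions that are actually written."""

    def __init__(self, virtual_size):
        self.virtual_size = virtual_size
        self.pages = {}  # page index -> bytearray (committed pages only)

    def write(self, offset, data):
        # Commit a page lazily the first time any byte in it is written.
        for i, b in enumerate(data):
            page, off = divmod(offset + i, PAGE_SIZE)
            self.pages.setdefault(page, bytearray(PAGE_SIZE))[off] = b

    def read(self, offset, length):
        # Reads from unbound pages return zeros, like unbound sparse regions.
        out = bytearray(length)
        for i in range(length):
            page, off = divmod(offset + i, PAGE_SIZE)
            if page in self.pages:
                out[i] = self.pages[page][off]
        return bytes(out)

    def committed_bytes(self):
        return len(self.pages) * PAGE_SIZE

buf = SparseBuffer(virtual_size=1 << 32)        # "reserve" 4 GiB of address space
buf.write(3 * PAGE_SIZE + 10, b"texture tile")  # touch a single page
print(buf.committed_bytes())                    # only one 64 KiB page is committed
```

The point is the ratio: the buffer claims a 4 GiB range but only 64 KiB of real memory is committed, which is why games with huge, partially resident textures treat sparse support as a hard requirement.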
 
NVIDIA is being sued for copyright infringement over training data used for NeMo. They used a dataset that includes 196,460 books, among them works by Brian Keene, Abdi Nazemian, and Stewart O'Nan, the authors behind the class-action suit, without appropriate licenses from the writers.
That's a good thing. AI companies are stealing a lot of data to achieve their results. This behavior must change.

PS: Of course they went after the richest company to get the biggest compensation 🤑
 
Has it been established that training on copyrighted material is against copyright? I don't think OpenAI denies having used NYTimes' material either, but there the NYT is arguing that articles can be reproduced verbatim, which presumably lets the alleged violation map more cleanly onto existing law?
 
Doesn't this make reading any sort of book to learn something useless now, because it's against copyright?
It's now useless, because if I learn from it, it's against copyright.
 
Doesn't this make reading any sort of book to learn something useless now, because it's against copyright?
It's now useless, because if I learn from it, it's against copyright.
There's a difference between humans and computers (programs), though, and in this case it's been argued that bots can reproduce the learning material verbatim. (Yes, there are some people with eidetic memory who could probably do it, but they're the exception to the human rule.)
 
There's a difference between humans and computers (programs), though, and in this case it's been argued that bots can reproduce the learning material verbatim. (Yes, there are some people with eidetic memory who could probably do it, but they're the exception to the human rule.)

Isn't it better to just forbid AI from reproducing the material verbatim? I mean, it's probably not legal (or would at least require a fair-use test) even if a human does it. Consider this: if a person can recite a full novel perfectly, the copyright owner might have a case for receiving something every time the person performs it (e.g., as part of a show). On the other hand, if a person writes a novel in a similar style to another novel, the copyright owner probably can't ask for a license fee, even if the person learned the style by reading that novel.
 
Isn't it better to just forbid AI from reproducing the material verbatim? I mean, it's probably not legal (or would at least require a fair-use test) even if a human does it. Consider this: if a person can recite a full novel perfectly, the copyright owner might have a case for receiving something every time the person performs it (e.g., as part of a show). On the other hand, if a person writes a novel in a similar style to another novel, the copyright owner probably can't ask for a license fee, even if the person learned the style by reading that novel.
Where do you draw the line, though? In music you just need to be close enough; it doesn't need to be an exact copy, the same style, or even the whole song to be copyright infringement. From an AI point of view, it could produce a visually (or audibly) identical copy of anything, with changes imperceptible to humans but more than enough to fool other AIs (think of the proposed copy-protection methods for shielding images from AI bots). And it can copy text verbatim and, if need be, rewrite it with minute changes to claim it's not a copy even when it really is.
Given that, with unlimited time and computational power, AI could in fact learn literally everything ever produced, there needs to be a price to pay for the training data.
 
Where do you draw the line, though? In music you just need to be close enough; it doesn't need to be an exact copy, the same style, or even the whole song to be copyright infringement. From an AI point of view, it could produce a visually (or audibly) identical copy of anything, with changes imperceptible to humans but more than enough to fool other AIs (think of the proposed copy-protection methods for shielding images from AI bots). And it can copy text verbatim and, if need be, rewrite it with minute changes to claim it's not a copy even when it really is.
Given that, with unlimited time and computational power, AI could in fact learn literally everything ever produced, there needs to be a price to pay for the training data.

This is something I think we (as a society) need to consider more carefully, of course. But right now I think it's reasonable to start from what we have (and what roughly works), and iterate from there. Obviously AI will be able to do something that no human can, and we'll have to come up with new rules to tackle that something. Right now, it's not yet very clear what that something will be.
I think it's a bit dangerous to just require AI trainers to pay a different fee "because it's AI." It'd be like a newspaper charging different rates for people who are just casually reading versus doing research. If search engines had been restricted that way at the beginning, it's quite possible we'd never have found out how useful a search engine could be.
 
Here's a question: if the people in charge of the AI program pay for the training data, are they not entitled to use it?
E.g., if they buy all the Stephen King novels, can't they use them?
Or if you're talking about programming guides, that's the whole point of buying them.
 
Here's a question: if the people in charge of the AI program pay for the training data, are they not entitled to use it?
E.g., if they buy all the Stephen King novels, can't they use them?
Or if you're talking about programming guides, that's the whole point of buying them.

This is a question of licensing, since we haven't had anything like that for books before.
For example, if we look at software licensing, we're already familiar with different licenses for different purposes (e.g., educational licensing, commercial licensing, etc.). For books we generally don't have that, although there are some specific exceptions, such as licensing for public libraries. Also, buying a book generally doesn't give you the right to, say, adapt it into a movie.
So in the future, copyright owners (or publishers) may come to require a separate license for use in AI training. That's probably not the direction some people would like to see (you'd be buying a book marked "for human reading only" or something like that), but it's probably the most likely direction we're heading.
 