Bondrewd
Yeah, but LLMs as a workload map really badly to client devices that have tiny DRAM pools and low bandwidth. Hence wide open.
Which is why all LPDDR inference sticks a-la QC CloudAI went nowhere this cycle.
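To make the "tiny DRAM pools and low bandwidth" point concrete, here's some napkin math: autoregressive decode has to stream roughly all of a model's weights from memory for every generated token, so decode speed is memory-bandwidth bound. The figures below are illustrative assumptions, not measurements of any real device.

```python
# Rough upper bound on LLM decode speed when memory-bandwidth bound:
# each generated token reads (approximately) every weight once, so
# tokens/s <= bandwidth / model size. Illustrative numbers only.

def max_tokens_per_sec(params_billion: float, bytes_per_weight: float,
                       bandwidth_gb_s: float) -> float:
    """Ceiling on decode tokens/s if every token streams all weights."""
    model_bytes = params_billion * 1e9 * bytes_per_weight
    return bandwidth_gb_s * 1e9 / model_bytes

# 7B model at 8-bit (7 GB) on a typical LPDDR client device (~60 GB/s)
print(round(max_tokens_per_sec(7, 1, 60), 1))    # ~8.6 tokens/s ceiling
# Same model on an HBM datacenter accelerator (~3000 GB/s)
print(round(max_tokens_per_sec(7, 1, 3000), 1))  # ~428.6 tokens/s ceiling
```

The two-orders-of-magnitude gap in bandwidth, not compute, is the crude reason client devices struggle with this workload.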
Meanwhile cloud providers (at least Google) are saying it's actually CPUs doing most inference, not NVIDIA GPUs like Jensen claims

His answer seems focused on cloud inference, which of course Nvidia has a first-mover advantage in. The edge inference market is still wide open, and Intel/AMD/Qualcomm/Apple will be mixing it up there.
Google: CPUs are Leading AI Inference Workloads, Not GPUs
Today's AI infrastructure is mostly fueled by the expansion of GPU-accelerated servers, yet Google, one of the world's largest hyperscalers, has noted that CPUs are still a leading compute platform for AI/ML workloads, according to internal analysis of its Google Cloud services. (www.techpowerup.com)
Sparse memory is a memory-management feature that lets an application bind physical memory to only the parts of a large resource it actually uses, on demand, which is more efficient than allocating full backing up front and speeds up handling of huge resources. Its adoption has increased recently as more gaming titles begin to make it mandatory. In light of this, the NVK Vulkan driver for NVIDIA GPUs has added support for the feature, ensuring a seamless experience for Linux gamers.
...
Phoronix reports that NVK's sparse memory support will be a major contribution to the platform's future in gaming, since several games in the Linux ecosystem, such as those run through Steam Play using DXVK and VKD3D-Proton, list sparse memory as a primary requirement. We have yet to see equivalent support from AMD's camp, but judging by how the open-source Mesa project is proceeding with its AMD Vulkan driver, we expect it to arrive soon.
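The article is vague about what "sparse memory" actually does, so here's a toy model of the idea in Python: a large virtual buffer whose backing pages are bound lazily, only when touched. Class and method names and the 64 KiB page size are illustrative assumptions; real Vulkan sparse binding works on device memory via the sparse-resources API, not like this.

```python
# Toy model of sparse (partially-resident) resources: a huge virtual
# buffer where backing memory is bound page-by-page on first write,
# instead of being fully allocated up front. Purely illustrative.

PAGE = 64 * 1024  # 64 KiB granularity, a common sparse block size

class SparseBuffer:
    def __init__(self, virtual_size: int):
        self.virtual_size = virtual_size
        self.pages = {}  # page index -> bytearray, bound lazily

    def write(self, offset: int, data: bytes):
        for i, b in enumerate(data):
            page, off = divmod(offset + i, PAGE)
            if page not in self.pages:        # bind backing on demand
                self.pages[page] = bytearray(PAGE)
            self.pages[page][off] = b

    def read(self, offset: int, n: int) -> bytes:
        out = bytearray()
        for i in range(n):
            page, off = divmod(offset + i, PAGE)
            # unbound pages read as zero here (undefined in real APIs)
            out.append(self.pages[page][off] if page in self.pages else 0)
        return bytes(out)

    def resident_bytes(self) -> int:
        return len(self.pages) * PAGE

# A "1 GiB" buffer that only touches two distant regions:
buf = SparseBuffer(1 << 30)
buf.write(0, b"header")
buf.write(900 * 1024 * 1024, b"tail")
print(buf.read(0, 6))        # b'header'
print(buf.resident_bytes())  # 131072 -> only two 64 KiB pages bound
```

The win is the last line: a gigabyte-sized resource costs only two pages of real memory, which is why games with giant streamed assets want the feature.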
That's a good thing. AI is stealing a lot of data to achieve its results. This behavior must change.

NVIDIA is being sued for copyright infringement over training data used for NeMo. They used a dataset that includes 196,460 books, including ones by Brian Keene, Abdi Nazemian and Stewart O'Nan, who are behind the class-action suit, without appropriate licences from the writers.
There's a difference between humans and computers (programs) though, and in this case it's been argued that bots can reproduce the learning material verbatim. (Yes, there are some people with eidetic memory who could probably do it, but they're the exception to the human rule.)

Doesn't this make reading any sort of book to learn something now useless, because it's against copyright?
for example: [attachment 10968]
It's now useless because if I learn from it it's against copyright
Where do you draw the line, though? In music you just need to be close enough; it doesn't need to be an exact copy, the same style or even the whole song to be copyright infringement. From an AI point of view, it could make a visually (or audibly) identical copy of anything with changes imperceptible to humans but more than enough to fool other AIs (think of the proposed copy-protection methods for images against AI bots). And it can copy text verbatim and, if need be, rewrite it with minute changes to claim it's not a copy even when it really is.

Isn't it better to just forbid AI from reproducing the material verbatim? I mean, that's probably not legal (or would at least require a fair-use test) even if a human does it. Consider this: if a person could recite a full novel perfectly, the copyright owner might have a case to receive something every time the person did that (e.g. as a show or something). On the other hand, if a person writes a novel in a similar style to another novel, the copyright owner probably can't ask for a licence fee, even if the person learnt the style by reading the novel.
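The "rewrite it with minute changes" problem is exactly why near-duplicate detection doesn't use exact string matching but overlapping word n-grams: small edits barely change the n-gram set, so a lightly reworded copy still scores high. A minimal sketch (the example strings and the 3-word shingle size are arbitrary choices of mine):

```python
# Near-verbatim detection via word-shingle Jaccard similarity: a text
# rewritten with minute changes keeps most of its word n-grams, so it
# still scores high, while genuinely different text scores near zero.

def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original  = "the quick brown fox jumps over the lazy dog near the river"
tweaked   = "the quick brown fox leaps over the lazy dog near the river"
different = "copyright law treats music and text quite differently today"

print(round(jaccard(original, tweaked), 2))    # 0.54: one-word edit
print(round(jaccard(original, different), 2))  # 0.0: unrelated text
```

A single swapped word only disturbs the three shingles that contain it, which is why the score stays high; that asymmetry is what makes "minute changes" detectable.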
Given that, with unlimited time and computational power, AI could in fact learn from literally everything ever produced, there needs to be a price to pay for the training data.
Here's a question: if the people in charge of the AI program pay for the training data, are they not entitled to use it?
E.g., if they buy all the Stephen King novels, can't they use them?
Or if you're talking about programming guides, that's the whole point of buying them.