NVIDIA discussion [2024]

That's a good thing. AI companies are stealing a lot of data to achieve their results. This behavior must change.
This is a really complicated issue though. If Nvidia, OpenAI and others lose in court and the US bans training AI models on copyrighted material unless authorized, then it's possible that only a handful of huge US companies could build those models in the future. Startups, smaller companies and universities on lower budgets could not afford the licensing fees to use that material for training. This is hardly a favorable outcome.

But competitors like China, or other western countries such as the UK or Japan, might take a much more permissive stance and allow training on copyrighted materials.

So effectively my book, painting and music would still get used as training material for AI models even if I never granted anyone permission to do so. Just not by Nvidia or some other US company.
 
So effectively my book, painting and music would still get used as training material for AI models even if I never granted anyone permission to do so. Just not by Nvidia or some other US company.

This is a very good point. My preference is to go for output only, e.g. ChatGPT should refuse a request to reproduce a specific article from the NYTimes (or something substantially similar to it), but you can't prevent the AI from learning something from them.
Of course, there will be grey areas, such as what should happen if I ask "give me a brief summary of this article" (probably within fair use), or "rewrite this article without changing its points" (this could be problematic, and maybe handled like translation rights even though it's in the same language).
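
To make the "output only" idea concrete, here's a rough sketch of what an output-side check could look like: refuse to return text that reproduces too much of a protected source. Everything in it (the n-gram size, the threshold, the protected_texts list) is a made-up illustration, not how any provider actually does it.

# Minimal sketch of an output-side filter: block responses that overlap too
# heavily with a protected source text. Thresholds and the protected corpus
# are illustrative placeholders, not any provider's real policy.

def char_ngrams(text: str, n: int = 8) -> set[str]:
    text = " ".join(text.lower().split())          # normalize whitespace/case
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap_ratio(candidate: str, source: str, n: int = 8) -> float:
    cand, src = char_ngrams(candidate, n), char_ngrams(source, n)
    return len(cand & src) / max(len(cand), 1)

def allow_output(candidate: str, protected_texts: list[str],
                 threshold: float = 0.5) -> bool:
    # Refuse if the candidate reproduces a large share of any protected text.
    return all(overlap_ratio(candidate, src) < threshold
               for src in protected_texts)

article = "The quick brown fox jumps over the lazy dog, reported the Times."
print(allow_output("Here is my own summary of the fox story.", [article]))  # True
print(allow_output(article, [article]))                                     # False

The grey areas you mention are exactly where a crude overlap check like this breaks down: a faithful paraphrase would sail right through it.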
 
Does the source of the data input have to be copyrightable? What about personal choices/data collected by Microsoft/Amazon/eBay/online retail on individuals that can be used or sold in bulk to train models? (Btw, I've always been against this form of data collection without my permission or some type of compensation mechanism in place. I've never seen a "true" opt-out choice where absolutely no data will be used. Even though my information is just a drop in the ocean, it does contribute to the overall result.)
 
Does the source of the data input have to be copyrightable?
No, naturally a dataset comprising only non-copyrighted material can be used. It will just be poorer in quality and quantity than an unrestricted dataset, and having to work with such limitations will make training models more expensive.

Good-quality data in massive quantities is very valuable, and the big players are already making deals for access to copyrighted material even though the rules have not yet been set.


My preference is go for output only, e.g. ChatGPT should refuse a request to reproduce a specific article from NYTimes (or something similar to the specific article), but you can't prevent the AI from learning something from them.
I tend to agree. A human is allowed to learn from, and have their work influenced by, the work of others, whether copyrighted or not. Outright plagiarism, however, is forbidden. In principle, I'm not sure why the rules for AI models should be much different, although clearly there are technical differences in how humans and AI models work, and in practice the grey areas will look different and raise new challenges.

Curious to see what happens with the court cases and legislation and what kind of arguments prevail.
 
March 13, 2024
Mark Zuckerberg's Meta AI venture has reached a new high as the firm pushes toward rapid development of AGI, the next big thing after generative AI in the field. To achieve the computing power required, Meta has built two new data center clusters, as reported by Datacenter Dynamics, with the sole aim of AI research and LLM development in consumer-specific applications such as voice and image recognition. The firm has decided to integrate none other than NVIDIA's H100 AI GPUs, with each cluster containing 24,576 units.

Expanding more upon what the clusters offer, both come with 400Gbps interconnects: one uses Meta's self-developed fabric solution based on the Arista 7800, while the other features NVIDIA's Quantum-2 InfiniBand fabric to ensure a seamless inter-connectivity experience. Moreover, the clusters are built upon Meta's very own open-GPU Grand Teton AI platform, which is designed to leverage the capabilities of modern-day accelerators through increased host-to-GPU bandwidth and compute capacity.
...
Meta's two new clusters are part of the company's plan to field a larger AI computing capacity than its competitors. The integration of NVIDIA's H100s falls under its plan to deploy 350,000 of these AI GPUs by the end of this year, for total compute roughly equivalent to 600,000 H100s.
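
For a rough sense of scale, some back-of-the-envelope arithmetic on those numbers (the 1.98 PFLOPS figure is NVIDIA's peak FP16 Tensor Core spec for the H100 SXM with sparsity, so it's an upper bound, not sustained throughput):

# Rough scale of Meta's two new H100 clusters (my own back-of-the-envelope
# arithmetic; 1.98 PFLOPS is NVIDIA's peak FP16 Tensor Core figure for the
# H100 SXM *with sparsity*, so real sustained throughput will be lower).

gpus_per_cluster = 24_576
clusters = 2
peak_fp16_sparse_pflops = 1.98          # per H100 SXM, peak, with sparsity

total_gpus = gpus_per_cluster * clusters
peak_exaflops_per_cluster = gpus_per_cluster * peak_fp16_sparse_pflops / 1000

print(f"total H100s in the two clusters: {total_gpus:,}")                     # 49,152
print(f"peak FP16 (sparse) per cluster: ~{peak_exaflops_per_cluster:.0f} EF")  # ~49 EF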
 
Yeah, but LLMs as a workload map really badly to client devices that have tiny DRAM pools and low bandwidth.
Which is why all the LPDDR inference sticks à la QC CloudAI went nowhere this cycle.
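
Rough numbers on why: autoregressive decode is mostly memory-bandwidth bound, so tokens/s is roughly bandwidth divided by the bytes you stream per token. The bandwidth figures below are ballpark tiers, not specific products:

# Why LLMs map badly to client devices: decode is roughly memory-bandwidth
# bound, so tokens/s ~= memory bandwidth / bytes read per token (about the
# full weight footprint for a dense model). Bandwidth numbers are ballpark
# tiers for illustration, not tied to specific products.

def decode_tokens_per_sec(params_billion: float, bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    model_bytes_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_bytes_gb

model = (7, 1.0)  # 7B parameters at 8-bit (1 byte/param) ~= 7 GB of weights

for name, bw in [("client LPDDR, ~100 GB/s", 100),
                 ("desktop GDDR, ~1000 GB/s", 1000),
                 ("datacenter HBM, ~3350 GB/s", 3350)]:
    print(f"{name}: ~{decode_tokens_per_sec(*model, bw):.0f} tokens/s")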

Yep, and getting them up and running on anything that isn't Nvidia is either (1) a pain, or (2) not possible. Arc has made good progress but is still firmly in the painful category.

We've trained and deployed models using Matlab (engineers already know Matlab) for a few years now, and neither GPU-based training nor inference is possible on anything that isn't Nvidia. The models are deep enough that there is a substantial-enough-to-be-annoying wait when inferencing on CPUs.

We don't buy anything other than Nvidia because the others were asleep and their stuff isn't usable. What's funny is that I could load and use large data on an Nvidia GPU in Matlab back in 2012-ish. They've been pushing this for a long time and nobody else cared until now, so I can't say I have any sympathy.
 
That's really niche.

Matlab SIMD offload is silly niche.
Other vendors usually have bigger things to worry about.
You call a "niche" and "bigger things to worry" a $90 BILLION business? (expected NVDA DC revenue for this year)
Please stop, my abs can't take it 🤣

PS: obviously Matlab training is not the main market, but the point made by sniffy is that Nvidia has been present in this field for more than a decade when nobody else cared. Nvidia has thousands of AI/ML "niche" customers in dozens of different industries, all with different workflows. These early customers, who are at the forefront of technology evolution, are used to Nvidia hardware and tools. It's extremely difficult to change their habits. The challenge for AMD and Intel is to provide a robust and comprehensive ecosystem for a painless transition in all these scenarios. Easier said than done when you are 10 years late.
 
That's really niche.

In engineering and industry it most definitely is not. In academia, yeah sure it is quite niche but as a part-time academic myself, who cares about them.

Other vendors usually have bigger things to worry about.

Yes I am sure that's how they think. It only cost them the professional market but yes bigger fish to fry.

You call a "niche" and "bigger things to worry" a $90 BILLION business? (expected NVDA DC revenue for this year)
Please stop, my abs can't take it 🤣

PS: obviously Matlab training is not the main market but the point made by sniffy is that Nvidia has been present in this field for more than a decade where nobody cared. Nvidia has thousands of AI/ML "niche" customers in dozens of different industries, all with different workflows. These early customers, who are at the forefront of technology evolution, are used to Nvidia hardware and tools. It's extremely difficult to change their habit. The challenge for AMD and Intel is to provide a robust and comprehensive ecosystem for a painless transition in all these scenarios. Easier said then done when you are 10 years late..

Not only is it difficult to change habits, but in this scenario there is no reason to do so (bar availability). Nobody in the real world worships at the altar of GPU vendors; they buy the best product on balance. If your software stack is broken and needs fixing, or is outright not supported, nobody will use it. The only reason you would think about doing that is if you need to buy hardware now and the better product is not available. But in my example, it's better to wait.
 
Nobody in the real world worships at the altar of GPU vendors; they buy the best product on balance. If your software stack is broken and needs fixing, or is outright not supported, nobody will use it.

Yep, hardware benchmarks are interesting but that’s not how businesses make decisions. Nvidia talks about TCO for a reason. Having stuff “just work” is far more important than the sticker price on the hardware.

MI300 seems to be gaining some traction though. Remains to be seen if AMD can support their ecosystem adequately enough to maintain that interest for MI400.
 
March 13, 2024
More info on the GPU breakdown...
At the time, Zuckerberg said that by the end of 2024, Meta Platforms would have a fleet of accelerators that had the performance of “almost 600,000 H100s equivalents of compute if you include other GPUs.”

Let’s talk about these GPU equivalent numbers first and then see what choices Facebook is making as it builds out its infrastructure as it pursues AGI with Llama 3 and works on what we presume are Llama 4 and Llama 5 in its Facebook AI Research and Generative AI labs.
...
Based on all of this, here is what we think the Meta Platforms GPU fleet will look like as 2024 comes to a close:
[Image: projected Meta Platforms GPU fleet at the end of 2024]

Anyway, we think there is room in the Meta Platforms budget for 24,000 Blackwell B100s or B200s this year, and we would not be surprised to see such a cluster being built if Nvidia can even allocate that many to Meta Platforms. Or that could be a mix of Blackwell devices from Nvidia and “Antares” Instinct MI300X devices from AMD.
 
Let's see if they're right about 5.8 PFLOPS of FP16 performance with sparsity. That would put the B100 2.23x higher than the MI300X.
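
For reference, that ratio seems to just be the peak FP16-with-sparsity specs divided out (the 2.6 PFLOPS MI300X figure is AMD's published peak; the 5.8 PFLOPS B100 number is the claim being discussed):

# Where the ~2.23x comes from (peak FP16 throughput *with sparsity*; peak
# spec-sheet numbers only, not sustained performance).
b100_fp16_sparse_pflops   = 5.8   # claimed B100 figure quoted above
mi300x_fp16_sparse_pflops = 2.6   # AMD's published MI300X peak (~2,615 TFLOPS)

print(f"B100 / MI300X: {b100_fp16_sparse_pflops / mi300x_fp16_sparse_pflops:.2f}x")
# -> 2.23x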
 
You call a "niche" and "bigger things to worry" a $90 BILLION business?
Low quality bait again.
Matlab is niche.
It only cost them the professional market but yes bigger fish to fry.
Dimes. Pennies.
Like Intel's been doing MKL for 25 years and that really hasn't made them much money back.
Remains to be seen if AMD can support their ecosystem
The ecosystem (OCP guys) supports itself.
They just need a roadmap.
 
GTC—At GTC on Monday, Microsoft Corp. and NVIDIA expanded their longstanding collaboration with powerful new integrations that leverage the latest NVIDIA generative AI and Omniverse™ technologies across Microsoft Azure, Azure AI services, Microsoft Fabric and Microsoft 365.

“Together with NVIDIA, we are making the promise of AI real, helping to drive new benefits and productivity gains for people and organizations everywhere,” said Satya Nadella, Chairman and CEO, Microsoft. “From bringing the GB200 Grace Blackwell processor to Azure, to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability.”
...
Advancing AI Infrastructure
Microsoft will be one of the first organizations to bring the power of NVIDIA Grace Blackwell GB200 and advanced NVIDIA Quantum-X800 InfiniBand networking to Azure, delivering cutting-edge trillion-parameter foundation models for natural language processing, computer vision, speech recognition and more.

Microsoft is also announcing the general availability of its Azure NC H100 v5 virtual machine (VM) series, based on the NVIDIA H100 NVL platform. Designed for mid-range training and inferencing, the NC series offers customers two classes of VMs with one or two NVIDIA H100 94GB PCIe Tensor Core GPUs and supports NVIDIA Multi-Instance GPU (MIG) technology, which allows customers to partition each GPU into up to seven instances, providing flexibility and scalability for diverse AI workloads.
...
Healthcare and Life Sciences Breakthroughs
Microsoft is expanding its collaboration with NVIDIA to transform healthcare and life sciences through the integration of cloud, AI and supercomputing technologies. By harnessing the power of Microsoft Azure alongside NVIDIA DGX™ Cloud and the NVIDIA Clara™ suite of microservices, healthcare providers, pharmaceutical and biotechnology companies, and medical device developers will soon be able to innovate rapidly across clinical research and care delivery with improved efficiency.
...
Industrial Digitalization
NVIDIA Omniverse Cloud APIs will be available first on Microsoft Azure later this year, enabling developers to bring increased data interoperability, collaboration, and physics-based visualization to existing software applications. At NVIDIA GTC, Microsoft is demonstrating a preview of what is possible using Omniverse Cloud APIs on Microsoft Azure. Using an interactive 3D viewer in Microsoft Power BI, factory operators can see real-time factory data overlaid on a 3D digital twin of their facility to gain new insights that can speed up production.
...
NVIDIA Triton Inference Server and Microsoft Copilot
NVIDIA GPUs and NVIDIA Triton Inference Server™ help serve AI inference predictions in Microsoft Copilot for Microsoft 365. Copilot for Microsoft 365, soon available as a dedicated physical keyboard key on Windows 11 PCs, combines the power of large language models with proprietary enterprise data to deliver real-time contextualized intelligence, enabling users to enhance their creativity, productivity and skills.
....
From AI Training to AI Deployment
NVIDIA NIM™ inference microservices are coming to Azure AI to turbocharge AI deployments. Part of the NVIDIA AI Enterprise software platform, also available on the Azure Marketplace, NIM provides cloud-native microservices for optimized inference on more than two dozen popular foundation models, including NVIDIA-built models that users can experience at ai.nvidia.com. For deployment, the microservices deliver pre-built, run-anywhere containers powered by NVIDIA AI Enterprise inference software — including Triton Inference Server, TensorRT™ and TensorRT-LLM — to help developers speed time to market of performance-optimized production AI applications.
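
For anyone curious what using NIM looks like in practice: the microservices expose an OpenAI-compatible API, so calling one is roughly the sketch below (the base_url, API key and model id are placeholders, not real values):

# Minimal sketch of calling a NIM-style endpoint. NIM microservices expose an
# OpenAI-compatible API, so the standard openai client can point at them; the
# base_url, api_key and model name below are placeholders, not real values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # wherever the NIM container is served
    api_key="not-used-locally",            # placeholder; hosted endpoints need a real key
)

response = client.chat.completions.create(
    model="example/llm-model",             # placeholder model id
    messages=[{"role": "user", "content": "Summarize what NIM microservices do."}],
    max_tokens=128,
)
print(response.choices[0].message.content)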
 

Nvidia, Amazon Tease 6x Performance Boost to Upcoming Supercomputer

The companies revise the Project Ceiba supercomputer to have six times more computing power than originally envisioned thanks to Nvidia's new Blackwell GPU architecture.
March 19, 2024
The upcoming supercomputer, dubbed Project Ceiba, was originally announced in November in partnership with Amazon. But on Monday, the companies announced they planned on upgrading the machine with a “6x performance increase.”

Nvidia will harness its newly announced Blackwell GPU architecture, the successor to its Hopper-based H100 GPUs. Swapping in Blackwell will enable the supercomputer to reach a processing power of 414 exaflops, up from a mere 65 exaflops.

To pull this off, Nvidia plans on installing 20,736 Blackwell B200 GPUs inside the supercomputer, alongside 10,368 Grace CPUs. All that computing power will be hosted through Amazon’s AWS cloud service.
...
The current most powerful supercomputer, Frontier, can reach over 1.1 exaflops when measured using the Linpack benchmark. Microsoft’s cloud supercomputer Eagle, which was built with 14,400 Nvidia H100 GPUs, ranks third and is able to reach 0.56 exaflops.
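
Worth keeping in mind that these "exaflops" are low-precision AI peak numbers, not the FP64 Linpack figures that rank Frontier and Eagle. My guess at the arithmetic behind the 414 figure, assuming it counts NVIDIA's ~20 PFLOPS peak FP4-with-sparsity per B200:

# Back-of-the-envelope check on the 414 exaflops figure (my assumption: it
# counts roughly 20 PFLOPS of peak low-precision FP4 throughput with sparsity
# per B200, which is not comparable to the FP64 Linpack exaflops used for the
# Frontier and Eagle rankings).
b200_gpus = 20_736
peak_pflops_per_b200 = 20          # assumed peak FP4 with sparsity, per B200

total_exaflops = b200_gpus * peak_pflops_per_b200 / 1000
print(f"~{total_exaflops:.0f} AI exaflops")   # -> ~415, roughly the quoted 414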
 
In the business of predicting the future, predicting when it'll happen is probably always most difficult :)

Function-wise, I can imagine an AI-enhanced image generator in the future. I mean, we already have things like DALL-E and Sora. Today they are driven by natural language prompts, but it's not a stretch to imagine something generated by an AI from a more descriptive language.
In a way, today's 3D images are already generated from a scene description. We use algorithms to calculate how light interacts with those models and produce pixels. I think it's possible that an AI could do pretty much all of these calculations. Once we have that, we can reduce the scene from 3D meshes to maybe something more "natural language like." For example, instead of a full 3D model of a giant robot, it might be something like "mechanical robot using random seed 12345678."

I can't predict when something like this will happen, but I'm quite sure it'll at least be possible in the future. Whether it'll be adopted depends on how good and how efficient it is. That this is even considered possible is itself astonishing.
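
The "random seed 12345678" part already exists in a small way with today's diffusion tooling: a prompt plus a fixed seed pins down the output. A minimal sketch with the diffusers library (the model id is a placeholder, and exact reproducibility also assumes the same hardware and library versions):

# A prompt plus a fixed seed already pins down the output of today's
# text-to-image models, in the spirit of the "robot using random seed
# 12345678" idea above. The model id is a placeholder; bit-exact
# reproducibility also requires the same hardware and library versions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("some/text-to-image-model",  # placeholder id
                                         torch_dtype=torch.float16).to("cuda")

generator = torch.Generator("cuda").manual_seed(12345678)
image = pipe("a giant mechanical robot, cinematic lighting",
             generator=generator).images[0]
image.save("robot_seed_12345678.png")   # rerunning with the same seed reproduces it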
 
Nvidia, Amazon Tease 6x Performance Boost to Upcoming Supercomputer
Good read about the project.
Nvidia has not given out prices for the DGX GB200 NVL72 rackscale system, but we think it is on the order of $3.5 million. And with 278 racks, that works out to a cool $935.5 million. That is without the cost of the EFA2 network linking the racks together or the cost of the supplemental storage that will also be used on AWS to run Ceiba. We don’t know the nature of the Ceiba deal, but AWS is probably buying parts knowing that Nvidia is guaranteeing to rent its full capacity to do research, and at a premium over whatever it costs.

Google Cloud and Oracle Cloud are also expected to be installing DGX GB200 NVL72 systems in their datacenters, but it is unclear if Nvidia will also be utilizing this equipment or if they hope to sell capacity to customers on it.

On a related note:
 
In a way, today's 3D images are already generated from a scene description. We use algorithms to calculate how light interacts with those models and produce pixels. I think it's possible that an AI could do pretty much all of these calculations. Once we have that, we can reduce the scene from 3D meshes to maybe something more "natural language like."
The big problem this approach will have to solve is local generation variability. Games should look (and play) the same for all players.
 
Games should look (and play) the same for all players.

Purely from a graphics perspective? Or from gameplay too?

If games are going to start including AI in, e.g., interactions with NPCs, then will any two players have the same experience? Or if AI is used to generate plot-lines, or other "content"? I don't just mean what can currently be achieved with an RNG, but something that closely and semi-intelligently tracks the interactions with the player.

Just a thought.
 