Nvidia Shows Signs in [2022]

Status
Not open for further replies.
January 4, 2022
The NVIDIA Canvas update released today, powered by the GauGAN2 AI model and NVIDIA RTX GPU Tensor Cores, generates backgrounds with increased quality and 4x higher resolution, and adds five new materials to paint with.

Use AI to turn simple brushstrokes into realistic landscape images. Create backgrounds quickly, or speed up your concept exploration so you can spend more time visualizing ideas.
Beta download
 
NVIDIA Launches a Free Version of Omniverse for Creators
From visual effects, manufacturing design, and architectural visualization to supercomputing and game design, Omniverse opens up endless possibilities for creators of all skill levels. As of writing, NVIDIA Omniverse also has integrations with some of the best 3D content-creation software, including Autodesk 3ds Max, Maya, Unreal Engine, Adobe tools, Blender, and more.
Designed to be the foundation of virtual worlds, Omniverse is free for individual NVIDIA Studio creators using GeForce RTX and NVIDIA RTX GPUs.
In addition, Omniverse runs on RTX-powered laptops, desktops, and workstations. So, if your GPU qualifies, you can download NVIDIA Omniverse here.
 
Meta Builds World’s Largest AI Supercomputer With NVIDIA For AI Research And Production (forbes.com)
Facebook, I mean Meta, has always been one of the industry leaders when it comes to AI research and deployment. The company processes hundreds of trillions (yes, trillions with a “T”) of inferences every day, and trains some 30,000 models daily on its current NVIDIA V100 based AI fleet.
...
So, before we get to the implications, let’s review the specs. The new system, dubbed the “RSC” (ok, Meta could use some help with naming, right?) already has 760 DGX servers with 6,080 A100 GPUs and 1520 AMD EPYC CPUs equipped with Nvidia's Quantum InfiniBand networking system, which supports up to 200Gb/s of bandwidth.
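Those component counts are internally consistent, assuming the standard DGX A100 configuration of 8 A100 GPUs and 2 AMD EPYC CPUs per server (the per-server counts are an assumption drawn from NVIDIA's published DGX A100 layout, not stated in the quote):

```python
# Sanity-check the RSC phase-1 counts, assuming the standard
# DGX A100 layout: 8 A100 GPUs and 2 EPYC CPUs per server.
dgx_servers = 760
gpus_per_dgx = 8   # assumed, per DGX A100 spec
cpus_per_dgx = 2   # assumed, per DGX A100 spec

total_gpus = dgx_servers * gpus_per_dgx  # 6,080 -- matches the article
total_cpus = dgx_servers * cpus_per_dgx  # 1,520 -- matches the article

print(total_gpus, total_cpus)  # 6080 1520
```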

The plan is to build that up to 16,000 GPUs by July, which at 5 Exaflops would make it the largest known AI supercomputer in the world, beating out the US DOE's 4-Exaflop NVIDIA-based Perlmutter system. Pure Storage is supplying a flash subsystem scaling up to an exabyte of training data, and Penguin Computing is acting as the system integrator, helping out with the setup and installation. When finished, the 16,000-GPU RSC will be able to train trillion-parameter AI models in weeks, and become a critical tool for Meta to fulfill its metaverse ambitions. (Remember, GPUs do graphics, too!)
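The 5-Exaflop figure lines up with the A100's spec-sheet peak of roughly 312 TFLOPS of dense FP16 Tensor Core throughput (the per-GPU number is an assumption here, taken from NVIDIA's published A100 specs; structured sparsity nominally doubles it):

```python
# Estimate peak AI throughput of the full 16,000-GPU RSC, assuming
# ~312 TFLOPS of dense FP16 Tensor Core performance per A100
# (NVIDIA spec-sheet figure, assumed -- not stated in the article).
gpus = 16_000
tflops_per_gpu = 312

peak_exaflops = gpus * tflops_per_gpu / 1_000_000  # TFLOPS -> EFLOPS
print(f"{peak_exaflops:.3f} EFLOPS")  # ~4.992, i.e. the quoted 5 Exaflops
```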


The Implications

1. Facebook recognizes that NVIDIA GPUs are the best platform available for AI research and development.

2. The NVIDIA DGX server allows customers like Facebook to stand up a large fleet quickly, avoiding months or years of the normal planning needed to design and install a custom supercomputer.

3. Consequently, the systems business at NVIDIA is transforming from a gold-plated poster child into a world-class platform worthy of the expense: DGX has the best AI performance money can buy. This sets the stage nicely for the upcoming NVIDIA Grace roll-out of a fully integrated intelligence platform, a complete re-imagining of accelerated computing systems.
 

I wonder how Omniverse stacks up to an engine like UE5 from a developer’s perspective. It has many of the same ingredients - 3D asset import, physics, scripting and a renderer. Nvidia has the benefit of the datacenter tech and rendering hardware to run it on too.

It would be nice to take a “free” virtual tour of a high fidelity Omniverse scene rendered in the cloud that ties it all together. I’m surprised Nvidia hasn’t done something like that yet.
 
Omniverse lacks facilities for implementing game logic, scripting, or even networking/connectivity, as opposed to a true game engine like UE5. Omniverse makes no attempt to replicate Unreal's Gameplay Framework or Unity's GameObject APIs, so the tool can't feasibly be used to create games ...
 
Missed this announcement last April. Interested to see whether they can achieve the anticipated 20 Exaflop AI performance in 2023.

The Swiss National Supercomputing Centre (CSCS) and ETH Zurich will deploy a massive Grace-based system, delivered by HPE Cray, that will provide 20 Exaflops of (16-bit) AI performance for the largest AI networks. If peak performance scales linearly from 16- to 64-bit precision, as it typically does, 20 Exaflops at FP16 works out to 5 Exaflops at FP64, which would be 2.5 times faster than the fastest DOE system currently envisioned, El Capitan, which is to use AMD CPUs and GPUs in 2023. It is now evident that the US DOE decision to exclude NVIDIA in the first Exascale systems did not foretell the end of NVIDIA leadership in HPC. Far from it: this announcement leapfrogs those competitors by over 2x in roughly the same timeframe.
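The precision-scaling arithmetic in the quote can be spelled out. Assuming, as the article does, that peak throughput halves with each doubling of precision, and taking El Capitan's FP64 target at roughly 2 Exaflops (an assumed approximate figure):

```python
# Scale the Swiss system's 20 EF of FP16 AI performance down to FP64,
# assuming throughput halves at each doubling of precision, then
# compare against El Capitan's ~2 EF FP64 target (assumed figure).
fp16_exaflops = 20
fp32_exaflops = fp16_exaflops / 2  # 10
fp64_exaflops = fp32_exaflops / 2  # 5

el_capitan_fp64 = 2  # approximate DOE target, assumed
ratio = fp64_exaflops / el_capitan_fp64
print(ratio)  # 2.5 -- the "2.5 times faster" claimed in the article
```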
...
While NVIDIA went to great lengths to ensure that this announcement in no way portends a shift to compete directly with Intel or AMD CPUs, the Grace CPU will certainly displace those CPUs in systems built to handle massive AI models.



NVIDIA Completely Re-Imagines The Data Center For AI (forbes.com)
 
The craziest part is next quarter's guidance: $8.1B in revenue and 65.2% gross margin :runaway::runaway::runaway:

And it will continue to grow aggressively. They mentioned during the earnings call that sales were supply-limited, but they expect a better outcome this year with $9 billion now in long-term supply obligations, up from $2.54 billion a year ago. A big part of that goes to TSMC 5nm capacity to cover the huge interest in Grace/Hopper...
 
Nvidia Says It Is Still Developing a Full Spectrum of Arm CPUs | Tom's Hardware (tomshardware.com)
February 17, 2022
The company says that its inaugural Grace CPU is on-track to be released in the first half of 2023, but its release marks just the beginning of Nvidia's long CPU journey. Huang says the company has a 20-year license for Arm's architecture, so it will use the instruction set to develop CPUs for a wide range of applications spanning from tiny SoCs for robotics to high-end processors for supercomputers.
...
Nvidia's multi-year Arm architecture license allows the company to develop highly custom Arm-based cores for a variety of applications, something that Apple does for its A-series SoCs for smartphones and tablets as well as M-series processors for Mac computers. But while Apple doesn't seem to be interested in chips for data centers or supercomputers (at least we do not know anything about such plans), Nvidia wants a top-to-bottom family of Arm chips with its own cores.
...
It certainly makes sense for Nvidia to design its own CPUs and datacenter-grade chips to maximize its revenue opportunities and gross margins. However, Nvidia will continue to support industry-standard x86 CPU platforms for obvious reasons, as accelerated computing platforms are Nvidia's bread and butter.

"Our strategy is accelerated computing; that is ultimately what we do for a living," stated the head of Nvidia. "We will deliver on our three-chip strategy across CPUs, GPUs, and DPUs. "Whether x86 or Arm, we will use the best CPU for the job. Together with partners in the computer industry, we will offer the world's best computing platform to tackle the most impactful challenges of our time."
 