Nvidia Shows Signs in [2022]

March 23, 2022
With adversarial reinforcement learning, physically simulated characters can be developed that automatically synthesize lifelike and responsive behaviors. A character is first trained to perform complex motor skills by imitating human motion data.

Once the character has acquired a rich repertoire of skills, it can reuse those skills to perform new tasks in a natural, lifelike way.

The model can then generate motions for new scenarios without tedious manual animation or new motion capture from real actors.
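
For the curious, here is a minimal sketch of the adversarial part of the idea, assuming PyTorch: a discriminator learns to tell reference motion-capture transitions from the simulated character's transitions, and its output becomes a "style" reward for the RL policy. The network sizes, loss, and reward shaping below are illustrative assumptions, not NVIDIA's exact formulation.

```python
# Minimal sketch of an adversarial "style" reward for motion imitation,
# loosely in the spirit of NVIDIA's AMP/ASE work. Shapes, losses, and the
# reward form are illustrative assumptions, not the published method.
import torch
import torch.nn as nn

class MotionDiscriminator(nn.Module):
    """Scores (state, next_state) transitions: high for mocap-like motion."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def discriminator_loss(disc, mocap_s, mocap_s_next, policy_s, policy_s_next):
    # Least-squares GAN objective: mocap transitions -> +1, policy -> -1.
    real = disc(mocap_s, mocap_s_next)
    fake = disc(policy_s, policy_s_next)
    return ((real - 1.0) ** 2).mean() + ((fake + 1.0) ** 2).mean()

def style_reward(disc, s, s_next):
    # Reward the RL policy for transitions the discriminator finds mocap-like.
    with torch.no_grad():
        d = disc(s, s_next)
        return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

# Usage sketch with random stand-in data; a real setup would feed simulator
# states and reference motion-capture clips, and add a task reward on top.
obs_dim = 64
disc = MotionDiscriminator(obs_dim)
opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
mocap = torch.randn(128, obs_dim), torch.randn(128, obs_dim)
policy = torch.randn(128, obs_dim), torch.randn(128, obs_dim)
loss = discriminator_loss(disc, *mocap, *policy)
opt.zero_grad(); loss.backward(); opt.step()
print(style_reward(disc, *policy).mean())
```

In work along these lines, the style reward is typically combined with a separate task reward, which is what lets the character reuse its learned skills for new objectives.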
 
https://www.tomshardware.com/news/n...el-foundry-intel-and-amd-know-all-our-secrets

Huang explained that using Intel as a foundry services partner would take an extended period of time. "Foundry discussions take a long time, and it's not just about desire. We have to align technology, the business models have to be aligned, the capacity has to be aligned, the operations process and the nature of the two companies have to be aligned. It takes a fair amount of time and a lot of deep, deep discussion – we're not buying milk here. This is really about the integration of the supply chains. Our partnerships with TSMC and Samsung in the last several years are something that took years to cultivate. So we are very open-minded to considering Intel, and I'm delighted by the efforts that they're making."

"We have been working closely with Intel, sharing with them our roadmap long before we share it with the public, for years. Intel has known our secrets for years. AMD has known our secrets for years. We are sophisticated and mature enough to realize that we have to collaborate.[...] We share roadmaps, of course, under confidentiality and a very selective channel of communications. The industry has just learned how to work in that way."

This should be obvious, but it matches my experience throughout my career: large firms know how to both compete and collaborate with each other on different initiatives. Thankfully they're a bit more mature than the average forum discussion.
 
UK police arrest 7 people in connection with Lapsus$ hacks (yahoo.com)
Police in the United Kingdom have arrested seven people over suspected connections to the Lapsus$ hacking group, which has in recent weeks targeted tech giants including Samsung, Nvidia, Microsoft and Okta.

In a statement given to TechCrunch, Detective Inspector Michael O’Sullivan from the City of London Police said: “The City of London Police has been conducting an investigation with its partners into members of a hacking group. Seven people between the ages of 16 and 21 have been arrested in connection with this investigation and have all been released under investigation. Our enquiries remain ongoing.”
...
News of the arrests comes just hours after a Bloomberg report revealed a teenager based in Oxford, U.K. is suspected of being the mastermind of the now-prolific Lapsus$ hacking group. Four researchers investigating the gang's recent hacks said they believed the 16-year-old, who uses the online moniker “White” or “Breachbase,” was a leading figure in Lapsus$, and Bloomberg was able to track down the suspected hacker after his personal information was leaked online by rival hackers.
 
NVIDIA GPUs are used by a company called GRAID to accelerate RAID to massive speeds: a T1000 (~GTX 1650-class Turing GPU) is used in the earlier SR-1000 model, and an A2000 (~RTX 3060-class Ampere GPU) is used in the new SR-1010 model. The GPUs bypass the CPU to accelerate I/O operations, and they also run a "secret sauce" AI model for additional acceleration and efficiency.

The cards are 4 to 5 times faster than traditional CPU-based software RAID solutions.

https://videocardz.com/newz/graid-s...ler-uses-nvidia-ga106-gpu-for-ai-acceleration
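
To make the "bypass the CPU and accelerate I/O" part a bit more concrete, here is a toy sketch of the kind of bulk per-stripe math a RAID engine has to grind through: RAID-5-style XOR parity and a rebuild. This is plain NumPy purely for illustration; GRAID's actual GPU implementation and its "secret sauce" AI model are not public, so nothing here reflects their real code.

```python
# Toy illustration of the bulk math a RAID engine does per stripe:
# RAID-5-style XOR parity over N data chunks, plus a rebuild. GRAID runs
# this class of work (and heavier erasure coding) on the GPU; this NumPy
# version is purely illustrative and says nothing about their real code.
import numpy as np

CHUNK_BYTES = 1 << 20  # 1 MiB chunks in this toy example

def xor_parity(chunks: np.ndarray) -> np.ndarray:
    """chunks: (n_data_drives, CHUNK_BYTES) uint8 -> single parity chunk."""
    return np.bitwise_xor.reduce(chunks, axis=0)

def rebuild_lost_chunk(surviving: np.ndarray, parity: np.ndarray) -> np.ndarray:
    """With single parity, a lost chunk is the XOR of the survivors and parity."""
    return np.bitwise_xor.reduce(surviving, axis=0) ^ parity

rng = np.random.default_rng(0)
data = rng.integers(0, 256, size=(4, CHUNK_BYTES), dtype=np.uint8)  # 4 data drives
parity = xor_parity(data)

# Simulate losing drive 2 and rebuilding its chunk.
survivors = np.delete(data, 2, axis=0)
recovered = rebuild_lost_chunk(survivors, parity)
assert np.array_equal(recovered, data[2])
print("rebuilt", recovered.nbytes, "bytes for the lost chunk")
```

The appeal of a GPU here is simply that this is wide, embarrassingly parallel byte math that scales with memory bandwidth, which GPUs have in abundance.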
 

What an odd product, I wish them luck. I guess it might make financial sense for a vendor at a certain scale to use GPUs to do this.
 

In the past, such solutions would have used a custom-designed ASIC, so it's interesting to see a GPU considered a good fit for this.
 

This bit from the article amuses me:

"The difference is, it is not a graphics card and therefore has no display connectors."

If it's anything like their T1000-based solution, then it's just a bog-standard NV graphics card that even has the display connectors; they just aren't exposed on the backplate. :p You could even run games on it if you wanted to. :p


I'm curious whether they actually went to the trouble of having a custom card made for the new product without the display-out connectors, or whether they're just using another off-the-shelf NV graphics card.

Regards,
SB
 
SEC Charges NVIDIA Corporation with Inadequate Disclosures about Impact of Cryptomining
More like: "SEC [settles previously undisclosed] Charges [against] NVIDIA Corporation for Inadequate Disclosures about Impact of Cryptomining"
Without admitting or denying the SEC’s findings, NVIDIA agreed to a cease-and-desist order and to pay a $5.5 million penalty.
 
I wonder if the Nvidia hack had something to do with this decision?

Hard to tell, but it's still only a very small portion of the driver stack that is actually being released as open source. It does not include any of the user-space components like their libraries and the OpenGL / Vulkan / OpenCL / CUDA drivers.
 
EETimes - Lockheed Counts on Intel, Nvidia to Connect Defense Systems
Lockheed Martin has joined Intel, Nvidia, and eight other big tech companies as part of a plan to securely link defense systems in what Lockheed calls a 21st century concept.

The company, which makes defense equipment ranging from F-35 jets to the Javelin anti-tank missiles currently used in the Ukraine war, foresees a day when such arms can be upgraded online in the same way Tesla does with its electric vehicles.

“The way to do that is to establish an open architecture, IoT environment, or set of standards that will enable — whether it’s Lockheed Martin, or Northrop Grumman, or Boeing — to plug their products into an IoT 5G-enabled system so that you can tie all these assets together, and increase, every six to twelve months, the capability of a mission,” Lockheed Martin CEO James Taiclet said during an April 29 event held by the Atlantic Council, a U.S. think tank.

...
For 5G, Lockheed has chosen Verizon. The defense giant has selected Microsoft for cloud computing, while Nvidia will provide support for simulation and AI.

“We also have Intel as a partner because the chips themselves need to be anti-spoof and anti-hack and able to be customized relatively cheaply,” Taiclet said. “We’re trying to really accelerate our speed by partnering with the commercial technology industry, which has already invested billions and billions of dollars into this. They’ve got tremendous talent that the defense industry will not ever be able to replicate.”
 
Introduction of a New Supercomputer with the World’s First NVIDIA H100 PCIe and Non-Volatile Memory
https://www.ccs.tsukuba.ac.jp/release220512e/
Next-generation Intel Xeon, NVIDIA H100 Tensor Core GPU with PCIe and 48 TFLOPS of extreme performance, and 2 TiB of non-volatile memory strongly drive Big Data and AI
The world’s first announcement of the introduction of a system with the next generation Xeon and the next generation Optane non-volatile memory
The world’s first system with NVIDIA H100 PCIe GPUs connected via PCIe Gen5
First system announced in Japan that will utilize NVIDIA Quantum-2 InfiniBand networking
 
0:00 Intro - GeForce History
0:27 Nvidia GeForce 256
1:19 Nvidia GeForce 2
2:17 Surfshark
3:17 Nvidia GeForce3
4:13 Nvidia GeForce4 Ti 4600
5:20 Nvidia GeForce FX 5800 Ultra
6:23 Nvidia GeForce 6800 Ultra
7:22 Nvidia GeForce 7800 GTX
8:23 Nvidia GeForce 8800 GTX
9:13 Nvidia GeForce 9800 GTX
9:38 Nvidia GeForce GTX 280
10:44 Nvidia GeForce GTX 480
11:42 Nvidia GeForce GTX 580
12:35 Nvidia GeForce GTX 680
13:26 Nvidia GeForce GTX 780
14:30 Nvidia GeForce GTX 980
15:26 Nvidia GeForce GTX 1080
16:22 Nvidia GeForce RTX 2080
17:39 Nvidia GeForce RTX 3080
19:15 Nvidia GeForce RTX 4080
20:04 GeForce 256 vs RTX 3090 Ti
20:25 Thank You for Watching
 
Such a cool video. I love it! Would love to see an AMD version.

Tech demos :love:
 