NVIDIA Tegra Architecture

Yeah, not to scale.
It would be worrying to consider the 48-pin vehicle connector also being that size, or each camera/RADAR/LiDAR relative to the Parker chips/GPUs. :)
Cheers
 
Why not just use a Raspberry Pi and an Arduino?

Kidding aside, there's an ungodly amount of I/O, and it's hard to keep track of how many links there are between the Tegras. It's as if you decided to have two desktop PCs that talk over Ethernet and two null-modem cables, share a Thunderbolt enclosure with a PCIe switch and two GPUs inside, share a USB hub, and are plugged into a KVM switch, and then there's something about a CAN bus (or CAN buses?) and a huge microcontroller.

That was a really cool picture, thanks.
 
I think the bulk of the communication between the Parker chips is done through the PCIe PLX switch. I'd say those additional dual SPI connections between the SoCs are there just to determine which Parker acts as host or slave through the PLX in each instance.
Most of the car I/Os seem to be multiplexed/demultiplexed to/from both Parker chips through that Aurix MCU.
 
NVIDIA announced that it has an agreement to supply Tesla Motors the DRIVE PX 2 platform to power a new AutoPilot system in the Model S, X, and upcoming 3. With a maximum TDP of 250 Watts, DRIVE PX 2 casts off the mobile SoC constraints of the previous design, despite still having NVIDIA’s Denver CPU paired with Cortex-A57. With the continued investment in this sector, NVIDIA has seen strong growth here, and revenue for this quarter was $127 million, which is up from $79 million a year ago, or a gain of 60%.

This is from the NV financial results. Seems like the Tegra business is picking up! I was expecting them to land Tesla... that partnership should suit both parties well.

Source - http://www.anandtech.com/show/10825/nvidia-announces-record-q3-fy-2017-results
 
Most of the car I/Os seem to be multiplexed/demultiplexed to/from both Parker chips through that Aurix MCU.

This is what I wondered a bit about: if you want to plan around the complete failure of one Parker SoC, you can. Likewise, there should be at least graceful degradation if one or both GPUs fail.
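To make the redundancy idea a bit more concrete, here is a purely hypothetical Python sketch of the kind of supervisor logic a safety MCU could run; it is not the actual Aurix firmware, and all names and modes are invented for illustration:

```python
# Toy model only (not Aurix firmware, not NVIDIA code): a safety MCU that
# fans the car I/O out to both Parker SoCs and decides which one currently
# holds actuation authority, degrading gracefully as parts fail.
from dataclasses import dataclass

@dataclass
class NodeStatus:
    heartbeat_ok: bool   # the SoC is still answering on its control link
    gpu_ok: bool         # its GPU is still passing self-test

def select_mode(primary: NodeStatus, secondary: NodeStatus) -> str:
    """Return which node gets actuation authority, or a degraded mode."""
    if primary.heartbeat_ok and primary.gpu_ok:
        return "primary"        # normal operation
    if secondary.heartbeat_ok and secondary.gpu_ok:
        return "secondary"      # fail over to the other Parker
    if primary.heartbeat_ok or secondary.heartbeat_ok:
        return "limp-mode"      # CPUs alive but GPUs down: reduced functionality
    return "safe-stop"          # nothing healthy: warn the driver and stop safely

# Example: the primary SoC lost its GPU, the secondary is fully healthy.
print(select_mode(NodeStatus(True, False), NodeStatus(True, True)))  # -> "secondary"
```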
 
Nvidia has concrete plans to release a level 4 car by 2020, in conjunction with Audi:

By 2020 (a "hard deadline" I'm told), Nvidia plans to release an almost entirely automated, self-driving car that anyone can buy. This "Level 4" automatic vehicle—where the AI can control the vehicle in all but a few environments such as severe weather—will be built in conjunction with automotive legend Audi, and it will be powered by Nvidia's PX2 board. The latter combines Pascal-powered GPUs with custom ARM chips to analyse signals, objects, pedestrians, signs, or whatever else it is a car needs to know in order to navigate our roads.

Nvidia claims PX2 can perform 24 trillion deep learning operations per second—enough to make a Level 4 automated car a reality.

Plugged into BB8 are cameras that give it a full 360-degree view of the road, along with numerous light detection and ranging sensors (lidar) that constantly construct a highly accurate 3D model of the environment as the car drives around. These sensors, combined with Nvidia's own AI algorithms, enabled it to teach a prototype of the upcoming Audi car (also on display at CES) to learn to drive in just four days.

And yes, the Audi car was just as adept at navigating the CES test track as BB8.

While a fully automated self-driving car is extremely impressive, not all car makers are going down the same route. Some are opting to instead create partially autonomous cars. These can take over driving duties at key points—for example, when on a long stretch of highway—and then hand back control to the driver with a warning message and a one-minute countdown.

"We feel that AI will be running the car all the time," Danny Shapiro, senior director of automotive at Nvidia, told me. "It might be in some places the AI will take over, because it has a high confidence. In other places, it wouldn't. When you input a destination, the car knows whether it has those roads mapped and whether it has the confidence to tackle them. Maybe there are some crazy roundabouts and you need to handle them. The AI will always be running, it's up to the auto-maker to figure out where to activate it."

http://arstechnica.com/cars/2017/01/nvidia-audi-bb8-self-driving-car/

You wouldn't think Nvidia is a big enough player to do this on their own, especially when it comes to navigating the regulatory and other legal issues involved. It also sounds like they want to sell this system to manufacturers other than Audi. It's one thing for big tech companies like Google and MS to have a lot of AI and DL experts on staff; can Nvidia compete in this arena?

They couldn't compete in mobile; they couldn't get enough design wins. Now they're in autonomous cars, with a lot of bigger, richer competitors. If the market for self-driving cars takes off, it will be much larger than mobile (though not necessarily more profitable). If Audi goes exclusively with Nvidia, that's a big design win with a prestigious marque.

Now the question is, can they design a cost-competitive system? For these prototypes, they're probably spending a lot more on hardware than they would on a production car. Maybe not as many different types of sensors, maybe not as complicated a setup. Or can they run this system on self-contained PX2 SoCs without the need for additional custom silicon?
 



https://www.engadget.com/2017/01/04/nvidia-self-driving-car-xavier-supercomputer/
 
Cool video, but they need to show the trunk or wherever the processing system is installed.

Then show where the sensors are.
 
You wouldn't think Nvidia is a big enough player to do this on their own, especially when it comes to navigating the regulatory and other legal issues involved. It also sounds like they want to sell this system to manufacturers other than Audi. It's one thing for big tech companies like Google and MS to have a lot of AI and DL experts on staff; can Nvidia compete in this arena?
NVDA announced partnerships with Bosch and ZF at CES (the #1 and #5 automotive parts suppliers, respectively) to provide DRIVE PX 2 solutions to their customers.
http://fortune.com/2017/01/05/audi-nvidia-2020/

And last but not least, NVDA will also supply Mercedes for a model launching next year featuring green-team AI tech:
https://techcrunch.com/2017/01/06/n...z-to-bring-an-ai-car-to-market-within-a-year/

+Volvo + Baidu

So for now, I would say they have some serious traction...
 
Nvidia's Jetson Goes Embedded, Gets Cisco

The latest to sign up for Jetson TX1 is Cisco. It will use Jetson TX1 for its newest enterprise collaboration tool, Cisco Spark Board. The 55-inch 4K board allows users to share a screen, use an interactive whiteboard, and video-conference.

Cisco picked Jetson TX1 for several reasons. One is a “DirectStylus” technology Nvidia originally developed for tablets, with “extremely low pen-to-ink latency” to replicate the natural whiteboarding experience. Jetson TX1 also offers “advanced graphics,” important in driving the interactive content-sharing experience. Cisco finds TX1’s “image processors and GPUs” critical, as they deliver high-resolution video from the Spark Board’s built-in camera to “create intelligent views from remote participants.”

[Image: Cisco Spark Board]
http://www.eetimes.com/document.asp?_mc=RSS_EET_EDT&doc_id=1331249&page_number=1
 
How can they get 30 TOPS out of a 512-core design? 512 cores * 2 (FMA) = 1024 FLOPs per cycle. At 1 GHz that is ~1 TFLOPS (32-bit float).

That would be:
- 16-bit float: ~2 TFLOPS
- 8-bit integer: ~4 TOPS (FMA or dot product)

30 TOPS is roughly 7.5x more than my 8-bit integer number. How do they calculate TOPS?
 
As has been pointed out in the Volta thread, the majority of those (DL) TOPs most likely comes from the CVA (Computer Vision Accelerator) in Xavier. Mobileye calls their equivalent unit the PMA (Programmable Macro Array); in the current EyeQ4 SoC, two PMAs reach 2.5 TOPs at 3-4 W, and the upcoming EyeQ5 targets 15 TOPs at 5 W: http://www.mobileye.com/our-technology/evolution-eyeq-chip/

Xavier was initially announced at 20 TOPs at 20 W, but it seems NV has since pumped it up to 30 TOPs at 30 W, most likely by increasing the CVA unit count (power doesn't scale linearly with frequency increases).

No pixie dust is really needed to get to any of those TOP values in an estimated 2019-2020 timeframe for either chip. ;)
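Pulling the question and the answer together into a quick back-of-the-envelope check (every input is a number quoted above; the 1 GHz clock is the question's assumption, not a confirmed spec):

```python
# Sanity check of the figures quoted in this exchange. The clock is the
# poster's 1 GHz assumption, so treat the results as rough estimates.
cores = 512
clock_hz = 1e9                        # assumed 1 GHz
fp32 = cores * 2 * clock_hz           # FMA = 2 ops/cycle/core  -> ~1.0 TFLOPS
fp16 = 2 * fp32                       # packed half precision   -> ~2.0 TFLOPS
int8 = 4 * fp32                       # 8-bit dot products      -> ~4.1 TOPS

claimed = 30e12                       # Xavier's quoted 30 (DL) TOPS
print(f"GPU INT8 peak            : {int8 / 1e12:.1f} TOPS")
print(f"Gap vs. the 30 TOPS claim: {claimed / int8:.1f}x")   # ~7.3x (the post rounds to 7.5x)
print(f"Implied accelerator share: {(claimed - int8) / 1e12:.0f} TOPS from the CVA")
```

In other words, even a generous INT8 peak for the 512-core GPU leaves roughly 26 TOPS unaccounted for, which is consistent with the bulk of the DL TOPs coming from the dedicated accelerator rather than the CUDA cores.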
 
NVIDIA's Tegra Facebook page was just renamed "NVIDIA Embedded". I guess this marks the end of the line for the Tegra brand? Since it stopped making sense to market Tegra directly to consumers once the technology became mostly dedicated to being integrated into someone else's products, there is no reason for the brand to exist anymore.
 
NVIDIA Launches the Jetson TX2 IoT System
Key features of Jetson TX2 include:
  • GPU: 256-core NVIDIA Pascal architecture-based GPU offering best-in-class performance
  • CPU: Dual 64-bit NVIDIA Denver 2, Quad ARM A57
  • Video: 4K x 2K 60fps encode and decode
  • Camera: 12 CSI lanes supporting up to 6 cameras; 2.5 gigabits/second/lane
  • Memory: 8GB LPDDR4; 58.3 gigabytes/second
  • Storage: 32GB eMMC
  • Connectivity: 802.11ac WLAN, Bluetooth
  • Networking: 1 Gigabit Ethernet
  • OS Support: Linux for Tegra
  • Size: 50mm x 87mm
....
The Jetson family is supported by the most comprehensive SDK for AI computing, JetPack 3.0, which makes it easy to integrate AI into a wide variety of applications and supports the following:

  • TensorRT 1.0, a high-performance neural network inference engine for production deployment of deep learning applications
  • cuDNN 5.1, a GPU-accelerated library of primitives for deep neural networks
  • VisionWorks 1.6, a software development package for computer vision and image processing
  • The latest graphics drivers and APIs, including OpenGL 4.5, OpenGL ES 3.2, EGL 1.4 and Vulkan 1.0
  • CUDA 8, which turns the GPU into a general-purpose massively parallel processor, giving developers access to tremendous performance and power-efficiency
http://www.guru3d.com/news-story/nvidia-launches-the-jetson-tx2-iot-system.html
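As a hedged illustration of the CUDA bullet in the list above (the GPU as a general-purpose massively parallel processor), here is a minimal Python sketch using Numba's CUDA support. It assumes Numba can be installed on top of the CUDA toolkit that JetPack provides, which is an assumption about the setup and not part of the announcement:

```python
# Minimal GPGPU sketch (assumes Numba on top of JetPack's CUDA toolkit;
# illustrative only, not NVIDIA sample code): a SAXPY kernel on the GPU.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # global thread index
    if i < out.size:              # guard the final partial block
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.empty_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)  # Numba copies arrays to and from the GPU

assert np.allclose(out, 2.0 * x + y)  # verify against the CPU result
```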
 
Something is being announced April 29th:
My bet is on a new Shield Portable, the one that was submitted to the FCC back in July 2016, with a TX1 and 4GB RAM:
Update:

We just got word in: this teaser is not about a product announcement, it is merely a teaser for a Southern Europe event slash gathering. And judging from the naming, I'd say chances are fairly high that it is in collaboration with ASUS.

http://www.guru3d.com/news-story/nvidia-is-teasing-something-to-be-released-29-04-1.html
 
Toyota is another huge design win for Nvidia Xavier.
In the big design win of the show, Huang announced that Toyota will use Nvidia's GPU-based Drive PX for advanced automated driving functions in a future car. The companies did not name which car or when it will ship.

The Toyota version of the Drive PX platform will use a new processor called Xavier, a custom Nvidia 64-bit ARM processor with a Volta GPU and a custom core for deep learning. The custom core offers 10 tera-operations per second to speed inference jobs.
http://www.eetimes.com/document.asp?doc_id=1331729&page_number=2

With Mercedes, Honda, Tesla, the Volkswagen group (including Audi) and now Toyota, maybe it's time to say that Nvidia is winning the AI automotive race. One thing is sure: Tegra revenue will explode in the next 3-5 years, and this division will finally be profitable.
 

I'm surprised they're still going at this with the Switch around. Then again I suppose they're not quite the same market.
 