NVIDIA Tegra Architecture

Shame it doesn't feature a 3-axis gimbal; that would really give it an edge. Although the fact that it can follow someone without something strapped to them is really nice. Feature-wise it looks great.
But it's nothing new; there are several drones out there (I'm guessing 10-20, plus their clones) that have been able to follow you without any tethers for quite some time already, without any fancy Tegras. Here's one "shopping guide" from last fall covering drones with a follow-me feature: http://www.dronesglobe.com/guide/follow-me/
 
It probably needs a more powerful processor than whatever is common in other drones to process all the data from the cameras used for collision avoidance?

I think the DJIs have at most one in the front, one in the back and maybe one facing down to monitor for objects to avoid?

So this thing may be kind of overkill, though I guess it's going to navigate a forest better than other drones do, as they show in that video of the gymnast doing flips and somersaults through the forest.
 
Tesla AutoPilot v2.5 PCB is showing NVIDIA Pascal and Two Parker Chips
Tesla recently not only updated the media computer for the Model S and X with a more powerful Intel processor; it seems that the separate Autopilot computer for the assisted-driving functionality has also received an update. Version 2.5 boards are all Nvidia-powered, with Pascal and now two Parker chips.


[photo of the Autopilot 2.5 board]



This is interesting to see, as the newer Model 3, S and X will get/have this board. Basically, with the three chips, that's almost a full Drive PX 2 setup, which Tesla uses right now for its 2018 production cars. Parker is a chip designed specifically for automotive use; it is the Tegra X2, combining two Denver 2 and four Cortex-A57 ARM CPU cores with 256 shader processors (Pascal architecture).

Mind you, the author of the photo above designated the leftmost chip as 'Pascal', and that caught my attention. That one remained a bit of a mystery, but a close-up photo shows something interesting: it was assumed to be a 256-shader-processor GPU, but it carries the SKU codename GP106-510 :) Now, before you think "hey, that's my GTX 1060", well, we don't know; it's definitely something close to that GPU, though. But a GTX 1060 has 1,280 shader processors and is certainly in a totally different category compared to anything with 256 shader processors. Dunno if it can play Crysis, though!
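If anyone ever gets code running on one of these boards, the mystery chip would be easy to pin down from software. A minimal sketch, assuming a working CUDA stack on the board (which is not confirmed): query cudaGetDeviceProperties and count SMs. Consumer/Tegra Pascal packs 128 CUDA cores per SM, so a full GP106 would report 10 SMs (1,280 cores), while a 256-core part would report only 2.

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // 128 CUDA cores per SM holds for consumer/Tegra Pascal (CC 6.1/6.2);
        // GP100 (CC 6.0) uses 64, and other architectures differ again.
        const int coresPerSM = 128;
        printf("Device %d: %s, CC %d.%d, %d SMs (~%d CUDA cores)\n",
               dev, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount,
               prop.multiProcessorCount * coresPerSM);
    }
    return 0;
}
```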

Another thing I noticed: the board makes use of four SK Hynix H5GC8H24MJR DRAM chips positioned just above the GP106 GPU. If you look that part number up, it is VRAM in the form of 8Gb GDDR5 SGRAM. Video and parallel processing is obviously a big thing for anything with cameras and sensors, and that requires fast memory (a rough bandwidth estimate follows below).
http://www.guru3d.com/news-story/te...owing-nvidia-pascal-and-two-parker-chips.html
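For a rough sense of what that memory buys: assuming each 8Gb chip is a x32 part (typical for GDDR5, but not confirmed from the photo) and an 8 Gbps/pin speed grade (also an assumption), the four chips would form a 128-bit bus. A back-of-the-envelope sketch:

```
#include <cstdio>

int main() {
    const int chips         = 4;    // SK Hynix H5GC8H24MJR parts on the board
    const int gbitPerChip   = 8;    // 8Gb density per chip
    const int bitsPerChip   = 32;   // x32 organization (assumed)
    const double gbpsPerPin = 8.0;  // per-pin data rate (assumed speed grade)

    int busWidth = chips * bitsPerChip;              // 128-bit bus
    double bandwidth = busWidth * gbpsPerPin / 8.0;  // GB/s theoretical peak

    printf("Capacity : %d GB\n", chips * gbitPerChip / 8);
    printf("Bus width: %d-bit\n", busWidth);
    printf("Bandwidth: %.0f GB/s theoretical peak\n", bandwidth);
    return 0;
}
```

That works out to 4 GB of VRAM at roughly 128 GB/s of theoretical peak bandwidth; the actual speed grade of the MJR parts may well be lower.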
 
Bound to happen. A poster-child single-application case where a dedicated ASIC makes tons of sense. (What was not bound to happen was for them to roll their own outright, which is interesting even if many blocks will be off-the-shelf IP.)
 
Bound to happen. A poster-child single-application case where a dedicated ASIC makes tons of sense.

I thought the SoCs handling the driving were also being used for the cars' infotainment system. Can Tesla make SoCs that also handle video, 3D rendering for the map display, and always-on connections?
 
I thought the SoCs handling the driving were also being used for the cars' infotainment system. Can Tesla make SoCs that also handle video, 3D rendering for the map display, and always-on connections?

They should be different portions of the same SoC anyway, if that's the case.

Traditionally, infotainment had the most powerful controllers in the car. You could potentially start from those when adding new functions that require raw processing power.
The inverse of that is absurd, in my opinion. If you need/have good inference DSPs/accelerators, there's zero pressure to make them also handle video playback, navigation and whatnot. As a car company with some tradition, you already have an existing solution for that, in-house or off the shelf.

Also, self-driving functions should have more stringent requirements with respect to safety and availability. Separating them from infotainment makes even more sense as such.
 
I thought the SoCs handling the driving were also being used for the cars' infotainment system. Can Tesla make SoCs that also handle video, 3D rendering for the map display, and always-on connections?
Those can be handled by licensed cores as part of the SoC.
 
Bound to happen. A poster-child single-application case where a dedicated ASIC makes tons of sense. (What was not bound to happen was for them to roll their own outright, which is interesting even if many blocks will be off-the-shelf IP.)
It will be interesting to see how successful they are.
To put it into perspective: how many production HW designs out there are successfully and effectively using a tensor-based architecture for DL inferencing, beyond Google and Nvidia?
I can only think of a couple of others that could be deemed production-ready and with any semblance of deep-learning framework/library support, and they have been around for quite a while building up their expertise.

Let alone integrating multiple camera/lidar/radar technologies, with their data objects, into the autonomous system to process.
 
The NVIDIA Jetson TX2 Performance Has Evolved Nicely Since Launch
29 August 2018
Anyhow, the benchmarks today look at how the performance of the Jetson TX2 has changed since launch. In particular, still using the Ubuntu 16.04 "Linux 4 Tegra" root file-system, but upgraded to the L4T R28.2.1 release from June.
...
Overall, from the fresh tests using the latest Linux 4 Tegra components, the performance has improved a fair amount, nearly universally, compared to the software stack shipping on the Jetson TX2 back in Q1'2017.

It will certainly be interesting to see how the NVIDIA Jetson Xavier Development Kit performs, particularly with its machine learning / tensor benchmarks, and I will be working on getting access to the hardware soon.
https://www.phoronix.com/scan.php?page=article&item=jetson-tx2-2018&num=1
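For what it's worth, the kind of CUDA microbenchmark one could rerun across L4T releases to spot driver/compiler improvements can be as simple as a SAXPY kernel timed with CUDA events. A minimal sketch (an illustration, not one of the Phoronix tests):

```
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;                // 16M elements
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));  // contents don't matter for timing
    cudaMemset(y, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // Three floats move per element: read x, read y, write y.
    printf("SAXPY: %.3f ms, ~%.1f GB/s effective\n",
           ms, 3.0 * n * sizeof(float) / ms / 1e6);

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```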
 
Is there anything in the information/entertainment system of a car that can't be handled by run-of-the-mill Mali cores?
 
Is there anything in the information/entertainment system of a car that can't be handled by run-of-the-mill Mali cores?

Not only garden-variety ARM GPU IP, but any relevant IP out there. But since an IP provider isn't necessarily responsible for things like software or even system stability in the final product (bundle), you might want to reconsider that question; it goes some way to explaining why NVIDIA is actually landing as many automotive deals as it is.
 
A Quick Test Of NVIDIA's "Carmel" CPU Performance
NVIDIA should be sending over a Jetson Xavier Development Kit shortly for benchmarking on Phoronix, but in the meantime, a Phoronix reader who pre-ordered one of these developer kits was kind enough to offer remote access to it for some brief benchmarking.

Due to the remote nature, I was just running some basic ARM CPU Linux benchmarks; once I have my hands on the hardware, I will look more at the GPU/CUDA/tensor performance and other areas. Besides the eight Carmel cores, the 512-core Volta GPU and the dual NVDLA engines, the Jetson AGX Xavier also has 16GB of LPDDR4X memory, 32GB of eMMC 5.1 storage, and a 7-way VLIW vision processor.


[benchmark charts]


From the initial tests, the eight-core Carmel CPU performance is looking quite good, and it will be exciting to see how the overall Tegra194 Xavier performs with its Volta GPU and accelerators.
https://www.phoronix.com/scan.php?page=article&item=nvidia-carmel-quick&num=1
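"Basic ARM CPU Linux benchmarks" in this context can be as simple as timing a fixed ALU-bound workload on one thread and then on one thread per core, to see how well the eight Carmel cores scale. A hedged sketch (not one of the actual Phoronix tests):

```
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// A fixed ALU-bound workload: a cheap LCG-style multiply-add loop.
static void work(volatile unsigned long* out) {
    unsigned long acc = 1;
    for (unsigned long i = 1; i < 200000000UL; ++i)
        acc = acc * 6364136223846793005UL + i;
    *out = acc;  // store the result so the loop can't be optimized away
}

// Run the workload on `threads` threads and return the wall time in seconds.
static double run(unsigned threads) {
    std::vector<std::thread> pool;
    std::vector<unsigned long> sink(threads);
    auto t0 = std::chrono::steady_clock::now();
    for (unsigned i = 0; i < threads; ++i)
        pool.emplace_back(work, &sink[i]);
    for (auto& t : pool) t.join();
    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
    return dt.count();
}

int main() {
    unsigned cores = std::thread::hardware_concurrency();  // 8 on Xavier
    printf("1 thread  : %.2f s\n", run(1));
    printf("%2u threads: %.2f s (close to the 1-thread time = good scaling)\n",
           cores, run(cores));
    return 0;
}
```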
 
Since he only had remote access to the Xavier, he's likely running it in its default state as well. Maybe if he has time he might rerun the TX1 and TX2.
It will be interesting to see the benchmarks on his test system when he receives the Xavier board Nvidia is sending.
 
Since he only had remote access to the Xavier, he's likely running it in its default state as well. Maybe if he has time he might rerun the TX1 and TX2.
I already highlighted the fact that Michael is likely running the TX2 in its default mode months ago in the Phoronix forums, and did so once again there now. I am unable to contact him directly (his PM box is full), so I hope he noticed this time.

It will be interesting to see the benchmarks on his test system when he receives the Xavier board Nvidia is sending.
Yeah, I want to see proper results, and I don't think the Xavier should be slower than the TX2 when both are properly set up.
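For anyone wanting to sanity-check the power state over a remote shell, the standard Linux cpufreq sysfs entries are enough to see whether the board is still in its shipping profile. A minimal sketch (the nvpmodel/jetson_clocks names in the comment are the tools as shipped with recent L4T releases):

```
#include <cstdio>

// Print one sysfs file, if it exists; these are standard Linux cpufreq paths.
static void show(const char* path) {
    char buf[128] = {0};
    if (FILE* f = fopen(path, "r")) {
        if (fgets(buf, sizeof buf, f))
            printf("%-60s %s", path, buf);
        fclose(f);
    }
}

int main() {
    show("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
    show("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    show("/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq");
    // On L4T one would then switch profiles with `sudo nvpmodel -m 0`
    // and lock clocks with `sudo jetson_clocks` before rerunning benchmarks.
    return 0;
}
```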
 