NVIDIA shows signs ... [2008 - 2017]

Status
Not open for further replies.
They tried with ARM in mobile and decided they'd be better off in automotive, using the same tech there initially (Drive PX2 obviously is a step up from there). I doubt they want to go after a high volume but low margin area where there's fierce competition already. That'd just be a money sink.
 
It's a shame NV just didn't have the influence to make Android gaming go somewhere. It has the potential to be more interesting than the closed-up boxes from Nintendo et al. producing the standard selection of titles. Unfortunately, the audience there really won't pay more than about $5 for any software, and that's no small problem.
 
Nvidia overtakes MediaTek as 3rd largest IC design company
Product strategy issues resulted in MediaTek's disappointing smartphone SoC sales in the second quarter of 2017, TRI indicated. MediaTek moved down one spot to fourth.

GPU specialist Nvidia enjoyed strong demand for data centers and professional visualization applications in the second quarter of 2017, when the company saw its revenues surge 56.7% from a year ago to US$1.91 billion, TRI noted. Nvidia entered the top-3 fabless IC vendors and had the largest revenue increase among the top 10 in the second quarter, TRI said.

Broadcom remained the world's largest fabless IC vendor in the second quarter of 2017, when the company saw its revenues increase 17.3% on year to US$4.37 billion. Second-ranked Qualcomm's revenues grew 13.1% from a year earlier to US$4.05 billion during the same period, according to TRI.
http://www.digitimes.com/news/a20170817PD203.html
 
Nvidia Quadro vDWS brings greater flexibility to virtualized pro graphics
Thursday, August 17, 2017

All that changes today with Nvidia's introduction of its Quadro Virtual Data Center Workstation software, which will run on Pascal-powered Tesla accelerators for the first time. Quadro vDWS, as Nvidia abbreviates it, could offer system administrators a much more flexible way of provisioning workstation power to remote users. Because the Pascal architecture supports preemption in hardware, a Tesla accelerator with a Pascal GPU can be sliced and diced as needed to support users whose performance needs vary, but who all need Quadro drivers.
...
For example, a Pascal Tesla might be configured to offer one virtual user 50% of its resources, while two other users might receive 25% each. That quality-of-service management wasn't possible with virtual workstations on Maxwell accelerators. All of those users can run CUDA applications or demanding graphics workloads without taking over the entire graphics card.
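The fractional provisioning described above can be illustrated with a small sketch. The class and method names below are hypothetical stand-ins for illustration only; real vGPU provisioning goes through NVIDIA's GRID/vGPU manager, not a Python API like this:

```python
# Illustrative model of fractional GPU provisioning (hypothetical API,
# not NVIDIA's actual vGPU software): each virtual workstation user
# gets a fixed share of one physical Pascal Tesla's resources.

class PascalTesla:
    def __init__(self, name, framebuffer_gb):
        self.name = name
        self.framebuffer_gb = framebuffer_gb
        self.allocations = {}  # user -> fraction of the GPU

    def provision(self, user, fraction):
        if not 0 < fraction <= 1:
            raise ValueError("fraction must be in (0, 1]")
        if sum(self.allocations.values()) + fraction > 1:
            raise RuntimeError("GPU is oversubscribed")
        self.allocations[user] = fraction

    def framebuffer_for(self, user):
        return self.allocations[user] * self.framebuffer_gb

# One user at 50%, two at 25% each, as in the article's example:
p40 = PascalTesla("Tesla P40", framebuffer_gb=24)
p40.provision("alice", 0.50)
p40.provision("bob", 0.25)
p40.provision("carol", 0.25)
print(p40.framebuffer_for("alice"))  # 12.0 (GB of the P40's 24 GB)
```

The point of the hardware preemption mentioned above is exactly this guaranteed quality of service: a user's share is enforced rather than best-effort.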

Quadro vDWS will also support every Tesla accelerator going forward. Nvidia says that in the past, only a select range of Maxwell cards could be used for virtual workstation duty. Customers' existing Maxwell accelerators will still work with Quadro vDWS, but Pascal Teslas offer potentially exciting new flexibility beyond guaranteed quality of service. The Tesla P40, for example, joins 24GB of memory with a single GPU. That large pool of RAM could help administrators virtualize users whose data sets simply couldn't fit on older products. Past GRID-compatible Maxwell Teslas could only boast as much as 8GB of RAM per GPU.
http://techreport.com/review/32413/...eater-flexibility-to-virtualized-pro-graphics
 
Introducing faster GPUs for Google Compute Engine
Today, we're happy to make some massively parallel announcements for Cloud GPUs. First, Google Cloud Platform (GCP) gets another performance boost with the public launch of NVIDIA P100 GPUs in beta. Second, NVIDIA K80 GPUs are now generally available on Google Compute Engine. Third, we're happy to announce the introduction of sustained use discounts on both the K80 and P100 GPUs.

Cloud GPUs can accelerate your workloads including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high performance compute use cases.

The NVIDIA Tesla P100 is the state of the art in GPU technology. Based on the Pascal GPU architecture, it lets you increase throughput with fewer instances while saving money. P100 GPUs can accelerate your workloads by up to 10x compared to the K80.
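The "up to 10x" claim has a simple cost implication worth spelling out: a GPU with a higher hourly price can still be cheaper per job if it finishes the job proportionally faster. The hourly rates below are hypothetical placeholders, not Google's actual 2017 pricing:

```python
# Back-of-the-envelope per-job cost comparison. The hourly prices here
# are made-up placeholders, NOT real GCP rates; only the arithmetic
# matters. The speedup figure is the announcement's "up to 10x".
k80_price_per_hour = 0.45    # hypothetical
p100_price_per_hour = 1.46   # hypothetical
speedup = 10

job_hours_on_k80 = 100
job_hours_on_p100 = job_hours_on_k80 / speedup

cost_k80 = job_hours_on_k80 * k80_price_per_hour
cost_p100 = job_hours_on_p100 * p100_price_per_hour
print(cost_k80, cost_p100)  # the faster GPU can still be cheaper per job
```

Sustained use discounts would shift both numbers down but not change the comparison's shape.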
https://cloudplatform.googleblog.com/2017/09/introducing-faster-GPUs-for-Google-Compute-Engine.html
 
145 Automotive Startups Around the World Choose NVIDIA DRIVE
September 25, 2017
NVIDIA CEO Jensen Huang highlighted dozens of startups developing autonomous vehicle solutions using NVIDIA AI platforms in his keynote presentation today at the GPU Technology Conference in Beijing.
....
NVIDIA works with the world’s leading automakers, including Audi, Toyota, Mercedes-Benz, Volvo and Tesla. But startups are equally important partners. Their innovation and ingenuity result in critical new autonomous driving solutions.
...
Cognata is a deep learning autonomous simulation company. Simulation is used extensively in autonomous vehicle development. By building a virtual world that behaves like the real world, companies like Cognata create a safe and flexible environment in which AI can learn. And these simulated environments can run constantly, enabling self-driving AI to train 24 hours a day, every day.

Momenta is a local startup, headquartered in Beijing, that develops software for object perception, HD mapping and path planning. By crowdsourcing driving data, Momenta leverages millions of miles of real-world driving scenarios to further train and refine its AI driving algorithms.

One of the most secretive companies in Silicon Valley, Zoox is working on reinventing the entire mobility ecosystem. They’re building a shared, on-demand, zero-emission mobility system. When asked if they were building a car from the ground up, Zoox founder and CEO Tim Kentley-Klay told Fortune Magazine, “No, I’m saying we’re building what comes after the car.”
https://blogs.nvidia.com/blog/2017/09/25/auto-startups-nvidia-drive/

Alibaba Cloud, Baidu and Tencent Upgrade Data Centers with NVIDIA Tesla V100 GPU Accelerators
September 25, 2017
Speaking at the GPU Technology Conference in Beijing, NVIDIA founder and CEO Jensen Huang announced that Alibaba Cloud, Baidu and Tencent are incorporating NVIDIA Tesla® V100 GPU accelerators into their data centers and cloud-service infrastructures.
http://nvidianews.nvidia.com/news/c...olta-gpus-to-supercharge-next-gen-ai-services
 

I have a feeling this is due to NVIDIA deciding not to supply Tegra chips for infotainment anymore. That market is rapidly being commoditized and prices are falling below NVIDIA's preferences. On the other hand, the next Tegra chips (if the Tegra brand lives on at all; they have already started rebranding their official channels) will probably be overkill for infotainment anyway, since they are aiming at self-driving cars. Intel chips are probably more than enough for infotainment by now; it's not like it needs a lot of computing power.
 
I'm confused. Weren't Tesla shifting to AMD a few weeks ago?
Infotainment and navigation would be separate, and the timelines are likely different as well, since the in-house AMD solution is probably at least a year off, if the news was accurate at all. It may very well be a 2019/2020 release, but there's no way of knowing what Tesla is working on, only that keeping costs in check is in their best interest.
 
AMD and GF separately and publicly denied working with Tesla on the rumored project. Perhaps someone can parse the wording finely enough to leave some possibility open, but it's significant that both were motivated to inform investors that no such deal was made.

https://forum.beyond3d.com/posts/2002152/

It would be difficult for even a slightly different deal than the one described not to have a similar material impact on their stock prices (at least AMD's), so the denial is a rather strong indicator that the rumor was false.

The initial story in particular even had a correction for itself, and from what I can tell that correction essentially torpedoed the whole logic chain of the rumor. The author made a weak association that Tesla working with GF meant AMD was involved, and that initial assumption was the result of misinterpreting a fragment of a hypothetical example.
 
NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference
October 31, 2017

NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms, although it doesn't intend to supply them with chips. Instead, it's decided to open-source the NVDLA deep learning processor core found in its "Xavier" SoC introduced last fall.

In a recent briefing, Deepu Talla, NVIDIA's Vice President and General Manager of Autonomous Machines, explained that the company's open-sourcing decision was driven in part by the realization that the more deep learning inference is done in edge devices (regardless of whether that processing takes place on NVIDIA silicon), the more demand will grow for cloud-based training of deep learning models, which is often done on NVIDIA's platforms. However, the number of processor architecture options for edge-based inference is large and still growing rapidly. According to Talla, the resulting diversity of inference architectures threatens to stall overall market growth, although NVIDIA's claims may overstate the effects of this fragmentation. Regardless, NVIDIA decided to encourage consolidation by openly licensing the NVDLA core integrated in the Xavier chip, which is intended for ADAS and autonomous vehicles and is currently scheduled to begin sampling next year, both standalone and as part of the just-introduced Pegasus processing module (Figure 1).
[Figure 1: NVDLA core diagram]
https://www.bdti.com/InsideDSP/2017/10/31/NVIDIA
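A fixed-function inference core like NVDLA hardwires a short chain of stages (convolution, activation, pooling) rather than running arbitrary programs. The toy sketch below is a purely illustrative software analogue of that dataflow; it is not the NVDLA programming model and bears no relation to the actual RTL:

```python
# Toy software analogue of a fixed-function inference pipeline
# (convolution -> activation -> pooling), illustrating the kind of
# dataflow a DLA-style core hardwires. Purely illustrative.

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (really cross-correlation)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def relu(xs):
    """Clamp negative activations to zero."""
    return [max(0.0, x) for x in xs]

def max_pool(xs, width=2):
    """Non-overlapping max pooling."""
    return [max(xs[i:i + width]) for i in range(0, len(xs) - width + 1, width)]

signal = [1.0, -2.0, 3.0, 0.5, -1.0, 2.0]
kernel = [0.5, -0.5]
out = max_pool(relu(conv1d(signal, kernel)))
print(out)  # [1.5, 1.25]
```

Because the stage order is fixed, a hardware implementation can stream data from one engine to the next without round-trips to memory, which is where such cores get their efficiency.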
 

Makes a lot of sense. In the medium to long term, chips targeted at inference will be mass-marketed, driving margins down. Nvidia's business model needs high margins, and those are more easily found on the training side. By open-sourcing their inference core, they get others, who are fine with smaller margins, to manufacture it, while keeping some control over the platform. Looks like Nvidia did learn something from their mobile chip fiasco.
 