Certainly not too late for nVidia who can use ARM. Intel have their own IP so that's different.
Yeah, I remember all of that edgy doom and gloom speculation too. The future is fusion or something, right? All the more so considering all the ink spilled at the time regarding the dim prospects of Nvidia's survival in the face of a CPU-GPU competitive paradigm they could not match.
http://www.digitimes.com/news/a20170817PD203.html
Product strategy issues resulted in MediaTek's disappointing smartphone SoC sales in the second quarter of 2017, TRI indicated. MediaTek moved down one spot to fourth.
GPU specialist Nvidia enjoyed strong demand for data centers and professional visualization applications in the second quarter of 2017, when the company saw its revenues surge 56.7% from a year ago to US$1.91 billion, TRI noted. Nvidia entered the top-3 fabless IC vendors and had the largest revenue increase among the top 10 in the second quarter, TRI said.
Broadcom remained the world's largest fabless IC vendor in the second quarter of 2017, when the company saw its revenues increase 17.3% on year to US$4.37 billion. Second-ranked Qualcomm's revenues grew 13.1% from a year earlier to US$4.05 billion during the same period, according to TRI.
http://techreport.com/review/32413/...eater-flexibility-to-virtualized-pro-graphics
All that changes today with Nvidia's introduction of its Quadro Virtual Data Center Workstation software, which will run on Pascal-powered Tesla accelerators for the first time. Quadro vDWS, as Nvidia abbreviates it, could offer system administrators a much more flexible way of provisioning workstation power to remote users. Because the Pascal architecture supports preemption in hardware, a Tesla accelerator with a Pascal GPU can be sliced and diced as needed to support users whose performance needs vary, but who all need Quadro drivers.
...
For example, a Pascal Tesla might be configured to offer one virtual user 50% of its resources, while two other users might receive 25% each. That quality-of-service management wasn't possible with virtual workstations on Maxwell accelerators. All of those users can run CUDA applications or demanding graphics workloads without taking over the entire graphics card.
Quadro vDWS will also support every Tesla accelerator going forward. Nvidia says that in the past, only a select range of Maxwell cards could be used for virtual workstation duty. Customers' existing Maxwell accelerators will still work with Quadro vDWS, but Pascal Teslas offer potentially exciting new flexibility beyond guaranteed quality of service. The Tesla P40, for example, joins 24GB of memory with a single GPU. That large pool of RAM could help administrators virtualize users whose data sets simply couldn't fit on older products. Past GRID-compatible Maxwell Teslas could only boast as much as 8GB of RAM per GPU.
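The fractional provisioning the article describes (one user at 50%, two at 25% each, with no oversubscription) can be sketched as simple bookkeeping. To be clear, this is an illustrative model only, not Nvidia's actual vGPU API; the `GpuPartitioner` class and its methods are hypothetical names invented for this sketch.

```python
# Illustrative model of fractional GPU provisioning as described above.
# This is NOT Nvidia's vGPU API; the class and method names are hypothetical.

class GpuPartitioner:
    """Tracks fractional shares of one physical GPU handed out to virtual users."""

    def __init__(self):
        self.allocations = {}  # user -> fraction of the GPU, in (0.0, 1.0]

    def used(self):
        """Total fraction of the GPU already committed."""
        return sum(self.allocations.values())

    def allocate(self, user, fraction):
        """Grant `fraction` of the GPU to `user`, refusing oversubscription."""
        if not 0.0 < fraction <= 1.0:
            raise ValueError("fraction must be in (0, 1]")
        if self.used() + fraction > 1.0 + 1e-9:
            raise RuntimeError("GPU oversubscribed")
        self.allocations[user] = fraction

# The split from the article: one 50% user plus two 25% users fills the card.
gpu = GpuPartitioner()
gpu.allocate("user-a", 0.50)
gpu.allocate("user-b", 0.25)
gpu.allocate("user-c", 0.25)
assert abs(gpu.used() - 1.0) < 1e-9
```

A fourth request against this fully allocated card would raise `RuntimeError`, which mirrors the guaranteed quality-of-service point: each user's share is reserved, so no one can take over the entire card.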
https://cloudplatform.googleblog.com/2017/09/introducing-faster-GPUs-for-Google-Compute-Engine.html
Today, we're happy to make some massively parallel announcements for Cloud GPUs. First, Google Cloud Platform (GCP) gets another performance boost with the public launch of NVIDIA P100 GPUs in beta. Second, NVIDIA K80 GPUs are now generally available on Google Compute Engine. Third, we're happy to announce the introduction of sustained use discounts on both the K80 and P100 GPUs.
Cloud GPUs can accelerate your workloads including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high performance compute use cases.
The NVIDIA Tesla P100 is the state of the art of GPU technology. Based on the Pascal GPU architecture, you can increase throughput with fewer instances while saving money. P100 GPUs can accelerate your workloads by up to 10x compared to K80.
...
https://blogs.nvidia.com/blog/2017/09/25/auto-startups-nvidia-drive/
NVIDIA CEO Jensen Huang highlighted dozens of startups developing autonomous vehicle solutions using NVIDIA AI platforms in his keynote presentation today at the GPU Technology Conference in Beijing.
....
NVIDIA works with the world’s leading automakers, including Audi, Toyota, Mercedes-Benz, Volvo and Tesla. But startups are equally important partners. Their innovation and ingenuity result in critical new autonomous driving solutions.
...
Cognata is a deep learning autonomous simulation company. Simulation is used extensively in autonomous vehicle development. By building a virtual world that behaves like the real world, companies like Cognata create a safe and flexible environment in which AI can learn. And these simulated environments can run constantly, enabling self-driving AI to train 24 hours a day, every day.
Momenta is a local startup, headquartered in Beijing, that develops software for object perception, HD mapping and path planning. By crowdsourcing driving data, Momenta leverages millions of miles of real-world driving scenarios to further train and perfect its AI driving algorithms.
One of the most secretive companies in Silicon Valley, Zoox is working on reinventing the entire mobility ecosystem. They’re building a shared, on-demand, zero-emission mobility system. When asked if they were building a car from the ground up, Zoox founder and CEO Tim Kentley-Klay told Fortune Magazine, “No, I’m saying we’re building what comes after the car.”
http://nvidianews.nvidia.com/news/c...olta-gpus-to-supercharge-next-gen-ai-services
Speaking at the GPU Technology Conference in Beijing, NVIDIA founder and CEO Jensen Huang announced that Alibaba Cloud, Baidu and Tencent are incorporating NVIDIA Tesla® V100 GPU accelerators into their data centers and cloud-service infrastructures.
Tesla shifts to Intel from Nvidia for infotainment: Bloomberg
http://www.reuters.com/article/us-tesla-infotainment/tesla-shifts-to-intel-from-nvidia-for-infotainment-bloomberg-idUSKCN1C1336?feedType=RSS&feedName=technologyNews&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+reuters/technologyNews+(Reuters+Technology+News)
Tesla's Model 3 and new versions of its other cars will get the new Intel processing modules, Bloomberg reported, citing sources familiar with the matter.
I'm confused. Weren't Tesla shifting to AMD a few weeks ago?

I thought that was confirmed to be a false rumor?
I'm confused. Weren't Tesla shifting to AMD a few weeks ago?

Infotainment and navigation would be separate, and the timelines are likely different as well: the in-house AMD solution is likely at least a year off, if that news was accurate. It may very well be a 2019/2020 release, but there's no way of knowing what Tesla is working on, just that keeping costs in check is in their best interest.
https://www.bdti.com/InsideDSP/2017/10/31/NVIDIA
NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms, although it doesn't intend to supply them with chips. Instead, it's decided to open-source the NVDLA deep learning processor core found in its "Xavier" SoC introduced last fall.
In a recent briefing, Deepu Talla, NVIDIA's Vice President and General Manager of Autonomous Machines, explained that the company's open-sourcing decision was driven in part by the realization that the more deep learning inference is done in edge devices (regardless of whether that processing takes place on NVIDIA silicon), the more demand will grow for cloud-based training of deep learning models, which is often done on NVIDIA's platforms. However, the number of processor architecture options for edge-based inference processing is large and still rapidly growing. According to Talla, the resultant inference architecture diversity threatens to stall overall market growth, although NVIDIA's claims may overstate the reality of this fragmentation's effects. Regardless, NVIDIA decided to encourage consolidation by openly licensing the NVDLA core integrated in the Xavier chip, which is intended for ADAS and autonomous vehicles and is currently scheduled to begin sampling next year, both standalone and as part of the just-introduced Pegasus processing module (Figure 1).
NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference
October 31, 2017