NVIDIA shows signs ... [2008 - 2017]

I thought they basically only sold Tegras in considerably *larger* form factors (cars) these days... ;)

(which I also think was SB's point - they are leaving the tablet market, not anything of financial significance (which is rarely there by the time you choose to leave))

Yes. They are getting out of a market in which they not only aren't doing well, but which is the antithesis of the high-margin product stack they are going for (they are approaching 60% margins and may soon pass Intel).

The Nintendo NX contract may not be much better than the tablet market with regard to margins, but they'll have little to no competition (either for space in a tablet or from other devices in the NX's class), which is significantly better than they could do with Shield devices or even Android devices (Tegra isn't exactly popular there and isn't gaining ground). All the while they avoid the cutthroat race to the bottom that ARM SoC makers are engaged in to get into Android devices.

Automotive + NX give them a steady revenue stream so that they can continue development on Tegra. The NX will also be able to showcase the graphical benefits of the Tegra SoC more consistently.

Regards,
SB
 
The NX deal (like any other console deal) requires a certain amount of dedicated resources. Since Tegra-related resources obviously aren't unlimited, they're far better invested in the Nintendo NX than in any of the projects that ended up cancelled.
 
AMD, Nvidia GPU Battle Heats Up

Nvidia is bullish, reporting last week a better-than-expected record quarterly revenue of $1.43 billion. What’s more, it forecast 18% quarterly revenue growth to $1.68 billion in the fall, compared to Wall Street’s 6% estimates.

“We remain impressed with the superb execution and growth delivered by Nvidia,” said Deutsche Bank analyst Ross Seymore in a report that kept the company at a hold rating.

Matthew Ramsay, analyst for Canaccord Genuity, was even more bullish.

“NVIDIA's record July quarter results soundly beat our expectations and consensus, primarily driven by strong Pascal GPU demand and rapid growth in deep learning datacenter GPU sales to key customers including Facebook, Amazon, Microsoft, Baidu, and Alibaba…We maintain our belief that Nvidia’s transformation from a PC-leveraged GPU supplier to a diverse visual-computing company is essentially complete,” Ramsay wrote in a report.

The quarter saw 18% year-over-year growth in gaming. "We continue to anticipate strong gaming GPU growth with Pascal helping push VR-capable GPUs into the 80M unit installed base for GeForce,” he added.
....
The third calendar quarter will be the first full period for shipping AMD Polaris and Nvidia Pascal chips. By early next year, multiple benchmarks should be out for top-end Polaris and Pascal chips, and market demand will reveal any supply-chain issues either company has making parts.

“It may come down to whose factory is pumping out the most,” McCarron said.

Nvidia may have an edge here in using TSMC’s 16FF+ process. Multiple chips are now ramping on the technology, which the Taiwan foundry took some time to flesh out.

By contrast, AMD depends on the 14nm process Globalfoundries licensed from Samsung. AMD has a history of great architectural innovations in silicon that it has had trouble manufacturing, in some of the same fabs it spun off several years ago to create Globalfoundries.
http://www.eetimes.com/document.asp?doc_id=1330305&_mc=RSS_EET_EDT
 
What exactly is "top-end Polaris"? Both companies have primarily been targeting different market segments and I'm not sure "top-end" applies to the one they are competing in. Early next year would also likely imply 1080ti/TitanX vs Vega.
 
What exactly is "top-end Polaris"? Both companies have primarily been targeting different market segments and I'm not sure "top-end" applies to the one they are competing in. Early next year would also likely imply 1080ti/TitanX vs Vega.
The only card Nvidia have not released a new competitor for is the RX 460, but with how utterly disappointing that card turned out to be, I doubt they are in a rush.
 
What exactly is "top-end Polaris"? Both companies have primarily been targeting different market segments and I'm not sure "top-end" applies to the one they are competing in. Early next year would also likely imply 1080ti/TitanX vs Vega.
Current indications are Vega's performance will compete with GTX 1070 and 1080, not with GP102 Titan X. You can conclude this from power efficiency analysis alone.

A GTX 1060 and Polaris 10 are effectively tied in performance. A GTX 1080 has double the compute of a GTX 1060, so to match it AMD needs a GPU with double the resources of Polaris 10. If you simply scale a 165 watt Polaris 10 RX480 up to double size, you'd use twice the wattage: 330 watts. That's far, far above the 250 watt practical maximum. However, Vega will have lower power HBM2, saving optimistically 25 watts from the memory system. To reach the power goal of 250 watts, even with the HBM2 memory savings, AMD still needs to increase Vega's power efficiency by 20% over Polaris. Vega won't have a new fab process to help (unless it's delayed deep into 2017 and uses 10nm) so it's up to AMD's engineers to optimize the design.
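
A quick back-of-the-envelope sketch of that arithmetic in Python; the 165 W board power, 25 W HBM2 saving, and 250 W ceiling are the assumptions stated above, not measured data:

```python
# Naive linear scaling of Polaris 10 up to GTX 1080-class resources,
# using the wattages assumed in the post above.
polaris10_w = 165                    # W, assumed RX 480 board power
scaled_w = 2 * polaris10_w           # double the resources -> 330 W
hbm2_saving = 25                     # W, optimistic saving from HBM2
budget_w = 250                       # W, practical board-power ceiling

after_hbm2 = scaled_w - hbm2_saving        # 305 W
needed_gain = after_hbm2 / budget_w - 1    # efficiency gain required
print(f"{after_hbm2} W -> needs {needed_gain:.0%} better perf/W")  # ~22%
```

That lands at roughly 22%, in line with the ~20% figure above once you allow some slack in the HBM2 estimate.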

So now it's an engineering question: in 8 months, can AMD's engineering team make a new, larger GPU layout that is also 20% more power efficient than Polaris, and integrate HBM2 at the same time? The answer is yes, but it's still a challenge to do so quickly with their small team. There's no easy quick fix that makes GCN's design dramatically more power saving, so improvements probably come from accumulating tweaks to existing routing and cell designs. AMD proudly announced how hard they worked on perf-per-watt for Polaris (calling it a "historic" effort), and we know how well that turned out in practice. AMD's marketing slides show an additional but undramatic boost to Vega's perf/watt on a graph with unlabeled axes, so there's an expectation of more improvement, but not a revolution. Architecture and process savings are where the real perf/watt improvements will happen, and both are scheduled for Navi in 2018.

And this challenge is just to make Vega match GP104, and it will. But GP102 has 2.8x the compute resources of GTX1060, not GP104's 2x.

HBM2 will give Vega a new advantage: it will reduce or even remove memory bandwidth issues. But Fury X's use of HBM shows that this doesn't give GCN any noticeable boost in practical performance. The success of GDDR5 for AMD (and GDDR5X for Nvidia) seems to have satiated the bandwidth needs of both camps tolerably well, so Vega's HBM2 inclusion will help power and especially PCB compactness, but not really performance.

Vega could boost its perf/watt in another way: by dramatically increasing the number of compute units while significantly decreasing their frequency and voltage, say by fabbing a maxed-out 600mm^2 die (roughly 2.6x the size of Polaris 10) and lowering frequencies by say 20%. That strategy has a fatal business cost: the price of a huge die for a mid-range part, which is not a good choice for AMD's profitability either. (It's surprising to call the GTX 1080 mid-range, but it is in practice now, and certainly will be considered so in 2017.)
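
To see why "wide and slow" improves perf/watt at all, here's a toy sketch of the dynamic-power rule of thumb (power ∝ f·V²). The die-area ratio and the 20% clock drop come from the paragraph above; the accompanying 10% voltage reduction is purely my assumption:

```python
# Toy "wide and slow" model: dynamic power ~ units * f * V^2.
# units ~= 600 mm^2 / 232 mm^2 (Polaris 10); clocks -20% per the post,
# voltage -10% is an assumed side effect of the lower clocks.
units, freq, volt = 2.6, 0.80, 0.90   # all relative to Polaris 10

perf = units * freq                   # throughput ~ units * clock
power = units * freq * volt**2        # dynamic-power rule of thumb
print(f"perf {perf:.2f}x, power {power:.2f}x, perf/W {perf/power:.2f}x")
# -> perf 2.08x, power 1.68x, perf/W 1.23x: GTX 1080-class throughput,
#    but ~1.68 * 165 W = 278 W before HBM2 savings, on a huge die.
```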

The above analysis has a flaw: I use compute (ALUs) as the 2x ratio proxy for Vega to match, which is arguably the correct metric. But using Nvidia performance as a proxy, the ROP ratios between the Pascal GPU families are lower than the compute ratios. Compute is 1:2:2.8, but ROPs are in a ratio of 1:1.33:2. So it's easier to design Vega to match GP104 or even GP102 in ROP-limited benchmarks; but ALU shaders are the most significant bottleneck and the main obstacle for Vega to compete at the higher end.
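
For reference, those ratios fall straight out of the public shader and ROP counts of the three Pascal dies:

```python
# Pascal family resource ratios, normalized to GP106 (GTX 1060).
# GP102 figures are as shipped in the Titan X (3584 of 3840 ALUs enabled).
shaders = {"GP106": 1280, "GP104": 2560, "GP102": 3584}
rops    = {"GP106": 48,   "GP104": 64,   "GP102": 96}

for chip in shaders:
    print(chip,
          f"ALUs {shaders[chip] / shaders['GP106']:.2f}x",
          f"ROPs {rops[chip] / rops['GP106']:.2f}x")
# GP106 ALUs 1.00x ROPs 1.00x
# GP104 ALUs 2.00x ROPs 1.33x
# GP102 ALUs 2.80x ROPs 2.00x
```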

TL;DR Vega is very unlikely to improve its perf/watt over Polaris enough to make a GP102 competitor. Navi will.
 
Current indications are Vega's performance will compete with GTX 1070 and 1080, not with GP102 Titan X. You can conclude this from power efficiency analysis alone.

A GTX 1060 and Polaris 10 are effectively tied in performance. A GTX 1080 has double the compute of a GTX 1060, so to match it AMD needs a GPU with double the resources of Polaris 10. If you simply scale a 165 watt Polaris 10 RX480 up to double size, you'd use twice the wattage: 330 watts.

By that logic, if you scale up a 120W GTX 1060, a GTX 1080 should be using 240W.
That's far, far above the 250 watt practical maximum. However, Vega will have lower power HBM2, saving optimistically 25 watts from the memory system. To reach the power goal of 250 watts, even with the HBM2 memory savings, AMD still needs to increase Vega's power efficiency by 20% over Polaris. Vega won't have a new fab process to help (unless it's delayed deep into 2017 and uses 10nm) so it's up to AMD's engineers to optimize the design.

If the rumours so far are correct, Vega will be using a different fab process, i.e. TSMC 16nm instead of GF14nm.
 
By that logic, if you scale up a 120W GTX 1060, a GTX 1080 should be using 240W.
That's an excellent point! Tom's Hardware shows GTX 1080 load power between 180 and 210 watts. It's probably not a doubling because the ROP ratio is only 1.33x; only the shaders are 2x.
But as you point out, that still gives extra headroom for Vega and should make it much easier to match GP104.

In fact, that ~200 watt power usage of the GTX 1080 gives us another way to quantify the goal for Vega: it needs 200W/250W = 80% of GP104's perf per watt, or better, to fit within the 250 watt power threshold while matching GTX 1080's performance.

The GTX 1060 versus RX 480 wattage ratio is 120W/165W ≈ 73%, so AMD needs only a 10% perf/watt improvement to be able to build a GPU that matches the GTX 1080. Even less, since HBM2 should save somewhere between 10 and 25 watts.

But GP102 sits right at the 250 watt limit itself, so to match its performance AMD needs the full Pascal perf/watt, a boost of 1/0.73 ≈ 37%. That's not going to happen.
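
Putting that arithmetic in one place (all wattages are the figures assumed in the posts above):

```python
# Perf/W targets for Vega, derived from the wattages assumed above.
gtx1060_w, rx480_w = 120, 165      # assumed load power of the tied pair
gtx1080_w, budget_w = 200, 250     # ~Tom's Hardware GTX 1080 load; ceiling

polaris_eff = gtx1060_w / rx480_w      # ~0.73 of Pascal's perf/W
gp104_need = gtx1080_w / budget_w      # 0.80 needed to match GTX 1080
gp102_need = 1.00                      # GP102 already sits at 250 W

print(f"to match GP104: +{gp104_need / polaris_eff - 1:.1%}")  # +10.0%
print(f"to match GP102: +{gp102_need / polaris_eff - 1:.1%}")  # +37.5%
```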
 
Scaling won't be linear for these designs. The frontend is a largely fixed cost that won't necessarily scale as the chip gets larger, and it could contribute a significant amount of the TDP. There are also papers floating around hinting at what some of those changes will entail, and they won't be insignificant in regards to power efficiency, especially under less-than-ideal workloads. We also know Vega will feature some form of interconnect, thanks to HPC Zen, which could make multi-die setups interesting. In theory the high-end enthusiast part may be well in excess of 600mm². Yes, Navi was geared towards multi-chip setups, but that may just mean more than two dies. Another issue with Polaris is that a lot of the power-saving features don't appear to be enabled, for one reason or another. If that gets addressed there could be a significant increase in efficiency.
 
I don't agree that AMD had only 8 months to improve from Polaris to Vega. It's still possible that Polaris uses the tired old, bad-perf/W GCN design with some tweaks, while Vega uses a new one that they worked on in parallel for much longer.

That said, I've been saying "what else have they been doing all this time" for a couple of years now, and I still don't have an answer...
 
You talk about "Vega this and that" but completely ignore the fact that there will be two desktop Vega chips (and the iGPUs based on the same architecture).
Surely you're not trying to say that both Vega 10 & 11 target the 1070/1080?
 
I don't agree that AMD had only 8 months to improve from Polaris to Vega. It's still possible that Polaris uses the tired old, bad-perf/W GCN design with some tweaks, while Vega uses a new one that they worked on in parallel for much longer.

That said, I've been saying "what else have they been doing all this time" for a couple of years now, and I still don't have an answer...

Given their financial condition, it looks like Polaris was planned as a minimal-risk "tick" to tide them over until Vega. And given when they had working samples, it's quite possible that it was delayed. We also have to consider the WSA; that might even be the reason Polaris is at GF.

I really hope Vega is what they have actually been working on for the last few years. The roadmap places it significantly ahead of Polaris on Perf/W. I would really like them to deliver on that!
 
Has their financial condition been like this for the past 3-4 years?
 
MSI may ship 5 million graphics cards in 2016
Micro-Star International (MSI) shipped two million graphics cards in the first half of 2016 and is likely to ship five million units in the whole year, growing 31.6% on year, for an operating profit of NT$1.5 billion (US$47.2 million), according to industry sources.

MSI's large growth in 2016 graphics card shipments will be mainly because graphics cards equipped with Nvidia GPUs have reached a global market share of nearly 80%, the sources said. In addition, the large growth will be partly due to gaming notebook sales which have boosted its brand image, the sources indicated. MSI aims to ship 1.2 million gaming notebooks in 2016, increasing 33.3% on year, for an operating profit of NT$3 billion, the sources noted.
http://www.digitimes.com/news/a20160825PD206.html
 
Nvidia's NVLink 2.0 will first appear in Power9 servers next year
Upgrades bi-directional ports from 160GB/s to 200GB/s


IBM is projecting its Power9 servers to be available beginning in the middle of 2017, with PCWorld reporting that the new processor lineup will include support for NVLINK 2.0 technology. Each NVLINK lane will communicate at 25Gbps, up from 20Gbps in the first iteration. With eight differential lanes, this translates to a 400Gbps (50GB/s) bi-directional link between CPUs and GPUs, or about 25 percent more performance if the information is correct.

Meanwhile, Nvidia has yet to release any NVLINK 2.0-capable GPUs, but a company presentation slide in Korean language suggests that the technology will first appear in Volta GPUs which are also scheduled for release sometime next year. We were originally under the impression that the new GPU architecture would release in 2018, as per Nvidia’s roadmap. But a source hinted last month that Volta would be getting 16nm FinFET treatment and may show up in roughly the same timeframe as AMD’s HBM 2.0-powered Vega sometime in 2017. After all, it is easier for Nvidia to launch sooner if the new architecture is built on the same node as the Pascal lineup.

[Image: nvidia-nvlink-2.0-ibm-slide.jpg]

http://www.fudzilla.com/news/graphics/41420-nvidia-nvlink-2-0-arrives-in-ibm-servers-next-year
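
The arithmetic behind the article's figures, for anyone checking (lane count and signalling rates as quoted):

```python
# NVLink 2.0 per-link bandwidth, from the article's lane counts and rates.
lanes = 8                  # differential lanes per direction
gbps_per_lane = 25         # NVLink 2.0 signalling (NVLink 1.0 ran at 20)

per_direction = lanes * gbps_per_lane     # 200 Gbps each way
bidirectional = 2 * per_direction         # 400 Gbps total
print(f"{bidirectional} Gbps = {bidirectional / 8:.0f} GB/s")  # 50 GB/s
print(f"gain over NVLink 1.0: +{25 / 20 - 1:.0%}")             # +25%
```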
 
Nvidia Pascal GPU supplies remain tight say industry sources

Can't make 'em fast enough

Nvidia Pascal GPUs are being snapped up as fast as they leave the foundry with the likes of Asus, Gigabyte, Colorful, and MSI vying for greater supplies. According to computer industry journal DigiTimes the cause of the tight supply and "aggressive" jostling between the Nvidia AIC partners is caused by the strong public demand for GeForce GTX 1080, 1070 and 1060 graphics cards.
...
Today DigiTimes said that despite production lines being extremely busy with the higher-end GPUs, Nvidia and its partners are readying for the release of the GTX 1050 graphics card – which could happen before this month is out.
http://hexus.net/tech/news/graphics...u-supplies-remain-tight-say-industry-sources/
 
Samsung wants an AMD or Nvidia GPU in its Exynos SoCs
Samsung is negotiating with both AMD and Nvidia over leveraging the respective companies' GPUs in future Exynos mobile processors. Just ahead of the weekend, SamMobile exclusively reported that it had been tipped by an industry insider that talks are ongoing between Samsung and the red and green teams, the major rivals behind the PC's superlative 3D graphics.

Though Samsung's relationship with AMD looks outwardly cosier (it has even been rumoured to be interested in buying up AMD, whereas it has been in recent legal tussles with Nvidia), SamMobile says it is currently favouring the "superior Pascal architecture". Of course Samsung and Apple are fierce rivals in business and in legal wrangling, but still manage to do business with each other for mutual gains. Looking back at Samsung's Exynos development history, it was rumoured that the South Korean tech giant was designing its own GPU for launch in the Note5. That didn't happen, however, as it continued to make use of ARM's Mali GPUs in its Exynos SoCs.
http://hexus.net/tech/news/graphics/96799-samsung-wants-amd-nvidia-gpu-exynos-socs/
 