Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

Not sure who that is, or how reliable he is, but wouldn't his post seem odd, seeing that Nvidia was still shopping for nodes 6 months beforehand? Can you commit a design to a lithography process that quickly?

And this is TSMC 7nm?

We only know one fact about Ampere: it's being built into the Big Red supercomputer this summer. Which means it should be entering mass production now, and such a big, complex chip, which we can expect to be the Volta successor, should have taped out around a year before mass production.
So it's clear his info was true. We have no other reliable information. Lately, Samsung 8LPP has been the speculated process. We'll know in the next few weeks, when Nvidia presents Ampere. They won't wait too long after postponing their GTC presentation of it; I'm sure they'll show it in May. But that's only the supercomputer Ampere. On the consumer side we have zero info.
 
Looks fake; the RAM just isn't enough versus the consoles. 10GB for mid-range wouldn't match high-end console VRAM requirements, and there's no way they could charge $500 for it. I'd also expect a 320-bit bus to be some cut-down bin from a full-fat 384-bit GPU, not its own separate tapeout.

You could just double the RAM, awkward as that is since GDDR6 is mostly sold as 8Gbit chips. But the worse part is that the bandwidth doesn't add up with compute performance. 103 to 102 has a 20% larger bus with 40% more compute area and a 26% performance uplift... wait, what? Not to mention 104 to 103 adds up even less.
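
To put numbers on that complaint, here's a quick ratio check using only the percentages quoted above; the underlying bus widths and core counts come from the rumored slide, so treat this as a sketch of the scaling, not confirmed figures.

```python
# Ratio check of the rumored 103 -> 102 scaling quoted above.
bus_scale = 1.20      # "20% larger bus"
compute_scale = 1.40  # "40% more compute area"
perf_scale = 1.26     # "26% performance uplift"

# Bandwidth available per unit of compute shrinks on the bigger die:
print(f"bandwidth per unit of compute: {bus_scale / compute_scale:.2f}x")   # ~0.86x

# And the claimed uplift tracks neither the bus growth nor the compute growth:
print(f"perf uplift vs bus growth:     {perf_scale / bus_scale:.2f}x")      # ~1.05x
print(f"perf uplift vs compute growth: {perf_scale / compute_scale:.2f}x")  # ~0.90x
```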

It's more than enough. Even the XBSX allocates 10GB for video and limits bandwidth for the other 6GB of RAM (the 6GB can still be accessed by the GPU, but that design choice tells me they decided they only need 10GB). Besides, PCs have a separate pool of system RAM. Gaming PCs above the budget tier will probably have 24GB of RAM total (8GB VRAM + 16GB system RAM). That's 50% more total memory than the consoles have, although consoles use it for gaming much more efficiently.
 
16GB is for the whole system: OS, audio, game logic. 10GB for graphics-related data is more towards the high end.

It surpasses current "enthusiast" grade GPUs, certainly, though I would like to point out that high-end graphics cards have featured more than 10GB (11-12GB) since the Titan X (Pascal) launched in August 2016. Next-gen cards will almost certainly have more VRAM onboard due to the higher density of GDDR6. Yes, consumer Turing SKUs use GDDR6, but at the lower 8Gbit density, which is the same as high-end GDDR5/X. I suspect this was a cost-saving measure due to the large die sizes of Turing GPUs. My guess is 20 or 22GB for the 3080 Ti, 16GB for the 3080.
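
For what it's worth, the capacity math follows directly from GDDR6 packaging: each device has a 32-bit interface, so capacity is (bus width / 32) times per-die density. A minimal sketch, with the bus widths assumed from current Turing-style configurations rather than confirmed next-gen specs:

```python
# GDDR6 capacity from bus width and per-die density. Each GDDR6 device exposes
# a 32-bit interface, so chip count = bus width / 32 (ignoring clamshell mode,
# which pairs two chips per channel to double capacity).
def gddr6_capacity_gb(bus_width_bits: int, density_gbit: int) -> int:
    chips = bus_width_bits // 32
    return chips * density_gbit // 8  # Gbit per die -> GB total

# Bus widths below are assumptions, not confirmed specs.
for bus in (256, 320, 352, 384):
    print(f"{bus}-bit: {gddr6_capacity_gb(bus, 8)} GB @ 8Gbit, "
          f"{gddr6_capacity_gb(bus, 16)} GB @ 16Gbit")
# 256-bit ->  8 / 16 GB
# 320-bit -> 10 / 20 GB
# 352-bit -> 11 / 22 GB
# 384-bit -> 12 / 24 GB
```

Swap 16Gbit dies onto the same 256-bit and 352-bit buses Turing uses and you land right on the 16GB and 22GB guesses above.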
 
It shouldn't take too long now before NDAs are lifted.
That also depends a bit on how badly Nvidia is impacted by Corona, and I know for some companies it's pretty bad.
 
16GB is for the whole system: OS, audio, game logic. 10GB for graphics-related data is more towards the high end.

That's 1:1 for the XSX's high-speed pool only, discarding the rest of the RAM. Disregarding that a few titles will inevitably put VRAM stuff in the low-speed bin regardless, titles going from consoles to PC aren't always 1:1 with VRAM requirements. Doom Eternal recommends a 4GB card in its specs; that probably matches console settings, despite the fact that we know not all of the consoles' available RAM is GPU-dedicated.

Nvidia held a fireside chat last month and claimed that every "new" product they would have shown is on track and contributing to the financial quarter: https://investor.nvidia.com/events-...ails/2020/GTC-Analyst-QA-Session/default.aspx

Summary: https://seekingalpha.com/article/43...f-multi-decade-growth-trends#comment-84604751

So, they're sending out GPUs to high-end corporate clients? At least that's what I'd assume.
 
So, they're sending out GPUs to high-end corporate clients? At least that's what I'd assume.

Yes, probably small quantities at first, for hyperscalers to develop their cloud solutions. But this makes it pretty clear that they need to announce it before the earnings call in May. No way they'll publish results with revenue from yet-unannounced products.
 
Rumor: NVIDIA GeForce RTX 3070 and 3080 Coming Q3 2020 + Specs
April 29, 2020
K.H. Chia is a retired engineer; you might know him from an earlier 'prognosis': he predicted and explained AMD's chiplet designs at the time, and he also drew up a block diagram of AMD Epyc2 processors getting 64 cores spread over 8+1 dies. That was a prediction made before it was public, and he was spot on. That gives the slide he posted a bit of merit and perhaps more credibility than usual.
In a now-deleted tweet, it seems the release dates of the next-gen RTX GPUs will fall somewhere in the third quarter of 2020 for the GeForce RTX 3070 and RTX 3080, then Q4 2020 for the RTX 3080 Ti, with the 3060 to be released by Q1 2021.

So far, that all sounds plausible enough. A lot of people already expect the new RTX cards to be released sometime in the summer; this tweet did not name a specific month, only target dates by quarter. The most interesting part is obviously the specs, where a 3070 could be close in performance to an RTX 2080 Ti. So the shader counts seem, hmm, let's call them far-fetched and too enthusiastic?
[Image: the rumored RTX 30-series spec slide posted on Twitter]

Above is the spec deck he posted on his Twitter account, which was removed again shortly afterwards, which is weird and raises questions. It is of course speculation of the highest order, but certainly an interesting tweet all by itself. The 'smaller than' suffixes do seem to indicate an expected number of cores, frequencies, and so on. Very interesting nonetheless. 16 Gbps is mentioned for graphics memory speed; if that pans out to be true, that is GDDR6 at 16 Gbps.
https://www.guru3d.com/news-story/rumor-nvidia-geforce-rtx-3070-and-3080-coming-q3-2020-specs.html
 
I'm rather worried about Nvidia's bandwidth situation: GDDR6 is only hitting a max of 18 Gbps at the end of the year, and from the looks of it they've done all they reasonably can with compression.

Thus the huge "nigh double!" 2080 Ti successor seems dubious. How many separate, very expensive HBM stacks would that require? Or a notoriously difficult 512-bit bus? But I guess we'll see.

Edit: 4. Four stacks of HBM2e should roughly equal a 512-bit bus with 18 Gbps RAM (actually more than, but 3 doesn't quiiiite work, so whatever), which would be enough to feed 8k cores. Except four stacks cost something like $250+ just for the RAM; that's damned heavy on the BOM side and does not at all feel like it belongs in a consumer part. If it were 8k cores for some crazy new AI Volta successor that they'll charge $5k for or something, I could see it, but in a consumer part I find it suspect.
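
Rough math behind that edit; the HBM2e per-pin rate below is an assumption (real parts span roughly 2.4-3.6 Gbps), so the stack counts are illustrative.

```python
# Bandwidth comparison behind the edit above. Each HBM2e stack has a 1024-bit
# interface; 2.4 Gbps per pin is an assumed, conservative figure.
def gddr6_bw_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

def hbm2e_bw_gbs(stacks: int, gbps_per_pin: float = 2.4) -> float:
    return stacks * 1024 * gbps_per_pin / 8

print(gddr6_bw_gbs(512, 18))  # 1152.0 GB/s -- 512-bit bus of 18 Gbps GDDR6
print(hbm2e_bw_gbs(3))        #  921.6 GB/s -- three stacks fall short
print(hbm2e_bw_gbs(4))        # 1228.8 GB/s -- four stacks edge past it
```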
 