Nvidia Post-Volta (Ampere?) Rumor and Speculation Thread

Status
Not open for further replies.
Sorry for the question, and it may overlap with the RDNA2 thread, but given the latest rumors, Ampere (for gaming) is coming before RDNA2, right?

More speculation than rumour in this latest video, I think, but he's saying NVIDIA might come out with a very high-end halo product this year (around September) to retain the performance crown against RDNA2. It may be a paper launch, though, with full availability across the range as much as a quarter behind RDNA2. Personally, I don't see that happening.
 
For AMD, Computex at the end of September seems like a good launch window, even if availability slips to Q4 (based on what they've said so far).
For NVIDIA, who knows; there are rumors going around, but I'd expect them to release gaming models in H2 as well, though the timetable is anyone's guess. Jensen will probably unveil the architecture plus the HPC chip the day after tomorrow; maybe we'll get some insight into the schedule there.
 
It looks heavy, lol. Dude needs to hit the gym more if he wants to pull this trick off for the 4XXX series.
 
Moore's Law is Dead update video

Below is basically what he states in his updated rumor video. I still take it with a grain of salt.
According to the leak, the full-fat GA102 is armed with 84 streaming multiprocessors and 5,376 CUDA cores. On the GeForce RTX 3080 Ti engineering sample, this is said to be paired with 12GB of memory running at 18Gbps on a 384-bit bus, yielding 864GB/s of memory bandwidth. That is a massive 40.3 percent jump over the current generation GeForce RTX 2080 Ti.

He also claims the card boosts to 2.2GHz, and reckons that even a cut down version would deliver 21 TFLOPS of performance, while consuming 220-230W.

What this all boils down to, according to the leak, is a card that offers at least a 40 percent increase in rasterization performance at 4K, compared to the GeForce RTX 2080 Ti. In addition, the YouTuber supposes the GA103 and GA104 will essentially make the CPU a bottleneck at 1080p and even 1440p gaming, with 4K becoming a "midrange standard."
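The leaked bandwidth and throughput figures can be sanity-checked with the usual GPU formulas. A minimal sketch in Python; the RTX 2080 Ti comparison numbers (352-bit bus, 14 Gbps GDDR6) are the card's published specs, not part of the leak, and the cut-down core-count estimate at the end is my own back-of-the-envelope guess:

```python
# Sanity-checking the leaked GA102 numbers with standard formulas.
# All GA102 figures here come from the rumor, not confirmed specs.

def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPS: 2 FLOPs (one FMA) per core per clock."""
    return cuda_cores * 2 * boost_ghz / 1000

ga102_bw = mem_bandwidth_gbs(384, 18)        # 864.0 GB/s, matching the leak
rtx2080ti_bw = mem_bandwidth_gbs(352, 14)    # 616.0 GB/s (published 2080 Ti spec)
uplift_pct = (ga102_bw / rtx2080ti_bw - 1) * 100   # ~40.3 percent, as claimed

full_ga102 = fp32_tflops(5376, 2.2)          # ~23.7 TFLOPS for the full die
# So a cut-down part with roughly 4,800 cores at 2.2 GHz would land
# near the rumored 21 TFLOPS figure.
```

The numbers do hang together internally, which says nothing about whether the leak is real, only that whoever produced it did the arithmetic consistently.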

[Attached image: small_ga102_ampere_specs.jpg]



According to his engineering source, the number of RT cores may not change with Ampere, but they will be "substantially" more powerful than the ones used in Turing.

To put it into perspective, he says that if someone plucked the RT cores from Ampere and injected them into Turing, the GeForce RTX 2080 Ti would theoretically offer four times faster ray tracing performance.

He also says NVIDIA will be doubling the number of Tensor cores, so that lower end Ampere cards will vastly outperform higher end Turing cards in ray-traced and DLSS workloads.

[Attached image: small_ampere_ray_tracing.jpg]


https://hothardware.com/news/nvidia-geforce-rtx-3080-ti-ampere-specs-leak
 
Some really interesting info in that video (again). It all still sounds quite plausible to me, and he seems to have staked a lot of his credibility on this, since he's presenting a lot of very specific information as factual rather than speculation. If he's wrong about even half of it, his credibility is going to be shot.

He also seems to mix factual leak info with his own speculation without clearly indicating which is which. That's particularly apparent in the Tensor-compressed video memory claims: previously he talked about this as essentially increasing both VRAM capacity and bandwidth, but now we learn it actually comes with a performance penalty, so it's really only useful for giving the GPU some extra VRAM headroom if it runs out, and it's a toggle rather than on by default. This sounds far more believable than the previous claim.

The thing that most interests me is NVCache, and we should learn whether that's real or not at the HPC launch in a few days. That should give a good indicator of the reliability of the rest of this info.

The claims on DLSS 3.0 were interesting too. Nvidia will override settings in some games forcing it on?? A controversial move if so....

I don't buy the Tensor VRAM compression at all.

Also, I don't believe for a second that Ampere will be 4-5x faster than the Titan RTX in Minecraft.

Denoising isn't done on Tensor cores, and from everything we've heard, that's not likely to change in the near future.
 