Nvidia Ampere Discussion [2020-05-14]

Unfortunately, not much new compared to the already disclosed thingies. Or am I missing the elephant in the room?
 
I like that VRAM amount is an option at the mid and higher tiers.
The pricing of the graphics cards appears to be roughly at last generation's level (up to the RTX 3080), with the RTX 3090 taking the TITAN's place. Considering its massive size, the TITAN name might actually have been better suited to the card. Interestingly, the lower end of NVIDIA's lineup is posited to be priced at $399.
3060: $400
3070: $600
3080: $800
3090: $1,400

What we know about NVIDIA SKUs so far:
Since we do not know the confirmed naming schemes yet, I will refer to these boards according to their board numbers and the RTX 2000 series card they are intended to replace.

  1. The crown jewel of NVIDIA's lineup is the PG132-10 board with 24GB of vRAM. It is going to be replacing the RTX 2080 Ti and is currently scheduled to launch in the second half of September.
  2. We then have the PG132-20 and PG132-30 boards, both of which are replacing the RTX 2080 SUPER graphics card and will have 20GB and 10GB worth of vRAM respectively. The PG132-20 board is going to be launching in the first half of October while the PG132-30 board is going to be launching in mid-September. It is worth adding here that these three parts are likely the SKU10, 20 and 30 we have been hearing about, and the SKU20 is going to be targeted dead center at AMD's Big Navi offering (hence the staggered launch schedule). Since AMD's Big Navi will *probably* have 16GB worth of vRAM, it also explains why NVIDIA wants to go with 20GB.
  3. The PG142-0 and PG142-10 are both going to be replacing the RTX 2070 SUPER and will feature 16GB and 8GB worth of vRAM respectively. While the PG142-10 has a known launch schedule in the second half of September, the PG142-0 board has no confirmed launch date yet.
  4. Finally, we have the PG190-10 board which is going to be replacing the RTX 2060 SUPER graphics card and will have 8GB of vRAM as well. The launch schedule for this board has not been decided yet either.

https://wccftech.com/nvidia-rtx-300...or-799-rtx-3070-for-599-and-rtx-3060-for-399/
 
If the rear fan spins fast enough maybe it'll provide sufficient thrust to keep the tail-end from sagging.

I suppose that's the benefit of having the triple-slot mounting plate and what looks like a fairly rigid construction. In any case, I'm glad I have one of those 90-degree-rotated Silverstone cases, so the vid card sits vertically.
 
So the 3090 only comes with 24GB VRAM, is that right? Or is there still speculation that there's a 12GB version?

I'm not paying $1400 for a card with 12GB of VRAM. 24GB though and it's day 1.
 
I'd love a mid-to-high tier GPU with 16GB of VRAM.

Well, you're in luck, because that seems to be every mid-tier GPU from Nvidia and AMD this year. It's just how it works out relative to the consoles, really. I wonder how much VRAM you'll need for the 8K textures in the Crysis remake: the full 24GB, more? It doesn't seem like they've reworked the engine for high-end SSD streaming, so it could be.
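A quick back-of-the-envelope on what an 8K texture costs (a sketch; the RGBA8 format, mip overhead, and BC7 ratio are generic texture math, not anything confirmed about the remake):

    # Memory footprint of a single uncompressed 8K texture.
    width = height = 8192
    bytes_per_texel = 4                       # RGBA8
    base = width * height * bytes_per_texel   # 268,435,456 bytes = 256 MiB
    with_mips = base * 4 / 3                  # a full mip chain adds ~1/3
    print(base / 2**20, with_mips / 2**20)    # -> 256.0, ~341.3 (MiB)
    # BC7 block compression stores 1 byte/texel instead of 4, so an 8K
    # texture with mips drops to ~85 MiB; roughly 280 unique ones resident
    # at once would fill a 24GB card.

So even compressed, 8K assets chew through VRAM quickly unless the engine streams them in and out aggressively.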
 
Quick pixel counting says ~325mm card length and 110mm fans (scaled from the PCIe connector length).
The cooler, apart from being quite large and a 3-slot design (which AFAIK is also a first for single-GPU reference cards), still seems odd to me. Especially the back side, where you cannot see any of the heatsink structure through the fan blades, which you can for the front fan.
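For anyone who wants to redo that measurement, it's just a ratio against a feature of known physical size (a sketch; the ~85mm x16 edge-connector length is approximate, and the pixel counts are made-up placeholders rather than measurements from the actual photo):

    # Scale a photo using the PCIe x16 card-edge connector as the reference.
    PCIE_X16_EDGE_MM = 85.0   # approximate physical length (assumption)
    connector_px = 230.0      # hypothetical: connector length in the photo
    mm_per_px = PCIE_X16_EDGE_MM / connector_px
    card_px = 880.0           # hypothetical: card length in pixels
    fan_px = 298.0            # hypothetical: fan diameter in pixels
    print(round(card_px * mm_per_px), round(fan_px * mm_per_px))  # -> 325, 110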
 
We don’t know anything about Samsung’s process. Maybe it’s crap.

Qualcomm ported the SD845 from 10nm LPP to 7nm DUV. Transistor density went up 50% but efficiency for the GPU only ~20%:
https://www.anandtech.com/show/14072/the-samsung-galaxy-s10plus-review/10

The efficiency gain comes from a 50% wider GPU running at a 27% lower clock rate. Samsung claims that 8nm LPP is 10% denser and 10% more efficient than 10nm LPP. Samsung has been producing 10nm LPP chips for three years and 8nm LPP chips for two. Everybody should know the quality of the process by now...
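Chaining those figures together (a sketch; the simple throughput = width x clock model is my assumption, not AnandTech's methodology):

    # SD845 (10nm LPP) vs. its 7nm DUV successor, using the numbers above.
    width_gain = 1.50                  # "50% wider GPU"
    clock_factor = 1.00 - 0.27         # "27% lower clock rate"
    perf = width_gain * clock_factor   # ~1.10x throughput
    perf_per_watt = 1.20               # "~20%" efficiency gain
    power = perf / perf_per_watt       # ~0.91x power for that ~1.10x perf
    print(round(perf, 2), round(power, 2))  # -> 1.1, 0.91

A full node shrink bought roughly 10% more throughput at roughly 9% less power; hard to expect more from a half-step like 8nm LPP.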
 
For another data point on Samsung vs. TSMC, check the 765G against the Kirin 990.
 
Something interesting.
Jensen, always the egocentric, said at the GeForce2 launch that the card was a "major step toward achieving" the goal of “Pixar-level animation in real-time”. Some people at Pixar really didn't like hearing this:

"Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What’s the chance that we wouldn’t put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn’t NVIDIA tried to give us a carton of these things? — think of the publicity milage [sic] they could get out of it!"

Besides getting mad at Jensen, they explained how long they predicted it would take before a GeForce could offer, in real time, the quality they had already achieved: 20 years.

"At Moore’s Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do the frames we do today in real time. "

What's interesting is that the GeForce2 was launched in September 2000, exactly twenty years ago.
The cards may be expensive, but what about the jump in actual power and visual fidelity? Will Nvidia reach that "goal"?
I wonder if Jensen remembers that.
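The quoted arithmetic holds up, for what it's worth (a sketch; the ~8 hours per offline frame is my assumption for the sanity check, not a figure from the Pixar memo):

    # "A factor of 10 in 5 years" compounded over 20 years, on an 80x head start.
    head_start = 80.0
    moores_gain = 10.0 ** (20 / 5)              # 10^4
    total_speedup = head_start * moores_gain    # -> 800,000x
    # Sanity check: ~8 h per offline frame vs. 1/30 s per real-time frame.
    required = (8 * 3600) / (1 / 30)            # -> 864,000x
    print(total_speedup, required)

An offline frame time in the hours-per-frame range is exactly the ~10^6 gap the estimate implies, so the 20-year figure wasn't a throwaway line.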
 