Nvidia Ampere Discussion [2020-05-14]

Does anyone know why Nvidia is giving us the big die for the 3080? The difference between it and the 3090 is only about 20%.

Is it because Nvidia was unable to secure a TSMC 7nm+ contract and settled for the inferior Samsung 8N, so they couldn't pack as much innovation into Ampere?

Will we see a repeat of the 780 Ti this generation? Remember the 780 Ti?

https://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review
Some say it's because they know AMD has something good coming; others say it's because of the process.
 
Nvidia is also bundling Watch Dogs: Legion and a year of GeForce Now, very generous of them

When does AMD start leaking Big Navi results?
 
Like you said, much finer-grained power management with a lot more control over the power/speed states greatly helped to level the playing field between different bins. 16/14nm helped because FinFET and the density improvements going from 28nm to 16/14nm FinFET allowed them to implement the increased power management.

A little bit of speculation here on my part, but from what I have heard, 7nm became a bit of an issue with leakage and clockspeeds again because of quad patterning. I believe AMD talked about how closely they worked with the foundry and integrated their designs with the process to break through some of those limitations.
FinFET significantly improved leakage compared to planar transistors, although, like a prior significant materials change (the shift to high-k gate dielectrics at 45nm/28nm), the benefits don't remain sufficient forever.
The first node to implement a major structural change has a big improvement, but then the challenges it was created for continue to get worse with each subsequent node and/or other previously neglected factors become dominant.

FinFET added significantly better control over the channel at the 22/16nm nodes (depending on the manufacturer), but research into nanowire/GAAFET transistors was started with the expectation that, after several nodes, the challenges would eventually out-scale what iterations of FinFET could compensate for.
 
NVIDIA GeForce RTX 3080 Can Hit Overclocks Beyond 20 Gbps On Its GDDR6X Memory, Power Limits & First Overclocking Results Detailed
The GeForce RTX 3080 that was tested had a power limit of 320W, which was lower than the Founders Edition and a higher-TDP custom variant. The memory was pushed to 20.7 Gbps over the standard 19 Gbps pin speed, an 850 MHz overclock, while the GPU was overclocked by 70 MHz.

Since this test was done purely to show how well the memory overclocks, the GPU wasn't pushed much. The final clocks for the card were 1510 MHz base, 1780 MHz boost, and a 1294 MHz memory frequency. Within several 3DMark tests, the GPU reported gains of 2-3% with the memory overclock, which isn't a big deal but does show that the memory can be pushed this far on the RTX 3080 graphics cards.
...
What's interesting are the power and thermal figures. At both stock and overclocked speeds, the GPU temperature didn't exceed 70C. At stock, the core clocks stabilized at around 1860 MHz, while at overclocked specs, the clocks stabilized at 1935 MHz. The overall RTX 3080 board power draw in both scenarios averaged around 317W with a few spikes. The total GPU chip power was around 175W while the memory power draw was around 76W when overclocked.
https://wccftech.com/nvidia-geforce-rtx-3080-memory-overclocking-performance-detailed/
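The quoted pin speeds line up with the listed memory clock if GDDR6X's 16x data-rate multiplier (from its PAM4 signalling) is assumed, so as a rough sanity check of the article's numbers:

```python
# Sanity check: does 1294 MHz memory clock correspond to the quoted
# 20.7 Gbps pin speed? Assumes GDDR6X's 16x data-rate multiplier.
mem_clock_mhz = 1294                      # overclocked memory clock from the article
pin_speed_gbps = mem_clock_mhz * 16 / 1000
print(f"{pin_speed_gbps:.1f} Gbps")       # -> 20.7 Gbps

# Working backwards, the stock 19 Gbps implies a ~1188 MHz memory clock.
stock_clock_mhz = 19000 / 16
print(f"{stock_clock_mhz:.1f} MHz")       # -> 1187.5 MHz
```

So the "850 MHz" overclock the article mentions is evidently reported in a different (effective-rate) unit than the raw 1188 -> 1294 MHz memory-clock change.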


 
70C is nice. Now they just need to deliver the promised 30dB of fan noise and they may have a winner.
Will you buy the 10GB or wait for the 20GB?

What if 20GB is a third-party only cooler?

I noticed in the Digital Foundry video about optimised settings for FS2020 that it uses about 10.5 GB at top settings. Silly game devs, thinking 11GB or more would be available on PC for enthusiast-level gamers, huh?
 

Some games allocate all they can. Doesn't mean it's "needed" in every situation. But yeah I agree.

(Full disclosure: I felt burned by the 4 GB of my old Fury X at the time, so I went to a Vega FE (16 GB) after that :eek:)
 
I was under the impression that GDDR6X used the same power as GDDR6?

Wiki says 15% less power per bit transferred, so shouldn't it be comparable to the 2080 Ti's 616 GB/s?
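A quick back-of-the-envelope check of that reasoning, assuming the launch specs (19 Gbps GDDR6X on a 320-bit bus for the 3080, 14 Gbps GDDR6 on a 352-bit bus for the 2080 Ti) and taking the ~15% per-bit figure at face value:

```python
# Compare memory bandwidth and estimated relative memory power,
# assuming launch bus widths and pin speeds for both cards.
def bandwidth_gbs(pin_gbps, bus_bits):
    """Peak memory bandwidth in GB/s."""
    return pin_gbps * bus_bits / 8

bw_3080 = bandwidth_gbs(19, 320)     # -> 760.0 GB/s
bw_2080ti = bandwidth_gbs(14, 352)   # -> 616.0 GB/s

# If GDDR6X moves each bit for ~15% less energy, total memory power
# scales with bandwidth times energy-per-bit:
relative_power = (bw_3080 / bw_2080ti) * 0.85
print(bw_3080, bw_2080ti, round(relative_power, 2))  # 760.0 616.0 1.05
```

In other words, ~23% more bandwidth at ~15% less energy per bit works out to roughly 5% more total memory power, so "comparable" is about right on those assumptions.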

If the 180W GPU-only figure is to be believed, then the memory and the rest of the board are responsible for 140W?

And if 317W is after the memory and GPU OC, why did NVIDIA list a 320W TDP? Surely it would have come in under 300W stock.
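Adding up the figures quoted in the article (~175 W GPU chip, ~76 W memory, ~317 W board) leaves a sizeable remainder for VRM losses, fans, and the rest of the board, which may explain why the totals don't line up neatly:

```python
# Power budget from the article's overclocked figures.
board_w = 317   # averaged total board power
gpu_w = 175     # GPU chip power
mem_w = 76      # memory power

rest_w = board_w - gpu_w - mem_w   # VRM losses, fans, misc. board components
print(rest_w)  # -> 66
```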

Anyway, it seems peculiar. 2-3% for an OC doesn't seem that impressive either. If the GPU core is already hitting 1935 MHz, I'd guess it's pretty close to the max, if Turing is anything to go by.
 