Nvidia Ampere Discussion [2020-05-14]

A couple of years ago, when small gaming boxes were all the rage, Nvidia was pretty adamant that their reference designs had to be 2-slot blowers or they wouldn't fit into those particular small cases anymore.
Turing launched a couple of years ago. Its reference cards were 2-slot, 2-fan, factory-OCed designs.

But then, maybe that exotic thing is necessary when you put your 400 Watt SXM4 into an adapter for PCIe.
So it will consume more than the GA100 PCIe card? Because that one is 250 W.
 
Regardless, render times are so long on CPUs that I wouldn't be surprised to see big production houses start to roll out GPU render farms over time. More and more production renderers are gaining the capability, and overall it's probably a big time saver.
It's been a hot topic for the past few years within the M&E industry, and traditionally the quality hasn't been the same as CPU rendering. One interesting use is adding unattended GPU desktops to the resources available to the render farm at night (a rough sketch of that scheduling idea follows the quote below).
Previously, this rendering would be completed on a CPU farm, which is a finite resource. Instead, Redshift makes it possible to render on an idle resource, such as unused desktop computers during nighttime hours. In every studio, with a bevy of PC video cards sitting idle for 12 hours of the night, you now have a GPU farm with extra render resources – and the render execution is fast. A 20-minute CPU render often takes 20 to 40 seconds in Redshift.
...
On most days, every studio has at least a half dozen machines that are idle due to workplace absences and can be added to the farm. Between those machines and a few dedicated GPUs, the studio can have a daytime farm to iterate test frames and hundreds of machines to use at night for full frame-range renders. That strategy of offloading to the GPU to free up the CPU farm for final renders has been the most game-changing application of Redshift.
July 24, 2020
https://expertswhogetit.ca/data-cen...-is-changing-vfx-practices-in-the-me-industry
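
In scheduling terms the idea is straightforward: only dispatch frames to artist desktops outside working hours, round-robin across whichever machines are idle. Below is a minimal Python sketch of that, purely illustrative: the hostnames, frame range, hours window, and the redshiftCmdLine invocation and paths are assumptions, and a real studio would drive this through its render manager rather than a hand-rolled script.

Code:
import datetime
import subprocess
from itertools import cycle

WORKSTATIONS = ["ws-01", "ws-02", "ws-03"]   # assumed idle artist desktops with GPUs
FRAMES = range(1001, 1101)                   # assumed frame range for the shot

def is_night() -> bool:
    # Only borrow desktops outside working hours (assumed 20:00-07:00).
    hour = datetime.datetime.now().hour
    return hour >= 20 or hour < 7

def render_frame(host: str, frame: int) -> None:
    # Render one frame on a remote machine over SSH using Redshift's
    # command-line renderer; the job path and scene file are made up here.
    cmd = ["ssh", host, "redshiftCmdLine", f"/jobs/shot010/frames/frame_{frame:04d}.rs"]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    if is_night():
        # Naive sequential round-robin dispatch; a real farm would queue work
        # and run the hosts in parallel.
        for host, frame in zip(cycle(WORKSTATIONS), FRAMES):
            render_frame(host, frame)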
 
None of this bodes well no matter how you spin it. Nvidia isn’t going to push the limits of power and cooling just for kicks.

Doesn't bode well? The 3090 basically looks like a Titan replacement: it's the enthusiast product for people who want the absolute best performance and are willing to pay for that, and I doubt those buyers mind the power draw or the dimensions of the card.
A 3080/3080 Ti will already blow past what's in the consoles; a GPU like that will last most high-end gamers an entire generation and give them the best version of multiplatform games.

I think it will be more interesting to see what a 3060/3070 will be like; if they perform like a 2070S/2080/2080 Ti, then that's a very good thing.
 
There is still the 3080 with a two slot reference cooler...
... which is fine (for Nvidia) if it's competing with the fastest AMD has to offer, and which isn't (for Nvidia) if AMD's fastest competes with the 3090 in a 2-slot/28 cm design.
 
None of this bodes well no matter how you spin it. Nvidia isn’t going to push the limits of power and cooling just for kicks.

Non-custom 2080 Ti cards were much too power limited; the difference from the 2080 Super to the 2080 Ti FE was pretty small because of that. It makes sense for Nvidia to go for a higher TDP to increase the performance of the enthusiast tier, especially as they know that AMD isn't hesitating to go for a 375 W TDP, and Intel's Xe HP presentations could also lead to products with more than 300 W TDP. SLI is dying anyway, so why not build a real ultra-enthusiast graphics card? Not like the Titan, which is quite limited. I like Nvidia going all in with the 3090.

The normal 3080 is much more important as an indicator of architectural efficiency. We have rumours of a 320 W TDP for the 3080, but at the same time the 3080 is rumoured to use the 2-slot version of the cooler. It makes no sense to me to switch to the smaller cooler design for just 30 W less TDP. If the 3080 really has a 320 W TDP, then something went wrong during design (process, memory power...).
 
Doesn't bode well? The 3090 basically looks like a Titan replacement: it's the enthusiast product for people who want the absolute best performance and are willing to pay for that, and I doubt those buyers mind the power draw or the dimensions of the card.

Prior Titans also fit that description but didn’t require bizarro cooling or power. So that can’t be it.
 
I'm having a very hard time believing that Nvidia totally fucked their power-draw advantage over AMD while doing a node shrink. The 5700 XT drew almost as much power as a 2080 FE, and Turing was 12 nm while the 5700 XT was 7 nm. I know people are saying Samsung 8 nm isn't great, but it seems very weird to me that a 2080 FE would be 210 W max and suddenly the 3080 is some kind of thermal monstrosity, even with a node shrink that should give some advantages.
 