Nvidia Ampere Discussion [2020-05-14]

Gotta see how the next Exynos part with RDNA performs tbh.
Well, how that ends up wouldn't matter much, as we don't expect 5LPE GPUs. The only good sign about 5LPE is that Qualcomm was initially on board for the S875, disregarding the recent yield-issue rumours and them going back to TSMC for a late-year refresh.

Samsung's 10LPP was better than TSMC's 10FF, and 8LPP is also good. It's not as great as TSMC N7, but not too far off either, except for density, where it's far behind.

7LPP is supposedly double the static leakage and -5% dynamic power vs N7, and keep in mind N7P and N7+ are improvements over N7 as well.
 
Well, how that ends up wouldn't matter much, as we don't expect 5LPE GPUs.
Eh, still gonna be a helluva interesting round of benchmarking for you to do.
The only good sign about 5LPE is that Qualcomm was initially on board for the S875
The whole 875 or only the modem?
Initial thingies said X60 only.
7LPP is supposedly double the static leakage and -5% dynamic power vs N7, and keep in mind N7P and N7+ are improvements over N7 as well.
what the hell went wrong there
 
Eh, still gonna be a helluva interesting round of benchmarking for you to do.
I'm not convinced we'll even see RDNA next gen so there's that. And people give SLSI too much credit when they've fucked up for 5 years in a row.
The whole 875 or only the modem?
Initial thingies said X60 only.
Qualcomm hinted at some issues regarding N5; reading between the lines, I think they couldn't get any volume allocated.
what the hell went wrong there
TSMC just has more resources and better R&D. Hard to keep up when you have no customers left - having tons of customers that can give you feedback is incredibly useful.
 
I'm not convinced we'll even see RDNA next gen so there's that.
P sure it's coming.
And people give SLSI too much credit when they've fucked up for 5 years in a row.
The Mali GPU portion was never truly awful, sans the 990.
I have some slight hopes.
Qualcomm hinted at some issues regarding N5; reading between the lines, I think they couldn't get any volume allocated.
N5 for '21 is nothing besides Apple so I doubt they had no slots at all. Weird.
TSMC just has more resources and better R&D
As does Intel, but Intel set themselves on fire.
Also, 14 and 10 worked pretty well as far as I'm concerned.
 
I looked up GP107. A GTX 1050 Ti card delivered 44 GFLOPs/W within 60W at a core clock rate around 1700MHz, and the Max-Q variant 54 GFLOPs/W. A 1650 sits at 46 GFLOPs/W at 1824MHz, and a 2080 Ti FE at 1750MHz is around 54 GFLOPs/W (same as a GTX 1080).

Basically, Gaming Ampere delivers no improvement over Pascal on 16nm or Pascal on 14nm.
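For anyone checking the math, here's a rough back-of-the-envelope sketch of where those numbers come from. It assumes FP32 throughput = shader cores × 2 FLOPs per clock (FMA) × clock, and the wattages are ballpark board-power figures rather than official TDPs, so the outputs land near, not exactly on, the values above:

```python
# Back-of-the-envelope FP32 efficiency: cores * 2 FLOPs per clock (FMA) * clock / board power.
# Clocks are the boost figures quoted above; wattages are rough board-power assumptions, so the
# results come out within a couple of GFLOPS/W of the numbers in the post, not exactly on them.
cards = {
    #                 CUDA cores, clock (GHz), power (W)
    "GTX 1050 Ti":    (768,  1.700, 60),
    "GTX 1650":       (896,  1.824, 75),
    "RTX 2080 Ti FE": (4352, 1.750, 280),
}

for name, (cores, clock_ghz, watts) in cards.items():
    gflops = cores * 2 * clock_ghz            # GFLOPS, since the clock is in GHz
    print(f"{name:14s}: {gflops:7.0f} GFLOPS, {gflops / watts:4.1f} GFLOPS/W")
```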
 
P sure it's coming.
Last I heard there will be a G78, but not sure if we'll see more than one design.
Also, 14 and 10 worked pretty well as far as I'm concerned.
TSMC saw the EUV conundrum coming and focused on DUV first and foremost, while Samsung gambled and lost that bet. TSMC meanwhile worked on either developing their own pellicle (not confirmed) or some super-secret anti-contamination system, while Samsung struggles to get wafers out due to EUV yield issues. I hope 5LPE is competitive, because if not, 3GAA will be their last chance as a leading-edge foundry; fail there and they'll lose viability at the leading edge forever.
 
Last I heard there will be a G78
Google's semi-custom part is rumored to be G78, sure, but the actual E1000 (or whatever they call it) is ??????.
TSMC meanwhile worked on either developing their own pellicle (not confirmed) or some super-secret anti-contamination system, while Samsung struggles to get wafers out due to EUV yield issues.
Sounds like voodoo but EUV is voodoo so anything is possible.
I hope 5LPE is competitive
God I hope so; QC needs to get their balls squashed for laziness.
 
The answer is no, they still use CPUs. Their upcoming (out now?) Renderman version supports mixed rendering with GPU compute, but their render farm hasn't been upgraded yet.
Nobody (none of the big animation & VFX studios) uses GPU farms for final-frame rendering. GPU rendering is mainly used by the artists during production for look-dev, lighting setup, etc. The one case where some form of GPU ray tracing was used was on Avatar & Tintin, using PantaRay (Weta's ray tracer developed by Nvidia's Jacopo Pantaleoni) to bake directional ambient occlusion for the spherical harmonics pipeline. Final rendering was then done using Renderman. Weta Digital's current renderer, Manuka, was initially developed as a hybrid CPU/GPU path tracer, but the GPU path was later dropped. When it comes to Nvidia's RTX, there's also the small "issue" that most renderers use double-precision (64-bit) floating point at several stages, while RTX relies on single-precision (32-bit) floating point, which can result in inaccurate shading and limits accuracy in large scenes.
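To make the precision point concrete, here's a tiny Python sketch (purely illustrative, not tied to any renderer's actual code) showing how the smallest representable offset grows with coordinate magnitude in single vs double precision:

```python
import numpy as np

# Toy illustration: the representable spacing of a float grows with magnitude,
# so single precision gets coarse far from the scene origin.
for coord in (1.0, 1_000.0, 100_000.0, 10_000_000.0):
    step32 = np.spacing(np.float32(coord))   # smallest resolvable offset in float32
    step64 = np.spacing(np.float64(coord))   # same offset in float64
    print(f"at {coord:>12,.0f} scene units: float32 step = {step32:.3e}, float64 step = {step64:.3e}")

# At 10 million units, float32 can only resolve ~1-unit offsets, which is why
# large scenes (and some shading math) tend to lean on double precision.
```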
 
So somewhere between things that are pretty common on all modern cards, including those based on RDNA1?

I don't think there was ever a single GPU reference air cooler that came even close to being this beefy. It may very well be that this is some special Nvidia-only edition (no AIB partner versions), a monster with huge headroom for OC. Or, what's more likely, Nvidia pushed the 3090, or whatever it ends up being called, to the brink of what's possible with air cooling, and it likely has very little or no headroom left. This thing screams 400W.
 
The answer is no, they still use CPUs. Their upcoming (out now?) Renderman version supports mixed rendering with GPU compute, but their render farm hasn't been upgraded yet.
Interesting. Surely they got something out of the deal, though. It would be rather odd to just give Nvidia money for no reason.
 
Interesting. Surely they got something out of the deal, though. It would be rather odd to just give Nvidia money for no reason.
It's literally in the second paragraph:

The multi-year strategic licensing agreement gives Pixar access to NVIDIA's quasi-Monte Carlo (QMC) rendering methods.
Pixar licensed Mental Ray's QMC, which Nvidia acquired in 2007 and expanded upon.
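For context on what QMC buys a renderer: instead of pseudo-random samples it uses low-discrepancy sequences, which usually converge faster for the integrals a path tracer has to estimate. A toy Python sketch (my own illustration, nothing to do with Mental Ray's or Pixar's actual implementation), comparing plain Monte Carlo with a Halton sequence on a simple 2D integral:

```python
import math, random

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def estimate(points):
    """Average of sin(x)*sin(y) over the samples, i.e. an estimate of its integral on [0,1]^2."""
    return sum(math.sin(x) * math.sin(y) for x, y in points) / len(points)

n = 4096
random.seed(0)
pseudo = [(random.random(), random.random()) for _ in range(n)]
quasi = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]  # bases 2 and 3 for the two dims

exact = (1 - math.cos(1.0)) ** 2   # analytic value of the integral
mc, qmc = estimate(pseudo), estimate(quasi)
print(f"exact             : {exact:.6f}")
print(f"plain Monte Carlo : {mc:.6f}  (error {abs(mc - exact):.2e})")
print(f"Halton QMC        : {qmc:.6f}  (error {abs(qmc - exact):.2e})")
```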
 
Crazy how reading links works ;).

Do you reckon they are using CPU rendering in Presto / USD?
 
Nobody (none of the big animation & VFX studios) uses GPU farms for final-frame rendering. GPU rendering is mainly used by the artists during production for look-dev, lighting setup, etc. The one case where some form of GPU ray tracing was used was on Avatar & Tintin, using PantaRay (Weta's ray tracer developed by Nvidia's Jacopo Pantaleoni) to bake directional ambient occlusion for the spherical harmonics pipeline. Final rendering was then done using Renderman. Weta Digital's current renderer, Manuka, was initially developed as a hybrid CPU/GPU path tracer, but the GPU path was later dropped. When it comes to Nvidia's RTX, there's also the small "issue" that most renderers use double-precision (64-bit) floating point at several stages, while RTX relies on single-precision (32-bit) floating point, which can result in inaccurate shading and limits accuracy in large scenes.

The upcoming Renderman was announced as fully vendor-agnostic, so I'm not sure if they're using any sort of DXR standard. That being said, technically The Mandalorian does use GPUs for their realtime backprojection stuff. Since... whatever part of Disney is responsible for it said they're opening up the tech and studios to other productions, I wouldn't be surprised if more shows end up using it as well. We'll probably see Quadro RTX 8000s and raytracing for the background stuff in Season 2, but I wonder how they'll upgrade after that. Big Nvidia chip vs. big AMD chip from this year, fight!

Regardless, render times on CPUs are so long that I wouldn't be surprised to see big production houses start to roll out GPU render farms over time. More and more production renderers are getting the capability, and overall it's probably a big time saver.
 
So somewhere between things that are pretty common on all modern cards, including those based on RDNA1?
Not for reference cards, though. A couple of years ago, when small gaming boxes were all the rage, Nvidia was pretty adamant that their reference designs had to be 2-slot blowers or they wouldn't fit into those particular small cases anymore.

But then, maybe that exotic thing is necessary when you put your 400 Watt SXM4 into an adapter for PCIe. ;)

Regardless, render times on CPUs are so long that I wouldn't be surprised to see big production houses start to roll out GPU render farms over time. More and more production renderers are getting the capability, and overall it's probably a big time saver.
Not really sure, but for really large scenes, maybe there's a limit to the first-gen RT cores. Their BVH traversal in hardware could fall off a cliff somewhere when the internal cache is oversubscribed.
 