> For AMD to come close to Ampere with RDNA2
Oh boy.
> True that, those Samsung process-related horror stories might be true after all.
Gotta see how the next Exynos part with RDNA performs tbh.
For AMD to come close to Ampere with RDNA2, NV probably has to do worse than Turing.
> Gotta see how the next Exynos part with RDNA performs tbh.
Well how that ends up wouldn't matter much as we don't expect 5LPE GPUs. The only good sign about 5LPE is that Qualcomm initially was onboard for the S875, disregarding the recent yield issue rumours and them going back to TSMC for a late year refresh.
> Well how that ends up wouldn't matter much as we don't expect 5LPE GPUs.
Eh, still gonna be helluva interesting round of benchmarking for you to do.
> The only good sign about 5LPE is that Qualcomm initially was onboard for the S875
The whole 875 or only the modem?
> 7LPP is supposedly double the static leakage and -5% dynamic power vs N7, and keep in mind N7P and N7+ are better over N7 as well.
What the hell went wrong there?
> Eh, still gonna be helluva interesting round of benchmarking for you to do.
I'm not convinced we'll even see RDNA next gen so there's that. And people give SLSI too much credit when they've fucked up for 5 years in a row.
> The whole 875 or only the modem?
Qualcomm hinted at some issues regarding N5 - reading between the lines, I think they couldn't get any volume allocated.
Initial thingies said X60 only.
> What the hell went wrong there?
TSMC just has more resources and better R&D. Hard to keep up when you have no customers left - having tons of customers that can give you feedback is incredibly useful.
> I'm not convinced we'll even see RDNA next gen so there's that.
P sure it's coming.
> And people give SLSI too much credit when they've fucked up for 5 years in a row.
The Mali GPU portion was never truly awful, sans the 990.
> Qualcomm hinted at some issues regarding N5 - reading between the lines, I think they couldn't get any volume allocated.
N5 for '21 is nothing besides Apple, so I doubt they had no slots at all. Weird.
> TSMC just has more resources and better R&D.
As does Intel, but Intel set themselves on fire.
> P sure it's coming.
Last I heard there will be a G78, but not sure if we'll see more than one design.
> Also 14 and 10 worked pretty well for all I care.
TSMC saw the EUV conundrum coming and focused on DUV first and foremost, while Samsung gambled and lost that bet. TSMC meanwhile either developed their own pellicle (not confirmed) or some super-secret anti-contamination system, while Samsung struggles to get wafers out due to EUV yield issues. I hope 5LPE is competitive, because if not, 3GAA will be their last chance as a leading-edge foundry, beyond which they'll lose viability at the leading edge forever.
> Last I heard there will be a G78
Google's semi-custom part is rumored to be G78, sure, but the actual E1000 (or whatever they call it) is ??????.
> TSMC meanwhile either developed their own pellicle (not confirmed) or some super-secret anti-contamination system, while Samsung struggles to get wafers out due to EUV yield issues.
Sounds like voodoo, but EUV is voodoo, so anything is possible.
> I hope 5LPE is competitive
God I hope so; QC needs to get their balls squashed for laziness.
After reading this, I wondered if Pixar uses Nvidia GPUs, so I googled it.
https://nvidianews.nvidia.com/news/...logy-for-accelerating-feature-film-production
> The answer is no, they still use CPUs. Their upcoming (out now?) Renderman version supports mixed rendering from gpu compute, but their render farm hasn't been upgraded yet.
Nobody (none of the big animation & VFX studios) uses GPU farms for final-frame rendering. GPU rendering is mainly used by artists during production for look-dev, lighting setup, etc. The one case where some form of GPU ray tracing was used was on Avatar & Tintin, using PantaRay (Weta's ray tracer developed by Nvidia's Jacopo Pantaleoni) to bake directional ambient occlusion for the spherical-harmonics pipeline; final rendering was then done in Renderman. Weta Digital's current renderer, Manuka, was initially developed as a hybrid CPU/GPU path tracer, but the GPU path was later dropped. When it comes to Nvidia's RTX there's also the small "issue" that most renderers use double-precision (64-bit) floating point at several stages, while RTX relies on single-precision (32-bit) floating point, which can result in inaccurate shading and limit accuracy in large scenes.
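To make the precision point above concrete, here's a minimal C++ sketch (not from any renderer mentioned in the thread; the numbers are arbitrary): far from the scene origin, a 32-bit float can no longer resolve small offsets that a 64-bit double still represents.

```cpp
// FP32 vs FP64 at large world coordinates: a millimetre-scale offset
// 100 km from the origin is lost entirely in single precision because
// it is smaller than the spacing between representable float values
// there (~0.0078), while double precision keeps it.
#include <cstdio>

int main() {
    float  xf = 100000.0f, offf = 0.0005f;
    double xd = 100000.0,  offd = 0.0005;

    printf("float : %.7f\n", (xf + offf) - xf);   // prints 0.0000000
    printf("double: %.7f\n", (xd + offd) - xd);   // prints 0.0005000
    return 0;
}
```

This is the kind of error that shows up as shading artifacts or self-intersection problems in very large scenes, and why production pipelines tend to keep critical transforms in double precision or re-center geometry around the camera.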
> Somewhere between 3 slots and gigantic finstacks.
So somewhere between things which are pretty common on all modern cards including those based on RDNA1?
> So somewhere between things which are pretty common on all modern cards including those based on RDNA1?
I don't think there was ever a single GPU reference air cooler that came even close to being this beefy.
> The answer is no, they still use CPUs. Their upcoming (out now?) Renderman version supports mixed rendering from gpu compute, but their render farm hasn't been upgraded yet.
Interesting. Surely, they got something out of the deal though. It would be rather odd to just give Nvidia money for no reason.
> Interesting. Surely, they got something out of the deal though. It would be rather odd to just give Nvidia money for no reason.
It's literally in the second paragraph:
> The multi-year strategic licensing agreement gives Pixar access to NVIDIA's quasi-Monte Carlo (QMC) rendering methods.
Pixar licensed Mental Ray's QMC, which Nvidia acquired in 2007 & expanded upon.
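For context on what quasi-Monte Carlo means in practice, here's a small illustrative C++ sketch (not Pixar's or Nvidia's code): it swaps independent pseudo-random samples for a low-discrepancy Halton sequence when estimating a simple integral, which is the basic idea behind QMC rendering methods.

```cpp
// Monte Carlo vs quasi-Monte Carlo: both estimators integrate
// f(x, y) = [x^2 + y^2 <= 1] over the unit square, i.e. they estimate pi/4.
// QMC uses Halton points (radical inverse in bases 2 and 3) instead of
// pseudo-random samples, which typically converges faster for smooth integrands.
#include <cstdio>
#include <random>

// Radical inverse of index i in the given prime base: the core of Halton.
double radical_inverse(unsigned i, unsigned base) {
    double inv_base = 1.0 / base, f = inv_base, result = 0.0;
    while (i > 0) {
        result += f * (i % base);
        i /= base;
        f *= inv_base;
    }
    return result;
}

int main() {
    const int n = 100000;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    int hits_mc = 0, hits_qmc = 0;
    for (int i = 0; i < n; ++i) {
        double xr = uni(rng), yr = uni(rng);                    // pseudo-random
        hits_mc += (xr * xr + yr * yr <= 1.0);

        double xq = radical_inverse(i, 2), yq = radical_inverse(i, 3);  // Halton
        hits_qmc += (xq * xq + yq * yq <= 1.0);
    }

    printf("MC  estimate of pi: %.6f\n", 4.0 * hits_mc / n);
    printf("QMC estimate of pi: %.6f\n", 4.0 * hits_qmc / n);
    return 0;
}
```

Production path tracers apply the same idea across many more dimensions (lens, light, and bounce samples), typically with scrambled or otherwise higher-quality low-discrepancy sequences.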
> So somewhere between things which are pretty common on all modern cards including those based on RDNA1?
Not for reference cards, though. Couple of years ago, when small gaming boxes were all the rage, Nvidia was pretty adamant that their ref designs had to be 2-slot blowers or they wouldn't fit into those particular small cases anymore.
> Regardless, the render times are so slow on CPUs that I wouldn't be surprised to see big production houses start to roll out GPU render farms over time. More and more production renderers are getting the capabilities, and overall it's probably a big time saver.
Not really sure, but for really large scenes, maybe there's a limit to the 1st-gen RTX cores. Their BVH traversal in hardware could fall off a cliff somewhere when the internal cache is oversubscribed.
> So somewhere between things which are pretty common on all modern cards including those based on RDNA1?
Not.