If Ampere is 'underutilized' and already hitting a power wall, what exactly would you gain by increasing utilization?
If you hit the power limit, your clocks and voltage drop, but utilizing the full width of the GPU will still result in better performance.
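A rough way to see why filling the whole chip can still win under a power cap: dynamic power scales roughly with clock × voltage², while throughput scales with active units × clock, so trading some clock and voltage for width can come out ahead at the same power draw. A back-of-the-envelope sketch in Python with made-up numbers (nothing below is a measured Ampere figure):

```python
# Toy model: throughput ~ units * clock, dynamic power ~ units * clock * V^2.
# All numbers below are illustrative, not measurements of any real GPU.

def throughput(units, clock_ghz):
    return units * clock_ghz                 # arbitrary perf units

def power(units, clock_ghz, volts):
    return units * clock_ghz * volts ** 2    # ~ C * f * V^2 per active unit

narrow = dict(units=0.6, clock_ghz=1.9, volts=1.05)  # fewer units, high clock/voltage
wide = dict(units=1.0, clock_ghz=1.6, volts=0.90)    # whole chip lit up, lower clock/voltage

for name, cfg in (("narrow", narrow), ("wide", wide)):
    perf = throughput(cfg["units"], cfg["clock_ghz"])
    watts = power(**cfg)
    print(f"{name}: perf={perf:.2f}, power={watts:.2f}")
# narrow: perf=1.14, power=1.26
# wide:   perf=1.60, power=1.30  -> similar power budget, more total work done
```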
> 3080 vs 2080 Ti (300W) on Nvidia's Asteroids benchmark, which is a mesh shader demo. The difference is huge at 4K resolution.
That's impressive.
I see the most gains come from compute workloads.
Mesh shaders have the same execution model as compute shaders, but with a direct interface to the rasterizers.
Here are other examples where Ampere scales almost linearly with flops, all compute workloads:
https://www.ixbt.com/img//x1600/r30/00/02/33/56/d3d1210nbodygravity64k.png
https://www.ixbt.com/img/r30/00/02/33/56/vray_668771.png
https://www.pugetsystems.com/pic_disp.php?id=63679&width=800
https://babeltechreviews.com/wp-content/uploads/2020/09/Sandra-2020.jpg
In the case of Sandra, image processing and many other kernels are all ~2x faster than the 2080 Ti.
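For what it's worth, ~2x is roughly what the paper FP32 specs predict, which is consistent with near-linear FLOPS scaling for ALU-bound kernels. A quick check from the published CUDA core counts and boost clocks (real clocks under load will differ):

```python
# Peak FP32 throughput = cores * 2 (FMA) * clock; specs are Nvidia's boost-clock figures.
cards = {
    "RTX 2080 Ti": (4352, 1.545e9),   # CUDA cores, boost clock in Hz
    "RTX 3080":    (8704, 1.710e9),
}

tflops = {name: cores * 2 * clock / 1e12 for name, (cores, clock) in cards.items()}
for name, t in tflops.items():
    print(f"{name}: {t:.1f} TFLOPS")
print(f"ratio: {tflops['RTX 3080'] / tflops['RTX 2080 Ti']:.2f}x")
# RTX 2080 Ti: 13.4 TFLOPS
# RTX 3080: 29.8 TFLOPS
# ratio: 2.21x
```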
> What else could it have been?

Who knows? Maybe they've got some inventory of 2GB modules for the 3090 launch already. We hadn't even heard about G6X a month ago.
Why wouldn't they just clamshell it like the 3090?
This confirms it though - and also makes 3080 20GB a 2021 product probably.
> Why wouldn't they just clamshell it like the 3090?
It would add too much cost for the card to still be a "3080", IMO.
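For anyone following the module math behind that exchange: GDDR6X devices are 32 bits wide, so the bus width fixes how many chips fit on one side of the board. A quick sketch of the arithmetic (module densities here are assumptions for the arithmetic, not product announcements):

```python
# Each GDDR6X device sits on a 32-bit channel, so capacity = channels * modules per channel * density.
def capacity_gb(bus_bits, gb_per_module, modules_per_channel=1):
    channels = bus_bits // 32
    return channels * modules_per_channel * gb_per_module

print(capacity_gb(320, 1))      # 10 GB -> today's 3080 (ten 8Gb modules)
print(capacity_gb(320, 2))      # 20 GB -> same board with 16Gb (2GB) modules
print(capacity_gb(320, 1, 2))   # 20 GB -> clamshell: 1GB modules on both sides of the PCB
print(capacity_gb(384, 1, 2))   # 24 GB -> the 3090's clamshell layout
```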
So 18 months on there is still no video card that can properly drive my LG C9 65" with variable refresh rates at 4K...
LG's decision to not add support for AMD freesync on C9's was bad enough (since they were first advertised as an 'adaptive sync' capable TV well before they were G-Sync certified).
> With Turing, they had reserved unconstrained Tensor throughput for Quadro cards, not 100% sure about Titan though:
> https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf

Turing Titan is full-speed mixed precision, 130 TFLOPS.
Does it say there explicitly that it's full-speed mixed precision? I didn't find any concrete evidence.
https://developer.nvidia.com/titan-rtx
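For reference, the 130 number on that page is the FP16 Tensor peak at boost clock, and it falls straight out of the published Titan RTX specs; what it doesn't settle by itself is the FP32-accumulate ("unconstrained") rate being debated above. A quick sanity check in Python:

```python
# Titan RTX Tensor peak from published specs (576 Tensor Cores, 1770 MHz boost).
tensor_cores = 576        # 72 SMs x 8 Tensor Cores
fma_per_core = 64         # FP16 FMAs per Tensor Core per clock on Turing
boost_hz = 1.770e9

tflops = tensor_cores * fma_per_core * 2 * boost_hz / 1e12   # FMA counts as 2 ops
print(f"{tflops:.1f} TFLOPS")   # 130.5 -> the advertised "130 TFLOPS"
# Whether FP32 accumulation runs at the same rate is the "unconstrained throughput"
# question raised earlier in the thread; this figure alone doesn't answer it.
```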
> So 18 months on there is still no video card that can properly drive my LG C9 65" with variable refresh rates at 4K...

I was under the impression that 2000-series GPUs already have G-Sync working with the 2019 LG OLEDs.
> LG's decision to not add support for AMD freesync on C9's was bad enough (since they were first advertised as an 'adaptive sync' capable TV well before they were G-Sync certified).

Actually, this is AMD's fault, not LG's. AMD promised it would support VRR over HDMI. FreeSync and VRR are not precisely the same thing.
> I was under the impression that 2000-series GPUs already have G-Sync working with the 2019 LG OLEDs.

There seems to be a driver bug which affects all VRR-enabled GPUs right now. Hopefully it will be fixed on NVIDIA's side and won't require firmware updates from LG.
> Here are other examples where Ampere scales almost linearly with flops, all compute workloads:
> https://www.ixbt.com/img//x1600/r30/00/02/33/56/d3d1210nbodygravity64k.png
> https://www.ixbt.com/img/r30/00/02/33/56/vray_668771.png

Damn, I'd totally forgotten about ixbt.