Nvidia's 3000 Series RTX GPU [3090s with different memory capacity]

[image: Hardware Unboxed RTX 3080 chart]


Hardware Unboxed noticed the coil whine as well.

So PCAT has been really useful for Nvidia there; the total system power consumption difference from the second-highest power hog was around 80W.
 
As far as I can tell, the console platforms are intending to use VRS. However, VRS seems to be somewhat incompatible(?) with DLSS, which may create a mess on PC.

Maybe so. But VRS is also available on Nvidia's Turing and Ampere architectures, and it doesn't seem like they're leaning into it much, not like DLSS anyhow.
 
I feel disappointed. Less than 30% performance improvement. Yeah, they're not so overpriced this gen, but I hoped for more. Also, 10GB probably won't be enough for 4K gaming in the next few years. That and the 400W OC :oops:
 
I think I'm going to wait to see if AMD has anything competitive coming. There are no games that I'm stoked about that my current Pascal hardware will have trouble with anyway.

10GB seems like a scheme to tempt you to buy that 3090. I'm not very excited about upgrading my 1080 Ti to a card with less RAM.
 
Seems to me that this GPU really shows that flops aren't everything. I've no proof of course, but I believe that the more you need FP32, the more this GPU will shine. In other situations, it will be bottlenecked elsewhere (bandwidth? ROPs? Even drivers? A mix of everything, I guess). In a weird way it reminds me of some old ATI GPUs :D

TFLOPs, at least to me, has never been a useful performance metric by itself. The listed numbers have always just been a theoretical peak derived from clock speed x FP32 units x ops per unit (which is basically 2 across the board, since an FMA counts as two ops). So in essence, comparing TFLOPs is just comparing the difference between FP32 units x clock speed. If you took a GPU and downclocked its memory to the minimum, its TFLOPs rating wouldn't even change (actually, with how modern dynamic clocking works, it might be higher due to the increased power budget), yet we know that in reality, for any meaningful workload, its achieved throughput would go down significantly.

Of course it's one of those useful technical marketing terms so it gets used.
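To make the arithmetic concrete, here's a minimal sketch of that peak-TFLOPs formula in Python. The 3080 and 2080 Ti figures plugged in are Nvidia's published shader counts and boost clocks; the function name is just illustrative:

```python
# Theoretical peak FP32 TFLOPs, per the formula above:
# FP32 units x ops per unit (2 for an FMA) x boost clock.
# Sustained throughput still depends on memory bandwidth, occupancy, drivers, etc.

def peak_tflops(fp32_units: int, boost_clock_ghz: float, ops_per_unit: int = 2) -> float:
    """Peak FP32 TFLOPs = units * ops per clock * clock (GHz) / 1000."""
    return fp32_units * ops_per_unit * boost_clock_ghz / 1000.0

print(peak_tflops(8704, 1.71))   # RTX 3080    -> ~29.8 TFLOPs
print(peak_tflops(4352, 1.545))  # RTX 2080 Ti -> ~13.4 TFLOPs
```

Note that memory clocks don't appear anywhere in the formula, which is exactly why the rating wouldn't move if you downclocked the memory, even though achieved throughput would.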
 
I feel disappointed. Less than 30% performance improvement. Yeah, they're not so overpriced this gen, but I hoped for more. Also, 10GB probably won't be enough for 4K gaming in the next few years. That and the 400W OC :oops:

A 30% performance improvement over the 2080 Ti for, what, 60% of the price? I guess if you have a 2080 Ti the upgrade is not as clear cut. For everyone else, it's an easy winner.
 
A 30% performance improvement over the 2080 Ti for, what, 60% of the price? I guess if you have a 2080 Ti the upgrade is not as clear cut. For everyone else, it's an easy winner.

If the 2080 Ti was ridiculously overpriced, that doesn't mean Ampere is good; it just shows how bad Turing was. Of course, we couldn't see it because AMD didn't offer any point of comparison.

I also don't understand why people are so happy with 5-10% better efficiency from a new architecture against a 2+ year old one, plus a new node. We should see more gains than that from the node alone.

And this, kids, is why you should never pre-order a product based on a paid review that says it will be 80% faster.
 
It's about the only thing I'd worry about, but I'm not that worried.
My Vega 56 is what, just over 3 years old now? So I'm hoping to keep my new card for 3 years.

That and power consumption. It looks like notebook users will be stuck with cards like the GF1660.

I mean, just moving to the new tech would be helpful in the notebook world. However, we might see Ryzen with integrated Navi 2 graphics knocking on GTX 1660 performance soon.
 
I'm wondering the same. Maybe DirectStorage will help, or perhaps developers will start to optimize for 10GB now that there is a 4K-capable 10GB card. Worst case, run 1440p and use DLSS.
Do you want to risk being "stuck" with 10GB until a worthwhile better card launches in 2 years' time? What happens to second-hand prices of 10GB cards if 20GB cards launch in November? Or even 12GB?

Remember, texture quality is what's really going to eat VRAM, and that isn't helped by running at 1440p.

Sampler feedback is the one technique that will really make a difference over the lifetime of a 10GB card. In 2 years' time it will probably be widely used, with games being worked on right now taking the time to adopt it. But it does require a serious rework of existing game engines. (There's a rough sketch of the idea below.)
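For anyone unfamiliar with the idea, here's a rough, hypothetical sketch of the residency decision sampler feedback enables; this is not the D3D12 API, just the streaming logic it would feed, and the function name and numbers are made up for illustration:

```python
# Rough sketch of sampler-feedback-driven texture streaming (illustrative only).
# The GPU records, per texture region, the finest (lowest-numbered) mip it actually
# sampled last frame; the streaming system then keeps only those mips resident,
# plus a safety margin, instead of the whole mip chain.

def finest_mip_to_keep(feedback_min_mip: list[int], margin: int = 1) -> int:
    """Given last frame's sampled min-mip per region, return the finest mip
    level worth keeping resident in VRAM (mip 0 = full resolution)."""
    finest_sampled = min(feedback_min_mip)   # finest mip the GPU actually used
    return max(0, finest_sampled - margin)   # keep one extra level ready

# Example: a texture with mips 0..12 whose finest sampled mip last frame was 3.
# Only mips 2..12 need to stay in VRAM; mips 0 and 1 (the most expensive ones)
# can wait on disk/SSD until feedback says they're needed.
print(finest_mip_to_keep([5, 3, 4, 6]))  # -> 2
```

The point is that the engine only pays VRAM for mips the GPU actually sampled, which is where the "serious rework" comes in: the texture streaming system has to be driven by that feedback rather than loading whole mip chains up front.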

One of the things that I'm still struggling to understand is the VRAM impact of ray tracing. It's likely to become more common in games over the next couple of years but I have no idea whether it's going to be significant.
 
One of the things that I'm still struggling to understand is the VRAM impact of ray tracing. It's likely to become more common in games over the next couple of years but I have no idea whether it's going to be significant.
I somewhat recall Digital Foundry found that VRAM consumption increased quite a bit when they tested Battlefield V. Maybe that'll change a bit with more optimization, but I figured there'd be a non-trivial increase from the various buffers involved.

Maybe this video: [embedded video]
I think they discussed texture settings needing to be reduced when DXR was on. (can't watch right now :oops:)
 
Why? The RTX 3070 delivers 2080 Ti FE performance with 50W less. Nvidia has put a "2080 Super" into notebooks. The 3070 will deliver much more performance than this card.
Notebook variants of the 20xx GPUs are heavily throttled to make them fit in a laptop. And even at the 80W limit of the Max-Q cards, the notebook cooling solutions required already start to be heavy and noisy. The most sensible solution would be a card capable of playing games maxed out at 720p, RT included, coupled with a high-performance, high-quality dedicated upscaling unit. But if the upscaling process has to run on the same shader units, as happens now with Ampere, then the upscaling performance is tied to that of the whole GPU.
 
Do you want to risk being "stuck" with 10GB until a worthwhile better card launches in 2 years' time? What happens to second-hand prices of 10GB cards if 20GB cards launch in November? Or even 12GB?

In case you're curious: I built my last computer in 2013 and I'm still using it. The only upgrades I did were a better SSD and a better GPU. I see PC technology development getting slower and slower... Not counting the GPU, my next planned PC build is slightly over $2000 in parts at the moment. I might get a 3080 or 3090. I'm not expecting something in 2 years' time that would make that build obsolete. Looking at the Turing-to-Ampere performance uplift, that wouldn't provoke me to upgrade the GPU. The earliest there'd be something good enough to make me consider upgrading is maybe 4 years from now. Thinking about how good DLSS 2.0 already is compared to native 4K, it might be a 6-year upgrade cycle for the GPU. DLSS is only going to get better over time, and I can sneak by a few years with DLSS on to avoid upgrading.

I'm firmly in the camp of buying something so good that I rarely need to upgrade. I'll wait until winter though. I want something Zen 3 based, and perhaps by that time the availability and my understanding of what is a good high-end buy will be clearer. Also, by then Cyberpunk 2077 will have had enough patching that I dare to start playing it.

I was "forced" to upgrade my original gpu to 1080ti as my old gpu just wasn't good enough for vr. 1080ti I want to upgrade to get ray tracing for new games like cyberpunk2077 and bloodlines2 and so on. But I suspect my old 4core cpu is just not going to cut it anymore for those games with ray tracing on. If there was no ray tracing I would still be happy with 1080ti.

edit: I wouldn't be surprised if developers started to optimize 4K for 10GB of memory now that there is a 4K-capable card with that spec. In the past that wasn't the case, and 4K was a 2080 Ti-or-better-only affair. The 3080 and 3070 are going to sell a ton of units and can in theory run optimized 4K well. The 3080 should be a desirable optimization target for developers.
 