Nvidia GeForce RTX 4080 Reviews

Using TechPowerUp's review of the FE and their relative performance charts for GPUs, here's an interesting comparison.

VS. 3080
  • 500 USD (71%) more than the 3080's launch price, and even worse against current 3080 pricing.
  • 42% faster.
VS. 3090
  • 300 USD (20%) cheaper than the 3090's launch price, but 300 USD (33%) more than current 3090 pricing.
  • 25% faster.
So, it's horrible value compared to the 3080 and even to current 3090 pricing, but at least it compares favorably to the 3090's launch pricing. :p It obviously compares more favorably to the Ti variants, but then the Ti variants were already incredibly bad value products anyway.
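For what it's worth, a quick perf-per-dollar sketch of the above (assuming launch MSRPs of $699 / $1499 / $1199, which line up with the $500 and $300 deltas quoted, and TechPowerUp's relative performance figures):

```python
# Relative performance (3080 = 1.00) and launch MSRPs, from the figures above
cards = {
    "RTX 3080": (1.00, 699),
    "RTX 3090": (1.42 / 1.25, 1499),  # the 4080 is 25% faster than the 3090
    "RTX 4080": (1.42, 1199),         # the 4080 is 42% faster than the 3080
}

baseline = cards["RTX 3080"][0] / cards["RTX 3080"][1]  # 3080 perf per dollar
for name, (perf, price) in cards.items():
    print(f"{name}: {perf / price / baseline:.2f}x the 3080's perf/$ at launch")
# RTX 3080: 1.00x, RTX 3090: ~0.53x, RTX 4080: ~0.83x
```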

If you suddenly have to find value by comparing a lower-tier product against the previous generation's highest-tier product just to justify its pricing, something has gone wrong. Yuk.

Regards,
SB

The thing is, the 4080 is still faster than both of them by a significant amount. It may not be better in terms of frames/$, but it is better by all technical metrics. People don't buy mid-range GPUs instead of high-end ones because they're better value; they buy the one they can afford that meets the performance level they want. If you want something faster than a 3090, you have two choices, so it just becomes a question of whether you can afford it.

I also remember when people said the 20 series was overpriced, and people bought it. Then the 30 series came out, people said the 3090 was too expensive, and people bought them up for gaming. And during the crypto boom, gamers were buying cards way above MSRP across the board, for all classes of GPUs. It's likely that what the market is willing to pay is much higher than what NVIDIA and its competitors had been asking for.
 
Actually, it's one of the reasons I decided not to wait for the 4080: when I heard about its 256-bit memory interface.
The 4080 does use faster memory to mitigate the problem a bit, but that also potentially makes it harder to overclock the memory.
On the other hand, the 4090 actually has the same memory bandwidth as a 3090 Ti, but it performs quite a bit better. So it's entirely possible that either Ada is more bandwidth efficient, or Ampere does not have enough compute power to use all that memory bandwidth.
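For reference, the raw bandwidth gap follows straight from bus width times memory speed; a minimal sketch (the data rates are the published GDDR6X speeds):

```python
# Peak bandwidth (GB/s) = bus width in bits / 8 * data rate (Gbps)
def peak_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(f"RTX 4080:    {peak_bandwidth(256, 22.4):7.1f} GB/s")  # 716.8
print(f"RTX 4090:    {peak_bandwidth(384, 21.0):7.1f} GB/s")  # 1008.0
print(f"RTX 3090 Ti: {peak_bandwidth(384, 21.0):7.1f} GB/s")  # 1008.0, same as 4090
```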

The 4090 shows very good performance scaling with more bandwidth.
 
It's likely that what the market is willing to pay is much higher than what NVIDIA and its competitors had been asking for.
I've been saying this for a while. NVIDIA's highest-end cards are way too cheap. I'm not sure if this applies to AMD, but I am confident NVIDIA could increase the MSRP of the 4090 by a lot and it would still sell out in the US.
 
That's your take on it. Or perhaps the $700 would have been fine given normal product consumption instead of the mining craze combined with COVID spending/supply issues, and Nvidia firmly believes it can get away with charging consumers whatever it damn well wants now, whilst also using the prices to clear excess Ampere inventory.

I don’t know why they would believe that they could move the same volume of $1200 cards now that they could 18 months ago. The demand drivers are no longer there. What they learned is that there is a lucrative market of people willing to spend $1000-$2000 on a graphics card. But it’s likely not a very big market. It will be interesting to know how many people who spent $700 to get the best are now willing to spend $1500-$2000 because that’s what the best costs now.

It’s a win/win for Nvidia either way. They give the big spenders something to throw money at and they still have a full lineup to satisfy the higher volume $300-$600 crowd. The cards between $800 and $1200 are in no man’s land though. If you’re willing to spend $1000 on a graphics card you probably don’t want to settle for second best.

One counterpoint is that according to Steam the 3080 Ti is 50% more popular than the 3090, even though it launched 9 months later for only $300 less. So there may actually be a lot of price elasticity above $1000. The difference of course is that the 4090 is head and shoulders faster than the 4080, so it'll be interesting to see how the latter fares at $1200.
 
The 4090 with 82.6 TFLOPS and 1008 GB/s actually has a worse (higher) FLOPS/byte ratio than the 4080 with 48.7 TFLOPS and 716 GB/s. And the 4080 has 33% more L2 cache per memory controller (8192 kB vs. 6144 kB). It should not be any more bandwidth limited than the 4090.
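The arithmetic behind that ratio, as a quick sketch using the figures just quoted:

```python
# FLOPS per byte of bandwidth: higher means more bandwidth-starved on paper
def flops_per_byte(tflops: float, bandwidth_gb_s: float) -> float:
    return tflops * 1e12 / (bandwidth_gb_s * 1e9)

print(f"RTX 4090: {flops_per_byte(82.6, 1008):.1f} FLOPS/byte")   # ~81.9
print(f"RTX 4080: {flops_per_byte(48.7, 716.8):.1f} FLOPS/byte")  # ~67.9
```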
 
The 4090 with 82.6 TFLOPS and 1008 GB/s actually has a worse (higher) FLOPS/byte ratio than the 4080 with 48.7 TFLOPS and 716 GB/s. And the 4080 has 33% more L2 cache per memory controller (8192 kB vs. 6144 kB). It should not be any more bandwidth limited than the 4090.
It may not be more bandwidth limited in comparison to the 4090, but it is in comparison to the cards it actually replaces on the market.
The 4080 is "just" 716 GB/s, while even the 3080 10GB was 760 GB/s and the 3080 Ti was 912 GB/s.
The 4090 is an upgrade in bandwidth over the 3090 Ti (same memory but more cache) while the 4080 is not, and it shows in its 4K+ scaling: it loses some 5-10% of scaling relative to the 3090 Ti when going from 1440p to 4K.

ComputerBase has shown this well: https://www.computerbase.de/2022-11/nvidia-aus-msi-zotac-geforce-rtx-4080-review-test/2/
Average scaling over the 3090 Ti there goes from +21% at 1440p to +18% at 4K and +15% at "5K".
The 4090 does much better there: +41%, +59% and +59% respectively (the 1440p result is obviously CPU limited).
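To put numbers on how the lead moves with resolution, a small sketch built from the ComputerBase percentages above:

```python
# Lead over the 3090 Ti at each resolution (from the ComputerBase review above)
lead_over_3090ti = {
    "RTX 4080": {"1440p": 1.21, "4K": 1.18, "5K": 1.15},
    "RTX 4090": {"1440p": 1.41, "4K": 1.59, "5K": 1.59},
}

for card, lead in lead_over_3090ti.items():
    retained = lead["4K"] / lead["1440p"]  # <1.0 means the lead shrinks at 4K
    print(f"{card}: {retained:.0%} of its 1440p lead retained at 4K")
# RTX 4080: ~98% (shrinks), RTX 4090: ~113% (grows, 1440p being CPU limited)
```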
 
• 4Gamer (Japan): NVIDIA Founders Edition
• Adrenaline (Brazil): NVIDIA Founders Edition
• AusGamers (Australia): NVIDIA Founders Edition, MSI GAMING X TRIO
• Benchmark (Poland): NVIDIA Founders Edition
• BPS Customs (United States): NVIDIA Founders Edition
• CGDirector (Germany): NVIDIA Founders Edition
• Chaowanke (China): AXGAMING
• Club386 (United States): NVIDIA Founders Edition
• ComptoirHardware (France): NVIDIA Founders Edition
• ComputerBase (Germany): NVIDIA Founders Edition, ASUS TUF OC, MSI SUPRIM X, ZOTAC AMP AIRO
• der8auer (Germany): ASUS ROG STRIX OC
• Digital Foundry (United Kingdom): NVIDIA Founders Edition
• El Chapuzas Informatico (Spain): NVIDIA Founders Edition
• Eteknix (United Kingdom): NVIDIA Founders Edition, Gigabyte Gaming OC, INNO3D iChill X3, Palit GameRock OC, MSI Suprim X [video]
• GamersNexus (United States): NVIDIA Founders Edition
• GDM (Japan): NVIDIA Founders Edition
• Guru3D (Netherlands): NVIDIA Founders Edition, ASUS ROG STRIX OC, MSI SUPRIM X, PALIT GAMEROCK OC
• Forbes (United Kingdom): NVIDIA Founders Edition [video]
Thanks to Videocardz!
The Guru3D review concludes more or less the same as the Digital Foundry review: superb performance, but not cheap. What puzzles me the most is that power consumption is 321W max, which would make it possible to use the classic 2x8-pin connector.
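The connector math checks out too; per the PCIe power specs (150W per 8-pin plug, 75W from the x16 slot), a quick sketch:

```python
# Classic PCIe power budget: 150 W per 8-pin plug, 75 W from the x16 slot
EIGHT_PIN_W, SLOT_W = 150, 75

budget = 2 * EIGHT_PIN_W + SLOT_W  # 2x8-pin configuration: 375 W
peak_draw = 321                    # measured maximum from the review

print(f"budget {budget} W vs. peak {peak_draw} W -> {budget - peak_draw} W headroom")
```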

It's kinda impressive how this card runs games like Flight Simulator when DLSS 3 is enabled. That's a perfect game for DLSS 3, where you play at your own pace and the pace of the game itself is "slow". It's also a perfect fit for games like A Plague Tale: Requiem, which isn't super fast paced either (unlike, say, F1 2022).
 
What puzzles me the most is that power consumption is 321W max, which would make it possible to use the classic 2x8-pin connector.
They went with 12VHPWR on the whole lineup because they want to get rid of the zoo of "classic connectors". And if not for the issues with the melting plugs it would most definitely be a net win for consumers - just one GPU power plug supplying anything from 0 to 600W.
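If I recall the ATX 3.0 sideband table correctly (treat the exact mapping here as an assumption on my part, not gospel), that 0-600W range works via two sense pins on the connector that tell the card what the cable/PSU can deliver:

```python
# 12VHPWR sideband sense pins -> maximum sustained cable power (ATX 3.0, as I
# recall it; the exact mapping is an assumption, check the spec before relying on it)
# "Gnd" = pin tied to ground, "Open" = pin not connected
SENSE_TO_MAX_WATTS = {
    ("Open", "Open"): 150,
    ("Gnd",  "Open"): 300,
    ("Open", "Gnd"):  450,
    ("Gnd",  "Gnd"):  600,
}

def max_cable_power(sense0: str, sense1: str) -> int:
    """Maximum sustained power the card may draw for a given sense-pin state."""
    return SENSE_TO_MAX_WATTS[(sense0, sense1)]

print(max_cable_power("Gnd", "Gnd"))  # 600 -- a full-power 12VHPWR cable
```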
 
They went with 12VHPWR on the whole lineup because they want to get rid of the zoo of "classic connectors". And if not for the issues with the melting plugs it would most definitely be a net win for consumers - just one GPU power plug supplying anything from 0 to 600W.

I'd rather have that plug be like this (at least for the US) :

[image]
 