Nvidia's 3000 Series RTX GPU [3090s with different memory capacity]

Then there are people, who I think are a minority, who have increased their spending on PC gaming since the beginning of the pandemic for two reasons: one, they spent more time inside playing games, and two, they gave up at some point on getting a card at MSRP. Or they went on to buy a pre-built PC or a gaming notebook - those are probably larger-volume markets than the DIY crowd anyway.
See, these people are not actually responsible for the current situation if you agree that miners are the predominant problem.

It may make sense on the surface to blame everybody, but it makes less sense once you consider the differences in each group's situation, the actual demand each creates, and the effect that has on pricing.

Gamers willing to pay these extreme prices are a small minority, as you seem to agree. So it stands to reason this isn't some inexhaustible market that's actually pushing the ceiling up.

Miners, on the other hand, cannot get enough GPUs. If Nvidia and AMD built enough GPUs to satisfy the entirety of the 'willing to pay $2500 for a 3080' gaming crowd, it wouldn't change anything, because there is no end to the demand from miners, and that is the real problem. Nvidia and AMD cannot make enough GPUs to satisfy them.

So in a situation where mining wasn't a thing, would gamers and supply issues cause higher prices than normal? Yes. But not to nearly the degree we're seeing now. That desperate 'willing to pay $2500 for a 3080' crowd would be such a small number of people overall that it wouldn't actually drive prices up, because there aren't enough of them. Prices would drop significantly in order to reach a broader 'willing to pay' point. It wouldn't be a great situation, but it would be a lot more tolerable than now.
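To put some toy numbers on that (every figure below is invented purely to illustrate the shape of the argument - none of it is real sales data): model a fixed monthly supply of cards, a small 'pay anything' gamer segment, a large price-sensitive gamer segment, and a miner segment whose demand below its break-even price is effectively bottomless, then see where the market clears with and without the miners.

```python
# Toy market-clearing sketch. All numbers are hypothetical and only meant to
# show why a tiny high-willingness-to-pay crowd barely moves the clearing
# price, while open-ended miner demand moves it a lot.

def demand(price, include_miners):
    """Total units wanted at a given price (made-up demand curves)."""
    desperate_gamers = 5_000 if price <= 2500 else 0             # tiny 'pay anything' crowd
    regular_gamers = max(0, int(500_000 * (1 - price / 1200)))   # price-sensitive majority
    # Hypothetical miners: below their break-even price, demand is effectively bottomless.
    miners = 10_000_000 if (include_miners and price <= 3000) else 0
    return desperate_gamers + regular_gamers + miners

def clearing_price(supply, include_miners):
    """Lowest price (in $50 steps) at which demand no longer exceeds supply."""
    for price in range(100, 5001, 50):
        if demand(price, include_miners) <= supply:
            return price
    return None  # demand exceeds supply at every price checked

SUPPLY = 300_000  # hypothetical cards available per month

print("clearing price without miners:", clearing_price(SUPPLY, include_miners=False))
print("clearing price with miners:   ", clearing_price(SUPPLY, include_miners=True))
```

With these made-up curves the price settles around $500 without miners and only clears around $3000 with them - the small desperate-gamer segment simply doesn't have the volume to hold prices up on its own.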

It is only miners that are causing things to be as extreme as they are. It makes no sense to blame a small minority of gamers also buying at these stupid prices.
 
So if we ignore data, and make up assumptions about SKU distribution, we can totes defend miners ruining the GPU economy.

Got it.

I'm done here, you guys have decided the answer and have no interest in an actual data-backed dialogue. I have no interest in debating feelings, thoughts, prayers and your confirmation bias.

Dude, you're the one bending everything toward your own conclusion and discarding what you don't like... Data pointing one way is not all of the data and information...
 
NVIDIA Resizable BAR Performance - A Big Boost For Some Linux Games - Phoronix
June 17, 2021
When it comes to Linux-native games benefiting from Resizable BAR with NVIDIA, Total War: Three Kingdoms was the biggest beneficiary we have seen so far... Very nice improvements to the performance with "ReBar" enabled and no other system changes - 15% to 41% faster depending upon the resolution and quality settings.

That's the quick look for two of the games on Linux seemingly benefiting from NVIDIA's Resizable BAR support while additional tests are forthcoming and especially once having Resizable BAR support on more of the RTX 30 graphics cards with updated video BIOS. For most benchmark-friendly games tested so far under Linux the performance was largely flat but continuing to look at more titles especially Steam Play games.
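Side note for anyone who wants to confirm ReBAR is actually active on their own Linux box before chasing those gains: with Resizable BAR the card exposes a BAR1 aperture close to the full VRAM size instead of the classic 256 MiB window, and nvidia-smi -q -d MEMORY reports that size. A rough sketch (assumes the NVIDIA driver and nvidia-smi are installed; the 1 GiB cut-off is my own heuristic, not anything official):

```python
# Rough Resizable BAR check on Linux with the NVIDIA driver.
# Heuristic: with ReBAR the BAR1 aperture is sized near the full VRAM
# (e.g. 8192+ MiB) instead of the traditional 256 MiB window.
# The 1024 MiB cut-off below is my own guess, not an official value.

import re
import subprocess

out = subprocess.run(
    ["nvidia-smi", "-q", "-d", "MEMORY"],
    capture_output=True, text=True, check=True,
).stdout

if "BAR1 Memory Usage" not in out:
    print("No BAR1 section found in nvidia-smi output")
else:
    # The "BAR1 Memory Usage" block contains a "Total : <n> MiB" line.
    bar1_block = out.split("BAR1 Memory Usage", 1)[1]
    match = re.search(r"Total\s*:\s*(\d+)\s*MiB", bar1_block)
    if match:
        bar1_mib = int(match.group(1))
        state = "likely enabled" if bar1_mib > 1024 else "likely disabled (256 MiB window)"
        print(f"BAR1 aperture: {bar1_mib} MiB -> Resizable BAR {state}")
    else:
        print("Could not parse BAR1 size")
```

lspci -vv on the GPU's PCI address shows the same thing in the BAR region sizes if you'd rather not trust the parsing.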
 
Mid-2021 GPU Rendering Performance: Arnold, Blender, KeyShot, LuxCoreRender, Octane, Radeon ProRender, Redshift & V-Ray – Techgage
June 25, 2021
It’s been six months since we’ve last taken an in-depth look at GPU rendering performance, and with NVIDIA having just released two new GPUs, we felt now was a great time to get up-to-date. With AMD’s and NVIDIA’s current-gen stack in-hand, along with a few legends of the past, we’re going to investigate rendering performance in Blender, Octane, Redshift, V-Ray, and more.
 
Ampere's heavy compute approach is quite advantageous; the 3070 is ahead of the 2080 Ti in a significant way, whether RT acceleration is off or on.

The 3060 Ti is even faster than the 6900 XT in RT-heavy workloads.
Our contractor who does product renders uses Blender, and he updated his workstations this year from Pascal to Ampere. He told us it was the biggest performance boost he has seen in the last 10 years. He also tested RDNA2 during the purchasing process, and it was so far behind in performance that even at half the price it was totally uncompetitive.
Ampere kills it in professional workloads. There is absolutely zero argument for RDNA2 this round.
And from what I heard, Lovelace will be another massive performance improvement in heavy-FLOPs applications...
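FWIW, the 'RT acceleration off vs on' gap is easy to reproduce yourself in Cycles by rendering the same scene once on the plain CUDA backend and once on OptiX (which uses the RT cores). A minimal sketch, assuming blender is on your PATH and scene.blend is just a placeholder for whatever file you want to test:

```python
# Time the same Cycles render with CUDA (no RT cores) vs OptiX (RT cores).
# "scene.blend" is a placeholder path; swap in your own file.

import subprocess
import time

SCENE = "scene.blend"  # placeholder

def render(device: str) -> float:
    """Render frame 1 headless with the given Cycles device, return seconds."""
    start = time.perf_counter()
    subprocess.run(
        [
            "blender", "-b", SCENE,   # -b: run without the UI
            "-E", "CYCLES",           # force the Cycles engine
            "-f", "1",                # render frame 1
            "--", "--cycles-device", device,
        ],
        check=True,
        stdout=subprocess.DEVNULL,
    )
    return time.perf_counter() - start

for device in ("CUDA", "OPTIX"):
    print(f"{device}: {render(device):.1f} s")
```

The wall-clock time includes scene load, so it understates the pure render-kernel gap a little, but the CUDA-vs-OptiX difference is usually obvious anyway.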
 
Uh-huh.

We're dealing with bleeding edge tech companies here. Don't be fooled by Jensen's kitchen setup, bro.


That's why you plan, and do not scramble after the shit has hit the fan.
Guess what: a borked tech for a whole generation of cards is waaayyyyyyyyyyyyyyyyy more expensive than planning and budgeting ahead. Ask Intel about 10 nm if you don't believe me, or Nvidia about 40 nm.

edit:
To not make this a one-liner battle: two years between Turing and Ampere, with Turing being an addition to the roadmap. The first Nvidia card officially over 300 watts. The first Nvidia card with a 3-slot cooler. An ultra-dense PCB with probably a ridiculous BOM (I think you even said that yourself; I don't care to look up all your one-liners though). Do you think they planned for all of this in their A-scenario? Or that they maybe had to up the specs a little more when they got wind of Navi21? And that, being on plan B, they did not have much headroom left?
Ampere was clearly planned to be on Samsung. NVidia has been whining about TSMC for, what, a decade now? So NVidia thought it could boss TSMC around by switching to Samsung. "We'll show them".

Lots of people have been cheering for NVidia's move to Samsung: "evil TSMC monopoly, we must have competition, thank you NVidia".

NVidia kept A100 on TSMC because it knew this dedicated data centre GPU is now fighting to keep NVidia relevant in the DL space. NVidia is quite aware that its customers are actively seeking better, more cost-effective compute. A100 is NVidia's top priority, which is why NVidia bit its lip and went with TSMC. Far too risky to make A100 at Samsung.

The trend in power consumption is there, 'cos "Moore's law is dead".

NVidia's plan B was pricing and extreme power consumption because AMD aimed at 3070Ti and hit 3090.
 
A100 is 250-400W depending on the SKU, with the 250W one not being that far behind the 400W one. Just a reminder for those who prefer to blindly believe the marketing bot.

Inspur Releases Liquid Cooled AI Server With NVIDIA A100 GPUs at ISC High Performance Digital 2021 (yahoo.com)

"High-efficiency liquid-cooling is among the major reasons that NF5488LA5 ranks No.1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference V1.0 Benchmark. It is also the only GPU server submitted that ran the NVIDIA A100 GPU at 500W TDP via liquid cooling technology."
 
Inspur Releases Liquid Cooled AI Server With NVIDIA A100 GPUs at ISC High Performance Digital 2021 (yahoo.com)

"High-efficiency liquid-cooling is among the major reasons that NF5488LA5 ranks No.1 in 11 of the 16 tests in the closed data center division of the 2021 MLPerf™ Inference V1.0 Benchmark. It is also the only GPU server submitted that ran the NVIDIA A100 GPU at 500W TDP via liquid cooling technology."
Okay, so it's 250-500W, with the latter being liquid-cooled. I'm sure you can push it to 1000W under LN2. Does that make A100 "a 1000W product"?
 
Liquid cooling is pretty normal in data centre applications.

PowerPoint Presentation (hpcuserforum.com)


Quoted simply because you joined the one-liner gang. Shame.

We would prefer it if you actually knew your shit before shitposting like that.

SXM is not a power consumption figure; it's a maximum design limit for the form factor, meant to accommodate different things.

For example, the A100 40GB has the same 400W limit as the A100 80GB.

And the A100 SXM4 80GB is 400W by the way, not 500W, which is still lower than the final form of the V100 SXM3, which was 350W initially and ended up at 450W.
See above. That's at least two products that are running SXM4 at 500W.
 
It's right there in the link: the "only" solution that ran 500W. The official SKU from NVIDIA is 400W on air.
And so now there's a 500W liquid cooled edition - from at least two companies.

I'm sure NVidia would prefer to have the performance of the 500W version at 400W. Yet, somehow I doubt NVidia is unhappy about the liquid-cooled products. Who does the binning for those, NVidia or the vendor?

Does the 500W version need new firmware? Or is the built-in boost capability of the processor enough to let it freewheel up to 500W?
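If anyone wants to poke at that on their own hardware: NVML reports the enforced power limit, the SKU's default limit, and the min/max range the driver will accept, so you can see how much headroom a given board advertises without any firmware change. A quick sketch using the pynvml bindings (assumes the NVIDIA driver and the nvidia-ml-py / pynvml package are installed; NVML returns milliwatts):

```python
# Print each GPU's current draw, enforced power limit, default limit, and the
# min/max limits the driver will accept (e.g. via nvidia-smi -pl).
# All NVML power values are reported in milliwatts.

import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):        # older pynvml returns bytes
            name = name.decode()
        usage = pynvml.nvmlDeviceGetPowerUsage(handle)
        enforced = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)
        default = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
        lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
        print(f"{name}: drawing {usage / 1000:.0f} W, "
              f"enforced limit {enforced / 1000:.0f} W "
              f"(default {default / 1000:.0f} W, "
              f"allowed range {lo / 1000:.0f}-{hi / 1000:.0f} W)")
finally:
    pynvml.nvmlShutdown()
```

If the reported max limit already sits well above the default, the vendor only needs cooling and a raised software limit rather than new firmware; if 500W is outside that range, something board-level changed.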
 