Speculation and Rumors: Nvidia Blackwell ...

The only thing bad about the RTX 4080 was its launch price. The performance gain over the 3080 was enough to be considered a true successor, and complaining about the performance gap to the 4090 doesn't make any sense. It's clear that the 4090 represented an entirely new class of graphics card, a super-flagship sitting far above AMD's 7900 XTX and Nvidia's previous flagships in performance and transistor count, and priced accordingly. And the 3080 was a fluke: it sat far closer to the 3090 than Nvidia likely intended because it used the same GA102 die, when Nvidia probably originally meant for it to be on a 103-class die like the 4080. Prior generations don't have an xx90 to serve as a comparison anyway. If the 5080 is the same price or cheaper than the 4080 Super and delivers a significant performance increase, then that would be great regardless of what the 5090 is.
 
Nvidia doesn't eliminate anything; they add on top of what would otherwise be the top-end card, and to do this they introduce a new product at a higher price. If not for the 2080 Ti, the 2080 would have been the fastest Turing card, the 4080 would be the top card of Lovelace, and the 5080 would probably have been the top card of Blackwell. The cards above that are an extension of the previous pricing range (for a single card, anyway) to produce an SKU for those who can afford it. Anyone thinking that a 4090 could have been sold for $700 at launch is delusional.
 
The only thing bad about the RTX 4080 was its launch price. The performance gain over the 3080 was enough to be considered a true successor
Its launch price was directly related to its naming, though. The 4080 was basically the equivalent of the 3070 in Ampere terms: a cut-down upper-midrange die. Now, I'm not saying we should have expected the 4080 to cost $500 like the 3070, but it shows how brazenly they exploited naming here to upcharge consumers, on top of the further upcharge even at the same naming tier. Had the 4080 been called a 4070 Ti for $750, I think plenty of people would have been quite alright with it, while still giving Nvidia an effective $250 (or 50%) price hike. It wouldn't have been an amazing situation, but a decent enough 'meet in the middle' between consumer and corporate wants.

If we're talking about the performance gain, we aren't paying for performance; we're paying for the hardware. Pascal gave us huge performance gains without charging us through the nose for them. Pascal is beloved, but it obviously wouldn't have been if they'd charged $900 for the GTX 1070. That's essentially what Lovelace did.

I expect the 5080 to provide a fairly modest increase in performance over the 4080, given the rumored specs, and to again try the same $1200 price point, while positioning it as the 'high end' for this generation, even though it really isn't. This technically lets them compare its value against the original 4080's launch price rather than its current price to make it seem like a better deal, even though they're still selling us what is effectively a 3070 Ti-class GPU for $1200. Then, perhaps six months from now, they can cut it back to $1000 and slot in a 5080 Ti above it.
 
I expect the 5080 to provide a fairly modest increase in performance over the 4080, given the rumored specs, and to again try the same $1200 price point, while positioning it as the 'high end' for this generation, even though it really isn't. This technically lets them compare its value against the original 4080's launch price rather than its current price to make it seem like a better deal, even though they're still selling us what is effectively a 3070 Ti-class GPU for $1200. Then, perhaps six months from now, they can cut it back to $1000 and slot in a 5080 Ti above it.
The 4080 Super launching at $1K was effectively an admission from Nvidia that the original 4080 was priced too high. I doubt they will go for $1200 again, but there's a chance they'll get greedy since the RDNA4 flagship will be weaker than the 5080.
 
MOD MODE: I'm quickly tiring of the pedantry. If you can't make your point without being obviously obtuse or combative, then don't post. The topic of "what makes a tier" is a set theory problem, not a technical one. You're welcome to talk about how you define your particular set; however, you're not allowed to browbeat someone about how their definition isn't right.

Next overtly snarky reply gets nuked. More than one in a row gets someone a several-day break from posting.
 
Now, back to normal contributor mode: I agree with @pjbliverpool in expressing sadness at this forum's tendency to continually try to hash this out. Big, long, passionate opinions on why any particular vendor (NVIDIA, AMD, Intel) named their parts the way they did, why somehow it's still wrong, and why other people who disagree are wrong.

In the end, bluntly, it doesn't fucking matter. NVIDIA could call the next "almost-but-not-quite-toppest-of-the-line" card the IXBQ#7 and that's now the name and we just get to buy it or not.

Here's where reason and logic take over: consumers who are buying this part are doing so either because of name recognition (NVIDIA, RTX, the "x080 series", whatever) or based on performance figures from reviews and benchmarks on social media. And if a customer who buys solely on the name recognition of the "x080-series" nameplate is sorely let down by their purchase, then they're at least moderately likely to remember the expensive lesson during their next GPU purchase.
 
complaining about the performance gap to the 4090 doesn't make any sense.

Agreed, the value prop for the x80 card isn’t based on how close it gets to the x90. The x90 can be $4000 and 4x faster but that doesn’t matter to someone with a $1000 budget.

The expectation that prices and relative performance are anchored to some fixed ratios every generation is unrealistic as these things can change at any time.
 
The 4080 Super launching at $1K was effectively an admission from Nvidia that the original 4080 was priced too high. I doubt they will go for $1200 again, but there's a chance they'll get greedy since the RDNA4 flagship will be weaker than the 5080.
The only lesson Nvidia has learned is that they need to provide only the bare minimum of improvement in performance per dollar for consumers and fanboy morons to defend them.

People praised the 'Super' series value for Lovelace, even though the original value was just so garbage that any slight improvement got positive attention.

Nvidia has learned that 99.99% of consumers are uninformed morons and easily exploitable. I've long been an advocate of the idea that the biggest problem with corporate greed is not their inherent and predictable greed, but the consumer weakness of giving in because they don't want to do without. AKA: consumers lack principles.
 
The only lesson Nvidia has learned is that they need to provide only the bare minimum of improvement in performance per dollar for consumers and fanboy morons to defend them.

People praised the 'Super' series value for Lovelace, even though the original value was just so garbage that any slight improvement got positive attention.

Nvidia has learned that 99.99% of consumers are uninformed morons and easily exploitable. I've long been an advocate of the idea that the biggest problem with corporate greed is not their inherent and predictable greed, but the consumer weakness of giving in because they don't want to do without. AKA: consumers lack principles.
This seems pretty pessimistic to me.

If you want the highest performing consumer video card available today, you're buying from NVIDIA. If you want the second highest performing consumer video card available today, you're still buying from NVIDIA. If you want to talk about "bang for the buck", then NVIDIA may still very well be in the mix depending on the performance target you're aiming for.

I can conceive of no good argument in saying "consumers lack principles" when it's unreasonable to ask them to stop paying for top tier performance. Who are you to say they cannot or should not spend the money they want in buying those items? Who are any of us to say NVIDIA shouldn't charge what the market will bear? This is capitalism 101. Ferrari makes the SF90 not because everyone can afford to buy them, but because some people in this world want the fastest, sexiest, Ferrari-reddest thing on the road. Why shouldn't Ferrari sell as many as they can reasonably make, or at least as many as they care to?

Do I wish video cards were less expensive? You bet. Do I have at least two dozen options to spend less money to get less performance? Yes, I absolutely do.

Consumers have a LOT of choices in this space today. Apparently, you don't agree with the choices they're making; that doesn't make your opinion any more right than theirs.
 
Kopetite has been wrong before; both the bus width and SM count are higher than expected on the 5090. The higher specs seem to work against Nvidia's effort to push AI users to the workstation cards, unless they've managed to gimp the cards for AI (although in that case, would they really need the D variants for the 5080?)
 
That would alienate a lot of 4090 owners though. Would be the first time Nvidia didn’t give previous flagship owners a reasonable upgrade path on a new architecture launch.
I would argue that Turing was rather unreasonable for quite some time.
 
In the end, bluntly, it doesn't fucking matter.
Which is exactly my point. However, I am being told that some mythical "x80 tier" means something for a product which hasn't come out yet, and thus the only thing we know about it is its supposed marketing name.

There are tiers: performance tiers, pricing tiers, power consumption tiers, cooler size tiers, etc. I.e. tiers of actually important and comparable characteristics between products of one or more vendors.

There are no "naming tiers". Product names don't mean anything. Even an indication of a generation and a relative positioning inside that generation's product lines isn't necessarily part of the marketing name. Nvidia (and other vendors) are abusing this glitch in consumer understanding to "upsell" products which aren't necessarily better than a differently named product. People are stuck in a mindset where a product is somehow different just because it's a "40 series" or an "x80 SKU", when in practice the only things which should matter to a consumer are performance, cost, and features.

And I truly hope that we can end this discussion on that and never ever mention an "x80 tier" in a TECHNICAL DISCUSSION ever again.

Edit: And here's another wrench into the whole "but names mean something to me" discussion:

 
There are no "naming tiers". Product names don't mean anything.

So why are the GPUs named as they are? In a numerically consistent fashion, going up as performance increases, and generally consistent from generation to generation. What is your explanation for that if there are "no naming tiers and product names mean nothing"? Are these consistencies pure coincidence? Why doesn't Nvidia name its GPUs after trees, or rivers, or Greek gods or something? Why use numbers at all? It's not particularly marketable, is it, if the numbers have zero meaning?

Would you have no issue whatsoever if the top tier product was named "5060" and the next fastest one was named "5090"? And perhaps the next fastest after that the 6080? This would be just as sensible and logical to you as the current naming scheme?

Assuming your answer to the above isn't actually 'yes' (in which case this discussion has nowhere left to go), then clearly the numbered naming schemes do mean something and are given for a reason. The reason being to indicate to the consumer the tier and generation of the product that they are purchasing. And those tiers and generations come with some traditionally consistent expectations. For example, an (x+1)x80 GPU will be faster than an xx80 GPU. Or more simply, an xx80 GPU will be faster than an xx70 GPU within the same generation. Or more specifically, the xx80 of a new generation will be as fast as or faster than the top-end GPU of the previous generation (true going all the way back to Maxwell/Pascal, when arguably the current number progression scheme was fully established).

No one is suggesting Nvidia can't change how this works, turn the whole thing on its head, or abandon it completely. Of course they can do whatever they wish, and of course we all just have to live with it and reset our expectations (or abandon expectations completely). But that doesn't in turn mean that people can't argue that in doing so they are, to some degree, trying to leverage the previous consistency of that numbering scheme and the associated precedents it has established to mislead customers into thinking they might be getting more than they actually are. I thought it was pretty well established by now that Nvidia tried to do precisely that with the "4080 12GB", and in fact even realised it themselves and corrected the "mistake" with the subsequent renaming and price reduction.

The link you posted to the rumoured upcoming "5080 24GB" suggests the possibility they may try for something similar again, perhaps separating the release schedule this time to make it less obvious than last time.
 
Kopetite has been wrong before; both the bus width and SM count are higher than expected on the 5090. The higher specs seem to work against Nvidia's effort to push AI users to the workstation cards, unless they've managed to gimp the cards for AI (although in that case, would they really need the D variants for the 5080?)
Where can I read more about Nvidia's effort to push AI users to workstation cards? I'm a bit skeptical about this, since they market their GeForce GPUs for LLM and image generation use.

Also, 32GB isn't a lot for AI use. I bet you won't see the 5090 being recommended on r/localllama, for the same reason that no one recommends Nvidia workstation cards either: they're just way too expensive for the amount of VRAM they offer. A system with 2x3090 will be a lot cheaper and has 50% more VRAM capacity. It can, for example, run Llama 3 70B at 4 bits per weight, whereas a single 5090 won't be able to.
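For a rough sense of the numbers behind that claim, here's a back-of-the-envelope sketch (the parameter count and quantization figures are assumptions, and real inference needs extra VRAM for the KV cache and activations on top of the weights):

```python
# Back-of-the-envelope VRAM estimate for LLM weights at a given quantization.
# Weights alone take params * bits_per_weight / 8 bytes; real inference adds
# KV-cache and activation overhead on top of this.

def weights_vram_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate GiB of VRAM needed just to hold the model weights."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1024**3

if __name__ == "__main__":
    need = weights_vram_gib(70, 4.0)  # Llama 3 70B at 4 bits per weight
    print(f"Llama 3 70B @ 4 bpw: ~{need:.1f} GiB for weights alone")  # ~32.6
    # ~32.6 GiB already overflows a single 32 GB card before any KV cache,
    # while 2x3090 (48 GB pooled) leaves comfortable headroom.
```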
 
That would alienate a lot of 4090 owners though. Would be the first time Nvidia didn’t give previous flagship owners a reasonable upgrade path on a new architecture launch.
If they get more money elsewhere, why'd they give a ****? What are their fans going to do? Buy AMD? Not gonna happen. nVidia can absolutely ignore the high-end gaming market and still be on top and maximise their income selling to higher-profit industries.

That doesn't mean nVidia will, but they now have a clear financial reason not to bother giving elite hardware to elite gamers (unless those gamers are absolutely loaded), so don't be surprised if that does indeed happen and there's no 5090 at a consumer price point. They'll just have to make do with whatever is available.
 
Yeah, what we're landing upon is the unfortunate reality of a world where competition doesn't exist in a given market. Earlier, when I said this:
If you want the highest performing consumer video card available today, you're buying from NVIDIA. If you want the second highest performing consumer video card available today, you're still buying from NVIDIA.
... one corollary statement can also be "there is no competition to NVIDIA in the highest end of the consumer video market today." Which means, to get the highest performance, you pay NVIDIA whatever they're asking. If you decide their price is wholly untenable? Well, you don't get the highest performance video cards in the market today.

Logically, at least one follow-on statement from the above could be: "Without competition, what is the business value in making ever-faster consumer video cards for the same price?" It costs NVIDIA more and more money in R&D, fabrication, packaging, testing, and marketing to create ever faster and more complex devices. The business value of these devices is in creating ever-higher margins; quite easy when the nearest competition is two or three generations away.

Until competition is able to compete on both performance and price, there's no reason for NVIDIA to reduce price. The highest-end cards will continue to get ever more expensive, until finally sales diminish sufficiently to drive the point on the cost curve hard into the right edge of the graph. For now, they're actually still climbing the cost curve, enjoying ever more profits for ever more performance because people are willing to pay. And why not? Nobody else could even try to sell you the same performance, irrespective of price.
 
If they get more money elsewhere, why'd they give a ****? What are their fans going to do? Buy AMD? Not gonna happen. nVidia can absolutely ignore the high-end gaming market and still be on top and maximise their income selling to higher-profit industries.

Yes, that's the rational option if they were operating solely from a position of profit maximization. As much as people demonize Nvidia, there may actually be people at the company who still care about gaming, including Huang. I don't know how to reconcile their current market share with the abandonment of the consumer market that's been predicted since the crypto days.

Until competition is able to compete on both performance and price, there's no reason for NVIDIA to reduce price. The highest-end cards will continue to get ever more expensive, until finally sales diminish sufficiently to drive the point on the cost curve hard into the right edge of the graph. For now, they're actually still climbing the cost curve, enjoying ever more profits for ever more performance because people are willing to pay. And why not? Nobody else could even try to sell you the same performance, irrespective of price.

Yeah, which is incentive to offer 4090 owners an upgrade path, since they're willing to pony up the cash. The 4090 sold very well at $1600+ according to Steam. I don't see Nvidia walking away from that demographic just because they don't need the extra cash. By that logic, they don't need to offer 4080 owners an upgrade path either.

They can probably ship a 5090 at $1800-$2000 and still capture a significant chunk of that market. It’s anybody’s guess where volume falls off a cliff but a $4000 5090 likely doesn’t make sense if they still “care about gaming”.
 
I feel something that always gets lost in these discussions is that people might not have the proper perspective here. The 4090 (or whatever) buyers aren't the actual minority. The real minority is enthusiast DIYers who upgrade gen-on-gen and discuss this stuff online (dare I say ad nauseam), but that group for some reason thinks it represents the market. The line about caring about "gaming" needs some perspective too, in that that small subset doesn't represent gaming.

Which is kind of the issue when discussions start about what Nvidia (or other companies as well) will do in terms of their approach to the market and their customers. It's hard to have that discussion without first realizing what the market and customers actually are.
 
The way I define "caring about gaming" from Nvidia's perspective is having executives who are still passionate about the advancement of game graphics. I'm not referring to selling a billion 4060s to the mass market. So yes, there's an element of catering to the enthusiast DIY crowd implicit in that.
 