Speculation and Rumors: Nvidia Blackwell ...

  • Thread starter Deleted member 2197
The other obvious move is to just shift all SKUs up a tier. 5080 from chopped GB202, 5070 from GB203 etc.
I'd say the opposite is more likely tbh, with AMD missing from the fight this time and Nvidia likely looking into providing minor perf/price gains in favor of bigger margins.
If GB203 is actually capable of being on par with a 4090 then it's possible that we'll see a launch similar to that of Maxwell, with GB202 products coming in later in 2025 when AMD shows up.
 
I'd say the opposite is more likely tbh, with AMD missing from the fight this time and Nvidia likely looking into providing minor perf/price gains in favor of bigger margins.
If GB203 is actually capable of being on par with a 4090 then it's possible that we'll see a launch similar to that of Maxwell, with GB202 products coming in later in 2025 when AMD shows up.

It would need to come with a price cut just like Maxwell. That also assumes GB203 can actually challenge the 4090. Lots of ifs.
 
980 launched at $550 while being slightly faster than the 780 Ti, which was $700, so it kinda would be?
The cut down 970 matched the 780Ti for just $330. The 980 was considered hugely overpriced, especially since it wasn't a high end part like the 780 was. Anybody paying attention to what the 980 actually was at the Maxwell launch knew that it wasn't a true replacement for the 780, and that we'd get a much higher performing high end part soon.

Anyways, you also know quite well that when people say 'price cuts', they mean cuts for tier-equivalent parts.
 
So? The top card of the lineup wasn't the 780. You could even argue that it was the Titan Z with its $3000 MSRP, but for the sake of argument let's assume that Titans weren't GeForces.

You brought up the Maxwell analogy. I’m simply pointing out it’s not a good analogy.
 
Mod Mode: Nobody will be opining on any other poster's intelligence in this forum. Next person who does it in this particular thread takes a day or two off.
 
Now, back to my irregularly scheduled posting...

Despite the varying opinions on whether there's technological merit to naming cards in a "tiering" way (eg {n}030 vs {n}050 vs {n}060 et al), the 100% valid point remains that all vendors producing video cards today have decided this method of tier-labeling exists as a relative position statement within and between card generations. As such, card tiers do exist and are expected to be compared between generations.

Whether those are meaningful comparisons is left to reasonable opinion, which could (equally reasonably) state those tier comparos end up being mostly garbage thanks to a lot of factors. Doesn't negate the fact that the tiers do exist, and customers (and reviewers) pay attention -- whether the attention is truly deserved or even meaningful, or not.
 
You brought up the Maxwell analogy. I’m simply pointing out it’s not a good analogy.
The analogy is them not launching the highest chip of the lineup until the next year. If you don't like Maxwell for whatever reason then see Pascal - same thing.

all vendors who are producing video cards today have decided this method of tier-labeling absolutely exists as a relative position statement within and between card generations
Could you please point me to where any vendor describes this decision of theirs?

As such, card tiers do exist and are expected to be compared between generations.
Performance tiers (as in the fps shown by products in benchmarks) and pricing tiers (what we call entry level, mid range, high end, etc) exist. Nothing else exists or should be used in any gen-on-gen comparison, ever.

Doesn't negate the fact that the tiers do exist, and customers (and reviewers) pay attention -- whether the attention is truly deserved or even meaningful, or not.
Educating customers is what reviewers should be doing, instead of pouring water on something which "exists" only in the imagination of said customers to get clicks and views.
 
Could you please point me to where any vendor describes this decision of theirs?
The vendors don't talk about the decisions (and there's no rational expectation for them to do so), yet the labeling and related tiering inarguably exists.
Performance tiers (as in the fps shown by products in benchmarks) and pricing tiers (what we call entry level, mid range, high end, etc) exist. Nothing else exists or should be used in any gen-on-gen comparison, ever.
That's your opinion and you're welcome to it. Don't mistake your opinion for universal truth, regardless of whether I or anyone else agrees with you.
Educating customers is what reviewers should be doing, instead of pouring water on something which "exists" only in the imagination of said customers to get clicks and views.
If you would like people to lean towards your opinion, start by approaching it in a more fair and balanced way. Also: Good luck telling people not to chase clicks and views, that form of brain rot seemingly pervades every facet of online life at this point... Hint: you're not going to win a fight against those things, at least not until society as a whole figures it out.
 
The analogy is them not launching the highest chip of the lineup until the next year. If you don't like Maxwell for whatever reason then see Pascal - same thing.

Yes it would be similar in that regard.

Performance tiers (as in the fps shown by products in benchmarks) and pricing tiers (what we call entry level, mid range, high end, etc) exist. Nothing else exists or should be used in any gen-on-gen comparison, ever.

Not sure that matters. Never in the history of GPUs have we considered a cheaper new generation mid range GPU with the performance of an old high end GPU to be a “price cut”. Improving perf/$ has become an inherent and expected benefit of technological progress. You can argue that it shouldn’t be but that doesn’t change the fact that it is.
 
Never in the history of GPUs have we considered a cheaper new generation mid range GPU with the performance of an old high end GPU to be a “price cut”.
Off the top of my head there were re-badges like 470->570 which were essentially nothing more than a price cut. I'm sure there are other examples of that, with many reviewers considering such launches as "price cuts". A fresh example would be a 4080->4080S. Why is it any different when such launch comes from "another generation"?
 
Improving perf/$ has become an inherent and expected benefit of technological progress.
Something 99.99% of people would agree with for incredibly obvious reasons. If we didn't expect this, then there'd be very little to be excited by with a new generation of processor technology. I mean, it's literally what has built the entire processor industry for many decades, it's what allowed the personal computer to become a thing in the first place, and the primary reason Moore's Law was held up as so insanely important to everybody. If we can't deliver improved performance for better prices as an inherent part of a new generation, then we're not really moving forward.

DegustatoR seems entirely happy to argue that so long as we're getting any piddly level of improvement in performance per dollar every two years, we should shut up and be happy, all while Nvidia's margins are through the roof and they're making ludicrous amounts of money, proving that Nvidia doesn't need to do what they're doing by pushing products that would otherwise be lower tier up a naming tier or two; it's just out-and-out, undeniable greed. And they're doing that on top of pushing prices per naming tier up as well! It's seriously egregious. There's absolutely more room to give consumers better performance per dollar, and it's not entitlement to ask for that.

I guess we can hope that Nvidia will give us a bit of a bone after gouging us with Lovelace, kinda like they did with Turing->Ampere. Ya know, given they're not using a more expensive node, maybe give us all the new performance gains per mm² without charging any extra for it. And it's sad that we have to be very pessimistic about them doing this...
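The "inherent perf/$ gain" expectation discussed above boils down to a simple ratio. As a sketch (the performance indices and prices below are made-up illustration numbers, not figures from this thread):

```python
def perf_per_dollar(relative_perf: float, price: float) -> float:
    """Performance per dollar, using any consistent performance index."""
    return relative_perf / price

def gen_on_gen_gain(old_perf: float, old_price: float,
                    new_perf: float, new_price: float) -> float:
    """Fractional perf/$ improvement of a new card over an old one."""
    return (perf_per_dollar(new_perf, new_price)
            / perf_per_dollar(old_perf, old_price)) - 1.0

# Hypothetical example: a new part 25% faster at the same $700 price
# delivers a 25% perf/$ gain; the same part priced at $800 delivers
# only ~9.4%, even though raw performance improved identically.
same_price = gen_on_gen_gain(100, 700, 125, 700)    # 0.25
higher_price = gen_on_gen_gain(100, 700, 125, 800)  # ~0.094
```

This is why pushing prices up per tier eats most of the value improvement a new node or architecture would otherwise provide.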
 
Off the top of my head there were re-badges like 470->570 which were essentially nothing more than a price cut. I'm sure there are other examples of that, with many reviewers considering such launches as "price cuts". A fresh example would be a 4080->4080S. Why is it any different when such launch comes from "another generation"?
470->570 was when AMD were doing yearly releases. The 500 series was entirely just a rebrand series, just cuz it was their practice to have a new series every year kinda like how Intel does things. They of course could have easily kept the same 400 series naming for an extra year, just with a discount and it would have been the same thing. Nobody would have a problem with that, especially cuz the 400 series was already pretty great value to begin with. But it absolutely was a pain point for AMD cuz they didn't have any replacement for the 500 series parts til 2019, three years after Polaris 10's launch in 2016, and AMD were absolutely criticized for this, though they were propped up heavily by the cryptomining boom in 2017/2018, letting them ignore most of that criticism.

And then in 2019 we got RDNA1/5000 series, which was AMD basically jacking a Polaris 10-tier part up to $400+. And I was absolutely very critical of that at the time, just like I am with Nvidia here. AMD got away with that, cuz Nvidia was on Turing at the time, which everybody was dissatisfied with in terms of value proposition and improvements in performance per dollar from Pascal.

So I don't think there's any real contradiction here. Similarly, the Super series refresh is just a yearly, basically 'interim' release. Most people understand that, because it's not actually a new generation. And that's ignoring that the base Lovelace parts, especially the 4080, were starting from such a terrible value proposition, and only giving us some scraps to make up for it.
 
Similarly, the Super series refresh is just a yearly, basically 'interim' release. Most people understand that, because it's not actually a new generation.
What if Blackwell comes with essentially the same perf/price gains as those of 40 Super? How would that be any different just because it's "a new generation"?
 
What if Blackwell comes with essentially the same perf/price gains as those of 40 Super? How would that be any different just because it's "a new generation"?
I judge based on generation versus generation, largely. Which in today's landscape (and for Nvidia for quite a while now) means a two-year cycle. Launch vs launch. Interim/price-cut parts are nice to have, but don't determine the basis of comparison for me. And I'm even more strongly against judging against later discounted parts that are priced to move, as I don't think that makes for a fair comparison for new parts, which of course will have a hard time demonstrating bigger value improvements against such heavily discounted parts. Not a fan when people do that. I do try and be reasonable. I have no agenda to make anybody look bad or anything.
 
Sure, but I just don't see why Nvidia would push for lower margins in the current market. And moving the 5080 to the biggest die is just that.
Maybe there's quite a lot of rejected GB202 dies that can be cut down to a 320-bit bus with 112-116 SMs? That'd be good as a 5080 IMV. An 84 SM full-die GB203 will be quite small (probably closer to 300 mm² than 400 mm²) and, with a good enough cooler and board that's not too costly, run at 250W it would be really good as a 5070 Ti 16GB targeting 1440p.
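The salvage-die guess above is easy to express as fractions of a full die. As a sketch only: the full-die figures used here (192 SMs and a 512-bit bus for a full GB202) are rumored/assumed numbers, not confirmed specs.

```python
def cut_fraction(enabled: int, full: int) -> float:
    """Fraction of a full die's units left enabled in a salvage SKU."""
    return enabled / full

# Assumed (rumored, unconfirmed) full GB202 configuration:
FULL_GB202_SM = 192
FULL_GB202_BUS_BITS = 512

# The post's guessed salvage 5080 config: 116 SMs, 320-bit bus.
sm_fraction = cut_fraction(116, FULL_GB202_SM)         # ~0.60 of the SMs
bus_fraction = cut_fraction(320, FULL_GB202_BUS_BITS)  # 0.625 of the bus
```

If those assumptions held, such a part would use only about 60% of the die's shaders, i.e. a fairly deep cut that could absorb heavily defective dies.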
 