Speculation and Rumors: Nvidia Blackwell ...

It did; the 40 series did not sell all that well compared to the 30 series, which is why they released the Super refresh.

The Steam Hardware Survey (September 2024) seems to suggest otherwise: the 4060 (4.58%) is currently #2 behind the 3060 (5.86%) at #1, with the 4060 Laptop (4.37%) at #3 and the 4060 Ti (3.66%) at #4 versus the 3060 Ti (3.57%) at #6. The 4070 (2.91%) is lower than the 3070 (3.31%), but not by much, especially considering the 4070 was released much later. The 4070 Super is at 1.47%.
I think it's probably only the 4080 (0.72%) that's significantly worse than the 3080 (1.94%), and the 4080 does seem to be selling poorly, as even the 4090 (0.93%) has a higher share.
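To make the gen-on-gen comparison easier to eyeball, here are those survey figures side by side (just a quick sketch using only the numbers quoted above):

```python
# September 2024 Steam Hardware Survey shares quoted above,
# paired by tier: (Ada card, share %, Ampere card, share %).
shares = [
    ("4060",    4.58, "3060",    5.86),
    ("4060 Ti", 3.66, "3060 Ti", 3.57),
    ("4070",    2.91, "3070",    3.31),
    ("4080",    0.72, "3080",    1.94),
]

for ada, a, ampere, b in shares:
    print(f"{ada:8s} {a:.2f}%  vs  {ampere:8s} {b:.2f}%  ({a - b:+.2f} pts)")
```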
 
Yeah, the existence of the Super refresh doesn't tell us anything about how well the 40 series has sold. It is far more likely that the lack of such a refresh for the 30 series was a result of mining demand removing any need for one back then. It is also anyone's guess how many of those 30 series sales ended up in gamers' hands to begin with.
 
It did; the 40 series did not sell all that well compared to the 30 series, which is why they released the Super refresh.
For the entirety of the high-end series (4080/4090) I was under the assumption there was usually more demand than supply. The company met or exceeded its sales targets for the enthusiast tiers, especially the 4090.

Take it with a grain of salt, but Tom's Hardware arrived at approximately 125,000-160,000 units of the 4090 sold in its first month, based on monthly Steam hardware statistics for the Steam Deck and GPUs. Their approach was pretty smart, though other data issues that month didn't help their conclusion. However, a statement from Valve confirmed some of their Steam Deck numbers (a few million units sold), which lends the estimate some credibility.
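The method, roughly: take a device with a publicly known unit count (the Steam Deck) and scale by the ratio of survey shares. A minimal sketch - the share values below are made-up placeholders, not the figures Tom's Hardware actually used:

```python
# Scale known Steam Deck sales by the ratio of Steam survey shares.
# Both survey shares below are hypothetical placeholders.
steam_deck_units  = 3_000_000   # "a few million", per Valve's statement
deck_share_pct    = 0.40        # hypothetical Steam survey share
rtx4090_share_pct = 0.018       # hypothetical Steam survey share

estimate = steam_deck_units * rtx4090_share_pct / deck_share_pct
print(f"~{estimate:,.0f} RTX 4090s")   # lands inside the 125,000-160,000 window
```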
 
For the entirety of the high-end series (4080/4090) I was under the assumption there was usually more demand than supply. The company met or exceeded its sales targets for the enthusiast tiers, especially the 4090.
I think this was the case for the 4090 but not the 4080. They did effectively cut the 4080 MSRP by $200 and it still comfortably sits at the new $1000 MSRP. The 4090 has spent much of its life above the $1600 MSRP and is now effectively an $1800-$2000 card.
 
FWIW, Kopite7kimi doesn't think that the 5090 will be much more expensive than the 4090:

I don’t think even Nvidia knows exactly how much it will be - price is the easiest thing to change on a SKU. I would be surprised if they weren’t at least floating a much higher price point to partners to gauge reaction. Interesting that he doesn’t mention the 5080, though - he was referring to the MLID leak that put the 5080 at up to $1,500. But I can’t see them launching even a $1,400 5080 and leaving the 5090 at $2k - although who knows, they tried that last gen!
 
But ultra high end monster GPUs aren't really intended for such cases. If you're going with small form factor, you should be looking at something less than "peak workstation" hardware.
Plenty of Mini-ITX cases support the length of large GPUs and have sufficient cooling, but max out at 2-2.5 slots. I can understand a '90 class card not fitting if we think of it as a Titan replacement. However, the '80 class cards didn't use to be "peak workstation" hardware.
 
I don’t think even Nvidia knows exactly how much it will be - price is the easiest thing to change on a SKU.
Nvidia knows exactly how much it can be, but the final pricing will depend on its assessment of the competitive landscape and projected supply/demand figures. It is true that final pricing is usually the last thing to be set for any SKU, but that doesn't mean the pricing can be anything; there is a range, and this range is usually communicated to AIBs so that they can decide how to build their own lineups.
 
Nvidia knows exactly how much it can be, but the final pricing will depend on its assessment of the competitive landscape and projected supply/demand figures. It is true that final pricing is usually the last thing to be set for any SKU, but that doesn't mean the pricing can be anything; there is a range, and this range is usually communicated to AIBs so that they can decide how to build their own lineups.
Yes, that’s essentially what I was saying. Nothing I said implied they could charge anything (if you read my earlier posts, I strongly disagree with that). Again, I suspect they’ve floated higher-end prices (e.g. $2,500) to AIBs to gauge their reaction and will settle on a price shortly before launch.
 
Again, I suspect they’ve floated higher-end prices (e.g. $2,500) to AIBs to gauge their reaction and will settle on a price shortly before launch
This isn't how it works, though. They must provide a more or less accurate pricing window to partners so that they can prepare their own lineups and inventories, not to mention their own board designs.

So while the ranges can be rather big for the top end (and slightly different per partner, to control leaks), they should in fact be accurate.

So providing a $2,500 figure and then launching a $1,600 product wouldn't work - unless they intend to supply such AIBs with reference cards for the whole launch window, which admittedly has happened before for the topmost SKU.

I also fail to see the benefit of such a maneuver. The only value in it seems to be leak control, and Nvidia knows the market well enough without needing AIB input.
 
This isn't how it works, though. They must provide a more or less accurate pricing window to partners so that they can prepare their own lineups and inventories, not to mention their own board designs.

So while the ranges can be rather big for the top end (and slightly different per partner, to control leaks), they should in fact be accurate.

So providing a $2,500 figure and then launching a $1,600 product wouldn't work - unless they intend to supply such AIBs with reference cards for the whole launch window, which admittedly has happened before for the topmost SKU.

I also fail to see the benefit of such a maneuver. The only value in it seems to be leak control, and Nvidia knows the market well enough without needing AIB input.
Except it is how it works - whether they listen to what the AIBs say is another matter, but they definitely share a projected price range ahead of setting a final price. You’re characterizing what I said as “hey, this is the price” followed by them changing it - no one is saying that’s what happens, including me. I clearly said they share a price range, and nowhere did I indicate it was as wide as $1,600-$2,500. It’s likely a $500 range at most; I’d guess $2k-$2.5k. I don’t know where you got the idea that I said they'd share some outlandish, unrealistic price window. I’m very confused about why you’re disagreeing with me; we seem to be saying the same thing.
 
I moved all the EVGA discussion to the old thread about it in the industry forum. Could we please take a break from talking about pricing, at least until there's new information/rumours from credible sources? This really isn't going anywhere useful or interesting with what little we know so far...
 
I think they’re still talking about Blackwell. As in B300 series.
There is no B300 series though; next year's update is B200 "Ultra":

[Attached image: eco.png - Nvidia data center GPU roadmap]


I'm also not sure how much of a "series" GB/B200 is - it's like one chip/product? Maybe with "Ultra" it will become one.
 
Apologies if this isn't the thread in which to ask, but the above image made me wonder: is there any scope for multi-die GPUs from Nvidia in the near future?

I recall that bandwidth between dies was always an issue, but the 1800GB/s in the image above suggests that issue might be solved.

Given the rumoured size of the RTX 5090 die and the cost that comes with it, it seems that binning and combining smaller dies might be worth experimenting with?
 
Apologies if this isn't the thread in which to ask, but the above image made me wonder: is there any scope for multi-die GPUs from Nvidia in the near future?

I recall that bandwidth between dies was always an issue, but the 1800GB/s in the image above suggests that issue might be solved.

Given the rumoured size of the RTX 5090 die and the cost that comes with it, it seems that binning and combining smaller dies might be worth experimenting with?

I don't think that's likely in the near future. A fast interconnect is just too expensive (PCIe 5.0 x16 is "only" 128GB/s), and it's just not viable for a market this small (say one 5090 is going to be ~$2K; two would be ~$4K plus some extra interconnect cost, so it's not hard to imagine how small that market would be). Not to mention there would be compatibility problems - many games might not work with it.
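For a sense of the gap, the back-of-envelope arithmetic behind that 128GB/s figure versus the 1800GB/s die-to-die link in the slide:

```python
# PCIe 5.0: 32 GT/s per lane with 128b/130b encoding, x16 link.
per_direction = 32 * (128 / 130) * 16 / 8   # GB/s, one direction
bidirectional = 2 * per_direction

print(f"PCIe 5.0 x16: ~{per_direction:.0f} GB/s/dir, ~{bidirectional:.0f} GB/s total")
print(f"An 1800 GB/s die-to-die link is ~{1800 / bidirectional:.0f}x that")
```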
However, I suspect that once raytracing becomes mainstream, the demands on the interconnect could be lower. In theory, you can upload the whole scene (models and textures) and BVH to multiple cards, let them calculate independently, then combine the results. This could lead to nearly linear speedups without requiring an expensive interconnect.
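As a toy sketch of that idea (everything here is illustrative, not any real API): each card keeps a full copy of the scene and BVH, traces its own slice of the image, and only finished pixels ever have to cross the slow link.

```python
WIDTH, HEIGHT, NUM_GPUS = 8, 8, 2

def trace_rows(scene, rows):
    # Stand-in for per-GPU ray tracing of one horizontal slice of the image.
    return [[f"px({x},{y})" for x in range(WIDTH)] for y in rows]

scene = {"models": ..., "textures": ..., "bvh": ...}  # replicated on every card
slices = [range(g * HEIGHT // NUM_GPUS, (g + 1) * HEIGHT // NUM_GPUS)
          for g in range(NUM_GPUS)]

frame = []
for rows in slices:          # each slice would run on its own GPU in parallel
    frame.extend(trace_rows(scene, rows))

# Only the finished pixels cross the interconnect, never the scene data:
print(f"merged {len(frame) * WIDTH} pixels from {NUM_GPUS} GPUs")
```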
 
However, I suspect that once raytracing becomes mainstream, the demands on the interconnect could be lower. In theory, you can upload the whole scene (models and textures) and BVH to multiple cards, let them calculate independently, then combine the results. This could lead to nearly linear speedups without requiring an expensive interconnect.

That’s the dream, but I don’t think we’re getting rid of the G-buffer or depth passes anytime soon.
 
Apologies if this isn't the thread in which to ask, but the above image made me wonder: is there any scope for multi-die GPUs from Nvidia in the near future?
The image is about multi-die HPC/AI GPUs, and these cost north of $20,000 per GPU, which should tell you a lot about how applicable the tech currently is to the mass market.
Maybe at some point, when there is no other option left to scale up die size, we will see these in gaming markets, but I very much doubt it will have anything to do with combining smaller dies - it is far more likely that the first such product will be a combination of two dies that would otherwise be reticle limited. Everything below that would still be a lot cheaper to make from a single die.
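The yield side of that argument can be sketched with a simple Poisson defect model (defect density and die sizes below are illustrative assumptions, not foundry figures). Blindly pairing two half-size dies buys nothing; the win only appears once you test and pair known-good dies, and the packaging still has to be paid for - which is why below the reticle limit a single die stays cheaper:

```python
import math

D0 = 0.001  # defects per mm^2 (assumed)

def die_yield(area_mm2):
    # Poisson model: probability that a die of this area has zero defects.
    return math.exp(-D0 * area_mm2)

mono = die_yield(800)   # one near-reticle-limit die (assumed size)
half = die_yield(400)   # one die of a hypothetical two-die package

print(f"800 mm^2 monolithic: {mono:.1%} good")
print(f"400 mm^2 die:        {half:.1%} good "
      f"(blind pairing: {half ** 2:.1%} - no better than monolithic)")
```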
 
That’s the dream, but I don’t think we’re getting rid of the G-buffer or depth passes anytime soon.

That's true, but they're also "once per scene" (or, in some special cases, "a few per scene"), as with raytracing the renderer no longer has to render multiple buffers from different light sources, make cube maps, etc. That'll significantly lower the bandwidth required over the interconnect.
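Back-of-envelope numbers for that (resolution, frame rate, and per-pixel sizes are assumptions for illustration):

```python
pixels = 3840 * 2160   # 4K
fps = 60
gbuffer_bytes = 32     # assumed packed G-buffer footprint per pixel
color_bytes = 8        # assumed HDR color target per pixel

gbuf = pixels * gbuffer_bytes * fps / 1e9
final = pixels * color_bytes * fps / 1e9
print(f"G-buffer exchange: ~{gbuf:.0f} GB/s; final color only: ~{final:.0f} GB/s")
```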
 