Speculation and Rumors: Nvidia Blackwell ...

Maybe there are quite a lot of rejected GB203 dies that can be cut down to a 320-bit bus w/ 112-116 SMs? That'd be good as a 5080 IMV. The 84 SM full die GB203 will be quite small (probably closer to 300mm² than 400mm²), and with a good enough cooler & board that's not too costly and run at 250W, it would be really good as a 5070 Ti 16GB targeting 1440p.
GB203 is supposedly 256 bit like AD103.
 
GB202 in 5090 will likely already be cut down to account for chips with defects so there wouldn't be any need to make a "salvage" SKU, at least not initially - if ever, as AD102 has shown Nvidia can produce one gaming SKU on the biggest chip just fine.
 
GB202 in 5090 will likely already be cut down to account for chips with defects so there wouldn't be any need to make a "salvage" SKU, at least not initially - if ever, as AD102 has shown Nvidia can produce one gaming SKU on the biggest chip just fine.

And they've also shown they're happy to make multiple gaming SKUs from the biggest chip too. It doesn't really tell us anything.

Either way, Dangerman clearly thinks GB203 has 84 SMs and the first sentence was just a typo.
 
Either way, Dangerman clearly thinks GB203 has 84 SMs and the first sentence was just a typo.
GB202 is 512 bit though so cutting that down to 320 would be quite a salvage operation.
Either way, I doubt that we'll see a lot of salvage parts on Blackwell, just like we didn't really see many on Lovelace. The process should be the same, so it would be even more mature than with Lovelace, while the chips will presumably feature a similar distribution of unit counts - which doesn't leave much room for salvage parts. GB202 in the 5090 is the only part which could get its memory bus cut down by 1 or 2 MCs (so 480 or 448 bit). Depending on how RDNA4/5 does, it's possible that they'll introduce a 384-bit part later as a 5080 Ti, for example, but this won't be part of the initial lineup.
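As a quick sanity check on those bus widths (a sketch assuming the usual 32-bit-per-controller GDDR arrangement; the controller width is an assumption, not a confirmed GB202 spec):

```python
# Each GDDR memory controller (MC) is typically 32 bits wide (assumption:
# standard GDDR6/GDDR6X channel width). A 512-bit GB202 would then carry
# 16 controllers, and salvaging by disabling MCs narrows the bus in
# 32-bit steps.
MC_WIDTH = 32          # bits per memory controller (assumed)
FULL_BUS = 512         # rumored full GB202 bus width

controllers = FULL_BUS // MC_WIDTH            # 16 MCs on the full die
cut_by_one = FULL_BUS - 1 * MC_WIDTH          # 480-bit salvage
cut_by_two = FULL_BUS - 2 * MC_WIDTH          # 448-bit salvage
print(controllers, cut_by_one, cut_by_two)    # 16 480 448

# Cutting all the way down to 320-bit would mean disabling 6 of the
# 16 controllers - which is why that counts as "quite a salvage operation".
disabled_for_320 = (FULL_BUS - 320) // MC_WIDTH
print(disabled_for_320)                       # 6
```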
 
And they've also shown they're happy to make multiple gaming SKUs from the biggest chip too. It doesn't really tell us anything.

Either way, Dangerman clearly thinks GB203 has 84 SMs and the first sentence was just a typo.
Yeah that's what I meant.
 
Either way, I doubt that we'll see a lot of salvage parts on Blackwell, just like we didn't really see many on Lovelace. The process should be the same, so it would be even more mature than with Lovelace, while the chips will presumably feature a similar distribution of unit counts - which doesn't leave much room for salvage parts. GB202 in the 5090 is the only part which could get its memory bus cut down by 1 or 2 MCs (so 480 or 448 bit). Depending on how RDNA4/5 does, it's possible that they'll introduce a 384-bit part later as a 5080 Ti, for example, but this won't be part of the initial lineup.

It also depends on how much distance they can put between Blackwell and Ada without using the big chip. They may be forced to use salvage GB202 to offer any sort of reasonable increase. Or they may be going for a power efficient side grade like Maxwell. We won’t know until we know.
 
GB202 is 512 bit though so cutting that down to 320 would be quite a salvage operation.
320-bit would be extreme, but if this is indeed some 740mm²+ monster, it wouldn't be totally out of the question, either. I think maybe 384-bit might make more sense if they wanted to really do a top end 5080 rather than an upper midrange 5080.

That said, I think Nvidia might well keep enjoying the advantage of ensuring a massive difference between 5080 and 5090 by having the 5080 on a far inferior die, while still charging $1000+ for it anyways. Their whole gambit with 40 series seems to have been getting people accustomed to paying WAY higher prices, and I think it worked.

It'll be even more effective if they can position the 5090 as some kind of almost prosumer-type GPU that's great for AI workloads and whatnot, positioning the 5080 as more of the 'pure gaming flagship'.

This is all cynical, but they've definitely got room to exploit consumers further if they want.
 
I'm somewhat of the opinion that video card names matter as much as my job title. You can call me the Junior Subordinate Toilet Plunger Associate and so long as the work and paychecks meet or exceed my expectations, then I'm fine with it. Along those same lines, a video card manufacturer could call their next product the 262,144KZQMFT-XR and so long as the performance and pricetag meet or exceed my expectations, then I'm fine with it.

At the very end of the day, there will always be that segment of the population who simply buys the next thing with the next-higher number from the same company they've always bought from, due to name recognition, brand loyalty, ignorance, or laziness. Those same people are analogous to those who vote for the same politician solely due to name recognition, party affiliation, ignorance, or laziness. There's no helping those people, no matter how much better advertising gets or political punditry becomes.

For those customers who actually pay attention (and I suspect most on B3D would fit this description), they will look for price, performance, features, compatibility, and stability when considering their next purchase. I've switched both CPU and GPU vendors multiple times in the past and wouldn't hesitate to do so again. Right now, AMD has my CPU business and NVIDIA has my GPU business, purely because the products I've purchased met my own internal criteria for the things I felt were important at the time of my purchase. I would like to think (and I have no way to measure!!) most PC enthusiasts would do the same, regardless of whatever dumb name some marketing dweeb attaches to the box.

So, I'm still not convinced the name really matters in the end. The vendors have firmly established their tiering labeling system, yet the only people who those labels might be meaningful to are mostly those who blindly buy on name recognition. Reviewers and "content creators" cover the topic because it drives clicks and views, and again there's no use tilting against that windmill.

I'm not really sure why it needs to drive so much conversation, TBH.
 
I'm not really sure why it needs to drive so much conversation, TBH.
Because that bleeds into the purely tech/math conversation as someone's unrealistic expectations for a product, based on nothing else but the fact that it has a couple of letters/numbers in its name which were also present in a completely different, older product made in better, greener times by other people (most likely).

So instead of looking at and judging what matters, everyone goes "oohhh, this isn't what we expected from the name it has," and this has become annoying as hell to watch and read.
 
For those customers who actually pay attention (and I suspect most on B3D would fit this description), they will look for price, performance, features, compatibility, and stability when considering their next purchase.

So, I'm still not convinced the name really matters in the end. The vendors have firmly established their tiering labeling system, yet the only people who those labels might be meaningful to are mostly those who blindly buy on name recognition. Reviewers and "content creators" cover the topic because it drives clicks and views, and again there's no use tilting against that windmill.

Yes, but we forum dwellers aren't even a drop in the ocean of customers.

The tiering system is well established, like you said, by the manufacturers themselves, and people - be it out of blindness, ignorance, brand loyalty or whatever - have learned to be guided by it when making decisions. Regulatory bodies generally don't like it if you go and do something radically different with established tiers that could mislead said customers.
 
320-bit would be extreme, but if this is indeed some 740mm²+ monster, it wouldn't be totally out of the question, either. I think maybe 384-bit might make more sense if they wanted to really do a top end 5080 rather than an upper midrange 5080.

That said, I think Nvidia might well keep enjoying the advantage of ensuring a massive difference between 5080 and 5090 by having the 5080 on a far inferior die, while still charging $1000+ for it anyways. Their whole gambit with 40 series seems to have been getting people accustomed to paying WAY higher prices, and I think it worked.

It'll be even more effective if they can position the 5090 as some kind of almost prosumer-type GPU that's great for AI workloads and whatnot, positioning the 5080 as more of the 'pure gaming flagship'.

This is all cynical, but they've definitely got room to exploit consumers further if they want.

Yields on 4nm are probably very good by now and would get even better by the time it launches, so a heavily cut down part would probably be wasteful in volume. Given the sales potential, you'd think they could do a standalone 384 bit die as well.
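The die-size/yield tradeoff behind that reasoning can be sketched with the classic Poisson defect-yield model. This is illustrative only: the defect density below is a made-up assumption for a mature node, not a real TSMC figure.

```python
import math

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of fully working dies: Y = exp(-D * A).

    Classic Poisson yield model: D is defect density per cm^2,
    A is die area in cm^2. Ignores edge losses and parametric fails.
    """
    area_cm2 = area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

# Hypothetical mature-node defect density (assumption for illustration).
D = 0.07  # defects per cm^2

# Compare a small die, a GB203-class die, and a ~740mm^2 monster.
for area in (150, 300, 740):
    print(f"{area} mm^2: {poisson_yield(area, D):.1%} fully working dies")
```

Even under this toy model, a mature process leaves a big die with a clear majority of defect-free candidates, which supports the point that a heavily cut down salvage SKU would be wasteful in volume.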

To add to your point, it wasn't so long ago that the xx60 or x600 series of GPUs used to offer 40-50% of the specs of the highest end part (though the highest end was also 500-600mm² at best). The high end has been getting even bigger while the mid to low end has been getting even more nerfed; the xx70 series barely reaches that level these days. While historically the mid-range parts have always offered the highest performance per dollar, that no longer seems to be the case.
 
I don't share the perception that people are arguing about names of products specifically, more so price points and the positioning of the product tiers and the price/performance gaps between them.

If you have one generation where at launch the second tier product is less than half the price and delivers 90% the performance of the highest tier product, and then follow this up with the second tier product being 80% the performance of the highest tier product and 75% of the price, it's a far worse value proposition. The 'halo part' premium tax has been almost completely nullified by making the lower tier parts have the same performance/$. This represents a huge shift from the norm and is not a direction I am personally thrilled with. I'm not upset with the names these products have, I merely use the names for short form to universally refer to the tier/class of product. The pain point is comparing relative performance/$ of the 2nd/3rd/4th tier products to the halo 1st tier part - they are no longer the value champions they used to be and this represents a major shift in the market.
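To put numbers on that hypothetical (the prices below are invented for illustration; only the percentage ratios come from the post itself):

```python
def perf_per_dollar(perf: float, price: float) -> float:
    """Relative performance divided by price, for comparing tiers."""
    return perf / price

HALO_PRICE = 1600  # hypothetical first-tier price

# Old-style lineup: second tier at 90% of halo performance for
# less than half the price.
old_ratio = perf_per_dollar(0.90, 700) / perf_per_dollar(1.00, HALO_PRICE)

# New-style lineup: second tier at 80% of halo performance for
# 75% of the price.
new_ratio = perf_per_dollar(0.80, 0.75 * HALO_PRICE) / perf_per_dollar(1.00, HALO_PRICE)

print(f"old second tier vs halo: {old_ratio:.2f}x perf/$")  # ~2.06x
print(f"new second tier vs halo: {new_ratio:.2f}x perf/$")  # ~1.07x
```

Under the old structure, the second-tier buyer got roughly double the perf/$ of the halo buyer; under the new one, the halo premium has shrunk to single-digit percent, which is exactly the "nullified halo tax" shift described above.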
 
I don't share the perception that people are arguing about names of products specifically, more so price points and the positioning of the product tiers and the price/performance gaps between them.
So names basically. Because price points ARE positioning inside product "tiers".
Then there's this fun story about 4080 12GB which was forced to become 4070Ti and then people went "eh, it's too expensive for a 4070".
 
I'm somewhat of the opinion that video card names matter as much as my job title. You can call me the Junior Subordinate Toilet Plunger Associate and so long as the work and paychecks meet or exceed my expectations, then I'm fine with it. Along those same lines, a video card manufacturer could call their next product the 262,144KZQMFT-XR and so long as the performance and pricetag meet or exceed my expectations, then I'm fine with it.

At the very end of the day, there will always be that segment of the population who simply buys the next thing with the next-higher number from the same company they've always bought from, due to name recognition, brand loyalty, ignorance, or laziness. Those same people are analogous to those who vote for the same politician solely due to name recognition, party affiliation, ignorance, or laziness. There's no helping those people, no matter how much better advertising gets or political punditry becomes.

For those customers who actually pay attention (and I suspect most on B3D would fit this description), they will look for price, performance, features, compatibility, and stability when considering their next purchase. I've switched both CPU and GPU vendors multiple times in the past and wouldn't hesitate to do so again. Right now, AMD has my CPU business and NVIDIA has my GPU business, purely because the products I've purchased met my own internal criteria for the things I felt were important at the time of my purchase. I would like to think (and I have no way to measure!!) most PC enthusiasts would do the same, regardless of whatever dumb name some marketing dweeb attaches to the box.

So, I'm still not convinced the name really matters in the end. The vendors have firmly established their tiering labeling system, yet the only people who those labels might be meaningful to are mostly those who blindly buy on name recognition. Reviewers and "content creators" cover the topic because it drives clicks and views, and again there's no use tilting against that windmill.

I'm not really sure why it needs to drive so much conversation, TBH.
My argument wasn't that the name was so strictly important, because there's still a TIER of product underlying everything that's discernible even without names. For instance, an RTX 4060 is a low end part, regardless of the name or price. It's a sub-150mm² GPU with a 128-bit bus. We know this is a low end GPU. If Nvidia called it a Titan and charged $3000 for it, it would still be a low end part.

We need to pay attention to this kind of thing, or else Nvidia will exploit (and already has exploited) naming to get away with seriously greedy practices. So it affects you whether you personally care about the naming or not. Cuz if consumers get duped, then Nvidia gets away with it, and you are then forced to pay higher prices for a given tier of GPU.
 
you are then forced to pay higher prices for a given tier of GPU.

Technically we aren’t forced to pay anything. That’s kinda why Nvidia gets away with it, because people willingly bend over.

I don’t fully agree with the notion that it’s any of our business to say how much a chip should sell for. We aren’t buying chip acreage. We’re buying features and performance. If Nvidia can deliver in 150mm^2 what a competitor can deliver in 300mm^2 that doesn’t mean they should sell their chip for half the price. It’s a contrived example but it’s easier if we just focus on things consumers actually care about. Die size isn’t one of those things.
 
Technically we aren’t forced to pay anything. That’s kinda why Nvidia gets away with it, because people willingly bend over.

I don’t fully agree with the notion that it’s any of our business to say how much a chip should sell for. We aren’t buying chip acreage. We’re buying features and performance. If Nvidia can deliver in 150mm^2 what a competitor can deliver in 300mm^2 that doesn’t mean they should sell their chip for half the price. It’s a contrived example but it’s easier if we just focus on things consumers actually care about. Die size isn’t one of those things.
I wasn't putting any specific price tags on anything, but it's important to discern when we're clearly getting less for more. Or rather, when they could easily offer more for less. This is all about value. It's just making it clear that we're being given purposefully worse value, more out of insatiable greed than any necessary costs. We need to be able to explain why we think certain parts are overpriced based on something more than "Well, I just wish it was cheaper!". And using direct naming doesn't work either, cuz as Nvidia is proving, that's extremely exploitable. Pointing out to people that these are lower tier GPUs than their naming suggests, through die size and all that, is a good way of making the point. It's not that consumers care about die size, it's that die sizes are the proof that these parts are overpriced.

Lovelace provided basically a Pascal-esque leap in performance/efficiency, and it could have easily been considered one of the best generations ever, just like Pascal, had Nvidia priced their stuff more reasonably. Even the competition argument doesn't work great, because we know AMD is going to follow Nvidia's lead in pricing nowadays. And I do agree that consumers had the power to stop all this, but decided to just give in anyways. It's why I'm quite frustrated at having seen what is probably the complete ruin of the GPU market going forward. We will likely never see 'great value' GPUs ever again, at least not at launch MSRP or anything.

We used to actually get the great leaps in performance passed along to us without being asked to pay a ton more for them. A 150mm² GPU does not justify a much higher price tag just cuz it performs a lot better than a theoretical 150mm² from a previous gen. It's like y'all have been completely reconditioned to forget that big leaps in performance per dollar were expected every new generation, and now y'all are arguing that it's okay to raise prices a ton just cuz the performance is a lot better! And to be clear, I think some level of price increase is justified. But Nvidia essentially more than doubled prices in just one generation. It's indefensible.
 
A sub 150mm2 chip on a cutting edge process isn't "low end" either. But I've already said this many times.
There was nothing especially 'cutting edge' about a 5nm-family process in late 2022. Apple had been using 5nm in released products for two years by that time. GPUs should be on some sort of modern node; that's not anything that should make us recalculate the situation in any super significant way.

150mm² is low end. It just is. There is no process so advanced that it changes this. Also just a 128-bit bus. That's again, a low end spec that you use on low end parts.

Navi 10 wasn't a high end part just cuz it used TSMC 7nm, either.

There certainly cant be any excuses like this for Blackwell though, right?
 
This is all about value.

I agree but don’t think die size is a relevant or useful metric of value. There are too many other factors to consider and we don’t have good info on what a given die costs anyway.

150mm² GPU does not justify a much higher price tag just cuz it performs a lot better than a theoretical 150mm² from a previous gen.

No it doesn’t and I haven’t said that it does. The only metric that makes any sense to me is perf/$ with some accounting for feature set and user experience. The 4060 for example is terrible value vs the 3060 when using this metric.
 