Speculation and Rumors: Nvidia Blackwell ...

My hypothesis is that if the rumors are true, they've scaled back some of the SRAM allocated to on-die cache, choosing to use more of that area for other things, leaning on the external memory bus to do the heavy lifting, Ampere-style. Maybe GDDR7 is cheap and power efficient enough that it makes sense this generation, or maybe they're going close to the reticle limit and there simply wasn't enough room for the giant on-chip cache (for GB202 anyway), especially with how poorly SRAM scales on cutting edge processes.

512-bit bus kind of necessitates a giant chip, if for no other reason than you need the space around the edges of the die to fit the 16 memory controller channels.
AD102 at 609mm^2 on N4 doesn't look like it has space in the floorplan for 4 more memory channels, even if you omit the NVLink bits entirely.
I think a die size increase of some sort for this top end part was likely anyways, still staying on the (for the most part) same node.
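The bus-width arithmetic above can be sketched quickly. A minimal check, assuming 32-bit memory controller partitions (as in the post), 2 GB per partition, and a hypothetical 28 Gbps GDDR7 per-pin rate — none of these are confirmed specs:

```python
# Rough GDDR bus arithmetic: channel count, capacity, and peak bandwidth
# from the bus width. Per-pin rate and device capacity are assumptions.
def gddr_config(bus_width_bits: int, gbps_per_pin: float = 28.0,
                gb_per_channel: int = 2):
    channels = bus_width_bits // 32           # 32-bit controller partitions
    capacity_gb = channels * gb_per_channel   # with 2 GB devices per partition
    bandwidth_gbs = bus_width_bits * gbps_per_pin / 8  # bits/s -> bytes/s
    return channels, capacity_gb, bandwidth_gbs

print(gddr_config(512))  # 16 channels, 32 GB, 1792.0 GB/s
print(gddr_config(448))  # 14 channels, 28 GB, 1568.0 GB/s
```

The 448-bit case matches the 28 GB figure floated for a hypothetical cut-down 5080Ti later in the thread.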

And some have speculated that the 5090 is being primed as an AI GPU for non-data center applications. 4090 had success in this market, and 5090 looks like it's really catering strongly to that. If they want to sell this thing for $2500-3000, they probably can.

They could then perhaps offer a 5080Ti that's cut down with 448-bit bus/28GB for like $1500 six months from now or something.
 
AD102 at 609mm^2 on N4 doesn't look like it has space in the floorplan for 4 more memory channels, even if you omit the NVLink bits entirely
Reticle limit should be around 850 mm^2 IIRC, and this version of the process is supposedly about 10-20% denser.
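A back-of-envelope version of that point, treating the ~850 mm^2 reticle limit and the 10-20% density gain as the rumored/assumed figures they are:

```python
# How much reticle area would remain if AD102's 609 mm^2 design were
# re-laid-out on a 10-20% denser process? Reticle limit and density
# gain are rumored figures, not confirmed.
AD102_MM2 = 609
RETICLE_MM2 = 850

def headroom(density_gain: float) -> float:
    shrunk = AD102_MM2 / density_gain  # same logic in less area
    return RETICLE_MM2 - shrunk

for gain in (1.10, 1.20):
    print(f"{gain - 1:.0%} denser: ~{headroom(gain):.0f} mm^2 spare under the reticle")
```

Roughly 300-340 mm^2 of slack under those assumptions — plenty of room for four more memory channels, though SRAM in any enlarged cache would scale much worse than that average density figure suggests.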
 
And some have speculated that the 5090 is being primed as an AI GPU for non-data center applications. 4090 had success in this market, and 5090 looks like it's really catering strongly to that. If they want to sell this thing for $2500-3000, they probably can.

That would alienate a lot of 4090 owners though. Would be the first time Nvidia didn’t give previous flagship owners a reasonable upgrade path on a new architecture launch.
 
That would alienate a lot of 4090 owners though. Would be the first time Nvidia didn’t give previous flagship owners a reasonable upgrade path on a new architecture launch.
???

Neither Kepler, Maxwell, nor Pascal launched immediately with any high end parts at all, let alone a clear flagship part. In fact, leading with the flagship has been far more of a recent trend.

And it's not like an upgrade path won't be there at all in this situation.
 
I think a die size increase of some sort for this top end part was likely anyways, still staying on the (for the most part) same node.

And some have speculated that the 5090 is being primed as an AI GPU for non-data center applications. 4090 had success in this market, and 5090 looks like it's really catering strongly to that. If they want to sell this thing for $2500-3000, they probably can.

They could then perhaps offer a 5080Ti that's cut down with 448-bit bus/28GB for like $1500 six months from now or something.

This is exactly my fear. The spec gap between the 5080 and 5090 is huge here, way bigger than between the 4080 and 4090, which was already very large.

That would make the 5080 in this case more a replacement for the 4070Ti than the 4080 but I don't expect an associated price drop (at least not to 4070Ti price level).

If true, this feels like another attempt at the "4080 12GB" debacle, but without an associated 4080 16GB to inform everyone that it's not a "real" x080 tier product.

I wouldn't be surprised if they sold this for something like $999 claiming a massive price drop from the 4080 (but in actuality a big price increase from the 4070Ti) and then as you say, we see the "real" 5080 class product a few months later in the guise of a 5080 Super at a higher price than the 4080.
 
???

Neither Kepler, Maxwell, nor Pascal launched immediately with any high end parts at all, let alone a clear flagship part. In fact, leading with the flagship has been far more of a recent trend.

And it's not like an upgrade path won't be there at all in this situation.

The 1080 was 35% faster than the 980 Ti at launch. Might want to check your facts there. Also read my post again.
 
That would make the 5080 in this case more a replacement for the 4070Ti than the 4080 but I don't expect an associated price drop (at least not to 4070Ti price level).

If true, this feels like another attempt at the "4080 12GB" debacle, but without an associated 4080 16GB to inform everyone that it's not a "real" x080 tier product.

A real x80 tier product is one that’s substantially faster than the last x80 tier product. If the 5080 fails to meet that bar I would agree with you.
 
The 1080 was 35% faster than the 980 Ti at launch. Might want to check your facts there. Also read my post again.
I read your post just fine. A 1080 is not the 'reasonable upgrade path' for 980Ti owners; a 1080 is an upper midrange part. The Pascal Titan X and 1080Ti were the proper upgrade paths for 980Ti/Titan X owners. You'd have been an absolute dope to buy a 1080 thinking it was the new flagship Pascal part, only for Nvidia to throw down the Titan X and then the 1080Ti not long after, both of which were way better and actual direct replacements. Any basic look at the specs and die size would have made this super obvious.

You're also conveniently ignoring Kepler and Maxwell here, as they are more examples of Nvidia not leading with new flagship parts. You're literally wrong here while telling me to check my facts? smh

If you're just trying to say it'll be the first time Nvidia doesn't offer anything at all faster than the last generation's best at launch, then fine. But that's a different claim. And also no reason Nvidia has to stick with any such 'tradition'. The people who spent $1600+ on a 4090 are not gonna suddenly whine and get turned off Nvidia because they weren't immediately catered to. These are not sensitive buyers. lol
 
This is exactly my fear. The spec gap between the 5080 and 5090 is huge here, way bigger than between the 4080 and 4090, which was already very large.

That would make the 5080 in this case more a replacement for the 4070Ti than the 4080 but I don't expect an associated price drop (at least not to 4070Ti price level).

If true, this feels like another attempt at the "4080 12GB" debacle, but without an associated 4080 16GB to inform everyone that it's not a "real" x080 tier product.

I wouldn't be surprised if they sold this for something like $999 claiming a massive price drop from the 4080 (but in actuality a big price increase from the 4070Ti) and then as you say, we see the "real" 5080 class product a few months later in the guise of a 5080 Super at a higher price than the 4080.
Pretty much, yeah. Even the 4070Ti (aka the 4080 12GB) was still a higher tier than it should have been. Nvidia has shown their intentions, and given the absolute murder they got away with on Lovelace, I see no reason for them not to double down and go even further with Blackwell. And that could all start by drastically raising the 5090's price/perceived position.
 
And that could all start by drastically raising the 5090's price/perceived position.
The huge gap in specs between the 5080 and the 5090 plays into this positioning as well. I would assume the gap will be reflected in pricing too, so that if the 5080 is something like 1000-1200 USD, the 5090 would quite naturally be about double that.

Considering that the 4090 is still selling at an avg price of 1578 USD on the used market according to HUB a week ago it's hard to see why Nvidia wouldn't want to seriously hike the price from the 4090 MSRP of 1599 USD.

I'm also interested in how sensible Nvidia thinks it is to sell GB202 at a comfortable price tag to consumers when there's most likely going to be a good amount of demand for the chip in the datacenter market as well, at a much higher price. Maybe just eat the criticism and outrage about extreme pricing and enjoy the profits?
 
A real successor to 4080 is a card with any name which provides better performance for the same price. There is no "x80 tier".

It's both amusing and disappointing that we (the forum) are clearly diving head first into exactly the same conversation loop that occurred with Ada. On this point in particular though I'd disagree. There is in fact by definition an x080 tier. It's composed of the 1080, 2080, 3080, 4080, and soon the 5080. What constitutes a "valid/worthwhile/justified" entry into that tier depends entirely on how you define those criteria, and so will naturally vary from person to person. I don't think there's a universally correct way of defining this; it's just a matter of opinion. So I'll qualify my earlier statement to say that I personally don't think the 5080, based on my interpretation of these specs, belongs in the x080 tier given its relative performance vs the 5090 and 4090.

In other words, I define what I consider to be a valid entry into the xx80 tier based on a combination of its relative performance in the current overall product stack along with its relative performance AND price vs the previous entry in that tier (as well as an element of the earlier entries too).

So based on this rumour, IF the 5080 is slower than the 4090 while being a little over half the performance of the 5090 (and a little faster than the 4080), then I don't personally think it belongs in the xx80 tier regardless of its price. Although I won't be complaining if it's priced at or lower than the current 4070Ti/Super (not happening).
 
It's both amusing and disappointing that we (the forum) are clearly diving head first into exactly the same conversation loop that occurred with Ada. On this point in particular though I'd disagree. There is in fact by definition an x080 tier. It's composed of the 1080, 2080, 3080, 4080, and soon the 5080.
What "tier" is this then? Are they all have the same performance? Features? Prices maybe?
 
You clearly didn't, as I said nothing about "leading with flagship parts". That's all your imagination.



Ah Seanspeed has declared that 35% is not a reasonable upgrade. You heard it here first folks.

Lol.
This is not complicated. People who buy flagship GPUs (which you brought up) don't upgrade to midrange parts. They are looking for the next gen's high end/flagship parts. That's the 'reasonable upgrade option' for such folks.

Again, if you were a 980Ti owner and you bought a 1080, you were a fool and missed out massively given the amazing 1080Ti that would come out not long after. Anybody who looked at the die size of the 1080 would know that it was not some high end equivalent of what they had and that Nvidia would have something much better soon enough. But I guess if you're the type who doesn't think die size means anything, you'd be fool enough to make such a mistake.

Similarly, 4090 owners are not anxious to spend $1000 to get 10% more performance from the 5080 (if it even ever gets that). This is not how people who buy these sorts of GPUs tend to operate.

You can redefine 'reasonable upgrade path' to mean something ridiculous, but 35% is not actually a reasonable upgrade in the GPU world. I'd say people shouldn't be upgrading their GPUs for anything less than a ~50% performance improvement.
 
The huge gap in specs between the 5080 and the 5090 plays into this positioning aswell.
Absolutely. Nvidia know what they are doing here. They are basically eliminating a whole tier of GPU in order to upsell people on a smaller GPU, via naming.

But we'll have to see how Nvidia handles individual SKUs. They've got options. And they never offered a more cut-down AD102 part last time, so there's no guarantee they do the same this time. It's also entirely possible kopite is simply wrong and the 5090 will be cut down from a full GB202, which would change the speculative picture a fair bit.
 