Speculation and Rumors: Nvidia Blackwell ...

In a scenario where a person has no other option, you're correct. I have other capable GPUs as backups, however. I will not be without the ability to play all my games comfortably. I'd rather maximize my return on the previous-gen product.
What other GPUs you got?
 
4070ti and 2080ti


I've got my heart set on the best for my main PC. No interest in the 5080 at all.
You can game just fine on a 4070Ti. It would be a fine stopgap. The questions are:
1) How well will the 4090 retain its value once the 5090 arrives? Normally I'd say not very well but the market is strange right now.
2) How much and how available will the 5090 be?

Unfortunately we can't answer those questions for certain in advance.
 
In this scenario it should be easy for the competition to undercut their prices.
It is easy, but that doesn't mean they're going to do it. Because, ya know, AMD can be greedy too. Plus free market competition doesn't work when consumers stop caring about who is actually giving them the better value.

"Oh but consumers are deciding that Nvidia offers better value". We know this isn't true, and I really dont want to have to have the discussion where I have to demonstrate that consumers aren't all highly informed, rational people who are good at thinking for themselves like the alternative argument would demand be the case.

Doubtful as the density is almost 2X worse than what Nvidia and AMD get on the same N5 process. It's either dark silicon - which is there for some reason - or Intel is lying about complexity - take your pick.
Ah, so you think Intel is deliberately running their die costs up for no reason because...? Seriously, what on earth is this argument supposed to be about? You're literally proving my point: Intel's architectural performance efficiency in terms of die space is absolutely dreadful. It's in their best interest to produce the smallest die possible for a given performance target, so if they didn't have to do this, they wouldn't. But they clearly can't deliver the same level of performance per mm² as Nvidia or anywhere close, and that's the only thing that really matters here, unless you've got some genius explanation for why Intel would purposefully make their dies much more expensive with no performance benefit. :/
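For what it's worth, here's a minimal sketch of the density comparison being argued about, using rough public figures for the B580's BMG-G21 die and Nvidia's AD106 (both N5-class nodes). The transistor counts and die areas are approximations, not official specs:

```python
# Rough check of the "density is almost 2X worse" claim. Transistor counts
# and die areas are approximate public figures, not official specs.

dies = {
    # name: (transistors in billions, die area in mm^2)
    "BMG-G21 (Arc B580, TSMC N5)":  (19.6, 272),
    "AD106 (RTX 4060 Ti, TSMC 4N)": (22.9, 188),
}

densities = {}
for name, (xtors_billion, area_mm2) in dies.items():
    densities[name] = xtors_billion * 1000 / area_mm2  # MTr per mm^2
    print(f"{name}: {densities[name]:.0f} MTr/mm^2")

ratio = densities["AD106 (RTX 4060 Ti, TSMC 4N)"] / densities["BMG-G21 (Arc B580, TSMC N5)"]
print(f"Density ratio (Nvidia / Intel): {ratio:.2f}x")
```

With these figures the gap works out to roughly 1.7x, in the same ballpark as the claim above.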
 
...Because they needed a 192-bit bus to compete.
This does not remotely come close to explaining the overall performance-per-mm² deficiencies of Intel's GPUs. And you know that perfectly well.

It also, in fact, further proves my point about Intel's architectural performance deficiencies, hilariously.

In no world does what you're saying do anything except help what I'm saying.
 
It explains them quite well. Don't know why you think that it doesn't.
How do YOU not understand that what you're saying only proves my point?

If Intel needs a bigger memory bus to compete, as you claim, then you're literally saying their base architectural performance per mm² isn't good, or else they could and would shrink the die without needing the larger memory bus. Obviously.

Not that two extra 32-bit memory channels explain the entire die-space discrepancy anyway. They really are that bad in this area.
 
If Intel needs a bigger memory bus to compete, as you claim, then you're literally saying their base architectural performance per mm² isn't good, or else they could and would shrink the die without needing the larger memory bus. Obviously.
The base architectural performance per mm² has nothing to do with the memory bandwidth needed to perform on par with the competition. I'm fairly sure that Intel will make a much smaller GPU with the same performance as soon as they have access to G7.
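To make the G7 point concrete, a quick back-of-the-envelope calc. The pin speeds here (19 Gbps GDDR6 as on current Battlemage cards, ~28 Gbps for first-wave GDDR7) are assumptions, not anything Intel has announced:

```python
# Back-of-the-envelope check of the "smaller bus once G7 arrives" point.
# Pin speeds are assumptions: 19 Gbps GDDR6 as on current Battlemage cards,
# ~28 Gbps for first-generation GDDR7 modules.

def peak_bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and pin speed."""
    return bus_width_bits / 8 * pin_speed_gbps

gddr6_192bit = peak_bandwidth_gbs(192, 19)  # current 192-bit GDDR6 config
gddr7_128bit = peak_bandwidth_gbs(128, 28)  # hypothetical 128-bit GDDR7 config

print(f"192-bit @ 19 Gbps GDDR6: {gddr6_192bit:.0f} GB/s")
print(f"128-bit @ 28 Gbps GDDR7: {gddr7_128bit:.0f} GB/s")
```

On those assumptions a 128-bit GDDR7 setup lands within about 2% of a 192-bit GDDR6 one, which is the crux of the "smaller die once G7 is available" argument.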
 
It is easy, but that doesn't mean they're going to do it. Because, ya know, AMD can be greedy too. Plus free market competition doesn't work when consumers stop caring about who is actually giving them the better value.

Yep everyone is greedy. Welcome to capitalism.

"Oh but consumers are deciding that Nvidia offers better value". We know this isn't true, and I really dont want to have to have the discussion where I have to demonstrate that consumers aren't all highly informed, rational people who are good at thinking for themselves like the alternative argument would demand be the case.

What would be the appropriate behavior for a rational, highly informed person in the current market situation? Seems you think over 90% of GPU buyers are dumbasses.
 
The leaked 5070 config is pretty interesting.

The 4070 Super has the same bandwidth and clocks as the 4070 but 10 more SMs, 16 more ROPs, and 12 MB more L2 at a 20 W higher TDP. It benchmarks 15% faster, which is pretty good scaling given no increase in bandwidth. That's 15% higher performance on the Super for 10% more power and 21% more SMs. Fair to assume the 4070 Super isn't terribly bandwidth limited.

Now here comes the 5070 with essentially the same number of SMs as the 4070 but 33% higher bandwidth and 25% higher TDP. Why so much more power and bandwidth for the same SM count when clearly neither was necessary to scale up the SM count on the 4070 Super? And that's before factoring in any efficiency tweaks on N4 vs N5.

One explanation could be that GB205 has a smaller L2 and leans on VRAM bandwidth more than AD104 does. That wouldn't explain the power increase, though. I think the most obvious answer is that Blackwell SMs are a lot more bandwidth- and power-hungry than Ada's. Maybe higher clocks. Maybe beefier RT. Or a surprise.
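If anyone wants to sanity-check the percentages above, here's a quick sketch. The 4070 and 4070 Super entries are retail specs, while the 5070 entry uses the leaked figures being discussed (roughly 48 SMs, 250 W, 672 GB/s), so treat that line as an assumption:

```python
# Quick check of the scaling percentages discussed above. The 4070 and
# 4070 Super entries are retail specs; the 5070 entry uses leaked figures
# and is an assumption, not a confirmed spec.

cards = {
    # name:            (SMs, TDP in W, bandwidth in GB/s)
    "RTX 4070":        (46, 200, 504),
    "RTX 4070 Super":  (56, 220, 504),
    "RTX 5070 (leak)": (48, 250, 672),
}

def pct_delta(new: float, old: float) -> float:
    """Percentage change of new relative to old."""
    return (new / old - 1) * 100

base_sms, base_tdp, base_bw = cards["RTX 4070"]
for name, (sms, tdp, bw) in cards.items():
    if name == "RTX 4070":
        continue
    print(f"{name} vs RTX 4070: "
          f"SMs {pct_delta(sms, base_sms):+.1f}%, "
          f"TDP {pct_delta(tdp, base_tdp):+.1f}%, "
          f"bandwidth {pct_delta(bw, base_bw):+.1f}%")
```

That reproduces the roughly 22% more SMs for 10% more power on the 4070 Super, versus roughly 4% more SMs for 25% more power and 33% more bandwidth on the leaked 5070.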
 