Speculation and Rumors: Nvidia Blackwell ...

Isn't it too (s)low for mighty Nvidia?
People noted that if the rumor is true, the chip sounds somewhat special and would hit some pretty insane boost clocks (compared to the already high stock and OC boost clocks on Lovelace).
 
Not necessarily. We don't know if Blackwell will have the same boost behavior.

If it does, and if this rumor is true, then yeah, sustained boost clocks should be around 3.6 GHz.
 
How did you get to 3.6? The 4090 tops out at around 2.7.
And its base is 2.235 GHz. Applying a similar ratio for a base of 2.9 GHz I get 3.5 rather than 3.6, but still much higher than Ada.

With the obvious caveat that two huge assumptions are being made there.
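
For anyone who wants to sanity check the math, here it is as a few lines of Python. Treat the 2.9 GHz base and the idea that the 4090's boost/base ratio carries over to Blackwell as assumptions, not facts:

Code:
# Scale the rumored Blackwell base clock by the 4090's boost/base ratio (pure speculation).
ada_base = 2.235    # GHz, RTX 4090 official base clock
ada_boost = 2.7     # GHz, roughly what a 4090 FE sustains
rumored_base = 2.9  # GHz, the rumored Blackwell base clock

ratio = ada_boost / ada_base             # ~1.21
projected_boost = rumored_base * ratio   # ~3.5 GHz

print(f"boost/base ratio: {ratio:.2f}")
print(f"projected boost:  {projected_boost:.2f} GHz")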
 
How did you get to 3.6? The 4090 tops out at around 2.7.
More like 2.9.

[chart: clock-vs-voltage.png]


FE specifically does 2.7, yes:

[chart: clock-vs-voltage.png]
 
Yeah, let's try to avoid using souped-up AIB numbers when speculating on upcoming hardware.

And its base is 2.235 GHz. Applying a similar ratio for a base of 2.9 GHz I get 3.5 rather than 3.6, but still much higher than Ada.

With the obvious caveat that two huge assumptions are being made there.

Ah, that makes sense. I mistakenly used 2.32 rather than 2.235 for the base. Either way, there's no guarantee the boost algorithm works off percentages like that, and certainly no guarantee Blackwell uses the same ratios. Anything around 3.5 would be pretty impressive.
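
Just to show how much that projection swings depending on which Ada numbers you feed it (these are only the base/boost figures mentioned in this thread, not a claim about how anyone actually did the math):

Code:
# Sweep the Ada base/boost combinations mentioned above and see where the projection lands.
rumored_base = 2.9  # GHz
for ada_base in (2.235, 2.32):
    for ada_boost in (2.7, 2.9):
        projected = rumored_base * ada_boost / ada_base
        print(f"base {ada_base} GHz, boost {ada_boost} GHz -> {projected:.2f} GHz")

So anywhere from roughly 3.4 to 3.8 GHz depending on the inputs, which is why the exact figure isn't worth arguing over.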
 
Yeah, let's try to avoid using souped-up AIB numbers when speculating on upcoming hardware.
These two cards have the exact same base clock, so there's nothing "souped up" there aside from the power limit.
Lovelace seems to top out at ~2.9 GHz across all chips, really; you need w/c or LHC to push it higher.
 
Edit - I can see that boost clock for the highest-clocking GPUs of this arch; it's a reasonable target. That the highest-tier consumer part would also hit the highest-tier boost clock is questionable, though; those parts are usually power limited, and I see every reason to assume that's the case here.
 
Without a proper node jump, I'd say reasonable expectations should be more in the 35-40% increase range.

Looking back to the last two times Nvidia stuck to the same (general) process between generations, we have:

780 Ti -> 980 Ti was about a 40% increase in performance at 1440p, rising to 45% if we want to talk about the Titan X. This was overall considered a really excellent result. Die size increased from 561mm² to 601mm² (a 7% increase).

1080 Ti -> 2080 Ti was a 33% increase in performance, actually a slight decrease if we compare against the Pascal Titan X/Xp. And given the incredible growth in die size from 471mm² to 754mm² (60% bigger), this was widely considered a very disappointing result. Granted, Turing spent a chunk of the new silicon on all-new features that wouldn't be widely used yet, but it in no way felt like a worthy generational leap regardless.

So these were pretty different situations, but they show that even Nvidia at its best couldn't deliver a Pascal/Lovelace-esque improvement without a proper node leap. I'd hope we get better than we did with Turing, but we really don't know. I just don't think the ceiling for performance improvement is as high as such a rumor suggests. That would be almost miraculous, even with some 700mm²+ behemoth.
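
Putting those two same-node jumps side by side (figures straight from the post above, so treat them as rough):

Code:
# Compare performance gain vs. die-size growth for Nvidia's last two same-node generations.
gens = [
    # name,                old die mm^2, new die mm^2, perf ratio
    ("780 Ti -> 980 Ti",   561,          601,          1.40),
    ("1080 Ti -> 2080 Ti", 471,          754,          1.33),
]
for name, die_old, die_new, perf in gens:
    die_growth = die_new / die_old - 1.0
    print(f"{name}: die +{die_growth:.0%}, perf +{perf - 1.0:.0%}, "
          f"perf gain per unit of die growth {perf / (1.0 + die_growth):.2f}x")

Maxwell got its gain almost entirely from architecture, while Turing's uplift didn't even keep pace with the extra silicon, which is why a 700mm²+ die on its own wouldn't guarantee a Lovelace-sized jump.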
 