Speculation and Rumors: Nvidia Blackwell ...


David Harold Blackwell (April 24, 1919 – July 8, 2010) was an American statistician and mathematician who made significant contributions to game theory, probability theory, information theory, and statistics. He is one of the eponyms of the Rao–Blackwell theorem. He was the first African American inducted into the National Academy of Sciences, the first black tenured faculty member at the University of California, Berkeley, and the seventh African American to receive a Ph.D. in Mathematics.

Nvidia has committed to releasing a new GPU/CPU every two years, so we can expect Blackwell sometime in 2024. Not much is circulating on the rumor mill except that four Blackwell GPUs were mentioned in a data dump leaked via Videocardz.
[Attached image: nvidia-blackwell-gb100.jpg]
 
Blackwell is "Ampere Next Next", isn't it?

So that's the same SM architecture as Ampere, presumably...
 
While answering a question at the Arete Tech Conference 2022, Ian Buck of NVIDIA reiterated that the company is fully committed to launching a major GPGPU architecture every two years, and Hopper was released this year. This was in response to a question about potential new developments in 2023, and it confirms that the Blackwell GPU architecture will be launched in 2024.

Volta > Ampere > Hopper > Blackwell
 
Not only that. A top-SKU monster MCM is under evaluation too. Hope it goes into production, as it will be a $2k+ monster GPU with the biggest performance jump ever in any client GPU.
How do you benchmark something like that properly? Native 8K raster? DLSS quality/balanced RT at 8K with frame gen? Using 4K raster you'll far outscale any CPU gains again, so you'll hit more CPU and engine FPS bottlenecks, along with high refresh rates for almost everything, if the hardware can scale out that much and doesn't have utilisation problems. Actually, that's a point: hardware so fast that you limit its utilisation and thereby significantly reduce power consumption. Spend crazy money on a GPU and save the planet; it's so simple.
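A rough way to see the CPU-bottleneck point, as a toy model (every number below is invented purely for illustration): delivered frame rate is capped by whichever of the GPU or CPU/engine frame time is longer, so once the GPU is fast enough, extra GPU performance stops showing up at 4K and you need 8K or heavy RT to expose it.

```python
# Toy model: delivered FPS is capped by the slower of the GPU and CPU/engine
# frame times. All numbers below are invented purely for illustration.

def effective_fps(gpu_ms: float, cpu_ms: float) -> float:
    """Frame rate is roughly limited by whichever side takes longer per frame."""
    return 1000.0 / max(gpu_ms, cpu_ms)

CPU_MS = 5.0  # hypothetical CPU/engine cost per frame (~200 fps cap)

for label, gpu_ms_4k, gpu_ms_8k in [
    ("current flagship", 6.0, 24.0),
    ("2x faster GPU",    3.0, 12.0),
]:
    print(f"{label}: 4K {effective_fps(gpu_ms_4k, CPU_MS):.0f} fps, "
          f"8K {effective_fps(gpu_ms_8k, CPU_MS):.0f} fps")

# At 4K the "2x faster" part only shows ~1.2x (167 -> 200 fps, CPU cap kicks in),
# while at 8K the full 2x shows up (42 -> 83 fps); hence 8K / heavy-RT tests.
```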
 
How do you benchmark something like that properly?
RT heavy titles.
 
If half a million or so gamers/creators/game-studios etc. think that's a reasonable price for the performance, why not?
Because this price for that performance is highly likely to be even worse than what we have now with Lovelace? I dunno.
 
Because this price for that performance is highly likely to be even worse than what we have now with Lovelace? I dunno.
This tier of GPU is aimed at people who aren't looking for value, only maximum performance per installation.

Doesn't say anything about lower tiers...

A typical gamer only has to worry about whether NVidia thinks gamers are worth making cards for, as opposed to data centre...

Are margins on gamer products going to sustain gaming GPUs?

Does NVidia want to sell GPUs to people who only have $300? What about $200?
 
But Lovelace's price/perf is actually really good at that segment. For RTX 3090->RTX 4090 buyers it's the biggest perf/price improvement since Maxwell->Pascal, and arguably better if you factor in the feature-set improvements. It's the rest of the stack downwards, at least so far, that's the issue in that respect.

This is the thing about a future of chiplets, especially if there are multiple GCDs, that I feel is getting somewhat misinterpreted: the scaling benefits are upwards in the stack, not downwards. Note that AMD's lowest-priced CPUs and lowest-priced GPUs are monolithic. Even with RDNA3, the lowest part in the stack is monolithic. Chiplets are not going to shrink the product stack. Hypothetically, if even further down the line they scale up to >2 GCDs per GPU, you can expect the product stack to move even further up in terms of power and price of entry at the absolute highest end. We're basically moving back to the equivalent of multiple graphics cards being the true halo end.
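For what it's worth, here's a toy yield/cost sketch (all constants invented) of why splitting helps big dies far more than small ones: with a roughly exponential yield curve, a huge monolithic die wastes a lot of silicon to defects, while a small die already yields well and mostly just picks up packaging overhead when split.

```python
import math

# Toy yield/cost model with invented constants, only to illustrate why chiplets
# help big dies far more than small ones.
DEFECTS_PER_MM2   = 0.002  # assumed defect density
COST_PER_MM2      = 0.5    # assumed wafer cost per mm^2 before yield loss
PACKAGING_PENALTY = 1.25   # assumed extra cost of advanced chiplet packaging

def die_cost(area_mm2: float) -> float:
    good_fraction = math.exp(-DEFECTS_PER_MM2 * area_mm2)  # simple Poisson yield
    return area_mm2 * COST_PER_MM2 / good_fraction

for total_area in (150, 300, 600):
    mono = die_cost(total_area)
    duo = 2 * die_cost(total_area / 2) * PACKAGING_PENALTY
    print(f"{total_area:>3} mm^2 GPU: monolithic ${mono:.0f} vs two chiplets ${duo:.0f}")

# Small GPU (150 mm^2): chiplets cost *more* (packaging overhead dominates).
# Huge GPU (600 mm^2): chiplets come out ~30% cheaper, so the technique pays off
# at the top of the stack and pushes the halo tier up rather than trickling down.
```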
 
This tier of GPU is aimed at people who aren't looking for value, only maximum performance per installation.
Yeah, but if we're looking at 2x the performance for 2x the price, then why wait till Blackwell to do it?

But Lovelace's price/perf is actually really good at that segment.
Exactly. Which means it will be hard to beat with the next gen, which also means that this "monster card at $2500" can easily end up with the same perf/price we have now on 4090.

I personally would be more excited to hear that they are looking at pushing the 4090's performance into the $500 tier with the next gen.
 
this "monster card at $2500" can easily end up with the same perf/price we have now on 4090.
Unless the foundries can deliver actual per-transistor cost reductions, it'll be hard for IHVs to improve perf/$.

To a first order all they can do is build bigger and more expensive things.

There is a second-order way to improve perf/$: if perf/W improves, then IHVs can build narrower, higher-clocked designs that achieve the same perf as a prior-gen wider-and-slower design. This can cut silicon costs, but higher clocks need other supporting infrastructure that eats into those savings.
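A toy numerical version of that trade-off (every coefficient below is made up; only the shape of the trade-off matters): hold performance constant, trade SM count against clock, and let board/cooling cost scale with power.

```python
# Toy model of the "narrower but higher-clocked" lever; coefficients are invented.

def design(sm_count: int, clock_ghz: float) -> tuple[float, float, float]:
    perf    = sm_count * clock_ghz                 # throughput ~ units * clock
    silicon = sm_count * 3.0                       # silicon cost ~ die area ~ units
    power   = sm_count * 0.35 * clock_ghz ** 2.5   # crude voltage/frequency scaling
    board   = 0.3 * power                          # cooler/VRM cost tracks power
    return perf, silicon + board, power

for name, sms, clk in [("wide & slow", 160, 2.0), ("narrow & fast", 128, 2.5)]:
    perf, cost, power = design(sms, clk)
    print(f"{name}: perf {perf:.0f}, cost ${cost:.0f}, power {power:.0f} W")

# Same performance (320) either way; the narrow design saves ~20% on silicon,
# but its higher power draw claws back part of that through board/cooling cost,
# and it only works at all if the new node's perf/W can absorb the clock bump.
```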
 