Nvidia Blackwell Architecture Speculation

Guess nobody is really interested in talking about the 5080 reviews, eh? lol


Turns out we didn't need to wait for reviews to know this was gonna be extremely disappointing.
If you consider that the $1,000 MSRP will not exist in Europe, and will be almost non-existent in the US, things are worse than what reviewers are saying. €1,200 should be considered the real price for this thing (and that's probably conservative).

Nvidia MSRP prices are a smokescreen.

PS: I just saw the official price for the 5080 in Italy, and it's a €1,200 base price 🥱
 
I have been keen to find out the L2 cache size for months, and I agree it's a bit odd to leave it out. They would have had to shrink the relative die footprint of the cache to keep the same L2-per-SM ratio on GB202 (i.e., 128 MB) - not impossible (AMD did it with Zen 5), but once we knew the chip was 22% bigger it never seemed likely. My guess is it's still 96 MB (maybe a bit smaller, even), and they don't want silly stories about less cache for more money or some such nonsense. Worth noting that unless they needlessly disable cache on the 5090 again, it'll probably end up with more overall than the 4090 (even if the SM/L2 ratio is a bit smaller).

Edit: Replied before seeing the post on the next page, no surprises there.
128 MB of L2 for GB202 has been confirmed in the meantime.
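For scale, the "same ratio" math is quick to check. This is just a back-of-the-envelope sketch; the AD102 figures (144 SMs, 96 MB of L2) and GB202's 192 SMs are taken from public die specs, not from this thread:

```python
# AD102 (the 4090's die) pairs 96 MB of L2 with 144 SMs.
ad102_sms, ad102_l2_mb = 144, 96
gb202_sms = 192  # GB202's full SM count

# Keeping the same L2-per-SM ratio (2/3 MB per SM) on the bigger die:
same_ratio_l2 = gb202_sms * ad102_l2_mb / ad102_sms
print(same_ratio_l2)  # 128.0
```

So 128 MB is exactly what holding the Ada L2-per-SM ratio would predict, even though the die only grew about 22%.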
 
If you consider that the $1,000 MSRP will not exist in Europe, and will be almost non-existent in the US, things are worse than what reviewers are saying. €1,200 should be considered the real price for this thing (and that's probably conservative).

Nvidia MSRP prices are a smokescreen.

The same was probably true of the 4080 and 4080 Super - at least I never saw them at MSRP. It gets super hard when you start comparing prices of AIB models, but I do agree that using the FE pricing as some kind of comparison point seems limited.
 
Guess nobody is really interested in talking about the 5080 reviews, eh? lol


Turns out we didn't need to wait for reviews to know this was gonna be extremely disappointing.

Honestly, even Blackwell as an architecture is super underwhelming. You can argue it's setting up some improvements that will show up later down the line, but by the time those features are more commonly incorporated into games, there will be better GPUs out.
How is the 5080 a disappointment when it delivers more for the same price?
 
Techreport says it's about 13% faster than the 4080 Super at 4K. It really comes down to the games being tested.

 
Based on AW2, the RTX 2080 Ti runs about 5 ms better with it on vs. off.
That's amazing. I wonder what the Blackwell advantage is. Nvidia basically put all of their resources into neural rendering and improving the RT cores. You would hope that Blackwell has a significant performance advantage in titles that support RTX Mega Geometry.
 
Techreport says it's about 13% faster than the 4080 Super at 4K. It really comes down to the games being tested.

You got me excited for a second. RIP TechReport 🕯️

Does the 5090 really have 128 MB of L2? TechPowerUp still lists 98 MB.
 
DXR 2.0 must be coming. This is a pretty significant departure from how ray-tracing is currently being handled.

I'm really curious how Mega Geometry works on a GPU that lacks cluster intersection testing in the RT core. You'll still get big benefits on the CPU side, but I wonder how the intersection testing is handled and what the performance penalty is.
The new Sphere and Linear Swept Spheres (LSS) intersection testing is a big change to RT as well. It's the first time in a long time (forever?) that mass-market graphics cards have shipped with hardware support for non-triangle 3D primitives. This post ended up being rather prescient.
What are the chances of adding support for intersecting curve primitives in the next iteration of DXR? Apple recently added it to Metal, but it's unclear whether it's hardware-accelerated. OptiX supports it in CUDA.

Theoretically it will be faster to intersect curve primitives for things like hair, which I believe currently uses highly tessellated lines and triangles.
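For anyone curious what the sphere half of the new primitive support actually computes: a ray–sphere test reduces to a quadratic in the ray parameter t. This is a minimal CPU sketch of that math, not Nvidia's hardware implementation, and `ray_sphere_intersect` is just an illustrative name:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |o + t*d - c|^2 = r^2, a quadratic in t.
    Assumes `direction` is normalized (so the quadratic's a == 1).
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    sqrt_disc = math.sqrt(disc)
    t0 = (-b - sqrt_disc) / 2.0  # near intersection
    t1 = (-b + sqrt_disc) / 2.0  # far intersection
    if t0 > 0.0:
        return t0
    if t1 > 0.0:
        return t1  # origin is inside the sphere
    return None  # sphere is behind the ray

# Ray from the origin along +z toward a unit sphere centered at (0, 0, 5):
t = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(t)  # 4.0 (hits the front of the sphere)
```

An LSS primitive generalizes this: two sphere endpoints with a linearly interpolated radius between them, which is why it maps so well to hair and fur strands.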

DXR 2.0, standardization of SER, Cooperative Vectors, and Shader Model 7 with SPIR-V support are enough new features for Microsoft to announce DirectX 13.
 
Nothing deceptive in comparing 2X FG with 4X FG.

As a measure of performance, there is: 4X produces more artifacts than 2X, so the two are not directly comparable.

Obviously generated-frame quality matters; otherwise you would dismiss DLSS 4 FG entirely, since the Lossless Scaling app already does that (and more). Few people make that argument, because the resulting quality isn't up to DLSS 4's.
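One concrete way to see why 2X and 4X numbers aren't directly comparable: at the same displayed frame rate, 4X renders half as many real frames as 2X. A quick sketch, assuming the usual multiplier semantics (2X = 1 generated frame per rendered frame, 4X = 3 per rendered frame); `fg_breakdown` is a hypothetical helper, not part of any Nvidia API:

```python
def fg_breakdown(displayed_fps, mode):
    """Split a displayed frame rate into rendered vs. generated frames.

    `mode` is the FG multiplier: 2 for 2X frame generation,
    4 for 4X multi-frame generation.
    """
    rendered = displayed_fps / mode   # frames actually rendered by the game
    generated = displayed_fps - rendered  # frames interpolated by FG
    return rendered, generated

# The same 240 fps on screen means very different rendered-frame rates:
print(fg_breakdown(240, 2))  # (120.0, 120.0) -> half the frames are real
print(fg_breakdown(240, 4))  # (60.0, 180.0)  -> only a quarter are real
```

So a 4X bar on a chart reflects both fewer real frames and more opportunities for artifacts per second of output than the 2X bar next to it.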
 