DegustatoR
I don't see anything "disappointing" in getting a +10-20% performance increase at the same price.
> Guess nobody is really interested in talking about the 5080 reviews, eh? lol
> Turns out we didn't need to wait for reviews to know this was gonna be extremely disappointing.

If you consider that the $1,000 MSRP will not exist in Europe, and will be almost nonexistent in the US, things are worse than what reviewers are saying. €1,200 should be considered the real price for this thing (and that's probably conservative).
128 MB of L2 for GB202 has been confirmed in the meantime since my posting below.

I have been keen to find out the L2 cache size for months, and I agree it's a bit odd to leave it out. They would have had to shrink the relative size of the cache footprint to keep the same ratio on GB202 (i.e. 128 MB) - not impossible (AMD did it with Zen 5), but once we knew the chip was 22% bigger it never seemed likely. My guess is it's still 96 MB (maybe even a bit smaller), and they don't want silly stories about less cache for more money or some such nonsense. Worth noting that unless they needlessly disable cache on the 5090 again, it'll probably end up with more overall than the 4090 (even if the SM/L2 ratio is a bit smaller).

Edit: Replied before seeing the post on the next page; no surprises there.
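The ratio argument above is easy to check with quick arithmetic. A minimal sketch, assuming the commonly cited full-die figures (AD102: 144 SMs with 96 MB of L2; GB202: 192 SMs) - the "same ratio" number in the post falls out directly:

```python
# Back-of-the-envelope check of the L2-per-SM ratio argument.
# Assumed full-die specs (commonly cited figures, not official gospel):
# AD102 = 144 SMs with 96 MB of L2, GB202 = 192 SMs.

AD102_SMS, AD102_L2_MB = 144, 96
GB202_SMS = 192

mb_per_sm = AD102_L2_MB / AD102_SMS      # Ada's L2-per-SM ratio
same_ratio_l2 = GB202_SMS * mb_per_sm    # L2 GB202 would need at that ratio

print(same_ratio_l2)  # 128.0 -- the "i.e. 128 MB" figure in the post
```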
> If you consider that the 1000$ MSRP will not exist in Europe, and will be almost non existent in the US, things are worse than what reviewers are saying. 1200€$ should be considered the real price for this thing (and that's probably conservative).

Nvidia MSRP prices are a smokescreen.
> Guess nobody is really interested in talking about the 5080 reviews, eh? lol
> Turns out we didn't need to wait for reviews to know this was gonna be extremely disappointing.

How is the 5080 a disappointment when it delivers more for the same price?
Honestly, even Blackwell as an architecture is super underwhelming. You can argue it's setting up some improvements that will show up later down the line, but by the time those things are more commonly incorporated into games, there will be better GPUs out.
> I don't see anything "disappointing" in getting a +10-20% performance increase at the same price.

Considering nVidia's promo materials claimed it was more than 2x the performance of the 4080, yes, 10-20% is a letdown.
> Considering nVidia's promo materials claimed it was more than 2x the performance of the 4080, yes 10-20% is a let down.

Well, it is 2x with MFG vs FG, and this is still a valid comparison.
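The arithmetic behind that "2x" is simple to sketch. A toy model, assuming ideal frame-generation multipliers and ignoring overhead - the base frame rate here is made up for illustration:

```python
# Toy model of why "2x with MFG vs FG" holds for *displayed* frame rate:
# both modes render the same base frames; FG presents 2x as many frames,
# MFG presents 4x. Overhead is ignored, so this is illustrative only.

def displayed_fps(rendered_fps, multiplier):
    """Displayed frame rate with an ideal frame-generation multiplier."""
    return rendered_fps * multiplier

base = 30                        # hypothetical rendered frames per second
fg = displayed_fps(base, 2)      # 60 displayed
mfg = displayed_fps(base, 4)     # 120 displayed

print(mfg / fg)  # 2.0 -- the "2x" in the marketing comparison
```

The catch, of course, is that the rendered (input-latency-relevant) frame rate is identical in both cases - which is exactly what the posters below are arguing about.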
I know they used MFG for their comparison, but that's on them for getting the hype too high.
> You wouldn't see anything disappointing if Jensen personally ran over your dog.

Reported. This is seriously tiring.
> And it's more like 10% average

As with the 5090, there is zero point in looking at the "average", which seems to be CPU-limited to a significant degree or just full of buggy results.
> Based on AW2 - RTX 2080 Ti Runs about 5 ms better with it on vs off

That's amazing. I wonder what the Blackwell advantage is. Nvidia basically put all of their resources into neural rendering and improving the RT cores. You would hope that Blackwell has a significant performance advantage in titles that support RTX Mega Geometry.
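Worth remembering that a fixed frame-time saving like that "5 ms" is worth more FPS the faster you already run. A quick sketch - the baseline frame times here are hypothetical round numbers, not measured AW2 results:

```python
# A fixed ms saving buys more FPS at higher baseline frame rates,
# because fps = 1000 / frame_time_ms is nonlinear in frame time.
# Baselines below are hypothetical (~30 fps and ~60 fps), not AW2 data.

def fps(frame_time_ms):
    return 1000.0 / frame_time_ms

for base_ms in (33.3, 16.7):                 # ~30 fps and ~60 fps baselines
    gain = fps(base_ms - 5) - fps(base_ms)   # effect of a 5 ms saving
    print(f"{base_ms} ms baseline: +{gain:.1f} fps")
```

At a ~30 fps baseline the 5 ms saving is only about +5 fps, but at ~60 fps it is over +25 fps.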
> Techreport it's about 13% faster than 4080 super at 4k. Really comes down to the games being tested.

You got me excited for a second. RIP TechReport
NVIDIA GeForce RTX 5080 Founders Edition Review
The NVIDIA GeForce RTX 5080 is out now. Priced at $1000, it includes all the new GeForce RTX 50 features and offers good gaming performance. On the other hand, the gen-over-gen performance increase is smaller than expected, but NVIDIA didn't raise their pricing either.
www.techpowerup.com
> DXR 2.0 must be coming. This is a pretty significant departure from how ray-tracing is currently being handled.

The new Sphere and Linear Swept Spheres intersection testing is a big change to RT as well. It's the first time in a long time (forever?) that mass-market graphics cards have come with hardware support for non-triangle 3D graphics primitives. This post ended up being rather prescient.
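For context on what that sphere primitive support actually accelerates: a ray-sphere test is just a quadratic. A minimal software sketch of the standard formulation (plain Python for illustration - this is the textbook test, not NVIDIA's hardware implementation):

```python
import math

# Standard quadratic ray-sphere intersection: the kind of per-primitive
# test the new RT cores can now do in hardware instead of in shader code.

def ray_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None. `direction` must be
    normalized; all points are (x, y, z) tuples."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * x for d, x in zip(direction, oc))   # half the quadratic's b
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                  # ray misses the sphere entirely
    t = -b - math.sqrt(disc)         # nearest of the two roots
    return t if t >= 0 else None     # hits behind the origin don't count

print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1))  # 4.0
```

Linear Swept Spheres generalize this to a capsule-like shape between two sphere endpoints, which is why they map so well to hair and fur strands.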
I'm really curious how Mega Geometry works on a GPU that lacks the cluster intersection testing in the RT core. You'll still get big benefits on the CPU side, but I wonder how the intersection testing is handled and what the performance penalty is.
What are the chances of adding support for intersecting curve primitives in the next iteration of DXR? Apple recently added it to Metal, but it's unclear whether it's hardware accelerated. OptiX supports it in CUDA.
Theoretically it would be faster to intersect curve primitives for stuff like hair, which I believe currently uses highly tessellated lines and triangles.
> Well it is 2X with MFG vs FG, and this is still a valid comparison.

Seems more deceptive than valid to me, but you are welcome to your own opinion on the matter.
Does the 5090 really have 128 MB of L2? TechPowerUp still lists 96 MB.
> Seems more deceptive than valid to me but you are welcome to your own opinion on the matter.

Nothing deceptive in comparing 2X FG with 4X FG.