Nvidia Blackwell Architecture Speculation

4X produces more artifacts than 2X, so they are not directly comparable.
Considering that as of now I see exactly zero artifacts with 2X FG on my 4090, more*zero = zero, so that's fine.

Obviously generated frame quality matters; otherwise you would dismiss DLSS4 FG outright, since the Lossless Scaling app already does that (and more). Few people are going to make that argument, because the resulting quality isn't up to DLSS4's.
On a more serious note, though: you're looking at the persistence of said artifacts, and this doesn't increase with 3X/4X FG, since those 2-3 frames are being inserted into the same timescale as the 1 frame would be in the 2X option. So what you get is 3-4 frames being shown in the same time as 2 frames are in 2X FG, which means the artifacts are likely just as "visible" as in the current solution - while the visual fluidity gain should be similar to getting just as many "real" frames.
 
I don't think this checks out. With 2x Frame Generation, generated frames are displayed half the time; with 3x, two-thirds of the time; with 4x, three-quarters of the time. If the artifacts in consecutive generated frames are correlated, that means they are visible for a longer share of time. I believe it was even mentioned that DLSS 4 Frame Gen was designed to have fewer artifacts to alleviate this issue.
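For what it's worth, both points can be made concrete with some napkin math. A minimal Python sketch, assuming a fixed 60 fps base rate, interpolation-style FG, and ideal frame pacing (all illustrative assumptions, not measured numbers):

```python
# Fixed rendered ("real") frame rate; FG multiplies the output rate.
BASE_FPS = 60  # illustrative assumption

for mode, factor in [("2X FG", 2), ("3X FG", 3), ("4X FG", 4)]:
    output_fps = BASE_FPS * factor
    persist_ms = 1000 / output_fps            # how long any single frame stays on screen
    generated_share = (factor - 1) / factor   # fraction of display time showing generated frames
    print(f"{mode}: {output_fps} fps out, each frame persists {persist_ms:.2f} ms, "
          f"{generated_share:.0%} of display time is generated")
```

This prints 8.33 ms / 50% for 2X, 5.56 ms / 67% for 3X, and 4.17 ms / 75% for 4X: each individual generated frame persists for less time (the first point), but the share of time spent looking at generated frames does grow (the second point).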
 
True, but considering how "visible" the artifacts are with the old FG, I honestly doubt this is going to be any sort of a problem. Lag, hitches, and general frametime health under and at the FPS limit are all much bigger issues IMO.
 
I agree that the artifacts with the old FG were not visible when gaming. At least I couldn't see them (though I can see them when zoomed in and stepping frame by frame).

It could be that I couldn't detect the artifacts because every other frame was clean and my brain was able to compensate. But this might not be the case when 3/4 of frames are generated. IDK, but it makes reviewing these cards more complicated if human perception is getting involved.

Also, in some games that I haven't played, the artifacts do seem noticeable in motion. Pointing the flashlight around in the forest in AW2, the artifacts are visible even in YouTube videos. That is a very difficult thing for FG to deal with: interpolating a leaf that was very dark in one frame and brightly lit in the next.
 
Based on AW2 - an RTX 2080 Ti runs about 5 ms better with it on vs off
That's a great result, really nice. And Blackwell should benefit from it even more. If devs use this for higher perf instead of pushing higher visual fidelity, then that should allow for more performant RT on lower-end cards. I wonder if this feature also reduces VRAM usage.
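A flat frametime saving like that is worth more fps the lower your starting framerate, which is why it would matter most on lower-end cards. A quick sketch, assuming the ~5 ms figure holds and applies uniformly (a simplification):

```python
# Convert a flat 5 ms frametime saving into an fps gain at various base rates.
SAVING_MS = 5  # assumed, per the post above

for base_fps in (30, 40, 60):
    before_ms = 1000 / base_fps
    after_ms = before_ms - SAVING_MS
    print(f"{base_fps} fps ({before_ms:.1f} ms) -> {1000 / after_ms:.1f} fps ({after_ms:.1f} ms)")
```

That works out to roughly 30 -> 35 fps, 40 -> 50 fps, and 60 -> 86 fps.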
 
IMO it's fine to say 60->240fps with MFG is better than 60->120fps with FG (though there is a bit more latency, it's pretty small).

It's not fine to compare 30->120fps with 60->120fps and say they are delivering the same performance. At a given output framerate, less FG (more "real" frames) is objectively better in every way.
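A minimal sketch of why, using a simplified latency model (input is only sampled on real frames, and interpolation holds back roughly one real frame; the exact overhead varies by game, so treat these as illustrative numbers):

```python
# Same 120 fps output, very different underlying experience.
cases = [("2X FG", 60, 2), ("4X FG", 30, 4)]  # (mode, base fps, multiplier)

for mode, base_fps, factor in cases:
    output_fps = base_fps * factor
    base_frame_ms = 1000 / base_fps          # input sampling / simulation interval
    generated_share = (factor - 1) / factor
    print(f"{mode}: {base_fps} real fps -> {output_fps} fps out, "
          f"game state updates every {base_frame_ms:.1f} ms, "
          f"{generated_share:.0%} of frames generated")
```

Both land at 120 fps out, but the 4X case updates game state half as often (every 33.3 ms vs 16.7 ms) and shows 75% generated frames instead of 50%.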
 
I'm guessing only maybe LTT has the resources to do something like this these days, but I feel an actually interesting user-oriented test would be an RTX 5080 system vs an RTX 4090 system in games capable of MFG, on both a 165Hz and a 240Hz monitor. Don't tell the users what you're actually testing or what the settings are, don't show any numbers, and just have them rate the gameplay experience between them.

But with the above you can see why the direction of the technology is getting pushback from people in the traditional space. If a proper analysis and review of these products is going to require something like that, it's not going to work with how the current space is set up or what the audience expects.

There seems to be this desire to figure out some sort of equivalency so we can still rely on the current paradigm for benchmarking/reviews, but maybe that isn't going to be possible, or even the best user-relevant direction. The future might just be case-by-case subjective analysis, which does throw a big wrench into the online hardware debates.
 
Guess nobody is really interested in talking about the 5080 reviews, eh? lol


Turns out we didn't need to wait for reviews to know this was gonna be extremely disappointing.

Honestly, even Blackwell as an architecture is super underwhelming. You can argue it's setting up some improvements that will show up later down the line, but by the time those are more commonly incorporated into games, there will be better GPUs out.
oh man, a possible future RTX 5080 Ti is going to be the "4080 Super Super, for real this time I swear to you" version
 
At least it might have more memory. 24GB would be welcome for this level of performance.
tbh, since I use a 550W PSU and prefer "smaller", efficient components (I'm not saying the 5080 isn't efficient), I don't care much about these reviews as far as how I see nVidia. I won't spend more than $500 on a GPU; I never have.

The true test comes with my favourite, the RTX 5060, and how it fares compared to the 4060, which is already a great GPU that performs well and is the most efficient GPU of its tier compared to the RX 7600 and the B570.

What I fear the most is that nVidia puts 8GB in the 5060. I got used to the A770 16GB, and while it's not the best GPU ever by any means, it runs every game as well as it can 'cos the VRAM is NEVER a bottleneck.
 
According to TechPowerUp, the RTX 5080 already uses a fully-enabled GB203 die (and the 5090 doesn't use a fully-enabled GB202). I doubt Nvidia is interested in cutting GB202 down even more to create a 5080 Ti. A 5080 24GB is possible once there are enough 3GB GDDR7 chips, though.
 
GB202 is a big-ass die, so I think there are enough defective chips to make a 24GB 384-bit 5080 Ti.
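Either path pencils out to 24GB, just on different buses. Napkin math, assuming the standard one 32-bit GDDR7 chip per 32 bits of bus width and no clamshell (the 384-bit cut-down GB202 config is hypothetical):

```python
# VRAM capacity = (bus width / 32 bits per chip) * capacity per chip.
def vram_config(bus_width_bits: int, gb_per_chip: int) -> tuple[int, int]:
    chips = bus_width_bits // 32
    return chips, chips * gb_per_chip

for label, bus, cap in [
    ("5080 today: GB203, 256-bit, 2GB chips", 256, 2),
    ("5080 24GB:  GB203, 256-bit, 3GB chips", 256, 3),
    ("5080 Ti?:   cut GB202, 384-bit, 2GB chips", 384, 2),
]:
    chips, total = vram_config(bus, cap)
    print(f"{label} -> {chips} chips x {cap}GB = {total}GB")
```

So 3GB modules get GB203 to 24GB on its existing 256-bit bus, while a 384-bit GB202 salvage part gets there with today's 2GB modules.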
 
I think it's safe to say now that Blackwell is a disaster of an architecture for gaming. I hope someone underclocks the memory on the 5080 and compares it vs the 4080 Super. The IPC gain is looking like almost 0%. It's almost Intel Core Ultra levels of bad.

I wouldn't say it's a disaster for the 5080, because there's no price increase over the 4080 Super. It's just a much smaller gain than is typical. Ironically, the 4080 launch felt like more of a disaster because of the price increase over the 3080, even though the performance gain was much better. Really curious to see how the market responds to this one. It's still a great upgrade from a 2080 or 3080, but definitely not a good upgrade from a 4080 (Super). The number of people who upgrade after two years is probably pretty small. Could be wrong. If they don't have good competition, then ultimately I'm not sure it'll matter much.
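On the memory-underclock idea above: the point of that test is to isolate architecture from bandwidth, i.e. normalize performance per SM per clock. A minimal sketch of that normalization (the fps values are placeholders to plug review data into, not measurements; SM counts and boost clocks are the published specs):

```python
# Crude "IPC" comparison: performance per SM per MHz.
def perf_per_sm_mhz(fps: float, sm_count: int, clock_mhz: float) -> float:
    return fps / (sm_count * clock_mhz)

# SM counts / official boost clocks; fps values are placeholders.
rtx_5080  = perf_per_sm_mhz(fps=100.0, sm_count=84, clock_mhz=2617)
rtx_4080s = perf_per_sm_mhz(fps=95.0,  sm_count=80, clock_mhz=2550)

print(f"relative 'IPC': {rtx_5080 / rtx_4080s:.2f}x")
```

With memory clocks matched between the cards, a ratio near 1.00x would back up the "almost 0% IPC" read; the caveat is that actual sustained clocks differ from boost specs, so you'd want measured clocks too.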
 
Yeah, I mean a GB203-based card using 3GB modules. I'd love to see a cut-down GB202, but they never did that with AD102 (the 4090D aside).
 
My post got deleted... I'm waiting for the mod who deleted it to explain why... Keep in mind that when comparing generational uplifts between x80 cards, we haven't seen an uplift this low since the 780, which delivered a 24% uplift. A 15% uplift is really bad. To compound the problem, it's not like we're getting a significant IPC improvement that results in a smaller chip using less power. Let's call a spade a spade here. Price aside, this is just terrible.
 