Or maybe you just think that because Cyan mentions it every 5 minutes

I don't know, I think 165Hz is pretty common in budget monitors nowadays.
We will need a bit more information on DLSS4 FG to know that. But if we're reading the slides right, all FG is based off the current rendered frame, so it just produces the next 3 interpolated frames in between the past and present.

That is something I don't understand: can a frame only include information from the same "timeline"? What happens when a frame contains information from the past frame(s)? Isn't this just a "hallucinated" frame, too?
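To put the "same timeline" question in concrete terms: in an interpolation scheme like the one described above, every generated frame is a blend of two frames that were already rendered, so it only ever contains past and present information. A minimal sketch, assuming evenly spaced interpolation times, with a linear blend standing in for whatever the real optical-flow/AI model actually does:

    # Illustrative sketch only: a linear blend stands in for the real optical-flow /
    # AI interpolation; the point is just the timing relationship between frames.
    import numpy as np

    def generate_inbetween_frames(past_frame: np.ndarray,
                                  present_frame: np.ndarray,
                                  count: int = 3) -> list:
        """Produce `count` frames at evenly spaced times between past and present."""
        frames = []
        for i in range(1, count + 1):
            t = i / (count + 1)  # 0.25, 0.5, 0.75 for count == 3
            # Only already-rendered data is used, so nothing newer than the
            # present frame (and nothing truly "from the future") can appear.
            frames.append((1.0 - t) * past_frame + t * present_frame)
        return frames

    past = np.zeros((1080, 1920, 3), dtype=np.float32)
    present = np.ones((1080, 1920, 3), dtype=np.float32)
    generated = generate_inbetween_frames(past, present)  # three in-between frames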
I mean, look at all of the people claiming that the 1+ frame of added latency incurred by frame gen is a deal breaker, even though it often adds less than 20% to end-to-end latency.
It's a different story when we look at AI computational performance. We know the RTX 50-series will have FP4 number format support, but just as important, it seems to have twice the compute per tensor core of the RTX 40-series. That's not enough compute for the 5070 to surpass the 4090, but it's 'only' about 25% slower in theoretical performance. And if something can leverage FP4 on the 5070 where the 4090 needs to use FP8, then it might run better on the 5070. But even the INT8 TOPS favors the 4090.
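As a back-of-the-envelope illustration of how the per-core doubling and FP4 change the comparison, here's a rough sketch. All of the SM counts, clocks and per-SM throughput figures below are placeholder assumptions for illustration, not published specifications.

    # Rough theoretical tensor throughput: cores * clock * ops per core per clock.
    # Every number here is a placeholder assumption, not a published spec.
    def tensor_tops(sm_count: int, boost_clock_ghz: float, ops_per_sm_per_clock: int) -> float:
        return sm_count * boost_clock_ghz * ops_per_sm_per_clock / 1000.0

    # Hypothetical: the smaller 50-series part has far fewer SMs but twice the
    # per-SM tensor compute, and FP4 halves the bits per operand again.
    tops_4090_fp8 = tensor_tops(sm_count=128, boost_clock_ghz=2.5, ops_per_sm_per_clock=4096)
    tops_5070_fp8 = tensor_tops(sm_count=48, boost_clock_ghz=2.5, ops_per_sm_per_clock=8192)
    tops_5070_fp4 = tops_5070_fp8 * 2  # FP4 doubles throughput where it can be used

    print(f"5070 FP8 vs 4090 FP8: {tops_5070_fp8 / tops_4090_fp8:.0%}")  # ~75%, i.e. ~25% slower
    print(f"5070 FP4 vs 4090 FP8: {tops_5070_fp4 / tops_4090_fp8:.0%}")  # the FP4 path can pull ahead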
True that. With the only exception being native 30fps games like those running via emulation, where 30fps + FGx3 = 90fps and FGx4 = 120fps look absolutely incredible, especially if you knew and had played the original game; the difference is staggering.

That's the marketed use case, but not how it plays out in practice. The input lag penalty when using a low base frame rate is so high you'd never want to use it. It also introduces more motion artefacts. Under no circumstances would I recommend anyone take a 30fps output and run FG on it.
This is basically what VR does, and it is at this point well understood in that context. It can indeed make some of these cases feel better (especially the wiggling-the-mouse FPS one), but the limitations are fairly fundamental. Camera rotation looks pretty good; camera motion or anything moving significantly on the screen will look blurry or jittery or both. Inpainting artifacts are fairly obvious with any significant motion. Someone noted DCS, so it's worth noting that lots of people can enjoy it at 45Hz time/spacewarped up to 90 or similar, as head motion is pretty fine, but near-field moving objects or other aircraft are generally pretty blurry; everyone can tell the difference, even if you do have to find a set of compromises that work for you.

Let's go the other way on this: https://www.nvidia.com/en-gb/geforce/news/reflex-2-even-lower-latency-gameplay-with-frame-warp/
With Reflex 2, Nvidia is lowering latency by generating pieces of the image on the fly instead of waiting for the native rendering to complete. How should reviewers look at that? You're getting lower latency, which is a big win. So if the IQ isn't impacted, is this a win-win that doesn't need further dissection?
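For anyone unfamiliar with the technique, here's a minimal sketch of the general late-reprojection idea that VR timewarp uses and that, per Nvidia's description, Frame Warp builds on: the frame is rendered against an older camera pose, then shifted just before presentation using the newest input, and the newly revealed band at the edge is what needs inpainting. The purely horizontal pixel shift and all of the names here are illustrative assumptions, not the actual implementation.

    # Illustrative late camera-warp sketch (VR timewarp / Frame Warp style).
    # The purely horizontal shift is a stand-in for a real reprojection.
    import numpy as np

    def warp_frame(frame: np.ndarray, shift_px: int) -> np.ndarray:
        """Shift the rendered frame by the camera rotation that happened after
        rendering started; the newly exposed band is left black for inpainting."""
        warped = np.zeros_like(frame)
        width = frame.shape[1]
        if shift_px >= 0:
            warped[:, shift_px:] = frame[:, :width - shift_px]
        else:
            warped[:, :shift_px] = frame[:, -shift_px:]
        return warped

    def present(rendered: np.ndarray, yaw_at_render: float, latest_yaw: float) -> np.ndarray:
        pixels_per_radian = 1800.0  # illustrative projection constant
        # Latency win: the warp uses input sampled *after* rendering finished,
        # so the displayed image reflects newer input than the render itself.
        return warp_frame(rendered, int((latest_yaw - yaw_at_render) * pixels_per_radian))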
Agree, and I think it'd be great to just provide both numbers in reviews.

I'm going to try one more time to steer this back on topic. My intention was very much not to start a general discussion about the pros and cons of frame gen. As I've said in every single post I've made, frame gen is great tech to have available, and obviously in a lot of cases it's great to get additional smoothness. I'll state - yet again - that it's normally a better way to fill out additional vblanks on high refresh monitors than just repeating previous frames.
Everyone instead seems to have gotten fixated on the notion that the problem with it is that it adds some latency, and then arguing about whether it's "worth it" vs. the motion clarity improvements. That's a strawman of the original discussion. We can split it off to a separate thread if anyone actually wants to have this argument, but it's not an argument I'm making, and it's somewhat off-topic to the point I made here, specifically because that point relates to the press. Also, no one is saying that motion clarity doesn't matter; that is another strawman. I'll try one last time:
Generally the expectation if you report "card A at 120fps and card B at 60fps" (or 60 vs 30, or whatever) is that card A is notably more responsive than card B. Frame generation - even if it were completely free, had identical latency to the base frame rate and zero artifacts - breaks these assumptions in a major way. Since the feel of a game is pretty indisputably an important part of why we measure these things (of course there are diminishing returns, just like there are with motion smoothness!), that sort of reporting seems like a problem to me. If we are going to repurpose "FPS" to speak only to motion smoothness then we need to more consistently report a different metric for responsiveness.
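To make the distinction concrete, here's a deliberately simplified illustration: assume responsiveness tracks the rendered frame interval (plus whatever the pipeline adds) rather than the displayed frame interval. The numbers and the latency model are assumptions for illustration, not measurements of any particular implementation.

    # Simplified illustration: displayed FPS vs responsiveness under frame generation.
    # The latency model (rendered frame time + fixed pipeline overhead) is an assumption.
    def report(card: str, base_fps: float, fg_multiplier: int, overhead_ms: float = 0.0) -> None:
        displayed_fps = base_fps * fg_multiplier
        responsiveness_ms = 1000.0 / base_fps + overhead_ms  # tracks *rendered* frames only
        print(f"{card}: {displayed_fps:.0f} fps displayed, ~{responsiveness_ms:.0f} ms to react")

    report("Card A (native)", base_fps=120, fg_multiplier=1)  # 120 fps displayed, ~8 ms
    report("Card B (4x FG)", base_fps=30, fg_multiplier=4)    # 120 fps displayed, ~33 ms

Both cards would show up as "120fps" in a chart, yet they feel nothing alike to play.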
As this is the DF thread, I'd propose we try and keep the discussion related to media reporting and benchmarking of FG cases rather than all the separate discussions, although we can start threads for those as desired. I just see people increasingly fighting strawmen and arguing past each other here in a way that isn't making any forward progress.
Are you really trying to argue based on this that we should intentionally conflate the two cases even for the enthusiasts who care enough to bother looking up benchmarks? This feels like a pretty extreme level of motivated reasoning to me; I don't think that direction is a useful discussion to have.
Ultimately if you don't care, fine. But some of us really do care and buy products based on them feeling better to play games with, not just looking smoother. A 5070 absolutely will not be an experience equivalent to my 4090 for me (not even considering VR!), and I think it's completely reasonable to call out NVIDIA for making such a ridiculous claim, let alone the folks who perpetuate it.
The non generated frames could be called base fps, while the generated ones + base could be called total frames.

I'm not sure if you want to call it something else, like Rendered Frames per Second, or AIFPS, HFPS, etc., so that we can denote things going forward. I suppose the community can come to a decision on the easiest naming convention.
The non generated frames could be called base fps, while the generated ones + base could be called total frames.
Thankfully good reviewers will generally give fps for base game, with upscaling and upscaling + FG.
Do any reviews give FG numbers? I can’t think of any sites that do.
In the late 90s, if you had a 1-year-old GPU your hardware was outpaced already, and in the early 2000s every year and a half your hardware was kinda obsolete too. Back then I got the Matrox G400 MAX 32MB AGP just because it had 32-bit colour depth and because it was the first ever, iirc, GPU to feature environment mapped bump mapping, and it included a game that featured it.

I didn't quite get to put this into words with my previous posts, but FPS was the be-all and end-all of reviews in the early days of 3D acceleration. And then features started being tested in reviews. 32-bit vs 16-bit colour depth was featured in reviews when ATi and nVidia added the feature and 3Dfx fell behind. Filtering quality started getting front billing when it was revealed that nVidia and ATi were lowering the quality of filtering in common benchmarks. FSAA comparisons were common. Then filtering quality again with the rise of AF, because graphics cards started to ship with optimizations to AF that could cause image quality differences along with performance gains. The GeForceFX line had lots of image quality comparisons because it supported both higher quality and lower quality shaders than competing cards, and nVidia was swapping out shaders from shipping games at the driver level to gain performance, often with noticeable IQ reductions. This was a wild time: I remember updating drivers on my wife's computer and launching a game, and it looked like almost every surface had VRS on.

After that, reviewers struggled to describe frame pacing, often referring to micro-stutter but not having numbers to back up what they were feeling. Once frame times and frame pacing entered the standard vocabulary, reviews once again reverted to FPS, with mentions of frame times and pacing. But image quality has largely fallen out of fashion, except maybe IQ differences when different upscalers are used. And latency... well, only the "crazy" esports guys who were building custom hardware to test it even cared about it.
Reviews had a lot more nuance back then. Anyone who read them back then will, I'm sure, remember the AF test images of a checkerboarded tunnel, or the pages full of Half-Life 2 (I think?) utility poles and lines used to test AA. Or those zoom-ins to see where mipmaps changed between cards. Will this new future, with its reliance on upscaling and frame generation, result in harder work for reviewers? Probably. But I reject the idea that we, the consumers, are too dim to understand the nuance of these reviews. Some of us went through this already, and we figured it out just fine. Plus, we have Digital Foundry to teach the youngins now.
But I reject the idea that we, the consumers, are too dim to understand the nuance of these reviews.

We, the enthusiasts at Beyond3D, are not too dim to get it.
I personally feel that benchmarking of frame generation, and even of specific reconstruction techniques, is not all that useful.
Overall though, I think the current situation with reviews and general benchmarking is still entirely fine.