It's still present, but with ray reconstruction it's even worse.
The original PT stuff was pretty impressive but had a fair bit of noise and ghosting, so I was hoping ray reconstruction would help with that. I gave it a quick shot today and my experience was honestly pretty mixed. It definitely helps sharpness in reflections and shadows, but that wasn't really my primary complaint before.
In terms of noise, I think it's slightly better than before, especially in specular reflections.
In terms of ghosting though, it's a very mixed bag. Before, dynamic objects would make the background blurry. This was particularly noticeable when you opened doors and the like, or when a character walked across an area that is primarily indirectly lit. You'd effectively have them smear the background behind them into a blur that would resolve over the next half second or so.
Some of these cases are slightly improved now in terms of how far from the dynamic object the smearing happens, but on the downside the duration of the temporal integration/ghosting artifacts is *significantly* longer. The ML model seems to lean far too heavily on reusing history samples as long as they are somewhat "close" to pixels with non-zero motion vectors. Instead of rejecting those history samples, it keeps them, whereas the previous denoiser seemed tuned more aggressively to reject samples with disparate depths/spatial locations and then reconstruct a blurrier result from spatial samples instead.
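To make the trade-off concrete, here's a toy sketch of the kind of history-rejection heuristic I mean. This is not NVIDIA's actual code; the function, names, and thresholds are all invented to illustrate the two failure modes: hard-rejecting disoccluded history gives you short-lived spatial blur, while keeping it gives you long ghosting trails.

```python
def history_weight(depth_cur, depth_prev, motion_px,
                   depth_tol=0.02, motion_tol=8.0):
    """Return a [0, 1] blend weight for a reprojected history sample.

    depth_cur / depth_prev: linear depth at the current pixel and at the
    reprojected history location. motion_px: motion-vector length in pixels.
    Thresholds are made-up illustrative values.
    """
    # Relative depth disparity: large when a dynamic object has moved and
    # the history sample actually belongs to the (dis)occluded background.
    depth_disparity = abs(depth_cur - depth_prev) / max(depth_cur, 1e-6)
    if depth_disparity > depth_tol:
        return 0.0  # hard reject: fall back to blurry spatial reconstruction
    # Fade history out as motion grows, rather than keeping stale samples
    # just because they sit near pixels with non-zero motion vectors.
    return max(0.0, 1.0 - motion_px / motion_tol)

# Static pixel: full history reuse, clean converged result.
print(history_weight(1.0, 1.0, 0.0))  # 1.0
# Disocclusion behind an opened door: reject, accept a moment of blur.
print(history_weight(1.0, 0.6, 4.0))  # 0.0
```

The old denoiser behaves like the hard-reject branch; the new ML model behaves as if `depth_tol` were cranked way up, so stale samples survive much longer.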
The main issue is that the old denoiser's artifacts were roughly proportional to the amount of perceived motion in the frame, e.g. if an NPC was walking close to you across the screen or a door opened across most of the frame, it'd get very blurry for a second, but if the scene had relatively slow movement the ghosting wouldn't be too bad. This also has the benefit that the higher your sample count/frame rate, the fewer issues you have, as everything is effectively moving "slower" relative to the sampling. With the new ray reconstruction, even at 100fps (real frames, not frame gen) on a 4090 you get significant artifacts with very little movement. For instance, this is just NPC idle sway animation:
Unfortunately you don't really have to cherry-pick to find these cases.
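The frame-rate point above is easy to put numbers on. These are my own back-of-the-envelope figures, not measurements from the game: for a fixed on-screen speed, per-frame displacement shrinks linearly with frame rate, so each accumulated sample has less reprojection error to hide.

```python
def pixels_per_frame(speed_px_per_sec, fps):
    """Per-frame displacement for an object moving at a fixed screen speed."""
    return speed_px_per_sec / fps

# Hypothetical example: an object crossing a 2560-px-wide frame in 2 s
# moves at 1280 px/s.
print(pixels_per_frame(1280, 30))   # ~42.7 px/frame at 30 fps
print(pixels_per_frame(1280, 100))  # 12.8 px/frame at 100 fps
```

Which is why it's so surprising that even at 100fps the new model still smears on motion as small as idle sway.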
I don't know how I feel about this overall TBH. There are some legit improvements for sure, but unless they can resolve these issues without hurting the other improvements, I don't know if I'd really consider this shippable quality. Hopefully this is just a case of something buggy with motion vectors or similar, but it doesn't seem to be unique to skeletal meshes; it happens even with camera panning, which is one of the simplest motions.
Ideally this could also just be a case where they improve things significantly in a future update, like they did from DLSS 1.0 to 2.0. I like the simplicity of just dumping a bunch of data into an ML model and letting it sort things out, but it doesn't seem like we've gotten there yet. It remains to be seen whether we just need better tuning/training, more input data, or whether doing this temporal denoising entirely in screen space is ultimately not sufficient. Or maybe the RT sampling rate just needs to go up a fair bit and we're pushing this too far right now, even at 100fps with DLSS Quality (I even tried no upscaling and the artifacts were the same).
Anyways, as usual, very interesting tech, and I appreciate them putting it out for us to play around with, and Alex and DF giving us some good analysis. It just feels like we still have some ways to go before this is usable in scenes with even moderate amounts of motion.