What methodology did they use to measure the input latency? Are they using a high-speed camera to measure the interval between a control input and the corresponding change in the scene on the display?
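For what it's worth, the high-speed-camera approach usually boils down to counting camera frames between two events: the moment the input fires (often an LED wired to the mouse button) and the first visible change on screen. A minimal sketch of that calculation, with entirely made-up brightness traces and function names just for illustration:

```python
# Hypothetical sketch of input-to-photon latency measurement from
# high-speed camera footage. Assumes we already have, per captured
# frame, a brightness value for (a) an LED wired to the input device
# and (b) the screen region expected to change. All names and numbers
# here are illustrative, not from any real tool.

def first_frame_above(samples, threshold):
    """Index of the first frame whose brightness exceeds threshold."""
    for i, value in enumerate(samples):
        if value > threshold:
            return i
    raise ValueError("threshold never exceeded")

def input_latency_ms(led_brightness, screen_brightness,
                     camera_fps, threshold=0.5):
    """Latency = (screen-change frame - input frame) / camera fps."""
    t_input = first_frame_above(led_brightness, threshold)
    t_photon = first_frame_above(screen_brightness, threshold)
    return (t_photon - t_input) * 1000.0 / camera_fps

# Synthetic 1000 fps capture: LED lights at frame 10, the screen
# responds at frame 65, so the measured latency is 55 ms.
led = [0.0] * 10 + [1.0] * 90
screen = [0.0] * 65 + [1.0] * 35
print(input_latency_ms(led, screen, camera_fps=1000))
```

At 1000 fps each camera frame is 1 ms, so the resolution of the measurement is bounded by the capture rate.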
As most of us would have expected, buffering multiple frames isn't optimal from an end-user experience standpoint. Even with access to frames before they're presented, it's astonishing how negatively UI elements are affected. It's a bit concerning that all these comparisons were made using DLSS performance mode, where, conveniently, the most information is being generated in the time domain despite it having the lowest image quality. It will be an interesting comparison to see whether more visual anomalies crop up in the higher-quality upscaling modes, where the least information is being generated in the time domain ...
DLSS 3 is an explicit tradeoff: visual fluidity gained at the cost of responsiveness and, with it, end-user experience ...