MSAA trick? Pre PS4 Pro... If you knew better, you should have said it correctly in the first place. I know why you said it like that, and so does everybody else.
And opinions do not invalidate facts. Sony started it.
Watched for about 5 mins and dropped out.
They show the screenshot which is confirmed to not even be using FSR while saying how great it looks, and they show DLSS 1 from BFV again because, you know, that's still relevant in 2021 for some reason. Zero new info on FSR otherwise.
I'm not saying that FSR will offer higher IQ than DLSS 2.0, but this logic is faulty. The fact that DLSS 2.0 uses AI doesn't mean anything by itself. DLSS 1.0 also used AI, and that didn't stop it from being worse than a simple spatial upscaler.
"Firstly the bar for success is FSR IQ halfway between DLSS 1 and 2. Will be fun seeing how that's measured."

Yeah, I wonder what the comparison strategy will be here as well. Equalize performance and check quality? But shouldn't that also account for native performance to establish the baseline? Equalize quality and check performance? What do you do when one of the solutions isn't capable of similar quality? Will DLSS Ultra Performance mode suddenly become relevant to them in such comparisons? Etc.
"The comparisons to DLSS 1.0 are somewhat relevant as it was also a spatial upscaler."

Relevant for an initial FSR assessment, not so much for figuring out how it fits into the present-day status quo. I also wonder what their take on FSR will be, given that their take on DLSS 1 was "it's worse than just running the game at a lower resolution". Say FSR ends up "between DLSS 1 and 2" - wouldn't that mean it's the same as running the game at a lower resolution to them? And if so, shouldn't that make FSR completely useless in their view of the matter?
Nvidia's marketing team has been pretty successful at convincing some that "it's only good if it's using deep learning". Which is pretty much their job since they're the only ones putting lots of dedicated tensor cores in consumer GPUs.
Oddly, I'm not seeing anyone blasting UE5's temporal super resolution, which isn't using deep learning AFAIK.
It has been optimized on PS5 and XSX, and we should soon be able to have these optimizations enabled on D3D11, D3D12 and Vulkan. One is already available on PC with r.TemporalAA.R11G11B10History=1.
It is more expensive because, at its core, it is designed to be a temporal upscaler (TAAU with r.TemporalAA.Upsampling=1) more than a TAA pass, to maintain quality when upscaling.
Maintaining quality at a lower input resolution is what truly differentiates Gen5 TAA from Gen4's, and we find that the savings from lowering the input resolution quickly outweigh its performance overhead, while maintaining better output quality with the console optimizations. We already have cases where Gen5 TAAU, despite being configured with a lower screen percentage, looks better than Gen4's.

But where the performance and quality benefit really comes in is when you can lower the screen percentage even further: not only to compensate for the higher upscaling cost, but also to free up some ms on the rest of the frame that you can reinvest to increase quality, or even to turn on other rendering features that ultimately also contribute to higher output pixel quality.
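Putting the cvars from that post together, here is a minimal sketch of what this could look like in a project's DefaultEngine.ini (the [SystemSettings] section is the usual place for cvars; the 66.7 screen percentage is just an assumed example value, not something from the post):

    [SystemSettings]
    ; Run Gen5 TAA as a temporal upscaler (TAAU) instead of a plain TAA pass
    r.TemporalAA.Upsampling=1
    ; Console-derived R11G11B10 history optimization, already available on PC per the post above
    r.TemporalAA.R11G11B10History=1
    ; Assumed example: render each axis at ~2/3 of output resolution and let TAAU upscale
    r.ScreenPercentage=66.7

The same cvars can also be toggled at runtime from the in-game console, which makes A/B comparisons of upscaling cost vs. per-frame savings straightforward.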
"Oddly, I'm not seeing anyone blasting UE5's temporal super resolution, which isn't using deep learning AFAIK."

Is that because it looks pretty decent?

"Watched for about 5 mins and dropped out."
I couldn't finish it either. Terrible analysis as usual. They should stick to data-driven reviews.
You should certainly watch the video again, because they talk about how good FSR looks precisely while showing the ill-fated 4K native image. And stop telling others what they should talk about.

I saw the video again, and they are using B-roll while they speak. Their image quality comments were made on the footage that was confirmed to be using FSR.
It has a temporal component, so it should be able to pull out details a single-frame solution cannot.

The HWUB guys are being pretty level-headed about what to expect from FSR, especially considering they were recently bullied by Nvidia for not promoting DLSS and RTX as much as some other supposedly independent outlets.
It's also good that they're pointing out past overhypes from RTG as a way to say "we're not giving you a free pass on this".
"Is that because it looks pretty decent?"

Since it uses temporal accumulation, I can totally believe it's pretty decent (or even excellent). It'll be really interesting to see how it A/Bs vs. DLSS2.
Doesn't that actually prove that people aren't saying it's only good if you use ML?
"People" definitely are. Starting with wccftech's piece on FSR that was posted in this thread, plus a bunch of other posts here and elsewhere.Doesn't that actually prove that people aren't saying it's only good if you use ML?