Theoretically, let's say Sony has a magical upscaler that can convert half-resolution output to native with no loss of quality. Then half the TFs would still deliver identical performance.
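Back-of-the-envelope version of that claim, assuming shading cost scales linearly with pixel count (the TFLOPs figure here is a made-up example, not either console's spec):

```python
# If shading cost scales with pixel count, rendering half the pixels
# needs roughly half the compute budget.
native = 3840 * 2160            # 4K pixel count
half = native // 2              # upscaler input: half the pixels

tflops_native = 12.0            # hypothetical budget for native 4K
tflops_needed_half = tflops_native * (half / native)

print(tflops_needed_half)       # 6.0
```

Of course real frames aren't purely pixel-bound (geometry, simulation, and fixed per-frame costs don't shrink), so the saving in practice is less than a clean 2x.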
Speaking of magical upscaling..
Microsoft's DirectML is the next-generation game-changer that nobody's talking about
https://www.overclock3d.net/news/so...on_game-changer_that_nobody_s_talking_about/1
This is a way bigger deal for retro games. Some of the results are astounding.
Maybe. On the other hand the XSX APU has that "8K" text etched on die for some reason...
In the grand scheme of things, the difference in FLOPS doesn't matter, because most 3rd-party developers will just target the lowest common denominator, so both versions will end up using the same FLOPS, no?
Unless that difference is 1.9
Tommy McClain
Sony makes some of the best ASIC upscalers, with the recent models using AI databases (don't they all now?). They could leverage that ASIC work to avoid wasting GPU cycles, but I don't know if it's really processing-intensive. The hard part is building the deep-learning data; during rendering it's just applying the inference model. There is no actual deep learning involved at runtime.
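A minimal sketch of that split, with entirely made-up names and a toy "model" standing in for a real network (this is not Sony's actual pipeline): the expensive learning happens once, offline; the per-frame runtime path only applies the frozen result.

```python
# Hypothetical illustration of offline training vs runtime inference.

def train_offline(dataset):
    """Expensive: run once, off-device, to produce the model.
    Here the 'learning' is just averaging a gain factor from
    (low-res value, high-res value) training pairs."""
    return sum(hi / lo for lo, hi in dataset) / len(dataset)

def upscale_at_runtime(pixel, weights):
    """Cheap: per-frame inference only applies the frozen weights.
    No learning happens here."""
    return pixel * weights

# Offline stage: build the model from training pairs.
model = train_offline([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])

# Runtime stage: apply it to new data.
print(upscale_at_runtime(10.0, model))  # 20.0
```

The point is the asymmetry: the training loop can be arbitrarily expensive because it never ships on the console or TV; only the forward pass has to fit the per-frame budget.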
I find myself having difficulty discerning the difference between sarcastic, hyperbolic posts meant in jest and those which are meant seriously.
Are they low-latency, though? That, to me, is the tricky part.
Yeah, good question. I know their frame interpolator needs 45ms, and there's other stuff in modern TVs that is also temporal; it's all useless for gaming. HDR post-processing also needs a few frames, depending on the brand.
Perhaps because it's just 386 pages of people arguing about the color of Schrödinger's Cat.
"Concretely, switching back to tensor cores and using an AI model allows Nvidia to achieve better image quality, better handling of some pain points like motion, better low-resolution support and a more flexible approach. Apparently this implementation for Control required a lot of hand tuning and was found not to work well with other types of games, whereas DLSS 2.0 back on the tensor cores is more generalized and more easily applicable to a wide range of games without per-game training."
Would like a poll to see how many believe this.