Digital Foundry Article Technical Discussion [2022]

If there were to be a case for more customised semi-custom silicon next generation, perhaps it could be along the lines of Nvidia's tensor cores and image flow processor gubbins.

The 3D pipeline and even GPU Compute (using shader hardware) seems fairly standardised-ish now (give or take), but upscaling and frame interpolation seems like the wild west. NV Tensor, Intel XMX, AMD .... nothing.
There has been some standardization effort by Intel and Nvidia that would benefit developers: through Streamline, Nvidia and Intel are working together on a single open-source integration plugin. Also, ARM, Intel and Nvidia recently published an FP8 standard specification for AI, but you are correct that AMD seems to have its own agenda.
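For context, that FP8 spec boils down to two 8-bit formats: E4M3 (4 exponent bits, 3 mantissa bits, bias 7, no infinities, max 448) and E5M2 (5 exponent bits, 2 mantissa bits, bias 15, IEEE-style infinities/NaNs, max 57344). A quick decoder sketch written from the published description (illustrative only, not from any vendor library; the names are mine):

```python
def decode_fp8(byte: int, exp_bits: int, mant_bits: int, bias: int,
               ieee_specials: bool) -> float:
    """Decode one FP8 byte into a Python float."""
    sign = -1.0 if byte >> 7 else 1.0
    exp = (byte >> mant_bits) & ((1 << exp_bits) - 1)
    mant = byte & ((1 << mant_bits) - 1)
    if exp == (1 << exp_bits) - 1:
        if ieee_specials:                 # E5M2: IEEE-style inf/NaN
            return sign * float("inf") if mant == 0 else float("nan")
        if mant == (1 << mant_bits) - 1:  # E4M3: only all-ones is NaN,
            return float("nan")           # other top-exponent codes stay normals
    if exp == 0:                          # subnormals
        return sign * mant * 2.0 ** (1 - bias - mant_bits)
    return sign * (1 + mant / (1 << mant_bits)) * 2.0 ** (exp - bias)

e4m3 = lambda b: decode_fp8(b, 4, 3, bias=7, ieee_specials=False)
e5m2 = lambda b: decode_fp8(b, 5, 2, bias=15, ieee_specials=True)

print(e4m3(0b0_1111_110))  # 448.0   (largest E4M3 value)
print(e5m2(0b0_11110_11))  # 57344.0 (largest E5M2 normal)
```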
 

Not sure if there'll be some kind of standardisation before next gen consoles roll around.
There's no incentive for AMD to ever consider specialized HW that doesn't help them win in benchmarks. They seem perfectly content offering similar techniques as software solutions. Even the vendors with such specialized HW (NV/Intel) can't agree on which method (DLSS/XeSS) should be the standard, and probably can't even implement each other's methods ...
 
Nvidia's and Intel's matrix cores will help them in benchmarks as soon as games use more real-time ML. You can use ML for all sorts of things in gaming beyond upscaling.

There's no doubt AMD will follow.

Now that you mention it, I wonder if the PC version of Spiderman uses the same ML-based muscle deformation the PS5 version uses.

I've never thought to check that until now.
 
I always thought they added it to Spiderman too?

If it is just in MM, it will be interesting to see whether the PC version has it and whether there's a toggle for it.
I can only find something about muscle deformation with Miles Morales, so yeah, it seems to be exclusive to that one.

I don't think there will be a toggle; it might just run slower with FP16 on older GPUs and use DP4a for faster speed on modern architectures.
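For a sense of what that split means: DP4a is a GPU instruction that computes a dot product of four packed int8 values with a 32-bit integer accumulate, i.e. four multiply-accumulates per instruction, while the generic path does the same math in half-precision floats. A minimal Python model of the arithmetic (illustrative only, not GPU code):

```python
import numpy as np

def dp4a(packed_a: bytes, packed_b: bytes, acc: int) -> int:
    """Model of one DP4a: four int8 multiplies, int32 accumulate."""
    a = np.frombuffer(packed_a, dtype=np.int8).astype(np.int32)
    b = np.frombuffer(packed_b, dtype=np.int8).astype(np.int32)
    return acc + int(a @ b)

def fp16_dot(a: np.ndarray, b: np.ndarray, acc: float) -> np.float16:
    """The generic fallback: the same dot product in half precision."""
    return np.float16(acc + a.astype(np.float16) @ b.astype(np.float16))

a = np.array([1, -2, 3, -4], dtype=np.int8)
b = np.array([5, 6, -7, 8], dtype=np.int8)
print(dp4a(a.tobytes(), b.tobytes(), 0))  # -60
print(fp16_dot(a, b, 0.0))                # -60.0
```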
 
I was hoping the PS5 Pro would have 240 Hz support.

It's unlikely that a console will support features that aren't supported by televisions, at least not a console made by Sony. They have very little history of display outputs that are incompatible with televisions.

Microsoft do implement/expose some non-television features in their display outputs (non-TV resolutions, for example), but I'd be doubtful that even they would create a console that supported 240 Hz: not only do TVs not support it, but having hardware that can render at that framerate, at anything resembling current console pricing, would be a herculean task.

So, at least two things need to happen, IMO, before Sony would make a PlayStation console with 240 Hz support.
  • There must be at least some televisions that support 240 Hz input, preferably ones that Sony manufactures.
  • There must be hardware that allows a console to render games at 240 Hz, with graphics quality at least approaching what's contemporary at the console's release, at expected console pricing.
Regards,
SB
 
I suspect some of the current 1080p modes could get near 240 Hz on a Pro model.
 
You don't need 240 fps to benefit from a 240 Hz refresh rate.
The 120 Hz output of the Xbox One S is quite beneficial even to games that can't maintain a constant 30 fps.
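To put numbers on that: with vsync, a frame that misses a refresh waits for the next one, so the worst-case added wait is one refresh interval; and higher refresh rates divide evenly by more framerates, which means even frame pacing. A quick back-of-envelope:

```python
# Worst-case vsync wait is one refresh interval.
for hz in (60, 120, 240):
    print(f"{hz:>3} Hz: refresh every {1000 / hz:.2f} ms")
# 60 Hz: 16.67 ms, 120 Hz: 8.33 ms, 240 Hz: 4.17 ms

# Framerates that map to a whole number of refreshes (even frame pacing):
for hz in (60, 120, 240):
    print(hz, [fps for fps in range(20, hz + 1) if hz % fps == 0])
# 60  -> [20, 30, 60]
# 120 -> [20, 24, 30, 40, 60, 120]
# 240 -> [20, 24, 30, 40, 48, 60, 80, 120, 240]
```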
 
I can only find something about muscle deformation with Miles Morales, so yeah, it seems to be exclusive to that one.

I don't think there will be a toggle; it might just run slower with FP16 on older GPUs and use DP4a for faster speed on modern architectures.
So the underlying tech, ZivaRT, was acquired by Unity, and a plugin to run their models in Unity was recently made available for free. When it was released about a month ago I gave it a try in an empty Unity project and imported the samples provided with the plugin: a cheetah walking and a face making a variety of expressions. On my 1070 the performance cost was ... basically zero. I couldn't measure a change in the frametimes when enabling or disabling the plugin while running on the GPU, and the reported CPU processing time for the scene was within 0.1 ms of the graphics frametime. The cheetah model has something like 500k vertices and the face ~250k.

My takeaway is that the performance cost doesn't have to be high to provide useful results; of course it may be higher on "hero" models in a real game. Of note, they only offer solvers for CPU and GPU compute shaders, no tensor cores. My best guess for why this is so performant is that it is solving for an order of magnitude fewer vertex positions than the millions of pixels used in upscaling/computer-vision models.
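As a rough sanity check on that last guess, using the numbers above (back-of-envelope only):

```python
# Outputs per frame: a deformer solves vertex positions, an upscaler pixels.
cheetah_vertices = 500_000            # sample model from the post above
pixels_4k = 3840 * 2160               # ~8.3M pixels in a 4K target
print(pixels_4k / cheetah_vertices)   # ~16.6x more outputs for the upscaler
# (Per-output network cost differs too, so this is only a rough intuition.)
```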
 

Thanks for the information and tests. Confirms my suspicion that this MM muscle deformation would basically run on anything. We had advanced AI on the first Xbox....
 
Nvidia's and Intel's matrix cores will help them in benchmarks as soon as games use more real-time ML. You can use ML for all sorts of things in gaming beyond upscaling.

There's no doubt AMD will follow.
So far there's no direct benefit from such HW for either the graphics pipeline or the ray tracing pipeline. There's a really common theme (upscaling/TAA/denoising/frame interpolation) behind throwing the kitchen sink that is AI/ML HW at post-processed effects. AI/ML is far away from the holy grail of assisting G-buffer generation/BVH traversal or generation/texture sampling/shading/rasterization. We're far more likely to see AI/ML-enhanced post-processed effects in the near future, such as depth of field/motion blur/2D UIs, than any real acceleration for the graphics or ray tracing pipeline ...
 
You must not be very well versed technically, as you should know that RTX Remix literally uses the same physics and animations as the original game: it does not replace those things. So you are calling Valve, the original developers of Portal, amateurs.
Cool Story
This video (spicy title, but the extreme negativity intrigued me and it was worth a watch to get a sense of the complaints) mentions that Portal RTX must be using Portal’s DX8 path, which supposedly had some visual downgrades from the default DX9 path and may explain some of the changes (e.g., the portals).

This isn't the place to get into the artistic merits of old vs. new, but it's interesting how the DX9 and DX8 portal textures differ, and how RTX's retextured portals missed the brightness gradient (portals are darker at the top), which is much subtler in the DX8 version.
 
That would be weird. Doesn't RTX Remix support DX9? Pardon me if the video explains this, but I don't have time to watch it currently.
 