Not quite.
AMD actually has a tensor core equivalent on their Instinct MI line; they're called Matrix Cores.
Tensor cores are large and designed specifically for deep learning training, and that's overkill for a console.
So that leaves two remaining options. The first is XDNA, which is low-power, IoT-oriented Xilinx silicon designed to run AI models efficiently with little die area and little power. Those are in the Strix processors today. Perhaps they have a custom, larger variant of this in the 5 Pro.
Second, if it's not that, then the only remaining way is to do this via the compute engines, and with some creative math you can get to 300 TOPS (rough sketch of that math below). An alternative is that they customized something in the GPU to support sparse formats, which would also get you to 300. But then you're sacrificing GPU rendering time to do it, as opposed to running in parallel, so I'm not really sure that makes sense.
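Just to show what that "creative math" tends to look like, here's a back-of-envelope sketch. Every number in it (CU count, clock, per-CU 8-bit rate, sparsity multiplier) is an assumption for illustration, not a confirmed spec:

```python
# Back-of-envelope estimate of an 8-bit TOPS marketing figure.
# All values below are guesses for illustration, not confirmed hardware specs.

cus = 60                            # assumed compute unit count
clock_ghz = 2.23                    # assumed GPU clock in GHz
int8_ops_per_clock_per_cu = 1024    # assumed 8-bit ops per clock per CU (multiply-add counted as 2 ops)
sparsity_factor = 2                 # 2:4 structured sparsity is often marketed as a 2x multiplier

# CUs * (1e9 cycles/s per GHz) * ops/cycle * sparsity, divided by 1e12 to get TOPS
tops = cus * clock_ghz * int8_ops_per_clock_per_cu * sparsity_factor / 1000
print(f"{tops:.0f} TOPS")           # ~274 TOPS with these guesses, i.e. roughly the 300 ballpark
```

The point is that the headline number hinges on which multipliers you count (packed 8-bit math, dual issue, sparsity), and those cycles come out of the same shader array the renderer is using.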