AI training - probably yes, since that is what it was designed to do well. But this line of discussion started around the usefulness for scientific HPC applications. According to Nvidia, TF32 provides the same accuracy as FP32 in training. From here:
https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/
In fact, I have the results of Nvidia's comparison between FP32 and TF32. I'm not sure I can share them since I don't see them anywhere online, but I can say that networks trained using TF32 reach the same accuracy as those trained with FP32. For AI, TF32 really is a safe replacement for FP32, with a huge speedup in performance.
Apart from that: Tensor Cores are massive MMA (matrix multiply-accumulate) arrays. AFAIK they cannot do anything else; for example, they do not have the SFUs (special function units) found in the traditional CUDA cores.