Jawed
Perhaps there are still algorithms where texture-based memory accesses on contemporary NVidia chips outperform straight memory-based accesses. This has certainly been true in the past.

Not sure I buy this... I mean, why bother with texturing at all if it's strictly an HPC part?
Also, given NVidia's emphasis on image processing for deep learning (to the extent that they seem to have caught up with GCN's image-processing ops for working on 8-bit data, for example), texture units provide a really good fast access path for sub-32-bit data types.
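To make the "fast access path for sub-32-bit data" point concrete, here's a minimal sketch of reading 8-bit data through the texture path with CUDA's texture-object API. Loads via `tex1Dfetch` go through the texture cache, which natively handles sub-word element sizes; the kernel, buffer size, and the sum-reduction are just illustrative choices, not anyone's actual benchmark.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Sum a buffer of 8-bit values, fetching each element through the
// texture cache rather than a plain global load.
__global__ void sum8bit(cudaTextureObject_t tex, int n, unsigned int *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        unsigned char v = tex1Dfetch<unsigned char>(tex, i);
        atomicAdd(out, (unsigned int)v);
    }
}

int main() {
    const int n = 1024;
    unsigned char *d_buf;
    cudaMalloc(&d_buf, n);
    cudaMemset(d_buf, 1, n);  // every byte = 1, so the sum should be n

    // Bind the linear buffer as a texture of 8-bit elements.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeLinear;
    res.res.linear.devPtr = d_buf;
    res.res.linear.desc = cudaCreateChannelDesc<unsigned char>();
    res.res.linear.sizeInBytes = n;

    cudaTextureDesc td = {};
    td.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex;
    cudaCreateTextureObject(&tex, &res, &td, NULL);

    unsigned int *d_out;
    cudaMalloc(&d_out, sizeof(unsigned int));
    cudaMemset(d_out, 0, sizeof(unsigned int));

    sum8bit<<<(n + 255) / 256, 256>>>(tex, n, d_out);

    unsigned int h_out = 0;
    cudaMemcpy(&h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("%u\n", h_out);  // expect 1024 given the memset above

    cudaDestroyTextureObject(tex);
    cudaFree(d_buf);
    cudaFree(d_out);
    return 0;
}
```

The same buffer read with an ordinary `unsigned char*` pointer would go through the regular load path; whether the texture route actually wins depends on the access pattern and the chip generation, which is exactly the point being debated above.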