Gipsel, this is the Andy Glew presentation that originally confused me (since he insisted that GPUs actually used this, and because he used the same "SIMT" term):
http://parlab.eecs.berkeley.edu/sites/all/parlab/files/20090827-glew-vector.pdf
Here he says that all GPUs use this type of SIMT, and he stresses that it's not just robust SIMD with masking. The description on pages 25/26 is what I thought nVidia GPUs were using. Having read about the ISAs of AMD GPUs and descriptions from posters here, I didn't think it was anything more than SIMD with predication, and I didn't know about the ability to create wavefronts dynamically.
I think he was just wrong about what GPUs at the time were doing, and I took it on good faith that someone in his position wouldn't be...
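For concreteness, here's a minimal sketch (a hypothetical CUDA kernel, not from the slides) of what I mean by "SIMD with predication": on a divergent branch, the whole warp walks through both sides in turn, with an active mask deciding which lanes actually commit results.

```cuda
// Hypothetical kernel, purely illustrative.
__global__ void divergent(const int *in, int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] & 1)
        out[i] = in[i] * 3;  // lanes where the condition holds execute with their mask bit set
    else
        out[i] = in[i] / 2;  // then the mask flips and the remaining lanes execute this side

    // Under plain SIMD-with-masking, the warp serializes both paths and
    // masked-off lanes just idle. The pages 25/26 scheme would instead
    // regroup diverged lanes into new, more fully populated wavefronts.
}
```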