"Doesn't DirectX 10 compliance require the GPU to process integers at (I don't remember which) precision, which Xenos doesn't?"

Yes. DX10 requires 32 bit integer processing, and HLSL has a robust integer instruction set. However, integer processing is not that useful on pure DX10 hardware, because DX10 doesn't support compute shaders. It's in compute shaders that you really need integers, for address calculation (complex data structures, array indexing, thread block local memory addressing, etc, etc).
32 bit floating point (supported by Xenos as well) has a 24 bit mantissa. You can do integer calculations in these 24 bits (with bit perfect results), if you are careful and know exactly what you are doing. Most common integer operations can be emulated with floating point operations. For example, if you want to shift a value up and add it to a bit mask, you can do that with a single floating point multiply-add instruction (multiply by a power of two). Some operations (for example shift down) need an extra floor instruction added after the operation to guarantee exact results. In general, 24 bit integers are more than enough for pure graphics rendering (in pixel and vertex shaders). For complex compute shader code you would need full integer support, but that's not a big deal, since DX10 doesn't support compute shaders.
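To make the trick concrete, here is a minimal sketch in C (the values and the 4096 scale factor are just illustrative; the same pattern maps directly to shader code) that packs and unpacks two 12 bit values using only float multiply-adds and floors, staying inside the 24 bit mantissa:

```c
#include <math.h>
#include <stdio.h>

/* Minimal sketch (illustrative values): emulate integer "shift up + add to a
   bit mask" and "shift down" with float math, staying within the 24 bit
   mantissa of a 32 bit float. */
int main(void)
{
    float hi = 1234.0f; /* 12 bit value (0..4095) */
    float lo = 3210.0f; /* 12 bit value (0..4095) */

    /* Shift up by 12 and add: a single multiply-add by a power of two.
       The result (< 2^24) is bit exact in float. */
    float packed = fmaf(hi, 4096.0f, lo);

    /* Shift down by 12: multiply by 2^-12, then an extra floor is needed
       to guarantee an exact integer result. */
    float hiOut = floorf(packed * (1.0f / 4096.0f));
    float loOut = packed - hiOut * 4096.0f;

    printf("packed=%.0f hi=%.0f lo=%.0f\n", packed, hiOut, loOut);
    return 0;
}
```

The C version just makes the bit-exactness easy to verify on the CPU; on the GPU the same multiply-add and floor sequence compiles to a couple of full rate instructions.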
The addition of full integer support in DX10 was a good stepping stone towards GPU compute (in DX11), but it didn't help DX10 games that much. I have written a real time DXT compression algorithm purely using floating point for Xbox 360. The same floating point algorithm actually runs exactly as fast on DX10 PC hardware as a comparable algorithm written with integer instructions (both are bandwidth bound). With floating point code you can abuse the full speed multiply-adds to do two things at once. In comparison, 32 bit integer multiply (or multiply-add) is quite slow on most GPUs (1:6 rate on Kepler). So if 24 bits is enough (and your algorithm does not need overflow/underflow support), the floating point unit is often good enough for simple integer processing and should perform similarly to real integer processing.
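As an illustration of multiply-adds doing two things at once, here is a sketch (again in C, not the actual Xbox 360 compressor; the helper name pack565 is made up) that quantizes a color and packs it into a DXT-style 5:6:5 endpoint word, where each multiply-add both "shifts" and merges the next channel:

```c
#include <math.h>
#include <stdio.h>

/* Sketch only: build a DXT-style 5:6:5 endpoint word with float math.
   Each fmaf both "shifts" (multiplies by a power of two) and merges the
   next channel, so scaling and packing happen in one instruction. */
static float pack565(float r, float g, float b) /* inputs in [0, 1] */
{
    float ri = floorf(r * 31.0f + 0.5f); /* 5 bit red   */
    float gi = floorf(g * 63.0f + 0.5f); /* 6 bit green */
    float bi = floorf(b * 31.0f + 0.5f); /* 5 bit blue  */

    /* word = ((ri << 6) | gi) << 5 | bi, expressed as two multiply-adds. */
    float word = fmaf(ri, 64.0f, gi);
    return fmaf(word, 32.0f, bi);        /* exact integer in [0, 65535] */
}

int main(void)
{
    printf("%.0f\n", pack565(1.0f, 0.5f, 0.25f)); /* prints 64520 */
    return 0;
}
```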
(maybe this discussion could be moved to a separate thread... it's starting to go OT here)