When selecting an initial color for an object, you'll want to be working in 32-bit. Anything lower limits your range, and anything higher is beyond the realm of human perception (the color deviations aren't large enough to be discernible by the human eye).
As for dropping to a lower bit depth when you can get away with it: I've never once noticed whether any of the colors I was using on any of the models I've built (or textures, for that matter) happened to be one of the base colors that lower precision could handle. Once you start adding shaders, ray tracing (if you're using it), and/or radiosity, the computation time for the final render is going to be so long that starting off with lower color precision amounts to pretty much nothing in terms of saved time.
There's also the hardware acceleration end. Running in 16-bit can improve performance a decent amount on some hardware compared to 32-bit, but modern rasterizers don't tend to like working in 256-color (8-bit) mode, and to the best of my knowledge not even the low-end modeling packages support 8-bit color (let alone Maya, SI, Lightwave, 3DSM, etc.).
What I'm questioning is: would that color choice typically lie within a 16-bit range?
It could be that artists do end up choosing a 16-bit-level color on a not-infrequent basis, but odds are they won't even realize it. If you're working in PS, do you know whether the color you're using is available at a lower bit depth than the one you're working in? (Honest question; I sure as hell don't.)
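To make that question concrete, here's a quick Python sketch of what "available at a lower bit depth" means in practice, assuming 16-bit means RGB565 (5 bits red, 6 green, 5 blue) and using a standard bit-replication expansion. The helper names are mine, not anything PS or a modeling package actually exposes:

```python
# Check whether an 8-bit-per-channel RGB color survives a round trip
# through 16-bit RGB565. If it round-trips exactly, it was one of the
# "base colors" a lower precision could have represented anyway.

def to_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def from_rgb565(c):
    """Expand RGB565 back to 8 bits per channel via bit replication."""
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def survives_16bit(r, g, b):
    """True if the color is exactly representable in RGB565."""
    return from_rgb565(to_rgb565(r, g, b)) == (r, g, b)

print(survives_16bit(255, 255, 255))  # pure white: True
print(survives_16bit(200, 100, 50))   # an arbitrary pick: False
```

Only 65,536 of the ~16.7 million 24-bit colors round-trip cleanly, so an arbitrary color picked by eye almost certainly isn't one of them, which is the point: you'd never know without checking.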
I dunno, am I making any sense?
Yes, the question makes sense, although I think you'll find that anyone working in 3D is going to be running at least 32-bit color. Besides the color issues mentioned above, you also reduce your Z accuracy when you run lower than 32-bit, and 16-bit Z is completely unacceptable for anything approaching pro level.
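Here's a rough sketch of why 16-bit Z falls apart. With a standard perspective depth mapping, most of the codes bunch up near the near plane, so world-space precision far from the camera gets terrible. The near/far planes and the test distance are just illustrative assumptions:

```python
# Estimate the world-space gap between adjacent Z-buffer codes at a
# given eye-space distance, for different Z-buffer bit depths.
# Uses the standard perspective mapping d = (f/(f-n)) * (1 - n/z).

def eye_z_for_code(code, bits, near, far):
    """Eye-space depth corresponding to an integer Z code (0..2^bits-1)."""
    d = code / (2**bits - 1)            # normalized depth in [0, 1]
    return (far * near) / (far - d * (far - near))

def step_at(z_target, bits, near=0.1, far=1000.0):
    """World-space distance between adjacent Z codes around z_target."""
    # find the integer code whose eye-space depth is closest to z_target
    d = (far / (far - near)) * (1.0 - near / z_target)
    code = round(d * (2**bits - 1))
    return abs(eye_z_for_code(code + 1, bits, near, far)
               - eye_z_for_code(code, bits, near, far))

# At 500 units out with a 0.1/1000 near/far: 16-bit Z can't tell
# apart surfaces tens of units from each other, while 32-bit resolves
# down to a tiny fraction of a unit.
for bits in (16, 24, 32):
    print(bits, step_at(500.0, bits))
```

The jump from 16 to 32 bits isn't a 2x improvement, it's orders of magnitude, which is why Z-fighting that's invisible at 32-bit is all over the place at 16.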