It depends on whether there's any truth to the possibility of AGI, and superintelligent AGI in particular. If such a thing is possible, it may come to pass that a program will be better able to tell what you want, and even override what you've asked for with what's actually best for your particular problem, a task which may exceed your own capacity.
I believe it's not impossible to figure out what someone might want based on their past behavior (or maybe even a brain scan?), but if it can do that with, say, 95% accuracy, then I'd say it probably has its own consciousness.