Every language feature can become a bad programming practice if used incorrectly. It all depends on what you're trying to do.
Function pointers have one downside, one that not even CPU branch prediction can overcome: they hamper interprocedural analysis in compilers. (I speak of function pointers in C, not, say, higher-order functions in Haskell.)
Thus, even though the CPU may reduce the cost of an indirect call, it can't fix the fact that optimizations were short-circuited: code didn't get inlined when it could have been, registers were allocated less effectively, copies that could have been propagated weren't, common subexpressions that could have been eliminated weren't, etc.
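A tiny sketch of what I mean (the function names are mine, purely illustrative): at a direct call site the compiler can see the callee and inline it, but through a pointer it generally has to assume the target could be anything and emit a real indirect call.

```c
/* square() is visible at the direct call site, so the compiler can
   inline it, fold constants, and optimize across the call boundary. */
int square(int x) { return x * x; }

/* Direct call: square() can be inlined; direct(3) may compile down
   to a constant or a couple of instructions. */
int direct(int x) {
    return square(x) + 1;
}

/* Indirect call: unless the optimizer can prove which function `op`
   holds, it must emit a genuine call and keep the surrounding code
   generic -- no inlining, no cross-call copy propagation or CSE. */
int indirect(int (*op)(int), int x) {
    return op(x) + 1;
}
```

Compile both with `-O2` and compare the generated assembly: `direct` typically collapses into straight-line code, while `indirect` keeps a `call *%reg`-style instruction.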
This is not necessarily always bad, but if it happens in a function that is invoked 99 gazillion times, it could be.
In languages like Java, you have the added problem that all methods are virtual by default (the opposite of C++). The way Java overcomes the performance hit is by profiling and recompiling at runtime, ultimately engaging in speculative optimizations: assume the receiver is of class B and not C, optimize the call as if it were B's method, but include a runtime check on the receiver's type first, and if it turns out to be C, invoke it via the slow, non-optimized path.
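That speculative pattern (sometimes called guarded devirtualization, or a monomorphic inline cache) can be sketched in plain C. Everything here is illustrative, not a real JVM mechanism: a cheap pointer comparison guards an inlined fast path, with the generic indirect call as the fallback.

```c
typedef int (*method_t)(int);

int method_b(int x) { return x + 1; }   /* the commonly observed target */
int method_c(int x) { return x * 2; }   /* the rare alternative */

/* Roughly what the JIT emits after profiling shows method_b is the
   usual target: test the pointer, take the inlined fast path if it
   matches, otherwise fall back to the slow indirect call. */
int dispatch(method_t target, int x) {
    if (target == method_b) {
        /* fast path: body of method_b inlined, open to further
           optimization in the surrounding code */
        return x + 1;
    }
    /* slow path: generic indirect call, no inlining */
    return target(x);
}
```

The guard costs one compare-and-branch, which the CPU's branch predictor handles well when the call site really is monomorphic, so the common case runs at inlined speed.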
In languages like Haskell, the compiler often has enough information to prove what a late-bound value is and thus eliminate the late-bound dispatch *if it wants to*. In fact, in such lazy languages, pretty much all function calls are deferred until they are proven to be needed.