Why are people prejudiced against function pointers?

K.I.L.E.R

Why?

What's wrong with using them?
I use them, and I've never had any problems with them.
It's not like I use them everywhere anyway.
So why are (imaginary) people picking on me and telling me that it's a bad programming practice?
 
Function pointers are not bad. However, when you don't need them, you shouldn't use them, because they are slower than normal function calls. On the other hand, most modern CPUs do support indirect call prediction, so they're not that slow anymore. But they're still slower.

All "virtual" functions in C++ are implemented with function pointers. So saying function pointer is bad is quite, ugh, strange.
 
Every language feature can become a bad programming practice if used incorrectly. It all depends on what you're trying to do.

Function pointers have one downside, one that not even CPU branch prediction can overcome: they hamper interprocedural analysis in compilers. (I speak of function pointers in C, not, say, higher-order functions in Haskell.)

Thus, even though the CPU may reduce the cost of an indirect call, it can't fix the fact that optimizations were short-circuited: code didn't get inlined when it could have been, registers were allocated less well, copies that could have been propagated weren't, subexpressions that could have been eliminated weren't, etc.

This is not necessarily always bad, but if it happens in a function that is invoked 99 gazillion times, it could be.
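
A minimal sketch of the difference (square, direct, and indirect are made-up names; how much a given compiler can see through the pointer varies):

Code:
static int square(int x) { return x * x; }

int direct(int n) {
    // Direct call: the compiler can inline square(), fold constants,
    // and optimize across the call boundary.
    return square(n) + 1;
}

int indirect(int n, int (*f)(int)) {
    // Indirect call: unless the compiler can prove what f points to,
    // it must emit a real call, and inlining, copy propagation,
    // common-subexpression elimination, and good register allocation
    // across the call are all off the table.
    return f(n) + 1;
}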

In languages like Java, you have the problem that all methods are virtual by default (the opposite of C++). The way Java overcomes the performance issue is by profiling and recompiling at runtime, and ultimately engaging in speculative optimization (assume a given A* is bound to a B and not a C, optimize the call as if it were B's method, but include a runtime type check first to see whether it is actually a C, and if so, invoke via the slow non-optimized path).
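
Roughly, the guard looks like this (a hand-written C++ approximation of what a JIT might emit, not actual JIT output; a real JIT compares the object's vtable/class pointer rather than calling typeid):

Code:
#include <typeinfo>

struct A { virtual int f() = 0; virtual ~A() {} };
struct B : A { int f() { return 1; } };
struct C : A { int f() { return 2; } };

// The speculative call site: guard on the expected type, then use the
// statically bound (and inlinable) body; otherwise take the slow path.
int call_f(A* a) {
    if (typeid(*a) == typeid(B))
        return 1;        // speculatively "inlined" body of B::f()
    return a->f();       // slow path: ordinary virtual dispatch
}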

In languages like Haskell, the compiler has enough information to prove what a late-bound value is and thus eliminate a late-bound dispatch *if it wants to*. In fact, in such lazy languages, pretty much all calls are eliminated unless it's proven they are needed. :)
 
Well, I admit I'm biased against them. But I have the same bias against using 'raw' memory allocation and pointers.

Most of this comes from having to deal with function-pointers-gone-bad (in other people's code...). Once they do go bad, in my experience at least, the debugger can fall over too.

I also dislike trying to read them; I've had to maintain functions that took multiple function pointers as arguments in the past, and it is something I do not recommend.

Also, being a heavy user of .NET, and knowing just how well delegates and events (managed function pointers) work, going back to raw function pointers feels very... unclean.

Also, declaring C++ method pointers isn't always intuitive.
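
For reference, the syntax in question (Widget and resize are made-up names):

Code:
#include <cstdio>

struct Widget {
    int resize(int w) { return w * 2; }
};

int main() {
    // Declaring a pointer to a member function...
    int (Widget::*op)(int) = &Widget::resize;

    Widget w;
    Widget* p = &w;
    // ...and the two call syntaxes, which trip people up every time.
    std::printf("%d\n", (w.*op)(21));   // through an object: .*
    std::printf("%d\n", (p->*op)(21));  // through a pointer: ->*
}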
 
In OO languages they feel somewhat "unclean" to me. (Yes, I know, not exactly a scientific argument - I guess I've just been developing in Java for too long... ;) )
In practice, the Command pattern works well enough for me (to avoid those horrible switch monsters).
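
A minimal sketch of the idea (shown in C++, though it's the same in Java; the command names are invented):

Code:
#include <cstdio>
#include <map>
#include <string>

// Each command is an object with a virtual execute(), looked up by
// name, instead of one giant switch over an opcode.
struct Command {
    virtual void execute() = 0;
    virtual ~Command() {}
};

struct OpenCommand : Command {
    void execute() { std::puts("open"); }
};

struct SaveCommand : Command {
    void execute() { std::puts("save"); }
};

int main() {
    OpenCommand open;
    SaveCommand save;
    std::map<std::string, Command*> commands;
    commands["open"] = &open;
    commands["save"] = &save;
    commands["open"]->execute();  // dispatch by name, no switch monster
}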

Functional languages are a completely different matter, of course.
 
Function pointers are neither good nor bad. They just are. Their intended use is so limited, however, that you should have a good reason for every manually operated instance of them. They're not something to throw in the mix as a general solution to something. That's why I'd be suspicious of any use of them. Not that they're wrong, but rather that the demand for them is low enough that it's improbable that any given situation needs explicit function pointers.

Basically, specific tool for a specific job.
 
I love function pointers. Like, to death. I build empires on function pointers. I have arrays of them. No application can be considered complete, not to mention elegant, without at least one 16-element array of function pointers.
No really, that's how I like to do things (but then, I like to write frameworks of functions with custom calling conventions in assembly, so maybe that just makes me strange).

To hell with if statements though! Their intended use is so limited that you should have a good reason for every manually operated instance of them. They're not something to throw in the mix as a general solution to something :p
 
- Non-static methods? Check.
- Events (callbacks)? Check.
- Calling an API function? Check.
- Message handlers (through the framework)? Check.
 
I tend to agree with the people who say that function pointers have their place, but that they can go very, very wrong. One of the last things I want to deal with is a mess of uncommented function pointers that get passed back in on themselves and morph into a kind of terrible angry monster that is nearly impossible to trace, because the original author felt like they were being clever.

Nite_Hawk
 
Let me ask you a question.

Suppose you have some graphics wrestling code that accepts gobs of data in one of many different formats, converts them into a common internal format for processing, and, after that's done, spits it back out in some other format. Like what might happen when you hand a texture to an OpenGL driver and request it to build mipmaps.

Let's pretend someone implemented the selection process that determines which conversion function should be called to get the data into the preferred internal format with an array of function pointers, indexed with the format enum (sanity-checked of course), instead of a massive switch statement or a chain of if-elses.
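
Concretely, something like this (the format names and converter signatures are invented for illustration):

Code:
#include <cstddef>

enum Format { FMT_RGB8, FMT_RGBA8, FMT_BGR8, FMT_COUNT };

typedef void (*ConvertFn)(const unsigned char* src, float* dst, std::size_t pixels);

static void rgb8_to_internal(const unsigned char*, float*, std::size_t)  { /* ... */ }
static void rgba8_to_internal(const unsigned char*, float*, std::size_t) { /* ... */ }
static void bgr8_to_internal(const unsigned char*, float*, std::size_t)  { /* ... */ }

// One converter per format, indexed by the enum.
static const ConvertFn convert_table[FMT_COUNT] = {
    rgb8_to_internal,   // FMT_RGB8
    rgba8_to_internal,  // FMT_RGBA8
    bgr8_to_internal,   // FMT_BGR8
};

bool convert(Format fmt, const unsigned char* src, float* dst, std::size_t pixels) {
    if (static_cast<unsigned>(fmt) >= FMT_COUNT)  // the sanity check
        return false;
    convert_table[fmt](src, dst, pixels);
    return true;
}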

Would that be the kind of terrible angry monster you're speaking of, or is that still okay?
 
And what if, instead of using C and an array of function pointers, someone used a language with real OO support, instead of faking polymorphic dispatch with weakly typed C arrays and weakly typed C enums?

If the particular function that needs to be passed isn't a "hotspot" critical function (one that will be called a gazillion times), it doesn't need to be written in raw C for performance; and if the function *is* in the critical path and needs uber-performant calls, then it is better to use a switch/case or if-then chain, because the compiler's optimizer will work better.

The problem with FPs most of the time is that they do not represent a sweet spot. They are harder to maintain and more error-prone than a higher-level language with first-class polymorphism support, and they are less performant than the alternatives: the worst of both worlds.
 
Let me ask you a question.

Suppose you have some graphics wrestling code that accepts gobs of data in one of many different formats, converts them into a common internal format for processing, and, after that's done, spits it back out in some other format. Like what might happen when you hand a texture to an OpenGL driver and request it to build mipmaps.

Let's pretend someone implemented the selection process that determines which conversion function should be called to get the data into the preferred internal format with an array of function pointers, indexed with the format enum (sanity-checked of course), instead of a massive switch statement or a chain of if-elses.

Would that be the kind of terrible angry monster you're speaking of, or is that still okay?

If using a procedural language, okay; but in an object-oriented language I'd use neither switch, if-else, nor function pointers - I'd try everything to solve it with OO inheritance/polymorphism.
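
A rough sketch of what that might look like (reusing the invented formats from the dispatch-table example above):

Code:
#include <cstddef>

// The same dispatch done with inheritance: one converter class per
// format, selected once up front when the texture is handed over.
struct Converter {
    virtual void toInternal(const unsigned char* src, float* dst,
                            std::size_t pixels) const = 0;
    virtual ~Converter() {}
};

struct Rgb8Converter : Converter {
    void toInternal(const unsigned char*, float*, std::size_t) const { /* ... */ }
};

struct Rgba8Converter : Converter {
    void toInternal(const unsigned char*, float*, std::size_t) const { /* ... */ }
};

void buildMipmaps(const Converter& conv, const unsigned char* src,
                  float* dst, std::size_t pixels) {
    conv.toInternal(src, dst, pixels);  // virtual call replaces the table lookup
    // ... downsample and emit mipmap levels here ...
}

Of course, under the hood the virtual call is still an indirect call through a table of function pointers; the difference is that the compiler maintains the table and type-checks it for you.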

(Edit: didn't read DemoCoder's last message before posting)
 