If that needs to have an additional parameter, one line has to change. You just add “, int b” to the end of the do() parameter list and you’re done. Anyone in the switch can use it, or not, at their discretion. If instead we do it the object-oriented way, now we have two choices. Either we change six separate do() function prototypes (the base class and the five derived classes) and their six separate definitions in twelve different files to take “, int b” at the end, or we introduce a new virtual function that “thunks” to the old function, introducing cognitive and computational overhead to avoid having to change code in lots of places.
Essentially, what you get when you structure things in the virtual dispatch way is an O(n) instead of O(1) cost for changing functions. That might be interesting if you got something back, but you really don’t. It’s not as if you have reduced an O(n) cost elsewhere. If you want to introduce a new type to the system, for example, in the non-OOP case you have to add a new case to each switch statement you cared about. In the OOP case, you’d have to add a new function for each virtual function you cared about. Both are O(n), although the OOP case is still a bit worse, being more like 2n since you have to add both a declaration in the header and a definition in the implementation file.
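To make the arithmetic concrete, here is a minimal sketch of the switch-based version. The names are hypothetical, with a two-value mode enum and a DoThing() function standing in for the do() discussed above (none of these identifiers come from the actual code):

    // Hypothetical stand-ins for the code discussed above. DoThing() is
    // used as the name here only because "do" is a reserved word in C++.
    enum update_mode
    {
        Mode_A,
        Mode_B,
    };

    // Adding ", int b" to this one prototype (and its single definition)
    // is the entire cost of the new parameter; any case that wants b can
    // use it, and the rest can simply ignore it.
    static void DoThing(update_mode Mode, int a /*, int b */)
    {
        switch(Mode)
        {
            case Mode_A:
            {
                // ...behavior for the first mode, using a as needed...
            } break;

            case Mode_B:
            {
                // ...behavior for the second mode...
            } break;
        }
    }

Adding a third mode means adding a case to each switch that cares about it, which is the O(n) cost for new types that both designs pay; the difference is that adding a parameter stays a one-prototype change here, while the virtual version pays O(n) for that as well.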
I could go on for quite some time about all the ways in which OOP creates more work in these scenarios, but hopefully you can extrapolate from here.
Now, I know a lot of people are probably thinking something like, “yes, OOP costs more to maintain, but the benefit is that people can’t misuse the objects”. It’s a very common claim, one I hear especially in reference to “large teams”: that OOP helps reduce the number of mistakes programmers make when using other people’s code. I find this to be a rather nebulous and specious argument.
The reason is that I’m not really sure what kind of mistakes people are talking about. They’re rarely specific about these “mistakes” that are being “prevented” by OOP. I strongly suspect that what they’re actually talking about has nothing to do with the transformation of O(1) switch functions into O(n) virtual functions, which is expensive, but rather with the fact that hiding an implementation can be beneficial when there is a clear API boundary.
That is to say, if I believe there is a point in the code where there is no benefit to be gained by mixing it with the code that uses it, I can draw a hard line there and have a strictly opaque boundary between the two. I certainly wouldn’t knock this programming construct — I use it all the time. I’ll make a header file with some functions in it and a forward-declared struct that houses the data, and then I’ll define the struct and the functions in the actual code file so that nobody who includes it will actually touch them. It’s a nice technique, and it certainly falls under the category of “encapsulation”.
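As a quick sketch of what that looks like in practice (the file, type, and function names here are made up for illustration, and are not taken from the actual Lister Panel code):

    // thing.h -- the only part that other code ever sees
    struct thing;                  // forward-declared; the layout stays hidden
    thing *CreateThing(void);
    void UpdateThing(thing *Thing, float dt);
    void DestroyThing(thing *Thing);

    // thing.cpp -- nobody who merely includes thing.h can touch these internals
    #include "thing.h"
    #include <stdlib.h>

    struct thing
    {
        float X;
        float Y;
        int UpdateCount;
    };

    thing *CreateThing(void)
    {
        thing *Result = (thing *)calloc(1, sizeof(thing));
        return(Result);
    }

    void UpdateThing(thing *Thing, float dt)
    {
        Thing->X += dt;
        ++Thing->UpdateCount;
    }

    void DestroyThing(thing *Thing)
    {
        free(Thing);
    }

Code that includes thing.h can pass thing pointers around freely, but it can never reach inside the struct, and it gets that hard boundary without any virtual dispatch.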
But it is important not to apply this technique to everything, which is what OOP advocates doing. Much of the time, you actually want code to be miscible. Most parts of your program really do depend on each other, and it makes the code extremely difficult to manage if you try to artificially separate it into “objects” where there are no natural lines of separation. This is specifically the case with the “mode” concept I illustrated in the previous section.
The key thing to understand is that it is not worth trying to “prevent mistakes” if doing so also prevents good, maintainable code! Preventing mistakes should only be a focus if you know you can do so without producing bad code as a result. Otherwise, you’ve effectively made a coding mistake right now in the name of preventing some potential future coding mistake.
Yes, that was about three thousand words about nothing other than defining a single enum with only two actual values in it. When I said I was going to go over the implementation of this code in detail, I was not exaggerating.
Putting things in perspective, the fact that I can write so much about such a small piece of code is a good example of why it’s hard to learn how to program well, and also why good programmers don’t always write good programs. There are so many things to think about at every turn that sometimes you just can’t afford the extra mental focus, and you push forward without considering everything or managing the tradeoffs as well as you should. I know this certainly happens to me all the time, and I’ve written many a piece of crappy code that, if I were to look at it under calmer circumstances, I would find disgusting.
Furthermore, it is almost certain that my lengthy discussion of this simple enum is far from complete. I bet there are a bunch of experienced programmers reading this right now who have a number of points they’d like to add to the discussion. And that’s not even counting the object-oriented programming aficionados who are livid right now at my dismissal of their preferred methods, and would certainly have a lot to say about that.
So really, everything here just underscores the point with which I began the first article: good programmers need to talk more about how they actually code and what they think about when writing code. To that end, I’ll be back next week to continue examining the Lister Panel code, and I hope other folks out there will take some time to write about how they structure their programs and what they find to be efficient ways to write code, too!