I’m not sure I have strong feelings about either of these. Personally, I prefer the original form, because it is very clear what is going on, and there is no real intricacy or bulk to the code that would make it fragile to future updates. Collapsing it would mainly introduce opportunities for error, where someone misreads or mis-edits the (now more complicated) computation of passes_special. That said, if someone were to make that change and felt strongly about it, I wouldn’t object.
If the cases were longer, and had more shared code that was intricate and likely to change, I would definitely prefer collapsing the two, even at the cost of legibility. So really this is a case of the code being small enough that I prefer the duplication, but anything bigger and I would probably change my preference.
All the for loops in this code are encapsulated within a block. Why?
Well… yeah. That’s a “Muratori-style for loop”, so named because I am literally the only person anyone’s ever seen write a for loop this way. Nowadays, there is little or no reason to use this bracketing style. It is a habit of mine that I picked up in 1995 when working on a cross-platform codebase that had to compile under both Microsoft Visual C++ and GNU C++ (among others). At the time, GNU C++ had adopted the convention that a variable declaration inside the for’s parentheses was scoped to the loop (as per the spec), but Visual C++ did the opposite, considering the declaration to be in the outer scope. This led to a no-win situation where code written properly under GNU C++ would have multiply-defined symbol errors under Visual C++, and code written in Visual C++ would have undefined-symbol errors in GNU C++. My solution to this problem was to devise the double-bracket style you see in this code, which solved the problem rather elegantly in my opinion, but was apparently too ugly for anyone else to adopt. Later, Visual C++ fixed their compiler to have proper scoping rules, and the bracketing style was no longer necessary, but by that point I had gotten so used to doing it that I still haven’t stopped.
I noticed that you guys were banning all /* */ comment blocks. That’s rather odd. Any reason for this?
They are not banned, they probably just didn’t happen to get used in this file for whatever reason. I definitely do use them sometimes for longer comments, both in my own code and in the code I wrote for The Witness. My editor (GNU Emacs) handles commenting automatically, so it’s “free” for me to make long comments with //’s instead of /* */, so sometimes I just don’t notice that I have made a very long comment entirely in the former.
That said, I do tend to avoid using C-style comments for disabling code, because comments usually don’t nest properly under most compilers’ default settings. For this reason, I tend to prefer #if 0 blocks for code, since I can #if 0 a block of code that already has a #if 0 in it and it will work, whereas I cannot comment out a block of code that already has a commented-out block of code in it without it breaking most compilers on their default settings.
lister_panel.h begins with a forward declaration of struct Entity_Panel, but you include “entity_panel.h” in lister_panel.cpp which presumably has the definition. Why?
To be honest, I have no idea what those forward declarations are doing there. The .h file doesn’t reference them anymore as far as I can tell. I suspect it was historical, where at one point the .h file was referring to those structs and needed them to be declared, but didn’t want to include the whole .h file with their definition.
I try never to include .h files in a .h file unless I absolutely have to, since each .h file is usually included by multiple .cpp files, so the more .h files you include from an .h file, the more that multiplies the number of files processed by the compiler for each compilation unit. This leads to higher compile times, and I hate higher compile times.
These days, in my own codebase, I actually don’t compile files separately anymore; I just include all the .cpp files into one big .cpp file and compile that, so forward declarations are rarely necessary. But in other codebases where files are compiled separately, I tend to adhere to most of the compile-time-minimizing practices I learned long ago from Lakos’ Large Scale C++ Software Design. The only one I don’t tend to do in foreign codebases is redundant include guards, but that is solely because I rarely know whether there are consistently named include guards in the target .h files I’m referencing, and I don’t usually want to spend the time to exhaustively check.
Why do you use macros for constants instead of const variables/enums?
I don’t have a strong opinion about this, and I do use enums sometimes. It tends to be whatever strikes my fancy. There are reasons to use macros instead of enums, namely that there can be situations where a macro works and an enum doesn’t. A classic example is assigning the value 5 to a float. If you make the 5 an enum, some compilers may emit a precision warning because you are converting an integer to a float. If you make the 5 a macro, the same compiler may not warn, because it can see that 5 can be perfectly represented as a float, unlike many other integers which cannot be. This may not be true for today’s compilers, which might do analysis of the enum value just like they would with an expanded macro, but I haven’t played around much with that recently so I can’t say for certain.
As for why I might want a constant to be assignable to both a float and an int, this is typical of my thinking about programming. It is the opposite of how most C++ programmers think. I usually prefer the option that supports the widest possible code use, whereas most C++ programmers seem to prefer the option that supports the narrowest. They are all about “protected” and “private” and “const” and so on, because those language features restrict the number of ways in which a piece of code can be used. But I prefer to make code that can be used in the widest number of ways, and so I tend to prefer practices that are as permissive as possible. Had I grown up in the sixties, I probably would have been a happy LISP programmer :)
The reason I differ in opinion here is that I tend to find that the primary development cost in codebases lies in working out the interoperation of large amounts of code, not in the debugging of that code. I think people often perceive the cost of debugging to be much higher than it actually is, probably because it is not much fun to do. I believe many modern programmers spend an inordinate amount of time using things like “const” and “private” to try to prevent possible misuses of a piece of code, all the while making the code less usable in a number of circumstances that would have worked just fine, thus costing even more development time than the time already spent “designing in” these restrictions.
Unfortunately, who is right and who is wrong about these sorts of things is not really a question anyone can definitively answer because we don’t have the kind of metrics that we would need to accurately assess it. I can’t say for sure that my way is the right way. But I would definitely encourage people to think deeply about every programming practice they currently do in the name of bug prevention and consider how much debugging time it is actually saving, and to jettison those for which there is no strong evidence for an overall time savings.
You wrote, “this is also editor-only code, which means it is not meant to be used in a released product.” Does that mean that there’s an entire codebase dedicated uniquely to the editor?
Not exactly. There is a directory full of editor-only code, but it is something that adds on to the game’s source code rather than replacing it entirely. The point of my comment was just to indicate that lister_panel.cpp, along with all the other editor-specific add-ons, does not get compiled into the shipping version of the game, and thus does not have to be as robust as code that users will interact with directly.
Do you purposefully avoid writing about the IM-ness of the UI to make it seem like it’s something ordinary?
No, I avoid writing about it because I didn’t write it :) In general, I try to avoid talking about code I didn’t write, because I don’t feel like I will be able to accurately represent the thought process that went into crafting the code.
Jon wrote the GUI for the Witness editor. It is a very old GUI codebase, written before Braid, that Jon has used on a number of projects. He would be the appropriate person to write about it, since he’s the only one who knows why he made the decisions he made and why it works the way it does.
Do you develop on Linux? I ask because I would love to see an article on the blog about your workstation: the specs of the PC (Mac?), OS, (IDE, editor), whatever.
I am often on Linux and I do use it for programming my own code. That said, though I did get The Witness compiling under Linux, the majority of the Witness development I did was on a Windows machine. At the time when I got things building on Linux, Ignacio hadn’t yet finished the port of The Witness. Since Linux doesn’t have Direct3D support (apart from Wine), I didn’t try to get things running completely. I just set up a dedicated Windows machine for development while working on The Witness and used that.
For my own code, I try to keep everything running on at least two platforms at all times, so I have Linux on all my laptops and Windows on my (much older) desktop. I use Emacs as my editor, so that is the same on both platforms, as is cmirror, my custom-made sync and backup utility. I do not use a build system; I just have a .bat on Windows and a shell script on Linux. For debugging, I use Microsoft Visual Studio on Windows, which sucks, and QtCreator on Linux, which somehow sucks even more. Sometimes I think they get together and have competitions to see who can make the shittier debugger, and QtCreator comes out on top, but it’s neck-and-neck right to the finish.
On the whole, I am very interested in trying to make alternatives like Linux viable. I’m not sure how realistic that is, given the fractured nature of Linux and its hardware compatibility difficulties. But I feel like I should at least be trying to help, rather than sitting idly by while Microsoft, through a combination of selfishness and incompetence, flushes my home computing platform straight down the drain.
It would be great to read about cases where OOP is the proper way to go. You’ve made it clear where not to use OOP, but where should we use it?
Honestly, I feel like the answer is “in some other language”. My position on object-oriented programming is generally that it does not work well in C++ because C++ does not have any of the features you might want to allow you to get a benefit out of thinking of things in terms of objects.
To expand on that answer a little bit: there are two reasons that I think object-oriented programming usually doesn’t work. The first reason is that I think people try to apply the methodology to a wide variety of circumstances, when really it is only appropriate for a very narrow set of circumstances. But the second reason is that C++ doesn’t have good object-oriented programming support, so even if you did find a circumstance where object-oriented design was going to be optimal, you would still end up with bad results if you tried to use C++ to do the actual implementation.
I would say that if you are programming in C++, you should basically never use object-oriented programming. If you are programming in some other language, perhaps there are times when object-oriented programming might be a good idea. Unfortunately, since I have thirty years of experience programming C and C++, and only a small fraction of that experience in any other language, I feel it would be out of place for me to speculate on anything specific, since I really can’t speak from experience in any of the more traditional object-oriented languages.
And, just to make sure the point isn’t lost, as I said in the previous articles, the phrase “never use object-oriented programming” doesn’t mean that I don’t think you will have data structures in your code that can have multiple types, or which might contain extensions or different data members based on some notion of their type. Instead, what I mean is that you should never be thinking in terms of “objects” and “members” and “encapsulation”. You should be thinking strictly in terms of the actual code and the algorithms that you are creating, and then you should build the backing data to fit those algorithms. If something like an object should then arise, that is totally fine. But at least as far as C++ is concerned, never start by thinking in terms of objects! It is always the wrong approach.