Hello! Welcome back to the Blightmare dev blog! Today is the last instalment in the refactor series. This is going to be some general thoughts on the whole process. You might call it a bit of a retrospective I suppose. Let’s get to it.
The only real downside of refactoring is the time it consumes, which could have gone toward something else. By that logic, a refactor is worth it if it produces more value than spending the same time elsewhere. The hardest part of making this determination is estimating how much efficiency the refactor will add to future tasks. Another wildcard is the value of the knowledge gained by looking at the code from a different perspective.
My approach now is to limit the scope of any individual refactor so that it can be done quickly. This comes at the expense of being able to tackle really big problems in one go, but I think the best way to reach a large-scale change is through many small changes. This is a relatively new opinion for me; I used to think that many small changes were just an inefficient way to make one large change. The reason is that when you make a series of small changes, each intermediate step needs to be a working state. That inevitably means spending time adapting code to work with a new system even when that code will eventually be removed.
On its face, that sounds wasteful. However, exercising the new code as soon as possible is invaluable as a way to ensure the changes are actually a step in the right direction. In terms of scope, the right size can vary significantly depending on how much time you have available in a particular coding session. My average chunk is about an hour, which means I need to be able to dive in, make a change, and verify that things are working again, all within that hour. After doing this fairly regularly, I’ve found it to be a very efficient way to get things done.
A major reason an hour is a good task size is that the amount of change possible in an hour usually fits entirely in my head. Being able to think a problem all the way through makes it easy to code up and complete. Anything larger needs to be broken down further to arrive at an approach that won’t hold any surprises. Plus, it’s really nice to have definite progress to point to as you chip an hour off a project that will probably take several thousand hours when it’s done. The decrease in the emotional effort needed just to get started can be the difference between finishing something and watching Twitch instead.
That was a giant detour away from the original topic of whether our refactors were worthwhile, so let me get back on track. Speaking broadly, I think the first refactor was a mistake, and more importantly, a preventable one given the situation at the time. It was wrong to chase the “latest and greatest” piece of technology. In fact, I now believe that is the wrong choice for any serious project, because any serious project already has more than enough complexity of its own. Adding more variables or increasing the difficulty is not a rational choice and does not make things better over the long term. I think about it this way now: doing something new means giving up your expertise. I’ve often said that being able to iterate and refine is the real key to making a good game, so the choices made along the way should support those goals. Even if there is some incremental cost to using the “old way,” that cost is typically something you can plan for, or at least foresee, in a way that simply isn’t possible with new or untested technology. When it comes to tools, boring is good.
The second refactor, however, I believe was a good call. Obviously it was great to get away from the Entities implementation, but even if we had just been going from the first version of the game to the current one, I think it would have been worthwhile. We’ve had all the mechanics in the game playable for a while now, and tuning or adjusting how they work or interact is very easy, even for things written many months ago. That is a clear sign that the code is in pretty good shape. As we work on more scripted pieces of the game like boss fights and cinematic sequences, having the logic more or less centralized is also very helpful because customizations don’t need to be distributed. It’s possible that we wouldn’t have gotten here without first taking the wrong step toward Entities; it’s impossible to know now. But we did eventually get here, and it’s a pretty good place to be.
All in all, a few weeks of development time is probably a fair price for important lessons about when and how to refactor, along with the eventual improvements to our ability to build the game. We certainly can’t go back and do it differently now, so looking back at the process and learning all we can is the best way to make sure nothing was wasted. It’s been a fascinating journey for me to revisit all of this in order to write these posts, and I hope readers found some interest or value in them. Next week we will be on to something completely different. Let us know in the comments if there are specific topics that interest you and we can see about speaking to them.
As always, if you’re interested in following the game, please wishlist Blightmare on Steam and give us a follow on Twitter. Stay safe wherever you are, and have a great week!