Hello and welcome back to another instalment of the Blightmare development blog!  Today’s post is the second-to-last part of a long series covering the two largest individual changes made to the code base.  I will cover the second refactor here and leave some concluding thoughts for next week.  Let’s dive in.


Last week I discussed, in some detail, several issues and frustrations we encountered with the refactor to Unity’s Entities API, and I won’t repeat all of that here.  The big-ticket items are the following:

API stability.  This was a preview or alpha or experimental – whatever you want to call it – package, it’s true.  Even so, I’ve never seen a publicly released package make so many incompatible changes as frequently as the Entities API did.  Every 2-3 weeks I found myself spending almost an entire week refactoring significant portions of our code because some core concept had been changed or removed.  We just couldn’t afford that kind of code churn.

Editor support.  It became nearly impossible to author content in the editor as a prefab and actually use it within a System.  I spent at least half my time writing and maintaining adapter layers or inventing hacks to let my components reference non-blittable things such as asset instances, prefabs, or strings (see the sketch after this list).  It was difficult even for me, the programmer who built most of these pieces, to use them.  That’s a really bad sign for the productivity of our designers and artists, and it’s not a hit we could afford.

Performance.  The battle with performance is always ongoing, and I’m sure we could have optimized things further, but the way the system was architected meant that we constantly had to fight it to squeeze out a reasonable frame rate.  That’s a pretty clear sign that the wrong tool is being used for the job.  The most frustrating part for me is that the one operation we most needed to be fast – adding or removing components – was specifically designed to be expensive.

The last item isn’t something I mentioned last week: using the latest and (sometimes) greatest APIs often means that existing libraries or plugins won’t work, or will need significant effort to adapt.  One of the big reasons to use a well-established engine like Unity is to save time by buying functionality, so making that difficult or impossible is a pretty big downside.
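
To make the editor-support point above a little more concrete, here is a rough sketch of the kind of friction involved.  In the package versions we were using, a plain IComponentData struct had to be blittable, so it couldn’t hold a prefab reference, an asset instance, or a string directly; the usual workaround was an authoring MonoBehaviour plus some adapter code to translate those references into plain data.  The names below are hypothetical and not taken from the Blightmare code.

```csharp
using Unity.Entities;
using UnityEngine;

// Fine: only blittable fields, so this works as component data.
public struct SpawnRequest : IComponentData
{
    public int PrefabIndex;   // index into a prefab table maintained elsewhere
    public float Delay;
}

// Not allowed as plain IComponentData in the versions we used: managed
// references such as GameObject or string are not blittable.
// public struct BrokenSpawnRequest : IComponentData
// {
//     public GameObject Prefab;
//     public string Label;
// }

// The adapter layer: a MonoBehaviour holds the real asset references in the
// editor, and a conversion step turns them into indices the ECS side can use.
public class SpawnAuthoring : MonoBehaviour
{
    public GameObject Prefab;
    public float Delay;
}
```

Multiply this by every mechanic that needs to reference a prefab, a sprite, or a sound, and the adapter layers add up quickly.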


For all of these reasons, Tom and I decided that we needed to cut our losses and go back to the more traditional way of doing things.  Obviously there was significant concern: we had spent a lot of effort getting to where we were, and now we were talking about throwing it away.  I’ll talk about that in more detail in next week’s conclusions, but for now, know that I don’t see this process too negatively.  I do wish we could have made the correct decision at every turn, but that’s never going to happen, so instead we just need to make sure we get the most out of the decisions we have made.

There were two major possibilities for how to switch back to MonoBehaviour-based gameplay.  We could roll back in source control to just before the Entities refactor and try to port all the features added since then onto that base, or we could pretend that history didn’t exist and do another forward-facing refactor.  I’ve covered a lot of problems with the original implementation – enough that we refactored the first time – so I decided that it would be better in the long run to use the old code only as a reminder of what not to do.


Refactoring back went quite quickly, actually.  I believe a large portion of the game was working and playable after about a week.  The rest came along more slowly as we discovered pieces that were broken or as the designers moved on to different areas of focus.  A few fairly radical changes in my thinking really helped:

Significantly less generic code.  I didn’t worry at all about writing code twice, or about trying to make one component that could apply to several situations.  Instead, I duplicated things and solved specific problems in their own places.  Only after we had most of the code refactored did I start factoring out common chunks.  This let me focus on one problem at a time without having to worry about what else I might be breaking.

Assume a single player.  The earlier iterations of the code might have been able to handle multiple players, although I never tested this, so surely it would have been broken in some way.  Supporting multiple players is a silly thing to do for a single-player game, however.  It’s a massive increase in complexity because it prevents simplifying assumptions in logic about the player and rules out convenient accessor functions or state (the static player accessor in the sketch after this list is one example).  In many ways this is just a specific case of the earlier point, but it so radically changes some code paths that I think it’s worth calling out on its own.

Centralize logic.  I mentioned that a lot of the mechanics were implemented in their own components to potentially allow a designer to create new things that I hadn’t considered.  This is still true in some cases where we have more “systemic” behaviour, such as the physics system or some activation mechanisms, but in general the cost is not worth paying to support things that we just won’t have.  Instead, there are a couple of places that hold nearly all the logic and state for the game.  The way this works is through a request system: some piece of code may request to have a mechanic applied to the player during the frame.  All these requests are accumulated on the main player logic script and then resolved and applied in a single place that has all the context, as in the sketch below.  Collisions, overlaps, and movement take the same basic approach in our current code: components accumulate requests for movement, which then feed forward into a new set of allowed movement and pending collisions.  Once we have all of this data, it’s much easier to look at it together and decide what to do in a way that makes sense for the game.  Similarly, overlaps are generated as a list that anything interested can inspect, and that list is reset each frame.
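
To make the request pattern a little more concrete, here is a minimal sketch of how a mechanic might file a movement request against a single, globally reachable player script, and how those requests are resolved in one place at the end of the frame.  The names (PlayerLogic, MovementRequest, ConveyorBelt) are hypothetical and the resolution step is drastically simplified; this illustrates the shape of the pattern rather than our actual implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

public struct MovementRequest
{
    public Vector2 Velocity;       // desired contribution for this frame
    public MonoBehaviour Source;   // who asked, useful when resolving conflicts
}

public class PlayerLogic : MonoBehaviour
{
    // Single-player assumption: exactly one globally reachable player.
    public static PlayerLogic Instance { get; private set; }

    private readonly List<MovementRequest> _movementRequests = new List<MovementRequest>();

    void Awake()
    {
        Instance = this;
    }

    // Any mechanic (spring, conveyor, wind...) can ask for movement.
    public void RequestMovement(MovementRequest request)
    {
        _movementRequests.Add(request);
    }

    // Resolve every accumulated request in one place, once per frame.
    void LateUpdate()
    {
        Vector2 total = Vector2.zero;
        foreach (var request in _movementRequests)
        {
            total += request.Velocity;   // a real resolver would weigh, clamp, and check collisions
        }
        transform.position += (Vector3)(total * Time.deltaTime);
        _movementRequests.Clear();       // request lists are reset every frame
    }
}

// An example mechanic: a conveyor that pushes the player while overlapped.
public class ConveyorBelt : MonoBehaviour
{
    public Vector2 Push = new Vector2(2f, 0f);

    void OnTriggerStay2D(Collider2D other)
    {
        if (other.GetComponent<PlayerLogic>() != null)
        {
            PlayerLogic.Instance.RequestMovement(
                new MovementRequest { Velocity = Push, Source = this });
        }
    }
}
```

The nice property of this arrangement is that individual mechanics stay small and ignorant of each other, while every decision that needs full context about the player happens in exactly one place.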


The most significant change from the first version of the code to the current one is a shift to a more data-oriented approach to the logical structure, while still being implemented in terms of MonoBehaviours.  In a sense, the central logic components are a bit like Systems, and the request lists are a bit like components to be processed.  This shouldn’t be taken as a direct comparison, but some of the principles do translate, and as a result the refactored code is a lot simpler, which makes it much easier to keep working.


That brings me to the end of the discussion of code changes in the refactor series.  Next week I’ll share some overall thoughts about the impact of making big changes to a code base and some lessons we’ve learned along the way.  I hope you’ll stop by!  If you’re enjoying this blog, please head over to our Steam page and give us a wishlist.  We’re also on Twitter.  Have a great week, and if you’re in Southwestern Ontario, try not to melt!