Happy Tuesday everyone!  Welcome back to the Blightmare development blog.  Last week, I talked a bit about the tools that we use as a team to get things done.  I’m back today to finish up with a discussion of our version control experiences – as promised.  I’m going to go through them in the order that we used them, which means the service we use now will be last.

 

Unity Collaborate

The first service we tried was the built-in Unity offering: Collaborate.  We didn’t have a whole lot of data at the time, and it was largely just Tom and myself building out the first basic features.  Even with a minimal use case like that, we had several challenges.  The purpose of version control is to enable parallel development of a project.  It exists to increase development efficiency.  Outside of this role, it should be completely invisible.  Collaborate failed this simple test by requiring me to restart the Unity editor every time my computer went to sleep, apparently due to some kind of authentication session failure.  On a project that is developed over many short sessions, losing time at the start to version control is highly annoying.  To make matters worse, the sync function also took longer than I think it should have, which just compounded the startup cost of a programming session.

When changes are made concurrently to a project, there’s a chance that those changes overlap – consider two different changes to the same function.  This overlap is usually called a conflict in version control parlance.  Clearly something must be done to address this problem.  There are a few approaches that could be taken: prevent conflicts before they arise, force one side to win, or somehow combine both changes.

Preventing conflicts before they arise is usually accomplished through a locking scheme where someone must exclusively lock a file for edit, thus guaranteeing that nobody else can create a conflict.  Locking schemes typically depend on consistent communication with a central server that actually manages the locks, and as a result they do not lend themselves well to any kind of offline development.  For some projects or some teams, that may not be a problem because other workflows already require an internet connection.  The exclusivity of editing can also present some problems for common files which may be in high contention.  This can be addressed by breaking up content into smaller pieces, but having to change your project structure to fit your version control is also annoying.
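As a concrete example, Git LFS (which shows up again later in this post) supports exactly this kind of exclusive locking.  This is just an illustrative sketch – the file paths are made up, and the lock commands require a host that supports the LFS locking API:

```
# Mark an asset type as lockable so unlocked copies stay read-only.
git lfs track "*.unity" --lockable

# Take the exclusive lock before editing, then release it when finished.
git lfs lock Assets/Scenes/Forest.unity
git lfs unlock Assets/Scenes/Forest.unity
```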

Picking one change entirely or combining the changes are both forms of what is called merging.  Most version control systems support merging of text files such as code or configuration, and some more complex ones can offer forms of binary file merging.  Merging is clearly more efficient because it does not require coordination of changes ahead of time, but it does come with its own set of challenges, and there has been a lot of work put into building efficient merge algorithms and tools to assist in conflict resolution.
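When a text merge does collide, most tools mark the overlapping region with conflict markers and leave the resolution to a human.  A typical git-style conflict looks something like this – the variable and branch name are invented for illustration:

```
<<<<<<< HEAD
int maxJumpHeight = 3;   // the value committed on main
=======
int maxJumpHeight = 5;   // the conflicting value from the feature branch
>>>>>>> feature/double-jump
```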

The last major concept that version control typically offers is called branching.  This refers to the ability for someone to create a new line of development that can diverge from the “main” project for the purpose of an experiment or some large change that may impact workflows while it is in development.  When branching is combined with merging, a powerful set of tools is made available to a development team, allowing many concurrent experiments or ideas to be explored and then combined back into the main project when desired.
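Jumping ahead a little, in git terms the whole branch-and-merge cycle is only a handful of commands.  The branch name below is just an example:

```
# Start a new line of development for an experiment.
git checkout -b experiment/new-movement

# ...commit work on the branch as usual...
git add . && git commit -m "Prototype new movement"

# When the experiment pans out, fold it back into the main project.
git checkout main
git merge experiment/new-movement
```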

This brings me to the most significant problem that Unity Collaborate had: no branching.  In Collaborate, every change had to be integrated into the same project line and was immediately shared with the whole team.  It is possible to work effectively in an environment like this – check out “Trunk Based Development” for more information – but it did not work for us, so we went searching for a replacement tool.

 

Plastic SCM

There are many services out there that offer game version control and asset management.  Many of them are too expensive for our team to really consider, so they won’t show up here.  Plastic SCM is an offering that wasn’t too expensive, came integrated into Unity, and seemed to offer all the features we found missing from Collaborate: branching, better merging, and better performance.  Unfortunately, reality didn’t quite live up to the promise.

We signed up for a cloud hosted version of Plastic SCM, because we’re a small team and we didn’t want to have to manage backups and the other maintenance tasks associated with a server.  The first problem was that a special client needed to be installed for the “cloud” version of Plastic SCM.  This strikes me as something that shouldn’t be required, but I don’t know what their code looks like.  At any rate, just installing the client took several days and numerous trips to the Plastic SCM forums.  The silliest problem that we had was related to a “security” feature.  Plastic supported an encryption feature that involved a pre-shared encryption key, much like a VPN.  The difference with Plastic is that you only had a single chance to set the encryption key, which was during initial installation.  If you didn’t enter it correctly, or if you didn’t know that you needed to, it was too late and you had to re-install the whole program.  Looking back, I’m not entirely sure why we went ahead after that experience, but we did, so you don’t have to.

Once we had the client installed on everyone’s machines, the first thing we noticed was that synchronizing changes was slow.  There was a fancy UI that came up while the sync operations were happening, and it had at least three stages I think, with an animation between them.  I would like to suggest that focusing on functionality and performance is much more important than animations for a productivity tool.  Once we got set up, however, we could make branches, which was a nice change from before, and we soldiered on.

One of the selling points of Plastic is that it offers an easy-to-use UI on top of the version control system, which should make things quick for users who don’t need any advanced features.  In order to deliver this, they came up with a bunch of new concepts and subtly altered some existing ones.  I’m not going to exhaustively list all the problems we had because I don’t have time.  Instead I’ll walk through a single example, and you can imagine that it was just one of many.

One day, we were trying to put together a release to take to a convention.  We had some art that needed to be integrated as well as some new mechanics.  The new mechanic work introduced a partially breaking change that needed to be re-tuned once everything was integrated, so it was on a branch to make sure everyone else wasn’t affected.  The art also took a bit of tuning and testing to make sure all the references and links were in the proper places, animation names were correct, and so on.  This was also on a branch.  The goal for the day was to merge everything together into the main branch and start doing the final testing for the public build.  In Plastic, a merge was a complex beast for reasons that were not clear to me.  This was made worse by the UI in their client tool, but I digress.  The general structure of our branches was simple: an art branch and a code branch, each split off from main.

I happen to know that none of the changes being merged together conflicted, so this should have been a really simple operation.  What I wanted to be able to do was just merge code into main, then art into main, and call it a day.  Not in Plastic.  What we actually had to do was:

  • Set local checkout to art – this adds/deletes files as needed to be exactly what is on that version
  • Merge main into art
  • Publish the merge to the server
  • Switch to main – this deletes the new art files locally because they aren’t yet part of main
  • Merge the new end of art into main – now we resync all the art that we had a minute ago
  • Publish the merge to the server
  • Switch to code – deleting the art locally again
  • Merge main – sync the art locally
  • Publish this merge
  • Switch to main
  • Merge code
  • Finally publish the finalized merge

The actual process took an incredible amount of time because it took me a while to realize that this was what we needed to do.  Before that, I tried other merge schemes that did not do what I wanted – changes were dropped, or conflicts were created but not reported, leaving the project broken.  Reverting an operation could only be done on either Mac or Windows (I don’t recall which) despite the claim of being “cross platform.”  Long story short, it was a nightmare that cost us a lot of time right before a demo, and it was at that moment that I decided we had to use something else, even if it cost more money.
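For contrast, the same integration in git – which is where we ended up – is a handful of commands in a single working copy, with nothing shared until the final push.  Branch names here match the example above:

```
git checkout main
git merge art    # bring the art branch into main
git merge code   # bring the code branch into main
git push         # share the finished result with the team
```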

 

Git + GitLFS on GitHub

Git is possibly the most common version control system for programmers due to its relatively simple interface, high performance, and distributed nature that removes the dependency on network accessibility.  The single best feature of git is that it works.  The problem with git is that while I happen to think the interface is relatively simple, it has a fairly specific and complex mental model that takes some getting used to.  Without that understanding, it is an intimidating tool, and since it’s largely geared towards programmers, most interfaces are not exceptionally user friendly.  I must say that significant progress has been made in this area, but I’m not very familiar with the offerings because I’m old fashioned and stick to the command line.  There are several hosted git offerings which provide robust backups and central sharing points for git repositories.  GitHub may be the most widely recognized of these, but GitLab and Bitbucket are also major providers.
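For anyone who hasn’t lived in it, the day-to-day command-line surface is genuinely small.  This is roughly the loop, with a placeholder repository URL and made-up file and branch names:

```
git clone https://github.com/<org>/<repo>.git   # get a local copy (placeholder URL)
git checkout -b feature/cutscene-skip           # start a branch for the change
git add Assets/Scripts/CutsceneSkip.cs          # stage the edits (example path)
git commit -m "Allow skipping cutscenes"        # record them locally
git push origin feature/cutscene-skip           # share the branch with the team
```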

The problem with git for our use case is that it does not handle large binary files very efficiently.  Every local copy of a git repository contains every version of every file in the repository’s history, and git keeps that history small by delta-compressing similar versions against each other.  This is one of the keys to the speed and power of git – everything you need is local – but it also means that as your repository grows, the footprint grows everywhere and reducing the size is rather complex.  The problem gets much worse when you have files that git cannot effectively create differential patches for – images, audio, zip files, etc.  All of these “large binary files” are effectively stored as complete copies for each version in history.  As you can perhaps tell, this can lead to runaway repository sizes which then cause many knock-on problems because git isn’t really designed to handle this kind of content.
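If you want to see this effect on a repository of your own, git can report how large its object store has become, and a common shell recipe lists the biggest objects in history.  Nothing here is specific to our project:

```
# Total size of the local object database; with binary assets this grows
# with every committed revision of every file.
git count-objects -vH

# A common recipe for finding the largest objects in history.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | sort -k3 -n \
  | tail -20
```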

Enter GitLFS, which stands for Git Large File Storage.  This is an extension that – to put it simply – replaces each large binary file with a small text file whose contents are a pointer to the actual file data.  This allows git to get back to managing text files, which it is very good at, while the large binary files are stored efficiently somewhere else, and as a result it provides many of the benefits of both systems.  GitLFS is a relatively new offering, which is why we didn’t investigate it too much at the start, and the pricing was a bit hard to understand initially as well.
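Concretely, you tell git which file patterns should go through LFS, and from then on the repository only ever commits a tiny pointer for those files.  The snippets below are a sketch – the pattern is an example, and the hash and size are placeholders:

```
# `git lfs track "*.png"` writes a rule like this into .gitattributes:
*.png filter=lfs diff=lfs merge=lfs -text

# What git actually stores for each tracked file is a pointer of roughly this shape:
version https://git-lfs.github.com/spec/v1
oid sha256:<hash of the real file contents>
size <size of the real file in bytes>
```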

When we switched to Git/GitLFS hosted on GitHub, it cost us about $65/month for the whole team because each of us needed an organization seat for access.  This was more money than I wanted to be spending and it was higher than what we paid with Plastic, but the efficiency gain and general reduction in frustration were definitely worth it.  To make things even better, GitHub later changed their pricing so that our organization was actually free, and now we only pay $10/month for LFS data.  In the end, GitHub’s offering was not only the best from a functional point of view, but it ended up being the most cost effective solution as well.  It was a long road, but we’re in a great place now, and we know where to start for next time.

 

 

Congratulations reader!  If you’ve made it this far then you really stuck with me for the long haul and I thank you.  If you’re interested in Blightmare, please consider heading over to our Steam page and putting us on your wishlist.  Our Twitter is the best place to make sure you’re up to date with all our news and latest blog posts.  Have a great week and thanks for reading!  For those of you in Canada, Happy Thanksgiving!