Looking ahead: GitHub and CRYENGINE

Another preview of 5.1 by our Senior Systems Engineer

I’m David Kaye, Senior Systems Engineer on our Deployment team. Most of my time is spent on our build and deployment infrastructure, developing scripts to compile, upload, and test many of the projects based in our Frankfurt office. I was initially responsible for migrating CRYENGINE’s C++ API documentation to Doxygen and automating the generation of those pages, but recently my time has been spent developing the infrastructure to support releasing the CRYENGINE source code on GitHub.

Git is a widely used version control system that gives users instant access to any revision of any file that has ever existed in their repository. This makes comparing revisions and tracking changes over time very convenient, particularly because each set of changes committed to the repository carries a description of what it introduces.
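As a quick illustration (using a throwaway repository with invented file names, not the CRYENGINE one), every commit records a description, and any two revisions can be compared on the spot:

```shell
# Build a disposable repository to demonstrate.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "Example User"

echo "int version = 50;" > engine.h
git add engine.h
git commit -qm "Initial 5.0 header"

echo "int version = 51;" > engine.h
git commit -qam "Bump version for 5.1"

git log --oneline      # every change, with the description attached to it
git diff HEAD~1 HEAD   # compare any two revisions without extracting archives
```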

In the past, public CRYENGINE releases shipped the source code as a zip file included in the build. While this allowed users to customize systems as they saw fit, it was a simple drop of files: to see what had changed between one release and the next, both archives had to be extracted and compared. Git makes this far easier, and because Git is decentralized, it works just as well with no network connection. Beyond that, keeping up to date with new CRYENGINE releases will become much more convenient, as merging code branches is an area where Git excels. This has been on our internal roadmap for some time, but we wanted to take our time and make sure we got it right: once a file is pushed to a Git repository, it becomes part of the history and cannot be removed by subsequent updates (in contrast to some other version control systems, which allow changes and file revisions to be obliterated).
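To make the merging advantage concrete, here is a sketch with another throwaway repository (branch and file names are invented for the example): a studio keeps its customizations on a branch and merges a new release into it, rather than diffing extracted zip files by hand:

```shell
cd "$(mktemp -d)"
git init -q engine && cd engine
git checkout -qb main                 # fix the branch name for the example
git config user.email you@example.com
git config user.name "Example User"

echo "base code" > code.cpp
git add . && git commit -qm "5.0 release"

# The studio's customizations live on their own branch.
git checkout -qb my-game
echo "studio tweak" >> code.cpp
git commit -qam "Studio-specific change"

# A new release lands on main (standing in for a fetched CRYENGINE update).
git checkout -q main
echo "new feature" > feature.cpp
git add . && git commit -qm "5.1 release"

# Merging the release brings in the new files while keeping local changes.
git checkout -q my-game
git merge -q --no-edit main
```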

Much of my time working on Git has gone not into pushing the code itself, but into creating the supporting infrastructure. For an ongoing effort like this to be maintainable, it must be automated as much as possible, and developing these scripts and test cases was the most time-consuming element. We also needed to go through the repositories looking for files we cannot ship, either due to licensing issues (in the case of some SDKs) or because it makes no sense to do so (for files that relate to the configuration of internal tools).
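The kind of automated audit involved can be sketched in a few lines; the directory names and patterns here are purely illustrative, not the actual exclusion lists:

```shell
# Set up an illustrative source tree (invented paths).
cd "$(mktemp -d)"
mkdir -p Code/Sandbox Code/InternalTools
echo "ok"     > Code/Sandbox/main.cpp
echo "binary" > Code/Sandbox/ThirdPartySDK.lib   # stand-in for a licensed SDK binary
echo "cfg"    > Code/InternalTools/builder.ini   # internal-tool configuration

# Flag anything matching the (illustrative) exclusion patterns.
find Code -type f \( -name '*.lib' -o -path '*/InternalTools/*' \) | sort
```

A real pipeline would run a check like this before every push and fail the release if anything is flagged.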

The next step for Git is to establish a simpler pipeline for accepting pull requests from GitHub into our existing version control system (we use Perforce internally, but are providing source via GitHub as this is what is most commonly requested). To maintain the quality of our code, we have a suite of tests that code must pass before it can be submitted to our production branches, and these tests cannot currently be run directly against pull requests.