Many programmers tend to believe that they can stick to the particular technology of their choice, and that the other pieces of the rather complex process of delivering a complete application are not their concern. DevOps, infrastructure, cloud computing and so on – we have other teams for that, correct? Well, even if you do, you’re missing a huge piece of knowledge that could save your day at some point in the future. Let me briefly present my point of view on the subject.
To begin with, I have a question for you. Imagine that you’re working on your own project (perhaps after hours, as some of us do) and sooner or later you would like to deploy it somewhere. I’m talking about things like an API with a front-end part, as providing SaaS solutions is probably the most common scenario these days. At this point, you can choose between only two approaches to deployment – manual or automated.
Manual deployment is usually quick at the beginning. As long as you know the basics of an HTTP server like Nginx, Apache or IIS to host your application, all you need to do is open a connection to the virtual machine, upload the application via FTP or SSH, maybe set up an additional firewall and a database, and you’re good to go.
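The manual routine above can be condensed into a small script. This is only a sketch: the server address, directory and build command are placeholders (my assumptions, not a recommendation), and by default it runs in dry-run mode, printing the commands instead of executing them.

```shell
#!/usr/bin/env sh
# Sketch of a one-machine manual deployment. SERVER, APP_DIR and the
# build command are placeholders -- adjust them to your own setup.
# DRY_RUN defaults to 1, so the script only prints what it would do;
# set DRY_RUN=0 to actually execute the commands.
SERVER="${SERVER:-deploy@example.com}"
APP_DIR="${APP_DIR:-/var/www/myapp}"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run npm run build                                # build the artifact locally
run scp -r ./dist "$SERVER:$APP_DIR"             # upload it over SSH
run ssh "$SERVER" "sudo systemctl reload nginx"  # reload the HTTP server
```

Run once, this is fine; the pain starts when you have to repeat it for every single release.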
The actual process of deploying your so-called artifacts can vary greatly depending on the size of your application, whether you’re using multiple virtual machines (e.g. to distribute traffic via a load balancer), and whether you’re building a (micro)services solution. Doing it manually for the first time is probably fine, but repeating the same steps (updating the source code spread across multiple servers, and so on) quickly becomes really cumbersome.
Thus, just as in the software itself: when you see repetitive code in two or more places, you don’t want to copy it over and over again and then update all N places whenever a change comes along. What you should do instead is refactor the code, so that there’s a single place containing the common classes or methods.
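The same refactoring instinct applies to deployment scripts. A minimal sketch, assuming two hypothetical hosts: the upload steps live in one function, so there is a single place to update when the process changes.

```shell
#!/usr/bin/env sh
# The deployment steps live in ONE function instead of being copied
# per machine; the host names here are hypothetical placeholders.
SERVERS="${SERVERS:-web1.example.com web2.example.com}"

deploy_to() {
  echo "deploying to $1"
  # The real steps would go here, e.g.:
  # scp -r ./dist "deploy@$1:/var/www/myapp"
  # ssh "deploy@$1" "sudo systemctl reload nginx"
}

for host in $SERVERS; do
  deploy_to "$host"
done
```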
A very similar approach applies to the process of deploying your software; automating it is the essence of the continuous integration and continuous deployment (CI & CD) process.
Before we dive into the details, let’s get back to our question. All of us are aware that time is our most precious resource; in that sense, you can also think of time as money. So, would you enjoy spending your time doing the same repetitive things over and over again? Uploading a new version of your application, testing it manually and so on? Trust me, it would drive you mad after some time, and you can only imagine how much time would be wasted because of it.
For sure, there’s also another way – hire some people to do this for us. And yes, it could be a viable option, yet in that scenario, instead of wasting time, you would spend a lot of money paying engineers to do it for you (remember, I’m still talking from the perspective of someone who runs his own project after hours). Now, what if you hand that task over to other people, but in the future things change and the whole build and deployment process needs to be adjusted? You see where I’m going with this – a never-ending story in which you’re merely an observer who has no clue what it’s all about.
As you may have already guessed, the solution is to learn these things on our own. Although you may not like this idea, I think it’s the only way if you truly want to have everything under your control. You may feel overwhelmed by the number of different topics required to gain a general knowledge of delivering software from A to Z. I certainly did, but trust me, it’s not as difficult as it may seem at first glance. I’d start with a basic strategy and extend it later, depending on your needs:
- Set up a hosting environment.
- Set up a build server.
This already gives you a lot of flexibility and forms the core of CI & CD. First, you need a place (e.g. a virtual machine) that can host your application. Then, you need to set up a build server that will build your code, test it and push it to the given server. And everything will happen automatically, e.g. after a new commit to the source code repository. By spending maybe half a day figuring out how the build server works, you will save a lot of time that would otherwise be unnecessarily spent on manual deployment. And that’s just the beginning. Later on, you can think of creating a separate test environment, on which integration tests will run first, and only after they succeed will the build server receive a message that it’s safe to push the new release to the production environment.
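The flow described above can be sketched as the script a build server runs after every commit. The stage names below are stand-ins for your actual build and test commands; a real CI service would encode the same steps in its own configuration format.

```shell
#!/usr/bin/env sh
set -e  # stop at the first failing stage -- a broken build never ships

stage() { echo "[stage] $1"; }

stage "build"                        # e.g. npm run build, dotnet publish
stage "unit tests"                   # e.g. npm test
stage "deploy to test environment"
stage "integration tests"
# We only get here if every stage above succeeded:
stage "deploy to production"
```

The key property is the ordering: production is the last line, unreachable unless everything before it passed.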
Going further, you can pack your application into containers (e.g. using Docker), set up notifications to Slack, build your own packages using MyGet and automate other activities. At some point, it’s almost impossible to keep track of a manual deployment process, which is why I believe that the sooner you establish an automated one, the better.
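With containers, the per-server steps collapse into pulling and running an image. A hedged sketch with Docker: the image name and registry below are placeholders, and DRY_RUN defaults to 1 so the commands are only printed (set DRY_RUN=0 on a machine with Docker to actually execute them).

```shell
#!/usr/bin/env sh
IMAGE="${IMAGE:-registry.example.com/myapp:1.0.0}"  # hypothetical name
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# On the build server:
run docker build -t "$IMAGE" .   # bake the application into an image
run docker push "$IMAGE"         # publish it to the registry

# On the production host, the whole deployment becomes:
run docker pull "$IMAGE"
run docker run -d --name myapp -p 80:8080 "$IMAGE"
```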
I hope that this short article, if it hasn’t already convinced you to go and study the automation of the deployment process, has at least made you think about it for more than a minute. There are so many services and tools able to do all of these things that it’s really easy to get started, at least with a build server (for example, take a look at my post about Travis CI).
I do believe that we programmers – and even more importantly, software engineers – should know as much as possible not only about creating software but also about delivering it.