As DevOps grows, it helps to understand how it works. One of the core practices in DevOps is “infrastructure as code.” This means you treat your infrastructure exactly the same way you treat your application code: you check it into version control, write tests for it, and make sure it doesn’t diverge from what you have across your environments.
Handling infrastructure as code prevents problems like unexpected changes and configuration drift between environments like production and development. It also ensures that every deployment you do is identical. When this is implemented correctly, you don’t have to worry about the weird differences that creep in with manual deploys.
Something else to consider is that you don’t need to learn a new general-purpose programming language, like Python or Go. There will be some tool-specific languages, but those are usually simple and well documented. The main thing infrastructure as code changes is the way you handle your systems.
Instead of logging into a server and manually making changes, you’ll use a software development workflow for these tasks. That means you won’t have to deal with a lot of issues that only one person in the company knows how to fix. Everyone has the ability to update and deploy changes to the infrastructure, and the changes are preserved the same way code is when you check it into version control.
While infrastructure as code helps with many aspects of getting a reliable version of your app to production and keeping it there, it really adds value when you combine it with automation.
Automating your infrastructure
One of the first things you want to look at is how you take your code and make it into an “artifact.” An artifact is any deployable element that is produced by your build process. For example, when you are working with an app built in React, you know that the npm run build command produces a build directory in the root of your project. Everything in that directory is what gets deployed to the server.
In the case of infrastructure as code, the artifacts are things like Docker images or VM images. You have to know what artifacts to expect from your infrastructure code because these will be versioned and tested, just like your React app would be. Other examples of infrastructure artifacts include OS packages such as RPMs and DEBs.
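As a sketch of how an artifact gets versioned like code, the snippet below hashes the contents of a build directory so that two builds with identical contents produce the same version identifier. The file names and directory layout here are hypothetical, purely for illustration:

```python
import hashlib
import os
import tempfile

def artifact_digest(path):
    """Hash every file under a build directory so the resulting
    artifact can be versioned and compared across environments."""
    digest = hashlib.sha256()
    for root, _dirs, files in sorted(os.walk(path)):
        for name in sorted(files):
            digest.update(name.encode())
            with open(os.path.join(root, name), "rb") as fh:
                digest.update(fh.read())
    return digest.hexdigest()

# Two identical "builds" should produce the same digest, so a changed
# digest means the artifact's contents really changed.
with tempfile.TemporaryDirectory() as build_a, tempfile.TemporaryDirectory() as build_b:
    for build_dir in (build_a, build_b):
        with open(os.path.join(build_dir, "app.conf"), "w") as fh:
            fh.write("port=8080\n")
    same = artifact_digest(build_a) == artifact_digest(build_b)

print(same)  # True
```

In practice a registry tag or package version plays this role, but the idea is the same: the artifact’s identity comes from its contents, not from when or where it was built.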
With your artifacts built, you need to test them just like you would with code. After the build is finished, you can run unit and integration tests. You can also do some security checks to make sure no sensitive information is leaked throughout the process.
A few tools you might write infrastructure automation with include Chef and Ansible. Code written for either of these can be unit tested for syntax errors and best-practice violations without provisioning an entire system.
Checking for linter and formatter errors early on can save you a lot of unnecessary problems later because it keeps the code consistent no matter how many developers make changes. You can also write actual tests to make sure the right server platforms are being used in each environment and that your packages are being installed as you expect them to be.
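As an illustration of what such a unit test might look like, here is a sketch where a hypothetical ROLES mapping stands in for the platform and package definitions your Chef or Ansible code would actually provide:

```python
# Hypothetical role definitions: the platform and packages each
# environment is expected to get. In a real setup these would be
# parsed out of your Chef or Ansible code, not hard-coded.
ROLES = {
    "development": {"platform": "ubuntu-22.04", "packages": {"nginx", "python3"}},
    "production": {"platform": "ubuntu-22.04", "packages": {"nginx", "python3", "certbot"}},
}

def check_environment(env, required_packages):
    """Assert that an environment uses the expected platform and
    installs all of the packages we require."""
    role = ROLES[env]
    assert role["platform"].startswith("ubuntu-"), f"{env}: wrong platform"
    missing = set(required_packages) - role["packages"]
    assert not missing, f"{env}: missing packages {missing}"
    return True

ok = check_environment("production", {"nginx", "certbot"})
print(ok)  # True
```

Tests like these run in seconds because nothing is provisioned; they just verify that what your infrastructure code declares matches what you intended.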
You can take it to the next level and run integration tests to see if your system gets provisioned and deployed correctly. You’ll be able to check to make sure the right packages get installed and that the services you need are running on the correct ports.
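A minimal integration-style check along those lines needs nothing beyond the standard library. In this sketch, a local listener plays the part of a provisioned service so the check has something to verify:

```python
import socket

def port_reachable(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A local listener stands in for the service the provisioning step
# was supposed to start; port 0 lets the OS pick a free port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

reachable = port_reachable("127.0.0.1", port)
listener.close()
print(reachable)  # True
```

In a real pipeline you’d point this kind of check at the freshly provisioned host and the ports your services are supposed to be listening on.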
Another type of testing you can add to your infrastructure code is security testing. This includes making sure you’re in compliance with industry regulations and that you don’t have any extra ports open that could give attackers a way in. The way you write these tests will largely depend on the tools you decide to use, and we’ll cover a few of those in the next section.
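The open-ports part of that can be expressed as a simple policy check: compare what a scan found against what the system is allowed to expose. The allowed-port list below is a made-up example of such a policy:

```python
# Hypothetical security policy: the only ports this system is
# allowed to expose. A real check would feed in actual scan results.
ALLOWED_PORTS = {22, 80, 443}

def unexpected_ports(open_ports, allowed=frozenset(ALLOWED_PORTS)):
    """Return any open ports the policy does not permit."""
    return set(open_ports) - set(allowed)

# A scan that turned up a stray database port gets flagged:
flagged = unexpected_ports({22, 443, 5432})
print(flagged)  # {5432}
```

Failing the build when this set is non-empty turns “no extra ports open” from a manual audit into an automated gate.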
Testing is a huge part of automating infrastructure as code because it catches silent errors that would otherwise cost you a lot of debugging time. You’ll be able to track down and fix anything that might cause problems when you get ready to deploy your infrastructure and use it to get your application updates to production consistently.
The tools you use will help you build the infrastructure code that you need for your pipelines. There are a number of open-source and proprietary tools available for just about any infrastructure needs you have.
Commonly used tools
Some of the tools you’ll see commonly used in infrastructure as code include:
- AWS CloudFormation
- Azure Resource Manager
- Cloud Deployment Manager
The specific tool you decide to go with will depend on the infrastructure and application code you already have in place and the other services you need to work with. You might even find that a combination of these tools works the best for you.
The most important thing with infrastructure as code is to understand everything that goes into it. That way you can make better decisions on which tools to use and how to structure your systems.
Everything behind getting your apps to production
When you hear people talking about provisioning, it means getting a server ready to run what you want to deploy to it. That means getting the OS and system services ready for use. You’ll also check things like network connectivity and port availability to ensure everything can connect to what it needs.
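One way to sketch the port-availability half of that check: try to bind the port yourself before provisioning a service onto it. This is a simplified illustration, not a full provisioning probe:

```python
import socket

def port_free(port, host="127.0.0.1"):
    """Return True if nothing is bound to the port yet, by trying
    to bind it ourselves and releasing it right away."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        sock.close()

# Occupy a port to simulate an existing service, then confirm the
# pre-provisioning check reports it as taken.
holder = socket.socket()
holder.bind(("127.0.0.1", 0))
taken_port = holder.getsockname()[1]
in_use = not port_free(taken_port)
holder.close()
print(in_use)  # True
```

Running checks like this before provisioning means a port conflict shows up as a clear failure instead of a service that silently won’t start.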
Deployment means that an automated process handles deploying and upgrading apps on a server. Another term you’ll hear a lot is “orchestration.” Orchestration coordinates operations across multiple systems.
So once your initial provisioning is finished, orchestration makes sure you can upgrade a running system and that you have control over it.
Then there’s configuration management. It makes sure that applications and packages are maintained and upgraded as needed. This also handles change control of a system’s configuration after the initial provisioning happens. There are a few important rules in configuration management.
- Systems should be converged to a desired state. Tools that run against a server repeatedly, applying new versions of a model to bring it into compliance with the desired state, are called convergent.
- System commands and configs should be idempotent. That means you should be able to run a configuration management procedure multiple times and end up with the same state.
- Systems should be immutable. That means systems can’t be changed after they’ve been deployed, so you redeploy the whole system when a change is needed.
- Systems should be self-service. Any user should be able to start a process without help from anyone else. There shouldn’t be one person with magic knowledge of how provisioning and deployments get handled.
The more complex your infrastructure becomes, the more important it is for these basic rules to be followed.
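The convergence and idempotence rules above can be sketched in a few lines. The resource names here are made up, and a real configuration management tool tracks far more than a dictionary, but the shape is the same: declare the desired state, change only what differs, and get the same result no matter how many times you run it:

```python
# Hypothetical desired state: each resource and the state it should be in.
DESIRED = {"nginx": "installed", "firewalld": "running"}

def converge(current, desired=DESIRED):
    """Bring `current` toward the desired state, touching only what
    differs, and report the changes that were made."""
    changes = {k: v for k, v in desired.items() if current.get(k) != v}
    current.update(changes)
    return changes

state = {}
first_run = converge(state)   # makes every change
second_run = converge(state)  # idempotent: nothing left to do
print(first_run)   # {'nginx': 'installed', 'firewalld': 'running'}
print(second_run)  # {}
```

That empty second run is exactly what you want from configuration management: re-running it is always safe, so it can run on a schedule and quietly repair drift.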
If you’re wondering how to use these tools to make something useful, it depends heavily on the systems you’re working with. If you’re working with simple applications, it might be worth looking into AWS CloudFormation. If you’re thinking about going with microservices, Docker might be a good tool to use along with Kubernetes to orchestrate them.
If you’re working with a huge distributed environment, like a corporate network that has custom applications, you might consider using Puppet, Conducto (my company), or Chef. If you have a site with really high uptime requirements, you might use an orchestration tool like Ansible or Conducto.
These aren’t hard rules you should follow because all of these tools can be used in a number of ways. The use cases I’ve mentioned here are just some of the common ways infrastructure as code tools are used. Hopefully these use cases give you a better idea of how useful infrastructure as code can be.