
Continuous Integration Best Practices—Part 4

As I noted in the other articles in this “Continuous Integration Best Practices” series (click here for Part 1, Part 2, or Part 3), there are 10 best practice principles associated with Continuous Integration, and in those previous articles we covered the first eight. In this article, we pick up where we left off and discuss principles nine and ten.

For review, the full set of principles is outlined below:

1) Maintain a code repository
2) Automate the build
3) Make the build self-testing
4) Everyone commits to the baseline every day
5) Every commit (to baseline) should be built
6) Keep the build fast
7) Test in a clone of the production environment
8) Make it easy to get the latest deliverables
9) Everyone can see the results of the latest build
10) Automate deployment

9) Everyone can see the results of the latest build

In its most basic form, this principle could look like an email report delivered to all relevant parties after each build completes. This is not an ideal solution because it limits the audience to those in the mailing group, and most recipients will quickly tune out due to the sheer volume of reports coming their way.

Email reports may be a necessary first step when implementing a CI system, but a better option is a web-based dashboard that gives interested viewers access to aggregate metrics and lets them drill down into individual builds and their constituent steps. Once a suitable dashboard is in place, the email reports can be dispensed with, and the only build-related items in anyone’s inbox should be notifications alerting them to critical events or requesting some action on their part.
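To make that concrete, here is a minimal sketch in Python of a post-build reporting step that follows this pattern: every result goes to a shared dashboard, and email goes out only when a build breaks. The dashboard endpoint, mail relay, and payload shape are hypothetical placeholders; most CI servers provide equivalent reporting hooks of their own.

```python
"""Sketch of principle 9: publish every build result to a shared dashboard,
and email only for critical events (a failed build). Endpoint names and the
payload shape are hypothetical -- adapt them to your CI server's hooks."""
import json
import smtplib
import urllib.request
from email.message import EmailMessage

DASHBOARD_URL = "https://ci-dashboard.example.com/api/builds"  # hypothetical endpoint
SMTP_HOST = "smtp.example.com"                                 # hypothetical mail relay


def publish_build_result(build_id: str, status: str, duration_sec: float,
                         failed_steps: list[str]) -> None:
    """Push the result of one build to the team-wide dashboard."""
    payload = json.dumps({
        "build_id": build_id,
        "status": status,              # e.g. "passed" or "failed"
        "duration_sec": duration_sec,
        "failed_steps": failed_steps,  # lets viewers drill down to individual steps
    }).encode("utf-8")
    req = urllib.request.Request(
        DASHBOARD_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)


def notify_on_failure(build_id: str, status: str, failed_steps: list[str],
                      recipients: list[str]) -> None:
    """Email only when action is required; passing builds stay out of the inbox."""
    if status != "failed":
        return
    msg = EmailMessage()
    msg["Subject"] = f"Build {build_id} FAILED: {', '.join(failed_steps)}"
    msg["From"] = "ci@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content("See the dashboard for full logs: " + DASHBOARD_URL)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
```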

10) Automate deployment

When complete, a build should automatically deploy into a production-like environment or set of environments. Following this principle forces several things to be true. First, it forces you to crystallize your understanding of your deployment process. By automating the process, you strip out any reliance on “tribal knowledge” that only your deployment guru has and force it into the open, where the process can be explicitly documented and accounted for.

Second, in order to satisfy principles 6 and 7, the deployment process cannot be manual. Automating deployment is the logical consequence of those two principles: being forced to test in a production-like environment and to keep the whole process fast rules out manual deployments. They must be automated to keep things moving.
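As a rough illustration, here is a minimal Python sketch of what such a scripted deployment step might look like. The artifact path, target host, and health-check URL are hypothetical placeholders; the point is that copy, install, restart, and verify all live in one version-controlled script rather than in someone’s head.

```python
"""Sketch of principle 10: deployment as a single scripted, repeatable step.
All paths, host names, and URLs below are hypothetical placeholders."""
import subprocess
import sys
import urllib.request

ARTIFACT = "build/output/app-latest.tar.gz"    # hypothetical build artifact
TARGET_HOST = "staging.example.com"            # production-like environment
HEALTH_URL = f"https://{TARGET_HOST}/healthz"  # hypothetical health endpoint


def run(cmd: list[str]) -> None:
    """Run one deployment step and fail loudly if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def deploy() -> None:
    # 1. Copy the artifact to the target environment.
    run(["scp", ARTIFACT, f"deploy@{TARGET_HOST}:/opt/app/releases/"])
    # 2. Unpack it, switch the 'current' symlink, and restart the service
    #    (assumes the tarball unpacks to an app-latest directory).
    run(["ssh", f"deploy@{TARGET_HOST}",
         "tar -xzf /opt/app/releases/app-latest.tar.gz -C /opt/app/releases "
         "&& ln -sfn /opt/app/releases/app-latest /opt/app/current "
         "&& systemctl restart app"])
    # 3. Smoke test: the deployment is not done until the service answers.
    with urllib.request.urlopen(HEALTH_URL, timeout=30) as resp:
        if resp.status != 200:
            raise RuntimeError(f"Health check failed: HTTP {resp.status}")


if __name__ == "__main__":
    try:
        deploy()
    except Exception as exc:
        print(f"Deployment failed: {exc}", file=sys.stderr)
        sys.exit(1)
```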

David Hubbell
Software Engineer
SPK and Associates
