
Continuous integration (CI) is the practice of integrating source code changes frequently and ensuring that the integrated codebase is in a workable state. Typically, developers merge changes to an integration branch, and an automated system builds and tests the software system.[1] Often, the automated process runs on each commit or runs on a schedule such as once a day. Grady Booch first proposed the term CI in 1991,[2] although he did not advocate integrating multiple times a day; later, CI came to include that aspect.[3]
The earliest known work (1989) on continuous integration was the Infuse environment developed by G. E. Kaiser, D. E. Perry, and W. M. Schell.[4]
In 1994, Grady Booch used the phrase continuous integration in Object-Oriented Analysis and Design with Applications (2nd edition)[5] to explain how, when developing using micro processes, "internal releases represent a sort of continuous integration of the system, and exist to force closure of the micro process".
In 1997, Kent Beck and Ron Jeffries invented extreme programming (XP) while on the Chrysler Comprehensive Compensation System project, including continuous integration.[1][self-published source] Beck published about continuous integration in 1998, emphasising the importance of face-to-face communication over technological support.[6] In 1999, Beck elaborated further in his first full book on Extreme Programming.[7] CruiseControl, one of the first open-source CI tools,[8][self-published source] was released in 2001.
In 2010, Timothy Fitz published an article detailing how IMVU's engineering team had built and been using the first practical CD system. While his post was initially met with skepticism, it quickly caught on and found widespread adoption[9] as part of the lean software development methodology, which was also based on IMVU.
The core activities of CI are that developers frequently merge code changes into a shared integration area and that the resulting integrated codebase is verified for correctness. The first part generally involves merging changes to a common version control branch. The second part generally involves automated processes, including building, testing and other checks.
Typically, a server builds from the integration area frequently, e.g. after each commit or on a schedule such as once a day. The server may perform quality control checks such as running unit tests[10] and collect software quality metrics via processes such as static analysis and performance testing.
Build automation is a best practice.[11][12] Build automation tools reduce the build to a repeatable, scripted process.
Proponents of CI recommend that the entire system be buildable with a single command.
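The single-command idea can be sketched as a small driver script that runs every build step in order and fails fast. This is a minimal illustration, not a specific tool's interface; real projects would invoke a build tool such as Make or Gradle, and the steps below are stand-in commands.

```python
# A minimal sketch of a single-command build driver.
# The STEPS commands are illustrative placeholders for real
# compile/test invocations (e.g. a compiler or test runner).
import subprocess
import sys

STEPS = [
    ["python", "-c", "print('compiling...')"],      # stand-in for a compile step
    ["python", "-c", "print('running tests...')"],  # stand-in for a test step
]

def build() -> bool:
    """Run every step in order; stop at the first failure."""
    for cmd in STEPS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return False
    return True

if __name__ == "__main__":
    # A single command ("python build.py") builds the whole system.
    sys.exit(0 if build() else 1)
```

The point of the single entry point is that the CI server and every developer run exactly the same process, so "works on my machine" discrepancies surface immediately.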
Automation often includes automating the integration, which often includes deployment into a production-like environment. In many cases, the build script not only compiles binaries but also generates documentation, website pages, statistics and distribution media (such as Debian DEB, Red Hat RPM or Windows MSI files).
CI requires the version control system to support atomic commits; i.e., all of a developer's changes are handled as a single commit.
When making a code change, a developer creates a branch that is a copy of the current codebase. As other changes are committed to the repository, this copy diverges from the latest version.
The longer development continues on a branch without merging to the integration branch, the greater the risk of multiple integration conflicts[13] and failures when the developer branch is eventually merged back. When developers submit code to the repository they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.
Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes referred to as "merge hell", or "integration hell",[14] where the time it takes to integrate exceeds the time it took to make their original changes.[15]
Proponents of CI suggest that developers should use test-driven development and ensure that all unit tests pass locally before committing to the integration branch, so that one developer's work does not break another developer's copy.
Incomplete features can be disabled before committing, using feature toggles.
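A feature toggle can be as simple as a flag checked at a branch point, so work-in-progress code can be merged to the integration branch without being reachable by users. The flag name and functions below are illustrative assumptions, not a particular feature-flag library.

```python
# A minimal sketch of a feature toggle: incomplete code is committed
# but disabled, so integrating it does not affect users.
# "new_checkout_flow" and the checkout functions are hypothetical examples.
FEATURE_FLAGS = {"new_checkout_flow": False}  # off until the feature is finished

def is_enabled(flag: str) -> bool:
    return FEATURE_FLAGS.get(flag, False)

def legacy_checkout(cart):
    # Stable path that users actually see.
    return f"charged {sum(cart)}"

def new_checkout(cart):
    # Work in progress; safe to commit because the toggle is off.
    raise NotImplementedError("still under development")

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Flipping the flag in configuration (rather than deleting code) later enables gradual rollout and quick rollback.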
Continuous delivery ensures the software checked in on an integration branch is always in a state that can be deployed to users, and continuous deployment automates the deployment process.
Continuous delivery and continuous deployment are often performed in conjunction with CI and together form a CI/CD pipeline.
Proponents of CI recommend storing all files and information needed for building in version control (for Git, a repository); the system should be buildable from a fresh checkout and not require additional dependencies.
Martin Fowler recommends that all developers commit to the same integration branch.[16]
Developers can reduce the effort of resolving conflicting changes by synchronizing changes with each other frequently, at least daily. Checking in a week's worth of work risks conflicts that are both more likely to occur and more complex to resolve. Relatively small conflicts are significantly easier to resolve than larger ones. Integrating (committing) changes at least once a day is considered good practice, and more often is better.[17]
Building daily, if not more often, is generally recommended.[citation needed]
The system should build commits to the current working version to verify that they integrate correctly. A common practice is to use automated continuous integration, although this may be done manually. Automated continuous integration employs a continuous integration server or daemon to monitor the revision control system for changes, then automatically run the build process.
When fixing a bug, it is good practice to push a test case that reproduces the bug. This prevents the fix from being reverted and the bug from reappearing, which is known as a regression.
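As a sketch of this practice, suppose a hypothetical bug report said that averaging an empty list crashed. The fix and its regression test are committed together, so if the fix is ever reverted the test fails in CI. The function and bug below are invented for illustration.

```python
# A minimal sketch of committing a regression test alongside a bug fix.
# Hypothetical bug: average([]) raised ZeroDivisionError; the fix returns 0.0.
def average(values):
    if not values:          # the bug fix: handle empty input explicitly
        return 0.0
    return sum(values) / len(values)

def test_average_of_empty_list_is_zero():
    # Regression test: reproduces the input from the original bug report.
    assert average([]) == 0.0

def test_average_normal_case():
    assert average([2, 4, 6]) == 4.0
```

Because the test encodes the bug's exact trigger, any future change that reintroduces the crash is caught automatically rather than by a user.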
The build needs to complete rapidly so that if there is a problem with integration, it is quickly identified.
Having a test environment can lead to failures in tested systems when they are deployed in the production environment, because the production environment may differ from the test environment in a significant way. However, building a replica of a production environment is cost-prohibitive. Instead, the test environment or a separate pre-production environment ("staging") should be built as a scalable version of the production environment to reduce costs while maintaining technology stack composition and nuances. Within these test environments, service virtualisation is commonly used to obtain on-demand access to dependencies (e.g., APIs, third-party applications, services, mainframes) that are beyond the team's control, still evolving, or too complex to configure in a virtual test lab.
Making builds readily available to stakeholders and testers can reduce the amount of rework necessary when rebuilding a feature that doesn't meet requirements. Additionally, early testing reduces the chances that defects survive until deployment. Finding errors earlier can reduce the amount of work necessary to resolve them.
All programmers should start the day by updating the project from the repository. That way, they will all stay up to date.
It should be easy to find out whether the build breaks and, if so, who made the relevant change and what that change was.
Most CI systems allow the running of scripts after a build finishes. In most situations, it is possible to write a script to deploy the application to a live test server that everyone can look at. A further advance in this way of thinking is continuous deployment, which calls for the software to be deployed directly into production, often with additional automation to prevent defects or regressions.[18][19]
The following practices can enhance the productivity of pipelines, especially in systems hosted in the cloud:[23][24][25]