What is CI/CD?
CI/CD refers to the combined practices of continuous integration, continuous delivery and/or continuous deployment.
Continuous integration is the practice of automating the build and test steps around code changes. New code changes are regularly built, tested and merged into a shared repository.
Continuous delivery is an approach in which teams produce software in short cycles, ensuring a release is reliable at any time. It also depends on automation at every stage so that cycles can be both quick and dependable; deployment to production remains a manual step.
Continuous deployment is the same as continuous delivery, except that deployment to production is automated as well.
Short cycles speed up delivery
The term “short cycle” means that code is merged to the release branch, tested and released to production as soon as possible. Speeding up the process and maintaining the quality of code in a short development cycle depends on effective automation. Automation helps increase the effectiveness of your employees and decrease the possibility of human error.
Linters and formatters like Prettier make your coding process more efficient by automatically highlighting and correcting coding and styling errors.
- A linter is the first line of defense against bugs: a static code analysis tool that flags programming and stylistic errors as well as suspicious constructs.
- Linters give feedback and push you to apply best practices. You can also write custom ESLint rules, for example, a rule that forbids one-letter variable names.
- Linters reduce conflicts caused by style differences among developers. Forget about pull requests where brackets and spaces are added and deleted back and forth.
- ESLint, if set up correctly, saves time by formatting code automatically on commit, on save, or both.
Potential tools: ESLint, Stylelint, Prettier, eslint-plugin-prettier, eslint-plugin-react-hooks
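For example, the one-letter-variable rule mentioned above doesn't even require custom rule code: ESLint's built-in `id-length` rule covers it. A minimal .eslintrc sketch (the `extends` entries and `exceptions` list are illustrative choices, not a prescribed setup):

```json
{
  "extends": ["eslint:recommended", "plugin:prettier/recommended"],
  "rules": {
    "id-length": ["error", { "min": 2, "exceptions": ["i", "j", "_"] }]
  }
}
```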
Use Git hooks to run checks automatically
Developers may forget to run linters manually and can merge code with errors. Tools like Husky make it easier to work with Git hooks. Here is one possible setup:
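A minimal sketch using package.json configuration (Husky v4-style; the hook command and file globs are illustrative):

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": ["eslint --fix", "prettier --write"]
  }
}
```

With this in place, every `git commit` runs ESLint and Prettier on the staged files only, so broken code is caught before it ever reaches the shared repository.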
Suggested tools: Husky, Lint-staged, Bash scripts
Set up unit tests
Unit tests cut down the time spent manually testing the application. They catch bugs early (for example, “Unhandled promise rejection” or “React cannot update state on an unmounted component”) and show how a change in a shared component affects other places.
Suggested tools: Jest, React-testing-library
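As a hedged sketch of what such a unit test looks like: the `formatName` helper is hypothetical, and the tiny `test`/`expect` stand-ins exist only so the snippet runs without a test runner (in a real project Jest provides them as globals).

```typescript
// formatName.ts — a hypothetical helper under test.
function formatName(user: { firstName?: string; lastName?: string }): string {
  if (!user.firstName) {
    throw new Error('user.firstName is required');
  }
  return `${user.firstName} ${user.lastName ?? ''}`.trim();
}

// Minimal stand-ins so this sketch runs on its own;
// Jest supplies the real test() and expect().
function test(name: string, fn: () => void): void {
  fn();
  console.log(`ok - ${name}`);
}
function expect(actual: unknown) {
  return {
    toBe(expected: unknown): void {
      if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
    },
    toThrow(): void {
      let threw = false;
      try { (actual as () => void)(); } catch { threw = true; }
      if (!threw) throw new Error('expected function to throw');
    },
  };
}

// formatName.test.ts — the Jest-style unit tests.
test('joins first and last name', () => {
  expect(formatName({ firstName: 'Ada', lastName: 'Lovelace' })).toBe('Ada Lovelace');
});
test('throws when firstName is missing', () => {
  expect(() => formatName({})).toThrow();
});
```

Under Jest itself, only the helper and the two test blocks are needed; the stand-ins disappear.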
Enable automation tests during development
See how new code affects existing functionality: does it produce any surprising output? It’s better to find out during development or at the pull request stage; debugging the root cause of a new bug before a merge is easier for both developers and QA.
Suggested tools: Jest, Selenium
Introduce TypeScript
- You can find more bugs because TypeScript highlights potentially overlooked scenarios in the code.
- It reduces time spent debugging because you know what a component or function expects to receive.
You can introduce TypeScript into your project gradually. You will have a mix of .js and .ts files at the start, but if you like it, you can migrate the remaining JavaScript files later.
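A small sketch of the kind of overlooked scenario TypeScript flags (the `User` shape is hypothetical):

```typescript
interface User {
  firstName: string;
  lastName?: string; // optional, so the compiler knows it may be undefined
}

function greet(user: User): string {
  // Calling user.lastName.toUpperCase() here would be a compile-time error,
  // because lastName may be undefined; the compiler forces you to handle it.
  const last = user.lastName ?? '';
  return `Hello, ${user.firstName} ${last}`.trim();
}

console.log(greet({ firstName: 'Ada' })); // the missing-lastName case is handled
```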
Choose a library
Your code must be safe and reliable, and so must your third-party libraries and tools. So, how do you choose a library?
- What works for your team and company? (Consider: Is it a well-known library? Does it have good documentation? Do other teams use something similar already?)
- Is it necessary? Each third-party package is a piece of code that can cause a bug at any time, and it’s far more time-consuming to fix a bug in a third-party library than in your own code.
- Is it regularly maintained? Does it have any vulnerabilities?
Tool: Snyk Open Source
Snyk Open Source scans your dependencies and immediately reports which vulnerabilities each package has, along with a description and a potential fix.
Other points to consider:
- Are bugs quickly resolved?
- When was it last released?
- Is it open source or not?
- Does it have an active community?
- How large is it? (Size is important for dependencies that go to production.)
Tool: Bundle Phobia
Pull request stage
Trunk-based vs. branch-based development
A comparison chart (not shown here) marks the advantages of each approach.
Our team uses a trunk-based approach. We spend more effort at the previous stage, making sure that our code is safe before we merge it to trunk and it goes to release. The trunk-based approach saves us from rebasing frequently and from resolving larger merge conflicts at the end.
Getting production ready
One of the downsides of the trunk-based approach is that code in trunk needs to be “production ready,” yet work-in-progress features still need to be merged into master. How is that possible? Optimizely is a good tool for this: it hides functionality behind feature flags until it’s ready.
Here’s a high-level example of how it looks in code:
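A minimal, self-contained sketch of the idea (Optimizely’s real SDK would supply the flag check; the in-memory flag store and flag name below are stand-ins):

```typescript
// A stand-in flag store; with Optimizely this lives in its dashboard.
const flags: Record<string, boolean> = {
  'new-checkout-flow': false, // work in progress: merged to trunk but hidden
};

function isFeatureEnabled(key: string): boolean {
  return flags[key] ?? false;
}

function renderCheckout(): string {
  // Both code paths live on trunk; the flag decides what users actually see.
  return isFeatureEnabled('new-checkout-flow')
    ? 'new checkout flow'
    : 'legacy checkout flow';
}

console.log(renderCheckout()); // → "legacy checkout flow" until the flag is flipped
```

Flipping the flag in one place releases the feature without another deployment, which is what makes merging unfinished work into trunk safe.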
Continuous delivery through automated development pipeline
A proper manual code review takes time and effort, and developers may forget to run checks for “code smells.” Missed checks can lead to bugs and breaking changes.
Using an automated development pipeline, you can address these issues by automatically running the existing unit tests and even enforcing 100% test coverage. Beyond surfacing failed unit tests, Jest can be set up so that a particular console message, such as “Unhandled promise rejection,” triggers a build failure. That message usually points to a poorly written unit test or to a bug in the actual code.
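One way to achieve this is a setup file, registered in Jest’s `setupFiles` config option, that turns a flagged console message into a thrown error. The trigger patterns below are illustrative, not a built-in Jest feature:

```typescript
// jest.setup.ts — fail the test (and therefore the build) when a flagged
// warning reaches the console. Adjust FORBIDDEN to the messages you care about.
const FORBIDDEN = /Unhandled promise rejection|update .* unmounted component/i;

const originalError = console.error.bind(console);
console.error = (...args: unknown[]): void => {
  originalError(...args); // still show the message in the logs
  const message = args.map(String).join(' ');
  if (FORBIDDEN.test(message)) {
    throw new Error(`Forbidden console message: ${message}`);
  }
};
```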
Example of a development pipeline using Bitbucket Pipelines
Trulioo developers are currently using a development pipeline service called Bitbucket Pipelines, an integrated CI/CD service built into Bitbucket version control. Here is a small example of how you could write the bitbucket-pipelines.yml file, which defines the overall pipeline process after a code commit has been made:
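Since the original file isn’t reproduced here, the sketch below shows the general shape; the Docker image, step name and npm scripts are assumptions for illustration:

```yaml
image: node:18

pipelines:
  pull-requests:
    '**':                    # run for every pull request branch
      - step:
          name: Lint and test
          caches:
            - node
          script:
            - npm ci
            - npm run lint
            - npm test -- --coverage
```

Each `script` command must exit with status 0; if `npm run lint` fails, the pipeline stops and the pull request shows a failed build.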
Here’s what it looks like from the Bitbucket pull request side.
There is a warning that four additional checks need to pass before the code can be merged, one of which is “No failed builds.” In this example, the pipeline status for the current build shows a warning symbol indicating that something is wrong with the automated build.
By clicking into the failed build, you can see more details about the errors that caused it to fail. The pipeline yml file runs several commands, such as lint and test, which are expected to succeed. In this case, the pipeline reports that the lint step has failures, stops the current build and displays the error logs.
Another way to extend the pipeline process is to integrate it with third-party applications.
A good example is setting up Bitbucket to email the developer who created the pull request when a build fails in the pipeline.
Another example is integrating SonarCloud into the build pipeline. In this case, you can require a new pull request to pass the SonarCloud check before it can be merged.
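As an illustrative sketch, SonarCloud ships an official Bitbucket pipe that can be added as a pipeline step; the pipe version and the `SONAR_TOKEN` repository variable are assumptions to verify against the current SonarCloud documentation:

```yaml
- step:
    name: SonarCloud analysis
    script:
      - pipe: sonarsource/sonarcloud-scan:2.0.0
        variables:
          SONAR_TOKEN: $SONAR_TOKEN
```

Combined with merge checks (“No failed builds” plus the SonarCloud quality gate), this blocks pull requests that fail analysis.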
Overall pipeline steps
So, below is an overview of the process from the beginning to the end product:
- Starting from requirements given by product managers, developers write code.
- When the code is finished, developers create a pull request to be reviewed by other developers.
- Once all of the pull request’s pipeline checks have passed and the code has been properly reviewed, developers can merge the code, triggering the deployment process to other environments such as QA.
- Finally, once QA tests the code, the code can be deployed to production.
Don’t hesitate to experiment and play around with your CI/CD pipeline. Continuous improvement is a natural outcome of implementing a CI/CD pipeline!
Tools: Bitbucket Pipelines, SonarCloud, AWS CodeBuild
Other considerations for working with CI/CD
On the ticket requirement stage:
- Quit the pixel-perfect approach
- Design review sessions/grooming sessions with developers before the ticket gets into the sprint
- Split huge tickets into sub-tasks so code can be developed and merged more frequently
On the pull request stage: if the review with the PM or business analyst goes well, we merge the code to the trunk. Code review sessions should be mandatory. It is also important to set up default reviewers and default description templates for pull requests. Keep pull requests small, and keep a checklist of what should be done before and during the pull request stage, including points that are easy to forget.
Automate your processes, or at least partially automate the environment setup. Write automation tests and run critical test suites against production daily to check its health.
Other points to consider:
- Tech debt rotation (a dedicated developer fixes bugs and tech-debt tickets for the sprint, getting familiar with different parts of the project; next sprint, the next developer takes over).
- Maintain a library of reusable components so designers can build UI designs from them and developers can implement those designs in the application quickly.
- Don’t be afraid to write your custom tools to make your team productive and solve some painful problems.
- Write code that is easy to extend if new features come up.
Automating your pipeline
Having an automated development pipeline reduces the effort needed for the code review process and prevents potential bugs and breaking changes. Continuous integration, continuous delivery and continuous deployment are powerful ways to automate your workflow, increase efficiency and improve maintainability. Experiment with your pipeline and determine how you can increase productivity and build better projects.