CI/CD Example
In modern software development, you continuously build, test, and deploy iterative code changes. This process helps reduce the chance that you develop new code based on buggy or failed previous versions. The same principles can be applied to network infrastructure configuration. Creating a Continuous Integration / Continuous Delivery (CI/CD) process for network configuration reduces the need for human intervention and minimizes the risk associated with changes.
Consider the limitations of following the simple or comprehensive example in a production environment. Storing the code locally allows only a single user to make modifications. Making the code available in a Git repository would not fully address the problem, as the Terraform state is still stored locally. Only the user that ran terraform apply would be aware of the latest state of the configuration.
This section of Nexus-as-Code helps the user create a basic CI/CD environment using GitLab. It serves as an example. GitLab offers powerful features such as the ability to create Git repositories, create teams, manage Terraform state, and run CI/CD pipelines.
Note that a plethora of alternative tooling is available such as Jenkins or GitHub. Whilst this guide focuses on GitLab, the same principles apply. It is worthwhile to understand whether your organisation already offers any of these solutions for you to leverage.
This guide helps you to set up a pipeline with the following stages:
- Validate
- Test
- Build
- Deploy
- Cleanup
Note that if you do not have an ACI environment available you can leverage the DevNet Sandbox environment. For this scenario it is recommended to use the always-on sandbox. Please refer to the Sandbox section for more information.
Step 1: Getting started with GitLab
Sign up for a GitLab SaaS account at GitLab. If you prefer self-managing your own GitLab instance you can download the packages here: Install self-managed GitLab. This guide uses the SaaS version of GitLab, but the same principles apply.
Note that all GitLab features used in this guide are available in the free tier. For more information about features and pricing see: GitLab Pricing
After creating your account, sign in to GitLab. After your first login you are prompted with a few questions. You can either create a new project from here, or create a project from the GitLab dashboard.
Select Import Project. Under Import Project from, select Repository by URL. Set the Git repository URL to https://github.com/netascode/nac-aci-simple-example.git.
Give the project a Name and Description, and make sure to keep the repository private. Click on Create project once you are satisfied with your configuration.
Step 2: Setting up a Runner
The environment must typically be prepared with a Runner. A Runner is a process that picks up and executes CI/CD jobs for GitLab. As it is unlikely for the Application Policy Infrastructure Controller (APIC) to be reachable from the internet, a (local) Runner can be used to access it. Runners can be installed on Linux, macOS, FreeBSD, and Windows. A Runner can be installed:
- In a container.
- By downloading a binary manually.
- By using a repository for rpm/deb packages.
This guide assumes installation on Linux through rpm/deb packages. It also assumes that Docker Engine is installed, as the Runner will be set up as a Docker Executor. This allows the Runner to connect to Docker Engine and run each build in a separate, isolated container using a predefined image that will be configured later. For instructions on how to install Docker Engine, see: Docker Engine Installation. It is advised to install the Runner on a dedicated (virtual) machine instead of locally on your computer.
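As a quick reference, a minimal sketch of installing Docker Engine on an Ubuntu machine using Docker's convenience script (a sketch only; see the official Docker Engine Installation guide for other platforms and production setups):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable --now docker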
To install GitLab Runner:
- Add the official GitLab repository: For Debian/Ubuntu/Mint:
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
For RHEL/CentOS/Fedora:
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh" | sudo bash
- Install the latest version of GitLab Runner: For Debian/Ubuntu/Mint:
sudo apt-get install gitlab-runner
For RHEL/CentOS/Fedora:
sudo yum install gitlab-runner
- Register the Runner:
Navigate to Settings > CI/CD, and expand the Runners section. Click on New project runner, select Run untagged jobs, select the applicable operating system, and continue with Create runner.
Disable using shared Runners for this project.
On the machine where the runner is installed, run:
gitlab-runner register
and provide the following configuration:
- URL: https://gitlab.com (note that this is a different URL when you use self-managed GitLab)
- Registration token: Your Registration token from the Runner section in GitLab.
- Description: leave blank (optional)
- Tags: leave blank
- Maintenance note: leave blank
- Executor: docker
- Default Docker image: docker
Completed output (note that your output may look different depending on your local machine):
Enter the GitLab instance URL (for example, https://gitlab.com/):
https://gitlab.com
Enter the registration token: xxxx
Verifying runner... is valid runner=xiePszj12
Enter a name for the runner. This is stored only in the local config.toml file:
[localhost.localdomain]: runner-nac-cicd
Enter an executor: custom, parallels, docker+machine, instance, kubernetes, docker, docker-windows, shell, ssh, virtualbox, docker-autoscaler:
docker
Enter the default Docker image (for example, ruby:2.7):
docker
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Make sure that your Runner is listed as available in GitLab:
For issues with the GitLab Runner installation please see: Runner FAQ. You can verify whether gitlab-runner is running and can contact GitLab with gitlab-runner status and gitlab-runner verify. In case the runner seems active but is not picking up jobs in the pipeline in a later step, you may also try running sudo gitlab-runner run & to run the process in the background.
Completed output (note that your output may look different depending on your local machine):
user@ubuntu-runner:~$ sudo gitlab-runner status
Runtime platform arch=amd64 os=linux pid=344962 revision=85586bd1 version=16.0.2
gitlab-runner: Service is running
user@ubuntu-runner:~$ sudo gitlab-runner verify
Runtime platform arch=amd64 os=linux pid=344968 revision=85586bd1 version=16.0.2
Running in system-mode.
Verifying runner... is valid runner=xiePszj12
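After registration, the Runner stores its configuration in /etc/gitlab-runner/config.toml. A minimal sketch of what this file could look like with the Docker executor (the name and token are placeholders):
concurrent = 1
check_interval = 0

[[runners]]
  name = "runner-nac-cicd"
  url = "https://gitlab.com"
  token = "xxxx"
  executor = "docker"
  [runners.docker]
    image = "docker"
    privileged = false
    volumes = ["/cache"]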
Step 3: Creating a pipeline
In the newly created GitLab repository, click the Clone button and copy the URL displayed under Clone with HTTPS.
When using Visual Studio Code, navigate to View -> Command Palette..., type clone and select the Git: Clone option. Provide the repository URL from the GitLab project.
A prompt will appear to select a folder where the cloned repository should be stored. Select a path and continue with Select as Repository Destination. When prompted to open the cloned repository, select Open.
Alternatively, the repository can be cloned from the command-line interface.
~/Documents/coding > git clone https://gitlab.com/your-org/your-project.git
Cloning into 'nac-cicd'...
remote: Enumerating objects: 98, done.
remote: Counting objects: 100% (98/98), done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 98 (delta 48), reused 98 (delta 48), pack-reused 0
Receiving objects: 100% (98/98), 19.96 KiB | 3.33 MiB/s, done.
Resolving deltas: 100% (48/48), done.
Create a new file .gitlab-ci.yml in the root of this folder and add the following code:
include:
  - template: Terraform/Base.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Terraform/Base.gitlab-ci.yml
  - template: Jobs/SAST-IaC.gitlab-ci.yml   # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST-IaC.gitlab-ci.yml

image:
  name: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}

stages:
  - validate
  - test
  - build
  - deploy
  - cleanup

fmt:
  extends: .terraform:fmt
  needs: []

validate:
  extends: .terraform:validate
  needs: []

build:
  extends: .terraform:build

deploy:
  extends: .terraform:deploy
  dependencies:
    - build

cleanup:
  extends: .terraform:destroy
  dependencies:
    - deploy
Save the file and continue with the next step.
For more information about .gitlab-ci.yml and other examples, see: .gitlab-ci.yml
Note that two other .gitlab-ci.yml templates are being included. GitLab merges the content of these templates with the main .gitlab-ci.yml file when running the pipeline. Using templates for common jobs makes building pipelines much simpler. From the base .gitlab-ci.yml we can simply include the jobs we are interested in, without having to worry about any variables. Terraform/Base.gitlab-ci.yml provides everything that is needed for the Terraform jobs, and SAST-IaC.gitlab-ci.yml helps find security vulnerabilities, compliance issues, and infrastructure misconfigurations in Infrastructure as Code solutions such as Terraform. The results of the test stage (which includes KICS) will be available as an artifact in GitLab. If you wish to omit this part of the pipeline you can simply comment out that include line in your .gitlab-ci.yml, which saves several minutes per pipeline run by skipping the tests.
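For example, skipping the SAST-IaC template could look like this in the include section:
include:
  - template: Terraform/Base.gitlab-ci.yml
  # - template: Jobs/SAST-IaC.gitlab-ci.yml  # commented out to skip the KICS test stage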
Step 4: Changing Terraform Backend
When using Runners it becomes even more important to use a remote backend. The reason is that each job in the pipeline is run by a new, immutable container which executes a specific task and is then discarded. The Terraform state would be lost forever without a remote backend. Navigate to main.tf and add a remote backend section for Terraform. This instructs Terraform to make use of a remote backend.
terraform {
  backend "http" {
  }
}
The beginning of your main.tf file should now look like this:
terraform {
  required_providers {
    aci = {
      source = "CiscoDevNet/aci"
    }
  }
}

terraform {
  backend "http" {
  }
}
~output omitted~
Save the updated main.tf file and continue with the next step.
Note that there are multiple options for remote backends such as AWS S3, Consul or Terraform Cloud. In this guide you will use the Terraform backend provided by GitLab.
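Within the pipeline, the included GitLab Terraform template configures the http backend automatically using TF_ADDRESS. If you also want to run Terraform against the same GitLab-managed state from your own machine, the backend can be initialized manually. A sketch, where the project ID, state name, and personal access token are placeholders:
PROJECT_ID="<your-project-id>"
TF_USERNAME="<your-gitlab-username>"
TF_PASSWORD="<your-personal-access-token>"
TF_ADDRESS="https://gitlab.com/api/v4/projects/${PROJECT_ID}/terraform/state/<state-name>"

terraform init \
  -backend-config="address=${TF_ADDRESS}" \
  -backend-config="lock_address=${TF_ADDRESS}/lock" \
  -backend-config="unlock_address=${TF_ADDRESS}/lock" \
  -backend-config="username=${TF_USERNAME}" \
  -backend-config="password=${TF_PASSWORD}" \
  -backend-config="lock_method=POST" \
  -backend-config="unlock_method=DELETE" \
  -backend-config="retry_wait_min=5"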
Step 5: Managing variables
As this code will be stored centrally in a GitLab repository, it is a good practice to replace any credentials or sensitive data with variables. These variables can be stored securely in GitLab and passed to your Runner as environment variables when executing different jobs. Note that this only happens when your branch is set to protected, which is the default setting.
In the provider "aci" block you can remove username, password, and url, as those will be passed as environment variables.
provider "aci" {
}
Keep in mind that the use of a username and password is easiest, but the use of signature-based authentication is preferred. For more information see Terraform Provider Documentation.
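As an aside, a sketch of what a provider block with signature-based authentication could look like (the key path and certificate name are placeholders; in this guide the provider block is left empty and credentials are passed as environment variables instead):
provider "aci" {
  # Sketch only: certificate-based (signature) authentication.
  username    = "admin"
  cert_name   = "admin-cert"
  private_key = "path/to/admin.key"
  url         = "https://10.0.0.1"
}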
The final main.tf file should look like this:
terraform {
  required_providers {
    aci = {
      source = "CiscoDevNet/aci"
    }
  }
}

terraform {
  backend "http" {
  }
}

provider "aci" {
}

module "aci" {
  source  = "netascode/nac-aci/aci"
  version = "0.7.0"

  yaml_directories = ["data"]

  manage_access_policies    = false
  manage_fabric_policies    = false
  manage_pod_policies       = false
  manage_node_policies      = false
  manage_interface_policies = false
  manage_tenants            = true
}
Save the updated main.tf file.
Now that these values have been replaced by variables you have to provide their values in GitLab. Open your project in GitLab, navigate to Settings > CI/CD, and expand Variables:
Click Add variable to add the variables used for authentication against APIC:
Complete this step for ACI_USERNAME, ACI_PASSWORD, and ACI_URL. Uncheck Protect variable for each variable.
Once your variables have been added, continue with the next step.
Note that if you want to hide your credentials in the job logs you must check Mask variable.
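For reference, the ACI provider reads these values from the environment when they are not set in the provider block, which is how the pipeline jobs authenticate. The same variables can also be exported locally for ad-hoc testing (values are placeholders):
export ACI_USERNAME="admin"
export ACI_PASSWORD="<your-password>"
export ACI_URL="https://10.0.0.1"
terraform plan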
Step 6: Pushing the code
The pipeline definition has been added to .gitlab-ci.yml in step 3, main.tf has been modified to make use of a remote backend in step 4, and any sensitive data has been replaced with GitLab variables in step 5. The next step is to add the changes to staging and commit them.
Before committing changes, the Git username and email address must be provided, so that changes can be tracked to an individual user.
git config --global user.name "First Last"
git config --global user.email "first@example.com"
When using Visual Studio Code, select the Source Control tab on the left-hand side. Stage changes by clicking on the + button next to Changes. After adding a commit Message, changes can be committed to the local copy of the repository. By clicking Sync changes, the local copy will be pushed to the remote repository in GitLab.
Alternatively this can be done from the command-line interface:
git add .
git commit -m "adding pipeline"
git push
Note that this is a commit against the main branch. It is advised to work with branches when using this in production. Branching is not covered in this example.
The GitLab repository now contains your code and will trigger the pipeline as described in .gitlab-ci.yml.
Step 7: Deploying the configuration
This pipeline assumes a Continuous Delivery approach whereby the code is checked automatically, but human intervention is required to manually trigger the deployment of the changes. Open your project in GitLab and navigate to CI/CD > Pipelines. You should see the pipeline that was triggered by the push in step 6.
If previous steps were executed correctly, the pipeline will have completed three steps successfully and will be in a blocked state, meaning human intervention is required to proceed.
Note that the terraform init operation will download the required providers and modules. This may take a few minutes depending on the available bandwidth. Later stages and new pipelines will make use of the cache.
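The caching typically relies on a cache definition similar to the sketch below, covering the .terraform directory; this is a common pattern that the included base template also uses, and it could be added explicitly to your own .gitlab-ci.yml:
cache:
  key: "${TF_ROOT}"
  paths:
    - ${TF_ROOT}/.terraform/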
Before clicking deploy it is recommended to navigate to the pipeline and verify the output of the individual jobs by clicking on each completed stage. The output of the build stage will provide an overview of the planned actions.
If any of these steps have failed they should also provide you with a reason to help you further troubleshoot.
The build job generated a plan to add 20 resources as shown by the output:
When you are satisfied with the output of the build job, you can manually trigger the deploy job by clicking play. This will trigger a Terraform apply action and push the configuration to APIC. If you set up the variables in step 5 correctly, this step should successfully complete:
Navigate to APIC to verify that the configuration was deployed successfully:
Optionally, if you wish to automatically deploy your configuration in a truly Continuous Deployment (CD) fashion, you can modify the deploy block in the .gitlab-ci.yml file to always run:
deploy:
  extends: .terraform:deploy
  dependencies:
    - build
  rules:
    - when: always
The cleanup step is skipped as this would trigger a Terraform destroy, which would remove the deployed configuration.
Step 8: Adding configuration
Now that the initial configuration is pushed to the repository and the Terraform state is available centrally in GitLab, you or someone else in your team can commit new configuration, which triggers a new pipeline when pushed to the repository.
Note that in production you would typically create a branch that contains any changes, which would be merged after a pull request. In this example however, you will add a new Bridge Domain configuration, and commit straight to the main branch.
Open data/tenant_DEV.nac.yaml in your preferred editor / IDE and add the following section:
        - name: 10.1.203.0_24
          vrf: DEV.DEV-VRF
          subnets:
            - ip: 10.1.203.1/24
The tenant_DEV.nac.yaml file should now look like this:
---
apic:
  tenants:
    - name: DEV
      vrfs:
        - name: DEV.DEV-VRF
      bridge_domains:
        - name: 10.1.200.0_24
          vrf: DEV.DEV-VRF
          subnets:
            - ip: 10.1.200.1/24
        - name: 10.1.201.0_24
          vrf: DEV.DEV-VRF
          subnets:
            - ip: 10.1.201.1/24
        - name: 10.1.202.0_24
          vrf: DEV.DEV-VRF
          subnets:
            - ip: 10.1.202.1/24
        - name: 10.1.203.0_24
          vrf: DEV.DEV-VRF
          subnets:
            - ip: 10.1.203.1/24
      application_profiles:
        - name: VLANS
          endpoint_groups:
            - name: VLAN200
              bridge_domain: 10.1.200.0_24
            - name: VLAN201
              bridge_domain: 10.1.201.0_24
            - name: VLAN202
              bridge_domain: 10.1.202.0_24
Save the file, commit and push your changes, either via Visual Studio Code or locally via the command-line interface:
git add data/tenant_DEV.nac.yaml
git commit -m "adding new bd"
git push
This triggers a new iteration of the pipeline that will be in a blocked state, waiting for human intervention.
Open the pipeline and expand the build job. Note that Terraform calculated that 3 new resources are to be added.
Once you are satisfied with your changes you can trigger deployment by clicking the deploy job play button.
Navigate to APIC to verify that the new Bridge Domain has been added:
Step 9: Restoring a previous commit
Imagine that the Bridge Domain added in step 8 resulted in an outage and you must restore the configuration to an earlier state. Because this change is tracked individually it is easy to revert to an earlier commit.
Get a list of previous commits with git log:
git log --pretty=format:"%h%x09%an%x09%ad%x09%s"
f96ac68 Rob van der Kind Wed Jun 21 11:41:53 2023 +0200 adding new bd
4f76688 Rob van der Kind Wed Jun 21 11:13:13 2023 +0200 adding pipeline
~output omitted~
By reverting the last commit you undo the last change:
git revert <your last commit id such as f96ac68>
Save a commit message and push the code:
git push
This is just one way to revert your change, but it is the preferred one, because the revert action is added on top of your commit history. This allows you to track changes or even undo the revert action itself.
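For reference, a couple of related commands, shown as a sketch: git revert --no-edit skips the editor, while git reset rewrites history and is generally discouraged on shared branches:
# Revert the commit without opening an editor (a default message is used)
git revert --no-edit f96ac68

# Not recommended on shared branches: drop the last commit entirely
git reset --hard HEAD~1
git push --force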
Navigate to your project in GitLab and open the most recent pipeline:
Expand the build job and verify that this plan will destroy 3 resources:
Once you are satisfied with your changes you can trigger deployment by clicking the deploy job play button.
Navigate to APIC to verify your Bridge Domain has been removed.
Step 10: Cleaning up
Congratulations. You can now work on your code with multiple team members, start expanding the configuration or even include additional modules such as those shown in the Comprehensive Example section. This CI/CD guide used the code provided in the Simple Example section and is meant to get the reader familiar with some basic automation principles and how to set up a solid foundation for CI/CD.
As a final step you can clean up the configuration by manually triggering the cleanup job. Navigate to CI/CD > Pipelines in your project and click on the cleanup play button:
Note that this triggers a Terraform Destroy action and cleans up the configuration.
Navigate to APIC to verify that your tenant DEV has been removed.
Step 11 (Optional): Adding pre-change validation
Using a Git repository to store the intended configuration helps with keeping track of changes and allows for an easy revert operation if you happen to apply the wrong configuration. But at that point the configuration could already have had undesirable results. Using Nexus Dashboard Insights (NDI) it is possible to run a pre-change validation based on the intended configuration in the *.yaml files. That way your new configuration is compared with an epoch (essentially a snapshot) of the state of the fabric. It shows the user if any anomalies can be expected due to the new configuration. It also allows the user to compare the new configuration against any compliance rules. For example, if you have a traffic segmentation rule configured in NDI that states that A should never be able to talk to B, this will be assessed in the pre-change validation.
For more information about Nexus Dashboard Insights and Pre-change Analysis, please visit the NDI User Guide.
Using the command-line tool Nexus-PCV, you can modify the pipeline to include a Pre-Change Validation (PCV) stage. Nexus-PCV can either work with provided JSON file(s) or a Terraform plan output from a Nexus-as-Code project. It waits for the analysis to complete and evaluates the results.
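Assuming the tool is published on PyPI, a quick way to try Nexus-PCV locally is to install it with pip and inspect the available options:
pip install nexus-pcv
nexus-pcv --help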
At this point it might be sensible to take a bit more control over each of the individual stages in the pipeline. That way it becomes simpler to deal with artifacts between stages, making it easier to pass the plan output to the PCV stage. Instead of including the sample Terraform templates and extending their jobs as in the previous steps, all logic will now be included locally in .gitlab-ci.yml. A condensed example of how to include the Nexus-PCV tool is shown below:
image:
  name: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}

stages:
  - build
  - pcv
  - deploy
  - cleanup

build:
  stage: build
  script:
    - cd "${TF_ROOT}"
    - gitlab-terraform plan
    - gitlab-terraform plan-json
    - terraform show -json plan.cache > pcv.json
  resource_group: ${TF_STATE_NAME}
  artifacts:
    paths:
      - plan.cache
      - pcv.json
    reports:
      terraform: plan.json

# Note that danischm/nac:0.1.3 is just an example Docker image. It is advised to build your own
# Docker image with Nexus-PCV installed. This image can be stored in the GitLab Container Registry
# or Docker Hub. Alternatively you could add a new runner to the GitLab project. Make sure that
# within the "script:" section of the stage you include the nexus-pcv package.
pcv:
  stage: pcv
  image: danischm/nac:0.1.3
  script:
    - nexus-pcv --version
    - nexus-pcv --name ${CI_PIPELINE_ID} --nac-tf-plan pcv.json --output-summary pcv_output.txt --output-url url.txt
  artifacts:
    paths:
      - pcv_output.txt
      - url.txt
  dependencies:
    - build

deploy:
  stage: deploy
  script:
    - cd "${TF_ROOT}"
    - gitlab-terraform apply
  resource_group: ${TF_STATE_NAME}
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual
  dependencies:
    - pcv
    - build

cleanup:
  stage: cleanup
  script:
    - cd "${TF_ROOT}"
    - gitlab-terraform destroy
  resource_group: ${TF_STATE_NAME}
  when: manual
Note that the values for PCV_GROUP, PCV_HOSTNAME_IP, PCV_PASSWORD, PCV_SITE, and PCV_USERNAME are passed as environment variables. This can be configured in the Settings > CI/CD section of your GitLab project.
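For example, when invoking the tool manually outside the pipeline, the same values could be exported in your shell before running nexus-pcv (all values are placeholders):
export PCV_HOSTNAME_IP="nd.example.com"
export PCV_USERNAME="admin"
export PCV_PASSWORD="<your-password>"
export PCV_GROUP="default"
export PCV_SITE="site1"
nexus-pcv --name manual-pcv --nac-tf-plan pcv.json --output-summary pcv_output.txt --output-url url.txt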
Example of a PCV with the CI_PIPELINE_ID as name, invoked by the nexus-pcv tool:
Below is an example of a pre-change validation output. The Venn diagram (from left to right) shows how many anomalies are present in the first epoch (meaning the latest snapshot), how many overlap, and how many new ones are found, based on the new configuration:
Note: The validations section of the Nexus-as-Code for ACI section includes more information about which additional steps can be included in your pipeline, including linting, semantic, syntactical validation and unit testing.