First Steps
Set Environment for the Collection
A Python virtual environment is needed to install the collection and its requirements. We recommend pyenv, which provides a robust Python virtual environment capability and also allows management of different Python versions. The following instructions are based on pyenv. For pipeline execution, please refer to the pipeline section, which is documented at the container level.
Step 1 - Installing the Example Repository
To simplify getting started with this collection, we provide an example repository. Simply clone this repo from GitHub to create the required skeleton, including examples for pipelines. Cloning the repository requires the git client, which is available for all platforms.
Run the following command in the location of interest.
```
git clone https://github.com/netascode/ansible-dc-vxlan-example.git nac-vxlan
```

This will clone the example repository into the directory nac-vxlan. Next, delete the .git directory to remove the connection to the example repository. Now you can create your own repository from this pre-built structure.
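The detach step described above can be sketched as follows, assuming the clone landed in nac-vxlan as shown:

```shell
# Remove the cloned history so the directory no longer
# points at the example repository.
rm -rf nac-vxlan/.git

# Start a fresh, empty history for your own repository
# (git init creates the directory if it does not exist yet).
git init nac-vxlan
```

After this, `git -C nac-vxlan remote` prints nothing, confirming the link to the example repository is gone.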
Step 2 - Create the Virtual Environment with pyenv
In this directory, create a new virtual environment and install a Python version of your choice. At the time of this writing, a commonly used version is Python 3.10.13; the command pyenv install 3.10.13 will install it. For detailed instructions, please visit the pyenv site.
```
cd nac-vxlan
pyenv virtualenv <python_version> nac-ndfc
pyenv local nac-ndfc
```

Executing the command pyenv local nac-ndfc sets the environment so that whenever you enter the directory, the correct virtual environment is activated.
Step 3 - Install Ansible and Additional Required Tools
Included in the example repository is the requirements file to install Ansible. First, upgrade pip to the latest version.
```
pip install --upgrade pip
pip install -r requirements.txt
```

Step 4 - (Option 1) - Install Ansible Galaxy Collection (default placement)
The default location for Ansible Galaxy collections is your home directory, under .ansible/collections/ansible_collections/. To install the collection in the default location, run the following command:
```
ansible-galaxy collection install -r requirements.yaml
```

Step 4 - (Option 2) Install Ansible Galaxy Collection (non-default placement)
If you wish to install the Galaxy collection inside the repository you are creating from this example repository, run the following command:
```
ansible-galaxy collection install -p collections/ansible_collections/ -r requirements.yaml
```

The ansible.cfg file needs to be configured to point to the location of the collection.
This path sits alongside the Python modules and libraries of the virtual environment you created. If you look in that directory, you will find the collection package locations. Here is the base ansible.cfg; you will need to adjust collections_path to match your environment paths:
```
[defaults]
collections_path = ./collections/ansible_collections/
```

Step 5 - Change Ansible Callbacks
If you wish to add any Ansible callbacks (those listed below expand on displaying execution time), you can add the following to the ansible.cfg file:
```
callback_whitelist = ansible.posix.timer, ansible.posix.profile_tasks, ansible.posix.profile_roles
callbacks_enabled = ansible.posix.timer, ansible.posix.profile_tasks, ansible.posix.profile_roles
bin_ansible_callbacks = True
```

Step 6 - Verify the Installation
Verify that the Ansible configuration file is being read and all the paths are correct inside this virtual environment.
```
ansible --version
ansible [core 2.16.3]
  config file = /Users/username/tmp/nac-vxlan/ansible.cfg
  configured module search path = ['/Users/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/username/.pyenv/versions/3.10.13/envs/nac-ndfc/lib/python3.10/site-packages/ansible
  ansible collection location = /Users/username/path/to/collections/ansible_collections
  executable location = /Users/username/.pyenv/versions/nac-ndfc/bin/ansible
  python version = 3.10.13 (main, Oct 29 2023, 00:04:17) [Clang 15.0.0 (clang-1500.0.40.1)] (/Users/username/.pyenv/versions/3.10.13/envs/nac-ndfc/bin/python3.10)
  jinja version = 3.1.4
  libyaml = True
```

Inventory Host Files
As is standard with Ansible best practices, inventory files provide the destination targets for the automation. For this collection, the inventory file is a YAML file that contains the information about the devices that are going to be configured. The inventory file is called inventory.yaml and is located in the root of the repository.
The inventory file is going to contain a structure similar to this:
```
---
all:
  children:
    ndfc:
      hosts:
        nac-ndfc1:
          ansible_host: 10.X.X.X
```

This structure creates two things in Ansible: a group called ndfc and a host called nac-ndfc1. These tie back to the directory structure of the repository, which contains two folders in the top directory:
```
root
├── group_vars
│   └── ndfc
│       └── connection.yaml
└── host_vars
    └── nac-ndfc1
        └── data_model_files
```

The data model is required to exist under the host_vars directory structure. The inventory file organizes how the variables are read through both group_vars and host_vars. Under group_vars is where you will set the connection.yaml file that holds the credentials of the NDFC controller. Under host_vars is where the data model files are placed.
The collection is pre-built to utilize the group_vars and host_vars matching what is already constructed in the repository. Currently this methodology is a 1:1 relationship between code repository and NDFC fabric. For more complex environments, the inventory file can be expanded to include multiple groups and hosts including the usage of multi-site fabrics, explained in a separate document.
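As a hypothetical sketch of that expansion, a second fabric could be added as another host in the same group; the host name nac-ndfc2 and the placeholder address are illustrative only and would require their own host_vars data model:

```yaml
---
# Illustrative sketch only: nac-ndfc2 is a hypothetical second fabric,
# which would need its own host_vars/nac-ndfc2 data model directory.
all:
  children:
    ndfc:
      hosts:
        nac-ndfc1:
          ansible_host: 10.X.X.X
        nac-ndfc2:
          ansible_host: 10.Y.Y.Y
```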
Step 1 - Update the Inventory File
In the provided inventory.yaml file in the root directory, update the ansible_host variable to point to your NDFC controller by replacing 10.X.X.X with the IP address of the NDFC controller.
Step 2 - Configure Ansible Connection File
In the directory group_vars/ndfc is a file called connection.yaml that contains example data:
```
---
# Connection Parameters for 'ndfc' inventory group
#
# Controller Credentials
ansible_connection: ansible.netcommon.httpapi
ansible_httpapi_port: 443
ansible_httpapi_use_ssl: true
ansible_httpapi_validate_certs: false
ansible_network_os: cisco.dcnm.dcnm
# NDFC API Credentials
ansible_user: "{{ lookup('env', 'ND_USERNAME') }}"
ansible_password: "{{ lookup('env', 'ND_PASSWORD') }}"
# Credentials for devices in Inventory
ndfc_switch_username: "{{ lookup('env', 'NDFC_SW_USERNAME') }}"
ndfc_switch_password: "{{ lookup('env', 'NDFC_SW_PASSWORD') }}"
```

This file contains the connection parameters for reachability to the NDFC controller. The ansible_user and ansible_password variables establish the connection to the NDFC controller. For the devices, you will set separate variables, also configured as environment variables. Environment variables are used for security reasons, so that the credentials are not stored in plain text in the repository. Credentials accidentally committed to a repository are very hard to remove. Hence, the usage of environment variables is recommended as a starting point.
Also, if you plan to eventually utilize a pipeline, the environment variables can be set in the pipeline configuration in a secure manner that is not exposed to the repository.
Ansible Vault can also be used to encrypt the contents of the connection file, or simply to encrypt the individual variables.
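As a sketch of the per-variable approach: a value can be encrypted with ansible-vault encrypt_string (for example, ansible-vault encrypt_string 'Admin_123' --name 'ansible_password') and the output pasted into connection.yaml in place of the environment-variable lookup. The ciphertext below is elided, not real Vault output:

```yaml
# Illustrative only: the ciphertext lines are elided, not a real vault blob.
ansible_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  <encrypted-ciphertext-lines>
```

The playbook must then be run with a vault password source, e.g. ansible-playbook --ask-vault-pass.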
Step 3 - Set Environment Variables
The environment variables are set in the shell that will execute the playbook, via the export command (bash). Using the template below, set the environment variables to the correct credentials for the NDFC controller and the devices in the inventory of your topology.
```
# These are the credentials for the NDFC controller
export ND_USERNAME=admin
export ND_PASSWORD=Admin_123
# These are the credentials for the devices in the inventory
export NDFC_SW_USERNAME=admin
export NDFC_SW_PASSWORD=Admin_123
```

Note the variable names match the env lookups in connection.yaml. The following quickstart repository is available to provide a step-by-step guide for using this collection.
This collection is intended for use with the following release versions:
NDFC Release 12.2.1 or later.
Ansible Version Compatibility
This collection has been tested against the following Ansible versions: >=2.14.15.
Plugins, roles, and modules within a collection may be tested with only specific Ansible versions. A collection may contain metadata that identifies these versions. PEP 440 is the schema used to describe the versions of Ansible.
Building the Primary Playbook
The following playbook for the NDFC as Code collection is the central execution point for this collection. Compared to automation in other collections, this playbook is designed to be mostly static and will typically not change. What gets executed during automation is based entirely on changes in the data model. When changes are made in the data model, the playbook calls the various roles and underlying modules to process the changes and update the NDFC-managed fabric.
The playbook is located in the root of the repository and is called vxlan.yaml. It contains the following:
```
---
# This is the main entry point playbook for calling the various
# roles in this collection.
- hosts: nac-ndfc1
  any_errors_fatal: true
  gather_facts: no

  roles:
    # Prepare service model for all subsequent roles
    #
    - role: cisco.nac_dc_vxlan.validate

    # -----------------------
    # DataCenter Roles
    # Role: cisco.netascode_dc_vxlan.dtc manages direct to controller NDFC workflows
    #
    - role: cisco.nac_dc_vxlan.dtc.create
      tags: 'role_create'

    - role: cisco.nac_dc_vxlan.dtc.deploy
      tags: 'role_deploy'

    - role: cisco.nac_dc_vxlan.dtc.remove
      tags: 'role_remove'
```

The host is defined as nac-ndfc1, which references back to the inventory.yaml file. The roles section is where the various collection roles are called.
The first role, cisco.nac_dc_vxlan.validate, validates the data model. This is a required step to ensure that the data model is correct and can be processed by the subsequent roles.
The subsequent roles are the cisco.nac_dc_vxlan.dtc.create, cisco.nac_dc_vxlan.dtc.deploy, and cisco.nac_dc_vxlan.dtc.remove roles. These roles are the primary roles that will invoke changes in NDFC as described earlier.
Note: For your safety, as indicated earlier, the remove role also requires setting some variables to true under the group_vars directory. This is to avoid accidental removal of configuration from NDFC that might impact the network. This will be covered in more detail below.
The playbook can be configured to execute only the roles that are required. For example, as you are building your data model and familiarizing yourself with the collection, you may comment out the deploy and remove roles and only execute the validate and create roles. This provides a quick way to make sure that the data model is structured correctly.
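For example, while iterating on the data model, the roles section of vxlan.yaml might be trimmed down like this (a sketch of the playbook above with the deploy and remove roles commented out):

```yaml
  roles:
    # Validate the data model before any changes are attempted
    - role: cisco.nac_dc_vxlan.validate

    - role: cisco.nac_dc_vxlan.dtc.create
      tags: 'role_create'

    # Commented out while building and testing the data model
    # - role: cisco.nac_dc_vxlan.dtc.deploy
    #   tags: 'role_deploy'

    # - role: cisco.nac_dc_vxlan.dtc.remove
    #   tags: 'role_remove'
```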
Role Level Tags:
To speed up execution when only certain roles need to be run, the following role-level tags are provided:
- role_validate - Select and run the cisco.nac_dc_vxlan.validate role
- role_create - Select and run the cisco.nac_dc_vxlan.create role
- role_deploy - Select and run the cisco.nac_dc_vxlan.deploy role
- role_remove - Select and run the cisco.nac_dc_vxlan.remove role
The validate role will automatically run if the tags role_create, role_deploy, or role_remove are specified.
Example: Selectively Run cisco.nac_dc_vxlan.create role alone
```
ansible-playbook -i inventory.yaml vxlan.yaml --tags role_create
```

Selective Execution based on Model Changes
This collection has the capability to selectively run only the sections within each role that changed in the data model. This requires at least one run in which all of the roles and sections are executed, creating previous state. On the next run, only the sections that changed in the data model will be executed. For example, if VRFs and Networks are added/changed/removed in the model data files, only the VRF and Networks sections will be run.
This capability is not available under the following conditions:
- Control flag force_run_all under group_vars is set to true.
- When using Ansible tags to control execution.
- When one of the following roles failed to complete on the previous run:
  - cisco.nac_dc_vxlan.validate
  - cisco.nac_dc_vxlan.create
  - cisco.nac_dc_vxlan.deploy
  - cisco.nac_dc_vxlan.remove
If any of these conditions is true, then all roles/sections will be run.
See Also
- Ansible Using collections for more details.
Contributing to this Collection
Section titled “Contributing to this Collection”Ongoing development efforts and contributions to this collection are focused on new roles when needed and enhancements to current roles.
We welcome community contributions to this collection. If you find problems, please open an issue or create a PR against the Cisco netascode_dc_vxlan collection repository.