Under the majority of the role directories you'll notice a ".yml" file. Playbooks are written in YAML, which stands for "YAML Ain't Markup Language." YAML is used because it's even easier to read than data formats such as XML or JSON, which you examined previously.
YAML files optionally begin with --- to signify the start of the document. Following the document start come the Ansible tasks, which invoke modules and are written in YAML syntax as lists and dictionaries of key/value pairs. A list item begins with "- " (a dash followed by a space), while, as in previous examples, dictionaries use ":" to separate keys from values.
Below you can see an example of YAML syntax that contains a list of dictionaries, where the dictionaries themselves contain lists:
- name: CONFIGURE PIM ANYCAST RP
  cisco.nxos.nxos_config:
    lines:
      - ip pim anycast-rp {{ rp_address }} {{ s1_loopback }}
      - ip pim anycast-rp {{ rp_address }} {{ s2_loopback }}

- name: CONFIGURE PIM RP
  cisco.nxos.nxos_pim_rp_address:
    rp_address: "{{ rp_address }}"
The example above introduced two modules, cisco.nxos.nxos_config and cisco.nxos.nxos_pim_rp_address, which are existing Ansible network modules from the Cisco NXOS collection.
The documentation for each network module includes a synopsis of the function the module performs and a table of parameters (keys). The table tells you which parameters are required for the module to function as a task, which are optional, and what the defaults are for those parameters.
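You can read this documentation locally with the ansible-doc command (assuming the cisco.nxos collection is installed in your environment), for example:
ansible-doc cisco.nxos.nxos_pim_rp_address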
As an example, each of these core network modules has basic requirements for the arguments used to access the device, although some of these parameters may not apply to the network operating system you work with.
Ansible allows you to define and reference variables in your playbook tasks using Jinja2, a templating language for Python. A YAML gotcha with Jinja2 is that if a value begins with a Jinja2 templated variable, for example "{{ inventory_hostname }}", then the entire value must be quoted.
Variables can be defined in various locations, and Ansible has a precedence system that determines which definition wins when the same variable is set in more than one location. As an example, a variable in a role's defaults directory is overridden by the same variable defined in group_vars/all, and both of these are overridden by the vars directory within the specific role (role vars).
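As a minimal sketch of that ordering, using a hypothetical ntp_vrf variable and a hypothetical role named common (lowest precedence first):
# roles/common/defaults/main.yml -- lowest precedence
ntp_vrf: default

# group_vars/all.yml -- overrides the role default
ntp_vrf: management

# roles/common/vars/main.yml -- role vars win over both of the above
ntp_vrf: lab-mgmt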
Create an ansible.cfg file to disable host key checking and to set your Python interpreter for the purposes of this lab.
touch /home/pod12/workspace/nxapilab/ansible-nxos/ansible.cfg
cat <<EOF > /home/pod12/workspace/nxapilab/ansible-nxos/ansible.cfg
[defaults]
interpreter_python = $PYENV_VIRTUAL_ENV/bin/python
host_key_checking = False
# collections_path = $PYENV_VIRTUAL_ENV/lib/python3.11/site-packages/ansible_collections
[persistent_connection]
command_timeout=1000
connect_timeout=1000
EOF
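Because the heredoc delimiter (EOF) is unquoted, the shell expands $PYENV_VIRTUAL_ENV as the file is written, so the rendered ansible.cfg contains the absolute path to your virtual environment's Python interpreter. You can confirm the result:
cat /home/pod12/workspace/nxapilab/ansible-nxos/ansible.cfg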
You will create an nxos group vars directory for any data that is consistent across all NXOS devices regardless
of their role. Within this directory, you will create a connection.yml file to store common connection details
such as the ansible_connection, ansible_network_os, and username/password information used
to connect to the devices, as well as a common.yml file to store variables shared across all NXOS devices.
These are simple files of key/value pairs; group_vars/nxos is where you place universal variables that apply to all NXOS devices.
mkdir -p /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/nxos
For passwords, it is best practice to use something like Ansible Vault or to source credentials from environment variables, which is what you will do in this lab.
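For reference, outside this lab you could instead store the password encrypted at rest; for example, Ansible Vault can produce an encrypted value to paste directly into a vars file (replace <password> with the real credential):
ansible-vault encrypt_string '<password>' --name 'ansible_httpapi_pass'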
touch /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/nxos/connection.yml
cat <<EOF > /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/nxos/connection.yml
---
ansible_connection: ansible.netcommon.httpapi
ansible_httpapi_port: 443
ansible_httpapi_use_ssl: true
ansible_httpapi_validate_certs: false
ansible_network_os: cisco.nxos.nxos
ansible_user: "{{ lookup('ansible.builtin.env', 'NXOS_USERNAME') }}"
ansible_httpapi_pass: "{{ lookup('ansible.builtin.env', 'NXOS_PASSWORD') }}"
EOF
Next, create the common.yml file to store variables shared across all NXOS devices. This file will contain NTP server configuration that applies regardless of device role.
touch /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/nxos/common.yml
cat <<EOF > /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/nxos/common.yml
---
ntp:
  servers:
    - ip: 10.81.254.131
      vrf: management
      prefer: true
EOF
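A role task can later render these values into device configuration. The following is a minimal sketch using cisco.nxos.nxos_config and is not the exact task you will build in this lab:
- name: CONFIGURE NTP SERVERS
  cisco.nxos.nxos_config:
    lines:
      - "ntp server {{ item.ip }}{{ ' prefer' if item.prefer | default(false) else '' }} use-vrf {{ item.vrf }}"
  loop: "{{ ntp.servers }}"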
Create a shell script to set environment variables for your NXOS device credentials. Your Ansible playbook will
leverage environment variables named NXOS_USERNAME and NXOS_PASSWORD for authentication
purposes with the NXOS devices.
touch /home/pod12/workspace/nxapilab/ansible-nxos/secrets.sh
cat <<EOF > /home/pod12/workspace/nxapilab/ansible-nxos/secrets.sh
export NXOS_USERNAME="admin"
export NXOS_PASSWORD="cisco.123"
EOF
Source the simple shell script to set the secret env variables.
cd /home/pod12/workspace/nxapilab/ansible-nxos
source secrets.sh
You can check that your env variables are set by issuing the below command in your VSCode terminal window:
env | grep -E "^NX"
$ env | grep -E "^NX"
NXOS_USERNAME=admin
NXOS_PASSWORD=cisco.123
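You can also confirm that Ansible itself resolves the lookup from your current shell (Ansible will warn about an empty inventory and fall back to implicit localhost):
ansible localhost -m ansible.builtin.debug -a "msg={{ lookup('ansible.builtin.env','NXOS_USERNAME') }}"
This should print admin in the task output.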
Copy the below YAML into the group_vars/spines.yml file. The variables here will be used to configure specific features for VXLAN EVPN and BGP parameters. Remember, each of these is a dictionary with key/value pairs or a dictionary that contains a list of dictionaries.
touch /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/spines.yml
code-server -r /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/spines.yml
---
# var file for spines group
features:
  - nxapi
  - ospf
  - pim
  - bgp
  - nv overlay
  - netconf
  - restconf
ospf:
  - process: UNDERLAY
    area: 0.0.0.0
pim:
  anycast_rp_address: 10.250.250.1
  anycast_rp_router_addresses:
    - 10.0.0.1
bgp:
  asn: 65001
  neighbors:
    - neighbor: 10.0.0.101
      remote_as: 65001
      update_source: loopback0
    - neighbor: 10.0.0.102
      remote_as: 65001
      update_source: loopback0
    - neighbor: 10.0.0.103
      remote_as: 65001
      update_source: loopback0
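Role tasks will iterate over data such as this. As a sketch (not the exact task you will build), the features list could drive the cisco.nxos.nxos_feature module:
- name: ENABLE FEATURES
  cisco.nxos.nxos_feature:
    feature: "{{ item }}"
    state: enabled
  loop: "{{ features }}"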
Copy the below YAML into the group_vars/leafs.yml file. The variables here will be used for specific features for VXLAN EVPN, BGP parameters, SVI parameters, and VXLAN parameters, such as tenant VRFs and VLANs mapped to specific VNIs. Remember, each of these is a dictionary with key/value pairs or a dictionary that contains a list of dictionaries.
touch /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/leafs.yml
code-server -r /home/pod12/workspace/nxapilab/ansible-nxos/group_vars/leafs.yml
---
# var file for leafs group
features:
  - nxapi
  - ospf
  - pim
  - bgp
  - nv overlay
  - vn-segment-vlan-based
  - interface-vlan
  - netconf
  - restconf
ospf:
  - process: UNDERLAY
    area: 0.0.0.0
bgp:
  asn: 65001
  neighbors:
    - neighbor: 10.0.0.1
      remote_as: 65001
      update_source: loopback0
    - neighbor: 10.0.0.2
      remote_as: 65001
      update_source: loopback0
anycast_gw_mac: "1234.5678.9000"
vrfs:
  - name: management
    routes:
      - destination: 0.0.0.0/0
        next_hop: 10.15.12.1
  - name: &refvrf_ansiblevrf AnsibleVRF
    vlan_id: 500
    vni_id: 50000
networks:
  # - vlan_id: 1
  - name: AnsibleNet1
    vlan_id: 101
    vni_id: 10101
    vrf: *refvrf_ansiblevrf
    ip_address: 192.168.1.1
    mask: 24
    mcast_grp: 239.1.1.1
  - name: AnsibleNet2
    vlan_id: 102
    vni_id: 10102
    vrf: *refvrf_ansiblevrf
    ip_address: 192.168.2.1
    mask: 24
    mcast_grp: 239.1.1.2
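When this file is parsed, each *refvrf_ansiblevrf alias resolves to the anchored value AnsibleVRF. Assuming PyYAML is available in your virtual environment, you can verify this from the ansible-nxos directory:
python -c "import yaml; print(yaml.safe_load(open('group_vars/leafs.yml'))['networks'][0]['vrf'])"
This prints AnsibleVRF.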
At this point, the variables shared across all devices in each role are defined in group_vars/spines.yml and group_vars/leafs.yml respectively. Next, you will create the device-specific variables, which live under the host_vars directory, one file per device.
Copy the below YAML into the host_vars file for your Spine1 device. These device-specific variables include physical interface information, such as IP addressing, and loopback interface information for underlay routing, such as peering, the router-id, and the PIM RP address.
In this YAML file you will also see YAML anchors (&l3 and &lo). Anchors and aliases are YAML's mechanism for reusing pieces of YAML elsewhere in a file; you already saw an alias (*refvrf_ansiblevrf) in the leafs group vars. Here, you create two data structures for two different types of Layer 3 interfaces and then combine them into a single list of Layer 3 interfaces via Jinja2 concatenation in the all_layer3_interfaces variable.
touch /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-spine1.yml
code-server -r /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-spine1.yml
---
# vars file for staging-spine1
hostname: staging-spine1
layer3_physical_interfaces: &l3
  - name: mgmt0
    ip_address: 10.15.12.11
    mask: 24
    enabled: true
  - name: Ethernet1/1
    description: To L1 Eth1/1
    ip_address: 10.1.1.0
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: Ethernet1/2
    description: To L2 Eth1/1
    ip_address: 10.1.1.2
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: Ethernet1/3
    description: To L3 Eth1/1
    ip_address: 10.1.1.4
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
loopback_interfaces: &lo
  - name: loopback0
    description: Routing Loopback
    ip_address: 10.0.0.1
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: loopback250
    description: PIM Anycast RP Loopback
    ip_address: 10.250.250.1
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
all_layer3_interfaces: "{{ layer3_physical_interfaces + loopback_interfaces }}"
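A role task can then loop over the combined list. For example, a sketch (not the exact task you will build) that assigns addresses with cisco.nxos.nxos_config, skipping entries without an ip_address, such as nve1 on the leafs:
- name: ASSIGN INTERFACE IP ADDRESSES
  cisco.nxos.nxos_config:
    lines:
      - ip address {{ item.ip_address }}/{{ item.mask }}
    parents: interface {{ item.name }}
  loop: "{{ all_layer3_interfaces }}"
  when: item.ip_address is defined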
Copy the below YAML into the host_vars file for your Spine2 device. Like Spine1, the device-specific variables include physical interface information, such as IP addressing, and loopback interface information for underlay routing, such as peering, the router-id, and the PIM RP address.
touch /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-spine2.yml
code-server -r /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-spine2.yml
---
# vars file for staging-spine2
hostname: staging-spine2
layer3_physical_interfaces: &l3
  - name: mgmt0
    ip_address: 10.15.12.12
    mask: 24
    enabled: true
  - name: Ethernet1/1
    description: To L1 Eth1/2
    ip_address: 10.2.2.0
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: Ethernet1/2
    description: To L2 Eth1/2
    ip_address: 10.2.2.2
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: Ethernet1/3
    description: To L3 Eth1/2
    ip_address: 10.2.2.4
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
loopback_interfaces: &lo
  - name: loopback0
    description: Routing Loopback
    ip_address: 10.0.0.2
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: loopback250
    description: PIM Anycast RP Loopback
    ip_address: 10.250.250.2
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
all_layer3_interfaces: "{{ layer3_physical_interfaces + loopback_interfaces }}"
Copy the below YAML into the host_vars file for your Leaf1 device. Like your spines, the leaf device-specific variables include physical interface information, such as IP addressing, and loopback interface information for underlay routing, such as peering, the router-id, and the VTEP address.
touch /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-leaf1.yml
code-server -r /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-leaf1.yml
---
# vars file for staging-leaf1
hostname: staging-leaf1
layer3_physical_interfaces: &l3
  - name: mgmt0
    ip_address: 10.15.12.21
    mask: 24
    enabled: true
  - name: nve1
    enabled: true
  - name: Ethernet1/1
    description: To S1 Eth1/1
    mode: layer3
    ip_address: 10.1.1.1
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: Ethernet1/2
    description: To S2 Eth1/1
    mode: layer3
    ip_address: 10.2.2.1
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
loopback_interfaces: &lo
  - name: loopback0
    description: Routing Loopback
    ip_address: 10.0.0.101
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: loopback1
    description: VTEP Loopback
    ip_address: 10.100.100.101
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
all_layer3_interfaces: "{{ layer3_physical_interfaces + loopback_interfaces }}"
layer2_physical_interfaces: &l2
  - name: Ethernet1/4
    description: To Server1 Eth1
    mode: access
    vlan: 101
    enabled: true
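Similarly, a sketch of how a task could consume the Layer 2 list with the cisco.nxos.nxos_l2_interfaces module (again, not the exact task you will build later):
- name: CONFIGURE ACCESS PORTS
  cisco.nxos.nxos_l2_interfaces:
    config:
      - name: "{{ item.name }}"
        access:
          vlan: "{{ item.vlan }}"
  loop: "{{ layer2_physical_interfaces }}"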
Copy the below YAML into the host_vars file for your Leaf2 device. Like your spines, the leaf device-specific variables include physical interface information, such as IP addressing, and loopback interface information for underlay routing, such as peering, the router-id, and the VTEP address.
touch /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-leaf2.yml
code-server -r /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-leaf2.yml
---
# vars file for staging-leaf2
hostname: staging-leaf2
layer3_physical_interfaces: &l3
  - name: mgmt0
    ip_address: 10.15.12.22
    mask: 24
    enabled: true
  - name: nve1
    enabled: true
  - name: Ethernet1/1
    description: To S1 Eth1/2
    mode: layer3
    ip_address: 10.1.1.3
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: Ethernet1/2
    description: To S2 Eth1/2
    mode: layer3
    ip_address: 10.2.2.3
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
loopback_interfaces: &lo
  - name: loopback0
    description: Routing Loopback
    ip_address: 10.0.0.102
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: loopback1
    description: VTEP Loopback
    ip_address: 10.100.100.102
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
all_layer3_interfaces: "{{ layer3_physical_interfaces + loopback_interfaces }}"
layer2_physical_interfaces: &l2
  - name: Ethernet1/4
    description: To Server3 Eth1
    mode: access
    vlan: 101
    enabled: true
Copy the below YAML into the host_vars file for your Leaf3 device. Like the other leaf devices, the device-specific variables include physical interface information, such as IP addressing, and loopback interface information for underlay routing, such as peering, the router-id, and the VTEP address.
touch /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-leaf3.yml
code-server -r /home/pod12/workspace/nxapilab/ansible-nxos/host_vars/staging-leaf3.yml
---
# vars file for staging-leaf3
hostname: staging-leaf3
layer3_physical_interfaces: &l3
  - name: mgmt0
    ip_address: 10.15.12.23
    mask: 24
    enabled: true
  - name: nve1
    enabled: true
  - name: Ethernet1/1
    description: To S1 Eth1/3
    mode: layer3
    ip_address: 10.1.1.5
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: Ethernet1/2
    description: To S2 Eth1/3
    mode: layer3
    ip_address: 10.2.2.5
    mask: 31
    mtu: 9216
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
loopback_interfaces: &lo
  - name: loopback0
    description: Routing Loopback
    ip_address: 10.0.0.103
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
  - name: loopback1
    description: VTEP Loopback
    ip_address: 10.100.100.103
    mask: 32
    enabled: true
    ospf:
      process: UNDERLAY
      area: 0.0.0.0
    pim: true
all_layer3_interfaces: "{{ layer3_physical_interfaces + loopback_interfaces }}"
With all your variables in place, continue to the next section to build the tasks for configuring the VXLAN EVPN fabric.