Ansible – Config Generator – I

Before proceeding, make sure to install the “netaddr” Python library, as it is required by the “ipaddr()” filter used in the Jinja2 configuration template.

pip install netaddr
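The arithmetic that ipaddr() performs can be previewed in Python. This sketch uses the stdlib ipaddress module to mirror what the filter returns for the /29 networks used below (the filter itself is backed by netaddr):

```python
import ipaddress

# The n-th address of a network is what ipaddr('n') resolves to.
net = ipaddress.ip_network("10.80.120.128/29")

print(net[1])  # 10.80.120.129 -> HSRP virtual IP (ipaddr('1') | ipaddr('address'))
print(net[2])  # 10.80.120.130 -> A-side SVI address
print(net[3])  # 10.80.120.131 -> B-side SVI address
```

Note that a numeric query such as ipaddr('2') returns the address with its prefix (10.80.120.130/29), while chaining | ipaddr('address') strips the prefix.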

aggr.yml is the playbook that will be used to generate the L3 SVI configuration:

---
- hosts: local
  connection: local
  gather_facts: no

  tasks:
  - name: GENERATE CONFIG
    template:
      src: ./SVI.j2
      dest: ./{{ item.vlan }}.conf
    with_items:
    - { vrf: NET1, vlan: 502, vlanname: VLAN-502-NAME, net: 10.80.120.128/29 }
    - { vrf: NET1, vlan: 503, vlanname: VLAN-503-NAME, net: 10.80.120.136/29 }

This is the SVI.j2 Jinja2 template used to generate the final configuration:

###### A-Side ######
vlan {{ item.vlan }}
 name {{ item.vlanname }}

interface Vlan {{ item.vlan }}
 description {{ item.vlanname }}
 mtu 9100
 vrf member {{ item.vrf }}
 no ip redirects
 ip address {{ item.net | ipaddr('2') }}
 hsrp version 2
 hsrp 1
 authentication md5 key-string password{{ item.vlan }}
 preempt delay minimum 120
 priority 120
 timers 1 3
 ip {{ item.net | ipaddr('1') | ipaddr('address') }}

###### B-Side ######
vlan {{ item.vlan }}
 name {{ item.vlanname }}

interface Vlan {{ item.vlan }}
 description {{ item.vlanname }}
 mtu 9100
 vrf member {{ item.vrf }}
 no ip redirects
 ip address {{ item.net | ipaddr('3') }}
 hsrp version 2
 hsrp 1
 authentication md5 key-string password{{ item.vlan }}
 preempt delay minimum 120
 priority 110
 timers 1 3
 ip {{ item.net | ipaddr('1') | ipaddr('address') }}

In my setup, the playbook (aggr.yml), the source template (SVI.j2) and the generated destination files all exist in the same folder.
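To sanity-check the template logic outside Ansible, here is a rough standalone sketch with the jinja2 library; the ipaddr function below is a minimal stand-in for Ansible's filter (covering only the queries SVI.j2 uses), not the real implementation:

```python
import ipaddress
from jinja2 import Environment

def ipaddr(value, query):
    """Minimal stand-in for Ansible's ipaddr() filter."""
    if query == "address":
        return str(value).split("/")[0]          # strip the prefix length
    net = ipaddress.ip_network(value)
    return "{}/{}".format(net[int(query)], net.prefixlen)

env = Environment()
env.filters["ipaddr"] = ipaddr

# Two representative lines from SVI.j2:
template = env.from_string(
    "ip address {{ item.net | ipaddr('2') }}\n"
    " ip {{ item.net | ipaddr('1') | ipaddr('address') }}"
)

item = {"vrf": "NET1", "vlan": 502, "vlanname": "VLAN-502-NAME",
        "net": "10.80.120.128/29"}
rendered = template.render(item=item)
print(rendered)
# ip address 10.80.120.130/29
#  ip 10.80.120.129
```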

Reference Github.

Ansible – Encrypting Password

The basic Ansible automation playbook (see “Ansible – Basic Playbook”) provides a method for accessing Cisco IOS devices and executing “show” commands. Its “secrets.yml” file contains the username and password in plain text; ansible-vault can be used to encrypt the “secrets.yml” file.

Encrypt a file using ansible-vault:

ansible-vault encrypt secrets.yml

View the contents of an encrypted file:

ansible-vault view secrets.yml

Decrypt a file using ansible-vault:

ansible-vault decrypt secrets.yml
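Once “secrets.yml” is encrypted, the playbook can still consume it unchanged; ansible-playbook just has to be given the vault password, for example:

ansible-playbook show_code.yml --ask-vault-pass
ansible-playbook show_code.yml --vault-password-file ~/.vault_pass.txt

The first form prompts for the vault password interactively; the second reads it from a file (the path shown is just an example).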

Ansible – Basic Playbook

The goal of this post is to provide you with a simple way to utilize Ansible 2.x and obtain data from Cisco IOS devices by running “show” commands. Github Reference.

Ansible Installation:

Before starting, make sure you have ansible installed.

Create a working directory:

mkdir ansible_play
cd ansible_play

Create the following 4 files within the “ansible_play” directory:

ansible.cfg
hosts
secrets.yml
show_code.yml

Among the 4 files provided in this example, the “hosts” and “secrets.yml” files have to be altered to suit your environment. The other 2 files, “ansible.cfg” and “show_code.yml”, can be used as-is.

The contents of the above files will be like this:

$ cat ansible.cfg

[defaults]
hostfile = ./hosts
host_key_checking=False
timeout = 5

For now, use the contents as provided above for the “ansible.cfg” file.

$ cat hosts

[ios]
switch1

The hosts file lists the switches/routers that you would like to run the ansible-playbook against. In my case, I am using “switch1”. The 1st line, “[ios]”, is the group name that the playbook references in its hosts: directive.

$ cat secrets.yml

---
creds:
  username: cisco
  password: ciscopassword
  auth_pass: ciscoauth

This file contains the login information required to access the devices in the hosts file.

$ cat show_code.yml

---
- hosts: ios
  gather_facts: no
  connection: local
 
  tasks:
  - name: OBTAIN LOGIN CREDENTIALS
    include_vars: secrets.yml
 
  - name: DEFINE PROVIDER
    set_fact:
      provider:
        host: "{{ inventory_hostname }}"
        username: "{{ creds['username'] }}"
        password: "{{ creds['password'] }}"
        auth_pass: "{{ creds['auth_pass'] }}"
 
  - name: SHOW VERSION
    ios_command:
      provider: "{{ provider }}"
      commands:
        - show version | i Version
    register: write
 
  - debug: var=write.stdout_lines

Executing the ansible playbook:

$ ansible-playbook show_code.yml

PLAY [ios] *********************************************************************

TASK [OBTAIN LOGIN CREDENTIALS] ************************************************
ok: [switch1]

TASK [DEFINE PROVIDER] *********************************************************
ok: [switch1]

TASK [SHOW VERSION] ************************************************************
ok: [switch1]

TASK [debug] *******************************************************************
ok: [switch1] => {
    "write.stdout_lines": [
        [
            "Cisco IOS Software, Catalyst 4500 L3 Switch  Software (cat4500e-ENTSERVICESK9-M), Version 15.2(2)E3, RELEASE SOFTWARE (fc3)"
        ]
    ]
}

PLAY RECAP *********************************************************************
switch1               : ok=4    changed=0    unreachable=0    failed=0

F5 Virtual Server – Order of Precedence

The VS order of precedence differs with the code version and the tm.continuematching db variable. This db variable is set to false by default; hence, a lower-precedence VS does not handle the traffic if a higher-precedence VS exists in a disabled state. If the traffic has to be handled by the lower-precedence VS when the higher-precedence VS is disabled, we have to set this db variable to true:

11.x Code Version:

(tmos)# modify /sys db tm.continuematching value true
(tmos)# save /sys config

9.x – 10.x Code Version:

bigpipe db TM.ContinueMatching true
bigpipe save all

The order of precedence for VS processing on the different code versions is provided below.

Order of Precedence for code version: 9.2 – 11.2.x

<address>:<port>
 <address>:*
 <network>:<port>
 <network>:*
 *:<port>
 *:*

Order of Precedence for code version: 11.3 and later

Order  Destination          Source               Service port
1      <host address>       <host address>       <port>
2      <host address>       <host address>       *
3      <host address>       <network address>    <port>
4      <host address>       <network address>    *
5      <host address>       *                    <port>
6      <host address>       *                    *
7      <network address>    <host address>       <port>
8      <network address>    <host address>       *
9      <network address>    <network address>    <port>
10     <network address>    <network address>    *
11     <network address>    *                    <port>
12     <network address>    *                    *
13     *                    <host address>       <port>
14     *                    <host address>       *
15     *                    <network address>    <port>
16     *                    <network address>    *
17     *                    *                    <port>
18     *                    *                    *
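The 11.3+ table amounts to a lexicographic sort on (destination specificity, source specificity, port specificity). A small illustrative sketch (not F5 code) that reproduces the ordering:

```python
# Specificity rank per field: a host address beats a network address,
# which beats the wildcard; an explicit port beats the wildcard port.
ADDR_RANK = {"host": 0, "network": 1, "*": 2}
PORT_RANK = {"port": 0, "*": 1}

def precedence(vs):
    """Sort key matching the 11.3+ table: lower tuple = matched first."""
    dest, src, port = vs
    return (ADDR_RANK[dest], ADDR_RANK[src], PORT_RANK[port])

virtual_servers = [
    ("*", "*", "*"),              # row 18
    ("host", "network", "port"),  # row 3
    ("network", "*", "port"),     # row 11
    ("host", "host", "*"),        # row 2
]

for vs in sorted(virtual_servers, key=precedence):
    print(vs)
# ('host', 'host', '*')
# ('host', 'network', 'port')
# ('network', '*', 'port')
# ('*', '*', '*')
```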

Ansible – Installation

If you have root access to your box, you can utilize the following link in order to install Ansible. I would recommend Ansible 2.1 or later if your goal is to utilize Ansible as a network automation tool.

Creating a Virtual Environment with Ansible:

If you don’t have root access to the bastion host that is used to access the network infrastructure, you can run Ansible in a virtual environment.

pip install --upgrade pip virtualenv virtualenvwrapper
virtualenv ansible2.1
source ansible2.1/bin/activate
pip install ansible==2.1.0.0

Whenever required, the virtual environment can be accessed using:
source ansible2.1/bin/activate

root@ansible:~$ source ansible2.1/bin/activate

(ansible2.1)root@ansible:~$ ansible --version
ansible 2.1.0.0
  config file = 
  configured module search path = Default w/o overrides

F5 – Bleeding Active Connections

Scenario:

A Virtual Server is load balancing connections to a pool with 2 pool members. During a maintenance window, one of the two pool members is disabled and its maintenance is completed, followed by the other pool member.

However, as the users make continuous API calls every 5 seconds, the existing TCP connections never bleed out. Even after waiting 24 hours, connections still exist on the disabled pool member.

Solution:

By default, the F5 makes a load-balancing decision when the 1st HTTP request within a TCP connection is received. Subsequent HTTP requests within the TCP connection are sent to the same pool member as the very 1st HTTP request.

By enabling a OneConnect profile with a /32 source mask (255.255.255.255), we were able to force the F5 to make a load-balancing decision for every HTTP request instead of following its default behavior.

The OneConnect profile, used along with the disabled or forced-offline setting, will move the connections from the failed pool member to the active pool member.
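On the CLI, assuming 11.x tmsh syntax (the profile and virtual server names here are placeholders), this would look roughly like:

(tmos)# create ltm profile one-connect oneconnect_mask32 source-mask 255.255.255.255
(tmos)# modify ltm virtual my_vs profiles add { oneconnect_mask32 }
(tmos)# save /sys config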

Reference Link.

Sub-Domain Delegation GTM/DNS

Let’s say that you have domain.com hosted with a 3rd-party DNS provider and you would like to set up GTM (BIG-IP DNS) load balancing by utilizing sub-domain delegation.

In this scenario, there are 2 GTMs, one in each DC (DC-1 & DC-2). The basic setup has been completed and the GTMs are in a common sync group.

Create A-Records for the 2 GTM using their Listener IP addresses:

 gtm1.wip.domain.com. IN A 100.100.100.100
 gtm2.wip.domain.com. IN A 200.200.200.200

gtm1 and gtm2 exist in DC-1 and DC-2 respectively; 100.100.100.100 and 200.200.200.200 are the listener IP addresses configured on gtm1 and gtm2.

Delegate the sub-domain to the GTM using NS Records:

 wip.domain.com. IN NS gtm1.wip.domain.com.
 wip.domain.com. IN NS gtm2.wip.domain.com.

Use CNAME records:

www.domain.com. IN CNAME www.wip.domain.com.

The above DNS records (A, NS & CNAME) are added at the 3rd-party DNS provider that hosts domain.com. Any request for

www.domain.com

will be sent to the 3rd party DNS provider which will then resolve to

www.wip.domain.com

because of the CNAME, and that query will be handled by the GTMs because of the NS & A records.
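Assuming the zones are live, the delegation can be sanity-checked with dig (the record values below are the ones from this example):

dig NS wip.domain.com.
dig CNAME www.domain.com.
dig @100.100.100.100 www.wip.domain.com.

The first query should return the two gtm NS records, and the last one queries a GTM listener directly for the delegated name.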

SOL277 – Sub-domain delegation.

Brocade ADX Source NAT

Global source NAT

Similar to F5’s Automap

Automap will SNAT any traffic going towards the real-server. The client’s source IP will be replaced with the self-IP configured on the Brocade’s interface closest to the real servers. SNAT is implemented for ALL the real servers.

server source-nat

Warning: Do not use automap in environments with any considerable number of clients and/or servers, since there is a high chance of running into port exhaustion, which will cause connections to drop.

SNAT IP

server source-nat 
server source-nat-ip 192.168.100.100 255.255.255.255 0.0.0.0 port-range 2 port-alloc-per-real

The client’s source IP will be replaced with the 192.168.100.100 configured in the second line.

The port-alloc-per-real command indicates that a SNAT IP:port combination can be re-used per real server at any given instant. The port-range parameter specifies which port range is used for source NAT with this source IP address: specify 1 for the lower port range or 2 for the upper port range.

Per-real-server source NAT

SNAT IP is the IP of the interface closest to the real servers.

SNAT is implemented for real servers by configuring them with  source-nat  command:

server real r1 192.168.100.20
 source-nat

Alternatively, with a dedicated SNAT IP instead of the interface IP:

server source-nat-ip 192.168.100.100 255.255.255.255 0.0.0.0 port-range 2 port-alloc-per-real
server real r1 192.168.100.20
 source-nat

Per-real-server source NAT with ACL

Same as per-real-server SNAT above, but in this case SNAT is applied only to traffic originating from the private 192.168.100.0/22 network by utilizing an access-list. This way, access to the VIP from other real servers and client requests from the Internet are not subjected to SNAT; their source IP does not change.

server source-nat-ip 192.168.100.100 255.255.255.255 0.0.0.0 port-range 2 port-alloc-per-real
access-list 1 permit 192.168.100.0 0.0.3.255 
access-list 1 deny any 

server real r1 192.168.100.20
source-nat access-list 1 

HA considerations

If a SNAT configuration is used in an HA setup, add the source-nat-ip to the vip-group so that the secondary can take over the SNAT IP in case of a failover:

server vip-group 1
source-nat-ip 192.168.100.100

Server Load Balancing : Source NAT – http://www.brocade.com/downloads/documents/html_product_manuals/VADX_03000_SLB/wwhelp/wwhimpl/common/html/wwhelp.htm#context=Virtual_ADX_0300_SLBGuide&file=slb_V_ADX.04.06.html

Brocade ADX – Fall Back Server

TERMINOLOGY:
  • Primary – A primary server is used by the ServerIron ADX when load balancing client requests for an application.
  • Backup – A backup server is used by the ServerIron ADX only if all the primary servers are unavailable for the requested application.
  • Local – A local server is one that is connected to the ServerIron ADX at Layer 2. The ServerIron ADX uses local servers for regular load balancing.
  • Remote – A remote server is one that is connected to the ServerIron ADX through one or more router hops. The ServerIron ADX uses remote servers only if all the local servers are unavailable.
  • By default, a Local Real Server is considered “Primary” and a Remote Real Server is considered “Backup”.
  • Local Real Servers → the dedicated servers (in this setup)
  • Remote Real Servers → the cloud servers (in this setup)

LOCAL REAL SERVERS:

  • Server definition starts with “server real” for Local Real Servers
server real web1 192.168.10.33
 port http
 port http keepalive
 port http url "HEAD /"
 port http l4-check-only
!
server real web2 192.168.10.34
 port http
 port http keepalive
 port http url "HEAD /"
 port http l4-check-only
!

REMOTE REAL SERVERS:

There are 3 Remote Real Servers. These are the cloud servers that are reachable via the RackConnected ASA firewall.

  • Server definition starts with “server remote-name” for Remote Real Servers
  • source-nat is used for the Remote Real Servers
  • If the number of connections to the Remote Real Servers is expected to exceed 65K, it is better to use a separate SNAT IP. See the “Brocade ADX Source NAT” Axios documentation
server remote-name web3.domain.com 10.180.4.235
 source-nat
 port http
 port http keepalive
 port http url "HEAD /"
 port http l4-check-only

server remote-name web4.domain.com 10.180.5.109
 source-nat
 port http
 port http keepalive
 port http url "HEAD /"
 port http l4-check-only

server remote-name web5.domain.com 10.180.5.99
 source-nat
 port http
 port http keepalive
 port http url "HEAD /"
 port http l4-check-only
!

VIRTUAL SERVER

server virtual VS-5.5.5.5 192.168.99.30
 predictor least-conn
 port http sticky
 port http tcp-only
 port http lb-pri-servers
 port http reset-on-port-fail
 bind http web1 http web2 http
 bind http web5.domain.com http web4.domain.com http web3.domain.com http

By default, without “port http lb-pri-servers”, ALL the traffic will be sent ONLY to the “Local Real Servers”. Traffic will be sent to the “Remote Real Servers”, only if ALL the Local Real Servers fail. This is because the Local Real Server is considered to be “Primary Server” and Remote Real Server is considered to be “Backup Server”, by default.

port http lb-pri-servers

When we use the command mentioned earlier (port http lb-pri-servers), all the real servers bound to the VS (for port http) are considered “Primary”, and traffic is distributed across both “Local” and “Remote” servers.

If we want one server to be “Backup” and all the other servers to be “Primary”, we would have to enter the command:

  • backup

under the relevant “Real Server” (Local or Remote).

CSW POLICY – CLIENT IP INSERTION

Create CSW Rule:

csw-rule "HOST_Domain" header "host" pattern "."

OR

csw-rule "HOST_Domain" header "host" exists

Create CSW Policy:

csw-policy "CSW_CLIENT_IP" 
match "HOST_Domain" forward 1
match "HOST_Domain" rewrite request-insert client-ip
default forward 1
default rewrite request-insert client-ip

We have to use a separate “Match-Forward” rule followed by the “Default” rule, since a “Default” rule on its own, without a “Match-Action” rule, is not allowed by the Brocade ADX.

Create Group ID for Real-Servers:

 port http group-id 1 1

Apply CSW Policy to Virtual Server:

 port http csw-policy "CSW_CLIENT_IP" 
 port http csw

Example:

server virtual VS-5.5.5.5 192.168.99.30
 predictor least-conn
 port http sticky
 port http tcp-only
 port http lb-pri-servers
 port http csw-policy "CSW_CLIENT_IP" 
 port http csw
 bind http web1 http web2 http
 bind http web5.domain.com http web4.domain.com http web3.domain.com http

Reference:

http://community.brocade.com/docs/DOC-1526/diff?secondVersionNumber=4

OneConnect & HTTP Requests

This is a copy/paste of a Q&A on DevCentral. I didn’t change it much, as it is quite descriptive and gets the point across.

Current Setup:

We are using the Cookie Insert method for session persistence. So the LTM adds a “BigipServer*” cookie to the HTTP response header, with the encoded IP address and port as its value. Subsequent requests from the client (in our case, a browser) will have this cookie in the request header, and this helps the LTM send the request to the same server. This LTM cookie’s expiry is set to session, so the cookie is cleared when we close the browser or when we expire it using an iRule.

Use Case:

We have a set of servers configured as pool members serving traffic to logged-in users. At release time, we deploy the code to a new set of servers and add those servers to the LTM pool as well. The LTM now has servers with both the old code and the new code. We disable all servers that have the old code, so that the LTM routes only the requests that already carry a “BigipServer*” cookie value pointing to those servers. This does not interrupt the users who are already logged in and doing some work. All new requests (new users) are load balanced to any of the active servers with the new code. We ask our already logged-in users to log out and log back in once they are done with their current work. We have an iRule configured to expire the LTM cookie during logout, so our expectation is that users will be connected to the new servers when they log in again.

Problem:

Even though the iRule expires the LTM cookie during logout and the cookie is not present in the request header of the login request, users are still routed to the same disabled server when they log in again. Ideally, the LTM should have load balanced the request to one of the active servers.

Root Cause:

Upon analyzing this further with network traffic captures, we found that whenever the browser has a persistent TCP connection open with the LTM after logout, the browser uses that existing TCP connection for sending the login request. The LTM routes this login request to the same disabled server that handled the previous request, even though the LTM cookie is not present in the request header. If we close the TCP connection manually after logout (using CurrPorts or some other tool), the browser establishes a new connection with the LTM during login and the LTM load balances this request to an active server. One option for us is to send “Connection: close” in the response header during logout, but the browser may hold multiple persistent TCP connections (I have seen the browser holding even three connections), so closing a single TCP connection will not help. The other option is to close the browser, but we don’t have that choice for reasons I cannot explain here (trust me).

SOLUTION:

Try using the following:

  1. OneConnect Profile in VS with netmask of /32.
  2. Action on Service Down in the Pool set to Reselect.

(1) will force the load-balancing decision to be made for every HTTP request, instead of the default of the LB decision being made only for the 1st HTTP request within a TCP connection.

(2) will force the HTTP request to be sent to a new pool member when the selected member is down, as the load-balancing decision is now made for every HTTP request instead of only the very 1st HTTP request within a persistent/keep-alive connection.
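Assuming 11.x tmsh syntax (the pool name is a placeholder), (2) maps to the pool’s service-down-action attribute:

(tmos)# modify ltm pool my_pool service-down-action reselect
(tmos)# save /sys config

(1) is the same OneConnect profile with a /32 source mask described in the “F5 – Bleeding Active Connections” post above.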

A keep-alive connection (also referred to as a persistent connection) is the HTTP/1.1 feature that allows multiple HTTP requests to be sent over a single TCP connection.