This document describes how to deploy an Application Virtual Switch (AVS) with a single Adaptive Security Virtual Appliance (ASAv) firewall in Routed/GoTo mode as an L4-L7 Service Graph between two End Point Groups (EPGs), in order to establish client-to-server communication with the ACI 1.2(x) release.
Cisco recommends that you have knowledge of these topics:
The information in this document is based on these software and hardware versions:
Hardware & Software:
Features:
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.
As shown in the image,
The AVS initial setup creates a VMware vCenter Domain (VMM integration).
Note:
Navigate to VM Networking > VMware > Create vCenter Domain, as shown in the image:
If you use a Port-channel or Virtual Port-channel (VPC), it is recommended to set the vSwitch policies to use MAC Pinning.
After this, the APIC should push the AVS switch configuration to vCenter, as shown in the image:
On the APIC, you can see that a VXLAN Tunnel Endpoint (VTEP) address is assigned to the VTEP port-group for the AVS. This address is assigned no matter which connectivity mode (VLAN or VXLAN) is used.
Install the Cisco AVS software in vCenter
Note: In this case, ESX 5.5 is used. Table 1 shows the compatibility matrix for ESXi 6.0, 5.5, 5.1, and 5.0.
Table 1 - Host Software Version Compatibility for ESXi 6.0, 5.5, 5.1, and 5.0
Within the ZIP file there are 3 VIB files, one for each ESXi host version. Select the one appropriate for ESX 5.5, as shown in the image:
Note: If a VIB file exists on the host, remove it by using the esxcli software vib remove command.
esxcli software vib remove -n cross_cisco-vem-v197-5.2.1.3.1.5.0-3.2.1.vib
or by browsing the Datastore directly.
esxcli software vib install -v /vmfs/volumes/datastore1/cross_cisco-vem-v250-5.2.1.3.1.10.0-3.2.1.vib --maintenance-mode --no-sig-check
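Once the VIB is installed, it is good practice to confirm that the module is present and running on the host before you proceed. This is a minimal check; the exact version strings depend on the VIB that you installed:
esxcli software vib list | grep cisco-vem
vem status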
In the Add Host to vSphere Distributed Switch dialog box, choose the virtual NIC ports that are connected to the leaf switch (In this example you move only vmnic6), as shown in the image:
Note: If multiple ESXi hosts are used, all of them need to run the AVS/VEM so that they can be migrated from the Standard switch to the DVS or AVS.
With this, the AVS integration is complete and you are ready to continue with the L4-L7 ASAv deployment:
ASAv Initial Setup
Navigate to L4-L7 Services > Packages > Import Device Package, as shown in the image:
Before you continue, there are a few aspects of the installation that need to be determined before the actual L4-L7 integration is performed:
There are two types of management networks, In-Band Management and Out-Of-Band (OOB). Either can be used to manage devices that are not part of the basic Application Centric Infrastructure (ACI) (leaf, spine, or APIC controller), which includes the ASAv, load balancers, and so on.
In this case, OOB for the ASAv is deployed with the use of a Standard vSwitch. For a bare-metal ASA or other service appliances and/or servers, connect the OOB management port to the OOB switch or network, as shown in the image.
The ASAv OOB management connection needs to use the ESXi uplink ports to communicate with the APIC via OOB. When you map the vNIC interfaces, Network adapter1 always matches the Management0/0 interface on the ASAv, and the rest of the data-plane interfaces start from Network adapter2.
Table 2 shows the mapping between Network Adapter IDs and ASAv interface IDs:
Table 2
Once the ASAv is deployed, configure the credentials that the APIC uses to manage the device:
username admin password <device_password> encrypted privilege 15
Additionally, from global configuration mode, enable the HTTP server:
http server enable
http 0.0.0.0 0.0.0.0 management
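The management address that you enter later in the APIC L4-L7 device configuration must be the one configured on Management0/0. As a reference, this is a minimal sketch of that portion of the ASAv configuration; the IP address and mask are only placeholders, not values from this lab:
interface Management0/0
 management-only
 nameif management
 security-level 100
 ip address <mgmt_ip> <mgmt_mask>
 no shutdown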
L4-L7 Integration for ASAv in APIC:
For this implementation, the following settings will be applied:
- Managed Mode
- Firewall Service
- Virtual Device
- Connected to AVS domain with a Single Node
- ASAv Model
- Routed mode (GoTo)
- Management Address (has to match the address previously assigned to the Management0/0 interface)
For the first part, use Table 2 shown in the previous section to properly match the Network Adapter IDs with the ASAv interface IDs that you would like to use. The Path refers to the physical port, port-channel, or VPC that enables the way in and out of the firewall interfaces. In this case, the ASA is located in an ESX host, where in and out are the same for both interfaces. In a physical appliance, the inside and outside of the firewall (FW) would be different physical ports.
For the second part, the cluster interfaces always have to be defined, without exception (even if Cluster HA is not used). This is because the object model has an association between the mIf interface (meta interface on the Device Package), the LIf interface (logical interface, such as external, internal, inside, and so on), and the CIf (concrete interface). The L4-L7 concrete devices have to be configured in a device cluster configuration, and this abstraction is called a logical device. The logical device has logical interfaces that are mapped to concrete interfaces on the concrete device.
For this example, the following association will be used:
Gi0/0 = vmnic2 = ServerInt/provider/server > EPG1
Gi0/1 = vmnic3 = ClientInt/consumer/client > EPG2
Note: For failover/HA deployments, GigabitEthernet 0/8 is pre-configured as the failover interface.
The device state should be Stable, and you should be ready to deploy the Function Profile and Service Graph Template.
Service Graph Template
First, create a Function Profile for the ASAv. Before that, you need to create a Function Profile Group and then an L4-L7 Services Function Profile under that folder, as shown in the image:
For this exercise, a routed firewall (GoTo mode) requires that each interface have a unique IP address. The standard ASA configuration also has an interface security level (the external interface is less secure, the internal interface is more secure). You can also change the name of the interfaces as per your requirements. Defaults are used in this example.
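As a reference, once the Service Graph is deployed, the equivalent interface configuration on the ASAv looks roughly like this sketch. The interface names match the ones used in this example (ServerInt and ClientInt), while the IP addresses and security levels shown are only illustrative:
interface GigabitEthernet0/0
 nameif ServerInt
 security-level 100
 ip address <server_side_ip> <mask>
 no shutdown
!
interface GigabitEthernet0/1
 nameif ClientInt
 security-level 50
 ip address <client_side_ip> <mask>
 no shutdown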
Note: You can also modify the default Access-List settings and create your own base template. By default, the RoutedMode template includes rules for HTTP and HTTPS. For this exercise, SSH and ICMP are added to the allowed outside access list.
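On the ASAv CLI, the resulting rules would look similar to this sketch; the access-list name here is illustrative, since the real name is derived from the Function Profile:
access-list access-list-inbound extended permit tcp any any eq www
access-list access-list-inbound extended permit tcp any any eq https
access-list access-list-inbound extended permit tcp any any eq ssh
access-list access-list-inbound extended permit icmp any any
access-group access-list-inbound in interface ClientInt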
Note: Each interface of the firewall is assigned an encap VLAN from the AVS dynamic pool. Verify that there are no faults.
For this test, the 2 EPGs communicated with standard contracts. These 2 EPGs are in different domains and different VRFs, so route leaking between them was previously configured. This simplifies things a bit: after you insert the Service Graph, the FW sets up the routing and filtering between the 2 EPGs. The default gateway (DG) previously configured under the EPG and BD can now be removed, as can the contracts. Only the contract pushed by the L4-L7 should remain under the EPGs.
Once the standard contract is removed, you can confirm that traffic now flows through the ASAv; the show access-list command should display the hit count for the rule incrementing every time the client sends a request to the server.
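For example, from the ASAv CLI, show access-list displays the hit counters and show conn displays the active connections through the firewall:
show access-list
show conn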
On the leaf, endpoints should be learned for the client and server VMs, as well as the ASAv interfaces.
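A quick way to confirm this from the leaf CLI is the show endpoint command; for example (the IP is a placeholder for one of the VMs in your setup):
leaf2# show endpoint
leaf2# show endpoint ip <client_or_server_vm_ip>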
On the ESXi hosts, you can see both firewall interfaces attached to the VEM:
ESX-1
ESX-2
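One way to list these ports at the host CLI is the vemcmd show port command, run on each ESXi host:
~ # vemcmd show port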
Finally, the firewall rules can be verified at the leaf level as well, if the PC tags for the source and destination EPGs are known:
Filter IDs can be matched with the PC tags on the leaf to verify the FW rules.
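On the leaf, the zoning rules and filters programmed by the Service Graph can be displayed with these commands, and the output can be matched against the PC tags and filter IDs noted above:
leaf2# show zoning-rule
leaf2# show zoning-filter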
Note: The EPG PCTags/Sclass never communicate directly. The communication is interrupted or tied together via the shadow EPGs created by the L4-L7 service graph insertion.
Client-to-server communication now works.
VTEP address is not assigned
Verify that the Infrastructure VLAN check box is checked under the AEP:
Unsupported Version
Verify that the VEM version is correct and supports the appropriate ESXi VMware release.
~ # vem version
Running esx version -1746974 x86_64
VEM Version: 5.2.1.3.1.10.0-3.2.1
OpFlex SDK Version: 1.2(1i)
System Version: VMware ESXi 5.5.0 Releasebuild-1746974
ESX Version Update Level: 0
VEM and Fabric communication not working
- Check the VEM status:
vem status
- Try to reload or restart the VEM at the host:
vem reload
vem restart
- Check connectivity towards the fabric. You can try to ping 10.0.0.30, which is the infra:default shared address for both leafs:
~ # vmkping -I vmk1 10.0.0.30
PING 10.0.0.30 (10.0.0.30): 56 data bytes
--- 10.0.0.30 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
If the ping fails, check the OpFlex status. The DPA (DataPathAgent) handles all of the control traffic between the AVS and the APIC (it talks to the immediate leaf switch that it connects to) with the use of OpFlex (OpFlex client/agent).
All EPG communication goes through this OpFlex connection.
~ # vemcmd show opflex
Status: 0 (Discovering)
Channel0: 0 (Discovering), Channel1: 0 (Discovering)
Dvs name: comp/prov-VMware/ctrlr-[AVS]-vCenterController/sw-dvs-129
Remote IP: 10.0.0.30 Port: 8000
Infra vlan: 3967
FTEP IP: 10.0.0.32
Switching Mode: unknown
Encap Type: unknown
NS GIPO: 0.0.0.0
You can also check the status of the vmknic interfaces at the host level:
~ # esxcfg-vmknic -l
Interface  Port Group/DVPort  IP Family  IP Address                Netmask        Broadcast      MAC Address        MTU   TSO MSS  Enabled  Type
vmk0       Management Network IPv4       10.201.35.219             255.255.255.0  10.201.35.255  e4:aa:5d:ad:06:3e  1500  65535    true     STATIC
vmk0       Management Network IPv6       fe80::e6aa:5dff:fead:63e  64                            e4:aa:5d:ad:06:3e  1500  65535    true     STATIC, PREFERRED
vmk1       160                IPv4       10.0.32.65                255.255.0.0    10.0.255.255   00:50:56:6b:ca:25  1500  65535    true     STATIC
vmk1       160                IPv6       fe80::250:56ff:fe6b:ca25  64                            00:50:56:6b:ca:25  1500  65535    true     STATIC, PREFERRED
~ #
Also on the host, verify whether DHCP requests are sent back and forth:
~ # tcpdump-uw -i vmk1
tcpdump-uw: verbose output suppressed, use -v or -vv for full protocol decode
listening on vmk1, link-type EN10MB (Ethernet), capture size 96 bytes
12:46:08.818776 IP truncated-ip - 246 bytes missing! 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:50:56:6b:ca:25 (oui Unknown), length 300
12:46:13.002342 IP truncated-ip - 246 bytes missing! 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:50:56:6b:ca:25 (oui Unknown), length 300
12:46:21.002532 IP truncated-ip - 246 bytes missing! 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:50:56:6b:ca:25 (oui Unknown), length 300
12:46:30.002753 IP truncated-ip - 246 bytes missing! 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 00:50:56:6b:ca:25 (oui Unknown), length 300
At this point, it can be determined that fabric communication between the ESXi host and the leaf does not work properly. Some verification commands can be run on the leaf side to determine the root cause.
leaf2# show cdp ne
Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater,
                  V - VoIP-Phone, D - Remotely-Managed-Device,
                  s - Supports-STP-Dispute
Device-ID                        Local Intrfce  Hldtme  Capability  Platform       Port ID
AVS:localhost.localdomainmain    Eth1/5         169     S I s       VMware ESXi    vmnic4
AVS:localhost.localdomainmain    Eth1/6         169     S I s       VMware ESXi    vmnic5
N3K-2(FOC1938R02L)               Eth1/13        166     R S I s     N3K-C3172PQ-1  Eth1/13

leaf2# show port-c sum
Flags: D - Down        P - Up in port-channel (members)
       I - Individual  H - Hot-standby (LACP only)
       s - Suspended   r - Module-removed
       S - Switched    R - Routed
       U - Up (port-channel)
       M - Not in use. Min-links not met
       F - Configuration failed
-------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
-------------------------------------------------------------------------------
5     Po5(SU)     Eth      LACP      Eth1/5(P)    Eth1/6(P)
There are 2 ports used on the ESXi host, connected via Po5.
leaf2# show vlan extended
 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
 13   infra:default                    active    Eth1/1, Eth1/20
 19   --                               active    Eth1/13
 22   mgmt:inb                         active    Eth1/1
 26   --                               active    Eth1/5, Eth1/6, Po5
 27   --                               active    Eth1/1
 28   ::                               active    Eth1/5, Eth1/6, Po5
 36   common:pod6_BD                   active    Eth1/5, Eth1/6, Po5

 VLAN Type  Vlan-mode  Encap
 ---- ----- ---------- -------------------------------
 13   enet  CE         vxlan-16777209, vlan-3967
 19   enet  CE         vxlan-14680064, vlan-150
 22   enet  CE         vxlan-16383902
 26   enet  CE         vxlan-15531929, vlan-200
 27   enet  CE         vlan-11
 28   enet  CE         vlan-14
 36   enet  CE         vxlan-15662984

From this output, it can be observed that the infra VLAN is not allowed or passed through the uplink ports that go to the ESXi host (1/5-6). This indicates a misconfiguration with the interface policy or switch policy configured on the APIC.
leaf2# show vlan extended
 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
 13   infra:default                    active    Eth1/1, Eth1/5, Eth1/6, Eth1/20, Po5
 19   --                               active    Eth1/13
 22   mgmt:inb                         active    Eth1/1
 26   --                               active    Eth1/5, Eth1/6, Po5
 27   --                               active    Eth1/1
 28   ::                               active    Eth1/5, Eth1/6, Po5
 36   common:pod6_BD                   active    Eth1/5, Eth1/6, Po5

 VLAN Type  Vlan-mode  Encap
 ---- ----- ---------- -------------------------------
 13   enet  CE         vxlan-16777209, vlan-3967
 19   enet  CE         vxlan-14680064, vlan-150
 22   enet  CE         vxlan-16383902
 26   enet  CE         vxlan-15531929, vlan-200
 27   enet  CE         vlan-11
 28   enet  CE         vlan-14
 36   enet  CE         vxlan-15662984

The OpFlex connection is re-established after the VEM module is restarted:

~ # vem restart
stopDpa
VEM SwISCSI PID is
Warn: DPA running host/vim/vimuser/cisco/vem/vemdpa.213997
Warn: DPA running host/vim/vimuser/cisco/vem/vemdpa.213997
watchdog-vemdpa: Terminating watchdog process with PID 213974

~ # vemcmd show opflex
Status: 0 (Discovering)
Channel0: 14 (Connection attempt), Channel1: 0 (Discovering)
Dvs name: comp/prov-VMware/ctrlr-[AVS]-vCenterController/sw-dvs-129
Remote IP: 10.0.0.30 Port: 8000
Infra vlan: 3967
FTEP IP: 10.0.0.32
Switching Mode: unknown
Encap Type: unknown
NS GIPO: 0.0.0.0

~ # vemcmd show opflex
Status: 12 (Active)
Channel0: 12 (Active), Channel1: 0 (Discovering)
Dvs name: comp/prov-VMware/ctrlr-[AVS]-vCenterController/sw-dvs-129
Remote IP: 10.0.0.30 Port: 8000
Infra vlan: 3967
FTEP IP: 10.0.0.32
Switching Mode: LS
Encap Type: unknown
NS GIPO: 0.0.0.0
Application Virtual Switch Installation
Cisco Systems, Inc. Cisco Application Virtual Switch Installation Guide, Release 5.2(1)SV3(1.2)
Deploy the ASAv Using VMware
Cisco Systems, Inc. Cisco Adaptive Security Virtual Appliance (ASAv) Quick Start Guide, 9.4
Cisco ACI and Cisco AVS
Cisco Systems, Inc. Cisco ACI Virtualization Guide, Release 1.2(1i)
Service Graph Design with Cisco Application Centric Infrastructure White Paper