This document discusses configuring multi-chassis EtherChannel on Nexus 7000 switches with virtual port channels (vPC). It describes:
1) Configuring vPC peer links between the two Nexus switches and the vPC domain.
2) Connecting ports on a Cisco 3750 switch stack to both Nexus switches using EtherChannel without needing to know it is connected to two devices.
3) Key points are having all links active in spanning tree and utilizing all bandwidth between devices.
LACP Configuration and Multi-chassis EtherChannel on Nexus 7000 with vPC, Part 1 of 2

In Nexus on September 13, 2010 at 11:21

The other day I received a question about EtherChannel and the Nexus 7000, and based on that question I felt it would be good to include the information here as well.
This will be a two-part post: the first part covers the Nexus configuration for vPC, and the second will cover the multi-chassis EtherChannel configuration on the 3750 as well as on the Nexus 7000 switches.

What are the benefits of multi-chassis (vPC) EtherChannel? Basically, all the uplinks from your switches are in FORWARDING mode; nothing is in blocking mode in your spanning tree domain. This means you have a loop-free topology in your data center and all links can be utilized.
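As a rough illustration of the bandwidth difference (this sketch and its numbers are mine, not from the original post): with classic spanning tree toward a pair of core switches, the redundant uplinks are blocked, while with vPC every member link forwards.

```python
def usable_bandwidth_gbps(uplinks, speed_gbps, vpc=True):
    """Usable uplink bandwidth from an access switch to a core pair.

    With vPC (multi-chassis EtherChannel), every uplink forwards.
    Without vPC, spanning tree blocks the redundant path, so only the
    uplinks toward one core carry traffic (assuming an even split of
    uplinks between the two core switches).
    """
    if vpc:
        return uplinks * speed_gbps
    return (uplinks // 2) * speed_gbps

# Hypothetical example: four 1G uplinks, two to each Nexus.
print(usable_bandwidth_gbps(4, 1, vpc=True))   # all four links forward -> 4
print(usable_bandwidth_gbps(4, 1, vpc=False))  # half blocked by STP   -> 2
```

The exact numbers depend on your link speeds and topology, but the point holds: without vPC, roughly half of your redundant uplink capacity sits idle in a blocking state.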
Below is the diagram of the configuration that I will be showing here. There will be a Layer 2 EtherChannel vPC peer-link between Nexus 7010-1 and Nexus 7010-2 (orange-ish line), a Layer 3 EtherChannel for the vPC keep-alive (red line), as well as a multi-chassis (vPC) EtherChannel from a 3750 stack to Nexus 7010-1 and Nexus 7010-2 with all links in a single EtherChannel bundle.
Configuration for both of the Nexus switches is the same except where noted.
Configuration for the Nexus switches

First thing to do is enable the vPC feature:

feature vpc
Once you have enabled the vPC feature, you should create your keep-alive links. Here I create a port-channel via LACP over ports 9/1 and 10/1. You will also notice that I have spread the channel over two line cards. This has been done to help assure maximum redundancy: if one card were to go bad, the other card would still be active in the port-channel.

interface Ethernet9/1
  description [----[ vPC KeepAlive to CoreSwitch2 ]----]
  channel-group 101 mode active   ! Assign port to port-channel 101 via LACP
  no shutdown
interface Ethernet10/1
  description [----[ vPC KeepAlive to CoreSwitch2 ]----]
  channel-group 101 mode active
  no shutdown
Now we can create the VRF for the keep-alive link. I suggest using a dedicated VRF for security and sanity purposes. This VRF will not participate in your global routing table, which allows for more stability and also prevents duplicate IP addresses in the network.

vrf context VPC100_KA
Now we can create the Layer 3 interface on the port-channel and assign it to the new VRF, VPC100_KA.

interface port-channel101
  description [----[ vPC Keep-Alive link between CoreSwitches ]----]
  vrf member VPC100_KA   ! Assign this interface into the appropriate VRF
  ip address 10.10.10.1/30   ! The other side of the link is .2/30
Now you can configure the vPC peer-links (orange-ish lines). Since I am using 10G links for this connection, I have set the rate mode to dedicated. This prevents any chance of oversubscription on the 10G port. It also disables the other three ports in the port group, so you need to keep that in mind when you are designing your deployment.

interface Ethernet7/1
  description [-[ vPC Connection to Nexus 7010-2 - E7/1 ]-]
  switchport
  switchport mode trunk   ! Set the mode to trunk
  rate-mode dedicated force   ! Force the rate-mode
  mtu 9216
  udld enable   ! Since this is also fiber, enable UDLD
  channel-group 100 mode active   ! Assign to port-channel 100
  no shutdown
!
interface Ethernet8/1
  description [-[ vPC Connection to Nexus 7010-2 - E8/1 ]-]
  switchport
  switchport mode trunk
  rate-mode dedicated force
  mtu 9216
  udld enable
  channel-group 100 mode active
  no shutdown
!
Now to configure the port-channel as a vPC peer-link, as well as the vPC domain information.

interface port-channel100
  description [-[ vPC Peer-Link between Nexus Switches ]-]
  switchport
  switchport mode trunk
  vpc peer-link   ! Assign this port-channel as a vPC peer-link
  spanning-tree port type network
  mtu 9216
!
vpc domain 100
  role priority 16000   ! Hard-code switch 1 as the vPC primary; switch 2 was left at the default
  peer-keepalive destination 10.10.10.2 source 10.10.10.1 vrf VPC100_KA   ! The other side has the IP addresses reversed
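The role-priority election above can be sketched in a few lines. This is an illustration of my own (not from the post): the peer with the lower role priority becomes the configured vPC primary, with ties broken by the lower system MAC; the priority 32667 shown for the second peer is the NX-OS default, and the MAC values are made up for the example.

```python
def vpc_primary(peer_a, peer_b):
    """Return the name of the peer elected as configured vPC primary.

    Lower role priority wins; on a tie, the lower system MAC wins
    (a simplified model of the NX-OS election).
    """
    a_key = (peer_a["priority"], peer_a["mac"])
    b_key = (peer_b["priority"], peer_b["mac"])
    return peer_a["name"] if a_key < b_key else peer_b["name"]

# Hypothetical values: N7K1 hard-coded to 16000, N7K2 left at default.
n7k1 = {"name": "N7K1", "priority": 16000, "mac": 0x5475D04F1165}
n7k2 = {"name": "N7K2", "priority": 32667, "mac": 0x5475D04F2266}
print(vpc_primary(n7k1, n7k2))  # N7K1
```

Note that the configured role and the operational role can differ after failovers, which is why `sh vpc` later reports "primary, operational secondary".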
Let's check the port-channel and make sure it is up with the appropriate members. As you can see from the output, Eth7/1 and Eth8/1 are members of the channel.
N7K1# sh int port-channel 100
port-channel100 is up
[------ SNIP - Output omitted! ------]
  Members in this channel: Eth7/1, Eth8/1
N7K1#
Also check the vPC and the vPC keep-alive link:

N7K1# sh vpc
Legend:
  (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                    : 100
Peer status                      : peer adjacency formed ok
vPC keep-alive status            : peer is alive
Configuration consistency status : success
Type-2 consistency status        : success
vPC role                         : primary, operational secondary
Number of vPCs configured        : 9
Peer Gateway                     : Disabled
Dual-active excluded VLANs       : -

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po100  up     1-224
N7K1# sh vpc peer-keepalive

vPC keep-alive status     : peer is alive
--Peer is alive for       : (1486816) seconds, (684) msec
--Send status             : Success
--Last send at            : 2010.09.11 12:38:36 872 ms
--Sent on interface       : Po101
--Receive status          : Success
--Last receive at         : 2010.09.11 12:38:36 872 ms
--Received on interface   : Po101
--Last update from peer   : (0) seconds, (161) msec
This concludes the first post. The second post will be up shortly and will focus on the Cisco 3750 configuration as well as the associated configs on the Nexus 7000 switches.
LACP Configuration and Multi-chassis EtherChannel on Nexus 7000 with vPC, Part 2 of 2

In Nexus on September 13, 2010 at 19:47

This is the second part of a two-part post on EtherChannel on the Nexus 7000. In the first part I covered how to configure vPC on the Nexus 7000; here I will cover what it takes to uplink a remote switch to the Nexus 7000 core switches using vPC/multi-chassis EtherChannel.
Here is a diagram depicting the layout we are using. For this part of the post, we will focus on the blue line that connects both Nexus switches to the 3750 stack.
On the Cisco 3750 switches (two switches in a stack configuration) we need to configure the interfaces to be in a channel-group; for this example I am using channel-group 6 (the switch is actually named StackSwitch06). What you will also notice is that you configure the 3750 stack just as if it were connected to only one switch: a single port-channel that consists of all the ports connected to both Nexus switches.
For this example we are using ports G1/0/1, G1/0/24, G2/0/1, and G2/0/24. One thing I want to mention: when you are planning your uplinks to your core switches, be aware of the switch ASIC layout. I say this because I have often seen companies use ports 23 and 24 to uplink to a core switch. The problem with this is that:
1) The same ASIC is probably controlling both ports, and if it goes bad your links to the core are gone and your switch is isolated.
2) You have a better chance of oversubscribing the ASIC before the uplink when utilization is high on the channel.
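To see why spreading member links across ASICs and stack members pays off, here is a minimal sketch of how an EtherChannel picks a member link per flow. This is my own simplification (real Cisco switches hash configurable fields such as source/destination MAC or IP through a platform-specific function); the XOR-of-low-bits scheme below only illustrates the idea, and the MAC values are made up.

```python
def select_member_link(src_mac: int, dst_mac: int, members: list) -> str:
    """Pick the EtherChannel member link for a flow.

    A flow's header fields are hashed (here: XOR of the addresses)
    and taken modulo the number of member links. Every frame of a
    given flow hashes to the same member, preserving frame order,
    while different flows spread across the bundle.
    """
    index = (src_mac ^ dst_mac) % len(members)
    return members[index]

# The four uplinks from the example, spread across both stack members:
members = ["Gi1/0/1", "Gi1/0/24", "Gi2/0/1", "Gi2/0/24"]
print(select_member_link(0x0A, 0x01, members))  # Gi2/0/24
print(select_member_link(0x0A, 0x02, members))  # Gi1/0/1
```

Because each flow is pinned to one member, a single flow never exceeds one link's speed; the aggregate bandwidth comes from many flows hashing across all members, which is another reason to keep those members on separate ASICs.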
Now, on to the configuration; first up, the Cisco 3750s.

interface GigabitEthernet1/0/1
  description [----[ Uplink to N7K1 - E9/10 ]----]
  switchport trunk encapsulation dot1q
  switchport mode trunk
  channel-group 6 mode active
interface GigabitEthernet1/0/24
  description [----[ Uplink to N7K2 - E9/10 ]----]
  switchport trunk encapsulation dot1q
  switchport mode trunk
  channel-group 6 mode active
interface GigabitEthernet2/0/1
  description [----[ Uplink to N7K1 - E10/10 ]----]
  switchport trunk encapsulation dot1q
  switchport mode trunk
  channel-group 6 mode active
interface GigabitEthernet2/0/24
  description [----[ Uplink to N7K2 - E10/10 ]----]
  switchport trunk encapsulation dot1q
  switchport mode trunk
  channel-group 6 mode active
Once the interfaces are assigned to the channel-group, we can configure the EtherChannel on the Cisco 3750s. Notice that there is no vPC information, nor anything else that indicates this is connected to two switches.

interface Port-channel6
  switchport trunk encapsulation dot1q
  switchport mode trunk
Now, on the Nexus side we need to do some configuration as well. Both Nexus switches are configured the same, so there are no differences between the switch configs.

interface Ethernet9/10
  description [----[ StackSwitch6-1 ]----]
  switchport
  switchport mode trunk
  channel-group 6 mode active
  no shutdown

interface Ethernet10/10
  description [----[ StackSwitch6-1 ]----]
  switchport
  switchport mode trunk
  channel-group 6 mode active
  no shutdown
Now, when it comes to configuring the EtherChannel on the Nexus switches, it is configured the same except for the addition of a vPC identifier. I recommend using the same number that you used for the port-channel for easy identification, but that is up to you.

interface port-channel6
  description [----[ LACP EtherChannel for StackSwitch6 ]----]
  switchport
  switchport mode trunk
  vpc 6
Once you have it configured on the Nexus, make sure it is up and in the vPC correctly.
N7K1# sh int port-channel 6
port-channel6 is up
 vPC Status: Up, vPC number: 6
  Hardware: Port-Channel, address: 5475.d04f.1165 (bia 5475.d04f.1165)
  Description: [----[ LACP EtherChannel for RackSwitch6 ]----]
  Members in this channel: Eth9/10, Eth10/10
N7K1#
Once you have confirmed that all is working correctly, you can check out the StackSwitch spanning tree information: