UCS BootCamp PDF
Director
Mike Griffin
Day 1 – UCS architecture and overview
Data Center trends
UCS overview
UCS hardware architecture
UCS B series server overview
C series server overview
UCS firmware management
UCS HA
Day 2 – Service profile overview and lab
Pools, Policies and templates
UCS Lab
2
Day 3 – UCS Director
UCS director overview
UCS director hands on lab
Day 4 – Advanced UCS topics
UCS Networking and connectivity
Implementing QoS within UCS
UCS Central
3
Module 1
Scale Up / Scale Out / Scale In
Monolithic servers → commoditized bladed and rack servers
Large numbers of proprietary CPUs, 1 app per physical server → multi-socket / multi-core x86 and x64 CPUs (Intel / AMD)
Proprietary platform and OS → commoditized platform and OS
Many apps per server; increasing virtual machine density
Rack servers: console, power, networking, and storage connectivity to each server
Blade chassis: console, power, networking, and storage connectivity shared in the chassis
Single-core CPU vs. multi-core CPU (diagram)
Server Impact
More cores in a CPU = More Processing
Critical for applications that become processing bound
Core densities are increasing 2/4/6/8/12/16
CPUs are x64 based
DIMM slots (diagram)
PCIe Bus
In virtually all server compute platforms, the PCIe bus serves as the primary motherboard-level interconnect to hardware.
Lanes: a lane is composed of a transmit and receive pair of differential lines. PCIe slots can have 1 to 32 lanes, and data is transmitted bi-directionally over the lane pairs. A slot is described as PCIe x16, where x16 is the number of lanes the card can use.
Form factor: a PCIe card fits into a slot of its physical size or larger (maximum x16), but may not fit into a smaller PCIe slot (x16 in an x8 slot).
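To make the lane math concrete, here is a small Python sketch. The per-lane figures are the commonly quoted PCIe Gen1/Gen2/Gen3 effective rates; everything else (function name, lane counts) is an illustrative assumption, not part of the deck.

# Rough one-way PCIe bandwidth estimate per slot width.
PER_LANE_MB_S = {1: 250, 2: 500, 3: 985}  # PCIe generation -> approx MB/s per lane, per direction

def pcie_bandwidth_mb_s(generation: int, lanes: int) -> int:
    """Return approximate one-way bandwidth for an xN slot of a given PCIe generation."""
    if lanes not in (1, 2, 4, 8, 16, 32):
        raise ValueError("unsupported lane count")
    return PER_LANE_MB_S[generation] * lanes

if __name__ == "__main__":
    # e.g. the x16 Gen 2 host interface mentioned later for the VIC adapters
    print(pcie_bandwidth_mb_s(2, 16), "MB/s one-way for x16 Gen 2")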
Platform Virtualization:
Physical servers host multiple virtual servers
Better physical server utilization (using more of the existing resources)
Virtual servers are managed like physical servers
Access to physical resources on the server is shared
Access to resources is controlled by the hypervisor on the physical host
Challenges:
Pushing complexity into the virtualization layer
Who manages what when everything is virtual
Integrated and virtualization-aware products
11
Server Orchestrators / Manager of Manager
12
Evolution of compute: Mainframe → Mini Computer → Client/Server → Web → Virtualization → Cloud
Integrated infrastructure stack:
- Compute – UCS B Series, UCS C Series
- Network – Nexus 7K, 5K, 4K, 3K, 2K; FCoE
- Storage – NetApp FAS
- Management server
Over the past 20 years
An evolution of size, not thinking
More servers & switches than ever
Management applied, not integrated
Virtualization has amplified the problem
Result
More points of management
More difficult to maintain policy coherence
More difficult to secure
More difficult to scale
15
Embed management
Unify fabrics
Optimize virtualization
Remove unnecessary
o switches,
o adapters,
o management modules
Unified Fabric
UCS building blocks (connecting up to the SAN, LAN, and management networks):
Fabric Interconnect
Chassis
o Up to 8 half-width blades or 4 full-width blades
Fabric Extender
o Host to uplink traffic engineering
Adapter
o Adapter for single-OS and hypervisor systems
Compute Blade
o Half width (half slot) or full width (full slot)
UCS Fabric Interconnect (connects to SAN and LAN):
o 20-port 10Gb FCoE and 40-port 10Gb FCoE models
o 20/40/48/96 fabric/border ports (depending on model)
o Expansion module bay (1 or 2)
o Console port and 1 x management port
o 1U or 2U form factor
Chassis:
o 8 blade slots (shown with 4 x single-slot and 2 x double-slot blades)
o 2 x IOMs, each with 4 x 10GE SFP+ fabric ports (FCoE)
o 4 x power supplies, 4 x power entry, 8 x fan modules
Product Features and Specs | UCS 6120XP | UCS 6140XP | UCS 6248UP | UCS 6296UP
Switch Fabric Throughput | 520 Gbps | 1.04 Tbps | 960 Gbps | 1920 Gbps
Virtual Interface Support | 15 per downlink | 15 per downlink | 63 per downlink | 63 per downlink
16 Unified Ports
IOM components (IOM-2204 / IOM-2208):
Switching ASIC
o Aggregates traffic to/from host-facing 10G Ethernet ports from/to network-facing 10G Ethernet ports
o Up to 8 fabric ports to the Interconnect
CPU (also referred to as the CMC, Chassis Management Controller)
o Controls the ASIC and performs other chassis management functionality
o Local FLASH, DRAM, and EEPROM
L2 Switch
o Aggregates traffic from the CIMCs on the server blades
Switch interfaces
o HIF (backplane ports) – up to 32 backplane ports to the blades
o NIF (fabric ports)
o BIF, CIF
2104/220X Generational Contrasts
Blade Connectors
PSU Connectors
B200 M3
2-Socket Intel E5-2600, 2 SFF Disk / SSD, 24 DIMM
Blade Servers
B250 M2
2-Socket Intel 5600, 2 SFF Disk / SSD, 48 DIMM
B230 M2
2-Socket Intel E7-2800 and E7-8800, 2 SSD, 32 DIMM
B420 M3
4-Socket Intel E5-4600, 4 SFF Disk / SSD, 48 DIMM
B440 M2
4-Socket Intel E7-4800 and E7-8800, 4 SFF Disk / SSD, 32 DIMM
33
Expands UCS into the rack-mount market
Multiple offerings for different workloads:
o C200 – 1RU base rack-mount server
o C210 – 2RU, large internal storage, moderate RAM
o C250 – 2RU, Memory Extending (384 GB)
o C260 – 2RU, large internal storage and large RAM capacity (1 TB)
o C460 – 4RU, 4-socket, large internal storage, large RAM (1 TB)
o C220 M3 – dense enterprise-class 1RU server, 2-socket, 256 GB, optimized for virtualization
o C240 M3 – 2RU, storage optimized, enterprise class, 384 GB, up to 24 disks
Offers a path to Unified Computing
(Pictured: UCS C460 M2, C260 M2, C250 M2, C240 M3, C220 M3, C210 M2, C200 M2)
UCS C200 rear view: dongle for 2 x USB / VGA / console, DVD, internal disks, power, LOM, USB and VGA
UCS C250 rear view: dongle for 2 x USB / VGA / console, DVD, internal disks, dual power, USB and VGA, expansion card, LOM
UCS C260 front view: console and management, internal disks, DVD, dongle for 2 x USB / VGA / console
UCS C460 front view: dongle for 2 x USB / VGA / console, DVD, internal disks
UCS C220 front view: dongle for 2 x USB / VGA / console, DVD, internal disks
UCS C240 front view: dongle for 2 x USB / VGA / console, internal disks
Rack Servers
C22 M3
2-Socket Intel E5-2400, 8 Disks / SSD, 12 DIMM, 2 PCIe, 1U
C24 M3
2-Socket Intel E5-2400, 24 Disks / SSD, 12 DIMM, 5 PCIe, 2U
C220 M3
2-Socket Intel E5-2600, 4/8 Disks / SSD, 16 DIMM, 2 PCIe, 1U
C240 M3
2-Socket Intel E5-2600, 16/24 Disks / SSD, 24 DIMM, 5 PCIe, 2U
C260 M2
2-Socket Intel E7-2800 / E7-8800, 16 Disks / SSD, 64 DIMM, 6 PCIe, 2U
C420 M3
4-Socket Intel E5-4600, 16 Disks / SSD, 48 DIMM, 7 PCIe, 2U
C460 M2
4-Socket Intel E7-4800 / E7-8800, 12 Disks / SSD, 64 DIMM, 10 PCIe, 4U
42
Adapter categories: Virtualization / Ethernet Only / Compatibility
M81KR / VIC 1200 series (virtualization): dual 10GbE/FCoE uplinks (UIF 0, UIF 1), vNICs presented to the host
VIC 1280 – next generation VIC
Dual 4x10 Gbps connectivity into the fabric (10GBASE-KR sub-ports)
PCIe x16 Gen 2 host interface
Capable of 256 PCIe devices (OS dependent)
Same host-side drivers as the VIC (M81KR)
Retains VIC features with enhancements
VIC 1240 – mLOM on M3 blades
Dual 2x10 Gbps connectivity into the fabric (10GBASE-KR sub-ports)
PCIe x16 Gen 2 host interface
Capable of 256 PCIe devices (OS dependent)
Same host-side drivers as the VIC (M81KR)
Retains VIC features with enhancements
UCS P81E and VIC 1225 (rack-server CNAs)
Dual 10GbE/FCoE uplinks; Ethernet and FC PCIe devices presented to the host over the PCIe x16 bus
Up to 256 vNICs
NIC teaming done by hardware
48
RAID Controllers:
o 1 built-in controller (ICH10R)
o Optional LSI 1064e-based mezzanine controller
o Optional LSI 1078-based MegaRAID controller (RAID 0, 1, 5, 6 and 10 support)
Disks:
o 3.5 inch and 2.5 inch form factors
o 15K SAS (high performance), 10K SAS (performance), 7200 SAS (high capacity / performance), 7200 SATA (cost and capacity)
o 73 GB, 146 GB, 300 GB, and 500 GB
The FI runs 3 separate “planes” for the various functionality
o Local-mgmt
• Log file management, license management, reboot, etc. is done through local-mgmt
o NXOS
• The data forwarding plane of the FI
• Functionally equivalent to NX-OS found on Nexus switches, but read-only
o UCSM
• XML based and the only way to configure the system
• Configures NX-OS for data forwarding
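Because UCSM is driven entirely through its XML API, a login/query exchange can be scripted. The sketch below follows the published UCSM XML API methods (aaaLogin, configResolveDn, aaaLogout, posted to /nuova); the VIP address and credentials are placeholder assumptions, and certificate checking is disabled only because UCSM ships with a self-signed certificate.

# Minimal UCSM XML API exchange (illustrative sketch, Python)
import ssl
import urllib.request
import xml.etree.ElementTree as ET

UCSM_URL = "https://10.10.10.1/nuova"          # assumption: the cluster IP from the setup dialog
CTX = ssl._create_unverified_context()           # UCSM default cert is self-signed

def xml_post(body: str) -> ET.Element:
    req = urllib.request.Request(UCSM_URL, data=body.encode(),
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req, context=CTX) as resp:
        return ET.fromstring(resp.read())

# aaaLogin returns a cookie that authenticates subsequent requests
login = xml_post('<aaaLogin inName="admin" inPassword="password"/>')
cookie = login.get("outCookie")

# Example read-only query: resolve the top-level "sys" object
result = xml_post(f'<configResolveDn cookie="{cookie}" dn="sys" inHierarchical="false"/>')
print(result.attrib)

# Free the session when done
xml_post(f'<aaaLogout inCookie="{cookie}"/>')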
51
UCSM is a redundant management service running on the Fabric Interconnects: management interfaces on one side, managed endpoints (switch elements, chassis elements, server elements) on the other, with multiple protocol support.
Cisco UCS Manager access methods – GUI, CLI, XML API, and 3rd-party tools – all operate on the same configuration, state, and operational state.
Fabric Interconnects synchronize database and state information
through dedicated, redundant Ethernet links (L1 and L2)
54
L1 to L1
L2 to L2
55
Example of session log file on client
Client logs for debugging UCSM access and client KVM access are found at this location on the client system:
C:\Documents and Settings\userid\Application Data\Sun\Java\Deployment\log\.ucsm
• Embedded device manager for family of UCS components
• Enables stateless computing via Service Profiles
• Efficient scale: Same effort for 1 or N blades
GUI navigation; CLI equivalent to the GUI; SNMP, SMASH CLP, Call Home; TCP 80
The C-Series rack-mount servers can also be managed by UCSM.
This requires a pair of 2232PP FEXes. This FEX supports the needed features for PCIe virtualization and FCoE.
Two pairs of cables must be connected from the server, one cable of each pair to each FEX.
One pair of cables is connected to the LOM (LAN on Motherboard) ports. This provides control-plane connectivity for UCSM to manage the server.
The other pair of cables is connected to the adapter (P81E or VIC 1225). This provides data-plane connectivity.
VIC 1225 adapters support single-wire management in UCSM 2.1.
66
• 16 servers per UCS “virtual chassis”
(pair of 2232PPs) UCS
Manager
• 1 Gig LOM’s used for management
10 Gb CNA
1 Gb LOM
GLC-T connector
Mgmt Connection
Data Connection
67
• Management and data for C-Series
rack servers carried over single wire, UCS
Manager
rather than separate wires
• Requires VIC 1225 adapter UCS 6100 or 6200 UCS 6100 or 6200
VIC 1225
Mgmt and
Data Connection
68
Cisco VIC provides converged network connectivity for Cisco UCS C-Series servers.
When integrated into UCSM it operates in NIV (VN-Tag) mode.
Up to 118 PCIe devices (vNIC/vHBA).
Provides an NC-SI connection, enabling single-wire management.
Diagram: FI-A and FI-B each connect to a 2232 FEX. The rack server's 1G LOM ports and 10/100 BMC management ports carry management traffic, while the adapter's 10G ports carry data; NC-SI links the adapter to the BMC, and the adapter connects to CPU and memory over PCIe.
Existing out of band management topologies will continue to work
71
Server Model | Number of VIC 1225 Supported | PCIe Slots that support VIC 1225 | Primary NC-SI Slot (Standby Power) for UCSM integration
UCS C22 M3 | 1 | 1 | 1
UCS C24 M3 | 1 | 1 | 1
UCS C220 M3 | 1 | 1 | 1
UCS C240 M3 | 2 | 2 and 5 | 2
UCS C260 M2 | 2 | 1 and 7 | 7
UCS C420 M3 | 3 | 1, 4, and 7 | 4
UCS C460 M2 | 2 | 1 and 2 | 1
72
Diagram: Fabric A and Fabric B interconnects form a fabric switch cluster, with uplink ports to SAN A / SAN B and ETH 1 / ETH 2, out-of-band management, server ports down to the chassis, and a console connection for initial setup.
Setup runs on a new system
<snip>
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: MySystem
Physical Switch Mgmt0 IPv4 address : 10.10.10.2
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0
IPv4 address of the default gateway : 10.10.10.254
Cluster IPv4 address : 10.10.10.1
<snip>
Login prompt
MySystem-A login:
76
Setup runs on a new system
o Enter the configuration method. (console/gui) ? console
o Installer has detected the presence of a peer Fabric interconnect.
o This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
o Enter the admin password of the peer Fabric interconnect: <password>
o Retrieving config from peer Fabric interconnect... done
o Peer Fabric interconnect Mgmt0 IP Address: 10.10.10.2
o Cluster IP address : 10.10.10.1
o Physical Switch Mgmt0 IPv4 address : 10.10.10.3
o Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
o Applying configuration. Please wait. Configuration file - Ok
Login prompt
MySystem-B login:
77
Three downloadable files for blade and rack-mount integration:
o Infrastructure Bundle
o B-series Bundle
o C-series Bundle
Infrastructure Bundle contents: UCS Manager (UCSM), Fabric Interconnect (NX-OS), Fabric Extender (IOM) firmware, Chassis Mgmt. Controller, Adapter FW, Catalog File, UCSM Mgmt. Extension
Manual
o Upgrade guides published with every UCSM release
o Very important to follow the upgrade order listed in the guide
http://www.cisco.com/en/US/products/ps10281/prod_installation_guides_list.html
Firmware Auto-Install
o New feature in UCSM 2.1
o Wizard-like interface to specify which version of firmware to upgrade
infrastructure / servers to
o Sequencing of firmware updates is handled automatically to ensure the
least downtime
o Intermediate user acknowledgement during fabric upgrade allows users
to verify that elements such as storage are in an appropriate state
before continuing the upgrade
88
Firmware Auto-Install implements package version based upgrades
for both UCS Infrastructure components and Server components
89
Sequence followed by “Install Infrastructure Firmware”
1) Upgrade UCSM
Non disruptive but UCSM connection is lost for 60-80 seconds.
2) Update backup image of all IOMs
Non disruptive.
3) Activate all IOMs with “set startup” option
Non disruptive.
4) Activate secondary Fabric Interconnect
Non disruptive but degraded due to one FI reboot.
5) Wait for User Acknowledgement
6) Activate primary Fabric Interconnect
Non disruptive but degraded due to one FI reboot and UCSM
connection is lost for 60-80 seconds.
90
Blade management service to
external client
o IP’s on ext mgmt network
o NAT’d by the FI
Service to ext clients:
o KVM / Virtual media
o IPMI
o Serial Over LAN (SOL)
External clients reach KVM, IPMI, and SoL through the FI (SSH and HTTP/S to the FI management address, e.g. 192.168.1.2); the FI NATs sub-interfaces (eth0:1 … eth0:5) on mgmt0 to the blades' CIMC addresses (e.g. 192.168.1.4–192.168.1.6 for Servers 1/1–1/3).
Size the external management IP pool by the number of hosts in the subnet and the number of blades.
96
Unified Ports (Eth or FC)
Benefits:
- Simplify switch purchase – remove port-ratio guesswork
- Increase design flexibility
- Remove specific protocol bandwidth bottlenecks
Use cases:
- Flexible LAN and storage convergence based on business needs
- Service can be adjusted based on the demand for specific traffic
Ports on the base card or the Unified Port GEM module can be either Ethernet or FC
Only a contiguous set of ports can be configured as Ethernet or FC
Ethernet ports have to be the first set of ports
Port type changes take effect after the next reboot of the switch for base-board ports, or a power-off/on of the GEM for GEM unified ports
106
Slider based configuration
Only even number of ports can be configured as FC
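The layout rules above (contiguous ranges, Ethernet first, an even number of FC ports) can be expressed as a small validation sketch. This is only an illustration of the constraints as stated on these slides, not UCSM logic; the function name and port encoding are assumptions.

# Validate a proposed unified-port layout against the stated rules (Python sketch).
def valid_unified_port_layout(port_types: list[str]) -> bool:
    if any(t not in ("eth", "fc") for t in port_types):
        return False
    fc_count = port_types.count("fc")
    if fc_count % 2 != 0:
        return False          # only an even number of ports can be FC
    first_fc = port_types.index("fc") if fc_count else len(port_types)
    # every port after the first FC port must also be FC (contiguous sets, Ethernet first)
    return all(t == "fc" for t in port_types[first_fc:])

print(valid_unified_port_layout(["eth"] * 12 + ["fc"] * 4))   # True
print(valid_unified_port_layout(["fc"] * 2 + ["eth"] * 14))   # False: Ethernet must come first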
Ethernet
o Server Port
o Uplink Port
o FCoE Uplink
o FCoE Storage
o Appliance Port
Fiber Channel
o FC Uplink Port
o FC Storage Port
108
Server Port
o Connects to Chassis
Uplink Port
o Connects to upstream LAN.
Appliance Port
o Connects to an IP appliance (NAS)
109
FC Uplink Port
o Connects to upstream SAN via FC
o Can be 2 / 4 or 8 Gig
FC Storage Port
o Connects to a directly attached FC Target
110
The FIs do not participate in VTP
VSAN configuration is done in the SAN Tab in UCSM
114
vPC
Port Channels provide better
performance and resiliency
117
FC Uplinks from FI can be
members of a port channel with a
Nexus or MDS upstream FCF
Module 2
A service profile abstracts the server's identity and configuration for the LAN and SAN: MAC address, WWN address, UUID, boot order, NIC / HBA / BIOS settings, and NIC / HBA / BIOS / drive / drive-controller / BMC firmware.
Example identity: UUID: 56 4dcd3f 59 5b…, MAC: 08:00:69:02:01:FC, WWN: 5080020000075740, Boot Order: SAN, LAN
The same identity and LAN/SAN configuration can be associated with Chassis-1/Blade-2 at Time A and moved to Chassis-8/Blade-5 at Time B (e.g. service profile MyDBServer).
A feature for multi-tenancy that defines a management hierarchy for the UCS system.
Example hierarchy: Root Org → Eng (QA, HW) and HR.
Pools, Policies, Service Profiles, Templates
Blades are not part of an organization and are global resources
131
Root has access to pools and policies in Group-C
HR has access to pools and policies defined in Group-C
The consumer of a pool is a service profile.
A value is retrieved from the pool as you create the logical object; that specific value then belongs to the service profile (and still moves from blade to blade at association time).
135
Point to pool from appropriate place in Service Profile
For example:
o vNIC --- use MAC pool
o vHBA – use WW Port Name pool
In GUI can see the value that is retrieved from the pool
o Note that it belongs to service profile, not physical blade
136
Pools simplify creation of Service Profiles.
Cloning
Templates
137
If you create a profile with pool associations
o (server pool, MAC pool, WWPN pool, etc)…..
• Specific new values for MAC, WWN will be immediately assigned to the
profile from the appropriate pool.
138
16-byte (128-bit) number – about 3.4x10^38 different values
Stored in the BIOS
Consumed by some software vendors (e.g. Microsoft, VMware)
UUIDs (as used by ESX) need only be unique within ESX
“datacenter” (unlike MACs, WWNs, and IPs)
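A quick way to see the 128-bit space and the canonical formatting is the standard-library sketch below; it is illustrative only and not tied to how UCSM builds its UUID pools.

# UUIDs are 128-bit values; the uuid module shows the size and formatting (Python sketch).
import uuid

u = uuid.uuid4()                      # random UUID, similar in shape to what a UUID pool hands out
print(u)                              # 8-4-4-4-12 hex groups
print(u.int.bit_length() <= 128)      # True: always fits in 128 bits / 16 bytes
print(f"total space: 2**128 = {2**128:.3e} values")   # ~3.4e38 different values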
140
One MAC per vNIC
141
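A MAC pool is just a contiguous block of addresses carved from a base prefix; the sketch below generates such a block. The 00:25:B5 base is an assumption (a Cisco OUI commonly used for UCS MAC pools), and the block size is arbitrary for illustration.

# Generate a block of sequential MAC addresses from a base, the way a MAC pool does (Python sketch).
def mac_pool(base: str, size: int) -> list[str]:
    start = int(base.replace(":", ""), 16)
    return [":".join(f"{(start + i):012x}"[j:j + 2] for j in range(0, 12, 2)).upper()
            for i in range(size)]

# e.g. a 16-address block, one MAC per vNIC as service profiles are instantiated
for mac in mac_pool("00:25:B5:00:00:00", 16):
    print(mac)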
Can have overlapping pools
142
One WWNN per service
profile
One WWPN per vHBA
WWN assignment:
o Use hardware-derived
WWN
o Manually create and assign
WWN
o Assign WWNN pool to
profile/template
o Assign WWPN pool to vHBA
143
Can have overlapping pools
20:00:00:25:B5:XX:XX:XX recommended
144
Manually populated or Auto-populated
145
One server per service profile
Assign server pool to service profile or template
149
Can have overlapping pools
150
Policies can be broadly categorized as
o Global Policies
• Chassis Discovery Policy
• SEL Policy
o Policies tied to a Service Profile
• Boot Policy
• BIOS Policy
• Ethernet Adapter Policy
• Maintenance Policy
Policies when tied to a Service Profile greatly reduce the time taken
for provisioning
152
Template flavors:
Initial template
o Updates to template are not propagated to profile clone
Updating template
o Updates to template propagated to profile clone
Template types:
vNIC
vHBA
Service Profile
163
When creating a vNIC in a service profile, a vNIC template can be
referenced.
This template will have all of the values and configuration to be used
for creating the vNIC.
164
165
Similar to a vNIC template. This is used when creating vHBAs in
your service profile.
166
167
Same flow as creating Service Profile
Can associate virtual adapters (vNIC, VHBA) with MAC and WWN
pools
168
169
170
171
172
173
174
175
176
177
178
179
180
181
You will first start by creating several pools, policies, and templates that will be used for the values assigned to each of your servers through a service profile.
You will then create a service profile template. From the wizard you will select the various pools, policies, and templates.
Once you have created your service profile template you will then create two service profile clones. The values, pools, and policies assigned to the template will be used to create two individual service profiles.
For example, you will create a MAC pool with 16 usable MAC addresses. This will then be placed in the service profile template. When creating clones from the template, the system will allocate MAC addresses from this pool to be used by each vNIC in the service profile.
The service profile will automatically be assigned to a server via the server pool. You will then boot the server and install Linux.
Module 3
Role Based Access Control
Remote User Authentication
Faults, Events and Audit Logs
Backup and Restore
Enabling SNMP
Call Home
Enabling Syslog
Fault Suppression
185
Organizations
o Defines a management hierarchy for the UCS system
o Absolutely no effect on actual operation of blade and its OS
RBAC
o Delegated Management
o Allows certain users to have certain privileges in certain organizations
o Absolutely no effect on who can use and access the OS on the blades
187
Orgs and RBAC could be used independently
Orgs without RBAC
o Structural management hierarchy
o Could still use without delegated administration
• Use administrator that can still do everything
RBAC without Orgs
o Everything in root org (as we have been doing so far)
o Still possible to delegate administration (separate border network/FC
admin from server admin, eg)
188
Really no such thing as not having orgs – everything lives under the root org.
Example hierarchy: root (/) → SWDev (SWgrpA, SWgrpB) and QA (IntTest); policies live inside orgs; blades are independent of org.
A user (e.g. jim) can hold different privileges in different orgs: priv1 at root (/), priv2 in /SWDev, priv3 in /Eng/HWEng.
Role is a collection of privileges
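The role/privilege/locale relationship can be pictured with a tiny data-structure sketch; the role names, privilege names, and matching rule below are illustrative assumptions, not UCSM's actual RBAC tables.

# Toy RBAC model: a role is a collection of privileges; a user holds roles scoped to locales (orgs).
roles = {
    "server-admin": {"service-profile", "server-power"},
    "read-only": {"read"},
}
user = {"name": "jim", "assignments": [("server-admin", "/SWDev"), ("read-only", "/")]}

def privileges_in(user: dict, org: str) -> set:
    """Collect the privileges a user holds in a given org (locale applies to itself and sub-orgs)."""
    out = set()
    for role, locale in user["assignments"]:
        if locale == "/" or org == locale or org.startswith(locale.rstrip("/") + "/"):
            out |= roles[role]
    return out

print(privileges_in(user, "/SWDev"))   # server-admin privileges apply in /SWDev, read everywhere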
198
Provider – The remote authentication server
199
For LDAP we define a DN and reference the roles and Locales it
maps to.
If no group map is defined, a user could end up with the default
privileges such as read-only
200
Faults – system and hardware failures such as power supply failure, power failure, or configuration issues.
Events – system events such as clustering or RSA key generation.
Audit logs – configuration events such as service profile and vNIC creation.
Syslog – syslog messages generated and sent to an external syslog server.
TechSupport files – “show tech” files that have been created and stored.
203
A fault suppression policy is used to determine how long faults are retained or cleared.
The flapping interval determines how long a fault retains its severity and stays in an active state.
Say, for example, the flapping interval is 10 seconds. If instances of a critical fault keep arriving within those 10 seconds, the fault is suppressed and remains in the active state.
After the 10-second interval, if no further instances of the fault have been reported, the fault is then either retained or cleared based on the suppression policy.
204
Full State backup – a backup of the entire system for disaster recovery. This file cannot be imported; it can only be used when doing a system restore during startup of the Fabric Interconnects.
All Configuration backup – backs up the system and logical configuration into an XML file. This file cannot be used during a system restore and can only be imported while the UCS is functioning. This backup does not include passwords of locally authenticated users.
System Configuration – only system configuration such as users, roles, and management configuration.
Logical Configuration – logical configuration such as policies, pools, VLANs, etc.
207
Creating a Backup Operation allows you to perform the same backup multiple times.
The file can be stored locally or on a remote file system.
208
209
You can also create scheduled backups.
This can only be done for Full State and All Configuration backups.
You can point UCS to write to an FTP server, storage array, or any other type of file system.
This can be done daily, weekly, or bi-weekly.
211
IP or Hostname of
remote server to store
backup
Admin state of
scheduled backup
Backup Schedule
212
Once a backup operation is complete you can then import the configuration as needed.
You must create an Import Operation; this is where you point to the file you want to import into the UCS.
You cannot import a Full State backup. That file can only be used when doing a system restore while a Fabric Interconnect is booting.
Options are to merge with the running configuration or replace the configuration.
213
UCS supports SNMP versions 1, 2, and 3
The following protocols are supported for SNMPv3 users:
o HMAC-MD5-96 (MD5)
o HMAC-SHA-96 (SHA)
The AES protocol can be enabled under a SNMPv3 user as well for
additional security.
216
217
You have the option to enable traps or informs. Traps are less reliable because they do not require acknowledgements; informs require acknowledgements but add more overhead.
If you enable SNMPv3, the following V3 privileges can be enabled:
o Auth—Authentication but no encryption
218
Choose the authentication encryption type.
Once you enable AES, you must use a privacy password. This is
used when generating the AES 128 bit encryption key.
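From the manager side, an SNMPv3 poll against the FI cluster address with SHA authentication and AES-128 privacy might look like the pysnmp sketch below. The host, user name, passphrases, and polled OID are placeholder assumptions.

# SNMPv3 GET with SHA auth + AES-128 privacy (pysnmp; Python sketch)
from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity, getCmd,
                          usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    UsmUserData('ucs-snmp-user', 'auth-passphrase', 'privacy-passphrase',
                authProtocol=usmHMACSHAAuthProtocol,      # HMAC-SHA-96
                privProtocol=usmAesCfb128Protocol),       # AES-128 privacy
    UdpTransportTarget(('10.10.10.1', 161)),              # UCS cluster IP (assumed)
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
))

for name, value in var_binds:
    print(name.prettyPrint(), "=", value.prettyPrint())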
219
Call Home is a feature that allows UCS to generate a message based on system alerts, faults, and environmental errors.
Messages can be emailed, sent to a pager, or delivered to an XML-based application.
UCS can send these messages in the following formats:
o Short text format
o Full text format
o XML format
221
A destination profile determines the recipients of the Call Home alerts, the format in which they are sent, and the severity level that triggers them.
A Call Home policy dictates which error messages you would like to enable or disable the system from sending.
When using email as the method to send alerts, an SMTP server must be configured.
It is recommended that both fabric interconnects have reachability to the SMTP server.
222
Call Home logging
level for the system
SMTP Server
223
Alert groups – What
elements you want to
receive errors on.
Logging Level
Alert Format
224
Call home will send alerts for certain types of events and messages.
A Call Home policy allows you to disable alerting for these specific
messages.
225
Smart Call Home will alert Cisco TAC of an issue with UCS.
Based on certain alerts, a Cisco TAC case will be generated automatically.
A destination profile named “CiscoTAC-1” is already predefined. It is configured to send Cisco TAC messages in the XML format.
Under the CiscoTAC-1 profile, enter [email protected].
Under the “System Inventory” tab, click “Send Inventory Now”.
The message will be sent to Cisco. You will then receive an automatic reply based on the contact info you specified in the Call Home setup.
Simply click on the link in the email and follow the instructions to register your UCS for the Smart Call Home feature.
227
Syslog can be enabled under the Admin tab in UCSM.
Local destination allows you to configure UCS to store syslog messages locally in a file.
Remote destination allows UCS to send to a remote syslog server; up to three servers can be specified.
Local sources allow you to decide what types of messages are sent. The three sources are Alerts, Audits, and Events.
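On the receiving side, a remote syslog server is essentially a UDP listener on port 514. The sketch below is a minimal stand-in useful for verifying that UCS remote-syslog messages arrive; the port 5514 is a placeholder for unprivileged testing, and this is not a production syslog daemon.

# Minimal UDP syslog listener for testing remote-syslog delivery (Python sketch)
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data, _sock = self.request                 # (bytes, socket) for UDP servers
        print(f"{self.client_address[0]}: {data.decode(errors='replace').strip()}")

if __name__ == "__main__":
    # Port 514 normally needs root; 5514 is an assumption for unprivileged testing.
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
        server.serve_forever()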
229
Fault suppression offers the ability to lower the severity of designated faults for a maintenance window, preventing Call Home and SNMP traps during that period.
Server Focused - Operating system level shutdown/reboot
- Local disk removal/ replacement
- Server power on/power off/reset
- BIOS, adapter firmware activation/upgrades
- Service profile association, re-association, dis-
association
IOM Focused - Update/Activate firmware
- Reset IOM
- Remove/Insert SFPs
- IOM removal/insert
236
1. default-chassis-all-maint
Blade, IOM, PSU, Fan
2. default-chassis-phys-maint
PSU, Fan
3. default-fex-all-maint
IOM, PSU, Fan
4. default-fex-phys-maint
PSU, Fan
5. default-iom-maint
IOM
6. default-server-maint
Module 4
Single Point of Management
Unified Fabric
240
UCS Manager
Embedded – manages the entire system
UCS Fabric Interconnect
242
Diagram: Fabric A and Fabric B interconnects form a fabric switch cluster with uplink ports, out-of-band management, and server ports connecting down to the IOMs (fabric extenders) in Chassis 1 through Chassis 20.
VIF
o The policy application point where a vNIC connects to the UCS fabric
VN-Tag
o An identifier added to the packet containing a source and destination ID, used for switching within the UCS fabric
o Forms a “virtual cable” between the adapter (the service profile's vNIC / vHBA on the blade) and the IOM, carried over the physical cable
What you see: the service profile's vNIC and vHBA appear as if cabled directly to the Fabric Interconnect (e.g. Eth 1/1 on FI-A); the VN-Tag acts as a virtual cable riding the physical 10GE cable from the adapter through the IOM to the FI.
Benefits: dynamic, rapid provisioning; state abstraction; cable and location independence.
Hardware components (6200 Fabric Interconnect, diagram): Intel Jasper Forest CPU with DDR3 memory and a south bridge; Sunnyvale ASIC providing the unified crossbar fabric; Carmel port ASICs (Carmel 1–6) driving the 10 Gig ports; dual-gig interfaces for management (Mgmt) and cross-connect (Xcon1).
Unified Ports (Eth or FC)
Benefits:
- Simplify switch purchase – remove port-ratio guesswork
- Increase design flexibility
- Remove specific protocol bandwidth bottlenecks
Use cases:
- Flexible LAN and storage convergence based on business needs
- Service can be adjusted based on the demand for specific traffic
Ports on the base card or the Unified Port GEM module can be either Ethernet or FC
Only a contiguous set of ports can be configured as Ethernet or FC
Ethernet ports have to be the first set of ports
Port type changes take effect after the next reboot of the switch for base-board ports, or a power-off/on of the GEM for GEM unified ports
251
Slider based configuration
Only even number of ports can be configured as FC
Configured on a per FI basis
252
61x0/62xx Generational Contrasts
Feature 61x0 62xx
Flash 16GB eUSB 32GB iSATA
DRAM 4GB DDR3 16GB DDR3
Processor Single Core Celeron 1.66 Dual Core Jasper Forest 1.66
Unified Ports No Yes
Number of ports / UPC 4 8
Number of VIF’s / UPC 128 / port fixed 4096 programmable
Buffering per port 480KB 640KB
VLANs 1k 1k (4k future)
Active SPAN Session 2 4 (w/dedicated buffer)
Latency 3.2uS 2uS
MAC Table 16k 16k (32k future)
L3 Switching No Future
IGMP entries 1k 4k (future)
Port Channels 16 48 (96 in 6296)
FabricPath No Future
Components:
Switching ASIC
o Aggregates traffic to/from host-facing 10G Ethernet ports from/to network-facing 10G Ethernet ports
o Up to 8 fabric ports to the Interconnect
CPU (also referred to as CMC)
o Controls Redwood and performs other chassis management functionality
o Local FLASH, DRAM, and EEPROM
L2 Switch
o Aggregates traffic from CIMCs on the server blades
Woodside interfaces
o HIF (backplane ports) – up to 32 backplane ports to the blades
o NIF (fabric ports)
o BIF, CIF
No local switching – all traffic from HIFs goes upstream for switching
2104/220X Generational Contrasts
255
VIC 1280 – next generation VIC
Dual 4x10 Gbps connectivity into the fabric (10GBASE-KR sub-ports)
PCIe x16 Gen 2 host interface
Capable of 256 PCIe devices (OS dependent)
Same host-side drivers as the VIC (M81KR)
Retains VIC features with enhancements
Key Generational Contrasts
Function/Capability M81KR 1280-VIC
258
Ethernet Switching Modes
End Host Mode: each server vNIC is pinned to an uplink port
No Spanning Tree Protocol
o Reduces CPU load on upstream switches
o Reduces control plane load on the 6100
o Simplified upstream connectivity
Switch Mode: the Fabric Interconnect behaves like a normal Layer 2 switch
o Server vNIC traffic follows VLAN forwarding
o Spanning tree protocol (Rapid PVST+) is run on the uplink ports per VLAN; configuration of STP parameters (bridge priority, hello timers, etc.) is not supported
o VTP is not supported currently
o MAC learning/aging happens on both the server and uplink ports, as in a typical Layer 2 switch
o Upstream links are blocked per VLAN via spanning tree logic
Fabric Failover
o The fabric provides NIC failover capabilities, chosen when defining a service profile
o Traditionally done using a NIC bonding driver in the OS
o Provides failover for both unicast and multicast traffic
o Works for any OS
o Supported adapters: Cisco VIC, Menlo (M71KR)
(Diagram: the vNIC's virtual cable terminates on a vEth on FI-A; on failure it moves to FI-B over the physical IOM links.)
Diagram (OS / hypervisor / VM view): the blade's adapter presents Eth 0 (MAC-A) and Eth 1 (MAC-B) on the PCI bus; uplinks from FI-A and FI-B carry VLAN 10 and VLAN 20 to the upstream switches. When Fabric A fails, the vNIC stays up and MAC-A is announced upstream via gratuitous ARP on FI-B, so the OS never sees a link-down.
Diagram (hypervisor example): vNICs carrying the Web, NFS, VMK, and COS VLANs (VM MAC-C, kernel MAC-E, service console MAC-D) fail over from FI-A (veth1240) to FI-B (veth1241); gratuitous ARPs for MAC-C and MAC-E aid upstream convergence.
Ethernet Switching Modes – Recommendations
Spanning Tree protocol is not run in EHM, so the control plane is not burdened
EHM is least disruptive to the upstream network – enable BPDU Filter/Guard and PortFast upstream
MAC learning does not happen on uplink ports in EHM; the current MAC address limit on the 6100 is ~14.5K
Recommendation: End Host Mode
(Diagram: a pin group, e.g. “Oracle”, applied to a vNIC on Server X for uplink traffic engineering.)
Fabric Failover is only available with adapters that support it (diagram shows Cisco VIC and Menlo M71KR, with FI-A and FI-B joined by the L1/L2 links and the virtual cable to the vNIC).
In End Host Mode the border (uplink) ports are active/active; in switch mode, spanning tree leaves some uplinks blocking.
Recommendation: End Host Mode
Certain applications like MS-NLB (unicast mode) need unknown unicast flooding, which is not done in EHM.
B200 M3 blade I/O combinations (diagrams):
o The integrated I/O slot (mLOM – VIC 1240) and the mezzanine connector each connect to IOM-A and IOM-B over 10G-KR backplane lanes, and to the CPUs over x16 Gen 2 PCIe
o VIC 1240 with the mezzanine slot not populated, 2208 IOMs
o VIC 1240 plus a VIC 1280 in the mezzanine slot, 2208 IOMs
o VIC 1240 plus the Port Expander (Sereno) in the mezzanine slot, 2208 IOMs
o VIC 1240 with the mezzanine slot not populated, 2204 IOMs
o VIC 1240 plus a mezzanine VIC, 2204 IOMs
o VIC 1240 plus a pass-through mezzanine, 2204 IOMs
IOM – FI Connectivity
Server-to-Fabric Port Pinning Configurations: Discrete Mode and Port Channel Mode (6200 to 2208, 6200 to 2204)
Individual links
o Blades pinned to discrete NIFs
o Valid number of NIFs for pinning – 1, 2, 4, 8 (see the sketch below)
Port channel
o Only supported between the UCS 6200 and the 2204/2208XP
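Discrete-mode pinning is deterministic by slot; a common way to picture it is slot number modulo the number of active fabric links, as sketched below. The modulo scheme is an assumption for illustration only – the authoritative slot-to-link table is in the Cisco UCS documentation.

# Illustrative discrete-mode pinning: each blade slot maps to one fabric link (Python sketch).
def pin_blades(num_links: int, num_slots: int = 8) -> dict[int, int]:
    if num_links not in (1, 2, 4, 8):
        raise ValueError("valid number of links for pinning is 1, 2, 4 or 8")
    return {slot: ((slot - 1) % num_links) + 1 for slot in range(1, num_slots + 1)}

print(pin_blades(4))   # e.g. slots 1 and 5 share link 1, slots 2 and 6 share link 2, ...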
Diagram: blades 1–8 pinned to fabric links according to the number of active fabric links (IOM fabric ports up to the FI server ports).
Diagram: blades 1–8 connect through the IOM to the Fabric Interconnect (6100/6200) server ports.
On a link failure (or when a link is unused), blades are re-pinned to a valid number of links – 1, 2, 4 or 8; connectivity of the pinned blades is affected, and the addition of links requires a re-ack of the chassis.
Port channel mode (only possible between the 6200 and the 2208XP):
o HIFs are pinned to the port channel between the IOM fabric ports and the FI server ports
o For FCoE, flows are hashed across members on L2 SA, L2 DA, FC SID, FC DID
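Per-flow distribution over the port channel means a hash of the frame's addressing fields (L2 SA/DA, and FC SID/DID for FCoE) picks the member link, so a given flow stays on one link. The sketch below is a schematic version of that idea – the hash function, field encoding, and member count are assumptions, not the switch ASIC's real algorithm.

# Schematic per-flow port-channel member selection (Python sketch)
import zlib

def pick_member(l2_sa: str, l2_da: str, fc_sid: str = "", fc_did: str = "",
                members: int = 8) -> int:
    key = f"{l2_sa}{l2_da}{fc_sid}{fc_did}".encode()
    return zlib.crc32(key) % members      # the same flow always lands on the same member link

print(pick_member("00:25:B5:00:00:01", "00:25:B5:00:00:0A", "0x010001", "0x020001"))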
On a member link failure, blades remain pinned to the port channel; HIFs are not brought down until all port-channel members fail (6200 Fabric Interconnect to IOM 2208XP).
Discrete Mode vs Port Channel Mode:
o Servers can only use a single 10GE IOM uplink | Servers can utilize all 8 x 10GE IOM uplinks
o A blade is pinned to a discrete 10 Gb uplink | A blade is pinned to a logical interface of 80 Gbps
o Fabric failover if a single uplink goes down | Fabric failover only if all uplinks on the same side go down
o Per-blade traffic distribution, same as Balboa | Per-flow traffic distribution within a port channel
o Suitable for traffic-engineering use cases | Suitable for most environments
o Addition of links requires chassis re-ack | Recommended with VIC 1280
Upstream Connectivity (Ethernet)
Diagram: separate DMZs (DMZ 1 = VLANs 20-30, DMZ 2 = VLANs 40-50) are reached by pruning VLANs on the uplinks of FI-A and FI-B (End Host Mode), with the DMZ 1 and DMZ 2 servers pinned accordingly.
Uplink failure behaviour in End Host Mode:
o Sub-second re-pinning of vEths to a surviving uplink – the vNIC stays up, no server NIC disruption
o All uplinks forwarding for all VLANs, no STP
o GARP-aided upstream convergence (for bare-metal servers and for VMs behind a vSwitch / N1K on an ESX host)
Recommended: Port Channel Uplinks (optionally into a vPC domain upstream)
o Sub-second convergence on a member link failure
o No disruption and no GARPs needed
o More bandwidth per uplink
Diagram: FI-A and FI-B (End Host Mode) uplinked to a Nexus 7K pair (7K1/7K2) running vPC, with a vPC peer-link and peer-keepalive.
With 4 x 10G (or more) uplinks per 6100 – use port channels.
Upstream Connectivity (Storage)
The Fabric Interconnect operates in N_Port Proxy mode (not FC switch mode)
o Simplifies multi-vendor interoperation
o Simplifies management
Each server vHBA is pinned to an FC uplink in the same VSAN (round-robin selection)
Eliminates an FC domain on the UCS Fabric Interconnect
(Diagram: server vHBAs log in via FLOGI/FDISC; the FI's vFC ports proxy them as N_Ports to the upstream F_Ports, reaching the targets.)
FC switch mode (direct-attached targets):
o A light subset of FC switching features; the FI consumes a Domain ID
o UCSM 2.1 – in the absence of an upstream SAN, zoning for directly connected targets is done on the FIs
FCoE uplinks (FIs in NPV mode):
o FLOGI/FDISC are proxied to an upstream Nexus 7k/5k FCF (VF Port upstream, N_Port on the FI)
o Carried over converged FCoE links or dedicated FCoE links
o Can be used in scenarios where FC port licenses and cabling are an issue
IP storage attached to an “Appliance Port” (NFS, iSCSI, CIFS)
o Controller interfaces are active/standby for a given volume when attached to separate FIs
o Controller interfaces are active/active when each handles its own volumes
(Diagram: a NAS controller with Volume A and Volume B attached to appliance ports on FI-A and FI-B; the server's vNICs and FC interfaces reach the volumes through vEths on the fabric.)