Check Point Private Cloud Security for Cisco ACI Infrastructure
TABLE OF CONTENTS
Introduction
Cisco ACI Overview
Terminology and Definitions
Deployment Modes
Cisco ACI environment
Check Point Integration with Cisco ACI
Check Point and Cisco ACI Integration Benefits
Single and Multi-Pod Overview
Integrating Check Point Firewalls to the ACI Infrastructure
Single Pod Security Design
Single Pod overview
Check Point Security Appliances for Single Pod
VSX Cluster Design for Single Pod Security Deployment
Traffic Flows in the Single Pod Architecture
Check Point Maestro for Single Pod
Check Point Maestro with VSX/VSLS for Single Pod
Security Appliances Fleet with Symmetric PBR Load Balancing design
Multi-Pod Security Design
Multi-Pod Security Design with dedicated Bridge Domains
Maestro design with Active/Standby MHOs per pod
VSX/VSLS design with the cluster per pod
Maestro plus VSLS Cluster design with MHO Cluster
Multi-Pod Security Architecture with stretched Bridge Domains
Multi-Site Security Design
Summary
Introduction
As companies are embarking on their application and data modernization programs and considering cloud and
infrastructure requirements, they will most likely opt for a hybrid cloud strategy, with application and data workloads
spread across both public and private clouds.
Hybrid deployments combine the public cloud's benefits of innovation, speed, consumption-based pricing, and hyperscale with private cloud benefits such as regulatory compliance, performance, data gravity, and recouping existing investments. Furthermore, hybrid deployments provide the same level of operation and management in both public and private cloud environments, e.g., unified management, flexibility, and agility.
Cisco Application Centric Infrastructure (ACI) is a mature SDN (Software-Defined Networking) technology that offers enterprises of all sizes "cloud-like" performance, availability, resilience, monitoring, and automation. Enterprises that want to build their own on-premises private clouds will find that Cisco ACI provides most, if not all, of the features they need to do so. The cloud-like features of Cisco ACI let customers take a fundamentally more secure approach to data and network security by moving to a security model independent of routing and network topology.
Check Point CloudGuard for Cisco ACI delivers industry-leading security management and enforcement tailored to protecting customer information assets. Security service insertion in modern, application-centric private and hybrid cloud networks is a sophisticated yet simple way to design, deploy, scale, and operate security in a complex environment.
Cisco APIC
Cisco APIC is the main architectural component of Cisco ACI. It automates and manages the Cisco
ACI fabric, enforces policies, and monitors health. Cisco APIC establishes, stores, and enforces
Cisco ACI application policies based on the application's network requirements. Cisco APIC also
provides policy authority and resolution mechanisms.
It is important to distinguish between two views when looking at the integration of Cisco ACI and Check Point Security Gateways: logical and infrastructure. The infrastructure view relates to all the physical components: switches, routers, etc. The logical view is used to set up communication between workloads within the switch fabric.
Policy models are based on promise theory, allowing declarative, scalable control of intelligent
objects. Promise theory relies on the underlying objects handling configuration state changes
initiated by the control system. This reduces the complexity of the controller and allows for greater
scalability.
Spine
Spines are special switches that form the backbone of ACI networks. All leaf switches must be
connected to spines, and spines handle leaf-to-leaf communication. Spine switches typically
contain a large number of high-bandwidth (40/100 GbE) aggregation ports. The ACI fabric relies
on these ports for bandwidth throughput.
Leafs
ACI's spine-leaf topology uses leaf switches to connect all endpoint devices, such as servers, routers, or firewalls, to the ACI fabric. Because all endpoints connect at the same layer, leaf switches can be classified by the roles of the endpoints attached to them:
[1] Source: Cisco Data Center Spine-and-Leaf Architecture: Design Overview White Paper - URL: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white-paper-c11-737022.html
● Transit Leaf Switches - Used to connect to the spines in other data centers. They exist only in stretched fabric topologies.
● Management Leaf Switches - Used to connect to the OOB Network for the Operations
and Management Services for the infrastructure.
Deployment Modes
Traditional Cisco data center designs typically referred to three tiers: core, aggregation, and access, while modern data center designs are typically based on a two-tier spine-leaf architecture. The newer approach is better optimized for east-west traffic flows, which predominate in modern applications, and is built around the following design patterns.
Single Pod
An APIC pod is a set of interconnected leaf and spine switches (an ACI fabric) under the control of an APIC cluster.
Multi-Pod
Multi-Pod [2] is an architectural design in which multiple ACI pods are under the control of a single management or administration domain.
[2] Source: Cisco ACI Multi-Pod White Paper - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737855.html
[3] Source: Cisco ACI Multi-Pod White Paper, Inter-Pod Connectivity Deployment Considerations - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-737855.html#InterPodConnectivityDeploymentConsiderations
Multi-Site
The Multi-Site [4] architecture is the interconnection of APIC cluster domains with their associated pods. Multi-Site designs may also be called Multi-Fabric designs, since they interconnect separate availability zones (ACI fabrics) deployed either as single pods or multiple pods (Multi-Pod design). In a Multi-Site topology, each fabric can be considered a separate availability zone. These availability zones are managed cohesively by the Multi-Site Orchestrator. The nature of the architecture ensures that whatever happens to one site (availability zone) in terms of network-level failures and configuration mistakes will not impact the other sites or availability zones. This guarantees business continuity at the highest level.

[4] Source: Cisco ACI Multi-Site White Paper - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html
[5] Source: Cisco ACI Multi-Site White Paper, Intersite Network (ISN) deployment considerations - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739609.html#IntersiteNetworkISNdeploymentconsiderations
There is a misconception that Multi-Site somehow supersedes Multi-Pod or that the Multi-Pod
architecture is no longer relevant. In reality, they are two separate technologies applicable to
different use cases.
Furthermore, there is no valid reason for Multi-Pod and Multi-Site topologies not to work together. For example, multiple data centers deployed all over the world can each have their own ACI Multi-Pod fabric, tied together through the Multi-Site Orchestrator. These two architectures are built to work harmoniously, so you are no longer faced with an either/or decision and ultimately have a high degree of deployment flexibility.
Note: VRFs are also known as contexts; each VRF can be associated with multiple bridge
domains.
Tenants mainly serve as logical separators for customers, business units, groups, or similar entities. They can be used for traffic separation, visibility boundaries, or administrative separation. For example, private networks that are intended for use by multiple tenants and are not created in the common tenant require explicit configuration to be shared.
[6] Source: Operating Cisco Application Centric Infrastructure, Tenants - URL: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/Operating_ACI/guide/b_Cisco_Operating_ACI/b_Cisco_Operating_ACI_chapter_0111.html
VRF/Private Network
The Virtual Routing and Forwarding (VRF) object, or context, represents a tenant network (a
private network in the APIC GUI). A tenant can have multiple VRFs. A VRF is a unique Layer 3
forwarding and application policy domain.
It is used to define a unique Layer 3 forwarding domain within the fabric. One or more VRFs, also known as 'private networks', can be created inside a tenant; they can be viewed as the equivalent of VRFs in the traditional networking world. Each context defines a separate Layer 3 domain, which means IP addresses within a context can overlap with addresses in other contexts.
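The overlap property can be illustrated with a toy lookup table keyed on (VRF, prefix) rather than on the prefix alone. The names and table structure below are purely illustrative, not how ACI actually stores forwarding state.

```python
# Minimal illustration of why per-VRF Layer 3 domains allow overlapping IPs:
# the fabric effectively keys forwarding state on (VRF, prefix), not on the
# prefix alone. VRF names and next-hop labels below are made up.

routes = {}

def install_route(vrf: str, prefix: str, next_hop: str) -> None:
    routes[(vrf, prefix)] = next_hop

# The same 10.0.0.0/24 exists in two tenants' VRFs without conflict.
install_route("tenant-a:prod", "10.0.0.0/24", "leaf101")
install_route("tenant-b:prod", "10.0.0.0/24", "leaf203")
```

Because the VRF is part of the key, both entries coexist and resolve independently.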
[7] Source: Cisco Application Centric Infrastructure Fundamentals, Bridge Domains and Subnets - URL: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals/b_ACI-Fundamentals_chapter_010001.html#concept_8FDD3C7A35284B2E809136922D3EA02B
a bridge domain. This gateway will typically be used by hosts associated with a bridge domain as
their next-hop gateway. A bridge domain's gateways are available on all leaf switches where the
bridge domain is active.
A Virtual Machine Manager (VMM) [9] domain profile specifies the policies for connecting virtual machine controllers to the ACI fabric. The VMM domain policy is created in APIC and pushed to the leaf switches. VMM domains contain VM controllers, such as VMware vCenter, and the credentials required for the ACI API to interact with the VM controller. A VMM domain enables VM mobility within the domain but not across different domains. A single VMM domain can contain multiple instances of VM controllers, but they must be of the same kind.
[8] Source: Cisco APIC Layer 2 Networking Configuration Guide, Release 3.x and Earlier - Networking Domains [Cisco Application Policy Infrastructure Controller (APIC)] - Cisco
[9] Source: Configure VMM Domain Integration with ACI and UCS B Series, Create the VMM Domain - URL: https://www.cisco.com/c/en/us/support/docs/cloud-systems-management/application-policy-infrastructure-controller-apic/118965-config-vmm-aci-ucs-00.html#anc5
Figure 11: EPG mapping with traditional VLAN approach, Source: Cisco Systems
EPGs act as a container for collections of applications, or application components and tiers, which
can be used to apply forwarding and policy logic. They allow the separation of network policy,
security, and forwarding from addressing and instead apply it to logical application boundaries.
EPGs are designed to abstract the instantiation of network policy and forwarding from basic network constructs (VLANs and subnets). This allows applications to be deployed on the network in a model consistent with their development and intent. Endpoints assigned to an EPG can be
defined in several ways. Endpoints can be defined by virtual port, physical port, IP address, DNS
name, and in the future through identification methods such as IP address plus Layer 4 port and
others.
There is no single prescribed way to deploy and use EPGs; the rest of this document covers some typical EPG uses.
Figure 12: EPG grouping devices and Endpoints that share a common policy
MicroEPG (uEPG)
Micro-segmentation [10] is the method of creating zones in data centers and cloud environments to isolate workloads from one another and secure them individually. By default, endpoints inside the same EPG can communicate freely without any restrictions. A Micro EPG (uEPG) is equivalent to a regular EPG for all intents and purposes (e.g., for Service Graphs and PBR), but its classification is based on endpoint attributes and is dynamic in nature. This gives an organization the capability to filter on those attributes and apply more dynamic policies and traffic inspection through Service Graphs, using Check Point firewalls to apply policies to any endpoints within the tenant.
[10] Source: Cisco ACI Virtualization Guide, Release 3.0(1) - Microsegmentation with Cisco ACI [Cisco Application Policy Infrastructure Controller (APIC)] - Cisco
● For endpoints on Physical Domains (bare metal), you can use IP or MAC addresses
● For endpoints on VMware or Microsoft VMM Domains, you can use IP, MAC addresses, or VM attributes
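The attribute-based classification above can be sketched as a small matching function. The attribute names (guest_os, tier) and the first-match rule order are assumptions for illustration; the real APIC matching criteria and precedence rules are richer.

```python
# Sketch of attribute-based uEPG classification: an endpoint joins a uEPG
# because its attributes match, not because of its port or VLAN binding.
# Attribute names and the first-match ordering here are illustrative only.

def classify(endpoint: dict, uepg_rules: dict):
    """Return the first uEPG whose attribute rules all match the endpoint."""
    for uepg, rules in uepg_rules.items():
        if all(endpoint.get(k) == v for k, v in rules.items()):
            return uepg
    return None  # stays in its base EPG

rules = {
    "uEPG-web-windows": {"guest_os": "Windows", "tier": "web"},
    "uEPG-web-linux": {"guest_os": "Linux", "tier": "web"},
}
ep = {"vm_name": "web-07", "guest_os": "Linux", "tier": "web"}
```

With dynamic classification like this, a newly deployed VM is picked up by the matching uEPG, and any Service Graph attached to that uEPG applies immediately.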
Figure 13: Example of Micro-segmentation with Cisco ACI and Check Point
The direct relationship between the bridge domain and an EPG prevents an EPG from spanning more than one Bridge Domain. This limitation can be overcome with the newer ESG construct, which allows grouping endpoints from multiple BDs/EPGs, although still within a single VRF.
The Endpoint Security Group (ESG) [11] enables organizations to move toward an Application-Centric model without spending a lot of time preparing for a migration from a Network-Centric to an Application-Centric model.
[11] Source: Cisco APIC Security Configuration Guide, Release 5.2(x)
Figure 14: Example of a practical use case of ESG with Check Point Firewalls
Application Profile
An application profile [12] defines the policies, services, and relationships between endpoint groups (EPGs). Application profiles can contain one or more EPGs. Modern applications typically contain multiple components. For example, an e-commerce application could require a web server, a database server, data located in a storage area network, and access to outside resources that enable financial transactions.
The application profile includes as many (or as few) EPGs as necessary to provide the functionality of an application.
[12] Source: Cisco Application Centric Infrastructure Fundamentals, Application Profile - URL: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals/b_ACI-Fundamentals_chapter_010001.html#concept_6914B5520ECA4731962F30F93E5A77A6
Figure 15: Application Profile and its interaction with other layers
Service Contract
A service contract [13] within Cisco ACI defines how EPGs can communicate with each other, covering both ingress and egress traffic flows. Contracts are based on an allow list: without a permit contract, traffic between different EPGs is denied by default. A contract consists of subjects, each made up of filters, actions, and labels; a contract can have many subjects.
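The allow-list behavior can be modeled in a few lines of Python. The object shapes below (consumer, provider, subjects, filters) are deliberately simplified stand-ins for the real APIC contract model, used only to show the default-deny logic.

```python
# Toy model of ACI's allow-list contract semantics: traffic between two EPGs
# is denied unless a consumer->provider contract with a matching filter
# explicitly permits it. Object shapes are simplified for illustration.

contracts = [
    {"consumer": "EPG-web", "provider": "EPG-app",
     "subjects": [{"filters": [("tcp", 8080)], "action": "permit"}]},
]

def permitted(src_epg, dst_epg, proto, port):
    for c in contracts:
        if c["consumer"] == src_epg and c["provider"] == dst_epg:
            for subj in c["subjects"]:
                if (proto, port) in subj["filters"] and subj["action"] == "permit":
                    return True
    return False  # no matching contract: default deny
```

Here web-to-app traffic on tcp/8080 is permitted, while any other EPG pair or port falls through to the default deny.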
Service Graph
By using Cisco ACI's service graph, traffic between different security zones within the fabric can be redirected to a firewall or load balancer, eliminating the need to configure the firewall or load balancer as the default gateway for servers. Furthermore, Cisco ACI can selectively send traffic to L4-L7 devices (for example, a Check Point firewall).
[13] Source: Cisco ACI Contract Guide, How contracts work - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-743951.html#Howcontractswork
Firewall inspection can also be transparently inserted into a Layer 2 domain with almost no modification to existing routing and switching configurations. Moreover, Cisco ACI can increase the capacity of L4-L7 devices by creating a pool of devices across which it distributes traffic using the symmetric PBR mechanism.
With the service graph, Cisco ACI introduces changes in the operating model. A configuration can now include not only network connectivity (VLANs, IP addresses, and so on) but also the configuration of Access Control Lists (ACLs), load-balancing rules, etc.
Figure 17: Examples of Service Graphs for North-South and East-West traffic flows
Figure 18: Check Point Integration with Cisco ACI and Maestro
Check Point CloudGuard for Cisco ACI has two main components:
[14] Source: CloudGuard for ACI R80.10 Administration Guide - URL: https://sc1.checkpoint.com/documents/R80.10/WebAdminGuides/EN/CP_R80.10_vSEC_for_ACI_AdminGuide/html_frameset.htm?topic=documents/R80.10/WebAdminGuides/EN/CP_R80.10_vSEC_for_ACI_AdminGuide/171241
visibility into Data Center security. The CloudGuard Controller can be used to
generate security policies for installations on any Check Point Security Gateway
across the network.
The license covers Management High Availability for the Security Management Server and the Multi-Domain Server. All processes not associated with ACI integration must have a separate license, for example, licenses that enable typical management and/or gateway functions or capabilities. The license is perpetual and cumulative, which means it is always possible to add more leaf licenses.
from using a single abstraction view of objects and policies within the datacenter. Furthermore, the CloudGuard Controller supports integration with Azure, AWS, Alibaba, VMware vCenter, VMware NSX-T, Kubernetes, and others in the same way, making unified multi-cloud policies possible.
Comparison of Cisco ACI and Check Point Software Technologies constructs used in the policy:
● EPG Consumer → Source/From
● EPG Provider → Destination/To
● Filters → Ports/Applications/Signatures
● Actions → Action (Allow, Deny, Drop)
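The mapping above can be expressed as a small translation function. The field names below are illustrative stand-ins, not the actual Check Point or APIC object schema.

```python
# Sketch of the construct mapping listed above: an ACI contract rendered as
# a Check Point-style rule. Field names are illustrative, not product schema.

def contract_to_rule(consumer_epg, provider_epg, filters, action):
    return {
        "source": consumer_epg,        # EPG Consumer -> Source/From
        "destination": provider_epg,   # EPG Provider -> Destination/To
        "services": filters,           # Filters -> Ports/Applications/Signatures
        "action": action,              # Actions -> Allow/Deny/Drop
    }

rule = contract_to_rule("EPG-web", "EPG-db", ["tcp/3306"], "Allow")
```

The point is that each ACI construct has a natural counterpart in the security policy, so the two models can be kept aligned mechanically.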
When the Application Profile is built using a Service Graph, we can import EPG objects through
the Datacenter configuration in the Check Point Management Console.
Figure 20: Mapping Cisco APIC Service Contract with Check Point Security Policies
ACI and Check Point Gateways physically start interacting only after their logical configurations
are completed.
Therefore, it is vital to consider both perspectives in order to create both Network Policies and
Security Policies.
Figure 21: Infrastructure view for the traffic flow between EPG's and Service Bridge
Below is an example of a Security Policy that is mapped from the Logical Perspective above.
Figure 22: Check Point Security policy example - EPG objects propagated from APIC to the CloudGuard Controller
All ACI objects (EPGs, Application Profiles, Tenants, and Application EPGs) used in the unified security policy are updated automatically in near real time via the API by the CloudGuard Controller, without any manual modification of EPG objects in the Check Point policy.
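Conceptually, the update loop looks like the sketch below: the policy references an EPG by name, and its membership is refreshed from the fabric rather than edited by hand. fetch_epg_members() is a hypothetical stand-in for the real CloudGuard Controller/APIC exchange.

```python
# Conceptual sketch of automatic object updates: security policy objects
# reference EPGs by name; membership is refreshed from the fabric on each
# poll. fetch_epg_members() stands in for the real API exchange.

policy_objects = {"EPG-web": set()}

def sync(fetch_epg_members):
    for epg in policy_objects:
        policy_objects[epg] = set(fetch_epg_members(epg))

# First poll returns two endpoints; a later poll picks up a newly attached
# endpoint with no change to the security policy itself.
sync(lambda epg: ["10.1.1.10", "10.1.1.11"])
sync(lambda epg: ["10.1.1.10", "10.1.1.11", "10.1.1.12"])
```

The rule base stays constant while the object's effective membership tracks the fabric, which is what makes the policy dynamic.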
Whenever an organization converts its data centers to a hybrid model, it must be able to manage a dramatic increase in lateral network traffic between applications, both dynamically and automatically. Check Point CloudGuard Network Security for Cisco ACI provides these capabilities, delivering comprehensive and dynamic security specifically architected for Cisco ACI-enabled data centers.
[15] Source: Check Point Virtual Systems - URL: https://www.checkpoint.com/downloads/products/virtual-systems-datasheet.pdf
In the following sections, three separate Check Point designs are presented to illustrate Check
Point's integration with Cisco ACI.
Figure 25: Single Pod Architecture - Check Point and Cisco Systems integration.
Multi-Pod Design
The ACI fabric can scale out in a Multi-Pod topology while still being managed by a single APIC cluster. This design requires a dedicated network between the pods: the Inter-Pod Network (IPN). The IPN is essentially a network external to the fabric, and for this reason it is not managed by the APIC.
Figure 26: Multi Pod Architecture - Check Point and Cisco Systems integration.
Multi-Site Design
This design consists of multiple separate fabrics, each with its own APIC cluster, and an orchestrator that runs on top. Separate fabrics in this deployment can be connected together using L3Out connectivity. The Multi-Site Orchestrator (MSO) running on top enables administrators to configure connectivity between sites and to stretch tenants across them (stretched bridge domains and EPGs). This kind of architecture can be useful in multi-cloud environments to stretch tenants between the private cloud and public clouds (Azure, Amazon, Google).
Figure 27: Multi-Site Architecture - Check Point and Cisco Systems integration.
There are other topologies that Cisco ACI supports but are not within the scope of this whitepaper:
● Remote Leaf
● vPod
● cPod
● Cloud Only (Azure/Amazon/Google)
In the case of PBR, Cisco ACI allows integration with Check Point firewalls through service graph deployment based on Layer 4 to Layer 7 Policy-Based Redirect, where specific traffic can be redirected for firewall inspection based on application requirements instead of the network path.
Figure 28: Check Point Service provisioning for L3 Go-To (Routed Mode)
Check Point L3 Go-To (Routed Mode) security devices can be deployed in the ACI fabric via a service graph. A virtual system instantiated from the L4-L7 routed (Go-To) device is used as the default gateway of the endpoints (EPs) contained in the Bridge Domain connected to its connector. With this deployment, the firewall can provide all Check Point capabilities, including HTTPS inspection, NAT, IPsec VPN, etc., for the endpoints connected via the associated bridge domain. However, in this scenario the firewall must be configured as the default gateway of the application EPs.
Please note: configuring the device as the default gateway for the EPs requires disabling the Unicast Routing capability on the bridge domain (so that the BD does not provide routing services for the EPs). Disabling unicast routing on a Bridge Domain (BD) introduces a design constraint, as it prevents the fabric from learning EP IP addresses.
Figure 29: Check Point Service provisioning, before and after insertion
Switch Fabric can send traffic to Check Point Firewalls selectively, for instance, only on specific
Layer 4 ports. The firewall inspection can also be transparently inserted into a Layer 2 domain
(Security Service Bridge Domain) without requiring any adjustments to switching or routing
configuration.
Figure 30: Check Point Service provisioning, traffic flows configured in the Service Graph
[16] Source: Cisco APIC Layer 4 to Layer 7 Services Deployment Guide, About Policy-Based Redirect - URL: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/L4-L7_Services_Deployment/guide/b_L4L7_Deploy_ver211/b_L4L7_Deploy_ver211_chapter_01001.html
ACI contracts define which traffic is permitted, without any inspection by the gateway. Whenever traffic is permitted by a contract, the administrator can apply PBR based on the service-graph template (SGT) and direct it to a specific service node for inspection. Once the traffic is classified at the ingress leaf and the PBR service node is identified, it is switched to the PBR service node rather than going directly to the destination leaf. After processing on the service node, the flow is passed back to ACI, which then switches it to the destination leaf switch.
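With symmetric PBR, per-flow node selection can be sketched as a hash over a direction-independent key, so both directions of a connection pick the same firewall. This is a conceptual model only; ACI's actual symmetric PBR hashing is performed in leaf hardware over the packet's IP (and optionally L4) fields.

```python
# Toy model of symmetric PBR node selection: hashing a direction-independent
# key derived from the 5-tuple guarantees that the forward and return legs
# of a connection land on the same service node (firewall).

import hashlib

def pick_node(src_ip, dst_ip, proto, sport, dport, nodes):
    # Sorting the two endpoint strings makes the key identical for
    # both directions of the same connection.
    key = "|".join(sorted([f"{src_ip}:{sport}", f"{dst_ip}:{dport}"])) + proto
    digest = hashlib.sha256(key.encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

nodes = ["fw1", "fw2", "fw3"]
fwd = pick_node("10.1.1.10", "10.2.2.20", "tcp", 40000, 443, nodes)
rev = pick_node("10.2.2.20", "10.1.1.10", "tcp", 443, 40000, nodes)
```

Because the key is symmetric, a stateful firewall sees both directions of every flow it is assigned, which is the property that makes a pool of independent gateways viable.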
The CloudGuard Gateways are deployed on their own bridge domain. The same Check Point gateway can be configured to secure datacenter-internal networks (east-west traffic) as well as network-perimeter (north-south) traffic flows. Furthermore, Check Point gateways can be provisioned with advanced PBR configurations to provide an extra level of resilience with active-backup PBR, and additional capacity with symmetric PBR. The following diagram shows an example of symmetric PBR provisioning to provide these advanced capabilities.
To provide high availability and an additional level of resiliency in this environment, Check Point firewall clusters can be deployed in a High Availability configuration.
This architecture design has the Check Point Security Gateway cluster deployed in a one-arm configuration, the simplest network topology, connected to service leaf switches to secure east-west traffic within a tenant. A different firewall cluster would be connected to border leaf switches to provide north-south traffic inspection.
The most important part of the overlay configuration for leaf and spine switches concerns the logical perspective on segmentation and service separation based on risk or criticality. Examples of the various types of tenants include:
A Check Point Smart-1 appliance typically provides the management layer for a VSX deployment, where all virtual systems can be easily administered through a single interface.
Each Virtual System acts as a fully functional Security Gateway, typically protecting specified network segments. To reduce overall complexity and contain traffic locally, it is recommended to use internal VSX resources such as virtual switches or routers.
Figure 36: North-South and East-West Firewalls for Single Pod Architecture
Location-based PBR can be utilized to inspect incoming traffic from the border leaf on Check Point gateways in one-arm mode. This way, security services can be inserted without changing routing or redesigning the network topology within the datacenter.
Concerns regarding network segmentation within the datacenter can be addressed by Cisco ACI's
ability to create "virtual" network segments where traffic forwarding can only occur if specifically
permitted by the Service Contracts. Service Graph with traffic diversion to Check Point Gateways
will provide more advanced and deep traffic inspection capabilities than what can be done within
native Cisco ACI.
The following diagrams will demonstrate how different traffic flows can be managed by PBR.
Figure 37: North-South Traffic Flow from L3Out to Web Server through Service Contract and protected by Service
Graph (Firewall Service Insertion)
The following diagram shows how traffic flows between workloads connected to different leaf switches in the fabric.
Figure 38: East-West Traffic Flow between EPG Web to EPG App through Service Contract and inspected by Firewall
using Service Insertion
The following diagram shows how N-S and E-W traffic inspection can be combined to protect a full web application and the application server behind it. Two different security gateways are suggested in this case. Furthermore, symmetric PBR can be utilized to scale out the number of security gateways and increase overall throughput.
Figure 39: North-South and East-West traffic flows between L3Out, Web EPG Web and App EPG via the Service
Contracts and protected using Service Graph (Firewall Insertion for different services)
The diagram above demonstrates how various security groups can be utilized in order to process
different traffic flows.
Figure 41: Check Point Maestro Topology with Multiple SGs using regular (left) and VSLS (right) systems deployment
In this architecture deployment, multiple security groups are provisioned within the Maestro fabric. Each of them is dedicated to protecting a specific part of the traffic flow within the Cisco ACI fabric. One security group is connected to the service leaf and responsible for intra-tenant security (east-west traffic). Another SG can be responsible for similar east-west traffic security in another tenant. A different SG could be responsible for the north-south traffic flowing via the border leaf. One more security group can be responsible for security in the common tenant.
Figure 42: Hyperscale capabilities with a Fleet of Appliances and Symmetric PBR
While symmetric PBR is useful to accommodate large deployments, it has a limitation: in the event of a firewall failure, existing connections will not survive the failover and will need to be re-established, because traffic load sharing is managed outside of the firewall's control plane.
The same APIC cluster can manage multiple Pods, and to increase resiliency, the controller nodes that make up the cluster can be deployed across different Pods. The Check Point and ACI integration allows one firewall management server to be fully integrated with multiple Pods or multiple ACI fabrics; in both cases the management platforms, communicating with each other via API, have full visibility of the environment.
For example, in the following Multi-Pod scenario there are two Cisco ACI Pods interconnected via the IPN (Inter-Pod Network). As with a single-Pod design, multiple tenants can be accommodated in this environment; however, there are significant variations in how the HA model can be implemented.
To prevent potential complications with returning traffic or asymmetric routing, firewall deployment in a multi-Pod environment requires special considerations concerning firewall HA state configuration and connection synchronization.
In an Active-Standby firewall cluster deployment stretched across two Pods, traffic always flows via the Active cluster member, so the overall design and traffic flow are relatively straightforward.
However, the IPN can be overused if the source and the destination of a connection are both located in the Pod where the firewall member is not Active. In this case, the IPN carries both request and reply traffic between the two Pods, which can result in additional latency and IPN bandwidth saturation.
Figure 44: Network services deployment [17]: Active-Standby firewall pair stretched across Pods - Source: Cisco Systems
An Active-Active stretched cluster deployment provides an active firewall member in each Pod. Traffic traversing the firewall stays within the local Pod as long as both source and destination belong to that Pod. The firewall cluster members process traffic independently while synchronizing connections across the cluster.
Please note that traffic is not load-balanced between the cluster members, and in some scenarios hairpin traffic forwarding would still occur. Additional IPN capacity planning is required to avoid capacity issues in a failover scenario.
[17] Cisco ACI Multi-Pod and Service Node Integration White Paper, ACI service integration - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739571.html#CiscoACIserviceintegration
Figure 45: Network services deployment [18]: Active-Active firewall pair stretched across Pods - Source: Cisco Systems
The last scenario is an Active-Standby cluster in each Pod. This design pattern provides an extra level of resiliency, but it requires additional routing and NAT considerations to avoid problems caused by asymmetric routing and hairpin traffic flows.
Figure 46: Network services deployment [19]: Active-Standby firewalls per Pod - Source: Cisco Systems
As highlighted above, there are several ways firewalls in High Availability can be deployed in a multi-Pod environment. The final decision should be based on the specific resiliency requirements within each Pod, as well as the additional latency and bandwidth utilization of the IPN link.
[18] Cisco ACI Multi-Pod and Service Node Integration White Paper, ACI service integration - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739571.html#CiscoACIserviceintegration
[19] Cisco ACI Multi-Pod and Service Node Integration White Paper, ACI service integration - URL: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739571.html#CiscoACIserviceintegration
The recommendation is to use the firewall cluster closest to the source and local to the Pod; however, this is not mandatory.
Which firewall is used in the service graph is predetermined by the PBR policy configuration in the Device Selection Policies, under the L4-L7 Policy-Based Redirect option. The returning traffic is automatically forwarded by ACI, using PBR, to the same firewall cluster.
This scenario also addresses the challenge of two Pods deployed in different geo-locations, where it would be impossible to connect Security Gateways (SGs) with DAC cables to each Maestro Orchestrator (MHO). Only the Security Groups located in the same Pod are connected to the MHO representing that Pod. The MHOs in different Pods stay in sync, typically via dual fiber-optic links, to provide connection synchronization within the Maestro cluster.
VSLS is provisioned on top of the security groups, with the MHO being Active in only one of the Pods. Traffic is directed by the service graph to a certain Virtual System (VS) using SPBR, in order to preserve proximity and avoid hairpin issues. A secondary (backup) firewall module for a particular VS is used in case of a failover.
For East-West traffic, the best-case scenario is when the closest firewall (Cluster A) is chosen to process the traffic, as shown below:
Figure 52: E-W Traffic inside the Pod with SPBR in the best case scenario
However, due to the way the SPBR selection mechanism works, the worst-case scenario is when the nearest firewall is not chosen to handle the connection, creating an alternative traffic path via the IPN with a hairpin flow (via Cluster B in a different Pod).
Figure 53: E-W Traffic between Pods with SPBR in the worst case scenario (hairpin)
Figure 54: N-S Traffic per each Pod with location-based PBR
However, LPBR is not supported for East-West traffic between the Pods, because the return traffic from the destination Pod would be sent back to the nearest (Pod-local) firewall, which would drop the connection due to the lack of synchronized state.
In the example below, traffic flowing from BD2 to BD1 passes through both firewalls, located in Pod 1 and Pod 2.
Figure 57: Firewall deployment option for Multi-Pod stretched networks E-W communications
There are two deployment options for firewalls to be synced and in an Active-Active state:
1) Active-Active firewalls with different IP/MAC addresses, using the LPBR function to select the node
2) Active-Active firewalls with the same IP/MAC addresses, using the Cisco Anycast mechanism to select the node
Essentially both firewalls will complement each other from the traffic processing perspective.
Figure 60: Active-Active Firewalls for Multi-Pod traffic with Local PBR - Topology
In this scenario, we have an Active-Active firewall deployment with R80.40 [20] or R81 [21]; the cluster members have different IP addresses (and can even belong to different subnets) and a Sync interface between them. As a best practice, the sync link should be configured outside of the switch fabric. This scenario applies to appliances only; Maestro Security Groups and VSX VSLS are not supported.
Please note: LPBR should have IP SLA tracking configured to allow failover between the different Security Gateways.
Cisco ACI IP SLA [22] tracking collects information about network performance in real time by tracking an IP address using ICMP and TCP probes. The tracking results can influence the configured PBR entries: a next hop is removed when a tracking result comes in negative, and is returned to the table when the results become positive again.
[20] Active-Active Mode, ClusterXL R80.40 Administration Guide - URL: https://sc1.checkpoint.com/documents/R80.40/WebAdminGuides/EN/CP_R80.40_ClusterXL_AdminGuide/Topics-CXLG/Active-Active-Mode.htm
[21] Active-Active Mode, ClusterXL R81 Administration Guide - URL: https://sc1.checkpoint.com/documents/R81/WebAdminGuides/EN/CP_R81_ClusterXL_AdminGuide/Topics-CXLG/Active-Active-Mode.htm
[22] IP SLA Tracking, Cisco APIC Layer 3 Networking Configuration Guide, Release 5.0(x) - IP SLAs [Support] - Cisco
ACI IP SLAs are available for Policy-Based Redirect (PBR) tracking:
- Automatically remove or add a next hop
- Track the next-hop IP address using ICMP and TCP probes (in this case, the Check Point FWD port, TCP 256)
- Redirect traffic to the PBR node based on the reachability of the next hop
With Cisco ACI, the IP SLA monitoring policy for Check Point Security Gateways is associated with:
- Service Redirect Policies: all the destinations under a service redirect policy are monitored based on the configuration and parameters set in the monitoring policy.
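The tracking behavior above can be sketched in a few lines. This is an illustrative model only: the real probes are generated by the ACI leaf switches, not by a script; the use of TCP port 256 (the Check Point FWD service) comes from the text above, and the helper names are hypothetical.

```python
import socket

def next_hop_alive(ip: str, port: int = 256, timeout: float = 2.0) -> bool:
    """TCP-probe a PBR next hop, e.g. a Check Point gateway's FWD port.

    Returns True when the TCP handshake completes, mirroring how an
    ACI IP SLA TCP probe judges next-hop reachability.
    """
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def prune_next_hops(next_hops: list[str]) -> list[str]:
    """Keep only reachable next hops, as PBR tracking removes dead ones."""
    return [ip for ip in next_hops if next_hop_alive(ip)]
```

In a script this would run on a timer; in Cisco ACI the equivalent is configured declaratively in the IP SLA monitoring policy attached to the Service Redirect Policy.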
The visual flow representation shows how Internet traffic ingresses and is inspected:
A. Internet traffic ingresses at the routers in the shared L3Out and is forwarded according to the workload distribution (Pod 1 or Pod 2).
B. The Service Contract defines that the External EPG can access the EPG Web located in Pod 1 or Pod 2, allowing the traffic to be forwarded. Note that the Service Graph redirects the traffic, using LPBR, to the relevant Check Point gateway located in Pod 1 or Pod 2.
C. Once the traffic is redirected, processed, and inspected, it is forwarded to the Spine. According to the workload distribution, the traffic is forwarded to Pod 1 or Pod 2.
D. The Spines "know" that EPG Web is located on one of the compute leafs in Pod 1 or Pod 2, and the traffic is forwarded there.
E. Traffic is delivered to its final destination in the EPG Web.
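The steps above can be sketched as a toy model. All names, addresses, and the Pod-selection rule are illustrative; the real forwarding decisions are made by the ACI fabric, not by application code.

```python
# Toy model of the N-S flow above: the L3Out picks a Pod based on where
# the destination workload lives, LPBR picks the Pod-local Check Point
# gateway, and the spine/leaf deliver the packet to the EPG Web endpoint.
WORKLOADS = {            # hypothetical EPG Web endpoint -> hosting Pod
    "10.1.0.10": "Pod-1",
    "10.1.0.20": "Pod-2",
}
LOCAL_GATEWAY = {        # LPBR: Pod -> Pod-local firewall
    "Pod-1": "CP-GW-Pod1",
    "Pod-2": "CP-GW-Pod2",
}

def north_south_path(dst_ip: str) -> list[str]:
    """Return the hops an ingress packet takes toward an EPG Web endpoint."""
    pod = WORKLOADS[dst_ip]                  # steps A/B: workload distribution
    gw = LOCAL_GATEWAY[pod]                  # step B: LPBR redirect to local FW
    return ["L3Out", gw, f"Spine-{pod}", f"Leaf-{pod}", dst_ip]  # steps C-E

# A packet to a Pod-2 workload is inspected by Pod-2's own firewall:
assert north_south_path("10.1.0.20")[1] == "CP-GW-Pod2"
```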
EPGs hold the name, IP address, and MAC address of each workload connected to the switch fabric. This information is critical because it allows the traffic flow to be forwarded or redirected to the relevant transit components (Spines, Leafs, and Security Gateways). Another applicable use case follows similar principles but with a different configuration approach.
Figure 62: Visual representation of N-S traffic flows between stretched Bridge domains with Local PBR
Figure 63: Active-Active Firewalls for Multi-Pod traffic inspection – East-West Traffic for stretched bridge domains
Please note: this solution is not part of the GA release and could be delivered through the RFE process on demand.
In this scenario, a Maestro MHO (orchestrator) is deployed per Pod, with a synchronization interface between them. Each MHO has the required number of Security Gateways attached, and the Security Group is configured in Active mode on both MHOs/Pods.
Figure 64: Maestro deployment in stretched domains for Multi-Pod scenario with LPBR
The diagrams below demonstrate traffic flows via the second Pod, traversing different firewall modules for the initial and the returning flow:
Figure 66: Active-Active Firewall with Anycast support for E-W traffic flow
Please note: this solution is not part of the GA release and could be delivered through the RFE process on demand.
In this scenario, a Maestro MHO (orchestrator) is deployed per Pod, with a synchronization link across the Pods. Each MHO has a number of Security Gateways attached, with the necessary number of Security Groups provisioned. Similar to the previous design pattern, the SGs (firewalls) in each Pod are Active-Active and share the same IP and MAC addresses, as shown in the diagram below:
Figure 67: Maestro Active-Active for Multi-Pod traffic inspection with Anycast configuration
The Anycast mechanism selects the firewall closest to the source of the connection, and the returning traffic is not dropped because the connection tables are synchronized.
Here is a quick comparison of the main differences between Multi-Pod and Multi-Site, to help choose the best deployment option:

|  | Multi-Pod | Multi-Site |
|---|---|---|
| Availability Zones | Single | Multiple |
| Redundancy | Redundant nodes, interfaces, and devices within a fabric | Full site deployment with end-to-end policy definition and enforcement |
| Configuration Change | APIC cluster pushes configuration changes into the entire Multi-Pod fabric while preserving tenant isolation | Multi-Site can selectively push configuration changes to specified sites, enabling staging/validation while preserving tenant isolation |
| Interconnects | Typically uses a lower-latency IP network between Pods | Multi-Site can deploy policies in fabrics across continents |
| L4-L7 Services | Services stitching across Pods | Site-local L4-L7 services stitching |
In summary, this architecture allows organizations to interconnect separate Cisco ACI APIC cluster domains (fabrics), each representing a different region or data center, all as part of the same Cisco ACI Multi-Site domain. Doing so helps ensure multi-tenant Layer 2 and Layer 3 network connectivity across sites, and it also extends the policy domain end-to-end across the entire system.
Figure 68: Multi-Site Architecture integrating the Infrastructure, Management, and Logical views
Check Point integration with Cisco ACI is based on API integration and provisioning between the Check Point management server and the Cisco APIC (Application Policy Infrastructure Controller). In a Multi-Site deployment, each APIC therefore needs to be integrated in order to cover the whole ACI fabric network. Firewall gateway integration at each site is primarily driven by the considerations related to the Pod topologies covered in the sections above.
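The per-fabric integration pattern can be illustrated with a short sketch against the standard APIC REST API (aaaLogin authentication and an fvAEPg class query). The controller hostnames are placeholders, TLS handling and error recovery are elided, and the actual CloudGuard connector does far more than this; this only shows how one management server can gather EPG inventory from every fabric via API.

```python
import json
import urllib.request

# Placeholder APIC endpoints: in a Multi-Site deployment the security
# management server integrates with the APIC of every fabric.
APICS = ["https://apic-site1.example.com", "https://apic-site2.example.com"]

def login_body(user: str, pwd: str) -> bytes:
    """JSON payload for the standard APIC aaaLogin endpoint."""
    return json.dumps(
        {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}
    ).encode()

def parse_epgs(payload: dict) -> list[dict]:
    """Extract EPG attributes from an APIC class-query response
    (GET /api/node/class/fvAEPg.json returns an 'imdata' list)."""
    return [obj["fvAEPg"]["attributes"] for obj in payload.get("imdata", [])]

def apic_login(base: str, user: str, pwd: str) -> str:
    """Authenticate and return the APIC session token."""
    req = urllib.request.Request(f"{base}/api/aaaLogin.json",
                                 data=login_body(user, pwd))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["imdata"][0]["aaaLogin"]["attributes"]["token"]

def fetch_epgs(base: str, token: str) -> list[dict]:
    """Query all application EPGs (fvAEPg class) from one fabric."""
    req = urllib.request.Request(
        f"{base}/api/node/class/fvAEPg.json",
        headers={"Cookie": f"APIC-cookie={token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_epgs(json.load(resp))

def collect_all_epgs(user: str, pwd: str) -> dict[str, list[dict]]:
    """Aggregate EPG inventory across every APIC, giving the management
    server visibility of the whole Multi-Site environment."""
    return {base: fetch_epgs(base, apic_login(base, user, pwd))
            for base in APICS}
```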
Summary
Cisco ACI provides effective micro-segmentation for next generation datacenters through the
integration of physical and virtual environments under a common policy model for networks,
servers, storage and security. Cisco ACI's application-aware policy model and native security
capabilities are leveraged by Check Point CloudGuard to dynamically insert, deploy and
orchestrate advanced security protections within software-defined datacenters.
Together, Cisco and Check Point provide a powerful solution that gives customers complete traffic
visibility and reporting in addition to proactive protection from even the most advanced threats
within virtual network environments. The joint solution forms the foundation of an intelligent
application delivery network architecture where security seamlessly follows application workloads
and accelerates application deployment while maintaining reliability, multi-tenancy and
operational workflows.
5 Ha’Solelim Street, Tel Aviv 67897, Israel | Tel: 972-3-753-4555 | Fax: 972-3-624-1100 | Email: [email protected]
www.checkpoint.com