
Kubernetes: Micro-Segmentation for Kubernetes

Instantiated Ephemeral Workloads

Author: Kenneth Huss, [email protected]


Advisor: Domenica “Lee” Crognale

Accepted: July 15, 2024

Abstract

Defensive security professionals must commit focused energy and resources to
protecting Kubernetes instantiated ephemeral workloads rather than leaving that
responsibility solely in the hands of the company's development team. This paper
examines the reasons behind the emergence and popularity of microservice-based
application development, the risks these workloads pose to an organization's security
posture, and whether scalable micro-segmentation of container-based workloads can be a
practical part of a Zero Trust Architecture strategy.

Micro-segmentation can hinder lateral movement into data center assets outside
Kubernetes domains and slow attackers, allowing security teams to detect that movement.
The Kubernetes cluster is inside a development enclave, and workloads with external
cluster connectivity should not be allowed to form connections with the production
network assets. A slingshot workstation inside the production enclave will scan and test
connectivity between enclaves. Cisco Secure Workload will be used to segment and stop
the traffic flow between these enclaves. Nmap scanning tools will be used to observe the
efficacy of micro-segmentation.

1. Introduction
Enterprise infrastructure and security teams struggle with development-centric
quality assurance and preproduction workloads being used directly for production
delivery, in violation of documented and enforced security and change management
governance policies. This observed behavior was the catalyst for this research project.
Corporations are moving quickly to deploy feature-rich mobile apps and internal software
capabilities that can give them a differentiated advantage in the marketplace. Time-to-
market pressure pushes developers to embrace microservice software design
architectures, moving away from large and unruly monolithic application design. The
dramatic rise of distributed applications has contributed to the explosive growth of
container-based architectures like Kubernetes. While developers are motivated to
embrace secure development practices, many "Enterprise organizations are not allowing
or preparing their applications developers to implement security measures for the
applications they develop and deploy" (Warren, 2022). Companies cannot leave the
security of container-based ephemeral workloads solely in the hands of application and
mobile developers. IT and infosec teams need a scalable way of protecting corporate
assets from these workloads should they be compromised.

Kubernetes orchestrated distributed microservice applications are complex to set
up and manage. They are susceptible to common misconfigurations and exposure to
runtime threats, and it can be challenging to detect and remediate risky vulnerabilities
(RedHat, 2024). After surveying 600 DevOps, engineering, and security professionals
worldwide, RedHat highlighted that 93% of respondents experienced at least one security
incident in their Kubernetes (K8s) environments within the last 12 months (RedHat,
2022). It is safe to say that Kubernetes orchestration clusters with Docker and Containerd
runtimes have fast become a fixture of the enterprise data center landscape. These
developer-centric deployments support agile software release cycles, pressuring software
engineers to roll out working code relevant to the business units funding the work.
Engineers are “aware of the various current and emerging industry best practices and
techniques to securing applications such as zero-trust, and either was or would be
interested in striving to meet these best practice framework's recommendations while also

conforming to legal and regulatory compliance requirements” (Warren, 2022).


Employing segmentation is one of the anchor best practice recommendations in a zero-
trust design.

Various segmentation types could be used to protect Kubernetes clusters, and this
research will focus on a network-centric segmentation model. In the book Zero Trust
Architecture, Patrick Lloyd highlights that application segmentation is already in use with
container-based technologies; “containers, common to platforms such as docker, separate
application functionality and processing of data from the reliance upon a common set of
resources to function” (Green-Ortiz et al., 2023). The containerization of functional
pieces of an application can slow a threat actor at the cost of increasing the overall attack
surface. Adding a network-centric segmentation layer to these ephemeral workloads can
“prevent the spread, and limit the blast zone, or impacted area, of the network should one
client be compromised” (Green-Ortiz et al., 2023). Kubernetes environments are
especially vulnerable to runtime threats like unauthorized access, privilege escalation,
and lateral movement, which are among the top five security issues organizations face
(RedHat, 2024). Limiting east-west directional network access with micro-segmentation
can be essential to a layered zero-trust segmentation design. Cisco Secure Workload
(CSW) is a scalable approach to providing micro-segmentation to Kubernetes-
orchestrated workloads. It can provide visibility into all the pods in the cluster. CSW can
capture and baseline traffic flows into and out of workloads to simplify the design and
deployment of L3 and L4 segmentation.

1.1 Study Problem

Development-centric quality assurance (QA) workloads are exposed to patients or
customers as approved production applications in enterprise healthcare, insurance
providers, and retail financial corporations. Developers aim to create valuable and
revenue-generating tools to improve customer experience and engagement. They would
love to focus on security, but business timelines and budgets do not allow them to
prioritize secure development techniques (Warren, 2022). Exposing (QA) Kubernetes
development workloads to production network assets adds immense risk to an
organization. This workload could be missing proper production instrumentation for
management visibility and could be operated outside of the authorized change control
processes. Vulnerability scanning may not be performed, and there may be limited
patching and remediation compliance. If the corporation is subject to government
regulations, these workloads could expose the company to penalties and financial risk. Healthcare
providers could increase patient risk if an app in QA were actively used for clinical
purposes.

1.2 Research Question


As part of a Zero Trust Architecture, can micro-segmentation protect an organization
using Kubernetes instantiated ephemeral workloads?
Organizations cannot leave the security of container-based ephemeral workloads
solely in the hands of application and mobile developers. They need a scalable way to
add micro-segmentation to each container as its runtime is advertised to the network.
Protecting corporate users and assets from these workloads is essential should the
applications or runtimes comprising the containers be compromised. This research will
explore micro-segmentation with Cisco Secure Workload deployed inside Kubernetes as
an added layer of visibility and workload security.

1.3 Proposed Solution

East-West directional segmentation can protect data center assets at layer 2 or
layer 3 of the OSI model from containerized ephemeral workloads orchestrated within the
Kubernetes cluster. In the book Zero Trust Architecture, Patrick Lloyd highlights what is
driving the need for segmentation, “with modern attack vectors such as malware …
deeper models of segmentation are required to prevent the spread, and limit the ‘blast
zone,’ or impacted area, of the network should one client be compromised” (Green-Ortiz
et al., 2023). Patrick goes on to explain, "Malware has adopted the ability to seek out
open ports and protocols … and communicate to hosts via both IP addresses and MAC
addresses at Layers 3 and 2 of the OSI model” (Green-Ortiz et al., 2023). Micro-
segmentation protects data center assets from scanning and lateral movement should a
container in the cluster be compromised. Cisco Secure Workload has distributed firewall
capabilities with security policies that comply with governance and compliance
requirements. It is a scalable way to enforce policy, protecting data center assets outside
the Kubernetes cluster.

2. Research Method
2.1 Constraints
This research paper does not address the additional layers of security necessary for
protecting enterprise and cloud Kubernetes deployments. Instead, it concentrates on
network segmentation, particularly micro-segmentation, to manage traffic within and
between network segments and limit connectivity within the same segment. The paper
does not cover macro segmentation, deep application segmentation, or user segmentation.
In IT, segmentation involves dividing environments and solutions into smaller parts to
hinder an attacker's lateral movement within the infrastructure.

The Kubernetes cluster runs on a Windows 10 host employing VMware
Workstation Pro 17 and limited network assets. The test lab does not emulate cloud and
enterprise network design fundamentals.

Cisco Secure Workload SaaS was selected as a network segmentation control
point for this research because of its ability to scale to protect thousands of distributed
workloads outside of Kubernetes or OpenShift. It can be used within cloud or hybrid data
centers for any workload and provides robust security telemetry outside of the active
L3/L4 network segmentation capabilities. An employer demonstration program provided
research access to the Cisco Secure Workload SaaS tenant.

Security within Kubernetes orchestration infrastructure should not be limited to
only using network or service micro-segmentation. Many layers of additional security
hardening must be applied to protect these deployments properly.


2.2 Research Environment


Figure 1 is a visualization of the research architecture used for testing.

Figure 1: Kubernetes Research Environment


The core of the Kubernetes test lab is a Windows 10 Pro 64-bit host equipped
with an AMD Ryzen 9 3900X processor (12 cores/24 threads), 64 GB of RAM, and
multiple 2 TB NVMe drives. VMware Workstation Pro 17 is the type 2 hypervisor that
supports the five VMs that make up the test bed. The Windows 10 host is connected to
the local network via an L2 switch with an uplink to a Ubiquiti UniFi Dream Machine
Pro. The UDM-Pro
supports access to the local Wi-Fi networks and provides firewall services for the fiber
uplink connection to the internet.
Cisco Secure Workload is delivered globally from Cisco’s data centers; however,
it can be deployed and consumed directly from Amazon AWS, Microsoft Azure, and
Google GCP cloud networks. The CSW tenant runs software Version 3.9.1.1-patch 38
and operates on Cisco’s own dCloud™.

2.2.1 VMware Workstation Pro version 17


Four Ubuntu server 22.04 LTS virtual machines were built to support the
Kubernetes Cluster. A single CentOS 7 machine was built to host the Secure Connector
used by the Cisco Secure Workload SaaS tenant. This Secure Connector punches an
encrypted tunnel from the private test network out to the public cloud hosting the SaaS
CSW tenant. The network adapters for each virtual machine were set to Bridged
(Automatic) and provisioned with the IP addresses highlighted in Table 1.

Virtual Machine     CPU/Cores       RAM     Disk    IP Address

K8master            1 CPU/2 Cores   4 GB    40 GB   192.168.0.210
K8worker1           2 CPU/4 Cores   8 GB    40 GB   192.168.0.211
K8worker2           2 CPU/4 Cores   8 GB    40 GB   192.168.0.212
K8workstation       1 CPU/2 Cores   2 GB    20 GB   192.168.0.200
Secure Connector    1 CPU/2 Cores   4 GB    20 GB   192.168.0.231

Table 1: Virtual Machine Resources


2.2.2 Kubernetes Architecture Used


Kubernetes containers are as flexible for application virtualization as VMware
ESXi is for operating system virtualization. Kubernetes can be deployed on Windows
Server and a half dozen Linux distributions. Please see Appendix A, 10 Step Process to
Build a Kubernetes Cluster on Ubuntu Server 22.04 LTS, for detailed instructions on how
to build a Kubernetes test lab. The research labs use four Ubuntu VMs within the cluster,
including the master control node, two separate worker nodes, and an external
workstation operating the cluster via the API server. Figure 2 shows the nodes that make
up the research cluster.

Figure 2: Kubernetes Cluster Nodes

Docker and Containerd are the container runtimes used to support the creation
of all Kubernetes pods. Calico containers within the control node handle networking
within the cluster and facilitate connectivity outside the cluster. Kubectl is a command-
line interface tool that uses the Kubernetes API to manage cluster resources such as
pods, services, and nodes, as shown in Figure 3.

Figure 3: Kubernetes Management Pods


2.2.3 Applications Present in the Research Environment


The main application used for testing is a Docker-containerized Kali Linux
distribution operating inside a Kubernetes pod. Kubernetes uses YAML files to create
pods, services, and deployments running in the cluster. A Kali-deployment YAML file
was written in Visual Studio Code and applied to the cluster. The YAML configuration
specifies the number of replicas and the image running in the container. Figure 4 shows
the configuration details of the deployment.

Figure 4: Kali Deployment YAML File

Once the planned YAML files are created, they are activated from a file in
Kubernetes with a kubectl apply command:

kubectl apply -f /home/khuss/kali-deployment.yaml

After the file is applied, the pods will be built automatically by Kubernetes and run on the
worker node selected by the scheduler running within the control node.
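Figure 4 is not reproduced here, but a deployment manifest of this general shape could produce the qa-kali pod; this is an illustrative sketch only, and the image tag, labels, replica count, and command are assumptions rather than the exact file from the research lab:

```yaml
# kali-deployment.yaml - illustrative sketch of the deployment applied above.
# Image, labels, replica count, and command are assumed, not taken from Figure 4.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qa-kali
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qa-kali
  template:
    metadata:
      labels:
        app: qa-kali
    spec:
      containers:
        - name: qa-kali
          image: kalilinux/kali-rolling
          # Keep the container alive; the base image has no long-running entrypoint
          command: ["sleep", "infinity"]
```

Applying a file like this with kubectl apply yields pods whose names are derived from the Deployment name, which is why later sections connect to the pod by its generated name.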


2.2.4 Cisco Secure Workload SaaS Tenant


Cisco Secure Workload (CSW) comprises four different elements used to manage
Kubernetes when hosted on a private network. The solution utilizes a SaaS tenant, a
Kubernetes agent/control pod, a secure connector, and an external orchestrator. Please
refer to the consolidated build guide located in Appendix B to ease the lab rollout. Below
is a functional summary of the four components that comprise the Cisco Secure
Workload portion of this security lab.

SaaS Tenant - The heart of the Cisco Secure Workload solution. It is used to
manage and deploy distributed firewall capabilities, gather and analyze telemetry, and
create and enforce security policies.

Kubernetes agent/control pod – A software agent operating in the Kubernetes
cluster in a tetration namespace that can be provisioned to interact with hundreds or
thousands of pods/containers instantiated by the cluster.

Secure Connector – Used by Cisco Secure Workload to build a secure, encrypted
tunnel from the internal private network hosting the Kubernetes cluster out to the SaaS
tenant that manages segmentation for the workloads.

External Orchestrator – Collects essential metadata about workloads from network
systems and can also be used to enforce segmentation policies.

2.3 Test Methodology


The test experiment is straightforward once the complexities of the test
environment have been built out. Cisco Secure Workload will be configured to have two
different active segmentation workspaces: a development and a production enclave. The
Kubernetes cluster will be added to the development workspace, and a single Kali
workstation will be provisioned into the production enclave.

Two development containers will be provided for testing. The first is a web
application connected to a back-end MongoDB; the second will be a QA-kali workstation
with penetration and scanning tools. Cisco Secure Workload will have policies built to
block all production assets from communicating with any development workloads.


Testing will be done before and after enforcement is enabled to showcase the
effectiveness of traffic segmentation between the production and development enclaves.

3. Findings

3.1 Load Tools on the Test Systems


The qa-kali container running in development is a stripped-down Kali Linux
deployment and will need to load all the necessary low-level Linux packages to support
testing. Start by connecting to the qa-kali pod via the following kubectl command:
kubectl exec -it <pod-name> -- /bin/bash
It will put you into a root-level bash shell, as seen in Figure 5.

Figure 5: Access the qa-kali pod

Five separate low-level Linux packages will be required to scan and pull HTML web
pages and telnet or ping workloads from the command line. Update the package lists and
load each package with the following six commands:
apt update
apt install net-tools
apt install iputils-ping
apt install telnet
apt install curl
apt install nmap
This testbed also has the official SANS slingshot Linux distribution loaded as a virtual
machine in the production enclave. The production network simulates the east-west data
center assets that would be a lateral movement target for a compromised Kubernetes pod.
The free SANS Slingshot distribution can be downloaded from
https://www.sans.org/tools/slingshot/.
To prepare the slingshot VM, do an apt update and an apt upgrade followed by a
virtual machine reboot. The distribution comes with all the low-level packages required
to complete the research test plan. The slingshot VM also has an Apache 2 web server
that will be reachable during testing. Ping will not work from the production slingshot
machine inbound to the pods in the Kubernetes cluster. Ping uses the ICMP protocol
directly to an IP address and cannot target a specific TCP port. Nmap and telnet will be
used to specify TCP ports for the pods' NodePorts, confirming IP connectivity. TCP port
30100 is used for the webapp pod, and TCP port 30280 is used for the qa-kali pod.
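The NodePort mapping referenced above could be declared with a Service of this shape. This is a sketch for illustration: only the nodePort value 30100 comes from the test plan, while the service name, selector, and internal ports are assumptions:

```yaml
# webapp-service.yaml - illustrative NodePort service. Only nodePort 30100
# is taken from the paper; the name, selector, and ports are assumed.
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - port: 3000        # service port inside the cluster (assumed)
      targetPort: 3000  # container port (assumed)
      nodePort: 30100   # externally reachable on every node's IP
```

A NodePort service exposes the pod on the chosen port of every node's IP address, which is why the later browser test can reach the same webapp through the k8master, k8worker1, and k8worker2 addresses.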

3.2 Configure CSW Scope, Workspace, and Segmentation Policy


Cisco Secure Workload will require four configuration steps in the user interface
before a security policy can be enforced, effectively applying segmentation to the
workload. The first step is to organize the workload inventory by putting each workload
into a unique scope. The scope tree organizes the corporate network into functional areas
like production, preproduction, development, and DMZs. Figure 6 shows the production
and development scopes created for the research lab.

Figure 6: Scopes Organize Workloads


After the scopes for the network have been created and workloads assigned to
those scopes, the second step is to create a workspace for each workload. Workspaces in
CSW contain all the policy and enforcement configurations to segment each workload.
Figure 7 shows the workspaces that have been defined for this lab.

Figure 7: Workspaces Receive Policy and Enforcement Configurations


Cisco Secure Workload can monitor all the traffic connections in and out of each
workload being monitored and send detailed flow metadata back to the SaaS tenant. That
data is used to run traffic analysis for each workload and will eventually be used to create
a baseline security policy profile automatically. Additional allow-list security rules can be
manually added to the policy. Many enterprise IT engineers will collect several weeks of
baseline traffic for each workload before doing policy analysis. Figure 8 shows the flow
observations that will be analyzed to create a dynamic policy for each workspace.

Figure 8: Flow Observations in CSW

Step three of the process starts with the automatic analysis of the flow data that
will be used to generate a baseline traffic policy. This policy makes it easy to see critical
service data like DNS, DHCP, and NTP. Figure 9 shows one security rule manually added
to this dynamic policy, which will deny communication between the production and
development scopes.

Figure 9: Dynamically Created Traffic Policy
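For comparison only, the same deny-development-to-production intent can be expressed with Kubernetes' native NetworkPolicy API. This is not what CSW deploys, and the namespace and CIDR values below are assumptions based on the lab addressing; the sketch simply shows the equivalent concept in cluster-native form:

```yaml
# Illustrative only: a native Kubernetes NetworkPolicy with similar intent
# to the CSW deny rule. CSW uses its own policy model, not this API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-to-production
  namespace: default          # assumed namespace for the dev pods
spec:
  podSelector: {}             # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 192.168.0.164/32   # the production Slingshot host
```

Native NetworkPolicy requires a CNI that enforces it (Calico does), whereas CSW enforces its rules through its own agents.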


The fourth and final step will be moving the security policy into enforcement. The
Cisco Secure Workload tenant will communicate with the agent to push iptables firewall
rules to the workload, which will match the security rules defined by the policy. Figure
10 highlights the enforce policies button used to push the policy to the workload.

Figure 10: Policy Enforcement Button
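The agent-side result of enforcement is ordinary host firewall state. As a rough illustration only, not the actual rule set CSW generates, an iptables-save fragment expressing a bidirectional block toward the production host might look like:

```
# Illustrative iptables-save fragment; CSW's generated rule set is more
# elaborate, this only conveys the flavor of host-level enforcement.
*filter
-A INPUT  -s 192.168.0.164/32 -j DROP
-A OUTPUT -d 192.168.0.164/32 -j DROP
COMMIT
```

Because the rules live in the workload's own packet filter, enforcement travels with the workload rather than depending on network chokepoints.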

3.3 Test Plan


Testing will be split into two sections: before and after the application
of segmentation. Segmentation policy rules will be applied on two specific pods in the
Kubernetes cluster and a single test machine outside the Kubernetes cluster on the
production data center network. Segmentation rules will be configured to block traffic bi-
directionally for each workload. Only one test machine is within the production scope of
Cisco Secure Workload. That production workload demonstrated how useful nmap
scanning is from inside the Kubernetes cluster. Segmentation rules on the Kubernetes
workload will block access to any subnet or machine assigned to the production scope. A
single deny rule to block the development scope from forming connections to the
production scope could scale to thousands of workloads.
Various tests were conducted to validate segmentation rules. Since a typical ping
test cannot target an IP address and TCP port combination, additional tools were used. A
web browser can access the IP address and port, while Curl can retrieve HTML code
from an active webpage. Telnet and nmap can also connect to specific IP addresses and
ports. These tests are effective on Kubernetes NodePorts configured with TCP ports,
enabling external network access for pods.
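The telnet/nmap-style reachability check used here can be approximated with a small helper that needs nothing beyond bash; `probe` is a hypothetical convenience written for this sketch, not a tool used in the lab:

```shell
# probe: return 0 if <host>:<port> accepts a TCP connection, non-zero
# otherwise. Uses bash's /dev/tcp pseudo-device, so no nmap or telnet
# binary is required inside a stripped-down pod.
probe() {
  local host=$1 port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}
```

For example, `probe 192.168.0.210 30100` mirrors the NodePort telnet test; a closed or segmented port makes the function return non-zero.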
The webapp pod is a stripped-down application using runtime components from
Docker to run the image. It supports delivering the HTML webpage and storing form data
to a back-end MongoDB, but the container is very lightweight and cannot be accessed
from the kubectl command-line API. The WebApp was developed by Nana Janashia for
the TechWorld with Nana YouTube channel (Janashia, 2021). A Kali Linux image was
deployed as a pod to facilitate outbound scanning of the production network. The SANS
slingshot Linux deployment will be used to test k8s pods in the development scope. Table
2 captures the tests that will be performed for each segmentation scenario.

Test Source    Test Destination    Test Tool           Test Results

Slingshot      Webapp pod          Curl                ✗ or ✓
Slingshot      Webapp pod          Web browser         ✗ or ✓
Slingshot      Webapp pod          Nmap -p scans       ✗ or ✓
Slingshot      Webapp pod          telnet              ✗ or ✓
Qa-kali pod    Slingshot           Nmap scans          ✗ or ✓
Qa-kali pod    Slingshot           Curl                ✗ or ✓
Qa-kali pod    Slingshot           ping                ✗ or ✓
Qa-kali pod    DC Subnet           Nmap subnet scan    ✗ or ✓
               192.168.0.0/24      (192.168.0.164 present)
Slingshot      Qa-kali pod         Nmap -p scans       ✗ or ✓

Table 2: Planned Tests

Nmap scans are a key part of the testing for this research project. Four scans will
be performed for each source and destination pair test scenario in the planned tests shown
in Table 2. The following is a list of the core scans performed and sample syntax:

1. Basic Port Scan

a. nmap -p 30100 192.168.0.210

This will check if the NodePort 30100 is open on the specified IP address.

2. Service Version Detection

b. nmap -sV -p 30100 192.168.0.210


This scan will attempt to determine the version of the service running on the target
system.

3. Vulnerability Scanning

c. nmap --script vuln -p 30100 192.168.0.210

This will run vulnerability detection scripts against the service to identify potential
security issues that exploit tools can use against the machine.

4. Script Scanning

d. nmap -sC 192.168.0.164

This uses nmap's default scripts to gather more information about services running on
the target system.

5. Subnet System Scan

e. nmap -sP 192.168.0.0/24

This scan will identify all the systems on the subnet that respond and display that in the
output.

The results of the first four nmap scans will be documented in the test results tables.
The fifth nmap scan of the entire data center subnet 192.168.0.0/24 will be run from the
qa-kali pod in the k8s cluster. The documentation will include that output to showcase
micro-segmentation's ability to obfuscate network-attached assets and hinder an
attacker’s visibility.
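Assuming nmap is installed, the five scans above can be bundled into one helper for repeatable runs against each source/destination pair; `run_scans` is a hypothetical convenience for this sketch, not part of the documented procedure:

```shell
# run_scans: execute the five nmap checks from the test plan against one
# target. Requires the nmap binary; host/port/subnet values come from Table 2.
run_scans() {
  if [ $# -ne 3 ]; then
    echo "usage: run_scans <host> <nodeport> <subnet>" >&2
    return 1
  fi
  nmap -p "$2" "$1"                # 1. basic NodePort check
  nmap -sV -p "$2" "$1"            # 2. service version detection
  nmap --script vuln -p "$2" "$1"  # 3. vulnerability scripts
  nmap -sC "$1"                    # 4. default script scan
  nmap -sP "$3"                    # 5. subnet host discovery (-sP is an alias of -sn)
}
```

For example, `run_scans 192.168.0.210 30100 192.168.0.0/24` reproduces the sequence used in the before-and-after test runs.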

3.4 Test Results Without Micro-Segmentation Enforcement


This set of tests was performed within the lab network with the CSW segmentation
configurations disabled. Table 3 captures the results of this portion of the test plan.

Test Source    Test Destination    Test Tool           Test Results

Slingshot      Webapp pod          Curl                ✓
Slingshot      Webapp pod          Web browser         ✓
Slingshot      Webapp pod          Nmap -p scans       ✓
Slingshot      Webapp pod          telnet              ✓
Qa-kali pod    Slingshot           Nmap scans          ✓
Qa-kali pod    Slingshot           Curl                ✓
Qa-kali pod    Slingshot           ping                ✓
Qa-kali pod    DC Subnet           Nmap subnet scan    ✓
               192.168.0.0/24      (192.168.0.164 present)
Slingshot      Qa-kali pod         Nmap -p scans       ✓

Table 3: Test Results Without Segmentation

The production slingshot machine is located at IP address 192.168.0.164 and has
a functional GUI interface. One of the planned tests uses Mozilla Firefox with three
separate browser tabs, each opened to NodePort 30100 for the IP addresses of the
Kubernetes k8master, k8worker1, and k8worker2 nodes. Figure 11 is an image capture of
the Slingshot workstation with the open browser tabs, each with full access to the
webpage being served up from the webapp container virtualized by Kubernetes.


Figure 11: WebApp Pod Website, Developed by Janashia (2021)

Table 4 captures the outputs of a pair of single target nmap port scans from the
Slingshot machine toward the NodePorts 30280 and 30100 for the qa-kali and webapp
pods. Table 4 also shows the outputs of a Ping from qa-kali to Slingshot and a successful
telnet connection from Slingshot into the webapp pod.


Table 4: Successful Ping, Telnet, and Nmap Tests

The last test output for this section is the nmap scan of the data center IP subnet
hosting the production servers and the development Kubernetes cluster. This output was
performed from the qa-kali pod, which enumerates each IP address in the 192.168.0.0/24
network. Without segmentation enforcement, the Slingshot workstation provisioned at
192.168.0.164 in the production enclave in Cisco Secure Workload is visible to pods in
the Kubernetes cluster. Figure 12 shows all the active hosts discovered by the nmap scan
of this network.

Figure 12: Nmap Host Enumeration Without Segmentation. The Slingshot production
workload is visible to the Kali pod in the k8s cluster.


3.5 Test Results with Micro-Segmentation Enforcement


Testing with micro-segmentation enforced starts in Cisco Secure Workload.
Navigate to the menu on the left of the GUI, select Defend > Segmentation > Workspace,
and go through the process to enable enforcement. Enforcement is
enabled on three separate agents in the test lab, blocking production and development
workloads from forming connections. See Figure 13, highlighting that enforcement is
active and the agent rejects traffic between production and development workloads.
Slingshot is not communicating with pods in Kubernetes.

Figure 13: Enforcement Actively Rejecting Traffic Between Enclaves

Testing was repeated, and the results are documented in Table 5. The test results
demonstrate that micro-segmentation is working well.

Test Source    Test Destination    Test Tool           Test Results

Slingshot      Webapp pod          Curl                ✗
Slingshot      Webapp pod          Web browser         ✗
Slingshot      Webapp pod          Nmap -p scans       ✗
Slingshot      Webapp pod          telnet              ✗
Qa-kali pod    Slingshot           Nmap scans          ✗
Qa-kali pod    Slingshot           Curl                ✗
Qa-kali pod    Slingshot           ping                ✗
Qa-kali pod    DC Subnet           Nmap subnet scan    ✗
               192.168.0.0/24      (192.168.0.164 present)
Slingshot      Qa-kali pod         Nmap -p scans       ✗

Table 5: Test Results Using Full Segmentation

Nmap scanned the entire subnet, but the 192.168.0.164 machine did not show up
in the scan output. The agent enforcement protects the production slingshot machine
from the qa-kali Linux pod inside the Kubernetes cluster. The micro-segmentation
agents installed in Kubernetes block traffic at the pod level; they do not block access to
the .210, .211, and .212 addresses of the master and two worker machines. The agents
are segmenting the pods inside Kubernetes. Figure 14 shows the details of the nmap
scan with micro-segmentation enforcement.

Figure 14: Nmap Host Enumeration With Segmentation. The Slingshot production
workload is not visible in the scan from the Kali pod in the k8s cluster.


3.6 Summary of Test Results


The critical takeaway uncovered by this testing is that micro-segmentation remains
very effective whether it is applied only to hosts in the production data center, only to
workloads in the Kubernetes development environment, or within both enclaves. It is
recommended that it be provisioned in multiple places in case one of the layers of zero
trust is compromised.
Another interesting finding was how effective Cisco Secure Workload is at discovering
every connection in and out of any workload. It can easily find all the critical operational
protocols and connections required for container operation. Those essential resources can
be provisioned at scale and applied as a policy to any workload dynamically created by
Kubernetes. Cisco Secure Workload allows agents to be provisioned so attackers cannot
remove them from the workload. Locking the installation of the agent adds another layer
of difficulty for an attacker living off the land. However, Locking the agent install can
create some troubleshooting challenges for developers who may need to remove the agent
while testing.

As a single layer of a zero-trust framework, micro-segmentation can be used very
effectively to protect Kubernetes instantiated ephemeral workloads.

4. Recommendations and Implications for Future Research
Preproduction development workspaces should not be open sandbox playgrounds
where developers build and test applications with production customer or patient
data. Even with security and governance policies in place, these development
networks often contain non-compliant workloads. Defenders can use these same QA
networks to build and optimize security solutions outside production.
Micro-segmentation should be adopted early so developers learn and mature
alongside the layers of security that will be applied to these workloads when
they finally move into production. Cisco Secure Workload can also monitor
traffic into and out of workloads to establish a baseline. That data can be
valuable to application and mobile developers and can be used to positively
motivate them to work with defenders to establish network segmentation.


5. Conclusion
Future researchers can explore how defenders can use CSW for advanced threat
hunting; the deep visibility into the traffic and processes used by workloads
can be evaluated for use by security teams. Researchers can also explore
applying segmentation to the internal network inside the Kubernetes cluster,
where it can protect the cluster API server and other critical resources from
exposure to compromised pods. Future researchers could also explore scalability
and performance impact testing and analysis.
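One native building block such in-cluster research could start from is the Kubernetes NetworkPolicy object. The sketch below is illustrative and was not part of this lab's configuration; the namespace name is hypothetical, and a real deployment would pair this default-deny baseline with explicit allow policies:

```yaml
# Deny all ingress and egress for every pod in the (illustrative) qa
# namespace unless another policy explicitly allows the traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: qa
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

With a CNI that enforces NetworkPolicy, such as the Calico installation used in this lab, policies of this kind are enforced at Layers 3 and 4 and would complement the host-level enforcement CSW provides.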

In conclusion, the rapid adoption of Kubernetes-orchestrated microservice
architectures has introduced significant security challenges, particularly in
development-centric environments where QA workloads are improperly exposed to
production
networks. This research underscores the critical need for scalable security
measures that go beyond what application and mobile developers alone can
provide. Organizations can
effectively limit the blast radius of potential security incidents by implementing a
network-centric micro-segmentation model, thereby protecting corporate assets and
sensitive data (Green-Ortiz et al., 2023).
The deployment of Cisco Secure Workload within Kubernetes clusters has proven to
be an effective solution for achieving this goal. Its ability to provide visibility into all
pods, capture and baseline traffic flows, and enforce Layer 3 and 4 segmentation policies
ensures that ephemeral workloads are securely managed. This approach aligns with zero-
trust architecture principles and meets governance and compliance requirements, offering
a robust defense against runtime threats such as unauthorized access, privilege escalation,
and lateral movement.
Ultimately, the integration of Cisco Secure Workload for micro-segmentation within
Kubernetes environments offers a scalable and effective means to enhance security,
protect data center assets, and mitigate risks associated with the rapid deployment of
feature-rich applications.


References

Cisco Systems Inc. (2023, January 11). Cisco Secure Workload User Guide, Release 3.7. Retrieved February 4, 2024, from https://www.cisco.com/c/en/us/td/docs/security/workload_security/secure_workload/user-guide/3_7/cisco-secure-workload-user-guide-v37.pdf

Cox, R. (2020). Network Segmentation of Users on Multi-User Servers and Networks. In SANS Information Security White Papers. Retrieved from https://www.sans.org/white-papers/

D'Silva, D., & Ambawade, D. (2021, April 2). Building A Zero Trust Architecture Using Kubernetes. 2021 6th International Conference for Convergence in Technology (I2CT). Retrieved February 4, 2024, from IEEE Xplore.

Dale, C. (2023, December 18). Top 3 Cybersecurity Predictions for 2024 in EMEA. SANS Institute. Retrieved February 3, 2024, from https://www.sans.org/blog/top-3-cybersecurity-predictions-for-2024-in-emea/

Green-Ortiz, C., Fowler, B., Houck, D., Hensel, H., Lloyd, P., McDonald, A., & Frazier, J. (2023). Zero Trust Architecture (1st ed., p. 304). San Jose, CA: Cisco Press.

Helco, C. (2023). Kubernetes: Stealing Service Account Tokens to Obtain Cluster-Admin. In SANS Information Security White Papers. Retrieved from https://www.sans.org/white-papers/

Jaworski, S. (2017). Does Network Micro-segmentation Provide Additional Security? In SANS Information Security White Papers. Retrieved from https://www.sans.org/white-papers/

Peterson, B. (2016). Secure Network Design: Micro Segmentation. In SANS Information Security White Papers. Retrieved from https://www.sans.org/white-papers/

RedHat. (2022). The state of Kubernetes security report (1st ed., p. 29). Raleigh, NC: Red Hat. Retrieved from https://www.redhat.com/rhdc/managed-files/cl-state-kubernetes-security-report-2022

RedHat. (2024). The state of Kubernetes security report (1st ed., p. 30). Raleigh, NC: Red Hat. Retrieved from https://www.redhat.com/rhdc/managed-files/cl-state-kubernetes-security-report-2024-1210287-202406-en.pdf

Shackleford, D. (2019). How to Effectively Use Segmentation and Micro Segmentation. In SANS Information Security White Papers. Retrieved from https://www.sans.org/white-papers/

Shackleford, D. (2020a). How to Create a Comprehensive Zero Trust Strategy. In SANS Information Security White Papers. Retrieved from https://www.sans.org/white-papers/

The Verizon DBIR Team, Hylender, C. D., Langlois, P., Pinto, A., & Widup, S. (2024). 2024 data breach investigations report (p. 99). New York City: Verizon Business. Retrieved from https://www.verizon.com/business/resources/Tdb6/reports/2024-dbir-data-breach-investigations-report.pdf

Verizon Business, Hylender, C. D., Langlois, P., Pinto, A., & Widup, S. (2023). 2023 data breach investigations report (p. 88). New York City: Verizon Business. Retrieved from https://www.verizon.com/business/resources/reports/dbir/

Warren, D. (2022). A Qualitative Study of Cloud Computing Security and Data Analytics Adoption for Application Developers (Doctoral dissertation). Retrieved from https://www.proquest.com/openview/46b2c0927ae0ff56f198afc59d5b4df6/1.pdf?pq-origsite=gscholar&cbl=18750&diss=y

Wickramasinghe, S. (2023, September 8). How to Install Kubernetes on Ubuntu 22.04 | Step-by-Step. Cherry Servers. Retrieved June 24, 2024, from https://www.cherryservers.com/blog/install-kubernetes-on-ubuntu


Appendix A

10-Step Process to Build a Kubernetes Cluster on Ubuntu Server 22.04 LTS

Finding an accurate step-by-step process for building a version 1.28+
Kubernetes cluster on Ubuntu Server 22.04 LTS was difficult. Cherry Servers
published a blog post authored by senior software engineer Shanika
Wickramasinghe. While the blog is largely accurate, it has not been updated for
Kubernetes 1.28 and does not account for the deprecation of the Google
certificate repository. Step five below is broken into two halves and deviates
from the blog's guidance with modified configurations to get the desired
outcome. The remaining steps are Shanika Wickramasinghe's original work
(Wickramasinghe, 2023). Her blog can be found at:
https://www.cherryservers.com/blog/install-kubernetes-on-ubuntu

Step 1: Disable Swap


khuss@k8master:~$ sudo swapoff -a
khuss@k8master:~$ sudo sed -i '/ swap / s/^/#/' /etc/fstab
Step 2: Set up Hostnames
khuss@k8master:~$ sudo hostnamectl set-hostname "master-node"
khuss@k8master:~$ exec bash
Step 3: Update the /etc/hosts File for Hostname Resolution
khuss@k8master:~$ sudo nano /etc/hosts
NOTE – Add the host IP address and corresponding hostname to the file, e.g.:
10.0.0.2 master-node
Step 4: Set up the IPV4 bridge on all nodes
khuss@k8master:~$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF


khuss@k8master:~$ sudo modprobe overlay


khuss@k8master:~$ sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
khuss@k8master:~$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply systemctl params without reboot


khuss@k8master:~$ sudo sysctl --system

Step 5: Install kubelet, kubeadm, and kubectl on each node


khuss@k8master:~$ sudo apt-get update
khuss@k8master:~$ sudo apt-get install -y apt-transport-https ca-certificates curl
khuss@k8master:~$ sudo mkdir /etc/apt/keyrings
NOTE – TWO FIXED COMMANDS BELOW:
khuss@k8master:~$ curl -fsSL
https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o
/etc/apt/keyrings/kubernetes-apt-keyring.gpg

khuss@k8master:~$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-


keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee
/etc/apt/sources.list.d/kubernetes.list
khuss@k8master:~$ sudo apt-get update
khuss@k8master:~$ sudo apt install -y kubelet kubeadm kubectl

Step 6: Install Docker


khuss@k8master:~$ sudo apt install docker.io
khuss@k8master:~$ sudo mkdir /etc/containerd
khuss@k8master:~$ sudo sh -c "containerd config default >
/etc/containerd/config.toml"
khuss@k8master:~$ sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup =
true/' /etc/containerd/config.toml


khuss@k8master:~$ sudo systemctl restart containerd.service


khuss@k8master:~$ sudo systemctl restart kubelet.service
khuss@k8master:~$ sudo systemctl enable kubelet.service

Step 7: Initialize the Kubernetes cluster on the master node


khuss@k8master:~$ sudo kubeadm config images pull
khuss@k8master:~$ sudo kubeadm init --pod-network-cidr=10.10.0.0/16
khuss@k8master:~$ mkdir -p $HOME/.kube
khuss@k8master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
khuss@k8master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 8: Configure kubectl and Calico


khuss@k8master:~$ kubectl create -f
https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-
operator.yaml
khuss@k8master:~$ curl
https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-
resources.yaml -O
khuss@k8master:~$ sed -i 's/cidr: 192\.168\.0\.0\/16/cidr: 10.10.0.0\/16/g'
custom-resources.yaml
khuss@k8master:~$ kubectl create -f custom-resources.yaml
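After the sed edit above, the ipPools section of custom-resources.yaml should resemble the fragment below. Only the cidr value was changed for this lab; the remaining fields shown are the Calico v3.26 defaults and may vary:

```yaml
# custom-resources.yaml (relevant fragment after the sed edit)
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.10.0.0/16   # must match the kubeadm --pod-network-cidr value
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
```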

Step 9: Add Worker Nodes to the cluster


NOTE – USE THE TOKEN GENERATED WHEN YOU INITIALIZED KUBEADM ON THE
MASTER NODE.
khuss@k8master:~$ sudo kubeadm join <MASTER_NODE_IP>:<API_SERVER_PORT>
--token <TOKEN> --discovery-token-ca-cert-hash <CERTIFICATE_HASH>
Step 10: Verify the cluster and test
khuss@k8master:~$ kubectl get node
khuss@k8master:~$ kubectl get pod -A


Appendix B
Loading Cisco Secure Workload Agent on Kubernetes

This appendix outlines the build summary for the four main Cisco Secure
Workload elements used in this test lab. The four components of CSW that needed
to be built to have an operational system capable of deploying
micro-segmentation into Kubernetes are the tenant, the agents specific to the
workloads, a secure connector, and, finally, an external orchestrator. The
build process for each component is documented below.
A system administrator with the required CSW access must configure a SaaS or
hybrid-appliance Secure Workload tenant. A Cisco cloud security administrator
created the SaaS tenant used for this research; a summary of the tenant
configuration process follows.
Creating a SaaS tenant in Cisco Secure Workload involves the following general
steps:
1. Access Cisco Secure Workload:
- Log in to the Cisco Secure Workload account using admin credentials.
2. Navigate to Tenant Management:
- Once logged in, go to the "Tenant Management" section. This wizard is in the
administrative or settings area of the dashboard.
3. Create a New Tenant:
- Click on the option to create a new tenant. This might be labeled as "Add
Tenant," "New Tenant," or something similar.
4. Enter Tenant Details:
- Fill in the required information for the new tenant. The required data usually
includes:
- Tenant Name
- Description
- Contact Information
- Any other details specific to your organization's needs.
5. Configure Tenant Settings:


- Set up the necessary configurations for the tenant. This may include:
- Resource allocation (e.g., CPU, memory, storage)
- Network settings
- Security policies
- Access controls and permissions
6. Assign Users and Roles:
- Add users to the tenant and assign appropriate roles and permissions. Ensure
that users have access to perform their tasks within the tenant.
7. Review and Confirm:
- Review all the entered information and configurations to ensure accuracy.
- Confirm the creation of the tenant.
8. Deployment:
- Deploy the tenant, which may involve provisioning resources and applying
configurations.

Figure B1 contains the tenant external agent config server and collector IP
addresses. These addresses must be fully reachable by your workloads.


Figure B1: Research SaaS Tenant Details

The Secure Connector installation is straightforward and one of the more
manageable components. A CentOS 7 VM must be loaded and operational in the
local lab network. The software package sets up an encrypted tunnel from the
CentOS 7 VM, out of the local network, over the internet, and into the SaaS
tenant for CSW.
1. Access Cisco Secure Workload:
- Log in to your Cisco Secure Workload account using your credentials.
2. Navigate to Workload Management:
- Once logged in, navigate to the menu down the left side of the GUI, select
"Workloads," and then choose Secure Connector from the drop-down menu.
3. Click on the "Download Latest RPM" button:
- Once the file is downloaded to your local host, use SCP to move it to the
/home directory of the CentOS 7 VM.
4. Install the RPM package:
- Run the installer RPM.
- The CSW tenant automatically configures the rpm file with the target IP
addresses and security information for the tunnel.
- sudo rpm -i <package name>.rpm
5. Check the connector status in CSW:
- Navigate to the "Workloads" menu and choose "Secure Connector."
- The page will show the status of the connector tunnel in Green or Red.

Figure B2 is a screenshot of the operational Secure Connector, with the green “Active”
Tunnel Status.


Figure B2: SaaS Secure Connector Tunnel Status

The third CSW component required for this research lab is the CSW "Agents."
The agent will operate locally on the production slingshot Linux workstation
and will also be installed into the Kubernetes cluster. Three agents will be
used: two installed into the k8s cluster and the last on the slingshot
workstation.

Each environment requires that a list of low-level Linux packages be installed
before running the "Installer Script," which is built by the customer and
downloaded from the SaaS CSW tenant. Figure B3 has a detailed list of the
required Linux packages, each installed with the "apt install <package name>"
command.

Example: apt install curl

Example: apt install openssl


Figure B3: Required Linux Packages

This next step is a critical prerequisite and could cost hours of troubleshooting if
forgotten or missed. On the Ubuntu server running 22.04 LTS, which was provisioned as
the master Kubernetes control node, a containerd registry file must be edited.

sudo nano /etc/containerd/config.toml

- Look through the file until you find the following line:
[plugins."io.containerd.grpc.v1.cri".registry]

- Just below that line, add the required edit:

config_path = "/etc/containerd/certs.d"

Figure B4 has a screen capture of the exact command. This registry must be edited before
running the “Installer Script” on the Kubernetes master node.

Figure B4: Required Registry Edit
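For reference, a sketch of how the edited registry section of /etc/containerd/config.toml should read once the change is in place (surrounding keys and indentation may differ by containerd version):

```toml
# /etc/containerd/config.toml -- registry section after the required edit
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"
```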


With the above registry edit in place, the agent installation on Kubernetes
should proceed without issues.
1. Navigate to the CSW menu on the left side of the GUI, select "Manage" and then
"Workloads," and select "Agents."
2. Choose the "Agent Script Installer" method to load the agents.
- From the drop-down menu, select the target OS for the agent deployment; in
this case, select Kubernetes.
- The tenant will be selected automatically; in the example below, it is khuss
- Download the automatically configured script and move it to the /home
directory of your master Kubernetes control node.
3. From the Kubernetes control/master node run the following kubectl command:
- kubectl get pod -v6
- The output of that command will have the path to the Kubernetes config file,
which is required to edit the bash command to execute the "Agent Script
Installer."
- The path to the Kubernetes config file should look like this:
/home/khuss/.kube/config
- Edit the bash command on the CSW installer page and run the script as shown
in Figure B5.
- $ bash tetration_installer_khuss_enforcer_kubernetes_tet-pov-rtp1.sh
--kubeconfig /home/khuss/.kube/config

Figure B5: Kubernetes Agent Script Installer


4. Once the script has been run on the Kubernetes control node, execute the
following kubectl command to verify the installation:

- kubectl get pod -A

- See the output in Figure B6: look for the two tetration pods in the
tetration namespace and make sure they are operational with the "Running"
status.

Figure B6: Tetration Pods Running Status

5. Log back into the CSW tenant and navigate to "Manage," then to "Workloads"
and choose "Agents."

- Across the top of the GUI is a horizontal menu; choose "Agent List." The
screen in Figure B7 shows the status of all properly installed agents: two
agents are located in the Kubernetes cluster, and the last is loaded onto
the Slingshot Linux workstation.


Figure B7: CSW Active Agent List Status

The final component of the CSW portion of this test lab is the External
Orchestrator. The orchestrator configuration guide is available at:

https://www.cisco.com/c/en/us/td/docs/security/workload_security/secure_workload/user-guide/3_9/cisco-secure-workload-user-guide-saas-v39/external-orchestrators.html#concept_684859

Cisco Secure Workload External Orchestrator requires an API connection directly
to the Kubernetes API server. Version 1.28 of Kubernetes significantly changed
the process of building this API connection, and the current version of the
Cisco documentation has not been updated to reflect it.

The service account must be named csw.read.only, and the five kubectl
commands required to set up the API connection and generate a token must be
completed before going to CSW to provision the Orchestrator.

kubectl create serviceaccount csw.read.only

kubectl create -f clusterrole.yaml (After you create the clusterrole.yaml file)

kubectl create clusterrolebinding csw.read.only --clusterrole=csw.read.only --serviceaccount=default:csw.read.only

kubectl get serviceaccount -o yaml csw.read.only

kubectl create token csw.read.only
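The clusterrole.yaml file referenced in the second command above is not reproduced in this paper. A minimal read-only sketch that matches the csw.read.only naming is shown below; the resource list is illustrative, and the authoritative set of permissions should be taken from the Cisco Secure Workload user guide:

```yaml
# clusterrole.yaml -- minimal read-only role for the CSW external orchestrator.
# The resources listed are an assumption for illustration; consult Cisco's
# documentation for the exact permissions the orchestrator requires.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csw.read.only
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
```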


The final command will generate a sizeable token string. Use that token in the
Cisco Secure Workload External Orchestrator installation wizard.
Open the CSW tenant and go to the menu on the left side of the GUI. Select
Manage > Workloads > External Orchestrator. Then, from the GUI screen for the
External Orchestrator, select Create Orchestrator. Follow the wizard to set it up. The
default port number is 10250 for the API configuration. The API connection and the
token created in Kubernetes will be needed to complete the Orchestrator setup.
