Micro-segmentation for Kubernetes Instantiated Ephemeral Workloads
Abstract
Defensive security professionals will have to commit focused energy and resources to
protect Kubernetes-instantiated ephemeral workloads rather than leave that responsibility
solely in the hands of the company's development team. This paper examines the
emergence and popularity of microservice-based application development, the risks these
workloads pose to the security posture, and whether scalable micro-segmentation of
container-based workloads can be a practical part of a Zero Trust Architectural strategy.
Micro-segmentation can hinder lateral movement into data center assets outside
Kubernetes domains and slow attackers, allowing security teams to detect that movement.
The Kubernetes cluster is inside a development enclave, and workloads with external
cluster connectivity should not be allowed to form connections with the production
network assets. A slingshot workstation inside the production enclave will scan and test
connectivity between enclaves. Cisco Secure Workload will be used to segment and stop
the traffic flow between these enclaves. Nmap scanning tools will be used to observe the
efficacy of micro-segmentation.
1. Introduction
Enterprise infrastructure and security teams struggle with development-centric
quality assurance and preproduction workloads being fully utilized for production
delivery, in violation of documented and enforced security and change management
governance policies. This observed behavior was the catalyst for this research project.
Corporations are moving quickly to deploy feature-rich mobile apps and internal software
capabilities that can give them a differentiated advantage in the marketplace. Time-to-
market pressure pushes developers to embrace microservice software design
architectures, moving away from large and unruly monolithic application design. The
dramatic rise of distributed applications has contributed to the explosive growth of
container-based architectures like Kubernetes. While developers are motivated to
embrace secure development practices, many "Enterprise organizations are not allowing
or preparing their applications developers to implement security measures for the
applications they develop and deploy" (Warren, 2022). Companies cannot leave the
security of container-based ephemeral workloads solely in the hands of application and
mobile developers. IT and infosec teams need a scalable way of protecting corporate
assets from these workloads should they be compromised.
Various segmentation types could be used to protect Kubernetes clusters, and this
research will focus on a network-centric segmentation model. In the book Zero Trust
Architecture, Patrick Lloyd highlights that application segmentation is already in use with
container-based technologies; “containers, common to platforms such as docker, separate
application functionality and processing of data from the reliance upon a common set of
resources to function” (Green-Ortiz et al., 2023). The containerization of functional
pieces of an application can slow a threat actor at the cost of increasing the overall attack
surface. Adding a network-centric segmentation layer to these ephemeral workloads can
“prevent the spread, and limit the blast zone, or impacted area, of the network should one
client be compromised” (Green-Ortiz et al., 2023). Kubernetes environments are
especially vulnerable to runtime threats like unauthorized access, privilege escalation,
and lateral movement, which are among the top five security issues organizations face
(RedHat, 2024). Limiting east-west directional network access with micro-segmentation
can be essential to a layered zero-trust segmentation design. Cisco Secure Workload
(CSW) is a scalable approach to providing micro-segmentation to Kubernetes-
orchestrated workloads. It can provide visibility into all the pods in the cluster. CSW can
capture and baseline traffic flows into and out of workloads to simplify the design and
deployment of L3 and L4 segmentation, limiting the blast radius should a
container in the cluster be compromised. Cisco Secure Workload has distributed firewall
capabilities with security policies that comply with governance and compliance
requirements. It is a scalable way to enforce policy, protecting data center assets outside
the Kubernetes cluster.
2. Research Method
2.1 Constraints
This research paper does not address the additional layers of security necessary for
protecting enterprise and cloud Kubernetes deployments. Instead, it concentrates on
network segmentation, particularly micro-segmentation, to manage traffic within and
between network segments and limit connectivity within the same segment. The paper
does not cover macro segmentation, deep application segmentation, or user segmentation.
In IT, segmentation involves dividing environments and solutions into smaller parts to
hinder an attacker's lateral movement within the infrastructure.
The core of the Kubernetes test lab is a Windows 10 Pro 64-bit host equipped
with an AMD Ryzen 9 3900X processor (24 threads), 64 GB of RAM, and multiple 2 TB
NVMe drives. VMware Workstation Pro 17 is the type 2 hypervisor that supports the five
VMs that make up the test bed. The Windows 10 host is connected to the local network
via an L2 switch with an uplink to a Ubiquiti UniFi Dream Machine Pro. The UDM-Pro
supports access to the local Wi-Fi networks and provides firewall services for the fiber
uplink connection to the internet.
Cisco Secure Workload is delivered globally from Cisco’s data centers; however,
it can be deployed and consumed directly from Amazon AWS, Microsoft Azure, and
Google GCP cloud networks. The CSW tenant runs software Version 3.9.1.1-patch 38
and operates on Cisco’s own dCloud™.
Docker and Containerd are the virtualization runtimes used to support the creation
of all Kubernetes pods. Calico containers within the control node handle networking
within the cluster and facilitate connectivity outside the cluster. Kubectl is a command-
line interface tool that utilizes the Kubernetes API to control the cluster control plane
assets like pods, services, and nodes, as shown in Figure 3.
Once the planned YAML files are created, they are applied to the Kubernetes
cluster with a kubectl apply command.
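The kubectl apply step can be illustrated with a minimal Deployment manifest; the file name, labels, image, and port below are assumptions for illustration, not the lab's actual files. It would be activated with kubectl apply -f webapp-deployment.yaml:

```yaml
# webapp-deployment.yaml -- hypothetical example manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:1.25   # placeholder image for illustration
          ports:
            - containerPort: 80
```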
After the file is applied, the pods will be built automatically by Kubernetes and run on the
worker node selected by the scheduler running within the control node.
SaaS Tenant - The heart of the Cisco Secure Workload solution. It is used to
manage and deploy distributed firewall capabilities, gather and analyze telemetry, and
create and enforce security policies.
Two development containers will be provided for testing. The first is a web
application connected to a back-end MongoDB; the second will be a QA-kali workstation
with penetration and scanning tools. Cisco Secure Workload will have policies built to
block all production assets from communicating with any development workloads.
Testing will be done before and after enforcement is enabled to showcase the
effectiveness of traffic segmentation between the production and development enclaves.
3. Findings
Five low-level Linux packages will be required to scan, pull HTML web
pages, and telnet or ping workloads from the command line. Update the Ubuntu package
index and install each package with the following six commands:
apt update
apt install net-tools
apt install iputils-ping
apt install telnet
apt install curl
apt install nmap
This testbed also has the official SANS slingshot Linux distribution loaded as a virtual
machine in the production enclave. The production network simulates the east-west data
center assets that would be a lateral movement target for a compromised Kubernetes pod.
The free SANS slingshot distribution for download can be found at
https://www.sans.org/tools/slingshot/.
To prepare the slingshot VM, do an apt update and an apt upgrade followed by a
virtual machine reboot. The distribution comes with all the low-level packages required
to complete the research test plan. The slingshot VM also has an Apache 2 web server
that will be reachable during testing. Ping will not work from the production Slingshot
machine inbound to the pods in the Kubernetes cluster: ping uses the ICMP protocol
directly against an IP address and cannot target a specified TCP port. Nmap and telnet
will be used to specify TCP ports for the pods' NodePorts, confirming IP connectivity.
TCP port 30100 is used for the webapp pod, and TCP port 30280 is used for the qa-kali
pod.
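A NodePort of this kind is declared in the pod's Service manifest. The following is a minimal sketch; the service name, selector, and internal ports are assumptions, and only the nodePort value 30100 comes from the lab configuration described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service      # hypothetical name
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - port: 80              # in-cluster service port (assumed)
      targetPort: 80        # container port (assumed)
      nodePort: 30100       # externally reachable port used in this lab
```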
After the scopes for the network have been created and workloads assigned to
those scopes, the second step is to create a workspace for each workload. Workspaces in
CSW contain all the policy and enforcement configurations to segment each workload.
Figure 7 shows the workspaces that have been defined for this lab.
Cisco Secure Workload can monitor all the traffic connections in and out of each
workload being monitored and send detailed flow metadata back to the SaaS tenant. That
data is used to run traffic analysis for each workload and will eventually be used to create
a baseline security policy profile automatically. Additional white list security rules can be
manually added to the policy. Many enterprise IT engineers will collect several weeks of
baseline traffic for each workload before doing policy analysis. Figure 8 shows the flow
observations that will be analyzed to create a dynamic policy for each workspace.
Step three of the process starts with the automatic analysis of the flow data that
will be used to generate a baseline traffic policy. This policy makes it easy to see critical
service data like DNS, DHCP, and NTP. Figure 9 has one manually added security rule to
this dynamic policy, which will deny communication between production and
development scopes.
The fourth and final step will be moving the security policy into enforcement. The
Cisco Secure Workload tenant will communicate with the agent to push iptables firewall
rules to the workload, which will match the security rules defined by the policy. Figure
10 highlights the enforce policies button used to push the policy to the workload.
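Under the hood, the enforcement step lands on each workload as ordinary iptables rules. Purely as an illustration of the effect (CSW generates its own chains and rule sets; the source address below is the Slingshot host's assignment in this lab), a production-to-development deny rule resembles:

```shell
# Illustrative only: drop inbound traffic from the production Slingshot host
iptables -A INPUT -s 192.168.0.164/32 -j DROP
```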
The webapp and qa-kali pods were deployed from the kubectl command-line
API. The WebApp was developed by Nana Janashia for the TechWorld with Nana
YouTube channel (Janashia, 2021). A Kali Linux image was deployed as a pod to
facilitate outbound scanning of the production network. The SANS Slingshot Linux
deployment will be used to test k8s pods in the deployment scope. Table 2 captures the
tests that will be performed for each segmentation scenario.
Nmap scans are a key part of the testing for this research project. Four scans will
be performed for each source and destination pair test scenario in the planned tests shown
in Table 2. The following is a list of the core scans performed and sample syntax:
1. Port Scanning
This will check if the NodePort 30100 is open on the specified IP address.
2. Service Version Detection
This scan will attempt to determine the version of the service running on the target
system.
3. Vulnerability Scanning
This will run vulnerability detection scripts against the service to identify potential
security issues that exploit tools can use against the machine.
4. Script Scanning
This uses nmap's default scripts to gather more information about services running on
the target system.
5. Host Discovery
This scan will identify all the systems on the subnet that respond and display that in the
output.
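As sample syntax for the five scans (the single-host target address below is taken from this lab's 192.168.0.0/24 addressing and is an assumption; adjust it to the environment under test):

```shell
# 1. Port scan of the webapp NodePort
nmap -p 30100 192.168.0.210
# 2. Service version detection on the same port
nmap -sV -p 30100 192.168.0.210
# 3. Vulnerability detection scripts against the service
nmap --script vuln -p 30100 192.168.0.210
# 4. Default script scan for additional service information
nmap -sC -p 30100 192.168.0.210
# 5. Host discovery across the data center subnet
nmap -sn 192.168.0.0/24
```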
The results of the first four nmap scans will be documented in the test results tables.
The fifth nmap scan of the entire data center subnet 192.168.0.0/24 will be run from the
qa-kali pod in the k8s cluster. The documentation will include that output to showcase
micro-segmentation's ability to obfuscate network-attached assets and hinder an
attacker's visibility.
Table 4 captures the outputs of a pair of single target nmap port scans from the
Slingshot machine toward the NodePorts 30280 and 30100 for the qa-kali and webapp
pods. Table 4 also shows the outputs of a Ping from qa-kali to Slingshot and a successful
telnet connection from Slingshot into the webapp pod.
The last test output for this section is the nmap scan of the data center IP subnet
hosting the production servers and the development Kubernetes cluster. This output was
performed from the qa-kali pod, which enumerates each IP address in the 192.168.0.0/24
network. Without segmentation enforcement, the Slingshot workstation provisioned at
192.168.0.164 in the production enclave in Cisco Secure Workload is visible to pods in
the Kubernetes cluster. Figure 12 shows all the active hosts discovered by the nmap scan
of this network.
The Slingshot production workload is visible to the Kali pod in the k8s cluster.
Testing was repeated, and the results are documented in Table 5. The test results
demonstrate that micro-segmentation is working well.
The Nmap tools scanned the entire subnet, but the 192.168.0.164 machine no longer
appears in the scan output. The agent enforcement protects the production Slingshot
machine from the qa-kali Linux pod inside the Kubernetes cluster. The micro-
segmentation agents installed in Kubernetes block traffic from the pods while still
permitting access to the .210, .211, and .212 addresses of the master and two worker
machines; the agents are segmenting the pods inside Kubernetes. Figure 14 shows the
details of the nmap scan with micro-segmentation enforcement.
The Slingshot production workload is not visible in the scan from the Kali pod in the k8s cluster.
5. Conclusion
Future researchers can explore how defenders can use CSW for advanced threat
hunting. Deep visibility into the traffic and processes used by workloads can be evaluated
for use by security teams. Researchers can also explore the application of segmentation to
the internal network inside the Kubernetes cluster. Segmentation can protect the cluster
API server and other critical resources from exposure to compromised pods. Future
researchers could also explore scalability and performance impact testing and analysis.
References
Cisco Systems Inc. (2023, January 11). Cisco Secure Workload User Guide, Release 3.7.
https://www.cisco.com/c/en/us/td/docs/security/workload_security/secure_workload/user-guide/3_7/cisco-secure-workload-user-guide-v37.pdf
D’Silva, D., & Ambawade, D. (2021, April 2). Building A Zero Trust Architecture Using
Kubernetes. Retrieved February 4, 2024, from IEEE Xplore.
Dale, C. (2023, December 18). Top 3 Cybersecurity Predictions for 2024 in EMEA.
SANS Institute. https://www.sans.org/blog/top-3-cybersecurity-predictions-for-2024-in-emea/
Green-Ortiz, C., Fowler, B., Houck, D., Hensel, H., Lloyd, P., McDonald, A., & Frazier,
J. (2023). Zero Trust Architecture (1st ed., p. 304). San Jose, CA: Cisco Press.
RedHat. (2022). The state of Kubernetes security report.
https://www.redhat.com/rhdc/managed-files/cl-state-kubernetes-security-report-2022
RedHat. (2024). The state of Kubernetes security report.
https://www.redhat.com/rhdc/managed-files/cl-state-kubernetes-security-report-2024-1210287-202406-en.pdf
The Verizon DBIR Team, Hylender, C. D., Langlois, P., Pinto, A., & Widup, S. (2024).
2024 Data Breach Investigations Report. Verizon Business.
https://www.verizon.com/business/resources/Tdb6/reports/2024-dbir-data-breach-investigations-report.pdf
Verizon Business, Hylender, C. D., Langlois, P., Pinto, A., & Widup, S. (2023).
2023 Data Breach Investigations Report.
https://www.verizon.com/business/resources/reports/dbir/
Warren, D. (2022). A Qualitative Study of Cloud Computing Security and Data Analytics.
https://www.proquest.com/openview/46b2c0927ae0ff56f198afc59d5b4df6/1.pdf?pq-origsite=gscholar&cbl=18750&diss=y
Appendix A
Appendix B
Loading Cisco Secure Workload Agent on Kubernetes
This appendix outlines the build process for the four main Cisco Secure
Workload components used in this test lab: the tenant, the agents specific to the
workloads, a secure connector, and an external orchestrator. Each component had to be
built to produce an operational system capable of deploying micro-segmentation into
Kubernetes.
A system administrator with the required CSW access must configure a SaaS or
hybrid appliance Secure Workload tenant. A Cisco cloud security administrator created
the SaaS tenant used for this research, and here is a summary of the tenant configuration
process.
Creating a SaaS tenant in Cisco Secure Workload involves several steps; the general
outline is:
1. Access Cisco Secure Workload:
- Log in to the Cisco Secure Workload account using admin credentials.
2. Navigate to Tenant Management:
- Once logged in, go to the "Tenant Management" section. This wizard is in the
administrative or settings area of the dashboard.
3. Create a New Tenant:
- Click on the option to create a new tenant. This might be labeled as "Add
Tenant," "New Tenant," or something similar.
4. Enter Tenant Details:
- Fill in the required information for the new tenant. The required data usually
includes:
- Tenant Name
- Description
- Contact Information
- Any other relevant details are specific to your organization's needs.
5. Configure Tenant Settings:
- Set up the necessary configurations for the tenant. This may include:
- Resource allocation (e.g., CPU, memory, storage)
- Network settings
- Security policies
- Access controls and permissions
6. Assign Users and Roles:
- Add users to the tenant and assign appropriate roles and permissions. Ensure
that users have access to perform their tasks within the tenant.
7. Review and Confirm:
- Review all the entered information and configurations to ensure accuracy.
- Confirm the creation of the tenant.
8. Deployment:
- Deploy the tenant, which may involve provisioning resources and applying
configurations.
Figure B1 contains the tenant external agent config server and collector IP
addresses. These addresses must be fully reachable by your workloads.
The Secure Connector installation is straightforward and one of the more
manageable components. A CentOS 7 VM must be loaded and operational in your
local lab network. The software package will set up an encrypted tunnel from your
CentOS 7 VM out of the local network, over the internet, and into the SaaS tenant for
CSW.
1. Access Cisco Secure Workload:
- Log in to your Cisco Secure Workload account using your credentials.
2. Navigate to Workload Management:
- Once logged in, navigate to the menu down the left side of the GUI, select
"Workloads," and then choose Secure Connector from the drop-down menu.
3. Click on the “Download Latest RTM” button:
- Once the file is downloaded to your local host, use SCP to move it to the
/home directory of the Centos 7 VM.
4. Install the RTM package:
- Run the install RPM
- The CSW tenant automatically configures the rpm file with target IP
addresses and security information for the tunnel.
- sudo rpm -i <package name>.rpm
5. Check the connector status in CSW:
- Navigate to the "Workloads" menu and choose "Secure Connector."
- The page will show the status of the connector tunnel in Green or Red.
Figure B2 is a screenshot of the operational Secure Connector, with the green “Active”
Tunnel Status.
The third CSW component required for this research lab is the CSW agents.
The agent will operate locally on the production Slingshot Linux workstation and be
installed into the Kubernetes cluster. Three agents will be used: two installed into the k8s
cluster and the last on the Slingshot workstation.
Each environment will require that a list of low-level Linux packages be installed
before trying to run the "Installer Script," which will be built by the customer and
downloaded from your SaaS CSW tenant. Figure B3 has a detailed list of the required
Linux packages that must be installed with the “apt install <package name>” command.
This next step is a critical prerequisite and could cost hours of troubleshooting if
forgotten or missed. On the Ubuntu server running 22.04 LTS, which was provisioned as
the master Kubernetes control node, the containerd registry configuration file
(/etc/containerd/config.toml) must be edited.
- Look through the file until you find the following line:
[plugins."io.containerd.grpc.v1.cri".registry]
- Under that section, set the registry configuration path:
config_path = "/etc/containerd/certs.d"
Figure B4 has a screen capture of the exact command. This registry setting must be
edited before running the “Installer Script” on the Kubernetes master node.
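As a sketch of how the edit could be scripted (the default path is containerd's standard configuration file, and the sed expression assumes a config_path line already exists in the registry section):

```shell
# Point the CRI registry config_path at /etc/containerd/certs.d.
# CONF can be overridden; it defaults to containerd's standard config file.
CONF="${CONF:-/etc/containerd/config.toml}"
if [ -w "$CONF" ]; then
  sed -i 's|^\( *\)config_path = .*|\1config_path = "/etc/containerd/certs.d"|' "$CONF"
fi
```

After the edit, restart containerd (sudo systemctl restart containerd) so the new registry path is picked up.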
With the above registry edit in place, the agent installed on Kubernetes should not have
any issues.
1. Navigate to the CSW menu on the left side of the GUI, select "Manage" and then
"Workloads," and select "Agents."
2. Choose the "Agent Script Installer" method to load the agents.
- From the drop-down menu, select the target OS for the agent deployment; in
this case, select Kubernetes.
- The tenant will be selected automatically; in the example below, it is khuss
- Download the automatically configured script and move it to the /home
directory of your master Kubernetes control node.
3. From the Kubernetes control/master node run the following kubectl command:
- kubectl get pod -v6
- The output of that command will have the path to the Kubernetes config file,
which is required to edit the bash command to execute the "Agent Script
Installer."
- The path to the Kubernetes config file should look like this:
/home/khuss/.kube/config
- Edit the bash command on the CSW installer page and run the script as shown
in Figure B5.
- $ bash tetration_installer_khuss_enforcer_kubernetes_tet-pov-rtp1.sh
--kubeconfig /home/khuss/.kube/config
4. Once the script has been run on the Kubernetes control node, execute the
following kubectl command to verify the installation:
- See the output in Figure B6 and look for the two tetration pods located in the
tetration namespace, and make sure they are operational with the “Running”
status.
5. Log back into the CSW tenant and navigate to "Manage," then to "Workloads"
and choose "Agents."
- Across the top of the GUI is a horizontal menu; choose "Agent List," the
following screen in Figure B7 will show the status of all properly installed
agents. Two agents are located in the Kubernetes cluster, and the last is loaded
onto the Slingshot Linux workstation.
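The kubectl verification command referenced in step 4 above is not reproduced in the text; a minimal sketch follows (the tetration namespace name comes from the text, while the exact command form is an assumption):

```shell
# List the CSW agent pods in the tetration namespace; both should show Running
kubectl get pods -n tetration
```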
The final component of the CSW portion of this test lab is the External
Orchestrator. The orchestrator configuration guide is available at
https://www.cisco.com/c/en/us/td/docs/security/workload_security/secure_workload/user-guide/3_9/cisco-secure-workload-user-guide-saas-v39/external-orchestrators.html#concept_684859
The service account must be named csw.read.only, and the five kubectl
commands required to set up the API connection and generate a token must be completed
before going to CSW to provision the Orchestrator.
The final command will generate a sizeable token string. Use that token in the
Cisco Secure Workload External Orchestrator installation wizard.
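The five kubectl commands themselves are defined in the configuration guide and are not reproduced here. Purely as a hypothetical sketch of the general pattern for a read-only service account and token (the role name, verbs, resources, and namespace below are assumptions, and kubectl create token requires Kubernetes v1.24 or later; consult the CSW guide for the exact commands and RBAC rules):

```shell
# Hypothetical sketch -- not the five commands from the CSW guide
kubectl create serviceaccount csw.read.only
kubectl create clusterrole csw-read-only --verb=get,list,watch \
  --resource=pods,services,nodes,endpoints
kubectl create clusterrolebinding csw-read-only-binding \
  --clusterrole=csw-read-only --serviceaccount=default:csw.read.only
kubectl create token csw.read.only   # prints the token string for the CSW wizard
```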
Open the CSW tenant and go to the menu on the left side of the GUI. Select
Manage > Workloads > External Orchestrator. Then, from the GUI screen for the
External Orchestrator, select Create Orchestrator. Follow the wizard to set it up. The
default port number is 10250 for the API configuration. The API connection and the
token created in Kubernetes will be needed to complete the Orchestrator setup.