IBM Redbooks
August 2024
SG24-8568-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
6.2.4 Misconfiguration and human errors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.3 Linux on IBM Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.3.1 Linux Distributions on IBM Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
6.4 Hardening Linux Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
6.4.1 Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
6.4.2 Network Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.4.3 User policies and access controls. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.4.4 Logging, audits and file integrity monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.4.5 File system security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.4.6 SIEM & EDR Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.4.7 Malware Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6.4.8 Backup strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.4.9 Consistent update strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.4.10 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.5 Best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.6 Develop Incident Response Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Chapter 11. Lessons Learned and Future Directions in Power System Security . . . 239
11.1 Lessons Learned from Real-World Breaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
11.1.1 Recommendations to Reduce Data Breach Costs . . . . . . . . . . . . . . . . . . . . . . 240
11.1.2 Summary of IBM X-Force Threat Intelligence Index 2024 . . . . . . . . . . . . . . . . 240
11.1.3 Best practices for data breach prevention. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
11.1.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
11.2 Basic AIX Security Strategies and Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . 241
11.2.1 Usernames and Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
11.2.2 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
11.2.3 Insecure Daemons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
11.2.4 Time Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
11.2.5 Patching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
11.2.6 Server Firmware and I/O Firmware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
11.2.7 Active Directory and LDAP Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
11.2.8 Enhanced Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
11.2.9 Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
11.2.10 A Multi-Silo Approach to Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
11.2.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
11.3 Fix Level Recommendation Tool for IBM Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
11.4 Physical security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
11.4.1 Key Physical Security Measures: A Layered Approach . . . . . . . . . . . . . . . . . . 247
11.4.2 Perimeter Security and Beyond . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
B.4 Anypoint Flex Gateway (Salesforce/Mulesoft) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
B.5 Active IBM i Security Ecosystem Companies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, DB2®, DS8000®, FlashCopy®, GDPS®, IBM®, IBM Cloud®, IBM FlashSystem®, IBM Instana™, IBM Security®, IBM Z®, IBM z Systems®, Instana®, POWER®, Power9®, PowerHA®, PowerPC®, PowerVM®, QRadar®, Redbooks®, Redbooks (logo)®, System z®, SystemMirror®, Tivoli®, WebSphere®, X-Force®, z Systems®, z/OS®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Ansible, Ceph, Fedora, JBoss, OpenShift, and Red Hat are trademarks or registered trademarks of Red Hat, Inc.
or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
A multi-layered security architecture is essential for protection. Key areas to focus on include:
– Hardware-Level Security: Prevent physical tampering and ensure data integrity.
– Virtualization Security: Isolate environments and control resource access.
– Management Tool Security: Secure hardware and cloud resources.
– Operating System Security: Continuously update for robust security.
– Storage Security: Protect data at rest and in transit.
– Networking Security: Prevent unauthorized access and data breaches.
This IBM Redbooks publication describes how the IBM Power ecosystem provides advanced
security capabilities at each of these layers. IBM Power systems are designed with security as
a core consideration.
At the hardware level, advanced technology includes tamper-resistant features built into the
processor to prevent unauthorized access and modifications, secure cryptographic engines to
provide strong encryption of data, and Trusted Boot to ensure that only authorized software
components are loaded during system startup.
At the virtualization level, the hypervisor – which manages virtual machines – is designed to
be secure and resistant to attacks. The hypervisor isolates workloads within a single physical
server, allowing for secure resource sharing within your infrastructure. The Hardware
Management Console (HMC) provides centralized management and control of Power
systems in a secure manner.
The operating systems that run on IBM Power servers – AIX, IBM i, and Linux on Power –
offer robust security features, including user authentication, access controls, and encryption
support. In addition, tools such as IBM PowerSC provide a comprehensive security and
compliance solution that helps manage security policies, monitor threats, and enforce
compliance.
Security also requires solid management and control. This book describes best practices
such as conducting regular security audits, keeping operating systems and applications
up-to-date with the latest security patches, and implementing strong user authentication and
authorization policies. Other critical elements include the implementation of data encryption
for both data at rest and in flight, and strong network security processes utilizing firewalls,
intrusion detection systems, and other security measures.
By combining these hardware, software, and management practices, IBM Power systems
provide a robust foundation for security in your IT environment.
Tim Simon is an IBM® Redbooks® Project Leader in Tulsa, Oklahoma, USA. He has over 40
years of experience with IBM, primarily in a technical sales role working with customers to help
them create IBM solutions to solve their business problems. He holds a BS degree in Math from
Towson University in Maryland. He has extensive experience creating customer solutions using
IBM Power, IBM Storage, and IBM System z® throughout his career.
Felipe Bessa is an IBM Brand Technical Specialist and Partner Technical Advocate on IBM
Power. He works for IBM Technology in Brazil and has over 25 years of experience in the areas
of research, planning, implementation, and administration of IT infrastructure solutions. Before
joining IBM, he was recognized as a Reference Client for IBM Power Technologies for SAP and
SAP HANA, IBM PowerVC, IBM PowerSC, Monitoring and Security, IBM Storage, and the Run
SAP Like a Factory (SAP Solution Manager) Methodology. He was chosen as an IBM
Champion for IBM Power for 2018 - 2021.
Hugo Blanco is an IBM Champion based in Madrid who has been working with Power systems
since 2008. He began his career as an instructor and has since taken on a variety of roles at
SIXE, an IBM Business Partner, gaining extensive experience across different functions. Hugo is
deeply passionate about AIX, Linux on Power, and various cybersecurity solutions. He has
contributed to the development of several IBM certification exams and actively participates in
Common Iberia, Common Europe, and TechXchange. He enjoys delivering technical talks on
emerging technologies and real-world use cases. Beyond his technical pursuits, he is also a
dancer, DJ, and event producer.
Carlo Castillo is a Client Services Manager for Right Computer Systems (RCS), an IBM
Business Partner and Red Hat partner in the Philippines. He has over thirty years of experience in
pre-sales and post-sales support, designing full IBM infrastructure solutions, creating pre-sales
configurations, performing IBM Power installation, implementation and integration services, and
providing post-sales services and technical support for customers, as well as conducting
presentations at customer engagements and corporate events. He was the very first IBM-certified
IBM AIX Technical Support engineer in the Philippines in 1999. As training coordinator during
RCS' tenure as an IBM Authorized Training Provider from 2007 to 2014, he also administered the
IBM Power Systems curriculum, and conducted IBM training classes covering AIX, PureSystems,
PowerVM, and IBM i. He holds a degree in Computer Data Processing Management from the
Polytechnic University of the Philippines.
Rohit Chauhan is a Senior Technical Specialist with expertise in IBM i architecture, working at
Tietoevry Tech Services, Stavanger, Norway, an IBM Business Partner and one of the largest IT
service providers in the Nordics. He has over 12 years of experience working on the IBM Power
platform with design, planning, and implementation of IBM i infrastructure, including high
availability and disaster recovery solutions for many customers during this tenure. Before his
current role, Rohit worked for clients in Singapore and the U.A.E. in technical leadership and
security roles in the IBM Power domain. He possesses rich corporate experience in architecting
solution designs, implementations, and system administration. He is also a member of Common
Europe Norway, with a strong focus on the IBM i platform and security. He was recognized as an
IBM Advocate, Influencer, and Contributor for 2024 through the IBM Rising Champions Advocacy
Badge program. He holds a bachelor's degree in Information Technology. He is an IBM-certified
technical expert and also holds the ITIL CDS certificate. His areas of expertise include overall
IBM i, the IBM Hardware Management Console (HMC), security enhancements, IBM PowerHA®,
systems performance analysis and tuning, BRMS, external storage, PowerVM, and providing
solutions to customers on the IBM i platform.
Gayathri Gopalakrishnan works for IBM India and has over 22 years of experience as a
technical solution and IT architect, working primarily in consulting. She is a results-driven IT
architect with extensive experience in spearheading the management, design, development,
implementation, and testing of solutions. A recognized leader, she applies high-impact
technical solutions to major business objectives. She is adept at working with management
to prioritize activities and achieve defined project objectives, and at translating business
requirements into technical solutions.
Samvedna Jha is a Senior Technical Staff Member in the IBM Power Systems organization,
Bengaluru, India. She holds a masters degree in Computer Application and has more than
twenty years of work experience. In her current role as Security Architect, IBM Power, she has
worldwide technical responsibility for the security and compliance requirements of Power
products. Samvedna is a recognized speaker at conferences and has authored blogs and
published disclosures. She is also the security focal point for the secure release process of
Power products.
Andrea Longo is a Partner Technical Specialist for IBM Power in Amsterdam, the
Netherlands. He has a background in computational biology research and holds a degree in
Science and Business Management from Utrecht University. He also serves as an IBM
Quantum Ambassador, preparing academia and industry leaders to become quantum-safe
and to experiment with the immense possibilities of the technology.
Ahmed Mashhour is a Power Technology Services Consultant Lead at IBM Saudi Arabia. He
is an IBM L2 certified Expert. He holds IBM AIX, Linux, and IBM Tivoli® certifications. He has
19 years of professional experience in IBM AIX and Linux systems. He is an IBM AIX
back-end SME who supports several customers in the US, Europe, and the Middle East. His
core expertise is in IBM AIX, Linux systems, clustering management, IBM AIX security,
virtualization tools, and various IBM Tivoli and database products. He authored several
publications inside and outside IBM, including co-authoring other IBM Redbooks publications.
He also hosted IBM AIX, Security, PowerVM®, IBM PowerHA, PowerVC, Power Virtual Server
and IBM Storage Scale classes worldwide.
Amela Peku is a Partner Technical Specialist with broad experience in leading technology
companies. She holds an MS in Telecommunication Engineering and is part of the IBM Power
team, working with Business Partners and customers to showcase the value of IBM Power
solutions. Previously, she provided technical support for Next Generation Firewalls, Webex,
and Webex Teams, focusing on performance and networking, and handled escalations in
close collaboration with engineering teams. She is certified in Networking, Security, and IT
Management.
Prashant Sharma is the IBM Power Technical Product Leader for the Asia Pacific region,
based in Singapore. He holds a degree in Information Technology from the University of
Teesside, England, and an MBA from the University of Western Australia. With extensive
experience in IT infrastructure enterprise solutions, he specializes in pre-sales activities,
client and partner consultations, technical enablement, and the implementation of IBM Power
servers, IBM i, and IBM Storage. He drives technical strategy and product leadership for IBM
Power Systems, ensuring the delivery of innovative solutions to diverse markets.
Vivek Shukla is a Technical Sales Specialist for IBM Power, Hybrid Cloud, AI, and Cognitive
Offerings in Qatar working for GBM. He has rich experience in Sales, Application
Modernization, Digital Transformation, Infrastructure Sizing, Cyber Security & consulting,
SAP HANA/Oracle/Core Banking. He is an IBM Certified L2 (Expert) Brand Technical
Specialist. He has over 22 years of IT experience in Technical Sales, Infrastructure
Consulting, IBM Power servers, AIX, IBM i and IBM Storage implementations. He has
hands-on experience on IBM Power servers, AIX, PowerVM, PowerHA, PowerSC, Requests
for Proposals, Statements of Work, sizing, performance tuning, root cause analysis, DR, and
mitigation planning. In addition to writing multiple IBM Power FAQs, he is also a Redbook
Author. He is a presenter, mentor, and profession champion accredited by IBM. He graduated
with a bachelor's degree (BTech) in electronics and telecommunication engineering from
IETE, New Delhi, and a master's degree (MBA) in information technology from IASE
University. Red Hat OpenShift, IBM Cloud® Paks, Power Enterprise Pools, and Hybrid Cloud
are among his areas of expertise.
Dhanu Vasandani is a Staff Software Test Engineer with over 13 years of experience,
specializing in AIX Operating System Security Testing at IBM Power Systems in Bangalore,
India. She holds a Bachelor of Technology degree in Computer Science and has been
instrumental in testing multiple AIX releases across various Power Server Models. In her
current role, Dhanu serves as the Component Lead for the AIX Operating System Security
Guild, overseeing various sub-components. She is responsible for conducting comprehensive
system testing for Pre-GA and Post-GA phases of multiple AIX releases across different
Power Server Models. Dhanu is known for her expertise in areas such as Encryption,
Trustchk, Audit, RBAC, and other security aspects, contributing significantly to IBM's
Lighthouse Community and Knowledge Centre. She is recognized for her proficiency in
identifying and addressing high-impact AIX defects within the ISST System organization,
ensuring the delivery of top-quality products to customers.
Henry Vo is an IBM Redbooks Project Leader with 10 years of experience at IBM. He has
technical expertise in business problem solving, risk and root-cause analysis, and writing
technical plans for business. He has held multiple roles at IBM, including project management,
ST/FT/ETE testing, back-end development, and DOL agent for New York. He is a certified IBM
z/OS Mainframe Practitioner, with credentials that include IBM Z® system programming, Agile,
and Telecommunication Development Jumpstart. Henry holds a master's degree in Management
Information Systems from the University of Texas at Dallas.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
The chapter delves into the concept of cyber resilience, focusing on the zero trust security
model, which mandates continuous verification of users, devices, and network components in
an environment with both internal and external threats. It also provides an in-depth look at
IBM's security approach, showcasing how its advanced technologies and methodologies are
designed to defend against various threats.
In summary, this chapter offers a comprehensive analysis of the security and cybersecurity
challenges faced by organizations, presenting detailed insights into strategies and
technologies to mitigate these threats and enhance overall security resilience, with a
particular emphasis on IBM's response to these challenges.
At the hardware level, built-in protections that prevent physical tampering and ensure data
integrity are needed. Virtualization technologies need to enhance security by isolating
environments and controlling resource access. The security of the hypervisor, a critical
component in virtualized environments, is paramount in preventing attacks that could
compromise multiple virtual machines. In the IBM Power environment, logical partitioning
(LPAR) provides strong isolation between different workloads on the same physical hardware,
enhancing security. Figure 1-1 shows how IBM Power10 works to provide protection at every
layer.
Management tools like the Hardware Management Console (HMC) and Cloud Management
Console (CMC) play a vital role in securing hardware and cloud resources. Operating
systems need to continuously provide better security features as they are often a vector of
attack and their contribution is critical to the overall security posture of a system.
Storage security involves protecting data at rest and in transit through techniques such as
encryption and access controls. Methods for creating secure, resilient copies of data – known
as safeguarded copy – and data resiliency are needed to protect against data corruption or
loss. Finally, networking security is integral to overall security with a focus on secure network
design, monitoring, and protection mechanisms to prevent unauthorized access and data
breaches.
1 https://hc32.hotchips.org/assets/program/conference/day1/HotChips2020_Server_Processors_IBM_Starke_POWER10_v33.pdf
At the heart of IBM's approach is the integration of security throughout its systems, building trust
and resilience from the ground up. This includes safeguarding firmware integrity with secure
boot processes and bolstering data protection through hardware-based encryption acceleration.
IBM goes beyond basic protection with a proactive cybersecurity strategy. It offers secure
storage solutions and advanced threat prevention and detection mechanisms. In the event of an
incident, IBM provides rapid response and recovery options to minimize downtime and
effectively manage operational risks.
IBM simplifies regulatory compliance with continuous compliance and audit capabilities.
Automated monitoring and enforcement tools ensure adherence to industry standards, while
unified security management tools facilitate consistent governance across diverse IT
environments.
Collaborating closely with ecosystem partners, IBM integrates security across hybrid cloud
environments, networks, software systems, architectures, and chip designs. This comprehensive
approach ensures holistic protection and resilience across all facets of IT infrastructure.
In summary, IBM Infrastructure sets a high standard for security excellence by embedding
advanced features into its solutions and equipping businesses to address both current and
future cybersecurity challenges with confidence. Through collaborative efforts with ecosystem
partners and a focus on regulatory compliance, IBM delivers secure, resilient, and compliant
infrastructure solutions, empowering businesses to thrive in the digital age amidst evolving cyber
threats.
The necessity for digital transformation spans businesses of all sizes, from small enterprises
to large corporations. This message is conveyed clearly through virtually every keynote, panel
discussion, article, or study related to how businesses can remain competitive and relevant
as the world becomes increasingly digital. However, there are many considerations, with
security being one of the most important. Ensuring that the outcome of digital transformation
is more secure than before, and that the transition process is handled securely, is crucial.
Data protection and privacy are paramount as data flows between data centers, cloud
services, and edge devices. Ensuring data is encrypted in transit and at rest across various
platforms is essential. Proper encryption and access controls safeguard stored data, while
compliance with data sovereignty regulations ensures data is processed and stored
according to regional laws. Businesses need to implement end-to-end encryption, enforce
access controls, and stay updated on data protection regulations.
Access control and identity management become more complex in hybrid environments.
Consistent identity and access management (IAM) across on-premises and cloud
environments is crucial. Managing and monitoring privileged access helps prevent
unauthorized access and insider threats. Strong authentication methods, such as multi-factor
authentication (MFA), enhance security by adding additional layers of protection.
Implementing robust IAM solutions, continuously monitoring access, and ensuring the use of
MFA are key steps.
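To make one common MFA factor concrete, the following minimal Python sketch computes a time-based one-time password (TOTP) as standardized in RFC 6238, the scheme behind most authenticator apps. The Base32 secret shown is a placeholder for illustration only, not a real credential.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # Decode the shared secret and derive the moving factor from the clock.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation (RFC 4226).
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, for demonstration only

A server verifies a submitted code by computing the same value (typically allowing one period of clock skew), so an intercepted password becomes useless seconds after it is displayed.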
Visibility and monitoring across hybrid and multi-cloud environments are critical for detecting
anomalies and threats. Achieving comprehensive visibility involves implementing unified
monitoring solutions that provide a holistic view of the entire infrastructure. Consistent logging
and auditing mechanisms are necessary to track activities and support incident response.
Network monitoring helps detect and respond to threats in real time. Organizations should
invest in integrated monitoring tools, establish thorough logging practices, and deploy
real-time network monitoring systems.
Expanding operations beyond the traditional data center offers numerous benefits, but it also
introduces a range of security challenges that organizations must proactively address. By
prioritizing comprehensive data protection, robust access control, enhanced visibility,
regulatory compliance, advanced threat management, consistent configuration, and strong
network security, businesses can mitigate these risks and fully leverage the advantages of
hybrid and multi-cloud environments. Security in these complex infrastructures is an ongoing
process that requires vigilance, adaptability, and a commitment to staying ahead of emerging
threats.
One of the most critical security challenges in cloud adoption is the potential for data
breaches and data loss. Sensitive information stored in the cloud can be an attractive target
for cyber criminals. Unauthorized access can lead to the exposure of confidential data,
resulting in financial losses, reputational damage, and legal repercussions. To mitigate these
risks, businesses should implement end-to-end encryption for data at rest and in transit,
enforce strict access control policies, and conduct regular security audits and vulnerability
assessments.
Compliance and regulatory issues add another layer of complexity to cloud security. Cloud
environments often span multiple jurisdictions, each with its own set of regulations and
compliance requirements. Ensuring that cloud operations comply with laws such as GDPR,
HIPAA, and CCPA can be complex and resource-intensive. Organizations must stay informed
about relevant regulations, utilize compliance management tools, and engage third-party
auditors to verify compliance.
Insider threats, whether from malicious intent or inadvertent actions, pose a significant risk.
Employees, contractors, or third-party vendors with access to cloud systems can potentially
misuse their access, leading to data leaks or disruptions. To counter these threats,
businesses should implement regular security training programs, utilize monitoring and
anomaly detection systems, and apply the principle of least privilege to limit access based on
necessity.
The shared responsibility model in cloud security, where both the cloud provider and the
customer share security responsibilities, can lead to confusion and security gaps. Clear
definitions of security responsibilities in contracts, regular reviews of cloud provider security
documentation, and ongoing collaboration between IT teams and cloud providers are
essential to avoid misunderstandings and ensure comprehensive security coverage.
Application Programming Interfaces (APIs) are essential for cloud integration and operations
but can also introduce vulnerabilities. Poorly secured APIs can become entry points for
attackers. To secure APIs, organizations should adopt secure coding practices, use API
gateways to manage and secure API traffic, and implement rate limiting to prevent abuse.
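Rate limiting is usually enforced at an API gateway, but the underlying idea is simple. The following Python sketch shows a token-bucket limiter; the rate and capacity values are illustrative assumptions, not recommendations.

import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` tokens per second."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # roughly 5 requests/s with bursts of 10
for i in range(12):
    print(i, "allowed" if bucket.allow() else "rejected (HTTP 429)")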
Interfaces and APIs, as gateways to cloud services, need robust security measures. If not
properly secured, they can be exploited to gain unauthorized access or disrupt services.
Following best practices in API design and security, conducting regular penetration testing
and vulnerability assessments, and implementing strong authentication and authorization
measures are necessary to secure these critical components.
Adopting cloud technology offers numerous benefits but also introduces a range of security
challenges that organizations must proactively address. By implementing robust security
measures, maintaining regulatory compliance, and fostering a culture of security awareness,
businesses can mitigate these risks and fully leverage the advantages of the cloud. Security
in the cloud is an ongoing process that requires vigilance, adaptability, and a commitment to
staying ahead of emerging threats.
Cyber attacks, such as phishing, malware, and advanced persistent threats, disrupt
operations and compromise data. To combat these threats, organizations must implement
robust firewalls, intrusion detection and prevention systems, and leverage threat intelligence.
Ransomware, which encrypts data and demands a ransom for its release, has become
particularly destructive. Preventing ransomware involves regular software updates, strong
email filtering, and anti-phishing solutions. Regular data backups stored securely and offline,
along with tested recovery processes, are critical for mitigating ransomware impact. IBM
Storage Defender is a storage software solution that can help protect your data and
accelerate recovery in the event of a cyber attack or other catastrophic events. It includes
immutable backups, early threat detection, data copy management, and automated recovery
capabilities.
Employee training and awareness are essential, as many attacks exploit human
vulnerabilities. Regular training can help employees recognize phishing attempts and follow
best practices for data security, acting as a front line defense against cyber threats.
Endpoint security is crucial as employees access corporate resources from various devices.
Advanced endpoint protection solutions, including anti-virus software, endpoint detection and
response (EDR) tools, and mobile device management (MDM) systems safeguard endpoints
from malicious activity.
Network segmentation limits the spread of ransomware and other threats. Dividing the
network into smaller segments and implementing strong access controls and monitoring can
contain damage and prevent lateral movement by attackers.
Incident response planning is vital for minimizing the impact of attacks. An up-to-date incident
response plan with clear communication protocols, roles, and procedures for isolating
affected systems and restoring operations is essential. Regular drills ensure the readiness of
the response team.
Cyber resilience encompasses strategies to prepare for, respond to, and recover from cyber
incidents effectively. This includes comprehensive risk management to prioritize critical
assets and threats, incident response planning with clear protocols, regular data backups,
and continuous improvement through assessments and updates.
The zero trust model challenges traditional security approaches by assuming no implicit trust
based on network location. Instead, it verifies and validates all devices, users, and
applications attempting to connect, regardless of their location. Key principles include explicit
verification, least privilege access, micro-segmentation to limit lateral movement, and
continuous monitoring of network traffic and user behavior.
By integrating cyber resilience with zero trust principles, organizations enhance their ability to
detect, respond to, and mitigate cyber threats. Continuous monitoring and analysis of network
activity and user behavior enable prompt threat detection and response. Dynamic, risk-based
access controls based on real-time assessments improve security without hindering
productivity. Robust backup and recovery measures combined with strict access controls
ensure data integrity and availability, even in the event of a breach.
In conclusion, cyber resilience and the zero trust model are essential for organizations
striving to fortify their security posture amidst a complex threat landscape. By adopting
proactive strategies and integrating zero trust principles into their security framework,
businesses can safeguard critical assets, maintain operational continuity, and mitigate the
impact of cyber attacks. These frameworks not only strengthen defenses but also foster a
culture of security awareness and readiness across the organization, ensuring ongoing
protection against evolving cyber threats.
Cybersecurity Frameworks
There are groups of regulations that address cybersecurity requirements, such as:
NIST Cybersecurity Framework
Developed by the National Institute of Standards and Technology (NIST), this framework
provides guidelines for improving cybersecurity practices. While not mandatory, many
organizations adopt it to align with best practices and regulatory expectations.
Federal Information Security Management Act (FISMA)
In the United States, FISMA requires federal agencies and contractors to implement
information security programs and comply with NIST standards.
Industry-Specific Regulations
There are specific industry requirements that address data management and security, such
as:
Health Insurance Portability and Accountability Act (HIPAA)
For the healthcare industry in the U.S., HIPAA sets standards for protecting patient health
information and requires secure handling and storage of sensitive data.
Payment Card Industry Data Security Standard (PCI DSS)
This set of standards applies to organizations handling credit card information and
mandates secure processing, storage, and transmission of payment data.
Overall, government regulations help establish a baseline for security practices, protect
sensitive information, and promote trust in digital systems. Organizations must understand
and comply with these regulations to safeguard their operations and avoid legal
repercussions.
Figure 1-2 illustrates how the IBM Power ecosystem with IBM Power10 processors provides
protection at every layer.
2 https://events.ibs.bg/events/itcompass2021.nsf/IT-Compass-2021-S06-Power10.pdf
By adopting these practices, users of IBM Power systems can not only bolster their defenses
against current threats but also foster a more resilient posture to adapt to future security
challenges.
1.4.2 Hardware
The security of physical hardware is fundamental to protecting the overall integrity of IBM
Power systems. This section explores the multiple facets of hardware security, from the
physical measures used to protect equipment to the embedded technologies designed to
safeguard data and systems from cyber threats.
Access Controls
Access controls are designed to ensure that only authorized individuals can enter specific
physical areas where sensitive hardware is located. There are multiple types of access
control systems that work with different authentication methodologies. These systems can be
broadly categorized into:
Biometric Systems
Biometric systems use fingerprint scanners, retina scans, and facial recognition
technologies to provide a high level of security by verifying the unique physical
characteristics of individuals.
Electronic Access Cards
Technologies such as RFID cards, magnetic stripe cards, and smart cards grant access
based on credentials stored on the card. Many of these can be managed centrally to
update permissions as needed.
PIN Codes and Keypads
Requiring the entry of personal identification numbers (PINs) into keypads provides a
method of access control that can be easily updated and managed remotely.
Surveillance Systems
Surveillance systems are essential for monitoring physical environments to detect, deter, and
document unauthorized activities. This section introduces the purpose and strategic
placement of surveillance systems within power system facilities. Depending on your
requirements, you may need multiple complementary surveillance systems. Here are some
types of surveillance technologies you can consider:
CCTV Cameras
Closed-circuit television cameras provide identification and monitoring capabilities.
Different types, such as dome, bullet, and PTZ (pan-tilt-zoom) cameras, should be
strategically placed and monitored.
Motion Detectors
Motion detectors that trigger alerts or camera recordings can enhance the efficiency of
surveillance by focusing resources on areas where activity is detected.
Advanced Surveillance Technologies
Newer technologies like thermal imaging and night vision cameras, which capture video in
low light or through obstructions, can enhance your around-the-clock surveillance
capabilities.
Important: Data management and privacy considerations are involved with the collection
and storage of surveillance information. It is important to manage surveillance footage
securely, including storage, access controls, and compliance with privacy laws and
regulations to protect the rights of individuals.
TPM enhances Secure Boot by recording measurements of the system’s firmware and
configuration during the startup process. Through an attestation process the TPM can
provide a signed quote that can be used to verify the system integrity and firmware
configuration at any time.
TPM also provides key storage and management. It safeguards cryptographic keys at a
hardware level, preventing them from being exposed to outside threats. These keys can be
used for encrypting data and securing communications (for example, during PowerVM Live
Migration).
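The measurement chain that a TPM records can be illustrated in a few lines. This conceptual Python sketch reproduces the PCR-extend rule used during a measured boot; it models the arithmetic only and does not talk to a real TPM.

import hashlib

def pcr_extend(pcr: bytes, component: bytes) -> bytes:
    # TPM 2.0 extend rule: PCR_new = H(PCR_old || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCRs start at all zeros at power-on
for component in (b"firmware image", b"boot loader", b"kernel"):
    pcr = pcr_extend(pcr, component)

print(pcr.hex())  # compare against the value reported in the TPM's signed quote

Because each value depends on every earlier measurement, a verifier that recomputes this chain from known-good component hashes can detect any substitution anywhere in the boot sequence.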
Secure Boot
IBM Power10 and Power9® servers incorporate Secure Boot, a critical security feature that
ensures the integrity of the system's firmware and operating system at startup. This
safeguards against unauthorized modifications and potential attacks, providing a robust
foundation for secure operations.
Secure Boot verifies the integrity of the firmware, boot loader, and operating system to
prevent unauthorized code from running during the boot process. It ensures that only trusted
software signed with a valid certificate is executed, protecting against rootkits and boot-level
malware that could compromise the system’s security before the operating system starts.
Secure Boot utilizes digital signatures and certificates to validate the authenticity and integrity
of firmware and software components. Each component in the boot process is signed with a
cryptographic key, and the system verifies these signatures before allowing the component to
execute.
Organizations can manage keys and certificates used in Secure Boot through configuration
settings, enabling them to control which software and firmware are trusted.
Secure Boot helps prevent unauthorized code execution during the boot process, protecting
the system from early-stage attacks and aiding in meeting compliance requirements for
security standards and regulations that mandate secure boot processes.
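To make the verification step concrete, here is a minimal Python sketch, using the third-party cryptography package, of the check a boot stage performs before handing control to the next component. The RSA PKCS#1 v1.5 scheme shown is one common signing choice, assumed here purely for illustration.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_component(image: bytes, signature: bytes, trusted_pub_pem: bytes) -> bool:
    """Return True only if `image` was signed by the trusted vendor key."""
    public_key = serialization.load_pem_public_key(trusted_pub_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        # Refuse to execute: the image was altered or signed with an untrusted key.
        return False

In a real firmware stack this logic runs before any unverified code executes, and the trusted public key is anchored in hardware so that an attacker cannot simply replace it.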
Hardware Encryption
Hardware encryption involves using dedicated processors that perform cryptographic
operations directly within the hardware itself, enhancing security by isolating the encryption
process from software vulnerabilities.
Regular vulnerability assessments are vital for identifying weaknesses in hardware that could
be exploited by attackers or fail under operational stress. These assessments should include
physical inspections, cybersecurity evaluations, and testing against environmental and
operational conditions. Techniques such as penetration testing and red team exercises can
simulate real-world attack scenarios to test the resilience of hardware components.
Protecting your environment should include the use of continuous monitoring technologies,
including hardware sensors and network monitoring tools, which play a critical role in the
early detection of potential failures or security breaches.
Regular reviews ensure that risk management strategies and practices stay relevant as new
threats emerge and business needs change. This involves re-evaluating and updating risk
assessments, mitigation strategies, and response plans at defined intervals or after
significant system changes.
Having detailed incident response and recovery plans is essential for minimizing downtime
and restoring functionality in the event of hardware failure or a security incident. These plans
need to include roles and responsibilities, communication strategies, and recovery steps.
Training programs for IT staff, operators, and other stakeholders involved in hardware
management are crucial for maintaining system security. Effective documentation and
reporting are also fundamental to the risk management process. It is important to be
transparent in reporting to stakeholders and regulatory bodies.
1.4.5 Virtualization
Virtualization has become a cornerstone of modern IBM Power systems, enabling enhanced
flexibility and efficiency. However, the shift to virtual environments also introduces specific
security challenges that must be addressed to protect these dynamic and often complex
systems.
The function that allows virtualization in a system is called a hypervisor, also known as a
virtual machine monitor (VMM). The hypervisor is a type of computer software that creates
and runs virtual machines, which are also called logical partitions (LPARs). The hypervisor
presents the guest operating systems with a virtual operating platform and manages the
execution of the guest operating systems. Hypervisors are generally classified into two types.
A Type 1 hypervisor is a native hypervisor that runs on bare metal. In contrast, a Type 2
hypervisor is hosted on an underlying operating system. The Type 1 hypervisor is considered
more secure as it can provide better isolation between the VMs and generally offers better
performance to those VMs.
The following list provides some security implications that need to be addressed by the
virtualization layer:
Isolation Failures
As there are multiple VMs running at any one time on the same physical hardware, it is
imperative that the hypervisor maintain strict isolation between virtual machines to prevent
a breach in one VM from compromising others.
Hypervisor Security
The hypervisor itself is a high-value target: a compromise at this layer can expose every
partition it hosts, so hypervisor firmware must be kept current and administrative access
to it tightly controlled.
HMC
The Hardware Management Console (HMC) is used to configure and manage IBM Power
systems. Its capabilities encompass logical partitioning, centralized hardware management,
Capacity on Demand (CoD) management, advanced server features, redundant and remote
system supervision, and security.
The HMC provides a reliable and secure console for IBM Power systems. It is built as an
appliance on a highly secured system, tied to specific hardware, and not compatible with
other systems. This stringent build process includes incorporating advanced hardware and
firmware security features.
CMC
The Cloud Management Console (CMC) allows you to securely view information and gain
insights about your Power Systems infrastructure across multiple locations. Dynamic views of
performance, inventory, and logging for your complete Power enterprise—whether
on-premises or off-premises—simplify and unify information in a single location. CMC
provides consolidated information and analytics, which can be key enablers for the smooth
operation of infrastructure. Hosted on IBM Cloud, the CMC is a highly secure cloud-based
service accessible from mobile devices, tablets, and PCs.
– PowerVM Virtual I/O Server: 4.1.0.0 or later; 3.1.4.10 or later; 3.1.3.10 or later;
3.1.2.30 or later; 3.1.1.50 or later
– IBM i: 7.5 or later; 7.4 TR5 or later; 7.3 TR11 or later
– Red Hat Enterprise Linux: 9.0 or later; 8.4 or later
– SUSE Linux Enterprise Server: 15.3 or later; 12.5 or later
A full list of operating systems that run on IBM Power is also available at
https://www.ibm.com/power#Operating+systems.
1.4.8 Storage
An enterprise's data is a critical resource and must be available when needed. Storage
systems play a crucial role in data management, ensuring that vast amounts of operational
and historical data are securely stored, readily accessible, and efficiently managed. This
section delves into the various aspects of storage technology, highlighting types of storage
architectures, security measures, and best practices for managing storage on Power systems.
Storage topologies
There are multiple methods of connecting storage to your servers. The different options have
evolved over time to meet different requirements and each type has benefits and
disadvantages. They also vary in performance, availability, and price.
In summary, direct-attached storage (DAS) offers high performance and simplicity but can be
limited in scalability and sharing capabilities. Its benefits make it suitable for scenarios where
high speed and control are priorities, while its disadvantages suggest it may not be ideal for
environments needing extensive collaboration or large-scale storage expansion.
In summary, network-attached storage (NAS) offers centralized, accessible, and scalable
storage solutions suitable for a wide range of environments, from home use to enterprise
settings. It excels in providing file sharing and backup capabilities but may face limitations in
performance and complexity as needs grow. Proper network infrastructure and security
measures are crucial for optimizing NAS performance and protecting data.
Cloud Storage
Cloud storage refers to the practice of storing data on remote servers that can be accessed
over the internet. Providers manage these servers and offer various services for storing,
managing, and retrieving data. This model contrasts with traditional on-premises storage
solutions, where data is stored locally on physical devices. Cloud storage can be
characterized as:
Usage
– Small and Medium Businesses (SMBs): SMBs leverage cloud storage for file sharing,
collaboration, and remote work. It provides a cost-effective way to scale storage needs
without investing in physical infrastructure.
– Large Enterprises: Enterprises use cloud storage for scalable data storage solutions,
disaster recovery, and global access. It supports extensive data needs, facilitates
collaboration, and integrates with various enterprise applications.
– Developers and IT Professionals: Cloud storage is used for hosting applications,
managing databases, and providing scalable storage solutions for big data and
analytics.
Benefits
– Scalability: Cloud storage offers virtually unlimited storage capacity. Users can scale
their storage up or down based on their needs without needing to invest in physical
hardware.
– Accessibility: Data stored in the cloud can be accessed from anywhere with an internet
connection. This supports remote work, collaboration, and access from multiple
devices.
– Cost-Effectiveness: Typically, cloud storage operates on a pay-as-you-go model,
allowing users to pay only for the storage they use. This reduces upfront costs
associated with purchasing and maintaining physical storage hardware.
– Automatic Updates and Maintenance: Cloud providers handle software updates,
security patches, and hardware maintenance, freeing users from these tasks and
ensuring that the storage environment is up-to-date.
– Disaster Recovery: Many cloud storage services include built-in redundancy and
backup solutions, providing enhanced data protection and recovery options in case of
data loss or system failures.
Disadvantages
– Security and Privacy: Storing data off site introduces concerns about data security and
privacy. Users must trust cloud providers to protect their data from breaches and
unauthorized access. Encryption and other security measures are essential but may
not be foolproof.
Cloud storage offers flexible, scalable, and accessible solutions suitable for personal,
business, and enterprise needs. Its benefits include scalability, cost-effectiveness, and
automatic maintenance, making it an attractive option for modern data management.
However, concerns about security, reliance on internet connectivity, ongoing costs, and
potential vendor lock-in are important considerations that users must address when opting for
cloud storage solutions.
Securing storage systems involves a comprehensive approach that includes data encryption,
robust access controls, regular backups, physical security, network protections, monitoring,
compliance, and vendor management. By addressing these considerations, organizations
can significantly reduce the risk of data breaches, loss, and other security incidents.
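As a small illustration of encryption at rest, the following Python sketch (using the third-party cryptography package) encrypts data with a symmetric key before it is written to storage. In production the key would be fetched from a key manager or HSM rather than generated and held beside the data, as is done here for brevity.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice: retrieve from a key-management service
cipher = Fernet(key)

plaintext = b"sensitive business records"
token = cipher.encrypt(plaintext)  # authenticated encryption (AES-CBC plus HMAC)
open("records.enc", "wb").write(token)

assert cipher.decrypt(token) == plaintext

Authenticated encryption matters here: tampering with the stored ciphertext causes decryption to fail loudly instead of silently returning corrupted data.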
In summary, Safeguarded Copy refers to backup copies of data that are specifically protected
to ensure their integrity, security, and reliability. This involves creating consistent and reliable
backups, encrypting and securing backup data, protecting against threats like ransomware,
and automating management processes. By implementing Safeguarded Copy,
organizations can ensure that their data backups are robust, secure, and capable of
supporting effective disaster recovery and data protection strategies.
Important: Safeguarded Copy is not just a physical copy of the data. It involves automation
and management to take regular copies, validate them, and store them so that they cannot
be modified. Of equal importance is the ability to quickly recognize when your data has been
compromised and to recover to the last good state. It also involves business processes for
recovering applications and databases so as to minimize data loss.
IBM Storage provides a Safeguarded Copy capability in both the IBM DS8000® and the IBM
FlashSystem® systems. For more information on the IBM solutions see:
– IBM Storage DS8000 Safeguarded Copy: Updated for DS8000 Release 9.3.2,
REDP-5506
– Data Resiliency Designs: A Deep Dive into IBM Storage Safeguarded Snapshots,
REDP-5737
– Cyber Resiliency with IBM Storage Sentinel and IBM Storage Safeguarded Copy,
SG24-8541
1.4.9 Networking
Security considerations for networking involve several key aspects to protect data integrity,
confidentiality, and availability across networked systems. Whether you use physical network
connections or virtualized network functions, the considerations are generally the same.
Here are some of the most essential:
Addressing these considerations helps build a robust network security posture and protect
against various cyber threats.
Workloads on the Power10 server see significant benefits from improved cryptographic
accelerator performance compared to previous generations. Specifically, the Power10 chip
supports accelerated cryptographic algorithms such as AES, SHA2, and SHA3, resulting in
considerably higher per-core performance for these algorithms. This enhancement allows
features like AIX Logical Volume Encryption to operate with minimal impact on system
performance.
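A rough way to observe per-core hash throughput on any LPAR, Power10 or otherwise, is a micro-benchmark such as the Python sketch below. Absolute figures depend heavily on the interpreter and its underlying libraries, so treat the numbers as indicative rather than as a direct measure of the hardware accelerators themselves.

import hashlib, time

def throughput(algorithm: str, megabytes: int = 256) -> float:
    """Return approximate hashing speed in MiB/s for the given algorithm."""
    buf = b"\0" * (1 << 20)  # 1 MiB buffer
    h = hashlib.new(algorithm)
    start = time.perf_counter()
    for _ in range(megabytes):
        h.update(buf)
    return megabytes / (time.perf_counter() - start)

for alg in ("sha256", "sha3_256"):
    print(f"{alg}: {throughput(alg):.0f} MiB/s on this core")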
Delaying quantum-safe encryption (QSE) adoption could have severe consequences. Legacy cryptographic systems left
unaltered could be compromised in the event of a successful quantum attack, exposing
sensitive data and risking confidential business transactions and individual privacy. Financial
institutions, critical infrastructure providers, and government agencies would face significant
challenges in maintaining operational integrity and confidentiality. Therefore, prioritizing QSE
implementation is crucial for long-term cybersecurity resilience.
Power10 Implementation
Power10 processors support these quantum-safe algorithms through:
Crypto Engines: Multiple engines per core enable efficient execution of cryptographic
operations.
Software Updates: The architecture allows updates to cryptographic libraries, ensuring the
integration of new quantum-safe algorithms as they become standardized.
Power10’s design and capabilities ensure robust security against future quantum threats by
leveraging hardware acceleration and flexible software updates, maintaining high-security
standards as the cryptographic landscape evolves.
Workloads on the Power10 benefit from cryptographic algorithm acceleration, enabling much
higher per-core performance than POWER9 processor-based servers for algorithms like
Advanced Encryption Standard (AES), SHA2, and SHA3. Features like AIX Logical Volume
Encryption can be activated with minimal performance overhead thanks to this performance
enhancement.
With four times as many AES encryption engines as its predecessor, the Power10 processor
is designed to offer noticeably faster encryption performance. Power10 advances beyond
IBM POWER9 processor-based servers, with support for today's most stringent standards as
well as emerging cryptographic standards, including post-quantum and fully homomorphic
encryption, and it introduces additional improvements to container security. Transparent
memory encryption uses hardware features to simplify encryption and support end-to-end
security without compromising performance.
These coprocessors enable you to accelerate cryptographic processes that safeguard and
secure your data, while protecting against a wide variety of attacks. The IBM 4769, 4768 and
4767 HSMs deliver security-rich, high-speed cryptographic operations for sensitive business
and customer information with the highest level of certification for commercial cryptographic
devices.
Cryptographic Coprocessor cards relieve the main processor of cryptographic tasks. The
IBM HSMs have a PCIe local-bus-compatible interface and are tamper-responding,
programmable cryptographic coprocessors. Each coprocessor contains a CPU, encryption
hardware, RAM, persistent memory, a hardware random number generator, a time-of-day
clock, infrastructure firmware, and software. Their specialized hardware performs AES, DES,
3DES, RSA, ECC, AESKW, HMAC, DES/3DES/AES MAC, SHA-1, SHA-224 to SHA-512,
SHA-3, and other cryptographic processes. The coprocessor design protects your
cryptographic keys and any sensitive customer applications.
The Cryptographic Hardware Initialization and Maintenance (CHIM) workstation connects via
secure sessions to the cryptographic coprocessors to let authorized personnel perform the
following tasks:
– View coprocessor status
– View and manage coprocessor configuration
– Manage coprocessor access control (user roles and profiles)
– Generate and load coprocessor master keys
– Create and load operational key parts
The following are the requirements to use CHIM to manage IBM 4769 Cryptographic
Coprocessor(s) located in IBM i systems:
Installation of the following products (with appropriate PTF levels):
– 5770SS1 option 35 - CCA Cryptographic Service Provider
– 5733CY3 - Cryptographic Device Manager
– 5733SC1 option 1 - OpenSSH, OpenSSL, zlib
The Secure Shell (SSH) server daemon must be active (use STRTCPSVR *SSHD), must
be configured to allow local port forwarding from the CHIM workstation to the CHIM
catcher port (which defaults to 50003) on localhost, and must have logging configured for
at least the INFO level (the default). A sketch of the relevant configuration follows this list.
The CHIM catcher must be active (use STRTCPSVR *CHIM). The CHIM catcher will not
start successfully if the previous requirements are not met.
Cryptographic device descriptions must be created for each IBM 4769 Cryptographic
Coprocessor being managed (use CRTDEVCRP) and must be in *ACTIVE status (use
VRYCFG or WRKCFGSTS).
The IBM i user profile used when authenticating from the CHIM workstation must have
*IOSYSCFG special authority and have *USE authority for the cryptographic device
descriptions for each IBM 4769 Cryptographic Coprocessor being managed.
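The SSH requirement above corresponds to standard OpenSSH sshd_config directives. As a
hedged illustration, the relevant lines in /etc/ssh/sshd_config on the IBM i system are:
AllowTcpForwarding yes
LogLevel INFO
Conceptually, the forwarded session that the CHIM workstation establishes is equivalent to:
ssh -L 50003:localhost:50003 chimuser@ibmi.example.com
where chimuser and ibmi.example.com are illustrative placeholders.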
The CHIM catcher is controlled like all other TCP servers on IBM i. The STRTCPSVR, ENDTCPSVR,
and CHGTCPSVR commands can be used to manage the CHIM catcher. The server application
value for CHIM is *CHIM. The CHIM catcher port is configured with service name “chim” which
is set to port 50003. The CHIM catcher will only listen for incoming connections on localhost.
The CHIM catcher will end itself if no server activity occurs for 1 hour.
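As a hedged sketch of the CL commands behind these requirements (CRP01 is an illustrative
device description name; see the command help for additional parameters such as the
hardware resource name):
CRTDEVCRP DEVD(CRP01)
VRYCFG CFGOBJ(CRP01) CFGTYPE(*DEV) STATUS(*ON)
STRTCPSVR SERVER(*CHIM)
The catcher can later be stopped with ENDTCPSVR SERVER(*CHIM).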
Benefits
IBM PCIe Cryptographic Coprocessors provide you the ability to:
Keep data safe and secure
Safeguard data with a tamper-responding design and sensors that protect against module
penetration and power or temperature manipulation attacks.
Choose your platform
Available on select IBM z Systems® servers, on z/OS® or Linux; IBM LinuxONE Emperor,
Rockhopper; IBM Power servers; and x86-64 servers with certain RHEL releases.
Note: At the time of this publication, IBM Power supports both the 4769 and 4767 HSMs.
The 4769 is currently available, while the 4767 has been withdrawn from marketing.
The remainder of this section covers the 4769 Cryptographic Coprocessor, which is the
currently available HSM option for IBM Power.
The IBM 4769 is available as FC EJ35, Customer Card Identification Number (CCIN) C0AF
(without blind-swap cassette custom carrier) and as FC EJ37, CCIN C0AF (with blind-swap
cassette custom carrier) on IBM Power10 servers, either on IBM AIX, IBM i, or Power Linux
(Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES)) operating
systems. It is also available as FC EJ35 and EJ37 on IBM Power9® servers, either on IBM
AIX or IBM i.
The IBM 4769 hardware provides significant performance improvements over its
predecessors while enabling future growth. The secure module contains redundant IBM
PowerPC® 476 processors, custom symmetric key and hashing engines to perform AES,
DES, TDES, SHA-1 and SHA-2, MD5 and HMAC, as well as public key cryptographic
algorithm support for RSA and Elliptic Curve Cryptography. Other hardware support includes
a secure real-time clock, hardware random number generator and a prime number generator.
The secure module is protected by a tamper responding design that protects against a wide
variety of attacks against the system.
The “Payment Card Industry Hardware Security Module” standard, PCI HSM, is issued by the
PCI Security Standards Council. It defines physical and logical security requirements for
HSMs that are used in the finance industry. The IBM CEX7S with CCA 7.x has PCI HSM
certification.2
The IBM 4769 is designed for improved performance and security rich services for sensitive
workloads, and to deliver high throughput for cryptographic functions. For a detailed summary
of the capabilities and specifications of the IBM 4769, refer to the IBM 4769 Data Sheet.
Reliability, Availability, and Serviceability (RAS)
1 https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4079
2 https://listings.pcisecuritystandards.org/popups/pts_device.php?appnum=4-20358
PowerVM, the virtualization management tool for IBM Power, provides real-time monitoring of
virtualized environments. It helps track performance, resource utilization, and security status
across virtual machines and physical servers. In addition, the Hardware Management Console
(HMC) offers real-time monitoring and management of IBM Power systems. It provides
insights into system health, performance metrics, and potential security issues.
Power10 systems can be configured to generate real-time alerts for various events, including
security incidents, system performance issues, and hardware faults. These alerts can be
integrated with enterprise monitoring solutions for centralized management. Integration with
Security Information and Event Management (SIEM) systems allows for real-time analysis of
security events and incidents. This helps in detecting and responding to potential threats as
they occur.
IBM Power10 systems provide a robust framework for compliance automation and real-time
monitoring through integrated tools and features. By leveraging solutions like IBM PowerSC,
advanced monitoring tools, and real-time alert systems, organizations can ensure continuous
compliance and visibility into their environments.
Here are some options within the IBM Power ecosystem to support EDR:
IBM PowerSC is a security and compliance solution optimized for virtualized environments
on IBM Power servers running AIX, IBM i or Linux. PowerSC sits on top of the IBM Power
server stack, integrating security features built at different layers. You can now centrally
manage security and compliance on Power for all IBM AIX and Linux on Power endpoints.
For more information see 9.3, “Endpoint Detection and Response” on page 229.
IBM Security® QRadar® EDR remediates known and unknown endpoint threats in near
real time with easy-to-use intelligent automation that requires little-to-no human
interaction.
For more information see “IBM QRadar Suite (Palo Alto Networks)” on page 258.
Modern compilers and operating systems for Power10 can include additional security features
and mitigations. Developers should ensure that their software is built with the latest security
practices and that the operating system is up-to-date with relevant patches.
Secure boot protects the initial program load by ensuring that only authorized modules (those
that are cryptographically signed by the manufacturer) are loaded during the boot process.
Trusted boot starts with the platform provided by the secure boot process and then builds on
it by recording measurements of the system's firmware and configuration during the
startup process to the Trusted Platform Module (TPM). Through an attestation process the
TPM can provide a signed quote that can be used to verify the system firmware integrity at
any time.
Figure 2-3 illustrates how the different layers work together to support Secure Boot within
Power10.
Power Secure and Trusted Boot make use of the following components:
Firmware Secure Boot
Integrity validation of all firmware components from the hardware root of trust up through
PowerVM and partition firmware (PFW).
Secure Boot implements a processor-based chain of trust that is rooted in the IBM Power
processor hardware and enabled by the IBM Power firmware stack. Secure Boot provides a trusted
firmware base to enhance confidentiality and integrity of customer data in a virtualized
environment.
Secure Boot establishes trust through the platform boot process. With Secure Boot, the
system IPLs to a trusted and well-defined state. Trusted here means that the code executed
during the IPL process originated from the platform manufacturer, was signed by the platform
manufacturer, and has not been modified since. For more information on Secure Boot
processing in PowerVM see Secure Boot in IBM documentation or this Secure Boot PDF.
The AIX boot image includes digital signatures of the boot loader and kernel, allowing the
PFW to validate them. The PFW also validates the digital signature of the boot code in
adapter microcode. If an adapter's boot code lacks a valid digital signature, it cannot be used
as a boot device for the trusted LPAR.
The AIX Secure Boot feature is configured using the management console; currently, the
Hardware Management Console (HMC) is the management console that supports it.
The AIX operating system supports the following basic secure boot settings:
0. Secure boot disabled
1. Enabled (or log only)
2. Enforce (abort the boot operation if signature verification fails)
3. Enforce policy 2 and avoid loading programs or libraries not found in the Trusted Signature
Database (TSD), also disabling write access to /dev/*mem devices.
4. Enforce policy 3 and disable the kernel debugger (KDB)
If file integrity fails validation during the boot operation in Audit mode, the LPAR continues to
boot, and errors are logged in /var/adm/ras/securebootlog for the system administrator to
inspect after the LPAR boots. When digital signature verification of files fails during the boot in
Enforce mode, the boot process is aborted, and the LPAR status is displayed in the HMC with
a specific LED code.
Linux boot images are signed by distributions like Red Hat and SUSE, allowing PFW to
validate them using PKCS7 (Cryptographic Message Syntax or CMS). PKCS7 is one of the
family of standards called Public-Key Cryptography Standards (PKCS) created by RSA
Laboratories and is a standard syntax for storing signed or encrypted data. PowerVM
includes the public keys used by PFW to validate the grub boot loader.
The partition firmware verifies the appended signature on the GRUB image before handing
control to GRUB. Similarly, GRUB verifies the appended signature on the kernel image before
booting the OS. This ensures that every image that runs at boot time is verified and trusted.
Limitations
The following are current limitations for Secure Boot with Linux on Power:
Key rotations for the GRUB or kernel require a complete firmware update.
Administrators have no ability to take control of the LPAR and manage their keys.
User-signed custom builds of the kernel or GRUB do not boot when static key
management is used.
Secure boot enables lock-down in the kernel to restrict direct or indirect access to the
running kernel, which protects against unauthorized modifications to the kernel or access
to sensitive kernel data.
Lock-down impacts some of the IBM Power platform functions that are accessible by using
the userspace RTAS interface.
Table 2-2 shows the supported combinations of firmware and Linux distribution.
For more information on Secure Boot with Linux see Guest secure boot with static keys in
IBM Documentation.
For AIX, the trusted boot function is handled by the Trusted Execution functions as discussed
in section 4.9, “Trusted Execution” on page 112.
Hypervisor vulnerabilities
The first step in securing hypervisors is understanding the unique vulnerabilities they face.
The following are some vulnerabilities and a discussion of how to avoid them in your
IBM Power environment.
Hyperjacking
Hyperjacking is a type of advanced cyber attack where threat actors take control of a crucial
element called the hypervisor, which handles the virtualized environment within a main
computer system. Their ultimate aim is to deceive the hypervisor into executing unauthorized
tasks without leaving traces elsewhere in the computer.
VM Escape
VM escape is a security vulnerability that lets attackers breach virtual machines and obtain
unauthorized access to the underlying physical hardware, such as the hypervisor or host
system. It circumvents the virtualization layer's isolation barriers, enabling potential exploits.
IBM Power and PowerVM provide many options to help prevent a VM escape type of attack.
PowerVM has excellent LPAR isolation that prevents an LPAR from seeing resources outside
of its defined VM.
Resource Exhaustion
Attackers can target the resource allocation features of a hypervisor, leading to denial of
service (DoS) by exhausting resources such as CPU and memory, which affects all VMs
hosted on the hypervisor.
Within PowerVM, when a resource is defined to an LPAR, limits are enforced by the
hypervisor to protect against overallocation. Correctly defining the minimum, desired, and
maximum values for memory and CPU ensures that you avoid resource exhaustion
attacks.
Ensure that the administrator credentials used for configuring PowerVM are protected and
use RBAC to limit the scope of changes that can be made.
For more information on securing your PowerVM environment see 3.1, “Hardware
Management Console Security” on page 50 and 3.3, “VIOS Security” on page 69.
LPAR isolation is a basic tenet of IBM PowerVM, the IBM Power hypervisor. PowerVM is
designed to share resources from a single machine across all of the LPARs defined on that
machine. PowerVM also provides capabilities that allow LPARs to be non-disruptively moved
from one host machine to another to support load balancing and high
availability configurations. LPAR restart technologies also support disaster recovery options
to restart workloads in another site in case of site failure.
One of the strengths of PowerVM is its flexibility in sharing processing resources. CPUs can
be defined to an LPAR as dedicated or shared, capped or uncapped and donating or not
donating. This allows you to effectively allocate resource among LPARs, ensuring that each
partition receives the necessary resources to perform optimally without affecting the
performance of others. This includes dynamic resource allocation techniques that allow
resources to be reallocated based on workload demands.
The task of managing the virtualization layer in the IBM Power ecosystem is divided into two
distinct areas: hardware management, and I/O virtualization. The hardware management
aspect is handled by the Hardware Management Console (HMC) which is an appliance
designed to define the logical partitions in each server, dividing and sharing the installed
resources across the various virtual machines that are supported. An HMC can manage
multiple servers, but as your infrastructure grows across multiple locations and a large
number of servers, the Cloud Management Console provides a single tool for consolidating
information across a number of HMCs.
The Virtual I/O Server (VIOS) is a special partition running in an IBM Power server that allows
sharing of physical devices across multiple LPARs. The purpose of the VIOS is to virtualize
the physical adapters in the system, reducing the number of physical adapters required.
Systems with virtualized I/O can also be more easily moved to other servers as needed for
load balancing and high availability during planned or unplanned outages, providing a more
available and resilient environment.
Each of these functions: HMC, CMC and VIOS are discussed in this chapter:
3.1, “Hardware Management Console Security” on page 50
3.2, “Cloud Management Console security” on page 61
3.3, “VIOS Security” on page 69
HMC Packaging
Initially, the HMC was delivered solely as a traditional hardware appliance, with the software
and hardware bundled together and installed on-site. As client environments grew, there was
a demand to virtualize the HMC function to minimize infrastructure needs. In response, IBM
introduced the virtual HMC (vHMC), which enables you to use your own hardware and server
virtualization to host the IBM-provided HMC virtual appliance. The vHMC image is available
for both x86 and IBM Power servers and supports the following hypervisors:
For x86 virtualization:
– Kernel-based Virtual Machine (KVM) on Ubuntu 18.04 LTS or Red Hat Enterprise
Linux 8.0 or 9.0
– Xen on SUSE Linux Enterprise Server 12
– VMware ESXi 6.5, 7.0 or 7.0.2
For Power virtualization:
– PowerVM
The distribution of HMC service packs and fixes is consistent for both hardware and virtual
HMCs. However, for vHMC on PowerVM, Power Firmware updates are managed by IBM. For
vHMC on x86 systems, if security vulnerabilities arise, you should consult with the hypervisor
and x86 system vendors for any necessary updates to the hypervisor and firmware. The
steps for enabling secure boot differ between hardware and virtual HMCs due to architectural
differences. For detailed instructions on enabling the secure boot function, refer to section
3.1.9, “Secure boot” on page 58.
HMC Functions
The HMC enables you to create and manage logical partitions, including the ability to
dynamically add or remove resources from active partitions. It also handles advanced
virtualization functions such as Capacity Upgrade on Demand and Power Enterprise Pools.
In addition, the HMC provides terminal emulation for the logical partitions on your managed
systems. You can connect directly to these partitions from the HMC or configure it for remote
access. This terminal emulation feature ensures a reliable connection, useful if other terminal
devices are unavailable or not operational. It is particularly valuable during the initial system
setup, before configuring your preferred terminal.
One HMC can oversee multiple servers and multiple HMCs can connect to a single server. If
a single HMC fails or loses connection to the server firmware, the server will continue to
operate normally, but changes to the logical partition configuration won't be possible. To
mitigate this, you can connect an additional HMC as a backup to ensure a redundant path
between the server and IBM service and support.
Each HMC comes preinstalled with the HMC Licensed Machine Code to ensure consistent
functionality. You have several options for configuring HMCs to provide flexibility and
availability:
Local HMC
A local HMC is situated physically close to the system it manages and connects via a
private or public network. In a private network setup, the HMC acts as a DHCP server for
the system’s service processors. Alternatively, it can manage the system over an open
network, where the service processor’s IP address is manually assigned through the
Advanced System Management Interface (ASMI).
Remote HMC
A remote HMC is located away from its managed systems, which could be in a different
room, building, or even a separate site. Typically, a remote HMC connects to its managed
servers over a public network, although it can also be configured to connect via a private
network.
IBM has created a document that provides a starting point on understanding the connectivity
used by the HMC and how to make it secure. The HMC 1060 Connectivity Security White
Paper is a good starting point for enabling a secure HMC environment in your enterprise.
Level 2
Level 2 defines some actions that you should consider when you have multiple HMC users
defined in the environment. If you have multiple users defined to use the HMC, then consider
the following (a hedged sketch of creating such a user follows this list):
The HMC supports fine-grained control of resources and roles. Create an account for each
user on the HMC.
Assign only necessary roles to users.
Assign only necessary resources (systems, partitions, and so on) to users.
Both the resources and the roles assigned to a user must be the minimum required for the
job. Create custom roles if necessary.
Enable user data replication between HMCs with different modes.
Import a certificate signed by Certificate Authority.
Enable secure boot.
Enable Multi Factor Authentication.
Enable PowerSC profile.
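As a hedged sketch of creating a least-privilege user with the predefined hmcviewer task role
(the user name and description are illustrative, and resource-role assignment options vary,
so verify the flags against the mkhmcusr man page on your HMC):
mkhmcusr -u opsviewer -a hmcviewer -d "Operations view-only user"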
Level 3
Level 3 defines additional considerations when you have multiple HMCs in the environment. If
you have many HMCs and Sysadmins:
Use centralized authentication—LDAP or Kerberos (HMC does not support SSO feature).
Enable user data replication between HMCs.
Put HMC in NIST SP 800-131A mode so that it uses strong ciphers only.
Block unnecessary ports in firewall.
All other ports should be kept within a private or isolated network for security purposes.
Use this command on the HMC to set the NIST SP 800-131A mode:
chhmc -c security -s modify --mode nist_sp800_131a
If you wish to return the HMC to legacy mode, use this command:
chhmc -c security -s modify --mode legacy
3.1.5 Encryption
All communication channels utilized by the HMC are encrypted. By default, the HMC employs
Transport Layer Security (TLS) and HTTPS with secure cipher sets bundled with the HMC.
The default ciphers provide strong encryption and are used for secure communication on
ports 443, 17443, 2301, and 5250 proxy, as well as for internal HMC communication.
Note: For details on the encryption ciphers used by the HMC, you can execute the
lshmcencr command in the HMC command-line interface (CLI). If your organization's
corporate standards require different ciphers, you must use the chhmcencr command to
modify them. For more details see:
https://www.ibm.com/docs/en/power10/000V-HMC?topic=sh-managing-https-ciphers-hmc-web-interface-by-using-hmc
The HMC supports both self-signed and CA-signed certificates for encryption. Starting with
HMC Version 10.2.1040.0 and later, you can select the key size for certificates when
generating a certificate signing request (CSR), with options of 2048, 3072, or 4096 bits. When
using CA-signed certificates, ensure a minimum of 2048-bit RSA encryption is employed. By
default, the HMC uses a self-signed certificate with the SHA256 algorithm and 2048-bit RSA
encryption.
When a user seeks remote access to the HMC user interface via a web browser, they initiate
a request for the secure page using https://hmc_hostname. The HMC then presents its
certificate to the browser to establish the encrypted session.
HMC task roles are either predefined or customized. When you create an HMC user, you
must assign a task role to that user. Each task role allows the user varying levels of access to
tasks that are available on the HMC interface. You can assign managed systems and logical
partitions to individual HMC users, allowing you to create a user that has access to managed
system A but not to managed system B. Each grouping of managed resource access is called
a managed resource role. Table 3-1 lists the predefined HMC task roles, which are the
default on the HMC.
hmcviewer: A viewer can view HMC information, but cannot change any configuration
information.
You can create customized HMC task roles by modifying predefined HMC task roles. Creating
customized HMC Task Roles is useful for restricting or granting specific task privileges to a
certain user.
Authentication
User authentication is the first step in protecting your HMC and ensuring that only authorized
users can access the management console. The HMC supports various authentication
methods to validate users:
Local Authentication
If you select local authentication, you must set a password and the number of days for
which the password is valid.
Kerberos Authentication
If you select Kerberos authentication, specify a Kerberos remote user ID and configure the
HMC to use Kerberos. When a user logs in to the HMC, authentication is first verified
against a local password file. If a local password file is not found, the HMC can contact a
remote Kerberos server for authentication. You must configure your HMC so that it uses
Kerberos remote authentication.
Information on setting up Kerberos can be found in this IBM Documentation link.
User properties
The User Properties has the following properties that you can set:
Timeout Values
These values specify values for various timeout situations.
– Session timeout minutes
Specifies the number of minutes after which, during a logon session, a user is prompted
for identity verification. If a password is not re-entered within the amount of time that was
specified in the Verify timeout minutes field, then the session is disconnected. A
zero (0) is the default and indicates no expiration. You can specify up to a maximum
value of 525600 minutes (equivalent to one year).
– Verify timeout minutes
Specifies the amount of time that is required for the user to re-enter a password when
prompted, if a value was specified in the Session timeout minutes field. If the password
is not re-entered within the specified time, the session will be disconnected. A zero
(0) indicates there is no expiration. The default is 15 minutes. You can specify up to a
maximum value of 525600 minutes (equivalent to one year).
– Idle timeout minutes
Specifies the number of minutes the user’s session can be idle. If the user does not
interact with the session in the specified amount of time, the session becomes
disconnected. A zero (0) is the default and indicates no expiration. You can specify up
to a maximum value of 525600 minutes (equivalent to one year).
– Minimum time in days between password changes
Specifies the minimum amount of time in days that must elapse between changes for
the user’s password. A zero (0) indicates that a user’s password can be changed at any
time.
Inactivity Values
These define what actions to take due to various periods of inactivity:
– Disable for inactivity in days
This value defines the number of days of inactivity after which a user is temporarily
disabled. A value of zero (0) means that the user will not be disabled regardless of the
duration of inactivity.
– Never disable for inactivity
If you do not want to disable user access based on inactivity, select “Never disable
for inactivity”.
Password policy
The HMC ships with default password policies which can be used to meet general corporate
requirements. To meet specific requirements, users can create a custom password policy and
apply it on the HMC. Password policies are enforced for locally authenticated HMC users only.
To see what password policies are defined on the HMC, use the lspwdpolicy command1 as
follows:
List all of the HMC password policies:
lspwdpolicy -t p
List just the names of all of the HMC password policies:
lspwdpolicy -t p -F name
List HMC password policy status information:
lspwdpolicy -t s
The “HMC Medium Security Password Policy” is defined by default but not activated. It has
the following settings:
min_pwage=1
pwage=180
min_length=8
hist_size=10
warn_pwage=7
min_digits=0
min_uppercase_chars=1
min_lowercase_chars=6
min_special_chars=0
inactivity_expiration=180
The policy can be activated with the chpwdpolicy2 command: chpwdpolicy -o a -n “HMC Medium
Security Password Policy”. It can be deactivated with: chpwdpolicy -o d. If you deactivate a
password policy, be sure to activate another policy to protect your system.
An additional predefined policy, “HMC Standard Security Password Policy”, is also available
and might be acceptable for use depending on your corporate requirements. Its settings are
defined as:
min_lowercase_chars=1
min_uppercase_chars=1
min_digits=1
1 https://www.ibm.com/docs/en/power10/7063-CR1?topic=commands-lspwdpolicy
2 https://www.ibm.com/docs/en/power10/7063-CR1?topic=commands-chpwdpolicy
If you wish to create your own policy, use the mkpwdpolicy3 command. One example of
creating a new password policy is shown in Example 3-1.
The “-i” flag shown uses command line input to define the parameters of the policy. Using
the “-f” flag allows the use of a file with the parameters defined, to simplify the entry of the
command and to provide consistency across your HMCs. Reminder: once the policy is
defined, it still needs to be activated before it is effective.
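In the spirit of Example 3-1, a hedged sketch of defining and activating a policy follows. It
assumes the policy name is supplied in the -i attribute string along with the other attributes;
the name corpPolicy and the values shown are illustrative, so confirm the exact attribute
syntax in the mkpwdpolicy man page:
mkpwdpolicy -i "name=corpPolicy,min_length=12,pwage=90,min_digits=1,min_uppercase_chars=1,hist_size=10"
chpwdpolicy -o a -n corpPolicy
The second command activates the new policy, because a defined policy has no effect until
it is activated.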
Deleting password policies is done using the rmpwdpolicy command. The single parameter is
“-n”, which specifies the name of the policy to be deleted. For example, rmpwdpolicy -n
xyzPolicy would delete the policy “xyzPolicy”.
Enabling MFA
Multi-Factor Authentication is disabled on the HMC by default. For HMC GUI login, when MFA
is enabled and the user is configured on the PowerSC MFA server, enter the Cache Token
Credential (CTC) code in the password field. For Secure Shell (SSH) login, when MFA is
enabled, all users that login through SSH are prompted for a CTC code. If the user is
configured on the PowerSC MFA server, then you can enter the CTC code at the prompt. If
the user is not configured on the PowerSC MFA server, press Enter when prompted for CTC
code, and then enter the password of the user at the prompt. When the HMC is enabled with
PowerSC MFA, all users, including local, LDAP, and Kerberos users, are prompted with the
PowerSC MFA authentication process. If for any reason you do not want MFA for certain
users, the HMC provides a PowerSC MFA allow list for users exempt from MFA. Users added
to the allow list are exempted from PowerSC MFA authentication on the HMC.
3 https://www.ibm.com/docs/en/power10/7063-CR1?topic=commands-mkpwdpolicy
Most tasks performed on the HMC (either locally or remotely) are logged by the HMC. These
entries can be viewed by using the Console Events Log task, under Serviceability →
Console Events Log or by using the lssvcevents command from the restricted shell.
A log entry contains the time stamp, the user name, and the task being performed. When a
user logs in to the HMC locally or from a remote client, entries are also recorded. For remote
login, the client host name or IP address is also captured, as in Example 3-2.
Standard log entries from syslogd can also be seen on the HMC by viewing the /var/hsc/log
file. This file can be read by users with the hmcsuperadmin role. It is under logrotate control. A
valid user can simply use the cat or tail command to view the file.
A user with the hmcsuperadmin role can also use the scp command to securely copy the file
to another system. If you want to copy syslogd entries to a remote system, you may use the
chhmc command to change the /etc/syslog.conf file on the HMC to specify a system to which
to copy. For example, the following command causes the syslog entries to be sent to
the myremotesys.company.com host name:
chhmc -c syslog -s add -h myremotesys.company.com
The systems administrator must be sure that the syslogd daemon running on the target
system is set up to receive messages from the network. On most Linux systems, this can be
done by adding the -r option to the SYSLOGD_OPTIONS in /etc/sysconfig/syslog file.
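As a hedged sketch for such sysklogd-based systems (the file path and options apply to the
classic sysklogd packaging; distributions that use rsyslog or syslog-ng are configured
differently), the relevant line in /etc/sysconfig/syslog would be:
SYSLOGD_OPTIONS="-r -m 0"
The -r option enables reception of remote syslog messages; -m 0 disables periodic mark
messages and is shown only for context.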
Due to the difference in the architecture, different steps are required to enable secure boot on
physical and virtual HMCs:
Full documentation of the secure boot feature enablement steps for the hardware HMC can
be found at:
https://www.ibm.com/docs/en/power10/7063-CR2?topic=rack-enabling-secure-boot-7063-cr2-hmc
The dedicated steps to enable secure boot for a virtual HMC based on VMware ESXi can
be found at:
https://www.ibm.com/docs/en/power10?topic=ihvax-installing-hmc-virtual-appliance-enabled-secure-boot-by-using-vmware-esxi
For additional information on enabling the PowerSC profile in the HMC see:
https://www.ibm.com/docs/en/power10/000V-HMC?topic=hmc-enabling-powersc-profile
https://www.ibm.com/docs/en/powersc-standard/2.1?topic=concepts-hmc-hardening-profile
Comprehensive documentation for configuring the HMC to send call-home data, test problem
reporting, manage user authorization, and handle information transmission can be found at:
https://www.ibm.com/docs/en/power10?topic=menus-configuring-local-console-report-errors-service-support
To enable communication from the HMC to the BMC over the inband connection, the user
needs to configure credentials that allow the HMC to connect to the BMC for periodic
monitoring of hardware problem events and other management console functions.
Note: The default password auto-expires on the first access by the user, and must
be changed.
It is recommended that a local user, other than root, is configured with administrator
privilege to be used for console inband communications. See article “How to add a
user to the BMC on the 7063-CR2 HMC” for detailed steps.
Note: The Change Expired Password task cannot be selected by the user; it is only
available when the previously provided password has expired. This scenario can be
common on first-time setups when the user has yet to configure the BMC and the
default credentials of root/0penBMC are still in place.
4. If the current credentials are valid, there is nothing else to do, click Close on the message
panel, and then click Close to end the task.
If the credentials fail or are not set, update the credentials by providing a valid
username and password and clicking the Set Credentials button. If the credentials are
accepted, click Close to exit.
5. If the credentials are expired, clicking on Close on the message panel switches to the
Change Expired Password task. Provide a new password (twice to confirm), to update the
new password for the user, on the BMC. Click Change Expired Password.
3.1.14 Summary
The HMC is an integral part of the management of the IBM Power ecosystem. To provide the
range of functions, it needs to connect to: the servers it is managing, to IBM for call home,
and to the users that are administering the environment. It is designed to make these
connections in a secure manner. We have discussed many areas where you can ensure that
you are taking full advantage of the security options available.
Hosted in the IBM Cloud, the CMC ensures secure, anytime access, enabling system
administrators to easily generate reports and gain insights into their Power cloud
deployments. The CMC is required for IBM Power Enterprise Pools 2, a cloud solution that
facilitates dynamic resource allocation and pay-as-you-go capacity management within the
IBM Power environment.
Cloud Connector is the service that runs on the IBM Power Hardware Management Console
(HMC) and sends system resource usage data to the IBM Cloud Management Console (CMC)
service. The Cloud Connector and the Cloud Management Console provide the applications
shown in Table 3-2 for the IBM Power ecosystem.
Patch Planning: Know all your patch planning needs at a glance (including firmware, VIOS,
OS, and HMC); identify all patch dependencies; integrated, collaborative planning with
stakeholders.
User management is handled by the administrator of the organization registered for IBM
Cloud Management Console for Power Systems services. Administrators can manage users
from the Settings page. To access this page, click the navigation menu icon in the CMC
portal header, then select the Settings icon. On the Settings page, click the Manage Users
tab to view all users configured for your organization. Users without administrator privileges
will have limited access to specific applications.
To be added to the CMC, users must have a valid IBMid. In addition to the IBMid, users must
be added to the CMC application by the administrator within your company. The resource role
assignment feature allows administrators to assign appropriate tasks to users.
Resource roles can be managed from the CMC Portal Settings page. On this page, click the
Manage Resource Roles tab to view and manage resource roles. Administrators can add,
modify, or delete resource roles for other users from this page.
Attention: Data filtering for the allowlist is supported only with HMC version 1020 or later.
If your version does not meet this requirement, data from systems not on the allowlist will
still be uploaded to the CMC.
No List: Selecting No List disables both filtering types and shows data from all managed
systems.
To view the current managed systems on the blocklist or allowlist, click the Blocklist and
Allowlist tabs in the Managed System Filter area.
Important: Only one filter type, Blocklist, Allowlist, or No List, can be active at a time
Note that adding a system to the blocklist does not automatically remove its existing data from
the cloud. To purge this data, ensure the Cloud Connector is running, and run the command
chsvc -s cloudconn -o stop --purge from the management console command line.
Data Filter
To prevent Cloud Connector data from being pushed to cloud storage, you add the systems
to this table. Selections are available to filter by the system IP address and the logical
partition/Virtual I/O Server IP address. These systems can be re-enabled if you want.
After the selection is made, it will take about 5 to 10 minutes to reflect the data in the CMC.
The patch planning data gets updated once a day, so you might see a delay before the
changes are reflected in the Patch Planning app. The purge operation is supported if the HMC is at Version
8 Release 8.6.0 Service Pack 2 or higher.
To enable Attribute Masking, from the Cloud Console interface click Settings → Cloud
Connector → Cloud Connector Management, scroll down to the end of the page, and then
set Attribute Masking to On.
The attribute masking feature is available with HMC 1040 and later only. Data from earlier
HMC versions is not masked and will continue to be displayed unmasked even when Attribute
Masking is enabled. For details on what fields are masked see Attribute Masking.
Important: Starting with HMC version 9.1.941.0, the Cloud Connector supports an HTTP
proxy. If you are using versions of the HMC older than that, the Cloud Connector requires a
SOCKS5 proxy.
Cloud Connector supports Kerberos, LDAP, and Digest-MD5 based proxy server authentication
in addition to Basic authentication. While starting Cloud Connector, an attribute can be specified
which designates the authentication type to be used for the proxy connection. The default
authentication used is Basic.
Cloud Connector utilizes a one-way push model in which it initiates all outbound communication.
For automatic network based configuration, where Cloud Connector pulls the configuration file
from the cloud database, HTTPS is used; and for application data flow (push) between Cloud
Connector and the CMC data ingestion node, TCP with SSL is used. All communication from
the Cloud Connector to the CMC is secured using the Transport Layer Security Version 1.2
protocol (TLSv1.2).
The startup key for the HMC based cloud connector is used to establish a valid connection
between the connector and the CMC Cloud Portal Server (cloud portal). This key is also used
for connection between the Connector and the configuration database. Once a valid connection
is established to the cloud portal, credentials are returned to the Cloud Connector allowing for
dynamic configuration and reconfiguration.
Figure 3-2 shows the Cloud Connector establishing trust with the cloud portal via pushing the
user provided key to a cloud portal key verification endpoint.
A security test is executed to assert that the startup key provided is valid. The test begins with
a GET request from the connector to the cloud portal, which returns a cross-site request
forgery (XSRF) header. Using this XSRF header, along with a portion of the decoded key, a
POST operation is performed to the same cloud portal endpoint. If the key is considered valid,
the cloud portal responds with a set of encoded credentials giving the Cloud Connector access
to a database containing the customer's Cloud Connector configuration file.
In addition, it provides credentials for fetching an SSL certificate and key pair used in
communication between the Connector and the cloud data ingestion node. The credentials
are used to access a separate database from the one used to fetch the configuration file.
However, the underlying network location and mechanism used to fetch the certificates is the
same.
An SSL connection is established and the data is returned to the connector. Every minute the
Cloud Connector fetches a new configuration to ensure changes are handled. All communication
from the connector to the cloud database is secured using the Transport Layer Security
Version 1.2 protocol (TLSv1.2). Using the received credentials as shown in Figure 3-3, the
cloud connector pulls the customer specific cloud connector configuration file from the CMC
Cloud Configuration Database as shown in Figure 3-4 on page 67.
4 https://ibmcmc.zendesk.com/hc/en-us/article_attachments/360083545614/CloudConnectorSecurityWhitePaper.pdf
Once the credentials from the configuration file are collected as shown in Figure 3-3 on page 66,
the cloud connector pulls the SSL certificates and key from the CMC Cloud Certificate
Database as shown in Figure 3-5.
Figure 3-5 Cloud Connector pulls SSL keys from cloud database4
Figure 3-6 on page 68 shows the connection between the HMC and the ingestion node
through a SOCKS5 proxy using the certificate obtained in Figure 3-5. Starting with HMC
version 9.1.941.0, the Cloud Connector can also be started with only the HTTP proxy option.
If the Cloud Connector is started with only HTTP proxy, then it uses HTTP proxy to establish
connection between HMC and ingestion node as shown in Figure 3-7. The proxy options
shown in Figure 3-6 are still supported in current versions of the HMCs, when the Cloud
Connector is started with both HTTP and SOCKS5 proxies.
Figure 3-7 Cloud Connector authentication to CMC data ingestion node new HTTP proxy mode4
IBM Power Enterprise Pools 2.0 provides enhanced multi-system resource sharing and
by-the-minute consumption of on-premises compute resources to clients who deploy and
manage a private cloud infrastructure. A Power Enterprise Pool 2.0 is monitored and
managed by the IBM Cloud Management Console. The CMC Enterprise Pools 2.0 application
helps you to monitor base and metered capacity across a Power Enterprise Pool 2.0
environment, with both summary views and sophisticated drill-down views of real-time and
historical resource consumption by logical partitions.
Given that all virtualized I/O traffic routes through VIOS, securing it is crucial. If an attacker
compromises VIOS, they could gain access to all virtualized network and storage traffic on
the system, potentially infiltrating client LPARs.
After deploying VIOS, the first priority for an administrator is to configure it securely. Many of
the security settings applicable to AIX can also be applied to VIOS. Given VIOS's appliance
nature, if you're unsure about applying specific security configurations, contact IBM support
for assistance.
Although VIOS does not have its own published security benchmarks, you can refer to the
Center for Internet Security (CIS) AIX benchmark to guide your VIOS security configuration.
Note that VIOS version 3.1 is based on AIX 7.2, while VIOS version 4.1 is based on AIX 7.3.
The easiest way to get information about new VIOS releases, service packs and security fixes
is to subscribe to Virtual I/O Server notifications on the IBM support portal.
Occasionally, VIOS receives security fixes to address newly identified vulnerabilities. You
should apply these fixes in accordance with your company's security and compliance policies.
Many VIOS administrators delay installing security updates, opting to wait for the next service
pack instead. This approach can be acceptable if you've evaluated the risks associated with a
compromised virtualization infrastructure and your organization is prepared to accept those
risks.
Decide which VIOS to update first based on the needs of your client LPARs. To prevent error
log messages on the client LPARs during the update, you can disable the storage paths to
the VIOS that is being updated.
To help you set up system security when you initially install the Virtual I/O Server, the Virtual
I/O Server provides the configuration assistance menu. You can access the configuration
assistance menu by running the cfgassist command. Using the viosecure command, you
can set, change, and view current security settings. By default, no Virtual I/O Server security
levels are set. You must run the viosecure command to change the settings.
The system security hardening feature protects all elements of a system by tightening
security or implementing a higher level of security. Although hundreds of security
configurations are possible with the Virtual I/O Server security settings, you can easily
implement security controls by specifying a high, medium, or low security level.
Using the system security hardening features provided by Virtual I/O Server, you can specify
values such as the following:
Password policy settings
Actions such as usrck, pwdck, grpck, and sysck
Default file-creation settings
Settings included in the crontab command
Configuring a system at too high a security level might deny services that are needed. For
example, telnet and rlogin are disabled for high-level security because the login password
is sent over the network unencrypted. If a system is configured at too low a security level, the
system might be vulnerable to security threats. Since each enterprise has its own unique set
of security requirements, the predefined High, Medium, and Low security configuration
settings are best suited as a starting point for security configuration rather than an exact
match for the security requirements of a particular enterprise. As you become more familiar
with the security settings, you can make adjustments by choosing the hardening rules that
you want to apply. You can get information about the hardening rules by running the man
command.
The low-level security settings are a subset of the medium level security settings, which are a
subset of the high-level security settings. Therefore, the high level is the most restrictive and
provides the greatest level of control. You can apply all of the rules for a specified level or
select which rules to activate for your environment. By default, no Virtual I/O Server security
levels are set; you must run the viosecure command to modify the settings.
To set a Virtual I/O Server security level of high, medium, or low, use the viosecure -level
command. For example:
viosecure -level low -apply
To apply only selected hardening rules for a level, follow these steps:
1. At the Virtual I/O Server command line, type viosecure -level high. All the security level
options (hardening rules) at that level are displayed ten at a time (pressing Enter displays
the next set in the sequence).
2. Review the options that are displayed and make your selection by entering the numbers,
which are separated by a comma, that you want to apply, or type ALL to apply all the
options or NONE to apply none of the options.
3. Press Enter to display the next set of options, and continue entering your selections.
Note: To exit the command without making any changes, type “q”.
To remove the security settings that have been applied, run the command viosecure -undo.
The Virtual I/O Server firewall is not enabled by default. To enable the Virtual I/O Server
firewall, you must turn it on by using the viosecure command with the -firewall option.
When you enable it, the default setting is activated, which allows access for the following IP
services:
ftp
ftp-data
ssh
web
https
rmc
cimom
Note: The firewall settings are contained in the viosecure.ctl file in the /home/ios/security
directory. If for some reason the viosecure.ctl file does not exist when you run the
command to enable the firewall, you receive an error. You can use the -force option to
enable the standard firewall default ports.
You can use the default setting or configure the firewall settings to meet the needs of your
environment by specifying which ports or port services to allow. You can also turn off the
firewall to deactivate the settings.
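As a hedged sketch (the port number is illustrative and the option names should be verified
against the viosecure man page), you can enable the firewall with its default rules, permit an
additional port, and review the resulting configuration:
viosecure -firewall on
viosecure -firewall allow -port 10050
viosecure -firewall view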
Tip: Additional information on securing your Virtual I/O Server can be found in this IBM
Document.
The account “admin” will get all privileges to change VIOS configuration but will not be able to
switch into oem_setup_env mode and execute root commands.
The account “monitor” will be able to log in to the VIOS and see its current configuration, but
will not be able to make changes to the configuration.
The super-administrators with access to oem_setup_env mode will have role PAdmin. All
other administrators have role Admin. View-only users have role ViewOnly.
After you change the privileges, the user must log in to the system again for the changes to
take effect.
histexpire: Period in weeks during which the user cannot reuse an old password.
maxexpired: Maximum number of weeks after expiration during which the user can change
an expired password.
minage: Minimum number of weeks during which the user cannot change the password
after a new password is set.
The -rmdir option also removes the user's home directory. Please note: files on the VIOS
that are owned by the removed user do not change their ownership automatically.
Local date, time, user account and the issued command are saved in the file. An example is
shown in Example 3-6.
In this chapter, we cover some of the basics of locking down your AIX LPARs. Some security
hardening is implemented in AIX 7.3 by default. Usually you must check and implement
additional security settings according to your environment's requirements. This includes
setting default permissions and umasks, using good usernames and passwords, hardening
security with AIX Security Expert, protecting data at rest directly on the disk or at the logical
volume layer via encryption, removing insecure daemons, and integrating with LDAP
directory services or Microsoft Active Directory (AD).
While this checklist is not exhaustive, it provides a solid foundation for developing a
comprehensive security plan tailored to your environment. We will cover these
recommendations and introduce additional considerations in the following sections.
Disable the root account from being able to remotely log in. The root account should be
able to log in only from the system console.
Enable system auditing. For more information, see Auditing overview in AIX
documentation.
Enable a login control policy. For more information, see Login control in AIX
documentation.
Disable user permissions to run the xhost command. For more information, see Managing
X11 and CDE concerns in AIX documentation.
Prevent unauthorized changes to the PATH environment variable. For more information,
see PATH environment variable in AIX documentation.
1 https://www.ibm.com/docs/en/aix/7.3?topic=security-checklist
Later, when the process needs to open an EFS-protected file, these credentials are checked.
If a key matching the file protection is found, the process can decrypt the file key and,
consequently, the file content. Group-based key management is also supported.
Note: EFS is part of an overall security strategy. It is designed to work in conjunction with
sound computer security practices and controls.
EFS is part of the base AIX operating system. To enable EFS, root, or any user with the
aix.security.efs RBAC authorization, must use the efsenable command to activate EFS and
create the EFS environment. See section 4.2.3, “Root Access to User Keys” on page 80 for
more information on who can manage EFS. This is a one-time system enablement.
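As a minimal sketch of this one-time enablement (run as root or a user with the
aix.security.efs authorization; the command typically prompts for an EFS administration
password):
efsenable -a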
After EFS is enabled, when a user logs in, their key and keystore are silently created and
secured or encrypted with the user's login password. The user's keys are then used
automatically by the JFS2 file system for encrypting or decrypting EFS files. Each EFS file is
encrypted with its own file key, and that file key is protected by the keys of the users and
groups that are authorized to access the file.
When a file system is EFS-enabled, the JFS2 File System transparently handles encryption
and decryption in the kernel for read and write requests. User and group administration
commands (such as mkgroup, chuser, and chgroup) manage the keystores for the users and
groups seamlessly.
The following EFS commands are provided to allow users to manage their keys and file
encryption:
efskeymgr Manages and administers the keys.
efsmgr Manages the encryption of files/directories/file system.
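As a hedged illustration of these commands (the paths are placeholders; verify the flags in
the efskeymgr and efsmgr man pages): efskeymgr -v lists the keys loaded in the current
shell, efsmgr -e encrypts a single file, and efsmgr -E turns on encryption inheritance for a
directory:
efskeymgr -v
efsmgr -e /home/bob/secret.txt
efsmgr -E /home/bob/private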
Users can change their login password without affecting open keystores, and the keystore
password can be different from the login password. When the user password differs from the
keystore password, the keystore must be loaded manually with efskeymgr.
Keystore Details
The keystore has the following characteristics:
Protected with passwords and stored in PKCS #12 format.
Location:
– User: /var/efs/users/<username>/keystore
– Group: /var/efs/groups/<groupname>/keystore
– efs_admin: /var/efs/efs_admin/keystore
Users can choose encryption algorithms and key lengths.
Access is inherited by child processes.
efs_admin Key
The efs_admin key is a special key stored in the root user's keystore that grants root-level
access to all keystores in Root Admin mode. Permissions to access this key can be
granted/revoked to specific users or groups using the efskeymgr command. This requires the
aix.security.efs RBAC authorization for users to manage EFS.
Note: The EFS keystore is opened automatically as part of the standard AIX login only
when the user’s keystore password matches their login password. This is set up by default
during the initial creation of the user’s keystore. Login methods other than the standard AIX
login, such as loadable authentication modules and pluggable authentication modules, may
not automatically open the keystore.
All cryptographic functions come from the CLiC kernel services and CLiC user libraries.
By default, a JFS2 File System is not EFS-enabled. A JFS2 File System must be
EFS-enabled before EFS inheritance can be activated or any EFS encryption of user data
can take place.
The cp and mv commands can handle metadata and encrypted data seamlessly across
EFS-to-EFS and EFS-to-non-EFS scenarios.
The backup, restore, and tar commands and other related commands can back up and
restore encrypted data, including the EFS meta-data used for encryption and decryption.
When backing up EFS encrypted files, you can use the -Z option with the backup command
to back up the encrypted form of the file, along with the file’s cryptographic meta-data. Both
the file data and meta-data are protected with strong encryption. This has the security
advantage of protecting the backed-up file through strong encryption. It is necessary to back
up the keystore of the file owner and group associated with the file that is being backed up.
These keystores are located in the following files:
user keystores: /var/efs/users/user_login/*
group keystores: /var/efs/groups/group_name/keystore
efs_admin keystore: /var/efs/efs_admin/keystore
Use the restore command to restore an EFS backup that was made with the backup
command and the -Z option. The restore command ensures that the crypto meta-data is also
restored. During the restore process, it is not necessary to restore the backed-up keystores if
the user has not changed the keys in their individual keystore. When a user changes their
password to open their keystore, their keystore internal key is not changed. Use the
efskeymgr command to change the keystore internal keys.
If the user’s internal, keystore key remains the same, the user can immediately open and
decrypt the restored file using their current keystore. However, if the key internal to the user’s
keystore has changed, the user must open the keystore that was backed up in association
with the backed-up file. This keystore can be opened with the efskeymgr –o command. The
efskeymgr command prompts the user for a password to open the keystore. This password is
the one used in association with the keystore at time of the backup.
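For example (a sketch with illustrative device names):
# restore -x -q -f /dev/rmt0    (extracts the files, including their EFS meta-data)
# efskeymgr -o ksh              (opens a keystore in a new shell; supply the password that was in use at backup time)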
For example, assume that user Bob’s keystore was protected with the password foo (the
password ‘foo’ is not a secure password and is only used in this example for simplicity’s sake) and
a backup of Bob’s encrypted files was performed in January along with Bob’s keystore. In this
example, Bob also uses foo for his AIX login password. In February, Bob changed his
password to bar, which also had the effect of changing his keystore access password to bar.
If, in March, Bob’s EFS files were restored, then Bob would be able to open and view these
files with his current key store and password, because he did not change the internal key of
the keystore.
If, however, it was necessary to change the internal key of Bob’s keystore (with the efskeymgr
command), then by default the old keystore internal key is deprecated and left in Bob's
keystore. When the user accesses the file, EFS automatically recognizes that the restored
file was encrypted with the deprecated internal key and uses that key to decrypt it.
If the deprecated internal key is removed through efskeymgr, then the old keystore containing
the old internal key must be restored and used in conjunction with the files encrypted with this
internal key.
This raises the question of how to securely maintain and archive old passwords. There are
methods and tools to archive passwords. Generally, these methods involve having a file which
contains a list of all old passwords, and then encrypting this file and protecting it with the
current keystore, which in turn is protected by the current password. However, IT
environments and security policies vary from organization to organization, and consideration
and thought should be given to the specific security needs of your organization to develop
security policy and practices that are best suited to your environment.
The content of the extended attribute (EA) is opaque to JFS2. Both user credentials
and EFS meta-data are required to determine crypto authority (access control) for any
given EFS-activated file.
Note: Special attention should be given to situations where a file or data may be lost—for
example, removal of the file's EA.
The scope of the inheritance of a directory is exactly one level. Any newly created child
inherits the EFS attributes of its parent if the parent directory is EFS-activated. Existing
children maintain their current encrypted or non-encrypted state. The logical inheritance
chain is broken if the parent changes its EFS attributes. These changes do not propagate
down to the existing children of the directory and must be applied to those directories
separately.
If a file system already exists, it can be enabled for encryption by using the chfs command,
for example:
chfs -a efs=yes /foo
From this point forward, when a user or process with an open keystore creates a file on this
file system, the file is encrypted. When an authorized user or process reads the file, it is
automatically decrypted.
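To have new files encrypted automatically, EFS inheritance can also be activated on a
directory. The following is a sketch; the -E and -l flags shown here should be verified against
the efsmgr man page for your AIX level:
# efsmgr -E /foo/project            (turns on EFS inheritance for the directory)
# touch /foo/project/newfile
# efsmgr -l /foo/project/newfile    (lists the encryption information for the new file)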
The Lightweight Directory Access Protocol (LDAP) defines a standard method for accessing
and updating information in a directory (a database) either locally or remotely in a
client-server model.
The AIX operating system provides utilities to help you perform the following management
tasks:
Export local keystore data to an LDAP server
Configure the client to use EFS keystore data in LDAP
Control access to EFS keystore data
Manage LDAP data from a client system
All of the EFS keystore database management commands are enabled to use the LDAP
keystore database. If the system-wide search order is not specified in the /etc/nscontrol.conf
file, keystore operations are dependent on the user and group efs_keystore_access attribute.
If you set the efs_keystore_access to ldap, the EFS commands perform keystore operations
on the LDAP keystore. Table 4-1 describes changes to EFS commands for LDAP.
efsenable Includes the -d Basedn option so that you can perform the initial setup on LDAP
for accommodating the EFS keystore. The initial setup includes adding base distinguished
names (DNs) for the EFS keystore and creating the local directory structure (/var/efs/).
efskstoldif Generates the EFS keystore data for LDAP from the following databases on the
local system:
• /var/efs/users/username/keystore
• /var/efs/groups/groupname/keystore
• /var/efs/efs_admin/keystore
• Cookies, if they exist, for all the keystores
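For example, the local keystores might be exported to LDIF for loading into the directory as
follows (a sketch; the base DN is illustrative):
# efskstoldif -d cn=aixdata,o=example > /tmp/efskeystore.ldif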
All of the keystore entries must be unique. Each keystore entry directly corresponds to the DN
of the entry that contains the user and group name. The system queries the user IDs
(uidNumber), group IDs (gidNumber), and the DNs. The query succeeds when the user and
group names match the corresponding DNs. Before you create or migrate EFS keystore
entries on LDAP, ensure that the user and group names and IDs on the system are unique.
efsusrkeystore This search order is common for all users. Default search order: LDAP, files
efsgrpkeystore This search order is common for all groups. Default search order: files, LDAP
efsadmkeystore This search order locates the admin keystore for any target keystore.
Default search order: LDAP, files
Attention: The configuration defined in the /etc/nscontrol.conf file overrides any values set
for the user and group efs_keystore_access attribute. The same is true for the user
efs_adminks_access attribute.
After you configure a system as an LDAP client and enable LDAP as a lookup domain for EFS
keystore data, the /usr/sbin/secldapclntd client daemon retrieves the EFS keystore data from
the LDAP server whenever you perform LDAP keystore operations.
Some organizations are required to show that data at rest is encrypted. A common example
is the Payment Card Industry Data Security Standard (PCI DSS) requirement to encrypt
sensitive data, such as the association between a cardholder name and a card number.
Using LV encryption is similar to physical disk encryption. Once operational, the application
environment does not even know the data is encrypted. The encryption is only noticeable
when the (disk) storage is mounted somewhere else and the data is unreadable. Outside of
the configured environment, the information in the logical volume cannot be accessed.
For more information about the LV encryption architecture, see the blog AIX 7.2 TL5: Logical
Volume Encryption at
https://community.ibm.com/community/user/power/blogs/xiaohan-qin1/2020/11/23/aix-lv-encryption.
Logical volume encryption (LV encryption) is simple to use and is transparent to the
applications. Once the system has been booted and an authorized process or user is active
on the system, the data is accessible to authorized users based on classic access controls
such as ACLs.
Enabling LV encryption creates one data encryption key for each logical volume. The data
encryption key is protected by storing the keys separately in other data storage devices. The
following types of key protection methods are supported:
Passphrase
Key file
Cryptographic key server
Platform keystore (PKS), which is available in IBM PowerVM firmware starting at firmware
level FW950
The hdcryptmgr command provides the following actions:
Display:
showlv : Displays the LV encryption status
showvg : Displays the VG encryption capability
showpv : Displays the PV encryption capability
showmd : Displays the encryption metadata related to a device
showconv : Displays the status of all active and stopped conversions
Authentication control:
authinit : Initializes the master key for data encryption
authunlock : Authenticates to unlock the master key of the device
authadd : Adds additional authentication methods
authcheck : Checks the validity of an authentication method
authdelete : Removes an authentication method
authsetrvgpwd : Adds the "initpwd" passphrase method to all of rootvg's LVs
PKS management:
pksimport : Imports the PKS keys
pksexport : Exports the PKS keys
pksclean : Removes a PKS key
pksshow : Displays the status of the PKS keys
Conversion:
plain2crypt : Converts an LV to encrypted
crypt2plain : Converts an LV to not encrypted
PV encryption management:
pvenable : Enables physical volume encryption
pvdisable : Disables physical volume encryption
pvsavemd : Saves encrypted physical volume metadata to a file
pvrecovmd : Recovers encrypted physical volume metadata from a file
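As an end-to-end illustration, the following sketch creates an encryption-capable volume
group and an encrypted logical volume, then initializes and inspects its authentication (the
-k y flags and the exact hdcryptmgr syntax should be verified against the AIX 7.2 TL5 or later
documentation):
# mkvg -k y -y testvg hdisk1      (creates an encryption-enabled volume group)
# mklv -k y -y testlv testvg 10   (creates an encrypted logical volume)
# hdcryptmgr authinit testlv      (initializes the master key; prompts for a passphrase)
# hdcryptmgr showlv testlv        (displays the LV encryption status)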
Note: The bos.hdcrypt and bos.kmip_client filesets are not installed automatically when
you run the smit update_all command or during an operating system migration operation.
You must install them separately from your software source, such as a DVD or an ISO image.
3. Check the encryption state of varied on volume groups by running the command shown in
Example 4-5.
4. Check the volume group encryption metadata by running the command shown in
Example 4-6.
2. Check the details of the new volume group by running the command shown in
Example 4-8.
3. Check the authentication state of the logical volume by running the command shown in
Example 4-9.
2. Check the authentication status and authentication methods for the logical volume by
running the command shown in Example 4-11.
3. Vary off and vary on the volume group by running the following commands:
# varyoffvg testvg
# varyonvg testvg
4. Check the authentication status of the logical volume by running the command shown in
Example 4-12.
The output shows that the logical volume testlv is not authenticated.
6. Check the authentication state of the logical volume again as shown in Example 4-14.
The output in this example shows that the PKS is not activated. The keystore size of a
logical partition is set to 0 by default.
2. Shut down the LPAR and increase the keystore size in the associated HMC. The keystore
size is in the range 4 KB to 64 KB. You cannot change the value of the keystore size when
the LPAR is active.
3. Check the LPAR PKS status again by running the command shown in Example 4-16.
4. Add the PKS authentication method to the logical volume by running the command shown
in Example 4-17.
5. Check the encryption status of the logical volume by running the command shown in
Example 4-18.
6. Check the PKS status by running the command shown in Example 4-20.
To add the key server authentication method, complete the following steps:
1. Check the key servers in the LPAR by running the command shown in Example 4-22.
3. Check the key servers in the LPAR again by running the command shown in
Example 4-24.
4. Check the encryption key server information that is saved in the ODM KeySvr object class
by running the command shown in Example 4-25.
5. Add the key server authentication method to the logical volume by running the command
shown in Example 4-26.
6. Check the encryption status of the logical volume by running the command shown in
Example 4-28 on page 96.
2. Add the key file authentication method to the logical volume by running the command in
Example 4-29.
3. Check the contents of the testfile file by running the command in Example 4-30.
4. Check the encryption status of the logical volume by running the command in
Example 4-31.
4.3.8 Migrating the PKS to another LPAR before the volume group is migrated
To migrate the platform keystore (PKS) to another LPAR, complete the following steps:
1. Export the PKS keys into another file by running the command shown in Example 4-34.
4. Check whether the authentication method is valid and accessible by running the command
shown in Example 4-36.
6. Check whether the authentication method is valid and accessible by running the command
shown in Example 4-38.
2. Check the details of the volume group by running the command shown in Example 4-40.
1. Enable the logical volume encryption by running the command shown in Example 4-41.
3. Check the encryption status of the logical volume by running the command shown in
Example 4-43.
With AIX 7.3 TL1, IBM continues to address clients’ need to protect data by introducing
encrypted physical volumes. This capability encrypts data at rest on disks, and since the
data is encrypted in the OS, the disk data in flight is encrypted as well.
You must install the following filesets to encrypt the physical volume data. These filesets are
included in the base operating system.
bos.hdcrypt
bos.kmip_client
security.acf
openssl.base
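These filesets can be installed from your installation media with installp; for example (the
device path is illustrative):
# installp -acgXd /dev/cd0 bos.hdcrypt bos.kmip_client security.acf openssl.base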
AIX has historically supported encrypted files using the Encrypted File System (EFS). More
recently, AIX 7.2 TL 5 introduced support for logical volume encryption, as detailed in 4.3,
“Logical Volume Encryption” on page 87.
Now, AIX offers a new level of security with physical volume encryption. This feature allows for
the encryption of an entire physical volume, providing enhanced protection for applications
that do not rely on volume groups or logical volumes, such as certain database applications.
However, it is also possible to create volume groups and logical volumes on encrypted disks.
Physical volume encryption leverages the infrastructure developed for logical volume
encryption. Therefore, many of the concepts and features described in the previous section
on logical volume encryption also apply to encrypted physical volumes. For instance, both
types support the same key management functions.
The hdcryptmgr command is used to manage encrypted physical volumes, and the hdcrypt
driver handles the encryption process. While the core functionality remains similar, some new
options and actions have been added to the hdcryptmgr command specifically for physical
volume encryption.
The size of the encrypted physical volume is smaller than the size of the physical volume
before encryption because the encryption feature reserves some space on the physical
volume for the encryption process.
The command to enable encryption on disk hdisk10 is hdcryptmgr pvenable hdisk10. This
command prompts the user for a passphrase to use to unlock the disk and then reserves
some space at the beginning of the disk for metadata. As with logical volume encryption, a
data encryption key is created automatically when the disk is initialized for encryption. The
pvenable action also prompts the user to add a passphrase wrapping key to encrypt the data
encryption key. Additional wrapping keys may be added using the authadd action of the
hdcryptmgr command. Note that since space is reserved for metadata, the space available for
user data on an encrypted physical volume is slightly smaller than the total size of the
physical volume.
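A sketch of enabling and then further protecting an encrypted physical volume follows (the
authadd flags shown are illustrative; verify them against the hdcryptmgr documentation):
# hdcryptmgr pvenable hdisk10                  (prompts for a passphrase to wrap the data encryption key)
# hdcryptmgr authadd -t pks -n pks1 hdisk10    (adds a PKS wrapping key so the disk can unlock at boot)
# hdcryptmgr showpv                            (shows the lock state of the encrypted physical volumes)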
When the key is stored in a PKS or in a network key manager, the physical volume is
unlocked automatically during the boot process. The authunlock action of the hdcryptmgr
command can be used to manually unlock an encrypted physical volume. Any attempt to
perform an I/O operation on a locked encrypted physical volume fails with a permission
denied error until that physical volume is unlocked.
If the AIX LPAR is rebooted, encrypted disks that use only the passphrase wrapping key
protection method must be manually unlocked using the hdcryptmgr authunlock action. If
one of the other methods, such as using a key server or PKS, has been added to the disk
using the authadd action, AIX attempts to automatically unlock the disk during boot. Any
attempt to do I/O to an encrypted disk that is still locked fails. Figure 4-1 illustrates the
encryption process.
Figure 4-2 shows the output of the showpv and showmd actions of the hdcryptmgr command.
The showpv output displays three encrypted disks, two that are unlocked (able to be read from
or written to) and one that is locked. The locked disk requires hdcryptmgr authunlock
hdisk32 before it is usable.
If the data backup operation is running in the operating system instance, the operating system
reads data and decrypts that data before sending it to the backup software. The backup
media contains the decrypted user data. The metadata related to encryption is not stored in
the backup media. If this backup data is restored to another physical volume, data is
encrypted only if encryption is enabled for that physical volume. If encryption is not enabled
for the destination physical volume, the restored data is not encrypted and can be used
directly even by older levels of AIX.
If data is backed up by using a storage device such as snapshot or IBM FlashCopy®, the data
that is backed up is encrypted. The backup data in the storage device includes both the
encryption metadata and the encrypted user data. The storage-based backup is a
block-for-block copy of the encrypted data and the storage cannot determine that the data is
encrypted by the operating system.
In addition to the standard UNIX discretionary access control (DAC), AIX provides Access
Control Lists (ACLs). ACLs enable you to define access to files and directories more
granularly. Typically, an ACL consists of a series of entries called Access Control Entries
(ACEs). Each ACE defines the access rights for a user in relationship to the object.
When an access is attempted, the operating system will use the ACL associated with the
object to see whether the user has the rights to do so. These ACLs and the related access
checks form the core of the Discretionary Access Control (DAC) mechanism supported by
AIX.
The operating system supports several types of system objects that allow user processes to
store or communicate information. The most important types of access controlled objects are
as follows:
Files and directories
Named pipes
IPC objects such as message queues, shared memory segments, and semaphores
All access permission checks for these objects are made at the system call level when the
object is first accessed. Because System V Interprocess Communication (SVIPC) objects are
accessed statelessly, checks are made for every access. For objects with file system names,
it is necessary to be able to resolve the name of the actual object. Names are resolved either
relatively (to the process' working directory) or absolutely (to the process' root directory). All
name resolution begins by searching one of these directories.
The discretionary access control mechanism allows for effective access control of information
resources and provides for separate protection of the confidentiality and integrity of the
information. Owner-controlled access control mechanisms are only as effective as users
make them. All users must understand how access permissions are granted and denied, and
how these are set.
For example, an ACL associated with a file system object (file or directory) could enforce the
access rights for various users in regards to access of the object. It is possible that such an
ACL could enforce different levels of access rights, such as read or write, for different users.
The following list contains the direct access control attributes for the different types of
objects.
Owner
For System V Interprocess Communication (SVIPC) objects, the creator or owner can
change the object's ownership. SVIPC objects have an associated creator that has all the
rights of the owner (including access authorization). The creator cannot be changed, even
with root authority.
Group
The owner of an object can change the group. The new group must be either the effective
group ID of the creating process or the group ID of the parent directory. SVIPC objects are
initialized to the effective group ID of the creating process. For file system objects, the direct
access control attributes are initialized to either the effective group ID of the creating process
or the group ID of the parent directory (this is determined by the group inheritance flag of the
parent directory). (As above, SVIPC objects have an associated creating group that cannot
be changed, and share the access authorization of the object group.)
Mode
The chmod command (in numeric mode with octal notations) can set base permissions and
attributes. The chmod subroutine that is called by the command disables extended
permissions: the extended permissions are disabled if you use the numeric mode of the
chmod command on a file that has an ACL. The symbolic mode of the chmod command
disables extended ACLs for the NFS4 ACL type but does not disable extended permissions
for AIXC type ACLs. For information about numeric and symbolic mode, see chmod.
Many objects in the operating system, such as sockets and file system objects, have ACLs
associated for different subjects. Details of ACLs for these object types could vary from one to
another.
Traditionally, AIX has supported mode bits for controlling access to file system objects. It
has also supported a unique form of ACL built around the mode bits. This ACL consists of the
base mode bits and also allows for the definition of multiple ACE entries, with each ACE
defining access rights for a user or group in terms of the mode bits. This classic ACL behavior
continues to be supported and is named the AIXC ACL type.
Note that support of an ACL on file system objects depends on the underlying physical file
system (PFS). The PFS must understand the ACL data and be able to store, retrieve, and
enforce the accesses for various users. Some physical file systems do not support any ACLs
at all (they may support just the base mode bits), while others support multiple ACL types. A
few of the file systems under AIX have been enhanced to support multiple ACL types. JFS2
and GPFS have the capability to support the NFS version 4 protocol-based ACL type as well.
This ACL is named the NFS4 ACL type on AIX. It adheres to most of the ACL definition in the
NFS version 4 protocol specifications, supports more granular access controls as compared
to the AIXC ACL type, and provides capabilities such as inheritance.
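For example, ACLs can be inspected and converted from the command line by using the
standard AIX ACL commands:
# aclget /project/data.txt                (displays the ACL of the file)
# aclconvert -t NFS4 /project/data.txt    (converts the file's ACL to the NFS4 type)
# acledit /project/data.txt               (opens the ACL in an editor; requires the EDITOR variable to be set)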
Most environments require that different users manage different system administration duties.
It is necessary to maintain separation of these duties so that no single system management
user can accidentally or maliciously bypass system security. While traditional UNIX system
administration cannot achieve these goals, role-based access control (RBAC) can.
Beginning with AIX 6.1, a new implementation of RBAC provides a fine-grained
mechanism to segment system administration tasks. Because these two RBAC
implementations differ greatly in functionality, the following terms are used:
Legacy RBAC Mode: The historic behavior of AIX roles that apply to versions before AIX 6.1
Enhanced RBAC Mode: The new implementation introduced with AIX 6.1
Both modes of operation are supported. However, Enhanced RBAC Mode is the default on
newly installed AIX systems after AIX 6.1. The following sections provide a brief discussion of
the two modes and their differences. We also include information on configuring the system to
operate in the desired RBAC mode.
While the legacy implementation provides the ability to partially segment system
administration responsibilities, it functions with the following constraints:
1. The framework requires changes to commands and applications to be RBAC-enabled.
2. Predefined authorizations are not granular and the mechanisms to create authorizations
are not robust.
3. Membership in a certain group is often required as well as having a role with a given
authorization in order to run a command.
4. Separation of duties is difficult to implement. If a user is assigned multiple roles, there is
no way to act under a single role. The user always has all of the authorizations for all of
their roles.
Legacy RBAC Mode is supported for compatibility, but Enhanced RBAC Mode is the default
RBAC mode and is preferred on AIX.
The enhanced mode's integration options center on the use of granular privileges and
authorizations and the ability to configure any command on the system as a privileged
command. Features of the enhanced RBAC mode are installed and enabled by default on all
installations of AIX beginning with AIX 6.1.
The enhanced RBAC mode provides a configurable set of authorizations, roles, privileged
commands, devices, and files through the following RBAC databases. With enhanced RBAC,
the databases can reside either in the local file system or can be managed remotely through
LDAP.
Authorization database
Role database
Privileged command database
Privileged device database
Privileged file database
Enhanced RBAC mode introduces a new naming convention for authorizations that allows a
hierarchy of authorizations to be created. AIX provides a granular set of system-defined
authorizations and an administrator is free to create additional user-defined authorizations as
necessary.
The behavior of roles has been enhanced to provide separation of duty functionality.
Enhanced RBAC introduces the concept of role sessions. A role session is a process with
one or more associated roles. A user can create a role session for any roles that they have
been assigned, thus activating a single role or several selected roles at a time. By default, a
new system process does not have any associated roles. Roles can further be configured to
require that the user authenticate before activating the role. This protects against an attacker
taking over a user session, because the attacker would then need to authenticate to activate
the user’s roles.
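For example, a user might inspect and activate an assigned role as follows (the role name is
illustrative):
# rolelist             (lists the roles assigned to the invoking user)
# swrole SysAudit      (starts a role session for the named role; may prompt for authentication)
# rolelist -e          (lists the roles that are active in the current session)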
The introduction of the privileged command database implements the least privilege principle.
The granularity of system privileges has been increased, explicit privileges can be granted to
a command, and the execution of the command can be governed by an authorization. This
provides the functionality to enforce authorization checks for command execution without
requiring a code change to the command itself. Use of the privileged command database
eliminates the requirement for SUID and SGID applications, because only the privileges a
command actually needs are assigned to it.
The privileged device database allows access to devices to be governed by privileges, while
the privileged file database allows unprivileged users access to restricted files based on
authorizations. These databases increase the granularity of system administrative tasks that
can be assigned to users who are otherwise unprivileged.
The information in the RBAC databases is gathered and verified and then sent to an area of
the kernel designated as the Kernel Security Tables (KST). It is important to note that the
state of the data in the KST determines the security policy for the system. Entries that are
added or changed in the RBAC databases do not take effect until the databases are sent to
the KST, which is done with the setkst command.
Note: A full discussion of Role-based access control on AIX can be found in IBM
Documentation at
https://www.ibm.com/docs/en/aix/7.3?topic=system-role-based-access-control.
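The following sketch shows the typical flow for defining a user-defined authorization,
attaching it to a command, and assigning it through a role (all names are hypothetical, and
the privilege set shown is illustrative):
# mkauth myorg.backup                             (creates a user-defined authorization)
# setsecattr -c accessauths=myorg.backup innateprivs=PV_DAC_R /usr/local/bin/runbackup
# mkrole authorizations=myorg.backup BackupOperator
# chuser roles=BackupOperator bob                 (assigns the role to user bob)
# setkst                                          (sends the updated databases to the Kernel Security Tables)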
Note: For AIX users, these commands are available in the IBM AIX Toolbox for Open
Source Software at
https://www.ibm.com/support/pages/aix-toolbox-open-source-software-overview.
In addition to the GNU General Public License (GPL), each of these packages includes its
own licensing information, so remember to consult the individual tools for their licensing
information.
Important: The freeware packages provided in the AIX Toolbox for Open Source Software
are made available as a convenience to IBM customers. IBM does not own these tools, did
not develop or exhaustively test them, and does not provide support for them. IBM has
compiled these tools so that they run on AIX.
With AIX Security Expert, you can easily apply a chosen security level without the need for
extensive research and manual implementation of individual security elements. Additionally,
the tool enables you to create a security configuration snapshot, which can be used to
replicate the same settings across multiple systems, streamlining security management and
ensuring consistency across an enterprise environment.
AIX Security Expert can be accessed either through SMIT or by using the aixpert command.
AIX Security Expert provides a menu to centralize effective and common security
configuration settings. These settings are based on extensive research on properly securing
UNIX systems. Default security settings are provided for broad security environment needs
(High Level Security, Medium Level Security, and Low Level Security), and advanced
administrators can set each security configuration setting independently.
Configuring a system at too high a security level might deny necessary services. For
example, telnet and rlogin are disabled for High Level Security because the login password is
sent over the network unencrypted. Conversely, if a system is configured at too low a security
level, it can be vulnerable to security threats. Since each enterprise has its own unique set of
security requirements, the predefined High Level Security, Medium Level Security, and Low
Level Security configuration settings are best used as a starting point rather than an exact
match for the security requirements of a particular enterprise.
The practical approach to using AIX Security Expert is to establish a test system (in a realistic
test environment) similar to the production environment in which it will be deployed. Install the
necessary business applications and run AIX Security Expert via the GUI. The tool will
analyze this running system in its trusted state. Depending on the security options you
choose, AIX Security Expert will enable port scan protection, turn on auditing, block network
ports not used by business applications or other services, and apply many other security
settings. After re-testing with these security configurations in place, the system is ready to be
deployed in a production environment. Additionally, the AIX Security Expert XML file defining
the security policy or configuration of this system can be used to implement the exact same
configuration on similar systems in your enterprise.
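For example, the following sketch applies a security level, verifies it, and captures the
resulting policy as XML for reuse on other systems (the file name is illustrative; verify the
flags against the aixpert man page):
# aixpert -l high                           (applies the High Level Security settings)
# aixpert -c                                (checks the system against the applied settings)
# aixpert -l high -n -o /tmp/mypolicy.xml   (writes the high-level settings to an XML file without applying them)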
Note: For more information on security hardening, see NIST Special Publication 800-70,
National Checklist Program for IT Products. The fourth revision of the
document is at: https://csrc.nist.gov/pubs/sp/800/70/r4/final.
A full discussion of AIX Security Expert on AIX v7.3 is available on IBM Documentation at
https://www.ibm.com/docs/en/aix/7.3?topic=security-aix-expert.
The fpm command allows administrators to harden their system by setting permissions for
important binaries and dropping the setuid and setgid bits on many commands in the
operating system. This command is intended to remove the setuid permissions from
commands and daemons that are owned by privileged users, but you can also customize it to
address the specific needs of unique computer environments.
The setuid programs on the base AIX operating system have been grouped to allow for levels
of hardening. This grouping allows administrators to choose the level of hardening according
to their system environment. Also, you can use the fpm command to customize the list of
files that it processes.
Changing execution permissions of commands and daemons with the fpm command affects
non-privileged users, denying their access to these commands and daemons or functions of
the commands and daemons. Also, other commands that call or depend on these commands
and daemons can be affected. Any user-created scripts that depend on commands and
daemons with permissions that were altered by the fpm command cannot operate as
expected when run by non-privileged users. Give full consideration to the effect and potential
impact of modifying default permissions of commands and daemons.
Perform appropriate testing before using this command to change the execution permissions
of commands and daemons in any critical computer environment. If you encounter problems
in an environment where execution permissions have been modified, restore the default
permissions and recreate the problem in this default environment to ensure that the issue is
not due to lack of appropriate execution permissions.
The fpm command provides the capability to restore the original AIX installation default
permissions by using the -l default flag.
Also, the fpm command logs the permission state of the files before changing them. The fpm
log files are created in the /var/security/fpm/log/date_time file. If necessary, you can use
these log files to restore the system's file permissions that are recorded in a previously saved
log file.
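For example (a sketch; verify the flags against the fpm man page for your AIX level):
# fpm -l high       (applies the high-level permission hardening)
# fpm -l default    (restores the AIX installation default permissions)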
When the fpm command is used on files that have extended permissions, it disables the
extended permissions, though any extended permission data that existed before the fpm
invocation is retained in the extended ACL.
Customized configuration files can be created and enacted as part of the high, medium, low,
and default settings. File lists can be specified in the /usr/lib/security/fpm/custom/high/*
directory, the /usr/lib/security/fpm/custom/medium/* directory, and the
/usr/lib/security/fpm/custom/default/* directory. To take advantage of this feature,
create a file containing a list of files that you want to be automatically processed in addition to
the fpm command's internal list. When the fpm command is run, it also processes the lists in
the corresponding customized directories. To see an example of the format for a customized
file, view the /usr/lib/security/fpm/data/high_fpm_list file. The default format can be
viewed in the /usr/lib/security/fpm/data/default_fpm_list.example file. For the
customization of the -l low flag, the fpm command reads the same files in the
/usr/lib/security/fpm/custom/medium directory, but removes the setgid permissions,
whereas the -l medium flag removes both the setuid and setgid permissions.
A malicious user typically impacts a system by gaining unauthorized access and then
installing harmful programs such as Trojans or rootkits, or by modifying sensitive security
files, thereby rendering the system vulnerable and prone to exploitation. Trusted Execution
aims to prevent such activities or, in cases where incidents do occur, to quickly identify them.
Using the functionality provided by Trusted Execution, the system administrator can define the
exact set of executables that are permitted to run or specify the kernel extensions that are
allowed to load. Additionally, it can be utilized to examine the security status of the system
and identify files that have been updated, thereby raising the trustworthiness of the system
and making it harder for an attacker to cause damage.
Trusted Execution is a more powerful and enhanced mechanism that overlaps some of the
TCB functionality and provides advanced security policies to better control the integrity of
the system. While the TCB is still available, TE introduces a new and more advanced
concept of verifying and guarding the system integrity.
AIX Trusted Execution uses whitelisting to prevent or detect malware that is executed on your
AIX system. It provides the following features:
– Provides cryptographic checking that allows you to determine whether a hacker has
replaced an IBM published file with his own Trojan horse
– Provides the ability to scan for rootkits
– Provides the ability to detect if various attributes of a file have been altered
– Provides the ability to correct certain file attribute errors
– Provides “white listing” functionality
– Provides numerous configuration options
– Provides the ability to detect and/or prevent malicious scripts, executables, kernel
extensions, and libraries
– Provides functionality for protecting files from alteration by a hacker that has gained
root access
– Provides functionality for protecting the Trusted Execution configuration from a hacker
that has gained root access
– Provides functionality for utilizing digital signatures to verify that IBM and non-IBM
published files have not been altered by an attacker
For integrity checking, Trusted Execution provides both system and runtime checking,
whereas the TCB provides system checking only.
In order for TE to work, the CryptoLight for C library (CLiC) and kernel extension must be
installed. To see if it is installed and loaded into the kernel, run the commands shown in
Example 4-44.
If the file set is not installed, install it on your system and load it into the kernel when the
installation completes successfully, by running:
# /usr/lib/methods/loadkclic
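A quick check for the filesets and the loaded kernel extension might look like the following
(Example 4-44 itself is not reproduced here):
# lslpp -l | grep -i clic    (verifies that the CLiC filesets are installed)
# genkex | grep -i clic      (verifies that the CLiC kernel extension is loaded)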
Every trusted file should ideally have an associated stanza or file definition stored in the TSD.
A file can be marked as trusted by adding its definition to the TSD using the trustchk
command. This command can be used to add, delete, or list entries in the TSD. The TSD can
be locked so that even root can no longer write to it. Locking the TSD becomes effective
immediately. Example 4-45 shows how the ksh command appears in the TSD db file.
/usr/lib/drivers/igcts:
Owner = root
Group = system
Mode = 555
Type = HLINK
Size = 7714
Cert_tag = 00af4b62b878aa47f7
Signature =
b47d75587bbd4005c3fe98015d9c0776fd8d40f976fb0f529796ffe1b2f9028500ffd2383ca31cd2f39712f70e36c522dc1ba5
2c44334781a389ea06cdabd82c72d705fd94
bffe59817b5a4d45651e2d5457cb83ebdb3b705a3b5c981c51eae79facfe271fbde0e396b7ea64d4dbd6ab753a3fa7a9578b7f
5e6458b83d8f08df
Hash_value = 6d13bbd588ecfdd06cbb2dc3a17eabad6b51a42bd1fd62e7ae5402a75116e8bd
To enable the TSD for write access again, you either need to turn off TE completely or set
tsd_lock to off using the trustchk command.
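For example, a custom script can be added to the TSD and the database locked as follows (a
sketch; the lock policy appears as TSD_LOCK in some documentation, so verify the attribute
name against the trustchk man page):
# trustchk -a /usr/local/bin/myscript    (adds the file's definition to the TSD)
# trustchk -p TSD_LOCK=ON                (locks the TSD against further writes, even by root)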
When the system is blocking any untrusted shell scripts by using the CHKSCRIPT policy, as
shown in Example 4-48, make sure that all scripts needed by your services are included in
the TSD.
For example, if you are using OpenSSH, make sure the Ssshd and Ksshd start and stop
scripts in /etc/rc.d/rc2.d are in the TSD. Otherwise, sshd does not start when the system is
restarted and is not shut down on a system shutdown.
When you try to start a script with chkscript=on and that script is not included in the TSD, its
execution is denied, regardless of its permissions, even when root is starting it. This is shown
in Example 4-49.
# ls -l foo
-rwx------ 1 root system 17 May 10 11:51 foo
The Trusted Execution Path defines a list of directories that contain the trusted commands.
When Trusted Execution Path verification is enabled, the system loader allows only
commands in the specified paths to run.
The Trusted Library Path has the same function as the Trusted Execution Path, with the only
difference being that it defines the directories that contain the trusted libraries of the system.
When the TLP is enabled, the system loader allows only the libraries from these paths to be
linked to the commands.
The trustchk command can be used to enable or disable the Trusted Execution Path or
Trusted Library Path, as well as to set the colon-separated path list for both, using the TEP
and TLP command-line attributes of trustchk. Take care when changing these lists: doing so
incorrectly most probably results in a system that will not restart and function properly,
because it can no longer access necessary files and data.
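A sketch of setting and enabling the Trusted Execution Path follows (the path list is
illustrative):
# trustchk -p TEP=/usr/bin:/usr/sbin    (sets the colon-separated list of trusted directories)
# trustchk -p TEP=ON                    (enables Trusted Execution Path verification)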
# cp /usr/bin/ls /usr/bin/.goodls
- Hash value of "/usr/bin/ls" command changed
# trustchk -p TE=ON CHKEXEC=ON STOP_ON_CHKFAIL=ON
# ls
ksh: ls: 0403-006 permission denied.
# cp /usr/bin/ls /usr/bin/.badls
# cp /usr/bin/.goodls /usr/bin/ls
# chown bin:bin /usr/bin/ls
# ls
file1 file2 dir1
With the constant threat of security breaches, companies are under pressure to lock down
every aspect of their applications, infrastructure, and data.
One method of securing IBM AIX network transactions is to establish networks based on the
IPsec protocol. Internet Protocol Security (IPsec) is a standard protocol suite, supported by
AIX, that defines how to secure a computer network at the IP layer. When determining how to
secure your IPsec connections, you may need to consider these items:
Connectivity architecture: whether it is an internal or external connection.
Encryption mechanisms or the use of authentication services.
The native AIX IPsec tooling uses the genfilt and mkfilt commands to add and activate the
filter rules. It can also be used to control the filter logging functions. This works with both IP
version 4 and IP version 6. With the IPsec feature enabled, you can also create IP filtering
rules to block an IP address from accessing hosts or specific ports.
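For example, a deny rule for a single host might be added and activated as follows (a sketch;
the addresses are illustrative and the exact flags should be verified against the genfilt and
mkfilt man pages):
# genfilt -v 4 -a D -s 203.0.113.25 -m 255.255.255.255 -d 0.0.0.0 -M 0.0.0.0 -c all -w I
# mkfilt -v 4 -u                        (activates the updated IPv4 filter rules)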
One of the interesting features of IPsec is IP Security dynamic tunnels. These tunnels use
the Internet Key Exchange (IKE) protocol to protect IP traffic by authenticating and encrypting
IP data. The ike command performs several functions, such as activating, removing, or
listing IKE and IP Security tunnels.
By default, auditing is disabled in AIX. When activated, the auditing subsystem begins
collecting information based on your configuration settings. The frequency of auditing
depends on your environment and usage patterns. While recommended for enhanced
security and troubleshooting, the decision to enable auditing and its frequency is ultimately
yours.
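Auditing is controlled with the audit command; for example:
# audit start       (starts the audit subsystem using the configured settings)
# audit query       (displays the current audit status)
# audit shutdown    (stops auditing and flushes the audit records)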
The audit logger is responsible for constructing the complete audit record, which consists of
the audit header, containing information common to all events (such as the name of the
event, the user responsible, and the time and return status of the event), and the audit trail,
which contains event-specific information. The audit logger appends each successive record
to the kernel audit trail, which can be written in either (or both) of two modes:
– BIN mode
The trail is written into alternating files, providing for safety and long-term storage.
– STREAM mode
The trail is written to a circular buffer that is read synchronously through an audit
pseudo-device. STREAM mode offers immediate response.
Information collection can be configured at both the front end (event recording) and at the
back end (trail processing). Event recording is selectable on a per-user basis. Each user has
a defined set of audit events that are logged in the audit trail when they occur. At the back
end, the modes are individually configurable, so that the administrator can employ the
back-end processing best suited for a particular environment. In addition, BIN mode auditing
can be configured to generate an alert in case the file system space available for the trail is
getting too low.
These processing options help manage and analyze audit data effectively.
The STREAM mode audit trail can be monitored in real time, to provide immediate
threat-monitoring capability. Configuration of these options is handled by separate programs
that can be invoked as daemon processes to filter either BIN or STREAM mode trails,
although some of the filter programs are more naturally suited to one mode or the other.
To ensure that the AIX audit subsystem can retrieve information from the AIX security audit,
you must configure the following files on the AIX server that is to be monitored:
– streamcmds
– config
– events
– objects
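These files reside in /etc/security/audit. A minimal config excerpt might look like the following
(illustrative; verify stanza names and attributes against the config file documentation for your
AIX level):
start:
    binmode = on
    streammode = off
bin:
    trail = /audit/trail
    bin1 = /audit/bin1
    bin2 = /audit/bin2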
For more information on how to configure the AIX audit subsystem for collecting, recording,
and auditing events, see the following links:
https://www.ibm.com/support/pages/aix-audit-audit-subsystem-aix
https://www.ibm.com/docs/en/aix/7.3?topic=files-config-file
4.11.2 Accounting
The accounting subsystem provides features for monitoring system resource utilization and
billing users for the use of resources. Accounting data can be collected for a variety of
system resources: processors, memory, disks, and so on.
Another kind of data collected by the accounting system is connect-time usage accounting,
which lets us know how many users are connected to a system and for how long. The
connect time data enables us to detect unused accounts, which have to be invalidated (for
security reasons) or even erased to save resources. Also, the connect-time usage may
enable the discovery of suspect activities (such as too many unsuccessful logon attempts)
that signal that security measures should be adopted.
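For example, connect-time data can be summarized with the standard commands (assuming
the accounting fileset, bos.acct, is installed):
# ac -p                                    (prints the total connect time per user)
# who /etc/security/failedlogin | more     (lists unsuccessful login attempts)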
The data collected by the accounting subsystem is used to automatically generate reports,
such as daily and weekly reports. The reports can be generated at any time, using
accounting-specific commands. The accounting subsystem provides tools that enable us to
observe how the system reacts at a particular moment in time (for instance, when executing a
specific command or task).
For more details on how to set up the accounting subsystem and on accounting internals,
see the following link:
https://www.ibm.com/docs/en/aix/7.3?topic=accounting-administering-system
An event in the AIX Event Infrastructure is any change in a system state or value that the
kernel or a kernel extension can detect at the moment the modification takes place. These
events are represented as files within a specialized pseudo file system (ahafs).
The AIX Event Infrastructure offers several benefits, including:
There is no need for constant polling. Users monitoring the events are notified when those
events occur.
Detailed information about an event (such as stack trace and user and process
information) is provided to the user monitoring the event.
Existing file system interfaces are used so that there is no need for a new application
programming interface (API).
Control is handed to the AIX Event Infrastructure at the exact time the event occurs.
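The pseudo file system must be mounted before events can be monitored; a minimal sketch
follows (actual monitoring is done by writing a monitor string to a .mon file and then waiting in
select() or read(), which is omitted here):
# mkdir -p /aha
# mount -v ahafs /aha /aha    (mounts the AIX Event Infrastructure pseudo file system)
# ls /aha                     (lists the available event producer directories)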
The POWER8 processor provided a new set of VMX/VSX in-core symmetric cryptographic
instructions that are aimed at improving performance of various crypto operations. In most
circumstances, the in-core crypto instructions provide better performance with lower latency
and no extra page requirements. To be able to use the in-core crypto instructions in the
kernel, there is a small amount of overhead to save and restore the vector register content.
Additional improvements were made in the IBM Power9 and IBM Power10 chips to increase
the encryption capabilities and greatly improve system performance.
The ACF kernel services are implemented in the pkcs11 device driver (kernel extension),
providing services for other kernel subsystems such as EFS, IPsec, and LV encryption. User
space applications can also use ACF kernel services by calling the AIX PKCS #11 subsystem
library (/usr/lib/pkcs11/ibm_pkcs11.so).
The purpose of this feature is to improve the performance of the PKCS #11 (Public Key
Cryptography Standards) kernel extension.
Use of the AES instruction set can greatly reduce CPU utilization and improve the
performance of AIX applications that use AES crypto features, such as EFS, IPsec, and
Trusted Execution.
This feature enables the in-core vector AES crypto instructions under the CLiC interfaces of
the pkcs11 kernel extension:
Customers can enable or disable the in-core support in the ACF kernel extension through a
CLI interface.
ODM support is provided for enabling or disabling the feature across reboots.
The status of the in-core crypto enablement can be displayed.
It is supported on IBM POWER8 and later.
Prerequisites:
– OS level: AIX 7.2 TL5 and later
– VIOS: 3.1
– Hardware: POWER8 or higher
– Firmware: Any
Enablement, how to turn it on:
– Two flags are introduced:
• in_core_capable
• in_core_enabled (acfo -t in_core_enabled=1)
LDAP defines a message protocol used by directory clients and directory servers. LDAP
originated from the X.500 Directory Access Protocol, which is considered heavyweight: X.500
needs the entire OSI protocol stack, whereas LDAP is built on the TCP/IP stack. LDAP is also
considered lightweight because it omits many X.500 operations that are rarely used.
An application-specific directory stores only the information needed by that application and
typically has no general search capability. Keeping multiple copies of information up to date
and synchronized is difficult. What is needed is a common, application-independent
directory, and LDAP makes such a single common directory achievable: clients can interact
with it independent of the platform and can be set up without any dependency on a particular
server implementation.
LDAP works with most vendor directory services, such as Active Directory (AD). With LDAP,
sharing information about users, services, systems, networks, and applications from a
directory service to other applications and services becomes easier to implement. When
using LDAP, the client access is independent of the platform. Since LDAP is a standard
protocol, clients can be setup without any dependency on the specific LDAP server being
utilized.
For example, if you have a Microsoft Active Directory (LDAP server), you can configure an
LDAP client with the IBM TDS filesets and access the data from the server. Example 4-52
shows a sample of an LDAP entry for multiple applications.
When set up to use LDAP, multiple applications, such as IBM Verse, an intranet page,
BestQuest, RQM, and ClearQuest, can be connected to a user entry in the same directory. If
a user changes their password once, the change is reflected in all the applications.
For more information on how to set up an LDAP server and to configure clients in AIX, see the
following:
– Integrating AIX into Heterogeneous LDAP Environments, SG24-7165
– http://theaix.blogspot.com/2009/10/ldap-in-aix.html
– https://community.spiceworks.com/t/how-to-install-ldap-on-aix-7-1-and-configur
e-as-ldap-server/836720/2
– https://www.ibm.com/docs/bg/aix/7.2?topic=module-setting-up-ldap-client
The AIX LDAP load module is fully integrated within the AIX operating system. After the LDAP
authentication load module is enabled to serve user and group information, high-level APIs,
commands, and system-management tools work in their usual manner. An -R flag is
introduced for most high-level commands to work through different load modules.
AIX supports LDAP-based user and group management, integrating with IBM Security Verify
Directory servers, non-IBM RFC 2307-compliant servers, and Microsoft Active Directory. The
recommended option for use in defining AIX users and groups is IBM Security Verify
Directory. Refer to Setting up an IBM Security Verify Directory Server for more information on
setting up the server.
AIX supports non-IBM directory servers as well. A directory server that is RFC 2307
compliant is supported and AIX treats these servers similarly to IBM Security Verify Directory
Servers. Directory servers that are not RFC 2307 compliant can be used but they require
additional manual configuration to map the data schema. There may be some limitations due
to these schema differences.
AIX also supports Microsoft Active Directory (AD) as an LDAP server for user and group
management. This requires that the UNIX supporting schema be installed (included in
Microsoft Services for UNIX). AIX supports AD running on Windows 2000, 2003, and 2003 R2
with specific SFU schema versions.
Some AIX commands may not function with LDAP users if the server is AD due to differences
in user and group management between UNIX and Windows systems. Most user and group
management commands (e.g., lsuser, chuser, rmuser, lsgroup, chgroup, rmgroup, id, groups,
passwd, chpasswd) should work, depending on access rights.
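For example, the -R flag directs a command at a particular load module:
# lsuser -R LDAP bob                      (lists the attributes of the LDAP-defined user bob)
# chuser -R LDAP gecos="Bob Smith" bob    (changes an attribute of the LDAP-defined user)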
The procedure to set up the AIX security subsystem to use IBM Security Verify Directory
(LDAP) involves two steps. The first step sets up an IBM Security Verify Directory server that
serves as a centralized repository for user and group information for authentication. The
second step sets up the host systems (clients) to use the IBM Security Verify Directory server
for authentication and to retrieve user and group information.
Instructions for installing the IBM AIX LDAP client filesets can be found here:
https://www.ibm.com/support/pages/ldap-aix-step-step-instructions-installing-ldap-
client-filesets-aix
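Client setup is typically performed with the mksecldap command; for example (the server
name, bind DN, password, and base DN are illustrative):
# mksecldap -c -h ldapserver.example.com -a cn=admin -p adminpwd -d cn=aixdata,o=example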
A small system might have three to five users and a large system might have several
thousand users. Some installations have all their workstations in a single, relatively secure
area. Others have widely distributed users, including users who connect by dialing in and
indirect users connected through personal computers or system networks. Security on IBM i
is flexible enough to meet the requirements of this wide range of users and situations.
System security has some important objectives. Each security control or mechanism should
satisfy one or more of the following security goals:
Confidentiality
Confidentiality concerns include:
Protecting against disclosing information to unauthorized people
Restricting access to confidential information
Protecting against curious system users and outsiders
Integrity
Integrity is an important aspect when applied to data within your enterprise. Integrity goals
include:
Protecting against unauthorized changes to data
Restricting manipulation of data to authorized programs
Providing assurance that data is trustworthy
Availability
Systems are often critical to keep an enterprise running. Availability includes:
Preventing accidental changes or destruction of data
Protecting against attempts by outsiders to abuse or destroy system resources
Authentication
Ensuring that your data is only accessible by entities that are authorized is one of the basic
tenets of data security. Proper authentication methodologies are important to:
Determine whether users are who they claim to be. The most common technique to
authenticate is by user profile name and password.
Provide additional methods of authentication such as using Kerberos as an authentication
protocol in a single sign-on (SSO) environment.
Authorization
Once a user is authenticated, it is also important to ensure that they only access the data and
tasks that are relevant to their job. Proper authorization is important to:
Permit a user to access resources and perform actions on them.
Define access permissions (public or private rights) to objects to ensure that they are not
accessed except by those that have authorization.
System security is often associated with external threats, such as hackers or business rivals.
However, protection against system accidents by authorized system users is often the
greatest benefit of a well-designed security system. In a system without good security
features, pressing the wrong key might result in deleting important information. System
security can prevent this type of accident.
The best security system functions cannot produce good results without good planning.
Security that is set up in small pieces, without planning, can be confusing and is difficult to
maintain and to audit. Planning does not imply designing the security for every file, program,
and device in advance. It does imply establishing an overall approach to security on the
system and communicating that approach to application designers, programmers, and
system users.
As you plan security on your system and decide how much security you need, consider these
questions:
Is there a company policy or standard that requires a certain level of security?
Do the company auditors require some level of security?
How important is your system, and the data on it, to your business?
How important is the error protection provided by the security features?
What are your company security requirements for the future?
To facilitate installation, many of the security capabilities on your system are not activated
when your system is shipped. Recommendations are provided in this chapter to bring your
system to a reasonable level of security. Always consider the security requirements of your
own installation as you evaluate any recommendations.
IBM periodically releases fixes to address issues discovered in IBM i programs. These fixes
are bundled into cumulative PTF packages, which contain recommended fixes for specific
time periods. Consider installing cumulative PTF packages twice a year in dynamic
environments and less frequently in stable ones. Additionally, apply them when making major
hardware or software changes.
By prioritizing fixes, fix groups, cumulative packages, and high-impact pervasive (HIPER)
fixes, you can prevent the security issues that result from failing to apply operating system
fixes for known problems.
Another option for managing fixes is to use an SQL query to identify any issues. This
approach is documented by IBM, and the query is shown in Example 5-1.
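Example 5-1 is not reproduced here; a commonly used query for this purpose (assuming the
SYSTOOLS.GROUP_PTF_CURRENCY service, which compares the installed PTF group
levels against the latest available levels) looks like the following:
SELECT * FROM SYSTOOLS.GROUP_PTF_CURRENCY;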
For more information on staying current on IBM i see this document on using fixes.
Figure 5-2 The QSECURITY system value and the various security levels on IBM i
System values also provide customization of many characteristics of your IBM i platform. You
can use system values to define system-wide security settings. To access the jobs category
of system values from IBM Navigator for i, select Configuration and Service and then select
System Values. This is shown in Figure 5-3.
Figure 5-3 “System values” option under “Configuration and Service” menu within IBM Navigator for i
You can restrict users from changing the security-related system values. The Change SST
Security Attributes (CHGSSTSECA) command, system service tools (SST), and dedicated
service tools (DST) provide an option to lock these system values. By locking the system
values, you can prevent even users with *SECADM or *ALLOBJ special authority from
changing them.
To see a list of all the security-related system values, from the IBM Navigator for i go to
Security → Security Config. info. This is typically related to your security environment
requirement and may differ slightly for every organization.
5.5 Authentication
Authentication is the set of methods used by organizations to ensure that only the authorized
personnel, services, and applications with the correct permissions can get access to
company resources. There are those who wish to gain access to your systems with ill
intentions, thus making authentication a critical part of cybersecurity. These bad actors will try
to steal credentials from users who already have access to your environment. Therefore, your
authentication process should primarily include these three steps:
1. Identification - the requester presents an identity, usually through a user name or other
type of login ID.
2. Authentication - the user proves that identity, usually with a password (a word, phrase,
or sequence of characters that only the user is supposed to know). To strengthen
security, organizations may also require the user to provide something they have (a
phone or token device) or a unique characteristic that is part of their person (a face or
fingerprint scan).
3. Authorization - after the identity is verified, the system grants the user access only to
the systems or applications that they are permitted to use.
To enable a single sign-on environment, IBM provides two technologies that work together to
enable users to sign in with their Windows user name and password and be authenticated to
IBM i platforms in the network: Network Authentication Service (NAS) and Enterprise Identity
Mapping (EIM).
While Network Authentication Service (NAS) allows an IBM i platform to participate in the
Kerberos realm, EIM provides a mechanism for associating these Kerberos principals to a
single EIM identifier that represents that user within the entire enterprise. Other user
identities, such as an IBM i user name, can also be associated with this EIM identifier. When
a user signs on to the network and accesses an IBM i platform, that user is not prompted for a
user ID and password. If the Kerberos authentication is successful, applications can look up
the association to the EIM identifier to find the IBM i user name. The user no longer needs a
password to sign on to IBM i platform because the user is already authenticated through the
Kerberos protocol. Administrators can centrally manage user identities with EIM while
network users need only to manage one password. You can enable single sign-on by
configuring Network Authentication Service (NAS) and Enterprise Identity Mapping (EIM) on
your system.
Note: Full documentation of Single sign-on for IBM i 7.5 can be found at
https://www.ibm.com/docs/en/ssw_ibm_i_75/pdf/rzamzpdf.pdf.
The user profile is a powerful and flexible tool. It controls what the user can do and
customizes the way the system appears to the user. The following list describes some of the
important security features of the user profile:
Special authority
Special authorities determine whether the user is allowed to perform system functions, such
as creating user profiles or changing the jobs of other users. The available special
authorities are enumerated in Table 5-1.
*ALLOBJ All-object (*ALLOBJ) special authority allows the user to access any
resource on the system, whether or not private authority exists for the user.
*SECADM Security administrator (*SECADM) special authority allows a user
to create, change, and delete user profiles.
*JOBCTL The Job control (*JOBCTL) special authority allows a user to
change the priority of jobs and of printing, end a job before it has
finished, or delete output before it has printed. *JOBCTL special
authority can also give a user access to confidential spooled output,
if output queues are specified OPRCTL(*YES).
*SPLCTL Spool control (*SPLCTL) special authority allows the user to
perform all spool control functions, such as changing, deleting,
displaying, holding and releasing spooled files.
*SAVSYS Save system (*SAVSYS) special authority gives the user the
authority to save, restore, and free storage for all objects on the
system, regardless of whether the user has object existence
authority to the objects.
*SERVICE Service (*SERVICE) special authority allows the user to start
system service tools using the STRSST command. This special
authority allows the user to debug a program with only *USE
authority to the program and perform the display and alter service
functions. It also allows the user to perform trace functions.
*AUDIT Audit (*AUDIT) special authority gives the user the ability to view
and change auditing characteristics.
*IOSYSCFG System configuration (*IOSYSCFG) special authority gives the user
the ability to change how the system is configured. Users with this
special authority can add or remove communications configuration
information, work with TCP/IP servers, and configure the internet
connection server (ICS). Most commands for configuring
communications require *IOSYSCFG special authority.
Limit capabilities
The limit capabilities field in the user profile determines whether the user can enter
commands and change the initial menu or initial program when signing on. The Limit
capabilities field in the user profile and the ALWLMTUSR parameter on commands apply only
to commands that are run from the command line, the Command Entry display, FTP, REXEC,
using the QCAPCMD API, or an option from a command grouping menu. Users are not
restricted from performing the following actions:
Run commands in CL programs that are running a command as a result of taking an
option from a menu
Run remote commands through applications
A key component of security is integrity: being able to trust that objects on the system have
not been tampered with or altered. Your IBM i operating system software is protected by
digital signatures.
Signing your software object is particularly important if the object has been transmitted across
the Internet or stored on media which you feel might have been modified. The digital
signature can be used to detect if the object has been altered.
Digital signatures, and their use for verification of software integrity, can be managed
according to your security policies using the Verify Object Restore (QVFYOBJRST) system
value, the Check Object Integrity (CHKOBJITG) command, and the Digital Certificate
Manager tool. Additionally, you can choose to sign your own programs (all licensed programs
shipped with the system are signed).
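As a minimal sketch of how these controls are exercised, the following CL commands display the current restore verification policy and check object integrity for all user profiles; QGPL/OBJITG is a hypothetical output file, and exact parameters may vary by release:

DSPSYSVAL SYSVAL(QVFYOBJRST)
CHKOBJITG USRPRF(*ALL) OUTFILE(QGPL/OBJITG)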
A group profile can own objects on the system. You can also use a group profile as a pattern
when creating individual user profiles by using the copy profile function.
SSL/TLS supports multiple symmetric ciphers and asymmetric public key algorithms. For
example, AES with 128-bit keys is a common symmetric cipher, while RSA and ECC are
commonly used asymmetric algorithms.
Overview
The IBM i system offers multiple SSL/TLS implementations, each adhering to
industry-defined protocols and specifications set by the Internet Engineering Task Force
(IETF). These implementations cater to different application needs and offer varying
functionalities. The specific implementation used by an application depends on the chosen
API set.
For Java applications, the configured JSSE provider determines the implementation, as Java
interfaces are standardized. Alternatively, an application can embed its own implementation
for exclusive use.
System SSL/TLS
System SSL/TLS is a set of generic services that are provided in the IBM i Licensed Internal
Code to protect TCP/IP communications by using the SSL/TLS protocol. System SSL/TLS is
tightly coupled with the operating system and the LIC sockets code, which provides extra
performance and security.
System TLS has the infrastructure to support multiple protocols. The following protocols can
be supported by System TLS:
– Transport Layer Security version 1.3 (TLSv1.3)
– Transport Layer Security version 1.2 (TLSv1.2)
– Transport Layer Security version 1.1 (TLSv1.1)
– Transport Layer Security version 1.0 (TLSv1.0)
– Secure Sockets Layer version 3.0 (SSLv3)
The QSSLPCL special value *OPSYS allows the operating system to change the protocols
that are enabled on the system. The value of QSSLPCL remains the same when the system
upgrades to a newer operating system release. If the value of QSSLPCL is not *OPSYS, then
the administrator must manually add newer protocol versions to QSSLPCL after the system
moves to a new release.
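As a sketch, an administrator who prefers an explicit protocol list over *OPSYS could pin the enabled protocols as follows (the values shown are illustrative and depend on what your release supports):

CHGSYSVAL SYSVAL(QSSLPCL) VALUE('*TLSV1.3 *TLSV1.2')

With an explicit list, you take over responsibility for adding newer protocol versions after each release upgrade.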
For the most current information on System SSL/TLS support for protocols and cipher suites
see this IBM document on System SSL/TLS.
Important: IBM strongly recommends that you always run your IBM i server with the
following network protocols disabled. Using configuration options that are provided by IBM
to enable the weak protocols results in your IBM i server being configured to allow use of
the weak protocols. This configuration results in your IBM i server potentially being at risk
of a network security breach.
– Transport Layer Security version 1.1 (TLSv1.1)
– Transport Layer Security version 1.0 (TLSv1.0)
– Secure Sockets Layer version 3.0 (SSLv3)
– Secure Sockets Layer version 2.0 (SSLv2)
The QSSLCSL system value setting identifies the specific cipher suites that are enabled on
the system. Applications can negotiate secure sessions with only a cipher suite that is listed
in QSSLCSL. No matter what an application does with code or configuration, it cannot
negotiate secure sessions with a cipher suite if it is not listed in QSSLCSL. Individual
application configuration determines which of the enabled cipher suites are used for that
application.
To restrict the System TLS implementation from using a particular cipher suite, follow these
steps:
– Change QSSLCSLCTL system value to special value *USRDFN to allow the
QSSLCSL system value to be edited.
– Remove all cipher suites to be restricted from the list in QSSLCSL.
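A hedged CL sketch of those two steps; the specific cipher suites to remove depend on your release and security policy:

CHGSYSVAL SYSVAL(QSSLCSLCTL) VALUE('*USRDFN')
WRKSYSVAL SYSVAL(QSSLCSL)  /* then remove the restricted suites from the list */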
The QSSLCSLCTL system value special value *OPSYS allows the operating system to
change the cipher suites that are enabled on the system. The value of QSSLCSLCTL
remains the same when the system upgrades to a newer operating system release. If the
value of QSSLCSLCTL is *USRDFN, then the administrator must manually add in newer
cipher suites to QSSLCSL after the system moves to a new release. Setting QSSLCSLCTL
back to *OPSYS also adds the new values to QSSLCSL.
A cipher suite cannot be added to QSSLCSL if the TLS protocol that is required by the cipher
suite is not set in QSSLPCL.
Service tools can be accessed from dedicated service tools (DST) or system service tools
(SST). Service tools user IDs are required to access DST and SST, and to use the
IBM Navigator for i functions for disk unit management.
Service tools user IDs have been referred to as DST user profiles, DST user IDs, service
tools user profiles, or a variation of these names. Within this topic collection, the term “service
tools user IDs” is used.
Note: Full documentation of Service Tools for IBM i 7.5 can be found at
https://www.ibm.com/docs/en/ssw_ibm_i_75/pdf/rzamhpdf.pdf
The service tools user ID you use to access SST needs to have the functional privilege to use
SST. The IBM i user profile needs to have the following authorizations:
Authorization to the Start SST (STRSST) CL command.
Service special authority (*SERVICE).
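A minimal sketch of granting both requirements to a hypothetical profile SECADM1:

GRTOBJAUT OBJ(QSYS/STRSST) OBJTYPE(*CMD) USER(SECADM1) AUT(*USE)
CHGUSRPRF USRPRF(SECADM1) SPCAUT(*SERVICE)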
To exit from SST after performing the desired action, press F3 (Exit) until you get to the Exit
System Service Tools display, then press Enter to end SST.
The service tools user ID that you use to access service tools with DST needs to have the
functional privilege to use DST. You can start the DST by using function 21 from the system
control panel or by using a manual initial program load (IPL).
Accessing service tools using DST from the system control panel
To access service tools using DST from the control panel, complete the following steps:
1. Put the control panel in manual mode.
2. Use the control panel to select function 21 and press Enter. The DST Sign On display
appears on the console.
3. Sign on to DST using your service tools user ID and password. The Use dedicated service
tools (DST) display appears.
A digital certificate is an electronic credential that you can use to establish proof of identity in
an electronic transaction. There are an increasing number of uses for digital certificates to
provide enhanced network security measures. For example, digital certificates are essential
to configuring and using TLS. Using TLS allows you to create secure connections
between users and server applications across an untrusted network, such as the Internet.
TLS provides one of the best solutions for protecting the privacy of sensitive data, such as
user names and passwords, over the Internet. Many IBM i applications, such as
FTP, Telnet, and HTTP Server, provide TLS support to ensure data privacy.
IBM i provides extensive digital certificate support that allows you to use digital certificates as
credentials in a number of security applications. In addition to using certificates to configure
TLS, you can use them as credentials for client authentication in both TLS and virtual private
network (VPN) transactions. Also, you can use digital certificates and their associated
security keys to sign objects. Signing objects allows you to detect changes or possible
tampering to object contents by verifying signatures on the objects to ensure their integrity.
Proper planning and evaluation are the keys to using certificates effectively for their added
security benefits.
As discussed in 2.1, “Encryption technologies and their applications” on page 29, Power10
provides Transparent Memory Encryption which transparently encrypts and protects memory
within the system utilizing the encryption acceleration processors built into the Power10
processing chip, providing protection without performance penalties.
IBM i offers various levels of encryption for databases and attached storage devices. Using
Field Procedures within IBM DB2®, IBM i provides field-level encryption to directly protect
sensitive data fields within the database. Additionally, IBM i supports encryption for directly
attached storage devices to safeguard data at rest within the system.
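As a hedged sketch of field-level encryption, assuming a field procedure program MYLIB.FLDENC has been written and registered as described in the Db2 for i documentation, a sensitive column could be protected with:

ALTER TABLE PAYLIB.EMPLOYEE
  ALTER COLUMN SSN SET FIELDPROC MYLIB.FLDENC;

All names here are hypothetical; the field procedure program performs the encoding and decoding transparently to applications that access the column.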
IBM i includes both software cryptography and a range of cryptographic hardware options for
data protection and secure transaction processing. Users can leverage the built-in encryption
acceleration processors on the Power10 chip or integrate specialized cryptographic
coprocessors—both options provide robust security without compromising performance.
IBM i cryptographic services help ensure data privacy, maintain data integrity, authenticate
communicating parties, and prevent repudiation when a party denies having sent a message.
Cryptographic Services supports a hierarchical key system. At the top of the hierarchy is a set
of master keys. These keys are the only key values stored in the clear (unencrypted).
Cryptographic services securely stores the master keys within the IBM i Licensed Internal
Code (LIC).
Eight general-purpose master keys are used to encrypt other keys which can be stored in
keystore files. Keystore files are database files. Any type of key supported by cryptographic
services can be stored in a keystore file, for example AES, RC2, RSA, SHA1-HMAC.
In addition to the eight general-purpose master keys, cryptographic services supports two
special-purpose master keys. The ASP master key is used for protecting data in the
Independent Auxiliary Storage Pool (in the Disk Management GUI this is known as an
Independent Disk Pool). The save/restore master key is used to encrypt the other master
keys when they are saved to media using a Save System (SAVSYS) operation.
After you connect to IBM Navigator for i, click Security → Cryptographic Services Key
Management. From there, you can manage master keys and cryptographic keystore files.
You can also use the cryptographic services APIs or the control language (CL) commands to
work with the master keys and keystore files.
Note: You should use Transport Layer Security (TLS) to reduce the risk of exposing key
values while performing key management functions.
Note: The IBM 4767 Cryptographic Coprocessor is no longer available but it is still
supported.
You can specify detailed authorities, such as adding records or changing records, or you can
use the system-defined subsets of authorities: *ALL, *CHANGE, *USE, and *EXCLUDE.
Files, programs, and libraries are the most common objects requiring security protection, but
you can specify authority for any object on the system. The following list describes the
features of resource security (a short CL sketch follows the list):
Group profiles
A group of similar users can share the same authority to use objects. See 5.5.4, “Group
profiles” on page 138 for more information.
Authorization lists
Objects with similar security needs can be grouped in one list. Authority can be granted to the
list rather than to the individual objects.
Object ownership
Every object on the system has an owner. Objects can be owned by an individual user profile
or by a group profile. Correct assignment of object ownership helps you manage applications
and delegate responsibility for the security of your information.
Primary group
You can specify a primary group for an object. The primary group’s authority is stored with the
object. Using primary groups may simplify your authority management and improve authority
checking performance.
Library authority
You can put files and programs that have similar protection requirements into a library and
restrict access to that library. This is often easier than restricting access to each individual
object.
Directory authority
You can use directory authority in the same way that you use library authority. You can group
objects in a directory and secure the directory rather than the individual objects.
Object authority
In cases where restricting access to a library or directory is not specific enough, you can
restrict authority to access individual objects.
Public authority
For each object, you can define what kind of access is available for any system user who
does not have any other authority to the object. Public authority is an effective means for
securing information and provides good performance.
Authority holder
An authority holder stores the authority information for a program-described database file.
The authority information remains, even when the file is deleted. Authority holders are
commonly used when converting from the System/36, because System/36 applications often
delete files and create them again.
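As referenced above, a short CL sketch tying several of these features together; the library, file, and profile names are hypothetical:

CHGOBJOWN OBJ(PAYLIB/PAYROLL) OBJTYPE(*FILE) NEWOWN(PAYOWNER)
GRTOBJAUT OBJ(PAYLIB/PAYROLL) OBJTYPE(*FILE) USER(HRGROUP) AUT(*CHANGE)
GRTOBJAUT OBJ(PAYLIB/PAYROLL) OBJTYPE(*FILE) USER(*PUBLIC) AUT(*EXCLUDE)

This assigns ownership, grants a group profile change authority, and excludes all other users through public authority.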
The IBM i operating system provides the ability to log selected security-related events in a
security audit journal. Several system values, user profile values, and object values control
which events are logged.
The security audit journal is the primary source of auditing information about the system. This
section describes how to plan, set up, and manage security auditing, what information is
recorded, and how to view that information.
A security auditor inside or outside your organization can use the auditing function that is
provided by the system to gather information about security-related events that occur on the
system.
When a security-related event that might be audited occurs, the system checks whether you
have selected that event for audit. If you have, the system writes a journal entry in the current
receiver for the security auditing journal (QAUDJRN in library QSYS).
When you want to analyze the audit information you have collected in the journal you can use
IBM Navigator for i to display the output. You can also use SQL commands as documented in
this link: https://www.ibm.com/docs/en/i/7.5?topic=services-journal.
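As a sketch, security auditing can be enabled with the Change Security Auditing (CHGSECAUD) command, and entries can then be queried with the QSYS2.DISPLAY_JOURNAL table function; the audit levels shown are illustrative, not a recommendation, and parameter names follow IBM's documentation for the service:

CHGSECAUD QAUDCTL(*AUDLVL) QAUDLVL(*AUTFAIL *SECURITY *SERVICE)

SELECT ENTRY_TIMESTAMP, JOURNAL_ENTRY_TYPE, OBJECT
  FROM TABLE(QSYS2.DISPLAY_JOURNAL('QSYS', 'QAUDJRN',
             JOURNAL_ENTRY_TYPES => 'AF'))

This example lists authority failure (AF) entries from the security audit journal.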
The concept of an iASP is straightforward, and many solutions are built around it.
iASPs provide an attractive solution for clients who are looking at server consolidation and
continuous availability with a minimum amount of downtime. Using an iASP provides both
technical and business advantages on IBM i.
The key difference between the system auxiliary storage pool (ASP) and an IASP is that the
system ASP is always accessible when the system is up and running, while an IASP can be
brought online or offline independently of the system activity on any other pools.
An IASP must be brought online or “varied on” to make it visible to the system before making
any attempt to access data on it. If you want to make the IASP inaccessible by the system,
you “vary off” the IASP. The vary on process is not instantaneous and can take several
minutes; the amount of time required depends on several factors. Figure 5-5 shows a system
with its SYSBAS or ASP and an iASP defined.
Figure 5-5 Single system having an iASP where application data resides
Independent ASPs are always numbered starting from 33 up through 255 while the basic
ASPs are always numbered 2 through 32. All basic ASPs are automatically available when
the system is online and cannot be independently varied on or off. Figure 5-6 shows a system
with a system pool, a user ASP, and an iASP defined.
When considering IASP implementation, consider business needs first, and plan the
implementation in the client environment accordingly. At the application level you should
have a good understanding of where objects reside, who the users are, and how the
programs and data are accessed. Certain types of objects, although supported in an IASP,
should remain in the system ASP only, in order to maintain the expected or normal behavior
of the system. Some work management related changes will need to be made with the
introduction of an IASP. In general there are two environments in which an IASP can be
used.
Single system environment
In this case you have an Independent disk pool on the local system. It can be brought
online or offline without impacting other storage pools on the system or the need to do an
Initial Program Load (IPL). This is often used in a local system which contains multiple
databases located on IASPs. The iASP can be made available while system is active
without the need to perform an IPL and the independent disk pool can remain offline until
it is needed. This kind of setup is very common if you want to segregate application data,
keep historical and archived data on the same system, maintain multiple application
versions, or meet data compliance rules where you need to have data in different pools and
keep it offline unless needed by the business.
Multi-system environment
In this case you have one or more IASPs which are shared between multiple IBM i
partitions—on the same system or on different systems, possibly even in other
locations—that are members of the cluster. In this kind of setup, the IASP can be switched
between these systems without the need of any IPL for any of the partitions. This is quite a
significant advantage because it allows continuous availability of the data. There can be
various reasons to implement IASPs in multi-system environments. For example, if you
are implementing a new disaster recovery or high availability solution then you would
normally choose a switchable IASP setup for the most flexible implementation.
The registration facility provides a central point to store and retrieve information about IBM i
and non-IBM i exit points and their associated exit programs. This information is stored in the
registration facility repository and can be retrieved to determine which exit points and exit
programs already exist.
You can use the registration facility APIs to register and unregister exit points, to add and
remove exit programs, and to retrieve information about exit points and exit programs. You
can also perform some of these functions by using the Work with Registration Information
(WRKREGINF) command.
The exit point provider is responsible for defining the exit point information, defining the format
in which the exit program receives data, and calling the exit program. There are four areas in
particular in which exit points provide another layer of security.
IBM provides socket exit points that make it possible to develop exit programs for securing
connections to your IBM i by specific ports and/or IP addresses.
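As a hedged sketch, registering an exit program follows the same pattern for most exit points. Here a hypothetical program MYLIB/FTPLOGON is attached to the FTP server logon exit point, and the registration information is then reviewed:

ADDEXITPGM EXITPNT(QIBM_QTMF_SVR_LOGON) FORMAT(TCPL0100) PGMNBR(1) PGM(MYLIB/FTPLOGON)
WRKREGINF

Socket exit points such as QIBM_QSO_ACCEPT are registered the same way, using the format name defined for that exit point.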
This support is not a replacement for resource security. Function usage does not prevent a
user from accessing a resource (such as a file or program) from another interface. Function
usage support provides APIs to perform the following tasks:
Register a function
Retrieve information about the function
Define who can or cannot use the function
Check to see if the user is allowed to use the function
The system administrator specifies who is allowed or denied access to a function. The
administrator can either use the Work with Function Usage Information (WRKFCNUSG)
command to manage the access to program function or use Security → Function Usage in
the IBM Navigator for i.
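For example, to allow a hypothetical profile DBSECADM to use the database security administrator function discussed below:

CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(DBSECADM) USAGE(*ALLOWED)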
Separation of duties
Separation of duties helps businesses comply with government regulations and simplifies the
management of authorities. It provides the ability for administrative functions to be divided
across individuals without overlapping responsibilities, so that one user does not possess
unlimited authority—such as with *ALLOBJ authority. The function, QIBM_DB_SECADM,
provides a user with the ability to grant authority, revoke authority, change ownership, or
change primary group, but without giving access to the object or, in the case of a database
table, to the data that is in the table or allowing other operations on the table.
QIBM_DB_SECADM function usage can be given only by a user with *SECADM special
authority and can be given to a user or a group.
QIBM_DB_SECADM is also responsible for administering Row and Column Access Control.
Row and Column Access Control provides the ability to restrict which rows a user is allowed
to access in a table and whether a user is allowed to see information in certain columns of a
table. For more information, see Row and column access control (RCAC).
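A hedged SQL sketch of RCAC, using hypothetical schema, table, and group names; the permission limits row access to members of the HRGROUP group profile:

CREATE PERMISSION HR.EMP_ROW_ACCESS ON HR.EMPLOYEE
  FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'HRGROUP') = 1
  ENFORCED FOR ALL ACCESS ENABLE;
ALTER TABLE HR.EMPLOYEE ACTIVATE ROW ACCESS CONTROL;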
Note: You can find extensive documentation on IBM i security by viewing the IBM i 7.5
Security Reference at https://www.ibm.com/docs/en/ssw_ibm_i_75/pdf/sc415302.pdf
and IBM i 7.5 Security - Planning and setting up system security at
https://www.ibm.com/docs/en/ssw_ibm_i_75/pdf/rzamvpdf.pdf for a more in-depth
discussion.
A major concern is that the root directory “/” is publicly accessible, with the default setting
allowing full access for public users. Upon installation of a new IBM i operating system, the
default permission for root is set to *RWX, which poses a considerable risk and should be
restricted.
Figure 5-7 A structure for all information stored in the IBM i operating system
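A common hardening step, sketched here with the Change Authority (CHGAUT) command, is to remove public write authority from the root directory; test carefully before applying this on a production system, because applications may depend on the shipped default:

CHGAUT OBJ('/') USER(*PUBLIC) DTAAUT(*RX) OBJAUT(*NONE)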
IBM i does support scanning for malicious activities through third-party software. Users can
scan objects within the integrated file system, providing them with the flexibility to determine
the timing of scans and the actions to take based on the outcomes. The two exit points
related to this support are:
QIBM_QP0L_SCAN_OPEN - Integrated File System Scan on Open Exit Program. For
this exit point, the integrated file system scan on open exit program is called to do scan
processing when an integrated file system object is opened under certain conditions.
QIBM_QP0L_SCAN_CLOSE - Integrated File System Scan on Close Exit Program. For
this exit point, the integrated file system scan on close exit program is called to do scan
processing when an integrated file system object is closed under certain conditions.
Figure 5-8 shows setting a file share directory (/PTF) which is secured by limiting access to
members of the authorization list PRODACC.
Figure 5-9 shows the interface to display the current access permissions for directory /ptf.
Additional access can be set from this screen.
Important: Authorization lists do not restrict access to users with *ALLOBJ special
authority. Any user profile with *ALLOBJ special authority will be able to access IBM i
NetServer as if no authorization list restriction were in place. This can be used to create
administrative shares that can be accessed only by IBM i administrative profiles, by
specifying an authorization list whose public authority is *EXCLUDE.
Additional information on the IBM i 7.5 Integrated File system can be found at
https://www.ibm.com/docs/en/ssw_ibm_i_75/pdf/rzaaxpdf.pdf
The IBM Technology Expert Labs team for IBM i Security is an IBM team specializing in
IBM i security services such as security assessments, system hardening, and the
development of IBM i utilities. This family of utilities goes under the name “Security
Compliance Tools for IBM i.”
For more details on these offerings see “Security assessment for IBM Power from IBM
Technology Expert Labs” on page 252.
This chapter provides an overview of Linux on Power, highlighting its unique features and
challenges. It discusses various supported Linux environments and offers guidance on
implementing robust security measures to establish a secure and high-performing Linux
system. By combining the strengths of Linux and IBM Power technology, organizations can
benefit from a powerful and flexible infrastructure.
Linux is open source by nature. In contrast to AIX or IBM i, which had significantly fewer
than ten reported vulnerabilities in 2023, the Linux kernel suffered more than a hundred
documented flaws during the same period. Given its open nature and extensive user base,
this outcome was predictable, and it makes the task of protecting Linux workloads even
more critical.
In regard to security, the discussion will encompass practices, processes, and tools that are
specifically designed to safeguard Linux systems on Power from cyber threats, thereby
ensuring the confidentiality, integrity, and availability (CIA triad) of these systems.
The intricate nature of Linux systems demands a diverse set of tools and methodologies to
effectively reduce the attack surface and bolster defenses against both established and
emerging threats.
Note: In our laboratory setting, we utilize a variety of distributions, including Red Hat,
SUSE, and Ubuntu, as well as Debian, CentOS, Fedora, Alma, Rocky, and OpenSUSE,
all of which offer robust support for the ppc64le architecture.
6.2 Threats
Linux systems, while powerful and flexible, are not immune to security threats. These
vulnerabilities can expose systems to various attacks, including malware, unauthorized
access, and data breaches, even in Power Systems due to their widespread deployment. To
safeguard these systems, a comprehensive, cross-functional approach is required to identify,
assess, and mitigate these threats.
6.2.1 Malware
Malware, including viruses, worms, Trojans, and ransomware, poses significant risks to Linux
systems on Power. These malicious programs can disrupt operations, steal sensitive
information, and cause substantial financial and reputational damage.
IBM was one of the earliest champions of open source, backing influential communities like
Linux, Apache, and Eclipse, pushing for open licenses, open governance, and open
standards. Beginning in the late 1990s, IBM supported Linux with patent pledges, a $1 billion
investment of technical and other resources, and helped to establish the Linux Foundation in
2000. Since then, IBM has been consistently behind open source initiatives in general, and
Linux and accompanying technologies in particular. Proof of this is IBM’s support of the Linux
operating system on its own hardware, including IBM Power.
For more information about Red Hat Enterprise Linux see this Red Hat website.
Debian-based distributions
Debian is a popular and widely-used operating system, primarily known for its stability,
reliability, security and extensive software repositories. It is a Linux distribution consisting
entirely of free software. Debian is the foundation for many other distributions, most notably
Ubuntu, which is also supported on Power.
Ubuntu is optimized for workloads in the mobile, social, cloud, Big Data, analytics and
machine learning spaces. With its unique deployment tools (including Juju and MAAS),
Ubuntu makes the management of those workloads trivial. Starting with Ubuntu 22.04 LTS,
POWER9 and POWER10 processors are supported. For more information about Ubuntu
Server see this website: https://ubuntu.com/server
SUSE-Based Distributions
SUSE Linux Enterprise Server (SLES), traditionally used for SAP HANA on Power
environments, is also an alternative to RHEL for classic workloads. In addition, OpenSUSE
Leap is a community-driven, open-source Linux distribution developed by the OpenSUSE
Project. It shares its core with SUSE Linux Enterprise (SLE), providing a highly stable and
well-tested base, and receives the same security fixes as soon as they are released to SLE
customers.
SUSE Linux Enterprise Server for IBM POWER® is an enterprise-grade Linux distribution
optimized for IBM POWER-based systems. It is designed to deliver increased reliability and
provide a high-performance platform to meet increasing business demands and accelerate
innovation while improving deployment times.
For more information about SUSE Linux Enterprise Server for IBM Power see this website:
(https://www.suse.com/products/power/)
Supported Distributions
In the previous section we discussed a number of Linux distributions that are available,
including versions for IBM Power. Table 6-1 lists the Linux distributions that are supported
by IBM on IBM Power10 based systems, along with the Ubuntu distributions for which
support comes directly from Canonical.
Table 6-1 Linux distributions supported on IBM Power10 based systems

9043-MRX (IBM Power E1050), 9105-22A (IBM Power S1022), 9105-22B (IBM Power S1022s),
9105-41B (IBM Power S1014), 9105-42A (IBM Power S1024), 9786-22H (IBM Power L1022),
9786-42H (IBM Power L1024):
– Red Hat Enterprise Linux 9.0, any subsequent RHEL 9.x releases
– Red Hat Enterprise Linux 8.4, any subsequent RHEL 8.x releases
– SUSE Linux Enterprise Server 15 SP3, any subsequent SLES 15 updates
– Red Hat OpenShift Container Platform 4.9, or later
– Ubuntu 22.04, or later (a)

9080-HEX (IBM Power E1080):
– Red Hat Enterprise Linux 9.0, any subsequent RHEL 9.x releases
– Red Hat Enterprise Linux 8.4, any subsequent RHEL 8.x releases
– Red Hat Enterprise Linux 8.2 (POWER9 Compatibility mode only) (b)
– SUSE Linux Enterprise Server 15 SP3, any subsequent SLES 15 updates
– SUSE Linux Enterprise Server 12 SP5 (POWER9 Compatibility mode only)
– Red Hat OpenShift Container Platform 4.9, or later
– Ubuntu 22.04, or later (a)

9028-21B (IBM Power S1012):
– Red Hat Enterprise Linux 9.2, for PowerLE, or later
– Red Hat OpenShift Container Platform 4.15, or later
– Ubuntu 22.04, or later (a)

a. Ubuntu on Power support is available directly from Canonical.
b. Red Hat Business Unit approval is required for using RHEL 8.2 on IBM Power10 processor
based systems.
IBM Power10 processor-based systems support the following configurations per logical
partition (LPAR):
SUSE Linux Enterprise Server 15 SP4: up to 64 TB of memory and 240 processor cores.
SUSE Linux Enterprise Server 15 SP3: up to 32 TB of memory and 240 processor cores.
Red Hat Enterprise Linux 8.6, or later: up to 64 TB of memory and 240 processor cores.
Red Hat Enterprise Linux 8.4 and 9.0: up to 32 TB of memory and 240 processor cores.
SUSE Linux Enterprise Server 12 SP5 and RHEL 8.2: up to 8 TB of memory and 120
processor cores.
For libraries and tools that can aid in leveraging the capabilities of Linux on Power10 servers,
see IBM Software Development Kit for Linux on Power tools
(https://developer.ibm.com/linuxonpower/sdk/). Other information about packages and
migration assistance can be found on the Find packages built for POWER page
(https://developer.ibm.com/linuxonpower/open-source-pkgs/) in the IBM Linux on Power
developer portal.
Red Hat OpenShift Container Platform (OCP) is also supported on IBM Power. For more
information about OCP, see Getting started with Red Hat OpenShift on IBM Cloud
(https://cloud.ibm.com/docs/openshift?topic=openshift-getting-started) and
Architecture and dependencies of the service
(https://cloud.ibm.com/docs/openshift?topic=openshift-service-arch).
Given the complexity of Linux systems, a variety of tools and methodologies are necessary to
effectively minimize the attack surface and strengthen defenses against both established and
emerging threats.
Implementing security measures and utilizing available tools will vary depending on the
chosen distribution and version. This guide outlines general principles without focusing on
specific configurations, which may change over time.
This section covers essential aspects of hardening a GNU/Linux OS on IBM Power from a
distribution-neutral perspective. We will provide practical examples and guidelines using
open-source software tested on ppc64le, specifically in Debian and Fedora, to ensure our
Linux systems on Power are as secure as possible using an open-source first approach.
While Linux offers the advantage of open-source software, challenges remain in building and
deploying applications on ppc64le due to lack of access to some proprietary programs and
tools, missing dependencies and build processes. However, as data centers embrace
multi-architecture environments, these gaps are gradually closing. When selecting tools,
prioritize those with native ppc64le support.
6.4.1 Compliance
Compliance ensures that Linux deployments meet the minimum required standards in terms
of configuration, patching, security, and regulatory compliance.
CIS Benchmarks are developed by the Center for Internet Security and offer best practices
for securing a wide range of systems and applications, including various Linux distributions.
They are community-driven and cover a broad spectrum of security configurations.
DISA STIGs, on the other hand, are developed by the Defense Information Systems Agency
and are tailored to the stringent security requirements of the U.S. Department of Defense.
These guides provide highly detailed security configurations and are mandatory for
DoD-related systems. DISA STIGs offer comprehensive security measures that address
potential threats specific to defense environments. Implementing these guidelines ensures
that systems meet federal security standards and are protected against sophisticated threats.
For our purpose of providing a good basis for Linux security in Power, we will use CIS as a
reference, but other standards such as PCI-DSS may be more appropriate depending on the
environment.
OpenSCAP can verify that a ppc64le system adheres to various security benchmarks and
standards such as CIS (Center for Internet Security) benchmarks, NIST (National Institute of
Standards and Technology) guidelines, custom security policies, or vulnerability lists. It also
has a GUI, scap-workbench, available at least on RHEL-based distributions on Power such
as Alma Linux 9, which is shown in Figure 6-1 on page 164.
To comply with security regulations and policies, we take the following approach:
1. Install the Linux ISO of your choice.
2. Decide which set of rules to use (always start with a dry run). Figure 6-3 on page 165
shows using the CIS Level 2 benchmark in scap-workbench (GUI).
3. Automatically address these compliance gaps when technically feasible, with Bash scripts
and Ansible playbooks, as shown in the screen shot or via the command line as shown
below:
oscap xccdf generate fix --profile [PROFILE_ID] --output remediation_script.sh \
/usr/share/xml/scap/ssg/content/ssg-[OS].xml
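Before generating fixes, it is useful to evaluate the system and produce a report of the failed rules. A minimal sketch, assuming the scap-security-guide content is installed; profile IDs and data stream file names vary by distribution, and you can list them with oscap info:

sudo oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis \
--report /tmp/cis-report.html /usr/share/xml/scap/ssg/content/ssg-[OS].xml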
It is crucial to be aware that automated remediation may yield unexpected results on systems
that have already been modified. Therefore, administrators are strongly advised to thoroughly
evaluate the potential impact of remediation actions on their specific systems. You might want
to make a snapshot / backup before moving on.
Tip: Under normal conditions, the remediation of compliance issues will be the result of
several iterations and some backtracking by recovering snapshots or backups until we
reach a level of security adequate for our purposes, always in balance with the usability of
the system.
OpenSCAP is not only about compliance. It can also help you check whether there are any
vulnerabilities in your current OS version by using OVAL (Open Vulnerability and Assessment
Language) and generating a report.
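A sketch of such a vulnerability scan, assuming you have downloaded the OVAL definitions file published by your distribution (the definitions file name here is a placeholder):

oscap oval eval --results /tmp/oval-results.xml \
--report /tmp/vuln-report.html [DISTRO-OVAL-DEFINITIONS].xml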
Tip: If you are satisfied with the image you have just evaluated and you use PowerVC (IBM's
Power virtualization solution based on OpenStack), it is a good time to capture this system as
a template or even create an OVA. Be careful when using “default” installations as they may
be missing important security protection settings. Always utilize appropriate compliance
policies to ensure that your Linux systems running on IBM Power are all well configured and
protected.
Firewall Technologies
Firewalls are a critical component of network security, essential for controlling the flow of
incoming and outgoing traffic based on predefined security rules. Effective firewall
management on Linux systems involves various tools, each offering different levels of control,
efficiency, and ease of use. This section explores the primary tools used in Linux firewall
implementations, their relationships, and practical guidance on their use.
Linux firewalls have evolved significantly over time, starting from simple packet filtering
mechanisms to more sophisticated and user-friendly management tools. The primary tools
used in Linux firewall implementations include iptables, nftables, firewalld, and UFW
(Uncomplicated Firewall). Understanding the background and functionality of these tools
helps in choosing the right one for your specific needs (including the distribution you chose)
Netfilter is a framework within the Linux kernel that provides various networking-related
operations such as packet filtering, network address translation (NAT), and packet mangling.
It is the core infrastructure that enables these operations, with hooks in the kernel where
modules can register callback functions to handle network packets. Both iptables and nftables
are user-space utilities that interact with the netfilter framework.
nftables is the successor to iptables, designed to provide a more efficient and streamlined
framework for packet filtering and Network Address Translation (NAT). Introduced in Linux
kernel version 3.13, nftables offers a simplified syntax and enhanced performance. It
has been gradually adopted by many distributions as the default backend for firewall
configurations, aiming to overcome part of the complexity and performance limitations of
iptables. The command to permit SSH in nftables is:
sudo nft add rule inet filter input tcp dport 22 accept
Firewall tools
Within the front end tools for creating and maintaining firewall rules in Linux (using iptables or
nftables) we have two main options:
firewalld is a dynamic firewall management tool included in Red Hat Enterprise Linux (RHEL)
since version 7. It simplifies firewall management by using the concept of network zones,
which define the trust level of network connections and interfaces. Firewalld allows for
real-time changes without needing to restart the firewall, providing a flexible and dynamic
approach compared to traditional static tools like iptables. Firewalld uses nftables as its
backend by default on modern systems, with firewall-cmd as the command-line tool. The
following command permanently opens the SSH port in the public zone:
sudo firewall-cmd --zone=public --add-port=22/tcp --permanent
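Because --permanent writes the rule to the persistent configuration without touching the running firewall, a reload is needed for it to take effect:

sudo firewall-cmd --reload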
Regular reviews and updates of firewall rules are also recommended to maintain compliance
with security policies and adapt to emerging threats. These measures collectively aim to
fortify Linux systems against a variety of network-based threats.
In Example 6-2 we show a simple firewall configuration on Linux on Power using firewall-cmd.
UFW (Uncomplicated Firewall), the default front end on Ubuntu and other Debian-based
systems, offers a simpler command set:
# Enable UFW
sudo ufw enable
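To mirror the SSH rule shown earlier for nftables and firewalld, the equivalent UFW rule would be:

sudo ufw allow 22/tcp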
Additionally, CIS emphasizes the importance of logging and auditing firewall activity to detect
and respond to suspicious behavior, and suggests using stateful inspection and rate limiting
to prevent attacks like Denial of Service.
We will be using Suricata because it has strong support for ppc64le architecture. Suricata is a
versatile and high-performance Network Security Monitoring (NSM) tool capable of detecting
and blocking network attacks. By default, Suricata operates as a passive Intrusion Detection
System (IDS), scanning for suspicious traffic on a server or network and generating logs and
alerts for further analysis. Additionally, it can be configured as an active Intrusion Prevention
System (IPS) to log, alert, and completely block network traffic that matches specific rules.
Suricata is open source and managed by the community-run non-profit organization, the
Open Information Security Foundation.
##
## Step 3: Configure common capture settings
##
## See "Advanced Capture Options" below for more options, including Netmap
## and PF_RING.
##
For additional information on Suricata, including installation instructions, see the Suricata
documentation.
Encryption in Flight
Encrypting data in transit protects it from being intercepted and read by unauthorized parties.
Protocols such as SSL/TLS are used to secure communications over networks.
SSL/TLS are secure protocols for encrypting web traffic, email, and other
communications.
To secure your web server with SSL/TLS, you first need to obtain a digital certificate.
Certbot is an automated tool designed to streamline the process of acquiring and installing
SSL/TLS certificates. It is one of many technology projects developed by the Electronic
Frontier Foundation (EFF) to promote online freedom.
Certbot is available in different Linux repositories, including ppc64le versions, making
installation straightforward. It has plug-ins for both Apache and NGINX, among other typical
deployments, and includes a tool to automatically renew these certificates.
Example 6-6 shows how to install Certbot in a Debian Linux system. Other Linux versions
might differ slightly.
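As a sketch of typical usage with the NGINX plug-in (example.com is a placeholder domain), Certbot obtains and installs the certificate, and the renewal machinery can be verified with a dry run:

sudo certbot --nginx -d www.example.com
sudo certbot renew --dry-run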
Encryption at rest
Encryption at Rest is a form of encryption that is designed to prevent an attacker from
accessing data by ensuring it is encrypted when stored on a persistent device. This can be
done at different layers, from physical storage systems to the OS. If you choose to encrypt at
the OS level, it is best to employ full disk encryption using LUKS with LVM (Debian /
RHEL-Based) or BTRFS (SUSE).
Linux Unified Key Setup (LUKS) offers a suite of tools designed to simplify the management
of encrypted devices. LUKS allows you to encrypt block devices and supports multiple user
keys that can decrypt a master key. This master key is used for the bulk encryption of the
partition.
You can configure disk encryption at the installation time or later using cryptsetup, a
command-line tool used to conveniently set up disk encryption based on the dm-crypt kernel
module. It offers a range of functionalities, including creating, opening, and managing
encrypted volumes.
Prerequisites for installing LUKS are:
– A Linux system with disk attached.
– cryptsetup installed.
– Root or sudo privileges
Example 6-7 provides an example of activating encryption at rest for a logical volume using
lvm.
WARNING!
========
This will overwrite data on /dev/sdX irrevocably.
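For context, a minimal LUKS flow around that warning might look like the following sketch; /dev/vg0/lvdata, cryptdata, and the mount point are placeholders, and luksFormat destroys any existing data on the target:

sudo cryptsetup luksFormat /dev/vg0/lvdata
sudo cryptsetup open /dev/vg0/lvdata cryptdata
sudo mkfs.ext4 /dev/mapper/cryptdata
sudo mount /dev/mapper/cryptdata /mnt/secure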
Linux utilizes Pluggable Authentication Modules (PAM) in the authentication process, serving
as an intermediary layer between users and applications. PAM modules are accessible on a
system-wide basis, allowing any application to request their services. The PAM modules
implement most of the user security measures that are defined in various files within the /etc
directory, including LDAP, Kerberos and Active Directory connections or MFA options.
Access control mechanisms ensure that only authorized users can access specific resources.
This includes configuring SUDO, managing user groups, and maintaining access logs.
Password Policies
Enforcing strong password policies is crucial to prevent unauthorized access. Policies should
mandate complex passwords, regular password changes, and account lockout mechanisms
after multiple failed login attempts.
# The maximum credit for having digits in the new password. If less than 0 it is
# the minimum number of digits in the new password.
dcredit = -1
# The maximum credit for having uppercase characters in the new password.
# If less than 0 it is the minimum number of uppercase characters in the new
# password.
ucredit = -1
..
Regular changes: CIS recommends specific password change policies for Linux systems
to enhance security. These include setting a maximum password age of 90 days or less to
ensure regular password updates, a minimum password age of 7 days to prevent rapid
password changes that could cycle back to previous passwords, and a password
expiration warning of 7 days to notify users in advance of impending password expiry.
These guidelines help maintain robust security by ensuring that passwords are regularly
updated and users are adequately informed.
Password expiration policies are defined in /etc/login.defs. Example 6-9 shows an
excerpt of a sample configuration of /etc/login.defs (Ubuntu) to log both successful
logins and su activity.
#
# Enable logging of successful logins
#
LOG_OK_LOGINS yes
#
# If defined, all su activity is logged to this file.
#
SULOG_FILE /var/log/sulog
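Note that login.defs aging settings generally apply to newly created accounts. For existing users, the same aging can be applied with chage (john is a placeholder user):

sudo chage --maxdays 90 --mindays 7 --warndays 7 john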
Groups
Grouping users based on their roles and responsibilities helps in managing permissions
efficiently. Assigning users to appropriate groups ensures they have access only to the
necessary resources.
For example, a file with permissions rw-rw---- (660) allows the owner and the group to read
and write the file, but others cannot access it. This reduces the risk of accidental or malicious
modifications to sensitive files.
In this way developers can be part of a dev group with access to development files, while the
production team is part of a prod group with access to production files.
CIS advises regular audits of group memberships to ensure that users have appropriate
permissions and to remove any unnecessary or outdated group assignments. Additionally,
the creation of custom groups for specific tasks or roles is recommended to further refine
access control and minimize potential security risks.
Example 6-11 shows the process to grant read and write permissions to another user (e.g.,
john). After setting the ACL for the user the getfacl command can be used to display the
ACL as shown in the example.
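For reference, such an ACL change boils down to commands of this shape (the file path and user are placeholders):

setfacl -m u:john:rw /srv/project/report.txt
getfacl /srv/project/report.txt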
When attempting to log in to a system secured by multi-factor authentication (MFA), users are
required to supply extra credentials beyond their standard username and password. In the
context of Linux systems, Secure Shell (SSH) serves as a common method for remotely
accessing the system. To enhance security further, it's advisable to incorporate MFA when
SSH is used.
One method of implementing MFA is the use of IBM PowerSC. However, MFA can also be
implemented using native tools like Google Authenticator. This can be done using:
– libpam-google-authenticator for Debian-based systems
– google-authenticator-libpam in SUSE-based systems
– google-authenticator in Extra Packages for Enterprise Linux (EPEL) for Red Hat
Enterprise Linux based systems
Google authenticator has a setup script for configuration that works out of the box. This uses
the Google Authenticator app available for Android and iOS to generate authentication codes.
The authentication code is shown in Figure 6-5.
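As a hedged sketch of the PAM wiring after running the setup script, the SSH service is told to require a verification code, and keyboard-interactive authentication is enabled; on older OpenSSH releases the sshd_config keyword is ChallengeResponseAuthentication instead:

# /etc/pam.d/sshd - require a verification code in addition to the password
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config
KbdInteractiveAuthentication yes

Restart the sshd service afterwards for the change to take effect.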
For more information on adding MFA to other distributions, see the following links:
https://ubuntu.com/tutorials/configure-ssh-2fa#1-overview
https://fedoramagazine.org/two-factor-authentication-ssh-fedora/
The following list provides some methods to help assist with setting appropriate access
controls within your system:
SELinux (RHEL / SUSE based) allows for the definition of roles and the assignment of
domains (or types) to these roles. Users are then assigned roles, and the roles define the
allowable operations on objects within the system, which makes it an RBAC-like solution.
SELinux utilizes security policies that are label-based, identifying applications through
their file system labels. SELinux might be complex to configure and manage. For additional
information see this link: https://github.com/SELinuxProject.
AppArmor (Debian-based) employs security profiles that are path-based, identifying
applications by their executable paths. This means it does not have a traditional RBAC
approach but allows defining profiles for applications, which can be seen as a form of
access control. For more information see: https://apparmor.net
FreeIPA aims to provide a centrally-managed Identity, Policy, and Audit (IPA) system.
FreeIPA – the upstream open-source project for Red Hat Identity Management – is an
integrated security information management solution combining Fedora Linux, 389
Directory Server, Kerberos, NTP, DNS, and Dogtag (Certificate System). It provides
centralized identity management and includes support for RBAC, allowing administrators
to define roles and associate permissions and policies with these roles across a network
of Linux systems. For more information see: https://www.freeipa.org/
RHEL System Roles is a collection of Ansible roles and modules that provide a stable
and consistent configuration interface to automate and manage multiple releases of Red
Hat Enterprise Linux. The RHEL System Roles are supported as provided from the
following methods:
• As an RPM package in the RHEL 9 or RHEL 8 Application Streams repositories
• As a supported collection in the Red Hat Automation Hub
For more information see: https://access.redhat.com/articles/3050101
Each solution offers different features and complexities, allowing administrators to choose the
most appropriate tool based on their specific security requirements and environment. Red
Hat based distributions come preconfigured with many SELinux policies, but the
configuration might be more complex than FreeIPA or AppArmor. Using RHEL System Roles
will typically be part of any automation policies.
SUDO
The widespread reliance on sudo in most Linux distributions over other choices is attributed
to its ease of use and the granular control it provides over user permissions. Sudo simplifies
the delegation of limited root access, specifies allowed commands through the sudoers file,
and maintains an audit trail, which makes it highly practical for routine administrative tasks.
Other tools, while powerful, involve complex management and a level of detail that is typically
unnecessary for everyday operations, making them more suitable for specialized use cases.
That said, to implement group access control using sudo, we can follow these steps.
1. Determine the different roles in the organization and the specific permissions or
commands each role needs.
2. Create Unix groups corresponding to each role. For example, admin, developer, auditor,
etc. Example 6-12 shows adding groups.
Add users to the appropriate groups based on their roles as seen in Example 6-13.
3. Edit the sudoers file to grant permissions to groups. This is done using the visudo
command to ensure proper syntax and prevent mistakes.
sudo visudo
In the sudoers file, define the commands that each group can execute. Example 6-14
shows group permissions; a sketch of such entries also follows this procedure.
4. Users can now use the sudo command to execute commands based on their roles as
shown in Example 6-15.
In this example, admin role has full control over the system, the developer role grants
access to development tools like git, make, gcc and the auditor role has read-only access
to logs and configuration files. You can learn more about sudo at https://www.sudo.ws/.
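As referenced in step 3, a sketch of what such sudoers entries could look like, matching the roles described above; the paths and patterns are illustrative and should be tightened for production use:

%admin      ALL=(ALL:ALL) ALL
%developer  ALL=(ALL) /usr/bin/git, /usr/bin/make, /usr/bin/gcc
%auditor    ALL=(ALL) /usr/bin/cat /var/log/*, /usr/bin/cat /etc/*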
We will show how to deploy and combine all these tools in a practical example.
Syslog is a standard for message logging that allows separation of the software that generates
messages from the system that stores them and the software that reports and analyzes them.
Rsyslog is an enhanced version of syslog. It builds upon the foundation of syslog, providing
advanced features and greater flexibility.
Edit /etc/rsyslog.conf to configure log levels and destinations as shown in Example 6-16.
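Example 6-16 is not reproduced here; the following excerpt is a minimal sketch of what such
a configuration could look like (the file destinations are illustrative):

# /etc/rsyslog.conf excerpt: route authentication events and warnings to files
auth,authpriv.*    /var/log/auth.log
*.warn             /var/log/warnings

# Apply the changes
sudo systemctl restart rsyslog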
Auditd is the userspace component of the Linux Auditing System, which is used to collect,
filter, and store audit records generated by the kernel. These records can include information
about system calls, file accesses, user logins, and other significant security events. The audit
daemon (auditd) is responsible for writing these records to disk and managing the log files.
Install auditd using this command:
sudo apt-get install auditd audispd-plugins
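Audit rules define what the kernel records. As a hedged sketch (the key names are
hypothetical), you can watch sensitive files for write and attribute changes and then query the
recorded events by key; persistent rules belong in /etc/audit/rules.d/:

# Watch /etc/passwd and /etc/sudoers for writes (w) and attribute changes (a)
sudo auditctl -w /etc/passwd -p wa -k identity
sudo auditctl -w /etc/sudoers -p wa -k sudoers_change

# Search the audit log for events tagged with a key
sudo ausearch -k identity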
You can also use ausearch in conjunction with aureport for detailed reports as shown in
Example 6-19. The command is:
sudo aureport -k
AIDE helps to monitor and verify the integrity of files and directories on a system. It helps
detect unauthorized changes, such as modifications, deletions, or additions, by creating a
database of file attributes and comparing the current state to the baseline. It is a File Integrity
Monitor initially developed as a free and open source replacement for Tripwire licensed under
the terms of the GNU General Public License
To begin using AIDE, you must make sure the database is present:
ls /var/lib/aide
Once the AIDE database is in place, you can initialize the database with this command from a
terminal prompt (this can take a while):
aide --config /etc/aide/aide.conf --init
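On Debian-based systems, initialization typically writes a new database file that must be
copied into place before checks will use it (a hedged sketch; some builds produce a
compressed aide.db.new.gz instead):

sudo cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db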
To perform an initial check of the directories and files specified in /etc/aide/aide.conf, enter
this command in a terminal prompt:
sudo aide --config /etc/aide/aide.conf --check
If everything in the monitored directories and files is correct, you will see the following
message when the check completes:
All files match AIDE database. Looks okay!
AIDE will also run daily via the /etc/cron.daily/aide crontab, and the output will be emailed to
the user specified in the MAILTO= directive of the /etc/default/aide configuration file as
mentioned above.
AIDE is able to determine what changes were made to a system, but it is not able to determine
who made the change, when the change occurred, or what command was used to make the
change. For that, you use auditd and ausearch.
By combining these tools, you establish a robust system for logging, integrity checking, and
auditing. This multi-layered approach enhances the security and integrity of your Linux
installation on ppc64le architecture, providing early detection of potential security incidents
and unauthorized changes.
Tip: Forwarding these events to a SIEM or remote log solution (including PowerSC Trusted
Logging on VIOS) is a best practice: it ensures these logs are stored in a tamper-proof
manner and therefore cannot be modified or deleted. This applies to any other log or audit
file of security interest.
Integrating Linux on IBM Power systems with SIEM tools such as IBM QRadar involves
several steps to ensure that logs from the Linux systems are properly collected, transmitted,
and ingested by the SIEM platform. The same steps apply if, instead of a classic SIEM tool,
you use a remote log collector or another observability tool that centralizes logs from
different environments for secure storage and subsequent analysis.
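The most common approach uses the existing syslog daemon to forward events. The
following rsyslog excerpt is a minimal sketch (the collector host name and port are
hypothetical; the @@ prefix selects TCP transport):

# /etc/rsyslog.conf excerpt: forward all messages to the SIEM collector
*.* @@qradar-collector.example.com:514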
A second approach would be to send JSON or field-based logs to IBM QRadar without using
traditional syslog daemons, or after storing these messages in a database. This can be done
with tools like Fluentd or even your own Python scripts.
Fluentd, an extensively deployed open-source log collector written in Ruby, stands out for its
versatile pluggable architecture. This design enables it to seamlessly connect to a broad array
of log sources and storage solutions, including Elasticsearch, Loki, Rsyslog, MongoDB, AWS
S3 object storage, and Apache Kafka, among others. Figure 6-7 shows how Fluentd can help
with log management.
IBM leverages Fluentd to streamline its log management processes across diverse
environments, including sending logs from Kubernetes-based deployments on IBM Cloud,
among other environments.
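As a hedged sketch of Fluentd's pluggable model (host names and tags are hypothetical), a
configuration that receives syslog messages and forwards them to a central collector could
look like this:

# /etc/fluent/fluentd.conf excerpt
<source>
  @type syslog              # listen for incoming syslog messages
  port 5140
  tag power.linux
</source>

<match power.linux.**>
  @type forward             # relay events to a central aggregator
  <server>
    host collector.example.com
    port 24224
  </server>
</match>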
To prevent these threats on Linux systems, ClamAV can be used to detect and remove
various forms of malware through regular scans and real-time protection, while chkrootkit can
identify and report signs of rootkits. Both tools enhance security by helping to ensure that the
system remains free from unauthorized access and malicious activity.
Virus detection
There are a couple of options for virus detection on IBM Power.
ClamAV
ClamAV (Clam AntiVirus) is a versatile and powerful
open-source anti-virus engine designed for detecting Trojans, viruses, malware, and other
malicious threats. It offers several features that make it a valuable tool for enhancing Linux
system security:
– Regular Scans: ClamAV can be configured to perform regular scans of the system,
ensuring that any new or existing malware is promptly detected and addressed.
– Real-Time Protection: With the ClamAV daemon, real-time scanning can be enabled to
monitor file activity continuously, providing immediate detection and response to
potential threats.
– Automatic Updates: ClamAV includes an automatic update mechanism for its virus
definitions, ensuring that the system is protected against the latest threats.
– Cross-Platform Support: ClamAV supports multiple platforms, making it a flexible
solution for various environments; it runs on Linux on Power and also on AIX and IBM i
(PASE).
To install and configure ClamAV on a Linux system, follow these steps (Debian based):
Install ClamAV using this command:
sudo apt-get install clamav clamav-daemon
Update ClamAV database using this command:
sudo freshclam
Start ClamAV daemon using this command:
sudo systemctl start clamav-daemon
Schedule a daily scan and send a report by email by adding these lines to your crontab:
MAILTO=user@example.com
0 1 * * * /usr/bin/clamscan -ri --no-summary /
Powertech Antivirus
Powertech Antivirus offers both on-demand and scheduled scanning, allowing you to balance
security and system performance. Compatible with IBM, Fortra, and third-party scheduling
solutions, you can customize scan frequency and target directories. Powertech Antivirus can
be run independently on each endpoint or it can be centrally managed.
Rootkit detection
The tool chkrootkit is a rootkit detector that checks for signs of rootkits on Unix-based
systems. It scans for common signatures of known rootkits and helps ensure the system
remains uncompromised.
You can scan for many types of rootkits and detect certain log deletions using chkrootkit.
While it doesn't remove any infected files, it does specifically tell you which ones are infected,
so that you can remove/reinstall/repair the file or package.
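As a brief sketch on a Debian-based system:

sudo apt-get install chkrootkit   # install the detector
sudo chkrootkit                   # run all tests
sudo chkrootkit -q                # quiet mode: print only suspicious findings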
Using Foreman and Katello together with Red Hat Satellite provides one option for update
management.
Katello is a plug-in for Foreman that adds content management and subscription
management capabilities. It allows administrators to manage software repositories, handle
updates, and ensure compliance with subscription policies.
Using Red Hat Satellite along with Foreman/Katello manages package and patch lifecycles,
including update distribution, and can initiate updates in this environment. However, Ansible
offers a more comprehensive automation solution for keeping systems up to date. Ansible
can:
– Perform prechecks, backups, and snapshots
– Initiate patch updates
– Reboot systems
– Conduct post-checks for complete patch automation
Thus, combining Satellite and Ansible is optimal. Satellite handles lifecycle management and
package provision, while Ansible automates the entire patching process. This integration
ensures efficient and consistent updates across your environment.
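As a minimal sketch of the Ansible side (the inventory group name power_linux is
hypothetical), ad-hoc commands can patch and reboot a fleet; a production playbook would
add the prechecks, snapshots, and post-checks listed above:

# Update all packages on every host in the group, with privilege escalation
ansible power_linux -b -m dnf -a "name='*' state=latest"

# Reboot the hosts and wait for them to come back
ansible power_linux -b -m reboot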
6.4.10 Monitoring
Monitoring Linux on Power systems plays a vital role in ensuring their security, supplementing
the specialized tools mentioned in this chapter and providing additional insights.
There are many options for monitoring Linux systems on Power. Most commercial and
community solutions have ppc64le agents. As an example, consider Pandora FMS. There are
also solutions that support monitoring Linux on Power and fit into a complete monitoring
infrastructure across all of your IBM Power workloads running on Linux, AIX, and IBM i,
where you can visualize the status of any partition and generate alerts that can be redirected
to a centralized monitoring environment.
One of the simplest options for this kind of multi-architecture monitoring is nmon.
Nmon was originally written for AIX and is now an integrated tool within AIX. A version for
Linux was written by IBM and later released as open source for Linux across multiple
platforms including x86, IBM Power, IBM Z and even ARM. There are multiple integrations for
using and analyzing nmon data, including charts and spreadsheet integrations. There is even
a newer version (njmon) that saves the performance data in JSON format for easier
integration with modern analysis and visualization tools.
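As a brief sketch, nmon can capture data to a file for later analysis, for example one snapshot
per minute for 24 hours:

nmon -f -s 60 -c 1440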
Another tool is htop, an interactive process viewer that offers several enhanced functions
that make it particularly user-friendly and versatile. For example, it allows users to scroll
through and select processes for detailed information, and to change process priorities and
terminate processes directly from the interface. Figure 6-11 shows an example screen from
htop.
Figure 6-11 Screen shot of htop
More and more projects are being developed in Python and other languages that are easily
portable between architectures. Some of them have good export capabilities to InfluxDB,
Cassandra, OpenTSDB, StatsD, Elasticsearch, or RabbitMQ.
Nagios
At the next level is the deployment of complete monitoring environments such as Nagios or
Zabbix. These frameworks support extensive customization and scalability. Their source
code can be easily downloaded and compiled on IBM Power, with agents and plugins
available for ppc64le.
IBM Instana
In the field of commercial monitoring solutions, we highlight IBM Instana™. It leverages
various open-source projects to provide advanced monitoring and observability capabilities,
making it an excellent enterprise-supported solution for monitoring Linux on Power (ppc64le)
systems but also AIX and IBM i.
IBM Instana® integrates with technologies such as Apache Kafka for real-time data
processing, Prometheus for metrics collection, Grafana for data visualization, OpenTelemetry
for tracing and metrics, Elastic Stack (ELK) for log management, Kubernetes for container
orchestration, and Jenkins for continuous integration and delivery.
With support for Debian, Red Hat, and SUSE on ppc64le, Instana ensures comprehensive,
real-time visibility into the performance and health of applications and systems, backed by
IBM's robust enterprise support.
System Hardening
Minimal Installation: Begin with a minimal base installation and only install necessary
software and services. This reduces the attack surface by limiting installed software and
services; it is always easier to add software than to remove it.
Compliance: Use tools such as OpenSCAP or PowerSC to help ensure minimum levels of
compliance on all systems. This can be done by generating a base image and then applying
the same compliance baseline to every system deployed from it, as sketched below.
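As a hedged sketch of a compliance scan with OpenSCAP (the data stream path shown is for
RHEL 9 and varies by distribution):

# Evaluate the system against the CIS profile from the SCAP Security Guide
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis \
  --report /tmp/compliance-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml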
Access Control
User Authentication: Implement strong authentication mechanisms, including multi-factor
authentication (MFA). Use SSH key pairs instead of passwords for remote access, if
possible in combination with a second authentication method.
Role-Based Access Control (RBAC): Assign permissions based on roles rather than
individual users. sudo is a powerful and probably the easiest tool for implementing this locally.
Password Policies: Enforce strong password policies, including complexity requirements,
expiration, and account lockout mechanisms.
Data Protection
Encryption: Use encryption for data at rest and in transit. Implement SSL/TLS for network
communications and encrypt sensitive files on disk.
Backup Strategies: Regularly back up critical data and test restore procedures. Use tools
like Bacula or IBM Storage Protect for automated backups.
Summary
In summary, layered security provides enhanced safety. However, it's important to note that
even the most robust defenses have weaknesses, which makes their effectiveness dependent
on the least secure component. Achieving the right balance between security and usability is
essential; while technologies advance and operating systems change, core problems remain
and new ones emerge.
A well-defined incident response plan is crucial for minimizing the impact of security incidents.
Here are the key components of an effective incident response plan:
Preparation: Define roles and responsibilities for incident response specific to Linux on
Power environments. Ensure all team members are trained and familiar with the response
procedures for this architecture. Make sure you have a clearly defined organization in which
the specialists for each technology (PowerVM, SUSE, Red Hat, Ubuntu, databases,
applications, storage, and communications) are identified and understand the Linux on
Power environment.
Containment: Develop strategies for containing incidents to prevent further damage. This
may involve isolating affected Power Systems or networks. Consider the specific containment
techniques suitable for Power hardware, such as leveraging virtualization features to isolate
affected Logical Partitions (LPARs), VLANs or shared storage.
Eradication: Identify and remove the root cause of the incident in the Power environment.
This may involve applying patches, removing malware, or addressing configuration issues
specific to ppc64le systems. Ensure the incident response team is familiar with patch
management and malware removal tools compatible with Linux on Power.
Recovery: Restore affected Power Systems to normal operation. This may involve restoring
data from backups, rebuilding compromised LPARs, or reconfiguring network settings specific
to the Power architecture. Ensure that recovery procedures are tested and validated for
ppc64le environments.
Tip: Regularly test and update your incident response plan to ensure it remains effective
and relevant to the current threat landscape.
Red Hat OpenShift is a unified platform to build, modernize, and deploy applications at scale.
Work smarter and faster with a complete set of services for bringing apps to market on your
choice of infrastructure. OpenShift delivers a consistent experience across public cloud,
on-premises, hybrid cloud, or edge architectures.
Red Hat OpenShift offers you a unified, flexible platform to address a variety of business
needs spanning from an enterprise-ready Kubernetes orchestrator to a comprehensive
cloud-native application development platform that can be self-managed or used as a fully
managed cloud service.
Figure 7-1 shows how Kubernetes is only one component (albeit a critical one) in Red Hat
OpenShift.
Built by open source leaders, Red Hat OpenShift includes an enterprise-ready Kubernetes
solution with a choice of deployment and usage options to meet the needs of your
organization. From self-managed to fully managed cloud services, you can deploy the
platform in the data center, in cloud environments, and at the edge of the network. With Red
Hat OpenShift, you have the option to get advanced security and compliance capability,
end-to-end management and observability, and cluster data management and cloud-native
data services. Red Hat Advanced Cluster Security for Kubernetes modernizes container and
Kubernetes security, letting developers add security controls early in the software life cycle.
Red Hat Advanced Cluster Management for Kubernetes lets you manage your entire
application life cycle and deploy applications on specific clusters based on labels, and Red
Hat OpenShift Data Foundation supports performance at scale for data-intensive workloads.
Red Hat OpenShift is an enterprise-level production product that includes enterprise-level
support, built on Kubernetes and Kubernetes management. Red Hat OpenShift provides the
following benefits:
Red Hat OpenShift offers automated installation, upgrades, and lifecycle management
throughout the container stack – the operating system, Kubernetes, cluster services, and
applications – on any cloud.
Red Hat OpenShift helps teams build with speed, agility, confidence, and choice. Get back
to doing work that matters.
Red Hat OpenShift is a strong leader in the cloud landscape of Kubernetes platforms, and is
chosen for its strengths in enterprise environments, multi-environment consistency, and
developer-centric features.
Basic components
The basic components of Kubernetes can be described as:
Pods The smallest deployable units created and managed by Kubernetes. A pod is
a group of one or more containers that share storage, network, and
specifications on how to run the containers. Pods are ephemeral by nature;
they are created and destroyed to match the state specified by users.
Nodes The physical or virtual machines where Kubernetes runs the pods. A node
can be a worker node or a master node, although with the latest Kubernetes
(and by extension OpenShift) practices, the distinction is often abstracted
away, especially in managed environments.
Clusters A cluster consists of at least one worker node and at least one master node.
The master node manages the state of the cluster, including scheduling
workloads and handling scaling and health monitoring.
The major services that are running in the control plane are:
API Server Acts as the front end for Kubernetes. The API server is the component
that clients and external tools interact with.
etcd A highly-available key-value store used as Kubernetes' backing store
for all cluster data. It maintains the state of the cluster.
Scheduler Watches for newly created pods with no assigned node, and selects a
node for them to run on based on resource availability, policies, and
specifications.
Controller Manager Runs controller processes, which are background tasks in Kubernetes
that handle routine tasks such as ensuring the correct number of pods
for replicated applications.
Workload Resources
The control plane is in charge of setting up and managing the worker nodes which are
running the application code. Workload components can be described as:
Deployments A deployment specifies a desired state for a group of pods. You
describe a desired state in a deployment, and the Deployment
Controller changes the actual state to the desired state at a controlled
rate. You can define deployments to create new ReplicaSets, or to
remove existing deployments and adopt all their resources into new
deployments.
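As a brief sketch (the names and image are hypothetical), a deployment can be created and
scaled from the OpenShift CLI, after which the Deployment Controller reconciles the actual
state with the desired state:

oc create deployment web --image=registry.example.com/web:1.0
oc scale deployment web --replicas=3
oc get deployment web     # observe desired vs. available replicas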
Networking
Networking connectivity between pods and between pods and outside services is managed
within a Kubernetes cluster. The following functions are maintained by the cluster:
Service
An abstraction that defines a logical set of pods and a policy by which to access them.
Services enable communication between different pods and external traffic routing into the
cluster.
Ingress
Manages external access to the services in a cluster, typically HTTP. Ingress can provide
load balancing, SSL termination, and name-based virtual hosting.
Storage
Containers are by definition ephemeral, as is any data stored in the container. To enable
persistent storage, Kubernetes uses the following concepts:
Persistent Volumes (PV)
PVs are resources in the cluster which can be connected to containers to provide persistent
storage.
Persistent Volume Claims (PVC)
PVCs are requests for storage by users. These requests are satisfied by allocating PVs.
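As a minimal sketch (the claim name and size are hypothetical), a PVC can be created from
the CLI and is then bound to a suitable PV by the cluster:

oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

oc get pvc app-data       # check whether the claim is Bound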
Security
Role-Based Access Control (RBAC): Controls authorization – determining what
operations a user can perform on cluster resources. It's crucial for maintaining the security
of the cluster.
Here's a detailed look at how OpenShift builds on the core Kubernetes architecture:
Enhanced Developer Productivity
– OpenShift includes a sophisticated web-based console that provides a more
user-friendly interface than the standard Kubernetes dashboard. This console allows
developers to manage their projects, visualize the state of their applications, and
access a broad range of development tools directly.
– Code-Ready Containers simplifies the setup of local OpenShift clusters for
development purposes, providing a minimal, preconfigured environment that can run
on a developer's workstation. It is particularly useful for simplifying the “getting started”
experience.
– The Source-to-Image (S2I) tool is a powerful feature for building reproducible container
images from source code. This tool automates the process of downloading code,
injecting it into a container image, and assembling a new image. The new image
incorporates runtime artifacts necessary to execute the code, thus streamlining the
workflow from source code to deployed application.
Advanced Security Features
– OpenShift enhances Kubernetes security by implementing Security Context
Constraints. SCCs are akin to Pod Security Policies but provide more granular security
controls over the deployment of pods. They allow administrators to define a set of
conditions that a pod must run with to be accepted into the system, such as forbidding
running containers as root.
– OpenShift integrates an OAuth server that can connect to external identity providers,
allowing for a streamlined authentication and authorization process. This integration
enables users to log into OpenShift using their corporate credentials, simplifying
access management and enhancing security.
– OpenShift provides extensive support for Kubernetes network policies, which dictate
how pods communicate with each other and other network endpoints. OpenShift takes
this further with the introduction of egress firewall capabilities, allowing administrators
to control outbound traffic from pods to external networks.
Operational Efficiency
– OpenShift fully embraces the Kubernetes Operator pattern, which extends Kubernetes
capabilities by automating the deployment, scaling, and management of complex
applications. OpenShift includes the Operator Hub, a marketplace where users can
find and deploy Operators for popular software stacks.
– OpenShift offers a streamlined and highly automated installation process that simplifies
the setup of production-grade Kubernetes clusters. This extends to updates, which can
be applied automatically across the cluster, reducing downtime and manual
intervention.
– OpenShift includes built-in monitoring and telemetry capabilities that are preconfigured
to collect metrics from all parts of the cluster. This feature provides insights into the
performance and health of applications and infrastructure, enabling proactive
management and troubleshooting.
Enterprise Integration and Support
– OpenShift integrates Istio-based service mesh capabilities directly into the platform,
facilitating microservices architecture by providing service discovery, load balancing,
failure recovery, metrics, and monitoring, along with complex operational requirements
like A/B testing, canary releases, and more.
Developer Productivity
OpenShift is designed to enhance developer productivity by streamlining processes and
reducing the complexities typically associated with deploying and managing applications.
Here is a detailed look at how OpenShift achieves this through its key features:
Developer-Focused User Interface
– The OpenShift Console is a powerful, user-friendly interface that provides developers
with an overview of all projects and resources within the cluster. It offers a perspective
tailored to developers' needs, allowing them to create, configure, and manage
applications directly from the browser. Features like the Topology view let developers
visualize their applications and services in a graphical interface, making it easier to
understand and manage the relationships between components.
– OpenShift includes a Developer Catalog that offers a wide array of build and deploy
solutions, such as databases, middleware, and frameworks, which can be deployed on
the cluster with just a few clicks. This self-service portal accelerates the setup process
for developers, allowing them to focus more on coding and less on configuration.
Code-Ready Workspaces
– OpenShift integrates with Code-Ready Workspaces, a Kubernetes-native IDE that
developers can use within their browser. This IDE provides a fully featured
development environment, complete with source code management, runtimes, and
dependencies that are all managed and kept consistent across the development team.
This ensures that the entire team works within a controlled and replicable environment,
reducing “works on my machine” problems.
Application Templates and S2I
– OpenShift application templates are predefined configurations for creating applications
based on specific languages, frameworks, or technologies. These templates include
everything needed to build and deploy an application quickly, such as build
configurations, deployment strategies, and required services.
– S2I is a tool for building reproducible Docker images from source code. S2I lets
developers build containerized applications without needing to write Dockerfiles or
become experts in Docker. It combines source code with a base Docker image that
contains the appropriate runtime environment for the application. The result is a
ready-to-run Docker image built according to best practices.
Automated Build and Deployment Pipelines
– OpenShift has robust support for CI/CD processes, integrating tools like Jenkins,
GitLab CI, and others directly into the platform. It automates the build, test, and
deployment pipeline, enabling developers to commit code changes frequently without
the overhead of manual steps.
By focusing on these aspects of developer productivity, OpenShift significantly lowers the barrier
to entry for deploying applications in a Kubernetes environment, simplifies the management of
these applications, and accelerates the development cycle. This enables developers to spend
more time coding and less time dealing with deployment complexities, leading to faster
innovation and deployment cycles in a cloud-native landscape.
Beginning with the Operating System layer, this section will then explore the Compute layer,
specifically focusing on the IBM Power server, to emphasize the security features integrated
into its hardware design. Before delving into more detailed discussions, we will also introduce
the Network and Storage layers, highlighting how the OpenShift platform provides strategies
to address the challenges mentioned earlier.
Red Hat OpenShift Container Platform leverages Red Hat CoreOS, a container-oriented
operating system which implements the Security Enhanced Linux (SELinux) kernel to achieve
container isolation and supports access control policies. CoreOS includes:
Ignition: first boot system configuration responsible for starting and configuring machines
CRI-O: container runtime integrating with the OS, responsible for running, stopping and
restarting containers (it replaces the Docker Container Engine)
Kubelet: node agent responsible for monitoring containers
Ultimately, SELinux works together with namespaces, control groups, and secure computing
mode (seccomp) to isolate containers.
Importantly, IBM Power10 has in-core hardware that protects against Return-Oriented
Programming (ROP) cyberattacks with minimal performance overhead (1-2%). ROP
attacks are difficult to identify and contain, as they are based on collecting and reusing
existing code from memory (also known as “gadgets”), rather than injecting new code in the
system. In fact, hackers chain the commands already existing in the memory to perform
malicious actions.
IBM Power10 isolates the Baseboard Management Controller (BMC), the micro-controller
embedded on the motherboard responsible for remote management capabilities, and
implements allowlist and blocklist approaches to limit the CPU resources that the BMC can
access.
Figure 7-3 IBM Power10 security for LPARs and Cloud Native applications
For additional information please refer to section 1.4, “Architecture and implementation
layers” on page 9.
Red Hat OpenShift comes with Red Hat Single Sign-On (SSO), which acts as an API
authentication and authorization measure to secure platform endpoints.
As previously mentioned, Kubernetes clusters are composed of at least one master node
(preferably more for redundancy purposes) and multiple worker nodes, which are virtual or
physical machines on top of which containers run. Each node has an IP address and
containerized applications are deployed on these nodes as pods. Each pod is also identified
by a unique IP address, which simplifies network management because the pod can be
treated as a physical host or VM in terms of port allocation, naming, and load balancing.
The Red Hat SDN utilizes Open vSwitch to manage network traffic and resources as
software, allowing policy-based management. SDN controllers satisfy application requests
by managing networking devices and routing data packets to their destination.
The network components in a cluster are managed by a Cluster Network Operator (CNO),
which runs in turn on an OpenShift cluster.
Leveraging Single Root I/O Virtualization (SR-IOV) on IBM Power servers makes the network
design more flexible.
Before moving to storage, another functional aspect of OpenShift is the Network File System
(NFS), which is the method used to share files across clusters over the network. While NFS is
an excellent solution for many environments, understanding the workload requirements of an
application is important when selecting NFS based storage solutions.
The storage layer section aims to address the first of the challenges mentioned above:
complexity and visibility.
When a container is created, a transient layer handling all read/write data is present within it.
However, when the container stops running, this ephemeral layer is lost. Depending on
the nature of the container, administrators assign either volumes (bound to the
lifetime of the pod) or persistent volumes (persisting longer than the lifetime of the pod).
With the Red Hat OpenShift Platform Plus plan, the enterprise can leverage Red Hat
OpenShift Data Foundation, a software-defined storage orchestration platform for container
environments. The data fabric capabilities of OpenShift Data Foundation are derived from the
combination of Red Hat Ceph (software-defined storage platform), Rook.io (storage operator),
and NooBaa (storage gateway). OpenShift Data Foundation can be deployed as an internal or
external storage cluster, and it utilizes CSI to serve storage to the OpenShift Container
Platform pods. The capabilities provided allow administrators to manage block, file, and
object storage to serve databases, CI/CD tools, and S3 API endpoints to the nodes.
Having clarified the contextual framework of the storage layer in OpenShift, the following are
the security measures that Red Hat Ceph enforces to address threat and vulnerability
management, encryption, and identity and access management:
– Maintaining upstream relationships and community involvement to help focus on
security from the start.
– Selecting and configuring packages based on their security and performance track
records.
– Building binaries from associated source code (instead of simply accepting upstream
builds).
– Applying a suite of inspection and quality assurance tools to prevent an extensive array
of potential security issues and regressions.
– Digitally signing all released packages and distributing them through cryptographically
authenticated distribution channels.
– Providing a single, unified mechanism for distributing patches and updates.
Source trusting
When pulling code from a GitHub repository, the first consideration should be whether or not
you can trust the third-party developer. Inevitably, developers might overlook vulnerabilities
in libraries or other dependencies used in the code; therefore, it is recommended to conduct
proper due diligence before deploying a container in your enterprise environment.
To mitigate the risk, Red Hat provides Quay, a security-focused container image registry
that is included in Red Hat OpenShift Platform Plus.
Deployments on cluster
It is recommended to leverage automated policy-based tools to deploy containers in
production environments. In this regard, Security Context Constraints (SCCs), packaged in
Red Hat OpenShift Container Platform (extensively discussed in 8.3.1), support
administrators in securing sensitive information by allowing or denying access to volumes,
accepting or denying privileges, and extending or limiting the capabilities that a container
requires.
Orchestrating securely
Red Hat OpenShift extends Kubernetes capabilities in terms of secure container orchestration by:
Handling access to the master node via Transport Layer Security (TLS), which ensures
that data in transit over the internet is encrypted
Ensuring that the API server access is based on X.509 certificates or OAuth access
tokens
Avoiding the exposure of etcd (open source key-value store database for critical data) to
the cluster
SELinux
Moreover, Red Hat Single Sign-On (SSO), an API authentication and authorization service,
features client adapters for Red Hat JBoss and Node.js, and supports Lightweight Directory
Access Protocol (LDAP)-based directory services. An API management tool advised in this
context is Red Hat 3scale API Management.
To configure a firewall for OpenShift Container Platform 4.12, you must define the sites that
OCP requires so that the firewall grants access to them. As a first step, it is recommended to
create an allowlist containing the URLs in Figure 7-4 on page 206. If a specific framework
requires additional resources, this is the step at which to include them.
If you wish to use Telemetry to monitor the health, security, and performance of application
components, you must also allow the URLs shown in Figure 7-5 in order to access Red Hat
Insights.
If the environment extends to Alibaba, AWS, GCP or Azure to host the cluster, it will be
necessary to grant access to the provider API and DNS for the specific cloud. An example of
this is shown in Figure 7-6.
Figure 7-7 shows an example of a YAML definition of a secret object type and describes
some of the contents.
1. Indicates the structure of the secret (in this case, opaque identifies a key-value pair).
2. The format for the keys in “data” must meet the guidelines for DNS_SUBDOMAIN of the
K8s glossary. More information can be found at this link:
https://github.com/kubernetes/kubernetes/blob/v1.0.0/docs/design/identifiers.md
3. Values associated with the keys in “data” must be base64 encoded.
4. Entries in “stringData” are converted to base64 and moved to “data” automatically.
5. Plain text strings are associated with the “stringData” key.
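Figure 7-7 itself is not reproduced here; as a hedged sketch, an equivalent opaque secret
can also be created from the CLI, which performs the base64 encoding automatically (names
and values are hypothetical):

oc create secret generic demo-secret \
  --from-literal=username=devuser \
  --from-literal=password='s3cr3t'

oc get secret demo-secret -o yaml   # inspect the base64-encoded data fields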
Security contexts and security context constraints are required for a container to configure
access to protected Linux operating system functions on an OpenShift Container Platform
cluster. While SCs are defined by the development team, SCCs are determined by cluster
administrators. An application's security context specifies the permissions that the application
needs, whereas the cluster's security context constraints specify the permissions that the
cluster allows. An SC with an SCC enables an application to request access while limiting the
access that the cluster will grant.
By default, OpenShift prevents the containers running in a cluster from accessing protected
functions. These functions – Linux features such as shared file systems, root access, and
some core capabilities such as the KILL command – can affect other containers running in
the same Linux kernel, so the cluster limits access to them. Most cloud-native applications
work fine with these limitations, but some (especially stateful workloads) need greater access.
Applications that need these functions can still use them, but they need the cluster's
permission.
SCs are defined as a YAML file within the pod that attempts to deploy the application into
production. SCCs determine which Linux functions a pod can request for its application. The
pod requesting access to specific functions via SCs will fail to launch unless SCCs give
permission to proceed.
Taking a closer look at one of the SCCs illustrated above, the object would look like the one
represented in Figure 7-10.
The following sections discuss protected Linux functions such as privileges, access controls
and capabilities.
7.4.1 Privileges
Privileges describe the authority of a given pod and the containerized applications
running within it. Privileges can be assigned in two places: in the SC, where the privileged
field is set to true in the request, or in the SCC, where allowing privileged pods is set to true.
This is shown in Example 7-1.
In Example 7-1, the first line indicates that the container will run with the specified privileges,
whereas the second line allows a process in the pod to gain more privileges than its parent
process (privilege escalation).
The request for privileges in an SC is shown in Example 7-2. It is worth noting that when
privileges are requested from the SC's perspective, the developer only needs to request
privileges, whereas from the SCC's perspective, the administrator is expected to be specific
about the set of privileges that are allowed.
It is good practice to keep in mind that privileged pods might endanger the host and other
containers; therefore, only well-trusted processes should be allowed privileges.
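Since Examples 7-1 and 7-2 are not reproduced here, the following is a minimal sketch of a
pod spec that requests privileged execution in its security context (the pod name and image
are hypothetical):

oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo
spec:
  containers:
  - name: demo
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true        # the SC request discussed above
EOF

Such a pod launches only if an SCC that allows privileged containers is available to its
service account.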
As previously illustrated, the correct syntax for the development team to include these
requests is:
securityContext.field
Once the request is made, it will be processed and validated against the cluster SCCs.
Example 7-4 shows how a new SCC would look, integrating the fields listed in Example 7-3
on page 210.
7.4.3 Capabilities
Some capabilities, specifically Linux OS capabilities, take precedence over the pod’s settings.
A list of these capabilities can be found in this document. For completeness, Example 7-5
shows some of the most popular ones.
In Figure 7-11, the SC fails to pass due to three critical issues shown as points 1, 2 and 4.
First, in the attempt to control the pod storage volumes, the SC requests fsGroup 5555.
The reason this fails is that the SCC “restricted” does not specify a range for fsGroup;
therefore, the default range is used (1000000000-1000009999), which excludes the
requested fsGroup 5555. (1)
Secondly, the SC asks permission to runAsUser 1234. However, the SCC “restricted”
once again takes into consideration the default range (1000000000-1000009999), so the
request fails because it is not within the range. (2)
Finally, the deployment manifest requests the capability “SYS_TIME” (which gives the ability
to manipulate the system clock). This request fails because the SCC specifies “SYS_TIME”
neither in “allowedCapabilities” nor in “defaultAddCapabilities” (4). The only request that
passes is (3): the SC requests runAsGroup 5678, and this is allowed by the runAsAny field
of the “restricted” SCC.
As a final remark, (5) is a note highlighting that the container is assigned the project's
default context value because the seLinuxContext is set to MustRunAs but lacks the specific
context.
The Red Hat OpenShift Container Monitoring Platform addresses many of these monitoring
challenges through a preconfigured, automatically updating stack based on Prometheus,
Grafana, and Alertmanager. Key components of this platform include:
Prometheus: Used as a backend to store time-series data, Prometheus is an
open-source solution for cloud-native architecture monitoring. It offers powerful querying
capabilities and a flexible data model, making it suitable for a wide range of monitoring
scenarios.
Alertmanager: Handles alarms and sends notifications. It integrates seamlessly with
Prometheus, allowing for sophisticated alerting rules and notification mechanisms.
Alertmanager supports multiple notification channels, including email, Slack, and
PagerDuty, ensuring that alerts reach the right people at the right time.
Grafana: Provides visual data representation through graphs. Grafana's rich visualization
capabilities allow users to create dynamic and interactive dashboards, making it easier to
interpret monitoring data and identify trends and anomalies.
IBM Instana enhances the observability and APM functions provided by the default Red Hat
OpenShift container monitoring tools. Instana is an automated system and APM service that
visualizes performance through machine learning-generated graphs. It increases application
performance and reliability through deep observability and applied intelligence. Instana excels
in cloud-based microservices architectures, enabling development teams to iterate quickly and
address issues before they impact customers. Instana provides several key capabilities:
Automatic Discovery and Instrumentation: Instana automatically discovers applications
and their dependencies, and instruments them without requiring manual intervention. This
reduces the overhead associated with setting up monitoring and ensures that all
components are monitored from the outset.
Real-Time Data Collection: Instana collects data in real-time, providing immediate
insights into application performance and health. This real-time visibility is critical for
identifying and resolving issues before they affect users.
Machine Learning-Based Analytics: Instana uses machine learning algorithms to
analyze performance data and detect anomalies. This predictive capability helps in
identifying potential issues early and taking preemptive action.
Comprehensive Dashboards: Instana offers comprehensive dashboards that provide a
unified view of application performance, infrastructure health, and user experience. These
dashboards can be customized to meet the specific needs of different stakeholders, from
developers to operations teams.
By integrating IBM Instana with Red Hat OpenShift, organizations can elevate their
monitoring and observability capabilities, ensuring that their cloud-native applications remain
performant, resilient, and reliable.
Audit logs provide a detailed record of all activities and changes within the system. They are
crucial for tracking user actions, detecting unauthorized access, and investigating security
incidents. Effective audit logging helps in maintaining compliance with regulatory
requirements and provides an audit trail that can be used for forensic analysis.
The Red Hat OpenShift File Integrity Operator enhances security by monitoring file integrity
within the cluster. It detects unauthorized changes to critical system files, ensuring that the
integrity of the operating environment is maintained. The File Integrity Operator works by
periodically checking the hashes of monitored files and comparing them to known good
values. Any discrepancies trigger alerts, allowing administrators to investigate and remediate
potential security breaches.
The authentication process in Red Hat OpenShift Container Platform involves multiple layers to
ensure secure access to its resources. Users authenticate primarily through OAuth access
tokens or X.509 client certificates. OAuth tokens are obtained via the platform's built-in OAuth
server, which supports authentication flows such as Authorization Code Flow and Implicit Flow.
The server integrates seamlessly with various identity providers, including LDAP, Keystone,
GitHub, and Google, enabling organizations to leverage existing user management systems
securely.
X.509 client certificates are utilized for HTTPS-based authentication, providing a robust
mechanism for verifying the identity of clients interacting with the OpenShift API server. These
certificates are verified against a trusted Certificate Authority (CA) bundle, ensuring the integrity
and authenticity of client connections.
In OpenShift, users are classified into different categories based on their roles and
responsibilities within the platform. Regular users are typically individuals who interact directly
with applications and services deployed on OpenShift. System users, on the other hand, are
automatically generated during the platform's setup and are associated with specific
system-level tasks, such as managing cluster nodes or executing infrastructure-related
operations.
Service accounts represent a specialized type of system user tailored for project-specific roles
and permissions. These accounts enable automated processes within projects, ensuring that
applications and services can securely access resources without compromising system
integrity.
Groups play a pivotal role in managing authorization policies across OpenShift environments.
Users can be organized into groups, facilitating streamlined assignment of permissions and
simplifying the enforcement of access control policies. Alongside user-defined groups,
OpenShift automatically provisions virtual groups, which include system-defined roles and
default access configurations. This hierarchical group structure ensures efficient management
of user permissions while adhering to organizational security policies and compliance
requirements.
The internal OAuth server in OpenShift acts as a central authority for managing authentication
and authorization workflows. It issues and validates OAuth tokens used by clients to
authenticate API requests, ensuring that only authorized users and applications can access
protected resources. Administrators can configure the OAuth server to integrate seamlessly
with various identity providers, including htpasswd, Keystone, LDAP, and external OAuth
providers like GitHub or Google. Each identity provider offers distinct authentication
mechanisms, such as simple bind authentication for LDAP or OAuth 2.0 flows for external
identity providers, enhancing flexibility and compatibility with diverse organizational
environments.
ClusterRoles extend RBAC capabilities by providing cluster-wide permissions that apply to all
users within the platform. ClusterRoleBindings establish associations between ClusterRoles
and subjects (users or groups), allowing administrators to manage permissions consistently
across large-scale deployments.
Administrators can configure and manage RBAC roles and role bindings using command-line
interfaces (CLI) or graphical user interfaces (GUI) provided by OpenShift. Practical examples
illustrate the steps for creating, modifying, and deleting roles and bindings, ensuring precise
control over access permissions across diverse user populations and project environments.
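As a brief sketch of such CLI management (the user, group, and project names are
hypothetical):

# Bind the built-in "edit" role to a user within one project
oc adm policy add-role-to-user edit dev1 -n myproject

# Create a custom role that can only view pods, then bind it to a group
oc create role pod-reader --verb=get,list,watch --resource=pods -n myproject
oc adm policy add-role-to-group pod-reader qa-team -n myproject --role-namespace=myproject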
Role-based access control strategies empower organizations to align access policies with
business requirements, enforcing security best practices while facilitating seamless
collaboration and application deployment within OpenShift Container Platform.
7.7 Tools
There are multiple tools available to assist you in setting up and monitoring security in your
OpenShift environment. This section describes some of them.
7.7.1 Aqua
This section delves into Aqua, a robust security tool designed explicitly for safeguarding
workloads hosted on Red Hat OpenShift running on IBM Power servers. Developed by an
IBM Business Partner, Aqua addresses the intricate security challenges inherent in
cloud-native environments, spanning the entire lifecycle of containerized applications.
Aqua integrates seamlessly with Red Hat OpenShift on IBM Power by deploying an Aqua
Enforcer container on each node within the cluster. These enforcers communicate with the
Aqua Security Control Plane, enabling the enforcement of security policies and providing
real-time visibility into the security status of the cluster. This integration augments native
OpenShift security controls, enhancing overall security posture without compromising
platform compatibility or performance.
Recognizing the trend towards hybrid and multi-cloud deployments, Aqua supports security
management across diverse infrastructure environments. It enables organizations to maintain
consistent security policies and compliance measures across on-premises data centers and
public cloud platforms, thereby reducing the attack surface and mitigating risks associated
with complex deployment landscapes.
The solution helps protect containerized Kubernetes workloads in all major clouds and hybrid
platforms, including Red Hat OpenShift, Amazon Elastic Kubernetes Service (EKS), Microsoft
Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).
7.7.2 Red Hat Advanced Cluster Security for Kubernetes
Red Hat Advanced Cluster Security for Kubernetes is included with Red Hat OpenShift
Platform Plus, a complete set of powerful, optimized tools to secure, protect, and manage
your applications. See the detailed information at the following link:
https://www.redhat.com/en/technologies/cloud-computing/openshift/advanced-cluster-security-kubernetes
A good feature of Red Hat ACS is that it works to prevent risky workloads from being
deployed or running. Red Hat Advanced Cluster Security monitors, collects, and evaluates
system-level events such as process execution, network connections and flows, and privilege
escalation within each container in your Kubernetes environments. Combined with behavioral
baselines and “allowlisting”, it detects anomalous activity indicative of malicious intent such
as active malware, cryptomining, unauthorized credential access, intrusions, and lateral
movement.
See the full features at the Red Hat ACS Data Sheet.
Chapter 8. Certifications
Security standards are a set of guidelines and best practices that organizations can follow to
protect their sensitive information and systems from cyber threats. These standards are
developed by various organizations and agencies, such as the International Organization for
Standardization (ISO) and the National Institute of Standards and Technology (NIST).
IBM continuously works to maintain certification for industry security standards to provide our
clients with a product base that will help them build systems that are compliant to the relevant
industry standards.
8.1.2 Certifications
Certifications for security standards provide third-party validation that an enterprise is
compliant with specific security standards. Certification demonstrates a commitment to
robust security practices, reducing the risk of data breaches and cyberattacks, and thereby
enhances the organization's security posture. Certification is designed to:
Build customer trust
Certified organizations gain the trust of customers and partners, especially those in highly
regulated industries.
Show regulatory compliance
Many industries have specific regulations that require adherence to certain security
standards. Certification can help organizations meet these requirements.
Create a competitive advantage
Certification can differentiate an organization from competitors, showcasing a strong
security culture.
8.2 FIPS
The Federal Information Processing Standards (FIPS) are a set of publicly announced
standards that the National Institute of Standards and Technology (NIST) has developed for
use in the computer systems of U.S. government agencies and contractors. FIPS standards
establish requirements for ensuring computer security and interoperability, and are intended
for cases in which suitable industry standards do not already exist.
Focusing on IBM Power server security standards, the IBM PCIe Cryptographic
Coprocessors deserve mention: they are a family of high-performance hardware security
modules (HSMs). These programmable PCIe cards work with IBM Power servers to offload
computationally intensive cryptographic processes, such as secure payments or
transactions, from the host server. Using these HSMs allows you to gain significant
performance and architectural advantages and enables future growth by offloading
cryptographic processing from the host server, in addition to delivering high-speed
cryptographic functions for data encryption and digital signing, secure storage of signing
keys, or custom cryptographic applications. This coprocessor family has been validated to
FIPS PUB 140-2, Security Requirements for Cryptographic Modules, Overall Security
Level 4, the highest level of certification achievable.
https://www.ibm.com/products/pcie-cryptographic-coprocessor
Each of IBM's HSM devices offers the highest cryptographic security available
commercially. Federal Information Processing Standards (FIPS) publication 140-2 defines
security requirements for cryptographic modules. It is issued by the National Institute of
Standards and Technology (NIST) and is widely used as a measure of the security of HSMs.
The cryptographic processes of each of the IBM HSMs are performed within an enclosure on
the HSM that is designed to provide complete physical security.
https://www.ibm.com/docs/en/cryptocards?topic=hsm-highlights
The IBM AIX operating system supports FIPS; you can read more information at the following link:
– https://www.stigviewer.com/stig/ibm_aix_7.x/2023-08-23/
If you are dealing with the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your
OpenShift cluster, FIPS mode is applied when the machines are deployed, based on the
status of an installation option that a user can set during cluster deployment. With Red Hat
Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the
operating system on the machines that you plan to use as worker machines. These
configuration methods ensure that your cluster meets the requirements of a FIPS
compliance audit: only FIPS-validated or Modules In Process cryptography packages are
enabled before the initial system boot.
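As a brief sketch, FIPS mode for RHCOS nodes is requested declaratively before cluster
deployment, and can be verified on RHEL hosts afterwards:

# install-config.yaml excerpt (set before cluster deployment)
#   fips: true

# On a RHEL worker, verify whether FIPS mode is enabled
fips-mode-setup --check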
Common Criteria (ISO 15408) is the only global mutually recognized product security
standard. The goal of the Common Criteria is to develop confidence and trust in the security
characteristics of a system and in the processes used to develop and support it.
The ISO 15408 international standard is specifically for computer security certification. The
full description of the ISO 15408 standard can be found at:
– https://www.iso.org/standard/72891.html.
Federal IT security pros within the DoD must comply with the STIG technical testing and
hardening frameworks. According to DISA (https://disa.mil/), STIGs “are the configuration
standards for DOD [information assurance, or IA] and IA-enabled devices and systems.” The
CIS is best known for its CIS Controls, a comprehensive framework consisting of 20 essential
safeguards and countermeasures designed to improve cyber defense. These controls offer a
prioritized checklist that organizations can use to significantly reduce their vulnerability to
cyberattacks. Additionally, CIS produces CIS Benchmarks, which provide best practice
recommendations for secure system configurations, referencing these controls to guide
organizations in building stronger security measures.
CIS benchmarks align closely with security and data privacy regulatory frameworks, including
the NIST (National Institute of Standards and Technology) Cybersecurity Framework, PCI
DSS (Payment Card Industry Data Security Standard), HIPAA (Health Insurance Portability
and Accountability Act), and ISO/IEC 27001. As a result, any organization operating
in an industry governed by these types of regulations can make significant progress toward
compliance by adhering to CIS benchmarks. In addition, CIS Controls and CIS Hardened
Images can help support an organization's compliance with GDPR (the European Union's
General Data Protection Regulation).
Each CIS Benchmark offers configuration recommendations organized into two profile levels:
Level 1 and Level 2. Level 1 profiles provide base-level configurations that are easier to
implement with minimal impact on business operations. Level 2 profiles are designed for
high-security environments, requiring more detailed planning and coordination to implement
while minimizing business disruption.
Currently, there are more than 100 CIS Benchmarks that are available through free PDF
download for non-commercial use.
Here are some CIS Benchmarks that are relevant to IBM Power systems:
CIS Benchmark for IBM AIX:
This benchmark provides security configuration guidelines for IBM's AIX operating system,
which is commonly used on IBM Power Systems. It includes best practices for system
configuration to enhance security and reduce vulnerabilities.
CIS Benchmark for IBM i
This benchmark offers recommendations for securely configuring the IBM i operating
system. It focuses on system settings, security policies, and configurations to improve
overall security posture.
CIS Benchmarks for Linux
For IBM Power Systems running Linux, there is a generic Linux benchmark as well as
benchmarks for Red Hat Enterprise Linux, SUSE Enterprise Linux and Ubuntu Linux.
These benchmarks are regularly updated to reflect the latest security practices and
vulnerabilities. You can find the most recent versions and additional details on the CIS
website or through their publications and resources. Table 8-2 provides a more
comprehensive list.
For the most accurate and current information, always refer to the CIS official website.
Chapter 9. PowerSC
IBM PowerSC is a security and compliance solution optimized for virtualized environments on
IBM Power servers running AIX, IBM i or Linux.
PowerSC sits on top of the IBM Power server stack, integrating security features built at different layers. You can centrally manage security and compliance on Power for all IBM AIX and Linux on Power endpoints, which gives you better support for compliance audits, including GDPR.
This chapter discusses PowerSC security features and its most used components, such as:
9.1, “Compliance automation” on page 228
9.2, “Real-time file integrity monitoring” on page 228
9.3, “Endpoint Detection and Response” on page 229
9.4, “Anti-malware integration” on page 229
9.5, “Multi factor authentication” on page 230
PowerSC helps to automate the configuration and monitoring of systems that must be compliant with the Payment Card Industry Data Security Standard (PCI DSS). The PowerSC Security and Compliance Automation feature is an accurate and repeatable method of security configuration automation that is used to meet the IT compliance requirements of the DoD UNIX STIG, PCI DSS, the Sarbanes-Oxley Act (SOX/COBIT), and HIPAA.
The PowerSC Security and Compliance Automation feature creates and updates ready-to-use XML profiles that are used by the IBM Compliance Expert express (ICEE) edition. You can apply and check the PowerSC XML profiles with the pscxpert command.
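As an illustration, applying a profile and then checking the system against it might look like the following (a minimal sketch; the level name is illustrative, so confirm the profile names that your PowerSC release ships before using them):
# Apply the SOX/COBIT compliance profile to this system (illustrative level name)
pscxpert -l sox-cobit
# Re-check the system against the applied profile and report any deviations
pscxpert -c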
The preconfigured compliance profiles delivered with PowerSC reduce the administrative
workload of interpreting compliance documentation and implementing the standards as
specific system configuration parameters. This technology reduces the cost of compliance
configuration and auditing by automating the processes. IBM PowerSC is designed to help you effectively manage the system requirements associated with external standards compliance, which can reduce costs and improve audit readiness.
PowerSC also watches over the critical files on a system that contain sensitive data, such as configuration details and user information. From a security perspective, it is important to monitor changes that are made to these sensitive files. File Integrity Monitoring (FIM) is a method that detects such changes, not only for critical files but also for binaries and libraries.
PowerSC has the capability to generate real-time alerts whenever the contents of a monitored file are changed, and even when a file’s characteristics are modified. By using the AHAFS event
monitoring technology, PowerSC RTC monitors all of these changes and will generate alerts
using the following methods:
Email alerts
Log message to a file
SNMP message to your monitoring server
Alert to PowerSC GUI server
For more information, see the PowerSC product documentation and the PowerSC GUI description on the IBM Support site.
One of the EDR capabilities in PowerSC is that you can configure intrusion detection and prevention (IDP) for a specific endpoint. For
AIX, the PowerSC GUI allows you to use the IP Security (IPSec) facility of AIX to define
parameters for intrusion detection. The IP Security (IPSec) facility of AIX must already be
installed on the AIX endpoint. For Red Hat Enterprise Linux Server and SUSE Linux
Enterprise Server, you must install the psad package on each endpoint on which you want to
run psad, as described in Installing PowerSC on Linux systems, before you can use it with
PowerSC GUI.
The PowerSC GUI uiAgent monitors the endpoint for port scan attacks on the ports listed in
IPSec filter rules. By default, PowerSC creates an IPv4 rule in /etc/idp/filter.rules to monitor
operating system network ports. PowerSC also creates the /var/adm/ipsec.log log file. The IP
Security (IPSec) facility of AIX also parses IPv6 rules in /etc/idp/filter.rules and the IPv6
addresses appear in the event list.
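As a quick check on an AIX endpoint (a minimal sketch, assuming the default file locations named above):
# Review the IPv4 and IPv6 filter rules that PowerSC created for intrusion detection
cat /etc/idp/filter.rules
# Watch the IP Security log for port scan events as they arrive
tail -f /var/adm/ipsec.log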
IBM PowerSC can integrate with the ClamAV open source anti-virus toolkit to help prevent malware attacks and to detect Trojans, viruses, and other malicious threats. It does this by scanning incoming data to prevent malware from being installed and infecting the server.
Through the PowerSC server UI, you can configure anti-malware settings for specific
endpoints. ClamAV will then move or copy any detected malware to the quarantine directory
on the PowerSC uiAgent, assigning a time-stamped prefix and nullifying file permissions to
prevent access. Note that ClamAV is not included in the initial PowerSC package, so you’ll
need to install it on the uiAgent before it can be utilized with the PowerSC GUI.
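Outside of the PowerSC GUI, a manual ClamAV run looks like the following (a minimal sketch; the quarantine path and scan target are assumptions for illustration):
# Update the ClamAV signature database
freshclam
# Recursively scan /home and move any detected files to a quarantine directory
clamscan -r --move=/var/clamav/quarantine /home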
See the following links for installing the ClamAV toolkit on the operating systems:
– https://www.ibm.com/docs/en/powersc-standard/2.2?topic=malware-installing-anti-aix
– https://www.ibm.com/docs/en/powersc-standard/2.2?topic=cam-installing-anti-malware-red-hat-enterprise-linux-server-suse-linux-enterprise-server
– https://www.ibm.com/docs/en/powersc-standard/2.2?topic=malware-installing-configuring-anti-i
IBM PowerSC can deploy Multi-Factor Authentication (MFA) to mitigate the risk of data breaches caused by compromised credentials. PowerSC Multi-Factor
Authentication (PMFA) provides numerous flexible options for implementing MFA on Power.
PMFA is implemented with a Pluggable Authentication Module (PAM), and can be used on
AIX, VIOS, RHEL, SLES, IBM i, HMC, and PowerSC Graphical User Interface server.
The National Institute of Standards and Technology (NIST) defines MFA as authentication
that uses two or more factors to achieve authentication.
Factors include:
 Something that you know, such as a password or a personal identification number (PIN)
 Something that you have, such as a cryptographic identification device or a token
 Something that you are, such as a biometric
IBM PowerSC MFA improves the security of user accounts by allowing users to provide their credentials either directly in the application (in-band) or out-of-band.
For in-band authentication, users generate a token to satisfy a policy and use that token to log in directly. Out-of-band authentication, in contrast, has users authenticate on a user-specific web page with one or more authentication methods to retrieve a cache token credential (CTC) that they then use to log in. For more information, see Out-of-band authentication type in the product documentation.
IBM PowerSC MFA server can be installed on AIX, IBM i or Linux operating systems. See the
links for installation procedures:
– https://www.ibm.com/docs/en/powersc-mfa/2.2?topic=installing-powersc-mfa-server-aix
– https://www.ibm.com/docs/en/powersc-mfa/2.2?topic=installing-powersc-mfa-server-pase-i
– https://www.ibm.com/docs/en/powersc-mfa/2.2?topic=installing-powersc-mfa-server-linux
See also the full user’s guide and installation guide for IBM PowerSC MFA:
– https://www.ibm.com/docs/en/SS7FK2_2.2/pdf/powersc_mfa_users_pdf.pdf
– https://www.ibm.com/docs/en/SS7FK2_2.2/pdf/powersc_mfa_install_pdf.pdf
In the replication model, the Postgres database on the secondary IBM PowerSC MFA server is a read-only copy of the database on the primary IBM PowerSC MFA server.
Before you configure IBM PowerSC MFA for high availability, satisfy the following
prerequisites:
The primary and secondary server must use the same operating system.
Updates to any files in /opt/IBM/powersc/MFA/mfadb are not preserved if you reinstall the
IBM PowerSC MFA server.
If the secondary server uses Red Hat Enterprise Linux Server or SUSE Linux Enterprise
Server, install Postgres, openCryptoki and opencryptoki-swtok on the secondary server.
With the introduction of Power Virtual Server and its ability to run AIX, IBM i, and Linux on
Power in the cloud, understanding Power Virtual Server security is crucial for establishing a
reliable and secure environment.
This chapter is designed to give a high-level overview of security in Power Virtual Server. For additional information, see the links in section 10.7, “Additional References” on page 237.
You can use the service access roles to define the actions that the users can perform on
Power Virtual Server resources. Table 10-2 displays the IAM service access roles and the
corresponding actions that a user can complete by using the Power Virtual Server:
When you assign access to the Power Virtual Server service, you can set the access scope
to:
All resources
Specific resources, which support the following selections:
– Resource group
– Service instance
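For illustration, such a policy can also be assigned from the IBM Cloud CLI (a hedged sketch; the user name is a placeholder, and power-iaas is the service name that Power Virtual Server uses in IAM, which you should confirm for your account):
# Grant a user the Reader role on all Power Virtual Server resources in the account
ibmcloud iam user-policy-create user@example.com --roles Reader --service-name power-iaas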
While learning from real-world incidents is valuable, proactive measures are crucial to prevent
costly breaches. Security experts emphasize the importance of cultivating a
security-conscious workforce through targeted training and awareness campaigns. By
fostering a culture where security is a shared responsibility, organizations can significantly
reduce their risk exposure.
By following these recommendations, organizations can significantly reduce the financial and
reputational impact of a data breach.
IBM X-Force published the IBM X-Force Threat Intelligence Index 2024. The following is a
summary of the findings:
Identity-Centric Attacks: Cybercriminals increasingly target identities as the easiest point
of entry, with a significant rise in credential theft and abuse.
Ransomware Decline, Data Theft Surge: While ransomware attacks decreased, data theft
and leaks became the primary motivation for cyberattacks.
Infostealer Malware Growth: The use of infostealer malware to steal credentials has
skyrocketed, fueling the dark web's stolen credential market.
Overall, the report highlights a shift in cybercrime tactics towards identity-based attacks and
data theft, while also warning of the growing threat posed by AI. Organizations must prioritize
identity protection, implement strong security measures, and stay vigilant against evolving
threats.
11.1.4 Summary
The importance of fixing the basics is key. In other words, security is built from steps such as asset inventory, patching, and training. Some important points to take into consideration:
Develop an automated methodology for secure assessments and detection.
Establish a risk management framework that includes cyber insurance.
Maintain a dedicated environment for testing security patches.
Ensure rollback options are available in all scenarios.
11.2.1 Usernames and Passwords
This is one of the most basic protections. In order to have longer usernames and passwords, you need to make a system change. Increasing the username length is almost always required if you want to integrate with LDAP or AD (Active Directory), and the change requires a reboot. Below is the command to increase the maximum username length to 36:
chdev -l sys0 -a max_logname=36
The above change requires a reboot of the LPAR. In order to have longer passwords, you need to use the chsec command. The version below causes the system to use ssha256 (passwords of up to 255 characters). The next time local users change their password, they can set a much longer, more secure password.
chsec -f /etc/security/login.cfg -s usw "pwd_algorithm=ssha256"
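You can verify both settings with standard AIX commands:
# Confirm the new maximum username length
lsattr -El sys0 -a max_logname
# Confirm the password hashing algorithm now in effect
lssec -f /etc/security/login.cfg -s usw -a pwd_algorithm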
Finally, I normally set the system up to automatically create home directories; this is important in an LDAP or AD environment. An illustration of this is shown in Example 11-1.
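Example 11-1 is not reproduced on this page; as a sketch, the usual way to do this on AIX is the mkhomeatlogin attribute in the usw stanza of /etc/security/login.cfg:
# Create home directories automatically at first login (useful with LDAP or AD users)
chsec -f /etc/security/login.cfg -s usw -a mkhomeatlogin=true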
11.2.2 Logging
Logging is a critical part of any system-protection strategy. Without logs, it is impossible to
know what has been happening on the system. The syslog daemon (syslogd) starts by
default on AIX, but the log configuration file is not set up to actually log everything. The first
step is to correctly set up /etc/syslog.conf. It is best to set up a separate file system for logs
(e.g., /usr/local/logs) rather than use the default of /var/spool. If /var fills up, the system will
crash; if your separate file system fills up, it will just stop logging. Although file systems should
be monitored, it is still wise to store logs in their own file system to protect against large logs
bringing down the system. Logs can be written to a file, sent to the console, logged to a
central host across the network (be wary of this as the traffic can be substantial), e-mailed to
an administrator or sent to all logged-in users or any combination thereof. The most
commonly used method is writing to a file in a file system. Once the file system is set up, code
a /etc/syslog.conf file. Example 11-2 on page 243 shows an example file that writes to a local
filesystem. It keeps the logs to no more than 2MB, then rotates and compresses them,
keeping the last 10 logs. I do this on all LPARs and VIO servers.
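Example 11-2 appears on page 243; a representative entry, as a sketch that assumes the AIX syslogd rotation keywords, looks like this:
# Log info-level and higher messages locally, rotating at 2 MB,
# compressing old logs, and keeping the last 10 copies
*.info /usr/local/logs/syslog.log rotate size 2m files 10 compress
auth.info /usr/local/logs/auth.log rotate size 2m files 10 compress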
Go into /usr/local/logs and create each of the files above using touch. Now you can stop
(stopsrc -s syslogd) and start (startsrc -s syslogd) the logging daemon.
You will notice that the hardened /etc/inetd.conf is only four lines, and everything is commented out. On a NIM server you will see tftp and bootp uncommented. Occasionally, when you do maintenance, it uncomments services or adds them back in. When the file is only four lines, you can see immediately that this has happened. I do not use ftp and telnet because they are insecure; I use ssh and sftp instead. If you have to use telnet or ftp, you can uncomment them, but remember that they send passwords and other data in clear text. I would also recommend looking at /etc/rc.tcpip to see whether snmp, sendmail, and
other daemons are starting. If you need snmp or sendmail to run then they should be properly
configured to keep hackers from taking advantage of default exploits.
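A quick audit after any maintenance window can confirm that nothing was re-enabled:
# Show only the active (uncommented) services in inetd.conf
grep -v "^#" /etc/inetd.conf
# Re-read the configuration after editing it
refresh -s inetd
# See which daemons rc.tcpip starts at boot
grep -v "^#" /etc/rc.tcpip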
11.2.5 Patching
At a minimum, make sure you are running a fully supported version of the OS (VIOS, AIX,
IBM i, Linux). You can check this by using FLRT (the Fix Level Recommendation Tool). It is important to keep your patching up to date to proactively solve problems.
In the AIX/VIOS world there are two different kinds of patching. The first is fix packs (technology levels and service packs) and the second is efixes or ifixes (emergency or interim fixes). Fix packs are installed by using installp and efixes/ifixes are installed by using emgr.
Technology levels and service packs are found at Fix Central. You should check here
regularly for updates to your LPARs, VIO servers, server and I/O firmware and HMCs.
Additionally, there are products installed – even at the latest service pack – that need
updating. Typically this includes Java, OpenSSH and OpenSSL. Java patches are
downloaded at Fix Central. OpenSSH and OpenSSL are downloaded at the Web Applications
site. I try to get a full patching window every six months unless it is an emergency. You can
use the FLRT and FLRTVC tools to determine what patching needs to occur.
Typically, I update the HMC first, then the server firmware, then the I/O firmware and VIOS
servers and finally the LPARs. However, you should look at the readme/description files for
every update to make sure IBM does not have prerequisites that must be followed. This is
particularly important with the HMC and server firmware interaction. There are also some
requirements with POWER9 and adapter firmware because of the new trusted boot settings.
To run flrtvc you first need to download the zip file and then unzip it. You may also need to
download the apar.csv file. If your LPAR/VIO does not have access to IBM, then you will need
to get the file from IBM and upload it to the LPAR yourself. You then edit the script and change
SKIPDOWNLOAD from 0 to 1. It will now look for the apar.csv file in the same directory the
script is in. Once that is done you can run it in compact mode and produce an output file as
follows:
– cd /path/to/flrtvc
– ksh93 ./flrtvc.ksh > systemname-flrtvc-output.csv
Then sftp or scp (as ASCII) the systemname-flrtvc-output.csv file to your computer and open
it with Excel as a csv file. The delimiter is |.
There are a number of flags that you can use, but for the most part I do not use any of them because I want to get everything. I tend to have the output go to an NFS-mounted filesystem so that all of my security reports are in one place. That way you can concatenate them together, or at least download them all from one place. You can also write scripts that grep for certain things in the output and email those to yourself.
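For example (a sketch; the filter string, file name, and recipient are illustrative):
# Mail yourself just the lines of interest from the FLRTVC report
grep -i "sec" systemname-flrtvc-output.csv | mail -s "FLRTVC findings" admin@example.com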
You can run flrtvc ahead of time and then download and prestage the updates. Flrtvc typically identifies efixes and ifixes that must be installed, as well as needed Java, OpenSSH, OpenSSL, and other updates.
Typically I will wait until firmware, technology levels or service packs have been out for at least
one month (preferably two) before I update to them. At that time, I will update my NIM server
and then start to migrate the updates through test, dev, QA and finally, production. Having a
good update strategy will save you a lot of downtime and will help with securing your systems.
A related control is privilege delegation: giving selected users the ability to use certain commands as root. This is very useful for level 1 support and DBAs who need privileges to perform certain tasks.
11.2.9 Backups
Everyone thinks about taking backups of data, but data is of no use if you have no OS. It is critical to take regular mksysb (OS bootable) backups. I normally take them to my NIM server
as that is where I would restore the system from. When discussing backups, you need to
make sure these bare metal mksysb backups are part of any backup and disaster recovery
plan. A mksysb should be taken at least monthly, and before and after any system
maintenance. Additionally, I always have two disks (even on the SAN) on the system reserved
for rootvg. One is active and the other is one I use to take an alt_disk_copy backup of rootvg
before I make changes. You can never have enough backups!
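As a sketch (the target path and disk name are illustrative; adjust them for your environment):
# Create a bootable OS backup, regenerating /image.data first
mksysb -i /export/mksysb/$(hostname).mksysb
# Clone rootvg to the spare disk before making changes
alt_disk_copy -d hdisk1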
11.2.11 References
The following links will be helpful as you set up your AIX security.
FLRT Home Page
FLRTVC Home Page
Apar.csv file
FLRTVC Online Tool
Fix Central (patches and updates)
FLRT LITE (Check firmware and software supported levels)
Web Applications (OpenSSH, ldap, OpenSSL, Kerberos)
AIX Linux Toolbox
Figure 11-1 The IBM Fix Level Recommendation Tool for IBM Power
Note: You can find the Fix Level Recommendation Tool for IBM Power at
https://esupport.ibm.com/customercare/flrt/power.
Protecting hardware, data, and backup systems from damage or theft is paramount. A robust
physical security framework is essential for any organization, serving as the bedrock upon
which other security measures are built. Without it, securing information, software, user
access, and networks becomes significantly more challenging.
Beyond internal systems, physical security encompasses protecting facilities and equipment
from external threats. Building structures, such as fences, gates, and doors, form the initial
defense against unauthorized access. A comprehensive approach considers both internal
and external factors to create a secure environment.
Effective physical security is essential for protecting facilities, assets, and personnel. A
comprehensive strategy involves a layered approach that combines various security
measures to deter, detect, delay, and respond to potential threats.
Deterrence
Discourage unauthorized access through visible security measures such as:
– Clear signage indicating surveillance
– Robust physical barriers like fences and gates
– High-quality security cameras
– Controlled access systems (card readers, keypads)
Detection
Identify potential threats early with:
– Motion sensors and alarms
– Advanced video analytics
– Environmental sensors (temperature, humidity)
– Real-time monitoring systems
Delay
Hinder intruders and buy time for response through:
– Limited and controlled points of entry and exit
– Sturdy doors, locks, and window reinforcements
– Access control measures (biometrics, mobile credentials)
– Security personnel or guards
Response
Swiftly address security incidents with:
– Emergency response plans and procedures
– Integration of security systems with communication tools
– Trained personnel for incident management
– Collaboration with law enforcement
Perimeter security forms the initial line of defense for any facility. Physical barriers like fences,
gates, and surveillance systems create a deterrent against unauthorized access. Strategic
landscaping and lighting can further enhance perimeter protection by improving visibility and
restricting movement.
It's essential to remember that even the most sophisticated security systems are only as
effective as the people who use them. Employees who understand their role in security can
significantly enhance a facility's protection. By equipping your staff with the knowledge and
skills to handle emergencies, you create a safer environment for everyone.
Appendix A. IBM Technology Expert Labs
IBM Technology Expert Labs teams utilize proven methodologies, practices, and patterns to help partners develop complex solutions, achieve better business outcomes, and drive client adoption of IBM software, servers, and storage.
This appendix focuses on the security offerings from IBM Technology Expert Labs. For more information on the broader Technology Expert Labs offerings, see the Technology Expert Labs website.
Engaging IBM Technology Expert Labs allows you to properly secure your IBM Power environment by having them make a thorough assessment of your setup. The purpose of this services activity is to help you assess system security on IBM Power, and it provides a comprehensive security analysis of either a single AIX, IBM i, or Linux instance or a single Red Hat OpenShift cluster.
This service is designed to help you address issues that affect IT compliance and governance
standards.
A.1.1 Assess IBM Power Security for AIX, Linux, or Red Hat OpenShift
The goal of this service is to assist the client in assessing system security on IBM Power,
providing a thorough security analysis of the AIX instance, Linux instance, or Red Hat
OpenShift cluster. This service is aimed at helping the client address issues related to IT
compliance and governance standards.
IBM will:
1. Conduct a comprehensive security analysis of the following environments:
– AIX
– Red Hat Enterprise Linux
– SUSE Linux Enterprise Server (SLES)
– Ubuntu
– Red Hat OpenShift v4 cluster
2. Evaluate the security configuration details of the AIX, Linux or Red Hat OpenShift system.
3. For Red Hat OpenShift, analyze security recommendations for master node configuration
files, API server, controller manager, scheduler, etcd, control plane configuration, worker
nodes, and kubelet configuration.
4. For AIX or Linux, review administrative privileges, logging, monitoring, vulnerability
management, malware defenses, and the limitation and control of network ports,
protocols, and services.
5. Provide guidance on security best practices based on the Center for Internet Security
(CIS) Critical Controls and CIS Benchmarks.
6. Offer detailed recommendations for potential adjustments and remediation to enhance
overall security.
The complete list of standard services offered by IBM Technology Expert Labs for IBM Power
can be found at:
https://www.ibm.com/support/pages/ibm-technology-expert-labs-power-offerings
The offerings may differ according to your geographical region. For details specific to your region, contact an IBM Technology Expert Labs seller.
The utilities range from simple to complex and complement the tools provided natively in
IBM i. Each tool has its own purchase price and is available directly from IBM Technology
Expert Labs. A quick summary of the tools is as follows:
Compliance Automation Reporting Tool (CART)
After a Security Assessment and subsequent remediation, systems must be monitored to
maintain compliance. Without monitoring, the state of the system is unknown: while your system might have been secure at one point in time, without ongoing monitoring you cannot be sure of your current status. While there are many security tools
available, most of them do not focus on IBM i. In fact, several do not even run on IBM i nor
analyze IBM i security attributes. For this reason, the IBM Technology Expert Labs
security and database teams collaborated to create a tool specifically for IBM i, taking
advantage of the unique features of our system. This tool provides built-in reports and
dashboards for monitoring security attributes that highlight where vulnerabilities or
configuration mistakes may exist.
Advanced Authentication for IBM i
The primary purpose of this tool is to provide a second factor that users must enter when
attempting to gain access to a system. In addition to the standard user password (which
should expire on a regular basis), users need to provide a unique six-digit code that changes every 30 seconds on a hardware token or software app. This is known as a time-based one-time password (TOTP) and is based on RFC 6238. This forces users to not only provide something they know (their standard password) but also something they have (the hardware token or software app). Without both items, access to the system is denied.
Syslog Reporting Manager for IBM i
The primary purpose of this tool is to provide a simple way to extract native IBM i logs and
send them to a centralized security information and event monitoring (SIEM) solution. We
do this by extracting entries from various native IBM i logging facilities, transforming them
into properly formatted syslog messages (per RFC 3164 and 5424), and sending them
over to a central collection system. In addition to the native logs, our tool can also monitor
and report on changes to IFS files.
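For reference, a transformed entry in the classic RFC 3164 format would look similar to the following sketch (the priority value 38 encodes facility auth and severity info; the host and message text are illustrative):
<38>Oct 11 22:14:15 IBMISYS QAUDJRN: audit entry CP - user profile changed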
More information about these IBM i security tools is available at:
https://ibm.biz/IBMiSecurity
Palo Alto Networks’ acquisition of the IBM QRadar Suite SaaS offerings closed on September 4, 2024. The QRadar Suite SaaS offerings are to be integrated into Cortex XSIAM; see https://www.paloaltonetworks.com/cortex/cortex-xsiam.
With a common user interface, shared insights, and connected workflows, the QRadar Suite offers integrated products for:
Endpoint security (EDR, MDR)
Endpoint detection and response (EDR) solutions are more important than ever, as
endpoints remain the most exposed and exploited part of any network. The rise of
malicious and automated cyber activity targeting endpoints leaves organizations
struggling against attackers who easily exploit zero-day vulnerabilities with a barrage of
ransomware attacks.
IBM QRadar EDR provides a more holistic EDR approach that:
• Remediates known and unknown endpoint threats in near real time with intelligent
automation
• Enables informed decision-making with attack visualization storyboards
• Automates alert management to reduce analyst fatigue and focus on threats that
matter
• Empowers staff and helps safeguard business continuity with advanced continuous
learning AI capabilities and a user-friendly interface
SIEM
As the cost of a data breach rises and cyberattacks become increasingly sophisticated,
the role of security operations center (SOC) analysts is more critical than ever. IBM
QRadar SIEM is more than a tool; it is a teammate for SOC analysts—with advanced AI,
powerful threat intelligence and access to the latest detection content.
IBM QRadar SIEM uses multiple layers of AI and automation to enhance alert enrichment,
threat prioritization and incident correlation—presenting related alerts cohesively in a
unified dashboard, reducing noise and saving time. QRadar SIEM helps maximize your
security team’s productivity by providing a unified experience across all SOC tools, with
integrated, advanced AI and automation capabilities.
SOAR
The IBM QRadar SOAR platform is built to optimize your security team’s decision-making
processes, improve your security operations center (SOC) efficiency, and ensure your
incident response processes are met with an intelligent automation and orchestration
solution.
Winner of a Red Dot User Interface Design Award, QRadar SOAR helps your
organization:
• Cut response time with dynamic playbooks, customizable and automated workflows
and recommended responses
• Streamline incident response processes by time-stamping key actions and aiding in
threat intelligence and response
• Manage incident response to over 200 international privacy and data breach
regulations with Breach Response
For more information on the QRadar Suite, see the QRadar web page.
In today’s complex threat environment, the ability to stay ahead of adversaries, design for
resilience, and create secure work environments is paramount. Trend Micro’s XDR services
are engineered to provide advanced threat defense through technologies and human
intelligence that proactively monitor, detect, investigate, and respond to attacks. The IBM
Power partnership ensures data is protected with comprehensive end-to-end security at every
layer of the stack. These integrated security features are designed to ensure compliance with
security regulatory requirements.
Trend Vision One delivers real-time insights neatly displayed on your executive dashboard.
No more manual tasks—just efficient, informed decision-making. While IBM Power frees up
client resources, allowing them to focus on strategic business outcomes, Trend Vision One
automates cyber security reporting and playbooks for more efficient and productive security
operations. Security teams can stay ahead of compliance regulations, with real-time updates
ensuring their enterprise security posture remains robust.
Other features
Antimalware
Web reputation service
Activity monitoring
Activity firewall
Application control
Behavioral analysis
Machine learning
EDR and XDR
Device control
Virtualization protection
Transporting data through APIs, however, requires a protection layer to ensure the security of the data and to restrict accessibility to known actors. MuleSoft has partnered with IBM to provide Anypoint Flex Gateway on IBM Power.
In today's digital landscape, seamless connectivity and rapid data exchange are crucial for
business success. Organizations constantly seek innovative solutions to streamline
operations, and MuleSoft Anypoint Flex Gateway provides that capability.
MuleSoft Anypoint Flex Gateway is an ultra-fast, lightweight API gateway built on Envoy technology. Designed for seamless integration with DevOps and CI/CD workflows, Anypoint Flex Gateway delivers the performance needed for demanding applications and microservices, while ensuring enterprise-grade security and manageability across any environment.
Deploying Anypoint Flex Gateway close to your IBM Power-hosted applications, APIs, and
data significantly enhances the customer experience, enforces security policies, reduces data
latency, and boosts application performance. You can deploy the gateway on Red Hat
OpenShift, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server (SLES).
By combining the strengths of MuleSoft's Anypoint Platform with the performance and
reliability of IBM Power servers, businesses can confidently embark on their digital
transformation journeys, equipped with the tools and capabilities to drive innovation, agility,
and growth.
Note: IBM and MuleSoft’s partnership announcement for Anypoint Flex Gateway on IBM Power can be found at https://www.ibm.com/blog/announcement/ibm-and-mulesoft-expand-global-relationship/ while the solution brief can be found at https://www.mulesoft.com/sites/default/files/cmm_files/MuleSoft_AnypointFlexGateway_IBM%20Power_0.pdf.
Raz-Lee Security
Raz-Lee Security specializes in providing advanced security solutions for IBM i. Their
offerings include tools for real-time threat detection, audit and compliance management, and
vulnerability assessment. Raz-Lee's iSecurity suite is highly regarded for its powerful and
customizable security modules, which help organizations proactively manage and mitigate
security risks. Their customer base spans various sectors such as banking, insurance,
manufacturing, and government, reflecting their ability to address diverse security challenges
across different industries.
Precisely
Precisely provides a range of IBM i solutions aimed at ensuring data integrity, availability,
security, and compliance. Their IBM i security solutions include tools for access control,
monitoring, privacy, and malware defense. Precisely is known for its robust, scalable solutions
that can integrate seamlessly into existing IT infrastructures. These solutions deliver
market-leading IBM i security capabilities that help organizations successfully comply with
cybersecurity regulations and reduce security vulnerabilities. In addition, the security
offerings seamlessly integrate with Precisely’s IBM i HA solutions to deliver an even greater
level of business resilience. Precisely’s customers range from large enterprises to SMBs in sectors like telecommunications, financial services, and logistics.
Precisely also offers a free assessment tool for IBM i. Assure Security Risk Assessment
checks over a dozen categories of security values, compares them to recommended best
practices, reports on findings, and makes recommendations. You can find this security risk
assessment at: https://www.precisely.com/product/precisely-assure/assure-security.
Fresche Solutions
Fresche Solutions offers a comprehensive IBM i Security Suite designed to protect IBM i
systems from modern security threats. Their solutions include tools for real-time monitoring,
vulnerability assessment, and compliance management. Fresche’s security suite is noted for
its innovative approach to security management, combining ease of deployment with powerful
analytical capabilities. Their customer base includes businesses of all sizes, from SMBs to
large enterprises, in industries such as retail, manufacturing, and services, demonstrating
their versatile and scalable security offerings.
These companies, among others, play a vital role in the IBM i ecosystem by continuously
innovating and providing security solutions tailored to the unique needs of IBM i users. Their
diverse customer bases and strong industry reputations underscore their effectiveness in
delivering reliable, high-quality security solutions.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.
Security Implementation with Red Hat OpenShift on IBM Power Systems, REDP-5690
Implementing, Tuning, and Optimizing Workloads with Red Hat OpenShift on IBM Power,
SG24-8537
IBM Storage DS8000 Safeguarded Copy: Updated for DS8000 Release 9.3.2,
REDP-5506
Data Resiliency Designs: A Deep Dive into IBM Storage Safeguarded Snapshots, REDP-5737
IBM Power Systems Cloud Security Guide: Protect IT Infrastructure In All Layers, REDP-5659
Introduction to IBM PowerVM, SG24-8535
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
Cloud Management Console Cloud Connector Security White Paper
https://www.ibm.com/downloads/cas/OGGYD90Y
IBM AIX Documentation on Security
https://www.ibm.com/docs/en/aix/7.3?topic=security
Red Hat OpenShift Documentation on Configuring your Firewall
https://docs.openshift.com/container-platform/4.12/installing/install_config/configuring-firewall.html
Modernizing Business for Hybrid Cloud on OpenShift Video Series
https://community.ibm.com/community/user/power/blogs/jenna-murillo/2024/01/29/modernizing-business-for-hybrid-cloud-on-openshift