ICS Handbook
for
SELF-ASSESSING SECURITY VULNERABILITIES & RISKS
of
INDUSTRIAL CONTROL SYSTEMS
on
DOD INSTALLATIONS
19 December 2012
This handbook is a result of a collaborative effort between
the “Joint Threat Assessment and Negation for Installation
Infrastructure Control Systems” (JTANIICS) Quick Reaction
Test (QRT) and the Joint Test and Evaluation (JT&E)
Program under the Director, Operational Test and
Evaluation, Office of the Secretary of Defense. The JT&E
Program seeks nominations from Services, combatant
commands, and national agencies for projects that
develop test products to resolve joint operational
problems. The objective of the JT&E Program is to find
ways for warfighters to improve mission performance with
current equipment, organizations, and doctrine.
Contents
EXECUTIVE SUMMARY ........................................................................................................................................ 1
INDUSTRIAL CONTROL SYSTEMS “101”............................................................................................................... 5
HANDBOOK AUTHORITIES................................................................................................................................... 8
DISTINCTIONS BETWEEN ICS AND IT ................................................................................................................... 8
THREATS ............................................................................................................................................................ 10
MISSION PRIORITIES .......................................................................................................................................... 11
MISSION IMPACT............................................................................................................................................... 15
THE MOST SECURE ICS ...................................................................................................................................... 16
RISK ASSESSMENT & MANAGEMENT ................................................................................................................ 19
FRAMEWORK FOR SUCCESSFUL ICS DEFENSE................................................................................................... 19
ICS SECURITY ASSESSMENT PROCESS ............................................................................................................... 21
SOFTWARE TOOLS ............................................................................................................................................. 25
ADDITIONAL RESOURCES .................................................................................................................................. 26
ICS SECURITY ACTIONS ...................................................................................................................................... 26
RECOMMENDED ICS DEFENSE ACTIONS ........................................................................................................... 27
POLICY ........................................................................................................................................................... 27
LEADERSHIP ................................................................................................................................................... 28
PERSONNEL ................................................................................................................................................... 29
TRAINING....................................................................................................................................................... 30
ORGANIZATION ............................................................................................................................................. 31
FACILITIES ...................................................................................................................................................... 32
MATERIEL ...................................................................................................................................................... 32
CYBER SECURITY ............................................................................................................................................ 34
APPENDIX A REFERENCES .............................................................................................................................. 37
APPENDIX B WEB LINKS ................................................................................................................................. 42
APPENDIX C ACRONYMS................................................................................................................................ 44
APPENDIX D GLOSSARY ................................................................................................................................. 48
APPENDIX E CE BRIEFING GRAPHICS ............................................................................................................. 55
APPENDIX F RISK ASSESSMENT & MANAGEMENT MODELS ......................................................................... 56
APPENDIX G CSET ........................................................................................................................................... 60
APPENDIX H DCIP........................................................................................................................................... 62
APPENDIX I UNIVERSAL JOINT TASKS ............................................................................................................ 63
APPENDIX J ICS TRAINING OPPORTUNITIES .................................................................................................. 65
APPENDIX K ICS SECURITY ORGANIZATIONS ................................................................................................. 69
ATTACHMENT 1 MAPPING INTERDEPENDENCIES & ASSESSING RISK ........................................................... 71
ATTACHMENT 2 CHECKLIST OF RECOMMENDED ACTIONS .......................................................................... 84
ATTACHMENT 3 COMMITTEE ON NATIONAL SECURITY SYSTEMS INSTRUCTION 1253 ICS OVERLAY
VERSION 1 ....................................................................................................................................................... 105
ATTACHMENT 4 CSET 5.1 INSTALLATION ICS ENCLAVE EXAMPLE .............................................................. 200
Figures
1. ICS Security Assessment Eight-Step Process p. 3
2. PLCs & RTUs: The Challenge of Finding the Connectivity p. 6
3. Mapping Mission Assurance to ICS p. 12
4. The ICS Security Team p. 19
5. It Only Takes a Minute p. 34
Industrial Control Systems
Vulnerability & Risk Self-Assessment Aid
EXECUTIVE SUMMARY
Key Points
• The primary goal is mission assurance.
• The primary focus is on risk management.
• The primary audience is the installation commander, with his or her staff as close
secondary.
• The primary intent is to facilitate self-assessment of Industrial Control Systems (ICS)
security posture vis-à-vis missions’ priorities.
• The primary approach is generic, enabling broad (Joint/all Services) utility.
One of the essential responsibilities of the installation commander and supporting staff is to
manage risks to establish optimal conditions for assuring successful accomplishment of
assigned missions every day. Although not always obvious, many missions depend on the
unfailing functioning of ICS and therefore on the security of those systems.
A mission assured today must never be taken for granted as assured tomorrow. Mission assurance
demands constant vigilance along with proactive risk management. Risks come in myriad
shapes and sizes—some enduring, some sporadic and situational, others appearing without
warning. ICS represent only one set among a vast array of mission vulnerabilities and risks, an
array that often competes for resources and, therefore, requires prioritization of management
actions.
This handbook is intended for use primarily by Department of Defense (DOD) installation
commanders, supported by staff members, as a management tool to self-assess,1 prioritize,
and manage mission-related vulnerabilities and risks that may be exposed or created by
connectivity to ICS. ICS include a variety of systems or mechanisms used to monitor and/or
operate critical infrastructure elements, such as electricity, water, natural gas, fuels, entry and
access (doors, buildings, gates), heating & air-conditioning, runway lighting, etc. Other terms
1 Other entities and programs are available to conduct formal and very thorough technical assessments, but those must be coordinated, scheduled, and resourced (i.e., funded). This aid provides the ability to conduct self-assessments when and as necessary or desired, and thereby the ability to prioritize and manage the resources required to address identified vulnerabilities and risks.
often heard include SCADA, DCS, or EMCS.2 Throughout this book the term “ICS” is used to encompass such variations.
This book is intentionally generic. Whatever the category of ICS, the approach to vulnerability
assessment and risk management is similar. The applicability of actions recommended here
may be extended to any DOD military installation regardless of the specific categories of ICS
encountered. In keeping with the generic approach and due primarily to the unique nature of
each installation’s infrastructure, beyond a couple of exceptions there are no checklists,
standard operating procedures (SOP), or similar sets of lock-step actions provided here.
However, a risk management team using the handbook likely will want to develop checklists
tailored to their specific circumstances.
Among other purposes, this handbook is intended to increase awareness of how a threat
related to the ICS itself translates into a threat to the mission, either directly through the ICS or
circuitously via network connections. Every military installation has numerous mission-support
processes and systems controlled by, or that otherwise depend on, ICS. Every connection or
access point represents potential vulnerabilities and, therefore, risks to the system under
control (i.e., electrical, water, emergency services, etc.), which can escalate quickly to adverse
impact on mission essential functions (MEF) and mission accomplishment.
Fundamentally then, this handbook is provided to help the installation leadership conduct a risk
self-assessment focused on ICS and supported missions and then implement plans to manage
that risk. Most of the information contained herein is not unique to this publication. Two
unique aspects are: (1) the aggregation of disparate information into one place, distilling
essentials, and tailoring to DOD installation leadership; and (2) bringing cyber/information
technology (IT), civil engineers, public works, and mission operators together with a singular
focus on ICS security in support of missions. This handbook (via Appendices) also points to
additional resources.
The key set of activities—one exception to the “no checklists” approach—is found under the heading “ICS Security Assessment Process.” Succinctly, the process consists of eight steps which, if implemented with deliberation and in a team environment, will set the success conditions for all other actions recommended or suggested within this handbook (see Figure 1).
This set of eight steps represents the core of the handbook. All other information herein is
intended to support implementation of those eight steps.
2 SCADA = Supervisory Control and Data Acquisition; DCS = Distributed Control System; EMCS = Energy Management Control System. Other variations exist; for example, building control systems.
Before explaining the eight-step assessment process, the handbook provides introductory and supporting information. Closely aligned with, and serving as companion to, the “Assessment Process” is a section titled “Framework for Successful ICS Defense.” If the installation does not already have a single ICS manager and/or team, the “Framework” should be considered before engaging in the eight-step process.
Figure 1. ICS Security Assessment Eight-Step Process
INDUSTRIAL CONTROL SYSTEMS “101”
Key Point
• Understanding ICS is not difficult; the challenge is to understand the ICS relationship to
missions.
Fundamentally Industrial Control Systems (ICS) are systems and mechanisms that control flow.
ICS control flow of electricity, fluids, gases, air, traffic, and even people. They are the
computer-controlled electro-mechanical systems that ensure installation infrastructure services
are delivered when and where required to accomplish the mission. In the electric
infrastructure, they control actions such as opening and closing switches; for water, they open
and close valves; for buildings, they control access to various doors as well as operation of the
heating, ventilation, and air conditioning (HVAC) system.
The term “Industrial Control System” is broad; specific instances of ICS may be called Supervisory Control and Data Acquisition (SCADA), Distributed Control System (DCS), Energy Management Control System (EMCS), Energy Management System (EMS), or other terms, but all perform the same fundamental function. Also, on DOD installations, ICS are associated
primarily with infrastructure elements; therefore, though not technically accurate, they may be
referred to as “Infrastructure” vice “Industrial” control systems. The hardware components
that comprise an ICS usually are classed as operational technology (OT) versus information
technology (IT), which refers to (among other things) the computer equipment that sits on
nearly every desk. Another term used in this domain is Platform Information Technology (PIT),3
(and PITI, with the ‘I’ referring to interconnect, or connected to the network) although ICS are
only one sub-category of PIT, which also includes weapons systems, aircraft, vehicles, buildings,
etc. Terminology is not that critical. What is important is to know that ICS are critical to the
mission.
You frequently have used an ICS—though not by that term—in your home. It is called a
“thermostat.” The simplest thermostats may not be so obvious as an ICS, but the more
sophisticated can be programmed to automatically control the flow of air (heated, cooled, or
just fan) by day, time, room, zone, etc. The most advanced allow the owner to monitor and
operate the system over an Internet connection or Wi-Fi, using a Smartphone or tablet.
The “Smart Grid,” once fully implemented, will allow your utility company to operate your
thermostat remotely. The thermostat monitors temperatures (and some include humidity) and
3 “PIT” is used more by the Air Force and to a lesser extent by the Navy. At the DOD level, PIT is addressed mostly under information assurance (IA) guidance, such as DODI 8500.2. See the Glossary for a DOD definition of PIT.
then operates the electro-mechanical equipment (furnace, air conditioner, fan) to respond to
the preset conditions you have selected. If the thermostat fails—even though the mechanicals
are in perfect operating condition—the mission (cool, heat) fails. This same concept translates
to the installation’s missions: if the ICS fails the mission can fail, although the direct cause and
effect may not always be so obvious.
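The thermostat behavior just described amounts to a simple monitor-compare-actuate loop. The following sketch is illustrative only; the function name, temperatures, and deadband are hypothetical and not drawn from any fielded device:

```python
# Minimal sketch of a thermostat control loop: monitor the temperature,
# compare it against the preset condition, and command the
# electro-mechanical equipment accordingly. All values are illustrative.

def thermostat_step(current_temp, setpoint, deadband=1.0):
    """Return the action for one monitoring cycle."""
    if current_temp < setpoint - deadband:
        return "heat"   # run the furnace
    if current_temp > setpoint + deadband:
        return "cool"   # run the air conditioner
    return "idle"       # within the comfort band; no action needed

# The room is colder than the preset condition, so the controller
# commands the furnace.
print(thermostat_step(current_temp=18.0, setpoint=21.0))  # heat
```

If this control logic fails, the mechanical equipment sits idle no matter how healthy it is, which is exactly the ICS-to-mission dependency described above.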
ICS typically are not visible to the general population. The control devices themselves are
behind panels, behind walls, inside cabinets, under floors, under roads; the master control
computers more often reside in a room in a civil engineer (CE) or public works (PW) facility.
Because they are essentially invisible to all but CE and PW, and are considered as simply
infrastructure elements, ICS often are overlooked when assessing mission dependencies.
Regardless of where they physically reside or who directly operates the ICS, every person and every mission on the installation is a stakeholder in their proper functioning.
While the ICS field (vs. control room) elements consist of mostly electro-mechanical devices,
some are actually computers that control other field devices and communicate with other
computers in the system with minimal human interaction. The most common example of this
type is the Programmable Logic Controller (PLC).4 PLCs (and their cousins, Remote Terminal
Units or RTU) are very important because they are computers, typically not under direct human
supervision, and offer multiple pathways (e.g., wireless, modem, Ethernet, Universal Serial Bus
[USB]) for connecting to both the controlled infrastructure and the network. This combination
of characteristics makes the PLC an especially vulnerable node in the ICS.
At the front end—the control center—is where most of the computers (servers, system
interfaces, etc.) and, more critically, connection to other networks reside. While PLCs/RTUs
may become connected (autonomously or by human intervention) for intermittent periods,
control center computers may be continuously connected5 to the Non-secure Internet Protocol
Router Network (NIPRNet), other elements of the Global Information Grid (GIG), and/or an
Internet Service Provider (ISP). It is especially at this node that ICS should be treated with the
same security considerations as with IT.
Whether an ICS element is continuously or only intermittently connected presents the same
fundamental security issue. Anytime an element is connected to a network, even if for only
4 Many will recall that a Siemens PLC was the primary target for the Stuxnet code that impacted the centrifuges in the Iranian nuclear processing facility at Natanz in 2010. This same PLC (or variants) can be found in the critical infrastructures on numerous DOD installations. See an informative Wikipedia article on Stuxnet here: http://en.wikipedia.org/wiki/Stuxnet
5 Another term used here is PIT-I, or PIT-Interconnect—that juncture where the ICS (or OT) connects with the IT network. For detail on PIT and PIT-I, in addition to DODI 8500.2, see: DODD 8500.01E; AFI 33-210 with AFGM2.2; AFCESA ETL 11-1; DON CIO Memo 02-10 and Enclosures.
brief instances, it is vulnerable to destructive attack, compromise, or manipulation. Therefore,
when assessing risk the key question is not “Is it connected?” but “Is it connect-able?”
Network mapping (via software) will not reveal such potential connectivity; only a physical,
visual inspection of an element by a knowledgeable expert (more likely an IT than a CE or PW
person) will yield specific information on what type of connection ports exist on what elements.
Where the location of an element makes visual inspection impractical or impossible, use the
manufacturer’s or vendor’s published manuals for that specific piece of equipment.
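The “Is it connect-able?” question lends itself to a simple inventory kept during those physical inspections. The sketch below is hypothetical: the element names, port types, and field names are illustrative, and the data would come from the inspections or vendor manuals just described:

```python
# Illustrative sketch: record the physical connection ports found on each
# ICS element during inspection, then flag every element that is
# connect-able, whether or not it is currently connected.

ics_elements = [
    {"name": "Pump house PLC", "ports": ["ethernet", "usb", "modem"], "connected": False},
    {"name": "Control center server", "ports": ["ethernet"], "connected": True},
    {"name": "Relay cabinet RTU", "ports": ["wireless"], "connected": False},
]

def connectable(element):
    # Any port at all means the element can be connected, and therefore
    # must be assessed, even if it is isolated today.
    return len(element["ports"]) > 0

for e in ics_elements:
    if connectable(e):
        status = "connected" if e["connected"] else "not currently connected"
        print(f'{e["name"]}: connect-able via {", ".join(e["ports"])} ({status})')
```

Note that every element in this hypothetical inventory is flagged, including those not currently connected, which is the point of the assessment question.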
Understanding the fundamentals of ICS is not difficult. The challenge is in understanding the
dependencies of mission on ICS and therefore, appropriately managing the risks to the ICS and
to the missions they support. This handbook is intended to assist the commander and staff in
gaining that understanding.
HANDBOOK AUTHORITIES
Key Point:
• The handbook reflects breadth and depth of ICS community expertise.
This handbook was developed based on a broad collection of authoritative sources;6 underwent
field testing to validate the framework and applicability at the installation command level; was
reviewed by a Joint Warfighter Advisory Group (JWAG7) to verify broad (i.e., Joint) applicability;
and received direct input by a broad-based selection of ICS and risk management subject
matter experts (SME). Users of this handbook will gain even greater value by referencing
current publications of primary sources. Some of the major publications are listed in Appendix
A, References. Note that while this handbook is advisory, many of the sources are authoritative
and/or directive.
DISTINCTIONS BETWEEN ICS AND IT
Key Point:
• ICS and IT share similarities, but also have unique characteristics.
Fundamentally ICS security is a combination of IA, cyber security, physical security, and
operations security (OPSEC). This same combination is applicable to IT, so is there any
difference between ICS and IT? Yes. One key distinction between ICS and other IT
architectures is that the physical world can be impacted disastrously by malicious (or only
accidental) manipulation of the ICS. For example, with IT there typically is linkage with only
other IT components; with ICS the linkage can be to the electric grid, powering other critical
assets, as well as to other infrastructure elements. This gives rise to another primary
distinction, namely that ICS must always be available while “pure” IT can survive downtimes.
Another important difference is in the “refresh” rates of the technologies: IT tends to turn over
in three years or less while OT (ICS) can be on a 20-year cycle. Why is this important to know?
6 Among sources: Idaho National Laboratory (INL); AF Civil Engineer Support Agency (AFCESA); Department of Homeland Security (DHS) Industrial Control Systems Computer Emergency Response Team (ICS-CERT); the National SCADA Test Bed (NSTB); DHS Center for the Protection of National Infrastructure (CPNI); National Institute of Standards and Technology (NIST); National Security Agency’s (NSA) Committee on National Security Systems (CNSS); Federal Information Processing Standards (FIPS); and numerous SMEs who are members of or closely associated with the DOD.
7 JWAG participants may differ from meeting to meeting, but broadly represent stakeholders in the outcome or product of a specified Joint activity or project. For this handbook the initial JWAG included representatives from United States Cyber Command (USCYBERCOM), Northern Command (NORTHCOM), AFCESA, INL, Sandia National Laboratory (SNL), and various CE and communications experts from both Army and Air Force elements of Joint Base San Antonio.
The long refresh cycle of ICS results in hardware, software, and operating systems no longer
supported by vendors. The impacts of lack of support include: woefully stale malware
detection programs, operating systems that cannot handle newer (and more efficient/effective)
software programs, and hardware that may be on the verge of catastrophic failure with no
backup or failover equipment available.
The following extract from NIST Special Publication 800-82 provides an excellent review of not
only the distinctions but also the similarities and how OT (such as ICS) and IT are converging.
“Although some characteristics are similar, ICS also have characteristics that
differ from traditional information processing systems. Many of these
differences stem from the fact that logic executing in ICS has a direct effect on
the physical world. Some of these characteristics include significant risk to the
health and safety of human lives and serious damage to the environment, as well
as serious financial issues such as production losses, negative impact to a
nation’s economy, and compromise of proprietary information. ICS have unique
performance and reliability requirements and often use operating systems and
applications that may be considered unconventional to typical IT personnel.
Furthermore, the goals of safety and efficiency sometimes conflict with security
in the design and operation of control systems.
8 NIST SP 800-82, 2011 version, Executive Summary, p. 1.
Originally, ICS implementations were susceptible primarily to local threats
because many of their components were in physically secured areas and the
components were not connected to IT networks or systems. However, the trend
toward integrating ICS systems with IT networks provides significantly less
isolation for ICS from the outside world than predecessor systems, creating a
greater need to secure these systems from remote, external threats. Also, the
increasing use of wireless networking places ICS implementations at greater risk
from adversaries who are in relatively close physical proximity but do not have
direct physical access to the equipment. Threats to control systems can come
from numerous sources, including hostile governments, terrorist groups,
disgruntled employees, malicious intruders, complexities, accidents, natural
disasters as well as malicious or accidental actions by insiders. ICS security
objectives typically follow the priority of availability, integrity and confidentiality,
in that order.”
Distinctions between ICS and IT aside, from a purely technical security standpoint, ICS may be
considered on par with IT or IA vis-à-vis security challenges, albeit with warnings about use of
certain software tools on the networks.9
THREATS
Key Point:
• Threats are global but assessments must be local.
What threats could be posed to an installation’s mission by or through the ICS? This is an
essential question, but one that cannot be answered specifically in an unclassified venue or
simplistically in any venue. Generically, threats fall into categories similar to IT and/or cyber:
terrorist, criminal, insider, environmental, etc. Use of this self-assessment handbook can lead
to a deeper understanding of the infrastructure and establish the conditions whereby specific threats may be identified and mitigated. But even without specific threats known, many risks can be
9 Caveat emptor with respect to software tools. Software applications that test, penetrate, scan, characterize, and/or defend networks should be considered equivalent to “loaded weapons” with respect to control systems. Some tools that are entirely “safe” when used on IT networks have been demonstrated, both in the field and under controlled conditions, to have negative, even catastrophic, effects on ICS networks. If such tools are considered for use on ICS, the decision must be an informed one and the tool operator must be an SME who understands the potential effects of that tool on an ICS. Furthermore, any such use must be coordinated with the relevant IT agency (e.g., Service CERT) because tool use on the connected ICS could trip various IT network defense mechanisms (firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), etc.).
identified and managed. In other words, this handbook can help establish a more effective and efficient security posture from which to conduct formal threat assessments.
The Security Incidents Organization in a 2009 survey (not specific to DOD) assessed that roughly
75% of ICS incidents were unintentional. Of the 25% that were intentional, over half were by
insiders. In other words, external threat actors were responsible for events only about 10% of
the time. Based on percentages alone, the hostile threat actor would appear to be of far less
concern than a mistake committed by a legitimate operator. However, the external threat
actor can have a far more malicious and far-reaching impact on mission than either the intentional insider or the unintentional event. Among external threats, perhaps the most insidious is the so-called Advanced Persistent Threat, or APT. The National Institute of Standards and Technology (NIST), among others, assesses that external threat actors have found ways not only to get “inside” but also to stay there as long as they want or need. The APT,10 especially
nation-state sponsored, is perhaps the most ominous threat to DOD networks. Open source
information on threats is plentiful and readily available, but ICS security teams will need access
to classified intelligence resources to obtain more “actionable” information.
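The survey figures above reduce to simple arithmetic. The sketch below reproduces the approximate breakdown; the insider share of intentional incidents (“over half”) is assumed to be 60% purely for illustration:

```python
# Approximate breakdown of ICS incidents per the 2009 Security Incidents
# Organization survey cited above. The insider share of intentional
# incidents ("over half") is an assumed 60% for illustration only.

unintentional = 0.75
intentional = 1.0 - unintentional           # 0.25
insider_share = 0.60                        # assumption: "over half"
external = intentional * (1.0 - insider_share)

print(f"Unintentional:          {unintentional:.0%}")
print(f"Intentional, insider:   {intentional * insider_share:.0%}")
print(f"Intentional, external:  {external:.0%}")  # roughly 10%
```

The arithmetic explains the roughly 10% figure for external actors, while the surrounding text explains why that small share still dominates the risk picture.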
“The increasing interconnectivity and interdependence among commercial and defense infrastructures
demand that DOD take steps to understand and remedy or mitigate the vulnerabilities of, and threats to,
the critical infrastructures on which it depends for mission accomplishment.”
MISSION PRIORITIES
Key Points:
• Missions are interconnected and mutually dependent in complex ways.
• Priorities tend to be situational and event-driven.
The US Navy, articulating what is essentially true for all the Services and including ICS as part of
their cyber infrastructure, has stated:11
“The Department of the Navy (DON) relies on a network of physical and
cyber infrastructure so critical that its degradation, exploitation, or
destruction could have a debilitating effect on the DON’s ability to project,
support, and sustain its forces and operations worldwide. This critical
infrastructure includes DON and non-DON domestic and foreign
10 NIST addressed the APT in Revision 4 to SP 800-53.
11 Critical Infrastructure Protection Program, Strategy for 2009 and Beyond, 2009.
infrastructures essential to planning, mobilizing, deploying, executing, and
sustaining U.S. military operations on a global basis. Mission Assurance is a
process to ensure that assigned tasks or duties can be performed in
accordance with the intended purpose or plan. It is made more difficult
due to increased interconnectivity and interdependency of systems and
networks. DON critical infrastructures, both physical and cyber, even if
degraded, must be available to meet the requirements of multiple,
dynamic, and divergent missions.”
“Major ship systems may be impacted by SCADA network attacks ashore and
afloat. This may impact a ship’s ability to start or stop engines remotely
disabling portions of the propulsion system and other engineering systems.”
Which ICS receive greater focus for security efforts will depend in most cases on what missions
they support. The 262d Network Warfare Squadron (262 NWS) defines this as the “criticality”
of the system component: lesser systems may receive little to no focus, while highly critical and centralized systems are recommended for significant hardening and protection.
An example might be where it is impossible to protect every component on a network; focus
would be on critical servers, in essence accepting the risk of an individual personal computer
compromise so long as it can be isolated and secure operation of the critical server maintained.
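The criticality-driven triage described above can be sketched as a simple rank-and-cut. The component names, scores, and threshold below are hypothetical and illustrative only, not drawn from the 262 NWS methodology:

```python
# Hypothetical sketch of criticality-based triage: rank components by
# mission criticality and direct hardening resources to the top of the
# list. Components, scores, and the threshold are illustrative.

components = [
    ("SCADA master server", 10),
    ("Historian database", 8),
    ("Operator workstation", 5),
    ("Office PC", 2),
]

def prioritize(components, harden_threshold=7):
    ranked = sorted(components, key=lambda c: c[1], reverse=True)
    harden = [name for name, score in ranked if score >= harden_threshold]
    accept = [name for name, score in ranked if score < harden_threshold]
    return harden, accept

harden, accept_risk = prioritize(components)
print("Harden and protect significantly:", harden)
print("Monitor; accept residual risk:", accept_risk)
```

The design choice mirrors the text: when every component cannot be protected, effort concentrates on critical servers while the compromise of a lesser node is an accepted, isolatable risk.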
But even on a given installation mission priorities—and the importance of the supporting
control systems—can change quickly and without advance notice. Consider the following
hypothetical scenario highlighting such a rapid change.
and fire department dispatch, etc.). As a result of the incident, one of the
ICS was damaged. The ICS maintainer is a commercial contractor whose
facilities are not on the base. The contractor has backups of the ICS
operating system, programs and data, but all are at the contractor facility
off base. Because of the elevated FPCON, the contractor cannot enter the
base. CE can provide some limited manual operation of the system, but is
neither capable nor prepared to operate at even 50% efficiency and
effectiveness.
Is the installation prepared for departures from the norm where ICS are part of the equation?
Do the various existing installation plans (incident response, disaster recovery, installation
emergency management, etc.) encompass ICS contingencies and emergencies? If so, have such
contingencies been exercised (not simply “white-carded” during an exercise)? These are some
of the considerations that will prove foundational to identifying ICS security priorities relative to
missions and changing mission priorities.
Figure 3. Mapping Mission Assurance to ICS
MISSION IMPACT
Key Point:
• If an element of the ICS and/or controlled infrastructure is compromised, critical mission
functions may be degraded or may fail entirely.
For any installation commander, mission assurance is of utmost concern. Anything that may
impact the mission rises to the top of the priority list. ICS and the controlled critical
infrastructure are deemed to be mission enablers; damage to or compromise of ICS can
degrade, compromise, or even deny the mission. With mission assurance foremost in mind,
this handbook provides the installation commander with a generalized approach to eliminate,
minimize, or otherwise mitigate risks to the mission as posed by ICS vulnerabilities.
It is important to note that this handbook does not attempt to achieve a level of specificity that
addresses vulnerabilities of specific products from specific vendors in specific applications. Nor
does it capture the range of threat actors who may be seeking to exploit those vulnerabilities.
Such a level of specificity must be addressed on a case-by-case basis under the collaborative
efforts of the installation commander, CEs or PWs, communications element, and mission
operations representatives, and, in some cases, external experts. Specifically for the threat
piece of the equation, intelligence and/or law enforcement entities also must be consulted.
The probability of a threat actor finding and traversing all such interconnections to create
negative effects on the mission may not be high at a given moment, but threat actors
12
“Connection” includes wired network, wireless, radio, modem, USB port, Ethernet port….anything that enables
one element to connect to another.
continuously develop more advanced skills.13 Though current probability of a successful attack
may not be high, the advanced skill sets available to malicious actors combined with more
freely available advanced exploitation tools, many of which are created with ICS attack
components or even specifically for ICS, make this a serious threat. No system or sub-system
can be overlooked or assumed secure simply because it appears isolated. It is also important to
note that a vulnerability and risk assessment should consider not just primary effects on a
system, but potential second—and even third—order effects. Succinctly stated, an ICS
vulnerability and risk assessment should be supported by a thorough mission effects
assessment.
Prioritization of defense measures and resource allocation requires more than a one-to-one
matching with missions; it needs to be approached comprehensively. There is added
complexity created by Joint Base administration, where one Service’s primary mission likely is
not the same as that of the other Services. Take for example a fuel delivery control system
on a Joint Base hosting both Air Force and Army missions. For efficiency and cost savings, the
fuels delivery systems and automated control may be consolidated. Hypothetically, for this
installation the Air Force’s primary mission may be to launch sorties providing defense of the
North American airspace while the Army’s may be to train vehicle maintenance and repair. If
the Army is lead agent for the Joint Base, do they consider the fuels system as the top priority
ICS to protect? How will this be decided where three or all Services are included in a Joint Base
structure? Such questions underscore the imperative for a comprehensive approach.
“The purpose of the Air Force’s Critical Infrastructure Program (CIP) is to ensure Air
Force’s ability to execute missions and capabilities that are essential to planning,
mobilizing, deploying, executing, and sustaining military operations on a global basis.”
Air Force Energy Plan 2010 (p. 19)
Key Points:
• No ICS is 100% secure 100% of the time.
• Misconceptions → undetected or neglected vulnerabilities → unmanaged risk.
13
For example, the ICS-CERT in Alert 12-046-01, February 2012, stated: “ICS-CERT is monitoring and responding to
an increase in a combination of threat elements that increase the risk of control system attacks. These elements
include Internet accessible configurations, vulnerability and exploit tool releases for ICS devices, and increased
interest and activity by hacktivist groups and others.”
An absolutely 100% vulnerability-free, risk-free ICS does not exist and likely will not. To be
nearly invulnerable,14 an ICS must not be connected to anything other than its own
infrastructure elements. There also must be no potential method for external connections:
USB ports, Ethernet ports, wireless access points, satellite radio, modems, etc. Additionally,
there would have to be unassailable physical controls. However, vendors often need real-time
access to the infrastructure, and operators cannot be in all places all the time, which typically is
mitigated by remote access capability. Also, the ICS manufacturing industry favors connectivity
especially for vendor maintenance. Further complicating the security task is DOD’s “green”
mandate to convert the electric infrastructure to the “Smart Grid,” which depends on wireless
connectivity.15
The following “Top ICS Security Misconceptions” were presented in “318 OSS/IN SCADA Threat
Assessment Report”.16 Note that this list reflects thinking at a point in time, and only the top
five misconceptions are presented; there are others of relevance, and all will most certainly change
over time. The point is simply that there is widespread misunderstanding about ICS security
and that such misunderstanding can result in a less-than-secure system. For brevity, the
“misconceptions” have been edited but retain the essential message of each as presented in
the original report:
14
Complete invulnerability is unachievable, especially where the human element is necessary—thus the insider
threat always remains a possibility.
15
For example, “demand-response” management by the EMCS connecting to any load (fuels, lighting, HVAC,
whatever, wherever, whenever) depends on dedicated Internet connectivity. Military installations are
implementing smart grid technology as microgrids. On the other hand, and more optimistically, there are a
number of initiatives to enhance security for new (though not legacy) systems such as the Advanced Metering
Infrastructure (AMI).
16
Classified report published June 2012. Portions reproduced here are marked unclassified in the source report.
(network) components. While many engineers receive IT training, it is often not as extensive as
that of an IT professional, and typically centers on operational rather than security aspects.
Misconceptions abound; therefore security may never be assumed or taken for granted. Any
given installation’s “most secure” ICS is fundamentally a function of continuous risk assessment
and management relative to given missions and situationally-dependent mission priorities.
17
“Air Gap” refers to having no electronic connection, requiring data to be moved “by hand” from one system to
another via media such as USB drives, CDs, etc.
Continuous awareness is key to recognizing vulnerabilities early and committing necessary
resources to manage potential risks. Proactivity is fundamental.
Key Point:
• Risk management is a continuous process.
Risk is a function of the interaction among threat,18 vulnerability, and consequence (or mission
impact). Risk management involves a process of understanding each element of the equation,
how those elements interact, and how to respond to the assessed risk. Every installation will
face an ever-changing threat-vulnerability-consequence equation. SMEs within DOD,
Department of Energy (DOE), and industry agree that even the most secure network has, or will
have, inherent vulnerabilities. Therefore risk management is essential and must be a
continuous process rather than an event that takes place annually, quarterly, or even monthly.
Risk management is not only continuous but is situational based on the relative uniqueness of
each ICS infrastructure. Appendix F provides examples of risk assessment models.
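The threat-vulnerability-consequence interaction can be illustrated with a minimal scoring sketch. The 1-5 scales, the multiplicative form, and the asset names below are illustrative assumptions only, not an official model; the risk assessment models in Appendix F remain the authoritative references:

```python
# Illustrative risk-scoring sketch: risk as an interaction of threat,
# vulnerability, and consequence (mission impact), each rated 1-5.
# Scales, weights, and asset names are hypothetical, not doctrine.

def risk_score(threat: int, vulnerability: int, consequence: int) -> int:
    """Multiplicative interaction: risk vanishes if any factor is absent."""
    return threat * vulnerability * consequence

# Hypothetical installation ICS, to be re-scored as conditions change
# (risk management is continuous, not an annual event).
assets = {
    "fuels delivery EMCS": (4, 3, 5),
    "perimeter gate control": (3, 4, 3),
    "HVAC for admin building": (2, 2, 1),
}

# Rank assets by current risk to focus defensive resources.
ranked = sorted(assets.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, factors in ranked:
    print(f"{risk_score(*factors):3d}  {name}")
```

Because the factors multiply, a change in any one of them (say, a new exploit raising the threat rating) immediately reorders the priorities — one way to see why the assessment must be continuous rather than periodic.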
Key Point:
• ICS defense is a team effort.
While there is no DOD, Joint, or Service policy or directive specific to creating a security
program for installation ICS, numerous publications do provide some guidance and address
elements of ICS security. The following “best practice” framework is derived from such
guidance.
1. Appoint a full-time ICS Information Assurance Manager (IAM) specifically for installation
control systems (i.e., distinct from an IT IAM).19 As an on-going coordinator of a team
formed in the next step, the ICS IAM20 will be responsible specifically for ICS and should
function directly under the authority of the installation commander.
18
“Threat” is further deconstructed into capability + intent + opportunity.
19
Engineering Technical Letter (ETL) 11-1, released Mar 2011 by [then] HQ AFCESA/CEO, requires USAF CEs at base
level to appoint both primary and alternate IAMs with a focus on certification & accreditation (C&A) of all CE-
managed ICS. Note that AFCESA became AFCEC, of AF Civil Engineer Center, in October 2012.
20
The ICS IAM should be officially designated, trained for the position, and delegated authority to immediately
address issues within a defined sphere of responsibility. The ICS IAM should not be an additional-duty position.
2. Form an ICS security team led by the ICS IAM. Securing installation ICS networks cannot
be fully accomplished by any single individual or necessarily by any single base entity
(such as CEs or PWs typically considered “owners” of the infrastructure). Securing the
ICS and reducing risk to mission must be a team effort. This team of authoritative
experts should represent at least the CEs/PWs, the cyber unit, physical security, OPSEC,
and mission operations. Engineers can inform the “what” and “where”; the cyber or
communications experts can provide the “how”; and the mission representatives can
explain the “why” as well as the consequences of failure. Intelligence producers can
help understand the “who” that represents the threat. The installation commander sets
priorities in the form of “when” and makes the critical decisions on commitment and
allocation of resources and assets. Include other stakeholders as appropriate to the
installation and mission set, such as when there are tenant organizations (e.g., hospital)
whose missions may be distinct, but still rely on the installation ICS infrastructure.21
Consider also creating, as a sub-element of this team, an ICS Cyber Emergency
Response Team (CERT),22 modeled on that led by the Department of Homeland Security
(DHS).23 If a network CERT already resides on the installation, coordinate to include ICS.
3. Direct the ICS security team in identifying existing and/or developing new policies with
respect to key elements of the ICS security program.
4. Promulgate policies and concurrently hold training sessions on the policies for all ICS
users, operators, and maintainers (analogous: IA training for anyone who touches a
network).
5. Implement policies and hold individuals accountable for adherence.
6. Assess effectiveness of measures undertaken (i.e., conduct risk analysis, exercise, red
team, and/or tabletop review).
7. Monitor and adjust as needed.
20 (cont.)
As a last resort, an IT IAM could have ICS added to their “job jar” but should receive additional training specific
to ICS. Reference also ETL 11-1.
21
(USAF) SAF/CIO A6, in a Memo dated 20 March 2012 (mandatory compliance) instructed installations to create a
multi-disciplined Integrated Product Team (IPT) comprised of all stakeholders to assess IA of PIT, which includes
control systems.
22
CERT = Cyber Emergency Response Team.
23
The DHS ICS-CERT website is found at http://www.us-cert.gov/control_systems/ics-cert/
Figure 4. The ICS Security Team
ICS SECURITY ASSESSMENT PROCESS
Key Point:
• Must begin with mission analysis and prioritization.
The following eight-step process is the heart of this handbook. All other included information
is in support of preparing for and understanding the criticality of the assessment process. Best
practice is to follow the steps as presented, but individual circumstances may warrant reordering
some steps and/or accomplishing some in parallel. However approached, Step 1 must always
be accomplished first.
While virtually every major entity engaged in ICS defense recommends some version of a “best”
process for risk assessment and management, no two approaches are exactly the same. For
example OPNAVINST 3500.39C on Operational Risk Management presents a 5-step process.24
The approach presented here was developed by ICS SMEs working on the National SCADA Test
Bed at Idaho National Laboratory (INL) and fits well with a DOD military installation focus.
In association with Step 7 of the process there is also a companion checklist of specific actions
to consider. That checklist is found at Attachment 2 and is introduced by a textual section titled
“Recommended Defense Actions.”
Step 1. Mission analysis. For ICS defense, the task is to establish a baseline understanding
among the stakeholders of the missions relative to the support infrastructure (both IT and
ICS). A key product of this first step is a prioritization of missions that can be linked to
assets and then ICS dependencies. Key question: If I have to devote all of my very limited
resources to protecting one mission, what would that be? Then the one after that?
Applying Mission Assurance Category (MAC) levels25 can be useful to this endeavor. Also
included may be a review of Mission Essential Tasks (MET)26 with reference to the Defense
Readiness Reporting System (DRRS). Mission analysis and decomposition, especially to a
granularity useful to the rest of the steps, likely will not be a trivial process and may require
significant commitment of the resource of time. A solid investment of time at this step will
make the follow-on steps easier to accomplish.
24
The OPNAVINST 5-step ORM process: Identify, Assess, Make decisions, Implement controls and Supervise,
remembered by the mnemonic “I AM IS”.
25
MAC definitions found in the Glossary.
26
MET examples in Appendix I.
Step 2. Identify assets. This includes not only direct mission assets (such as aircraft, tanks,
ships, etc.) but more pointedly the infrastructure systems (such as fuels management and
delivery) that support those. The key is to identify the thread from mission to asset to
supporting infrastructure to ICS dependencies. This thread will reveal which ICS systems
are more critical when it comes to applying security controls.
“As a network defender, it is critical to know how the network is laid out as well as
the hardware associated with the network. In order to defend SCADA, the
operator needs to know what he or she has to work with.”
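The mission-to-asset-to-infrastructure-to-ICS thread described in Step 2 can be captured in even a very simple data structure. The missions, assets, and system names below are hypothetical placeholders; a real mapping would come from the inventory and dependency work described in Attachment 1:

```python
# Sketch of the mission -> asset -> infrastructure -> ICS thread.
# All names are hypothetical placeholders for a real installation inventory.

from collections import defaultdict

threads = [
    # (mission, mission asset, supporting infrastructure, ICS dependency)
    ("air sovereignty sorties", "aircraft", "fuels management", "fuels SCADA"),
    ("air sovereignty sorties", "aircraft", "airfield lighting", "lighting PLC"),
    ("base medical support", "hospital", "electric power", "power EMCS"),
]

# Invert the thread: which missions does each ICS ultimately support?
ics_to_missions = defaultdict(set)
for mission, _asset, _infra, ics in threads:
    ics_to_missions[ics].add(mission)

for ics, missions in sorted(ics_to_missions.items()):
    print(ics, "->", sorted(missions))
```

Inverting the thread shows at a glance which control systems support the most (or the highest-priority) missions and therefore warrant security controls first.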
Step 4. Determine ICS dependencies. Which missions and their supporting infrastructure
are dependent on a properly functioning control system? Are multiple control systems
involved (as in the earlier example of traffic control, emergency systems, fuels delivery)?
This step also requires technical network mapping typically coupled with a physical
inventory and an operational-level understanding of the missions. See Attachment 1,
Mapping Interdependencies, for an example methodology. A comprehensive approach to
this must be followed with collaboration among representatives from at least the cyber,
engineering, and mission operations communities.
Step 5. Assess risk. Risk is characterized as an outcome of the interaction among threat,
vulnerability, and consequence. The goal is to gain a clear understanding of actual risks
that can be managed. All stakeholders need to be engaged in every step of this entire
process, but here is where collaboration becomes absolutely essential. Intelligence
27
Arguably extreme, but since we do not know what we do not know (in this example) one is left contemplating
“worst case.”
analysts help identify external threats; engineers, PWs, and comm/IT specialists provide
understanding of the control infrastructure and its vulnerabilities; and operations
personnel can define the mission consequences or impacts of a realized threat event.
Numerous risk analysis publications and external organizations are available to assist with
this step.
Step 6. Prioritize risk management actions. Risk management typically entails deciding
among a finite list of response options: avoid, share/transfer, mitigate, or accept. A
response option or course of action (COA) typically is selected based on what is feasible,
practical, and affordable (i.e., a cost-benefit analysis relative to mission impact). In most
cases the commander decides on a COA and then prioritizes commitment of resources to
accomplish the actions.
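The cost-benefit comparison among response options in Step 6 can be sketched as a simple ranking. All COAs, costs, and risk-reduction figures below are invented for illustration and carry no doctrinal weight:

```python
# Sketch: rank risk-response COAs by risk reduction per unit cost.
# Options, costs, and reduction figures are illustrative assumptions only.

coas = [
    # (course of action, response type, cost ($K), risk reduction 0-1)
    ("air-gap the fuels SCADA front end", "avoid", 250.0, 0.60),
    ("contract 24/7 vendor monitoring", "share/transfer", 120.0, 0.35),
    ("patch and harden HMI workstations", "mitigate", 40.0, 0.25),
    ("document and accept residual risk", "accept", 5.0, 0.00),
]

def benefit_per_cost(coa):
    """Crude cost-benefit ratio; guards against zero-cost entries."""
    _name, _kind, cost, reduction = coa
    return reduction / cost if cost else 0.0

for name, kind, cost, reduction in sorted(coas, key=benefit_per_cost, reverse=True):
    print(f"{benefit_per_cost((name, kind, cost, reduction)):6.4f}  {kind:14s} {name}")
```

In practice the commander weighs far more than a single ratio (feasibility, mission impact, timing), but even a crude ranking like this makes the trade-space explicit for resource decisions.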
Step 8. Monitor, and reenter the cycle as required. This is never a “fire and forget”
activity. Any (even trivial) change to an architecture can introduce new vulnerabilities
(emphasizing also the imperative to institute a configuration control process). Additionally,
threat actors are continuously on the hunt for vulnerabilities not yet discovered by
legitimate owners and operators. To maintain a steady state of security requires
continuous monitoring. Furthermore, implementation of any plan is likely to encounter
impediments. This will be the phase or step to identify those and readjust as necessary.
The success of this step depends on existence of feedback processes and mechanisms,
which should have been implemented already.
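One minimal sketch of the configuration-control idea behind Step 8 is to fingerprint the documented baseline so that any drift is flagged for review. The device names and fields below are hypothetical examples, and a real configuration control process would track far more:

```python
# Sketch: detect configuration drift by hashing a documented ICS baseline.
# Device names and attributes are hypothetical illustrations.

import hashlib
import json

def config_fingerprint(inventory: dict) -> str:
    """Stable hash of a configuration baseline (order-independent)."""
    canonical = json.dumps(inventory, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

baseline = {
    "fuels-plc-01": {"firmware": "2.4.1", "ports": [502]},
    "gate-hmi-02": {"firmware": "1.9.0", "ports": [443]},
}
recorded = config_fingerprint(baseline)

# A later scan finds an extra open port; the fingerprint changes,
# flagging the (possibly unauthorized) change for review.
observed = {
    "fuels-plc-01": {"firmware": "2.4.1", "ports": [502, 23]},
    "gate-hmi-02": {"firmware": "1.9.0", "ports": [443]},
}
drift = config_fingerprint(observed) != recorded
print("configuration drift detected:", drift)
```

Even a trivial change, such as the single extra port here, alters the fingerprint — mirroring the point that any architectural change can introduce new vulnerabilities and must pass through configuration control.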
SOFTWARE TOOLS
Key Point:
• Tools can be good or bad.
• Even a “good” tool is only as good as the expert who uses it.
For many steps in the process of assessing and defending the security of ICS, there exists a broad
selection of supporting tools, primarily software. At the installation command level, it is
important simply to note that while such tools are available, tools alone will not guarantee a
successful defensive posture of ICS. The human element is essential in every step.
Perhaps the most important thing to understand about software tools used with or on any ICS
is that the tool must not affect the operation of the ICS or, more specifically, the infrastructure
it controls. The most important thing to do with respect to software tools is to defer to IT SMEs
who already have a set of approved tools and understand potential impacts of using those tools
on particular networks.
Because the services provided by critical infrastructure (electricity heading the list) must always
be available, the ICS likewise must be always available. Therefore, any software-based
assessment or forensics action upon or through the ICS must not impede, deny, or otherwise
alter the system, the data throughput, or the services supported.
A service disruption would be significant to commercial customers, but trivial compared to impacts
on national defense missions. For example, consider potential consequences if such an action
involved an ICS supporting fuels management for a combat flying mission or life support systems at a hospital.
ADDITIONAL RESOURCES
Key Point:
• Outside help is available—and much of it is at no cost to the requestor.
Due primarily to the ever-changing nature of the ICS security landscape, published guidance
tends to quickly become obsolete. Fortunately, beyond the array of formal publications there
exists a range of additional useful resources. For example:
• Numerous web sites provide detailed information on ICS security issues, current threats,
tools, etc. Some of the more prominent are provided as Appendix B, Web Links.
• Students at the Services’ advanced Professional Military Education (PME) schools can
be exceptionally good sources for current insights as many engage in fresh research and
produce theses specific to emerging ICS/SCADA issues.
• Industry conferences can be exceptional sources of lessons learned and/or best
practices, as well as provide opportunity to network with experts.
• Finally, establishing a close relationship with local critical infrastructure owners (e.g., the
electric power company, water provider, etc.) can yield better understanding of local
threats and risks, and thus better security for the entire community.
For current threat assessment information, sources may include: Air Force Office of Special
Investigations (AFOSI), the Army Criminal Investigative Division (CID), or other Service
equivalent; intelligence analysts (the J2/A2/G2 shop); and the ICS-CERT’s Alerts and Warnings.
RECOMMENDED DEFENSE ACTIONS
Key Point:
• Use this section with Step 7 of the Process and with the tabular checklist provided as
Attachment 2.
The following section presents a series of action recommendations for securing ICS. Numerous
entities, to include DOD, DOE, Commerce Department, and commercial vendors have published
similar lists (see the “References” appendix for some of those). These recommendations are
augmented by the “stand-alone” tabular checklist found at Attachment 2, and most
appropriately considered at Step 7 of the “Security Assessment Process.”
The recommendations that follow are in an outline that follows the familiar doctrine,
organization, training, materiel, leadership & education, personnel, facilities, and policy
(DOTMLPF-P28) framework29 but with minor modification. The two modifications are that (1)
doctrine (D) is not directly addressed30 while (2) cyber security (C) has been added, resulting in
a COTMLPF-P framework that exists only in this publication. “Cyber security” is used here to
distinguish between those measures taken in, on and/or through the network(s) and those
actions of a more or less physical nature (such as using access control lists). Arguably the most
critical set of security measures, cyber security is addressed last because such measures are
most effective when supported by solid implementation of actions in the other areas and in
particular when guided by clear policy.
POLICY
“The development of the organization’s security policy is the first and most important step in
developing an organizational security program. Security policies lay the groundwork for
securing the organization’s physical, enterprise, and control system assets.” [Catalog of Control
Systems Security, DHS, Apr 2011, p. 4.] [emphasis added]
The National Security Agency (NSA), in its “Securing SCADA and Control Systems” brochure
(referring to Sandia National Lab’s Framework for SCADA Security Policy), states: “A Security
Policy defines the controls, behaviors, and expectations of users and processes, and lays the
groundwork for securing CS31 assets. Since the acceptable use of CS is narrower and may have
more demanding operational requirements than IT systems, they also demand their own
Security Policy.” [emphasis added]
The installation commander must establish authoritative and directive policies with regard to
all other aspects of the ICS, thus the rationale for starting with the Policy area.
28
DOTMLPF-P: doctrine, organization, training, materiel, leadership & education, personnel, facilities, and policy.
This is borrowed from Chairman Joint Chiefs of Staff Instruction (CJCSI) 3170.01H, Joint Capabilities Integration and
Development System (JCIDS); and further acknowledges DoDM 3020.45 Vol 2, Defense Critical Information
Program (DCIP) Remediation Planning, which states that remediation planning “shall consider a full range of…
[DOTMLPF] options”.
29
Examples of other frameworks: People, processes & technology; strategic, operational & tactical; management,
operational & technical.
30
Any action undertaken in any other area may lead to consideration of doctrinal change. However, this handbook
facilitates practical application and so intentionally does not directly address doctrine. Ultimately, best practices
may result in recommended changes to doctrine, requiring entering the JCIDS process.
31
CS = NSA’s abbreviation for “control systems.”
Policy Actions
• Reuse policy where appropriate. Usually it is not necessary to start from scratch on every
policy. Many ICS security issues are also IT and/or IA issues. Many published IT and IA
policies may be adapted to ICS. Also, there is increasing promulgation of Service- and DOD-
level policies specific to ICS (for example, Air Force Civil Engineering Support Agency32
[AFCESA]’s ETL 11-1).
• Ensure policies are promulgated to the lowest user level, and require training programs to
address ICS policies.
• With the ICS security team (discussed previously), determine which elements of the ICS
require specific policies vs. those that may be combined into a single policy document.
Examples:
o An access control policy might include password management, physical facilities control,
and connectivity controls.
o A personnel security policy likely will warrant a dedicated policy document.
• Once complete, the set of policies should address at minimum:
o Access control
o Inventory accounting
o Security of physical assets
o Configuration control
o Acquisition of new hardware/software
o Patching of operating systems and programs
o Vendor / third-party roles and responsibilities
o Conduct of vulnerability and risk assessments
LEADERSHIP
Much is subsumed in “leadership.” With regard to ICS security it is important that leadership
remain engaged and that operators are confident there is a “top-down” emphasis on ICS
security. Promulgation of policy is a critical start, but ongoing leadership gives life to those
policies. Delegate requisite authority and demand accountability, but do not retreat from
oversight.
Leadership Actions
• Conduct periodic awareness briefings to ICS operators and users. Recommend including
quarterly reminders of potential threats.
32
In October 2012 AFCESA merged with AFCEE and AFRPA to become AFCEC, or Air Force Civil Engineer Center.
ETL 11-1 still validly exists as an AFCESA publication.
• Participate in ICS security stakeholder events, such as DOD conferences, industry group
seminars, and on-line discussion forums.
• Establish collaborative relationships with commercial service providers (electric, water, gas,
etc.), with focus on their security programs to secure the infrastructure beyond the
installation fence. Invite them to training sessions as adjunct members of the security
team.
• Identify and mitigate the conditions whereby reliance on vendors creates potential single
points of failure. Vendors often are the ones most familiar with installation systems, yet do not
always have immediate access to those systems and at times can be denied access (such as
during elevated FPCONs).
• Add ICS information to the Commander’s Critical Information List.
• Engage in the ICS acquisition process from planning through installation; include upgrades
to existing systems as well as new systems.
• Develop plans where none exist or otherwise incorporate ICS into those that do. Examples:
o System Security Plan (SSP)
o Continuity of Operations Plan (COOP)
o Disaster Recovery Plan (DRP)
o Contingency Plans–ICS operation under various INFOCON, FPCON, and other
emergencies
o Operations Security self-assessments and surveys
PERSONNEL
The human element is necessary for the successful operation of ICS, and therefore is a critical
area. All individuals who operate, maintain, or otherwise access ICS must understand their
respective roles and responsibilities and be appropriately trained to those responsibilities. The
insider threat (legitimate operators with legitimate access but illegitimate intent) can overcome
most security controls. Even an “honest broker” can make a mistake that results in the same
(or worse) impact on mission that a true threat actor can cause.
Personnel Actions
• Ensure every individual is trained for their specific responsibilities and undergoes
mandatory periodic update/refresher training (similar to IA training).
• Enforce access controls and establish consequences for violations. For example, every
individual has a unique logon (best practice = role-based) and is allowed access only by that
logon (i.e., no “guest” accounts).
• Require special background checks on individuals who have access to ICS elements that are
critical to mission accomplishment. Consider requiring Secret clearances at least for those
individuals with access to mission-critical elements and/or who have full system
administration privileges.
• Request ICS managers and operators to sign confidentiality or non-disclosure agreements.
Treat ICS information at the very least as unclassified but sensitive.
• Maintain rosters for physical access to facilities, such as rooms where servers are
maintained. Require sign-in/sign-out when accessed.
• Create an ICS incident response team modeled on DHS’ ICS-CERT.
• Ensure that personnel who resign, retire, or are fired do not have continued access to any
element of the ICS. Extend this vigilance to employees of contractors and vendors.
• Ensure that relevant personnel (members of ICS security team, asset owners, etc.) either
monitor or routinely are made aware of new vulnerabilities and incidents published by ICS-
CERT and ICS component vendors.
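The unique, role-based logon practice recommended above can be sketched as a simple authorization check. The roles, permissions, and user names here are hypothetical examples:

```python
# Sketch of role-based access control for ICS logons.
# Roles, permissions, and user names are hypothetical examples.

ROLE_PERMISSIONS = {
    "operator": {"view_hmi", "acknowledge_alarm"},
    "engineer": {"view_hmi", "acknowledge_alarm", "change_setpoint"},
    "ics_admin": {"view_hmi", "acknowledge_alarm", "change_setpoint",
                  "modify_logic", "manage_accounts"},
}

# Every individual has a unique logon tied to one role; no shared accounts.
USERS = {"jdoe": "operator", "asmith": "engineer"}

def authorize(user: str, action: str) -> bool:
    """Allow an action only if the user's assigned role grants it."""
    role = USERS.get(user)      # unknown user -> no role -> denied
    if role is None:
        return False
    return action in ROLE_PERMISSIONS[role]

assert authorize("jdoe", "view_hmi")
assert not authorize("jdoe", "change_setpoint")
assert not authorize("guest", "view_hmi")   # no guest accounts
```

Note the deny-by-default behavior: an unknown or “guest” user maps to no role and is refused outright, which is the property the no-guest-accounts policy is meant to enforce.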
TRAINING
Training includes formal, informal, and exercise. Many negative incidents involving ICS, the
controlled infrastructure, and/or the missions they support are attributed to legitimate
operators who made mistakes due to training deficiencies. A systematic program of mandatory
training should be implemented for all managers, operators, and other users of the installation
control systems.
Training Actions
• Ensure all operators (at minimum) have had ICS-specific training prior to granting access to
any element or component.
• Require IA and OPSEC training for every individual accessing ICS computer systems even if
those systems are not directly connected to the IT network. This training must also include
contractors and vendors who only sometimes connect to ICS computer systems.
• Provide threat and vulnerability awareness via appropriate forums, unit security awareness
training, workplace bulletin boards, etc.
• Exercise plans (incident response, disaster recovery, continuity of operations, etc.). Include
with other installation exercises where practicable and include ICS-related scenarios under
elevating INFOCON and/or FPCON.
• Document all training and ensure each individual maintains currency.
ORGANIZATION
In many cases ICS tend to be “out-of-sight, out-of-mind” to everyone on the installation except
the CE or PW personnel. As long as the lights are on, water is running, and gates function properly,
there seems to be no need to be concerned with the systems that make that true. The downside of this
view is that it creates a dampening effect on responsibility and accountability for the total ICS
infrastructure. While PW/CE accept “ownership” responsibility for the control system field
elements, anecdotal information is that the IT side often is viewed as entirely under purview of
the IT organization. Conversely, in some cases IT considers the entirety of the ICS network,
including the front-end IT elements, as CE’s responsibility.33 The ICS must be considered as a
mission-critical system of systems and treated as such organizationally, with collaboration
among CE, IT, and operations stakeholders. Ill-defined division of labor (responsibility) can
create gaps that become threat vectors, or the trade-space of threat actors (both internal and
external).
Organization Actions
• Create a position for an ICS IAM with functional authority and direct access to the
installation commander. Ensure the IAM’s participation in key venues to provide
commander advocacy for ICS security and awareness.
• Clarify (and document on command relationship charts) roles and responsibilities of PW/CE,
communications, operations (and other stakeholders as appropriate) with respect to
operation, maintenance, and security of installation ICS.
• Fully document the ICS—hardware, software, firmware, connectivity, and physical locations
of all. Create a topology or ICS system map reflecting connections to supported missions
and a logic diagram depicting all information/data flows.
• Assign responsibility for ICS configuration management and control. May require creation
of a configuration control board (CCB). The key is this entity documents configuration and
maintains continuing control over changes.
• Identify the entity/individuals responsible for developing formal plans (continuity of
operations, disaster recovery, etc.).
• Establish roles and responsibilities with regard to third-party relationships.
• Ensure all ICS users and operators understand the chain of command particularly for
incident reporting and response mechanisms.
33
On a more positive note, the Air Force has made progress in resolving this issue. Implementation typically lags
policy and direction but four key publications have been promulgated beginning in early 2011: SAF/CIO A6 memo
of Feb 2011 appointing Designated Accrediting Authority for PIT (includes ICS); SAF/CIO A6 memo (Feb 2011)
delegating Certifying Authority to AFCESA/CEO; AFCESA’s ETL 11-1 (Mar 2011) which deals with ICS information
assurance; and SAF/CIO A6 guidance memo (AFGM2.2, Mar 2012) addressing IA of all PIT and announcing
commensurate changes to AFI 33-210.
FACILITIES
While some elements of the controlled infrastructure are of necessity exposed (for example,
the wires of the electric power grid or pipelines for natural gas), most control system elements
are housed in facilities ranging from guarded buildings to remote access panels. Each type of
facility presents its own relatively unique security challenges, but all share the dichotomous
requirement of ensuring that legitimate users can gain quick access when necessary while
excluding everyone else from any access whatsoever.
Facilities Actions
• Physically identify and visually inspect every facility that houses any element of the ICS,
however seemingly insignificant. This includes fenced enclosures, buildings, rooms in
buildings, field huts, lockboxes, panels, etc. The physical connections (e.g., coaxial cable,
digital subscriber line (DSL), fiber optics, telephone lines) are often overlooked but must be
included in “facilities.” On large installations it is especially important to identify, inspect,
and secure any facility near the perimeter fence (and, where feasible, relocate it away from
the perimeter).
• Ensure that cabling terminations and their housings are not overlooked. Threats can come
from cutting, splicing, tapping, and/or intercepting.
• Develop a plan of action and milestones (POA&M) for addressing physical security
deficiencies. An extremely high level of physical security could be achieved by placing
cameras, alarms, and armed guards on every facility, but this typically is neither practical
nor cost-efficient. Focus first on those ICS that are critical to the missions, then on the
points where the control system is most exposed to risk. Feasibility, practicality, and
expense will all temper the selected COAs.
• Create a map of the facilities and the assets housed by each. Use in training, exercises, and
actual incident response.
• Ensure portable equipment (e.g., laptops) that may be in storage until required (for
backups, recovery, etc.) is included in the inventory and security measures.
• Consider an OPSEC survey focused on ICS. In any event, OPSEC measures should be applied
to appropriate elements.
MATERIEL
Consider how physical assets are acquired, maintained, and removed from service. Does policy
or other guidance exist? Historically, control systems have had a life cycle measured in decades,
as opposed to IT, which has a life cycle of three years or less. One outcome is that components
with built-in vulnerabilities can remain in the network for years, often without those
vulnerabilities and their attendant risks being addressed. Replacement and maintenance of ICS
should be approached strategically (i.e., long-term) as well as tactically and operationally.
Materiel Actions
• Assign responsibility for oversight of the physical assets to a configuration control manager
or board.
• Establish a formal process for acquisition of new components.
• Operationally test (including vulnerability assessment) proposed new components off-line
before introducing them into the live network. Collaborate with the National SCADA Test
Bed (NSTB) entities (INL, Sandia National Laboratory [SNL]) to test components in “live” and
simulated environments.
• Treat adjunct materials (software, tech manuals, SOPs, plans, schematics, etc.) with the
same level of security as the ICS.
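Because ICS and IT components age on very different cycles (decades versus roughly three years, as noted above), a configuration control manager may want to flag components that have outlived their refresh cycle. A small illustrative sketch, with assumed cycle lengths rather than policy values:

```python
# Sketch: flag components likely "aged out" based on install year and an
# assumed refresh cycle. Cycle lengths here are illustrative, not policy.
from datetime import date

REFRESH_YEARS = {"PLC": 20, "RTU": 15, "server": 3}  # assumed cycles

def aged_out(kind, installed_year, today=None):
    """True if the component has reached or passed its assumed refresh cycle."""
    today = today or date.today()
    cycle = REFRESH_YEARS.get(kind, 10)  # assumed default for unknown kinds
    return today.year - installed_year >= cycle

print(aged_out("server", 2005, date(2012, 12, 19)))  # prints True
```

Running such a check against the asset inventory gives the CCB a starting list for strategic (long-term) replacement planning.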
CYBER SECURITY
Cyber security for ICS is in many respects the same as for IT. Most front-end elements (e.g.,
servers, operating systems, human-machine interfaces, connectivity) are in fact information
technology elements. On the other hand, since most non-IT components typically are on a 15-
to 20-year refresh (replacement) cycle and are tied to the operating system with which they
were originally installed, even the IT elements can “age out” or become unsupported, making
cyber security at the front end more challenging. Differences from IT become more distinct
“downstream” in the system, with RTUs, PLCs, and of course the field mechanisms (sensors,
gauges, etc.) interfaced directly with the controlled infrastructure. Cyber security applies to all
of those elements because they are still part of the network; nearly every component in the
system could provide a threat vector into it. Of primary concern is connectivity from/into
NIPRNet and/or any segment of the GIG, but any connection into the system must be
considered as creating a risk to the DOD missions of the installation.
“Asset owners should not assume that their control systems are secure or that
they are not operating with an Internet accessible configuration. Instead, asset
owners should thoroughly audit their networks for Internet facing devices, weak
authentication methods, and component vulnerabilities.”
This section contains more actions than the other (D)OTMLPF-P areas, and more of them are
technology-based. Despite this expanded treatment, the listing should not be viewed as
complete, final, or prescriptive. Existing policies and procedures already in place and proven
effective should not be replaced based solely on this listing. Consult other publications (see
References), engage the IT and IA professionals, seek assistance and advice from other
government-related ICS experts, and consider contracting for assessment services from
commercial providers.
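The audit urged in the quoted passage above (Internet-facing devices, weak authentication, component vulnerabilities) can begin from the documented asset inventory. This Python sketch is a hypothetical illustration of that triage; the asset records and authentication labels are assumptions, not a real scanner.

```python
# Sketch of an inventory audit for Internet-facing configurations and
# weak authentication. Asset records and labels are hypothetical.
WEAK_AUTH = {"none", "default-password", "shared-account"}

def audit(assets):
    """Return (asset, finding) pairs worth investigating."""
    findings = []
    for a in assets:
        if a.get("internet_facing"):
            findings.append((a["name"], "Internet-facing configuration"))
        if a.get("auth") in WEAK_AUTH:
            findings.append((a["name"], f"weak authentication: {a['auth']}"))
    return findings

inventory = [
    {"name": "HMI-OPS", "internet_facing": True, "auth": "shared-account"},
    {"name": "PLC-CHILLER", "internet_facing": False, "auth": "pki"},
]
for name, finding in audit(inventory):
    print(name, "-", finding)
```

Note that a passive, inventory-based review like this avoids the risks of running interactive scanning tools against a live ICS.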
34
Discussed elsewhere, but a reminder is warranted: be cautious with software tools. Some tools that are
very effective on IT networks can cause problems on ICS. Passivity is the key characteristic: tools that
interact with the system, such as most intrusion prevention tools, are to be avoided. If an interactive tool
is determined to be necessary, it must first be tested off-line before use on the ICS. Even if it “passes” the
test, it should be closely monitored for unanticipated, undiscovered negative effects.
35
The Air Force is implementing Public Key Infrastructure (PKI) as widely as possible, and DOD-wide many
logons are accomplished with a Common Access Card (CAC). Whatever the method of network access, the
key takeaway is that each credential is unique, linked to a vetted user, and not shareable with anyone else.
Figure 5. It Only Takes a Minute
APPENDIX A REFERENCES
AFI 33-210, Air Force Certification and Accreditation Program, [with attached SAF/CIO A6
guidance memo, AFGM 2.2, 2012, on the AFCAP] 2008.
CJCSM 6510.01, Defense-in-Depth: Information Assurance (IA) and Computer Network Defense
(CND), 2006.
CNSSI 1253, Security Categorization and Control Selection for National Security Systems, 2012.
CNSSP 22, Policy on Information Assurance Risk Management for National Security Systems,
2012.
DHS, Catalog of Control Systems Security: Recommendations for Standards Developers, 2011.
DHS, Cross-Sector Roadmap for Cybersecurity of Control Systems, 2011.
DHS, Primer: Control Systems Cyber Security Framework and Technical Metrics, 2009.
DHS, Recommended Practice: Creating Cyber Forensics Plans for Control Systems, 2008.
DOD, Risk Management Guide for DOD Acquisition, Sixth Edition, 2006.
DODD 3020.40, DOD Policy and Responsibilities for Critical Infrastructure, 2010.
DODD 8100.02, Use of Commercial Wireless Devices, Services, and Technologies in the
Department of Defense (DOD) Global Information Grid (GIG), 2004 [certified current as of 2007].
DODI 3020.45, Defense Critical Infrastructure Program (DCIP) Management, 2008.
DODI 5240.19, Counterintelligence Support to the Defense Critical Infrastructure Program, 2007.
DODM 3020.45, Vol. 1, Defense Critical Infrastructure Program (DCIP): DOD Mission-Based
Critical Asset Identification Process (CAIP), 2008.
DOE, Electricity Sector Subsector Risk Management Process (draft for public comment), 2012.
DOE / PNNL-20376, Secure Data Transfer Guidance for Industrial Control and SCADA Systems,
2011.
DON (Dept. of the Navy), CIO Memo 02-10, Information Assurance Policy Update for Platform
Information Technology, 2010.
ETL 11-1, [USAF] Civil Engineer ICS Information Assurance Compliance, 2011.
FIPS 199, Standards for Security Categorization of Federal Information and Information Systems,
2004.
FIPS 200, Minimum Security Requirements for Federal Information and Information Systems,
2006.
IEEE C37.1, IEEE Standard for SCADA and Automation Systems, 2008.
ITL Bulletin 8/11, Protecting ICS–Key Components of Our Nation’s Critical Infrastructures, 2011.
JTF CapMed Inst. 8510.01, Information Technology (IT) Platform Guide, 2011.
NERC CIPs 002-009, Critical Infrastructure Protection Series. (Version 4 released 2012; version 5
in review process.)
NIST Interagency Report 7435, The Common Vulnerability Scoring System (CVSS) and Its
Applicability to Federal Agency Systems, 2007.
NIST SP 800-37, Rev. 1, Guide for Applying the Risk Management Framework to Federal
Information Systems: A Security Life Cycle Approach, 2010.
NIST SP 800-40, V.2, Creating a Patch and Vulnerability Management Program, 2005.
NIST SP 800-53, Rev. 4, Security & Privacy Controls for Federal Information Systems and
Organizations, 2012.
NIST SP 800-53A, Rev. 1, Guide for Assessing the Security Controls in Federal Information
Systems and Organizations: Building Effective Security Assessment Plans, 2010.
NIST SP 800-61, Rev. 2, Computer Security Incident Handling Guide (Draft), 2012.
NIST SP 800-82, Guide to Industrial Control Systems Security, 2011.
NSA, A Framework for Assessing and Improving the Security Posture of Industrial Control
Systems, 2010.
NSTB, NSTB Assessments Summary Report: Common Industrial Control System Cyber Security
Weaknesses, 2010.
SECNAVINST 3501.1B, Department of the Navy Critical Infrastructure Protection Program, 2010.
SECNAVINST 5239.3A, Department of the Navy Information Assurance (IA) Policy, 2004.
SNL, Sandia Report SAND2004-4233, A Classification Scheme for Risk Assessment Methods,
2004.
SNL, Sandia Report SAND2007-2070P, Security Metrics for Process Control Systems, 2007.
SNL, Sandia Report SAND2010-5183, Control System Devices: Architectures and Supply
Channels Overview, 2011.
APPENDIX B WEB LINKS
346 TS [CAC required] https://www.my.af.mil/gcss-af/USAF/ep/globalTab.do?channelPageId=sF575FC8E22DC74AF01230B02FDC91C2B
AFCESA http://www.afcesa.af.mil/
DTIC http://www.dtic.mil/dtic/
ICS-CERT http://www.us-cert.gov/control_systems/cstraining.html
Idaho NL https://inlportal.inl.gov/portal/server.pt/community/home/255
JDEIS https://jdeis.js.mil/jdeis/index.jsp?pindex=0
NIST http://www.nist.gov/index.html
Sandia NL http://www.sandia.gov/
http://energy.sandia.gov/?page_id=5800
USACE http://www.usace.army.mil/
APPENDIX C ACRONYMS
AFCEC Air Force Civil Engineer Command [result of Oct 2012 merger of AFCESA,
AFCEE and AFRPA]
AFCESA Air Force Civil Engineer Support Agency [later AFCEC]
AFI Air Force Instruction
AFMAN Air Force Manual
AFNIC Air Force Network Integration Center
AFOSI Air Force Office of Special Investigations
AFPD Air Force Policy Directive
AFTTP Air Force Tactics, Techniques, and Procedures
AIC availability, integrity, confidentiality [vs. IT systems’ CIA]
AIS automated information system
AMI Advanced Metering Infrastructure
APT Advanced Persistent Threat
AR Army Regulation
AT antiterrorism
AV antivirus
CAC common access card
CAIP critical asset identification process
C&A certification & accreditation
CCB Configuration Control Board
CCDR combatant commander
CE civil engineer
CERT Computer Emergency Readiness Team
CIA confidentiality, integrity, availability [vs. ICS systems’ AIC]
CID Criminal Investigation Division
CJCSI Chairman, Joint Chiefs of Staff Instruction
CJCSM Chairman, Joint Chiefs of Staff Manual
CNSS Committee on National Security Systems
CNSSI Committee on National Security Systems Instruction
CNSSP Committee on National Security Systems Pamphlet
COA course of action
COOP Continuity of Operations Plan
COTS commercial off-the-shelf
CPNI Center for the Protection of National Infrastructure
CS control system [NSA term]
CSET Cyber Security Evaluation Tool
CVSS Common Vulnerability Scoring System
DCI Defense Critical Infrastructure
DCIP Defense Critical Infrastructure Program
DCS Distributed Control System
DEP data execution prevention
DHS Department of Homeland Security
DIACAP DoD Information Assurance Certification & Accreditation Process
DISA Defense Information Systems Agency
DSL digital subscriber line
DISLA Defense Infrastructure Sector Lead Agent
DMZ demilitarized zone
DOD Department of Defense
DODD Department of Defense Directive
DODI Department of Defense Instruction
DODM Department of Defense Manual
DOE Department of Energy
DON Department of the Navy
DOTMLPF-P Doctrine, Organization, Training, Materiel, Leadership (& education),
Personnel, Facilities, and Policy
DRP Disaster Recovery Plan
DRRS Defense Readiness Reporting System
DTIC Defense Technical Information Center
DUSD Deputy Under Secretary of Defense
EMCS Energy Management Control System
EMS Emergency Medical Services
ETL Engineering Technical Letter
FIPS Federal Information Processing Standards
FPCON Force Protection Condition
GAO Government Accountability Office
GIG Global Information Grid
GIS geographical information services
HIPS McAfee Host Intrusion Prevention System
HSPD Homeland Security Presidential Directive
HVAC Heating, Ventilation and Air Conditioning
IA information assurance
IAM Information Assurance Manager
ICS Industrial Control Systems [US Army has used ICS also for
“Instrumentation Communication Subsystem”]
ICS-CERT ICS Cyber Emergency Response Team
IDART Information Design Assurance Red Team
IDS intrusion detection system
IEEE Institute of Electrical and Electronics Engineers
IEM Installation Emergency Management
INFOCON Information Operations Condition
INL Idaho National Laboratory
IPS intrusion prevention system
IPT Integrated Product Team
IS information system
ISO International Organization for Standardization
ISP Internet service provider
ISSM Information System Security Manager
IT information technology
JCIDS Joint Capabilities Integration and Development System
JDEIS Joint Doctrine, Education, and Training Electronic Information System
JIT just in time [refers to a just-in-time compiler]
JP Joint Publication
JTF Joint Task Force
JWAG Joint Warfighter Advisory Group
LUA least user access
MAC Mission Assurance Category
MCCIP Marine Corps Critical Infrastructure Program
MCO Marine Corps Order
MEF mission essential functions
MET mission essential task
MMS multimedia messaging service
NERC CIPS North American Electric Reliability Council Critical Infrastructure
Protection Series
NIPRNet Non-secure Internet Protocol Router Network
NIST National Institute of Standards and Technology
NSA National Security Agency
NSTB National SCADA Test Bed
OPNAVINST Office of the Chief of Naval Operations Instruction
OPSEC operations security
OSI open system interconnect
OT operational technology
PIT Platform Information Technology (includes ICS)
PIT-I PIT Interconnect [refers to PIT connected to IT network]
PKI Public Key Infrastructure
PLC Programmable Logic Controller
PME Professional Military Education
PNNL Pacific Northwest National Laboratory
POA&M Plan of Actions & Milestones
PW public works
RBAC role-based access control
ROP return-oriented programming
RTU Remote Terminal Unit
SCADA Supervisory Control and Data Acquisition
SECNAVINST Secretary of the Navy Instruction
SMB server message block
SME subject matter expert
SMS short message service
SNL Sandia National Laboratory
SOP standard operating procedures
SP Special Publication
SSP System Security Plan
STIG Security Technical Implementation Guide
TM Technical Manual
UAC user access control
USACE United States Army Corps of Engineers
USB Universal Serial Bus
USSTRATCOM United States Strategic Command
VoIP voice over Internet Protocol
VPN virtual private network
WAF web application firewall
APPENDIX D GLOSSARY
Advanced Persistent Threat. An adversary that possesses sophisticated levels of expertise and
significant resources, which allow it to create opportunities to achieve its objectives by using
multiple attack vectors (e.g., cyber, physical, and deception). These objectives typically include
establishing and extending footholds within the information technology infrastructure of the
targeted organizations for purposes of exfiltrating information, undermining or impeding
critical aspects of a mission, program, or organization; or positioning itself to carry out these
objectives in the future. The advanced persistent threat: (i) pursues its objectives repeatedly
over an extended period of time; (ii) adapts to defenders’ efforts to resist it; and (iii) is
determined to maintain the level of interaction needed to execute its objectives. [NIST SP 800-
53 Rev 4]
• Advanced: The actor is adaptive and able to evade detection and is able to gain and
maintain access to protected networks and resident sensitive information.
• Persistent: The actor has a strong foothold in/on the target network and is
exceptionally difficult to completely remove or deny even if detected.
• Threat: The actor has both capability and intent that is counter to the best interests of
the network and/or the legitimate users.
Asset. A distinguishable entity that provides a service or capability. Assets are people, physical
entities, or information located either within or outside the United States and employed,
owned, or operated by domestic, foreign, public, or private sector organizations. [DODD
3020.40]
Defense Critical Infrastructure (DCI). DCI is the DOD and non-DOD networked assets essential
to project, support, and sustain military forces and operations worldwide. Assets are people,
physical entities, or information. Physical assets would include installations, facilities, ports,
bridges, power stations, telecommunication lines, pipelines, etc. The increasing
interconnectivity and interdependence among commercial and defense infrastructures demand
that DOD take steps to understand and remedy or mitigate the vulnerabilities of, and threats
to, the critical infrastructures on which it depends for mission accomplishment. The DCIP is a
fully integrated program that provides a comprehensive process for understanding and
protecting selected infrastructure assets that are critical to national security during peace,
crisis, and war. It involves identifying, prioritizing, assessing, protecting, monitoring, and
assuring the reliability and availability of mission-critical infrastructures essential to the
execution of the NMS. The program also addresses the operational decision support necessary
for CCDRs to achieve their mission objectives despite the degradation or absence of these
infrastructures. [Joint Publication 3-27] [see also: DODD 3020.40, DODI 3020.45, and DODM
3020.45, Vols. 1-5]
Disaster Recovery Plan (DRP). A written plan for processing critical applications in the event of
a major hardware or software failure or destruction of facilities. [NIST SP 800-82]
DOD Information Assurance Certification and Accreditation Process (DIACAP). The DOD
process for identifying, implementing, validating, certifying, and managing IA capabilities and
services, expressed as IA controls, and authorizing the operation of DOD ISs, including testing in
a live environment, in accordance with statutory, Federal, and DOD requirements. [DODI
8510.01]
Force Protection Condition (FPCON). A Chairman of the Joint Chiefs of Staff-approved standard
for identification of and recommended responses to terrorist threats against US personnel and
facilities. [Joint Pub 1-02]
Incident. An occurrence that actually or potentially jeopardizes the confidentiality, integrity, or
availability of an information system or the information the system processes, stores, or
transmits, or that constitutes a violation or imminent threat of violation of security policies,
security procedures, or acceptable use policies. Incidents may be intentional or unintentional.
[NIST SP 800-82]
[NIST SP 800-82]
Information Assurance (IA). Measures that protect and defend information and information
systems by ensuring their availability, integrity, authentication, confidentiality, and non-
repudiation. This includes providing for restoration of information systems by incorporating
protection, detection, and reaction capabilities. [DODD 8500.01E]
Information Assurance Manager (IAM). The individual responsible for the information
assurance program of a DOD information system or organization. While the term IAM is
favored within the Department of Defense, it may be used interchangeably with the IA title
Information Systems Security Manager (ISSM). [DODI 8500.2]
Intrusion Detection System (IDS). A security service that monitors and analyzes network or
system events for the purpose of finding, and providing real-time or near real-time warning of,
attempts to access system resources in an unauthorized manner. [NIST SP 800-82]
Intrusion Prevention System (IPS). A system that can detect an intrusive activity and can also
attempt to stop the activity, ideally before it reaches its targets. [NIST SP 800-82]
Mission Assurance. A process to ensure that assigned tasks or duties can be performed in
accordance with the intended purpose or plan. It is a summation of the activities and measures
taken to ensure that required capabilities and all supporting infrastructures are available to the
Department of Defense to carry out the National Military Strategy. It links numerous risk
management program activities and security-related functions, such as force protection;
antiterrorism; critical infrastructure protection; IA; continuity of operations; chemical,
biological, radiological, nuclear, and high explosive defense; readiness; and installation
preparedness to create the synergy required for the Department of Defense to mobilize,
deploy, support, and sustain military operations throughout the continuum of operations.
[DODD 3020.40]
Mission Assurance Category (MAC). Applicable to DOD information systems, the mission
assurance category reflects the importance of information relative to the achievement of DOD
goals and objectives, particularly the warfighters' combat mission. Mission assurance
categories are primarily used to determine the requirements for availability and integrity. The
Department of Defense has three defined mission assurance categories: [DODD 8500.01E]
• MAC I. Systems handling information that is determined to be vital to the operational
readiness or mission effectiveness of deployed and contingency forces in terms of both
content and timeliness. The consequences of loss of integrity or availability of a MAC I
system are unacceptable and could include the immediate and sustained loss of mission
effectiveness. MAC I systems require the most stringent protection measures.
• MAC II. Systems handling information that is important to the support of deployed and
contingency forces. The consequences of loss of integrity are unacceptable. Loss of
availability is difficult to deal with and can only be tolerated for a short time. The
consequences could include delay or degradation in providing important support
services or commodities that may seriously impact mission effectiveness or operational
readiness. MAC II systems require additional safeguards beyond best practices to
ensure adequate assurance.
• MAC III. Systems handling information that is necessary for the conduct of day-to-day
business, but does not materially affect support to deployed or contingency forces in
the short-term. The consequences of loss of integrity or availability can be tolerated or
overcome without significant impacts on mission effectiveness or operational readiness.
The consequences could include the delay or degradation of services or commodities
enabling routine activities. MAC III systems require protective measures, techniques, or
procedures generally commensurate with commercial best practices.
Mission Essential Functions (MEF). The specified or implied tasks required to be performed by,
or derived from, statute, Executive Order, or other appropriate guidance, and those
organizational activities that must be performed under all circumstances to achieve DOD
component missions or responsibilities in a continuity threat or event. Failure to perform or
sustain these functions would significantly affect the Department of Defense’s ability to provide
vital services or exercise authority, direction, and control. [DODD 3020.26]
National Security System. Any information system (including any telecommunications system)
used or operated by an agency or by a contractor of an agency, or other organization on behalf
of an agency—(i) the function, operation, or use of which involves intelligence activities;
involves cryptologic activities related to national security; involves command and control of
military forces; involves equipment that is an integral part of a weapon or weapons system; or
is critical to the direct fulfillment of military or intelligence missions (excluding a system that is
to be used for routine administrative and business applications, for example, payroll, finance,
logistics, and personnel management applications); or (ii) is protected at all times by
procedures established for information that have been specifically authorized under criteria
established by an Executive Order or an Act of Congress to be kept classified in the interest of
national defense or foreign policy. [44 U.S.C., Sec. 3542]
Platform Information Technology (PIT) and PIT-Interconnection (PITI). For DOD IA purposes,
platform IT interconnection refers to network access to platform IT. Platform IT
interconnection has readily identifiable security considerations and needs that must be
addressed in both acquisition, and operations. Platform IT refers to computer resources, both
hardware and software, that are physically part of, dedicated to, or essential in real time to the
mission performance of special purpose systems such as weapons, training simulators,
diagnostic test and maintenance equipment, calibration equipment, equipment used in the
research and development of weapons systems, medical technologies, transport vehicles,
buildings, and utility distribution systems such as water and electric. Examples of platform IT
interconnections that impose security considerations include communications interfaces for
data exchanges with enclaves for mission planning or execution, remote administration, and
remote upgrade or reconfiguration. [emphasis added] [DODD 8500.01E, DODI 8500.2]
Risk.
• Potential for an unwanted outcome resulting from an incident, event, or occurrence, as
determined by its likelihood and the associated consequences. [DHS Risk Lexicon]
• A measure of the extent to which an entity is threatened by a potential circumstance or
event, and typically a function of: (i) the adverse impacts that would arise if the
circumstance or event occurs; and (ii) the likelihood of occurrence. [CNSSI 4009]
• An expression of consequences in terms of the probability of an event occurring, the
severity of the event and the exposure of personnel or resources to potential loss or
harm. A general expression of risk as a function of probability [P], severity [S], and
exposure [E] can be written as: Risk = ƒ(P, S, E). [AFPAM 90-902]
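The AFPAM 90-902 expression Risk = ƒ(P, S, E) can be made concrete with any monotonic combining function. A simple multiplicative sketch follows; the 1-5 ordinal scales and the product form are illustrative assumptions for demonstration, not taken from the cited source.

```python
# Illustrative multiplicative form of Risk = f(P, S, E). The 1-5 scales
# and simple product are assumptions, not doctrine.
def risk(probability, severity, exposure):
    """Each input on a 1-5 ordinal scale; a higher product means higher risk."""
    for v in (probability, severity, exposure):
        if not 1 <= v <= 5:
            raise ValueError("inputs must be on a 1-5 scale")
    return probability * severity * exposure

print(risk(4, 5, 3))  # prints 60, of a possible 125
```

Any such scoring is only as good as the ordinal judgments behind it; its value is in ranking risks consistently, not in the absolute numbers.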
Risk Management. The program and supporting processes to manage information security risk
to organizational operations (including mission, functions, image, reputation), organizational
assets, individuals, other organizations, and the Nation, and includes: (i) establishing the
context for risk-related activities; (ii) assessing risk; (iii) responding to risk once determined; and
(iv) monitoring risk over time. [NIST SP 800-53]
Security Audit. Independent review and examination of a system’s records and activities to
determine the adequacy of system controls, ensure compliance with established security policy
and procedures, detect breaches in security services, and recommend any changes that are
indicated for countermeasures. [NIST SP 800-82]
Security Policy. Security policies define the objectives and constraints for the security program.
Policies are created at several levels, ranging from organization or corporate policy to specific
operational constraints (e.g., remote access). In general, policies provide answers to the
questions “what” and “why” without dealing with “how.” Policies are normally stated in terms
that are technology-independent. [NIST SP 800-82]
System Security Plan (SSP). Formal document that provides an overview of the security
requirements for an information system and describes the security controls in place or planned
for meeting those requirements. [NIST SP 800-18]
Supervisory Control and Data Acquisition (SCADA). A generic name for a computerized system
that is capable of gathering and processing data and applying operational controls over long
distances. Typical uses include power transmission and distribution and pipeline systems.
SCADA was designed for the unique communication challenges (e.g., delays, data integrity)
posed by the various media that must be used, such as phone lines, microwave, and satellite.
Usually shared rather than dedicated. [NIST SP 800-82]
Task Critical Asset. An asset that is of such extraordinary importance that its incapacitation or
destruction would have a serious, debilitating effect on the ability of one or more DOD
Components or DISLA organizations to execute the task or mission-essential task it supports.
Task critical assets are used to identify defense critical assets. [DODD 3020.40]
Virtual Private Network (VPN). A restricted-use, logical (i.e., artificial or simulated) computer
network that is constructed from the system resources of a relatively public, physical (i.e., real)
network (such as the Internet), often by using encryption (located at hosts or gateways), and
often by tunneling links of the virtual network across the real network. [NIST SP 800-82]
APPENDIX E CE BRIEFING GRAPHICS
The following two graphics were extracted from an AFCESA brief dated February 2012. (The
Reference Model is modified from the original.) They are offered simply as representative of a
Service view of ICS.
APPENDIX F RISK ASSESSMENT & MANAGEMENT MODELS
The following models, extracted from selected publications, are representative of varying
approaches to modeling the basic risk management process. Numerous varieties exist; Figures
F1–F4 illustrate four of them.
F1. DCIP Risk Management Process Model copied from DODI 3020.45, (p. 16).
F2. Risk Assessment Model as represented in NIST SP 800-30 (Rev. 1, Draft, 2011), (p. 7).
F3. Risk Management Process model depicted in ISO 31000, (p. 14).
F4. Generic model of risk assessment process.
APPENDIX G CSET
CYBER SECURITY EVALUATION TOOL (CSET)
The Cyber Security Evaluation Tool (CSET™) is a Department of Homeland Security (DHS)
product that assists organizations in protecting their key national cyber assets. It was
developed under the direction of the DHS National Cyber Security Division (NCSD) by
cybersecurity experts and with assistance from the National Institute of Standards and
Technology. This tool provides users with a systematic and repeatable approach for assessing
the security posture of their cyber systems and networks. It includes both high-level and
detailed questions related to all industrial control and IT systems. CSET is a desktop software
tool that guides users through a step-by-step process to assess their control system and
information technology network security practices against recognized industry standards. The
output from CSET is a prioritized list of recommendations for improving the cybersecurity
posture of the organization's enterprise and industrial control cyber systems. The tool derives
the recommendations from a database of cybersecurity standards, guidelines, and practices.
Each recommendation is linked to a set of actions that can be applied to enhance cybersecurity
controls.
A caveat provided by the ICS-CERT: CSET is only one component of the overall cyber security
picture and should be complemented with a robust cyber security program within the
organization. A self-assessment with CSET cannot reveal all types of security weaknesses, and
should not be the sole means of determining an organization’s security posture. The tool will
not provide an architectural analysis of the network or a detailed network hardware/software
configuration review. It is not a risk analysis tool so it will not generate a complex risk
assessment. CSET is not intended as a substitute for in depth analysis of control system
vulnerabilities as performed by trained professionals.
Question 12. Is a disaster recovery plan prepared, tested, and available in the event of a major hardware or
software failure or destruction of the facility? Check all that apply.

Result: Not answered

Pass answer(s):
• A disaster recovery plan (DRP) is available and is tested.
• The DRP includes a communication procedure and list of personnel to contact in the case of an emergency,
including ICS vendors, network administrators, ICS support personnel, etc.
• The DRP includes an authorized personnel list of those required for ICS operations and maintenance.
• The DRP includes procedures for operating the ICS in manual mode until secure conditions are restored.
• The DRP includes processes and procedures for backup and secure storage of information.
• The DRP includes required responses to events that activate the recovery plan.
• The DRP indicates requirements for the timely replacement of components in the case of an emergency.
A disaster recovery plan is essential to continued availability of the ICS. The DRP should include the
following items:
• Required response to events or conditions of varying duration and severity that would activate the
recovery plan
• Procedures for operating the ICS in manual mode with all external electronic connections severed
until secure conditions can be restored
• Roles and responsibilities of responders
• Processes and procedures for the backup and secure storage of information
• Complete and up-to-date logical network diagram
• Personnel list for authorized physical and cyber access to the ICS
• Communication procedure and list of personnel to contact in the case of an emergency, including
ICS vendors, network administrators, ICS support personnel, etc.
• Current configuration information for all components
The plan should also indicate requirements for the timely replacement of components in the case of
an emergency. If possible, replacements for hard-to-obtain critical components should be kept in
inventory.
APPENDIX H DCIP
DEFENSE CRITICAL INFRASTRUCTURE PROGRAM (DCIP)
DCIP is an integrated risk management program designed to support DOD Mission Assurance
programs. When effectively applied, these programs form a comprehensive structure to secure
critical assets, infrastructure, and key resources for our nation. The nation’s defense and
economic vitality are highly dependent upon the availability and reliability of both DOD- and
non-DOD-owned critical infrastructure (such as power, transportation, telecommunications, and
water supply). With limited resources to address risk to critical infrastructure, the DCIP relies on
continuous analysis of changing vulnerabilities to all types of threats and hazards to effectively
manage risk to the nation’s most essential infrastructure.
Recognizing how critical this infrastructure is to accomplishing DOD's missions, and the effects of
infrastructure assets' vulnerabilities to threats and hazards, DOD Directive 3020.40, DOD
Policy and Responsibility for Critical Infrastructure, established the Defense Critical
Infrastructure Program (DCIP), which is responsible for coordinating the management of risk
to the critical infrastructure that DOD relies upon to execute its missions.
APPENDIX I UNIVERSAL JOINT TASKS
UJTs Relevant to Securing Critical Infrastructure
Universal Joint Tasks (UJT) provide the foundation upon which METs are constructed. The
following selected UJTs are verbatim from the UJT List (UJTL) database found on the Joint
Doctrine, Education & Training Electronic Information System (JDEIS).36 The selection is not
meant to be all-inclusive but representative and is provided merely to highlight the link
between installation-level activities to secure ICS and national-level requirements.
ST 6.6.3 Manage Mission Risk Resulting From Defense Critical Infrastructure (DCI)
Vulnerabilities
To manage actions taken at combatant command level to reduce the risk of mission
degradation or failure, induced by known vulnerabilities of defense critical assets,
infrastructure, or functional capability.
36
https://jdeis.js.mil/jdeis/index.jsp?pindex=43
ST 6.6 Perform Mission Assurance
Maintain plans and programs to ensure assigned tasks or duties can be performed IAW the
intended purpose or plan.
Note: This task focuses on fully integrating a mission-focused process to understand and
protect physical and information capabilities critical to performance of assigned missions at the
strategic theater level of war. It links risk management program activities and security related
functions -- such as force protection; antiterrorism; critical infrastructure protection;
information assurance; continuity of operations; chemical, biological, radiological, nuclear and
high-explosive defense; readiness and installation preparedness -- to create the synergistic
effect required for the Department of Defense to mobilize, deploy, support, and sustain military
operations throughout the continuum of operations.
APPENDIX J ICS TRAINING OPPORTUNITIES
Training on various aspects of ICS to include security is available from numerous providers and
in a variety of venues. The following samples are by no means all-inclusive but represent the
variety of vendors and venues. Descriptions are from the vendors’ or sponsors’ web sites. For
those not overly familiar with ICS, an excellent starting point is the US-CERT’s web-based “Cyber
Security for Control Systems Engineers & Operators” (link below). In spite of the course title, it
is not necessary to be either an engineer or an ICS operator to gain valuable fundamental
understanding about ICS security in a very short time.
US-CERT (http://www.us-cert.gov/control_systems/cstraining.html)
• Web-based Training
The following summary level courses are available for on-line training:
OPSEC for Control Systems
Cyber Security for Control Systems Engineers & Operators
• Instructor Led format - Introductory Level
Introduction to Control Systems Cybersecurity (101) - 1 day or 8 hrs
ICS Security for Management (111) - 1 - 2 hrs
• Instructor Led format - Intermediate Level
Intermediate Cybersecurity for Industrial Control Systems (201) - lecture only - 1 day or
8 hrs
• Hands-on format - Intermediate Technical Level
Intermediate Cybersecurity for Industrial Control Systems (202) - with lab/exercises - 1
day or 8 hrs
• Hands-on format - Advanced Technical Level
ICS Advanced Cybersecurity (301) - 5 days
• The Control Systems Security Program (CSSP) provides training courses and workshops
at various industry association events. These courses are packed with up-to-date
information on cyber threats and mitigations for vulnerabilities. If your organization
would like to learn more about training opportunities, please contact
[email protected].
course is offered at Sandia’s discretion to individuals with need-to-know and by invitation
only.
interconnection to other networks. Learn the skills required to direct and manage the
appropriate cyber security protection for your SCADA system.
APPENDIX K ICS SECURITY ORGANIZATIONS
The following organizations can advise and assist with ICS vulnerability and risk assessments
mostly using their own sets of tools and SMEs. This is merely a subset of a broader community
engaged on ICS security.
ICS-CERT http://www.us-cert.gov/control_systems/ics-cert/
The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) [Dept of Homeland
Security] provides a control system security focus in collaboration with US-CERT to:
• respond to and analyze control systems related incidents,
• conduct vulnerability and malware analysis,
• provide on-site support for incident response and forensic analysis,
• provide situational awareness in the form of actionable intelligence,
• coordinate the responsible disclosure of vulnerabilities/mitigations, and
• share and coordinate vulnerability information and threat analysis through information
products and alerts.
AFCESA http://www.afcesa.af.mil/
The Air Force Civil Engineer Support Agency (HQ USAF/A7C) [USAF-centric], CEO Division,
provides (on a scheduled basis) specialized ICS and IT teams to apply the ICS PIT C&A Program.
AFCESA/CEO's standard procedure is to apply the risk assessment program at Air Force-managed
installations on a scheduled basis, with a goal of revisiting each installation every three years; the
teams may visit out-of-cycle on an as-requested basis but will be constrained by already-scheduled
assessments. As of this publication, the AFCESA C&A teams operate under the authority of SAF/CIO
A6, and in accordance with DODI 8500.01E, AFI 33-210, and AFCESA ETL 11-1.
NSTB http://www.inl.gov/scada/
To ensure the secure, reliable, and efficient distribution of power, DOE jointly established
the National SCADA Test Bed (NSTB) program at Idaho National Laboratory (INL) and Sandia
National Laboratories (SNL). The program works to support
industry and government efforts to enhance the cyber security of control systems used
throughout the electricity, oil, and gas industries. Among the services available: Control
system security product and technology assessments to identify vulnerabilities and
corresponding mitigation approaches.
IDART http://idart.sandia.gov/
The Information Design Assurance Red Team (IDART) provides independent, objective,
adversary-based assessments of information, communication, and critical infrastructure
systems throughout their lifecycle (concept through retirement) in order to identify
vulnerabilities and threats, improve design, and assist decision makers with choices in the
development, security, and use of their systems. [Operates out of Sandia National Laboratories.]
ATTACHMENT 1 MAPPING INTERDEPENDENCIES & ASSESSING RISK
With reference to the eight-step process introduced at the beginning of this handbook, this
section will facilitate the following activities:
• Mission analysis
• ID assets
• Determine ICS dependencies
• Determine ICS connectivity
• Assess risk
• Prioritize risk management actions
This leaves the following activities to be addressed by the installation ICS security team:
• Implement actions
• Monitor and reenter the cycle as required
The duration of this effort depends on the depth of knowledge and documentation of the
existing systems. Documenting processes that do not exist, have been abandoned, or were
never installed will diminish the value of this effort. Rigor should be applied to ensuring
that dependencies and interconnects are characterized as precisely as possible. This
activity also may need to be iterative; a lightning strike knocking out power may expose a
previously unknown connection, at which point the diagram/table should be updated, as this will
alter the relative importance values. The initial amount of time to allocate would be one hour
per infrastructure, or one-half day for a large meeting covering several infrastructures. The
facilitated meetings (combined) should not take more than one day for each mission.
Additional time can be determined based on the outcome of the first session. Incomplete
groups of data can be collected and documented. Gaps should be noted and filled as experts or
data become available.
CAVEAT: Some data may require a clearance to obtain and may result in generating classified
documents. A derivative classifier or classification authority should be consulted prior to
beginning this effort. The documentation will be, at a minimum, “For Official Use Only.”
Some data may not be obtainable. Remember, this is not documenting how the processes
work; this is documenting the mission dependencies on processes and systems that in turn
depend on one another. Here are examples of utilities and infrastructures that should be part
of this effort:
• Electricity
• Fuel
• Water/Waste Water
• Natural Gas
• Security (gates, doors, surveillance, etc.)
o If security is the mission, then ensure that security, and the systems/networks security
uses, are represented in the diagram. If security is a mitigating measure, then make a
notation of this. Essentially, if the doors lock due to a power outage of the security
system, can the mission still function?
• Lights (emergency, runway, search, etc.)
• Emergency Services
• Communications (networks, wired, wireless)
• People (groups, organizations, contractors, etc.)
• Control systems, SCADA systems, HVAC systems, etc.
• “If power were cut, how long can you remain operational?”
• “If the temperature in this building reaches 90 degrees, will the equipment remain
functional?”
• “If a <insert disaster> destroyed the <insert part of the building>, would that impact our
<insert infrastructure>?”
The figure has color-coded hexagons to make finding items easier. For example, electrical
systems are blue with a yellow outline. The cyber systems (control systems) associated with
the electrical systems are indicated as their own green hexagons. The electrical systems
located at the center bottom of the figure have the highest relative importance. The leased
line, electrical supply, substation A, and their associated cyber systems (control systems) all
have values of 20. The quick meaning is that without electricity, the rest of the systems are
likely to be non-functional. The pumps will no longer work to deliver fuel or water to the
mission. The IT equipment will no longer be powered. The only item that would remain
functional would be the backup generator located to the upper left of the mission (in the
middle). This would allow the mission to function at least until the fuel for the backup
generator was consumed.
Taking another look at the figure, in the upper-right corner there is a virtual local area network
(VLAN) switch with quite a few connections. If a vulnerability were found that allowed switching
from one VLAN to another, could an intruder from the water cyber system get into
the process cyber system? That VLAN switch concentrates a significant portion of the cyber
traffic around that mission. As such, that component is fairly significant to the successful
operation of the mission. The numeric value of 5 indicates the relative importance.
Without prior knowledge of the mission or how the systems operate, a determination can be
made of the relative importance of the systems with a quick glance. Another potential
representation of this data would be as a topological map imposed on the facility. The highest
point would still be the electrical systems. When attempting to control or secure an area that is
on low ground surrounded by high ground, are defenses placed on the low ground or the high
ground? This diagram, and the method for generating it, should help make that decision, back it
up with numeric values based on the infrastructure in place, and let the installation commander
apply resources as he or she sees fit.
Each layer beyond the initial layer of utilities, systems, and people may comprise a system in
and of itself that needs to be identified by its boundaries. Two separate control systems
running two segmented parts of a system would be two different representations linked by a
physical process-system connection. Physical and cyber systems should not be combined, so the
impacts of one on the other can be seen. The cyber system should be attached to, and will inherit
the value from, the process. An example is shown in Figure Atch1-2 below. The SCADA/ICS
cyber systems may traverse several network boundaries; this is also shown in the
figure, where the system on Substation A and the system on Electrical Supply External use
a Leased Line to communicate. The Leased Line is owned by a different group, and the
communications between the two systems depend on it. It is not uncommon to have shared
data highways, such as a fiber optic ring, that infrastructures use to communicate. Treat shared
interconnects (e.g., a fiber optic ring with the associated switch gear) as a system/process.
Use the following types of diagrams and drawings to assist in creating the interdependency
diagram:
• Network diagrams
• Cabinet drawings
• Electrical drawings
• Process diagrams
• Site location diagrams
Use names or locations for tie-in points to components that are connected to multiple
components, or the diagram may become overly cumbersome. An example of the problem is
shown in Figure Atch1-3. The VLANB switch has multiple cyber connections that overlap other
systems. While this does represent the connections, it can make the diagram difficult to
read.
Figure Atch1-3. Congested Dependencies
The diagram itself should show the dependencies of each system relative to the mission going
outward until the installation commander reaches a point where he/she no longer has
ownership. At that point, the system dependencies end as indicated by circles in the diagrams
above.
The numeric values are assigned to each process or system based on this one rule:
• A system inherits the values of all of those systems that depend on it.
o The mission value is set to 1.
o All other systems derive their values from the mission value.
o Circular connections are handled consistently (choose either to count them or not, and
apply that choice throughout).
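This inheritance rule can be sketched in Python. The sketch assumes the dependency map records, for each system, the systems that depend directly on it (matching the "Dependants" column of the example table), and that circular connections have already been resolved; function and system names are illustrative, not part of the handbook's tooling.

```python
# A minimal sketch of the inheritance rule above, under the assumptions
# stated in the lead-in.
def relative_importance(dependents, mission="Mission"):
    """Compute Zi for each system: the mission is valued at 1, and every
    other system inherits (sums) the values of all systems that depend
    directly on it. dependents[s] lists the systems that depend on s.
    Assumes circular connections have already been broken consistently."""
    memo = {}

    def zi(system):
        if system == mission:
            return 1.0
        if system not in memo:
            memo[system] = sum(zi(d) for d in dependents.get(system, []))
        return memo[system]

    return {s: zi(s) for s in dependents}

# Illustrative subset of the example table:
deps = {
    "Mission": [],
    "Backup Generator": ["Mission"],
    "Fuel Supply": ["Mission", "Backup Generator"],
    "Cyber System Fuel Supply": ["Fuel Supply"],
}
values = relative_importance(deps)  # Fuel Supply inherits 1 + 1 = 2
```

A component two hops from the mission, such as the fuel supply's cyber system, simply inherits the fuel supply's aggregated value, mirroring the spreadsheet formulas shown in the example table.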
The easiest way of generating these values is to use a database or a table in Excel. An example
table is shown in Table Atch1-1. Do not generate these values while determining dependencies;
attempting to do so will not benefit the facilitated meeting. The table should contain these
columns: Process/System Boundary, Zi (relative importance), Dependents, and, optionally, the
geographic information system (GIS) coordinates of the Process/System.
The cells should be linked as indicated by the cell references depicted. The GIS data are the GIS
coordinates of that component; they are optional. Zi is the aggregated impact or importance
value of the component based on dependencies.
Table Atch1-1. Example interdependency table. Column A holds the spreadsheet row number,
column B the Process/System Boundary, column C the Zi value [=SUM(D1…Dm)], and columns
D–H the dependents D1–D5 (the optional GIS Data column is blank and not shown).

 A   Process / System Boundary (B)             Zi (C)           Dependents (D–H)
 2   Mission                                   1
 3   Cyber System Mission                      =SUM(D3:H3)      =C2
 4   Backup Generator                          =SUM(D4:H4)      =C2
 5   Fuel Supply                               =SUM(D5:H5)      =C2  =C4
 6   Cyber System Fuel Supply                  =SUM(D6:H6)      =C5
 7   Cyber System Fuel Supply DMZ              =SUM(D7:H7)      =C6  =C9
 8   IT System Fuel Supply Mgmt                =SUM(D8:H8)      =C7
 9   Cyber System Fuel Supply Tank Farm        =SUM(D9:H9)      =C10
10   Fuel Supply Tank Farm                     =SUM(D10:H10)    =C5
11   Substation A                              =SUM(D11:H11)    =C17 =C12 =C9
12   Substation B                              =SUM(D12:J12)    =C2  =C16 =C5  =C20 =C7
13   Cyber System Substation A                 =SUM(D13:H13)    =C11
14   Cyber System Substation B                 =SUM(D14:H14)    =C12
15   Leased Line                               =SUM(D15:H15)    =C13
16   Water Supply                              =SUM(D16:H16)    =C2
17   Water Supply Tanks                        =SUM(D17:H17)    =C16
18   Electrical Supply External                =SUM(D18:H18)    =C11
19   Cyber System External Electrical Supply   =SUM(D19:H19)    =C18
If this were mapped to a GIS map using the relative importance as an alternative elevation, the
data would represent terrain that needs securing. The lower elevations are the items of
interest, and the areas of higher relative importance would be key locations from which to control
the region. The scope of this project prevents the creation of a graphical tool kit, so variations
in the graphical depiction will depend on the people contributing to this activity. A white
board "exercise" would also work to create a physical image depicting the interdependencies.
The facilitator should use materials at hand. A large white board, poster-sized paper
hung on the wall, or poster-sized paper on a tabletop are examples of suitable media.
1. Starting with the mission, draw an object and label it “mission” or use the proper
mission name.
2. Describe the mission and its functions to the assembled experts and draw radial lines
outward from the mission object to show dependencies.
Example: The facilitator makes the statement, "The mission is to provide bombers, which
require maintenance, fuel, runways, ordnance, and crew." The facilitator draws radial lines
outward connecting the mission object to objects labeled "maintenance," "fuel," "runways,"
"ordnance," and "crew."
3. The experts assembled should represent people knowledgeable about each function
on which the mission relies. Some experts will know about several functions. In an
orderly fashion, capture everyone's input, drawing objects and connecting lines to
show the systems/processes and dependencies. Use arrows if the dependency is one
way, with the arrow pointing toward the downstream or consumer component. Resolve
conflicts in a professional manner. Resolution may take the form of a field trip, a field
test, or a discussion. If the resolution must be postponed until after the facilitated
session, document the object with a question mark to show uncertainty.
Table Atch1-2. Number of Reported Vulnerabilities
Access to a control system allows users to perform actions. Stuxnet showed how important this
is. The vulnerabilities used by Stuxnet were not vulnerabilities in the Siemens software; they
were vulnerabilities in the operating system. Once on the consoles, Stuxnet made use of the
Siemens software to perform tasks it was designed to do. Any vulnerability that allows
arbitrary execution of code can allow malicious software access to control system functions
that are available to the user account the vulnerable program is using.
Control systems typically use one of three methods of user access control:
1. The first is no security. The software will run as the operating system account currently
logged in. These types of systems often run as an administrative level user. If one can
log into the console, one can perform any action on the system such as opening
breakers or valves, adjusting set points, or downloading new configuration to the field
controllers.
2. The second method is a custom user account manager on top of the operating system
accounts. This method can result in security being turned on or off for the control system,
and in circumstances where no user accounts exist for the control system, thereby locking
the console until it is rebuilt from an image or reinstalled. This method will typically use
an auto-login account for the operating system and then have the operations personnel
use their own custom user accounts to gain access to the control system interfaces. The
auto-login account is often an administrative-level user.
3. The third method is to use accounts integrated into the operating system user accounts.
This is more common in systems designed after 2001. These will be a mix of user
accounts with role-based privileges. A look at the processes running on the console will
show a number of control-system-specific user accounts that likely have
administrative rights, which are used to keep key system functions operational.
This is why software management and system monitoring are important for control systems.
Assume that the system can be compromised, then watch the system for aberrant behavior
indicating unstable code. Achieving this level of monitoring takes resources in the form of
people, procedures, and technology, all of which cost money to deploy and maintain. In the
previous section, the interdependencies of the infrastructure were determined and a table was
built. The relative importance to the mission was determined for each system. That value does
not take into consideration operational conditions or mitigation measures in place. The
following columns should be added to the table of relative importance:
• Maintenance (patching, evaluating/testing patches, etc.) performed regularly for
o Operating system
o Hardware
o Third-party software
o Control system software
o Customized software
• System monitoring frequency (how often is the system used/observed)
• System log (all logs) monitoring frequency
• Physical connections
The resulting table with values is shown in Table Atch1-3.
Table Atch1-3. Operational Considerations for Relative Importance (columns I–P added to the
interdependency table).

 A   Process / System Boundary     I: OS  J: HW  K: 3rd-Party SW  L: Control System SW  M: Customized SW  N: Monitoring Freq  O: Log Monitoring Freq  P: Connections
 2   Mission
 3   Cyber System Mission          1      0      0                1                     1                 0.2                 1                       1
 4   Backup Generator
 5   Fuel Supply
 6   Cyber System Fuel Supply      1      1      1                1                     1                 0.1                 1                       2
 7   Cyber System Fuel Supply DMZ  0      1      1                0                     1                 0.4                 1                       2

Legend:
Operating System, Hardware, Third-Party Software, Control System, Customized Software: value of 1
if this needs attention; value of 0 if this is maintained and fully patched.
Monitoring Frequency: value based on the frequency of operations monitoring. Continuous: 0.1,
Hourly: 0.2, Daily: 0.4, Weekly: 0.8, Monthly: 1.0, Yearly: 2.0, More: 4.0.
Log Monitoring Frequency: value based on the frequency of monitoring any logs, on the same scale.
Connections: number of interfaces. Console +1; network (wired/wireless) +1;
USB/serial/FireWire/CD-ROM/DVD etc. +1 (max 3).
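The legend's scoring scheme can be expressed as a small lookup, a sketch assuming the frequency labels and the connection-counting rule given in the legend (identifier names are illustrative):

```python
# Frequency-to-value mapping from the Table Atch1-3 legend; it applies to both
# the operations-monitoring (N) and log-monitoring (O) columns.
FREQUENCY_VALUE = {
    "continuous": 0.1, "hourly": 0.2, "daily": 0.4, "weekly": 0.8,
    "monthly": 1.0, "yearly": 2.0, "more": 4.0,
}

def connections_value(console: bool, network: bool, removable_media: bool) -> int:
    """Connections column (P): console +1, wired/wireless network +1,
    USB/serial/FireWire/optical media +1, for a maximum of 3."""
    return int(console) + int(network) + int(removable_media)
```

For example, a console with a wired network interface but no removable media scores 2 on the Connections column, matching the Cyber System Fuel Supply row above.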
The calculation for the relative importance of interdependent systems (Zi) was the sum of the values
of the dependencies, shown as the yellow-highlighted cell, C3 of Table Atch1-4.

Table Atch1-4 (excerpt):
 A   Process / System Boundary   Zi (C) [=SUM(D1…Dm)]   Dependents (D–H)
 2   Mission                     1
 3   Cyber System Mission        =SUM(D3:H3)            =C2
The added columns I through P will be used to calculate the relative importance to mission,
modified by operational considerations. This value will be called the Cyber Readiness.
Attention should be given to the entries with higher values.
An additional column should be added to the table so the calculation can be automated. For the
purposes of this calculation, row 3 of the table is used; all cells use this row reference, and C3
represents Zi.
Cyber Readiness
= LOG10(C3) * SUM(I3:M3) * N3 * O3 * P3
These values are then the risk prioritizations of the ICS/cyber components that support the functions
on which missions rely. Higher values represent greater risk to the system. Some systems will
have mitigations already in place; a conversation with the system owner can determine whether
this is the case. The judgment of where to place resources should never rest solely on the numeric
value; however, the numeric value can assist in making that determination.
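A minimal Python sketch of the Cyber Readiness formula above (the function name and the sample row values are illustrative, and Zi = 2 is assumed for the example):

```python
import math

def cyber_readiness(zi, maintenance_flags, monitoring_freq, log_freq, connections):
    """Cyber Readiness = LOG10(Zi) * SUM(I:M) * N * O * P, per the formula above.

    zi                -- relative importance (column C)
    maintenance_flags -- the five 0/1 maintenance columns I-M
    monitoring_freq   -- operations monitoring value (column N)
    log_freq          -- log monitoring value (column O)
    connections       -- interface count (column P)

    Note that log10(1) = 0, so a component whose Zi is exactly 1 scores 0
    regardless of its other columns.
    """
    return (math.log10(zi) * sum(maintenance_flags)
            * monitoring_freq * log_freq * connections)

# Illustrative values modeled on the Cyber System Fuel Supply row,
# assuming Zi = 2 for this sketch:
score = cyber_readiness(2, [1, 1, 1, 1, 1], 0.1, 1.0, 2)
# score ≈ 0.301
```

Continuous monitoring (0.1) sharply reduces the score, while a fully unpatched stack (all five flags at 1) and extra interfaces multiply it upward, which matches the intent that higher values flag components most in need of attention.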
ATTACHMENT 2 CHECKLIST OF RECOMMENDED ACTIONS
The following tabular format checklist presents recommendations made earlier in the handbook
using a modified DOTMLPF-F37 construct. The checklist does not cover every last action that may be
taken to secure installation ICS. Additional actions may be identified during assessment or even in
the midst of implementation. Also, this is generic, meaning that applicability is broad rather than
specific. Each installation will have differences, in some cases significant, in control systems
architectures, security measures already in place, organizational and personnel management, and
missions. The “one-size-fits-all” approach offered here will indeed yield a more secure ICS, but a
closer fit will require tailoring (such as using other tools, requesting assistance of SMEs, etc.).
Actions are not listed in a particular order, except that policy should first be well-established so as to
facilitate implementing actions in other areas. Nor do actions need to be implemented sequentially;
many actions may be undertaken in parallel.
NOTE: A separate table may be used for each type of ICS or by mission supported.
Blank rows are included at the end of each “Focus” section for installation-specific additions.
37
Modified by replacing the “D” with a “C” for cybersecurity.
TITLE <name of control system, infrastructure, or mission>
MISSION(S) SUPPORTED:
OTHER INFORMATION:
Columns: FOCUS | ACTION | COMMENTS | PRI # | POC | DATE ASSGND | DATE DONE
POLICY
Review ICS policy requirements with ICS Security Team
• Access control
• Security of assets
• Configuration control software
• Patch management
• Inventory accounting
• Education, training & exercises
Review ICS service level agreements with vendors and integrators
   Comment: Changes to ICS systems often require vendor and/or integrator approval or support,
   which may not be covered in existing service level agreements
Set software and SDLC requirement standards for ICS procurements
   Comment: See the DHS Cyber Security Procurement Language for Control Systems document,
   http://www.us-cert.gov/control_systems/pdf/FINAL-Procurement_Language_Rev4_100809.pdf
LEADERSHIP
Promulgate policies
Collaborate with ICS vendors and service providers
   Comment: Focus should be on security and training
to INFOCON, FPCON)
• Continuity of Operations
Add key ICS information to the Commander's Critical Information List
   Comment: Think of most if not all ICS-related information as at least FOUO
PERSONNEL
Train all ICS managers, operators & users
   Comment: Include policies, roles, security, incident response handling, etc.
Create an ICS incident response team
   Comment: Can model on existing IT CERT or on DHS's ICS-CERT
Enforce system access controls
   Comment: Includes network (logons) and physical (cipher-locked doors)
Maintain rosters for access to physical facilities
Immediately delete all access (physical and system) of those who resign, retire, move, are
fired, etc.
   Comment: This must include third-party vendors, contractors, etc. as well as direct employees
   and military
TRAINING
Ensure ICS-specific training prior to granting individuals access
Require IA training (initial & refresher) for all ICS managers, operators & users
   Comment: In some cases, users of IT components of ICS are overlooked in IA training
ORGANIZATION
Appoint an ICS IAM
   Comment: Most installations with DOD networks already have an assigned IT network IAM; an
   ICS IAM is distinctly separate and trained specifically to ICS issues (but will coordinate
   with IT IAM)
Assign responsibility for ICS configuration control
at least:
• Commander
• CEs/PWs
• Communications/IT
• Operations
Identify leads for developing ICS-specific plans
   Comment: Or for incorporating ICS considerations into existing plans
Publish chain-of-command for incident response
FACILITIES
Create a map/chart/topology of all physical facilities
   Comment: Include buildings, rooms, panels, cabling, etc.
Identify & inspect all physical facilities
Identify and secure portable assets
   Comment: For example, fly-away kits, laptops, spares. Depict their locations on the facility map
Secure all cable terminations (their housings)
   Comment: Wiring termination boxes often are located in isolated areas and with only minimum
   security controls (e.g., easily cut padlock, wire with lead breakage seal)
MATERIEL
Document the entire ICS infrastructure
   Comment: Include logic diagrams, data flows, dependencies, and particularly connection to
   mission/mission support assets
Establish acquisition policy and process
Require testing of any new component or program off-line
   Comment: Always test before adding to the live infrastructure
Identify and control all ICS documentation and software media
CYBER SECURITY
• Define & defend perimeters; approaches may include segmentation, DMZs, and enclaves - Part of a comprehensive defense-in-depth strategy
…put in place incident evaluation rather than through security event monitoring; make sure ICS admins are reviewing their systems for security-impacting events
• Use (and keep current) virus-checking software - Virus detection programs may be difficult to update on live systems, and therefore will require diligence in maintaining currency
• Implement security policies per vendor best security practices - Security configurations should be done on each OS in addition to the external access list controls like port configuration; this means disabling autorun, limiting remote registry access, etc.
• Protect data:
  • Encrypt data in motion (at least on the IT side) - Probably not able to encrypt data on the purely ICS component side, such as between a PLC and a master server; definitely unable to encrypt between a PLC and a sensor
  • Back up system routinely and keep backups secure & accessible - A backup held exclusively by a vendor will not be "accessible" in certain circumstances, such as FPCON Charlie or Delta
…systems to the new system without being checked; malware authors hide RAT software in them
  • Map the flows of critical data at least annually to ensure data is being protected and accessed appropriately - Identify and map exactly where critical data goes throughout its usable life cycle to ensure you know exactly where it can be accessed and what protections are in place; use a data flow diagram and threat modeling tools to ensure appropriate trust boundaries and technical security controls are in place
…nature of ICS networks make the WAF easier to implement
• Implement best security practices per browser vendor - OS security is not sufficient to defend against endpoint attacks over port 80; browser-level security controls need to be put in place as well
• Use native browser tools and third-party browser security applications - Use of tools like NoScript or BetterPrivacy prevents automatic execution of scripts within the browser and prevents auto-execution of malware via browser components or web services
• Develop and implement web component and services software development lifecycle - Web service and web component attacks against the browser are a big deal; browser plug-ins, extensions, and services must be controlled to prevent attack methodologies such as JIT spraying, ROP, etc.
• Incorporate web app security touchpoints into the standard development lifecycle of all web-enabled software - Make sure vendors, integrators, and in-house development teams are testing their web apps and have a mature software security development program for any software deployed on a system
…long-term storage of web service information like cookies and temporary cache - gaining access via long-term cookie and data storage by web services
• Modems/dial-up
• Wireless
• Cable/DSL
• Fiber-optic
• Satellite
• Ethernet
• Cellular
• Identify and manage all messaging service access points; disable those not needed and protect all others:
  • Unified communications solutions
  • Remote hardware management tools
ATTACHMENT 3 COMMITTEE ON NATIONAL SECURITY SYSTEMS INSTRUCTION 1253 ICS
OVERLAY VERSION 1
BACKGROUND
The CNSSI 1253 ICS Overlay Version 1 was developed by a Technical Working Group (TWG) chartered by
the Installations and Environment, Business Enterprise Integration office in 2012. The TWG was
composed of Subject Matter Experts from DoD, DHS, and NIST and the JTANIICS staff. The overlay was
published in January 2013 and incorporated into the DHS CSET 5.1 tool, released in June 2013.
The intent of the overlay was to bridge the gap between the need for a standardized process across
the DoD to address the growing concern about the lack of ICS cybersecurity and the need for a basic
"primer" that the engineering, information technology, and information assurance communities could
use in preparation for the DoD issuance of the new DoDI 8500 Cybersecurity instruction and adoption
of the NIST Risk Management Framework (RMF). The RMF is replacing the Defense Information Assurance
Certification and Accreditation Process (DIACAP).
At the same time the CNSSI ICS Overlay was being developed, NIST was also updating the NIST SP
800-53 Rev 3 Recommended Security Controls For Federal Information Systems and Organizations,
and the NIST 800-82 Industrial Control Systems Security Guide. As part of the update process, NIST
moved the NIST SP 800-53 Rev 3 Appendix I Industrial Control Systems Security Controls,
Enhancements, and Supplemental Guidance to the NIST 800-82 Rev 1 Appendix G. This kept the ICS
guidance and controls in one contiguous document. NIST is in the process of updating the NIST SP
800-82 to incorporate the new controls developed in the NIST SP 800-53 Rev 4, with a target date of a
spring 2014 release.
The NIST SP 800-82 writers group has incorporated a great deal of the CNSSI 1253 Overlay Version 1
content into the draft NIST 800-82 Rev 2, and has created a master ICS Overlay template that can be
used by any organization.
Future versions of the CNSSI 1253 ICS Overlay may simply refer to the NIST SP 800-82 Rev 2 or will be
a very condensed version of this overlay.
105
Committee on National Security Systems Instruction
(CNSSI) No. 1253
This Draft version of the Overlay is for informational and instructional purposes only and is meant
to be used as a companion to the DHS Cybersecurity Evaluation Tool (CSET). The Draft version is
based on the NIST SP 800-53 Rev 3 publication. The Final version is being revised to follow the
new format of the NIST SP 800-53 Rev 4 publication.
The National Institute of Standards and Technology (NIST) created NIST Special Publication
(SP) 800-53, “Recommended Security Controls for Federal Information Systems and
Organizations,” to establish a standardized set of information security controls for use within the
United States (U.S.) Federal Government. As part of the Joint Task Force Transformation
Initiative Working Group, the Committee on National Security Systems (CNSS) has worked
with representatives from the Civil, Defense, and Intelligence Communities to produce a unified
information security framework and to ensure NIST SP 800-53 contains security controls to meet
the requirements of National Security Systems (NSS).
Security control overlays are specifications of security controls and supporting guidance used to
complement the security control baselines and parameter values in CNSSI No. 1253 and to
complement the supplemental guidance in NIST SP 800-53. Organizations select and apply
security control overlays by using the guidance in each of the standardized, approved and CNSS-
published overlays.
The Industrial Control Systems (ICS) Overlay applies to Platform IT (PIT) systems. As stated in
DoDD 8500.01 Cybersecurity Directive, Enclosure 3, “Examples of platforms that may include
PIT are:
“weapons, training simulators, diagnostic test and maintenance equipment, calibration
equipment, equipment used in the research and development of weapons systems,
medical technologies, vehicles and alternative fueled vehicles (e.g., electric, bio-fuel,
Liquid Natural Gas that contain car-computers), buildings and their associated control
systems (building automation systems or building management systems, energy
management system, fire and life safety, physical security, elevators, etc.), utility
distribution systems (such as electric, water, waste water, natural gas and steam),
telecommunications systems designed specifically for industrial control systems to
include supervisory control and data acquisition, direct digital control, programmable
logic controllers, other control devices and advanced metering or sub-metering, including
associated data transport mechanisms (e.g., data links, dedicated networks).”
ICSs are physical equipment oriented technologies and systems that deal with the actual running
of plants and equipment, include devices that ensure physical system integrity and meet technical
constraints, and are event-driven and frequently real-time software applications or devices with
embedded software. These types of specialized systems are pervasive throughout the
infrastructure and are required to meet numerous and often conflicting safety, performance,
security, reliability, and operational requirements. ICSs range from non-critical systems, such as
those used for building environmental controls (HVAC, lighting), to critical systems such as the
electrical power grid.
Within the controls systems industry, ICS systems are often referred to as Operational
Technology (OT) systems. Historically, the majority of OT systems were proprietary, analog,
vendor supported, and were not internet protocol (IP) enabled. Key system components, such as
Remote Terminal Units (RTUs), Programmable Logic Controllers (PLCs), Physical Access
Control Systems (PACs), Intrusion Detection Systems (IDSs), closed circuit television (CCTV),
fire alarm systems, and utility meters are now becoming digital and IP enabled. OT systems use
Human Machine Interfaces (HMIs) to monitor the processes, versus Graphical User Interfaces
for IT systems, and most current ICS systems and subsystems are now a combination of
Operational Technologies (OT) and Information Technologies (IT).
An emerging concept in technology is to refer to the hybrid OT and IT ICS systems as cyber-
physical systems (CPS). As defined by the National Science Foundation:
“cyber-physical systems are engineered systems that are built from and depend upon the
synergy of computational and physical components. Emerging CPSs will be coordinated,
distributed, and connected, and must be robust and responsive. The CPS of tomorrow
will need to far exceed the systems of today in capability, adaptability, resiliency, safety,
security, and usability. Examples of the many CPS application areas include the smart
CNSSI No. 1253 108 Attachment 1 to Appendix K
As these new technologies are developed and implemented, this Overlay will be updated to
reflect advances in related terminology and capabilities. This Overlay focuses on the current
generation technologies already in the field, and the known technologies likely to remain in
inventory for at least the next ten years.
Figure 1 is a typical electrical supervisory control and data acquisition (SCADA) type system
which shows the HMI at the operators console, the transmission system infrastructure, and the
RTU in the field. At the substation and building level, the meters are monitored in a local Energy
Operations Center (EOC) or Regional Operations Center (ROC), which use real time analytics
software to manage the energy loads and building control systems, down to the sensor or actuator
device level.
ICSs differ significantly from traditional administrative, mission support and scientific data
processing information systems, and use specialized software, hardware and protocols. ICS
systems are often integrated with mainstream organizational information systems to promote
connectivity, efficiency, and remote access capabilities. The “front end” portions of these ICSs
resemble traditional information systems in that they use the same commercially available
hardware and software components. While the majority of an ICS system still does not resemble
a traditional information system (IS), the integration of the ICS’s “front end” with IS introduces
some of the same vulnerabilities that exist in current networked information systems.
ICSs can have long life spans (in excess of 20 years) and be composed of technology that, in
accordance with Moore's law, suffers rapid obsolescence. This introduces two issues: first,
The pictures and devices shown in this Overlay are for illustrative purposes only and are intended to show typical
field devices that are the core elements of OT. This Overlay document does not endorse any specific vendor or
product.
While many Information Assurance (IA) controls from the baselines can be applied to an ICS,
how they are implemented varies, primarily because of technical and operational constraints and
differences in the evaluation of risk between ICSs and standard ISs. Interconnections between
ICSs and the organizational network and business systems expose ICSs to exploits and
vulnerabilities, and any attempts to address these exploits and vulnerabilities must consider the
constraints and requirements of the ICS. These constraints can be either technical (most ICS
components have limited storage and processing capacity) or practical: most ICSs are funding and
personnel constrained, so resources allocated to IA are taken from other functions (such as
maintenance), which often adversely impacts the function of the ICS. Unlike most ISs,
ICSs are driven primarily by availability, which requires a different approach to making IA
decisions.
Interfaces - ICS: electromechanical, sensors, actuators, coded displays, hand-held devices; IT: GUI, Web browser, terminal and keyboard.
“All interconnections of DoD IT will be managed to minimize shared risk by ensuring that
the security posture of one system is not undermined by vulnerabilities of interconnected
systems.
Interconnections between PIT systems and DoD ISs must be protected either by
implementation of security controls on the PIT system or the DoD IS.”
In this Overlay, the terms IT and OT are used to define the Tiers Architecture and delineate the
boundary between the PIT and DoD IS.
Implementing security controls in ICS environments should take advantage of the concept of
common security controls in order to mitigate the constraints caused by the characteristics of
those environments. By centrally managing and documenting the development, implementation,
assessment, authorization, and monitoring of common controls and security services,
organizations can decrease the resources associated with implementing security controls for
individual systems without sacrificing the protections provided by those controls, and security
controls are then implemented as part of a greater security environment. For example, a control
implemented within a service provided by an EOC may be inherited by a system on a mobile
platform (e.g., field power generation). Networked systems can, and will, depend on one another
for security protections or services as part of a defense in depth strategy. Controls meant to be
common across connected systems – access controls, for example – can be provided once and
inherited across many, assuming the implementation is adequate to support multiple systems.
Every implementation of common controls requires analysis from a risk management
perspective, with careful communication among representatives from all interconnected systems
and organizations. The new Advanced Meter Infrastructure shown in Figure 2 is being installed
on Department of Defense (DoD) buildings, and highlights the challenges and complexities of
the new hybrid OT systems. Unfortunately, due to the issues associated with implementing IA
for older ICSs, many older (legacy) ICSs will operate in isolation and may be unable to make use
of many of the common controls.
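The inheritance idea above can be sketched as a simple lookup: a connected system picks up the common controls its provider implements, while an isolated legacy ICS keeps only its own. All control IDs and provider names below are illustrative assumptions.

```python
# Provider -> set of common controls it implements (hypothetical excerpt).
COMMON_CONTROLS = {"EOC": {"AC-17", "AU-6"}}

def effective_controls(own_controls, providers, connected):
    """A system's controls = its own plus those inherited from reachable providers."""
    inherited = set()
    for provider in connected:
        inherited |= providers.get(provider, set())
    return own_controls | inherited

# A mobile platform connected to the EOC inherits its common controls;
# an isolated legacy ICS cannot, and keeps only what it implements locally.
mobile_gen = effective_controls({"PE-3"}, COMMON_CONTROLS, connected=["EOC"])
legacy_ics = effective_controls({"PE-3"}, COMMON_CONTROLS, connected=[])

assert "AC-17" in mobile_gen
assert legacy_ics == {"PE-3"}
```

As the text notes, each inheritance still needs a risk-management review: the sketch only shows the bookkeeping, not the adequacy of the shared implementation.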
Figure 2 – Advanced Meter Infrastructure (Smart Meters) for Electric, Water, Gas
3-FS: A Tier 3 FPOC gateway (application layer proxy). The Tier 4 network is Modbus over IP over 10/100 Mbps Ethernet. The Tier 1 network is proprietary over proprietary 2-wire media. Note this is the same device as in 2C-FS.
3-LIP: A Tier 3 FPOC router between 3 LonTalk networks: Tier 1 Lon over TP/FT-10, Tier 1 Lon over TP/FT-10, and Tier 4 Lon over IP over Ethernet.
1A-VAV: VAV box controller with no analog inputs or outputs. Also incorporates dedicated actuator and pressure sensor (normally Tier 0 devices). Network is LonTalk over TP/FT-10 media at 78 Kbps.
1A-LGR: Programmable controller with multiple analog inputs and outputs. Primary network is BACnet over Ethernet (not IP) media at 10/100 Mbps. Also supports BACnet over MS/TP media and proprietary protocol over RS-485 media. Can also be in Tier 2.
1N-Lswitch: LonTalk router between 2 TP/FT-10 (media) network segments. Also has RS-232 console port for configuration (generally not used).
The ICS Architecture is described in five Tiers (and multiple sub-tiers), where each tier
represents a collection of components that can be logically grouped together by function and IA
approach. There are several critical considerations to the tiered architecture:
1) Not every implementation of an ICS will make use of every tier;
2) The same device may reside in different tiers, depending on its configuration. For
example, some BACnet controllers may support different networks based on a dual in-line
package (DIP) switch, and thus the same device could reside in either Tier 1 or Tier 2.
3) In some cases, a single device may simultaneously fit into two principal tiers. For
example, a device may act as both a Tier 2 controller and a Tier 3 Facility Point of
Connection (FPOC).
4) In many cases, a device will fit multiple sub-tiers within the same principal tier, usually
within Tier 2. For example, a Tier 2A BACnet controller will often act as a Tier 2C
router to a Tier 1 network beneath it.
5) A single device may belong in different tiers, depending on the specific architecture. For
example, the Tier 2A/2C controller in the example above may, in a small system, be the
only IP device, in which case it is also the Tier 3 FPOC. In a larger system, there would
be multiple IP devices and the upstream IP device (EUB switch or router) would be the
Tier 3 FPOC.
Tier 5 - "External" Connection and Platform Information Technology (PIT) Management ("External" Connection between PIT and IP Network External to PIT; Platform IT System Management)
Function: In many architectures, this tier provides the enclave boundary defense between the PIT (at Tiers 4 and below) and IP networks external to the PIT. (In other architectures, this boundary defense occurs in the external network.) In many cases, there is a component within the PIT which would reside in Tier 5. This tier may be absent for a variety of reasons: there may not be an external connection, or the connection may be handled in the external network. Generally speaking, from the perspective of ICS functionality, this connection should be severely restricted, if not eliminated entirely. The ICS can function in a completely isolated configuration. Additional functionality allowed through external connections includes:
• Sending alarm notification using outbound access to an SMTP email server.
• Upload of historical data and meter data to an enterprise server using outbound HTTP/HTTPS access.
Typical components: firewalls; DMZ/perimeter networking; proxy servers; domain controllers; point of presence; demarcation point or main point of presence.
Typical personnel: IT and communications staff and contractors.
Typical networks: wide area networks (WANs); metropolitan area networks (MANs); local area networks (LANs); campus area networks (CANs); virtual private networks (VPNs).
IA approach: This tier should implement a "deny all / permit exception" policy to protect the PIT from the external network and the external network from the PIT.
Tier 3 - Facility Points of Connection (FPOCs)
Function: In many cases, there is a single Tier 2A controller in the system (generally with a Tier 1 network beneath it). In these cases, we may consider the controller the FPOC, or we may consider the upstream IP networking hardware (EUB switch or router) to be the FPOC. Similarly, a device normally at Tier 2C could be the Tier 3 FPOC. Finally, we may have a Tier 2D computer which is the only IP device in a stand-alone system; this may be considered the FPOC. Note that a large base-wide system will have hundreds of these devices, one at each connection of a field control system to the base-wide system.
IA approach: This device should, in effect, have a "deny all / permit exception" policy applied. In many cases, this is inherent in the design of the network: a Tier 1 (non-IP) network inherently "denies" all protocols other than its specific control protocol. In other cases, this device may be a gateway ("application layer proxy") that does not permit any networking traffic through it and only supports a very limited set of control functionality to pass. These devices tend to be very "dumb" devices and may not support many of the IA controls, but the critical "deny all / permit exception" approach should be designed into the device. Where this tier is an upstream IT device, it should be set up with the most restrictive set of access control lists (ACLs) possible.
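The "deny all / permit exception" policy described above can be illustrated with a minimal sketch: traffic is dropped unless it matches an explicitly enumerated exception. The protocol/port pairs below are hypothetical examples, not a recommended exception list.

```python
# Exceptions explicitly permitted through the boundary device: (protocol, port).
ALLOW_RULES = [
    ("udp", 47808),  # e.g., BACnet/IP supervisory traffic (illustrative)
    ("tcp", 502),    # e.g., Modbus TCP from the front end (illustrative)
]

def permitted(protocol, port):
    """Default deny: traffic passes only if it matches an explicit exception."""
    return (protocol, port) in ALLOW_RULES

assert permitted("tcp", 502)
assert not permitted("tcp", 80)  # anything not listed, e.g. web traffic, is denied
```

The key design property is the final default: a packet that matches no rule is denied, rather than permitted, so forgetting a rule fails closed.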
The objective of this overlay is to develop the baseline of Low, Moderate, and High Impact
ICSs, and define the accreditation boundary and types of devices typically found on an ICS. A
Low Impact ICS addresses the “80%” non-critical systems (i.e., typical office, administrative,
housing, warehouse, and similar building control systems). Many of the Moderate and High Impact
systems are listed as Task Critical Assets in the Defense Critical Infrastructure Protection (DCIP)
program, and are classified at the Secret level or higher. Figure 10 illustrates the conceptual
types of ICSs and criticality. There are more than 400 major military installations and
operating sites. For this Overlay, the ICS systems PIT boundary is defined as the Enclave and
Installation Processing Node. Currently, there are 611 controls in the CNSSI 1253, of which 197
have been determined to apply for ICSs.
For ICS systems, it is extremely important to recognize that availability is often of much more
significance than confidentiality or integrity. Additionally, ICS systems do not have a distinct
2. Applicability
The following questions are used to determine whether or not this overlay applies to a Low
Impact ICS system:
1. Does the ICS have a CNSSI 1253 C-I-A rating of Low-Low-Low or less, where “loss of
confidentiality, integrity, or availability could be expected to have a limited adverse
effect on organizational operations, organizational assets, or individuals”?
2. Is the ICS part of a real property asset listed in the DoD Federal Real Property Profile
(FRPP) and listed as “Not Mission Dependent – mission unaffected”?
3. Is the ICS designated a prior DoD Information Assurance Certification and Accreditation
Process (DIACAP) “Mission Assurance Category 3 – These systems handle information
that is necessary for the conduct of day-to-day business, but does not materially affect
support to deployed or contingency forces in the short-term”?
If the answer is yes to any of the questions, STOP here and use the Low Impact
overlay.
The following questions are used to determine whether or not this overlay applies to a Moderate
ICS system:
1. Does the ICS have a CNSSI 1253 C-I-A rating of Moderate-Moderate-Moderate or less,
where “loss of confidentiality, integrity, or availability could be expected to have a
serious adverse effect on organizational operations, organizational assets, or
individuals”?
2. Is the ICS part of a real property asset listed in the DoD FRPP and listed as “Mission
Dependent, Not Critical – does not fit into Mission Critical or Not Mission Dependent
categories”?
3. Is the ICS designated a prior DIACAP “Mission Assurance Category 2 – Systems
handling information that is important to the support of deployed and contingency forces.
Loss of availability is difficult to deal with and can only be tolerated for a short time”?
4. Has the installation commander designated the ICS as producing critical information?
If the answer is yes to any of the questions, STOP here and use the Moderate Impact
overlay.
If the answer is yes to any of the questions, the High Impact overlay applies. In some
of the highest criticality installations the addition of controls during tailoring (above
and beyond the overlay) will be required.
If you did not answer yes to any of these questions, go back and re-evaluate your answers to the
questions related to Low and Moderate systems.
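The overlay-selection logic above can be sketched as a first-match decision: the first impact level with any "yes" answer wins. The answer lists are hypothetical; the High Impact questions (elided in the source at this point) are represented only as a third list of answers.

```python
def select_overlay(low_answers, moderate_answers, high_answers):
    """Return which overlay applies: the first tier with any 'yes' (True) answer."""
    if any(low_answers):
        return "Low"        # STOP here and use the Low Impact overlay
    if any(moderate_answers):
        return "Moderate"   # STOP here and use the Moderate Impact overlay
    if any(high_answers):
        return "High"       # highest-criticality cases may add controls in tailoring
    return "re-evaluate"    # go back to the Low and Moderate questions

# Hypothetical answer sets for the three Low questions and four Moderate questions:
assert select_overlay([False, True, False], [], []) == "Low"
assert select_overlay([False] * 3, [True, False, False, False], []) == "Moderate"
```

The ordering matters: a system that answers "yes" to a Low question never reaches the Moderate or High checks, mirroring the "STOP here" instructions above.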
3. Implementation
• ANSI/ISA 99.00.01 2007 Security for Industrial Automation and Control Systems
• CNSSI No. 1253, Revision 1.1, Security Controls and Control Selections for National
Security Systems, March 2012
The ICS Overlay can apply to all the baselines defined in CNSSI No. 1253. The overlay does
not require any other overlays to provide the needed protection for systems within ICS
environments. Care should be taken when tailoring information systems that contain ICS
information, since numerous security controls are required by legislation, building code, and
transportation code. See Section 7 for the list of security controls required to meet
regulatory/statutory requirements.
The tables below contain the security controls to be tailored from a CNSSI baseline for Low,
Moderate, and High Impact ICS systems:
1. For Low Impact systems, start with a CNSSI L-L-L baseline and use the "Low Overlay"
column. Note that the Low Overlay only removes controls (--); there are no additions.
2. For Moderate Impact systems, start with a CNSSI M-M-M system and use the “Moderate
Overlay" column. Note that the Moderate Overlay only removes controls (--); there are
no additions.
3. For High Impact systems, start with a CNSSI H-H-H system and the "High Overlay"
column. Note that the High Overlay only removes controls (--); there are no additions.
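Steps 1 through 3 above amount to set subtraction: start from the CNSSI baseline for the impact level and drop every control the overlay marks "--". A minimal sketch, using a hypothetical three-row excerpt of the overlay table:

```python
# Control -> (Low, Moderate, High) overlay markings; "--" means remove, per the
# tailoring steps above. This is a tiny illustrative excerpt, not the full table.
OVERLAY = {
    "AC-2(2)": ("--", "NA", "NA"),
    "AC-3(6)": ("NA", "NA", "--"),
    "AU-10":   ("NA", "--", "NA"),
}
COLUMN = {"Low": 0, "Moderate": 1, "High": 2}

def tailor(baseline, impact):
    """Remove from the baseline every control marked '--' for this impact level."""
    col = COLUMN[impact]
    removed = {ctrl for ctrl, marks in OVERLAY.items() if marks[col] == "--"}
    return sorted(set(baseline) - removed)

# A hypothetical three-control excerpt of a CNSSI L-L-L baseline:
assert tailor(["AC-2(2)", "AC-3(6)", "AU-10"], "Low") == ["AC-3(6)", "AU-10"]
```

Because the overlays only remove controls (no additions), the tailored set is always a subset of the starting baseline; any additions happen later, during risk-based tailoring.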
Control Low Overlay Moderate Overlay High Overlay
AC-2(2) -- NA NA
AC-2(3) -- NA NA
AC-2(4) -- NA NA
AC-2(7) -- NA NA
AC-3(4) -- NA NA
AC-3(6) NA NA --
AC-4 -- NA NA
AC-5 -- NA NA
AC-6(1) -- NA NA
AC-6(2) -- NA NA
AC-6(5) -- NA NA
AC-6(6) NA NA --
AC-9 NA -- NA
AC-17(4) -- NA NA
AC-17(5) -- NA NA
AC-17(6) -- NA NA
AC-17(7) -- NA NA
AC-18(2) -- NA NA
AC-18(4) -- NA NA
AC-18(5) -- NA NA
AC-19(1) -- NA NA
AC-19(2) -- NA NA
AC-19(4) -- NA NA
AC-20(2) -- NA NA
AT-3(2) -- NA NA
AT-5 -- NA NA
AU-2(3) -- NA NA
AU-2(4) -- NA NA
AU-3(1) -- NA NA
AU-3(2) -- NA NA
AU-5(1) -- NA NA
AU-5(2) NA -- NA
AU-6(3) -- NA NA
AU-7(1) NA -- NA
AU-8(1) -- NA NA
AU-10 NA -- NA
AU-10(5) NA -- NA
CA-2(1) -- NA NA
CA-3(1) -- NA NA
CA-3(2) NA -- NA
CA-7(1) -- NA NA
CA-7(2) -- NA NA
CM-2(5) -- NA NA
CM-3(4) -- NA NA
CM-4(2) -- NA NA
CM-5(1) -- NA NA
CM-5(2) -- NA NA
CM-5(5) -- NA NA
CM-5(6) -- NA NA
CM-6(3) -- NA NA
CM-7(1) -- NA NA
CM-7(3) -- NA NA
CM-8(4) -- NA NA
CM-8(5) -- NA NA
CM-9 -- NA NA
CP-6 NA -- NA
CP-6(1) NA -- NA
CP-6(3) NA -- NA
CP-7 NA -- NA
CP-7(1) NA -- NA
CP-7(2) NA -- NA
CP-7(3) NA -- NA
CP-7(4) NA -- NA
CP-7(5) NA -- NA
CP-8(1) NA -- NA
CP-8(2) NA -- NA
CP-9(1) -- NA NA
CP-10(2) -- NA NA
IA-2(5) -- NA NA
IA-2(8) -- NA NA
IA-3 -- NA NA
IA-3(1) -- NA NA
IA-3(2) -- NA NA
IA-3(3) -- NA NA
IA-4(4) -- NA NA
IA-5(2) -- NA NA
IA-5(3) -- NA NA
IA-5(4) -- NA NA
IA-5(6) -- NA NA
IA-5(8) -- NA NA
IR-3 -- NA NA
IR-4(1) -- NA NA
IR-4(3) -- NA NA
IR-4(4) -- NA NA
MA-2(1) -- NA NA
MA-2(2) NA NA --
MA-3 -- NA NA
MA-3(1) NA -- NA
MA-3(2) -- NA NA
MA-3(3) -- NA NA
MA-4(2) -- NA NA
MA-4(3) -- NA NA
MA-4(5) -- NA NA
MA-4(6) -- NA NA
MA-4(7) -- NA NA
MA-5(1) -- NA NA
MP-3 -- NA NA
MP-4 -- NA NA
MP-4(1) NA NA --
MP-5 -- NA NA
MP-5(2) -- NA NA
MP-6(3) -- NA NA
MP-6(4) -- NA NA
MP-6(5) -- NA NA
MP-6(6) -- NA NA
PE-2(3) -- NA NA
PE-3(2) -- NA NA
PE-3(3) -- NA NA
PE-5 -- NA NA
PE-9 -- NA NA
PE-9(2) NA -- NA
PE-10 -- NA NA
PE-19 NA -- NA
PE-19(1) NA -- NA
PL-2(1) -- NA NA
PL-2(2) -- NA NA
PL-6 -- NA NA
PS-3(1) -- NA NA
PS-3(2) -- NA NA
PS-6(1) -- NA NA
PS-6(2) -- NA NA
RA-5(1) -- NA NA
RA-5(2) -- NA NA
RA-5(5) -- NA NA
RA-5(7) -- NA NA
SA-4(6) -- NA NA
SA-5(1) -- NA NA
SA-5(2) -- NA NA
SA-9(1) -- NA NA
SA-10 -- NA NA
SA-10(1) -- NA NA
SA-11 -- NA NA
SA-12 -- NA NA
SA-12(2) -- NA NA
SC-2 -- NA NA
SC-2(1) -- NA NA
SC-4 -- NA NA
SC-5(1) -- NA NA
SC-7(1) -- NA NA
SC-7(2) -- NA NA
SC-7(4) -- NA NA
SC-7(5) -- NA NA
SC-7(7) -- NA NA
SC-7(8) -- NA NA
SC-7(12) -- NA NA
SC-7(13) -- NA NA
SC-7(14) -- NA NA
SC-8 -- NA NA
SC-8(2) NA NA --
SC-9 -- NA NA
SC-9(1) -- NA NA
SC-9(2) NA -- NA
SC-10 -- NA NA
SC-11 -- NA NA
SC-12(1) -- NA NA
SC-15 -- NA NA
SC-15(1) -- NA NA
SC-15(2) -- NA NA
SC-15(3) -- NA NA
SC-17 -- NA NA
SC-18(1) -- NA NA
SC-18(2) -- NA NA
SC-18(3) -- NA NA
SC-18(4) -- NA NA
SC-19 -- NA NA
SC-21 -- NA NA
SC-21(1) -- NA NA
SC-23 -- NA NA
SC-23(1) -- NA NA
SC-23(2) -- NA NA
SC-23(3) -- NA NA
SC-23(4) -- NA NA
SC-24 -- NA NA
SC-28 -- NA NA
SC-32 NA -- NA
SI-2(3) -- NA NA
SI-2(4) -- NA NA
SI-3(1) -- NA NA
SI-3(2) -- NA NA
SI-3(3) -- NA NA
SI-4(1) -- NA NA
SI-4(2) -- NA NA
SI-4(4) -- NA NA
SI-4(5) -- NA NA
SI-4(7) -- NA NA
SI-4(8) -- NA NA
SI-4(9) -- NA NA
SI-4(11) -- NA NA
SI-4(12) -- NA NA
SI-4(15) -- NA NA
SI-4(16) -- NA NA
SI-4(17) -- NA NA
SI-5(1) -- NA NA
SI-6 -- NA NA
SI-6(1) -- NA NA
SI-6(3) -- NA NA
SI-8 -- NA NA
SI-8(1) -- NA NA
SI-8(2) -- NA NA
SI-10 NA -- NA
SI-11 -- NA NA
PM-8 -- NA NA
5. Supplemental Guidance
The following security controls and control enhancements are likely candidates for tailoring, with
the applicability of scoping guidance indicated for each control/enhancement. The citation of a
control without enhancements (e.g., AC-17) refers only to the base control without any
enhancements, while reference to an enhancement by a parenthetical number following the
control identification (e.g., AC-17(1)) refers only to the specific control enhancement.
Organizations are required to conduct a risk assessment, taking into account the tailoring and
supplementation performed in arriving at the agreed-upon set of security controls for the ICS, as
well as the risk to the organization’s operations and assets, individuals, other organizations, and
the Nation being incurred by operation of the ICS with the intended controls. Based on an
evaluation of the risk, the organization will further tailor the control set obtained using this
overlay by adding or removing controls in accordance with the CNSSI 1253 process. The
addition or removal of controls during tailoring requires justification.
The system controls supplemental guidance provided below is a combination of NIST 800-53
Rev 3 Appendix I and the DoD ICS-PIT Technical Working Group's decision to include
definitive text for each control, as well as guidance on how to apply a control for OT and the
unique DoD environment. Refer to Figure 3 (ICS Tiers diagram) and Table 3 for the Tier
definitions.
Tier Description
IP Network External to PIT
5 "External" Connection and PIT Management
"External" Connection (between PIT and IP Network External to PIT)
Platform IT System Management
4 UMCS Front End and IP Network
4N UMCS IP network -- PIT Network
4A M&C Server (including any web server, data historian, etc.)
4B OWS
3 Facility Points Of Connection (FPOCs)
2 IP Portion of the Field Control System
2N IP Field Control Network (FCN)
2A IP based networked controllers
2B Field control network Ethernet hardware
2C IP to non-IP control protocol routers or control protocol gateways
2D Field control system local computers (front-ends, engineering tools)
1 Non-IP portion of the Field Control System
1N Network (non-IP)
1A Networked controllers (non-IP)
0 "DUMB" non-networked sensors and actuators
ICS Supplemental Guidance: The organization has policies and procedures in place to
restrict physical access to the ICS (e.g., workstations, hardware components, field
devices) and predefine account privileges. Where the ICS (e.g., certain remote terminal
units, meters, relays) cannot support account management, the organization employs
appropriate compensating controls (e.g., providing increased physical security, personnel
security, intrusion detection, auditing measures) in accordance with the general tailoring
guidance.
ICS Supplemental Guidance: In situations where physical access to the ICS (e.g.,
workstations, hardware components, field devices) predefines account privileges or
where the ICS (e.g., certain remote terminal units, meters, relays) cannot support account
management, the organization employs appropriate compensating controls (e.g.,
providing increased physical security, personnel security, intrusion detection, auditing
measures) in accordance with the general tailoring guidance.
ICS Enhancement Supplemental Guidance: In situations where the ICS (e.g., field
devices) cannot support the use of automated mechanisms for the management of ICS
accounts, the organization employs non-automated mechanisms or procedures as
compensating controls in accordance with the general tailoring guidance.
ICS Supplemental Guidance: In situations where the ICS cannot support differentiation
of privileges, the organization employs appropriate compensating controls (e.g.,
providing increased personnel security and auditing) in accordance with the general
tailoring guidance.
ICS Supplemental Guidance: In situations where the ICS cannot support account/node
locking or delayed login attempts, or the ICS cannot perform account/node locking or
delayed logins due to significant adverse impact on performance, safety, or reliability, the
organization employs appropriate compensating controls (e.g., logging or recording all
unsuccessful login attempts and alerting ICS security personnel through alarms or other
means when the number of organization-defined consecutive invalid access attempts is
exceeded) in accordance with the general tailoring guidance.
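Where lockout itself is unsafe, this compensating control reduces to counting consecutive invalid attempts per account and raising an alarm once the organization-defined threshold is exceeded. A minimal Python sketch; the threshold value and the alert list standing in for an alarm channel are illustrative assumptions, not prescribed values:

```python
from collections import defaultdict

# Illustrative value; the real threshold is organization-defined.
MAX_CONSECUTIVE_FAILURES = 3

class LoginAuditor:
    """Log every unsuccessful login and alert when the organization-defined
    number of consecutive invalid attempts is exceeded."""

    def __init__(self, threshold=MAX_CONSECUTIVE_FAILURES):
        self.threshold = threshold
        self.failures = defaultdict(int)   # account -> consecutive failures
        self.alerts = []                   # stand-in for an alarm channel

    def record_attempt(self, account, success):
        if success:
            self.failures[account] = 0     # any success resets the count
            return
        self.failures[account] += 1
        if self.failures[account] > self.threshold:
            # In production this would raise an alarm to ICS security
            # personnel rather than append to an in-memory list.
            self.alerts.append((account, self.failures[account]))
```

The point of the sketch is that the account is never locked: the ICS keeps accepting attempts, and only the alerting path changes.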
ICS Supplemental Guidance: In situations where the ICS cannot support system use
notification, the organization employs appropriate compensating controls (e.g., posting
physical notices in ICS facilities) in accordance with the general tailoring guidance.
ICS Supplemental Guidance: The ICS employs session lock to prevent access to specified
workstations/nodes. The ICS activates session lock mechanisms automatically after an
organizationally defined time period for designated workstations/nodes on the ICS. In
some cases, session lock for ICS operator workstations/nodes is not advised (e.g., when
immediate operator responses are required in emergency situations). Session lock is not a
substitute for logging out of the ICS. In situations where the ICS cannot support session
lock, the organization employs appropriate compensating controls (e.g., providing
increased physical security, personnel security, and auditing measures) in accordance
with the general tailoring guidance.
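The automatic session-lock behavior described above can be sketched as an idle timer that gates only the display and input, never the process itself. The 15-minute period below is a placeholder for the organization-defined value, and the injectable clock is a testing convenience, not part of the guidance:

```python
import time

IDLE_TIMEOUT_S = 900   # placeholder for the organization-defined idle period

class SessionLock:
    """Lock a designated workstation session after an idle period.
    Locking gates the display/input only; it is not a substitute for
    logging out, and the underlying industrial process keeps running."""

    def __init__(self, timeout_s=IDLE_TIMEOUT_S, now=time.monotonic):
        self.timeout_s = timeout_s
        self.now = now                 # injectable clock, eases testing
        self.last_activity = now()
        self.locked = False

    def touch(self):
        """Record operator input; resets the idle timer."""
        self.last_activity = self.now()

    def poll(self):
        """Call periodically; engages the lock once the idle period elapses."""
        if not self.locked and self.now() - self.last_activity >= self.timeout_s:
            self.locked = True
        return self.locked
```

On workstations where immediate emergency response is required, the guidance above allows this mechanism to be disabled in favor of compensating physical and personnel controls.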
ICS Supplemental Guidance: The organization permits user actions without
identification or authentication only on non-IP sensor and actuator devices.
Applies to Tier 4A
ICS Enhancement Supplemental Guidance: ICS security objectives typically follow the
priority of availability, integrity and confidentiality, in that order. The use of
cryptography is determined after careful consideration of the security needs and the
potential ramifications on system performance. For example, the organization considers
whether latency induced from the use of cryptography would adversely impact the
operational performance of the ICS. The organization then explores all possible
cryptographic mechanisms (e.g., encryption, digital signature, hash function), as each
mechanism has a different delay impact. In situations where the ICS cannot support the
use of cryptographic mechanisms to protect the confidentiality and integrity of remote
sessions, or the components cannot use cryptographic mechanisms due to significant
adverse impact on safety, performance, or reliability, the organization employs
appropriate compensating controls (e.g., providing increased auditing for remote sessions
or limiting remote access privileges to key personnel) in accordance with the general
tailoring guidance.
ICS Supplemental Guidance: In situations where the ICS cannot implement any or all of
the components of this control, the organization employs other mechanisms or procedures
as compensating controls in accordance with the general tailoring guidance.
ICS Supplemental Guidance: In situations where the ICS cannot implement any or all of
the components of this control, the organization employs other mechanisms or procedures
as compensating controls in accordance with the general tailoring guidance.
ICS Enhancement Supplemental Guidance: Per DoD guidance, no USB thumb drives are
authorized for use. Other authorized removable media must be identified with username
and contact information.
ICS Supplemental Guidance: Security awareness training includes initial and periodic
review of ICS-specific policies, standard operating procedures, security trends, and
vulnerabilities. The ICS security awareness program is consistent with the requirements
of the security awareness and training policy established by the organization.
ICS Supplemental Guidance: Security training includes initial and periodic review of
ICS-specific policies, standard operating procedures, security trends, and vulnerabilities.
The ICS security training program is consistent with the requirements of the security
awareness and training policy established by the organization.
ICS Supplemental Guidance: The organization, in conjunction with the IAMS, develops,
disseminates, and annually reviews/updates a formal, documented audit and
accountability policy that addresses purpose, scope, roles, responsibilities, management
commitment, coordination among organizational entities, and compliance. The
organization also follows formal documented procedures to facilitate the implementation
of the audit and accountability policy and associated audit and accountability controls.
ICS Supplemental Guidance: Most ICS auditing occurs at the application level.
ICS Supplemental Guidance: The ICS produces audit records that contain sufficient
information to, at a minimum, establish what type of event occurred, when (date and
time) the event occurred, where the event occurred, the source of the event, the outcome
(success or failure) of the event, and the identity of any user/subject associated with the
event. An ICS system usually has a front-end server(s), workstation(s) and possibly
laptops that produce audit logs in great detail. Other ICS components are limited in what
events can be audited; enabling auditing on controllers/PLCs can create a self-denial of
service because the CPU and memory are limited.
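Since detailed auditing falls to the front-end servers and workstations rather than the controllers, the minimum record fields listed above can be captured there as one structured log line per event. A sketch, assuming JSON lines as the storage format; the field and device names are illustrative:

```python
import json
from datetime import datetime, timezone

def make_audit_record(event_type, source, outcome, user, detail=""):
    """Build a record carrying the minimum audit fields: what type of
    event occurred, when, where/source, the outcome, and the user."""
    return {
        "event_type": event_type,                             # what occurred
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when (UTC)
        "source": source,                                     # where / source
        "outcome": outcome,                                   # success or failure
        "user": user,                                         # associated identity
        "detail": detail,
    }

# One JSON line per event, written by the front-end server or workstation:
line = json.dumps(make_audit_record("setpoint_change", "ows-01",
                                    "success", "operator1"))
```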
ICS Supplemental Guidance: The organization allocates audit record storage capacity,
and in accordance with individual device design, configures auditing to reduce the
likelihood of such capacity being exceeded.
ICS Supplemental Guidance: The organization reviews and analyzes ICS audit records
every seven days for indications of inappropriate or unusual activity, and reports findings
to designated organizational officials. The organization adjusts the level of audit review,
analysis, and reporting within the ICS when there is a change in risk to organizational
operations, organizational assets, individuals, other organizations, or the Nation, based on
law enforcement information, intelligence information, or other credible sources of
information.
ICS Supplemental Guidance: The ICS uses internal system clocks to generate time
stamps for audit records. The preferred method uses the Network Time Protocol (NTP) to
synchronize servers and workstations. The ICS should have all internal clocks
standardized to a single time zone (e.g., GMT/ZULU/UTC), and all clocks must agree
with each other, even if none shows the exact true time.
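The requirement is mutual agreement within some tolerance, not that any single clock is perfectly right. A sketch of the agreement check, assuming per-host clock offsets (in seconds) measured against a common NTP reference; the one-second tolerance is an illustrative value:

```python
def clocks_agree(offsets_s, tolerance_s=1.0):
    """True when every pair of clocks differs by no more than tolerance_s.
    offsets_s: per-host clock offsets, in seconds, from a common NTP source."""
    return (max(offsets_s) - min(offsets_s)) <= tolerance_s

# Hosts synced to the same NTP source typically sit within fractions of a second:
in_sync = clocks_agree([0.02, -0.15, 0.30])
# One workstation drifting 5 s breaks agreement even if the others are fine,
# which would scramble the ordering of correlated audit records:
out_of_sync = clocks_agree([0.02, -0.15, 5.0])
```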
ICS Supplemental Guidance: The ICS protects audit information and audit tools from
unauthorized access, modification, and deletion. Auditing roles will be established on all
devices that can be audited.
ICS Supplemental Guidance: The organization retains audit records for one year to
provide support for after-the-fact investigations of security incidents and to meet
regulatory and organizational information retention requirements.
ICS Supplemental Guidance: The organization authorizes connections from the ICS to
other information systems outside of the authorization boundary through the use of
Interconnection Security Agreements such as an MOA/MOU and/or an SLA; documents,
for each connection, the interface characteristics, security requirements, and the nature of
the information communicated; and monitors the information system connections on an
ongoing basis, verifying enforcement of security requirements.
ICS Supplemental Guidance: The organization develops a plan of action and milestones
for the ICS to document the organization’s planned remedial actions to correct
weaknesses or deficiencies noted during the assessment of the security controls, and to
reduce or eliminate known vulnerabilities in the system. The organization updates
existing plans of action and milestones (POA&M) at least every 90 days based on the
findings from security controls assessments, security impact analyses, and continuous
monitoring activities. The POA&M from the initial Risk Assessment (RA) will be used
as the system's security-lifecycle vulnerability mitigation and remediation tool. As ICS
and IT technology changes regularly, the initial RA will be reviewed in order to
determine how the POA&M should be revised to account for improvements or upgrades
to legacy systems that might allow more stringent controls to be put into place without
adversely affecting operations.
ICS Supplemental Guidance: The organization determines the types of changes to the
ICS that are configuration controlled; approves configuration-controlled changes to the
system with explicit consideration for security impact analyses; documents approved
configuration-controlled changes to the system; retains and reviews records of
configuration-controlled changes to the system; audits activities associated with
configuration-controlled changes to the system; and coordinates and provides oversight
for configuration change control activities through a configuration control board (CCB)
that convenes at a frequency determined by the CCB.
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support
the use of automated mechanisms to implement configuration change control, the
organization employs non-automated mechanisms or procedures as compensating
controls in accordance with the general tailoring guidance.
ICS Supplemental Guidance: The organization considers ICS safety and security
interdependencies.
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support
the use of automated mechanisms to enforce access restrictions and support auditing of
enforcement actions, the organization employs non-automated mechanisms or procedures
as compensating controls in accordance with the general tailoring guidance.
ICS Supplemental Guidance: The organization configures the ICS to provide only
essential capabilities and specifically prohibits or restricts the use of functions, ports,
protocols, and/or services in accordance with DoDI 8551.01.
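Restricting functions, ports, protocols, and services implies an allowlist that observed traffic can be audited against. A sketch; the approved set below is hypothetical and for illustration only, since the authoritative list comes from DoDI 8551.01 and the system's own registration:

```python
# Hypothetical allowlist for illustration only; the authoritative list of
# permitted ports, protocols, and services is governed by DoDI 8551.01.
APPROVED_SERVICES = {
    ("tcp", 443),    # HTTPS to the M&C web server
    ("udp", 123),    # NTP time synchronization
    ("udp", 47808),  # BACnet/IP on the field control network
}

def audit_listening_services(observed):
    """Return the (protocol, port) pairs observed on the ICS that are not
    on the approved list, so each can be disabled or formally justified."""
    return sorted(set(observed) - APPROVED_SERVICES)

violations = audit_listening_services(
    [("tcp", 443), ("udp", 123), ("tcp", 23)]   # telnet gets flagged
)
```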
ICS Supplemental Guidance: The organization defines contingency plans for categories
of disruptions or failures. In the event of a loss of processing within the ICS or
communication with operational facilities, the ICS executes predetermined procedures
(e.g., alert the operator of the failure and then do nothing, alert the operator and then
safely shut down the industrial process, alert the operator and then maintain the last
operational setting prior to failure). Consideration is given to restoring system state
variables as part of restoration (e.g., valves are restored to their original settings prior to
the disruption).
ICS Supplemental Guidance: The organization trains personnel in their contingency roles
and responsibilities with respect to the ICS and provides refresher training at least
annually as defined in the contingency plan.
ICS Supplemental Guidance: In situations where the organization cannot test or exercise
the contingency plan on production ICSs due to significant adverse impact on
performance, safety, or reliability, the organization employs appropriate compensating
controls (e.g., using scheduled and unscheduled system maintenance activities including
responding to ICS component and system failures, as an opportunity to test or exercise
the contingency plan) in accordance with the general tailoring guidance.
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support
multifactor authentication, the organization employs compensating controls in
accordance with the general tailoring guidance (e.g., implementing physical security
measures).
ICS Supplemental Guidance: Where users function as a single group (e.g., control room
operators), user identification may be role-based, group-based, or device-based.
ICS Supplemental Guidance: The organization manages ICS authenticators for users and
devices by verifying, as part of the initial authenticator distribution, the identity of the
individual and/or device receiving the authenticator; establishing initial authenticator
content for authenticators defined by the organization; ensuring that authenticators have
sufficient strength of mechanism for their intended use; establishing and implementing
administrative procedures for initial authenticator distribution, for lost/compromised or
damaged authenticators, and for revoking authenticators; changing default content of
authenticators upon ICS installation; establishing minimum and maximum lifetime
restrictions and reuse conditions for authenticators (if appropriate); changing/refreshing
authenticators (Common Access Cards (CACs) every 3 years or 1 year from the end of the
contract term, passwords every 60 days, biometrics every 3 years); protecting authenticator
content from unauthorized disclosure and modification; and requiring users to take, and
having devices implement, specific measures to safeguard authenticators.
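The refresh intervals above lend themselves to a simple expiry check that an administrative procedure can automate. A sketch, treating "3 years" as 3 × 365 days for illustration (the CAC contract-term exception is noted but not modeled):

```python
from datetime import date, timedelta

# Intervals from the guidance above; "3 years" approximated as 3 x 365 days.
REFRESH_INTERVALS = {
    "cac": timedelta(days=3 * 365),    # or 1 year from end of contract term
    "password": timedelta(days=60),
    "biometric": timedelta(days=3 * 365),
}

def refresh_due(kind, issued, today):
    """True once the authenticator has reached its refresh interval."""
    return (today - issued) >= REFRESH_INTERVALS[kind]
```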
CNSSI No. 1253 152 Attachment 1 to Appendix K
ICS Supplemental Guidance: The ICS uniquely identifies and authenticates non-
organizational users (or processes acting on behalf of non-organizational users).
ICS Supplemental Guidance: The organization trains personnel in their incident response
roles and responsibilities with respect to the ICS; and provides refresher training
annually.
ICS Supplemental Guidance: The organization tracks and documents ICS security
incidents. Security incidents and monitoring should be coordinated with the DHS ICS-
CERT and USCYBERCOM ICS functional leads.
ICS Supplemental Guidance: The United States Computer Emergency Readiness Team
(US-CERT) maintains the ICS Security Center at http://www.uscert.gov/control_systems.
ICS Supplemental Guidance: The organization develops an incident response plan that
provides the organization with a roadmap for implementing its incident response
capability; describes the structure and organization of the incident response capability;
provides a high-level approach for how the incident response capability fits into the
overall organization; meets the unique requirements of the organization, which relate to
mission, size, structure, and functions; defines reportable incidents; provides metrics for
measuring the incident response capability within the organization; defines the resources
and management support needed to effectively maintain and mature an incident response
capability; and is reviewed and approved by designated officials within the organization;
distributes copies of the incident response plan to all personnel with a role or
responsibility for implementing the incident response plan; reviews the incident response
plan at least annually (incorporating lessons learned from past incidents); revises the
incident response plan to address system/organizational changes or problems encountered
during plan implementation, execution, or testing; and communicates incident response
plan changes to all personnel with a role or responsibility for implementing the incident
response plan, not later than 30 days after the change is made.
ICS Supplemental Guidance: The organization authorizes, monitors, and controls non-
local maintenance and diagnostic activities; allows the use of non-local maintenance and
diagnostic tools only as consistent with organizational policy and documented in the
security plan for the ICS; employs strong identification and authentication techniques in
the establishment of non-local maintenance and diagnostic sessions; maintains records for
non-local maintenance and diagnostic activities; and terminates all sessions and network
connections when non-local maintenance is completed.
ICS Supplemental Guidance: The organization restricts access to ICS media which
includes both digital media (e.g., diskettes, magnetic tapes, external/removable hard
drives, flash/thumb drives, compact disks, digital video disks) and non-digital media
(e.g., paper, microfilm). This control also applies to mobile computing and
communications devices with information storage capability (e.g., notebook/laptop
computers, personal digital assistants, cellular telephones, digital cameras, and audio
recording devices, controller interfaces and programming devices) to the organization-
ICS Supplemental Guidance: The organization sanitizes ICS media, both digital and non-
digital, prior to disposal, release out of organizational control, or release for reuse.
ICS Supplemental Guidance: The organization develops and keeps current a list of
personnel with authorized access to the facility where the ICS resides (except for those
areas within the facility officially designated as publicly accessible); issues authorization
credentials; reviews and approves the access list and authorization credentials every 90
days, removing from the access list personnel no longer requiring access.
ICS Supplemental Guidance: The organization considers ICS safety and security
interdependencies. The organization considers access requirements in emergency
situations. During an emergency-related event, the organization may restrict access to
ICS facilities and assets to authorized individuals only. ICS are often constructed of
ICS Supplemental Guidance: The organization monitors physical access to the ICS to
detect and respond to physical security incidents; reviews physical access logs every 30
days; and coordinates results of reviews and investigations with the organization’s
incident response capability.
ICS Supplemental Guidance: The organization controls physical access to the ICS by
authenticating visitors before authorizing access to the facility where the ICS resides,
other than areas designated as publicly accessible.
ICS Supplemental Guidance: The organization maintains visitor access records to the
facility where the ICS resides (except for those areas within the facility officially
designated as publicly accessible) and reviews visitor access records every 30 days.
ICS Supplemental Guidance: The organization employs and maintains fire suppression
and detection devices/systems for the ICS that are supported by an independent energy
source.
ICS Supplemental Guidance: The organization protects the ICS from damage resulting
from water leakage by providing master shutoff valves that are accessible, working
properly, and known to key personnel.
ICS Supplemental Guidance: The organization authorizes, monitors, and controls all
system components entering and exiting the facility and maintains records of those items.
ICS hardware, sensors and devices are typically maintained by contractor support and not
always under the direct control of the organization.
ICS Supplemental Guidance: The organization establishes the rules that describe their
responsibilities and expected behavior with regard to information and ICS usage, makes
them readily available to all ICS users, and receives signed acknowledgment from users
indicating that they have read, understand, and agree to abide by the rules of behavior,
before authorizing access to information and the ICS.
ICS Supplemental Guidance: The organization assigns a risk designation to all positions;
establishes screening criteria for individuals filling those positions; and reviews and
revises position risk designations annually.
ICS Supplemental Guidance: The organization reviews logical and physical access
authorizations to ICS/facilities when personnel are reassigned or transferred to other
positions within the organization and initiates actions to ensure all system accesses no
longer required are removed within 24 hours.
ICS Supplemental Guidance: The organization ensures that individuals requiring access
to organizational information and the ICS sign appropriate access agreements prior to
being granted access; and reviews/updates the access agreements annually or upon
departure.
ICS Supplemental Guidance: The organization employs a formal sanctions process for
personnel failing to comply with established information security policies and
procedures.
ICS Supplemental Guidance: Vulnerability scanning and penetration testing are used with
care on ICS networks to ensure that ICS functions are not adversely impacted by the
scanning process. Production ICS may need to be taken offline, or replicated to the extent
feasible, before scanning can be conducted. If ICS are taken offline for scanning, scans
are scheduled to occur during planned ICS outages whenever possible. If vulnerability
scanning tools are used on non-ICS networks, extra care is taken to ensure that they do
not scan the ICS network. In situations where the organization cannot, for operational
reasons, conduct vulnerability scanning on a production ICS, the organization employs
compensating controls (e.g., providing a replicated system to conduct scanning) in
accordance with the general tailoring guidance.
ICS Supplemental Guidance: The organization manages the ICS using a system
development life cycle methodology that includes information security considerations;
defines and documents ICS security roles and responsibilities throughout the system
development life cycle; and identifies individuals having ICS security roles and
responsibilities.
SA-4 ACQUISITIONS
References: http://msisac.cisecurity.org/
ICS Supplemental Guidance: The organization obtains, protects as required, and makes
available to authorized personnel, administrator documentation for the ICS that describes:
secure configuration, installation, and operation of the ICS; effective use and
maintenance of security features/functions; and known vulnerabilities regarding
configuration and use of administrative (i.e., privileged) functions. The organization also
obtains, protects as required, and makes available to authorized personnel, user
documentation for the ICS that describes user-accessible security features/functions and
how to effectively use those security features/functions; methods for user interaction with
the ICS, which enables individuals to use the system in a more secure manner; and user
responsibilities in maintaining the security of the information and ICS; and documents
attempts to obtain ICS documentation when such documentation is either unavailable or
nonexistent. Because ICSs can have a very long life, many vendors’ user manuals are
available online. Where firmware-embedded default passwords cannot be changed, these
legacy systems should be isolated as a compensating measure.
ICS Supplemental Guidance: The organization enforces explicit rules governing the
installation of software by users.
ICS Supplemental Guidance: The organization: requires that providers of external ICS
services comply with organizational information security requirements and employ
appropriate security controls in accordance with applicable federal laws, Executive
Orders, directives, policies, regulations, standards, and guidance; defines and documents
government oversight and user roles and responsibilities with regard to external ICS
services; and monitors security control compliance by external service providers.
ICS Supplemental Guidance: The ICS protects against or limits the effects of the
following types of denial of service attacks: consumption of scarce, limited, or non-
renewable resources; destruction or alteration of configuration information; physical
destruction or alteration of network components.
Applies to Tier 5
ICS Supplemental Guidance: The ICS monitors and controls communications at the
external boundary of the system and at key internal boundaries within the system; and
connects to external networks or information systems only through managed interfaces
consisting of boundary protection devices arranged in accordance with an organizational
security architecture.
ICS Enhancement Supplemental Guidance: The organization limits the number of access
points to the ICS to allow for more comprehensive monitoring of inbound and outbound
communications and network traffic.
ICS Supplemental Guidance: The organization identifies and reports ICS flaws; tests
software updates related to flaw remediation for effectiveness and potential side effects
on organizational ICS before installation; and incorporates flaw remediation into the
organizational configuration management process. Patching of software security flaws
ICS Supplemental Guidance: The use of malicious code protection is determined after
careful consideration and after verification that it does not adversely impact the
operational performance of the ICS.
ICS Supplemental Guidance: The organization ensures that the use of monitoring tools
and techniques does not adversely impact the operational performance of the ICS.
Applies to Tier 5 or at the connection side of an external network; may apply to Tiers 2D, 4A,
and 4B
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot prevent
non-privileged users from circumventing intrusion detection and prevention capabilities,
the organization employs appropriate compensating controls (e.g., enhanced auditing) in
accordance with the general tailoring guidance.
ICS Supplemental Guidance: The organization receives ICS security alerts, advisories,
and directives from designated external organizations on an ongoing basis; generates
internal security alerts, advisories, and directives as deemed necessary; disseminates
security alerts, advisories, and directives to CNDSP Tier 1 for vetting. The CNDSP Tier
1 will pass the information to the accredited Tier 2 CNDSPs. Tier 2 CNDSPs are
responsible for ensuring all Tier 3 entities receive the information. Tier 3 organizations
will ensure all local Op Centers/LAN shops receive information; and implements security
directives in accordance with established time frames, or notifies the issuing organization
of the degree of noncompliance. ICS vulnerabilities and patches are coordinated through
the Department of Homeland Security (DHS) DHS Industrial Control Systems Cyber
Emergency Response Team (ICS-CERT).
ICS Supplemental Guidance: The organization handles and retains both information
within and output from the ICS in accordance with applicable federal laws, Executive
Orders, directives, policies, regulations, standards, and operational requirements. In
general, ICS do not output information other than audit and performance logs; the output
is the continuous availability of the essential service being provided (e.g., power, water,
HVAC, etc.). Reporting of performance and consumption data should be closely
coordinated with the OPSEC Functional lead to ensure critical information is not
divulged.
ICS Supplemental Guidance: The organization appoints a senior ICS officer with the
mission and resources to coordinate, develop, implement, and maintain an organization-
wide information security program.
ICS Supplemental Guidance: The organization implements a process for ensuring that
plans of action and milestones for the security program and the associated organizational
ICS are maintained and document the remedial information security actions to mitigate
risk to organizational operations and assets, individuals, other organizations, and the
Nation.
ICS Supplemental Guidance: The organization develops and maintains an inventory of its
ICS. The I&E community will inventory and maintain all ICS systems to the sensor and
actuator level using As-Built drawings, Building Information Models, Computerized
Maintenance Management Systems, Builder, and Construction-Operations Building
information exchange data (if used). The I&E and IT communities will identify the ICS
enclave boundary and use this as the system of record identifier for DITPR.
ICS Supplemental Guidance: The organization develops, monitors, and reports on the
results of information security measures of performance.
ICS Supplemental Guidance: The organization manages (i.e., documents, tracks, and
reports) the security state of organizational ICS through security authorization processes;
designates individuals to fulfill specific roles and responsibilities within the
organizational risk management process; and fully integrates the security authorization
processes into an organization-wide risk management program.
In addition to the Low Impact system controls, the Moderate Impact system includes the
following additional controls:
ICS Supplemental Guidance: In situations where the ICS cannot support concurrent
session control, the organization employs appropriate compensating controls (e.g.,
providing increased auditing measures) in accordance with the general tailoring guidance.
ICS Enhancement Supplemental Guidance: The ICS integrates audit review, analysis, and
reporting processes to support organizational processes for investigation and response to
suspicious activities.
ICS Supplemental Guidance: In general, audit reduction and report generation are not
performed on the ICS, but on a separate information system. In situations where the ICS
cannot support auditing including audit reduction and report generation, the organization
employs compensating controls (e.g., providing an auditing capability on a separate
information system) in accordance with the general tailoring guidance.
ICS Enhancement Supplemental Guidance: The ICS backs up audit records weekly onto
a different system or media than the system being audited.
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support
the use of automated mechanisms to centrally manage, apply, and verify configuration
settings, the organization employs non-automated mechanisms or procedures as
compensating controls in accordance with the general tailoring guidance.
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot employ
automated mechanisms to prevent program execution, the organization employs
compensating controls (e.g., external automated mechanisms, procedures) in accordance
with the general tailoring guidance.
ICS Enhancement Supplemental Guidance: The organization plans for the resumption of
essential missions and business functions within 12 hours of contingency plan activation
(Availability Moderate), as defined in the contingency plan.
ICS Enhancement Supplemental Guidance: The organization plans for the full
resumption of missions and business functions within 1-5 days of contingency plan
activation (Availability Moderate), as defined in the contingency plan.
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support
multifactor authentication, the organization employs compensating controls in
accordance with the general tailoring guidance (e.g., implementing physical security
measures).
ICS Enhancement Supplemental Guidance: The ICS uses multifactor authentication for
local access to non-privileged accounts.
ICS Supplemental Guidance: The organization obtains maintenance support and/or spare
parts for security-critical ICS components and/or key information technology
components within 12 hours of failure (Availability Moderate).
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support
cryptographic mechanisms, the organization employs compensating controls in
accordance with the general tailoring guidance (e.g., implementing physical security
measures).
SA-4 ACQUISITIONS
Applies to Tier 5
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support
the use of automated mechanisms to conduct and report on the status of flaw remediation,
the organization employs non-automated mechanisms or procedures as compensating
controls in accordance with the general tailoring guidance.
In addition to the Low and Moderate Impact systems controls, the High Impact system includes
the additional following controls:
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot support the
use of automated mechanisms to generate audit records, the organization employs
non-automated mechanisms or procedures as compensating controls in accordance with the
general tailoring guidance.
ICS Enhancement Supplemental Guidance: In situations where the ICS cannot prevent
the installation of software programs that are not signed with an organizationally-
recognized and approved certificate, the organization employs alternative mechanisms or
procedures as compensating controls (e.g., auditing of software installation) in
accordance with the general tailoring guidance.
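The auditing of software installation mentioned above can be approximated by diffing inventory snapshots taken before and after maintenance windows. The sketch below is illustrative: the {name: version} snapshot format is an assumption, and how snapshots are collected is platform-specific.

```python
"""Illustrative sketch: auditing software installation by diffing
inventory snapshots, as a compensating control when signed-software
enforcement is unavailable.

Assumptions (not from the handbook): installed software is snapshotted
as a {name: version} mapping; collection of snapshots is platform-
specific and out of scope here.
"""


def diff_inventory(before: dict[str, str], after: dict[str, str]) -> list[str]:
    """Produce audit-log lines for installations, removals, and version changes."""
    events = []
    for name in sorted(set(before) | set(after)):
        if name not in before:
            events.append(f"INSTALLED {name} {after[name]}")
        elif name not in after:
            events.append(f"REMOVED {name} {before[name]}")
        elif before[name] != after[name]:
            events.append(f"CHANGED {name} {before[name]} -> {after[name]}")
    return events
```

Each returned line is suitable for appending to an audit record, giving reviewers a record of installs that signature enforcement would otherwise have gated.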
ICS Enhancement Supplemental Guidance: The organization plans for the continuance of
essential missions and business functions with little or no loss of operational continuity
and sustains that continuity until full ICS restoration at primary processing and/or storage
sites.
ICS Enhancement Supplemental Guidance: The organization provides for the transfer of
all essential missions and business functions to alternate processing and/or storage sites
with little or no loss of operational continuity and sustains that continuity through
restoration to primary processing and/or storage sites.
Applies to Tier 5
ICS Enhancement Supplemental Guidance: The organization stores backup copies of the
operating system and other critical ICS software, as well as copies of the ICS inventory
(including hardware, software, and firmware components) in a separate facility or in a
fire-rated container that is not collocated with the operational system.
ICS Enhancement Supplemental Guidance: The organization ensures that the facility
undergoes annual fire marshal inspections and promptly resolves identified deficiencies.
ICS Supplemental Guidance: In situations where the ICS cannot support security function
isolation, the organization employs compensating controls (e.g., providing increased
auditing measures, limiting network connectivity) in accordance with the general
tailoring guidance.
ICS Supplemental Guidance: The ICS limits the use of resources by priority.
ICS Supplemental Guidance: The ICS protects the integrity of information during the
processes of data aggregation, packaging, and transformation in preparation for
transmission.
ICS Supplemental Guidance: The ICS detects unauthorized changes to software and
information.
ICS Supplemental Guidance: The organization protects the ICS from harm by
considering mean time to failure for any component within a system requiring high
availability in specific environments of operation; and provides substitute ICS
components, when needed, and a mechanism to exchange active and standby roles of the
components.
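The exchange of active and standby roles described above can be sketched as a supervisor that promotes the standby component when the active one fails a health check. This is a deliberately simplified illustration: real ICS failover involves hardware, timing, and protocol specifics that are omitted, and the health-check callable is an assumption.

```python
"""Illustrative sketch: exchange active and standby roles of redundant
ICS components when the active component fails a health check.

Assumptions (not from the handbook): component health is reported by a
caller-supplied callable; hardware and protocol details of a real
failover are omitted.
"""


class RedundantPair:
    def __init__(self, active: str, standby: str):
        self.active = active
        self.standby = standby

    def check_and_failover(self, is_healthy) -> bool:
        """Promote the standby if the active component is unhealthy.

        Returns True if a role exchange occurred."""
        if is_healthy(self.active):
            return False
        # Exchange roles: the former active becomes the standby, so it can
        # be repaired or replaced while service continues.
        self.active, self.standby = self.standby, self.active
        return True
```

In practice the health check would poll the component itself (heartbeat, watchdog, or protocol-level status), and the role exchange would also redirect I/O to the newly active unit.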
7. Regulatory/Statutory Controls
PE-14 Temperature and Humidity Controls. Reference: National Electrical Code (NFPA 70)
PE-15 Water Damage Protection. Reference: National Electrical Code (NFPA 70)
8. Tailoring Considerations
When tailoring a security control set that includes the ICS Overlay, care should be taken that
regulatory/statutory security controls are not tailored out of the control set. These security
controls are required to satisfy the regulatory/statutory requirements of the Energy Policy Act
of 2005, the Energy Independence and Security Act of 2007, and the Fiscal Year 2010
National Defense Authorization Act.
9. Duration
Alternative Fuel Vehicle (AFV): A vehicle that runs on a fuel other than "traditional"
petroleum fuels (petrol or diesel); the term also refers to any technology for powering an
engine that does not rely solely on petroleum (e.g., electric cars, hybrid electric vehicles,
solar-powered vehicles).
Federal Real Property Profile (FRPP): A common data dictionary and system used by the
federal government to create a Real Property Unique Identifier (RPUID) for owned and
leased properties.
Field Control Network: The network used by a field control system.
Field Control System (FCS): A building control system or Utility Control System (UCS).
Fire Alarm and Life Safety Systems: A fire alarm system consists of components and
circuits arranged to monitor and annunciate the status of fire alarm or supervisory
signal-initiating devices and to initiate the appropriate response to those signals. Fire
systems include the sprinklers, sensors, panels, exhaust fans, signage, and emergency
backup power required for building protection and occupant emergency egress. Life safety
systems enhance or facilitate evacuation, smoke control, compartmentalization, and/or
isolation.
Monitoring and Control (M&C) Software: The UMCS 'front end' software, which performs
supervisory functions such as alarm handling, scheduling, and data logging, and provides a
user interface for monitoring the system and configuring these functions.
Physical Access Control System (PACS): PACS are required by HSPD-12. The basic
components of a PACS are the head-end server, panels, door controllers, readers, lock or
strike mechanisms, and user identity cards.
Platform Information Technology (PIT): PIT are IT or OT resources, both hardware and
software, and include: weapons; training simulators; diagnostic test and maintenance
equipment; calibration equipment; equipment used in the research and development of
weapons systems; medical technologies; vehicles and alternative-fueled vehicles that
contain car-computers (e.g., electric, bio-fuel, liquefied natural gas); buildings and their
associated control systems (building automation systems or building management systems,
energy management systems, fire and life safety, physical security, elevators, etc.); utility
distribution systems (such as electric, water, wastewater, natural gas, and steam); and
telecommunications systems designed specifically for industrial control systems, including
supervisory control and data acquisition, direct digital control, programmable logic
controllers, other control devices, and advanced metering or sub-metering, together with
associated data transport mechanisms (e.g., data links, dedicated networks).
Special Purpose Processing Node (SPPN): A fixed data center supporting special purpose
functions that cannot (technically or economically) be supported by CDCs or IPNs due to
its association with mission-specific infrastructure or equipment (e.g., communications and
networking, manufacturing, training, education, meteorology, medical, modeling &
simulation, test ranges). No general purpose processing or general purpose storage can be
provided by or through an SPPN. SPPNs will connect to the CDCs via IPNs.