UCISA Information Security Management Toolkit
Edition 1.0
Volume 1
UCISA
UCISA is the Universities and Colleges Information Systems Association. UCISA is a membership organisation
representing almost all the higher education institutions in the UK. It exists to promote best practice and to
act as a representative and lobbying body.
Copyright
This publication is licensed under the Creative Commons Attribution-NonCommercial 4.0 International licence.
Subject to the source being appropriately acknowledged and the licence terms preserved, it may be copied in
whole or in part and incorporated into another document or shared as part of information given, except for use
for commercial gain.
The publication also contains resources from institutions; where this material is copied or otherwise reused,
both UCISA and the institution concerned should be acknowledged.
Disclaimer
The information contained herein is believed to be correct at the time of issue, but no liability can be accepted
for any inaccuracies. The reader is reminded that changes may have taken place since issue, particularly in
rapidly changing areas, such as internet addressing, and consequently URLs and email addresses should be
used with caution. UCISA cannot accept any responsibility for any loss or damage resulting from the use of the
material contained herein.
Availability
The UCISA Information Security Management Toolkit is freely available to download for non-commercial use
from www.ucisa.ac.uk/ismt
I consider this publication to be a significant addition to the suite of materials that UCISA provides to the
sector, and I hope that you find it useful within your institution.
Cyber security is an increasingly business critical issue for universities. Universities operate on the trust of
students, staff and partners to manage information safely and securely. Furthermore, universities produce
research of great value which can be a target for a variety of economic and political reasons. The UCISA
Information Security Management Toolkit is an important guide for security practitioners working toward
implementing proportionate risk based controls in complex institutions.
Introduction
Every organisation depends on information: without it, the organisation cannot operate. Equally, if information is
untrustworthy, every activity is suspect, and no-one can make a reliable decision. This is especially
important in research, where data accuracy is vital. Finally, some information, such as commercial
research findings, personnel records and medical data, is so sensitive that even knowledge of it by the
wrong people is dangerous to the organisation.
Because everyone in any organisation needs to create, access and use information, everyone is responsible for
protecting it and using it appropriately. This protection requires a culture where information is seen as being
valuable and worth protecting, where effective data management is established, and where student and staff
privacy are respected.

Information is not the sole domain of the IT department – it is a cross-institutional concern.
In order to achieve and maintain a good approach to information risk management, or information security,
organisations can benefit from the well-developed international standards in this area.
ISO/IEC 27001 is the international standard describing the creation and maintenance of an information
security management system (ISMS). It can be used by any size of organisation, and is flexible enough to fit any
sector. It has been in existence for over twenty years, and is used by many universities in the UK and abroad.
The UCISA Information Security Management Toolkit has been constructed for use by information security/
governance professionals wishing to put in place an ISMS in their organisation. It also addresses how to convey
the importance of information security to the organisation, since the need for an ISMS is based upon the
acceptance that information security is worth investing in. This edition of the Toolkit outlines an approach to
successfully implement an ISMS based on ISO/IEC 27001:2013 (Information technology — Security techniques
— Information security management systems — Requirements). It is intended as a practical resource,
providing an overview of the key aspects of a successful ISMS and guidance on how to implement them. It also
includes case studies, as well as templates and example resources which organisations can tailor to suit their
needs.
The Toolkit has evolved from edition three of the UCISA Information Security Toolkit, which was based upon
the 2005 version of ISO/IEC 27002 (Information technology — Security techniques — Code of practice for
information security management), and included sample policies for all the Standard’s controls, grouped
according to the internal functions of the organisation.
The different approach taken this time reflects the need of organisations for advice and guidance upon setting
up and maintaining the organisational infrastructure (including top level policies, processes, and governance)
which enables policies and other security measures (controls) to be appropriate, well maintained, and
effectively implemented. Information on how to implement controls is not included.
The document has also been revised to reflect the changing trends in the workplace, such as: the growth of the
use of personal devices to access organisational systems and services; the increase in off-site working; and the
complexities of the research agenda (e.g. protecting intellectual property in an open environment).
The UCISA Information Security Management Toolkit will:
• assist those who have responsibility for implementing information security across the organisation by
providing advice and guidance to them;
• help them to provide senior university management with an understanding of why information security
is an important, organisation-wide issue.
It is also strongly recommended that readers read this document in conjunction with the standards ISO/IEC
27001 and ISO/IEC 27002.
Stage 1 – Foundations
Stage 2 – Planning, assessment and evaluation
Stage 3 – Implementation, support and operation
Stage 4 – Performance, evaluation and improvement
• Measure and evaluate performance – [§10] Measurement
• Respond effectively to incidents and when things go wrong – [§11] When things go wrong: non-conformities and incidents
• Deliver continual improvement – [§12] Continual improvement
• Implement iterative risk assessment – [§5] Risk assessment
Resources
Different elements of an Information Security Management System – Cardiff University
Stages for implementing an Information Security Framework (ISF) programme – Cardiff University

Reading list
ISO/IEC 27000:2014 Information technology — Security techniques — Information security management systems — Overview and vocabulary
www.ucisa.ac.uk/ismt1
http://standards.iso.org/ittf/PubliclyAvailableStandards/
Key topics
• The three aspects of information security
• How threats to information are changing
• The purpose of information security management
Regardless of its form and content, information has value. This value is maintained by its confidentiality, integrity and availability.
Each organisation will have its own attitude to information risk, and should take this into account when
deciding what controls to implement.
All members of an organisation are responsible for contributing to its management of information security:
their actions, or inaction, can protect or expose information to risk.
1.2 Context
All universities are facing increasing threats to their information from a wide range of sources, including
organised crime, as noted in the Universities UK publications on Cyber Security. New sources of threat, such
as nation states, and ideologically motivated organisations, continue to emerge. Such threats are becoming
more widespread, more ambitious and increasingly sophisticated. Attacks can also be carried out without an
attacker even having to leave their home.
According to the Ponemon Institute, the cost of a data breach in 2014 was $145 (£90) per record, including
recovery costs, fines/legal costs and impact to normal operations. Thus the overall cost of a breach affecting a
database containing 2,000 student records would be expected to be £180,000.

The UK National Security Strategy identifies attacks in cyber space and cyber-crime as a “Tier 1” threat on a par with terrorism.
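A minimal sketch of this arithmetic follows; it assumes the 2014 Ponemon per-record figure quoted above, which will vary by sector and year and should be replaced with current data.

```python
# Rough breach-cost estimate: records exposed x cost per record.
# The £90-per-record figure is the 2014 Ponemon average quoted above;
# substitute a sector- and year-specific figure for real planning.
COST_PER_RECORD_GBP = 90

def estimated_breach_cost(records_exposed: int,
                          cost_per_record: float = COST_PER_RECORD_GBP) -> float:
    """Return a simple expected cost for a breach of the given size."""
    return records_exposed * cost_per_record

print(f"£{estimated_breach_cost(2000):,.0f}")  # £180,000 for a 2,000-record database
```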
Attackers motivated by money will attack anything from which they can make a profit: for example, by reselling the information they obtain.
At the same time, due to organisations’ evolving usage of IT, they are becoming more vulnerable to less obvious
threats. The growth of cloud services, outsourced approaches to information management and external
collaborations present new opportunities for misuse and error, and reduce the role of central, specialised
control of IT facilities.
Furthermore, since research activities are increasingly intended to show real-world relevance and benefit, it is
reasonable to expect that research outputs will become more appealing to attackers, as they are more likely to have a
value on the black market.
Educational institutions have other unique properties which make their information risks, and approaches to
handle them, different from other organisations (see Chapter 2, Information security governance, for more
information).
As their awareness of information risk increases, institutions are seeking to align their operational information
security activities to business goals, and asking information security teams to provide assurance of information
risk management.
Other contractual agreements bring with them further sources of requirements, such as the Health and Social
Care Information Centre’s Information Governance Toolkit (IG Toolkit) and the Payment Card Industry Data
Security Standard (PCI DSS).
Finally, in order to be granted permission to use certain datasets for research purposes (medical records, for
example), organisations are increasingly being required to provide evidence of mature information governance.
Summary
• Information security applies to all forms of information
• Threats are becoming more sophisticated and revenue-led
• Information security is the responsibility of all members of an organisation
2 Information security governance
Key topics
• The most critical components in the development, implementation and maintenance of a
successful ISMS
• How to use your organisational structure to give your ISMS the greatest possible chance of
success
• How to align your ISMS with your organisation’s business strategy
Information governance has been defined as:
“the specification of decision rights and an accountability framework to encourage desirable behaviour in the
valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards
and metrics that ensure the effective and efficient use of information in enabling an organisation to achieve its
goals.” - Gartner
Governance is the foundation of an ISMS, as it provides both strategic and operational frameworks.
Information security governance is an integral part of the organisation’s wider governance structures and
mechanisms, such as business continuity, risk management, financial planning and research ethics.
Policy and objectives - There should be a top level policy which sets out the information security objectives,
that in turn support the overall organisational strategy. The policy must be signed off by top management.
Accountability and responsibility - The organisation should define accountabilities and responsibilities at a
high level, making top management explicitly accountable for information security but ensuring that personal
roles and responsibilities are also defined (see Chapter 8, Roles and competencies).
Stakeholders, in this context, may also be referred to as interested parties and could involve any of the
following:
• top management
• business/process owners
• third parties setting requirements/standards (such as the NHS, UK Government, merchant banks)
• internal and/or external auditors
• relevant internal professional service departments (IT Services, Legal Services etc.)
• customers and end users (e.g. students and/or staff).
Identifying stakeholders and interested parties will help ascertain the applicable security requirements for
the ISMS. The primary stakeholder(s) will be those who ultimately require assurance that any applicable
requirements are being implemented appropriately. These stakeholders may also be the sponsor of the ISMS.
In the financial industry, measures are taken to reduce the risk of cognitive bias: the same should exist in the
realm of information security.
In order to achieve these two complementary goals, the organisation should ensure that it gives a group or
body the formal responsibility for reviewing the effectiveness of the organisation’s activities to manage risk.
This can be a formal governance body, supported by an executive group made up of staff with responsibility
for aspects of risk management. The body with such responsibilities should be independent of the areas upon
which it is reporting.
It is recommended that a project or programme be set up to manage any large scale work involved in creating
(or significantly changing) any ISMS, with top level oversight. The programme or project should have a cross
organisational focus, and should not be presented or perceived as a piece of work for the IT function alone.
It can be quite helpful to have the lead for such a programme or project based in, for example, a governance
or risk management role. Significant input will be required from both the governance and IT departments. In
addition, representatives for the Human Resources function and those responsible for physical security should
be involved in the programme or project. Finally, requirements, products, outputs and outcomes should be
quality checked with the wider stakeholders, including the academic community.
Once the initial work is complete, resources required for the continuing management and maintenance of the
ISMS should be made available from the organisation’s recurrent (or business as usual) budget. This will ensure
that the ISMS can continue to be effective, and can adapt to take account of the needs of the organisation into
the future.
• review the ISMS and play a role in its continual improvement (see Chapter 12, Continual improvement)
• act as the focal point for co-ordination of cross-organisational controls and conflict resolution
• provide regular performance reports (see Chapter 10, Measurement) to the Audit Committee or similar
body
• provide advice and expert assistance, including carrying out risk assessments.
Note: This structure contains the risk of conflict of interest, as the CISO team is responsible for advising on
controls as well as measuring and reporting on the effectiveness of the ISMS; care should be taken to ensure
separation of duties to manage this risk.
When selling an ISMS to top management, and getting their buy-in, it is important to not use the legal
argument as the only, or even the most important, reason for implementing a formal ISMS. For many members
of top management, arguments relating to the protection of reputation, safeguarding of intellectual property
and maintaining a competitive edge when competing for research grants and contracts are just as persuasive.
Information security should be sold as a business enabler as opposed to a cost. It is imperative to pitch this
in terms of the key drivers which align to the organisation’s strategic objectives (also see Chapter 3, Drivers).
Some examples are given below.
• Ensuring continued revenue – most funding councils are now asking for assurances that the information
to be used in research will be properly protected.
• Risk management – top management is de facto responsible for information security risk. An ISMS
provides them, and their governing body and funding body, with assurance that the risk is being
appropriately managed throughout the organisation.
• Effective use of resources - an ISMS can lead to more efficient ways of working and best use of resources
as controls are deployed (or relaxed) in a co-ordinated, cross-organisation manner to meet the corporate
risk appetite. This can lead to higher staff satisfaction as policies are made clear, training is provided and
resources are provided to allow application of controls.
• Customer satisfaction - staff and students can feel assured that their own personal data is held securely
and that their identity is safe. In the absence of an ISMS, how confident is top management that their
own personal data is safe in the organisation’s hands? Are they happy for their own salary details or
performance reviews to be held in insecure systems, or potentially accessible by untrained staff in a cyber café?
• Top management have very little time to spend on any topic: keep your presentations, documents and
arguments very brief.
• Use fear, uncertainty and doubt arguments with great care - only make cases based upon provable fact,
e.g. recent and relevant incidents within the organisation.
• Be prepared to “show your working” on any particular statement you make in a document.
2.7.4 Preparing a business case
In order to secure resources for an ISMS, an initial or outline business case should be presented. It should
include relevant drivers as described above and provide an outline of the costs involved in setting up and
maintaining an ISMS. Visual representations of what you are trying to achieve, and how you plan to get there,
can be more effective than lengthy narratives.

The business case may, if appropriate, explain how the ISMS will provide a return on investment; this is more
of a challenge, since an ISMS is largely insurance against loss. It helps to be able to compare the ISMS costs to
the potential costs of the incidents and risks it is intended to mitigate.
Relevant metrics should be identified to show how the information security team will be able to measure and
report on the effectiveness of the ISMS.
It is important to continually reinforce that the purpose of an ISMS is to achieve a level of security which
is consistent with the organisation’s risk appetite and which enables corporate objectives. Once a project
or programme to establish an ISMS has been agreed and an initial risk assessment exercise undertaken, it
is often the case that the originally agreed risk appetite is modified when the financial cost of the security
controls required to meet that appetite is presented to the organisation’s executives. It is important, however,
to keep reflecting the costs back in terms of the risks mitigated and make it clear that a decision not to spend
is in effect a decision to accept a specific risk - and that this is perfectly acceptable as long as it is explicitly
understood by top management.
Summary
• A suitable governance framework is a critical component in the development, implementation and
maintenance of a successful ISMS
3 Drivers
This chapter describes the drivers which influence an organisation to adopt formal
information security management, and which shape an
information security management system. It also provides
advice on how to balance conflicting drivers. It forms
part of Stage 1 – Foundations in the Toolkit Route map.
Key topics
• The levels at which drivers operate
• Where drivers come from
• How to manage drivers
3.1 Overview
In any academic environment, there will be many different external pressures that may seek to influence how
the organisation handles information. These may range from formal contractual and statutory requirements,
through industry standards (both formal and informal) and funder expectations, to informal but nonetheless
compelling desires to gain competitive advantage by enhancing or maintaining the organisation’s reputation
as a safe place to conduct research. These can all be seen as “drivers”.
There are drivers which might influence an organisation to adopt formal information security management
(e.g. ISO/IEC 27001); then there are drivers which might influence and shape an existing formal ISMS. The
former include contracts, the need to be competitive, and the desire to use existing “known good” methods
to secure information. The latter include those standards (including PCI DSS, the IG Toolkit and the Cyber
Essentials scheme) which mandate the adoption of certain controls, or which require the use of detailed
processes, such as a particular risk assessment methodology (see Chapter 6, Controls).
Failure to adequately recognise and address drivers can have consequences from adverse headlines to loss of
future research contracts, financial loss (e.g. if PCI DSS requirements are not met), fines, or the loss of licences
for sensitive areas of research or study.
Unfortunately drivers may conflict with internal requirements, or even with each other. For example there may
be opposing requirements between open research and commercial exploitation of results, or between data
protection and freedom of information, whose resolution will depend on the organisation’s priorities and risk
appetite.
A formal ISMS can provide a framework for addressing potential conflicts between drivers in a transparent and
coherent way that supports the organisation’s objectives. See Chapter 2, Information security governance, for
more advice on how to encourage the implementation of an ISMS, and advice on stakeholders.
In any organisation, it is likely that the work of managing the impact of external drivers will be spread across
multiple departments. For example, DPA and FOIA compliance may be managed by the Legal department,
while Finance handles PCI DSS, the Medical School addresses the IG Toolkit, and the IT department, along
with Estates, manages the business continuity plan. See Chapter 8, Roles and competencies, for more on this
subject.
The following provides a sample of different types of drivers, indicating whether each is internal or external, the issues it addresses, and how specific and granular it is.

• Data Protection Act 1998 (DPA) – External. Issues addressed: publication of, or damage to, personal data; inaccurate personal data; personal data used for unapproved purposes; personal data retained for too long. Type of driver: legislation; high-level principles.
• Research contracts – External. Issues addressed: loss, publication, or damage to sensitive research data. Type of driver: contractual obligation; varies, mostly high level, referencing other standards.
• Business advantage – Internal. Issues addressed: loss of contracts, staff and students to competitors. Type of driver: business policy; high level direction from top management.
• Risk management – Internal. Issues addressed: inconsistent, inappropriate and ineffective controls which waste money and do not protect the organisation. Type of driver: business policy; high level direction from top management.
• Cyber Essentials Standard, Top 20 Cyber-security controls, etc. – External. Issues addressed: compromise of insecure computers through malware or “hacking”, focussing on likely routes and commonly neglected technical measures. Type of driver: good practice guidance; granular specifications.
• The organisation’s Business Continuity Plan – Internal. Issues addressed: damage to operations during a natural disaster or systems failure. Type of driver: business policy; granular specifications.
• Information Governance Toolkit (IGT) – External. Issues addressed: insecure storage and use of medical data. Type of driver: contractual obligation; granular specifications and high level content.
• Anti-terrorism legislation – External. Issues addressed: access by terrorists to certain research areas and equipment. Type of driver: legislation; high level principles.
• Payment Card Industry Data Security Standard (PCI DSS) – External. Issues addressed: fraud through theft of credit and debit card data. Type of driver: contractual obligation; granular and prescriptive.
• ISO/IEC 27001 – External. Issues addressed: inconsistent approach to security; ineffective measures; recurring incidents; inability to demonstrate due diligence. Type of driver: good practice standard; high level guidance and principles.
• IT Infrastructure Library (ITIL) and ISO/IEC 20000-1 – External. Issues addressed: inconsistent and expensive IT support. Type of driver: good practice guidance; high level guidance and principles.
Different stakeholders may have different (possibly conflicting) requirements, in which case the primary
stakeholders need to agree which requirements are considered to be in scope.
A full lifecycle approach to external drivers is therefore required (see Chapter 12, Continual improvement). This
should be linked to organisational change processes (e.g. the project management process, strategic planning
process, and research funding process) so that changes to drivers can be identified and assessed in good time
i.e. before any formal commitments are made by the organisation. It may be possible to extend existing change
management processes to cover the activities described in this chapter.
The organisation should develop a standard process for assessment of requirements provided by a new,
changed, or retired driver.
A new driver should be assessed to determine whether it is indeed relevant and appropriate; the correct role
in the organisation should provide this verification. This step reduces the risk of inappropriate drivers being
included, and of inconsistency within organisations where multiple areas are running semi-independent
information security management systems. Equally, changed and retired drivers should be ratified.
Summary
• Drivers can operate at a very high level (e.g. organisational reputation), or be very granular in their level of
detail (e.g. researcher reputation)
• Drivers can be internal (e.g. responsibility to students and staff), but are often external (e.g. the
Information Governance Toolkit)
Resources
Incidental security improvements from sustainability policies – UCL, case study
Information security within the research arena – Loughborough University, case study

Reading list
Criminal Justice Secure Email
www.ucisa.ac.uk/ismt12
http://cjsm.justice.gov.uk/
Business Impact Levels
www.ucisa.ac.uk/ismt13
http://www.cesg.gov.uk/publications/Documents/business_impact_tables.pdf
CERT Top 10 List for Winning the Battle Against Insider Threats
www.ucisa.ac.uk/ismt15
www.rsaconference.com/writable/presentations/file_upload/star-203.pdf
4 Scoping
This chapter explains what is meant by the scope of an ISMS
and how to decide the scope for an ISMS. It
forms part of Stage 2 – Planning, assessment
and evaluation in the Toolkit Route map.
Key topics
• How scope can mean something different depending on the context
• How to successfully define the scope of an ISMS
• What to consider when scoping outsourced/third-party services
4.1 Introduction
Scoping is a critical part of planning the roll-out and implementation of an information security management
system (ISMS). The scope may cover the entire organisation, or the organisation may be sub-divided into smaller
ISMS scopes (e.g. an ISMS relating to a particular project, service, audit or policy). In either case, the scope
determines the boundaries and applicability of information security management and controls. Scope will be
shaped by the business objectives, the information assets to be protected, and the boundaries of control and authority.
Starting with a reduced scope (as opposed to trying to tackle too much too quickly) may also increase the
chances of success, and of achieving the objectives of the ISMS in a reasonable time.
The scope of an ISMS may initially be defined to include only specific processes, services, systems or particular
departments. Success stories can then be presented as a business case for expanding the scope of the ISMS, or
creating another, separate scope with different requirements and protections.
In order to make the scope entirely clear, especially to third parties, it is a useful exercise to identify what is not
in scope (e.g. the activities of the HR department).
Either way, the scope should clearly define what is being included, based on the business objectives and
information assets to be protected, and it should be clear that anything else is out of scope.
Where the scope of an ISMS is defined by the need to protect a particular asset (e.g. cardholder data) or
delivery of an objective (e.g. certification against ISO/IEC 27001) then it is important to first understand
system components and structure involved in the delivery of relevant services. This may include, for example,
obtaining system diagrams showing data stores and flows and relevant IT systems. Personnel involved in
managing and delivering all system components will then likely be considered “in scope”.
For those managing information security, it is important to consider the boundaries of control and authority. If,
for example, the security of services or systems in a particular department are beyond the control or authority
of the owners of the ISMS, they should not be included in the scope.
In the context of an audit, agreeing which systems are in scope may be particularly important so as to ensure that
it is clear which systems the auditor is authorised to access and under what circumstances. Failure to obtain such
authorisation in advance could even lead to a breach of law (such as the Computer Misuse Act 1990).
• time dependencies: e.g. the scope of a particular ISMS and/or security project may only be applicable for a
particular time period
Cloud services: A shared computer-based storage solution for data that is based in a virtualised computer
environment. Cloud services can describe any shared environment, which can be provided either locally or by an
outsourced provider.
All organisations will outsource some activities to third parties. Some third parties are taken so much for
granted that, when questioned, staff do not remember them – e.g. the cleaning teams, waste removal
contractors, and potentially accountants or auditors. Their activities may not be under scrutiny, yet they may
have the highest levels of access.
There are many reasons why an organisation may want or need to outsource some (or all) of its IT provision.
As information technology changes and evolves extremely quickly, it can be more cost-effective to outsource
some of an organisation’s IT solution, or to use cloud storage or services. Economies of scale means that large
data warehouse-style storage facilities can offer cheap storage and extremely good availability. Externally
hosted services may also provide specialist IT knowledge and support that is not available within the
organisation.
If managed properly, outsourced IT or cloud technology carries no greater risk, and arguably less risk, than
managing an in-house IT environment. However, poorly sourced or managed outsourcing, or inappropriate
cloud provision, can be extremely risky.
Responsibility for implementing security may be outsourced, but the accountability cannot be, and so it is
therefore important to understand the scope of an ISMS in this context. Put simply, when it comes to meeting
certain security requirements, outsourced functions or processes will be in scope for an ISMS, but the suppliers
are unlikely to be. It is up to the organisation to decide how it may be assured that services provided are of an
appropriate standard.
For further information, the ICO has produced a guide on the use of cloud computing, and UCISA has a briefing
paper on cloud computing.
When working with any third party, it is important for information security that the following are defined:
• Legal responsibility, accountability and insurance: all the parties’ responsibilities must be detailed and
understood. Running through a risk assessment process will uncover many areas where accountability
needs to be defined. Disaster planning and incident response is also a good way of verifying that
ownership and insurance responsibilities are correctly scoped.
• Access and authorisation: it is essential to make sure the rules and regulations for who can access what
are clearly defined. If the organisation is allowing contractors into buildings, it should understand who
has the keys or access codes; and who ensures the staff are trained and things are secure. Out of hours
office cleaning staff often have more physical access to an organisation than even the most trusted day
staff. Access to IT systems and data should also be considered.
• Disclosure and privacy: the organisation should define and categorise the information that is being used
and shared, and specify the applicable rules and regulations.
• Contract terms: The terms of contracts with third parties should be clearly defined to make sure that all
parties are clear on the expectations of the work to be conducted, and sanctions or liabilities in the event
of default are assigned.
• What data are going to be on the outsourced system? Do the data include any sensitive information, or
have special requirements?
• What laws or regulations apply to the service provider who is supplying the IT provision? If it is a company
outside of the EU, how will that interact with the requirements of the data which it will handle? Where
will the data itself be stored?
• Who needs access to the IT solution? Is it something that needs a lot of physical involvement or does it
not need any attention for many months?
• Are there restrictions on who administers the system? Who will the administrators be and who controls
the access rights?
• Where is the system physically housed? Is the facility secured, who is it shared by, and who controls the
access?
• Does the outsourced service provider themselves outsource any of their provision (e.g. off-site back-ups)?
How do they manage the security controls which their third parties are handling?
• What service level is expected or provided? What levels of assurance for confidentiality, availability and
integrity of data are there? Check the policies in place within your organisation.
• At the end of the business relationship, how will it be possible for the organisation’s information to be
extracted from the third party environment in a usable form?
• What provisions (if any) are in place for compensating the organisation for the impact of a business
continuity incident or disaster (e.g. loss or exposure of information)?
Service providers can demonstrate PCI DSS “compliance” either by having their service included in the
organisation’s assessment, or by undergoing an assessment themselves. In either case, the services provided
that may affect the security of cardholder data must be considered to be in scope. It is the responsibility of the
organisation to demonstrate compliance – rather than the service provider.
In one area of the organisation, IT is provided by an external company employed by the organisation. The
units using this “group IT provision” treat it as outsourced provision and have service level agreements in
place. The reason it is considered outsourced is that, when the scope was defined, the control of the system
administration, access control, physical security and changes to the systems were out of the control of the
individual units.
One of the units using the “group IT provision” requires a fully validated and highly secure database for one
project. This is a very specialised system that neither the unit IT nor the “group IT provision” can provide on
its own. The unit employs a third party software provider to build, maintain and support the database, but,
because of the sensitivity of the data, it has been built on the servers and storage provided by the group IT
provider, and is managed by the IT support personnel employed by the unit.
[Diagram: the relationship between the unit IT, the group IT provider and the third party database provider]
Summary
• Successfully defining and agreeing the scope of an ISMS from the beginning is a critical success factor in
the implementation of any ISMS – if the scope is wrong you will not know where you are going or when
you got there!
• There are different scopes involved in implementing information security in an organisation, from high-
level scopes covering the entire organisation, to the scope of a particular project or service
• Start small with one limited scope, demonstrate success and build from there
• Monitor and review, and if your scope is wrong then change it accordingly
Resources
Scope definition for a data safe haven – UCL, case study

Reading list
The Common Vulnerabilities and Exposures database
www.ucisa.ac.uk/ismt18
https://cve.mitre.org/
5 Risk assessment
This chapter describes the process of information security risk assessment and management.
Information risk management is important as organisations cannot avoid being exposed to information
risk. It forms part of Stage 2 – Planning, assessment and evaluation, Stage 3 – Implementation, support
and operation and Stage 4 – Performance, evaluation and improvement in the Toolkit Route map.

It is important to ensure that any corporate risk management strategy, risk management method
and assessment methods are borne in mind when carrying out information security risk assessments.
Organisations wishing to achieve certification to ISO/IEC 27001 should note that (as per clauses 8.2 and 8.3 of
ISO/IEC 27001) they should carry out information security risk assessments, keep records of those information
risk assessments and use the information risk treatment plan derived from the information risk assessments to
treat the documented information risks.
The exact risk assessment methodology to be used is not specified by the Standard. Organisations can choose
to follow the approach described here, or another approach which suits them better.
All organisations have information assets. These information assets are often critical in supporting business
operations. Equally, all organisations are exposed to threats and vulnerabilities which constitute risks to those
information assets and, if left unchecked, have the potential to damage the organisation’s ability to meet its
stated objectives.

As such it is prudent to consider the risks which may have a negative impact on their information assets and,
through the consistent application of information risk assessments, determine the controls they wish to apply
to treat the risks to those assets.

The extent to which an organisation invests resource in protecting its information assets should be directly
related to the potential impact of the risks on those assets.
Only by carrying out information security risk assessments to identify and assess all the risks facing its
information assets can an organisation hope to identify how to best utilise its resources to treat those risks.
Additionally carrying out and documenting information risk assessments provides for an auditable process
demonstrating and providing justification for decisions made in relation to information security.
Whilst this toolkit is written from the perspective of risk assessing information assets, it is important to note
this is not the only approach. For those pursuing certification against ISO/IEC 27001:2013, the latest version of
the standard does not require an information asset-based approach. However, certainly in the short term this
is what auditors will be used to seeing, and it will not invalidate an ISMS from their perspective. Regardless
of the risk assessment methodology chosen, the essential steps of information risk identification followed by
assessment of impact and likelihood still apply.

Each organisation should determine the specific threats which affect the confidentiality, integrity and
availability of their information assets.
The reading list for this chapter contains links to examples of established best practice.
It is also important to note that information risks can be mapped to the type of organisational objective
concerned, that is to say strategic (long-term), programme/project (medium-term) and operational (short-
term) objectives. The type of objective which an information risk affects will have some bearing on the level
of audience who should be reviewing and managing the risk. However there may be interplay between the
different levels. For example a project risk could quite easily be relevant in terms of the programme to which it
belongs and potentially could affect a strategic objective. As such, risks identified at one level will often feature
on the risk register at another.
Information risk assessments should consider impact in terms of the effect on the organisation’s stated
purpose and objectives.
The OCTAVE Allegro guidebook V1.0 on information security risk assessment suggests that as a minimum the
following impact areas are considered: reputation/customer confidence, financial, productivity, safety and
health, fines/legal penalties, plus one or more user-defined impact areas.
When evaluating risk against an information asset, it is important to have a sufficient understanding of the
information asset (or class of assets).
A clear understanding of the asset enables better understanding of the threats and vulnerabilities and thus
enables more effective information risk assessment.
When carrying out a threat assessment, each identified threat should be classified and ranked according to
potential impact. There are a range of models which can be used to rank threats. Typically they will include
some or all of the following:
With regard to types of vulnerabilities, it is possible to find lists of typical vulnerabilities online. For example,
the Common Vulnerabilities and Exposures database is a freely available dictionary of publicly known
information security vulnerabilities and exposures. However, information security vulnerabilities come in
human, physical and process form as well as software and hardware. Identification of vulnerabilities can also
be treated hierarchically, as for threats (see previous subsection).
The potential impact of each vulnerability should then be assessed and quantified in order to allow the highest
priority vulnerabilities to be addressed first.
A common qualitative approach is to bring those who understand the information assets, threats and
vulnerabilities together to discuss and agree the likelihood and impact of each risk. Examples of this type of
information risk assessment can be seen in the resources section for this chapter.
Quantitative information risk assessment, unlike qualitative information risk assessment, uses numerical
values (normally monetary) rather than subjective values (high, medium, low) for risk assessment. Figures are
derived for the Single Loss Expectancy (how much the occurrence of a given information risk costs) and Annual
Rate of Occurrence (how often a risk will occur per year). From these it is possible to calculate the Annual Loss
Expectancy (how much the organisation can expect to lose each year for a given risk).
By defining a monetary value for risks and having the historic data to determine the expected frequency, it
is not only possible to prioritise information risks in order of the financial impact on the organisation, but in
combination with an understanding of the costs of your controls and their effectiveness at mitigating risk, it is
possible to make some statements about the Return On Security Investment.
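The calculations themselves are straightforward. The sketch below illustrates them with invented figures; the function names and example numbers are assumptions for illustration only.

```python
# Illustrative quantitative risk calculation (all figures are invented).
def annual_loss_expectancy(single_loss_expectancy: float,
                           annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: the expected loss per year for a given risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: a laptop theft costs roughly £2,500 per incident and occurs about 4 times a year.
ale_before = annual_loss_expectancy(2500, 4)   # £10,000 per year

# A control (say, full-disk encryption plus asset tracking) costing £3,000 per year
# is expected to reduce the loss per incident to £500.
ale_after = annual_loss_expectancy(500, 4)     # £2,000 per year
control_cost = 3000

# A simple Return On Security Investment: loss avoided minus control cost,
# expressed relative to the control cost.
rosi = ((ale_before - ale_after) - control_cost) / control_cost
print(f"ALE before: £{ale_before:,.0f}, after: £{ale_after:,.0f}, ROSI: {rosi:.0%}")
```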
Unfortunately, quantitative information risk assessment requires a significant amount of data about
information risk impacts and probabilities, which may not be readily available and which are resource intensive
to collect. Calculations can be complex and resource intensive and, as a result, professional risk management
software is often required for effective analysis. In addition, technology changes so fast that historical data
may not be a good source of information about current and future impacts and probabilities.
It is often the case, particularly with information security risk, that the impact of a risk cannot be defined
solely as a numerical value or monetary sum. For example, the reputational impacts of a data breach cannot
easily be measured by quantitative methods. Quantitative information risk assessment is a process which
requires experience and competence to use and is not as straightforward to involve colleagues in as qualitative
information risk assessment.
One possible approach is to use qualitative information risk management by default, and quantitative
information risk assessment where it is felt that the benefits provided by the technique outweigh the costs.
5.7 Process
The information risk assessment case study provides a practical example of how information risk
measurement criteria can be used to help achieve consensus when using qualitative information risk
assessment.
Since qualitative information risk assessment is largely subjective, agreement may not be reached if a simple
high, medium, low rating is used to rate impact and likelihood. Using information risk measurement criteria
provides a consistent basis on which to assess the impact and likelihood of a risk and provides a descriptor for
each impact level and likelihood rating so that individual perceptions of what is high or low are excluded and
consensus is reached on which impact statement best describes the perceived risk.
1. Considering the threats and vulnerabilities, generate information risk scenarios (e.g. through
brainstorming). These scenarios should, in real world terms, outline something which could go wrong and
the mechanism by which it could occur. You can also use a standard list of risk scenarios.
4. Plot impact and likelihood of each information risk on a risk acceptance matrix (examples of which appear
in the case studies supporting this chapter).
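As an illustration of this final step, a risk acceptance matrix can be as simple as a lookup from impact and likelihood ratings to an agreed response. The ratings and responses below are assumptions for the sketch and should be replaced with the organisation's own criteria.

```python
# Hypothetical 3x3 qualitative risk acceptance matrix.
# Impact and likelihood are rated against agreed measurement criteria,
# and each combination maps to an agreed response.
ACCEPTANCE_MATRIX = {
    ("low",    "low"):    "accept",
    ("low",    "medium"): "accept",
    ("low",    "high"):   "monitor",
    ("medium", "low"):    "accept",
    ("medium", "medium"): "monitor",
    ("medium", "high"):   "treat",
    ("high",   "low"):    "monitor",
    ("high",   "medium"): "treat",
    ("high",   "high"):   "treat urgently",
}

def risk_response(impact: str, likelihood: str) -> str:
    """Return the agreed response for a given impact/likelihood combination."""
    return ACCEPTANCE_MATRIX[(impact, likelihood)]

print(risk_response("high", "medium"))  # -> "treat"
```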
It is important to retain some sense of proportion when attempting to estimate impacts and effects; the
organisation should bear in mind that some of the most devastating impacts actually rely on a chain of specific
circumstances, which reduces the likelihood of an event with that very high impact occurring.

When describing risks, it is good practice to break the description down into a statement clarifying the
cause, event and effect.
It is essential that, as part of the process, information risk owners and action owners are assigned. The
information risk owner is the person or body which has the authority and accountability for managing an
information risk. The action owner is the individual responsible for carrying out the activities to control the
information risk. It is possible that the information risk owner and action owner may be the same person.
At a higher level, whether part of the organisation’s pre-existing risk management framework or a specific
information security governance body, there should be a review body which on a regular basis scrutinises the
management of information security risk. See Chapter 2, Information security governance.
A further reason for maintaining an information risk register is to provide an auditable account of decisions
made. This will allow the organisation to manage identified information risks as well as to determine the
overall information risk exposure. The register will also act as an historical record of the assessed value of each
information risk over time.
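A risk register need not be elaborate. The sketch below shows the kind of fields a register entry might carry; the field names and example values are illustrative only and are not prescribed by the Standard.

```python
# Illustrative information risk register entry. Field names are examples only;
# organisations should align them with their wider risk management framework.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str          # cause, event and effect
    risk_owner: str           # accountable for managing the risk
    action_owner: str         # responsible for carrying out the treatment actions
    impact: str               # e.g. low / medium / high, per agreed criteria
    likelihood: str
    treatment: str            # accept / reduce / transfer / avoid
    review_history: list = field(default_factory=list)  # (date, impact, likelihood)

entry = RiskRegisterEntry(
    risk_id="IR-042",
    description="Loss of an unencrypted laptop leads to disclosure of student records",
    risk_owner="Director of Student Services",
    action_owner="IT Security Manager",
    impact="high",
    likelihood="medium",
    treatment="reduce",
)
entry.review_history.append((date(2015, 6, 1), "medium", "low"))  # reassessed after encryption rollout
```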
It should be noted that information security risk assessment cannot be carried out and managed in isolation.
Risks identified as part of the information security process should be integrated into the appropriate
organisational risk registers. For example, information security risks which have the potential to impact on
organisation strategies should be referenced from the organisation’s overall strategic risk register.
Summary
• Information risk management is a systematic, consistent, iterative process where risks are identified and
assessed before being treated and monitored
• Information risk treatment options should not cost more to deploy and manage than the cost of the risk
itself
• Information risk management should not be done in a vacuum, but as part of the overall organisational
risk management process
1 NB: Risk transference cannot entirely mitigate a risk, as reputational risk tends to remain with the organisation (e.g. TKMaxx’s incident was due to activities of a third party, yet it was TKMaxx which experienced the reputational damage).
Resources
Template for information risk management principles
Development and use of risk assessment templates – UCL, case study
Project information risk assessment – Requirements and expectations – UCL
Service information risk assessment – Requirements and expectations – UCL
Project information risk assessment – Capability – UCL
Service information risk assessment – Capability – UCL
Risk treatment plan – UCL
Risk assessment methodology – Cardiff University
Information asset register tool – University of Oxford

Reading list
A complete set of resources necessary to perform an information security assessment based on the OCTAVE Allegro method
www.ucisa.ac.uk/ismt21
www.cert.org/resilience/products-services/octave/octave-allegro-method.cfm
European Union Agency for Network and Information Security (ENISA) Risk Management Hub
www.ucisa.ac.uk/ismt22
www.enisa.europa.eu/activities/risk-management
ISO 31000:2009 Risk Management Principles and Guidelines
www.ucisa.ac.uk/ismt23
www.iso.org/iso/home/standards/iso31000.htm
University of Oxford Risk Assessment of Information Assets
www.ucisa.ac.uk/ismt24
http://www.it.ox.ac.uk/policies-and-guidelines/is-toolkit/risk-assessment
6 Controls
This chapter describes security measures, or controls, and how to make them work in
practice. It forms part of Stage 2 – Planning, assessment
and evaluation and Stage 3 – Implementation,
support and operation in the Toolkit Route map.
Key topics
• Definition of a control
• How to pick and assess controls
• What to do about “ready-made” sets of security controls
Some controls can be applied to the whole organisation (e.g. the authentication scheme, or retention
schedules), while some can be specific to a particular scope (e.g. password lifespans, or patching policies).
The organisation should first consolidate its business and compliance requirements, and only then design and
consolidate controls. This minimises duplication and redundancy.
Control – Reduces likelihood? – Reduces impact?
Pre-employment screening – Y – N
Segregation of duties – Y – Y
Back-ups – Y – Y
Controls can fall into more than one category. For example, anti-malware software both prevents infection and
acts to remove existing malware.
Controls can also be technical (such as anti-malware) or non-technical (such as keeping documents in a drawer
overnight, rather than on a desk). Non-technical controls often involve changes to business processes, which
may require more involvement from different parts of the organisation to implement, but which are more cost-
effective. When selecting technical controls, the organisation must always ask itself the question, “Why is this
control the best or only option? Is there a non-technical approach which is more effective?”.
Different cultures may have very different attitudes to acceptable behaviour, which should be taken into
account when designing controls (see Chapter 5, Risk assessment). For example, in some cultures, it is
considered unacceptable to let a door close in the face of someone walking behind you – in this case, the
organisation should consider alternative controls to pass-opened doors, such as turnstiles.
The crucial point of difference between a control set and a diet, where the analogy breaks down, and which
explains the fluidity in the information security sector (which far exceeds the confusion even in the nutrition
sector) is the rapidly changing and volatile state of technology. Human biology is relatively static. Imagine if a
person born thirty years ago was unrecognisable to anyone born twenty years ago, and could not eat the same
food or even talk to them.
The more specific and technology-focused a control set is, the more effort it will take to keep it up to date –
which is why managing people and business processes can be a better way to manage a risk.
Any organisation seeking to identify a control set to implement should assess it for stability (given the
changing nature of technology), suitability for their needs, and other side benefits (e.g. will it make it easy to
get government funding?). The control set chosen will almost certainly need to be augmented to fill in the
areas where organisational risk tolerance differs from the tolerance of the authors of the control set.
Governmental control sets can be used to improve top level buy-in, as top management may have been
contacted by governmental bodies asking for feedback on compliance with the currently popular control set.
An example of an initial risk assessment which helped an organisation to raise awareness, gain support from
governance and executive bodies and make the case for increased investment in information security controls
was to take the CPNI 20 controls and assess: current organisational compliance (rated as red, amber or green),
priority for action, actions recommended with cost, timescales and responsibilities.
The purpose of the SOA (Statement of Applicability) is mainly to ensure that an organisation has not missed anything. The Annex is
not intended to be a control set, or a means of bypassing a risk assessment. A good way to think of it is as a
supermarket containing all the foods you can imagine – your list of controls is your shopping list. Going to
this supermarket without a list and buying everything on the shelves will bankrupt you, and leave you with
many foods you don’t need or want. Equally, implementing all controls in Annex A of ISO/IEC 27001 will be too
expensive for the organisation, and will not meet its needs. That is why the list of controls in Annex A is best
ignored until after the organisation has sorted out its list of required controls.
For example, a policy developed by a small group, published on a website and left to the ravages of chance will not be effective. To be effective, a control should be:
• developed through consultation with affected parties, transcending any internal “silos”
• designed to address a risk
• proportionate
• supported by top management
• tested
• implemented with appropriate awareness work to ensure that all impacted users understand what to do
and have support in any transitional period
• managed, with non-compliance detected, followed up, reported on, and persistent issues handled
effectively.
To put this another way, controls are a component of an ISMS; they do not replace it.
Implementation of a control should be managed as any other business change, using the techniques which the
organisation finds most effective, and the management channels which are in place already.
The impact on the organisation of each control introduction/change/retirement should then be assessed, so
that any changes which are not feasible can be identified. As previously noted, this assessment should be done
as early as possible, ideally before the organisation commits to a project or new service which brings with it
changes which are not feasible to implement. For example, taking payment card data may result in specialist
security software being required (file integrity monitoring), and hence a much higher cost for software licenses.
Using this information, the appropriate level in the organisation should then make a decision on the business
change: should it go ahead?
Assuming that the business change will go ahead, once the changes to controls are clear, a plan should be
agreed which leads to their implementation/alteration/removal (as relevant) in a suitable time frame.
In many cases, especially where legislation is concerned, an external requirement will not specify exact
controls. In this situation, the organisation should use the driver to inform its risk assessment and control
selection processes, with reference to its own risk appetite and legal counsel as appropriate. This ensures that
legislation is not over- or under-interpreted.
In order to relate controls derived from external drivers to controls derived from risk assessment, the
organisation should decide how to treat externally derived controls. They may be seen in one of two ways:
• as a separate group of mandatory controls, implemented as specified by the external driver; or
• as controls which are traced back to the risks they address, and then treated in the same way as controls derived from the organisation’s own risk assessment.
Of these two options, the first is the easiest to do initially, but leaves externally derived requirements in a
separate group, and does not, perhaps, encourage all controls to be treated equally. The second approach
requires each control to be “deconstructed” to identify what risk (or risks) it is actually going to be addressing.
This takes more time, but is a much more effective (and satisfying) approach.
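One lightweight way to keep that traceability is to record, for each control, the risks and drivers it addresses. The structure below is purely illustrative; the identifiers, control names and driver names are invented examples.

```python
# Illustrative control register linking each control to the risks and external
# drivers it addresses. Identifiers and names are examples only.
controls = [
    {
        "control": "File integrity monitoring on payment systems",
        "risks": ["IR-017: undetected tampering with cardholder data"],
        "drivers": ["PCI DSS"],
    },
    {
        "control": "Pre-employment screening",
        "risks": ["IR-042: insider misuse of personal data"],
        "drivers": ["Data Protection Act 1998", "internal risk assessment"],
    },
]

# Any control with no associated risk or driver is a candidate for review or removal.
unjustified = [c["control"] for c in controls if not (c["risks"] or c["drivers"])]
print(unjustified)  # -> []
```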
Summary
• Controls reduce the impact and/or likelihood of incidents
• Ready-made control sets should be considered carefully
• Controls form part of an ISMS – they do not replace it
• Controls should be traceable to the requirements/risks which they are intended to address
7 Information management
This chapter describes the issues involved in designing an information management
scheme and making it operate in practice. It
forms part of Stage 2 – Planning, assessment
and evaluation in the Toolkit Route map.
Key topics
• The benefits of having an information management scheme
• The components of an information management scheme
• Tips for creating and using a workable and appropriate scheme
An information management scheme typically consists of four documents:
• A classification scheme: a list of classifications with definitions to allow people to consistently classify
information.
• A labelling scheme: a way for documents and other information to be visibly associated with a
classification.
• Handling rules: information on how to use and protect information with each of the defined
classifications.
• A process which explains how to use the above three documents (e.g. how to decide who is responsible
for classifying a given item of information).
As usual, these documents should be supported by a policy statement, and endorsed by top management. The
statement specifies the scope of the information management scheme; who is responsible for maintaining
and controlling it; and what sanctions should apply in the case of non-compliance (sanctions can often be
handled via normal disciplinary processes).
The organisation should appoint a suitable role(s) to develop this scheme (see Chapter 8, Roles and
competencies), and ensure that the scheme is tested and approved (see Chapter 2, Information security
governance).
7.2 Classification
In our daily lives we tend to see a huge number of attempts to mark information with a classification –
“confidential”, “personal”, “commercial in confidence”, “private”, “off-the-record”, and even “classified”.
In addition, there are formal schemes such as the Information Sharing Traffic Light Protocol and the UK
Government’s Security Classification scheme.
When designing an information management scheme for an organisation, it may seem prudent to implement
all of the above classifications, if not more. However, a scheme that is too complicated will produce confusion,
non-compliance and other unintended consequences – e.g. either information being seen when it shouldn’t
be, or not disclosed when it needs to be. A scheme that is too simple carries the same risks, as it forces people
to either over- or under-classify. It should be noted that the UK Government’s new scheme only has three
classifications above the base level of unclassified – “Official”, “Secret” and “Top Secret”, of which the top level
may never be encountered in most branches of the Government.
Classifications must apply to information, not to the particular form it is in: it makes little sense to say that
a printed copy of a document must not be left on a desk, if computers with access to the same information
are left logged in when unattended. As information changes format, it must experience a consistent level of
protection.
If the initial attempt at designing a scheme does produce a large number of classifications, the organisation
should check for mixed or inconsistent treatments for different formats of information. This is likely to
undermine both the actual effectiveness and the credibility of the scheme.
The classification levels chosen should be compatible with the classification structure implied by the Freedom
of Information Act 2000 (FoIA). The Act effectively groups information into three classes with regard to
confidentiality:
• information which must be routinely published (e.g. through the organisation's publication scheme)
• information which must be disclosed if requested
• information which is exempt from disclosure.
Business Impact Levels (BILs), as used by the UK Government, are a way to formalise the assessment of risk.
However the seven columns and significant detail in their Impact Level tables are likely to be too complex for
practical information classification schemes. Noting the Business Impact Levels that are likely to match the
organisation’s own information classifications may, however, be a useful check, especially if the organisation
will be expected to engage with BILs in its interactions with other organisations, e.g. funding bodies.
Classifications should take account of the information's requirements for integrity and availability, as well as
confidentiality.
The important thing is that the label is understandable (this is why the classifications are created first), and
visible to all readers, even those who only skim the start of a message. Everyone who sees information must
know how to handle it.
It may be necessary to consider information as being of two types: structured (e.g. records held in a database or CMS) and unstructured (anything ad hoc, e.g. in a private file system, email, or in a notebook). Structured
information will be much easier to label than unstructured information, so it may be necessary to consider
how information of value is being managed in general, in order to make labelling and handling it more
feasible.
One method for labelling information, which is simple but very effective, is to specify that everything in a
particular system, or environment, is automatically of a particular classification. This approach requires there
to be a verification process at the point where information is introduced into the system, to make sure that
information with a higher classification is not entered into the system, and at the point of data extraction, to
ensure that it is labelled and handled effectively outside the system.
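As an illustration of this approach, the sketch below shows the kind of check that might sit at the point where information enters such a system. It is a minimal sketch only: the classification names, their ordering and the function name are hypothetical, not part of any recommended scheme.

# Hypothetical classification levels, ordered from least to most sensitive.
LEVELS = ["PUBLIC", "INTERNAL", "CONFIDENTIAL"]

def may_enter(item_classification: str, system_classification: str) -> bool:
    """Allow an item into the system only if the system's blanket
    classification is at least as high as the item's own label."""
    return LEVELS.index(item_classification) <= LEVELS.index(system_classification)

# Example: a CONFIDENTIAL document must not be entered into a system
# whose contents are all handled as INTERNAL.
assert may_enter("INTERNAL", "CONFIDENTIAL")
assert not may_enter("CONFIDENTIAL", "INTERNAL")

A corresponding check would be needed at the point of extraction, so that information leaving the system carries the system's label and is handled accordingly.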
While the classification label, along with a handling scheme, defines how information should be handled,
labelling related to confidentiality can also be used to indicate who should handle the information. Here the
most important thing may well be that those who are not entitled to see the information should be able to
immediately recognise that fact, return the information and report a security breach. Labels also need to make
clear to those who are entitled to see the information who they may share it with. Provided that labels meet
these twin requirements of being immediately clear to both authorised and unauthorised recipients, they can
be relatively flexible. For example, information might be labelled with the department(s) where it should be
used, or with the name of a project, event or function.
The “how” and “who” labels may appear together, for example as “SENSITIVE:Finance”. Labels such as “SENSITIVE:Finance” and “SENSITIVE:Physics” must require the same handling rules in all departments, otherwise the security of a department's information may be breached by accident.
7.5 Handling
Each classification of information should have its own set of rules for how that information should be handled.
Although many information management schemes concentrate on the confidentiality of information, rules
should also address the organisation’s requirements for integrity and availability. These too, must be consistent
across different formats of information: making a written note of information from a conversation or phone
call, and ensuring that work is not left on a single laptop or memory stick, both protect the availability of
information. Ensuring that only authorised individuals can alter information, whether it is on paper or digital
form, protects its integrity.
Information handling rules will probably have emerged during the development of the classification levels
(especially if the advice above has been followed), but it is recommended that they be revisited after the initial
decision on classifications, to ensure that they are appropriate, clear, and provide consistent risk management.
Each classification should relate to unique rules for how information with that classification is handled: if
two different classification levels impose the same rules, information owners are likely to be confused about
which classification they should apply, and users are less likely to understand how they should handle the
information. A useful test for consistency is to consider which format of information a determined attacker
would find it easiest to gain unauthorised access to: do they need to hack central servers or can they just hang
around in the coffee room? With consistent handling rules, the difficulty (or ease) of unauthorised access
should be about the same for all formats.
Once the organisation has established and agreed a consistent set of handling rules, it should look at how
current processes require information to be used, to identify any inconsistencies. For example, if tender
documents have been given the highest classification, but have to be sent to external assessors for review,
then a “does not leave the building” rule will not work and the classification, and the handling rules, should be
reviewed and revised as necessary.
Organisations should therefore expect to make a series of adjustments to classifications and rules as inconsistencies with either the organisation's risk management or operational requirements are discovered. The goal should be classifications and rules that satisfy both.
One way to document handling rules, and to highlight the need for consistency across formats, is to start with
the high-level risk the information needs to be protected against, then list the measures to be taken for each
classification level and each format of information. For example:
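A minimal sketch of such a matrix is given below. The risk, classification names, formats and measures are all hypothetical, chosen only to show the shape of the record, not drawn from any particular institution's scheme.

# Risk being addressed: unauthorised disclosure.
# Rows are classification levels, columns are formats of information.
HANDLING_RULES = {
    ("CONFIDENTIAL", "paper"):  "Locked cabinet when unattended; no copies left on desks",
    ("CONFIDENTIAL", "email"):  "Encrypt in transit; send only to named, authorised recipients",
    ("CONFIDENTIAL", "laptop"): "Full-disk encryption; screen locked when unattended",
    ("INTERNAL", "paper"):      "Keep within organisation premises",
    ("INTERNAL", "email"):      "Internal addresses only unless approved by the information owner",
    ("INTERNAL", "laptop"):     "Device must be managed by the organisation",
}

def handling_rule(classification: str, fmt: str) -> str:
    """Look up the measure required for a given classification and format."""
    return HANDLING_RULES[(classification, fmt)]

print(handling_rule("CONFIDENTIAL", "email"))

Reading across a row in this way makes it easy to spot formats that are treated inconsistently for the same classification.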
A particular characteristic of educational organisations is that information may have different classifications at different points in its lifecycle. Both research data and exam results change their requirements for
confidentiality and availability after publication, for example. The information management process therefore
needs to be able to handle these time-related aspects. Lifecycles, like classifications, are best identified when
the collection or creation of information is planned, whether the information is destined for publication,
archiving or destruction. The Jisc model Records Retention Schedules may be a useful starting point.
To make the scheme work in practice, the organisation should ensure that:
1. Roles are chosen to take responsibility for classifying, labelling and handling information (see Chapter 8,
Roles and competencies) – this may include the information owner.
2. Decisions are made as to how the information management scheme is maintained and how compliance
with it is measured (see Chapter 10, Measurement).
3. Top management are regularly informed of how the scheme is working, and whether there is anything
they need to be aware of, or make decisions on (see Chapter 2, Information security governance).
4. Staff are trained in the content and use of the scheme (see Chapter 9, Awareness raising).
5. Problems with the implementation and running of the scheme are identified and addressed (see Chapter
11, When things go wrong: nonconformities and incidents).
6. Opportunities to make the scheme work better are identified, assessed and implemented if appropriate
(see Chapter 12, Continual improvement).
Summary
• An Information Management scheme should comprise: a classification scheme, a labelling scheme,
handling rules and processes to define how these all interact
Resources
• Information Classification Scheme – University of York
• Development of an Information Classification and Handling Policy – Cardiff University, case study
• Information Classification and Handling Policy – Cardiff University
• University Guidance on Classification of Information – University of Oxford
Reading list
• UK Government information classification scheme – www.ucisa.ac.uk/ismt31
www.gov.uk/government/uploads/system/uploads/attachment_data/file/251480/Government-Security-Classifications-April-2014.pdf
• CESG, Business Impact Level Tables – www.ucisa.ac.uk/ismt32
www.cesg.gov.uk/publications/Documents/business_impact_tables.pdf
8 Roles and competencies
There are few roles which do not involve interaction with information or information management systems (either paper or IT based). Staff present the greatest risk to information security; although malicious action by individuals cannot be ruled out, there is a greater risk of breaches occurring as a result of ignorance, inconsistent risk tolerances, or carelessness. Roles within the organisation share responsibility for achieving and maintaining appropriate information security.
It is important to distinguish between personal and organisational risk tolerance.
At the top of the organisation, governance and oversight must be the priority; the creation of goals and
objectives and the balancing of information risk (see Chapter 2, Information security governance). Top
management roles have top-level responsibility for implementation of objectives. Senior information security
specialists provide specialist advice and support to executives, along with legal roles and other information
management roles (e.g. records managers). Asset owners and technical specialists supply the decisions and
expertise to make goals and objectives a reality, while operational staff, students and contractors need to be
aware of, and comply with, information security requirements which apply to their roles.
An individual’s role in the organisation should dictate their level of responsibility for information security
processes and controls. The organisation should ensure that responsibilities are appropriate and fit-for-
purpose. These responsibilities should subsequently be reflected formally in the agreements between the
organisation and its members – whether these are employment contracts, or any other legal document
defining the relationship of a member and the organisation.
Implementing a policy and technical measures goes some way to achieving a good level of information security
in an organisation, but should be supplemented by individuals having an understanding of the value of
information security and how it relates to their jobs (also see Chapter 9, Awareness raising). Different roles and
job functions require different levels of competence, ranging from fairly elementary to a deep understanding
of a wide range of topics, and may therefore require different levels of training and awareness.
The risks associated with information need to be owned by a member of top management. In many
organisations, this role is known as the Senior Information Risk Owner (SIRO). The role of the SIRO focuses upon
ownership of information risk, which is necessary for good information security governance (see Chapter 2,
Information security governance).
It is likely that the SIRO will be part of the portfolio of responsibilities of a top level operational manager within
the organisation (e.g. the Director of Governance, Risk and Compliance). It is a key decision making role which,
in order to be effective, must fit in with any existing risk management hierarchy. The role of the SIRO is an
established part of most public sector ISMSs, notably in the NHS.
The SIRO acts as lead and champion for information risk management initiatives and ensures that top management
are adequately briefed on strategic level information risk management issues. The SIRO can also authorise
acceptance or mitigation of major information security risks that deviate from agreed standards, and determine
when (and by whom) breaches of information security will be reported to relevant external authorities.
Members of top management need to understand:
• the principles of risk management and the part that information security management plays in mitigating risks.
In addition to the above, the Senior Information Risk Owner (SIRO) for the organisation needs to have:
• the communications skills to ‘sell’ why information security is important to the other members of top management
• the ability to link information security issues to the overall organisational strategy.
The responsibility will cover being aware of, and authorising, the uses to which asset(s) are put. It may also
extend to ensuring that those that have access to assets are trained and are operating appropriately.
Asset owners must have sufficient seniority to take decisions about the protection of their asset from a
strategic perspective. Issues may arise here, as information asset owners may have no direct management
control over dispersed instances of those assets. In this case, the focus of the role may be on setting security
policy/standards for handling of the asset.
Asset owners need to understand:
• the principles of risk management and the part that information security management plays in
mitigating risks
• the contribution which policies and processes in their business areas make to organisational information security
• the processes relating to the maintenance and use of their information assets.
Understanding the interaction between business functions and information security is particularly critical for
those responsible for information relating to people in the organisation.
Maintenance of the information asset may be devolved to departments across the organisation. Consequently
some of the responsibility for ensuring that information security policy and principles are applied appropriately
to assets may also be devolved to individuals at a departmental level (for example, a departmental
administrator). Individuals fulfilling these roles will need the same competencies as the organisational
asset owner, albeit applied in a departmental context. Overall responsibility, however, remains with the
organisational asset owner.
There will be a number of roles required to support the CISO that are focused on advice, implementation
and monitoring. These are often hybrid roles requiring an understanding of legislative requirements and
organisational policy and, in IT focused teams, may also require technical knowledge.
Responsibilities may include activities such as conducting risk assessments, monitoring and reporting breaches
and monitoring for potentially malicious activity (see Chapter 10, Measurement), and developing processes
to ensure that access to systems is removed in a timely fashion from those leaving the organisation. These
responsibilities may be vested in an individual or in a team; sample job descriptions are given in the Annex:
Example resources to accompany the Toolkit.
The individuals fulfilling the implementation and monitoring roles will often form the core of the team to
manage security incidents in the organisation. The team will have responsibility for the monitoring, detection
and reporting of security breaches (see Chapter 11, When things go wrong: nonconformities and incidents),
for the implementation of centrally mediated measures, and for providing advice and guidance to local areas.
The team will also be a focal point for notifications of potential breaches of security and will lead internal
investigations. In some instances, there may also be a sub-team with specific responsibility for IT-based
information security.
The overall information security team may adopt a collaborative approach to inform decisions and guidance; a
case study from UCL included in the Annex illustrates this approach.
Where possible, areas which handle subjects strongly related to information security should be integrated, or
work closely together. Relationships should be maintained with areas which influence information security, but
which have their own identity (the physical security department may be a good example of this).
Legal professionals will also exist within the specialist information security space, to handle situations relating
to Data Protection, Freedom of Information, Information Rights management, Information Governance
Toolkit compliance, payment card security, intellectual property and other related areas. Some roles also
audit and review activities as part of organisational processes which do not directly or explicitly involve
information security or information technology - this can include paper-based information or physical access
to organisation facilities.
The competencies required for these roles include:
• to understand the relevant laws and contracts which relate to the information being handled by the
organisation
• to maintain cordial relations with external bodies which impose requirements upon the organisation
• to liaise closely with information security professionals to ensure that policies and advice are legally
acceptable
• to be able to negotiate with internal and external parties and achieve a common understanding of
ambiguous or conflicting requirements, or agree approaches to handling risk which do not exactly match
external requirements
• to be able to liaise with areas across the organisation to advise on and provide approval (as appropriate)
for proposed activities.
Principal investigators, as the lead individuals on research projects, will create information assets during the
course of their research. There is, for publicly funded research, growing emphasis on the curation of research
data and open access to both research outcomes and the data that informed it.
A significant volume of research is funded by commercial organisations. Such research may be carried out
to meet the specific needs of the funder, who may need to protect the commercial value of the research and
related data. Research contracts with commercial organisations may well stipulate that the data and outcomes
from the research are commercially sensitive and so need to be held securely, and require a data management
plan in advance as supporting evidence. In addition, if medical data is being received, additional requirements
such as the Information Governance Toolkit may need to be satisfied prior to the provision of data.
Similarly, whilst adequate security constraints may be in force for employees and contractors, those same
levels of safeguard may be overlooked when dealing with third parties, such as hardware and software
suppliers, consultants and other service providers. See Chapter 4, Scoping for advice on managing third parties.
Staff who handle administrative and personal information need to:
• understand the need to protect information (much of this may be tied to the need to adhere to data
protection principles) and the risks of not protecting information
• apply information security principles when dealing with information such as staff and student records
• understand how the information is being used.
8.5.4 Competencies of all staff
How well is responsibility for information security embedded in individual job descriptions in your organisation?
The majority of staff will have access to a limited range of systems and will require a clear understanding of relevant information security principles. The relevant competencies are:
• to understand why protecting information is important
• to understand the relationship between the information they maintain and information security
• to be able to distinguish between types of information (and hence, what is important to protect)
• to understand why it is good practice to back up data, change passwords, etc.
The final point is important to ensuring accurate data – whilst it may be clear that there is a statutory
requirement to record staff sickness absences, it may not be apparent to the person entering the data why
some details may be required. For example, HR staff may not be aware that entering an end date against an
employee’s record will trigger the termination of access to IT systems and buildings. Consequently timely entry
of that data may mitigate an information security risk. Understanding the use of data and its importance to
the organisation assists in ensuring its accuracy; central administrative staff may have a role in educating
those maintaining data within departments on how the information is used.
Postgraduate research students may require an additional level of awareness and competency, particularly if
they are making use of personal data for their research, or are part of a wider research project (for example, as
a research assistant).
Summary
• The organisation’s information security policy should define the roles and responsibilities required of staff
• All staff have responsibility for information security; this responsibility will be included in general terms
and conditions of employment as well as individual job descriptions
• An information security group should be established at a high level, chaired by a member of top
management with specific responsibility for championing information security and including owners (and
representatives of owners) of key information assets
• A team to monitor and implement information security measures should be established and should be
represented on the information security group
9 Awareness raising
This chapter describes the need for, and approaches to, improving awareness of information security, as well as mistakes to avoid. It forms part of Stage 2 – Planning, assessment and evaluation and Stage 3 – Implementation, support and operation in the Toolkit Route map.
Key topics
• The different ways to target awareness communications to members of an organisation
and to specific groups, as well as related challenges
• The qualities that awareness material should have in order to get attention and support
individuals in developing the necessary security skills
• How to align awareness activities with the rest of the organisation, in terms of managing
risk and measuring effectiveness
9.1 Introduction
Information security is a collective responsibility for all members of an organisation. Members of the organisation
must be appropriately aware of the risks to information within their role, and how they should use processes and
technologies – as provided or sanctioned by the organisation – to manage those risks. Skills must be developed
through engagement with individuals and teams working with information, coupled with delivery of targeted
knowledge to those who can apply the expertise in practice. Evidence should be available to external parties to
show due diligence in making staff aware of their responsibilities, for example after a data protection incident.
For further information see Security Awareness, Education and Training in the reading list.
Security awareness encourages people to be interested in security, by attracting attention and conveying the
effect security has within their roles.
With increased awareness, people respond better to security education – materials or courses that provide
information about threats and vulnerabilities, and the actions individuals should take to protect themselves
and the organisation. This can effect a change in perceptions and attitudes towards security.
Realising change in behaviour – the breaking of old habits and establishing of new habits – requires security
training. Through a programme of training new behaviours are presented, but also tested and corrected to
develop competencies and skills. Security training must be based in the work context and address specific
security needs, and needs to be repeated enough to form the right habits. Monitoring capabilities and user
feedback channels should be provided to determine the effectiveness of the programme. In the remainder of
this chapter awareness, education, and training will be collectively referred to as awareness activities.
Communication channels should be maintained to support response to trigger events (rather than being
developed in response to an event). A coordinated response also limits awareness content to that which is
necessary for members of the organisation - this is important, as security is an enabling task supporting people
in doing their job. Referring to the previous section, members may require awareness, education or training,
depending on how the trigger event affects their work.
Members may also look to the organisation to provide guidance to address concerns or a desire to work more
securely. Factors can include the vicarious experience of information security threats and visible enforcement
of policies, but also social elements such as wishing to avoid embarrassment, demonstrating allegiance to the
organisation and respect for others, and maintaining the reputation of the organisation. Factors should be
identified through user engagement activities such as targeted surveys, regular involvement in team talks, or
dedicated feedback channels within the organisation. (see Who does information security? within Chapter 8).
Whilst general awareness raising is extremely important, the organisation should start with the clear message
that compliance is required, both to encourage appropriate behaviour and to demonstrate to third parties,
such as the ICO, that it takes information security seriously.
See Chapter 5, Risk assessment, for a discussion of risk management – a risk-driven awareness programme
tempers the amount and relevance of training, through regular review of risks as the operating environment
and threat landscape change. Messaging should also align with the values of the organisation, and the shared
sense of professional responsibility for upholding those values.
An awareness programme cannot necessarily achieve its goals through fixed-period computer-based training
alone; embedded training develops skills to address risks as they arise within the production task i.e. the
person’s job.
Good training requires appropriate resources and expertise. Trainers must be prepared to help individuals
repeat awareness activities sufficiently often to form secure habits. Monitoring of the internalisation of
awareness material, and the effects of awareness campaigns upon the operating environment, should be
implemented, as well as a capacity for corrective feedback while skills are being developed (rather than as
an isolated, static, one-off exercise). The organisation (specifically those managing the application of the
awareness programme) should be prepared to dedicate extra resources to those who may fail to develop skills
despite training and feedback – these individuals or groups may benefit from alternative solutions such as
supporting processes or technologies rather than the application of more training.
The ENISA publication The new users guide: how to raise information security awareness notes the advantages
and disadvantages of various approaches. Note that there should be a recognition of general communications
intended for all members of the organisation (supporting the values and intended image of the organisation);
targeted communications for specific groups requiring particular competencies based on the risks they must
manage (see Chapter 8, Roles and competencies), and targeted behaviour change activities that address
specific scenarios (which especially can involve interactive or embedded training). Note that individuals learn
mostly from doing, then less so from others around them, with formal training having the smallest impact.
The Raising user awareness of information security - Cardiff University case study demonstrates an awareness
programme which uses a range of approaches together - doing so can serve to reach a wider audience within
the organisation.
Carefully-designed survey exercises or user quizzes can identify the needs of technology users and those
handling information. This includes capturing how groups use IT facilities and sensitive information. The
ENISA publication The new users guide: how to raise information security awareness includes template user
questionnaires. Surveys and questionnaires provide the user perspective, and engagement with decision-
makers and implementers identifies the system-level and strategic measures for monitoring the effectiveness
of awareness activities.
The organisation should also consider how messaging around security relates to promoted values. Content
should be able to change the way people think about security and make it fun and interesting (through
cartoons or games). Role models – ideally organisational leadership – must be seen to follow the rules. Training
should then be supported with strategic buy-in and tailored to those with authority and influence. Materials
should be of appropriate technical level, as technology-related information can fall on deaf ears. Certain buzz-
phrases can also cause a negative reaction (e.g. “information security”, ironically).
For the design of security messaging, the organisation should be realistic in the demands made on user time
and attention. Some principles from advertising may be useful: try to make material informative, brief, visual,
attractive, unexpected, or funny. The posters appended to Cardiff University’s case study on raising user
awareness of information security, in the resources section at the end of this document, demonstrate some of
these qualities in practice.
Every member of the organisation should have the skills to use the organisation’s facilities securely in their role.
In using basic organisation facilities such as provisioned email accounts, all members likely need some basic
comprehension of the threats posed by phishing, spam, and social engineering. There is also a need to manage
regular access to system accounts (through passwords or other authentication technologies), as well as the
management of data according to the organisation’s policy (see Chapter 7, Information management, for
reasons why an information management scheme must be easily understood). Information security extends
to all forms of information/records, not just electronic copies (for example exam scripts, paper-based records,
etc.). Mobility activities may require instruction on how to work remotely, use teleconferencing facilities, and
work at conferences/events in a secure manner (not just with technology, but also in respect to information
that is shared during those activities).
Staff and management should appreciate the impact of their actions on organisational reputation, as should
students (potentially including recent or not-so-recent alumni). For those involved in securing funding,
professional reputation is important, and security events can impact upon this - they will want to know how
to protect their standing in the community. Those involved in research must manage intellectual property
(unpublished work, research data, sensitive data, personal data), and those managing sensitive data must
have the skills to appropriately adhere to data protection regulations. Staff with administrative duties must,
amongst other things, consider protection of student coursework records, management of staff payroll
details, and on- and off-boarding of staff or students to managed systems. Temporary or irregular visitors and
collaborators may need a highly-targeted crib sheet that outlines their responsibilities even when they may
only be working with organisation representatives for a brief time in limited ways. Hosts must know where
to find this information and where it fits in the on-boarding process (whether shared upon arrival or made
available beforehand).
There may also be third parties such as cleaning staff and contractors to consider, as they will at the very least
have physical access to campus facilities – departmental representatives may have to understand procedure
for overseeing access, and the third parties themselves should be aware of practices that relate to their
activities on-campus.
Internet2 describes further considerations for various user groups in their Information Security Guide (see the
reading list for this chapter).
9.8 Challenges
An individual’s perception of security may make changing habits difficult. This may show itself as a failure to appreciate or understand threats (“I know how to do my job”, “Nobody would target me”); frequently-made excuses (such as futility in the face of a determined attacker); or security-conscious behaviour not being seen as an attractive or socially-accepted trait (e.g. challenging people who try to follow an employee through a secure door without authenticating). Issues such as these should be identified in user engagement activities,
such as surveys and team talks, and may require dedicated effort to change, or alternative solutions to manage
related risks. Different cultures may have very different attitudes to acceptable behaviour, and this should be
taken into account when designing awareness materials.
Individual capacity to engage with awareness material is limited, and competition for user attention is fierce.
Individuals grow used to messaging that is targeted at them – even the most well-designed security posters
can blend into the environment. Awareness techniques should then be creative and frequently changed. Care
should also be taken not to overburden individuals with unneeded details, especially as they will be the target of many other competing messages.
Pitching material can be difficult. Awareness activities should act to improve basic security practices, not to
make individuals security experts in their own right – security-specific terminology may further confuse non-
experts in security. At the other end of the scale, awareness activities can fail if they provide no explanation as
to why a behaviour should be adopted. User engagement activities can help to find a balance.
In addition, records should be kept to verify that training is actually taking place as planned, and to record
employees’ performance in training courses, if relevant.
Referring to Chapter 11, When things go wrong: nonconformities and incidents, it may be that policies are not being followed, or that training to support policies is not effective. If training is not effective, or not relevant to the role, it can also frustrate individuals by drawing time away from the productive task. Feedback on the quality of training then contributes to the management of risks.
Embedded exercises such as self-phishing (as in Raising user awareness of information security - Cardiff
University, case study) can serve as a leading indicator that something is happening or likely to happen,
where incidents (see Chapter 11, When things go wrong: nonconformities and incidents) are a lagging
indicator that something has happened. In line with organisational values and the management of risks, care should be taken when considering ways to reward good behaviour or punish bad behaviour beyond the awareness campaign.
The organisation should be in a position to identify security champions – there may be individuals who
represent good security habits and are able to discuss security well with others in their team or the larger
organisation. These role models should be supported in being seen in the organisation, and ideally would
include top executives in their number.
When deciding which training is mandatory, the suggestion of punishment for not following instructions
should not be promoted if it is known that the instructions will be disobeyed (“we tell them not to, but we
know they do it anyway”) – this would impact the visibility of policy enforcement. It should be determined
upfront whether there are the monitoring capabilities to detect an infringement, and the resources and
buy-in to take action.
On a related note, training records are a valuable source of evidence that people have at least undergone
training. This is helpful when attempting to demonstrate that due diligence has been followed, especially
when working with an external body (such as the ICO) to determine the cause(s) of an incident, and
possibly assign liability. Records should include the names of people trained, dates, and any scores or
training measurements (see Chapter 10, Measurement).
If technologies or processes are consistently not working or being ignored, no amount of training may
persuade users to use them; consider alternative solutions beyond awareness.
There are a number of indirect indicators that training is not working or will not work: security education
is static, one-way and saturates attention (such as one-way “briefings”, lectures, and posters); efforts are
fragmented; the programme is the same for everyone regardless of responsibilities or where the message
is best targeted; and education activities remain the same across consecutive years (regardless of any new
technologies or feedback gathered in that time). These points should be addressed in the dialogue with
decision-makers (see Who does information security? within Chapter 8, Roles and competencies). It is
necessary to set a realistic timeline for achieving change in security habits.
Summary
• For greatest impact, target content to match identified risks and roles within the organisation, in response
to changes in the organisation environment and threat landscape
• Target learning through media, practical instruction, or theoretical instruction, using physical handouts
such as flyers, electronic communications, fixed-place messaging like posters, and persistent messaging
(such as screensavers and online training)
• Consider that security is supporting the individual to do their job well, and that there is competition for
their attention – security needs to be there to help develop skills that will be applied in targeted roles
Reading list
• Roper, Grau and Fischer, Security Awareness, Education and Training, 2006
• Sasse et al, Human Factors Working Group White Paper: Human Vulnerabilities in Security Systems, Cyber Security KTN, 2007
• Information Security Forum, From Promoting Awareness to Embedding Behaviours, Version 2, 2014
10 Measurement
This chapter describes the use of measurements for information security management, both generated within the organisation and drawn from external sources. It forms part of Stage 4 – Performance, evaluation and improvement in the Toolkit Route map.
Many terms are used in the field of measurement, such as statistic, metric, and KPI. Find out what the rest of your organisation calls them, and use the same terms with the same meaning in your ISMS.
Key topics
• Why measurements are worth using
• How to identify useful measurements, and evaluate the usefulness of the ones you are
already using
• How to use measurements
• Judging the performance of the ISMS (and its controls) over time: which actions are taking the
organisation closer to/further from its objectives?
10.3 How to design a useful measurement
In simple terms, an organisation can measure:
• the performance of the ISMS itself
• the effectiveness of individual controls
• the level of threat facing the organisation.
Risk cannot be measured directly, but can be determined indirectly. Measurements of threat and of control effectiveness, as well as information from detective controls, are used to inform risk assessment, one of the ISMS processes (see Chapter 5, Risk assessment). This produces information on actual and acceptable risk.
It is important that measurements consider people and processes, as well as technology.
Measurements should include, in their definition, the following:
• what is to be measured
• how the measurement will be made
• the purpose for which the measurement is taken
• when and/or how often a measurement should take place
• which role is responsible for ensuring that the measurement takes place
• how a measurement is to be used (including reporting format)
• the intended audience for a measurement (e.g. top management, or technical specialists)
• the classification of the information obtained (see Chapter 7, Information management).
See SANS guidelines and ISO/IEC 27004 for more information on measurement definition.
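One way to capture such a definition is as a simple structured record covering the fields listed above. The sketch below is illustrative only; the field values shown are hypothetical, not recommended settings.

from dataclasses import dataclass

@dataclass
class MeasurementDefinition:
    what: str              # what is to be measured
    how: str               # how the measurement will be made
    purpose: str           # why the measurement is taken
    frequency: str         # when and/or how often it takes place
    responsible_role: str  # role ensuring the measurement happens
    reporting: str         # how it is used, including reporting format
    audience: str          # intended audience
    classification: str    # classification of the resulting information

patch_window = MeasurementDefinition(
    what="Days between vulnerability publication and patch installation",
    how="Automated report from patch management tooling",
    purpose="Track exposure to known vulnerabilities",
    frequency="Monthly",
    responsible_role="IT security analyst",
    reporting="Trend chart in monthly security report",
    audience="Information security group",
    classification="INTERNAL",
)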
The role of internal audit is key in ensuring that the organisation has, and maintains, an effective ISMS. Even if
an organisation is not aiming for compliance with ISO/IEC 27001, its internal audit function can still undertake
periodic reviews of different aspects of information security, especially to determine whether effective
monitoring and controls are in place.
The presentation of measurements is critical to their usefulness, as appropriate presentation allows audiences
to interpret the information easily. Reporting should be tailored to the audience. See Chapter 2, Information
security governance, for suggestions on developing material for top management.
Reporting can also be used to highlight features of the information which the audience would otherwise not
have noticed, or to explain subtleties in the measurement which can easily be misinterpreted.
The context of a measurement is also important; for example, if an organisation carries out an awareness
campaign and subsequently sees an increase in reporting, this should normally be seen as a sign of increased
awareness, rather than of decreased security.
Measurements gathered from other environments may not be directly comparable and may have been
collected in different ways.
• “opportunity window” between vulnerabilities being known and patches being installed
• number of unpatched systems at any given time
• percentage of sensitive data being handled in secure environments
• number of copyright complaints (which are correlated to the possibility of legal action, but not strongly
correlated to the number of actual copyright violations)
• percentage attendance at management review meetings
• percentage of policies reviewed on or before their review dates
• percentage of controls whose effectiveness is being measured
• number of minor and major non-conformities found at last internal audit
• time to resolve non-conformities
• shortfall in resources.
10.7.3 Measurements of control effectiveness
The following examples are designed to show whether a control which the organisation has decided to
implement is performing as required.
Normalising these values to incidents per 100 users creates statistics that can be compared between
organisations: differences in the rates of occurrence, the proportions of different categories or the trends in
their prevalence have prompted useful discussions of the impact of different security approaches.
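As a simple illustration of this normalisation (the incident counts and user populations below are invented for the example, not real figures):

def incidents_per_100_users(incident_count: int, user_count: int) -> float:
    """Normalise an incident count to a rate per 100 users,
    so organisations of different sizes can be compared."""
    return 100 * incident_count / user_count

# Invented example figures: 45 phishing incidents among 30,000 users
# versus 12 among 6,000 users.
print(round(incidents_per_100_users(45, 30000), 2))  # 0.15 per 100 users
print(round(incidents_per_100_users(12, 6000), 2))   # 0.2 per 100 users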
Summary
• Measurements may be used to measure the performance of an ISMS, the effectiveness of controls, to
track threat levels, and as part of controls themselves
• Every measurement must have a purpose: to direct action, and/or to support decision making
• Suitable presentation of measures is critical to their effectiveness
Resources Reading list
How to evaluate a measurement Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2)
www.ucisa.ac.uk/ismt39
http://energy.gov/oe/cybersecurity-capability-maturity-model-c2m2-program/electricity-subsec-
tor-cybersecurity
11 When things go wrong: nonconformities and incidents
Key topics
• How to detect and recover when things go wrong
• How to reduce the impact of adverse events
• How learning from adverse events can improve information security
11.1 Introduction
It is dangerous to assume that nothing will ever go wrong: information security is about managing the risk
from adverse events, not eliminating them. Planning how to recover from failures is an important aspect of
managing information security.
Nonconformities may increase the likelihood of incidents, and incidents are one way in which nonconformities are discovered. However, not all incidents indicate the presence of nonconformities. A realistic
ISMS will accept, and manage, a certain frequency and level of incidents (see Chapter 5, Risk assessment).
Both incidents and nonconformities require a prepared and timely response that remedies the immediate
problem and learns lessons to reduce the likelihood of recurrence. Both involve identification, corrective action
and analysis of root causes, which may follow similar processes. Both may (nonconformities must) lead to
improvements in the ISMS.
A key difference is that, subject to audit requirements, the organisation will normally control the timescale
on which it responds to a nonconformity, typically weeks or months. Incidents arise and evolve outside the
organisation’s control and require a technical response in minutes or hours. Incident response should link
to the organisation’s crisis communications plan (for example, see the final case study), and to business
continuity and disaster recovery plans.
A nonconformity occurs when either:
• the documented management system deviates from the requirements of ISO/IEC 27001
• the implemented management system deviates from its documented state.
Nonconformities may result from not doing enough, or from doing too much (overkill). For example, blocking
the use of USB sticks where a block has not been justified is a nonconformity; but so is identifying a need for
such a block, documenting that it is in place, and then not implementing it.
Nonconformities are often found during an audit. A Stage 1 audit (“document review”) may identify that
the ISMS documentation does not contain what is required by the ISO standard; a Stage 2 (“conformance”)
audit may identify that the organisation’s practice does not match its documentation. For example, a required firewall might not have been installed, or a password policy might be being ignored.
Nonconformities may also be discovered outside the audit process, including through information security
incidents or if staff have difficulty implementing or working within a prescribed control (see Chapter 6,
Controls, for more advice). Organisations should ensure their processes can capture these nonconformities and
that staff feel comfortable pointing them out.
If an existing process, such as one forming part of ITIL service management or COBIT IT governance, is available and suitable for handling corrective actions, it should be used to simplify matters.
The process must ensure that each nonconformity is reviewed and appropriate corrective actions taken to deal
with it and any consequences. Corrective actions may involve any part of the system, from a more accurate
implementation of a required technical security control, better training for users in implementing it, to a change in
the risk management process itself. Records of nonconformities and corrective actions must be kept.
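The records themselves can be very simple. The sketch below shows one possible shape for a nonconformity record; the field names and example values are hypothetical, not prescribed by ISO/IEC 27001.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Nonconformity:
    reference: str                 # identifier used in audit records
    description: str               # what deviates, and from what requirement
    source: str                    # e.g. internal audit, incident, staff report
    corrective_actions: List[str] = field(default_factory=list)
    root_cause: str = ""           # completed once the cause has been reviewed
    status: str = "open"           # open / in progress / closed

nc = Nonconformity(
    reference="NC-2015-07",
    description="Documented USB blocking control not implemented on lab PCs",
    source="Stage 2 audit",
)
nc.corrective_actions.append("Apply device control policy to lab build")
nc.root_cause = "Lab build image not updated after policy change"
nc.status = "in progress"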
The causes of nonconformities must also be reviewed, investigated and corrected. This may reveal wider issues
across the ISMS, where a nonconformity may recur at different times or in different parts of the system. Wider
corrective actions may be required if, for example, the ISMS has failed to identify a significant risk or selected an
unsuitable control. Again, these conclusions must be documented and the required actions managed to completion.
Nonconformities should be used as a tool for the continual improvement of the organisation’s information security
(see Chapter 12, Continual improvement). It is important to remember that it is the organisation, not the individual,
that is being assessed. Using nonconformities to assign blame will discourage correct behaviour among those
involved in the ISMS.
Managing incidents effectively can significantly reduce their impact and so is a valuable way to enhance
information security. For research and education organisations this reactive approach is particularly important
since the wide range of legitimate users and activities makes strong preventive controls less appropriate (also
see Chapter 6, Controls).
Information security incidents may also highlight areas needing improvement: for example identifying policies
that are not followed, policies that are not effective, risks that have changed, or systems lacking necessary
resources or skills. Reviewing incidents may also, of course, reveal the need for improvements to the incident
management processes themselves.
Both outcomes require incident management to be planned, documented, resourced and recorded. The
organisation must first define what it classes as an incident and plan its response.
Different events may qualify as incidents (“business-affecting events”) in different parts of the organisation; the desired
response to an incident may also vary. In general the accidental destruction of a user’s file is unlikely to constitute an
incident, but the accidental destruction of a business-critical database almost certainly will. If the organisation’s web
server is compromised, the priority is likely to be to re-establish a secure web presence: if a server storing sensitive
research data is compromised, the priority will be to find out what data may have been affected and how. The incident
response policy may comprise a standard set of incident definitions and desired outcomes with variations covering
areas with different requirements, consistent with the classification of information processed.
Once policy is set, an incident response plan should be developed to implement it. The plan will comprise processes,
procedures and the systems and resources needed to implement them. These will themselves require preparation.
The CERT Coordination Center’s Mission Risk Diagnostic for Incident Management Capabilities provides a useful
health check. Exercises are a good way to identify problems and to train incident responders to work together.
Case studies in this chapter include how one organisation developed its incident response policy and plan and two
examples of incident response plans.
Incident coordinators must be granted, or be able to quickly obtain, the authority to modify or suspend any
of the organisation’s activities until they are restored to a secure state. For example a compromised computer
may need to be disconnected from the network, a suspect account have its access rights withdrawn, or a
research activity suspended while the integrity of its data is checked.
Incidents may be detected directly, by someone noticing a security failure, or indirectly, by human or computer
monitoring of computer logs or other records. The organisation should ensure it has the reporting and monitoring
systems needed to detect the types of incident defined in the policy, and that these are known to and trusted by all
those who may detect signs of an incident. Information gathered during security events and incidents is likely to be
sensitive: a case study shows one organisation’s policy for handling this material.
Successful incident management depends upon a critical resource – availability of the right people/roles to receive
alerts, do the analysis, coordinate work, and carry out response activities. The organisation must have communication
routes previously agreed, and tested – and contingency plans for those (frequent) cases where someone is unavailable.
Not all reported events will indicate incidents. An initial triage process determines whether a report, or group of
reports, should be treated as an incident or whether another process, for example for faults or helpdesk enquiries, is
appropriate. Those reports that are classed as incidents are likely to require further analysis to determine how best to
restore the organisation to its desired operational state.
In complex incidents, the response stage may begin with containment to prevent the impact getting worse. This gives
more time for the steps required to remove the incident’s cause and, to the extent possible, investigate and mitigate
its consequences. Both containment and response are likely to involve working with those having relevant expertise
both inside and outside the organisation (see Chapter 8, Roles and competencies), as well as official communications
channels to ensure that the right messages are getting out. All actions taken to respond to incidents should be
recorded, to ensure the response is effective and to inform the subsequent review.
These reviews should also generate a report for the organisation’s ISMS review process (see Chapter 12, Continual
improvement). The detail in this report may vary. For routine incidents resolved successfully it may just
summarise the number of incidents and the systems or units affected; but for serious or novel incidents, the
report should include the root cause of the incident (to the extent that this can be determined), the impact on
the organisation, and the controls that were, and were not, effective in managing it.
The final case study describes one organisation’s response to a security incident affecting personal data.
Summary
• The ISMS should aim to manage the level and severity of adverse events, not to eliminate them
• The ISMS should contain plans to respond to, and learn from, these events
• Incident response requires trusted cooperation both within and outside the organisation; trust must be
established in advance
12 Continual improvement
This chapter defines continual improvement and looks at how to initiate processes and activities to achieve it. It forms part of Stage 4 – Performance, evaluation and improvement in the Toolkit Route map.
Key topics
• What is continual improvement?
• How to identify opportunities for improvement
• How to create an improvement plan for your organisation
Continual improvement involves:
• improving the efficiency of the ISMS and controls in meeting security objectives; and/or
• improving the effectiveness of the ISMS and controls in meeting security objectives.
Continual improvement needs to be promoted through the leadership and commitment of management, and should be included in policy, planning and resources. Implementing a continual improvement process will help an
organisation create prioritised and cost-effective improvements that are aligned to business requirements and
available resources. Resulting monitoring and reporting capabilities will then increase the potential to identify
further opportunities for improvement.
The process for continual improvement should be defined and overseen by the information security function
within the organisation. The process should be integrated into existing procedures and processes where
possible, so that existing process managers will be responsible for implementing the continual improvement
process within their respective area.
Among the many frameworks for continual improvement are COBIT, the Deming cycle and ITIL:
• COBIT provides a governance and management framework for enterprise IT, including maturity assessments which can be used to drive improvement
• The Deming cycle is a method for continual improvement, characterised by the Plan-Do-Check-Act
iterative steps
• The ITIL set of practices for IT service management defines a seven-step improvement process.
One example of how these processes might relate to continual improvement of an ISMS is given below:
Improving practice (i.e. what the organisation chooses to do, rather than what its members do) can increase
the effectiveness of the ISMS and resulting security controls.
Improving processes can increase the efficiency of controls and surrounding processes.
In reality, there is considerable overlap since, for example, improving strategy may result in an increase of both
effectiveness and efficiency. Examples of these types of improvement and their effect(s) are given in the table
below:
Table 8 - Sources of improvement opportunities (including, for example, organisational changes)
Following the ITIL continual service improvement approach, organisations can create an improvement plan by
considering the following:
• Who will provide ownership and direction for information security improvement?
• Where does information security report within the organisation (i.e. level of seniority)?
• How quickly does the organisation wish/need to change?
• How much resource can be made available?
• What is the scope and remit of the improvement programme?
• How can goals be made specific, in order to provide clear direction and measurable targets?
12.6.2 Where are we now?
To measure its current level of information security maturity, the organisation can carry out benchmarking and comparisons with similar organisations. This can give an indication of relative maturity and help to prioritise certain work areas.
Assessment may also be carried out via self-assessment, internal or external audit. Self-assessment can be a useful tool, but involves time and effort from internal staff, and the level of assurance may not be as great as that provided by a more formal audit. However, this will often be an appropriate starting point for an improvement process. External audits may provide more assurance and act as a greater catalyst for improvement, but can be more costly.
12.6.3 Planning and implementing (where do you want to be and how to get there)
Once the organisation has analysed its current state and compared it to its desired state, the results should be
documented and compared in a gap analysis, which will form the basis of the improvement programme.
The gap analysis will provide the objectives for the improvement programme, which should be prioritised
according to business requirements and an assessment of how much effort is required. Certain objectives
might provide the opportunity for “quick-wins”, which can be useful to improve buy-in and demonstrate
progress, whereas other activities may need long-term projects to achieve. Benchmarking against similar
organisations can also prove to be useful during the prioritisation process.
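A minimal sketch of such a gap analysis and prioritisation, assuming hypothetical control areas, maturity scores and effort estimates, might look like this:

```python
# Hypothetical gap analysis: current and desired maturity (0-5) per area,
# with a rough effort estimate (1-5). Areas with a small effort and a
# non-zero gap are flagged as possible quick wins.
areas = [
    ("Awareness training", 1, 3, 1),
    ("Access management",  2, 4, 3),
    ("Incident response",  2, 3, 2),
]

def improvement_plan(areas):
    plan = []
    for name, current, desired, effort in areas:
        gap = desired - current
        if gap <= 0:
            continue                      # already at or above the desired state
        plan.append({
            "area": name,
            "gap": gap,
            "effort": effort,
            "quick_win": effort <= 1,
            "priority": gap / effort,     # crude ratio; a real plan would also
        })                                # weigh business requirements
    return sorted(plan, key=lambda item: item["priority"], reverse=True)

for item in improvement_plan(areas):
    print(item)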
Improvement plans for identified activities can then be planned in an incremental manner to increase the
overall level of maturity in a measurable way. For example, in planning to improve awareness of individuals
across the organisation, where the organisation has identified that it needs to train everyone annually, the
following stages might provide observable milestones:
Table 9 - Simple maturity model for awareness activities
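As an illustration of such a staged model (the stage descriptions below are assumptions for the example, not the contents of Table 9):

```python
# Illustrative maturity stages for awareness activity; the descriptions are
# assumptions for the example, not the content of Table 9.
stages = [
    (1, "Ad hoc: awareness material available on request"),
    (2, "Induction: all new staff and students receive awareness training"),
    (3, "Targeted: role-based training for those handling sensitive information"),
    (4, "Embedded: everyone trained annually, with completion measured and reported"),
]

def current_stage(evidence):
    """Return the highest stage for which self-assessment evidence exists.
    `evidence` maps a stage number to True or False."""
    achieved = [number for number, _ in stages if evidence.get(number)]
    return max(achieved) if achieved else 0

print(current_stage({1: True, 2: True, 3: False}))   # -> 2
```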
12.7 Measurement
In order to determine whether or not goals have been achieved, appropriate measurements should be used
(see Chapter 10, Measurement, for further information).
Summary
• The goal of continual improvement is to iteratively identify and implement ways to make an established
ISMS more cost effective and appropriate
• Continual improvement should be an objective from the outset when implementing any ISMS
13 Policies
This chapter covers the organisation's top level policy for information security. It forms part of Foundations in the Toolkit Route map.
Every organisation requires a top level policy for information security which must define clear lines of
responsibility for delivery and risk ownership. The policy and associated responsibility should be developed
as a result of the governance arrangements in place within the organisation (see Chapter 2, Information
security governance), and in particular the policy must be approved by the highest body in the organisation’s
governance framework.
Managing information security risks should be part of an organisation’s overall risk management strategy,
and the formulation of information security policy should form part of that strategy. In organisations with a
low maturity in terms of risk management, a governance structure may need to be developed specifically for
the purpose of writing the information security policy and the use of a RACI matrix (Responsible, Accountable,
Consulted, and Informed) may help to establish it.
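As a minimal illustration, a RACI matrix for developing the policy might be recorded as follows; the activities and role names are assumptions for the example, not part of the Toolkit's guidance.

```python
# Hypothetical RACI matrix for producing an information security policy.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "Draft policy":       {"CISO": "R", "Governing body": "A",   "Legal": "C", "All staff": "I"},
    "Approve policy":     {"CISO": "C", "Governing body": "A/R", "Legal": "C", "All staff": "I"},
    "Communicate policy": {"CISO": "R", "Governing body": "A",   "Legal": "I", "All staff": "I"},
}

def roles_with(matrix, code):
    """List, for each activity, the roles carrying a given RACI code."""
    return {activity: [role for role, codes in assignments.items() if code in codes]
            for activity, assignments in matrix.items()}

print(roles_with(raci, "A"))   # who is accountable for each activity
```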
The supplementary volume to this publication which, at the time of writing, is still in production, builds on the
third edition of UCISA’s Information Security Toolkit published in 2007 (the predecessor of this publication) and
will include revised policies to comply with ISO 27001:2013.
Summary
• The information security policy should not stand alone; it should be part of an organisation’s risk
management strategy and must be approved by the highest governance body in the organisation
• The organisation’s policy for information security defines responsibility for delivery and risk ownership
Resources: Template for a generic policy
Reading list: No items
14 Conclusion
This Toolkit has described what is needed for organisations in the educational sector to design, establish, maintain and improve an information security management system. From getting a clear picture of what the organisation is, to achieving buy-in, to selecting controls, implementing business changes and ensuring that these changes are properly embedded by the use of awareness materials, measurement and reporting, each step builds on the previous one to create something which is genuinely worth having and which continues to be relevant and cost-effective.
In summary, it is vital for everyone in the organisation to understand that this is not a finite project that can be implemented and forgotten about. Continual improvement is crucial, for a successful ISMS will require ongoing investment. Analysis of information security threats and incidents over the years shows that there is no room for complacency, as threats and risks are ever-changing.
A well-managed ISMS is a powerful business enabler.
The battle for hearts and minds is a key one. At all times, organisations must ensure that their ISMS is as
user-friendly as possible. Information security professionals should work closely with colleagues across the
organisation to maintain an holistic approach. The way to ensure the continued relevance of an ISMS is to keep
it closely and visibly linked to the organisation’s strategic objectives and risk appetite.
The summaries for each chapter are collected below. Readers are also encouraged to review and use the resources and material referenced in the reading lists at the end of each chapter and in the Annex: Example resources to accompany the Toolkit.
Overall summary
3 Drivers
• Drivers can operate at a very high level (e.g. organisational reputation), or be very granular in their level of
detail (e.g. researcher reputation)
• Drivers can be internal (e.g. responsibility to students and staff), but are often external (e.g. the
Information Governance Toolkit)
4 Scoping
• Successfully defining and agreeing the scope of an ISMS from the beginning is a critical success factor in
the implementation of any ISMS – if the scope is wrong you will not know where you are going or when
you got there!
• There are different scopes involved in implementing information security in an organisation, from high-level scopes covering the entire organisation to the scope of a particular project.
• Start small with your scope, demonstrate success and build from there.
• Monitor and review, and if your scope is wrong then change it accordingly
5 Risk assessment
• Information risk management is a systematic, consistent, iterative process where risks are identified and
assessed before being treated and monitored
• Information risk treatment options should not cost more to deploy and manage than the cost of the risk
itself
• Information risk management should not be done in a vacuum, but as part of the overall organisational
risk management process
6 Controls
• Controls reduce the impact and/or likelihood of incidents
• Ready-made control sets should be considered carefully
• Controls form part of an ISMS – they do not replace it
• Controls should be traceable to the requirements/risks which they are intended to address
7 Information management
• An information management scheme should comprise: a classification scheme, a labelling scheme,
handling rules and processes to define how these all interact
• An information security group should be established at a high level, chaired by a member of top
management with specific responsibility for championing information security and including owners (and
representatives of owners) of key information assets
9 Awareness raising
• For greatest impact, target content to match identified risks and roles within the organisation, in response
to changes in the organisation environment and threat landscape
• Target learning through media, practical instruction, or theoretical instruction, using physical hand-outs
such as flyers, electronic communications, fixed-place messaging like posters, and persistent messaging
(such as screensavers and online training)
• Consider that security is supporting the individual to do their job well, and that there is competition for
their attention – security needs to be there to help develop skills that will be applied in targeted roles
10 Measurement
• Measurements may be used to measure the performance of an ISMS, the effectiveness of controls, to
track threat levels, and as part of controls themselves (see Chapter 6, Controls).
• Every measurement must have a purpose: to direct action, and/or to support decision making
• Suitable presentation of measures is critical to their effectiveness
12 Continual improvement
• The goal of continual improvement is to iteratively identify and implement ways to make an established
ISMS more cost effective and appropriate
• Continual improvement should be an objective from the outset when implementing any ISMS
13 Policies
• The information security policy should not stand alone; it should be part of an organisation’s risk
management strategy and must be approved by the highest governance body in the organisation
• The organisation’s policy for information security defines responsibility for delivery and risk ownership
Annex
Example resources to accompany the
Toolkit
This section provides examples of actual documents and resources created and used
successfully by organisations in the educational sector.
They are largely published as supplied – the text, style and tone of these documents have not
been altered to match the remainder of this document.
They are not intended to be perfect, nor to be used verbatim by the reader, but to provide
concrete examples of what has worked for others.
[Diagram: an example information security management framework, comprising the following elements]
• Policy: overarching policy; Information Classification and Handling Rules
• Remote and Mobile Working: policy; processes; tools to enable secure remote and mobile working
• Network Security: secure authentication; endpoint integrity; account and privileges management
• Physical Security: secure areas; access management; secure disposal
• Testing, Monitoring and Continual Improvement: framework review; penetration and vulnerability tests; metrics gathering
• External Assurance: Baseline Personnel Security Standard; ISO27001
• Management framework: guiding principles; assurance; roles and responsibilities; audit; definitions and key reference documents; risk assessment methodology
• Cycle: Information Risk Assessment, Selection of Security Controls, Periodic Review
“Reputation is like fine china: once broken it’s very hard to repair.”
— Abraham Lincoln
Information risk management, or information security, is a crucial part of <ORGANISATION>’s operations. This document explains the
case for improving our capabilities in this area, and provides an outline strategy to do so.
Without adequate information security, the reputation of the University cannot be maintained. With our foundation of trust broken,
funding will be awarded to other, worthier recipients, and lucrative partnerships will be dissolved. Research papers may no longer
be accepted for publication. Lawsuits and fines could bring further financial losses, compounding reputational harm, and impacting
<ORGANISATION>’s ability to recruit the best and brightest students and staff.
In summary, the goal of information security is to enable the University to be, and to be seen as, a safe pair of hands. The role of the
<Infosec Department> is to advise, monitor and support the University in this area.
Within <ORGANISATION>, incidents are increasing in frequency and severity <stats here>. The frequency of near misses and the risk of a
catastrophe are also at <high/alarming/unacceptable> levels. <Give concrete example here>.
• An increasing appetite for partnerships and research collaborations handling sensitive information.
• The widespread use of unsuitable tools, such as email, for handling highly confidential information.
• A widespread view that information risk management can be left until more important issues have been addressed.
Do nothing
This is, as should be clear from the previous section, untenable. As threat levels are increasing, doing nothing actually means losing
ground. This approach will result in increasing numbers of serious incidents, loss of reputation, and other damage to the University.
However, this “one size fits all” approach is inevitably going to be overkill in some environments, while inadequate in others. The University structure is also federated, so a blanket approach is likely to be challenging to implement.
To achieve this goal, <ORGANISATION> could create three sets of good practice baseline recommendations, each relating to a level of
risk. University members would be given the right support to select and tailor the most suitable baseline in any given situation. Their
decisions on information risk would be independently validated5 against the University’s risk tolerance, while operational activities to
manage information risk would be integrated into normal reporting lines.
1. Assign overall responsibility for information risk management to a top level role.
3. Create a detailed plan for independent governance of information risk that avoids conflicts of interest, includes segregation of
duties, and leverages <ORGANISATION>’s existing resources and expertise.
4. Engage further with key areas handling medical, personal and other sensitive information to assess risks and requirements.
7. Invest in a broad-reaching programme of awareness and support for University members, with additional support for key areas.
The key message: Information security is everyone’s responsibility.
5 In order to avoid the classic problem of “marking one’s own homework”.
Introduction
[ORGANISATION]’s computer and information systems underpin all [ORGANISATION]’s activities, and are essential to [ENTER MAIN
BUSINESS/FUNCTIONAL OBJECTIVES HERE].
The [ORGANISATION] recognises the need for its members, employees and visitors to have access to the information they require in
order to carry out their work and recognises the role of information security in enabling this.
Security of information must therefore be an integral part of the [ORGANISATION]’s management structure in order to maintain continuity of its business, ensure legal compliance and adhere to the University’s own regulations and policies.
Purpose
This information security policy defines the framework within which information security will be managed across the [ORGANISATION]
and demonstrates management direction and support for information security throughout the [ORGANISATION]. This policy is the
primary policy under which all other technical and security related policies reside. [ENTER ANNEX LINK HERE] provides a list of all other
policies and procedures that support this policy.
Scope
This policy is applicable to and will be communicated to [EXAMPLE: all staff, students and other relevant parties including senior and
junior members, employees, visitors and contractors].
It covers, but is not limited to, any systems or data attached to the [ORGANISATION]’s computer or telephone networks, any systems supplied by the [ORGANISATION], any communications sent to or from the [ORGANISATION] and any data (which is owned either by the University or the [ORGANISATION]) held on systems external to the [ORGANISATION]’s network.
[SENIOR MANAGEMENT GROUP] are responsible for reviewing this policy on an annual basis. They will provide clear direction, visible
support and promote information security through appropriate commitment and adequate resourcing.
The [INFORMATION SECURITY ROLE] is responsible for the management of information security and, specifically, to provide advice and
guidance on the implementation of this policy.
The [INFORMATION SECURITY ADVISORY GROUP] comprising representatives from all relevant sections of the [DEPARTMENT/COLLEGE/
OTHER UNIT] is responsible for identifying and assessing security requirements and risks.
It is the responsibility of all line managers to implement this policy within their area of responsibility and to ensure that all staff for
which they are responsible are 1) made fully aware of the policy; and 2) given appropriate support and resources to comply.
Policy Statement
The [ORGANISATION] is committed to protecting the security of its information and information systems. It is also committed to a
policy of education, training and awareness for information security and to ensuring the continued business of the [DEPARTMENT/
COLLEGE/HALL]. It is the [ORGANISATION]’s policy that the information it manages shall be appropriately secured to protect against
breaches of confidentiality, failures of integrity or interruptions to the availability of that information and to ensure appropriate legal,
regulatory and contractual compliance.
To determine the appropriate level of security control that should be applied to information systems, a process of risk assessment shall
be carried out in order to define security requirements and identify the probability and impact of security breaches.
Specialist advice on information security shall be made available throughout the [DEPARTMENT/COLLEGE/OTHER UNIT] and advice can
be sought via the University’s Information Security Team [ADD URL] and/or [ADD ADDITIONAL URLS, if required].
It is the [UNIT NAME]’s policy to report all information or IT security incidents, or other suspected breaches of this policy. The [UNIT NAME] will follow the University’s advice for the escalation and reporting of security incidents, and data breaches that involve personal data will subsequently be reported to the University’s Data Protection Officer. Records of the number of security breaches and their type should be kept and reported on a regular basis to the [SENIOR MANAGEMENT GROUP/INFORMATION SECURITY ROLE].
Failure to comply with this policy that occurs as a result of deliberate, malicious or negligent behaviour may result in disciplinary action.
[Table extract: roles and responsibilities for information assets]
• Data Stewards. Action: own risks associated with specified IA systems; risk plans; responsible for data quality within the IA system. Assessments: provide assurance on quality and security to IAOs; conduct granular risk assessments; oversee implementation of quality and security controls.
• BIS Team, contractors, IT Services and/or locally appointed. Controls: responsible for the technical environment.
Developing an information security policy – University of York, case study
Like many organisations, in York we knew that our existing regulations and policies were old and increasingly inadequate. Research
contracts were asking for policies which aligned with ISO/IEC 27001, our auditors were commenting on lack of policies, and IS/IT staff
wanted more policy, both general and detailed to solve problems.
We had existing policy on Data Protection, Freedom of Information, Records Management and IT operations. This gave us a way of
establishing the hierarchy of the new policies, but all the existing policies were rewritten during the course of the work.
Our first attempt was to take the specimen policies in the previous UCISA Toolkit and set about editing them. We thought this would
be quick, but it turned out to be a disaster. The policies were too general, and in many cases did not fit with our institutional ethos. We
found that such key policies are very institution based - specimen policies help to guide, but the real policies are very much about what will be tolerated and what will work in the setting of a given institution.
For our second attempt, we started from scratch. We agreed with institutional senior management an approval process for Information
Security policies and drafted a list of policies that we needed based on ISO/IEC 27001. In our first attempt, we used existing committee
structures to approve policy and the delays introduced were very large. For the new process, a senior member of staff was delegated
power to approve policies in this area, with only policies that were felt to be contentious or that affected other areas taken to
committee.
From there, we defined the general format of a policy. For us, a policy was to be a short (two page max) document at a high level.
Underneath each policy there would be method statements and guidance containing the detail. We also agreed some basic definitions
and use of terminology (e.g. must vs. should).
We also agreed a set of guiding principles:
• our aim was to help people to use data safely, not lock it away and make it hard for people
• the policies should apply to all data, irrespective of format (paper or electronic)
• avoid jargon
• exploit current good practice, introducing changes only where necessary
• we would not do any publicity or training until most of the policy suite was in place
This framework and these principles helped us to get past some initial stumbling blocks around format, ensured that the suite of policies has a consistent feel, with common sections and definitions, and gave us some high-level principles to help clarify what was and was not in scope for the policy suite.
Finally we agreed to avoid wide consultation early on in the process. We found it better to have something which has been worked on
and is in quite good shape before opening it up for wider consultation. Without a specific document to focus on, we found that people
found the issues hard to grasp and discussions were very unfocussed.
Once we had that overall process agreed, we started working our way through the list. We found some tricky issues during the process and, as ever, progress was slower than we had expected, but overall we have made good progress.
Even after a year of work, we are not done. Some of the subsidiary policies are incomplete and we are only just starting awareness raising, but the policies are being referenced when new projects are started or bids submitted, the auditors are happier and external funders are being assured that the University can handle sensitive research data in a secure fashion.
Summary
• Create policies tailored to your environment; do not copy templates blindly
• Provide well-developed documents for wider consultation, rather than a very initial draft
• Develop and agree a simple approvals process
At UCL, we began to formalise our information security strategy in late 2012, when the post of Head of Information Security was
created.
The first stage involved finding out what was already happening, not on the process/controls level, but on the strategic and governance
levels. We discovered the following:
• An existing UCL-wide risk management process, which was under further development
• The IT department was working to implement ITIL for improved process management and better customer service.
• There was a major University initiative to formalise project management.
• A role handling data protection and Freedom of Information, in the Legal Department
• A PCI DSS governance role in the Finance Department
• A Records Manager in the Library with responsibility for setting data retention policy
• A project underway to provide a secure data storage and processing facility (the Data Safe Haven project) in the School of Life and
Medical Sciences. This project had already requested participation from the Information Security area.
All of the above activities revealed both organisational structures and roles with which information security management activities
would have to interoperate, and existing processes which we could use or adapt.
But, although we could already see how to link information security management to some existing processes (e.g. risk management),
and could see some new processes we’d need, we could not actually make any changes until we had top level buy-in: and a strategy.
We started by establishing how changes to top level university activities were normally raised, discussed and approved. It turned out
that the existing management hierarchy was clearly defined and provided us with a route which looked as if it could work: through the
Security Working Group and a number of other committees to the Senior Management Team of the University.
In parallel with identifying a suitable and effective way to get the material to senior management, we began to write up an actual strategy.
Initially, we chose to create a presentation in order to keep the structure as fluid as possible, and to provide flexibility in presenting it:
we could vary the path through the presentation dynamically to adapt to the audience. This also had the added benefit that we could
easily present it in person for feedback during its development, rather than mailing out a document. This gave us immediate and frank
feedback (e.g. if people fell asleep!) as well as the opportunity to get lots of practice in explaining the material to each audience.
While it was being developed, the presentation/strategy was presented to people and groups going up the management chain to
senior management, so that each group could have a say in the content and it could continue to evolve. The net effect was that it would
not, by the time it reached senior management, be a single person’s take on what needed to happen, but a consensus and (hopefully)
already acceptable approach. At each level, we asked for permission (and were sometimes urged) to take it to the next step in the
governance chain.
The golden rule we adhered to during the development and presentation of the strategy was that, at the time of the presentation of
the strategy to senior management, there should be no surprises on either side. A strategy without a suitable foundation would be
less likely to be accepted. On the downside, the “excitement” was inevitably going to be diminished, but in information security and
management, excitement is not generally conducive to effective operations...
To make the suggestions in the strategy more likely to be well received in a meeting with senior management, we realised that
we should not rely on one route alone. It also seemed sensible to try out the draft strategy on individual members of the senior
management team, to get feedback and suggestions. One obvious venue was the governance body which had arisen within the School
of Life and Medical Sciences to manage sensitive data. We presented the draft strategy at one of the meetings, and received a huge
amount of helpful suggestions. The ones which made the most impact were:
• It should contain concrete examples, e.g. how much money could be saved by avoiding incidents.
• It should not require explanations; it should make sense by itself.
Following this meeting, the strategy was revised and improved significantly, and began to look like something which senior
management would be happier with. We did not, however, add content to promise specific cost savings, as it would not have been
based upon reliable data. One thing we did add to improve the immediacy of the proposal was a short list of recent incidents affecting
universities.
With the strong support of my line management, and after about ten months of preparation, we were authorised to present the
proposed strategy to senior management. Since the meeting format did not permit the use of presentations, we summarised the strategy in written form.
When the date of the meeting arrived, the topic was scheduled for 10 minutes total - five minutes on the standard information security
update, and five minutes on the strategy.
At the meeting, the group took some considerable interest in the information security update, but showed even greater enthusiasm
for the strategy, which received unanimous support. In the end, the information security section of the meeting stretched to over 20
minutes.
The substantive feedback from UCL’s senior management team was as follows:
2. They approved initial organisational changes to embed information risk management into normal operations, including the
introduction of senior information risk owners at Faculty level.
3. They recognised that culture change was important to the success of information risk management across UCL.
4. They were strongly in favour of an awareness programme to improve attitudes to information risk.
Now the real work begins. We have an approved strategy, but need to make it happen. The challenges at hand are really high level and
pervasive, such as culture change, and technical capabilities. The next steps we are going to take are the formalisation of information
risk management across UCL, the implementation of an awareness programme, and further engagement with departments and
faculties to understand their ways of working, risks and needs.
Summary points
• Ensure you know how risk management is already working.
• Find out the accepted route for new ideas to be received, assessed and approved, and use it.
• Be patient: big changes which are going to stick take time to get going.
• Ensure that the strategy evolves as you present it to more people, so that it is fit for purpose.
The university environment has some characteristics which influence the way in which information security can be managed. The
organisation’s senior management team, or “top management”, having overall responsibility for information security, must consider
these characteristics when designing the ISMS.
Federation
Not all the information technology used within an organisation is provided (or indeed controlled) by a central IT service. This is
particularly the case with IT supporting research but may also be the case in collegiate institutions or those that have a high degree
of devolution. Similarly there may be specific administrative functions within departments or colleges. However, the impact of any
information security breach is likely to be on the organisation, not the department.
• How do you ensure buy-in from those departments/units that operate semi-independently?
• Who, in those departments, is responsible for information security and how do they link with the institutional information security
operation?
• How do you ensure that IT systems that are not under central control meet a base level of security (such as the Cyber Essentials
promoted by BIS1)?
Suggestions:
• The Senior Information Risk Owner (see Chapter 8, Roles and competencies), as part of their role, should take responsibility for
selling information security to devolved departments.
• It may be appropriate for there to be a Senior Information Risk Owner for each devolved operational unit. These would have
responsibility for championing information security policy and requirements within their department.
• A hybrid approach to technical security may be adopted where a base level of security is required for a given classification of information, but each operational area is provided with the tools to implement controls as they see fit. This relies heavily on independent support and oversight, and can be quite time consuming as there is no economy of scale for a number of activities.
Autonomy
Academic staff involved in research often operate with a degree of autonomy. They bid for funding and are responsible for the use of
those funds to deliver the specified research. The requirements of that research may result in the development of bespoke IT systems.
Although these are effectively production systems, they are largely unsupported and may present an information security risk. Staff can
and do go to retail outlets and purchase IT equipment for use in the organisation, particularly for research. Such equipment may not
meet the standards of the organisation.
• How do you ensure researchers understand the sensitivity of the information they collect and store?
• Does your research ethics policy take into account information security issues?
• Do you know where collections of personal or sensitive data used in research exist?
• Do you have a procurement policy that restricts the purchase of equipment to a defined and supported product set?
• Do you have a storage strategy that mitigates the need for researchers to purchase storage outside of the institutional purchasing
procedures?
Suggestions:
• Researchers that are working in departments where they are likely to utilise personal or sensitive information should be regularly
given awareness training and refresher courses;
• As a minimum, the minutes of research ethics committees should be forwarded to the information security group to allow them to
advise on appropriate security measures, and log the existence of sensitive data collections.
• Consideration should be given to mandating procurement of IT equipment through established procedures to ensure that all
equipment is to a standard that may be supported by the IT function.
1 https://www.gov.uk/government/publications/cyber-essentials-scheme-overview
• How are the security risks associated with personal devices accommodated in your policies and procedures?
Suggestions:
• The location of the key information assets in your organisation needs to be known and understood and appropriate measures
taken to protect them.
• It may be appropriate to restrict home working to trusted devices provided by the organisation and with appropriate security
measures already in place.
• The wireless network may be deemed to be untrusted given that personal devices may connect to it with little or no verification.
Consequently the organisation may consider placing a firewall between the wireless part of the network and the main campus
network.
Unclear boundaries
The individuals who access an organisation’s information are not just restricted to traditional definitions of staff and students. The
permanent staff may be supplemented with visiting lecturers, research collaborations will require staff from other universities to
have access to resources, alumni may have access to resources, employees from other organisations may take part in professional
development activities, etc. The physical boundaries of an organisation may not be restricted to the organisation alone as buildings can
be shared with commercial entities which are spun-off from research projects, and which use the same facilities as the organisation
itself.
In some cases, access to resources is provided to individuals that are never physically present. The increase of distance learning means
that the organisation may virtually extend to all corners of the globe. Some universities have sought to extend their reach by engaging
in partnerships with overseas institutions or by setting up overseas campuses.
• How are the information security requirements of the organisation communicated to non-traditional members of the
organisation?
Suggestions:
• The organisation may need to consider adherence to information security policy as part of any tenancy arrangement for external
organisations.
• Access to resources and systems should be time limited for temporary staff.
• There should be processes in place to determine the appropriate levels of access to organisational resources and systems for all
members of the organisation.
• Organisations should consider the legislative requirements of nations where overseas campuses are being established and their
impact on the institutional information security policy as part of the planning process.
• Research that may deliver a commercial benefit to the organisation should be treated as a critical information asset and protected as such.
• Does your organisation take a lead in defining the security requirements for commercially sensitive research data?
Suggestions:
• A structured approach should be in place to manage contracts which may be commercially sensitive. These should build on
existing business processes and ensure legal due diligence.
• The processes should be in place to ensure that, if required, the ability to provide information security oversight can be easily
demonstrated.
Communities of users
Different categories of users will have different views on information security, different appetites for risk, and different levels of understanding of the requirements of the organisation’s information security policy. Culture varies across the organisation. This is not restricted to differences between staff and students, nor to differences between administrative and academic staff. All need an understanding of information security risks and their roles in delivering the information security policy.
• How do you accommodate a wide range of skills and experience levels when implementing an institution wide policy?
Suggestions:
• Risk management should be built into normal working practices so that staff recognise all risks, report them and take mitigating
action where appropriate.
• The awareness activity needs to take cognisance of the variety of expertise, approaches and understanding that members of the organisation have.
Turnover
Universities are dynamic organisations with a high turnover of personnel. Some of this turnover is known and managed; student
course dates are known and established processes are in place to manage registration and, on completion, graduation. However, not
everyone completes their course and there need to be processes to manage exit of those students who drop out or otherwise do not
complete their studies. There will be processes to manage regular staff entry and exit and these will usually be the responsibility of a
Human Resources function, whether centralised or devolved. The processes around ad hoc members of staff and other members of the
organisation that have access to resources and systems also need to be well managed; these processes may be devolved to a wide range
of departments or functions and so may not be so well defined.
• How well is information security awareness built into your induction processes for staff and students?
• Are ad hoc members of the organisation included in awareness activity?
Suggestions:
• Tailor the awareness campaigns for each part of the organisation’s community to match the level of risk and the rate of turnover.
• There should be appropriate processes in place for induction, ongoing awareness and exit for all members of the organisation.
[Presentation slides (notes excerpt)]
• Importance of the risk appetite being set at a senior executive level.
• Looking at the risk register and how we assess and accept risk; the all-important strategic balance.
• Delivering business change is all-important: not just an exercise in producing policies but in effecting a change in behaviour.
UCL Human Factors researchers, led by Prof. Angela Sasse, collaborated with a large telecommunications company and a large utilities
company. Researchers interviewed employees across a variety of roles within the companies to understand how security played a role in
their working practices: specifically, the interplay between security mechanisms and individual employees’ primary (productive) tasks. It
was important to understand perceived frictions and benefits of security within the workforce (see Chapter 8, Roles and competencies).
The study found that there was scope for policies and organisation-wide initiatives outside of security to indirectly improve the security
posture of an organisation, or otherwise encourage behaviours which were also more secure.
Awareness
Environmentally-sound practices may be promoted side by side with health and safety, or be the focus of specific campaigns such as
organisation-wide sustainability drives. In the telecommunications company, employees were encouraged, through training and visible
campaigns, to consider the cost of their working behaviours to the environment, and adopt green thinking in practice. For some in the
organisations, thinking green was more approachable and had a clearer purpose than understanding security and its drivers.
Individuals can be encouraged to adopt visible practices that they can take pride in, and which the organisation can measure against targets. Training
relating to sustainability was found to be a channel for recommending behaviours which were incidentally more secure than existing practices.
Paperless approaches
Related policies included the development of a paperless office. This encouraged the rationalisation (and ultimately, limiting) of
document printing, and - when documents were printed - having printing enabled locally at the printer with existing ID access cards.
This then involved users consciously in their own printing of documents and had the potential to create a greater sense of ownership
over their personal impact within the organisation, while also recording print rates per employee. This demonstrates change within
already existing habits (conscious printing), incidental security (limitation of unattended printouts), as well as a clear measure of
performance against company policy (the limitation of printed documents).
Secure disposal
Secure document bins were provided, distributed in such a way as to be within easy access of all employees, further supporting
involvement. Crucially this showed consideration of both employee needs (the need to dispose of sensitive documents) and the
minimisation of disruption to working habits (limiting the cost to the individual to comply with the policy).
Indirect improvements included a reduced likelihood of documents being left on desks or left overnight in shared office space. There were also fewer physical copies of confidential documents in circulation, and there was less opportunity for individuals to pick up
someone else’s printouts from printers (where the action taken with those printouts could otherwise not then be tracked).
Remote working
A paperless office also complemented remote working: individuals would tend towards carrying fewer print-outs with them when travelling, where otherwise printed documents might have been carried around or left in various locations for any length of time. In the utilities company, there was also a drive toward minimising travel outside of the company where possible as a means to develop sustainable practices (moving instead to secure communications applications); this would incidentally limit exposure of the organisation’s assets to risks present in other locations, but would have required investment and guarantees around reliable alternative solutions for communication.
Storage benefits
The utilities company also tried to minimise storage costs and adopt just-in-time resourcing practices where appropriate. Sustainable
practices such as site management can then be linked to recognised standards, e.g. ISO14001 (“Environmental Management”). This
demonstrates clear relation of outcomes to high-level expectations.
Avoiding confusion
There was a need to be mindful of how policies must join up effectively. Both companies provided secure shredders, where shredded documents
were then collected by a contracted outside company for disposal. Staff in the telecommunications company were instructed to direct documents
either to recycling bins or to confidential document shredders based on their sensitivity classification - this had the potential to confuse employees
who wanted to respect both security and the environment. In the utilities company, the policy around disposal of computers (and specifically hard
drives) needed to respect data protection concerns raised by employees themselves (“There’s no way you’re just taking this away to recycle it, do
whatever with it.”) - individuals with knowledge of security will need assurances that other parties are protecting their information.
Key points
• Ensure that there are tangible benefits of following organisation policies.
• Know your suppliers
• Engage with members of the organisation to understand how policies combine in practice within their roles, and where
improvements can be made.
Over the last few years within Loughborough University, there has been an increased requirement for Information Security input to
research project proposals, grant applications and contract agreements. Supporting research within the organisation is an important
activity; but it is recognised that it is different to supporting the teaching and learning activity.
Awareness
One of the earliest activities required was an awareness campaign within the research community. Colleagues were often in sections of
academic schools which did not have the same level of cascaded information about central IT services and support. Providing focused
communications explaining what support was available to researchers was greatly beneficial to increase awareness and was welcomed
by the recipients.
There is a parallel theme of changing culture and improving training, which is focused across the organisation. The research community
within Loughborough have welcomed these initiatives with open arms as they can sometimes feel isolated or unsure how to get the
IT support required by their research. Opening a dialogue in the information security area has brought about a change in provision in
other areas to the benefit of the wider organisation.
Auditing
As part of a pre-existing research contract, the University received a request from the commercial company supporting the research to
allow a security audit of the information systems used to facilitate the research activities.
The commercial company audit team spent two days on site reviewing the written policies, technical controls and undertaking testing
of the systems concerned. Whilst the activity took several days to prepare for (as the information security controls were not as mature
as required), the exercise was greatly beneficial, as the University received an external and commercial perspective on the security
controls required. With a small number of recommendations for improvement being received, it was a positive activity and something
which has been repeated, in subsequent years, by the commercial company.
Depending on the nature of the research, organisations will engage with different companies, research organisations and other
academic institutions. Based on the number of information security surveys, questionnaires, forms and interviews completed over the
last ten years, there is an increasing similarity in the questions being posed, from the straightforward, such as “Do you have a firewall?”, to requests for copies of patching policies. In order to improve responsiveness to these requests for additional information, colleagues in the
information security function were quick to produce a number of stock responses in a default proforma.
Data disposal
One of the more interesting requests received was for assurance of data disposal following previous research activities. One research grant required that the information assets from a previous grant be passed to the funder to be centralised, with confirmation that the data had been securely destroyed on site. This can be a challenge with modern file space provision: volume shadow copies, tiered storage and backup robots. In the case of Loughborough University, it took three months for this data to be removed through a cycle of standard process activity.
Secure communications
The use of secure electronic mail technology has been raised a couple of times by funders; the preferred access mechanism for
government related research grants appears to be the Criminal Justice Secure Email. This provides a secure webmail facility to interact
with the police, government and solicitors. There is an option to integrate this into the standard desktop email client in some scenarios,
depending on the Business Impact Level (BIL).
Information leakage
A large part of supporting research at Loughborough University has focused on a risk assessment of information leakage and what
controls are required to mitigate this risk. The government Business Impact Level assessment process continues to be used, despite
the new Government Security classifications introduced on 2 April 2014. The Business Impact Level provides guidance on the risk
assessment process; the new classifications are not used to label the information. However, researchers do not fully understand
what this means, or which systems can handle data at a given level. Funding applications are starting to request a Risk Management and
Accreditation Documentation Set (RMADS), which describes the Business Impact Level of the information being held/processed.
Managing costs
If there are any costs associated with providing information security support to a research project, it is important to investigate the
funding schedule available as this tends to differ depending on the research funding partner. Based upon experience, information
security costs are often overlooked when putting the research grant together. At Loughborough University this was introduced as
part of the central advice provided by the research and enterprise offices. Areas which may need to be included in funding include
Within Loughborough, a pragmatic approach has been taken to address the areas described in this case study, initially utilising the resource already available for information security; no additional funding or posts were provided.
Based on experience in the sector, non-maintained and development systems pose a credible risk and easy attack vector.
One of the first steps to addressing this issue at Loughborough has been to require these systems to be installed on virtual machines
within the infrastructure-managed environment. This provides a regular three-monthly vulnerability assessment of the virtual machine with exception reporting. It is important to recognise this is a first step towards addressing the problem and not a panacea. Within Loughborough University, we also provide a central hosting solution for blogs based upon WordPress, to
manage the security aspects of the software.
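A minimal sketch of the exception-reporting idea, assuming hypothetical scan findings and an agreed severity threshold (neither drawn from Loughborough’s actual process), is given below.

```python
# Hypothetical quarterly scan findings for hosted virtual machines.
scan_findings = [
    {"vm": "research-blog-01", "finding": "outdated WordPress plugin", "severity": 7.5},
    {"vm": "project-db-02",    "finding": "weak TLS configuration",    "severity": 4.0},
    {"vm": "legacy-app-03",    "finding": "unsupported OS release",    "severity": 9.1},
]

def exception_report(findings, threshold=7.0):
    """Report only findings at or above the agreed severity threshold,
    so that routine low-severity noise is excluded."""
    return [f for f in findings if f["severity"] >= threshold]

for item in exception_report(scan_findings):
    print(f"{item['vm']}: {item['finding']} (severity {item['severity']})")
```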
Learning Points
• Researchers may not be aware of the Information Security support offered by your organisation, so run a campaign to make them
aware.
• Some research grants or contracts may require your organisation to undertake a formal security assessment as part of the agreed
terms.
• The questions which form part of research, grant or contract applications are broadly similar. Consider creating a bank of stock
responses which will make the completion of these documents easier.
• Funding for research tends to be made available up front as part of the grant or contract; introduce any associated costs, for
example an external penetration test, upfront.
• At the end of a research project, some information systems will become orphaned. Consider research data management and how
the organisation will manage information at the end of a project to avoid information systems being left unmanaged.
The purpose of the UCL Data Safe Haven is to enable researchers to access and use sensitive identifiable data in a secure manner. It was
created by the Information Services Division on behalf of the School of Life and Medical Sciences (SLMS), but a few research studies
outside SLMS that need to handle sensitive identifiable data also use it.
In 2013, a decision was made to achieve certification to ISO/IEC 27001 for this environment. Work began in early 2014, and we passed
our first certification audit in May 2014.
Multiple scopes
Certification of the whole of UCL was not appropriate or feasible, so we had to think very carefully about how to define our scope: what
were we controlling, what would be audited and what would be certified?
• The scope of the Data Safe Haven, which we called the “organisation”, to match the term used in 27001. This was a challenging
term to use, as it had to be clear that we did not mean the whole of UCL when we used the word “organisation” in project
meetings
• Staff who manage and administer the environment (physically and logically)
• Staff who use the environment for research
• Top management who make executive decisions
[Diagram: scope of certification within UCL, showing the Data Safe Haven in relation to UCL HR Division, UCL Estates Division, UCL IS Division and external organisations (including the NHS and UCL Partners)]
The first problem which we had to solve was how internal third parties, such as HR and Estates, would fit into the picture. HR defines
contracts and pre-employment checks, as well as carrying out some checks at the request of research units, while Estates controls
physical security (e.g. card access to server rooms, and physical access to researcher offices).
It was decided that UCL parties which were outside our organisation should be treated:
• as sources of risk;
• as providers of controls.
We considered the options and agreed that the controls they carried out were within the ISMS, but external to the organisation. This
was arrived at through externally facilitated workshops with members of the core project team, focusing on specific areas of risk.
What about external third parties? We could have managed service providers through contracts - but thankfully we had no external
service providers. We did, however, have external entities which needed to pass data into the environment. We treated them as external
potential sources of risk, rather than as part of our scope for certification.
Finally, how should we handle researchers? Most researchers were included within the scope of certification. However, one research
study from another faculty was agreed to be part of the organisation, but was being treated as a “customer”, and was hence excluded
from the scope of certification. This is shown in the illustration as the diagonal part of the dotted line inside the “golden egg”.
Learning points
• There are likely to be a number of different ideas about what scope means. At the beginning of your compliance work, put more
effort than you think is necessary into clarifying the terms used, and reinforce the definitions for your chosen scopes regularly, to
keep everyone on track.
• Identify your external parties and decide early on whether they are within your scope for compliance or outside it, and how you
will handle the issue at audit time
• Specify all of your external drivers explicitly, so that you can justify any controls which they require, and so that their impact on
your decisions about scope can be understood.
These top level guiding principles apply to all information handling activities, including project work and day to day operations. They
are intended to be used to inform and guide organisations in their normal work, and to ensure that information is handled in a suitably
secure fashion.
1 Business requirements drive security requirements: Security requirements should exist to support the requirements of a business activity and should be relevant and appropriate.
2 Protect the confidentiality, integrity and availability of information to the right levels: Information's requirements for confidentiality, integrity and availability should be identified and security measures should be matched to these requirements.
3 The campus network is not the security perimeter: Because the campus network connects thousands of machines under widely varying management regimes, it should be considered in the same risk category as the open Internet. Any machine which you would not connect to the Internet without some form of protection should have the same protection installed before enabling access from the general campus network.
4 Role-based access: Privileges should be assigned to roles, not individual people. People should then be assigned roles.
5 Least privilege: Each role should have the minimum set of privileges needed to carry out the tasks required of that role. (A brief illustrative sketch follows this list.)
6 Separation of duties: Where the risk or impact of a failure to execute a process correctly is unacceptably high, the process should require appropriate oversight before it can be completed. For example, when placing a purchase order, it has to be approved by a second person before it can be placed, or when writing software, a code review is undertaken by a second person before release.
7 Segregation of environments handling information rated at different security levels: Systems which store, process or transmit information classified as secret should be physically segregated from other systems which operate with information at a different level (e.g. normal). Systems which store, process or transmit other levels of information should be logically separated. Logical segregation can be achieved by appropriate network architecture. Note that it is expected that development/test and pre-production/production systems will be handling information at different levels.
8 No sensitive data on test systems: Development systems should not hold any data rated other than normal. Test and pre-production systems which require data rated other than normal should be secured to the same standard (or better, e.g. only accessible to a specific network or set of hosts) as the related production system.
9 Traceability of activity to individuals: This is restricted to operations involving information above normal. Actions carried out by an individual on information should be capable of being traced back to that individual.
10 Documented security standards: Information processing systems where data with a classification other than normal are processed should be designed, deployed and managed according to documented security standards (e.g. a secure software development lifecycle).
11 Competence and training: Individuals must be competent to carry out their responsibilities. Heads of Departments and Divisions must ensure that training to the appropriate level is provided.
12 Responsibility and accountability: Roles where there are activities which require access to information rated other than normal should have those activities clearly documented as part of the role description. In addition, the responsibility for protecting that data should also be clearly defined in the role description along with a path of accountability to the line management structure.
13 Continuous improvement: All roles should be responsible for identifying and highlighting opportunities for improvement to manage risk.
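To make principles 4 and 5 concrete, the following is a minimal sketch (in Python, purely for illustration) of assigning privileges to roles rather than to people, with each role holding only the minimum privileges it needs. The role names, privileges and users are invented examples, not a prescribed model.

    # Illustrative sketch only: role-based access with least privilege.
    # Role names, privileges and users below are hypothetical examples.
    ROLE_PRIVILEGES = {
        "finance-clerk": {"invoice:read", "invoice:create"},
        "finance-approver": {"invoice:read", "invoice:approve"},
        "sysadmin": {"server:login", "server:patch"},
    }

    USER_ROLES = {
        "alice": {"finance-clerk"},
        "bob": {"finance-approver"},
    }

    def privileges_for(user):
        """Privileges are assigned to roles; users acquire them only via their roles."""
        granted = set()
        for role in USER_ROLES.get(user, set()):
            granted |= ROLE_PRIVILEGES.get(role, set())
        return granted

    def is_permitted(user, privilege):
        return privilege in privileges_for(user)

    # Separation of duties (principle 6) follows naturally: the creator of an
    # invoice does not hold the approval privilege.
    assert is_permitted("alice", "invoice:create")
    assert not is_permitted("alice", "invoice:approve")

Keeping creation and approval privileges in different roles means principle 6 is enforced by the access model itself rather than by local custom.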
We are also included as approvers in the yearly bid process. This involves us reviewing all the bid documents and adding comments
relating to the amount of input we will need to have and whether any penetration testing needs to be budgeted for.
Findings so far
The general consensus has been very positive. However, we have found that perception before a risk assessment has taken place can be
quite negative, with people presuming that we would block their project or slow it down. By integrating ourselves into the project
delivery process, we hope to remove the possibility of slowing projects down; this would only happen if we were not consulted
until the very last minute. We now try to make it clear at the beginning of the process that we are there to help project teams achieve
what they need to achieve in a safe way; we are not there to stop them. We see this whole process as continually improving; the more
risk assessments we do, the more we can refine and improve the process.
Learning Points
• Do not reinvent the wheel - use approaches that have already been tested and adapt them to suit your organisation.
• Ensure you integrate with your organisation’s project delivery process; it’s the easiest way to make sure they involve you.
• Get buy-in; if those involved understand that you are ultimately trying to help them, you are less likely to find resistance.
Project Name:
Project Manager:
Service Owner:
Author(s):
Date completed:
Scope:
Information Classification
Classification(s) of information involved:
Confidentiality
Integrity
Availability
2 Internal requirements
What internal policies, procedures and other requirements apply to the security of the information being handled by the project?
3 External requirements
What external legislation, contracts and other requirements apply to the security of the information being handled by the project?
Server Y/N
Service Name:
Service Owner:
Service Operations
Manager:
Author(s):
Date completed:
Scope:
1 Information Classification
Classification(s) of information involved: Secret
Description of classification: Loss, tampering or disclosure would seriously damage operations as a teaching, learning and research organisation.
Is this information used, stored or affected by the service? Y/N
Type(s) of information: Describe information which has been identified as being Secret.
Confidentiality
Integrity
Availability
3 Internal requirements
What internal policies, procedures and other requirements apply to the security of the information being handled by the service?
4 External requirements
What external legislation, contracts and other requirements apply to the security of the information being handled by the service?
Server Y/N
Project Name:
Project Manager:
Service Owner:
Author(s):
Date completed:
Scope:
1 Project Context
Notes: (1) Includes creation and destruction. (2) Sent and received.
Confidentiality
Integrity
Availability
Service Name:
Service Owner:
Service Operations
Manager:
Author(s):
Date completed:
Scope:
1 Service Context
Notes: (1) Includes creation and destruction. (2) Sent and received.
Confidentiality
Integrity
Availability
• Treating risk
• Formulating Risk Treatment Plans, ensuring that necessary controls have not been omitted and gaining approval for the risk
treatment plan and residual risks
Treating risk
Risk is treated by applying controls that modify the risk in such a way that it meets the specified Risk Acceptance Criteria. This is
achieved through controls which either:
• Reduce the likelihood of the risk occurring by attempting to prevent the occurrence of the event, or detect it in sufficient time for
the organisation to deal with it or
Determination of controls
Each event is considered to determine:
• Why is it used? This is explained through a cross-reference to the associated event in the Risk Treatment Plan.
• What is the implementation status (Implemented, In Progress or Not Started)?
If, as a result of this process, an Annex A control is determined to be applicable but is not already covered, the Risk Treatment Plan is
revised to include it.
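As an illustration of that check, a simple set comparison can confirm whether every Annex A control judged applicable is already referenced in the Risk Treatment Plan. This is a sketch only; the control identifiers below are examples.

    # Illustrative sketch only: flag applicable Annex A controls that the
    # Risk Treatment Plan does not yet cover. Control references are examples.
    applicable_controls = {"A.9.2.3", "A.12.6.1", "A.13.1.1"}   # judged applicable
    in_treatment_plan = {"A.9.2.3", "A.13.1.1"}                 # already covered

    missing = applicable_controls - in_treatment_plan
    if missing:
        print("Revise the Risk Treatment Plan to include:", sorted(missing))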
4. The risks after treatment (with corresponding Risk Rating Graph) with an explanation of why the risk acceptance criteria are met;
Calculations of residual risk are based on an appraisal of the likely outcome judged against the criteria documented in the Risk
Assessment Process to ensure consistency.
Section
1 Introduction
2 Risk Assessment Methodology
Appendices
A Information Classification
H Risk Register
1 Introduction
1.1 In order to ensure consistency, a standard methodology, which can be used across all information security risk assessments is
required. The methodology selected for use at Cardiff University is described below.
2 Risk Assessment
2.1 What is an Information Security Risk Assessment?
3.2 Each year a risk assessment of key information assets shall be carried out in accordance with section 4.4.2 of the University
Information Security Policy.
3.3 The process will be initiated by the SIRO and coordinated by the Information Asset Owner for each Key Information Asset (see
Appendix I).
3.4 The Information Asset Owner will direct Data Stewards to arrange for risk assessments of the systems or containers they
manage (e.g. SIMS) to be carried out. N.B. these should not be carried out in isolation by the Data Steward but should involve
suitable representation and input from users and administrators of the system.
3.5 Using the Information Classification Document (Appendix A) identify the classifications of information encompassed by the
selected asset i.e. C1 Classified - Highly Confidential, C2 Classified - Confidential or NC Non-Classified.
3.7 Complete the Key Information Asset Risk Environment Map (Appendix C).
3.8 Consider the threats to the asset using the typical threats document (Appendix D) to assist in this process.
3.9 Brainstorm/Discuss the potential risks, ensuring you categorise their impact in terms of – confidentiality, integrity, availability
and compliance.
3.10 Make a list of the risks to be quantified; take each in turn and, using the key information asset template (Appendix E), describe:
Cause - As a result of …
3.11 Use the Risk Measurement Criteria (Appendix F) to assess the impact of each risk against each impact area, i.e. you must
develop the scenario to describe the likely severity of impact against each impact area in that scenario. Having done this, total
up the impact scores for each of the impact areas to give an overall risk impact score (pay careful attention to the scoring table
on the last page of the Risk Measurement Criteria).
3.12 Having assessed the impact of each risk, determine the probability of occurrence, using the Risk Measurement Criteria, which
provide definitions of likelihood (Appendix F).
3.13 Once an overall impact score and probability have been determined you can plot the risk on the Risk Acceptance Matrix
(Appendix G); a small illustrative calculation is sketched after this procedure.
3.14 Each section of the Matrix has a colour, and the colour can be translated into the appropriate risk response action, i.e. a risk with
a high likelihood and high impact score would plot onto a red section and would translate as a severe pool 1 risk which must be
given immediate attention and priority over all lower rated risks.
3.15 Having plotted the risks into the matrix and consequently identified the risk response actions, appropriate risk control
(mitigation) actions should be identified, discussed and documented in a risk register (see Appendix H). For each risk there
must be one owner who is accountable for the management of that risk. Since one risk may have a number of distinct control
actions, the risk owner shall identify who is responsible for ensuring that each control is implemented and managed.
3.16 The process will generate a completed: Key Information Asset Profile, Key Information Asset Risk Environment Map, Risk
Identification and Assessment Worksheet, a populated Risk Acceptance Matrix and Risk Register.
3.17 The Risk Register shall be reviewed by the Data Steward and Asset Owner in order to determine the overall level of information
risk exposure, as well as to agree and sign off asset-specific security requirements and priorities for implementation. However,
all risks which plot as Severe or Substantial should be referred via the Information Asset Owner to the SIRO for referral to the
Information Security Risk Group (ISRG), to determine whether the risks should be added to the University Risk Register.
3.18 N.B. where a risk assessment was carried out the previous year, reference should be made to the relevant paperwork as a
primer for the current year's risk assessment. However, it is not enough simply to review the risks from the previous year, as it
is possible that new risks may have arisen in the intervening 12 months due to changes in legislation, reporting requirements,
technological developments etc.
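The following is an illustrative sketch (in Python) of how steps 3.11 to 3.14 combine per-area impact values into an overall score and translate it into a response action. The impact values and the example scores mirror the worked example later in this appendix; the matrix thresholds in the sketch are invented for illustration and are not the University's Risk Acceptance Matrix.

    # Illustrative sketch only: combining impact and likelihood into a response.
    # The thresholds below are hypothetical, not the Cardiff Risk Acceptance Matrix.
    IMPACT_VALUES = {"No": 0, "Negligible": 1, "Minor": 2, "Moderate": 4, "High": 6, "Major": 8}

    def overall_impact(scores_by_area):
        """Sum the per-area impact values to give an overall risk impact score."""
        return sum(IMPACT_VALUES[level] for level in scores_by_area.values())

    def risk_response(impact_score, likelihood):
        """Map (impact, likelihood) onto a response action; bands are examples only."""
        likelihood_weight = {"Low": 1, "Medium": 2, "High": 3}[likelihood]
        product = impact_score * likelihood_weight
        if product >= 60:
            return "Severe: immediate attention, priority over all lower rated risks"
        if product >= 30:
            return "Substantial: refer via the Information Asset Owner to the SIRO/ISRG"
        if product >= 10:
            return "Moderate"
        return "Tolerable"

    example = {
        "Corporate Reputation": "Minor",
        "Research Profile & Income": "No",
        "Student Experience": "No",
        "Financial Sustainability": "Minor",
        "Health & Safety": "No",
        "Staff Experience": "Moderate",
        "Legal Obligations": "High",
    }
    score = overall_impact(example)        # 14, matching the worked example below
    print(score, risk_response(score, "High"))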
General advice:
• Always aim to keep Classified Information (C1 and C2) within the University's secure environment.
• Where this is not possible consider whether the information can be redacted or anonymised to remove confidential or highly
confidential information, thereby converting it to Non-Classified Information (NC).
• Report any potential loss or unauthorised disclosure of Classified Information to the IT Service Desk on 74xxx.
• Seek advice on secure disposal of equipment containing Classified Information via the IT Service Desk on 74xxx.
• Use the Confidential Waste Service for disposal of paper and small electronic media: [email protected]
Name of Key Information Asset and sub-category:
Rationale for selection: Why is this information asset important to the organisation?
Description: What is the agreed-upon description of this information asset?
□ Classified - Confidential
□ Classified - Protect
□ Non-Classified
Security requirements
□ Confidentiality Only authorised staff can view this information asset, as follows:
□ Integrity Only authorised staff can modify this information asset, as follows:
□ Availability This asset must be available for these staff to do their jobs as follows:
□ Compliance This asset has special regulatory compliance protection requirements as follows:
Appendix D
List of Typical Threats
• Enter your 5 risks in order of priority with 1 being the most significant risk.
• Name the risk
• Provide a description of the risk (how it would occur and why)
• Indicate whether it would affect confidentiality, integrity, availability or compliance if it did occur
• Estimate how likely it is to occur, note any controls you know about that are designed to limit or prevent it, and give views on their effectiveness
• Describe what impact the risk would have on the University if the worst case scenario of this risk did occur, referring to the Risk
Measurement Criteria as a guide, then score each risk against the listed impact areas in the table.
3. Risk Description: Staff are able to access and amend records on the system they are not permitted to and can access information
beyond that required for them to carry out their role. That access has the potential to cause significant issues with data integrity as
users will be able to delete or change records which indicate invoices received. This would also have the effect of undermining the
purpose of the system and the confidence that staff have in it and the organisation. It would also impact on supplier confidence
in the organisation if invoices were late being paid. This risk could materialise through staff having over privileged access rights to
information beyond that required to undertake their role due to permissions not being set correctly or not being amended according
to role changes.
5. Likelihood (and existing controls): Likelihood is high as there are a great number of staff role changes and people joining the organisation.
Current controls rest with those who administer account access, and insufficient resource to administer accounts has been identified,
meaning there is a significant lag between access changes to the XYZ system being requested and their implementation.
Impact Value scale: No (0), Negligible (1), Minor (2), Moderate (4), High (6), Major (8)
Corporate Reputation: Minor (2)
Research Profile & Income: No (0)
Student Experience: No (0)
Financial Sustainability: Minor (2)
Health & Safety: No (0)
Staff Experience: Moderate (4)
Legal Obligations: High (6)
Total relative risk score: 14
Risks
Which sources of information, if compromised, would have an adverse impact on the organisation (as defined by the risk measurement
criteria) if one or more of the following occurred?
Definitions: short term: 1 week to 5 months; medium term: 6 months to one year; long term: in excess of a year
Achievement of KPIs threatened
Financial Sustainability (in increasing order of severity): Operating costs increase, revenue loss (excluding that deriving from damage to research reputation) or one-time financial loss of less than £500K; of between £500K and £1M; of between £1M and £2.5M; of between £2.5M and £5M; or of greater than £5M.
Likelihood Definitions (in increasing order of likelihood):
• Has not occurred before, but may occur in exceptional circumstances. Not dependent on external factors.
• History of similar occurrences, situations or near misses. Could be difficult to control due to external factors.
• History of previous occurrence. Very difficult to control due to significant external factors.
Appendix H
Risk Register
The Risk Register records, for each risk: Risk ID; Date Identified; Risk Description (expressed in the terms: As a result of… there is a risk that… which may…); Likelihood (Low / Medium / High); Impact (1 - 56); Risk Rating (Severe / Substantial / Moderate / Tolerable); Control Measure (mitigation); Control Owner (responsible, by name and role, for actioning the mitigation); Target Risk Rating (e.g. Medium x 31 = Substantial); Target Date; and Risk Owner (accountable, by name and role, for ensuring the risk is effectively managed). The example entry in the template has Risk ID 1.0, identified on 01/01/2013, with a target date of 01/01/2014.
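Where the register is kept electronically, one row might be represented along the following lines. This is a sketch only, using the column headings above; the example names and values are hypothetical.

    # Illustrative sketch only: one risk register row as a Python dataclass.
    # Field names follow the register columns; example values are invented.
    from dataclasses import dataclass

    @dataclass
    class RiskRegisterEntry:
        risk_id: str
        date_identified: str
        description: str          # "As a result of... there is a risk that... which may..."
        likelihood: str           # Low / Medium / High
        impact: int               # 1 - 56
        risk_rating: str          # Severe / Substantial / Moderate / Tolerable
        control_measure: str      # the mitigation action
        control_owner: str        # responsible for actioning the mitigation
        target_risk_rating: str
        target_date: str
        risk_owner: str           # accountable for ensuring the risk is managed

    entry = RiskRegisterEntry(
        "1.0", "01/01/2013",
        "As a result of ... there is a risk that ... which may ...",
        "Medium", 31, "Substantial",
        "Example mitigation action", "J. Bloggs (Data Steward)",
        "Tolerable", "01/01/2014", "A. Owner (Information Asset Owner)",
    )
    print(entry.risk_id, entry.risk_rating)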
Education information:
Financial information:
A pragmatic approach, deploying a quick and easy-to-use tool, has been used to identify information assets that are extremely
important for the business of the University (crown jewel assets) and are at the same time potentially vulnerable. The tool is designed
to be used by departments, faculties, colleges and institutes. ‘Assets’ in this context include lists or documents or tables or spreadsheets
holding information which has value (either electronically or in filing cabinets).
This Information Asset Register tool enables: identification and recording of crown jewel assets; assigning those accountable for the
assets; and performing a risk assessment against the identified assets. It enables a university to focus its mitigation efforts on the most
important areas.
The tool has been used by Oxford and also by universities across the world. A significant level of consistency is beginning to emerge in
terms of identification of specific types of crown jewel assets that are considered to be vulnerable and require mitigation.
Further details of the Information Asset Register tool are given at: http://www.it.ox.ac.uk/policies-and-guidelines/is-toolkit/
information-asset-management#d.en.158803; the tool can be downloaded from the same web page. It is intended to develop the tool
further; please send proposals for improvement to the email address given.
This case study describes the steps that Loughborough University took to mitigate risk when the Heartbleed OpenSSL vulnerability was
disclosed.
Patch Management
Different teams within IT Services at Loughborough University take different approaches to how they manage patching of their
services:
• The Systems Team, which predominantly manages Microsoft Windows Server environments, has a cycle where patches are first
deployed to non-critical systems within a test environment, and secondly to critical systems within a test environment. This rollout
cycle is then mirrored on production systems.
• The Desktop Management Team uses SCCM to manage patching on desktop systems. Their approach is to package security patches and
updates within SCCM and deploy them to a test machine to complete full QA. Once this has been tested, patches are deployed to
IT Services managed machines and then out to the wider University one school at a time.
• The Networks Infrastructure Team manages a large estate of Linux servers. As there is no scheduled release of Linux patches, systems
use a list to inform administrators of outstanding patches. These are first tested within a test environment before being applied
to production servers.
When out-of-band patches or patches to prevent zero day attacks are released, IT Security contacts within the department are called
upon to evaluate the risk of a potential exploit depending on the systems which are vulnerable and the data stored on these systems.
Depending on the calculated risk, the security team will advise others within the University on how to mitigate this risk. The end goal
will be to patch systems, but this is not always possible due to time of release etc.
The security team will also look to leverage border protection systems such as firewalls and IPS/IDS.
On 7 April 2014, the Heartbleed OpenSSL vulnerability was disclosed to the Internet. The security team became aware of it via the various
sources it follows, such as Twitter feeds and security blogs.
Heartbleed was a vulnerability in OpenSSL, a popular cryptographic library which is used to secure communication across the Internet
such as:
This vulnerability didn’t just impact web services; appliances were also vulnerable such as:
• Routers / Switches
• Firewalls
• IPS/IDS
• Load balancers
Due to the nature of this vulnerability, SSL certificates were also deemed to be compromised. This meant that all SSL certificates on
vulnerable services would need to be revoked and new certificates issued.
Internally this vulnerability was classified as critical due to the potential level of information disclosure that was possible. Luckily this
vulnerability had been disclosed responsibly, and patches were already available, along with a workaround. Vendors such as Cisco, Juniper,
F5 and Palo Alto Networks were also releasing fixes for appliances.
Once the vulnerability was evaluated, the next step was to try to identify vulnerable hosts on the network. Simply getting back the
version of OpenSSL would have identified whether the software was vulnerable, but we could only do this on servers within our control.
The security team opted to use a script which had been developed for NMAP, and was very quickly able to identify which hosts on the network were vulnerable.
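For illustration, a check of this kind can be driven from Python using nmap's publicly available ssl-heartbleed NSE script, assuming nmap and that script are installed. This is a sketch of the general approach rather than the exact script the team used; the target address and ports are examples.

    # Illustrative sketch only: invoke nmap's ssl-heartbleed script and look for
    # the "VULNERABLE" marker in its output. Targets and ports are examples.
    import subprocess

    def heartbleed_scan(target, ports="443,993,995"):
        """Run nmap's ssl-heartbleed check against one target."""
        result = subprocess.run(
            ["nmap", "-p", ports, "--script", "ssl-heartbleed", target],
            capture_output=True, text=True, check=True,
        )
        return "VULNERABLE" in result.stdout, result.stdout

    if __name__ == "__main__":
        vulnerable, report = heartbleed_scan("192.0.2.10")  # RFC 5737 example address
        print("Vulnerable!" if vulnerable else "No Heartbleed detected")

In practice a scan like this would be run across whole address ranges and the results collated per host, which is how unmanaged services elsewhere in the University came to light.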
Step 2 - Communication
From the scan of the network, as expected, not all vulnerable hosts and/or services were under the control of the department. Other
schools and departments around the University were also hosting services.
Internal mailing lists are used to communicate vulnerabilities and releases of patches to the wider IT community within Loughborough
University.
As this was predominantly a Linux issue, it was decided to post information to the Unix security mailing list. Information about the vulnerability,
links to additional information and assistance from IT Services were communicated to IT Support staff across the University.
Step 3 - Patching
At this point, the security team had already configured the IPS/IDS to block any attempts at exploiting the Heartbleed vulnerability
externally. This helped remove the external attack vector. The risk remained high due to possible attacks from within the network.
Individual teams' patching procedures had already begun within the department; once emergency change requests were approved
and testing was complete, the fix was rolled out onto production services.
Access to management interfaces on appliances followed best practice in that this traffic is completely separate to other business
critical traffic and access to this network is heavily restricted to privileged personnel. Due to this, management interfaces were less of
an issue and once vendors released updates to resolve the Heartbleed vulnerability, these were scheduled and deployed accordingly.
Step 4 – Follow up
The security team continued to regularly scan the entire network for Heartbleed vulnerabilities. Where services were found to still be
vulnerable, service managers were contacted to ensure patches were scheduled to be implemented.
Follow-up communications were posted to the internal mailing lists informing server managers that Janet was offering free replacement
SSL certificates. Help was also offered to managers wanting their services scanned to ensure fixes had worked.
I have been a sysadmin for almost all of my working career. I have a degree in computer science; I worked as a programmer for a year
before moving on to system administration, and have worked as a sysadmin for 14 of the last 16 years.
Being a Sysadmin
From a sysadmin perspective, security and hacking are viewed in a defensive or preemptive way. Best practice is to set up servers with
strong passwords, regularly patch the servers, and work on the principle of least privilege and privilege separation. These measures are
usually passive; once in place, they become standard working practice and do not require major ongoing thought.
In the environments in which I have worked, password security has generally been good. Strong passwords are in use (varyingly) and
storage of passwords has been good. Over time storage has improved greatly, with the adoption of tools like Keepass and Lastpass.
Regular patching is usually a work in progress. The structure is simple enough to set up, with WSUS for Windows servers and Satellite
for RedHat servers. Even on systems where update processes require more sysadmin intervention, this can be automated with cron
scripts or similar.
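As a sketch of that kind of automation, the following Python script lists outstanding packages on a Debian/Ubuntu-style server and could be run from cron. The commands, schedule and file path are examples, and servers managed through WSUS or Satellite would use their own tooling instead.

    # Illustrative sketch only: a scheduled check for outstanding packages, of the
    # kind that might be run from cron and emailed to administrators.
    import subprocess

    def outstanding_updates():
        """List upgradable packages as reported by apt (skipping the header line)."""
        result = subprocess.run(
            ["apt", "list", "--upgradable"], capture_output=True, text=True
        )
        lines = result.stdout.splitlines()
        return [line for line in lines[1:] if line.strip()]

    if __name__ == "__main__":
        pending = outstanding_updates()
        if pending:
            print(f"{len(pending)} package(s) awaiting patching:")
            for line in pending:
                print("  " + line)

    # A crontab entry such as "0 7 * * 1 /usr/bin/python3 /usr/local/bin/patch_check.py"
    # would run this every Monday morning; mailing the output is left to local convention.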
A frequent problem is older, unmaintained servers – as services are upgraded and servers are replaced, older servers remain in service
to host a specific legacy application or code. Sometimes these services are business-critical, but resources are not available to upgrade
and migrate the application to a newer system – or the people who wrote and understand the application have left, leaving a key
application unsupported and stuck on a legacy server.
Least privilege and privilege separation are key concepts which should be emphasised. In my experience this is generally well handled,
since it is mostly under the control of the sysadmin alone. If included as a routine design principle it becomes a habit for sysadmins
without being onerous. I have on occasion been surprised where it hasn’t been done – where a small amount of effort at the design
stage could result in a large increase in overall system security.
Hacking tools themselves cover a wide range; at one end there are simple tools which are easily available, or installed by default on
many systems. Chaining these simple tools together can compromise a server quite easily. This requires some technical
knowledge, as many of the tools rely on command-line usage, and usually shell access is the result. The main example of this is netcat
which, combined with a vulnerability that allows remote code execution, can be used to establish a remote console connection.
At the other end of the range are the complex tools, which are sadly just as easily available. These don’t even require the technical
knowledge needed for simpler tools; generally a GUI provides all of the options. Some are perfectly legitimate security scanning/testing
tools; nmap and Nessus are good examples, since these will only provide information. Tools like Cain&Abel are slightly more suspect;
although there are legitimate uses for this, cracking passwords does not come up that often. Metasploit is the black hat tool of choice;
this performs the full spectrum of hacking techniques with a few clicks, ranging from system/service discovery, to compromise, to post-
compromise exploitation (shell/desktop access, file transfer, keylogging, screen capture).
For this reason ethical hacking combines the diligence of the sysadmin with the creativity and lateral thinking of a hacker. This is not
a particularly difficult or rare combination, but it does require a step-by-step procedure of experimental testing, and it takes time to
develop both the mindset and the process.
Overall, then, hacking is a growing threat which is becoming more accessible to unskilled users. Protecting against it is strengthened by
good procedures, which if they become habit will make any organisation reasonably secure. This can be supplemented and improved by
security guidance and penetration testing - but only ever supplemented. A poor base security cannot be improved much after the fact!
• Password strength and least privilege design don't require much effort, and greatly increase security.
• Hacking is easy and automated. Tools are free and widely available. It's not a matter of if you'll be attacked, it's a matter of how much
you're being attacked right now!
Vulnerability Management
Key Points
• Reduce the risks organisations face resulting from exploitation of technical vulnerabilities.
• Allow organisations to set up a vulnerability scanning framework.
• Assist organisations in developing a patch management policy.
• Support organisations in procuring penetration testing services.
Introduction
A vulnerability is defined in ISO/IEC 27000 as “A weakness of an asset or a group of assets that can be exploited by one or more threats”.
Vulnerability management is the process in which vulnerabilities in Information Systems are identified and the risks of these
vulnerabilities are evaluated. This evaluation leads to correcting the vulnerabilities and removing the risk or a formal risk acceptance by
the management of the organisation.
Vulnerability management provides visibility into the risks of assets deployed on the network.
There are risks involved with vulnerability scanning. Since vulnerability scanning typically involves sending a large number of packets
to systems, it can sometimes trigger unusual effects, for example disrupting network equipment. In order to cover these
risks, it is important always to inform stakeholders within the organisation when vulnerability scanning is taking place.
• Security Officer: The security officer is the owner of the Vulnerability Management process. This person designs the process and
ensures it is implemented as designed.
• IT Security Engineer: The IT Security Engineer is responsible for configuring the vulnerability scanner and scheduling the various
scans.
• Service Manager: The Service Manager is responsible for the Information system being vulnerability scanned and the information
stored on the system. This role should decide whether identified vulnerabilities are mitigated or their associated risks accepted.
• IT Systems Engineer: The IT Systems Engineer role is typically responsible for implementing remediating actions defined as a result
of detected vulnerabilities. In most cases, this is likely to be multiple Systems Engineers which could also be spread over multiple
teams or departments.
In many organisations the roles of Security Officer and Security Engineer are held by the same person.
1. Preparation (defining the scope);
2. Vulnerability Scan;
3. Remediation actions;
4. Rescan.
The first step in this process is defining a scope. The following information should form part of the scope:
• Whether the scan is going to be authenticated or unauthenticated. An authenticated scan is one where credentials are
provided to log in to the application or operating system, whereas an unauthenticated scan is run without credentials and sees only
what an unauthenticated attacker could reach;
• Whether this is an infrastructure scan or an application scan. An infrastructure scan checks the network footprint of the host or service
being tested, whereas an application scan focuses on the specified application.
Where the organisation sees the greatest risk will influence the scope; for example, some organisations see external threats as the
biggest risk and would therefore prioritise Internet-facing services. A scope record of this kind is sketched below.
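As a sketch of how an agreed scope might be recorded for circulation to Service Managers, the following Python structure captures the points above. The field names and example values are invented for illustration.

    # Illustrative sketch only: a simple record of an agreed scan scope.
    from dataclasses import dataclass, field

    @dataclass
    class ScanScope:
        targets: list                 # hosts, ranges or URLs in scope
        authenticated: bool           # True = credentials supplied to the scanner
        scan_type: str                # "infrastructure" or "application"
        excluded_windows: list = field(default_factory=list)  # e.g. clearing period

    scope = ScanScope(
        targets=["203.0.113.0/24", "https://apply.example.ac.uk"],
        authenticated=False,
        scan_type="infrastructure",
        excluded_windows=["clearing period (no scans)"],
    )
    print(scope)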
Once the scope has been defined, this should be distributed to Service Managers. It is very important to get buy-in from service
managers and provide them with plenty of notice about upcoming vulnerability scans. It is the responsibility of the Service Manager
to liaise with stakeholders to inform them about upcoming vulnerability scans. Depending on the criticality of the system, service
managers may have requirements, for example not to scan systems during the clearing period.
Plan for unexpected events which might lead to delaying a vulnerability scan depending on the nature of the event. Allow service
managers to propose another suitable scan date.
If services are deemed too risky for vulnerability scans, this risk needs to be highlighted and accepted by the appropriate
senior management. Additional protection will need to be implemented, such as ACLs and removal of external access, to mitigate
against internal and external threats.
Vulnerability Scan
Once the preparation is complete, the initial vulnerability scans are performed. Any issues which occur during the
initial scans, such as systems becoming unavailable or poor application response, should be recorded, since they may recur on future
scans. In this case actions may be defined to reduce the impact of future scans on the stability or performance of target systems.
Vulnerability scanning tools offer a wide range of reporting options to visualise the results. It is necessary to use these to create
reports tailored to the audience (a brief sketch follows this list):
• Security Officer/Engineer: interested in the risk the organisation is currently facing; this includes the number of vulnerabilities
identified and the severity/risk ratings of the identified vulnerabilities.
• Asset Owner: an overview of the vulnerabilities in the systems they are responsible for.
• Systems Engineer: technical information about the vulnerabilities identified, as well as recommendations for mitigation and
improvement.
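The sketch below illustrates producing different views of the same findings for the three audiences. The findings, field names and values are invented examples, not output from any particular scanner.

    # Illustrative sketch only: audience-specific views of the same scan findings.
    findings = [
        {"host": "web01", "service_owner": "Admissions", "title": "OpenSSL Heartbleed",
         "cvss": 7.5, "fix": "Upgrade openssl and reissue certificates"},
        {"host": "db02", "service_owner": "Finance", "title": "Weak SSH ciphers",
         "cvss": 4.3, "fix": "Restrict ciphers in sshd_config"},
    ]

    # Security Officer/Engineer: the overall risk picture
    print(f"{len(findings)} findings, highest CVSS {max(f['cvss'] for f in findings)}")

    # Asset/Service Owner: only the systems they are responsible for
    owner = "Finance"
    for f in (f for f in findings if f["service_owner"] == owner):
        print(f"{owner}: {f['host']}: {f['title']}")

    # Systems Engineer: technical remediation detail
    for f in findings:
        print(f"{f['host']}: {f['title']} -> {f['fix']}")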
Remediation actions
In this phase, the Service Manager will work with the Security Officer/Engineer to define remediating actions. The Security Officer/
Engineer will analyse the reported vulnerabilities and work with Systems Engineers to determine the associated risk and provide input
on risk remediation. The risk will depend on factors such as the CVSS (Common Vulnerability Scoring System) score for the vulnerability,
the publicity the vulnerability has received, the Security Officer's/Engineer's personal experience and the classification of information stored on the
system.
Depending on the risk, clear timelines should be provided for when remediating actions should be implemented. Sufficient time should
be allowed, taking into account the technical nature of the remediation and the organisation's change management policies.
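One common way to turn risk into a timeline is to band the CVSS base score. The sketch below is illustrative only; the bands and timescales are examples rather than a recommended policy.

    # Illustrative sketch only: deriving a remediation deadline from a CVSS base score.
    from datetime import date, timedelta

    def remediation_deadline(cvss_score, reported=None):
        reported = reported or date.today()
        if cvss_score >= 9.0:
            days = 7       # critical
        elif cvss_score >= 7.0:
            days = 30      # high
        elif cvss_score >= 4.0:
            days = 90      # medium
        else:
            days = 180     # low
        return reported + timedelta(days=days)

    print(remediation_deadline(7.5))   # e.g. a Heartbleed-class vulnerability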
If remediation is not possible, this risk should be acknowledged and senior management should be made aware. The risk should be
documented and accepted via the organisation's risk acceptance process. Compensating controls should be identified in order to
mitigate or remove the risk without correcting the vulnerability.
The next step is for the Service Manager and the Security Officer to define a schedule for how often a vulnerability scan should be carried
out against systems. In order to establish a robust vulnerability management process, it is recommended that scheduled scans are
conducted weekly or monthly. This will ensure rapid vulnerability detection, allowing the organisation to implement mitigating controls
in a timely fashion and reduce the risk.
Patching
Introduction
Patch management is a security best practice designed to proactively prevent the exploitation of known vulnerabilities in information
systems within the organisation. The result is to reduce the time and money spent dealing with vulnerabilities and the exploitation
of these vulnerabilities. Proactively managing vulnerabilities within information systems will reduce or mitigate the potential for
exploitation therefore reducing staff effort in responding after exploitation has occurred.
Patches are additional pieces of code developed to address problems in software. Patches can enable additional features or address
security flaws within software. Vulnerabilities are flaws that can be exploited by a malicious entity to gain greater access or privileges.
Not all vulnerabilities have patches available; therefore systems administrators must also be aware of other methods of mitigation.
Timely patching of security issues is critical to maintaining the operational availability, confidentiality and integrity of information
systems.
Key Principles
New patches are released daily and it is becoming very difficult to keep abreast of all of them and to ensure proper deployment
in a timely manner. The following high-level key principles can be used to help mitigate the risk of such exploitation.
• The organisation should have a patch management policy, which should be able to identify which patches need to be/have been
applied;
• Only software needed to deliver the organisation’s business should be installed. This would reduce the number of patches which
need to be applied;
Type / Patch
Computers/Servers: BIOS, firmware, drivers, hypervisors
Operating Systems: Patches, service packs, feature packs
Application Software (databases): Patches, service packs, feature packs
Installed Applications (Java, Adobe): Patches, service packs
Routers and Switches: Firmware
Firewall, IPS/IDS and URL Filtering: Firmware, definition updates
Anti-virus and Anti-spyware: Data files and virus definition updates
Printers and Scanners: Firmware and drivers
Bespoke or in-house developed software: Patches, service packs, feature packs
Introduction
The purpose of performing a penetration test is to verify that new and existing applications, networks and systems are not vulnerable to
security risks which could lead to unauthorised access to sensitive information. A penetration test is also a PCI DSS requirement (11.3).
A penetration test should be considered after a vulnerability scan has been completed and any issues identified have been resolved or
mitigated. A penetration test would identify vulnerabilities which are unknown or have been missed by the scan. Depending on the results,
it may also highlight where a vulnerability management process might be failing.
• Black Box – This form of testing requires no previous information and usually takes the approach of an uninformed attacker. The
penetration tester has no previous knowledge about the target system, network or application.
• White Box – This form of testing provides information to the penetration tester about aspects of the system or application they
are testing. This could be usernames and passwords to access the system, information on how the application is built such as
database access etc.
• Network Penetration Testing – This will test all services, which are offered by the organisation via the Internet. These include
email, DNS, firewall effectiveness and web services. This type of test would also indicate vulnerable software and firewall
misconfigurations.
• Application Penetration Testing – This type of test could be conducted either internally or externally depending on its availability
and testing would be limited to the specified application. An example might be a web application.
• Social Engineering Penetration Testing – This type of test focuses on identifying and verifying vulnerabilities associated with
employees' ability to understand documented policies and to follow procedures and security best practices.
• CHECK certified
• Tiger Scheme certified
• CEH (Certified Ethical Hacker) certification
When provisioning a penetration test using an external testing company, ensure a detailed scope of work has been provided and that it
meets all of the organisation’s testing needs. The following information should form the scope of work:
• What is going to be tested (infrastructure or application test and whether this is an internal or external test) and the type of test
(black or white box).
The report should be held in the strictest of confidence, as it could contain information that would reduce the overall security
of the organisation. The organisation should act upon the issues identified as part of a penetration test; this might mean implementing
remediation steps, or accepting the risks and implementing mitigating controls to reduce the identified risk.
• protect information from accidental or deliberate compromise, which may lead to damage, and/or be a criminal offence
• help to meet legal, ethical and statutory obligations
• protect the interests of all those who have dealings with the University and about whom it may hold information (including its
staff, students, alumni, funders, collaborators, business partners, supporters etc.)
Confidential: Information which is sensitive in some way because it might be personal data, commercially sensitive or legally privileged, or under embargo before being released at a particular time. It also includes information in a form that could not be disclosed under Freedom of Information legislation. Covers data about an individual, and data about the institution.
Examples: Procurement documents; Student personal details; Staff personal details; Press releases; Financial transactions; Internal reports.
• Confidential - personal
• Confidential - commercially sensitive
• Confidential - exams - expires 1 July 2013 and becomes public
Qualifying descriptors may also be used to incorporate/map to protective markings from other classification schemes, where staff are
working with external partners, data and schemes (e.g. the Government Protective Marking Scheme). For example: Confidential - GPMS
Secret
Paper and electronic copies must be marked 'Confidential' and the intended recipients clearly indicated. An optional descriptor, to state the reason for confidentiality, may be used.
Introduction
As part of the University-wide Information Security Framework Programme following ISO/IEC 27001 principles, we identified the need
to establish an Information Classification, i.e. a University-wide system of categorising information in relation to its sensitivity and
confidentiality, together with associated rules for the handling of each category of information in order to ensure the appropriate level
of security (confidentiality, integrity and availability) could be applied.
• keep it simple,
• make the labels intuitive and
• base it on impact of disclosure or loss and align this with our newly defined risk assessment impact scales.
We originally came up with a 4 point scale – with two categories relating to confidentiality (‘Highly Confidential’ and ‘Confidential’) and
one category relating to criticality/integrity (‘Protect’). The fourth category was Non-Classified. It became obvious after a short while
however that the Protect category did not work in the same way as the other two classified categories so we decided to drop it as a
category in itself and build in the relevant criticality/integrity policy considerations into each of the other categories.
A conscious decision was taken not to treat Personal Data or Sensitive Personal Data (as defined by the Data Protection Act) as
categories in themselves, as the impact of disclosure did not always correlate. Some Sensitive Personal Data and much Personal Data is
already in the public domain and is not therefore 'sensitive' or confidential at all. We wanted to keep the focus of the categories on the scale of
impact of inappropriate disclosure on the institution, groups or the individual. The definitions of the two classified categories (Highly
Confidential and Confidential) were designed to include both confidential personal data (such as salaries) and confidential non-
personal data (such as competitive business strategy, security codes, etc.).
Examples were given for each category. The short definitions of the categories are below:
• C1 – Highly Confidential - Has the potential to cause serious damage or distress to individuals or serious damage to the
University’s interests if disclosed inappropriately
• C2 – Confidential - Has the potential to cause a negative impact on individuals’ or the University’s interests (but not falling into C1)
• NC – Not Classified - Information not falling into either of the Classified categories
Policy
Having got the Programme Steering Group to approve the Information Classification, we then developed handling rules and a
supporting policy. It was determined that the scope of the policy would cover all information held by and on behalf of the University,
and that the handling rules would apply to members of the University and to third parties handling University information. We also decided
that where the University holds information on behalf of another organisation with its own information classification, agreement
shall be reached as to which set of handling rules shall apply, acknowledging that some data handling requirements from external
organisations may be very stringent and require specific security arrangements to be put in place.
The draft Handling Rules are currently approved only as guidelines until such time as the programme has delivered some specific
tools (such as enterprise encryption and a secure but user friendly file sync and share alternative) to support the more aspirational
statements. Other tools may enable more differentiation between the rules relating to C1 and C2 information. In addition we will add
in disposal and printing as handling processes that are currently not included and continue to gather feedback on the practical aspects
of implementation. At this point the ‘Handling Rules’ will be reviewed, turned into enforceable policy and promoted through mandatory
information security training. We will also develop a similar, but different document for contractors’ use of University information.
The Information Classification policy and draft Handling Rules are available here: http://sites.cardiff.ac.uk/isf/handling/
1 Purpose
The purpose of this policy is to establish a University-wide system of categorising information in relation to its sensitivity and
confidentiality, and to define associated rules for the handling of each category of information in order to ensure the appropriate level
of security (confidentiality, integrity and availability) of that information.
2 Scope
This policy covers all information held by and on behalf of Cardiff University and the handling rules shall apply to members of
the University and to third parties handling University information. Where the University holds information on behalf of another
organisation with its own information classification, agreement shall be reached as to which set of handling rules shall apply.
4 Policy Statement
All members of Cardiff University and third parties who handle information on behalf of Cardiff University have a personal
responsibility for ensuring that appropriate security controls are applied in respect of the information they are handling for the
University. Appropriate security controls may vary according to the classification of the information and the handling rules for the
relevant category shall be followed.
5 Policy
5.1 All information held by or on behalf of Cardiff University shall be categorised according to the Information Classification
(Annex 1). The categorisation shall be determined by the originator of the information and all information falling into the
classified categories shall be marked as such.
5.2 Information shall be handled in accordance with the Information Handling Rules (Annex 2) and where information falls
within more than one category, the higher level of protection shall apply in each case.
5.3 Where a third party will be responsible for handling information on behalf of Cardiff University, the third party shall be
required by contract to adhere to this policy prior to the sharing of that information.
5.4 Where the University holds information on behalf of another organisation with its own information classification, written
agreement shall be reached as to which set of handling rules shall apply prior to the sharing of that information.
6 Responsibilities
6.1 The Senior Information Risk Owner shall ensure that the Information Classification and associated Handling Rules are
reviewed regularly to ensure they remain fit for purpose.
6.2 It shall be the responsibility of every individual handling information covered by this policy, to mark classified material as
such, to apply the appropriate handling rules to each category of information, and to seek clarification or advice from a line
manager or the Information Security Co-ordinator where they are unsure as to how to label or handle information.
6.3 All members of the University shall report issues of concern in relation to the application of this policy, including alleged
non-compliance, to the Information Security Co-ordinator.
7 Compliance
Breaches of this policy may be treated as a disciplinary matter dealt with under the University’s staff disciplinary policies or the Student
Disciplinary Code as appropriate. Where third parties are involved breach of this policy may also constitute breach of contract.
• Always aim to keep Classified Information (C1 and C2) within the University’s secure environment.
• Where this is not possible consider whether the information can be redacted or anonymised to remove confidential or highly
confidential information, thereby converting it to Non-Classified Information (NC).
• Report any potential loss or unauthorised disclosure of Classified Information to the IT Service Desk on 74xxx
• Seek advice on secure disposal of equipment containing Classified Information via the IT Service Desk on 74xxx
• Use the Confidential Waste Service for disposal of paper and small electronic media [email protected]
INFORMATION HANDLING - Electronic/digital information storage
Home H: drive (Controlled access ✓; Shared space ✗; Central back-up ✓; the service delivers high availability and resilience)
• Highly Confidential (C1): Consider file password protection for most sensitive files.
• Confidential (C2): Consider file password protection for most sensitive files.
• Non-Classified (NC): Consider whether information needs to be shared with colleagues; if so, enable folder sharing or move to a shared drive.
School/Department based server (Controlled access ?; Shared space ?; Central back-up ?)
• C1: Seek advice from local IT on default access rights, physical security of server and back-up.
• C2: Seek advice from local IT on default access rights, physical security of server and back-up.
• NC: Consider any back-up requirements.
Other IT Services maintained service, e.g. database (Controlled access ✓; Shared space ?; Central back-up ✓)
• C1: Seek advice from IT Services on default access rights. Use restricted access mechanisms where online access is shared.
• C2: Seek advice from IT Services on default access rights. Use restricted access mechanisms where online access is shared.
• NC: ✓
In public areas, e.g. Open Access PCs (Controlled access ✗; Shared space ✗; Central back-up ✗)
• C1: Use not permitted; high risk of incidental disclosure.
• C2: Use not permitted; high risk of incidental disclosure.
• NC: Consider any back-up requirements.
Personally owned (e.g. home) desktop PC hard drive C: or D: (Controlled access ✗; Shared space ?; Central back-up ✗)
• C1: No storage or creation permitted on device. May be used for read only remote connection to access files if used in a private environment. Do not download files to device. Do not leave logged in and unattended.
• C2: No storage or creation permitted on device. May be used for read only remote connection to access files if used in a private environment. Do not download files to device. Do not leave logged in and unattended.
• NC: No master copy storage permitted. May be used for remote connection to access files. Do not leave logged in and unattended. Created documents must be saved on University network or University owned device.
(Storage location heading not recoverable from the source layout) (Central back-up ✗)
• C1: May be used for read only remote connection to access files if used in a private environment. Device to be protected by strong password, with maximum of 10 minutes inactivity until device locks. Consider keeping in a lockable cabinet/drawer which is locked when unattended.
• C2: May be used for read only remote connection to access files if used in a private environment. Device to be protected by strong password, with maximum of 10 minutes inactivity until device locks. Consider keeping in a lockable cabinet/drawer which is locked when unattended.
• NC: May be used for remote connection to access files. Created documents must be saved on University network or University owned device.
Large capacity portable storage devices, i.e. external hard drive (Controlled access ✗; Shared space ✗; Central back-up ✗)
• C1: Encrypt device with a strong passcode. Do not use to store master copy.
• C2: Encrypt device with a strong passcode. Do not use to store master copy.
• NC: Do not use to store master copy.
University collaborative workplace, e.g. Connections, Quickr Teamplace (Controlled access ✓; Shared space ✓; Central back-up ✓)
• C1: No storage permitted. Use University solutions, e.g. Quickr (or Filr where available), instead.
• C2: No storage permitted. Use University solutions, e.g. Quickr (or Filr where available), instead.
• NC: Do not use to store master copy.
External 'cloud' storage/file sync provider, non-University contract, e.g. personal OneDrive or individually set up Dropbox accounts (Controlled access ?; Shared space ?; Central back-up ✗)
• C1: No storage permitted. Use University solutions, e.g. Quickr (or Filr where available), instead.
• C2: No storage permitted. Use University solutions, e.g. Quickr (or Filr where available), instead.
• NC: Do not use to store master copy.
Paper documents:
• C1: Requirement: in a lockable cabinet/drawer which is locked when not in active use. No papers left out unless being actively worked on.
• C2: Requirement: in a lockable cabinet/drawer which is locked when the office is unattended. No papers left out when the desk is unattended.
• NC: ✓ Off-site working: consider making a back-up copy before taking off site.
In unrestricted access University areas:
• C1: ✗ Not permitted. Alternative: create as/convert to electronic documents and use secure remote connection with permitted device.
• C2: Requirement: in a lockable cabinet/drawer which is locked when not in active use. No papers left out unless being actively worked on.
Taking papers off site:
• C1: ✗ Not permitted. Alternative: create as/convert to electronic documents and use secure remote connection (e.g. Cardiff Portal or WebDav) with permitted device.
• C2: Requirement: if needed to be taken off site a back-up copy must be made beforehand. Not to be left unattended and to be locked away in a secure building when not in use.
Transfer:
• C1 and C2: Consider converting to an electronic format and using a secure electronic transfer method instead, e.g. Fastfile.
Definition/criteria:
• Disclosure of such information will have a severe adverse impact on the business of the University, its reputation, or the safety or wellbeing of its staff/members.
• Unauthorised disclosure of such information may have a severe financial impact on the University.
• The confidentiality of such assets will far outweigh the importance of their availability.
• Would include highly sensitive personal information as well as those with a high financial value, legal requirements for confidentiality and information which is critical to the business operation of the University.
Storage and handling requirements:
• Information should be stored in a physically secure manner with appropriate defence against unauthorised entry. Physical access should be monitored and appropriate audit trails of access should be maintained.
• File or disk encryption may be considered as an additional layer of defence or where physical security is considered insufficient.
• Copies of such information should be kept to an absolute minimum and an audit trail should be maintained and secured for all copies of the information. It is assumed that the confidentiality of such information outweighs the need for availability, and loss or destruction of such information would be preferable to …
• Such information may be stored on machines that are isolated from the network. Where remote access is required this must be controlled via a well-defined access control policy and tight logical access controls designed to allow the minimum access necessary.
• Any remote access must be controlled by secure access control protocols using appropriate levels of encryption and authentication.
Job purpose
• Provides leadership and guidelines on information assurance security expertise for the organisation, working effectively with
strategic organisational functions such as legal experts and technical support to provide authoritative advice and guidance on the
requirements for security controls.
• Provides for restoration of information systems by ensuring that protection, detection, and reaction capabilities are incorporated.
• Develops strategies for ensuring both the physical and electronic security of automated systems.
• Ensures that the policy and standards for security are fit for purpose, current and are correctly implemented.
• Reviews new business proposals and provides specialist advice on security issues and implications.
Duties and responsibilities
• Develops information security policy, standards and guidelines appropriate to business, technology and legal requirements and in
accordance with best professional and industry practice.
• Prepares and maintains a business strategy and plan for information security work which addresses the evolving business risk and
information control requirements, and is consistent with relevant IT and business plans, budgets, strategies, etc.
• Operates as a focus for IT security expertise for the organisation, providing authoritative advice and guidance on the application
and operation of all types of security control, including legislative or regulatory requirements such as data protection and software
copyright law.
• Manages the work of all other IT security specialist staff, including project and task definition and prioritisation, quality
management and budgetary control, and management tasks such as recruitment and training when required.
• Manages the operation of appropriate security controls as a production service to business system users.
• Develops implementation approach, taking account of current best practice, legislation and regulation. Ensures implementation of
information security strategy in automated systems and ensures operations of security systems. Analyses results of investigations
into complex, or highly sensitive security violations, to determine whether standards are fit for purpose, are current and are
correctly implemented.
• Reports any significant breaches in security to senior management. Interviews offenders in conjunction with the relevant line
manager or on own authority if the breach warrants it. Where appropriate, participates in forensic evidence gathering, disciplinary
measures, and criminal investigations.
• Ensures that procedures are in place for investigation of system access enquiries referred by support staff and for handling all
enquiries relating to information security and contingency planning as they affect the activities of the organisation, function or
department. Authorises implementation of procedures to satisfy new access requirements, or provide effective interfaces between
users and service providers.
• Devises new or revised procedures relating to security control of all IT environments, systems, products or services in order
to demonstrate continual improvement in control including creation of auditable records, user documentation and security
awareness literature.
• Authorises and initiates the provision of training, guidance and support to other security administrators and their agents within
the employing organisation, in all aspects of security policy and control.
• Reviews new business proposals and planned technical changes and provides specialist guidance on security issues and
implications.
• Be familiar with relevant University IT-related procedures and policies (acceptable use, data protection, freedom of information,
information security, purchasing etc) and advise colleagues and end-users accordingly.
• Undertake various other tasks on an occasional basis at the request of more senior staff in the department, and to a level
commensurate with training, knowledge, grade and skills.
Note: This job description was created in the spirit of the BCS (The Chartered Institute for IT), SFIA (Skills for the Information Age) level 6
with support from the BCS.
Responsible for: A team of <x> colleagues; staff at the senior level may be asked to deputise for their line manager in case of absence.
Hours: 37 hours per week, within the hours of 8:00 to 18:00 Monday to Thursday and 8:00 to 17:30 Friday, including 1 hour lunch
period. The precise pattern of working within these guidelines will be agreed in advance with your manager.
Special Conditions
Many staff carry mobile phones which allow them to be paged by various systems at all reasonable hours of the week. When
monitoring, diagnosis and configuration of services needs to be done outside normal working hours, it can sometimes be appropriate
for the work to be carried out remotely at home when convenient.
Attendance on site outside normal working hours is occasionally necessary, for example during major system changes and
maintenance. Such out-of-hours working as is necessary is scheduled in negotiation with the group of staff with relevant skills, and
takes account of the personal commitments and wishes of individuals.
For purposes of system management, IT Services staff often have enhanced access to data, files and computer systems and must at
all times respect the privacy of information to which they have enhanced access. The only exception to this will be investigations
authorised by IT Services Director or his/her nominee.
Please note: <organisation> is working towards equal opportunities and observance of our equal opportunities policy will be required.
PERSON SPECIFICATION
Job Title: Information Security Manager
Department: IT Services
All staff have a statutory responsibility to take reasonable care of themselves, others and the environment and to prevent harm by their
acts or omissions. All staff are therefore required to adhere to the University’s Health, Safety and Environmental Policy & Procedures.
Conditions of Service
The appointment will be on a full time, open ended contract on Management and Specialist Grade (salary, discretionary to salary per
annum)* at a starting salary commensurate with experience and qualifications.
*The appointment will be subject to the University’s normal Terms and Conditions of Employment for Academic and Related staff,
details of which can be found at:
Informal Enquiries
Informal enquiries should be made to <name>, <title> by email at: <email address> or by telephone on <telephone number>.
Application
The closing date for receipt of applications is <insert>.
Job purpose
• Obtains and acts on vulnerability information and conducts security risk assessments for business applications and computer
installations; provides authoritative advice and guidance on security strategies to manage the identified risk.
• Investigates major breaches of security, and recommends appropriate control improvements. Interprets security policy and
contributes to development of standards and guidelines that comply with this.
• Performs risk assessment, business impact analysis and accreditation for all major information systems within the organisation.
• Ensures proportionate response to vulnerability information, including appropriate use of forensics.
• Drafts and maintains the policy, standards, procedures and documentation for security.
• Monitors the application and compliance of security operations procedures and reviews information systems for actual or
potential breaches in security.
• Ensures that all identified breaches in security are promptly and thoroughly investigated.
• Ensures that any system changes required to maintain security are implemented. Ensures that security records are accurate and
complete.
• Identifies threats to the confidentiality, integrity, availability, accountability and relevant compliance of information systems.
Conducts risk and vulnerability assessments of business applications and computer installations in the light of these threats and
recommends appropriate action to management.
• Conducts investigation, analysis and review following breaches of security controls, and manages security incidents. Prepares
recommendations for appropriate control improvements, involving other professionals as required.
• Provides authoritative advice and guidance on the application and operation of all types of security controls, including legislative
or regulatory requirements such as data protection and software copyright law. Contributes to development of standards and
guidelines.
• Drafts and maintains policy, standards, procedures and documentation for security administration, taking account of current
best practice, legislation and regulation. Ensures that all identified breaches in security are promptly and thoroughly investigated.
Interviews offenders in conjunction with the relevant line manager or on own authority if the breach warrants it.
• Reviews information systems for actual or potential breaches in security, and investigates complex, or highly sensitive violations
referred by more junior staff or colleagues, handling issues imaginatively, efficiently and professionally. Obtains factual
information, and formulates opinions regarding exposed violations, through interview with all levels of staff. At all times,
undertakes to bring to the attention of management any actual or potential breaches in security.
• Investigates system access enquiries referred by support staff and all enquiries relating to information security, contingency
planning, as they affect the activities of the organisation, function or department. Implements and adopts known techniques to
satisfy new access requirements, or provides an effective interface between users and service providers when existing facilities are
considered inadequate.
• Recognises requirements for, and creates, auditable records, user documentation and security awareness literature for all services
and systems within IT Security Management, ensuring that the records provide a comprehensive history of violations, resolutions
and corrective action.
• In consultation with senior security personnel, devises and documents new or revised procedures relating to security control of
all IT environments, systems, products or services (including physical security) in order to demonstrate continual improvement in
control. Ensures that any system changes required to maintain security are implemented.
• Advises on, and assists with the assessment of the potential impact on existing access security mechanisms of specific planned
technical changes, in order to help ensure that potential compromise or weakening of existing security controls is minimised. Also
assists in the evaluation, testing and implementation of such changes.
• Drives liaison with customers and stakeholders in order to pursue continual service improvement and produce customer-driven and well-supported services.
• Delivers and contributes to the design and development of specialist IT security education and training for IT staff and system users.
• Manages the operation of appropriate security controls as a production service to business system users.
• Monitors the application and compliance of security operations procedures, and reports on non-compliance.
• Ensures that training, guidance and support is provided to other security administrators, in all aspects of security policy and
control.
• Plans and manages the work of small teams of security staff on complex IT security specialism projects.
• Maintains knowledge of the technical specialism at the highest level.
• Keeps in close touch with and contributes to current developments in the technical specialism within employing organisation, own
industry and in appropriate professional and trade bodies. Is fluent at articulating best practice and is a recognised authority in the
technical specialism.
• Be familiar with relevant University IT-related procedures and policies (acceptable use, data protection, freedom of information,
information security, purchasing etc) and advise colleagues and end-users accordingly.
• Undertake various other tasks on an occasional basis at the request of more senior staff in the department, and to a level
commensurate with training, knowledge, grade and skills.
Note: This job description was created in the spirit of the BCS (The Chartered Institute for IT), SFIA (Skills for the Information Age) level 5
with support from the BCS.
Organisational Responsibility
Responsible to: <line manager> but may receive strategic instruction from the Director of IT.
Responsible for: None; staff at the senior level may be asked to deputise for their line manager in case of absence.
Hours: 37 hours per week, within the hours of 8:00 to 18:00 Monday to Thursday and 8:00 to 17:30 Friday, including 1 hour lunch period.
The precise pattern of working within these guidelines will be agreed in advance with your manager.
Special Conditions
Many staff carry mobile phones which allow them to be paged by various systems at all reasonable hours of the week. When
monitoring, diagnosis and configuration of services needs to be done outside normal working hours, it can sometimes be appropriate
for the work to be carried out remotely at home when convenient.
Attendance on site outside normal working hours is occasionally necessary, for example during major system changes and
maintenance. Such out-of-hours working as is necessary is scheduled in negotiation with the group of staff with relevant skills, and
takes account of the personal commitments and wishes of individuals.
For purposes of system management, IT Services staff often have enhanced access to data, files and computer systems and must at
all times respect the privacy of information to which they have enhanced access. The only exception to this will be investigations
authorised by IT Services Director or his/her nominee.
Please note: <organisation> is working towards equal opportunities and observance of our equal opportunities policy will be required.
PERSON SPECIFICATION
Job Title: Senior Information Security Specialist
Department: IT Services
• All staff have a statutory responsibility to take reasonable care of themselves, others and the environment and to prevent harm
by their acts or omissions. All staff are therefore required to adhere to the University’s Health, Safety and Environmental Policy &
Procedures.
Conditions of Service
The appointment will be on a full time, open ended contract on Management and Specialist Grade (salary, discretionary to salary per
annum)* at a starting salary commensurate with experience and qualifications.
Informal Enquiries
Informal enquiries should be made to <name>, <title> by email at: <email address> or by telephone on <telephone number>.
Application
The closing date for receipt of applications is <insert>.
Job purpose
• Investigates identified security breaches in accordance with established procedures and recommends any required actions.
• Assists users in defining their access rights and privileges, and administers logical access controls and security systems. Maintains
security records and documentation
• Conducts security risk and vulnerability assessments for defined business applications or IT installations in defined areas, and
provides advice and guidance on the application and operation of elementary physical, procedural and technical security controls (e.g.
the key controls defined in ISO/IEC 27001).
• Performs risk and vulnerability assessments, and business impact analysis for medium size information systems. Investigates
suspected attacks and manages security incidents.
• Conducts business risk and vulnerability assessments and business impact analysis for well-defined business applications or IT
installations.
• Reviews compliance with information security policies and standards. Assesses configurations and security procedures for adherence
to legal and regulatory requirements.
• Reviews network usage. Assesses the implications of any unacceptable usage and breaches of privileges or corporate policy.
Recommends appropriate action.
• Provides advice and guidance on the application and operation of elementary security controls (e.g. the key controls defined in ISO/IEC
27001) and communicates information assurance issues effectively to users of systems and networks.
• Supervises and/or administers the operation of appropriate security controls (such as physical or logical access controls), as a
production service to business system users.
• Investigates and reconciles violation reports and logs generated by automated policing mechanisms in accordance with established
procedures and security standards. Investigates any other identified security breaches in accordance with established procedures.
Interviews minor offenders and compiles reports and recommendations for management follow-up.
• Assists users in defining their needs for new access rights and privileges. Operates and administers logical access controls and directly
associated security services relating to all platforms used in order to provide continuous and secure access to information services.
• For all services and systems within IT Security Management, maintains auditable records and user documentation. Assists in the
preparation and maintenance of other documentation such as business recovery plans, particularly in the data collection and
compilation/production/distribution phases of the exercise.
• Provides advice and handles enquiries relating to other security, contingency planning and related activities.
• Maintains knowledge of the technical specialism.
• Be familiar with relevant University IT-related procedures and policies (acceptable use, data protection, freedom of information,
information security, purchasing etc) and advise colleagues and end-users accordingly.
• Undertake various other tasks on an occasional basis at the request of more senior staff in the department, and to a level
commensurate with training, knowledge, grade and skills.
This job description was created in the spirit of the BCS (The Chartered Institute for IT), SFIA (Skills for the Information Age) level 4 with support
from the BCS.
Organisational Responsibility
Responsible to: <line manager> but may receive strategic instruction from the Director of IT.
Responsible for: None; staff at the senior level may be asked to deputise for their line manager in case of absence.
Hours: 37 hours per week, within the hours of 8:00 to 18:00 Monday to Thursday and 8:00 to 17:30 Friday, including 1 hour lunch period.
The precise pattern of working within these guidelines will be agreed in advance with your manager.
Attendance on site outside normal working hours is occasionally necessary, for example during major system changes and
maintenance. Such out-of-hours working as is necessary is scheduled in negotiation with the group of staff with relevant skills, and
takes account of the personal commitments and wishes of individuals.
For purposes of system management, IT Services staff often have enhanced access to data, files and computer systems and must at
all times respect the privacy of information to which they have enhanced access. The only exception to this will be investigations
authorised by IT Services Director or his/her nominee.
Please note: <organisation> is working towards equal opportunities and observance of our equal opportunities policy will be required.
PERSON SPECIFICATION
Job Title: Information Security Specialist
Department: IT Services
All staff have a statutory responsibility to take reasonable care of themselves, others and the environment and to prevent harm by their
acts or omissions. All staff are therefore required to adhere to the University’s Health, Safety and Environmental Policy & Procedures.
Conditions of Service
The appointment will be on a full time, open ended contract on Management and Specialist Grade (salary, discretionary to salary per
annum)* at a starting salary commensurate with experience and qualifications.
*The appointment will be subject to the University’s normal Terms and Conditions of Employment for Academic and Related staff,
details of which can be found at:
Informal Enquiries
Informal enquiries should be made to <name>, <title> by email at: <email address> or by telephone on <telephone number>.
Application
The closing date for receipt of applications is <insert>.
SFIA (the Skills Framework for the Information Age), from the British Computer Society, defines core competencies for a range of IT-related disciplines.
The core competencies for information security professionals are listed below:
• Demonstrates extensive knowledge of good security practice covering the physical and logical aspects of information products,
systems integrity and confidentiality.
• Has expert knowledge of the employing organisation’s security policies and all relevant legislation and industry trends which
affect security within the defined scope of authority.
• Exhibits leadership qualities and is persuasive. Is familiar with the principles and practices involved in development and
maintenance and in service delivery.
• Has extensive technical understanding and the aptitude to remain up to date with IT security and developments.
• Possesses a comprehensive understanding of the business applications of IT. Is effective and persuasive in both written and oral
communication.
• Is thoroughly familiar with the employing organisation’s security policies and all relevant legislation and industry trends which
affect security within the defined scope of authority.
• Has extensive knowledge of the principles and practices involved in development and maintenance and in service delivery.
• Has good technical understanding and the aptitude to remain up to date with IT security and developments.
• Possesses a general understanding of the business applications of IT.
• Is effective and persuasive in both written and oral communication.
The UCL Security Working Group (SWG) partnered with UCL Human Factors researchers (led by Prof. Angela Sasse) from 2012,
consulting regularly to ensure that the expectations of the user-facing password policy within the university were realistically
achievable.
Related issues were presented by the SWG, and research expertise was shared by Prof. Sasse’s group. Password policy was shared with
researchers within the university.
Password policy was considered, but wider options for investment were also explored (such as alternative authentication technologies).
After a series of meetings discussing organisation-wide password policies and authentication capabilities, business-driven decisions
were presented which served to bound options for refining the password policy. Specific advice was tailored by researchers to match the
university infrastructure (e.g. password length and complexity, password renewal intervals), based on research knowledge.
Viable changes were integrated into the password policy, reflecting the outcomes of discussions with researchers. These were discussed
further with researchers, identifying potential future directions for investment and changes to policy, as well as challenges which may
be faced as the organisation itself changed (in terms of population, available technologies, etc.).
This consultation explored ways to make better use of existing security systems, through communication with on-site researchers with
related expertise. Consultation with researchers identified future challenges and informed procurement decisions. The process served
to transfer expertise in both directions - researchers gained understanding of the deployment of authentication technologies and
related security measures in practice within a large organisation, and SWG expanded understanding of the human factor of security.
Both sides then demonstrated additional value to other functions within and outside the university. IT administrators developed a
greater appreciation of the principles of human factors in security, and researchers gained insights that informed their research efforts
at similar levels with other organisations.
Dialogue with researchers informed and influenced elements of authentication principles and password principles adopted at
practitioner meetings. A new password policy was developed and approved by the university's governance group.
Key Points
• Organisations can consider how cutting-edge or multi-disciplinary expertise already present within the organisation can be
utilised in a way that benefits both security administrators and researchers.
• This case study also highlights that organisations should be aware of changes in the operating environment, and how these
influence the bodies of expertise necessary to make informed decisions about policy and procurement.
• This is an example of an organisation actively identifying ways to minimise the impact of core security responsibilities while
maintaining an adequate level of security across the organisation (specifically the creation and management of suitably secure
passwords which can be maintained over time by members of the organisation).
This case study describes the approach taken by Cardiff University in its attempt to increase and improve user awareness of information
security and thus mitigate, to an extent, information security risk.
The Case
Cardiff University is a member of the Russell Group, a group of 24 leading UK research intensive universities. It is the 12th largest
university in the UK in terms of student numbers and features amongst its academic staff two Nobel Prize winners, Professor Sir Martin Evans and Professor Robert Huber.
In July 2012 the Information Security Framework (ISF) programme was initiated. The aim of the three year programme was to create a
framework by which the University can manage the significant financial and reputational risks involved in collecting, storing and using
personal and other data and to assure external stakeholders that the University can be regarded as secure in relation to the way it
manages its information/data. The programme considers all aspects of information security, both technological and organisational.
The programme was split into three stages: Foundations, Assessment and Evaluation and Implementation.
Challenge
The University employs a wide range of individuals carrying out a diverse set of roles, from staff tasked with managing the University estate to academic staff engaged in novel research, delivering education and so on. Engaging with such a diverse audience is not straightforward.
A further challenge when trying to engage individuals with the subject of information security is the preconception that information security is a concern for the IT department alone, that it is all about the confidentiality of information, and that it is about stopping people from doing things and creating barriers to using new technology (cloud storage, for example).
Finding a mechanism for engaging and educating a broad range of individuals, then, is the challenge.
The Study
The activities below were the primary awareness-raising activities delivered by the Information Security Framework Programme between January 2013 and June 2014.
In addition to these, briefings and updates about the programme's objectives and progress were delivered through a range of regular communication channels: staff meetings in Schools, various departmental briefings, updates in a range of internal publications/newsletters, emails etc.
Connections
At the outset of the programme an online Information Security Community was created using the University's online collaborative workspace called ‘Connections’. The purpose of this community was to have an area, accessible only to members of staff, where updates about the programme could be posted, key documents could be shared for comment and review, and important information security news stories relating to the work of the programme could be blogged about.
Whilst the workshops were not a communications exercise they did serve to create a core of individuals who were aware of the
programme and had experienced the chosen risk assessment methodology.
Survey
In March 2013, the first of an annual, all-staff, information security survey was released. The survey asked staff for their views on a
range of information security questions, from how secure they feel the University keeps their data, to what measures they take at home
to protect personal computers from which they connect to the University.
The survey was announced in an email to all staff sent in the Vice Chancellor's name; it was also advertised via an internal news feed, through the Connections community, through the programme's Operational Group (a cross-discipline team with representatives from the University's Colleges and Professional Services departments) and through various meetings.
Teaser campaign
During the initial stages of the implementation phase of the project, one of the key deliverables was an information security website
http://cardiff.ac.uk/isf.
In order to generate interest in the run-up to the launch of the website, the concept of a teaser campaign was used. This took the form of a life-sized robot (Appendix 1) carrying a sack of data. This decal was installed in buildings across the University campus in areas of high footfall in order to get staff and students curious about what it was for, so that when subsequent communications and activities took place there was a pre-established interest.
Once the robot had been on display for two working weeks, a set of six posters (Appendix 2) was deployed across the campus (in both English and Welsh). Each poster provides advice on a specific information security topic, with topics chosen to resonate with issues which affect users in their personal lives as well as in a University context. To accompany the posters, a set of stickers was also distributed across campus (Appendix 3). The stickers consisted of a robot from the posters together with the URL of the information security website, and were located in unexpected areas to catch the eye of passers-by and generate traffic to the website.
Website
The culmination of the teaser campaign was the launch of the Cardiff University Information Security Website http://cardiff.ac.uk/isf.
The website provides information about the ISF programme, advice on a variety of information security topics, a home for the new information security policies generated by the programme, information to assist researchers when completing information security questionnaires as part of research bids, the University Information Classification Scheme and associated handling rules, and a blog for the programme to share information security news.
In its first three months the site received over 1,500 unique visitors and 6,872 page views, with visitors viewing an average of five pages per visit.
Phishing Exercise
During early June 2014 the programme initiated a phishing exercise. An email purporting to be from the University IT department, but sent from a .co.uk domain and containing various other ‘give-aways’, was distributed to all University staff (some 10,000 email addresses) over a period of three weeks. The aim of the exercise was to test susceptibility to phishing and provide a metric for the programme to measure over time. Upon receipt of the phish, the user was prompted to click on a link advertised as taking them to a site to log in using their University credentials and apply for extra network drive capacity. The page to which the user was actually taken, http://sites.cardiff.ac.uk/isf/cardiff-university-phishing-exercise/, provided advice on how they could have identified the email as a phish and how to avoid such scams in the future.
Statistics were generated about both the numbers of staff reporting the email as suspicious through the IT Service Desk as well as
numbers of unique visitors to the web page.
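As a minimal sketch of how such figures might be turned into comparable metrics, the snippet below computes click and reporting rates from raw counts; the email total reflects the "some 10,000 addresses" quoted above, the reporting figure reflects the 3.2% noted later in this study, and the landing-page count is a purely hypothetical placeholder.

    # Illustrative sketch: deriving phishing-exercise metrics from raw counts.
    emails_sent = 10000            # approximate number of staff addresses targeted
    unique_landing_visits = 1200   # hypothetical placeholder, not an exercise result
    service_desk_reports = 320     # corresponds to the 3.2% reporting rate noted below

    click_rate = unique_landing_visits / emails_sent * 100
    reporting_rate = service_desk_reports / emails_sent * 100

    print(f"Click rate: {click_rate:.1f}% of recipients")
    print(f"Reporting rate: {reporting_rate:.1f}% of recipients")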
Password Change
Whilst not an awareness raising exercise in itself, the ISF programme has initiated a project to change the
University Password Policy. As part of this project the robot from the ‘Passwords can be very predictable’ poster
has been deployed to all University managed workstations as an icon. The icon launches a web browser to the ISF
website page describing the changes to the policy.
Next Steps
Evaluation
To evaluate the responses to the 2014 information security survey and assess the effectiveness of the programme in changing the levels
of awareness of information security and to identify areas for further effort.
Training
One of the next steps for the programme is the development and roll out of mandatory information security awareness training for all
staff and Post Graduate Researchers. This will further communicate to staff the importance of information security awareness but in a
more formal setting where there will be enforcement around compliance.
Appendices
The posters carried the following advice, each ending with “For more information visit: www.cardiff.ac.uk/isf”:
Protecting yourself online is easy. Simply:
• Use a strong password
• Use a mix of letters, numbers and symbols
• Use different passwords for each account
• Protect your personal data at home and work
Protecting your identity online is easy. Simply:
• Always lock your device
• For added security set your device to automatically lock when it goes to sleep
• An unlocked device leaves access to your data
Protecting your identity online is easy. Simply:
• Keep your security software up to date
• Allow automatic updates for security patches
• Install firewall and anti-virus software
• Scan external devices (e.g. USB sticks) for viruses
What is a phish?
Phishing is the name given to the practice of sending emails at random, purporting to come from a genuine organisation. This sort
of email attempts to trick the recipient into entering confidential information, such as credit card or bank details, usernames and
passwords. The links contained within the message are false, and often re-direct the user to a fake web site.
Authorisation for carrying out the exercise was secured through the Information Security Framework programme Steering Group
and communicated to the University Business Change Portfolio Oversight Group. Both bodies include representation from senior
Professional Services and Academic members of staff.
The email warned recipients that they were running low on storage quota and to follow a link to enter their credentials and apply for
extra storage.
The email contained a number of tell-tale signs of being a phish including spelling mistakes, a non ‘.ac.uk’ originating email address, URGENT markings and an embedded hyperlink taking users to an address different from that which it advertised.
Those recipients who subsequently clicked on the hyperlink were taken to a University hosted ‘landing page’ (via a redirected web
address with a domain matching the one from which the email was sent) which informed them that the email had been a phishing
exercise run by the University, provided a reassuring message about the purpose of the exercise, highlighted the tell-tale signs that the
email was a phish and provided advice and links to further information on phishing. By using this mechanism targeted education on
how to avoid falling for phishing emails was provided to those most likely to fall victim to such attacks.
By keeping awareness of the details of the exercise to a limited group (IT Service Desk and Programme Team) it was possible to ensure
that the reactions of staff, including the responses of devolved IT staff within Departments and Schools were as they would be had the
email been a real phishing attack.
Where individuals contacted the IT Service Desk, or read the content of the landing page they were encouraged to keep their awareness
of the exercise to themselves and not warn colleagues.
Unique hits to the landing page were measured using Google Analytics (access to the landing page was only possible through receiving
and clicking on the link in the phish).
• There may be staff who avoided the phish by virtue of not reading the email; in future exercises it may be desirable to be able to send a follow-up or chase email.
• Levels of reporting to the IT Service Desk were relatively low (3.2%) and work is needed to encourage staff to report such threats.
• There was evidence of local reporting mechanisms at School and Department level which are not formally recorded and which
may have significantly increased the percentage figure for staff reporting the phish.
• The exercise tests whether users will click on potentially dangerous links, but has an inherent assumption that they would then enter their credentials. As noted above, some staff clicked on the link out of curiosity, in order to investigate how ‘good’ a phish it was, but would not have entered credentials.
Next Steps
• The outcomes of the exercise will be communicated to the University Information Security Review Group.
• The outcomes of the exercise will be publicised across the University to further raise awareness and to flag the advice to those who
have not already accessed it through the phishing email.
Evaluating a measurement
The organisation should assess any existing or proposed measurement against the qualities below to determine which are likely to be most informative. If the answer to any of these questions is “No”, explore other possible measures or look to compensate for shortcomings.
Is my measure ...
RELIABLE? Can it be consistently measured in a repeatable way?
Is it reproducible? If NOT, are there known explanations of sources of uncertainty which are acceptable to all
stakeholders (even if they dominate the values)?
SUSTAINABLE? Is it cheap to gather? If it needs to be computed frequently, is the metric’s source data cheap to gather?
Can it be quickly evaluated? Are the costs of evaluation low enough that it is useful for those who will use it?
Can it be accurately measured? If NOT, is the distance between “true” and “real” measurement acceptable to
stakeholders?
Are measurements current enough to be useful, or time-stamped to a precision that makes them traceable?
OBJECTIVE? Are measurements free of influence from the measurer’s will or personal feeling?
Is it unbiased?
Is the process or system for collecting measurements correct according to its specification?
SCOPED? Is it contextually specific?
Is the domain in which it applies clearly defined? Conversely, does it overlap with other measurements?
Is it meaningful to stakeholders, and does it reflect the meaning of what it is expected to be measuring?
Is it relevant to stakeholders?
Is it easy to interpret?
INSTRUMENTABLE? Can it be automated through tool support?
Is it sufficiently non-intrusive?
Can the measurement process, and environment being measured, be adequately controlled?
TRANSPARENT? Can it be proved that it actually measures what it is supposed to?
Can the distance between the specified state (“should be”-state) and the real operational state (“as is”-state)
be known?
Are there stakeholders within the organisation capable of creating, using, and refining it?
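One possible way to apply the checklist above in practice is sketched below: a simple structure that records a Yes/No answer per quality for a candidate metric and flags any shortcomings. The quality names follow the table; the scoring approach and the example metric are assumptions for illustration only.

    # Illustrative sketch: scoring a candidate security metric against the
    # qualities above. A "No" answer flags a shortcoming to compensate for.
    QUALITIES = ["reliable", "sustainable", "objective",
                 "scoped", "instrumentable", "transparent"]

    def evaluate(metric_name, answers):
        """answers maps each quality to True (Yes) or False (No)."""
        shortcomings = [q for q in QUALITIES if not answers.get(q, False)]
        if shortcomings:
            print(f"{metric_name}: explore alternatives or compensate for: "
                  + ", ".join(shortcomings))
        else:
            print(f"{metric_name}: satisfies all listed qualities")

    # Hypothetical example: assessing "phishing click rate" as a measure
    evaluate("phishing click rate",
             {"reliable": True, "sustainable": True, "objective": True,
              "scoped": True, "instrumentable": True, "transparent": False})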
Resources for Chapter 11 – When things go wrong: non-conformities and incidents
Developing an Information Security Incident Response Plan based on ISO/IEC 27035:2011
– University of Oxford
Introduction
Information security incidents are, one way or another, inevitable but the response to an incident can still reduce the overall risk to an
organisation by way of reducing the impact of any given incident. The key to good incident management is good communication and
ensuring all stakeholders are aware of their roles and responsibilities. In order to achieve this, roles, responsibilities, procedures and protocols
need to be defined, agreed and tested. ISO/IEC 27035:2011 Information Technology – Security techniques – Information security incident
management provides the outline of one method for implementing an incident response scheme and this study documents some of the
applications and lessons learned from following such an approach in a University setting.
Incident Detection/Reporting
Clear guidance needs to be given on when to report incidents, what to report, how and to whom. Many incidents may not be reported
centrally either because they are not recognised as security incidents or because policy and/or process for reporting has not been widely
communicated and understood. As a result the number and type of incidents dealt with across an organisation will not tell the whole story,
hampering informed decision making based on a true understanding of risk.
The definition of a security incident may need to be reviewed. Typically IT security related incidents have been dealt with in isolation but, with
a general move towards greater focus and maturity in information security, it is now necessary to expand the definition of an incident to
include breaches of information security regardless of the format. It is, however, important to clearly define when events become incidents
and should be reported. To this end guidance should be produced for individual departments to understand clearly what types of incident
need to be reported and how. For example ongoing incidents may need to be reported immediately as assistance or escalation is required (e.g.
compromised server, stolen laptop with personal data). Other incidents might only be useful for statistical purposes and they can be reported
periodically (e.g. number of malware infections dealt with by a department in the past month).
Whether an incident should be reported immediately will depend on the potential impact on the organisation as a whole. Therefore criteria
should be agreed in advance with senior stakeholders and appropriate guidance should be provided to local departments. Incidents should
be reported within departments or sections initially and the guidance should be used to make a decision as to whether to report centrally. A
single point of contact should be provided for reporting centrally and, ideally, will be made use of by specific departmental liaisons. However it
should be recognised that incidents will be reported to alternative contacts. It is therefore important to ensure that staff within departments
(particularly those providing central services) are aware of where to forward incident reports.
A clear and concise format for escalation reports should be agreed upon in advance. Escalation reports might (for example) include the current
impact, potential impact and how likely that is, as well as any specific action required of anyone receiving the reports. This will lead to far fewer queries when escalating incidents, thus considerably improving efficiency and speed of communication.
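As an illustration of such a format (not part of the Oxford scheme itself), the sketch below captures the suggested elements as a simple record; all field names and the example values are hypothetical.

    # Illustrative sketch: a minimal escalation report record mirroring the
    # elements suggested above. Field names and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class EscalationReport:
        summary: str
        current_impact: str             # e.g. "MODERATE"
        potential_impact: str           # e.g. "MAJOR"
        likelihood_of_escalation: str   # how likely the potential impact is
        action_required: str            # specific action asked of recipients

    report = EscalationReport(
        summary="Compromised departmental web server",
        current_impact="MODERATE",
        potential_impact="MAJOR",
        likelihood_of_escalation="Likely if not contained within 24 hours",
        action_required="Approve temporary removal of the service from the network",
    )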
Responding to incidents often requires coordination amongst different business functions. Depending on the nature of the incident this could
include physical security services, legal services, data protection offices, the press office etc. Having a small core of initial incident responders
and senior stakeholders will mean that the right people are immediately informed of incidents and may bring other stakeholders in (such as
legal services and the press office) as required. Ensuring the incident response and escalation team include senior stakeholders representative
of the organisation will help to ensure that appropriate backing is received when dealing with incidents. To ensure that stakeholders are
fully aware of mitigating actions that are taken, regular updates (particularly changes in status) should therefore be communicated to the
immediate response team.
It is also worth noting that the criteria used for escalation of incidents are primarily used as a tool by initial incident responders but are aimed at senior stakeholders. In other words, incident responders tend to be experienced and know when something should be escalated. The impact
criteria can therefore be seen as a means for initial incident responders to explain why an incident has been escalated. If the criteria are not
useful for incident responders in this way they should be revised.
It is particularly beneficial to understand and agree the responsibilities of initial incident handlers, particularly in terms of ascertaining the
right level of information in order to make an informed decision as to the potential impact of an incident and maximise efficiency when
incidents are escalated. For example, agreeing in advance with the data protection officer the questions that need to be asked in terms of the
level of personal data involved in a breach means that the initial incident handler can complete the triage stage. This allows stakeholders, such
as the data protection officer, to make quicker, informed decisions and reduces the amount of correspondence and communication channels
required. Incidents involving personal data (or potentially involving personal data) are now dealt with much more efficiently.
• Policies for incident response should be agreed in advance and signed off by senior management, as should subsequent processes in the scheme.
Example of an information security incident response scheme
Introduction
The purpose of this scheme is to provide detailed documentation describing the policies, activities and procedures for dealing with information security events and incidents. The scheme includes definitions of information security events and incidents and should be used as a guide for:
Scope
This policy forms part of the information security management framework and supplements the University’s information security
policy. It applies to events and incidents affecting any University information assets or information system. It applies to and will be
communicated to all those with access to University information systems, including staff, students, visitors and contractors.
Objective
The University recognises the importance of, and is committed to, effective information security incident management in order to help
protect the confidentiality and integrity of its information assets, availability of its information systems and services, safeguard the
reputation of the University and fulfil its legal and regulatory obligations.
2. Guidance and procedures for the detection, assessment, communication, reporting and escalation of security vulnerabilities, events and incidents will be provided via the information security website, training programs and other communication channels.
3. All information security incidents must be reported via the appropriate management channels.
4. Responsibilities for the reporting and escalation of security vulnerabilities, events and incidents should be clearly defined and communicated to all relevant personnel.
5. Security events and incidents should be assessed according to the event/incident classification scale provided via the information security toolkit and, where necessary, escalated accordingly.
6. An information security incident response team (or teams) comprising representatives from all relevant parts of the University, shall coordinate the management of and response to incidents which require escalation in accordance with an Information Security Incident Response Plan.
7. Incidents involving personal data will be reported to the Data Protection Officer.
8. Incidents which involve personal safety, security or require the involvement of law enforcement will be reported to the head of physical security.
10. All information security incidents will be recorded for later analysis.
11. Post incident reviews will be carried out in order to identify where improvements in policies, procedures and information security controls can be made.
12. The types, volumes and impact of security incidents will be recorded and reviewed and summary reports will be used as input to the University’s information security risk register.
13. Specific incident reports will be reviewed by the Information Security Working Group who may advise on corrective action in the future.
14. Information security incident procedures will be communicated to all relevant personnel and tested periodically.
• Monitoring network traffic to identify compromised or potentially compromised systems within the University network;
• Receiving internal and external reports on compromised systems;
• Protecting the security and integrity of the University backbone network and its core information systems and services by blocking network access to any compromised machine;
• Informing and liaising with local IT staff to ensure that computer security incidents are dealt with promptly and effectively;
• Ensuring that compromised systems are fully cleaned and patched against known vulnerabilities, or the risk otherwise mitigated,
before being reconnected to the network;
• Providing advice and guidance on dealing with computer and network security;
• Maintaining a register of computer security incidents;
• Initial investigation into the type and quantity of personal (or otherwise confidential) data involved in a compromise;
• Appropriate escalation of computer security incidents in accordance with the information security incident management scheme/
plan.
• Coordination of University-wide responses to information security incidents via the crisis/escalation team;
• Receiving reports on information security incidents and breaches of the information security policy;
• Appropriate escalation of information security incidents in accordance with the information security incident management
scheme/plan;
Crisis/Escalation Team
Some incidents will require escalation above the ISIRT in order that senior management within the University are made aware of, and may respond accordingly to, serious and potentially serious information security incidents. The Crisis/Escalation Team consists of senior
members of relevant University departments. Not all members of the Crisis/Escalation Team will need to be alerted to all information
security incidents immediately. The classification scheme and requirements for escalation set out below will be used by the ISIRT to
determine when the various parts of the Crisis/Escalation Team will be called into action.
The Crisis/Escalation Team will be made up of a core set of senior staff and will therefore consist of (e.g.):
• Press Office
• The Registrar
• HR
• Legal Services Office
Roles and Responsibilities for the Crisis/Escalation Team
The roles and responsibilities for the Crisis/Escalation Team are as follows:
• Ensuring incidents are escalated appropriately to other members of the crisis/escalation team.
• Leading the coordination and response of the crisis/escalation team.
Data Protection Officer Responsibilities
The University’s Data Protection Officer is responsible for:
• Decisions to report and subsequent reporting of data protection incidents to the Information Commissioner
• Communication to relevant staff of correspondence with the Information Commissioner
Head of Compliance
• Receiving reports of incidents that have been escalated and confirming the classification of those incidents
• Ensuring that the Registrar, Press Office, Legal Services, HR and any other relevant senior stakeholders are fully informed and
updated on the progression of incidents as appropriate
• Providing senior management support for the ISIRT and the incident response scheme.
Information security event: identified occurrences of systems, services or networks that have the potential to breach information security policies.
Information security incident: a single or series of unwanted events that compromise (or are likely to compromise) the confidentiality, integrity or availability of University data and/or breach University information security policies.
Some examples of information security events and incidents can be found below:
Information security events need not be reported immediately but may be reported periodically for information purposes. Usually
information security events will be recorded automatically in log files relating to IT systems. These can be reported by local IT support staff either manually or automatically. Other information security events should be reported to a local point of contact or via line
managers who will decide whether to pass on the reports. No response should be expected to reports of information security events
unless specific problems are identified.
All information security incidents must be reported. The University operates a devolved model for support when it comes to IT
and information security, therefore users should usually report security incidents to identified local contacts. If there is any doubt
then incidents should be reported to a user’s direct line manager who will be responsible for deciding whether further action and/
or reporting is required. Information security incidents should then be reported according to the initial incident reporting protocol
described below.
Initial Incident Reporting Protocol
Information security incidents should be reported according to the following protocol:
When an information security event is detected, an initial analysis is carried out:
• If the event is a phishing attack, report it to [email protected].
• If no incident is suspected, optionally document the event and report it periodically.
• If an incident is suspected and a local contact point exists or is known, report it to the local contact point.
• If no local contact point is known, or a security incident is confirmed, report it to [email protected].
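Read as pseudocode, the reporting flow above amounts to a short triage routine. The sketch below is an illustration of that logic only; the branch order is approximate where the original flowchart was ambiguous, and the yes/no inputs are assumed to be local judgements rather than outputs of any particular system.

    # Illustrative sketch of the initial reporting flow above. The inputs
    # are assumed local judgements, not functions of any particular system.
    def route_report(is_phishing, incident_suspected,
                     local_contact_known, confirmed_incident):
        if is_phishing:
            return "Report to [email protected]"
        if not incident_suspected:
            return "Optionally document and report periodically"
        if local_contact_known and not confirmed_incident:
            return "Report to the local contact point"
        return "Report to [email protected]"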
Initial incident handling will typically be handled by the CSIRT in accordance with their standard processes and practices. Additionally
the CSIRT will make standard enquiries into the level of personal (or otherwise confidential) data that may have been exposed as a
result of any incident. In the case of personal data, details are given in Appendix B as to the information required by the Data Protection Officer.
The incident classification scheme will be used in the first instance for determining whether incidents should be escalated to other
members of senior management throughout the University. Clearly the full impact of an incident will not be known at the time of
initial response. The full impact will therefore be assessed in separate reporting and review of incidents at a later date. This information
can be used to assess how appropriate the escalation process was based on the classification at the time. Incidents will be escalated based on their current impact. In order that incidents are escalated appropriately, therefore, the classification scheme needs to take into
account the potential impact. This is reflected by including “importance of information system” in the classification scheme. Incidents
affecting important or critical information systems will therefore always create a higher level of alert.
In order to provide senior staff within the escalation team with the information they require to determine what (if any) action is required, incident escalation reports will include the following information:
Further details on the process involved in escalating incidents according to the classification scheme can be found in Appendix C.
Incident Escalation Protocol
Incidents will be escalated by the ISIRT in accordance with the protocol described below:
Incidents are reported to the CSIRT (and, where relevant, to the Data Protection Officer) and then reported to the ISIRT, either for escalation or for statistical/reporting purposes.
• If the report is not a confirmed security incident, the ISIRT reports back to the reportee.
• Confirmed incidents undergo incident assessment and classification.
• Class 1: escalate immediately to the whole crisis team; the crisis response process is initiated.
• Class 2: where personal data is involved, escalate to the Director of IT Risk Management, Head of Compliance and Data Protection Officer; otherwise escalate to the Director of IT Risk Management and Head of Compliance.
• Class 4: the ISIRT deal with and record details; incidents are reported periodically under the standard ISIRT response/reporting process.
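The routing above, together with the class descriptions later in this scheme, can be summarised as a simple mapping from incident class (and whether personal data is involved) to escalation recipients. The sketch below is a summary for illustration only, not an operational tool; Class 3 handling follows the description given later in this document.

    # Illustrative sketch: escalation recipients by incident class, summarising
    # the protocol above and the class descriptions later in this scheme.
    def escalation_targets(incident_class, personal_data_involved):
        if incident_class == 1:
            return ["Whole crisis/escalation team (crisis response process initiated)"]
        if incident_class == 2:
            targets = ["Director of IT Risk Management", "Head of Compliance"]
            if personal_data_involved:
                targets.append("Data Protection Officer")
            return targets
        if incident_class == 3:
            return ["Director of IT Risk Management"]
        # Class 4: handled and recorded by the ISIRT, reported periodically
        return ["ISIRT (standard response and periodic reporting)"]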
Incident Category
The following table provides categories and descriptions for various incident types:
Impact Categories
All incidents will be categorised according to their impact. The impact will be either CRITICAL, MAJOR, MODERATE or MINOR based on the impact categories below. The greatest impact from the five different types of impact will determine the impact category assigned to an incident. When incidents are reported, the current and potential impact should be reported along with some indication of how likely escalation may be (or what would need to happen for the potential impact to be realised).
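Because the overall impact is simply the greatest impact across the individual impact types, it can be expressed as a maximum over the per-type assessments, as in the illustrative sketch below; the impact type names follow the tables in this scheme and the example assessment is hypothetical.

    # Illustrative sketch: overall impact is the greatest impact across the
    # individual impact types assessed for an incident.
    LEVELS = ["MINOR", "MODERATE", "MAJOR", "CRITICAL"]

    def overall_impact(impacts):
        """impacts maps an impact type (e.g. 'service impact') to a level."""
        return max(impacts.values(), key=LEVELS.index)

    # Hypothetical assessment of a single incident:
    print(overall_impact({
        "importance of information system": "MAJOR",
        "service impact": "MODERATE",
        "privacy impact": "CRITICAL",
        "reputational impact": "MODERATE",
    }))  # -> CRITICAL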
Importance of Information System
Critical – Business-critical systems fundamental to the daily operations of the University supporting teaching, learning, research or the administration of the University. Compromise of a critical system would cause significant disruption or reputational damage to the University. ‘Significant’ in this context is defined as impacting the operations of multiple University departments; the disruption may be more significant at certain times of the year. Examples: email systems; financial systems; core infrastructure systems (such as routers, DNS); primary University web server.
Major – EITHER a system that is critical to the operations of a single department but may also impact other departments (loss of a Major system would cause significant disruption to the affected department and may cause inconvenience to other departments), OR
Examples: departmental directory server; main departmental webservers.
Service impact
Critical – University is no longer able to provide some core services to any users. Examples: central email service is unavailable; backbone network connectivity is lost or significantly impaired.
Major – University is unable to provide a core service to a subset of users. Example: central email relays blacklisted for certain emails.
Moderate – University is able to provide core services to users but secondary services may be unavailable and/or services may be impaired for a period of time.
Minor – No effect on the University’s ability to provide core services to users.
Privacy Impact
Critical: EITHER a potential or known breach of confidentiality where the release of data could cause a significant risk of individuals suffering substantial detriment, including substantial distress, OR exposure of personal data of 10000+ users. Examples: Unauthorised access to/disclosure of sensitive personal data such as medical records or individuals working on animal research.
Major: EITHER a potential or known breach of confidentiality where the release of data could cause a risk of individuals suffering substantial detriment, including substantial distress, OR exposure of personal data of 1000 – 10000 users. Example: Unauthorised access to/disclosure of application data.
Moderate: Exposure of limited personal data affecting 100 – 1000 users. Example: List of user details (such as names and addresses) exposed (e.g. access to the Global Address List).
Minor: Exposure of limited personal data affecting < 100 users. Example: Unauthorised access to a system containing limited, non-private information (e.g. usernames or email addresses).
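Where the criteria are purely numeric, the user-count thresholds in the Privacy Impact table translate directly into a simple check. The sketch below is illustrative only: it covers just the count-based branch of the criteria and deliberately ignores the detriment-based criteria, which require human judgement and can also raise an incident to MAJOR or CRITICAL.

    def privacy_impact_by_count(affected_users: int) -> str:
        # Thresholds taken from the Privacy Impact table above; the
        # "substantial detriment" criteria are not modelled here.
        if affected_users >= 10000:
            return "CRITICAL"
        if affected_users >= 1000:
            return "MAJOR"
        if affected_users >= 100:
            return "MODERATE"
        return "MINOR"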
Reputational Impact
Incident classification
Having been assigned an overall category and given an impact score, incidents will then be classified according to the following criteria:
Appendix B: Information required by the Data Protection Officer for incidents involving personal data
The following questions will be used as the basis for investigating information security incidents involving personal data. This reflects
the information that the Data Protection Officer will need to ascertain in order to make a decision on whether to pursue the incident
and potentially report to the Information Commissioner’s Office.
4. What is the evidence of data having been exposed, and what is the nature of the exposure (i.e. data already in the public domain, data exposed but unlikely to be the target of the attack/incident, etc.)?
6. Is the data still current, i.e. for how long should it have been stored/kept?
9. Was the attack specifically targeted, and what was the likely motivation of the attack?
Class 3 incidents will be escalated to the Director of IT Risk Management, who will make a judgement as to whether further
investigation is required. Such incidents will not normally need to be escalated immediately, but the Director of IT Risk Management will
ultimately be informed of all of them. Where personal information is involved, the ISIRT will carry out initial investigations into
the nature and extent of the information and the exposure before sending a report of the incident to the Data Protection Officer.
Statistics of all Class 3 incidents will be reported at Information Security Working Group meetings.
Class 2 incidents will be escalated immediately to the Director of IT Risk Management and the Head of Compliance and, where
personal data is potentially involved, the Data Protection Officer. The ISIRT will still usually be responsible for the initial investigations
into the nature and extent of exposure of any personal information, but the Data Protection Officer will likely be involved in all
communications.
Class 2 incidents, including their handling, eventual impact and lessons learned, will be reviewed at ISAG meetings.
Class 1 incidents will be escalated immediately to the whole crisis/escalation team. The CIO, Head of Compliance and Director of IT Risk
Management will be responsible for coordinating the response to such incidents, ensuring that sufficient resources are allocated to dealing
with the incident, and keeping senior stakeholders (such as the Registrar) fully informed.
Class 1 incidents, including their handling, eventual impact and lessons learned, will be reviewed at Information Security Working
Group meetings.
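The escalation rules above amount to a simple routing decision. The sketch below is illustrative only; the role names are taken from the protocol described in this appendix, and the treatment of Class 4 incidents and of personal data follows the descriptions above rather than any formal specification.

    def escalation_targets(incident_class: int, personal_data: bool) -> list:
        """Return who an incident should be escalated to (illustrative sketch)."""
        if incident_class == 1:
            # Whole crisis/escalation team, coordinated by the CIO, Head of
            # Compliance and Director of IT Risk Management.
            return ["Crisis/escalation team"]
        if incident_class == 2:
            targets = ["Director of IT Risk Management", "Head of Compliance"]
            if personal_data:
                targets.append("Data Protection Officer")
            return targets
        if incident_class == 3:
            targets = ["Director of IT Risk Management"]
            if personal_data:
                targets.append("Data Protection Officer (after initial ISIRT investigation)")
            return targets
        # Class 4: dealt with and recorded by the ISIRT, reported periodically.
        return ["ISIRT"]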
[Flowchart: Information Security Service: Information Security Incident Management Process – UCL. Incidents reported to the Service Desk (process managed by the Service Desk Manager) receive initial triage; if adequate information is available and the incident is information security related, it is escalated to the Infosec Team (process managed by the SOM for Information Security), otherwise it is escalated to the appropriate resolver groups and the ITIL Incident Management Process is invoked where required. The incident then proceeds through investigation/diagnosis to resolution, followed by notification, report distribution and an update to the incident log.]
Investigations and Data Access Policies – University of York, case study
An Investigations and Data Access policy can expect to be among the most scrutinised documents in a whole policy suite. Such policies
will be used in circumstances ranging from access to the documents of a member of staff who is off sick, to internal disciplinary procedures, to
police requests for data, and even to requests for surveillance under warrant from the Home Secretary. As such, even more care and wider
consultation are necessary than with other policies.
Different institutions will have different processes and expectations of privacy at work. At York, staff have always been permitted to use
their work email accounts for personal use; we do not do web filtering and do not proactively look at web logs for misuse. This culture
of personal use and privacy affected the final policy in many ways, and was an explicit part of the initial discussions. Other institutions
may take a very different line: FE institutions will usually have web filtering and alerting, for example.
These differences in emphasis mean that it is very unlikely that any general document will work, and a policy tailored to the institution
will be necessary.
At the University of York, our previous policy was very old and no longer fit for purpose. The title itself was a problem: “Policy for
Investigation of Incidents under the Regulation of Investigatory Powers Act” seemed to imply that the policy applied only to RIPA requests
and not to anything else, when in fact we used it not just for other legal requests but also for internal investigations.
• It pre-dated the use of cloud services. It was possible to read it so that cloud services came into scope, but it was also possible to
argue that they did not. With the University of York adopting Google Apps this became an urgent problem.
It was very obvious that the old policy needed more than a minor update, so we started again from scratch.
We came up with a set of sample issues and scenarios based on real cases over the past few years, and thought about how the old
policy had made life difficult. This generated a list of areas we needed to fix. Next we looked at who within the University should
authorise requests. We were surprised at the difference in views here: some departments thought that line managers (at any grade)
should be able to authorise access, while others wanted it limited to very senior staff. It was important to get agreement on this
fundamental principle before other work was done.
We also listed our constraints and assumptions. This helped us to consider specific parts of the policy against clear criteria, and was
useful when we were “lost in the detail”. For example:
• The policy had to align with other policies such as social media policy
• We needed to consult with unions etc.
• None of our staff are trained to evidence standards and the University had no wish to establish a forensics facility
• We assumed that users would be informed about access unless there was a specific reason not to do so
• We wanted the scope of the data accessed to be drawn as tightly as possible
• To protect privacy, we do not normally give direct access to an account (either via sharing the password or delegation); instead we
pass on the data.
Our final policy has been in place for a year, and works for us. It protects University members’ privacy by requiring sign-off at a
senior level (Head of Department for internal requests, the Registrar for external legal ones) and ensuring separation of request and
authorisation, but still allows us to give access to data quickly in urgent situations.
Links to policy
Policy
http://www.york.ac.uk/media/abouttheuniversity/supportservices/informationdirectorate/documents/policies/ITInvestigationsandDataAccessPolicy_
Oct2013.pdf
Method Statement
http://www.york.ac.uk/media/abouttheuniversity/supportservices/informationdirectorate/documents/policies/MethodStatement-
InvestigationsandDataAccess.pdf
At 11.59 on a Friday night, CSIRT staff at the University of Morpeth were alerted via email that students had discovered a way to
access a restricted system and could view personal information of moderate confidentiality by exploiting a mis-configuration in a
development system.
The message was picked up first thing on the Saturday morning but, due to a miscommunication between members of the team, the
bug was not fixed until shortly after 9am on the Monday morning.
Because students had spotted the issue, the student press was in touch immediately, with other local media following later that day.
University senior management were alerted while key technical staff set to work analysing the log files to understand the extent of the
breach.
The University had not “war-gamed” an incident like this, or the many decisions that have to be taken. For example, should a
spokesperson be provided for TV/radio, or should all media enquiries just receive a prepared statement? Making such decisions required
heavy involvement from senior staff working under considerable pressure. The University opted not to provide a spokesperson for
local radio and other requests, but kept this decision under close review as the incident progressed.
Fortunately, full logging was available and the University was able to determine every unauthorised access and which information was
viewed. This proved to be key in managing the incident, with the certainty given by detailed logging helping to manage fears.
A key learning point is that any analysis needs to be communicated carefully: an initial estimate of the number of people affected was
produced on the fly in a meeting by a member of the technical team. Within 90 minutes, and without double checking, that number
was released in an official press release. Fortunately for everyone concerned, when double checked the next day, the number was
correct.
Since the University now had a full list of the individuals affected, the nature of the data released and, in many cases, knowledge of
who had accessed the data, the decision was made to telephone everyone affected. Initial plans within IT had been to issue email alerts;
the decision to telephone everyone affected by the breach proved a very good one. People contacted were surprised to be contacted
directly, and grateful for the chance to be reassured and to get details of what had happened.
To make this work, two key things were done: 1) all phone calls were made by two members of staff working in tandem, and 2) a full
script was written in advance covering opening lines, responses to questions, details of the insurance offered, etc. The need for a script
was contested by some staff, but its value in avoiding mistakes when making the 20th phone call of the day soon became apparent.
The combination of a formal response, transparency about the extent of the breach, personal communications and offers of suitable
insurance seemed to work, with interest in the incident dropping rapidly after the first two days. There have been no reports of any loss
or harm to individuals as a result of the incident.
In the aftermath of the breach, the Information Commissioner’s Office was notified and a plan put in place to prevent a repeat including
governance changes, staff training and changes to development practice.
Key points:
• Having an incident plan in advance, covering IT, Communications and University Senior Management, with clear lines of escalation,
out-of-hours contact details for key stakeholders, and agreed criteria for actions (e.g. when something is serious enough to place on
the University’s front page), can save a lot of time.
Resources for Chapter 12 –
Continual improvement
There are no resources for this chapter
Policy contact: the role to contact if the reader has questions or comments. Not a person’s name.
History
Have a consistent approach to versioning.
Columns: Date; Version number; Author (name and role); Approved by (name and role); Comments (what changes were made and why, e.g. “Initial version”).
Review plan
When will this document be reviewed? Either every X years, or after event Y, e.g. changes to related requirements, or to related
documents.
Introduction
Answer these questions about the document:
Scope
• Who needs to know what’s in this document?
• What roles/physical areas/groups does it apply to?
• When?
Related documents
This allows you to find out what effect there will be on other documents if you change this one, and vice versa.
Related requirements
Contracts, laws and other internal or external requirements which have shaped this policy. If they change, you know to review this
policy; and if you change this policy, you know which other documents to check, so that you don’t accidentally breach a contract, etc.
Stakeholders
Optional: roles which need to be involved in revisions of this document.
Definitions
Key definitions can go here, or this section can reference a Glossary where all of the definitions live.
Policy statements
List of policy statements. No lengthy explanations, no sanctions, no detailed technical statements. This document should not need to
be changed every time a new version of Internet Explorer comes out.
Separate mandatory items from optional items to allow people to see at a glance what they have to do, as opposed to what they can
ignore.
Sanctions
What happens if this policy is not followed? Either describe how this is to be addressed, or reference a single sanctions document
(possibly in the HR area). If non-compliance is acceptable, this is not a policy but guidance: rename it accordingly.
Acknowledgements
The UCISA Information Security Management Toolkit project team consisted of a lead author, a group of five
contributing universities, and colleagues from Jisc Technologies.
UCISA would like to thank Jisc for their support to the project through their release of expert Jisc Technologies
staff to author and review content.
Lead author
Bridget Kenyon, Head of Information Security, University College London
Cardiff University
Gareth Jenkins, Business Change Manager, Information Security Framework Programme
Ruth Robertson, Deputy Director Governance and Compliance
Jisc Technologies
Andrew Cormack, Chief Security Advisor
James Davis, Information Security Manager
Loughborough University
Matthew Cook, Head of Infrastructure and Middleware
Niraj Kacha, Senior IT Services Specialist
Graeme Fowler, Senior IT Services Specialist
University of Oxford
Jonathan Ashton, Information Security Officer, IT Services
Professor Paul Jeffreys, Director of IT Risk Management, IT Services
Sarah Lawson, Head of IT and Information Security, National Perinatal Epidemiology Unit
University of York
Dr. Arthur Clune, Head of Systems, IT Services
Kay Mills-Hicks, Information Policy Consultant
The project was managed by Anna Mathews, UCISA Head of Policy and Projects, with oversight from Mark
Cockshoot, Chair of the UCISA Infrastructure Group and Alan Radley, Elected Member of the UCISA Executive
Committee. Peter Tinson, UCISA Executive Director, provided additional support.
UCISA is very grateful for the assistance received from colleagues across the sector. In particular, we would like
to thank the following individuals who provided information or acted as critical friends whilst the Toolkit was
being drafted:
Brian Gilmore, Director of IT Infrastructure, University of Edinburgh
William Hammonds, Policy Researcher, Universities UK
Quentin North, Assistant Director, IT Services, University of Brighton
Gary Nye, ICT Planning Manager, University of Bedfordshire
Christa Price, Senior Information Security Officer, University of Salford
Peter Rigby, Senior Policy Manager, Efficiency and Reform, Research Councils UK
Bruce Rodger, Head of Infrastructure Services, University of Strathclyde
Harris Salapasidis, IT Security Manager, University of the Arts London
Robbie Walker, Security Architect, University of Portsmouth
We would also like to thank members of the UCISA Networking Group, the UCISA Infrastructure Group and the
UCISA Executive Committee for their comments and suggestions.