
Nine principles of security architecture

By Bruce Byfield, November 22, 2005

Security architecture is a new concept to many computer users. Users are aware of security threats such as viruses, worms, spyware, and other malware. They have heard of, and most use, anti-virus programs and firewalls. Many use intrusion detection. Architectural security, though, remains a mystery to most computer users.
The truth is, anti-virus software, firewalls, and intrusion detection are only the
surface of security. They are all reactive measures that attempt to respond to
active threats, rather than proactive measures that anticipate threats and try to
make them harmless. These applications have a major role to play, but are not
enough in themselves.

Behind reactive security measures is the much broader field of architectural security: how to set up a secure system to prevent security breaches, how to minimize breaches if they occur, and how to react to an intrusion and recover from it if it happens.

Architectural security is a subject that fills dozens of books. However, if you ignore the exact configuration techniques, you can break down architectural security into nine basic principles that are widely agreed upon by security architects. They apply whether you are programming, doing systems administration, or using desktop applications, and they apply whether you are managing a single home machine or a large network. They are not exact laws so much as ways of thinking about security.

If you learn these basic principles, you can not only make more informed
choices when installing and configuring software, but also learn more about
your operating system. As a side benefit, you’ll also understand the reasoning
behind claims that OpenBSD is more secure than GNU/Linux, or that both are
more secure than Windows.

Set a security policy for your system and know what’s on it

Architectural security starts with a strong security policy and detailed knowledge
of what is on your system. Without this first principle, you’re faced with making
decisions about security unsystematically and without an understanding of what
needs to be secured. This principle requires avoiding the default policies and
choices of installation programs, and drilling down into the preferences and
configuration files where you have more control over questions such as whether
ordinary users can mount CDs and where you can choose individual packages.
It may also mean checking the default packages in a preconfigured group, and
removing programs that conflict with security policy. On Debian, for example, if
you select the Desktop Environment profile for your installation, you can see
exactly what is installed by referring to the Packages.gz file on the install CD.
This may be time-consuming, but unless you make the effort, your goal of a
secure system is defeated at the start.
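
If you want to check periodically that what is installed still matches your policy, a short script can do the comparison for you. The sketch below is one possible approach on a Debian-based system, not a definitive tool: it assumes dpkg-query is available and that you maintain an approved-packages.txt file (a hypothetical name) listing what your policy allows.

#!/usr/bin/env python3
"""Minimal sketch: compare installed Debian packages against an approved list.

Assumes a Debian-based system with dpkg-query available; the file name
'approved-packages.txt' is a hypothetical local inventory kept as part of
your security policy.
"""
import subprocess

# Ask the package database for the name of every installed package.
result = subprocess.run(
    ["dpkg-query", "-W", "-f", "${Package}\n"],
    capture_output=True, text=True, check=True,
)
installed = set(result.stdout.split())

# Load the list of packages your security policy allows.
with open("approved-packages.txt") as f:
    approved = {line.strip() for line in f if line.strip()}

# Anything installed but not approved deserves a second look.
for pkg in sorted(installed - approved):
    print(f"not in policy: {pkg}")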

Actions should be verifiable

Verifiability is the ability to check that an action is carried out. It is a principle that looms large in programming, and it explains why many experienced sysadmins prefer command-line tools: running shell commands provides transparency, so you know exactly what is being done in a way that you often don't when clicking a box in an equivalent GUI tool. Take, for example, the recent problem with Sony's anti-copying measures: while Sony's installer told users it was installing one component, it actually installed additional, harmful software that users would be unaware of.

Reverse the principle, and it becomes a strong argument for the free and open
source development model. Because source code is available, users who know
the programming language can confirm that actions are the result of a
particular block of code, and that no unexpected actions are carried out at the
same time. Users who can’t read the code, of course, have to rely on an
expert’s opinion, but the point is that the potential for verification is available.
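
On an individual system, verifiability can be as modest as confirming that a file you are about to install is exactly what its publisher released. The minimal sketch below compares a download against a published SHA-256 checksum; the file name and the expected digest are placeholders you would replace with the real values from the project you are installing.

#!/usr/bin/env python3
"""Minimal sketch: verify a downloaded file against a published checksum.

The file name and the expected digest are placeholders; substitute the
values published by the project you are installing from.
"""
import hashlib

EXPECTED_SHA256 = "replace-with-the-published-checksum"

# Read the file in chunks so large downloads don't exhaust memory.
digest = hashlib.sha256()
with open("package.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        digest.update(chunk)

if digest.hexdigest() == EXPECTED_SHA256:
    print("checksum matches: the file is what the publisher released")
else:
    print("checksum mismatch: do not install this file")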

Always give the least privilege practical

The principle of least privilege suggests that all processes, users, and programs
should be given only the access to system resources that they need, and no
more. If a process does not need to run as root, then it shouldn’t. If particular
users don’t need to read or write to a particular partition, such as /boot, then
they shouldn’t have the permissions to do so. When users require greater
privileges or access, they should get it for as brief a time as possible.
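
For a long-running program, one common way to apply this principle is to perform the single step that genuinely needs root, then drop to an unprivileged account before doing anything else. The sketch below shows the general pattern; it assumes it is started as root and that an unprivileged 'nobody' account exists on the system.

#!/usr/bin/env python3
"""Minimal sketch: do one privileged step, then drop to an unprivileged user.

Must be started as root to work; the account name 'nobody' is an assumption,
so pick whatever unprivileged account fits your system.
"""
import os
import pwd
import socket

# Privileged step: binding a port below 1024 requires root.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 80))
sock.listen(5)

# Drop privileges immediately afterwards: supplementary groups first,
# then the group ID, then the user ID.
unprivileged = pwd.getpwnam("nobody")
os.setgroups([])
os.setgid(unprivileged.pw_gid)
os.setuid(unprivileged.pw_uid)

# From here on, the process can serve requests but can no longer act as root.
print("listening on port 80 as", pwd.getpwuid(os.getuid()).pw_name)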

Least privilege is one of the reasons why, ideally, users should be added to
groups only as necessary, rather than being automatically added to a number of
common ones. Similarly, while ordinary users can use the sudo command to run
programs as root, they shouldn’t all be able to use it for any command. Instead,
specific users should be limited to specific commands that they can run using
sudo.

Least privilege is also the design philosophy behind the multiple system
accounts found in operating systems like Solaris. Instead of having a single root
account that gives complete access to the system, multiple system accounts
divide root privileges, each with limited powers. When you use multiple system
accounts, a cracker may gain one password, but still be unable to control the
system fully.

Practice defense in depth

Average users often think they’re safe if their systems are behind a firewall.
Often, they’re right. The trouble is, if the only security precaution on the system
is a firewall, then breaching the firewall exposes the entire system. The principle
of defense in depth suggests that a more secure solution is to have a variety of
security features operating at a variety of different levels.

Instead of relying exclusively on a firewall, a system is more likely to remain secure if it is also set up to take full advantage of security features such as permissions, authentication, and whitelists and blacklists. Similarly, although the principle suggests that relying only on passwords is a poor strategy, adding passwords to the BIOS and the boot manager makes for greater security than simply relying on a single password at the desktop level.

In addition, security professionals utilize physical security measures to bolster system security. Server rooms, for example, should be restricted to authorized personnel only. Desktop systems should be physically secured so that they cannot be easily removed. Backups should be stored in a secure location, preferably off-site.

Audit the system: keep (and review) system logs

To keep a system secure, you need a record of changes made to the system,
either by its own utilities or by intrusion detection systems such as Snort and
Tripwire. While you can keep some records of changes by hand, on Unix-like
systems, logs of changes or errors are traditionally saved in /var/log by system
applications. This system is not ideal, since altering logs to hide an intrusion is
one of the first steps that an expert cracker takes.

However, since many attacks are by script kiddies with little understanding of the system, a change in logs is often the first sign that a system has been compromised. Some intrusion detection programs, such as Tripwire, can automate the checking of logs and other key files.
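
The following sketch illustrates, in miniature, the kind of check that Tripwire automates on a much larger scale: record a baseline of checksums for sensitive files, then flag any file whose checksum changes on a later run. The watched paths and the baseline file name are assumptions to adapt to your own policy, and reading some of the paths requires root.

#!/usr/bin/env python3
"""Minimal sketch of the kind of check Tripwire automates: hash key files,
save a baseline, and report anything that changed on the next run.

The watched paths and the baseline file name are assumptions; adjust them
to the files your own policy treats as sensitive.
"""
import hashlib
import json
import os

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config", "/etc/sudoers"]
BASELINE = "integrity-baseline.json"

def hash_file(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

current = {p: hash_file(p) for p in WATCHED if os.path.exists(p)}

if not os.path.exists(BASELINE):
    # First run: record the baseline and stop.
    with open(BASELINE, "w") as f:
        json.dump(current, f, indent=2)
    print("baseline recorded")
else:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"changed since baseline: {path}")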

Build to contain intrusions


The goal of containment is to minimize the consequences when a system is
cracked. A system built with this principle in mind is like a ship with bulkheads.
In the same way that a breach of the hull is quickly sealed off with bulkheads, a
system built with containment in mind tries to limit the access of a successful
cracker.

This principle is one of the main reasons for the division between root and
normal user accounts used for everyday activities. In most cases, if an exploit
succeeds against a non-privileged account, a cracker still doesn’t have access to
the system’s configuration files or utilities. The individual user may lose files,
but damage to the system as a whole is limited, unless security is reduced by a
step such as adding non-privileged accounts to all available groups. On a more
advanced level, containment is the principle behind using a chroot jail to run an
untested or dangerous program in isolation.
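
As a concrete illustration of the chroot approach, the hedged sketch below confines a child process to a jail directory so that, whatever the program does, it can only see files under that directory. The jail path is hypothetical, the script must run as root, and the jail must already contain the binaries and libraries the confined program needs; a bare chroot is a containment aid, not a complete sandbox.

#!/usr/bin/env python3
"""Minimal sketch: confine a child process to a chroot jail.

Must be run as root, and '/srv/jail' is a hypothetical directory already
populated with whatever the confined program needs (shell, libraries, and
so on).
"""
import os

JAIL = "/srv/jail"

pid = os.fork()
if pid == 0:
    # Child: enter the jail, then make sure the working directory is inside it.
    os.chroot(JAIL)
    os.chdir("/")
    # From here on, the child can only see files under /srv/jail.
    os.execv("/bin/sh", ["/bin/sh", "-c", "ls /"])
else:
    # Parent: wait for the confined child to finish.
    os.waitpid(pid, 0)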

By contrast, buffer overflows, which can occur because of the way languages such as C and C++ are designed, are an example of circumstances in which the principle cannot be observed. If an application is running as root, and an exploit takes advantage of a buffer overflow, then the exploit now has root privileges. That's one reason why patching such vulnerabilities is a priority for conscientious programmers, and why it's important to apply patches regularly.

A system is only as strong as its weakest link

This principle reinforces the need for defense in depth. The more defenses a system has, the less likely it is that a single weak point will leave it vulnerable. Since the weakest link is often users rather than the system itself, this principle may mean that you need to educate users about basic security practices, and check periodically that they are following them.

No matter how well-designed and implemented a security policy is, your efforts
are wasted if users tape their passwords to the bottom of their keyboards
or give their passwords to random interviewers on the subway.

Locking the barn door after the horse is gone is ineffective

Security measures taken after the fact leave you uncertain how secure a system
is. An antivirus program may remove a worm or trojan, but you can never be
sure whether the system is secure again.

From an architectural security viewpoint, the only way you can be reliably
certain that your system is secure after being successfully attacked is to
reinstall the BIOS, reformat the hard drive, and restore files from a backup
taken before the system was compromised. Since these steps are time-
consuming, and result in a system being off-line much longer than you'd like,
you are better off applying the other security principles so you won’t need to
restore the system in the first place.

Practice full disclosure

When a system is successfully attacked, or is known to be vulnerable, let users know as soon as possible. This principle is best known at the level of operating system vulnerabilities, where there is often a stark contrast between the approach taken by Microsoft and the FOSS community. While Microsoft often holds off on disclosing security issues until shortly before it releases a patch, or until there is an exploit in the wild, most FOSS projects disclose (and fix) vulnerabilities as soon as possible.

However, disclosure applies at the level of individual systems as well. It allows the users of a vulnerable system to take their own precautions, if only by logging off the system immediately, until the vulnerability has been addressed.

Conclusions

Understanding and exercising these principles is not a guarantee that your system will not be compromised. In practice, you'll need to balance them against the convenience of users — and that frequently means lowering the overall security of a system. However, without knowledge of these principles, you are more likely to set this balancing point without any thought — and, if you're like most people, in favor of convenience.

In the end, thinking about these principles can help shift the security odds in your favor, and help you recover more efficiently from attacks. And, if nothing else, they can save you from the false sense of security that comes from thinking that reactive measures are enough.
