Operating Systems Lecture Notes-13
Denial of Service
Denial of Service (DoS) attacks do not attempt to actually access or damage systems; they merely clog systems up so badly that they cannot be used for any useful work. Tight loops that repeatedly request system services are an obvious form of this attack.
DoS attacks can also involve social engineering, such as the Internet chain letters that say "send this immediately to 10 of your friends, and then go to a certain URL", which clogs up not only the Internet mail system but also the web server to which everyone is directed. (Note: Sending a "reply all" to such a message notifying everyone that it was just a hoax clogs up the Internet mail service just as effectively as if you had forwarded the thing.)
Security systems that lock accounts after a certain number of failed login attempts are themselves subject to DoS attacks: an attacker repeatedly attempts logins to all accounts with invalid passwords, strictly in order to lock up all the accounts.
Sometimes DoS is not the result of deliberate maliciousness. Consider, for example, a buggy program stuck in a loop that repeatedly requests a system service, or a web site that suddenly receives far more legitimate traffic than it was designed to handle.
Keys are designed so that they cannot be divined from any public information, and must
be guarded carefully. (Asymmetric encryption involves both a public and a private
key.)
Encryption
The basic idea of encryption is to encode a message so that only the desired recipient can
decode and read it. Encryption has been around since before the days of Caesar, and is an
entire field of study in itself. Only some of the more significant computer encryption
schemes will be covered here.
The basic process of encryption is shown in Figure 15.7, and will form the basis of most of our discussion on encryption. A message m (the plaintext) is encoded with an encryption function and key, c = E(Ke)(m), transmitted as cipher text c over the insecure medium, and decoded by the intended recipient with the matching decryption function and key, m = D(Kd)(c).
Figure - A secure communication over an insecure medium.
Symmetric Encryption
With symmetric encryption the same key is used for both encryption and decryption, and
must be safely guarded. There are a number of well-known symmetric encryption
algorithms that have been used for computer security:
o The advanced encryption standard, AES, encrypts in blocks of 128 bits using 10 to 14 rounds of transformations on a matrix formed from the block.
o The Twofish algorithm uses variable key lengths up to 256 bits and works on 128-bit blocks.
o RC5 can vary in key length, block size, and the number of
transformations, and runs on a wide variety of CPUs using only
basic computations.
o RC4 is a stream cipher, meaning it acts on a stream of data rather than on blocks. The key is used to seed a pseudo-random number generator, which generates a keystream that is combined with the data. RC4 is used in WEP, but has been found to be breakable in a reasonable amount of computer time.
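To make the stream-cipher idea concrete, here is a minimal RC4 sketch in Python. It is for study only; as noted above, RC4 is broken and should not be used to protect real data.

    def rc4(key: bytes, data: bytes) -> bytes:
        # Key-scheduling: permute the state array S based on the key.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Generation: produce one keystream byte per data byte and XOR
        # them together. Encryption and decryption are the same operation.
        i = j = 0
        out = bytearray()
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    ciphertext = rc4(b"secret key", b"attack at dawn")
    assert rc4(b"secret key", ciphertext) == b"attack at dawn"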
Asymmetric Encryption
With asymmetric encryption, the decryption key, Kd, is not the same as the encryption key, Ke, and more importantly cannot be derived from it. This means the encryption key can be made publicly available, and only the decryption key needs to be kept secret (or vice-versa, depending on the application).
One of the most widely used asymmetric encryption algorithms is RSA, named after its
developers - Rivest, Shamir, and Adleman.
RSA is based on two large prime numbers, p and q, (on the order of 512 bits each), and
their product N.
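To illustrate the arithmetic, here is a toy RSA computation in Python. The primes are deliberately tiny so the numbers stay readable; real keys use primes hundreds of digits long.

    # Toy RSA (requires Python 3.8+ for the modular inverse via pow).
    p, q = 61, 53
    N = p * q                  # the public modulus
    phi = (p - 1) * (q - 1)    # Euler's totient of N
    e = 17                     # public encryption exponent, coprime to phi
    d = pow(e, -1, phi)        # private decryption exponent: d*e = 1 mod phi

    m = 65                     # a message, encoded as a number < N
    c = pow(m, e, N)           # encrypt: c = m^e mod N
    assert pow(c, d, N) == m   # decrypt: m = c^d mod N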
Figure - Encryption and decryption using RSA asymmetric cryptography
Authentication
Authentication involves verifying the identity of the entity that transmitted a message. For example, if D(Kd)(c) produces a valid message, then we know the sender was in possession of E(Ke). This form of authentication can also be used to verify that a message has not been modified.
Authentication revolves around two functions, used for signatures (or signing) and verification:
o A signing function, S(Ks)(m), which produces an authenticator, A, from a message m.
o A verification function, V(Kv)(m, A), which returns true if and only if A is a valid authenticator for the message m.
Understanding authenticators begins with an understanding of hash functions:
o A hash function, H(m), generates a small, fixed-size block of data known as a message digest, or hash value, from any given input data.
o For authentication purposes, the hash function must be collision resistant on m. That is, it should not be reasonably possible to find an alternate message m' such that H(m') = H(m).
o Popular hash functions are MD5, which generates a 128-bit
message digest, and SHA-1, which generates a 160-bit digest.
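A quick demonstration of the digest idea, using Python's standard hashlib module (the messages are illustrative):

    import hashlib

    digest = hashlib.sha1(b"transfer $100 to Alice").hexdigest()
    tampered = hashlib.sha1(b"transfer $900 to Alice").hexdigest()
    print(digest)    # a 160-bit digest, rendered as 40 hex characters
    print(tampered)  # a one-character change yields a very different digest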
Message digests are useful for detecting (accidentally) changed messages, but are not
useful as authenticators, because if the hash function is known, then someone could
easily change the message and then generate a new hash value for the modified message.
Therefore authenticators take things one step further by encrypting the message digest.
A message-authentication code, MAC, uses symmetric encryption and decryption of the
message digest, which means that anyone capable of verifying an incoming message
could also generate a new message.
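A minimal MAC sketch using Python's standard hmac module (HMAC is one common MAC construction; the key and message here are illustrative):

    import hashlib, hmac

    key = b"shared secret"
    msg = b"status: all systems nominal"
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

    # The receiver recomputes the tag with the same shared key; a match
    # shows the message is intact and came from a holder of the key.
    fresh = hmac.new(key, msg, hashlib.sha256).hexdigest()
    assert hmac.compare_digest(tag, fresh)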
An asymmetric approach is the digital-signature algorithm, which produces authenticators called digital signatures. In this case Ks and Kv are separate, Kv is the public key, and it is not practical to determine S(Ks) from public information. In practice the sender of a message signs it (produces a digital signature using S(Ks)), and the receiver uses V(Kv) to verify that it did indeed come from a trusted source and that it has not been modified.
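A sketch of signing and verification, assuming the third-party cryptography package is available and using Ed25519 as the signature scheme (the message is illustrative):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    signing_key = Ed25519PrivateKey.generate()   # Ks, kept secret
    verify_key = signing_key.public_key()        # Kv, published freely

    message = b"patch-1.2.3.tar.gz contents"
    signature = signing_key.sign(message)        # S(Ks): sign the message
    verify_key.verify(signature, message)        # V(Kv): raises if invalid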
There are three good reasons for having separate algorithms for encryption of messages
and authentication of messages:
o Authentication algorithms typically require fewer calculations,
making verification a faster operation than encryption.
o Authenticators are almost always smaller than the messages, improving space efficiency.
o Sometimes we want authentication only, and not confidentiality,
such as when a vendor issues a new software patch.
Another use of authentication is non-repudiation, in which a person filling out an
electronic form cannot deny that they were the ones who did so.
Key Distribution
Key distribution with symmetric cryptography is a major problem, because all keys must be kept
secret, and they obviously can't be transmitted over unsecured channels. One option is to send
them out-of-band, say via paper or a confidential conversation.
Another problem with symmetric keys is that a separate key must be maintained and used
for each correspondent with whom one wishes to exchange confidential information.
Asymmetric encryption solves some of these problems: the public key can be freely transmitted through any channel, and the private key never needs to be transmitted anywhere. Recipients need to maintain only one private key for all incoming messages, though senders must maintain a separate public key for each recipient to whom they might wish to send a message. Fortunately, the public keys are not confidential, so this key-ring can be easily stored and managed.
Unfortunately there are still some security concerns regarding the public keys used in asymmetric encryption. Consider, for example, a man-in-the-middle attack in which an attacker intercepts key exchanges and substitutes phony public keys for the real ones.
One solution to the above problem involves digital certificates, which are public keys
that have been digitally signed by a trusted third party. But wait a minute - How do we
trust that third party, and how do we know they are really who they say they are?
Certain certificate authorities have their public keys included within web browsers and
other certificate consumers before they are distributed. These certificate authorities can
then vouch for other trusted entities and so on in a web of trust, as explained more fully
in section 15.4.3.
Implementation of Cryptography
An Example: SSL
SSL (Secure Sockets Layer) 3.0 was first developed by Netscape, and has now evolved
into the industry-standard TLS protocol. It is used by web browsers to communicate
securely with web servers, making it perhaps the most widely used security protocol on
the Internet today.
SSL is quite complex with many variations, only a simple case of which is shown here.
The heart of SSL is session keys, which are used once for symmetric encryption and then
discarded, requiring the generation of new keys for each new session. The big challenge
is how to safely create such keys while avoiding man-in-the-middle and replay attacks.
Prior to commencing the transaction, the server obtains a certificate from a certification authority, CA, containing:
o Attributes of the server, such as its unique and common names.
o The identity of the server's public encryption algorithm and its public key.
o The validity interval of the certificate.
o A digital signature on the above, issued with the CA's private key.
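In practice, the handshake and session-key generation are handled by a TLS library. A minimal Python client sketch (the host name is illustrative):

    import socket, ssl

    # The default context verifies the server's certificate chain against
    # the system's trusted CAs; the handshake then negotiates the
    # symmetric session keys automatically.
    context = ssl.create_default_context()
    with socket.create_connection(("example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.version())                  # e.g. 'TLSv1.3'
            print(tls.getpeercert()["subject"])   # the server's identity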
User Authentication
Protection dealt with making sure that only certain users were allowed to perform certain tasks, i.e. that a user's privileges were dependent on his or her identity. But how does one verify that identity to begin with?
Passwords
Passwords are the most common form of user authentication. If the user is in
possession of the correct password, then they are considered to have identified
themselves.
In theory separate passwords could be implemented for separate activities, such as
reading this file, writing that file, etc. In practice most systems use one password
to confirm user identity, and then authorization is based upon that identification.
This is a result of the classic trade-off between security and convenience.
Password Vulnerabilities
Passwords can be given away to friends or co-workers, destroying the integrity of the
entire user-identification system.
Most systems have configurable parameters controlling password generation and what
constitutes acceptable passwords.
o They may be user chosen or machine generated.
o They may have minimum and/or maximum length requirements.
o They may need to be changed with a given frequency. (In extreme cases
for every session.)
o A variable length history can prevent repeating passwords.
o More or less stringent checks can be made against password dictionaries.
Encrypted Passwords
Modern systems do not store passwords in clear-text form, and hence there is no mechanism to look up an existing password.
Rather, they are encrypted with a one-way function and stored in that form. When a user enters their password, it too is encrypted, and if the encrypted version matches the stored one, then user authentication passes.
The encryption scheme was once considered safe enough that the encrypted versions were stored in the publicly readable file "/etc/passwd".
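A minimal sketch of this check in Python, using a salted one-way function from the standard library (the iteration count and names are illustrative):

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        # Store only the salt and the derived hash, never the clear text.
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def check_password(password, salt, stored):
        # Re-derive the hash from the candidate password and compare.
        _, candidate = hash_password(password, salt)
        return hmac.compare_digest(candidate, stored)

    salt, stored = hash_password("hunter2")
    assert check_password("hunter2", salt, stored)
    assert not check_password("wrong", salt, stored)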
One-Time Passwords
One-time passwords resist shoulder surfing and other attacks where an observer is able to
capture a password typed in by a user.
o One approach is challenge-response: the system poses a question (the challenge), and the user must supply an answer computed from some shared secret.
A variation uses a map (e.g. a road map) as the key. Today's question might be "On what corner is SEO located?", and tomorrow's question might be "How far is it from Navy Pier to Wrigley Field?" Obviously "Taylor and Morgan" would not be accepted as a valid answer for the second question!
o Another option is to have some sort of electronic card with a series of constantly changing numbers, based on the current time. The user enters the current number on the card, which is only valid for a few seconds. Two-factor authentication additionally requires a traditional password along with the number on the card, so that the card cannot be used by others if it is ever lost or stolen. (A sketch of this scheme follows this list.)
o A third variation is a code book, or one-time pad. In this scheme a long
list of passwords is generated and each one is crossed off and cancelled as
it is used. Obviously it is important to keep the pad secure.
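The time-based card works along the lines of the TOTP scheme (RFC 6238). A minimal Python sketch, where the shared secret and 30-second interval are illustrative:

    import hashlib, hmac, struct, time

    def time_code(secret, interval=30):
        # Derive a short numeric code from the shared secret and the
        # current time window; card and server compute the same value.
        counter = struct.pack(">Q", int(time.time()) // interval)
        mac = hmac.new(secret, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return "%06d" % (value % 1_000_000)

    print(time_code(b"shared-secret"))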
Biometrics
Biometrics involve a physical characteristic of the user that is not easily forged or
duplicated and not likely to be identical between multiple users.
Security Policy
A security policy should be well thought out, agreed upon, and contained in a living document that everyone adheres to and that is updated as needed.
Examples of contents include how often port scans are run, password requirements, virus detectors, etc.
Vulnerability Assessment
Periodic vulnerability scans examine a system for weaknesses such as the following (one such check is sketched after this list):
o Open ports, found by port scanning.
o Bad passwords.
o Unexpected suid programs.
o Unauthorized programs in system directories.
o Incorrect permission bits.
o Program checksums / digital signatures which have changed.
o Unexpected or hidden network daemons.
o New entries in start-up scripts, shutdown scripts, cron tables, or other system scripts or configuration files.
o New unauthorized accounts.
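A minimal sketch of the suid check in Python (the starting directory is illustrative):

    import os, stat

    def find_suid(root="/usr/bin"):
        # Walk the tree and report executables with the setuid bit set,
        # which deserve periodic review.
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue            # unreadable or vanished file
                if mode & stat.S_ISUID:
                    print(path)

    find_suid()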
The government considers a system to be only as secure as its most far-reaching
component. Any system connected to the Internet is inherently less secure than one that is
in a sealed room with no external communications.
Some administrators advocate "security through obscurity", aiming to keep as much information about their systems hidden as possible, and not announcing any security concerns they come across. Others announce security concerns from the rooftops, on the theory that the hackers are going to find out anyway, and the only ones kept in the dark by obscurity are the honest administrators who need to get the word.
Intrusion Detection
Intrusion detection attempts to detect attacks, both successful and unsuccessful.
Different techniques vary along several axes:
o The time that detection occurs, either during the attack or after the fact.
o The types of information examined to detect the attack(s). Some attacks
can only be detected by analyzing multiple sources of information.
o The response to the attack, which may range from alerting an administrator, to automatically stopping the attack (e.g. killing an offending process), to tracing the attack back in order to identify the attacker.
Another approach is to divert the attacker to a honey pot, on
a honey net. The idea behind a honey pot is a computer running
normal services, but which no one uses to do any real work. Such a
system should not see any network traffic under normal conditions,
so any traffic going to or from such a system is by definition
suspicious. Honey pots are normally kept on a honey net protected
by a reverse firewall, which will let potential attackers in to the
honey pot, but will not allow any outgoing traffic. (So that if the
honey pot is compromised, the attacker cannot use it as a base of
operations for attacking other systems.) Honey pots are closely
watched, and any suspicious activity carefully logged and
investigated.
Intrusion Detection Systems, IDSs, raise the alarm when they detect an intrusion.
Intrusion Detection and Prevention Systems, IDPs, act as filtering routers, shutting down
suspicious traffic when it is detected.
There are two major approaches to detecting problems:
o Signature-Based Detection scans network packets, system files, etc., looking for recognizable characteristics of known attacks, such as text strings for messages or the binary code for "exec /bin/sh". (A toy signature scan is sketched after this list.) The problem with this approach is that it can only detect previously encountered problems for which the signature is known, requiring frequent updates of signature lists.
o Anomaly Detection looks for "unusual" patterns of traffic or operation,
such as unusually heavy load or an unusual number of logins late at night.
The benefit of this approach is that it can detect previously unknown attacks, so-called zero-day attacks.
One problem with this method is characterizing what is "normal"
for a given system. One approach is to benchmark the system, but
if the attacker is already present when the benchmarks are made,
then the "unusual" activity is recorded as "the norm."
Another problem is that not all changes in system performance are
the result of security attacks. If the system is bogged down and
really slow late on a Thursday night, does that mean that a hacker
has gotten in and is using the system to send out SPAM, or does it
simply mean that a CS 385 assignment is due on Friday? :-)
To be effective, anomaly detectors must have a very low false
alarm (false positive) rate, lest the warnings get ignored, as well as
a low false negative rate in which attacks are missed.
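A toy signature-based scan in Python. The signature table is illustrative; real scanners maintain large, frequently updated signature databases:

    # Map known byte patterns to the attacks they indicate. The entry
    # below comes from the example above; real tables hold thousands.
    SIGNATURES = {
        b"exec /bin/sh": "shell-exec payload",
    }

    def scan(data):
        return [name for sig, name in SIGNATURES.items() if sig in data]

    with open("suspect.bin", "rb") as f:
        hits = scan(f.read())
    print(hits if hits else "no known signatures found")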
Virus Protection
Modern anti-virus programs are basically signature-based detection systems, which in some cases also have the ability to disinfect affected files and return them to their original condition.
Both viruses and anti-virus programs are rapidly evolving. For example, viruses now commonly mutate every time they propagate, so anti-virus programs look for families of related signatures rather than specific ones.
Some antivirus programs look for anomalies, such as an executable program being
opened for writing (other than by a compiler.)
Avoiding bootleg, free, and shared software can help reduce the chance of catching a
virus, but even shrink-wrapped official software has on occasion been infected by
disgruntled factory workers.
Some virus detectors will run suspicious programs in a sandbox, an isolated and secure
area of the system which mimics the real system.
Rich Text Format, RTF, files cannot carry macros, and hence cannot carry Word macro
viruses.
Known safe programs (e.g. right after a fresh install or after a thorough examination) can be digitally signed, and periodically the files can be re-verified against the stored digital signatures. (The signatures themselves should be kept secure, such as on a read-only or off-line medium.)
Auditing, accounting, and logging records can also be used to detect anomalous behavior.
Some of the kinds of things that can be logged include authentication failures and
successes, logins, running of suid or sgid programs, network accesses, system calls, etc.
In extreme cases almost every keystroke and electron that moves can be logged for future
analysis. (Note that on the flip side, all this detailed logging can also be used to analyze
system performance. The down side is that the logging also affects system performance
(negatively!), and so a Heisenberg effect applies. )
"The Cuckoo's Egg" tells the story of how Cliff Stoll detected one of the early UNIX
break ins when he noticed anomalies in the accounting records on a computer system
being used by physics researchers.
Tripwire File system (New Sidebar)
The tripwire file system monitors files and directories for changes, on the assumption that
most intrusions eventually result in some sort of undesired or unexpected file changes.
The tw.config file indicates what directories are to be monitored, as well as which properties of each file are to be recorded. (E.g. one may choose to monitor permission and content changes, but not worry about read access times.)
When first run, the selected properties for all monitored files are recorded in a database.
Hash codes are used to monitor file contents for changes.
Subsequent runs report any changes to the recorded data, including hash code changes,
and any newly created or missing files in the monitored directories.
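A minimal tripwire-style sketch in Python: record a baseline of content hashes, then compare a later snapshot against it (the monitored directory and database name are illustrative):

    import hashlib, json, os

    def snapshot(root):
        # Record a SHA-256 hash of every file under root.
        db = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    db[path] = hashlib.sha256(f.read()).hexdigest()
        return db

    def compare(old, new):
        for path in sorted(old.keys() | new.keys()):
            if path not in new:
                print("missing:", path)
            elif path not in old:
                print("new file:", path)
            elif old[path] != new[path]:
                print("changed:", path)

    # First run:  json.dump(snapshot("/etc"), open("baseline.json", "w"))
    # Later runs: compare(json.load(open("baseline.json")), snapshot("/etc"))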
For full security it is necessary to also protect the tripwire system itself, most importantly the database of recorded file properties. The database could be saved on some external or read-only location, but that makes it harder to update it when legitimate changes are made.
It is difficult to monitor files that are supposed to change, such as log files. The best
tripwire can do in this case is to watch for anomalies, such as a log file that shrinks in
size.
Free and commercial versions are available at http://tripwire.org and http://tripwire.com.
Figure 15.10 - Domain separation via firewall.
Computer-Security Classifications
No computer system can be 100% secure, and attempts to make it so can quickly make it
unusable.
However one can establish a level of trust to which one feels "safe" using a given
computer system for particular security needs.
The U.S. Department of Defense’s "Trusted Computer System Evaluation Criteria"
defines four broad levels of trust, and sub-levels in some cases:
o Level D is the least trustworthy, and encompasses all systems that do not meet any of the more stringent criteria. MS-DOS and Windows 3.1 fall into level D, which has no user identification or authorization.