Chapter 5 Introduction Network Security & Firewalls
Make sure that you have or create the directory you wish to log to, otherwise snort will
complain! When packets are logged, snort uses the address of the remote computer as its
logging directory.
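For example, a minimal logging setup might look like this (the log directory path is just an
illustration):
mkdir -p $HOME/log          # create the log directory first, or snort will complain
snort -dev -l $HOME/log     # log decoded packets under $HOME/log
Snort would then create a subdirectory under $HOME/log named for each remote address it
logs, such as $HOME/log/192.168.1.5.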
Identifying Your Network
Sometimes you want to log packets relative to your network. To log packets into directories
associated with your network, use the "-h" option with the network address and mask of your
home network.
snort -dev -i eth0 -l $HOME/log -h 192.168.1.0/24
Using Rules
Now for one of the most powerful parts of snort: SNORT RULES! Indeed, snort would be
limited if it didn't have a facility for rules. We'll cover rule syntax in a little bit, but for now let's
see how to specify which rules file to use. Snort comes with all sorts of built-in rules files in
/usr/local/share/snort/. The rules files are simply ASCII files with lines of rules snort uses to
determine what to log or alert on. You'll notice that they can be quite simple or complicated, but
they also cover some of the more sophisticated attacks and probes, and they're premade for you.
Snort rules can be specified with the simple "-c" option:
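For example (the rules file name below is illustrative; use whichever file from the directory
above suits your needs):
snort -dev -l $HOME/log -c /usr/local/share/snort/snort-lib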
Altogether, a snort command on the command line might look like this:
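For instance, combining the options covered so far (paths, interface, and network as in the
earlier examples, with the rules file name illustrative):
snort -dev -i eth0 -l $HOME/log -h 192.168.1.0/24 -c /usr/local/share/snort/snort-lib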
This is quite a long command, which is why it could be a good candidate for the old
/etc/rc.d/ script files.
Chapter 11
Introduction to Firewalls
A firewall is basically the first line of defense for your network. The basic purpose of a
firewall is to keep uninvited guests from browsing your network. A firewall can be a hardware
device or a software application and generally is placed at the perimeter of the network to
act as the gatekeeper for all incoming and outgoing traffic.
A firewall allows you to establish certain rules to determine what traffic should be allowed in
or out of your private network. Depending on the type of firewall implemented you could
restrict access to only certain IP addresses or domain names, or you can block certain types
of traffic by blocking the TCP/IP ports they use.
A firewall is not simply a router, host system, or collection of systems that provides security
to a network. Rather, a firewall is an approach to security; it helps implement a larger
security policy that defines the services and access to be permitted, and it is an
implementation of that policy in terms of a network configuration, one or more host systems
and routers, and other security measures such as advanced authentication in place of static
passwords. The main purpose of a firewall system is to control access to or from a protected
network (i.e., a site). It implements a network access policy by forcing connections to pass
through the firewall, where they can be examined and evaluated.
A firewall system can be a router, a personal computer, a host, or a collection of hosts, set
up specifically to shield a site or subnet from protocols and services that can be abused from
hosts outside the subnet. A firewall system is usually located at a higher-level gateway, such
as a site's connection to the Internet, however firewall systems can be located at lower-level
gateways to provide protection for some smaller collection of hosts or subnets.
The general reasoning behind firewall usage is that without a firewall, a subnet's systems
expose themselves to inherently insecure services such as NFS or NIS and to probes and
attacks from hosts elsewhere on the network. In a firewall-less environment, network
security relies totally on host security and all hosts must, in a sense, cooperate to achieve a
uniformly high level of security. The larger the subnet, the less manageable it is to maintain
all hosts at the same level of security. As mistakes and lapses in security become more
common, break-ins occur not as the result of complex attacks, but because of simple errors
in configuration and inadequate passwords.
A firewall approach provides numerous advantages to sites by helping to increase overall
host security. The following sections summarize the primary benefits of using a firewall.
A firewall can greatly improve network security and reduce risks to hosts on the subnet by
filtering inherently insecure services. As a result, the subnet network environment is exposed
to fewer risks, since only selected protocols will be able to pass through the firewall.
For example, a firewall could prohibit certain vulnerable services such as NFS from entering
or leaving a protected subnet. This provides the benefit of preventing the services from being
exploited by outside attackers, but at the same time permits the use of these services with
greatly reduced risk of exploitation. Services such as NIS or NFS that are particularly useful
on a local area network basis can thus be enjoyed and used to reduce the host management
burden.
Firewalls can also provide protection from routing-based attacks, such as source routing and
attempts to redirect routing paths to compromised sites via ICMP redirects. A firewall could
reject all source-routed packets and ICMP redirects and then inform administrators of the
incidents.
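On a Linux-based firewall, for instance, comparable protections can be switched on with
kernel settings; this is a sketch of one common approach (the sysctl interface is a modern
Linux detail, not part of the original discussion):
sysctl -w net.ipv4.conf.all.accept_source_route=0   # drop source-routed packets
sysctl -w net.ipv4.conf.all.accept_redirects=0      # ignore ICMP redirect messages
sysctl -w net.ipv4.conf.all.send_redirects=0        # do not emit ICMP redirects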
A firewall also provides the ability to control access to site systems. For example, some
hosts can be made reachable from outside networks, whereas others can be effectively
sealed off from unwanted access. A site could prevent outside access to its hosts except for
special cases such as mail servers or information servers.
This brings to the fore an access policy that firewalls are particularly adept at enforcing: do
not provide access to hosts or services that do not require access. Put differently, why
provide access to hosts and services that could be exploited by attackers when the access is
not used or required? If, for example, a user requires little or no network access to her
desktop workstation, then a firewall can enforce this policy.
Concentrated Security
A firewall can actually be less expensive for an organization in that all or most modified
software and additional security software could be located on the firewall systems as
opposed to being distributed on many hosts. In particular, one-time password systems and
other add-on authentication software could be located at the firewall as opposed to each
system that needed to be accessed from the Internet.
Other solutions to network security such as Kerberos involve modifications at each host
system. While Kerberos and other techniques should be considered for their advantages and
may be more appropriate than firewalls in certain situations, firewalls tend to be simpler to
implement in that only the firewall need run specialized software.
Enhanced Privacy
Privacy is of great concern to certain sites, since what would normally be considered
innocuous information might actually contain clues that would be useful to an attacker. Using
a firewall, some sites wish to block services such as finger and Domain Name Service. finger
displays information about users such as their last login time, whether they've read mail, and
other items. But, finger could leak information to attackers about how often a system is used,
whether the system has active users connected, and whether the system could be attacked
without drawing attention.
Firewalls can also be used to block DNS information about site systems, thus the names and
IP addresses of site systems would not be available to Internet hosts. Some sites feel that by
blocking this information, they are hiding information that would otherwise be useful to
attackers.
Logging and Statistics on Network Use
If all access to and from the Internet passes through a firewall, the firewall can log accesses
and provide valuable statistics about network usage. A firewall with appropriate alarms that
sound when suspicious activity occurs can also provide details on whether the firewall and
network are being probed or attacked.
It is important to collect network usage statistics and evidence of probing for a number of
reasons. Of primary importance is knowing whether the firewall is withstanding probes and
attacks, and determining whether the controls on the firewall are adequate. Network usage
statistics are also important as input into network requirements studies and risk analysis
activities.
Policy Enforcement
Lastly, but perhaps most importantly, a firewall provides the means for implementing and
enforcing a network access policy. In effect, a firewall provides access control to users and
services. Thus, a network access policy can be enforced by a firewall, whereas without a
firewall, such a policy depends entirely on the cooperation of users. A site may be able to
depend on its own users for their cooperation; however, it cannot and should not depend on
Internet users in general.
Firewall Disadvantages
Given these benefits of the firewall approach, there are also a number of disadvantages, and
there are a number of things that firewalls cannot protect against. A firewall is not by any
means a panacea for Internet security problems.
The most obvious disadvantage of a firewall is that it may likely block certain services that
users want, such as TELNET, FTP, X Windows, NFS, etc. However, these disadvantages are
not unique to firewalls; network access could be restricted at the host level as well,
depending on a site's security policy. A well-planned security policy that balances security
requirements with user needs can help greatly to alleviate problems with reduced access to
services.
Some sites may have a topology that does not lend itself to a firewall, or may use services
such as NFS in such a manner that using a firewall would require a major restructuring of
network use. For example, a site might depend on using NFS and NIS across major
gateways. In such a situation, the relative costs of adding a firewall would need to be
compared against the cost of the vulnerabilities associated with not using a firewall, i.e., a
risk analysis, and then a decision made on the outcome of the analysis. Other solutions such
as Kerberos may be more appropriate, however these solutions carry their own
disadvantages as well. [NIST94c] contains more information on Kerberos and other potential
solutions.
Secondly, firewalls do not protect against back doors into the site. For example, if
unrestricted modem access is still permitted into a site protected by a firewall, attackers
could effectively jump around the firewall. Modem speeds are now fast enough to make
running SLIP (Serial Line IP) and PPP (Point-to-Point Protocol) practical; a SLIP or PPP
connection inside a protected subnet is in essence another network connection and a
potential backdoor. Why have a firewall if unrestricted modem access is permitted?
Firewalls generally do not provide protection from insider threats. While a firewall may be
designed to prevent outsiders from obtaining sensitive data, the firewall does not prevent an
insider from copying the data onto a tape and taking it out of the facility. Thus, it is faulty to
assume that the existence of a firewall provides protection from insider attacks or attacks in
general that do not need to use the firewall. It is perhaps unwise to invest significant
resources in a firewall if other avenues for stealing data or attacking systems are neglected.
Other Issues
WWW, gopher - Newer information servers and clients such as those for World Wide
Web (WWW), gopher, WAIS, and others were not designed to work well with firewall
policies and, due to their newness, are generally considered risky. The potential
exists for data-driven attacks, in which data processed by the clients can contain
instructions to the clients; the instructions could tell the client to alter access controls
and important security-related files on the host.
MBONE - Multicast IP transmissions (MBONE) for video and voice are encapsulated
in other packets; firewalls generally forward the packets without examining the packet
contents. MBONE transmissions represent a potential threat if the packets were to
contain commands to alter security controls and permit intruders.
viruses - Firewalls do not protect against users downloading virus-infected personal
computer programs from Internet archives or transferring such programs in
attachments to e-mail. Because these programs can be encoded or compressed in
any number of ways, a firewall cannot scan such programs to search for virus
signatures with any degree of accuracy. The virus problem still exists and must be
handled with other policy and anti-viral controls.
throughput - Firewalls represent a potential bottleneck, since all connections must
pass through the firewall and, in some cases, be examined by the firewall. However,
this is generally not a problem today, as firewalls can pass data at T1 (1.5 Megabits/second)
rates and most Internet sites are at connection rates less than or equal to T1.
all eggs in single basket - A firewall system concentrates security in one spot as
opposed to distributing it among systems. A compromise of the firewall could be
disastrous to other less-protected systems on the subnet. This weakness can be
countered, however, with the argument that lapses and weaknesses in security are
more likely to be found as the number of systems in a subnet increases, thereby
multiplying the ways in which subnets can be exploited.
Despite these disadvantages, NIST strongly recommends that sites protect their resources
with firewalls and other security tools and techniques.
Most firewalls will permit traffic from the trusted zone to the untrusted zone, without any
explicit configuration. However, traffic from the untrusted zone to the trusted zone must be
explicitly permitted. Thus, any traffic that is not explicitly permitted from the untrusted to
trusted zone will be implicitly denied (by default on most firewall systems).
A firewall is not limited to only two zones, but can contain multiple ‘less trusted’ zones, often
referred to as Demilitarized Zones (DMZs).
To control the trust value of each zone, each firewall interface is assigned a security level,
which is often represented as a numerical value or even a color. For example, a Trusted Zone
might be assigned a security value of 100, a Less Trusted Zone a value of 75, and an
Untrusted Zone a value of 0.
As stated previously, traffic from a higher security to lower security zone is (generally)
allowed by default, while traffic from a lower security to higher security zone requires explicit
permission.
Firewall Components
The major components of a firewall system are:
network policy,
advanced authentication mechanisms,
packet filtering, and
application gateways.
Network Policy
There are two levels of network policy that directly influence the design, installation and use
of a firewall system. The higher-level policy is an issue-specific, network access policy that
defines those services that will be allowed or explicitly denied from the restricted network,
how these services will be used, and the conditions for exceptions to this policy. The lower-
level policy describes how the firewall will actually go about restricting the access and
filtering the services that were defined in the higher level policy. The following sections
describe these policies in brief.
Service Access Policy
The service access policy should focus on Internet-specific use issues as defined above,
and perhaps all outside network access (i.e., dial-in policy, and SLIP and PPP connections)
as well. This policy should be an extension of an overall organizational policy regarding the
protection of information resources in the organization. For a firewall to be successful, the
service access policy must be realistic and sound and should be drafted before
implementing a firewall. A realistic policy is one that provides a balance between protecting
the network from known risks, while still providing users access to network resources. If a
firewall system denies or restricts services, it usually requires the strength of the service
access policy to prevent the firewall's access controls from being modified on an ad hoc
basis. Only a management-backed, sound policy can provide this.
A firewall can implement a number of service access policies, however a typical policy may
be to allow no access to a site from the Internet, but allow access from the site to the
Internet. Another typical policy would be to allow some access from the Internet, but perhaps
only to selected systems such as information servers and e-mail servers. Firewalls often
implement service access policies that allow some user access from the Internet to selected
internal hosts, but this access would be granted only if necessary and only if it could be
combined with advanced authentication.
Firewall Design Policy
The firewall design policy is specific to the firewall. It defines the rules used to implement the
service access policy. One cannot design this policy in a vacuum isolated from
understanding issues such as firewall capabilities and limitations, and threats and
vulnerabilities associated with TCP/IP. Firewalls generally implement one of two basic
design policies:
permit all services except those that are explicitly denied, or
deny all services except those that are explicitly permitted.
A firewall that implements the first policy allows all services to pass into the site by default,
with the exception of those services that the service access policy has identified as
disallowed. A firewall that implements the second policy denies all services by default, but
then passes those services that have been identified as allowed. This second policy follows
the classic access model used in all areas of information security.
The first policy is less desirable, since it offers more avenues for getting around the firewall,
e.g., users could access new services currently not denied by the policy (or even addressed
by the policy) or run denied services at non-standard TCP/UDP ports that aren't denied by
the policy. Certain services such as X Windows, FTP, Archie, and RPC cannot be filtered
easily [Chap92], [Ches94], and are better accommodated by a firewall that implements the
first policy. The second policy is stronger and safer, but it is more difficult to implement and
may impact users more in that certain services such as those just mentioned may have to be
blocked or restricted more heavily.
The relationship between the high level service access policy and its lower level counterpart
is reflected in the discussion above. This relationship exists because the implementation of
the service access policy is so heavily dependent upon the capabilities and limitations of the
firewall system, as well as the inherent security problems associated with the wanted
Internet services. For example, wanted services defined in the service access policy may
have to be denied if the inherent security problems in these services cannot be effectively
controlled by the lower level policy and if the security of the network takes precedence over
other factors. On the other hand, an organization that is heavily dependent on these services
to meet its mission may have to accept higher risk and allow access to these services. This
relationship between the service access policy and its lower level counterpart allows for an
iterative process in defining both, thus producing the realistic and sound policy initially
described.
The service access policy is the most significant component of the four described here. The
other three components are used to implement and enforce the policy. (And as noted above,
the service access policy should be a reflection of a strong overall organization security
policy.) The effectiveness of the firewall system in protecting the network depends on the
type of firewall implementation used, the use of proper firewall procedures, and the service
access policy.
Advanced Authentication
Previous sections describe incidents on the Internet that have occurred in part due to the
weaknesses associated with traditional passwords. For years, users have been advised to
choose passwords that would be difficult to guess and to not reveal their passwords.
However, even if users follow this advice (and many do not), the fact that intruders can and
do monitor the Internet for passwords that are transmitted in the clear has rendered
traditional passwords obsolete.
Advanced authentication measures such as smartcards, authentication tokens, biometrics,
and software-based mechanisms are designed to counter the weaknesses of traditional
passwords. While the authentication techniques vary, they are similar in that the passwords
generated by advanced authentication devices cannot be reused by an attacker who has
monitored a connection. Given the inherent problems with passwords on the Internet, an
Internet-accessible firewall that does not use or does not contain the hooks to use advanced
authentication makes little sense.
Some of the more popular advanced authentication devices in use today are called one-time
password systems. A smartcard or authentication token, for example, generates a response
that the host system can use in place of a traditional password. Because the token or card
works in conjunction with software or hardware on the host, the generated response is
unique for every login. The result is a one-time password that, if monitored, cannot be
reused by an intruder to gain access to an account. [NIST94a] and [NIST91a] contain more
detail on advanced authentication devices and measures.
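As a concrete illustration, modern command-line utilities can play the role of the token
device; the sketch below assumes the open-source oathtool program (not mentioned in the
original) and an illustrative base32 secret shared with the host:
oathtool --totp -b JBSWY3DPEHPK3PXP   # illustrative secret; prints a time-based one-time password
Each invocation within a new time window yields a different password, so a captured value
is useless to an eavesdropper shortly afterwards.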
Since firewalls can centralize and control site access, the firewall is the logical place for the
advanced authentication software or hardware to be located. Although advanced
authentication measures could be used at each host, it is more practical and manageable to
centralize the measures at the firewall. Consider a site without a firewall using
advanced authentication permits unauthenticated application traffic such as TELNET or FTP
directly to site systems. If the hosts do not use advanced authentication, then intruders could
attempt to crack passwords or could monitor the network for login sessions that would
include the passwords. By contrast, at a site with a firewall using advanced
authentication, TELNET or FTP sessions originating from the Internet to site
systems must pass the advanced authentication before being permitted to the site systems.
The site systems may still require static passwords before permitting access, however these
passwords would be immune from exploitation, even if the passwords are monitored, as long
as the advanced authentication measures and other firewall components prevent intruders
from penetrating or bypassing the firewall.
Later sections contain more information on using advanced authentication measures with
firewalls. See [NIST94b] for more information on using advanced authentication measures
with hosts.
Packet Filtering
IP packet filtering is usually done using a packet filtering router designed for filtering packets
as they pass between the router's interfaces. A packet filtering router usually can filter IP
packets based on some or all of the following fields:
source IP address,
destination IP address,
TCP/UDP source port, and
TCP/UDP destination port.
Not all packet filtering routers currently filter the source TCP/UDP port, however more
vendors are starting to incorporate this capability. Some routers examine which of the
router's network interfaces a packet arrived at, and then use this as an additional filtering
criterion. Some UNIX hosts provide packet filtering capability, although most do not.
Filtering can be used in a variety of ways to block connections from or to specific hosts or
networks, and to block connections to specific ports. A site might wish to block connections
from certain addresses, such as from hosts or sites that it considers to be hostile or
untrustworthy. Alternatively, a site may wish to block connections from all addresses external
to the site (with certain exceptions, such as with SMTP for receiving e-mail).
Adding TCP or UDP port filtering to IP address filtering results in a great deal of flexibility.
Servers such as the TELNET daemon reside usually at specific ports, such as port 23 for
TELNET. If a firewall can block TCP or UDP connections to or from specific ports, then one
can implement policies that call for certain types of connections to be made to specific hosts,
but not to other hosts. For example, a site may wish to block all incoming connections to all
hosts except for several firewall-related systems. At those systems, the site may wish to
allow only specific services, such as SMTP for one system and TELNET or FTP connections
to another system. With filtering on TCP or UDP ports, this policy can be implemented in a
straightforward fashion by a packet filtering router or by a host with packet filtering
capability.
As an example of packet filtering, consider a policy to allow only certain connections to a
network of address 123.4.*.*. TELNET connections will be allowed to only one host,
123.4.5.6, which may be the site's TELNET application gateway, and SMTP connections will
be allowed to two hosts, 123.4.5.7 and 123.4.5.8, which may be the site's two electronic mail
gateways. NNTP (Network News Transfer Protocol) is allowed only from the site's NNTP
feed system, 129.6.48.254, and only to the site's NNTP server, 123.4.5.9, and NTP (Network
Time Protocol) is allowed to all hosts. All other services and packets are to be blocked. An
example of the ruleset would be as follows:

Rule  Type  Source Addr    Source Port  Dest Addr   Dest Port  Action
1     tcp   *              >1023        123.4.5.6   23         permit
2     tcp   *              >1023        123.4.5.7   25         permit
3     tcp   *              >1023        123.4.5.8   25         permit
4     tcp   129.6.48.254   >1023        123.4.5.9   119        permit
5     udp   *              *            123.4.*.*   123        permit
6     *     *              *            *           *          deny
The first rule allows TCP packets from any source address and port greater than 1023 on
the Internet to the destination address of 123.4.5.6 and port of 23 at the site. Port 23 is the
port associated with the TELNET server, and all TELNET clients should have unprivileged
source ports of 1024 or higher. The second and third rules work in a similar fashion, except
packets to destination addresses 123.4.5.7 and 123.4.5.8, and port 25 for SMTP, are
permitted. The fourth rule permits packets to the site's NNTP server, but only from source
address 129.6.48.254 to destination address 123.4.5.9 and port 119 (129.6.48.254 is the
only NNTP server that the site should receive news from, thus access to the site for NNTP is
restricted to only that system). The fifth rule permits NTP traffic, which uses UDP as
opposed to TCP, from any source to any destination address at the site. Finally, the sixth
rule denies all other packets - if this rule weren't present, the router may or may not deny all
subsequent packets. This is a very basic example of packet filtering. Actual rules permit
more complex filtering and greater flexibility.
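As a rough modern rendering only (iptables and the FORWARD chain are assumptions, not
part of the original example), the same ruleset might be written for a Linux filtering router as:
iptables -A FORWARD -p tcp --sport 1024:65535 -d 123.4.5.6 --dport 23 -j ACCEPT   # rule 1: TELNET gateway
iptables -A FORWARD -p tcp --sport 1024:65535 -d 123.4.5.7 --dport 25 -j ACCEPT   # rule 2: mail gateway
iptables -A FORWARD -p tcp --sport 1024:65535 -d 123.4.5.8 --dport 25 -j ACCEPT   # rule 3: mail gateway
iptables -A FORWARD -p tcp -s 129.6.48.254 -d 123.4.5.9 --dport 119 -j ACCEPT     # rule 4: NNTP feed only
iptables -A FORWARD -p udp -d 123.4.0.0/16 --dport 123 -j ACCEPT                  # rule 5: NTP to any site host
iptables -A FORWARD -j DROP                                                       # rule 6: deny everything else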
The decision to filter certain protocols and fields depends on the network access policy, i.e.,
which systems should have Internet access and the type of access to permit. The following
services are inherently vulnerable to abuse and are usually blocked at a firewall from
entering or leaving the site (a blocking sketch appears after the list):
tftp, port 69, trivial FTP, used for booting diskless workstations, terminal servers and
routers, can also be used to read any file on the system if set up incorrectly,
X Windows, OpenWindows, ports 6000+, port 2000, can leak information from X
window displays including all keystrokes,
RPC, port 111, Remote Procedure Call services including NIS and NFS, which can
be used to steal system information such as passwords and read and write to files,
and
rlogin, rsh, and rexec, ports 513, 514, and 512, services that if improperly
configured can permit unauthorized access to accounts and commands.
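A hedged iptables sketch of blocking these services at the perimeter (port numbers taken
from the list above; the X Windows range is approximated as displays 0 through 63):
for p in 69 111 512 513 514 2000; do
  iptables -A FORWARD -p tcp --dport $p -j DROP   # drop the TCP variant (rlogin, rsh, rexec, RPC)
  iptables -A FORWARD -p udp --dport $p -j DROP   # drop the UDP variant (tftp, RPC)
done
iptables -A FORWARD -p tcp --dport 6000:6063 -j DROP   # X Windows displays :0 through :63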
Other services, whether inherently dangerous or not, are usually filtered and possibly
restricted to only those systems that need them. These would include services such as
TELNET (port 23), FTP (ports 20 and 21), and NNTP (port 119), among others.
While some of these services such as TELNET or FTP are inherently risky, blocking access
to these services completely may be too drastic a policy for many sites. Not all systems,
though, generally require access to all services. For example, restricting TELNET or FTP
access from the Internet to only those systems that require the access can improve security
at no cost to user convenience. Services such as NNTP may seem to pose little threat, but
restricting these services to only those systems that need them helps to create a cleaner
network environment and reduces the likelihood of exploitation from yet-to-be-discovered
vulnerabilities and threats.
Packet filtering routers suffer from a number of weaknesses. Packet filtering
rules are complex to specify, and usually no testing facility exists for verifying the correctness
of the rules (other than by exhaustive testing by hand). Some routers do not provide any
logging capability, so that if a router's rules still let dangerous packets through, the packets
may not be detected until a break-in has occurred.
Often times, exceptions to rules need to be made to allow certain types of access that
normally would be blocked. But, exceptions to packet filtering rules sometimes can make the
filtering rules so complex as to be unmanageable. For example, it is relatively straightforward
to specify a rule to block all inbound connections to port 23 (the TELNET server). If
exceptions are made, i.e., if certain site systems need to accept TELNET connections
directly, then a rule for each system must be added. Sometimes the addition of certain rules
may complicate the entire filtering scheme. As noted previously, testing a complex set of
rules for correctness may be so difficult as to be impractical.
Some packet filtering routers do not filter on the TCP/UDP source port, which can make the
filtering ruleset more complex and can open up ``holes'' in the filtering scheme. One such
problem arises at sites that wish to allow inbound and outbound SMTP connections. As
described earlier, TCP connections include a source and destination port. In the case of a
system initiating an SMTP connection to a server, the source port would be a randomly
chosen port at or above 1024 and the destination port would be 25, the port that the SMTP
server ``listens'' at. The server would return packets with source port of 25 and destination
port equal to the randomly-chosen port at the client. If a site permits both inbound and
outbound SMTP connections, the router must allow destination ports and source ports >
1023 in both directions. If the router can filter on source port, it can block all packets coming
into the site that have a destination port > 1023 and a source port other than 25. Without the
ability to filter on source port, the router must permit connections that use source and
destination ports > 1023. Users could conceivably run servers at ports > 1023 and thus get
``around'' the filtering policy (i.e., a site system's telnet server that normally listens at port 23
could be told to listen at port 9876 instead; users on the Internet could then telnet to this
server even if the router blocks destination port 23).
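To make the source-port point concrete, a router that can filter on source port might express
the SMTP policy roughly as follows (an iptables-style sketch, not from the original):
iptables -A FORWARD -p tcp --sport 1024:65535 --dport 25 -j ACCEPT   # clients to our SMTP server
iptables -A FORWARD -p tcp --sport 25 --dport 1024:65535 -j ACCEPT   # replies from remote SMTP servers
iptables -A FORWARD -p tcp --dport 1024:65535 -j DROP                # nothing else may reach high ports
Without the --sport tests, the last two rules collapse into permitting any connection between
high ports, which is exactly the hole described above.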
Another problem is that a number of RPC (Remote Procedure Call) services are very difficult
to filter effectively because the associated servers listen at ports that are assigned randomly
at system startup. A service known as portmapper maps initial calls to RPC services to the
assigned service numbers, but there is no such equivalent for a packet filtering router. Since
the router cannot be told which ports the services reside at, it isn't possible to completely
block these services unless one blocks all UDP packets (RPC services mostly use
UDP). Blocking all UDP would block potentially necessary services such as DNS. Thus,
blocking RPC results in a dilemma.
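The problem is easy to observe on a live host: the standard rpcinfo utility asks the
portmapper which ports the RPC services currently occupy, and the answers change from
boot to boot (the host name below is illustrative):
rpcinfo -p nfs-server.example.org   # lists RPC program numbers and their randomly assigned ports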
Packet filtering routers with more than two interfaces sometimes do not have the capability
to filter packets according to which interface the packets arrived at and which interface the
packet is bound for. Filtering inbound and outbound packets simplifies the packet filtering
rules and permits the router to more easily determine whether an IP address is valid or being
spoofed. Routers without this capability offer more impediments to implementing filtering
strategies.
Related to this, packet filtering routers can implement both of the design policies discussed
earlier. A ruleset that is less flexible, i.e., one that does not filter on source port or on inbound
and outbound interfaces, reduces the ability of the router to implement the second and more
stringent policy, deny all services except those expressly permitted, without having to curtail
the types of services permitted through the router. For example, problematic services such
as those that are RPC-based become even more difficult to filter with a less-flexible ruleset;
no filtering on source port forces one to permit connections between ports > 1023. With a
less-flexible ruleset, the router is less able to express a stringent policy, and the first policy,
permit all services except those expressly denied, is usually followed.
Readers are advised to consult [Chap92], which provides a concise overview of packet filtering and
associated problems. While packet filtering is a vital and important tool, it is very important to
understand the problems and how they can be addressed.
Application Gateways
To counter some of the weaknesses associated with packet filtering routers, firewalls need
to use software applications to forward and filter connections for services such as TELNET
and FTP. Such an application is referred to as a proxy service, while the host running the
proxy service is referred to as an application gateway. Application gateways and packet
filtering routers can be combined to provide higher levels of security and flexibility than if
either were used alone.
As an example, consider a site that blocks all incoming TELNET and FTP connections using
a packet filtering router. The router allows TELNET and FTP packets to go to one host only,
the TELNET/FTP application gateway. A user who wishes to connect inbound to a site
system would have to connect first to the application gateway, and then to the destination
host, as follows:
1. a user first telnets to the application gateway and enters the name of an internal host,
2. the gateway checks the user's source IP address and accepts or rejects it according
to any access criteria in place,
3. the user may need to authenticate herself (possibly using a one-time password
device),
4. the proxy service creates a TELNET connection between the gateway and the
internal host,
5. the proxy service then passes bytes between the two connections, and
6. the application gateway logs the connection.
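An inbound session might look roughly like this (host names, prompts, and the
authentication dialog are all illustrative, not from the original; the gateway address reuses
the TELNET gateway from the earlier filtering example):
$ telnet gateway.example.org
Trying 123.4.5.6...
Connected to gateway.example.org.
Username: alice
One-time password: 839201
gateway> connect host1.internal.example.org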
This example points out several benefits to using proxy services. First, proxy services allow
only those services through for which there is a proxy. In other words, if an application
gateway contains proxies for FTP and TELNET, then only FTP and TELNET may be allowed
into the protected subnet, and all other services are completely blocked. For some sites, this
degree of security is important, as it guarantees that only those services that are deemed
``trustworthy'' are allowed through the firewall. It also prevents other untrusted services from
being implemented behind the backs of the firewall administrators.
Another benefit to using proxy services is that the protocol can be filtered. Some firewalls, for
example, can filter FTP connections and deny use of the FTP put command, which is useful
if one wants to guarantee that users cannot write to, say, an anonymous FTP server.
Application gateways have a number of general advantages over the default mode of
permitting application traffic directly to internal hosts. These include:
information hiding, in which the names of internal systems need not necessarily be
made known via DNS to outside systems, since the application gateway may be the
only host whose name must be made known to outside systems,
robust authentication and logging, in which the application traffic can be pre-
authenticated before it reaches internal hosts and can be logged more effectively
than if logged with standard host logging,
cost-effectiveness, because third-party software or hardware for authentication or
logging need be located only at the application gateway, and
less-complex filtering rules, in which the rules at the packet filtering router will be
less complex than they would if the router needed to filter application traffic and direct
it to a number of specific systems. The router need only allow application traffic
destined for the application gateway and reject the rest.
In addition to TELNET, application gateways are used generally for FTP and e-mail, as well
as for X Windows and some other services. Some FTP application gateways include the
capability to deny put and get commands to specific hosts. For example, an outside user who
has established an FTP session (via the FTP application gateway) to an internal system
such as an anonymous FTP server might try to upload files to the server. The application
gateway can filter the FTP protocol and deny all puts to the anonymous FTP server; this
would ensure that nothing can be uploaded to the server and would provide a higher degree
of assurance than relying only on file permissions at the anonymous FTP server to be set
correctly.
For inbound e-mail, the application gateway commonly serves as the site's mail hub: the site
advertises a DNS MX (mail exchanger) record that directs all mail for the site to the gateway,
for example (domain names here are illustrative):
site.example.org.  IN  MX  10  emailhost.example.org.
where emailhost is the name of the e-mail gateway. The gateway would accept mail from
outside users and then forward mail along to other internal systems as necessary. Users
sending e-mail from internal systems could send it directly from their hosts, or in the case
where internal system names are not known outside the protected subnet, the mail would be
sent to the application gateway, which could then forward the mail to the destination host.
Some e-mail gateways use a more secure version of the sendmail program to accept e-mail.
Circuit-Level Gateways
Another firewall component, sometimes included under the category of application gateway, is
the circuit-level gateway. A circuit-level gateway relays TCP connections but does no extra
processing or filtering of the protocol.
protocol. For example, the TELNET application gateway example provided here would be an
example of a circuit-level gateway, since once the connection between the source and
destination is established, the firewall simply passes bytes between the systems. Another
example of a circuit-level gateway would be for NNTP, in which the NNTP server would
connect to the firewall, and then internal systems' NNTP clients would connect to the
firewall. The firewall would, again, simply pass bytes.
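In spirit, a circuit-level gateway is just a byte pump. The one-liner below uses the common
socat utility (an assumption; no specific tool is named in the original) to relay NNTP
connections arriving at the firewall to an internal news server:
socat TCP-LISTEN:119,fork,reuseaddr TCP:news.internal.example.org:119   # internal host name is illustrative
The relay neither inspects nor alters the NNTP commands; it simply passes bytes between
the two connections, exactly as described above.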
Putting the Pieces Together: Firewall Examples
Now that the basic components of firewalls have been examined, some examples of
different firewall configurations are provided to give a more concrete understanding of
firewall implementation. The firewall examples shown here are:
the packet filtering firewall,
the dual-homed gateway firewall,
the screened host firewall, and
the screened subnet firewall.
Additionally, a section is provided that discusses methods for integrating dial-in modem
access with firewalls. The examples are based loosely on the published firewall literature,
which provides concise but detailed guidance on firewall definition and design. In the
examples, assumptions about
policy are kept to a minimum, but policy issues that affect the firewall design are pointed out
where appropriate. Readers should note that there are many other types of firewalls that are
not illustrated here; their absence does not indicate that they are less secure, only that it is
impractical to illustrate every potential design. The examples shown here were chosen
primarily because they are covered by other literature in more detail and thus serve well as a
basis for more study.
Packet Filtering Firewall
The packet filtering firewall is perhaps the most common and easiest to employ for small,
uncomplicated sites. However, it suffers from a number of disadvantages and is less
desirable as a firewall than the other example firewalls discussed in this chapter. Basically,
one installs a packet filtering router at the Internet (or any subnet) gateway and then
configures the packet filtering rules in the router to block or filter protocols and addresses.
The site systems usually have direct access to the Internet while all or most access to site
systems from the Internet is blocked. However, the router could allow selective access to
systems and services, depending on the policy. Usually, inherently-dangerous services such
as NIS, NFS, and X Windows are blocked.
A packet filtering firewall suffers from the same disadvantages as a packet filtering router;
however, they can become magnified as the security needs of a protected site become
more complex and stringent. These would include the following:
there is little or no logging capability, thus an administrator may not easily determine
whether the router has been compromised or is under attack,
packet filtering rules are often difficult to test thoroughly, which may leave a site open
to untested vulnerabilities,
if complex filtering rules are required, the filtering rules may become unmanageable,
and
each host directly accessible from the Internet will require its own copy of advanced
authentication measures.
A packet filtering router can implement either of the design policies discussed earlier.
However, if the router does not filter on source port or filter on inbound as well as outbound
packets, it may be more difficult to implement the second policy, i.e., deny everything unless
specifically permitted. If the goal is to implement the second policy, a router that provides the
most flexibility in the filtering strategy is desirable.
Dual-homed Gateway Firewall
The dual-homed gateway is a better alternative to packet filtering router firewalls. It consists
of a host system with two network interfaces, and with the host's IP forwarding capability
disabled (i.e., the default condition is that the host can no longer route packets between the
two connected networks). In addition, a packet filtering router can be placed at the Internet
connection to provide additional protection. This would create an inner, screened subnet that
could be used for locating specialized systems such as information servers and modem
pools.
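On a modern UNIX-like host, disabling the forwarding capability just described is a one-line
kernel setting (a present-day sketch; systems of the era typically required rebuilding the
kernel instead):
sysctl -w net.ipv4.ip_forward=0   # host no longer routes packets between its two interfaces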
Unlike the packet filtering firewall, the dual-homed gateway is a complete block to IP traffic
between the Internet and the protected site. Services and access are provided by proxy servers
on the gateway.
Figure: Dual-homed Gateway Firewall with Router.
This type of firewall implements the second design policy, i.e., deny all services unless they
are specifically permitted, since no services pass except those for which proxies exist. The
ability of the host to accept source-routed packets would be disabled, so that no other
packets could be passed by the host to the protected subnet. It can be used to achieve a
high degree of privacy since routes to the protected subnet need to be known only to the
firewall and not to Internet systems (because Internet systems cannot route packets directly
to the protected systems). The names and IP addresses of site systems would be hidden
from Internet systems, because the firewall would not pass DNS information.
A simple setup for a dual-homed gateway would be to provide proxy services for TELNET
and FTP, and centralized e-mail service in which the firewall would accept all site mail and
then forward it to site systems. Because it uses a host system, the firewall can house
software to require users to use authentication tokens or other advanced authentication
measures. The firewall can also log access and log attempts or probes to the system that
might indicate intruder activity.
The dual-homed gateway firewall, as well as the screened subnet firewall mentioned later in
this chapter, provides the ability to segregate traffic concerned with an information server
from other traffic to and from the site. An information server could be located on the subnet
between the gateway and the router, as shown in the figure above. Assuming that the gateway
provides the appropriate proxy services for the information server (e.g., ftp, gopher, or http),
the router can prevent direct Internet access to the firewall and force access to go through
the firewall. If direct access is permitted to the server (which is the less secure alternative),
then the server's name and IP address can be advertised by DNS. Locating the information
server there also adds to the security of the site, as any intruder penetration of the
information server would still be prevented from reaching site systems by the dual-homed
gateway.
The inflexibility of the dual-homed gateway could be a disadvantage to some sites. Since all
services are blocked except those for which proxies exist, access to other services cannot
be opened up; systems that require the access would need to be placed on the Internet side
of the gateway. However, a router could be used, as shown in the figure above, to create a subnet
between the gateway and the router, and the systems that require extra services could be
located there (this is discussed more below with screened subnet firewalls).
Another important consideration is that the host system used for the firewall must itself be
highly secure, as the use of any vulnerable services or techniques on the host could
lead to break-ins. If the firewall is compromised, an intruder could potentially subvert the
firewall and perform some activity such as to re-enable IP routing.
Screened Host Firewall
The screened host firewall is a more flexible firewall than the dual-homed gateway firewall,
however the flexibility is achieved with some cost to security. The screened host firewall is
often appropriate for sites that need more flexibility than that provided by the dual-homed
gateway firewall.
The screened host firewall combines a packet-filtering router with an application gateway
located on the protected subnet side of the router. The application gateway needs only one
network interface. The application gateway's proxy services would pass TELNET, FTP, and
other services for which proxies exist, to site systems. The router filters or screens inherently
dangerous protocols from reaching the application gateway and site systems. It rejects (or
accepts) application traffic according to the following rules:
application traffic from Internet sites to the application gateway gets routed,
all other traffic from Internet sites gets rejected, and
the router rejects any application traffic originating from the inside unless it came
from the application gateway.
Figure: Screened Host Firewall.
Unlike the dual-homed gateway firewall, the application gateway needs only one network
interface and does not require a separate subnet between the application gateway and the
router. This permits the firewall to be made more flexible but perhaps less secure by
permitting the router to pass certain trusted services ``around'' the application gateway and
directly to site systems. The trusted services might be those for which proxy services don't
exist, and might be trusted in the sense that the risk of using the services has been
considered and found acceptable. For example, less-risky services such as NTP could be
permitted to pass through the router to site systems. If the site systems require DNS access
to Internet systems, DNS could be permitted to site systems. In this configuration, the
firewall could implement a mixture of the two design policies, the proportions of which
depend on how many and what types of services are routed directly to site systems.
The additional flexibility of the screened host firewall is cause for two concerns. First, there
are now two systems, the router and the application gateway, that need to be configured
carefully. As noted before, packet filtering router rules can be complex to configure, difficult
to test, and prone to mistakes that lead to holes through the router. However, since the
router needs to limit application traffic only to the application gateway, the ruleset may not be
as complex as for a typical site using a packet filtering firewall (which may restrict application
traffic to multiple systems).
The second disadvantage is that the flexibility opens up the possibility that the policy can be
violated (as with the packet filtering firewall). This problem does not arise with the dual-
homed gateway firewall, since it is technically impossible to pass traffic through the dual-
homed gateway unless there is a corresponding proxy service. Again, a strong policy is
essential.
Screened Subnet Firewall
The screened subnet firewall is a variation of the dual-homed gateway and screened host
firewalls. It can be used to locate each component of the firewall on a separate system,
thereby achieving greater throughput and flexibility, although at some cost to simplicity. But,
each component system of the firewall needs to implement only a specific task, making the
systems less complex to configure.
In the figure below, two routers are used to create an inner, screened subnet. This subnet (sometimes
referred to in other literature as the ``DMZ'') houses the application gateway, however it
could also house information servers, modem pools, and other systems that require
carefully-controlled access. The router shown as the connection point to the Internet would
route traffic according to the following rules:
application traffic from the application gateway to Internet systems gets routed,
e-mail traffic from the e-mail server to Internet sites gets routed,
application traffic from Internet sites to the application gateway gets routed,
e-mail traffic from Internet sites to the e-mail server gets routed,
ftp, gopher, etc., traffic from Internet sites to the information server gets routed, and
all other traffic gets rejected.
The outer router restricts Internet access to specific systems on the screened subnet, and
blocks all other traffic to the Internet originating from systems that should not be originating
connections (such as the modem pool, the information server, and site systems). The router
would be used as well to block packets such as NFS, NIS, or any other vulnerable protocols
that do not need to pass to or from hosts on the screened subnet.
The inner router passes traffic to and from systems on the screened subnet according to the
following rules:
application traffic from the application gateway to site systems gets routed,
e-mail traffic from the e-mail server to site systems gets routed,
application traffic to the application gateway from site systems gets routed,
e-mail traffic from site systems to the e-mail server gets routed,
ftp, gopher, etc., traffic from site systems to the information server gets routed, and
all other traffic gets rejected.
Figure: Screened Subnet Firewall with Additional Systems.
Thus, no site system is directly reachable from the Internet and vice versa, as with the dual-
homed gateway firewall. A big difference, though, is that the routers are used to direct traffic
to specific systems, thereby eliminating the need for the application gateway to be dual-
homed. Greater throughput can be achieved, then, if a router is used as the gateway to the
protected subnet. Consequently, the screened subnet firewall may be more appropriate for
sites with large amounts of traffic or sites that need very high-speed traffic.
The two routers provide redundancy in that an attacker would have to subvert both routers to
reach site systems directly. The application gateway, e-mail server, and information server
could be set up such that they would be the only systems ``known'' from the Internet; no
other system name need be known or used in a DNS database that would be accessible to
outside systems. The application gateway can house advanced authentication software to
authenticate all inbound connections. It is, obviously, more involved to configure; however,
the use of separate systems for application gateways and packet filters keeps the
configuration simpler and more manageable.
The screened subnet firewall, like the screened host firewall, can be made more flexible by
permitting certain ``trusted'' services to pass between the Internet and the site systems.
However, this flexibility may open the door to exceptions to the policy, thus weakening the
effect of the firewall. In many ways, the dual-homed gateway firewall is more desirable
because the policy cannot be weakened (because the dual-homed gateway cannot pass
services for which there is no proxy). However, where throughput and flexibility are
important, the screened subnet firewall may be preferable.
As an alternative to passing services directly between the Internet and site systems, one
could locate the systems that need these services directly on the screened subnet. For
example, a site that does not permit X Windows or NFS traffic between Internet and site
systems, but has systems that need such access, could locate those systems on the
screened subnet. The systems could still maintain access to site systems by connecting to
the application gateway and reconfiguring the inner router as necessary. This is not a perfect
solution, but an option for sites that require a high degree of security.
There are two disadvantages to the screened subnet firewall. First, the firewall can be made
to pass ``trusted'' services around the application gateway(s), thereby subverting the policy.
This is true also with the screened host firewall, however the screened subnet firewall
provides a location to house systems that need direct access to those services. With the
screened host firewall, the ``trusted'' services that get passed around the application
gateway end up being in contact with site systems. The second disadvantage is that more
emphasis is placed on the routers for providing security. As noted, packet filtering routers
are sometimes quite complex to configure and mistakes could open the entire site to security
holes.
Integrating Modem Pools with Firewalls
Many sites permit dial-in access to modems located at various points throughout the site. As
discussed earlier, this is a potential backdoor and could negate all the protection provided
by the firewall. A much better method for handling modems is to concentrate them into a
modem pool, and then secure connections from that pool.
The modem pool likely would consist of modems connected to a terminal server, which is a
specialized computer designed for connecting modems to a network. A dial-in user connects
to the terminal server, and then connects (e.g., telnets) from there to other host systems.
Some terminal servers provide security features that can restrict connections to specific
systems, or require users to authenticate using an authentication token. Alternatively, the
terminal server can be a host system with modems connected to it.
Consider a modem pool located on the Internet side of the screened host firewall. Since
the connections from modems need to be treated with the same suspicion as connections
from the Internet, locating the modem pool on the outside of the firewall forces the modem
connections to pass through the firewall.
A packet filtering router could be used to prevent inside systems from connecting directly to the
modem pool.
A disadvantage to this, though, is that the modem pool is connected directly to the Internet
and thus more exposed to attack. If an intruder managed to penetrate the modem pool, the
intruder might use it as a basis for connecting to and attacking other Internet systems. Thus,
a terminal server with security features to reject dial-in connections to any system but the
application gateway should be used.
Figure: Modem Pool Placement with Screened Subnet and Dual-Homed Firewalls.
The dual-homed gateway and screened subnet firewalls provide a more secure method for
handling modem pools. In the figure above, the terminal server is located on the inner, screened
subnet, where access to and from the modem pool can be carefully controlled by the routers
and application gateways. The router on the Internet side protects the modem pool from any
direct Internet access except from the application gateway.
With the dual-homed gateway and screened subnet firewalls, the router connected to the
Internet would prevent routing between Internet systems and the modem pool. With the
screened subnet firewall, the router connected to the site would prevent routing between site
systems and the modem pool; with the dual-homed gateway firewall, the application gateway
would prevent the routing. Users dialing into the modem pool could connect to site systems
or the Internet only by connecting to the application gateway, which would use advanced
authentication measures.
If a site uses any of these measures to protect dial-in access, it must rigidly enforce a policy
that prevents any users from connecting modems elsewhere on the protected subnet. Even
if the modems contain security features, this adds more complexity to the firewall protection
scheme and adds another ``weak link'' to the chain.
Firewall Policy
Policy was discussed earlier in terms of a service access policy and a firewall design policy. This
section discusses these policies in relationship to overall site policy, and offers guidance on
how to identify needs, risks, and then policies.
Policy decisions regarding the use of firewall technology should be made in conjunction with
the policy decisions needed to secure the whole site. This includes decisions concerning
host systems security, dial-in access, off-site Internet access, protection of information off-
site, data communications security and others. A stand-alone policy concerning only the
firewall is not effective; it needs to be incorporated into a strong site security policy.
A firewall is a direct implementation of the network service access and design policies, as
discussed earlier. There are a number of service access policies that may be
implemented, such as no inbound access and full outbound access or restricted inbound
access and restricted outbound access. The firewall design policy determines to a large
degree the service access policy: the more robust the firewall design policy, the more
stringent the service access policy. Thus, the firewall design policy needs to be decided
upon first.
As explained earlier, the firewall design policy is generally to deny all services except
those that are explicitly permitted or to permit all services except those that are explicitly
denied. The former is more secure and is therefore preferred, but it is also more stringent
and causes fewer services to be permitted by the service access policy.
Chapter 3 provided several firewall examples, and showed that certain firewalls can
implement either design policy whereas one, the dual-homed gateway, is inherently a ``deny
all'' firewall. However, the examples also showed that systems needing certain services that
shouldn't be passed through the firewalls could be located on screened subnets separate
from other site systems. The point here is that depending on security and flexibility
requirements, certain types of firewalls are more appropriate than others. This shows also
the importance of choosing a policy first before implementing the firewall; doing the opposite
could result in a clumsy fit.
To arrive at a firewall design policy and then ultimately a firewall system that implements the
policy, NIST recommends that the firewall design policy start with the most secure, i.e., deny
all services except those that are explicitly permitted. The policy designer then should
understand and document the following:
• which Internet services the organization plans to use, e.g., TELNET, Mosaic, and NFS;
• where the services will be used, e.g., on a local basis, across the Internet, dial-in from
home, or from remote organizations;
• additional needs, such as encryption or dial-in support;
• the risks associated with providing these services and access;
• the cost, in terms of controls and impact on network usability, of providing protection; and
• assumptions about security versus usability: does security win out if a particular service
is too risky or too expensive to secure?
The creation of these items is straightforward, but at the same time highly iterative. For
example, a site may wish to use NFS across two remote sites, however the ``deny all''
design policy may not permit NFS (as explained earlier). If the risks associated with NFS are
acceptable to the organization, it may require changing the design policy to the less secure
approach of permitting all services except those specifically denied and passing NFS
through the firewall to site systems. Or, it may require obtaining a firewall that can locate the
systems that require NFS on a screened subnet, thus preserving the ``deny all'' design
policy for the rest of the site systems. Or, the risks of using NFS may prove too great; NFS
would have to be dropped from the list of services to use remotely. The aim of this exercise,
then, is to arrive at a service access policy and the firewall design policy.
To assist in this process, the following sections present some common issues that need to
be addressed in the policies associated with firewall use.
Flexibility in Policy
Any security policy that deals with Internet access, Internet services, and network access in
general, should be flexible. This flexibility must exist for two reasons: the Internet itself is in
flux, and the organization's needs may change as the Internet offers new services and
methods for doing business. New protocols and services are emerging on the Internet; these
offer more benefits to organizations using the Internet, but may also raise new security
concerns. Thus, a policy needs to be able to reflect and incorporate these new concerns.
The other reason for flexibility is that the organization's risk profile does not remain
static. The change in risk may be a reflection of major changes such as new responsibilities
being assigned to the organization, or smaller changes such as a network configuration
change.
Policy must also address remote users: those who originate connections to site systems from elsewhere on the
Internet. These connections could come from any location on the Internet, from dial-in lines,
or from authorized users on travel or working from home. Regardless, all such connections
should use the advanced authentication service of the firewall to access systems at the site.
Policy should reflect that remote users may not access systems through unauthorized
modems placed behind the firewall. There must be no exceptions to this policy, as it may
take only one captured password or one uncontrolled modem line to enable a backdoor
around the firewall.
Such a policy has its drawbacks: increased user training for using advanced authentication
measures, increased expense if remote users must be supplied with authentication tokens or
smartcards, and increased overhead in administering remote access. But, it does not make
sense to install a firewall and at the same time not control remote access.
Dial-in/out Policy
A useful feature for authorized users is to have remote access to the systems when these
users are not on site. A dial-in capability allows them to access systems from locations
where Internet access is not available. However, as discussed earlier, dial-in capabilities
add another avenue for intruder access.
Authorized users may also wish to have a dial-out capability to access those systems that
cannot be reached through the Internet. These users need to recognize the vulnerabilities
they may be creating if they are careless with modem access. A dial-out capability may
easily become a dial-in capability if proper precautions are not taken.
The dial-in and dial-out capabilities should be considered in the design of the firewall and
incorporated into it. Forcing outside users to go through the advanced authentication of the
firewall should be strongly reflected in policy. Policy can also prohibit the use of unauthorized
modems attached to host systems and personal computers at the site if the modem
capability is offered through the firewall. A strong policy and effective modem service may
limit the number of unauthorized modems throughout the site, thus limiting this dangerous
vulnerability as well.
In addition to dial-in/dial-out connections, the use of Serial Line IP (SLIP) and Point-to-Point
Protocol (PPP) connections needs to be considered as part of the policy. Users could use
SLIP or PPP to create new network connections into a site protected by a firewall. Such a
connection is potentially a backdoor around the firewall, and may be an even larger
backdoor than a simple dial-in connection.
An earlier section provided several examples for locating dial-in capability such that dial-in connections
would pass first through the firewall. This sort of arrangement could be used as well for SLIP
and PPP connections, however this would need to be set forth in policy. As usual, the policy
would have to be very strong with regard to these connections.
A site that is providing public access to an information server must incorporate this access
into the firewall design. While the information server itself creates specific security concerns,
the information server should not become a vulnerability to the security of the protected site.
Policy should reflect the philosophy that the security of the site will not be compromised in
order to provide an information service.
One can make a useful distinction that information server traffic, i.e., the traffic concerned
with retrieving information from an organization's information server, is fundamentally
different from other ``conduct of business'' traffic such as e-mail (or other information server
traffic for the purposes of business research). The two types of traffic have their own risks
and do not necessarily need to be mixed with each other.
An earlier section discussed incorporating an information server into the firewall design. The screened
subnet and dual-homed gateway firewall examples show information servers that can be
located on a screened subnet and in effect be isolated from other site systems. This reduces
the chance that an information server could be compromised and then used to attack site
systems.
Procuring a Firewall
After policy has been decided, there are a number of issues to be considered in procuring a
firewall. Many of these issues are the same as for procuring other software systems, thus
familiar steps such as requirements definition, analysis, and design specification are
standard. The following sections describe some additional considerations, including minimal
criteria for a firewall and whether to build or purchase a firewall.
Once the decision is made to use firewall technology to implement an organization's security
policy, the next step is to procure a firewall that provides the appropriate level of protection
and is cost-effective. However, what features should a firewall have, at a minimum, to
provide effective protection? One cannot answer this question entirely with specifics, but it is
possible to recommend that, in general, a firewall have the following features or attributes:
• The firewall should be able to support a ``deny all services except those specifically
permitted'' design policy, even if that is not the policy used.
• The firewall should support your security policy, not impose one.
• The firewall should be flexible; it should be able to accommodate new services and
needs if the security policy of the organization changes.
• The firewall should contain advanced authentication measures or should contain the
hooks for installing advanced authentication measures.
• The firewall should employ filtering techniques to permit or deny services to specified
host systems as needed.
• The IP filtering language should be flexible, user-friendly to program, and should filter
on as many attributes as possible, including source and destination IP address,
protocol type, source and destination TCP/UDP port, and inbound and outbound
interface (see the example after this list).
• The firewall should use proxy services for services such as FTP and TELNET, so
that advanced authentication measures can be employed and centralized at the
firewall. If services such as NNTP, X, HTTP, or gopher are required, the firewall should
contain the corresponding proxy services.
• The firewall should contain the ability to centralize SMTP access, to reduce direct
SMTP connections between site and remote systems. This results in centralized
handling of site e-mail.
• The firewall should accommodate public access to the site, such that public information
servers can be protected by the firewall but can be segregated from site systems that
do not require the public access.
• The firewall should contain the ability to concentrate and filter dial-in access.
• The firewall should contain mechanisms for logging traffic and suspicious activity,
and should contain mechanisms for log reduction so that logs are readable and
understandable.
• If the firewall requires an operating system such as UNIX, a secured version of the
operating system should be part of the firewall, with other security tools as necessary
to ensure firewall host integrity. The operating system should have all patches
installed.
• The firewall should be developed in a manner such that its strength and correctness are
verifiable. It should be simple in design so that it can be understood and maintained.
• The firewall and any corresponding operating system should be updated with
patches and other bug fixes in a timely manner.
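As a concrete, deliberately simplified illustration of such a filtering language, the following
sketch uses Linux iptables; the interface names and addresses are hypothetical, and a real
rule set would be considerably longer. It filters on interface, protocol, address, and port,
under a default-deny policy:

# Default-deny: drop anything not explicitly permitted
iptables -P FORWARD DROP
# Permit inbound SMTP from anywhere to a hypothetical site mail gateway
iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 192.168.1.25 --dport 25 -j ACCEPT
# Permit established return traffic
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT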
There are undoubtedly more issues and requirements, but many of them will be
specific to each site's own needs. A thorough requirements definition and high-level risk
assessment will identify most issues and requirements; however, it should be emphasized
that the Internet is a constantly changing network. New vulnerabilities can arise, and new
services and enhancements to other services may represent potential difficulties for any
firewall installation. Therefore, flexibility to adapt to changing needs is an important
consideration.
A number of organizations may have the capability to build a firewall for themselves, i.e., put
together a firewall by using available software components and equipment or by writing a
firewall from scratch. At the same time, there are a number of vendors offering a wide
spectrum of services in firewall technology. Service can be as limited as providing the
necessary hardware and software only, or as broad as providing services to develop security
policy, risk assessments, security reviews and security training.
Whether one buys or builds a firewall, it must be reiterated that one should first develop a
policy and related requirements before proceeding. If an organization is having difficulty
developing a policy, it may need to contact a vendor who can assist in this process.
If an organization has the in-house expertise to build a firewall, it may prove more cost-
effective to do so. One of the advantages of building a firewall is that in-house personnel
understand the specifics of the design and use of the firewall. This knowledge may not exist
in-house with a vendor supported firewall.
At the same time, an in-house firewall can be expensive in terms of time required to build
and document the firewall, and the time required for maintaining the firewall and adding
features to it as required. These costs are sometimes not considered; organizations
sometimes make the mistake of counting only the costs for the equipment. If a true
accounting is made for all costs associated with building a firewall, it could prove more
economical to purchase a vendor firewall.
In deciding whether to purchase or build a firewall, answers to the following questions may
help an organization gauge whether it has the resources to build and operate a successful
firewall:
• How will the firewall be tested, and who will verify that it performs as expected?
• Who will perform general maintenance of the firewall, such as backups and repairs?
• Who will install updates to the firewall, such as new proxy servers, new patches, and
other enhancements?
• Can security-related patches and problems be corrected in a timely manner?
• Who will perform user support and training?
Many vendors offer maintenance services along with firewall installation, therefore the
organization should consider whether it has the internal resources to perform the above.
It should not be surprising that firewall administration is a critical job role and should be
afforded as much time as possible. In small organizations, it may require less than a full-time
position, however it should take precedence over other duties. The cost of a firewall should
include the cost of administering the firewall; administration should never be shortchanged.
System Management Expertise
As evidenced by previous discussions of the many host system break-ins occurring
throughout the Internet, the need for highly trained, full-time host system administrators is
clear. There is also evidence that this need is not being met: many sites do not manage
systems such that the systems are secure and protected from intruder attacks, and many
system managers are part-time at best and do not upgrade systems with patches and bug
fixes as they become available.
Firewall management expertise is a highly critical job role, as a firewall can only be as
effective as its administration. If the firewall is not maintained properly, it may become
insecure, and may permit break-ins while providing an illusion that the site is still secure. A
site's security policy should clearly reflect the importance of strong firewall administration.
Management should demonstrate its commitment to this importance in terms of full-time
personnel, proper funding for procurement and maintenance and other necessary resources.
A firewall is not an excuse to pay less attention to site system administration. It is in fact the
opposite: if a firewall is penetrated, a poorly administered site could be wide-open to
intrusions and resultant damage. A firewall in no way reduces the need for highly skilled
system administration.
At the same time, a firewall can permit a site to be ``proactive'' in its system administration
as opposed to reactive. Because the firewall provides a barrier, sites can spend more time
on system administration duties and less time reacting to incidents and damage control. It is
recommended that sites take advantage of this opportunity.
An important consideration under firewall and site system administration is incident handling
assistance and contacts. NIST recommends that organizations develop incident handling
capabilities that can deal with suspicious activity and intrusions, and that can keep an
organization up to date with computer security threat and vulnerability information. Because
of the changing nature of Internet threats and risks, it is important that those maintaining
firewalls be part of the incident handling process. Firewall administrators need to be aware of
new vulnerabilities in the products they are using and of ongoing intruder activity that can be
detected using prescribed techniques. NIST has produced a publication specifically on
creating incident response capabilities, which contains information on developing incident
response teams and contacts.
See Appendix A for more information on incident response team contacts and the Forum of
Incident Response and Security Teams (FIRST).
Firewall Services
• Packet Filtering
• Proxying
Packet Filtering
Packet Filtering is one of the core services provided by firewalls. Packets can be filtered
(permitted or denied) based on a wide range of criteria:
• Source address
• Destination address
• Source Port
• Destination Port
A packet-filtering rule-list is typically structured as a table with the following columns:
Number, Action, Protocol, Source Address, Source Port, Destination Address, and
Destination Port.
The order of the rule-list is a critical consideration. The rule-list is always parsed from top-to-
bottom. Thus, more specific rules should always be placed near the top of the rule-list,
otherwise they may be negated by a previous, more encompassing rule. Also, an implicit
‘deny any’ rule usually exists at the bottom of a rule-list, which often can’t be removed. Thus,
rule-lists that contain only deny statements will prevent all traffic.
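For illustration, a minimal rule-list (all addresses are hypothetical) that permits web traffic to
a single server while relying on the implicit deny might look like this:

Number  Action  Protocol  Source Add.  Source Port  Destination Add.  Destination Port
1       Permit  TCP       any          any          192.168.1.80      80
2       Deny    IP        any          any          192.168.1.0/24    any

If the two rules were reversed, the broader deny would match first and the web server would
become unreachable; this is why the more specific rule belongs at the top.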
Stateful Packet Inspection
Connections from the untrusted network to the trusted network are also monitored, to
prevent Denial of Service (DoS) attacks. If a high number of half-open sessions are
detected, the firewall can be configured to drop the session (and even block the source), or
send an alert message indicating an attack is occurring.
A half-open TCP session indicates that the three-way handshake has not yet completed. A
half-open UDP session indicates that no return UDP traffic has been detected. A large
number of half-open sessions will consume resources while preventing legitimate
connections from being established.
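As a rough sketch of stateful inspection on a Linux firewall (interface names and thresholds
are assumptions, not recommendations), iptables can track connection state and rate-limit
new inbound TCP connections to blunt half-open floods:

# Permit return traffic for connections the firewall has already seen
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Rate-limit new inbound TCP connections (SYN packets)
iptables -A FORWARD -i eth0 -p tcp --syn -m limit --limit 20/second --limit-burst 40 -j ACCEPT
# Drop SYNs beyond the limit to slow a SYN flood
iptables -A FORWARD -i eth0 -p tcp --syn -j DROP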
Proxy Services
A proxy service terminates connections at the firewall and opens new ones on the client's
behalf, which allows the firewall to add controls such as:
• Logging
• Content Filtering
• Authentication
The rapid growth of the Internet resulted in a shortage of IPv4 addresses. In response, the
powers that be designated a specific subset of the IPv4 address space to be private, to
temporarily alleviate this problem.
A public address can be routed on the Internet. Thus, devices that should be Internet
accessible (such as web or email servers) must be configured with public addresses.
A private address is only intended for use within an organization, and can never be routed
on the Internet. Three private addressing ranges were allocated, one for each IPv4 class:
• Class A - 10.x.x.x (10.0.0.0/8)
• Class B - 172.16.x.x through 172.31.x.x (172.16.0.0/12)
• Class C - 192.168.x.x (192.168.0.0/16)
NAT (Network Address Translation) is used to translate between private addresses and
public addresses. NAT allows devices configured with a private address to be stamped with
a public address, thus allowing those devices to communicate across the Internet.
NAT is not restricted to public-to-private address translation, though this is the most
common application of NAT. NAT can perform a public-to-public address translation, or a
private-to-private address translation as well. NAT provides an additional benefit – hiding the
specific addresses and addressing structure of the internal network.
Types of NAT
Static NAT – performs a static one-to-one translation between two addresses, or between a
port on one address to a port on another address. Static NAT is most often used to assign a
public address to a device behind a NAT-enabled firewall/router.
Dynamic NAT – utilizes a pool of global addresses to dynamically translate the outbound
traffic of clients behind a NAT-enabled device.
NAT Overload or Port Address Translation (PAT) – translates the outbound traffic of
clients to unique port numbers off of a single global address. PAT is necessary when the
number of internal clients exceeds the available global addresses.
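As a hedged sketch of how these types map onto a Linux NAT device (all addresses and
interface names are hypothetical; 203.0.113.0/24 is a documentation range):

# Static NAT: one-to-one mapping between a public address and an internal server
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 192.168.1.10
iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source 203.0.113.10
# PAT (NAT overload): translate all outbound clients to the single address on eth0
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE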
NAT Terminology
• Inside Local – the specific IP address assigned to an inside host behind a NAT-enabled
device (usually a private address).
• Inside Global – the address that identifies an inside host to the outside world (usually a
public address). Essentially, this is the dynamically or statically-assigned public address
assigned to a private host.
• Outside Global – the address assigned to an outside host (usually a public address).
• Outside Local – the address that identifies an outside host to the inside network. Often,
this is the same address as the Outside Global. However, it is occasionally necessary to
translate an outside (usually public) address to an inside (usually private) address.
For simplicity's sake, it is generally acceptable to associate global addresses with public
addresses, and local addresses with private addresses. However, remember that public-to-
public and private-to-private translation is still possible. Inside hosts are within the local
network, while outside hosts are external to the local network.
NAT Terminology Example
Consider the following example: HostA, configured with address 10.1.1.10, sits behind
NAT-enabled RouterA, whose public address is 55.1.1.1, and communicates across the
Internet with HostB, whose address is 99.1.1.2. For a connection from HostA to HostB, the
NAT addresses are identified as follows:
HostA’s configured address is 10.1.1.10, and is identified as its Inside Local address. When
HostA communicates with the Internet, it is stamped with RouterA’s public address, using
PAT. Thus, HostA’s Inside Global address will become 55.1.1.1.
When HostA communicates with HostB, it will access HostB’s Outside Global address of
99.1.1.2. In this instance, the Outside Local address is also 99.1.1.2. HostA is never aware
of HostB’s configured address.
It is possible to map an address from the local network (such as 10.1.1.5) to the global
address of the remote device (in this case, 99.1.1.2). This may be required if a legacy device
exists that will only communicate with the local subnet. In this instance, the Outside Local
address would be 10.1.1.5.
The example also demonstrates how the source (SRC) and destination (DST) IP
addresses within the Network-Layer header are translated by NAT, as reconstructed below.
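Reconstructing the translation from the addresses given above, the packet headers on each
side of RouterA would look roughly like this:

Inside RouterA (before translation):  SRC 10.1.1.10  DST 99.1.1.2
Outside RouterA (after translation):  SRC 55.1.1.1   DST 99.1.1.2
Return traffic, outside:              SRC 99.1.1.2   DST 55.1.1.1
Return traffic, inside:               SRC 99.1.1.2   DST 10.1.1.10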
Implementing a DMZ
As briefly described earlier, a DMZ is essentially a less trusted zone that sits between the
trusted zone (generally the LAN) and the untrusted zone (generally the Internet). Devices
that provide services to the untrusted world are generally placed in the DMZ, to provide
separation from the trusted network.
A single firewall with multiple interfaces can be used to implement a logical DMZ.
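A minimal sketch of such a three-legged firewall on Linux (interface names and addresses
are assumptions): eth0 faces the Internet, eth1 the trusted LAN, and eth2 the DMZ:

# Default-deny for all forwarded traffic
iptables -P FORWARD DROP
# The Internet may reach only the DMZ web server, and only on port 80
iptables -A FORWARD -i eth0 -o eth2 -p tcp -d 192.168.2.80 --dport 80 -j ACCEPT
# The trusted LAN may initiate connections to the Internet and to the DMZ
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
# Permit established return traffic in all directions
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Note that no rule permits the DMZ to initiate connections into the trusted LAN, so a
compromised DMZ host cannot pivot inward through the firewall.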
Chapter 13
Intrusion Detection and Prevention Systems
Intrusion detection and prevention systems (IDPSes) typically record information related to
observed events, notify security administrators
of important observed events, and produce reports. Many IDPSes can also respond to a
detected threat by attempting to prevent it from succeeding. They use several response
techniques, which involve the IDPS stopping the attack itself, changing the security
environment (e.g., reconfiguring a firewall), or changing the attack’s content.
Terminology
Site policy: Guidelines within an organization that control the rules and
configurations of an IDS.
Site policy awareness: An IDS's ability to dynamically change its rules and
configurations in response to changing environmental activity.
Confidence value: A value an organization places on an IDS based on past
performance and analysis to help determine its ability to effectively identify an attack.
Alarm filtering: The process of categorizing attack alerts produced from an IDS in
order to distinguish false positives from actual attacks.
Attacker or Intruder: An entity who tries to find a way to gain unauthorized access to
information, inflict harm or engage in other malicious activities.
Masquerader: A user who is not authorized to use a system, but who tries to
access information as if he or she were an authorized user. Masqueraders are generally
outside users.
Misfeasor: Commonly an internal user, of one of two types:
1. An authorized user with limited permissions who exceeds those permissions.
2. A user with full permissions who misuses those powers.
Clandestine user: A user who seizes supervisory control and uses those privileges
so as to avoid detection.
Types
For the purposes of IT security, there are two main types of IDS:
Host-based intrusion detection system (HIDS): consists of an agent on a host that identifies
intrusions by analyzing system calls, application logs, file-system modifications (binaries,
password files, capability databases, access control lists, etc.) and other host activities and
state. In a HIDS, sensors usually consist of a software agent. Some application-based IDSes
are also part of this category. An example of a HIDS is OSSEC.
Stack-based intrusion detection system (SIDS): an evolution of the HIDS. Packets are
examined as they pass through the TCP/IP stack, so it is not necessary for the sensor to
operate the network interface in promiscuous mode. This makes the implementation
dependent on the operating system in use.
Intrusion detection systems can also be system-specific, using custom tools and honeypots.
In a passive system, the intrusion detection system (IDS) sensor detects a potential
security breach, logs the information, and signals an alert on the console and/or to the
owner. In a reactive system, also known as an intrusion prevention system (IPS), the IPS
auto-responds to the suspicious activity by resetting the connection or by reprogramming the
firewall to block network traffic from the suspected malicious source. The term IDPS is
commonly used for systems that both detect (alert) and prevent, whether the response
occurs automatically or at the command of an operator.
Though they both relate to network security, an intrusion detection system (IDS) differs from
a firewall in that a firewall looks outwardly for intrusions in order to stop them from
happening. Firewalls limit access between networks to prevent intrusion and do not signal an
attack from inside the network. An IDS evaluates a suspected intrusion once it has taken
place and signals an alarm. An IDS also watches for attacks that originate from within a
system. This is traditionally achieved by examining network communications, identifying
heuristics and patterns (often known as signatures) of common computer attacks, and taking
action to alert operators. A system that terminates connections is called an intrusion
prevention system, and is another form of an application layer firewall.
A statistical anomaly-based IDS determines normal network activity – what sort of
bandwidth is generally used, what protocols are used, what ports and devices generally
connect to each other – and alerts the administrator or user when traffic is detected that is
anomalous (not normal).
Signature-based IDS
A signature-based IDS monitors packets on the network and compares them with pre-configured
and pre-determined attack patterns known as signatures. The issue is that there will be a lag
between a new threat being discovered and its signature being applied in the IDS for
detecting the threat. During this lag time, the IDS will be unable to identify the threat.
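For illustration, such signatures are often written as rules. Below is a hedged sketch in
Snort rule syntax (the message text, content pattern, addresses, and SID are invented for
this example):

alert tcp any any -> 192.168.1.0/24 80 (msg:"WEB-ATTACK example shell pattern"; content:"/bin/sh"; nocase; sid:1000001; rev:1;)

Until a rule like this exists and is deployed, traffic containing the new attack pattern passes
unnoticed – the lag described above.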
Limitations
Noise can severely limit an Intrusion detection system's effectiveness. Bad packets
generated from software bugs, corrupt DNS data, and local packets that escaped can
create a significantly high false-alarm rate.
It is not uncommon for the number of real attacks to be far below the false-alarm rate.
Real attacks are often so far below the false-alarm rate that they are often missed
and ignored.
Many attacks are geared for specific versions of software that are usually outdated. A
constantly changing library of signatures is needed to mitigate threats. Outdated
signature databases can leave the IDS vulnerable to new strategies.
Intrusion prevention systems (IPS), also known as intrusion detection and prevention
systems (IDPS), are network security appliances that monitor network and/or system
activities for malicious activity. The main functions of intrusion prevention systems are to
identify malicious activity, log information about said activity, attempt to block/stop activity,
and report activity.
Network-based intrusion prevention (NIPS): monitors the entire network for suspicious
traffic by analyzing protocol activity.
Network behavior analysis (NBA): examines network traffic to identify threats that
generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain
forms of malware, and policy violations.
Detection methods
The majority of intrusion prevention systems utilize one of three detection methods:
signature-based, statistical anomaly-based, and stateful protocol analysis.
Signature-Based Detection: This method of detection utilizes signatures, which are attack
patterns that are preconfigured and predetermined. A signature-based intrusion prevention
system monitors the network traffic for matches to these signatures. Once a match is found
the intrusion prevention system takes the appropriate action. Signatures can be exploit-
based or vulnerability-based. Exploit-based signatures analyze patterns appearing in
exploits being protected against, while vulnerability-based signatures analyze vulnerabilities
in a program, its execution, and conditions needed to exploit said vulnerability.
Stateful Protocol Analysis Detection: This method identifies deviations of protocol states
by comparing observed events with “predetermined profiles of generally accepted definitions
of benign activity.”
Before discussing sensor placement, the target network should be analyzed and choke
points identified. A choke point would be any point in a network where traffic is limited to a
small number of connections. An example is usually a company's Internet boundary, where
traffic crosses only a router and a firewall. The links between the router and firewall are
perfect choke points and good places to consider placing IPS sensors. Another
consideration is high-value network assets. Business critical systems and infrastructure,
such as server farms or databases, may warrant additional protection in the form of
dedicated IPS or IDS sensors. Of course some of these assets can be protected by host-
based IDS or IPS software agents in addition to, or instead of, targeted network level
sensors.
IPS sensors require network choke points; they are meant to be deployed in-line between
other network infrastructure components. An IPS sensor can only provide protection if traffic
flows through it. As we've seen, an Internet boundary is usually a good choke point, but
there is another consideration in this case: do we position a sensor inside or outside the
firewall?
If we go outside, one sensor will protect the internal network and any DMZ networks behind
the firewall. The downside is that the sensor requires much more tuning to lower the noise
level. Being outside the firewall means the sensor sees everything, even traffic the firewall
would block. In this case, the IPS administrator needs to adjust the IPS policy or rule set so
that traffic the firewall will block either doesn't get inspected by the IPS or doesn't generate
alerts. This assumes that the administrator doesn't want to know about every inbound
attack. In most corporate environments this is true, but there are a few environments where
it isn't; the individual administrator and their superiors must decide.
The flip side of this scenario is to place an IPS sensor inside, or behind, the firewall. Here,
the firewall blocks traffic and therefore limits what the IPS needs to inspect, improving
efficiency. The trade-off is the number of sensors needed to provide the same level of
protection as an externally placed sensor. Most commercially available sensors offer
coverage for several physical network links in a single chassis or other hardware platform;
generally, the higher the number of links, the higher the cost. Highly available networks add
cost and complexity to both scenarios by increasing the number of physical links being
protected. The decision of whether to protect the passive or fail-over side of a high-availability
configuration lies with the system administrator and their superiors.
This discussion was specific to an Internet boundary, but other likely choke points may
exist. Many organizations maintain extranet connections to business partners that are
consolidated on firewall- or VPN-protected networks. Placing an IPS sensor behind such a
firewall or VPN concentrator protects one network from the other. In the case of VPN
networks, care must be taken to inspect the unencrypted side of the VPN tunnel. There may
even be choke points and boundaries within a network where IPS sensors can be deployed:
between departments or business units, or between users and critical systems like
databases.
But what if a given network has no choke points? Flat networks are trouble for IPS sensors.
In some cases, though, choke points can be created. Consider a switched network using
one or no VLANs. On a single switch, different ports can be assigned to different VLANs;
creating two VLANs and bridging them with an IPS sensor creates a protected choke point.
Network engineers will see this as an oddity, and they are right, but in a pinch it works and
allows different portions of the network to be protected from each other.
Another problem for IPS deployments is the wide-area network, or WAN. IPS sensors can
be used in wide-area networks but require positioning between distributed local area
networks and the WAN cloud. This most likely translates to one IPS sensor at each remote
location and one or more sensors at any central or large sites, so IPS deployments in WAN
environments can be expensive. One possibility is left to the network engineers: in a
hub-and-spoke WAN, it might be possible to leverage VLANs as discussed previously to get
all traffic inspected by a single, centralized IPS sensor. This option is highly dependent on
the given network infrastructure and also depends on all WAN traffic traversing the network
through a single site.
As previously mentioned, Intrusion Detection System (IDS) sensors are more flexible and
less capable than IPS sensors. Nonetheless, IDS sensors can be substituted for IPS
sensors in all of the examples previously given and some of the same caveats apply,
particularly when considering placement around firewalls. Importantly, though, IDS sensors
forgo the need for in-line placement common to IPS sensors. IDS sensors can be connected
to network taps or switch analysis ports, commonly known as SPAN ports. Both types of
connections simply copy network traffic for presentation to and analysis by the IDS sensor.
This means that IDS can provide security event detection with fewer sensors than IPS can,
although the level of protection is far less. For example, switched network backbones are
ideal for IDS sensor deployment. Dependent on the amount of traffic being inspected, a few
or perhaps even one IDS sensor can provide coverage for an entire network. Actually, any
switch that can enable an analysis port is a possible deployment site for an IDS sensor.
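As a hedged illustration, on many managed switches a mirror (SPAN) session copies traffic
to the port where the IDS sensor listens; the Cisco IOS-style commands below are a sketch,
with interface names assumed:

monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/24

The sensor on GigabitEthernet0/24 then receives a copy of all traffic crossing
GigabitEthernet0/1 without sitting in-line.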
Clearly, existing IDS and IPS technologies have some limits, the need to protect only at
choke points being chief among them. Aside from increases in processing speed, yielding the
ability to inspect and protect more data per second, it seems that incorporating IDS and IPS
technology into the network infrastructure is a logical next step. Some vendors are already
providing something like this in the way of add-on modules or blades for existing switches.
But I think we will begin to see a hybridization of switch and security technologies in the next
few years: a single device that appears to be a switch but has enough intelligence to
perform a security analysis of every packet crossing the backplane, and to keep state on
and watch every conversation (a session, in network parlance). Such a device would
eliminate the need for separate IDS or IPS sensors sitting in the network and could
conceivably protect systems on adjoining ports from each other, which is possible but
cost-prohibitive using today's technology. These hybrid devices will be much more than just
a switch with IPS; they will require new technologies within the switch chassis and will
enable new network architectures beyond it. Whenever these devices arrive, however, the
need for them exists today. Note, though, that the foregoing discussion does not mention
firewalls; the merger of firewalls and IDS/IPS technologies isn't necessarily logical.
Chapter 14
Computer Forensics
As computers become more prevalent in the world, more computer crimes will occur. Thus,
the need for computer forensics specialists will continue to grow as long as computers are
being implemented in society. Currently, the need for computer forensics is growing
exponentially. The need is particularly acute at local, state, federal and military law
enforcement agencies that house computer forensics divisions. It is important for
companies to identify and take proper action against those who engage in agency conflict.
In recent years, corporations have started taking the initiative and imaging the contents of an
employee’s computer when he/she leaves the company. As such, an employee’s hard drive
is imaged upon resignation, termination or internal transfer. Archiving these images is
important, as issues such as theft of trade secrets or intellectual property, harassment and
wrongful termination claims often do not surface until months after an employee leaves
his/her position. Therefore, it is important for companies to archive the information in the
event that it needs to investigate the activities on the employee’s computer. With the
increasing importance of computer forensics, the Big 4 accounting firms have stepped up
their efforts to recruit and hire professionals with forensics skills.
For years, computer and network security experts (whitehats) have fought to stay ahead of
computer criminals (blackhats). As blackhats became more skilled and computers became
more powerful, conventional security measures became less effective. This perpetual
action-response-reaction cycle evolved into a new field of study known as Computer and
Network Forensics (CNF). CNF is the art of discovery and retrieval of information about
computer-related crime in such a way that the gathered information is admissible in court.
Network Forensics: The use of scientifically proven techniques to collect, fuse, identify,
examine, correlate, analyze, and document digital evidence from multiple, actively
processing and transmitting digital sources for the purpose of uncovering facts related to the
planned intent or measured success of unauthorized activities meant to disrupt, corrupt,
and/or compromise system components, as well as providing information to assist in
response to or recovery from these activities.
There are two sides to CNF efforts. The first is to assess the impact of the malicious or
suspect act or acts. In order to bring a computer criminal to justice, it must be possible to
show that sufficient damage has been done so that the act can be accurately classified as a
crime. Often, there is an economic threshold associated with statutes that govern computer
crime.
The second part is to gather information that legally binds the act or acts that caused the
damage to the perpetrator. This is the better-known aspect of computer crime investigation:
the standard "whodunit" component. In response to innovative computer criminals, CNF
techniques have become highly sophisticated and CNF tools are increasingly effective. In
addition to putting computer criminals in jail, CNF techniques have enabled whitehats to
learn valuable information about blackhats' techniques and methods and to formulate
protection and defense mechanisms, tools, and techniques.
The ultimate goal of CNF is to provide sufficient evidence to allow the blackhat to be
successfully prosecuted. CNF techniques are used to discover evidence in a variety of
crimes ranging from theft of trade secrets, to protection of intellectual property, to general
misuse of computers.
The science of computer forensics has a seemingly limitless future, and as technology
advances, the field will continue to expand. Such evidence has to be handled in the
appropriate manner and must be documented for use in a court of law. Any methodology,
process or procedural breakdown in the application of forensics can jeopardize the
company’s case. Organizations are beginning to rely on the findings that computer forensics
specialists gather when a cybercrime is committed. Computer forensics is quickly becoming
standard protocol in corporate internal investigations, expanding beyond the realm of
specialized computer incident response teams. As the overwhelming majority of documents
are now stored electronically, it is difficult to imagine any type of investigation that does not
warrant a computer forensic investigation. It is becoming a standard for electronic crime
investigations.
Computer forensics is not only used for cybercrime cases, but the techniques and methods
are also adopted for noninvestigative purposes. Examples include data mapping for security
and privacy risk assessment, and the search for intellectual property for data protection.
Additionally, computer forensics schemes can be used when critical files have been
been deleted accidentally or through hardware failure.
Thus, there are several additional applications pertaining to the science of computer
forensics in addition to utilizing the methods to investigate computer-related crimes.
Challenges
A key challenge in network forensics is to first ensure that the network is forensically ready.
For a successful network investigation, the network itself must be equipped with an
infrastructure to fully support that investigation. The infrastructure should ensure that the
data needed for a full investigation exists.
Data sources: A typical network has several possible sources of data, including raw
network packets and the logs of network devices and services. Although it is desirable to
collect data from all possible sources, this is not always feasible, especially for large
networks. Therefore, an important decision is to select a subset of data sources that gives
good coverage of the network and makes the collection process practical.
Data granularity: A related issue to selecting data sources is deciding how much detail
should be kept. For instance, when collecting network packets, one may collect whole
packets, packet headers, connection information (IP addresses, port numbers), etc. As with
the item above, keeping extensive data detail is not practical in large networks (see the
capture sketch after this list).
Data integrity: It is critical to ensure the integrity of collected data. The outcome of the
forensics process can be adversely affected if the collected data are altered, either
deliberately or accidentally. Therefore, measures have to be implemented to ensure data
integrity during and after data collection and analysis.
Data as legal evidence: Using the collected data internally within an organization is quite
different from presenting the data in a court of law. In the latter case, the collected data has
to pass stringent legal procedures in order to qualify as evidence: it must pass an
admissibility test, a screening process applied by the court.
Data analysis: A major challenge is analyzing the collected data in order to produce useful
information that can be used in a decision-making process. Such an analysis process is in
many respects challenging, due to the complexity of a typical network environment and the
amount and diversity of data involved. Innovative tools are needed to help human
investigators analyze data. These tools may apply techniques from fields like data mining
and information visualization.
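As a sketch of the data-granularity trade-off mentioned above (the interface and file names
are assumptions), tcpdump can capture either full packets or just enough of each packet for
its headers:

# Full packets: maximum detail, maximum storage cost
tcpdump -i eth0 -s 0 -w full.pcap
# First 96 bytes of each packet: typically enough for headers, far smaller on disk
tcpdump -i eth0 -s 96 -w headers.pcap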
Honeytraps
A honeypot is a single decoy system set up to be probed and compromised. Honeynets, on
the other hand, are a network of interconnected production and honeypot nodes. They
protect the production resource only by distracting the intruder from the real target. Like
honeypots, honeynets are designed to collect information from intruders and attempted
intruders while containing the operations that these intruders can perform.
Pitfalls of Honeytraps
There are several potential pitfalls of honeytraps, the first arising from their foundational
goal of being systems that are established in order to be compromised. One concern is that
once an intruder enters the honeytrap, they may be able to utilize some component of it for
an illicit purpose, e.g., as a zombie in a distributed denial of service attack.
A second concern is that once the blackhat enters the honeytrap, they may be able to attack
the honeytrap itself, shielding their actions from the honeytrap's monitors or destroying or
modifying the honeytrap activity logs. There are many discussions of how to
avoid these and other honeytrap pitfalls in the literature and on relevant web pages. For the
purposes of this paper, we posit without proof, that these pitfalls can be effectively
overcome.
Data Capture
Once a blackhat penetrates a honeytrap, there must be mechanisms in place to detect and
record the actions that the intruder takes. Detecting and recording that activity is termed
Data Capture. Data Capture should record every possible aspect of the blackhat activity,
from keystrokes to transmitted packets. The purpose of data capture is to collect data to
determine tools, tactics, habits, and motives of specific blackhats, and of the blackhat
community.
Honeytrap Uses
Honeytraps are intended to let the blackhats in and allow them to operate in order to monitor
their actions. As information is collected, blackhats are profiled and their techniques are
analyzed.
Documenting Blackhat Techniques
For years, whitehats have dedicated efforts and resources to studying the blackhat
community with the ultimate goal of learning blackhat techniques. From earlier efforts, such
as hacker surveys, to cutting-edge technology such as that implemented in the honeynet
project, whitehats have used their knowledge to investigate the mystery of blackhats and the
workings of the hacking process. In 1999, MacClure shed some light on this subject when he
documented the process of hacking by breaking it down into stages that most blackhats
go through during an attack. The anatomy described by MacClure includes four stages:
probing, invading, mischief, and covering tracks. Documenting a blackhat's activities during
these four stages allows us to create a signature that can be used to identify a specific
blackhat.
Through the literature produced by previous research, blackhats' techniques, tactics,
motives and psychology have been documented [KEM00]. We can now use this information
to create signatures that characterize specific blackhats. For example, suppose our blackhat is a
script kiddy. Script kiddies are inexperienced blackhats that try to break into systems using
scripts created by knowledgeable blackhats. A signature for this blackhat may include, for
example, level of skill, methodology, tactics, tools, and other information such as the
originating site for scripts.
An essential element of deception technology is that hackers must enter the trap in order
for the trap to gather information. By many reports, hacking and probing is sufficiently
widespread that simply placing a computer on the Internet will naturally result in intruders
entering it. Still, there is no guarantee that there will be enough of any interesting type of
hacking on the computer to allow effective information gathering.
Honeytrap Approaches
Honeytraps are inherently flexible and can be implemented in a wide variety of ways. There
are three primary considerations that guide the approach that we recommend:
1) System vulnerabilities,
For the first, we can configure honeytraps to identify a specific vulnerability or set of
vulnerabilities within a host or system. For example, the honeytrap "BackOfficer Friendly"
[BOF] can be configured to emulate the specific vulnerability known as Back Orifice. When
blackhats find this honeytrap and recognize that the Back Orifice vulnerability is open,
BackOfficer Friendly carefully tracks their activities in the honeytrap to see how they exploit
the vulnerability. The tool extends the process by assessing the impacts of the recorded
actions.
In the final category, we can configure a honeytrap to simulate a network system and
monitor how the system behaves and how its components interact under attack. Honeytraps
are an effective technique for testing changes to systems, such as the addition of new
software or significant configuration changes, to determine possible risks and vulnerabilities
before the changes are implemented in the real system.
With the introduction of honeytraps, the face of information gathering changed, putting
whitehats on the offensive rather than the defensive. The purpose of honeytraps is to
gather intelligence about the enemy: to learn the tools, tactics, and motives of the blackhat
community. To date, the information collected in honeytraps has not been intended for
presentation in court. In order to use the information collected in honeytraps to prosecute the
blackhat, there are numerous legal issues to deal with. As we discussed earlier, when an
intruder is attracted (no matter how subtle that attraction may be) into a honeytrap, the
honeytrap owner assumes liability for the actions the intruder takes on the honeytrap. For
example, if the intruder is able to turn the honeytrap into a zombie to effect a distributed
denial of service attack, the subject of that attack may claim damages from the honeytrap
owner. Containment technology employing wrappers and sandbox techniques reduces the
vulnerability, but is far from perfect.
Secondly, if an intruder is attracted into a honeytrap, it is unlikely that the intrusion itself can
be prosecuted as a crime, even if the activity that the intruder engages in after entry is clearly
malicious. Proving crimes relative to invited participants is more legally complicated than for
acts carried out by uninvited intruders.
Additionally, honeytraps are not real systems: they contain no valuable data, and they have
no real users. As a result, there is no real economic impact and no real damage that can
result from honeytrap intrusions. Honeytraps are created to be attacked, so it is unlikely that
an intruder could be prosecuted for activities they undertake within a honeytrap since it
would be difficult to categorize the results of their activities as a crime.
Even if a crime can be established, if the intruder was attracted into the honeytrap, there is a
good chance that they will be able to employ an entrapment defense. Even subtle attractions
can be used to defeat prosecution via anti-entrapment laws.
Finally, honeytrap operators must deal with legal issues related to privacy. While privacy
issues are not well defined on the Internet (or in society in general) honeytrap operators may
face invasion of privacy claims either in response to their attempts to prosecute intruders, or
independently from malicious or non-malicious intruders that do not desire to have their
activities or identities revealed.
In the previous section we introduced the traditional forensic model and then described how
this model changed with the additions introduced in MY01. In the next sections we introduce
two architectures that allow honeytraps to be used as CNF tools. We show how these
developments transform the forensic model to create the parallel and the serial forensic
architectures. In addition, we discuss the role of honeytraps in the forensic process. As we
have noted, honeytraps are designed to provide insight into blackhat methods, tactics, and
targets. To gather this information, blackhats must be tracked through the system, with every
action being recorded for later analysis. This information provides a basis to determine how
the blackhat works and allows the analyst to predict what this blackhat, or a class of
blackhats, may do in the future.
We notice that information gathered from Honeytraps may be used to develop a profile of
each blackhat that is monitored. By tracking the actions, a signature for attacks can be
created that we can use to identify and prosecute the blackhat.
Architectures
Honeytraps come in many shapes and sizes. They are highly configurable and therefore can
be designed to meet the needs and capabilities of a wide variety of specific systems. Once
the honeytrap is designed, the architecture of how to connect the honeytrap to the Internet
relative to the production system must be determined. Two architectures that facilitate
the forensic investigation are the serial and parallel architectures.
Serial Architecture
The serial honeytrap architecture works by placing the honeytrap between the Internet and
the production system, as shown in Figure 3. In this configuration, the honeytrap acts as a
firewall: all recognized users are filtered through to the production system while blackhats
are contained in the honeytrap. The blackhats' activities are monitored, and all the
information collected is routed to another system that is protected by a firewall, to ensure
the integrity of the data.
The serial architecture forces the blackhat to go through the honeytrap to attack the
production system, thus exposing all attackers to the honeytrap monitoring techniques. This
may also enhance tracing capability, since it may be possible to follow blackhats as they
transition between the honeytrap and the production system, making it easier for the forensic
investigation to match the blackhat in the honeytrap to the blackhat in the production system.
There are numerous drawbacks to the serial architecture. We first notice that it is resource
intensive. One of the important characteristics of honeytraps is that they need not deal with
real users, thus reducing the volume and complexity of monitoring. However, in the serial
connection the honeytrap must handle all traffic going into the production system and reroute
authorized users to the production system. Additionally, were it easy to contain intruders
in a firewall, we would not need honeytraps. This architecture runs the fundamental risk that
intruders it attracts into the honeytrap may subsequently successfully attack the
production system in spite of the honeytrap's best containment efforts.
Figure: Serial Architecture.
Parallel Architecture
In the parallel architecture, the honeytrap is placed alongside the production system rather
than between it and the Internet. Configuring the honeytrap so that it is likely that an intruder
would enter or probe the honeytrap before, or shortly after, entering the production system is
tricky, and again leads us into possible entrapment scenarios. Secondly, under the parallel
honeytrap architecture it is likely to be more difficult to connect an intruder in the honeytrap
to the intruder in the production system, since there is no direct connection between them as
there was in the serial architecture.
If the attack expands into the production system, the response procedure is activated to
handle the attack. After the situation is contained, the investigation begins. When sufficient
evidence has been collected in the honeytrap and in the production system, the analysis
begins and a report is produced that includes all the evidence and findings and enables
prosecution of the blackhat.
The states of the parallel forensic model, shown in Figure 6, are very similar to those of the
serial forensic model, with one major difference: in the parallel forensic model there are two
processes running concurrently. Process A is the honeytrap forensic process (HTFP), and
process B is the production system forensic process (PSFP).
The HTFP begins once the blackhat has entered the honeytrap and the FAS is activated. An
alert is immediately sent to the production system, and the monitoring of the blackhat’s
activities begins. Once the blackhat’s activities are detected, a forensic investigation is
performed with the information collected, and the results are safely stored until needed.
The PSFP begins once the production system has been compromised and the attack is
detected. If the honeytrap is attacked first, then the production system is already on forensic
alert, making it easier for the attack to be detected. Once the intrusion response procedure is
activated and the situation is contained, the forensic analysis begins with the evidence
gathered on the production system and the information collected from the honeytrap, if
available. A complete report from the analysis is generated that includes all the evidence
and findings that can be used to take legal action against the blackhat.
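As a rough illustration of the two concurrent processes, the following Python sketch models
the HTFP raising an alert that the PSFP checks. The function and event names are
assumptions made for the sketch, not taken from Figure 6.

# Minimal sketch of the two concurrent forensic processes. The shared
# event and the print statements are illustrative assumptions.
import threading

forensic_alert = threading.Event()   # raised by the HTFP, checked by the PSFP

def honeytrap_forensic_process():
    # HTFP: begins once the blackhat enters the honeytrap and the FAS fires.
    forensic_alert.set()             # alert the production system immediately
    print("HTFP: monitoring blackhat activity; storing results safely")

def production_system_forensic_process():
    # PSFP: begins once a compromise of the production system is detected.
    if forensic_alert.is_set():
        print("PSFP: already on forensic alert (honeytrap was hit first)")
    print("PSFP: contain the intrusion, then analyze evidence and report")

ht = threading.Thread(target=honeytrap_forensic_process)
ps = threading.Thread(target=production_system_forensic_process)
ht.start(); ps.start()               # both processes run concurrently
ht.join(); ps.join()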
The Forensic Investigation
In both architectures, the forensic investigation procedure and goal are the same. The
forensic investigation is broken down into two separate investigations. The first is based on
the evidence collected from the honeytrap, which we will refer to as the Honeytrap Forensic
Investigation (HTFI). The second is based on the evidence collected from the production
system, which we will refer to as the Production System Forensic Investigation (PSFI). Each
investigation produces a piece of the puzzle.
The goal of the HTFI is to produce a damage report and a signature for the blackhat. For
example, suppose A is the blackhat who broke into the honeytrap; then the HTFI will
produce:
1) A -> identity
2) A -> tactics
3) A -> tools
4) A -> targets
Because less information about the blackhat is available in the production system, the
blackhat’s signature may only be partial. For example, suppose B is the blackhat who broke
into the production system; then the PSFI might include:
1) B -> tactics
2) B -> tools
3) B -> targets
An essential element of this investigation is to determine the identity of the intruder in the
production system. The PSFI provides blackhat B’s partial signature and a damage report,
but not blackhat B’s identity. The HTFI establishes blackhat A’s identity, but A cannot be
charged because he was in a honeytrap, where no real damage can be shown in court. We
therefore need the identity of blackhat B in order to charge him with the damage described
in the report.
The question is, "How can we use what we know about blackhat A to discover who blackhat
B is?" The answer is that if we can show that the tactics, tools, targets, and other information
signatures of intruder B are identical to those of intruder A, we may be able to make a
compelling argument that A and B are the same person. If so, since the identity of intruder A
is known, the match would enable the case to be pursued.
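To make the matching argument concrete, here is a minimal sketch in Python of how the two
signatures might be compared. The field names mirror the lists above, but the set-based
representation, the similarity metric, and the threshold are illustrative assumptions, not part
of the HTFI/PSFI procedure itself.

# Compare the honeytrap signature (blackhat A) against the production
# system signature (blackhat B). Representation, example evidence, and
# threshold are all assumptions made for this sketch.
from dataclasses import dataclass, field

@dataclass
class Signature:
    identity: str = ""                    # known only for the honeytrap intruder
    tactics: set = field(default_factory=set)
    tools: set = field(default_factory=set)
    targets: set = field(default_factory=set)

def overlap(a, b):
    """Jaccard similarity of two evidence sets (1.0 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def same_intruder(ht, ps, threshold=0.9):
    """Argue A == B when tactics, tools, and targets all (near-)match."""
    return min(overlap(ht.tactics, ps.tactics),
               overlap(ht.tools, ps.tools),
               overlap(ht.targets, ps.targets)) >= threshold

# Hypothetical evidence: if the signatures match, A's known identity
# can be attached to B's damage report so the case can be pursued.
a = Signature(identity="handle-xyz", tactics={"ssh brute force"},
              tools={"r57 rootkit"}, targets={"/etc/passwd"})
b = Signature(tactics={"ssh brute force"},
              tools={"r57 rootkit"}, targets={"/etc/passwd"})
print(same_intruder(a, b))  # True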
Figure 6: Parallel Forensic Model
This list of forensics and network forensics tools contains some of the tools that can be used
to extract valuable information from a system or from network capture files (usually pcap
files). Imagine getting a large pcap file from which you need to extract all the emails, or all
the JPEGs: these tools can definitely help. A small illustrative sketch of the idea appears
below, before the list itself.
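As a minimal sketch of this kind of extraction, the following Python snippet carves JPEG
images out of the raw TCP payloads in a capture by looking for the JPEG header and footer
markers. It assumes the third-party scapy library and a file named capture.pcap, and it skips
proper TCP stream reassembly, so treat it as an illustration only; the dedicated tools listed
below do this far more robustly.

# Naive JPEG carving from a pcap: concatenate TCP payloads in capture
# order (no real stream reassembly) and cut on JPEG start/end markers.
# Assumes scapy is installed and "capture.pcap" exists; illustrative only.
import re
from scapy.all import rdpcap, TCP

packets = rdpcap("capture.pcap")
payload = b"".join(bytes(p[TCP].payload) for p in packets if p.haslayer(TCP))

# JPEG data starts with FF D8 FF and ends with FF D9.
for i, m in enumerate(re.finditer(rb"\xff\xd8\xff.*?\xff\xd9", payload, re.DOTALL)):
    with open("carved_%d.jpg" % i, "wb") as out:
        out.write(m.group())
    print("carved_%d.jpg: %d bytes" % (i, len(m.group())))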
DateDecoder - “A command line tool used to decode various date/time stamps from their
encoded format to human readable format.”
http://www.live-forensics.com/dl/DateDecoder.zip
Draugr - a Linux physical memory forensics tool.
http://www.esiea-recherche.eu/~desnos/draugr/draugr.tar.gz
EchoMirage - “Echo Mirage is a generic network proxy. It uses DLL injection and function
hooking techniques to redirect network related function calls so that data transmitted and
received by local applications can be observed and modified.”
http://www.bindshell.net/tools/echomirage
Foremost - "Foremost is a console program to recover files based on their headers, footers,
and internal data structures. This process is commonly referred to as data carving. Foremost
can work on image files, such as those generated by dd, Safeback, Encase, etc, or directly
on a drive. The headers and footers can be specified by a configuration file or you can use
command line switches to specify built-in file types. These built-in types look at the data
structures of a given file format allowing for a more reliable and faster recovery."
http://foremost.sourceforge.net/
Forensics ToolKit - "The Forensic ToolKit™ contains several Win32 Command line tools that
can help you examine the files on a NTFS disk partition for unauthorized activity."
http://www.foundstone.com/us/resources/proddesc/forensictoolkit.htm
HexReader - “Reads hexoffsets from files, is primary used to then send output to
datedecoder.”
http://www.live-forensics.com/dl/HexReader.zip
HFSExplorer - "HFSExplorer is an application that can read Mac-formatted hard disks and
disk images.
It can read the file systems HFS (Mac OS Standard), HFS+ (Mac OS Extended) and HFSX
(Mac OS Extended with case sensitive file names)."
http://hem.bredband.net/catacombae/hfsx.html
http://www.macosxforensics.com/Downloads/Downloads.html
JSUnpack - “...it is a completely passive JavaScript decoder to perform Intrusion Detection,
by processing network traffic (either an interface or pcap file), rather than URLs.”
http://jsunpack.jeek.org/jsunpack-n.tgz
Memoryze - free memory forensics software from Mandiant that can acquire and analyze
physical memory images.
http://www.mandiant.com/products/free_software/memoryze/
NetworkMiner - a network forensic analysis tool (NFAT) that can parse pcap files and extract
transmitted files and other artifacts from captured traffic.
http://networkminer.sourceforge.net/
PCAP Forensic Tool - “This tool as of now, hosts the following features: Packet Summary,
DNS Summary, Stream Summary, List files within stream (magic bytes), List files within
archives in streams (ZIP and TAR), Extract files based on magic type, Look within ZIP and
TAR archives for file type to extract, GZIP Decompression for files and archives,
Extraction Summary...”
http://malforge.com/node/30
RecycleReader - a command line tool that outputs the contents of the Windows Recycle Bin.
http://www.live-forensics.com/dl/RecycleReader.zip
SleuthKit - "The Sleuth Kit (TSK) is a library and collection of command line tools that allow
you to investigate volume and file system data."
http://www.sleuthkit.org/
Skipfish - an active web application security reconnaissance tool.
http://code.google.com/p/skipfish/
SQLiX - “SQLiX, coded in Perl, is a SQL Injection scanner, able to crawl, detect SQL
injection vectors, identify the back-end database and grab function call/UDF results (even
execute system commands for MS-SQL).”
http://www.owasp.org/index.php/Category:OWASP_SQLiX_Project
Xplico - “The goal of Xplico is extract from an internet traffic capture the applications data
contained.”
http://www.xplico.org