Firewalls Complete - Beta Version
Security
Real-Time Performance
Multicasting
IPv6 Security
Network Time Protocol (NTP)
Dynamic Host Configuration Protocol (DHCP)
Windows Internet Name Service (WINS)
Domain Name System (DNS)
Limiting DNS Information
Firewalls Concepts
The Flaws in Firewalls
Fun With DMZs
Authentication Issues
Trust at the Perimeter
Intranets
From Here…
Chapter 2
Basic Connectivity
What Happened to TTY
What is the Baudot Code?
UNIX to UNIX CoPy (UUCP)
SLIP and PPP
Rlogin
Virtual Terminal Protocol (TELNET)
Columbia University’s KERMIT: a Secure and Reliable TELNET Server
TELNET Services Security Considerations
A Systems Manager Approach to Network Security
Chapter 3
Cryptography: Is it Enough?
Introduction
Symmetric Key Encryption (Private Keys)
Data Encryption Standard (DES)
International Data Encryption Algorithm (IDEA)
CAST
Skipjack
But is Skipjack Secure?
RC2/RC4
Asymmetric Key Encryption/Public Key Encryption:
RSA
Is RSA Algorithm Secure?
Adaptive-chosen-plaintext Attack
Man-in-the-middle Attack
Chosen-ciphertext Attack
Chosen-key Attack
Rubber-hose Cryptanalysis
Timing Attack
Cryptography Applications and Application Programming Interfaces (APIs)
Data Privacy and Secure communications channel
Some Data Privacy Prime and Tools
Have a Password Policy*
Authentication
Authenticode
NT Security Support Provider Interface (SSPI)
Microsoft Cryptographic API (CryptoAPI)
Cryptography and Firewalling: The Dynamic Dual
Chapter 4
Firewalling Challenges: The Basic Web
HTTP
The Basic Web
What to Watch for on the HTTP Protocol
Taking Advantage of S-HTTP
Using SSL to Enhance Security
Be Careful When Caching the Web!
Plugging the Holes: a Configuration Checklist
A Security Checklist
Novell’s HTTP: Better be Careful
Watch for UNIX-based Web Server Security Problems
URI/URL
File URLs
Gopher URLs
News URLs
Partial URLs
CGI
Chapter 5
Firewalling Challenges: The Advanced Web
Extending the Web Server: Increased Risks
ISAPI
CGI
Internet Server API (ISAPI)
A Security Hole on IIS exploits ISAPI
What can you do About it?
NSAPI
Servlets
Servlets Applicability
Denali
Web Database gateways
Cold Fusion
Microsoft Advanced Data Connector (ADC)
Security of E-mail Applications
Macromedia’s Shockwave
Shockwave’s Security Hole
The Security Hole Explained
Countermeasures to the Shockwave Exploit
Chapter 6
The APIs Security Holes and Its Firewall Interactions
Sockets
BSD sockets
Windows sockets
Java APIs
Perl modules
CGI Scripts
ActiveX
ActiveX DocObjects
Distributed Processing
XDR/RPC
RPC
COM/DCOM
Chapter 7
What is an Internet/Intranet Firewall After All?
What are Firewalls After All?
The Purpose of a Firewall
The Firewall Role of Protection
Administrating a Firewall
Management Expertise
System Administration
Circuit-Level Gateways and Packet Filters
Packet Filtering
Application Gateways
IP-Level Filtering
Chapter 8
How Vulnerable Are Internet Services?
Protecting and Configuring Vulnerable Services
Electronic Mail Security Threats
Simple Mail Transfer Protocol (SMTP)
Preventing against E-mail Attacks
Be Careful With E-Mail Attachments
Post Office Protocol (POP)
Multipurpose Internet Mail Extensions (MIME)
File Transferring Issues
File Transfer Protocol (FTP)
Trivial File Transfer Protocol (TFTP)
File Service Protocol (FSP)
UNIX-to-UNIX Copy Protocol (UUCP)
The Network News Transfer Protocol (NNTP)
The Web and the HTTP Protocol
Proxying HTTP
HTTP Security Holes
Security of Conferencing
Watch These Services
Gopher
finger
whois
talk
IRC
DNS
Network Management Station (NMS)
Simple Network Management Protocol (SNMP)
traceroute
Network File System (NFS)
Confidentiality and Integrity
Chapter 9
Setting Up a Firewall Security Policy
Assessing Your Corporate Security Risks
Data Security
Understanding and Estimating the Threat
The Virus Threat
Outside Threats
Inside Threat
A Word About Security Holes
Setting up a Security Policy
A Security Policy Template
Chapter 10
Chapter 11
Proxy Servers
SOCKS
Tcpd, the TCP Wrapper
Setting Up and Configuring the Proxy Server
Chapter 12
Firewall Maintenance
Keeping Your Firewall in Tune
Monitoring Your System
Monitoring the Unmonitored Threats
Preventive and Curative Maintenance
Preventing Security Breaches on Your Firewall
Identifying Security Holes
Recycling Your Firewall
Chapter 13
Firewall Toolkits And Case Studies
The TIS Internet Firewall Toolkit
Case Studies: Implementing Firewalls
Firewalling a Big Organization: Application-Level Firewall and Packet Filtering, a Hybrid System
Firewalling a Small Organization: Packet Filtering or Application-Level Firewall, a Proxy Implementation
Chapter 14
Types of Firewalls and Products on the Market
Check Point's FireWall-1 Firewall - Stateful Inspection Technology
FireWall-1 Inspection Module
Full State Awareness
Securing "Stateless" Protocols
The INSPECT Language
Stateful Inspection: Under the hood
Extensible Stateful Inspection
The INSPECT Engine
Securing Connectionless Protocols such as UDP
Securing Dynamically Allocated Port Connections
Firewall-1 Performance
Systems Requirements
CYCON’s Labyrinth Firewall - The "Labyrinth-like" System
An Integrated Stateful Inspection
Intelligent Connection Tracking
Redirecting Traffic
Transparent Redirection to Fault-Tolerant Systems*
Diverting Scanning Programs*
Network Address Translation
Load Balancing of Connections
Multi-Host Load Balancing*
Proxying - Source Address Rewriting
Spoofing - Destination Address Rewriting
IPSec - Encryption
IPSec Filter*
IPSec Gateway*
Common Use*
Protection of Attached Networks and Hosts
Protection of Individual Hosts
Systems Requirements
NetGuard’s Guardian Firewall System - MAC Layer Stateful Inspection
Unprecedented Internet Management Tools
Visual Indicator of Enterprise-Wide Agent Activity:
Extended Gateway Information
Activity Monitoring Screen
Enhanced Activity Monitoring Screen:
Monitoring User’s Connectivity
Firewall Strategy Wizard
WAN Adapter Support
Logoff Command on Authentication Client
CyberGuard’s CyberGuard Firewall - Hardening the OS
The Trusted Operating System
Intuitive Remote Graphical User Interface (GUI)
Dynamic Stateful Rule Technology
Certifiable Technology
Systems Requirements
Raptor’s Eagle Firewall - An application-level Architecture
Enforcing Security at All Levels of the Network
Reliance on Dedicated Security Proxies
Using Raptor’s Firewalls Eagle Family
Graphical Policy Configuration
Consistent Management- Locally or Remote
System Requirements
Solstice FireWall-1 3.0
Solstice FireWall-1 Features
Comprehensive Services Support
Encryption Support for Data Privacy - Virtual Private Networks
Client Authentication
Anti-Spoofing and SNMP Management
Secure Computing's BorderWare Firewall: Combining Packet Filters and Circuit-Level Gateways
The BorderWare Firewall Server
Transparency
Network Address Translation
Packet Filtering
Circuit-Level Gateway
Applications Servers
Audit Trails and Alarms
Transparent Proxies
BorderWare Application Services
Mail Servers (SMTP and POP)
Mail Domain Name Hiding*
POP Mail Server*
Anonymous FTP Server
News Server
Web Server
Finger (Information) Server
Encryption Features
Automatic Backups
Security Features
Ukiah Software’s NetRoad Firewall: a Multi-Level Architecture Firewall
Appendix A:
List of Firewall Resellers and Related Tools
AlterNet:
Atlantic Computing Technology Corporation
ARTICON Information Systems GmbH
Cisco Routers
Cohesive Systems
Collage Communications, Inc.
Conjungi Corporation
Cypress Systems Corporation, (Raptor reseller)
Data General Corp. (Gauntlet Reseller)
Decision-Science Applications, Inc.
E92 PLUS LTD
Enterprise System Solutions, Inc.(BorderWare reseller)
E.S.N - Serviço e Comércio de Informática Ltda.
FSA Corporation
IConNet
Igateway by Sun Consulting.
Chapter 15
Glossary
Bibliography & Webliography
Partial Webliography List
Preface
The Internet is an all-pervasive entity in today's world of computing. To cope with the "wild" Internet, several security mechanisms have been developed, among them access controls, authentication schemes and firewalls, the last being one of the most secure.
However, firewall means different things for different people. Consider the fable from India about the
blind men and the elephant. Each blind man touched a different part of the elephant and came up with a
totally different description. The blind man who touched the elephant’s legs described it as being similar
to a tree. Another blind man touched the tail and decided an elephant was like a twig. Yet another
grabbed the trunk and concluded an elephant was like a snake. To some computer professionals, even to some of those in charge of Internet security, firewalls are just "walls of fire" that keep hackers outside. To others, a firewall is only a form of authentication mechanism. Still others consider firewalls to be synonymous with routers. Obviously, a firewall is much more than any of these individually.
The problem is only compounded by the fact that for a lot of computer and security professionals, firewalls were touched upon only fleetingly in their academic careers; worse, many first bumped into them in the computer room. Also, many of the important parts and features of firewalls are recent innovations, and thus were never covered in an academic program or in most of the 1995-1996 firewall books, which further aggravates the problem: right now there is no single book these professionals can turn to. Their only recourse is to peruse a wide array of literature, including textbooks, web pages, computer magazines, white papers, and so on.
This book, the Complete Firewall Handbook, aims to become your companion book, the one you will always want to carry with you, as it does claim to be complete! I can assure you, there may be some similar books on the market, but none is as complete as this one, and none provides a comparable reference guide. The other titles I know of either discuss a specific technology and strategy or a single product. Although you can compare this book to those, as it also covers firewall technologies, strategies and all the main firewall products on the market, it goes beyond the scope of the others. In addition, it provides a complete reference guide to the various protocols, including the upcoming ones (IPv6, for example), and how firewalling fits into them.
In fact, this book adds a new level to your expertise by discussing all the components that make the Internet, and any other network for that matter, insecure: it discusses and describes in detail all the protocols, standards and APIs used in internetworking, as well as the security mechanisms, from cryptography to firewalls. Later in the book there is a "reference" section with a complete review of the major firewall products available on the market to date, plus a selection of tools, applications and many firewall demos and evaluations, which are all bundled together on the CD that accompanies this book.
This book is aimed primarily at network administrators, including Web, systems, LAN and WAN administrators. But it is also targeted at the new breed of professionals, the so-called Internet Managers, as well as at anyone in need of a complete reference book on firewalls. As you read this book you will notice that what separates it from others is that this one is comprehensive, and gives you the technical information necessary to understand, choose, install, maintain and foresee future needs involving firewalls and security at a very informal level. It has a conversational style with practical information, tips and cautions, to help the Internet, network and security administrator cope with, and "survive," their tasks and responsibilities.
As important as implementing a firewall at your site is, it must be preceded by a security policy that takes into consideration the services to be blocked and allowed. The policy should also consider the implementation of authentication and encryption devices and the level of risk you are willing to undertake in order to be connected to the Internet. This book discusses all of these topics and the issues they bring up when dealing with site security and administration. It goes over all the services, such as TELNET, FTP, the Web, e-mail, news, and so on.
Chapter 5, "Firewalling Challenges: The Advanced Web," covers the security of the advanced technologies behind the Web, such as ISAPI, NSAPI, Servlets, plug-ins, ActiveX, JavaScript, Shockwave and more.
Chapter 6, "The APIs Security Holes and Its Firewall Interactions," discusses the influence of APIs on network environments connecting to the Internet and the effects of their lack of security. It covers sockets, Java APIs, Perl modules, W3C www-lib and more.
Part II, "Firewall Implementations and Limitations," is a more practical part covering all aspects of firewall implementation, considering the security limitations and advantages of plugging in security as discussed in Part I in light of the multitude of protocols and standards. It discusses how to use the various types of firewalls in many different environments, and what to use where and how.
Chapter 7, "What is an Internet/Intranet Firewall After All?" discusses the basic components and
technology behind firewalls, extending the discussion to the advantages and disadvantages of using
firewalls, security policy and types of firewalls.
Chapter 8, "How Vulnerable Are Internet Services?" lists the major weaknesses of Internet services and what can be done to minimize the risks they generate for users and corporations attached to the Internet. The chapter discusses how to protect and configure electronic mail, SMTP, POP, MIME, FTP, TFTP, FSP, UUCP, News, and much more.
Chapter 9, "Setting Up a Firewall Security Policy," peels back another layer of the Internet security onion by discussing how to set up a firewall policy, what to look for and when enough security is really enough!
Chapter 10, "Putting It Together: Firewall Design and Implementation," begins to put everything discussed so far into action. It discusses how to implement a firewall, from planning and choosing the right firewall for your environment and needs to implementing it.
Chapter 11, "Proxy Servers," is vital to the success of the firewall implementation discussed in the previous chapter. It takes security a step further by showing how a proxy server can significantly enhance the level of security offered by a firewall. This chapter defines a proxy, shows how to implement it and introduces the concept of SOCKS and how to use it with your proxy server.
Chapter 12, "Firewall Maintenance," follows naturally from the two previous chapters. Once you set up your firewall and add a proxy server to it, you will need to get ready to maintain the firewall. This chapter will help you keep your firewall in tune, monitor your systems and perform preventive and curative maintenance on your firewall.
Chapter 13, "Firewall Toolkits And Case Studies," completes this section of the book by providing supplementary information and case studies on the subject.
Part III, "Firewall Resource Guide," expands the information contained in Chapter 13 by providing an extensive resource guide on firewalls. It discusses the major firewall technologies and brands, their advantages and disadvantages, what to watch for, what to avoid, and what to look for in a firewall product.
Chapter 14, "Types of Firewalls," provides a technical overview of the main firewall products available on the market as of the summer of 1997. It is an extensive selection of all the major vendors and their firewall technologies, so you can evaluate each one of them before deciding which firewall best suits your needs.
Part IV, "Appendixes," provides you with specifications of the best firewalls out there, list of vendors,
security companies, products and other resource utilities on firewalls, as well as a glossary of terms.
Appendix A, "List of Firewall Vendors and Products," provides you with a list of firewall vendors and
their products descriptions. Most of them have a demo or evaluation copy included in the CD that
accompanies this book.
Appendix B, "List of Firewall Utilities," provides a list of utilities and an overview of each one of them.
Appendix C, "Bibliography on Firewall," provides you with a list of complementary reading materials
such as books, white papers, articles, etc.
Appendix D, "Webliography on Firewalls," provides you with a list of URL links of sites offering white
papers, general and more technical information of firewall and proxy servers.
Appendix E, "Glossary of Terms," provides you with a comprehensive list of words and terms generally used in the firewall/Internet environment.
● Professionals involved with setting up, implementing and managing Intranets and Internet;
● Webmasters;
● Entry level (in terms of computer literacy) professionals who want to understand how the Internet
works rather than how to use the Internet;
● Advanced computer literate people who would use the book as a quick reference book.
He was one of the co-authors of "Web Site Administrator’s Survival Guide" (Sams.Net), the author of
"Protecting Your Web Site With Firewall" (PRT), the author of "Internet Privacy Kit" (Que), the
co-author of "Windows NT Server 4.0: Management and Control" (PTR), and the author of "Web
Security with Firewalls" (Axcel Books). He is also a regular contributor to BackOffice Magazine, WEBster Magazine, WebWeek and Developer's Magazine.
If you're interested in his articles, check the URL http://members.aol.com/goncalvesv/private/writer.htm. For complete background information, check the URL http://members.aol.com/goncalvesv.
If contacting the author, please send e-mail to [email protected] or [email protected].
Chapter 1
Internetworking Protocols and
Standards: An Overview
It has been said that the Internet is a very dynamic place. From the early research programs dating back to 1968, to its predecessor ARPANET, which contributed much to the platform of experimentation that would characterize the Internet, internetworking as we know it actually first took shape in 1973.
Since then, internetworking efforts and research have revolved largely around meeting the needs for standards of the new cyberspace communities joining the so-called Net. Of course, you must understand that the significance of "efforts" in the Internet environment goes beyond the dictionary meaning of the word. The Internet being so dynamic, so aggressive and outspoken, these efforts at problem resolution and standardization not only transcend the problems and barriers coming their way but, as David Crocker put it in Lynch and Rose's book, "Internet System Handbook" (1993), "the Internet standards process combines the components of a pragmatic engineering style with a social insistence upon wide-ranging input and review." Thus, "efforts" are more often the result of individual champions than of organizational planning or directives.
Unlike any other structure in the world, Internet protocols and standards are always proposed by the individual initiative of organizations or professionals. In order to understand how new protocols emerge and eventually become standards (do they?), you will need to get used to the acronym RFC, or Request for Comments. This process was initiated back in 1969, as a result of the dispersion of the Internet community's members. These documents, as the name suggests, were (and still are!) working documents, ideas, testing results, models and even complete specifications. The various members of the Internet community would read and respond, with comments, to the RFC submitted. If the idea (and its grounds!) were accepted by the community, it might then become a standard.
Not much has changed in the MO (modus operandi) of the Internet community with regard to RFCs and how they operate. However, back in 1969 there was only one network, and the community did not exceed 100 professionals. With its fast growth, the Internet began to require not only a body that would centralize and coordinate the efforts, but also "regulate" a minimum standard so that they could at
Tip:
If you want to get the RFC style guide, you should refer to RFC 1111. For more information
about submitting an RFC, send an e-mail message to [email protected]. For a list of RFCs,
retrieve the file rfc/rfc-index.txt.
Note:
For more detailed information about the IAB, the IETF and the IRTF, I suggest you get Lynch and Rose's book, "Internet System Handbook," as it is beyond the scope of this book to discuss their specifics.
It is not within the scope of this book to discuss every protocol used on the Internet. I have at least a couple of reasons for that:
1. These protocols are too numerous and in constant change (and will continue to change), so this book would not serve you well by trying, and
2. Our goal here is to concentrate on the security flaws specific to each of these protocols. By assessing their security issues you will not only be able to make a more informed decision when choosing a protocol but also understand why all the effort and fuss over security alternatives such as cryptography, firewalls and proxy servers becomes necessary.
Therefore, this chapter focuses on the major Internet protocols, their characteristics, weaknesses and strengths, and how they affect your connectivity and data exchange on the Internet. Table 1.1 lists the major protocols in use on the Internet.
Table 1.1 - RFCs Sent to the IETF on IP Support
1001  Protocol Standard for a NetBIOS Service on a TCP/UDP Transport: Concepts and Methods
● The maximum number of hops the datagram can be transported over the Internet/Intranet,
● IP options.
All the datagrams with local addresses are delivered directly by IP, and the external ones are forwarded to their next destination based on the routing table information.
IP also monitors the size of a datagram it receives from the host layer. If the datagram size exceeds the maximum length the physical network is capable of sending, IP will break the datagram up into smaller fragments according to the capacity of the underlying network hardware. These fragments are then reassembled at the destination before the datagram is finally delivered.
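To make the fragmentation rule concrete, here is a minimal sketch in Python (illustrative only, not a real IP stack). It assumes a 20-byte IP header and follows the IPv4 convention that fragment offsets are expressed in 8-byte units; the numbers in the example are hypothetical.

# Minimal sketch of IP-style fragmentation; not an actual protocol implementation.
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    max_data = (mtu - header_len) // 8 * 8   # fragment data must align on 8-byte units
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)
        fragments.append({"offset_units": offset // 8, "more_fragments": more, "data": chunk})
        offset += len(chunk)
    return fragments

# A 4000-byte datagram crossing an Ethernet link (MTU 1500) splits into
# three fragments carrying 1480, 1480, and 1040 bytes of data.
print([len(f["data"]) for f in fragment(b"x" * 4000, 1500)])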
IP connections are controlled by IP addresses. Every IP address is a unique network address that identifies a node on the network, whether on protected networks (LANs, WANs and Intranets) or on unprotected ones such as the Internet. IP addresses are used to route packets across the network much like the U.S. Postal Service uses ZIP codes to route letters and parcels throughout the country (the internal network, over which it has more control) and internationally (the external network, over which it has minimal control, if any!).
In a protected network environment such as a LAN, a node can be a PC using a simple LAN Workplace
for DOS (LWPD), in which case the IP address is set by modifying a configuration file during
installation of the LWPD software.
The Internet Protocol is the foundation of the Transmission Control Protocol/Internet Protocol (TCP/IP), a suite of protocols created especially to connect dissimilar computer systems, which is discussed in more detail later in this chapter.
IP Security Risks
If there were no security concerns about connectivity on the Internet, there would be no need for firewalls and other defense mechanisms, and I would probably already be in God's ministry somewhere in the world rather than writing a book about it. Solutions to the security concerns of IP-based protocols are widely available in both commercial and freely available utilities, but as you will realize throughout this book, most of the time a system still requires administrative effort to properly keep hackers at bay.
Of course, as computer security becomes more of a public matter, it is nearly impossible to list all of the tools and utilities available to address the security concerns of IP-based protocols. Throughout this book you are introduced to many mechanisms, hardware technologies and application software to help you audit the security of your network, but for now, let's concentrate on the security weaknesses of the protocols used for connections over the Internet by identifying the flaws and possible workarounds and solutions.
There are many other advanced tools out there to intercept an IP connection, but they are not easily available. Some even have the ability to insert data into a connection: while you are reading your e-mail, for example, all your personal files could suddenly start being transmitted across the wire to a remote site. The only sign would be a small delay in the delivery of the packets, and you probably would not notice it while reading your e-mail or browsing the Web! But don't go "bazooka" about it! Hijacking an IP connection is not as easy as it sounds when reading these paragraphs. It requires the attacker to be directly in the stream of the connection, which in most cases forces him or her to be at your site.
Tip:
If you want to learn more about similar tools for monitoring or hijacking IP connections on the Internet and on protected networks, check the following sites:
● http://cws.iworld.com - This site provides several 16- and 32-bit Windows (NT and Windows 95) Internet tools.
● http://www.uhsq.uh.edu - You will find several UNIX security tools on this site, with short and comprehensive descriptions for every tool.
● ftp://ftp.bellcore.com/pub/nmh, ftp://primal.iems.nwu.edu/pub/skey - These sites maintain the core S/Key software.
● ftp://ftp.funet.fi - Here you will find general security/cracking utilities such as npasswd, passwd+, traceroute (as shown in figure 1.3), whois, tcpdump, SATAN, and Crack. For faster searching of utilities, once in the site use ‘quote site find <find>’, where <find> is the phrase to look for on the file system. Using a web client, use ‘http://ftp.funet.fi/search:<find>’.
One more thing: be careful with the information you provide to the InterNIC! If you need a site on the Internet, you must apply for a domain name with the InterNIC. When you do that, you must provide information about the administrative and technical contacts at your organization, with their phone numbers, e-mail addresses, and a physical address for the site. Although this is a reasonable safety measure, if someone issues the UNIX command ‘whois <domainname>,’ as shown in figure 1.4, the utility will list all of the information you provided to the InterNIC.
Not that you should refuse to provide the information to the InterNIC. It is a requirement and is also there for your protection, but when completing this information keep in mind that hackers often use it to find out basic information about a site. Therefore, be conservative, be wise. For the contact names, for example, use an abbreviation or a nickname. Consulting the information at the InterNIC is usually the starting point for many attacks on your network.
During the spring of 1997, while I was coordinating a conversion from MS Mail to MS Exchange, my mailer went south (mea culpa!) and a few listservers were spammed as a result. Within hours one of our systems managers was getting a complaint phone call, at his home phone number, and the complainer knew exactly who to ask for! By using ‘whois’ the sysop of the spammed listserver was able to identify the name and address of the company I work for. Since it was a weekend, he could not talk to anyone about the problem, but with the systems manager's name and the city where our company is located, the sysop only had to do a quick search on query engines such as Four11 (http://www.four11.com) to learn the home address and phone number of our systems manager!
several flavors of UNIX and Windows NT and is currently priced based on the size of a site’s network.
IP Addresses
All the IP-based networks (Internet and LANs and WANs) use a consistent, global addressing scheme.
Each host, or server, must have a unique IP address. Some of the main characteristics of this address
scheme are:
● Addresses cannot be duplicated, so they won't conflict with other networks on the Internet,
● IP addressing allows a very large number of hosts or networks to connect to the Internet and other networks,
● IP addresses allow networks using different hardware addressing schemes to become part of dissimilar networks.
Rules
IP addresses are composed of four one-byte fields separated by periods (dotted decimal notation). For example,
1.3.0.2 192.89.5.2 142.44.72.8
Tip:
You can always find the IP address of a host or node on the Internet by using the PING command,
as shown on figure 1.9.
Host names are usually assigned by the LAN administrator as he or she adds a new node to the network and enters its address in the DNS (Domain Name System) database.
Tip:
Never base a host name on a specific user or on a computer's location, as these characteristics tend to change frequently. Also, keep your host names short, easy to spell, and free of numbers and punctuation.
The address class determines the network mask of the address. Hosts and gateways use the network mask
to route internet packets by:
1. Extracting the network number of an internet address.
2. Comparing the network number with their own routing information to determine if the packet is
bound for a local address
The network mask is a 32-bit internet address where the bits in the network number are all set to one and
the bits in the host number are all set to zero.
Table 1.2 lists the decimal value of each address class with its corresponding network mask. The first
byte of the address determines the address class. Figure 1.9 shows the decimal notation of internet
addresses for address classes A, B, and C.
Table 1.2 - Internet Address Classes
Class   First Byte      Network Mask
A       1. to 127.      255.0.0.0
B       128. to 191.    255.255.0.0
C       192. to 223.    255.255.255.0
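The following short Python sketch ties the pieces together: it derives the class of a dotted-decimal address from its first byte, looks up the corresponding mask from Table 1.2, and extracts the network number. It is purely illustrative, not part of the book's tools; the sample address is one of the examples shown earlier.

# Illustrative sketch: address class, network mask, and network number extraction.
import ipaddress

CLASS_MASKS = {"A": "255.0.0.0", "B": "255.255.0.0", "C": "255.255.255.0"}

def address_class(addr: str) -> str:
    first_byte = int(addr.split(".")[0])
    if first_byte < 128:
        return "A"
    if first_byte < 192:
        return "B"
    if first_byte < 224:
        return "C"
    return "D/E"   # multicast and experimental ranges

def network_number(addr: str) -> str:
    mask = CLASS_MASKS[address_class(addr)]
    net = ipaddress.ip_network(f"{addr}/{mask}", strict=False)
    return str(net.network_address)

print(address_class("142.44.72.8"), network_number("142.44.72.8"))   # B 142.44.0.0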
Note:
Class D addresses are used for multicasting. Values 240 to 255 are reserved for Class E, which are
experimental and not currently in use.
IP Spoofing
A common method of attack, called IP spoofing involves imitating the IP address of a "trusted" host or
router in order to gain access to protected information resources. One avenue for a spoofing attack is to
exploit a feature in IPv4 known as source routing, which allows the originator of a datagram to specify
certain, or even all intermediate routers that the datagram must pass through on its way to the destination
address. The destination router must send reply datagrams back through the same intermediate routers.
By carefully constructing the source route, an attacker can imitate any combination of hosts or routers in
the network, thus defeating an address-based or domain-name-based authentication scheme.
Therefore, you can say that you have been "spoofed" when someone bypasses your address-based authentication by creating packets with forged IP addresses. But what is this "IP spoofing" anyway?
Basically, spoofing is a technique actually used to reduce network overhead, especially in wide area
networks (WAN). By spoofing you can reduce the amount of bandwidth necessary by having devices,
such as bridges and routers, answer for the remote devices. This technique fools (spoofs) the LAN device
into thinking the remote LAN is still connected, even though it is not. However, hackers use this same
technique as a form of attack on your site.
Figure 1.10 explains how spoofing works. Hackers can use IP spoofing to gain root access by creating packets with spoofed source IP addresses. This tricks applications that use authentication based on IP addresses and leads to unauthorized user access, and very possibly root access, on the targeted system. Spoofing can be successful even through firewalls if they are not configured to filter incoming packets whose source addresses are in the local domain.
You should also be aware of routers to external networks that are supporting internal interfaces. If you
have routers with two interfaces supporting subnets in your internal network, be on alert, as they are also
vulnerable to IP spoofing.
Tip:
For additional information on IP spoofing, please check Robert Morris's paper "A Weakness in the 4.2BSD UNIX TCP/IP Software," at URL ftp.research.att.com:/dist/internet_security/117.ps.Z
When spoofing an IP address to crack into a protected network, hackers (or crackers, for that matter!) are able to bypass one-time passwords and authentication schemes by waiting until a legitimate user connects and logs in to a remote site. Once the user's authentication is complete, the hacker seizes the connection, compromising the security of the site thereafter. This is most common on SunOS 4.1.x systems, but it is possible on other systems as well.
You can detect IP spoofing by monitoring the packets. Use netlog, or similar network-monitoring software, to look for packets on the external interface that have both addresses, source and destination, in your local domain. If you find one, it means that someone is tampering with your system.
Tip:
Netlog can be downloaded through anonymous FTP from URL:
ftp://net.tamu.edu:/pub/security/TAMU/netlog-1.2.tar.gz
Another way for you to detect IP spoofing is by comparing the process accounting logs between systems
on your internal network. If there has been an IP spoofing, you might be able to see a log entry showing a
remote access on the target machine without any corresponding entry for initiating that remote access.
As mentioned before, the best way to prevent and protect your site from IP spoofing is to install a filtering router that restricts input to your external interface by not allowing a packet through if it has a source address from your internal network. Following CERT's recommendations, you should also filter outgoing packets that have a source address different from your internal network, in order to prevent a source IP spoofing attack from originating at your site, as shown in figure 1.11; much more will be said about this in the chapters to come.
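The filtering logic CERT recommends can be expressed in a few lines. The sketch below is plain Python rather than any particular router's filter syntax, and the internal network prefix is a made-up example; it simply shows the ingress and egress checks described above.

# Sketch of ingress/egress anti-spoofing checks; not a real router configuration.
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("192.89.5.0/24")   # hypothetical internal network

def allow(source_addr: str, arriving_on: str) -> bool:
    src_is_internal = ip_address(source_addr) in INTERNAL_NET
    if arriving_on == "external" and src_is_internal:
        return False   # ingress filter: an outside packet claims an inside source
    if arriving_on == "internal" and not src_is_internal:
        return False   # egress filter: an inside host is forging an outside source
    return True

print(allow("192.89.5.2", "external"))    # False - likely spoofed
print(allow("142.44.72.8", "external"))   # True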
Caution:
If you believe that your system has been spoofed, you should contact the CERT Coordination
Center or your representative in Forum of Incident Response and Security Teams (FIRST).
CERT staff strongly advise that e-mail be encrypted. The CERT Coordination Center can support
a shared DES key, PGP (public key available via anonymous FTP on info.cert.org), or PEM
(contact CERT staff for details).
Internet E-mail: [email protected] or Telephone: +1 412-268-7090 (24-hour hotline)
Figure 1.16
MBONE Configuration screen
The configuration shown in figure 5.20 allows the mrouted machine to connect through tunnels to other regional networks over the external DMZ and the physical backbone network, and to connect through tunnels to the lower-level mrouted machines over the internal DMZ, thereby splitting the load of the replicated packets.
The only problem in promoting MBONE is that the most convenient platform for it is a Sun SPARCstation. You can use a VAX or MicroVAX, or even a DECstation 3100 or 5000, running Ultrix 3.1c, 4.1 or 4.2a. But our typical Web server OS won't do it. In this case, you must rely on Internet Service Providers (ISPs).
Note:
The following is a partial list of ISPs participating in the MBONE:
AlterNet - [email protected]
CERFnet - [email protected]
CICNet - [email protected]
CONCERT - [email protected]
Cornell - [email protected]
JANET - [email protected]
JvNCnet - [email protected]
Los Nettos - [email protected]
NCAR - [email protected]
NCSAnet - [email protected]
NEARnet - [email protected]
OARnet - [email protected]
PSCnet - [email protected]
PSInet - [email protected]
SESQUINET - [email protected]
SDSCnet - [email protected]
SURAnet - [email protected]
UNINETT - [email protected]
One of the limitations of MBONE concerns audio, which is still troublesome, especially on Windows NT systems, as it requires you to download an entire audio program before it can be heard. Fortunately, there are now systems available that avoid this problem by playing the audio as it is downloaded. The following is a list of some of them that I have tested with Windows 95 and Windows NT 3.51 and 4.0 Beta 2:
● RealAudio - Developed by Progressive Networks. You can download an evaluation copy from
their URL at: http://www.realaudio.com. This player communicates with a specialized RealAudio
server in order to play back audio as it is downloaded, which eliminates the delays during
download, especially with slow modems. It also supports a variety of quality levels and non-audio
features such as HTML pages displayed in synchronization with the audio. RealAudio players are
available for Microsoft Windows, the Macintosh, and several UNIX platforms.
● Winplay - Winplay offers very high quality audio using MPEG Level 3 compression. To the best of my knowledge, this feature is not available in any other similar product out there. Unfortunately, it is available for Windows 3.x only. You can download it from URL: ftp://ftp.uoknor.edu, or from the Institute for Integrated Circuits home page, in Germany, at URL: http://www.iis.fhg.de/departs/amm/layer3/winplay3.
● VocalTec - This is a well-known player, which offers streaming audio technology for the Web, but it is available for Microsoft Windows only. You can check their URL at http://www.vocaltec.com
Multicast packets are designated with a special range of IP addresses: 224.0.0.0 to 239.255.255.255. This range, as discussed above, is known as "Class D Internet Addresses." The Internet Assigned Numbers Authority (IANA) has given the MBONE (which is largely used for teleconferencing) the Class D subset of 224.2.*.*. Hosts choosing to communicate with each other over MBONE set up a session using one IP address from this range. Thus, multicast IP addresses are used to designate a group of hosts attached by a communication link rather than a group connected by a physical LAN. Also, each host in the session temporarily adopts the same IP address. After the session is terminated, the IP address is returned to the "pool" for re-use by other sessions involving different hosts.
There are still some problems to be resolved before MBONE can be fully implemented on the Internet. Since multicasts between multiple hosts on different subnets must be physically transmitted over the Internet, and not all routers are capable of multicasting, the multicast IP packets must be tunneled (which makes MBONE a virtual network) to look like unicast packets to ordinary routers. These multicast IP datagrams are first encapsulated by the source-end mrouter in a unicast IP header that has the destination and source address fields set to the IP addresses of the tunnel-end-point mrouters, and the protocol field set to "IP," which indicates that the next protocol in the packet is also IP. The destination mrouter then strips off this header, reads the "inner" multicast session IP address, and either forwards the packet to its own network hosts or re-encapsulates the datagram and forwards it to other mrouters that serve, or can forward to, session group members.
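The encapsulation step can be pictured with a toy model. The sketch below represents headers as Python dictionaries rather than real packet bytes, and the addresses are made up; the protocol number 4 ("IP in IP") is the value conventionally used for this kind of encapsulation.

# Toy model of IP-in-IP tunneling between two mrouters (illustrative only).
IPIP_PROTOCOL = 4   # conventional protocol number for IP-in-IP encapsulation

def encapsulate(multicast_datagram: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    # Wrap the multicast datagram in a unicast header addressed to the far mrouter.
    return {"src": tunnel_src, "dst": tunnel_dst,
            "protocol": IPIP_PROTOCOL, "payload": multicast_datagram}

def decapsulate(tunneled: dict) -> dict:
    # The receiving mrouter strips off the outer header and recovers the inner datagram.
    assert tunneled["protocol"] == IPIP_PROTOCOL
    return tunneled["payload"]

inner = {"src": "192.89.5.2", "dst": "224.2.143.24", "payload": b"audio frame"}
outer = encapsulate(inner, "142.44.72.8", "128.9.160.29")
print(decapsulate(outer)["dst"])   # 224.2.143.24 - the multicast session address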
Note:
For more information about MBONE, check Vinay Kumar's book "MBONE: Interactive Multimedia on the Internet," published by New Riders, 1996.
igmp yes | no | on | off [ {
	queryinterval sec ;
	timeoutinterval sec ;
	traceoptions trace_options ;
} ] ;
The igmp statement on the first line enables or disables the IGMP protocol. If the igmp statement is not specified, the default is igmp off. If enabled, IGMP will default to enabling all interfaces that are both broadcast and multicast capable. These interfaces are identified by the IFF_BROADCAST and IFF_MULTICAST interface flags. IGMP must be enabled before any of the IP multicast routing protocols are enabled.
Note:
For complete information about IGMP functionality and options, please check RFC 1112 or
Intergate’s URL at http://intergate.ipinc.com/support/gated/new/node29.html
Tip:
What is IGP anyway?
The Interior Gateway Protocol (IGP) is an Internet protocol designed to distribute routing information to the routers within an autonomous system. To better understand the nature of this IP protocol, just substitute the term "gateway" in the name, which is more of a historical artifact, with the term "router," which is the more accurate and preferred term.
All routers supporting OSPF exchange routing information within an autonomous system using a link-state algorithm, issuing routing update messages only when a change in topology occurs. In this case, the affected router immediately notifies its neighboring routers about the topology change only, instead of sending the entire routing table. By the same token, the neighboring routers pass the updated information on to their neighbors, and so on, reducing the amount of traffic on the internetwork. The major advantage of this is that since topology change information is propagated immediately, network convergence is achieved more quickly than when relying on the timer-based mechanism used with RIP. Hence, OSPF is increasingly being adopted within existing autonomous systems that previously relied on RIP's routing services, especially because OSPF routers can simultaneously support RIP for router-to-endstation communications and OSPF for router-to-router communications. This is great because it ensures communications within an internetwork and provides a smooth migration path for
few packets across the connection to trigger. A router upgrade will sometimes mean further expense in
memory or firmware upgrades, but as a critical piece of equipment, it should not be neglected.
Other than updating the software, disabling remote management is often key to preventing both
denial-of-service attacks and remote attacks to try to gain control of the router. With a remote
management port open, attackers have a way into the router. Some routers fall victim to brute-force
attempts against their administrative passwords. Quick scripts can be written to try all possible password
combinations, accessing the router only once per try to avoid being detected. If there are so many routers
that manual administration is a problem, then perhaps investigating network switch technology would be
wise. Today’s switches are replacing yesterday’s routers in network backbones to help simplify such
things.
Note:
Many of the original proposed security aspects of SNMPv2 were made optional or removed from
the Internet Standards track SNMPv2 specification in March 1996. There is now a new
experimental security protocol for SNMPv2 that has been proposed.
Nevertheless, SNMP is the standard protocol used to monitor and control IP routers and attached
networks. This transaction-oriented protocol specifies the transfer of structured management information
between SNMP managers and agents. An SNMP manager, residing on a workstation, issues queries to
gather information about the status, configuration, and performance of the router.
advanced. A session can be transparently hijacked and the user will simply think that the network is
lagging. Such hijacking does, however, require that the attacker be in the stream somewhere and an ISP
is a wonderful place to perch.
Address Expansion
One of the main motivations for IPv6 is the rapid exhaustion of the available IPv4 network addresses. To assign a network address to every car, machine tool, furnace, television, traffic light, EKG monitor, and telephone, we will need hundreds of millions of new network addresses. IPv6 is designed to address this problem globally, providing billions of billions of addresses with its 128-bit architecture.
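A back-of-the-envelope comparison shows why the jump from 32 to 128 bits matters; the two lines of Python below simply print the size of each address space.

# Size of the IPv4 versus IPv6 address space.
print(2**32)    # 4294967296 - roughly 4.3 billion IPv4 addresses
print(2**128)   # about 3.4e38 IPv6 addresses - "billions of billions"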
Security
At this point in the book, it goes without saying that there is a major security concern shared by senior IT professionals and CEOs when connecting their organizations to Intranets and to the Internet. For everyone connected to the Internet, invasion of privacy is also a concern, as IP connections are beginning to invade even coffee makers. Fortunately, IPv6 will have a whole host of new security features built in, including system-to-system authentication and encryption-based data privacy. These capabilities will be critical to the use of the Internet for secure computing.
Real-Time Performance
One barrier to adoption of TCP/IP for real-time, and near real-time, applications has been the problem of
response time and Quality of Service. By taking advantage of IPv6's packet prioritization feature TCP/IP
now becomes the protocol of choice for these applications.
Multicasting
The designs of current network technologies were based on the premise of one-to-one or one-to-all communications. This means that applications distributing information to a large number of users must build a separate network connection from the server to each client. IPv6 provides the opportunity to build applications that make much better use of server and network resources through its "multicasting" option. This allows an application to "broadcast" data over the network, where it is received only by those clients that have subscribed to it. Multicast technology opens up a whole new range of potential applications, from efficient news and financial data distribution, to video and audio distribution, and more.
There are many more IPv6 features and implementation details that could be discussed, but for our purpose here, let's concentrate on IPv6's promises, specifically with regard to security.
IPv6 Security
Users want to know that their transactions and access to their own sites are secure. Users also want to
increase security across protocol layers. Up until IPv6, as discussed throughout this whole book, security
Tip:
For more up-to-date information, check the IPv6 Resource Center of Process Software Corporation, one of the leaders in TCP/IP solutions.
Firewalls Concepts
By now, with only an overview of internetworking protocols and standards, you should assume that every piece of data sent over the Internet can be stolen or modified. The way the Internet is organized, every site takes responsibility for its own security. If a hacker can take over a site that sits at a critical point in the path of a user's communications, then all of the data that the user sends through that site is completely at the hacker's whim. Hackers can intercept unencrypted credit cards, telnet sessions, ftp sessions, letters to Grandma, and just about anything else that comes across the wire.
Just as you should not blindly trust your upstream feed, be careful with the information that is sent to remote sites. Who controls the destination system should always be in question.
Firewalls are designed to keep unwanted and unauthorized traffic from an unprotected network like the Internet out of a private network like your LAN or WAN, while still allowing you and other users of your local network to access Internet services. Figure 1.18 shows the basic purpose of a firewall.
Figure 1.18
Basic function of a firewall
Most firewalls are merely routers, as shown in figure 1.19, filtering incoming datagrams based upon the datagram's source address, destination address, higher-level protocol, or other criteria specified by the private network's security manager or security policy.
Figure 1.19
Packet filtering at a router level
More sophisticated firewalls employ a proxy server, also called a bastion host, as shown in figure 1.20. The bastion host prevents direct access to Internet services by your internal users, acting as their proxy, while filtering out unauthorized incoming Internet traffic.
Figure 1.20
A proxy server prevents direct access to and from the Internet.
The purpose of a firewall, as a security gate, is to provide security to the components inside the gate, controlling who (or what) is allowed into this protected environment as well as who is allowed out. It works like a security guard at a front door, controlling and authenticating who can or cannot have access to the site.
A firewall is set up to provide controllable filtering of network traffic, allowing restricted access to certain Internet port numbers and blocking access to almost everything else. In order to do that, it must function as a single point of entry. That is why you will often find firewalls integrated with routers.
Therefore, you should choose your firewall system based on the hardware you already have installed at
your site, the expertise you have available in your department and the vendors you can trust.
Note:
Such is the need for firewalls that, according to the journal CommunicationsWeek (April 8, 1996), the Computer Security Institute of San Francisco, CA, surveyed organizations last year and found that almost half of those surveyed already deploy firewalls, and of those that did not, 70 percent were planning to install them.
Usually, firewalls are configured to protect against unauthenticated interactive login from the "outside"
world. Protecting your site with firewalls can be the easiest way to promote a "gate" where security and
audit can be imposed.
With firewalls you can protect your site from arbitrary connections and can even set up tracing tools, which can provide summary logs about the origin of connections coming through, the amount of traffic your server is handling, and even whether there have been any attempts to break into it.
One of the basic purposes of a firewall should be to protect your site against hackers. As discussed earlier, your site is exposed to numerous threats, and a firewall can help you. However, it cannot protect you against connections that bypass it. Therefore, be careful with backdoors such as modem connections to your LAN, especially if your Remote Access Server (RAS) is inside the protected LAN, as it typically is.
Nevertheless, a firewall is not infallible; its purpose is to enhance security, not guarantee it! If you have very valuable information on your LAN, your Web server should not be connected to it in the first place. You must be careful with groupware applications that allow access to your Web server from within the organization, or vice versa.
Also, if you have a Web server inside your internal LAN, watch for internal attacks against it as well as against your corporate servers. There is nothing a firewall can do about threats coming from inside the organization. An upset employee, for example, could pull the plug on your corporate server, shutting it down, and there is nothing a firewall would be able to do about it!
Packet filtering has always been a simple and efficient way of filtering out unwanted inbound packets by intercepting data packets, reading them, and rejecting those not matching the criteria programmed into the router.
Unfortunately, packet filtering is no longer sufficient to guarantee the security of a site. The threats are many, and so are the new protocol innovations, which can bypass those filters with very little effort.
For instance, packet filtering is not effective with the FTP protocol, because FTP requires the external server being contacted to make connections back (from its port 20) to the client in order to complete data transfers. Even if a rule is added on the router to allow such traffic, ports on the internal network machines are still left open to probes from the outside, since any outsider can originate connections from port 20. Besides, as seen earlier, hackers can easily "spoof" these routers. Firewalls make these strategies much harder, if not nearly impossible.
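The hole can be seen in a two-line rule. The sketch below (plain Python, not any vendor's filter language) models the naive "allow FTP data back in" rule: because it trusts the source port alone, an attacker who simply binds to port 20 can reach arbitrary internal ports; the port numbers in the example are hypothetical.

# Naive packet-filter rule for FTP data connections, and why it is too permissive.
def naive_ftp_rule(src_port: int, dst_port: int) -> bool:
    # "Allow FTP data connections back in": anything from source port 20 is accepted.
    return src_port == 20

print(naive_ftp_rule(20, 1421))   # True - a legitimate FTP data transfer
print(naive_ftp_rule(20, 23))     # True - an attacker probing telnet from "port 20"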
When deciding to implement a firewall, however, first you will need to decide on the type of firewall to
be used (yes, there are many!) and its design. I’m sure this book will greatly help you in doing so!
You should also know that there is a kind of commercial firewall product, often called an OS shield, that is installed over the operating system. Although they became somewhat popular, combining packet filtering with proxy applications capable of monitoring the data and command streams of any protocol, OS shields were not very successful, owing to the specifics of their configuration: not only were their configurations invisible to administrators, since they were set at the kernel level, but they also forced administrators to introduce additional products to help manage the server's security.
Firewall technology has come a long way. Besides the so-called traditional, or static, firewalls, today we have what is called "dynamic firewall technology."
The main difference is that, where a static firewall's purpose is to "permit any service unless it is expressly denied" or to "deny any service unless it is expressly permitted," a dynamic firewall will "permit/deny any service for when and as long as you want."
This ability to adapt to network traffic and design offers a distinctive advantage over the static packet
filtering models.
attack, unless an administrator mail-filters on ’phf’, which places a high demand on the firewall.
The key to dealing with this limitation is in treating a firewall as a way of understanding the
configuration of internal services. The firewall will only allow certain services to be accessed by users on
the Internet. These known services can then be given special attention to make sure that they are the
latest, most secure versions available. In this way, the focus can shift from hardening an entire network,
to just hardening a few internal machines and services.
More will be said about this in Chapter 4, "Firewalling Challenges: the Basic Web," Chapter 5, "Firewalling Challenges: the Advanced Web," and Chapter 8, "How Vulnerable Are Internet Services?"
Authentication Issues
Firewalls and filtering routers tend to behave in a rather binary fashion: either a connection is allowed into a system or it is not. Authentication allows service connections to be based on the authentication of the user, rather than on the source or destination address.
services and machines to be reached while others can only access rudimentary systems. Firewalls often
play a large role in user-based service authentication, but some servers can be configured to understand
this information as well. Current Web servers can be configured to understand which users are allowed to
access which sub-trees and restrict users to their proper security level.
Authentication comes in many varieties: cryptographic tokens, one-time passwords, and the most commonly used and least secure of all, the simple text password. It is up to the administrators of a site to determine which form of authentication to require for which users, but it is commonly agreed that some form should be used. Proper authentication can allow administrators from remote sites to come into a network and correct problems. This sort of connection would be a prime candidate for a strong method of authentication such as a cryptographic token.
Intranets
Resources provided by Intranets are rapidly becoming a staple within information systems groups. They promise to provide a single resource that everyone can access to enrich their work. Switching to a paperless information distribution system, however, is not always as grand as it looks. Placing all of an organization's internal documentation in one place is akin to waving a giant red flag and expecting people not to notice.
Perhaps I'm coining a new word, but "Intra-Intranets" are often a wise solution to this issue. Keeping critical data within the workgroup and non-critical data in a separate Intranet is a viable alternative: use different systems to store subgroup data, and one main system for the whole organization. Policies should be developed for what is allowable on the main system, to help keep proprietary material away from public or near-public access.
From Here…
This chapter provided a comprehensive overview of many of the most used internetworking protocols and standards, some of the security concerns associated with them, and the basic role of firewalls in enhancing the security of the connections you make across the Internet and receive within your protected network.
The issue of basic connectivity then becomes very important for many organizations. There are indeed many ways to get connected to the Internet, some more effective than others due to their ability to interact with a variety of environments and computers.
Chapter 2, "Basic Connectivity," discusses the forms basic connectivity can take on the Internet, through UUCP, SLIP, PPP, Rlogin and TELNET.
Chapter 2
Basic Connectivity
As you saw in Chapter 1, "Internetworking Protocols and Standards: An Overview," the popularity of
TCP/IP and all the standards and protocols derived from it makes the issue of basic connectivity critical
for many organizations. As we realize, there are indeed many ways to get connected to the Internet,
some more effective than others due to their ability to interact with a variety of environments and
computers.
Regardless of whether you look at the Internet as a verb, internetworking a couple of LANs or WANs, or as a
noun, comprising two or more different networks, you will have to get down to basics when
talking about connectivity: how you will connect clients, servers and networks (LAN and WAN), and,
ultimately, how you will protect these connections. We also discussed in Chapter 1 the many
protocols in use on the Internet, as well as those being developed and proposed (IPv6). But can
your organization take advantage of the Internet, or of IP technology for that matter? What kind of
topology do you have in your company? How are file transfer, electronic mail, host terminal emulation
sessions, hardware integration and, most of all, security handled at your company?
These are issues that you need to have clear in your mind so that you can communicate with management
and MIS in your company and focus on the technology you need to effectively deploy your basic
connectivity plan, no matter whether you are just starting or already have a large and complex network system.
When internetworking, you must keep the focus on secure connections and on how you intend to deploy them.
TCP/IP technology will provide the basic connectivity that is needed within any organization as it
collects, analyzes, and distributes information. Advanced knowledge of rapidly evolving storage
technologies will always be essential to accommodate voice, video, and other broad bandwidth sources
of information. But you must be ready to choose and deploy the right protocol, the right technology, for
the kind of connectivity you need, securely; after all, the Internet, as is discussed later in more detail, is
a wild place!
Since the Internet is basically a virtual network that allows users to communicate with all connected
servers and hosts as if they were part of a local network, the details of this
network need to be hidden from the users. This is where the basic connectivity requirements start, and the basis
on which this virtual network exists is actually provided by the TCP/IP suite. The many protocols, as
seen in Chapter 1, establish the format and the rules that must be followed for information to be
exchanged between systems. But how are services provided to network users, and what are the security
issues surrounding them?
TCP/IP defines a wide range of application layer protocols that provide services to network users,
including remote login, file copying, file sharing, electronic mail, directory services, and network
management facilities. Some application protocols are widely used; others are employed only for
specialized purposes. Although throughout this chapter we will only concentrate on some of these
protocols and their security weaknesses, the following are the most commonly used TCP/IP application
layer protocols (a minimal reachability check is sketched right after the list):
● PING - According to the Computer Dictionary (http://nightflight.com/foldoc/), PING was
probably originally contrived to match submariners' term for the sound of a returned sonar pulse!
In practice, this is a program used to test network connectivity by sending a host one, or repeated,
ICMP echo requests and waiting for replies. Since ping works at the IP level, its server side is often
implemented entirely within the operating system kernel and is thus pretty much the lowest-level
test of whether a remote host is alive. Ping will often respond even when higher-level, TCP-based
services cannot.
● TELNET - This is the Internet standard protocol for remote login. It runs on top of TCP/IP.
● Rlogin - Similar to TELNET, Rlogin is the 4.2BSD UNIX utility to allow a user to log in on
another host via a network. Rlogin communicates with a daemon on the remote host.
● Rsh - The acronym stands for "Remote shell." This is a Berkeley UNIX networking command to
execute a given command on a remote host, passing it input and receiving its output. Rsh
communicates with a daemon on the remote host.
● FTP - Acronym for File Transfer Protocol, this is a client-server protocol that enables file
transfer between two computers over a TCP/IP network.
● TFTP - Acronym for Trivial File Transfer Protocol. Very similar to FTP, this is a simple
file transfer protocol usually used for downloading boot code to diskless workstations.
● SMTP - Acronym for Simple Mail Transfer Protocol, this protocol is used to transfer electronic
mail between computers.
● Kerberos - This is an authentication system developed at MIT, based on symmetric key
cryptography.
● X Windows - A specification for device-independent windowing operations on bitmap display
devices.
● DNS - A general-purpose distributed, replicated data query service chiefly used on the Internet
for translating hostnames into Internet addresses.
● NFS - Acronym for Network File System, a protocol that allows a computer to access files over a
network as if they were on its local disks.
● SNMP - Acronym for Simple Network Management Protocol, which is the Internet standard
protocol used to manage nodes on an IP network.
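A real ping needs raw ICMP sockets (and usually root privileges), but the spirit of these connectivity checks can be illustrated with a plain TCP connect. The Python sketch below simply asks whether anything answers on a few well-known ports; the host name used is a placeholder, and you should, of course, only probe machines you administer.

import socket

def service_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "mail.example.com"                      # placeholder; use a host of your own
    for name, port in (("TELNET", 23), ("SMTP", 25), ("HTTP", 80)):
        status = "reachable" if service_reachable(host, port) else "no answer"
        print(f"{name} (port {port}) on {host}: {status}")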
Note:
There is software, as shown on the screenshot of figure 2.1, that allows your modem to talk with
a TTY modem that has the ASCII mode option turned on. You can find more information about it
at the URL http://tap.gallaudet.edu/asciitdd.htm.
Code    Hex  Letters  Figures
00011   03   A        -
11001   19   B        ?
01110   0E   C        :
01001   09   D        $
00001   01   E        3
01101   0D   F        !
11010   1A   G        &
10100   14   H        #
00110   06   I        8
01011   0B   J        BELL
01111   0F   K        (
10010   12   L        )
11100   1C   M        .
01100   0C   N        ,
11000   18   O        9
10110   16   P        0
10111   17   Q        1
01010   0A   R        4
00101   05   S        ‘
10000   10   T        5
00111   07   U        7
11110   1E   V        ;
10011   13   W        2
11101   1D   X        /
10101   15   Y        6
10001   11   Z        "
01000   08   CR       CR
00010   02   LF       LF
00100   04   SP       SP
Where ‘CR’ is carriage return, ‘LF’ is linefeed, ‘BELL’ is the bell, ‘SP’ is space, and ‘STOP’ is the stop
character.
The main purpose of UUCP is to copy files from one host to another, but it also allows certain actions to be performed on the remote
host. Just make sure not to confuse UUCP with uucp: the first was named after the latter, but they are not
the same thing.
Tip:
These are the key UUCP programs:
uucp - requests file transfers between remote machines
uux - requests command execution on remote machines, such as mail transfers
uuxqt - processes remote requests (both uucp and uux) locally, running in the background
uucico - calls out and transfers the files and requests queued by uucp and uux, in a master/slave configuration
Another property of UUCP is that it allows jobs and files to be forwarded through several hosts, in a
chain, provided they cooperate. The most important services provided by UUCP networks these days are
electronic mail and news.
Finally, UUCP is also the medium of choice for many dial-up archive sites which offer public access.
You can usually access them by dialing them up with UUCP, logging in as a guest user, and downloading
files from a publicly accessible archive area. These guest accounts often have a login name and password
of uucp/nuucp or something similar.
NOTE:
PPP can be configured to encapsulate different network layer protocols (such as IP, IPX, or
AppleTalk) by using the appropriate Network Control Protocol (NCP). For more information,
check the URL http://www.virtualschool.edu/mon/DialupIP/slip-ppp.html
Rlogin
Just like TELNET, rlogin connects your terminal on the current local host system lhost to the remote host
system rhost.
Cygnus Solutions (http://www.cygnus.com) has a product called KerbNet, as shown on the screenshot of
their site on figure 2.2, that enables a very secure connection using rlogin.
The version built to use Kerberos authentication is very similar to the standard Berkeley rlogin, except
that instead of the `rhosts’ mechanism, it uses Kerberos authentication to determine whether a user is
authorized to use the remote account.
Each user may have a private authorization list in a file `.klogin’ in his login directory. This file functions
much like the `.rhosts’ file, by allowing non-local users to access the Kerberos service on the machine
where the `.klogin’ file exists. For example, user `[email protected]’ would normally not be permitted to
log in to machines in the `MUSSELS.COM’ realm. However, Joe’s friend `[email protected]’
can create a `.klogin' file in her home directory that contains the line `[email protected]'. This allows Joe
to log in as Bertha on Bertha's machine, even though he does not have a ticket identifying him as Bertha.
Each line in this file should contain a Kerberos principal name of the form `principal.instance@realm’.
The following are all valid Kerberos principal names.
If the originating user is authenticated to one of the principals named in `.klogin’, access is granted to the
account. The principal `accountname@localrealm’ is granted access if there is no `.klogin’ file.
Otherwise, a login and password is prompted for on the remote machine as in login. To avoid security
problems, the `.klogin’ file must be owned by the remote user.
If there is some problem in gathering the Kerberos authentication information, an error message is
printed and the standard UCB rlogin is executed in place of the Kerberos rlogin. This permits the use of
the same rlogin command to connect to hosts that do not use CNS, as well as to hosts which do.
There are a few connection-oriented security requirements that you should be aware of when TELNETing:
● Confidentiality
● Integrity
● Peer-entity authentication
All these requirements implicitly assume that basic security is implemented at the connection level by a
stream-oriented, point-to-point application protocol. But you cannot assume that the connection is
secure, as you will not always find security mechanisms implemented within the application protocols. If
necessary, you must try to implement security mechanisms at lower layers, such as the transport or the
network layer.
The Transport Layer Security Protocol (TLSP), which became an Internet Standard in July of 1992, is a
possible solution for the lack of security of TELNET connections. TLSP will run under the transport
layer and provide security services to TELNET connections on a per-connection basis by providing
end-to-end cryptographic encoding directly above the network layer.
One of the main advantages of relying on these lower-layer security mechanisms is that they can avoid the
duplication of security efforts. But again, I'm not sure how many developers or implementation
professionals would be willing to introduce new software into operating system kernels. Therefore, you
are probably better off providing security for TELNET connections at the Application layer than at the
Network or Transport layer.
for the same security reasons outlined above. They want to be able to TELNET to a host and log in. That
is, they need a mechanism to provide some form of authentication and access control -- not just a
wide-open DOS prompt.
Columbia University’s Kermit 95 has lots of features to aid a TELNET connection, as well as making it
more secure and easy to use. Figure 2.4 gives you a good overview of Kermit 95 (K-95). You can see the
K-95 Dialer interface in the background, and the Connection one in front of it with the entry settings
highlighted, open to its first page, and finally the session itself, a dialup connection to a BBS.
Also, figure 2.4 shows in the background, a second session, this one via Internet to a UNIX server, where
a piece of a "man page" is showing, to illustrate how the K-95 Dialer can manage multiple sessions.
Usually, all you would have to do to open a session would be to double-click on the desired entry.
Figure 2.5 shows the terminal settings page of the entry notebook. Kermit provides one of these
notebooks for each connection, so each one can have a different emulation, character size, character set,
screen size, colors, and so on. All these settings apply equally well to dialup connections and to TELNET
or RLOGIN sessions, and they are all applied automatically as part of the connection process. These
notebooks give you fully customized one-button access to every dialup and Internet service or computer
that you use.
Figure 2.6 shows how you can give K-95 the information it needs to place your calls correctly, no matter
where you are. You don’t have to use any of these features if you always make your calls from the same
place, but if you travel around with a laptop, you’ll be amazed at the convenience. Just tell Kermit 95 (or
Windows 95) your new location, and all the numbers in the dialing directory will "just work".
Another great feature of K95 is that, unlike many computers or TELNET services that require different
codes for backspacing (many times you have to assign the appropriate code to your PC’s backspace),
Kermit 95 allows you to assign for each computer or host in your directory their own key settings,
specified on the Keyboard tab of its settings notebook, as shown in figure 2.7.
As also shown on figure 2.7, to solve the Backspace problem, just push the appropriate button. Kermit 95
also allows you to load in an entire custom key map for your whole keyboard if you need to (figure 2.7
shows the Key map for host-based WordPerfect 5.1, which is distributed with Kermit 95).
Figure 2.8 illustrates some of K-95's features, such as:
● Tall screens - Did you know that with Kermit 95 your TELNET sessions can show the Lynx main
page on a 43-line screen?
● Multi-sized screens - which are based on the size of the fonts.
● Ability to display Latin-1 8-bit characters - The blue screen in figure 2.7 shows a sample of German
text.
● File transferring - K-95 can actually achieve great transfer rates using long packets and sliding
windows, even when, as shown in figure 2.8, the PC is fairly heavily loaded with other
processes.
● Simultaneous multi-sessions - K-95, as you can see in the same figure, can handle various other
sessions simultaneously, such as ANSI terminal emulation on a dialup session to a BBS, plus the
ability to customize your screen colors for each session.
Figure 2.9 shows various context-sensitive pop-up help windows in the Terminal screen - "Important
Keys", mouse buttons, Compose-key functions.
● Length of TELNET sessions - You can set up the duration of your users' TELNET sessions. The
length of time could be based on the type of user or on the individual user. For example, a guest
account using TELNET at your company could have a shorter logon time (5-10 minutes) than
technical support, upper management or any other qualified/certified user.
● Session time-out - A TELNET session can be set up to time out if no activity occurs after a
specific timeframe.
● Secure screen savers - You could use a time-out screen saver that kicks in when no activity occurs in
a session for a certain period of time. In this case, unlike a session time-out, the TELNET session
would remain active on the network, but protected. Users could be warned before the time-out
occurred.
● Data protection strategies:
● Clearinghouse directories - You should implement corporate-wide temporary directories where
unverified data entries are saved. You should also make sure this data remains
unmodifiable by any unauthorized user once an entry has been verified by electronic signature.
● Protect sensitive data - Make sure to protect sensitive data by only allowing validated users to
access it and by reminding every user that all data is confidential.
To Err is Human!
Many security policies fail because they did not consider the human factor. Your users are the ones
actually using, enforcing or breaking the security policy, and to them, rules and procedures can be
difficult to remember, or they don't feel it makes sense to be generating "nonsense" passwords every six
months, and so on.
That's why it is still very common to find passwords written on the undersides of keyboards, or modems
connected to networks without any security measures so as to avoid onerous dial-in procedures, and
so forth! The bottom line is that, if your security measures interfere with essential use of the system,
those measures will be resisted by your users and they WILL circumvent them! To make sure you get the
support of your users, you must make sure that your security procedures are not getting in the way of their
work, and that they are still getting their jobs done without stress. You'll need to sell it to them!
Remember that any user can compromise your security policy and, statistically speaking, most security
break-ins come from inside, which does not necessarily mean from your users, but it does mean that there was a
hole in the security from within.
Passwords, for instance, can often be found simply by calling the user on the telephone, claiming to be a
system administrator, and asking for them. If your users understand security issues, and if they
understand the reasons for your security measures, they are far less likely to make a hacker’s life easier.
At a minimum, users should be taught never to release passwords or other secrets over unsecured
telephone lines (especially cellular telephones) or electronic mail.
● If TELNET sessions are started from home or any other remote location, by telephone dial-up, you
should require a second password or a call-back procedure
● Passwords should be encrypted
● Do not allow the sharing of passwords!
● Log all access by password and network address and construct reports of usage with user name,
network address and date (Access audit trail).
● Develop user profiles and monitor deviations from the profile.
● TELNET users should sign a confidentiality agreement.
● Run security test drills periodically with some available security testing programs, and lastly
● As shown on figure 2.10, implement a firewall!
By filtering connections coming in on port 111, at the very least, a lot of security incidents can be avoided.
But don't rely only on it. The portmapper only knows about RPC services. Other network services can be
located with a brute-force method that connects to all network ports. Many network utilities and
windowing systems listen on specific ports, such as port 25 for sendmail, port 23 for TELNET and port
6000 for X windows. SATAN includes a program that scans the ports of a remote host and reports on its
findings, providing output like the example below:
hacker % tcpmap poorsite.com
Mapping 148.158.28.1
This indicates that poorsite.com is running X windows. If not protected properly (via the magic cookie or
xhost mechanisms), window displays can be captured or watched, user keystrokes may be stolen,
programs executed remotely, etc. Also, if the target is running X and accepts a telnet to port 6000, that
can be used for a denial of service attack, as the target's windowing system will often "freeze up" for a while.
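The output above is only a fragment, but the brute-force method it illustrates is easy to picture. The following Python sketch does the same thing in miniature: it tries a plain TCP connect against a handful of the well-known ports mentioned in the text and reports which ones answer. It is not SATAN's tcpmap, the target name is a placeholder, and, as always, you should only scan hosts you are responsible for.

import socket

# Ports called out in the text: TELNET, sendmail, the portmapper and the X11 server.
WELL_KNOWN = {23: "telnet", 25: "smtp", 111: "sunrpc/portmapper", 6000: "x11"}

def scan(host: str, ports=WELL_KNOWN, timeout: float = 1.0):
    # Try a plain TCP connect to each port and collect the ones that answer.
    open_ports = []
    for port in sorted(ports):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

if __name__ == "__main__":
    target = "poorsite.example.com"                # placeholder target
    for port in scan(target):
        print(f"{target}:{port} open ({WELL_KNOWN[port]})")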
Tip:
If you want to get some free security resources from the Internet, try these sites:
● The CERT (Computer Emergency Response Team) advisory mailing list, by sending e-mail
to [email protected], and ask to be placed on their mailing list.
● The Phrack newsletter. Send an e-mail message to [email protected] and ask to be
added to the list.
● The Firewalls mailing list. Send the following line in the body of the message (blank
subject line) to [email protected]: subscribe firewalls
For free software:
● COPS (Computer Oracle and Password System) is available via anonymous ftp from URL
ftp://archive.cis.ohio-state.edu, in pub/cops/1.04+.
● The tcp wrappers are available via anonymous ftp from URL ftp://ftp.win.tue.nl, in
pub/security.
● Crack is available from URL ftp://ftp.uu.net, in /usenet/comp.sources.misc/volume28.
problems. Thus, corporate IS departments rush to set in place security tools that many times are
immature rather than being a complete solution to address the variety of challenges the Internet presents
to the corporation's internetworking. That's why what we usually see as a solution for the "blob" is a
series of security measures that ends up looking more like a fruit salad than a controllable and efficient
Internet security system. Some of the flavors we find, isolated or combined, include but are not limited
to:
● Password-based security
● Encryption schemes
● Firewalls
● Proxy servers
services. This chapter also identified several weaknesses at the IP protocol and services level, and
provided alternatives for increasing security on IP networks as well as a few features and processes you
can use to enhance the security level of your site. These features included controls to restrict access to
routers and communication servers by way of the console port, Telnet, SNMP, and so on. But those
measures are not enough, so you must consider implementing firewalls. Although firewall concepts were
introduced in Chapter 1, much more on their architecture and setup needs to be discussed; but before we
do that, let's take a look at another alternative widely and effectively used: cryptography. After all, a
question remains: is it enough? That's what Chapter 3 is all about.
Chapter 3
Cryptography: Is it Enough?
Never mind personal use! Encryption will be widely adopted to protect transactions throughout the electronic commerce industry,
despite the government's concerns with regard to national security.
The growth of electronic commerce is pushing the issue of data encryption to center stage, as companies and
netizens increasingly need to protect their privacy on the Internet, as well as their commercial and
financial transactions. But the government is a bit nervous about it because, for the first time, encryption can block the watchful
eyes of the law enforcement agencies over individuals. This is in fact a double-edged sword: if powerful encryption
schemes fall into the wrong hands, they can give criminals the freedom to commit crimes and go undetected.
Cryptography's main tool, the computer, is now available everywhere! Since World War II (WWII), governments
worldwide have been trying to control the use of data encryption (ask Phil Zimmermann about it!!). We no longer need
Colossus, the computer built during WWII to crack the German military's secret code! My 14-year-old son already uses a
Pentium at home, accesses the Internet and encrypts his files with CodeDrag!
Note:
What about Phil Zimmermann?
He was the developer of Pretty Good Privacy, an encryption tool he placed on the Internet after he
finished developing it, for which he was persecuted by the U.S. government. For more
information and details about the whole case, check the URL
http://web.its.smu.edu/~dmcnickl/miscell/warnzimm.html
Tip:
What is CodeDrag?
CodeDrag, as shown on figure 3.1, is a very fast encryption tool that uses a fast C implementation
of the DES algorithm for increased speed. It was developed at the University of Linz, Austria, as
an example tool to demonstrate the new possibilities of the Windows 95 shell, as CodeDrag is
fully embedded into the Windows desktop. For more information you can contact the developing
team at [email protected] or visit their site (and download a copy of CodeDrag) at
URL http://www.fim.uni-linz.ac.at/codeddrag/codedrag.htm.
Since 1979, the National Security Agency (NSA) has classified any form of encryption as a weapon, in the same category as fighter
jets and nuclear missiles. However, people like Zimmermann, concerned with privacy and civil rights, have been fighting
against exclusive government control of encryption. During the '70s, Whitfield Diffie, of Stanford Research Institute,
developed what is today known as public key cryptography, which is discussed in more detail later in this chapter.
Diffie's innovation created a revolution in the encryption world back then, especially within the government. The
problem was that, while the government's secret agencies were still using single-key schemes, which rely upon both
the sender and the receiver of an encoded message having access to the key, he proposed a dual-key approach which made
it much simpler to encrypt data.
Not long after, in 1977, a company founded by three scientists from the Massachusetts Institute of Technology (MIT),
RSA Data Security, introduced the first public key cryptography software and obtained US patents for the scheme.
It was in 1991 that Zimmermann, then a computer programmer, launched his "Pretty Good Privacy" (PGP) encryption
software and distributed it freely on the Internet, making it internationally available. Not only did his action draw the
government's attention to him, which led to his persecution, but RSA Data Security also condemned PGP, classifying it
as a threat to its commercial interests.
Nowadays, even commercial software companies are developing their own encryption products. Take Netscape, for
example, which developed its security scheme and freely distributed it all over the Internet as well. Netscape's Secure
Sockets Layer (SSL) encryption scheme uses a 56-bit key to increase data security. Microsoft also came up with an
encryption tool, known as the Private Communications Technology (PCT) protocol.
As discussed in the past two chapters, computer network security is becoming increasingly important as the number of
networks increases and network size expands. Besides, the Internet has also become an extension of the protected networks
of a corporation. Until last year, Intranets were something new, but only a little more than a year later we are already
talking about, and investing in, Extranets. As the sharing of resources and information worldwide (Cyberspace included!)
becomes easier, the ability to protect information and resources against unauthorized use becomes critical.
By now you have already realized that it is not possible to have a 100% secure network. At the same time, information needs to
be accessible to be useful. Balancing accessibility and security is always a tradeoff and is a policy decision made by
management.
Good security involves careful planning of a security policy, which should include access control and authentication
mechanisms. These security strategies and procedures can range from a very simple password policy to complex encryption
schemes. Assuming that you have already implemented at least a password policy at your organization (you did, right?!),
this chapter discusses the many levels and types of encryption schemes, and asks when encryption is enough. Is it?
Introduction
Encrypting the information of your company can be an important security method and provides one of the most basic
security services in a network: authentication exchange. Other methods, such as Digital Signatures and data confidentiality,
also use encryption.
With private key encryption algorithms, only one key exists. The same key value is used for both encryption and
decryption. In order to ensure security, you must protect this key and only you should know it. Kerberos, for example,
which is discussed in more detail later in this chapter, is an authentication protocol that uses private key algorithms.
Another characteristic of private key encryption is that the keys used are usually small, making its algorithms
relatively fast to compute and easier to handle than asymmetric ones.
One of the main limitations of private key encryption is distributing the key to everyone who needs it, especially
because the distribution itself must be secure. Otherwise you could expose and compromise the key and, therefore, all the
information encrypted with it. Thus, it becomes necessary for you to change your private key every so often.
If you only have private key schemes available to you, I recommend using them together with digital signatures, which are much
more versatile and secure.
Until recently, DES had never been broken and was believed to be secure. But a group of Internet users, working together
in a coordinated effort to solve the RSA DES challenge (see figure 3.3), finally broke the algorithm after more than four months.
The group checked nearly 18 quadrillion keys, finding the one correct key that revealed the encrypted message:
"Strong cryptography makes the world a safer place."
Note:
The U.S. Government forbids export of hardware and software products that contain certain DES
implementations. American exporters must adhere to this policy even though implementations of
DES are widely available outside of the United States.
The group used a technique called "brute force," in which the computers participating in the challenge began trying every
possible decryption key. There are over 72 quadrillion keys (72,057,594,037,927,936). At the time the winning key was
reported to RSA Data Security, Inc., in June of 1997, the group, known as DESCHALL (DES Challenge), had already
searched almost 25% of the total possibilities. During the peak of the group's efforts, 7 billion keys were being tested
per second. Figure 3.4 is a screenshot of the DESCHALL site, located at URL http://www.frii.com/~rcv/deschall.htm
Although DES was cracked, it had remained a secure algorithm for over 20 years. The brute-force attack used against DES
is very common when trying to break an algorithm. Although you must try all the possible 2^56 keys of DES on a
plaintext and match the result against the known corresponding ciphertext, by using differential cryptanalysis you could
reduce the amount of tryouts to 2^47, which is still a big project to undertake. If DES were to use a key longer than 56
bits, cracking it would be nearly impossible. The quick arithmetic sketched below gives a feel for these numbers.
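The Python sketch below takes the figures quoted above (a 56-bit keyspace and a peak rate of about 7 billion keys per second) and assumes a simple sequential search; the numbers it prints are rough estimates, not DESCHALL's actual schedule.

keyspace = 2 ** 56                     # 72,057,594,037,927,936 possible DES keys
rate = 7_000_000_000                   # ~7 billion keys per second at the group's peak

print(f"Total keys              : {keyspace:,}")
print(f"25% of the keyspace     : {keyspace // 4:,}")
print(f"Full sweep at peak rate : {keyspace / rate / 86400:,.0f} days")
print(f"Same sweep, 64-bit key  : {2 ** 64 / rate / 86400 / 365:,.0f} years")

A quarter of the keyspace is the "nearly 18 quadrillion keys" mentioned above, and even this back-of-the-envelope figure makes it clear why every extra key bit doubles the attacker's work.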
CAST
Developed by Carlisle Adams and Stafford Tavares, the CAST algorithm uses a 64-bit block size and a 64-bit key. The
algorithm uses six S-boxes with an 8-bit input and a 32-bit output. Don't even ask me about the construction of these
S-boxes, as it is very complicated and out of the scope of this book. For that I strongly recommend Bruce Schneier's book
"Applied Cryptography," published by John Wiley (ISBN 0-471-11709-9), which is a great book for those wanting to dig into
cryptography.
CAST encryption is done by dividing the plaintext block into two smaller blocks, a left half and a right half. The algorithm has
eight rounds; in each round one half of the plaintext block is combined with some key material using a function "f" and
then XORed with the other half to form a new right half, while the old right half becomes the new left half.
After doing this eight times, the two halves are concatenated to form the ciphertext (a sketch of this round structure, with a
stand-in round function, appears after Table 3.1). Table 3.1 shows the "f" function,
according to the example of Schneier in the above-mentioned book, page 335, which is very simple.
Table 3.1 - The Function used by CAST for encryption of plaintext blocks into a ciphertext.
4 XOR the six S-box outputs together to get the final 32-bit output.
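CAST's real "f" function and key schedule are exactly the complicated parts this book skips, so the Python sketch below substitutes a toy round function just to show the Feistel structure described above: eight rounds in which one half is combined with a subkey, XORed into the other half, and the halves are swapped, with decryption running the same rounds in reverse order. The subkeys and the round function here are stand-ins, not CAST's.

def round_function(half: int, subkey: int) -> int:
    # Stand-in for CAST's real "f" (which combines S-box lookups); NOT the real thing.
    mixed = (half + subkey) & 0xFFFFFFFF
    return mixed ^ (((half << 3) | (half >> 29)) & 0xFFFFFFFF)

def feistel_encrypt(block: int, subkeys) -> int:
    # Eight Feistel rounds over a 64-bit block split into two 32-bit halves.
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:
        left, right = right, left ^ round_function(right, k)
    return (left << 32) | right        # concatenate the halves as the ciphertext

def feistel_decrypt(block: int, subkeys) -> int:
    # The same structure run with the subkeys in reverse order undoes the rounds.
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(subkeys):
        right, left = left, right ^ round_function(left, k)
    return (left << 32) | right

if __name__ == "__main__":
    subkeys = list(range(1, 9))        # eight toy subkeys; a real key schedule differs
    ciphertext = feistel_encrypt(0x0123456789ABCDEF, subkeys)
    assert feistel_decrypt(ciphertext, subkeys) == 0x0123456789ABCDEF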
Note:
What are S-boxes?
S-boxes, or selection boxes, are a set of highly non-linear functions, which are implemented in
DES as a set of lookup tables. They are the functions that actually carry out the encryption and
decryption processes under DES.
Figure 3.6 is a screenshot of a DES S-boxes site at the College of William and Mary, courtesy of Serge Hallyn at URL
http://www.cs.wm.edu/~hallyn/des/sbox.html, which is worthwhile for you to check. Also, for your convenience, figures
3.7 through 3.14 are screenshots of DES S-box 1 through 8 respectively.
Skipjack
Skipjack is an encryption algorithm developed by the National Security Agency (NSA) for the Clipper chips. Unfortunately,
not much is known about the algorithm, as it is classified as secret by the US government. It is known that this is a
symmetric algorithm, which uses an 80-bit key and has 32 rounds of processing per encrypt or decrypt operation.
The Clipper chip is a commercial chip made by the NSA for encryption, using the Skipjack algorithm. AT&T has plans
to use the Clipper for encrypted voice phone lines.
Tip:
For detailed information on Skipjack, check the URL
http://www.cpsr.org/cpsr/privacy/crypto/clipper/skipjack_interim_review.txt, which provides a
complete overview about it.
Clipper uses Skipjack with two keys, and whoever knows the chip's "master key" should be able to decrypt all messages
encrypted with it. Thus, the NSA could, at least in theory, decrypt Clipper-encrypted messages with this "master key" if
necessary. This method of tampering with the algorithm is what is known as key escrow.
There is much resistance from concerned citizens and the business sector against the Clipper chip, as they perceive it as an
invasion of their privacy. If you check the URL http://www.austinlinks.com/Crypto/non-tech.html you will find detailed
information about the Clipper wiretap chip.
RC2/RC4
RC4, which used to be a trade secret until the source code was posted on USENET, is a very fast algorithm designed by
RSA Data Security, Inc. RC4 is considered a strong cipher, but the exportable version of Netscape's Secure Socket Layer
(SSL), which uses RC4-40, was recently broken by at least two independent groups, which took about eight days.
Table 3.2 gives you an idea of how the different symmetric cryptosystems compare to each other.
Table 3.2 - A Symmetric Cryptosystems Comparison Table
● Digital Signatures to provide a way for the receiver to confirm that the message came from the stated sender. In this
case, only the user knows the private key and keeps it secret. The user’s public key is then publicly exposed so that
anyone communicating with the user can use it.
● Plaintext encrypted with a private key can be deciphered with the corresponding public key or even the same private
key.
One of the main public key encryption algorithms is RSA, which was named after its inventors, Rivest, Shamir, and
Adleman. These public key algorithms always have advantages and disadvantages. Usually, encryption and decryption
with these algorithms use large keys, often with 100 or more digits. That's why the industry tends to resolve key
management and computing overhead problems by using smart cards such as SecureID and the like.
Zimmermann's Pretty Good Privacy (PGP) is an example of a public-key system, and it is actually becoming very popular
for transmitting information via the Internet. These keys are simple to use and offer a great level of security. The only
inconvenience is having to know the recipient's public key, and as usage increases, there are a lot of public keys out there
without a central place for them to be stored. But there is a "global registry of public keys" effort at work, as one of the promises of
the new LDAP technology.
Note:
What about LDAP?
LDAP is an acronym for Lightweight Directory Access Protocol, which is a set of protocols for
accessing information directories. Based on the X.500 protocol, LDAP is much simpler to use and
supports TCP/IP (X.500 doesn’t), necessary for any type of Internet access.
With LDAP a user should eventually be able to obtain directory information from any computer
attached to the Internet, regardless of the computer's hardware and software platform, therefore
allowing a specific address or public key to be found without the need for clearinghouse
sites such as Four11 (http://www.four11.com) or similar.
RSA
RSA, invented in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman (RSA), is a public-key cryptosystem
for both encryption and authentication. RSA has become a de facto standard, as it is the most widely used public-key
cryptosystem.
RSA works as follows: take two large primes, p and q, and find their product n = pq. Choose a number, e, less than n and
relatively prime to (p-1)(q-1), and find its inverse, d, mod (p-1)(q-1), which means that ed = 1 mod (p-1)(q-1); e and d are
called the public and private exponents, respectively. The public key is the pair (n,e); the private key is d. The factors p and
q must be kept secret, or destroyed.
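Here is the same recipe with textbook-sized numbers, as a Python sketch. The primes are tiny on purpose, the encryption and decryption steps (c = m^e mod n and m = c^d mod n) are the standard RSA operations even though they are not spelled out above, and the modular inverse uses pow(e, -1, phi), which requires Python 3.8 or later.

from math import gcd

p, q = 61, 53                    # toy primes; real RSA primes are hundreds of digits long
n = p * q                        # the modulus, 3233
phi = (p - 1) * (q - 1)          # (p-1)(q-1) = 3120

e = 17                           # public exponent, relatively prime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)              # private exponent: e*d = 1 mod phi  (here d = 2753)

m = 65                           # a "message" encoded as a number smaller than n
c = pow(m, e, n)                 # encryption:  c = m^e mod n
assert pow(c, d, n) == m         # decryption:  m = c^d mod n
print(f"public key (n, e) = ({n}, {e}), private key d = {d}, ciphertext = {c}")

With numbers this small, anyone can factor n = 3233 back into 61 and 53 and recover d, which is precisely the point made next: RSA's security rests entirely on the difficulty of factoring a much, much larger n.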
It is difficult (presumably) to obtain the private key d from the public key (n,e). If one could factor n into p and q, however,
then one could obtain the private key d. Thus the entire security of RSA is predicated on the assumption that factoring is
difficult; an easy factoring method would break RSA.
RSA is fast, but not as fast as DES. The fastest current RSA chip has a throughput greater than 600 Kbits per second with a 512-bit
modulus, implying that it performs over 1000 RSA private-key operations per second.
Figure 3.16 shows a summary overview of how public/private keys are generated.
Exploiting NT's password encryption system and MD4 is not a big deal. The major challenge is that you will
need to connect to the machine you want to exploit as an administrator. Once that is done, here is what you'll need to do:
1. Create a temporary directory where you will run the tools and make sure both, PWDUMP and NTCRACK reside
there.
2. Type PWDUMP > LIST.TXT (or any other suggestive name you want; this file will store all the password hashes
PWDUMP finds).
3. Now it is time to use NTCRACK! Type NTCRACK PASSWORDS LIST.TXT > CRACKED.TXT (PASSWORDS
is the name of the file containing words, preferably a whole dictionary, in ASCII format; NTCRACK comes with a
basic dictionary file, and you should add more words to it -- ask your secretary to enter the whole Webster there!). Once the
process is finished you just need to open the file named CRACKED.TXT with any text editor and check which
passwords were cracked.
The NTCRACK version listed earlier is one of the most up-to-date ones at the time this chapter is being written, mid-June of 1997.
This version not only checks the passwords against its basic dictionary, but also checks for passwords that are identical to
the username, which I used as an example of a cracked password on figure 3.18. Note that only passwords that appear in the
dictionary file are cracked. That's why it's so important to use long passwords, eight characters or more, that are not found in
any dictionary. The sketch below illustrates why a dictionary attack can only recover passwords that appear in its word list.
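The following Python sketch is not PWDUMP or NTCRACK, and it uses SHA-256 as a stand-in hash (the real NT hash is MD4 over the Unicode password); the point is simply that the attacker hashes every word in a list and compares, so anything absent from the list is never found.

import hashlib

def hash_password(password: str) -> str:
    # Stand-in hash for the example; the real NT hash is MD4 over the Unicode password.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def dictionary_attack(target_hashes: dict, wordlist) -> dict:
    # Hash every candidate word (and a few simple variants) and report any match.
    cracked = {}
    for word in wordlist:
        for variant in (word, word.lower(), word.upper(), word.capitalize()):
            h = hash_password(variant)
            if h in target_hashes:
                cracked[target_hashes[h]] = variant
    return cracked

if __name__ == "__main__":
    wordlist = ["password", "goncalves", "letmein"]            # toy dictionary
    captured = {hash_password("GONCALVES"): "some_account"}    # hashes dumped earlier
    print(dictionary_attack(captured, wordlist))               # finds the weak password

A long passphrase that is not in any word list never shows up in the output, no matter how large the dictionary grows.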
If you want to try this cracking tool on yourself, you can try it out on the Web. All you will need is to be running Internet
Explorer, which also exposes its security flaws, and access the URL http://www.efsl.com/security/ntie/. There, click on the
hyperlink "TRY IT." The system should provide an output with your password exposed, as shown on figure 3.18, if your
password was part of its dictionary file!
As you can see on figure 3.18, whereas in the previous figure the password was unknown, it now lists my last name,
GONCALVES, as it checked for passwords identical to the account name.
You should know that MD5 is considered to be relatively more secure than MD4 and good enough for most purposes.
Certificates
To guarantee the authenticity of users and their keys, the Public key system requires a third party who is trusted by, and
independent of, all the other parties communicating with each other.
This third party is called the Certification Authority (CA), because it is their job to certify that the owner of a Public key
really is who they claim to be. To certify a Public key, the CA (such as VeriSign) creates a certificate that consists of some
of the user’s identification details and the user’s Public key. The CA then digitally signs this certificate with their own
Private key to create a Public Key Certificate.
Users can check the authenticity of another user’s Public key by verifying the CA signature on the certificate using the
CA’s Public key, which is made widely available to the public.
After decrypting the message, the receiver verifies the sender’s digital signature. To do this, a digest of the document is
created using the same hash algorithm that created the original signature. At the same time, the digital signature that was
attached to the document is decrypted using the sender’s Public key. This creates a digest of the digital signature.
The digests of the document and the digital signature are then compared. If there is even the slightest difference between the
two, the signature is rejected. If the digests match exactly, the receiver knows that the document was not changed in transit,
and can be sure of the identity of the sender.
Since the sender is the only person who has access to the Private key used to sign the message, they can’t deny having sent
it. Figure 3.19 shows a process where a digital signature is verified.
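The comparison step can be written down in a few lines. The Python sketch below reuses the toy RSA numbers from the earlier example to "encrypt" the digest with the private key and "decrypt" it with the public key; real systems use full-size keys and proper padding (for example PKCS#1), so this is only meant to make the digest-matching logic above concrete.

import hashlib

n, e, d = 3233, 17, 2753            # toy RSA values from the earlier sketch

def digest(document: bytes) -> int:
    # Hash the document, then reduce it to a number smaller than the toy modulus.
    return int.from_bytes(hashlib.sha256(document).digest(), "big") % n

def sign(document: bytes, private_d: int) -> int:
    return pow(digest(document), private_d, n)      # seal the digest with the private key

def verify(document: bytes, signature: int, public_e: int) -> bool:
    recovered = pow(signature, public_e, n)         # open the signature with the public key
    return recovered == digest(document)            # the two digests must match exactly

if __name__ == "__main__":
    doc = b"Wire $100 to account 42."
    sig = sign(doc, d)
    print(verify(doc, sig, e))                              # True: document unchanged
    print(verify(b"Wire $1000 to account 42.", sig, e))     # False: one changed byte breaks it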
Certificate Servers
Certificate Servers are applications developed for creating, signing, and managing standard-based, public-key certificates.
Organizations use Certificate Servers, such as Netscape’s certificate server
(http://home.netscape.com/comprod/server_central/support/faq/certificate_faq.html#1) to manage their own public-key
certificate infrastructure rather than relying on an external Certificate Authority service such as VeriSign, as discussed in
the previous section.
Another vendor, OpenSoft (http://www.opensoft.com/products/expressmail/overview/certserver/), also provides Certificate
Server technology for the Windows NT and Windows 95 platforms. OpenSoft uses an architecture based on the new
Distributed Certificate System (DCS), which makes it a reliable public key distribution system. Figure 3.20 is a screenshot
of OpenSoft's Certificate Server page.
Note:
What about DCS?
The DCS server is a speed-optimized certificate server, based upon the DNS model. The server
initially only supports four resource record types: certificate records (CRT), certificate revocation
lists (CRL), certificate server records by distinguished name (CS), and certificate server records
by mail domain (CSM).
For more information on DCS, check OpenSoft’s Web Site at URL
http://www.opensoft.com/dcs/. The following section, "DCS: What is Under the Hood?," is an
edited (stripped) version of the full document available at OpenSoft’s URL listed above, which
holds the copyrights of this document.
As the DCS is intentionally extensible, new data types and experimental behavior should always be expected in parts of the
system beyond the official protocol. As in DNS, the DCS server uses a delimited, text-based file format named the DCS
master files. The DCS server allows multiple master-files to be used in conjunction, as well as a ‘root’ file, where
authoritative root server information is stored.
■ distinguished name,
■ the certificate
■ server address
■ domain name,
■ server address
2. In a CRT or CRL query, a user agent sends a request for a certificate or CRL to a certificate server, given a
distinguished name:
● if a CRT or CRL record is not present, the server searches for a CS record to see where the certificate may be found,
otherwise the server asks a DCS root server where to look for this certificate or CRL
● if the CRT or CRL record is present, the certificate or CRL is returned
3. In a CS(M) query, a distinguished name segment may be an attribute, or set of attributes:
● Refer to RFC 1779 ("A String Representation of Distinguished Names") to obtain the necessary format for
distinguished names in CS(M) queries.
● At the user agent, marking an attribute or set of attributes in the distinguished name allows the server to decide how
to look for the corresponding certificate on another server via a CS query
● Only the marked attribute or set of attributes is used in a CS query; this marked set is the common element in the
distinguished names of certificates located at the server with the correct key, but not all certificates at this location
have this common element
● This query method is similar to how DNS uses the NS record to find the address of servers with a common domain
● By default, a user agent uses the e-mail attribute as the marked attribute, if no other attribute or set of attributes is
marked. From the e-mail address, the domain name is extracted and then used in a CSM query. If there is no email
attribute and no other marked attribute, then the first attribute in the first set is used as the marked attribute.
● A user agent may also request CRLs from the DCS in the above manner.
4. CRT, CRL, and CS(M) records are stored in a DCS master file which is similar to the DNS master file format.
DCS Topology
A common topology of multiple DCS hosts and their role in the Internet is represented on figure 3.21.
On figure 3.21, note that:
1. Edit DCS master files. Records used: CRT, CRL, CS, CSM
2. Request to the Certificate Authority for CRL(s). Records used: CRL
3. Request to the certificate server for certificates and CRLs. Record used: CRT, CRL
4. DCS inter-server communication. Records used: CS, CSM
The DCS topology illustrates the high-speed nature of this system. A user agent may query a local certificate server and in
milliseconds receive a transmission of the desired certificate or CRL from that certificate server or perhaps another server
located anywhere on the Internet.
DCS Protocol
Refer to RFCs 1032-1035 on the DNS protocol for the exact syntax on DCS queries. The DCS query protocol will have the
same format as the DNS query protocol. The syntax of distinguished names within DCS queries will conform to RFC 1779
("A String Representation of Distinguished Names").
All communication inside the DCS protocols is carried in a single format called a DCS message (DCSM). The top-level
format of the message is divided into 5 sections, just as with DNS, some of which are empty in certain cases, as shown on
figure 3.22.
Looking at figure 3.22, the header section is always present. The header includes fields that specify which of the remaining
sections are present, and also specify whether the message is a query or a response, a standard query or some other opcode,
etc.
The names of the sections after the header are derived from their use in standard queries. The question section contains
fields that describe a question to a name server. These fields are a query type (as the QTYPE in DNS), a query class (as the
QCLASS in DNS). The last three sections have the same format: a possibly empty list of concatenated DCS records. The
answer section contains RRs that answer the question; the authority section contains RRs that point toward an authoritative
name server; the additional records section is not used in the DCS.
● OPCODE - A four-bit field that specifies the kind of query in this message. This value is set by the originator of a query
and copied into the response. The values are:
● 0 - a standard query (QUERY)
● 3 - a simple query; the certificate server searches the information until it finds the first matching DCS record
(SMQUERY).
● 4 - an update query; a CA sets this type when sending new certificates or a CRL to a certificate server (UQUERY).
● AA - Authoritative Answer - this bit is valid in responses, and specifies that the responding name server is an
authority for the distinguished name in question section. Note that the contents of the answer section may have
multiple owner names because of aliases. The AA bit corresponds to the name which matches the query name, or the
● QTYPE - a two octet code which specifies the type of the query. The values for this field include all codes valid for a
TYPE field.
● QCLASS - a two octet code that specifies the class of the query. This field is used for compatibility with the DNS
only. For DCS it must equal the IN (the Internet).
● TYPE - Two octets containing one of the DCS record types. This field specifies the meaning of the data in the
RDATA field.
CS record   - TYPE value 1001
CSM record  - TYPE value 1002
SOC record  - TYPE value 1003
SOCM record - TYPE value 1004
CRT record  - TYPE value 1005
CRL record  - TYPE value 1006
● AXFR - 252. A request for the transfer of an entire zone (identical to the DNS query). This value is the same as the DNS
AXFR.
● CLASS - two octets which specify the class of the data in the RDATA field. For the DCS this value must be equal
to IN.
● TTL - a 32-bit unsigned integer that specifies the time interval (in seconds) that the resource record may be cached
before it should be discarded. Zero values are interpreted to mean that the RR can only be used for the transaction in
progress, and should not be cached. Each DCS record contains a time value, so this field may not be necessary.
● RDLENGTH - an unsigned 16-bit integer that specifies the length in octets of the RDATA field. In DCS the data
is a DER-encoded value, so the RDATA already carries its own length and this field is therefore not used.
● RDATA - A DER encoded ASN.1 type. The format of this information varies according to the TYPE of the RR.
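As a reading aid only (this is not OpenSoft's actual data structure), the fields listed above can be collected into a small Python record; the class value of 1 for IN is borrowed from DNS, and the DER payload here is a placeholder.

from dataclasses import dataclass

DCS_TYPES = {1001: "CS", 1002: "CSM", 1003: "SOC", 1004: "SOCM", 1005: "CRT", 1006: "CRL"}
CLASS_IN = 1          # DCS always uses the Internet class, as in DNS

@dataclass
class DCSRecord:
    name: str         # distinguished name or mail domain the record is keyed by
    rtype: int        # one of the TYPE codes listed above
    rclass: int       # must be CLASS_IN for the DCS
    ttl: int          # seconds the record may be cached; zero means do not cache
    rdata: bytes      # DER-encoded ASN.1 value, which carries its own length

    def describe(self) -> str:
        kind = DCS_TYPES.get(self.rtype, "unknown")
        return f"{self.name}: {kind} record, ttl={self.ttl}s, {len(self.rdata)} bytes of DER data"

if __name__ == "__main__":
    record = DCSRecord("[email protected]", 1005, CLASS_IN, 3600, b"\x30\x82placeholder")
    print(record.describe())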
If you would like to have more information about DCS message compression and transport, as well as the server algorithm,
please check OpenSoft's site at URL http://www.opensoft.com/dcs/, as I feel that this kind of information goes
beyond the scope of this book.
Key Management
The only reasonable way to protect the integrity and privacy of information is to rely upon the use of secret information in
the form of private keys for signing and/or encryption, as discussed earlier in this chapter. The management and handling of
these pieces of secret information is generally referred to as "key management." This includes the processes of selection,
exchange, storage, certification, expiration, revocation, changing, and transmission of keys. Thus, most of the work in
managing information security systems lies in key management.
The use of key management within public key cryptography, as seen earlier, is appealing because it simplifies some of the
problems involved in the distribution of secret keys. When a person sends a message, only the receiver can read it, without
the receiver needing to know the original key used by the sender or to agree on a common key, as the key
used for encryption is different from the key used for decryption.
Key management not only provides convenience for encrypted message exchange, but also provides the means to
implement digital signatures. The separation of public and private keys is exactly what is required to allow users to sign
their data, allow others to verify their signatures with the public key, but not have to disclose their secret key in the process.
Kerberos
The Kerberos protocol provides network security by regulating user access to networking services. In a Kerberos
environment, at least one system runs the Kerberos Server. This system must be kept secure. The Kerberos Server, referred
to as a trusted server, provides authentication services to prove that the requesting user is genuine. Another name for the
Kerberos Server is the Key Distribution Center (KDC).
Other servers on the network, and all clients, are assumed by the system administrator to be untrustworthy. For the Kerberos
protocol to work, all systems relying on the protocol must trust only the Kerberos server itself.
In addition to providing authentication, Kerberos can supply other security services such as:
● Data integrity
● Data confidentiality
Kerberos uses private key encryption based on the Data Encryption Standard (DES). Each client and server has a private
DES key. The Kerberos protocol refers to these clients and servers as principals. The client’s password maps to the client’s
private key.
Tip:
For a great source of information on Kerberos and its applicability in the network security
environment, check Process Software's Web site at http://www.process.com. Not only are they one
of the leading TCP/IP (including IPv6!) solution companies, but they also have a vast resource of
information on IPv6, Kerberos and TCP/IP.
The Kerberos Server maintains a secure database list of the names and private keys of all clients and servers that are
allowed to use the Kerberos Server’s services. Kerberos assumes that all users (clients and servers) keep their passwords
secure.
The Kerberos protocol solves the problem of how a server can be sure of a client’s identity. Kerberos does this by having
both the client and server trust a third party, in this case, the Kerberos Server. The Kerberos Server verifies the client’s
identity.
Getting Application Service Tickets for Network Services from the Kerberos
Server
Once a Client has a ticket-granting ticket, it can ask application servers for access to network applications.
Every request of this kind requires first obtaining an application service ticket for the particular application server from the
Ticket–Granting Service (TGS).
Figures 3.28 and 3.29, together with the following process, describe how to get an application service ticket to use to access an
application server.
The Client:
1. Creates an authenticator to be used between the Client and the Kerberos Server. The Client encrypts the authenticator
using the session key that it received previously. The authenticator contains three parts:
● user name,
● current time
2. Creates the message to send to the Kerberos Server. The packet contains three parts:
● ticket–granting ticket,
● encrypted authenticator,
3. Sends the packet to the Kerberos Server. The Kerberos Server receives the packet from the Client.
The Kerberos Server:
4. Decrypts the ticket–granting ticket using its private key to obtain the session key. (The ticket–granting ticket was
originally encrypted with this same key.)
● Decrypts the authenticator using the session key and compares the:
❍ Current time in the authenticator with its own current time to make sure the message is authentic and recent.
After the Kerberos Server verifies the information in the ticket, the Server creates an application service ticket packet for
the Client. The Server:
7. Uses the application server name in the message and obtains the application server’s private key from the Kerberos
database.
8. Creates a new session key and then an application service ticket based on the application server name and the new
session key. The Kerberos Server encrypts this ticket with the application server’s private key. This ticket is called
the application ticket. This ticket has the same fields as the ticket–granting ticket:
❍ user-name,
❍ Application ticket
● Kerberos requires a new authenticator from the Client each time the Client starts a new connection with an
application. Authenticators have a short lifetime (generally five minutes).
● The encrypted ticket and authenticator contain the Client’s network address. Another user cannot use stolen copies
without first changing his system to impersonate the Client’s network address.
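The following Python sketch is a conceptual rendering of the authenticator exchange just described, not the real Kerberos wire format: the client seals its name and the current time with the shared session key, and the server refuses anything older than the short authenticator lifetime. It uses the third-party cryptography package (Fernet) purely as a convenient symmetric cipher; real Kerberos uses its own message formats, DES keys, and also checks the client's network address.

import json
import time
from cryptography.fernet import Fernet      # pip install cryptography

session_key = Fernet.generate_key()          # stands in for the session key issued earlier
session_cipher = Fernet(session_key)

def build_authenticator(user: str) -> bytes:
    # Client side: user name plus current time, encrypted with the shared session key.
    payload = json.dumps({"user": user, "time": time.time()}).encode()
    return session_cipher.encrypt(payload)

def check_authenticator(token: bytes, max_age: float = 300.0) -> dict:
    # Server side: decrypt with the same session key and insist the timestamp is recent.
    payload = json.loads(session_cipher.decrypt(token))
    if abs(time.time() - payload["time"]) > max_age:    # roughly the five-minute lifetime
        raise ValueError("authenticator too old - possible replay")
    return payload

if __name__ == "__main__":
    token = build_authenticator("[email protected]")
    print(check_authenticator(token))        # accepted while fresh; rejected once it expires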
To hack Kerberos is very hard! In case of an attack, before the authenticator expires, a hacker would need to:
● steal the original ticket,
Cygnus’ KerbNet
KerbNet security software is Cygnus’ commercial implementation of MIT’s Kerberos v5.
This is a great product to use when securing your network, as it provides the security of Kerberos, with its single trusted
authentication server architecture, which provides the basis for a single sign-on interface for your users. Also, once you
install and configure the KerbNet Authentication Server, client and server applications can be ‘Kerberized’ to work with
KerbNet, which is very simple in a multi-user application environment. Basically, all you do is to replace your E-mail, ftp,
or telnet, with Cygnus’ off-the-shelf Kerberized versions.
The KerbNet libraries allow in-house developers to add KerbNet authentication and encryption directly to their existing
client-server applications. The KerbNet Authentication Server is the first Kerberos server for both UNIX and Windows NT with
encrypted tickets for requesting services, which keeps passwords off the network, prevents password spoofing attacks, and
allows for encrypted communications between a client and server.
Tip:
For more information on KerbNet or to download a free copy, check Cygnus web site at URL
http://www.cygnus.com/product/kerbnet-index.html
3. At this point, both of you will generate the public keys, which we will call "y." You guys will create them using the
function:
y = g^x % p
4. You now exchange the public keys ("y"), and each of you converts the number received into the shared secret key, "z,"
using your own secret number:
z = y^x % p
"z" now can be used as the key for whatever encryption method used to transfer information between the two of you.
Mathematically speaking, you two should have generated the same value for "z," whereas
z = (g^x % p)^x’ % p = (g^x’ % p)^x % p
All of these numbers are positive integers, where
x^y means: x is raised to the y power
x%y means: x is divided by y and the remainder is returned
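To see the arithmetic in action, here is a minimal sketch in Perl using the Math::BigInt module. The prime "p," the generator "g," and the secret numbers are toy values chosen purely for illustration; a real exchange would use a very large prime.
#!/usr/bin/perl
# Toy Diffie-Hellman exchange; p, g and the secret exponents are
# illustration-only values, far too small for real use.
use strict;
use Math::BigInt;

my $p = Math::BigInt->new("2147483647");   # a small prime (2^31 - 1)
my $g = Math::BigInt->new("5");

my $x_a = Math::BigInt->new("123456789");  # your secret number x
my $x_b = Math::BigInt->new("987654321");  # the other party's secret number x'

# y = g^x % p  -- the public values that get exchanged
my $y_a = $g->copy->bmodpow($x_a, $p);
my $y_b = $g->copy->bmodpow($x_b, $p);

# z = y^x % p  -- each side raises the *other* side's public value
my $z_a = $y_b->copy->bmodpow($x_a, $p);
my $z_b = $y_a->copy->bmodpow($x_b, $p);

print "Both sides computed the same z\n" if $z_a->bcmp($z_b) == 0;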
Note:
The Diffie-Hellman Key Agreement, U.S. patent 4,200,770, is owned by Public Key Partners and
will expire later this year, 1997.
Ciphertext-only Attack
In this type of attack, the hacker, or cryptanalyst, does not know anything about the contents of the message, and must work
from ciphertext only.
In practice it is quite often possible to make guesses about the plaintext, as many types of messages have fixed format
headers. Even ordinary letters and documents begin in a very predictable way. It may also be possible to guess that some
ciphertext block contains a common word.
The goal of the cryptanalyst here is then to try to deduce the key used to encrypt the message, which would also allow
him/her to decrypt other messages encrypted with the same key.
Known-plaintext Attack
In this case, the hacker knows or can guess the plaintext for some parts of the ciphertext. The task is to decrypt the rest of
the ciphertext blocks using this information. One way he will probably try is to determine the key used to encrypt the data.
Chosen-plaintext Attack
The hacker here is able to have any text he likes encrypted with the unknown key; that is, he can choose the plaintext that
gets encrypted, selecting texts that might yield more information about the key. His task, again, is to determine the key used
for encryption. Some encryption methods, particularly RSA, are extremely vulnerable to chosen-plaintext
attacks.
Adaptive-chosen-plaintext Attack
This is actually a variation of the chosen-plaintext attack, but in this case the hacker can modify his choice of plaintext
based on the results of previous encryptions, which allows him to home in on smaller and smaller blocks of plaintext to be
encrypted.
Man-in-the-middle Attack
This is a relevant attack against cryptographic communication and key exchange protocols. It is a sort of key spoofing: a
hacker intercepts the communication between two parties exchanging keys for secure communication, such as a
Diffie-Hellman exchange, and corrupts the exchange by performing a separate key exchange with each party, forcing each
of them to use a different key, both of which are known to the hacker. The hacker can then decrypt any communication with
one valid key and re-encrypt it with the other key before sending it on to the other party. Worse, the parties will still think
they are communicating securely; the whole process is totally transparent to both of them, and they will never know what
has happened until it is too late!
One way to prevent man-in-the-middle attacks is for both sides to compute a cryptographic hash of the key exchange, sign
it with a digital signature algorithm, and send the signature to the other side. The recipient then verifies that the signature
really comes from the desired other party and that the hash in the signature matches the hash computed locally.
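As a rough illustration of the idea (not a complete signed-exchange protocol), each party can hash the public values it sent and received into a short fingerprint; the fingerprints are then signed or compared over an authenticated channel, and a man in the middle who substituted his own values would produce a mismatch. A minimal sketch, assuming the Digest::SHA1 module is available:
#!/usr/bin/perl
# Derive a fingerprint of the key-exchange values; both legitimate parties
# should arrive at the same string, while a man in the middle should not.
use strict;
use Digest::SHA1 qw(sha1_hex);

sub exchange_fingerprint {
    my ($y_sent, $y_received) = @_;
    # order the two values canonically so both sides hash the same string
    my ($a, $b) = sort ($y_sent, $y_received);
    return sha1_hex("$a:$b");
}

# placeholder public values for illustration only
print "Fingerprint: ", exchange_fingerprint("123456", "654321"), "\n";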
Chosen-ciphertext Attack
In this case the hacker, or cryptanalyst, is not only able to choose which ciphertext he will try to decrypt, but also has
access to the resulting decrypted plaintext.
Usually this type of attack is applied to public-key algorithms, although it very often works well against symmetric
algorithms too.
Chosen-key Attack
Although the name suggests that the attacker is able to choose the key, this is not true. As a matter of fact, this is a very
weird form of attack where the hacker only has some knowledge about the relationship between the two keys. Bruce
Schneier brilliantly discusses this form of attack in his book Applied Cryptography, in the section "Differential and Linear
Cryptanalysis."
Rubber-hose Cryptanalysis
This is the "dirty" way, where the hacker will harass, threaten, bribe and torture someone until they get the key!
Tip:
For additional information on cryptanalysis attacks, check these references:
Bruce Schneier: Applied Cryptography. John Wiley & Sons, 1994.
Jennifer Seberry and Josef Pieprzyk: Cryptography: An Introduction to Computer Security.
Prentice-Hall, 1989.
Man Young Rhee: Cryptography and Secure Data Communications. McGraw-Hill, 1994.
M. E. Hellman and R. C. Merkle: Public Key Cryptographic Apparatus and Method.
The RSA Frequently Asked Questions document (http://www.rsa.com/faq.htm) by RSA Data
Security, Inc., 1995.
Timing Attack
This is a relatively new form of attack, discovered by Paul Kocher, that exploits the fact that different modular
exponentiation operations in RSA take discretely different amounts of time to process. The cryptanalyst
repeatedly measures the exact execution times of modular exponentiation operations, which is very relevant for RSA,
Diffie-Hellman, and Elliptic Curve methods.
Usually, RSA computations are done with what is called the Chinese Remainder Theorem (see Figure 3.31). But when an
implementation does not use it, a hacker can exploit slight timing differences in the RSA computations to try to recover the
private key.
Figure 3.31 shows a description of the Chinese Remainder Theorem at SouthWest Texas State University Web site.
Tip:
To learn more about the Chinese Remainder Theorem, check the URL
http://www.math.swt.edu/~haz/prob_sets/notes/node25.html, at SouthWest Texas State
University.
The attacker passively observes "k" operations, measuring the time "t" it takes to compute each modular exponentiation
operation m = c^d mod n. The attacker also knows "c" and "n." The pseudo code of the computation being attacked is:
Algorithm to compute m = c^d mod n:
Let m_0 = 1.
Let c_0 = c.
For each bit i of d, from the first to the last:
If (bit i of d) is 1 then
Let m_(i+1) = (m_i * c_i) mod n
Else
Let m_(i+1) = m_i
Let c_(i+1) = (c_i)^2 mod n
End.
The last m computed is the result. Because the multiplication in the "then" branch is performed only when the
corresponding bit of the secret exponent d is 1, the measured times leak the bits of d one at a time.
According to Ron Rivest ([email protected]), at MIT, the simplest way to defeat this timing attack would be to
"ensure that the cryptographic computations take an amount of time that does not depend on the data being operated on. For
example, for RSA it suffices to ensure that a modular multiplication always takes the same amount of time, independent of
the operands."
He also suggests a second alternative: using "blinding techniques." According to him, you could "blind the data beforehand,
perform the cryptographic computation, and then unblind it afterwards. For RSA, this is quite simple to do. (The blinding
and unblinding operations still need to take a fixed amount of time.) This doesn't give a fixed overall computation time, but
the computation time is then a random variable that is independent of the operands."
Note:
This blinding process introduces a random value "r" into the decryption process, so that
m = c^d mod n
becomes:
m = r^-1(cr^e)^d mod n, where "e" is the RSA public exponent.
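To make the note above concrete, here is a small Perl sketch of blinded RSA decryption using a tiny textbook-style key (n = 3233, e = 17, d = 2753; toy values only, chosen so the algebra is easy to follow):
#!/usr/bin/perl
# Blinded RSA decryption: m = r^-1 * (c * r^e)^d mod n.
# Toy key for illustration; a real key is hundreds of digits long.
use strict;
use Math::BigInt;

my $n = Math::BigInt->new("3233");    # modulus (61 * 53)
my $e = Math::BigInt->new("17");      # public exponent
my $d = Math::BigInt->new("2753");    # private exponent
my $c = Math::BigInt->new("2790");    # ciphertext (encryption of m = 65)

my $r     = Math::BigInt->new("123"); # random blinding value, relatively prime to n
my $r_inv = $r->copy->bmodinv($n);    # r^-1 mod n

my $blinded   = $c->copy->bmul($r->copy->bmodpow($e, $n))->bmod($n);  # c * r^e mod n
my $unblinded = $blinded->bmodpow($d, $n)->bmul($r_inv)->bmod($n);    # (..)^d * r^-1 mod n

print "Recovered plaintext: $unblinded\n";   # prints 65, the same as c^d mod n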
The University of British Columbia (http://www.ubc.ca/) has a Web site at URL http://axion.physics.ubc.ca/pgp-attack.html
with vast documentation of symmetric and asymmetric crypto attacks, which are well worth checking out.
● Use non-dictionary words in the password. Basic words in foreign languages are just as easy to guess.
● Try keyboard tricks like shifting fingers to the left or right one key when typing the password.
● If someone is standing over your shoulder, you could always politely ask him to turn his head.
● Use non-alphanumeric characters in the password. Symbols such as $, %, ^, and & are often valid characters to use in
passwords.
● Administrators can use control characters, on certain systems, in the middle of the password. You can determine
which control characters can be used by trial and error.
● Use a mixture of uppercase and lowercase letters. (A short sketch applying these rules follows this list.)
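As a rough illustration of these guidelines, the following sketch (not tied to any particular login system; the rules and word list are made up for the example) rejects candidate passwords that are too short, single-case, or purely alphabetic:
#!/usr/bin/perl
# Toy password-policy check reflecting the tips above.
use strict;

sub password_ok {
    my ($pw) = @_;
    return 0 if length($pw) < 8;                               # too short
    return 0 unless $pw =~ /[a-z]/ && $pw =~ /[A-Z]/;          # mixed case required
    return 0 unless $pw =~ /[0-9]/ || $pw =~ /[^A-Za-z0-9]/;   # digit or symbol required
    return 0 if $pw =~ /^(password|secret|letmein)$/i;         # trivially guessable words
    return 1;
}

foreach my $candidate ('summer', 'Summer97', 'tr0ub&dor_X') {
    printf "%-12s => %s\n", $candidate, password_ok($candidate) ? "acceptable" : "rejected";
}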
Authentication
As discussed above, a good password policy is very important to safeguard the integrity, confidentiality, and security of your
users' data, and it becomes a hard requirement if you are involved with electronic commerce. Therefore, authentication
must become a daily routine for users, rather than a special procedure, when they log on to their computers, the Internet,
and departmental intranets.
When applying authentication methods, it is important to take the risk of spoofing into consideration. Cryptographic
methods, as discussed earlier, will help you implement a security policy that is not so easy for a hacker to spoof, but they
may not be enough to protect corporate resources and other data and resources not tied to an individual.
Therefore, you must incorporate other strategies and technologies to enhance the level of security of your corporate
network. Firewalls are definitely a requirement and will be discussed in more details throughout this book.
Authenticode
Authenticode appeared as one of Microsoft’s earliest commercial implementations of code signing. The Authenticode
signature is based on currently prevailing industry standards - X509v3 certificates and PKCS#7 signature blocks.
Tip:
Documentation on Authenticode and related infrastructure can be found at
http://www.microsoft.com/intdev/security.
There has been a lot of commentary in the USENET and trade magazines about ActiveX and Authenticode, but most of it
focuses on how an ActiveX control operates and what Microsoft should or should not have included with the tool. So I
don't intend to reproduce the same line of thought here; if you want to know more about the hows, dos and don'ts of
Authenticode, a search on AltaVista will probably be enough. Rather, I would like us to focus on the infrastructure of
what Microsoft proposes with Authenticode and its impact as a cryptography-based data security application.
Brent Laminack ([email protected]) posted the following considerations about Authenticode on the USENET, which
clearly illustrate a basic infrastructure issue with it. Judge for yourself whether you would be willing to entrust your data
security tasks to it.
Laminack suggests we consider two ActiveX controls. One provides a control similar to the Win95 "Start" button, with
all the commands on the user's computer presented in a list to choose from. Suppose it keeps these command names in a
preferences file such as C:\windows\mycommands. The file may contain a list such as: Word, Excel, format c:, IE3, etc.
He also suggests we consider a second ActiveX control that provides a "cron" facility, which automatically wakes
up at a specified time and executes a list of commands for housekeeping, such as backup, defrag, etc. Suppose it keeps its list
of commands in, say, C:\windows\mycommands. In his own words, "you see it coming," don't you? What
could happen is that the second control finds the file written by the first one and dutifully fires up Word, Excel, and
then… format the C drive. Commands after this one are of diminishing consequence.
What now? You’re stuck! You now have a wiped hard drive and, as Laminack puts it, you have no fingerprints for
Authenticode. Even if you do get them, who are you going to sic the law enforcement people on? Both controls did exactly
what they were designed to do, exactly what they advertised to do. Who are you going to sue?
Worse, neither of the controls "misbehaved." What did in your disk was an unforeseen interaction between the two. Laminack
suggests that with a bit of thought it would be possible to come up with a co-operating gang of ActiveX controls that performs
deliberate theft via collusion, where each program is only doing what it is "supposed" to, yet the total of their activity is
much greater than the sum of the parts. Yes, non-linearity is clearly at work here in the interaction of the components.
The only way to avoid this would be to strictly decouple the controls by not allowing any of them to share information with
another, for example by giving each its own private file space to write in. This, alas, is not the case.
As Microsoft puts it, the way Authenticode is implemented, both contractually and technically, at least in its present release
(March 1997), when you sign code you are explicitly taking responsibility as the code's publisher, an action not to be taken
lightly from a legal point of view.
But it is just too easy to say that signing code gives you accountability. After all, would you have an audit trail to use as
supporting evidence? Also, history shows that in the software industry a publisher is usually not held liable for the damages
a piece of software may cause to a system!
Although Authenticode is still the most widely deployed code-signing application, Netscape Navigator 4.0 already has code
signing, and so does JavaSoft's JDK 1.1. The bottom line? You will need much more than Authenticode, and Microsoft's
response is SSPI.
Note:
For more and detailed information about CryptoAPI, check the URL
http://www.graphcomp.com/info/specs/ms/capi.html.
Feature or bug, what concerns me most is the fact that a shrewd developer could create an ActiveX control that
does nothing more than open the doors of the system and let all the other programs come in without ever passing
through Authenticode. This ActiveX control could even let another version of itself into the system, properly
signed and without malicious code, which would cover up any trace of it in the system.
Unfortunately, with ActiveX, once a user allows the code to run on the system, there are many "distressing" situations that
can happen. In a way, this is not a problem affecting only ActiveX; it extends across all platforms and types of code.
If the Web made it easy for a publisher to distribute his code, it also made it easy to identify a malicious piece of code and to
alert the endangered parties.
Without a doubt, Authenticode helps a lot with the quality control and authenticity of code. The fact that we can
rapidly identify the author of a piece of code and demand a fix for a bug from him is an example of it. If the author refuses to
fix the code, there are several avenues one could take to force him to fix it, both at the commercial level, by refusing to use the
code, and at the legal level, by bringing him to court. This feature alone already grants Authenticode some merit.
Even so, Java's robustness and the existence of other security tools for Java, such as Java Blocking, are
enough for one to argue over whether to develop in ActiveX or in Java.
One alternative for preventing such a vulnerability is to run a filter in combination with the firewall, so that these applets (Java,
JavaScript or ActiveX objects) can be filtered. A major example of such a tool is so-called Java Blocking, which has
created a lot of confusion as far as how to run it in the most effective way, as opinions are many.
My recommendation is to run Java Blocking as a service at the firewall. This way, it will extend the level of protection
against Java applets throughout the whole network. Some browsers, such as Netscape Navigator, provide security against
Java applets at the client level, allowing the user to disable Java applets at the browser; however, it is very difficult
to administer all the clients centrally that way.
Carl V. Claunch, from Hitachi Data Systems, developed a patch for the TIS firewall toolkit that converts the TIS http-gw
proxy into a filtering proxy. The filter can implement a uniform or a differentiated security policy at the level of
IP/domain addresses, and it can block, permit, or combine both behaviors based on the browser version. The security
policies are created separately for Java, JavaScript, VBScript, ActiveX, SSL, and S-HTTP.
According to Claunch, blocking JavaScript involves scanning for various constructs:
1 - <SCRIPT language=javascript> ... </SCRIPT> blocks
3 - Attributes in other tags of the form onXXXX=, where XXXX indicates a browser action, such as a click, mouse
movement, etc.
Java Blocking consists of deactivating the <APPLET ...> and </APPLET> tags, and the filter likewise performs:
3 - Removal of attributes of the form onXXXXX= from many tags, just as with JavaScript
However, SSL and S-HTTP dialogs make the HTML opaque to the proxy. Consequently, shttp: and https: pages cannot be
effectively filtered!
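To make the idea concrete, here is a minimal sketch of the kind of rewriting such a filter performs. It is not Claunch's patch; it simply strips <APPLET> and <SCRIPT> blocks and onXXXX= attributes from an HTML page read on standard input, and, as just noted, it can only work on pages that are not hidden inside SSL or S-HTTP.
#!/usr/bin/perl
# Crude applet/script blocking filter. Real filters must also handle
# malformed HTML, tags split across network buffers, and so on.
use strict;

local $/;                # slurp the whole page
my $html = <STDIN>;

$html =~ s/<applet\b.*?<\/applet>//gis;    # remove <APPLET> ... </APPLET>
$html =~ s/<script\b.*?<\/script>//gis;    # remove <SCRIPT> ... </SCRIPT>
$html =~ s/\son\w+\s*=\s*("[^"]*"|'[^']*'|[^\s>]+)//gi;   # remove onXXXX= handlers

print $html;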
But don't think that I am hammering ActiveX and promoting Java! Anyone could develop a malicious plug-in for
Netscape if they wanted to. As a matter of fact, where the browsers are concerned, the impact would be even greater than
with any ActiveX object; after all, a plug-in has as much control over Windows as an ActiveX object does.
Don't even tell me that the advantage is in having to install a plug-in versus automatically receiving an ActiveX object.
There are so many installations of Netscape out there that there would surely be as many users installing such a
malicious plug-in as there would be ActiveX users facing a malicious ActiveX object on their pages. Furthermore, you have
no way to control the installation of a plug-in on Netscape any better than you can control the installation of an ActiveX object.
As professionals involved with network and site security, let's be realistic. Many experts have been pointing out the security
flaws existing in Java implementations, as well as fundamental problems with the Java security model. As an example, I
could cite attacks that confuse Java's type system, resulting in applets executing arbitrary code with the full permissions of the
user invoking the applet.
Note:
There is a white paper, written by Dean, Felten, and Wallach, entitled "Java Security: From HotJava
to Netscape and Beyond" that discusses most of the problems and security flaws of Java. The
paper is available for download at Princeton University's site, at URL
http://www.cs.princeton.edu/sip.
So far, users and systems developers have been content to consider these Java problems... "temporary." They have been
confident that bugs will be fixed quickly, limiting the margin for damage. Netscape has been incredibly quick at fixing
serious problems!
However, the huge base of browsers capable of running Java, each one an invitation for a hostile applet to determine the
actions of that browser, raises the suspicion that Java's security flaws lie at the level of the implementation structure.
There is another paper, available at Boston University’s URL at
http://www.cs.bu.edu/techreports/96-026-java-firewalls.ps.Z, that describes attacks to firewalls that can be launched from
legitimate Java applets. The document describes a situation where in some firewall environments, a Java applet running on
a browser inside the firewall can force the firewall to accept connections such as TELNET, or any other TCP ones, directed
to the host! In some cases, the applet can even use the firewall to arbitrarily access other hosts supposedly protected by a
firewall.
Let me explain: the weaknesses exploited in these attacks are caused neither by the Java implementations themselves nor by
the firewall itself, but by the combination of the two elements together and by the security model that results from running
browsers behind the firewall.
Chapter 4
Firewalling Challenges: The Basic Web
This chapter discusses the challenges firewall implementations face in light of the Hypertext Transfer Protocol
(HTTP) and some of its security issues. It also discusses the proxying characteristics of HTTP and its security concerns.
It explores secure HTTP (S-HTTP) as well as the use of SSL for enhanced security, and reviews the security
implications of the Common Gateway Interface (CGI).
HTTP
Being an application-level protocol developed for distributed, collaborative, hypermedia information systems, the
Hypertext Transfer Protocol (HTTP) is a very generic and stateless protocol, enabling systems to be built independently
of the data being transmitted. It is also an object-oriented protocol that can be used for a variety of tasks,
which include but are not limited to name servers, distributed object management systems, and extensions of its request
methods, or commands.
One of the great features of HTTP is the typing and negotiation of data representation. This protocol has been in use
since 1990, with the W3 global information initiative.
The most current version of HTTP is version 1.0, which is supported by all Web servers in the market. But there is also
another version of the protocol, HTTP-NG (Next Generation), which promises to use the bandwidth available more
efficiently and enhance the HTTP protocol.
Further, HTTP is a protocol that can be generically used for communication between user agents and proxies or
gateways to other Internet protocols, such as SMTP, NNTP, FTP, Gopher and WAIS.
Nevertheless, all this flexibility offered by HTTP comes at a price: it makes Web servers, and clients, very difficult to
secure. The openness and statelessness of the Web account for its quick success, but they also make it very difficult
to control and protect.
On the Internet, HTTP communication generally takes place over TCP/IP connections. The default port is 80, but
other ports can be used, and nothing prevents HTTP from being implemented on top of any other protocol. In fact,
HTTP can use any reliable transport.
When a browser receives a data type it does not understand, it relies on additional applications to translate it into a form it
can understand. These applications are usually called viewers, and they should be one of your first concerns when
preserving security. You must be careful when installing one because, again, the underlying HTTP protocol
running on your server will not stop the viewer from executing dangerous commands.
You should be especially careful with proxy and gateway applications, and cautious when forwarding requests
that are received in a format different from the one HTTP understands. The proxy must take into consideration the HTTP
version in use, as the protocol version indicates the protocol capability of the sender. A proxy or gateway should never send a
message with a version indicator greater than its native version; if a request with a higher version is received, the
proxy or gateway must either downgrade the request version, respond with an error, or switch to tunnel
behavior.
Note:
If you need more information on HTTP, check the URL:
http://www.w3.org/hypertext/WWW/Protocols/
There is a series of utilities intended for Web server administrators available at the URL:
ftp://src.brunel.ac.uk/WWW/managers/
The majority of HTTP clients, such as Netscape Navigator, and servers, such as Purveyor (http://www.process.com), support
a variety of proxying schemes, including SOCKS and transparent proxying.
Purveyor, for instance, provides proxy support for not only HTTP, but also FTP and GOPHER protocols, creating a
secure LAN environment by restricting internet activities of LAN users. The proxy server offers improved performance
by allowing internal proxy caching. Purveyor also provides proxy-to-proxy support for corporations with multiple
proxy servers.
Tip:
For more information on Purveyor Webserver, check Process Software’s URL:
http://www.process.com.
If you are running your Web server on Windows NT, Windows 95 or NetWare, you can use Purveyor Webserver’s
proxy features to enhance security. In addition, you can increase the performance of your server as Purveyor can locally
cache Web pages obtained from the Internet.
Installing a firewall at your site should be a must. Regardless of whether you place your server outside or inside your
protected network, a firewall will be able to stop most of the attacks, but not all! The openness of HTTP is too great for
you to risk it. Besides, you still have all the viewers and applets to worry about.
When selecting a firewall, make sure to choose one that includes an HTTP proxy server; check Appendix A, "Types of
Firewalls and Products on the Market," for a complete review of all the major firewall vendors and the specifications of
their products. Also, check the CD that accompanies this book, as many of the vendors listed in Appendix A provided
demos and evaluation copies of their products, which are worth testing.
Firewalls will tremendously help you protect your browsers. Some firewalls, such as the TIS FWTK, provide HTTP
proxying that is totally transparent to the user. More will be seen about firewalls in Chapter 7, "What is an Internet/Intranet
Firewall After All." For now, you must be aware of the firewalling challenges when dealing with Web security
requirements and the HTTP protocol.
not even the client's address, but the address of the proxy server his requests go through. What your server will see
then is the address of the proxy requesting the document on behalf of the client. The client, though, thanks to the HTTP
protocol, can also disclose to the Web server the username logged in at the client machine making the request.
Unless you have set your server to capture such information, the first thing it will do is reverse the numeric IP address in
an attempt to get the domain name of the client (e.g. www.vibes.com). But in order for the server to get this domain
name, it must first contact a domain name server and present it with the IP address to be converted.
Many times an IP address cannot be reversed because its DNS entries were not correctly configured, so the server cannot
resolve the name. What happens next? The server goes ahead and forges the address!
Once the Web server has the IP address and the possible domain name for the client, it starts to apply a set of
authentication rules, trying to determine whether the client has permission to access the requested document.
Did you notice the security hole? There are a few security holes here as a result of this transaction:
● The client requesting the information may never get it, as the server forged the domain name; the client now
may not be authorized to retrieve the information requested.
● The server may send the information to a different client, since the domain name was forged.
● Worse, the server may allow access to an intruder under the impression that it is a legitimate user!
● You should be concerned with the HTTP server and what risks, or harm, it can bring to your clients, but also
● You should be concerned with the HTTP clients and what risks, or harm, they can bring to your server.
As discussed above, as far as clients' threats go, you should be careful with the security of your server. You
should make sure clients will access only what they are supposed to, and that if there is a hostile attack your server has
some way to protect access to it.
However, not all is lost, as there are a few basic steps you can follow in order to enhance the security of your server:
● Make sure to configure your server carefully, and to use its access and security features.
● If you are running your server on a Windows NT system, make sure to check the permissions for the drives and
shares and to set the system and restricted areas read-only. On UNIX, you can use chroot to restrict access to the
system area.
● You can mirror your server: put sensitive files on the primary system and have a secondary system, without
any sensitive data, open to the Internet.
● Remember Murphy's Law: whatever can go wrong WILL go wrong. Expect the worst and configure your Web
server in such a way that even if a hacker takes total control over it, there is going to be a huge wall (if not a
firewall!) to be crossed.
● Most important, review the applets and scripts your HTTP server uses, especially those CGI scripts interacting
with your clients over the Internet. Watch for possibilities of external users triggering the execution of internal
commands.
● Run your Web server on a Windows NT server. It is much more secure, although it may not have as many
features as its UNIX and Sun counterparts.
● Macintosh Web servers are even more secure, but lack implementation features when compared with the Windows
NT and 95 platforms.
To illustrate what a misconfigured domain name can do to the reverse IP address lookup process, take into consideration the
entries you place in your access.conf file. Keep in mind that this file is responsible for the access control of the
documents on your server.
When setting up this file, you will need to put a <directory> tag, for each directory being controlled, into the
access.conf file. Within the <directory> tag you will also need to use a <limit> tag with the parameters (allow, deny,
and order) needed to control access to the directory.
The following is an example where the whole Cyberspace can access the files in your top-level document directory:
<directory /usr/local/http/docs>
<limit>
order allow,deny
allow from all
</limit>
</directory>
One of the key lines here is the "order" directive, telling the server to process "allow" directives (from ALL clients)
before any "deny" directives. Have you noticed we don’t have any "deny" directive?
Now let's assume you need to restrict an area on your server so that only internal users can access it. Unlike the above
example, you will need a "deny" directive:
<directory /usr/local/http/docscorp>
<limit>
order deny,allow
deny from all
allow from .greatplace.com
</limit>
</directory>
In this case, the "deny" directive comes before the "allow" directive, so that the whole Cyberspace is first denied access to
the company area. The "allow" directive then permits access from anyone coming from the greatplace.com
domain.
If the server can't reverse the IP address of a client, then you have a problem, as the domain name is critical to this
process. Simply put, the user will not be able to access the Web page.
But there is a "Band-Aid" solution: you can add raw IP numbers to the access list.
<directory /usr/local/http/docscorp>
<limit>
order deny,allow
deny from all
allow from .greatplace.com 198.155.25
</limit>
</directory>
This way, the "allow" directive will permit any access coming from "greatplace" but also from any machine whose
IP address starts with 198.155.25.
● Abuse of log information (extraction of IP addresses, domain names, file names, etc.)
Most of these security holes are well known. Some applications like Netscape’s SSL and NCSA’s S-HTTP try to
address the issue, but only partially.
The problem is that Web servers are very vulnerable to clients' behavior over the Internet. I recommend that you force
Web clients to prompt the user before allowing HTTP access to reserved ports other than the one reserved for HTTP.
Otherwise, a user could inadvertently cause a transaction to occur in a different, and dangerous, protocol.
Watch the GET and HEAD methods! The seemingly trivial click on an anchor to subscribe or reply to a service can trigger
an applet to run without the user's knowledge, which enables abuse by malicious users.
Another security hole of HTTP has to do with server logs. Usually, a Web server logs a large amount of personal data
about the information requested by different users. Evidently, this information should remain confidential, yet HTTP allows
the information to be retrieved without any access permission scheme.
There is a feature, the "Referer:" field, that increases the amount of personal data transferred. This field allows reading
patterns to be analyzed and even reverse links to be drawn. In the wrong hands, it could become a very useful and
powerful tool that can lead to abuse and breaches of confidentiality. To this day, there are cases where it is not known how
to suppress the Referer information; developers are still working on a solution.
Many other HTTP limitations and security holes appear if we break down the ramifications of the security
issues presented above. Secure HTTP technologies and schemes are an attempt to address and resolve these
security holes.
A Security Checklist
First of all, the best security checklist you can have is knowing what to check and when. The following is a list of
resources on the Internet to help you keep abreast of the security issues arising every day in Cyberspace. It also points you
to some free resources to help you enhance security at your site:
● Subscribe to security mailing lists:
● Send an e-mail to the Computer Emergency Response Team (CERT) advisory mailing list, requesting your
inclusion to their mailing list at [email protected].
● Try Phrack newsletter, an underground hacker’s newsletter. Send an e-mail message to [email protected].
● Unchanged Passwords - Make sure to change default passwords when installing servers for the first time.
Always remove unused accounts from the password file. Disable these accounts by changing the password field in
the /etc/passwd file to an asterisk '*' and changing the login shell to /bin/false, to ensure that an intruder cannot
log in to the account from a trusted system on the network (see the example after this list).
● Passwords Re-used - Use passwords only once. Be aware that passwords can be captured over the Internet by
sniffer programs.
● Password Theft - Hackers use the Trivial File Transfer Protocol (TFTP) to steal password files. If you are not sure
whether your system is vulnerable, connect to it using the TFTP protocol and try to get /etc/motd. If you can
access it, then everyone on the Internet can get to your password file. To avoid this, either disable tftpd or restrict
its access.
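For instance, disabling a dormant account as described in the "Unchanged Passwords" item above amounts to editing its /etc/passwd entry along these lines (the account name and password hash are made up for illustration). Before:
guest:Xk8s0qGm2a1Bc:1050:100:Guest account:/home/guest:/bin/sh
And after, with the password field replaced by an asterisk and the login shell set to /bin/false:
guest:*:1050:100:Guest account:/home/guest:/bin/false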
URI/URL
Uniform Resource Identifiers (URI) are a group of extensive technologies for naming and addressing resources such as
pages, services and documents on the Web. There are a number of existing addressing schemes, and more may be
incorporated over time.
Figure 4.1 shows the basic structure of URI which includes:
● URI - The Uniform Resource Identifier, a generic set of all names/addresses referring to resources.
● URL - The Uniform Resource Locator is a set of URI schemes with explicit instructions on how to access a
resource on the Internet.
● URN - The Uniform Resource Name, a particular scheme, under development in the IETF, to provide for the resolution,
using Internet protocols, of names that have a greater persistence than that currently associated with Internet host names
or organizations. When defined, a URN will be an example of a URI.
● URC - The Uniform Resource Citation, also known as Uniform Resource Characteristics, which is a set of
attribute/value pairs describing a resource. These values could be URIs of various kinds, but it can also include,
for example, authorship, publisher, data type, date, copyright status and so forth.
A Uniform Resource Locator (URL) is a sort of networked extension of the standard filename concept. A URL
enables you to point to a specific file in a specific directory on any given machine attached to the Internet or an intranet.
Also, this file can be served through several different methods, such as HTTP, TELNET, FTP and so forth.
The following is an overview of some of the most common URL types, as described at the National Center for
Supercomputing Applications' (NCSA) site at the University of Illinois (http://www.ncsa.uiuc.edu/demoweb/url-primer.html).
File URLs
Suppose there is a document called "foobar.txt"; it sits on an anonymous ftp server called "ftp.yoyodyne.com" in
directory "/pub/files". The URL for this file is then:
file://ftp.yoyodyne.com/pub/files/foobar.txt
Gopher URLs
Gopher URLs are a little more complicated than file URLs, since Gopher servers are a little trickier to deal with than
FTP servers. To visit a particular gopher server (say, the gopher server on gopher.yoyodyne.com), use this URL:
gopher://gopher.yoyodyne.com/
Some gopher servers may reside on unusual network ports on their host machines. (The default gopher port number is
70.) If you know that the gopher server on the machine "gopher.banzai.edu" is on port 1234 instead of port 70, then the
corresponding URL would be:
gopher://gopher.banzai.edu:1234/
News URLs
To point to a Usenet newsgroup (say, "rec.gardening"), the URL is simply:
news:rec.gardening
Partial URLs
Once you are viewing a document located somewhere on the network (say, the document
http://www.yoyodyne.com/pub/afile.html), you can use a partial, or relative, URL to point to another file in the same
directory, on the same machine, being served by the same server software. For example, if another file exists in that
same directory called "anotherfile.html", then anotherfile.html is a valid partial URL at that point.
This provides an easy way to build sets of hypertext documents. If a set of hypertext documents are sitting in a common
directory, they can refer to one another (i.e., be hyperlinked) by just their filenames -- however a reader got to one of
the documents, a jump can be made to any other document in the same directory by merely using the other document's
filename as the partial URL at that point. The additional information (access method, hostname, port number, directory
name, etc.) will be assumed based on the URL used to reach the first document.
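As a small illustration of how these pieces fit together, the sketch below (a toy, not a standard URL library) splits an absolute URL into its access method, host, optional port, and path, and resolves a partial URL against the directory of a base document:
#!/usr/bin/perl
# Toy URL handling: parse an absolute URL, and resolve a partial URL
# against the base document's directory.
use strict;

sub parse_url {
    my ($url) = @_;
    if ($url =~ m{^(\w+)://([^/:]+)(?::(\d+))?(/.*)?$}) {
        return (scheme => $1, host => $2, port => $3, path => defined $4 ? $4 : "/");
    }
    return ();    # not an absolute URL
}

sub resolve_partial {
    my ($base, $partial) = @_;
    (my $dir = $base) =~ s{[^/]*$}{};   # keep everything up to the last "/"
    return $dir . $partial;
}

my %u = parse_url("http://www.yoyodyne.com/pub/afile.html");
print "host=$u{host} path=$u{path}\n";
print resolve_partial("http://www.yoyodyne.com/pub/afile.html", "anotherfile.html"), "\n";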
CGI
Another form of threat that makes it harder for a firewall to protect a Web site involves Common Gateway Interface
(CGI) scripts. Many Web pages display documents and hyperlink them to other pages or sites. However, some have
search engines that will allow you to search the site (or sites) for particular information. This is done through forms that
are executed by CGI scripts.
Hackers can modify these CGI scripts to do things they really ought not to do. Normally, a CGI script will only search
within the Web server area, but if it is modified, it can search outside the Web server. To prevent this from happening, you
will need to run these scripts with low user privileges, and if you are running a UNIX-based server, make sure you
search for those semicolons again.
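In practice, "searching for those semicolons" means refusing (or stripping) shell metacharacters in anything a form hands to your script before it gets anywhere near a shell. A minimal sketch of that kind of check (the subroutine name is made up; this is not a complete CGI framework):
#!/usr/bin/perl
# Reject form input containing shell metacharacters before it is ever
# used near system(), exec(), backticks or a piped open().
use strict;

sub clean_or_die {
    my ($value) = @_;
    if ($value =~ /[;&|`<>\$\\"'\n]/) {
        print "Content-type: text/plain\n\n";
        print "Invalid characters in input.\n";
        exit 0;
    }
    return $value;
}

my $query = clean_or_die($ENV{'QUERY_STRING'} || '');
# $query is now safer to use, but still prefer calls that bypass the shell entirely.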
There are many known forms of threats, and many more unknown ones. In the next sections you will learn about some of
the most common and threatening ones.
Further, the open architecture of Web servers allows for arbitrary Common Gateway Interface (CGI) scripts to be
executed on the server’s side of the connection in response to remote requests. Any CGI script installed at your site
may contain bugs, and every such bug is a potential security hole.
Caution:
Beware of CGI scripts, as they are the major source of security holes. The protocol itself is not
insecure, but the scripts must be written with security in mind. If you are installing these scripts at
your site, beware of the problem!
The same goes for Web server software: the more features servers have, the greater the potential for security holes. Servers
that offer a variety of features, such as CGI script execution, real-time directory listing, and script error handling, are
more likely to be vulnerable to security holes. Even widely used security tools are not guaranteed to always work.
Note:
There is a Web server comparison table available at http://www.webcompare.com/. It includes
freeware as well as commercial products for UNIX, Novell, Windows NT, Windows 95, VMS,
and many other operating system.
For instance, right before I started writing this book, two recent events came to mind. The first involves the well-known
Kerberos system, widely adopted for security in distributed systems and developed at MIT in the mid-1980s. The people
from COAST, at Purdue University, found a vulnerability in current versions of Kerberos. Two students, Steve
Lodin and Bryn Dole, together with Professor Eugene Spafford, discovered a method by which someone without privileged
access to most implementations of a Kerberos 4 server can break the secret session keys issued to users, allowing
unauthorized access to distributed services available to a user without even knowing that user's password. They were
able to demonstrate it in a record time of less than one minute, on average, using a typical workstation, and sometimes as
quickly as one-fifth of a second!
Another example is Netscape, where versions 2.0 and 2.01 were vulnerable to a "malicious" Java applet being spread
over the Internet, according to a story in the New York Times of May 18. This applet, although a bit annoying, could
cause a denial of service, which could also potentially cause the loss of unsaved edits in a word processor, or erratic
application behavior if you, on the verge of panic, decided to reboot your machine instead of just killing your browser.
Note:
What about Java?
Java is a language developed by Sun Microsystems that allows Web pages to contain code to
be executed by browsers. The exciting thing about Java is that, by being based on a single "virtual
machine" that all implementations of Java emulate, it is capable of running on any system with a
version of it. There is a Web browser, HotJava, written entirely in the Java language. If you want to
learn about it, try the URL: http://java.sun.com.
However, keep in mind that denial-of-service applets are not viruses, which are created with malicious intent. True,
this Java bug had the capability to execute instructions on the Web server remotely, with the ability even to upload
information from within the remote Web server, but the security breaches that got so much press were fixed in
JDK 1.0.2, the current release, and in NN 3.0b4.
In the interim, Netscape users were instructed to disable "Java" and "JavaScript" in the browser's options dialog box to
prevent the browser from receiving such applets, or to upgrade to version 2.02, which supposedly resolved the problem.
Another example you should be aware of is the existing vulnerability in the httpd servers provided by NCSA and the
Apache organization. According to the Computer Incident Advisory Capability (CIAC), a user can potentially gain the
same access privileges as the httpd server. This security hole applies not only to UNIX servers but to all server
platforms capable of running httpd. If you are running an NCSA httpd, you should upgrade it to version 1.5.1, its latest
version.
Tip:
You can download NCSA httpd version 1.5.1 from the URL
ftp://ftp.ncsa.uiuc.edu/Web/httpd/UNIX/ncsa_httpd/current/httpd_1.5.1-export_source.tar.Z
Note:
If you want to download patch 1.3 for NCSA's version 1.3 for UNIX, it is available at
http://hoohoo.ncsa.uiuc.edu/.
The Apache plug-in replacement for NCSA can be found at
http://www.hyperreal.com/apache/info.html.
The problem with the Apache httpd CGI is no different: a hacker could easily enter arbitrary commands on the server
host using the same user-id as the user running the httpd server. If httpd is being run as root, the unauthorized
commands are also run as root! Since he is using the same user-id, he can also access any file on the system that is
accessible to that user-id, up to and including destroying file contents on the server
host.
Further, if he is using an X11-based terminal emulator attached to the httpd server host, he can gain full interactive
access to the server host just as if he were logging in locally.
If you are using Apache httpd, this is what you will need to do:
1. Locate the escape_shell_command() function in the file "src/util.c" (approximately line 430). In that function, the
line should read:
if(ind("&;`'\"|*?~<>^()[]{}$\\",cmd[x]) != -1){
2. You will need to change that line to read:
if(ind("&;`'\"|*?~<>^()[]{}$\\\n",cmd[x]) != -1){
3. Then, you will need to recompile, reinstall, and restart the server.
It is very important that you apply this upgrade because, if left alone, this security hole can lead to a compromise of your Web
server.
Note:
For additional information you should visit CIAC’s Web page at URL: http://ciac.llnl.gov/
The same goes for CGI scripts on Novell platforms. The challenge involved in implementing CGI
gateways on Novell-based platforms is due to the overhead involved in spawning NLMs and in implementing language
compilers or interpreters that reside and launch on the NetWare server. To resolve this problem, Great Lakes
allows data from the Web client to be either stored in a file on the NetWare server or transmitted as an MHS or
SMTP e-mail message.
The NT versions of both the Netscape Communications Server version 1.12 and the Netscape Commerce Server are also
affected by CGI script handling. The following are two known problems:
● Perl CGI Scripts are Insecure - Since the Netscape server does not use the NT File Manager's associations
between file extensions and applications, Perl scripts are not recognized as such when placed into the cgi-bin
directory. Associating the extension .pl with the Perl interpreter will not work. If you are using any of these
versions, Netscape's technical note recommends placing Perl.exe into cgi-bin and referring to your scripts as
/cgi-bin/Perl.exe?&my_script.pl.
Unfortunately this technique opens a major security hole on the system as it allows a remote user to execute an
arbitrary set of Perl commands on the server by invoking such scripts as
/cgi-bin/Perl.exe?&-e+unlink+%3C*%3E, which will cause every file in the server’s current directory to be
removed.
There is another suggestion in Netscape's technical note: to encapsulate the Perl scripts in a batch (.bat) file.
However, be aware that there is also a related problem with batch scripts, which makes this solution unsafe as well.
Both the Purveyor and WebSite NT servers, because of EMWACS, use NT's File Manager extension associations,
allowing you to execute Perl scripts without having to place Perl.exe into cgi-bin. This bug does not affect these
products.
● DOS batch files are Insecure - According to Ian Redfern ([email protected]), a similar hole exists in the
processing of CGI scripts implemented as batch files. Here is how he describes the problem:
"Consider test.bat:
@echo off
echo
file.
Further, if you have an FTP daemon, even though you would not generally be compromising data security by sharing
directories between this daemon and your Web daemon, no remote user should ever be able to upload files that can
later be read or executed by your Web daemon. Otherwise, a hacker could, for example, upload a CGI script to your FTP
site and then use his browser to request the newly uploaded file from your Web server, which could execute the script,
totally bypassing security! Therefore, limit FTP uploads to a directory that cannot be read by any user. More about this
is discussed in Chapter 8, "How Vulnerable are Internet Services."
Evidently, your Web servers should support the development of application gateways, as these are essential for
communicating data between an information server--in this case a Web server--and another application.
Whenever the Web server needs to communicate with another application, you will need CGI scripts to negotiate the
transactions between the server and the outside application. For instance, CGIs are used to transfer data, filled in by a
user in an HTML form, from the Web server to a database.
But if you want to preserve the security of your site, and you must, be alert about allowing your users to run their own
CGI scripts. These scripts are very powerful, which could represent some risk for your site. As discussed earlier, CGI
scripts, if poorly written, can open security holes in your system. Thus, never run your Web server as root; make sure
it is configured to change to another user ID at startup time. Also, consider using a CGI wrapper to ensure the scripts
run with the permissions and user ID of their author. You can easily download one from the URL:
http://www.umr.edu/~cgiwrap
Tip:
You should check the URL http://www.primus.com/staff/paulp/cgi-security/ for security-related
scripts.
CGIs are not all bad! A good security measure for controlling who is accessing your Web server is to actually use CGI
scripts to identify visitors, as shown in the sketch after this list. There are five very important environment variables
available to help you do that:
1. HTTP_FROM - This variable is usually set to the email address of the user. You should use it as a default for
the reply email address in an email form.
2. REMOTE_USER - It is only set if secure authentication was used to access the script. You can use the
AUTH_TYPE variable to check what form of secure authentication was used. REMOTE_USER will display the
name the user authenticated under.
3. REMOTE_IDENT - It is set if the server has contacted an IDENTD server on the browser machine. However,
there is no way to ensure an honest reply from the browser.
4. REMOTE_HOST - Provides information about the site the user is connecting from if the hostname was
retrieved by the server.
5. REMOTE_ADDR - This also provides information about the site the user is connecting from. It will provide the
dotted-decimal IP address of the user.
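A minimal sketch of a CGI script that reports these five variables back (any of them may be empty, depending on the browser and on whether authentication was used) could look like this:
#!/usr/bin/perl
# Report who is calling this CGI script, using the variables described above.
use strict;

my @who = (
    "From:       " . ($ENV{'HTTP_FROM'}    || 'unknown'),
    "Auth user:  " . ($ENV{'REMOTE_USER'}  || 'none')
                   . " (" . ($ENV{'AUTH_TYPE'} || 'no auth') . ")",
    "Ident:      " . ($ENV{'REMOTE_IDENT'} || 'not supplied'),
    "Host:       " . ($ENV{'REMOTE_HOST'}  || 'not resolved'),
    "IP address: " . ($ENV{'REMOTE_ADDR'}  || 'unknown'),
);

print "Content-type: text/plain\n\n";
print "$_\n" foreach @who;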
Caution:
If you ever suspect your site has been broken into, you should contact the Computer Emergency
Response Team (CERT). CERT was formed by the Defense Advanced Research Projects Agency
(DARPA) in 1988 to serve as a focal point for the computer security concerns of Internet users.
The Software Engineering Institute at Carnegie Mellon University, in Pittsburgh, PA, runs the Coordination
Center for CERT. You can visit their Web page at URL: http://www.cert.org or send an e-mail
to [email protected].
Also, CGI can be used to create e-mail forms on the Web. There is a CGI e-mail form, developed in Perl by Doug
Stevenson ([email protected]) of Ohio State University, that is fairly secure. The script, called the "Web Mailto Gateway,"
enables you to hide real e-mail addresses from users, which helps to enhance security. The following source code
can be found at URL: http://www.mps.ohio-state.edu/mailto/mailto_info.html.
#!/usr/local/bin/perl
# 5/95
# Use this script as a front end to mail in your HTML. Not every browser
# supports the mailto: URLs, so this is the next best thing. If you
# use this script, please leave credits to myself intact! :) You can
# Documentation at:
# http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html
# I didn't exactly follow the RFCs on mail headers when I wrote this,
# Also, you'll need cgi-lib.pl for the GET and POST parsing. I use
# version 1.7.
# http://www.bio.cam.ac.uk/web/form.html
# PLEASE: Use this script freely, but leave credits to myself!! It's
# common decency!
########
# only a certain few mail addresses to be sent to. I changed the WWW
# Mail Gateway to allow only those mail addresses in the list @addrs
# the selected option being either the first one or the one that matches
########
# Enhancing the enhancements from 1.2. You can now specify a real name
# defined, or read from a file. Read the information HTML for instructions
# on how to set this up. Also, real mail addresses may hidden from the
########
# The next URL to be fetched after the mail is sent can be specified with
# Force user to enter something for the username on 'Your Email:' tag,
########
# Added <PRE>formatted part to header entry to make it look nice and fixed a
# typo.
########
# ALL cgi variables (except those reserved for mail info) are logged
# at then end of the mail received. You can put forms, hidden data,
# or whatever you want, and the info for each variable will get logged.
########
# Fixed stupid HTML error for an obscure case. Probably never noticed.
# are listed in the order they were received. That was a function of perl
########
# New suggested sendmail flag -oi to keep sendmail from ending mail
# Added support for setting the "real" From address in the first line
# of the mail header using the -f sendmail switch. This may or may not
# the security of your script for public usage. Your mileage will vary,
# one out.
########
# Doug Stevenson
######################
# Configurable options
######################
$active = 1;
$logging = 1;
$logfile = '/usr/local/WWW/etc/mailto_log';
# Physical script location. Define ONLY if you wish to make your version
# of this source code available with GET method and the suffix '?source'
# on the url.
$script_loc = '/usr/local/WWW/cgi-bin/mailto.pl';
$cgi_lib = '/usr/local/WWW/cgi-bin/cgi-lib.pl';
$script_http = 'http://www-bprc.mps.ohio-state.edu/cgi-bin/mailto.pl';
# Path to sendmail and its flags. Use the first commented version and
# define $listserver = 1if you want the gateway to be used for listserver
# correctly.
# sendmail options:
# -n no aliasing
#$expose_address = 1;
# List of address to allow ONLY - gets put in a HTML SELECT type menu.
# who view source, or you don't want to mess with the source, read them
# from $mailto_addrs:
#$mailto_addrs = '/usr/local/WWW/etc/mailto_addrs';
#open(ADDRS,$mailto_addrs);
#while(<ADDRS>) {
# $addrs{$name} = $address;
#}
# version
$version = '2.2';
#############################
#############################
##########################
# source is self-contained
##########################
open(SOURCE, $script_loc) ||
print <SOURCE>;
close(SOURCE);
exit(0);
require $cgi_lib;
&ReadParse();
#########################################################################
# method GET implies that we want to be given a FORM to fill out for mail
#########################################################################
if ($ENV{'REQUEST_METHOD'} eq 'GET') {
$destaddr = $in{'to'};
#
# Excerpt from the mailto.pl WWW mail gateway script. Large portions of the
# original listing are omitted here, so this fragment is not runnable as
# shown; comments mark the places where code has been left out.
#
$cc      = $in{'cc'};
$subject = $in{'sub'};
$body    = $in{'body'};
$nexturl = $in{'nexturl'};

# Work out the sender's address: the form field first, then the
# authenticated user, then the From: header supplied by the client.
if ($in{'from'}) {
    $fromaddr = $in{'from'};
}
elsif ($ENV{'REMOTE_USER'}) {
    $fromaddr = $ENV{'REMOTE_USER'};
}
# this is for Lynx users, or any HTTP/1.0 client giving From header info
elsif ($ENV{'HTTP_FROM'}) {
    $fromaddr = $ENV{'HTTP_FROM'};
}
else {
    $fromaddr = "$ENV{'REMOTE_IDENT'}\@$ENV{'REMOTE_HOST'}";
}

# Strip embedded NUL characters from the message body.
$body =~ s/\0//;

# Build the <SELECT> list of pre-defined destination addresses.
if (%addrs) {
    foreach (sort keys %addrs) {
        if ($in{'to'} eq $addrs{$_}) {
            $selections .= "<OPTION SELECTED>$_";
        }
        else {
            $selections .= "<OPTION>$_";
        }
        if ($expose_address) {
            $selections .= " &lt;$addrs{$_}&gt;";
        }
    }
    $selections .= "</SELECT>\n";
}

# Print the mail form (the HTML is abridged in this excerpt).
print &PrintHeader();
print <<EOH;
that you want to mail to. The <B>Your Email</B>: field needs to
contain your mail address so replies go to the right place. Type your
message into the text area below. If the <B>To</B>: field is invalid,
or the mail bounces for some reason, you will receive notification
is set incorrectly, all bounced mail will be sent to the bit bucket.</I></P>
EOH

if ($selections) {
    print $selections;
}
else {
    # ... (print a plain text input for the To: address instead)
}

print <<EOH;
</FORM>
<HR>
<H3><A HREF="http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html#about">
<H3><A HREF="http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html#new">
<H3><A HREF="http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html#misuse">
<HR>
</P></ADDRESS>
</BODY></HTML>
EOH

#########################################################################
# Method POST implies that they already filled out the form and submitted
#########################################################################
elsif ($ENV{'REQUEST_METHOD'} eq 'POST') {

$destaddr = $in{'to'};
$cc       = $in{'cc'};
$fromaddr = $in{'from'};
$fromname = $in{'name'};
$replyto  = $in{'from'};
$sender   = $in{'from'};
$errorsto = $in{'from'};
$subject  = $in{'sub'};
$body     = $in{'body'};
$nexturl  = $in{'nexturl'};

# ... (if a required field is missing, complain and quit:)
print <<EOH;
Content-type: text/html

<HTML><HEAD><TITLE>Mailto error</TITLE></HEAD>
<BODY><H1>Mailto error</H1>
<UL>
<LI><B>To</B>:, the full mail address you wish to send mail to</LI>
</UL>
EOH
exit(0);

# do some quick logging - you may opt to have more/different info written
if ($logging) {
    open(MAILLOG, ">>$logfile");
    # ... (write the log entry here)
    close(MAILLOG);
}

# Log every CGI variable except for the ones reserved for mail info.
# Valid vars go into @data. Text output goes into $data.
# First, get an ORDERED list of all cgi vars from @in to @keys
for (0 .. $#in) {
    local($key) = split(/=/, $in[$_], 2);
    $key =~ s/%(..)/pack("c",hex($1))/ge;
    push(@keys, $key);
}
local(%mark);
foreach (@data) {
    # ... (format the extra variables into $data)
}

$body =~ s/\0//;

# now check to see if some joker changed the HTML to allow other
# destination addresses
if (%addrs) {
    # ... (if $destaddr is not one of the pre-defined addresses:)
    print &PrintHeader();
    print <<EOH;
<BODY>
not one of the pre-defined set of addresses that are allowed. Go back and
try again.</P>
</BODY></HTML>
EOH
    exit(0);
}

$realaddr = $destaddr;
if ($addrs{$destaddr}) {
    # ... (expand the alias into the real address)
}

if ($active) {
    if ($listserver) {
        open(MAIL, "| $sendmail$fromaddr") || &InternalError('could not start sendmail');
    }
    else {
        open(MAIL, "| $sendmail") || &InternalError('could not start sendmail');
    }

    print MAIL <<EOM;
To: $realaddr
Reply-To: $replyto
Errors-To: $errorsto
Sender: $sender
Subject: $subject
X-Real-Host-From: $realfrom

$body

$data
EOM
    close(MAIL);
}

# if the cgi var 'nexturl' is given, give out the location, and let
# the client fetch the next document itself
if ($nexturl) {
    print "Location: $nexturl\n\n";
}
else {
    print &PrintHeader();
    print <<EOH;
<HTML><HEAD><TITLE>Mailto results</TITLE></HEAD>
<BODY><H1>Mailto results</H1>
<PRE>
<B>Subject</B>: $subject

$body</PRE>
<HR>
</BODY></HTML>
EOH
}

} # end if METHOD=POST

#####################################
#####################################
else {
    # ... (any other request method is rejected)
    print <<EOH;
EOH
    exit(0);
}

# Deal out error messages to the user. Gets passed a string containing
# a description of the error.
sub InternalError {
    local($errmsg) = @_;
    print &PrintHeader();
    print <<EOH;
Content-type: text/html

<B>$errmsg</B></P></BODY></HTML>
EOH
    exit(0);
}

##
## end of mailto.pl
##
If your server can run CGI scripts and is configured with sendmail, this is the right, and secure, mail gateway script to link to from your HTML pages. You will, however, need to be able to run CGI scripts on your server.
Chapter 5
Firewalling Challenges: The Advanced Web
For the most part, Internet managers are used to the idea that a proxy server, a specialized HTTP server
typically running on a firewall machine, would be enough to provide secure access from Internet
connections coming through the firewall into the protected network.
Sure enough, running a proxy server is one of the most recommended approaches to protecting your Web site. But there is more to it than only setting up a proxy, which many times can still breach security requirements. Thus, SOCKS comes into the picture. As a package that enables Internet clients to access a protected network without breaching security requirements, SOCKS can also be an add-on feature to your firewall challenge. But not so fast! According to Ying-Da Lee ([email protected]), from NEC, you may bump into a few problems using the modified version of Mosaic for X 2.0, which is not supported by its developer, the National Center for Supercomputing Applications (NCSA).
Therefore, implementing security in a Web environment is not really the same as building an Internet firewall. To better understand the challenges in setting up a firewall in a Web-centric environment you must understand the threats and risks you are up against, as well as the implications of integrating different technologies, which include but are not limited to protocols, devices, and services.
This chapter discusses the main security flaws and risks associated with Web-based connectivity, as well as the main technologies interacting with the Web, such as media types, programming languages and other security concerns, so that you can better choose and implement the right firewall solution.
There are plenty of hackers, crackers, and whackers lurking around, waiting for an opportunity to break into a secure system, regardless of whether it is a Web site or a corporate internal network. They will try to exploit anything, from high-level Application Programming Interfaces (APIs) to low-level services, from malicious applets to sophisticated client-pull and server-push schemes.
What are they after? You should expect them to be after anything! Many of them will try the same old tricks UNIX crackers did years ago just for the fun of it. What about publicly posting your client list on the Internet? What if suddenly, instead of your company's logo, you find one of those Looney Tunes characters on your home page? Worse! What if you are being hacked right now and have not even noticed? One thing you can be sure of: sooner or later they will knock on your door; it is just a matter of statistics!
The bottom line is that there will always be Web security issues you should be concerned with. Many of these security issues are documented at http://www-genome.wi.mit.edu/WWW/faqs/www-security-faq.html, at least for UNIX boxes. Therefore, let's take a look at some of the ways your Web server can be attacked and what you can do to prevent it.
ISAPI
When the time for integration between systems comes, you will need to decide on the approach you will use to create interaction between your applications and your Web server. If you don't have an intranet already in place, don't worry, you will! But before even considering it, you will first need to consider how your users will interact with the system you have in place and decide their level of interaction with your Web-centric applications.
The choice you make largely depends on what user interactivity you would like to build into the system.
Some aspects of this interactivity are new, and some have been a part of LAN connectivity for some
time. Ideally, when your application is linked with a Web server, your users will be able to use your
application in ways unique to being on a Web, whether it is an "Intranet" or the Internet itself.
Be careful when choosing your Web server though. My recommendation goes for the Purveyor
WebServer (http://www.process.com), which has much to offer your existing application and user base.
For instance, Purveyor allows you to use existing user authentication and authorization systems or take
advantage of user authentication and authorization using Purveyor. LAN-based applications can also use
Purveyor’s encryption services if desired. Also, since Purveyor can be configured as a proxy server, it
may also be used to allow secure Internet access for users on the LAN. You may also want to consider
the added user interactivity unique to Web technology.
The reason I am highlighting this is that considering these design elements beforehand will save you programming time. Regardless of your Web server, depending on what you wish to do, you may not have many options when choosing how to access server functions, which will be through either of two major interfaces: the Common Gateway Interface (CGI) or the Internet Server Application Programming Interface (ISAPI). CGI provides a versatile interface that is portable between systems. ISAPI is much faster but requires that you write a Windows DLL, which is not a trivial programming exercise.
All things considered, the Internet Server Application Programming Interface (ISAPI) is a high-performance interface to back-end applications running on your Web server. Based on its own DLL, ensuring a significant performance gain over CGI, ISAPI is easy to use, well documented, and does not require complex programming. These approaches are often combined: some parts of your interface program may call DLLs and others may use the CGI approach. So let's take a look at the CGI approach first and then the ISAPI one, so you can have a clear idea of what's involved as far as security.
Note:
Just for the record, if you’re interested in CGI scripts, a good CGI tutorial can be found at URL
http://hoohoo.ncsa.uiuc.edu/cgi/.
CGI
The Common Gateway Interface is a standard method for writing programs to work with World Wide
Web servers. Programs that use the Common Gateway interface, referred to as CGI scripts, usually take
input from HTML forms to execute particular tasks. Developers may find it appropriate to use CGI in
cases where ease of development and portability to other operating systems are important. CGI scripts
are simple to write, and since the user interface is HTML, the CGI script can be initiated by any client
that can run a browser.
As you know, users interact with Web Servers by filling in and submitting HTML forms or clicking on
links in HTML documents. Through these HTML forms or links, the Web can be used to obtain
important information and perform specific tasks. Routine tasks can be moved on-line, facilitating
collaboration on projects between individuals and groups. HTML forms can also allow users to specify
what information they want to obtain and what tasks they want to perform.
A CGI script can be an individual executable program or a chain of programs that can be started by the
Purveyor Server in response to a client request. A typical CGI script may, for instance, take a keyword
that a user has submitted in an HTML form and search for that keyword in a specific document or group
of documents. When a user enters this keyword and submits it, the server passes this data to the CGI
script. This program performs operations with the data, sending it back or passing it along to other
applications as specified. When the data finally returns to the server, it is re-formatted into HTML and
shipped back to the requesting client. Figure 5.1 illustrates this process.
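To make this flow concrete, here is a minimal CGI program written in C (a sketch only, not tied to Purveyor or any particular server; the field names are hypothetical). The server hands the submitted data to the program through environment variables such as QUERY_STRING, and whatever the program writes to standard output, beginning with an HTTP header, is shipped back to the client:

/* Minimal CGI sketch: echo the submitted query string back as an HTML page. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* For a GET request the form data arrives in QUERY_STRING,
       e.g. "keyword=firewall". */
    const char *query = getenv("QUERY_STRING");

    /* Every CGI response starts with a header block and a blank line. */
    printf("Content-Type: text/html\r\n\r\n");

    printf("<HTML><BODY>\n");
    if (query != NULL && query[0] != '\0') {
        /* A real script must validate and escape user input before using
           or echoing it; see the CGI security discussion in chapter 6. */
        printf("<P>The server passed this data to the script: %s</P>\n", query);
    } else {
        printf("<P>No query string was submitted.</P>\n");
    }
    printf("</BODY></HTML>\n");
    return 0;
}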
However, CGIs have their limitations. In designing CGI scripts, bear in mind that each time the Web
server executes a script it creates a new process and a new drain on available resources. This is one of the
less attractive characteristics of the CGI method. It requires the server to spawn a new process every time
a client invokes a CGI script. Each CGI call therefore consumes CPU time and server resources so that
many simultaneous requests slow the entire system significantly. This problem can become particularly
serious on a busy server with many concurrent requests. Consequently, the more calls there are to an
application, the less suited it may be to CGI scripting because of the load this places on the server.
Bear in mind also that applications that use the power of corporate and business-to-business intranets
often experience many more "hits" per hour than even the most popular internet Web sites.
Furthermore, CGI programs work within the constraints of the HTTP server. They communicate with the
user through a stateless protocol, so they "forget" every previous transaction. There is no way of creating
intensely interactive applications unless you arrange each step to re-transmit any information that has to
be "remembered" from previous steps. Although it is possible to write a program or a group of programs
that build on previous information, you must write them with this stateless environment in mind.
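For instance, a multi-step form usually has to carry its state forward in hidden fields, because the server will have forgotten everything by the time the next request arrives. The following fragment is a sketch of that technique (the field and script names are made up for illustration):

/* Sketch: carrying state across stateless CGI requests in a hidden field. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Whatever arrived with this request, e.g. "step=2&name=Ana". */
    const char *state = getenv("QUERY_STRING");

    printf("Content-Type: text/html\r\n\r\n");
    printf("<HTML><BODY><FORM METHOD=\"POST\" ACTION=\"/cgi-bin/wizard\">\n");

    /* Anything the next step must "remember" has to travel with the client,
       typically in hidden fields; real code must escape this value first. */
    printf("<INPUT TYPE=\"HIDDEN\" NAME=\"previous\" VALUE=\"%s\">\n",
           state ? state : "");
    printf("<INPUT TYPE=\"SUBMIT\" VALUE=\"Next step\">\n");
    printf("</FORM></BODY></HTML>\n");
    return 0;
}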
Internet Server API (ISAPI)
The ISAPI alternative extends the server through Extension DLLs rather than separate processes, which brings two immediate advantages:
● Extension DLLs load into the server’s process space—eliminating the time and resource demands of creating additional processes.
● All resources available to the server are also available to its DLLs.
In addition, the server can manage DLLs, pre-loading commonly used ones and unloading those that remain unused for some (configurable) period of time. The primary disadvantage in using an Extension DLL is that a DLL crash can cause a server crash.
These advantages make ISAPI an ideal interface for supporting server applications subject to heavy traffic in corporate intranets. As a matter of fact, the greater the degree of interactivity required of a Web server application, the more the application may be suited to an ISAPI interface. For example, engineers at Process Software use the ISAPI method to support the Purveyor Web Server's remote server management (RSM) application for just this reason. A sample screen from the RSM application is shown in Figure 5.2.
The particular method used for ISAPI is called run-time dynamic linking. In this method, an existing
program uses the LoadLibrary and GetProcAddress functions to get the starting address of DLL
functions, calls them through a common entry point called HttpExtensionProc(), and communicates with
them through a data structure called an Extension Control Block.
The other method is called load-time dynamic linking, which requires building the executable module of
the main application (the server) while linking with the DLL’s import library. This method is not suitable
for our purposes since it presents barriers to efficient server management of DLL applications.
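In rough terms, the run-time method described above looks like the following sketch (illustrative only; a real server caches the module handle and adds error handling and thread synchronization):

/* Sketch: how a server can load and call an Extension DLL at run time. */
#include <windows.h>
#include <httpext.h>    /* ISAPI declarations (HSE_VERSION_INFO, EXTENSION_CONTROL_BLOCK) */

typedef BOOL  (WINAPI *PFN_GETEXTENSIONVERSION)(HSE_VERSION_INFO *);
typedef DWORD (WINAPI *PFN_HTTPEXTENSIONPROC)(EXTENSION_CONTROL_BLOCK *);

DWORD CallExtension(const char *dllPath, EXTENSION_CONTROL_BLOCK *pecb)
{
    HSE_VERSION_INFO ver;
    HINSTANCE hDll = LoadLibraryA(dllPath);     /* load the DLL into the server's process */
    if (hDll == NULL)
        return HSE_STATUS_ERROR;

    /* Resolve the two mandatory entry points by name. */
    PFN_GETEXTENSIONVERSION pfnVersion =
        (PFN_GETEXTENSIONVERSION)GetProcAddress(hDll, "GetExtensionVersion");
    PFN_HTTPEXTENSIONPROC pfnProc =
        (PFN_HTTPEXTENSIONPROC)GetProcAddress(hDll, "HttpExtensionProc");

    /* Refuse DLLs that do not conform to the specification. */
    if (pfnVersion == NULL || pfnProc == NULL || !pfnVersion(&ver)) {
        FreeLibrary(hDll);
        return HSE_STATUS_ERROR;
    }

    return pfnProc(pecb);   /* hand the request to the extension */
}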
How does the server handle the DLLs? The filename extension "DLL" in client requests is reserved for Dynamic Link Library files to be used through this Application Programming Interface. All Extension DLLs must be named in the form *.DLL, and no other type of Purveyor Server executable requested by a client may have a name of this form.
When the server gets a request to execute a DLL file, it takes the following steps:
● Checks to see if the requested DLL is already in memory, and loads it if it is not already present. If the DLL does not contain the entry point GetExtensionVersion, the server will not load it.
● Executes a call to the entry point GetExtensionVersion to verify that this DLL was written to
conform to the API standard. If the returned value is not valid the server unloads the DLL without
executing it.
● Executes a call to HttpExtensionProc to begin execution of the DLL.
● Responds as needed to the running DLL through the callback functions and the Extension Control
Block.
● Terminates the operation upon receipt of a return value. If there is a non-null log string, the server
writes the DLL’s log entry to its log.
All Extension DLLs must export two entry points:
● GetExtensionVersion() - the version of the API specification to which the DLL conforms.
This entry point is used as a check that the DLL was actually designed to meet this specification,
and specifies which version of this specification it uses. As additional refinements take place in the
future, there may be additions and changes which would make the specification number
significant. Table 5.1 shows a sample of a suitable definition in C.
● HttpExtensionProc() - the entry point for execution of the DLL. This entry point is similar to a
main() function in a script executable. It is the actual startup of the function and has a form (coded
in C) as described on Table 5.2.
Table 5.1 - Using GetExtensionVersion() as an entry point.
BOOL WINAPI GetExtensionVersion( HSE_VERSION_INFO *version )
{
version->dwExtensionVersion = HSE_VERSION_MAJOR;
version->dwExtensionVersion = version->dwExtensionVersion << 16;
version->dwExtensionVersion = version->dwExtensionVersion | HSE_VERSION_MINOR;
sprintf( version->lpszExtensionDesc, "%s", "This is a sample Extension DLL" );
return TRUE;
}
Upon termination, ISAPI programs must return one of the following codes:
HSE_STATUS_SUCCESS
The Extension DLL has finished processing and the server can disconnect and free up allocated
resources.
HSE_STATUS_SUCCESS_AND_KEEP_CONN
The Extension DLL has finished processing and the server should wait for the next HTTP request if the client supports persistent connections. The Extension should only return this code if it was able to send the correct Content-Length header to the client.
HSE_STATUS_PENDING
The Extension DLL has queued the request for processing and will notify the server when it has finished
(see HSE_REQ_DONE_WITH_SESSION under the Callback Function ServerSupportFunction ).
HSE_STATUS_ERROR
The Extension DLL has encountered an error while processing the request and the server can disconnect
and free up allocated resources.
There are four Callback Functions used by DLLs under this specification:
● GetServerVariable - obtains information about a connection or about the server itself. The
function copies information (including CGI variables) relating to an HTTP connection or the
server into a buffer supplied by the caller. If the requested information pertains to a connection, the
first parameter is a connection handle. If the requested information pertains to the server, the first
parameter may be any value except NULL.
● ReadClient - reads data from the body of the client's HTTP request. It reads information from the
body of the Web client's HTTP request into the buffer supplied by the caller. Thus, the call might
be used to read data from an HTML form which uses the POST method. If more than *lpdwSize
bytes are immediately available to be read, ReadClient will return after transferring that amount of
data into the buffer. Otherwise, it will block waiting for data to become available. If the client’s
socket is closed, it will return TRUE but with zero bytes read.
● WriteClient - writes data to the client. This function sends information to the Web client from the buffer supplied by the caller.
● ServerSupportFunction - provides the Extension DLLs with some general purpose functions as
well as functions that are specific to HTTP server implementation. This function sends a service
request to the server.
The server calls your application DLL at HttpExtensionProc() and passes it a pointer to the ECB
structure. Your application DLL then decides what exactly needs to be done by reading all the client
input (by calling the function GetServerVariable() ). This is similar to setting up environment variables
in a Direct CGI application.
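Putting these pieces together, a skeletal Extension DLL might look like the sketch below. This is an illustration only, assuming the standard declarations from the httpext.h header; a complete extension also needs the GetExtensionVersion entry point shown in Table 5.1, plus real buffer management and error handling.

/* Sketch of a minimal ISAPI Extension DLL entry point. */
#include <windows.h>
#include <httpext.h>
#include <stdio.h>

DWORD WINAPI HttpExtensionProc(EXTENSION_CONTROL_BLOCK *pecb)
{
    char  query[512];
    DWORD cbQuery = sizeof(query);
    char  body[1024];
    DWORD cbBody;

    /* The ISAPI counterpart of reading the QUERY_STRING environment
       variable in a CGI program. */
    if (!pecb->GetServerVariable(pecb->ConnID, "QUERY_STRING", query, &cbQuery))
        query[0] = '\0';

    /* Real code must validate and escape user-supplied data here. */
    cbBody = (DWORD)sprintf(body,
        "<HTML><BODY><P>Query string: %.500s</P></BODY></HTML>", query);

    /* Send the status line and headers, then the body, back to the client. */
    pecb->ServerSupportFunction(pecb->ConnID, HSE_REQ_SEND_RESPONSE_HEADER,
                                "200 OK", NULL,
                                (LPDWORD)"Content-Type: text/html\r\n\r\n");
    pecb->WriteClient(pecb->ConnID, body, &cbBody, 0);

    /* Tell the server it can disconnect and free the request's resources. */
    return HSE_STATUS_SUCCESS;
}

Note how GetServerVariable plays the role that environment variables such as QUERY_STRING play for a CGI program.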
Since the DLL is loaded into the same process address space as that of the HTTP server, an access violation by the Extension DLL crashes the server application. Ensure the integrity of your DLL by testing it thoroughly. DLL errors can also corrupt the server's memory space or may result in memory or resource leaks. To take care of this problem, a server should wrap the Extension DLL entry point in a "try/except" clause so that access violations or other exceptions will not directly affect the server. For more information on the "try/except" clause, refer to the help section on the C/C++ Language under Visual C++ v2.0 help.
Although it may initially require more development resources to write the DLLs needed to run ISAPI
applications, the advantages of using ISAPI are evident. ISAPI makes better use of system resources by
keeping shared functions in a single library, and spawning only a single process for applications invoked
by more than one client. The fact that the server pre-loads these libraries at startup ensures quicker
program performance and faster server response time. Finally, the quickness and efficiency of ISAPI
make it well suited for applications that require user interaction and that may be subject to heavy traffic,
such as those that take full advantage of the intranet.
Note:
For more information on ISAPI programming, you may wish to participate in the Microsoft forum
- ISAPI-L. You can subscribe by sending e-mail to:
[email protected].
Include a one-line message with the body:
SUBSCRIBE ISAPI-L <firstname><lastname>
To send messages to the mailing list, e-mail them to:
[email protected]
Microsoft has also made several PowerPoint presentations that deal with ISAPI development
available at the following URL: http://www.microsoft.com/intdev/pdc/pdcserv.htm. These
presentations describe ISAPI advantages, filters and programming techniques while providing
several examples of ISAPI applications.
Tip:
For your information, ISAPI is available on Purveyor for Windows NT and Purveyor for
OpenVMS, developed by Process Software Corp. For additional information and program
documentation, check their Web site at URL http://www.process.com/news/spec.htp.
Caution:
If you want to try the exploit above, check the URL http://www.ntsecurity.net/security/webguest.htm, which has a DLL called REVERT.DLL that you can run from any Intel-based IIS box. The script, once downloaded to your scripts directory on the IIS machine and executed, will create a directory called C:\IIS-REVERT-TEST without your authorization!
Tip:
To learn more about ISAPI, check the ISAPI Tutorials page at URL
http://www.genusa.com/isapi/isapitut.htm
NSAPI
Netscape Server Application Programming Interface (NSAPI) is Netscape's version of ISAPI, which also works on UNIX systems that support shared objects, and can be used as a framework for implementing custom facilities and mechanisms. However, NSAPI groups a series of functions to be used specifically with the Netscape Server, allowing it to extend the core functionality of the Netscape Server. According to Netscape (http://developer.netscape.com/misc/developer/conference/proceedings/s5/sld002.html), NSAPI provides flexibility, control, efficiency, and multi-platform solutions, which include but are not limited to:
● Faster CGI-type functions
● Database connectivity
● Customized logging
● Version control
● Plug-in applications
Chapter 6
The APIs Security Holes and Its Firewall
Interactions
An Application Program Interface (API) is the specific method prescribed by a computer operating system by which a
programmer writing an application program can make requests of the operating system.
An API can be contrasted with an interactive user interface or a command interface as an interface to an operating system. All
of these interfaces are essentially requests for system services.
As discussed on chapter 5, "Firewalling Challenges: the Advanced Web," under the sections on ISAPI and NSAPI, API
provides another alternative to CGI, SSI and Server-Side Scripting for working with Web servers, creating dynamic
documents, as well as providing other services via the Web.
However, I believe that, for the most part, you should try to develop Web-centric applications not only with APIs, but also using SSI, CGI and SSS technology, which is discussed in more detail in chapter 5. The reason I say this is because I also believe there has been too much media hype lately about pseudo-standard API technology. Much of it is about its speed when compared to CGI scripts, but this information overlooks some vital facts: your choice of Web server should be heavily influenced by its SSI, SSS, and CGI capabilities and efficiency as well as its support for advanced API programming. Otherwise, you gain nothing. O'Reilly has a great paper at their Web site (http://website.ora.com/devcorner/white/extending.html) which discusses this issue and the key characteristics and tradeoffs of using the four main server extension techniques: SSI, SSS, CGI and API.
For now, let's take a look at the security issues involving APIs and their applications.
Sockets
A socket is one end-point of a two-way communication link between two programs running on the network. For instance, a
server application usually listens to a specific port waiting for connection requests from a client. When a connection request
arrives, the client and the server establish a dedicated connection over which they can communicate. During the connection
process, the client is assigned a local port number, and binds a socket to it. The client talks to the server by writing to the
socket and gets information from the server by reading from it. Similarly, the server gets a new local port number, while
listening for connection requests on the original port. The server also binds a socket to its local port and communicates with the
client by reading from and writing to it. The client and the server must agree on a protocol before data starts being exchanged.
The following program is a simple example of how to establish a connection from a client program to a server program through the use of sockets; it was extracted from Sun's Web site at URL http://java.sun.com/docs/books/tutorial/networking/sockets/readingWriting.html. I encourage you to check the site out for more in-depth information about it and about the use of java.net.Socket, a very versatile API.
The Socket class in the java.net package is a platform-independent implementation of the client end of a two-way
communication link between a client and a server. The Socket class sits on top of a platform-dependent implementation, hiding
the details of any particular system from your Java program. By using the java.net Socket class instead of relying on native
code, your Java programs can communicate over the network in a platform-independent fashion.
This client program, EchoTest, connects to the standard Echo server (on port 7) via a socket. The client both reads from and writes to the socket. EchoTest sends all text typed into its standard input to the Echo server by writing the text to the socket. The server echoes all input it receives from the client back through the socket to the client. The client program reads and displays the data passed back to it from the server:
import java.io.*;
import java.net.*;
public class EchoTest {
    public static void main(String[] args) {
        Socket echoSocket = null;
        DataOutputStream os = null;
        DataInputStream is = null;
        DataInputStream stdIn = new DataInputStream(System.in);
        try {
            echoSocket = new Socket("taranis", 7);
            os = new DataOutputStream(echoSocket.getOutputStream());
            is = new DataInputStream(echoSocket.getInputStream());
        } catch (UnknownHostException e) {
            System.err.println("Don't know about host: taranis");
        } catch (IOException e) {
            System.err.println("Couldn't get I/O for the connection to: taranis");
        }
        if (echoSocket != null && os != null && is != null) {
            try {
                String userInput;
                while ((userInput = stdIn.readLine()) != null) {
                    os.writeBytes(userInput);
                    os.writeByte('\n');
                    System.out.println("echo: " + is.readLine());
                }
                os.close();
                is.close();
                echoSocket.close();
            } catch (IOException e) {
                System.err.println("I/O failed on the connection to: taranis");
            }
        }
    }
}
Let's walk through the program and investigate the interesting bits.
The following three lines of code within the first try block of the main() method are critical--they establish the socket
connection between the client and the server and open an input stream and an output stream on the socket:
echoSocket = new Socket("taranis", 7);
os = new DataOutputStream(echoSocket.getOutputStream());
is = new DataInputStream(echoSocket.getInputStream());
The first line in this sequence creates a new Socket object and names it echoSocket. The Socket constructor used here (there
are three others) requires the name of the machine and the port number that you want to connect to. The example program uses
the hostname taranis, which is the name of a (hypothetical) machine on our local network. When you type in and run this
program on your machine, you should change this to the name of a machine on your network. Make sure that the name you use
is the fully qualified IP name of the machine that you want to connect to. The second argument is the port number. Port
number 7 is the port that the Echo server listens to.
The second line in the code snippet above opens an output stream on the socket, and the third line opens an input stream on the
socket. EchoTest merely needs to write to the output stream and read from the input stream to communicate through the socket
to the server. The rest of the program achieves this. If you are not yet familiar with input and output streams, you may wish to
read Input and Output Streams.
The next section of code reads from EchoTest's standard input stream (where the user can type data) a line at a time.
EchoTest immediately writes the input text followed by a newline character to the output stream connected to the socket.
String userInput;
while ((userInput = stdIn.readLine()) != null) {
    os.writeBytes(userInput);
    os.writeByte('\n');
    System.out.println("echo: " + is.readLine());
}
The last line in the while loop reads a line of information from the input stream connected to the socket. The readLine() method blocks until the server echoes the information back to EchoTest. When readLine() returns, EchoTest prints the information to the standard output.
This loop continues--EchoTest reads input from the user, sends it to the Echo server, gets a response from the server and
displays it--until the user types an end-of-input character.
When the user types an end-of-input character, the while loop terminates and the program continues, executing the next three
lines of code:
os.close();
is.close();
echoSocket.close();
These lines of code fall into the category of housekeeping. A well-behaved program always cleans up after itself, and this
program is well-behaved. These three lines of code close the input and output streams connected to the socket, and close the
socket connection to the server. The order here is important--you should close any streams connected to a socket before you
close the socket itself.
This client program is straightforward and simple because the Echo server implements a simple protocol. The client sends text to the server, and the server echoes it back. When your client programs are talking to a more complicated server such as an HTTP server, your client program will also be more complicated. However, the basics are much the same as they are in this program:
1. Open a socket.
2. Open an input stream and output stream to the socket.
3. Read from and write to the stream according to the server's protocol.
4. Close streams.
5. Close sockets.
Only step 3 differs from client to client, depending on the server. The other steps remain largely the same.
But knowing how a socket works, even if you are using reliable code such as the above, does not necessarily make your system immune to security holes and threats. It all depends on the environment you're in. Security holes generated by sockets will vary depending on what kind of threat they can allow, such as:
● Denial of service
BSD sockets
Daniel L. McDonald (Sun Microsystems, USA), Bao G. Phan (Naval Research Laboratory, USA) and Randall J. Atkinson
(Cisco Systems, USA) wrote a paper entitled "A Socket-Based Key Management API (and Surrounding Infrastructure),"
which can be found at the URL http://info.isoc.org/isoc/whatis/conferences/inet/96/proceedings/d7/d7_2.htm, that addresses
the security concerns expressed by the Internet Engineering Task Force (IETF) in this area.
The IETF has advanced a security architecture for the Internet Protocol to Proposed Standard [Atk95a, Atk95b, Atk95c]. The presence of these security mechanisms in the Internet Protocol does not, by itself, ensure good security. The establishment and maintenance of cryptographic keys and related security information, also known as key management, is also crucial to effective security. Key management for the Internet Protocol is a subject of much experimentation and debate [MS95] [AMP96a] [AMP96b] [Orm96]. Furthermore, key management strategies have a history of subtle flaws that are not discovered until after they are published or deployed [NS87].
McDonald, Phan, and Atkinson's paper proposes an environment which allows implementations of key management strategies to exist outside the operating system kernel, where they can be implemented, debugged, and updated in a safe environment. The
Internet Protocol suite has gained popularity largely because of its availability in the Berkeley Software Distribution (BSD)
versions of the Unix operating system. Even though many commercial operating systems no longer use the BSD networking
implementation, they still support BSD abstractions for application programmers, such as the sockets API [LMKQ89]. The
sockets interface allows applications in BSD to communicate with other applications, or sometimes, even with the operating
system itself. One of the recent developments in BSD was the routing socket [Skl91], which allows a privileged application to
alter a node's network routing tables.
This abstraction allows a BSD system to use an appropriate routing protocol, without requiring changes inside the kernel.
Instead, routing protocols are implemented in user-space daemons, such as routed or gated.
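For comparison with the Java EchoTest client shown earlier, here is a rough equivalent written directly against the BSD sockets API in C (a sketch only; "taranis" is the same hypothetical host, and error handling is abbreviated):

/* Minimal BSD sockets echo client: connect to port 7 and echo typed lines. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    struct hostent    *host = gethostbyname("taranis");
    struct sockaddr_in addr;
    char               line[512], reply[512];
    int                s, n;

    if (host == NULL)
        return 1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(7);                       /* the echo service        */
    memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);

    s = socket(AF_INET, SOCK_STREAM, 0);              /* create the socket ...   */
    if (s < 0 || connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;                                     /* ... and connect to it   */

    while (fgets(line, sizeof(line), stdin) != NULL) {
        write(s, line, strlen(line));                 /* send a line to the server */
        n = read(s, reply, sizeof(reply) - 1);        /* read the echoed line back */
        if (n <= 0)
            break;
        reply[n] = '\0';
        printf("echo: %s", reply);
    }

    close(s);
    return 0;
}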
Windows sockets
Windows Sockets Version 2.0 provides a powerful and flexible API for creating universal TCP/IP applications. You can
create any type of client or server TCP/IP application with an implementation of Windows Sockets specification. You can port
Berkeley Sockets applications and take advantage of the message-based Microsoft Windows programming environment and
paradigm.
Tip:
To know more about sockets, check the book "Network Programming with Windows Sockets," by
Pat Bonner, which is a great helper. She writes with a tech-talk-avoiding clarity I've not seen in
any other books on the subject.
The WinSock 2 specification has two distinct parts: the API for application developers, and the SPI for protocol stack and namespace service providers. The intermediate DLL layers are independent of both the application developers and the service providers. These DLLs are provided and maintained by Microsoft and Intel. Layered Service Providers would appear in this picture as one or more layers on top of a transport service provider.
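In practice, the most visible difference from BSD sockets is that a WinSock program must initialize the library before making any socket calls and release it when done. A minimal sketch (assuming you link with ws2_32.lib):

/* WinSock sketch: initialize the library, then use sockets as usual. */
#include <winsock2.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;

    /* Ask for WinSock version 2.0 before any other socket call. */
    if (WSAStartup(MAKEWORD(2, 0), &wsa) != 0) {
        fprintf(stderr, "WinSock 2 is not available\n");
        return 1;
    }

    printf("Using %s\n", wsa.szDescription);

    /* ... create sockets, connect, send and receive here ... */

    WSACleanup();        /* release the WinSock library when done */
    return 0;
}

After WSAStartup succeeds, the familiar socket, connect, send and recv calls work much as they do in the BSD example above.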
Tip:
For more information about Windows Socket, check the URL http://www.sockets.com. The
information you will find there can help you with your Windows Sockets (WinSock) application
development. There are lots of useful information there, including sample source code, detailed
reference files, and web links. Most of this material comes out of the book "Windows Sockets
Network Programming," which provides a detailed introduction, and complete reference to
WinSock versions 1.1 and 2.0.
Java APIs
Java Enterprise APIs support connectivity to enterprise databases and legacy applications. With these APIs, corporate developers are building distributed client/server applets and applications in Java that run on any OS or hardware platform in the enterprise. Java Enterprise currently encompasses four areas: JDBC, Java IDL, Java RMI and JNDI. For more information about these APIs I recommend you check the JavaLink site at URL http://java.sun.com/products/api-overview/index.html.
Joseph Bank ([email protected]), from MIT, wrote a paper discussing Java security issues. The document is available at URL http://www.swiss.ai.mit.edu/~jbank/javapaper/javapaper.html.
Bank discusses the potential problems raised by executable content, such as in Java. As he comments, the advantages of
executable content come from the increase in power and flexibility provided by software programs. The increased power of
Java applets (the Java term for executable content) is also the potential problem. When a user is surfing the Web, they should
not have to worry that an applet may be deleting their files or sending their private information over the network
surreptitiously.
The essence of the problem is that running programs on a computer typically gives that program access to certain resources on
the host machine. In the case of executable content, the program that is running is untrusted.
If a Web browser that downloads and runs Java code is not careful to restrict the access that the untrusted program has, it can
provide a malicious program with the same ability to do mischief as a hacker who had gained access to the host machine.
Unfortunately, the solution is not as simple as completely restricting a downloaded program's access to resources. The reason that one gives programs access to resources in the first place is that, in order to be useful, a program needs to access certain resources. For example, a text editor that cannot save files is useless. Thus, if one desires to have useful and secure executable content, access to resources needs to be carefully controlled.
As Bank concludes in his paper, "the security measures of Java provide the ability to tilt this balance whichever way is
preferable. For a system where security is of paramount importance, using Java does not make sense; it is not worth the added
security risk. For a system such as a home computer, many people are likely to find that the benefits of Java outweigh the risks.
By this same token, a number of systems are not connected to the Internet because it is a security risk that outweighs the
benefits of using the Internet. Anyone that is considering using Java needs to understand that it does increase the security risk,
but that it does provide a fairly good "firewall."
Perl modules
Briefly described, Perl is the Practical Extraction and Report Language. Perl for Win32 is a port of most of the functionality in Perl, with some extra Win32 API calls thrown in so that you can take advantage of native Windows functionality; it runs on Windows 95 and Windows NT 3.5 and later.
There is a module in this package, Perl for ISAPI, which is an ISAPI DLL that runs Perl scripts in-process with Internet Information Server (IIS) and other ISAPI-compliant Web servers. This provides better performance, at the cost of some functionality.
The following is sample code written in PerlScript, extracted from the ActiveWare Internet Corp. site, at URL http://www.activestate.com/PerlScript/showsource.asp?filename=hello.asp&URL=/PerlScript/hello.asp. This sample gives an example of how versatile and portable this scripting is.
HTML Source for: /PerlScript/hello.asp
<HTML>
<HEAD>
<!-- ... (page header omitted in this excerpt) ... -->
</HEAD>
<BODY>
<!-- ... (static page layout omitted) ... -->
</TD></TR></TABLE>
<%
    $url    = $Request->ServerVariables('PATH_INFO')->item;
    $_      = $Request->ServerVariables('PATH_TRANSLATED')->item;
    s/[\/\\](\w*\.asp\Z)//m;
    $params = 'filename='."$1".'&URL='."$url";
%>
<A HREF="showsource.asp?<%=$params%>">
</BODY>
</HTML>
There is a lot written about Perl out there, and it doesn't make sense to discuss Perl at length in a firewall book. Nevertheless, I would like to comment a little bit on Perl for Win32, by ActiveWare Internet Corp. (http://www.activeware.com/), as it closely interacts with ISAPI, playing a role in API security.
Perl for Win32 refers to a port of the Perl programming language to the Win32 platform. Please note that Perl for Win32 does not run on Windows 3.11 and Win32s.
You should be careful with these modules, as most of them are distributed AS-IS, without any guarantee to work. If a module
doesn’t work, chances are:
● Some of the functions are not provided by Perl for Win32
● Some of the UNIX tools being used are not available on Win32 platforms, or
● It makes assumptions about the way files are handled that aren't valid on Win32 platforms
Also, be careful with Perl for ISAPI build 307, which doesn't work due to a problem with POST. ActiveWare asks that you continue to use build 306; as soon as this bug is fixed it should be announced on the Perl-Win32-Announce mailing list.
CGI Scripts
Typically, CGI scripts are insecure, and Perl CGI scripts are no exception to the rule, especially those affecting Web-centric applications, such as browsers.
Take for example the Netscape server. It does not use Windows NT’s File Manager's associations between file extensions and
applications. Consequently, even though you may have associated the extension .pl with the Perl interpreter, Perl scripts are
not recognized as such when placed in the cgi-bin directory. In order to work around this problem, an earlier Netscape technical
note suggested that you place the perl.exe file into the cgi-bin directory and refer to your scripts as
/cgi-bin/perl.exe?&my_script.pl.
However, this was a bad, very bad idea! This technique allowed anyone on the Internet to execute an arbitrary set of Perl commands right on your server just by invoking scripts such as /cgi-bin/perl.exe?&-e+unlink+%3C*%3E, which, once run, will erase all files stored in your server's current directory! This was bad news! A more recent Netscape technical note suggested encapsulating your Perl scripts in a .bat file. However, because of a related problem with batch scripts, this is still not safe.
Because the EMWACS, Purveyor and WebSite NT servers all use the File Manager extension associations, you can execute
perl scripts on these servers without placing perl.exe into cgi-bin. They are safe from this bug.
The NCSA httpd is also affected by CGI scripts with security holes. Versions of the NCSA httpd prior to 1.4 contain a serious security hole relating to a fixed-size string buffer, which allows remote users to break into systems running this server by requesting an extremely long URL. Even though this bug has been well publicized for more than a couple of years, many sites are still running unsafe versions of this server. From version 1.5 on, the bug is fixed.
But not so long ago, it was found that the example C code (cgi_src/util.c) usually distributed with the NCSA httpd as a boilerplate for writing safe CGI scripts omitted the newline character from the list of characters to be escaped. This omission introduced a serious bug into CGI scripts built on top of this template: a remote user could exploit it to force the CGI script to execute an arbitrary UNIX command. This is another example of the dangers of executing shell commands from CGI scripts.
The Apache server, versions 1.02 and earlier, also contains this hole in both its cgi_src/ and src/ subdirectories. The patch to fix these holes in the two util.c files is not complicated. You will have to recompile "phf" and any CGI scripts that use this library after applying the GNU patch, which can be found at URL ftp://prep.ai.mit.edu/pub/gnu/patch-2.1.tar.gz.
Here is the relevant portion of the patch; the only change is the addition of the newline (\n) to the list of characters that get escaped:
tulip% cd ~www/ncsa/cgi_src
tulip% cd ../src
***************
*** original util.c (the newline is missing from the escape list)
  l=strlen(cmd);
  for(x=0;cmd[x];x++) {
!   if(ind("&;`'\"|*?~<>^()[]{}$\\",cmd[x]) != -1){
      for(y=l+1;y>x;y--)
        cmd[y] = cmd[y-1];
--- patched util.c ("\n" added to the escape list)
  l=strlen(cmd);
  for(x=0;cmd[x];x++) {
!   if(ind("&;`'\"|*?~<>^()[]{}$\\\n",cmd[x]) != -1){
      for(y=l+1;y>x;y--)
        cmd[y] = cmd[y-1];
ActiveX
As I mentioned in chapter 5, "Firewalling Challenges: The Advanced Web," you should not consider an ActiveX applet secure.
However, you should understand that ActiveX is only as secure as its architecture, design, implementation and environment permit. Microsoft has never really claimed that ActiveX is secure; its use of digital signatures is intended only to the extent that it allows you to prove who the originator of the applet was. As I commented in chapter 5, if Microsoft were attempting to do any further security on ActiveX, the WinVerifyTrust API implementation would be checking the signature against the CA every time the object was accessed. But again, dealing with certificate revocation is a lot of work!
The way it is implemented, this check is done once and recorded; subsequent accesses first check to see if the object has been previously authorized, and if so, it will be used. So if a CA invalidates an object, anyone who had previously accessed the object would continue to use the malicious object without question. But you don't have to rely on this alone in order to obtain some level of security. As discussed in chapter 14, "Types of Firewalls," there are products nowadays that filter ActiveX and Java applets.
But don't put all your eggs into a single basket, or firewall. Of course, firewalls are needed, but you will also need virus scanning, applet filters, encryption and so on. Also, you must understand that all these security technologies are simply an artifact of our inability (lack of time, knowledge, money, who knows?) to dig deeper into the foundation of any security model: a complete and well-elaborated security policy, which is followed and enforced. It's usually because we don't want to deal with it that we look for fixes such as firewalls. Thus, you must understand that these products and techniques are tools, and you'll need to come up with the "intel," the knowledge, anyway.
A firewall should be for you what a word processor is for me when I write this book. It doesn’t matter if I use a Pentium Pro or
a 486 PC, with a word processor such as Microsoft Word or FrameMaker to write the book. Surely these tools will help me to
spell the text, write faster and so on, but if I don’t have a clear picture of what I want to accomplish with my writings, nothing
will help me.
ActiveX DocObjects
The new ActiveX DocObjects technology allows you to edit a Word document in Internet Explorer (IE) just by selecting a hyperlink displayed by the browser. After clicking on a hyperlink to a Microsoft Word document, the document is displayed in Internet Explorer's window. The Word menus and toolbars are displayed along with those from Internet Explorer, as shown in Figure 6.1, extracted from the Macmillan/Sams.net site at URL http://www.mcp.com/sams/books/156-4/pax11.htm#I21.
This is what Microsoft calls visual editing, where Microsoft Word becomes activated in Internet Explorer's window. The
editing functions of both applications coexist on the Internet completely intact.
You can benefit greatly from this type of "online" document management and editing. It gives you a much greater maintenance
capability, which doesn't exist for most file formats. The problem of distributing your documentation is also alleviated merely
by putting the documents on the Web.
This technology is intended for use in both Internet Explorer 3.0 and in the Office95 Binder for visually editing various file
formats. The ActiveX DocObject technology uses a modified menu-sharing technique that more closely resembles the data's
own standalone native editing application.
What you should be careful about here is that clicking on a hyperlink to open a Word document on the Web could trigger a malicious applet. The same is true for Adobe's PDF files. When you click on a link or filename on the Web, you don't know what that document contains, or even if it will really open a Word document.
Distributed Processing
Distributed Processing (DP) is the process of distribution of applications and business logic across multiple processing
platforms, which implies that processing will occur on more than one processor in order for a transaction to be completed.
Thus, the processing is distributed across two or more machines and the processes are most likely not running at the same time,
as each process performs part of an application in a sequence.
Often the data used in a distributed processing environment is also distributed across platforms.
Don't confuse distributed processing with cooperative processing, which is computing that requires two or more distinct processors to complete a single transaction. Cooperative processing is related to both distributed and client/server processing. It is a form of distributed computing where two or more distinct processes are required to complete a single business transaction. Usually, these programs interact and execute concurrently on different processors.
Cooperative processing can also be considered to be a style of client/server processing if communication between processors is
performed through a message passing architecture.
Let's take a look at some examples.
XDR/RPC
XDR/RPC are routines used for describing the RPC messages in XDR language. They should normally be used by those who
do not want to use the RPC package directly. These routines return TRUE if they succeed, FALSE otherwise.
XDR routines allow C programmers to describe arbitrary data structures in a machine-independent fashion. Data for remote
procedure calls (RPC) are transmitted using these routines.
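As a small illustration of what that means in practice, the following sketch encodes an integer and a string into a memory buffer in XDR's machine-independent form (the field values are arbitrary; on some systems you may need to link against an RPC library such as libnsl or libtirpc):

/* XDR sketch: serialize an int and a string into a portable byte stream. */
#include <stdio.h>
#include <rpc/rpc.h>    /* brings in the XDR routines */

int main(void)
{
    char  buf[256];
    XDR   xdrs;
    int   version = 2;
    char *name    = "rstatd";

    /* Build an XDR stream over the buffer, in encode (serialize) mode. */
    xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);

    if (!xdr_int(&xdrs, &version) || !xdr_string(&xdrs, &name, 64)) {
        fprintf(stderr, "encoding failed\n");
        return 1;
    }

    printf("encoded %u bytes\n", (unsigned)xdr_getpos(&xdrs));
    xdr_destroy(&xdrs);
    return 0;
}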
Tip:
For a list of the XDR routines check the URL
http://www.doc.ic.ac.uk/~mac/manuals/solaris-manual-pages/solaris/usr/man/man3n/xdr.3n.html,
from which the above definition was extracted.
RPC
The rpc file is a local source containing user readable names that can be used in place of RPC program numbers. The rpc file
can be used in conjunction with or instead of other rpc sources, including the NIS maps "rpc.byname" and "rpc.bynumber" and
the NIS+ table "rpc".
The rpc file has one line for each RPC program name. The line has the following format:
name-of-the-RPC-program    RPC-program-number    aliases
Items are separated by any number of blanks and/or tab characters. A "#" indicates the beginning of a comment; characters from the "#" up to the end of the line are not interpreted by routines which search the file.
RPC-based middleware is a more general-purpose solution to client/server computing than database middleware. Remote
procedure calls are used to access a wide variety of data resources for use in a single application.
Messaging middleware takes the RPC philosophy one step further by addressing the problem of failure in the client/server
system. It provides synchronous or asynchronous connectivity between client and server, so that messages can be either
delivered instantly or stored and forwarded as needed.
Object middleware delivers the benefits of object-oriented technology to distributed computing in the form of object request
brokers. ORBs package and manage distributed objects, which can contain much more complex information about a
distributed request than an RPC or most messages and can be used specifically for unstructured or nonrelational data.
COM/DCOM
Database middleware as mentioned above, is used on database-specific environments. It provides the link between client and
server when the client application that accesses data in the server's database is designed to use only one database type.
TP monitors have evolved into a middleware technology that can provide a single API for writing distributed applications.
Transaction-processing monitors generally come with a robust set of management tools that add mainframe-like controls to
open distributed environments.
Proprietary middleware is a part of many client/server development tools and large client/server applications. It generally runs
well with the specific tool or application environment it is a part of, but it doesn't generally adapt well to existing client/server
environments, tools, and other applications.
There are other technologies, such as CORBA and ILU, that also support database middleware. For more information, check the middleware glossary of LAN Times at http://www.lantimes.com/lantimes/95aug/508b068a.html.
Chapter 7
What is an Internet/Intranet Firewall After All?
As I wrote in a book I co-authored with Ablan and Yanoff, by Sams.Net (Web Site Administrator’s
Survival Guide), the first time I heard about firewalls was with my mechanic. Seriously! He was
explaining to me that cars have this part that separates the engine block from the passenger compartment,
and it’s called a firewall. If the car explodes, the firewall protects the passengers.
Similarly, a firewall in computer terms protects your network from untrusted networks. On one side you
have a public network, without any kind of control over what is being done, how or where. On the other
side you have the production network of a company with a corporate network that must be protected
against any damaging action. Some even ask: if we really need to protect a corporate network, then why allow a public network, such as the Internet, to access it at all?
The reason is simple: it's a matter of survival! Companies rely more and more on the Internet to advertise their products and services. The Internet is growing tremendously; just like big marketplaces and shopping malls, more people are coming to it, and the more that come, the more security is necessary to guarantee the integrity of products sold, as well as the safety of those participating in this market (a.k.a. electronic commerce). It has become necessary to protect data, transmissions and transactions from any incidents, regardless of whether the cause is unintentional or a malicious act.
This chapter discusses the mechanisms used to protect your corporate network/Intranet and/or Web
servers against unauthorized access coming from the Internet or even from inside a protected network. It
also reviews what firewalls are after all and how important they are in providing a safe Internet
connection. You will learn about the following:
● The purpose of firewalls
However, as you will see, a firewall does more than protect you against the electronic version of
airbrushing someone else’s wall or breaking glass windows on the digital street. It will help you manage
a variety of aspects on your gate to the Web by keeping the jerks out while enabling you to concentrate
on your job.
Furthermore, have you ever thought about the functions of a United States Embassy in other countries? A
firewall can act just like one. As your corporate ambassador to the Internet, a firewall can control and
document the foreign affairs of your organization.
Note:
If you want more information on firewalls, a great site to visit is URL ftp://ftp.greatcircle.com/pub/firewalls. A firewall toolkit and papers are available at ftp://ftp.tis.com/pub/firewalls.
Tip:
Read more about ActiveX and Java vulnerabilities and security holes on chapter 5, "Firewalling
Challenges: The Advanced Web," under the section "Code in the Web."
This brings us to one of the main principles an access policy is particularly adept at enforcing: never provide access to servers or services that do not require it, since any unnecessary access is just one more thing for hackers to exploit.
Access Restrictions
Obviously, a firewall will very likely block certain services that users want, such as Telnet, FTP, X Window, NFS, and so on. These disadvantages are not unique to firewalls, however; network access could be restricted at the server level as well, depending on a site's security policy. A well-planned security policy that balances security requirements with user needs can help greatly to alleviate problems with reduced access to services.
Nonetheless, some sites might not lend themselves to a firewall due to their topology, or because they depend on services such as NFS in a way that would require a major restructuring of network use. For instance, you might depend on using NFS and NIS across major gateways. In this case, the relative cost of adding a firewall would need to be compared against the cost of exposure from not using one.
Firewall Components
The basic components in building a firewall include:
● Policy
● Advanced authentication
● Packet filtering
● Application gateways
The following topics give you a brief overview of each of these components and how they affect your
site’s security and, consequently, the implementation of your firewall.
The network-access policy that defines which services will be allowed or explicitly denied from the restricted network is the high-level policy. It also defines how these services will be used. The lower-level policy defines how the firewall will actually restrict access and filter the services defined in the higher-level policy. However, your policy must not become an isolated document sitting in a drawer or on a shelf; that would make it useless. The policy needs to become part of your company's security policy. Let's take a brief look at different types of security policies.
Flexibility Policy
If you are going to develop a policy to deal with Internet access, Web administration, and electronic services in general, it must be flexible. Your policy must be flexible because:
● The Internet itself changes every day, at a rate that no one can keep up with (including books, by the way). As the Internet changes, services offered through the Internet also change. With that, a company's needs will change as well, so you should be prepared to edit and adapt your policy accordingly without compromising security and consistency. But remember: a security policy should almost never change, but procedures should always be reviewed!
● The risks your company faces on the Internet are not static, either. They change every moment and are always growing. You should be able to anticipate these risks and adjust the security processes accordingly.
Service-Access Policy
When writing a service-access policy, you should concentrate on your company's user issues as well as dial-in policy, SLIP connections, and PPP connections. The policy should be an extension of your organizational policy regarding the protection of Information Systems (IS) resources in your company. Your service-access policy should be realistically complete. Make sure you have one drafted before implementing a firewall. The policy should provide a balance between protecting your network and providing user access to network resources.
A firewall can take one of two basic stances: permit any service unless it is expressly denied, or deny any service unless it is expressly permitted. A firewall that implements the first stance allows all services to pass into your site by default, except for those services that the service-access policy has determined should be disallowed. By the same token, if you decide to implement the second stance, your firewall will deny all services by default but will then permit those services that have been determined as allowed.
As you will surely agree, to have a policy that permits access to any service is not advisable because it
exposes the site to more threats.
Notice the close relationship between the high-level service-access policy and the lower-level one. This
relationship is necessary because the implementation of the service-access policy depends on the
capabilities and limitations of the firewall systems you are installing as well as the inherent security
problems that your Web services bring.
For example, some of the services you defined in your service-access policy might need to be restricted.
The security problems they can present cannot be efficiently controlled by your lower-level policy. If
your company relies on these services, which usually Web sites do, you probably will have to accept
higher risks by allowing access to those services. This relationship between both service-access policies
enables their interaction in defining both the higher-level and the lower-level policies in a consistent and
efficient way.
The service-access policy is the most important component in setting up a firewall. The other three
components are necessary to implement and enforce your policy. Remember: the efficiency of your
firewall in protecting your site will depend on the type of firewall implementation you will use, as well
as the use of proper procedures and the service-access policy.
Information Policy
As an Internet manager, or even LAN or web administrator, if you intend to provide information access
to the public, you must develop a policy to determine the access to the server (probably a web server) and
include it in your firewall design. Your server will already create security concerns on its own, but it
should not compromise the security of other protected sites that access your server.
You should be able to differentiate between an external user who accesses the server in search for
information and a user who will utilize the e-mail feature, if you are incorporating one, for example, to
communicate with users on the other side of the firewall. You should treat these two types of traffic
differently and keep the server isolated from other sites in the system.
Advanced Authentication
Despite all of the time and effort writing up policies and implementing firewalls, many incidents result
from the use of weak or unchanged passwords.
Passwords on the Internet can be cracked in many ways. The best password mechanism will also be
worthless if you have users thinking that their login name spelled backwards or a series of Xs are good
passwords!
The problem with passwords is that once an algorithm for creating them is specified, it merely becomes a
matter of analyzing the algorithm in order to find every password on the system. Unless the algorithm is
very subtle, a cracker can try out every possible combination of the password generator on every user on
the network. Also, a cracker can analyze the output of the password program and determine the algorithm
being used. Then he just needs to apply the algorithm to other users so that their passwords can be
determined.
Furthermore, there are programs freely available on the Internet to crack users' passwords. Crack, for
example, is a program written with the sole purpose of cracking insecure passwords. It is probably the
most efficient and friendly password cracker available at no cost. It even includes the ability to let the
user specify how to form the words to use as guesses at users' passwords. Also, it has a built-in
networking capability, which allows the load of cracking to be spread over as many machines as are
available on the network.
● Routers with two interfaces supporting subnets on the internal network, and
● Proxy firewalls where the proxy applications use the source IP address for authentication.
Please note that the attack shown in figure 7.2 won't work if you have a properly configured router.
Until a couple of years ago it would have worked, but after the Kevin Mitnick case all the router vendors
came out with fixes and told their customers to implement them. Most have, but the illustration is still
valid, as many UNIX servers will still accept source-routed packets and pass them on as the
source route indicates. Routers will accept source-routed packets as well, although most routers can
block source-routed packets.
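On many routers, blocking source-routed packets is a single configuration statement. On Cisco IOS, for
example, the standard global-configuration command is:
no ip source-route
Check your own router's documentation for the equivalent setting; the command above is only an
illustration of how simple the fix can be.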
A couple of years ago, the Computer Emergency Response Team (CERT) sent out a security alert
describing how hackers were using IP spoofing to break into many Internet sites. More than 23 million
university, business, government, and home computers connected to the Internet are exposed
to the threat of having information stolen, systems time-bombed, and data corrupted through worms,
Trojan horses, and viruses. All this, most of the time, for fun.
These kinds of attacks are usually aimed at applications that use authentication based on source IP addresses.
When the hacker can get the packet through, access to unauthorized data becomes available. Keep in mind
that the hacker doesn't have to get a reply packet back; this break-in is possible even without it.
Moreover, some network administrators tend to believe that disabling source routing at the router would
prevent it. Not so! It cannot protect the internal network from itself.
If you have a router to external networks that supports multiple internal interfaces, you should consider a
firewall because you are potentially exposed to hacker spoofing attacks. The same is true for routers with
two interfaces supporting subnets on the internal network, as well as proxy firewalls whose proxy
applications use the source IP address for authentication.
Packet Filtering
Usually, IP packet filtering is done using a router set up for filtering packets as they pass between the
router’s interfaces. These routers can filter IP packets based on the following fields:
● Source IP address
● Destination IP address
● TCP/UDP source port
● TCP/UDP destination port
Although not all packet-filtering routers can filter the source TCP/UDP port, most of them already have
this capability. Some routers examine which of the router's network interfaces a packet arrived at, and this
is used as an extra criterion. Unfortunately, most UNIX servers do not provide packet-filtering capability.
In order to block connections from or to specific web servers or networks, filtering can be applied in
many ways, including the blocking of connections to specific ports. For instance, you might decide to
block connections from addresses or sites that you consider untrustworthy, or you might decide to
block connections from all addresses external to your site; all this can be accomplished by filtering. You
can add a lot of flexibility simply by adding TCP or UDP port filtering to IP address filtering.
Servers such as the Telnet daemon usually reside at specific ports. If you enable your firewall to block
TCP or UDP connections to or from specific ports, you will be able to implement policies to target
certain types of connections made to specific servers but not others.
You could, for example, block all incoming connections to your site's systems except for those destined
for one or more firewall systems. At those systems, you might want to allow only specific services, such as SMTP for one
system and Telnet or FTP connections to another system. Filtering on TCP or UDP ports can help you
implement a policy through a packet-filtering router or even by a server with packet-filtering capability.
Figure 7.5 illustrates packet-filtering routers on such services.
You can set up a ruleset to help you outline the permissions. Figure 7.6 shows a very basic example
ruleset for packet filtering. Actual rules permit more complex filtering and greater flexibility.
The first rule allows TCP packets from any source address and port greater than 1023 on the Internet to
enter the destination address of 123.4.5.6 and port of 23 at the site. Port 23 is the port associated with the
Telnet server, and all Telnet clients should have unprivileged source ports of 1024 or higher.
The second and third rules work in a similar way, except that packets to destination addresses 123.4.5.7
and 123.4.5.8, and port 25 for SMTP, are permitted.
The fourth rule permits packets to the site’s NNTP server, but only from source address 129.6.48.254 to
destination address 123.4.5.9 and port 119 (129.6.48.254 is the only NNTP server that the site should
receive news from; therefore, access to the site for NNTP is restricted to that system only).
The fifth rule permits NTP traffic, which uses UDP as opposed to TCP, from any source to any
destination address at the site.
Finally, the sixth rule denies all other packets. If this rule wasn’t present, the router might not deny all
subsequent packets.
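Laid out as a table, the ruleset just described might look something like the following sketch (rule 5's
destination port, 123, is the well-known NTP port and is inferred here rather than quoted from the figure;
the source-port column for rule 4 is assumed to match the others):
Rule  Protocol  Source address  Source port  Destination address  Destination port  Action
1     TCP       any             >1023        123.4.5.6            23                permit
2     TCP       any             >1023        123.4.5.7            25                permit
3     TCP       any             >1023        123.4.5.8            25                permit
4     TCP       129.6.48.254    >1023        123.4.5.9            119               permit
5     UDP       any             any          site addresses       123               permit
6     any       any             any          any                  any               deny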
Although packet filtering can effectively block connections from or to specific hosts, which increases
your level of security substantially, packet-filtering routers have a number of weaknesses. Their rules are
complex to specify and tough to test, because you either have to employ exhaustive testing by hand or
find a facility where you can test the correctness of their rules. Logging capability is not found in all
routers. If the router doesn’t have this capability, you won’t know if dangerous packets are passing
through until it is too late.
Besides, in order to allow certain types of access (that normally would be blocked) to go through, you
might have to create an exception to your rules. Exceptions sometimes can make filtering rules very
difficult, or even unmanageable. How? Let's suppose you specify a rule to block all inbound connections
to port 23 (the Telnet server). If certain systems then need to accept Telnet connections
directly, a rule for each of those systems needs to be added, right? Well, sometimes this kind of addition can
complicate the entire filtering scheme! Don't forget: testing complex sets of rules for correctness might
be so difficult that you might never be able to get them right.
Another inconvenience to watch for is that some packet-filtering routers will not filter on the TCP/UDP
source port. The filtering ruleset can become very complex because of it, and you can end up with flaws
in the whole filtering scheme.
The RPC (Remote Procedure Call) services are very difficult to filter too. The associated servers listen at
ports that are assigned randomly at system startup. The portmapper service maps initial calls to RPC
services to the assigned service numbers. However, there is no such equivalent for a packet-filtering
router. It becomes impossible to block these services completely because the router cannot be told on
which ports the services reside (unless you block all UDP packets, since RPC services mostly use
UDP). But if you block all UDP packets, you probably would block necessary services (DNS, for
example). The question becomes: to block or not to block RPC.
You should get more information on packet filtering and its associated problems. It is beyond the scope of this
chapter to exhaust the subject, but packet filtering is a vital and important tool. It is very important to
understand the problems it can present and how they can be addressed.
Procuring a Firewall
After you’ve decided on the security policy, there are a number of issues to be considered in procuring a
firewall. Standard steps to be taken are requirement definition, analysis, and design specification. The
following sections describe some considerations, including minimal criteria for a firewall and whether to
build or purchase a firewall.
Needs Assessment
When the decision is made to use firewall technology to implement your organization’s Web site security
policy, the next step is to procure a firewall that provides the appropriate level of protection and
cost-effectiveness. Ask these questions:
1. What features should a firewall have?
2. What would be considered effective protection?
Of course, by now you can answer these questions with specifics, but it is easy to assert that
firewalls should have the following features or attributes, which you should always look for:
● A firewall should be able to support a "deny all services, except those specifically permitted"
design policy, even if that is not the policy you intend to use. You must be able to permit only a few
services and still keep a sound level of security in your
organization.
● A firewall should support your security policy, not force one.
● A firewall should be flexible. It should be able to adapt to the needs of your company's
security policy and be responsive to organizational changes.
● The firewall should contain advanced authentication measures or should be expandable to
accommodate these authentications in the future.
● A firewall must employ filtering techniques that allow or disallow services to specified server
systems as needed.
● The IP filtering language must be flexible, user-friendly to program, and capable of filtering as
many attributes as possible, including source and destination IP addresses, protocol type, source
and destination TCP/UDP ports, and inbound and outbound interfaces.
● A firewall should use proxy services for services such as FTP and telnet so that advanced
authentication measures can be employed and centralized at the firewall. If services such as NNTP,
X, HTTP, or Gopher are required, the firewall should contain the corresponding proxy services.
● A firewall should contain the capability to centralize SMTP access in order to reduce direct SMTP
connections between site and remote systems. This will result in centralized handling of site
e-mail.
● A firewall should accommodate public access to the site, such that public information servers can
be protected by the firewall but can be segregated from site systems that do not require the public
access.
● A firewall should contain the capability to concentrate and filter dial-in access.
● A firewall should contain mechanisms for logging traffic and suspicious activity, and should
contain mechanisms for log reduction so those logs are readable and understandable.
● If the firewall requires an open operating system such as UNIX or NT, a secured version of the
operating system should be part of the firewall, with other security tools as necessary to ensure
firewall server integrity. The operating system should have all patches installed.
● A firewall should be developed in a manner so that its strength and correctness are verifiable. It
should be simple in design so that it can be understood and maintained.
● A firewall and any corresponding operating system should be updated and maintained with patches
and other bug fixes in a timely manner.
There are undoubtedly more issues and requirements, but many of them are specific to each site’s own
needs. A thorough requirements definition and high-level risk assessment will identify most issues and
requirements; however, it should be emphasized that the Internet is a constantly changing network. New
vulnerabilities can arise, and new services and enhancements to other services might represent potential
difficulties for any firewall installation. Therefore, flexibility to adapt to changing needs is an important
consideration.
Buying a Firewall
A number of organizations might have the capability to build a firewall for themselves. At the same time,
there are a number of vendors offering a wide spectrum of services in firewall technology. Service can be
as limited as providing the necessary hardware and software only, or as broad as providing services to
develop security policy and risk assessments, security reviews, and security training.
Whether you buy or build your firewall, it must be restated that you should first develop a policy and
related requirements before proceeding. If your organization is having difficulty developing a policy, you
might need to contact a vendor who can assist you in this process.
If your organization has the in-house expertise to build a firewall, it might prove more cost-effective to
do so. One of the advantages of building a firewall is that in-house personnel understand the specifics of
the design and use of the firewall. This knowledge might not exist in-house with a vendor-supported
firewall.
Building a Firewall
An in-house firewall can be expensive in terms of the time required to build and document the firewall and
the time required to maintain it and add features to it as required. These costs are
sometimes overlooked; organizations sometimes make the mistake of counting only the cost of the
equipment. If a true accounting is made of all costs associated with building a firewall, it could prove
more economical to purchase a vendor firewall.
In deciding whether to purchase or build a firewall, answers to the following questions might help your
organization decide whether it has the resources to build and operate a successful firewall:
● How will the firewall be tested?
● Who will perform general maintenance of the firewall, such as backups and repairs?
● Who will install updates to the firewall, such as new proxy servers, new patches, and other
enhancements?
● Can security-related patches and problems be corrected in a timely manner?
Many vendors offer maintenance services along with firewall installation, so the organization should
consider whether it has the internal resources needed.
Setting It Up
If you decide to build your firewall, make sure you respond to all of the preceding questions and that you
indeed will be able to handle all the details of setting up the firewall. Most importantly, make sure that your
organization's upper management is 100 percent behind you.
The following is an example of a firewall setup. Later in this chapter I give you an example of a firewall
installation, should you decide to purchase one instead of setting it up yourself. Hardware requirements
and configuration will vary, of course, but if you follow the outlined steps you should be able to avoid
lots of frustration and time-consuming surprises.
Also, make sure you have your firewall policy written up, understood, and on hand. When that is
complete, write the following outlined steps on a board or notepad. They will be your roadmap in putting
your firewall together:
● Select the hardware required.
● Test it out.
If your company is medium-sized, I have tried to complement the information to suit your needs with the
following example of a company with 200 employees. Keep in mind: far from being a ready-made firewall
plan, this should be considered a template to be modified as needed.
interfaces showing up on the screen during my boot-up sequence (it should show up!). If not, I will need
to review all of the above procedures, and even the machine itself if necessary. In doing so, I will watch
for PCI and SCSI conflicts.
If everything works, it will be time to set up the system on the network.
Testing it
In order to test network connectivity, I will try to ping the Internet from JAVALITO. I want to make sure
to try to ping a few other places that are not connected to my LAN. If it doesn’t work, it will be an
indication that I probably have set up my PPP incorrectly.
After I have a chance to ping out there, I will then try to ping a few hosts inside my own network. What I
want to make sure here is that all of the computers on my internal network are able to ping each other. If
not, it will not even be funny trying to continue with this setup until the problem is resolved, believe me!
As long as I determine that all of the computers are able to ping each other, they should also be able to
ping JAVALITO. If not, I will have to go back to my previous step. One thing to remember is that I
should try to ping 192.168.2.1, not the PPP address.
Lastly, I want to try to ping the PPP address of JAVALITO from inside my network. Of course, I should
not be able to! If I can, this tells me that I have forgotten to turn off IP forwarding, and it will be time to
recompile the kernel again. When I finish these tests, my basic firewall will be ready to go.
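For illustration, and assuming the internal interface really is 192.168.2.1 as above (all other addresses and
host names below are made up), the whole test sequence boils down to a handful of ping commands:
# From JAVALITO itself: the outside world and the inside hosts should both answer
ping -c 3 www.mit.edu
ping -c 3 192.168.2.20
# From an internal host: the firewall's inside address should answer...
ping -c 3 192.168.2.1
# ...but its PPP (outside) address should NOT, if IP forwarding is really off
ping -c 3 <JAVALITO's PPP address>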
Note:
You probably are thinking: why bother reconfiguring it, since I assigned my protected network
to a dummy domain that consequently cannot get any packets routed to it? The reason is that by
doing this I take control away from my PPP provider and keep it in my own hands.
I will try to Telnet to the netstat port, from which I shouldn't be able to get any output. If I can, something
is wrong.
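For example (the address is the firewall's internal interface from the setup above, and port 15 is the
historical "netstat" service):
telnet 192.168.2.1 15
The connection should either be refused or produce no output at all; anything else deserves investigation.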
At this point, my firewall will be up and running, but a firewall that doesn’t allow anyone to come in or
out is like a company that keeps its doors locked as part of a crime-prevention policy. It might be safe,
but bad for business! By the same token, if a firewall is too restrictive, it can do as much harm as a
wide-open firewall. With this in mind, applications, patches, and software packages have been developed
to make firewalls smarter and consequently more beneficial: proxy servers, Socks, and so on.
Socks is one of several firewalling software packages out there; it is discussed in more detail in the
next section, which deals exclusively with proxies. TCP Wrapper is a widely used application as well,
but as mentioned earlier in this chapter, it is not really a firewall utility, so it is better to focus on Socks.
Should you need additional information on TCP Wrapper, make sure to visit the FTP sites noted in that
section.
Figure 7.9 shows the screen that will appear once you select the gateway.
Figure 7.10 shows the host properties screen of Firewall-1 and figure 7.11 shows the users management
screen. These screenshots give you an idea of what to expect from a top-of-the-line firewall product. Keep
them in mind when shopping for a firewall. Needless to say, Check Point's product should be strongly
considered.
Administrating a Firewall
Firewall administration is a critical job role and should be afforded as much time as possible. In small
organizations, it might require less than a full-time position, but it should take precedence over other
duties. The cost of a firewall should include the cost of administrating it; firewall administration should
never be shortchanged.
Management Expertise
As described at the beginning of this chapter, there are many ways to break into a system through the
Internet. Therefore, the need for highly trained, quality, full-time server system administrators is clear.
But there are also indications that this need is not being met satisfactorily in a way that identifies,
protects against, and prevents such incidents. Many system managers are part-time at best and
do not upgrade systems with patches and bug fixes as they become available.
Firewall management expertise is a highly critical job role because a firewall can only be as effective as
its administration. If the firewall is not maintained properly, it might become insecure and permit
break-ins while providing the illusion that the site is still secure. A site’s security policy should clearly
reflect the importance of strong firewall administration. Management should demonstrate its commitment
to this importance in terms of full-time personnel, proper funding for procurement and maintenance, and
other necessary resources.
System Administration
A firewall is not an excuse to pay less attention to site system administration. It is, in fact, the opposite: If
a firewall is penetrated, a poorly administered site could be wide open to intrusions and resultant damage.
A firewall in no way reduces the need for highly skilled system administration.
At the same time, a firewall can permit a site to be proactive in its system administration as opposed to
reactive. Because the firewall provides a barrier, sites can spend more time on system-administration
duties and less time reacting to incidents and damage control. It is recommended that sites do the
following:
● Standardize operating system versions and software to make installation of patches and security
fixes more manageable.
● Institute a program for efficient, site-wide installation of patches and new software.
● Use services to assist in centralizing system administration if it will result in better administration
and better security.
● Perform periodic scans and checks of server systems to detect common vulnerabilities and errors
in configuration, and to ensure that a communications pathway exists between system
administrators and firewall/site security administrators to alert the site about new security
problems, alerts, patches, and other security-related information.
● Finally, ask yourself: What kind of firewall do I need? There is no correct answer. A security plan
chosen by company A may not be suitable for company B. Here are a few suggested scenarios.
and protected network, the department might not need anything more.
Packet Filtering
The same model will suit a mid-size company relying heavily on the Internet, such as ISPs, Web hosting,
etc., but the policy will be contrary to the example above, since more Internet users will be accessing the
site than the site will be accessing the Internet. Wide access can be granted to the Web/Internet server outside of
the firewall. Protected network users would have to Telnet to the Internet/Web server, from inside the
company, just like everyone else outside of the firewall.
Application Gateways
Larger companies or those where Internet users are offered access to specific services and shares inside
the protected network will need to have a different setup. In this case, I would suggest firewall packages
like CheckPoint, or at least an application gateway. It would be advisable to implement CERT's
recommendation of an additional router to filter and block all incoming packets whose source addresses
claim to originate from inside the protected network. This two-router solution is not complicated to deploy, and is very
cost-effective when you consider that a larger company would be exposed to spoofing by allowing all the
many employees it has throughout the country to have access to its Web server and internal network.
When implementing two routers, you should purchase them from different companies (that is, choose
two different brands). It might sound like nonsense, but if a hacker is able to break into one router due to
a bug or a back door in the router's code, the second router will not share the same flaw. Even though
the firewall will no longer be transparent, which will require users to log on to it, the site will be
protected, monitored, and safe.
The typical firewall for such a company is illustrated in Figure 7.12. In this case, the two routers create a
packet-filtering firewall while the bastion gateway functions as an application-gateway firewall.
IP-Level Filtering
In the case of a smaller-sized company, IP-level filtering might be more appropriate than other
types of filtering. This model enables each type of client and service basically to be supported within the
internal network. No modifications or special client software would be necessary.
The access through the IP-level filtering firewall will be totally transparent for the user and the
application. The existing router can be utilized for the implementation of the IP-level filtering. There will
be no need to buy an expensive UNIX host. However, a small company can reinforce its Internet server
security by implementing solutions similar to those used by a larger company, without the need for an
application gateway.
Chapter 8
How Vulnerable Are Internet Services?
The implementation of Internet services must be carefully considered due to their vulnerabilities and threats. This chapter lists some
of the most commonly implemented services and discusses the risks associated with each one of them.
The United States asked Taiwan officials to investigate the incident, but the director of the university computer center concluded
that there was no way to find a record of the person logging in and out on the Internet and sending the message to President
Clinton. Thus, you should be aware of e-mail threats and what you can do to protect against these pitfalls.
You can be threatened by anyone using anonymous e-mail, and you won't be able to track him or her down. Take this other
example, of Jonathan Littman, one of the few journalists covering the computer underground. When Kevin Mitnick was arrested,
Littman had become the uber-hacker's insider, to the extent of even writing a book entitled "The Fugitive Game." The problem is
that in the book he was sympathetic to Mitnick, and he ended up receiving retaliation from some hackers, who sent him several
e-mail threats, vowed through anonymous messages.
E-mail threats also include people scanning your messages in search of valuable information, such as credit card numbers, social security
numbers, or system authentication information. When an e-mail message travels through the Internet it can be exposed to little
programs that automatically scan the mail fed into a computer, looking for specific information, just as you do in your mail
program when you want to locate a particular message stored in one of your message folders.
A good preventive measure against this kind of attack is message encryption. As discussed in Chapter 3, "Cryptography: Is It
Enough?", encryption makes hacking much more difficult. Also, there are lots of encryption tools out there, such as Pretty Good
Privacy (PGP) and digital signatures, to aid you in this process. You should encrypt and sign all your e-mail messages.
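For example, using the classic PGP 2.6.x command line (the file name and user ID below are purely illustrative), you can sign and
encrypt a message in ASCII-armored text mode in one step:
pgp -seat message.txt "Recipient User ID"
The -s flag signs, -e encrypts, -a produces ASCII armor suitable for e-mail, and -t treats the file as text.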
Tip:
For information on protecting electronic information, check out "An Attorney’s Guide To
Protecting, Discovering, and Producing Electronic Information" by Michael Patrick (phone
1-800-341-7874 x-310).
Note:
If you would like more information on e-mail bombing and spamming, check out Byron Palmer’s
Web page at URL http://mwir.lanl.gov:8080/E-mail_Spamming.html
Tip:
A good source of information about e-mail spoofing is at the URL
ftp://info.cert.org/pub/tech_tips/email_spoofing.
As mentioned above, the large amount of e-mail coming in to a server as a result of e-mail bombing and spamming
can generate a denial of service (where the server refuses to honor a request or a task, to the extreme of freezing up),
through loss of network connectivity, system crashes, or even failure of a service (where the ability to execute that service fails on
the server), because of:
● overloaded network connections
At the /etc/hosts.allow:
smap : ALL
At /etc/hosts.deny:
smap : spammer.com .spammer.com 128.xxx.000.0
At the /usr/local/etc/netperm-table:
smap, smapd: userid 32
You can use the above example as a boilerplate; the paths will vary according to your environment, as will the site(s) you're
blocking. This should suffice to keep e-mail spamming and bombing coming from spammer.com, or from anyone in the IP range of
128.xxx.000.0, from reaching your SMTP server. Now, watch your server! This technique could overload it, as it will
generate a process for every incoming mail message. If your server already works at more than 30% of its capacity, you may want
to reconsider this approach.
Note:
You can try to block spamming by using smap. According to Craig Hagan ([email protected]),
spammers often use third-party relaying to distribute spam via an intermediary party’s mailer, so
Hagan proposes a routine, which you can review and download from URL
http://www.cih.com/~hagan/smap-hacks/ to prevent your mailer from being misused as a
relaying mailer.
You can also use sendmail to block spamming and bombing. Axel Zinser
([email protected]) has developed patches for blocking spam with sendmail
versions 8.6.12, 8.7.3 or 8.8.2. For more information, check the URL
http://spam.abuse.net/spam/tools/mailblock.html.
Tip:
Check CERT for additional information on filtering SMTP connections in your firewall, at URL
ftp://info.cert.org/pub/tech_tips/packet_filtering.
● Originator authenticity, which allows digital signature and reliability of a message to be verified.
● Message integrity measures, which assure that the message has not been modified during transmission, and
● Non-repudiation of origin, which allows for the verification of the identity of the original sender of the message.
Note:
For more information on RIPEM, and if you want to download a copy of it, check the FTP site at
URL ftp://ftp.rsa.com/rsaref/. Note that there are restrictions for downloading RIPEM, as it uses
the RSAREF library of cryptographic routines, which is considered munitions and thus is
export-restricted from distribution without an export license to persons who are not citizens or
permanent residents of the U.S. or Canada. Thus, I strongly recommend that you read the frequently
asked questions for RIPEM at URL http://www.cs.indiana.edu/ripem/ripem-faq.
You can use RIPEM with popular mailers such as Berkeley, mush, Elm, and MH. Code also is included in elisp to allow the easy
use of RIPEM inside GNU Emacs. Post your interfaces or improvements for RIPEM to the newsgroup on USENET,
alt.security.ripem.
Zimmermann's Pretty Good Privacy (PGP) is another product you can use to encrypt your SMTP messages. However, unlike
RIPEM, PGP tries to approach the issue of trustworthiness, but as I understand it, it does so without respect to any enunciated
criteria or policy. Thus the question remains: can you trust someone with whom you are interacting through e-mail enough to
sign a contract or something similar (using digital signatures), just because he's authenticated over PGP or RIPEM?
the pending mail to the "client" machine. Thereafter, all mail processing is local to the client machine.
But you must keep in mind that when you are dealing with POP configuration you ultimately are dealing with private information
coming and going through it. You are dealing with issues such as confidentiality, integrity, and liability! Thus, I recommend that you not
allow your users to transfer mail over the Internet through POP, because it can reveal passwords and the messages are totally
unprotected. If they must transfer it, then implement packet filtering. You might be able to implement some proxy too, but it will
require some minor coding.
Recently, CERT Advisory CA-97.09 (August 27, 1997), reported on a vulnerability with POP and Internet Message Access
Protocol (IMAP). According to CERT, some versions of the University of Washington's implementation of IMAP and POP have
a security hole that allows remote users to obtain unauthorized root access without even having access to an account on the system.
The CERT/CC team recommends installing a patch if one is available or upgrading to IMAP4rev1. Until you can do so, CERT
recommends that you disable the IMAP and POP services at your site.
Tip:
Should you need to update to IMAP4rev1, you can download it from the University of
Washington FTP server at URL ftp://ftp.cac.washington.edu/mail/imap.tar.Z. Note that the
checksums change when files are updated.
If you are not able to temporarily disable the POP and IMAP services, then try to limit access to the vulnerable services to
machines in your local network. This can be done by installing the tcp_wrappers, since POP is launched out of inetd.conf, for
logging and access control. This doesn't mean that your POP is safe now; you still have to run the fix, hopefully already
available by the publishing of this book, or upgrade to IMAP4rev1. Additionally, you should consider filtering connections at the
firewall to minimize the impact of unwanted connections.
Note:
If you need access to the tcp_wrappers tool, you can download it from CERT’s FTP server at
URL ftp://info.cert.org/pub/tools/tcp_wrappers/tcp_wrappers_7.5.tar.gz
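A minimal sketch of the tcp_wrappers restriction just described, assuming the UW daemons are started from inetd under the
names ipop3d and imapd and that your internal network is 192.168.2 (adjust the daemon names and addresses to your own
environment):
At the /etc/hosts.allow:
ipop3d imapd : 192.168.2.
At /etc/hosts.deny:
ipop3d imapd : ALL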
The BorderWare firewall is an example of a product that runs all standard Internet servers including a full function electronic mail
server with POP and SMTP support. But BorderWare is not the only one; check Chapter 14, "Types of Firewall," for the complete
list.
Note:
For more information about RSA’s S/MIME, check their URL at
http://www.rsa.com/smime/html/faq.html#gnrl.1.
Now, one way to have more control over your SMTP mail is to tunnel it to a specific server where it can be screened. You
can easily do this by setting up an HTML e-mail form and using the "mailto" function. You would enter a line of code in HTML such as
<A HREF="mailto:[email protected]">[email protected]</A>
The [email protected] eventually will be replaced by an Internet address. Every time an user clicks on the email anchor, a
special form pops-up. The user then writes his message and sends it to you.
However, there are many other options, in many different scripting languages. It all will depend on how much you want to invest in
it, in time and effort, and the resources you have available.
To create an e-mail comment form, you will need to create a form that sends mail to you from any browser that supports forms.
For UNIX servers, there is a very flexible CGI script, cgimail, which can be downloaded from MIT's Web site. I have not seen any
other tool for this purpose with such a level of flexibility. It is also very easy to install and use.
Since cgimail requires an ASCII form, the form can also be e-mailed later, which allows users with disabilities to access it. If you want to
download it, check the mit-dcns-cgi page at the URL: http://web.mit.edu/wwwdev/www/dist/mit-dcns-cgi.html.
If you would rather work with ANSI C, there is a very simple e-mail form package called Simple CGI Email Handler, which I strongly
recommend. It is based on the post_query.c code provided with the NCSA httpd 1.1 package, released to the public domain.
Be aware that AIX is definitely vulnerable to the tilde-escape problem, while SunOS 4.1.3 does not allow these escape sequences unless
mail is being run from an actual terminal. With version 2.1 of the package, you don't need to be concerned about it, as the tilde escapes were
replaced with spaces.
If you are interested on this script, you can download it from the URL: http://www.boutell.com/email/.
If you like Perl, there is another email form package called the "Web Mailto Gateway," developed by Doug Stevenson
([email protected]). The following source code can be found at URL: http://www.mps.ohio-state.edu/mailto/mailto_info.html.
#!/usr/local/bin/perl
# 5/95
# Use this script as a front end to mail in your HTML. Not every browser
# supports the mailto: URLs, so this is the next best thing. If you
# use this script, please leave credits to myself intact! :) You can
# Documentation at:
# http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html
# I didn't exactly follow the RFCs on mail headers when I wrote this,
# Also, you'll need cgi-lib.pl for the GET and POST parsing. I use
# version 1.7.
# http://www.bio.cam.ac.uk/web/form.html
# PLEASE: Use this script freely, but leave credits to myself!! It's
# common decency!
########
# only a certain few mail addresses to be sent to. I changed the WWW
# Mail Gateway to allow only those mail addresses in the list @addrs
# the selected option being either the first one or the one that matches
########
# Enhancing the enhancements from 1.2. You can now specify a real name
# defined, or read from a file. Read the information HTML for instructions
# on how to set this up. Also, real mail addresses may hidden from the
########
# The next URL to be fetched after the mail is sent can be specified with
# Force user to enter something for the username on 'Your Email:' tag,
########
# Added <PRE>formatted part to header entry to make it look nice and fixed a
# typo.
########
# ALL cgi variables (except those reserved for mail info) are logged
# at then end of the mail received. You can put forms, hidden data,
# or whatever you want, and the info for each variable will get logged.
########
# Fixed stupid HTML error for an obscure case. Probably never noticed.
# are listed in the order they were received. That was a function of perl
########
# New suggested sendmail flag -oi to keep sendmail from ending mail
# Added support for setting the "real" From address in the first line
# of the mail header using the -f sendmail switch. This may or may not
# the security of your script for public usage. Your mileage will vary,
# one out.
########
# Doug Stevenson
######################
# Configurable options
######################
$active = 1;
$logging = 1;
$logfile = '/usr/local/WWW/etc/mailto_log';
# Physical script location. Define ONLY if you wish to make your version
# of this source code available with GET method and the suffix '?source'
# on the url.
$script_loc = '/usr/local/WWW/cgi-bin/mailto.pl';
$cgi_lib = '/usr/local/WWW/cgi-bin/cgi-lib.pl';
$script_http = 'http://www-bprc.mps.ohio-state.edu/cgi-bin/mailto.pl';
# Path to sendmail and its flags. Use the first commented version and
# define $listserver = 1 if you want the gateway to be used for listserver
# correctly.
# sendmail options:
# -n no aliasing
#$expose_address = 1;
# List of address to allow ONLY - gets put in a HTML SELECT type menu.
# who view source, or you don't want to mess with the source, read them
# from $mailto_addrs:
#$mailto_addrs = '/usr/local/WWW/etc/mailto_addrs';
#open(ADDRS,$mailto_addrs);
#while(<ADDRS>) {
# $addrs{$name} = $address;
#}
# version
$version = '2.2';
#############################
#############################
##########################
# source is self-contained
##########################
open(SOURCE, $script_loc) ||
print <SOURCE>;
close(SOURCE);
exit(0);
require $cgi_lib;
&ReadParse();
#########################################################################
# method GET implies that we want to be given a FORM to fill out for mail
#########################################################################
if ($ENV{'REQUEST_METHOD'} eq 'GET') {
$destaddr = $in{'to'};
$cc = $in{'cc'};
$subject = $in{'sub'};
$body = $in{'body'};
$nexturl = $in{'nexturl'};
if ($in{'from'}) {
elsif ($ENV{'REMOTE_USER'}) {
$fromaddr = $ENV{'REMOTE_USER'};
# this is for Lynx users, or any HTTP/1.0 client giving From header info
elsif ($ENV{'HTTP_FROM'}) {
$fromaddr = $ENV{'HTTP_FROM'};
else {
$fromaddr = "$ENV{'REMOTE_IDENT'}\@$ENV{'REMOTE_HOST'}";
$body =~ s/\0//;
if (%addrs) {
if ($in{'to'} eq $addrs{$_}) {
else {
$selections .= "<OPTION>$_";
if ($expose_address) {
$selections .= "</SELECT>\n";
print &PrintHeader();
print <<EOH;
that you want to mail to. The <B>Your Email</B>: field needs to
contain your mail address so replies go to the right place. Type your
message into the text area below. If the <B>To</B>: field is invalid,
or the mail bounces for some reason, you will receive notification
is set incorrectly, all bounced mail will be sent to the bit bucket.</I></P>
EOH
if ($selections) {
print $selections;
else {
print <<EOH;
</FORM>
<HR>
<H3><A HREF="http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html#about">
<H3><A HREF="http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html#new">
<H3><A HREF="http://www-bprc.mps.ohio-state.edu/mailto/mailto_info.html#misuse">
<HR>
</P></ADDRESS>
</BODY></HTML>
EOH
#########################################################################
# Method POST implies that they already filled out the form and submitted
#########################################################################
$destaddr = $in{'to'};
$cc = $in{'cc'};
$fromaddr = $in{'from'};
$fromname = $in{'name'};
$replyto = $in{'from'};
$sender = $in{'from'};
$errorsto = $in{'from'};
$subject = $in{'sub'};
$body = $in{'body'};
$nexturl = $in{'nexturl'};
print <<EOH;
Content-type: text/html
<HTML><HEAD><TITLE>Mailto error</TITLE></HEAD>
<BODY><H1>Mailto error</H1>
<UL>
<LI><B>To</B>:, the full mail address you wish to send mail to</LI>
</UL>
EOH
exit(0);
# do some quick logging - you may opt to have more/different info written
if ($logging) {
open(MAILLOG,">>$logfile");
close(MAILLOG);
# Log every CGI variable except for the ones reserved for mail info.
# Valid vars go into @data. Text output goes into $data and gets.
# First, get an ORDERED list of all cgi vars from @in to @keys
for (0 .. $#in) {
local($key) = split(/=/,$in[$_],2);
$key =~ s/%(..)/pack("c",hex($1))/ge;
push(@keys,$key);
local(%mark);
foreach (@data) {
$body =~ s/\0//;
# now check to see if some joker changed the HTML to allow other
if (%addrs) {
print &PrintHeader();
print <<EOH;
<BODY>
not one of the pre-defined set of addresses that are allowed. Go back and
try again.</P>
</BODY></HTML>
EOH
exit(0);
$realaddr = $destaddr;
if ($addrs{$destaddr}) {
if ($active) {
if ($listserver) {
open(MAIL,"| $sendmail$fromaddr") ||
else {
To: $realaddr
Reply-To: $replyto
Errors-To: $errorsto
Sender: $sender
Subject: $subject
X-Real-Host-From: $realfrom
$body
$data
EOM
close(MAIL);
# if the cgi var 'nexturl' is given, give out the location, and let
if ($nexturl) {
else {
print &PrintHeader();
print <<EOH;
<HTML><HEAD><TITLE>Mailto results</TITLE></HEAD>
<BODY><H1>Mailto results</H1>
<PRE>
<B>Subject</B>: $subject
$body</PRE>
<HR>
</BODY></HTML>
EOH
} # end if METHOD=POST
#####################################
#####################################
else {
print <<EOH;
EOH
exit(0);
# Deal out error messages to the user. Gets passed a string containing
sub InternalError {
local($errmsg) = @_;
print &PrintHeader();
print <<EOH;
Content-type: text/html
<B>$errmsg</B></P></BODY></HTML>
EOH
exit(0);
##
## end of mailto.pl
##
If your server can run CGI scripts and is configured with sendmail, this is the right mail gateway script to link from your HTML.
(You will need to be able to run CGI scripts on your server, though.)
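As a minimal sketch, and assuming the script above is installed as /cgi-bin/mailto.pl (using the field names visible in the listing:
to, from, sub, body), a comment form in your HTML might look like this:
<FORM METHOD="POST" ACTION="/cgi-bin/mailto.pl">
To: <INPUT TYPE="text" NAME="to" VALUE="[email protected]"><BR>
Your Email: <INPUT TYPE="text" NAME="from"><BR>
Subject: <INPUT TYPE="text" NAME="sub"><BR>
<TEXTAREA NAME="body" ROWS="10" COLS="60"></TEXTAREA><BR>
<INPUT TYPE="submit" VALUE="Send mail">
</FORM>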
The use of firewalls can enhance your protection. A firewall can restrict the access of outside mail to only a few machines and reinforce
security on those machines. Usually these machines would act as a gateway to the company, with the firewall as a guard, a security
agent, controlling what's coming in or going out.
Nevertheless, messages will need to come into the company, and a firewall will not be able to screen those messages for hostile
applets or scripts. At most, there are a few techniques to filter threatening characters in the mail address, if you can come up with a
table the firewall can use to recognize them.
Thus, always keep in mind that, since SMTP lacks authentication, forging e-mail is not difficult. If your site allows
connections to the SMTP port, anyone can connect to that port and issue commands that will send e-mail that appears to be
from you or even from a fictitious user.
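As an illustration of how little is required (the host names and addresses below are made up), a forged message can be injected
with nothing more than a Telnet client and the basic SMTP commands:
telnet mail.target.com 25
HELO somewhere.org
MAIL FROM:<[email protected]>
RCPT TO:<[email protected]>
DATA
Subject: This only looks official
(body of the forged message)
.
QUIT
The receiving server has no way of verifying that the MAIL FROM address really belongs to the sender, which is exactly why
SMTP traffic deserves the scrutiny described above.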
❍ When setting up FTP directories, make sure the anonymous FTP root directory, "~ftp," and its sub-directories are not
owned by the FTP account or even in the same group. Otherwise, as stressed earlier, these can be an open door for
attackers, especially if the directory is not write-protected.
❍ You should have the FTP root directory and its sub-directories owned by root, with only root having
permission to write to them. This way you will keep your FTP service secure. The following is an example of an
anonymous FTP directory structure:
drwxr-xr-x 7 root system 512 Mar 1 15:17 ./
The following is an example of a stripped-down ~ftp/etc/passwd file; note that the password fields are disabled:
cops:*:3271:20:COPS Distribution::
cert:*:9920:20:CERT::
tools:*:9921:20:CERT Tools::
ftp:*:9922:90:Anonymous FTP::
nist:*:9923:90:NIST Files::
It is important to understand that there is a risk in allowing anonymous FTP connections to write to your server. Therefore, you
should carefully control and monitor any directory to which anonymous users are allowed to write.
Tip:
You can download Joe Hentzel’s TFTP CONNECT scanner from the URL
http://www.giga.or.at/pub/hacker/unix/.
Tip:
If you need more information on HTTP, check the URL:
http://www.w3.org/hypertext/WWW/Protocols/
There is a series of utilities intended for Web server administrators available at the URL:
ftp://src.brunel.ac.uk/WWW/managers/
Proxying HTTP
The majority of HTTP servers and clients, such as Purveyor and Netscape Navigator, support a variety of proxying schemes, such as SOCKS and
transparent proxying.
Purveyor, for instance, provides proxy support for not only HTTP, but also FTP and GOPHER protocols, creating a secure LAN
environment by restricting Internet activities of LAN users. The proxy server offers improved performance by allowing internal
proxy caching. Purveyor also provides proxy-to-proxy support for corporations with multiple proxy servers.
Tip:
For more information on Purveyor Webserver, check Process Software’s URL:
http://www.process.com.
If you are running your Web server on Windows NT, Windows 95 or NetWare, you can use Purveyor Webserver’s proxy features
to enhance security. In addition, you can increase the performance of your server as Purveyor can locally cache Web pages
obtained from the Internet.
You should consider installing a firewall at your site, regardless of whether you are placing your server outside or inside your protected
network. The openness of HTTP is too great for you to risk. Besides, you still have all the viewers and applets to worry about.
When selecting a firewall, make sure to choose one that includes an HTTP proxy server. It will be useful for protecting your
browsers. Some firewalls, such as the TIS Firewall Toolkit, provide HTTP proxying totally transparent to the user.
Security of Conferencing
Of course, there must be a practical reason for you to use the Web for conferencing. Not only is there a large variety of hardware
and software, but the fact that the Web provides a common user interface for Internet utilities like FTP, Telnet, Gopher, and WAIS
allows the users to reach all the resources available on the Internet without having to leave the Web.
Despite the advances of Web technology in the past three or four years, there are still a series of issues to be addressed before
considering conferencing, at least in a large scale. The following is a summary list of the main challenges affecting Web
conferencing deployment:
● Freshness of information - Just like with news gateways, users want to read only the new messages added since their last
visit. In the Web environment, either the client or the server could do this.
● Ability to submit files to the system - Users should be able to upload files onto the conferencing system. To have to type it
all over again in the Web form is unproductive.
● Incorporate images and sound into the messages - Though one of the most exciting features enabled by the Web, image and
sound support is one of the most difficult to implement. As long as an image is already available on a Web server, you can link any
HTML message to it.
● Risks of HTML usage - It might seem natural to allow users to manipulate HTML markups in their messages, but it may
create a formatting problem, as users may produce messages not compatible with your conferencing application. Users
would have to be aware of structural elements such as message headers and navigation buttons.
● Keeping users on track - On the Web, it is very easy for a user to take side trips by clicking on a hyperlink. This could be
a problem if these links were to be appended to the message.
● Speed - Carrying sound and images on the Web can be a problem for users with low-bandwidth connections. A 14,400 bps
modem can be awfully slow when transferring images and sound.
The bottom line: you must take into consideration the clientele accessing your site, the Web conferencing technology to be deployed
and the bandwidth you have available to deploy this service. Conferencing involves skimming over a lot of stuff to find the most
interesting nuggets, so you need to be able to move around quickly.
Gopher
Gopher is not used as much as before, but it is still fast and efficient. Believe it or not, Gopher is fairly secure, but there are some issues I
would like to alert you about. One of the most popular Gopher servers is the University of Minnesota's (found at
boombox.micro.umn.edu), whose code is run by a lot of the Gopher sites available out there.
You should know that there is a bug in both Gopher and Gopher+ in all versions that were available before August of 1993, as
reported in CERT Advisory CA-93:11. This bug allows hackers to obtain password files, both remotely and locally, by potentially
gaining unrestricted access to the account running the public access client and reading any file accessible to this account. This
includes the /etc/passwd and other sensitive files.
If you want to review this bug, you can check it at the Defense Data Network Bulletin 9315, which can be viewed at the URL
http://www.arc.com/database/security_bulletins/DDN/sec-9315.txt.
You should also be alert to Gophers proxying an FTP session. Even if access is restricted to an FTP directory on your server, the
Gopher can be used to perform a bounce attack. Thus, be careful when protecting an FTP server behind a firewall. If the Gopher
server is not protected, a hacker can use it to get past the firewall.
Another vulnerability, reported by the NASA Automated Systems Incident Response Capability (NASIRC), indicates a failure in the
gopher servers gopher1.1 (Gopher) and gopher2.012 (Gopher+) internal access controls, which can allow files in directories
above the gopher data directory, such as the password file, to be read if the gopherd does not run chroot. This vulnerability only
affects servers that are started with the option "-c". Without this option, gopherd runs chroot and access to files above the
gopher-data directory is disabled.
finger
Finger is a program that tells you whether someone is logged on to a particular local or remote computer. Through finger you
might be able to learn the full name, terminal location, last time logged in, and other information about a user logged onto a
particular host, depending on the data that is maintained about users on that computer. Finger originated as part of BSD UNIX.
To finger another Internet user, you need to have the finger program on your computer or you can go to a finger gateway on the
Web and enter the name of the user. The user's computer must be set up to handle finger requests. A ".plan" file can be created for
any user that can be fingered.
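For example, assuming the finger daemon is enabled on the target host (the address is illustrative):
finger [email protected]
The reply typically includes the user's full name, login and idle times, and the contents of the .plan file.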
An intruder can use finger to find information about a site, and use finger gateways to protect his identity.
whois
Whois is a program run by InterNIC that will tell you the owner of any second-level domain name. For example, you can look up
the name of the owner of your own access provider by entering "process.com", and whois will tell you the owner of
that second-level domain name. The InterNIC Web whois is at http://rs.internic.net/cgi-bin/whois.
whois can also be used to find out whether a domain name is available or has already been taken. If you enter a domain name you
are considering and the search result is "No match," the domain name is likely to be available and you can apply to register it
through your service provider.
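If you have a command-line whois client, the equivalent query is simply (the domain is only an example):
whois process.com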
talk
Talk is a UNIX service that allows two users to communicate over the Internet via text-based terminals. It is similar to the
net send command and to IRC, except that the connection is directed by the person's e-mail address. Thus, if you were to talk to me via
the Internet you would issue a command:
talk [email protected]
By issuing this command the local talk program would contact the remote talk daemon. If I’m available, assuming that I have talk
connections enabled, my screen would split and conversation would take place. If you’re familiar with the chat command of
Windows for Workgroups, bundled with the network tools, you know what I’m talking about.
The risk with this service is that information can be gathered from an unsuspecting user who engages in conversation with someone
unknown out on the Internet.
IRC
Internet Relay Chat (IRC), just like talk, allows communication over the Internet. However, IRC allows multiple users
to converse at the same time.
The main risk is that file transfers can be done over IRC without any trace left behind; it's like a cash transaction without
receipts! Even though such file transfers can also be done through FTP and the like, IRC makes them possible without any server software
running.
DNS
As you already know, Domain Name System (DNS) is the way that Internet domain names are located and translated into Internet
Protocol (IP) addresses. Because maintaining a central list of domain name/IP address correspondences would be impractical, the
lists of domain names and IP addresses are distributed throughout the Internet in a hierarchy of authority. There is probably a DNS
server within close geographic proximity to your access provider that maps the domain names in your Internet requests or forwards
them to other servers in the Internet.
As far as risks with DNS, you should be aware of spoofing. When a DNS machine is compromised, it has been the victim
of spoofing. Not that it happens very often, but there have been reports, both at DDN and CIAC, about DNS spoofing.
CIAC's advisory entitled "Domain Name Server Vulnerability" alerts about the possibility of an intruder spoofing BIND into
providing incorrect name data at the DNS server, allowing for unauthorized access or re-routing of connections. Can you imagine
if all private connections of the Secret Service were re-routed to a hacker's home server? Fortunately (or should I say hopefully),
the Secret Service is already using Skipjack or some other kind of strong encryption in its IP connections!
But fear not! DNS spoofing is not an easy task. It's not enough for an intruder to gain access to the DNS server. The intruder
will have to re-route the addresses in that database, which would easily give him away. It's like breaking the window of a jewelry
store: it's just a matter of minutes before the police arrive. But again, with a good plan, how much time would a hacker need to
get what he wants?
Tip:
What is a MIB? A Management Information Base (MIB) is a formal description of a set of
network objects that can be managed using the Simple Network Management Protocol (SNMP).
The format of the MIB is defined as part of the SNMP. All other MIBs are extensions of this basic
MIB. MIB-I refers to the initial MIB definition. MIB-II is the current definition. SNMPv2
includes MIB-II and adds some new objects.
There are MIB extensions for each set of related network entities that can be managed. For
example, there are MIB definitions in the form of Requests for Comments (RFCs) for Appletalk,
DNS server, FDDI, and RS-232C network objects. Product developers can create and register new
MIB extensions.
Companies that have created MIB extensions for their sets of products include Cisco, Fore, IBM,
Novell, QMS, and Onramp. New MIB extension numbers can be requested by contacting the
Internet Assigned Numbers Authority (IANA) at 310-822-1511 x239.
The SNMPv2 Working Group recently completed work on a set of documents which makes up version 2 of the Internet Standard
Management Framework. Unfortunately, this work ended without reaching consensus on several important areas -- the
administrative and security framework and remote configuration being two of the most important.
The IETF has chartered a Working Group to define SNMPv3, which, if successful, will replace SNMPv2. The SNMPv3 effort
has been underway since April 1997.
traceroute
Van Jacobson is the author of traceroute, a tool that traces the route IP packets take from the current system to some
destination system. It works by manipulating the IP "time to live" (TTL) field so as to elicit an ICMP TIME_EXCEEDED
response from each gateway the packet passes through on its way.
The danger here is that this utility can be used to identify the location of a machine. Worse, you do not even need to run UNIX to
have access to traceroute: there are several gateways on the Net, such as the one at the URL http://www.beach.net/traceroute.html.
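As a quick illustration of how little is needed, the single command below is enough; the destination name is a hypothetical placeholder.
# Each numbered line of output is a gateway that answered with ICMP
# TIME_EXCEEDED for the corresponding TTL value, exposing the path (and often
# the provider and border router) that sit in front of the target.
traceroute -m 30 target.example.com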
Note:
Enterprise Integration Technologies, Inc. (EIT) is designing a Secure HTTP, a set of protocol
changes to address confidentiality, integrity, and authentication issues. For more information you
can check their Web page at URL: http://www.eit.com
As for integrity, just keep in mind that certain transactions not only require confidentiality, but also a guarantee that their contents will not be
modified. The banking industry, for example, relies on confidentiality, but the integrity of the data is just as important as its privacy.
Chapter 9
Setting Up a Firewall Security Policy
To talk about security policy we must talk about risk. As risk is the antithesis of security, we naturally
strive to eliminate it. As worthy as that goal is, however, we learn with each experience that complete
elimination is never possible. Even if it were possible to eliminate all risk, the cost of achieving that total
risk avoidance would have to be compared against the cost of the possible losses resulting from having
accepted rather than eliminated risk. The results of such an analysis could include pragmatic
decisions as to whether achieving risk avoidance at such cost is reasonable. Applying reason in
choosing how much risk we can accept and, hence, how much security we can afford is risk
management.
Have you ever heard of "security through obscurity"? Although it is not as evident within many
organizations (it is obscure!), this security practice used to be very common, and it is still around.
Security through obscurity describes a security system that promotes security by isolating information
about the system it is protecting from anyone outside the implementation team. This includes, but is not
limited to, hiding passwords in binary files or scripts on the assumption that no one will ever find them.
Are you running your internal network, and are you planning to run your firewall, based on such a system?
Better not! It certainly worked with proprietary and centralized systems, back in the "glass
walls" age. But today, with the advent of open systems, internetworking, and the rapid development of
intelligent applications and applets, security policies need to be taken a step higher.
To run your site based on hidden information, rather than protected information, is to play with fire
(without the wall!). Nowadays, users are more knowledgeable about the systems they are running and the
technology surrounding them. Information kept unknown is only a matter of time away from becoming
well known; to base security on it is useless.
Hackers are proud to prove that. They were the first ones to prove that obscurity, rather than security, is
exciting. Consequently, you will need a system that is genuinely secure. True, it can still be broken, but
by being structured you will be dealing with an organized method, where you have tools to increase
security, monitor threats, catch intruders, or even pursue them.
You must keep your firewall (and protected network!) logically secure. Logic should be your starting
criteria in putting together a security policy that will use algorithmically secure systems such as
Kerberos, PGP and many others.
All right, you already know that your site must be secured and also know what needs to be protected. But
what makes a site insecure in the first place? Truly? The fact that you turned it on!
Your site will be as secure as the people you allow, or invite, to access it. You can have a very secure site
where only corporate users have access and you have enough information about each one of them
(should you need to track them down later!). So let's assess your corporate security…
Assessing Your Corporate Security Risks
It is useful in thinking about risk management to use a sort of formula. This is not, of course, a
mathematical equation for use in making quantitative determinations of risk level. It is an algorithm for
use in thinking about the factors that enter into risk management and in assessing the qualitative level of
danger posed in a given situation.
Reliability and the steps necessary to allow for and deal with reliability failures are risk management
issues you must take into consideration. In information systems security, the word "threat" describes a
more limited component of risk. For these purposes, threats are posed by organizations or individuals
who both intend us harm and have the capability to accomplish their intentions.
In order to develop a thorough security policy, you must consider the possible consequences of attacks
from a wide variety of different threats, each of which may act on a specific vulnerability different from
those attempted to be exploited by other independent threats and any of which may be unrecognized.
Often threats to information and information systems are paired with a specific line of attack or set of
vulnerabilities - since a threat which has no vulnerability it is capable of exploiting creates no risk, it is
useful to deal with threat-vulnerability pairings in the risk management process.
This uncertainty is a contributing cause of our tendency to rely on risk avoidance. By assuming the threat
to be capable, intent, and competent, by valuing our potential targets highly, and by conservatively
estimating uncertainties, we reduce risk management to: "what are our vulnerabilities and how much do
countermeasures cost to eliminate them?" The management problem is, "How much money can I spend
and where can I spend it most wisely?" In most cases, fortunately, it is possible to do better. It is often
sufficient to bound the problem, even when exact figures are not available. By careful analysis, we may
be able to estimate the value of each factor in our equation, balance the risk of loss or damage against
the costs of countermeasures, and select a mix that provides adequate protection without excessive cost.
Ultimately, the risk management process is about making decisions. The impact of a successful attack
and the level of risk that is acceptable in any given situation are fundamentally policy decisions. The
threat is whatever it is and while it may be abated, controlled or subdued by appropriate
countermeasures, it is beyond the direct control of the security process. The process must focus,
accordingly, on vulnerabilities and countermeasures. Vulnerabilities are design issues and must be
addressed during the design, development, fabrication and implementation of our facilities, equipment,
systems and networks. Although the distinction is not always certain, countermeasures are less
characteristics of our systems than of their environments and the ways in which we use them. Typically,
to make any asset less vulnerable raises its cost, not just in the design and development phase but also
due to more extensive validation and testing to ensure the functionality and utility of security features,
and in the application of countermeasures during the operation and maintenance phase as well.
Your basic security requirement should be to minimize, if not eliminate, all the security holes existing at
your site. These security holes usually present themselves in four ways:
1. Physical - Caused by unauthorized people accessing the site, enabling them to browse where they
are not supposed to. A good example of this would be a browser placed in a public place
(a reception area, for example), giving a user the chance not only to browse the Web, but also to
change the browser's configuration and gather site information such as IP addresses, DNS entries, and so on.
2. Software - Caused by "buggy" privileged applications, such as daemons, executing
functions they were not supposed to. As a rule of thumb, never trust scripts and applets! When
using them, make sure you understand what they are supposed to do (and what they are not!).
3. Incompatibility issues - Caused by poor system integration planning. A piece of hardware or software
may work great alone, but once you put it together with other devices, as a system, it may present
problems. These kinds of problems are very hard to spot once the parts are integrated into the
system, so make sure to test every component before integrating it into your system.
4. Lack of a security policy - It does not matter how secure your password authentication
mechanism is if your users use their kids' names as their passwords. You must have a security
policy addressing all the security requirements for your site, as well as covering, and preventing, all
the possible security holes.
The requirements for running a secure firewall also include a series of "good habits" that you, as
administrator, should cultivate. It is good policy to keep your strategies simple: they are easier to
maintain, as well as to modify, if necessary.
Most bastion hosts and firewall applications, as mentioned earlier, have the capability to generate traffic
logs. Users are at the mercy of these servers, especially Web servers, when information about
themselves, their connections, their addresses, or even specifics about their client or company is
disclosed. The log provided by a Web server can be threatening for a user, as it discloses a list of
information which usually includes:
● The IP address,
● The user’s name (if known by user authentication or, with UNIX, obtained by the identd protocol),
● The data variables submitted through forms users usually fill out during their session.
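For illustration, a single entry in the common log format used by most Web servers already discloses several of these items; every value below is made up. The first field is the client's IP address, the second is the answer obtained through the identd protocol (a "-" when unavailable), the third is the authenticated user name, and the query string in the request carries the variables submitted through a form:
192.0.2.7 - jdoe [10/Oct/1998:13:55:36 -0700] "GET /cgi-bin/search?project=merger&budget=500000 HTTP/1.0" 200 2326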
A fundamental problem of developing a security policy then, is to link the choice of design
characteristics which reduce vulnerabilities and of countermeasures to threat and impact in order to
create a cost-effective balance which achieves an acceptable level of risk. Such a process might work as
follows:
1. Assess the impact of loss of or damage to the potential target. While the impact of the loss of a
family member as a parent is beyond measure, the economic value of that member as a wage earner
can be estimated as part of the process of deciding the amount of life insurance to purchase,
correct? The same model should be used in assessing the impact of loss of or damage to a particular
network resource or piece of information. In the same way, the economic impact of crime or destruction
by fire in a city can be determined as part of the process of sizing police and fire departments, and the
impact of the loss of a technological lead on battlefield effectiveness can be specified. Table 9.1,
extracted from my book "Protecting Your Web Site With Firewalls," supports this kind of assessment.
2. Not all impacts are economic. The loss of a user's privacy or of data integrity is an example of that!
3. Specify the level of risk of damage or destruction that is acceptable. This may well be the most
difficult part of the process. Use Table 9.1 as a boilerplate.
4. Identify and characterize the threat. The damage that can be caused by criminal behavior can be
described and predicted.
5. Analyze vulnerabilities. Your computer systems and networks can be designed to be less
vulnerable to hacker attacks. Where potential improvements that may reduce vulnerabilities are
identified, the cost of their implementation must be estimated.
6. Specify countermeasures. Where vulnerabilities are inherent or cost too much to eliminate during
the design and development of your security policy, countermeasures must be selected to reduce
risk to an acceptable level. Access to servers can be controlled. Use of computers and networks
can be monitored or audited. Personnel can be vetted to various degrees. Not all available
countermeasures need be used if some lesser mix will reduce risk to an acceptable level. Costs of
each type of countermeasure must be estimated in order to determine the most cost-effective mix.
7. Expect and allow for uncertainties. None of the factors in the risk management equation is
absolute. No threat is infinitely capable and always lucky. No system is without vulnerability. No
countermeasure is completely effective. Risk management requires the realistic assessment of
uncertainties, erring on neither conservative nor optimistic sides.
8. Keep in mind that in practice, the estimations needed in applying such a risk management
process are accomplished in only gross terms. Threat level or uncertainty may be assessed as
high or low. Impact may be designated as severe or moderate. This gross quantification of factors
in the risk management equation allows the design attributes used to reduce vulnerabilities and the
countermeasures to be grouped so that they can be applied consistently throughout large
organizations.
Table 9.1 provides a matrix to assess the level of security you may need to implement, based on the level
of concern with the information to be protected and the potential consequences in case of breach of
confidentiality.
High-Classified / Use encryption methods along with packet filtering:
If loss of integrity at your site will affect confidentiality, then the requirement for integrity is high and
must be met. If loss of integrity at your site does not affect confidentiality, then your site can be
accommodated under one of the requirements below for the Low, Medium, or High levels of concern, as applicable.
High / Use encryption methods and associated authentication methods:
Absolute accuracy is required for mission accomplishment (e.g., electronic commerce), or the expected
dollar value of a loss of integrity is high.
Low / Use password protection:
A reasonable degree of accuracy is required for mission accomplishment (database applications, search
engines), or the expected dollar value of a loss is low.
Very Low / May not require security measures other than integrity of data:
No particular degree of accuracy is required for mission accomplishment (informative pages, minimum
interaction with the user).
Table 9.1
Level of Integrity To Be Implemented
Data Security
Remember! Bastion hosts, and servers alike for that matter, are dull! They are obedient and will do what
you ask them to do, but unfortunately, they are dull! Since they will not think on their own, they don't
know the difference between the firewall administrator and a hacker (well, we probably wouldn't know
either!). Anything placed in the bastion host's document root directory is exposed and unprotected if
you don't find a way to protect it.
Bastion hosts that are loaded with a whole bunch of optional features and services are especially prone to
data security risks, particularly if your bastion host is also your Web server! Many of the features of a Web
server that add convenience and user-friendliness are also more susceptible to security flaws. Unfortunately,
most of the Web server software available on the market does not provide any kind of proxy support.
Indeed, according to a multi-platform Web server comparative review written by Jim Rapoza
for PCWEEK (April 1, 1996), of the six Web servers he reviewed, only two had proxy support. They
were IBM's Internet Connection Secure Server for OS/2 Warp 1.1 and Process Software Corp.'s
Purveyor Webserver for NetWare 1.0; Purveyor was rated as having "excellent" proxy support
and Internet Connection a "good" one.
Both products have a very easy interface for setting up proxies, which should be a must-have feature for your
Web site. These products even allow you to cache specific Uniform Resource Locators (URLs) and
redirect them to other proxy servers, which is a great security feature. Purveyor even allows you to block
internal users from accessing non-business-related or controversial Web sites. I am sure you don't want
upper management blaming you for allowing employees to spend chunks of the time they should be
working accessing Playboy, Penthouse, or neo-nazism sites!
Nevertheless, when choosing Web server software, have data security in mind, and site security as
well! Make sure it has solid access security options. You should be able to set user access
parameters and block access to the site based on the IP address or domain name of the client.
Proxy support will help you prevent attacks or unwanted visitors, enhancing data security. It will also
help you cope with the holes generally opened by dangerous features present in so much Web server
software. Another important aspect to consider is the underlying operating system. The operating system
underlying the Web server is a vital aspect in determining how safe the server is against hacker attacks.
The inherent openness of UNIX systems, for example, will bring extra work for you when trying to
block access by hackers. Conversely, a Mac-based system is much more secure, as it is not as
open as UNIX. Servers running Windows NT, or even Windows 95 or Novell, have good built-in
security. One of Windows NT's advantages is that it supports a large variety of Web server software, which
allows you to tailor your server's configuration.
Besides the operating system itself, you should be careful with the features each operating system, combined
with the Web server, has to offer. There are potentially dangerous ones that you should turn off, especially if
you do not need them. The following is a list of features to which you should pay special attention:
● Automatic directory listings - The more a hacker knows about your system, the more chances he has to
tamper with it. Of course, automatic directory listings can be very convenient, but hackers can
gain access to sensitive information through them. For example:
● Emacs backup files containing CGI scripts,
● Control logs,
Be aware that turning off automatic directory listings won't stop hackers from grabbing
files whose names they can guess, but it at least makes the process more difficult.
● Symbolic links following - There are servers that allow you to extend the document tree with
symbolic links. Although convenient, it can become dangerous if the link is created to a sensitive
area such as /etc.
● Server side includes - One of the major security holes is the "exec" form of server side includes. It
should be turned off completely or made available only to trusted users. Apache and NCSA allow
you to turn it off by entering the following statement in the directory control section of access.conf: Options
IncludesNoExec
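A minimal sketch of what that looks like in access.conf; the directory path is a hypothetical stand-in for your own document tree:
<Directory /usr/local/etc/httpd/htdocs>
Options IncludesNoExec
</Directory>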
Anti-viral scanners must recognize the instruction string, or "signature," of a virus in order to be able to detect it. A new virus
will not be detected at all unless, as occasionally happens, the programmer just reused an old virus and
changed the payload. That's what happened when the "Stoned" virus that displayed a message
recommending legalization of marijuana was mutated into "Michelangelo," which destroys data on March
6th, the painter's birthday. Of course, anti-viral software can be updated as new viruses or new versions
of old viruses are discovered, but it's always a game of catch-up ball, and even those who take care to
upgrade often will not be completely safe.
Moreover, the programmers who create viruses keep up with the state of the art in anti-viral software,
and constantly improve their malicious technology. We are now seeing viruses that are encrypted to
escape detection. Other viruses use compression technology to make transmission easier and recognition
more difficult.
Since the order in which instructions are executed can sometimes be changed without changing the
ultimate result, as when two processes are independent and either can run first, the order of the
instructions in a virus may be changed and thereby defeat the anti-viral software. Or null operations (no-ops),
instructions telling the computer to do nothing for a clock cycle, might be inserted at random points, mutating
the sequence of instructions for which the anti-viral software searches. Such changes result in viruses that
are called "polymorphic" because they constantly change the structural characteristics that would have
facilitated their detection. And lately, we have begun to see foxy viruses that recognize that anti-viral
software is at work, watch as sectors of the storage device are cleared, and copy themselves over to
previously cleared sectors, in effect leaping over the anti-viral bloodhound. Thus, while anti-virus
packages are a valuable, even essential, part of a sound information security program, they are not in and
of themselves sufficient. Good backup procedures and sound policies designed to reduce the likelihood
of a virus attack are also necessary.
Sound security policies, practices and procedures like those discussed in this chapter can reduce the risk
they represent to a manageable level. Much more dangerous are the risks posed by directed threats,
capable and willing adversaries who target the confidentiality, integrity and availability of our
information assets and systems.
Outside Threats
Hackers have received a great deal of attention in the press and in the entertainment media. They
comprise an interesting subculture, technically astute and talented even if socially and morally deprived.
They have been pictured as nerd teenagers who stay up all night eating pizza, drinking sodas as they
crouch over their computers, monitors reflecting off of their bottle-thick eyeglasses, and try command
after command until they get through their school's computer security so they can improve their grade
point average.
If this representation was ever accurate, it certainly is not so today. Today's cyberpunks may mostly be
yesterday's juvenile cyberdelinquents grown older, but they tend to be in their twenties and even thirties,
although the occasional teenager is still arrested for hacking. To the extent that hackers have a coherent
philosophy, it centers around the quaint notion that "information wants to be free." The hacker
philosophy is libertarian and technocentric. Access to computers and information, they believe, should
be unlimited, and hackers should be judged solely by their computer and network skills, not by archaic
laws and ethics whose evolution has not kept pace with the revolution in technology.
When outside hackers have the resources of a large company or a government behind them, they become
even more dangerous. Large companies and governments can afford to apply resources to cracking our
systems and networks that individuals would have trouble marshaling, including off-the-shelf equipment
like supercomputers or arrays of general purpose computers or such special purpose devices as Field
Programmable Gate Arrays. Boards are readily available with FPGA chips that can test 30 million DES
keys per second at a cost about ten percent of the cost of a PC. For companies and governments,
investments in custom-made special purpose chips are feasible that accelerate calculations and make the
cost per solution much lower. For an investment easily within reach of a large company or a small
government, 200 million DES keys could be tested per second using Application-Specific Integrated
Circuits.
Such resources change the difficulty of brute-force attacks on passwords and other access controls from
practically impossible to merely time consuming, and with enough resources to trivial. Using an FPGA
chip at an investment of a few hundred dollars, a 40-bit key (the maximum size for which export
approval can easily be obtained) could be recovered in an average time of about five hours. An
investment of a few tens of thousands of dollars could reduce the time to break a 40-bit key to a few
minutes. A few hundred thousand dollars would buy the capability to break a 40-bit key in a few
seconds, and a few million dollars would reduce the time to less than one second. Custom chips could
easily be designed for a few million dollars that would permit 40-bit key recovery in a few thousandths
of a second.
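A quick back-of-the-envelope check of the five-hour figure, assuming the 30 million keys per second quoted above for a single FPGA board and that, on average, half of the 2^40 possible keys must be tried before the right one turns up:
# Average 40-bit search time in hours at 30 million keys per second
echo '2^40 / 2 / 30000000 / 3600' | bc
# prints 5 -- roughly the five hours cited above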
DES keys of 56-bits are more secure, of course, than 40-bit keys, but an investment of a few hundred
thousand dollars could yield DES keys in a few hours and an investment of a few million dollars would
reduce recovery time to minutes.
Inside Threat
Even where high-quality information systems security mediates information transactions across the boundary
that separates an organization's systems and networks from the lawless cyberspace outside, and protects
the confidentiality, integrity, and availability of the organization's information assets and systems, it may
be easier and cheaper for an adversary to subvert or suborn an employee than to mount a direct attack. Or the attacker may
seek employment, and the authorized access that follows, in order to better position himself to mount a
wider attack that exceeds the access granted as a condition of employment.
Our defenses are mostly directed outward. Few systems have a plethora of internal firewalls mediating
information transactions within the organization. Many systems provide the capability to monitor and
audit information transactions, even those totally within the system.
But looking for an insider abusing privilege among the vast number of transactions taking place routinely
on the system is a daunting task, and impossible without computer-based audit reduction and analysis
techniques. So most of our problem lies inside. Techniques described in later chapters for abating the
resulting risks include good use of computer science and cryptography to protect information assets and
systems, monitoring and auditing to detect intrusions from without or abuses by insiders, and an effective
capability to react to security-relevant incidents, correct problems, and resume safe operations. But
effective and efficient security begins with and depends upon having the proper security policies in place.
As Gene Spafford ([email protected]) commented on the USENET once, there is a "fourth kind of
security problem [which] is one of perception and understanding. Perfect software, protected hardware,
and compatible components don't work unless you have selected an appropriate security policy and
turned on the parts of your system that enforce it." And he continues… "having the best password
mechanism in the world is worthless if your users think that their login name backwards is a good
password! Security is relative to a policy (or set of policies) and the operation of a system in
conformance with that policy."
To find security holes and identify design weaknesses, it is necessary to understand the system's control
structure and layers. In order to do that, you should always try to:
● Determine the items to be protected, or security objects, such as user files.
● Identify your control objects, or the items that will protect the security objects.
● Detect potential holes in the system; these holes can often be found in the code itself.
● Test code for unexpected input, using coverage, data flow, and mutation analysis.
I hope all the above gave you an idea of what a security policy should contain. Vulnerabilities are many,
and Internet attacks as well. There are several countermeasure strategies you can use, but without a guideline,
a map, you might find yourself shooting in the dark. That's when a security policy is necessary. It will
become your map, your guideline, your contract (with users and upper management), your "power of
attorney" to make the decisions you must take in order to preserve the security of your site.
In the late 1960s the Department of Defense (DoD) designed and implemented the ARPAnet network for
the exchange of defense industry research information world-wide. TCP/IP was the protocol developed
and UNIX was the platform.
The National Science Foundation (NSF) needed a network also to interconnect their supercomputers and
exchange academic research information so they built their own, but followed the DoD standards. They
called their network NSFNET.
The Internet consists of many worldwide, independent networks that allow interconnection and
transmission of data across the networks because they follow the same basic standards and protocols and
an agreed-upon Internet etiquette. There is no central authority; each user organization pays for its own piece of
the network.
Motivated by developments in high-speed networking technology and the National Research and
Education Network (NREN) Program, many organizations and individuals are looking at the Internet as a
means for expanding their research interests and communications. Consequently, the Internet is now
growing faster than any telecommunications system thus far, including the telephone system.
New users of the Internet may fail to realize, however, that their sites could be at risk from intruders who
use the Internet as a means for attacking systems and causing various forms of damage. Consequently, new
Internet sites are often prime targets for malicious activity including break in, file tampering, and service
disruptions. Such activity may be difficult to discover and correct, may be highly embarrassing to the
organization, and can be very costly in terms of lost productivity and compromised data integrity.
All Internet users need to be aware of the high potential for threat from the Internet and the steps they
should take to secure their sites. Many tools and techniques now exist to provide sites with a higher level
of assurance and protection.
<Your Company> branches should acquire a copy of the "Guide to the <Your Company> Internet." This
document is published by the MIS department. This guide defines the <Your Company> Internet Access
Network. You may acquire this guide by contacting the Director of MIS, at extension XXX.
3 DEFINITIONS
Definitions relating to this policy may be found in appendix "A".
4 REFERENCES
NIST CSL Bulletin, July 1993, NIST Connecting to the Internet: Security Considerations
<List here any other documents users can refer to in order to better understand this policy>
5 ABBREVIATIONS
ARPAnet Advanced Research Projects Agency Network
DMZ Demilitarized Zone
DoD Department of Defense
FTP File Transfer Protocol
● Configure firewalls to allow outgoing access to the Internet, but strictly limit incoming access to
<Your Company> data and systems by Internet users.
● Apply the DMZ concept as part of the firewall design.
● Firewall compromise would be potentially disastrous to subnet security. For this reason, branches
will, as far as is practical, adhere to the stipulations listed below when configuring and using
firewalls.
● Limit firewall accounts to only those absolutely necessary, such as the administrator. If practical,
disable network logins.
● Use smartcard or authentication tokens to provide a much higher degree of security than that
provided by simple passwords. Challenge-response and one-time password cards are easily
integrated with most popular systems.
● Remove compilers, editors, and other program development tools from the firewall system(s) that
could enable a cracker to install Trojan horse software or backdoors.
● Do not run any vulnerable protocols on the firewall such as TFTP, NIS, NFS, UUCP.
● Consider disabling the finger command, which can be used to leak valuable user
information.
● Consider not using the e-mail gateway commands (EXPN and VRFY), which can be used by
crackers to probe for user addresses.
● Do not permit loopholes in firewall systems to allow friendly systems or users special entrance
access. The firewall should not view any attempt to gain access to the computers behind the
firewall as friendly.
● Disable any feature of the firewall that is not needed, including other network access, user shells,
applications, and so forth.
● Turn on full-logging at the firewall and read the logs weekly at a minimum.
● No <Your Company> computer or subnet that has connections to the Internet can house privacy or
sensitive information without the use of firewalls or some other means to protect the information.
● <Your Company> branches and staff offices must develop and document an Internet security
strategy based on the type of Internet service selected for use. This strategy must be included in the
Internet Security Plan.
● <Your Company> branches and staff offices that use the Internet must adhere to guidance stated in
XXXXX, "<Your Company> Internet Security Policy."
● All software available on the Internet must be scanned for Trojan horses or computer viruses once
it has been downloaded to a <Your Company> computer.
● All downloaded software should be loaded preferably onto a floppy disk and not to the system
hard disk. Once you are reasonably assured that the downloaded software does not contain Trojan
horses or computer viruses it can be placed on the hard drive. If the software will not fit on a
floppy disk then the only option is the hard disk. The software must be scanned before use
(executed).
● Mandatory vulnerability and risk assessments of existing gateways are required at annual intervals.
The initial assessment should be completed within nine (9) months of the issuance of this policy.
All branches should also conduct weekly or monthly reviews of the audit trails of gateway software
and firewalls for breaches of security.
● <Your Company> personnel, and contractor personnel working for <Your Company> while using
the Internet:
❍ Must not be harassing, libelous, or disruptive to others while connected to the Internet.
❍ Must not transmit personal data or unauthorized company-owned data across the Internet.
❍ Must not download to company’s computers from the Internet any obscene written material
or pornography.
❍ Must not send threatening, racially harassing, or sexually harassing messages.
❍ Must not attempt to break into any computer whether <Your Company>, its clients or
private.
❍ Must not be used for private or personal business, except when authorized.
● <Your Company> sponsored Internet connections are to be used for official <Your Company>
business.
● Host computers should be regularly scanned to ensure compliance with <Your Company> security
guidelines.
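As a small illustration of the scanning and hardening stipulations above, the sketch below (file locations are the common defaults and may differ on your platform) lists risky services still enabled in a host's inetd configuration; anything it prints should be justified or commented out, after which inetd must be told to reread its configuration.
#!/bin/sh
# List inetd-managed services that normally have no place on a firewall or
# bastion host: TFTP, finger, UUCP, and the r-command daemons (shell,
# login, exec). Lines already starting with "#" are disabled and ignored.
egrep -v '^#' /etc/inetd.conf | egrep 'tftp|finger|uucp|shell|login|exec'
# After commenting out offending lines, have inetd reread the file, e.g.:
# kill -HUP <pid of inetd>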
7 RESPONSIBILITIES
The Director of MIS will:
1. Develop, coordinate, implement, interpret, and maintain Internet Security policies, procedures, and
guidelines for the protection of <Your Company> information system resources.
2. Review <Your Company> Internet security policy.
3. Assist in <Your Company’s> branch Internet security policy development and implementation.
4. Determine adequacy of security measures for systems used as gateways to the Internet.
5. Ensure that all <Your Company> branches conduct periodic information systems security risk
assessments, security evaluations, and internal control reviews of operational <Your Company>
Internet gateways and facilities.
All branches and <Your Company> departments that have or are planning to install a firewall or any sort
of gateway to the Internet will:
● Devise and implement a comprehensive risk management program which assures that security
risks are identified, considered, and mitigated through the development of cost effective security
controls. The risk management system will include a service access policy that will define those
services that will be allowed or explicitly denied from the restricted network, how these services
will be used, and the conditions for exception to this policy.
● Another part of this risk management system will be a firewall design policy. This policy relates
precisely to firewalls and defines the rules used to implement the service access policy.
● Each branch and staff office must develop an Internet Security Plan which addresses all security
controls in place or planned.
● These controls shall be commensurate with the risks identified in the risk analysis. Internet
Security Plans shall be submitted annually with the <Your Company's> security plans for review
and approval. The guidelines governing the submission of these security plans should comply with
the Internet Security Plan.
● Perform risk analysis to identify the risks associated with using Internet both for individual users
and branches or departmental offices. Cost effective safeguards, identified in the risk analysis
process, will be implemented and continually monitored to ensure continued effectiveness.
<Your Company> MIS department should be responsible for developing, testing, and maintaining
Internet contingency plans. The risk involved with using the Internet makes it essential that plans and
procedures be prepared and maintained to:
● Minimize the damage and disruption caused by undesirable events; and
● Provide for the continued performance of essential systems functions and services.
● Develop, install, maintain, and regularly review audit trails for unusual system activity.
● Fund, implement, and maintain the prescribed protective features identified as a solution by a risk
assessment.
● Risk assessments developed by branches and staff offices are to be made available to MIS upon
request.
● Ensure that the branch information security manager is a vital part of any security activity on the
Internet.
The information security manager is responsible for:
1. Implementing the policy stated in this directive.
2. Developing audit trails for any <Your Company> network connected to the Internet.
3. Reviewing and monitoring activity audit trails on the Internet connections.
4. Working closely with the branch network administrator in monitoring activity on the use of their
host and subnets.
8 NON-COMPLIANCE
All users of data and systems are responsible for complying with this Internet systems security policy, as
well as procedures and practices developed in support of this policy.
Anyone suspecting misuse or attempted misuse of departmental information systems resources is
responsible for reporting such activity to their branch or staff Office management, or to the information
system security manager or the MIS manager.
Violations of standards, procedures, or practices in support of this policy will be brought to the attention
of management for action, which will result in disciplinary action up to and including termination of
employment.
9 SOURCE OF INFORMATION
1. MIS Guide To The <Your Company> Internet
2. <Whatever documents you want to make available to users>
Chapter 10
Putting It Together: Firewall Design and
Implementation
This chapter discusses what you need to know about firewalls and their implementation. In some ways it
complements chapter 7, "What is an Internet/Intranet Firewall After All?," as it goes beyond the basic
concepts discussed in that chapter. This chapter reviews the different firewall technologies used today,
their strengths and their weaknesses, and the tradeoffs involved when designing a firewall system and
implementing it for your specific application and corporate needs.
1. They are vulnerable to attacks aimed at protocols higher than the network level protocol,
which is the only level they understand.
2. Since the network level protocol requires certain knowledge of its technical details, and not
every administrator has that knowledge, packet filtering firewalls are usually more difficult to
configure and verify, which increases the risk of misconfigurations, security holes,
and failures.
3. They cannot hide the private network topology and therefore expose the private network to
the outside world.
4. These firewalls have very limited auditing capabilities, and as you know, auditing should
play a major role in the security policy of your company.
5. Not all Internet applications are supported by packet filtering firewalls.
6. These firewalls do not always support security policy clauses such as user-level
authentication and time-of-day access control.
● Application-level firewalls - Application-level firewalls provide access control at the
application layer. Thus, they act as application-level gateways between two networks. Since
application-level firewalls function at the application layer, they have the ability to examine the
traffic in detail, making them more secure than packet filtering firewalls. However, this type of firewall
is usually slower than packet filtering due to that scrutiny of the traffic. Thus, to some degree
they are intrusive and restrictive, and normally require users either to change their behavior or to use
specialized software in order to achieve policy objectives. Application-level firewalls are thus not
transparent to the users.
● Advantages of application-level firewalls:
1. Since they understand the application-level protocol, they can defend against attacks that exploit it.
2. They are usually much easier to configure than packet filtering firewalls, as they don't require
you to know all the details of the lower-level protocols.
3. They can hide the private network topology.
4. They have full auditing facilities, with tools to monitor the traffic and manipulate the log
files, which contain information such as source and destination network addresses, application
type, user identification and password, start and end time of access, and the number of bytes
of information transferred in all directions.
5. They can support more security policies including user-level authentication and time-of-day
access control.
● Hybrid firewalls - Realizing some of these weaknesses with packet filtering and application-level
firewalls, some vendors have introduced hybrid firewalls which combine both packet filtering with
application-level firewall techniques. While these hybrid products attempt to solve some of the
weaknesses mentioned above, they introduce some of the weaknesses inherent in application-level
firewalls as outlined above.
● Weaknesses of hybrid firewalls:
1. Since hybrid firewalls still rely on packet filtering mechanisms to support
certain applications, they still have the same security weaknesses.
● Second-generation application-level firewalls - This type of firewall is still an
application-level firewall, only in its so-called second generation, which solves the
transparency problem of the earlier version without compromising performance.
Selecting a Firewall
Before you start selecting a firewall from chapter 14, you should develop a corporate security policy, as
discussed in chapter 7, and then select the firewall that can be used to implement the chosen policy.
When evaluating firewalls, care must be taken to understand the underlying technology used in the
firewall as some firewall technologies are inferior in security to others.
The basic concept of a firewall will always be the same, so you should evaluate a firewall based on the
level of security and the implementation features it offers. When I say security features, I mean the ability of a
firewall product to deliver security that is based on, and consistent with, your corporate security objectives and
policy. The following are some of the characteristics you should be looking for in a firewall:
● Security Assurance - Independent assurance that the relevant firewall technology fulfills its
specifications and that it is properly installed. Is the firewall product certified by the
National Computer Security Association (NCSA, http://www.ncsa.com/)? What about a
Communications Security Establishment (CSE) evaluation; does it have one?
● Privilege Control - The degree to which the product can impose user access restrictions.
● Authentication - What kind of access control does the product provide? Does it support
authorization? What about authentication techniques? These techniques include security features
such as source/destination computer network address authentication, password authentication,
access control cards, and fingerprint verification devices.
● Audit Capabilities - The ability of the product to monitor network traffic, including unauthorized
access attempts, generate logs, and provide statistical reports and alarms.
As for implementation features, you should be looking for the ability of a product to satisfy your
network management requirements and concerns. A good firewall product should offer:
● Flexibility - The firewall should be open enough to accommodate the security policy of your
company, as well as allow for changes in the future. Remember, a security policy should very
seldom change, but security procedures should always be reviewed, especially in light of new Internet
and Web-centric applications!
● Performance - A firewall should be fast enough that users don't feel the screening of
packets. The volume of data throughput and the transmission speed associated with the product should
be reasonable and consistent with your bandwidth to the Internet.
● Scalability - Is the firewall scalable? The product should be able to adapt to the multiple platforms and
instances within your protected network, including operating systems, machines, and security configurations.
As for integrated features, look for the ability of a firewall to meet your needs and your users' needs, such as:
● Ease of Use - The firewall product should ideally have a graphical user interface (GUI), which
For workstations
For servers
For remote access services
VII. Technical Support
Common goals and mission statement
Specific goals and procedures
Procedures for auditing corporate security
VIII. Auditing Policy
Automatic generation of login reports
Security checklist
IX. Technology Policy and Procedures
Adopted access control mechanisms
Firewall and proxy servers
Security management
Risk management and control
Tools such as Tripwire can perform periodic scans of your system to detect whether any system files or programs have been modified.
But this is not enough to prevent a hacker from invading your system, and not every operating system
platform has tools like Tripwire.
Tip:
Tripwire is distributed free of charge. If interested in downloading it, try URL:
ftp://coast.cs.purdue.edu/pub/COAST/Tripwire/
Another quick check you can do is to review your access and error log files for suspicious activity. Look
for traces of system commands such as "rm", "login", "/bin/sh", and "perl".
For those on Windows NT platform, check the Security log in the Event Log periodically, looking for
suspicious activities.
Also, hackers often try to trick a CGI script into invoking a command by entering long, very long lines
in URL requests, with the purpose of overrunning a program's input buffer. Lastly, look for repeated failed
attempts to access a password-protected section. Overall, these could be an indication
that someone is trying to break into your site.
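A rough sketch of this kind of check, assuming a Bourne shell with egrep and awk available; the log directory below is a hypothetical location, so adjust it to wherever your server writes its logs:
#!/bin/sh
# Scan Web server logs for command strings that often show up when someone
# tries to trick a CGI script into running a shell command.
LOGDIR=/usr/local/etc/httpd/logs
egrep 'rm |/bin/sh|login|perl' $LOGDIR/access_log $LOGDIR/error_log
# Flag suspiciously long request lines (possible input-buffer overruns).
awk 'length($0) > 500' $LOGDIR/access_log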
Sites are being broken into more and more every day. As technology, especially Web technology,
changes so rapidly, systems become obsolete or vulnerable to new threats very quickly. Even the most
protected system can become vulnerable through the creation of a new Java applet not anticipated by its present
security system.
Web servers running operating systems (OSs) such as SunOS and UNIX, which are based on the
client/server abstraction, are particularly sensitive to these moving Internet technology trends. Because they
are usually developed to model the network as an extension of the internal data bus, which in turn exposes a
series of features hardly found on other OS platforms, these same extensions open a door (if not many!)
to hackers and intruders.
But opening a door for potential hackers is only the index of the whole Bible! There is much more to it
when it comes to intrusion detection.
Nowadays, hackers are very aware of the typical security models utilized by MIS departments and deployed all over
the Internet. As a matter of fact, they use that knowledge to their own advantage. Password, access, and
authentication systems are not sufficient to guarantee the security of a protected network.
Hackers can write simple applets to act as Network File System (NFS) clients, for instance, and bypass
all the access control systems normally used, gaining total access to internal networks or user files. This
is not merely a security hole in NFS; it extends to almost every network service available.
When it comes to securing your site, you must rely on and apply every resource you can to guarantee the safety
of your site and users. Firewalls and proxy servers, as you saw in chapter 10, will not totally resolve the
problem, but they will greatly enhance your chances of survival.
Once you have done everything you can to protect your site, from hardware to software, from security
policy to its implementation, the only thing you can do is to accept the odds and wait for the day that you
may need to face an incident. I hope you never will, but if you do, you must be prepared to deal with the
incident, from the systems perspective, to the legal one.
There are several services that can be used as cracking tools as well. The following is a selection of some
of them; you should be careful when making them available and when using them.
The following monitoring script is a simple example you can base your own on, tailoring it to the system you are trying to protect:
#!/bin/sh
LOGFILE=logfile
while true; do
    # Record a snapshot of current processes and network connections.
    # (Adjust the ps flags for your platform; "ps aux" on BSD-style systems.)
    date >> $LOGFILE
    ps -ef >> $LOGFILE
    netstat -a >> $LOGFILE
    # Sample every 10 minutes during off hours (6PM to 8AM), hourly otherwise.
    case `date +%H` in
    (18|19|20|21|22|23|00|01|02|03|04|05|06|07)
        sleep 600
        ;;
    *)
        sleep 3600
        ;;
    esac
done
● Take action.
Take Action
It is time to implement your emergency response plan. Make sure upper management, users, and service
providers are aware of the incident.
You don't need to give them much information, especially the technical details, but you should give them a
reasonable timeframe for the restoration of the system.
Notify CERT and exchange your information with them. Not only will you be helping them alert others
about the incident, but they might also be able to help you with their expertise.
Finally, repair the security hole and restore the system. Make sure to document the whole incident, learn
from it, and archive it.
Catching an Intruder
It is very difficult to catch intruders, especially when they try to cover up their tracks. Chances are that if
you are able to spot a hacker attack, it will be by accident! Very unlikely by intention.
However, even though you will need a lot of luck to spot a hacker in your system, there are some
guidelines that you can follow to improve your odds:
● Always keep an eye on your log files and examine them regularly, especially those generated by the
system log service and the wtmp file.
● Watch for unusual host connections, as well as unusual times (instruct users about connection
times, so it is easier to eliminate possibilities).
● Watch for accounts that have not been used for a while and suddenly become active (see the sketch after this list). You should
always disable or delete unused accounts.
● Expect a hacker's visit usually between the hours of 6PM and 8AM, and on Saturdays, Sundays, and
holidays. Yes, they can come at any time!
● Set a shell script to run every 10 minutes during these times, logging all the processes and network
connections. For instance, I have a log file set in Performance Monitor (Windows NT) running
during those hours, tracking RAS connections, processes, and network connection activity to a
file. The shell script above is an example of it. But don't count on it alone: hackers are not stupid and will
quickly find out that they are being watched!
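A minimal sketch of the wtmp and dormant-account checks mentioned above; the account names are hypothetical placeholders, so substitute the accounts that should be idle on your system:
#!/bin/sh
# Recent logins recorded in wtmp; look for odd hours or unexpected hosts.
last | head -40
# Any recent activity on accounts that are supposed to be dormant.
for acct in guest demo test
do
    last $acct | head -5
done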
Reviewing Security
There is so much that could be discussed when reviewing security. Many books have been written about it,
and associations and task forces have been created for that purpose.
The following is a summary list of security issues you should review. It is not a complete list, of course,
but it does try to address some of the main issues affecting your Web environment. At the end of this
book you will find complementary bibliography references to complete this information:
● Make sure to install the latest NIS patches when working with it.
● Do not use any wildcards in trusted-host databases (/etc/hosts.equiv); if you have any, remove
them (see the check after this list).
● Be very careful when using .rhosts and .netrc. You should consider disallowing them from foreign
hosts.
● As with the trusted-hosts database, do not use wildcards, nor store plain-text passwords, in the .netrc
file.
● As with NIS, make sure to install the latest patches for NFS.
● Ensure that you specify to which hosts you export your file systems. You may want to write-protect
the user file system (/usr) when exporting it.
● You may want to disallow setuid and root access for any NFS file system.
● You may also want to turn off the "-n" option of the mount daemon (started in /etc/rc.local).
Although some people (including the mount daemon's manual page) believe the system will be only
slightly less secure with it, this is not true: you will have no security at all!
● When offering FTP service, make sure to write-protect the FTP spool directory.
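As a quick check for the wildcard items above (home directory locations vary from system to system, so adjust the path), any line this prints deserves investigation:
#!/bin/sh
# Flag '+' wildcard entries in the system trusted-hosts file and in users'
# .rhosts files; a bare '+' trusts every host on the network.
grep '^+' /etc/hosts.equiv /home/*/.rhosts 2>/dev/null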
As you can see, there are a few things you can do to prevent break-ins. The more control you have over
your system, the more alternatives you will have to prevent them, and even to try to catch an intruder... as far as
the system is concerned, at least. Legally speaking, unfortunately, there is not much you can do yet. The legal system in the
U.S. is trying to move fast, but not fast enough, one may say.
Note:
If you would like to have more information about ABA’s Science and Technology Section (STS)
and its Committees, you can check the URL: http://www.intermarket.com/ecl/
Tip:
If you want to know more about this crackdown, check Bruce’s electronic version of the book at
URL: http://homepage.eznet.net/~frac/crack.html
The FBI’s National Computer Crime Squad is dedicated to detecting and preventing all types of
computer-related crime. When an incident is detected, the tendency is to overreact, and many times,
given the legal infrastructure available to deal with the issue, that is what ends up happening.
Network intrusions, for instance, have been made illegal by the U.S. federal government, but detection
and enforcement are very difficult. The law, as it stands when facing computer crimes, is very limited
in essence and scope. It does not take much to realize this when you look at the criminal case of Kevin
Mitnick, aka the Condor, and his recent plea bargain. His final plea and the crimes he allegedly
committed had very little connection to each other.
Note:
If you are not familiar with Kevin Mitnick’s case, he was arrested back in February of 1995 for
allegedly breaking into the home computer of Tsutomu Shimomura, a respected member of
the computer security world.
Kevin, also known as "Condor," was suspected of spoofing Tsutomu’s computer and stealing
computer security tools to distribute over the Internet. By the beginning of July, the federal
prosecutors and Kevin’s lawyers had reached a plea bargain agreement whereby Kevin would
admit the charges of "possessing unauthorized access devices" in exchange for the prosecutors
dropping 22 charges brought against him.
According to the sentencing guidelines, Kevin’s admission of guilt would carry a maximum
prison sentence of eight months.
The fact is that corporations and governments alike love to spy on the enemy. The Web is providing new
opportunities for this. Wired magazine (June 1996) commented that more and more, American
people are watching less television at night and spending an average of 11 hours in front of a computer screen.
Most likely on the Web!
Hackers-for-hire became a trend, somewhat of an idol. I had the chance, while writing this book, to talk to
some of them, and what I found out is that there is a certain status in being a hacker. Just check the magazine
and newspaper articles about them; the coverage is always catchy, vibrant. Crackers are striving to become
hackers, and hackers and law enforcement agents are re-living the old tale (tale?).
Tracing hackers and crackers is very labor-intensive, especially the former. Convictions are hard to
reach, as the laws were not written with electronic theft in mind.
For instance, how would you qualify a scenario where your site is victimized by mail bombing?
Hackers, with little effort, can instruct a computer to repeatedly send electronic mail to your Webmaster’s
account to such an extent that it could generate a "denial of service" state and potentially shut down your
entire site. Is this action illegal? It may not be.
Note:
The journalists Joshua Quittner and Michelle Slatalla had their home computer targeted by hackers
and flooded with mail bombs. Also, their phone lines were rerouted for a whole weekend.
The problem is that there is not yet a concrete definition for the term "computer-related crime." What is
the difference between illegal or deliberate abuse of the Internet and a merely annoying act? Unfortunately,
one could look at e-mail bombing both ways.
Legal systems everywhere, and ABA/STS is an example, are very busy trying to find ways of dealing
with crimes and criminals on the Internet. As it stands, there is no consensus on how hackers and
other computer criminals are prosecuted; it varies from one jurisdiction to another. It is as cases like
Kevin Mitnick’s and Jake Baker’s unfold that the world’s legal systems start to react and get ready for this
new cast of citizens.
Computer information systems present a whole slew of legal issues. For instance, your Web site can very well be used for dissemination of useful information, but it can also be used as an outlet for defamation, contraband materials, etc. How should this situation be treated? In the case of a computer crime at your site where users were affected, who is liable, you or the "hacker?" Is the crime his fault, since he was the author, or the Webmaster's, since he controls and provides access to the site?
● Fraud and Abuse - The fraud and abuse statute states that it is a crime to fraudulently access or
abuse the access to a computer.
Note:
The Fraud and Abuse Statute states that:
"a) Whoever
1. Knowingly accesses a computer without authorization or exceeds authorized access, and by means of such conduct obtains information that has been determined by the United States Government pursuant to an Executive order or statute to require protection against unauthorized disclosure for reasons of national defense or foreign relations, or any restricted data, as defined in paragraph y. of section 11 of the Atomic Energy Act of 1954, with the intent or reason to believe that such information so obtained is to be used to the injury of the United States, or to the advantage of any foreign nation;
2. Intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains information contained in a financial record of a financial institution, or of a card issuer as defined in section 1602(n) of title 15, or contained in a file of a consumer reporting agency on a consumer, as such terms are defined in the Fair Credit Reporting Act (15 U.S.C. 1681 et seq.);
3. Intentionally, without authorization to access any computer of a department or agency of the United States, accesses such a computer of that department or agency that is exclusively for the use of the Government of the United States or, in the case of a computer not exclusively for such use, is used by or for the Government of the United States and such conduct affects the use of the Government's operation of such computer;
4. Intentionally accesses a Federal interest computer without authorization, and by means of one or more instances of such conduct alters, damages, or destroys information in any such Federal interest computer, or prevents authorized use of any such computer or information, and thereby
A. causes loss to one or more others of a value aggregating $1,000 or more during any one year period; or
B. modifies or impairs, or potentially modifies or impairs, the medical examination, medical diagnosis, medical treatment, or medical care of one or more individuals; or
6. Knowingly and with intent to defraud traffics (as defined in section 1029) in any password or similar information through which a computer may be accessed without authorization, if
(A) such trafficking affects interstate or foreign commerce; or
(B) such computer is used by or for the Government of the United States;
shall be punished as provided in subsection (c) of this section."
The security of your site ultimately rests on three classic goals:
● Confidentiality,
● Integrity, and
● Availability.
The only way you will be able to achieve excellence in these areas is by regulating the flow of users, services and activity of your site with a predetermined set of rules: the security policy.
The security policy, which should precede any of the security strategies you may implement (firewalls, packet filtering, access control and authentication, etc.), will specify which subjects can access which objects. Thus, it is important that you are very clear about the subjects you will be dealing with at your site (e.g. internal users, external users, clients leasing portions of your site, etc.) and the objects to be accessed or offered (e.g. Web services, links, leased home pages, etc.).
You will need to devote some time and effort to elaborating your security policy. Security measures are employed to prevent illicit tampering with your users and clients as well as with the services you offer. Webmasters, systems administrators and everyone else responsible for the site's operation need to recognize the vulnerabilities to which the site is subject and take the steps to implement appropriate safeguards. That is what this book is all about.
Internet security, while a relatively recent concern, is subject to a variety of interpretations. Historically,
security measures have been applied to the protection of classified information from the threat of
disclosure in an MIS or computer lab environment. But nowadays, in an environment where general Internet users are capable of shifting from personal home pages on the Web to personal ISPs, much attention has been directed to the issue of individual privacy as it relates to personal information stored in computerized data systems.
Data integrity in financial, scientific and process control applications at your site should be another focus
of attention.
When setting up your security policy, remember that a security policy, like your car insurance, is to a large
extent applied risk management: you should try to achieve a tolerable level of risk at the lowest possible
cost. The goal is to reduce the risk exposure of your site to an acceptable level, best achieved by a formal
assessment of your risks. This includes a number of components, such as the identification of the Web
site assets, values, threats and vulnerabilities, as discussed in this book, as well as the financial impact of
each threat-asset combination.
When analyzing your risks, make sure to involve as many people as possible, from users to managers and
upper management. If confidentiality is a specific concern, based on the services you will provide (dating
services, financial services, etc.), additional protection must be provided through the application of
hardware/software security solutions as well as mandatory regulatory requirements.
Make sure to include specific security administrative practices, assigning security responsibilities to all
professionals involved with the operation and maintenance of the site. Make sure to determine:
● A procedure to ensure that risks are identified (auditing logs, printing activity reports, implementation of configuration and security checklists, etc.);
● Individual security duties and the appropriate assignment of responsibilities;
● File access policy and designated restricted areas in your disk farm;
● Authorization and authentication procedures for new users and services (you can't be in control of the site 24 hours a day, seven days a week! Have a documented set of procedures);
● A contingency plan, in case of emergencies.
Final Considerations
Internet security is possible! Break-ins, although inevitable, can be prevented when you are aware of security issues. However, this means that all the issues discussed in this book would have to be taken into consideration, and even then this book would not be enough to guide you in securing your site, as new threats arise every day and the Web and computer technology are too diverse to be covered in a single book.
Internet security and firewall management is a full-time job. The systems manager, or the Webmaster, or the LAN administrator, or the MIS director and so on, cannot be blamed in case of an incident. They probably didn't have time!
If the people involved with the every-day activity and operation of your site knew the information contained in this book, the risks of your site being attacked, and the resulting damage, could be much smaller. This is exactly why I wrote this book. Not for the gurus. Not for the technically skilled. It was to give a survey of the threats a Web site faces, and of the hackers whose existence makes our work more... enjoyable.
By no means should you have the illusion that by following the information and guidelines of this book your site will be impossible to penetrate.
Finally, make sure to keep security in mind during the whole design phase of your Web site. This will also make security much more user friendly.
Chapter 11
Proxy Servers
Application gateways, or proxy servers, define a whole different concept in terms of firewalls. In order to balance out some of the weaknesses presented by packet-filtering routers, you can use certain software applications in your firewall to forward and filter connections for services such as telnet and FTP. These applications are referred to as a proxy service, and the host running the proxy service is often called an application gateway.
Many IS&T (information systems and technology) professionals consider application gateways to be a
true firewall because the other types lack user authentication. Accessibility is much more restricted than
with packet-filtering and circuit-level gateways because it requires a gateway program for every
application such as telnet, FTP, and so on.
As a matter of fact, there are many companies that use only a proxy service as their firewall, while others just rely on the firewall itself. Depending on your environment, the size of your company and the level of protection you want to accomplish, one or the other may be all you need. However, as a rule of thumb, you should always consider the implementation of a proxy service combined with your packet-filtering routers (firewalls), so that you can achieve a more robust level of defense and flexible access control. Also, you will find that many firewall products bring you the best of both worlds, combining both filtering and proxying features in a single package.
The combination of application gateways and packet-filtering routers to increase the level of security and
flexibility of your firewall is therefore the ideal solution for addressing Internet security. These are often
called hybrid gateways. They are somewhat common, as they provide internal hosts unobstructed access
to untrusted networks while enforcing strong security on connections coming from outside the protected
network.
Consider figure 11.1 as an example of a site that uses a packet-filtering router and blocks all incoming
telnet and FTP connections. The router allows telnet and FTP packets to go only to the telnet/FTP
application gateway. A user connecting to a site system would have to connect first to the application
gateway, and then to the destination host, as follows:
1. A user telnets to the application gateway and enters the name of an internal host
2. The gateway checks the user’s source IP address and accepts or rejects it according to any access
criteria in place
3. The user might need to be authenticated
4. The proxy service creates a telnet connection between the gateway and the internal server
5. The proxy service passes bytes between the two connections
6. The application gateway logs the connection
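To make the sequence above concrete, here is a minimal sketch, in Python, of the kind of relay an application gateway performs. It is illustrative only: the listening port, the allowed-source list and the prompt are hypothetical, and a real telnet gateway (such as the one in the TIS toolkit) adds user authentication, protocol handling and far stronger controls.

import socket, threading, datetime

ALLOWED_SOURCES = {"192.168.1.25"}      # hypothetical access criteria (step 2)
GATEWAY_PORT = 2323                     # hypothetical listening port

def relay(src, dst):
    # Step 5: pass bytes between the two connections until one side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle(client, addr):
    if addr[0] not in ALLOWED_SOURCES:              # step 2: check the source IP address
        client.close()
        return
    client.sendall(b"host to connect to: ")         # step 1: the user names an internal host
    target = client.recv(256).strip().decode()
    # (step 3, user authentication, is omitted in this sketch)
    print(datetime.datetime.now(), addr[0], "->", target)    # step 6: log the connection
    server = socket.create_connection((target, 23))          # step 4: gateway-to-host leg
    threading.Thread(target=relay, args=(server, client)).start()
    relay(client, server)

listener = socket.socket()
listener.bind(("", GATEWAY_PORT))
listener.listen(5)
while True:
    handle(*listener.accept())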
If you look at figure 11.2, it shows the details of the virtual connection happening in figure 11.1 and emphasizes the many benefits of using proxy services. Let's stop for a moment and try to identify some of these benefits:
Proxy services allow through only those services for which there is a proxy. If an application gateway contains proxies for FTP and telnet, only FTP and telnet are allowed into the protected subnet. All other services are completely blocked. This degree of security is important. The proxy makes sure that only trusted services are allowed through the firewall and prevents untrusted services from being implemented on the firewall without your knowledge.
Let’s take a look at some advantages and disadvantages of application gateways.
There are several advantages to using application gateways over the default mode of permitting application traffic directly to internal hosts. Here are the five main ones:
1. Hiding information. The names of internal systems (through DNS) are hidden to outside systems.
Only the application gateway host name needs to be known to outside systems.
2. Robust authentication and logging. The traffic can be pre-authenticated before it reaches internal
hosts. It can also be logged more efficiently than if logged with standard host logging.
3. Cost-effectiveness. Authentication/logging software and hardware are located at the application
gateway only.
4. More comprehensive filtering rules. The rules at the packet-filtering router are more
comprehensive than they would be with the routers filtering and directing traffic to several specific
systems. With application gateways, the router needs only to allow application traffic destined for
the application gateway and block the rest.
5. E-mail. It can centralize e-mail collection and distribution to internal hosts and users. All internal
users would have e-mail addresses of the form user@mailbag, where mailbag is the name of the
e-mail gateway. The gateway would receive mail from outside users and then forward it to internal
systems.
However, nothing is perfect! Application gateways have disadvantages too. Connecting to client-server protocols such as telnet requires two steps, inbound or outbound. Some gateways even require client modification, which is not necessarily the case with a telnet application gateway, but it still requires a modification in user behavior. The user would have to connect to the firewall as opposed to connecting directly to the
host. Of course, you could modify a telnet client to make the firewall transparent by allowing a user to
specify the destination system (as opposed to the firewall) in the telnet command. The firewall would
still serve as the route to the destination system, intercepting the connection and running authentication
procedures such as querying for a one-time password.
You can also use application gateways for FTP, e-mail, X Window, and other services.
Note:
Some FTP application gateways have the capability to block put and get commands to specific
hosts. They can filter the FTP protocol and block all put commands to the anonymous FTP server.
This guarantees that nothing can be uploaded to the server.
So, what are proxies after all? Simply put, proxies are gateway applications used basically to route Internet and Web access from within a firewall.
If you have used TIA (The Internet Adapter) or TERM, you probably are familiar with the concept of
redirecting a connection. Using these programs, you can redirect a port. Proxy servers work in a similar
way, by opening a socket on the server and allowing the connection to pass through.
A proxy is a special HTTP server that typically is run on a firewall. A proxy basically does the following:
● Receives a request from a client inside the firewall
Usually, the same proxy is used by all of the clients in a subnet. This enables the proxy to efficiently
cache documents that are requested by several clients. Figure 11.3 demonstrates these basic functions.
The fact that a proxy service is not transparent to the user means that either the user or the client will
have to be proxified. Either the user is instructed on how to manage the client in order to access certain
services (telnet, FTP), or the client, such as Web clients, should be made proxy-aware.
The caching of documents makes proxies very attractive to those outside the firewall. Setting up a proxy
server is not difficult. Today, most web client programs already have proxy support built in. It is very
simple to configure an entire workgroup to use a caching proxy server, which helps to cut down on
network traffic costs because many of the documents are retrieved from a local cache after the initial
request has been made.
A proxy provides a mechanism that makes a firewall safely permeable for users in an organization, without creating a potential security hole through which hackers can get into the organization's protected network.
This application-level proxying is easily supported with minor modifications for the Web client. Most
standard out-of-the-box Web clients can be configured to be a proxy client without any need for
compilations or special versions. In a way, you should begin to see proxying as a standard method for
getting through firewalls, rather than having clients getting customized to support a special firewall
method. This is especially important for your Web clients because the source code will probably not be
available for modification.
As an example of this procedure, check the Anonymizer site, at URL http://www.anonymizer.com. All connections passing through the Anonymizer are proxified. The outgoing connection is totally redirected and has its address changed, only here it is done to protect the identity of the client rather than for access control (another benefit of using proxies!). Clients without DNS (Domain Name Service) can still use the Web because the only thing they need is the proxy's IP address.
Tip:
You can build a proxy-type firewall by using the TIS toolkit if you have experience with UNIX and programming. It contains proxies for telnet, FTP, Gopher, Rlogin and a few other programs. Also, as an alternative, you can use Purveyor 1.1 (http://www.process.com), which offers all of that without the need for UNIX and programming knowledge. Best of all, you won't need an expensive UNIX box: it runs on Windows NT and Windows 95.
Organizations using private network address spaces can still use your Web site as long as the proxy is
visible to both the private internal net and the Internet, most likely using two separate network interfaces.
Proxying permits high-level logging of client transactions, which includes the client IP address, date and
time, URL, byte count, and success code. Another characteristic of proxying is its capability to filter
client transactions at the application-protocol level. It can control access to services for individual
methods, server and domain, and so on.
As far as caching goes, the application-level proxy facilitates it by making it more effective on the proxy server than on each client. This helps to save disk space because only a single copy is cached. It also enables more efficient caching of documents: the cache can use predictive algorithms, such as look-ahead, more effectively because it has many more clients and therefore a much larger sample size on which to base its statistics.
Have you ever thought about browsing a Web site when the server is down? It is possible, if you are
caching. As long as you connect to the cache server, you can still browse the site even if the server is
down.
Usually, Web clients’ developers have no reason to use firewall versions of their code. But in the case of
the application-level proxy, the developers might have an incentive: caching! I believe developers should
always use their own products, but they usually don’t with firewall solutions such as SOCKS. Moreover,
you will see that a proxy is simpler to configure than SOCKS, and it works across all platforms, not only
UNIX.
Technically speaking, as shown in figure 11.4, when a client requests a normal HTTP document, the
HTTP server gets only the path and keyword portion of the requested URL. It knows its hostname and
that its protocol specifier is http:.
When a proxy server receives a request from a client, HTTP is always used for transactions with the
proxy server, even when accessing a resource served by a remote server using another protocol such as
Gopher or FTP.
A proxy server always has the information necessary to make an actual request to remote hosts specified
in the request URL. Instead of specifying only the pathname and possibly search keywords to the proxy
server, as figure 11.5 shows, the full URL is specified.
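As a rough illustration of that difference (the hostnames are purely illustrative and the proxy port is an assumption), compare the request a client sends directly to an origin server with the one it sends to a proxy:

import socket

# Talking directly to the origin server: only the path portion is sent.
direct = socket.create_connection(("www.example.com", 80))
direct.sendall(b"GET /index.html HTTP/1.0\r\nHost: www.example.com\r\n\r\n")

# Talking to a proxy server: the full URL is sent, so the proxy knows which
# protocol and which remote host to use for the actual retrieval.
proxied = socket.create_connection(("proxy.example.com", 8080))
proxied.sendall(b"GET http://www.example.com/index.html HTTP/1.0\r\n\r\n")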
This way, a proxy server behaves like a client to retrieve a document, calling the same protocol module of Libwww that the client would call to perform the retrieval. However, it is then necessary to create an HTTP reply containing the requested document for the client. A Gopher or FTP directory listing, for example, is returned to the client as an HTML document.
Caution:
Netscape does not use libwww, so if you are using Netscape, you will not be calling a protocol module of libwww from the client.
Therefore, by nature a proxy server has a hybrid function: it must act as both client and server. It is a server when accepting HTTP requests from clients connecting to it, and a client (to the remote server) when actually retrieving the documents for its own clients.
Note:
In order for you to have a complete proxy server, it must speak all of the Web protocols,
especially HTTP, FTP, Gopher, WAIS, and NNTP.
One of the HTTP server programs, CERN’s httpd, has a unique architecture. It is built on top of the
WWW Common Library. The CERN httpd speaks all of the Web protocols just like Web clients, unlike
other HTTP servers built on the WWW Common Library. It has been able to run as a protocol gateway
since version 2.00, but not enough to act as a full proxy. With version 2.15, it began to accept full URLs,
enabling a proxy to understand which protocol to use when interacting with the target host.
Another important feature with a proxy involving FTP is that if you want to deny incoming connections
above port 1023, you can do so by using passive mode (PASV), which is supported.
Caution:
Not all FTP servers support PASV, causing a fallback to normal (PORT) mode. It will fail if
incoming connections are refused, but this is what would happen in any case, even if a separate
FTP tool were used.
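A small, hedged illustration of the behavior just described, using Python's standard ftplib against a purely hypothetical server name: with passive mode the client opens the data connection outbound, so the firewall never has to accept an incoming connection above port 1023.

from ftplib import FTP

ftp = FTP("ftp.example.com")     # hypothetical server
ftp.login()                      # anonymous login
ftp.set_pasv(True)               # PASV: the client connects out for the data channel
ftp.retrlines("LIST")            # the directory listing travels over that passive connection
ftp.quit()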
However, before considering caching, you should be aware of at least a couple of problems that can occur and need to be resolved:
● Can you keep a document in the cache and still be sure that it is up-to-date?
● Can you decide which documents are worth caching, and for how long?
The caching mechanism is disk-based and persistent. It survives restarts of the proxy process as well as
restarts of the server machine itself. When the caching proxy server and a Web client are on the same
machine, new possibilities are available. You can configure a proxy to use a local cache, making it
Tip:
All major HTTP servers already support the conditional GET header.
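As a sketch of how a cache can exploit that header (the URL and the date below are illustrative only), a conditional GET asks the server to return the document only if it has changed since the cached copy was stored:

import urllib.request, urllib.error

req = urllib.request.Request(
    "http://www.example.com/index.html",
    headers={"If-Modified-Since": "Sat, 29 Oct 1994 19:43:31 GMT"})
try:
    body = urllib.request.urlopen(req).read()   # 200: the document changed, refresh the cached copy
except urllib.error.HTTPError as err:
    if err.code == 304:                         # 304 Not Modified: serve the cached copy
        pass
    else:
        raise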
Just for your information, there is a function called no-cache pragma, which is typically used by a
client’s reload operation. This function provides users with the opportunity to do a cache refresh with no
visible modifications in the user interface. A no-cache pragma function is forwarded by the proxy server,
thus ensuring that if another proxy is also used, the cache on that server is ignored.
In summary, from the internal network perspective, a proxy server tends to allow much more outbound access than inbound. Generally, it will not allow Archie connections or direct mailing to the internal network unless you configure it to.
Also, depending on which proxy server you are using, you should anticipate problems with FTP when doing a GET or an ls, because FTP will open a socket on the client and send the information through it. Some proxy servers will not allow this, so if you will be using FTP, make sure the proxy server supports it.
Note:
With Purveyor, a client who does not implement Domain Name Services (DNS) will still be able
to access your Web site through Purveyor’s proxy server. The proxy IP address is the only
information required.
As the applications for proxies rise, there are many features that are still in their early stages, but the
basic features are already there! You should plan on having a proxy server on your firewall. Although
caching is a wide and complicated area, it is also one of the parts of the proxy server that needs to be
improved.
Tip:
You can provide Internet access for companies using one or more private network address spaces,
such as a class A IP address 10.*.*.* by installing a proxy server that is visible to the Internet and
to the private network.
I believe the HTTP protocol will be further enhanced as Internet growth continues to explode. In the near
future you should see multipart requests and responses becoming a standard, enabling both caching and
mirroring software to refresh large amounts of files in a single connection. They are already much
needed by Web clients to retrieve all of the inlined images with one connection.
Moreover, proxy architecture needs to be standardized. Proxy servers should have a port number
assigned by Internet Assigned Numbers Authority (IANA). On the client side, there is a need for a
fallback mechanism for proxies so that a client can connect to a second or third proxy server if the primary proxy fails (as with DNS). But these are just items on a wish list that would certainly improve netsurfing; they are not yet available.
Tip:
If you need to request parameter assignments (protocols, ports, etc) to IANA, they request you to
send it by mail to [email protected]. For SNMP network management private enterprise number
assignments, please send e-mail to [email protected].
Taking into consideration the fast growth of the Web (by the time I finish this chapter, the Web will have surpassed FTP and Gopher altogether!), I believe proxy caching represents a potential (and needed) solution. Bits and bytes will need to be returned from a nearby cache rather than from a faraway server in a geographically distant place.
SOCKS
SOCKS is a package that enables servers behind the firewall to gain full access to the Internet. It redirects requests aimed at Internet sites to a server, which in turn authorizes the connections and transfers data back and forth.
Tip:
If you need more information about SOCKS, you can find it at http://www.socks.nec.com. To join
the SOCKS mailing list, send mail to [email protected] with subscribe SOCKS
[email protected] in the body of the mail.
SOCKS was designed to allow servers behind a firewall to gain full access to the Internet without requiring direct IP reachability. The application client establishes communication with the application
server through SOCKS. Usually the application client makes a request to SOCKS, which typically
includes the address of the application server, the type of connection, and the user’s identity.
After SOCKS receives the request, it sets up a proper communication channel to the application server. A
proxy circuit is then established and SOCKS, representing the application client, relays the application
data between the application client and the application server.
It is SOCKS that performs several functions such as authentication, message security-level negotiation,
authorizations, and so on while a proxy circuit is being set up.
SOCKS performs four basic operations (the fourth being a feature of SOCKS V5):
● Connection request
● Proxy circuit setup
● Relay of application data
● Authentication (V5)
A centralized SOCKS server, unfortunately, can become the bottleneck of internetworking. You must try to balance it out with hierarchical distribution of SOCKS, shadow SOCKS (multiple parallel SOCKS servers), and other mechanisms for keeping your security policy consistent. Also, beware of potential security holes and attacks among multiple SOCKS servers, and so on, as a factor in the acceptability of SOCKS as a secure mechanism for an insecure network.
The integration of SOCKS and the Web has substantially increased the area of security on the web.
Whereas secure web-related technologies such as S-HTTP (Security-enhanced HyperText Transport
Protocol) and SSL (Secure Socket Layer) provide message and server authentications, SOCKS can be
successfully integrated to provide user authentication and authorization. Furthermore, the security
technologies employed on the Web can also be integrated into SOCKS to enhance the security of proxy
connections.
Tip:
If you want to take a look at the source code for TCP Wrapper, you can download it from
ftp://ftp.win.tue.nl/pub/security.
Another feature of TCP Wrapper is its support library, libwrap.a. It can be used by many other programs
to provide the same wrapper-like defenses for other services.
Also, TCP Wrapper only controls the machine it is installed on, making it a poor choice for network-wide use. Firewalls are much broader in scope and therefore can protect every machine of every architecture.
However, the major drawback of TCP Wrapper is that it does not work on Apple Macintoshes or
Microsoft Windows machines. It’s basically a UNIX security tool.
Note:
You can download SOCKS from
ftp://sunsite.unc.edu/pub/Linux/system/Network/misc/socks-linux-src.tgz
If you care to, you can also download a configuration example, found in the same directory, called
socks-config.
Before I start configuring SOCKS, I should be aware that SOCKS needs two separate configuration files: one to specify the allowed access and the other to route the requests to the appropriate proxy server. I have to make sure the access file is loaded on the server and that the routing file is loaded on every UNIX computer.
I will be using SOCKS version 4.2 beta, but as discussed earlier in this chapter, version 5 is already
available. If you’re also using version 4.2 beta, the access file is called sockd.conf. Simply put, it should
contain two lines: a permit line and a deny line. For each line I will have three entries:
● The identifier (permit/deny). It will be either permit or deny, but I must have both a "permit" and a
"deny" line.
● The IP address. It holds a four-byte address in typical IP dot notation.
● The address modifier. Also a typical four-byte IP number, acting like a netmask, such as 255.255.255.255.
For example, the line will look like this:
permit 192.168.2.26 255.255.255.255
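Following the mask convention described above (these addresses are examples only), a minimal sockd.conf that permits a whole internal 192.168.2.x network and then denies everything else might look like this:

permit 192.168.2.0 255.255.255.0
deny 0.0.0.0 0.0.0.0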
My goal is to permit every address I want and then deny everything else. Another issue I have to decide
is about power users or special ones. I could probably allow some users to access certain services, as well
as deny certain users from accessing some of the services that I have allowed in my internal network.
However, this is done by using ident, an application that, if enabled, will have httpd connect to the ident daemon of the remote host and find out the remote login name of the owner of the client socket. Unfortunately the Trumpet Winsock I am using does not support it, nor do some other systems. Keep in mind that if your system supports ident, this is a good feature to use, even though it's not trustworthy; use it for informational purposes only, as it does not add any security to your system.
One thing I need to watch out for, and I am sure you will have to as well, is not to confuse the name of
the routing file in SOCKS, socks-conf, with the name of the access file. They are so similar that I find it
easy to confuse the two. However, their functions are very different.
The routing file is there to tell SOCKS clients when to use it and when not to use it. Every time an
address has a direct connection to another (through Ethernet, for example), SOCKS is not used because
its loopback is defined automatically. Therefore, I have three options here:
● To deny, which tells SOCKS to reject a request.
● To direct, which tells us what address should not use SOCKS (addresses that can be reached
without SOCKS).
● To sockd, which tells the computer what host has the SOCKS server daemon on it (the syntax is sockd @=<serverlist> <IP address> <modifier>). The @= entry enables me to enter a list of proxy server IP addresses.
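Putting those three options together, a minimal routing file for this setup (again with purely illustrative addresses) might tell clients to reach the local 192.168.2.x Ethernet directly and to send everything else through the SOCKS server at 192.168.2.1:

direct 192.168.2.0 255.255.255.0
sockd @=192.168.2.1 0.0.0.0 0.0.0.0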
Now, to have my applications working with the proxy server, they need to be "sockified." I need a telnet
address for direct communication and another for communications using the proxy server. The
instructions to sockify a program are included with SOCKS. Because the programs will be sockified, I
will need to change their names. For example, finger will become finger.orig, ftp will become ftp.orig,
and so on. The include/socks.h file will hold all of this information.
A nice feature of using Netscape Navigator is that it handles routing and sockifying itself. However,
there is another product I plan to use called Purveyor Web Server 1.2. that not only also works as a proxy
for FTP, Gopher, and HTTP but also
But one of the reasons I will be using Trumpet Winsock (for Microsoft Windows) is that it comes with
built-in proxy server capabilities. I just need to enter the IP address of the server and addresses of all the
computers I can reach directly in the setup menu. Trumpet Winsock will then handle all of the outgoing
packets.
At this point, I should be done. However, I know I’ll have a problem (and you will too!). SOCKS does
not work with UDP, only with TCP. Programs such as Archie use UDP, which means that because
SOCKS is my proxy server, it will not be able to work with Archie. Tom Fitzgerald ([email protected])
designed a package called UDPrelay to be used with UDP, but it’s not compatible with Linux yet.
Chapter 12
Firewall Maintenance
The level of security you have implemented at your company is directly related to the amount of money you have invested in it and the risks you're willing to take. So you install a firewall!
Firewall maintenance begins with its management, and as part of management you must not consider the
installation of a firewall as the solution to all of your security problems. Always keep in mind, as stressed
throughout this book, that firewalls provide a wide variety of controls, but in the end they are only a tool.
A firewall is part of a diversified defense strategy that identifies what must be protected and identifies the
potential threats.
It seems obvious, but there is more to protecting a network than hardware and software. Security comes
from the integration of reliable technology, active and alert systems administrators, and management
decisions regarding user access to the Internet and other computer resources. Prudence demands the
development of a comprehensive plan to deal with system security. You, as the administrator, along with
your security staff, will have to define at least:
1. Which assets are to be protected, and
2. What level of risk those assets are exposed to.
Therefore, your security policy must include multiple strategies. Increasingly, this is overlooked as administrators turn toward technology, and firewalls in particular (and rely on them!), as a cure-all for their installation's security. This is a dangerous path to follow. Firewalls should not be called upon to perform increasingly complex and unreasonable tasks such as scanning packets for viruses, encrypted data and even foreign languages.
Now, the firewall should not be forgotten either. It's not because it's doing its job that you should just leave it alone. Just like a car, in order for it to run well and efficiently it will require continuous care and attention. Sometimes an occasional drill will be necessary, as well as a few checkups. Never neglect a firewall! The Internet is a wild thing! If today your firewall is set up to protect your corporation from the known threats out there, tomorrow there might be a new one you're not aware of, and it will come back to bite you.
How much time you will have to allocate to care for your firewall will vary. It will depend upon the type
of firewall you have installed, the assets you’re protecting and the kind of Internet services and access
you’re providing.
Some companies rely on routers to filter unwanted traffic and connections. If that is your case, what you have is a set of rules that is not so complicated to maintain. As discussed before, with this kind of firewall you're either allowing or denying connections. In this case, I have good news and bad news for you. The good news: the amount of time you will need to spend caring for your firewall is almost none. Except for allowing new connections or denying a few more, there's nothing more you can do, other than make sure the firewall is on and the NIC cards are still alive, which in case of failure you will notice right away anyway. The bad news is that you may be preventing desired traffic from coming in, such as potential new customers, and not taking advantage of lots of Internet services and resources in Cyberspace. Make sure not to develop a bad rep for MIS!
If you are one of the Fortune 500 companies, you had better have a complete and detailed security policy. Otherwise you may be in for a ride! At the very least, you should be probing the network traffic coming to the firewall from the Internet, as well as leaving your protected network, daily. Don't be surprised if your traffic measurements hit the gigabytes! Thus, performing this probing manually is literally impossible. Your firewall must offer traffic probing, security alerts and report generation features.
Since firewalls are usually in an ideal position to gather usage statistics, as all traffic must pass through
them, you will be able to track usage of the network link at regular intervals and analyze it. This analysis can greatly help you assess network usage and performance, as well as any security threat and countermeasure.
For instance, you can analyze which protocols are delivering the best performance, which subnets are the
most accessed, and even, based on the information you collect, schedule service upgrades, bug fixes or if
necessary, discover a security hole and plug it.
If you have a packet filter firewall, you should have at least a basic understanding of the transport protocols it sees crossing the wires so you can care for it. As Alec Muffett ([email protected]) outlines well in his paper, the filter rules you will likely use typically control traffic on the basis of:
● Transport endpoints, or a notion of what's inside and what's outside the network. In the TCP/IP
world, this is usually implemented by masking off portions of the source and destination
addresses, and checking whether or not the remaining parts of the addresses refer to hosts inside
the secured network.
● Transport protocol, such as TCP, UDP, or raw IP. Other protocols may or may not be directly
supported, or it may be assumed that they are to be tunneled through the firewall.
● Protocol options. Any good firewall should also have the ability to "drop" traffic on the basis of protocol-dependent options which might compromise security if misused - for instance, the IP "source routing" option, which can be utilized in traffic forgery.
Muffett indicates that similar issues arise when trying to ensure proper handling of ICMP packets, for
example, in trying to control messages necessary for the proper operation of IP. Further, he alerts that the
most critical facility in a packet filter is the ability to match network traffic against a table of permitted
source and destination hosts (or networks), but it is also vitally important to note that the firewall's
checking must be done against both ends of a connection, and must take into account the service port
numbers at each end of the connection, otherwise the firewall may be trivially subverted.
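The following is a minimal sketch, in Python, of the kind of matching Muffett describes; the rule table is purely hypothetical, and a real packet filter works on raw packet headers inside the router or kernel rather than on values like these. Note that each rule masks the source and destination addresses and checks the service port at both ends of the connection, and that anything no rule permits is dropped.

import ipaddress

# Hypothetical rule table: (action, protocol, source net, source port, destination net, destination port).
# A port of None means "any port".
RULES = [
    ("permit", "tcp", "0.0.0.0/0",      None, "192.168.1.10/32", 25),   # inbound SMTP to the mail gateway
    ("permit", "tcp", "192.168.1.0/24", None, "0.0.0.0/0",       80),   # outbound HTTP from the inside
    ("deny",   "any", "0.0.0.0/0",      None, "0.0.0.0/0",       None), # drop everything else
]

def check(proto, src, sport, dst, dport):
    for action, r_proto, r_src, r_sport, r_dst, r_dport in RULES:
        if r_proto not in ("any", proto):
            continue
        if ipaddress.ip_address(src) not in ipaddress.ip_network(r_src):
            continue
        if ipaddress.ip_address(dst) not in ipaddress.ip_network(r_dst):
            continue
        if r_sport not in (None, sport) or r_dport not in (None, dport):
            continue
        return action
    return "deny"    # default stance: what is not explicitly permitted is dropped

print(check("tcp", "10.9.9.9", 40321, "192.168.1.10", 25))   # permit
print(check("udp", "10.9.9.9", 53,    "192.168.1.20", 53))   # deny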
Periodically, you should perform a tune-up of your firewall to:
● Ensure the firewall is still promoting the secure environment for your corporation that it was designed and implemented for,
● Optimize its operation and services, and
● Make sure all the firewall components are still functioning and interacting with each other.
By periodically performing a tune-up on your firewall you will be able to fairly evaluate the load your firewall is taking and/or is capable of bearing, and anticipate future problems or issues. By modeling its performance against a scaled number of measured loads you will be able to get a good picture of your firewall's vital signs.
The following section was based on a great paper written by Marcus J. Ranum ([email protected]), CEO of
Network Flight Recorder, Inc.
Note:
For more information on firewalls, check Marcus Ranum's personal Web site at http://www.clark.net/pub/mjr or http://www.nfr.net.
The following is a firewall tune-up procedure I recommend you follow so you can have a "health chart" of your system:
1. Monitor your firewall for a month and store all the log results. The more logs you have, the more accurate and complete the results of your firewall "physical" exam will be!
By doing this you will have a first-hand idea of the load going through your firewall, regardless of whether it is a packet or application level firewall. If the firewall is an application level firewall, this should be a breeze, as these firewalls already provide a lot of reports about the system by default. Now, if it is a packet-level firewall, like a router for example, then you will have to develop some kind of log reduction, using tools such as tcpdump (see figure 12.2) or NNstat.
2. Sort the logs by the time of day, per hour.
Notice that some hours of the day have higher peaks than others and exhibit different load characteristics. Sorting the logs at one-hour intervals makes this evident.
3. Tabulate the batch of logs by services, yielding values like:
● Number of email messages during that interval
1. Note the peak load in any one interval for each service wherever it occurs.
If you were to put all that load through the firewall in one interval's worth of time, you would have
a clear picture of the worst-case load you have yet observed.
2. Implementing tools to generate these loads from existing logs would be pretty straightforward, and they could be run at any site wishing to perform this test. Presumably the values would be different for each site, but maybe not by much. A number of expect or Perl scripts, using static file data, could simulate the load through the firewall without having to actually do the work (a rough sketch of a log-tabulating script appears after this procedure).
3. After this procedure you should have a basic "workload by interval" paradigm for your firewall in your company's environment, including peaks and a worst-case scenario. With this data on hand you will be able to tune up your firewall based on "what if" assumptions about the rate of load increase, by watching what happens to the rate of service requests between busy hours and non-busy hours.
Notice that you can count on the "workload by interval" result as sustainable by your firewall because you measured it, right? The goal here is to find out how far your firewall can go, between the doable load level and the load level at which the firewall topples.
4. Now you can write a test harness that invokes the emulators in a way that will develop the same
load model. Values that you should control are:
● Number of concurrent loads for service X
1. Run the test harness with the load configured to match the loadout at a given time that is not peak
but someplace near it.
2. Compare the run times for the near peak load with the actual measured near peak load.
Notice that they should be about the same!!
3. Run the test harness to emulate the peak load
4. Now compare the run times for the peak load with the run times for the actual measured peak load.
Again, notice that they should be about the same!!
5. Now, you have a template to fine tune your firewall, based on what happens to it when the traffic
load increases above real measured values!!
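The following is a rough sketch, in Python, of the kind of log-reduction script mentioned in the procedure above. The log format it assumes ("YYYY-MM-DD HH:MM:SS service ..." per line) and the file name are hypothetical; a real firewall log needs its own parsing, but the idea of tabulating events per hour and per service is the same.

from collections import defaultdict

counts = defaultdict(int)                 # (hour, service) -> number of events

with open("firewall.log") as log:         # hypothetical log file
    for line in log:
        fields = line.split()
        if len(fields) < 3:
            continue
        hour = fields[1][:2]              # the "HH" portion of the timestamp
        service = fields[2]               # e.g. smtp, http, ftp
        counts[(hour, service)] += 1

# Peak load per service in any one-hour interval, for the worst-case picture.
peaks = defaultdict(int)
for (hour, service), n in counts.items():
    peaks[service] = max(peaks[service], n)

for service, n in sorted(peaks.items()):
    print(service, "peak events per hour:", n)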
All of this will help your firewall to promote security by plugging the holes in the services you most need to use and making sure the information assets of your company (e-mail messages and documents, including financial data) are protected. By doing this, you will soon conclude that network security is not just a job for a firewall.
Tip:
You should subscribe to Great Circle's "Firewalls" mailing list archive at URL ftp://ftp.greatcircle.com/pub/firewalls/. This list has plenty of reference material, and past issues of the digested "Firewalls" mailing list. To subscribe, send a message to [email protected] and write in the body of the message: subscribe firewalls [email protected].
Also, check the TIS Firewall Toolkit at URL ftp://ftp.tis.com/pub/firewalls/. You will find there a free set of proxy clients and associated firewalling tools, as well as many technical papers on the subject.
available space, even on machines that have almost no users. Unfortunately, there is no automatic
way to find junk on the disk. Auditing programs, like Tripwire, will alert on new files that appear
in supposedly static areas. The main disk space problem will be firewall logs. These can and
should be rotated automatically with old logs being stored for a minimum of one year.
4. Monitor your system. By creating a habit of monitoring your system you will be able to determine several things:
● Has the firewall been under any form of attack?
● If so, what kinds of attacks are being tried against the firewall?
● Configure your system so that any activity related to security is recorded in a log report.
● If your firewall doesn't provide auditing software, install one, such as Tripwire or L5, and run it regularly to spot unexpected changes to your system (a minimal sketch of the idea appears after the checklist in the next item).
● Log your most critical events to hardcopy if at all possible, and check your logs frequently! Your logs are critical. Most of the time you won't find anything fun in there, but maybe one of these days you will find evidence that something is wrong, and you will be thankful to yourself for having coped with this ordeal of checking boring logs.
1. Be on alert for abnormal conditions of your firewall. Develop a security checklist, watching for:
● All packets that were dropped;
● Data such as time, protocol, and user name of all successful connections to or through the firewall;
● All error messages from your routers, firewalls and any proxying programs;
● Exceptions based on your normal firewall activity. Figure 12.01 outlines a basic access policy.
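As promised above, here is a minimal sketch of the idea behind auditing tools such as Tripwire: record cryptographic checksums of supposedly static files, then compare them on every subsequent run. The watched paths and the database file name are illustrative; real tools protect the checksum database itself and track far more attributes than a single hash.

import hashlib, json, os

WATCHED = ["/etc/passwd", "/etc/inetd.conf"]   # hypothetical watch list
DB = "integrity.json"                          # hypothetical baseline database

def checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

current = {p: checksum(p) for p in WATCHED if os.path.exists(p)}

if os.path.exists(DB):
    with open(DB) as f:
        baseline = json.load(f)
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print("WARNING: unexpected change to", path)
else:
    with open(DB, "w") as f:
        json.dump(current, f)                  # first run: record the baseline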
Most of the firewalls listed in chapter 14, "Types of Firewalls," produce detailed audit logs of all network traffic activity, as well as other easy-to-read management reports on network access and usage. By regularly reviewing these reports, you will become familiar with network usage patterns and will be able to recognize aberrations that even hint at trouble.
Chapter 13
Firewall Toolkits And Case Studies
The TIS Internet Firewall Toolkit
The TIS Internet Firewall Toolkit is a set of programs and configuration practices designed to facilitate
the building of network firewalls. Components of the toolkit, while designed to work together, can be
used in isolation or can be combined with other firewall components. The toolkit software is designed to
run on UNIX systems using TCP/IP with a Berkeley-style "socket" interface.
I recommend you access the TIS URL at http://www.tis.com/docs/products/fwtk/fwtkoverview.html and download the complete document from which this section was extracted. Throughout that
documentation, a distinction is made between "configuration practices" and software. A configuration
practice is a specific way of configuring existing system software, while a software component of the
toolkit is a separate program which may replace or enhance existing system software.
Therefore, when the documentation refers to the configuration practice applicable to configuring some
system daemon in a secure manner, it is assumed that the base operating system in question has existing
support for that software, and that it is capable of being configured. The exact details of how to configure
various system utilities differ from vendor implementation to vendor implementation and are outside of
the scope of this document. In general, most UNIX systems with BSD-style networking will support all
the functionality and services referred to herein.
Installing the toolkit assumes that you have practical experience with UNIX systems administration and
TCP/IP networking. At a minimum, a firewall administrator should be familiar with installing software
and maintaining a running UNIX system. Since components of the toolkit are released in source code
form, familiarity with building packages using make is required.
The toolkit does not try to provide a "turnkey" network firewall, since every installation's requirements,
network topology, available hardware, and administrative practices are different. Depending on how the
toolkit is configured, different levels of security can be achieved. The most rigorous security
configurations are not for everyone, while for others anything less will not suffice. It is the responsibility
of the firewall installer to understand the security policy of the network that is to be protected[1], to
understand what constitutes acceptable and unacceptable risks, and to rationalize them with the
requirements of the end users. Performing this analysis is the hardest task in implementing any security
system. The toolkit, unfortunately, cannot do it for you; it can only provide the components from which
to assemble a solution.
The toolkit consists of three basic components, which are all discussed in that paper:
● Design Philosophy
● Configuration Practices
● Software Tools
If you decide on using the toolkit you may use any or none of these components, as you see fit.
Good luck!
Chapter 14
Types of Firewalls and Products on the Market
This section provides you with a technical overview of the main firewall products available on the market as of the Summer of 1997. I made sure to include a vast and extensive selection of all the major players and architectures so you can have a chance to evaluate each one of them before deciding which firewall best suits your needs.
This selection includes many different firewall architectures, from application proxy and circuit relay ones, such as Raptor's EagleNT, Ukiah's NetRoad and Secure Computing's Borderware firewall, to stateful inspection and packet filter ones, such as WatchGuard Technologies' WatchGuard, Sun's SunScreen, Check Point's Firewall-1 and CYCON's Labyrinth.
Evidently, I'm not in a position to recommend any of these products, as the needs and features of a firewall product will change depending on your environment. Although I may have my preferences, they would probably be biased, directly related to the environment I work with. Thus, all the information you find in this section was provided entirely by the vendor of each firewall outlined here. Some provided more information than others, and some provided more graphics and figures. By no means should you opt for any of these firewalls based on the amount of pages or details provided here. Most of the vendors listed here also provided demo and/or evaluation copies of their products on the CD that accompanies this book.
In order to make an informed decision when selecting the firewall that best suits your needs, I strongly encourage you to carefully read this chapter and summarize in a table all the features you are looking for, or need, in a firewall for your organization. Then, I suggest you check the CD, install the firewall(s) you selected and run a complete "dry run" on them before you really make a decision. Also, don't forget to contact the vendor directly, as these products are always being upgraded and new features incorporated into them, which could make a difference in your decision. Contact information and a brief background about the vendor is provided at the beginning of every section of the product covered.
Note:
For more information, contact Check Point Software Technologies, Redwood City, CA, (415)
562-0400 or at their Web site at URL http://www.checkpoint.com
If you take packet filters, for example, historically they have been implemented on routers and filter on user-defined content, such as IP addresses. As discussed in chapter 7, "What is an Internet/Intranet Firewall After All?," packet filters examine a packet at the network layer and are application independent, which allows them to deliver good performance and scalability. However, they are the least secure type of firewall, especially when filtering services such as FTP, which was discussed at length in chapter 8, "How Vulnerable Are Internet Services." The reason is that they are not application aware; that is, they cannot understand the context of a given communication, making them easier for hackers to break. Figure 14.3 illustrates it.
If we look into FTP filtering, packet filters have two choices with regard to the outbound FTP connections. They can either
leave the entire upper range (greater than 1023) of ports open which allows the file transfer session to take place over the
dynamically allocated port, but exposes the internal network, or they can shut down the entire upper range of ports to secure
the internal network which blocks other services, as shown on figure 14.4. This trade-off between application support and
security is not acceptable to users today.
With application gateways, as shown on figure 14.5, security is improved by examining all application layers, bringing context information into the decision process. However, they do this by breaking the client/server model. Every client/server communication requires two connections: one from the client to the firewall and one from the firewall to the server. In addition, each proxy requires a different application process, or daemon, making scalability and support for new applications a problem.
For instance, when using an FTP proxy, the application gateway duplicates the number of sessions, acting as a proxied broker between the client and the server (see figure 14.6). Although this approach overcomes the limitation of IP filtering by bringing application-layer awareness to the decision process, it does so with an unacceptable performance penalty. In addition, each service needs its own proxy, so the number of available services and their scalability is limited. Further, this approach exposes the operating system to external threats.
The Stateful Inspection introduced by Check Point overcomes the limitations of the previous two approaches by providing
full application-layer awareness without breaking the client/server model. With Stateful Inspection, the packet is intercepted
at the network layer, but then the INSPECT Engine takes over, as shown on figure 14.7. It extracts state-related information
required for the security decision from all application layers and maintains this information in dynamic state tables for
evaluating subsequent connection attempts. This provides a solution which is highly secure and offers maximum
Tip:
Check Point provides an open application programming interface (API) for third-party developers
and regularly posts INSPECT Scripts to support new applications on the Check Point Web site at
http://www.checkpoint.com.
context) and perform logical or arithmetic operations on data in any part of the packet. In addition to the operations compiled
from the security policy, the user can write his or her own expressions.
Unlike other security solutions, FireWall-1’s Stateful Inspection architecture intercepts, analyzes, and takes action on all
communications before they enter the operating system of the gateway machine, ensuring the full security and integrity of the
network. Cumulative data from the communication and application states, network configuration and security rules, are used
to generate an appropriate action, either accepting, rejecting, authenticating, or encrypting the communication. Any traffic not
explicitly allowed by the security rules is dropped by default and real-time security alerts and logs are generated, providing
the system manager with complete network status.
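As a rough illustration of the dynamic state-table idea (this is a toy sketch, not Check Point's implementation), the engine records the state of permitted outbound connections and then accepts inbound packets only when they match an existing entry; anything not explicitly allowed is dropped:

state_table = set()    # entries: (protocol, source, source port, destination, destination port)

def outbound(proto, src, sport, dst, dport):
    # An allowed outbound connection records the shape of the expected reply.
    state_table.add((proto, dst, dport, src, sport))
    return "accept"

def inbound(proto, src, sport, dst, dport):
    if (proto, src, sport, dst, dport) in state_table:   # reply to a tracked connection
        return "accept"
    return "drop"                                        # not explicitly allowed: drop and log

outbound("tcp", "192.168.1.5", 1042, "203.0.113.7", 80)
print(inbound("tcp", "203.0.113.7", 80, "192.168.1.5", 1042))   # accept: matches the state table
print(inbound("tcp", "203.0.113.7", 80, "192.168.1.9", 1042))   # drop: no matching state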
The Stateful Inspection implementation supports hundreds of pre-defined applications, services, and protocols, more than any
other firewall vendor. Support is provided for all major Internet services, including secure Web browsers, the traditional set
of Internet applications (e.g. mail, FTP, Telnet, etc.), the entire TCP family, and connectionless protocols such as RPC and
UDP-based applications. In addition, only FireWall-1’s Stateful Inspection offers support for critical business applications
such as Oracle SQL*Net database access and emerging multimedia applications such as RealAudio, VDOLive, and Internet
Phone.
Firewall-1 Performance
The following are the major performance strengths of FireWall-1, achieved through its INSPECT Engine:
● Runs inside the operating-system kernel, which imposes negligible overhead in processing. Also, no context switching
is required, and low-latency operation is achieved.
● Uses advanced memory management techniques, such as caching and hash tables, to unify multiple object instances and to access data efficiently.
● Its generic and simple inspection mechanisms are combined with a packet inspection optimizer, which ensures optimal utilization of modern CPU and OS designs.
Systems Requirements
The FireWall-1 system requirements are the following:
● Platforms supported: Sun SPARC, HP-PA-RISC 700/800, Intel x86 or Pentium
● Operating systems: Windows NT 3.51 and 4.0, SunOS 4.1.3 and 4.1.4, Solaris 2.3, 2.4, and 2.5, HP-UX 9 and 10 and
IBM AIX
● Window systems: Windows 95, Windows NT, X/Motif and Open Look
● Disk space: 20 MB
● Memory: 16-32 MB
● Router management (optional): Cisco Systems IOS versions 9, 10, and 11; Bay Networks versions 8 and 9
● Media: CD-ROM
Note:
For more information, contact CYCON Technologies, Fairfax, VA, (703) 383-0247, or at their
Web site at URL http://www.cycon.com
CYCON Labyrinth firewall’s stateful inspection engines support all IP-based services and correctly follow TCP, UDP, ICMP, and TCP SYN/ACK traffic. Support for all major IP services includes, but is not limited to:
● Telnet
● SMTP
● FTP
● HTTP
● SSL
● NFS
● NNTP
● Archie
● Gopher
● X11
● NTP
● X500
● LDAP
● RealAudio
CYCON Labyrinth firewall offers full bi-directional network address translation. CYCON Labyrinth firewall can rewrite the
source, destination, and port addresses of a packet. Network address translation conceals internal addresses from outside
untrusted networks. Additionally, bi-directional address translation enables CYCON Labyrinth firewall to properly redirect
packets to any host in any system. Using two CYCON Labyrinth firewalls together allows proper communication between two private IP networks connected to the Internet by translating both incoming and outgoing traffic.
CYCON Labyrinth firewall can be configured to authenticate users on both inbound and outbound access. Inbound access
authentication is used to implement stronger security policies. Outbound access authentication can be used to track and log
connections for internal billing or chargeback purposes. Authentication is at the user level, not at the IP address level. This
allows the user to move across networks and retain the ability to use resources regardless of their physical IP address, making
it appropriate for Dynamic Host Configuration Protocol (DHCP) address assignments.
CYCON Labyrinth firewall supports multi-level logging. In regular mode, connections are logged. In debug logging mode,
connections, packets, bytes, and actions taken are logged. Log files are written in standard UNIX syslog ASCII format and
are easily manipulated by a firewall administrator for analysis. Syslog logging allows multiple CYCON Labyrinth firewalls
to log to a single machine for greater security and ease of analysis.
CYCON Labyrinth firewall utilizes a rewritten BSD UNIX kernel incorporating optimized data structures and algorithms
designed to produce high-speed packet filtering. CYCON Labyrinth firewall implements stateful inspection and packet
modifying technology to overcome gaps found in traditional packet filtering methods. Two of its key capabilities, covered below, are:
● Redirecting Traffic
● IPSec - encryption
Traffic enters the interface "de0" from any source address destined for the host 1.1.1.1. These packets match static rules and
trigger the following dynamic rule on the outbound portion of the interface:
ipcycon de0 out proxy ip 2.2.2.2 3.3.3.3 spoofaddr 1.1.1.1
When traffic returns from host 2.2.2.2 destined for host 3.3.3.3, it will match the above dynamic rule, and the firewall will rewrite the source address to 1.1.1.1.
Redirecting Traffic
The address translation features of the CYCON Labyrinth firewall are used to redirect traffic to any host in any network. This
feature has a number of real-world uses; examples include: transparent redirection to fault-tolerant systems; diverting
scanning programs back to the attacker; and diverting an attacker to a dummy machine specifically designed to trap and log
the attacker.
Address translation is accomplished by altering the destination address of the packet to a new location, and altering the
source address of the packet to reflect the address of the CYCON Labyrinth firewall. When the reply packets are returned
from the receiving host, the address translation process is reversed, and the packets are rewritten and sent to the original
sender as though they came from the originally intended destination.
The second command alters the source address, ensuring that replies from the external Web server are sent through the CYCON Labyrinth system and back to the client.
The source address of these unwanted packets will need to be altered to complete the ruse, as follows:
ipcycon de0 out proxy ip 0.0.0.0:0.0.0.0 0.0.0.0:0.0.0.0
This spoof rule uses the 0.0.0.0:255.255.255.255 as the spoof address. This special form of the spoof address is used to
represent the source address of the original packets. The result is that the packet’s destination address is changed to the
source address, and the source address is changed to the firewall’s address. All of these address changes are reversed when a
reply packet is received.
A more complex version of this rule may be used to redirect traffic to the hacker’s network instead of redirecting traffic to the
source address used in the attack. This is accomplished by using a different mask on the spoof address. For example, to
redirect unwanted traffic to the hacker’s network, use the following spoof rule:
ipcycon de0 in spoof ip 0.0.0.0:0.0.0.0 1.1.1.0:255.255.255.0 spoofaddr
0.0.0.0:255.255.255.0
This rule will cause the destination address to be replaced with the first three octets of the hacker’s address, followed by the
last octet of the original destination. If the hacker is coming from the address 3.3.3.3 and sending packets to 1.1.1.15, then the
new destination address would be 3.3.3.15. If the hacker is sending packets to 1.1.1.32, then the new destination address would
be 3.3.3.32.
An ipcycon rule is built from the following fields, in order: command, interface, direction, action, service, source, destination, tag, and NAT address.
Such a proxy rule can alter IP packets leaving the interface "de0" from source 1.1.1.1 bound for destination 165.80.1.1 so that the
source address is rewritten to 192.80.4.3. The CYCON Labyrinth firewall has mechanisms in place for proper translations of
any reply packets.
The packet leaving the de0 interface is detected by the CYCON Labyrinth firewall as its internal rules are being processed,
and is marked for action because the packet originated from host "1.1.1.1" and is destined for the host "165.80.1.1". As
1.1.1.1 is not routable on the public network, the source address must be changed or the sender will get the error No Route to
Host.
If a rule matching the source and destination addresses is encountered, the Proxy action occurs, and the spoofaddr address
192.80.4.3 is substituted for 1.1.1.1 as the source address. The packet is modified and routed through the interface.
This is all that is necessary to route the packet out of the network, but any replies to the packets will have 192.80.4.3 as the
destination address. Replies to 192.80.4.3 will not be routed back properly into the internal network, so the CYCON
Labyrinth firewall rewrites the incoming destination address. The CYCON Labyrinth firewall remembers the original source
address and established port of the packet and rewrites packets of expected reply traffic (known as Intelligent Connection
Tracking).
When the original packet was processed and the 1.1.1.1 address rewritten, the CYCON Labyrinth firewall created a dynamic
rule and applied it to the inbound portion of the de0 interface, noting the original destination address and destination port of
the packet. When the firewall encounters traffic from 165.80.1.1 destined for 192.80.4.3 on port 3456 (in this example, a
negotiated TCP port), the CYCON Labyrinth firewall knows to replace the 192.80.4.3 destination address back to 1.1.1.1 and
route the packet to the internal network. This dynamic rule remains until the transaction is terminated and removed from
memory.
The following is a time-lapse view of how and when the packets are rewritten:
● Packet going out to its destination, before translation: source 1.1.1.1, destination 165.80.1.1
● The same packet after translation by the firewall: source 192.80.4.3, destination 165.80.1.1
● Reply packet returning from the destination: source 165.80.1.1, destination 192.80.4.3
● Reply packet after the firewall translates it back: source 165.80.1.1, destination 1.1.1.1
This concept of altering source and destination addresses can be applied to either direction (inbound and outbound) and on
any individual interface. This provides extreme flexibility for generating rules. Other examples of the applicability include
load-balancing one address among multiple servers, directing any inbound web requests to one web server on the DMZ, and
sending all SATAN packets back to the originator (causing attackers to attack themselves).
The CYCON Labyrinth firewall is intelligent. Utilizing intelligent connection tracking modules, the firewall creates dynamic
rules for each connection and thus "remembers" the correct host.
This technology enables an organization to spread connections to one web address across multiple web servers. Without the
CYCON Labyrinth firewall, an organization is forced either to expose multiple web servers directly or to use inefficient round-robin Domain Name System (DNS) techniques.
In the past, when a site using a private address space wanted to access the Internet, the only option was to acquire an IP segment from the provider and then visit each host to alter its configuration. This is both time consuming and costly. Utilizing the
Proxy feature of the CYCON Labyrinth firewall, organizations can get a single Class C address space and proxy all traffic,
creating the appearance that it is coming from the provided network. For example:
ipcycon de0 out proxy ip 172.16.1.0:255.255.255.0 0.0.0.0:0.0.0.0 spoofaddr
204.5.16.0:255.255.255.0
● Host to Network - When the CYCON Labyrinth firewall encounters a packet coming from any host destined for host
1.1.1.1, it changes the 1.1.1.1 address to 2.2.2.1. For example:
ipcycon de0 in spoof ip 0.0.0.0:0.0.0.0 1.1.1.1 spoofaddr 2.2.2.0:255.255.255.0
● Network to Network - When the CYCON Labyrinth firewall encounters a packet coming from any source destined
for any host on the 1.1.1 network, it changes the 1.1.1 network address to the 2.2.2 network address. For example:
ipcycon de0 in spoof ip 0.0.0.0:0.0.0.0 1.1.1.0:255.255.255.0 spoofaddr
2.2.2.0:255.255.255.0
● Port-based Spoofing - To add another level of complexity, the CYCON Labyrinth firewall also has the ability to
distinguish traffic based on port mappings. For example, an internal web server may be used, and all incoming traffic
for any local IP address with a destination port of 80 is remapped to the single web server, as follows:
ipcycon de0 in spoof ip 0.0.0.0:0.0.0.0 0.0.0.0:0.0.0.0 dst-eq 80 spoofaddr 1.1.1.1
The CYCON Labyrinth firewall also has the ability to spoof only destination ports and remap only the port. For example, a web server advertised at port 8080 can be remapped to the standard WWW port 80. The CYCON Labyrinth firewall
identifies any inbound traffic destined for the internal web server on the original port and rewrites the header to map to the
new destination port, as follows:
ipcycon de0 in spoof ip 0.0.0.0:0.0.0.0 1.1.1.1 dst-eq 8080 spoofaddr 1.1.1.1
spoofport 80
IPSec - Encryption
IPSec is a set of standards for Internet security to ensure open standard host-to-host, host-to-firewall, and firewall-to-firewall
connectivity. The standard includes two parts: Authentication and Encapsulation. The CYCON Labyrinth system supports
these standards as specified in RFC 1825, RFC 1826, RFC 1827, RFC 1828 and RFC 1829.
The Authentication Header (AH) provides a mechanism whereby the sender signs IP packets and the receiver verifies the
signature. This helps to prevent alteration of packets and spoofing during transit.
The Encapsulation Security Protocol (ESP) provides a mechanism whereby the sender encrypts IP packets and the receiver
decrypts the packets. This helps to preserve confidentiality and privacy and is key to implementing virtual private networks
(VPN).
IPSec Filter
The CYCON Labyrinth firewall supports IPSec as specified in RFC 1825, RFC 1826, and RFC 1827. The
CYCON Labyrinth firewall allows AH and ESP to pass through the system using security filter rules. AH is treated as an
attribute of the protocol field while ESP is treated as a separate protocol. For example, to permit AH signed packets into
interface de0, the following firewall command is used:
ipcycon de0 in permit ip-ah 128.33.0.0:255.255.0.0 115.27.0.0:255.255.0.0
The "-ah" attribute can be used on any protocol. When used, a packet must have an Authentication Header within the packet.
To permit ESP packets into interface de0, the following command is used:
ipcycon de0 in permit esp 128.33.0.0:255.255.0.0 115.27.0.0:255.255.0.0
IPSec Gateway
The CYCON Labyrinth firewall uses two versions of a special security key system to control the AH and ESP mechanisms
within the firewall. As such, the CYCON Labyrinth firewall can be configured to sign packets (AH) on behalf of the client
system, and/or check the AH signature of packets entering the network. Furthermore, the CYCON Labyrinth firewall can
encrypt and decrypt communications between hosts or networks that communicate through the CYCON Labyrinth firewall.
This is accomplished by configuring the encryption, decryption, and authentication algorithms, keys, and addresses with the
spi command.
When the CYCON Labyrinth firewall is functioning as an IPSec gateway, an additional set of attributes is available for the
ipcycon rules. These attributes are set for inbound rules when a packet is successfully authenticated or decrypted. Likewise,
these attributes force authentication and encryption when used on outbound rules. For example, a packet decrypted by the
CYCON Labyrinth firewall will match the attribute "-via_esp", so an inbound permit rule carrying that attribute accepts only packets that were decrypted by the firewall. To force encryption on communications through de0, the following command is used:
ipcycon de0 out permit ip-via_esp 10.9.0.0:255.255.0.0 129.2.0.0:255.255.0.0
Likewise, the "-via_ah" attribute may be used to match properly authenticated packets or force authentication headers to be
added to packets.
Common Use
The most common mode of operation is to support Virtual Private Networks (VPNs). In this mode, two or more LANs communicate with each other over public networks (e.g., the Internet) and maintain their security by encrypting all
communications between these networks. In this mode, the CYCON Labyrinth firewall resides between the LAN and the
public network. The system encrypts all traffic from the LAN before it is passed over the public network to another LAN.
The system also decrypts all traffic entering the LAN from the public network. As a result, the computers on the LAN do not
have to support encryption. Instead, they communicate as they would with any other system, and the CYCON Labyrinth
firewall does all the work transparently to the users.
The next common mode of operation supports access to private LANs via public networks by remote users. In this mode, the remote user will use an IP stack that supports the IPSec standard. If the user's IP address is dynamic, then third-party authentication is needed to identify the user, the IP address, and the encryption keys needed for the session. If the user's IP address is static, then a weaker authentication method could be used. Once the remote user is authenticated, all traffic in and
out of the LAN to and from the user’s address is encrypted and decrypted. This protects sensitive information from sniffer
attacks while it traverses the public network.
● Deny - denies the packet and sends an appropriate ICMP message back to the sender;
● Track - permits the packet and creates a dynamic rule to permit expected replies;
● Proxy - rewrites the source address of the packet with either the address of the firewall or a range of user-specified IP addresses; and,
● Spoof - rewrites the destination address of the packet with either the address of the firewall or a range of user-specified IP addresses.
The proxy and spoof actions can redirect packets to any host on the network or on the Internet.
CYCON Labyrinth firewall protects against network spoofing with one simple rule. The filter rule will not accept packets
originating from the external interface that contain source addresses that match any internal IP addresses. In addition, all
source routed packets or IP fragments are dropped.
CYCON Labyrinth firewall supports standard username and password authentication and 128-bit encrypted S/KEY (MD5)
authentication. Inbound and outbound authentication is performed via an embedded technology called "VISA."
The firewall administrator maintains access lists of users and groups. A user must authenticate with the authentication server
(which runs on the firewall, but optionally can run on a dedicated machine) before access is permitted. Upon successful
authentication, the "VISA" system creates a dynamic rule permitting access for the user as defined in the access lists.
Any possible access rights are predefined by the firewall administrator and can be set to expire after a predefined time has
passed. It is possible to allow only certain types of access (e.g. Web, Telnet, ftp) to one group of users while allowing a
different type of access (e.g. Archie, gopher, NFS) to another group. The "VISA" system is flexible enough to receive
authentication requests from third-party servers, such as DHCP and WINS servers.
CYCON Labyrinth firewall supports temporary and timed rules. These rules allow security policies that restrict certain protocols to specific times. For example, an organization may want to restrict outbound Web access to non-business hours or to lunch time only.
Systems Requirements
Hardware Requirements:
● Intel Pentium or Intel 486 (Pentium recommended), 100 MHz minimum for active 10 Mbps Ethernet, or 166 MHz minimum for 100 Mbps Ethernet
● 16 MB RAM minimum, 32-64 MB RAM for active Ethernet (each rule [static or dynamic] requires 128 bytes)
● 1 GB HD (IDE or EIDE) for typical sites (intensive logging requires more space and may degrade performance)
Note:
For more information, contact NetGuard Ltd, via e-mail, [email protected], or visit their Web
site at URL http://www.ntguard.com/. You can also contact their headquarters at 2445 Midway
Road, Carrollton, Texas 75006, Tel: (972) 738-6900 - Fax: (972) 738-6999.
● Number of connections
● Allows the Network Administrator to create rules which determine the conditions under which a user operates.
● Our network-1, Our Network-2, Global Network-1, Global Network-2 are networks defined in the Network Object
dialog box.
Note:
For more information, contact CyberGuard Corp, at 2101 W Cypress Creek Road, Fort
Lauderdale, FL 33309, Phone: 800.666.4273 or Phone: 954.973.5478 - Fax: 954.973.5160. You
can also contact them via e-mail at [email protected] or at the URL
http://www.cybg.com
CyberGuard Firewall technology can be utilized with your remote offices to operate secure enterprise-wide mobile security
applications, secure database applications and access controls.
CyberGuard claims to be the strongest enterprise security solution available because it is built on a secure operating system
that utilizes an extension of multi-level security called Multiple Virtual Secure Environments (MVSE), as shown on figure
14.29 above. MVSE matches data access to user privileges, preventing theft or unauthorized access to highly sensitive data
via networks at lower levels of security.
This unique capability, Multiple Virtual Secure Environments (MVSE), allows a single physical network to be divided by
security level into multiple virtual networks. Simultaneously, customers can divide their physical data servers into multiple
virtual data servers, each with a unique level of security. MVSE ensures that the data at a given level of security only travels
over networks at the same level of security. MVSE technology recognizes the need for protection of two separate corporate
assets - the data and the network. Contemporary firewalls generally protect the network but not the data traveling across it.
The CyberGuard Firewall is the only firewall to protect data at all enterprise levels.
MVSE’s capacity to create over 200 virtual networks/servers from a single network/server provides the flexibility and growth
potential that your company may need. CyberGuard’s unique Multiple Virtual Secure Environments also provides a secure,
cost-effective, multiple network implementation while extending security coverage to data traveling over the network.
Certifiable Technology
The CyberGuard Firewall Release 3 is designed by the same team that created the hardware/software CyberGuard Firewall
solution (Release 2.2) with an operating system and integrated networking software that have been evaluated at the B1 level
of trust by the National Computer Security Center (NCSC) and certified by the National Computer Security Association
(NCSA). The CyberGuard Firewall Release 2.2 was also tested by Celar in France and is the first firewall solution to
undergo ITSEC E3 evaluation in the United Kingdom.
This firewall, for Intel platform, is offered in three configurations:
● An entry-level option, supporting 50 users or fewer;
With CyberGuard Firewall 3, both Pentium and Pentium Pro processor systems (single or dual processor configurations)
come with the same high throughput, scaleability and flexibility of previous versions of CyberGuard. An easy-to-use remote
graphical user interface (GUI) manager allows system administrators to configure and manage the firewall from both remote
and local sites. Figure 14.35 shows the basic architecture design of CyberGuard.
Systems Requirements
The following are the recommended system requirements for configuring CyberGuard:
● Pentium and Pentium Pro processor systems (single and dual processor configurations)
Note:
For more information on Raptor’s Eagle family of firewalls, contact Raptor Systems, Inc., 69
Hickory Drive, Waltham, MA 02154, telephone 800-9-EAGLE-6 or 617-487-7700, Fax:
617-487-6755. You can also reach them via email at [email protected] or on the Web at URL
http://www.raptor.com/
● Authentication information
Based on this information, the Eagle makes complex security decisions. It automatically enforces service restrictions, issues
alerts via email or beeper, SNMP trap or client program, and compiles a comprehensive log on all connections, whether they
are allowed or not.
To derive only a portion of the information available to the Eagle, packet filtering firewalls must evaluate each IP packet
individually, capturing state information on-the-fly. This makes these systems particularly vulnerable to attacks that exploit packet fragmentation and reassembly operations. The Eagle’s architecture makes it invulnerable to such attacks.
Raptor defines five domains of network security to promote an integrated approach to protecting the enterprise:
● Domain 1: Internet Security - To protect networks exposed to unauthorized Internet access, as shown on figure
14.38, Raptor Systems offers the flagship Eagle firewall. Designed as the foundation on which any enterprise solution
can be built, Eagle is a flexible, application-level firewall that secures bi-directional communications through the
public network. It includes EagleConnect virtual private networking, a powerful, real-time network security
management facility with intuitive GUI, suspicious activity and alert monitoring, encryption and multiple types of
authentication and proxy software to foil IP spoofing attacks. Multiple hardware platforms are supported including Sun
Microsystems, Hewlett Packard and Windows NT on Intel and DEC Alpha platforms.
● Domain 2: Workgroup Security - Raptor Systems provides two solutions to protect sensitive data that reside at a
workgroup level, as shown on figure 14.39. The EagleLAN is a departmental firewall that integrates seamlessly with
the Eagle. If one department attempts to access another department’s data without authorization, the network
administrator will know immediately. As with the Eagle firewall, real-time alarms let administrators catch hackers in
the act. And for desktop security, EagleDesk resides on a user’s PC, behind the firewall, to provide secure
communications between the PC and any other authorized destination inside or outside the enterprise.
● Domain 3: Mobile User Security - The combination of portable PCs, telecommuters, and virtual offices opens the
door to data access anywhere in the enterprise from anywhere in the world through public and private networks. To
protect this newly-emerging mobile portion of the enterprise, Raptor Systems provides EagleMobile (see figure 14.40).
An option to the Eagle firewall, EagleMobile can be installed by a non-technical user on any portable or off-site PC for
additional password protection and encryption between their PC and an Eagle firewall.
● Domain 4: Remote Site Security - To secure communications among corporate headquarters, corporate divisions, and
branch offices (see figure 14.41), Raptor offers the EagleRemote firewall. EagleRemote includes all of the superior
security features of the flagship Eagle firewall for remote sites that must use the public network to communicate with
other enterprise "satellites." The EagleRemote is configured and monitored by the Eagle firewall. This allows the
network administrator to have complete control from one central location back at the enterprise.
● Domain 5: Integrated Enterprise Security - As shown on figure 14.42, Raptor has designed its products as a suite of
modular software components that can interact seamlessly with each other using a common management and
monitoring capability. This building-block approach to security management lets companies change and grow their
network security systems without changing their underlying security strategy. Central to this integration is Raptor’s
EagleConnect virtual private networking technology, which transparently manages the connections among network
security points within the enterprise.
Eagle’s strong, rules-based defense (see screenshot on figure 14.43) is very impressive. Packet filtering firewalls authorize
passage of IP packets on a first fit rule matching basis. As packets enter a router or filtering firewall, the device compares
each packet in turn against a set of match conditions (filters).
By default, the device accepts the first fit to these conditions to allow or deny the packet. Herein lies the problem: filtering
rules are inherently general and highly order dependent. This means that the first match triggered may allow a connection that
would be denied by a subsequent comparison. Thus, whether a packet gets into your network may depend on the way you order the rules, rather than on the rules themselves. This complexity makes misconfiguration an ever-present possibility.
Therefore, with the Eagle Firewall,
● All connections are denied unless explicitly permitted
The Eagle’s best fit approach is simpler, tougher, and easier to manage. To begin with, the Eagle denies all network traffic
except for that which is explicitly allowed. Second, the rules the Eagle applies are not order-dependent, so it always chooses
a rule specific to the connection attempt at hand. And to make sure the rule chosen is specific, the Eagle always applies
conservative best fit criteria to allow or deny a connection. And if no rule meets its best fit criterion, the Eagle denies the
connection. This approach to rule management by the Eagle firewall allows a firewall administrator to concentrate on the
creation and management of a security policy rather than on the management of the firewall itself.
● Fine grain control of direction of service, e.g. FTP put versus get.
The Eagle’s secure proxy architecture presents a virtual brick wall between your networks and the unsecured world of the
Internet. This wall protects you in two ways:
1. Only connections explicitly allowed are permitted. This greatly simplifies configuration. This in turn virtually
eliminates security breaches arising from mismanagement.
2. Your networks are not only protected but hidden from the outside world. This bars hackers from probing for
insecurities in your internal systems, and safeguards the critical information needed to mount an attack.
● SMTP
● TELNET
● GOPHER
● RealAudio
In addition to supporting commonly used applications with out-of-the-box proxies, Hawk makes it a snap to specify
additional applications.
Address Redirection
At times, you may need to allow users to access data on certain internal systems, and still conceal these systems’ identities
and addresses, as shown on figure 14.47. Examples of this could include customer information databases or commerce
servers: resources that you must both protect, and provide access to from the outside world. The Eagle can be configured to
present one or many public IP addresses which can then be mapped or redirected (on a per service basis) to systems behind
the firewall with different (and hidden) IP addresses. A common use is to map multiple public IP addresses to multiple and
different Web servers behind the firewall.
As for performance, independent lab tests performed at the National Software Testing Laboratories (NSTL) confirm the Eagle as the fastest transaction-processing engine of any firewall tested.
The Eagle’s application proxy architecture is the key to its great performance. Since the Eagle authorizes connections at the
application-level, it has access to all contextual information on each connection attempt. As a result, the Eagle only needs to
evaluate each connection once. No additional checking is needed to proxy packets securely. This delivers a big performance
advantage over other approaches.
Tip:
For more information on WebNOT, check Raptor’s URL at
http://www.raptor.com/products/webnot/webnot.htm.
● Automatic filtering of specific HTTP attacks related to buffer overruns, embedded 8-bit characters and illegal URL
formats
Systems Requirements
Raptor’s UNIX firewall is available on Sun Solaris and HP-UX. Now in its fourth generation, Eagle NT provides the same
robust security and flexibility of the award-winning UNIX variant, tightly integrated with the Microsoft Windows NT
platform.
The Eagle supports the broadest range of authentication types in the industry. Its design makes it easy to combine weak
forms of authentication (like gateway password and NT domain) and strong, single-use password schemes in a single rule.
According to Raptor, the Eagle firewall family is also the first commercially available firewall to offer full support for IPSec,
including DES, triple DES and RC2 encryption. Additional standards supported include SNMP V1 and V2 traps, and NT
Domain, TACACS+ and Radius authentication types.
Note:
For more information, contact Milkyway Networks Corp., 2650 Queensview Drive, Suite 150,
Ottawa, ON - CANADA, K2B 8H6 or via their distributor in U.S., North Eastern, 109 Danbury
Road, Office #4B, Ridgefield, CT, USA, 06877. By telephone, dial (613) 596-5549 or (800)
206-0922, Fax: (613) 596-5615 or via e-mail at [email protected] and Web site at URL
http://www.milkyway.com/
Note:
Configuring a Dual SecurIT FIREWALL
The following policy is used in this configuration:
● Inside Network users can access the Private Network transparently:
● Inside Network users can have an Inside DNS/Mail server or they can access the DNS/Mail
server on the Private Network. Similarly, Inside Network users can have an Inside News
server or they can access the News server on the Private Network.
● Private Network users will need user level authentication to access the Inside Network.
● Private Network users and Inside Network users can access the Internet transparently (or
they may need user-level authentication for going through the Outside SecurIT
FIREWALL, if so configured by the system administrator).
● Internet users will need user-level authentication to access the Private Network.
● This policy is a combination of a rule on the Inside SecurIT FIREWALL and a user-base
security policy.
● This policy combination requires that an authorized user from the outside, after having
connected to a machine on the Private Network, CANNOT start a session on that machine
to another on the Inside Network (even if the user is normally allowed to do so from within
the Private Network). Since users who have access to the Inside Network are considered
trusted, this policy should not be difficult to enforce. Otherwise, do not allow any incoming
sessions to the Inside Network.
In this configuration, all the internal users on both the Private Network and the Inside Network still enjoy transparent access, and the Inside Network is immune to man-in-the-middle attacks originating from the Internet.
The Dual SecurIT FIREWALL configuration provides the ultimate defense against
man-in-the-middle attacks to the protected sub-network and allows all users (private and sub-net)
transparent access to the Internet.
To build a secure kernel for SecurIT FIREWALL, Milkyway started with a standard UNIX kernel for the platform on which
SecurIT FIREWALL was to run (a Sun Sparc kernel and a BSDI kernel). Then the kernel was modified to remove all
non-essential functions, resulting in a kernel that only supported TCP/IP networking, hard drive access, and similar basic
functions on a restricted selection of platforms. The result is a specialized and very secure kernel but with limited
functionality.
Functionality was carefully added to support the needs of a firewall. Care was taken to ensure that all functionality that was
added was secure. The resulting SecurIT FIREWALL kernel is a very secure hardened kernel that has limited and specialized
functionality. In addition, the kernel has also been made untouchable so that it cannot be accidentally modified (and its
security compromised) by the administrator.
This limited functionality means that the SecurIT FIREWALL kernel does not support a wide range of devices; support is limited to devices essential to a firewall. As new devices are developed, they must be evaluated by Milkyway before they can be supported by the SecurIT FIREWALL kernel, and they are added only if they are essential and a secure way can be found to support them.
For this reason SecurIT FIREWALL does not support all types of network cards. In fact, support for two network cards was
not added to the kernel because the vendors of the cards could not supply Milkyway with drivers that would allow secure
support of the product.
Key Management
Key management is one of the most difficult and crucial aspects of providing a usable and trusted virtual private network.
The basic problem is how to provide all trusted users with access to up-to-date keys while keeping private keys from being
intercepted by people outside the realm of trust.
SecurIT FIREWALL uses the Entrust Public Key Infrastructure (PKI) as a mechanism for authentication and encryption
using public keys. This PKI is based on the X.509 standard for authentication and encryption.
Automated key distribution using Nortel Entrust PKI means that once identity is established, distribution of public keys is
managed automatically. Key distribution using an X.500 database and Version 3 X.509 certificates can be centrally managed
by a third-party key management service or by an in-house key management system.
Automated key distribution provides all SecurIT FIREWALLs on the VPN with easy access to up-to-date public keys for any
other SecurIT FIREWALL on the VPN.
Private Keys
SecurIT FIREWALL supports the use of private keys for data encryption and decryption, as shown on figure 14.55. Note that
while a private key system requires very little overhead, it may be difficult to keep private keys for many SecurIT
FIREWALLs up-to-date in a reliable and secure manner.
All network applications are assigned a port number. FTP uses port 21, Telnet uses port 23, and so on. There are a total of 65,535 ports. The port number is used by a computer receiving a packet to determine what application or service is required
for the packet. If there is a network service running that can receive the packet, the computer can receive information on that
port. If the network service is not running, then the computer does not receive information on that port.
A common first step to gaining access to a computer is to run a port scanning program against the computer. The port scanner
attempts to communicate with the computer using each communications port and reports back the ports that receive
information.
Knowing which ports receive information lets an intruder know what network services can be used to access the computer.
For example, if the port scanner found that the computer was accepting packets sent to port 21, this means that the computer
is capable of communicating using FTP. This allows the intruder to attempt to use an FTP program to access the computer or
to exploit known FTP weaknesses.
One of the strongest features I find in SecurIT is that it listens on all ports. Listening on all ports means that this firewall accepts communications on all 65,535 ports, which has two important consequences:
● All ports accept communications
As far as I can tell, as I write this section (August 1997), listening on all ports is unique to the SecurIT firewall. This is a very
important feature, as an effective way to protect a system from unauthorized access is to prevent an intruder from learning
anything about the system. As discussed earlier, port scanning normally provides an intruder with exploitable information
about a system. However, if all the hacker learns is that all ports are accepting communications, he/she is no further ahead.
There is nothing to distinguish one port from another. No new information is gained.
Further, any attempt to connect to any port on a SecurIT firewall is recorded by the Logging Facility. The information logged
includes the source address of the connection attempt. This information can then potentially be used to determine the source
of the attack.
In addition, the Alarm Facility of this firewall continuously analyses logging information and will raise an alarm if
compromising activity (such as port scanning) is recognized.
Buffer Overflow
A buffer overflow occurs when a program adds data to a memory buffer (holding area) faster than it can be processed. The
overflow may occur due to a mismatch in the processing rates of the producing and consuming processes, or because the
buffer is simply too small to hold all the data that must accumulate before some of it can be processed.
Software can be protected from buffer overflows through careful programming, but if a way to cause a buffer overflow is
found, the computer running the software can be compromised. If a user accesses a computer across the Internet and
intentionally causes a buffer overflow, the program that the user was running may crash but the user may remain connected
to the computer. Now, instead of accessing the computer through the controlled environment of the program, the user may
have direct unrestricted access to all of the data on the computer.
Milkyway codes the programs (for example, proxies) that run on SecurIT FIREWALL to stop buffer overflow from
occurring. Even if a buffer overflow does occur, only the proxy crashes, because the memory "box" in which the proxy runs contains the overflow. Also, when the proxy crashes, the user is disconnected because the connection depends on the proxy.
In addition, protecting the memory buffer means that the firewall keeps running and security is not compromised. If a
firewall that is not protected in this way encounters a buffer overflow the entire firewall may crash, causing a service
disruption.
Spoofing
Spoofing can occur when a packet is made to look like it came from an internal network even though it came from an
external one. SecurIT FIREWALL eliminates spoofing by recognizing the firewall interface that specific source addresses
can connect to. If an interface receives a packet that should only be received on another interface, the packet is denied.
Sniffing
Sniffing involves observing and gathering compromising information about network traffic in a passive way. This can be
done by any node on a non-switched Ethernet. On non-broadcast media (for example, ATM, T1, 56k, ISDN) an intruder
would either have to be in the telephone switches, have physical taps, or easiest, break into any router where the data travels.
SecurIT FIREWALL does not prevent people from sniffing the external network. As a matter of fact, no firewall can prevent
that! However, since the firewall keeps external people from breaking into the internal network, this effectively prevents
external people from running sniffers on the internal network.
Hijacking
Hijacking a connection involves predicting the next packet in a TCP communications session between two other parties and
replacing it with your own packet. For example, hijacking could be used by an intruder to insert a command into a Telnet
session. To hijack successfully, an intruder must either make an educated guess about the TCP sequence information, or be
able to sniff the packet.
Hijacking is a threat because the intruder can wait for users to authenticate themselves, and then the intruder can take over the
authenticated connection. Hijacking of a connection can happen no matter how strong the authentication required to start the
connection.
Since traffic on the networks protected by SecurIT FIREWALL cannot be seen, and cannot be sniffed, this firewall prevents
hijacking attacks on traffic that does not pass through the firewall. Figure 14.57 shows Milkyway’s product family at a glance
to protect against all the issues discussed in this section. Figure 14.58 shows a screenshot of Milkyway’s site at URL
http://www.milkyway.com/prod/info.html, which provides a product information matrix. I recommend that you access this page for additional information.
Systems Requirements
The following are the recommended system requirements for configuring SecurIT:
● Pentium and Pentium Pro processor systems (single and dual processor configurations)
Note:
For more information, contact WatchGuard Technologies Labs, Inc. at 316 Occidental Avenue
South, Suite 300, Seattle, WA 98104. Tel.: 206/521-8340 and Fax: 206/521-8341. Or you can
visit their Web site at URL http://www.sealabs.com
WatchGuard at a Glance
WatchGuard offers all the major approaches to firewall design, such as packet filtering, proxies, and stateful inspection, as do many of its competitors, but at a low cost and with an easy-to-use interface. It also adds features not easily found in other similar products, such as inspection of executable content (for example, Java and ActiveX) and the ability to e-mail you traceroute and finger information.
Basically, the WatchGuard System consists of the WatchGuard Firebox, a network security appliance featuring a Pentium
processor, and WatchGuard Security Management System (SMS), software that runs on Windows NT, Windows 95 and
Linux workstations.
The WatchGuard "point-and-click" approach makes it very easy to install and configure the firewall. Configuration
information is presented on a service-by-service basis, allowing you to set up security even if you don’t have extensive knowledge of your network. You only add the Internet services you wish to enable, keeping access to a minimum and security to a maximum. Also, WatchGuard’s visualization tools allow you to get a complete picture of your network security and see
overall trends and network usage patterns.
WatchGuard has the ability to automatically warn you of security-related events occurring at the firewall. It delivers these
messages by e-mail, pager, or custom script to almost any device, computer, or program that you use. It can provide detailed
logging of every firewall event or simply record events that you designate to be significant. Thus, you can test for "holes" and
see at-a-glance what visitors to your site can and cannot do.
The Firebox itself is a dedicated network security "appliance". It contains a real-time firewall operating system giving you the
ability to be up-and-running right out of the box. The firewall operating system does not allow user log-ins and only supports
encrypted connections to the Firebox from the SMS software.
As a standalone element, the security appliance is a specialized solution. As such, the WatchGuard Firebox is more reliable
than a general-purpose system modified to do the specialized work of network security.
Other advantages associated with the standalone, dedicated nature of the appliance include the following:
● It plugs into the network and is operational within minutes. As a dedicated device rather than a general-purpose
computer, it is simpler to boot up and run.
● It is managed from an ordinary desktop Windows 95 or NT PC that is used for other functions, yet it serves any network: PC, Macintosh, or a cross-platform environment.
● Its specific configuration makes it easier to verify security performance. In a general-purpose OS, a stew of network
drivers, devices, and third-party software produces unbounded and sometimes undetectable security risks.
● Its exclusive focus on security ensures that it does not degrade the router or the network server's performance.
WatchGuard is built around the basic premise that unless an external user has authorization for a specific activity, then that
external user is denied an inbound connection. The second premise is WatchGuard’s ability to enforce security even if your
network fails. It ensures that your site and the SMS software itself are not under attack by intruders. If WatchGuard suspects
that its own software has been tampered with, it shuts off access to your network before an intruder can circumvent its
protective screen.
● GroupGuard, to protect departmental systems, restrict information and packet flow and define group-level Internet
privileges
● HostGuard, to protect mission-critical servers with crucial databases
WatchGuard's Security Management System runs on standard Windows 95, Windows NT or Linux workstations that can be
connected to the WatchGuard Firebox over a LAN or directly via a serial cable connection. WatchGuard SMS software
includes all firewall setup and configuration software as well as the WatchGuard graphical user interface which is based on a
service-centric model, meaning that you add only the services that you wish to enable, keeping access to a minimum and
security to a maximum.
The WatchGuard SMS includes a powerful alarm and event notification system that serves to alert you to attempted security
attacks while automatically blocking scans. It also includes a "reverse probe" capability that traces scan attempts back to the
originating host address.
With the event notification system, network managers can choose to be notified of attempted break-ins either via email or
pager messages. They also can establish a threshold number of attempts to set off the alarm system in order to avoid being
"flooded" with messages.
The WatchGuard graphical interface, as you can see on figure 14.61, follows this same service-centric model. WatchGuard’s operating system has been "hardened," as have many other products reviewed in previous sections, which helps to eliminate security holes and ensures reliability.
The following is an itemized list of features available with WatchGuard:
● Block unwanted traffic into and out of the network
● Inspect Web traffic for dangerous mime types (i.e. Java, ActiveX, PostScript, etc.)
● Notification system alerts you to attacks and scans
● Visually depict traffic and usage
● Optional add-on modules
WatchGuard’s Firebox
As mentioned earlier in this section, WatchGuard consists of two major components, the Firebox (hardware) and the Security
Management System (software). The WatchGuard Firebox is a hardware firewall platform that runs the transparent proxies
and the dynamic stateful packet filter to control the flow of IP information.
The WatchGuard Firebox resides between your router and your trusted local network, which connects to local workstations
and servers. The Firebox also provides an interface for an optional bastion network which might contain servers (for FTP and
World Wide Web for example) you wish to be accessible from the Internet with different access policies than the machines
on your trusted local network.
The Firebox is a specially designed, properly optimized machine for running the WatchGuard firewall. It is designed to be
small, efficient and reliable, as seen on figure 14.62.
The following is an itemized list of the Firebox features:
● Real-time embedded operating system
● Tamper-proof operation
● Easy-to-understand icons
● Critical and important information organized for easy access for each Firebox
● Easy zoom in to detailed information for each individual Firebox with standard SMS tools.
WatchGuard WebBlocker
WatchGuard WebBlocker is a tool that provides tailored management control over web surfing, putting Web site access privileges fully under the control of corporate managers. Because WebBlocker is flexible, users can block Web browsing by user group and time of day.
For example, corporate managers can use WebBlocker to prevent selected departments and work groups from accessing all of
the selected site categories (see figure 14.70) during normal business hours, but allow access to categories such as sports and
leisure during lunch breaks and after 5:00pm. WebBlocker also provides users the ability to add the names of sites they wish
to permanently block or permit, as shown on figure 14.70a, in keeping with their corporate access requirements.
WatchGuard WebBlocker is based on Microsystems Software’s Cyber Patrol database. Each week, an automated update of the WebBlocker database is downloaded via a secure, encrypted Internet connection. The supported site categories cover questionable or inappropriate content.
WebBlocker set-up software vastly simplifies the creation of customized group profiles as well as other configuration tasks,
as shown on figure 14.70b. The WebBlocker set-up walks users through each step of the process and lets them map different
access privileges to different groups using simple point-and-click operations.
WatchGuard SchoolMate
As I write this section, WatchGuard SchoolMate stands as the first firewall product intended specifically for use in schools.
WatchGuard SchoolMate is an affordable system that meets all four security challenges to support productive classroom use
of the Internet. It protects students and educators from falling victim to Internet abusers of all kinds, as it plugs security holes
as soon as it's plugged into the network.
WatchGuard SchoolMate’s main components are these:
● The WatchGuard Firebox houses core firewall functions in a standalone device and plugs into a school network in
minutes. In contrast, software-based firewalls generally require two or more days for installation and can carry a
five-figure price tag. In addition, WatchGuard can serve any network: PC, Macintosh, or a cross-platform environment.
● WebBlocker software, which relies on Microsystems' CyberPatrol service, is highly regarded by K-12 educators as the
most discriminating "guidance system" for student Internet use. WebBlocker allows educators to establish times of
restricted and unrestricted use and categories of sites blocked. The site-blocking feature also allows educators to
customize these categories.
● The WatchGuard Graphical Monitor module shows real-time graphical representations of host-to-host activity on the
school network, enabling educators to see which sites students visit and what they do there. It plots connections so
educators can monitor the composition of their network traffic. The Graphical Monitor module also measures the
bandwidth being used by the school network and provides instant replay of network activity.
● The WatchGuard Historical Reports module keeps track of students’ Internet activities by providing daily, weekly or
monthly reports in an easy-to-read summary format. It produces "suspicious activities summaries" that serve as an
early warning system of potential security breaches.
Tip:
For more detail on the challenges of Internet use in schools and WatchGuard SchoolMate's role in
overcoming them, check the paper entitled "Surfing Schools: Issues and Answers regarding Students on the Internet," at http://www.watchguard.com/schoolmate.
Systems Requirements:
The following is an itemized list of the minimum requirements recommended by WatchGuard Technologies Inc. to run
WatchGuard:
● Pentium-class processor
● Minimum 16 MB RAM
AltaVista products and services are designed to integrate all levels of your working environment, from the Internet and
enterprise, to workgroup and individual use, to allow location and platform-independent computing.
To increase global awareness of the AltaVista brand and showcase AltaVista software technologies and products, the
company provides the already well-known AltaVista Search Public Service, which is the world’s most popular Internet
search engine, and other Internet services free on the World Wide Web. They also license their Internet services to major
telecommunications and media companies outside the United States, and to major Internet content providers.
Figure 14.71 shows a screenshot of the AltaVista Firewall Center Web site.
Note:
For more information, contact AltaVista Software Inc., 30 Porter Road, Littleton, MA, Tel.:
508-486-2308, Fax (508) 486-2017. Or you can visit their Web site at URL
http://www.altavista.software.digital.com
According to the vendor, this firewall is the only one that takes an active role in your security management. With its unique
intelligence, it warns you of impending danger of intrusions, is constantly looking for threats to your defined security zone,
and takes evasive action when attacks do occur. Figure 14.71a is a screenshot of Firewall 97’s main menu.
AltaVista Firewall 97 combines trusted application gateways, comprehensive logging, reporting, real-time alarms, strong
authentication, graphical user interface (GUI), and a step-by-step installation wizard all in one software package. Also,
according to my lab tests, AltaVista is by far the fastest firewall available in its class, with no compromise on security. This
demonstrates not only its high efficiency, but the tightness of its Windows NT integration.
Remote management is also offered, which allows system administrators to perform the following operations remotely:
● View/Change firewall status
When thinking about remote management of firewalls, you must be careful with the side effect of it: the establishment of a
weak link to the firewall via a serial port or Telnet session on a high port. With AltaVista Firewall, remote management is done through tunneling, using AltaVista’s Tunnel. The tunnel product provides RSA 512-bit authentication, MD5 integrity, and the strongest encryption worldwide, with RSA 128-bit (U.S.) and 56/40-bit (International) keys.
The new remote management enables system administrators to view firewall activities and allows them to quickly take
appropriate actions. Consistent with the OnSite Computing vision of AltaVista, you are able to manage the firewall from
anywhere within the Intranet or from an untrusted network.
On all supported platforms, the remote management displays the states of all services as well as various statuses and alarms.
It also allows administrators to modify the firewall status and to start/stop specific services such as FTP. Additionally, on Digital UNIX,
network administrators can maintain and manage security policies, user authentication, DNS, mail, SNMP alarms and active
monitoring of traffic. Furthermore, different levels of control can be assigned on UNIX. As an example, one Firewall
administrator can monitor the status of the firewall, while another can change some security policies.
The installation wizard provides an easy step-by-step firewall installation, including DNS configuration. Its comprehensive
graphical user interface, through which all configuration, administration, and management tasks are performed, makes management of the firewall much easier.
Another great feature is the automatic shutdown of individual services, or of the whole firewall, when the firewall is under continued or repeated attack; AltaVista Firewall for Windows NT shuts them down automatically to prevent the firewall from being compromised.
Enhanced Proxy
The firewall has an updated proxy that contains significant performance improvements based on code optimization and a caching implementation. It supports the following protocols:
● HTTP,
● HTTPS/SSL,
● Gopher, and
● FTP.
It implements the CERN/NCSA Common Log Format for enhanced reporting and integration with third party analysis tools.
As with the other proxies, per-user access restriction policies can also be combined with time limitations.
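Just to make the log format concrete, here is a minimal sketch in Python that parses one CERN/NCSA Common Log Format entry, the same format third-party analysis tools expect; the sample entry, host, and user name are made up for illustration and are not taken from AltaVista's documentation:

import re

# CERN/NCSA Common Log Format: host ident authuser [date] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_clf(line):
    """Return a dict of fields from one Common Log Format entry, or None."""
    match = CLF.match(line)
    return match.groupdict() if match else None

# Hypothetical entry written by a proxy for an outbound HTTP request
sample = ('10.0.0.5 - jsmith [12/Oct/1997:14:31:07 -0500] '
          '"GET http://www.example.com/index.html HTTP/1.0" 200 2326')
print(parse_clf(sample)["status"])   # prints: 200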
Support for a RealAudio proxy: RealAudio is an application that allows playback of audio in real time over Internet connections. Through the RealAudio proxy, managers can allow or block access by users on internal network systems with Web browsers to RealAudio services on the external network. For this proxy, system administrators can specify security policy details, time restrictions, and blacklists of hosts forbidden access (as with the FTP, Telnet, and finger proxies).
A new generic UDP proxy allows UDP-based applications, such as Internet Chat, to pass through the firewall securely. Also, with AltaVista Firewall 97, if you are a system architect you are now free to build sophisticated, distributed networks of Oracle7 or third-party data repositories across the Internet. SQL*Net establishes a connection to a database when a client or
another database server process requests a database session. The proxy is based on the Oracle Multi-Protocol Interchange
(MPI), so it inherits many of the Multi-Protocol interchange’s features.
SQL*Net firewall proxy is able to control access based on information contained in the SQL*Net connection packet. This
includes the client machine name, the destination name and the database service. The firewall also integrates the
administration of this authorization list with various authentication methods such as smartcards.
AltaVista Firewall 97 broadens security policies by offering a generic TCP relay for one-to-many and many-to-one
connections. Consequently, an instance of the generic relay such as news can have one server on the inside of the firewall
getting feeds from multiple news servers on the outside.
This generic relay is also fully transparent outbound so there will be no need to reconfigure internal systems. The
management GUI supports both one-to-many and many-to-one configurations.
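As a rough idea of what such a generic relay does, the Python sketch below accepts connections on one port and copies bytes in both directions to a single inside server; the host name and ports are hypothetical, and a production relay such as AltaVista's adds rule checks, logging, and the one-to-many/many-to-one mapping described above:

import socket
import threading

LISTEN_PORT = 8119                                    # hypothetical relay port
INSIDE_SERVER = ("news.inside.example.com", 119)      # hypothetical inside news host

def pump(src, dst):
    """Copy bytes from one socket to the other until EOF."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)                  # pass the EOF along
    except OSError:
        pass

def relay(client):
    """Open the inside connection and shuttle data both ways."""
    server = socket.create_connection(INSIDE_SERVER)
    threading.Thread(target=pump, args=(client, server)).start()
    threading.Thread(target=pump, args=(server, client)).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("", LISTEN_PORT))
listener.listen(5)
while True:
    conn, addr = listener.accept()    # a real relay would consult its rule base here
    relay(conn)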
Dual-DNS Server
Before the introduction of AltaVista Firewall 97, the recommended name server configuration was the hidden DNS setup, which hides the internal address space from the untrusted network. However, this recommendation required setting up a second name server within the Intranet, causing some management issues.
With AltaVista Firewall 97, firewalls can now be configured as Dual-DNS servers that understand which name services are
internal or external. This Dual-DNS server is fully configurable through the GUI based management.
Most Internet managers are interested in dedicated boxes for security, performance, and management reasons, correct? Well, AltaVista has been offering the capability of running a low-end server securely on the same UNIX box, minimizing any security impact through close integration between the two products. With Firewall 97, AltaVista now extends this integrated solution to Windows NT servers.
Note:
Note that the Windows NT server must be connected to the ISP through a router. Support for a
direct connection over an ISDN or a dial-up line is not yet available in this firewall but according
to the vendor, will follow in a next release.
DMZ Support
With DMZ (Demilitarized Zone) support, AltaVista Firewall 97 on UNIX offers more than a simple trusted/untrusted implementation supporting only two LAN connections. While two interfaces are often enough for an Internet-oriented firewall, many organizations need three.
Tip:
AltaVista Firewall can be expanded to handle larger, more complex environments as it supports a
large variety of platforms, including Windows NT, BSD/OS and Digital UNIX, which enable it to
easily scale from small businesses to enterprise environments.
Configuration
AltaVista Firewall software can be used with the AltaVista Tunnel product to create a virtual private network over the Internet and to pass encrypted, authenticated traffic securely through the AltaVista Firewall. Both products can run securely on the same system, with a packet filter application provided with the firewall.
Note:
For more information on AltaVista’s Tunnel product, check the URL
http://www.altavista.software.digital.com/tunnel/index.htm.
The product supports Remote Access Service (RAS) on NT for external connections. This feature is used most often in an environment where the Internet connection is via a dial-up line.
Hardware Requirements
The AltaVista Security Pack 97 contains all firewall proxies, firewall remote management, and full authentication, with no
extra costs. It consists of a complete AltaVista Firewall 97 kit and a complete AltaVista Tunnel 97 kit.
The systems requirements are:
● System: Pentium
Note:
The SQL*Net proxy does not run on Alpha platforms running Windows NT.
Note:
For more information, contact ANS at 1875 Campus Common Drive, Suite 220, Reston, VA
20191-1552. You can contact them by phone at 800-456-8267 from within the US or
+1-703-758-7700, Fax: +1-703-758-7717. You can also send an e-mail to [email protected] or visit
their Web site at URL http://www.ans.net/
ANS InterLock
ANS InterLock Firewall Service provides network access control, attempted intrusion detection/response and cost accounting
functionality to help organizations protect and manage valuable Intranet and Internet resources. One of the original
application-layer firewalls, it provides high granularity of control with a full line of application proxies for all the major TCP/IP services, as well as address remapping, file integrity monitoring, and a real-time utility to detect and prevent intrusion attempts. Detailed auditing information, cost of use/abuse controls, and accounting reports are provided for advanced
management of network resources.
As discussed throughout this book, firewalls are an important component of any organization’s network security architecture.
Good firewalls provide security controls without making Internet access prohibitively difficult for the end user. Better
firewalls improve upon those solutions by adding detailed audit trails and accounting information. State-of-the-art firewalls
offer management control over secure Internet and Intranet resources. In short, they combine access control mechanisms,
detailed logging, usage and chargeback reports, intrusion detection capabilities and graphical administrative interfaces to
provide secure, managed access network solutions. ANS InterLock service 4.0 has evolved to meet customer requirements
for this advanced level of security, accountability, and manageability. Figure 14.73 shows the layout of a multi-ANS InterLock configuration. Application proxies are provided for services including:
● News (NNTP)
● TN3270
● Gopher
● Real Audio
● X-Windows
● HTTP (Web)
● SMTP
● Generic TCP
● LPR/LPD
● SSL
● Generic UDP
● Telnet
ANS InterLock solutions can be deployed throughout an organization. Figure 14.73 above shows a multi ANS InterLock
configuration for the XYZ Corporation. XYZ uses ANS InterLock systems to manage Internet connectivity, to isolate R&D
information from unauthorized corporate users and to limit access to internal resources from Intranet-connected vendors.
As a network security and resource management tool, ANS InterLock service provides:
● Application Gateway Services Between IP Networks
● Attempted Intrusions - Real-time log monitoring system to watch the logs for a variety of attacks, including port scans, IP spoofing, and ISS or SATAN probes.
● Integrity Watcher - Utility to monitor the permissions and contents of key files on the system for potential tampering.
● SSL Forwarder - Support for SSL based forwarding (https: and snews: URLs).
● RealAudio 2.0 support - Support for Real Audio’s RA Player.
● Solaris 2.5 port - Overall system performance and stability improvements and support for UltraSparc platforms.
● HTML-based Administrator Interface - Security policy updates via Web-browser.
● Enigma Logic Support - Support for Enigma Logic DES Gold card authentication.
● Access Control Rule Base (ACRB) - Each application gateway makes queries into the ACRB to determine if a
connection request should be granted and, if so, the level of service which should be provided. ANS InterLock
administrators define the set of rules which describe an organization’s security policy. There are multiple components
to each rule. The first portion of each rule describes the situations when the rule is to be enforced. Rules which do not
match a particular situation (e.g. outside the time range) can be configured by the administrator to deny access or
simply remain inactive. The second part of each rule defines the authorizations or constraints to be enforced. Different
levels of logging (Low, Medium, High, Debug, Trace) can be associated with each rule.
❍ Access Controls Criteria
■ User or Group
■ Protocol/Port Number
■ Source and Destination Address Associated with Connection
■ Time of Day (start/stop times)
■ Days of the Week
❍ Rule Constraints
■ Direction of Connection/Data Flow
■ Authentication Required (SecurID, Enigma Logic, Unix password)
■ Audit Level (Low, Medium, High, Debug)
When making changes to the ACRB, the name of the administrator making the change and a
timestamp are associated with each rule. This feature is useful for multiple administrator
coordination and accountability. Figure 14.74 shows a sample rulebase modify screen of InterLock.
❍ Application Gateways - One of the original design goals of the ANS InterLock service was to develop
application proxies which would require user authentication. This was easy for some gateways (e.g. FTP, Telnet)
since user/password mechanisms were included in the protocol specification. For applications like SMTP or
NetNews (NNTP), the ANS InterLock system uses a concept of mapping entries to have user-level controls even
though those services are normally non-authenticated. For access to applications via Web browser, the ANS
InterLock system takes advantage of proxy and basic authentication mechanisms to require passwords for these
transactions. There were several reasons for this approach: more granular control, more detailed auditing, and chargeback reports based on user, group, and/or IP address. A typical Web transaction is traced below.
Web access is transparent to the end user. The only requirement is to make the browser aware of the ANS InterLock through
standard proxy configuration as shown in figure 14.75. What is unique about this approach is that the Web gateway on the
ANS InterLock prompts the user for name and password whenever a remote access is requested via the desktop Web
browser, as seen on figure 14.76. Most browsers cache this information for future requests. Even though each Web
transaction is separately authenticated, the user enters his/her password only one time.
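The client side of such a transaction can be sketched in a few lines of Python: the request is sent to the firewall host as an HTTP proxy, and a Proxy-Authorization header carries the basic-authentication credentials. The proxy name, port, and credentials below are invented for illustration:

import base64
import urllib.request

PROXY = "http://interlock.example.com:8080"     # hypothetical ANS InterLock host and port
USER, PASSWORD = "jdoe", "secret"              # normally prompted for by the browser

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY})
)
credentials = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()

request = urllib.request.Request("http://www.example.com/")
# A browser sends this header after the proxy's 407 challenge; it is added
# up front here only to keep the sketch short.
request.add_header("Proxy-Authorization", "Basic " + credentials)
response = opener.open(request)
print(response.status)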
Audit Levels
It is common for sites to require more detailed information on some transactions but less for others. Audit levels can be
assigned to each rule added to the ACRB. For example, medium auditing may be required for corporate users accessing the
Internet but a much higher audit level may be assigned for vendors accessing internal resources.
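A minimal sketch, with invented field names, of how a rule base of this kind might be evaluated follows; it is not ANS's implementation, but it shows how the matching criteria, the action, and the audit level hang together:

from dataclasses import dataclass

@dataclass
class Rule:
    group: str          # user or group the rule applies to
    service: str        # protocol, e.g. "http" or "telnet"
    direction: str      # "outbound" or "inbound"
    hours: range        # permitted hours of the day (start/stop times)
    action: str         # "allow" or "deny"
    audit_level: str    # "low", "medium", "high", "debug"

RULEBASE = [
    Rule("corporate", "http",   "outbound", range(0, 24), "allow", "medium"),
    Rule("vendors",   "telnet", "inbound",  range(8, 18), "allow", "high"),
]

def check(group, service, direction, hour):
    """Return (action, audit level) for the first matching rule, else deny."""
    for rule in RULEBASE:
        if (rule.group == group and rule.service == service
                and rule.direction == direction and hour in rule.hours):
            return rule.action, rule.audit_level
    return "deny", "high"          # default: deny and log loudly

print(check("corporate", "http", "outbound", 14))   # ('allow', 'medium')
print(check("vendors", "telnet", "inbound", 22))    # ('deny', 'high') - outside time range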
URL-Level Controls
Recognizing that site administrators are often concerned about the percentage of traffic going to non-business related sites,
the ANS InterLock service provides support for restricting users from going to specific URLs in the WWW gateway. Since
many Web sites today are implemented using multiple hosts with different IP addresses, this blocked site database allows
URL-level controls for pages, directories or entire sites without having to add an excessive number of rules into the ACRB.
Log Files
ANS InterLock 4.0 includes a modified version of the Unix syslog daemon. Each service generates logging information
allowing an administrator to generate usage statistics, isolate configuration problems, and determine if there have been any
attempts to obtain unauthorized access to the protected network.
Log entries contain information specific to each service including the time the action occurred, a unique process ID
associated with the connection, number of bytes sent in each direction, the type of message, the addresses of the source and
destination host, the user accessing the service, any commands entered, and an informative message describing the action
performed.
FTP logs include information on the operation performed (put versus get) and the name and size of the file being transferred.
HTTP entries contain information on URL accesses and byte transfer sizes. All user and administrative activity is logged.
Audit information can be logged to local disk and to a syslogd on a protected site host. Figure 14.77 shows a typical HTTP
log entry.
● Spoof Guard - Prevents hackers from exploiting protected site network addresses to gain entry.
● Audit Log Thresholder - Recognizes and responds to potential security attacks in real-time. Attack patterns can be
pre-loaded by ANS or created by you. Sophisticated response options include e-mail, paging, SNMP traps, scripts, and custom programs.
● Integrity Watcher Daemon - Monitors a configurable set of ANS InterLock files not ordinarily subject to change, which helps protect your network against Trojan horse attacks (the idea is sketched below).
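The core of an integrity watcher can be sketched in a few lines of Python: record cryptographic checksums of files that should never change, then recompute and compare them later. The file list and hash algorithm below are illustrative, not ANS's:

import hashlib

WATCHED = ["/etc/passwd", "/etc/inetd.conf"]      # illustrative file list

def checksum(path):
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

baseline = {path: checksum(path) for path in WATCHED}

def audit():
    """Report any watched file whose contents changed since the baseline."""
    for path, saved in baseline.items():
        if checksum(path) != saved:
            print(f"ALERT: {path} has been modified")

audit()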
Note:
For more information, contact Global Technology Associates, Inc. , 3504 Lake Lynda Drive,
Suite 160, Orlando, FL 32817. You can call 1.800.775.4GTA, or internationally,
+1.407.380.0220, Fax: +1.407.380.6080. You can also contact them via e-mail at
[email protected] or via their Web site at the URL http://www.gnatbox.com/index.html
● A Unix system, although it uses core technology from the Unix operating system.
At the heart of GNAT Box is GTA's network address translation and stateful packet inspection engine. This facility was
originally developed for GTA's premier turnkey dual wall firewall, the GFX Internet Firewall System. The stateful packet
inspection facility monitors every IP packet passing through the GNAT Box to guarantee that:
● Network address translation is performed for all packets passing through the GNAT Box.
● Only valid response packets, or packets passing through user-defined tunnels, reach hosts on the Protected or PSN (Private Service Network) networks from the External network.
This facility is tightly integrated into the GNAT Box's network layer to guarantee maximum data throughput.
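A heavily simplified sketch of those two guarantees: outbound packets get a translated source address and are remembered, and inbound packets are admitted only if they match a remembered connection or an explicitly defined tunnel. The addresses and table layout are invented, and real packet handling of course happens in the firewall's network layer, not in Python:

EXTERNAL_IP = "192.0.2.1"                  # hypothetical external interface address
TUNNELS = {(25, "tcp"): "10.0.0.5"}        # inbound port 25 mapped to an inside mail host

connections = {}       # (external port, protocol) -> inside (host, port)
next_port = 40000

def outbound(src_host, src_port, proto):
    """Translate an outbound packet's source address and remember the mapping."""
    global next_port
    ext_port = next_port
    next_port += 1
    connections[(ext_port, proto)] = (src_host, src_port)
    return EXTERNAL_IP, ext_port           # rewritten source address and port

def inbound(dst_port, proto):
    """Admit a packet only if it matches a known connection or a defined tunnel."""
    if (dst_port, proto) in connections:
        return connections[(dst_port, proto)]
    if (dst_port, proto) in TUNNELS:
        return TUNNELS[(dst_port, proto)], dst_port
    return None                            # unsolicited packet: dropped

print(outbound("10.0.0.9", 1029, "tcp"))   # ('192.0.2.1', 40000)
print(inbound(40000, "tcp"))               # ('10.0.0.9', 1029) - valid response
print(inbound(6000, "tcp"))                # None - dropped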
Standard Features
The following is an overview of the GNAT Box Firewall at a glance:
● Secure Network Address Translation (NAT)
● RealAudio/RealVideo,
● StreamWorks,
Note:
How Do You Pronounce GNAT Box?
GNAT Box is pronounced, "nat box", with the 'g' silent, like the tiny insect called a gnat. The
derivation comes from GTA's Network Address Translation.
● 8MB RAM,
Note:
Ethernet Card Notes
Network interfaces are addressed by their two or three character device identifier and a positional
number starting at zero. The first card of a specific type identified by the system will have a
positional identifier of zero, (e.g. de0). If a second card of the same type as the first is found then
it will have a positional identifier of one, (e.g. de1) and a third card will have a positional
identifier of two, (e.g. de2). Each new type of card identified in the system will begin with a base
identifier of zero. This naming scheme does not apply to cards that must be configured to specific
values listed below.
The system doesn't require a keyboard or monitor for operation; however, you'll need them for the initial configuration. Figure 14.81 shows a typical layout of a network using the GNAT Box firewall.
Note:
Considerations about ISA Cards when using GNAT Box
1. Network cards do not have to be of identical make and/or manufacturer.
2. Configure the network cards using the configuration programs supplied with the network
cards. It is very important you configure the cards correctly or you may have problems
later.
● Plug and play should be off
● Use the listed IRQ, PORT and memory address (if required)
GNAT Box configuration is simple too: four commands (well, five if you count the reboot command) get the system up and running. Figures 14.89, 14.90, 14.91, and 14.92 show a sequence of the GNAT Box console configuration interface.
Once the system is up, just use the Web browser interface to administer the system (if you need to). The GNAT Box configuration is simple yet powerful; facilities are provided for static routes, IP aliasing, logging, and inbound tunnels. According to the vendor, other features such as filtering will be offered in a later release.
For those organizations that need to allow some inbound connections, the GNAT Box offers a tunneling facility. This facility allows a service port (IP port) on the external network interface of the GNAT Box to be mapped to a port on the PSN network (with an optional third network card) or on an internal host system. Services that you might want to tunnel include e-mail, HTTP (WWW), FTP, and Telnet. Using the IP aliasing facility in conjunction with the tunneling facility, the GNAT Box can operate in a virtual hosting role.
The GNAT Box system is cost-effective. The hardware required is inexpensive, there are not many components that can fail, and there are no per-user license restrictions as found on most other systems. Figure 14.82 gives you a graphic description of the hardware requirements to run the GNAT Box firewall.
Figure 14.83 shows a basic GNAT Box firewall configuration, where the requirements are:
● Two Networks
● Protected Network
● Operational Mode
● Unsolicited packets from the External Network are rejected.
● Packets that originate on the Protected Network are allowed to pass through the GNAT Box and their reply packets are
allowed to pass back to the Protected Network.
Now, you can have a more advanced configuration for the GNAT Box (a basic installation of the GNAT Box is shown on figure 14.85). Figure 14.84 shows a typical example of such a configuration, where you have:
● Three Networks
● Operational Mode
● Packets that originate on the Protected Network are allowed to pass through the GNAT Box and their reply packets are
allowed to pass back to the Protected Network.
● Tunnel(s) are defined to allow External Network access to servers on the Private Service network (see figure 14.86).
Common servers might be web, email (see figure 14.87), and ftp.
● Users on the Protected Network have complete access to the Private Service network, as it is typical of a University or
a multi-departmental company, as seen on figure 14.88.
● The Private Service network has no access to the Protected network unless a Tunnel is defined.
Note:
For more information, contact Network-1 Software and Technology Inc., at 909 3rd Ave. 9th
Floor, New York, NY 10022. By phone at (212) 293-3068 or fax at (212) 293-3090. You can also
contact them via e-mail at [email protected] or at their Web site at URL
http://www.network-1.com.
About Firewall/Plus
FireWall/Plus is an NCSA-certified frame, packet, and application filtering network security firewall. It provides a very high degree of security between internal corporate networks, as well as control over access to and from external networks such as the Internet.
Installation and configuration of FireWall/Plus is accomplished with a minimum amount of effort using a powerful Graphical
User Interface (GUI). Using pre-defined rule bases the system can be installed in a plug-and-play manner and made available
for immediate use. Since FireWall/Plus is transparent to the network community all network applications will operate without
interruption or modification.
FireWall/Plus may be configured in a variety of methods to provide a secure firewall installation for networks. The most
common configuration is as a dual-homed gateway, as shown on figure 14.94.
In the configuration described on figure 14.94, FireWall/Plus provides total filtration services between an exterior network,
such as the Internet, and the internal network. This is the first line of defense against unwanted network attacks.
However, for sites that require systems such as Web Servers and gopher servers to be accessed from internal users and
external users, a demilitarized zone (DMZ) network configuration may be used, as shown on figure 14.95.
This DMZ configuration would require two FireWall/Plus systems to secure the systems on the inside section of the network
from both the external network and the DMZ systems.
Performance Statistics
FireWall/Plus provides real-time system and network performance statistics on system and network activities, as shown on
figure 14.98. As filters and flags are added to the system and as the traffic loads increase over time, FireWall/Plus provides
pro-active performance data so that the system may be upgraded before performance degradation occurs.
Additionally, network statistics for the trusted and untrusted sides of the firewall system provide detailed information on
connections, node access counts and other items required for proper management of traffic performance.
Technical Specifications
Firewall Type:
● Frame, packet, and application-level filtering
Protection against:
● Source routing
● RIP attacks
● ICMP attacks
● PCMAIL
● DNS access
● IP spoofing
● Broadcast storms
● ARP spoofing
● External activities
The following are the management features and services provided by Firewall/Plus:
● Configuration files are in plain-text
● Replacement code and updates take very little time (less than 1 hour)
● Filters:
● finger
● ftp
● gopher
● ICMP
● Mbone
● MIME
● NFS
● NIS
● NTP
● RPCs
● Redirect messages
● RIP
● routing protocols
● sendmail
● SMTP
● Telnet
● TFTP
● tunneling (assembly/disassembly)
● UDP
● WWW
● X11
● Xterm
● MAC addresses
● User-configurable application filters
Systems Requirements
To operate FireWall/Plus, you must have Windows NT Version 3.51 or 4.0, and NDIS 3.0 drivers for Ethernet/802.3.
The hardware requirements of FireWall/Plus are as follows:
● Intel Pentium or DEC Alpha class CPU, 133 MHz minimum clock speed
● 32MB of memory
● NDIS 3.0 compliant Ethernet/802.3 Network Interface Card/s (SMC EtherPower PCI recommended)
TIS also offers a family of Gauntlet Internet Firewall products, and Gauntlet ForceField, a first-of-its-kind product designed
to protect web servers. They have a patented RecoverKey technology that was developed to support effective, exportable, and
recoverable software and hardware cryptography solutions. This technology gives software application vendors the ability to provide strong data protection internationally, and gives end users the ability to recover their data when their encryption key is
lost, stolen, or destroyed.
TIS is internationally renowned for research in information systems security. They are actively participating in government
research contracts and internal research and development projects that advance the state of the art in trusted system
technology.
Under DARPA and National Laboratory sponsorship, TIS staff are performing innovative research in access control for O/S
and networks, cryptography (including key management), security services for Internet mail, trusted distributed file systems,
secure distributed operating systems, and integrated Fortezza support. TIS also provides trusted systems engineering and
consulting to a number of major government organizations and DoD programs. Figure 14.102 is a screenshot of TIS Web
site.
Note:
For more information, contact Trusted Information Systems, Inc., 15204 Omega Drive, Rockville,
MD 20850. Or by phone at +1 (301)527-9500 or Gauntlet Sales at (888)FIREWALL (toll free) or
+1 (301)527-9500, FAX: +1 (301)527-0482. You can also contact them via email at [email protected]
or on the Web at URL http://www.tis.com.
Note:
What about GVPN?
Virtual Private Networks (VPNs) allow privacy for all allowed network traffic between two
protected gateways through the Data Encryption Standard (DES). No level of trust between
networks is assumed. But when a trusted relationship exists between networks, the security
perimeters may be extended. Users can economically establish security-assured, high-speed,
Internet VPNs at a fraction of the operating expense of dedicated, leased-line networks. Gauntlet
Internet Firewalls come standard with software encryption; hardware encryption and Commercial
Key Recovery are available.
As an add-on feature to Gauntlet Internet Firewall, a Gauntlet Intranet Firewall allows you to place additional network
strongholds within your security perimeter, as shown on figure 14.105. You can pass authorized information quickly and
securely inside your organization. It can be easily managed locally or remotely, using the same access rules and features
provided by your Gauntlet Internet Firewall.
As far as firewall management, Gauntlet also includes:
● A secure, graphical management interface, accessible from an authorized computer on your trusted network.
● A firewall system integrity checker using cryptographic checksums to detect and report any changes in the system
software.
● "Smoke alarms" that can be configured to "go off" any time connections to unsupported services are attempted.
● An audit tool that provides audit reduction and reporting on a timely basis.
Gauntlet PC Extender
Also an add-on to your existing Gauntlet Internet Firewall, the PC Extender extends the network security perimeter from
host-to-host or from hotel room to trusted network, allowing for privacy and easy access on business travel. Figure 14.107
illustrates how it works, through its interaction with the Gauntlet Internet Firewall employing the same strong cryptography
for privacy, whether directly connected to the trusted (inside) network or dialed in. Strong authentication is required to
establish trust when the user is outside the physical security perimeter.
Note:
For more information, contact Technologic, Inc. 1000 Abernathy Road, Suite 1075, Atlanta, GA
30328. You can call 770/522-0222 or 800/615-9911, Fax: 770/522-0201. You can also contact
them via e-mail at [email protected] or on the Web at URL http://www.tlogic.com
● Web caching,
Interceptor’s Components
The following is an overview of the main components and features of Technologic's Interceptor firewall:
Internet Scanner
A significant percentage of network vulnerabilities result from the presence of bugs, holes and system configuration
weaknesses on devices attached to an organization’s network. Technologic uses Internet Scanner from Internet Security
Systems (ISS) - a powerful network scanning system - to locate these exposures. Internet Scanner identifies network security
vulnerabilities on both internal and external machines.
Internet Scanner is the first and most comprehensive network security assessment tool available to help you close the gap
between security policy and security practice. Internet Scanner provides you with an excellent view of your network’s
security exposures. The system tests for over 130 known vulnerabilities and recommends appropriate corrective action. It
also provides frequent updates with latest vulnerabilities and automatically identifies and reports these vulnerabilities.
HTTP Proxy
The HTTP proxy server handles connection requests on the HTTP port. It allows internal web browsers to access remote
HTTP and FTP servers. It also supports the relaying of SSL-encrypted connections with secure HTTP and NNTP servers.
E-Mail Proxy
All e-mail between the internal protected network and the external Internet is handled by the Interceptor host. Secure
handling of e-mail through the Interceptor host is achieved using a two-step process.
First, all SMTP connections to Interceptor are answered by the SMTP proxy program, which runs without privileges and simply receives the incoming message, checks whether it is allowed by the access policy, and, if so, hands it off to the sendmail program, which performs the final delivery. The benefit of this approach is that malicious clients never speak directly to the sendmail program and thus cannot exploit any weaknesses it may contain. Instead, they interact with a bare-bones SMTP server program small enough to be inspected and verified.
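A minimal sketch of that two-step idea in Python: a small, unprivileged front end has already received the message over SMTP, applies the access policy, and only then pipes the text to the local sendmail binary for delivery. The policy check, domain, and path below are placeholders, and a real proxy would of course speak the full SMTP dialogue itself:

import subprocess

ALLOWED_RECIPIENT_DOMAINS = {"inside.example.com"}      # placeholder policy

def policy_allows(sender, recipients):
    """Accept mail only for recipients in domains we relay for."""
    return all(r.split("@")[-1] in ALLOWED_RECIPIENT_DOMAINS for r in recipients)

def hand_off(sender, recipients, message_text):
    """Deliver an already-received message by piping it to sendmail."""
    if not policy_allows(sender, recipients):
        raise PermissionError("message rejected by access policy")
    subprocess.run(
        ["/usr/sbin/sendmail", "-f", sender] + list(recipients),
        input=message_text.encode(),
        check=True,
    )

# The front end, not sendmail, has spoken SMTP to the outside client:
hand_off("[email protected]", ["[email protected]"],
         "Subject: hello\r\n\r\nBody text\r\n")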
Systems Requirements
Interceptor Firewall requires:
● Intel-based system: Pentium 90 MHz
● 16 MB RAM
Note:
For more information, contact Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, CA
94303, telephone at 1-800-SUN-FIND or 1-972-788-3150 outside the United States. You can also
contact them via e-mail at [email protected] or at their Web site at URL
http://www.sun.com.
security, separating the networks into static safe and unsafe areas much like creating fences. The problem with this approach
is that once the fence has been breached, the network can be compromised.
Sun’s solution for this problem is a suite of products, which includes:
● SunScreen SPF, a dedicated, stealthy, network security solution, designed for the highest security needs of complex
networks; typically deployed at the gateway to a public network;
● SunScreen EFS, an encryption server software product with strong firewall/gateway functionality. It can be used to
protect all servers in the de-militarized zone (e.g. FTP, WWW, mail) and Intranet (e.g. database, HR, payroll servers);
and
● SunScreen SKIP, which provides encryption and key management capabilities, to the desktop or remote end user,
which enables PCs, workstations, and servers to achieve secure/authenticated communication.
Figure 14.113 illustrates the SunScreen product line and how it fits into your security policy.
Sun’s security implementation vision scales to enterprise needs as secure virtual private networks are deployed in volume, as shown on figure 14.114.
The SunScreen product line enables you to secure your network in an entirely new way. SunScreen SPF provides stealthy
network access control and SVPN solutions. SunScreen EFS provides similar network access control and encryption
capability, allowing corporations to lock down each of the DMZ machines, as well as all the servers within the corporate
network. This secures the whole network, not just the perimeter.
Deploying the product line will help create multiple SVPNs within both the Intranet and Internet environments. Each department, from the corporate office to finance to personnel, can have a separate secure network. Secure and authenticated communication with remote customers and employees, as well as business-to-business communication, can be accomplished via SunScreen SKIP.
This creates a network security system involving dedicated gateway-level security with SunScreen SPF; a hardened encryption server for databases, NFS, mail, Web, and other types of application machines with SunScreen EFS; and encryption-equipped end nodes with SunScreen SKIP.
This secure network solution creates one large electronic workspace, as shown on figure 14.114, where distinctions between
Intranets and the Internet become academic from a security standpoint, and all communication can be made private and
authenticated as needed.
Sun’s SKIP technology allows you to use the Internet as a conduit to your business partners and employees. According to
Sun, studies have shown that this can reduce the overall operating expenses by 23% (U.S. Computer).
Ease of administration.
Combined with a user-friendly interface and centralized control, SunScreen products allow for ease of maintenance and
management with little training and low software maintenance costs. Web-based administration also allows for flexibility in
selecting the number and placement of administration stations.
SunScreen products provide centralized and granular control of all authenticated users. SKIP authenticates remote clients for
secure communication between an enterprise’s local network and the corporate branch offices, business partners, and
nomadic users. Remote access can be granted or denied using a number of criteria such as network address or key identifier
in the case of nomadic systems. Figures 14.115 and 14.116 show an example of such a scenario.
● State-of-the-art SKIP encryption to enable secure electronic commerce and remote access for employees.
● Remote administration.
SunScreen SPF-200
The SunScreen SPF package is Sun’s strategic platform for perimeter defense, providing secure business operations over the
Internet. To ensure a high level of security, SunScreen SPF uses a stealth design to protect it from attack, and state-of-the-art
SKIP encryption to protect data going over the network. Its advanced dynamic packet filtering, coupled with Sun’s
high-speed hardware, is designed to meet the most demanding performance requirements. The SunScreen SPF solution
enables organizations to deploy a premier perimeter defense today, and accommodate business over the Internet at their own
rate in the future.
design and optimization, SunScreen SPF should run even faster. SunScreen SPF performance ensures that it can keep
up with the demands required to screen large amounts of Internet traffic.
● The stealth design - Because this design makes SunScreen SPF not addressable with an IP address, it provides two benefits. First, stealthing makes a SunScreen SPF system more secure, because potential intruders cannot address the machine running SunScreen SPF and possibly compromise it. Second, installation of SunScreen SPF into the network is easy, since the administrator can install it without changing routing tables.
● The stealth design "hardens" the OS - This factor turns the system into a dedicated SunScreen SPF system that only
runs SunScreen SPF. Hardening the OS enhances security. Since other applications do not run on the system, there is
less exposure. SunScreen SPF systems use a separate administration station that can be any SPARC machine and need
not be dedicated.
● State-of-the-art SKIP encryption technology - This encryption technology provides secure network communication
and acts as the infrastructure for electronic commerce, Extranets, and secure remote access. SKIP protects the data
being transmitted, ensures its integrity (not altered), and provides a high level of authentication.
● SunScreen SPF covers both TCP and UDP services - With regard to UDP, SunScreen SPF maintains state to improve security and performance.
● SunScreen SPF allows flexibility in logging what has passed through or been blocked by the screen - Administrators can choose what they want to monitor and be alerted to problems through pagers or alerts to network management stations.
● Network Address Translation (NAT) converts internal addresses to a different set of public addresses. This
allows for additional protection for the internal network, and also helps those sites that have not registered their IP
addresses. NAT supports both static and dynamic translation of internal addresses to public addresses. Since hackers
do not know the internal addresses of hosts, attacks are minimized.
● Administration is done through secured, remote administration stations - This enhances security and meets the needs of organizations for remote management.
SunScreen EFS
SunScreen EFS software is Sun’s strategic offering for compartmentalization, where companies deploy multiple screens to
protect various departments and sites. SunScreen SPF is the best offering for protecting the corporation from Internet attack
and for performing business over the Internet.
In contrast, SunScreen EFS was designed from the ground up to be deployed throughout an organization and protect sites and
multiple departments inside the organization. With it, organizations can implement security policy and establish secure
connections between departments, sites, or even between business partners over an Extranet.
● SunScreen EFS can be managed remotely - This feature makes it very practical to deploy numerous SunScreen EFS
servers throughout an organization and manage them centrally.
● Conversion tool to migrate from Solstice FireWall-1 - This conversion facility translates host group definitions,
network object definitions, service definitions, actions, and rules from FireWall-1 3.0 to SunScreen EFS 1.1.
● Network Address Translation (NAT) converts internal addresses to a different set of public addresses - This provides additional protection for the internal network, and also helps those sites that have not registered their IP
addresses. NAT supports both static and dynamic translation of internal addresses to public addresses. Since hackers
do not know the internal addresses of hosts, attacks are minimized.
System Requirements
SunScreen SPF-200’s stealth feature dedicates the system running the screen to just SunScreen SPF. In addition, SunScreen SPF requires a separate administration station, but that station need not be a dedicated system. In contrast, SunScreen EFS runs as a separate application on any SPARC machine.
The system requirements for the SunScreen SPF-200 Screen are:
● CPU: Ultra 1 or Ultra 2, or a SunScreen SPF-100 screen for upgrades
● Disk: 1 GB of disk
● Memory: 16 MB
The system requirements for the SunScreen SPF-200 Administration Station are:
● CPU: SPARC system or compatible
● Memory: 16 MB
company’s Intranet segments or between the internal network and the Internet. All inbound and outbound data packets are
inspected, verifying compliance with the enterprise security policy. Packets that the security policy does not permit are
immediately logged and dropped.
Client Authentication
Solstice FireWall-1 provides centralized and granular control of all users, including authenticated and unknown users. Client
Authentication permits only specified users to gain access to the internal network, or to selected services, as an additional
part of secure communications between an enterprise’s local network and corporate branch offices, business partners, and
nomadic users. Client Authentication works without modifying the application either on the client or server side.
This firewall supports four different approaches for user authentication, including Security Dynamics’ SecurID one-time
password cards. Unknown users can be granted access to specific services such as Web servers or e-mail, depending on your
corporate security policy.
This firewall can protect users from viruses and malicious programs that enter a company’s network from the Internet. This
includes viruses in executable programs, "macros" that are part of application documents, and ActiveX and Java applets. It
also uses third-party "plug-in" anti-virus and URL-filtering programs available from such vendors as Symantec, McAfee,
Trend Micro, Cheyenne, Eliashim, WEBsense, and others.
If you are operating a "server farm," Solstice FireWall-1 can optionally distribute incoming requests to the next available
server. One logical IP address can support access to all servers.
Note:
For more information, contact Secure Computing at 2675 Long Lake Road, Roseville, MN 55113.
Tel +1.612.628.2700 Fax +1.612.628.2701. Or via e-mail at [email protected] or via
the Web at http://www.securecomputing.com.
As discussed in chapter 7, "What is an Internet/Intranet Firewall After All?," firewalls come in three types: packet filters,
circuit-level gateways, and application gateways. BorderWare combines all three into one firewall server giving you the
flexibility and security you need, as seen on figure 14.119. BorderWare also supports multiple styles of authentication
including address/port based authentication and cryptographic authentication.
There is one very important feature of BorderWare that really stands out in the crowd of commercial firewalls on the market
today. BorderWare is built from the bottom up with a fail-safe design. The foundation for BorderWare was a securely
hardened kernel, and each layer of functionality that was added was first made secure. In the event that any of these services is under attack, the firewall is still not compromised: tiny firewalls inside BorderWare keep barriers around the services to prevent the spread of any compromised piece, so the rest of the firewall remains unaffected.
The following is a list of the main features found on BorderWare:
● Easy to use - It works with any PC, Mac, or UNIX Internet application and offers complete transparency to internal users. There is no need to change application software or user procedures.
● Has all you need to link to the Internet - Enables you to incorporate application servers such as Mail, News, WWW, FTP, and DNS
● Makes joining the Internet easy - It remaps and hides all internal IP addresses, allowing use of non-registered IP addresses
● Is a complete network security solution - It combines packet filtering with application-level and circuit-level
gateways.
● Provides worry-free inbound access - It permits authenticated inbound Telnet access using one-time password
"tokens".
● Is flexible - It allows the security administrator to define proxies for secure and specialized applications that require
"tunneling" through the firewall.
● Is easy to install and manage - It provides a simple graphical interface for configuration, control and set-up.
● Lets the administrator know when the system is being attacked - It incorporates security features to detect probing and initiate alarms.
● Makes audit simple and foolproof - It includes comprehensive audit capabilities and allows the security administrator
to direct log files to a remote host.
Transparency
BorderWare provides outbound application services such as Telnet, FTP, WWW, Gopher, and America Online transparently. Existing Windows-based or non-Windows-based point-and-click client software will run without modification. You can use your favorite shrink-wrapped software. There is no need to log in to the firewall; BorderWare is transparent.
Packet Filtering
All IP packets going between the internal network and the external network must pass through BorderWare. User-definable rules allow or disallow packets to be passed. The graphical user interface gives system administrators the ability to implement packet filter rules easily and accurately.
Circuit-Level Gateway
All outgoing and incoming connections are circuit-level connections. The circuit connection is made automatically and transparently. BorderWare allows you to enable a variety of these, such as outgoing Telnet, FTP, WWW, Gopher, America Online, and your own user-defined applications. Incoming circuit-level applications include Telnet and FTP. Incoming connections are only permitted with authenticated inbound access using one-time password tokens.
Applications Servers
One of the extra features of BorderWare is that it includes support for several standard application servers. These include:
Mail, News, WWW, FTP, and DNS. Each application is compartmentalized from other firewall software, so that if an
individual server is under attack, other servers/functions are not affected.
● other servers
● alarm conditions
● administrative log
● kernel messages
Log information that is sent to the FTP log area can now be sent to another internal machine running syslog. Also,
BorderWare has an alarm system that watches for network probes. The alarm system can be configured to watch for TCP or
UDP probes from either the external or internal networks. Alarms can be configured to trigger email, pop-up windows,
messages sent to a local printer, and/or halt the system.
Transparent Proxies
Traditional firewalls require either logging into the firewall system or the modification of client applications using library
routines such as "SOCKS". BorderWare permits "off-the-shelf" software such as Beame & Whiteside BW-Connect TCP/IP
package, NetManage Chameleon, SPRY AIR Series, and standard UNIX networking software to operate transparently
through the firewall. Figure 14.121 shows the many protocols for which BorderWare incorporates proxies.
Integrated Servers
BorderWare includes support for several standard applications including Mail, News, FTP, Finger, Name Server (DNS), and WWW. Each application is completely isolated from all other applications, so that attempts to compromise one server can have no effect on the others.
● External DNS: The External DNS is automatically configured with the organizational domain name and whatever additional hostname is specified for the firewall. The External DNS also automatically installs NS and wildcard MX records that point to the firewall. Additional backup MX and secondary NS records can be configured by the administrator. No internal information is available to the External DNS, and only the External DNS can communicate with the outside. Therefore, no internal naming information can be obtained by anyone on the outside. The External DNS cannot query the Internal DNS or any other DNS inside the firewall.
● Internal DNS: The Internal DNS is automatically configured with some initial information and can have additional
hosts added via the administrator interface. Other internal domains or sub-domains can be primaried, secondaried or
delegated to other internal nameservers. The ability to prime the internal DNS by downloading host and NS delegation
information from an existing DNS is available in the next major release. The information managed by the Internal
DNS is only available to internal machines. The Internal nameserver cannot receive queries from external hosts since it
cannot communicate directly with the external network. Resolution of external DNS information, both for the firewall itself and for internal queries about external names, is handled by the internal nameserver. Although the internal nameserver is unable to communicate directly with the external network, it is able to send queries and receive the responses via the External DNS; this division of labor is sketched below.
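The zone data and lookup logic in the following Python sketch are simplified stand-ins for what real BIND-style servers do, but they show why outside hosts can never learn internal names while internal hosts can still resolve everything:

INTERNAL_ZONE = {"payroll.example.com": "10.1.1.7"}     # visible inside only (hypothetical)
EXTERNAL_ZONE = {"firewall.example.com": "192.0.2.1"}   # the only outside-visible data

def external_dns(name):
    """Answers queries from the Internet; knows nothing about internal hosts."""
    return EXTERNAL_ZONE.get(name)        # may also recurse out to the Internet

def internal_dns(name):
    """Answers queries from inside; resolves unknown names via the External DNS."""
    if name in INTERNAL_ZONE:
        return INTERNAL_ZONE[name]
    return external_dns(name)             # never talks to the outside directly

print(internal_dns("payroll.example.com"))   # 10.1.1.7 - internal hosts only
print(external_dns("payroll.example.com"))   # None - internal names invisible outside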
News Server
BorderWare incorporates a secured and self-maintaining NNTP-based news server. It accepts an Internet news feed from designated external systems, usually your Internet service provider's news machine(s). The news can be read directly from BorderWare with standard PC or UNIX news reader clients. Also, the news can be fed to internal or external sites. No maintenance is required for the news server, as new newsgroups are added and old news is deleted automatically.
Web Server
BorderWare incorporates a secured HTTP server. It will respond to internal or external requests for files from a limited file
hierarchy. Internal users will be transparently proxied to other Internet WWW servers. However, external users will never be
able to access any WWW server running on the internal network.
Encryption Features
Using a DES-based electronic challenge-and-response authentication card, you can Telnet or FTP to the internal network from an external network. As soon as you request a Telnet or FTP session, you are prompted with an eight-digit challenge number; the card computes the matching response from the challenge and a shared secret. The next Telnet or FTP attempt is given a different challenge and requires a different response.
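The flow can be sketched as follows. The card encrypts the challenge with a secret it shares with the firewall and the user types back the result; HMAC-SHA1 is used in this Python sketch purely as a stand-in for the card's DES computation, so every detail is illustrative only:

import hmac
import hashlib
import secrets

SHARED_SECRET = b"card-secret"              # programmed into the card and the firewall

def new_challenge():
    """Firewall side: issue a fresh eight-digit challenge for each session."""
    return f"{secrets.randbelow(10**8):08d}"

def response(challenge, secret=SHARED_SECRET):
    """Card side: derive an eight-digit response from the challenge and the secret."""
    digest = hmac.new(secret, challenge.encode(), hashlib.sha1).hexdigest()
    return str(int(digest, 16))[-8:]        # last eight decimal digits

challenge = new_challenge()
print("Challenge:", challenge)
print("Response :", response(challenge))
# The firewall computes the same value and grants the Telnet/FTP session on a match;
# the next session gets a different challenge, so an old response cannot be replayed.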
Automatic Backups
BorderWare has a built-in mechanism for automatic nightly backups. First, it backs up your configuration files onto a floppy diskette. It also backs up all your anonymous FTP directories, WWW data, and Finger server data onto 4mm DAT tape. News data is not backed up, for the obvious reason that it would consume too much space. When upgrading your software, you simply restore your configuration from the diskette and your data files from tape in minutes. The backup is also very useful if your system crashes due to a hardware failure.
Security Features
The BorderWare Firewall Server is unique in integrating secure application servers as part of the basic system. Each server
has been designed from the ground up with security in mind. This alleviates the necessity for you to modify and harden your
own server application or machine as is required by some firewalls. Figure 14.124 gives an overview of the Secure Server
Net (SSN).
The BorderWare Firewall Server is built upon a version of UNIX that has been hardened to protect against security
violations. The operating system has been modified so that even if an attacker did gain access to the firewall through a
service s/he would be unable to affect the other application systems or gain access to your internal network.
The BorderWare Firewall Server has secure versions of most Internet services and networking tools including:
● dual Domain Name Servers (internal and external)
Note:
For more information, contact Ukiah Software, 2155 South Bascom Avenue, Suite 210, Campbell,
CA 95008, (800) 988-5424 or (800) 98-UKIAH, Fax: (408) 369-2899. Via e-Mail,
[email protected] or on the Web URL: http://www.ukiahsoft.com.
● e-mail,
● pager,
● SNMP trap,
● destinations,
● groups,
● time of day,
● applications,
● file types, and even right down to the level of individual Web pages.
The addition of three different forms of user authentication (NDS and MD4/MD5 one-time passwords) makes FireWALL a robust security solution.
The platform on which FireWALL runs can be either NetWare 4.x or IntranetWare, or Windows NT. Access from the
firewall to the Internet can be provided via a stand-alone router (such as Cisco, Bay Networks etc.) or the multi-protocol
routing (MPR) capability in NetWare itself.
High Performance
A highly efficient application implementation delivers high throughput and hence maximum performance for client
applications. With 95% throughput efficiency, FireWALL has the performance edge for Internet and high-speed intranet
connections.
● Multi-layered security that ensures maximum flexibility to meet the security threats of today and tomorrow
● Integration into directory services and network management platforms that ensures a cohesive, easy to manage system
for organizations large and small
● Extensibility of NetRoad FireWALL's policy-based architecture that allows the incorporation of other application
modules that add new facets to the platform, beyond network security.
System Requirements
The following are the requirements for FireWALL Server for NetWare:
● NetWare 4.x or IntranetWare
● 16MB RAM
● TCP/IP stack
● 32MB RAM
encompassing all universal enterprise security standards, Secure Computing has more than 4,000 customers worldwide,
ranging from small businesses to Fortune 500 companies and government agencies.
Figure 14.131 is a screenshot of Secure Computing’s Web site.
Note:
For more information, contact Secure Computing at 2675 Long Lake Road, Roseville, MN 55113.
Tel +1.612.628.2700 Fax +1.612.628.2701. Or via e-mail at [email protected] or via
the Web at http://www.securecomputing.com.
A type can include any number of files, but each file on the system belongs to only one type.
Type Enforcement is based on the security principle of least privilege: any program executing on the system is given only the
resources and privileges it needs to accomplish its task. On the Sidewinder, Type Enforcement enforces the least privilege
concept by controlling the interactions between domains and file types, where:
● Each process domain on the Sidewinder is given access to only specific file types. If a process attempts to reference a
file belonging to a type that it does not have explicit permission to access, the reference fails as though the file does not
exist.
● Applications must usually collaborate with applications in other domains in order to do their job. On a typical system,
this collaboration is done using the system's interprocess communications facility, which also opens up opportunities
for breaching security. Type Enforcement eliminates this security risk by strictly controlling any communication
between process domains. If a program in the process domain attempts to signal, or otherwise communicate with, a
domain it does not have explicit permission to access, the communication attempt will fail.
● Most applications need to call operating system functions at times, but this can enable malicious users to access the
kernel directly and compromise the system. To prevent this, Type Enforcement explicitly specifies which system
functions can be called from each domain.
● One of the greatest security risks on a typical UNIX system is system administration, because of the high level of
privileges needed to successfully manage and configure system resources. UNIX allows a user to log in as "super-user"
(root), which gives the user access to all files and applications on the system. Under Type Enforcement, there is no
super-user status. Each process domain is administered separately and is assigned its own administrative role. Each
role is assigned only the privileges needed to administer a specific process domain. For example, if a user logs in using
an account that is assigned the Web administrator role, that user cannot perform administrative tasks for mail or FTP.
Figure 14.134 illustrates how Type Enforcement controls a domain's access to files of different types. Any time a process
tries to access a file, the Type Enforcement controls determine whether the access should be granted; these controls cannot be
circumvented. In Figure 14.134, for example, a process running in Domain A is attempting to access File Type X; Type
Enforcement denies this request. A process in domain B is permitted access to File Type X and File Type Z, while the
process in domain C is granted access to File Type Y.
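The figure's domain-to-type permissions amount to a small table, and a reference check against such a table can be sketched as below; the names are made up, and Sidewinder enforces this in the operating system kernel rather than in application code:

# Domain-to-file-type permissions from the example in Figure 14.134
PERMISSIONS = {
    "domain_a": set(),                     # no access to any listed type
    "domain_b": {"type_x", "type_z"},
    "domain_c": {"type_y"},
}

def can_access(domain, file_type):
    """A reference succeeds only if the domain explicitly holds the file type."""
    return file_type in PERMISSIONS.get(domain, set())

print(can_access("domain_a", "type_x"))    # False - denied, as if the file did not exist
print(can_access("domain_b", "type_x"))    # True
print(can_access("domain_c", "type_y"))    # True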
You can see the effects of Type Enforcement by looking at an example, such as mail services (mail services are notorious for
security risks). Type Enforcement controls the mail server process by:
● Providing the mail process with access to only those files it needs to save and obtain mail.
● Permitting the mail process to communicate only with those processes it needs to transfer mail.
● Allowing the mail process to make only the system calls that are necessary for mail handling.
● Restricting mail administration capabilities to only those accounts that have been assigned the mail administration role.
Using the mail example, you can see how Type Enforcement provides restriction and containment. Even if an attacker
managed to discover and exploit a weakness in the mail server, the attacker is restricted from entering another domain. Any
resulting damage is contained within the mail domain, and applications executing in other domains are not affected. There is
no way to gain access to the root directory, for example, or to break into any other part of the system.
Remote Management
The Sidewinder's remote management capability is crucial for solving the network administration concerns of large
organizations with remote or branch offices. The ability to configure remote systems from a centralized location provides an
additional layer of information security control. By adding strong authentication and virtual private network (VPN)
capabilities to a Sidewinder, secure remote management becomes a reality.
Access Controls
The Sidewinder provides all of the basic Internet services your site needs, along with sophisticated controls that allow your
organization to easily allow or deny user access to these services.
These controls are configured in the Access Control List (ACL), a database of configurable rules. Each rule determines
whether or not a user program may open a connection to a network service proxy or a server application on the Sidewinder.
The connection request may originate from either an internal network or the Internet. When a network connection is
requested, the Sidewinder checks the ACL entries to determine whether to allow or deny the connection.
For example, your organization may want to allow all internal users to access the World Wide Web at any time, or you might
want to allow Web access by only specific users on certain internal systems at certain times of the day. You may want to
allow Internet users to access an FTP server located on the Sidewinder, or you may want to allow certain Internet users to
access an internal system situated behind the Sidewinder.
The Sidewinder's interface provides an easy way of configuring ACL entries, as shown below. When the Sidewinder is
installed, the initial ACL database contains entries that allow certain connections from the internal network to the Internet.
You can then add, modify, or delete individual Access Control List entries and configure them as necessary according to the
requirements of your organization's security policy. At any time, as shown in Figure 14.135, you can quickly change the ACL
entries to make new services available and to loosen or tighten access restrictions based on your organization's unique needs.
The ACL is extremely flexible and allows organizations to restrict connections based on the following criteria:
● Source or destination burb - A burb is a type-enforced network area used to isolate network interfaces from each
other. You can allow or deny connections based on the source burb, the destination burb, or both.
● Source or destination network object type or group - You can allow or deny connections based on a source network
object, a destination network object, or both, as shown in Figure 14.136. A source or destination object can be an IP
address, a host name, a domain name, or a subnet. In addition, you can set up network groups composed of any
combination of these objects. For example, you may want to allow Telnet access from several specific host computers
and IP addresses residing on your internal network. You can easily create a group comprising these host names and IP
addresses. Then, you can quickly create an ACL entry allowing Telnet access for this group rather than creating
separate ACL entries for each host name and IP address.
● Type of connection agent - You can configure an ACL entry to allow or deny connections based on the software
agent in the Sidewinder that is providing the connection. One type of agent is a proxy, which allows communication
through the Sidewinder without any direct contact between systems on opposite sides of the firewall. A second type of
agent is a server, which provides a service on the Sidewinder itself, such as FTP. The third type of agent is a Network
Access Server (NAS), which provides dial-up connectivity from a bank of modems.
● Type of requested network service - You can allow or deny connections based on the type of service that is being
requested. The Sidewinder provides proxies for most popular Internet services. These are pre-configured and set up to
use standard port numbers. These include AOL, FTP, Web (http), Real Audio and Telnet. In addition, you can set up
your own UDP or TCP proxy by configuring a port for a specific service. For example, you can set up a UDP proxy to
allow you to route Simple Network Management Protocol (SNMP) messages through the Sidewinder.
You can also set up rules that are unique to some network services. For example, FTP can be controlled by a rule that
allows only GET operations, thus preventing it from writing to the server. Similarly, you can control access to Web
services based on a Web site's content using Secure Computing's SmartFilter™ technology.
● User requesting the connection - For services that support authentication (such as Web and FTP), you can restrict
access based on the user requesting the connection. You can set up a rule requiring the Sidewinder to authenticate the
requester's identity before granting the connection request. You can use standard password authentication, or you can
implement strong authentication to provide tighter security. Strong authentication methods that are supported include
LOCKout DES, LOCKout FORTEZZA and the SafeWord Authentication Server, which are all premium features
available for the Sidewinder. (See the "Premium Features" section for more information.) You can also use strong
authentication provided by a Defender Security Server or an ACE/Server.
● Time and day of the connection request - You can specify the day and/or time of day when a connection is
permitted. For example, you could allow internal access to certain Internet services during the times when your site's
network traffic is lightest.
● Encryption - You can configure an ACL entry that requires the incoming connection request to be encrypted. This is a
premium feature available when you purchase the Sidewinder's IPSEC software option. (See the "Premium Features"
section for more information.) A sketch of how these ACL criteria can combine to allow or deny a connection appears
after this list.
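As a rough illustration of how these criteria combine, the following Python sketch checks a connection request against a small set of ACL-style rules. The rule fields, services, users, and times shown here are assumptions made for the example; they do not reflect the Sidewinder's actual ACL schema or configuration interface.

# Hypothetical ACL matcher illustrating the criteria listed above.
from datetime import time

RULES = [
    # Allow internal users to browse the Web during business hours.
    {"src_burb": "internal", "dst_burb": "external", "service": "http",
     "users": None, "start": time(8, 0), "end": time(18, 0), "action": "allow"},
    # Allow a named group to use Telnet at any time.
    {"src_burb": "internal", "dst_burb": "external", "service": "telnet",
     "users": {"ops-group"}, "start": None, "end": None, "action": "allow"},
]

def check_connection(src_burb, dst_burb, service, user, now):
    """Return the action of the first matching rule; deny if nothing matches."""
    for rule in RULES:
        if rule["src_burb"] != src_burb or rule["dst_burb"] != dst_burb:
            continue
        if rule["service"] != service:
            continue
        if rule["users"] is not None and user not in rule["users"]:
            continue
        if rule["start"] is not None and not (rule["start"] <= now <= rule["end"]):
            continue
        return rule["action"]
    return "deny"   # default stance: deny anything not explicitly allowed

print(check_connection("internal", "external", "http", "alice", time(10, 30)))   # allow
print(check_connection("external", "internal", "telnet", "bob", time(10, 30)))   # deny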
● attack attempts
When the Sidewinder detects one of these events, it responds based on controls set by the administrator. Since most events
are unintentional, it isn't practical to respond to every one. When a particular event is repeated during a short time interval,
however, it may indicate malicious intent that warrants action.
The Sidewinder administrator specifies when an event will trigger an alarm and when it will be ignored by setting up
thresholds. For example, the administrator might specify that five network probe attempts in one hour will trigger an alarm.
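As a rough illustration, the following Python sketch counts repeated events inside a sliding one-hour window and raises an alarm only once an administrator-style threshold is crossed. The event names and threshold values are examples chosen for the sketch, not the Sidewinder's configuration.

# Illustrative sliding-window event threshold, as described above.
from collections import defaultdict, deque

WINDOW_SECONDS = 3600                                      # one hour
THRESHOLDS = {"network_probe": 5, "failed_login": 10}      # assumed example values

_events = defaultdict(deque)   # event type -> timestamps seen within the window

def record_event(event_type, timestamp):
    """Record an event; return True if it should trigger an alarm."""
    window = _events[event_type]
    window.append(timestamp)
    # Drop anything older than the window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= THRESHOLDS.get(event_type, float("inf"))

# Five probes within an hour trip the alarm; four do not.
for t in (0, 600, 1200, 1800):
    assert record_event("network_probe", t) is False
assert record_event("network_probe", 2400) is True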
Advanced Filtering
Even after authenticating users and restricting access to network resources, an enterprise's security may still be in jeopardy if
unauthorized content is allowed to pass between connections. The Sidewinder provides what most security systems do not:
advanced filtering technology that lets an organization prevent undesirable messages from flowing between networks.
The Sidewinder contains filtering mechanisms for three major areas of vulnerability:
● electronic mail
● Web pages
● Java applets
Email filtering
An enterprise's email system can be critical to its success. On the other hand, there can be disastrous consequences if an
organization's mail system is misused. To further secure the mail system, the Sidewinder provides three kinds of mail filters:
● A binary filter blocks mail that contains binary data such as MIME (Multipurpose Internet Mail Extensions) attachments.
● A key word filter blocks mail containing words the administrator specifies.
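The key word filter can be pictured with the short Python sketch below; the blocked-word list is an example chosen for illustration and is not the Sidewinder's implementation.

# Illustrative key word mail filter; the word list is an example only.
BLOCKED_WORDS = {"confidential", "proprietary"}

def message_allowed(body: str) -> bool:
    """Return False if the message body contains any administrator-specified word."""
    lowered = body.lower()
    return not any(word in lowered for word in BLOCKED_WORDS)

print(message_allowed("Quarterly numbers attached"))        # True  - delivered
print(message_allowed("This document is CONFIDENTIAL"))     # False - blocked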
Note:
For more information, contact IBM North America, 1133 Westchester Avenue, White Plains NY
10604, telephone (520) 574 4600, or toll free (for use within the United States) 1 800 IBM
3333. You can also contact them via e-mail at [email protected] or visit their Web site at
http://www.ics.raleigh.ibm.com/
For over a decade, IBM has used the IBM Firewall to protect its own corporate networks. With access to the Internet from
internal IBM networks, IBM can be confident that with the IBM Firewall in place those internal, secure networks will stay
that way.
The IBM Firewall stops network intruders in their tracks. It combines all three leading firewall architectures (application
proxies, SOCKS circuit gateway, and filtering) in one flexible, powerful security system. It runs on an IBM RS/6000
workstation with AIX Version 4.1.5 or 4.2. And, as an e-business enhancer, it supports the IBM Network Computing
Framework for e-business.
The Java-based graphical user interface (GUI) offers an easy-to-use and safe tool for administrators. Easy-to-use because the
interface is interactive and dynamic. Safe because Java applets are installed on the administrator's workstation instead of on
the network.
Navigation through the interface itself is easy, thanks to a navigation tree that is always visible for guidance. Through this
navigation tree, administrators can easily find their way around the GUI and move from one task to another. Help with using
the GUI is available in several different forms, from context-sensitive help to immediate access to the online documentation.
Figure 14.141 shows the main panel of the GUI.
The IBM Firewall also eases your administrative tasks. The Enterprise Firewall Manager allows several firewalls to be
administered from a central location. And with administrators authorized for only specific tasks, you can maintain control
over who does what.
Greater Accessibility
Virtual private networks (VPNs) provide secure communication across the Internet. You can give remote users the same
accessibility to internal networks while protecting their communication across the Internet. Client-to-firewall VPNs allow
remote users to have private and secure communication even when the traffic travels over the Internet. These users can
change ISP-assigned IP addresses without losing access.
The IBM Firewall uses state-of-the-art technology to deliver a flexible and versatile firewall solution, with application
gateways, a Socks server, and advanced filtering capabilities. In one product, you have the choice of firewall technologies
that best suit your needs. These technologies, combined with an innovative graphical user interface and powerful
administration and management tools, make the IBM Firewall a leader in Internet security offerings.
A proxy server relays traffic on behalf of the user. The user contacts the proxy server using one of the TCP/IP applications
(Telnet or FTP). The proxy server then makes contact with the requested remote host on behalf of the user, thus controlling
access while hiding your network structure from external users. Figure 14.142 illustrates a proxy Telnet server intercepting a
request from an external user.
The IBM Firewall FTP and Telnet proxy servers can authenticate users with a variety of authentication methods, including
password verification, SecurID cards, S/Key, and SecureNet Key cards.
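To show the idea of the proxy making the onward connection on the user's behalf, here is a minimal TCP relay sketch in Python. The listening port and destination host are placeholders, and a real application proxy such as the IBM Firewall's adds the authentication and protocol awareness that this sketch omits.

# Minimal illustrative TCP relay: the client connects to the relay, and the relay
# opens the onward connection itself, so the client never talks to the remote host
# directly. Ports and destination are placeholders for illustration only.
import socket
import threading

LISTEN_PORT = 10023                      # where internal users connect (placeholder)
REMOTE = ("remote.example.com", 23)      # destination reached on the user's behalf (placeholder)

def pipe(src, dst):
    """Copy bytes from src to dst until the connection closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle(client):
    upstream = socket.create_connection(REMOTE)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)
    client.close()
    upstream.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", LISTEN_PORT))
listener.listen(5)
while True:
    conn, _addr = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()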
Use of Encryption
The IBM Firewall provides secure communication across a public network like the Internet through virtual private networks
(VPNs). A VPN is a group of one or more secure IP tunnels. When two secure networks (each protected by a firewall)
establish a VPN between them, the firewalls at each end encrypt and authenticate the traffic that passes between them.
Likewise, when a VPN is established between a remote client and a firewall, the traffic between them is encrypted and
authenticated. The exchange of data is controlled, secure, and validated.
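The encrypt-and-authenticate idea behind such a tunnel can be sketched as follows, using the third-party Python cryptography package's Fernet recipe purely as a stand-in. This is not the IBM Firewall's tunnel implementation, and the key negotiation between the two firewalls is simply assumed here.

# Illustrative encrypt-and-authenticate of tunnel payloads with a shared key.
# Fernet is used only as a convenient stand-in; real IPSec-style tunnels differ.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # in practice, negotiated or configured on both firewalls
sender = Fernet(shared_key)
receiver = Fernet(shared_key)

packet = b"payload travelling between the two protected networks"
protected = sender.encrypt(packet)            # encrypted and integrity-protected token
assert receiver.decrypt(protected) == packet  # tampering would raise InvalidToken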
SafeMail
SafeMail is an IBM mail gateway. The SafeMail function does not store mail on the gateway or run under the root user ID.
The firewall gateway name is substituted for the user's name on outgoing mail so that mail appears to be coming from the
firewall's address instead of the user's address. SafeMail supports Simple Mail Transfer Protocol (SMTP) and Multipurpose
Internet Mail Extensions (MIME).
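One plausible way to picture the address rewriting is the short sketch below; the host names are placeholders, and this is not SafeMail's actual code.

# Illustrative sender rewriting: the internal host part of the sender's address is
# replaced with the firewall gateway's name, so outgoing mail appears to originate
# at the firewall rather than the internal machine (all names are placeholders).
GATEWAY_NAME = "firewall.example.com"   # hypothetical gateway host name

def rewrite_sender(address: str) -> str:
    local_part, _sep, _internal_host = address.partition("@")
    return f"{local_part}@{GATEWAY_NAME}"

print(rewrite_sender("[email protected]"))   # [email protected]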
Strong Authentication
The IBM Firewall lets you choose from many methods for authenticating users. You can use just a password, but in certain
situations this may not be secure enough. Particularly when logging in from the non-secure network, a password could easily
be intercepted by a would-be intruder. The IBM Firewall provides a strong authentication method, the Security Dynamics
SecurID card, plus the opportunity to implement your own unique authentication method.
The method from Security Dynamics includes a user ID and a SecurID card. When you're logging in remotely, you get your
password from the SecurID card. The password changes every 60 seconds and is good for one-time use only. So, even if
someone does intercept your password over the open network, the password is not valid by the time the hacker gets it.
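The one-time-password idea - a code that changes every 60 seconds and is worthless once the interval passes - can be sketched with a generic time-based HMAC scheme like the one below. This is for illustration only; Security Dynamics' SecurID algorithm is proprietary and is not what is shown here.

# Generic time-based one-time password sketch (NOT the SecurID algorithm):
# both sides share a secret and derive a short code from the current 60-second
# time step, so an intercepted code expires almost immediately.
import hmac, hashlib, time

SHARED_SECRET = b"example-shared-secret"   # placeholder; provisioned on card and server
STEP_SECONDS = 60

def one_time_code(secret: bytes, now: float) -> str:
    step = int(now // STEP_SECONDS)
    digest = hmac.new(secret, str(step).encode(), hashlib.sha1).hexdigest()
    return str(int(digest, 16) % 1_000_000).zfill(6)   # six-digit code

def verify(secret: bytes, code: str, now: float) -> bool:
    return hmac.compare_digest(code, one_time_code(secret, now))

now = time.time()
code = one_time_code(SHARED_SECRET, now)
print(verify(SHARED_SECRET, code, now))          # True within the same 60-second step
print(verify(SHARED_SECRET, code, now + 120))    # almost certainly False - the code has expired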
You can also customize a user exit to support any other authentication mechanism. The IBM Firewall includes an application
programming interface (API) to help you define your own authentication technique.
Hardening
When you install the IBM Firewall, UNIX and TCP/IP bring with them some non-secure services, protocols, and accounts
that could create a hole in your security policy. The IBM Firewall installation process disables these applications and
non-secure UNIX accounts on the firewall machine. (This process is also known as hardening your operating system.)
Once you have completed the installation and configuration, a background program periodically checks for altered
configuration files. A message is sent to the syslog and an alarm is generated when this program detects that the protected
files were changed.
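The behavior of that background check can be pictured with the following Python sketch, which hashes a set of protected files and reports any whose digest no longer matches a stored baseline. The file list and the way the baseline is stored are assumptions made for the example, not the IBM Firewall's implementation.

# Illustrative configuration-file integrity check: compare current hashes of
# protected files against a previously recorded baseline (paths are examples).
import hashlib

PROTECTED_FILES = ["/etc/hosts", "/etc/services"]   # example paths

def file_digest(path: str) -> str:
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

def snapshot(paths):
    return {p: file_digest(p) for p in paths}

def altered_files(baseline, paths):
    """Return the files whose current digest no longer matches the baseline."""
    return [p for p in paths if file_digest(p) != baseline.get(p)]

baseline = snapshot(PROTECTED_FILES)
# ... later, run periodically; any files returned here would be logged and alarmed.
print(altered_files(baseline, PROTECTED_FILES))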
● Reporting dangerous services, known vulnerabilities, obsolete server versions, and servers or services in violation of
customized site policy
● Generating reports in HTML for easy browsing
System requirements
The following is a list of system requirements to run the IBM Firewall:
● A RISC System/6000
● 64MB of memory
Appendix A:
List of Firewall Resellers and Related
Tools
The following is a list of the main companies providing sales, VAR, and consulting services for firewalls on Web
sites and networks. The companies have expertise in different operating systems and environments.
The list is in alphabetical order. The technical information was provided by the company developing each
product, extracted courtesy of Catherine Fulmer from the URL:
http://www.access.digex.net/~bdboyle/firewall.vendor.html.
AlterNet:
AlterNet is now offering security consulting services.
Bob Stratton, UUNET Technologies, Inc.
Voice: +1 703 204 8000
Email: [email protected]
Email: [email protected]
http://www.atlantic.com
Cisco Routers
http://www.cisco.com
Cohesive Systems
Cohesive Systems provides many networking and security services and is a reseller of Trusted
Information Systems' Gauntlet.
Cohesive Systems is a leading network consulting firm that puts all the pieces together for corporate
internetworks. We partner with our clients to help them find technology solutions for their operating and
business goals. We do this by providing the expertise, products, and services to build high-performance
information systems. Although we have expertise in all areas of network computing, we are widely
recognized in the industry for Internet connectivity and security solutions.
Cohesive Systems
1510 Fashion Island Blvd, Suite 104
San Mateo, CA
Phone: 415-574-3500
Contact: Branden L. Spikes, Webmaster
Email: [email protected], [email protected]
http://www.cohesive.com
Conjungi Corporation
Conjungi Corporation is a reseller of Trusted Information Systems' Gauntlet product. Located in
Seattle, Washington, and doing business throughout the US and (increasingly) internationally.
Email: [email protected]
http://www.conjungi.com
email: [email protected]
Rio De Janeiro - RJ
Brasil
tel: +55 21 262 1168
FSA Corporation
FSA Corporation is a software company that is dedicated to providing security software for
heterogeneous UNIX networks and PCs.
http://www.fsa.ca
IConNet
IConNet is a full-service Internet provider in NYC. We sell IP service, consulting, hardware, and software
dealing specifically with the Internet.
1. Internet in a Rack (IR) - IR is a full-service solution for corporate access, which includes a Netra
server, a dedicated Sparc 5 firewall (Checkpoint/SunSoft), a Cisco router, and a T1 CSU/DSU. It
allows companies to connect to the Internet securely and quickly - setting up the system involves
plugging in 4 cables and flipping a switch.
2. Netra servers, Checkpoint Firewall-1 software, and other security products and services. We are
also a VAR for Cisco, Chipcom, Sun, and many other high-end vendors.
Email: [email protected]
INTERNET GmbH
German distributor of the BorderWare Firewall Server and also a provider of consulting for firewalls,
ISDN, Web servers, etc.
INTERNET GmbH
Am Burgacker 23
D-69488 Birkenau
Germany
Phone: +49-6201-3999-59
Fax: +49-6201-3999-99
Contact: Ingmar Schraub
Neil Costigan
Email: [email protected]
http://www.medcom.se
ph: +46.708.432224 (GSM)
fax: +46.8.219505
video: +46.8.4402255 (h.320, isdn, up to 384k)
http://www.momentum.com.au
OpenSystems, Inc.
OpenSystems Inc. is a consulting and integration firm specializing in the design and deployment of
network computing technologies for the corporate enterprise.
Our background and history as a provider of corporate computing solutions has helped us cultivate a
unique set of skills in developing secure computing environments for customers with a critical need to
protect corporate data.
This experience has enabled us to build expertise and develop a proven methodology and techniques in
performing security assessments, policy reviews, designing enterprise security architectures, and rapidly
deploying security solutions in the areas of computer, network, and Internet/Intranet security.
OpenSystems Inc. represents firewall products from Raptor Systems, Checkpoint/SunSoft and Sun
Microsystems on UNIX and Windows NT. OpenSystems Inc. provides both bundled
hardware/software/integration/training packages and custom consulting solutions.
OpenSystems Inc.
10210 NE Points Drive, Suite 110
Kirkland, WA 98033-7872
(206) 803-5000 / (206) 803-5001 FAX
E-Mail: [email protected]
http://www.opensys.com
PDC
Peripheral Devices Corp.
http://PDC
Email: [email protected]
PENTA
PENTA, Inc.
333 North Sam Houston Parkway East, Suite 680
Houston, TX 77060
Phone: (800) PENTA-79, (713) 999-0093
Fax: (713) 999-0094
PRC
PRC is a leading integrator of open systems with over 40 years of experience in delivering quality
results. We specialize in providing custom client/server solutions for your enterprise. Talk to us today
about your Internet firewall requirements and be surprised at how easy it is to operate securely.
(Enterprise Assurance is a service mark of PRC).
Jay Heiser
Product Manager
Enterprise Assurance
1500 PRC Drive
McLean, VA 22102
703.556.2991
e-mail: [email protected]
http://www.c3i.wsoc.com
Racal
Racal's world-leading combination of products and services addresses the information security needs of financial
institutions, government departments, and commercial organizations - wherever they are located.
Racal's integrated software and hardware security products each protect points of potential weakness,
building into full end-to-end security solutions tailored to meet the information security needs of your
organization.
As part of our commitment to provide "best of breed" products in a fast-changing environment, Racal is
pleased to be working with Raptor Systems in supplying and supporting the EAGLE family of firewall
software to our large customer base in the finance, government and commercial markets.
* * * UK * * *
Racal Airtech Ltd
Meadow View House
Long Crendon, Aylesbury
Buckinghamshire, United Kingdom HP18 9EQ
Phone: 01844 201800
* * * USA * * *
Racal Guardata Inc
480 Spring Park Place
Suite 900
Herndon, VA 22070
Phone: 703 471 0892
***
Sohbat Ali
http://www.gold.net
RealTech Systems
RealTech Systems Corporation is a systems integration company, located in New York City (in the Empire State
Building) and Albany, NY, serving the needs of Fortune 500 companies.
RTS is Cisco Gold authorized, an Advanced Technical Partner of Bay Networks, a Platinum Novell
reseller, and an authorized reseller of Checkpoint's FireWall-1 product. Recent clients for whom RTS has
completed Internet projects include Deloitte & Touche LLP, Hearst Magazines, and Standard
Microsystems Corporation. Visit our web site at
http://WWW.REALTECH.COM
Stew Guernsey
Haystack Labs, Inc.
Stalker supports all Sun operating systems (SunOS, Solaris, and Sun Trusted Solaris) and IBM AIX. We
have ports underway to additional platforms, including HP, and are working on several network and
router monitoring tools, as well. And our customers benefit from receiving ongoing updates to our
Misuse Detection Database.
For more information, email [email protected] or call us:
Haystack Labs, Inc.
10713 RR620N, Suite 521
Austin, TX 78726 USA
(512) 918-3555 (voice)
(512) 918-1265 (fax)
Stonesoft Corporation
Stonesoft Corporation in Finland, a FW-1 reseller.
Taivalmäki 9 FIN-02200 Espoo, Finland
phone: +358 0 4767 11- fax: +358 0 4767 1234
phone: +358 0 422 400 - fax: +358 0 422 110
email: [email protected]
TeleCommerce
TeleCommerce is a Network Systems Corp. VAR in S. California. We specialize in Virtual Private
Networks over the Internet to replace costly dedicated and leased lines.
Email: [email protected]
http://WWW.TeleCommerce.com
Phone: 805-289-0300
UNIXPAC AUSTRALIA
UNIXPAC, headquartered in Cremorne (Sydney), NSW, Australia, markets and services enterprise-wide
systems solutions for internetworking, firewall security, and data protection. Unixpac are the Australian
agents for Raptor Systems (Eagle firewall) and ISS (Internet Scanner).
Unixpac can be contacted directly at (02) 9953 8366, toll free number 1 800 022 137, or via Internet
e-mail at: [email protected]
http://www.unixpac.com.au
Zeuros Limited
Zeuros Limited in Rotherwick, Hampshire, England has been supplying the Raptor Eagle Firewall for
over 12 months into banking, telecommunications, and other major UK corporates. Primarily a facilities
management company, Zeuros has found that the provision of secure data networking, Internet protection,
and secure virtual private networking services fits easily into its portfolio.
For information, sales and support on the Raptor Eagle in the UK contact:
http://www.zeuros.co.uk
Les Carleton
Zeuros Limited
Tudor Barn, Frog Lane
Rotherwick, Hampshire, RG27 9BE
Tel:- 44 (0) 1256 760081
Fax:- 44 (0) 1256 760091
Email: [email protected]
ISS
Internet Security Scanner (ISS) is a publicly available auditing package that checks domains and nodes
for well-known vulnerabilities and generates a log so that the administrator can take corrective
measures. The publicly available version is on aql.gatech.edu in /pub/security/iss.
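As a loose illustration of what such a scanner does, the Python sketch below simply reports which well-known TCP service ports on a target accept a connection. The target address and port list are placeholders, and real ISS checks go far beyond open-port detection.

# Minimal illustrative service probe (host and ports are placeholders; real
# vulnerability scanners perform far more detailed checks).
import socket

TARGET = "192.0.2.10"                       # documentation/example address
WELL_KNOWN_PORTS = [21, 23, 25, 80, 110]    # ftp, telnet, smtp, http, pop3

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in WELL_KNOWN_PORTS:
    if port_open(TARGET, port):
        print(f"{TARGET}:{port} is accepting connections")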
SOCKS
The SOCKS package was developed by David Koblas and Ying-Da Lee. It is available by FTP from ftp.nec.com.
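To give a feel for how a SOCKS client asks the relay to open a connection on its behalf, here is a bare SOCKS4 CONNECT request in Python; the proxy address, destination, and user ID are placeholders for illustration.

# Minimal SOCKS4 CONNECT request. After a granted reply, the same socket carries
# the application traffic to the destination via the relay.
import socket
import struct

PROXY = ("socks.example.com", 1080)       # placeholder SOCKS relay
DEST_IP, DEST_PORT = "192.0.2.25", 23     # placeholder destination (e.g., Telnet)
USER_ID = b"alice"                        # placeholder identity string

sock = socket.create_connection(PROXY)
request = struct.pack("!BBH4s", 4, 1, DEST_PORT, socket.inet_aton(DEST_IP)) + USER_ID + b"\x00"
sock.sendall(request)

reply = sock.recv(8)                      # VN, CD, DSTPORT, DSTIP
if len(reply) == 8 and reply[1] == 0x5A:  # 0x5A means "request granted"
    print("SOCKS relay granted the connection")
else:
    print("SOCKS relay refused the connection")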