Squid Proxy
Squid acts as a proxy cache. It behaves like an agent that receives requests from clients, in this case
web browsers, and passes them to the specified server. When the requested objects arrive at the agent,
it stores a copy in a disk cache.
The main advantage of this becomes obvious as soon as different clients request the same objects: these
are served directly from the disk cache, much faster than obtaining them from the Internet. At the same
time, this results in less network traffic and thus saves bandwidth.
Tip
Squid covers a wide range of features, including distributing the load
over intercommunicating hierarchies of proxy servers, defining strict
access control lists for all clients accessing the proxy, and (with the
help of other applications) allowing or denying access to specific web
pages. It can also produce data about web usage patterns, for
example, statistics about the most-visited web sites.
Squid is not a generic proxy. It normally proxies only HTTP connections. It also supports the
protocols FTP, Gopher, SSL, and WAIS, but it does not support other Internet protocols, such as Real
Audio, news, or video conferencing. Because Squid only supports UDP to provide
communication between different caches, many other multimedia programs are not supported.
It is also possible to use Squid together with a firewall to secure internal networks from the outside
using a proxy cache. The firewall denies all clients access to external services except Squid. All web
connections must be established by way of the proxy.
If the firewall configuration includes a DMZ, the proxy should operate within this zone. In this case, it
is important that all computers in the DMZ send their log files to hosts inside the secure network. The
possibility of implementing a transparent proxy is covered in Section 18.3.6. “Transparent Proxy
Configuration”.
1 of 14 2/29/2012 12:37 PM
18.3. Proxy Server: Squid http://www.novell.com/documentation/suse91/suselinux-adminguide/html...
Several proxies can be configured in such a way that objects can be exchanged between them, reducing
the total system load and increasing the chances of finding an object already existing in the local
network. It is also possible to configure cache hierarchies, so a cache is able to forward object requests
to sibling caches or to a parent cache — causing it to get objects from another cache in the local
network or directly from the source.
Choosing the appropriate topology for the cache hierarchy is very important, because it is not desirable to
increase the overall traffic on the network. For a very large network, it would make sense to configure
a proxy server for every subnetwork and connect them to a parent proxy, which in turn is connected to
the proxy cache of the ISP.
All this communication is handled by ICP (Internet Cache Protocol) running on top of the UDP
protocol. Data transfers between caches are handled using HTTP (Hypertext Transfer Protocol)
based on TCP.
To find the most appropriate server from which to get the objects, one cache sends an ICP request to all
sibling proxies. These answer the requests via ICP responses with a HIT code if the object was detected
or a MISS if it was not. If multiple HIT responses were received, the proxy server decides from which
server to download, depending on factors such as which cache sent the fastest answer or which one is
closer. If no satisfactory responses are received, the request is sent to the parent cache.
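Such a hierarchy is declared in /etc/squid/squid.conf with cache_peer lines. The host names below are placeholders; 3130 is Squid's customary ICP port:

```
# Hypothetical topology: two siblings queried via ICP, one parent as fallback
cache_peer proxy1.example.com sibling 3128 3130
cache_peer proxy2.example.com sibling 3128 3130
cache_peer parent.example.com parent  3128 3130
```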
Tip
To avoid duplication of objects in different caches in the network, other
inter-cache protocols are used, such as CARP (Cache Array Routing Protocol)
or HTCP (Hyper Text Caching Protocol). The more objects maintained in
the network, the greater the possibility of finding the desired one.
Not all objects available in the network are static. There are many dynamically generated CGI pages,
visitor counters, and SSL-encrypted documents. Objects like these are not cached because they
change each time they are accessed.
The question remains as to how long all the other objects stored in the cache should stay there. To
determine this, all objects in the cache are assigned one of various possible states.
Web and proxy servers find out the status of an object by headers added to these objects, such as
“Last-Modified” or “Expires” and the corresponding date. Other headers specifying that objects must
not be cached are used as well.
Objects in the cache are normally replaced, due to a lack of free hard disk space, using algorithms such
as LRU (Least Recently Used). Basically, this means that the proxy expunges the objects that have not
been requested for the longest time.
The most important thing is to determine the maximum load the system must bear. It is, therefore,
important to pay more attention to the load peaks, because these might be more than four times the
day's average. When in doubt, it would be better to overestimate the system's requirements, because
having Squid working close to the limit of its capabilities could lead to a severe loss in the quality of the
service. The following sections point to the system factors in order of significance.
Speed plays an important role in the caching process, so this factor deserves special attention. For hard
disks, this parameter is described as random seek time, measured in milliseconds. Because the data
blocks that Squid reads from or writes to the hard disk will tend to be rather small, the seek time of the
hard disk is more important than its data throughput. For the purposes of a proxy, hard disks with high
rotation speeds are probably the better choice, because they allow the read-write head to be positioned
in the required spot much quicker. Fast SCSI hard disks nowadays have a seek time of under 4
milliseconds.
One possibility to speed up the system is to use a number of disks concurrently or to employ striping
RAID arrays.
In a small cache, the probability of a HIT (finding the requested object already located there) is small,
because the cache easily fills up and the less requested objects are replaced by newer ones. On the other
hand, if, for example, 1 GB is available for the cache and the users only surf 10 MB a day, it would take
more than one hundred days to fill the cache.
The easiest way to determine the needed cache size is to consider the maximum transfer rate of the
connection. With a 1 Mbit/s connection, the maximum transfer rate is 125 KB/s. If all this traffic
ends up in the cache, in one hour it would add up to 450 MB and, assuming that all this traffic is
generated in only eight working hours, it would reach 3.6 GB in one day. Because the connection is
normally not used to its upper volume limit, it can be assumed that the total data volume handled by the
cache is approximately 2 GB. This is why 2 GB of disk space is required in the example for Squid to
keep one day's worth of browsed data cached.
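The arithmetic above can be reproduced quickly in the shell; the figures follow the example's assumptions (a 1 Mbit/s line used at full capacity for eight working hours):

```shell
# Back-of-the-envelope cache sizing for a 1 Mbit/s line
rate_kb_s=125                                # 1 Mbit/s = 125 KB/s
per_hour_mb=$(( rate_kb_s * 3600 / 1000 ))   # 450 MB per hour at full load
per_day_mb=$(( per_hour_mb * 8 ))            # 3600 MB over eight working hours
echo "${per_hour_mb} MB/hour, ${per_day_mb} MB/day"
```

Because the line is normally not saturated, the guide settles on roughly 2 GB of disk for one day of traffic.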
18.3.3.3. RAM
The amount of memory required by Squid directly correlates to the number of objects in the cache.
Squid also stores cache object references and frequently requested objects in the main memory to speed
up retrieval of this data. Random access memory is much faster than a hard disk.
In addition to that, there is other data that Squid needs to keep in memory, such as a table with all the
IP addresses handled, an exact domain name cache, the most frequently requested objects, access
control lists, buffers, and more.
It is very important to have sufficient memory for the Squid process, because system performance is
dramatically reduced if it must be swapped to disk. The cachemgr.cgi tool can be used for the cache
memory management. This tool is introduced in Section 18.3.7.1. “cachemgr.cgi”.
18.3.3.4. CPU
Squid is not a program that requires intensive CPU usage. The load of the processor is only increased
while the contents of the cache are loaded or checked. Using a multiprocessor machine does not
increase the performance of the system. To increase efficiency, it is better to buy faster disks or add
more memory.
Squid is already preconfigured in SUSE LINUX, so you can start it easily right after installation. A
prerequisite for a smooth start is an already configured network, at least one name server, and Internet
access. Problems can arise if a dial-up connection is used with a dynamic DNS configuration. In such
cases, at least the name server should be entered explicitly, because Squid does not start if it does
not detect a DNS server in /etc/resolv.conf.
To start Squid, enter rcsquid start at the command line as root. For the initial start-up, the directory
structure must first be created in /var/cache/squid. This is done automatically by the start script
/etc/init.d/squid and can take a few seconds or even minutes. If done appears to the right in
green, Squid has been successfully loaded. Test Squid's functionality on the local system by entering
localhost as the proxy and 3128 as the port in the browser.
To allow all users to access Squid and, through it, the Internet, change the entry in the configuration file
/etc/squid/squid.conf from http_access deny all to http_access allow all. However, in
doing so, consider that Squid is made completely accessible to anyone by this action. Therefore, define
ACLs that control access to the proxy. More information about this is available in Section 18.3.5.2.
“Options for Access Controls”.
If you have made changes in the configuration file /etc/squid/squid.conf, tell Squid to load the
changed file by entering rcsquid reload. Alternatively, do a complete restart of Squid with
rcsquid restart.
Another important command is rcsquid status, which allows you to determine whether the proxy is
running. Finally, the command rcsquid stop causes Squid to shut down. This can take a while, because
Squid waits up to half a minute (shutdown_lifetime option in /etc/squid/squid.conf) before
dropping the connections to the clients and writing its data to the disk.
Terminating Squid
Terminating Squid with kill or killall can lead to the destruction of the
cache, which then must be fully removed to restart Squid.
If Squid dies after a short period of time even though it was started successfully, check whether there is
a faulty name server entry or whether the /etc/resolv.conf file is missing. The cause of a start
failure is logged by Squid in the /var/log/squid/cache.log file. If Squid should be loaded
automatically when the system boots, use the YaST runlevel editor to activate Squid for the desired
runlevels.
An uninstall of Squid does not remove the cache or the log files. To remove these, delete the
/var/cache/squid directory manually.
Setting up a local DNS server, such as BIND9, makes sense even if the server does not manage its own
domain. It then simply acts as a caching-only DNS and is also able to resolve DNS requests via the root
name servers without requiring any special configuration. If you enter the local DNS server in the
/etc/resolv.conf file with the IP address 127.0.0.1 for localhost, Squid should always find a
valid name server when it starts. For this to work, it is sufficient just to start the BIND server after
installing the corresponding package. The name server of the provider should be entered in the
configuration file /etc/named.conf under forwarders along with its IP address. However, if you
have a firewall running, you need to make sure that DNS requests can pass it.
All Squid proxy server settings are made in the /etc/squid/squid.conf file. To start Squid for the
first time, no changes are necessary in this file, but external clients are initially denied access. The
proxy is available for localhost, usually with 3128 as the port. The options are extensive
and therefore provided with ample documentation and examples in the preinstalled
/etc/squid/squid.conf file. Nearly all entries begin with a # sign (the lines are commented out) and the
relevant specifications can be found at the end of the line. The given values almost always correspond
to the default values, so removing the comment signs without changing any of the parameters actually
has little effect in most cases. It is better to leave the sample as it is and reinsert the options, along
with the modified parameters, in the line below. In this way, you can easily see the default values and
your changes.
If you have updated from an earlier Squid version, it is recommended to edit the new
/etc/squid/squid.conf and only apply the changes made in the previous file. If you try to use the old
squid.conf, you run the risk that the configuration will no longer work, because options are sometimes
modified and new changes added.
http_port 3128
This is the port on which Squid listens for client requests. The default port is 3128, but 8080 is
also common. If desired, specify several port numbers separated by blank spaces.
Here, for example, enter a parent proxy to use the proxy of your ISP. As <hostname>, enter the
name or IP address of the proxy to use and, as <type>, parent. For <proxy-port>, enter the
port number that is also set by the operator of the parent for use in the browser, usually 8080. Set
the <icp-port> to 7 or 0 if the ICP port of the parent is not known and its use is irrelevant to the
provider. In addition, default and no-query should be specified after the port numbers to prohibit
the use of the ICP protocol. Squid then behaves like a normal browser as far as the provider's
proxy is concerned.
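Put together, such an entry might look like the following; the host name is a placeholder for your provider's proxy:

```
cache_peer proxy.provider.example parent 8080 7 default no-query
```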
cache_mem 8 MB
This entry defines the amount of memory Squid can use for the caches. The default is 8 MB.
The entry cache_dir defines the directory where all the objects are stored on disk. The numbers
at the end indicate the maximum disk space in MB to use and the number of directories in the first
and second level. The ufs parameter should be left alone. The default is 100 MB occupied disk
space in the /var/cache/squid directory and creation of sixteen subdirectories inside it, each
containing 256 more subdirectories. When specifying the disk space to use, leave sufficient
reserve disk space. Values from a minimum of fifty to a maximum of eighty percent of the
available disk space make the most sense here. The last two numbers for the directories should
only be increased with caution, because too many directories can also lead to performance
problems. If you have several disks that share the cache, enter several cache_dir lines.
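The defaults described above correspond to this line:

```
cache_dir ufs /var/cache/squid 100 16 256
```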
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
These three entries specify the paths where Squid logs all its actions. Normally, nothing is
changed here. If Squid is experiencing a heavy usage burden, it might make sense to distribute
the cache and the log files over several disks.
emulate_httpd_log off
If this entry is set to on, you obtain readable log files. Some evaluation programs cannot interpret
this format, however.
client_netmask 255.255.255.255
With this entry, mask the logged IP addresses in the log files to hide the clients' identity. The last
octet of each IP address is set to zero if you enter 255.255.255.0 here.
ftp_user Squid@
With this, set the password Squid should use for the anonymous FTP login. It can make sense to
specify a valid e-mail address here, because some FTP servers can check these for validity.
cache_mgr webmaster
An e-mail address to which Squid sends a message if it unexpectedly crashes. The default is
webmaster.
logfile_rotate 0
If you run squid -k rotate, Squid rotates the log files. The files are numbered in this
process and, after reaching the specified value, the oldest file is overwritten. The value here is
usually 0, because archiving and deleting log files in SUSE LINUX is carried out by a cron job
defined in the configuration file /etc/logrotate.d/squid.
append_domain <domain>
With append_domain, specify which domain to append automatically when none is given.
Usually, your own domain is entered here, so entering www in the browser accesses your own
web server.
forwarded_for on
If you set the entry to off, Squid removes the IP address and the system name of the client from
the HTTP requests.
Normally, you do not need to change these values. If you have a dial-up connection, however,
the Internet may, at times, not be accessible. Squid then makes a note of the failed requests and
refuses to issue new ones, even though the Internet connection has been reestablished. In a case such
as this, change the minutes to seconds; then, after clicking Reload in the browser, the dial-up
process should be reengaged after a few seconds.
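In a stock squid.conf, the entries governing this behavior look roughly like the following; check your own file for the exact names and defaults:

```
negative_ttl 5 minutes        # how long failed requests are remembered
negative_dns_ttl 5 minutes    # the same for failed DNS lookups
```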
To prevent Squid from taking requests directly from the Internet, use the above command to
force connection to another proxy. This must have previously been entered in cache_peer. If all
is specified as the <acl_name>, force all requests to be forwarded directly to the parent. This
might be necessary, for example, if you are using a provider that strictly stipulates the use of its
proxies or denies its firewall direct Internet access.
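The directive in question is never_direct; forcing every request through the parent defined in cache_peer then looks like this, using the predefined all ACL:

```
never_direct allow all
```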
Squid provides an intelligent system that controls access to the proxy. By implementing ACLs, it can be
configured easily and comprehensively. This involves lists with rules that are processed sequentially.
ACLs must be defined before they can be used. Some default ACLs, such as all and localhost, already
exist. However, the mere definition of an ACL does not mean that it is actually applied. This only
happens in conjunction with http_access rules.
An ACL requires at least three specifications to define it. The name <acl_name> can be chosen
arbitrarily. For <type>, select from a variety of different options, which can be found in the
ACCESS CONTROLS section in the /etc/squid/squid.conf file. The specification for <data>
depends on the individual ACL type and can also be read from a file, for example, via host
names, IP addresses, or URLs. The following are some simple examples:
acl mysurfers srcdomain .my-domain.com
acl teachers src 192.168.1.0/255.255.255.0
acl students src 192.168.7.0-192.168.9.0/255.255.255.0
acl lunch time MTWHF 12:00-15:00
http_access defines who is allowed to use the proxy and who can access what on the Internet.
For this, ACLs must be given. localhost and all have already been defined above; access can be
denied or allowed with deny or allow. A list containing any number of http_access entries can
be created. It is processed from top to bottom and, depending on which entry matches first, access is
allowed or denied to the respective URL. The last entry should always be http_access deny all. In the
following example, the localhost has free access to everything while all other hosts are denied
access completely.
http_access allow localhost
http_access deny all
In another example using these rules, the group teachers always has access to the Internet. The
group students only gets access Monday to Friday during lunch time.
http_access deny localhost
http_access allow teachers
http_access allow students lunch
http_access deny all
For the sake of readability, the list with the http_access entries should only be entered at the
designated position in the /etc/squid/squid.conf file, that is, between the text
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR
# CLIENTS
redirect_program /usr/bin/squidGuard
With this option, specify a redirector such as squidGuard, which allows blocking unwanted
URLs. Internet access can be individually controlled for various user groups with the help of
proxy authentication and the appropriate ACLs. squidGuard is a package in and of itself that can
be separately installed and configured.
authenticate_program /usr/sbin/pam_auth
If users must be authenticated on the proxy, a corresponding program, such as pam_auth, can be
set here. When accessing pam_auth for the first time, the user sees a login window in which to
enter the user name and password. In addition, an ACL is still required, so only clients with a
valid login can use the Internet:
acl password proxy_auth REQUIRED
The REQUIRED after proxy_auth can be replaced with a list of permitted user names or with the
path to such a list.
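The ACL alone does not enforce anything; it takes effect through a matching http_access rule, for example:

```
http_access allow password
```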
With this, have an ident request run for all ACL-defined clients to find each user's identity. If you
apply all to the <acl_name>, this is valid for all clients. Also, an ident daemon must be running
on all clients. For Linux, install the pidentd package for this purpose. For Windows, there is free
software available to download from the Internet. To ensure that only clients with a successful
ident lookup are permitted, a corresponding ACL also needs to be defined here:
acl identhosts ident REQUIRED
Here, too, replace REQUIRED with a list of permitted user names. Using ident can slow down
the access time quite a bit, because ident lookups are repeated for each request.
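As with proxy authentication, this ACL also takes effect only through a matching http_access rule, for example:

```
http_access allow identhosts
```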
The usual way of working with proxy servers is the following: the web browser sends requests to a
certain port of the proxy server and the proxy provides the requested objects, whether they are in its
cache or not. When working in a real network, several situations may arise:
For security reasons, it is recommended that all clients use a proxy to surf the Internet.
All clients must use a proxy whether they are aware of it or not.
The proxy in a network is moved, but the existing clients should retain their old configuration.
In all these cases, a transparent proxy may be used. The principle is very easy: the proxy intercepts and
answers the requests of the web browser, so the web browser receives the requested pages without
knowing from where they are coming. This entire process is done transparently, hence the name.
The options to activate in the /etc/squid/squid.conf file to get the transparent proxy up and
running are:
httpd_accel_host virtual
httpd_accel_port 80 # the port number where the actual HTTP server is located
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
Now, redirect all incoming requests via the firewall, with the help of a port forwarding rule, to the
Squid port. To do this, use the SUSE-provided tool SuSEfirewall2. Its configuration file can be found
in /etc/sysconfig/SuSEfirewall2 and consists of well-documented entries. Even to set up only a
transparent proxy, you must configure some firewall options. In our example:
Set the ports and services (see /etc/services) on the firewall that are permitted access from untrusted
networks such as the Internet. In this example, only web services are offered to the outside:
FW_SERVICES_EXT_TCP="www"
Define the ports or services (see /etc/services) on the firewall that are permitted access from the
secure network, both TCP and UDP services:
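An example matching the surrounding description (web services, Squid on its default port 3128, and DNS) might look roughly like this; treat the exact values as an assumption to adapt:

```
FW_SERVICES_INT_TCP="domain www 3128"
FW_SERVICES_INT_UDP="domain"
```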
This allows accessing web services and Squid (whose default port is 3128).
The service “domain” stands for DNS. This service is commonly used. Otherwise, simply take it out of
the above entries and set the following option to no:
FW_SERVICE_DNS="yes"
The comments above show the syntax to follow. First, enter the IP address and the netmask of the
internal networks accessing the proxy firewall. Second, enter the IP address and the netmask to which
these clients send their requests. In the case of web browsers, specify the networks 0/0, a wild card
that means “to everywhere.” After that, enter the original port to which these requests are sent and,
finally, the port to which all these requests are redirected. Because Squid supports protocols other
than HTTP, requests from other ports can also be redirected to the proxy, such as FTP (port 21) or
HTTPS and SSL (port 443). The
example uses the default port 3128. If there are more networks or services to add, they only need to be
separated by a single blank character in the corresponding entry.
FW_REDIRECT_TCP="192.168.0.0/16,0/0,80,3128 192.168.0.0/16,0/0,21,3128"
FW_REDIRECT_UDP="192.168.0.0/16,0/0,80,3128 192.168.0.0/16,0/0,21,3128"
To start the firewall with the new configuration, set the entry START_FW in the
/etc/sysconfig/SuSEfirewall2 file to "yes".
Start Squid as shown in Section 18.3.4. “Starting Squid”. To check if everything is working properly,
check the Squid logs in /var/log/squid/access.log.
To verify that all ports are correctly configured, perform a port scan on the machine from any computer
outside your network. Only the web services port (80) should be open. To scan the ports with nmap, the
command syntax is nmap -O IP_address.
In the following, see how other applications interact with Squid. cachemgr.cgi enables the system
administrator to check the amount of memory needed for caching objects. squidGuard filters web
pages. Calamaris is a report generator for Squid.
18.3.7.1. cachemgr.cgi
The cache manager (cachemgr.cgi) is a CGI utility for displaying statistics about the memory usage of a
running Squid process. It is also a more convenient way to manage the cache and view statistics
without logging in to the server.
18.3.7.2. Setup
First, a running web server on your system is required. To check if Apache is already running, as root
enter the command rcapache status. If a message like this appears:
Checking for service httpd: OK
Server uptime: 1 day 18 hours 29 minutes 39 seconds
Apache is running on your machine. Otherwise, enter rcapache start to start Apache with the SUSE
LINUX default settings.
The last step to set it up is to copy the file cachemgr.cgi to the Apache directory cgi-bin:
cp /usr/share/doc/packages/squid/scripts/cachemgr.cgi /srv/www/cgi-bin/
There are some default settings in the original file required for the cache manager:
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
The first ACL is the most important, because the cache manager tries to communicate with Squid over
the cache_object protocol.
The following rules assume that the web server and Squid are running on the same machine. If the
communication between the cache manager and Squid originates at the web server on another
computer, include an extra ACL as in File 18.2. “Access Rules”.
Configure a password for the manager for access to more options, like closing the cache remotely or
viewing more information about the cache. For this, configure the entry cachemgr_passwd with a
password for the manager and the list of options to view. This list appears as a part of the entry
comments in /etc/squid/squid.conf.
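A hypothetical entry granting a password and all management options might look like:

```
cachemgr_passwd secretpassword all
```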
Restart Squid every time the configuration file is changed. This can easily be done with
rcsquid reload.
18.3.7.5. squidGuard
This section is not intended to go through an extensive configuration of squidGuard, only to introduce it
and give some advice for using it. For more in-depth configuration issues, refer to the squidGuard web
site at http://www.squidguard.org.
squidGuard is a free (GPL), flexible, and fast filter, redirector, and access controller plug-in for Squid.
It lets you define multiple access rules with different restrictions for different user groups on a Squid
cache. squidGuard uses Squid's standard redirector interface.
squidGuard can be used to:
limit the web access for some users to a list of accepted or well-known web servers or URLs
block access to some listed or blacklisted web servers or URLs for some users
block access to URLs matching a list of regular expressions or words for some users
have different access rules based on time of day, day of the week, date, etc.
Install squidGuard. Edit a minimal configuration file /etc/squidguard.conf. There are plenty of
configuration examples at http://www.squidguard.org/config/. Experiment later with more complicated
configuration settings.
Next, create a dummy “access denied” page or a more or less intelligent CGI page to redirect Squid if
the client requests a blacklisted web site. Using Apache is strongly recommended.
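A minimal /etc/squidguard.conf along these lines might look as follows; all paths, the blacklist name, and the redirect URL are assumptions to adapt:

```
logdir /var/log/squidGuard
dbhome /var/lib/squidGuard/db

dest blacklist {
        domainlist blacklist/domains
        urllist    blacklist/urls
}

acl {
        default {
                pass !blacklist all
                redirect http://localhost/blocked.html
        }
}
```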
Now, tell Squid to use squidGuard. Use the following entry in the /etc/squid/squid.conf file:
redirect_program /usr/bin/squidGuard
Another option, redirect_children, configures how many “redirect” (in this case, squidGuard)
processes run on the machine. squidGuard is quite fast: it can cope with 100,000 requests within
10 seconds on a 500 MHz Pentium with 5900 domains and 7880 URLs (13,780 entries in total).
Therefore, it is not recommended to set more than four processes, because this may lead to an
unnecessary increase in the memory allocated for these processes.
redirect_children 4
Last, have Squid read its new configuration by running rcsquid reload. Now, test your settings with a
browser.
Calamaris is a Perl script used to generate reports of cache activity in ASCII or HTML format. It works
with native Squid access log files. The Calamaris home page is located at http://Calamaris.Cord.de/.
The use of the program is quite easy. Log in as root, then enter cat access.log.files | calamaris
[options] > reportfile. When piping more than one log file, it is important that the log files are
chronologically ordered, with older files first. These are some options of the program:
-a
output all available reports
-w
output as an HTML report
-l
include a message or logo in the report header
More information about the various options can be found in the program's manual page with
man calamaris.
This puts the report in the directory of the web server. Apache is required to view the reports.
Another powerful cache report generator tool is SARG (Squid Analysis Report Generator). More
information about this can be found in the relevant Internet pages at http://web.onda.com.br/orso/.
Visit the home page of Squid at http://www.squid-cache.org/. Here, find the Squid User Guide and a
very extensive collection of FAQs on Squid.