LPI 303-200: Security
Description: Candidates should understand X.509 certificates and public key infrastructures. They should
know how to configure and use OpenSSL to implement certification authorities and issue SSL certificates
for various purposes.
Understand X.509 certificates, X.509 certificate lifecycle, X.509 certificate fields and X.509v3
certificate extensions.
Understand trust chains and public key infrastructures.
Generate and manage public and private keys.
Create, operate and secure a certification authority.
Request, sign and manage server and client certificates.
Revoke certificates and certification authorities.
The following is a partial list of the used files, terms and utilities:
OpenSSL
$ openssl version
This command displays the version of OpenSSL and the date it was built.
2. Some systems may require the creation of a random number file. Cryptographic software needs a source of
unpredictable data to work correctly. Many open source operating systems provide a “random device”;
systems like AIX do not. The command to do this is:
3. Edit the /etc/ssl/openssl.cnf file and search for _default. Edit each of these default settings to fit your
needs. Also search for “req_distinguished_name”; in this section you will find the default answers for some of
the openssl questions. If you need to create multiple certificates with the same details, it is helpful to change
these default answers.
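For illustration, the relevant _default entries in the req_distinguished_name section of openssl.cnf look roughly like this (the values shown are examples to adapt, not defaults from any particular distribution):

```ini
[ req_distinguished_name ]
countryName_default             = US
stateOrProvinceName_default     = Texas
localityName_default            = Dallas
0.organizationName_default      = MyCompany
organizationalUnitName_default  = IT
```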
4. Create an RSA private key for your CA (it will be Triple-DES encrypted and PEM formatted):
$ openssl req -new -x509 -keyout demoCA/private/cakey.pem -out demoCA/cacert.pem -days 3650
This command creates two files. The first is the private CA key in demoCA/private/cakey.pem. The second,
demoCA/cacert.pem, is the public CA certificate. As part of this process you are asked several questions.
Answer them as you see fit. The password controls access to your private key, so make it a good one: with this
key anyone can sign other certificates as you. The “Common Name” answer should reflect the fact that you
are a CA. A name like MyCompany Certificate Authority would be good.
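The same command can be run non-interactively for scripted setups; this is a sketch, with the directory layout following the demoCA convention used by CA.pl and an example passphrase and subject that you should replace:

```shell
# Create the demoCA directory layout expected by CA.pl
mkdir -p demoCA/private

# Generate the CA key and self-signed CA certificate in one step;
# -passout and -subj answer the interactive prompts
openssl req -new -x509 -days 3650 \
    -keyout demoCA/private/cakey.pem -out demoCA/cacert.pem \
    -passout pass:example-passphrase \
    -subj "/O=MyCompany/CN=MyCompany Certificate Authority"

# Inspect the resulting public CA certificate
openssl x509 -in demoCA/cacert.pem -noout -subject -dates
```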
$ misc/CA.pl -newreq
Once the private key is generated a Certificate Signing Request can be generated.
During the generation of the CSR, you will be prompted for several pieces of information. These are the
X.509 attributes of the certificate. One of the prompts will be for “Common Name (e.g., YOUR name)”. It
is important that this field be filled in with the fully qualified domain name of the server to be protected by
SSL. If the website to be protected will be https://www.domain.com, then enter www.domain.com at this
prompt. The command to generate the CSR is as follows:
$ openssl req -new -key server.key -out server.csr
To remove the pass phrase from the key afterwards, so that Apache can start without prompting for it:
$ cp server.key server.key.org
$ openssl rsa -in server.key.org -out server.key
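As a sketch, the key and CSR can also be produced non-interactively in one step; the file names and subject fields here are examples, and -nodes leaves the key unencrypted so the server can start unattended:

```shell
# Generate a 2048-bit RSA key and a CSR in one command;
# the CN must match the FQDN the certificate will protect
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr \
    -subj "/C=US/O=MyCompany/CN=www.domain.com"

# Verify the request's signature and check the subject it carries
openssl req -in server.csr -noout -verify -subject
```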
4.1 Let an official CA sign the CSR. You now have to send this Certificate Signing Request (CSR) to a
Certificate Authority (CA) for signing. The result is a real certificate which can be used for Apache.
Here you have two options: First, you can have the CSR signed by a commercial CA like Verisign or Thawte.
You then usually have to paste the CSR into a web form, pay for the signing and await the signed certificate,
which you can then store in a server.crt file.
Second, you can create a self-signed certificate:
$ openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
4.3 Sign the CSR using your own CA. Now you can use this CA to sign server CSRs in order to create
real SSL certificates for use inside an Apache webserver (assuming you already have a server.csr at hand):
$ /var/ssl/misc/CA.pl -sign
5. You can see the details of the received certificate via the command:
$ openssl x509 -noout -text -in server.crt
6. Now you have two files: server.key and server.crt. These can now be used as follows inside your
Apache's httpd.conf file:
SSLCertificateFile /path/to/this/server.crt
SSLCertificateKeyFile /path/to/this/server.key
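These directives normally sit inside a TLS-enabled virtual host; a minimal sketch, where the ServerName and file paths are placeholders:

```apacheconf
<VirtualHost *:443>
    ServerName www.domain.com
    SSLEngine on
    SSLCertificateFile /path/to/this/server.crt
    SSLCertificateKeyFile /path/to/this/server.key
</VirtualHost>
```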
Convert the signed certificate and its key to PKCS#12 format so it can be imported into a browser:
openssl pkcs12 -export -in newcert.pem -inkey newreq.pem -name "www.domain.com" -certfile demoCA/cacert.pem -
Source1 [http://www.akadia.com/services/ssh_test_certificate.html]
Source2 [http://www.grennan.com/CA-HOWTO-1.html]
Source3 [https://lpi.universe-network.net/doku.php?id=wiki:certification:lpic303]
CRL
OCSP
Description: Candidates should know how to use X.509 certificates for both server and client
authentication. Candidates should be able to implement user and server authentication for Apache HTTPD.
The version of Apache HTTPD covered is 2.4 or higher.
Understanding of SSL, TLS and protocol versions.
Understand common transport layer security threats, for example Man-in-the-Middle.
Configure Apache HTTPD with mod_ssl to provide HTTPS service, including SNI and HSTS.
Configure Apache HTTPD with mod_ssl to authenticate users using certificates.
Configure Apache HTTPD with mod_ssl to provide OCSP stapling.
Use OpenSSL for SSL/TLS client and server tests.
The following is a partial list of the used files, terms and utilities:
Description: Candidates should be able to set up and configure encrypted file systems.
The following is a partial list of the used files, terms and utilities:
cryptsetup
cryptmount
/etc/crypttab
ecryptfsd
ecryptfs-* commands
mount.ecryptfs, umount.ecryptfs
pam_ecryptfs
LUKS, cryptsetup-luks
LUKS (Linux Unified Key Setup) provides a standard on-disk format for encrypted partitions to facilitate
cross-distribution compatibility, to allow for multiple users/passwords and effective password revocation, and
to provide additional security against low-entropy attacks. To use LUKS, you must use a LUKS-enabled
version of cryptsetup.
luksFormat
Before we can open an encrypted partition, we need to initialize it:
$ cryptsetup luksFormat /dev/loop/0
WARNING!
========
This will overwrite data on /dev/loop/0 irrevocably.
Note: For encrypted partitions, replace the loopback device with the device name of the partition.
luksOpen
Now that the partition is formatted, we can create a Device-Mapper mapping for it.
root@host:~$ losetup /dev/loop/0 testfile
root@host:~$ cryptsetup luksOpen /dev/loop/0 testfs
Enter LUKS passphrase:
key slot 0 unlocked.
Command successful.
root@host:~$ mount /dev/mapper/testfs /mnt/test/
root@host:~$
The LUKS header allows you to assign up to 8 different passphrases that can access the encrypted partition or
container. This is useful for environments where the CEO and CTO can each have a password for the device
and the administrator(s) can have another. This makes it easy to change a password in case of employee
turnover while keeping the data accessible. Additional passphrases are added with cryptsetup luksAddKey
<device> and revoked with cryptsetup luksRemoveKey <device>.
The header can be inspected with cryptsetup luksDump:
$ cryptsetup luksDump /dev/loop/0
Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 1032
MK bits: 128
MK digest: a9 3c c2 33 0b 33 db ff d2 b9 dc 6c 01 d6 90 48 1d c1 2e bb
MK salt: 98 46 a3 28 64 35 f1 55 f0 2b 8e af f5 71 16 64
3c 30 1f 6c b1 4b 43 fd 23 49 28 a6 b0 e4 e2 14
MK iterations: 10
UUID: 089559af-41af-4dfe-b736-9d9d48d3bf53
Source [http://feraga.com/node/51]
CBC
Despite its deficiencies, CBC (Cipher Block Chaining) is still the most commonly used mode for disk
encryption. Since no auxiliary information is stored for the IV of each sector, the IV is derived from the
sector number, its content, and some static information. Several such methods have been proposed and used.
ESSIV
Encrypted Salt-Sector Initialization Vector (ESSIV) is a method for generating initialization vectors for
block encryption for use in disk encryption.
The usual methods generate IVs as a predictable sequence of numbers based on, for example, a time stamp
or the sector number; this permits certain attacks such as a watermarking attack.
ESSIV prevents such attacks by generating IVs from a combination of the sector number with the hash of
the key. It is the combination with the key in form of a hash that makes the IV unpredictable.
LRW
In order to prevent such elaborate attacks, different modes of operation were introduced: tweakable narrow-
block encryption (LRW and XEX) and wide-block encryption (CMC and EME).
Whereas the purpose of a usual block cipher E_K is to mimic a random permutation for any secret key K, the
purpose of tweakable encryption E_K^T is to mimic a random permutation for any secret key K and any
known tweak T.
XTS
XTS is XEX-based Tweaked CodeBook mode (TCB) with CipherText Stealing (CTS). Although
XEX-TCB-CTS should be abbreviated as XTC, “C” was replaced with “S” (for “stealing”) to avoid confusion
with the abbreviation for ecstasy. Ciphertext stealing provides support for sectors whose size is not divisible
by the block size, for example, 520-byte sectors and 16-byte blocks. XTS-AES was standardized on 2007-12-19
as IEEE P1619, Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices.
The XTS proof yields strong security guarantees as long as the same key is not used to encrypt much
more than 1 terabyte of data: below that threshold, no attack can succeed with probability better than
approximately one in eight quadrillion. However, this security guarantee deteriorates as more data is
encrypted with the same key: with a petabyte, the attack success probability rises to at most eight in a
trillion; with an exabyte, to at most eight in a million.
This means that using XTS with one key for more than a few hundred terabytes of data opens up the
possibility of attacks (and this is not mitigated by a larger AES key size, so using a 256-bit key doesn't
change it).
The decision on the maximum amount of data to be encrypted with a single key using XTS should consider
the above together with the practical implication of the attack (which is the ability of the adversary to
modify plaintext of a specific block, where the position of this block may not be under the adversary's
control).
Source [http://en.wikipedia.org/wiki/Disk_encryption_theory]
cryptmount
In order to create a new encrypted filing system managed by cryptmount, you can use the supplied
’cryptmount-setup’ program, which can be used by the superuser to interactively configure a basic setup.
Alternatively, suppose that we wish to setup a new encrypted filing system, that will have a target-name of
“opaque”. If we have a free disk partition available, say /dev/hdb63, then we can use this directly to store
the encrypted filing system. Alternatively, if we want to store the encrypted filing system within an ordinary
file, we need to create space using a recipe such as:
$ dd if=/dev/zero of=/home/opaque.fs bs=1M count=512
and then replace all occurrences of ’/dev/hdb63’ in the following with ’/home/opaque.fs’. (/dev/urandom can
be used in place of /dev/zero, debatably for extra security, but is rather slower.)
First, we need to add an entry in /etc/cryptmount/cmtab, which describes the encryption that will be used to
protect the filesystem itself and the access key, as follows:
opaque {
dev=/dev/hdb63 dir=/home/crypt
fstype=ext2 fsoptions=defaults cipher=twofish
keyfile=/etc/cryptmount/opaque.key
keyformat=builtin
}
Here, we will be using the “twofish” algorithm to encrypt the filing system itself, with the built-in key-
manager being used to protect the decryption key (to be stored in /etc/cryptmount/opaque.key).
In order to generate a secret decryption key (in /etc/cryptmount/opaque.key) that will be used to encrypt the
filing system itself, we can execute, as root:
$ cryptmount --generate-key 32 opaque
This will generate a 32-byte (256-bit) key, which is known to be supported by the Twofish cipher algorithm,
and store it in encrypted form after asking the system administrator for a password.
If we now execute, as root:
$ cryptmount --prepare opaque
we will then be asked for the password that we used when setting up /etc/cryptmount/opaque.key, which
will enable cryptmount to setup a device-mapper target (/dev/mapper/opaque). (If you receive an error
message of the form device-mapper ioctl cmd 9 failed: Invalid argument, this may mean that you have
chosen a key-size that isn’t supported by your chosen cipher algorithm. You can get some information about
suitable key-sizes by checking the output from “more /proc/crypto”, and looking at the “min keysize” and
“max keysize” fields.)
We can now use standard tools to create the actual filing system on /dev/mapper/opaque:
mke2fs /dev/mapper/opaque
(It may be advisable, after the filesystem is first mounted, to check that the permissions of the top-level
directory created by mke2fs are appropriate for your needs.)
After executing, as root:
$ cryptmount --release opaque
the encrypted filing system is ready for use. Ordinary users can mount it by typing
cryptmount -m opaque
or
cryptmount opaque
and unmount it by typing
cryptmount -u opaque
cryptmount keeps a record of which user mounted each filesystem in order to provide a locking mechanism
to ensure that only the same user (or root) can unmount it.
PASSWORD CHANGING
After a filesystem has been in use for a while, one may want to change the access password. For an example
target called “opaque”, this can be performed by executing:
$ cryptmount --change-password opaque
After successfully supplying the old password, one can then choose a new password which will be used to
re-encrypt the access key for the filesystem. (The filesystem itself is not altered or re-encrypted.)
/etc/crypttab
ecryptfsd
ecryptfs-* commands
mount.ecryptfs, umount.ecryptfs
pam_ecryptfs
Use eCryptfs to encrypt file systems, including home directories and PAM integration.
Be aware of plain dm-crypt and EncFS.
Description: Candidates should have experience and knowledge of cryptography in the context of DNS and
its implementation using BIND. The version of BIND covered is 9.7 or higher.
The following is a partial list of the used files, terms and utilities:
Description: Candidates should be able to secure computers running Linux against common threats. This
includes kernel and software configuration.
Key Knowledge Areas:
The following is a partial list of the used files, terms and utilities:
grub.cfg
chkconfig, systemctl
ulimit
/etc/security/limits.conf
pam_limits.so
chroot
sysctl
/etc/sysctl.conf
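The sysctl settings referenced above are persisted in /etc/sysctl.conf and loaded with `sysctl -p`. A few commonly recommended network-hardening entries, as an illustration to adapt to the host's role:

```ini
# Ignore ICMP broadcast echo requests (smurf protection)
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Do not accept ICMP redirects or source-routed packets
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
# Log packets with impossible source addresses
net.ipv4.conf.all.log_martians = 1
# Enable reverse-path filtering against spoofed packets
net.ipv4.conf.all.rp_filter = 1
```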
root@richard:/etc/security# ls -alh
total 52K
drwxr-xr-x 2 root root 4.0K 2008-11-07 18:06 .
drwxr-xr-x 149 root root 12K 2008-11-08 17:13 ..
-rw-r--r-- 1 root root 4.6K 2008-10-16 06:36 access.conf
-rw-r--r-- 1 root root 3.4K 2008-10-16 06:36 group.conf
-rw-r--r-- 1 root root 1.9K 2008-10-16 06:36 limits.conf
-rw-r--r-- 1 root root 1.5K 2008-10-16 06:36 namespace.conf
-rwxr-xr-x 1 root root 1003 2008-10-16 06:36 namespace.init
-rw-r--r-- 1 root root 3.0K 2007-10-01 20:49 pam_env.conf
-rw-r--r-- 1 root root 419 2008-10-16 06:36 sepermit.conf
-rw-r--r-- 1 root root 2.2K 2007-10-01 20:49 time.conf
Config files:
#<domain> can be:
# - an user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#
#<domain> <type> <item> <value>
#
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#ftp - chroot /ftp
#@student - maxlogins 4
# End of file
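The limits configured above are applied by pam_limits at login and can be inspected from a shell with the ulimit builtin; as a sketch (the -S/-H flags assume a bash-compatible shell):

```shell
# Show the soft and hard limits on open files for the current session
ulimit -Sn   # soft limit, may be raised up to the hard limit
ulimit -Hn   # hard limit; only root may raise it

# A shell can lower its own limit for child processes, here in a subshell
# so the change does not affect the current session
(ulimit -n 256; ulimit -n)
```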
Description: Candidates should be familiar with the use and configuration of common host intrusion
detection software. This includes updates and maintenance as well as automated host scans.
Key Knowledge Areas:
The following is a partial list of the used files, terms and utilities:
auditd
auditctl
ausearch, aureport
auditd.conf
audit.rules
pam_tty_audit.so
chkrootkit
rkhunter
/etc/rkhunter.conf
maldet
conf.maldet
aide
/etc/aide/aide.conf
Description: Candidates should be familiar with management and authentication of user accounts. This
includes configuration and use of NSS, PAM, SSSD and Kerberos for both local and remote directories and
authentication mechanisms as well as enforcing a password policy.
Configure SSSD authentication against Active Directory, IPA, LDAP, Kerberos and local domains.
Obtain and manage Kerberos tickets.
The following is a partial list of the used files, terms and utilities:
nsswitch.conf
/etc/login.defs
pam_cracklib.so
chage
pam_tally.so, pam_tally2.so
faillog
pam_sss.so
sssd
sssd.conf
sss_* commands
krb5.conf
kinit, klist, kdestroy
Description: Candidates should be familiar with FreeIPA v4.x. This includes installation and maintenance
of a server instance with a FreeIPA domain as well as integration of FreeIPA with Active Directory.
The following is a partial list of the used files, terms and utilities:
389 Directory Server, MIT Kerberos, Dogtag Certificate System, NTP, DNS, SSSD, certmonger
ipa, including relevant subcommands
ipa-server-install, ipa-client-install, ipa-replica-install
ipa-replica-prepare, ipa-replica-manage
Understand FreeIPA, including its architecture and components.
Understand system and configuration prerequisites for installing FreeIPA.
Install and manage a FreeIPA server and domain.
Understand and configure Active Directory replication and Kerberos cross-realm trusts.
Be aware of sudo, autofs, SSH and SELinux integration in FreeIPA.
Description: Candidates are required to understand Discretionary Access Control and know how to
implement it using Access Control Lists. Additionally, candidates are required to understand and know how
to use Extended Attributes.
Understand and manage file ownership and permissions, including SUID and SGID.
Understand and manage access control lists.
Understand and manage extended attributes and attribute classes.
The following is a partial list of the used files, terms and utilities:
getfacl
setfacl
getfattr
setfattr
Extended Attributes
In Linux, the ext2, ext3, ext4, JFS, ReiserFS and XFS filesystems support extended attributes (abbreviated
xattr) if the libattr feature is enabled in the kernel configuration. Any regular file may have a list of
extended attributes. Each attribute is denoted by a name and the associated data. The name must be a null-
terminated string, and must be prefixed by a namespace identifier and a dot character. Currently, four
namespaces exist: user, trusted, security and system. The user namespace has no restrictions with regard to
naming or contents. The system namespace is primarily used by the kernel for access control lists. The
security namespace is used by SELinux, for example.
Extended attributes are not widely used in user-space programs in Linux, although they are supported in the
2.6 and later versions of the kernel. Beagle does use extended attributes, and freedesktop.org publishes
recommendations for their use.
Source [http://en.wikipedia.org/wiki/Extended_file_attributes#Linux]
getfattr
For each file, getfattr displays the file name, and the set of extended attribute names (and optionally values)
which are associated with that file.
OPTIONS
-n name, --name=name
Dump the value of the named extended attribute.
-d, --dump
Dump the values of all extended attributes associated with pathname.
-e en, --encoding=en
Encode values after retrieving them. Valid values of en are "text", "hex", and "base64". Values encoded as
text strings are enclosed in double quotes ("), while strings encoded as hexadecimal and base64 are prefixed
with 0x and 0s, respectively.
-h, --no-dereference
Do not follow symlinks. If pathname is a symbolic link, the symbolic link itself is examined, rather than the
file the link refers to.
-m pattern, --match=pattern
Only include attributes with names matching the regular expression pattern. The default value for pattern
includes all the attributes in the user namespace. Refer to attr(5) for a more detailed discussion of
namespaces.
--absolute-names
Do not strip leading slash characters ('/'). The default behaviour is to strip leading slash characters.
--only-values
Dump out the extended attribute value(s) only.
-R, --recursive
List the attributes of all files and directories recursively.
-L, --logical
Logical walk, follow symbolic links. The default behaviour is to follow symbolic link arguments, and to skip
symbolic links encountered in subdirectories.
-P, --physical
Physical walk, skip all symbolic links. This also skips symbolic link arguments.
1: # file: somedir/
2: user.name0="value0"
3: user.name1="value1"
4: user.name2="value2"
5: ...
Line 1 identifies the file name for which the following lines are being reported. The remaining lines (lines 2
to 4 above) show the name and value pairs associated with the specified file.
Source [http://linux.about.com/library/cmd/blcmdl1_getfattr.htm]
setfattr
The setfattr command associates a new value with an extended attribute name for each specified file.
OPTIONS
-n name, --name=name
Specifies the name of the extended attribute to set.
-v value, --value=value
Specifies the new value for the extended attribute.
-x name, --remove=name
Remove the named extended attribute entirely.
-h, --no-dereference
Do not follow symlinks. If pathname is a symbolic link, it is not followed; the link itself is modified instead.
--restore=file
Restores extended attributes from file. The file must be in the format generated by the getfattr command
If a dash (-) is given as the file name, setfattr reads from standard input.
Example:
# file: test-1.txt
user.testing="this is a test"
Source [http://linux.about.com/library/cmd/blcmdl1_setfattr.htm]
Access Control Lists
getfacl
getfacl - get file access control lists
For each file, getfacl displays the file name, owner, the group, and the Access Control List (ACL). If a
directory has a default ACL, getfacl also displays the default ACL. Non-directories cannot have default
ACLs.
If getfacl is used on a file system that does not support ACLs, getfacl displays the access permissions
defined by the traditional file mode permission bits.
The output format of getfacl is as follows:
$ getfacl somedir
1: # file: somedir/
2: # owner: lisa
3: # group: staff
4: user::rwx
5: user:joe:rwx #effective:r-x
6: group::rwx #effective:r-x
7: group:cool:r-x
8: mask:r-x
9: other:r-x
10: default:user::rwx
11: default:user:joe:rwx #effective:r-x
12: default:group::r-x
13: default:mask:r-x
14: default:other:---
Lines 4, 6 and 9 correspond to the user, group and other fields of the file mode permission bits. These three
are called the base ACL entries. Lines 5 and 7 are named user and named group entries. Line 8 is the
effective rights mask. This entry limits the effective rights granted to all groups and to named users. (The
file owner and others permissions are not affected by the effective rights mask; all other entries are.) Lines
10–14 display the default ACL associated with this directory. Directories may have a default ACL. Regular
files never have a default ACL.
setfacl
setfacl - set file access control lists
OPTIONS
-b, --remove-all
Remove all extended ACL entries. The base ACL entries of the owner,
group and others are retained.
-k, --remove-default
Remove the Default ACL. If no Default ACL exists, no warnings are
issued.
-n, --no-mask
Do not recalculate the effective rights mask. The default behavior
of setfacl is to recalculate the ACL mask entry, unless a mask
entry was explicitly given. The mask entry is set to the union of
all permissions of the owning group, and all named user and group
entries. (These are exactly the entries affected by the mask
entry).
--mask
Do recalculate the effective rights mask, even if an ACL mask entry
was explicitly given. (See the -n option.)
-d, --default
All operations apply to the Default ACL. Regular ACL entries in the
input set are promoted to Default ACL entries. Default ACL entries
in the input set are discarded. (A warning is issued if that happens.)
--restore=file
Restore a permission backup created by ‘getfacl -R’ or similar. All
permissions of a complete directory subtree are restored using this
mechanism. If the input contains owner comments or group comments,
and setfacl is run by root, the owner and owning group of all files
are restored as well. This option cannot be mixed with other
options except ‘--test’.
ACL ENTRIES
The setfacl utility recognizes the following ACL entry formats (blanks
inserted for clarity):
EXAMPLES
Revoking write access from all groups and all named users (using the
effective rights mask)
setfacl -m m::rx file
Description: Candidates should be familiar with Mandatory Access Control systems for Linux. Specifically,
candidates should have a thorough knowledge of SELinux. Also, candidates should be aware of other
Mandatory Access Control systems for Linux. This includes major features of these systems but not
configuration and use.
Key Knowledge Areas:
The following is a partial list of the used files, terms and utilities:
TE
Type Enforcement (TE) is the core access control mechanism of SELinux: every process runs in a domain
and every object carries a type, and the loaded policy defines which domains may access which types.
RBAC
Role-based access control (RBAC) is an access policy determined by the system, not the owner. RBAC is
used in commercial applications and also in military systems, where multi-level security requirements may
also exist. RBAC differs from DAC in that DAC allows users to control access to their resources, while in
RBAC, access is controlled at the system level, outside of the user's control. Although RBAC is non-
discretionary, it can be distinguished from MAC primarily in the way permissions are handled. MAC
controls read and write permissions based on a user's clearance level and additional labels. RBAC controls
collections of permissions that may include complex operations such as an e-commerce transaction, or may
be as simple as read or write. A role in RBAC can be viewed as a set of permissions.
1. Role assignment: A subject can execute a transaction only if the subject has selected or been assigned a
role.
2. Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule
ensures that users can take on only roles for which they are authorized.
3. Transaction authorization: A subject can execute a transaction only if the transaction is authorized for the
subject's active role. With rules 1 and 2, this rule ensures that users can execute only transactions for which
they are authorized.
Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level
roles subsume permissions owned by sub-roles.
Source [http://en.wikipedia.org/wiki/Access_control#Access_Control_Techniques]
MAC
Mandatory access control (MAC) is an access policy determined by the system, not the owner. MAC is used
in multilevel systems that process highly sensitive data, such as classified government and military
information. A multilevel system is a single computer system that handles multiple classification levels
between subjects and objects.
Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned to
them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the
level of trust required for access. In order to access a given object, the subject must have a sensitivity
level equal to or higher than the requested object.
Data import and export: Controlling the import of information from other systems and export to
other systems (including printers) is a critical function of MAC-based systems, which must ensure
that sensitivity labels are properly maintained and implemented so that sensitive information is
appropriately protected at all times.
Two methods are commonly used for applying mandatory access control:
Rule-based access controls: This type of control further defines specific conditions for access to a
requested object. All MAC-based systems implement a simple form of rule-based access control to
determine whether access should be granted or denied by matching:
Lattice-based access controls: These can be used for complex access control decisions involving
multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest
lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.
Source [http://en.wikipedia.org/wiki/Access_control#Access_Control_Techniques]
DAC
Discretionary access control (DAC) is an access policy determined by the owner of an object. The owner
decides who is allowed to access the object and what privileges they have.
File and data ownership: Every object in the system has an owner. In most DAC systems, each
object's initial owner is the subject that caused it to be created. The access policy for an object is
determined by its owner.
Access rights and permissions: These are the controls that an owner can assign to other subjects for
specific resources.
Access controls may be discretionary in ACL-based or capability-based access control systems. (In
capability-based systems, there is usually no explicit concept of 'owner', but the creator of an object has a
similar degree of control over its access policy.)
Source [http://en.wikipedia.org/wiki/Access_control#Access_Control_Techniques]
SELinux configuration
$ getenforce
Disabled
setenforce
setenforce modifies the mode in which SELinux is running. SELinux has two modes: Enforcing and
Permissive.
$ setenforce Enforcing
To enable or disable SELinux you need to modify /etc/selinux/config and reboot the system.
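A typical /etc/selinux/config looks like this (the values shown are common defaults, given as an example):

```ini
# SELINUX= can be enforcing, permissive or disabled
SELINUX=enforcing
# SELINUXTYPE= names the loaded policy, e.g. targeted or mls
SELINUXTYPE=targeted
```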
getsebool
Example: display all booleans used for squid:
$ getsebool -a | grep squid
semanage
Example: display all file contexts for squid:
$ semanage fcontext -l | grep squid
setsebool
Example: allow anonymous FTP writes:
$ setsebool allow_ftp_anon_write=on
apol
seaudit, seaudit-report, audit2why, audit2allow
/etc/selinux/*
Description: Candidates should have experience and knowledge of security issues in use and configuration
of NFSv4 clients and servers as well as CIFS client services. Earlier versions of NFS are not required
knowledge.
The following is a partial list of the used files, terms and utilities:
/etc/exports
/etc/idmap.conf
nfs4acl
mount.cifs parameters related to ownership, permissions and security modes
winbind
getcifsacl, setcifsacl
Configuration
limit access
For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ to the host
bob.example.com with read and write permissions. /etc/exports:
/tmp/nfs/ bob.example.com(rw)
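Export options can be combined to tighten access further; a sketch with hypothetical paths and addresses:

```text
# read-only for a whole subnet, root mapped to nfsnobody, synchronous writes
/srv/nfs4        192.168.1.0/24(ro,root_squash,sync,no_subtree_check)
# read-write for a single trusted host only
/srv/nfs4/home   bob.example.com(rw,root_squash,sync,no_subtree_check)
```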
Access can also be restricted by protecting the portmap service; this can be done using libwrap and iptables.
#
# hosts.allow This file describes the names of the hosts which are
# allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#
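A typical libwrap setup denies portmap to everyone and then allows only the trusted network; the addresses below are examples:

```text
# /etc/hosts.deny
portmap: ALL

# /etc/hosts.allow
portmap: 192.168.1.0/255.255.255.0
```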
By default, NFS maps files created by root to the user nfsnobody (root squashing). This prevents the
uploading of programs with the setuid bit set.
Configure /etc/idmapd.conf
The id mapper daemon is required on both client and server. It maps NFSv4 username@domain user strings
back and forth into numeric UIDs and GIDs when necessary. The client and server must have matching
domains in this configuration file:
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = vanemery.com
[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody
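On the CIFS client side, ownership, permissions and the security mode are controlled through mount options; a sketch (the share name, user and modes are examples):

```shell
# uid/gid set the local owner of files, file_mode/dir_mode the permission bits,
# and sec= selects the security mode (e.g. ntlmssp, krb5):
# mount -t cifs //server/share /mnt/share \
#   -o username=alice,uid=alice,gid=users,file_mode=0640,dir_mode=0750,sec=ntlmssp
```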
Description: Candidates should be able to secure networks against common threats. This includes
verification of the effectiveness of security measures.
Use Wireshark to analyze network traffic, including filters and statistics.
Identify and deal with rogue router advertisements and DHCP messages.
The following is a partial list of the used files, terms and utilities:
radiusd
radmin
radtest, radclient
radlast, radwho
radiusd.conf
/etc/raddb/*
nmap
wireshark
tshark
tcpdump
ndpmon
nessus
Examples of vulnerabilities and exposures Nessus can scan for include:
Vulnerabilities that allow a remote cracker to control or access sensitive data on a system.
Misconfiguration (e.g. open mail relay, missing patches, etc).
Default passwords, a few common passwords, and blank/absent passwords on some system
accounts. Nessus can also call Hydra (an external tool) to launch a dictionary attack.
Denials of service against the TCP/IP stack by using mangled packets
On UNIX (including Mac OS X), it consists of nessusd, the Nessus daemon, which does the scanning, and
nessus, the client, which controls scans and presents the vulnerability results to the user. For Windows,
Nessus 3 installs as an executable and has a self-contained scanning, reporting and management system.
Operation
In typical operation, Nessus begins by doing a port scan with one of its four internal portscanners (or it can
optionally use Amap or Nmap) to determine which ports are open on the target and then tries various
exploits on the open ports. The vulnerability tests, available as subscriptions, are written in NASL (Nessus
Attack Scripting Language), a scripting language optimized for custom network interaction.
Tenable Network Security produces several dozen new vulnerability checks (called plugins) each week,
usually on a daily basis. These checks are available for free to the general public seven days after they are
initially published. Nessus users who require support and the latest vulnerability checks should contact
Tenable Network Security for a Direct Feed subscription which is not free. Commercial customers are also
allowed to access vulnerability checks without the seven-day delay.
Optionally, the results of the scan can be reported in various formats, such as plain text, XML, HTML and
LaTeX. The results can also be saved in a knowledge base for reference against future vulnerability scans.
On UNIX, scanning can be automated through the use of a command-line client. There exist many different
commercial, free and open source tools for both UNIX and Windows to manage individual or distributed
Nessus scanners.
If the user chooses to do so (by disabling the option 'safe checks'), some of Nessus's vulnerability tests may
try to cause vulnerable services or operating systems to crash. This lets a user test the resistance of a device
before putting it in production.
Nessus provides additional functionality beyond testing for known network vulnerabilities. For instance, it
can use Windows credentials to examine patch levels on computers running the Windows operating system,
and can perform password auditing using dictionary and brute force methods. Nessus 3 can also audit
systems to make sure they have been configured per a specific policy, such as the NSA's guide for
hardening Windows servers.
Source [http://en.wikipedia.org/wiki/Nessus_(software)]
nmap
Nmap (“Network Mapper”) is a free and open source (license) utility for network exploration or security
auditing. Many systems and network administrators also find it useful for tasks such as network inventory,
managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in
novel ways to determine what hosts are available on the network, what services (application name and
version) those hosts are offering, what operating systems (and OS versions) they are running, what type of
packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large
networks, but works fine against single hosts.
Example
-iL <inputfilename>: Input from list of hosts/networks
-iR <num hosts>: Choose random targets
--exclude <host1[,host2][,host3],...>: Exclude hosts/networks
--excludefile <exclude_file>: Exclude list from file
HOST DISCOVERY:
-sL: List Scan - simply list targets to scan
-sP: Ping Scan - go no further than determining if host is online
-PN: Treat all hosts as online -- skip host discovery
-PS/PA/PU [portlist]: TCP SYN/ACK or UDP discovery to given ports
-PE/PP/PM: ICMP echo, timestamp, and netmask request discovery probes
-PO [protocol list]: IP Protocol Ping
-n/-R: Never do DNS resolution/Always resolve [default: sometimes]
--dns-servers <serv1[,serv2],...>: Specify custom DNS servers
--system-dns: Use OS's DNS resolver
SCAN TECHNIQUES:
-sS/sT/sA/sW/sM: TCP SYN/Connect()/ACK/Window/Maimon scans
-sU: UDP Scan
-sN/sF/sX: TCP Null, FIN, and Xmas scans
--scanflags <flags>: Customize TCP scan flags
-sI <zombie host[:probeport]>: Idle scan
-sO: IP protocol scan
-b <FTP relay host>: FTP bounce scan
--traceroute: Trace hop path to each host
--reason: Display the reason a port is in a particular state
PORT SPECIFICATION AND SCAN ORDER:
-p <port ranges>: Only scan specified ports
Ex: -p22; -p1-65535; -p U:53,111,137,T:21-25,80,139,8080
-F: Fast mode - Scan fewer ports than the default scan
-r: Scan ports consecutively - don't randomize
--top-ports <number>: Scan <number> most common ports
--port-ratio <ratio>: Scan ports more common than <ratio>
SERVICE/VERSION DETECTION:
-sV: Probe open ports to determine service/version info
--version-intensity <level>: Set from 0 (light) to 9 (try all probes)
--version-light: Limit to most likely probes (intensity 2)
--version-all: Try every single probe (intensity 9)
--version-trace: Show detailed version scan activity (for debugging)
SCRIPT SCAN:
-sC: equivalent to --script=default
--script=<Lua scripts>: <Lua scripts> is a comma separated list of
directories, script-files or script-categories
--script-args=<n1=v1,[n2=v2,...]>: provide arguments to scripts
--script-trace: Show all data sent and received
--script-updatedb: Update the script database.
OS DETECTION:
-O: Enable OS detection
--osscan-limit: Limit OS detection to promising targets
--osscan-guess: Guess OS more aggressively
TIMING AND PERFORMANCE:
Options which take <time> are in milliseconds, unless you append 's'
(seconds), 'm' (minutes), or 'h' (hours) to the value (e.g. 30m).
-T[0-5]: Set timing template (higher is faster)
--min-hostgroup/max-hostgroup <size>: Parallel host scan group sizes
--min-parallelism/max-parallelism <time>: Probe parallelization
--min-rtt-timeout/max-rtt-timeout/initial-rtt-timeout <time>: Specifies
probe round trip time.
--max-retries <tries>: Caps number of port scan probe retransmissions.
--host-timeout <time>: Give up on target after this long
--scan-delay/--max-scan-delay <time>: Adjust delay between probes
--min-rate <number>: Send packets no slower than <number> per second
--max-rate <number>: Send packets no faster than <number> per second
FIREWALL/IDS EVASION AND SPOOFING:
-f; --mtu <val>: fragment packets (optionally w/given MTU)
-D <decoy1,decoy2[,ME],...>: Cloak a scan with decoys
-S <IP_Address>: Spoof source address
-e <iface>: Use specified interface
-g/--source-port <portnum>: Use given port number
--data-length <num>: Append random data to sent packets
--ip-options <options>: Send packets with specified ip options
--ttl <val>: Set IP time-to-live field
--spoof-mac <mac address/prefix/vendor name>: Spoof your MAC address
--badsum: Send packets with a bogus TCP/UDP checksum
OUTPUT:
-oN/-oX/-oS/-oG <file>: Output scan in normal, XML, s|<rIpt kIddi3,
and Grepable format, respectively, to the given filename.
-oA <basename>: Output in the three major formats at once
-v: Increase verbosity level (use twice or more for greater effect)
-d[level]: Set or increase debugging level (Up to 9 is meaningful)
--open: Only show open (or possibly open) ports
--packet-trace: Show all packets sent and received
--iflist: Print host interfaces and routes (for debugging)
--log-errors: Log errors/warnings to the normal-format output file
--append-output: Append to rather than clobber specified output files
--resume <filename>: Resume an aborted scan
--stylesheet <path/URL>: XSL stylesheet to transform XML output to HTML
--webxml: Reference stylesheet from Nmap.Org for more portable XML
--no-stylesheet: Prevent associating of XSL stylesheet w/XML output
MISC:
-6: Enable IPv6 scanning
-A: Enables OS detection and Version detection, Script scanning and Traceroute
--datadir <dirname>: Specify custom Nmap data file location
--send-eth/--send-ip: Send using raw ethernet frames or IP packets
--privileged: Assume that the user is fully privileged
--unprivileged: Assume the user lacks raw socket privileges
-V: Print version number
-h: Print this help summary page.
EXAMPLES:
nmap -v -A scanme.nmap.org
nmap -v -sP 192.168.0.0/16 10.0.0.0/8
nmap -v -iR 10000 -PN -p 80
SEE THE MAN PAGE FOR MANY MORE OPTIONS, DESCRIPTIONS, AND EXAMPLES
Source [http://nmap.org/]
wireshark
Wireshark is a free packet sniffer computer application. It is used for network troubleshooting, analysis,
software and communications protocol development, and education. In June 2006 the project was renamed
from Ethereal due to trademark issues.
Wireshark is software that “understands” the structure of different networking protocols. Thus, it is able to
display the encapsulation and the fields along with their meanings of different packets specified by different
networking protocols. Wireshark uses pcap to capture packets, so it can only capture the packets on the
networks supported by pcap.
Data can be captured “from the wire” from a live network connection or read from a file that records
the already-captured packets.
Live data can be read from a number of types of network, including Ethernet, IEEE 802.11, PPP, and
loopback.
Captured network data can be browsed via a GUI, or via the terminal (command line) version of the
utility, tshark.
Captured files can be programmatically edited or converted via command-line switches to the
“editcap” program.
Display filters can also be used to selectively highlight and color packet summary information.
Data display can be refined using a display filter.
Hundreds of protocols can be dissected.
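Capture filters and display filters use different syntaxes; for example (the addresses are illustrative):

```shell
# Capture filter (libpcap syntax), set before capturing:
tcp port 80 and host 192.168.1.10
# Display filter (Wireshark syntax), applied to already-captured packets:
http.request.method == "GET" && ip.addr == 192.168.1.10
```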
Wireshark's native network trace file format is the libpcap format supported by libpcap and WinPcap, so it
can read capture files from applications such as tcpdump and CA NetMaster that use that format. It can also
read captures from other network analyzers, such as snoop, Network General's Sniffer, and Microsoft
Network Monitor.
Source [http://en.wikipedia.org/wiki/Wireshark]
Command-line options
Wireshark 1.0.3
Interactively dump and analyze network traffic.
See http://www.wireshark.org for more information.
Capture interface:
-i <interface> name or idx of interface (def: first non-loopback)
-f <capture filter> packet filter in libpcap filter syntax
-s <snaplen> packet snapshot length (def: 65535)
-p don't capture in promiscuous mode
-k start capturing immediately (def: do nothing)
-Q quit Wireshark after capturing
-S update packet display when new packets are captured
-l turn on automatic scrolling while -S is in use
-y <link type> link layer type (def: first appropriate)
-D print list of interfaces and exit
-L print list of link-layer types of iface and exit
Processing:
-R <read filter> packet filter in Wireshark display filter syntax
-n disable all name resolutions (def: all enabled)
-N <name resolve flags> enable specific name resolution(s): "mntC"
User interface:
-C <config profile> start with specified configuration profile
-g <packet number> go to specified packet number after "-r"
-m <font> set the font name used for most text
-t ad|a|r|d|dd|e output format of time stamps (def: r: rel. to first)
-X <key>:<value> eXtension options, see man page for details
-z <statistics> show various statistics, see man page for details
Output:
-w <outfile|-> set the output filename (or '-' for stdout)
Miscellaneous:
-h display this help and exit
-v display version info and exit
-P <key>:<path> persconf:path - personal configuration files
persdata:path - personal data files
-o <name>:<value> ... override preference or recent setting
--display=DISPLAY X display to use
Description: Candidates should be familiar with the use and configuration of network security scanning,
network monitoring and network intrusion detection software. This includes updating and maintaining the
security scanners.
The following is a partial list of the used files, terms and utilities:
ntop
Cacti
snort
snort-stat
/etc/snort/*
openvas-adduser, openvas-rmuser
openvas-nvt-sync
openvassd
openvas-mkcert
/etc/openvas/*
ntop
ntop is a network traffic probe that shows network usage, similar to what the popular top Unix command does. ntop is based on libpcap and has been written in a portable way so that it runs on virtually every Unix platform.
ntop provides:
a web interface
limited configuration and administration via the web interface
reduced CPU and memory usage (they vary according to network size and traffic).
Using Ntop
This is a very simple procedure. Run this command in the bash shell:
# ntop -P /etc/ntop -W 4242 -d
What does this mean? The -P option tells ntop to read its configuration files from the /etc/ntop directory. The -W option sets the port on which we want to access ntop through our web browser. If you don't specify this option, the default port is 3000. Finally, the -d option runs ntop in daemon mode, so it keeps running in the background for as long as the system is up.
Once started in web mode, ntop enables its web server and allows us to view and use its statistics through any web browser at http://host:portnumber/.
The example on our test machine:
# http://192.168.0.6:4242/
Source [http://wiki.engardelinux.org/index.php/HOWTO:Installing_and_running_NTOP]
snort
Configure Snort
We need to modify the snort.conf file to suit our needs. Open /etc/snort/snort.conf with your favorite text
editor (nano, vi, vim, etc.).
# vi /etc/snort/snort.conf
Change “var HOME_NET any” to “var HOME_NET 192.168.1.0/24” (your home network may differ from 192.168.1.0). Change “var EXTERNAL_NET any” to “var EXTERNAL_NET !$HOME_NET” (this states that everything except HOME_NET is external). Change “var RULE_PATH ../rules” to “var RULE_PATH /etc/snort/rules”. Save and quit.
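After these edits, the relevant lines in /etc/snort/snort.conf read:

```shell
var HOME_NET 192.168.1.0/24
var EXTERNAL_NET !$HOME_NET
var RULE_PATH /etc/snort/rules
```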
Change permissions on the configuration file to keep things secure, then test the configuration:
# snort -c /etc/snort/snort.conf
If everything went well you should see an ASCII pig.
To end the test hit Ctrl+C.
Updating rules
modify /etc/oinkmaster.conf so that:
Then:
groupadd snort
useradd -g snort snort -s /bin/false
chmod 640 /etc/oinkmaster.conf
chown root:snort /etc/oinkmaster.conf
nano -w /usr/local/bin/oinkdaily
#!/bin/bash
/usr/sbin/oinkmaster -C /etc/oinkmaster.conf -o /etc/snort/rules
## if you have "mail" installed, comment out the line above and uncomment this
## to have oinkmaster mail you reports instead:
# /usr/sbin/oinkmaster -C /etc/oinkmaster.conf -o /etc/snort/rules 2>&1 | mail -s "oinkmaster" [email protected]
chmod 700 /usr/local/bin/oinkdaily
Finally:
In user snort's crontab, to launch the update on the 30th minute of the 5th hour of every day, add the
following:
30 5 * * * /usr/local/bin/oinkdaily
But you should randomize those times (for instance, 2:28 or 4:37 or 6:04) to reduce the impact on
snort.org's servers.
Source [http://www.howtoforge.com/intrusion-detection-with-snort-mysql-apache2-on-ubuntu-7.10-updated]
Description: Candidates should be familiar with the use and configuration of packet filters. This includes
netfilter, iptables and ip6tables as well as basic knowledge of nftables, nft and ebtables.
The following is a partial list of the used files, terms and utilities:
iptables
ip6tables
iptables-save, iptables-restore
ip6tables-save, ip6tables-restore
ipset
nft
ebtables
iptables
Basic Commands
Typing
$ sudo iptables -L
lists your current rules in iptables. If you have just set up your server, you will have no rules, and you should see three empty chains (INPUT, FORWARD and OUTPUT), each with a default policy of ACCEPT.
-A - Append this rule to a rule chain. Valid chains for what we're doing are INPUT, FORWARD and
OUTPUT, but we mostly deal with INPUT in this tutorial, which affects only incoming traffic.
-m state - Allow filter rules to match based on connection state. Permits the use of the --state option.
--state - Define the list of states for the rule to match on. Valid states are NEW, ESTABLISHED, RELATED and INVALID.
-m limit - Require the rule to match only a limited number of times. Allows the use of the --limit option. Useful for limiting logging rules.
--limit - The maximum matching rate, given as a number followed by “/second”, “/minute”, “/hour”, or “/day” depending on how often you want the rule to match. If this option is not used and -m limit is used, the default is “3/hour”.
--dport - The destination port(s) required for this rule. A single port may be given, or a range may be given as start:end, which will match all ports from start to end, inclusive.
ACCEPT - Accept the packet and stop processing rules in this chain.
REJECT - Reject the packet and notify the sender that we did so, and stop processing rules in this
chain.
DROP - Silently ignore the packet, and stop processing rules in this chain.
LOG - Log the packet, and continue processing more rules in this chain. Allows the use of the --log-prefix and --log-level options.
--log-prefix - When logging, put this text before the log message. Use double quotes around the text to use.
--log-level - Log using the specified syslog level. 7 is a good choice unless you specifically need something else.
-I - Inserts a rule. Takes two options, the chain to insert the rule into, and the rule number it should be.
-I INPUT 5 would insert the rule into the INPUT chain and make it the 5th rule in the list.
-v - Display more information in the output. Useful for if you have rules that look similar without using -v.
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport ssh -j ACCEPT
Referring back to the list above, you can see that this tells iptables:
append this rule to the input chain (-A INPUT) so we look at incoming traffic
check to see if it is TCP (-p tcp).
if so, check to see if the input goes to the SSH port (--dport ssh).
if so, accept the input (-j ACCEPT).
Let's check the rules: (only the first few lines shown, you will see more)
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
Now add a rule to allow incoming web traffic, and list the rules again:
$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere tcp dpt:www
We have specifically allowed tcp traffic to the ssh and web ports, but as we have not blocked anything, all
traffic can still come in.
Blocking Traffic
Once a decision is made to accept a packet, no more rules affect it. As our rules allowing ssh and web traffic
come first, as long as our rule to block all traffic comes after them, we can still accept the traffic we want.
All we need to do is put the rule to block all traffic at the end:
$ sudo iptables -A INPUT -j DROP
Because we didn't specify an interface or a protocol, any traffic for any port on any interface is blocked,
except for web and ssh.
Editing iptables
The only problem with our setup so far is that even the loopback interface is blocked. We could have written the drop rule for just eth0 by specifying -i eth0, but we could also add a rule for the loopback. If we append this rule, it will come too late - after all the traffic has been dropped. We need to insert this rule before that. Since this is a lot of traffic, we'll insert it as the first rule so it's processed first:
$ sudo iptables -I INPUT 1 -i lo -j ACCEPT
The first and last lines look nearly the same, so we will list iptables in greater detail.
$ sudo iptables -L -v
You can now see a lot more information. This rule is actually very important, since many programs use the
loopback interface to communicate with each other. If you don't allow them to talk, you could break those
programs!
Logging
In the above examples none of the traffic will be logged. If you would like to log dropped packets to syslog,
this would be the quickest way:
$ sudo iptables -I INPUT 5 -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
Saving iptables
If you were to reboot your machine right now, your iptables configuration would disappear. Rather than
type this each time you reboot, however, you can save the configuration, and have it start up automatically.
To save the configuration, you can use iptables-save and iptables-restore.
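For example (the file location is an arbitrary choice; some distributions provide their own mechanism, such as the iptables-persistent package):

```shell
# Save the current rules to a file:
$ sudo sh -c "iptables-save > /etc/iptables.rules"
# Load them again later, e.g. from a boot or network pre-up script:
$ sudo sh -c "iptables-restore < /etc/iptables.rules"
```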
Source [https://help.ubuntu.com/community/IptablesHowTo]
ip6tables
ip6tables-save, ip6tables-restore
ipset
nft
ebtables
Description: Candidates should be familiar with the use of OpenVPN and IPsec. Key Knowledge Areas:
Configure and operate OpenVPN server and clients for both bridged and routed VPN networks.
Configure and operate IPsec server and clients for routed VPN networks using IPsec-Tools / racoon.
Awareness of L2TP.
The following is a partial list of the used files, terms and utilities:
/etc/openvpn/*
openvpn server and client
setkey
/etc/ipsec-tools.conf
/etc/racoon/racoon.conf
Configuration
OpenVPN is a full-featured open source SSL VPN solution that accommodates a wide range of
configurations, including remote access, site-to-site VPNs, Wi-Fi security, and enterprise-scale remote
access solutions with load balancing, failover, and fine-grained access-controls. Starting with the
fundamental premise that complexity is the enemy of security, OpenVPN offers a cost-effective, lightweight
alternative to other VPN technologies that is well-targeted for the SME and enterprise markets.
Simple Example
This example demonstrates a bare-bones point-to-point OpenVPN configuration. A VPN tunnel will be
created with a server endpoint of 10.8.0.1 and a client endpoint of 10.8.0.2. Encrypted communication
between client and server will occur over UDP port 1194, the default OpenVPN port.
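The shared key itself is generated with OpenVPN's built-in key generator (this is the classic OpenVPN 2.x invocation; newer releases also accept "openvpn --genkey secret static.key"):

```shell
$ openvpn --genkey --secret static.key
```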
Copy the static key to both client and server over a pre-existing secure channel. Server configuration file:
dev tun
ifconfig 10.8.0.1 10.8.0.2
secret static.key
Client configuration file:
remote myremote.mydomain
dev tun
ifconfig 10.8.0.2 10.8.0.1
secret static.key
Firewall configuration
Make sure that:
the UDP port used by OpenVPN (1194 in this example) is not blocked on either the client or server.
the virtual TUN interface used by OpenVPN is not blocked on either the client or server (on Linux,
the TUN interface will probably be called tun0 while on Windows it will probably be called
something like Local Area Connection n unless you rename it in the Network Connections control
panel).
Bear in mind that 90% of all connection problems encountered by new OpenVPN users are firewall-related.
To enable compression on the VPN link, add the following to both configuration files:
comp-lzo
To keep the tunnel running across unreliable connections and restarts, and to drop daemon privileges, add options such as:
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
user nobody
group nobody
daemon
Then on the server side, add a route to the server's LAN gateway that routes 10.8.0.2 to the OpenVPN
server machine (only necessary if the OpenVPN server machine is not also the gateway for the server-side
LAN). Also, don't forget to enable IP Forwarding on the OpenVPN server machine.
Source [http://openvpn.net/index.php/documentation/miscellaneous/static-key-mini-howto.html]
setkey
/etc/ipsec-tools.conf
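A minimal /etc/ipsec-tools.conf sketch for a routed tunnel between two gateways (all addresses are examples):

```shell
#!/usr/sbin/setkey -f
flush;
spdflush;
# Require ESP in tunnel mode between the two gateways, in both directions:
spdadd 192.168.1.0/24 192.168.2.0/24 any -P out ipsec
    esp/tunnel/203.0.113.1-203.0.113.2/require;
spdadd 192.168.2.0/24 192.168.1.0/24 any -P in ipsec
    esp/tunnel/203.0.113.2-203.0.113.1/require;
```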
/etc/racoon/racoon.conf
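A matching /etc/racoon/racoon.conf sketch, using a pre-shared key for IKE negotiation (addresses and algorithm choices are examples):

```shell
path pre_shared_key "/etc/racoon/psk.txt";

remote 203.0.113.2 {
        exchange_mode main;
        proposal {
                encryption_algorithm aes;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group modp1024;
        }
}

sainfo anonymous {
        encryption_algorithm aes;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}
```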