Uploaded by Brahim HAMDI

LPI Certification Documentation

LPI 303-200: Security


This document contains information on the different objectives of the LPIC 303 exam. Before using
this document, check on the LPI site whether the objectives are still the same. This document is
provided as a study aid and is in no way a guarantee of passing the exam. Try to gain some practical
knowledge and really understand how things work; that should help.

Topic 325: Cryptography


325.1 X.509 Certificates and Public Key Infrastructures (weight: 5)

Description: Candidates should understand X.509 certificates and public key infrastructures. They should
know how to configure and use OpenSSL to implement certification authorities and issue SSL certificates
for various purposes.

Key Knowledge Areas:

Understand X.509 certificates, X.509 certificate lifecycle, X.509 certificate fields and X.509v3
certificate extensions.
Understand trust chains and public key infrastructures.
Generate and manage public and private keys.
Create, operate and secure a certification authority.
Request, sign and manage server and client certificates.
Revoke certificates and certification authorities.

The following is a partial list of the used files, terms and utilities:

openssl, including relevant subcommands


OpenSSL configuration
PEM, DER, PKCS
CSR
CRL
OCSP

OpenSSL

How to be your own Certificate Authority

1. Install OpenSSL and make sure it is available in your path.

$ openssl version

This command displays the version and build date of OpenSSL.

OpenSSL 0.9.5 28 Feb 2000

2. Some systems may require the creation of a random number file. Cryptographic software needs a source of
unpredictable data to work correctly. Many open source operating systems provide a “random device”;
systems like AIX do not. The command to create the file is:

$ openssl rand -out .rnd 512

3. Edit the /etc/ssl/openssl.cnf file and search for _default. Edit each of these default settings to fit your
needs. Also search for “req_distinguished_name”; in this section you will find the default answers for some of
the openssl questions. If you need to create multiple certificates with the same details, it is helpful to change
these default answers.
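As an illustration, the _default keys in the [req_distinguished_name] section of openssl.cnf might look like the following; the values shown are examples, not prescriptions:

```ini
[ req_distinguished_name ]
# Default answers offered at the openssl prompts (press Enter to accept them)
countryName_default             = NL
stateOrProvinceName_default     = Zuid-Holland
localityName_default            = Delft
0.organizationName_default      = HostingComp
organizationalUnitName_default  = Information Technology
```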

4. Create an RSA private key for your CA (it will be Triple-DES encrypted and PEM formatted):

$ cd /var/ssl (or $ cd /usr/lib/ssl on Ubuntu)


$ misc/CA.pl -newca

If this doesn't work try:

$ openssl req -new -x509 -keyout demoCA/private/cakey.pem -out demoCA/cacert.pem -days 3650

This command creates two files. The first is the private CA key in demoCA/private/cakey.pem. The second,
demoCA/cacert.pem, is the public CA certificate. As part of this process you are asked several questions;
answer them as you see fit. The password protects access to your private key, so make it a good one: with this
key anyone can sign other certificates as you. The “Common Name” answer should reflect the fact that you
are a CA. A name like MyCompany Certificate Authority would be good.

Creating a server certificate

1. Now you need to create a key to use as your server key.

$ misc/CA.pl -newreq

or

$ openssl genrsa -des3 -out server.key 1024

Generating RSA private key, 1024 bit long modulus


.........................................................++++++
........++++++
e is 65537 (0x10001)
Enter PEM pass phrase:
Verifying password - Enter PEM pass phrase:

2. Generate a CSR (Certificate Signing Request)

Once the private key is generated a Certificate Signing Request can be generated.
During the generation of the CSR, you will be prompted for several pieces of information. These are the
X.509 attributes of the certificate. One of the prompts will be for “Common Name (e.g., YOUR name)”. It
is important that this field be filled in with the fully qualified domain name of the server to be protected by
SSL. If the website to be protected will be https://www.domain.com, then enter
www.domain.com at this prompt. The command to generate the CSR is as follows:

$ openssl req -new -key server.key -out server.csr

Country Name (2 letter code) [GB]: NL


State or Province Name (full name) [Berkshire]: Zuid-Holland
Locality Name (eg, city) [Newbury]: Delft
Organization Name (eg, company) [My Company Ltd]: HostingComp
Organizational Unit Name (eg, section) []: Information Technology
Common Name (eg, your name or your server's hostname) []: www.domain.com
Email Address []: admin at domain.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

3. Remove Passphrase from Key


One unfortunate side-effect of the pass-phrased private key is that Apache will ask for the pass-phrase each
time the web server is started. Obviously this is not necessarily convenient as someone will not always be
around to type in the pass-phrase, such as after a reboot or crash. mod_ssl includes the ability to use an
external program in place of the built-in pass-phrase dialog, however, this is not necessarily the most secure
option either. It is possible to remove the Triple-DES encryption from the key, thereby no longer needing to
type in a pass-phrase. If the private key is no longer encrypted, it is critical that this file only be readable by
the root user! If your system is ever compromised and a third party obtains your unencrypted private key,
the corresponding certificate will need to be revoked. With that being said, use the following command to
remove the pass-phrase from the key:

$ cp server.key server.key.org
$ openssl rsa -in server.key.org -out server.key

The newly created server.key file has no more passphrase in it.

4. Now you have three options:

Let an official CA sign the CSR.


Self-sign the CSR
Self-sign the CSR using your own CA.

4.1 Let an official CA sign the CSR. You now have to send this Certificate Signing Request (CSR) to a
Certificate Authority (CA) for signing, for example a commercial CA like Verisign or Thawte. The result is
a real certificate which can be used for Apache. You usually have to paste the CSR into a web form, pay for
the signing and wait for the signed certificate, which you can then store in a server.crt file.

4.2 Self-sign the CSR

$ openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

4.3 Self-sign the CSR using your own CA. Now you can use this CA to sign server CSR's in order to create
real SSL Certificates for use inside an Apache webserver (assuming you already have a server.csr at hand):

$ /var/ssl/misc/CA.pl -sign

or

$ openssl ca -policy policy_anything -out server.crt -infiles server.csr

This signs the server CSR and results in a server.crt file.

5. You can see the details of the received Certificate via the command:

$ openssl x509 -noout -text -in server.crt

6. Now you have two files: server.key and server.crt. These now can be used as following inside your
Apache's httpd.conf file:

SSLCertificateFile /path/to/this/server.crt
SSLCertificateKeyFile /path/to/this/server.key

The server.csr file is no longer needed.
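Before deploying, it can be worth checking that certificate and key actually belong together by comparing their RSA moduli. The sketch below generates a throwaway key/certificate pair so it is self-contained; substitute your real server.crt and server.key:

```shell
cd "$(mktemp -d)"                  # scratch directory for the throwaway pair
openssl genrsa -out demo.key 2048
openssl req -new -x509 -key demo.key -subj "/CN=demo" -days 1 -out demo.crt
# A certificate matches a key iff the two moduli (and so these digests) agree:
openssl x509 -noout -modulus -in demo.crt | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```

If the two printed digests differ, the certificate was not issued for that key.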

Convert the signed certificate to PKCS#12 format so it can be imported into a browser.

openssl pkcs12 -export -in newcert.pem -inkey newreq.pem -name "www.domain.com" -certfile demoCA/cacert.pem -

Source1 [http://www.akadia.com/services/ssh_test_certificate.html]
Source2 [http://www.grennan.com/CA-HOWTO-1.html]
Source3 [https://lpi.universe-network.net/doku.php?id=wiki:certification:lpic303]
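Revocation, also listed in the objectives, can be sketched against the same demoCA layout used above; this assumes the CA database (demoCA/index.txt) created in the earlier steps:

```shell
# Revoke a previously issued certificate (marks it revoked in demoCA/index.txt)
openssl ca -revoke server.crt

# Generate a Certificate Revocation List (CRL) for distribution to clients
openssl ca -gencrl -out demoCA/crl.pem

# Inspect the CRL contents
openssl crl -in demoCA/crl.pem -noout -text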


325.2 X.509 Certificates for Encryption, Signing and Authentication (weight: 4)

Description: Candidates should know how to use X.509 certificates for both server and client
authentication. Candidates should be able to implement user and server authentication for Apache HTTPD.
The version of Apache HTTPD covered is 2.4 or higher.

Key Knowledge Areas:

Understanding of SSL, TLS and protocol versions.
Understand common transport layer security threats, for example Man-in-the-Middle.
Configure Apache HTTPD with mod_ssl to provide HTTPS service, including SNI and HSTS.
Configure Apache HTTPD with mod_ssl to authenticate users using certificates.
Configure Apache HTTPD with mod_ssl to provide OCSP stapling.
Use OpenSSL for SSL/TLS client and server tests.

The following is a partial list of the used files, terms and utilities:

Intermediate certification authorities


Cipher configuration (no cipher-specific knowledge)
httpd.conf
mod_ssl
openssl
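The objectives above can be sketched as a minimal mod_ssl virtual host. All paths, the ServerName and the CA file are illustrative assumptions, not values prescribed by this document:

```apache
# Minimal HTTPS virtual host (Apache HTTPD 2.4, mod_ssl) - illustrative values
SSLUseStapling On                                   # OCSP stapling
SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"   # must be outside <VirtualHost>

<VirtualHost *:443>
    ServerName www.domain.com                       # SNI: one vhost per hostname
    SSLEngine On
    SSLCertificateFile      /path/to/this/server.crt
    SSLCertificateKeyFile   /path/to/this/server.key
    # Client-certificate authentication against your own CA:
    SSLCACertificateFile    /path/to/demoCA/cacert.pem
    SSLVerifyClient require
    SSLVerifyDepth  2
    # HSTS: instruct browsers to use HTTPS only (requires mod_headers):
    Header always set Strict-Transport-Security "max-age=31536000"
</VirtualHost>
```

A handshake can then be tested with openssl s_client -connect www.domain.com:443 -servername www.domain.com -status; the -status flag requests the stapled OCSP response.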


325.3 Encrypted File Systems (weight: 3)

Description: Candidates should be able to set up and configure encrypted file systems.

Key Knowledge Areas:

Understand block device and file system encryption.


Use dm-crypt with LUKS to encrypt block devices.
Use eCryptfs to encrypt file systems, including home directories and PAM integration.
Be aware of plain dm-crypt and EncFS.

The following is a partial list of the used files, terms and utilities:

cryptsetup
cryptmount
/etc/crypttab
ecryptfsd
ecryptfs-* commands
mount.ecryptfs, umount.ecryptfs
pam_ecryptfs

LUKS, cryptsetup-luks

LUKS (Linux Unified Key Setup) provides a standard on-disk format for encrypted partitions to facilitate
cross-distribution compatibility, to allow for multiple users/passwords and effective password revocation,
and to provide additional security against low-entropy attacks. To use LUKS, you must use a LUKS-enabled
version of cryptsetup.

Create the Container and Loopback Mount it


First we need to create the container file, and loopback mount it.

root@host:~$ dd if=/dev/urandom of=testfile bs=1M count=10


10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.77221 seconds, 5.9 MB/s
root@host:~$ losetup /dev/loop/0 testfile
root@host:~$

Note: Skip this step for encrypted partitions.

luksFormat
Before we can open an encrypted partition, we need to initialize it.

root@host:~$ cryptsetup luksFormat /dev/loop/0

WARNING!
========
This will overwrite data on /dev/loop/0 irrevocably.

Are you sure? (Type uppercase yes): YES


Enter LUKS passphrase:
Verify passphrase:
Command successful.
root@host:~$

Note: For encrypted partitions replace the loopback device with the device label of the partition.

luksOpen
Now that the partition is formatted, we can create a Device-Mapper mapping for it.

root@host:~$ cryptsetup luksOpen /dev/loop/0 testfs


Enter LUKS passphrase:
key slot 0 unlocked.
Command successful.
root@host:~$

Formatting the Filesystem


The first time we create the Device-Mapper mapping, we need to format the new virtual device with a new
filesystem.

root@host:~$ mkfs.ext2 /dev/mapper/testfs

Mounting the Virtual Device


Now, we can mount the new virtual device just like any other device.

root@host:~$ mount /dev/mapper/testfs /mnt/test/


root@host:~$

Mounting an Existing Encrypted Container File or Partition

root@host:~$ losetup /dev/loop/0 testfile
root@host:~$ cryptsetup luksOpen /dev/loop/0 testfs
Enter LUKS passphrase:
key slot 0 unlocked.
Command successful.
root@host:~$ mount /dev/mapper/testfs /mnt/test/
root@host:~$

Note: Skip the losetup setup for encrypted partitions.

Unmounting and Closing an Encrypted Container File or Partition

root@host:~$ umount /mnt/test


root@host:~$ cryptsetup luksClose /dev/mapper/testfs
root@host:~$ losetup -d /dev/loop/0
root@host:~$

Note: Skip the losetup setup for encrypted partitions.

Handling Multiple Users and Passwords

The LUKS header allows you to assign up to 8 different passphrases that can access the encrypted partition or
container. This is useful in environments where, for example, the CEO and CTO can each have a passphrase for
the device and the administrator(s) can have another. This makes it easy to change a passphrase in case of
employee turnover while keeping the data accessible.

Adding passwords to new slots

root@host:~$ cryptsetup luksAddKey /dev/loop/0


Enter any LUKS passphrase:
Verify passphrase:
key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.
root@host:~$

Deleting key slots

root@host:~$ cryptsetup luksDelKey /dev/loop/0 1


Command successful.
root@host:~$

Displaying LUKS Header Information

root@host:~$ cryptsetup luksDump /dev/loop/0


LUKS header information for /dev/loop/0

Version: 1
Cipher name: aes
Cipher mode: cbc-essiv:sha256
Hash spec: sha1
Payload offset: 1032
MK bits: 128
MK digest: a9 3c c2 33 0b 33 db ff d2 b9 dc 6c 01 d6 90 48 1d c1 2e bb
MK salt: 98 46 a3 28 64 35 f1 55 f0 2b 8e af f5 71 16 64
3c 30 1f 6c b1 4b 43 fd 23 49 28 a6 b0 e4 e2 14
MK iterations: 10

UUID: 089559af-41af-4dfe-b736-9d9d48d3bf53

Key Slot 0: ENABLED


Iterations: 254659
Salt: 02 da 9c c3 c7 39 a5 62 72 81 37 0f eb aa 30 47
01 1b a8 53 93 23 83 71 20 03 1b 6c 90 84 a5 6e
Key material offset: 8
AF stripes: 4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
root@host:~$

Source [http://feraga.com/node/51]
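To have a LUKS device opened automatically at boot, an entry in /etc/crypttab can be used; the mapping name and device path below are illustrative:

```text
# <target name>   <source device>   <key file>   <options>
testfs            /dev/sdb1         none         luks
```

Here "none" means the passphrase is prompted for at boot; a key file path may be given instead. The opened device then appears as /dev/mapper/testfs and can be referenced from /etc/fstab.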

dm-crypt and the CBC, ESSIV, LRW and XTS modes

CBC
Despite its deficiencies, the CBC (Cipher Block Chaining) mode is still commonly used for disk
encryption. Since no auxiliary information is stored for the IV of each sector, the IV is derived from the
sector number, its content, and some static information. Several such methods have been proposed and used.

ESSIV
Encrypted Salt-Sector Initialization Vector (ESSIV) is a method for generating initialization vectors for
block encryption for use in disk encryption.
The usual methods for generating IVs are predictable sequences of numbers based on, for example, a time
stamp or sector number, and permit certain attacks such as watermarking attacks.
ESSIV prevents such attacks by generating IVs from a combination of the sector number and a hash of
the key. It is the combination with the key, in the form of a hash, that makes the IV unpredictable.

LRW
To prevent such attacks, different modes of operation were introduced: tweakable narrow-block
encryption (LRW and XEX) and wide-block encryption (CMC and EME).
Whereas the purpose of a usual block cipher E_K is to mimic a random permutation for any secret key K, the
purpose of tweakable encryption E_K^T is to mimic a random permutation for any secret key K and any
known tweak T.

XTS
XTS is XEX-based Tweaked CodeBook mode (TCB) with CipherText Stealing (CTS). Although XEX-TCB-CTS
should be abbreviated XTC, the “C” was replaced with “S” (for “stealing”) to avoid confusion with XTC as an
abbreviation for ecstasy. Ciphertext stealing provides support for sectors whose size is not divisible by the
block size, for example 520-byte sectors and 16-byte blocks. XTS-AES was standardized on 2007-12-19 as
IEEE P1619, Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices.
The XTS proof yields strong security guarantees as long as the same key is not used to encrypt much more
than 1 terabyte of data: up to that point, no attack can succeed with probability better than approximately one
in eight quadrillion. However, this guarantee deteriorates as more data is encrypted with the same key: with a
petabyte the attack success probability grows to at most eight in a trillion, and with an exabyte to at most
eight in a million.
This means that using XTS with one key for more than a few hundred terabytes of data opens up the
possibility of attacks (and this is not mitigated by using a larger AES key size, so using a 256-bit key doesn't
change it).
The decision on the maximum amount of data to be encrypted with a single key using XTS should weigh the
above against the practical implications of the attack (which is the ability of the adversary to modify the
plaintext of a specific block, where the position of this block may not be under the adversary's control).

Source [http://en.wikipedia.org/wiki/Disk_encryption_theory]

cryptmount

In order to create a new encrypted filing system managed by cryptmount, you can use the supplied
’cryptmount-setup’ program, which can be used by the superuser to interactively configure a basic setup.
Alternatively, suppose that we wish to setup a new encrypted filing system, that will have a target-name of
“opaque”. If we have a free disk partition available, say /dev/hdb63, then we can use this directly to store
the encrypted filing system. Alternatively, if we want to store the encrypted filing system within an ordinary
file, we need to create space using a recipe such as:

dd if=/dev/zero of=/home/opaque.fs bs=1M count=512

and then replace all occurrences of '/dev/hdb63' in the following with '/home/opaque.fs'. (/dev/urandom can
be used in place of /dev/zero, debatably for extra security, but is rather slower.)
First, we need to add an entry in /etc/cryptmount/cmtab, which describes the encryption that will be used to
protect the filesystem itself and the access key, as follows:

opaque {
dev=/dev/hdb63 dir=/home/crypt
fstype=ext2 fsoptions=defaults cipher=twofish
keyfile=/etc/cryptmount/opaque.key
keyformat=builtin
}

Here, we will be using the “twofish” algorithm to encrypt the filing system itself, with the built-in key-
manager being used to protect the decryption key (to be stored in /etc/cryptmount/opaque.key).
In order to generate a secret decryption key (in /etc/cryptmount/opaque.key) that will be used to encrypt the
filing system itself, we can execute, as root:

cryptmount --generate-key 32 opaque

This will generate a 32-byte (256-bit) key, which is known to be supported by the Twofish cipher algorithm,
and store it in encrypted form after asking the system administrator for a password.
If we now execute, as root:

cryptmount --prepare opaque

we will then be asked for the password that we used when setting up /etc/cryptmount/opaque.key, which
will enable cryptmount to setup a device-mapper target (/dev/mapper/opaque). (If you receive an error
message of the form device-mapper ioctl cmd 9 failed: Invalid argument, this may mean that you have
chosen a key-size that isn't supported by your chosen cipher algorithm. You can get some information about
suitable key-sizes by checking the output from “more /proc/crypto”, and looking at the “min keysize” and
“max keysize” fields.)
We can now use standard tools to create the actual filing system on /dev/mapper/opaque:

mke2fs /dev/mapper/opaque

(It may be advisable, after the filesystem is first mounted, to check that the permissions of the top-level
directory created by mke2fs are appropriate for your needs.)
After executing

cryptmount --release opaque


mkdir /home/crypt

the encrypted filing system is ready for use. Ordinary users can mount it by typing

cryptmount -m opaque

or

cryptmount opaque

and unmount it using

cryptmount -u opaque

cryptmount keeps a record of which user mounted each filesystem in order to provide a locking mechanism
to ensure that only the same user (or root) can unmount it.

PASSWORD CHANGING
After a filesystem has been in use for a while, one may want to change the access password. For an example
target called “opaque”, this can be performed by executing:

cryptmount --change-password opaque

After successfully supplying the old password, one can then choose a new password which will be used to
re-encrypt the access key for the filesystem. (The filesystem itself is not altered or re-encrypted.)

Source: Man-page CRYPTMOUNT(8)

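eCryptfs, named in the objectives but not demonstrated above, can be sketched as follows; this assumes the ecryptfs-utils package is installed:

```shell
# Per-user encrypted ~/Private directory (run as the user in question)
ecryptfs-setup-private       # creates ~/Private (cleartext view) and ~/.Private
ecryptfs-mount-private       # mount the encrypted directory after login
ecryptfs-umount-private      # unmount it again

# For automatic mounting at login, pam_ecryptfs.so is added to the PAM stack,
# e.g. in /etc/pam.d/common-auth:
#   auth optional pam_ecryptfs.so unwrap

# Migrating an existing home directory (run as root while the user is logged out):
ecryptfs-migrate-home -u username
```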

325.4 DNS and Cryptography (weight: 5)

Description: Candidates should have experience and knowledge of cryptography in the context of DNS and
its implementation using BIND. The version of BIND covered is 9.7 or higher.

Key Knowledge Areas:

Understanding of DNSSEC and DANE.


Configure and troubleshoot BIND as an authoritative name server serving DNSSEC secured zones.
Configure BIND as a recursive name server that performs DNSSEC validation on behalf of its
clients.
Key Signing Key, Zone Signing Key, Key Tag
Key generation, key storage, key management and key rollover
Maintenance and re-signing of zones
Use DANE to publish X.509 certificate information in DNS.
Use TSIG for secure communication with BIND.

The following is a partial list of the used files, terms and utilities:

DNS, EDNS, Zones, Resource Records


DNS resource records: DS, DNSKEY, RRSIG, NSEC, NSEC3, NSEC3PARAM, TLSA
DO-Bit, AD-Bit
TSIG
named.conf
dnssec-keygen
dnssec-signzone
dnssec-settime
dnssec-dsfromkey
rndc
dig
delv
openssl
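A minimal DNSSEC signing workflow with the tools listed above might look like this; the zone example.com and the file names are illustrative:

```shell
# Generate a Zone Signing Key and a Key Signing Key (-f KSK marks the KSK)
dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.com
dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK example.com

# Sign the zone; -S ("smart signing") picks up the keys from the current
# directory, -o gives the zone origin. Produces db.example.com.signed.
dnssec-signzone -S -o example.com db.example.com

# Derive the DS record of the KSK for publication in the parent zone
dnssec-dsfromkey Kexample.com.+008+*.key

# Verify: +dnssec sets the DO bit; the AD flag in the reply signals that the
# resolver validated the answer
dig +dnssec example.com SOA @localhost
```

After key rollover or zone changes, the zone must be re-signed (or, with BIND auto-dnssec/inline-signing, rndc can trigger maintenance).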


Topic 326: Host Security


326.1 Host Hardening (weight: 3)

Description: Candidates should be able to secure computers running Linux against common threats. This
includes kernel and software configuration.

Key Knowledge Areas:

Configure BIOS and boot loader (GRUB 2) security.


Disable useless software and services.
Use sysctl for security related kernel configuration, particularly ASLR, Exec-Shield and IP / ICMP
configuration.
Limit resource usage.
Work with chroot environments.
Drop unnecessary capabilities.
Be aware of the security advantages of virtualization.

The following is a partial list of the used files, terms and utilities:

grub.cfg
chkconfig, systemctl
ulimit
/etc/security/limits.conf
pam_limits.so
chroot
sysctl
/etc/sysctl.conf
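The sysctl objectives can be sketched as an /etc/sysctl.conf fragment, applied with sysctl -p. Note that kernel.exec-shield existed only on older Red Hat kernels and is absent elsewhere; the other keys are standard:

```ini
kernel.randomize_va_space = 2              # full ASLR (stack, heap, mmap, brk)
kernel.exec-shield = 1                     # Exec-Shield (older Red Hat kernels only)
net.ipv4.icmp_echo_ignore_broadcasts = 1   # ignore broadcast pings (smurf attacks)
net.ipv4.conf.all.rp_filter = 1            # reverse-path (source address) filtering
net.ipv4.conf.all.accept_redirects = 0     # ignore ICMP redirects
net.ipv4.ip_forward = 0                    # do not route between interfaces
```

A single value can be inspected or set at runtime with sysctl -n kernel.randomize_va_space and sysctl -w key=value.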

Limit resource usage.

root@richard:/etc/security# ls -alh
total 52K
drwxr-xr-x 2 root root 4.0K 2008-11-07 18:06 .
drwxr-xr-x 149 root root 12K 2008-11-08 17:13 ..
-rw-r--r-- 1 root root 4.6K 2008-10-16 06:36 access.conf
-rw-r--r-- 1 root root 3.4K 2008-10-16 06:36 group.conf
-rw-r--r-- 1 root root 1.9K 2008-10-16 06:36 limits.conf
-rw-r--r-- 1 root root 1.5K 2008-10-16 06:36 namespace.conf
-rwxr-xr-x 1 root root 1003 2008-10-16 06:36 namespace.init
-rw-r--r-- 1 root root 3.0K 2007-10-01 20:49 pam_env.conf
-rw-r--r-- 1 root root 419 2008-10-16 06:36 sepermit.conf
-rw-r--r-- 1 root root 2.2K 2007-10-01 20:49 time.conf

Config files:

access.conf - Login access control table.


group.conf - pam_group module.
namespace.conf - configuration of polyinstantiate directories.
pam_env.conf - pam environment variables.
time.conf - pam_time module.

root@richard:/etc/security# cat limits.conf


# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:

#<domain> can be:
# - an user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#
#<domain> <type> <item> <value>
#

#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#ftp - chroot /ftp
#@student - maxlogins 4

# End of file
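These limits are enforced at login via pam_limits.so; inside a session the shell's ulimit builtin inspects and adjusts them. A minimal illustration:

```shell
ulimit -S -n        # current soft limit on open files (item "nofile")
ulimit -S -n 512    # lower the soft limit for this shell and its children
ulimit -S -n        # now reports 512
ulimit -H -n        # the hard limit is unchanged; only root may raise it
```

Any user may lower a soft limit (and raise it back up to the hard limit); lowering a hard limit is irreversible within that session.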


326.2 Host Intrusion Detection (weight: 4)

Description: Candidates should be familiar with the use and configuration of common host intrusion
detection software. This includes updates and maintenance as well as automated host scans.

Key Knowledge Areas:

Use and configure the Linux Audit system.


Use chkrootkit.
Use and configure rkhunter, including updates.
Use Linux Malware Detect.
Automate host scans using cron.
Configure and use AIDE, including rule management.
Be aware of OpenSCAP.

The following is a partial list of the used files, terms and utilities:

auditd
auditctl
ausearch, aureport
auditd.conf
audit.rules
pam_tty_audit.so
chkrootkit
rkhunter
/etc/rkhunter.conf
maldet
conf.maldet
aide
/etc/aide/aide.conf
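For the Linux Audit system named above, a small audit.rules fragment gives the flavour; the watched paths and key names are examples. Load it with auditctl -R audit.rules or place it in /etc/audit/rules.d/:

```text
# Delete any pre-existing rules, then enlarge the kernel event backlog
-D
-b 8192
# Watch writes and attribute changes on sensitive files (-k tags for ausearch)
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
# Log a syscall: every chmod by an ordinary (auid >= 1000) user
-a always,exit -F arch=b64 -S chmod -F auid>=1000 -k perm_change
```

Matching events can then be retrieved with ausearch -k identity and summarized with aureport.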


326.3 User Management and Authentication (weight: 5)

Description: Candidates should be familiar with management and authentication of user accounts. This
includes configuration and use of NSS, PAM, SSSD and Kerberos for both local and remote directories and
authentication mechanisms as well as enforcing a password policy.

Key Knowledge Areas:

Understand and configure NSS.


Understand and configure PAM.
Enforce password complexity policies and periodic password changes.
Lock accounts automatically after failed login attempts.
Configure and use SSSD.
Configure NSS and PAM for use with SSSD.

Configure SSSD authentication against Active Directory, IPA, LDAP, Kerberos and local domains.
Obtain and manage Kerberos tickets.

The following is a partial list of the used files, terms and utilities:

nsswitch.conf
/etc/login.defs
pam_cracklib.so
chage
pam_tally.so, pam_tally2.so
faillog
pam_sss.so
sssd
sssd.conf
sss_* commands
krb5.conf
kinit, klist, kdestroy
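The password-policy objectives can be illustrated with hedged PAM fragments; file locations vary by distribution and the thresholds are examples, not recommendations:

```text
# Password complexity via pam_cracklib (e.g. /etc/pam.d/common-password):
# at least 12 characters, with at least one digit, uppercase and special char
password required pam_cracklib.so retry=3 minlen=12 dcredit=-1 ucredit=-1 ocredit=-1

# Lock an account for 10 minutes after 5 failed logins (e.g. /etc/pam.d/common-auth):
auth required pam_tally2.so deny=5 unlock_time=600
```

Failure counters are inspected and reset with pam_tally2 --user name [--reset]. Periodic password changes are enforced per user with chage (for example chage -M 90 username) and site-wide defaults in /etc/login.defs.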


326.4 FreeIPA Installation and Samba Integration (weight: 4)

Description: Candidates should be familiar with FreeIPA v4.x. This includes installation and maintenance
of a server instance with a FreeIPA domain as well as integration of FreeIPA with Active Directory.

Key Knowledge Areas:

Understand FreeIPA, including its architecture and components.


Understand system and configuration prerequisites for installing FreeIPA.
Install and manage a FreeIPA server and domain.
Understand and configure Active Directory replication and Kerberos cross-realm trusts.
Be aware of sudo, autofs, SSH and SELinux integration in FreeIPA.

The following is a partial list of the used files, terms and utilities:

389 Directory Server, MIT Kerberos, Dogtag Certificate System, NTP, DNS, SSSD, certmonger
ipa, including relevant subcommands
ipa-server-install, ipa-client-install, ipa-replica-install
ipa-replica-prepare, ipa-replica-manage
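Installation, per the objectives, can be sketched as follows; realm and domain names are placeholders:

```shell
# Install and configure a FreeIPA server with integrated DNS
ipa-server-install --realm EXAMPLE.COM --domain example.com --setup-dns

# Enroll a client machine into the domain
ipa-client-install --domain example.com --mkhomedir

# Deploy a replica: in FreeIPA 4.x (domain level 1) ipa-replica-install is run
# on an enrolled client; ipa-replica-prepare belongs to the older domain level 0
ipa-replica-install

# Day-to-day management via the ipa command, after kinit admin:
ipa user-add jdoe --first=John --last=Doe
ipa trust-add ad.example.com --admin Administrator --password   # AD cross-realm trust
```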


Topic 327: Access Control


327.1 Discretionary Access Control (weight: 3)

Description: Candidates are required to understand Discretionary Access Control and know how to
implement it using Access Control Lists. Additionally, candidates are required to understand and know how
to use Extended Attributes.

Key Knowledge Areas:

Understand and manage file ownership and permissions, including SUID and SGID.
Understand and manage access control lists.
Understand and manage extended attributes and attribute classes.

The following is a partial list of the used files, terms and utilities:

getfacl
setfacl
getfattr
setfattr

Extended Attributes

In Linux, the ext2, ext3, ext4, JFS, ReiserFS and XFS filesystems support extended attributes (abbreviated
xattr) if the libattr feature is enabled in the kernel configuration. Any regular file may have a list of
extended attributes. Each attribute is denoted by a name and the associated data. The name must be a null-
terminated string, and must be prefixed by a namespace identifier and a dot character. Currently, four
namespaces exist: user, trusted, security and system. The user namespace has no restrictions with regard to
naming or contents. The system namespace is primarily used by the kernel for access control lists. The
security namespace is used by SELinux, for example.
Extended attributes are not widely used in user-space programs in Linux, although they are supported in the
2.6 and later versions of the kernel. Beagle does use extended attributes, and freedesktop.org publishes
recommendations for their use.

Source [http://en.wikipedia.org/wiki/Extended_file_attributes#Linux]

getfattr
For each file, getfattr displays the file name, and the set of extended attribute names (and optionally values)
which are associated with that file.

OPTIONS

-n name, --name=name
Dump the value of the named extended attribute.

-d, --dump
Dump the values of all extended attributes associated with pathname.
-e en, --encoding=en
Encode values after retrieving them. Valid values of en are "text", "hex", and "base64". Values encoded as text strings are enclosed in
double quotes ("), while strings encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
-h, --no-dereference
Do not follow symlinks. If pathname is a symbolic link, the symbolic link itself is examined, rather than the file it refers to.
-m pattern, --match=pattern
Only include attributes with names matching the regular expression pattern. The default value for pattern includes all the attributes in the user namespace. Refer to attr(5) for a more detailed discussion of namespaces.
--absolute-names
Do not strip leading slash characters ('/'). The default behaviour is to strip leading slash characters.
--only-values
Dump out the extended attribute value(s) only.
-R, --recursive
List the attributes of all files and directories recursively.
-L, --logical
Logical walk, follow symbolic links. The default behaviour is to follow symbolic link arguments, and to skip symbolic links encountered in subdirectories.
-P, --physical
Physical walk, skip all symbolic links. This also skips symbolic link arguments.

The output format of getfattr -d is as follows:

1: # file: somedir/
2: user.name0="value0"
3: user.name1="value1"
4: user.name2="value2"
5: ...

Line 1 identifies the file name for which the following lines are being reported. The remaining lines (lines 2
to 4 above) show the name and value pairs associated with the specified file.

Source [http://linux.about.com/library/cmd/blcmdl1_getfattr.htm]

setfattr
The setfattr command associates a new value with an extended attribute name for each specified file.

OPTIONS

-n name, --name=name
Specifies the name of the extended attribute to set.
-v value, --value=value
Specifies the new value for the extended attribute.
-x name, --remove=name
Remove the named extended attribute entirely.
-h, --no-dereference
Do not follow symlinks. If pathname is a symbolic link, it is not followed; the attribute is set on the link itself.
--restore=file
Restores extended attributes from file. The file must be in the format generated by getfattr with the --dump option. If a dash (-) is given as the file name, setfattr reads from standard input.

Example:

$ setfattr -n user.testing -v "this is a test" test-1.txt


$ getfattr -n user.testing test-1.txt

# file: test-1.txt
user.testing="this is a test"

Source [http://linux.about.com/library/cmd/blcmdl1_setfattr.htm]
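The --dump and --restore options described above can be combined into a simple backup-and-restore workflow. A minimal sketch (the path is an example, and the filesystem must support user extended attributes):

```shell
# Save all extended attributes under a directory tree, then restore them.
# --absolute-names keeps full paths so the restore works from any directory.
getfattr -d -R --absolute-names /srv/data > xattr-backup.txt
setfattr --restore=xattr-backup.txt
```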

Access Control Lists

getfacl
getfacl - get file access control lists

For each file, getfacl displays the file name, owner, group, and the Access Control List (ACL). If a directory has a default ACL, getfacl also displays the default ACL. Non-directories cannot have default ACLs.
If getfacl is used on a file system that does not support ACLs, getfacl displays the access permissions
defined by the traditional file mode permission bits.
The output format of getfacl is as follows:

$ getfacl somedir

1: # file: somedir/
2: # owner: lisa
3: # group: staff
4: user::rwx
5: user:joe:rwx #effective:r-x
6: group::rwx #effective:r-x
7: group:cool:r-x
8: mask:r-x
9: other:r-x
10: default:user::rwx
11: default:user:joe:rwx #effective:r-x
12: default:group::r-x
13: default:mask:r-x
14: default:other:---

Lines 4, 6 and 9 correspond to the user, group and other fields of the file mode permission bits. These three
are called the base ACL entries. Lines 5 and 7 are named user and named group entries. Line 8 is the
effective rights mask. This entry limits the effective rights granted to all groups and to named users. (The
file owner and others permissions are not affected by the effective rights mask; all other entries are.) Lines
10–14 display the default ACL associated with this directory. Directories may have a default ACL. Regular
files never have a default ACL.

Source: Man-page getfacl
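The effective rights that getfacl reports in its "#effective:" comments are the character-wise intersection of an entry's permissions with the mask entry. A small illustrative helper (not part of the ACL tools) makes the rule concrete:

```shell
# Compute the effective rights of an ACL entry: each of the r, w and x
# bits is granted only if it is present in BOTH the entry and the mask.
effective() {  # usage: effective ENTRY_PERMS MASK_PERMS  (e.g. rwx r-x)
  entry=$1 mask=$2 out=""
  for i in 1 2 3; do
    e=$(printf '%s' "$entry" | cut -c "$i")
    m=$(printf '%s' "$mask"  | cut -c "$i")
    if [ "$e" = "$m" ] && [ "$e" != "-" ]; then
      out="$out$e"
    else
      out="$out-"
    fi
  done
  printf '%s\n' "$out"
}

effective rwx r-x   # → r-x  (cf. line 5 of the sample output above)
```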

setfacl
setfacl - set file access control lists

OPTIONS
-b, --remove-all
Remove all extended ACL entries. The base ACL entries of the owner,
group and others are retained.

-k, --remove-default
Remove the Default ACL. If no Default ACL exists, no warnings are
issued.

-n, --no-mask
Do not recalculate the effective rights mask. The default behavior
of setfacl is to recalculate the ACL mask entry, unless a mask
entry was explicitly given. The mask entry is set to the union of
all permissions of the owning group, and all named user and group
entries. (These are exactly the entries affected by the mask
entry).

--mask
Do recalculate the effective rights mask, even if an ACL mask entry was explicitly given. (See the -n option.)

-d, --default
All operations apply to the Default ACL. Regular ACL entries in the
input set are promoted to Default ACL entries. Default ACL entries
in the input set are discarded. (A warning is issued if that happens.)

--restore=file
Restore a permission backup created by ‘getfacl -R’ or similar. All
permissions of a complete directory subtree are restored using this
mechanism. If the input contains owner comments or group comments,
and setfacl is run by root, the owner and owning group of all files
are restored as well. This option cannot be mixed with other
options except ‘--test’.

ACL ENTRIES
The setfacl utility recognizes the following ACL entry formats (blanks
inserted for clarity):

[d[efault]:] [u[ser]:]uid [:perms]


Permissions of a named user. Permissions of the file owner if
uid is empty.

[d[efault]:] g[roup]:gid [:perms]


Permissions of a named group. Permissions of the owning group if
gid is empty.

[d[efault]:] m[ask][:] [:perms]


Effective rights mask

[d[efault]:] o[ther][:] [:perms]


Permissions of others.

EXAMPLES

Granting an additional user read access


setfacl -m u:lisa:r file

Revoking write access from all groups and all named users (using the
effective rights mask)
setfacl -m m::rx file

Removing a named group entry from a file’s ACL


setfacl -x g:staff file

Copying the ACL of one file to another


getfacl file1 | setfacl --set-file=- file2

Copying the access ACL into the Default ACL


getfacl --access dir | setfacl -d -M- dir

Source: Man-page setfacl

327.2 Mandatory Access Control (weight: 4)

Description: Candidates should be familiar with Mandatory Access Control systems for Linux. Specifically,
candidates should have a thorough knowledge of SELinux. Also, candidates should be aware of other
Mandatory Access Control systems for Linux. This includes major features of these systems but not
configuration and use.

Key Knowledge Areas:

Understand the concepts of TE, RBAC, MAC and DAC.


Configure, manage and use SELinux.
Be aware of AppArmor and Smack.

The following is a partial list of the used files, terms and utilities:

getenforce, setenforce, selinuxenabled


getsebool, setsebool, togglesebool
fixfiles, restorecon, setfiles
newrole, runcon
semanage
sestatus, seinfo
apol
seaudit, seaudit-report, audit2why, audit2allow
/etc/selinux/*

TE

Type Enforcement (TE) is the primary access control mechanism in SELinux. Every process (subject) runs in a domain and every resource (object) is labelled with a type; the policy consists of rules stating which domains may access which types, and how. Anything not explicitly allowed by a rule is denied.

RBAC

Role-based access control (RBAC) is an access policy determined by the system, not the owner. RBAC is
used in commercial applications and also in military systems, where multi-level security requirements may
also exist. RBAC differs from DAC in that DAC allows users to control access to their resources, while in
RBAC, access is controlled at the system level, outside of the user's control. Although RBAC is non-
discretionary, it can be distinguished from MAC primarily in the way permissions are handled. MAC
controls read and write permissions based on a user's clearance level and additional labels. RBAC controls
collections of permissions that may include complex operations such as an e-commerce transaction, or may
be as simple as read or write. A role in RBAC can be viewed as a set of permissions.

Three primary rules are defined for RBAC:

1. Role assignment: A subject can execute a transaction only if the subject has selected or been assigned a
role.

2. Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule
ensures that users can take on only roles for which they are authorized.

3. Transaction authorization: A subject can execute a transaction only if the transaction is authorized for the
subject's active role. With rules 1 and 2, this rule ensures that users can execute only transactions for which
they are authorized.

Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level
roles subsume permissions owned by sub-roles.

Most IT vendors offer RBAC in one or more products.

Source [http://en.wikipedia.org/wiki/Access_control#Access_Control_Techniques]
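The idea that "a role in RBAC can be viewed as a set of permissions" can be sketched in a few lines of shell. The roles and permissions below are made up for illustration:

```shell
# Each role maps to a fixed set of permissions.
role_perms() {
  case $1 in
    admin)  echo "read write delete" ;;
    editor) echo "read write" ;;
    viewer) echo "read" ;;
    *)      echo "" ;;
  esac
}

# Transaction authorization (rule 3): permit an operation only if it is
# in the permission set of the subject's active role.
authorized() {  # usage: authorized ROLE PERMISSION
  case " $(role_perms "$1") " in
    *" $2 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

authorized editor write && echo allowed   # → allowed
authorized viewer delete || echo denied   # → denied
```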

MAC

Mandatory access control (MAC) is an access policy determined by the system, not the owner. MAC is used
in multilevel systems that process highly sensitive data, such as classified government and military
information. A multilevel system is a single computer system that handles multiple classification levels
between subjects and objects.

Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned to
them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the
level of trust required for access. In order to access a given object, the subject must have a sensitivity
level equal to or higher than the requested object.
Data import and export: Controlling the import of information from other systems and export to
other systems (including printers) is a critical function of MAC-based systems, which must ensure
that sensitivity labels are properly maintained and implemented so that sensitive information is
appropriately protected at all times.

Two methods are commonly used for applying mandatory access control:

Rule-based access controls: This type of control further defines specific conditions for access to a
requested object. All MAC-based systems implement a simple form of rule-based access control to
determine whether access should be granted or denied by matching:

1. An object's sensitivity label


2. A subject's sensitivity label

Lattice-based access controls: These can be used for complex access control decisions involving
multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest
lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.

Source [http://en.wikipedia.org/wiki/Access_control#Access_Control_Techniques]
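The rule-based matching of sensitivity labels described above boils down to a dominance check: a subject may read an object only if its level is equal to or higher than the object's. The labels and levels below are illustrative, not from any real policy:

```shell
# Map sensitivity labels to numeric levels.
level() {
  case $1 in
    public)       echo 0 ;;
    confidential) echo 1 ;;
    secret)       echo 2 ;;
    topsecret)    echo 3 ;;
  esac
}

# usage: can_read SUBJECT_LABEL OBJECT_LABEL
can_read() {
  [ "$(level "$1")" -ge "$(level "$2")" ]
}

can_read secret confidential && echo granted   # → granted
can_read public secret       || echo refused   # → refused
```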

DAC

Discretionary access control (DAC) is an access policy determined by the owner of an object. The owner
decides who is allowed to access the object and what privileges they have.

Two important concepts in DAC are

File and data ownership: Every object in the system has an owner. In most DAC systems, each
object's initial owner is the subject that caused it to be created. The access policy for an object is
determined by its owner.
Access rights and permissions: These are the controls that an owner can assign to other subjects for
specific resources.

Access controls may be discretionary in ACL-based or capability-based access control systems. (In
capability-based systems, there is usually no explicit concept of 'owner', but the creator of an object has a
similar degree of control over its access policy.)

Source [http://en.wikipedia.org/wiki/Access_control#Access_Control_Techniques]

SELinux configuration

getenforce
Display the current SELinux mode:

$ getenforce
Disabled

setenforce
Modify the mode in which SELinux runs while it is enabled. SELinux has two run-time modes:

Enforcing - enforce policy


Permissive - warn only

$ setenforce Enforcing

To enable or disable SELinux you need to modify /etc/selinux/config and reboot the system.
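A typical /etc/selinux/config looks like the following sketch; the available policy types and defaults vary per distribution:

```shell
# /etc/selinux/config -- sample
SELINUX=enforcing        # enforcing | permissive | disabled
SELINUXTYPE=targeted     # e.g. targeted or mls (strict on older releases)
```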

getsebool
Example: display all booleans used for squid

$ getsebool -a | grep squid


allow_httpd_squid_script_anon_write --> off
squid_connect_any --> off
squid_disable_trans --> off

semanage
Example: display all file contexts related to squid

$ semanage fcontext -l | grep squid


/etc/squid(/.*)? all files system_u:object_r:squid_conf_t:s0
/var/log/squid(/.*)? all files system_u:object_r:squid_log_t:s0
/var/spool/squid(/.*)? all files system_u:object_r:squid_cache_t:s0
/usr/share/squid(/.*)? all files system_u:object_r:squid_conf_t:s0
/var/cache/squid(/.*)? all files system_u:object_r:squid_cache_t:s0
/usr/sbin/squid regular file system_u:object_r:squid_exec_t:s0
/var/run/squid\.pid regular file system_u:object_r:squid_var_run_t:s0
/usr/lib/squid/cachemgr\.cgi regular file system_u:object_r:httpd_squid_script_ex
/usr/lib64/squid/cachemgr\.cgi regular file system_u:object_r:httpd_squid_script_ex

setsebool
Example: allow anonymous FTP writes

$ setsebool allow_ftp_anon_write=on

Note that setsebool changes do not survive a reboot by default; add the -P option to write the new value into the policy permanently.

Be aware of AppArmor and Smack.


selinuxenabled
togglesebool
fixfiles, restorecon, setfiles
newrole, runcon
sestatus, seinfo
apol
seaudit, seaudit-report, audit2why, audit2allow
/etc/selinux/*

327.3 Network File Systems (weight: 3)

Description: Candidates should have experience and knowledge of security issues in use and configuration
of NFSv4 clients and servers as well as CIFS client services. Earlier versions of NFS are not required
knowledge.

Key Knowledge Areas:

Understand NFSv4 security issues and improvements.


Configure NFSv4 server and clients.
Understand and configure NFSv4 authentication mechanisms (LIPKEY, SPKM, Kerberos).
Understand and use NFSv4 pseudo file system.
Understand and use NFSv4 ACLs.
Configure CIFS clients.
Understand and use CIFS Unix Extensions.
Understand and configure CIFS security modes (NTLM, Kerberos).
Understand and manage mapping and handling of CIFS ACLs and SIDs in a Linux system.

The following is a partial list of the used files, terms and utilities:

/etc/exports
/etc/idmap.conf
nfs4acl
mount.cifs parameters related to ownership, permissions and security modes
winbind
getcifsacl, setcifsacl

Configuration

limit access
For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ with the host
bob.example.com with read and write permissions. /etc/exports:

/tmp/nfs/ bob.example.com(rw)

Access can also be restricted by protecting the portmap service; this can be done with libwrap (TCP wrappers) and iptables.

iptables -A INPUT -p udp ! -s 192.168.0.0/24 --dport 111 -j DROP

#
# hosts.allow This file describes the names of the hosts which are
# allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#

portmap : 127. : ALLOW


portmap : ALL : DENY

Do Not Use the no_root_squash Option

By default, NFS maps requests from the remote root user to the unprivileged user nfsnobody (root squashing). Among other things, this prevents a remote root user from planting setuid-root programs on the share. The no_root_squash option disables this protection and should be avoided.
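Root squashing is controlled per export in /etc/exports; a sketch (hostname and path are examples):

```shell
# /etc/exports -- root_squash is the default and should normally be kept
/srv/nfs  bob.example.com(rw,sync,root_squash)
# /srv/nfs  bob.example.com(rw,sync,no_root_squash)   # avoid: remote root keeps uid 0
```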

Configure /etc/idmapd.conf
The id mapper daemon is required on both client and server. It maps NFSv4 username@domain user strings
back and forth into numeric UIDs and GIDs when necessary. The client and server must have matching
domains in this configuration file:

[General]

Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = vanemery.com

[Mapping]

Nobody-User = nfsnobody
Nobody-Group = nfsnobody

Based on information from source [http://www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html]
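To tie this together, a minimal NFSv4 server/client sketch might look as follows; the hostnames, paths and the use of Kerberos are assumptions for illustration:

```shell
# Server: /etc/exports -- fsid=0 marks the NFSv4 pseudo file system root;
# other exports appear as paths under it
/export       bob.example.com(rw,sync,fsid=0,crossmnt)
/export/data  bob.example.com(rw,sync,sec=krb5)

# Client: paths are mounted relative to the pseudo-root; sec=krb5 selects
# Kerberos authentication (krb5i adds integrity, krb5p adds privacy)
mount -t nfs4 -o sec=krb5 server.example.com:/data /mnt/data
```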

Understand NFSv4 security issues and improvements.


Configure NFSv4 server and clients.
Understand and configure NFSv4 authentication mechanisms (LIPKEY, SPKM, Kerberos).
Understand and use NFSv4 pseudo file system.
Understand and use NFSv4 ACLs.
Configure CIFS clients.
Understand and use CIFS Unix Extensions.
Understand and configure CIFS security modes (NTLM, Kerberos).
Understand and manage mapping and handling of CIFS ACLs and SIDs in a Linux system.

The following is a partial list of the used files, terms and utilities:

nfs4acl
mount.cifs parameters related to ownership, permissions and security modes
winbind
getcifsacl, setcifsacl

Topic 328: Network Security


328.1 Network Hardening (weight: 4)

Description: Candidates should be able to secure networks against common threats. This includes
verification of the effectiveness of security measures.

Key Knowledge Areas:

Configure FreeRADIUS to authenticate network nodes.


Use nmap to scan networks and hosts, including different scan methods.

Use Wireshark to analyze network traffic, including filters and statistics.
Identify and deal with rogue router advertisements and DHCP messages.

The following is a partial list of the used files, terms and utilities:

radiusd
radmin
radtest, radclient
radlast, radwho
radiusd.conf
/etc/raddb/*
nmap
wireshark
tshark
tcpdump
ndpmon

nessus

In computer security, Nessus is a proprietary comprehensive vulnerability-scanning program. It is free of charge for personal use in a non-enterprise environment. Its goal is to detect potential vulnerabilities on the tested systems. For example:

Vulnerabilities that allow a remote cracker to control or access sensitive data on a system.
Misconfiguration (e.g. open mail relay, missing patches, etc).
Default passwords, a few common passwords, and blank/absent passwords on some system
accounts. Nessus can also call Hydra (an external tool) to launch a dictionary attack.
Denials of service against the TCP/IP stack by using mangled packets

On UNIX (including Mac OS X), it consists of nessusd, the Nessus daemon, which does the scanning, and
nessus, the client, which controls scans and presents the vulnerability results to the user. For Windows,
Nessus 3 installs as an executable and has a self-contained scanning, reporting and management system.

Operation
In typical operation, Nessus begins by doing a port scan with one of its four internal portscanners (or it can
optionally use Amap or Nmap ) to determine which ports are open on the target and then tries various
exploits on the open ports. The vulnerability tests, available as subscriptions, are written in NASL (Nessus
Attack Scripting Language), a scripting language optimized for custom network interaction.

Tenable Network Security produces several dozen new vulnerability checks (called plugins) each week,
usually on a daily basis. These checks are available for free to the general public seven days after they are
initially published. Nessus users who require support and the latest vulnerability checks should contact
Tenable Network Security for a Direct Feed subscription which is not free. Commercial customers are also
allowed to access vulnerability checks without the seven-day delay.

Optionally, the results of the scan can be reported in various formats, such as plain text, XML, HTML and
LaTeX. The results can also be saved in a knowledge base for reference against future vulnerability scans.
On UNIX, scanning can be automated through the use of a command-line client. There exist many different
commercial, free and open source tools for both UNIX and Windows to manage individual or distributed
Nessus scanners.

If the user chooses to do so (by disabling the option 'safe checks'), some of Nessus's vulnerability tests may
try to cause vulnerable services or operating systems to crash. This lets a user test the resistance of a device
before putting it in production.

Nessus provides additional functionality beyond testing for known network vulnerabilities. For instance, it
can use Windows credentials to examine patch levels on computers running the Windows operating system,
and can perform password auditing using dictionary and brute force methods. Nessus 3 can also audit
systems to make sure they have been configured per a specific policy, such as the NSA's guide for
hardening Windows servers.

Source [http://en.wikipedia.org/wiki/Nessus_(software)]

nmap

Nmap (“Network Mapper”) is a free and open source (license) utility for network exploration or security
auditing. Many systems and network administrators also find it useful for tasks such as network inventory,
managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in
novel ways to determine what hosts are available on the network, what services (application name and
version) those hosts are offering, what operating systems (and OS versions) they are running, what type of
packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large
networks, but works fine against single hosts.
Example

# nmap -A -T4 scanme.nmap.org

Starting Nmap ( http://nmap.org )


Interesting ports on scanme.nmap.org (64.13.134.52):
Not shown: 994 filtered ports
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 4.3 (protocol 2.0)
25/tcp closed smtp
53/tcp open domain ISC BIND 9.3.4
70/tcp closed gopher
80/tcp open http Apache httpd 2.2.2 ((Fedora))
|_ HTML title: Go ahead and ScanMe!
113/tcp closed auth
Device type: general purpose
Running: Linux 2.6.X
OS details: Linux 2.6.20-1 (Fedora Core 5)

TRACEROUTE (using port 80/tcp)


HOP RTT ADDRESS
[Cut first seven hops for brevity]
8 10.59 so-4-2-0.mpr3.pao1.us.above.net (64.125.28.142)
9 11.00 metro0.sv.svcolo.com (208.185.168.173)
10 9.93 scanme.nmap.org (64.13.134.52)

Nmap done: 1 IP address (1 host up) scanned in 17.00 seconds

Nmap 4.76 ( http://nmap.org )


Usage: nmap [Scan Type(s)] [Options] {target specification}
TARGET SPECIFICATION:
Can pass hostnames, IP addresses, networks, etc.
Ex: scanme.nmap.org, microsoft.com/24, 192.168.0.1; 10.0.0-255.1-254
-iL <inputfilename>: Input from list of hosts/networks
-iR <num hosts>: Choose random targets
--exclude <host1[,host2][,host3],...>: Exclude hosts/networks
--excludefile <exclude_file>: Exclude list from file
HOST DISCOVERY:
-sL: List Scan - simply list targets to scan
-sP: Ping Scan - go no further than determining if host is online
-PN: Treat all hosts as online -- skip host discovery
-PS/PA/PU [portlist]: TCP SYN/ACK or UDP discovery to given ports
-PE/PP/PM: ICMP echo, timestamp, and netmask request discovery probes
-PO [protocol list]: IP Protocol Ping
-n/-R: Never do DNS resolution/Always resolve [default: sometimes]
--dns-servers <serv1[,serv2],...>: Specify custom DNS servers
--system-dns: Use OS's DNS resolver
SCAN TECHNIQUES:
-sS/sT/sA/sW/sM: TCP SYN/Connect()/ACK/Window/Maimon scans
-sU: UDP Scan
-sN/sF/sX: TCP Null, FIN, and Xmas scans
--scanflags <flags>: Customize TCP scan flags
-sI <zombie host[:probeport]>: Idle scan
-sO: IP protocol scan
-b <FTP relay host>: FTP bounce scan
--traceroute: Trace hop path to each host
--reason: Display the reason a port is in a particular state
PORT SPECIFICATION AND SCAN ORDER:
-p <port ranges>: Only scan specified ports
Ex: -p22; -p1-65535; -p U:53,111,137,T:21-25,80,139,8080
-F: Fast mode - Scan fewer ports than the default scan
-r: Scan ports consecutively - don't randomize
--top-ports <number>: Scan <number> most common ports
--port-ratio <ratio>: Scan ports more common than <ratio>
SERVICE/VERSION DETECTION:
-sV: Probe open ports to determine service/version info
--version-intensity <level>: Set from 0 (light) to 9 (try all probes)
--version-light: Limit to most likely probes (intensity 2)
--version-all: Try every single probe (intensity 9)
--version-trace: Show detailed version scan activity (for debugging)
SCRIPT SCAN:
-sC: equivalent to --script=default
--script=<Lua scripts>: <Lua scripts> is a comma separated list of
directories, script-files or script-categories
--script-args=<n1=v1,[n2=v2,...]>: provide arguments to scripts
--script-trace: Show all data sent and received
--script-updatedb: Update the script database.
OS DETECTION:
-O: Enable OS detection
--osscan-limit: Limit OS detection to promising targets
--osscan-guess: Guess OS more aggressively
TIMING AND PERFORMANCE:
Options which take <time> are in milliseconds, unless you append 's'
(seconds), 'm' (minutes), or 'h' (hours) to the value (e.g. 30m).
-T[0-5]: Set timing template (higher is faster)
--min-hostgroup/max-hostgroup <size>: Parallel host scan group sizes
--min-parallelism/max-parallelism <time>: Probe parallelization
--min-rtt-timeout/max-rtt-timeout/initial-rtt-timeout <time>: Specifies
probe round trip time.
--max-retries <tries>: Caps number of port scan probe retransmissions.
--host-timeout <time>: Give up on target after this long
--scan-delay/--max-scan-delay <time>: Adjust delay between probes
--min-rate <number>: Send packets no slower than <number> per second
--max-rate <number>: Send packets no faster than <number> per second
FIREWALL/IDS EVASION AND SPOOFING:
-f; --mtu <val>: fragment packets (optionally w/given MTU)
-D <decoy1,decoy2[,ME],...>: Cloak a scan with decoys
-S <IP_Address>: Spoof source address
-e <iface>: Use specified interface
-g/--source-port <portnum>: Use given port number
--data-length <num>: Append random data to sent packets
--ip-options <options>: Send packets with specified ip options
--ttl <val>: Set IP time-to-live field
--spoof-mac <mac address/prefix/vendor name>: Spoof your MAC address
--badsum: Send packets with a bogus TCP/UDP checksum
OUTPUT:
-oN/-oX/-oS/-oG <file>: Output scan in normal, XML, s|<rIpt kIddi3,
and Grepable format, respectively, to the given filename.
-oA <basename>: Output in the three major formats at once
-v: Increase verbosity level (use twice or more for greater effect)
-d[level]: Set or increase debugging level (Up to 9 is meaningful)
--open: Only show open (or possibly open) ports
--packet-trace: Show all packets sent and received
--iflist: Print host interfaces and routes (for debugging)
--log-errors: Log errors/warnings to the normal-format output file
--append-output: Append to rather than clobber specified output files
--resume <filename>: Resume an aborted scan
--stylesheet <path/URL>: XSL stylesheet to transform XML output to HTML
--webxml: Reference stylesheet from Nmap.Org for more portable XML
--no-stylesheet: Prevent associating of XSL stylesheet w/XML output
MISC:
-6: Enable IPv6 scanning
-A: Enables OS detection and Version detection, Script scanning and Traceroute
--datadir <dirname>: Specify custom Nmap data file location
--send-eth/--send-ip: Send using raw ethernet frames or IP packets
--privileged: Assume that the user is fully privileged
--unprivileged: Assume the user lacks raw socket privileges
-V: Print version number
-h: Print this help summary page.
EXAMPLES:
nmap -v -A scanme.nmap.org
nmap -v -sP 192.168.0.0/16 10.0.0.0/8
nmap -v -iR 10000 -PN -p 80
SEE THE MAN PAGE FOR MANY MORE OPTIONS, DESCRIPTIONS, AND EXAMPLES

Source [http://nmap.org/]

wireshark

Wireshark is a free packet sniffer computer application. It is used for network troubleshooting, analysis,
software and communications protocol development, and education. In June 2006 the project was renamed
from Ethereal due to trademark issues.
Wireshark is software that “understands” the structure of different networking protocols. Thus, it is able to
display the encapsulation and the fields along with their meanings of different packets specified by different
networking protocols. Wireshark uses pcap to capture packets, so it can only capture the packets on the
networks supported by pcap.

Data can be captured “from the wire” from a live network connection or read from a file that records
the already-captured packets.
Live data can be read from a number of types of network, including Ethernet, IEEE 802.11, PPP, and
loopback.
Captured network data can be browsed via a GUI, or via the terminal (command line) version of the
utility, tshark.
Captured files can be programmatically edited or converted via command-line switches to the
“editcap” program.
Display filters can also be used to selectively highlight and color packet summary information.
Data display can be refined using a display filter.
Hundreds of protocols can be dissected.

Wireshark's native network trace file format is the libpcap format supported by libpcap and WinPcap, so it
can read capture files from applications such as tcpdump and CA NetMaster that use that format. It can also
read captures from other network analyzers, such as snoop, Network General's Sniffer, and Microsoft
Network Monitor.

Source [http://en.wikipedia.org/wiki/Wireshark]
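The command-line counterpart tshark uses the same two filter languages as Wireshark. A capture-and-analyze sketch (the interface name and filter expressions are examples):

```shell
# Capture 100 packets on eth0 using a libpcap capture filter (-f), then
# re-read the file applying a Wireshark display filter (-R in the 1.0
# releases documented here; newer releases use -Y)
tshark -i eth0 -f "tcp port 80" -c 100 -w web.pcap
tshark -r web.pcap -R "http.request" -T fields -e http.host
```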

Command-line options

Wireshark 1.0.3
Interactively dump and analyze network traffic.
See http://www.wireshark.org for more information.

Copyright 1998-2008 Gerald Combs <[email protected]> and contributors.


This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Usage: wireshark [options] ... [ <infile> ]

Capture interface:
-i <interface> name or idx of interface (def: first non-loopback)
-f <capture filter> packet filter in libpcap filter syntax
-s <snaplen> packet snapshot length (def: 65535)
-p don't capture in promiscuous mode
-k start capturing immediately (def: do nothing)
-Q quit Wireshark after capturing
-S update packet display when new packets are captured
-l turn on automatic scrolling while -S is in use
-y <link type> link layer type (def: first appropriate)
-D print list of interfaces and exit
-L print list of link-layer types of iface and exit

Capture stop conditions:


-c <packet count> stop after n packets (def: infinite)
-a <autostop cond.> ... duration:NUM - stop after NUM seconds
filesize:NUM - stop this file after NUM KB
files:NUM - stop after NUM files
Capture output:
-b <ringbuffer opt.> ... duration:NUM - switch to next file after NUM secs
filesize:NUM - switch to next file after NUM KB
files:NUM - ringbuffer: replace after NUM files
Input file:
-r <infile> set the filename to read from (no pipes or stdin!)

Processing:
-R <read filter> packet filter in Wireshark display filter syntax
-n disable all name resolutions (def: all enabled)
-N <name resolve flags> enable specific name resolution(s): "mntC"

User interface:
-C <config profile> start with specified configuration profile
-g <packet number> go to specified packet number after "-r"
-m <font> set the font name used for most text
-t ad|a|r|d|dd|e output format of time stamps (def: r: rel. to first)
-X <key>:<value> eXtension options, see man page for details
-z <statistics> show various statistics, see man page for details
Output:
-w <outfile|-> set the output filename (or '-' for stdout)

Miscellaneous:
-h display this help and exit
-v display version info and exit
-P <key>:<path> persconf:path - personal configuration files
persdata:path - personal data files
-o <name>:<value> ... override preference or recent setting
--display=DISPLAY X display to use

Configure FreeRADIUS to authenticate network nodes.


Identify and deal with rogue router advertisements and DHCP messages.
radiusd
radmin
radtest, radclient
radlast, radwho
radiusd.conf
/etc/raddb/*
tshark
ndpmon

328.2 Network Intrusion Detection (weight: 4)

Description: Candidates should be familiar with the use and configuration of network security scanning,
network monitoring and network intrusion detection software. This includes updating and maintaining the
security scanners.

Key Knowledge Areas:

Implement bandwidth usage monitoring.


Configure and use Snort, including rule management.
Configure and use OpenVAS, including NASL.

The following is a partial list of the used files, terms and utilities:

ntop
Cacti
snort
snort-stat
/etc/snort/*
openvas-adduser, openvas-rmuser
openvas-nvt-sync
openvassd
openvas-mkcert
/etc/openvas/*

ntop

ntop is a network traffic probe that shows the network usage, similar to what the popular top Unix command
does. Ntop is based on libpcap and it has been written in a portable way in order to virtually run on every
Unix platform.

How Ntop Works

Ntop users can use a web browser to navigate through ntop traffic information (ntop acts as a web server) and get a dump of the network status. In the latter case, ntop can be seen as a simple RMON-like agent with an embedded web interface. Its main characteristics are:

a web interface
limited configuration and administration via the web interface
reduced CPU and memory usage (they vary according to network size and traffic).

Using Ntop
This is a very simple procedure. Run this command in the bash shell:

# ntop -P /etc/ntop -W4242 -d

What does this mean? The -P option reads the configuration files in the “/etc/ntop” directory. The -W
option sets the port on which we want to access Ntop through our web browser; if you don't specify this
option, the default port is 3000. Finally, the -d option runs Ntop in daemon mode, which means Ntop
keeps running in the background for as long as the system is up.
Once started in web mode, Ntop enables its web server and allows us to view and use its statistics through
any web browser at http://host:portnumber/.
The example on our test machine:

http://192.168.0.6:4242/

Source [http://wiki.engardelinux.org/index.php/HOWTO:Installing_and_running_NTOP]

snort

Configure Snort
We need to modify the snort.conf file to suit our needs. Open /etc/snort/snort.conf with your favorite text
editor (nano, vi, vim, etc.).

# vi /etc/snort/snort.conf

Change “var HOME_NET any” to “var HOME_NET 192.168.1.0/24” (your home network may differ from
192.168.1.0). Change “var EXTERNAL_NET any” to “var EXTERNAL_NET !$HOME_NET” (this states
that everything except HOME_NET is external). Change “var RULE_PATH ../rules” to “var
RULE_PATH /etc/snort/rules”. Save and quit.
Change permissions on the conf file to keep things secure (thanks rojo):

# chmod 600 /etc/snort/snort.conf
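The three edits above can also be scripted with sed. A minimal sketch, applied here to a scratch copy so nothing live is touched (adjust the HOME_NET value to your own network, and back up the real /etc/snort/snort.conf before editing it):

```shell
# Demo copy containing the three default lines we want to change
printf 'var HOME_NET any\nvar EXTERNAL_NET any\nvar RULE_PATH ../rules\n' > /tmp/snort.conf

# Apply the same three substitutions described above
sed -i 's|^var HOME_NET any|var HOME_NET 192.168.1.0/24|' /tmp/snort.conf
sed -i 's|^var EXTERNAL_NET any|var EXTERNAL_NET !$HOME_NET|' /tmp/snort.conf
sed -i 's|^var RULE_PATH ../rules|var RULE_PATH /etc/snort/rules|' /tmp/snort.conf

cat /tmp/snort.conf
```

Point the sed commands at /etc/snort/snort.conf to change the real configuration.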

Time to test Snort


In the terminal type:

# snort -c /etc/snort/snort.conf

If everything went well you should see an ASCII pig.
To end the test, hit Ctrl+C.

Updating rules
modify /etc/oinkmaster.conf so that:

url = http://www.snort.org/pub-bin/oinkmaster.cgi/<your registered key>/snortrules-snapshot-CURRENT.tar.gz

Then:

groupadd snort
useradd -g snort snort -s /bin/false
chmod 640 /etc/oinkmaster.conf
chown root:snort /etc/oinkmaster.conf
nano -w /usr/local/bin/oinkdaily

In /usr/local/bin/oinkdaily, include the following, uncommenting the appropriate line:

#!/bin/bash

## if you have "mail" installed, uncomment this to have oinkmaster mail you reports:
# /usr/sbin/oinkmaster -C /etc/oinkmaster.conf -o /etc/snort/rules 2>&1 | mail -s "oinkmaster" [email protected]

## otherwise, use this one:


# /usr/sbin/oinkmaster -C /etc/oinkmaster.conf -o /etc/snort/rules >/dev/null 2>&1

Finally:

chmod 700 /usr/local/bin/oinkdaily


chown -R snort:snort /usr/local/bin/oinkdaily /etc/snort/rules
crontab -u snort -e

In user snort's crontab, to launch the update on the 30th minute of the 5th hour of every day, add the
following:

30 5 * * * /usr/local/bin/oinkdaily

But you should randomize those times (for instance, 2:28 or 4:37 or 6:04) to reduce the impact on
snort.org's servers.

Source [http://www.howtoforge.com/intrusion-detection-with-snort-mysql-apache2-on-ubuntu-7.10-updated]
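Besides downloaded rule sets, rule management also covers your own rules, conventionally kept in /etc/snort/rules/local.rules. A minimal example rule (local rules should use sid values of 1000000 or above to avoid clashing with official rules):

```
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP ping detected"; sid:1000001; rev:1;)
```

Make sure local.rules is included from snort.conf (an "include $RULE_PATH/local.rules" line) and restart Snort after changing rules.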

Configure and use OpenVAS, including NASL.


Cacti
openvas-adduser, openvas-rmuser
openvas-nvt-sync
openvassd
openvas-mkcert
/etc/openvas/*

328.3 Packet Filtering (weight: 5)

Description: Candidates should be familiar with the use and configuration of packet filters. This includes

netfilter, iptables and ip6tables as well as basic knowledge of nftables, nft and ebtables.

Key Knowledge Areas:

Understand common firewall architectures, including DMZ.


Understand and use netfilter, iptables and ip6tables, including standard modules, tests and targets.
Implement packet filtering for both IPv4 and IPv6.
Implement connection tracking and network address translation.
Define IP sets and use them in netfilter rules.
Have basic knowledge of nftables and nft.
Have basic knowledge of ebtables.
Be aware of conntrackd.

The following is a partial list of the used files, terms and utilities:

iptables
ip6tables
iptables-save, iptables-restore
ip6tables-save, ip6tables-restore
ipset
nft
ebtables

iptables

Basic Commands
Typing

$ sudo iptables -L

lists your current rules in iptables. If you have just set up your server, you will have no rules, and you
should see

Chain INPUT (policy ACCEPT)


target prot opt source destination

Chain FORWARD (policy ACCEPT)


target prot opt source destination

Chain OUTPUT (policy ACCEPT)


target prot opt source destination

Basic Iptables Options


Here are explanations for some of the iptables options you will see in this tutorial. Don't worry about
understanding everything here now, but remember to come back and look at this list as you encounter new
options later on.

-A - Append this rule to a rule chain. Valid chains for what we're doing are INPUT, FORWARD and
OUTPUT, but we mostly deal with INPUT in this tutorial, which affects only incoming traffic.

-L - List the current filter rules.

-m state - Allow filter rules to match based on connection state. Permits the use of the --state option.

--state - Define the list of states for the rule to match on. Valid states are:

NEW - The connection has not yet been seen.


RELATED - The connection is new, but is related to another connection already permitted.
ESTABLISHED - The connection is already established.
INVALID - The traffic couldn't be identified for some reason.

-m limit - Require the rule to match only a limited number of times. Allows the use of the --limit option.
Useful for limiting logging rules.

--limit - The maximum matching rate, given as a number followed by “/second”, “/minute”, “/hour”,
or “/day” depending on how often you want the rule to match. If this option is not used and -m limit
is used, the default is “3/hour”.

-p - The connection protocol used.

--dport - The destination port(s) required for this rule. A single port may be given, or a range may be given
as start:end, which will match all ports from start to end, inclusive.

-j - Jump to the specified target. By default, iptables allows four targets:

ACCEPT - Accept the packet and stop processing rules in this chain.
REJECT - Reject the packet and notify the sender that we did so, and stop processing rules in this
chain.
DROP - Silently ignore the packet, and stop processing rules in this chain.
LOG - Log the packet, and continue processing more rules in this chain. Allows the use of the
--log-prefix and --log-level options.

--log-prefix - When logging, put this text before the log message. Use double quotes around the text to use.

--log-level - Log using the specified syslog level. 7 is a good choice unless you specifically need something
else.

-i - Only match if the packet is coming in on the specified interface.

-I - Inserts a rule. Takes two options, the chain to insert the rule into, and the rule number it should be.

-I INPUT 5 would insert the rule into the INPUT chain and make it the 5th rule in the list.

-v - Display more information in the output. Useful if you have rules that look similar, since without -v they may appear identical.

-s, --source - address[/mask] source specification

-d, --destination - address[/mask] destination specification

-o, --out-interface - name[+] output network interface name ([+] for wildcard)

Allowing Established Sessions


We can allow established sessions to receive traffic:

$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

Allowing Incoming Traffic on Specific Ports


You could start by blocking traffic, but you might be working over SSH, where you would need to allow
SSH before blocking everything else.
To allow incoming traffic on the default SSH port (22), you could tell iptables to allow all TCP traffic on
that port to come in.

$ sudo iptables -A INPUT -p tcp --dport ssh -j ACCEPT

Referring back to the list above, you can see that this tells iptables:

append this rule to the input chain (-A INPUT) so we look at incoming traffic
check to see if it is TCP (-p tcp).
if so, check to see if the input goes to the SSH port (--dport ssh).
if so, accept the input (-j ACCEPT).

Let's check the rules: (only the first few lines shown, you will see more)

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh

Now, let's allow all incoming web traffic

$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

Checking our rules, we have

$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere tcp dpt:www

We have specifically allowed tcp traffic to the ssh and web ports, but as we have not blocked anything, all
traffic can still come in.

Blocking Traffic
Once a decision is made to accept a packet, no more rules affect it. As our rules allowing ssh and web traffic
come first, as long as our rule to block all traffic comes after them, we can still accept the traffic we want.
All we need to do is put the rule to block all traffic at the end.

$ sudo iptables -A INPUT -j DROP


$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere tcp dpt:www
DROP all -- anywhere anywhere

Because we didn't specify an interface or a protocol, any traffic for any port on any interface is blocked,
except for the web, ssh and established traffic we allowed above.

Editing iptables
The only problem with our setup so far is that even the loopback interface is blocked. We could have written
the drop rule for just eth0 by specifying -i eth0, but we could also add a rule for the loopback. If we append this
rule, it will come too late - after all the traffic has been dropped. We need to insert this rule before that.
Since the loopback carries a lot of traffic, we'll insert it as the first rule so it's processed first.

$ sudo iptables -I INPUT 1 -i lo -j ACCEPT


$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere tcp dpt:www
DROP all -- anywhere anywhere

The first and last lines look nearly the same, so we will list iptables in greater detail.

$ sudo iptables -L -v

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)


pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- lo any anywhere anywhere
0 0 ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:ssh
0 0 ACCEPT tcp -- any any anywhere anywhere tcp dpt:www
0 0 DROP all -- any any anywhere anywhere

You can now see a lot more information. This rule is actually very important, since many programs use the
loopback interface to communicate with each other. If you don't allow them to talk, you could break those
programs!

Logging
In the above examples none of the traffic will be logged. If you would like to log dropped packets to syslog,
this would be the quickest way:

$ sudo iptables -I INPUT 5 -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

Saving iptables
If you were to reboot your machine right now, your iptables configuration would disappear. Rather than
retype all the rules each time you reboot, you can save the configuration and have it restored automatically
at start-up. To save and restore the configuration, use iptables-save and iptables-restore.

Source [https://help.ubuntu.com/community/IptablesHowTo]
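A minimal sketch of the save/restore cycle (the file path is only a convention; many distributions also ship helpers such as iptables-persistent or an if-pre-up.d script to automate the restore at boot):

```shell
# Dump the current ruleset to a file
sudo iptables-save > /etc/iptables.rules
# Load it back later, e.g. from a boot script
sudo iptables-restore < /etc/iptables.rules
```

The same pattern applies to IPv6 rules with ip6tables-save and ip6tables-restore.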

ip6tables
ip6tables-save, ip6tables-restore
ipset
nft

ebtables
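The tools above are only listed in the objectives. As a minimal sketch of the ipset and nft objectives (the table, set name and address are illustrative placeholders, not taken from this page):

```shell
# Create an IP set and drop traffic from its members
sudo ipset create blocklist hash:ip
sudo ipset add blocklist 203.0.113.5
sudo iptables -I INPUT -m set --match-set blocklist src -j DROP

# A comparable drop rule expressed with nftables
sudo nft add table inet filter
sudo nft 'add chain inet filter input { type filter hook input priority 0; }'
sudo nft add rule inet filter input ip saddr 203.0.113.5 drop
```

The advantage of an IP set is that thousands of addresses can be matched by a single iptables rule, and the set can be updated without touching the ruleset.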

328.4 Virtual Private Networks (weight: 4)

Description: Candidates should be familiar with the use of OpenVPN and IPsec.

Key Knowledge Areas:

Configure and operate OpenVPN server and clients for both bridged and routed VPN networks.
Configure and operate IPsec server and clients for routed VPN networks using IPsec-Tools / racoon.
Awareness of L2TP.

The following is a partial list of the used files, terms and utilities:

/etc/openvpn/*
openvpn server and client
setkey
/etc/ipsec-tools.conf
/etc/racoon/racoon.conf

Configuration

OpenVPN is a full-featured open source SSL VPN solution that accommodates a wide range of
configurations, including remote access, site-to-site VPNs, Wi-Fi security, and enterprise-scale remote
access solutions with load balancing, failover, and fine-grained access-controls. Starting with the
fundamental premise that complexity is the enemy of security, OpenVPN offers a cost-effective, lightweight
alternative to other VPN technologies that is well-targeted for the SME and enterprise markets.

Simple Example
This example demonstrates a bare-bones point-to-point OpenVPN configuration. A VPN tunnel will be
created with a server endpoint of 10.8.0.1 and a client endpoint of 10.8.0.2. Encrypted communication
between client and server will occur over UDP port 1194, the default OpenVPN port.

Generate a static key:

openvpn --genkey --secret static.key

Copy the static key to both client and server, over a pre-existing secure channel. Server configuration file

dev tun
ifconfig 10.8.0.1 10.8.0.2
secret static.key

Client configuration file

remote myremote.mydomain
dev tun
ifconfig 10.8.0.2 10.8.0.1
secret static.key

Firewall configuration
Make sure that:

UDP port 1194 is open on the server, and

the virtual TUN interface used by OpenVPN is not blocked on either the client or server (on Linux,
the TUN interface will probably be called tun0 while on Windows it will probably be called
something like Local Area Connection n unless you rename it in the Network Connections control
panel).

Bear in mind that 90% of all connection problems encountered by new OpenVPN users are firewall-related.

Testing the VPN


Run OpenVPN using the respective configuration files on both server and client, changing
myremote.mydomain in the client configuration to the domain name or public IP address of the server.
To verify that the VPN is running, you should be able to ping 10.8.0.2 from the server and 10.8.0.1 from the
client.

Expanding on the Simple Example

Use compression on the VPN link
Add the following line to both client and server configuration files:

comp-lzo

Make the link more resistant to connection failures

Deal with:

keeping a connection through a NAT router/firewall alive, and
following the DNS name of the server if it changes its IP address.

Add the following to both client and server configuration files:

keepalive 10 60
ping-timer-rem
persist-tun
persist-key

Run OpenVPN as a daemon (Linux/BSD/Solaris/MacOSX only)


Run OpenVPN as a daemon and drop privileges to user/group nobody.
Add to configuration file (client and/or server):

user nobody
group nobody
daemon

Allow client to reach entire server subnet


Suppose the OpenVPN server is on a subnet 192.168.4.0/24. Add the following to client configuration:

route 192.168.4.0 255.255.255.0

Then on the server side, add a route to the server's LAN gateway that routes 10.8.0.2 to the OpenVPN
server machine (only necessary if the OpenVPN server machine is not also the gateway for the server-side
LAN). Also, don't forget to enable IP Forwarding on the OpenVPN server machine.

Source [http://openvpn.net/index.php/documentation/miscellaneous/static-key-mini-howto.html]
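Enabling IP forwarding, as mentioned above, is a one-liner (add net.ipv4.ip_forward = 1 to /etc/sysctl.conf to make it persistent across reboots):

```shell
# Enable IPv4 forwarding in the running kernel
sudo sysctl -w net.ipv4.ip_forward=1
```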

setkey
/etc/ipsec-tools.conf

/etc/racoon/racoon.conf
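The IPsec-Tools entries above have no example on this page. A minimal /etc/ipsec-tools.conf sketch, loaded with setkey -f, might look like the following (the addresses are documentation placeholders; when racoon is used, the actual security associations are then negotiated via IKE):

```
#!/usr/sbin/setkey -f
# Flush the security association and policy databases
flush;
spdflush;

# Require ESP in transport mode between two hosts
spdadd 192.0.2.1 192.0.2.2 any -P out ipsec esp/transport//require;
spdadd 192.0.2.2 192.0.2.1 any -P in ipsec esp/transport//require;
```

The racoon daemon reads /etc/racoon/racoon.conf for the IKE phase 1/2 parameters and pre-shared keys or certificates.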

wiki/certification/lpic303-200.txt · Last modified: 2017/08/24 08:02 by ferry
