Fedora Security Team
Defensive Coding
A Guide to Improving Software Security
Florian Weimer
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available
at http://creativecommons.org/licenses/by-sa/3.0/. The original authors of this document, and Red Hat,
designate the Fedora Project as the "Attribution Party" for purposes of CC-BY-SA. In accordance with
CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the
original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity
Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
For guidelines on the permitted uses of the Fedora trademarks, refer to https://fedoraproject.org/wiki/
Legal:Trademark_guidelines.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other
countries.
This document provides guidelines for improving software security through secure coding. It covers
common programming languages and libraries, and focuses on concrete recommendations.
Contents (excerpt)
I. Programming Languages
12.5. XML serialization
12.5.1. External references
12.5.2. Entity expansion
12.5.3. XInclude processing
12.5.4. Algorithmic complexity of XML validation
12.5.5. Using Expat for XML parsing
12.5.6. Using Qt for XML parsing
12.5.7. Using OpenJDK for XML parsing and validation
12.6. Protocol Encoders
13. Cryptography
13.1. Primitives
13.2. Randomness
14. RPM packaging
14.1. Generating X.509 self-signed certificates during installation
14.2. Generating X.509 self-signed certificates before service start
Part I. Programming Languages
Chapter 1. The C Programming Language
Example 1.1, “Array processing in C” shows how to extract Pascal-style strings from a character
buffer. The two pointers kept for length checks are inend and outend. inp and outp are the
respective read and write positions. The number of remaining input bytes is checked using the
expression len > (size_t)(inend - inp). The cast silences a compiler warning; inend is
never smaller than inp.
ssize_t
extract_strings(const char *in, size_t inlen, char **out, size_t outlen)
{
  const char *inp = in;
  const char *inend = in + inlen;
  char **outp = out;
  char **outend = out + outlen;

  while (inp != inend) {
    size_t len;
    char *s;
    if (outp == outend) {
      errno = ENOSPC;
      goto err;
    }
    len = (unsigned char)*inp;
    ++inp;
    if (len > (size_t)(inend - inp)) {
      errno = EINVAL;
      goto err;
    }
    s = malloc(len + 1);
    if (s == NULL) {
      goto err;
    }
    memcpy(s, inp, len);
    inp += len;
    s[len] = '\0';
    *outp = s;
    ++outp;
  }
  return outp - out;
err:
  {
    int errno_old = errno;
    while (out != outp) {
      free(*out);
      ++out;
    }
    errno = errno_old;
  }
  return -1;
}
It is important that the length checks always have the form len > (size_t)(inend - inp),
where len is a variable of type size_t which denotes the total number of bytes which are about to be
read or written next. In general, it is not safe to fold multiple such checks into one, as in len1 + len2
> (size_t)(inend - inp), because the expression on the left can overflow or wrap around (see
Section 1.1.3, “Recommendations for integer arithmetic”), and it no longer reflects the number of bytes
to be processed.
The following example attempts to detect signed overflow after it has occurred. This does not work
reliably: signed overflow is undefined behavior, so the compiler may assume it cannot happen and
remove the check.

void report_overflow(void);

int
add(int a, int b)
{
  int result = a + b;
  if (a < 0 || b < 0) {
    return -1;
  }
  // The compiler can optimize away the following if statement.
  if (result < 0) {
    report_overflow();
  }
  return result;
}
The following approaches can be used to check for overflow, without actually causing it.
• Use a wider type to perform the calculation, check that the result is within bounds, and convert the
result to the original type. All intermediate results must be checked in this way.
• Perform the calculation in the corresponding unsigned type and use bit fiddling to detect the
overflow. Example 1.3, “Overflow checking for unsigned addition” shows how to perform an overflow
check for unsigned integer addition. For three or more terms, all the intermediate additions have to
be checked in this way.
void report_overflow(void);

unsigned
add_unsigned(unsigned a, unsigned b)
{
  unsigned sum = a + b;
  if (sum < a) { // or sum < b
    report_overflow();
  }
  return sum;
}
• Compute bounds for acceptable input values which are known to avoid overflow, and reject other
values. This is the preferred way for overflow checking on multiplications, see Example 1.4,
“Overflow checking for unsigned multiplication”.
unsigned
mul(unsigned a, unsigned b)
{
  if (b && a > ((unsigned)-1) / b) {
    report_overflow();
  }
  return a * b;
}
Basic arithmetic operations are commutative, so for bounds checks, there are two different but
mathematically equivalent expressions. Sometimes, one of the expressions results in better code
because parts of it can be reduced to a constant. This applies to overflow checks for multiplication a *
b involving a constant a, where the expression is reduced to b > C for some constant C determined at
compile time. The other expression, b && a > ((unsigned)-1) / b, is more difficult to optimize
at compile time.
When a value is converted to a signed integer, GCC always chooses the result based on 2's
complement arithmetic. This GCC extension (which is also implemented by other compilers) helps a
lot when implementing overflow checks.
Sometimes, it is necessary to compare unsigned and signed integer variables. This results in a
compiler warning, comparison between signed and unsigned integer expressions, because the
comparison often gives unexpected results for negative values. When adding a cast, make sure that
negative values are covered properly. If the bound is unsigned and the checked quantity is signed, you
should cast the checked quantity to an unsigned type at least as wide as either operand type. As a
result, negative values will fail the bounds check. (You can still check for negative values separately for
clarity, and the compiler will optimize away this redundant check.)
Legacy code should be compiled with the -fwrapv GCC option. As a result, GCC will provide 2's
complement semantics for integer arithmetic, including defined behavior on integer overflow.
Global constants are not a problem, but declaring them can be tricky. Example 1.5, “Declaring a
constant array of constant strings” shows how to declare a constant array of constant strings. The
second const is needed to make the array constant, and not just the strings. It must be placed after
the *, and not before it.
Sometimes, static variables local to functions are used as a replacement for proper memory
management. Unlike with non-static local variables, it is possible to return a pointer to a static local
variable to the caller. Such variables are well-hidden, but effectively global (just like static variables at
file scope). It is difficult to add thread safety afterwards if such interfaces are used. Merely dropping the
static keyword in such cases leads to undefined behavior.
Another source for static local variables is a desire to reduce stack space usage on embedded
platforms, where the stack may span only a few hundred bytes. If this is the only reason why
the static keyword is used, it can just be dropped, unless the object is very large (larger than
128 kilobytes on 32 bit platforms). In the latter case, it is recommended to allocate the object using
malloc, to obtain proper array checking, for the same reasons outlined in Section 1.3.2, “alloca and
other forms of stack-based allocation”.
Please check the applicable documentation before using the recommended replacements. Many of
these functions allocate buffers using malloc which your code must deallocate explicitly using free.
• gets ⟶ fgets
• readdir_r ⟶ readdir
• realpath (with a non-NULL second parameter) ⟶ realpath with NULL as the second parameter,
or canonicalize_file_name
The constants listed below must not be used, either. Instead, code must allocate memory dynamically
and use interfaces with length checking.
• _PC_NAME_MAX (This limit, returned by the pathconf function, is not enforced by the kernel.)
• _PC_PATH_MAX (This limit, returned by the pathconf function, is not enforced by the kernel.)
• f_namemax in struct statvfs (limit not actually enforced by the kernel, see _PC_NAME_MAX
above)
The following string functions lack length checking and must not be used, either:
• sprintf ⟶ snprintf
• strcat
• strcpy
• vsprintf ⟶ vsnprintf
• alloca ⟶ malloc and free (see Section 1.3.2, “alloca and other forms of stack-based
allocation”)
• putenv ⟶ explicit envp argument in process creation (see Section 11.1.3, “Specifying the process
environment”)
• setenv ⟶ explicit envp argument in process creation (see Section 11.1.3, “Specifying the process
environment”)
• strdupa ⟶ strdup and free (see Section 1.3.2, “alloca and other forms of stack-based
allocation”)
• strndupa ⟶ strndup and free (see Section 1.3.2, “alloca and other forms of stack-based
allocation”)
• unsetenv ⟶ explicit envp argument in process creation (see Section 11.1.3, “Specifying the
process environment”)
1.2.3.1. snprintf
The snprintf function provides a way to construct a string in a statically-sized buffer. (If the buffer
is allocated on the heap, consider using asprintf instead.)
char fraction[30];
snprintf(fraction, sizeof(fraction), "%d/%d", numerator, denominator);
The second argument to the snprintf call should always be the size of the buffer in the first
argument (which should be a character array). Elaborate pointer and length arithmetic can introduce
errors and nullify the security benefits of snprintf.
char buf[512];
char *current = buf;
const char *const end = buf + sizeof(buf);
for (struct item *it = data; it->key; ++it) {
  snprintf(current, end - current, "%s%s=%d",
           current == buf ? "" : ", ", it->key, it->value);
  current += strlen(current);
}
If you want to avoid the call to strlen for performance reasons, you have to check for a negative
return value from snprintf and also check whether the return value is equal to or larger than the
specified buffer length. Only if neither condition applies may you advance the pointer by the number
of bytes returned by snprintf. However, this optimization is rarely worthwhile.
Note that it is not permitted to use the same buffer both as the destination and as a source argument.
1.2.3.2. vsnprintf
The same considerations apply to vsnprintf, as in this logging helper (log_string is assumed to
be defined elsewhere):

void
log_format(const char *format, ...)
{
  char buf[1000];
  va_list ap;
  va_start(ap, format);
  vsnprintf(buf, sizeof(buf), format, ap);
  va_end(ap);
  log_string(buf);
}
1.2.3.3. strncpy
The strncpy function does not ensure that the target buffer is NUL-terminated. A common idiom for
ensuring NUL termination is:
char buf[10];
strncpy(buf, data, sizeof(buf));
buf[sizeof(buf) - 1] = '\0';
Another approach ensures NUL termination by starting from an empty string and using strncat,
which, unlike strncpy, always terminates its result:

char buf[10];
buf[0] = '\0';
strncat(buf, data, sizeof(buf) - 1);
1.2.3.4. strncat
The length argument of the strncat function specifies the maximum number of characters copied
from the source buffer, excluding the terminating NUL character. This means that the required
number of bytes in the destination buffer is the length of the original string, plus the length argument
in the strncat call, plus one. Consequently, this function is rarely appropriate for performing a
length-checked string operation, with the notable exception of the strcpy emulation described in
Section 1.2.3.3, “strncpy”.
To implement a length-checked string append, you can use an approach similar to Example 1.6,
“Repeatedly writing to a buffer using snprintf”:
char buf[10];
snprintf(buf, sizeof(buf), "%s", prefix);
snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "%s", data);
In many cases, including this one, the string concatenation can be avoided by combining everything
into a single format string:
However, you must not construct format strings dynamically just to avoid concatenation, because
doing so prevents GCC from type-checking the argument lists.
It is not possible to use format strings like "%s%s" to implement concatenation, unless you use
separate buffers. snprintf does not support overlapping source and target strings.
The C compiler knows about these functions and can use their expected behavior for optimizations.
For instance, the compiler assumes that an existing pointer (or a pointer derived from an existing
pointer by arithmetic) will not point into the memory area returned by malloc.
The same rules apply to realloc if the memory area cannot be enlarged in place. For instance, the
compiler may assume that a comparison between the old and new pointer will always return false, so it
is impossible to detect movement this way.
If the allocation fails, realloc does not free the old pointer. Therefore, the idiom ptr =
realloc(ptr, size); is wrong because the memory pointed to by ptr leaks in case of an error.
In general, if you cannot check all allocation calls and handle failure, you should abort the program
on allocation failure, and not rely on the null pointer dereference to terminate the process. See
Section 12.1, “Recommendations for manually written decoders” for related memory allocation
concerns.
This is sufficient for detecting typical stack overflow situations such as unbounded recursion, but it fails
when the stack grows in increments larger than the size of the guard page. In this case, it is possible
that the stack pointer ends up pointing into a memory area which has been allocated for a different
purpose. Such misbehavior can be exploitable.
A common source for large stack growth are calls to alloca and related functions such as strdupa.
These functions should be avoided because of the lack of error checking. (They can be used safely
if the allocated size is less than the page size (typically, 4096 bytes), but this case is relatively rare.)
Additionally, relying on alloca makes it more difficult to reorganize the code because it is not allowed
to use the pointer after the function calling alloca has returned, even if this function has been inlined
into its caller.
Similar concerns apply to variable-length arrays (VLAs), a feature of the C99 standard which started
as a GNU extension. For large objects exceeding the page size, there is no error checking, either.
In both cases, negative or very large sizes can trigger a stack-pointer wraparound, and the stack
pointer can end up pointing into caller stack frames, which is fatal and can be exploitable.
If you want to use alloca or VLAs for performance reasons, consider using a small on-stack array
(less than the page size, large enough to fulfill most requests). If the requested size is small enough,
use the on-stack array. Otherwise, call malloc. When exiting the function, check if malloc had been
called, and free the buffer as needed.
If malloc or realloc is used, the size check must be written manually. For instance, to allocate an
array of n elements of type T, check that the requested size is not greater than ((size_t) -1) /
sizeof(T). See Section 1.1.3, “Recommendations for integer arithmetic”.
Memory allocators are difficult to write and contain many performance and security pitfalls.
• When computing array sizes or rounding up allocation requests (to the next allocation granularity, or
for alignment purposes), checks for arithmetic overflow are required.
• Size computations for array allocations need overflow checking. See Section 1.3.3, “Array
allocation”.
often, utilization of individual pools is poor, and external fragmentation increases the overall memory
usage.
However, using a conservative garbage collector may reduce opportunities for code reuse because
once one library in a program uses garbage collection, the whole process memory needs to be subject
to it, so that no pointers are missed. The Boehm-Demers-Weiser collector also reserves certain
signals for internal use, so it is not fully transparent to the rest of the program.
In general, such wrappers are a bad idea, particularly if they are not implemented as inline functions
or preprocessor macros. The compiler lacks knowledge of such wrappers outside the translation unit
which defines them, which means that some optimizations and security checks are not performed.
Adding __attribute__ annotations to function declarations can remedy this to some extent, but
these annotations have to be maintained carefully for feature parity with the standard implementation.
• If you wrap a function which accepts a GCC-recognized format string (for example, a printf-style
function used for logging), you should add a suitable format attribute, as in Example 1.7, “The
format function attribute”.
• If you wrap a function which carries a warn_unused_result attribute and you propagate its return
value, your wrapper should be declared with warn_unused_result as well.
For other attributes (such as malloc), careful analysis and comparison with the compiler
documentation is required to check if propagating the attribute is appropriate. Incorrectly applied
attributes can result in undesired behavioral changes in the compiled code.
Chapter 2. The C++ Programming Language
The std::vector template can be used instead of an explicit array allocation. (The GCC
implementation detects overflow internally.)
If there is no alternative to operator new[] and the sources will be compiled with older GCC
versions, code which allocates arrays with a variable length must check for overflow manually. For
the new T[n] example, the size check could be n < 0 || (n > 0 && n > (size_t(-1) - 8) /
sizeof(T)). (See Section 1.1.3, “Recommendations for integer arithmetic”.) If there are additional
dimensions (which must be constants according to the C++ standard), these should be included as
factors in the divisor.
These countermeasures prevent out-of-bounds writes and potential code execution. Very large
memory allocations can still lead to a denial of service. Section 12.1, “Recommendations for manually
written decoders” contains suggestions for mitigating this problem when processing untrusted data.
See Section 1.3.3, “Array allocation” for array allocation advice for C-style memory allocation.
2.1.2. Overloading
Do not overload functions with versions that have different security characteristics. For instance,
do not implement a function strcat which works on std::string arguments. Similarly, do not name
methods after such functions.
Outside of extremely performance-critical code, you should ensure that a wide range of changes is
possible without breaking ABI. Some very basic guidelines are:
• Try to avoid templates. Use them only if the increased type safety provides a real benefit to the
programmer.
• Move security-critical code out of templated code, so that it can be patched in a central place if
necessary.
The KDE project publishes a document with more extensive guidelines on ABI-preserving changes
to C++ code, Policies/Binary Compatibility Issues With C++ (http://techbase.kde.org/Policies/
Binary_Compatibility_Issues_With_C++; d-pointer refers to the pointer-to-implementation idiom).
• -std=c++03 for the 1998 standard with the changes from the TR1 technical report
• -std=c++11 for the 2011 C++ standard. This option should not be used.
• -std=c++0x for several different versions of C++11 support in development, depending on the
GCC version. This option should not be used.
For each of these flags, there are variants which also enable GNU extensions (mostly language
features also found in C99 or C11): -std=gnu++98, -std=gnu++03, -std=gnu++11. Again,
-std=gnu++11 should not be used.
If you enable C++11 support, the ABI of the standard C++ library libstdc++ will change in subtle
ways. Currently, no C++ libraries are compiled in C++11 mode, so if you compile your code in
C++11 mode, it will be incompatible with the rest of the system. Unfortunately, this is also the case if
you do not use any C++11 features. Currently, there is no safe way to enable C++11 mode (except for
freestanding applications).
The meaning of C++0X mode changed from GCC release to GCC release. Earlier versions were still
ABI-compatible with C++98 mode, but in the most recent versions, switching to C++0X mode activates
C++11 support, with its compatibility problems.
Some C++11 features (or approximations thereof) are available with TR1 support, that is, with
-std=c++03 or -std=gnu++03 and in the <tr1/*> header files. This includes
std::tr1::shared_ptr (from <tr1/memory>) and std::tr1::function (from
<tr1/functional>). For other C++11 features, the Boost C++ library contains replacements.
The following standard library functions expect an output iterator and cannot verify that the output
range is large enough:
• std::copy
• std::copy_backward
• std::copy_if
• std::move_backward
• std::partition_copy
• std::remove_copy
• std::remove_copy_if
• std::replace_copy
• std::replace_copy_if
• std::swap_ranges
• std::transform
These output-iterator-expecting functions should only be used with unlimited-range output iterators,
such as iterators obtained with the std::back_inserter function.
Other functions use single input or forward iterators, which can read beyond the end of the input range
if the caller is not careful:
• std::equal
• std::is_permutation
• std::mismatch
The pointer returned by the data() member function does not necessarily point to a NUL-terminated
string. To obtain a C-compatible string pointer, use c_str() instead, which adds the NUL terminator.
The pointers returned by the data() and c_str() functions and iterators are only valid until certain
events happen. It is required that the exact std::string object still exists (even if it was initially
created as a copy of another string object). Pointers and iterators are also invalidated when non-const
member functions are called, or functions with a non-const reference parameter. The behavior of the
GCC implementation deviates from that required by the C++ standard if multiple threads are present.
In general, only the first call to a non-const member function after a structural modification of the string
(such as appending a character) is invalidating, but this also applies to member functions such as the
non-const version of begin(), in violation of the C++ standard.
Particular care is necessary when invoking the c_str() member function on a temporary object. This
is convenient for calling C functions, but the pointer will turn invalid as soon as the temporary object
is destroyed, which generally happens when the outermost expression enclosing the expression on
which c_str() is called completes evaluation. Passing the result of c_str() to a function which
does not store or otherwise leak that pointer is safe, though.
Like with std::vector and std::array, subscripting with operator[] does not perform bounds
checks. Use the at(size_type) member function instead. See Section 2.2.3, “Containers and
operator[]”. Furthermore, accessing the terminating NUL character using operator[] is not
possible. (In some implementations, the c_str() member function writes the NUL character on
demand.)
Never write to the pointers returned by data() or c_str() after casting away const. If you need a
C-style writable string, use a std::vector<char> object and its data() member function. In this
case, you have to explicitly add the terminating NUL character.
The front() and back() member functions are undefined if a vector object is empty. You can
use vec.at(0) and vec.at(vec.size() - 1) as checked replacements. For an empty vector,
data() is defined; it returns an arbitrary pointer, but not necessarily the NULL pointer.
2.2.4. Iterators
Iterators do not perform any bounds checking. Therefore, all functions that work on iterators should
accept them in pairs, denoting a range, and make sure that iterators are not moved outside that range.
For forward iterators and bidirectional iterators, you need to check for equality before moving the first
or last iterator in the range. For random-access iterators, you need to compute the difference before
adding or subtracting an offset. It is not possible to perform the operation and check for an invalid
iterator afterwards.
Output iterators cannot be compared for equality. Therefore, it is impossible to write code that detects
that it has been supplied an output area that is too small, and their use should be avoided.
These issues make some of the standard library functions difficult to use correctly, see Section 2.2.1.1,
“Unpaired iterators”.
Chapter 3. The Java Programming Language
To avoid allocating extremely large amounts of data, you can allocate a small array initially
and grow it as you read more data, implementing an exponential growth policy. See the
readBytes(InputStream, int) function in Example 3.1, “Incrementally reading a byte array”.
When reading data into arrays, hash maps or hash sets, use the default constructor and do not specify
a size hint. You can simply add the elements to the collection as you read them.
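Example 3.1 is not reproduced in this extract; the following sketch illustrates the exponential growth policy (the class name and the 64-byte initial size are illustrative choices):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ReadBytes {
    /** Reads the whole stream, up to maxLength bytes, growing the buffer
     *  exponentially instead of trusting a length field in the input. */
    static byte[] readBytes(InputStream in, int maxLength) throws IOException {
        byte[] buf = new byte[Math.min(64, maxLength)];
        int length = 0;
        while (true) {
            if (length == buf.length) {
                if (buf.length == maxLength) {
                    if (in.read() != -1)
                        throw new IOException("input too large");
                    break;      // stream ended exactly at the cap
                }
                // Double the buffer, capped at maxLength.
                buf = Arrays.copyOf(buf,
                        (int) Math.min((long) buf.length * 2, maxLength));
            }
            int count = in.read(buf, length, buf.length - length);
            if (count < 0)
                break;          // end of stream
            length += count;
        }
        return Arrays.copyOf(buf, length);
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1000];
        InputStream in = new ByteArrayInputStream(data);
        System.out.println(readBytes(in, 1 << 20).length);
    }
}
```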
The first option is the try-finally construct, as shown in Example 3.2, “Resource management with
a try-finally block”. The code in the finally block should be as short as possible and should not
throw any exceptions.
Note that the resource allocation happens outside the try block, and that there is no null check in
the finally block. (Both are common artifacts stemming from IDE code templates.)
If the resource object is created freshly and implements the java.lang.AutoCloseable interface,
the code in Example 3.3, “Resource management using the try-with-resource construct” can be used
instead. The Java compiler will automatically insert the close() method call in a synthetic finally
block.
To be compatible with the try-with-resource construct, new classes should name the resource
deallocation method close(), and implement the AutoCloseable interface (the latter breaking
backwards compatibility with Java 6). However, using the try-with-resource construct with objects
that are not freshly allocated is at best awkward, and an explicit finally block is usually the better
approach.
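A minimal sketch of an AutoCloseable resource used with the try-with-resource construct (the class is illustrative; the log records the order of operations):

```java
public class TryWithResource {
    static final StringBuilder log = new StringBuilder();

    static class Resource implements AutoCloseable {
        void use() { log.append("use;"); }
        @Override public void close() { log.append("close;"); }
    }

    public static void main(String[] args) {
        // The compiler inserts the close() call in a synthetic finally
        // block, so the resource is released even if use() throws.
        try (Resource r = new Resource()) {
            r.use();
        }
        System.out.println(log);
    }
}
```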
In general, it is best to design the programming interface in such a way that resource deallocation
methods like close() cannot throw any (checked or unchecked) exceptions, but this should not be a
reason to ignore any actual error conditions.
3.1.3. Finalizers
Finalizers can be used as a last-resort approach to free resources which would otherwise leak.
Finalization is unpredictable, costly, and there can be a considerable delay between the last reference
to an object going away and the execution of the finalizer. Generally, manual resource management is
required; see Section 3.1.2, “Resource management”.
Finalizers should be very short and should only deallocate native or other external resources
held directly by the object being finalized. In general, they must use synchronization: Finalization
necessarily happens on a separate thread because it is inherently concurrent. There can be multiple
finalization threads, and despite each object being finalized at most once, the finalizer must not
assume that it has exclusive access to the object being finalized (in the this pointer).
Finalizers should not deallocate resources held by other objects, especially if those objects have
finalizers on their own. In particular, it is a very bad idea to define a finalizer just to invoke the resource
deallocation method of another object, or overwrite some pointer fields.
Finalizers are not guaranteed to run at all. For instance, the virtual machine (or the machine
underneath) might crash, preventing their execution.
Objects with finalizers are garbage-collected much later than objects without them, so using finalizers
to zero out key material (to reduce its undecrypted lifetime in memory) may have the opposite effect,
keeping objects around for much longer and preventing them from being overwritten in the normal
course of program execution.
For the same reason, code which allocates objects with finalizers at a high rate will eventually fail
(likely with a java.lang.OutOfMemoryError exception) because the virtual machine has finite
resources for keeping track of objects pending finalization. To deal with that, it may be necessary to
recycle objects with finalizers.
The remarks in this section apply to finalizers which are implemented by overriding the finalize()
method, and to custom finalization using reference queues.
• Run-time exceptions do not have to be declared explicitly and can be thrown from any code: by an
explicit throw statement, by calling code which throws them, or by triggering an error condition at
run time, such as division by zero or an out-of-bounds array access. These exceptions derive
(perhaps indirectly) from the java.lang.RuntimeException class.
• Checked exceptions have to be declared explicitly by functions that throw or propagate them.
They are similar to run-time exceptions in other regards, except that there is no language
construct to throw them (except the throw statement itself). Checked exceptions are only
present at the Java language level and are only enforced at compile time. At run time, the virtual
machine does not know about them and permits throwing exceptions from any code. Checked
exceptions must derive (perhaps indirectly) from the java.lang.Exception class, but not from
java.lang.RuntimeException.
• Errors are exceptions which typically reflect serious error conditions. They can be thrown at any
point in the program, and do not have to be declared (unlike checked exceptions). In general, it
is not possible to recover from such errors; more on that below, in Section 3.1.4.1, “The difficulty
of catching errors”. Error classes derive (perhaps indirectly) from java.lang.Error, or from
java.lang.Throwable, but not from java.lang.Exception.
The general expectation is that run-time errors are avoided by careful programming (e.g., not dividing
by zero). Checked exceptions are expected to be caught as they happen (e.g., when an input file is
unexpectedly missing). Errors are impossible to predict, can happen at any point, and reflect that
something went wrong beyond all expectations.
• The error can happen at any point, resulting in inconsistencies due to half-updated objects.
Examples are java.lang.ThreadDeath, java.lang.OutOfMemoryError and
java.lang.StackOverflowError.
• The error indicates that the virtual machine failed to provide some semantic guarantees of the Java
programming language. java.lang.ExceptionInInitializerError is an example: it can
leave behind a half-initialized class.
In general, if an error is thrown, the virtual machine should be restarted as soon as possible because
it is in an inconsistent state. Continuing to run as before can have unexpected consequences.
However, there are legitimate reasons for catching errors because not doing so leads to even greater
problems.
Code should be written in a way that avoids triggering errors. See Section 3.1.1, “Increasing
robustness when reading arrays” for an example.
It is usually necessary to log errors. Otherwise, no trace of the problem might be left
anywhere, making it very difficult to diagnose related failures. Consequently, if you catch
java.lang.Exception to log and suppress all unexpected exceptions (for example, in a request
dispatching loop), you should consider switching to java.lang.Throwable instead, to also cover
errors.
The other reason mainly applies to such request dispatching loops: If you do not catch errors, the loop
stops looping, resulting in a denial of service.
However, if possible, catching errors should be coupled with a way to signal the requirement of a
virtual machine restart.
3.2.2. Java Native Interface (JNI)
The transition between the Java world and the C world is not fully type-checked, and the C code can
easily break the Java virtual machine semantics. Therefore, extra care is needed when using this
functionality.
To provide a moderate amount of type safety, it is recommended to recreate the class-specific header
file using javah during the build process, include it in the implementation, and compile with the
-Wmissing-declarations option.
Ideally, the required data is directly passed to static JNI methods and returned from them, so that the
C side does not have to deal with accessing Java fields (or even methods).
If necessary, you can use the Java long type to store a C pointer in a field of a Java class. On the C
side, when casting between the jlong value and the pointer, use an intermediate cast to uintptr_t
to avoid truncation and sign-extension warnings.
You should not try to perform pointer arithmetic on the Java side (that is, you should treat pointer-
carrying long values as opaque). When passing a slice of an array to the native code, follow the Java
convention and pass it as the base array, the integer offset of the start of the slice, and the integer
length of the slice. On the native side, check the offset/length combination against the actual array
length, and use the offset to compute the pointer to the beginning of the array.
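The arithmetic in that check is easy to get wrong, because a naive offset + length comparison can overflow. A minimal sketch of an overflow-safe validation in C follows (the function name slice_is_valid is ours; in real JNI code, the array length would come from GetArrayLength):

```c
#include <stdbool.h>
#include <stdint.h>

/* Validate a Java-style (offset, length) slice against the actual
   array length.  Computing offset + length directly could overflow,
   so compare without adding. */
bool slice_is_valid(int32_t offset, int32_t length, int32_t array_length)
{
    if (offset < 0 || length < 0 || array_length < 0)
        return false;
    /* Both subexpressions are non-negative and cannot overflow. */
    return offset <= array_length && length <= array_length - offset;
}
```

Only after this check succeeds should the native code compute the element pointer as base + offset.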
In any case, classes referring to native resources must be declared final, and must not be
serializable or cloneable. Initialization and mutation of the state used by the native side must be
controlled carefully. Otherwise, it might be possible to create an object with inconsistent native state
which results in a crash (or worse) when used (or perhaps only finalized) later. If you need both Java
inheritance and native resources, you should consider moving the native state to a separate class,
and only keep a reference to objects of that class. This way, cloning and serialization issues can be
avoided in most cases.
If there are native resources associated with an object, the class should have an explicit resource
deallocation method (Section 3.1.2, “Resource management”) and a finalizer (Section 3.1.3,
“Finalizers”) as a last resort. The need for finalization means that a minimum amount of
synchronization is needed. Code on the native side should check that the object is not in a closed/
freed state.
Many JNI functions create local references. By default, these persist until the JNI-implemented
method returns. If you create many such references (e.g., in a loop), you may have to free them using
DeleteLocalRef, or start using PushLocalFrame and PopLocalFrame. Global references must
be deallocated with DeleteGlobalRef, otherwise there will be a memory leak, just as with malloc
and free.
When throwing exceptions using Throw or ThrowNew, be aware that these functions return normally;
the exception is only raised once control returns to the virtual machine. You therefore have to return
from the native method manually after throwing.
Technically, the JNIEnv pointer is not necessarily constant during the lifetime of your JNI module.
Storing it in a global variable is therefore incorrect. Particularly if you are dealing with callbacks, you
may have to store the pointer in a thread-local variable (defined with __thread). It is, however, best
to avoid the complexity of calling back into Java code.
Keep in mind that C/C++ and Java are different languages, despite very similar syntax for
expressions. The Java memory model is much more strict than the C or C++ memory models, and
native code needs more synchronization, usually using JVM facilities or POSIX threads mutexes.
Integer overflow in Java is defined, but in C/C++ it is not (for the jint and jlong types).
3.2.3. sun.misc.Unsafe
The sun.misc.Unsafe class is unportable and contains many functions explicitly designed to break
Java memory safety (for performance and debugging). If possible, avoid using this class.
The type safety and accessibility checks provided by the Java language and JVM would be sufficient
to implement a sandbox. However, only some Java APIs employ such a capabilities-based approach.
(The Java SE library contains many public classes with public constructors which can break any
security policy, such as java.io.FileOutputStream.) Instead, critical functionality is protected
by stack inspection: At a security check, the stack is walked from top (most-nested) to bottom. The
security check fails if a stack frame for a method is encountered whose class lacks the permission
which the security check requires.
This simple approach would not allow untrusted code (which lacks certain permissions) to call into
trusted code while the latter retains trust. Such trust transitions are desirable because they enable
Java as an implementation language for most parts of the Java platform, including security-relevant
code. Therefore, there is a mechanism to mark certain stack frames as trusted (Section 3.3.4, “Re-
gaining privileges”).
In theory, it is possible to run a Java virtual machine with a security manager that acts very differently
from this approach, but a lot of code expects behavior very close to the platform default (including
many classes which are part of the OpenJDK implementation).
• Avoid explicit class loading. Access to a suitable class loader might not be available when executing
as untrusted code.
If the functionality you are implementing absolutely requires privileged access and this functionality
has to be used from untrusted code (hopefully in a restricted and secure manner), see Section 3.3.4,
“Re-gaining privileges”.
The -Djava.security.manager option activates the security manager, with the fairly restrictive
default policy. With a very permissive policy, most Java code will run unchanged. Assuming the policy
in Example 3.5, “Most permissive OpenJDK policy file” has been saved in a file grant-all.policy,
this policy can be activated using the option -Djava.security.policy=grant-all.policy (in
addition to the -Djava.security.manager option).
grant {
permission java.security.AllPermission;
};
With this most permissive policy, the security manager is still active, and explicit requests to drop
privileges will be honored.
Example 3.6. Using the security manager to run code with reduced privileges
AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
@Override
public Void run() throws Exception {
// This code runs with reduced privileges and is
// expected to fail.
try (FileInputStream in = new FileInputStream(path)) {
System.out.format("FileInputStream: %s%n", in);
}
return null;
}
}, context);
The example above does not add any additional permissions to the permissions object. If such
permissions are necessary, code like the following (which grants read permission on all files in the
current directory) can be used:
permissions.add(new FilePermission(
System.getProperty("user.dir") + "/-", "read"));
Important
The example code above does not prevent the called code from calling the
java.security.AccessController.doPrivileged() methods. This mechanism should
be considered an additional safety net, but it can still be used to prevent unexpected behavior of
trusted code. As long as the executed code is not dynamic and came with the original application
or library, the sandbox is fairly effective.
The context argument in Example 3.6, “Using the security manager to run code with reduced
privileges” is extremely important—otherwise, this code would increase privileges instead of
reducing them.
For activating the security manager, see Section 3.3.2, “Activating the security manager”.
Unfortunately, this affects the virtual machine as a whole, so it is not possible to do this from a library.
3.3.4. Re-gaining privileges
Important
By design, this feature can undermine the Java security model and the sandbox. It has to be
used very carefully. Most sandbox vulnerabilities can be traced back to its misuse.
In essence, the doPrivileged() methods cause the stack inspection to end at their call site.
Untrusted code further down the call stack becomes invisible to security checks.
The following operations are common and safe to perform with elevated privileges.
• Reading custom system properties with fixed names, especially if the value is not propagated to
untrusted code. (File system paths including installation paths, host names and user names are
sometimes considered private information and need to be protected.)
• Reading from the file system at fixed paths, either determined at compile time or by a system
property. Again, leaking the file contents to the caller can be problematic.
• Accessing network resources under a fixed address, name or URL, derived from a system property
or configuration file, information leaks notwithstanding.
Example 3.7, “Using the security manager to run code with increased privileges” shows how to request
additional privileges.
Example 3.7. Using the security manager to run code with increased privileges
Obviously, this only works if the class containing the call to doPrivileged() is marked trusted
(usually because it is loaded from a trusted class loader).
When writing code that runs with elevated privileges, make sure that you follow the rules below.
• Make the privileged code as small as possible. Perform as many computations as possible before
and after the privileged code section, even if it means that you have to define a new class to pass
the data around.
• Make sure that you either control the inputs to the privileged code, or that the inputs are harmless
and cannot affect security properties of the privileged code.
• Data that is returned from or written by the privileged code must either be restricted (that is, it cannot
be accessed by untrusted code), or must be harmless. Otherwise, privacy leaks or information
disclosures which affect security properties can result.
If the code calls back into untrusted code at a later stage (or performs other actions under control from
the untrusted caller), you must obtain the original security context and restore it before performing the
callback, as in Example 3.8, “Restoring privileges when invoking callbacks”. (In this example, it would
be much better to move the callback invocation out of the privileged code section, of course.)
interface Callback<T> {
T call(boolean flag);
}
class CallbackInvoker<T> {
private final AccessControlContext context;
Callback<T> callback;
CallbackInvoker(Callback<T> callback) {
context = AccessController.getContext();
this.callback = callback;
}
public T invoke() {
// Obtain increased privileges.
return AccessController.doPrivileged(new PrivilegedAction<T>() {
@Override
public T run() {
// This operation would fail without
// additional privileges.
                final boolean flag = Boolean.getBoolean("some.property");

                // Restore the original privileges of the
                // caller before invoking the callback.
                return AccessController.doPrivileged(
                    new PrivilegedAction<T>() {
                        @Override
                        public T run() {
                            return callback.call(flag);
                        }
                    }, context);
            }
        });
    }
}
Chapter 4.
The Python Programming Language
• Chapter 12, Serialization and Deserialization, in particular Section 12.4, “Library support for
deserialization”
The following functions and statements compile or execute Python source code; they must never be used with data from untrusted sources:
• compile
• eval
• exec
• execfile
If you need to parse integers or floating point values, use the int and float functions instead of
eval. Sandboxing untrusted Python code does not work reliably.
4.3. Sandboxing
The rexec Python module cannot safely sandbox untrusted code and should not be used. The
standard CPython implementation is not suitable for sandboxing.
Chapter 5.
The Go Programming Language
Code which does not run in parallel and does not use the unsafe package (or other packages which
expose unsafe constructs) is memory-safe. For example, invalid casts and out-of-range subscripting
cause panics at run time.
Keep in mind that finalization can introduce parallelism because finalizers are executed concurrently,
potentially interleaved with the rest of the program.
Not checking error return values can lead to incorrect operation and data loss (especially in the case
of writes, using interfaces such as io.Writer).
The correct way to check error return values depends on the function or method being called. In the
majority of cases, the first step after calling a function should be an error check against the nil value,
handling any encountered error. See Example 5.1, “Regular error handling in Go” for details.
However, with io.Reader, io.ReaderAt and related interfaces, it is necessary to check for a non-
zero number of read bytes first, as shown in Example 5.2, “Read error handling in Go”. If this pattern
is not followed, data loss may occur. This is due to the fact that the io.Reader interface permits
returning both data and an error at the same time.
Chapter 6.
The Vala Programming Language
Its syntax is inspired by C# (and thus, indirectly, by Java). But unlike C# and Java, Vala does not
attempt to provide memory safety: Vala is compiled to C, and the C code is compiled with GCC using
typical compiler flags. Basic operations like integer arithmetic are directly mapped to C constructs. As
a result, the recommendations in Chapter 1, The C Programming Language apply.
In particular, the following Vala language constructs can result in undefined behavior at run time:
• Pointer arithmetic, string subscripting and the substring method on strings (the string class in
the glib-2.0 package) are not range-checked. It is the responsibility of the calling code to ensure
that the arguments being passed are valid. This applies even to cases (like substring) where the
implementation would have range information to check the validity of indexes. See Section 1.1.2,
“Recommendations for pointers and array handling”.
• Similarly, Vala only performs garbage collection (through reference counting) for GObject values.
For plain C pointers (such as strings), the programmer has to ensure that storage is deallocated
once it is no longer needed (to avoid memory leaks), and that storage is not being deallocated while
it is still being used (see Section 1.3.1.1, “Use-after-free errors”).
Part II. Specific Programming Tasks
Chapter 7.
Library Design
Throughout this section, the term client code refers to applications and other libraries using the library.
If this is impossible, the global state must be protected with a lock. For C/C++, you can use the
pthread_mutex_lock and pthread_mutex_unlock functions without linking against -lpthread
because the system provides stubs for non-threaded processes.
For compatibility with fork, these locks should be acquired and released in helpers registered with
pthread_atfork. This function is not available without -lpthread, so you need to use dlsym or a
weak symbol to obtain its address.
If you need fork protection for other reasons, you should store the process ID and compare it to the
value returned by getpid each time you access the global state. (getpid is not implemented as a
system call and is fast.) If the value changes, you know that you have to re-create the state object.
(This needs to be combined with locking, of course.)
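Combining the lock with the PID comparison could look like the following sketch (the state type and function names are ours; a real library would also register pthread_atfork handlers as described above):

```c
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical global library state, protected by a lock and
   re-created after fork, using the PID comparison described above. */
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static pid_t state_pid;        /* process that created the state */
static int *state;             /* stand-in for the real state object */

int *get_state(void)
{
    pthread_mutex_lock(&state_lock);
    if (state == NULL || state_pid != getpid()) {
        /* First use, or running in a forked child: re-create the
           state.  The old object (if any) is a copy inherited from
           the parent and must not be reused. */
        free(state);
        state = calloc(1, sizeof(*state));
        state_pid = getpid();
    }
    pthread_mutex_unlock(&state_lock);
    return state;
}
```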
7.1.2. Handles
Library state should be kept behind a curtain. Client code should receive only a handle. In C, the
handle can be a pointer to an incomplete struct. In C++, the handle can be a pointer to an abstract
base class, or it can be hidden using the pointer-to-implementation idiom.
The library should provide functions for creating and destroying handles. (In C++, it is possible to use
virtual destructors for the latter.) Consistency between creation and destruction of handles is strongly
recommended: If the client code created a handle, it is the responsibility of the client code to destroy it.
(This is not always possible or convenient, so sometimes, a transfer of ownership has to happen.)
Using handles ensures that it is possible to change the way the library represents state in a way that
is transparent to client code. This is important to facilitate security updates and many other code
changes.
It is not always necessary to protect state behind a handle with a lock. This depends on the level of
thread safety the library provides.
Virtual member functions can be used as callbacks. See Section 7.3, “Callbacks” for some of the
challenges involved.
7.3. Callbacks
Higher-order code is difficult to analyze for humans and computers alike, so it should be avoided.
Often, an iterator-based interface (a library function which is called repeatedly by client code and
returns a stream of events) leads to a better design which is easier to document and use.
In older C++ code and in C code, all callbacks must have an additional closure parameter of type
void *, the value of which can be specified by client code. If possible, the value of the closure
parameter should be provided by client code at the same time a specific callback is registered (or
specified as a function argument). If a single closure parameter is shared by multiple callbacks,
flexibility is greatly reduced, and conflicts between different pieces of client code using the same
library object could be unresolvable. In some cases, it makes sense to provide a de-registration
callback which can be used to destroy the closure parameter when the callback is no longer used.
Callbacks can throw exceptions or call longjmp. If possible, all library objects should remain in a
valid state. (All further operations on them can fail, but it should be possible to deallocate them without
causing resource leaks.)
The presence of callbacks raises the question of whether functions provided by the library are reentrant.
Unless a library was designed for such use, bad things will happen if a callback function uses functions in
the same library (particularly if they are invoked on the same objects and manipulate the same state).
When the callback is invoked, the library can be in an inconsistent state. Reentrant functions are more
difficult to write than thread-safe functions (by definition, simple locking would immediately lead to
deadlocks). It is also difficult to decide what to do when destruction of an object which is currently
processing a callback is requested.
• umask
• file locks (fcntl locks in particular behave in surprising ways, and not just in multi-threaded
environments)
Library code should avoid manipulating these global process attributes. It should not rely on
environment variables, umask, the current working directory, or signal masks because these
attributes can be inherited from an untrusted source.
In addition, there are obvious process-wide aspects such as the virtual memory layout, the set of open
files and dynamic shared objects, but with the exception of shared objects, these can be manipulated
in a relatively isolated way.
Chapter 8.
File Descriptor Management
File descriptors are small, non-negative integers in userspace, and are backed on the kernel side with
complicated data structures which can sometimes grow very large.
Sometimes, it is necessary to close a file descriptor concurrently, while another thread might be about
to use it in a system call. In order to support this, a program needs to create a single special file
descriptor, one on which all I/O operations fail. One way to achieve this is to use socketpair, close
one of the descriptors, and call shutdown(fd, SHUT_RDWR) on the other.
When a descriptor is closed concurrently, the program does not call close on the descriptor. Instead,
it uses dup2 to replace the descriptor to be closed with the dummy descriptor created earlier.
This way, the kernel will not reuse the descriptor number, but it will carry out all other steps associated
with closing a descriptor (for instance, if the descriptor refers to a stream socket, the peer will be
notified).
This is just a sketch, and many details are missing. Additional data structures are needed to determine
when it is safe to really close the descriptor, and proper locking is required for that.
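The two building blocks of this sketch could look as follows (the function names are ours; the bookkeeping that decides when the real close may happen is omitted, as noted above):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Create the single dummy descriptor on which all I/O fails:
   a socketpair whose peer is closed and which is then shut down. */
int make_dummy_fd(void)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return -1;
    close(fds[1]);
    shutdown(fds[0], SHUT_RDWR);
    return fds[0];
}

/* "Close" fd concurrently: replace it with the dummy descriptor so
   that the kernel cannot reuse the descriptor number, while the
   other close-related steps (such as notifying a stream peer) are
   still carried out. */
int replace_with_dummy(int fd, int dummy_fd)
{
    return dup2(dummy_fd, fd);
}
```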
The SO_LINGER socket option alters the behavior of close, so that it will return only after the
lingering data has been processed, either by sending it to the peer successfully, or by discarding it
after the configured timeout. However, there is no interface which could perform this operation in
the background, so a separate userspace thread is needed for each close call, causing scalability
issues.
These problems are not related to the TIME_WAIT state commonly seen in netstat output. The kernel
automatically expires such sockets if necessary.
Usually, this behavior is not desirable. There are two ways to turn it off, that is, to prevent new process
images from inheriting the file descriptors in the parent process:
• Set the close-on-exec flag on all newly created file descriptors. Traditionally, this flag is controlled by
the FD_CLOEXEC flag, using F_GETFD and F_SETFD operations of the fcntl function.
However, in a multi-threaded process, there is a race condition: a subprocess could be created
between the time the descriptor is created and the time the FD_CLOEXEC flag is set. Therefore,
many system calls which create descriptors (such as open and openat) now accept the
O_CLOEXEC flag (SOCK_CLOEXEC for socket and socketpair), which causes the FD_CLOEXEC
flag to be set for the file descriptor in an atomic fashion. In addition, a few new system calls were
introduced, such as pipe2 and dup3.
The downside of this approach is that every descriptor needs to receive special treatment at the
time of creation, otherwise it is not completely effective.
• After calling fork, but before creating a new process image with execve, all file descriptors which
the child process will not need are closed.
Traditionally, this was implemented as a loop over file descriptors ranging from 3 to 255 and, later,
1023. But this is only an approximation because it is easy to create file descriptors outside this
range (see Section 8.3, “Dealing with the select limit”). Another approach reads /proc/self/fd
and closes the unexpected descriptors listed there, but it is much slower.
At present, environments which care about file descriptor leakage implement the second approach.
OpenJDK 6 and 7 are among them.
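The first approach, setting the close-on-exec flag at creation time, can be sketched as a small wrapper (the wrapper name is ours; the fcntl fallback is subject to the race described above):

```c
#include <fcntl.h>
#include <unistd.h>

/* Open path with the close-on-exec flag set.  With O_CLOEXEC the
   flag is applied atomically; the F_SETFD fallback leaves a window
   in which a concurrently forked child could inherit the descriptor. */
int open_cloexec(const char *path, int flags)
{
#ifdef O_CLOEXEC
    return open(path, flags | O_CLOEXEC);
#else
    int fd = open(path, flags);
    if (fd >= 0)
        (void) fcntl(fd, F_SETFD, FD_CLOEXEC);
    return fd;
#endif
}
```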
The select function only supports a maximum of FD_SETSIZE file descriptors (that is, the maximum
permitted value for a file descriptor is FD_SETSIZE - 1, usually 1023). If a process opens many files,
descriptors may exceed this limit. It is impossible to monitor such descriptors using select.
If a library which creates many file descriptors is used in the same process as a library which uses
select, at least one of them needs to be changed. Calls to select can be replaced with calls to
poll or another event handling mechanism. Replacing the select function is the recommended
approach.
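For a single descriptor, the replacement is mechanical; a sketch (the wrapper name is ours):

```c
#include <poll.h>

/* Wait until fd becomes readable, using poll instead of select.
   poll takes an array of pollfd structures and therefore imposes no
   FD_SETSIZE restriction on descriptor values.
   Returns > 0 if ready, 0 on timeout, -1 on error. */
int wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    return poll(&pfd, 1, timeout_ms);
}
```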
Alternatively, the library with high descriptor usage can relocate descriptors above the FD_SETSIZE
limit using the following procedure.
• Create the file descriptor fd as usual, preferably with the O_CLOEXEC flag.
• Duplicate it above the limit, using newfd = fcntl(fd, F_DUPFD_CLOEXEC, FD_SETSIZE) (or
F_DUPFD if the close-on-exec flag is not wanted).
• Check that the newfd result is non-negative, otherwise close fd and report an error, and return.
• Close fd, the original descriptor.
The new descriptor newfd has been allocated above the FD_SETSIZE limit. Even though this algorithm is
racy in the sense that the FD_SETSIZE first descriptors could fill up, a very high degree of physical
parallelism is required before this becomes a problem.
Chapter 9.
File system manipulation
Temporary files are covered in their own chapter, Chapter 10, Temporary files.
Accessing files across trust boundaries faces several challenges, particularly if an entire directory tree
is being traversed:
1. Another user might add file names to a writable directory at any time. This can interfere with file
creation and the order of names returned by readdir.
2. Merely opening and closing a file can have side effects. For instance, an automounter can be
triggered, or a tape device rewound. Opening a file on a local file system can block indefinitely,
due to mandatory file locking, unless the O_NONBLOCK flag is specified.
3. Hard links and symbolic links can redirect the effect of file system operations in unexpected ways.
The O_NOFOLLOW and AT_SYMLINK_NOFOLLOW variants of system calls only affect the final path
name component.
4. The structure of a directory tree can change. For example, the parent directory of what used to be
a subdirectory within the directory tree being processed could suddenly point outside that directory
tree.
Files should always be created with the O_CREAT and O_EXCL flags, so that creating the file will fail if
it already exists. This guards against the unexpected appearance of file names, either due to creation
of a new file, or hard-linking of an existing file. In multi-threaded programs, rather than manipulating
the umask, create the files with mode 000 if possible, and adjust it afterwards with fchmod.
To avoid issues related to symbolic links and directory tree restructuring, the “at” variants of system
calls have to be used (that is, functions like openat, fchownat, fchmodat, and unlinkat, together
with O_NOFOLLOW or AT_SYMLINK_NOFOLLOW). Path names passed to these functions must
have just a single component (that is, without a slash). When descending, the descriptors of parent
directories must be kept open. The missing opendirat function can be emulated with openat (with
an O_DIRECTORY flag, to avoid opening special files with side effects), followed by fdopendir.
If the “at” functions are not available, it is possible to emulate them by changing the current directory.
(Obviously, this only works if the process is not multi-threaded.) fchdir has to be used to change
the current directory, and the descriptors of the parent directories have to be kept open, just as with
the “at”-based approach. chdir("...") is unsafe because it might ascend outside the intended
directory tree.
This “at” function emulation is currently required when manipulating extended attributes. In this case,
the lsetxattr function can be used, with a relative path name consisting of a single component.
This also applies to SELinux contexts and the lsetfilecon function.
Currently, it is not possible to avoid opening special files and changes to files with hard links if the
directory containing them is owned by an untrusted user. (Device nodes can be hard-linked, just as
regular files.) fchmodat and fchownat affect files whose link count is greater than one. But opening
the files, checking that the link count is one with fstat, and using fchmod and fchown on the file
descriptor may have unwanted side effects, due to item 2 above. When creating directories, it is
therefore important to change the ownership and permissions only after the directory has been fully
created. Until that point, file names are stable, and no files with unexpected hard links can be
introduced.
Similarly, when just reading a directory owned by an untrusted user, it is currently impossible to reliably
avoid opening special files.
There is no workaround against the instability of the file list returned by readdir. Concurrent
modification of the directory can result in a list of files being returned which never actually existed on
disk.
Hard links and symbolic links can be safely deleted using unlinkat without further checks because
deletion only affects the name within the directory tree being processed.
One approach is to spawn a child process which runs under the target user and group IDs (both
effective and real IDs). Note that this child process can block indefinitely, even when processing
regular files only. For example, a special FUSE file system could cause the process to hang in
uninterruptible sleep inside a stat system call.
An existing process could change its user and group ID using setfsuid and setfsgid. (These
functions are preferred over seteuid and setegid because they do not allow the impersonated
user to send signals to the process.) These functions are not thread safe. In multi-threaded processes,
these operations need to be performed in a single-threaded child process. Unexpected blocking may
occur as well.
It is not recommended to try to reimplement the kernel permission checks in user space because
the required checks are complex. It is also very difficult to avoid race conditions during path name
resolution.
You should not write code in a way that assumes that there is an upper limit on the number of
subdirectories of a directory, the number of regular files in a directory, or the link count of an inode.
File system features
• Name length limits vary greatly, from eight to thousands of bytes. Path length limits differ as well.
Most systems impose an upper bound on path names passed to the kernel, but using relative
path names, it is possible to create and access files whose absolute path name is essentially of
unbounded length.
• Some file systems do not store names as fairly unrestricted byte sequences, as has traditionally
been the case on GNU systems. This means that some byte sequences (outside the
POSIX safe character set) are not valid names. Conversely, names of existing files may not be
representable as byte sequences, and the files are thus inaccessible on GNU systems. Some file
systems perform Unicode canonicalization on file names. These file systems preserve case, but
reading the name of a just-created file using readdir might still result in a different byte sequence.
• Permissions and owners are not universally supported (and SUID/SGID bits may not be available).
For example, FAT file systems assign ownership based on a mount option, and generally mark all
files as executable. Any attempt to change permissions would result in an error.
• Only some file systems support holes in files, that is, files where not all of the contents is backed
by disk storage.
• ioctl support (even fairly generic functionality such as FIEMAP for discovering physical file layout
and holes) is file-system-specific.
• Not all file systems support extended attributes, ACLs and SELinux metadata. Size and naming
restriction on extended attributes vary.
• Hard links may not be supported at all (FAT) or only within the same directory (AFS). Symbolic links
may not be available, either. Reflinks (hard links with copy-on-write semantics) are still very rare.
Recent systems restrict creation of hard links to users which own the target file or have read/write
access to it, but older systems do not.
• Renaming (or moving) files using rename can fail (even when stat indicates that the source and
target directories are located on the same file system). This system call should work if the old and
new paths are located in the same directory, though.
• Locking semantics vary among file systems. This affects advisory and mandatory locks. For
example, some network file systems do not allow deleting files which are opened by any process.
• Resolution of time stamps varies from two seconds to nanoseconds. Not all time stamps are
available on all file systems. File creation time (birth time) is not exposed over the stat/fstat
interface, even if stored by the file system.
The only reliable way to discover whether the file system still has space for a file is to try to create it. The f_bfree field should be reasonably accurate, though.
Chapter 10.
Temporary files
In this chapter, we describe how to create temporary files and directories, how to remove them, and
how to work with programs which do not create files in ways that are safe with a shared directory for
temporary files. General file system manipulation is treated in a separate chapter, Chapter 9, File
system manipulation.
• The location of the directory for temporary files must be obtained in a secure manner (that is,
untrusted environment variables must be ignored, see Section 11.3.1, “Accessing environment
variables”).
• A new file must be created. Reusing an existing file must be avoided (the /tmp race condition). This
is tricky because traditionally, system-wide temporary directories shared by all users are used.
• The file must be created in a way that makes it impossible for other users to open it.
• The descriptor for the temporary file should not leak to subprocesses.
Traditionally, temporary files are often used to reduce memory usage of programs. More and
more systems use RAM-based file systems such as tmpfs for storing temporary files, to increase
performance and decrease wear on Flash storage. As a result, spooling data to temporary files does
not result in any memory savings, and the related complexity can be avoided if the data is kept in
process memory.
• Use secure_getenv to obtain the value of the TMPDIR environment variable. If it is set, convert
the path to a fully-resolved absolute path, using realpath(path, NULL). Check if the new path
refers to a directory and is writeable. In this case, use it as the temporary directory.
Java does not support SUID/SGID programs, so you can use the
java.lang.System.getenv(String) method to obtain the value of the TMPDIR environment
variable, and follow the two steps described above. (Java's default directory selection does not honor
TMPDIR.)
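Like Java, plain Python scripts do not run with SUID/SGID privileges, so reading the environment directly is acceptable there as well. A sketch of the same TMPDIR selection logic in Python follows; the /tmp fallback is an assumption about the surrounding application:

```python
import os

def select_tmpdir():
    """Select the temporary directory, honoring TMPDIR when it is usable."""
    path = os.environ.get("TMPDIR")
    if path:
        # Convert to a fully-resolved absolute path.
        resolved = os.path.realpath(path)
        # Use it only if it refers to a writable directory.
        if os.path.isdir(resolved) and os.access(resolved, os.W_OK):
            return resolved
    # Assumed fallback when TMPDIR is unset or unusable.
    return "/tmp"
```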
The file is not removed automatically. It is not safe to rename or delete the file before processing,
or transform the name in any way (for example, by adding a file extension). If you need multiple
temporary files, call mkostemp multiple times. Do not create additional file names derived from the
name provided by a previous mkostemp call. However, it is safe to close the descriptor returned by
mkostemp and reopen the file using the generated name.
The Python class tempfile.NamedTemporaryFile provides similar functionality, except that the
file is deleted automatically by default. Note that you may have to use the file attribute to obtain
the actual file object because some programming interfaces cannot deal with file-like objects. The C
function mkostemp is also available as tempfile.mkstemp.
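A minimal sketch of the delete=False pattern with the Python class mentioned above, including the manual clean-up it requires:

```python
import os
import tempfile

# delete=False keeps the file after close, so its generated name can
# be passed to interfaces which require a real file name.
tmp = tempfile.NamedTemporaryFile(prefix="example-", delete=False)
try:
    tmp.write(b"payload")
    tmp.flush()
    # Closing the descriptor and reopening the generated name is safe.
    tmp.close()
    with open(tmp.name, "rb") as f:
        data = f.read()
finally:
    os.unlink(tmp.name)  # the file is not removed automatically
```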
Alternatively, if the maximum size of the temporary file is known beforehand, the fmemopen function
can be used to create a FILE * object which is backed by memory.
In Python, unnamed temporary files are provided by the tempfile.TemporaryFile class, and the
tempfile.SpooledTemporaryFile class provides a way to avoid creation of small temporary files.
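A brief sketch of both classes:

```python
import tempfile

# TemporaryFile creates an unnamed temporary file: on POSIX systems the
# directory entry is removed immediately, so other users cannot open it
# by name.
with tempfile.TemporaryFile() as tmp:
    tmp.write(b"scratch data")
    tmp.seek(0)
    data = tmp.read()

# SpooledTemporaryFile keeps small payloads in memory and only creates
# an actual temporary file once max_size is exceeded.
with tempfile.SpooledTemporaryFile(max_size=4096) as spooled:
    spooled.write(b"small payload")
    spooled.seek(0)
    payload = spooled.read()
```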
When creating files in the temporary directory, use automatically generated names, e.g., derived from
a sequential counter. Files with externally provided names could be picked up in unexpected contexts,
and crafted names could actually point outside of the temporary directory (due to directory traversal).
Removing a directory tree in a completely safe manner is complicated. Unless there are overriding
performance concerns, the rm program should be used, with the -rf and -- options.
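From Python, the removal can be delegated to rm accordingly; the directory tree created here is purely illustrative:

```python
import os
import subprocess
import tempfile

# Create a throwaway directory tree (illustrative only).
tree = tempfile.mkdtemp(prefix="example-")
with open(os.path.join(tree, "file.txt"), "w") as f:
    f.write("data")

# "--" terminates option processing, so a path starting with "-"
# cannot be misinterpreted as an option; -rf removes the tree.
subprocess.check_call(["rm", "-rf", "--", tree])
```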
• Create a temporary directory and place the file there. If possible, run the program in a subprocess
which uses the temporary directory as its current directory, with a restricted environment. Use
generated names for all files in that temporary directory. (See Section 10.4, “Temporary directories”.)
• Create the temporary file and pass the generated file name to the function or program. This only
works if the function or program can cope with a zero-length existing file. It is safe only under
additional assumptions:
• The function or program must not create additional files whose name is derived from the specified
file name or are otherwise predictable.
• The function or program must not delete the file before processing it.
It is often difficult to check whether these additional assumptions are met, so this approach is not recommended.
Chapter 11.
Processes
11.1. Safe process creation
This section describes how to create new child processes in a safe manner. In addition to the
concerns addressed below, there is the possibility of file descriptor leaks, see Section 8.2, “Preventing
file descriptor leaks to child processes”.
11.1.1. Obtaining the program path and the command line template
The name and path to the program being invoked should be hard-coded or controlled by a static
configuration file stored at a fixed location (an absolute file system path). The same applies to the
template for generating the command line.
The configured program name should be an absolute path. If it is a relative path, the contents of the
PATH must be obtained in a secure manner (see Section 11.3.1, “Accessing environment variables”). If
the PATH variable is not set or untrusted, the safe default /bin:/usr/bin must be used.
If too much flexibility is provided here, it may allow invocation of arbitrary programs without proper
authorization.
For C/C++, system should not be used. The posix_spawn function can be used instead, or a combination of fork and execve. (In some cases, it may be preferable to use vfork or the Linux-specific clone system call instead of fork.)
In Python, the subprocess module bypasses the shell by default (when the shell keyword argument is not set to True). os.system should not be used.
The Java class java.lang.ProcessBuilder can be used to create subprocesses without interference
from the system shell.
Portability notice
On Windows, there is no argument vector, only a single argument string. Each application is
responsible for parsing this string into an argument vector. There is considerable variance among
the quoting style recognized by applications. Some of them expand shell wildcards, others do not.
Extensive application-specific testing is required to make this secure.
Note that some common applications (notably ssh) unconditionally introduce the use of a shell, even
if invoked directly without a shell. It is difficult to use these applications in a secure manner. In this
case, untrusted data should be supplied by other means. For example, standard input could be used,
instead of the command line.
In C/C++, the environment should be constructed as an array of strings and passed as the envp
argument to posix_spawn or execve. The functions setenv, unsetenv and putenv should not be
used. They are not thread-safe and suffer from memory leaks.
Python programs need to specify a dict for the env argument of the subprocess.Popen constructor. The Java class java.lang.ProcessBuilder provides an environment() method, which returns a map that can be manipulated.
The following list provides guidelines for selecting the set of environment variables passed to the child
process.
• USER and HOME can be inherited from the parent process environment, or they can be initialized from the pwent structure for the user.
• The DISPLAY and XAUTHORITY variables should be passed to the subprocess if it is an X program.
Note that this will typically not work across trust boundaries because XAUTHORITY refers to a file
with 0600 permissions.
• The called process may need application-specific environment variables, for example for passing
passwords. (See Section 11.1.5, “Passing secrets to subprocesses”.)
• All other environment variables should be dropped. Names for new environment variables should
not be accepted from untrusted sources.
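As a sketch, the guidelines above might translate into Python as follows; the env utility is invoked only to show the resulting child environment:

```python
import os
import pwd
import subprocess

# Build the child environment from scratch instead of inheriting it.
entry = pwd.getpwuid(os.getuid())
child_env = {
    "PATH": "/bin:/usr/bin",   # safe default search path
    "USER": entry.pw_name,     # initialized from the pwent structure
    "HOME": entry.pw_dir,
}
result = subprocess.run(["env"], env=child_env,
                        capture_output=True, text=True, check=True)
```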
The following recommendations assume that the program being invoked uses GNU-style option
processing using getopt_long. This convention is widely used, but it is just that, and individual
programs might interpret a command line in a different way.
If the untrusted data has to go into an option, use the --option-name=VALUE syntax, placing the
option and its value into the same command line argument. This avoids any potential confusion if the
data starts with -.
For positional arguments, terminate the option list with a single -- marker after the last option, and
include the data at the right position. The -- marker terminates option processing, and the data will
not be treated as an option even if it starts with a dash.
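Both conventions can be illustrated with a hypothetical command line; the program name sometool and the --output option are made up for illustration:

```python
# Hypothetical untrusted values supplied by an attacker.
untrusted_option_value = "-not-an-option"
untrusted_path = "--delete-everything"

# Fuse option and value into one argument, and terminate option
# processing with "--" before any positional arguments.
argv = ["sometool",                               # hypothetical program
        "--output=%s" % untrusted_option_value,
        "--",
        untrusted_path]
```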
Passing secrets to subprocesses
Portability notice
On some UNIX-like systems (notably Solaris), environment variables can be read by any system
user, just like command lines.
If the environment-based approach cannot be used due to portability concerns, the data can be
passed on standard input. Some programs (notably gpg) use special file descriptors whose numbers
are specified on the command line. Temporary files are an option as well, but they might give digital
forensics access to sensitive data (such as passphrases) because it is difficult to safely delete them in
all cases.
• The parent process calls wait, waitpid, waitid, wait3 or wait4, without specifying a process
ID. This will deliver any matching process ID. This approach is typically used from within event
loops.
• The parent process calls waitpid, waitid, or wait4, with a specific process ID. Only data for the
specific process ID is returned. This is typically used in code which spawns a single subprocess in a
synchronous manner.
• The parent process installs a handler for the SIGCHLD signal, using sigaction, and specifies the SA_NOCLDWAIT flag. This approach could be used by event loops as well.
None of these approaches can be used to wait for child process termination in a completely thread-safe manner. The parent process might execute an event loop in another thread, which could pick up
the termination signal. This means that libraries typically cannot make free use of child processes (for
example, to run problematic code with reduced privileges in a separate address space).
At the moment, the parent process should explicitly wait for termination of the child process using
waitpid or waitid, and hope that the status is not collected by an event loop first.
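The synchronous pattern looks like this in Python (a sketch using os.fork and os.waitpid; the exit status 7 is arbitrary, and os.waitstatus_to_exitcode requires Python 3.9 or later):

```python
import os

# Spawn a child and wait for that specific PID, the synchronous
# pattern described above.
pid = os.fork()
if pid == 0:
    os._exit(7)  # child terminates immediately with status 7

waited_pid, status = os.waitpid(pid, 0)  # waits only for this PID
exit_code = os.waitstatus_to_exitcode(status)
```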
Linux supports fscaps, which can grant additional capabilities to a process in a finer-grained manner.
Additional mechanisms can be provided by loadable security modules.
When such a trust transition has happened, the process runs in a potentially hostile environment.
Additional care is necessary not to rely on any untrusted information. These concerns also apply to
libraries which can be linked into such processes.
• Compile your C/C++ sources with -D_GNU_SOURCE. The Autoconf macro AC_GNU_SOURCE
ensures this.
• Check for the presence of the secure_getenv and __secure_getenv functions. The Autoconf directive AC_CHECK_FUNCS([__secure_getenv secure_getenv]) performs these checks.
• Arrange for a proper definition of the secure_getenv function. See Example 11.1, “Obtaining a
definition for secure_getenv”.
• Use secure_getenv instead of getenv to obtain the value of critical environment variables. secure_getenv will pretend the variable has not been set if the process environment is not trusted.
Critical environment variables are debugging flags, configuration file locations, plug-in and log file
locations, and anything else that might be used to bypass security restrictions or cause a privileged
process to behave in an unexpected way.
Either the secure_getenv function or the __secure_getenv function is available from GNU libc.
#include <stdlib.h>
#ifndef HAVE_SECURE_GETENV
# ifdef HAVE__SECURE_GETENV
# define secure_getenv __secure_getenv
# else
# error neither secure_getenv nor __secure_getenv are available
# endif
#endif
11.4. Daemons
Background processes providing system services (daemons) need to decouple themselves from the
controlling terminal and the parent process environment:
• Fork.
• In the child process, call setsid. The parent process can simply exit (using _exit, to avoid
running clean-up actions twice).
• In the child process, fork again. Processing continues in the child process. Again, the parent
process should just exit.
• Replace the descriptors 0, 1, 2 with a descriptor for /dev/null. Logging should be redirected to
syslog.
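The steps above can be sketched in Python; this is a minimal sketch, and a real daemon would also adjust signal dispositions and the working directory:

```python
import os

def daemonize():
    """Detach from the controlling terminal using the classic
    double-fork sequence described above."""
    if os.fork() > 0:
        os._exit(0)      # first parent exits without running clean-ups
    os.setsid()          # become session leader, drop the terminal
    if os.fork() > 0:
        os._exit(0)      # second parent exits as well
    # Replace descriptors 0, 1, 2 with /dev/null.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    os.close(devnull)
```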
Older instructions for creating daemon processes recommended a call to umask(0). This is risky
because it often leads to world-writable files and directories, resulting in security vulnerabilities such as
arbitrary process termination by untrusted local users, or log file truncation. If the umask needs setting,
a restrictive value such as 027 or 077 is recommended.
Other aspects of the process environment may have to be changed as well (environment variables, signal handler disposition).
It is increasingly common that server processes do not run as background processes, but as regular foreground processes under a supervising master process (such as systemd). Server processes should
offer a command line option which disables forking and replacement of the standard output and
standard error streams. Such an option is also useful for debugging.
Similar concerns apply to environment variables, the contents of the current directory and its
subdirectories.
Consequently, careful analysis is required if it is safe to pass untrusted data to another program.
Chapter 12.
Serialization and Deserialization
When reading variable-sized objects, do not allocate large amounts of data solely based on the value
of a size field. If possible, grow the data structure as more data is read from the source, and stop when
no data is available. This helps to avoid denial-of-service attacks where small amounts of input data result in enormous memory allocations during decoding. Alternatively, you can impose reasonable
bounds on memory allocations, but some protocols do not permit this.
In new datagram-oriented protocols, unique numbers such as sequence numbers or identifiers for
fragment reassembly (see Section 12.3, “Fragmentation”) should be at least 64 bits large, and really
should not be smaller than 32 bits in size. Protocols should not permit fragments with overlapping
contents.
12.3. Fragmentation
Some serialization formats use frames or protocol data units (PDUs) on lower levels which are
smaller than the PDUs on higher levels. With such an architecture, higher-level PDUs may have to be
fragmented into smaller frames during serialization, and frames may need reassembly into large PDUs
during deserialization.
Serialization formats may use conceptually similar structures for completely different purposes, for
example storing multiple layers and color channels in a single image file.
When fragmenting PDUs, establish a reasonable lower bound for the size of individual fragments
(as large as possible—limits as low as one or even zero can add substantial overhead). Avoid
fragmentation if at all possible, and try to obtain the maximum acceptable fragment length from a
trusted data source.
• Avoid allocating significant amounts of resources without proper authentication. Allocate memory for the unfragmented PDU as more and more fragments are encountered, and not based on the initially advertised unfragmented PDU size, unless there is a sufficiently low limit on the unfragmented PDU size, so that over-allocation cannot lead to performance problems.
unrelated fragments, as it can happen with small fragment IDs (see Section 12.3.1, “Fragment IDs”).
It also guards to some extent against deliberate injection of fragments, by guessing fragment IDs.
• Carefully keep track of which bytes in the unfragmented PDU have been covered by fragments
so far. If message reordering is a concern, the most straightforward data structure for this is an
array of bits, with one bit for every byte (or other atomic unit) in the unfragmented PDU. Complete
reassembly can be determined by increasing a counter of set bits in the bit array as the bit array is
updated, taking overlapping fragments into consideration.
• Reject overlapping fragments (that is, multiple fragments which provide data at the same offset of
the PDU being fragmented), unless the protocol explicitly requires accepting overlapping fragments.
The bit array used for tracking already arrived bytes can be used for this purpose.
• Check for conflicting values of unfragmented PDU lengths (if this length information is part of every
fragment) and reject fragments which are inconsistent.
• Validate fragment lengths and offsets of individual fragments against the unfragmented PDU
length (if they are present). Check that the last byte in the fragment does not lie after the end
of the unfragmented PDU. Avoid integer overflows in these computations (see Section 1.1.3,
“Recommendations for integer arithmetic”).
If the transport may be subject to blind PDU injection (again, like UDP), the fragment ID must be generated randomly. If the fragment ID is 64 bits or larger (strongly recommended), it can be generated in a completely random fashion for most traffic volumes. If it is less than 64 bits large (so that accidental collisions can happen if a lot of PDUs are transmitted), the fragment ID should be incremented sequentially from a starting value. The starting value should be derived using an HMAC-like construction from the endpoint addresses, using a long-lived random key. This construction ensures that despite the limited range of the ID, accidental collisions are as unlikely as possible. (This will not work reliably with really short fragment IDs, such as the 16-bit IDs used by the Internet Protocol.)
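A sketch of this construction; the key handling and the 32-bit ID width are illustrative assumptions:

```python
import hashlib
import hmac
import os
import struct

# Long-lived random key, generated once per endpoint (assumption).
key = os.urandom(32)

def initial_fragment_id(src_addr, dst_addr):
    """Derive the starting fragment ID for an endpoint pair using an
    HMAC over the endpoint addresses, truncated to 32 bits."""
    mac = hmac.new(key, src_addr.encode() + b"|" + dst_addr.encode(),
                   hashlib.sha256).digest()
    return struct.unpack(">I", mac[:4])[0]

def next_fragment_id(current):
    # Increment sequentially, wrapping within the 32-bit ID space.
    return (current + 1) & 0xFFFFFFFF
```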
The following serialization frameworks are in the first category, are known to be unsafe, and must not
be used for untrusted data:
When using a type-directed deserialization format where the types of the deserialized objects are
specified by the programmer, make sure that the objects which can be instantiated cannot perform any
destructive actions in their destructors, even when the data members have been manipulated.
In general, JSON decoders do not suffer from this problem. But you must not use the eval function to parse JSON objects in JavaScript; even with the regular expression filter from RFC 4627, there are still information leaks remaining. JSON-based formats can still turn out risky if they serve as an encoding form for any of the serialization frameworks listed above.
• In a namespace declaration:
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
• In an entity definition:
• In a notation:
Originally, these external references were intended as unique identifiers, but many XML implementations use them for locating the data for the referenced element. This causes unwanted network traffic, and may disclose file system contents or otherwise unreachable network resources, so this functionality should be disabled.
Depending on the XML library, external references might be processed not just when parsing XML, but also when generating it.
Consequently, the processing of internal DTD subsets should be disabled if possible, and only trusted DTDs should be processed. If a particular XML application does not permit such restrictions, then application-specific limits are called for.
XInclude processing is also fairly complex and may pull in support for the XPointer and XPath
specifications, considerably increasing the amount of code required for XML processing.
XML schemas and RELAX NG (via the xsd: prefix) directly support textual regular expressions which
are not required to be deterministic.
This handler must be installed when the XML_Parser object is created (Example 12.2, “Creating an
Expat XML parser”).
Using Qt for XML parsing
Example 12.3, “A QtXml entity handler which blocks entity processing” shows an entity handler which
always returns errors, causing parsing to stop when encountering entity declarations.
bool
NoEntityHandler::attributeDecl
(const QString&, const QString&, const QString&, const QString&,
const QString&)
{
return false;
}
bool
NoEntityHandler::internalEntityDecl(const QString&, const QString&)
{
return false;
}
bool
NoEntityHandler::externalEntityDecl(const QString&, const QString&, const
QString&)
{
return false;
}
QString
NoEntityHandler::errorString() const
{
return "XML declaration not permitted";
}
NoEntityReader::NoEntityReader()
{
QXmlSimpleReader::setDeclHandler(&handler);
setFeature("http://xml.org/sax/features/namespaces", true);
setFeature("http://xml.org/sax/features/namespace-prefixes", false);
}
void
NoEntityReader::setDeclHandler(QXmlDeclHandler *)
{
// Ignore the handler which was passed in.
}
Our NoEntityReader class can be used with one of the overloaded QDomDocument::setContent
methods. Example 12.5, “Parsing an XML document with QDomDocument, without entity expansion”
shows how the buffer object (of type QByteArray) is wrapped as a QXmlInputSource. After
calling the setContent method, you should check the return value and report any error.
Example 12.5. Parsing an XML document with QDomDocument, without entity expansion
NoEntityReader reader;
QBuffer buffer(&data);
buffer.open(QIODevice::ReadOnly);
QXmlInputSource source(&buffer);
QDomDocument doc;
QString errorMsg;
int errorLine;
int errorColumn;
bool okay = doc.setContent
(&source, &reader, &errorMsg, &errorLine, &errorColumn);
Using OpenJDK for XML parsing and validation
The approach taken to deal with entity expansion differs from the general recommendation in Section 12.5.2, “Entity expansion”. We enable the feature flag javax.xml.XMLConstants.FEATURE_SECURE_PROCESSING, which enforces heuristic restrictions
on the number of entity expansions. Note that this flag alone does not prevent resolution of external
references (system IDs or public IDs), so it is slightly misnamed.
Example 12.6. Helper class to prevent DTD external entity resolution in OpenJDK
Example 12.8, “Java imports for OpenJDK XML parsing” shows the imports used by the examples.
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.sax.SAXSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.w3c.dom.Document;
import org.w3c.dom.ls.LSInput;
import org.w3c.dom.ls.LSResourceResolver;
import org.xml.sax.EntityResolver;
import org.xml.sax.ErrorHandler;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.xml.sax.XMLReader;
// Turn on validation.
// This step can be omitted if validation is not desired.
factory.setValidating(true);
External entity references are prohibited using the NoEntityResolver class in Example 12.6,
“Helper class to prevent DTD external entity resolution in OpenJDK”. Because external DTD
references are prohibited, DTD validation (if enabled) will only happen against the internal DTD subset
embedded in the XML document.
The NoResourceResolver class is defined in Example 12.7, “Helper class to prevent schema
resolution in OpenJDK”.
If you need to validate a document against an XML schema, use the code in Example 12.9, “DOM-
based XML parsing in OpenJDK” to create the document, but do not enable validation at this point.
Then use Example 12.11, “Validation of a DOM document against an XML schema in OpenJDK” to
perform the schema-based validation on the org.w3c.dom.Document instance document.
The class java.beans.XMLDecoder acts as a bridge between the Java object serialization format and
XML. It is close to impossible to securely deserialize Java objects in this format from untrusted inputs,
so its use is not recommended, as with the Java object serialization format itself. See Section 12.4,
“Library support for deserialization”.
Protocol Encoders
You should avoid copying data directly from a received packet during encoding, disregarding the
format. Propagating malformed data could enable attacks on other recipients of that data.
When using C or C++ and copying whole data structures directly into the output, make sure that you
do not leak information in padding bytes between fields or at the end of the struct.
Chapter 13.
Cryptography
13.1. Primitives
Choosing from the following cryptographic primitives is recommended:
• SHA-256
• HMAC-SHA-256
• HMAC-SHA-1
Other cryptographic algorithms can be used if they are required for interoperability with existing
software:
• RSA with key sizes larger than 1024 bits and legacy padding
• AES-192
• AES-256
• SHA-1
• HMAC-MD5
Important
These primitives are difficult to use in a secure way. Custom implementation of security protocols
should be avoided. For protecting confidentiality and integrity of network transmissions, TLS
should be used (Chapter 16, Transport Layer Security).
13.2. Randomness
The following facilities can be used to generate unpredictable and non-repeating values. When these
functions are used without special safeguards, each individual random value should be at least 12
bytes long.
• gnutls_rnd in GNUTLS, with GNUTLS_RND_RANDOM as the first argument (usable for high data
rates)
• os.urandom in Python
All these functions should be non-blocking, and they should not wait until physical randomness
becomes available. (Some cryptography providers for Java can cause java.security.SecureRandom to
block, however.) Those functions which do not obtain all bits directly from /dev/urandom are suitable
for high data rates because they do not deplete the system-wide entropy pool.
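For example, in Python, drawing a value somewhat longer than the 12-byte minimum suggested above:

```python
import os

# os.urandom reads from the kernel's non-blocking randomness source;
# 16 bytes comfortably exceeds the 12-byte guideline.
token = os.urandom(16)
another = os.urandom(16)
```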
Both RAND_bytes and PK11_GenerateRandom have three-state return values (with conflicting
meanings). Careful error checking is required. Please review the documentation when using
these functions.
Generating randomness for cryptographic keys in long-term use may need different steps and is best
left to cryptographic libraries.
Chapter 14.
RPM packaging
This chapter deals with security-related concerns around RPM packaging. It has to be read in
conjunction with distribution-specific packaging guidelines.
Important
The way the key is generated may not be suitable for key material of critical value. (openssl
genrsa uses, but does not require, entropy from a physical source of randomness, among other
things.) Such keys should be stored in a hardware security module if possible, and generated
from random bits reserved for this purpose derived from a non-deterministic physical source.
In the spec file, we define two RPM variables which contain the names of the files used to store the
private and public key, and the user name for the service:
# Name of the user owning the file with the private key
%define tlsuser %{name}
# Name of the directory which contains the key and certificate files
%define tlsdir %{_sysconfdir}/%{name}
%define tlskey %{tlsdir}/%{name}.key
%define tlscert %{tlsdir}/%{name}.crt
These variables likely need adjustment based on the needs of the package.
Typically, the file with the private key needs to be owned by the system user which needs to read it,
%{tlsuser} (not root). In order to avoid races, if the directory %{tlsdir} is owned by the services
user, you should use the code in Example 14.1, “Creating a key pair in a user-owned directory”. The
invocation of su with the -s /bin/bash argument is necessary in case the login shell for the user
has been disabled.
%post
if [ $1 -eq 1 ] ; then
if ! test -e %{tlskey} ; then
su -s /bin/bash \
-c "umask 077 && openssl genrsa -out %{tlskey} 2048 2>/dev/null" \
%{tlsuser}
fi
if ! test -e %{tlscert} ; then
cn="Automatically generated certificate for the %{tlsuser} service"
req_args="-key %{tlskey} -out %{tlscert} -days 7305 -subj \"/CN=$cn/\""
su -s /bin/bash \
-c "openssl req -new -x509 -extensions usr_cert $req_args" \
%{tlsuser}
fi
fi
%files
%dir %attr(0755,%{tlsuser},%{tlsuser}) %{tlsdir}
%ghost %attr(0600,%{tlsuser},%{tlsuser}) %config(noreplace) %{tlskey}
%ghost %attr(0644,%{tlsuser},%{tlsuser}) %config(noreplace) %{tlscert}
The files containing the key material are marked as ghost configuration files. This ensures that they
are tracked in the RPM database as associated with the package, but RPM will not create them
when the package is installed and will not verify their contents (the %ghost part), or delete the files when the
package is uninstalled (the %config(noreplace) part).
If the directory %{tlsdir} is owned by root, use the code in Example 14.2, “Creating a key pair in a
root-owned directory”.
%post
if [ $1 -eq 1 ] ; then
if ! test -e %{tlskey} ; then
(umask 077 && openssl genrsa -out %{tlskey} 2048 2>/dev/null)
chown %{tlsuser} %{tlskey}
fi
if ! test -e %{tlscert} ; then
cn="Automatically generated certificate for the %{tlsuser} service"
openssl req -new -x509 -extensions usr_cert \
-key %{tlskey} -out %{tlscert} -days 7305 -subj "/CN=$cn/"
fi
fi
%files
%dir %attr(0755,root,root) %{tlsdir}
%ghost %attr(0600,%{tlsuser},%{tlsuser}) %config(noreplace) %{tlskey}
%ghost %attr(0644,root,root) %config(noreplace) %{tlscert}
In order for this to work, the package which generates the keys must require the openssl package.
If the user which owns the key file is generated by a different package, the package generating the
certificate must specify a Requires(pre): on the package which creates the user. This ensures that
the user account will exist when it is needed for the su or chown invocation.
Generating X.509 self-signed certificates before service start
Important
The caveats about the way the key is generated in Section 14.1, “Generating X.509 self-signed
certificates during installation” apply to this procedure as well.
Generating key material before service start may happen very early during boot, when the kernel
randomness pool has not yet been initialized. Currently, the only way to check for the initialization is to
look for the kernel message random: nonblocking pool is initialized. In theory, it is also
possible to read from /dev/random while generating the key material (instead of /dev/urandom),
but this can block not just during the boot process, but also much later at run time, and generally
results in a poor user experience.
Part III. Implementing
Security Features
Chapter 15.
Authentication and Authorization
• The server uses a TLS certificate which is valid according to the web browser public key
infrastructure, and the client verifies the certificate and the host name.
• The server uses a TLS certificate which is expected by the client (perhaps it is stored in a configuration file read by the client). In this case, no host name checking is required.
• On Linux, UNIX domain sockets (of the PF_UNIX protocol family, sometimes called PF_LOCAL)
are restricted by file system permissions. If the server socket path is not world-writable, the server
identity cannot be spoofed by local users.
• Port numbers less than 1024 (trusted ports) can only be used by root, so if a UDP or TCP server is
running on the local host and it uses a trusted port, its identity is assured. (Not all operating systems
enforce the trusted ports concept, and the network might not be trusted, so it is only useful on the
local system.)
TLS (Chapter 16, Transport Layer Security) is the recommended way for securing connections over
untrusted networks.
If the server port number is 1024 or higher, a local user can impersonate the process by binding to this
port, perhaps after crashing the real server by exploiting a denial-of-service vulnerability.
Host-based authentication trusts the network and may not offer sufficient granularity, so it has to be
considered a weak form of authentication. On the other hand, IP-based authentication can be made
extremely robust and can be applied very early in input processing, so it offers an opportunity for
significantly reducing the number of potential attackers for many services.
The names returned by the gethostbyaddr and getnameinfo functions cannot be trusted. (DNS PTR
records can be set to arbitrary values, not just names belonging to the address owner.) If these names
are used for ACL matching, a forward lookup using gethostbyname or getaddrinfo has to be
performed. The name is only valid if the original address is found among the results of the forward
lookup (double-reverse lookup).
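The double-reverse lookup can be sketched as follows. This is an illustrative Python helper (the function name and the injectable forward_lookup parameter are not part of any standard API); the resolver is injected so the logic can be followed without network access:

```python
import socket

def name_matches_address(address, name, forward_lookup=socket.getaddrinfo):
    """Accept NAME (e.g., obtained from a PTR lookup) only if a forward
    lookup of NAME yields ADDRESS again (double-reverse lookup)."""
    try:
        results = forward_lookup(name, None)
    except OSError:
        return False
    # Each getaddrinfo result is (family, type, proto, canonname,
    # sockaddr); the address is the first element of sockaddr.
    return any(sockaddr[0] == address for *_, sockaddr in results)
```

Only after this check succeeds may the name be used for ACL matching.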
An empty ACL should deny all access (deny-by-default). If an empty ACL permits all access, configuring
any access list must switch to deny-by-default for all unconfigured protocols, in both the name-based and
address-based variants.
Similarly, if an address or name is not matched by the list, it should be denied. However, many
implementations behave differently, so the actual behavior must be documented properly.
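A minimal sketch of deny-by-default matching (the helper is an illustration, not from the original text):

```python
def acl_allows(acl_entries, peer):
    """Deny-by-default ACL check: an empty ACL denies everything,
    and a peer matching no entry is denied as well."""
    return any(peer == entry for entry in acl_entries)
```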
IPv6 addresses can embed IPv4 addresses. There is no universally correct way to deal with this
ambiguity. The behavior of the ACL implementation should be documented.
Nowadays, most systems support the SO_PEERCRED (Linux) or LOCAL_PEERCRED (FreeBSD) socket
options, or the getpeereid function (other BSDs, Mac OS X). These interfaces provide direct access to the
(effective) user ID on the other end of a domain socket connection, without cooperation from the other
end.
Historically, credentials passing was implemented using ancillary data in the sendmsg and recvmsg
functions. On some systems, only credentials data that the peer has explicitly sent can be received,
and the kernel checks the data for correctness on the sending side. This means that both peers need
to deal with ancillary data. Compared to that, the modern interfaces are easier to use. Both sets of
interfaces vary considerably among UNIX-like systems, unfortunately.
If you want to authenticate based on supplementary groups, you should obtain the user ID using one
of these methods, and look up the list of supplementary groups using getpwuid (or getpwuid_r)
and getgrouplist. Using the PID and information from /proc/PID/status is prone to race
conditions and insecure.
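On Linux, the SO_PEERCRED approach can be sketched in Python; struct ucred consists of three native ints (pid, uid, gid). The helper name is illustrative:

```python
import os
import socket
import struct

def peer_credentials(sock):
    """Return (pid, uid, gid) of the peer of a connected UNIX domain
    socket, using the Linux-specific SO_PEERCRED socket option."""
    ucred = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    return struct.unpack("3i", ucred)

# Demonstration using a socket pair within one process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = peer_credentials(a)
a.close()
b.close()
```

The user ID obtained this way can then be used with the getpwuid and getgrouplist equivalents for group-based authorization.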
When processing Netlink messages from the kernel, it is important to check that these messages
actually originate from the kernel, by checking that the port ID (or PID) field nl_pid in the
sockaddr_nl structure is 0. (This structure can be obtained using recvfrom or recvmsg; it is
different from the nlmsghdr structure.) The kernel does not prevent other processes from sending
unicast Netlink messages, but the nl_pid field in the sender's socket address will be non-zero in
such cases.
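In Python, recvfrom on an AF_NETLINK socket returns the sender address as a (pid, groups) tuple, so the check reduces to the following (the helper function is illustrative):

```python
def netlink_message_from_kernel(sender_address):
    """Return True only if the Netlink sender address (a (pid, groups)
    tuple as returned by recvfrom) carries port ID 0, i.e. the kernel."""
    nl_pid, _groups = sender_address
    return nl_pid == 0
```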
Applications should not use AF_NETLINK sockets as an IPC mechanism among processes, but should
prefer UNIX domain sockets for this task.
Chapter 16. Transport Layer Security
• Most TLS implementations have questionable default TLS cipher suites. Most of them enable
anonymous Diffie-Hellman key exchange (but we generally want servers to authenticate
themselves). Many do not disable ciphers which are subject to brute-force attacks because of
restricted key lengths. Some even disable all variants of AES in the default configuration.
When overriding the cipher suite defaults, it is recommended to disable all cipher suites which are
not present on a whitelist, instead of simply enabling a list of cipher suites. This way, if an algorithm
is disabled by default in the TLS implementation in a future security update, the application will not
re-enable it.
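With the Python ssl module, such a whitelist can be expressed directly; the specific cipher string below is only an illustration, not a recommendation:

```python
import ssl

# Enable only suites matching an explicit whitelist, instead of
# subtracting known-bad suites from the implementation default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")

# Anonymous key exchange and null encryption are absent because
# they were never whitelisted.
enabled = [cipher["name"] for cipher in ctx.get_ciphers()]
```

Note that on newer OpenSSL versions the TLS 1.3 suites are controlled separately and remain enabled regardless of this cipher string.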
• The name which is used in certificate validation must match the name provided by the user or
configuration file. No host name canonicalization or IP address lookup must be performed.
• The TLS handshake has very poor performance if the TCP Nagle algorithm is active. You should
switch on the TCP_NODELAY socket option (at least for the duration of the handshake), or use the
Linux-specific TCP_CORK option.
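In Python, disabling the Nagle algorithm looks like this:

```python
import socket

def disable_nagle(sock):
    """Switch on TCP_NODELAY so that small TLS handshake messages
    are sent immediately instead of being coalesced."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
disable_nagle(sock)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```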
• Both client and server should work towards an orderly connection shutdown, that is, send
close_notify alerts and respond to them. This is especially important if the upper-layer protocol
does not provide means to detect connection truncation (like some uses of HTTP).
• When implementing a server using event-driven programming, it is important to handle the TLS
handshake properly because it includes multiple network round-trips which can block when an
ordinary TCP accept would not. Otherwise, a client which fails to complete the TLS handshake for
some reason will prevent the server from handling input from other clients.
• Unlike regular file descriptors, TLS connections cannot be passed between processes. Some TLS
implementations add additional restrictions, and TLS connections generally cannot be used across
fork function calls (see Section 11.6, “fork as a primitive for parallelism”).
• The value 0 indicates semantic failure (for example, a signature verification which was unsuccessful
because the signing certificate was self-signed).
• The value -1 indicates a low-level error in the system, such as failure to allocate memory using
malloc.
Treating such tri-state return values as booleans can lead to security vulnerabilities. Note that some
OpenSSL functions return boolean results or yet another set of status indicators. Each function needs
to be checked individually.
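The tri-state convention can be made explicit with a small wrapper. The sketch below is a hypothetical Python illustration of the pattern, not an OpenSSL interface:

```python
def check_tristate(ret):
    """Translate an OpenSSL-style tri-state result into an explicit
    outcome, instead of treating it as a boolean (which would conflate
    the semantic failure 0 with the low-level error -1)."""
    if ret == 1:
        return
    if ret == 0:
        raise ValueError("semantic failure (e.g., verification failed)")
    if ret == -1:
        raise OSError("low-level error (e.g., memory allocation failed)")
    raise RuntimeError("unexpected return value: %d" % ret)
```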
Recovering precise error information is difficult. Example 16.2, “Obtaining OpenSSL error codes”
shows how to obtain a more precise error code after a function call on an SSL object has failed.
However, there are still cases where no detailed error information is available (e.g., if SSL_shutdown
fails due to a connection teardown by the other end).
The OPENSSL_config function is documented to never fail. In reality, it can terminate the entire
process if there is a failure accessing the configuration file. An error message is written to standard
error, but it might not be visible if the function is called from a daemon process.
OpenSSL contains two separate ASN.1 DER decoders. One set of decoders operates on BIO handles
(the input/output stream abstraction provided by OpenSSL); their decoder function names start with
d2i_ and end in _fp or _bio (e.g., d2i_X509_fp or d2i_X509_bio). These decoders must not be
used for parsing data from untrusted sources; instead, the variants without the _fp and _bio (e.g.,
d2i_X509) shall be used. The BIO variants have received considerably less testing and are not very
robust.
For the same reason, the OpenSSL command line tools (such as openssl x509) are generally
less robust than the actual library code. They use the BIO functions internally, and not the
more robust variants.
The command line tools do not always indicate failure in the exit status of the openssl process. For
instance, a verification failure in openssl verify results in an exit status of zero.
OpenSSL command-line commands, such as openssl genrsa, do not ensure that physical entropy
is used for key generation—they obtain entropy from /dev/urandom and other sources, but not from
/dev/random. This can result in weak keys if the system lacks a proper entropy source (e.g., a virtual
machine with solid state storage). Depending on local policies, keys generated by these OpenSSL
tools should not be used in high-value, critical functions.
The OpenSSL server and client applications (openssl s_client and openssl s_server) are
debugging tools and should never be used as generic clients. For instance, the s_client tool reacts in
a surprising way to lines starting with R and Q.
OpenSSL allows application code to access private key material over documented interfaces. This can
significantly increase the part of the code base which has to undergo security certification.
The gnutls_global_init function must be called before using any functionality provided by the
library. This function is not thread-safe, so external locking is required, but it is not clear which lock
should be used. Omitting the synchronization does not just lead to a memory leak, as it is suggested in
the GNUTLS documentation, but to undefined behavior because there is no barrier that would enforce
memory ordering.
The gnutls_global_deinit function does not actually deallocate all resources allocated by
gnutls_global_init. It is currently not thread-safe. Therefore, it is best to avoid calling it
altogether.
The X.509 implementation in GNUTLS is rather lenient. For example, it is possible to create and
process X.509 version 1 certificates which carry extensions. These certificates are (correctly) rejected
by other implementations.
OpenJDK (in the source code as published by Oracle) and other implementations of the Java platform
require that the system administrator has installed so-called unlimited strength jurisdiction policy files.
Without this step, it is not possible to use the secure algorithms which offer sufficient cryptographic
strength. Most downstream redistributors of OpenJDK remove this requirement.
Some versions of OpenJDK use /dev/random as the randomness source for nonces and other
random data which is needed for TLS operation, but does not actually require physical randomness.
As a result, TLS applications can block, waiting for more bits to become available in /dev/random.
If the NSPR descriptor is in an unexpected state, the SSL_ForceHandshake function can succeed,
but no TLS handshake takes place, the peer is not authenticated, and subsequent data is exchanged
in the clear.
NSS disables itself if it detects that the process underwent a fork after the library has been initialized.
This behavior is required by the PKCS#11 API specification.
• The client must configure the TLS library to use a set of trusted root certificates. These certificates
are provided by the system in /etc/ssl/certs or files derived from it.
• The client selects sufficiently strong cryptographic primitives and disables insecure ones (such as
no-op encryption). Compression and SSL version 2 support must be disabled (including the SSLv2-
compatible handshake).
• The client initiates the TLS connection. The Server Name Indication extension should be used
if supported by the TLS implementation. Before switching to the encrypted connection state, the
contents of all input and output buffers must be discarded.
• The client needs to validate the peer certificate provided by the server, that is, the client must check
that there is a cryptographically protected chain from a trusted root certificate to the peer certificate.
(Depending on the TLS implementation, a TLS handshake can succeed even if the certificate
cannot be validated.)
• The client must check that the configured or user-provided server name matches the peer certificate
provided by the server.
It is safe to provide users detailed diagnostics on certificate validation failures. Other causes
of handshake failures and, generally speaking, any details on other errors reported by the TLS
implementation (particularly exception tracebacks), must not be divulged in ways that make them
accessible to potential attackers. Otherwise, it is possible to create decryption oracles.
Important
Depending on the application, revocation checking (against certificate revocations lists or via
OCSP) and session resumption are important aspects of a production-quality client. These aspects
are not yet covered.
Implementing TLS Clients With OpenSSL
The OpenSSL library needs explicit initialization (see Example 16.3, “OpenSSL library initialization”).
After that, a context object has to be created, which acts as a factory for connection objects
(Example 16.4, “OpenSSL client context creation”). We use an explicit cipher list so that we do not
pick up any strange ciphers when OpenSSL is upgraded. The actual version requested in the client
hello depends on additional restrictions in the OpenSSL library. If possible, you should follow the
example code and use the default list of trusted root certificate authorities provided by the system
because you would have to maintain your own set otherwise, which can be cumbersome.
A single context object can be used to create multiple connection objects. It is safe to use the same
SSL_CTX object for creating connections concurrently from multiple threads, provided that the
SSL_CTX object is not modified (e.g., callbacks must not be changed).
After creating the TCP socket and disabling the Nagle algorithm (per Example 16.1, “Deactivating
the TCP Nagle algorithm”), the actual connection object needs to be created, as shown in
Example 16.4, “OpenSSL client context creation”. If the handshake started by SSL_connect fails,
the ssl_print_error_and_exit function from Example 16.2, “Obtaining OpenSSL error codes” is
called.
if (ssl == NULL) {
ERR_print_errors(bio_err);
exit(1);
}
SSL_set_fd(ssl, sockfd);
X509_free(peercert);
The connection object can be used for sending and receiving data, as in Example 16.6, “Using an
OpenSSL connection to send and receive data”. It is also possible to create a BIO object and use the
SSL object as the underlying transport, using BIO_set_ssl.
When it is time to close the connection, the SSL_shutdown function needs to be called twice for
an orderly, synchronous connection termination (Example 16.7, “Closing an OpenSSL connection
in an orderly fashion”). This exchanges close_notify alerts with the server. The additional logic
is required to deal with an unexpected close_notify from the server. Note that it is necessary to
explicitly close the underlying socket after the connection object has been freed.
Example 16.8, “Closing an OpenSSL connection in an orderly fashion” shows how to deallocate the
context object when it is no longer needed because no further TLS connections will be established.
SSL_CTX_free(ctx);
gnutls_global_init();
Failing to do so can result in obscure failures in Base64 decoding. See Section 16.1.2, “GNUTLS
Pitfalls” for additional aspects of initialization.
Implementing TLS Clients With GNUTLS
Before setting up TLS connections, a credentials object has to be allocated and initialized with the set
of trusted root CAs (Example 16.9, “Initializing a GNUTLS credentials structure”).
After the last TLS connection has been closed, this credentials object should be freed:
gnutls_certificate_free_credentials(cred);
During its lifetime, the credentials object can be used to initialize TLS session objects from multiple
threads, provided that it is not changed.
Once the TCP connection has been established, the Nagle algorithm should be disabled (see
Example 16.1, “Deactivating the TCP Nagle algorithm”). After that, the socket can be associated with
a new GNUTLS session object. The previously allocated credentials object provides the set of root
CAs. The NORMAL set of cipher suites and protocols provides a reasonable default. Then the TLS
handshake must be initiated. This is shown in Example 16.10, “Establishing a TLS client connection
using GNUTLS”.
if (ret != GNUTLS_E_SUCCESS) {
fprintf(stderr, "error: gnutls_priority_set_direct: %s\n"
"error: at: \"%s\"\n", gnutls_strerror(ret), errptr);
exit(1);
}
// Associate the socket with the session object and set the server
// name.
gnutls_transport_set_ptr(session, (gnutls_transport_ptr_t)(uintptr_t)sockfd);
ret = gnutls_server_name_set(session, GNUTLS_NAME_DNS,
host, strlen(host));
if (ret != GNUTLS_E_SUCCESS) {
fprintf(stderr, "error: gnutls_server_name_set: %s\n",
gnutls_strerror(ret));
exit(1);
}
After the handshake has been completed, the server certificate needs to be verified
(Example 16.11, “Verifying a server certificate using GNUTLS”). In the example, the user-defined
certificate_validity_override function is called if the verification fails, so that a separate,
user-specific trust store can be checked. This function call can be omitted if the functionality is not
needed.
In the next step (Example 16.12, “Matching the server host name and certificate in a
GNUTLS client”), the certificate must be matched against the host name (note the unusual
return value from gnutls_x509_crt_check_hostname). Again, an override function
certificate_host_name_override is called. Note that the override must be keyed to the
certificate and the host name. The function call can be omitted if the override is not needed.
Example 16.12. Matching the server host name and certificate in a GNUTLS client
In newer GNUTLS versions, certificate checking and host name validation can be combined using the
gnutls_certificate_verify_peers3 function.
An established TLS session can be used for sending and receiving data, as in Example 16.13, “Using
a GNUTLS session”.
char buf[4096];
In order to shut down a connection in an orderly manner, you should call the gnutls_bye function.
Finally, the session object can be deallocated using gnutls_deinit (see Example 16.14, “Using a
GNUTLS session”).
import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.security.cert.CertificateEncodingException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import sun.security.util.HostnameChecker;
TLS connections are established using an SSLContext instance. With a properly configured
OpenJDK installation, the SunJSSE provider uses the system-wide set of trusted root certificate
authorities, so no further configuration is necessary. For backwards compatibility with OpenJDK 6, the
TLSv1 protocol has to be supported as a fall-back option. This is shown in Example 16.15, “Setting up
an SSLContext for OpenJDK TLS clients”.
Implementing TLS Clients With OpenJDK
In addition to the context, a TLS parameter object will be needed which adjusts the cipher suites
and protocols (Example 16.16, “Setting up SSLParameters for TLS use with OpenJDK”). Like the
context, these parameters can be reused for multiple TLS connections.
As initialized above, the parameter object does not yet require host name checking. This has to be
enabled separately, and this is only supported by OpenJDK 7 and later:
params.setEndpointIdentificationAlgorithm("HTTPS");
All application protocols can use the "HTTPS" algorithm. (The algorithms have minor differences with
regard to wildcard handling, which should not matter in practice.)
Example 16.17, “Establishing a TLS connection with OpenJDK” shows how to establish the
connection. Before the handshake is initialized, the protocol and cipher configuration has to be
performed, by applying the parameter object params. (After this point, changes to params will not
affect this TLS socket.) As mentioned initially, host name checking requires using an internal API on
OpenJDK 6.
Starting with OpenJDK 7, the last lines can be omitted, provided that host name verification has been
enabled by calling the setEndpointIdentificationAlgorithm method on the params object
(before it was applied to the socket).
The TLS socket can be used as a regular socket, as shown in Example 16.18, “Using a TLS client
socket in OpenJDK”.
socket.getOutputStream().write("GET / HTTP/1.0\r\n\r\n"
.getBytes(Charset.forName("UTF-8")));
byte[] buffer = new byte[4096];
int count = socket.getInputStream().read(buffer);
System.out.write(buffer, 0, count);
In the trust manager shown in Example 16.19, “A custom trust manager for OpenJDK TLS clients”,
the server certificate is identified by its SHA-256 hash.
@Override
public void checkClientTrusted(X509Certificate[] chain, String authType)
throws CertificateException {
throw new UnsupportedOperationException();
}
@Override
public void checkServerTrusted(X509Certificate[] chain,
String authType) throws CertificateException {
byte[] digest = getCertificateDigest(chain[0]);
String digestHex = formatHex(digest);
if (Arrays.equals(digest, certHash)) {
System.err.println("info: accepting certificate: " + digestHex);
} else {
throw new CertificateException("certificate rejected: " +
digestHex);
}
}
@Override
public X509Certificate[] getAcceptedIssuers() {
return new X509Certificate[0];
}
}
This trust manager has to be passed to the init method of the SSLContext object, as shown in
Example 16.20, “Using a custom TLS trust manager with OpenJDK”.
SSLContext ctx;
try {
ctx = SSLContext.getInstance("TLSv1.2", "SunJSSE");
} catch (NoSuchAlgorithmException e) {
try {
ctx = SSLContext.getInstance("TLSv1", "SunJSSE");
} catch (NoSuchAlgorithmException e1) {
throw new AssertionError(e1);
} catch (NoSuchProviderException e1) {
throw new AssertionError(e1);
}
} catch (NoSuchProviderException e) {
throw new AssertionError(e);
}
MyTrustManager tm = new MyTrustManager(certHash);
ctx.init(null, new TrustManager[] {tm}, null);
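The SHA-256 comparison performed in checkServerTrusted can be sketched language-neutrally; the Python helper below (an illustration, not part of the OpenJDK example) shows the fingerprint computation over the DER-encoded certificate:

```python
import hashlib

def certificate_fingerprint(der_bytes):
    """Return the lowercase hex SHA-256 digest of a DER-encoded
    certificate, for comparison against a pinned value."""
    return hashlib.sha256(der_bytes).hexdigest()
```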
When certificate overrides are in place, host name verification should not be performed because
there is no security requirement that the host name in the certificate matches the host name used
to establish the connection (and it often will not). However, without host name verification, it is not
possible to perform transparent fallback to certification validation using the system certificate store.
The approach described above works with OpenJDK 6 and later versions. Starting with OpenJDK 7, it
is possible to use a custom subclass of the javax.net.ssl.X509ExtendedTrustManager class.
The OpenJDK TLS implementation will call the new methods, passing along TLS session information.
This can be used to implement certificate overrides as a fallback (if certificate or host name verification
fails), and a trust manager object can be used for multiple servers because the server address is
available to the trust manager.
Keep in mind that the error handling needs to be improved before the code can be used in production.
Using NSS needs several header files, as shown in Example 16.21, “Include files for NSS”.
Initializing the NSS library is shown in Example 16.22, “Initializing the NSS library”. This initialization
procedure overrides global state. We only call NSS_SetDomesticPolicy if there are no strong
ciphers available, assuming that it has already been called otherwise. This avoids overriding the
process-wide cipher suite policy unnecessarily.
The simplest way to configure the trusted root certificates involves loading the libnssckbi.so NSS
module with a call to the SECMOD_LoadUserModule function. The root certificates are compiled into
this module. (The PEM module for NSS, libnsspem.so, offers a way to load trusted CA certificates
from a file.)
Implementing TLS Clients With NSS
// Ciphers to enable.
static const PRUint16 good_ciphers[] = {
TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_256_CBC_SHA,
SSL_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_NULL_WITH_NULL_NULL // sentinel
};
Some of the effects of the initialization can be reverted with the following function calls:
SECMOD_DestroyModule(module);
NSS_ShutdownContext(ctx);
After NSS has been initialized, the TLS connection can be created (Example 16.23, “Creating a TLS
connection with NSS”). The internal PR_ImportTCPSocket function is used to turn the POSIX file
descriptor sockfd into an NSPR file descriptor. (This function is de facto part of the NSS public ABI,
so it will not go away.) Creating the TLS-capable file descriptor requires a model descriptor, which is
configured with the desired set of protocols. The model descriptor is not needed anymore after TLS
support has been activated for the existing connection descriptor.
Triggering the actual handshake requires three function calls, SSL_ResetHandshake, SSL_SetURL,
and SSL_ForceHandshake. (If SSL_ResetHandshake is omitted, SSL_ForceHandshake will
succeed, but the data will not be encrypted.) During the handshake, the certificate is verified and
matched against the host name.
exit(1);
}
nspr = newfd;
PR_Close(model);
}
After the connection has been established, Example 16.24, “Using NSS for sending and receiving
data” shows how to use the NSPR descriptor to communicate with the server.
char buf[4096];
snprintf(buf, sizeof(buf), "GET / HTTP/1.0\r\nHost: %s\r\n\r\n", host);
PRInt32 ret = PR_Write(nspr, buf, strlen(buf));
if (ret < 0) {
const PRErrorCode err = PR_GetError();
fprintf(stderr, "error: PR_Write error %d: %s\n",
err, PR_ErrorToName(err));
exit(1);
}
ret = PR_Read(nspr, buf, sizeof(buf));
if (ret < 0) {
const PRErrorCode err = PR_GetError();
fprintf(stderr, "error: PR_Read error %d: %s\n",
err, PR_ErrorToName(err));
exit(1);
}
Example 16.25, “Closing NSS client connections” shows how to close the connection.
Important
Currently, most Python functions which accept https:// URLs or otherwise implement HTTPS
support do not perform certificate validation at all. (For example, this is true for the httplib and
xmlrpclib modules.) If you use HTTPS, you should not use the built-in HTTP clients. The Curl
class in the curl module, as provided by the python-pycurl package, implements proper
certificate validation.
The ssl module currently does not perform host name checking on the server certificate.
Example 16.26, “Implementing TLS host name checking in Python (without wildcard support)” shows
how to implement certificate matching, using the parsed certificate returned by getpeercert.
Example 16.26. Implementing TLS host name checking in Python (without wildcard support)
To turn a regular, connected TCP socket into a TLS-enabled socket, use the ssl.wrap_socket
function. The function call in Example 16.27, “Establishing a TLS client connection with Python”
provides additional arguments to override questionable defaults in OpenSSL and in the Python
module.
Implementing TLS Clients With Python
The ssl module (and OpenSSL) performs certificate validation, but the certificate must be compared
manually against the host name, by calling the check_host_name function defined above.
sock = ssl.wrap_socket(sock,
ciphers="HIGH:-aNULL:-eNULL:-PSK:RC4-SHA:RC4-MD5",
ssl_version=ssl.PROTOCOL_TLSv1,
cert_reqs=ssl.CERT_REQUIRED,
ca_certs='/etc/ssl/certs/ca-bundle.crt')
# getpeercert() triggers the handshake as a side effect.
if not check_host_name(sock.getpeercert(), host):
raise IOError("peer certificate does not match host name")
After the connection has been established, the TLS socket can be used like a regular socket:
sock.close()
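On Python 3.4 and later, ssl.create_default_context avoids most of this manual work: the returned context requires certificate validation and, when a server_hostname is passed to wrap_socket, also checks the host name. A minimal sketch:

```python
import ssl

ctx = ssl.create_default_context()
# The default context loads the system trust store, requires a valid
# certificate chain, and enables host name checking.
```

A connection would then be wrapped with ctx.wrap_socket(sock, server_hostname=host).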
Appendix A. Revision History
Revision 1.2-1 Wed Jul 16 2014 Florian Weimer [email protected]
C: Corrected the strncat example
C: Mention mixed signed/unsigned comparisons
C: Unsigned overflow checking example
C++: operator new[] has been fixed in GCC
C++: Additional material on std::string, iterators
OpenSSL: Mention openssl genrsa entropy issue
Packaging: X.509 key generation
Go, Vala: Add short chapters
Serialization: Notes on fragmentation and reassembly
https://bugzilla.redhat.com/show_bug.cgi?id=995595