SUSE Linux Enterprise Server 11 SP4
www.suse.com
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU
Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section
being this copyright notice and license. A copy of the license version 1.2 is included in the section en-
titled “GNU Free Documentation License”.
For SUSE and Novell trademarks, see the Novell Trademark and Service Mark list http://
www.novell.com/company/legal/trademarks/tmlist.html. All other third party
trademarks are the property of their respective owners. A trademark symbol (®, ™ etc.) denotes a
SUSE or Novell trademark; an asterisk (*) denotes a third party trademark.
All information found in this book has been compiled with utmost attention to detail. However, this
does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the transla-
tors shall be held liable for possible errors or the consequences thereof.
Contents
About This Guide ix
1 Available Documentation .......................................................................... x
2 Feedback ............................................................................................. xii
3 Documentation Conventions ................................................................... xiii
I Basics 1
1 General Notes on System Tuning 3
1.1 Be Sure What Problem to Solve ............................................................. 3
1.2 Rule Out Common Problems ................................................................. 4
1.3 Finding the Bottleneck .......................................................................... 4
1.4 Step-by-step Tuning .............................................................................. 5
II System Monitoring 7
2 System Monitoring Utilities 9
2.1 Multi-Purpose Tools ............................................................................. 9
2.2 System Information ............................................................................. 17
2.3 Processes ........................................................................................... 21
2.4 Memory ............................................................................................ 28
2.5 Networking ........................................................................................ 31
2.6 The /proc File System ...................................................................... 35
2.7 Hardware Information ......................................................................... 39
2.8 Files and File Systems ......................................................................... 40
2.9 User Information ................................................................................ 43
2.10 Time and Date ................................................................................. 43
2.11 Graph Your Data: RRDtool ................................................................ 44
Generally it is not possible to ship a distribution that is optimized by default for all
kinds of workloads, for the simple reason that different workloads vary substantially
in various aspects—most importantly I/O access patterns, memory access patterns, and
process scheduling. A behavior that perfectly suits a certain workload might reduce the
performance of a completely different workload (for example, I/O-intensive databases
usually have completely different requirements than CPU-intensive tasks, such
as video encoding). The great versatility of Linux makes it possible to configure your
system in a way that brings out the best in each usage scenario.
This manual introduces you to means of monitoring and analyzing your system. It describes
methods to manage system resources and to tune your system. This guide does not offer
recipes for special scenarios, because each server has its own demands. Rather, it
enables you to thoroughly analyze your servers and make the most out of them.
Some programs or packages mentioned in this guide are only available from
the SUSE Linux Enterprise Software Development Kit (SDK). The SDK is an
add-on product for SUSE Linux Enterprise Server and is available for down-
load from http://download.suse.com/.
Many chapters in this manual contain links to additional documentation resources. This
includes additional documentation that is available on the system as well as documenta-
tion available on the Internet.
For an overview of the documentation available for your product and the latest docu-
mentation updates, refer to http://www.suse.com/doc or to the following sec-
tion:
1 Available Documentation
We provide HTML and PDF versions of our books in different languages. The follow-
ing manuals for users and administrators are available for this product:
Virtualization with KVM for IBM System z (↑Virtualization with KVM for IBM System
z)
Offers an introduction to setting up and managing virtualization with KVM (Ker-
nel-based Virtual Machine) on SUSE Linux Enterprise Server. Learn how to man-
AutoYaST (↑AutoYaST)
AutoYaST is a system for installing one or more SUSE Linux Enterprise systems
automatically and without user intervention, using an AutoYaST profile that con-
tains installation and configuration data. The manual guides you through the basic
steps of auto-installation: preparation, installation, and configuration.
In addition to the comprehensive manuals, several quick start guides are available:
Find HTML versions of most product manuals in your installed system under /usr/
share/doc/manual or in the help centers of your desktop. Find the latest docu-
mentation updates at http://www.suse.com/doc where you can download PDF
or HTML versions of the manuals for your product.
2 Feedback
Several feedback channels are available:
To report bugs for a product component, log in to the Novell Customer Center
from http://www.suse.com/support/ and select My Support > Service
Request.
User Comments
We want to hear your comments about and suggestions for this manual and the
other documentation included with this product. Use the User Comments fea-
ture at the bottom of each page in the online documentation or go to http://
www.suse.com/doc/feedback.html and enter your comments there.
Mail
For feedback on the documentation of this product, you can also send a mail to
doc-team@suse.de. Make sure to include the document title, the product ver-
sion, and the publication date of the documentation. To report errors or suggest en-
hancements, provide a concise description of the problem and refer to the respec-
tive section number and page (or URL).
3 Documentation Conventions
The following typographical conventions are used in this manual:
• Alt, Alt + F1: a key to press or a key combination; keys are shown in uppercase as on
a keyboard
►ipseries zseries: This paragraph is only relevant for the architectures System z
and ipseries. The arrows mark the beginning and the end of the text block. ◄
1.4 Step-by-step Tuning
Furthermore, make sure you can apply a measurement to your problem, otherwise you
will not be able to verify whether the tuning was a success. You should always be able
to compare “before” and “after”.
• Check (using top or ps) whether a certain process misbehaves by eating up unusual
amounts of CPU time or memory.
• In case of I/O problems with physical disks, make sure it is not caused by hardware
problems (check the disk with the smartmontools) or by a full disk.
• Ensure that background jobs are scheduled to be carried out at times when the server load
is low. Those jobs should also run with low priority (set via nice).
• If the machine runs several services using the same resources, consider moving ser-
vices to another server.
For each of the described commands, examples of the relevant outputs are present-
ed. In the examples, the first line is the command itself (after the > or # sign prompt).
Omissions are indicated with square brackets ([...]) and long lines are wrapped
where necessary. Line breaks for long lines are indicated by a backslash (\).
# command -x -y
output line 1
output line 2
output line 3 is annoyingly long, so long that \
we have to break it
output line 4
[...]
output line 98
output line 99
The descriptions have been kept short so that we can include as many utilities as possi-
ble. Further information for all the commands can be found in the manual pages. Most
of the commands also understand the parameter --help, which produces a brief list
of possible parameters.
2.1.1 vmstat
vmstat collects information about processes, memory, I/O, interrupts and CPU. If
called without a sampling rate, it displays average values since the last reboot. When
called with a sampling rate, it displays actual samples:
tux@mercury:~> vmstat -a 2
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu-------
r b swpd free inact active si so bi bo in cs us sy id wa st
0 0 0 750992 570648 548848 0 0 0 1 8 9 0 0 100 0 0
0 0 0 750984 570648 548912 0 0 0 0 63 48 1 0 99 0 0
0 0 0 751000 570648 548912 0 0 0 0 55 47 0 0 100 0 0
0 0 0 751000 570648 548912 0 0 0 0 56 50 0 0 100 0 0
0 0 0 751016 570648 548944 0 0 0 0 57 50 0 0 100 0 0
tux@mercury:~> vmstat 2
procs -----------memory----------- ---swap-- -----io---- -system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
32 1 26236 459640 110240 6312648 0 0 9944 2 4552 6597 95 5 0 0 0
23 1 26236 396728 110336 6136224 0 0 9588 0 4468 6273 94 6 0 0 0
35 0 26236 554920 110508 6166508 0 0 7684 27992 4474 4700 95 5 0 0 0
28 0 26236 518184 110516 6039996 0 0 10830 4 4446 4670 94 6 0 0 0
21 5 26236 716468 110684 6074872 0 0 8734 20534 4512 4061 96 4 0 0 0
TIP
The first line of the vmstat output always displays average values since the
last reboot.
r
Shows the number of processes in the run queue. These processes are waiting for
a free CPU slot to be executed. If the number of processes in this column is con-
stantly higher than the number of CPUs available, this is an indication of insuffi-
cient CPU power.
swpd
The amount of swap space (KB) currently used.
free
The amount of unused memory (KB).
inact
Recently unused memory that can be reclaimed. This column is only visible when
calling vmstat with the parameter -a (recommended).
active
Recently used memory that normally does not get reclaimed. This column is only
visible when calling vmstat with the parameter -a (recommended).
buff
File buffer cache (KB) in RAM. This column is not visible when calling vmstat
with the parameter -a (recommended).
cache
Page cache (KB) in RAM. This column is not visible when calling vmstat with
the parameter -a (recommended).
si
Amount of data (KB) that is moved from swap to RAM per second. High values
over a long period of time in this column are an indication that the machine would
benefit from more RAM.
so
Amount of data (KB) that is moved from RAM to swap per second. High val-
ues over a longer period of time in this column are an indication that the machine
would benefit from more RAM.
bi
Number of blocks per second received from a block device (e.g. a disk read). Note
that swapping also impacts the values shown here.
bo
Number of blocks per second sent to a block device (e.g. a disk write). Note that
swapping also impacts the values shown here.
cs
Number of context switches per second. Simplified, this means that the kernel has
to replace the executable code of one program in memory with that of another pro-
gram.
us
Percentage of CPU usage from user processes.
sy
Percentage of CPU usage from system processes.
id
Percentage of CPU time spent idling. If this value is zero over a longer period of
time, your CPU(s) are working to full capacity. This is not necessarily a bad sign
—rather refer to the values in columns r and b to determine if your machine is
equipped with sufficient CPU power.
wa
If "wa" time is non-zero, it indicates throughput lost due to waiting for I/O. This
may be inevitable, for example, if a file is being read for the first time, background
writeback cannot keep up, and so on. It can also be an indicator for a hardware
bottleneck (network or hard disk). Lastly, it can indicate a potential for tuning the
virtual memory manager (refer to Chapter 15, Tuning the Memory Management
Subsystem (page 177)).
st
Percentage of CPU time used by virtual machines.
sar and sadc are part of the sysstat package. Install the package
either with YaST, or with zypper in sysstat.
• Data is collected every ten minutes, and a summary report is generated every six hours
(see /etc/sysstat/sysstat.cron).
If you need to customize the configuration, copy the sa1 and sa2 scripts and adjust
them according to your needs. Replace the link /etc/cron.d/sysstat with a
customized copy of /etc/sysstat/sysstat.cron calling your scripts.
Find examples of useful sar calls and their interpretation below. For detailed infor-
mation on the meaning of each column, refer to the sar man page (man 1 sar). The
man page also documents further options and reports; sar offers plenty of them.
If the value for %iowait (percentage of the CPU being idle while waiting for I/O) is
significantly higher than zero over a longer period of time, there is a bottleneck in the
I/O system (network or hard disk). If the %idle value is zero over a longer period of
time, your CPU(s) are working to full capacity.
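Both values appear in sar's CPU utilization report. A call such as the following prints
that report five times at ten-second intervals (output omitted here; add -P ALL to also
list per-CPU values):
mercury:~ # sar 10 5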
mercury:~ # sar -r 10 5
Linux 2.6.31.12-0.2-default (mercury) 03/05/10 _x86_64_ (2 CPU)
The last two columns (kbcommit and %commit) show an approximation of the total
amount of memory (RAM plus swap) the current workload would need in the worst
case (in kilobytes or percent, respectively).
mercury:~ # sar -B 10 5
Linux 2.6.31.12-0.2-default (mercury) 03/05/10 _x86_64_ (2 CPU)
16:11:43 pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
16:11:53 225.20 104.00 91993.90 0.00 87572.60 0.00 0.00 0.00 0.00
16:12:03 718.32 601.00 82612.01 2.20 99785.69 560.56 839.24 1132.23 80.89
16:12:13 1222.00 1672.40 103126.00 1.70 106529.00 1136.00 982.40 1172.20 55.33
16:12:23 112.18 77.84 113406.59 0.10 97581.24 35.13 127.74 159.38 97.86
16:12:33 817.22 81.28 121312.91 9.41 111442.44 0.00 0.00 0.00 0.00
Average: 618.72 507.20 102494.86 2.68 100578.98 346.24 389.76 492.60 66.93
The majflt/s (major faults per second) column shows how many pages are loaded from
disk (swap) into memory. A large number of major faults slows down the system and
is an indication of insufficient main memory. The %vmeff column shows the ratio of
pages reclaimed from the main memory cache or the swap cache (pgsteal/s) to pages
scanned (pgscand/s). It is a measurement of the efficiency of page
reclaim. Healthy values are either near 100 (almost every scanned page is successfully
reclaimed) or 0 (no pages have been scanned). The value should not drop below 30.
mercury:~ # sar -d -p 10 5
Linux 2.6.31.12-0.2-default (neo) 03/05/10 _x86_64_ (2 CPU)
16:28:31 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
16:28:41 sdc 11.51 98.50 653.45 65.32 0.10 8.83 4.87 5.61
16:28:41 scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16:28:41 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
16:28:51 sdc 15.38 329.27 465.93 51.69 0.10 6.39 4.70 7.23
16:28:51 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
16:29:01 sdc 32.47 876.72 647.35 46.94 0.33 10.20 3.67 11.91
16:29:01 scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16:29:01 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
16:29:11 sdc 48.75 2852.45 366.77 66.04 0.82 16.93 4.91 23.94
16:29:11 scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16:29:11 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
16:29:21 sdc 13.20 362.40 412.00 58.67 0.16 12.03 6.09 8.04
16:29:21 scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
Average: sdc 24.26 903.52 509.12 58.23 0.30 12.49 4.68 11.34
Average: scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
If your machine uses multiple disks, you will receive the best performance if I/O re-
quests are evenly spread over all disks. Compare the Average values for tps, rd_sec/s,
and wr_sec/s of all disks. Constantly high values in the svctm and %util columns are
an indication that the disk is almost permanently busy and cannot keep up with the I/O requests.
The first iostat report shows statistics collected since the system was booted. Subse-
quent reports cover the time since the previous report.
tux@mercury:~> iostat
Linux 2.6.32.7-0.2-default (geeko@buildhost) 02/24/10 _x86_64_
When invoked with the -n option, iostat adds statistics of network file systems
(NFS) load. The option -x shows extended statistics information.
You can also specify which device should be monitored at what time intervals. For ex-
ample, iostat -p sda 3 5 will display five reports at three second intervals for
device sda.
iostat is part of the sysstat package. To use it, install the package with
zypper in sysstat.
For example, pidstat -C top 2 3 prints the load statistic for tasks whose com-
mand name includes the string “top”. There will be three reports printed at two second
intervals.
tux@mercury:~> pidstat -C top 2 3
Linux 2.6.27.19-5-default (geeko@buildhost) 03/23/2009 _x86_64_
The command lsof lists all the files currently open when used without any parame-
ters. Since there are often thousands of open files, listing all of them is rarely useful;
it usually makes sense to restrict the output, for example to the current shell, whose
process ID is available in the special shell variable $$.
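For example, to list only the files opened by the shell you are working in (output not
shown here):
tux@mercury:~> lsof -p $$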
When used with -i, lsof lists currently open Internet files as well:
tux@mercury:~> lsof -i
[...]
pidgin 4349 tux 17r IPv4 15194 0t0 TCP \
jupiter.example.com:58542->www.example.net:https (ESTABLISHED)
pidgin 4349 tux 21u IPv4 15583 0t0 TCP \
jupiter.example.com:37051->aol.example.org:aol (ESTABLISHED)
evolution 4578 tux 38u IPv4 16102 0t0 TCP \
jupiter.example.com:57419->imap.example.com:imaps (ESTABLISHED)
npviewer. 9425 tux 40u IPv4 24769 0t0 TCP \
jupiter.example.com:51416->www.example.com:http (CLOSE_WAIT)
npviewer. 9425 tux 49u IPv4 24814 0t0 TCP \
jupiter.example.com:43964->www.example.org:http (CLOSE_WAIT)
ssh 17394 tux 3u IPv4 40654 0t0 TCP \
jupiter.example.com:35454->saturn.example.com:ssh (ESTABLISHED)
Only the root user is allowed to monitor udev events by running the udevadm
command.
2.3 Processes
To list all processes with user and command line information, use ps axu:
tux@mercury:~> ps axu
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 696 272 ? S 12:59 0:01 init [5]
root 2 0.0 0.0 0 0 ? SN 12:59 0:00 [ksoftirqd
root 3 0.0 0.0 0 0 ? S< 12:59 0:00 [events
[...]
tux 4047 0.0 6.0 158548 31400 ? Ssl 13:02 0:06 mono-best
tux 4057 0.0 0.7 9036 3684 ? Sl 13:02 0:00 /opt/gnome
tux 4067 0.0 0.1 2204 636 ? S 13:02 0:00 /opt/gnome
tux 4072 0.0 1.0 15996 5160 ? Ss 13:02 0:00 gnome-scre
tux 4114 0.0 3.7 130988 19172 ? SLl 13:06 0:04 sound-juic
tux 4818 0.0 0.3 4192 1812 pts/0 Ss 15:59 0:00 -bash
tux 4959 0.0 0.1 2324 816 pts/0 R+ 16:17 0:00 ps axu
To check how many sshd processes are running, use the option -p together with the
command pidof, which lists the process IDs of the given processes.
tux@mercury:~> ps -p $(pidof sshd)
PID TTY STAT TIME COMMAND
3524 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pid
4813 ? Ss 0:00 sshd: tux [priv]
4817 ? R 0:00 sshd: tux@pts/0
The process list can be formatted according to your needs. The option -L returns a list
of all keywords. Enter the following command to issue a list of all processes sorted by
memory usage:
tux@mercury:~> ps ax --format pid,rss,cmd --sort rss
PID RSS CMD
2 0 [ksoftirqd/0]
Useful ps Calls
ps axo pid,%cpu,rss,vsz,args,wchan
Shows every process, their PID, CPU usage ratio, memory size (resident and virtu-
al), name, and the kernel function (wchan) they are sleeping in.
ps axfo pid,args
Show a process tree.
The parameter -p adds the process ID to a given name. To have the command lines
displayed as well, use the -a parameter:
By default the output is sorted by CPU usage (column %CPU, shortcut Shift + P). Use
the following shortcuts to change the sort field:
To use any other field for sorting, press F and select a field from the list. To toggle the
sort order, use Shift + R.
The parameter -U UID monitors only the processes associated with a particular
user. Replace UID with the user ID of the user. Use top -U $(id -u) to show the
processes of the current user.
You can run hyptop in interactive mode (default) or in batch mode with the -b op-
tion. Help in the interactive mode is available by pressing ? after hyptop is started.
TIP
iotop is not installed by default. You need to install it manually with zypper
in iotop as root.
iotop displays columns for the I/O bandwidth read and written by each process dur-
ing the sampling period. It also displays the percentage of time the process spent while
swapping in and while waiting on I/O. For each process, its I/O priority (class/level) is
shown. In addition, the total I/O bandwidth read and written during the sampling peri-
od is displayed at the top of the interface.
Use the left and right arrows to change the sorting, R to reverse the sorting order, O to
toggle the --only option, P to toggle the --processes option, A to toggle the --
accumulated option, Q to quit or I to change the priority of a thread or a process'
thread(s). Any other key will force a refresh.
The following is an example output of the command iotop --only while find and
emacs are running:
tux@mercury:~> iotop --only
Total DISK READ: 50.61 K/s | Total DISK WRITE: 11.68 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
3416 be/4 ke 50.61 K/s 0.00 B/s 0.00 % 4.05 % find /
275 be/3 root 0.00 B/s 3.89 K/s 0.00 % 2.34 % [jbd2/sda2-8]
5055 be/4 ke 0.00 B/s 3.89 K/s 0.00 % 0.04 % emacs
iotop can be also used in a batch mode (-b) and its output stored in a file for later
analysis. For a complete set of options, see the manual page (man 1 iotop).
Adjusting the niceness level is useful when running a long, non-time-critical process
that uses large amounts of CPU time, such as compiling a kernel on a system that
also performs other tasks. Making such a process “nicer” ensures that the other tasks,
for example a Web server, have a higher priority.
Running nice command increments the current nice level for the given command by
10. Using nice -n level command lets you specify a new niceness relative to
the current one.
To renice all processes owned by a specific user, use the option -u user. Process
groups are reniced by the option -g process group id.
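For example, the following commands start a build with the niceness increased by 19
and then set the niceness of all processes owned by the user tux to 5 (the command and
the user name are only placeholders):
tux@mercury:~> nice -n 19 make
mercury:~ # renice +5 -u tux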
2.4 Memory
The options -b, -k, -m, -g show the output in bytes, KB, MB, or GB, respectively.
The parameter -s delay ensures that the display is refreshed every delay seconds.
For example, free -s 1.5 produces an update every 1.5 seconds.
MemTotal
Total amount of usable RAM
MemFree
Total amount of unused RAM
Buffers
File buffer cache in RAM
Cached
Page cache (excluding buffer cache) in RAM
SwapCached
Page cache in swap
Active
Recently used memory that normally is not reclaimed. This value is the sum of
memory claimed by anonymous pages (listed as Active(anon)) and file-backed
pages (listed as Active(file))
Inactive
Recently unused memory that can be reclaimed. This value is the sum of memory
claimed by anonymous pages (listed as Inactive(anon)) and file-backed pages (list-
ed as Inactive(file)).
SwapTotal
Total amount of swap space
SwapFree
Total amount of unused swap space
Dirty
Amount of memory that will be written to disk
Writeback
Amount of memory that is currently being written back to disk
Slab
Kernel data structure cache
SReclaimable
Reclaimable slab caches (inode, dentry, etc.)
Committed_AS
An approximation of the total amount of memory (RAM plus swap) the current
workload needs in the worst case.
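All of these values are read from /proc/meminfo. To watch only some of them
without the full listing, they can be filtered with grep, for example:
tux@mercury:~> grep -E 'Dirty|Writeback' /proc/meminfo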
2.5 Networking
If you run ifconfig with no additional parameter, it lists all active network inter-
faces. ifconfig -a lists all (even inactive) network interfaces, while ifconfig
net_interface lists statistics for the specified interface only.
# ifconfig br0
br0 Link encap:Ethernet HWaddr 00:25:90:98:6A:00
inet addr:10.100.2.76 Bcast:10.100.63.255 Mask:255.255.192.0
The following table shows ethtool's options that you can use to query the device for
specific information:
-k offload information
When displaying network connections or statistics, you can specify the socket type to
display: TCP (-t), UDP (-u), or raw (-r). The -p option shows the PID and name of
the program to which each socket belongs.
The following example lists all TCP connections and the programs using these connec-
tions.
mercury:~ # netstat -t -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Pro
[...]
tcp 0 0 mercury:33513 www.novell.com:www-http ESTABLISHED 6862/fi
tcp 0 352 mercury:ssh mercury2.:trc-netpoll ESTABLISHED 19422/s
tcp 0 0 localhost:ssh localhost:17828 ESTABLISHED -
TIP
If you enter the command without any option, it runs in an interactive mode. You can
navigate through graphical menus and choose the statistics that you want iptraf to
report. You can also specify which network interface to examine.
The command iptraf understands several options and can be run in a batch mode as
well. The following example will collect statistics for network interface eth0 (-i) for 1
minute (-t). It will be run in the background (-B) and the statistics will be written to
the iptraf.log file in your home directory (-L).
tux@mercury:~> iptraf -i eth0 -t 1 -B -L ~/iptraf.log
You can examine the log file with the more command:
tux@mercury:~> more ~/iptraf.log
Mon Mar 23 10:08:02 2010; ******** IP traffic monitor started ********
Query the allocation and use of interrupts with the following command:
tux@mercury:~> cat /proc/interrupts
CPU0
0: 3577519 XT-PIC timer
1: 130 XT-PIC i8042
2: 0 XT-PIC cascade
5: 564535 XT-PIC Intel 82801DB-ICH4
7: 1 XT-PIC parport0
8: 2 XT-PIC rtc
9: 1 XT-PIC acpi, uhci_hcd:usb1, ehci_hcd:usb4
/proc/devices
Available devices
/proc/modules
Kernel modules loaded
/proc/cmdline
Kernel command line
/proc/meminfo
Detailed information about memory usage
/proc/config.gz
gzip-compressed configuration file of the kernel currently running
The address assignment of executables and libraries is contained in the maps file:
tux@mercury:~> cat /proc/self/maps
08048000-0804c000 r-xp 00000000 03:03 17753 /bin/cat
0804c000-0804d000 rw-p 00004000 03:03 17753 /bin/cat
0804d000-0806e000 rw-p 0804d000 00:00 0 [heap]
b7d27000-b7d5a000 r--p 00000000 03:03 11867 /usr/lib/locale/en_GB.utf8/
b7d5a000-b7e32000 r--p 00000000 03:03 11868 /usr/lib/locale/en_GB.utf8/
b7e32000-b7e33000 rw-p b7e32000 00:00 0
b7e33000-b7f45000 r-xp 00000000 03:03 8837 /lib/libc-2.3.6.so
b7f45000-b7f46000 r--p 00112000 03:03 8837 /lib/libc-2.3.6.so
b7f46000-b7f48000 rw-p 00113000 03:03 8837 /lib/libc-2.3.6.so
b7f48000-b7f4c000 rw-p b7f48000 00:00 0
b7f52000-b7f53000 r--p 00000000 03:03 11842 /usr/lib/locale/en_GB.utf8/
[...]
b7f5b000-b7f61000 r--s 00000000 03:03 9109 /usr/lib/gconv/gconv-module
b7f61000-b7f62000 r--p 00000000 03:03 9720 /usr/lib/locale/en_GB.utf8/
b7f62000-b7f76000 r-xp 00000000 03:03 8828 /lib/ld-2.3.6.so
b7f76000-b7f78000 rw-p 00013000 03:03 8828 /lib/ld-2.3.6.so
bfd61000-bfd76000 rw-p bfd61000 00:00 0 [stack]
ffffe000-fffff000 ---p 00000000 00:00 0 [vdso]
2.6.1 procinfo
Important information from the /proc file system is summarized by the command
procinfo:
tux@mercury:~> procinfo
Linux 2.6.32.7-0.2-default (geeko@buildhost) (gcc 4.3.4) #1 2CPU
Bootup: Wed Feb 17 03:39:33 2010 Load average: 0.86 1.10 1.11 3/118 21547
To see all the information, use the parameter -a. The parameter -nN produces up-
dates of the information every N seconds. In this case, terminate the program by press-
ing q.
By default, the cumulative values are displayed. The parameter -d produces the differ-
ential values. procinfo -dn5 displays the values that have changed in the last five
seconds:
/proc/sys/vm/
Entries in this path relate to information about the virtual memory, swapping, and
caching.
/proc/sys/kernel/
Entries in this path represent information about the task scheduler, system shared
memory, and other kernel-related parameters.
/proc/sys/fs/
Entries in this path relate to used file handles, quotas, and other file system-orient-
ed parameters.
/proc/sys/net/
Entries in this path relate to information about network bridges, and general net-
work parameters (mainly the ipv4/ subdirectory).
Most operating systems require root user privileges to grant access to the
computer's PCI configuration.
Information about device name resolution is obtained from the file /usr/share/
pci.ids. PCI IDs not listed in this file are marked “Unknown device.”
The parameter -vv produces all the information that could be queried by the program.
To view the pure numeric values, use the parameter -n.
The parameter -f list specifies a file with a list of filenames to examine. The parameter
-z allows file to look inside compressed files.
The parameter -i outputs a MIME type string rather than the traditional description:
tux@mercury:~> file -i /usr/share/misc/magic
/usr/share/misc/magic: text/plain charset=utf-8
Obtain information about total usage of the file systems with the command df. The
parameter -h (or --human-readable) transforms the output into a form under-
standable for common users.
tux@mercury:~> df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 20G 5,9G 13G 32% /
devtmpfs 1,6G 236K 1,6G 1% /dev
tmpfs 1,6G 668K 1,6G 1% /dev/shm
/dev/sda3 208G 40G 159G 20% /home
Display the total size of all the files in a given directory and its subdirectories with
the command du. The parameter -s suppresses the output of detailed information
and gives only a total for each argument. -h again transforms the output into a hu-
man-readable form:
The parameter --file-system produces details of the properties of the file system
in which the specified file is located:
tux@mercury:~> stat /etc/profile --file-system
Following termination of the less process, which was running on another terminal,
the file system can successfully be unmounted. When used with the -k option, fuser
also kills the processes accessing the file.
If any users of other systems have logged in remotely, the parameter -f shows the
computers from which they have established the connection.
real 0m4.051s
user 0m0.042s
sys 0m0.205s
The real time that elapsed from the command's start-up until it finished.
CPU time of the user as reported by the times system call.
CPU time of the system as reported by the times system call.
RRDtool is available for most UNIX platforms and Linux distributions. SUSE® Linux
Enterprise Server ships RRDtool as well. Install it either with YaST or by entering
zypper in rrdtool on the command line.
TIP
There are Perl, Python, Ruby, or PHP bindings available for RRDtool, so that
you can write your own monitoring scripts with your preferred scripting lan-
guage.
Sometimes it is not possible to obtain the data automatically and regularly. Its format
may need to be pre-processed before it is supplied to RRDtool, and you may even need
to operate RRDtool manually.
The following is a simple example of basic RRDtool usage. It illustrates all three im-
portant phases of the usual RRDtool workflow: creating a database, updating measured
values, and viewing the output.
Our situation is different: we need to obtain the data manually. A helper script
free_mem.sh repeatedly reads the current amount of free memory and writes it to the
standard output.
tux@mercury:~> cat free_mem.sh
INTERVAL=4
for steps in {1..10}
do
  DATE=`date +%s`
  FREEMEM=`free -b | grep "Mem" | awk '{print $4}'`   # free memory in bytes
  echo "rrdtool update free_mem.rrd $DATE:$FREEMEM"   # prints a complete update command line
  sleep $INTERVAL
done
Points to Notice
• The time interval is set to 4 seconds, and is implemented with the sleep command.
• RRDtool accepts time information in a special format, so-called Unix time. It is de-
fined as the number of seconds since midnight of January 1, 1970 (UTC). For
example, 1272907114 represents 2010-05-03 17:18:34.
• The free memory information is reported in bytes with free -b. Prefer to supply
basic units (bytes) instead of multiple units (like kilobytes).
• The line with the echo ... command contains the future name of the database
file (free_mem.rrd) and assembles a complete command line for updating the
RRDtool values.
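The database itself is created with rrdtool create. A call matching the points below
could look like the following sketch; the data source name memory, the step of 4 seconds,
and the AVERAGE consolidation function follow from this example, whereas the GAUGE
type, the 600 second heartbeat, and the archive definition are illustrative assumptions:
tux@mercury:~> rrdtool create free_mem.rrd --start 1272974834 --step=4 \
DS:memory:GAUGE:600:U:U RRA:AVERAGE:0.5:1:24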
Points to Notice
• This command creates a file called free_mem.rrd for storing our measured val-
ues in a Round Robin type database.
• The --start option specifies the time (in Unix time) when the first value will be
added to the database. In this example, it is one less than the first time value of the
free_mem.sh output (1272974835).
• The --step specifies the time interval in seconds with which the measured data
will be supplied to the database.
As you can see, the size of free_mem.rrd remained the same even after updating
its data.
To retrieve all the values from our database, enter the following on the command line:
tux@mercury:~> rrdtool fetch free_mem.rrd AVERAGE --start 1272974830 \
--end 1272974871
memory
1272974832: nan
1272974836: 1.1729059840e+09
1272974840: 1.1461806080e+09
1272974844: 1.0807572480e+09
1272974848: 1.0030243840e+09
1272974852: 8.9019289600e+08
1272974856: 8.3162112000e+08
1272974860: 9.1693465600e+08
1272974864: 1.1801251840e+09
1272974868: 1.1799787520e+09
1272974872: nan
Points to Notice
• AVERAGE will fetch average value points from the database, because only one da-
ta source is defined (Section 2.11.2.2, “Creating Database” (page 46)) with AV-
ERAGE processing and no other function is available.
• The first line of the output prints the name of the data source as defined in Sec-
tion 2.11.2.2, “Creating Database” (page 46).
• The left results column represents individual points in time, while the right one rep-
resents corresponding measured average values in scientific notation.
Now a graph representing the values stored in the database is drawn:
tux@mercury:~> rrdtool graph free_mem.png \
--start 1272974830 \
--end 1272974871 \
--step=4 \
DEF:free_memory=free_mem.rrd:memory:AVERAGE \
LINE2:free_memory#FF0000 \
--vertical-label "GB" \
--title "Free System Memory in Time" \
--zoom 1.5 \
--x-grid SECOND:1:SECOND:4:SECOND:10:0:%X
• --start and --end limit the time range within which the graph will be drawn.
• The DEF:... part is a data definition called free_memory. Its data are read from
the free_mem.rrd database and its data source called memory. The average val-
ue points are calculated, because no others were defined in Section 2.11.2.2, “Creat-
ing Database” (page 46).
• The LINE... part specifies properties of the line to be drawn into the graph. It is 2
pixels wide, its data come from the free_memory definition, and its color is red.
• --vertical-label sets the label to be printed along the y axis, and --title
sets the main label for the whole graph.
• --zoom specifies the zoom factor for the graph. This value must be greater than ze-
ro.
• --x-grid specifies how to draw grid lines and their labels into the graph. Our ex-
ample places them every second, while major grid lines are placed every 4 seconds.
Labels are placed every 10 seconds under the major grid lines.
If you are interested in monitoring network traffic, have a look at MRTG [http://
oss.oetiker.ch/mrtg/]. It stands for Multi Router Traffic Grapher and can
graph the activity of all sorts of network devices. It can easily make use of RRDtool.
• Simple plug-in design that allows administrators to develop further service checks.
Both methods install the packages nagios and nagios-www. The latter RPM pack-
age contains a Web interface for Nagios which allows you, for example, to view the service
status and the problem history. However, it is not absolutely necessary.
Nagios has a modular design and thus uses external check plug-ins to verify whether a
service is available or not. It is recommended to install the nagios-plugin RPM
package that contains ready-made check plug-ins. However, it is also possible to write
your own custom check plug-ins.
/etc/nagios/nagios.cfg
Main configuration file of Nagios containing a number of directives which de-
fine how Nagios operates. See http://nagios.sourceforge.net/
docs/3_0/configmain.html for a complete documentation.
/etc/nagios/resource.cfg
Contains the path to all Nagios plug-ins (default: /usr/lib/nagios/plug
ins).
/etc/nagios/command.cfg
Defines the programs to be used to determine the availability of services and the
commands which are used to send e-mail notifications.
/etc/nagios/cgi.cfg
Contains options regarding the Nagios Web interface.
/etc/nagios/objects/
A directory containing object definition files. See Section 3.3.1, “Object Defini-
tion Files” (page 53) for a more complete documentation.
• Hosts
• Services
• Contacts
The flexibility lies in the fact that objects are easily extensible. Imagine you are re-
sponsible for a host with only one service running. However, you want to install anoth-
er service on the same host machine and you want to monitor that service as well. It is
possible to add another service object and assign it to the host object without huge ef-
fort.
Right after the installation, Nagios offers default templates for object definition config-
uration files. They can be found at /etc/nagios/objects. The following describes
how hosts, services, and contacts are added:
The host_name option defines a name to identify the host that has to be monitored.
address is the IP address of this host. The use statement tells Nagios to inherit
other configuration values from the generic-host template. check_period defines
whether the machine has to be monitored 24x7. check_interval makes Nagios
check the host every 5 minutes, and retry_interval tells Nagios to sched-
ule host check retries at 1 minute intervals. Nagios tries to execute the checks multi-
ple times when they do not pass; with the max_check_attempts directive you can
define how many attempts Nagios should make before it considers the host to be down.
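Put together, a host object definition along these lines could look like the following
sketch (the host name, address, and number of check attempts are only placeholders):
define host {
       use                  generic-host
       host_name            mercury
       address              192.168.0.1
       check_period         24x7
       check_interval       5
       retry_interval       1
       max_check_attempts   4
}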
The first configuration directive use tells Nagios to inherit from the gener
ic-service template. host_name is the name that assigns the service to the host
object. The host itself is defined in the host object definition. A description can be set
with service_description. In the example above the description is just PING.
Within the contact_groups option it is possible to refer to a group of people who
will be contacted on a failure of the service. This group and its members are later de-
fined in a contact group object definition. check_command sets the program that
checks whether the service is available, or not.
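A matching service object as described above could look like this sketch (the
check_ping thresholds are illustrative values):
define service {
       use                   generic-service
       host_name             mercury
       service_description   PING
       contact_groups        router-admins
       check_command         check_ping!100.0,20%!500.0,60%
}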
define contactgroup {
contactgroup_name router-admins
alias Administrators
members admins
}
The example listing above shows a contact group definition and its members. The
corresponding contact definition contains the e-mail address and the name of the
contact person.
An overview of all Nagios objects and further information about them can
be found at: http://nagios.sourceforge.net/docs/3_0/
objectdefinitions.html.
3 Change to the configuration directory created in the first step and create the follow-
ing files: hosts.cfg, services.cfg and contacts.cfg
define contactgroup {
contactgroup_name admins
alias Administrators
members max-mustermann
}
2 Write your test scripts (for example a script that checks the disk usage) like this:
#!/bin/bash
NAGIOS_SERVER=10.10.4.166
THIS_HOST=foobar
#
# Write own test algorithm here
#
# Execute On SUCCESS:
echo "$THIS_HOST;diskcheck;0;OK: test ok" \
 | send_nsca -H $NAGIOS_SERVER -p 5667 -c /etc/nagios/send_nsca.cfg -d ";"
# Execute On Warning:
echo "$THIS_HOST;diskcheck;1;Warning: test warning" \
 | send_nsca -H $NAGIOS_SERVER -p 5667 -c /etc/nagios/send_nsca.cfg -d ";"
# Execute On FAILURE:
echo "$THIS_HOST;diskcheck;2;CRITICAL: test critical" \
 | send_nsca -H $NAGIOS_SERVER -p 5667 -c /etc/nagios/send_nsca.cfg -d ";"
3 Insert a new cron entry with crontab -e. A typical cron entry could look like
this:
*/5 * * * * /directory/to/check/program/check_diskusage
3.5 Troubleshooting
Error: ABC 'XYZ' specified in ... '...' is not defined
anywhere!
Make sure that you have defined all necessary objects correctly. Be careful with
the spelling.
Object Definitions
http://nagios.sourceforge.net/docs/3_0/
objectdefinitions.html
Nagios Plugins
http://nagios.sourceforge.net/docs/3_0/plugins.html
acpid
Log of the advanced configuration and power interface event daemon (acpid), a
daemon to notify user-space programs of ACPI events. acpid will log all of its
activities, as well as the STDOUT and STDERR of any actions to syslog.
apparmor
AppArmor log files. See Part “Confining Privileges with AppArmor” (↑Security
Guide) for details of AppArmor.
boot.msg
Log of the system init process—this file contains all boot messages from the Ker-
nel, the boot scripts and the services started during the boot sequence.
Check this file to find out whether your hardware has been correctly initialized or
all services have been started successfully.
boot.omsg
Log of the system shutdown process - this file contains all messages issued on the
last shutdown or reboot.
ConsoleKit/*
Logs of the ConsoleKit daemon (daemon for tracking what users are logged in
and how they interact with the computer).
cups/
Access and error logs of the Common UNIX Printing System (cups).
faillog
Database file that contains all login failures. Use the faillog command to view.
See man 8 faillog for more information.
firewall
Firewall logs.
gdm/*
Log files from the GNOME display manager.
krb5
Log files from the Kerberos network authentication system.
lastlog
The lastlog file is a database which contains info on the last login of each user. Use
the command lastlog to view. See man 8 lastlog for more information.
localmessages
Log messages of some boot scripts, for example the log of the DHCP client.
mail*
Mail server (postfix, sendmail) logs.
NetworkManager
NetworkManager log files
news/*
Log messages from a news server.
ntp
Logs from the Network Time Protocol daemon (ntpd).
pk_backend_zypp
PackageKit (with libzypp backend) log files.
puppet/*
Log files from the data center automation tool puppet.
samba/*
Log files from samba, the Windows SMB/CIFS file server.
SaX.log
Logs from SaX2, the SUSE advanced X11 configuration tool.
scpm
Logs from the system configuration profile management (scpm).
warn
Log of all system warnings and errors. This should be the first place (along with /
var/log/messages) to look at in case of problems.
wtmp
Database of all login/logout activities, runlevel changes and remote connections.
Use the command last to view. See man 1 last for more information.
xinetd.log
Log files from the extended Internet services daemon (xinetd).
Xorg.0.log
X startup log file. Refer to this in case you have problems starting X. Copies from
previous X starts are numbered Xorg.?.log.
zypp/*
libzypp log files. Refer to these files for the package installation history.
zypper.log
Logs from the command line installer zypper.
For viewing log files in a text console, use the commands less or more. Use head
and tail to view the beginning or end of a log file. To view entries appended to a log
file in real-time use tail -f. For information about how to use these tools, see their
man pages.
To search for strings or regular expressions in log files use grep. awk is useful for
parsing and rewriting log files.
logrotate is usually run as a daily cron job. It does not modify any log files more
than once a day unless the log is to be modified because of its size, because logro-
tate is being run multiple times a day, or the --force option is used.
IMPORTANT
The create option pays heed to the modes and ownerships of files spec-
ified in /etc/permissions*. If you modify these settings, make sure no
conflicts arise.
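For reference, a minimal drop-in file under /etc/logrotate.d/ could look like the
following sketch; the log file path and all values are only illustrative:
/var/log/example.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    create 640 root root
}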
The command-line syntax is easy. You basically tell logwatch for which service,
which time span, and at which detail level to generate a report:
# Detailed report on all kernel messages from yesterday
logwatch --service kernel --detail High --range Yesterday --print
# Low detail report on all sshd events recorded (incl. archived logs)
logwatch --service sshd --detail Low --range All --archives --print
# Mail a report on all smartd messages from May 5th to May 7th to root@localhost
logwatch --service smartd --range 'between 5/5/2005 and 5/7/2005' \
--mailto root@localhost --print
The --range option has a complex syntax—see logwatch --range help
for details. A list of all services that can be queried is available with the following com-
mand:
ls /usr/share/logwatch/default.conf/services/ | sed 's/\.conf//g'
logwatch.conf
The main configuration file. The default version is extensively commented. Each
configuration option can be overwritten on the command line.
ignore.conf
Filter for all lines that should globally be ignored by logwatch.
services/*.conf
The service directory holds configuration files for each service you can generate a
report for.
logfiles/*.conf
Specifications on which log files should be parsed for each service.
The essential idea behind a SystemTap script is to name events, and to give them
handlers. When SystemTap runs the script, it monitors for certain events. When an
event occurs, the Linux kernel runs the handler as a sub-routine, then resumes. Thus,
events serve as the triggers for handlers to run. Handlers can record specified data and
print it in a certain manner.
The SystemTap language uses only a few data types (integers, strings, and associative
arrays of these), and full control structures (blocks, conditionals, loops, functions). It
uses lightweight punctuation (semicolons are optional) and does not need detailed dec-
larations (types are inferred and checked automatically).
For more information about SystemTap scripts and their syntax, refer to Section 5.3,
“Script Syntax” (page 76) and to the stapprobes and stapfuncs man pages,
that are available with the systemtap-docs package.
5.1.2 Tapsets
Tapsets are a library of pre-written probes and functions that can be used in System-
Tap scripts. When a user runs a SystemTap script, SystemTap checks the script's probe
events and handlers against the tapset library. SystemTap then loads the corresponding
probes and functions before translating the script to C. Like SystemTap scripts them-
selves, tapsets use the filename extension *.stp.
However, unlike SystemTap scripts, tapsets are not meant for direct execution—they
constitute the library from which other scripts can pull definitions. Thus, the tapset li-
brary is an abstraction layer designed to make it easier for users to define events and
functions. Tapsets provide useful aliases for functions that users may want to specify as
an event (knowing the proper alias is usually easier than remembering specific kernel
functions that might vary between kernel versions).
stap
SystemTap front-end. Runs a SystemTap script (either from file, or from standard
input). It translates the script into C code, compiles it, and loads the resulting ker-
nel module into a running Linux kernel. Then, the requested system trace or probe
functions are performed.
staprun
SystemTap back-end. Loads and unloads kernel modules produced by the System-
Tap front-end.
For a list of options for each command, use --help. For details, refer to the stap
and the staprun man pages.
To avoid giving root access to users just for running SystemTap, you can make use of
the following SystemTap groups. They are not available by default on SUSE Linux En-
terprise, but you can create the groups and modify the access rights accordingly.
stapdev
Members of this group can run SystemTap scripts with stap, or run System-
Tap instrumentation modules with staprun. As running stap involves compil-
ing scripts into kernel modules and loading them into the kernel, members of this
group still have effective root access.
stapusr
Members of this group are only allowed to run SystemTap instrumentation mod-
ules with staprun. In addition, they can only run those modules from /lib/
modules/kernel_version/systemtap/. This directory must be owned
by root and must only be writable for the root user.
/lib/modules/kernel_version/systemtap/
Holds the SystemTap instrumentation modules.
/usr/share/doc/packages/systemtap/examples
Holds a number of example SystemTap scripts for various purposes. Only avail-
able if the systemtap-docs package is installed.
~/.systemtap/cache
Data directory for cached SystemTap files.
/tmp/stap*
Temporary directory for SystemTap files, including translated C code and kernel
object.
If you subscribed your system for online updates, you can find “debuginfo”
packages in the *-Debuginfo-Updates online installation repository rel-
evant for SUSE Linux Enterprise Server 11 SP4. Use YaST to enable the
repository.
For the classic SystemTap setup, install the following packages (using either YaST or
zypper).
• systemtap
• systemtap-server
• systemtap-docs (optional)
• kernel-*-base
• kernel-*-debuginfo
• kernel-*-devel
• gcc
To get access to the man pages and to a helpful collection of example SystemTap
scripts for various purposes, additionally install the systemtap-docs package.
To check if all packages are correctly installed on the machine and if SystemTap is
ready to use, execute the following command as root.
stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'
It probes the currently used kernel by running a script and returning an output. If the
output is similar to the following, SystemTap is successfully deployed and ready to use:
In case any error messages appear during the test, check the output for hints about any
missing packages and make sure they are installed correctly. Rebooting and loading the
appropriate kernel may also be needed.
Comments can be inserted anywhere in the SystemTap script in various styles: using ei-
ther #, /* */, or // as marker.
Each probe has a corresponding statement block. This statement block must be en-
closed in { } and contains the statements to be executed per event.
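A minimal script of this kind, as described in the next paragraph, consists of a single
probe (shown here as a reconstructed sketch):
probe begin
{
  printf("hello world\n")
  exit()
}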
The event begin (the start of the SystemTap session) triggers the handler enclosed
in { }, in this case the printf function, which prints hello world followed by
a new line, and then exits.
If your statement block holds several statements, SystemTap executes these statements
in sequence—you do not need to insert special separators or terminators between mul-
tiple statements. A statement block can also be nested within another statement block.
Generally, statement blocks in SystemTap scripts use the same syntax and semantics as
in the C programming language.
When used in conjunction with other probes that collect information, timer events
allow you to print out periodic updates and see how that information changes over
time.
For example, the following probe would print the text “hello world” every 4 seconds:
probe timer.s(4)
{
printf("hello world\n")
}
For detailed information about supported events, refer to the stapprobes man page.
The See Also section of the man page also contains links to other man pages that dis-
cuss supported events for specific subsystems and components.
5.3.3.1 Functions
If you need the same set of statements in multiple probes, you can place them in a
function for easy reuse. Functions are defined by the keyword function followed by
a name. They take any number of string or numeric arguments (by value) and may re-
turn a single string or number.
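The general syntax looks like the following sketch, where function_name,
arguments, statements, and event are placeholders:
function function_name(arguments) {statements}
probe event {function_name(arguments)}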
The statements in function_name are executed when the probe for event exe-
cutes. The arguments are optional values passed into the function.
Functions can be defined anywhere in the script.
One of the functions needed very often was already introduced in Example 5.1, “Sim-
ple SystemTap Script” (page 76): the printf function for printing data in a for-
matted way. When using the printf function, you can specify how arguments should
be printed by using a format string. The format string is included in quotation marks
and can contain further format specifiers, introduced by a % character.
Which format strings to use depends on your list of arguments. Format strings can have
multiple format specifiers—each matching a corresponding argument. Multiple argu-
ments can be separated by a comma.
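For example, the following statement (reconstructed from the output shown below)
combines a string specifier and an integer specifier:
printf("%s(%d) open\n", execname(), pid())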
The example above would print the current executable name (execname()) as a string
and the process ID (pid()) as an integer in brackets, followed by a space, then the word
open and a line break:
[...]
vmware-guestd(2206) open
hald(2360) open
[...]
Among the most commonly used SystemTap functions are the following:
tid()
ID of the current thread.
uid()
ID of the current user.
cpu()
Current CPU number.
execname()
Name of the current process.
gettimeofday_s()
Number of seconds since UNIX epoch (January 1, 1970).
ctime()
Convert time into a string.
pp()
String describing the probe point currently being handled.
thread_indent()
Useful function for organizing print results. It (internally) stores an indentation
counter for each thread (tid()). The function takes one argument, an indentation
delta, indicating how many spaces to add or remove from the thread's indentation
counter. It returns a string with some generic trace data along with an appropri-
ate number of indentation spaces. The generic data returned includes a time stamp
(number of microseconds since the initial indentation for the thread), a process
name, and the thread ID itself. This allows you to identify what functions were
called, who called them, and how long they took.
Call entries and exits often do not immediately precede each other (otherwise
it would be easy to match them). In between a first call entry and its exit, usu-
ally a number of other call entries and exits are made. The indentation counter
helps you match an entry with its corresponding exit as it indents the next func-
tion call in case it is not the exit of the previous one. For an example SystemTap
script using thread_indent() and the respective output, refer to the Sys-
temTap Tutorial: http://sourceware.org/systemtap/tutori
al/Tracing.html#fig:socket-trace.
For more information about supported SystemTap functions, refer to the stapfuncs
man page.
Variables
Variables may be defined anywhere in the script. To define one, simply choose a name
and assign a value from a function or expression to it:
foo = gettimeofday_s()
Then you can use the variable in an expression. From the type of values assigned to the
variable, SystemTap automatically infers the type of each identifier (string or number).
Any inconsistencies will be reported as errors. In the example above, foo would auto-
matically be classified as a number and could be printed via printf() with the inte-
ger format specifier (%d).
However, by default, variables are local to the probe they are used in: They are ini-
tialized, used and disposed of at each handler evocation. To share variables between
probes, declare them global anywhere in the script. To do so, use the global key-
word outside of the probes:
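Such a script could look like the following sketch; the 100-jiffy and 100-ms counting
intervals are assumptions, while the timer.ms(12345) probe and the variable names
are taken from the description below:
global count_jiffies, count_ms
probe timer.jiffies(100) { count_jiffies++ }
probe timer.ms(100) { count_ms++ }
probe timer.ms(12345) {
  hz = (1000 * count_jiffies) / count_ms
  printf("jiffies:ms ratio %d:%d => CONFIG_HZ=%d\n", count_jiffies, count_ms, hz)
  exit()
}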
This example script computes the CONFIG_HZ setting of the kernel by using timers
that count jiffies and milliseconds, then computing accordingly. (A jiffy is the du-
ration of one tick of the system timer interrupt. It is not an absolute time interval
unit, since its duration depends on the clock interrupt frequency of the particular
hardware platform). With the global statement it is possible to use the variables
count_jiffies and count_ms also in the probe timer.ms(12345). With ++
the value of a variable is incremented by 1.
If/Else Statements
They are expressed in the following format:
if (condition) statement1
else statement2
While Loops
They are expressed in the following format:
while (condition) statement
For Loops
They are basically a shortcut for while loops and are expressed in the following
format:
for (initialization ; conditional ; increment ) statement
The expression specified in initialization is used to initialize a counter for the number of loop
iterations and is executed before execution of the loop starts. The execution of the
loop continues until the loop condition is false (this expression is checked at
the beginning of each loop iteration). The expression specified in increment is used to in-
crement the loop counter. It is executed at the end of each loop iteration.
Conditional Operators
The following operators can be used in conditional statements:
==: Is equal to
probe begin {
printf("%6s %16s %6s %6s %16s\n",
"UID", "CMD", "PID", "PORT", "IP_SOURCE")
}
probe kernel.function("tcp_accept").return?,
kernel.function("inet_csk_accept").return? {
sock = $return
if (sock != 0)
printf("%6d %16s %6d %6d %16s\n", uid(), execname(), pid(),
inet_get_local_port(sock), inet_get_ip_source(sock))
}
This SystemTap script monitors the incoming TCP connections and helps to identify
unauthorized or unwanted network access requests in real time. It shows the following
information for each new incoming TCP connection accepted by the computer:
• User ID (UID)
• Command accepting the connection (CMD)
• Process ID of the command (PID)
• Port used by the connection (PORT)
• IP address from which the TCP connection originated (IP_SOURCE)
To get the required utrace infrastructure and the uprobes Kernel module for user-space
probing, you need to install the kernel-trace package in addition to the packages
listed in Section 5.2, “Installation and Setup” (page 74).
SystemTap includes support for probing the entry into and return from a function in
user-space processes, probing predefined markers in user-space code, and monitoring
user-process events.
To check if the currently running Kernel provides the needed utrace support, use the
following command:
grep CONFIG_UTRACE /boot/config-`uname -r`
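If the running kernel was built with utrace support, the output contains a line similar to
the following (otherwise the command prints nothing):
CONFIG_UTRACE=y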
http://sourceware.org/systemtap/wiki/
Huge collection of useful information about SystemTap, ranging from detailed user
and developer documentation to reviews and comparisons with other tools, or Fre-
quently Asked Questions and tips. Also contains collections of SystemTap scripts,
examples and usage stories and lists recent talks and papers about SystemTap.
http://sourceware.org/systemtap/documentation.html
Features a SystemTap Tutorial, a SystemTap Beginner's Guide, a Tapset Developer's
Guide, and a SystemTap Language Reference in PDF and HTML format. Also lists
the relevant man pages.
You can also find the SystemTap language reference and SystemTap tutorial in your in-
stalled system under /usr/share/doc/packages/systemtap. Example Sys-
temTap scripts are available from the example subdirectory.
You can insert these probes into any kernel routine, and specify a handler to be invoked
after a particular break-point is hit. The main advantage of kernel probes is that you
no longer need to rebuild the kernel and reboot the system after you make changes in a
probe.
To use kernel probes, you typically need to write or obtain a specific kernel module.
Such a module includes both the init and the exit function. The init function (such as
register_kprobe()) registers one or more probes, while the exit function un-
registers them. The registration function defines where the probe will be inserted and
which handler will be called after the probe is hit. To register or unregister a group of
probes at one time, you can use relevant register_<probe_type>probes()
or unregister_<probe_type>probes() functions.
Debugging and status messages are typically reported with the printk kernel routine.
printk is a kernel-space equivalent of a user-space printf routine. For more infor-
mation on printk, see Logging kernel messages [http://www.win.tue.nl/
~aeb/linux/lk/lk-2.html#ss2.8]. Normally, you can view these messages
by inspecting /var/log/messages or /var/log/syslog. For more informa-
tion on log files, see Chapter 4, Analyzing and Managing System Log Files (page 61).
6.1 Supported Architectures
Kernel probes are fully implemented on the following architectures:
• i386
• ppc64
• arm
• ppc
6.2.1 Kprobe
Kprobe can be attached to any instruction in the Linux kernel. When it is registered,
it inserts a break-point at the first bytes of the probed instruction. When the proces-
sor hits this break-point, the processor registers are saved, and the processing passes to
kprobes. First, a pre-handler is executed, then the probed instruction is stepped, and,
finally a post-handler is executed. The control is then passed to the instruction follow-
ing the probe point.
When jprobe is hit, the processor registers are saved, and the instruction pointer is
directed to the jprobe handler routine. The control then passes to the handler with
the same register contents as the function being probed. Finally, the handler calls the
jprobe_return() function, and the control switches back to the probed function.
In general, you can insert multiple probes on one function. Jprobe is, however, limited
to only one instance per function.
register_kprobe()
Inserts a break-point on a specified address. When the break-point is hit, the
pre_handler and post_handler are called.
register_jprobe()
Inserts a break-point at the specified address. The address has to be the address of
the first instruction of the probed function. When the break-point is hit, the spec-
ified handler is run. The handler should have the same argument list and return
type as the probed function.
register_kretprobe()
Inserts a return probe for the specified function. When the probed function returns,
a specified handler is run. This function returns 0 on success, or a negative error
number on failure.
unregister_kprobe(), unregister_jprobe(),
unregister_kretprobe()
Removes the specified probe. You can use it any time after the probe has been
registered.
register_kprobes(), register_jprobes(),
register_kretprobes()
Inserts each of the probes in the specified array.
unregister_kprobes(), unregister_jprobes(),
unregister_kretprobes()
Removes each of the probes in the specified array.
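The list of currently registered probes can be read from the kprobes directory of the
debug file system; the addresses and symbols below are only a sketch in the style of the
kernel documentation:
cat /sys/kernel/debug/kprobes/list
c015d71a  k  vfs_read+0x0
c011a316  j  do_fork+0x0
c03dedc5  r  tcp_v4_rcv+0x0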
The first column lists the address in the kernel where the probe is inserted. The sec-
ond column prints the type of the probe: k for kprobe, j for jprobe, and r for return
probe. The third column specifies the symbol, offset and optional module name of the
probe. The following optional columns include the status information of the probe. If
the probe is inserted on a virtual address which is not valid anymore, it is marked with
[GONE]. If the probe is temporarily disabled, it is marked with [DISABLED].
Note that this way you do not change the status of the probes. If a probe is temporari-
ly disabled, it will not be enabled automatically but will remain in the [DISABLED]
state after entering the latter command.
• Thorough but more technically oriented information about kernel probes is in
/usr/src/linux/Documentation/kprobes.txt (package ker-
nel-source).
• Examples of all three types of probes (together with related Makefile) are in
the /usr/src/linux/samples/kprobes/ directory (package ker-
nel-source).
• In-depth information about Linux kernel modules and the printk kernel routine is
in The Linux Kernel Module Programming Guide [http://tldp.org/LDP/
lkmpg/2.6/html/lkmpg.html]
• Slightly outdated but useful practical information about the use of kernel probes is
in Kernel debugging with Kprobes [http://www.ibm.com/developer
works/library/l-kprobes.html]
Modern processors contain a performance monitoring unit (PMU). The design and
functionality of a PMU is CPU specific: for example, the number of registers, counters
and features supported will vary by CPU implementation.
The Perfmon interface is designed to be generic, flexible and extensible. It can monitor
at the program (thread) or system levels. In either mode, it is possible to count or sam-
ple your profile information. This uniformity makes it easier to write portable tools.
Figure 7.1, “Architecture of perfmon2” (page 94) gives an overview.
(Figure 7.1 shows pfmon running in user space on top of the generic perfmon layer in
the Linux kernel, which in turn builds on the architecture-specific perfmon implementation.)
Each PMU model consists of a set of registers: the performance monitor configuration
(PMC) and the performance monitor data (PMD). Only PMCs are writeable, but both
can be read. These registers store configuration information and data.
Both methods store their information into a sample. This sample contains information
about, for example, where a thread was or instruction pointers.
The following example demonstrates the counting of the CPU_OP_CYCLES event and
the sampling of this event, generating a sample per 100000 occurrences of the event:
pfmon --no-cmd-output -e CPU_OP_CYCLES_ALL /bin/ls
1306604 CPU_OP_CYCLES_ALL
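A corresponding sampling run could look like the following sketch; it assumes pfmon's
--long-smpl-periods option for setting the sampling period:
pfmon --no-cmd-output --long-smpl-periods=100000 -e CPU_OP_CYCLES_ALL /bin/ls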
7.2 Installation
In order to use Perfmon2, first check the following preconditions:
The pfmon tool on SUSE Linux Enterprise 11 supports the following processors (taken
from /usr/share/doc/packages/pfmon/README):
Architecture: ia64    Packages: pfmon
Get an explanation of these entries with the option -i and the event name:
pfmon -i CPU_OP_CYCLES_ALL
Name : CPU_OP_CYCLES_ALL
Code : 0x12
Counters : [ 4 5 6 7 8 9 10 11 12 13 14 15 ]
Desc : CPU Operating Cycles -- All CPU cycles counted
Umask : 0x0
EAR : None
ETB : No
MaxIncr : 1 (Threshold 0)
Qual : None
Type : Causal
Set : None
One system-wide session can run concurrently with other system-wide
sessions as long as they do not monitor the same set of CPUs. However, you cannot run
a system-wide session together with any per-thread session.
The following examples are taken from an Itanium IA64 Montecito processor. To exe-
cute a system-wide session, perform the following procedure:
2 Delimit your session. The following list describes options which are used in the ex-
amples below (refer to the man page for more details):
-e/--events
Profile only selected events. See Section 7.3.1, “Getting Event
Information” (page 96) for how to get a list.
--cpu-list
Specifies the list of processors to monitor. Without this option, all available
processors are monitored.
-t/--session-timeout
Specifies the duration of the monitor session expressed in seconds.
• Execute a command. The session is automatically started when the program starts
and automatically stopped when the program is finished:
4 If you want to aggregate counts, use the --aggr option after the previous command:
pfmon --cpu-list=0-1 --system-wide -u -e
CPU_OP_CYCLES_ALL,IA64_INST_RETIRED --aggr
<press ENTER to stop session>
52655 CPU_OP_CYCLES_ALL
53164 IA64_INST_RETIRED
Access the data by mounting the debug file system as root under /sys/kernel/debug.
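If the debug file system is not already mounted, it can be mounted manually, for example:
mount -t debugfs none /sys/kernel/debug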
This might be useful to compare your metrics before and after the perfmon run. For
example, collect your data first:
for i in /sys/kernel/debug/perfmon/cpu0/*; do
echo "$i:"; cat $i
done >> pfmon-before.txt
http://perfmon2.sourceforge.net/
The project home page.
http://www.iop.org/EJ/arti
cle/1742-6596/119/4/042017/jpconf8_119_042017.pdf
A good overview as PDF.
It is not necessary to recompile or use wrapper libraries in order to use OProfile. Not
even a Kernel patch is needed. Usually, when you profile an application, a small over-
head is expected, depending on workload and sampling frequency.
During the post-processing step, all information is collected and instruction addresses
are mapped to a function name.
It is useful to install the *-debuginfo package for the respective application you
want to profile. If you want to profile the Kernel, you need the debuginfo package
as well.
opannotate
Outputs annotated source or assembly listings mixed with profile information.
opcontrol
Controls the profiling sessions (start or stop), dumps profile data, and sets up para-
meters.
ophelp
Lists available events with short descriptions.
opimport
Converts sample database files from a foreign binary format to the native format.
opreport
Generates reports from profiled data.
2a Profile With the Linux Kernel: Execute the following commands, because
the opcontrol command needs an uncompressed image:
cp /boot/vmlinux-`uname -r`.gz /tmp
gunzip /tmp/vmlinux*.gz
opcontrol --vmlinux=/tmp/vmlinux*
If you want to see which functions call other functions in the output, use ad-
ditionally the --callgraph option and set a maximum DEPTH:
opcontrol --no-vmlinux --callgraph DEPTH
4 Start your application you want to profile right after the previous step.
7 Create a report:
2 Use specific events to find bottlenecks. To list them, use the command opcon-
trol --list-events.
If you need to profile certain events, first check the available events supported by your
processor with the ophelp command (example output generated from Intel Core i5
CPU):
ophelp
oprofile: available events for CPU type "Intel Architectural Perfmon"
Specify the performance counter events with the option --event. Multiple options
are possible. This option needs an event name (from ophelp) and a sample rate, for
example:
opcontrol --event=CPU_CLK_UNHALTED:100000
Setting sampling rates is dangerous, as very small rates can cause the system to
overload and freeze.
Calling opreport without any options gives a complete summary. With an executable
as an argument, retrieve profile data only from this executable. If you analyze applica-
tions written in C++, use the --demangle=smart option.
The opannotate command generates output with annotations from source code. Run it with
the following options:
opannotate --source \
--base-dirs=BASEDIR \
--search-dirs= \
--output-dir=annotated/ \
/lib/libfoo.so
The option --base-dirs contains a comma-separated list of paths which is stripped
from debug source files. These paths are searched prior to those given with --search-
dirs. The --search-dirs option is also a comma-separated list of directories to
search for source files.
http://oprofile.sourceforge.net
The project home page.
Manpages
Details descriptions about the options of the different tools.
http://developer.intel.com/
Architecture reference for Intel processors.
http://www.amd.com/us-en/assets/content_type/
white_papers_and_tech_docs/22007.pdf
Architecture reference for AMD Athlon/Opteron/Phenom/Turion.
http://www-01.ibm.com/chips/techlib/techlib.nsf/product
families/PowerPC/
Architecture reference for PowerPC64 processors in IBM iSeries, pSeries, and
blade server systems.
9.1.1 Partitioning
Depending on the server's range of applications and the hardware layout, the partition-
ing scheme can influence the machine's performance (although to a lesser extent on-
ly). It is beyond the scope of this manual to suggest different partition schemes for par-
ticular workloads, however, the following rules will positively affect performance. Of
course they do not apply when using an external storage system.
• Make sure there always is some free space available on the disk, since a full disk
delivers inferior performance.
• Disperse simultaneous read and write access onto different disks by, for example:
SUSE Linux Enterprise Server lets you customize the installation scope on the Instal-
lation Summary screen. By default, you can select or remove pre-configured patterns
for specific tasks, but it is also possible to start the YaST Software Manager for a fine-
grained package based selection.
One or more of the following default patterns may not be needed in all cases:
X Window System
When solely administrating the server and its applications via command line, con-
sider not installing this pattern. However, keep in mind that it is needed to run GUI
applications from a remote machine. If your application is managed by a GUI or if
you prefer the GUI version of YaST, keep this pattern.
Print Server
This pattern is only needed when you want to print from the machine.
The following list shows services that are started by default after the installation of
SUSE Linux Enterprise Server. Check which of the components you need, and disable
the others:
alsasound
Loads the Advanced Linux Sound System.
auditd
A daemon for the audit system (see Part “The Linux Audit Framework” (↑Security
Guide) for details). Disable if you do not use Audit.
bluez-coldplug
Handles cold plugging of Bluetooth dongles.
cups
A printer daemon.
java.binfmt_misc
Enables the execution of *.class or *.jar Java programs.
nfs
Services needed to mount NFS file systems.
splash / splash_early
Shows the splash screen on start-up.
9.3.1.1 NFS
NFS (Version 3) tuning is covered in detail in the NFS Howto at http://
nfs.sourceforge.net/nfs-howto/. The first thing to experiment with when
mounting NFS shares is increasing the read and write block size to 32768 by using the
mount options rsize and wsize.
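A minimal sketch of such a mount; the server name and export path are placeholders:
mount -t nfs -o rsize=32768,wsize=32768 nfsserver:/export /mnt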
Idle
A process from the idle scheduling class is only granted disk access when no other
process has asked for disk I/O.
Best effort
The default scheduling class used for any process that has not asked for a specific
I/O priority. Priority within this class can be adjusted to a level from 0 to 7 (with
0 being the highest priority). Programs running at the same best-effort priority are
served in a round-robin fashion. Some kernel versions treat priority within the best-
effort class differently—for details, refer to the ionice(1) man page.
Real-time
Processes in this class are always granted disk access first. Fine-tune the priori-
ty level from 0 to 7 (with 0 being the highest priority). Use with care, since it can
starve other processes.
For more details and the exact command syntax refer to the ionice(1) man page.
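Two typical invocations as a sketch (the command name and PID are placeholders):
ionice -c 3 updatedb           # run updatedb in the idle class
ionice -c 2 -n 0 -p 4712       # give running process 4712 best-effort priority 0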
• In a cgroup there is a set of tasks (processes) associated with a set of subsystems that
act as parameters constituting an environment for the tasks.
• Subsystems provide the parameters that can be assigned and define CPU sets, freez-
er, or—more general—“resource controllers” for memory, disk I/O, network traffic,
etc.
• cgroups are organized in a tree-structured hierarchy. There can be more than one hi-
erarchy in the system. You use a different or alternate hierarchy to cope with specific
situations.
10.2 Scenario
See the following resource planning scenario for a better understanding (source: /
usr/src/linux/Documentation/cgroups/cgroups.txt):
(Figure: the top CPU set (20%) is shared between professors and students; WWW browsing
network I/O (20%) is split between professors (15%) and students (5%), with other traffic
at 20%. Resource controllers involved: cpu (scheduler), cpuacct, memory, disk I/O, and
network.)
or all subsystems in one go; you can use an arbitrary device name (e.g., none), which
will appear in /proc/mounts:
mount -t cgroup none /sys/fs/cgroup
Cpuset (Isolation)
Use cpuset to tie processes to system subsets of CPUs and memory (“memory
nodes”). For an example, see Section 10.4.3, “Example: Cpusets” (page 126).
Freezer (Control)
The Freezer subsystem is useful for high-performance computing clusters (HPC
clusters). Use it to freeze (stop) all tasks in a group or to stop tasks, if they reach a
defined checkpoint. For more information, see /usr/src/linux/Documen
tation/cgroups/freezer-subsystem.txt.
Checkpoint/Restart (Control)
Save the state of all processes in a cgroup to a dump file. Restart it later (or just
save the state and continue).
Devices (Isolation)
A system administrator can provide a list of devices that can be accessed by
processes under cgroups.
It limits access to a device or a file system on a device to only tasks that belong to
the specified cgroup. For more information, see /usr/src/linux/Docu
mentation/cgroups/devices.txt.
Cpuacct (Control)
The CPU accounting controller groups tasks using cgroups and accounts the CPU
usage of these groups. For more information, see /usr/src/linux/Docu
mentation/cgroups/cpuacct.txt.
These are the basic commands to configure proportional weight division of band-
width by setting weight values in blkio.weight:
# Setup in /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
# Start two cgroups
mkdir -p /sys/fs/cgroup/blkio/group1 /sys/fs/cgroup/blkio/group2
# Set weights
echo 1000 > /sys/fs/cgroup/blkio/group1/blkio.weight
echo 500 > /sys/fs/cgroup/blkio/group2/blkio.weight
# Write the PIDs of the processes to be controlled to the
# appropriate groups
command1 &
echo $! > /sys/fs/cgroup/blkio/group1/tasks
command2 &
echo $! > /sys/fs/cgroup/blkio/group2/tasks
These are the basic commands to configure throttling or upper limit policy by
setting values in blkio.throttle.read_bps_device for reads and
blkio.throttle.write_bps_device for writes:
# Setup in /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
# Bandwidth rate of a device for the root group; format:
# <major>:<minor> <bytes_per_second>
echo "8:16 1048576" > /sys/fs/cgroup/blkio/
blkio.throttle.read_bps_device
For more information about caveats, usage scenarios, and additional parame-
ters, see /usr/src/linux/Documentation/cgroups/blkio-
controller.txt.
For example, to limit the traffic from all tasks in a file_transfer cgroup to
100 Mbps, proceed as follows:
# create a file_transfer cgroup and assign it a unique classid
# of 0x10 - this will be used later to direct packets.
mkdir -p /dev/cgroup
mount -t cgroup tc -otc /dev/cgroup
mkdir /dev/cgroup/file_transfer
echo 0x10 > /dev/cgroup/file_transfer/tc.classid
echo $PID_OF_FILE_XFER_PROCESS > /dev/cgroup/file_transfer/tasks
# Now create an HTB class that rate-limits traffic to 100 mbits and attach
# a filter to direct all traffic from the file_transfer cgroup
# to this new class.
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit ceil 100mbit
tc filter add dev eth0 parent 1: handle 800 protocol ip prio 1 \
flow map key cgroup-classid baseclass 1:10
10.4.1 Prerequisites
To conveniently use cgroups, install the following additional packages:
The following subsystems are available: perf_event, blkio, net_cls, freezer, devices,
memory, cpuacct, cpu, cpuset.
1 To determine the number of CPUs and memory nodes see /proc/cpuinfo and
/proc/zoneinfo.
This fails as long as this cpuset is in use. First, you must remove the inside cpusets
or tasks (processes) that belong to it. Check it with:
cat /sys/fs/cgroup/cpuset/Charlie/tasks
2 Understanding cpu.shares:
3 Changing cpu.shares
echo 1024 > cpu.shares
• /usr/src/linux/Documentation/cgroups/blkio-
controller.txt
• /usr/src/linux/Documentation/cgroups/cgroups.txt
• /usr/src/linux/Documentation/cgroups/cpuacct.txt
• /usr/src/linux/Documentation/cgroups/cpusets.txt
• /usr/src/linux/Documentation/cgroups/devices.txt
• /usr/src/linux/Documentation/cgroups/freezer-subsystem.txt
• /usr/src/linux/Documentation/cgroups/
memcg_test.txt
• /usr/src/linux/Documentation/cgroups/memory.txt
• For Linux Containers (LXC) based on cgroups, see Virtualization with Linux Con-
tainers (LXC) (↑Virtualization with Linux Containers (LXC)).
Some states also have submodes with different power saving latency levels. Which C-
states and submodes are supported depends on the respective processor. However, C1
is always available.
Table 11.1: C-States (Mode / Definition)
The higher the P-state, the lower the frequency and voltage at which the processor runs.
The number of P-states is processor-specific and the implementation differs across
the various types. However, P0 is always the highest-performance state. Higher P-
state numbers represent slower processor speeds and lower power consumption. For
example, a processor in P3 state runs more slowly and uses less power than a proces-
sor running at P1 state. To operate at any P-state, the processor must be in the C0 state
where the processor is working and not idling. The CPU P-states are also defined in
the Advanced Configuration and Power Interface (ACPI) specification, see http://
www.acpi.info/spec.htm.
Note that throttling does not reduce voltage and since the CPU is forced to idle part of
the time, processes will take longer to finish and will consume more power instead of
saving any power.
T-states are only useful if reducing thermal effects is the primary goal. Since T-states
can interfere with C-states (preventing the CPU from reaching higher C-states), they
can even increase power consumption in a modern CPU capable of C-states.
However, the conditions under which a CPU core may use turbo frequencies are very
architecture-specific. Learn how to evaluate the efficiency of those new features in
Section 11.3.2, “Using the cpupower Tools” (page 139).
In order to dynamically scale processor frequencies at runtime, you can use the
CPUfreq infrastructure to set a static or dynamic power policy for the system. Its main
components are the CPUfreq subsystem (providing a common interface to the various
low-level technologies and high-level policies), the in-kernel governors (policy gover-
nors that can change the CPU frequency based on different criteria) and CPU-specific
drivers that implement the technology for the specific processor.
The dynamic scaling of the clock speed helps to consume less power and generate less
heat when not operating at full capacity.
Performance Governor
The CPU frequency is statically set to the highest possible for maximum perfor-
mance. Consequently, saving power is not the focus of this governor.
Tuning options: The range of maximum frequencies available to the governor can
be adjusted (for example, with the cpupower command line tool).
Powersave Governor
The CPU frequency is statically set to the lowest possible. This can have severe
impact on the performance, as the system will never rise above this frequency no
matter how busy the processors are.
Tuning options: The range of minimum frequencies available to the governor can
be adjusted (for example, with the cpupower command line tool).
On-demand Governor
The kernel implementation of a dynamic CPU frequency policy: The governor
monitors the processor utilization. As soon as it exceeds a certain threshold, the
governor will set the frequency to the highest available. If the utilization is less
than the threshold, the next lowest frequency is used. If the system continues to be
underemployed, the frequency is again reduced until the lowest available frequency
is set.
For SUSE Linux Enterprise, the on-demand governor is the default governor and
the one that has the best test coverage.
Tuning options: The range of available frequencies, the rate at which the gov-
ernor checks utilization, and the utilization threshold can be adjusted. An-
other parameter you might want to change for the on-demand governor is
ignore_nice_load. For details, refer to Procedure 11.1, “Ignoring Nice Val-
ues in Processor Utilization” (page 145).
Conservative Governor
Similar to the on-demand implementation, this governor also dynamically adjusts
frequencies based on processor utilization, except that it allows for a more gradual
increase in power. If processor utilization exceeds a certain threshold, the governor
does not immediately switch to the highest available frequency (as the on-demand
governor does), but only to next higher frequency available.
Tuning options: The range of available frequencies, the rate at which the governor
checks utilization, the utilization thresholds, and the frequency step rate can be ad-
justed.
The settings under the cpufreq directory can be different for each proces-
sor. If you want to use the same policies across all processors, you need
to adjust the parameters for each processor. Instead of looking up or mod-
ifying the current settings manually (in /sys/devices/system/cpu*/
cpufreq), we advise using the tools provided by the cpupower package
or by the older cpufrequtils package for that.
After you have installed the cpufrequtils package, you can make use of the
cpufreq-info and cpufreq-set command line tools.
Using the appropriate options, you can view the current CPU frequency, the mini-
mum and maximum CPU frequency allowed, show the currently used CPUfreq poli-
cy, the available CPUfreq governors, or determine the CPUfreq kernel driver used. For
more details and the available options, refer to the cpufreq-info man page or run
cpufreq-info --help.
• To specify the number of the CPU to which the command is applied, both com-
mands have the -c option. Due to the command-subcommand structure, the place-
ment of the -c option is different for cpupower:
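For example (a sketch; the CPU number is arbitrary):
cpufreq-info -c 5                  # cpufrequtils: -c follows the command
cpupower -c 5 frequency-info       # cpupower: -c comes before the subcommand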
cpupower lets you also specify a list of CPUs with -c. For example, the following
command would affect the CPUs 1, 2, 3, and 5:
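cpupower -c 1,2,3,5 frequency-info     # a sketch using the frequency-info subcommand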
• If cpufreq* and cpupower are used without the -c option, the behavior differs:
analyzing CPU 0:
driver: acpi-cpufreq
CPUs which run at the same hardware frequency: 0 1 2 3
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: 10.0 us.
hardware limits: 2.00 GHz - 2.83 GHz
available frequency steps: 2.83 GHz, 2.34 GHz, 2.00 GHz
available cpufreq governors: conservative, userspace, powersave, ondemand, performance
current policy: frequency should be within 2.00 GHz and 2.83 GHz.
The governor "ondemand" may decide which speed to use
within this range.
current CPU frequency is 2.00 GHz (asserted by call to hardware).
boost state support:
Supported: yes
Active: yes
Analyzing CPU 0:
Number of idle states: 3
Available idle states: C1 C2
C1:
Flags/Description: ACPI FFH INTEL MWAIT 0x0
Latency: 1
Usage: 3156464
Duration: 233680359
C2:
Flags/Description: ACPI FFH INTEL MWAIT 0x10
Latency: 1
Usage: 273007117
Duration: 103148860538
After finding out which processor idle states are supported with cpupower idle-
info, individual states can be disabled using the cpupower idle-set command.
Typically one wants to disable the deepest sleep state, for example:
cpupower idle-set -d 4
But before making this change permanent by adding the corresponding command to a
current /etc/init.d/* service file, check for performance or power impact.
Mperf shows the average frequency of a CPU, including boost frequencies, over
a period of time. Additionally, it shows the percentage of time the CPU has been
active (C0) or in any sleep state (Cx). The default sampling rate is 1 second and
the values are read directly from the hardware registers. As the turbo states are
managed by the BIOS, it is impossible to get the frequency values at a given in-
stant. On modern processors with turbo features the Mperf monitor is the only
way to find out about the frequency a certain CPU has been running in.
Idle_Stats shows the statistics of the cpuidle kernel subsystem. The kernel up-
dates these values every time an idle state is entered or left. Therefore there can
be some inaccuracy when cores are in an idle state for some time when the mea-
surement starts or ends.
Apart from the (general) monitors in the example above, other architecture-specific
monitors are available. For detailed information, refer to the cpupower-monitor
man page.
By comparing the values of the individual monitors, you can find correlations and de-
pendencies and evaluate how well the power saving mechanism works for a certain
workload. In Example 11.4 (page 142) you can see that CPU 0 is idle (the value
of Cx is near to 100%), but runs at a very high frequency. Additionally, the CPUs 0
and 1 have the same frequency values which means that there is a dependency between
them.
The column shows the C-states. When working, the CPU is in state 0, when rest-
ing it is in some state greater than 0, depending on which C-states are available
and how deep the CPU is sleeping.
The column shows average time in milliseconds spent in the particular C-state.
The column shows the percentages of time spent in various C-states. For consid-
erable power savings during idle, the CPU should be in deeper C-states most of
the time. In addition, the longer the average time spent in these C-states, the more
power is saved.
The column shows the frequencies the processor and kernel driver support on
your system.
The column shows the amount of time the CPU cores stayed in different frequen-
cies during the measuring period.
Shows how often the CPU is awoken per second (number of interrupts). The low-
er the number the better. The interval value is the powerTOP refresh interval
which can be controlled with the -t option. The default time to gather data is 5
seconds.
When running powerTOP on a laptop, this line displays the ACPI information on
how much power is currently being used and the estimated time until discharge of
the battery. On servers, this information is not available.
Shows what is causing the system to be more active than needed. powerTOP dis-
plays the top items causing your CPU to awake during the sampling period.
Suggestions on how to improve power usage for this machine.
If you want the change in governor to persist also after a reboot or shutdown, use the
pm-profiler as described in Section 11.5, “Creating and Using Power Management
Profiles” (page 147).
To set values for the minimum or maximum CPU frequency the governor may select,
use the -d or -u option, respectively.
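For example, to cap the maximum frequency the governor may select (a sketch; the value
must be one of the frequencies supported by your CPU):
cpupower frequency-set -u 2.5GHz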
Apart from the governor settings that can be influenced with cpupower or
cpufreq*, you can also tune further governor parameters manually, for example, Ig-
noring Nice Values in Processor Utilization (page 145).
One parameter you might want to change for the on-demand or conservative governor
is ignore_nice_load.
Each process has a niceness value associated with it. This value is used by the kernel
to determine which processes require more processor time than others. The higher the
nice value, the lower the priority of the process. Or: the “nicer” a process, the less CPU
it will try to take from other processes.
1 Change to the subdirectory of the governor whose settings you want to modify, for
example:
cd /sys/devices/system/cpu/cpu0/cpufreq/conservative/
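2 Enable the parameter by writing 1 to it:
echo 1 > ignore_nice_load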
When setting the ignore_nice_load value for cpu0, the same value is
automatically used for all cores. In this case, you do not need to repeat the
steps above for each of the processors where you want to modify this gover-
nor parameter.
Another parameter that significantly impacts the performance loss caused by dynam-
ic frequency scaling is the sampling rate (rate at which the governor checks the current
CPU load and adjusts the processor's frequency accordingly). Its default value depends
on a BIOS value and it should be as low as possible. However, in modern systems, an
appropriate sampling rate is set by default and does not need manual intervention.
2 To create the configuration file for the new profile, copy the profile template to the
newly created directory:
cp /usr/share/doc/packages/pm-profiler/config.template \
/etc/pm-profiler/testprofile/config
The profile name you enter here must match the name you used in the path to the
profile configuration file (/etc/pm-profiler/testprofile/config), not
necessarily the NAME you used for the profile inside that configuration file.
or
/usr/lib/pm-profiler/enable-profile testprofile
Though you have to manually create or modify a profile by editing the respective pro-
file configuration file, you can use YaST to switch between different profiles. Start
YaST and select System > Power Management to open the Power Management Settings.
Alternatively, become root and execute yast2 power-management on a com-
mand line. The drop-down list shows the available profiles. Default means that the
system default settings will be kept. Select the profile to use and click Finish.
11.6 Troubleshooting
BIOS options enabled?
In order to make use of C-states or P-states, check your BIOS options:
In case of a CPU upgrade, make sure to upgrade your BIOS, too. The BIOS needs
to know the new CPU and its valid frequency steps in order to pass this informa-
tion on to the operating system.
If you suspect problems with the CPUfreq subsystem on your machine, you can al-
so enable additional debug output. To do so, either use cpufreq.debug=7 as
boot parameter or execute the following command as root:
echo 7 > /sys/module/cpufreq/parameters/debug
This will cause CPUfreq to log more information to dmesg on state transitions,
which is useful for diagnosis. But as this additional output of kernel messages can
be rather comprehensive, use it only if you are fairly sure that a problem exists.
Using this functionality, you can safely test kernel updates while being able to always
fall back to the proven former kernel. To do so, do not use the update tools (such as the
YaST Online Update or the updater applet), but instead follow the process described in
this chapter.
Please be aware that you lose your entire support entitlement for the ma-
chine when installing a self-compiled or a third-party kernel. Only kernels
shipped with SUSE Linux Enterprise Server and kernels delivered via the of-
ficial update channels for SUSE Linux Enterprise Server are supported.
2 Search for the string multiversion. To enable multiversion for all kernel pack-
ages capable of this feature, uncomment the following line
# multiversion = provides:multiversion(kernel)
3 To restrict multiversion support to certain kernel flavors, add the package names
as a comma-separated list, to the multiversion option in /etc/zypp/
zypp.conf—for example
multiversion = kernel-default,kernel-default-base,kernel-source
2 Search for the string multiversion.kernels and activate this option by un-
commenting the line. This option takes a comma separated list of the following val-
ues
latest-N: keep the kernel with the Nth highest version number
oldest: keep the kernel with the lowest version number (the one that was origi-
nally shipped with SUSE Linux Enterprise Server)
oldest+N: keep the kernel with the Nth lowest version number
multiversion.kernels = latest,running
Keep the latest kernel and the currently running one. This is similar to not
enabling the multiversion feature at all, except that the old kernel is removed af-
ter the next reboot and not immediately after the installation.
multiversion.kernels = latest,latest-1,running
Keep the last two kernels and the one currently running.
multiversion.kernels = latest,running,3.0.rc7-test
Keep the latest kernel, the one currently running and 3.0.rc7-test.
Unless using special setups, you probably always want to keep the run-
ning kernel.
2 List all packages capable of providing multiple versions by choosing View > Package
Groups > Multiversion Packages.
3 Select a package and open its Version tab in the bottom pane on the left.
4 To install a package, click its check box. A green check mark indicates it is selected
for installation.
To remove an already installed package (marked with a white check mark), click its
check box until a red X indicates it is selected for removal.
Choosing the best suited I/O elevator not only depends on the workload, but on the
hardware, too. Single ATA disk systems, SSDs, RAID arrays, or network storage sys-
tems, for example, each require different tuning strategies.
By default the CFQ (Completely Fair Queuing) scheduler is used. Change this default
by entering the boot parameter
elevator=SCHEDULER
To change the elevator for a specific device in the running system, run the following
command:
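echo SCHEDULER > /sys/block/DEVICE/queue/scheduler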
where SCHEDULER is one of cfq, noop, or deadline and DEVICE the block de-
vice (sda for example).
On IBM System z the default I/O scheduler for a storage device is set by the
device driver.
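Tunables of the selected scheduler can be adjusted at runtime in a similar way (a sketch):
echo VALUE > /sys/block/DEVICE/queue/iosched/TUNABLE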
where VALUE is the desired value for the TUNABLE and DEVICE the block device.
To find out which elevator is the current default, run the following command. The cur-
rently selected scheduler is listed in brackets:
jupiter:~ # cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
/sys/block/<device>/queue/iosched/slice_idle
When a task has no more I/O to submit in its time slice, the I/O scheduler waits
for a while before scheduling the next thread to improve locality of I/O. For me-
dia where locality does not play a big role (SSDs, SANs with lots of disks) setting
/sys/block/<device>/queue/iosched/slice_idle to 0 can im-
prove the throughput considerably.
/sys/block/<device>/queue/iosched/low_latency
For workloads where the latency of I/O is crucial, setting /sys/block/<de
vice>/queue/iosched/low_latency to 1 can help.
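For example, to apply both settings to an SSD (assuming the device is sda):
echo 0 > /sys/block/sda/queue/iosched/slice_idle
echo 1 > /sys/block/sda/queue/iosched/low_latency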
13.2.2 NOOP
A trivial scheduler that just passes down the I/O that comes to it. Useful for checking
whether the complex I/O scheduling decisions of other schedulers are causing I/O per-
formance regressions.
In some cases it can be helpful for devices that do I/O scheduling themselves, such as intelli-
gent storage, or for devices that do not depend on mechanical movement, like SSDs. Usu-
ally, the DEADLINE I/O scheduler is a better choice for these devices, but due to less
overhead NOOP may produce better performance on certain workloads.
13.2.3 DEADLINE
DEADLINE is a latency-oriented I/O scheduler. Each I/O request has a dead-
line assigned. Usually, requests are stored in queues (read and write) sorted by sec-
tor numbers. The DEADLINE algorithm maintains two additional queues (read and
write) where the requests are sorted by deadline. As long as no request has timed out,
the “sector” queue is used. If timeouts occur, requests from the “deadline” queue are
served until there are no more expired requests. Generally, the algorithm prefers reads
over writes.
This scheduler can provide a superior throughput over the CFQ I/O scheduler in cas-
es where several threads read and write and fairness is not an issue. For example, for
/sys/block/<device>/queue/iosched/writes_starved
Controls how many reads can be sent to disk before it is possible to send writes. A
value of 3 means that three read operations are carried out for one write opera-
tion.
/sys/block/<device>/queue/iosched/read_expire
Sets the deadline (current time plus the read_expire value) for read operations in
milliseconds. The default is 500.
/sys/block/<device>/queue/iosched/write_expire
Sets the deadline (current time plus the write_expire value) for write operations in
milliseconds. The default is 5000.
Sending write barriers can be disabled using the barrier=0 mount option (for ext3,
ext4, and reiserfs), or using the nobarrier mount option (for XFS).
WARNING
Disabling barriers when disks cannot guarantee caches are properly written
in case of power failure can lead to severe file system corruption and data
loss.
The following sections explain the most important terms related to a process schedul-
ing. They also introduce information about the task scheduler policy, scheduling algo-
rithm, description of the task scheduler used by SUSE Linux Enterprise Server, and
references to other sources of relevant information.
14.1 Introduction
The Linux kernel controls the way tasks (or processes) are managed in the running sys-
tem. The task scheduler, sometimes called process scheduler, is the part of the kernel
that decides which task to run next. It is one of the core components of a multitasking
operating system (such as Linux), being responsible for best utilizing system resources
to guarantee that multiple tasks are being executed simultaneously.
As already mentioned, Linux, like all other Unix variants, is a multitasking operating
system. That means that several tasks can be running at the same time. Linux provides
so-called preemptive multitasking, where the scheduler decides when a process is sus-
pended. This forced suspension is called preemption. All Unix flavors have been pro-
viding preemptive multitasking since the beginning.
14.1.2 Timeslice
The time period for which a process will be running before it is preempted is defined
in advance. It is called a process' timeslice and represents the amount of processor time
that is provided to each process. By assigning timeslices, the scheduler makes global
decisions for the running system, and prevents individual processes from dominating
over the processor resources.
processor-bound
On the other hand, processor-bound tasks use their time to execute code, and
usually run until they are preempted by the scheduler. They do not block process-
es waiting for I/O requests, and, therefore, can be run less frequently but for longer
time intervals.
• Interactive processes spend a lot of time waiting for I/O requests, such as keyboard
or mouse operations. The scheduler must wake up such process quickly on user re-
quest, or the user will find the environment unresponsive. The typical delay is ap-
proximately 100 ms. Office applications, text editors or image manipulation pro-
grams represent typical interactive processes.
• Batch processes often run in the background and do not need to be responsive. They
usually receive lower priority from the scheduler. Multimedia converters, database
search engines, or log files analyzers are typical examples of batch processes.
• Real-time processes must never be blocked by low-priority processes, and the sched-
uler guarantees a short response time to them. Applications for editing multimedia
content are a good example here.
The scheduler calculates the timeslices dynamically. However, to determine the appro-
priate timeslice is a complex task: Too long timeslices cause the system to be less inter-
active and responsive, while too short ones make the processor waste a lot of time on
the overhead of switching processes too frequently.
A process does not have to use all its timeslice at once. For instance, a process with a
timeslice of 150ms does not have to be running for 150ms in one go. It can be running
in five different schedule slots for 30ms instead. Interactive tasks typically benefit from
this approach because they do not need such a large timeslice at once while they need
to be responsive as long as possible.
The scheduler also assigns process priorities dynamically. It monitors the processes' be-
havior and, if needed, adjusts its priority. For example, a process which is being sus-
pended for a long time is brought up by increasing its priority.
Group Scheduling
For example, if you split processes into groups according to which user is running
them, CFS tries to provide each of these groups with the same amount of proces-
sor time.
As a result, CFS brings more optimized scheduling for both servers and desktops.
When a task enters into the run queue (a planned time line of processes to be execut-
ed next), the scheduler records the current time. While the process waits for processor
time, its “wait” value gets incremented by an amount derived from the total number of
tasks currently in the run queue and the process priority. As soon as the processor runs
the task, its “wait” value gets decremented. If the value drops below a certain level, the
task is preempted by the scheduler and other tasks get closer to the processor. By this
algorithm, CFS tries to reach the ideal state where the “wait” value is always zero.
• By user IDs
The way the kernel scheduler lets you group the runnable tasks depends on set-
ting the kernel compile-time options CONFIG_FAIR_USER_SCHED and
CONFIG_FAIR_CGROUP_SCHED. The default setting in SUSE® Linux Enterprise
Server 11 SP4 is to use control groups, which lets you create groups as needed. For
more information, see Chapter 10, Kernel Control Groups (page 119).
If you run SUSE Linux Enterprise Server on a kernel that was not shipped
with it, for example on a self-compiled kernel, you lose the entire support en-
titlement.
14.4.4 Terminology
Documents regarding task scheduling policy often use several technical terms which
you need to know to understand the information correctly. Here are some of them:
Latency
Delay between the time a process is scheduled to run and the actual process execu-
tion.
Granularity
The relation between granularity and latency can be expressed by the following
equation:
gran = ( lat / rtasks ) - ( lat / rtasks / rtasks )
where gran stands for granularity, lat stands for latency, and rtasks is the number of
running tasks.
SCHED_FIFO
Scheduling policy designed for special time-critical applications. It uses the First
In-First Out scheduling algorithm.
SCHED_BATCH
Scheduling policy designed for CPU-intensive tasks.
SCHED_IDLE
Scheduling policy intended for very low prioritized tasks.
SCHED_OTHER
Default Linux time-sharing scheduling policy used by the majority of processes.
Before setting a new scheduling policy on the process, you need to find out the mini-
mum and maximum valid priorities for each scheduling algorithm:
saturn.example.com:~ # chrt -m
SCHED_OTHER min/max priority : 0/0
SCHED_FIFO min/max priority : 1/99
SCHED_RR min/max priority : 1/99
SCHED_BATCH min/max priority : 0/0
SCHED_IDLE min/max priority : 0/0
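As a sketch, the following assigns an already running process (the PID 4712 is a
placeholder) the SCHED_FIFO policy with priority 50 and then queries the result:
chrt -f -p 50 4712
chrt -p 4712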
For more information on chrt, see its man page (man 1 chrt).
sysctl variable
sysctl variable=value
Note that variables ending with “_ns” and “_us” accept values in nanoseconds and mi-
croseconds, respectively.
A list of the most important task scheduler sysctl tuning variables (located at /
proc/sys/kernel/) with a short description follows:
sched_child_runs_first
A freshly forked child runs before the parent continues execution. Setting this pa-
rameter to 1 is beneficial for an application in which the child performs an ex-
ecution after fork. For example make -j<NO_CPUS> performs better when
sched_child_runs_first is turned off. The default value is 0.
sched_compat_yield
Enables the aggressive yield behavior of the old O(1) scheduler. Java applications
that use synchronization extensively perform better with this value set to 1. Only
use it when you see a drop in performance. The default value is 0.
sched_migration_cost
Amount of time after the last execution that a task is considered to be “cache hot”
in migration decisions. A “hot” task is less likely to be migrated, so increasing this
variable reduces task migrations. The default value is 500000 (ns).
If the CPU idle time is higher than expected when there are runnable processes,
try reducing this value. If tasks bounce between CPUs or nodes too often, try in-
creasing it.
sched_latency_ns
Targeted preemption latency for CPU bound tasks. Increasing this variable increas-
es a CPU bound task's timeslice. A task's timeslice is its weighted fair share of the
scheduling period:
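timeslice = scheduling period * (task's weight / total weight of tasks in the run queue)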
The task's weight depends on the task's nice level and the scheduling policy. Mini-
mum task weight for a SCHED_OTHER task is 15, corresponding to nice 19. The
maximum task weight is 88761, corresponding to nice -20.
Timeslices become smaller as the load increases. When the number of runnable
tasks exceeds sched_latency_ns/sched_min_granularity_ns, the
slice becomes number_of_running_tasks * sched_min_granularity_ns.
Prior to that, the slice is equal to sched_latency_ns.
This value also specifies the maximum amount of time during which a sleeping
task is considered to be running for entitlement calculations. Increasing this vari-
able increases the amount of time a waking task may consume before being pre-
empted, thus increasing scheduler latency for CPU bound tasks. The default value
is 20000000 (ns).
sched_min_granularity_ns
Minimal preemption granularity for CPU bound tasks. See
sched_latency_ns for details. The default value is 4000000 (ns).
sched_wakeup_granularity_ns
The wake-up preemption granularity. Increasing this variable reduces wake-up
preemption, reducing disturbance of compute bound tasks. Lowering it improves
wake-up latency and throughput for latency-critical tasks.
sched_rt_period_us
Period over which real-time task bandwidth enforcement is measured. The default
value is 1000000 (µs).
sched_rt_runtime_us
Quantum allocated to real-time tasks during sched_rt_period_us. Setting to -1 dis-
ables RT bandwidth enforcement. By default, RT tasks may consume 95% CPU/
sec, thus leaving 5% CPU/sec, or 0.05s, to be used by SCHED_OTHER tasks.
sched_features
Provides information about specific debugging features.
sched_stat_granularity_ns
Specifies the granularity for collecting task scheduler statistics.
sched_nr_migrate
Controls how many tasks can be moved across processors through migration soft-
ware interrupts (softirq). If a large number of tasks is created by SCHED_OTHER
policy, they will all be run on the same processor. The default value is 32. Increas-
ing this value gives a performance boost to large SCHED_OTHER threads at the
expense of increased latencies for real-time tasks.
runnable tasks:
 task   PID      tree-key        switches  prio  exec-runtime      sum-exec    sum-sleep
 ----------------------------------------------------------------------------------------
 R cat  16884    54410632.307072    0      120   54410632.307072   13.836804   0.000000
/proc/schedstat
Displays statistics relevant to the current run queue. Also domain-specific statis-
tics for SMP systems are displayed for all connected processors. Because the out-
put format is not user-friendly, read the contents of /usr/src/linux/Docu
mentation/scheduler/sched-stats.txt for more information.
/proc/PID/sched
Displays scheduling information on the process with id PID.
saturn.example.com:~ # cat /proc/`pidof nautilus`/sched
• For task scheduler System Calls description, see the relevant manual page (for exam-
ple man 2 sched_setaffinity).
The memory management subsystem, also called the virtual memory manager, will
subsequently be referred to as “VM”. The role of the VM is to manage the allocation of
physical memory (RAM) for the entire kernel and user programs. It is also responsible
for providing a virtual memory environment for user processes (managed via POSIX
APIs with Linux extensions). Finally, the VM is responsible for freeing up RAM when
there is a shortage, either by trimming caches or swapping out “anonymous” memory.
The most important thing to understand when examining and tuning VM is how its
caches are managed. The basic goal of the VM's caches is to minimize the cost of I/O
as generated by swapping and file system operations (including network file systems).
This is achieved by avoiding I/O completely, or by submitting I/O in better patterns.
Free memory will be used and filled up by these caches as required. The more mem-
ory is available for caches and anonymous memory, the more effectively caches and
swapping will operate. However, if a memory shortage is encountered, caches will be
trimmed or memory will be swapped out.
For a particular workload, the first thing that can be done to improve performance is to
increase memory and reduce the frequency that memory must be trimmed or swapped.
The second thing is to change the way caches are managed by changing kernel parame-
ters.
15.1.2 Pagecache
A cache of file data. When a file is read from disk or network, the contents are stored
in pagecache. No disk or network access is required, if the contents are up-to-date in
pagecache. tmpfs and shared memory segments count toward pagecache.
When a file is written to, the new data is stored in pagecache before being written back
to a disk or the network (making it a write-back cache). When a page has new data
not written back yet, it is called “dirty”. Pages not classified as dirty are “clean”. Clean
pagecache pages can be reclaimed if there is a memory shortage by simply freeing
them. Dirty pages must first be made clean before being reclaimed.
15.1.5 Writeback
As applications write to files, the pagecache (and buffercache) becomes dirty. When
pages have been dirty for a given amount of time, or when the amount of dirty memory
reaches a particular percentage of RAM, the kernel begins writeback. Flusher threads
perform writeback in the background and allow applications to continue running. If the
I/O cannot keep up with applications dirtying pagecache, and dirty data reaches a criti-
cal percentage of RAM, then applications begin to be throttled to prevent dirty data ex-
ceeding this threshold.
15.1.6 Readahead
The VM monitors file access patterns and may attempt to perform readahead. Reada-
head reads pages into the pagecache from the file system that have not been requested
yet. It is done in order to allow fewer, larger I/O requests to be submitted (which is
more efficient) and to allow I/O to be pipelined (performed at the same time as the
application is running).
Swap I/O tends to be much less efficient than other I/O. However, some pagecache
pages will be accessed much more frequently than less used anonymous memory.
The right balance should be found here.
If swap activity is observed during slowdowns, it may be worth reducing this para-
meter. If there is a lot of I/O activity and the amount of pagecache in the system is
rather small, or if there are large dormant applications running, increasing this val-
ue might improve performance.
Note that the more data is swapped out, the longer the system will take to swap da-
ta back in when it is needed.
/proc/sys/vm/min_free_kbytes
This controls the amount of memory that is kept free for use by special reserves
including “atomic” allocations (those which cannot wait for reclaim). This should
not normally be lowered unless the system is being very carefully tuned for mem-
ory usage (normally useful for embedded rather than server applications). If
“page allocation failure” messages and stack traces are frequently seen in logs,
min_free_kbytes could be increased until the errors disappear. There is no need
for concern, if these messages are very infrequent. The default value depends on
the amount of RAM.
/proc/sys/vm/dirty_background_ratio
This is the percentage of the total amount of free and reclaimable memory. When
the amount of dirty pagecache exceeds this percentage, writeback threads start
writing back dirty memory. The default value is 10 (%).
/proc/sys/vm/dirty_ratio
Similar percentage value as above. When this is exceeded, applications that want to
write to the pagecache are blocked and start performing writeback as well. The de-
fault value is 40 (%).
These two values together determine the pagecache writeback behavior. If these values
are increased, more dirty memory is kept in the system for a longer time. With more
dirty memory allowed in the system, the chance to improve throughput by avoiding
writeback I/O and by submitting more optimal I/O patterns increases. However, more
dirty memory can harm latency when memory needs to be reclaimed, or when it needs
to be written back to disk at data integrity (sync) points.
/proc/sys/vm/zone_reclaim_mode
This parameter controls whether memory reclaim is performed on a local NUMA
node even if there is plenty of memory free on other nodes. This parameter is au-
tomatically turned on on machines with more pronounced NUMA characteristics.
If the VM caches are not being allowed to fill all of memory on a NUMA ma-
chine, it could be due to zone_reclaim_mode being set. Setting to 0 will disable
this behavior.
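A minimal sketch of checking and disabling local node reclaim at runtime:
sysctl vm.zone_reclaim_mode
sysctl -w vm.zone_reclaim_mode=0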
1. vmstat: This tool gives a good overview of what the VM is doing. See Section 2.1.1,
“vmstat” (page 10) for details.
3. slabtop: This tool provides detailed information about kernel slab memory usage.
buffer_head, dentry, inode_cache, ext3_inode_cache, etc. are the major caches. This
command is available with the package procps.
Since kernel version 2.6.17, full autotuning with a 4 MB maximum buffer size exists. This means that in most cases manual tuning will not improve networking performance considerably. It is often best not to touch the following variables, or at least to check the outcome of any tuning efforts carefully.
In the /proc file system, it is possible, for example, to either set the Maximum Socket Receive Buffer and Maximum Socket Send Buffer for all protocols, or to set both of these options for the TCP protocol only (in ipv4), thus overriding the setting for all protocols (in core).
/proc/sys/net/ipv4/tcp_moderate_rcvbuf
If /proc/sys/net/ipv4/tcp_moderate_rcvbuf is set to 1, autotuning
is active and buffer size is adjusted dynamically.
/proc/sys/net/ipv4/tcp_rmem
The three values set the minimum, initial, and maximum size of the Memory Receive Buffer per connection. They define the actual memory usage, not just the TCP window size.
/proc/sys/net/ipv4/tcp_wmem
The same as tcp_rmem, but just for Memory Send Buffer per connection.
/proc/sys/net/core/rmem_max
Set to limit the maximum receive buffer size that applications can request.
/proc/sys/net/core/wmem_max
Set to limit the maximum send buffer size that applications can request.
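The current values of these variables can be listed with sysctl; a minimal sketch:
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
sysctl net.core.rmem_max net.core.wmem_max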
Via /proc it is possible to disable TCP features that you do not need (all TCP fea-
tures are switched on by default). For example, check the following files:
/proc/sys/net/ipv4/tcp_timestamps
TCP timestamps are defined in RFC1323.
/proc/sys/net/ipv4/tcp_window_scaling
TCP window scaling is also defined in RFC1323.
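For example, TCP timestamps could be switched off at runtime as follows (keep in mind that these features are enabled by default for good reasons, so check the outcome of such a change carefully):
sysctl -w net.ipv4.tcp_timestamps=0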
Use sysctl to read or write variables of the /proc file system. sysctl is prefer-
able to cat (for reading) and echo (for writing), because it also reads settings from /
etc/sysctl.conf and, thus, those settings survive reboots reliably. With sysctl
you can read all variables and their values easily; as root use the following command
to list TCP related settings:
sysctl -a | grep tcp
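To write a variable and keep the setting across reboots, set it at runtime and add it to /etc/sysctl.conf; the variable and value below are only an example:
sysctl -w net.core.rmem_max=8388608
echo "net.core.rmem_max = 8388608" >> /etc/sysctl.conf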
Tuning network variables can affect other system resources such as CPU or
memory use.
The following tools can help you analyze your network traffic: netstat, tcpdump, and wireshark. Wireshark is a network traffic analyzer.
16.3 Netfilter
The Linux firewall and masquerading features are provided by the Netfilter kernel modules. This is a highly configurable rule-based framework. If a rule matches a packet, Netfilter accepts or denies it, or takes a special action (“target”) as defined by the rule, for example address translation.
Netfilter can take quite a few packet properties into account. Thus, the more rules are defined, the longer packet processing may take. Advanced connection tracking can also be rather expensive and thus slow down overall networking.
When the kernel queue becomes full, all new packets are dropped, causing exist-
ing connections to fail. The 'fail-open' feature, available since SUSE Linux En-
For more information, see the home page of the Netfilter and iptables project,
http://www.netfilter.org
Some modern network interfaces can help distribute the work to multiple CPU cores
through the implementation of multiple transmission and multiple receive queues in
hardware. However, others are only equipped with a single queue and the driver must
deal with all incoming packets in a single, serialized stream. To work around this issue,
the operating system must "parallelize" the stream to distribute the work across multi-
ple CPUs. On SUSE Linux Enterprise Server this is done via Receive Packet Steering
(RPS). RPS can also be used in virtual environments.
RPS creates a unique hash for each data stream using IP addresses and port numbers. The use of this hash ensures that packets for the same data stream are sent to the same CPU, which helps to increase performance.
RPS is configured per network device receive queue and interface. The configuration
file names match the following scheme:
/sys/class/net/<device>/queues/<rx-queue>/rps_cpus
where <device> is the network device, such as eth0, eth1, and <rx-queue> is
the receive queue, such as rx-0, rx-1.
If the network interface hardware only supports a single receive queue, only rx-0 will
exist. If it supports multiple receive queues, there will be an rx-N directory for each re-
ceive queue.
To enable RPS and allow specific CPUs to process packets for the receive queue of the interface, set the value of their positions in the bitmap to 1. For example, to let CPUs 0-3 process packets for the first receive queue of eth0, set bit positions 0-3 to 1. In binary, this value is 00001111; it needs to be converted to hexadecimal, which results in F in this case. Set this hex value with the following command:
echo "f" > /sys/class/net/eth0/queues/rx-0/rps_cpus
On NUMA machines, best performance can be achieved by configuring RPS to use the
CPUs on the same NUMA node as the interrupt for the interface's receive queue.
On non-NUMA machines, all CPUs can be used. If the interrupt rate is very high, ex-
cluding the CPU handling the network interface can boost performance. The CPU be-
ing used for the network interface can be determined from /proc/interrupts.
For example:
root # cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
...
51: 113915241 0 0 0 Phys-fasteoi eth0
...
In this case, CPU 0 is the only CPU processing interrupts for eth0, since only CPU0 shows a non-zero value.
On i586 and x86_64 platforms, irqbalance can be used to distribute hardware in-
terrupts across CPUs. See man 1 irqbalance for more details.
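Which CPUs are allowed to handle a given interrupt can also be inspected directly; a minimal sketch (IRQ number 51 is taken from the example above):
cat /proc/irq/51/smp_affinity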
While a running process is being monitored for system or library calls, the
performance of the process is heavily reduced. You are advised to use trac-
ing tools only for the time you need to collect the data.
To attach strace to an already running process, specify the -p option with the process ID (PID) of the process that you want to monitor:
tux@mercury:~> strace -p `pidof mysqld`
Process 2868 attached - interrupt to quit
select(15, [13 14], NULL, NULL, NULL) = 1 (in [14])
fcntl(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0
accept(14, {sa_family=AF_FILE, NULL}, [2]) = 31
fcntl(14, F_SETFL, O_RDWR) = 0
getsockname(31, {sa_family=AF_FILE, path="/var/run/mysql"}, [28]) = 0
fcntl(31, F_SETFL, O_RDONLY) = 0
fcntl(31, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(31, F_SETFL, O_RDWR|O_NONBLOCK) = 0
[...]
setsockopt(31, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation \
not supported)
clone(child_stack=0x7fd1864801f0, flags=CLONE_VM|CLONE_FS|CLONE_ \
FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_ \
PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fd1864809e0, \
tls=0x7fd186480910, child_tidptr=0x7fd1864809e0) = 21993
select(15, [13 14], NULL, NULL, NULL
The -c option calculates the time the kernel spent on each system call:
tux@mercury:~> strace -c find /etc -name xorg.conf
/etc/X11/xorg.conf
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
32.38 0.000181 181 1 execve
22.00 0.000123 0 576 getdents64
19.50 0.000109 0 917 31 open
19.14 0.000107 0 888 close
4.11 0.000023 2 10 mprotect
0.00 0.000000 0 1 write
[...]
0.00 0.000000 0 1 getrlimit
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 3 1 futex
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 4 fadvise64
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 0.000559 3633 33 total
If you need to analyze the output of strace and the output messages are too long to be inspected directly in the console window, use -o to write the output to a file. In that case, unnecessary messages, such as information about attaching and detaching processes, are suppressed. You can also suppress these messages (normally printed on the standard output) with -q. To prepend a timestamp to each line with a system call, use -t:
tux@mercury:~> strace -t -o strace_sleep.txt sleep 1; more strace_sleep.txt
08:44:06 execve("/bin/sleep", ["sleep", "1"], [/* 81 vars */]) = 0
08:44:06 brk(0) = 0x606000
08:44:06 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \
-1, 0) = 0x7f8e78cc5000
[...]
08:44:06 close(3) = 0
08:44:06 nanosleep({1, 0}, NULL) = 0
08:44:07 close(1) = 0
08:44:07 close(2) = 0
08:44:07 exit_group(0) = ?
The behavior and output format of strace can be largely controlled. For more informa-
tion, see the relevant manual page (man 1 strace).
In addition to library calls, ltrace with the -S option can trace system calls as well:
tux@mercury:~> ltrace -S -o ltrace_find.txt find /etc -name \
xorg.conf; more ltrace_find.txt
SYS_brk(NULL) = 0x00628000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7f1327ea1000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7f1327ea0000
[...]
fnmatch("xorg.conf", "xorg.conf", 0) = 0
free(0x0062db80) = <void>
__errno_location() = 0x7f1327e5d698
__ctype_get_mb_cur_max(0x7fff25227af0, 8192, 0x62e020, -1, 0) = 6
__ctype_get_mb_cur_max(0x7fff25227af0, 18, 0x7f1327e5d6f0, 0x7fff25227af0,
0x62e031) = 6
__fprintf_chk(0x7f1327821780, 1, 0x420cf7, 0x7fff25227af0, 0x62e031
<unfinished ...>
SYS_fstat(1, 0x7fff25227230) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7f1327e72000
SYS_write(1, "/etc/X11/xorg.conf\n", 19) = 19
[...]
You can change the type of traced events with the -e option. The following example prints library calls related to the fnmatch and strlen functions:
tux@mercury:~> ltrace -e fnmatch,strlen find /etc -name xorg.conf
[...]
fnmatch("xorg.conf", "xorg.conf", 0) = 0
strlen("Xresources") = 10
strlen("Xresources") = 10
strlen("Xresources") = 10
fnmatch("xorg.conf", "Xresources", 0) = 1
strlen("xorg.conf.install") = 17
[...]
You can make the output more readable by indenting each nested call by the specified number of spaces with the -n num_of_spaces option.
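A minimal sketch combining the indentation option with an event filter:
ltrace -n 2 -e fnmatch find /etc -name xorg.conf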
17.3.1 Installation
Valgrind is not shipped with the standard SUSE Linux Enterprise Server distribution. To install it on your system, you need to obtain the SUSE Software Development Kit, and either install it as an Add-On product and run
zypper install valgrind
or browse through the SUSE Software Development Kit directory tree, locate the Val-
grind package and install it with
rpm -i valgrind-version_architecture.rpm
• i386
• x86_64 (AMD-64)
• ppc
• System z
Valgrind consists of several tools, each of which provides specific functionality. The information in this section is general and valid regardless of the tool used. The most important configuration option is --tool. This option tells Valgrind which tool to run. If you omit this option, memcheck is selected by default. For example, to run find ~ -name .bashrc with Valgrind's memcheck tool, enter the following in the command line:
valgrind --tool=memcheck find ~ -name .bashrc
The following list gives a short overview of the available tools:
memcheck
Detects memory errors. It helps you tune your programs to behave correctly.
cachegrind
Profiles cache prediction. It helps you tune your programs to run faster.
callgrind
Works in a similar way to cachegrind but also gathers additional cache-profil-
ing information.
exp-drd
Detects thread errors. It helps you tune your multi-threaded programs to behave
correctly.
helgrind
Another thread error detector. Similar to exp-drd but uses different techniques
for problem analysis.
lackey
An example tool showing instrumentation basics.
1. The file .valgrindrc in the home directory of the user who runs Valgrind.
2. The environment variable $VALGRIND_OPTS.
3. The file .valgrindrc in the current directory where Valgrind is run from.
These resources are parsed in exactly this order, and options given later take precedence over options processed earlier. Options specific to a particular Valgrind tool must be prefixed with the tool name and a colon. For example, if you want cachegrind to always write profile data to /tmp/cachegrind_PID.log, add the following line to the .valgrindrc file in your home directory:
--cachegrind:cachegrind-out-file=/tmp/cachegrind_%p.log
For example, memcheck adds its code, which checks every memory access. As a con-
sequence, the program runs much slower than in the native execution environment.
Valgrind simulates every instruction of your program. Therefore, it not only checks the
code of your program, but also all related libraries (including the C library), libraries
used for graphical environment, and so on. If you try to detect errors with Valgrind, it
also detects errors in associated libraries (like C, X11, or Gtk libraries). Because you
Note that you should supply a real executable (machine code) as a Valgrind argument. If your application is run, for example, from a shell or Perl script, you will by mistake get error reports related to /bin/sh (or /usr/bin/perl). In such a case, you can use --trace-children=yes or, better, supply the real executable directly to avoid any processing confusion.
17.3.6 Messages
During its runtime, Valgrind reports messages with detailed errors and important
events. The following example explains the messages:
tux@mercury:~> valgrind --tool=memcheck find ~ -name .bashrc
[...]
==6558== Conditional jump or move depends on uninitialised value(s)
==6558== at 0x400AE79: _dl_relocate_object (in /lib64/ld-2.11.1.so)
==6558== by 0x4003868: dl_main (in /lib64/ld-2.11.1.so)
[...]
==6558== Conditional jump or move depends on uninitialised value(s)
==6558== at 0x400AE82: _dl_relocate_object (in /lib64/ld-2.11.1.so)
==6558== by 0x4003868: dl_main (in /lib64/ld-2.11.1.so)
[...]
==6558== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
==6558== malloc/free: in use at exit: 2,228 bytes in 8 blocks.
==6558== malloc/free: 235 allocs, 227 frees, 489,675 bytes allocated.
==6558== For counts of detected errors, rerun with: -v
==6558== searching for pointers to 8 not-freed blocks.
==6558== checked 122,584 bytes.
==6558==
==6558== LEAK SUMMARY:
==6558== definitely lost: 0 bytes in 0 blocks.
==6558== possibly lost: 0 bytes in 0 blocks.
==6558== still reachable: 2,228 bytes in 8 blocks.
==6558== suppressed: 0 bytes in 0 blocks.
==6558== Rerun with --leak-check=full to see details of leaked memory.
The ==6558== introduces Valgrind's messages and contains the process ID number
(PID). You can easily distinguish Valgrind's messages from the output of the program
itself, and decide which messages belong to a particular process.
Basically, you can make Valgrind send its messages to three different places:
1. By default, Valgrind sends its messages to the file descriptor 2, which is the standard error output.
2. The second and probably more useful way is to send Valgrind's messages to a file
with --log-file=filename. This option accepts several variables, for exam-
ple, %p gets replaced with the PID of the currently profiled process. This way you
can send messages to different files based on their PID. %q{env_var} is replaced
with the value of the related env_var environment variable.
The following example checks for possible memory errors during the Apache Web server restart, following child processes and writing detailed Valgrind messages to separate files distinguished by the process PID:
tux@mercury:~> valgrind -v --tool=memcheck --trace-children=yes \
--log-file=valgrind_pid_%p.log rcapache2 restart
This process created 52 log files on the testing system and took 75 seconds instead of the usual 7 seconds needed to run rcapache2 restart without Valgrind, which is approximately 10 times longer.
tux@mercury:~> ls -1 valgrind_pid_*log
valgrind_pid_11780.log
valgrind_pid_11782.log
valgrind_pid_11783.log
[...]
valgrind_pid_11860.log
valgrind_pid_11862.log
valgrind_pid_11863.log
3. You may also prefer to send Valgrind's messages over the network. You need to specify the aa.bb.cc.dd IP address and port_num port number of the network socket with the --log-socket=aa.bb.cc.dd:port_num option. If you omit the port number, 1500 will be used.
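On the receiving machine, a listener must accept these messages; Valgrind ships the valgrind-listener helper for this purpose. A minimal sketch (host address and port are examples):
valgrind-listener 1500
valgrind --tool=memcheck --log-socket=192.168.0.1:1500 find ~ -name .bashrc
The first command runs on the machine collecting the messages, the second on the machine being analyzed.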
The -v option adds a summary of all reports (sorted by their total count) to the end of Valgrind's output. Moreover, Valgrind stops collecting errors if it detects either 1000 different errors, or 10 000 000 errors in total. If you want to suppress this limit and see all error messages, use --error-limit=no.
Some errors usually cause others. Therefore, fix errors in the order they appear and re-check the program repeatedly.
18.1 Introduction
With kexec, you can replace the running kernel with another one without a hard reboot.
The tool is useful for several reasons:
If you need to reboot the system frequently, kexec can save you significant time.
Computer hardware is complex and serious problems may occur during the system
start-up. You cannot always replace unreliable hardware immediately. kexec boots
the kernel to a controlled environment with the hardware already initialized. The risk
of unsuccessful system start is then minimized.
kexec preserves the contents of the physical memory. After the production kernel
fails, the capture kernel (an additional kernel running in a reserved memory range)
saves the state of the failed kernel. The saved image can help you with the subse-
quent analysis.
When the system boots a kernel with kexec, it skips the boot loader stage. Normal
booting procedure can fail due to an error in the boot loader configuration. With
kexec, you do not depend on a working boot loader configuration.
To set up an environment that helps you obtain useful debug information in case of a
kernel crash, you need to install makedumpfile in addition.
The preferred method to use kdump in SUSE Linux Enterprise Server is through the
YaST kdump module. Install the package yast2-kdump by entering zypper in-
stall yast2-kdump in the command line as root.
If you want to boot another kernel and preserve the data of the production kernel when
the system crashes, you need to reserve a dedicated area of the system memory. The
production kernel never loads to this area because it must be always available. It is used
for the capture kernel so that the memory pages of the production kernel can be pre-
served. You reserve the area with crashkernel = size@offset as a command
line parameter of the production kernel. Note that this is not a parameter of the capture
kernel. The capture kernel does not use kexec at all.
To load the capture kernel, you need to include the kernel boot parameters. Usually,
the initial RAM file system is used for booting. You can specify it with --initrd =
filename. With --append = cmdline , you append options to the command line
of the kernel to boot. It is helpful to include the command line of the production ker-
nel if these options are necessary for the kernel to boot. You can simply copy the com-
mand line with --append = "$(cat /proc/cmdline)" or add more options
with --append = "$(cat /proc/cmdline) more_options" .
You can always unload a previously loaded kernel. To unload a kernel that was loaded with the -l option, use the kexec -u command. To unload a crash kernel loaded with the -p option, use the kexec -p -u command.
1 Make sure no users are currently logged in and no important services are running on
the system.
2 Log in as root.
4 Load the new kernel to the address space of the production kernel with the following
command:
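The kernel image and initrd file names below are only examples; adjust them to the kernel you want to boot:
kexec -l /boot/vmlinuz --append="$(cat /proc/cmdline)" --initrd=/boot/initrd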
5 Unmount all mounted file systems except the root file system with umount -a
Unmounting all file systems will most likely produce a device is busy
warning message. The root file system cannot be unmounted if the system
is running. Ignore the warning.
mount -o remount,ro /
7 Initiate the reboot of the kernel that you loaded in Step 4 (page 207) with kexec
-e
The new kernel previously loaded to the address space of the older kernel overwrites it and takes control immediately. It displays the usual start-up messages. When the new kernel boots, it skips all hardware and firmware checks. Make sure no warning messages appear. All the file systems should be clean if they had been unmounted.
Note that firmware as well as the boot loader are not used when the system reboots
with kexec. Any changes you make to the boot loader configuration will be ignored un-
til the computer performs a hard reboot.
kdump works similarly to kexec (see Chapter 18, kexec and kdump (page 205)). The
capture kernel is executed after the running production kernel crashes. The difference
is that kexec replaces the production kernel with the capture kernel. With kdump, you
still have access to the memory space of the crashed production kernel. You can save
the memory snapshot of the crashed kernel in the environment of the kdump kernel.
In environments with limited local storage, you need to set up kernel dumps
over the network. kdump supports configuring the specified network inter-
face and bringing it up via initrd. Both LAN and VLAN interfaces are sup-
ported. You have to specify the network interface and the mode (dhcp or sta-
tic) either with YaST, or using the KDUMP_NETCONFIG option in the /etc/
sysconfig/kdump file. The third way is to build initrd manually, for ex-
ample with
/sbin/mkinitrd -D vlan0
When configuring kdump, you can specify a location to which the dumped
images will be saved (default: /var/crash). This location must be mounted
when configuring kdump, otherwise the configuration will fail.
1 Append the following kernel command line option to your boot loader configura-
tion, and reboot the system:
crashkernel=size@offset
You can find the corresponding values for size and offset in the following ta-
ble:
ppc64: crashkernel=128M@4M, or crashkernel=256M@4M (larger systems)
4 Execute the init script once with rckdump start, or reboot the system.
After configuring kdump with the default values, check if it works as expected. Make
sure that no users are currently logged in and no important services are running on your
system. Then follow these steps:
2 Unmount all the disk file systems except the root file system with umount -a
4 Invoke “kernel panic” with the procfs interface to Magic SysRq keys:
echo c >/proc/sysrq-trigger
The capture kernel boots and the crashed kernel memory snapshot is saved to the file
system. The save path is given by the KDUMP_SAVEDIR option and it defaults to /
var/crash. If KDUMP_IMMEDIATE_REBOOT is set to yes, the system automatically reboots the production kernel. Log in and check that the dump has been created under /var/crash.
When kdump takes control and you are logged in to an X11 session, the screen will freeze without any notice. Some kdump activity can still be visible (for example, deformed messages of a booting kernel on the screen).
In the Start-Up window, select Enable Kdump. The default value for kdump memory is
sufficient on most systems.
Click Dump Filtering in the left pane, and check what pages to include in the dump.
You do not need to include the following memory content to be able to debug kernel
problems:
• Cache pages
• Free pages
Fill in the Email Notification window if you want kdump to inform you about its events via e-mail. After fine-tuning kdump in the Expert Settings window, confirm your changes with OK. kdump is now configured.
The original tool to analyze the dumps is GDB. You can even use it in the latest envi-
ronments, although it has several disadvantages and limitations:
• GDB does not understand other formats than ELF dumps (it cannot debug com-
pressed dumps).
That is why the crash utility was implemented. It analyzes crash dumps and debugs the
running system as well. It provides functionality specific to debugging the Linux kernel
and is much more suitable for advanced debugging.
If you want to debug the Linux kernel, you need to install its debugging information
package in addition. Check if the package is installed on your system with zypper
se kernel | grep debug.
If you subscribed your system for online updates, you can find “debuginfo”
packages in the *-Debuginfo-Updates online installation repository rel-
evant for SUSE Linux Enterprise Server 11 SP4. Use YaST to enable the
repository.
To open the captured dump in crash on the machine that produced the dump, use a
command like this:
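The kernel version and time stamp below are placeholders, matching the example output shown later in this section:
crash /boot/vmlinux-2.6.32.8-0.1-default.gz /var/crash/2010-04-23-11\:17/vmcore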
The ELF image is never directly used on x86. Therefore, the main kernel package con-
tains the vmlinux file in compressed form called vmlinux.gz.
To sum it up, an x86 SUSE kernel package contains two kernel files:
• vmlinuz, the compressed bootable image used by the boot loader.
• vmlinux.gz, the compressed ELF image that is required by crash and GDB.
18.7.1.2 IA64
The elilo boot loader, which boots the Linux kernel on the IA64 architecture, sup-
ports loading ELF images (even compressed ones) out of the box. The IA64 kernel
package contains only one file, called vmlinuz. It is a compressed ELF image. vmlinuz on IA64 is the same as vmlinux.gz on x86.
You can analyze the dump on another computer only if it runs a Linux system of the
same architecture. To check the compatibility, use the command uname -i on both
computers and compare the outputs.
If you are going to analyze the dump on another computer, you also need the appropri-
ate files from the kernel and kernel debug packages.
1 Put the kernel dump, the kernel image from /boot, and its associated debugging
info file from /usr/lib/debug/boot into a single empty directory.
3 In the directory with the dump, the kernel image, its debug info file, and the modules subdirectory, launch the crash utility: crash vmlinux-version vmcore.
Compressed kernel images (gzip, not the bzImage file) are supported by
SUSE packages of crash since SUSE® Linux Enterprise Server 11. For old-
er versions, you have to extract the vmlinux.gz (x86) or the vmlinuz
(IA64) to vmlinux.
Regardless of the computer on which you analyze the dump, the crash utility will pro-
duce an output similar to this:
tux@mercury:~> crash /boot/vmlinux-2.6.32.8-0.1-default.gz
/var/crash/2010-04-23-11\:17/vmcore
crash 4.0-7.6
Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005, 2006 Fujitsu Limited
Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
Copyright (C) 2005 NEC Corporation
Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
KERNEL: /boot/vmlinux-2.6.32.8-0.1-default.gz
DEBUGINFO: /usr/lib/debug/boot/vmlinux-2.6.32.8-0.1-default.debug
DUMPFILE: /var/crash/2009-04-23-11:17/vmcore
CPUS: 2
DATE: Thu Apr 23 13:17:01 2010
UPTIME: 00:10:41
LOAD AVERAGE: 0.01, 0.09, 0.09
TASKS: 42
NODENAME: eros
RELEASE: 2.6.32.8-0.1-default
VERSION: #1 SMP 2010-03-31 14:50:44 +0200
MACHINE: x86_64 (2999 Mhz)
MEMORY: 1 GB
PANIC: "SysRq : Trigger a crashdump"
PID: 9446
COMMAND: "bash"
TASK: ffff88003a57c3c0 [THREAD_INFO: ffff880037168000]
CPU: 1
STATE: TASK_RUNNING (SYSRQ)
crash>
The command output already prints useful data: there were 42 tasks running at the moment of the kernel crash. The cause of the crash was a SysRq trigger invoked by the task with PID 9446. It was a Bash process because the echo that was used is an internal command of the Bash shell.
The crash utility builds upon GDB and provides many useful additional commands. If
you enter bt without any parameters, the backtrace of the task running at the moment
of the crash is printed:
crash> bt
PID: 9446 TASK: ffff88003a57c3c0 CPU: 1 COMMAND: "bash"
#0 [ffff880037169db0] crash_kexec at ffffffff80268fd6
#1 [ffff880037169e80] __handle_sysrq at ffffffff803d50ed
#2 [ffff880037169ec0] write_sysrq_trigger at ffffffff802f6fc5
#3 [ffff880037169ed0] proc_reg_write at ffffffff802f068b
#4 [ffff880037169f10] vfs_write at ffffffff802b1aba
#5 [ffff880037169f40] sys_write at ffffffff802b1c1f
Now it is clear what happened: the internal echo command of the Bash shell sent a character to /proc/sysrq-trigger. After the corresponding handler recognized this character, it invoked the crash_kexec() function. This function called panic() and kdump saved a dump.
In addition to the basic GDB commands and the extended version of bt, the crash util-
ity defines many other commands related to the structure of the Linux kernel. These
commands understand the internal data structures of the Linux kernel and present their
contents in a human readable format. For example, you can list the tasks running at the
moment of the crash with ps. With sym, you can list all the kernel symbols with the
corresponding addresses, or inquire an individual symbol for its value. With files,
you can display all the open file descriptors of a process. With kmem, you can display
details about the kernel memory usage. With vm, you can inspect the virtual memo-
ry of a process, even at the level of individual page mappings. The list of useful com-
mands is very long and many of these accept a wide range of options.
The commands that we mentioned reflect the functionality of the common Linux com-
mands, such as ps and lsof. If you would like to find out the exact sequence of
events with the debugger, you need to know how to use GDB and to have strong debug-
ging skills. Both of these are out of the scope of this document. In addition, you need
to understand the Linux kernel. Several useful reference information sources are given
at the end of this document.
Kernel dumps are usually huge and contain many pages that are not necessary for analysis. With the KDUMP_DUMPLEVEL option, you can omit such pages. The option understands a numeric value between 0 and 31. If you specify 0, the dump size will be largest. If you specify 31, it will produce the smallest dump. For a complete table of possible values, see the manual page of kdump (man 7 kdump).
Sometimes it is very useful to make the kernel dump smaller, for example, if you want to transfer the dump over the network or need to save disk space in the dump directory. This can be done by setting KDUMP_DUMPFORMAT to compressed. The crash utility supports dynamic decompression of compressed dumps.
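A minimal sketch of the corresponding settings in /etc/sysconfig/kdump (the dump level 31 excludes the most page types, as described above):
KDUMP_DUMPLEVEL="31"
KDUMP_DUMPFORMAT="compressed"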
You always need to execute rckdump restart after you make manual changes to /etc/sysconfig/kdump. Otherwise, these changes will only take effect the next time you reboot the system.
• For the kexec utility usage, see the manual page of kexec (man 8 kexec).
For more details on crash dump analysis and debugging tools, use the following re-
sources:
• In addition to the info page of GDB (info gdb), you might want to read the print-
able guides at http://sourceware.org/gdb/documentation/ .
• The crash utility also features a comprehensive online help. Just write help com-
mand to display the online help for command.
• If you have the necessary Perl skills, you can use Alicia to make the debugging
easier. This Perl-based front end to the crash utility can be found at http://
alicia.sourceforge.net/ .
• If you prefer Python instead, you may want to install Pykdump. This package helps
you control GDB through Python scripts and can be downloaded from http://
sf.net/projects/pykdump .
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone
the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License pre-
serves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the
GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should
come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any tex-
tual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is
instruction or reference.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/
or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or
authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject.
(Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter
of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the
Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant.
The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document
is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general pub-
lic, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or
(for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats
suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to
thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of
text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML
using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent
image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word proces-
sors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF pro-
duced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License re-
quires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent ap-
pearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text
that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications",
"Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ"
according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers
are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Dis-
claimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices,
and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those
of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However,
you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's
license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the
front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front
cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying
with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in
other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover,
and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy
along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has ac-
cess to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter
option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will re-
main thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or re-
tailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a
chance to provide you with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modi-
fied Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the
Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together
with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this re-
quirement.
C.State on the Title page the name of the publisher of the Modified Version, as the publisher.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this Li-
cense, in the form shown in the Addendum below.
G.Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Mod-
ified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and pub-
lisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network loca-
tions given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for
a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K.For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and
tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part
of the section titles.
M.Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
N.Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the
Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the
Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for exam-
ple, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover
Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by)
any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity
you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the
old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorse-
ment of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions,
provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sec-
tions of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are
multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parenthe-
ses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the
list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise
combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License
into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggre-
gate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the
Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant
Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in
addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document,
and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and
disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will
prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1)
will typically require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, mod-
ify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received
copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or
any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has
been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose
any version ever published (not as a draft) by the Free Software Foundation.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software
license, such as the GNU General Public License, to permit their use in free software.