Load (Computing) - Wikipedia
Load (computing)
In UNIX computing, the system load is a measure of the amount of computational work that a computer system performs. The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers which represent the system load during the last one-, five-, and fifteen-minute periods.

[Figure: htop displaying a significant computing load (top right: Load average:)]
$ uptime
14:34:03 up 10:43, 4 users, load average: 0.06, 0.11, 0.09
The w and top commands show the same three load average numbers, as do a range of graphical
user interface utilities.
In operating systems based on the Linux kernel, this information can be easily accessed by reading
the /proc/loadavg file.
To explore this kind of information in depth, according to the Linux Filesystem Hierarchy Standard, architecture-dependent information is exposed in the file /proc/stat.[1][2][3]
An idle computer has a load number of 0 (the idle process is not counted). Each process using or
waiting for CPU (the ready queue or run queue) increments the load number by 1. Each process
that terminates decrements it by 1. Most UNIX systems count only processes in the running (on
CPU) or runnable (waiting for CPU) states. However, Linux also includes processes in
uninterruptible sleep states (usually waiting for disk activity), which can lead to markedly different
results if many processes remain blocked in I/O due to a busy or stalled I/O system.[4] This includes, for example, processes blocked by an NFS server failure or by overly slow media (e.g., USB 1.x storage devices). Such circumstances result in an elevated load average that does not reflect an actual increase in CPU use (but still gives an idea of how long users have to wait).
Systems calculate the load average as the exponentially damped/weighted moving average of the
load number. The three values of load average refer to the past one, five, and fifteen minutes of
system operation.[5]
Mathematically speaking, all three values always average all the system load since the system started up. They all decay exponentially, but at different speeds: they decay by a factor of e after 1, 5, and 15 minutes respectively. Hence, the 1-minute load average consists of 63% (more precisely: 1 - 1/e) of the load from the last minute and 37% (1/e) of the average load since startup, excluding the last minute. For the 5- and 15-minute load averages, the same 63%/37% ratio is computed over 5 and 15 minutes, respectively. Therefore, it is not technically accurate that the 1-minute load average includes only the last 60 seconds of activity, since it includes 37% of the activity from the past, but it is correct to say that it mostly reflects the last minute.
Interpretation
For single-CPU systems that are CPU bound, one can think of load average as a measure of system
utilization during the respective time period. For systems with multiple CPUs, one must divide the
load by the number of processors in order to get a comparable measure.
For example, one can interpret a load average of "1.73 0.60 7.98" on a single-CPU system as:
During the last minute, the system was overloaded by 73% on average (1.73 runnable
processes, so that on average 0.73 processes had to wait for a turn on the single CPU).
During the last 5 minutes, the CPU was idling 40% of the time, on average.
During the last 15 minutes, the system was overloaded by 698% on average (7.98 runnable
processes, so that on average 6.98 processes had to wait for a turn on the single CPU).
This means that this system (CPU, disk, memory, etc.) could have handled all the work scheduled
for the last minute if it were 1.73 times as fast.
In a system with four CPUs, a load average of 3.73 would indicate that there were, on average, 3.73
processes ready to run, and each one could be scheduled into a CPU.
On modern UNIX systems, the treatment of threading with respect to load averages varies. Some
systems treat threads as processes for the purposes of load average calculation: each thread waiting
to run will add 1 to the load. However, other systems, especially systems implementing so-called
M:N threading, use different strategies such as counting the process exactly once for the purpose of
load (regardless of the number of threads), or counting only threads currently exposed by the user-
thread scheduler to the kernel, which may depend on the level of concurrency set on the process.
Linux appears to count each thread separately as adding 1 to the load.[6]
Unix-style load calculation

The Linux kernel keeps time in units of timer ticks (HZ ticks per second), and many kernel activities use this number of ticks to time themselves. Specifically, the timer.c::calc_load() function, which calculates the load average, runs every LOAD_FREQ = (5*HZ+1) ticks, or about every five seconds:
unsigned long avenrun[3];

static inline void calc_load(unsigned long ticks)
{
    unsigned long active_tasks; /* fixed-point */
    static int count = LOAD_FREQ;

    count -= ticks;
    if (count < 0) {
        count += LOAD_FREQ;
        active_tasks = count_active_tasks();
        CALC_LOAD(avenrun[0], EXP_1, active_tasks);
        CALC_LOAD(avenrun[1], EXP_5, active_tasks);
        CALC_LOAD(avenrun[2], EXP_15, active_tasks);
    }
}
The avenrun array contains the 1-minute, 5-minute, and 15-minute averages. The CALC_LOAD macro and its associated values are defined in sched.h:
#define CALC_LOAD(load,exp,n) \
    load *= exp; \
    load += n*(FIXED_1-exp); \
    load >>= FSHIFT;
The "sampled" calculation of load averages is fairly common behavior; FreeBSD, too, refreshes the value only every five seconds. The interval is usually chosen not to be an exact number of seconds, so that the sampling does not systematically coincide with processes that are scheduled to run at fixed moments.[8]
A post on the Linux kernel mailing list considers its +1 tick insufficient to avoid moiré artifacts from such sampling, and suggests an interval of 4.61 seconds instead.[9] This change is common among Android system kernels, although the exact expression used assumes an HZ of 100.[10]
See also
CPU usage
References
1. "CPU load" (https://www.kernel.org/doc/html/latest/admin-guide/cpu-load.html). Retrieved
4 October 2023.
2. "/proc" (https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html). Linux Filesystem
Hierarchy. Retrieved 4 October 2023.
3. "Miscellaneous kernel statistics in /proc/stat" (https://www.kernel.org/doc/html/latest/filesystem
s/proc.html#miscellaneous-kernel-statistics-in-proc-stat). Retrieved 4 October 2023.
4. "Linux Tech Support: What exactly is a load average?" (http://linuxtechsupport.blogspot.com/20
08/10/what-exactly-is-load-average.html). 23 October 2008.
5. Walker, Ray (1 December 2006). "Examining Load Average" (http://www.linuxjournal.com/articl
e/9001). Linux Journal. Retrieved 13 March 2012.
6. See http://serverfault.com/a/524818/27813
7. Ferrari, Domenico; and Zhou, Songnian; "An Empirical Investigation of Load Indices For Load
Balancing Applications (http://www.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-353.pdf)",
Proceedings of Performance '87, the 12th International Symposium on Computer Performance
Modeling, Measurement, and Evaluation, North Holland Publishers, Amsterdam, the
Netherlands, 1988, pp. 515–528
8. "How is load average calculated on FreeBSD?" (https://unix.stackexchange.com/a/342778).
Unix & Linux Stack Exchange.
9. Ripke, Klaus (2011). "Linux-Kernel Archive: LOAD_FREQ (4*HZ+61) avoids loadavg Moire" (ht
tp://lkml.iu.edu/hypermail/linux/kernel/1111.1/02446.html). lkml.iu.edu. graph & patch (http://ripk
e.com/loadavg/moire)
10. "Patch kernel with the 4.61s load thing · Issue #2109 · AOSC-Dev/aosc-os-abbs" (https://githu
b.com/AOSC-Dev/aosc-os-abbs/issues/2109). GitHub.
11. Baker, Scott (28 September 2022). "dool - Python3 compatible clone of dstat" (https://github.co
m/scottchiefbaker/dool). GitHub. Retrieved 22 November 2022. "...Dag Wieers ceased
development of Dstat..."
12. "Iotop(8) - Linux manual page" (http://man7.org/linux/man-pages/man8/iotop.8.html).
External links
Brendan Gregg (8 August 2017). "Linux Load Averages: Solving the Mystery" (http://www.bren
dangregg.com/blog/2017-08-08/linux-load-averages.html). Retrieved 22 January 2018.
Neil J. Gunther. "UNIX Load Average – Part 1: How It Works" (http://www.teamquest.com/pdfs/
whitepaper/ldavg1.pdf) (PDF). TeamQuest. Retrieved 12 August 2009.
Andre Lewis (31 July 2009). "Understanding Linux CPU Load – when should you be worried?"
(http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages). Retrieved 21 July
2011. Explanation using an illustrated traffic analogy.
Ray Walker (1 December 2006). "Examining Load Average" (http://www.linuxjournal.com/articl
e/9001). Linux Journal. Retrieved 21 July 2011.
Karsten Becker. "Linux OSS load monitoring toolset" (http://www.loadavg.com). LoadAvg.