Boot Process
Credits
Introduction
This article about the Windows boot process is part of a continuing series on OS boot and user logon
delays on Windows computers joined to Active Directory domains. Related articles describing known
issues and tools to troubleshoot slow boots and user logons can be found in the following links:
Root Causes for Slow Boots and Slow Logons (aka SBSL)
Tools for Troubleshooting Slow Boots and Slow Logons (SBSL)
Troubleshooting Slow Operating System Boot Times and Slow User Logons (SBSL)
A question that Premier Field Engineers often get asked onsite is “Why do our users wait so long for
Windows to boot that they sometimes have time to get a cup of coffee?”
The reality is that there are myriad reasons, including hardware performance, network performance,
the number of workloads added by administrators, and inefficiencies in Microsoft and ISV
applications and OS components.
The goal of this article is to give readers an overview of the Windows boot process so that you can
better troubleshoot a slow OS start or slow user logon caused by delays in the boot process.
Related problems such as resuming from sleep, waking from hibernation, and the OS shutdown process are not
covered in this article.
Table of Contents
Credits
Introduction
Boot Process Overview
BIOS Initialization
OS Loader
OS Initialization
o Sub phase 1 - PreSMSS: Kernel Initialization
o Sub phase 2 - SMSSInit: Session Initialization
o Sub phase 3 - WinLogonInit: Winlogon Initialization
o Sub phase 4 - ExplorerInit: Explorer Initialization
The PostBoot phase
The ReadyBoot Prefetcher
Additional references
Boot Process Overview
Fast OS startup performance is critical for a good user experience. The time required to boot the
operating system on a given computer to the point where the user can start working is one of the most
important benchmarks for Windows client performance. The Windows boot process consists of several
phases which are explained in more detail by the picture and supporting text below.
The Windows Performance Toolkit (included in the Windows 7.1 SDK) allows you to investigate most
of the boot phases (except for BIOS Initialization and OS Loader).
BIOS Initialization
During the BIOS Initialization phase, the platform firmware identifies and initializes hardware devices,
and then runs a power-on self-test (POST). The POST process ends when the BIOS detects a valid system
disk, reads the master boot record (MBR), and starts Bootmgr.exe. Bootmgr.exe finds and starts
Winload.exe on the Windows boot partition, which begins the OSLoader phase [1].
The BIOS version, the BIOS configuration, and the firmware of the computer hardware components can
have an impact on overall boot performance. There is no way to trace this phase using the Windows
Performance Toolkit; you need to measure the time manually. To optimize or troubleshoot this
early phase in the overall computer startup process, make sure to update the BIOS and the firmware
of all hardware components to the latest versions. In addition, check the BIOS configuration (device boot
order, PXE boot enabled, Quick/Fast boot (POST check) enabled, AHCI settings, and so on).
Be careful changing the BIOS configuration or updating the firmware/UEFI/BIOS versions. Read the
hardware vendor manuals carefully because misconfigurations and failed updates can cause complete
system outages. Create a backup of your system and data beforehand.
OS Loader
During the OSLoader phase, the Windows loader binary (Winload.exe) loads essential system drivers
that are required to read minimal data from the disk and initializes the system to the point where the
Windows kernel can begin execution. When the kernel starts to run, the OS loader loads the system
registry hive and additional drivers that are marked as BOOT_START into memory. [1]
This phase is mainly impacted by boot start drivers. While a delay caused by a dual-boot menu is
easy to fix, for the drivers you should make sure that all boot start drivers are signed and up to date:
1. Create a boot trace using the Windows Performance Toolkit (a sketch is shown below). For more
information about how to create a trace, see the slow boot and logon analysis articles.
2. Run
5. If you find a driver that is not signed, look for driver updates.
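As a minimal sketch of these steps (assuming the Windows Performance Toolkit is installed and that C:\Traces is a placeholder output folder), a boot trace can be captured with xbootmgr, and driver signing can then be reviewed with the built-in driverquery and sigverif tools:

    REM Capture a boot trace; the machine reboots automatically during the capture
    xbootmgr -trace boot -traceFlags BASE+CSWITCH+DRIVERS+POWER -resultPath C:\Traces

    REM List installed drivers together with their digital signature status
    driverquery /si

    REM Optionally launch the File Signature Verification tool (GUI)
    sigverif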
OS Initialization
During the OS Initialization phase, most of the operating system work occurs. This phase involves kernel
initialization, Plug and Play activity, service start, logon, and Explorer (desktop) initialization. The OS
Initialization can be divided into four subphases. Each subphase has unique characteristics and
performance vulnerabilities. [1]
After you have taken a boot trace, the different subphases are shown in XPERFVIEW.EXE.
The PreSMSS subphase begins when the kernel is invoked. During this subphase, the kernel initializes
data structures and components. It also starts the PnP manager, which initializes the BOOT_START
drivers that were loaded during the OSLoader phase. [1]
The SMSSInit subphase begins when the kernel passes control to the Session Manager process
(Smss.exe). During this subphase, the system initializes the registry, loads and initializes the
remaining boot start and system start drivers, and creates the user session. SMSSInit ends when
control is passed to Winlogon.exe. [1]
The WinLogonInit subphase begins when SMSSInit completes and starts Winlogon.exe. During
WinLogonInit, the user logon screen appears, the service control manager starts services, and Group
Policy scripts run. WinLogonInit ends when the Explorer process starts. [1]
The ExplorerInit subphase begins when Explorer.exe starts. During ExplorerInit, the system creates the
desktop window manager (DWM) process, which initializes the desktop and displays it for the first time.
[1]
A detailed analysis of each phase would go far beyond the scope of this article. The analysis always
starts with a boot analysis trace created with the Windows Performance Toolkit, which is described in
the Windows On/Off Transition Performance Analysis Whitepaper . Common performance
vulnerabilities are described in the whitepaper as well.
Still, fully analyzing a problem might require more tools, such as parallel network traces and additional
debug logs (for example, Gpsvc logging).
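As a hedged example of such extra logging (run from an elevated command prompt; the registry value and log location below follow the publicly documented Group Policy service debug logging settings), verbose Gpsvc logging can be enabled as follows:

    REM Create the folder that the Group Policy service writes its log to
    mkdir %windir%\debug\usermode

    REM Enable verbose Group Policy service logging
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics" /v GPSvcDebugLevel /t REG_DWORD /d 0x30002 /f

    REM After the next boot/logon, the log is written to %windir%\debug\usermode\gpsvc.log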
For now, begin your analysis with the phases that consume the most time, and compare traces with one
taken from a fresh, clean OS installation on the same hardware.
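One way to get a per-phase breakdown from a boot trace is the boot action of xperf (a sketch; boot_trace.etl and summary_boot.xml are placeholder file names):

    REM Generate an XML summary of the boot phases from an existing trace
    xperf -i boot_trace.etl -o summary_boot.xml -a boot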
If the WinLogonInit phase takes a long time, you can use the Winlogon graph for further analysis.
In this example, Group Policy processing took around 160 seconds to complete before the Windows
desktop could be loaded. While the Winlogon graph does not explain why it took 160 seconds to
complete GPO processing (which could be related to network issues, policy settings, GPO preferences,
scripts, and so on), you can see where to investigate further.
In another example, while analyzing the ReadyingProcess/ReadyingThreadId graphs, we found the profile
service waiting about 25 seconds on the network.
The PostBoot phase
The PostBoot phase includes all background activity that occurs after the desktop is ready. The user can
interact with the desktop, but the system might still be starting services, tray icons, and application code
in the background, potentially having an impact on how the user perceives system responsiveness. [1]
The ReadyBoot Prefetcher
During the Windows boot process a lot of data is read from disk, and I/O pressure is one of the
determining factors for boot performance. The Windows prefetcher (or ReadyBoot) helps to read data
into memory before Windows needs it. In addition, each reboot allows the prefetcher to better
predict what data is needed.
While ReadyBoot is usually turned on for classic hard disks, it is off for fast SSDs, or if the WinSAT disk
score is > 6.0.
One way to analyze the prefetcher activities is to run xperf.exe from the Windows Performance Toolkit.
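For example (a sketch; the file names are placeholders, and the bootprefetch action is assumed to be available in your version of the toolkit), a ReadyBoot/prefetch summary can be produced from a boot trace like this:

    REM Summarize ReadyBoot/prefetch activity recorded in a boot trace
    xperf -i boot_trace.etl -o prefetch.xml -a bootprefetch -summary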
The above should give you some insight into where to start looking for issues during the Windows boot
phase, as it helps you identify the correct section to start troubleshooting.
A recommendation is to check the hardware platform thoroughly, by updating the BIOS and checking
hard drive performance with benchmarking tools, before searching for the problem at the OS layer.
33.2. A Detailed Look at the Boot Process
The beginning of the boot process varies depending on the hardware platform being used. However,
once the kernel is found and loaded by the boot loader, the default boot process is identical across all
architectures. This chapter focuses primarily on the x86 architecture.
Other platforms use different programs to perform low-level tasks roughly equivalent to those of the
BIOS on an x86 system. For instance, Itanium-based computers use the Extensible Firmware
Interface (EFI) Shell.
Once loaded, the BIOS tests the system, looks for and checks peripherals, and then locates a valid device
with which to boot the system. Usually, it checks any diskette drives and CD-ROM drives present for
bootable media, then, failing that, looks to the system's hard drives. In most cases, the order of the
drives searched while booting is controlled with a setting in the BIOS, and it looks on the master IDE
device on the primary IDE bus. The BIOS then loads into memory whatever program is residing in the
first sector of this device, called the Master Boot Record or MBR. The MBR is only 512 bytes in size and
contains machine code instructions for booting the machine, called a boot loader, along with the
partition table. Once the BIOS finds and loads the boot loader program into memory, it yields control of
the boot process to it.
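As an illustrative, read-only way to look at this 512-byte sector on a Linux system (run as root; /dev/sda is assumed to be the first hard drive), the MBR can be copied to a file and inspected:

    # Copy the first 512 bytes (the MBR) of the first disk to a file
    dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1

    # Identify what the copy contains (typically reported as an x86 boot sector)
    file /tmp/mbr.bin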
A boot loader for the x86 platform is broken into at least two stages. The first stage is a small machine
code binary on the MBR. Its sole job is to locate the second stage boot loader and load the first part of it
into memory.
GRUB has the advantage of being able to read ext2 and ext3 [13] partitions and load its configuration file
— /boot/grub/grub.conf — at boot time. Refer to Section 9.7, “GRUB Menu Configuration File” for
information on how to edit this file.
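For illustration, a minimal grub.conf entry might look like the following (the kernel version, device, and volume names are placeholders; the actual file on a given system differs):

    default=0
    timeout=5
    title Red Hat Enterprise Linux (2.6.18-8.el5)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00
            initrd /initrd-2.6.18-8.el5.img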
Note
If upgrading the kernel using the Red Hat Update Agent, the boot loader configuration file is updated
automatically. More information on Red Hat Network can be found online at the following
URL: https://rhn.redhat.com/.
Once the second stage boot loader is in memory, it presents the user with a graphical screen showing
the different operating systems or kernels it has been configured to boot. On this screen a user can use
the arrow keys to choose which operating system or kernel they wish to boot and press Enter. If no key
is pressed, the boot loader loads the default selection after a configurable period of time has passed.
Once the second stage boot loader has determined which kernel to boot, it locates the corresponding
kernel binary in the /boot/ directory. The kernel binary is named using the format
/boot/vmlinuz-<kernel-version> (where <kernel-version> corresponds to the kernel version
specified in the boot loader's settings).
For instructions on using the boot loader to supply command line arguments to the kernel, refer
to Chapter 9, The GRUB Boot Loader. For information on changing the runlevel at the boot loader
prompt, refer to Section 9.8, “Changing Runlevels at Boot Time”.
The boot loader then places one or more appropriate initramfs images into memory. Next, the kernel
decompresses these images from memory to /sysroot/, a RAM-based virtual file system, via cpio.
The initramfs is used by the kernel to load drivers and modules necessary to boot the system. This is
particularly important if SCSI hard drives are present or if the system uses the ext3 file system.
Once the kernel and the initramfs image(s) are loaded into memory, the boot loader hands control of
the boot process to the kernel.
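As an illustrative check (assuming a RHEL 5-style initrd, which is a gzip-compressed cpio archive; the image name depends on the installed kernel), the contents of the image can be listed without booting it:

    # List the first entries of the initrd cpio archive for the running kernel
    zcat /boot/initrd-$(uname -r).img | cpio -it | head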
For a more detailed overview of the GRUB boot loader, refer to Chapter 9, The GRUB Boot Loader.
Other architectures use different boot loaders: for example, the Itanium architecture uses the ELILO boot
loader, the IBM eServer pSeries architecture uses yaboot, and the IBM System z systems use the z/IPL
boot loader.
At this point, the kernel is loaded into memory and operational. However, since there are no user
applications that allow meaningful input to the system, not much can be done with the system.
To set up the user environment, the kernel executes the /sbin/init program.
When the init command starts, it becomes the parent or grandparent of all of the processes that start
up automatically on the system. First, it runs the /etc/rc.d/rc.sysinit script, which sets the environment
path, starts swap, checks the file systems, and executes all other steps required for system initialization.
For example, most systems use a clock, so rc.sysinit reads the /etc/sysconfig/clock configuration file to
initialize the hardware clock. Another example is if there are special serial port processes which must be
initialized, rc.sysinit executes the /etc/rc.serial file.
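For illustration, /etc/sysconfig/clock typically contains entries such as the following (the values are placeholders; the exact variables present depend on the release):

    ZONE="America/New_York"
    UTC=true
    ARC=false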
The init command then runs the /etc/inittab script, which describes how the system should be set up in
each SysV init runlevel. Runlevels are a state, or mode, defined by the services listed in the
SysV /etc/rc.d/rc<x>.d/ directory, where <x> is the number of the runlevel. For more information on
SysV init runlevels, refer to Section 33.4, “SysV Init Runlevels”.
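A few representative lines from /etc/inittab (abbreviated; the actual file contains more entries) show the default runlevel, the per-runlevel rc invocation, and the virtual console and display manager entries discussed later in this section:

    id:5:initdefault:
    si::sysinit:/etc/rc.d/rc.sysinit
    l5:5:wait:/etc/rc.d/rc 5
    1:2345:respawn:/sbin/mingetty tty1
    x:5:respawn:/etc/X11/prefdm -nodaemon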
Next, the init command sets the source function library, /etc/rc.d/init.d/functions, for the system, which
configures how to start, kill, and determine the PID of a program.
The init program starts all of the background processes by looking in the appropriate rc directory for the
runlevel specified as the default in /etc/inittab. The rc directories are numbered to correspond to the
runlevel they represent. For instance, /etc/rc.d/rc5.d/ is the directory for runlevel 5.
When booting to runlevel 5, the init program looks in the /etc/rc.d/rc5.d/ directory to determine which
processes to start and stop.
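A representative, abbreviated listing of that directory (the entries vary with the installed packages and enabled services) might look like this:

    ls -l /etc/rc.d/rc5.d/
    lrwxrwxrwx 1 root root 19 ... K05saslauthd -> ../init.d/saslauthd
    lrwxrwxrwx 1 root root 17 ... S10network -> ../init.d/network
    lrwxrwxrwx 1 root root 14 ... S55sshd -> ../init.d/sshd
    lrwxrwxrwx 1 root root 11 ... S99local -> ../rc.local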
As illustrated in this listing, none of the scripts that actually start and stop the services are located in
the /etc/rc.d/rc5.d/ directory. Rather, all of the files in /etc/rc.d/rc5.d/ are symbolic links pointing to
scripts located in the /etc/rc.d/init.d/ directory. Symbolic links are used in each of the rc directories so
that the runlevels can be reconfigured by creating, modifying, and deleting the symbolic links without
affecting the actual scripts they reference.
The name of each symbolic link begins with either a K or an S. The K links are processes that are killed on
that runlevel, while those beginning with an S are started.
The init command first stops all of the K symbolic links in the directory by issuing
the /etc/rc.d/init.d/<command> stop command, where <command> is the process to be killed. It then
starts all of the S symbolic links by issuing /etc/rc.d/init.d/<command> start.
Note
After the system is finished booting, it is possible to log in as root and execute these same scripts to start
and stop services. For instance, the command /etc/rc.d/init.d/httpd stop stops the Apache HTTP Server.
Each of the symbolic links are numbered to dictate start order. The order in which the services are
started or stopped can be altered by changing this number. The lower the number, the earlier it is
started. Symbolic links with the same number are started alphabetically.
Note
One of the last things the init program executes is the /etc/rc.d/rc.local file. This file is useful for system
customization. Refer to Section 33.3, “Running Additional Programs at Boot Time” for more information
about using the rc.local file.
After the init command has progressed through the appropriate rc directory for the runlevel,
the /etc/inittab script forks an /sbin/mingetty process for each virtual console (login prompt) allocated
to the runlevel. Runlevels 2 through 5 have all six virtual consoles, while runlevel 1 (single user mode)
has one, and runlevels 0 and 6 have none. The /sbin/mingetty process opens communication pathways
to tty devices[14], sets their modes, prints the login prompt, accepts the user's username and password,
and initiates the login process.
In runlevel 5, the /etc/inittab runs a script called /etc/X11/prefdm. The prefdm script executes the
preferred X display manager[15] — gdm, kdm, or xdm, depending on the contents of
the /etc/sysconfig/desktop file.
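For illustration, /etc/sysconfig/desktop might contain lines such as the following (the values shown are just one possibility):

    DESKTOP="GNOME"
    DISPLAYMANAGER="GNOME"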
Once finished, the system operates on runlevel 5 and displays a login screen.
[13] GRUB reads ext3 file systems as ext2, disregarding the journal file. See the chapter titled The ext3 File
System in the Red Hat Enterprise Linux Deployment Guide for more information on the ext3 file system.
[14] Refer to the Red Hat Enterprise Linux Deployment Guide for more information about tty devices.
[15] Refer to the Red Hat Enterprise Linux Deployment Guide for more information about display managers.