Advanced Linux System Administration: Topic

The document discusses the booting and shutting down processes in advanced Linux system administration, detailing the four stages of booting: hardware, bootloader, kernel, and init. It emphasizes the importance of understanding and customizing these procedures to address potential administrative issues. Additionally, it provides insights into firmware types like BIOS and EFI, and their roles in the boot process.


Advanced Linux System Administration

Topic 3. Booting & shutting down

Pablo Abad Fidalgo
José Ángel Herrero Velasco
Departamento de Ingeniería Informática y Electrónica

This topic is published under license:
Creative Commons BY-NC-SA 4.0
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: SysV / Systemd.
• Shutting Down.
Introduction
• Booting/shutting down are complex procedures, but the system provides mechanisms to deal with them.
• …However, they remain one of the potential trouble spots of administration.
• Goals of this chapter:
– To understand the basic operation of both procedures.
– To be able to customize them.
– To be able to solve generic problems related to the boot process.
• Bootstrapping. Where does the name come from?:
– An allusion to Baron Münchhausen (pulling oneself up by one's own bootstraps).
– It denotes a process in which a simple system starts up a more complex one (the code that starts the system is itself a small portion of the system).
Introduction
• The main objective of the booting process is to load the kernel into memory and start executing it:
– Where is the kernel before booting?
– What is the memory content before booting?
• It is a sequential process divided into 4 stages:

Stage 1: Hardware — BIOS POST → boot sector.
Stage 2: Bootloader — boot loader (LILO stage 1, or GRUB stages 1 / 1.5 / 2).
Stage 3: Kernel — kernel loading.
Stage 4: INIT — init process → init level → login prompt / XDM process.
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: SysV / Systemd.
• Shutting Down.
Stage 1: Hardware
• First steps:
– After pushing the power-on button, the reset vector tells the CPU the address of the first instruction to be executed (FFFFFFF0h for x86).
– That address corresponds to an EPROM/Flash chip (on the motherboard) that stores the firmware code (memory-mapped I/O).
• Firmware:
– Stores the hardware configuration of the system.
– Some configuration parameters are kept alive by their own power supply (battery).
• Want a detailed description? (hardcore…):
– http://www.drdobbs.com/parallel/booting-an-intel-architecture-system-par/232300699.
Stage 1: Hardware
• Tasks to be performed:
– Power-on self-test (POST): examination, verification and start-up of hardware devices (CPU, RAM, controllers, etc.).
– Configuration of OS-independent aspects of those devices (virtualization extensions, security, etc.).
– Starting up the operating system: in the case of BIOS, look for the OS loader in the first block (512 bytes), the Master Boot Record (MBR), of each boot device in the configured order. When found, its contents are loaded into memory.
• Two main kinds of firmware:
– BIOS: Basic Input Output System.
– EFI: Extensible Firmware Interface.
Stage 1: Hardware
• BIOS (Basic Input/Output System):
– 1975: first appeared in the CP/M operating system.
– Runs in real address mode (16-bit): 1 MB of addressable memory.
– 1990: the "BIOS setup utility" appears: allows the user to define some configuration options (boot priority).
– A ROM customized for particular hardware. Provides a small library of I/O functions to work with peripherals (keyboard, screen). Very slow (switching between protected and real mode).
– Emerging applications require more and more BIOS support: security, temperature/power metrics (ACPI), virtualization extensions, turbo-boost… (hard to fit all that in 1 MB).
– 2002: Intel develops an alternative firmware: EFI (/UEFI).
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: SysV / Systemd.
• Shutting Down.
Previous: MBR Disks & Partitions

[Diagram: MBR disk layout. The Master Boot Record (bootloader code, partition table, boot signature) is followed by the primary partitions, each beginning with a Volume Boot Record (bootloader code, data, boot signature). The extended partition holds a linked list of Extended Boot Records, each pointing to a logical partition with its own Volume Boot Record.]
Previous: MBR Disks & Partitions
• Master Boot Record (MBR):
– First block of the disk, 512 bytes.
– Partition table: information about four primary partitions: begin and end blocks, size, etc. (64 bytes).
– Boot signature: a numerical value (0x55AA, 2 bytes) indicating the presence of valid bootloader code in the code field.
• Volume Boot Record (VBR):
– First block of each primary partition.
– May contain bootloader code (indicated by its boot signature).
• Extended partition:
– A partition that can be subdivided into multiple logical partitions.
– Extended Boot Record (EBR): first block of each logical partition. It contains only a partition table with two entries. The EBRs form a linked list covering all logical partitions.
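The layout above can be poked at directly with dd. A minimal sketch against a demo file (the file name demo-mbr.img is made up for the illustration; on a real system you would read block 0 of the disk, which requires root):

```shell
# Build a minimal 512-byte "MBR" image and read its boot signature back.
# On real hardware: dd if=/dev/sda of=mbr.bin bs=512 count=1  (root required)
img=demo-mbr.img
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# Bytes 510-511 hold the boot signature: 0x55 0xAA (octal 125 252).
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null
sig=$(dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
echo "boot signature: $sig"
```

A valid bootable sector prints 55aa; anything else means the BIOS will skip this device and try the next one in the configured order.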
Previous: MBR Disks & Partitions
• Linux naming convention:
– Remember: I/O devices are treated as files. Under the /dev directory we find all system disks.
– Generic PC: 2 IDE controllers, each of which can have two devices (master/slave):
• /dev/hda: first device (master) of the first IDE controller.
• /dev/hdb: second device (slave) of the first IDE controller.
• /dev/hdc: first device of the second controller.
• /dev/hdd: second device of the second controller.
– On a disk, each primary partition is identified with a number from 1 to 4:
• /dev/hda1: first primary partition of the hda disk.
– Logical partitions start from 5:
• /dev/hda5: first logical partition of the hda disk.
– SCSI devices follow the same naming convention, changing "hd" to "sd":
• /dev/sda1.
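As a quick check of the convention, here is a throwaway helper (hypothetical, not a system tool) that spells out what a name like sda5 means:

```shell
# Decode a Linux disk/partition name according to the convention above.
decode_part() {
  dev=$1
  disk=$(echo "$dev" | sed 's/[0-9]*$//')   # strip trailing digits -> disk name
  num=$(echo "$dev" | sed 's/^[a-z]*//')    # keep trailing digits -> partition number
  case $num in
    '')    echo "/dev/$dev: whole disk" ;;
    [1-4]) echo "/dev/$dev: primary partition $num of $disk" ;;
    *)     echo "/dev/$dev: logical partition $num of $disk (logicals start at 5)" ;;
  esac
}
decode_part hda1   # primary partition
decode_part sda5   # logical partition
decode_part sdb    # whole disk
```

On a live system, `lsblk` or `fdisk -l` (root) shows the real device tree behind these names.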
Stage 2: Bootloader
• Hardware requires an OS in charge of providing all the functionality of the computer.
• Target: to load the OS kernel into memory and start running it. The loader can live in different locations: USB, CD, disk…
• Stage 2.1:
– Located in the MBR: the first 512 bytes (block 0) of the active device.
– Loaded into memory by the BIOS (Stage 1).
– When executed, triggers the load and execution of Stage 2.2.
• Stage 2.2:
– Located in the active partition, where the kernel is placed.
– Loads the kernel into memory and transfers control to it (data initialization, drivers, CPU checks, etc.).
– After this, kernel execution begins (Stage 3), which eventually launches the init process (Stage 4).
Stage 2: LILO
• LInux LOader:
– Two-stage bootloader.
– Knows nothing about the operating system or the file system; it works only with physical locations.
– Obsolete (but easy to follow for academic purposes).
• Steps:
– The master boot code loads LILO from the first active partition and runs it:
• LILO can be in the MBR or in the boot block of a primary partition. In the second case, the MBR contains the code necessary to load LILO from that other block.
– LILO asks the user, through a prompt, what kind of boot is wanted (partition, kernel, mode).
– LILO loads the kernel and a ramdisk.
– The kernel starts running once it is loaded into memory.
Stage 2: LILO
• Configuration: /etc/lilo.conf:

boot=/dev/hda            # or by ID. Device where LILO is installed (IDE/SATA/floppy…)
map=/boot/map            # file with the disk blocks of the files required to boot the system
install=/boot/boot.b     # loader assembly code
prompt
timeout=50
message=/boot/message
linear
default=linux

image=/boot/vmlinuz-2.6.2-2        # kernel to boot and its options
    label=linux
    read-only
    root=/dev/hda2                 # or by UUID. Linux system partition (/);
                                   # not necessarily a disk (USB loader)
    initrd=/boot/initrd-2.4.2-2.img  # filesystem loaded into memory as a ramdisk:
                                     # software support, not provided by the kernel,
                                     # to initialize the system

other=/dev/hda1          # link to another loader (boot a different OS)
    label=dos
    optional
Stage 2: LILO
• Configuration: /etc/lilo.conf:
– Any change in the files employed in the boot process (boot.b, kernel, ramdisk) requires a loader update:
• The map file must reflect those changes, otherwise the booting process is corrupted.
• Check whether the map file is up to date: # lilo -q.
• Update the map file: # lilo [-v].
• A booting error cannot be fixed from the shell…
• Possible error sources:
– Installation of a new OS that overwrites the MBR (e.g. Windows).
– A failed kernel compilation.
– Modification of boot files without updating the map.
• Rescue systems:
– mkbootdisk.
– An installation live CD (rescue option) or a specialized one (SystemRescueCD).
Stage 2: GRUB/GRUB2
• GRand Unified Bootloader: a Linux loader:
– Bootloader with three stages.
– Can work with file systems (ext2, ext3, ext4…), accessing partitions directly (no map files).
– A UEFI version is available (grub.efi).
– Much more flexible; it has its own mini-shell (grub>):
• Booting parameters can be set at that prompt. It is possible to indicate the kernel and the ramdisk before startup (booting an OS which was not in the boot menu).
• "c" from the startup window opens the console with the values of the selected entry.
• "e" edits each entry in an ncurses interface.
• "kernel" and "initrd" load a kernel or a ramdisk.
• "boot" boots your OS.
• There is access to the file system, and commands auto-complete (TAB).
– Currently GRUB2 is the most commonly used bootloader.
Stage 2: GRUB/GRUB2
• GRand Unified Bootloader:
– Configuration:
• More complex scripts than LILO. Advantage: modifications of the files required to boot (kernel or initrd) are processed "automatically".
• Everything lives in /etc/default/grub and /etc/grub.d/.
• The final configuration (/boot/grub) is generated with the command "update-grub".
– Stages:
• Stage 1. boot.img, stored in the MBR (or a VBR), is loaded into memory and executed (it loads the first sector of core.img).
• Stage 1.5. core.img, stored in the blocks between the MBR and the first partition (the MBR gap), is loaded into memory and executed. It loads its configuration file and the drivers for the file system.
• Stage 2. Loads the kernel and ramdisk, accessing the file system directly (/boot/grub).
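The usual GRUB2 editing cycle can be sketched as follows. It runs against a local copy of the defaults file so no root is needed; on a real Debian-style system the file is /etc/default/grub and the final step is update-grub:

```shell
# Work on a local copy of the GRUB2 defaults file (sample content).
cat > grub.default <<'EOF'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX=""
EOF
# Raise the boot-menu timeout from 5 s to 10 s.
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' grub.default
grep '^GRUB_TIMEOUT' grub.default
# On the real system the change only takes effect after regenerating
# /boot/grub/grub.cfg:   sudo update-grub
```

This is the key difference from LILO: you never touch /boot/grub/grub.cfg by hand; update-grub rebuilds it from /etc/default/grub and the /etc/grub.d/ scripts.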
Stage 2: Bootloader
• With physical access to a system, stages 1 & 2 can become a weakness:
– By modifying boot options we could obtain superuser privileges.
• Protect the BIOS and the loader with a password.
• Example: protecting GRUB2 with a password:
– Edit /etc/grub.d/00_header and add at the end of the file (remember to run update-grub afterwards):

cat << EOF
set superusers="alumno"
password alumno <<<sequence from grub-mkpasswd-pbkdf2>>> or <<plain-text password>>
export superusers
EOF
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: SysV / Systemd.
• Shutting Down.
UEFI
• From:
BIOS POST → boot sector → boot loader (LILO stage 1, or GRUB stages 1 / 1.5 / 2) → kernel loading → init process → init level → login prompt / XDM process.
• To:
UEFI (+ grub) → kernel loading → init process → init level → login prompt / XDM process.
UEFI
• EFI/UEFI (Unified Extensible Firmware Interface):
– 2002: Intel's Itanium platform ships with EFI firmware.
– 2005: UEFI. A consortium of companies, the Unified EFI Forum, takes control of the firmware.
– Works in 32/64-bit mode.
– Much more flexible than BIOS:
• Supports big disks (MBR: 32-bit block addresses; GPT: 64-bit block addresses):
– MBR: with 512-byte blocks, a 2 TB maximum disk size.
• Supports more booting devices (network).
• Can eliminate the need for a bootloader (no Stage 2).
• Improved security (network authentication, signed start-up).
• Extends the bootloader's operation (loading the OS) with a UEFI-capable shell (interaction).
– Requires support from the OS (Linux, OS X, Windows 8).
– Can emulate a BIOS.
– VirtualBox supports UEFI.
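The 2 TB figure follows directly from the address width: 2^32 sectors of 512 bytes each.

```shell
# MBR partition tables store 32-bit sector numbers, so with 512-byte
# sectors the largest addressable disk is 2^32 * 512 bytes = 2 TiB.
mbr_max=$(( (1 << 32) * 512 ))
echo "MBR maximum: $mbr_max bytes"
echo "           = $(( mbr_max / 1024 / 1024 / 1024 / 1024 )) TiB"
```

GPT's 64-bit block addresses push the same calculation far beyond any disk size in practical use.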
Previous: GPT Disks & Partitions

[Diagram: GPT disk layout. Block 0 holds a legacy MBR (partition table + signature). It is followed by the primary GPT header (current block, backup block, entries start block, number/size of entries, CRC32) and the partition entry array (Entry 1 … Entry 128; each entry records partition type, partition GUID, first block, last block, attributes, and partition name). The partitions themselves follow, and a secondary GPT copy occupies the last blocks of the disk.]
Previous: GPT Disks & Partitions
• Protective/legacy MBR:
– Backward compatibility; the first block is reserved.
– Prevents MBR-based disk utilities from misrecognizing or overwriting GPT disks.
– Contains a single partition of a special type (identifying a GPT disk). OSes & tools which cannot read GPT still recognize the disk and typically refuse to modify it.
• Primary GPT header:
– Defines the usable blocks on the disk.
– Also defines the partition table (number & size of the partition entries). Minimum table: 128 entries, each 128 bytes long.
– Also contains the disk UUID, a CRC32 checksum, its own size and location (always LBA 1) and the size and location of the secondary GPT header & table (always the last disk sectors).
Previous: GPT Disks & Partitions
• Partition entries:
– 128 bytes for each entry.
– Each entry includes: type, unique ID, first and last blocks, attributes (e.g. read-only) & partition name.
• Secondary GPT header:
– A copy of the primary GPT header, placed in the last disk blocks.
– If the checksum of the primary header fails, the secondary one is used.
UEFI
• Instead of a 512-byte MBR and some boot code, UEFI has its own filesystem, with files and drivers (FAT32, 200–500 MB).
• UEFI marks one GPT partition with the boot flag:
– But this is the EFI partition, never any of the OS partitions.
• Each installed OS has its own directory in the EFI partition:
– All files necessary for loading that OS live under its directory.
– In Linux, after boot-up the EFI partition is sometimes mounted under the boot partition.
• Taking a look at the UEFI boot process, you realize it resembles a mini-OS.
UEFI
• Boot Manager:
– A firmware policy engine that can be configured by modifying architecturally defined global NVRAM variables.
– In charge of loading UEFI drivers and UEFI applications (including UEFI OS boot loaders). The boot order is defined by the global NVRAM variables.

[Figure: UEFI boot flow. Standard firmware platform initialization is followed by the Boot Manager loading EFI drivers and EFI applications (EFI binaries) iteratively from an ordered list; once loaded, they have access to all EFI-defined runtime and boot services (EFI API). The EFI OS loader then boots the OS from the ordered list of loaders; on success, boot services terminate and operation is handed off to the OS loader; on failure, the next entry is retried.]
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: SysV / Systemd.
• Shutting Down.
Stage 3: Loading the Kernel
• The bootloader has loaded the kernel & ramdisk files into memory:
– vmlinuz-2.6.26-2-686.
– initrd.img-2.6.26-2-686.
• Once Stage 2 is complete, kernel execution starts. The kernel:
– Uncompresses itself.
– Detects the memory map, the CPU and the features it supports.
– Starts the display (console) to show information on the screen.
– Scans the PCI bus, creating a table of the peripherals detected.
– Initializes the virtual memory management subsystem, including the swapper.
– Initializes the drivers for the peripherals detected (monolithic or modular).
– Mounts the root file system ("/").
– Calls the init process (Stage 4): PID 1, parent of all other processes.
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: SysV.
• Shutting Down.
Stage 4: INIT (SysV)
• The init process performs the following tasks:
– Step 1. Configuration: reads the initial configuration of the system from the file /etc/inittab: operation mode, runlevels, consoles…
– Step 2. Initialization: runs the command /etc/init.d/rcS (Debian), which performs a basic initialization of the system.
– Step 3. Services: according to the runlevel configured, runs the scripts/services pre-established for that runlevel.
• Runlevels (operation modes):
– Standard: 7 levels. Each distribution has its own configuration (here Debian).
– Level S. Only executed at boot time (replaces /etc/rc.boot).
– Level 0. Halt: employed to shut down the system.
– Level 1. Single user: maintenance tasks (no active network).
– Levels 2–5. Multiuser: all the network and graphical services activated.
– Level 6. Reboot: similar to level 0.
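The runlevel init boots into comes from the initdefault line of /etc/inittab. Pulling it out with awk, shown on an inline sample so it runs anywhere (on a SysV system you would read the real file):

```shell
# Sample of the relevant /etc/inittab line.
line='id:2:initdefault:'
# Field 2 of the colon-separated entry is the runlevel.
level=$(echo "$line" | awk -F: '/initdefault/ {print $2}')
echo "default runlevel: $level"
# Changing runlevel at run time (root): telinit 1   -> single user
```

On a live SysV system, the `runlevel` command reports the previous and current levels.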
Stage 4: INIT (SysV)
• Step 1. Configuration. The file /etc/inittab:

# /etc/inittab: init(8) configuration.

# The default runlevel.
id:2:initdefault:

# Boot-time system configuration/initialization script.
# This is run first except when booting in emergency (-b) mode.
si::sysinit:/etc/init.d/rcS

# What to do in single-user mode.
~~:S:wait:/sbin/sulogin

# /etc/init.d executes S and K scripts upon change of runlevel.
l0:0:wait:/etc/init.d/rc 0
l1:1:wait:/etc/init.d/rc 1
l2:2:wait:/etc/init.d/rc 2
l3:3:wait:/etc/init.d/rc 3
l4:4:wait:/etc/init.d/rc 4
l5:5:wait:/etc/init.d/rc 5
l6:6:wait:/etc/init.d/rc 6

# Normally not reached, but fallthrough in case of emergency.
z6:6:respawn:/sbin/sulogin

# What to do when CTRL-ALT-DEL is pressed.
ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now

# Note that on most Debian systems tty7 is used by
# the X Window System, so if you want to add more
# getty's go ahead but skip tty7 if you run X.
1:2345:respawn:/sbin/getty 38400 tty1
2:23:respawn:/sbin/getty 38400 tty2
3:23:respawn:/sbin/getty 38400 tty3
4:23:respawn:/sbin/getty 38400 tty4
5:23:respawn:/sbin/getty 38400 tty5
6:23:respawn:/sbin/getty 38400 tty6
Stage 4: INIT (SysV)
• Step 1. Configuration. The file /etc/inittab:
– Line format: id:runlevels:action:process.
– id: identifier of the entry inside inittab.
– runlevels: execution levels at which the entry applies (empty means all).
– action: what init must do with the process:
• wait: wait until it finishes.
• off: ignore the entry (deactivated).
• once: run only once.
• respawn: rerun the process if it dies.
• sysinit: run during system boot, before any normal runlevel entries.
• Special: ctrlaltdel.
– process: shell command line that tells init which process to start when this entry is reached.
Stage 4: INIT (SysV)
• Step 2. Initialization. The file /etc/init.d/rc:
– Input parameter: the runlevel. Example: rc 2: multiuser.
– Tasks:
• Establishes PATHs.
• Enables swap space: swapon.
• Checks and mounts local filesystems (/etc/fstab).
• Activates and configures the network.
• Removes unnecessary files (/tmp).
• Configures the kernel. Loads modules: drivers (managing dependencies).
• Triggers the startup of the services associated with the runlevel.
– Modifying the runlevel: commands init, telinit:
• Allow changing from one runlevel to another (e.g. dropping to single user for maintenance, then restoring the original state).
Stage 4: INIT (SysV)
• Step 3. Services. The directories /etc/init.d and /etc/rcN.d:
– All the available services are found in /etc/init.d:
• Examples: cron, ssh, lpd…
– How do we tell each runlevel which services to start?:
• With a special directory, /etc/rcN.d/ (N being the runlevel).
• These directories contain lists of links to the services.
– The directory /etc/rcN.d/:
• The links begin with the letter "S" or "K" plus two digits (execution order).
• "S": executed in ascending order when a runlevel is started (e.g. ssh start).
• "K": executed in descending order when shutting down (e.g. ssh stop).
• These links are managed with "update-rc.d".
– S99local: script to perform local configurations:
• Minor booting aspects: auxiliary kernel modules, personalized services…
• Used by the administrator.
• It actually runs the script /etc/rc.local.
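The S/K naming can be reproduced in a scratch directory to see the ordering rule at work. The service names here are mock examples; on a real Debian system the links live in /etc/rc2.d and are created with update-rc.d (e.g. `update-rc.d ssh defaults`):

```shell
# Mock up a runlevel directory with a few S and K links.
mkdir -p rc2.d
touch rc2.d/S01rsyslog rc2.d/S03cron rc2.d/S05cups rc2.d/K01apache2
# Entering the runlevel, rc runs the S?? scripts in ascending order
# (the shell glob already sorts them lexicographically):
for s in rc2.d/S*; do
  echo "start: ${s#rc2.d/}"
done
```

The two-digit prefix is the whole ordering mechanism: S01 runs before S03, which runs before S05; K links would be walked the same way, in descending order, on shutdown.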
Stage 4: INIT (SysV)
• Step 3. Services. The directories /etc/init.d and /etc/rcN.d:
– The directory /etc/rcN.d/:

pablo@si:/etc/rc2.d$ ls
README S03cgroupfs-mount S03vboxdrv S05cups
S01bootlogs S03cron S04avahi-daemon S05cups-browsed
S01rsyslog S03dbus S04docker S05saned
S02apache2 S03exim4 S04lightdm S06plymouth

pablo@si:/etc/rc6.d$ ls
K01alsa-utils K01network-manager K02avahi-daemon K06rpcbind
K01apache2 K01plymouth K02vboxdrv K07hwclock.sh

Stage 4: INIT (SysV)
• Manual administration of services:
– After the booting process, services can be modified (stop running services or start new ones).
– Directly through their scripts (example: ssh):
• # /etc/init.d/ssh [stop/start/restart/status].
– Or through the service command:
• # service --status-all: reads /etc/init.d/, reporting each service's state [+] [-] [?].
– These changes are volatile (lost after reboot):
• Make them permanent with update-rc.d.
– Checking possible errors concerning the boot process:
• # tail -f /var/log/messages (other important files: syslog, daemon.log).
• # ls -lart /var/log.
Stage 4: INIT (SysV)
• Manual administration of services:
– Example of a service start/stop script (simplified):
#!/bin/sh
# SIMPLIFIED
[ -f /usr/local/sbin/sshd2 ] || exit 0
PORT=

PORT=`grep Port /etc/ssh2/sshd2_config | awk '{ x = $2 } END {print x}' -`


if [ "X$PORT" = "X" ]
then
PORT=22
fi
# See how we were called.
case "$1" in
start) # Start daemons.
echo -n "Starting sshd2 in port $PORT: "
/usr/local/sbin/sshd2
echo "done."
;;
stop) # Stop daemons.
echo -n "Shutting down sshd2 in port $PORT: "
kill `cat /var/run/sshd2_$PORT.pid`
echo "done."
;;
restart)
$0 stop
$0 start
;;
*)
echo "Usage: sshd2 {start|stop|restart}"
exit 1
esac
exit 0
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: Systemd.
• Shutting Down.
Stage 4: Systemd
• SysV is not the only available init system:
– BSD init, Ubuntu's Upstart, systemd.
• What are systemd's benefits?:
– Faster startup:
• sysvinit is slow: it starts processes one at a time, performs dependency checks on each one, and waits for daemons to start before more daemons can.
• Daemons don't need to know whether the daemons they depend on are actually running (they only need the inter-process communication sockets to be available).
• Step 1: create all sockets for all daemons. Step 2: start all daemons.
• Client requests for daemons not yet running are buffered in the socket and served once the daemons are up and running.
– Hotplugging and on-demand services:
• After startup, sysvinit goes to sleep and does nothing more.
• systemd (making use of D-Bus) can expand init's duties, working as a full-time Linux process babysitter.
Stage 4: Systemd
• Systemd unit: any resource that the system can operate on or manage:
– This is the primary object that the systemd tools know how to deal with.
• Available systemd unit types:
– .service: a system service.
– .target: a group of systemd units.
– .automount: a file system automount point.
– .device: a device file recognized by the kernel.
– .mount: a file system mount point.
– .path: a file or directory in a file system.
– .socket: an inter-process communication socket.
– .swap: a swap device or a swap file.
– .timer: a systemd timer.
– …
Stage 4: Systemd
• Location of the unit files:
– /usr/lib/systemd/system/, /run/systemd/system/, /etc/systemd/system/.
• General characteristics of unit files:
– Internal structure organized in sections, denoted as [section_name].
– In each section, behavior is defined through key=value directives (one per line).

[Unit]
Description=Simple firewall

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/simple-firewall-start
ExecStop=/usr/local/sbin/simple-firewall-stop

[Install]
WantedBy=multi-user.target
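Because a unit file is plain key=value text, individual directives are easy to extract with standard tools; here the ExecStart of the firewall unit above is pulled out with awk (on a live system, `systemctl show` queries such properties properly):

```shell
# Write the example unit to a local file and query one directive.
cat > simple-firewall.service <<'EOF'
[Unit]
Description=Simple firewall

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/simple-firewall-start
ExecStop=/usr/local/sbin/simple-firewall-stop

[Install]
WantedBy=multi-user.target
EOF
# Split on '=' and print the value of the ExecStart key.
awk -F= '$1 == "ExecStart" {print $2}' simple-firewall.service
```

To activate such a unit on a real system you would copy it to /etc/systemd/system/ and run `systemctl enable` and `systemctl start` on it (root required).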
Stage 4: Systemd
• Systemd boot process:
– Configure GRUB2 for systemd:
• GRUB_CMDLINE_LINUX="init=/lib/systemd/systemd" (run update-grub afterwards).
– Systemd handles boot & service management using targets:
• Target: a special unit employed to group boot units and synchronize start-up processes.
– First target executed: default.target:
• Usually a symbolic link to graphical.target.
– Main options of a target unit file:
• Requires: hard dependencies. These must start before your own service.
• Wants: soft dependencies (not required to start). Can be replaced by a directory named foo.target.wants.
• After: boots after these services.
– Runlevels map to specific target units.

[Unit]
Description=foo boot target
Requires=multi-user.target
Wants=foobar.service
After=multi-user.target rescue.service rescue.target
Stage 4: Systemd

[Figure: systemd target dependency tree. graphical.target pulls in display-manager.service and multi-user.target (system services for the graphical interface, plus various system services); emergency.target pulls in emergency.service. multi-user.target depends on basic.target, which groups timers.target, paths.target and sockets.target (various timers, paths and sockets); rescue.target pulls in rescue.service at this level. basic.target depends on sysinit.target, which groups local-fs.target, swap.target and cryptsetup.target (various mounts, fsck and swap devices), various low-level services (udev, tmpfiles, random seed, sysctl) and API VFS mounts (mqueue, configfs, debugfs).]
Stage 4: Systemd
• Service administration through the systemctl command:
– Table: comparison of the service utility with systemctl.

service (SysV)            systemctl (systemd)                     Description
service name start        systemctl start name.service            Starts a service
service name stop         systemctl stop name.service             Stops a service
service name restart      systemctl restart name.service          Restarts a service
service name status       systemctl status name.service           Checks if a service is running
service --status-all      systemctl list-units --type service     Displays the status of all services

• System & boot performance statistics through the systemd-analyze command:
– Alternative for SysV: Bootchart.
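The table's mapping is mechanical, as this throwaway helper (illustration only, not a real tool) makes explicit:

```shell
# Translate 'service NAME VERB' into its systemctl equivalent
# (start/stop/restart/status all follow the same pattern).
to_systemctl() {
  echo "systemctl $2 $1.service"
}
to_systemctl ssh restart   # -> systemctl restart ssh.service
to_systemctl cron status   # -> systemctl status cron.service
```

The one exception in the table is the status-of-everything query, which becomes `systemctl list-units --type service` rather than following the per-service pattern.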
Index
• Introduction.
• Booting, Stage 1: Hardware.
• Booting, Stage 2: Bootloader:
– LILO.
– GRUB.
• Booting, Stage 1+2 (UEFI).
• Booting, Stage 3: Kernel.
• Booting, Stage 4: INIT.
• Shutting Down.
Shutting Down
• Never shut down abruptly (reset!):
– If this rule is not followed, there is a high probability of losing or corrupting system files (with a bit of bad luck, a fully broken system).
– Disk reads/writes go through intermediate buffers that need synchronization with the disk.
• Never shut down without warning all system users:
– Schedule shut-downs periodically and announce them in advance.
• Steps of a correct shut down:
– Warn all users beforehand.
– Stop all associated services (/etc/rcN.d/Kxxservice stop).
– Send the specific signal telling all processes to end their execution.
– Users and processes still present are killed.
– Subsystems are shut down sequentially.
– File systems are unmounted (synchronizing pending changes with the disk).
Shutting Down
• The shutdown command:
– Format: /sbin/shutdown -<options> time message:
• Option -r: reboot instead of powering off.
• Option -h: halt the system (with ACPI, power it off).
– message: message sent to all users.
– time: delay before the shutdown begins (mandatory):
• Format: hh:mm.
• Also supports now+minutes.
• /etc/shutdown.allow or inittab:
– Restrict who may trigger Ctrl+Alt+Del.
• Other commands: /sbin/halt, /sbin/poweroff.
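The two accepted time formats can be sketched as a small validator (illustrative only; shutdown performs this parsing itself), followed by typical invocations:

```shell
# Accepts hh:mm or now[+minutes], the forms shutdown understands
# (loose sketch: it does not range-check the hour field).
valid_time() {
  case $1 in
    now|now+[0-9]|now+[0-9][0-9]*) return 0 ;;
    [0-9][0-9]:[0-5][0-9])         return 0 ;;
    *)                             return 1 ;;
  esac
}
for t in now now+5 23:30 soon; do
  valid_time "$t" && echo "$t: ok" || echo "$t: rejected"
done
# Typical real invocations (root):
#   shutdown -r now+5 "Rebooting in 5 minutes for kernel update"
#   shutdown -h 23:30 "System going down for maintenance"
```

Both real invocations follow the rules above: warn the users in the message, and give them a delay before the system actually goes down.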
