Unit 1 Slides
Suresh Jamadagni
Department of Computer Science
OPERATING SYSTEMS
Need for Operating System
If there were no operating system, how would application developers access the hardware?
OPERATING SYSTEMS
General Definition
• An operating system is a program that manages the computer's hardware resources:
• Memory
• Processor(s)
• I/O Devices
OPERATING SYSTEMS
Why Study Operating Systems?
● Almost all code runs on top of an operating system, so knowledge of how operating
systems work is crucial to proper, efficient, effective, and secure programming.
● Operating system – controls and coordinates the use of hardware among the various applications and users
● Application programs – define the ways in which the system resources are used to solve
the computing problems of the users
● Users of dedicated systems (e.g., workstations) want convenience and ease of use – the OS is designed mostly to keep the users happy.
● On shared systems, the OS ensures that available CPU time, memory, and I/O are used efficiently and that no individual user takes more than a fair share.
● Handheld computers are resource poor, optimized for usability and battery life.
● OS is a resource allocator
● Decides between conflicting requests for efficient and fair resource use
● OS is a control program
● Controls execution of programs to prevent errors and improper use of the computer
● The fundamental goal of computer systems is to execute user programs and to make solving
user problems easier.
● The common functions of controlling and allocating resources are then brought together into
one piece of software: the operating system
● “The one program running at all times on the computer” is the kernel.
● Everything else is either a system program (shipped with the operating system) or an application program.
OPERATING SYSTEMS
Computer System Organization
● The CPU and the device controllers can execute concurrently, competing
for memory cycles.
• Each device controller has registers that the CPU loads to tell it what action to take (such as “read a
character from the keyboard”)
• The device controller informs the CPU that it has finished its operation by causing
an interrupt
OPERATING SYSTEMS
Computer System Organization
• When the system is booted, the first program that runs is the bootstrap program.
• It is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM).
• The bootstrap program is generally known as firmware, since it is stored within the computer hardware.
• It initializes all aspects of the system, from CPU registers to device controllers to memory contents.
• The bootstrap program must know how to load the operating system and how to start executing that
system.
• The bootstrap program must locate and load into memory the operating system kernel.
• After the OS is booted, the first process created is init (on UNIX); the system then waits for events to occur.
• The interrupt architecture must save the address of the interrupted instruction
• The system determines which type of interrupt has occurred, by polling or by a vectored interrupt mechanism
[Figure: interrupt timeline – interrupt processing times vary, and so do I/O transfer times]
OPERATING SYSTEMS
Storage Structure
• Main memory – the only large storage medium that the CPU can access directly
• Secondary storage – extension of main memory that provides large nonvolatile storage capacity
• Hard disks – rigid metal or glass platters covered with magnetic recording material
• Disk surface is logically divided into tracks, which are subdivided into sectors
• The disk controller determines the logical interaction between the device and the computer
• Storage systems are organized in a hierarchy by:
• Speed
• Cost
• Volatility
• Synchronous I/O: after I/O starts, control returns to the user program only upon I/O completion
• Asynchronous I/O: after I/O starts, control returns to the user program without waiting for I/O completion
• System call – request to the OS to allow the user to wait for I/O completion
• Device-status table contains entry for each I/O device indicating its type, address, and state
• OS indexes into the I/O device table to determine device status and to modify the table entry to reflect the
interrupt.
OPERATING SYSTEMS
Direct Memory Access Structure
OPERATING SYSTEMS
Computer-System Architecture,
OS Structure and Operations
OPERATING SYSTEMS
Computer-System Architecture
• Advantages of multiprocessor systems include:
• Increased throughput
• Economy of scale
• Increased reliability – graceful degradation or fault tolerance
• Symmetric clustering has multiple nodes running applications, monitoring each other
• Some have distributed lock manager (DLM) to avoid conflicting operations (Ex: when multiple hosts
access the same data on shared storage)
OPERATING SYSTEMS
Clustered Systems
• Single user cannot keep CPU and I/O devices busy at all times
• When it has to wait (for I/O for example), OS switches to another job
OPERATING SYSTEMS
Operating-System Structure - Multitasking
• Timesharing (multitasking) is a logical extension of multiprogramming: the CPU switches jobs so
frequently that users can interact with each job while it is running, creating interactive computing
• If processes don’t fit in memory, swapping moves them in and out to run
• Provides ability to distinguish when system is running user code or kernel code
• System call changes mode to kernel, return from call resets it to user
• When the request is fulfilled, the system always switches back to user mode (by setting the mode bit to user) before passing control to a user program
• Timer is set to interrupt the computer after a specified period (fixed 1/60 sec or variable 1 msec to 1 sec)
• Timer can be used to prevent a user program from running too long (terminate the program)
OPERATING SYSTEMS
Kernel Data Structures
Array:
• An array is a simple data structure in which each element can be accessed directly.
• If each item occupies multiple bytes, the byte offset of an item is computed as item number × item size
• But what about removing an item if the relative positions of the remaining items must be
preserved?
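A minimal C sketch (not from the slides) of both points: direct indexing computes a byte offset of item number × item size, while removing an item that must preserve the relative order of the rest requires shifting the remaining elements.

#include <stdio.h>
#include <string.h>

int main(void) {
    int items[] = {10, 20, 30, 40, 50};
    size_t n = 5;

    /* Direct access: &items[2] is (char *)items + 2 * sizeof(int) */
    printf("items[2] = %d\n", items[2]);

    /* Remove items[1] while preserving the order of the remaining items:
       shift everything after it one slot to the left -- an O(n) operation. */
    memmove(&items[1], &items[2], (n - 2) * sizeof(int));
    n--;

    for (size_t i = 0; i < n; i++)
        printf("%d ", items[i]);   /* prints 10 30 40 50 */
    printf("\n");
    return 0;
}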
OPERATING SYSTEMS
Lists
In a doubly linked list, a given item can refer both to its predecessor and to its successor.
In a circularly linked list, the last element in the list refers to the first element, rather than
to null.
OPERATING SYSTEMS
Lists
Advantages:
• Linked lists accommodate items of varying sizes.
• Allow easy insertion and deletion of items
Disadvantages:
• Performance for retrieving a specified item in a list of size n is linear — O(n) , as it requires
potentially traversing all n elements in the worst case.
Usage:
• Lists are used by some kernel algorithms
• They are also used to construct more powerful data structures, such as stacks and queues
OPERATING SYSTEMS
Stacks & Queues
Stack - a sequentially ordered data structure that uses LIFO principle for adding and removing items
• The OS often uses a stack when invoking function calls.
• Parameters, local variables and the return address are pushed onto the stack when a function is called
• Return from the function call pops those items off the stack
Queue - a sequentially ordered data structure that uses FIFO principle for adding and removing items
• Tasks waiting to be run on an available CPU are organized in queues
• Balanced binary search tree – a tree containing n items has at most lg n levels, ensuring O(lg n) worst-case performance
• Used by Linux for selecting which task to run next (CPU-Scheduling algorithm)
OPERATING SYSTEMS
Hash Functions and Maps
• Hash functions can result in the same output value for two different inputs – this is called a collision
• A bitmap is a string of n binary digits that represents the status of n items (e.g., resources):
• 0 – resource is available
• 1 – resource is unavailable
• The value of the ith position in the bitmap is associated with the ith resource
• Example: bitmap 001011101 shows resources 2, 4, 5, 6, and 8 are unavailable; resources 0, 1, 3, and 7
are available
• Commonly used to represent the availability of a large number of resources (disk blocks)
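A small C sketch (not from the slides) that builds the bitmap from the example above and tests the ith bit to decide whether resource i is available:

#include <stdio.h>

int main(void) {
    unsigned int bitmap = 0;          /* bit i = 0: available, 1: unavailable */

    /* Mark resources 2, 4, 5, 6 and 8 as unavailable. */
    int used[] = {2, 4, 5, 6, 8};
    for (int i = 0; i < 5; i++)
        bitmap |= 1u << used[i];

    /* Test the ith position to see whether resource i is available. */
    for (int i = 0; i < 9; i++)
        printf("resource %d: %s\n", i,
               (bitmap >> i) & 1u ? "unavailable" : "available");
    return 0;
}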
OPERATING SYSTEMS
Computing Environments – Traditional
• Stand-alone general-purpose machines, though the boundary has blurred as most systems now interconnect (e.g., over the Internet)
• Mobile devices now provide much of the functionality of a “traditional” laptop, along with wireless connectivity
Distributed computing
• Collection of separate, possibly heterogeneous systems networked together
• Network is a communications path, TCP/IP most common
• Local Area Network (LAN)
• Wide Area Network (WAN)
• Metropolitan Area Network (MAN)
• Personal Area Network (PAN)
• Network Operating System provides features between systems across network
• Communication scheme allows systems to exchange messages
• Illusion of a single system
OPERATING SYSTEMS
Computing Environments – Client-Server
Client-Server Computing
• A peer-to-peer variant does not distinguish clients from servers: nodes broadcast requests for service and respond to requests for service via a discovery protocol
• Examples include Napster and Gnutella, and Voice over IP (VoIP) systems such as Skype
OPERATING SYSTEMS
Computing Environments – Virtualization
• Use cases involve laptops and desktops running multiple OSes for exploration
or compatibility
• The VMM (virtual machine manager) can run natively, in which case it is also the host
OPERATING SYSTEMS
Computing Environments – Virtualization
OPERATING SYSTEMS
Computing Environments – Cloud Computing
• Cloud computing delivers computing, storage, and even applications as a service across a network; it is a logical extension of virtualization, which provides the base for its functionality.
• Amazon EC2 has thousands of servers, millions of virtual machines, petabytes of
storage available across the Internet, pay based on usage
• Many types: public, private, and hybrid clouds; Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS)
OPERATING SYSTEMS
Operating-System Services,
Design and Implementation
OPERATING SYSTEMS
A View of Operating System Services
OPERATING SYSTEMS
Services
• Operating systems provide an environment for execution of programs and services to programs and users
• Set of operating-system services provides functions that are helpful to the user:
• User interface - Almost all operating systems have a user interface (UI).
• I/O operations - A running program may require I/O, which may involve a file or an I/O device
OPERATING SYSTEMS
Services
• Error detection – the OS needs to be constantly aware of possible errors, which may occur in the CPU
and memory hardware, in I/O devices, or in the user program
• Another set of OS functions exists for ensuring the efficient operation of the system itself
• Resource allocation – when multiple users or multiple jobs run concurrently, resources must be allocated to each of them; there are many types of resources: CPU cycles, main memory, file storage, I/O devices
• Accounting - To keep track of which users use how much and what kinds of computer
resources
OPERATING SYSTEMS
Services
System calls can be grouped roughly into six major categories:
1. Process control
2. File manipulation
3. Device manipulation
4. Information maintenance
5. Communications
6. Protection
OPERATING SYSTEMS
Example of a system call
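The slide's figure is not included in the extracted text; the textbook's usual illustration is the sequence of system calls needed to copy one file to another. A hedged C sketch of that sequence (file names are placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t nread;

    int in  = open("input.txt", O_RDONLY);                /* system call */
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        exit(1);                                           /* system call */
    }

    while ((nread = read(in, buf, sizeof(buf))) > 0)       /* system call */
        write(out, buf, (size_t)nread);                    /* system call */

    close(in);                                             /* system call */
    close(out);
    return 0;
}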
Design goals
• Start the design by defining goals and specifications
• User goals – operating system should be convenient to use, easy to learn, reliable,
safe, and fast
• System goals – operating system should be easy to design, implement, and maintain,
as well as flexible, reliable, error-free, and efficient
OPERATING SYSTEMS
Operating-System Design and Implementation
• Much variation
• Early OSes in assembly language
• Then system programming languages like Algol, PL/1
• Now C, C++
• Actually usually a mix of languages
• Lowest levels in assembly
• Main body in C
• Systems programs in C, C++, and scripting languages such as Perl, Python, and shell scripts
• Code in a higher-level language is easier to port to other hardware
• But it may run slower
• Emulation can allow an OS to run on non-native hardware
OPERATING SYSTEMS
Process Management
OPERATING SYSTEMS
Process Concept
• Maximize CPU use, quickly switch processes onto CPU for time sharing
• Process scheduler selects among available processes for next execution on CPU
• Maintains scheduling queues of processes
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready and
waiting to execute
• Device queues – set of processes waiting for an I/O device
• Processes migrate among the various queues
OPERATING SYSTEMS
Ready Queue And Various I/O Device Queues
OPERATING SYSTEMS
Representation of Process Scheduling
OPERATING SYSTEMS
Schedulers
• When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch
• Context-switch time is overhead; the system does no useful work while switching
• The more complex the OS and the PCB, the longer the context switch
• Some hardware provides multiple sets of registers per CPU, so multiple contexts can be
loaded at once
OPERATING SYSTEMS
Operations on Processes
• Address space
• Child duplicate of parent
• Child has a program loaded into it
• UNIX examples
• fork() system call creates new process
• exec() system call used after a fork() to replace the
process’ memory space with a new program
OPERATING SYSTEMS
C Program forking Separate Process
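The program on this slide is not reproduced in the extracted text. A sketch in the spirit of the standard UNIX example – the parent forks a child, the child replaces its memory image with /bin/ls via exec(), and the parent waits for the child to finish:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                      /* create a new process */

    if (pid < 0) {                           /* fork failed */
        fprintf(stderr, "Fork failed\n");
        return 1;
    } else if (pid == 0) {                   /* child process */
        execlp("/bin/ls", "ls", (char *)NULL); /* replace the child's memory space */
        perror("execlp");                    /* reached only if exec fails */
        _exit(1);
    } else {                                 /* parent process */
        wait(NULL);                          /* wait for the child to terminate */
        printf("Child Complete\n");
    }
    return 0;
}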
OPERATING SYSTEMS
Process Termination
• Process executes last statement and then asks the operating system to delete it using the exit() system call.
• A parent may terminate the execution of a child process using the abort() system call. Some reasons for
doing so:
• The child has exceeded its usage of allocated resources
• The task assigned to the child is no longer required
• The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates
OPERATING SYSTEMS
Process Termination
• Some operating systems do not allow a child to exist if its parent has terminated. In such systems, if a
process terminates, then all its children must also be terminated – this is called cascading termination.
• The parent process may wait for termination of a child process by using the wait() system call.
The call returns status information and the pid of the terminated process
• pid = wait(&status);
OPERATING SYSTEMS
Process Identifiers
• An existing process can create a new one by calling the fork function
• When a process calls one of the exec functions, that process is completely replaced by the new
program, and the new program starts executing at its main function.
• The process ID does not change across an exec, because a new process is not created;
• exec merely replaces the current process — its text, data, heap, and stack segments — with a
brand-new program from disk.
• A race condition occurs when multiple processes are trying to do something with shared data and
the final outcome depends on the order in which the processes run.
• The fork function is a lively breeding ground for race conditions, if the logic after the fork either
explicitly or implicitly depends on whether the parent or child runs first after the fork.
• In general, which process runs first cannot be predicted
• A process that wants to wait for a child to terminate must call one of the wait functions.
• If a process wants to wait for its parent to terminate, a loop of the following form could be used:
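The loop itself is not reproduced in the extracted text; a minimal version (assuming the orphaned process is re-parented to init, PID 1) looks like this:

#include <unistd.h>

/* Wait (by polling) until the parent terminates: once the parent dies, the
   process is re-parented -- classically to init, whose PID is 1 -- so the
   value returned by getppid() changes. */
void wait_for_parent_to_die(void) {
    while (getppid() != 1)
        sleep(1);   /* wake up every second just to re-test the condition */
}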
• The problem with this type of loop, called polling, is that it wastes CPU time, as the caller is
awakened every second to test the condition.
• To avoid race conditions and to avoid polling, some form of signaling is used between multiple
processes
OPERATING SYSTEMS
CPU Scheduling
OPERATING SYSTEMS
CPU Scheduling - Basic Concepts
• In a system with a single CPU core, only one process can run at a time. Others must wait
until the CPU is free and can be rescheduled.
• When one process has to wait, the operating system takes the CPU away from that
process and gives the CPU to another process. This pattern continues.
• Every time one process has to wait, another process can take over use of the CPU. On a
multicore system, this concept of keeping the CPU busy is extended to all processing
cores on the system.
OPERATING SYSTEMS
CPU Scheduling - Basic Concepts (Contd)
• A process executes until it must wait, typically for the completion of some I/O request. In a simple
computer system, the CPU then just sits idle; all this waiting time is wasted. With multiprogramming,
we try to use this time productively.
• Preemptive scheduling can result in race conditions when data are shared among several processes.
Ex: While one process is updating the shared data, it is pre-empted so that the second process can run. The
second process then tries to read the data, which are in an inconsistent state.
• A preemptive kernel therefore needs mechanisms (such as locks) to avoid race conditions when accessing
shared kernel data structures. Most modern operating systems are now fully pre-emptive when running in
kernel mode.
OPERATING SYSTEMS
Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the CPU scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that
program
• Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
• Scheduling criteria used to compare algorithms include:
• CPU utilization and throughput – keep the CPU as busy as possible and maximize the number of processes completed per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until the first response is produced (not the time to output that response)
OPERATING SYSTEMS
Scheduling Algorithms
OPERATING SYSTEMS
First- Come, First-Served (FCFS) Scheduling
• The average waiting time under an FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly
• FCFS scheduling algorithm is non-preemptive. Once the CPU has been allocated to a
process, that process keeps the CPU until it releases the CPU
• FCFS is particularly troublesome for time-sharing (interactive) systems, where it is
important that each process get a share of the CPU at regular intervals
• It is not desirable to allow one process to keep the CPU for an extended period
OPERATING SYSTEMS
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst
• Use these lengths to schedule the process with the shortest time
• SJF is optimal – gives minimum average waiting time for a given set of processes
Note: If FCFS scheduling is used, average waiting time = (0 + 6 + 14 + 21) / 4 = 10.25 ms.
OPERATING SYSTEMS
Determining Length of Next CPU Burst
● Can be estimated using the length of previous CPU bursts, using exponential averaging
τn+1 = α·tn + (1 − α)·τn
where τn+1 is the predicted value for the next CPU burst, tn is the length of the nth (most recent) CPU burst, and τn stores the past history.
The parameter α (0 ≤ α ≤ 1) controls the relative weight of recent and past history in the prediction.
Commonly, α = 1/2, so recent history and past history are equally weighted.
Calculate the exponential averaging with T1 = 10, α = 0.5 and the algorithm is SJF
with previous runs as 8, 7, 4, 16.
Initially T1 = 10 and α = 0.5, and the run times given are 8, 7, 4, 16. Because SJF (non-preemptive) is used,
the processes are served in the order 4, 7, 8, 16.
So, using formula: T2 = α*t1 + (1-α)T1
we have,
T2 = 0.5*4 + 0.5*10 = 7, here t1 = 4 and T1 = 10
T3 = 0.5*7 + 0.5*7 = 7, here t2 = 7 and T2 = 7
T4 = 0.5*8 + 0.5*7 = 7.5, here t3 = 8 and T3 = 7
T5 = 0.5*16 + 0.5*7.5 = 11.75, here t4 = 16 and T4 = 7.5
Src: https://www.geeksforgeeks.org/shortest-job-first-cpu-scheduling-with-predicted-burst-time/
So the prediction for the 5th CPU burst will be T5 = 11.75
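A short C sketch (not from the source slides) that reproduces this calculation, so the predictions T2 through T5 can be checked mechanically:

#include <stdio.h>

/* Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n,
   with alpha = 0.5, initial prediction T1 = 10, and observed bursts served
   in SJF order 4, 7, 8, 16. */
int main(void) {
    double alpha = 0.5;
    double tau = 10.0;                   /* T1: initial prediction */
    double burst[] = {4, 7, 8, 16};      /* actual CPU bursts t1..t4 */

    for (int n = 0; n < 4; n++) {
        tau = alpha * burst[n] + (1.0 - alpha) * tau;
        printf("T%d = %.2f\n", n + 2, tau);   /* prints T2=7.00 .. T5=11.75 */
    }
    return 0;
}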
OPERATING SYSTEMS
Prediction of the Length of the Next CPU Burst
OPERATING SYSTEMS
Examples of Exponential Averaging
• α =0
• τn+1 = τn
• Most recent CPU burst does not count
• α =1
• τn+1 = α tn
• Only the actual last CPU burst counts
• If we expand the formula, we get:
τn+1 = α·tn + (1 − α)·α·tn−1 + … + (1 − α)^j·α·tn−j + … + (1 − α)^(n+1)·τ0
• Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor
OPERATING SYSTEMS
More examples
Suppose that the following processes arrive for execution at the times indicated. Each process
will run for the amount of time listed. In answering the questions, use non-preemptive
scheduling, and base all decisions on the information you have at the time the decision must be
made.
• What is the average wait time for these processes with the FCFS scheduling algorithm?
Average wait time = (0 + 8 + 12)/3 = 6.67
• What is the average wait time for these processes with the SJF scheduling algorithm?
Average wait time = (5 + 1 + 0)/3 = 2
OPERATING SYSTEMS
Example of Shortest-remaining-time-first
OPERATING SYSTEMS
Priority Scheduling
• The CPU is allocated to the process with the highest priority (smallest integer ≡ highest
priority)
• Preemptive
• Nonpreemptive
• SJF is priority scheduling where priority is the inverse of predicted next CPU burst time
• Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:
• If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units.
• Each process must wait no longer than (n − 1) × q time units until its next time
quantum.
• For example, with five processes and a time quantum of 20 milliseconds, each
process will get up to 20 milliseconds every 100 milliseconds.
• Performance of the RR algorithm depends heavily on the size of the time quantum
• If the time quantum is extremely large, the RR policy is the same as the FCFS policy
• If the time quantum is extremely small, the RR approach can result in a large
number of context switches
OPERATING SYSTEMS
Example of Round-Robin Scheduling Performance
• In practice, most modern systems have time quanta ranging from 10 to 100 milliseconds.
• The time required for a context switch is typically less than 10 microseconds. Thus, the
context-switch time is a small fraction of the time quantum.
OPERATING SYSTEMS
Turnaround Time Varies With The Time Quantum
OPERATING SYSTEMS
Multilevel Queue
• A process can move between the various queues; aging can be implemented this
way
• A multilevel-feedback-queue scheduler is defined by the following parameters:
• Number of queues
• Scheduling algorithm for each queue
• Method used to determine when to upgrade or demote a process
• Method used to determine which queue a process will enter when that process needs service
OPERATING SYSTEMS
Example of Multilevel Feedback Queue
• Three queues:
• Q0 – RR with time quantum 8 milliseconds
• Q1 – RR time quantum 16 milliseconds
• Q2 – FCFS
• Scheduling
• A new job enters queue Q0 which is served FCFS
• When it gains CPU, job receives 8 milliseconds
• If it does not finish in 8 milliseconds, job is moved to queue Q1
• At Q1 job is again served FCFS and receives 16 additional milliseconds
• If it still does not complete, it is preempted and moved to queue Q2
OPERATING SYSTEMS
Multilevel Feedback Queue
• When a process has been running on a specific processor, the data most recently accessed by
the process populate the cache of the processor.
• Successive memory accesses by the process are often satisfied in cache memory.
• If the process migrates to another processor, the contents of cache memory must be
invalidated for the first processor and the cache for the second processor must be repopulated.
• Because of the high cost of invalidating and repopulating caches, most SMP systems try to
avoid migration of processes from one processor to another and instead attempt to keep a
process running on the same processor. This is known as processor affinity
• Soft affinity - OS will attempt to keep a process on a single processor, but it is possible for a process to
migrate between processors
• Hard affinity – OS provides system calls for process to specify a subset of processors on which it may
run
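As a concrete, Linux-specific illustration of hard affinity (not from the slides), a process can restrict itself to a subset of CPUs with sched_setaffinity():

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                        /* allow only CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("Now restricted to CPU 0\n");
    return 0;
}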
OPERATING SYSTEMS
Load Balancing
• On SMP systems, it is important to keep the workload balanced among all processors to fully
utilize the benefits of having more than one processor.
• Load balancing attempts to keep the workload evenly distributed across all processors in an
SMP system.
• Load balancing is typically necessary only on systems where each processor has its own private
queue of eligible processes to execute
1. Push migration - a specific task periodically checks the load on each processor and if it finds
an imbalance, evenly distributes the load by moving (or pushing) processes from overloaded
to idle or less-busy processors
2. Pull migration occurs when an idle processor pulls a waiting task from a busy processor.
• Push and pull migration need not be mutually exclusive
OPERATING SYSTEMS
Linux Scheduling Through Version 2.5
• Supported SMP systems, with processor affinity and load balancing between processors, and performed well
• But gave poor response times for the interactive processes that are common on many desktop
computer systems
OPERATING SYSTEMS
Linux Scheduling in Version 2.6.23 +
• Using different scheduling classes, the kernel can accommodate different scheduling algorithms
• Scheduler picks the highest priority task in the highest scheduling class
• Proportion calculated based on nice value which ranges from -20 to +19
• Tasks with lower nice values receive a higher proportion of CPU processing time than tasks with higher
nice values
• Calculates target latency – interval of time during which task should run at least once
• Proportions of CPU time are allocated based on the computed target latency.
• CFS scheduler maintains per task virtual run time in variable vruntime
• vruntime is associated with a decay factor based on the priority of the task – lower priority implies a
higher decay rate
• A task at the normal (default) priority has virtual run time equal to its actual run time
• To decide next task to run, scheduler picks task with lowest virtual run time
OPERATING SYSTEMS
Linux Scheduling in Version 2.6.23 + (Cont.)
• The value of vruntime will be lower for the I/O-bound task than for the CPU-bound
task, giving the I/O-bound task higher priority than the CPU-bound task.
• If the CPU-bound task is executing when the I/O-bound task becomes eligible to run, the I/O-bound task will preempt the CPU-bound task, since it has the lower vruntime.
• Windows API identifies several priority classes to which a process can belong
• REALTIME_PRIORITY_CLASS, HIGH_PRIORITY_CLASS,
ABOVE_NORMAL_PRIORITY_CLASS,NORMAL_PRIORITY_CLASS,
BELOW_NORMAL_PRIORITY_CLASS, IDLE_PRIORITY_CLASS
• All are variable except REALTIME
• A thread within a given priority class has a relative priority
• TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL, BELOW_NORMAL, LOWEST,
IDLE
• Priority class and relative priority combine to give numeric priority
• Base priority is NORMAL within the class
• If quantum expires, priority lowered, but never below base
OPERATING SYSTEMS
Windows Thread Priorities
OPERATING SYSTEMS
What is a shell?
• A shell is a command-line interpreter that accepts commands from the user and asks the operating system to run them
• To see which shell you are currently using, run: echo $SHELL
OPERATING SYSTEMS
Environment variables
• Environmental variables are variables that are defined for the current shell and are
inherited by any child shells or processes.
• Environmental variables are used to pass information into processes that are spawned from
the shell.
• Shell variables are variables that are contained exclusively within the shell in which they
were set or defined.
• They are often used to keep track of ephemeral data, like the current working directory.
• By convention, these types of variables are usually defined using all capital letters. This
helps users distinguish environmental variables from other kinds of variables.
• We can see a list of all of our environmental variables by using the env or printenv commands
OPERATING SYSTEMS
Common Environment variables
• SHELL: This describes the shell that will be interpreting any commands you type in. In most cases,
this will be bash by default, but other values can be set if you prefer other options.
• TERM: This specifies the type of terminal to emulate when running the shell. Different hardware
terminals can be emulated for different operating requirements. You usually won’t need to worry
about this though.
• USER: The current logged in user.
• PWD: The current working directory.
• OLDPWD: The previous working directory. This is kept by the shell in order to switch back to your
previous directory by running cd -.
• PATH: A list of directories that the system will check when looking for commands. When a user
types in a command, the system will check directories in this order for the executable.
• HOME: The current user’s home directory.
OPERATING SYSTEMS
Common SHELL variables
• BASHOPTS: The list of options that were used when bash was executed. This can be useful
for finding out if the shell environment will operate in the way you want it to.
• BASH_VERSION: The version of bash being executed, in human-readable form.
• BASH_VERSINFO: The version of bash, in machine-readable output.
OPERATING SYSTEMS
SHELL basics
• In the child shell process, the shell variable defined in the parent is not available
• To pass the shell variables to child shell processes, you need to export the variable
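A minimal demonstration of the difference (the variable name MY_VAR is just an example):

MY_VAR="hello"                            # shell variable: visible only in the current shell
bash -c 'echo "child sees: $MY_VAR"'      # prints nothing after "child sees:"
export MY_VAR                             # promote it to an environment variable
bash -c 'echo "child sees: $MY_VAR"'      # now prints: child sees: hello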
OPERATING SYSTEMS
SHELL basics
• For setting environment variables at login, edit the .profile file in the $HOME directory and add
the export command
OPERATING SYSTEMS
SHELL control flow - if
Nested loop:
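The slide's if and nested-loop examples are not included in the extracted text; a minimal bash sketch of both constructs:

#!/bin/bash
# A variable test with if, followed by a nested for loop.
count=5
if [ "$count" -gt 3 ]; then
    echo "count is greater than 3"
else
    echo "count is 3 or less"
fi

# Nested loop: the inner loop runs to completion for every pass of the outer loop
for i in 1 2 3; do
    for j in a b; do
        echo "outer=$i inner=$j"
    done
done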
OPERATING SYSTEMS
SHELL control flow – continue and break
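A minimal bash sketch (not from the slides) showing how continue and break alter loop flow:

#!/bin/bash
# continue skips the rest of the current iteration; break exits the loop entirely.
for n in 1 2 3 4 5; do
    if [ "$n" -eq 2 ]; then
        continue        # skip 2 and go on to the next value
    fi
    if [ "$n" -eq 4 ]; then
        break           # leave the loop when 4 is reached
    fi
    echo "processing $n"   # prints 1 and 3
done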
OPERATING SYSTEMS
cron
• In Unix and Linux, cron is a daemon, which is an unattended program that runs continuously
in the background and wakes up (executes) to handle periodic service requests when
required. The daemon runs when the system boots
• Cron is a job scheduling utility present in Unix like systems. The crond daemon enables cron
functionality and runs in background.
• The cron reads the crontab (cron tables) for running predefined scripts.
• By using a specific syntax, you can configure a cron job to schedule scripts or other
commands to run automatically.
• Checking for jobs scheduled in cron
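For example (the script path below is hypothetical):

crontab -l                          # list the current user's scheduled cron jobs
crontab -e                          # edit the crontab
# Each entry has the form: minute hour day-of-month month day-of-week command
30 2 * * * /home/user/backup.sh     # run backup.sh every day at 2:30 AM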