
Case Study on Linux operating System

EVOLUTION OF UNIX

 UNIX development started in 1969 at Bell Laboratories in New Jersey.
 Between 1964 and 1968, Bell Laboratories was involved in the development of a multi-user, time-sharing operating system called Multics (Multiplexed Information and Computing Service).
 In early 1969, Bell Labs withdrew from the Multics project.
 After Multics was abandoned, Ken Thompson and Dennis Ritchie needed an operating system so that they could run the game Space Travel on a smaller machine. The result was a system called UNICS (UNiplexed Information and Computing Service). The first version of Unix was written in the low-level assembly language of the PDP-7 (Programmed Data Processor).
 When the PDP-11 computer arrived at Bell Labs, Dennis Ritchie built on B to create a new language called C. Unix components were later rewritten in C, culminating in the kernel itself in 1973.
 Unix V6 was free and was distributed with its source code.
 In 1983, AT&T released Unix System V which was a commercial version.
 Meanwhile, the University of California at Berkeley started the development of its
own version of UNIX.
 AT&T was developing its System V Unix.
 Berkeley took the initiative with its own Unix, BSD (Berkeley Software Distribution).
 Sun Microsystems developed its own BSD-based Unix called SunOS, which was later renamed Sun Solaris.
 Microsoft and the Santa Cruz Operation (SCO) were involved in another version of UNIX called XENIX.
 Hewlett-Packard developed HP-UX for its workstations.
 DEC released ULTRIX.
 In 1986, IBM developed AIX (Advanced Interactive eXecutive).

WHAT IS LINUX?

Just like Windows XP, Windows 7, and Mac OS X, Linux is an operating system. An
operating system manages all of the hardware resources associated with your desktop or
laptop. The kernel is the core of the system and manages the CPU, memory, and peripheral
devices.

DIFFERENT LAYERS IN LINUX

The Linux system is organized into four layers.


The layers of the Linux system architecture are described below.

 Hardware − Hardware consists of all the physical devices attached to the system.
 Kernel − The kernel is the core component of any operating system; it interacts directly with the hardware.
 Shell − The shell is the interface that takes input from users and sends instructions to the kernel; it also takes output from the kernel and returns the result to the user.
 Applications − These are the utility programs that run on top of the shell, such as a web browser, media player, or text editor.
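To make the application/kernel boundary concrete, here is a minimal sketch (not from the original document, assuming a standard Linux/POSIX environment) of a C program that crosses these layers by invoking the write() system call directly:

/* Minimal sketch: a user-space application asking the kernel to perform
 * I/O through the write() system call (file descriptor 1 is standard output). */
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from user space\n";
    write(1, msg, strlen(msg));   /* the kernel carries out the actual output */
    return 0;
}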

LINUX PROCESSES

For Linux to manage the processes in the system, each process is represented by a
task_struct data structure. The task vector is an array of pointers to every task_struct data
structure in the system. As processes are created, a new task_struct is allocated from
system memory and added into the task vector.

Although the task_struct data structure is quite large and complex, its fields can be
divided into a number of functional areas:
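The real task_struct is defined in the kernel source (include/linux/sched.h) and contains far more fields than can be shown here; the sketch below is purely illustrative and only groups a few of the functional areas described in the rest of this section:

/* Illustrative sketch only; field names are simplified and are not the
 * kernel's actual declarations. */
struct task_struct_sketch {
    long state;              /* running, waiting, stopped, zombie ...        */
    int  prio;               /* scheduling information                       */
    int  pid, uid, gid;      /* process, user and group identifiers          */
    struct mm_struct *mm;    /* virtual memory mapping (kernel threads: NULL) */
    unsigned long utime;     /* CPU time spent in user mode (jiffies)        */
    unsigned long stime;     /* CPU time spent in system mode (jiffies)      */
};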

State

As a process executes, it changes STATE according to its circumstances. Linux
processes have the following states:

Running

The process is either running (it is the current process in the system) or it is ready to
run (it is waiting to be assigned to one of the system's CPUs).

Waiting

The process is waiting for an event or for a resource. Linux differentiates between
two types of waiting processes: INTERRUPTIBLE and UNINTERRUPTIBLE.
Interruptible waiting processes can be interrupted by signals whereas uninterruptible
waiting processes are waiting directly on hardware conditions and cannot be
interrupted under any circumstances.
Stopped

The process has been stopped, usually by receiving a signal. A process that is being
debugged can be in a stopped state.

Zombie

This is a halted process which, for some reason, still has a task_struct data structure
in the task vector. It is what it sounds like, a dead process.
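For illustration only (this enumeration is not taken from the kernel headers, which use flag constants such as TASK_RUNNING and TASK_INTERRUPTIBLE), the states above can be pictured roughly as follows, together with the single-letter codes that the ps command displays:

/* Illustrative mapping of the states described above to ps status codes. */
enum proc_state_sketch {
    PROC_RUNNING,         /* running or ready to run             (ps code R) */
    PROC_INTERRUPTIBLE,   /* waiting, can be woken by a signal   (S)         */
    PROC_UNINTERRUPTIBLE, /* waiting directly on hardware        (D)         */
    PROC_STOPPED,         /* stopped, e.g. under a debugger      (T)         */
    PROC_ZOMBIE           /* dead, but task_struct still present (Z)         */
};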

Scheduling Information

The scheduler needs this information in order to decide fairly which process in the
system most deserves to run.

Identifiers

Every process in the system has a process identifier. The process identifier is not an
index into the task vector; it is simply a number. Each process also has user and
group identifiers, which are used to control the process's access to the files and
devices in the system.

Inter-Process Communication

Linux supports the classic Unix IPC mechanisms of signals, pipes and semaphores,
and also the System V IPC mechanisms of shared memory, semaphores and
message queues.
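As a brief sketch of one of these mechanisms (assuming a standard Linux/POSIX environment), the program below creates a pipe and uses it to send a message from a child process to its parent:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1)            /* fd[0] is the read end, fd[1] the write end */
        return 1;

    if (fork() == 0) {             /* child: write a message into the pipe */
        close(fd[0]);
        write(fd[1], "hello", 6);
        _exit(0);
    }

    close(fd[1]);                  /* parent: read the child's message */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}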

Times and Timers

The kernel keeps track of a process's creation time as well as the CPU time that it
consumes during its lifetime. On each clock tick, the kernel updates the amount of
time, in jiffies, that the current process has spent in system and in user mode. Linux
also supports process-specific INTERVAL timers; processes can use system calls to
set up timers that send signals to themselves when the timers expire. These timers
can be single-shot or periodic.
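As a sketch of a process-specific interval timer (assuming a standard Linux/POSIX environment), the program below uses setitimer() to have the kernel deliver SIGALRM to the process once per second:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;                        /* count each timer expiry */
}

int main(void)
{
    struct itimerval it = {
        .it_value    = { .tv_sec = 1, .tv_usec = 0 },  /* first expiry after 1 s */
        .it_interval = { .tv_sec = 1, .tv_usec = 0 }   /* then periodic, every 1 s */
    };

    signal(SIGALRM, on_alarm);
    setitimer(ITIMER_REAL, &it, NULL);   /* real-time interval timer */

    while (ticks < 3)
        pause();                         /* sleep until a signal arrives */

    printf("received %d SIGALRM signals\n", (int)ticks);
    return 0;
}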

Virtual memory

Most processes have some virtual memory (kernel threads and daemons do not) and
the Linux kernel must track how that virtual memory is mapped onto the system's
physical memory.
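From user space, this mapping can be observed through the mmap() system call, which asks the kernel to create a new region of virtual memory for the calling process; the sketch below (assuming a standard Linux environment) maps one anonymous page:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;                              /* one page of virtual memory */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy(p, "physical memory is allocated when the page is first touched");
    printf("%s\n", p);
    munmap(p, len);                                 /* remove the mapping again */
    return 0;
}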

Processor Specific Context

A process could be thought of as the sum total of the system's current state.
Whenever a process is running it is using the processor's registers, stacks and so on.
This is the process's context and, when a process is suspended, all of that CPU-specific
context must be saved in the task_struct for the process. When the process is
restarted by the scheduler, its context is restored from here.
PROCESS SYSTEM CALLS

Processes are the most fundamental abstraction in a Linux system, after files. As object code in
execution - active, alive, running programs - processes are more than just assembly language; they
consist of data, resources, state, and a virtualized computer. Linux took an interesting path, one
seldom traveled, and separated the act of creating a new process from the act of loading a new binary
image. Although the two tasks are performed in tandem most of the time, the division has allowed a
great deal of freedom for experimentation and evolution for each of the tasks. This road less traveled
has survived to this day, and while most operating systems offer a single system call to start up a new
program, Linux requires two: a fork and an exec.
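A minimal sketch of this two-step model (assuming a standard Linux/POSIX environment): fork() creates the child process, execve() then loads a new binary image into it, and the parent waits for the child to finish.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                     /* FORK: create a child process */

    if (pid == 0) {
        /* Child: EXECVE replaces this process image with /bin/ls. */
        char *argv[] = { "/bin/ls", "-l", NULL };
        char *envp[] = { NULL };
        execve("/bin/ls", argv, envp);
        _exit(1);                           /* reached only if execve fails */
    }

    int status;
    waitpid(pid, &status, 0);               /* implemented by the WAIT4 syscall */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}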

Creation and termination

 CLONE Create a child process


 FORK Create a child process
 VFORK Create a child process and block parent
 EXECVE Execute program
 EXECVEAT Execute program relative to a directory file descriptor
 EXIT Terminate the calling process
 EXIT_GROUP Terminate all threads in a process
 WAIT4 Wait for process to change state
 WAITID Wait for process to change state

Process id

 GETPID Get process ID


 GETPPID Get parent process ID
 GETTID Get thread ID
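A short sketch of these calls in C (assuming glibc on Linux; the thread ID is obtained here with a raw syscall() because the gettid() wrapper only appeared in newer glibc versions):

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    printf("pid  = %d\n", (int)getpid());                 /* GETPID  */
    printf("ppid = %d\n", (int)getppid());                /* GETPPID */
    printf("tid  = %ld\n", (long)syscall(SYS_gettid));    /* GETTID  */
    return 0;
}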

Session id

 SETSID Set session ID


 GETSID Get session ID

Process group id

 SETPGID Set process group ID


 GETPGID Get process group ID
 GETPGRP Get the process group ID of the calling process

Users and groups

 SETUID Set real user ID


 GETUID Get real user ID
 SETGID Set real group ID
 GETGID Get real group ID
 SETRESUID Set real, effective and saved user IDs
 GETRESUID Get real, effective and saved user IDs
 SETRESGID Set real, effective and saved group IDs
 GETRESGID Get real, effective and saved group IDs
 SETREUID Set real and/or effective user ID
 SETREGID Set real and/or effective group ID
 SETFSUID Set user ID used for file system checks
 SETFSGID Set group ID used for file system checks
 GETEUID Get effective user ID
 GETEGID Get effective group ID
 SETGROUPS Set list of supplementary group IDs
 GETGROUPS Get list of supplementary group IDs
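A short sketch of a few of these calls (assuming glibc on Linux); for an ordinary process started from a shell, the real and effective IDs are normally the same:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("real uid      = %d\n", (int)getuid());    /* GETUID  */
    printf("effective uid = %d\n", (int)geteuid());   /* GETEUID */
    printf("real gid      = %d\n", (int)getgid());    /* GETGID  */
    printf("effective gid = %d\n", (int)getegid());   /* GETEGID */
    return 0;
}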

PROCESS SCHEDULING ALGORITHMS USED

First Come First Serve

 FCFS stands for First Come First Serve. It is the simplest CPU scheduling algorithm. In this type of
algorithm, the process which requests the CPU first gets the CPU allocation first. This scheduling
method can be managed with a FIFO queue.
 As a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the
queue. When the CPU becomes free, it is assigned to the process at the head of the queue.
 It is a non-preemptive scheduling algorithm.
 Jobs are always executed on a first-come, first-serve basis.
 It is easy to implement and use.
 However, this method is poor in performance, and the average waiting time is quite high.
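As a worked illustration (the burst times are hypothetical and not from the original text), the sketch below computes FCFS waiting times for three processes that arrive in order at time 0; each process waits for the combined bursts of every process ahead of it:

#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };            /* hypothetical CPU burst times (ms) */
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i, wait);
        total_wait += wait;
        wait += burst[i];                  /* the next process waits for all earlier bursts */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}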

Shortest Remaining Time

 SRT stands for Shortest Remaining Time. It is also known as SJF preemptive scheduling. In this
method, the CPU is allocated to the process that is closest to completion. This prevents a newer
ready-state process from holding up the completion of an older process.
 This method is mostly applied in batch environments where short jobs need to be given preference.
 It is not an ideal method for a shared system where the required CPU time is unknown.
 Each process is associated with the length of its next CPU burst; the operating system uses these
lengths to schedule the process with the shortest remaining time first.

Priority Based Scheduling

 Priority scheduling is a method of scheduling processes based on priority. In this method, the
scheduler selects tasks to run according to their priority.
 Priority scheduling also allows the OS to assign priorities explicitly. Processes with higher priority
are carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS
basis. Priority can be decided based on memory requirements, time requirements, etc.

Round-Robin Scheduling

 Round robin is one of the oldest and simplest scheduling algorithms. The name of this algorithm
comes from the round-robin principle, where each person gets an equal share of something in turn. It
is widely used for scheduling in multitasking systems and provides starvation-free execution of
processes.
 Round robin is a hybrid model which is clock-driven.
 The time slice assigned to each task should be kept small; however, it may vary for different
processes.
 It suits real-time systems that must respond to an event within a specific time limit.
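As an illustration (hypothetical burst times and a 4 ms quantum, not from the original text), the sketch below shows how round robin hands the CPU to each ready process in turn for at most one time slice:

#include <stdio.h>

int main(void)
{
    int remaining[] = { 10, 5, 8 };        /* hypothetical remaining burst times (ms) */
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;                   /* this process has already finished */
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;
            remaining[i] -= run;
            printf("P%d ran %d ms (t = %d), %d ms left\n", i, run, time, remaining[i]);
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}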

Shortest Job First

 SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest execution
time is selected for execution next. This scheduling method can be preemptive or non-preemptive. It
significantly reduces the average waiting time for other processes awaiting execution.
 Each job is associated with the unit of time it needs to complete.
 In this method, when the CPU is available, the process or job with the shortest completion time is
executed first.
 It is commonly implemented with a non-preemptive policy.
 This algorithm is useful for batch-type processing, where waiting for jobs to complete is not critical.
 It improves throughput by executing shorter jobs first, as they mostly have a shorter turnaround
time.
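As a worked illustration (hypothetical burst times, all processes arriving at time 0, not from the original text), non-preemptive SJF simply services the bursts in ascending order, which minimizes the average waiting time:

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int burst[] = { 6, 8, 7, 3 };          /* hypothetical CPU burst times (ms) */
    int n = 4, wait = 0, total_wait = 0;

    qsort(burst, n, sizeof(int), cmp_int); /* shortest job first */

    for (int i = 0; i < n; i++) {
        total_wait += wait;                /* each job waits for all shorter jobs */
        wait += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}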

Multiple-Level Queues Scheduling

 This algorithm separates the ready queue into several separate queues. In this method, processes
are assigned to a queue based on a specific property of the process, such as process priority or
memory size.
 However, this is not an independent scheduling algorithm, as it needs to use other types of
algorithms in order to schedule the jobs within each queue.
 Multiple queues are maintained for processes with common characteristics.
 Every queue may have its own scheduling algorithm.
 Each queue is assigned a priority.
