
Operating Systems

Bea May M. Belarmino


Joselle A. Banocnoc
Khryss Leanne B. Omnes
Table of Contents

Module 8: Concurrency
Introduction 4
Learning Outcomes 4
Lesson 1: Principles of Concurrency 155
Lesson 2: Principles of Deadlock 156
Lesson 3: Deadlock Prevention, Avoidance, and Detection 159
Lesson 4: Mutual Exclusion 162
Lesson 5: Semaphores 164
Lesson 6: Monitor 165
Assessment Task 167
Summary 168

Module 9: Scheduling
Introduction 169
Learning Outcomes 169
Lesson 1: Types of Processor Scheduling 170
Lesson 2: Scheduling Algorithms 172
Lesson 3: Multiprocessor and Multicore 177
Lesson 4: Real-Time Scheduling 179
Assessment Task 180
Summary 181

Module 10: Linux vi Editor


Introduction 182
Assessment Task 189
Summary 190

MODULE 8
CONCURRENCY

Introduction

This module tackles concurrency: the principles of concurrency, the principles of
deadlock, deadlock prevention, avoidance, and detection, mutual exclusion,
semaphores, and monitors.

Learning Outcomes

At the end of this lesson, the student should be able to:

1. Determine the basic concepts related to concurrency;


2. Describe hardware approaches to supporting mutual exclusion;
3. Distinguish semaphore from monitor;
4. List and describe the conditions for deadlock;
5. Identify strategies to prevent deadlock; and
6. Distinguish the difference in approach between deadlock detection
and deadlock prevention or avoidance.

Lesson 1. Principles of Concurrency
Concurrency is the interleaving of processes in time to give the
appearance of simultaneous execution (Stallings, 2015, p. 202).
In Figure 8.1, data in a database is at times shared with clients and is usually
processed on the server. A server can handle processes and can handle the storage
of processed data (Stallings, 2015, p. 202).

Figure 8.1 Concurrency


Source: Stallings (2015, p. 202)
In relation to concurrency, processors have the capability to handle
simultaneous execution of processes. There are instances when two executing
processes fetch the same data, which can cause conflicts in sharing or acquiring
resources (Stallings, 2015, p. 202).

Remember that concurrency in an OS denotes handling processes simultaneously
or in parallel. When simultaneous data must be processed, the system is prone to
errors because the OS is handling many processes at once; deadlocks may appear
and crash or halt the system (Stallings, 2015, p. 202).

Issues and difficulties:

 Sharing global resources safely is difficult;


 Optimal allocation of resources is difficult;

 Locating programming errors can be difficult, because the contexts in which
errors occur cannot always be reproduced easily (Stallings, 2015, p. 202).

Lesson 2. Principles of Deadlock


Deadlock can be defined as the permanent blocking of a set of processes that
either compete for system resources or communicate with each other (Stallings,
2015, p. 260).
A set of processes is deadlocked when each process in the set is blocked
awaiting an event (typically the freeing up of some requested resource) that can only
be triggered by another blocked process in the set (Stallings, 2015, p. 260).
In the scenario in Figure 8.2, at a junction, let CARS 1, 2, 3, and 4 be
processes. If all four cars drove at once, they would collide, resulting in an
accident. That accident corresponds to a DEADLOCK (Stallings, 2015, p. 260).

Figure 8.2 Deadlock


Source: Stallings (2015, p. 261)
Let P1 and P2 be processes; R1 and R2 be resources (Stallings, 2015, p.
269):
 From the 1st figure in figure 8.2, P1 and P2 each need a particular
resource in order to execute.
 From the 2nd figure in figure 8.2, P1 needs R1 and P2 needs R2, but
P1 is currently holding R2 while P2 is currently holding R1. If both
processes are stuck waiting for the resource they need to become
available, the processes are in what we call a DEADLOCK.
Possibility of Deadlock          Existence of Deadlock
Mutual exclusion                 Mutual exclusion
No preemption                    No preemption
Hold and wait                    Hold and wait
                                 Circular wait

Figure 8.3 Conditions for Deadlock


Source: Stallings (2015, p. 269)
We can label a state either as a Possibility of Deadlock (where a deadlock may
happen) or as an Existence of Deadlock (where a deadlock already exists in the current context).

These are the conditions for encountering a deadlock (Stallings, 2015, p. 270):
1. Mutual exclusion. Only one process may use a resource at a time. No process
may access a resource unit that has been allocated to another process.

Figure 8.4 Mutual Exclusion
Source: Stallings (2015, p. 270)

2. Hold and wait. A process may hold allocated resources while awaiting
assignment of other resources.
Figure 8.5 Hold and Wait
Source: Stallings (2015, p. 53)

3. No preemption. No resource can be forcibly removed from a process holding it.

Figure 8.6 No Preemption
Source: Stallings (2015, p. 53)

4. Circular wait. A closed chain of processes exists, such that each process holds
at least one resource needed by the next process.
 Of the four conditions of deadlock, numbers 1, 2, and 3 only state
that there is a possibility of a deadlock.
 If the process has also encountered number 4, it is already in a state of
deadlock.

Figure 8.7 Circular Wait
Source: Stallings (2015, p. 53)
Examples (Stallings, 2015, p. 270):
 Mutual exclusion: The Microsoft Word and Microsoft Excel processes want to
use the printer at the same time.
 Hold and Wait: The OS holds some memory allocation while waiting for a
process to finish using it, so that the OS can release it to another process.
 No Preemption: The Windows 10 OS installs updates, which include the latest
patch and hotfix for better storage management, and the installation cannot be
forcibly taken away.
 Circular Wait: Windows Explorer wants to access the storage device while
currently holding the RAM. Microsoft Access wants to access the RAM while
currently holding the processor. The operating system wants to access the
processor while currently holding the storage device. In short, they are in a
circular wait, each waiting for a resource to become free in order to proceed
(a minimal code sketch of this situation follows below).
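
The circular-wait example above can be reproduced directly in code. Below is a minimal Java sketch (the class, thread, and resource names are illustrative, not from Stallings): two threads each hold one resource and then wait for the other, so neither can ever proceed and the program hangs.

public class CircularWaitDemo {
    private static final Object storageDevice = new Object();
    private static final Object ram = new Object();

    public static void main(String[] args) {
        // Thread A holds the storage device, then waits for the RAM.
        Thread a = new Thread(() -> {
            synchronized (storageDevice) {
                pause();                          // give thread B time to grab the RAM
                synchronized (ram) {
                    System.out.println("A finished");
                }
            }
        });
        // Thread B holds the RAM, then waits for the storage device: a closed chain.
        Thread b = new Thread(() -> {
            synchronized (ram) {
                pause();
                synchronized (storageDevice) {
                    System.out.println("B finished");
                }
            }
        });
        a.start();
        b.start();    // with the pauses, both threads end up waiting forever: deadlock
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}

Neither thread ever prints its message, which is the programmatic counterpart of the cars stuck at the junction in Figure 8.2.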

Lesson 3. Deadlock Prevention, Avoidance, and Detection


Deadlock Prevention
There are different ways to prevent a deadlock (Stallings, 2015, p. 270):
 Mutual Exclusion: In general, the first of the four listed conditions cannot be
disallowed. If access to a resource requires mutual exclusion, then mutual
exclusion must be supported by the OS. Deadlock can occur if more than one
process requires write permission (Stallings, 2015, p. 270).
Figure 8.8 Mutual Exclusion
Source: Stallings (2015, p. 53)

 Hold and Wait: The hold-and-wait condition can be prevented by requiring


that a process request all of its required resources at one time and blocking the
process until all requests can be granted simultaneously.

Figure 8.9 Hold and Wait
Source: Stallings (2015, p. 53)

 No Preemption: This condition can be prevented in several ways:


o If a process holding certain resources is denied a further request, that
process must release its original resources.
o If necessary, request them again together with the additional resource.
o In short, this means letting go of some resources in order for other
processes to proceed with execution.

 Circular Wait: The circular wait condition can be prevented by defining a linear
ordering of resource types (a minimal lock-ordering sketch follows Figure 8.10 below).

Figure 8.10 Circular Wait
Source: Stallings (2015, p. 53)
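
A minimal Java sketch of the linear-ordering idea (the resource names are illustrative): every thread acquires the resources in the same globally agreed order, so a closed chain of waits can never form and the circular wait condition is broken.

public class OrderedLockingDemo {
    // One global ordering: the storage device (rank 1) is always acquired before the RAM (rank 2).
    private static final Object storageDevice = new Object();
    private static final Object ram = new Object();

    public static void main(String[] args) {
        Runnable job = () -> {
            synchronized (storageDevice) {     // rank 1 first
                synchronized (ram) {           // rank 2 second
                    System.out.println(Thread.currentThread().getName() + " finished");
                }
            }
        };
        new Thread(job, "A").start();
        new Thread(job, "B").start();          // both threads always terminate: no circular wait
    }
}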

Deadlock Avoidance
Deadlock avoidance allows the three necessary conditions but makes practical choices
to assure that the deadlock point is never reached. There are two options (Stallings,
2015, p. 271):
 Process Initiation Denial. Setting up the needed/maximum resources per process
prior to its execution.
 Resource Allocation Denial. The Banker's Algorithm (a safety-check sketch is
given after the figure below).

Figure 8.10 Deadlock Avoidance


Source: Stallings (2015, p. 53)
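
The Banker's Algorithm grants a resource request only if the resulting state is safe, that is, if some ordering of the processes lets each of them obtain its remaining need and finish. The Java sketch below implements only that safety test; the class name, method names, and matrix contents are illustrative sample data, not taken from Stallings.

import java.util.Arrays;

public class BankersSafety {
    // Returns true if there is some order in which every process can obtain
    // its remaining need, finish, and release what it currently holds.
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < n; p++) {
                if (!finished[p] && fits(need[p], work)) {
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r];  // p finishes and releases
                    finished[p] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;   // some process can never finish: unsafe
        return true;
    }

    static boolean fits(int[] need, int[] work) {
        for (int r = 0; r < need.length; r++) if (need[r] > work[r]) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] available    = {3, 3, 2};                                        // sample data, 3 resource types
        int[][] allocation = {{0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}};    // what each process holds
        int[][] need       = {{7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1}};    // what each may still request
        System.out.println("State is safe? " + isSafe(available, allocation, need));  // prints true
    }
}

A full avoidance scheme would tentatively apply each request, run this safety test, and deny the request if the resulting state turns out to be unsafe.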
Deadlock Detection
Both prevention and avoidance of deadlock lead to conservative allocation of
resources, with corresponding inefficiencies (Stallings, 2015, p. 277):
 Deadlock detection takes the opposite approach:
o Make allocations liberally, allowing deadlock to occur
o Apply a detection algorithm periodically to check for deadlock
 The detection algorithm can be applied at every resource allocation.

Figure 8.11 Deadlock Detection


Source: Stallings (2015, p. 53)

Lesson 4. Mutual Exclusion


It is the requirement that when a process P is accessing a shared resource R,
no other process should be able to access R until P has finished with R (Stallings,
2015, p. 219):

 Examples of such resources include files, I/O devices such as printers, and
shared data structures.

Figure 8.12 Conditions for Deadlock


Source: Stallings (2015, p. 219)
There are three approaches to implementing mutual exclusion (Stallings, 2015, p. 219):

 Leave the responsibility with the processes themselves.


 Provide support through the operating system, or through the programming
language.
 Allow access to shared resources only through special-purpose machine
instructions.

Figure 8.13 Conditions for Deadlock


Source: Stallings (2015, p. 219)
Hardware approaches to mutual exclusion (Stallings, 2015, p. 219):

 Disabling interrupts during a process's critical section
o Applies only in single-processor systems
o Disadvantages
 Tracking of time is affected
 Control may never be returned
 Atomic test-and-set
o SET: writes "1" to a memory location and returns its old value as a single
atomic operation.
o TEST: multiple processes may access the same memory location.
 Atomic compare-and-swap, used in multithreading (a spinlock sketch based on
compare-and-swap is given after this list)
o Compare: compares the contents of a memory location with a given value.
o Swap: if the contents are the same, modifies the contents of that memory location
to a new given value.
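
In Java, compare-and-swap is available through the java.util.concurrent.atomic classes. The sketch below (illustrative, not taken from Stallings) uses AtomicBoolean.compareAndSet to build a simple spinlock: a thread enters its critical section only after it succeeds in atomically changing the flag from false to true.

import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // compareAndSet(false, true) is the atomic compare-and-swap:
        // it succeeds only if the flag is currently false; otherwise we keep spinning.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();               // hint that the thread is busy-waiting
        }
    }

    public void unlock() {
        locked.set(false);                     // release the lock
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                counter[0]++;                  // critical section: one thread at a time
                lock.unlock();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("Counter = " + counter[0]);   // always 200000 with the lock in place
    }
}
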
Lesson 5. Semaphores
The fundamental principle is this: Two or more processes can cooperate by
means of simple signals, such that a process can be forced to stop at a specified
place until it has received a specific signal (Stallings, 2015, p. 220).

Figure 8.14 Conditions for Deadlock
Source: Stallings (2015, p. 220)
Any complex coordination requirement can be satisfied by the appropriate
structure of signals. For signaling, special variables called semaphores are used.
The fundamental idea of semaphores is that processes “communicate” via
global counters that are initialized to a positive integer and that can be accessed only
through two atomic operations (Stallings, 2015, p. 221):
 semSignal(x)- increments the value of the semaphore x.
 semWait(x)- tests the value of the semaphore x:
o If x > 0, the process decrements x and continues;
o If x = 0, the process is blocked until some other process performs a
semSignal, then it proceeds as above.

Figure 8.15 Shared Data (Semaphores)


Source: Stallings (2015, p. 221)
A critical code section is then protected by bracketing it between these two
operations (Stallings, 2015, p. 221):

semWait (x);
<critical code section>
semSignal (x);

Figure 8.16 Semaphores


Source: Stallings (2015, p. 53)
In general, the number of processes that can execute this critical section
simultaneously is determined by the initial value given to x (Stallings, 2015, p.
221).

If more than this number try to enter the critical section, the excess processes
will be blocked until some processes exit (Stallings, 2015, p. 221).
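
Java's standard library already provides a counting semaphore, java.util.concurrent.Semaphore, whose acquire and release calls play the roles of semWait and semSignal above. A minimal sketch, assuming an initial value of 2 so that at most two threads may be inside the critical section at the same time:

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore x = new Semaphore(2);            // initial value 2: at most two threads inside

        Runnable worker = () -> {
            try {
                x.acquire();                       // semWait(x): blocks while the count is 0
                System.out.println(Thread.currentThread().getName() + " in critical section");
                Thread.sleep(500);                 // <critical code section>
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                x.release();                       // semSignal(x): increments the count
            }
        };

        for (int i = 0; i < 5; i++) {
            new Thread(worker, "worker-" + i).start();
        }
    }
}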

Lesson 6. Monitor as Deadlock Prevention


A monitor is essentially an object which has the semaphore variables as
internal (private) data and the semaphore operations as (public) operations (Stallings,
2015, p. 227).

Figure 8.17 Monitor


Source: Stallings (2015, p. 229)
Mutual exclusion is provided by allowing only one process to execute the monitor’s
code at any given time.

Figure 8.17 Semaphores monitor’s code


Source: Stallings (2015, p. 230)

Monitors are significantly easier to validate than “bare” semaphores for at least two reasons
(Stallings, 2015, p. 230):
 All synchronization code is confined to the monitor; and
 Once the monitor is correct, any number of processes sharing the
resource will operate correctly.
Table 8.1 Difference between Semaphores and Monitors (Stallings, 2015, p. 229-233)
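
In Java, a monitor can be written as a class whose public methods are declared synchronized: the language then guarantees that only one thread executes the monitor's code at a time, and wait/notifyAll provide the condition synchronization. The bounded counter below is an illustrative sketch, not an example taken from Stallings.

public class BoundedCounter {
    private int value = 0;
    private final int max;

    public BoundedCounter(int max) { this.max = max; }

    // Only one thread at a time may be inside any synchronized method of this object.
    public synchronized void increment() throws InterruptedException {
        while (value == max) {
            wait();                    // condition wait: release the monitor and sleep
        }
        value++;
        notifyAll();                   // wake threads waiting on the condition
    }

    public synchronized void decrement() throws InterruptedException {
        while (value == 0) {
            wait();
        }
        value--;
        notifyAll();
    }

    public synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        BoundedCounter counter = new BoundedCounter(1);
        Runnable up = () -> { try { for (int i = 0; i < 5; i++) counter.increment(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } };
        Runnable down = () -> { try { for (int i = 0; i < 5; i++) counter.decrement(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } };
        Thread producer = new Thread(up), consumer = new Thread(down);
        producer.start(); consumer.start();
        producer.join();  consumer.join();
        System.out.println("Final value = " + counter.get());   // always 0
    }
}

All synchronization is confined to the class itself, which is exactly why monitors are easier to validate than bare semaphores.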

Assessment Task

I. ESSAY (80 points) Perform what is being asked. Type your answer on EDMODO or
Google Classroom.

1. State the four conditions of a deadlock. (20 points)


2. Differentiate Monitors and Semaphores. (20 points)
3. Differentiate Deadlock Prevention, Avoidance, and Detection. (20 points)
4. Differentiate Concurrency from Deadlock. (20 points)
Summary

This module discussed concurrency: the principles of concurrency, the principles of
deadlock, deadlock prevention, avoidance, and detection, mutual exclusion,
semaphores, and monitors.

References

 Stallings, W. (2015). Operating Systems: Internals and Design Principles (8th ed.). Pearson Education.
 Tanenbaum, A. (2015). Modern Operating Systems (4th ed.). Pearson Education.
 McHoes, A., & Flynn, I. (2014). Understanding Operating Systems (7th ed.). Cengage Learning.
MODULE 9
SCHEDULING

Introduction

This module tackles scheduling: the types of processor scheduling, scheduling
algorithms, multiprocessor and multicore scheduling, and real-time scheduling.

Learning Outcomes

At the end of this lesson, the student should be able to:

1. Differentiate the types of processor scheduling;


2. Assess the performance of different scheduling
policies; and
3. Identify the scheduling technique used in operating
systems.

Lesson 1. Types of Processor Scheduling
The aim of processor scheduling is to assign processes to be executed by
the processor, in a way that meets system objectives (Stallings, 2015, p. 398).
Scheduling affects the performance of the system because it determines which
processes will wait and which will progress (Stallings, 2015, p. 398).
Types of Scheduling
 Long-Term - This determines which programs are admitted to the system
for processing. Once admitted, a job or user program becomes a process and is
added to the queue for the short-term scheduler (Stallings, 2015, p. 399).

Figure 9.1 Scheduling


Source: Stallings (2017, p. 400)
 Medium-Term - Medium-term scheduling is part of the swapping function.
The swapping-in decision is based on the need to manage the degree of
multiprogramming (Stallings, 2015, p. 401).
 Short-Term - It is known as the dispatcher which executes process/threads
whenever an event occurs that may lead to the blocking of the current process
(Stallings, 2015, p. 402).
Examples: clock interrupts, I/O interrupts, OS calls & semaphores

Non-Preemptive Scheduling
Non-preemptive scheduling means that once a process starts its execution, or the
CPU is processing a specific process, it cannot be halted; in other words, we cannot
preempt (take control of) the CPU and give it to another process (Stallings, 2015, p. 406).
A computer system implementing this cannot execute processes in a multitasking
fashion; it executes all the processes in a sequential manner (Stallings, 2015, p. 406).
Preemptive Scheduling
Preemptive scheduling means that once a process has started its execution, the
currently running process can be paused for a short period of time to handle some
other process of higher priority; that is, we can preempt the CPU from
one process and give it to another if required (Stallings, 2015, p. 406).
A computer system implementing this supports multitasking, as it gives the
user the impression of working on multiple processes at the same time (Stallings,
2015, p. 406).

Table 9.1 Comparison of Preemptive Scheduling and Non-Preemptive Scheduling
(Stallings, 2015, p. 406)

Preemptive Scheduling                         Non-Preemptive Scheduling
The processor can be preempted to execute     Once the processor starts to execute a
a different process in the middle of the      process, it must finish it before executing
execution of the current process.             another. It cannot be paused in the middle.

CPU utilization is higher compared to         CPU utilization is lower compared to
non-preemptive scheduling.                    preemptive scheduling.

Waiting time and response time are lower.     Waiting time and response time are higher.

Scheduling is priority-driven; the            When a process enters the running state,
highest-priority process should always be     it is not removed from the scheduler until
the one currently running.                    it finishes its service time.

If a high-priority process frequently         If a process with a long CPU burst time is
arrives in the ready queue, a low-priority    running, another process with a shorter
process may starve.                           burst time may have to wait a long time.
Lesson 2. Scheduling Algorithms
The different CPU scheduling algorithms are:

 First‐Come, First Served Algorithm


 Shortest Process First Algorithm
 Shortest Remaining Time First Algorithm
 Round Robin Algorithm
 Priority Scheduling

First Come First Serve (no arrival Time)

It is a non-preemptive scheduling algorithm wherein the process that enters the
ready queue first gets to be executed by the CPU first. In choosing the next process
to be executed, the CPU scheduler selects the process at the front of the ready
queue. Processes enter at the rear of the ready queue (Stallings, 2015, p. 407).
A worked waiting-time and turnaround-time sketch follows Figure 9.3 below.

Figure 9.2 Scheduling FCFS
Source: Stallings (2017, p. 408)
First Come First Serve (with Arrival Time)

Figure 9.3 Scheduling FCFS
Source: Stallings (2017, p. 409)
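
As a minimal sketch of how such FCFS figures are computed (the arrival and burst times below are invented sample data): each process waits until every earlier arrival has finished; its waiting time is its start time minus its arrival time, and its turnaround time is its completion time minus its arrival time.

public class FcfsDemo {
    public static void main(String[] args) {
        // Sample processes, already sorted by arrival time.
        int[] arrival = {0, 1, 2, 3};
        int[] burst   = {5, 3, 8, 6};

        int time = 0;
        double totalWait = 0, totalTurnaround = 0;
        for (int i = 0; i < arrival.length; i++) {
            time = Math.max(time, arrival[i]);        // CPU may sit idle until the process arrives
            int waiting = time - arrival[i];          // time spent in the ready queue
            time += burst[i];                         // run to completion (non-preemptive)
            int turnaround = time - arrival[i];       // completion time minus arrival time
            totalWait += waiting;
            totalTurnaround += turnaround;
            System.out.printf("P%d: waiting=%d turnaround=%d%n", i + 1, waiting, turnaround);
        }
        System.out.printf("Average waiting=%.2f average turnaround=%.2f%n",
                totalWait / arrival.length, totalTurnaround / arrival.length);
    }
}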

Shortest Process Next


Because shortest job first always produces the minimum average response
time for batch systems, it would be nice if it could be used for interactive
processes as well. To a certain extent, it can be. Interactive processes generally
follow the pattern of wait for command, execute command, wait for command,
execute command, etc. If we regard the execution of each command as a separate
‘‘job,’’ then we can minimize overall response time by running the shortest one first. The
problem is figuring out which of the currently runnable processes is the shortest one.
One approach is to make estimates based on past behavior and run the process with
the shortest estimated running time (Tanenbaum, 2015, p. 162).

The technique of estimating the next value in a series by taking the weighted
average of the current measured value and the previous estimate is sometimes called
aging. It is applicable to many situations where a prediction must be made based on
previous values. Aging is especially easy to implement when a = 1/2. All that is
needed is to add the new value to the current estimate and divide the sum by 2 (by
shifting it right 1 bit) (Tanenbaum, 2015, p. 162).
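
A minimal sketch of that estimation step with a = 1/2 (the measured burst values and the initial guess are invented): each new estimate is the weighted average of the latest measured burst and the previous estimate.

public class BurstEstimate {
    public static void main(String[] args) {
        double a = 0.5;                  // weighting factor; a = 1/2 as in the text
        double estimate = 10;            // initial guess for the CPU burst
        int[] measured = {6, 4, 13, 13, 13};

        for (int burst : measured) {
            // new estimate = a * measured + (1 - a) * previous estimate
            estimate = a * burst + (1 - a) * estimate;
            System.out.printf("measured=%d -> next estimate=%.2f%n", burst, estimate);
        }
    }
}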
Figure 9.4 Scheduling SPN
Source: Tanenbaum (2015, p. 162)

Shortest Remaining Time First

It is a preemptive version of SPF algorithm. The currently executing process is


preempted if a process with shorter CPU burst time than the remaining CPU burst
time of the running process arrives in the ready queue. The preempted process will enter
the rear of the queue (McHoes & Flynn, 2011, p. 122).

Figure 9.5 Scheduling SRTF
Source: McHoes & Flynn (2011, p. 123)
Round Robin Scheduling
It is a preemptive version of the FCFS algorithm. The process that enters the ready
queue first gets to be executed by the CPU first, but is given a time limit. This limit is
called the time quantum or time slice. The process will enter the rear of the queue after
its time slice expires (McHoes & Flynn, 2011, p. 124). A simulation sketch follows
Figure 9.6 below.

Figure 9.6 Round Robin Scheduling
Source: McHoes & Flynn (2011, p. 125)
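
A minimal Round Robin simulation in Java (the burst times are invented, and all processes are assumed to arrive at time 0): each process runs for at most one time quantum and, if it still has work left, re-enters the ready queue at the rear.

import java.util.ArrayDeque;
import java.util.Deque;

public class RoundRobinDemo {
    public static void main(String[] args) {
        int quantum = 4;                               // time slice
        int[] remaining = {10, 5, 8};                  // remaining CPU time per process
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < remaining.length; i++) ready.addLast(i);

        int time = 0;
        while (!ready.isEmpty()) {
            int p = ready.pollFirst();                 // take the process at the front
            int run = Math.min(quantum, remaining[p]); // run for one quantum or until it finishes
            time += run;
            remaining[p] -= run;
            if (remaining[p] > 0) {
                ready.addLast(p);                      // time slice expired: back to the rear
            } else {
                System.out.printf("P%d finished at time %d%n", p + 1, time);
            }
        }
    }
}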
Priority Scheduling (non-preemptive)
This algorithm may be non‐preemptive or preemptive. The CPU scheduler
chooses the process with the highest priority to be executed next. Each process is
assigned a priority which is usually expressed as an integer. In this discussion, the lower
the value of the integer, the higher its priority (Tanenbaum, 2015, p. 161).
Figure 9.7 Priority Scheduling
Source: Tanenbaum (2017, p. 161)

Priority Scheduling (preemptive)


In preemptive priority scheduling, the currently executing process is
preempted if a higher-priority process arrives in the ready queue. The preempted process
will enter the rear of the queue. If two or more processes have the same priority, the
FCFS algorithm may be used (Tanenbaum, 2017, p. 161).
Figure 9.8 Priority Scheduling
Source: Tanenbaum (2017, p. 162)

In priority scheduling, the waiting time and turnaround time of higher priorities
are minimized. However, this may lead to starvation, wherein low-priority processes
may wait indefinitely in the ready queue. To solve this, a method called aging can be
used, wherein the priority of a process gradually increases the longer it stays in the
ready queue (Tanenbaum, 2017, p. 162). A minimal sketch of this idea follows.
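
A minimal sketch of aging (the process names, priorities, and aging interval are illustrative): a periodic pass over the ready queue raises the priority of every process that is still waiting; because a lower integer means a higher priority in this discussion, the value is decremented.

import java.util.List;

public class AgingDemo {
    static class Proc {
        final String name;
        int priority;                              // lower value = higher priority
        Proc(String name, int priority) { this.name = name; this.priority = priority; }
    }

    // Called periodically on every process still waiting in the ready queue.
    static void age(List<Proc> readyQueue) {
        for (Proc p : readyQueue) {
            if (p.priority > 0) p.priority--;      // waiting processes slowly gain priority
        }
    }

    public static void main(String[] args) {
        List<Proc> ready = List.of(new Proc("editor", 5), new Proc("backup", 20));
        for (int tick = 0; tick < 3; tick++) age(ready);   // three aging passes
        ready.forEach(p -> System.out.println(p.name + " priority=" + p.priority));
    }
}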

Lesson 3. Multiprocessor and Multicore


These schedulers tend to focus on keeping processors busy by load balancing
so that threads that are ready to run are evenly distributed among the processors. As
the number of cores per chip increases, a need to minimize access to off chip
memory takes precedence over a desire to maximize processor utilization
(Stallings, 2015, p. 433).
Figure 9.9 Multiprocessor and Multicore
Source: Stallings (2017, p. 433)
Granularity of Process to Threads
A good way of characterizing multiprocessors and placing them in context with
other architectures is to consider the synchronization granularity, or frequency of
synchronization, between processes in a system (Stallings, 2015, p. 433-434).

Figure 9.10 Granularity of Process to Threads
Source: Stallings (2017, p. 434)
Lesson 4. Real-Time Scheduling
Real-time computing is a type of computing in which data and commands are
executed as real-time processes or tasks. These may be actual on-time events or
procedures governed by a deadline or a priority (Stallings, 2015, p. 446).

Characteristics of RTS (Stallings, 2015, p. 447):

 Determinism - how long an OS delays before acknowledging an interrupt.
 Responsiveness - how long, after acknowledgment, it takes an operating
system to service the interrupt.
 User control - the user should be able to distinguish between hard and soft
tasks and to specify relative priorities within each class.
 Reliability - avoiding loss or degradation of performance.
 Fail-soft operation - preserving as much data as possible during a system failure.
Assessment Task

First Come First Serve (FCFS) Algorithm


Objectives:

 At the end of the exercise, the students should be able to:

 Familiarize themselves with the FCFS CPU Scheduling Algorithm; and
 Use proper programming demonstration and techniques in implementing algorithms.

Materials:

 PC with Microsoft Visual Studio / NetBeans installed (preferably C#, Java, or VB.NET)

Procedures:

1. Create a new program and name the project “Surname-FCFSALGO”.


2. With the use of Form Controls, the student should create a process
simulation program using the FCFS Algorithm.
3. Data for Input:
a. Four Processes as P1, P2, P3, & P4
b. Four Textboxes for processes’ Burst Time
c. Four Textboxes for processes’ Arrival Time
4. Data to be displayed:
a. Labels for the Waiting Time of each process.
b. Labels for the Turnaround Time of each process.
c. Labels for the Total Turnaround Time and Total Waiting Time.
d. Labels for the Average Turnaround Time and Average Waiting Time.
e. Label for the particular process during execution.
f. Four Progress Bars, one for each process.
g. Label for Finished Process.
5. Then submit the finished program to your Google Classroom/Edmodo.
Summary

This module discussed scheduling: the types of processor scheduling, scheduling
algorithms, multiprocessor and multicore scheduling, and real-time scheduling.

References

 Stallings, W. (2015). Operating Systems: Internals and Design Principles (8th ed.). Pearson Education.
 Tanenbaum, A. (2015). Modern Operating Systems (4th ed.). Pearson Education.
 McHoes, A., & Flynn, I. (2014). Understanding Operating Systems (7th ed.). Cengage Learning.
 Haldar, S., & Aravind, A. (2010). Operating Systems. India: Dorling Kindersley.
MODULE 10
USING THE vi EDITOR IN LINUX

Introduction

This module introduces the vi editor and describes the vi commands. These
commands include the input commands, the positioning commands, and the editing
commands (Tutorialspoint, 2013).

Learning Outcomes

At the end of this lesson, the student should be able to:

1. Describe the fundamentals of the vi editor.


2. Modify files by using the vi editor.

Lesson 1. Manipulating and Managing Files and Directories

Fundamentals of the vi Editor

The visual display or vi editor is an interactive editor that you can use
to create and modify text files. You can use the vi editor when the desktop
environment window system is not available. The vi editor is also the only text
editor that you can use to edit certain system files without changing the
permissions of the files.

All text editing with the vi editor takes place in a buffer. You can either write
the changes to the disk, or discard them (Tutorialspoint, 2013).

The vi Editor Modes of Operation

The vi editor is a command-line editor that has three basic modes of operation:

 Command mode

 Edit mode

 Last line mode

1. Command Mode

The command mode is the default mode for the vi editor. In this mode,
you can perform commands to delete, change, copy, and move text. You
can also position the cursor, search for text strings, and exit the
vi editor (Tutorialspoint, 2013).

2. Edit Mode

You can enter text into a file in the edit mode. The vi editor
interprets everything you type in the edit mode as text. To enter the edit
mode, perform the commands:

• i – Inserts text before the cursor

• o – Opens a new blank line below the cursor

• a – Appends text after the cursor (Tutorialspoint, 2013)

3. Last Line Mode

You can use advanced editing commands in the last line mode. To
access the last line mode, enter a colon (:) while in the command mode.
The colon places your cursor at the bottom line of the screen
(Tutorialspoint, 2013).

Switching Between the Command and Edit Modes


The default mode for the vi editor is the command mode. When you perform an
i, o, or a command, the vi editor switches to the edit mode. After editing a
file, press Escape to return the vi editor to the command mode. When in
the command mode, you can save the file and quit the vi editor.

The following example shows how to switch modes in the vi editor:

1. Perform the vi filename command to create a file. You
are automatically in the command mode.
2. Type the i command to insert text. The i command switches the vi editor to the
edit mode.
3. Press Escape to return to the command mode.
4. Perform the :wq command to save the file and exit the vi editor (Tutorialspoint,
2013).

Using the vi Command

The vi command enables you to create, edit, and view files in
the vi editor.

The syntax for the vi command is:

vi
vi filename

vi options filename

If the system crashes while you are editing a file, you can use the
-r option to recover the file.

To recover a file, perform the command:

$ vi -r filename

The file opens so that you can edit it. You can then save the file and exit the vi editor.

To open a file in the read-only mode, perform the command:

$ vi -R filename

The file opens in the read-only mode to prevent accidental overwriting of
the contents of the file (Tutorialspoint, 2013).
Lesson 2. Modifying Files with the vi Editor

You can use the vi editor to view files in the read-only mode, or you can edit
files in the vi editor using the vi editing commands. When using the vi editor, you can
move the cursor using certain key sequences (Tutorialspoint, 2013).

Viewing Files in the Read-Only Mode

The view command enables you to view files in the read-only mode. It invokes the vi editor
with the read-only option. Although most of the vi commands are available, you cannot
save changes to the file (Tutorialspoint, 2013).

The syntax for the view command is:

view filename

To view the dante file in the read-only mode, perform the command:

$ cd

$ view dante

The dante file appears. Perform the :q command to exit the file and the
vi editor.

Inserting and Appending Text

Table 10.1 Input Commands for the vi Editor

Source: https://www.tutorialspoint.com/unix/unix-vi-editor.htm

Command Function

a Appends text after the cursor

A Appends text at the end of the line

i Inserts text before the cursor

I Inserts text at the beginning of the line

o Opens a new line below the cursor

O Opens a new line above the cursor

:r filename Inserts text from another file into the current file

Note – The vi editor is case sensitive. Use the appropriate case for
the input commands.

Lesson 3. Displaying the Command History

Using the history Command

The history command displays previously-executed commands. By default, the history


command displays the last 16 commands to the standard output (Tutorialspoint, 2013).

The syntax for the history command is:

history option

To display previously executed commands, perform the following command:

$ history

date
cd /etc
touch dat1 dat2
ps -ef
history

The history command is an alias built into the Korn shell that enables you to display
previously executed commands.

To display the command history without line numbers, perform the
following command:
$ history -n
...
date
cd /etc
touch dat1 dat2
ps -ef
history

To display the current command and the four commands preceding it, perform the following
command:

$ history -4
...
107 date
108 cd /etc
109 touch dat1 dat2
110 ps -ef
111 history

Using the r Command

The r command is an alias built into the Korn shell that enables you to repeat a
command (Tutorialspoint, 2013).
To repeat the cal command by using the r command, perform the following
command:

$ cal

   December 2004
 S  M Tu  W Th  F  S
          1  2  3  4
 5  6  7  8  9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31

$ r

cal

   December 2004
 S  M Tu  W Th  F  S
          1  2  3  4
 5  6  7  8  9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31
Assessment Task

I. Answer the following accordingly.


1. Which specific shell characters have special meaning to the shell?

2. Name some common shell metacharacters.

3. Which metacharacter is a shell substitute for the home directory of a


user?

4. Write the command to navigate to your home directory from your current
working directory using the appropriate special metacharacter.
II. To use directory and file commands, complete the following steps. Write
the commands that you would use to perform each task in the space
provided.
1. Return to your home directory (if you need to), and list the contents.

2. Copy the dir1/coffees/beans/beans file into the dir4 directory, and call it roses.

3. Create a directory called vegetables in dir3.

4. Move the dir1/coffees/beans/beans file into the dir2/recipes directory.


5. Complete the missing options and their descriptions in the table below.

Option     Description

-i

           Includes the contents of a directory, including
           the contents of all subdirectories, when you
           copy a directory
6. From your home directory, create a directory called practice1.

7. Using a single command, copy the files file.1 and file.2 to the
practice1 directory.

8. Copy the dir3/planets/mars file to the practice1 directory, and name the
file addresses.

9. Create a directory called play in your practice1 directory, and move
the practice1/addresses file to the play directory.

10. Using a single command with options, copy the play directory in
the practice1 directory to a new directory in the practice1 directory called
appointments.
Summary

This module discussed how the vi editor works in Unix. There are
many ways to edit files in Unix, and editing files using the screen-oriented text editor vi
is one of the best. This editor enables you to edit lines in context with other
lines in the file.

An improved version of the vi editor, called VIM, is also
available; VIM stands for Vi IMproved.
vi is generally considered the de facto standard among Unix editors because:
 It is usually available on all flavors of the Unix system.
 Its implementations are very similar across the board.
 It requires very few resources.
 It is more user-friendly than other editors such as ed or ex.
You can use the vi editor to edit an existing file or to create a new file from scratch.
You can also use this editor to just read a text file.

References

 Stallings, W. (2015). Operating Systems: Internals and Design Principles (8th ed.). Pearson Education.
 Tanenbaum, A. (2015). Modern Operating Systems (4th ed.). Pearson Education.
 Tutorialspoint. (2013). Unix - The vi Editor Tutorial.

*END OF MODULE FOR THE FINAL TERM PERIOD. THERE WILL BE A FINAL EXAMINATION IN MAY 2021.
