CCS 3301 SYSTEMS PROGRAMMING Course
Prerequisite
Operating Systems
Purpose
Objectives
At the end of the course, a student is expected to be able to:
COURSE CONTENTS
A runtime environment is a virtual machine state which provides software services for
processes or programs while a computer is running. It may pertain to the operating
system itself, or the software that runs beneath it. The primary purpose is to accomplish
the objective of "platform independent" programming.
Runtime activities include loading and linking of the classes needed to execute a
program, optional machine code generation and dynamic optimization of the program,
and actual program execution.
For example, a program written in Java would receive services from the Java Runtime
Environment by issuing commands from which the expected result is returned by the
Java software. By providing these services, the Java software is considered the runtime
environment of the program. Both the program and the Java software combined request
services from the operating system. The operating system kernel provides services for
itself and all processes and software running under its control. The Operating System
may be considered as providing a runtime environment for itself.
In most cases, the operating system handles loading the program with a piece of code
called the loader, doing basic memory setup and linking the program with any
dynamically linked libraries it references. In some cases a language or implementation
will have these tasks done by the language runtime instead, though this is unusual in
mainstream languages on common consumer operating systems.
Some program debugging can only be performed (or is more efficient or accurate)
when performed at runtime. Logical errors and array bounds checking are examples.
For this reason, some programming bugs are not discovered until the program is tested
in a "live" environment with real data, despite sophisticated compile-time checking and
pre-release testing. In this case, the end user may encounter a runtime error message.
Early runtime libraries such as that of Fortran provided such features as mathematical
operations. Other languages add more sophisticated memory garbage collection, often
in association with support for objects.
Examples of RTEs:
An Operating System provides services to both the users and to the programs.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities, ranging from user programs to
system programs such as the printer spooler, name servers, and file servers. Each of
these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system
with respect to program management −
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
An I/O operation means a read or write operation on a file or a specific I/O device.
The operating system provides access to the required I/O device when needed.
Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −
Error handling
Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices,
or in the memory hardware. Following are the major activities of an operating system
with respect to error handling −
Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and file storage have to be allocated to each user or job. Following are the
major activities of an operating system with respect to resource management −
Protection
Considering a computer system having multiple users and concurrent execution of
multiple processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes,
or users to the resources defined by a computer system. Following are the major
activities of an operating system with respect to protection −
Batch processing
Batch processing is a technique in which an Operating System collects the programs
and data together in a batch before processing starts. An operating system does the
following activities related to batch processing −
The OS defines a job which has a predefined sequence of commands, programs
and data as a single unit.
Advantages
Batch processing moves much of the operator's work to the computer.
Increased performance, as a new job gets started as soon as the previous job is
finished, without any manual intervention.
Disadvantages
Multitasking
Multitasking is when multiple jobs are executed by the CPU simultaneously by switching
between them. Switches occur so frequently that the users may interact with each
program while it is running. An OS does the following activities related to multitasking −
The user gives instructions to the operating system or to a program directly, and
receives an immediate response.
The OS handles multitasking by executing multiple operations/programs at a time.
Multitasking Operating Systems are also known as Time-sharing systems.
Multiprogramming
Sharing the processor, when two or more programs reside in memory at the same time,
is referred to as multiprogramming. Multiprogramming assumes a single shared
processor. Multiprogramming increases CPU utilization by organizing jobs so that the
CPU always has one to execute.
The following figure shows the memory layout for a multiprogramming system.
Interactivity
Interactivity refers to the ability of users to interact with a computer system. An Operating
system does the following activities related to interactivity −
Real-Time Systems
In such systems, the operating system typically reads from and reacts to sensor data.
The operating system must guarantee response to events within fixed periods of time
to ensure correct performance.
Distributed Environment
A distributed environment refers to multiple independent CPUs or processors in a
computer system. An operating system does the following activities related to distributed
environment −
The OS distributes computation logics among several physical processors.
The processors do not share memory or a clock. Instead, each processor has its
own local memory.
The OS manages the communications between the processors. They
communicate with each other through various communication lines.
Spooling
Spooling is an acronym for simultaneous peripheral operations on line. Spooling refers
to putting data of various I/O jobs in a buffer. This buffer is a special area in memory or
hard disk which is accessible to I/O devices.
An operating system does the following activities related to spooling −
Handles I/O device data spooling as devices have different data access rates.
Maintains the spooling buffer which provides a waiting station where data can rest
while the slower device catches up.
Enables parallel computation, as spooling lets a computer perform I/O in parallel
with computation: it becomes possible to have the computer read data from a tape,
write data to disk, and write out to a printer while it is doing its computing task.
Process management
Process
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned
in the program.
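As a tiny sketch of a program becoming a process: every running instance of a program gets its own process ID from the kernel. The helper name report_pid is illustrative, not from the text.

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Every running instance of a program is a process with its own ID. */
pid_t report_pid(void) {
    pid_t pid = getpid();    /* ID of this process */
    pid_t ppid = getppid();  /* ID of the parent process (e.g. the shell) */
    printf("process %d, started by %d\n", (int)pid, (int)ppid);
    return pid;
}
```

Running the same program twice produces two processes with different IDs.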
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image shows a
simplified layout of a process inside main memory −
1 Stack
The process Stack contains the temporary data such as method/function
parameters, return address and local variables.
2 Heap
This is dynamically allocated memory to a process during its run time.
3 Text
This section contains the compiled program code, read in from non-volatile storage
when the program is launched. The current activity is represented by the value of the
Program Counter and the contents of the processor's registers.
4 Data
This section contains the global and static variables.
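To make the four sections concrete, here is a minimal sketch; the names global_initialized, file_static and next_value are illustrative, not from the text. The function's compiled instructions live in the text section, the globals in the data section, the local variable on the stack, and the malloc'd cell on the heap.

```c
#include <stdlib.h>

int global_initialized = 7;   /* data section: initialized global        */
static int file_static;       /* data (BSS) section: zero-initialized    */

/* The compiled instructions of this function live in the text section. */
int next_value(void) {
    int local = global_initialized;              /* stack: local variable */
    int *heap_cell = malloc(sizeof *heap_cell);  /* heap: runtime allocation */
    *heap_cell = local + file_static;
    int result = *heap_cell;
    free(heap_cell);
    return result;
}
```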
Process Life Cycle
1 Start
This is the initial state when a process is first started/created.
2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to
have the processor allocated to them by the operating system so that they can run.
A process may come into this state after the Start state, or while running, when it is
interrupted by the scheduler to assign the CPU to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.
4 Waiting
The process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it
is moved to the terminated state where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for
every process. The PCB is identified by an integer process ID (PID) and keeps the
information listed below.
1 Process State
The current state of the process i.e., whether it is ready, running, waiting, or
whatever.
2 Process privileges
This is required to allow/disallow access to system resources.
3 Process ID
Unique identification for each of the process in the operating system.
5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed
for this process.
6 CPU registers
Various CPU registers in which the process context is stored when the process is in
the running state.
9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution
ID etc.
10 IO status information
This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the Operating System and may
contain different information in different operating systems. Here is a simplified
diagram of a PCB −
Multithreading
Thread
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread shares with its peer threads some information, such as the code segment,
the data segment and open files. When one thread alters a memory item in the data
segment, all other threads see that change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. Threads represent a software approach
to improving operating system performance by reducing the overhead of a full,
classical process.
Each thread belongs to exactly one process and no thread can exist outside a process.
Each thread represents a separate flow of control. Threads have been successfully used
in implementing network servers and web servers. They also provide a suitable foundation
for parallel execution of applications on shared-memory multiprocessors.
1 A process is heavyweight or resource intensive; a thread is lightweight, taking
fewer resources than a process.
2 Process switching needs interaction with the operating system; thread switching
does not need to interact with the operating system.
3 In multiple processing environments, each process executes the same code but has
its own memory and file resources; all threads can share the same set of open files
and child processes.
5 Multiple processes without using threads use more resources; multithreaded
processes use fewer resources.
6 In multiple processes, each process operates independently of the others; one
thread can read, write or change another thread's data.
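The sharing described above can be demonstrated with POSIX threads: both threads update the same global variable because they live in one process. The names shared, bump and run_two_threads are illustrative; the sketch assumes a pthreads-capable system (link with -lpthread).

```c
#include <pthread.h>

int shared = 0;  /* one copy, visible to every thread in the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock);
        shared++;                 /* all threads update the same variable */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Start two peer threads and wait for both to finish. */
int run_two_threads(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared;
}
```

The mutex is what makes the two threads' updates safe; without it the final count could be anything up to 2000.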
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in following two ways −
User Level Threads − User managed threads.
Kernel Level Threads − Operating System managed threads acting on kernel,
an operating system core.
Advantages
The Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread
facility. Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors
and a blocking system call need not block the entire process. Multithreading models are
of three types −
Many-to-many relationship
Many-to-one relationship
One-to-one relationship
1 User-level threads are faster to create and manage; kernel-level threads are slower
to create and manage.
3 A user-level thread is generic and can run on any operating system; a kernel-level
thread is specific to the operating system.
From the programmer's perspective, each Windows process includes resources such as the
following components:
Each thread in a process shares code, global variables, environment strings, and resources. Each thread is
independently scheduled, and a thread has the following elements:
A stack for procedure calls, interrupts, exception handlers, and automatic storage.
Thread Local Storage (TLS)—An arraylike collection of pointers giving each thread the ability to allocate
storage to create its own unique data environment.
An argument on the stack, from the creating thread, which is usually unique for each thread.
A context structure, maintained by the kernel, with machine register values.
Figure below shows a process with several threads. This figure is schematic and does not indicate actual
memory addresses, nor is it drawn to scale.
This chapter shows how to work with processes consisting of a single thread. Chapter 7 shows how to use
multiple threads.
Since every single user request may result in multiple processes running in
the operating system, the processes may need to communicate with each
other. Each IPC approach has its own advantages and limitations, so it
is not unusual for a single program to use several of the IPC methods.
Information sharing: Since some users may be interested in the same piece of information
(for example, a shared file), you must provide an environment that allows concurrent
access to that information.
Computation speedup: If you want a particular task to run faster, you must break it into
sub-tasks, each of which gets executed in parallel with the others. Note that such a
speed-up can be attained only when the computer has multiple processing elements,
such as CPUs or I/O channels.
Modularity: You may want to build the system in a modular way by dividing the system
functions into separate processes or threads.
Convenience: Even a single user may work on many tasks at a time. For example, a user
may be editing, formatting, printing, and compiling in parallel.
Interprocess communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or transferring of
data from one process to another.
A diagram that illustrates interprocess communication is as follows −
Socket Types
There are four types of sockets available to the users. The first two are most commonly
used and the last two are rarely used.
Stream Sockets − Delivery in a networked environment is guaranteed (TCP).
Datagram Sockets − Delivery in a networked environment is not guaranteed (UDP).
Raw Sockets − Provide users access to the underlying communication protocols.
Sequenced Packet Sockets − Similar to stream sockets, but record boundaries are
preserved; rarely used.
Processes are presumed to communicate only between sockets of the same type, but
there is no restriction that prevents communication between sockets of different types.
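As a small sketch of the two common types, the helper below (its name is illustrative) asks the kernel for a socket of a given type in the Internet family and reports whether it was granted.

```c
#include <sys/socket.h>
#include <unistd.h>

/* Create a socket of the given type (e.g. SOCK_STREAM, SOCK_DGRAM)
   in the Internet family and report whether the kernel accepted it. */
int can_create(int type) {
    int fd = socket(AF_INET, type, 0);
    if (fd < 0)
        return 0;   /* type not supported or not permitted */
    close(fd);
    return 1;
}
```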
Subnetting
Subnetting or subnetworking basically means to branch off a network. It can be done for
a variety of reasons like network in an organization, use of different physical media (such
as Ethernet, FDDI, WAN, etc.), preservation of address space, and security. The most
common reason is to control network traffic.
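As a sketch of the mechanics behind subnetting, the helper below (an illustrative name, not from the text) applies a prefix-length subnet mask to an IPv4 address to extract the network part, assuming the common CIDR /n notation.

```c
#include <stdint.h>

/* Return the network part of an IPv4 address (host byte order)
   under a given prefix length, e.g. /24 keeps the top 24 bits. */
uint32_t network_part(uint32_t addr, int prefix_len) {
    uint32_t mask = (prefix_len == 0) ? 0
                                      : 0xFFFFFFFFu << (32 - prefix_len);
    return addr & mask;   /* host bits are cleared */
}
```

For example, 192.168.10.5 under /24 yields the network address 192.168.10.0.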
Client Process
This is the process, which typically makes a request for information. After getting the
response, this process may terminate or may do some other processing.
Example, Internet Browser works as a client application, which sends a request to the
Web Server to get one HTML webpage.
Server Process
This is the process which takes a request from the clients. After getting a request from
the client, this process will perform the required processing, gather the requested
information, and send it to the requestor client. Once done, it becomes ready to serve
another client. Server processes are always alert and ready to serve incoming requests.
Example − Web Server keeps waiting for requests from Internet Browsers and as soon
as it gets any request from a browser, it picks up a requested HTML page and sends it
back to that Browser.
Note that the client needs to know the address of the server, but the server does not
need to know the address or even the existence of the client prior to the connection being
established. Once a connection is established, both sides can send and receive
information.
Types of Server
There are two types of servers you can have −
Iterative Server − This is the simplest form of server where a server process
serves one client and after completing the first request, it takes request from
another client. Meanwhile, another client keeps waiting.
Concurrent Servers − This type of server runs multiple concurrent processes to
serve many requests at a time because one process may take longer and another
client cannot wait for so long. The simplest way to write a concurrent server under
Unix is to fork a child process to handle each client separately.
The system calls for establishing a connection are somewhat different for the client and
the server, but both involve the basic construct of a socket. Both the processes establish
their own sockets.
The steps involved in establishing a socket on the client side are as follows −
Create a socket with the socket() system call.
Connect the socket to the address of the server using the connect() system call.
Send and receive data. There are a number of ways to do this, but the simplest
way is to use the read() and write() system calls.
sockaddr
The first structure is sockaddr that holds the socket information −
struct sockaddr {
   unsigned short sa_family;
   char sa_data[14];
};
Here is the description of the member fields −
sa_family AF_INET It represents an address family; for Internet-based applications,
this is AF_INET.
sa_data Protocol-specific Address The content of the 14 bytes of protocol-specific
address is interpreted according to the type of address. For the Internet family, we
will use the port number and IP address, which is represented by the sockaddr_in
structure defined below.
sockaddr_in
The second structure that helps you to reference to the socket's elements is as follows
−
struct sockaddr_in {
short int sin_family;
unsigned short int sin_port;
struct in_addr sin_addr;
unsigned char sin_zero[8];
};
Here is the description of the member fields −
sin_zero Not Used This is just padding; set this value to zero ('\0') as it is not being used.
in_addr
This structure is used only in the above structure as a structure field and holds 32 bit
netid/hostid.
struct in_addr {
unsigned long s_addr;
};
Here is the description of the member fields −
hostent
This structure is used to keep information related to host.
struct hostent {
   char *h_name;
   char **h_aliases;
   int h_addrtype;
   int h_length;
   char **h_addr_list;
};
Here is the description of the member fields −
h_addrtype AF_INET It contains the address family; for Internet-based applications, it
will always be AF_INET.
h_length 4 It holds the length of the IP address, which is 4 for an Internet address.
h_addr_list in_addr For Internet addresses, the array of pointers h_addr_list[0],
h_addr_list[1], and so on point to struct in_addr.
servent
This particular structure is used to keep information related to service and associated
ports.
struct servent {
char *s_name;
char **s_aliases;
int s_port;
char *s_proto;
};
Here is the description of the member fields −
s_name http This is the official name of the service. For example, SMTP, FTP, POP3, etc.
s_aliases ALIAS It holds the list of service aliases. Most of the time this will be set to NULL.
s_port 80 It will have the associated port number. For example, for HTTP, this will be 80.
Socket Structures
Socket address structures are an integral part of every network program. We allocate
them, fill them in, and pass pointers to them to various socket functions. Sometimes we
pass a pointer to one of these structures to a socket function and it fills in the contents.
We always pass these structures by reference (i.e., we pass a pointer to the structure,
not the structure itself), and we always pass the size of the structure as another
argument.
When a socket function fills in a structure, the length is also passed by reference, so that
its value can be updated by the function. We call these value-result arguments.
Always set the structure variables to zero (i.e., '\0') using the memset() or bzero()
functions; otherwise the structure may contain unexpected junk values.
s_proto TCP It is set to the protocol used. Internet services are provided using either
TCP or UDP.
/* server.c - a simple iterative TCP server */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>

int main(int argc, char *argv[]) {
   int sockfd, newsockfd, portno, n;
   socklen_t clilen;
   char buffer[256];
   struct sockaddr_in serv_addr, cli_addr;

   if (argc < 2) {
      fprintf(stderr,"usage %s port\n", argv[0]);
      exit(0);
   }
   /* First call to socket() function */
   sockfd = socket(AF_INET, SOCK_STREAM, 0);
   if (sockfd < 0) {
      perror("ERROR opening socket");
      exit(1);
   }
   /* Initialize socket structure */
   memset(&serv_addr, 0, sizeof(serv_addr));
   portno = atoi(argv[1]);
   serv_addr.sin_family = AF_INET;
   serv_addr.sin_addr.s_addr = INADDR_ANY;
   serv_addr.sin_port = htons(portno);
   /* Bind the host address */
   if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
      perror("ERROR on binding");
      exit(1);
   }
   /* Start listening for the clients */
   listen(sockfd,5);
   clilen = sizeof(cli_addr);
   /* Accept an actual connection from a client */
   newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
   if (newsockfd < 0) {
      perror("ERROR on accept");
      exit(1);
   }
   /* Start communicating */
   memset(buffer, 0, sizeof(buffer));
   n = read(newsockfd, buffer, sizeof(buffer) - 1);
   if (n < 0) {
      perror("ERROR reading from socket");
      exit(1);
   }
   printf("Here is the message: %s\n", buffer);
   n = write(newsockfd, "I got your message", 18);
   if (n < 0) {
      perror("ERROR writing to socket");
      exit(1);
   }
   return 0;
}
/* server_fork.c - a concurrent TCP server: one child process per client */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>

void doprocessing(int sock);

int main(int argc, char *argv[]) {
   int sockfd, newsockfd, portno, pid;
   socklen_t clilen;
   struct sockaddr_in serv_addr, cli_addr;

   sockfd = socket(AF_INET, SOCK_STREAM, 0);
   if (sockfd < 0) {
      perror("ERROR opening socket");
      exit(1);
   }
   memset(&serv_addr, 0, sizeof(serv_addr));
   portno = atoi(argv[1]);
   serv_addr.sin_family = AF_INET;
   serv_addr.sin_addr.s_addr = INADDR_ANY;
   serv_addr.sin_port = htons(portno);
   if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
      perror("ERROR on binding");
      exit(1);
   }
   listen(sockfd,5);
   clilen = sizeof(cli_addr);
   while (1) {
      newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
      if (newsockfd < 0) {
         perror("ERROR on accept");
         exit(1);
      }
      pid = fork();
      if (pid < 0) {
         perror("ERROR on fork");
         exit(1);
      }
      if (pid == 0) {
         /* This is the child process: it serves this one client */
         close(sockfd);
         doprocessing(newsockfd);
         exit(0);
      }
      else {
         /* Parent: close the connected socket and accept the next client */
         close(newsockfd);
      }
   } /* end of while */
}

void doprocessing(int sock) {
   int n;
   char buffer[256];
   memset(buffer, 0, sizeof(buffer));
   n = read(sock, buffer, sizeof(buffer) - 1);
   if (n < 0) {
      perror("ERROR reading from socket");
      exit(1);
   }
   printf("Here is the message: %s\n", buffer);
   n = write(sock, "I got your message", 18);
   if (n < 0) {
      perror("ERROR writing to socket");
      exit(1);
   }
}
/* client.c - a simple TCP client */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>

int main(int argc, char *argv[]) {
   int sockfd, portno, n;
   struct sockaddr_in serv_addr;
   struct hostent *server;
   char buffer[256];

   if (argc < 3) {
      fprintf(stderr,"usage %s hostname port\n", argv[0]);
      exit(0);
   }
   portno = atoi(argv[2]);
   /* Create a socket point */
   sockfd = socket(AF_INET, SOCK_STREAM, 0);
   if (sockfd < 0) {
      perror("ERROR opening socket");
      exit(1);
   }
   server = gethostbyname(argv[1]);
   if (server == NULL) {
      fprintf(stderr,"ERROR, no such host\n");
      exit(0);
   }
   memset(&serv_addr, 0, sizeof(serv_addr));
   serv_addr.sin_family = AF_INET;
   memcpy(&serv_addr.sin_addr.s_addr, server->h_addr_list[0], server->h_length);
   serv_addr.sin_port = htons(portno);
   /* Now connect to the server */
   if (connect(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
      perror("ERROR connecting");
      exit(1);
   }
   /* Ask for a message from the user and send it to the server */
   printf("Please enter the message: ");
   memset(buffer, 0, sizeof(buffer));
   fgets(buffer, sizeof(buffer) - 1, stdin);
   n = write(sockfd, buffer, strlen(buffer));
   if (n < 0) {
      perror("ERROR writing to socket");
      exit(1);
   }
   /* Now read the server response */
   memset(buffer, 0, sizeof(buffer));
   n = read(sockfd, buffer, sizeof(buffer) - 1);
   if (n < 0) {
      perror("ERROR reading from socket");
      exit(1);
   }
   printf("%s\n",buffer);
   return 0;
}
IP Address Functions
int inet_aton (const char *strptr, struct in_addr *addrptr) − This function call
converts the specified string, in the Internet standard dot notation, to a network
address, and stores the address in the structure provided. The converted address
will be in Network Byte Order (bytes ordered from left to right). It returns 1 if the
string is valid and 0 on error.
in_addr_t inet_addr (const char *strptr) − This function call converts the
specified string, in the Internet standard dot notation, to an integer value suitable
for use as an Internet address. The converted address will be in Network Byte
Order (bytes ordered from left to right). It returns a 32-bit binary network byte
ordered IPv4 address and INADDR_NONE on error.
char *inet_ntoa (struct in_addr inaddr) − This function call converts the
specified Internet host address to a string in the Internet standard dot notation.
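The two directions described above can be exercised together: the sketch below (roundtrip_ok is an illustrative name) converts a dotted-quad string to binary with inet_aton() and back with inet_ntoa(), checking that the round trip preserves the address.

```c
#include <arpa/inet.h>
#include <string.h>

/* Convert a dotted-quad string to binary and back, and check that
   the round trip preserves it. Returns 0 for an invalid string. */
int roundtrip_ok(const char *dotted) {
    struct in_addr addr;
    if (inet_aton(dotted, &addr) == 0)
        return 0;   /* not a valid dotted-quad address */
    return strcmp(inet_ntoa(addr), dotted) == 0;
}
```

Note that inet_ntoa() returns a pointer to a static buffer, so its result must be used (or copied) before the next call.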
We know that to communicate between two or more processes, we use shared memory
but before using the shared memory what needs to be done with the system calls, let us
see this −
Create the shared memory segment or use an already created shared memory
segment (shmget())
Attach the process to the already created shared memory segment (shmat())
Detach the process from the already attached shared memory segment (shmdt())
Control operations on the shared memory segment (shmctl())
Let us look at a few details of the system calls related to shared memory.
int shmget(key_t key, size_t size, int shmflg)
The above system call creates or allocates a System V shared memory segment. The
arguments that need to be passed are as follows −
The first argument, key, identifies the shared memory segment. The key can be
either an arbitrary value or one derived from the library function ftok(). The key can
also be IPC_PRIVATE, which means the processes run as server and client (a parent
and child relationship), i.e., inter-related process communication. If the client wants to
use shared memory with this key, then it must be a child process of the server. Also, the
child process needs to be created after the parent has obtained the shared memory.
The second argument, size, is the size of the shared memory segment, rounded up to
a multiple of PAGE_SIZE.
The third argument, shmflg, specifies the required shared memory flag/s, such as
IPC_CREAT (create a new segment) or IPC_EXCL (used with IPC_CREAT to create a
new segment; the call fails if the segment already exists). The permissions need to be
passed as well.
Note − Refer earlier sections for details on permissions.
This call would return a valid shared memory identifier (used for further calls of shared
memory) on success and -1 in case of failure. To know the cause of failure, check with
errno variable or perror() function.
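The four calls listed above can be tied together in a minimal round trip, sketched below under the assumption that System V IPC is available; shm_roundtrip is an illustrative name, and IPC_PRIVATE keeps the example self-contained in one process.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Create a private segment, attach, write, read back, detach, remove.
   Returns 1 on success, -1 on any failure. */
int shm_roundtrip(void) {
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0660);
    if (shmid == -1)
        return -1;
    char *shmp = (char *)shmat(shmid, NULL, 0);  /* attach to our address space */
    if (shmp == (char *)-1)
        return -1;
    strcpy(shmp, "hello");                       /* write through the mapping */
    int ok = (strcmp(shmp, "hello") == 0);
    shmdt(shmp);                                 /* detach */
    shmctl(shmid, IPC_RMID, NULL);               /* mark segment for removal */
    return ok ? 1 : -1;
}
```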
struct shmseg {
int cnt;
int complete;
char buf[BUF_SIZE];
};
int fill_buffer(char * bufptr, int size);
if (shmdt(shmp) == -1) {
perror("shmdt");
return 1;
}
struct shmseg {
int cnt;
int complete;
char buf[BUF_SIZE];
};
Writing into the shared memory by one process with different data packets and
reading from it by multiple processes, i.e., as per message type.
int main(void) {
struct my_msgbuf buf;
int msqid;
int len;
key_t key;
system("touch msgq.txt");
/* Filename: msgq_recv.c */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
int main(void) {
struct my_msgbuf buf;
int msqid;
int toend;
key_t key;
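The message-queue fragments above can be summarized in one compact, self-contained round trip: send a message with mtype 1 and receive it back by that type. The helper name msg_roundtrip and the use of IPC_PRIVATE (instead of a key from ftok() on msgq.txt) are illustrative assumptions.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct my_msgbuf {
    long mtype;      /* message type, must be > 0 */
    char mtext[64];  /* message payload */
};

/* Send one message on a private queue and receive it back by type. */
int msg_roundtrip(void) {
    int msqid = msgget(IPC_PRIVATE, IPC_CREAT | 0660);
    if (msqid == -1)
        return -1;
    struct my_msgbuf out = { 1, "ping" }, in;
    /* size argument counts only the payload, not the mtype field */
    if (msgsnd(msqid, &out, sizeof(out.mtext), 0) == -1)
        return -1;
    if (msgrcv(msqid, &in, sizeof(in.mtext), 1 /* mtype */, 0) == -1)
        return -1;
    msgctl(msqid, IPC_RMID, NULL);  /* remove the queue */
    return strcmp(in.mtext, "ping") == 0;
}
```

Selecting by mtype is what lets multiple readers pick out only the message classes meant for them.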
#include <stdio.h>
#include <unistd.h>

int main() {
   int pipefds[2];
   int returnstatus;
   char writemessages[2][20]={"Hi", "Hello"};
   char readmessage[20];
   returnstatus = pipe(pipefds);
   if (returnstatus == -1) {
      printf("Unable to create pipe\n");
      return 1;
   }
   printf("Writing to pipe - Message 1 is %s\n", writemessages[0]);
   write(pipefds[1], writemessages[0], sizeof(writemessages[0]));
   read(pipefds[0], readmessage, sizeof(readmessage));
   printf("Reading from pipe - Message 1 is %s\n", readmessage);
   printf("Writing to pipe - Message 2 is %s\n", writemessages[1]);
   write(pipefds[1], writemessages[1], sizeof(writemessages[1]));
   read(pipefds[0], readmessage, sizeof(readmessage));
   printf("Reading from pipe - Message 2 is %s\n", readmessage);
   return 0;
}
Execution Steps
78 |Systems Programming compiled by P. Njuguna
Compilation
Example program 2 − Program to write and read two messages through the pipe using
the parent and the child processes.
Algorithm
Step 1 − Create a pipe.
Step 2 − Create a child process.
Step 3 − Parent process writes to the pipe.
Step 4 − Child process retrieves the message from the pipe and writes it to the standard
output.
Step 5 − Repeat step 3 and step 4 once again.
Source Code: pipewithprocesses.c
#include<stdio.h>
#include<unistd.h>
int main() {
int pipefds[2];
int returnstatus;
int pid;
char writemessages[2][20]={"Hi", "Hello"};
char readmessage[20];
returnstatus = pipe(pipefds);
if (returnstatus == -1) {
printf("Unable to create pipe\n");
return 1;
}
pid = fork();
// Child process
if (pid == 0) {
read(pipefds[0], readmessage, sizeof(readmessage));
printf("Child Process - Reading from pipe – Message 1 is %s\n", readmessage);
read(pipefds[0], readmessage, sizeof(readmessage));
printf("Child Process - Reading from pipe – Message 2 is %s\n", readmessage);
} else { //Parent process
printf("Parent Process - Writing to pipe - Message 1 is %s\n", writemessages[0]);
write(pipefds[1], writemessages[0], sizeof(writemessages[0]));
printf("Parent Process - Writing to pipe - Message 2 is %s\n", writemessages[1]);
write(pipefds[1], writemessages[1], sizeof(writemessages[1]));
}
return 0;
}
Sample Programs
Sample program 1 − Achieving two-way communication using pipes.
Algorithm
Step 1 − Create pipe1 for the parent process to write and the child process to read.
Step 2 − Create pipe2 for the child process to write and the parent process to read.
#include <stdio.h>
#include <unistd.h>

int main() {
   int pipefds1[2], pipefds2[2];
   int returnstatus1, returnstatus2;
   int pid;
   char pipe1writemessage[20] = "Hi";
   char pipe2writemessage[20] = "Hello";
   char readmessage[20];
   returnstatus1 = pipe(pipefds1);
   if (returnstatus1 == -1) {
      printf("Unable to create pipe 1 \n");
      return 1;
   }
   returnstatus2 = pipe(pipefds2);
   if (returnstatus2 == -1) {
      printf("Unable to create pipe 2 \n");
      return 1;
   }
   pid = fork();
   if (pid != 0) { /* Parent: writes on pipe1, reads on pipe2 */
      close(pipefds1[0]);
      close(pipefds2[1]);
      printf("In Parent: Writing to pipe 1 - Message is %s\n", pipe1writemessage);
      write(pipefds1[1], pipe1writemessage, sizeof(pipe1writemessage));
      read(pipefds2[0], readmessage, sizeof(readmessage));
      printf("In Parent: Reading from pipe 2 - Message is %s\n", readmessage);
   } else { /* Child: reads on pipe1, writes on pipe2 */
      close(pipefds1[1]);
      close(pipefds2[0]);
      read(pipefds1[0], readmessage, sizeof(readmessage));
      printf("In Child: Reading from pipe 1 - Message is %s\n", readmessage);
      printf("In Child: Writing to pipe 2 - Message is %s\n", pipe2writemessage);
      write(pipefds2[1], pipe2writemessage, sizeof(pipe2writemessage));
   }
   return 0;
}
Signal
A signal is a notification to a process indicating the occurrence of an event. A signal is
also called a software interrupt; since its occurrence cannot be predicted, it is also
known as an asynchronous event.
A signal can be specified with a number or a name; signal names usually start with SIG.
The available signals can be checked with the command kill -l (l for listing signal
names), which is as follows −
A process can dispose of a signal in one of the following three ways −
Perform the default action
Handle the signal
Ignore the signal
As discussed, a signal can be handled by altering the default action. Signal handling
can be done in either of two ways, i.e., through the system calls signal() and
sigaction().
#include <signal.h>
int sigaction(int signum, const struct sigaction *act, struct sigaction *oldact);
This system call is used to either examine or change a signal action. If the act is not null,
the new action for signal signum is installed from the act. If oldact is not null, the previous
action is saved in oldact.
The sigaction structure contains the following fields −
Field 1 − Handler mentioned either in sa_handler or sa_sigaction.
void (*sa_handler)(int);
void (*sa_sigaction)(int, siginfo_t *, void *);
The handler for sa_handler specifies the action to be performed based on the signum
and with SIG_DFL indicating default action or SIG_IGN to ignore the signal or pointer to
a signal handling function.
The handler for sa_sigaction specifies the signal number as the first argument, pointer
to siginfo_t structure as the second argument and pointer to user context (check
getcontext() or setcontext() for further details) as the third argument.
The structure siginfo_t contains signal information such as the signal number to be
delivered, signal value, process id, real user id of sending process, etc.
Field 2 − Set of signals to be blocked.
sigset_t sa_mask;
This variable specifies the mask of signals that should be blocked during the execution
of signal handler.
Field 3 − Special flags.
int sa_flags;
This field specifies a set of flags which modify the behavior of the signal.
Field 4 − Restore handler.
/* signal_dividebyzero.c: triggers SIGFPE, whose default action terminates */
#include <stdio.h>

int main() {
   int result;
   int v1, v2;
   v1 = 121;
   v2 = 0;
   result = v1/v2;   /* raises SIGFPE: divide by zero */
   printf("Result of Divide by Zero is %d\n", result);
   return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>

void handler_dividebyzero(int signum) {
   if (signum == SIGFPE)
      printf("Received SIGFPE, Divide by Zero Exception\n");
   exit(0);
}

int main() {
   int result;
   int v1, v2;
   void (*sigHandlerReturn)(int);
   sigHandlerReturn = signal(SIGFPE, handler_dividebyzero);
   if (sigHandlerReturn == SIG_ERR) {
      perror("Signal Error: ");
      return 1;
   }
   v1 = 121;
   v2 = 0;
   result = v1/v2;   /* raises SIGFPE, which is now handled */
   printf("Result of Divide by Zero is %d\n", result);
   return 0;
}
#include <stdio.h>
#include <signal.h>

int main() {
   printf("Testing SIGSTOP\n");
   raise(SIGSTOP);
   return 0;
}
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

int main() {
   pid_t pid;
   printf("Testing SIGSTOP\n");
   pid = getpid();
   printf("Open Another Terminal and issue following command\n");
   printf("kill -SIGCONT %d or kill -CONT %d or kill -18 %d\n", pid, pid, pid);
   raise(SIGSTOP);
   printf("Received signal SIGCONT\n");
   return 0;
}
In another terminal
kill -SIGCONT 30379
So far, we have seen programs which handle signals generated by the system. Now, let
us see a signal generated from within the program (using the raise() function or the
kill command). This program generates the signal SIGTSTP (terminal stop), whose
default action is to stop execution. However, since we are handling the signal now
instead of taking the default action, control will come to the defined handler. In this
case, we just print a message and exit.
/* signal_raising_handling.c */
#include<stdio.h>
#include<signal.h>
#include<stdlib.h>
void handler_sigtstp(int signum) {
   if (signum == SIGTSTP)
      printf("Received signal SIGTSTP\n");
   exit(0);
}

int main() {
void (*sigHandlerReturn)(int);
sigHandlerReturn = signal(SIGTSTP, handler_sigtstp);
if (sigHandlerReturn == SIG_ERR) {
perror("Signal Error: ");
return 1;
}
printf("Testing SIGTSTP\n");
raise(SIGTSTP);
return 0;
}
#include <stdio.h>
#include <signal.h>

int main() {
   void (*sigHandlerReturn)(int);
   sigHandlerReturn = signal(SIGTSTP, SIG_IGN);
   if (sigHandlerReturn == SIG_ERR) {
      perror("Signal Error: ");
      return 1;
   }
   printf("Testing SIGTSTP\n");
   raise(SIGTSTP);   /* ignored; execution continues */
   printf("Signal SIGTSTP is ignored\n");
   return 0;
}
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

void handleSignals(int signum) {
   if (signum == SIGINT) {
      printf("\nYou pressed CTRL+C\n");
      printf("Now reverting SIGINT signal to default action\n");
      signal(SIGINT, SIG_DFL);   /* a second CTRL+C now terminates */
   } else if (signum == SIGQUIT) {
      printf("\nYou pressed CTRL+\\\n");
   }
}

int main(void) {
   void (*sigHandlerInterrupt)(int);
   void (*sigHandlerQuit)(int);
   void (*sigHandlerReturn)(int);
   sigHandlerInterrupt = sigHandlerQuit = handleSignals;
   sigHandlerReturn = signal(SIGINT, sigHandlerInterrupt);
   if (sigHandlerReturn == SIG_ERR) {
      perror("signal error: ");
      return 1;
   }
   sigHandlerReturn = signal(SIGQUIT, sigHandlerQuit);
   if (sigHandlerReturn == SIG_ERR) {
      perror("signal error: ");
      return 1;
   }
   while (1)
      pause();   /* wait indefinitely for signals */
   return 0;
}
Second Method
To terminate this program, perform the following:
1. Open another terminal
2. Issue command: kill 71 or issue CTRL+C 2 times (second time it terminates)
^C
You pressed CTRL+C
Now reverting SIGINT signal to default action
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

void handleSignals(int signum) {
   printf("\nReceived signal %d\n", signum);
}

int main(void) {
   struct sigaction mysigaction;
   mysigaction.sa_handler = handleSignals;
   sigemptyset(&mysigaction.sa_mask);
   mysigaction.sa_flags = 0;
   /* check the return value of sigaction(), not the sa_handler field */
   if (sigaction(SIGINT, &mysigaction, NULL) == -1) {
      perror("sigaction error: ");
      return 1;
   }
   if (sigaction(SIGQUIT, &mysigaction, NULL) == -1) {
      perror("sigaction error: ");
      return 1;
   }
   while (1)
      pause();   /* wait indefinitely for signals */
   return 0;
}
Advantages of Remote Procedure Call
Some of the advantages of RPC are as follows −
Remote procedure calls support process-oriented and thread-oriented models.
The internal message passing mechanism of RPC is hidden from the user.
The effort to re-write and re-develop the code is minimal in remote procedure calls.
Remote procedure calls can be used in a distributed environment as well as the local
environment.
Many of the protocol layers are omitted by RPC to improve performance.
Disadvantages of Remote Procedure Call
Some of the disadvantages of RPC are as follows −
The remote procedure call is a concept that can be implemented in different ways. It is
not a standard.
There is no flexibility in RPC for hardware architecture, as it is only interaction-based.
There is an increase in costs because of remote procedure calls.
After you decide that your application would benefit from IPC, you must decide which of
the available IPC methods to use. It is likely that an application will use several IPC
mechanisms. Windows provides the following IPC mechanisms −
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
Key Point: All applications should support the clipboard for those data formats that
they understand. For example, a text editor or word processor should at least be able to
produce and accept clipboard data in pure text format.
The foundation of OLE is the Component Object Model (COM). A software component
that uses COM can communicate with a wide variety of other components, even those
that have not yet been written. The components interact as objects and clients.
Distributed COM extends the COM programming model so that it works across a
network.
Key Point: OLE supports compound documents and enables an application to include
embedded or linked data that, when chosen, automatically starts another application for
data editing. This enables the application to be extended by any other application that
uses OLE. COM objects provide access to an object's data through one or more sets of
related functions, known as interfaces. For more information, see COM and ActiveX
Object Services.
Key Point: Data copy can be used to quickly send information to another application
using Windows messaging. For more information, see Data Copy.
The data formats used by DDE are the same as those used by the clipboard. DDE can be
thought of as an extension of the clipboard mechanism. The clipboard is almost always
used for a one-time response to a user command, such as choosing the Paste command
from a menu. DDE is also usually initiated by a user command, but it often continues to
function without further user interaction. You can also define custom DDE data formats
for special-purpose IPC between applications with more tightly coupled
communications requirements.
DDE exchanges can occur between applications running on the same computer or on
different computers on a network.
Key Point: DDE is not as efficient as newer technologies. However, you can still use DDE
if other IPC mechanisms are not suitable or if you must interface with an existing
application that only supports DDE. For more information, see Dynamic Data
Exchange and Dynamic Data Exchange Management Library.
You can use a special case of file mapping to provide named shared memory between
processes. If you specify the system swapping file when creating a file-mapping object,
the file-mapping object is treated as a shared memory block. Other processes can
access the same block of memory by opening the same file-mapping object.
A mailslot client can send a message to a mailslot on its local computer, to a mailslot on
another computer, or to all mailslots with the same name on all computers in a specified
network domain. Messages broadcast to all mailslots on a domain can be no longer
than 400 bytes, whereas messages sent to a single mailslot are limited only by the
maximum message size specified by the mailslot server when it created the mailslot.
Key Point: Mailslots offer an easy way for applications to send and receive short
messages. They also provide the ability to broadcast messages across all computers in a
network domain. For more information, see Mailslots.
Named pipes are used to transfer data between unrelated processes and between
processes on different computers. Typically, a named-pipe server process creates a
named pipe with a well-known name or with a name that is to be communicated to its
clients. A named-pipe client process that knows the name of the pipe can open its
other end, subject to access restrictions specified by the named-pipe server process.
After both the server and client have connected to the pipe, they can exchange data by
performing read and write operations on it.
Key Point: Anonymous pipes provide an efficient way to redirect standard input or
output to child processes on the same computer. Named pipes provide a simple
programming interface for transferring data between two processes, whether they
reside on the same computer or over a network. For more information, see Pipes.
The RPC provided by Windows is compliant with the Open Software Foundation (OSF)
Distributed Computing Environment (DCE). This means that applications that use RPC
are able to communicate with applications running with other operating systems that
support DCE. RPC automatically supports data conversion to account for different
hardware architectures and for byte-ordering between dissimilar environments.
RPC clients and servers are tightly coupled but still maintain high performance. The
system makes extensive use of RPC to facilitate a client/server relationship between
different parts of the operating system.
Key Point: RPC is a function-level interface, with support for automatic data conversion
and for communications with other operating systems. Using RPC, you can create high-
performance, tightly coupled distributed applications. For more information,
see Microsoft RPC Components.
Windows Sockets are based on the sockets first popularized by Berkeley Software
Distribution (BSD). An application that uses Windows Sockets can communicate with
other socket implementations on other types of systems. However, not all transport
service providers support all available options.
Memory management
Memory management is the functionality of an operating system which handles or
manages primary memory and moves processes back and forth between main memory
and disk during execution. Memory management keeps track of each and every memory
location, regardless of whether it is allocated to some process or free. It checks how
much memory is to be allocated to processes and decides which process will get memory
at what time. It tracks whenever some memory gets freed or unallocated and updates
the status accordingly.
This section covers basic concepts related to memory management.
A program's addresses can be represented in the following ways −
1. Symbolic addresses − The addresses used in source code. Variable names, constants,
and instruction labels are the basic elements of the symbolic address space.
2. Relative addresses − At the time of compilation, a compiler converts symbolic
addresses into relative addresses.
3. Physical addresses − The loader generates these addresses at the time when a
program is loaded into main memory.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory (or move) to secondary storage (disk) and make that memory available to other
processes. At some later time, the system swaps back the process from the secondary
storage to main memory.
Though swapping usually affects performance, it helps in running multiple large
processes in parallel, and for this reason swapping is also known as a technique for
memory compaction.
Main memory is allocated in one of two ways −
1. Single-partition allocation − In this type of allocation, the relocation-register scheme
is used to protect user processes from each other, and from changes to operating-system
code and data. The relocation register contains the value of the smallest physical
address, whereas the limit register contains the range of logical addresses. Each logical
address must be less than the limit register.
2. Multiple-partition allocation − In this type of allocation, main memory is divided into
a number of fixed-sized partitions where each partition should contain only one process.
When a partition is free, a process is selected from the input queue and is loaded into
the free partition. When the process terminates, the partition becomes available for
another process.
Fragmentation
As processes are loaded and removed from memory, the free memory space is broken
into little pieces. It sometimes happens that processes cannot be allocated to memory
blocks because the blocks are too small, and the memory blocks remain unused. This
problem is known as fragmentation.
Fragmentation is of two types −
1. External fragmentation − Total memory space is enough to satisfy a request or to
hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation − The memory block assigned to a process is bigger than
requested; some portion of it is left unused, as it cannot be used by another process.
The following diagram shows how fragmentation can cause waste of memory and a
compaction technique can be used to create more free memory out of fragmented
memory −
Paging
A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory, and it is a section of a hard
disk that is set up to emulate the computer's RAM. The paging technique plays an
important role in implementing virtual memory.
Paging is a memory management technique in which process address space is broken
into blocks of the same size called pages (size is power of 2, between 512 bytes and
8192 bytes). The size of the process is measured in the number of pages.
Address Translation
Page address is called logical address and represented by page number and
the offset.
Logical Address = Page number + page offset
Frame address is called physical address and represented by a frame number and
the offset.
Physical Address = Frame number + page offset
A data structure called page map table is used to keep track of the relation between a
page of a process to a frame in physical memory.
Demand Paging
A demand paging system is quite similar to a paging system with swapping, where
processes reside in secondary memory and pages are loaded only on demand, not in
advance. When a context switch occurs, the operating system does not copy any of the
old program's pages out to the disk or any of the new program's pages into main
memory. Instead, it just begins executing the new program after loading the first page
and fetches that program's pages as they are referenced.
Reference String
The string of memory references is called a reference string. Reference strings are
generated artificially or by tracing a given system and recording the address of each
memory reference. The latter choice produces a large amount of data, about which we
note two things.
For a given page size, we need to consider only the page number, not the entire
address.
If we have a reference to a page p, then any immediately following references to
page p will never cause a page fault. Page p will be in memory after the first
reference; the immediately following references will not fault.
For example, consider the following sequence of addresses −
123,215,600,1234,76,96
If page size is 100, then the reference string is 1,2,6,12,0,0
Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a
particular device. Operating System takes help from device drivers to handle all I/O
devices.
The Device Controller works like an interface between a device and a device driver. I/O
units (Keyboard, mouse, printer, etc.) typically consist of a mechanical component and
an electronic component where electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate
with the Operating Systems. A device controller may be able to handle multiple devices.
As an interface, its main task is to convert a serial bit stream to a block of bytes and
perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket
is connected to a device controller. Following is a model for connecting the CPU,
memory, controllers, and I/O devices where CPU and device controllers all use a
common bus for communication.
For example, in a DMA transfer the DMA controller transfers bytes to the buffer,
increasing the memory address and decreasing the counter C until C becomes zero.
IO subsystem software
I/O software is often organized in the following layers −
User Level Libraries − This provides simple interface to the user program to
perform input and output. For example, stdio is a library provided by C and C++
programming languages.
Kernel Level Modules − This provides device driver to interact with the device
controller and device independent I/O modules used by the device drivers.
Hardware − This layer includes the actual hardware and the hardware controllers that
interact with the device drivers.
Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a
particular device. The operating system takes help from device drivers to handle all I/O
devices. Device drivers encapsulate device-dependent code and implement a standard
interface in such a way that the device-specific register reads/writes are contained
within the driver. A device driver is generally written by the device's manufacturer and
delivered along with the device on a CD-ROM.
A device driver performs the following jobs −
Accepts requests from the device-independent software above it.
Interacts with the device controller to take and give I/O, performing the required error
handling.
Makes sure that the request is executed successfully.
Interrupt handlers
An interrupt handler, also known as an interrupt service routine or ISR, is a piece of
software or more specifically a callback function in an operating system or more
specifically in a device driver, whose execution is triggered by the reception of an
interrupt.
When the interrupt happens, the interrupt procedure does whatever it has to do in order
to handle the interrupt, updates data structures, and wakes up the process that was
waiting for the interrupt to happen.
The interrupt mechanism accepts an address ─ a number that selects a specific interrupt
handling routine/function from a small set. In most architectures, this address is an offset
stored in a table called the interrupt vector table. This vector contains the memory
addresses of specialized interrupt handlers.
File Structure
A file structure should follow a required format that the operating system can
understand.
A file has a certain defined structure according to its type.
A text file is a sequence of characters organized into lines.
A source file is a sequence of procedures and functions.
An object file is a sequence of bytes organized into blocks that are understandable
by the machine.
When an operating system defines different file structures, it also contains the code
to support these file structures. UNIX and MS-DOS support a minimal number of file
structures.
File Type
File type refers to the ability of the operating system to distinguish different types of
files, such as text files, source files, and binary files. Many operating systems support
many types of files. Operating systems like MS-DOS and UNIX have the following types
of files −
Ordinary files
These are files that contain user information.
Directory files
These files contain the list of file names and other information related to these files.
Special files
These files are also known as device files; they represent physical devices such as disks,
terminals, and printers.
Sequential access
Direct/Random access
Indexed sequential access
Sequential access
Sequential access is access in which the records are read in some sequence, i.e.,
the information in the file is processed in order, one record after the other. This access
method is the most primitive one. Example: compilers usually access files in this fashion.
Direct/Random access
Random access file organization provides direct access to records.
Each record has its own address in the file, with the help of which it can be
directly accessed for reading or writing.
The records need not be in any sequence within the file and they need not be in
adjacent locations on the storage medium.
Indexed sequential access
This mechanism is built on the base of sequential access. An index is created for each
file, containing pointers to various blocks; the index is searched sequentially, and its
pointer is used to access the file directly.
Space Allocation
Files are allocated disk space by the operating system. Operating systems deploy the
following three main ways to allocate disk space to files.
Contiguous Allocation
Linked Allocation
Indexed Allocation
Authentication
One Time passwords
Program Threats
System Threats
Computer Security Classifications
Authentication
Authentication refers to identifying each user of the system and associating the executing
programs with those users. It is the responsibility of the operating system to create a
protection system which ensures that a user who is running a particular program is
authentic. Operating systems generally identify/authenticate users in the following
three ways −
Username / Password − The user needs to enter a registered username and
password with the operating system to log in to the system.
User card/key − The user needs to punch a card into a card slot, or enter a key
generated by a key generator in the option provided by the operating system, to log in
to the system.
User attribute − fingerprint / eye retina pattern / signature − The user needs to pass
his/her attribute via a designated input device used by the operating system to log in
to the system.
Program Threats
An operating system's processes and kernel do the designated tasks as instructed. If a
user program makes these processes do malicious tasks, this is known as a program
threat. One common example of a program threat is a program installed on a computer
which can store and send user credentials via the network to some hacker. Following is
a list of some well-known program threats.
Trojan Horse − Such a program traps user login credentials and stores them to
send to a malicious user, who can later log in to the computer and access system
resources.
Trap Door − If a program which is designed to work as required has a security
hole in its code and performs illegal actions without the knowledge of the user, then it
is said to have a trap door.
Logic Bomb − A logic bomb is a situation when a program misbehaves only when
certain conditions are met; otherwise it works as a genuine program. It is harder to
detect.
Virus − A virus, as the name suggests, can replicate itself on a computer system.
Viruses are highly dangerous and can modify/delete user files and crash systems. A
virus is generally a small piece of code embedded in a program. As the user accesses
the program, the virus starts getting embedded in other files/programs and can make
the system unusable for the user.
System Threats
System threats refer to the misuse of system services and network connections to put
the user in trouble. System threats can be used to launch program threats on a complete
network, called a program attack. System threats create an environment in which
operating system resources and user files are misused. Following is a list of some
well-known system threats.
Worm − A worm is a process which can choke down a system's performance by
using system resources to extreme levels. A worm process generates multiple copies of
itself, where each copy uses system resources and prevents all other processes from
getting the resources they need. Worm processes can even shut down an entire network.
Port Scanning − Port scanning is a mechanism or means by which a hacker can
detect system vulnerabilities in order to make an attack on the system.
Denial of Service − Denial of service attacks normally prevent users from making
legitimate use of the system. For example, a user may not be able to use the internet
if a denial of service attack targets the browser's content settings.
1. Type A − Highest level. Uses formal design specifications and verification techniques.
Grants a high degree of assurance of process security.
2. Type B − Provides a mandatory protection system. Has all the properties of a class C2
system. Attaches a sensitivity label to each object. It is of three types:
B1 − Maintains the security label of each object in the system. The label is used
for making access-control decisions.
B2 − Extends the sensitivity labels to each system resource, such as storage
objects; supports covert channels and auditing of events.
B3 − Allows creating lists or user groups for access control, to grant or revoke
access to a given named object.
3. Type C − Provides protection and user accountability using audit capabilities. It is of
two types:
C1 − Incorporates controls so that users can protect their private information and
keep other users from accidentally reading/deleting their data. UNIX versions are
mostly C1 class.
C2 − Adds individual-level access control to the capabilities of a C1 level system.
4. Type D − Lowest level. Minimum protection. MS-DOS and Windows 3.1 fall in this
category.