
Bolivarian Republic of Venezuela

Ministry of Popular Power for Defense


National Polytechnic Experimental University of the Bolivarian National Armed Forces
UNEFA - Puerto Cabello Nucleus

Operating System Report

Teacher: Marisela Materano

Students:
Victor Cuauro, CI: 26671901
Robert Sanchez, CI: 27307525

6th Semester, Systems Engineering

Puerto Cabello, May 2021


I-BASIC CONCEPTS OF OPERATING SYSTEMS

INTRODUCTION TO COMPUTER SYSTEMS

Computing arose when the need appeared to count and keep adequate control of our
belongings, as well as the need to record or preserve memory; as time has passed, human
beings have developed concepts and support tools to act with increasing ease and precision,
and in less time, when processing and recording information.

Computing thus has its origin in the need of human beings for tools and means that allow
them to record and manipulate information and to develop logical procedures to obtain
various results from that information; this has manifested itself from the simple case of
adding and subtracting quantities to new ways of storing, processing and manipulating all
types of information.

Here we have some basic concepts:

• Computer Science: According to the Dictionary of the Royal Spanish Academy, it is the
"Set of scientific knowledge and techniques that make possible the automatic processing of
information by computers." Its origin is related to the coining of the word Informatik by the
German Karl Steinbuch in 1957, which is a contraction of the words "information" and
"automatic".

• Computer: An electronic device into which data can be entered so that it performs certain
operations and generates useful information for the user. It is a set of peripheral devices,
interconnected through standard communication protocols and ports to a system of
integrated circuits, with the purpose of facilitating the capture of data, their processing and
the output of the results.

• Data: In computing terms, the minimum unit of information, arranged in such a way that
it can be captured and processed by a computer.

• Information: From the point of view of computing, an ordered set of data that has meaning
and significance for the system for which it is intended, whether computational or human.
Hardware

Hardware represents the tangible physical components of a computing system; these
elements alone cannot perform any action, so they require software in order to operate. It is
the tangible, physical part of the computer: devices such as the motherboard, the central
storage unit, peripherals and the monitor. Among the most important are:

Motherboard: It is one of the most important components of the computer and essential for
its operation (also known as the mainboard). It is a printed circuit board that houses the
Central Processing Unit (CPU) or microprocessor, the chipset (auxiliary integrated circuits),
the RAM memory, the BIOS or Flash ROM, etc., and allows them to communicate with
each other.

Hardware components and devices are divided into basic and complementary.

Basic Hardware: These are the fundamental and essential parts for the computer to
function, such as: Motherboard, monitor, keyboard and mouse.

Complementary Hardware: are all those additional non-essential devices such as: printer,
scanner, digital video camera, webcam, etc.

Software

Software constitutes the intangible, logical part of a computing system. It refers to the set of
programs used to manage the hardware components and to allow interaction between them
and the user.

Software comprises application programs and operating systems, which, according to the
functions they perform, can be classified into:

• System Software: This is the name given to the set of programs that are used to interact
with the system, conferring control over the hardware, in addition to providing support to
other programs.

The System Software is divided into:

• Operating system: It is a set of programs that manage the computer's resources and
control its operation. An operating system performs five basic functions:

1. Provision of a user interface: Allows the user to communicate with the computer through
command-based interfaces, menu-based interfaces, and graphical user interfaces.

2. Resource management: Manages hardware resources such as the CPU, memory,
secondary storage devices, and input and output peripherals.

3. File management: Controls the creation, deletion, copying and access of data and
program files.

4. Task management: Manages information about the programs and processes running on
the computer. It can change the priority of processes, terminate them, and check their CPU
usage.

5. Support services: The support services of each operating system depend on the
implementations added to it, and may include new utilities, version updates, security
improvements, drivers for new peripherals, or corrections of software errors.

• Device drivers: These are programs that allow a higher-level program, such as an
operating system, to interact with a hardware device.

• Utility programs: They perform various functions to solve specific problems, in addition
to performing general and maintenance tasks. Some are included in the operating system.

• Application Software: Application software consists of programs designed for or by users
to facilitate the performance of specific tasks on the computer, such as office applications
(word processor, spreadsheet, presentation program, database management system, ...), or
other types of specialized software such as medical software, educational software, music
editors, accounting programs, etc.

• Programming Software: Programming software is the set of tools that allow the developer
to write programs using different alternatives and programming languages.

This type of software mainly includes compilers, interpreters, assemblers, linkers,
debuggers, text editors and integrated development environments that contain the above
tools and usually have an advanced graphical user interface (GUI).

Firmware

Firmware is the basic program that controls the electronic circuits of any device. This
program, or piece of software, is a portion of code in charge of controlling what the
hardware of a device has to do, and it ensures that the basic operation is correct. It resides in
a read-only memory unit that contains a set of basic instructions, allowing communication
between hardware and software.
A device can have several circuits connected inside it, but those circuits need primary logic:
basic instructions that tell them how they should work, how they should start, and what
operations they should perform. That, put simply, is what firmware is.

The code that makes up the firmware of a device usually resides on memory chips separate
from the main ones. This means that in every device, from your mouse to your washing
machine, there is at least a small ROM memory in which this firmware is stored. Thanks to
it, an interface for system configuration is also provided, allowing control of the boot
process and of the main connections and functions of the device.

INTRODUCTION TO OPERATING SYSTEMS

The first objective of an operating system is to make the computer comfortable to use. The
second objective is to use the machine efficiently.

An Operating System is a program that acts as an intermediary between the user and the
machine. The purpose of an operating system is to provide an environment in which the
user can run their applications. Applications are all those programs that the user runs to
improve their productivity or to have fun. Without software, a computer is nothing more
than a useless mass of metal. With software, a computer can store, process, and retrieve
information, find spelling errors in manuscripts, play games, and engage in many other
valuable activities.

Computer software can be broadly classified into two classes: system programs, which
control the operation of the computer itself, and application programs, which solve
problems for its users. The foundational program of all system programs is the operating
system (OS), which controls all computer resources and provides the foundation on which
application programs can be written.

What is an Operating System

Since their creation, digital computers have used a system of coding instructions in a binary
numbering system, that is, with 0s and 1s. This is because integrated circuits work on this
principle: either there is current or there is no current.
At the beginning of the history of computers (about forty years ago), operating systems did
not exist, and introducing a program to be executed took an incredible effort that could only
be carried out by very few experts. This made computers very complicated to use and
required a high level of technical knowledge to operate them. Their management was so
complex that in some cases the result was disastrous.

One of the purposes of the operating system's intermediary kernel is to manage the
allocation of hardware resources and access protection, which relieves application
programmers of having to deal with these details. Most electronic devices that use
microprocessors have a built-in operating system (mobile phones, DVD players, computers,
routers, etc.). In these cases, the system is managed through a graphical user interface, a
window manager or a desktop environment if it is a computer or cell phone; through a
console or remote control if it is a DVD player; and through a command line or web
browser if it is a router.

The dominant desktop operating system is Microsoft Windows, with a market share of
around 82.74%. Apple's macOS is in second place (13.23%), and the GNU/Linux varieties
are collectively in third place (1.57%). In the mobile sector (including smartphones and
tablets), Android's share reached about 70% in 2017. Linux distributions are dominant in
the server and supercomputing sectors. Other specialized classes of operating systems, such
as embedded and real-time systems, exist for many applications.

Structure

A system is made up of clearly distinguishable parts, its elements, related to each other in
some particular way; this combination constitutes its structure. The structure is an
arrangement attributed to the elements through their relationships. Elements can be material
objects, such as the metal parts of a bridge, or ideal ones, such as the words of a sentence.
That is, structures are real or ideal arrangements of elements.

When considering the internal organization of the operating system, we must observe two
types of requirements:

User requirements:

- Easy to use and learn system

- Secure

- Fast

- Suitable for the intended use

Software requirements:

- Maintenance

- Mode of operation

- Use restrictions

- Efficiency

- Error tolerance
- Flexibility

As user needs grew and systems were perfected, greater organization of the operating
system software became necessary: parts of the system contained subparts, and the whole
was organized in the form of levels.

The result is a hierarchical structure, with greater organization of the operating system
software: the operating system is divided into parts or levels, each one clearly defined and
with a clear interface (communication) with the rest of the elements.

Levels that make up the operating system

• Kernel (core): It is the main part of the operating system. The kernel, or core of the
operating system, manages the entire system and synchronizes all processes. At the kernel
level we work only with processes.

• Input/output management: The operating system manages external devices through their
drivers.

• Memory management: The operating system manages all aspects related to real memory
and virtual memory.

• File systems: The operating system takes care of managing the user's files through a
directory structure with some type of organization.

• Command interpreter: It is a communication mechanism between users and the system. It
reads user instructions and causes the requested system functions to execute (a minimal
sketch of such a loop is given below).
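The following is a minimal command-interpreter sketch in C, added here only as an
illustration and not part of the original report. It is deliberately simplified (no pipes,
redirection or built-in commands) and assumes a POSIX environment: it reads a line, splits
it into words, and asks the operating system to run the command through fork, execvp and
waitpid.

/* Minimal sketch of a command interpreter (shell) loop, POSIX C. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    char line[256];

    for (;;) {
        printf("> ");                              /* prompt */
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                                 /* end of input: leave the shell */
        line[strcspn(line, "\n")] = '\0';          /* drop the trailing newline */

        char *argv[16];                            /* split into at most 15 words */
        int argc = 0;
        for (char *tok = strtok(line, " "); tok != NULL && argc < 15;
             tok = strtok(NULL, " "))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;                              /* empty line: show prompt again */

        pid_t pid = fork();                        /* create a child process */
        if (pid == 0) {
            execvp(argv[0], argv);                 /* replace the child with the command */
            perror("execvp");                      /* reached only if the command failed */
            exit(1);
        }
        waitpid(pid, NULL, 0);                     /* wait for the command to finish */
    }
    return 0;
}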

Historical evolution

The concept of Operating System emerged in the 1950s. The first Operating System in
history was created in 1956 for an IBM 704 computer, and basically the only thing it did
was start the execution of a program when the previous one finished.
In the 60s, a revolution occurred in the field of Operating Systems. Concepts such as
multitasking system, multiuser system, multiprocessor system and real-time system appear.

It is in this decade when UNIX appears, the basis of the vast majority of Operating Systems
that exist today.

In the 70s there was a boom in personal computers, bringing them closer to the general
public in a way that was unthinkable until then. This multiplied development, creating the
C programming language (specifically designed to completely rewrite UNIX code).

As a consequence of this exponential growth of users, the vast majority of them without
any knowledge of low or high level languages, in the 80s, the priority when designing an
operating system was ease of use, thus emerging the first user interfaces.

In the 80s, systems like MacOS, MS-DOS, and Windows were born.

In the 90s, Linux made its appearance, with the first version of the kernel published in
September 1991; it would later join the GNU project, a completely free UNIX-like operating
system that until then lacked a functional kernel. Nowadays, most people refer to as Linux
the operating system that is properly called GNU/Linux.

Reentrant
In computing, a computer program or subroutine is called reentrant if it can be interrupted
in the middle of its execution and safely called again ("re-entered") before its previous
invocations complete their execution.

A reentrant function does not keep static variables across successive calls, nor does it return
a pointer to static data; all data is supplied by the caller of the function, and a reentrant
function should not call non-reentrant functions. A non-reentrant function can often, but not
always, be identified by its external interface and its usage. For example, the strtok routine
is non-reentrant, since it keeps internal state about the string it is splitting into tokens.
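As an illustration only (not part of the original text), the sketch below contrasts the
non-reentrant strtok, which keeps hidden static state between calls, with its POSIX
reentrant counterpart strtok_r, where the caller supplies the state pointer explicitly, so two
scans can be safely interleaved.

/* Sketch: non-reentrant strtok vs. reentrant strtok_r (POSIX C). */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void) {
    char colors[] = "red green blue";
    char sizes[]  = "small medium large";
    char *save_colors, *save_sizes;

    /* strtok_r keeps its position in a caller-supplied pointer, so two
     * independent scans can be interleaved (or run from two threads). */
    char *c = strtok_r(colors, " ", &save_colors);
    char *s = strtok_r(sizes,  " ", &save_sizes);
    while (c != NULL || s != NULL) {
        if (c) { printf("color: %s\n", c); c = strtok_r(NULL, " ", &save_colors); }
        if (s) { printf("size:  %s\n", s); s = strtok_r(NULL, " ", &save_sizes); }
    }

    /* Plain strtok could not support this interleaving: it keeps a single
     * hidden static pointer, so starting a second scan would discard the
     * position of the first one. */
    char words[] = "one two three";
    for (char *t = strtok(words, " "); t != NULL; t = strtok(NULL, " "))
        printf("token: %s\n", t);
    return 0;
}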

Processes
The main concept in any operating system is the process. A process is a running program,
including the program counter value, registers, and variables. Conceptually, each process
has a thread of execution that is seen as a virtual CPU. The processor resource is alternated
between the different processes that exist in the system, giving the idea that they execute in
parallel (multiprogramming).
Processes are created and destroyed by the operating system, which must also take care of
communication between processes, although it does so at the request of the processes
themselves. The mechanism by which one process creates another process is called a fork.
New processes are independent and do not share memory (that is, information) with the
process that created them.
In multithreaded operating systems it is possible to create both threads and processes. The
difference is that a process can only create threads for itself and that these threads share all
the memory reserved for the process.
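As an illustration only (not from the original report), the following C sketch uses the POSIX
fork call mentioned above: the parent creates a child, the two processes run with separate
copies of their memory, and the parent waits for the child to terminate.

/* Sketch: creating a process with fork() and waiting for it (POSIX C). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int x = 10;                        /* each process gets its own copy of x */
    pid_t pid = fork();                /* create a new, independent process */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {             /* child branch */
        x = 99;                        /* changes only the child's copy */
        printf("child:  pid=%d  x=%d\n", (int)getpid(), x);
        exit(0);
    }
    waitpid(pid, NULL, 0);             /* parent waits for the child to finish */
    printf("parent: pid=%d  x=%d (unchanged: memory is not shared)\n",
           (int)getpid(), x);
    return 0;
}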
Mutual Exclusion
Mutual exclusion is the activity carried out by the operating system to prevent two or more
processes from entering a shared data area, or accessing the same resource, at the same
time. In other words, it is the condition by which, of a set of processes, only one can access
a given resource or perform a given function at a given instant of time. In multiprogramming
systems with a single processor, processes are interleaved in time to give the appearance of
simultaneous execution. One of the big problems we can encounter is that sharing resources
is full of risks.
For example, if two processes use a shared global variable at the same time and both carry
out read and write operations on that variable, the order in which those reads and writes are
executed is critical, since the final value of the variable will be affected. Mutual exclusion
consists of a single process temporarily excluding all others from using a shared resource,
in a way that guarantees the integrity of the system.
The critical section is the part of the program with a clearly marked beginning and end that
usually contains the update of one or more shared variables. For a solution to the mutual
exclusion problem to be valid, a series of conditions must be met:
• Mutual exclusion must be guaranteed between the different processes when accessing the
shared resource.
• There cannot be two processes inside their respective critical sections at the same time.
• No assumptions should be made about the relative speed of the conflicting processes.
• No process that is outside its critical section should prevent another process from entering
the critical section.
• When more than one process wishes to enter its critical section, it must be granted entry
within a finite time.
For the mutual exclusion problem there are three types of solutions (a sketch of the third,
using a mutex provided by the operating system, is given below):
• Software solutions
• Hardware solutions
• Solutions provided by the operating system
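The following sketch is added only as an illustration of the third kind of solution; it is not
part of the original report. A mutex provided by the operating system through the POSIX
threads library protects the critical section in which two threads update a shared counter
(compile with -pthread).

/* Sketch: mutual exclusion with a POSIX mutex around a shared counter. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                           /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                 /* enter the critical section */
        counter++;                                 /* read-modify-write on shared data */
        pthread_mutex_unlock(&lock);               /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* With the mutex the result is always 200000; without it, the two threads
     * could interleave their updates and some increments would be lost. */
    printf("counter = %ld\n", counter);
    return 0;
}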
Spooling
The spooling technique (Simultaneous Peripheral Operations On-Line) allows reading from
and writing to peripherals when needed, instead of having to wait for all jobs to be ready.
To do this, the operating system maintains a print queue, that is, a queue with information
on the jobs to be printed.
When non-shareable devices are used, it may happen that during periods of high demand,
several processes are blocked waiting for the peripherals to be used.
The spooling technique aims to avoid these losses of time by ensuring that the transfer is
carried out on an intermediate support and not directly on the peripheral.
Description:
Whenever a process opens a stream associated with a non-shareable device, the I/O routine
assigns it an anonymous file on the intermediate support, so that all output of the stream is
directed to it; when the stream is closed, the file is added to a queue containing similar files
created by other processes.
The spooler (an independent process associated with the non-shareable device) is
responsible for transferring the information of the files stored in that queue to the physical
device.
USES OF SPOOLING
Spooling is useful for devices that access data at different speeds, or in those cases where
there is no direct communication between the programs that write the data and those that
read it. Data in the temporary area can only be modified by adding to or removing from the
end of the area (generally there is no random access or editing). Spooling is also widely
used in printing (print spooling), where the documents to be printed are loaded into the
print spool and the printer takes them in due time to print them. Spooling allows documents
to be placed in a "print queue", from which they will be printed in that order while the user
does other tasks. Another use of spooling is email spooling, a temporary storage area for
emails to be sent by the mail transfer agent program. This type of spooling is different,
however, since it allows random access to the messages in the temporary storage area.
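The sketch below (added for illustration; not part of the original report) captures the
essential idea of a print spool as a FIFO queue: the application submits a job and returns
immediately, and a spooler later drains the queue at the printer's own pace.

/* Sketch: a tiny in-memory print spool (FIFO queue of job names), in C. */
#include <stdio.h>
#include <string.h>

#define MAX_JOBS 8

static char queue[MAX_JOBS][64];    /* spool area: pending print jobs */
static int head = 0, tail = 0;      /* FIFO indices */

/* The application only drops the job in the queue and returns at once,
 * as if the printing had already happened from its point of view. */
static int spool_submit(const char *name) {
    if (tail - head == MAX_JOBS)
        return -1;                  /* spool area is full */
    strncpy(queue[tail % MAX_JOBS], name, 63);
    queue[tail % MAX_JOBS][63] = '\0';
    tail++;
    return 0;
}

/* The spooler later drains the queue at the speed of the slow device. */
static void spooler_run(void) {
    while (head < tail) {
        printf("printing: %s\n", queue[head % MAX_JOBS]);
        head++;
    }
}

int main(void) {
    spool_submit("report.docx");
    spool_submit("invoice.pdf");
    spool_submit("photo.png");
    spooler_run();                  /* jobs come out in submission order */
    return 0;
}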
Kernel
The kernel is the central part of an operating system and is responsible for all secure
communication between the software and the computer's hardware. The kernel is the most
important part of the Unix operating system and its derivatives, such as Linux and all the
distributions that depend on it.
All operating systems have a Kernel, even Windows 10, but perhaps the most famous is the
Linux Kernel, which is now also integrated into Windows 10 with its latest updates.
This core of the operating system runs in privileged mode, with special access to system
resources, in order to serve the access requests made by the software that needs them. Since
resources are not unlimited, it also acts as an arbiter, deciding the order in which the
requests received are attended to according to their priority and importance. This is a very
important and fundamental task that in most cases goes unnoticed, even though it is
essential for coordinating all the hardware with the software.
The Windows kernel is proprietary and its code is kept private, so only Microsoft can make
modifications to it for the next versions of Windows 10. The same happens with macOS,
which is based on Unix but has a proprietary license, with modifications reserved for the
development team at Apple. Linux, on the other hand, has a public kernel under the GPL v2
license, and its code (or most of it) is available to be downloaded, examined, and even to
receive contributions and useful modifications from other users.
Microprogramming
Microprogramming is the process of writing microcode for a processor. Microcode is a
lower level that defines how a microprocessor should behave when it executes machine
language instructions. Typically, each machine language instruction is translated into
several microcode instructions, which are stored (depending on the computer) in ROM,
where they cannot be modified, or in EPROM, where they can be replaced by newer
versions.
The main characteristics of this firmware are:
• It is software that is generally located in read-only memory.
• It fetches machine language instructions and executes them as a series of small steps.
• The set of instructions it interprets defines the machine language.
• On certain machines it is implemented in the hardware and is not actually a distinct
layer.

The design of general-purpose microprocessors uses two techniques that lead to their
classification into two groups:
• Hardwired microprocessors: those that have a control unit specifically designed in silicon
for a specific instruction set. In this case, it is the hardware that is responsible for executing
level-2 instructions.
• Microprogrammed microprocessors: those that have a generic or predesigned control unit
and implement one instruction set or another depending on a software microprogram (a
step from level 2 to level 1).
Today, microprogramming has almost completely disappeared. This is due to:
• There are advanced tools for designing complex microprocessors with millions of
lithographed transistors. These tools practically guarantee the absence of design errors.
• Hardwired microprocessors have higher performance than any microprogrammed unit,
thus becoming more efficient and competitive.

II-INTRODUCTION TO OPERATING SYSTEMS

Structure of an Operating System

A system is made up of clearly distinguishable parts, its elements, related to each other in
some particular way; this combination constitutes its structure. The structure is an
arrangement attributed to the elements through their relationships. Elements can be material
objects, such as the metal parts of a bridge, or ideal ones, such as the words of a sentence.
That is, structures are real or ideal arrangements of elements.

When considering the internal organization of the operating system, we must observe two
types of requirements:

User requirements:

- Easy to use and learn system

- Secure

- Fast

- Suitable for the intended use


Software requirements:

- Maintenance

- Mode of operation

- Use restrictions

- Efficiency

- Error tolerance

- Flexibility

As user needs grew and systems were perfected, greater organization of the operating
system software became necessary: parts of the system contained subparts, and the whole
was organized in the form of levels.

The result is a hierarchical structure, with greater organization of the operating system
software: the operating system is divided into parts or levels, each one clearly defined and
with a clear interface (communication) with the rest of the elements.

Levels that make up the operating system

• Kernel (core): It is the main part of the operating system. The kernel, or core of the
operating system, manages the entire system and synchronizes all processes. At the kernel
level we work only with processes.

• Input/output management: The operating system manages external devices through their
drivers.

• Memory management: The operating system manages all aspects related to real memory
and virtual memory.

• File systems: The operating system takes care of managing the user's files through a
directory structure with some type of organization.

• Command interpreter: It is a communication mechanism between users and the system. It
reads user instructions and causes the requested system functions to execute.

Evolution of Operating Systems

Using a computer wasn't always so easy. Operating systems emerged as a necessity to be
able to use very complex machines at a time when highly specialized personnel were needed
to operate them. The evolution of operating systems was therefore closely linked to the
particular characteristics and needs of the available machines. It is difficult to talk about
operating systems without referring at the same time to the evolution of hardware, since
both aspects have advanced hand in hand for much of their history.

This section describes some milestones in the evolution of the software we know as the
operating system and highlights the emergence of concepts that persist in modern operating
systems. The division into generations is approximate in terms of years and is guided
mainly by the milestones that marked the hardware.

Prehistory of operating systems

The first machine that can be called a general-purpose digital computer was designed by the
English mathematician Charles Babbage (1791-1871), who designed a digital mechanical
machine (capable of working with digits), known as the analytical engine, or Babbage's
machine. Although he developed all the plans, he was never able to finish building it.

Charles Babbage (1791-1871) and the Analytical Engine. This reconstruction was made
from Babbage's designs.

Babbage's machine, however, did not have any software. The machine could be
"programmed" (a new concept for the time) using punched cards, a method that was
already used to configure machines in the textile industry. Ada Lovelace, a mathematician,
wrote a set of notes describing a procedure for calculating a sequence of Bernoulli numbers
using Babbage's machine. This document is considered the first program developed for a
computing machine, and Ada Lovelace the first programmer. The Ada programming
language was named in her honor.

Ada Lovelace (1815-1852) and the first algorithm for a computing machine. Ada died at
the age of 36 from uterine cancer.

First Generation (1945-55): Vacuum tubes

After Babbage's work, the development of programmable machines was relegated to the
realm of scientific research, without major practical applications. As has happened with so
many other inventions, it was the period of the Second World War that reinvigorated
interest in this type of machines.

The first electronic machines began to be developed, such as Konrad Zuse's Z3 (1941) and
the Atanasoff-Berry machine (1942). The computing flow of these machines was controlled
by electromechanical switches (relays) or by vacuum tubes. Being made up of hundreds or
thousands of these components, it was not unusual for one or more to fail during operation.
Some of these machines were programmable, although not all were "general purpose" or
Turing-complete.

In 1944, a group of scientists in Bletchley Park, England, including Alan Turing, built the
Colossus computer, the best-known model of which, the Colossus Mark 2, used 2,400
vacuum tubes. Although this computer was not Turing-complete either (which shows that it
is not enough to have Alan Turing to be Turing-complete), since it was designed for a
particular cryptographic task, it was programmable using paper tapes. It was important in
the process of decrypting the German Lorenz cryptosystem.
Colossus Mark 2 @ Bletchley Park. Reconstruction.

In 1946, John W. Mauchly and J. Presper Eckert built one of the first general-purpose
programmable computers at the University of Pennsylvania: the ENIAC (Electronic
Numerical Integrator and Computer). It had 20,000 vacuum tubes, weighed 27 tons,
occupied 167 m² and consumed 150 kW of electricity. Its input device was a punch card
reader and its output was a card punch (IBM 405). It had a 100 kHz clock and used 20
registers of 10 decimal digits. There was no programming language, not even an assembly
language, so all computation was described on the punched cards using machine code.

Second Generation (1955-65): Transistors and Batch Systems

The creation of transistors in the 1950s revolutionized the construction of electronic
devices, drastically reducing failure rates compared to hardware built with vacuum tubes
and increasing response speed. Large computers based on transistors, known as mainframes,
began to be built. Due to their construction cost, a computer of this type was only accessible
to large corporations, governments and universities.

The operation of a mainframe required the collaboration of several actors. A mainframe
executes jobs, which consist of the code of a program or a sequence of programs. Programs
are entered using punch cards and are written in assembly language. In 1953, John W.
Backus, at IBM, proposed an alternative to make the description of programs more practical
than assembler, and developed the FORmula TRANslating system, known as the FORTRAN
language, along with a tool for translating it to assembler called a compiler. This work
would earn him the Turing Award in 1977.

A program written in FORTRAN on punched cards is delivered as input to a card reader.
The card reader writes onto a tape that is delivered to the main machine, which executes the
instructions, a process that could take hours depending on the complexity of the
computation, and writes the result onto another output tape. The output tape is read by
another device capable of printing the contents of the tape on paper. At that moment the
execution of the job ends.

Note that during the time a device is reading punch cards, both the processing device and
the output device are not doing any useful work. Given the cost of the equipment, it was
inconvenient to have these units on standby while a punched card was translated to
magnetic tape. This is why solutions such as the batch processing system were developed.
In this model, a programmer hands his punch cards to an operator (another person) who
enters the cards into a card reader unit (IBM 1402). When there are a sufficient number of
jobs, the operator takes the output tape and moves it (physically) to a processing device
such as the IBM 1401 (3 registers, 6-bit word with BCD encoding) or the more powerful
IBM 7094 (7 registers, 36-bit word, and 15-bit address space: 32768 words). The operator
loads a first program (something similar to an operating system) that prepares the computer
to read a series of jobs from the tape. While the processing device performs computing
tasks, the IBM 1402 could continue reading the next set of cards. The output from the
processing device went to a magnetic output tape. The operator must again take this tape,
take it to a printing device (IBM 1403) that transfers the contents of the magnetic tape to
paper offline. That is, not connected to the processing device.

This type of computer was used primarily for scientific and engineering computing.
Programs that allowed these computers to sequentially process a number of jobs were some
of the first to fulfill the task of an operating system, such as FMS (FORTRAN Monitor
System, basically a FORTRAN compiler), and the IBM 7094 system, IBSYS.

An IBM 1620, like the one at the DCC. It used BCD-encoded words and was capable of
storing up to 20,000 digits. It did not have an ALU: it used 100-digit tables for addition and
subtraction, and a 200-digit table for multiplication. Division was carried out through
software subroutines. It had a 1 MHz clock and a memory access time of 20 µs.

Third Generation (1965-1980): Integrated Circuits and Multiprogramming

In the 1960s, the mainframes of IBM (International Business Machines Corporation), the
most important computer equipment manufacturing company of the time, each required
different software and peripherals to function, since the instructions were not compatible. A
program made for one model had to be rewritten when a new hardware model was
introduced. The company decides to unify the hardware under a family called System/360.
This was the first major line based on new integrated circuit technology capable of
integrating large numbers of small transistors, providing a huge price/performance
advantage over traditional transistors.
OS/360 and multiprogramming

The idea of having a line of mutually compatible, general-purpose hardware required a
system capable of working on all models. This system was OS/360. The resulting software
was enormously large (millions of lines of assembly) and complex to develop, with
numerous bugs, at a time when software engineering was not yet developed as a discipline.
Project manager Fred Brooks described his experiences in the book "The Mythical
Man-Month", a software engineering classic. His contributions to this new discipline earned
him the Turing Award in 1999.
The history of Linux begins in the 90s, after the Free Software Foundation had been formed
during the previous years and the GNU General Public License had been developed. The
large amount of free software accumulated by the early 90s meant that a complete operating
system could be put together once Linus Torvalds, who started the project, wrote the kernel
that would become Linux.

Fourth Generation (1980-Present): Personal Computers

The technological development of integrated circuits reached the level known as VLSI
(Very Large Scale Integration), capable of integrating up to 1 million transistors in a 1 cm²
chip, allowing up to 100,000 logic cells. Computing systems for personal use, called
microcomputers, emerged; in principle they were not technologically much superior to the
PDP-11, but they came at a significantly lower price.

Intel 8080, CP/M and the takeoff of microcomputers

In 1974, Intel introduced the Intel 8080 chip, an 8-bit general-purpose CPU with a 2 MHz
clock, successor to the 4004 and 8008, the first microprocessors on the market. It was part
of the popular MITS Altair 8800, which began the era of microcomputers.

Apple and the evolution of MacOS

It would not be until the development of the Apple Lisa (1983) and the Apple Macintosh
(1984), the first personal computers to include a graphical interface, that GUIs would
become popular, bringing computer use closer to the general public and incorporating the
concept of user friendliness. It is said that Steve Jobs, co-founder of Apple Computer Inc.,
had the idea of incorporating a GUI into his next computer (the Lisa) after a visit he made
in 1979 to Xerox PARC; however, there are testimonies indicating that the plan of
incorporating a GUI into the Apple Lisa existed prior to that visit (Steve Jobs and the Apple
engineers had enough reasons to visit Xerox PARC in any case, and the visit actually
occurred). In any case, the Apple Macintosh was widely popular, particularly in the field of
graphic design.

Microsoft and the evolution of Windows

Strongly influenced by the success of the Apple Macintosh, in the early 1980s Microsoft
was planning a successor to MS-DOS that would have its own GUI. Their first attempt was
a window management system called Windows 1.0 (1985) that ran as an application on top
of MS-DOS. The version that achieved the greatest adoption was Windows 3.11, for 16-bit
systems. It was in 1995, with the launch of Windows 95 and then Windows 98, that code
was incorporated to take advantage of the new 32-bit CPUs, even though part of the
operating system still had to support 16-bit applications for backward compatibility. MS-
DOS was still used to boot the system and as underlying support for older applications.

Since 1987, Microsoft had worked together with IBM to build a GUI operating system.
This system was known as OS/2; however, it never achieved great popularity against the
Macintosh and Windows 9x themselves. Eventually Microsoft took some of the work
developed for OS/2 and reimplemented Windows using fully 32-bit code. This new system
was called Windows NT (Windows New Technology), while OS/2 was eventually
abandoned by IBM.
Fifth Generation (1990-Present): Mobile Computers

Until 1993, mobile phone devices were nothing more than communication devices that
used dedicated, embedded systems to manage their hardware. The concept of using these
devices to carry out activities beyond telephony arose with the devices known as PDAs
(Personal Digital Assistants), among which is the Apple Newton, which included the
Newton OS operating system written in C++, with a pioneering use of handwriting
recognition. It had an API for applications developed by third parties, but it did not obtain
wide adoption.

Perhaps the first device called a smartphone was the IBM Simon, with a touch screen
interface (with stylus) and a ROM-DOS operating system, compatible with MS-DOS and
developed by the company Datalight. Its one-hour battery life did not allow it to compete
with new devices.

Nokia and SymbianOS

The success of Palm led other mobile phone players such as Nokia to co-found, and later
fully acquire, Symbian Ltd. The founding consortium included Psion, the company behind
EPOC, a 32-bit single-user operating system with preemptive multitasking from 1998,
which under Symbian would become Symbian OS, whose first version (6.0) was used in
the Nokia 9210 Communicator. Symbian OS ran on ARM processors, a RISC
architecture. At its peak, Symbian OS was the system of choice for manufacturers such as
Samsung, Motorola, Sony Ericsson, and mainly Nokia. It had a microkernel called EKA2
that supported preemptive multithreading, memory protection, and scheduling for real-time
tasks (RTOS). It had an object-oriented design and was written in C++. Symbian OS
dominated a large part of the mobile operating systems market until its gradual
abandonment by Samsung, Sony Ericsson and eventually Nokia (which would replace it
with Windows Phone), which caused it to lose ground to the emergence of iOS and
Android.

Microsoft and Windows Phone

Since 1996, Microsoft had developed an embedded operating system called Windows CE
(currently Windows Embedded Compact) designed for a platform specification initially
called Pocket PC. The first Windows CE devices were released in 2002. Windows CE
contained a hybrid kernel written in C and supported x86, ARM, MIPS, and PowerPC
architectures. The series of mobile operating systems based on Windows CE was known as
Windows Mobile (including the Zune media player) and was developed until 2010.
Microsoft would later reimplement its mobile operating system based on the Windows NT
line, giving rise to Windows Phone, a line that was discontinued in 2017 due to the little
interest of developers in generating applications for this platform given the dominance of
iOS and Android.

RIM and Blackberry OS

In 2002 the Canadian company Research In Motion (RIM) developed its own line of
mobile devices known as BlackBerry and its own operating system BlackBerry OS (RIM
would eventually change its name to BlackBerry Ltd.). BlackBerry OS was a multitasking
system with support for applications through the special platform for embedded devices
Java Micro Edition (JavaME). It included support for WAP, a mobile communication
protocol stack that was no longer adopted when mobile devices became powerful enough to
process the traditional TCP/IP stack. In 2010, BlackBerry OS was replaced by BlackBerry
10, a system based on the QNX real-time microkernel (RTOS). Since 2016, devices
produced by BlackBerry started using Android instead of BlackBerry 10, support for which
has been announced at least until 2019.

Apple: the iPhone and iOS

In 2007, the entry of one of the main competitors occurred when Apple presented its iPhone
along with its iOS operating system (originally iPhone OS). iOS, like MacOSX, is based on
the hybrid XNU kernel and the Darwin (UNIX-like) operating system. Since 2010, with
iOS 4, the system added API support for multitasking by user applications. Previously
multitasking was restricted to only certain system services. The availability of the iOS SDK
(Software Development Kit) attracted the development of multiple native applications
available from an online store (App Store), quickly popularizing the use of the iPhone and
positioning it as one of the main competitors.

Android, Google's entry

Months after the launch of the first iPhone, a group of companies led by Google, including
HTC, Sony, Dell, Intel, Motorola, Samsung, LG, Nvidia, among others, form the Open
Handset Alliance (OHA). With the support of OHA, Google launched the first version of
Android in 2008, an open source monolithic (UNIX-like) operating system based on the
Linux kernel. Android began its development under the company Android, Inc. founded in
2003. In 2005 Google acquired Android, Inc. and it was under its leadership that the
development team finished the first version of Android 1.0. Similar to the App Store,
Android launched the Android Market (later Google Play Store), and the Android SDK for
developing applications (written primarily in Java, and recently in Kotlin) for third parties.
The support of OHA, made up of important players in the smartphone market, was key to
positioning Android as the dominant operating system on mobile devices since 2010, with
iOS as its only (and distant) real competitor.

Unlike common Linux distributions, Android does not use the traditional GNU C library
glibc, but rather an alternative implementation of the C library developed by Google, called
Bionic, which has a smaller memory footprint than glibc and was designed for CPUs
running at lower clock frequencies (and therefore optimized for lower energy consumption).
Bionic, however, does not implement the entire POSIX interface, making Android
non-POSIX compliant.

Objective and functionality of each of the parts that make up an Operating System

The operating system is the core of a computer: without this complex piece of software,
other programs cannot function. The tasks it is in charge of are very diverse; some run
completely in the background, and many occur in parallel.

• Hardware management: This function, one of the most important of the operating system,
usually remains in the background, that is, hidden from the user. The operating system
manages all the hardware, both input and output. To do this, it uses drivers provided by
hardware manufacturers, which serve to receive and forward commands from devices, as
well as to transfer the system's own commands to the hardware. This is how the keyboard,
mouse, screen, hard drive, graphics card and all other components of a computer work.

• Software management: Generally, when you download a program from the Internet, you
can choose between several versions for different operating systems on the download page,
which shows the extent to which application programming is tied to the specifications of
the operating system. Operating systems have interfaces that regulate communication with
all applications; in this way, it is possible to assign memory to them, allow them to use
processor resources, or pass on actions carried out with the keyboard and mouse.

• File management: If you have written a document, you can print it (for which the program
passes the order to the operating system, which in turn passes it to the printer) or you can
save it as a file in a folder. Being able to work with a folder structure is only possible
thanks to the operating system, since that order does not exist on the hard drive itself.

• Rights management: In certain situations, for example, in the business environment,
several people work with the same device. However, not everyone should be able to
configure the system. Therefore, modern operating systems allow you to create different
users and grant them rights individually. Additionally, each account can be protected with a
password.

• User orientation: Everyone should be able to use a computer without any problem, even
people without much computer knowledge. Therefore, it is important that the operating
system makes functions and options as easy as possible. The most important aspects should
also be easy to use for the basic user, although many operating systems (especially PC ones)
offer additional options for professionals.

• Network functions: As the operating system manages the hardware, it is also responsible
for the network card and therefore the connection to the Internet and other networks. It is
usually possible to configure the computer as a network node via the operating system and,
for example, assign it a specific IP address. In the settings, you can also enter the
specifications of the LAN and other subnets so that the device can connect to other
computers. The network settings also allow you to set the DNS server individually.

• Security measures: Traditionally, security is not a task specific to the operating system,
although it can also be added to its functions through additional software. As computers
constantly connected to the Internet are exposed to dangers, operating systems have also
implemented their own security measures. For example, Windows already has a built-in
firewall and antivirus.

Operation of an Operating System

The operating system provides the computer with basic routines to control all the devices
on the computer and to manage, schedule and coordinate tasks.

Its main task is to manage the computer's tasks and resources, coordinate the hardware, and
organize the files and directories on the computer's storage devices.

But it also performs other functions, such as managing the sharing of internal memory
among the various applications; executing several programs at the same time and
determining in what order and for how long they should run; and dealing with the input and
output of the hardware devices that are connected, such as hard drives, printers or ports.

Some operating systems allow you to manage a large number of users and others control
hardware devices. One of the best-known functions of the operating system is to load into
memory and facilitate the execution of the programs that the user uses.

When a program is running, the operating system continues its work, since many programs
need, for example, access to the keyboard, the video, the printer, or the hard drive in order
to read and write files.

The operating system has a great responsibility, since it makes sure that all the programs
and components of the computer work well.
The operating system is made up of a set of software packages that can be used to manage
interactions with the hardware.

These packages generally include the following:

• The kernel, which implements the basic functions of the operating system, such as the
management of memory, processes, files, etc.
• The command interpreter, which makes communication with the operating system
possible through a control language, allowing the user to control the peripherals without
needing to know the characteristics of the hardware used.
• The file system, which allows files to be recorded in a tree structure.

Services of an Operating System

Services are small scripts that perform actions on a selection using the functionalities of the
application that installs them, or that work independently. Depending on whether they are
installed from an application or are integrated into the system, they will have different
locations.

• User interface

Almost all operating systems have a user interface (UI), which can take different forms.
One existing type is the command-line interface (CLI), which uses text commands; another
is the graphical user interface (GUI), composed of windows.

• Program Execution

The system must be able to load a program and execute said program. Every program must
be able to terminate its execution, normally or abnormally (indicating an error).

• I/O operations

A running program may need to perform I/O operations directed to a file or an I/O device.
For certain devices it is desirable to have special functions. For efficiency and protection,
users cannot control I/O devices directly; the operating system must provide the means to
perform the I/O (a small sketch follows below).
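As a small illustration (not from the original text, and assuming a POSIX environment), the
following C sketch shows a program performing I/O only by asking the operating system
through system calls; the kernel, not the program, drives the actual device.

/* Sketch: performing I/O through the operating system (POSIX system calls). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Ask the OS to create/open a file; the kernel checks permissions. */
    int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");               /* the kernel refused or failed the request */
        return 1;
    }
    const char msg[] = "hello, operating system\n";
    ssize_t n = write(fd, msg, sizeof msg - 1);   /* the kernel performs the device I/O */
    if (n < 0)
        perror("write");
    close(fd);                        /* release the kernel resources for this file */
    printf("wrote %zd bytes via the OS\n", n);
    return 0;
}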

• File system manipulation

The file system is of special importance. Obviously, programs need to read and write files
and directories. They also need to create and delete them by name, search for a certain file,
or list the information contained in a file. Some programs include permission management
mechanisms to grant or deny access to files or directories depending on who the owner is.
• Communications

There are many circumstances in which one process needs to exchange information with
another. Such communication can take place between processes running on the same
computer or between processes on different computers connected by a network.
Communications can be implemented using shared memory or by message passing, a
procedure in which the operating system transfers packets of information between processes
(see the sketch below).
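The sketch below is added only as an illustration (not part of the original report) of message
passing handled by the operating system: a parent and a child process communicate through
a POSIX pipe, and the kernel carries the bytes from one to the other.

/* Sketch: inter-process communication through a pipe (POSIX C). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) < 0) {                 /* fd[0] = read end, fd[1] = write end */
        perror("pipe");
        return 1;
    }
    pid_t pid = fork();
    if (pid == 0) {                     /* child: sends a message */
        close(fd[0]);
        const char msg[] = "hello from the child process";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                       /* parent: receives what the kernel carried */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf);
    if (n > 0)
        printf("parent received: %s\n", buf);
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}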

• Error detection

The operating system needs to constantly detect possible errors. These errors can occur in
processor and memory hardware, on an I/O device, or in user programs. For each type of
error, the operating system must perform the appropriate operation to ensure correct and
consistent operation.

• Resource allocation

When there are multiple users, or there are multiple jobs running at the same time, each of
them must be assigned the necessary resources. The operating system manages many
different types of resources; some may have special software code that manages their
assignment, while others may have code that manages their request and release much more
generally.

Characteristics of an Operating System

• It is the intermediary between the user and the hardware.

• It is necessary for the operation of all computers, tablets and mobile phones.

• Provides security and protects computer programs and files.

• It is designed to be user friendly and easy to use.

• It allows you to efficiently manage computer resources.

• Most require payment of a license for use.

• Allows you to interact with multiple devices.

• It is progressive, since there are constantly new versions that are updated and adapted to
the user's needs.
Types of Operating System

Operating system types vary depending on the hardware and function of each device. There
are some for computers and others for mobile devices.

• Depending on the user, they can be: multi-user, an operating system that allows several
users to run their programs simultaneously; or single-user, an operating system that only
allows one user's programs to run at a time.

• Depending on task management, they can be: single-task, an operating system that only
allows one process to run at a time; or multitasking, an operating system that can run
several processes at the same time.

• Depending on resource management, they can be: centralized, an operating system that
only allows the resources of a single computer to be used; or distributed, an operating
system that allows the processes of more than one computer to run at the same time.

Some examples of Operating Systems:

1. Microsoft Windows. One of the most popular systems in existence. Initially it was a set
of distributions, or graphical operating environments, whose role was to provide older
operating systems such as MS-DOS with a visual interface and other software tools. It was
first published in 1985 and has since been updated in successive versions.

2. MS-DOS. This is the MicroSoft Disk Operating System, one of the most common
operating systems for IBM personal computers during the 1980s and mid-90s. It had
a series of internal and external commands displayed on a dark screen sequentially.

3. UNIX. This operating system was developed in 1969 to be portable, multitasking and
multiuser. It is actually an entire family of similar operating systems, some of whose
distributions have been offered commercially and others in free form; the free branch is
built around the kernel called Linux.

4. macOS. It is the operating system for Apple's Macintosh computers, also known as
OSX or Mac OSX. Based on Unix and developed and sold on Apple computers
since 2002, it is the staunchest competitor to the popular Windows.

5. Ubuntu. This operating system is free and open source, that is, everyone can modify
it without violating copyrights. It takes its name from a certain ancient South
African philosophy, focused on man's loyalty to his own species above all else.
Based on GNU/Linux, Ubuntu is oriented towards ease of use and total freedom.
The British company that distributes it, Canonical, survives by providing technical
service.
6. Android. This operating system, based on the Linux kernel, runs on cell phones, tablets
and other devices with touch screens. It was developed by Android Inc. and later purchased
by Google, becoming so popular that sales of Android systems surpass those of iOS (for
Apple cell phones) and Windows Phone (for Microsoft cell phones).

Aspects that affect the design of an Operating System

1. REENTRANCY:

A program or module that can be used simultaneously by several users at the same time.

It consists of two parts:
- pure code (the non-modifiable part);
- a memory area for each of the user processes.
[Diagram in the original: main memory holds the operating system, a single copy of the
reentrant program, and one data area per user (user information 1, 2, ..., n).]

2. INTERRUPTS:

• An interrupt is an event that alters the normal sequence of operation of the processor.

Activities (a user-space analogy in C follows this list):

1. The OS takes control of the computer.
2. The OS saves the state of the interrupted process.
3. Interrupts are disabled.
4. The OS analyzes the interrupt.
5. The interrupt is processed (handling routine).
6. The state of the interrupted process is restored.
7. Interrupts are enabled.
8. The processor continues the execution of the process.
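The sketch below is not part of the original report; it is only a user-space analogy of the
steps above, using POSIX signals. The kernel saves the interrupted context, runs the handler
with further SIGINT delivery blocked (similar to disabling interrupts), and then restores the
context so the program resumes.

/* Sketch: a user-space analogy of interrupt handling using POSIX signals. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t interrupts = 0;

static void on_interrupt(int sig) {
    (void)sig;
    interrupts++;                      /* the "handling routine": short and simple */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_interrupt;
    sigemptyset(&sa.sa_mask);
    sigaddset(&sa.sa_mask, SIGINT);    /* block further SIGINTs while one is handled */
    sigaction(SIGINT, &sa, NULL);

    printf("press Ctrl-C a few times; the program ends after 10 iterations\n");
    for (int i = 0; i < 10; i++) {
        sleep(1);                      /* the wait is cut short by a signal, then the loop continues */
        printf("tick %d, interrupts so far: %d\n", i, (int)interrupts);
    }
    return 0;
}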

3. I/O PROCESSORS:

• A special-purpose processor dedicated to controlling I/O operations, independent of the
CPU.
• They execute instructions (commands) grouped into programs called "channel programs".
[Diagram in the original: the CPU starts an I/O operation, the I/O processor moves data
between main memory and the I/O devices, and signals the end of the I/O.]

4. CLOCKS:

• Interval timer: after a certain time interval, the clock generates an interrupt as a warning
signal to the processor; useful in multi-user systems to prevent one job from monopolizing
the CPU.
• Time and date: maintains the time and calendar in the system.

5. SPOOL (simultaneous peripheral operations on line):

Consists of interposing a high-speed device between an executing program and a low-speed
device related to the input/output of the program.
[Diagram in the original: program → CPU → disk (spool) → printer.]

6. EMULATION:

• A technique that allows one computer to behave as if it were another.
• The machine language programs of the "emulated" machine can be executed directly on
the host machine.
• Equipment manufacturers use this technique when introducing new systems.

7. MICROPROGRAMMING:

* Programs made up of microinstructions (primitives).
* Each machine language instruction that can be executed by the processor has its
corresponding microprogram.
* Implemented in ROM memory.
