Computer Basics
OCN-3205
(Computer Programming for Oceanography)
Lectures 01-04 in two days
(Handout)
British mathematics professor Charles Babbage constructed a small Difference Engine during
1819 – 1822 and later originated the concept of a digital programmable computer, the Analytical
Engine. Augusta Ada Byron (10 December 1815 – 27 November 1852), the only legitimate child of
the British poet Lord Byron, is widely regarded as the first computer programmer. The computer
language Ada, named after her, is very powerful and is still in use.
Generally, computers are classified into five generations. Each generation lasted for a certain
period of time, and each gave us either a new and improved computer or an improvement to the
existing computer.
First generation: 1940 – 1956 – These computers used vacuum tubes for circuitry. They were
enormous, expensive to operate and unreliable.
Second generation: 1956 – 1963 – This generation of computers used transistors instead of
vacuum tubes, which made them smaller, faster and more reliable. Already in 1951 the first
computer for commercial use, the Universal Automatic Computer (UNIVAC I), had been introduced to
the public, and in 1953 the International Business Machines (IBM) 650 and 700 series computers
made their mark in the computer world. During this generation over 100 computer programming
languages were developed, and computers gained memory and operating systems. Storage media such
as tape and disk came into use, as did printers for output.
Third generation: 1964 – 1971 – The invention of the Integrated Circuit (IC) brought us the third
generation of computers. With this invention computers became smaller, more powerful and more
reliable, and they were able to run many different programs at the same time.
Fourth generation: 1972 – 2010 – Computers of this generation used microprocessors as their
CPU. In 1980 the Microsoft Disk Operating System (MS-DOS) was born, and in 1981 IBM introduced
the personal computer (PC) for home and office use. Three years later Apple gave us the Macintosh
computer with its icon-driven interface.
Fifth generation: 2010 – Present – Artificial Intelligence (AI) was introduced in this
generation. Earlier, in the 1990s, Microsoft gave us the Windows operating system.
As a result of the various improvements to the development of the computer we have seen the
computer being used in all areas of life. It is a very useful tool that will continue to experience new
development as time passes.
Development history of computers at a glance: (timeline figure not reproduced in this handout)
Among its other functions, a computer carries out logical operations − comparison operations
like greater than, less than, equal to, opposite, etc.
Components of a Computer
A block diagram of an actual computer would show the following units −
• Input Unit − Devices like keyboard, scanner and mouse etc. that are used to input data and
instructions to the computer are called input unit.
• Output Unit − Devices like printer and visual display unit that are used to provide information
to the user in desired format are called output unit.
• Control Unit − As the name suggests, this unit controls all the functions of the computer. All
devices or parts of computer interact through the control unit.
• Arithmetic Logic Unit − This is the brain of the computer where all arithmetic operations and
logical operations take place.
• Memory − All input data, instructions and data interim to the processes are stored in the
memory. Memory is of two types – primary memory and secondary memory. Primary
memory resides within the CPU whereas secondary memory is external to it.
Control unit, arithmetic logic unit and memory are together called the Central Processing Unit or
CPU. Computer devices like keyboard, mouse, printer, etc. that we can see and touch are the
hardware components of a computer. The set of instructions or programs that make the computer
function using these hardware parts are called software. We cannot see or touch software. Both
hardware and software are necessary for working of a computer.
Input Devices:- An input device is any hardware component that allows you, the user, to enter data
into the computer. There are many input devices. Six of the most widely used input devices are:
1. A keyboard -- You use the keyboard to type letters, numbers, and symbols into the computer.
2. A Mouse -- The mouse is a pointing device that controls a pointer on the screen; the pointer
changes into different shapes as you use the mouse. You click the mouse by pressing and releasing a
button. This action allows you to enter data and commands when using a mouse.
3. A Scanner -- This input device copies from paper into your computer.
4 . A Microphone -- The microphone is usually used for voice input into the computer.
5. A Digital Camera -- The digital camera allows you to take pictures that you can input into your
computer.
6. A PC Video Camera:- The PC video camera allows you to take both video and still
images that you can input into your computer.
Output Devices:- An output device is any hardware component that gives information to the user.
Three commonly used output devices are as follows:
1. A Monitor -- This output device displays your information on a screen.
2. A Printer -- This output device prints information on paper. This type of printed output is called a
hard copy.
3. A Speaker -- Sound is the type of output you will get from a speaker.
Characteristics of Computer
To understand why computers are such an important part of our lives, let us look at some of their
characteristics −
• Speed − A typical modern computer can carry out millions, even billions, of instructions per second.
• Accuracy − Computers exhibit a very high degree of accuracy. Errors that do occur are
usually due to inaccurate data, wrong instructions or bugs in chips – all human errors.
• Reliability − Computers can carry out the same type of work repeatedly without errors caused
by tiredness or boredom, which are very common among humans.
• Versatility − Computers can carry out a wide range of work, from data entry and ticket booking
to complex mathematical calculations and continuous astronomical observations. If you can
input the necessary data with correct instructions, the computer will do the processing.
• Storage Capacity − Computers can store a very large amount of data at a fraction of the cost of
traditional file storage. Also, data is safe from the normal wear and tear associated with paper.
• Computers can take up routine tasks while releasing human resource for more intelligent
functions.
• Computers have no intelligence; they follow the instructions blindly without considering the
outcome.
• A regular electric supply is necessary to make computers work, which can be difficult to
ensure everywhere, especially in developing nations.
Booting
Starting a computer or a computer-embedded device is called booting. Booting is of two types
−
• Cold Booting − When the system is started by switching on the power supply it is called cold
booting. The next step in cold booting is loading of BIOS.
• Warm Booting − When the system is already running and needs to be restarted or rebooted, it
is called warm booting. Warm booting is faster than cold booting because BIOS is not reloaded.
Classification of Computers
Historically, computers were classified according to processor type, because developments in
processors and processing speeds were the benchmarks of progress. The earliest computers used
vacuum tubes for processing; they were huge and broke down frequently. However, as vacuum tubes
were replaced by transistors and then chips, computers' size decreased and processing speeds
increased manifold.
All modern computers and computing devices use microprocessors whose speeds and storage
capacities are skyrocketing day by day. The developmental benchmark for computers is now their size.
Computers are now classified on the basis of their use or size −
• Desktop
• Laptop
• Tablet (Tab) or PAD (eg. iPAD)
• Server
• Mainframe
• Supercomputer
Let us look at all these types of computers in detail.
Desktop Computers
Desktop computers are personal computers (PCs) designed for use by an individual at a fixed
location. IBM was the first company to introduce and popularize the desktop. A desktop unit
typically has a CPU (Central Processing Unit), monitor, keyboard and mouse. The introduction of
desktops popularized the use of computers among common people, as they were compact and
affordable. Riding on the wave of the desktop's popularity, many software and hardware products
were developed specially for the home or office user. The foremost design consideration was user
friendliness.
Laptop Computers
Despite their huge popularity, desktops gave way to a more compact and portable personal
computer, the laptop, in the 2000s. Laptops are also called notebook computers or simply
notebooks. Laptops run on batteries and connect to networks using Wi-Fi (Wireless Fidelity)
chips. They also have chips for energy efficiency, so that they can conserve power whenever
possible and run longer on a charge. Modern laptops have enough processing power and storage
capacity to be used for all office work, website designing, software development and even
audio/video editing.
Tablet
After laptops, computers were further miniaturized into machines that have the processing power of
a desktop but are small enough to be held in one's palm. Tablets have a touch-sensitive screen,
typically 5 to 10 inches, on which a finger is used to touch icons and invoke applications. A
keyboard is also displayed virtually whenever required and is used with touch strokes.
Applications that run on tablets are called apps. Tablets use operating systems by Microsoft
(Windows 8 and later versions) or Google (Android). Apple has developed its own tablet, the iPad,
which uses a proprietary OS called iOS.
Computer Server
Servers are computers with high processing speeds that provide one or more services to other systems
on the network. They may or may not have screens attached to them. A group of computers or digital
devices connected together to share resources is called a network.
Mainframe Computers
Mainframes are computers used by organizations like banks, airlines and railways to handle
millions of online transactions per second. Important features of mainframes are −
• Big in size
• Hundreds of times faster than typical servers
• Very expensive
• Use proprietary OS provided by the manufacturers
• In-built hardware, software and firmware security features
Supercomputers
Supercomputers are the fastest computers on Earth. They are used for carrying out complex, fast and
time-intensive calculations for scientific and engineering applications. Supercomputer speed or
performance is measured in teraflops, i.e. 10^12 floating point operations per second.
The Chinese supercomputer Sunway TaihuLight, rated at 93 petaflops, i.e. 93 quadrillion floating
point operations per second, was the world's fastest supercomputer at the time of writing.
Most common uses of supercomputers include weather forecasting, climate research, molecular
modelling and scientific simulations.
Software
Software can be broadly classified into three types −
• System Software
• Application Software
• Utility Software
Let us discuss them in detail.
System Software
Software required to run the hardware parts of the computer and other application software are called
system software. System software acts as interface between hardware and user applications. An
interface is needed because hardware devices or machines and humans speak in different languages.
Machines understand only binary language i.e. 0 (absence of electric signal) and 1 (presence of
electric signal) while humans speak in English, French, German, Tamil, Hindi and many other
languages. English is the predominant language for interacting with computers. Software is required to
convert all human instructions into machine understandable instructions. And this is exactly what
system software does.
Based on its function, system software is of three types −
• Operating System
• Language Processor
• Device Drivers
Language Processor
As discussed earlier, an important function of system software is to convert all user instructions into
machine understandable language. When we talk of human machine interactions, languages are of three
types −
• Machine-level language − This language is nothing but a string of 0s and 1s that the machines
can understand. It is completely machine dependent.
• Assembly-level language − This language uses mnemonics like ADD, MOV, etc. in place of binary
strings. It is machine dependent but easier for humans to read and write than machine language.
• High level language − This language uses English like statements and is completely
independent of machines. Programs written using high level languages are easy to create, read
and understand.
A program written in a high level programming language like BASIC, FORTRAN, Java, C++, etc. is
called source code. The set of instructions in machine readable form is called object code or machine
code. System software that converts source code to object code is called a language processor. There
are three types of language processors −
• Assembler − Converts assembly-level programs into machine-level code.
• Interpreter − Converts high level programs into machine level program line by line.
• Compiler − Converts high level programs into machine level programs at one go rather than
line by line.
Device Drivers
System software that controls and monitors functioning of a specific device on computer is called
device driver. Each device like printer, scanner, microphone, speaker, etc. that needs to be attached
externally to the system has a specific driver associated with it. When you attach a new device, you
need to install its driver so that the OS knows how it needs to be managed.
Application Software
Software that performs a specific task and nothing else is called application software. Application
software is very specialized in its function and approach to solving a problem. So spreadsheet
software can only do operations with numbers and nothing else, and hospital management software
will manage hospital activities and nothing else. Here are some commonly used types of application
software −
• Word processing
• Spreadsheet
• Presentation
• Database management
• Multimedia tools
Utility Software
Application software that assists system software in doing its work is called utility software. Thus
utility software is actually a cross between system software and application software. Examples of
utility software include −
• Antivirus software
• Disk management tools
• File management tools
• Compression tools
• Backup tools
Day 1: Lecture - 02
System Software
As you know, system software acts as an interface for the underlying hardware system. Here we will
discuss some important system software in detail.
Assembler
Assembler is a system software that converts assembly level programs to machine level code.
Interpreter
The major advantage of assembly level language was its ability to optimize memory usage and
hardware utilization. However, with technological advancements computers had more memory and
better hardware components. So ease of writing programs became more important than optimizing
memory and other hardware resources.
In addition, a need was felt to take programming out of a handful of trained scientists and computer
programmers, so that computers could be used in more areas. This led to development of high level
languages that were easy to understand due to resemblance of commands to English language.
The system software used to translate high level language source code into machine level language
object code line by line is called an interpreter. An interpreter takes each line of code and converts it
into machine code and stores it into the object file.
The advantage of using an interpreter is that they are very easy to write and they do not require a large
memory space. However, there is a major disadvantage in using interpreters, i.e., interpreted programs
take a long time in executing. To overcome this disadvantage, especially for large programs,
compilers were developed.
Compiler
System software that stores the complete program, scans it, translates the whole program into object
code and then creates an executable code is called a compiler. On the face of it, compilers compare
unfavorably with interpreters because they are more complex to write and need more memory space;
however, once created, the compiled program executes much faster.
These are the steps in compiling source code into executable code −
• Lexical analysis − Here all instructions are converted to lexical units like constants, variables,
arithmetic symbols, etc.
• Parsing − Here all instructions are checked to see if they conform to grammar rules of the
language. If there are errors, compiler will ask you to fix them before you can proceed.
• Compiling − At this stage the source code is converted into object code.
• Linking − If there are any links to external files or libraries, addresses of their executable will
be added to the program. Also, if the code needs to be rearranged for actual execution, they will
be rearranged. The final output is the executable code that is ready to be executed.
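The compile-then-execute flow can be illustrated with Python's built-in compile(), which
translates a whole program into a bytecode object (its "object code") before any of it runs. This
is a simplified analogy for the stages above, not a native-code compiler:

```python
# Sketch of the compile-then-execute flow using Python's built-in compile():
# the whole source text is translated first, then executed in one go.
source = """
total = 0
for n in range(1, 6):
    total += n
"""

# Lexical analysis, parsing and code generation all happen inside compile();
# a SyntaxError raised here corresponds to a failed parse.
code_object = compile(source, filename="<example>", mode="exec")

# Execution starts only after the whole program has been translated.
namespace = {}
exec(code_object, namespace)
print(namespace["total"])  # sum of 1..5
```

By contrast, an interpreter would translate and run each statement as it reaches it, which is why
interpreted programs start immediately but run more slowly overall.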
Functions of OS
As you know, the operating system is responsible for the functioning of the computer system. To do
that it carries out these broad categories of activities − processor management, memory
management, file management and device management.
Processor management
Managing a computer’s CPU to ensure its optimum utilization is called processor management.
Managing the processor basically involves allocating processor time to the tasks that need to be
completed. This is called job scheduling. Job scheduling is of two types −
• Preemptive scheduling
• Non-Preemptive scheduling
Preemptive Scheduling
In this type of scheduling, next job to be done by the processor can be scheduled before the current
job completes. If a job of higher priority comes up, the processor can be forced to release the
current job and take up the higher priority one. Scheduling techniques that use pre-emptive
scheduling include −
• Round robin scheduling − A small unit of time called time slice is defined and each program
gets only one time slice at a time. If it is not completed during that time, it must join the job
queue at the end and wait till all programs have got one time slice. The advantage here is that all
programs get equal opportunity. The downside is that if a program completes execution before
the time slice is over, CPU is idle for the rest of the duration.
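Round robin scheduling can be illustrated with a short simulation. This is a sketch; the job names
and CPU times are made up for illustration:

```python
from collections import deque

def round_robin(jobs, time_slice):
    """Simulate round robin scheduling.

    jobs: dict mapping job name -> required CPU time.
    Returns the order in which jobs finish.
    """
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= time_slice:
            finished.append(name)                    # job completes in this slice
        else:
            queue.append((name, remaining - time_slice))  # rejoin end of queue
    return finished

# Job A needs 5 units, B needs 2, C needs 4; each slice is 2 units.
order = round_robin({"A": 5, "B": 2, "C": 4}, time_slice=2)
print(order)  # B finishes first, then C, then A
```

Every job gets a turn each cycle, so no job starves, which is the fairness property described
above.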
Non-preemptive Scheduling
In this type of scheduling, job scheduling decisions are taken only after the current job completes. A
job is never interrupted to give precedence to higher priority jobs. Scheduling techniques that use non-
preemptive scheduling are −
• First come first serve scheduling − This is the simplest technique where the first program to
throw up a request is completed first.
• Shortest job next scheduling − Here the job that needs least amount of time for execution is
scheduled next.
• Deadline scheduling − The job with the earliest deadline is scheduled for execution next.
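Shortest job next can be sketched the same way. This is an illustrative simulation; the job names
and times are assumptions:

```python
def shortest_job_next(jobs):
    """Non-preemptive scheduling: always run the pending job with the
    least execution time next.

    jobs: list of (name, burst_time) pairs.
    Returns the completion order.
    """
    order = []
    pending = list(jobs)
    while pending:
        # Re-decide after each completion: pick the shortest remaining job.
        pending.sort(key=lambda job: job[1])
        order.append(pending.pop(0)[0])
    return order

order = shortest_job_next([("report", 7), ("backup", 3), ("scan", 5)])
print(order)  # shortest burst time runs first
```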
Memory Management
Process of regulating computer memory and using optimization techniques to enhance overall system
performance is called memory management. Memory space is very important in modern computing
environment, so memory management is an important role of operating systems.
Computers have two types of memory – primary and secondary. Primary memory is fast but
expensive, while secondary memory is cheap but slower. The OS has to strike a balance between the
two so that system performance does not suffer due to too little primary memory and system costs
do not shoot up due to too much primary memory.
Input and output data, user instructions and data interim to program execution need to be stored,
accessed and retrieved efficiently for high system performance. Once a program request is accepted,
OS allocates it primary and secondary storage areas as per requirement. Once execution is completed,
the memory space allocated to it is freed. The OS uses many storage management techniques to keep
track of all storage spaces that are allocated or free, including −
• Program paging − A program is broken down into fixed-size pages and stored in secondary
memory. The pages are given logical addresses (or virtual addresses) from 0 to n. A page table
maps the logical addresses to physical addresses and is used to retrieve the pages when
required.
• Program segmentation − A program is broken down into logical units called segments,
assigned logical addresses from 0 to n and stored in secondary memory. A segment table is used
to load segments from secondary memory into primary memory.
Operating systems typically use a combination of paging and segmentation to optimize memory
usage. A large program segment may be broken into pages, or multiple small segments may be
stored as a single page.
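The page-table lookup at the heart of paging can be sketched as follows. The page size, table
entries and addresses are illustrative assumptions, and real page sizes are much larger (e.g. 4
KB):

```python
PAGE_SIZE = 4  # bytes per page; tiny on purpose, for illustration

# Page table: logical page number -> physical frame number (assumed layout).
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]        # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(6))   # logical page 1, offset 2 -> frame 2 -> physical 10
print(translate(0))   # logical page 0, offset 0 -> frame 5 -> physical 20
```

Because the table maps pages independently, the program's pages need not sit next to each other
in physical memory.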
File Management
Data and information are stored on computers in the form of files. Managing the file system so
that users can keep their data safely and correctly is an important function of operating systems.
Managing file systems by the OS is called file management. File management provides tools for
file-related activities such as creating, naming, opening, reading, writing, copying, moving and
deleting files.
Device Management
The process of implementation, operation and maintenance of a device by the operating system is
called device management. The operating system uses system software called a device driver as the
interface to the device.
When many processes access a device or request access to it, the OS manages the device in a way
that shares it efficiently among all processes. Processes access devices through the system call
interface, a programming interface provided by the OS.
Types of OS
As computers and computing technologies have evolved over the years, so have their usage across
many fields. To meet growing requirements more and more customized software have flooded the
market. As every software needs operating system to function, operating systems have also evolved
over the years to meet growing demand on their techniques and capabilities. Here we discuss some
common types of OS based on their working techniques and some popularly used OS as well.
GUI OS
GUI is the acronym for Graphical User Interface. An operating system that presents an interface
comprising graphics and icons is called a GUI OS. GUI OS is very easy to navigate and use as users
need not remember commands to accomplish each task. Examples of GUI OS include
Windows, macOS, Ubuntu, etc.
Time Sharing OS
Operating systems that schedule tasks for efficient processor use are called time sharing OS. Time
sharing, or multitasking, is used by operating systems when multiple users located at different
terminals need processor time to complete their tasks. Many scheduling techniques like round robin
scheduling and shortest job next scheduling are used by time sharing OS.
Real Time OS
An operating system that guarantees to process live events or data and deliver the results within a
stipulated span of time is called a real time OS. It may be single tasking or multitasking.
Distributed OS
An operating system that manages many computers but presents an interface of single computer to the
user is called distributed OS. Such type of OS is required when computational requirements cannot be
met by a single computer and more systems have to be used. User interaction is restricted to a single
system; it is the OS that distributes work to multiple systems and then presents the consolidated
output as if one computer had worked on the problem at hand.
Some popularly used operating systems include −
• Windows − Windows is a GUI operating system first developed by Microsoft in 1985. The
latest major version at the time of writing is Windows 10. Windows runs on almost 88% of PCs
and laptops globally.
• Unix/Linux − Linux is an open source operating system also used by mainframes and
supercomputers. Being open source means that its code is available for free and anyone can
develop a new OS based on it.
Mobile OS
An operating system for smartphones, tablets and other mobile devices is called a mobile OS. Some of
the most popular operating systems for mobile devices include −
• Android − This Linux-based OS by Google is the most popular mobile OS currently. Almost
85% of mobile devices use it.
• Apple iOS − This mobile OS is developed by Apple exclusively for its own mobile
devices like iPhone, iPad, etc.
• BlackBerry OS − This is the OS used by all BlackBerry mobile devices like smartphones and
PlayBook tablets.
Utility Software
Application software that assists the OS in carrying out certain specialized tasks is called utility
software. Let us look at some of the most popular utility software.
Antivirus
A virus can be defined as a malicious program that attaches itself to a host program and makes multiple
copies of itself, slowing down, corrupting or destroying the system. Software that assists the OS in
providing a virus-free environment to users is called an antivirus. An antivirus scans the system for
viruses and, if one is detected, gets rid of it by deleting or isolating it. It can detect many types of
malware like boot viruses, Trojans, worms, spyware, etc. Many computer viruses are TSR (Terminate
and Stay Resident) programs.
When an external storage device like a USB drive is attached to the system, antivirus software scans it
and gives an alert if a virus is detected. You can set up your system for periodic scans or scan whenever
you feel the need. A combination of both techniques is advisable to keep your system virus free.
Compression Tools
Compression tools are utilities that assist the operating system in shrinking files so that they take less
space. After compression a file is stored in a different format and cannot be read or edited directly; it
must be uncompressed before it can be accessed for further use. Some of the popular compression
tools are WinRAR, PeaZip, The Unarchiver, etc.
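The compress-then-uncompress cycle can be demonstrated with Python's standard zlib module. This
is a minimal sketch of the idea, not a full archiving tool like WinRAR:

```python
import zlib

# Repetitive data compresses very well.
text = b"the quick brown fox " * 100

compressed = zlib.compress(text)        # shrink the data
restored = zlib.decompress(compressed)  # restore the original exactly

print(len(text), len(compressed))       # compressed form is far smaller
print(restored == text)                 # decompression is lossless
```

Note that the compressed bytes are unreadable until decompressed, exactly as the text above
describes.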
Disk Cleanup
Disk cleanup tools assist users in freeing up disk space. The software scans hard disks to find files that
are no longer used and frees up space by deleting them.
Disk Defragmenter
Disk defragmenter is a disk management utility that increases file access speed by rearranging
fragmented files into contiguous locations. Large files are broken down into fragments and may be
stored in non-contiguous locations if contiguous ones are not available. When such files are accessed
by the user, access speed is slow due to fragmentation. The disk defragmenter utility scans the hard
disk and tries to assemble file fragments so that they are stored in contiguous locations.
Backup
The backup utility enables backing up of files, folders, databases or complete disks. Backups are taken
so that data can be restored in case of data loss. Backup is a service provided by all operating systems.
On stand-alone systems, backup may be taken on the same or a different drive; on networked systems,
backup may be done on backup servers.
Open Source Software
Software whose source code is freely available to use, modify and distribute is called open source
software. Its advantages include −
• Security
• Affordability
• Transparent
• Interoperable on multiple platforms
• Flexible due to customizations
• Localization is possible
Freeware
Software that is available free of cost for use and distribution, but cannot be modified because its
source code is not available, is called freeware. Examples of freeware are Google Chrome, Adobe
Acrobat Reader, Skype, etc.
Shareware
Software that is initially free, and can be distributed to others as well, but must be paid for after a
stipulated period of time is called shareware. Its source code is also not available and hence cannot be
modified.
Proprietary Software
Software that can be used only by obtaining license from its developer after paying for it is called
proprietary software. An individual or a company can own such proprietary software. Its source code
is often a closely guarded secret, and its license can impose major restrictions such as −
• No further distribution
• Number of users that can use it
• The type of computer it can be installed on, for example single-user or multi-user machines, etc.
For example, Microsoft Windows is a proprietary operating system that comes in many editions for
different types of clients like single-user, multi-user, professional, etc.
Office Tools
Application software that assist users in regular office jobs like creating, updating and maintaining
documents, handling large amounts of data, creating presentations, scheduling, etc. are called office
tools. Using office tools saves time and effort and lots of repetitive tasks can be done easily. Some of
the software that do this are −
• Word processors
• Spreadsheets
• Database systems
• Presentation software
• E-mail tools
Let us look at some of these in detail.
Word Processor
Software for creating, storing and manipulating text documents is called a word processor. Some
common word processors are MS-Word, WordPad, WordPerfect, Google Docs, etc.
Spreadsheet
A spreadsheet is software that assists users in processing and analyzing tabular data. It is a
computerized accounting tool. Data is always entered in a cell (the intersection of a row and a
column), and formulas and functions to process a group of cells are readily available. Some of the
popular spreadsheet software includes MS-Excel, Gnumeric, Google Sheets, etc. Activities that can
be done within spreadsheet software include arithmetic calculations, sorting and filtering data, and
creating charts.
Presentation Tool
A presentation tool enables the user to present information broken down into small chunks and
arranged on pages called slides. A series of slides that presents a coherent idea to an audience is called a
presentation. The slides can have text, images, tables, audio, video or other multimedia information
arranged on them. MS-PowerPoint, OpenOffice Impress, Lotus Freelance, etc. are some popular
presentation tools.
Database Management System
Software that manages storage, updating and retrieval of data by creating databases is called
database management system. Some popular database management tools are MS-Access, MySQL,
Oracle, FoxPro, etc.
Day-2: Lecture - 03
Domain Specific Tools
Depending on its usage, software may be generic or specific. Generic software can
perform multiple tasks in different scenarios without being modified. For example, a word processor
software can be used by anyone to create different types of documents like report, whitepaper, training
material, etc. Specific software is a software for a particular application, like railway reservation
system, weather forecasting, etc. Let us look at some examples of domain specific tools.
Inventory Management
Managing multiple activities like purchase, sales, order, delivery, stock maintenance, etc. associated
with raw or processed goods in any business is called inventory management. The inventory
management software ensures that stocks are never below specified limits and purchase/deliveries are
done in time.
Payroll Software
Payroll software handles complete salary calculations of employees, taking care of leave, bonus, loans,
etc. Payroll software is usually a component of HR (human resource) management software in mid-
sized to big organizations.
Financial Accounting
Financial management software keeps an electronic record of all financial transactions of the
organization. It has many functional heads like account receivables, accounts payable, loans, payroll,
etc.
Restaurant Management
Restaurant management software helps restaurant managers in keeping track of inventory levels, daily
orders, customer management, employee scheduling, table bookings, etc.
Number System
The technique to represent and work with numbers is called number system. Decimal number system
is the most common number system. Other popular number systems include binary number system,
octal number system, hexadecimal number system, etc.
Decimal Number System
Decimal number system is a base 10 number system having 10 digits from 0 to 9. This means that any
numerical quantity can be represented using these 10 digits. Decimal number system is also a
positional value system. This means that the value of digits will depend on its position. Let us take an
example to understand this.
Say we have three numbers – 734, 971 and 207. The value of 7 in all three numbers is different − in
734 it means 7 hundreds (700), in 971 it means 7 tens (70), and in 207 it means 7 units (7).
In digital systems, instructions are given through electric signals; variation is done by varying the
voltage of the signal. Having 10 different voltages to implement decimal number system in digital
equipment is difficult. So, many number systems that are easier to implement digitally have been
developed. Let’s look at them in detail.
In any binary number, the rightmost digit is called the least significant bit (LSB) and the leftmost digit
the most significant bit (MSB).
The decimal equivalent of a binary number is the sum of the products of each digit with its positional
value, where the positional values are powers of 2.
Similarly, the decimal equivalent of an octal number is the sum of the products of each digit with its
positional value, where the positional values are powers of 8.
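This positional-value rule for both binary and octal numbers can be sketched in Python (an illustrative helper function, not from the handout):

```python
def to_decimal(digits, base):
    """Sum of each digit multiplied by its positional value (base ** position)."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += int(digit) * base ** position
    return total

print(to_decimal("101011", 2))  # binary 101011 -> 43
print(to_decimal("731", 8))     # octal 731 -> 473
```

These two values reappear in the worked conversions later in this lecture.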
ASCII
Besides numerical data, a computer must be able to handle alphabets, punctuation marks,
mathematical operators, special symbols, etc. that form the complete character set of the English
language. Such a complete set of characters or symbols is called an alphanumeric code. A complete
alphanumeric code typically includes uppercase and lowercase letters, the digits 0 to 9, punctuation
marks, and special and control characters.
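ASCII is the most widely used such code. Python's built-in ord() and chr() functions expose a character's ASCII code and convert it back (a quick illustration, not from the handout):

```python
# ord() gives the numeric ASCII code of a character; chr() reverses it.
for ch in ("A", "a", "0", "+"):
    print(ch, "->", ord(ch))   # A -> 65, a -> 97, 0 -> 48, + -> 43
print(chr(97))                 # prints a
```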
Decimal to Binary
Decimal numbers can be converted to binary by repeated division of the number by 2 while recording
the remainder. Let’s take an example to see how this happens.
The remainders are to be read from bottom to top to obtain the binary equivalent.
43₁₀ = 101011₂
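The repeated-division procedure can be sketched in Python (an illustrative function, names chosen for clarity):

```python
def decimal_to_binary(n):
    """Repeatedly divide by 2, recording remainders; read them bottom to top."""
    remainders = []
    while n > 0:
        remainders.append(n % 2)
        n //= 2
    # The last remainder computed is the most significant bit.
    return "".join(str(r) for r in reversed(remainders)) or "0"

print(decimal_to_binary(43))  # 101011
```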
Decimal to Octal
Decimal numbers can be converted to octal by repeated division of the number by 8 while recording
the remainder. Let’s take an example to see how this happens.
Reading the remainders from bottom to top,
473₁₀ = 731₈
Decimal to Hexadecimal
Decimal numbers can be converted to hexadecimal by repeated division of the number by 16 while
recording the remainder; remainders from 10 to 15 are written as the hexadecimal digits A to F. The
remainders are read from bottom to top to obtain the hexadecimal equivalent.
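The same repeated-division procedure works for any base. The sketch below (an illustrative helper, not from the handout) generalizes it, mapping remainders 10 to 15 onto A to F:

```python
DIGITS = "0123456789ABCDEF"

def decimal_to_base(n, base):
    """Repeated division by `base`; remainders, read bottom to top, give the result."""
    remainders = []
    while n > 0:
        remainders.append(DIGITS[n % base])
        n //= base
    return "".join(reversed(remainders)) or "0"

print(decimal_to_base(473, 16))  # 1D9
print(decimal_to_base(473, 8))   # 731
print(decimal_to_base(43, 2))    # 101011
```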
Binary to Octal
To convert a binary number to octal, these steps are followed −
• Starting from the least significant bit, make groups of three bits.
• If the leftmost group is short by one or two bits, 0s can be added to the left of the most
significant bit.
• Each group is then replaced by its octal equivalent from the table below.
Octal Digit 0 1 2 3 4 5 6 7
Binary Equivalent 000 001 010 011 100 101 110 111
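The grouping steps and the table above can be sketched in Python (illustrative code; the function name is our own):

```python
def binary_to_octal(bits):
    """Pad on the left to a multiple of 3 bits, then convert each 3-bit group."""
    padded = bits.zfill((len(bits) + 2) // 3 * 3)
    groups = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return "".join(str(int(group, 2)) for group in groups)

print(binary_to_octal("101011"))  # 53  (43 in decimal)
```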
Binary to Hexadecimal
To convert a binary number to hexadecimal, these steps are followed −
• Starting from the least significant bit, make groups of four bits.
• If the leftmost group is short by one, two or three bits, 0s can be added to the left of the most
significant bit.
• Each group is then replaced by its hexadecimal equivalent.
110110110101₂ = DB5₁₆
Octal to Binary
To convert an octal number to binary, each octal digit is replaced by its 3-bit binary equivalent.
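This expansion can be sketched in Python (illustrative code, not from the handout):

```python
def octal_to_binary(octal):
    """Replace each octal digit with its 3-bit binary equivalent."""
    return "".join(format(int(digit), "03b") for digit in octal)

print(octal_to_binary("731"))  # 111011001
```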
Day-2: Lecture - 04
Microprocessor Concepts
The microprocessor is the brain of the computer; it does all the processing work. It is a computer
processor that incorporates all the functions of the CPU (Central Processing Unit) on a single IC
(Integrated Circuit) or at most a few ICs. Microprocessors were first introduced in the early 1970s; the
Intel 4004, released in 1971, was the first commercially available general purpose microprocessor. The
arrival of low cost general purpose microprocessors has been instrumental in shaping the development
of modern society.
Microprocessors Characteristics
Microprocessors are multipurpose devices that can be designed for generic or specialized functions.
The microprocessors of laptops and smartphones are general purpose whereas ones designed for
graphical processing or machine vision are specialized ones. There are some characteristics that are
common to all microprocessors.
These are the most important defining characteristics of a microprocessor −
• Clock speed
• Instruction set
• Word size
Clock Speed
Every microprocessor has an internal clock that regulates the speed at which it executes instructions
and also synchronizes it with other components. The speed at which the microprocessor executes
instructions is called clock speed. Clock speeds are measured in MHz or GHz, where 1 MHz means 1
million cycles per second and 1 GHz means 1 billion cycles per second. Here, a cycle refers to a
single clock signal cycle.
Current microprocessors have clock speeds of a few GHz, close to the maximum that present
technology can attain; pushing speeds much higher generates enough heat to damage the chip itself. To
overcome this, manufacturers put multiple processor cores working in parallel on a single chip.
Word Size
The number of bits that a processor can process in a single instruction is called its word size. Word
size determines the amount of RAM that can be accessed at one go and the width of the processor's
data bus.
The first commercial microprocessor, the Intel 4004, was a 4-bit processor: its data bus was 4 bits
wide. Currently most microprocessors use 32-bit or 64-bit architecture.
Instruction Set
A command given to a digital machine to perform an operation on a piece of data is called an
instruction. The basic set of machine level instructions that a microprocessor is designed to execute is
called its instruction set. These instructions carry out the following types of operations −
• Data transfer
• Arithmetic operations
• Logical operations
• Control flow
• Input/output and machine control
Microprocessor Components
Compared to the first microprocessors, today's processors are very small, but they still have the same
basic parts as the first models −
• CPU
• Bus
• Memory
CPU
The CPU is fabricated as a very large scale integrated circuit (VLSI) and has these parts −
• Decoder − It decodes (interprets) the instruction and sends it to the ALU (Arithmetic Logic
Unit).
• ALU − It has the necessary circuits to perform arithmetic, logical, memory, register and
program sequencing operations.
• Registers − They hold intermediate results obtained during program processing. Registers are
used for holding such results rather than RAM because accessing a register is roughly 10 times
faster than accessing RAM.
Bus
Connection lines used to connect the internal parts of the microprocessor chip are called buses. There
are three types of buses in a microprocessor −
• Data Bus − Lines that carry data to and from memory are called the data bus. It is a
bidirectional bus with width equal to the word length of the microprocessor.
• Address Bus − Lines that carry the address of the memory location or I/O device to be
accessed are called the address bus. It is unidirectional, since only the microprocessor issues
addresses.
• Control Bus − Lines that carry control signals like clock signals, interrupt signals or ready
signals are called the control bus. A signal that denotes that a device is ready for processing is
called a ready signal; a signal that tells a device to interrupt its current process is called an
interrupt signal.
Memory
A microprocessor works with two types of memory −
• RAM − Random Access Memory is volatile memory that gets erased when power is switched
off. The data and instructions currently being processed are stored in RAM.
• ROM − Read Only Memory is non-volatile memory whose data remains intact even after
power is switched off. Microprocessor can read from it any time it wants but cannot write to it.
It is preprogrammed with most essential data like booting sequence by the manufacturer.
Evolution of Microprocessor
The first microprocessor, introduced in 1971, was a 4-bit microprocessor with 4.5 KB of memory and
a set of 45 instructions. In the past five decades microprocessor performance has roughly doubled
every two years, as predicted by Intel co-founder Gordon Moore. Current microprocessors can address
many gigabytes of memory.
Depending on the width of data they can process, microprocessors fall into these categories −
• 8-bit
• 16-bit
• 32-bit
• 64-bit
The size of the instruction set is another important consideration while categorizing microprocessors.
Initially, microprocessors had very small instruction sets because complex hardware was expensive as
well as difficult to build.
As technology developed to overcome these issues, more and more complex instructions were added to
increase the functionality of the microprocessor. However, it was soon realized that very large
instruction sets were counterproductive, as many rarely used instructions took up precious memory
space. So the old school of thought that favored smaller instruction sets regained popularity.
Let us learn more about the two types of microprocessors based on their instruction set.
RISC
RISC stands for Reduced Instruction Set Computer. It has a small set of highly optimized
instructions. Complex instructions are implemented using combinations of simpler instructions,
reducing the size of the instruction set. The design philosophy of RISC incorporates these salient
points −
• Single cycle execution − Most RISC instructions take one CPU cycle to execute.
Examples of RISC processors are ARM, MIPS, PowerPC and SPARC.
CISC
CISC stands for Complex Instruction Set Computer. It supports hundreds of instructions.
Computers supporting CISC can accomplish a wide variety of tasks, making them ideal for personal
computers; the Intel x86 family is the best-known CISC architecture. These are some characteristics
of CISC architecture −
EPIC
EPIC stands for Explicitly Parallel Instruction Computing. It is a computer architecture that is a
cross between RISC and CISC, trying to provide the best of both; Intel's Itanium (IA-64) is its best-
known implementation. Its important features include −
Primary Memory
Memory is required in computers to store data and instructions. Memory is physically organized as a
large number of cells, each capable of storing one bit. Logically, the cells are organized as groups of
bits called words, each of which is assigned an address. Data and instructions are accessed through
these memory addresses. The speed with which memory can be accessed determines its cost: the
faster the memory, the higher the price.
Computer memory can be said to be organized hierarchically: memory with the fastest access speeds
and highest costs lies at the top, whereas memory with the lowest speeds, and hence the lowest costs,
lies at the bottom. Based on this criterion, memory is of two types – primary and secondary. Here we
will look at primary memory in detail.
The main features of primary memory, which distinguish it from secondary memory are −
Cache Memory
A small piece of high speed volatile memory available to the processor for fast processing is called
cache memory. Cache may be a reserved portion of main memory, a separate chip on the CPU or an
independent high speed storage device. Cache memory is made of high speed SRAM. The process of
keeping some data and instructions in cache memory for faster access is called caching. Caching is
done when a set of data or instructions is accessed again and again.
Whenever the processor needs any piece of data or instructions, it checks the cache first. If it is
unavailable there, the main memory and finally secondary memory are accessed. As the cache has
very high speed, the time spent checking it on every access is negligible compared to the time saved
when the data is indeed in the cache. Finding data or an instruction in the cache is called a cache hit.
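The cache-first lookup described above can be modeled with a toy Python sketch (dicts stand in for the hardware; all names are our own and only the lookup order is modeled, not real timing):

```python
# A small dict stands in for fast cache memory and a larger dict for
# slower main memory.
cache = {}
main_memory = {address: address * 2 for address in range(1024)}  # dummy contents

def read(address):
    if address in cache:              # cache hit: fast path
        return cache[address]
    value = main_memory[address]      # cache miss: go to main memory
    cache[address] = value            # keep a copy for next time (caching)
    return value

read(42)           # first access: cache miss, fetched from main memory
print(read(42))    # second access: cache hit, served from the cache
```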
Secondary Memory
You know that processor memory, also known as primary memory, is expensive as well as limited.
The faster primary memories are also volatile. If we need to store a large amount of data or programs
permanently, we need cheaper, permanent memory. Such memory is called secondary memory.
Here we will discuss secondary memory devices that can be used to store large amount of data, audio,
video and multimedia files.
CD Drive
CD stands for Compact Disc. CDs are circular discs that use optical rays, usually lasers, to read and
write data. They are very cheap: 700 MB of storage space costs less than a dollar. CDs are inserted in
CD drives built into the CPU cabinet. They are portable, as you can eject the tray, remove the CD and
carry it with you. There are three types of CDs −
• CD-ROM (Compact Disk – Read Only Memory) − The data on these CDs are recorded by
the manufacturer. Proprietary Software, audio or video are released on CD-ROMs.
• CD-R (Compact Disk – Recordable) − Data can be written by the user once on the CD-R. It
cannot be deleted or modified later.
• CD-RW (Compact Disk – Rewritable) − Data can be written and deleted on these optical
disks again and again.
DVD Drive
DVD stands for Digital Versatile Disc (originally Digital Video Disc). DVDs are optical devices that
can store several times the data held by CDs – 4.7 GB on a single-layer disc against 700 MB for a CD.
They are usually used to store rich multimedia files that need high storage capacity. DVDs also come
in three varieties – read only, recordable and rewritable.
Input/Output Ports
A connection point that acts as interface between the computer and external devices like mouse,
printer, modem, etc. is called port. Ports are of two types −
• Internal port − It connects the motherboard to internal devices like hard disk drive, CD drive,
internal modem, etc.
• External port − It connects the motherboard to external devices like modem, mouse, printer,
flash drives, etc.
Let us look at some of the most commonly used ports.
Serial Port
Serial ports transmit data sequentially, one bit at a time, so they need only one wire to transmit 8 bits;
however, this also makes them slower. Serial ports are usually 9-pin or 25-pin male connectors. They
are also known as COM (communication) ports or RS-232C ports.
Parallel Port
Parallel ports can send or receive 8 bits, or 1 byte, at a time. Parallel ports come in the form of 25-pin
female connectors and are used to connect printers, scanners, external hard disk drives, etc.
USB Port
USB stands for Universal Serial Bus. It is the industry standard for short distance digital data
connection. USB port is a standardized port to connect a variety of devices like printer, camera,
keyboard, speaker, etc.
PS/2 Port
PS/2 stands for Personal System/2. It is a 6-pin female mini-DIN port that connects to a male mini-
DIN cable. PS/2 ports were introduced by IBM to connect the mouse and keyboard to personal
computers. This port is now mostly obsolete, though some legacy systems may still have it.
Infrared Port
An infrared port enables wireless exchange of data within a range of about 10 m. Two devices with
infrared ports are placed facing each other so that beams of infrared light can carry the data. Infrared
is used in remote controls for many electronic devices such as TVs, fans and lights.
Bluetooth Port
Bluetooth is a telecommunication specification that facilitates wireless connection between phones,
computers and other digital devices over short ranges. A Bluetooth port enables synchronization
between Bluetooth-enabled devices. There are two types of Bluetooth ports −
FireWire Port
FireWire is Apple's interface standard for enabling high speed communication over a serial bus. It is
also called IEEE 1394 and is used mostly for audio and video devices like digital camcorders.
HDMI (High-Definition Multimedia Interface)
HDMI (High-Definition Multimedia Interface) is a proprietary audio/video interface for transmitting
uncompressed video data and compressed or uncompressed digital audio data from an HDMI-
compliant source device, such as a digital controller, to a compatible computer monitor, video
projector, digital TV, or digital audio device. HDMI is a digital replacement for analog video standards.
Network
A system of interconnected computers and computerized peripherals such as printers is called a
computer network. This interconnection facilitates information sharing among the computers.
Computers may connect to each other by either wired or wireless media.
Internet/ Intranet
A network of networks is called an internetwork, or simply the internet. It is the largest network in
existence on this planet. The internet connects all WANs (Wide Area Networks) and can have
connections to LANs (Local Area Networks) and home networks. The internet uses the TCP/IP
(Transmission Control Protocol/Internet Protocol) protocol suite and uses IP as its addressing
protocol. At present, the internet is widely implemented using IPv4. Because of the shortage of
address space, it is gradually migrating from IPv4 to IPv6.
The internet enables users to share and access an enormous amount of information worldwide. It
supports the WWW (World Wide Web), FTP (File Transfer Protocol), email services, audio and video
streaming, etc. At a high level, the internet works on the client-server model. The internet uses a very
high speed backbone of optical fiber. To interconnect continents, fibers are laid under the sea, known
as submarine cables.
An intranet uses the same technologies but is confined to a particular area or organization. It is
typically implemented over a Local Area Network (LAN) or Metropolitan Area Network (MAN).