COS101 Introduction To Computing Sciences
COURSE CODE:
COS101
COURSE TITLE:
INTRODUCTION TO COMPUTING SCIENCES
(3 Units)
This lecture material is prepared to introduce and acquaint students with the basic knowledge of computing sciences, alongside the tools of computing, information processing, and its application in society. In addition, the Internet and the different areas of the computing disciplines are discussed. Importantly, the resource materials are gathered from books, journals and web resources.
COURSE INFORMATION
Course Code: COS101
Credit Unit: 3
Semester: First
Course Duration: Year 2024
GRADING SCHEME
Class Assessment: 30%
Final Examination: 70%. This will cover every aspect of the course.
Total: 100%
COURSE DESCRIPTION
COS101: Introduction to Computing Sciences is a three (3) credit unit course. It deals with the
brief history of computing. Description of the basic components of a computer/computing
device. Input/Output devices and peripherals. Hardware, software and human ware. Diverse
and growing computer/digital applications. Information processing and its roles in society. The
Internet, its applications and its impact on the world today. The different areas/programs of the
computing discipline. The job specializations for computing professionals. The future of
computing.
So, it is important not only to know how to use a computer, but also to understand the
components of it and what they do. Students will be required to complete practical
demonstration of the basic parts of a computer. Illustration of different operating systems of
different computing devices including desktops, laptops, tablets, smart boards and smart
phones. Demonstration of commonly used applications such as word processors, spreadsheets,
presentation software and graphics. Illustration of input and output devices including printers,
scanners, projectors and smartboards. Practical demonstration of the Internet and its various
applications. Illustration of browsers and search engines. How to access online resources.
LECTURE 1
BRIEF HISTORY OF COMPUTING
Introduction
Computing refers to the process of using computers and computational techniques to analyse,
manipulate, and transform data in order to solve problems and accomplish tasks. It
encompasses both the hardware and software components of computer systems, as well as the
algorithms and methodologies used to process information. In other words, computing
encompasses the process of utilizing computers and crafting programs to execute tasks. This
multifaceted discipline entails conceiving and refining software and hardware infrastructures
to serve diverse objectives, from manipulating and organizing data to conducting scientific
inquiries, constructing artificial intelligence systems, and leveraging various media for
recreational and communicative purposes. Fundamental to computing is the deployment of
computer devices (hardware) and the programs that animate them (software), both of which
are constructed based on data and procedural specifications (data structures and algorithms).
The history of computing is a fascinating journey that spans centuries and has revolutionized
human civilization. From ancient counting devices to the modern era of smartphones and
supercomputers, the evolution of computing has been marked by remarkable milestones,
inventions, and innovations. In this lecture, we will explore the key developments and
milestones in the history of computing, from its humble beginnings to the present day.
Objectives
History of Computing
The history of the computer dates back many years. The literature contains many different written versions of the history of computing, starting from the ancient era. However, a closer examination reveals some commonalities, and in this lecture the focus will be on those common viewpoints. The history will cover the pre-mechanical era (ancient era), the mechanical era, the electro-mechanical era, and the modern era.
Ancient Era
Devices have been used to aid computation for thousands of years, mostly using one-to-one
correspondence with fingers. The earliest counting device was probably a form of tally stick.
Another was the use of counting rods.
One of the earliest known devices used for computing tasks was the abacus, originating in ancient Mesopotamia. It consists of beads or stones on rods or wires, enabling users to perform basic arithmetic calculations. What we now call the Roman abacus was used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented.
Mechanical Era
In 1642 Blaise Pascal (a famous French mathematician) invented an adding machine based on
mechanical gears in which numbers were represented by the cogs on the wheels.
In 1690, Leibniz developed a machine that could perform additions, subtractions, divisions and square roots. Its instructions were hardcoded and could not be altered once built.
In 1822, an Englishman, Charles Babbage, invented a machine called the “Difference Engine” that would perform calculations without human intervention. In 1833, he developed the “Analytical Engine” along with his associate Augusta Ada Byron (later Countess of Lovelace), who assisted with the programming. His design contained the basic units of a modern computer: input, output and processing units, memory and storage devices. Hence, he is regarded as the “father of the modern computer”.
An American, Herman Hollerith, developed (around 1890) the first electrically driven device.
It utilized punched cards and metal rods which passed through the holes to close an electrical
circuit and thus cause a counter to advance. This machine was able to complete the calculation
of the 1890 U.S. census in 6 weeks compared with 7 1/2 years for the 1880 census which was
manually counted.
In 1936 Howard Aiken of Harvard University convinced Thomas Watson of IBM to invest $1
million in the development of an electromechanical version of Babbage's analytical engine.
The Harvard Mark 1 was completed in 1944 and was 8 feet high and 55 feet long.
At about the same time (the late 1930's) John Atanasoff of Iowa State University and his
assistant Clifford Berry built the first digital computer that worked electronically, the ABC
(Atanasoff-Berry Computer). This machine was basically a small calculator.
In 1943, as part of the British war effort, a series of vacuum tube-based computers (named
Colossus) were developed to crack German secret codes. The Colossus Mark 2 series consisted
of 2400 vacuum tubes.
John Mauchly and J. Presper Eckert of the University of Pennsylvania developed these ideas
further by proposing a huge machine consisting of 18,000 vacuum tubes. ENIAC (Electronic
Numerical Integrator and Computer) was born in 1946. It was a huge machine with a huge
power requirement and two major disadvantages. Maintenance was extremely difficult, as the tubes broke down regularly and had to be replaced, and there was also a big problem with overheating. The most important limitation, however, was that every time a new task needed to be performed, the machine had to be rewired.
In the late 1940's John von Neumann (at the time a special consultant to the ENIAC team)
developed the EDVAC (Electronic Discrete Variable Automatic Computer) which pioneered
the "stored program concept". This allowed programs to be read into the computer and so gave
birth to the age of general-purpose computers.
Modern Era
In the modern era, computers are classified into a number of generations; there are five prominent generations of computers. Each generation has witnessed several technological advances which changed the functionality of computers, resulting in more compact, powerful and robust systems that are less expensive. The classification can be based on the hardware technology used in building the computer, or on the applications/software used.
Note that the literature does not present consistent generational periods, and therefore those indicated here may differ from the periods given in other sources.
The first generation of computers started with ENIAC, described above. This was followed by the UNIVAC (Universal Automatic Computer), designed and built by Mauchly and Eckert in 1951.
The first-generation computers used vacuum tubes, which were very large, required a lot of energy, and were slow in input and output processing. They also suffered from heat and maintenance problems: the vacuum tubes had to be replaced often because of their short life span. See figures below for the ENIAC machine and vacuum tubes.
In the mid-1950's Bell Labs developed the transistor. Transistors were capable of performing
many of the same tasks as vacuum tubes but were only a fraction of the size. The first transistor-
based computer was produced in 1959. Transistors were not only smaller, enabling computer
size to be reduced, but they were faster, more reliable and consumed less electricity. See figure
below for sample transistors.
The computers were able to perform operations comparatively faster, and the storage capacity was also improved. The IBM 650, 700, 305 RAMAC, 1401, and 1620 computers were manufactured and distributed during this period. It was also during this period that assembly and symbolic programming languages, as well as high-level programming languages such as FORTRAN (Formula Translation), COBOL (Common Business Oriented Language) and BASIC (Beginner’s All-purpose Symbolic Instruction Code), were launched. Computer programs written in these English-like high-level programming languages are translated to machine code using interpreters, compilers or translators.
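The translation step described above can be glimpsed with Python, a modern high-level language (Python is not mentioned in this lecture; it is used here purely for illustration). Python compiles source code to bytecode for a virtual machine rather than to native machine code, but the principle is the same: an English-like statement becomes a sequence of primitive instructions.

```python
import dis

# A simple high-level statement, similar in spirit to a line of BASIC or FORTRAN.
def average(a, b):
    return (a + b) / 2

# dis prints the primitive instructions the Python interpreter actually executes:
# load a value, add, divide, return -- one low-level operation per line.
dis.dis(average)

print(average(4, 10))  # the high-level view: just call the function -> 7.0
```

Running this shows how many low-level steps hide behind one readable line of source code.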
Third Generation Computers
This period marked the development of computers based on Integrated Circuits (ICs) instead of transistors. An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or a microchip) is a set of electronic circuits on one small plate ("chip") of semiconductor material, normally silicon. A silicon chip occupies less than one-eighth of a square inch, and many electronic components like diodes, transistors, and capacitors can be fixed on it. See figure below for an illustration of an integrated circuit on a chip. Third generation computers are smaller, faster and more flexible in terms of input and output than second generation computers. Third generation computers satisfied the needs of small businesses and became popular as minicomputers. The IBM 360, PDP 8 and PDP 11 computers are examples of third generation computers. Another feature of this period is that computer software became much more powerful and flexible, and for the first time more than one program could share the computer's resources at the same time (multi-tasking). The majority of programming languages used today are often referred to as 3GLs (3rd generation languages), even though some of them originated during the 2nd generation.
Characteristics of 3rd Generation of Computers
▪ Increased speed and reliability.
▪ Development of minicomputers.
▪ On-line, real-time processing.
▪ Multiprogramming operating system was introduced.
▪ It was faster than the previous generation.
▪ Improved input and output devices.
The third-generation computers have integrated circuits consisting of anywhere from 1 to 500
transistors and were considered small-scale integration (1 to 10 transistors) to medium-scale
integration (10 to 500 transistors). In 1970 large-scale integration was achieved where the
equivalent of thousands of integrated circuits was crammed onto a single silicon chip. This
development again increased computer performance (especially reliability and speed) whilst
reducing computer size and cost. Around this time the first complete general-purpose
microprocessor became available on a single chip (giving birth to microcomputers, also called
personal computers). In 1975 Very Large-Scale Integration (VLSI) took the process one step
further. The development started with hundreds of thousands of transistors in the early 1980s,
and continues beyond several billion transistors as of 2009. Examples of fourth generation
computers are IBM PC and Apple II. The fourth-generation computers also include
supercomputers such as the CRAY series. Supercomputers are the best in terms of processing capacity, and also the most expensive. These computers can process billions of instructions per second.
They are used for applications which require intensive numerical computations such as stock
analysis, weather forecasting and other similar complex applications. The spread of computer
network was also observed during this period.
During this period, Fourth Generation Languages (4GLs) came into existence. Such languages are a step further removed from the computer hardware, in that they use language much like natural language. Many database languages can be described as 4GLs. They are
generally much easier to learn than 3GLs. Microsoft developed the MS-DOS operating system for IBM PCs. In 1980, Alan Shugart presented the Winchester hard drive, revolutionizing storage for PCs. In 1982, Hayes introduced the 300-bits-per-second smart modem. In 1989, Tim Berners-Lee invented an Internet-based hypermedia enterprise for information sharing, giving birth to the World Wide Web (WWW). In the same year, the Intel 486 became the world’s first 1,000,000-transistor microprocessor. It crammed 1.2 million transistors onto a 0.4 in by 0.6 in sliver of silicon and executed 15 million instructions per second, four times as fast as its predecessor, the 80386 chip, which had 275,000 transistors.
Figure 2.4: Intel 80386, Intel 8086 Microprocessor Chip, Intel 80486
Fifth generation computers are smarter still in terms of processing speed, user friendliness and connectivity to networks. These computers are portable and sophisticated. Powerful desktops, notebooks, a variety of storage mechanisms such as optical disks, and advanced software technology such as distributed operating systems and artificial intelligence are characteristic of this period. IBM notebooks, Pentium PCs and the PARAM 10000 are examples of fifth generation computers.
In 1992, Microsoft released Windows 3.1 and within two months sold over 3 million copies. The Pentium processor, a successor of the Intel 486, was produced in 1993. The Pentium processor contains 3.1 million transistors and could perform 112 million instructions per second. Microsoft released Microsoft Office the same year. Other inventions (not exhaustive) are tabulated below.
1994 – Jim Clark and Marc Andreessen founded Netscape and launched Netscape Navigator 1.0, a browser for the World Wide Web.
1996 – U.S. Robotics introduced the PalmPilot, a low-cost, user-friendly personal digital assistant (PDA).
1997 – The Pentium II processor, with 7.5 million transistors, was introduced by Intel. This processor incorporates MMX technology, processes video, audio and graphics data more efficiently, and supports applications such as movie editing, gaming and more. In the same year, Microsoft released Internet Explorer 4.0.
1998 – Apple Computer released the iMac, the next version of the Macintosh computer. The iMac did not feature a floppy disk drive. Windows 98 was introduced this year; it was an extension of Windows 95 with improved Internet access, system performance and support for a new generation of hardware and software. Google, a search engine, was founded.
2001 – Intel unveiled the Pentium 4 chip with clock speeds starting at 1.4 GHz and with 42 million transistors. Windows XP for desktops and servers was introduced.
2002 – Intel revamped the Pentium 4 chip, built on a 0.13 micron process with Hyper-Threading (HT) Technology and operating at a speed of 3.06 GHz. DVD writers were introduced to replace CD writers (CD-RW).
2004 – Flat-panel LCD monitors were introduced, replacing the bulky CRT monitors as the popular choice. The USB flash drive was also made popular this year as a cost-effective way to transport data and information. Apple Computer introduced the sleek iMac G5. The smart phone overtook the PDA as the personal mobile device of choice. A smart phone offers the user a cell phone, full personal information management, a Web browser, e-mail functionality, instant messaging, and the ability to listen to music, watch and record video, play games and take pictures.
In 2005, Microsoft released the Xbox 360, a game console with the capability to play music, display photos, and network with computers and other Xbox consoles.
2006 – Intel introduced the Core 2 Duo processor family, with 291 million transistors, which uses 40 percent less power than the Pentium processor. IBM produced the fastest supercomputer, called Blue Gene/L, which can perform 28 trillion calculations in the time it takes to blink an eye, or about one-tenth of a second. Sony launched the PlayStation 3, which includes a Blu-ray disc player, high-definition capabilities and always-on online connectivity.
2007 – Intel introduced the Core 2 Quad, a four-core processor made for dual-processor servers and desktop computers. Apple launched the iPhone.
An effective fifth-generation computer would be a highly complex and intelligent electronic device, conceived with an idea of intelligence rather than merely passing through further stages of technical development. This idea of intelligence is called artificial intelligence, or AI. The emphasis is now shifting from developing reliable, faster and smaller but dumb machines to developing more intelligent machines.
Practice Problems
1. Write a short note on the history of computers. Explain why Charles Babbage is known as the father of the modern-day computers.
2. What are the characteristics of first-generation computers? Discuss their major
drawbacks.
3. What are the characteristics of second-generation computers? Discuss their major
drawbacks.
4. What distinguishes fourth generation computers from third generation computers?
5. What are the full meanings of these acronyms: ENIAC, UNIVAC, EDVAC, PDA,
FORTRAN, COBOL and BASIC?
6. Who were the major inventors in the electro-mechanical era? Write short notes about
their inventions.
7. What distinguishes Pentium 4 from Pentium II chip?
8. What distinguishes smart phones from PDAs? List all the capabilities of a smart phone.
9. Describe the invention of Herman Hollerith.
10. Describe the invention of Atanasoff and Berry.
11. Describe the invention of Mauchly and Eckert.
12. What is the speed capability of Blue Gene/L developed by IBM in 2006?
References
LECTURE 2
COMPONENTS OF A COMPUTER SYSTEM
Description
Any kind of computer consists of basically two components: hardware and software. A computer needs both software (the logical aspect) and hardware (the physical aspect) to operate. Figure 2.1 highlights the computer system architecture layers. This simple architecture shows layers that select and interconnect hardware components to create a computer system that meets functional, performance and cost goals. As shown, the user interacts with the system through software (application/system) running on the hardware. In this lecture, the physical aspect of a computer will be discussed.
Objectives
Introduction
1. https://www.learncomputerscienceonline.com/introduction-to-computer-system/
Hardware: This is the tangible or physical part of a computer. The devices that make up the
various units in the simple computer model form the hardware. It refers to the collection of
physical parts of a computer system that one can touch or feel. This includes the computer case,
monitor, keyboard, and mouse. It also includes all the parts inside the computer case, such as
the hard disk drive, processor, motherboard, video card, and other peripheral devices.
The hardware components of a computer can be grouped into four primary categories: the system unit, display devices, input devices, and external devices.
System Unit
A system unit is the main component of a computer/personal computer, which houses the other devices necessary for the computer to function. It comprises a chassis and the internal components of a personal computer, such as the system board (motherboard), the microprocessor, memory modules, disk drives, adapter cards, the power supply, a fan or other cooling device, and ports for connecting external components such as monitors, keyboards, mice, and other devices, as shown in figure 2.2.
• Output Devices
Output devices enable the user to view information in human-readable form (text and graphical data) associated with a computer program. Display devices commonly connect to the system unit via a cable, and they have controls to adjust the settings for the device. They vary in size and shape, as well as in the technology used. Such devices include speakers, monitors, plotters, printers, etc.
Monitors: Monitors, commonly called Visual Display Units (VDUs), are the main output device of a computer. A monitor forms images from tiny dots, called pixels, that are arranged in a rectangular form. The sharpness of the image depends upon the number of pixels.
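The relationship between pixel count and image detail can be made concrete with a small calculation. The sketch below is in Python for illustration; the resolution names and values are common industry examples, not figures taken from this lecture.

```python
# Total pixel counts for some common monitor resolutions (width x height).
resolutions = {
    "VGA": (640, 480),
    "HD": (1280, 720),
    "Full HD": (1920, 1080),
}

# A screen with more pixels packs more dots into the image, so it looks sharper.
for name, (width, height) in resolutions.items():
    total = width * height
    print(f"{name}: {width} x {height} = {total:,} pixels")
```

A Full HD screen, for example, has 1920 * 1080 = 2,073,600 pixels, almost seven times as many dots as a VGA screen.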
There are two kinds of viewing screens used for monitors: cathode-ray tube (CRT) screens and flat-panel (e.g. LCD) screens.
• Input Devices
An input device is a computer component that enables users to enter data or instructions into a computer. The most common input devices are keyboards and computer mice. Input devices can connect to the system via a cable or a wireless connection; laptop systems typically use a touchpad instead of a mouse. Other input devices include webcams, microphones, joysticks, scanners, light pens, trackballs, Magnetic Ink Card Readers (MICR), Optical Character Readers (OCR), Bar Code Readers, and Optical Mark Readers (OMR).
Figure 2.5: several categories of external devices
More examples of hardware components are as follows:
• Printers
A printer is an output device, which is used to print information on paper. There are two types of printers: impact printers and non-impact printers.
Impact Printers
Impact printers print characters by striking them on a ribbon, which is then pressed onto the paper.
These printers are of two types: Character printers and Line printers
Character Printers: Character printers are the printers which print one character at a time.
These are further divided into two types: Dot Matrix Printer (DMP) and Daisy Wheel
Line Printers: Line printers are the printers which print one line at a time. These are of further
two types: Drum Printer and Chain Printer
Non-impact Printers
Non-impact printers print characters without using a ribbon. These printers print a complete page at a time, so they are also called page printers.
These printers are of two types: Laser Printers and Inkjet Printers
The system unit contains internal components such as:
i. The System Board
ii. Central Processing Unit
iii. Memory
iv. Power Supplies
v. Cooling Systems etc.
The system board is a computer component that acts as the backbone for the entire computer
system as it serves as a single platform to connect all of the parts of a computer together. It
connects the CPU, memory, hard drives, optical drives, video card, sound card, and other ports
and expansion cards directly or via cables. System Board is also known as motherboard. It
consists of a large, flat circuit board with chips and other electrical components on it.
Some popular manufacturers of motherboards are Intel, Asus, Gigabyte, Biostar, and MSI.
The Central Processing Unit (CPU), sometimes called the microprocessor or just the processor, is the real brain of the computer and is where most of the calculations take place. It also stores data, intermediate results and instructions (programs), and controls the operation of all parts of the computer.
• CPU performs all types of data processing operations.
• It stores data, intermediate results, and instructions (program).
• It controls the operation of all parts of the computer.
• Supervises flow of data within CPU
• Transfers data to Arithmetic and Logic Unit
• Transfers results to memory
• Fetches results from memory to output devices
The CPU itself has the following three components:
i. Control Unit
ii. Arithmetic Logic Unit (ALU)
iii. Memory unit
This unit consists of two subsections, namely the arithmetic section and the logic section.
Arithmetic Section: The function of the arithmetic section is to perform arithmetic operations such as addition, subtraction, multiplication and division.
Logic Section: The function of the logic section is to perform logic operations such as comparing, selecting, matching and merging of data (>, <, =, etc.).
Whenever calculations are required, the control unit transfers the data from the storage unit to the ALU. Once the computations are done, the results are transferred to the memory unit by the control unit, and then sent to the output unit for display.
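The division of labour between the arithmetic and logic sections can be sketched in a few lines of Python. This is purely a toy model for illustration (the function and variable names are invented for this sketch); it is not how an ALU is actually built in hardware.

```python
def alu(operation, a, b):
    """A toy model of an ALU: the arithmetic section handles +, -, *, /
    and the logic section handles comparisons such as >, < and ==."""
    arithmetic_section = {
        "+": a + b,
        "-": a - b,
        "*": a * b,
        "/": a / b if b != 0 else None,  # guard against division by zero
    }
    logic_section = {
        ">": a > b,
        "<": a < b,
        "==": a == b,
    }
    if operation in arithmetic_section:
        return arithmetic_section[operation]
    return logic_section[operation]

print(alu("+", 6, 2))   # arithmetic section: 8
print(alu(">", 6, 2))   # logic section: True
```

The point of the sketch is the split itself: one part of the unit computes values, the other compares them.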
This unit controls the operations of all parts of the computer but does not carry out any actual
data processing operations.
• It is responsible for controlling the transfer of data and instructions among other units
of a computer.
• It manages and coordinates all the units of the computer.
• It obtains the instructions from the memory, interprets them, and directs the operation
of the computer.
• It communicates with Input/Output devices for transfer of data or results from storage.
• It does not process or store data.
The power and performance of a computer depend mainly on the CPU, or Central Processing Unit, of the computer. As the brains of the computer, the CPU is the MVP (most valuable player) of the system.
Computer manufacturers have found that performance is boosted if a computer has more than
one CPU. This arrangement is called Dual-core or Multi-core Processing and harnesses the
power of two processors. In this configuration, one integrated circuit contains two processors,
their caches as well as the cache controllers. These two "cores" have resources to perform tasks
in parallel, almost doubling the efficiency and performance of the computer as a whole. Dual
processor systems on the other hand have two separate physical processors in the system.
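The idea of splitting work across cores can be illustrated with Python's standard concurrent.futures module. This is a simplified sketch, not a model of real hardware: the two worker threads stand in for the two cores of a dual-core CPU, and in CPython truly parallel execution of CPU-bound code would need processes rather than threads, but the structure of the program (split the work, process the pieces concurrently, combine the results) is the same.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(numbers):
    """The work that one core (or worker) can do independently."""
    return sum(numbers)

data = list(range(1, 101))          # the numbers 1..100
halves = [data[:50], data[50:]]     # split the work into two chunks

# Two workers stand in for the two cores of a dual-core CPU:
# each chunk is handed to a worker and the partial results are combined.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(partial_sum, halves))

total = sum(results)
print(total)  # 5050, the same answer as summing the list serially
```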
INTEL CPUs
Currently, Intel and AMD are the major CPU manufacturers, and they seem to have the market covered. Most computers, such as Apple Macs, Gateway computers, HP computers and Dell computers, use processors made by these manufacturers. Some examples of CPUs are given below:
Intel Xeon Processors
The core speed of the Xeon family of processors ranges from 1.6 GHz to 3.2 GHz. These processors are suited for specific communication applications such as telecommunications servers, search engines, network management or storage. The family provides high memory bandwidth, memory capacity and I/O bandwidth. Examples of computer manufacturers that use this processor in their systems: Dell, Apple, etc.
Intel Quad-Core Processors
This was the first four-core desktop processor, designed for multimedia applications such as audio/video editing and rendering, 3D modelling and other intensive, high-CPU-demand tasks. With multi-core processing, system response is improved by delegating certain tasks to specific cores. The quad-core configuration also has an unlocked clock for easier overclocking. Examples of computer manufacturers that use this processor in their systems: Dell, Gateway.
Intel Core 2 Duo Processors
The new Intel chipsets that feature 1333 MHz bus speeds enable the creation of higher-performance processors at competitive prices. There are four processors: the Core 2 Duo processor E6400, Core 2 Duo processor E4300, Core 2 Duo processor T7400 and the Core 2 Duo processor L7400. This family of processors delivers more instructions per cycle, improves system performance by efficiently using the memory bandwidth, and is more environmentally friendly because of its low energy consumption. Examples of computer manufacturers that use this processor in their systems: Apple, Gateway.
Intel Pentium Processors
This family of Pentium processors uses a microarchitecture for high-performance computing with low power consumption. These processors are designed for medium to large enterprise communications applications, transaction terminals, etc. The cheapest Intel CPUs now available in the market are the Intel Pentium models.
Intel Celeron Processors
The Celeron M family is designed for the next generation of mobile applications. Combining Intel's trademark high-performance stats with low power consumption, these processors are well suited for thermally sensitive embedded and communications applications. This family of processors will probably be used for small to medium businesses and for enterprise communications, Point of Sale appliances and ATMs.
AMD Athlon 64 X2 Processors
This is a popular processor for medium- to high-performance computing, and it boosts performance by using dual-core technology. The Athlon 64 X2 is the best AMD value for its price. Examples of companies that use this processor in their systems: Acer, HP, Gateway.
Other processors by AMD include: AMD Athlon X2, AMD Sempron, AMD Turion, AMD
Opteron, AMD Athlon 64 FX.
This unit stores data, instructions and results for processing, and stores the final results of processing before they are released to an output device. It is also responsible for the transmission of all inputs and outputs. The memory is divided into a large number of small parts called cells. Each location or cell has a unique address, which varies from zero to the memory size minus one. For example, if a computer has 64K words, then this memory unit has 64 * 1024 = 65536 memory locations, and the addresses of these locations vary from 0 to 65535.
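The address arithmetic in the example above can be checked directly. A minimal sketch in Python (the list-of-cells model is an illustration of the idea, not how physical memory is implemented):

```python
# A memory of 64K words: 64 * 1024 locations, addressed from 0 upward.
WORDS = 64 * 1024

print(WORDS)        # 65536 locations in total
print(WORDS - 1)    # highest address: 65535

# Model the memory as a list: one cell per address, each with a unique index.
memory = [0] * WORDS
memory[0] = 42            # store a value at the lowest address
memory[WORDS - 1] = 99    # store a value at the highest address (65535)
print(memory[65535])      # 99
```

Addressing from 0 is why a memory of N cells has highest address N - 1.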
Memory is primarily of three types:
i. Cache Memory
ii. Primary Memory/Main Memory
iii. Secondary Memory
Cache Memory
Cache memory is a very high-speed semiconductor memory which can speed up the CPU. It acts as a buffer between the CPU and main memory. It is used to hold those parts of data and programs which are most frequently used by the CPU. These parts of data and programs are transferred from disk to cache memory by the operating system, from where the CPU can access them.
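The "keep frequently used items close" idea behind cache memory can be mimicked in software. The sketch below is an analogy in Python (the class and attribute names are invented for illustration), not a model of real cache circuitry: a small dictionary acts as the fast cache in front of a slower "main memory" lookup.

```python
class CachedMemory:
    """A toy two-level memory: a small fast cache in front of slow main memory."""

    def __init__(self, main_memory):
        self.main_memory = main_memory   # stands in for slow main memory
        self.cache = {}                  # stands in for fast cache memory
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.cache:        # cache hit: the fast path
            self.hits += 1
            return self.cache[address]
        self.misses += 1                 # cache miss: go to main memory
        value = self.main_memory[address]
        self.cache[address] = value      # keep a copy for next time
        return value

mem = CachedMemory(main_memory=[10, 20, 30, 40])
mem.read(2)                       # miss: fetched from main memory
mem.read(2)                       # hit: served from the cache
print(mem.hits, mem.misses)       # 1 1
```

The second read of the same address avoids the slow lookup entirely, which is exactly the benefit a hardware cache provides to the CPU.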
Advantages
• Cache memory is faster than main memory.
• It consumes less access time as compared to main memory.
• It stores programs that can be executed within a short period of time.
• It stores data for temporary use.
Disadvantages
• Cache memory has limited capacity.
• It is very expensive.
This unit can store instructions, data and intermediate results, and it supplies information to the other units of the computer when needed. It is also known as the internal storage unit, main memory or primary storage.
Primary memory holds only the data and instructions on which the computer is currently working. It has limited capacity, and data is lost when the power is switched off. It is generally made up of semiconductor devices. These memories are not as fast as registers. The data and instructions required to be processed reside in main memory.
RAM (Random Access Memory) is computer memory that is directly accessible by the CPU. RAM stores temporary data; in the case of a power loss, the stored information is lost. In simple words, it stores the data which is currently being processed by the CPU. It is the internal memory of the CPU for storing data, programs and program results. It is read/write memory which stores data while the machine is working. As soon as the machine is switched off, the data is erased.
Access time in RAM is independent of the address; that is, each storage location inside the memory is as easy to reach as the others and takes the same amount of time. Data in RAM can be accessed randomly, but it is very expensive.
RAM is volatile, i.e., data stored in it is lost when we switch off the computer or if there is a power failure. Hence, a backup uninterruptible power supply (UPS) is often used with computers. RAM is small, both in terms of its physical size and in the amount of data it can hold.
RAM is of two types: Static RAM (SRAM) and Dynamic RAM (DRAM)
In SRAM, data is stored in transistors and requires a constant power flow. Because the power is continuous, SRAM does not need to be refreshed to remember the data being stored. SRAM is called static because no change or action (i.e., refreshing) is needed to keep the data intact. It is used in cache memories.
2. https://www.geeksforgeeks.org/difference-between-sram-and-dram/
• Static RAM has a more complex design.
In DRAM, data is stored in capacitors. The capacitors that store data in DRAM gradually discharge energy, and no energy means the data is lost. So a periodic refresh of power is required in order for it to function. DRAM is called dynamic because constant change or action (refreshing) is needed to keep the data intact. It is used to implement main memory.
SRAM vs DRAM:
• SRAM does not use capacitors, so no refreshing is required; in DRAM, the contents of the
capacitors must be refreshed periodically to store information for a longer time.
• SRAM has lower latency; DRAM has higher latency than SRAM.
ROM stands for Read Only Memory. One can only read from it but cannot write to it. This type
of memory is non-volatile. The information is stored permanently in such memories during
manufacture. A ROM stores the instructions that are required to start a computer; this
operation is referred to as bootstrap. ROM chips are used not only in computers but also in
other electronic items like washing machines and microwave ovens.
The very first ROMs were hard-wired devices that contained a pre-programmed set of data or
instructions. These kinds of ROMs are known as masked ROMs, which are inexpensive.
PROM (Programmable ROM) is read-only memory that can be modified only once by a user. The
user buys a blank PROM and enters the desired contents using a PROM programmer. It can be
programmed only once and is not erasable.
The EPROM (Erasable Programmable ROM) can be erased by exposing it to ultra-violet light for
a duration of up to 40 minutes. Usually, an EPROM eraser achieves this function.
The EEPROM (Electrically Erasable Programmable ROM) is programmed and erased electrically. It
can be erased and reprogrammed about ten thousand times. EEPROMs can be erased one byte at a
time rather than erasing the entire chip. Hence, the process of reprogramming is flexible but
slow.
Advantages of ROM
Secondary Memory
This type of memory is also known as external or non-volatile memory. It is slower than main
memory and is used for storing data/information permanently. The CPU does not access these
memories directly; instead, they are accessed via input-output routines. The contents of
secondary memory are first transferred to main memory, and then the CPU can access them.
Examples include hard disks, CD-ROMs, DVDs, etc.
Hard drives come with one of several different connectors built in. The five types are ATA/IDE
and SATA for consumer-level drives, and SCSI, Serial Attached SCSI (SAS), and Fibre
Channel for enterprise-class drives.
ATA/IDE Cable
For many years, Advanced Technology Attachment (ATA) connections were the favoured
internal drive connection in PCs. Apple adopted ATA with the Blue and White G3 models.
ATA drives must be configured as either a master or a slave when connecting. This is usually
accomplished by the use of a hardware jumper or, more recently, through the use of a cable
that can tell the drive to act as either a master or slave.
ATA also goes by the names ATAPI, IDE, EIDE and PATA, the last of which stands for Parallel
ATA. ATA is still in use in many computers today, but most drive manufacturers have switched
over to SATA (Serial ATA).
SATA
As of 2007, most new computers (Macs and PCs, laptops and desktops) use the newer SATA
interface. It has a number of advantages, including longer cables, faster throughput, multidrive
support through port multiplier technology, and easier configuration. SATA drives can also be
used with eSATA hardware to enable fast, inexpensive configuration as an external drive. Most
people investing in new hard drive enclosures for photo storage should be using SATA drives.
SCSI, SAS, and Fibre Channel drives are rare in desktop computers, and are typically found in
expensive enterprise-level storage systems. You can also find SAS drives (along with the
necessary SAS controller cards) in video editing systems where maximum throughput is
needed.
Power Supply
The power supply, typically a switch-mode power supply (SMPS), is an electronic circuit that
converts power using switching devices that are turned on and off at high frequencies,
together with storage components such as inductors or capacitors that supply power while the
switching device is in its non-conducting state. Switching power supplies have high
efficiency and are widely used in a variety of electronic equipment, including computers and
other sensitive equipment requiring a stable and efficient power supply.
Cooling System
Cooling may be required for the CPU, video card, motherboard, hard drive, etc. Without proper
cooling, the computer hardware may suffer from overheating, which causes slowdowns, system
error messages, and crashes; the life expectancy of the PC's components is also likely to
diminish. The following are commonly used techniques for cooling PC or server components:
• Heat Sinks
• CPU/Case Fans
• Thermal Compound
• Liquid Cooling Systems
Heat Sinks: The purpose of a heatsink is to conduct heat away from the processor or any other
component (such as the chipset) to which it is attached. Thermal transfer takes place at the
surface of a heatsink; therefore, heat sinks should have a large surface area. A commonly
used technique to increase the surface area is the use of fins. A typical processor heat sink
is shown in the figure below:
Fan: The fan is primarily used to force cooler air into the system or to push hot air out of
it. A fan keeps the surroundings cooler by moving air around the heatsink and other parts of
the computer. A typical CPU fan is shown below.
Thermal Compound: A thermal compound is used for maximum transfer of heat from CPU
to the heatsink. The surface of a CPU or a heatsink is not perfectly flat. If you place a heatsink
directly on a CPU, there will be some air gaps between the two. Air is a poor conductor of heat.
Therefore, an interface material with a high thermal conductivity is used to fill these gaps, and
thus improve heat conductivity between CPU and heatsink.
Liquid Cooling Systems: Like a car's radiator, a liquid cooling system circulates a liquid
through a heat sink attached to the processor. First, the cool liquid passes through the
heatsink and heats up as heat is transferred from the processor. The hot liquid then passes
through a radiator at the back of the case and transfers the heat to the secondary coolant
(air). Now cool again, the liquid returns to the processor heatsink, and the cycle repeats.
The chief advantage of a liquid cooling system (LCS) is that cooling takes place very
efficiently, since liquids transfer heat more efficiently than air. The disadvantages include
a bulkier cooling system, cost, and additional reliability issues associated with an LCS.
References
https://en.wikipedia.org/wiki/computer-hardware
www.tutorialspoint.com/computer-fundamentals/computer_ram.html
www.geekswhoknow.com/articles/cpu.php
http://www.dpbestflow.org/data-storage-hardware/hard-drive-101#interfaces
https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading01.htm
https://www.learncomputerscienceonline.com/introduction-to-computer-system/
https://www.tutorialspoint.com/computer_concepts/computer_concepts_introduction_to_computer.htm
https://www.geeksforgeeks.org/difference-between-sram-and-dram/
LECTURE 3
Description
The proliferation of software systems has left many non-computer professionals bewildered,
especially given the variety of operating systems such as Windows and Linux, mobile operating
systems such as Android and iOS, open- and closed-source software applications, and, finally,
firmware and human ware. In this lecture, the software component of a computer is discussed.
Objectives
Software
A set of instructions that directs a computer to perform stipulated tasks is called a
program. Software instructions are written in a programming language, translated into machine
language, and executed by the computer. Software can be categorized into two types:
i. System software
ii. Application software
System software (systems software) is computer software designed to operate and control the
computer hardware and to provide a platform for running application software. System software
can be separated into two categories: operating systems and utility software.
An operating system (OS) is software that manages computer hardware and software
resources and provides common services for computer programs. The operating system is an
essential component of the system software in a computer system. Application programs
usually require an operating system to function.
Figure 3.1: Operating System
It is necessary to have at least one operating system installed in the computer to run basic
programs like browsers. Figure 3.2 shows user interaction with a computer by using the input
unit to call the required application software which must be managed by the OS to produce a
required output via output devices.
Memory Management: An operating system manages the allocation and deallocation of memory to
various processes and ensures that one process does not consume the memory allocated to
another.
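As an illustration of this bookkeeping, the sketch below is a simplified model, not a real OS allocator; the class and method names are invented for illustration. It tracks which process owns which fixed-size memory block, so one process cannot take another's memory:

```python
class SimpleMemoryManager:
    """Toy model of OS memory management: ownership of fixed-size blocks."""

    def __init__(self, total_blocks):
        self.free_blocks = list(range(total_blocks))  # blocks not yet allocated
        self.owner = {}  # block number -> process id

    def allocate(self, pid, n_blocks):
        """Give n_blocks free blocks to process pid; fail if not enough remain."""
        if len(self.free_blocks) < n_blocks:
            raise MemoryError("not enough free memory")
        blocks = [self.free_blocks.pop() for _ in range(n_blocks)]
        for b in blocks:
            self.owner[b] = pid  # record ownership so no other process gets b
        return blocks

    def deallocate(self, pid):
        """Return all blocks owned by pid to the free pool."""
        freed = [b for b, p in self.owner.items() if p == pid]
        for b in freed:
            del self.owner[b]
            self.free_blocks.append(b)
        return freed
```

A real kernel does far more (virtual addressing, paging, protection), but the core invariant is the same: every block has at most one owner at a time.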
Device Management: There are various input and output devices. An OS controls the working
of these input-output devices. It receives the requests from these devices, performs a specific
task, and communicates back to the requesting process.
File Management: An operating system keeps track of information regarding the creation,
deletion, transfer, copy, and storage of files in an organized way. It also maintains the integrity
of the data stored in these files, including the file directory structure, by protecting against
unauthorized access.
Security: The operating system provides various techniques which assure the integrity and
confidentiality of user data and protect it from unauthorized access.
Error Detection: From time to time, the operating system checks the system for any external
threat or malicious software activity. It also checks the hardware for any type of damage. This
process displays several alerts to the user so that the appropriate action can be taken against
any damage caused to the system.
In order to perform its functions, the operating system has two components: Shell and Kernel.
• Shell
Shell handles user interactions. It is the outermost layer of the OS and manages the interaction
between user and operating system by doing the following:
Shell provides a way to communicate with the OS by either taking the input from the user or
the shell script. A shell script is a sequence of system commands that are stored in a file.
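To make the idea concrete, here is a toy sketch of what a shell does: it reads command lines (from the user or from a script file), runs each one, and reports its exit status. The function names are illustrative, and Python's subprocess module stands in for the real system-level mechanics:

```python
import shlex
import subprocess

def run_command(line):
    """Parse one command line and execute it, returning its exit status."""
    args = shlex.split(line)       # split "echo hello" into ["echo", "hello"]
    if not args:                   # blank line: nothing to run
        return 0
    completed = subprocess.run(args)
    return completed.returncode    # 0 conventionally means success

def run_script(lines):
    """A 'shell script' is just a sequence of such command lines in a file."""
    return [run_command(line) for line in lines]
```

For example, run_script(["echo hello", "echo world"]) runs the two commands in order and returns their exit statuses.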
• Kernel
The kernel is the core component of a computer's operating system (OS). All other components
of the OS rely on the kernel to supply them with essential services. It serves as the primary
interface between the OS and the hardware and aids in the control of devices, networking,
file systems, and process and memory management.
Figure 3.3: Kernel
Functions of kernel
The kernel is the core component of an operating system, acting as an interface between
applications and the data processing performed at the hardware level. When an OS is loaded
into memory, the kernel is loaded first and remains in memory until the OS is shut down. The
kernel provides and manages the computer's resources and allows other programs to run and use
these resources. It also sets up the memory address space for applications, loads the files
containing application code into memory, and sets up the execution stack for programs. Its
main functions are:
• Input-Output management
• Memory Management
• Process Management for application execution
• Device Management
• System calls control
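Many of these kernel services are reached through system calls. As an illustrative sketch, Python's os module exposes thin wrappers over the open, read, write, and close system calls on Unix-like systems; the function copy_first_bytes below is an invented example name, not a standard API:

```python
import os

def copy_first_bytes(src, dst, n):
    """Copy up to n bytes from src to dst using system-call wrappers
    rather than Python's high-level file objects."""
    fd_in = os.open(src, os.O_RDONLY)                              # open(2)
    fd_out = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)   # open(2)
    data = os.read(fd_in, n)                                       # read(2)
    os.write(fd_out, data)                                         # write(2)
    os.close(fd_in)                                                # close(2)
    os.close(fd_out)
    return len(data)
```

Each commented call asks the kernel to perform a privileged operation (touching the disk) on the program's behalf, which is exactly the role the list above describes.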
There are several different types of operating systems present. In this section, we will discuss
the advantages and disadvantages of these types of OS.
i. Batch OS
ii. Distributed OS
iii. Multitasking OS
iv. Network OS
v. Real-Time OS
vi. Mobile OS
• Batch OS
Batch OS was the first operating system, used with second-generation computers. This OS does
not interact with the computer directly. Instead, an operator collects similar jobs, groups
them together into a batch, and these batches are executed one by one on a first-come,
first-served basis.
Advantages of Batch OS
Disadvantages of Batch OS
Examples of Batch OS: payroll system, bank statements, data entry, etc.
• Distributed OS
Advantages of Distributed OS
i. Failure of one system will not affect the other systems because all the computers are
independent of each other.
ii. The load on the host system is reduced.
iii. The size of the network is easily scalable as many computers can be added to the
network.
iv. As the workload and resources are shared, calculations are performed at a higher
speed.
v. Data exchange speed is increased with the help of electronic mail.
Disadvantages of Distributed OS
• Multitasking OS
The multitasking OS is also known as the time-sharing operating system, as each task is given
some CPU time so that all tasks run efficiently. This system provides access to a large
number of users, and each user gets a share of CPU time as if working on a dedicated
single-user system. The tasks may be submitted by a single user or by different users. The
time allotted to execute one task is called a quantum; as soon as the quantum for one task
expires, the system switches to another task.
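The quantum-based switching described above can be sketched as a simple round-robin simulation. This is an illustrative model, not real scheduler code, and the task times are in arbitrary units:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate time-sharing: tasks maps name -> remaining time units.
    Returns the order in which (task, run_time) slices execute."""
    queue = deque(tasks.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # a task runs for at most one quantum
        order.append((name, run))
        remaining -= run
        if remaining > 0:               # unfinished: go to the back of the queue
            queue.append((name, remaining))
    return order
```

With tasks A (3 units) and B (5 units) and a quantum of 2, the slices execute as A, B, A, B, B, so both users see steady progress.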
Advantages of Multitasking OS
Disadvantages of Multitasking OS
• Processes with higher priority cannot be executed first, as equal priority is given to each
process or task.
• User data must be protected from unauthorized access.
• Sometimes there are data communication problems.
• Network OS
Network operating systems are the systems that run on a server and manage all the networking
functions. They allow sharing of various files, applications, printers, security, and other
networking functions over a small network of computers like LAN or any other private
network. In the network OS, all the users are aware of the configurations of every other user
within the network, which is why network operating systems are also known as tightly coupled
systems.
Advantages of Network OS
Disadvantages of Network OS
Examples of Network OS: Microsoft Windows Server 2008, Linux, etc.
• Real-Time OS
Real-Time operating systems serve real-time systems. These operating systems are useful when
many events occur in a short time or within certain deadlines, such as real-time simulations.
• Hard real-time OS
The hard real-time OS is intended mainly for applications in which even the slightest delay
is unacceptable. The time constraints of such applications are very strict. Such systems are
built for life-saving equipment like parachutes and airbags, which must act immediately if an
accident happens.
• Soft real-time OS
The soft real-time OS is intended for applications where the time constraint is not as
strict. In a soft real-time system, an important task is prioritized over less important
tasks, and this priority remains active until the task completes. Furthermore, a time limit
is always set for a specific job, so short delays in later tasks are acceptable. Examples
include virtual reality and reservation systems.
Advantages of Real-Time OS
i. It provides more output from all the resources as there is maximum utilization of
systems.
ii. It provides the best management of memory allocation.
iii. These systems are largely error-free.
iv. These operating systems focus more on running applications than those in the queue.
v. Shifting from one task to another takes very little time.
Disadvantages of Real-Time OS
• Mobile OS
Advantages of Mobile OS
Disadvantages of Mobile OS
Examples of Mobile OS: Android OS, iOS, Symbian OS, and Windows mobile OS.
i. Windows: Windows is the most popular desktop operating system, used by over 1
billion users worldwide. It has a wide range of features and applications, including the
Office suite, gaming, and productivity tools.
ii. macOS: macOS is the desktop operating system used by Apple Mac computers. It is
known for its clean, user-friendly interface and is popular among creative professionals.
iii. Linux: Linux is an open-source operating system that is available for free and can be
customized to meet specific needs. It is used by developers, businesses, and individuals
who prefer an open-source, customizable operating system.
iv. iOS: iOS is the mobile operating system used by Apple iPhones and iPads. It is known
for its user-friendly interface, tight integration with Apple’s hardware and software, and
robust security features.
v. Android: Android is the most popular mobile operating system, used by over 2 billion
users worldwide. It is known for its open-source nature, customization options, and
compatibility with a wide range of devices.
Utility software helps to analyse, configure, optimize and maintain the computer; examples
include virus protection, file/disk management tools, cleaners, and compression tools.
Application Software
Application software is used to accomplish specific tasks other than just running the computer
system. Application software may consist of a single program, such as an image viewer; a small
collection of programs (often called a software package) that work closely together to
accomplish a task, such as a spreadsheet or text processing system; a larger collection (often
called a software suite) of related but independent programs and packages that have a common
user interface or shared data format, such as Microsoft Office, which consists of closely
integrated word processor, spreadsheet, database, etc.; or a software system, such as a database
management system, which is a collection of fundamental programs that may provide some
service to a variety of other independent applications. In contrast to system software,
application software allows users to do things like create text documents, play games, listen
to music, or surf the web with a browser.
Figure 3.4: Application software
Firmware
In electronic systems and computing, firmware is a type of software that provides control,
monitoring and data manipulation of engineered products and systems. Typical examples of
devices containing firmware are embedded systems (such as traffic lights, consumer
appliances, and digital watches), computers, computer peripherals, mobile phones, and digital
cameras. A remote control is another simple example of an engineered product that contains
firmware. The firmware monitors the buttons, controls the LEDs, and processes the button
presses in order to send data in a format the receiving device (a TV set, for example) can
understand and process. The firmware contained in these devices provides the low-level control
program for the device. Presently, most firmware can be updated.
Firmware is held in non-volatile memory devices such as ROM, EPROM, or flash memory.
Changing the firmware of a device may rarely or never be done during its economic lifetime;
some firmware memory devices are permanently installed and cannot be changed after
manufacture. Common reasons for updating firmware include fixing bugs or adding features
to the device. This may require ROM integrated circuits to be physically replaced, or flash
memory to be reprogrammed through a special procedure. Firmware such as the ROM BIOS of
a personal computer may contain only elementary basic functions of a device and may only
provide services to higher-level software. Firmware such as the program of an embedded
system may be the only program that will run on the system and provide all of its functions.
Human ware
Human Ware in computers refers to the skills, knowledge, and attitudes of the people who
interact with and use computers. It encompasses both technical skills related to using hardware
and software, as well as the social, emotional, and cognitive aspects of human-computer
interaction.
Importance of Human Ware in Computers
i. Productivity: Computer users with strong Human Ware skills are more efficient and
productive in their work. They can navigate software interfaces, troubleshoot problems,
and use computer resources effectively.
ii. User Experience: Human Ware impacts the overall user experience of computer
systems. A well-designed interface that considers human factors and user needs can
enhance usability and satisfaction.
iii. Security: Awareness of security best practices, such as using strong passwords and
recognizing phishing attempts, is a key aspect of Human Ware in computers.
iv. Collaboration and Communication: Human Ware skills enable effective
communication and collaboration in a digital environment. This is particularly
important for remote work and virtual teams.
v. Adaptability: As technology evolves, users need to be able to adapt to new software,
hardware, and digital tools. Human Ware skills enable users to learn new technologies
and update their knowledge.
Examples of Human Ware in Computers
Here are some examples of Human Ware skills and attitudes in the context of computer use:
i. Operating System Proficiency: Understanding the basics of operating systems (OS) like
Windows, macOS, or Linux, including navigating the file system, managing settings,
and installing software.
ii. File Management: Organizing and managing files and folders efficiently, including
creating backups and using cloud storage.
iii. Software Applications: Proficiency in common software applications such as Microsoft
Office (Word, Excel, PowerPoint), web browsers, email clients, and specialized
software relevant to the user's field (e.g., graphic design software, accounting software).
iv. Security Awareness: Recognizing and avoiding common security threats like malware,
phishing, and social engineering attacks. Knowing how to use antivirus software,
firewalls, and secure browsing practices.
v. Keyboard Shortcuts: Familiarity with keyboard shortcuts can significantly increase
productivity when using computers.
vi. Basic Troubleshooting: Being able to troubleshoot common computer issues like frozen
programs, slow performance, or internet connectivity problems.
vii. Digital Literacy: The ability to critically evaluate information found online, discerning
credible sources from misinformation or fake news.
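As a small illustration of the security-awareness point (iv), here is a rough password check in Python. The scoring rules are illustrative assumptions, not an official standard:

```python
def password_strength(pw):
    """Score a password 0-4 on four simple criteria:
    length >= 8, lowercase, uppercase, and digit present."""
    score = 0
    if len(pw) >= 8:
        score += 1
    if any(c.islower() for c in pw):
        score += 1
    if any(c.isupper() for c in pw):
        score += 1
    if any(c.isdigit() for c in pw):
        score += 1
    return score
```

A user with good Human Ware habits would treat a low score as a prompt to choose a longer, more varied password.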
Example of Human Ware in Action: Remote Work
Let's consider how Human Ware skills are important for remote work:
References
http://www.uswitch.com/mobiles/guides/mobile-operating-systems/
https://www.tutorialspoint.com/operating_system/os_overview.htm
https://mechanicalnotes.com/computer/
https://www.learncomputerscienceonline.com/introduction-to-computer-system/
LECTURE 4
Introduction
This lecture discusses diverse and growing computer and digital applications. It will explore the
ever-expanding role of computers and digital technologies in various industries and aspects of
our daily lives. As technology evolves, the range of applications and uses for computers and
digital technologies continues to expand. From business and healthcare to education,
agriculture, research, entertainment, and beyond, the applications of computers and digital
technologies continue to diversify and evolve, shaping the way we work, communicate, and
interact with the world around us.
Objectives
Applications in Business and Industry
Computers and digital technologies are widely used in business and industry for various
purposes, including:
i. Data Analysis and Decision Making: Businesses use computers to analyse large
volumes of data and make informed decisions based on the insights gained.
ii. Automation and Robotics: In industries such as manufacturing, computers and robotics
are used to automate processes, increasing efficiency and reducing human error.
iii. Supply Chain Management: Computers play a crucial role in managing supply chains,
from tracking inventory to optimizing shipping routes.
iv. Salary and payroll calculation, etc.
Applications in Healthcare
Computers and digital technologies are transforming the healthcare industry in many ways,
including:
i. Electronic Health Records (EHR): Computers are used to store and manage patient
health records electronically, making them easily accessible to healthcare providers and
improving the accuracy and efficiency of record-keeping.
ii. Telemedicine: Digital technologies enable remote consultations and medical services,
allowing patients to receive care from healthcare providers without having to visit a
physical clinic or hospital.
iii. Medical Imaging: Computers are used to process and analyse medical images such as
X-rays, CT scans, and MRIs, aiding in the diagnosis and treatment of various medical
conditions.
iv. Health Monitoring Devices: Wearable devices such as fitness trackers and
smartwatches use computer technology to monitor and track various health metrics,
providing users with valuable insights into their health and well-being.
v. Medical Research and Development: Computers are used in medical research to analyse
large datasets, simulate biological processes, and develop new treatments and therapies
for various diseases and conditions.
vi. Surgery: Advanced computers are used in surgery.
vii. Lab-diagnostics, etc.
Applications in Education
Computers and digital technologies are revolutionizing education, with applications such as:
Applications in Entertainment and Media
Computers and digital technologies are at the heart of the entertainment and media industry,
enabling:
i. Streaming Services: Digital platforms such as Netflix and Spotify offer on-demand
access to a wide range of content.
ii. Video Games: Computers are used to create and play video games, offering immersive
and interactive experiences.
iii. Digital Art and Design: Computers enable artists and designers to create and manipulate
digital art, from illustrations to animations.
Applications in Agriculture and the Environment
i. Remote Sensing: Computers process data from satellites and other remote sensing
devices to monitor environmental conditions such as climate change, deforestation,
and natural disasters.
ii. Air and Water Quality Monitoring: Digital sensors and monitors track pollutants in
the air and water, helping to identify and mitigate environmental risks.
iii. Smart agriculture.
Applications in Transportation
i. Autonomous Vehicles: Computers and sensors enable self-driving cars and trucks,
improving safety and efficiency in transportation.
ii. Traffic Management: Computers analyse data from traffic cameras and sensors to
optimize traffic flow, reduce congestion, and improve safety.
Applications in Finance
ii. Personalization and Recommendation Systems: Digital technologies analyze customer
data to provide personalized recommendations and improve the shopping experience.
Applications in Sports and Fitness
i. Wearable Fitness Devices: Computers in wearable devices track fitness metrics such
as heart rate, steps, and calories burned, helping users stay active and healthy.
ii. Sports Analytics: Computers analyze data from sports events to improve training,
performance, and strategy in sports such as soccer, basketball, and baseball.
Applications in Social Media and Communication
i. Virtual Reality (VR) and Augmented Reality (AR): Computers enable immersive
gaming experiences through VR and AR technologies.
ii. Streaming Platforms: Computers and digital technologies provide on-demand access
to movies, TV shows, music, and other entertainment content through streaming
platforms such as Netflix and Spotify, etc.
Computers and digital technologies are playing an increasingly important role in our lives, with
diverse applications across various industries and aspects of society. As technology continues
to evolve, we can expect to see even more innovative uses for computers and digital
technologies in the future.
LECTURE 5
Introduction
In this section, we will explore the concept of information processing, its components, its
applications in various computing domains, and its impact on individuals and organizations.
Understanding information processing is crucial for individuals and organizations to
effectively manage and utilize information in today's digital age.
Objectives
The word “computer” was used long before the modern definition to mean “a person who
computes.” This definition was upheld until the 20th century, when “computer” became
associated with “a programmable electronic device that can store, retrieve, and process
data,” as defined in Webster's Dictionary. Therefore, a computer has come to be synonymous
with a device that “computes.” With the ability to perform a multitude of tasks, a computer is
regarded as a general (or multi) purpose machine. Computing here includes mathematical
computing as well as logical-based tasks. Generally, a computer is an electronic device,
operating under the control of instructions stored in its own memory that can accept data
(input), process the data according to specified rules, produce information (output), and store
the information for future use. As the modern definition suggests, a computer must be capable
of retrieving, processing, and storing data or information.
To perform a task, we must identify the data/information associated with the task and provide
clear guidelines, i.e., a proper set of instructions for the task to be solved on the
computer. This set of steps or instructions, written in a simple, understandable language, is
called an algorithm. More specifically, an algorithm is a sequence of instructions needed to
perform a task.
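For example, the algorithm "find the largest number in a list" can be written as a sequence of steps and translated directly into code. This is a minimal Python sketch, and the function name is our own:

```python
def find_largest(numbers):
    """Step 1: assume the first number is the largest so far.
    Step 2: compare it with every remaining number.
    Step 3: whenever a bigger one is found, remember it instead.
    Step 4: report the remembered number."""
    largest = numbers[0]
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest
```

The same four plain-language steps could be carried out by hand; the program is simply those steps made precise enough for a machine to execute.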
Functions of Computers
• Receiving Input: Data is fed into computer through various input devices like
keyboard, mouse, digital pens, etc. Input can also be fed through devices like CD-
ROM, pen drive, scanner, etc.
• Processing the information: Operations on the input data are carried out based on the
instructions provided in the programs.
• Storing the information: After processing, the information gets stored in the primary
or secondary storage area.
• Producing output: The processed information and other details are communicated to
the outside world through output devices like monitor, printer, etc.
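The four functions above can be seen in miniature in a short program. This is an illustrative sketch with an invented function name; in a full program, the storing step might write the result to a file, and the output step would print it:

```python
def process_marks(raw_marks):
    """Receive input (a list of marks), process it (compute total and
    average), and return the result for storage or output."""
    total = sum(raw_marks)                 # processing step
    average = total / len(raw_marks)       # processing step
    return {"total": total, "average": average}  # output step
```

For instance, feeding in the marks [70, 80, 90] yields a total of 240 and an average of 80.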
Characteristics of a Computer
Speed: Speed is one of the main characteristics of a computer. Computers provide very high
speed accompanied by an equally high level of reliability; thus, computers never make
mistakes of their own accord. A computer can perform billions of calculations in a second,
and the time taken for its operations is measured in microseconds and nanoseconds.
Accuracy: Computers perform operations and process data quickly and with accuracy. Results
can be wrong only if incorrect data is fed to the computer or a bug causes an error (Garbage
In, Garbage Out – GIGO).
Diligence: A computer can perform millions of tasks or calculations with the same consistency
and accuracy. It does not suffer from fatigue or lack of concentration, and its memory also
makes it superior to human beings in this respect.
Reliability: A computer is reliable as it gives consistent results for the same set of data,
i.e., if we give the same set of inputs any number of times, we will get the same result.
Automation: Computer performs all the tasks automatically i.e. it performs tasks without
manual intervention.
Memory/Storage: A computer has built-in memory, called primary memory, where it stores data.
Secondary storage consists of removable devices such as CDs, pen drives, etc., which are also
used to store data.
Communications: Computers can communicate using some sort of connection (either wired or
wireless). Two computers can be connected to share data and information and to collaboratively
complete assigned tasks.
Based on the characteristics described above, the following are the advantages of a computer.
1. Computers make it possible to receive, supply and process large volumes of data at
very high speed.
2. Computer reduces the cost of all data related operations including, input, output,
storage, processing, and transmission.
3. Computer ensures consistent and error free processing of data.
4. Digitization of all kinds of information including sounds and images, combined with
massive information processing capabilities of the computer has resulted in
development of application to produce physical products of very high quality at great
speed and very economically.
5. Computers have enabled development of many real time applications requiring speedy
continuous monitoring.
Information processing, therefore, is the manipulation of data to produce useful information; it
involves capturing data in a format that is retrievable and analysable. Processing information
means taking raw data and making it more useful by putting it into context. In general,
information processing involves several steps: acquiring, inputting, validating, manipulating,
storing, outputting, communicating, retrieving, and disposing. Subsequent accessing and
updating of files involves one or more of these steps. Information processing equips
individuals with the basic skills to use the computer to process many types of information
effectively and efficiently. The term has often been associated specifically with
computer-based operations.
A computer, as noted above, is an information-processing machine. Computers process data
to produce information: a computer accepts data and instructions, and executes the instructions
on the data to produce results or perform actions as output. The set of data and instructions
provided by the user is called input, and the result obtained after the computer has processed
the input is output. Typically, the process can be repeated using the same instructions but
different data. This is possible only if the instructions are converted into machine-readable
form, called a program, and stored in the computer (the stored-program concept). As indicated
in Figure 5.3, the computer must remember the data in order to execute the program: it accepts
input from the user, processes the data according to the instructions, and produces output.
i. Input: Receiving data from external sources, such as user input, sensors, or other
devices.
ii. Processing: Analysing and interpreting the data using algorithms and software, which
may involve computational tasks such as sorting, searching, and calculations.
Operations on the input data are carried out based on the instructions provided in the
programs.
iii. Storage: Storing the processed data in memory, which can be volatile (temporary) or
non-volatile (permanent).
iv. Output: Presenting the processed data to users or other systems in a usable format, such
as text, images, or audio.
The input unit provides a mechanism for a computer to accept data and instructions from the
user. Typical input devices are the mouse and keyboard. The input data and instructions are
stored in the memory of the computer before they are processed by the processing unit (also
called the processor). Results are presented to the user via the output unit. Typical output
devices are the monitor and printer. The input unit, processor and output unit constitute the
basic components of a computer.
Figure 5.3: Simple computing architecture.
The computer memory unit stores data and instructions obtained from the input unit, as well
as processed results for future use. Computer memory retains data and instructions for a short
duration or for a long time. Memory that retains information only for a very short duration,
while work is in progress and the power supply is on, is called volatile memory. This forms the
primary storage of a computer; hence it is called primary memory, or simply memory. It is also
called main memory or temporary memory. The content of main memory changes with the
instructions being processed by the computer, and the latest content remains in memory until
the power supply is switched off. When the computer is switched off or reset, the content of
memory is lost. To preserve content for a long time, secondary or auxiliary storage is used.
Unlike primary memory, secondary storage is non-volatile, slower, less expensive, and able to
store large amounts of data. Examples of secondary storage devices are hard disks, USB flash
drives, compact discs (CDs), and digital versatile/video discs (DVDs). In addition to primary
memory, a faster memory module called cache memory is used as a bridge between primary
memory and the processing unit.
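A minimal Python sketch of the volatile/non-volatile distinction: a variable stands in for primary memory, and a temporary file (its path chosen just for this demo) stands in for secondary storage:

```python
import os
import tempfile

# Volatile: a Python variable lives in RAM and vanishes when the process ends.
in_memory = "draft report"

# Non-volatile: writing to a file puts the data on secondary storage,
# where it survives a restart of the program.
path = os.path.join(tempfile.gettempdir(), "demo_note.txt")
with open(path, "w") as f:
    f.write(in_memory)

# Reading the file back recovers the saved content.
with open(path) as f:
    restored = f.read()

assert restored == in_memory
os.remove(path)   # clean up the demo file
```

If the program were restarted, `in_memory` would be gone but the file would still hold "draft report" until deliberately deleted.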
The processing unit, also called the central processing unit (CPU), is regarded as the brain of
the computer. The CPU accepts data and instructions from primary memory, executes the
instructions, and produces results which are either kept in primary memory or saved to
secondary storage for future use. The unit that executes instructions involving arithmetic and
logical calculations is the Arithmetic and Logic Unit (ALU), and all other instructions,
involving control of the computer's components, are executed by the Control Unit (CU). The
ALU and CU together form the CPU.
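The division of labour between CU and ALU can be sketched as a toy fetch-and-execute loop; the instruction names and the three-operation program below are illustrative, not a real instruction set:

```python
def alu(op, a, b):
    """Arithmetic and Logic Unit: carries out arithmetic and logical operations."""
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "AND":
        return a & b          # bitwise AND, a logical operation
    raise ValueError(f"unknown ALU operation: {op}")

def control_unit(program):
    """Control Unit: fetches each instruction and dispatches it to the ALU."""
    results = []
    for op, a, b in program:            # fetch and decode the instruction
        results.append(alu(op, a, b))   # execute it in the ALU
    return results

print(control_unit([("ADD", 2, 3), ("SUB", 9, 4), ("AND", 6, 3)]))  # prints [5, 5, 2]
```

The CU here never does arithmetic itself; it only decides which ALU operation runs next, mirroring the description above.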
The output unit is a mechanism for displaying the results of data processed by the processing
unit. The most commonly used output device is the monitor, a display screen used for visual
presentation of results. Other output devices are speakers and headphones for sound-oriented
results and printers for hard-copy presentation of results. Results can also be stored on a
secondary storage device for future use.
Areas of application and impact of information processing in the society
Information processing has had an enormous impact on modern society. The marketplace has
become increasingly complex with the escalating availability of data and information.
Individuals need a sound understanding of how to create, access, use, and manage information,
which is essential in the work environment. People need to understand the interrelationship
among individuals, the business world nationally and internationally, and government to
constructively participate as both consumers and producers. These general competencies must
be coupled with those that lead to employment in business as well as advanced business studies.
i. Text Analysis: Information processing techniques are used to analyze and extract
information from text data, such as sentiment analysis or named entity recognition.
ii. Machine Translation: Information processing algorithms are used to translate text from
one language to another, such as in translation software or language learning apps.
iii. Text Generation: Information processing techniques are used to generate human-like
text, such as in chatbots or automated content generation.
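As an illustration of item i, a toy sentiment analyser can be built from keyword lists. Real systems use trained models and far larger lexicons; the word lists here are invented for the example:

```python
# Hypothetical word lists; production systems use trained models and large lexicons.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment(text):
    """Classify text as positive, negative, or neutral by counting keywords."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The service was great and the food was excellent"))  # positive
print(sentiment("terrible experience"))                               # negative
```

Despite its simplicity, the sketch captures the core idea: information processing extracts a useful signal (overall sentiment) from raw text data.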
4 Speech Recognition and Synthesis
i. Sensory Processing: Information processing algorithms are used to analyze data from
sensors, such as cameras or touch sensors, to allow robots to perceive and interact with
their environment.
ii. Navigation and Path Planning: Information processing is used to develop algorithms
that allow robots to navigate and plan optimal paths to reach a goal, such as in
autonomous vehicles or warehouse robots.
iii. Object Recognition and Manipulation: Information processing techniques are used to
develop algorithms that allow robots to recognize and interact with objects, such as in
industrial robots or household robots.
7 Virtual Reality (VR) and Augmented Reality (AR)
iii. Personal Finance: Information processing algorithms are used to develop systems that
can assist individuals in managing their finances, such as in budgeting apps or
investment planning tools.
10 E-Commerce and Retail
While information processing in computing offers many benefits, it also presents challenges
and considerations among others:
a) Data Security: Information processing raises concerns about data security, as sensitive
information must be protected from unauthorized access or misuse.
b) Data Privacy: Information processing raises concerns about data privacy, as users'
personal information may be collected and processed without their consent.
c) Ethical Considerations: Information processing raises ethical considerations, such as
the responsible use of data and the potential for bias in algorithms and decision-making.
Information processing in computing is a fundamental aspect of computer systems and
technology, involving the manipulation and transformation of data to produce meaningful
information. It has numerous applications and has had a profound impact on computing
performance, storage capacity, and efficiency. As we continue to navigate the digital age, it is
essential to understand the principles of information processing in computing and its
implications for technology and society.
LECTURE 6
Introduction
The Internet has fundamentally transformed the way we communicate, access information, and
conduct business. In this lecture, we will explore the history and evolution of the Internet, its
key applications, and its profound impact on various aspects of our lives and society.
Objectives
• Concept of Internet
• Internet key components
• Application of Internet
• Challenges and considerations
• Examples/impact on e-commerce.
The Internet may be simply defined as a global communication network that allows almost all
computers worldwide to connect and exchange information. It is the single worldwide
computer network that interconnects other computer networks, on which end-user services,
such as World Wide Web sites, electronic mail or data repositories, are located, enabling data
and other information to be exchanged effectively. The Internet grew out of the Advanced
Research Projects Agency Network (ARPANET), which was established by the US
Department of Defence in the 1960s for collaboration on military research among business and
government laboratories. Over the decades, the Internet has evolved into a global network of
networks, connecting billions of devices and users around the world.
Services available on the Internet include the World Wide Web (WWW), Electronic Mail
(E-mail), News Groups, File Transfer Protocol (FTP), Internet Relay Chat (IRC), Telnet, Voice
over Internet Protocol (VoIP), Gopher, etc. Among these Internet-enabled services, the WWW
and E-mail are the most widely used.
i. Protocols and Standards: The Internet relies on a set of protocols and standards, such
as Transmission Control Protocol/Internet Protocol (TCP/IP), Hypertext Transfer
Protocol (HTTP), and HTML, to ensure interoperability and communication between
devices.
ii. Infrastructure: The Internet's infrastructure includes physical components such as
cables, routers, and clients (computer devices), servers, network devices as well as
virtual components such as cloud computing platforms.
iii. Domain Name System (DNS): The DNS is a hierarchical naming system that translates
domain names into IP addresses, allowing users to access websites and services using
human-readable names. DNS acts as a bridge between human-friendly domain names
(like www.google.com) and computer-friendly IP addresses (such as 74.125.68.102)
iv. Internet Service Providers (ISPs): Internet service providers are companies that provide
Internet access to users. They connect to the Internet infrastructure and provide
connections through technologies such as DSL, cable, fibre optics or wireless links.
ISPs play a crucial role in connecting users to the Internet.
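The DNS translation in item iii can be sketched with a small lookup table standing in for the distributed DNS hierarchy; the table entries below are illustrative only, and a real lookup could instead use Python's `socket.gethostbyname`:

```python
# A toy lookup table standing in for the distributed DNS hierarchy.
# These sample addresses are illustrative; a real program would query DNS,
# e.g. with socket.gethostbyname("www.example.com").
DNS_TABLE = {
    "www.example.com": "93.184.216.34",
    "mail.example.com": "93.184.216.35",
}

def resolve(domain):
    """Translate a human-readable domain name into an IP address."""
    try:
        return DNS_TABLE[domain]
    except KeyError:
        # Real DNS resolvers report NXDOMAIN for unknown names.
        raise LookupError(f"domain not found: {domain}")

print(resolve("www.example.com"))  # prints 93.184.216.34
```

The point of the sketch is the mapping itself: users type memorable names, and the resolver supplies the numeric address the network actually routes on.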
The Internet has enabled a wide range of applications that have transformed various aspects of
our lives, including:
The Internet has transformed how business is conducted, and it has provided powerful new
ways to locate, learn about, and buy all types of products and services. It has inspired and made
possible the creation of entirely new business enterprises, including the much-touted and highly
speculative business of e-commerce. It has enabled governments to better share information
about and distribute information to their citizens, and better collect information about those
citizens. It has facilitated collaboration on research, which, incidentally, fulfilled one of the
visions of its original creators. It has dramatically changed the way we communicate and has
enabled the creation of new social structures in the form of virtual communities. It has forever
altered how we access information and the variety and quantity of information we can access,
empowering us to gain knowledge through a richness of resources that was previously only
imagined in science fiction. It has allowed us to become publishers of family photos, shared
genealogies, journals, diaries, diatribes, musical compositions, short stories, full length novels,
and just about anything else that can be stored and distributed in the form of a computer file.
The Internet has had a profound impact on various aspects of our society, including:
i. Economic Impact: The Internet has fuelled economic growth and innovation, creating
new business models, job opportunities, and industries.
ii. Social Impact: The Internet has changed the way we interact with one another, fostering
online communities, social movements, and global connections.
iii. Cultural Impact: The Internet has influenced our cultural norms, behaviours, and
identities, shaping how we express ourselves and consume media.
While the Internet has brought many benefits, it also presents challenges and considerations,
including:
i. Privacy and Security: The Internet raises concerns about data privacy, security threats,
and cyberattacks, prompting the need for robust cybersecurity measures.
ii. Digital Divide: The Internet has highlighted disparities in access to technology and
digital literacy, creating a digital divide between those who have access to the Internet
and those who do not.
iii. Misinformation and Disinformation: The Internet has facilitated the spread of
misinformation and disinformation, leading to concerns about the reliability of online
information.
i. E-Commerce Platforms: The Internet has enabled the rise of e-commerce platforms
such as Amazon, eBay, and Alibaba, allowing consumers to shop online and purchase
goods and services from anywhere in the world.
ii. Global Marketplaces: E-commerce platforms have created global marketplaces,
connecting buyers and sellers across geographical boundaries and providing consumers
with access to a wide range of products and services.
iii. Digital Payments: The Internet has facilitated the development of digital payment
systems such as PayPal, Apple Pay, and Google Pay, enabling secure and convenient
online transactions.
iv. Data Analytics: E-commerce platforms use data analytics to track consumer behaviour,
preferences, and purchasing patterns, allowing businesses to personalize the shopping
experience and target customers more effectively.
v. Challenges: The Internet has also presented challenges for e-commerce, including
concerns about data privacy, security threats, and online fraud. It is essential for
businesses to implement robust cybersecurity measures and protect consumer data.
Generally, the Internet Revolution, which began in the U.S. in the early 1990s, has brought
about the mechanization of information and communication. It has revolutionised how we
communicate, acquire information, educate ourselves, perform our jobs, entertain ourselves,
contribute to our communities, and interact with others and with society at large.
If you currently use the Internet, consider for a moment how much time you spend online at
home, at work, or elsewhere. Think about the information you routinely access through the
Internet or the amount of email you send and receive. Two of the most popular Internet
services, email and the Web, are used by millions of people across the globe every day. These
two services constitute only a small fraction of what the Internet offers, but they alone have
changed the way we interact with our friends, family, and others, the variety and volume of
information at our disposal, and, more generally, how we conduct our lives.
Everywhere we look, we see more and more references to the Internet. That is because it is
becoming part and parcel of everything we do. The Internet is changing how we raise and
educate our children, how we stay connected with our families and friends, how, when, and
where we perform our jobs, how we purchase our goods, how we read the weather forecast or
our horoscope or send a birthday card. These changes in our behaviour are fundamental and
permanent, and they are becoming more pervasive with each passing year. Consequently, the
Internet is changing us, our communities, and our societies in unprecedented ways.
LECTURE 7
Introduction
This lecture provides an overview of the different areas/programs of the computing discipline,
including computer science, information technology, and software engineering. These areas cover a
wide range of topics, including algorithms and data structures, programming languages,
operating systems, computer networks, artificial intelligence, computer graphics and
visualization, human-computer interaction, cryptography and security, databases and
information retrieval, computer hardware, computer software, computer networking, database
management, systems analysis and design, project management, IT security, web development,
cloud computing, and data science.
Objective
The field of computing encompasses various disciplines, each with its unique focus and
applications. Some key areas within the computing discipline include:
1. Computer Science: Computer science is the study of computation, algorithms, and the
design of computer systems. It covers a wide range of topics, including:
➢ Algorithms and Data Structures: The study of algorithms (step-by-step procedures for
solving problems) and data structures (ways of organizing and storing data).
➢ Programming Languages: The study of programming languages (e.g., Java, C++,
Python) and their design and implementation.
➢ Operating Systems: The study of operating systems (e.g., Windows, macOS, Linux)
and their design, implementation, and management.
➢ Computer Networks: The study of computer networks (e.g., the Internet) and their
design, implementation, and management.
➢ Artificial Intelligence: The study of artificial intelligence (AI) and machine learning
(ML) and their applications in computer systems.
➢ Computer Graphics and Visualization: The study of computer graphics and
visualization techniques and their applications in computer systems.
➢ Human-Computer Interaction (HCI): The study of HCI and user experience (UX)
design and their applications in computer systems.
➢ Cryptography and Security: The study of cryptography (the study of secure
communication) and security (the protection of computer systems and data).
➢ Databases and Information Retrieval: The study of databases and information retrieval
systems and their applications in computer systems.
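To make the algorithms-and-data-structures pairing above concrete, here is a standard binary search over a sorted list; the scores data are invented for the demo, and this is a sketch rather than a library implementation (Python's `bisect` module provides the real thing):

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Halving the search range at each step takes O(log n) comparisons,
    versus O(n) for scanning the list from front to back.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1      # target can only be in the upper half
        else:
            high = mid - 1     # target can only be in the lower half
    return -1

scores = [11, 23, 42, 57, 64, 88, 91]   # the data structure: a sorted list
print(binary_search(scores, 57))        # prints 3
print(binary_search(scores, 50))        # prints -1
```

The algorithm only works because the data structure (a sorted list) guarantees an ordering, which is exactly the interplay the definitions above describe.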
2. Information Technology: Information technology (IT) is the study of the use of computer
systems and telecommunications for storing, retrieving, and transmitting information. It covers
a wide range of topics, including:
➢ Computer Hardware: The study of computer hardware (e.g., CPUs, memory) and its
design, implementation, and management.
➢ Computer Software: The study of computer software (e.g., applications, operating
systems) and its design, implementation, and management.
➢ Computer Networking: The study of computer networking (e.g., LANs, WANs) and its
design, implementation, and management.
➢ Database Management: The study of database management systems (DBMS) and their
design, implementation, and management.
➢ Systems Analysis and Design: The study of systems analysis and design (SAD) and its
applications in computer systems.
➢ Project Management: The study of project management (PM) and its applications in
computer systems.
➢ IT Security: The study of IT security and its applications in computer systems.
➢ Web Development: The study of web development (e.g., HTML, CSS, JavaScript) and
its applications in computer systems.
➢ Cloud Computing: The study of cloud computing and its applications in computer
systems.
➢ Data Science: The study of data science and its applications in computer systems.
LECTURE 8
Introduction
The field of computing offers a wide range of job specializations, each focusing on a particular
aspect of technology. These are just a few examples of the many specializations available in
the field of computing. Each specialization requires a unique set of skills and expertise, and
professionals may choose to specialize in one area or work across multiple areas depending on
their interests and career goals.
Objective
Job specialization
✓ Cloud Computing: Professionals in this area work with cloud-based technologies, such
as Amazon Web Services (AWS) and Microsoft Azure. They may specialize in cloud
architecture, deployment, or security.
✓ Artificial Intelligence/Machine Learning: AI/ML specialists work on developing and
implementing algorithms that allow computers to learn and make decisions based on
data. They may specialize in natural language processing, computer vision, or robotics.
✓ Database Administration: Database administrators are responsible for managing and
maintaining databases, including designing schemas, optimizing performance, and
ensuring data security.
✓ Information Technology (IT) Management: IT managers oversee technology operations
within an organization, including budgeting, planning, and strategic decision-making.
✓ Embedded Systems: Professionals in this area focus on designing and developing
software and hardware for embedded systems, which are specialized computer systems
designed to perform specific functions within larger systems (e.g., automotive, medical
devices, consumer electronics).
✓ Computer Vision: Computer vision specialists work on developing algorithms and
systems that enable computers to interpret and understand visual information from the
world around them. This includes applications such as facial recognition, object
detection, and image processing.
✓ Robotics: Robotics engineers work on designing, building, and programming robots.
They may specialize in areas such as autonomous navigation, robotic manipulation, or
human-robot interaction.
✓ Virtual Reality (VR) and Augmented Reality (AR): VR/AR specialists work on
developing immersive experiences using virtual and augmented reality technologies.
This includes applications such as virtual training simulations, interactive games, and
virtual product demonstrations.
✓ Quantum Computing: Quantum computing specialists work on developing and
implementing algorithms for quantum computers, which use quantum mechanics
principles to perform complex computations. Quantum computing has applications in
fields such as cryptography, optimization, and materials science.
✓ Cloud Security: Cloud security specialists focus on ensuring the security of data and
applications hosted in cloud environments. This includes developing and implementing
security protocols, monitoring for security threats, and responding to security incidents.
✓ Computer Forensics: Computer forensics specialists work on investigating and
analysing digital evidence related to cybercrimes. This includes retrieving data from
storage devices, analysing network traffic, and providing expert testimony in legal
proceedings.
✓ Bioinformatics: Bioinformatics specialists work on developing and applying
computational tools and techniques to analyse biological data, such as DNA sequences,
protein structures, and gene expression profiles. This includes developing algorithms
for sequence alignment, phylogenetic analysis, and gene prediction.
✓ Geographic Information Systems (GIS): GIS specialists work on developing and
implementing systems for capturing, storing, analysing, and displaying spatial data.
This includes applications such as mapping, urban planning, and environmental
monitoring.
✓ Business Intelligence (BI) and Data Warehousing: BI and data warehousing specialists
work on developing and maintaining systems for collecting, storing, and analysing data
to support business decision-making. This includes developing data models, building
data warehouses, and creating reports and dashboards.
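As an illustration of the AI/ML specialization listed above, a least-squares line fit is about the smallest example of "learning from data": the model parameters are estimated from examples rather than programmed by hand. The hours/scores data are invented for the demo:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b; 'learning' here means estimating a and b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
a, b = fit_line(hours, scores)
print(f"predicted score for 6 hours: {a * 6 + b:.1f}")
```

Real machine-learning work uses far richer models, but the workflow is the same: fit parameters to observed data, then use them to make predictions on new inputs.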
LECTURE 9
Introduction
The field of computing is dynamic and constantly evolving, so it is important to stay informed
about new developments and emerging technologies. Today, computers are ubiquitous and
invisible. They reside in watches, car engines, cameras, televisions, and toys. They manage
electrical grids, analyse scientific data, and predict weather patterns. The future lies in
seamlessly integrated computing, where devices work harmoniously without drawing attention
to themselves.
Objective
Predicting the future of computing is challenging due to the rapid pace of technological
advancement and the complexity of the field. However, there are several trends that are likely
to shape the future of computing in the coming years: