
ALIKO DANGOTE UNIVERSITY OF SCIENCE AND TECHNOLOGY, WUDIL
FACULTY OF COMPUTING AND MATHEMATICAL SCIENCES
DEPARTMENT OF COMPUTER SCIENCE

Lecture notes for:

CSC1301: INTRODUCTION TO COMPUTER SCIENCE
CHAPTER 1
INTRODUCTION TO COMPUTING

WHY THE NEED TO STUDY THIS COURSE?

Computer Science is Foundational for Every Student

We believe that computing is so fundamental to understanding and participating in society that it is valuable for every student to learn as part of a modern education. We see computer science as a liberal art, a subject that provides students with a critical lens for interpreting the world around them. Computer science prepares all students to be active and informed contributors to our increasingly technological society, whether they pursue careers in technology or not. Computer science can be life-changing, not just skill training.

Computers are a primary means of local and global communication for billions of people.
Consumers use computers to correspond with businesses, employees with other employees and
customers, students with classmates and teachers, and family members and military personnel with
friends and other family members. In addition to sending simple notes, people use computers to
share photos, drawings, documents, calendars, journals, music, and videos. Through computers,
society has instant access to information from around the globe. Local and national news, weather
reports, sports scores, airline schedules, telephone directories, maps and directions, job listings,
credit reports, and countless forms of educational materials are always accessible. From the
computer, you can make a telephone call, meet new friends, share opinions or life stories, book
flights, shop, fill prescriptions, file taxes, take a course, receive alerts, and automate your home.

As technology continues to advance, computers have become a part of everyday life. Thus, many
people believe that computer literacy is vital to success in today's world. Computer literacy,
also known as digital literacy, involves having a current knowledge and understanding of
computers and their uses. Because the requirements that determine computer literacy change as
technology changes, you must keep up with these changes to remain computer literate.

Figure 1: People Using Computers

1.1 A HISTORICAL OVERVIEW OF COMPUTING

The term computer was originally applied to humans who were employed to solve difficult equations. Human computers were often used to compile almanacs consisting of tabulated
values that could be used by navigators, for example, to help them quickly find the answer to
complicated trigonometric equations. This task was called computing. Unfortunately, the results
produced by human computers were not only tedious to produce but were also prone to errors.
The earliest known attempt to automate this process was made by the mathematician Charles
Babbage in 1822.

Long before Babbage (by thousands of years), the Chinese were known to use the abacus to mechanize
arithmetic. However, the abacus was only really suited to fixed-point arithmetic (such as integer
arithmetic). In 1622 William Oughtred invented the slide rule, which can be used for floating-point
numbers but only to a precision of about four digits.

The first mechanical computer was designed and built by the German scientist Wilhelm Schickard in
1623. It was capable of addition, subtraction, multiplication and division. Blaise Pascal built a
similar system (the Pascaline) in 1642, and a multiplication machine was first built by Gottfried Leibniz in 1673.
Babbage later realized that his earlier attempt would result in a system that could perform only one kind
of operation; in 1833 he realized it was possible to build a general-purpose mechanical computer, which he
called the Analytical Engine.

Lady Ada Lovelace wrote programs for this machine. She was in effect the world's first computer
programmer. The idea of a program, or algorithm, had been around for some time as a
mathematical notion. It was first proposed by Muhammad ibn Musa al-Khwarizmi in the 9th
century. The word algorithm arises from the anglicized form of al-Khwarizmi's name.

The next development in the history of computing is attributed to the American engineer Herman
Hollerith, who developed tabulating machinery for the United States census of 1890. This was
based on punch cards, which held data in the form of holes, a technique that had previously been
used in controlling weaving looms. Joseph-Marie Jacquard, a Frenchman, had invented the
Jacquard loom in the early 1800s. In Hollerith's machines, stacks of cards could be run through a
machine to count the number of cards with holes in particular positions. The technology that
Hollerith pioneered rapidly took off for census applications (essentially counting) and quite soon
for other tasks including multiplication. The company Hollerith founded went on to become part
of the present-day IBM. By the early twentieth century, such machinery was in fairly widespread
use, and punch cards or punched paper tape was a primary form of computer input well into the
1970s.

1.1.1 MODERN COMPUTERS
Although the underlying (mechanical) technologies of the machines above were developed further
in the early 1900s, it was hard to develop such machines in a financially viable manner. However,
advances in the field of electronics soon opened up new possibilities. Modern computers are based
on digital electronics. There are some disputes as to who actually built the first modern computer.
Today it is accepted that the German claim is the strongest. Konrad Zuse is known to have
developed the Z3 computer (after a couple of earlier models) using electronically controlled
mechanical relays. The Z3 was finished in Nazi Germany in complete isolation in 1941, and as a
consequence engineers in the US and UK remained unaware of Zuse's work.

The Mark 1 computer, which was developed at Harvard University in partnership with IBM in
1944, is known as another early famous success of a programmable machine that used
electronically controlled relays. With a length of 16 m and a height of 2.4 m, the Mark 1 was enormous
in size. It was initially used by the Navy and later in other fields to solve repetitive calculations. It
was operational for 15 years.

The use of vacuum tubes was the next step in the advances of computer technology. The Colossus
computer and the ENIAC (Electronic Numerical Integrator and Computer) are generally quoted as
the first computers of this era and hence as the first electronic computers. The Colossus was used
at Bletchley Park to support the decoding of German messages during the Second World War
(www.bletchleypark.org.uk).

The ENIAC was built by the University of Pennsylvania to address the needs of the Army's
Ballistics Research Laboratory during the Second World War for the accurate calculation of the
firing tables used to aim their artillery. However, the ENIAC was only completed in 1946, and as
a consequence its first task was to help with some complex calculations that were required to assess
the feasibility of the hydrogen bomb. Hence the ENIAC is often described as the first general-
purpose electronic computer and is often chosen as the main representative of vacuum tube
computers or as what we now refer to as first-generation computers.

The earlier computers had to be programmed by hand; reprogramming a computer to perform a
different task could involve physically rewiring the machine. The first stored-program machine
was the Manchester University computer called the Baby, developed in 1948. This was later developed
into the Manchester Mark I machine, which was commercialized by Ferranti.

Second-generation computers emerged with the development of the transistor in the 1950s. The
transistor replaced the vacuum tube. Made from silicon, the transistor had the major advantages of
smaller size, lower price and lower heat emission.

The invention of the integrated circuit in 1958 marks the era of third generation computers. Rather
than having to treat components such as transistors, resistors and conductors as separate
components that have to be connected, the advances in micro-electronics allowed the production
of an entire circuit from one tiny piece of silicon.

Beyond the third generation, there are varying views of how or whether computer hardware
advancement can be divided into further generations, but some attempts have been made based on
the advances of integrated circuit technology.

A major milestone in the development of computers since the 1940s was the development of
the first desktop computers. Several (successful) attempts were made by computer companies and
individuals. However, the most famous remains the development of the PC (personal computer),
which was first developed by IBM in 1981 with software by Microsoft. We still use the term PC
today to refer to any computer, from any manufacturer, that has evolved from IBM's original
desktop computer.

Since the early beginnings of the desktop computer as a tool that was useful in disciplines beyond
mathematical calculations, and accessible to a more general user group, computer technology has
penetrated all aspects of life. Today most of us use a PC on a daily basis, and computer technology
is integrated in our telephones, entertainment technologies, our cars, etc.

WHAT IS A COMPUTER?
A computer is an electronic device, operating under the control of instructions stored in its own
memory, that can accept data, process the data according to specified rules, produce
results/information, and store the results/information for future use.

Data and Information


Computers process data into information. Data is a collection of unprocessed items, which can
include text, numbers, images, audio, and video. Information conveys meaning and is useful to
people. Data and information are often used interchangeably; however data becomes information
when it is viewed in context or in post-analysis.

While the concept of data is commonly associated with scientific research, data is collected by a
huge range of organizations and institutions, including businesses (e.g., sales data, revenue, profits,
stock price), governments (e.g., crime rates, unemployment rates, literacy rates) and
nongovernmental organizations (e.g., censuses of the number of homeless people by non-profit
organizations). Data is measured, collected and reported, and analyzed, to produce information,
whereupon it can be visualized using graphs, images or other analysis tools.

TYPES OF COMPUTER
Computers are classified based on uses and size.

Based on uses:
● Analog Computers: An analog computer or analogue computer is a type of computer that
uses the continuously changeable aspects of physical phenomena such as electrical,
mechanical, or hydraulic quantities to model the problem being solved. An analogue
computer does not use discrete values, but rather continuous values, and it is widely used
in scientific and industrial applications. Examples of analog computers are the Cosmic Engine,
the Pascaline, the Stepped Reckoner, and so on.

● Digital Computers: The advent of digital computing made simple analog computers
obsolete as early as the 1950s and 1960s. Digital computers represent varying quantities
symbolically, as their numerical values change. They employ discrete values and
are referred to as the computers of the modern age. Examples are: Personal Computer (PC),
Smartphones and so on.

● Hybrid Computers: Hybrid computers are computers that exhibit features of analog
computers and digital computers. The digital component normally serves as the controller
and provides logical and numerical operations, while the analog component often serves as
a solver of differential equations and other mathematically complex equations. Examples
are the modern thermometer and the fuel dispenser at a gasoline station.

Based on Sizes:
● Mainframe computer: Mainframe computers or mainframes (colloquially referred to as
"big iron") are computers used primarily by large organizations for critical applications;
bulk data processing, such as census, industry and consumer statistics, enterprise resource
planning; and transaction processing. They are larger and have more processing power than
some other classes of computers: minicomputers, servers, workstations, and personal
computers.
● Minicomputer: A minicomputer, or colloquially mini, is a class of smaller computers that
was developed in the mid-1960s and sold for much less than mainframes. Minicomputers
were also known as midrange computers. They grew to have relatively high processing
power and capacity. They were used in manufacturing process control, telephone switching
and to control laboratory equipment.
● Microcomputer: A microcomputer is a small, relatively inexpensive computer with a
microprocessor as its central processing unit (CPU). It includes a microprocessor, memory,
and minimal input/output (I/O) circuitry mounted on a single printed circuit board.
Microcomputers became popular in the 1970s and 1980s with the advent of increasingly
powerful microprocessors. The predecessors to these computers, mainframes and
minicomputers, were comparatively much larger and more expensive.

● Workstation: A workstation is a special computer designed for technical or scientific


applications. Intended primarily to be used by one person at a time, they are commonly
connected to a local area network and run multi-user operating systems.

● Supercomputer: A supercomputer is a computer with a high level of performance


compared to a general-purpose computer. The performance of a supercomputer is
commonly measured in floating-point operations per second (FLOPS) instead of million
instructions per second (MIPS). Supercomputers play an important role in the field of
computational science, and are used for a wide range of computationally intensive tasks in
various fields, including quantum mechanics, weather forecasting, climate research, oil and
gas exploration, molecular modelling (computing the structures and properties of chemical
compounds, biological macromolecules, polymers, and crystals), and physical simulations
(such as simulations of the early moments of the universe, airplane and spacecraft
aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Presently, China’s
"Sunway TaihuLight" is the world’s fastest Supercomputer at 93 petaflops, followed by
Tianhe-2 at 34 petaflops. Sunway TaihuLight can perform more than 93 quadrillions of
floating point operations per second. A recent list of top most powerful Supercomputers in
the world can be seen from www.top500.org.

● Personal computer: A personal computer (PC) is a multi-purpose computer whose size,


capabilities, and price make it feasible for individual use. Personal computers are intended
to be operated directly by an end user, rather than by a computer expert or technician.
Unlike large, costly minicomputers and mainframes, personal computers are not shared by many
people at the same time through time-sharing.

● Laptop: A laptop computer (also shortened to just laptop; or called a notebook computer)
is a small, portable personal computer. Laptops are folded shut for transportation, and thus
are suitable for mobile use. Its name comes from "lap", as it is designed to be placed on a
person's lap when being used. Laptops combine all the input/output components and
capabilities of a desktop computer, including the display screen, small speakers, a
keyboard, hard disk drive, optical disc drive, pointing devices (such as a touchpad or
trackpad), a processor, and memory into a single unit.

● Smartphone: Smartphones are a class of mobile phones and of multi-purpose mobile


computing devices. They are distinguished from feature phones by their stronger hardware
capabilities and extensive mobile operating systems, which facilitate wider software,
internet and multimedia functionality, alongside core phone functions such as voice calls
and text messaging.
● Tablet computer: A tablet computer, commonly shortened to tablet, is a mobile device,
typically with a mobile operating system, touchscreen display, processing circuitry, and
a rechargeable battery in a single thin, flat package. Tablets, being computers, do what
other personal computers do, but lack some input/output (I/O) abilities that others have.
Modern tablets largely resemble modern smartphones, the main difference being that
tablets are larger than smartphones.

THE COMPONENTS OF A COMPUTER


There are three major components of a computer system:

1. Hardware: The term 'computer hardware' is used to describe computer components that
can be seen and touched. The major components that constitute the hardware include: Input
Unit, Memory Unit, Storage Unit, Output Unit, Central Processing unit and
Communication Device Unit.

2. Software: This is basically the part that can be seen but not touched. It is the abstract set
of instructions, or rules, that the machine follows. It comprises the instructions,
programs, data, and protocols which run on top of the hardware. Software is commonly
grouped into System Software, Application Software, Utility Software and Malicious
Software.

3. Humanware: This component refers to the person that uses the computer. More
specifically, it is about the individual that makes hardware and software components
productive. Typically, a great deal of testing is done on software packages and hardware
parts to ensure they enhance the end-user experience and aid in creating documents, musical
and video recordings, and all forms of raw and finished data.

Figure 2: Components of computer


THE BASIC COMPUTER ARCHITECTURE
It should be noted that this architecture is common to almost all computers running today, from
the smallest industrial controller to the largest supercomputer. What sets the larger computers
apart from the typical PC is that many larger computers are built from a large number of processor
and memory modules that communicate and work cooperatively on a problem. The basic
architecture is the same.

Figure 3: Basic computer architecture

The functions of the three top-level components of a computer seem to be obvious. The I/O
devices allow for communication of data to other devices and the users. The memory stores both
program data and executable code in the form of binary machine language. The CPU comprises
components that execute the machine language of the computer. Within the CPU, it is the function
of the control unit to interpret the machine language and cause the CPU to execute the instructions
as written. The Arithmetic Logic Unit (ALU) is that component of the CPU that does the
arithmetic operations and the logical comparisons that are necessary for program execution. The
ALU uses a number of local storage units, called registers, to hold results of its operations. The
set of registers is sometimes called the register file.

PROGRAMMING LANGUAGES
Computer programs (software) are developed using a programming language. Just like human
beings, the computer has languages, which are made up of a set of symbols (and often groups of
symbols) and rules for combining the symbols so that they represent instructions for a computer to
carry out a specific operation.

Types of Programming Language

Basically, there are two types of programming languages, namely:

● Low-level languages
● High-level languages

Low-Level Languages
CPUs are designed to understand a fixed number of instructions. These instructions (that are
fetched, decoded and executed) need to be represented as bit patterns in order for them to be stored
and for the CPU to understand them (in the same way as data needs to be represented in binary
form). The collection of instructions a CPU can understand is known as the CPU's instruction set.
There are two representations of low-level languages:
1. The machine language or machine code: Machine language consists of strings of binary
numbers (i.e. 0s and 1s) and it is the only language the processor directly understands.
Machine language has the advantage of very fast execution speed and efficient use of
primary memory.
● Advantages
o It is directly understood by the processor, so it has faster execution time since
programs written in this language need not be translated.
o It does not need larger memory.
● Disadvantages
o It is very difficult to program since all the instructions have to be represented by
0s and 1s.
o Use of this language makes programming time consuming.
o It is difficult to find errors and to debug.
o It can be used by experts only.
2. Assembly Language: Assembly language is also known as a low-level language because to
design a program the programmer requires detailed knowledge of the hardware specification. This
language uses mnemonic codes (symbolic operation codes like ADD for addition, MUL for
multiplication) in place of 0s and 1s. The program is converted into machine code by the
assembler. The resulting program is referred to as object code.

● Advantages
o It makes programming easier and faster than machine language since it uses
mnemonic codes for programming, e.g. ADD for addition, SUB for subtraction,
DIV for division, etc.
o Errors can be identified and debugged much more easily compared to machine
language.
● Disadvantages
o Programs written in this language are not directly understood by the computer, so
translators are used.
o It is a hardware-dependent language, so programmers are forced to think in terms
of the computer's architecture rather than the problem being solved.
o Programmers must know its mnemonic codes to perform any task.

High-level Languages
High-level languages are programming languages that allow a program to be written in a form
readable to human beings. High-level languages were developed to overcome the limitations of
machine and assembly languages. In a high-level language, a program is written in a form that
resembles the statement of the given problem in English, and is later converted into machine
language by translator programs (interpreters or compilers). In high-level languages, there are
certain syntax and punctuation rules which must be learned, but in most cases these languages are
designed to be problem oriented rather than machine oriented. High-level languages can be further
categorized as:

a) Procedural-Oriented language
Procedural Programming is a methodology for modelling the problem being solved, by
determining the steps and the order of those steps that must be followed in order to reach a
desired outcome or specific program state. These languages are designed to express the
logic and the procedure of a problem to be solved. It includes languages such as Pascal,
COBOL, C, FORTRAN, etc.
● Advantages:
o Because of their flexibility, procedural languages are able to solve a variety of
problems.
o The programmer does not need to think in terms of computer architecture, which
keeps the focus on the problem.
● Disadvantages:
o It is easier to use but needs a more powerful processor and larger memory.
o It needs to be translated, therefore its execution time is longer.

b) Problem-Oriented language
It allows the users to specify what the output should be, without describing all the details of how
the data should be manipulated to produce the result. This is one step ahead of
procedural-oriented languages. These languages are usually result oriented and include
database query languages, e.g. Visual Basic, C#, Java, etc. The objectives of
Problem-Oriented languages are to:
✔ Increase the speed of developing programs.
✔ Minimize the user's effort to obtain information from a computer.
✔ Reduce errors while writing programs.
● Advantages:
o Programmers need not think about the procedure of the program,
so programming is much easier.
● Disadvantages:
o It is easier to use but needs a more powerful processor and larger memory.
o It needs to be translated as well and, as such, has a longer execution time.
c) Natural language
Natural language programming is still at a developing stage, where we could write statements that
look like normal sentences and programs would be developed from those statements. This implies
independence from syntax rules and other protocols needed in utilizing a programming
language.

● Advantages:
o Easy to program, since the program uses normal sentences, they are easy to
understand.
o The programs designed using Natural language will have artificial intelligence
(AI) and would be much more interactive and interesting.

● Disadvantage:
o Demands can be high in terms of the computing resources (memory, storage
and CPU)
o It is an expensive approach as it may require high-end computing devices that
are not easy to purchase.

Translator
A computer stores and uses information in binary format, therefore the computer cannot understand
programs written in either high-level or assembly language. Program code written in either high
level or assembly language must be translated into a binary machine code that the computer
recognizes.
A translator is a program that translates a program written in high-level or assembly language into
Machine language. The translator is resident in the main memory of the computer and uses a high-
level or low-level program as input data. The output from the translator is a program in
machine-readable code. In addition to translation, a translator will report on any grammatical
errors made by the programmer in the language statements of the program. Translation from a
high-level language is done by a compiler or interpreter, and from a low-level language by an assembler.

Types of Translators
✔ Assemblers
An assembler is a program that translates a computer program written in assembly language into
machine code.
✔ Compilers and Interpreters
A compiler is a program that translates source code written in a high-level language into machine
code. The compiler reads the entire piece of source code, collecting and reorganizing the
instructions. Compilation is the process of translating a computer program from its original
or source form into machine code.
The interpreter, unlike the compiler, reads and interprets the source code line by line to produce
the machine code.
✔ Byte-code/Pseudo-Code
There are techniques that try to obtain some of the best of both compiled and interpreted
languages. These typically compile the program to some other language that is not as high-
level as the original source code, but which is not as low-level as machine code. This
intermediate level is designed to be executed very efficiently by an interpreter. This
intermediate language is called byte-code or pseudo-code. The idea is used, most notably,
in Java.

ADVANTAGES AND DISADVANTAGES OF USING COMPUTER

Advantages of Using Computers


Benefits from using computers are possible because computers have the advantages of speed,
reliability, consistency, storage, and communications.

▪ Speed: When data, instructions, and information flow along electronic circuits in a
computer, they travel at incredibly fast speeds. Many computers process billions or trillions
of operations in a single second. Processing involves computing (e.g., adding, subtracting),
sorting (e.g., alphabetizing), organizing, displaying images, recording audio, playing
music, and showing a movie or video.

▪ Reliability: The electronic components in modern computers are dependable and reliable
because they rarely break or fail.

▪ Consistency: Given the same input and processes, a computer will produce the same results
— consistently. A computing phrase — known as garbage in, garbage out — points out
that the accuracy of a computer's output depends on the accuracy of the input. For example,
if you do not use the flash on a digital camera when indoors, the resulting pictures that are
displayed on the computer screen may be unusable because they are too dark.

▪ Storage: A computer can transfer data quickly from storage to memory, process it, and then
store it again for future use. Many computers store enormous amounts of data and make
this data available for processing anytime it is needed.

▪ Communications: Most computers today can communicate with other computers, often
wirelessly. Computers with this capability can share any of the four information processing
cycle operations — input, process, output, and storage — with another computer or a user.

DISADVANTAGES OF USING COMPUTERS

Some disadvantages of computers relate to health risks, violation of privacy, public safety, the
impact on the labor force, and the impact on the environment.

▪ Health Risks: Prolonged or improper computer use can lead to injuries or disorders of the
hands, wrists, elbows, eyes, neck, and back. Computer users can protect themselves from
these health risks through proper workplace design, good posture while at the computer,
and appropriately spaced work breaks. Two behavioral health risks are computer addiction
and technology overload. Computer addiction occurs when someone becomes obsessed
with using a computer. Individuals suffering from technology overload feel distressed
when deprived of computers and mobile devices.

▪ Violation of Privacy: Nearly every life event is stored in a computer somewhere in medical
records, credit reports, tax records, etc. In many instances, where personal and confidential
records were not protected properly, individuals have found their privacy violated and
identities stolen.

▪ Public Safety: Adults, teens, and children around the world are using computers to share
publicly their photos, videos, journals, music, and other personal information. Some of
these unsuspecting, innocent computer users have fallen victim to crimes committed by
dangerous strangers. Protect yourself and your dependents from these criminals by being
cautious in e-mail messages and on Web sites. For example, do not share information that
would allow others to identify or locate you and do not disclose identification numbers,
passwords, or other personal security details.

▪ Impact on Labor Force: Although computers have improved productivity in many ways
and created an entire industry with hundreds of thousands of new jobs, the skills of millions
of employees have been replaced by computers. Thus, it is crucial that workers keep their
education up-to-date. A separate impact on the labor force is that some companies are
outsourcing jobs to foreign countries instead of keeping their homeland labor force
employed.

▪ Impact on Environment: Computer manufacturing processes and computer waste are


depleting natural resources and polluting the environment. When computers are discarded
in landfills, they can release toxic materials and potentially dangerous levels of lead,
mercury, and flame retardants.

▪ Green computing involves reducing the electricity consumed and environmental waste
generated when using a computer. Strategies that support green computing include
recycling, regulating manufacturing processes, extending the life of computers, and
immediately donating or properly disposing of replaced computers. When you purchase a
new computer, some retailers offer to dispose of your old computer properly.

CHAPTER 2
DATA REPRESENTATION AND MANIPULATION

INTRODUCTION
In order for computers to process information, information must be represented in appropriate
formats and stored in appropriate places. Information today comes in different forms such as text,
numbers, images, audio, and video.

Computer science uses a 1 and a 0 for the two different possibilities. Hence, the most basic unit of
information is the binary digit (referred to as a bit of information). A bit of information can contain
either a 1 or a 0. A collection of eight bits of information is generally referred to as one byte, and
memory size is generally measured in the number of bytes of information a computer can store.

Text
Text is represented as a bit pattern, a sequence of bits (0s or 1s). Different sets of bit patterns have
been designed to represent text symbols. Each set is called a code, and the process of representing
symbols is called coding. Today, the prevalent coding system is called Unicode, which uses 32
bits to represent a symbol or character used in any language in the world. The American Standard
Code for Information Interchange (ASCII), developed some decades ago in the United States,
now constitutes the first 128 characters in Unicode and is also referred to as Basic Latin. It uses
7-bit strings to represent the English alphabet. Another well-known code is the Extended Binary-
Coded Decimal Interchange Code (EBCDIC), which uses 8-bit codes. The number of characters that
can be represented by a particular coding system depends on the bit length of its code.
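To make the idea of character codes concrete, here is a small sketch in Python (the notes do not prescribe any language; Python and the sample string are chosen only for illustration):

```python
# Inspect the numeric codes behind characters.
text = "Hi"

for ch in text:
    code = ord(ch)                        # Unicode code point of the character
    print(ch, code, format(code, "08b"))  # character, decimal code, 8-bit pattern

# 'H' -> 72 -> 01001000 and 'i' -> 105 -> 01101001 both fall in the ASCII
# (Basic Latin) range 0-127, so 7 bits suffice; they are padded to one byte here.
print(text.encode("utf-8"))               # the same text as a sequence of bytes
```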

Numbers
Numbers are also represented by bit patterns. However, a code such as ASCII is not used to
represent numbers; the number is directly converted to a binary number to simplify mathematical
operations. If we represent numbers as binary numbers (rather than as coded decimal digits), we can
actually represent the numbers 0 to 255 with just 8 bits.

The base or radix of a number system is defined as the number of digits it uses to represent
numbers in the system. The decimal system uses 10 as a base, and the 10 digits available are 0, 1,
2, 3 . . . 9. The binary system uses 2 as a base and hence has only the two digits 0 and 1.

Images
Images are also represented by bit patterns. In its simplest form, an image is composed of a matrix
of pixels (picture elements), where each pixel is a small dot. The size of the pixel depends on the
resolution. For example, an image can be divided into 1000 pixels or 10,000 pixels. In the second
case, there is a better representation of the image (better resolution), but more memory is needed
to store the image. The collection of these encoded pixels is known as the bitmap of the image.

After an image is divided into pixels, each pixel is assigned a bit pattern. The size and the value of
the pattern depend on the image. For an image made of only black and white dots (e.g., a
chessboard), a 1-bit pattern is enough to represent a pixel. If an image is not made of pure white
and pure black pixels, you can increase the size of the bit pattern to include greyscale. For example,
to show four levels of grayscale, you can use 2-bit patterns. A black pixel can be represented by
00, a dark grey pixel by 01, a light grey pixel by 10, and a white pixel by 11.
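A small sketch of this 2-bit greyscale idea, in Python for illustration only (the image and its pixel values are made up):

```python
# Map each 2-bit pattern to the grey level it represents.
levels = {0b00: "black", 0b01: "dark grey", 0b10: "light grey", 0b11: "white"}

# A tiny 2x4 image stored as one 2-bit pattern per pixel.
image = [
    [0b00, 0b01, 0b10, 0b11],
    [0b11, 0b10, 0b01, 0b00],
]

for row in image:
    print(" | ".join(levels[pixel] for pixel in row))
# Each pixel needs only 2 bits, so this 8-pixel image fits in 16 bits (2 bytes).
```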

There are several methods to represent color images. One method is called RGB, so called because
each color is made of a combination of three primary colors: red, green, and blue. The intensity of
each color is measured, and a bit pattern is assigned to it. Another method is called YCM, in which
a color is made of a combination of three other primary colors: yellow, cyan, and magenta.

Files that store a bitmap image can be rather large, and various compression methods have been
developed to reduce their size. Graphic Interchange Format (GIF), for example, is one such method
that reduces the size of a bitmap file by reducing the number of colors that can be assigned to a
pixel to 256, and the Joint Photographic Experts Group (JPEG) developed a compression method
which is commonly used to compress photographs.

Consider the following simple black and white image: by using a single bit for each pixel, a black
pixel might be encoded as 1 and a white pixel as 0. More sophisticated black-and-white pictures
with grey shades might require 8-bit sequences (a byte) for each pixel to represent the different
shades.

Figure 4: Image representation

Audio
Audio refers to the recording or broadcasting of sound or music. Audio is by nature different from
text, numbers, or images. It is continuous, not discrete. Even when we use a microphone to change
voice or music to an electric signal, we create a continuous signal. Typically, the amplitude of the
sound wave is sampled and recorded at regular time intervals, as shown in the following figure.
These values can then be stored in binary form and used to reconstruct the initial wave at a later
stage.
Figure 5: Audio signal representation
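As a rough sketch of this sampling idea, the following Python fragment measures a made-up 1 Hz sine wave at regular intervals and rounds each amplitude so it can be stored as a small binary number (the wave, sample rate and scaling are invented purely for illustration):

```python
import math

sample_rate = 8          # samples per second (real audio uses e.g. 44100)
frequency = 1            # a 1 Hz sine wave stands in for the "sound"

samples = []
for n in range(sample_rate):
    t = n / sample_rate                      # time of this sample in seconds
    amplitude = math.sin(2 * math.pi * frequency * t)
    samples.append(round(amplitude * 127))   # quantize to a small signed integer

print(samples)   # these integers are what would be stored as bit patterns
```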

Video
Video refers to the recording or broadcasting of a picture or movie. Video can either be produced
as a continuous entity (e.g., by a TV camera), or it can be a combination of images, each a discrete
entity, arranged to convey the idea of motion.

THE NUMBERING SYSTEM

Number systems are the techniques used to represent numbers in a computer system; every
value that you save to or read from computer memory is expressed in a defined number system.

Computer architecture supports the following number systems.

● Binary number system


● Octal number system
● Decimal number system
● Hexadecimal (hex) number system

● Binary Number System: A binary number system has only two digits, which are 0 and
1. Every number (value) is represented with 0 and 1 in this number system. The base of
the binary number system is 2, because it has only two digits.
● Octal number system: The octal number system has only eight (8) digits, from 0 to 7. Every
number (value) is represented with 0, 1, 2, 3, 4, 5, 6 and 7 in this number system. The base of
the octal number system is 8, because it has only 8 digits.

● Decimal number system: The decimal number system has ten (10) digits, from 0 to 9.
Every number (value) is represented with 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 in this number system.
The base of the decimal number system is 10, because it has 10 digits.

● Hexadecimal number system: A hexadecimal number system has sixteen (16)
alphanumeric values, from 0 to 9 and A to F. Every number (value) is represented with
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E and F in this number system. The base of the
hexadecimal number system is 16, because it has 16 alphanumeric values. Here A is
10, B is 11, C is 12, D is 13, E is 14 and F is 15.

NUMBER SYSTEM CONVERSION

Conversion from Binary to Decimal Number System


In other words, this is conversion from base 2 to base 10. The rules for conversion from binary
to decimal are given below:
1. Multiply each bit by the corresponding power of 2 (the base).
2. Sum the product terms to get the decimal equivalent.
Note: The power of 2 is 0 for the bit immediately to the left of the binary point (or for the rightmost
bit of a number that does not contain a fractional part); the power increases by one for each bit
towards the left and decreases by one for each bit towards the right of the binary point.
Example 1: convert (110011)2 to decimal.
Solution:
(110011)2 = 1×2^5 + 1×2^4 + 0×2^3 + 0×2^2 + 1×2^1 + 1×2^0
= 32 + 16 + 0 + 0 + 2 + 1
= (51)10

Example 2: convert (1011.101)2 into decimal.
Solution:
(1011.101)2 = 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0 + 1×2^-1 + 0×2^-2 + 1×2^-3
= 8 + 0 + 2 + 1 + 0.5 + 0 + 0.125
= 11 + 0.5 + 0.125
= (11.625)10
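The same positional expansion can be checked in code. This is only an illustrative sketch in Python; int() with an explicit base is the built-in shortcut, while the loop mirrors the hand method above:

```python
# Check the hand conversion of (110011)2 by expanding powers of 2 explicitly.
bits = "110011"

total = 0
for position, bit in enumerate(reversed(bits)):   # rightmost bit is 2^0
    total += int(bit) * 2 ** position
print(total)               # 51

# Python's built-in conversion gives the same answer.
print(int("110011", 2))    # 51
```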

Conversion of hexadecimal to decimal (base 16 to base 10)


The rules for conversion from hexadecimal to decimal are given below:
1. Multiply each digit by the corresponding power of 16 (the base), as in the binary-to-decimal case.
2. Sum the product terms to get the decimal equivalent.
Example: convert (F4C)16 into decimal.
Solution:
(F4C)16 = F×16^2 + 4×16^1 + C×16^0
= 15×256 + 4×16 + 12×1
= 3840 + 64 + 12
= 3916
Therefore, (F4C)16 = (3916)10
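As a quick check (again just an illustrative Python sketch), the same expansion can be computed directly:

```python
# Check (F4C)16 by the positional expansion and by Python's built-in conversion.
value = 15 * 16**2 + 4 * 16**1 + 12 * 16**0   # F = 15, 4 = 4, C = 12
print(value)             # 3916

print(int("F4C", 16))    # 3916, using int() with base 16
```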
Conversion of decimal to binary (base 10 to base 2)
The rules for conversion from decimal to binary are given below:
1. Divide the given number by 2 and note the remainder.
2. Repeatedly divide the quotient by 2 and note the remainder until the quotient is reduced to 0.
3. Collect the remainders, last obtained first and first obtained last, to get the binary equivalent.

Example: convert (51)10 into binary.
Solution:
51 ÷ 2 = 25 remainder 1
25 ÷ 2 = 12 remainder 1
12 ÷ 2 = 6 remainder 0
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1
Therefore, (51)10 = (110011)2
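The repeated-division method lends itself naturally to a short program. The following is a sketch in Python, chosen only for illustration:

```python
# Decimal to binary by repeated division by 2, collecting the remainders.
def decimal_to_binary(n):
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(n % 2)   # note the remainder
        n //= 2                    # divide the quotient by 2
    # the last remainder obtained becomes the first (leftmost) bit
    return "".join(str(bit) for bit in reversed(remainders))

print(decimal_to_binary(51))   # 110011
print(bin(51))                 # 0b110011, Python's built-in equivalent
```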

Conversion of hexadecimal to binary (base 16 to base 2)


Substitute each hexadecimal symbol by its equivalent 4-bit binary pattern and collect the bits for each
digit to get the binary equivalent number.

Conversion of decimal to Octal (base 10 to base 8)

The method is the same repeated division, but dividing by 8.
Decimal number: (12345)10
12345 ÷ 8 = 1543 remainder 1
1543 ÷ 8 = 192 remainder 7
192 ÷ 8 = 24 remainder 0
24 ÷ 8 = 3 remainder 0
3 ÷ 8 = 0 remainder 3
Therefore, the octal number is (30071)8

Conversion of Octal to Decimal (base 8 to base 10)

Octal number: (30071)8
(30071)8 = 3×8^4 + 0×8^3 + 0×8^2 + 7×8^1 + 1×8^0
= 12288 + 0 + 0 + 56 + 1
= 12345
Decimal number: (12345)10
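Both octal conversions can be verified with a couple of lines of Python (illustrative only):

```python
# Decimal 12345 to octal, and octal 30071 back to decimal.
print(oct(12345))        # '0o30071'
print(int("30071", 8))   # 12345

# The positional expansion for (30071)8, written out:
print(3*8**4 + 0*8**3 + 0*8**2 + 7*8**1 + 1*8**0)   # 12345
```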

CHAPTER 3
FUNDAMENTALS OF DATA STORAGE

INTRODUCTION

Information can be stored in various ways. Books contain a great deal of information/data, so do
films and paintings. Even rocks contain a lot of information about geological changes. Computers
also store information in appropriate formats and appropriate places.

From the preceding chapters we know that one of the most important breakthroughs in the
development of computers was the use of the binary system. In the binary system, there are only
two types of values, 1s and 0s. Information/data can be represented in the binary system (see
Chapter 2). It is easy to store binary information/data in physical media. In this chapter we shall
examine how it is done.

MEMORY HIERARCHY
The memory hierarchy system consists of all storage devices employed in a computer, from the slow
but high-capacity auxiliary memory, to the relatively faster main memory, to an even smaller and
faster cache memory. When designing a memory system for a computer, capacity, cost and speed
play a vital role.

Figure 6: Memory Hierarchy

It is important to note that as we go down the hierarchy system:

● Cost per bit decreases


● Capacity of memory increases
● Access time increases
● Frequency of access of memory by processor also decreases
The memory unit is an essential component in any digital computer since it is needed for storing
programs and data. The total memory capacity of a computer can be visualized as a hierarchy of
connected components, as shown in the figure below.

Figure 7: Memory Connections in Component Hierarchy

The Components of the Hierarchy System

1. Registers: These are contained in the CPU. A register is a small piece of memory, usually
with the same number of bits as the word size of the processor concerned. It is
used to hold memory addresses and pieces of data that need to be operated on; a
typical CPU contains several registers. (A detailed discussion follows in the next chapter.)
2. Cache: This is placed between the CPU and the main memory. Cache memory is small, but
can operate at (nearly) the same speed as the CPU. The cache is used for storing
segments of programs currently being executed in the CPU and temporary data
frequently needed in the present calculations.
3. Main memory: Also known as primary memory, this is the part of a computer that
holds data that is being processed.
4. Secondary memory: This provides permanent storage of information and programs.
We will now set the registers aside (until the next chapter) and proceed to explain the
other components.

The Cache Memory


Cache memory is small, but can operate at (nearly) the same speed as the CPU. It is placed between
the CPU and main memory. The cache contains a portion of the main memory. Different portions
may reside in the cache at different times. Through this, it speeds up the process of communicating
data to and from the main memory. It guesses what data is likely to be used by the CPU next and fetches
it before it is actually requested.

The cache memory contains a copy of a portion of the main memory. When the processor attempts
to read a word from memory, a check is made to determine whether the word is in the cache; if so, the
word is delivered to the processor. If not, a block of main memory, consisting of some fixed number
of words, is read into the cache and then the word is delivered to the processor. Because of the
principle of locality, when a block of data is fetched into the cache to satisfy a single
memory reference, it is likely that there will be future references to the same memory location or
to the other words in the block.

Figure 8: Position of the Cache Memory

As you might already be aware, a program is designed as a set of instructions, to be run by the
CPU. When you run a program, these instructions have to make their way from the primary storage
to the CPU. This is where the memory hierarchy comes into play.
The data first gets loaded up into the Main Memory and is then sent to the CPU. CPUs these days
are capable of carrying out a gigantic number of instructions per second. To make full use of its
power, the CPU needs access to superfast memory. This is where the cache comes in. The Data is
taken from the main memory and sent to the cache. The cache then handles the back and forth
of data with the CPU.

The Levels of Cache: L1, L2, and L3


CPU cache is divided into three main ‘Levels’, L1, L2, and L3. The hierarchy here is again
according to the speed, and thus, the size of the cache.

● L1 (Level 1) cache: This is the fastest memory that is present in a computer system. In
terms of priority of access, L1 cache has the data the CPU is most likely to need while
completing a certain task. As far as the size goes, the L1 cache typically goes up to 256KB.
However, some really powerful CPUs are now taking it close to 1MB.

● L2 (Level 2) cache: This is slower than L1 cache, but bigger in size. Its size typically
varies between 256KB to 8MB, although the newer, powerful CPUs tend to go past that.
L2 cache holds data that is likely to be accessed by the CPU next. In most modern CPUs,
the L1 and L2 caches are present on the CPU cores themselves, with each core getting its
own cache.

● L3 (Level 3) cache: This is the largest cache memory unit, and also the slowest one. It can
range between 4MB to upwards of 50MB. Modern CPUs have dedicated space on the CPU
die for the L3 cache, and it takes up a large chunk of the space.
The Main Memory
Main memory is intimately connected to the processor, so moving instructions and data into and
out of the processor is very fast. The Main memory is used to store programs and data that are
currently used by the CPU. If a program is to be run, it is first loaded into main memory, because
main memory is fast.
Main memory is highly volatile. Its content changes frequently, as different programs are run at
different times. When the power is off, all data in main memory are lost. So, main memory is not
for long-term storage. Main memory is sometimes called RAM. RAM stands for Random
Access Memory. "Random" means that the memory cells can be accessed in any order.

The Organization of Main Memory

Main memory can be viewed as a pile of cells, where each cell is formed by a fixed number
of 1-bit memories. The size of a cell is then X bits, if it is formed by
X 1-bit memories. If a memory has Y cells, then its capacity is X × Y bits. That is, it can hold X ×
Y bits of information. The figure below depicts this way of looking at a memory.

Figure 9: An overview of the structure of an electronic memory
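As a toy illustration of the X × Y capacity formula (all figures below are invented), in Python:

```python
# Capacity of a memory with Y cells of X bits each.
cell_size_bits = 8          # X: most electronic memories use 8-bit cells
number_of_cells = 65_536    # Y: addresses 0 .. 65535

capacity_bits = cell_size_bits * number_of_cells
print(capacity_bits)        # 524288 bits
print(capacity_bits // 8)   # 65536 bytes, i.e. 64 KB
```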

Now, what is the size of a cell? To put this question differently, how many bits of information can
a cell hold? In theory, a cell can be of any size. But in reality, most electronic memories have 8-
bit cells.
Memory Addresses
Each cell in the main memory is assigned a unique address. The address is in the form of a
nonnegative number. The numbering starts from 0. This gives us the means of selecting a memory
cell. It also allows us to think of cells occurring in a particular order, so that we can talk of the next
cell and the previous cell.

Random Access
Each cell in the memory can be accessed directly using its address; the access need not be done in
any sequential order, and the time taken to access any cell is constant. Because we can access main
memory in any (random) order, it is called Random Access Memory.
Access Time
Access time is the time taken to get ready to read some information from a certain part of memory,
or to write some information to that part. In the case of main memory, access time is the time taken
between the moment when the CPU wants to read/write a cell and the moment when the cell is
activated. From section ‘The organization of main memory’ above, we know that this time must
be very short, because it depends mainly on how fast electronic signals travel in wires (e.g. address
lines), and electronic signals travel at nearly the speed of light (3.0×10^8 meters/second). Typical
access time in main memory is about 60 ns (nanoseconds, 10^-9 seconds).
Transfer Rate
Transfer rate is the amount of information that can be transferred per second between one place
and another. In the case of main memory, this refers to the rate of information exchange between
main memory and the CPU. If the CPU can read X cells in a second, and each cell has Y bytes,
then the transfer rate will be X × Y bytes per second. Reading a cell involves activating the cell (by sending
signals along the address lines), and receiving the data in the cell (during which the data signals
travel along the data lines to the CPU). This process is fast, since it is about electronic signals
travelling within wires. Many RAMs support transfer rate on the scale of 100MB/second. But we
will see that even this is not fast enough when we study how cache memory works.
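A toy calculation of this rate, with invented figures, purely for illustration:

```python
# Transfer rate = X cells read per second times Y bytes per cell.
cells_per_second = 25_000_000    # X (made-up figure)
bytes_per_cell = 4               # Y (made-up figure)

transfer_rate = cells_per_second * bytes_per_cell    # bytes per second
print(transfer_rate / 1_000_000, "MB/second")        # 100.0 MB/second
```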

The Secondary Memory

Figure 10: Secondary Memory

If we need to store large amounts of data or programs permanently, we need a cheaper and
permanent memory. Such memory is called secondary memory.

Characteristics of Secondary Memory

These are some characteristics of secondary memory, which distinguish it from primary memory
(main memory) −

● It is non-volatile, i.e. it retains data when power is switched off

● It has large capacity, to the tune of terabytes

● It is cheaper as compared to primary memory

There are two types of secondary memory – fixed and removable.

Hard Disk Drive

Hard disk drive is made up of a series of circular disks called platters arranged one over the other
almost ½ inches apart around a spindle. Disks are made of non-magnetic material like aluminum
alloy and coated with 10-20 nm of magnetic material.

Figure 11: Hard Disk Drive


Standard diameter of these disks is 14 inches and they rotate with speeds varying from 4200 rpm
(rotations per minute) for personal computers to 15000 rpm for servers. Data is stored by
magnetizing or demagnetizing the magnetic coating. A magnetic reader arm is used to read data
from and write data to the disks. A typical modern HDD has capacity in terabytes (TB).

Types of Drives

1. SATA HDD (Serial Advanced Technology Attachment)
2. SSD (Solid State Drive)
3. NVMe SSD (Non-Volatile Memory Express)

CD Drive

CD stands for Compact Disk. CDs are circular disks that use optical rays, usually lasers, to read
and write data. They are very cheap as you can get 700 MB of storage space for less than a dollar.
CDs are inserted in CD drives built into CPU cabinet. They are portable as you can eject the drive,
remove the CD and carry it with you. There are three types of CDs −
● CD-ROM (Compact Disk – Read Only Memory) − The data on these CDs is recorded
by the manufacturer. Proprietary software, audio or video are released on CD-ROMs.

● CD-R (Compact Disk – Recordable) − Data can be written by the user once on the CD-R.
It cannot be deleted or modified later.
● CD-RW (Compact Disk – Rewritable) − Data can be written and deleted on these optical
disks again and again.
DVD Drive
DVD stands for Digital Versatile Disc (also called Digital Video Disc). DVDs are optical devices that can store 15 times the data
held by CDs. They are usually used to store rich multimedia files that need high storage capacity.
DVDs also come in three varieties – read only, recordable and rewritable

Figure 12: DVD drive


Pen Drive

Pen drive is a portable memory device that uses solid state memory rather than magnetic fields or
lasers to record data. It uses a technology similar to RAM, except that it is non-volatile. It is also
called USB drive, key drive or flash memory.

Figure 13: pen drive


Blu Ray Disk

Blu Ray Disk (BD) is an optical storage medium used to store high definition (HD) video and other
multimedia files. BD uses a shorter wavelength laser compared to CD/DVD. This enables the writing
arm to focus more tightly on the disk and hence pack in more data. BDs can store up to 128 GB of
data.

CHAPTER 4

THE CENTRAL PROCESSING UNIT.

Alternately referred to as a processor, central processor, or microprocessor, the CPU


(pronounced sea-pea-you) is the central processing unit of the computer. A computer's CPU
handles all instructions it receives from hardware and software running on the computer. CPU is
the component that actually executes instructions. It typically performs arithmetic and logical
calculations and controls the operations of the other elements of the system. Because of this, some
call it the brain of the computer. To be able to execute a program, the CPU needs to access the memory
and to deal with instructions and data.

To control instructions and data flow to and from other parts of the computer, the CPU relies
heavily on a chipset, which is a group of microchips located on the motherboard. Some computers
utilize two or more processors. These consist of separate physical CPUs located side by side on
the same board or on separate boards.

Figure 14: The CPU Chipset

Components of the CPU

A typical CPU will consist of the following components (see Figure 15):
1. An Arithmetic/Logic Unit;
2. Control unit;
3. Registers;
4. Internal bus;

The Arithmetic and Logical Unit


The main task of the CPU is to process information. The arithmetic and logic unit (ALU) is used
to perform the computer‘s data-processing functions. It is the heart of the CPU. The ALU is a
sophisticated logic circuit which is made up of numerous logic gates. It performs arithmetic
operations, such as addition, subtraction, multiplication and division; it also performs Boolean
logical operations, such as AND, OR, NOT, and other logical operations such as comparing two
numbers to see if one is greater than the other, comparing two letters to see whether they are the
same, and so on.

Figure 15: Components of the CPU

Register

A register is a small piece of electronic (or semiconductor) memory, which is used to hold certain
information temporarily. Typical registers include:

● Memory Address Registers (MAR): Holds the address of the cell that the CPU is going
to access.
● Memory Buffer Register (MBR): Contains the instruction or data just read from memory,
or data that is about to be written into memory.
● Instruction Register (IR): Holds the instruction just fetched from memory.
● Program Counter (PC): Contains the address of the next instruction in memory, thus
keeps track of current position in a machine-code program in memory.
● Accumulator (AC): Temporarily holds the result of a calculation.
● Control Registers: Used by the control unit (to be discussed below) to control the operations of the CPU, and by privileged operating system programs to control the execution of programs.

Control Unit
Another important part of the CPU is the control unit. It is the portion of the processor that
actually causes things to happen. The control unit issues control signals external to the
processor, such as READ and WRITE, to cause data exchange with memory and I/O modules.
It also issues control signals internal to the processor to move data between registers, to cause
the ALU to perform a specific function and so on. The relationships between registers, the
ALU and the control unit can be described as follows. Data are presented to the ALU in
registers, and the results of operations are stored in registers. The control unit provides signals
that control the operation of the ALU and the movement of data into and out of the ALU.
Internal Bus

A bus is a communication line that connects the elements of a computer together. The CPU is connected to main memory, secondary memory and I/O devices through the system bus. The data bus is connected to the memory buffer register (MBR) and the address bus to the memory address register (MAR). The control bus is connected to the control unit. This connection is
shown in the figure below:

Figure 16: Computer Buses

HOW THE CPU RUNS A PROGRAM

A program residing in the memory unit of a computer consists of a sequence of instructions. These
instructions are executed by the processor by going through a cycle for each instruction. This
cycle is known as the Instruction cycle. In a basic computer, each instruction cycle consists of the
following phases:

1. Fetch instruction from memory.


2. Decode the instruction.
3. Read the effective address from memory.
4. Execute the instruction.
Before we explore how a generic CPU runs a program, let's look at how an instruction is formed.

Instruction Format
Each instruction usually consists of two parts:
● An op-code
● An operand

Figure 17: Instruction Cycle

The op-code indicates what operation is to be performed. Broadly, there are four basic types of operations. These include:

● Transfer of data between the CPU and memory (i.e. read from memory, and write to
memory).
● Transfer of data between CPU and some I/O devices (e.g. read from a device, write to
a device).
● Data processing (i.e. arithmetic and logical operations on data).
● Control: An instruction may specify that the sequence of execution be altered. For
example, an instruction I (1) at address 149 may tell the CPU to execute an instruction
stored at address 182. When I(1) is executed, the PC will be set to 182. Thus, on the next
instruction cycle, the CPU will fetch the instruction at address 182, rather than the
instruction at address 150.

The operand specifies the thing that is to be operated on. An operand is often an address of a cell
where some real data (number, letter, character, color, sound pitch, etc.) is stored.

Figure 18: Instruction format

The figure above shows a simple instruction format. In this example, an instruction is 16 bits long. The first 4 bits are for the op-code, and the remaining 12 bits store one operand (an address in this case).
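To see how such a format can be handled, the Python sketch below packs a made-up 4-bit op-code and a 12-bit address into a single 16-bit number and then extracts them again with bit operations. The op-code value and the address are invented for illustration only.

# Sketch: packing and unpacking a 16-bit instruction with a 4-bit op-code
# and a 12-bit operand (an address). The values are invented examples.
opcode = 0b0011                         # 4-bit operation code
address = 182                           # 12-bit operand: a memory address

instruction = (opcode << 12) | address  # the op-code goes in the top 4 bits

decoded_opcode = instruction >> 12      # decoding: take the top 4 bits
decoded_address = instruction & 0xFFF   # and the bottom 12 bits

print(bin(instruction))                 # 0b11000010110110
print(decoded_opcode)                   # 3
print(decoded_address)                  # 182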

The generic instruction cycle for an unspecified CPU consists of the following stages:
1. Fetch instruction: Read instruction code from address in PC and place in IR. ( IR ←
Memory[PC] )

2. Decode instruction: Hardware determines what the opcode/function is, and determines
which registers or memory addresses contain the operands.

3. Fetch operands from memory if necessary: If any operands are memory addresses, initiate
memory read cycles to read them into CPU registers. If an operand is in memory, not a
register, then the memory address of the operand is known as the effective address, or EA
for short. The fetching of an operand can therefore be denoted as Register ← Memory
[EA]. On today's computers, CPUs are much faster than memory, so operand fetching
usually takes multiple CPU clock cycles to complete.

4. Execute: Perform the function of the instruction. If arithmetic or logic instruction, utilize
the ALU circuits to carry out the operation on data in registers. This is the only stage of
the instruction cycle that is useful from the perspective of the end user. Everything else is
overhead required to make the execute stage happen. One of the major goals of CPU
design is to eliminate overhead, and spend a higher percentage of the time in the execute
stage. A detail on how this is achieved is a topic for a hardware-focused course in
computer architecture.

5. Store result in memory if necessary: If the destination is a memory address, initiate a memory write cycle to transfer the result from the CPU to memory. Depending on the
situation, the CPU may or may not have to wait until this operation completes. If the next
instruction does not need to access the memory chip where the result is stored, it can
proceed with the next instruction while the memory unit is carrying out the write
operation.
An example of a full instruction cycle is provided by the following instruction, which uses memory
addresses for all three operands.

Mul x, y, product
1. Fetch the instruction code from Memory[PC]
2. Decode the instruction. This reveals that it's a multiply instruction, and that the operands
are memory locations x, y, and product.
3. Fetch x and y from memory.
4. Multiply x and y, storing the result in a CPU register.
5. Save the result from the CPU to memory location product.
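The steps above can be imitated in a few lines of Python. The sketch below simulates the fetch-decode-execute cycle for a made-up toy machine; the instruction set (LOAD, ADD, STORE, HALT), the memory layout and the single accumulator are all invented for illustration and do not correspond to any real CPU.

# Toy simulation of the fetch-decode-execute cycle (invented instruction set).
memory = {
    0: ("LOAD", 10),     # AC <- Memory[10]
    1: ("ADD", 11),      # AC <- AC + Memory[11]
    2: ("STORE", 12),    # Memory[12] <- AC
    3: ("HALT", None),
    10: 20,              # data: x
    11: 22,              # data: y
    12: 0,               # data: result
}
pc = 0                   # Program Counter
ac = 0                   # Accumulator

while True:
    instruction = memory[pc]          # 1. fetch: IR <- Memory[PC]
    pc = pc + 1                       #    PC now points at the next instruction
    opcode, operand = instruction     # 2. decode: split op-code and operand
    if opcode == "LOAD":              # 3. fetch operand, 4. execute
        ac = memory[operand]
    elif opcode == "ADD":
        ac = ac + memory[operand]
    elif opcode == "STORE":           # 5. store the result back in memory
        memory[operand] = ac
    elif opcode == "HALT":
        break

print(memory[12])                     # 42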

CHAPTER 5

INPUT/OUTPUT

For a computer to be useful, it must be possible to input the data to be processed and output the
results of the processing. Input and output devices allow the computer system to interact with the
outside world by moving data into and out of the system. An input device is used to bring data into
the system. Some input devices are:
● Keyboard
● Mouse
● Microphone
● Barcode reader
● Graphics tablet

An output device is used to send data out of the system. Some output devices are:
● Monitor
● Printer
● Speaker

External input/output devices are called peripherals. Some peripherals are meant to communicate directly with human users (e.g. a keyboard), others for communicating with attached devices (e.g. a tape drive), and some for communicating with remote devices. There are four main I/O operations:
● Control: tell the device to perform some operation
● Test: Check the status of a device.
● Read: Read data from a device.
● Write: Write data to device.

Input/output devices are usually called I/O devices. They are directly connected to an electronic module inside the system unit called a device controller. For example, the speakers of a multimedia computer system are directly connected to a device controller called an audio card (such as a SoundBlaster), which in turn is connected to the rest of the system.

The device controller generally constitutes a small computer in itself with its own functions and
temporary memory (buffer) for the data that is sent or received. The controller is entirely devoted
to managing the communication with the peripherals via the bus.

Some of these controllers communicate with the CPU as if they were memory addresses (i.e. data
can be LOADED and STORED via the controller). Obviously, in order for this not to interfere
with communication with the main memory, a set of addresses is reserved for the controller.

There are various ways in which a computer can communicate with input/output devices.
We shall briefly look at three possibilities:

1. Programmed I/O

2. Interrupt I/O

3. Direct Memory Access

Programmed I/O

In programmed I/O, the CPU controls the device directly (via its controller). When writing data, the CPU must send the data to the device and repeatedly check to see if the device is ready for
the next piece of data. When reading data, the CPU must check, or poll the device regularly to
see if there is any more data waiting. Although simple, the performance is poor because the CPU
is constantly waiting for the device.
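As a rough illustration of polling, the Python sketch below keeps checking a device's status before each write. The Device class is entirely made up to stand in for a device controller; real programmed I/O works through hardware status registers rather than Python objects.

# Rough sketch of programmed (polled) I/O with an invented Device class.
import random

class Device:
    def is_ready(self):
        # Pretend the device is only occasionally ready for the next item.
        return random.random() < 0.3
    def write(self, item):
        print("device wrote:", item)

device = Device()
for item in ["a", "b", "c"]:
    while not device.is_ready():   # the CPU does no useful work here;
        pass                       # it just keeps polling the device status
    device.write(item)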

Interrupt Driven I/O

An alternative to programmed I/O is for the CPU to go and do something else whilst the device does its work. Communication is supported by the use of interrupts. When a program calls for output,
for example, the relevant data is moved into the buffer of the controller and the controller is
instructed to start the output operation. After completion, the controller sends an interrupt to
notify the CPU that it is ready for further output/action, and the original program can send further
data.

Direct Memory Access I/O

Today's computers generally have controllers that can access main memory directly without the
need for any intervention by the CPU. This is referred to as direct memory access and has the
advantage that efficiency is increased as the CPU can continue its computations while data is
moved in or out of memory.

CHAPTER 6

NETWORKING BASICS

Computer Network

A computer network is a group of computers linked to each other that enables one computer to communicate with another and share resources, data, and applications. In other words, it is the interconnection of multiple devices, generally termed hosts, connected using multiple paths for the purpose of sending and receiving data or media.

● Open systems: A system which is connected to the network and is ready for communication.
● Closed systems: A system which is not connected to the network and cannot be communicated with.

Figure 19: Types of systems in computer networks

Uses of Computer Network

● Resource sharing: Resource sharing is the sharing of resources such as programs, printers, and data among the users on the network, regardless of the physical location of the resource and the user.

● Server-Client model: Computer networking is used in the server-client model. A server is a central computer used to store information and is maintained by the system administrator. Clients are the machines used to access the information stored in the server remotely.

● Communication medium: A computer network behaves as a communication medium among its users. For example, a company with more than one computer may have an email system which the employees use for daily communication. Other network-based applications include WhatsApp, with which one can make phone calls, video calls and send instant messages all over the world.

● E-commerce: Computer networks are also important in business. We can do business over the internet; for example, amazon.com conducts its business transactions over the internet.

COMPONENTS OF COMPUTER NETWORK

Figure 20: Components of computer networks


Major components of a computer network are:

NIC (Network Interface Card)

A NIC is a device that helps the computer communicate with other devices. The network interface card contains a hardware address, which is used to deliver data to the correct destination.

There are two types of NIC: wireless NIC and wired NIC.

● Wireless NIC: All modern laptops use a wireless NIC. In a wireless NIC, the connection is made using an antenna that employs radio wave technology.
● Wired NIC: A wired NIC transfers data over the medium through cables.
Hub

A hub is a central device that splits the network connection among multiple devices. When a computer requests information from another computer, it sends the request to the hub, and the hub distributes this request to all the interconnected computers.

Switches

A switch is a networking device that groups all the devices on the network so that data can be transferred to another device. A switch is better than a hub as it does not broadcast the message over the network, i.e., it sends the message only to the device for which it is intended. Therefore, we can say that a switch sends the message directly from the source to the destination.

Cables and Connectors

A cable is a transmission medium that carries the communication signals. There are three types of cables:

● Twisted pair cable: a high-speed cable that can transmit data at 1 Gbps or more.
● Coaxial cable: resembles a TV installation cable. It is more expensive than twisted pair cable, but it provides a higher data transmission speed.
● Fiber optic cable: a high-speed cable that transmits data using light beams. It provides the highest data transmission speed of the three, and it is also the most expensive.
Router

A router is a device that connects a Local Area Network to the internet. A router is mainly used to connect distinct networks or to connect multiple computers to the internet.

Modem

A modem connects the computer to the internet over an existing telephone line. A modem is not integrated with the computer motherboard; it is a separate part that fits into a slot on the motherboard. Based on differences in speed and transmission rate, modems can be classified into the following categories:

● Standard PC modem or Dial-up modem


● Cellular Modem
● Cable modem

TYPES OF COMPUTER NETWORK

A computer network, as earlier mentioned, is a group of computers linked to each other that enables one computer to communicate with another. Networks can be categorized by their size. These include:

✔ Local Area Network (LAN)


✔ Personal Area Network (PAN)
✔ Metropolitan Area Network (MAN)
✔ Wide Area Network (WAN)

We will now take a brief look at each of these network types

Local Area Network (LAN)

A Local Area Network is a group of computers connected to each other in a small area such as a building or an office. It is used for connecting two or more personal computers through a communication medium such as twisted pair or coaxial cable. It is less costly as it is built with inexpensive hardware such as hubs, network adapters, and Ethernet cables. Data is transferred at a high rate, and a LAN provides relatively high security.

Personal Area Network (PAN)

Thomas Zimmerman was the first research scientist to bring forward the idea of the Personal Area Network. A PAN is used for connecting computing devices for personal use, hence the name "Personal Area Network". It is usually arranged around an individual person, typically within a range of about 10 metres (roughly 30 feet). Personal devices that are used to build a personal area network include laptops, mobile phones, media players and the PlayStation.

There are two types of Personal Area Network:

i. Wired Personal Area Network: a wired PAN is created by using USB.
ii. Wireless Personal Area Network: a wireless PAN is developed simply by using wireless technologies such as Wi-Fi or Bluetooth. It is a short-range network.


Figure 21: LAN and PAN


Metropolitan Area Network (MAN)
A metropolitan area network is a network that covers a larger geographic area by interconnecting different LANs to form a larger network. Government agencies use a MAN to connect to citizens and private industries. In a MAN, various LANs are connected to each other through a telephone exchange line. It has a higher range than a Local Area Network (LAN), and as such a MAN is used for communication between banks in a city, airports, colleges within a city, and even for communication in military zones or cantonments.

Figure 22: Metropolitan Area Network

Wide Area Network (WAN)


A Wide Area Network is a network that extends over a large geographical area such as states or countries. A WAN is a much bigger network than a LAN or MAN; it is not limited to a single location but spans a large geographical area through telephone lines, fibre optic cables or satellite links. The internet is one of the biggest WANs in the world. Wide Area Networks are widely used in business, government, and education.

Figure 23: Wide Area Network

NETWORK TOPOLOGY

Network topology refers to the layout of a network. In other words, it defines the structure of the network, i.e. how all the components are interconnected to each other.

Types of Network Topology

The topologies include: The Bus, Ring, Star, Tree, Mesh and Hybrid topologies.

Bus Topology

The bus topology is designed in such a way that all the stations are connected through a single cable known as the backbone cable. Each node is connected to the backbone cable either by a drop cable or directly. When a node wants to send a message over the network, it puts the message on the backbone cable. All the stations on the network will receive the message whether or not it is addressed to them. The configuration of a bus topology is quite simple compared to other topologies: the backbone cable acts as a "single lane" through which the message is broadcast to all the stations.

Figure 24: Bus topology

Ring Topology

Ring topology is like a bus topology, but with its ends connected. The node that receives a message from the previous computer retransmits it to the next node. The data flows in one direction, i.e., it is unidirectional, around a single continuous loop. There are no terminated ends: each node is connected to the next, and there is no termination point. The data in a ring topology flows in a clockwise direction. The most common access method in a ring topology is token passing. In token passing, a token is used as a carrier: data carried in the token is passed from one device to the next until the destination address matches. Once the token is received by the destination device, it sends an acknowledgment to the sender.

Figure 25: Ring Topology

Star Topology

Star topology is the most popular topology in network implementation. Star topology is an arrangement of the network in which every node is connected to a central hub, switch or central computer. Hubs or switches are mainly used as connection devices in a physical star topology. The central computer is known as the server, and the peripheral devices attached to the server are known as clients.

Figure 26: Star Topology

Tree Topology

Tree topology combines the characteristics of bus topology and star topology. A tree topology is a type of structure in which all the computers are connected with each other in a hierarchical fashion. The top-most node in a tree topology is known as the root node, and all other nodes are descendants of the root node. Only one path exists between any two nodes for data transmission. Thus, it forms a parent-child hierarchy.

Figure 27: Tree Topology

Mesh Topology

Mesh topology is an arrangement of the network in which computers are interconnected with each other through various redundant connections, so there are multiple paths from one computer to another. It does not contain a switch, hub or any central computer acting as a central point of communication. The Internet is an example of a mesh topology. Mesh topology is mainly used for WAN implementations where communication failures are a critical concern, and it is also commonly used for wireless networks.

Figure 28: Mesh Topology

Hybrid Topology

The combination of two or more different topologies is known as a hybrid topology. A hybrid topology is a connection between different links and nodes to transfer data. Note that combining two or more different topologies gives a hybrid topology, whereas connecting two similar topologies does not. For example, if there is a ring topology in one branch of Zenith Bank and a bus topology in another branch, connecting these two topologies will result in a hybrid topology.

Figure 29: Hybrid Topology

CHAPTER 7
OPERATING SYSTEM
INTRODUCTION

In the previous chapters the focus was on computer hardware, but of equal importance is software. Software, as you have seen, is the part of a computer system that determines how computers perform various tasks. While application software is used to perform a particular task, system software controls the low-level interaction with hardware. In this chapter we are going to take a look at one piece of system software that is very special: the operating system.

Figure 30: Typical Devices Managed by the OS

WHAT IS OPERATING SYSTEM (OS)?


An operating system is a layer of software which takes care of technical aspects of a computer's operation. It shields the user of the machine from the low-level details of the machine's operation
and provides frequently needed facilities. Although there is no universal consensus as to what an
operating system should consist of, almost all agree on what it is.

An operating system (OS) is a collection of software that manages computer hardware resources
and provides common services for computer programs. In other words, an Operating System is a
program that controls the execution of user programs and acts as an intermediary between users
and computer hardware. It performs basic tasks, such as recognizing input from the keyboard,
sending output to the display screen, keeping track of files and directories on the disk, and
controlling peripheral devices such as printers. Examples of operating systems include: Android, BSD (Unix family), iOS, Linux, Mac OS X, Microsoft Windows, Windows Phone, and IBM Z/OS (for IBM machines).

TYPES OF OPERATING SYSTEM
There are many types of operating systems, the complexity of which varies depending upon what
type of functions are provided, and what the system is being used for. Operating systems may be
classified by both how many tasks they can perform ‘simultaneously’ and by how many users
can be using the system ‘simultaneously’.

⮚ In terms of the number of users accessing computer:

i. A single-user OS allows only one user access to the computer at a time; earlier OSes that used batch systems are a typical example. Operating systems such as Windows 95, Windows NT Workstation and Windows 2000 Professional are essentially single-user operating systems.
ii. A multi-user OS allows multiple users to access a computer system at the same time. Windows Server and UNIX fall under this category.

⮚ In terms of how many tasks can be performed simultaneously:

i. A single-tasking OS has only one running program at a time.
ii. A multitasking OS allows more than one program to be running at a time, from the point of view of human time scales.

COMPONENTS OF OPERATING SYSTEM


An operating system is basically composed of two parts:

The Kernel: This is also called the core. It provides the most basic level of control over all of the computer's hardware devices. It manages memory access for programs in RAM, determines which programs get access to which hardware resources, sets up or resets the CPU's operating states for optimal operation at all times, and so on. The kernel is not something that can be used directly, although its services can be accessed through system calls.

The Shell: This is the interface between human users and the core, which is why it is also called the user interface. This layer basically wraps the kernel in more acceptable clothes. These days we usually use either a Command-Line Interface (CLI) or a Graphical User Interface (GUI). In a CLI, all user instructions to a program must be typed; the DOS terminal is an example. In a GUI, visual elements like windows, icons, and menus are used for interaction.

SOME BASIC OS FUNCTIONS

Scheduling
In a multi-tasking OS, one fundamental issue is to determine which task should run at any given time. This is because several programs may need to receive input or write output simultaneously, and thus the operating system may have to share these resources between several running programs.

Scheduling is the systematic way that the OS selects a task to run at any given time and decides how long it will execute. The unit of scheduling is a process, which often corresponds to a running program (it is possible for one program to produce many processes). The scheduler is the part of the OS that determines how much time each process will spend executing, and in which order execution control should be passed to processes.
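To illustrate the idea, the Python sketch below imitates a simple round-robin scheduler: each process runs for a fixed time slice and, if unfinished, goes to the back of the queue. The process names, the amounts of work and the time slice are invented for illustration; real schedulers are far more sophisticated.

# Toy round-robin scheduling with an invented set of processes.
from collections import deque

processes = deque([("P1", 5), ("P2", 3), ("P3", 8)])   # (name, time units left)
TIME_SLICE = 2

while processes:
    name, remaining = processes.popleft()    # the scheduler picks the next process
    run = min(TIME_SLICE, remaining)
    print("running", name, "for", run, "unit(s)")
    remaining = remaining - run
    if remaining > 0:
        processes.append((name, remaining))  # not finished: back of the queue
    else:
        print(name, "finished")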

Memory Management
In single-tasking systems, main memory is divided into two parts: one part for the OS, and the other, the user part, holds the program currently executing. In multi-tasking systems, the user part is further divided or partitioned into areas for each of the processes in main memory. The task of dynamically subdividing the user part is known as memory management.

CHAPTER 8
PROGRAM DEVELOPMENT

INTRODUCTION
Application programs are what end users employ to carry out specific tasks, such as word processors, web browsers, and email clients. Our goal in this and the next chapter is to convince you that writing a program is easier than writing a piece of text such as a paragraph or an essay. We can harness the computer to help us solve all sorts of fascinating problems that would otherwise be unapproachable.

Programming, like all engineering tasks, requires a systematic approach to planning and execution. Take, for instance, the Windows operating system, which was built by thousands of engineers across many countries. This chapter delves into the design aspect. The process of developing a computer program consists of six steps:

1. Problem definition/Analysis
2. Program design
3. Coding (Implementation)
4. Testing
5. Documentation
6. Maintenance

These six steps in program development are known as the Software Development Life Cycle
(SDLC).

THE SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC).

1. Problem Definition/Analysis: The first thing that every programmer should try to do, and do right, is to understand the problem the program is to solve. This will help avoid producing the right solution to the wrong problem. The programmer should conduct interviews with the expected users to understand their needs, then write a narrative of what the program should solve, and finally determine what data is to be input and what data is to be output.

2. Program Design: Having identified the problem, the next step is to plan a solution to meet the objectives. This second phase is called the program design stage. A good design should always try to minimize complexity. Some common design approaches are:
o Modular Design: The program designer should try to separate a program into a collection of units; each unit is called a module. It is most desirable if each module is self-contained, and an attempt should be made to minimize interaction between modules. Among the advantages of this approach is that a unit can be designed, coded and tested independently of the remaining units. Hence, program modification will be easier, as changes to one unit do not affect the entire program.
o Flowcharts: One of the most widely used devices for designing computer programs is the flowchart, which graphically represents the logic needed to solve a programming problem. Only a few standard flowchart symbols are necessary to describe any programming problem. A program flowchart represents the detailed sequence of steps needed to solve a problem.

o Pseudo code/Algorithm: An alternative or supplement to flowcharts, pseudo code is a narrative rather than a graphical form of describing structured program logic. It allows a program designer to focus on the logic of the program and not on the details of programming. Pseudo code is an outline of the steps needed to solve a problem; an algorithm, in turn, is a finite computational procedure. It must contain enough specifics about what information or calculations are to be performed within the major "outlined" areas (the modules) that no major points will be missed. A short worked example is given below.
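For instance, pseudo code for averaging a set of exam scores might read: read the scores; add them up; divide the total by how many there are; print the average. A direct Python translation of that outline (with made-up sample scores) could look like this:

# Pseudo code: read the scores, add them up, divide by how many there are,
# then print the average. The sample scores below are made up.
scores = [68, 74, 81, 59]            # read the scores

total = 0
for score in scores:                 # add them up
    total = total + score

average = total / len(scores)        # divide the total by how many there are
print("Average score:", average)     # print the average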

3. Coding: Writing the program is called coding. In this stage the logic developed in program design is used to actually write the computer program. Some qualities of a good computer program are:
● It should be easily readable and understandable by people other than the original programmer. This is accomplished by including comments within the program.
● It should be efficient, increasing the programmer's productivity.
● It should not take excessive time to process, or occupy any more computer memory than necessary.
● It should be reliable, work under all reasonable conditions, and always produce the correct output.

4. Testing/Debugging: Testing means running a program and checking whether it conforms to the specification from the analysis stage, and fixing any errors that are found. Debugging is the process of finding and removing errors, or bugs. There are three types of bugs or errors, illustrated by the short example after this list:
● Syntax error: A syntax error is a violation of the rules of whatever programming language the programmer is writing in.
● Logical error: A logical error occurs when the programmer uses an incorrect calculation or leaves out a computational procedure. Even with a logical error the program may run, but it may fail to produce correct results.
● Run-time error: This is an error that occurs during program execution. A run-time error is caused, for example, by overflow in computations within a program or by an attempt to perform an illegal or undefined operation.
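The following short Python snippets, made up purely for illustration, show one example of each kind of error:

# Syntax error: a violation of the language rules, so the program will not
# run at all. (Shown here as a comment so that the rest of the file still runs.)
#     print("hello"        <- missing closing parenthesis

# Logical error: the program runs, but the calculation is wrong.
scores = [70, 80, 90]
average = sum(scores) / 2            # should divide by len(scores), not 2
print(average)                       # prints 120.0 instead of the intended 80.0

# Run-time error: an illegal operation discovered only while the program runs.
x = 10
y = 0
try:
    print(x / y)                     # dividing by zero raises ZeroDivisionError
except ZeroDivisionError as error:
    print("Run-time error caught:", error)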
5. Maintaining the Program
After a program has been fully tested and has become operational, it will typically require maintenance to modify or update it. Thus program maintenance refers to adding modules to, or removing modules from, an operational program.

CHAPTER 9
PYTHON PROGRAMMING LANGUAGE

INTRODUCTION

Python is a general-purpose, dynamic, high-level, interpreted programming language. It supports the object-oriented programming approach to developing applications. It is simple and easy to learn and provides lots of high-level data structures. Python is a versatile scripting language, which makes it attractive for application development.

PYTHON FEATURES:

Python provides lots of features that are listed below.


● Easy to Learn and Use: Python is easy to learn and use. It is a developer-friendly, high-level programming language.
● Expressive Language: Python is more expressive, i.e. more understandable and readable.
● Interpreted Language: Python is an interpreted language, i.e. the interpreter executes the code line by line. This makes debugging easy and thus suitable for beginners.
● Cross-platform Language: Python can run equally well on different platforms such as Windows, Linux, UNIX and Macintosh. So, we can say that Python is a portable language.
● Free and Open Source: Python is freely available at its official web address, and the source code is also available. Therefore it is open source.
● Object-Oriented Language: Python supports object orientation, i.e. the concept of defining programs in terms of objects and classes.
● Large Standard Library: Python has a large and broad standard library and provides a rich set of modules and functions for rapid application development.

● GUI Programming Support: Graphical user interfaces can be developed using Python.
● Integrated: It can be easily integrated with languages like C, C++ and Java.

Python Integrated Development Environment:


This is the first thing you will need to write your Python programs. You can download a mobile version or a Windows version of the Python interpreter to execute your program. Python can be run from a Graphical User Interface (GUI) environment as well, if you have a GUI application on your system that supports Python.

Let us now consider some programs written in the Python language.

A PROGRAM TO PRINT A STATEMENT

CODE: 0

print("Welcome to csc1301")
print("Learning python is fun!")

OUTPUT

Welcome to csc1301
Learning python is fun!

Let us consider some simple arithmetic programs in Python, such as addition, subtraction, multiplication and division of any two given numbers, say X and Y.

CODE: 1

# EXAMPLE: SUM OF TWO NUMBERS


1. x = int(input("Enter a value for x: "))
2. y = int(input("Enter a value for y: "))
3. sum = x+ y
4. print(sum)

OUTPUT:

Enter a value for x: 20


Enter a value for y: 20
40

CODE REVIEW: 1

In this program, we created two arbitrary storage locations (variables) x and y.

x is a variable whose input requires an integer data type (int). input prompts the interpreter to ask you for a value. That is why, when the program runs, its first output is Enter a value for x: The text in quotes is immaterial; it is echoed back exactly as written, meaning you can even write it in your own local dialect, e.g. "Bani balu din x" written in the Hausa language. Our expected output would then be: Bani balu din x:
You should not be dogmatic about the use of x and y when naming your variables either. Any other letter or word is equally acceptable; it just has to be meaningful. For instance, in place of x and y, I might decide to rewrite the program in this format and it works just fine.

1. FirstNum = int(input("Shigar da FirstNum: "))


2. SecondNum = int(input("Second Number Please: "))
3. sum = FirstNum+ SecondNum
4. print(sum)

New output:
Shigar da FirstNum: 30
Second Number Please: 40
70

In line 3, we created another variable, sum. Don't forget it doesn't always have to be "sum"; you can use any meaningful name in its place, such as ADD, RESULT, ANSA etc. What you need to know is that this is where the processing (the arithmetic operation) is done. With sum = x + y you are simply saying: get the value of x from the variable x, add (+) the value of y you have gotten from the variable y, then store the result in the variable sum. Isn't this fun? Just a simple command and there we go. In the rewritten code we have it thus:
sum = FirstNum + SecondNum, i.e. the value of FirstNum plus the value of SecondNum should be kept in sum.

Line 4, print(sum), displays the value of sum, which is the added value of x and y.

Code: 2

# EXAMPLE: DIVISION OF TWO NUMBERS


1. F = float(input("Enter the first Number: "))
2. S = float(input("supply the value for the second: "))
3. Result = F/S
4. print(Result)

OUTPUT

Enter the first Number: 3


Supply the value for the second: 4
0.75

CODE REVIEW: 2
● In this program, we created two arbitrary storage locations (variables) F and S. F is a variable whose input requires a float data type (float). input prompts the interpreter to ask you for a value. That is why, when the program runs, its first output is Enter the first Number: A closer look at S shows that the same applies.
● Result is a variable that keeps the calculated value of F divided by S. We therefore expect Result to also store a float data type, even though this is not explicitly defined.
● print(Result) and print(result) refer to two entirely different variables, because Python is case-sensitive. The first, print(Result), will return 0.75 as you have seen in the output of the program above, while print(result) returns an error message saying the variable is not defined. So we need to be extremely careful while naming and calling a variable.
A closer look at the two examples of code shows that we can always improve them for clarity. A float was used in code 2 instead of int because, in division, we expect a decimal result, e.g. 0.3, 0.78, 2.44 etc. Let us rewrite the code for the division.

● We can modify the division code for more clarity. The rewritten code, compared to the first, does some conversion from the float data type to string to produce a better and clearer output. A closer look at this modified code shows more interactivity.

CODE 3

# EXAMPLE: DIVISION OF TWO NUMBERS MODIFIED.

1. f = float(input("Enter the first Number: "))


2. s = float(input("supply the value for the second: "))
3. f_str=str(f)
4. s_str=str(s)
5. Result = f/s
6. Result_str = str(Result)
7. print("when S=",s_str +" divides F=",f_str + " the Result is:", Result_str)

OUTPUT

Enter the first Number: 8
supply the value for the second: 3
when S= 3.0 divides F= 8.0 the Result is: 2.6666666666666665

Try me Session:

⮚ Pick up your Android phone, download a Python interpreter and try code 0, code 1, code 2 and code 3.
⮚ Write a Python program to subtract two numbers. Share the process and the result with your colleagues, just to have fun!
⮚ Write a Python program to multiply any two given numbers. Please ensure you share the process and the result with your colleagues.

String Manipulation program (variables and value): Variables are names (identifiers) that map
to objects. A namespace is a dictionary of variable names (keys) and their corresponding objects
(values).

CODE: 4
## EXAMPLE: strings

1. hi = "Hello there"
2. name = "ana"
3. greet = hi + name
4. print(greet)
5. greeting = hi + " " + name
6. print(greeting)
7. silly = hi + (" " + name)*3
8. print(silly)

Output Code 4:
Hello thereana
Hello there ana
Hello there ana ana ana

CODE: Review 4
Let's explain the program line by line to get a full understanding of variables, values and the print statement.
1. hi is a variable and it holds/stores "Hello there" as its value
2. name is a variable and it stores "ana" as its value
3. greet is a variable and it references the contents of hi and name joined together
4. print(greet) displays the content of the variable greet, which is "Hello thereana"; notice there is no space, because nothing was placed between hi and name
5. here we assigned to the variable greeting the content of hi, then + (which indicates concatenation), then " " (an empty quotation that puts a space between hi and name)
6. print(greeting) displays the contents of the variable greeting, which is "Hello there ana"
7. we created the variable silly and assigned to it the content of hi ("Hello there") concatenated with three repetitions of a space followed by name
8. print(silly) displays Hello there ana ana ana

INTERACTIVE PROGRAM USING PYTHON

Let us consider a program that takes input from users and returns output based on that input; in other words, an interactive program. For example, a class lecturer may request the following information from a student: the student's name, age and department.

Normal flow will be:


Lecturer: what is your name?
Students’ Response: My Name is Aliyu Isah
Lecturer: How old are you?
Students’ Response: 20
Lecturer: Which department are you?
Students’ Response: Computer science

Observations: we notice from the conversation between the lecturer and the student that we are basically dealing with two kinds of data:
1. String values: the name and the department
2. A numeric value: the age (read as a float in the code below)

From this we now know we have three variables, Name, Age and Department, each of which stores the corresponding value.

CODE 5

0. # EXAMPLE: Lecturer students’ conversation


1. Name = str(input("What is your Name Please? "))
2. Age= float(input("How old are you? "))
3. Department= str(input("Which Department? "))
4. Age_str=str(Age)
5. print("Thanks Mr/Mrs=",Name +" you are =",Age_str + " Years old. Good to have
you in ", Department +" Department. ")

Output 5
What is your Name Please? Idris Baba Abdullahi
How old are you? 27.9
Which Department? Computer Science
Thanks Mr/Mrs= Idris Baba Abdullahi you are = 27.9 Years old. Good to have you
in Computer Science Department.

Try me Session:

● Pick up your Android phone, download a Python interpreter and try code 4 and code 5.
● Explain this line of code: "Age_str=str(Age)". Why do we need it? Delete this line from the version you have rewritten to see the error message.
● Write a Python program to find out from your friend his HostelName, BlockName, RoomNumber and TotalRoomOccupants.

The output of your program should be in this format:


“I am in HostelName BlockName in room RoomNumber we are TotalRoomOccupants
students in the room thanks”
