FP Chapter-1

Chapter 1 introduces the fundamentals of computer science, focusing on computational problem-solving, algorithms, hardware, and software. It emphasizes the importance of algorithms in efficiently solving problems and the roles of computer hardware and software in executing these solutions. By the end of the chapter, readers will understand key concepts such as binary representation, operating systems, and the distinction between system and application software.

Chapter 1

Introduction

Today, we embark on a fascinating exploration of computer science, starting with its very
core: computational problem-solving. In this chapter, we will dig into the essential concepts and
techniques that enable computers to tackle complex challenges with ease and efficiency. Our
journey begins with a closer examination of computer algorithms, the lifeblood of computational
problem-solving, and their application in practical contexts.
Next, we will venture into the world of computer hardware, where we will investigate binary representation and the role of operating systems in managing computer resources. Here, we will gain a deeper understanding of how computer hardware functions and how it supports the execution of computer programs.
As we continue our odyssey, we will turn our attention to computer software, focusing on
the critical aspects of syntax, semantics, and program translation. Through this lens, we will
examine how software developers craft programs that can be executed by computer hardware,
and how compilers and interpreters facilitate communication between human programmers and
machines.
Lastly, we will wrap up our chapter by demonstrating the process of computational
problem-solving in action, using the versatile Python programming language as our guide. By
the end of this chapter, you will possess a solid grasp of the fundamentals of computer science
and be well-prepared to embark on your own adventures in this captivating field. So, let us
embark on this thrilling journey together!
Upon completing this chapter and the associated exercises, you will be able to:
1. Demonstrate an understanding of computational problem-solving and its essence.
2. Define a computer algorithm and differentiate it from related concepts.
3. Identify and describe the key components of digital hardware.
4. Illustrate the significance of binary representation in digital computing.
5. Describe the functions and importance of operating systems in computing.
6. Elaborate on the fundamental concepts underlying computer software.
What is Computer Science?
Computer Science is a multidisciplinary field that encompasses various areas of study.
It involves the theory, design, development, and application of computers and
computational systems. This field spans a broad range of subjects, including software
engineering (large software systems design and implementation), database management,
computer networks, computer graphics, computer simulation, data mining, information
security, programming language design, systems programming, computer architecture,
human-computer interaction, robotics, artificial intelligence, and more.
Computer science is like a mighty sword, and programming is but one of its many blades.
With this sword, we can conquer the realms of computational problem-solving, slicing
through even the most daunting of challenges. And, just as a skilled warrior must master
many weapons, a true computer scientist must be proficient in a wide range of techniques
and tools.
So, what is computation? Computation is simply a series of steps that can be systematically followed to produce the answer to a certain type of problem. It is the process of taking in input, processing it, and producing output. Think of it like a recipe for cooking a delicious feast – you follow the steps, and voila! Out comes a tasty solution to your problem.
And, just as a chef must choose the right ingredients and cooking methods to create a
culinary masterpiece, a computer scientist must select the appropriate algorithms and
data structures to solve a computational problem effectively. We will dig deeper into the
wondrous world of algorithms and data structures in due time.
The core concept of computer science revolves around computational problem-solving.
Computation is characterized by the notion of an algorithm, which can be defined as a
series of systematic steps utilized to produce solutions to specific types of problems.

Computational Problem Solving


The essence of computational problem-solving can be understood through the process of
representing and solving problems using algorithms. In order to solve a problem
computationally, we need two essential components: a representation that captures all the
relevant aspects of the problem, and an algorithm that utilizes the representation to find a
solution.
Let's take the example of the "Man, Cabbage, Goat, Wolf" problem, where a man needs to
transport a cabbage, a goat, and a wolf across a river, but his boat can only carry one of them
at a time. The challenge is to find a sequence of steps that safely brings all the items to the other
side of the river without the goat eating the cabbage or the wolf eating the goat.
To computationally solve this problem, we use abstraction, which means representing
only the relevant details while omitting irrelevant ones. In this case, the relevant information is
the location of each item at each step, which defines the state of the problem.
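To make the idea of a problem state concrete, here is a minimal Python sketch of one possible representation. The tuple layout and function names are illustrative assumptions, not part of the original problem statement:

    # State of the Man, Cabbage, Goat, Wolf problem as a tuple of
    # bank locations (0 = near bank, 1 = far bank).
    START = (0, 0, 0, 0)   # (man, cabbage, goat, wolf) all on the near bank
    GOAL  = (1, 1, 1, 1)   # everything safely on the far bank

    def is_safe(state):
        """A state is unsafe if the goat is left with the cabbage,
        or the wolf is left with the goat, without the man present."""
        man, cabbage, goat, wolf = state
        if goat == cabbage and man != goat:   # goat eats cabbage
            return False
        if wolf == goat and man != goat:      # wolf eats goat
            return False
        return True

An algorithm to solve the problem would then search through sequences of boat trips, keeping only those that pass this safety check.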
What Is an Algorithm?
An algorithm is a step-by-step computational method used to solve general problems by
providing a series of instructions that a computer can execute. It is a precise set of rules or
procedures that lead to the desired solution for a given problem instance. Algorithms are not
tailored to specific instances of a problem but offer a general approach to solving similar
problems.
The term "algorithm" originates from Al-Khwarizmi, a ninth-century Arab
mathematician, who worked on written processes to achieve specific goals. In computer
science, algorithms play a central role as they enable computers to carry out computations
efficiently and reliably. High-speed computers can consistently follow and execute the
instructions provided by an algorithm, resulting in effective computation.
It's important to note that the quality of computation performed by a computer heavily
relies on the underlying algorithm. Understanding and designing effective algorithms are
crucial for determining what can be efficiently programmed and executed by computers. Thus,
the study of algorithms is fundamental to computer science, allowing us to develop solutions
to a wide range of problems and optimize the performance of computational systems.

The following example illustrates a simple algorithm for putting a book in a box.


1. Open the box.
2. Pick up the book.
3. Put the book inside the box.
4. Close the box.
For example, an algorithm for the sum of two numbers can be written as:
1. Start.
2. Read numbers n1 and n2.
3. Compute sum = n1 + n2.
4. Write "the sum is", sum.
5. Stop.
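As a preview of the Python used later in this chapter, here is a minimal rendering of the algorithm above; the prompt strings are illustrative:

    # Read two numbers, compute their sum, and write the result.
    n1 = int(input("Enter the first number: "))
    n2 = int(input("Enter the second number: "))
    total = n1 + n2
    print("the sum is", total)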
An algorithm can be written in the following two ways:
1. Pseudo code
2. Flow chart

Pseudo code
Pseudo code is used by almost all professional programmers. It is easy to understand and is used to depict the design of an algorithm.
For example, the pseudo code for the sum of two numbers can be written as:
a) Read a, b
b) Add the two numbers: sum = a + b
c) Write "the sum is", sum
d) End

Flow Charts
A flow chart is a simple and visual diagram used to represent the sequence of
operations or steps involved in obtaining a solution for a specific problem or process. It allows
individuals to understand the exact sequence of events that a product or service follows
during its execution. Flow charts are highly effective in visualizing and comprehending the
flow of a process. With just a quick glance at a flow chart, one can gain a clear understanding
of the steps involved and the order in which they occur.
In a flow chart, the flow of data or the sequence of actions is represented by arrows,
connecting different shapes or symbols that represent each step of the process. These symbols
typically include rectangles for actions, diamonds for decision points, and ovals for the start
and end points of the process.
Flow charts serve as graphical representations of algorithmic solutions, enabling users
to easily follow the logical flow of a problem-solving procedure. They are widely used in various
fields, including computer science, software development, project management, and business
analysis, to depict complex processes and aid in their understanding and optimization.
Flow charts are also known as process maps that can be used to identify:
a) Flow of information
b) Number of steps in a process
c) Branches in a process
d) Inter-dependent operations
Flow Chart Symbols
1. Start/Stop: Every flow chart has a starting point and a terminating point. The symbol used for both the starting and terminating points is a rounded rectangle, a rectangle with round corners. It is called a 'terminal'.

2. Input/Output: Every time you take an input from a user or return an output to the user, an input/output symbol is used in the flow chart. The symbol used for both input- and output-related actions is a parallelogram.

3. Process: A processing instruction is shown with a rectangular box in the flow chart; this rectangle is called the process symbol.

4. Decision Symbol: In a flow chart, a decision symbol is used for answering questions in the form of either true/false or yes/no. Please note that each answer can lead you down a different path in the flow chart: a 'yes' to a question can take you along one path, and a 'no' to the same question can generate a completely different path.

5. Flow Lines: Flow lines depict the direction of flow in a flow chart. There are four types of flow lines; they can point left, right, up, or down.

6. Connector: As the name suggests, a connector connects. It joins different steps of a flow chart that are on different pages and gives a sense of continuation. It is generally used in extremely complex flow charts and is denoted by a small circle.

[Figures: the three basic control structures (sequential, selection, and repetition).]

Guidelines for Drawing Flow Charts

● Firstly, describe the process to be charted.
● Start with a trigger event. For example, 'count your money' can be the trigger event that starts a counting process.
● Usually, the direction of flow of a process is from left to right or top to bottom.
● Please note that only one flow line should come out of a process symbol. Likewise, only one flow line should enter a decision symbol; however, two or three flow lines can leave the same decision symbol. For example, a decision on the question 'do you have more than 100?' leads to two flow lines, one representing 'yes' and the other 'no'.
● Only one flow line is used in conjunction with a terminal symbol.
● It is important to ensure that a flow chart has a logical start and end. A flow chart can have only one start terminal; however, it can sometimes lead to more than one end terminal.
● It is also important to stop a flow chart at a logical conclusion.
EXAMPLES OF ALGORITHMS AND FLOW CHARTS

Examples of Algorithms
A. Write an algorithm to log in to your Gmail account.
1. Go to www.gmail.com.
2. Enter your email id and password.
3. Click the Sign in button.

Draw a flow chart to log in to your Gmail account. [Flow chart figure omitted.]


Write an algorithm to multiply two numbers, 5 and 6.
1. Start.
2. Read the two numbers 5 and 6.
3. Multiply the two numbers: mul = 5 * 6.
4. Write "the multiplication is", mul.
5. Stop.
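A minimal Python rendering of this algorithm (variable names are illustrative):

    # Multiply two fixed numbers and write the result.
    a, b = 5, 6
    mul = a * b
    print("the multiplication is:", mul)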

Draw a flow chart to multiply two numbers 5 and 6. [Flow chart figure omitted.]


Computer Hardware

Computer systems have become an essential part of our life. Most of our work is done
with the help of
computer system in a fast and efficient manner. Hardware refers to the tangible objects that
can be run
using software. Software refers to a set of instructions to the computer. Without hardware,
software cannot
work and vice versa. For example, a car without a driver is like hardware without software.
Software tells
hardware what to do and how to do it. To reiterate, the computer system is made up of two
major components:
Hardware and Software that are essential for functioning of the system.

Hardware
Hardware comprises the physical components of the computer system. These components include input devices, the Central Processing Unit (CPU), primary storage, output devices, and auxiliary storage devices.

1. Input Devices: Devices such as keyboards that are used to enter programs and data. Mouse and audio input devices also fall into this category.
2. CPU: It processes all the instructions given to the computer and is also used for doing arithmetic calculations and comparisons, and for controlling the movement of data.
3. Primary Storage: The main memory of the computer system. In primary storage, programs and data are stored temporarily for processing. The data in primary storage is erased when the computer is turned off.
4. Output Devices: Devices such as a monitor or printer used to get the output.
5. Auxiliary Storage: Programs and data are stored permanently in auxiliary storage. It is also known as secondary storage and is used for both input and output. This storage is very useful as the data remains stored even when the computer is turned off.

Fundamental Hardware Components

The central processing unit (CPU) is the “brain” of a computer system, containing
digital logic circuitry able to interpret and execute instructions. Main memory is where
currently executing programs reside, which the CPU can directly and very quickly access.
Main memory is volatile; that is, the contents are lost when the power is turned off. In
contrast, secondary memory is nonvolatile, and therefore provides long-term storage of
programs and data. This kind of storage, for example, can be magnetic (hard drive), optical
(CD or DVD), or nonvolatile flash memory (such as in a USB drive). Input/output devices
include anything that allows for input (such as the mouse and keyboard) or output (such as a
monitor or printer). Finally, buses transfer data between components within a computer
system, such as between the CPU and main memory.
Computer Software
What Is Computer Software?

Computer software is a set of program instructions, including related data and documentation, that can be executed by a computer. This can be in the form of instructions on paper or in digital form. While system software is intrinsic to a computer system, application software fulfills users' needs, such as a photo-editing program. We discuss the important concepts of syntax, semantics, and program translation next.

The first computer programs ever written were for a mechanical computer designed by Charles Babbage in the mid-1800s. The person who wrote these programs was a woman, Ada Lovelace, a talented mathematician. Thus, she is referred to as "the first computer programmer." This section discusses fundamental issues of computer software.

Computer software is a collection of programs used to manage the entire file system of the computer. It is also necessary for the running of computer hardware; the working of the computer hardware depends on the computer software. Computer software is classified into two categories, namely system software and application software.
1. System Software: The system software provides the interface between the user and the hardware (the components of the computer). It also manages the system resources, enabling the working of all hardware components (hard disk, RAM, CD drive, etc.) of the computer. Computer hardware resources are managed through this system software with the help of programs.
These programs fall into the following three types:
• Operating System: It provides the interface between the user and the computer hardware, manages all files and folders, and provides ease of access to the database. The operating system makes the computer perform efficiently.
• System Support Software: It provides all the services of the operating system and system utilities. For example, a disk format program is a system utility made for formatting storage. Other services include data encryption and bit locking of storage devices.
• System Development Software: It works as a language translator that converts program language to machine-level language for the debugging and execution of programs.

2. Application Software: The application software runs under the system software. It helps the user solve problems. It can be further classified into general-purpose software and application-specific software.
• General-Purpose Software: Software meant to be used for more than one application. For example, a word processor.
• Application-Specific Software: As the name suggests, software generally used for a specific, intended purpose. For example, a general account ledger used by accountants for managing accounts.
Examples of application software include:
a) Microsoft Internet Explorer
b) VLC Media Player
c) Adobe Reader X

OPERATING SYSTEMS
An operating system is the software environment in which programs run. Most operating systems are described as a combination of the software and the underlying hardware. The operating system works as an interface between the hardware and the user. It controls file and database access, besides providing the interface to communication systems such as the internet protocol. The various hardware and software components can only work together through the operating system. It is the mother of the computer, without which the computer is nothing more than a blank box. The functioning of every component of the computer depends on the operating system.
Some commonly used operating systems are Windows 98, Windows Server 2000, Windows XP, Windows Vista, Linux, Ubuntu, UNIX, Macintosh (for Apple computers), Windows 7, and Windows 8.
The main functions of an operating system include:
1. The main objective of the operating system is to ensure the efficient working of the computer system and to drive the various hardware devices.
2. The operating system performs basic tasks, such as taking input from the keyboard, displaying output on the screen, managing files and operations on files on disk drives, and managing other devices including the keyboard, mouse, and printers.
3. The operating system can enable users to do multitasking. Multitasking refers to the situation where two or more programs can run simultaneously on a single operating system.
4. The operating system also allows users to do multithreading. Multithreading refers to the situation where two or more parts of a single program can run concurrently on a single operating system (see the sketch after this list).
5. Users can interact with the operating system with the help of commands.
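To make multithreading concrete, here is a minimal Python sketch using the standard threading module; the function and thread names are illustrative:

    import threading
    import time

    def count_down(name, n):
        # One part of the program: counts down, pausing briefly each step.
        for i in range(n, 0, -1):
            print(f"{name}: {i}")
            time.sleep(0.1)

    # Two threads of the same program run concurrently.
    t1 = threading.Thread(target=count_down, args=("thread-1", 3))
    t2 = threading.Thread(target=count_down, args=("thread-2", 3))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

Running this interleaves the output of the two countdowns, showing two parts of a single program executing concurrently.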

PROGRAMMING LANGUAGES

A computer language is used to make a computer understand what the user wants to say. When a user writes a program, he/she uses a computer language.

A program, written in a programming language, is a set of instructions by which the computer comes to know what is to be done. A programming language is a coding language used by programmers to write the instructions that a computer can understand.

There are three types of computer languages:


• High-level Language
• Assembly Language
• Machine Language

High-level Language
Symbolic languages are very tedious to work with because each machine instruction needs to be coded individually. High-level languages, on the other hand, use English-like syntax, allowing the programmer to focus on application problems instead of the intricacies of a particular computer. High-level languages are converted into machine-level language using a translating program called a compiler. A high-level programming language does not require great effort from the programmer. It is called a high-level language because it is close to the user. The first high-level language used was FORTRAN, which was followed by COBOL.
Assembly Language
Assembly language is a low-level programming language. It is more machine-friendly and requires more effort from the programmer. Assembly (or symbolic) language closely resembles machine language. Symbols and mnemonics are used in this language to represent the various machine language instructions. Assembly language is directly converted into binary language and is machine-dependent.
This language is known as symbolic language because of the symbols it employs. Since the computer does not understand symbolic language, a program called an assembler is used to translate the symbolic code into machine language, which is why it is called assembly language.

Machine Language

Machine language, consisting of 0s and 1s, was the earliest programming language. The computer understands only 0s and 1s because it is made of switches, transistors, and other electronic devices that can only be in the state of either on or off. The off state is represented by 0 and the on state by 1. Machine language is a low-level computer programming language and is more machine-friendly. It is known as machine language because it is close to the machine.
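Binary representation is easy to experiment with in Python; this short sketch prints the 8-bit binary form of a few small integers:

    # Show the 0s and 1s a machine ultimately works with.
    for n in (5, 6, 13):
        print(n, "->", format(n, "08b"))
    # 5 -> 00000101, 6 -> 00000110, 13 -> 00001101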

TRANSLATORS
A translator is a computer program that converts a program written in one language into another. In particular, it converts program language to machine-level language for the debugging and execution of programs. While the computer understands only binary code, i.e., 1s and 0s, it is not easy for humans to read and write in such code, so translators are used to translate a computer program into binary code. There are three types of translator programs, namely the compiler, the assembler, and the interpreter.
Compiler
A compiler is very important in giving an application a performance boost. The compiler of a language is a computer program that converts the source code of an application, written in the computer programming language, into the target language in its binary form.
The compiler checks for syntax errors in the source code of a program. If no error is found, the program is declared to be successfully compiled. If the program does not contain any syntax errors, the compiler translates the source code of the program into the machine language of the computer, so that the computer is able to understand the instructions given to it.
Source files are the program files created by a programmer. They contain the information and instructions written by the programmer, which are checked by the compiler during the process of compilation. These source files are compiled by a compiler into an executable file that can then be run.

Assembler
To translate assembly language into machine language, a translator is needed. This translator is called an assembler. Each assembly language is unique to a particular computer architecture. In assembly language, we use mnemonics such as 'add', 'sub', and 'mul' for the various operations.

For example, if we want to add 4 and 3, then in assembly language we write Add 4 3, where Add is a mnemonic and both 4 and 3 are the operands of the instruction. The assembler then maps this to binary code.
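As a toy illustration of that mapping (the opcodes below are made up for the example, not a real instruction set), a Python sketch might look like this:

    # Hypothetical 4-bit opcodes; a real assembler targets an actual architecture.
    OPCODES = {"Add": "0001", "Sub": "0010", "Mul": "0011"}

    def assemble(line):
        mnemonic, *args = line.split()
        opcode = OPCODES[mnemonic]                                # look up the operation code
        operands = "".join(format(int(a), "04b") for a in args)   # encode operands in binary
        return opcode + operands

    print(assemble("Add 4 3"))  # -> 000101000011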

Interpreter
Like a compiler, an interpreter also translates high-level language into low-level machine language. An interpreter reads a statement, first converts it into an intermediate code, and executes it before reading the next statement. It translates each instruction immediately, one by one. This is a rather slow process because the interpreter has to wait while each instruction is being translated.
The interpreter stops execution when an error occurs and reports it, whereas a compiler reads the whole program even if it encounters several errors.

Syntax, Semantics, and Program Translation

What Are Syntax and Semantics?


Programming languages (called "artificial languages") are languages, just as "natural languages" such as English and Mandarin (Chinese) are. Syntax and semantics are important concepts that apply to all languages.

The syntax of a language is a set of characters and the acceptable arrangements (sequences) of those characters. English, for example, includes the letters of the alphabet, punctuation, and properly spelled words and properly punctuated sentences. The following is a syntactically correct sentence in English:
"Hello there, how are you?"
The following, however, is not syntactically correct:
"Hello there, hao are you?"
In this sentence, the sequence of letters "hao" is not a word in the English language. Now consider the following sentence:
“Colorless green ideas sleep furiously.”
This sentence is syntactically correct, but is semantically incorrect, and thus has no meaning.

The semantics of a language is the meaning associated with each syntactically correct sequence of characters. In Mandarin, "Hao" is syntactically correct, meaning "good." ("Hao" is from a system called pinyin, which uses the Roman alphabet rather than Chinese characters for writing Mandarin.) Thus, every language has its own syntax and semantics.

Program Translation

A central processing unit (CPU) is designed to interpret and execute a specific set of instructions represented in binary form (i.e., 1s and 0s) called machine code. Only programs in machine code can be executed by a CPU.

Writing programs at this "low level" is tedious and error-prone. Therefore, most programs are written in a "high level" programming language such as Python. Since the instructions of such programs are not in machine code that a CPU can execute, a translator program must be used. There are two fundamental types of translators. One, called a compiler, translates programs directly into machine code to be executed by the CPU. The other type of translator is called an interpreter, which executes program instructions in place of ("running on top of") the CPU.
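Python itself illustrates both ideas at once: its standard tools can compile a source string into intermediate bytecode, which its interpreter then executes. A small sketch, using only standard-library calls:

    import dis

    source = "x = 2 + 3"
    code = compile(source, "<example>", "exec")  # translate source to bytecode
    dis.dis(code)   # inspect the intermediate instructions
    exec(code)      # the interpreter executes them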

Program Debugging: Syntax Errors vs. Semantic Errors

Program debugging is the process of finding and correcting errors ("bugs") in a computer program. Programming errors are inevitable during program development. Syntax errors are caused by invalid syntax (for example, entering prnt instead of print). Since a translator cannot understand instructions containing syntax errors, translators terminate when encountering such errors, indicating where in the program the problem occurred.

In contrast, semantic errors (generally called logic errors) are errors in program logic. Such errors cannot be automatically detected, since translators cannot understand the intent of a given computation. For example, if a program computed the average of three numbers as follows,
(num1 + num2 + num3) / 2.0
a translator would have no means of determining that the divisor should be 3 and not 2.
Computers do not understand what a program is meant to do; they only follow the instructions given. It is up to the programmer to detect such errors. Program debugging is not a trivial task, and it constitutes much of the time of program development.
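The distinction is easy to see in Python. A minimal sketch (note that in Python a misspelling like prnt actually surfaces as a NameError at run time rather than a compile-time syntax error, but the translator still pinpoints the offending line, whereas the logic error below goes undetected):

    # Error the translator can catch and report:
    # prnt("hello")   # uncommenting this line halts execution with an error

    # Semantic (logic) error: runs without complaint, but the result is wrong.
    num1, num2, num3 = 2, 4, 6
    average = (num1 + num2 + num3) / 2.0   # bug: the divisor should be 3
    print(average)   # prints 6.0 instead of the intended 4.0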
