Introduction to Computers & Programming in
C
Unit 1: Introduction to Computer, Programming &
Algorithms
June 20, 2023
Contents
1 Introduction to Computer Programming & Algorithms
1.1 Components of a Computer
1.2 Computer Architecture:
1.3 Principles of Computer Architecture
1.3.1 CISC and RISC
1.4 Definition of software and hardware
1.5 Types of Programming Language
1.6 Assembler
1.7 Compiler
1.8 Interpreter
1.9 Linker
1.10 Loader
1.11 Algorithm
1.12 Characteristics of Algorithm:
1.13 Complexity of an Algorithm:
1.14 Flowcharts:
This unit covers
• Defining Computer and its Components.
• Introducing Programming Languages & Algorithms.
1 Introduction to Computer Programming &
Algorithms
A computer is an electronic device that accepts data, performs operations, displays results, and stores the data or
results as needed. It is a combination of hardware and software resources that work together to provide various
functionalities to the user. Hardware comprises the physical components of a computer, such as the processor, memory devices, monitor,
keyboard, etc.
1.1 Components of a Computer
There are basically three important components of a computer:
1. Input Unit
2. Central Processing Unit (CPU)
3. Output Unit
1. Input Unit The input unit consists of input devices that are attached to the computer. These devices take input and
convert it into binary language that the computer understands. Some of the common input devices are keyboard,
mouse, joystick, scanner etc.
(a) The Input Unit is formed by attaching one or more input devices to a computer.
(b) A user inputs data and instructions through input devices such as a keyboard, mouse, etc.
(c) The input unit is used to provide data to the processor for further processing.
2. Central Processing Unit (CPU) Once the information is entered into the computer by the input device, the processor
processes it. The CPU is called the brain of the computer because it is the control centre of the computer. It first
fetches instructions from memory and then interprets them so as to know what is to be done. If required, data is
fetched from memory or an input device. Thereafter the CPU executes or performs the required computation, and then
either stores the output or displays it on the output device. The CPU has three main components, which are
responsible for different functions: the Arithmetic Logic Unit (ALU), the Control Unit (CU), and memory registers.
(a) Arithmetic and Logic Unit (ALU):
i. Arithmetic Logical Unit is the main component of the CPU
ii. It is the fundamental building block of the CPU.
iii. Arithmetic and Logical Unit is a digital circuit that is used to perform arithmetic and logical operations.
(b) Control Unit: The control unit coordinates and controls the data flow in and out of the CPU, and also controls
all the operations of the ALU, the memory registers, and the input/output units. It is also responsible for carrying out
all the instructions stored in the program. It decodes the fetched instruction, interprets it, and sends control
signals to input/output devices until the required operation is done properly by the ALU and memory.
i. The Control Unit is a component of the central processing unit of a computer that directs the operation
of the processor.
ii. It instructs the computer’s memory, arithmetic and logic unit, and input and output devices on how to
respond to the processor’s instructions.
iii. In order to execute the instructions, the components of a computer receive signals from the control unit.
iv. It is also called the central nervous system of the computer.
(c) Memory Registers: A register is a temporary unit of memory in the CPU. These are used to store the data,
which is directly used by the processor. Registers can be of different sizes (16-bit, 32-bit, 64-bit, and so on), and
each register inside the CPU has a specific function, like storing data, storing an instruction, storing address
of a location in memory etc. The user registers can be used by an assembly language programmer for storing
operands, intermediate results etc. Accumulator (ACC) is the main register in the ALU and contains one of
the operands of an operation to be performed in the ALU.
i. Memory Unit is the primary storage of the computer.
ii. It stores both data and instructions.
iii. Data and instructions are held in this unit while a program runs, so that they are available to the CPU whenever required.
3. Output Unit The output unit consists of output devices that are attached to the computer. It converts the binary
data coming from the CPU into a human-understandable form. The common output devices are the monitor, printer,
plotter, etc.
(a) The output unit displays or prints the processed data in a user-friendly format.
(b) The output unit is formed by attaching the output devices of a computer.
(c) The output unit accepts the information from the CPU and displays it in a user-readable form.
Characteristics of a Computer
• Speed: Computers can perform millions of calculations per second. The computation speed is extremely fast.
• Accuracy: Because computers operate on pre-programmed software, there is little room for error: given correct instructions and input, the results are accurate.
• Diligence: Unlike humans, computers do not tire; they can perform long and complex calculations repeatedly with the same accuracy.
• Versatile: Computers are designed to be versatile. They can carry out multiple operations at the same time.
• Storage: Computers can store a large amount of data and instructions in their memory, which can be retrieved at any
point of time.
How does a computer work? A computer processes data by following a series of steps. Input devices capture user commands
and data, sending them to the central processing unit (CPU). The CPU executes instructions, manipulating data stored
in temporary memory (RAM). The operating system manages hardware resources and software, enabling applications to
run. Results are sent to output devices for user interaction. Storage devices store data and programs for long-term use.
1.2 Computer Architecture:
Computer architecture refers to the end-to-end structure of a computer system that determines how its components
interact with each other in executing the machine’s purpose of processing data.
Examples of computer architectures include the Von Neumann architecture (a) and the Harvard architecture (b), shown in Figure 1.
Computers are integral to any organization's infrastructure, from office equipment to remote devices like cell phones
and wearables. Computer architecture establishes the principles governing how hardware and software connect to make
these systems function.
Figure 1: (a) Von Neumann architecture; (b) Harvard architecture
1.3 Principles of Computer Architecture
Computer architecture specifies the arrangement of components within a computer system and the processes at the
core of its functioning. It defines the machine interface for which programming languages and associated processors are
designed.
Two predominant approaches to architecture are Complex Instruction Set Computer (CISC) and Reduced Instruction
Set Computer (RISC), influencing how computer processors operate.
1.3.1 CISC and RISC
CISC processors have a single processing unit, auxiliary memory, and a large instruction set with hundreds of unique
instructions, simplifying programming by accomplishing tasks with single instructions. However, this approach may require more
time to execute each instruction, even though programs occupy less memory.
RISC architecture emerged to create high-performance computers with simpler hardware: sophisticated operations are
broken down into simpler instructions that can be executed faster.
How does computer architecture work? Computer architecture allows a computer to compute, retain, and
retrieve information. This data can be digits in a spreadsheet, lines of text in a file, dots of color in an image, sound
patterns, or the status of a system such as a flash drive.
• Purpose of computer architecture: Everything a system performs, from online surfing to printing, involves the
transmission and processing of numbers. A computer’s architecture is merely a mathematical system intended to
collect, transmit, and interpret numbers.
• Data in numbers: The computer stores all data as numerals. When a developer is engrossed in machine learning
code and analyzing sophisticated algorithms and data structures, it is easy to forget this.
• Manipulating data: The computer manages information using numerical operations. It is possible to display an
image on a screen by transferring a matrix of digits to the video memory, with every number reflecting a pixel of
color.
• Multifaceted functions: The components of a computer architecture include both software and hardware. The
processor — hardware that executes computer programs — is the primary part of any computer.
• Booting up: At the most elementary level of a computer design, programs are executed by the processor whenever
the computer is switched on. These programs configure the computer’s proper functioning and initialize the different
hardware sub-components to a known state. This software is known as firmware since it is persistently preserved
in the computer’s memory.
• Support for temporary storage: Memory is also a vital component of computer architecture, with several types
often present in a single system. The memory is used to hold programs (applications) while they are being executed
by the processor and the data being processed by the programs.
• Support for permanent storage: There can also be tools for storing data or sending information to the external world
as part of the computer system. These provide text input through the keyboard, the presentation of information
on a monitor, and the transfer of programs and data from or to a disc drive.
• User-facing functionality: Software governs the operation and functioning of a computer. Several software ‘layers’
exist in computer architecture. Typically, a layer would only interface with layers below or above it.
1.4 Definition of software and hardware
Computer Hardware Hardware refers to the physical components of a computer: any part of the machine that we can
physically touch. These are the primary electronic devices used to build up the computer.
Examples of hardware in a computer are the Processor, Memory Devices, Monitor, Printer, Keyboard, Mouse, and
Central Processing Unit.
Types of Computer Hardware
1. Input Devices: Input Devices are those devices through which a user enters data and information into the computer
or, simply, through which the user interacts with the computer. Examples of Input Devices are the Keyboard, Mouse, Scanner, etc.
2. Output Devices: Output Devices are devices that are used to show the result of the task performed by the user.
Examples of Output Devices are Monitors, Printers, Speakers, etc.
3. Storage Devices: Storage Devices are devices that are used for storing data; they are also known as Secondary
Storage Devices. Examples of Storage Devices are CDs, DVDs, Hard Disks, etc.
4. Internal Components: Internal Components consist of the important hardware devices present inside the system. Examples
of Internal Components are the CPU, Motherboard, etc.
Computer Software Software is a collection of instructions, procedures, and documentation that performs different
tasks on a computer system. We can also say that computer software is programming code executed on a computer processor.
The code can be machine-level code or code written for an operating system. Examples of software are MS Word, Excel,
PowerPoint, Google Chrome, Photoshop, MySQL, etc.
Types of Computer Software
1. System Software: System Software is the component of computer software that operates directly with the computer
hardware; it controls the computer's internal functioning and is also responsible for
controlling hardware devices such as printers, storage devices, etc. Types of System Software include operating
systems, language processors, and device drivers.
2. Application Software: Application Software is software that runs on top of the system software and performs
specific tasks for users. Application Software basically includes word processors, spreadsheets, etc.
Types of Application Software include general-purpose software, customized software, etc.
1.5 Types of Programming Language
Programming languages can be divided into three categories based on the level of abstraction:
1. Low-level Language:
A low-level language is a programming language that provides no abstraction from the hardware and is represented
by machine instructions in 0 and 1 form.
There are two types of low-level programming language: machine language and assembly language.
(a) Machine Language:
A machine-level language is one that consists of a set of binary instructions that are either 0 or 1. Because
computers can only read machine instructions in binary digits, i.e., 0 and 1, the instructions sent to the
computer must be in binary codes.
i. It is difficult for programmers to write programs in machine instructions, hence creating a program in a
machine-level language is a challenging undertaking.
ii. It is prone to errors because it is difficult to comprehend, and it requires a lot of upkeep.
iii. Distinct processor architectures require different machine codes.
A machine-level language is not portable since each computer has its own set of machine instructions, therefore
a program written on one computer will no longer work on another.
(b) Assembly Language
Some commands in the assembly language are human-readable, such as move, add, sub, and so on. The
challenges we had with machine-level language are mitigated to some extent by using assembly language,
which is an expanded form of machine-level language.
i. Assembly language instructions are easier to write and understand since they use English words like move,
add, and sub.
ii. We need a translator that transforms assembly language into machine code since computers can only
understand machine-level instructions.
iii. Assemblers are the translators that are utilized to translate the code. Because the data is stored in
computer registers, and the computer must be aware of the varied sets of registers, the assembly language
code is not portable.
2. High-Level Language:
A high-level language is a programming language that allows a programmer to create programs that are not
dependent on the type of computer they are running on. High-level languages are distinguished from machine-level
languages by their resemblance to human languages. When writing a program in a high-level language, the logic of
the problem must be given complete attention. To convert a high-level language to a low-level language, a compiler
is necessary. Examples of High-Level Programming Language:
(a) COBOL: used for business applications
(b) FORTRAN: used for engineering and scientific applications
(c) PASCAL: used for general use and as a teaching tool
(d) C and C++: used for general purposes; very popular
(e) PROLOG: used for artificial intelligence
(f) JAVA: used for general-purpose programming
(g) .NET: used for general or web applications
Advantages of High-level language:
(a) Because it is written in English-like words, a high-level language is simple to read, write, and maintain.
(b) The purpose of high-level languages is to overcome the drawbacks of low-level languages, namely portability.
(c) The high-level language is machine-independent.
(d) High-level programming language is portable.
3. Medium Level Language:
Programming languages with features of both Low Level and High-Level programming languages are referred to as
“Middle Level” programming languages.
(a) Medium-level language is also known as intermediate-level programming language.
(b) It is not a formally standardized category of programming language; the term simply describes languages that bridge the two levels.
(c) A medium-level language is a type of programming language that has features of both low-level and high-level
programming languages.
Examples of Medium Level Programming Language:
C, C++, and JAVA are the best examples of middle-level programming languages, since
they combine low-level and high-level characteristics.
1.6 Assembler
In computer science, an assembler is a program that converts assembly language into machine code. The output
of an assembler is called an object file, which contains a combination of machine instructions as well as the data required
to place these instructions in memory.
Assembly Language: It is a low-level programming language in which there is a very strong correspondence between
the instructions in the language and the machine code instructions of the computer's hardware.
1.7 Compiler
1. A compiler is a program that translates source code from a high-level programming language into a lower-level,
computer-understandable language (e.g. assembly language, object code, or machine code) to create an
executable program.
2. It is more thorough than an interpreter because it goes through the entire code at once.
3. It can report possible errors and check limits and ranges.
4. But this makes its translation time a little slower.
5. It is platform-dependent.
6. It helps to detect errors, which are displayed after the compiler has read the entire code.
7. In other words, we can say that a compiler turns the high-level language into binary language or machine code
in one pass, all at once.
1.8 Interpreter
1. An interpreter is also a translator, like a compiler, but it converts a high-level language into machine code one statement at a time.
2. An interpreter goes through one line of code at a time, executes it, and then goes on to the next line of the
code, and keeps going until there is an error in a line or the code has completed.
3. Because there is no separate compilation step, an interpreter begins running a program immediately, but interpreted
programs typically execute 5 to 25 times slower than compiled ones, and the interpreter stops at the line where an
error occurs, and then again at the next line if it has an error too.
4. A compiler, in contrast, gives all the errors in the code at once.
5. Also, a compiler saves the machine code permanently for future use, but an interpreter does not; on the other hand,
an interpreter occupies less memory.
An interpreter differs from a compiler in that:
1. It begins executing the source program immediately, without a separate translation step.
2. It occupies less memory.
3. It executes the instructions of the source program directly, statement by statement.
1.9 Linker
For a program to run, we need to include the header files or pre-defined library files it uses; if they are not
included at the beginning of the program, the compiler will generate errors, and the code will not
work.
A linker is a program that takes one or more object files created by the compiler and combines them into one executable
file. Linking is performed at two times: compile time, when the high-level language is turned
into machine code, and load time, when the code is loaded into memory by the loader.
Linker is of two types:
1. Dynamic Linker:-
(a) It is implemented during run time.
(b) It requires less memory.
(c) In dynamic linking there are more chances of error and failure.
(d) The shared code is stored in a shared library and mapped into virtual memory to save RAM, so the program needs the shared library to be present at run time.
2. Static Linker:-
(a) It is implemented during compilation of source program.
(b) It requires more memory.
(c) Linking is implemented before execution in static linking.
(d) It is faster and portable.
(e) In static linking there are fewer chances of error and no chance of failure at run time.
1.10 Loader
A loader is a program that loads the machine code of a program into the system memory. It is the part of the OS of the
computer that is responsible for loading programs, and it marks the very beginning of a program's execution. Loading a
program involves reading the contents of an executable file into memory. Only after the program is loaded does the operating
system start it, by passing control to the loaded program code. Every OS that supports loading has a loader,
and many keep their loaders permanently in memory.
1.11 Algorithm
What is an Algorithm?
An algorithm is a set of commands that must be followed for a computer to perform calculations or other problem-
solving operations. According to its formal definition, an algorithm is a finite set of instructions carried out in a specific
order to perform a particular task. It is not the entire program or code; it is the simple logic of a problem, represented as an
informal description in the form of a flowchart or pseudocode.
1. Problem: A problem can be defined as a real-world task or instance for which you need to
develop a program or set of instructions.
2. Algorithm: An algorithm is defined as a step-by-step process that will be designed for a problem.
3. Input: After designing an algorithm, the algorithm is given the necessary and desired inputs.
4. Processing unit: The input will be passed to the processing unit, producing the desired output.
5. Output: The outcome or result of the program is referred to as the output.
How do Algorithms Work?
Algorithms are step-by-step procedures designed to solve specific problems and perform tasks efficiently in the realm
of computer science and mathematics. These powerful sets of instructions form the backbone of modern technology and
govern everything from web searches to artificial intelligence. Here’s how algorithms work:
1. Input: Algorithms take input data, which can be in various formats, such as numbers, text, or images.
2. Processing: The algorithm processes the input data through a series of logical and mathematical operations,
manipulating and transforming it as needed.
3. Output: After the processing is complete, the algorithm produces an output, which could be a result, a decision,
or some other meaningful information.
4. Efficiency: A key aspect of algorithms is their efficiency, aiming to accomplish tasks quickly and with minimal
resources.
5. Optimization: Algorithm designers constantly seek ways to optimize their algorithms, making them faster and more
reliable.
6. Implementation: Algorithms are implemented in various programming languages, enabling computers to execute
them and produce desired outcomes.
Example: Now, let us use an example to learn how to write algorithms.
Problem 1: Create an algorithm that multiplies two numbers and displays the output.
Step 1 - Start
Step 2 - declare three integers x, y and z
Step 3 - define values of x and y
Step 4 - multiply values of x and y
Step 5 - store result of step 4 to z
Step 6 - print z
Step 7 - Stop
Algorithms instruct programmers on how to write code. In addition, the algorithm can be written as:
Step 1 - Start mul
Step 2 - get values of x and y
Step 3 - z ← x * y
Step 4 - display z
Step 5 - Stop
In algorithm design and analysis, the second method is typically used to describe an algorithm. It allows the analyst
to analyze the algorithm while ignoring all unwanted definitions easily. They can see which operations are being used and
how the process is progressing. It is optional to write step numbers. To solve a given problem, you create an algorithm.
A problem can be solved in a variety of ways.
Figure 3: Flowchart for the area of a circle
As a result, many solution algorithms for a given problem can be derived. The following step is to evaluate the
proposed solution algorithms and implement the most appropriate solution.
1.12 Characteristics of Algorithm:
1. Efficiency: A good algorithm should perform its task quickly and use minimal resources.
2. Correctness: It must produce the correct and accurate output for all valid inputs.
3. Clarity: The algorithm should be easy to understand and comprehend, making it maintainable and modifiable.
4. Scalability: It should handle larger data sets and problem sizes without a significant decrease in performance.
5. Reliability: The algorithm should consistently deliver correct results under different conditions and environments.
6. Optimality: Striving for the most efficient solution within the given problem constraints.
7. Robustness: Capable of handling unexpected inputs or errors gracefully without crashing.
8. Adaptability: Ideally, it can be applied to a range of related problems with minimal adjustments.
9. Simplicity: Keeping the algorithm as simple as possible while meeting its requirements, avoiding unnecessary
complexity.
1.13 Complexity of an Algorithm:
The algorithm’s performance can be measured in two ways:
1. Time Complexity The amount of time required to complete an algorithm’s execution is called time complexity. The
big O notation is used to represent an algorithm’s time complexity. The asymptotic notation for describing time
complexity, in this case, is big O notation. The time complexity is calculated primarily by counting the number of
steps required to complete the execution. Let us look at an example of time complexity.
mul = 1;
// Suppose you have to calculate the product of the first n numbers.
for i = 1 to n
    mul = mul * i;
// when the loop ends, mul holds the product of the n numbers
return mul;
The time complexity of the loop statement in the preceding code is at least n, and as the value of n grows,
so does the time complexity. In contrast, the complexity of the return statement, i.e., return mul, is constant, because it does
not depend on the value of n and completes in a single step. The worst-case time complexity is
generally considered because it is the maximum time required for any given input size.
2. Space Complexity The amount of space an algorithm requires to solve a problem and produce an output is called
its space complexity. Space complexity, like time complexity, is expressed in big O notation. The space is required
for an algorithm for the following reasons:
(a) To store the program instructions.
(b) To keep track of constant values.
(c) To keep track of variable values.
(d) To keep track of function calls, jumping statements, and so on.
Space Complexity = Auxiliary Space + Input Size
Finally, after understanding what an algorithm is, its analysis, and its approaches, you will look at different types of algorithms.
1.14 Flowcharts:
A flowchart is a diagram that illustrates the steps, sequences, and decisions of a process or workflow. While there are
many different types of flowcharts, a basic flowchart is the simplest form of a process map. It’s a powerful tool that can
be used in multiple fields for planning, visualizing, documenting, and improving processes.
Some examples of Flowcharts:
Now, we will discuss some examples of flowcharting. These examples will help in properly understanding the flowcharting
technique, and will also help you in the program development process in the next unit of this block.
Problem 1: Find the area of a circle of radius r.
Figure 5: Flowchart for Problem 1 (area of a circle)
Problem 2: Draw a flowchart for an algorithm that gets two numbers and prints the sum of their values.
Figure 6: Flowchart for Problem 2 (sum of two numbers)
Problem 3: Draw a flowchart for an algorithm that finds the greater of two numbers.
Figure 7: Flowchart for Problem 3 (greater of two numbers)
Problem 4: Draw a flowchart to calculate the average of 25 exam scores.
Figure 8: Flowchart for Problem 4 (average of 25 exam scores)