POP Module 1
Textbook:
1. Computer Fundamentals and Programming in C, Reema Thareja, Oxford University Press, Third Edition, 2023.
Takeaways:
• Characteristics of computers
• Digital computers
• Stored program concept
• Generations of computers
• Types of computers
• Applications of computers
• Basic organization of a computer
Principles of Programming using C- BCSESC103
Chapter 1
Introduction to Computers
1.1 COMPUTER
A computer, in simple terms, can be defined as an electronic device that is designed to accept
data, perform the required mathematical and logical operations at high speed, and output the result. We
all have seen computers in our homes, schools, and colleges. In fact, in today’s scenario, we find
computers in most aspects of our daily lives. For some of us, it is hard to even imagine a world without
them.
In the past, computers were extremely large in size and often required an entire room for
installation. These computers consumed enormous amounts of power and were too expensive to be used
for commercial applications. Therefore, they were used only for limited tasks, such as computing
trajectories for astronomical or military applications. However, with technological advancements, the
size of computers became smaller and their energy requirements reduced immensely. This opened the
way for adoption of computers for commercial purposes.
BIET, Davangere-04
Accuracy: A computer is a very fast, reliable, and robust electronic device. It always gives
accurate results, provided the correct data and set of instructions are input to it. Hence, in the event of an
error, it is the user who fed the incorrect data or program who is responsible. This clearly means that the
output generated by a computer depends on the given instructions and input data. If the input data is
wrong, then the output will also be erroneous. In computer terminology, this is known as garbage-in,
garbage-out (GIGO).
Automation: Besides being very fast and accurate, computers are automatable devices that can
perform a task without any user intervention. The user just needs to assign the task to the computer, after
which it automatically controls different devices attached to it and executes the program instructions.
Diligence: Unlike humans, a computer never gets tired of a repetitive task. It can work continually
for hours without making errors. Even if a large number of executions is required, each execution takes
the same duration and is performed with the same accuracy.
Versatile: Versatility is the quality of being flexible. Today, computers are used in our daily life
in different fields. For example, they are used as personal computers (PCs) for home use, for business-
oriented tasks, weather forecasting, space exploration, teaching, railways, banking, medicine, and so on,
indicating that computers can perform different tasks simultaneously. On the PC that you use at home,
you may play a game, compose and send e-mails, listen to music, etc. Therefore, computers are versatile
devices as they can perform multiple tasks of different nature at the same time.
Memory: Similar to humans, computers also have memory. Just the way we cannot store
everything in our memory and need secondary media, such as a notebook, to record certain important
things, computers also have internal or primary memory (storage space) as well as external or secondary
memory. While the internal memory of computers is very expensive and limited in size, the secondary
storage is cheaper and of bigger capacity.
The computer stores a large amount of data and programs in the secondary storage space. The
stored data and programs can be retrieved and used whenever required. Secondary memory is the key for
data storage. Some examples of secondary devices include floppy disks, optical disks (CDs and DVDs),
hard disk drives (HDDs), and pen drives.
No IQ: Although the trend today is to make computers intelligent by inducing artificial
intelligence (AI) in them, they still do not have any decision-making abilities of their own. They need
guidance to perform various tasks.
Economical: Today, computers are considered short-term investments for achieving long-term
gains. Using computers also reduces manpower requirements and leads to an elegant and efficient way
of performing various tasks. Hence, computers save time, energy, and money. When compared to other
systems, computers can do more work in less time.
• Instructions written by the users are performed sequentially until there is a break in the current
flow.
• Input/Output and processing operations are performed simultaneously. While data is being
read/written, the central processing unit (CPU) executes another program in the memory that is ready for
execution.
Figure 1.2: Von Neumann architecture (a) Shared memory for instructions and data
(b) Separate memories for instructions and data
A computer with a Von Neumann architecture stores data and instructions in the same memory.
It is a serial machine in which data and instructions are fetched one at a time. Data and instructions
are transferred to and from memory through a shared data bus. Since there is a single bus to carry both data
and instructions, process execution becomes slower.
1822: English mathematician Charles Babbage designed a steam-driven calculating machine that could
compute tables of numbers. Though the project failed, as he could not complete the construction of the
engine, it laid the foundation for the first computer.
1890: Herman Hollerith, an American inventor, designed a punched card system to tabulate the 1890
census. The system completed the task in three years, saving the US government $5 million. Hollerith
later established a company that we know today as IBM.
1936: British mathematician Alan Turing introduced a universal machine called the Turing machine
capable of computing anything that is computable. The central concept of the modern computer is based
on this machine.
1941: John Vincent Atanasoff, a Bulgarian-American physicist, and his graduate student Clifford Berry,
at Iowa State College, designed the Atanasoff–Berry Computer (ABC), which could solve 29 equations
simultaneously. It was the first time a computer could store information in its main memory.
1943–1944: John W. Mauchly and J. Presper Eckert built the Electronic Numerical Integrator and
Computer (ENIAC), which is considered the grandfather of digital computers. It filled a 20 × 40 feet
room and had 18,000 vacuum tubes.
1946: Mauchly and Eckert designed the UNIVAC, which was the first commercial computer for
business and government applications.
1947: William Shockley, John Bardeen, and Walter Brattain of Bell Laboratories invented the transistor.
Soon vacuum tubes in computers were replaced by transistors.
1952: Grace Hopper developed the first compiler (A-0); her later work led to COBOL, one of the first business-oriented programming languages.
1954: The FORTRAN programming language was developed.
1958: Jack Kilby of Texas Instruments and Robert Noyce at Fairchild Semiconductor Corporation
separately invented the integrated circuit, which is commonly known as the computer chip.
1964: Douglas Engelbart developed a prototype of the modern computer, with a mouse and a graphical
user interface (GUI). This was a remarkable achievement, as it shifted computers from being specialized
machines for scientists and mathematicians towards the general public.
1969: Unix operating system was developed at Bell Labs. It was written in the C programming language
and was designed to be portable across multiple platforms. Soon it became the operating system of
choice among mainframes at large companies and government entities.
1970: DRAM chip was introduced by Intel.
1971: Alan Shugart and his team at IBM invented the floppy disk, which allowed data to be shared
among computers.
1973: Robert Metcalfe, a research member at Xerox, developed Ethernet for connecting multiple
computers and other hardware.
1974–1977: Personal computers started becoming popular.
1975: Paul Allen and Bill Gates started writing software for the Altair 8800 using the new BASIC
language. On April 4, they both formed their own software company, Microsoft.
1976: Steve Jobs and Steve Wozniak started Apple Computers and developed the Apple I, the first
computer with a single circuit board.
1977: The Apple II was launched; it offered colour graphics and incorporated an audio cassette drive for
storage.
1978: WordStar, a word processor application, was released by MicroPro International.
1979: VisiCalc, the first computerized spreadsheet program for personal computers, was unveiled.
1981: The first IBM personal computer was introduced; it used Microsoft’s MS-DOS operating
system. The term PC was popularized.
1983: The first laptop was introduced. Moreover, Apple introduced the Lisa, the first personal computer
with a GUI featuring drop-down menus and icons.
1985: Microsoft announced Windows as a new operating system.
1986: Compaq introduced the Deskpro 386, a 32-bit architecture machine that provided speed
comparable to mainframes.
1990: Tim Berners-Lee invented the World Wide Web with HTML as its publishing language.
1993: The Pentium microprocessor introduced the use of graphics and music on PCs.
1994: PC games became popular.
1996: Sergey Brin and Larry Page developed the Google search engine at Stanford University.
1999: The term Wi-Fi was introduced when users started connecting to the Internet without wires.
2001: Apple introduced the Mac OS X operating system, which had a protected memory architecture and
pre-emptive multi-tasking, among other benefits. To stay competitive, Microsoft launched Windows XP.
2003: The first 64-bit processor, AMD’s Athlon 64, was brought into the consumer market.
2004: Mozilla released Firefox 1.0 and in the same year Facebook, a social networking site, was
launched.
2005: YouTube, a video sharing service, was launched. In the same year, Google acquired Android, a
Linux-based mobile phone operating system.
2006: Apple introduced the MacBook Pro, its first Intel-based, dual-core mobile computer.
2007: Apple released the iPhone, which brought many computer functions to the smartphone.
2009: Microsoft launched Windows 7 in which users could pin applications to the taskbar.
2010: Apple launched iPad, which revived the tablet computer segment.
2011: Google introduced the Chromebook, a laptop that runs the Google Chrome operating system.
2015: Apple released the Apple Watch. In the same year, Microsoft launched Windows 10.
After reading these interesting developments in computing technology, let us also understand the
evolution of computers through different generations.
Memory: Semiconductor memory is used as primary memory; large-capacity magnetic disks are used as
built-in secondary memory. Magnetic tapes and floppy disks were used as portable storage devices,
which have now been replaced by optical disks and USB flash drives.
Software Technology: Programming is done in high-level programming languages such as Java, Python,
and C#. Graphical user interface (GUI)-based operating systems such as Windows, Unix, Linux,
Ubuntu, and Apple macOS are used. These operating systems are more powerful and user friendly
than the ones available in the previous generations.
Used for: Scientific, commercial, interactive online, multimedia (graphics, audio, video), and network
applications.
Examples: IBM notebooks, Pentium PCs, SUN workstations, IBM SP/2, Param supercomputer.
Highlights
• Faster, smaller, cheaper, powerful, reliable, and easier to use than the previous generation computers
• Speed of microprocessors and the size of memory are growing rapidly.
Several companies copied this design and termed their microcomputers PC-compatible, which refers to any PC that is
based on the original IBM PC design. Another type of popular PC is designed by Apple. PCs designed
by IBM and other PC-compatible computers have a different architecture from that of Apple computers.
Moreover, PCs and PC-compatible computers commonly use the Windows operating system, while
Apple computers use the Macintosh operating system (MacOS). PCs can be classified into the following
categories:
Desktop PCs
A desktop PC is the most popular model of PCs. The system unit of the desktop PC can be placed
flat on a desk or table. It is widely used in homes and offices.
Laptops
Laptops (Figure 1.9) are small microcomputers that can easily fit inside a briefcase. They are
very handy and can easily be carried from one place to another. They may also be placed on the user’s
lap (thus the name). Hence, laptops are very useful, especially when going on long journeys. Laptops
operate on a battery and do not always have to be plugged in like desktop computers.
Workstations
Workstations are single-user computers that have the same features as PCs, but their processing
speed matches that of a minicomputer or mainframe computer. Workstation computers have advanced
processors, more RAM and storage capacity than PCs. Therefore, they are more expensive and powerful
than a normal desktop computer.
Network Computers
Network computers have less processing power, memory, and storage than a desktop computer.
These are specially designed to be used as terminals in a networked environment. For example, some
network computers are specifically designed to access data stored on a network (including the Internet
and intranets).
Handheld Computers
The mid-1990s witnessed a range of small personal computing devices that are commonly known
as handheld computers, or mobile computers. These computers are called handheld computers because
they can fit in one hand, while users can use the other hand to operate them.
Handheld computers are very small in size, and hence they have small-sized screens and
keyboards. These computers are preferred by business travellers and mobile employees whose jobs
require them to move from place to place. Some examples of handheld computers are as follows:
• Smartphones • Tablet PCs
Smartphones These days, cellular phones are web-enabled telephones. Such phones are also known as
smartphones because, in addition to basic phone capabilities, they also facilitate the users to access the
Internet and send e-mails, edit Word documents, generate an Excel sheet, create a presentation, and lots
more. Smartphones run an advanced mobile operating system that enables them to run various applications.
The four major mobile operating systems are iOS, Android, BlackBerryOS, and Windows Mobile.
Smartphones also have a CPU, more storage space, more memory, and a larger screen than a regular cell
phone. In a nutshell, smartphone refers to a multi-functional mobile phone handset that packs in varied
functionalities from a camera to a web browser to a high-density display.
Tablet PCs A tablet PC (see Figure 1.10) is a computing device that is smaller than a laptop, but bigger
than a smartphone. Features such as user-friendly interface, portability, and touch screen have made
them very popular in the last few years. These days, a wide range of high-performance tablets are
available in the market. While all of them look similar from outside, they may differ in features such as
operating system, speed of data connectivity, camera specifications, size of the screen, processing power,
battery life, and storage capability.
1.6 APPLICATIONS OF COMPUTERS
When the first computers were developed, they were used only in the fields of mathematics and
science. In fact, the first effective utilization of computers was for decoding
messages in military applications. Later on, computers were used in real-time control systems, like for
landing on the moon. However, with the advancement of technology, the cost of computers and their
maintenance declined. This opened the way for computers to be extensively used in the business and
commercial sector for information processing. Today, computers are widely used in fields such as
engineering, health care, banking, education, etc. Let us discuss how computers are being effectively
utilized to perform important tasks.
Word processing Word processing software enables users to read and write documents. Users can also
add images, tables, and graphs for illustrating a concept. The software automatically corrects spelling
mistakes and includes copy–paste features (which is very useful where the same text has to be repeated
several times).
Internet The Internet is a network of networks that connects computers all over the world. It gives the
user access to an enormous amount of information, much more than available in any library. Using e-
mail, the user can communicate in seconds with a person who is located thousands of miles away. Chat
software enables users to chat with another person in real-time (irrespective of the physical location of
that person). Video conferencing tools are becoming popular for conducting meetings with people who
are unable to be present at a particular place.
Digital video or audio composition Computers make audio or video composition and editing very
simple. This has drastically reduced the cost of equipment to compose music or make a film. Graphics
engineers use computers for developing short or full-length films and creating 3-D models and special
effects in science fiction and action movies.
Desktop publishing Desktop publishing software enables us to create page layouts for entire books.
After discussing how computers are used in today’s scenario, let us now have a look at the different
areas where computers are being widely utilized.
Bioinformatics Bioinformatics is the application of computer technology to manage large amounts of
biological information. Computers are used to collect, store, analyse, and integrate biological and genetic
information to facilitate gene-based drug discovery and development. The need for analysis has become
even more important with the enormous amount of genomic information available publicly from the Human
Genome Project.
Health care The last few years have seen a massive growth in the number of computer and smartphone users. As in our
daily lives, computers have also become a necessary device in the health care industry. The following
are some areas in which computers are used in health care:
o Storing records To begin with, computers are first and foremost used to store the
medical records of patients. Earlier, patient records were kept on paper, with separate
records dealing with different medical issues from separate healthcare organizations.
o Surgical procedures Computers are used for certain surgical procedures. They enable the
surgeon to use a computer to control and move surgical instruments in the patient’s body
for a variety of surgical procedures. In such surgeries, a small incision is made, and then a
small surgical tool with an attached camera is placed inside the patient’s body. This
reduces the risk of complications from a larger surgical wound, and minimizes damage
done to the patient’s body.
Better diagnosis and treatment Computers help physicians make better diagnoses and recommend
treatments. Moreover, computers can be used to compare expected results with actual results in order to
help physicians make better decisions.
Meteorology Meteorology is the study of the atmosphere. This branch of science observes variables of
Earth’s atmosphere such as temperature, air pressure, water vapour, and the gradients and interactions of
each variable, and how they change over time. Meteorology has applications in many diverse fields such
as the military, energy production, transport, agriculture, and construction.
Multimedia and Animation Multimedia and animation, which combine still images, moving images, text,
and sound in meaningful ways, constitute one of the most powerful aspects of computer technology. We all have
seen cartoon movies, which are nothing but an example of computer animation.
Retail Business Computers are used in retail shops to enter orders, calculate costs, and print receipts.
They are also used to keep an inventory of the products available and their complete description.
Sports In sports, computers are used to compile statistics, identify weak players and strong players by
analysing statistics, sell tickets, create training programs and diets for athletes, and suggest game plan
strategies based on the competitor’s past performance. Computers are also used to generate most of the
graphic art displays flashed on scoreboards.
Travel and Tourism Computers are used to prepare tickets, monitor the train’s or airplane’s route, and
guide the plane to a safe landing. They are also used to research about hotels in an area, reserve rooms,
or to rent a car.
Simulation Supercomputers that can process enormous amounts of data are widely used in simulation
tests. Simulation of automobile crashes or airplane emergency landings is done to identify potential
weaknesses in designs without risking human lives.
Astronomy Spacecraft are usually monitored using computers that not only keep a continuous record
of the voyage and of the speed, direction, fuel, and temperature, but also suggest corrective action if the
vehicle deviates from its intended behaviour. The remote stations on the earth compare all these quantities with the desired
values, and in case these values need to be modified to enhance the performance of the spacecraft,
signals are immediately sent that set in motion the mechanics to rectify the situation.
Education A computer is a powerful teaching aid and can act as another teacher in the classroom.
Teachers use computers to develop instructional material. Teachers may use pictures, graphs, and
graphical presentations to easily illustrate an otherwise difficult concept. Moreover, teachers at all levels
can use computers to administer assignments and keep track of grades. Students can also give exams
online and get instant results.
1.7 BASIC ORGANIZATION OF A COMPUTER
A computer is an electronic device that performs five major operations:
• Accepting data or instructions (input)
• Storing data
• Processing data
• Displaying results (output)
• Controlling and coordinating all operations inside a computer
In this section, we will discuss all these functions and see how one unit of a computer interacts with another to
perform these operations. Refer to Figure 1.9, which shows the interaction between the different units of a
computer system.
Central Processing Unit (CPU) – The CPU is called the brain of a computer. It is the electronic circuitry that carries
out the instructions given by a computer program. The CPU can be sub-classified into three parts:
i. Control Unit (CU)
ii. Arithmetic and Logic Unit (ALU)
iii. Memory Unit (MU)
i. Control Unit (CU) – The control unit manages the various components of the computer. It reads
instructions from memory, interprets them, and converts them into a series of signals to activate other parts
of the computer. It controls and coordinates the input, output, memory, and all other units.
ii. Arithmetic and Logic Unit (ALU) – The ALU performs simple arithmetic operations such as
+, -, *, / and logical operations such as >, <, >=, <=, etc.
iii. Memory Unit (MU) – Memory is used to store data and instructions before and after processing.
This memory is also called primary or internal memory. It is used to store data temporarily or
permanently.
Function of the CPU –
It controls all the parts of the computer, the software, and the flow of data.
Output Unit – The output unit consists of a number of output devices. An output device is used to
show the result of processing.
Functions of the output unit:
It accepts data or information sent from the main memory of the computer.
It converts the binary-coded information into a human-readable form.
Chapter 2
Input and output devices
These devices are used to enter information and instructions into a computer for storage or
processing and to deliver the processed data to a user. Input/Output devices are required for users
to communicate with the computer. In simple terms, input devices bring information INTO the
computer and output devices bring information OUT of a computer system. These input/output
devices are also known as peripherals since they surround the CPU and memory of a computer
system.
2.1 Input Devices
An input device is any device that provides input to a computer. There are many input devices, but
the two most common ones are a keyboard and mouse. Every key you press on the keyboard and
every movement or click you make with the mouse sends a specific input signal to the computer.
Keyboard: The keyboard is very much like a standard typewriter keyboard with a few additional
keys. The basic QWERTY layout of characters is maintained to make it easy to use the system. The
additional keys are included to perform certain special functions. These are known as function keys
that vary in number from keyboard to keyboard.
Mouse: A device that controls the movement of the cursor or pointer on a display screen. A
mouse is a small object you can roll along a hard and flat surface. Its name is derived from its
shape, which looks a bit like a mouse. As you move the mouse, the pointer on the display
screen moves in the same direction.
Trackball: A trackball is an input device used to enter motion data into computers or other
electronic devices. It serves the same purpose as a mouse, but is designed with a moveable
ball on the top, which can be rolled in any direction.
Touchpad: A touch pad is a device for pointing (controlling input positioning) on a computer
display screen. It is an alternative to the mouse. Originally incorporated in laptop computers,
touch pads are also being made for use with desktop computers. A touch pad works by
sensing the user’s finger movement and downward pressure.
Touch Screen: It allows the user to operate/make selections by simply touching the display
screen. A display screen that is sensitive to the touch of a finger or stylus. Widely used on
ATM machines, retail point-of-sale terminals, car navigation systems, medical monitors and
industrial control panels.
Light Pen: Light pen is an input device that utilizes a light-sensitive detector to select objects
on a display screen.
Optical mark recognition (OMR): Optical mark recognition, also called mark sense reader
is a technology where an OMR device senses the presence or absence of a mark, such as
pencil mark. OMR is widely used in tests such as aptitude test.
Bar code reader: Bar-code readers are photoelectric scanners that read the bar codes or
vertical zebra strips marks, printed on product containers. These devices are generally used in
super markets, bookshops etc.
Scanner: Scanner is an input device that can read text or illustration printed on paper and
translates the information into a form that the computer can use. A scanner works by
digitizing an image.
Printer: Printer is a device that outputs text and graphics information obtained from the
computer and prints it on to a paper. Printers are available in the market in a variety of size,
speed, sophistication, and cost.
DOT MATRIX PRINTER: A dot matrix printer prints characters and images of all types as
a pattern of dots. It has a print head (or hammer) that consists of pins representing the
character or image. The print head runs back and forth, or in an up and down motion, on the
page and prints by striking an ink-soaked cloth ribbon against the paper, much like the print
mechanism on a typewriter.
Advantages: It can produce carbon copies; it offers the lowest printing cost per page; it is widely used
for bulk printing where quality of the print is not of much importance; it is cheap; and when the ink is
about to finish, the printout fades gradually rather than stopping abruptly.
LINE PRINTER
A line printer is a high-speed impact printer in which one typed line is printed at a time. The
speed of a line printer usually varies from 600 to 1200 lines-per-minute or approximately 10
to 20 pages per minute. They are widely used in datacenters and in industrial environments.
Band printer is a commonly used variant of line printers.
INKJET PRINTERS
• In inkjet printers, the print head has several tiny nozzles, also called jets.
• As the paper moves past the print head, the nozzles spray ink onto it, forming the
characters and images.
• The dots are extremely small (usually between 50 and 60 microns in diameter) and are
positioned very precisely, with resolutions of up to 1440x720 dots per inch (dpi).
• There is usually one black ink cartridge and one so-called color cartridge containing ink in
primary pigments (cyan, magenta, and yellow).
• While inkjet printers are cheaper than laser printers, they are more expensive to maintain.
LASER PRINTER:
• It is a non-impact printer that works at a very high speed and produces high quality
text and graphics.
• It uses the photocopier technology. When a document is sent to the printer, a laser beam
"draws" the document on a drum (which is coated with a photo-conductive material) using
electrical charges.
• After the drum is charged, it is rolled in toner (a dry powder type of ink).
PLOTTERS:
A plotter is used to print vector graphics with high print quality. Plotters are widely used to
draw maps, in scientific applications, and in applications like CAD, CAM, and CAE.
Chapter-8
Designing Efficient Programs
8.4 Program Design Tools:
8.4.1 Algorithms:
The typical meaning of an algorithm is a formally defined procedure for performing
some calculation. If a procedure is formally defined, then it must be implemented using some
formal language, and such languages are known as programming languages. The algorithm
gives the logic of the program, that is, a step-by-step description of how to arrive at a
solution. In general terms, an algorithm provides a blueprint to writing a program to solve a
particular problem. It is considered to be an effective procedure for solving a problem in a
finite number of steps. That is, a well-defined algorithm always provides an answer, and is
guaranteed to terminate.
Algorithms are mainly used to achieve software reuse. Once we have an idea or a
blueprint of a solution, we can implement it in any high-level language, such as C, C++, Java,
and so on. In order to qualify as an algorithm, a sequence of instructions must possess the
following characteristics:
• Be precise
• Be unambiguous
• Not repeat any instruction infinitely
• Yield the desired result after the algorithm terminates
8.4.2 Flowchart:
A flowchart is a graphical or symbolic representation of a process. It is basically used
to design and document complex processes to help viewers visualize the logic
of the process, so that they can gain a better understanding of the process and find flaws,
bottlenecks, and other less obvious features within it.
When designing a flowchart, each step in the process is depicted by a different
symbol and is associated with a short description. The symbols in the flowchart (refer Figure
8.10) are linked together with arrows to show the flow of logic in the process.
Start and end symbols are also known as the terminal symbols and are represented as
circles, ovals, or rounded rectangles. Terminal symbols are always the first and the last
symbols in a flowchart.
Arrows depict the flow of control of the program. They illustrate the exact sequence
in which the instructions are executed.
Generic processing step, also called an activity, is represented using a rectangle.
Activities include instructions such as add a to b, save the result.
Input/Output symbols are represented using a parallelogram and are used to get
inputs from the users or display the results to them.
A conditional or decision symbol is represented using a diamond. It is basically used
to depict a Yes/No question or a True/False test. The two arrows coming out of it, one from
the bottom point and the other from the right point, correspond to Yes or True, and No or
False, respectively. The arrows should always be labelled. A decision symbol in a flowchart
can have more than two arrows, which indicates that a complex decision is being taken.
Labelled connectors are represented by an identifying label inside a circle and are
used in complex or multi-sheet diagrams to substitute for arrows. For each label, the 'outflow'
connector must have one or more 'inflow' connectors. A pair of identically labelled
connectors is used to indicate a continued flow when the use of lines becomes confusing.
Fig. 8.11: Flowchart to ADD two numbers.
Fig. 8.13: Flowchart to compute salary of an employee.
8.4.3 Pseudocode:
Pseudocode is a compact and informal high-level description of an algorithm that uses
the structural conventions of a programming language. It facilitates designers to focus on the
logic of the algorithm without getting bogged down by the details of language syntax. An
ideal pseudocode must be complete, describing the entire logic of the algorithm, so that it can
be translated straightaway into a programming language.
It is basically meant for human reading rather than machine reading, so it omits the
details that are not essential for humans. Such details include variable declarations, system-
specific code, and subroutines.
Pseudocodes are an outline of a program that can easily be converted into
programming statements. They consist of short English phrases that explain specific tasks
within a program's algorithm. They should not include keywords in any specific computer
language.
Ex: Write a pseudocode for calculating the price of a product after adding the sales
tax to its original price.
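A possible pseudocode for this problem (the tax rate and variable names are illustrative) is:

```
READ original_price
SET sales_tax = original_price * tax_rate
SET final_price = original_price + sales_tax
WRITE final_price
```

Note that it uses short English phrases, omits variable declarations, and contains no keywords of any specific programming language.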
8.5 Types of Errors
While writing programs, we very often get errors in our programs. These errors, if not
removed, will either give erroneous output or will not let the compiler compile the program.
These errors are broadly classified under four groups as shown in Figure 8.14.
Chapter 9
Introduction to C
9.1 INTRODUCTION
The programming language C was developed in the early 1970s by Dennis Ritchie at
Bell Laboratories to be used by the UNIX operating system. It was named 'C' because many
of its features were derived from an earlier language called 'B'. Although C was designed for
implementing system software, it was later on widely used for developing portable
application software.
9.1.1 Background
C is one of the most popular programming languages. It is being used on several
different software platforms. In a nutshell, there are only a few computer architectures for which a
C compiler does not exist. It is a good idea to learn C because other programming
languages such as C++ and Java are based on C, which means you will be able to learn
them more easily in the future.
9.1.2 Characteristics of C
C is a robust language whose rich set of built-in functions and operators can be used
to write complex programs. The C compiler combines the features of assembly languages and
high-level languages, which makes it best suited for writing system software as well as
business packages. Some basic characteristics of C language that define the language and
have led to its popularity as a programming language are listed below.
C is a high-level programming language, which enables the programmer to
concentrate on the problem at hand and not worry about the machine code on which
the program would be run.
Small size: C has only 32 keywords. This makes it relatively easy to learn as compared
to other languages.
C makes extensive use of function calls.
C is well suited for structured programming. In this programming approach, C enables
users to think of a problem in terms of functions/modules where the collection of all
the modules makes up a complete program. This feature facilitates ease in program
debugging, testing, and maintenance.
Unlike PASCAL, it supports loose typing (a character can be treated as an integer
and vice versa).
It is a structured language, as the code can be organized as a collection of one or more
functions.
9.1.3 Uses of C
C is a very simple language that is widely used by software professionals around the
globe. The uses of C language can be summarized as follows:
C language is primarily used for system programming. The portability, efficiency,
ability to access specific hardware addresses, and low runtime demand on system
resources make it a good choice for implementing operating systems and embedded
system applications.
C has been so widely accepted by professionals that compilers, libraries, and
interpreters of other programming languages are often written in C.
For portability and convenience reasons, C is sometimes used as an intermediate
language for implementation of other languages. Examples of compilers which use C
this way are BitC, Gambit, the Glasgow Haskell Compiler, Squeak, and Vala.
Basically, C was designed as a programming language and was not meant to be used
as a compiler target language. Therefore, although C can be used as an intermediate
language it is not an ideal option.
C is widely used to implement end-user applications.
9.2 STRUCTURE OF A C PROGRAM
A C program is composed of pre-processor commands, a global declaration section,
and one or more functions (Figure 9.2). The pre-processor directives contain special
instructions that indicate how to prepare the program for compilation. One of the most
important and commonly used pre-processor commands is #include, which tells the compiler
that to execute the program, some information is needed from the specified header file.
Output
Welcome to the world of C Program
#include <stdio.h>
This is a pre-processor command that comes as the first statement in our code. All
pre-processor commands start with symbol hash (#). The #include statement tells the
compiler to include the standard input/output library or header file (stdio.h) in the program.
int main ()
Every C program contains a main() function, which is the starting point of the
program. int is the return type of the main() function. After all the statements in the program
have been written, the last statement of the program will return an integer value to the
operating system.
{ } The two curly brackets are used to group all the related statements of the main function.
All the statements between the braces form the function body. The function body contains a
set of instructions to perform the given task.
return 0;
This is a return command that is used to return the value 0 to the operating system to
give an indication that there were no errors during the execution of the program.
The source code is typed in a text editor, such as Windows Notepad, or in an Integrated
Development Environment (IDE). The source file is then processed by a special program called a
compiler.
The compilation process shown in Figure 9.4 is done in two steps. In the first step, the
pre-processor program reads the source file as text, and produces another text file as output.
Source code lines which begin with the # symbol are actually not written in C but in the pre-
processor language. The output of the pre-processor is a text file which does not contain any
pre-processor statements. This file is ready to be processed by the compiler, which translates
it into machine code stored in an object file. The linker then combines the object file with
library routines (supplied with the compiler) to produce the final executable file.
9.9 KEYWORDS
Like every computer language, C has a set of reserved words often known as
keywords that cannot be used as an identifier. All keywords are basically a sequence of
characters that have a fixed meaning. By convention, all keywords must be written in
lowercase (small) letters. Table 9.2 shows the list of keywords in C.
Table 9.2: Keywords used in C
9.10 IDENTIFIERS
Identifiers, as the name suggests, help us to identify data and other objects in the
program. Identifiers are basically the names given to program elements such as variables,
arrays, and functions. Identifiers may consist of sequence of letters, numerals, or underscores.
9.10.1 Rules for Forming identifier Names
Some rules have to be followed while forming identifier names. They are as follows:
Identifiers cannot include any special characters or punctuation marks (like #,
$, ^, ?, ., etc.) except the underscore (_).
There cannot be two successive underscores.
Keywords cannot be used as identifiers.
The case of alphabetic characters that form the identifier name is significant.
For example, 'FIRST' is different from 'first' and 'First'.
Table 9.4 shows the variants of basic data types. As can be seen from the table, we
have unsigned char and signed char. Do we have negative characters? No, then why do we
have such data types? The answer is that we use signed and unsigned char to ensure
portability of programs that store non-character data as char.
While the smaller data types take less memory, the larger types incur a performance
penalty. Although the data type we use for our variables does not have a big impact on the
speed or memory usage of the application, we should always try to use int unless there is a
special need to use any other data type.
Module 1: Ch-9.1 to 9.14
9.12 VARIABLES
A variable is defined as a meaningful name given to a data storage location in
computer memory. When using a variable, we actually refer to the address of the memory where
the data is stored. C language supports two basic kinds of variables: numeric and character.
9.12.1 Declaring variables
Each variable to be used in the program must be declared. To declare a variable,
specify the data type of the variable followed by its name. The data type indicates the kind of
values that the variable will store.
Variable names should always be meaningful and must reflect the purpose of their
usage in the program. The memory location of the variable is of importance to the compiler
only and not to the programmer. Programmers must only be concerned with accessing data
through their symbolic names. In C, variable declaration always ends with a semicolon, for
example:
int emp_num;
float salary;
char grade;
double balance_amount;
unsigned short int acc_no;
9.13 CONSTANTS
Constants are identifiers whose values do not change. While values of variables can
be changed at any time, values of constants can never be changed. Constants are used to
define fixed values, like the mathematical constant pi or the charge on an electron, so that their
value does not get changed in the program even by mistake.
A constant is an explicit data value specified by the programmer. The value of the
constant is known to the compiler at the compile time. C allows the programmer to specify
constants of integer type, floating point type, character type, and string type (Figure 9.8).
Constants defined using the #define pre-processor directive are, by convention,
placed at the beginning of the program to make them easy to find and modify at a later stage.
Look at the example given below, which defines the value of pi using #define.
#define pi 3.14159
#define service_tax 0.12
9.14 INPUT/OUTPUT STATEMENTS IN C
9.14.1 Streams
A stream is the source of data as well as the destination of data. Streams are associated with a
physical device such as a monitor or a file stored on the secondary memory. C uses two forms of
streams—text and binary.
9.14.2 Formatting Input/Output
C language supports two formatting functions, printf and scanf. printf is used to
convert data stored in the program into a text stream for output to the monitor, and scanf is
used to convert the text stream coming from the keyboard to data values and store them in
program variables.
9.14.3 printf()
The printf function (stands for print formatting) is used to display information
required by the user and also prints the values of the variables. For this, the printf function
takes data values, converts them to a text stream using formatting specifications in the control
string and passes the resulting text stream to the standard output. The control string may
contain zero or more conversion specifications, textual data, and control characters to be
displayed.
printf("control string", variable list);
The function accepts two parameters-control string and variable list. The control
string may also contain the text to be printed like instructions to the user, captions, identifiers,
or any other text to make the output readable.
Examples:
9.14.4 scanf()
The scanf() function stands for scan formatting and is used to read formatted data
from the keyboard. The scanf function takes a text stream from the keyboard, extracts and
formats data from the stream according to a format control string and then stores the data in
specified program variables. The syntax of the scanf() function can be given as:
Example program:
Write a program to demonstrate the use of printf and scanf statements to read and print
values of variables of different data types.
#include <stdio.h>
int main()
{
int num;
float amt;
char code;
double pi;
long int population_of_india;
char msg[10];
printf("\n Enter the value of num : ");
scanf("%d", &num);
printf("\n Enter the value of amt : ");
scanf("%f", &amt);
printf("\n Enter the value of pi : ");
scanf("%le", &pi);
printf("\n Enter the population of India : ");
scanf("%ld", &population_of_india);
printf("\n Enter the value of code : ");
scanf(" %c", &code);
printf("\n Enter the message : ");
scanf("%s", msg);
printf("\n NUM = %d \n AMT = %f \n PI = %e \n POPULATION OF INDIA = %ld \n
CODE = %c \n MESSAGE = %s", num, amt, pi, population_of_india, code, msg);
return 0;
}
OUTPUT: