Unit I - Introduction To Information Technology
1 Introduction to Information Technology

Learning Objectives
l To recapitulate history of computers
l To differentiate between computers of various generations
l To appreciate revolution in digital computing with the invention of microcomputer
l To define basic computer hardware architecture
l To know various types of software
l To understand various generations of programming languages and tools

Information technology is much more than just computers. Today, it is the convergence of hardware, software, telecommunication, data, networks, multimedia, images, Internet, applications, and people. These all revolve round one object: the computer. But computers were not always what we see today.

History

Computers have come a long way since Charles Babbage, a
mathematician, designed his mechanical device in 1835. He
called his machine Analytical Engine. Babbage drew his
inspiration from the centuries-old abacus that was used for
counting numbers. Babbage’s machine could only perform
very simple arithmetical calculations. He replaced the abacus
beads with mechanical gears, which were similar to the
arithmetic logic unit of modern computers. There were lots
of developments by scientists world wide. During World
War II, there were many developments in the digital arena.
The electronic circuits, relays, capacitors, and vacuum tubes
replaced the mechanical parts. The architecture designed
by John von Neumann was most suitable for digital
computing. Even today, most contemporary computers use
this architecture. But the most noticeable development and
improvement over Charles’ machine was the invention of
ENIAC (Electronic Numerical Integrator and Computer)
in 1945 by John Mauchly and J. Presper at the University of
Pennsylvania. It was the first electronic general-purpose
machine and was 1000 times faster than other contemporary
computers.
Fig. 1.3 MITS Altair 8800
Fig. 1.4 IBM's first PC 5150 with two floppy drives (callouts: keyboard, floppy drive, speaker)
The journey of the personal computer started with Intel Corporation introducing the tiny microprocessor, or integrated chip (IC), code-named '8088'. The technology improved continuously. The size became smaller, and the speed and efficiency doubled and quadrupled with every new release (Fig. 1.5). It is said that if the automobile industry had made progress the way computers did, we would have had a car the size of a matchbox with the speed of a jet within forty years. Table 1.1 shows a tentative time of introduction of each chip and its speed.
Fig. 1.5 Modern-day personal computer: really personal!
The fast evolution of microcomputers was because of the invention of the
microchip, called the integrated circuit (IC). In this technology, the computer
logic is 'burnt into' the layers of the microprocessor (Fig. 1.6). A chip the
size of our thumbnail can hold enough logic to run the computer programs
we have in our computers. The pace of the evolution of computers was
always matched by developments in the field of peripheral devices.
Peripheral devices are the various input, output, and storage equipment,
such as the mouse, monitor, printer, and hard disk. Monitors evolved from
monochrome to high-resolution colour to liquid crystal display (LCD) and
thin film transistor (TFT) screens.
Table 1.1 Year of introduction of each chip, the computers built around it, its word-length, and its clock speed
These devices, made mandatory by IBM, form a complete computer and
constitute the input, output, and processing system of a computer. Figure 1.7
explains the basic input and output devices. The latest devices can be added
to this diagram.
The central processing unit (CPU) consists of three main components:
the arithmetic logic unit (ALU), the control unit, and the primary storage unit.
The arithmetic logic unit (ALU) performs all arithmetical and logical
calculations in the computer. It can add, subtract, multiply, and divide numbers,
and it can also handle logical comparisons and negative numbers. The control
unit controls and coordinates the various parts and components of a
computer: all input and output devices, storage units, and other internal
units. We will now briefly discuss the primary storage system in a computer.
Computer Speed
Just as memory is measured in bits and bytes, computer speed is expressed in
terms of word-length, the number of bits a computer processes at a time. If a
computer has a word-length of 16 bits, it can process 16 bits, or 2 bytes, at a
time. As we have seen in Table 1.1, a Pentium computer has a 32-bit word-
length, i.e. it can process up to 4 bytes at a time. The chip speed is governed
by the 'clock' speed of the computer. This is an internal speed measured in
megahertz (MHz), where one megahertz is one million cycles per second. If a
computer has a clock speed of 2600 MHz, it can perform 2600 million cycles
per second.
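The relationships above are simple arithmetic; this short Python sketch (illustrative only, not from the text) makes them concrete:

```python
# Word-length: bits processed at a time; 8 bits = 1 byte
def bytes_per_word(word_length_bits):
    return word_length_bits // 8

# Clock speed: 1 MHz = 1 million cycles per second
def cycles_per_second(clock_mhz):
    return clock_mhz * 1_000_000

print(bytes_per_word(16))       # 16-bit word-length: 2 bytes at a time
print(bytes_per_word(32))       # 32-bit word-length: 4 bytes at a time
print(cycles_per_second(2600))  # 2600 MHz: 2,600,000,000 cycles per second
```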
A computer system is built of two major components: hardware and
software. We can see, touch, and feel hardware; it is the tangible part of the
computer. Software, by contrast, is intangible and stays in coded form either
on the hard disk drive or in the memory of the computer.
Hardware
Hardware consists of the following components.
Cabinet It is a box that houses the main components of the computer such
as motherboard, microprocessor, memory chips, hard disk, floppy disk drive,
and the CD-ROM drive.
Monitor It is also known as visual display unit (VDU). Monitors come with
cathode-ray tube (CRT) technology, in which a beam of electrons is thrown
on the inner surface of the tube to form characters and images. Now, the new-
generation monitors are LCD and TFT.
Keyboard It is an input device. Computer keyboard has all numerals and
alphabets of the English language. Besides, it has some special-purpose keys
and function keys, which need no elaborate discussion here.
Mouse It is also an input device and is a must for running Windows (or
GUI) applications.
Motherboard This is the main board inside the computer box that
accommodates the main microprocessor, memory chips (RAM and ROM),
and input-output unit for connecting with the peripheral devices.
Software is a set of instructions to the computer to perform a certain task and
that resides in the computer in the coded form as binary digits of 0 and 1.
Windows, Word, Excel, SAP, Tally, Unix, Linux, Visual Basic, and Oracle
Forms are all examples of software. Software is an important part of a computer
system. You can compare software with the blood and life of a person, which
makes the person alive. W.S. Jawadekar (2004) defines software as ‘a set of
instructions to acquire inputs and manipulate them to produce the desired
output in terms of functions and performance as determined by the user of
the software.’ It also includes a set of documents such as the software manual,
meant for the users to understand the software system. Today’s software
comprises the source code, executables, design documents, operations and
system manuals, and installation and implementation manuals.
Software can be broadly classified into two categories—system software
and application software.
System Software
System software manages computer resources, such as the file system,
other hardware peripherals, and communication links. Various operating
systems, browsers, utilities, compilers, and device drivers are examples of
system software. As the computer technology evolved from mainframes to
minicomputers, to microcomputers, software systems also went through major
metamorphosis. Initially, the mainframe did not have an operating system at
all. Then, Unix was introduced and became a very popular operating system
for mainframes and minicomputers.
Application Software
Application software is a program created to perform certain tasks for the end
users. Some of these tasks are maintaining a fee register in a school, keeping
transaction records of an account-holder in a bank, creating bills at the point-
of-sales location, managing inventory in a warehouse, playing a music or movie
file, creating a database system allowing users to construct database structure,
and developing a programming language that further facilitates creation of
software by a programmer.
All kinds of software that are used to manage, run and monitor businesses
and process data come under application software category. The word
processors, database management software, spreadsheet software, compressing-
decompressing software, media players, and software that can play a movie
file are all examples of application software. It is assumed that students are
aware of the basic office management tools such as word processors,
spreadsheets, and Internet browsers. Therefore, detailed study of these
applications is not a part of this text. Under application software, we will discuss
retail application software in Chapters 7, 10, and 11.
Third-generation Languages
In these languages, one can write software for both microcomputers and
mainframe computers. The main third-generation languages are C, COBOL,
FORTRAN, BASIC, and Pascal, which became popular for system, database,
and application programming. System programming is the development of
tools, compilers, utilities, and operating systems; DOS, Windows, and Linux
are examples of operating systems. Database programming is the development
of applications around database management systems, such as FoxPro, Access,
and Oracle. Application programming is the development of applications that
perform a certain task and capture, manipulate, and retrieve data from
databases. Websites and business software packages (such as Tally, RetailPro,
SAP, and MS-Office) are examples of application programming. These
programming languages are much easier to use than assembly languages, since
they come with a compiler, which converts their English-like phrases into the
binary language that machines understand.
Fourth-generation Languages
The third-generation languages needed deep procedural detail and step-by-
step instructions to carry out a task. The job of the programmer was not only
to tell the computer what to do but also how to do it. Besides, these languages
were not capable of handling GUI and multimedia convergence. So, computer
technologists worldwide developed more languages to overcome the drawbacks
of the third-generation languages. These languages are called fourth-
generation languages (4GL). Visual C++, Visual Basic, Oracle Developer
Tools, SQL, PowerBuilder, FrontPage, Java, CGI-Perl, PHP, and .NET are
examples of 4GL.
'--------------------------------------------------------------------------
' Procedure : LastDay
' DateTime  : 19-Oct-2006
' Developer : Sunil
' Purpose   : Return Last Date of Month and Year
' Modify History :
' Date        Dr/Cr       Developer       Details
'--------------------------------------------------------------------------
Public Function fniLastDay(p_intMonth As Integer, p_intYear As Integer) As Integer
    On Error GoTo ErrorHandler
    ' Day 0 of the following month is the last day of the given month
    fniLastDay = Day(DateSerial(p_intYear, p_intMonth + 1, 0))
    Exit Function
ErrorHandler:
    ' Return 0 for invalid inputs (body completed here for illustration)
    fniLastDay = 0
End Function
Table 1.2 Lines of code per function point

Language            Lines of Code per Function Point
Assembly            320
C                   128
Fortran             105
C++                  56
Java                 55
Visual Basic         35
Other 4GL            20
Code generators      15
Table 1.2 shows how lines of codes per function point drop sharply. (The
function point is a measure of the complexity of the software.)
The Internet, the network of networks, has changed the world. It has not
only been instrumental in transforming world economies but has also shaken
the basic fabric of social interaction. It has changed the way we shop, socialize,
study, and bank, the way we make payments, and the way scholars conduct
research. It has also changed the way a father writes a letter to his daughter,
and the way we look at our old family albums.
We would not be doing business the way we did a decade back. E-commerce
has opened new avenues in business, and organizations cannot ignore its impact.
The dot-com bust of 2000 was a necessary shake-up for unprofitable businesses
that had built their foundations on faulty revenue models. Now, businesses are
taking care of this aspect. E-commerce has emerged as an enabler of existing
business systems, adding revenues and facilitating business operations at the
click of a mouse.
Information technology influences every aspect of life and is going to have
a big impact on commerce and industry. Our progress will therefore depend
on what our systems are able to do, and the market share of a company will
depend upon what its technology can support. Modern organizations will have
to be prepared for it.
Summary
The journey of modern computing started with the invention of digital
computers in the 1950s. The microcomputer revolution started in the 1980s,
after the invention of the microprocessor in the early 1970s. Very large-scale
integration (VLSI) circuit technology was behind this revolution. Computers
share the same input-process-output architecture: data is entered through
various input devices; it is processed in the CPU, which has an ALU, a control
unit, and memory units; and the output is sent to devices such as the monitor
and printer. Computers consist of various hardware components (such as the
CPU, input and output devices, and peripherals) and software components.
Broadly, software is classified into system software and application software.
System software consists of various operating systems, drivers, compilers, and
assemblers. DOS, Windows, Unix, Linux, and OS/2 are examples of operating
systems. Internet browsers and modem drivers are utilities under the
same category. Computer programming languages, database management
software, business application packages, and retail EPOS software are examples
of application software. With the invention and evolution of new generations
of computer hardware, programming languages also evolved. As programming
languages became richer in functionality and delivery, it became easier for a
programmer to write software using them. Today, languages and tools of the
fourth generation and beyond are available, with which even a not-so-technical
person can try to build programs.
1. Visit a retail store in your neighbourhood and prepare a report on the use of IT at the store.
2. Your organization is implementing retail software. You, as a head of the implementing team, have
to decide about the operating system. Your liking is for Linux. Build a business case so that your
CEO is convinced.
Jawadekar, W.S. 2005, Software Engineering, Tata McGraw-Hill, New Delhi.
Laudon, Kenneth and Jane Laudon 2002, Management Information Systems, Pearson Education (Singapore).
Linux For You, www.lfymag.com.
CASE STUDY
The Company
The Haldia Dock Complex (HDC), situated near the Bay of Bengal, is one of the most
active ports on the eastern front of the country. It runs its day-to-day operations and
heavy-duty work applications on Linux, on a uniquely designed installation with an
active-active cluster set-up that is considered one of the best of its kind. This has
ensured efficiency and cost-effectiveness.
Initial Groundwork
The officials at HDC were keen on modernizing their IT and administrative network
infrastructure. At the same time, they were studying the cost-effectiveness of implementing a
solution to handle the overall load balancing that was required at the complex.
This set-up involved several major players: (a) IBM, which supplied the machines
and its DB2 software in addition to other hardware infrastructure; (b) NIC, the
overall governing body (unless NIC sanctioned the project, it couldn't take off);
and (c) officials from the Haldia complex and Red Hat India, who configured and
implemented the active-active cluster.
Unlike older organizations that had proprietary systems in their infrastructure, HDC was
a fresh set-up. Without much ado, almost everyone agreed to go in for Linux from the very
beginning.
The project was based on the Red Hat Enterprise Linux cluster suite using DB2, a SAN
switch (for shared storage), WAS (WebSphere Application Server) from IBM, and a DRS
(Disaster Recovery Site) about 8 km away from the main installed base to provide
redundancy, all connected over a fibre-optic channel network.
Active-active clustering
In this type of application environment, two servers access the data
simultaneously. If one server shuts down, the other picks up the load of running
all services. These servers access different partitions, so in a typical cluster
manager environment the user can simply spread the application services across
the two servers in any way that seems appropriate. Applications are processed
faster as both servers share the total load. This is called active-active clustering,
since both servers are simultaneously active.
[Figure: Applications on servers A and B connected over a LAN/WAN, each accessing its own data partition (Data A, Data B), with failover between the two servers]
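The failover behaviour described above can be sketched in a few lines of Python (an illustrative model only; names such as Node and fail_over are not from the case study):

```python
# Minimal model of an active-active pair: each node runs its own services,
# and a surviving node picks up the services of a failed peer (failover).
class Node:
    def __init__(self, name, services):
        self.name = name
        self.services = list(services)  # services this node is running
        self.alive = True

def fail_over(failed, survivor):
    """Migrate all services from the failed node to the survivor."""
    failed.alive = False
    survivor.services.extend(failed.services)
    failed.services = []

node1 = Node("Node 1", ["finance", "payroll"])
node2 = Node("Node 2", ["marine", "hr"])

fail_over(node1, node2)  # Node 1 shuts down; Node 2 runs everything
print(node2.services)    # ['marine', 'hr', 'finance', 'payroll']
```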
To meet these criteria, the experts on the project had to carefully design and implement
a smart maintenance and support strategy.
Sandeep Khuperkar, technical specialist, Red Hat India, one of the main architects
responsible for the active-active cluster set-up at HDC, agrees, 'We at Red Hat saw the
overall design plan and recommended Linux as the de facto platform for the mission-critical
applications. This decision was based on the compatibility and configurability of the OS
with other software and hardware, such as IBM's DB2 and WAS server applications, in sync
with the DRS.'
The HDC set-up for the active-active cluster involves one cluster pair for the database
and three servers for WAS. The NIC developed the applications for the HDC active-active
cluster, which are used by the finance, marine, payroll, and human resource departments.
At HDC, Red Hat Enterprise Linux forms the basic platform for the entire server set-up,
with multiple instances running on the active-active cluster, each accessing its own
respective DB (database) partitions on external storage (Fig. 1.13).
In the set-up depicted in Fig. 1.14, the cluster manager configuration developed by Red
Hat through the SAN switch comprises a pair of servers connected to an external storage
array. The cluster manager software at the HDC controls access to storage partitions,
so that only one instance of DB2 can access a particular partition at a time. If one of
the servers shuts down or fails, the other server detects the event and automatically
starts to run the applications that were previously running on the failed server. This
migration of an application from the failed server to the remaining server is called
fail-over.
Each server then operates in the same manner as if it were a single stand-alone system,
running applications and accessing data on its allocated storage partitions. But at the
HDC, multiple servers have been used to streamline the steady, demanding flow of data;
this layout is often referred to as scale-out computing. In addition to their connection
to the shared storage array, the two servers are interconnected using a network or serial
interface so that
they can communicate with each other. So, in this case, there are two nodes: Node 1 and
Node 2.
In case a particular application instance fails on Node 1, it is invoked on Node 2,
thereby giving users continuity of data access. As soon as Node 1 is rectified, the
application instance that was relocated to Node 2 is relocated back to Node 1.
Sandeep Khuperkar points out, 'This is very important, as scalability needs to be
enhanced without any breakdown in the service. So the entire application must be
designed for fail-over and maintenance.'
Cluster communication
The cluster manager configuration comprises two server systems connected to a shared
external storage array. Typically, in most environments, the two servers are connected
by LAN or RS-232 serial cables. The cluster manager software uses these connections
for heartbeating between the servers. Each server heartbeats with the other, sending
regular short messages to check that it is operating correctly. If the heartbeats do
not receive an acknowledgement, the server takes this as an indication that the remote
server has failed.
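The heartbeat logic described in the box can be sketched as follows (a simplified model, not the actual Red Hat cluster manager code; the timeout value is an assumption):

```python
import time

# Simplified heartbeat check: a peer is considered failed if its last
# acknowledgement is older than the allowed timeout (assumed 3 seconds here).
HEARTBEAT_TIMEOUT = 3.0

def peer_has_failed(last_ack_time, now=None):
    """Return True if the remote server's last heartbeat ack is too old."""
    if now is None:
        now = time.monotonic()
    return (now - last_ack_time) > HEARTBEAT_TIMEOUT

# A healthy peer acknowledged 1 second ago; a failed one, 10 seconds ago.
print(peer_has_failed(100.0, now=101.0))  # False: ack is only 1 s old
print(peer_has_failed(100.0, now=110.0))  # True: ack is 10 s old, start fail-over
```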
The HDC is a very high-rate active-active cluster, and various applications are being
developed and ported onto the system. As a result, the design had to consider compatibility
with other operating systems and high-end performance. It is therefore natural that the
cluster must be able to work with the most common databases and applications
(interconnectivity) and must be easy to deploy and use with custom applications.
On this aspect, Sandeep adds, 'We have tested the cluster successfully and it has worked
quite efficiently. Looking at the mission-critical applications required at HDC, the
cluster management suite (Red Hat) was configured to use multiple instances of DB2
running on several nodes.'
How does one monitor the network effectively? Sandeep explains, 'The enterprise suite
has a very good GUI, which helps the system administrator to check the status and monitor
the nodes on the cluster.'
So, what's new?
Ask any expert or user of Linux and the opinion will invariably be that it is a very
strong, stable, robust OS with scope for further enhancements and future requirements.
And what do the people at the site have to say? Abhijit Das, Deputy Secretary, HDC,
says, 'We are completely satisfied with the active-active cluster set-up and have found
the Red Hat application suite to be perfectly stable for our heavy-duty work.'
Questions:
1. What was the major activity at the Haldia Dock Complex?
2. Which activities carried out at HDC involved establishing a LAN, and which ones
indicated an operating system change?
3. Which part of the operations was most critical, and why?
Source: Adapted from Nilesh Kakade, Linux For You, December 2003, www.lfymag.com.