Computer Technologies and Programming-1

The document provides an overview of computer science, covering its theoretical and practical aspects, including programming, computer architecture, artificial intelligence, and human-computer interaction. It discusses the automation of information processes, the capabilities and limitations of computers, and categorizes computers based on function and performance. Additionally, it outlines the generations of computers from the first to the fifth, detailing their technological advancements and characteristics.


Computer technologies and programming

Lecture 1
Basic aspects of information technology

Computer science is the study of computers, including their design, operation, and use in processing information.
Computer science combines both theoretical and practical aspects of engineering, electronics, information theory, mathematics, logic, and human behaviour. Aspects of computer science range from programming and computer architecture to artificial intelligence and robotics.
Computer science is a combination of theory, engineering, and experimentation. In some cases, computer scientists develop a theory, then engineer a combination of computer hardware and software based on that theory, and test it experimentally. An example of the theory-driven approach is the development of new software engineering tools that are then evaluated in actual use.
In other cases, experimentation may result in new theory, such as the discovery that an artificial neural network exhibits behaviour similar to neurons in the brain, leading to a new theory in neurophysiology.
Computer science can be divided into four main fields: software development, computer architecture (hardware), human-computer interaction (the design of the most efficient ways for humans to use computers), and artificial intelligence (the attempt to make computers behave intelligently).
Software development is concerned with creating computer programs that perform efficiently.
Computer architecture is concerned with developing optimal hardware for specific computational needs.
The areas of artificial intelligence (AI) and human-computer interaction often involve the development of both software and hardware to solve specific problems.
Automation of information processes
Information is knowledge about certain objects, phenomena, or processes in the environment. Any form of human activity involves the transmission and processing of information. It is necessary for the correct management of the surrounding reality, the attainment of the goals set and, ultimately, for human existence. Any system (socio-economic, technical, or a system in animate nature) operates in a constant relationship with the external environment, that is, with other systems of higher and lower levels. The relationship is carried out through information, which conveys both management commands and the information needed to make the right decisions. The concept of information as the most important element of a system, embracing all aspects of its life activity, can be considered universal, applicable to any system.
There is no single scientific consensus about the quantitative meaning of the concept of "information". Different scientific directions give different definitions based on the objects and phenomena that they study. Some of them believe that information can be expressed quantitatively, giving definitions of the amount and volume of information (information measures); others are limited to qualitative interpretations.
The syntactic measure of information is used for the quantitative expression of impersonal information that does not express a semantic relation to objects.
The semantic amount of information is measured by the thesaurus measure. It expresses the ability of the observer (user) to comprehend the incoming message.
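A common example of a syntactic (purely quantitative) measure is Shannon's entropy, which counts bits per symbol without regard to meaning. The sketch below is an illustrative Python example, not part of the lecture material; the sample messages are invented:

```python
import math
from collections import Counter

def entropy_bits(message: str) -> float:
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))
    over the relative frequency p of each symbol in the message."""
    total = len(message)
    counts = Counter(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A message of one repeated symbol carries 0 bits per symbol;
# four equally likely symbols carry 2 bits per symbol.
print(entropy_bits("abcd"))  # 2.0
```

Note that this measure is indifferent to what the symbols mean, which is exactly why it is called syntactic rather than semantic.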
Information processes (collection, processing and transmission of
information) have always played an important role in science, technology and the
life of society. In the course of the evolution of mankind, there is a steady tendency
towards automation of these processes, although their internal content has
essentially remained unchanged.
The collection of information is the activity of a subject in the course of which he receives information about an object of interest to him. Information can be collected either by humans or by means of technical means and systems (hardware). For example, a user can obtain information about the movement of trains or aircraft himself by studying the schedule, or from another person directly, or through documents compiled by that person, or by means of technical means (automatic help desks, telephone, etc.).
The task of gathering information cannot be solved in isolation from other tasks, in particular the task of information exchange (transfer).
Exchange of information is a process in which the source of information transfers
it, and the recipient accepts it. If errors are detected in transmitted messages, then
this information is retransmitted. As a result of the exchange of information
between the source and the recipient, an "information balance" is established, in
which, ideally, the recipient will have the same information as the source.
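The exchange-with-retransmission idea can be sketched as follows. This is a minimal illustration that assumes a CRC-32 checksum for error detection; the message and the simulated corruption are invented for the example:

```python
import zlib

def send(payload: bytes) -> tuple[bytes, int]:
    # The source attaches a checksum so the recipient can detect errors.
    return payload, zlib.crc32(payload)

def receive(payload: bytes, checksum: int) -> bool:
    # The recipient accepts the message only if the checksum matches.
    return zlib.crc32(payload) == checksum

message = b"train schedule: 08:15"
frame, crc = send(message)

# Simulate corruption in transit: the check fails, so the recipient
# requests a retransmission, after which source and recipient hold
# the same data (the "information balance" is established).
corrupted = b"train schedule: 09:15"
if not receive(corrupted, crc):
    frame, crc = send(message)  # retransmit
assert receive(frame, crc)
```

Real transmission protocols are far more elaborate, but the detect-and-retransmit loop above is the core of how the source and recipient converge on the same information.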
Information is exchanged with the help of signals, which are its material carrier. The sources of information can be any objects of the real world that have certain properties and abilities. If an object belongs to inanimate nature, it produces signals that directly reflect its properties. If the source object is a person, the signals it generates can not only directly reflect its properties but also correspond to signs that people develop for the purpose of information exchange.
The recipient can use the received information more than once. To do so, he must fix it on a material carrier (magnetic, photographic, film, etc.). The process of forming the initial, non-systematized array of information is called the accumulation of information. Among the recorded signals there may be those that reflect valuable or frequently used information. Some information may be of no special value at a given time, although it may be required in the future.
Storage of information is the process of maintaining the original information in a form suitable for subsequent use.
COMPUTER CAPABILITIES AND LIMITATIONS
Like all machines, a computer needs to be directed and controlled to perform a
task successfully.
Let’s discuss the capabilities and the limitations of a computer system.
First of all, computers are capable of doing repetitive operations. A computer can perform similar operations thousands of times without becoming tired.
Secondly, computers can process information extremely fast. A modern computer can solve a problem millions of times faster than a mathematician.
Thirdly, computers are very accurate and reliable, especially when they perform a large number of operations per second. Sometimes, however, computers break down and have to be repaired.
In the fourth place, general-purpose computers can solve various kinds of problems. Every big problem can be solved by solving a number of little problems one after another.
Finally, a computer, unlike a human being, has no intuition. A person may suddenly find the answer to a problem without working out the details, but a computer can only proceed as it has been programmed. Even using these very limited computer capabilities, many tasks can be done quite easily.

CATEGORIES OF COMPUTER
Computers traditionally have been divided into four categories, based on their function, physical size, cost, and performance.

1. Algorithm and its properties
2. Algorithm. The types of the description of the algorithm
3. Elements of block diagram and examples
Modern digital computers are all conceptually similar, regardless of size. They can be divided into several categories on the basis of cost and performance: the personal computer or microcomputer is a relatively low-cost machine, usually of desktop size. The workstation is a microcomputer with enhanced graphics and communication capabilities, which makes it useful for office work.
The minicomputer is generally too expensive for personal use, with capabilities suited to a business, school, or laboratory.
The mainframe computer is a large, expensive machine with the capability of serving the needs of major business enterprises, government departments, and scientific research establishments. The largest and fastest of these are called supercomputers.
Minicomputers are larger than microcomputers and are generally used in business and industry for specific tasks, such as processing payroll. A minicomputer is a computer intermediate in size between a mainframe computer and a microcomputer. The Digital Equipment Corporation VAX and the IBM AS/400 are examples.
A minicomputer occupies a large area within a room and supports 10 to 100 users at a time. Minicomputers are used by medium-sized businesses and academic institutions. They are rapidly being replaced by microcomputers.
Minicomputer is a mid- level computer built to perform complex computations.
Minicomputers are also connected to other minicomputers on a network and
distribute processing among all the attached machines. They are used heavily in
transaction processing applications and as interfaces between mainframe computer
systems and wide area networks.
Mainframes are large, fast, and expensive computers. They are generally used in business and by government to provide centralized storage, processing, and management for large amounts of data. A mainframe computer occupies a specially air-conditioned room and typically supports 100 to 500 users at one time. The IBM 370 and IBM 3090 are examples of mainframe computers.
Mainframe computer is a high- level computer designed for the most intensive
computational tasks. The most powerful mainframes are called supercomputers.
They perform highly complex and time- consuming computations. These
computers are used in both pure and applied research by scientists, large businesses
and the military.
Generations of computers

The period of the first generation was 1946-1959. First-generation computers used vacuum tubes as the basic components for memory and for the circuitry of the CPU (Central Processing Unit). These tubes, like electric bulbs, produced a lot of heat, and the installations frequently broke down. Therefore, they were very expensive, and only large organizations were able to afford them.
In this generation, mainly batch processing operating systems were used. Punched cards, paper tape, and magnetic tape were used as input and output devices. The computers of this generation used machine code as the programming language.
The main features of the first generation are −
 Vacuum tube technology
 Unreliable
 Supported machine language only
 Very costly
 Generated a lot of heat
 Slow input and output devices
 Huge size
 Need of AC
 Non-portable
 Consumed a lot of electricity
The period of the second generation was 1959-1965. In this generation, transistors were used; they were cheaper, consumed less power, were more compact, and were more reliable and faster than the first-generation machines made of vacuum tubes. In this generation, magnetic cores were used as the primary memory, and magnetic tape and magnetic disks as secondary storage devices.
In this generation, assembly language and high-level programming languages like FORTRAN and COBOL were used. The computers used batch processing and multiprogramming operating systems.
The main features of second generation are −
 Use of transistors
 Reliable in comparison to first generation computers
 Smaller size as compared to first generation computers
 Generated less heat as compared to first generation computers
 Consumed less electricity as compared to first generation computers
 Faster than first generation computers
 Still very costly
 AC required
 Supported machine and assembly languages

The period of the third generation was 1965-1971. Third-generation computers used Integrated Circuits (ICs) in place of transistors. A single IC has many transistors, resistors, and capacitors along with the associated circuitry.
The IC was invented by Jack Kilby. This development made computers smaller, more reliable, and more efficient. In this generation, remote processing, time-sharing, and multiprogramming operating systems were used. High-level languages (FORTRAN II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used during this generation.
The main features of third generation are −
 IC used
 More reliable in comparison to previous two generations
 Smaller size
 Generated less heat
 Faster
 Lesser maintenance
 Costly
 AC required
 Consumed lesser electricity
 Supported high-level language
The period of the fourth generation was 1971-1980. Fourth-generation computers used Very Large Scale Integration (VLSI) circuits. VLSI circuits, having about 5000 transistors and other circuit elements with their associated circuits on a single chip, made it possible to build the microcomputers of the fourth generation.
Fourth-generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the Personal Computer (PC) revolution. In this generation, time sharing, real-time networks, and distributed operating systems were used. High-level languages like C, C++, and dBASE were used in this generation.
The main features of fourth generation are −
 VLSI technology used
 Very cheap
 Portable and reliable
 Use of PCs
 Very small size
 Pipeline processing
 No AC required
 Concept of internet was introduced
 Great developments in the fields of networks
 Computers became easily available
The period of the fifth generation is 1980 to date. In the fifth generation, VLSI technology became ULSI (Ultra Large Scale Integration) technology, resulting in the production of microprocessor chips with ten million electronic components.
This generation is based on parallel processing hardware and AI (Artificial Intelligence) software. AI is an emerging branch of computer science concerned with the means and methods of making computers think like human beings. High-level languages like C, C++, Java, and .NET are used in this generation.

AI includes −
 Robotics
 Neural Networks
 Game Playing
 Development of expert systems to make decisions in real-life situations
 Natural language understanding and generation
The main features of fifth generation are −
 ULSI technology
 Development of true artificial intelligence
 Development of Natural language processing
 Advancement in Parallel Processing
 Advancement in Superconductor technology
 More user-friendly interfaces with multimedia features
 Availability of very powerful and compact computers at cheaper rates

Lecture 2
Technical means of information processing
Computer architecture. The main components of the computer
What is hardware? Webster's dictionary gives us the following definition of hardware: the mechanical, magnetic, electronic, and electrical devices composing a computer system.
Computer hardware can be divided into four categories:
1) input hardware
2) processing hardware
3) storage hardware
4) output hardware
The purpose of the input hardware is to collect data and convert them into a form suitable for computer processing. The most common input device is a keyboard. It looks very much like a typewriter. The mouse is a hand-held device connected to the computer by a small cable. As the mouse is rolled across the mouse pad, the cursor moves across the screen. When the cursor reaches the desired location, the user usually pushes a button on the mouse once or twice to signal a menu selection or a command to the computer.
The light pen uses a light-sensitive photoelectric cell to signal screen position to the computer. Another type of input hardware is the optic-electronic scanner, which is used to input graphics as well as typeset characters. A microphone and a video camera can also be used to input data into the computer.
The purpose of processing hardware is to retrieve, interpret, and direct the execution of software instructions provided to the computer. The most common components of processing hardware are the Central Processing Unit and main memory.
The Central Processing Unit (CPU) is the brain of the computer. It reads and interprets software instructions and coordinates the processing activities that must take place. The design and speed of the CPU affect the processing power and the speed of the computer. With a well-designed CPU in your computer, you can perform highly sophisticated tasks in a very short time.
Memory is the component of the computer in which information is stored. There are two types of computer memory: RAM and ROM.
RAM (random access memory) is the volatile computer memory, used for creating, loading, and running programs and for manipulating and temporarily storing data.
ROM (read only memory) is nonvolatile, nonmodifiable computer
memory, used to hold programmed instructions to the system.
The more memory you have in your computer, the more operations you
can perform.
The purpose of storage hardware is to store computer instructions and data in a form that is relatively permanent, and to retrieve them when needed for processing. The most common ways of storing data are the hard disk, the floppy disk, and the CD-ROM.
Hard disk is a rigid disk coated with magnetic material, for storing
programs and relatively large amounts of data.
Floppy disk (diskette): a thin, usually flexible plastic disk coated with magnetic material, for storing computer data and programs. There are two formats for floppy disks: 5.25-inch and 3.5-inch. The 5.25-inch format is not used in modern computer systems because of its relatively large size, flexibility, and small capacity. 3.5-inch disks are formatted to 1.44 megabytes and are widely used.
CD-ROM (compact disk read only memory) is a compact disc on which
a large amount of digitized read-only data can be stored.
The purpose of output hardware is to provide the user with the means to view information produced by the computer system. Information is output in either hardcopy or softcopy form. Hardcopy output can be held in your hand, such as paper with text (words or numbers) or graphics printed on it. Softcopy output is displayed on a monitor.
Monitor is a component with a display screen for viewing computer data,
television programs, etc.
Printer is a computer output device that produces a paper copy of data or
graphics.
Modem is an example of communication hardware – an electronic device
that makes possible the transmission of data to or from computer via
telephone or other communication lines.
Processing hardware
It should be noted that a computer does not contain just a single processor; there can be dozens of them. The video card, the sound card, and a large number of external devices (e.g., a printer) are equipped with their own processors. In terms of productivity, such processors can often compete with the central processor; however, unlike the central processor, they are all specialized within a narrow framework.
One of them is engaged in sound processing, another in building three-dimensional images. The distinctive feature of the central processor is its universality: if needed, it can perform any task, whether decoding a music file or processing a video clip.
Any processor is a silicon crystal made by a special technology, and therefore processors are sometimes called "stones". Inside this "stone" is a large number of transistors, connected by bridges to contacts, forming its separate elements. It is these elements that make the computer "think" or, more precisely, perform computing tasks on the numbers fed into the computer.

Of course, a single transistor cannot perform any calculation. It is an electronic switch: it can either pass a signal or block it. A signal represents a logical one, and the absence of a signal a logical zero.
But the processor is not a mere set of transistors; it is a collection of important devices. A processor crystal can include:
- The core, the main part of the processor. This is the basic calculation device; here the processing of all the data entering the processor is carried out.
- The coprocessor, an additional block for the most sophisticated mathematical calculations, including "floating-point" operations. It is actively used by graphical and multimedia programs.
- Cache memory, a buffer memory that plays the role of a staging post for data. A modern processor uses two levels of cache memory: an extremely fast Level 1 cache of up to about 100 KB, and a somewhat slower Level 2 cache with a capacity of 128 KB to 2 MB.
- The data bus, the information highway through which the processor shares data with the other devices of the computer.
MICROPROCESSOR–A BRAIN TO THE COMPUTER
The microprocessor forms the heart of a microcomputer. The first microprocessors were developed in 1971 as an offshoot of pocket-calculator development. Since then there has been a tremendous rise of work in this field, and dozens of different microprocessors have appeared.
Microprocessors are used primarily to replace or augment random logic design.
As is known, "computer" actually refers to a computing system including hardware and software. "Processor" refers to the processing circuits: the central processing unit, memory, interrupt unit, clock, and timing. Most processors also include software.
The central processing unit, the heart of the processor, consists of the register array, the arithmetic and logic unit, the control unit (including the micro-ROM), and the bus control circuits. Microsoftware may also include a microinstruction manual, a microassembler, etc.

CONTROL UNIT
The control unit (often called a control system or central controller)
manages the computer’s various components; it reads and interprets the
program instructions, transforming them into a series of control signals
which activate other parts of the computer. Control systems in advanced
computers may change the order of some instructions so as to improve
performance.
A key component common to all CPUs is the program counter, a special
memory cell that keeps track of which location in memory the next
instruction is to be read from.
The control system's functions are as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):
1. Read the code for the next instruction from the cell indicated by the
program counter.
2. Decode the numerical code for the instruction into a set of commands or
signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or
perhaps from an input device). The location of this required data is typically
stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register
or perhaps an output device.
8. Jump back to step (1).
Since the program counter is conceptually just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program.
Instructions that modify the program counter are often known as "jumps" and allow for loops and conditional instruction execution.
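The fetch-decode-execute cycle described above can be sketched as a toy simulator. The three-instruction machine below is a hypothetical illustration invented for this note, not any real CPU's instruction set:

```python
def run(program, memory):
    """Toy CPU. Each instruction is a (opcode, operand) pair:
    LOAD n -> acc = memory[n]
    ADD n  -> acc += memory[n]
    JUMP n -> pc = n  (an instruction that modifies the program counter)
    HALT   -> stop and return the accumulator
    """
    pc, acc = 0, 0
    while True:
        op, arg = program[pc]  # 1-2. fetch and decode via the program counter
        pc += 1                # 3. increment the program counter
        if op == "LOAD":       # 4-7. read data, execute, write the result
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "JUMP":
            pc = arg           # a "jump": overwrite the program counter
        elif op == "HALT":
            return acc

memory = {0: 5, 1: 7}
program = [("LOAD", 0), ("ADD", 1), ("HALT", None)]
print(run(program, memory))  # 12
```

The `while True` loop plays the role of step 8 (jump back to step 1), and the `JUMP` branch shows why overwriting the program counter is all that is needed for loops.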
It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program; indeed, in some more complex CPU designs, there is another, yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.
Lecture 3
Development models of Information Systems
An information system (IS) is a formal, sociotechnical, organizational system designed to collect, process, store, and distribute information. In a sociotechnical perspective, information systems are composed of four components: task, people, structure (or roles), and technology.
A computer information system is a system composed of people and computers
that processes or interprets information. The term is also sometimes used to simply
refer to a computer system with software installed.
Information Systems is an academic study of systems with a specific
reference to information and the complementary networks of hardware and
software that people and organizations use to collect, filter, process, create and also
distribute data. An emphasis is placed on an information system having a definitive
boundary, users, processors, storage, inputs, outputs and the aforementioned
communication networks.
Any specific information system aims to support operations, management and
decision-making. An information system is the information and communication
technology that an organization uses, and also the way in which people interact
with this technology in support of business processes.
Some authors make a clear distinction between information systems,
computer systems, and business processes. Information systems typically include
an ICT component but are not purely concerned with ICT, focusing instead on the
end use of information technology. Information systems are also different from
business processes. Information systems help to control the performance of
business processes.
Alter argues for advantages of viewing an information system as a special type of
work system. A work system is a system in which humans or machines perform
processes and activities using resources to produce specific products or services for
customers. An information system is a work system whose activities are devoted to
capturing, transmitting, storing, retrieving, manipulating and displaying
information.
As such, information systems inter-relate with data systems on the one hand
and activity systems on the other. An information system is a form of
communication system in which data represent and are processed as a form of
social memory. An information system can also be considered a semi-formal
language which supports human decision making and action.
Information systems are the primary focus of study for organizational informatics.
The six components that must come together in order to produce an information system are listed below. (Strictly speaking, an information system is a set of organizational procedures and does not require a computer or software: an accounting system in the 1400s using a ledger and ink is also an information system.)
1. Hardware: The term hardware refers to machinery. This category includes the computer itself, which is often referred to as the central processing unit (CPU), and all of its support equipment. Among the support equipment are input and output devices, storage devices, and communications devices.
2. Software: The term software refers to computer programs and the manuals
(if any) that support them. Computer programs are machine-readable
instructions that direct the circuitry within the hardware parts of the system
to function in ways that produce useful information from data. Programs are
generally stored on some input/output medium, often a disk or tape.
3. Data: Data are facts that are used by programs to produce useful
information. Like programs, data are generally stored in machine-readable
form on disk or tape until the computer needs them.
4. Procedures: Procedures are the policies that govern the operation of a
computer system. "Procedures are to people what software is to hardware" is
a common analogy that is used to illustrate the role of procedures in a
system.
5. People: Every system needs people if it is to be useful. Often the most overlooked element of the system is the people, probably the component that most influences the success or failure of information systems. This includes "not only the users, but those who operate and service the computers, those who maintain the data, and those who support the network of computers."
6. Feedback: An IS may also be provided with feedback, although this component is not necessary for it to function.
Data is the bridge between hardware and people. This means that the data we
collect is only data until we involve people. At that point, data is now information.
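The six components can be modeled as a simple record. The sketch below merely restates the list above in code; the field values are illustrative, and the optional default for feedback mirrors the note that an IS can function without it:

```python
from dataclasses import dataclass, field

@dataclass
class InformationSystem:
    """The six components of an information system, per the list above."""
    hardware: list[str]
    software: list[str]
    data: list[str]
    procedures: list[str]
    people: list[str]
    feedback: list[str] = field(default_factory=list)  # optional component

# A hypothetical payroll IS used only to exercise the structure.
payroll = InformationSystem(
    hardware=["CPU", "storage devices"],
    software=["payroll program"],
    data=["employee records"],
    procedures=["monthly payroll run policy"],
    people=["users", "operators"],
)
# Data becomes information only once people are involved.
assert payroll.people, "an IS is not useful without people"
```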
Types of information system

A four-level hierarchy


The "classic" view of Information systems found in textbooks in the 1980s was a
pyramid of systems that reflected the hierarchy of the organization,
usually transaction processing systems at the bottom of the pyramid, followed
by management information systems, decision support systems, and ending
with executive information systems at the top. Although the pyramid model has remained useful since it was first formulated, a number of new technologies have been developed and new categories of information systems have emerged, some of which no longer fit easily into the original pyramid model.
Some examples of such systems are:
 data warehouses
 enterprise resource planning
 enterprise systems
 expert systems
 search engines
 geographic information system
 global information system
 office automation.
A computer(-based) information system is essentially an IS using computer
technology to carry out some or all of its planned tasks. The basic components of
computer-based information systems are:
 Hardware: the devices, such as the monitor, processor, printer, and keyboard, which work together to accept, process, and display data and information.
 Software: the programs that allow the hardware to process the data.
 Databases: collections of associated files or tables containing related data.
 Networks: connecting systems that allow diverse computers to distribute resources.
 Procedures: the commands for combining the components above to process information and produce the preferred output.
The first four components (hardware, software, database, and network) make up
what is known as the information technology platform. Information technology
workers could then use these components to create information systems that watch
over safety measures, risk and the management of data. These actions are known as
information technology services.
Certain information systems support parts of organizations, others support entire
organizations, and still others, support groups of organizations. Recall that each
department or functional area within an organization has its own collection of
application programs or information systems. These functional area information
systems (FAIS) are supporting pillars for more general IS namely, business
intelligence systems and dashboards. As the name suggests, each FAIS supports a
particular function within the organization, e.g. accounting IS, finance IS,
production-operations management (POM) IS, marketing IS, and human resources
IS. In finance and accounting, managers use IT systems to forecast revenues and
business activity, to determine the best sources and uses of funds, and to perform
audits to ensure that the organization is fundamentally sound and that all financial
reports and documents are accurate. Other types of organizational information
systems are FAIS, Transaction processing systems, enterprise resource
planning, office automation system, management information system, decision
support system, expert system, executive dashboard, supply chain management
system, and electronic commerce system. Dashboards are a special form of IS that
support all managers of the organization. They provide rapid access to timely
information and direct access to structured information in the form of reports.
Expert systems attempt to duplicate the work of human experts by applying
reasoning capabilities, knowledge, and expertise within a specific domain.
Information system development
Information technology departments in larger organizations tend to strongly
influence the development, use, and application of information technology in the
business. A series of methodologies and processes can be used to develop and use
an information system. Many developers use a systems engineering approach such
as the system development life cycle (SDLC), to systematically develop an
information system in stages. The stages of the system development lifecycle are
planning, system analysis and requirements, system design, development,
integration and testing, implementation and operations and maintenance. Recent
research aims at enabling and measuring the ongoing, collective development of
such systems within an organization by the entirety of human actors themselves.
An information system can be developed in house (within the organization) or
outsourced. This can be accomplished by outsourcing certain components or the
entire system. A specific case is the geographical distribution of the development
team (offshoring, global information system).
A computer-based information system, following a definition of Langefors,[25] is a
technologically implemented medium for:
 recording, storing, and disseminating linguistic expressions,
 as well as for drawing conclusions from such expressions.
Geographic information systems, land information systems, and disaster
information systems are examples of emerging information systems, but they can
be broadly considered as spatial information systems. System development is done
in stages which include:
 Problem recognition and specification
 Information gathering
 Requirements specification for the new system
 System design
 System construction
 System implementation
Modes of organization of information processing
The most commonly used modes of information processing are the dialogue
(interactive), batch, and teleprocessing modes.
The solution of economic problems in the form of an information exchange between
the user and the computer is called dialogue mode. The corresponding technology
provides:
- interaction between the human and the computing system via local or remote
terminals;
- search for the information (data) and programs needed by the user;
- rapid processing of the received information on the computer and delivery of
the results to the user without delay.
There are two types of dialogue mode: passive dialogue and active dialogue.
During passive dialogue, the user sends a request message and the computer
responds; in this mode, messages and responses can also be sent at the
initiative of the computing system. In the active dialogue mode, information is
sent by both the user and the machine; in other words, there is an active
two-way exchange.
In the passive dialogue mode, the "question-answer" form is the most developed.
In this case, the search for and processing of the information the user needs
is carried out by ready-made programs.
In the active dialogue mode, a number of learning systems are involved in the
exchange of information. It is also planned to use programming systems in this
mode.
Current dialogue modes are characterized by the fact that messages are
presented in a formal language; natural human language is not yet used in
dialogue mode.
When the dialogue mode of information processing is organized, a dialogue
information-processing program is created. This program reads the information
sent from a terminal and sends a response message back to the originating
terminal. Note that the terminals can be connected to the computer system
locally or can be an element of the computer system.
The system's reaction is an important indicator when applying the dialogue
mode of information processing. As a rule, the reaction is measured by the time
interval between the moment a request is made and the moment the answer is
received. Tasks solved in dialogue mode differ in the intensity of requests,
and in the "question-answer" exchange the limit set for the specific conditions
must be taken into account. The currently accepted limit for effective feedback
is two seconds.
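The two-second feedback limit described above can be checked with a simple timing harness. This is an illustrative sketch only (the request handler is a placeholder, not part of any real dialogue system):

```python
import time

RESPONSE_LIMIT = 2.0  # effective feedback limit in seconds, as stated above

def answer(request):
    # Placeholder for the dialogue program's processing of one request.
    return f"response to {request!r}"

def timed_exchange(request):
    """Run one question-answer exchange and report whether the
    system's reaction stayed within the two-second limit."""
    start = time.monotonic()
    reply = answer(request)
    elapsed = time.monotonic() - start
    return reply, elapsed, elapsed <= RESPONSE_LIMIT

reply, elapsed, ok = timed_exchange("account balance?")
print(reply, f"({elapsed:.3f}s, within limit: {ok})")
```

A monotonic clock is used because wall-clock time can jump (e.g. NTP adjustments) and would distort interval measurements.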
What is Information System?
An Information System is a set of linked components that handles
information. It collects, stores, processes, and delivers information to the
correct destinations. It breaks down the data into sub-parts for
decision-making. Such systems are required to manage large amounts of
information.
These systems are collections of both software and hardware. These help
in making decisions by utilizing the information stored.
Dimensions of Information Systems
There are three main dimensions of information systems.
Organizational Dimension
Information systems are vital parts of a company. This dimension includes
the following aspects:
 Culture of an organization
 Standard Business Procedures
 Political interest groups
 Specialties
Management Dimension
Managers in companies face different business challenges. These systems
provide the correct tools to win over these challenges. These tools help the
managers. They can do the following with these features.
 Allocate resources
 Monitor the performance
 Coordinate
 Make efficient decisions.

This helps in achieving long-term goals.
Technology Dimension
The managing department uses technology to execute its plans. It consists
of software, hardware, networking, and data management. Managers use it
as a technique to help them achieve system goals. Criteria such as the
processing of data, the mode of data, and the system's goals are used to
classify information systems.

Types of Information System
Different types of information systems include the following.
 Transaction Processing System
 Management Information System
 Decision Support Systems
 Expert System
 Office Automation System
 Knowledge Management Systems

Let us study these types in detail.
Transaction Processing System (TPS)
1. It collects, modifies, and processes the data of business transactions. It
boosts the performance and reliability of business transactions, and it
processes large amounts of data in real time, which supports customer
satisfaction.
2. The information gained from this system is detailed. The data acquired is
stored to update a storage record; the system can store data as a set of
records, which is called the store-keeping function. This system helps in
making reports of the transactions.
3. Transactions are processed by two methods: Online Transaction Processing
and Batch Processing.

Examples of TPS include the following.
1. Hotel reservation systems.
2. Airline booking systems.
3. Employee record management.
4. Bank payroll.
Management Information System (MIS)
1. This system takes the raw data generated by TPS as its input and converts
it into a form such as a report for the manager. The system compares and
summarizes the data and then generates the report. The reports can be
summary, ad hoc, on-demand, etc.
2. These reports help the management control and predict the company's future
performance. The system performs tasks like aggregation and comparison of
the data, so data analysis and management are at the core of this system.
3. MIS generates these reports quickly, which helps in making decisions. This
system helps the management team set and prioritize their goals.

Examples of MIS include the following.
1. Customer relationship management.
2. Human resource management.
3. Sales management.
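The TPS-to-MIS flow described above — raw transaction records aggregated into a summary report — can be sketched as follows. The record format and figures are invented for the example:

```python
from collections import defaultdict

# Hypothetical raw TPS records: (department, amount)
transactions = [
    ("sales", 1200.0),
    ("sales", 800.0),
    ("payroll", 5000.0),
]

def summary_report(records):
    """Aggregate raw transaction data into per-department totals,
    the kind of summary report an MIS produces for managers."""
    totals = defaultdict(float)
    for department, amount in records:
        totals[department] += amount
    return dict(totals)

print(summary_report(transactions))
# {'sales': 2000.0, 'payroll': 5000.0}
```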
Decision Support System (DSS)
1. DSS is an information system that helps in making decisions. It gathers
and computes relevant information.
2. This system provides alternatives and different options to the user. It
helps in an efficient decision-making process.
3. This system is interactive. It is ready with the information and the
correct tools whenever the manager requires them. And since it is
interactive, the management can add or delete some data from the set and
then analyze its effect on the output. This helps in efficient
decision-making.
4. This system helps the management visualize the data, enabling efficient
decisions to be made quickly.

Examples of DSS include the following.
1. Bank loan management.
2. Online map systems.
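The interactive add-or-delete behaviour described above amounts to what-if analysis: change the data set, recompute, and compare the outputs. A minimal sketch (the decision model and the figures are invented for illustration):

```python
def projected_revenue(unit_sales, price):
    """A toy decision model: revenue projection from assumed unit sales."""
    return sum(units * price for units in unit_sales)

scenario = [100, 120, 90]          # baseline monthly unit sales
baseline = projected_revenue(scenario, price=25.0)

# The manager adds a month's data and re-analyzes the effect on the output.
what_if = projected_revenue(scenario + [150], price=25.0)

print(baseline, what_if)  # 7750.0 11500.0
```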
Expert System (ES)
1. An Expert System represents information in a form that can be executed by
the computer. This system helps in the problem-finding and problem-solving
process. It tries to replicate human intelligence to some extent.
2. It is a knowledge-based system.
 It offers expertise to the management by utilizing gained knowledge.
 It helps the management in identifying and predicting problems.
 It is also used in the problem-solving process.
 Software modules and a knowledge base are the two main components of this
system.

Examples include CaDet (Cancer Detection Support Tool), etc.
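The knowledge-base-plus-inference structure mentioned above can be sketched as a tiny rule-based system. The rules here are invented for illustration, not drawn from CaDet or any real medical system:

```python
# Knowledge base: (required facts, conclusion) rules. Hypothetical content.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fatigue", "weight loss"}, "recommend further screening"),
]

def infer(facts):
    """Fire every rule whose conditions are all present in the given facts,
    mimicking how an expert system applies its gained knowledge."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]  # set containment: all conditions met

print(infer({"fever", "cough", "fatigue"}))  # ['possible flu']
```

Real expert-system shells add features such as certainty factors and explanation of the reasoning chain; the separation of knowledge base (RULES) from inference engine (infer) is the structural point.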
Office Automation System
Office automation systems (OAS) automate scheduling, project tracking,
email handling, document management, and other administrative tasks.
 It increases productivity at work by minimizing manual labor and
fostering collaboration. Additionally, the OAS simplifies data entry,
analysis, and customer relationship management.
 Employees can concentrate on more strategic aspects of their work thanks
to improved workflow and streamlined processes, which boosts productivity
and lowers operational costs.
Knowledge Management Systems
Knowledge Management Systems (KMS) are software tools that collect,
organize, and share knowledge within organizations.
 They store information in repositories, support easy search and
retrieval, and foster collaboration.
 KMS captures tacit knowledge, maintains version control, and uses
metadata for effective organization.
 These systems enhance organizational learning, improve decision-making,
and boost productivity by promoting knowledge sharing and reducing
redundancy.
 They are crucial in facilitating continuous improvement and innovation
across the organization.
Frequently Asked Questions
What is an information system?
An Information System is a set of components that handle data. It collects,
stores, processes, and delivers information to the correct destinations. It
breaks down the data into sub-parts for decision-making. Such systems are
required to manage large amounts of information.
What are the six types of information systems?
There are six significant types of information systems:
Transaction Processing Systems (TPS), Management Information Systems
(MIS), Decision Support Systems (DSS), Executive Information Systems
(EIS), Knowledge Management Systems (KMS), and Enterprise Resource
Planning (ERP) Systems.
What are the five main components of an information system?
The five main components of an information system are: Input, Processing,
Storage, Output, and Feedback. They work together to collect, process,
store, and deliver information for organizational needs.

What is a transaction processing system (TPS)?
A transaction processing system (TPS) is a type of data management
information-processing software used during a business transaction to
manage the collection and retrieval of both customer and business data.
A TPS creates a fast and accurate execution environment,
ensuring data availability, security and integrity through various
forms of information processing. A TPS also provides
customization and automation features to expedite computer
system processing activities and enable reporting for business
intelligence (BI) forecasting and higher-level trend analysis.
The first TPS, Sabre, was built by IBM for American Airlines in the
early 1960s. Sabre was designed to process up to 83,000 daily
transactions and ran on two IBM 7090 computers. Later iterations
of Sabre, such as Airline Control Program (ACP) and Transaction
Processing Facility (TPF), would be adopted by large banks, credit
card companies and hotel chains. These days, companies across
every major industry rely on modern TPS software for processing
business transactions.
Distinct from a merchant’s point of sale (POS) system—which is
used for activities like reading credit card data, printing receipts
and managing cash payments—a TPS stores, sends and receives
transactional data necessary to validate and complete a business
transaction. For example, a customer at a grocery store
purchasing a bag of coffee beans with a credit card will swipe
their card at the POS, and the TPS will collect their card
information, communicate with the customer’s bank and approve
or decline the purchase.
An online merchant will also use a TPS called an online
transactional processing (OLTP) system to verify and complete a
similar purchase. In this case, the OLTP might also communicate
with the merchant’s fulfillment center to check product
availability and distribute shipping instructions for fulfilling
customer orders.

OLTP vs. OLAP
When considering online transaction processing systems, it is worth noting
the distinction between OLTP and similar online analytical processing
(OLAP) systems. Although both are used for data processing, each serves a
different function.

What is an online transaction processing system (OLTP)?
OLTP is designed for executing online database transactions.
These types of systems are typically built for service workers
(cashiers, bank tellers, airline desk clerks) or customer self-
service portals (online banking, e-commerce, hotel or travel
bookings).
What is an online analytical processing system (OLAP)?
Conversely, online analytical processing (OLAP) systems are
optimized for complex data analysis. These types of systems are
used to generate useful reports and insights from complex data
sets and are typically used by data scientists and business
analysts to facilitate business intelligence (BI), data mining, and
improve big-picture decision-making.
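The OLTP/OLAP contrast can be illustrated with SQLite — used here only as a convenient stand-in for the dedicated systems described above. OLTP-style work is many small transactional writes; OLAP-style work is one analytical query over the whole data set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")

# OLTP-style work: many small, individual transactional writes.
for customer, amount in [("ann", 30.0), ("bob", 45.0), ("ann", 25.0)]:
    with conn:  # each insert is committed as its own transaction
        conn.execute("INSERT INTO orders VALUES (?, ?)", (customer, amount))

# OLAP-style work: one analytical query aggregating the whole data set.
report = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(report)  # [('ann', 55.0), ('bob', 45.0)]
```

In practice the two workloads run on differently optimized engines (row stores for OLTP, column stores or cubes for OLAP); the query shapes, however, look much like the two shown here.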
Transaction processing system (TPS) functions
Regardless of the provider, a sufficient TPS fulfills three main functions.
1. System runtime functions: Basic functions associated
with the purpose of processing a transaction while
maintaining data integrity, availability and security—all with
fast response times and high transaction throughput.
2. System administration functions: Functions associated
with system administration, such as the configuration,
monitoring and management of the TPS.
3. Application development functions: To better suit the
particular business application, a modern TPS offers
customization features to access data, perform
intercomputer communications and design and manage
unique user interfaces.
Types of transaction processing systems
Transaction processing systems (TPS) and online transactional
processing systems (OLTP) can be categorized into two main
information processing methodologies. A company’s TPS choice
will be dependent on their unique business needs, while a hybrid
model may also be employed.
Batch processing
Batch transaction processing methods collect transactions over a
set period of time and process them all at once in scheduled
intervals. Batch processing is an ideal method for handling large
volumes of transactions efficiently, such as payroll transactions or
bulk data updates. While batch processing is designed to
efficiently process complex data sets, there is an inherent delay in
response time.
Real-time processing
TPS systems like OLTP use a real-time processing methodology in which the
TPS processes each transaction as it occurs. These systems offer an
immediate response, which makes POS transactions, online purchases and
reservation systems possible.
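The two methodologies can be contrasted in a short sketch; the transaction handler is a placeholder, and the scheduling of the batch job is reduced to a flag for illustration:

```python
def process(txn):
    # Placeholder for validating and recording one transaction.
    return f"processed {txn}"

def real_time(transactions):
    """Handle each transaction immediately as it occurs."""
    return [process(t) for t in transactions]

def batch(transactions, run_scheduled_job):
    """Collect transactions and process them all at the scheduled time."""
    queue = list(transactions)   # accumulated over the collection window
    if run_scheduled_job:        # e.g. the nightly payroll run fires
        return [process(t) for t in queue]
    return []                    # nothing is processed until the job runs

print(real_time(["t1"]))                            # ['processed t1']
print(batch(["t2", "t3"], run_scheduled_job=True))  # both processed at once
```

The trade-off in the text is visible here: batch amortizes work over a large queue but delays results, while real-time answers immediately per transaction.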
Four components of a transaction processing system
For both batch processing systems and real-time systems, a transaction
processing system (TPS) can be divided into four main components.
Inputs
Any number of transactions—including invoices, bills, coupons
and other types of orders like a purchase order—may be treated
as inputs in a TPS. Theoretically, any type of order entry can be
considered input data.
Outputs
A TPS can generate a variety of use-case-specific outputs ranging
from cash flow reports to receipts, and it can be utilized for
record-keeping, data analysis, tax reporting and other official
business purposes.
Processing system
The processing system of a TPS reads the input, completes any
data modifications or updates, and creates a useful output, such
as a confirmation of sale or inventory report.
Storage
While storage may, in some cases, refer to physical data storage
hardware, an average TPS will also create easily navigable
directories for storing both input and output data, typically in
some form of database.
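The four components above — input, processing, storage, output — can be tied together in a toy sketch. Class and record names are invented for illustration:

```python
class ToyTPS:
    """Illustrative only: input -> processing -> storage -> output."""

    def __init__(self):
        self.storage = []                 # stands in for the database

    def submit(self, order):
        # Input: an order entry such as an invoice or purchase order.
        record = {"order": order, "status": "confirmed"}  # processing
        self.storage.append(record)       # storage: persist the record
        return f"receipt: {order} confirmed"              # output

tps = ToyTPS()
print(tps.submit("invoice-001"))  # receipt: invoice-001 confirmed
print(len(tps.storage))           # 1
```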
Transaction processing system features
The goal of any transaction processing system (TPS) is to enable smooth
business transactions. To this end, a viable TPS should offer the
following critical features:
 Controlled access: As a critical component of any business's information
processing system, a robust TPS should provide secure controlled access
for only authorized users and administrators.
 Connection with external environments: By definition, a
TPS is designed to connect seamlessly with various external
systems to distribute and receive information between
customers, merchants, suppliers and, where applicable,
banks and creditors.
 Expedited response times: For real-time TPS, fast
response times are considered table stakes for businesses
seeking to provide quick and easy transactions for their
customers.
 Inflexibility: Although a TPS might be customizable to suit
different organizational demands, it’s important for a TPS to
also provide a rigid, replicable experience so that all
transactions are processed similarly despite variables like
time of day, location, user or customer.
 Reliability: Stability and security are critical components of
a quality TPS. Transaction data must be secured without
error, ensuring that source documents are regularly backed
up and available for validation.
 Inter-system distribution: A company’s TPS does not
operate in a vacuum and must be able to distribute data and
instructions with other internal information systems, such as
sales processing systems or ledger systems.

MANAGEMENT INFORMATION SYSTEMS
A management information system (MIS) is an information
system[1] used for decision-making, and for the coordination, control,
analysis, and visualization of information in an organization. The study of
the management information systems involves people, processes and
technology in an organizational context. In other words, it serves the
functions of controlling, planning, and decision-making at the management
level.[2][3]

In a corporate setting, the ultimate goal of using a management information
system is to increase the value and profits of the business.[4][5]

History
While it can be contested that the history of management information
systems dates as far back as companies using ledgers to keep track of
accounting, the modern history of MIS can be divided into
five eras originally identified by Kenneth C. Laudon and Jane Laudon in
their seminal textbook Management Information Systems.[6][7]

 First era – Mainframe and minicomputer computing
 Second era – Personal computers
 Third era – Client/server networks
 Fourth era – Enterprise computing
 Fifth era – Cloud computing
The first era (mainframe and minicomputer computing) was ruled
by IBM and their mainframe computers for which they supplied both the
hardware and software. These computers would often take up whole rooms
and require teams to run them. As technology advanced, these computers
were able to handle greater capacities and therefore reduce their cost.
Smaller, more affordable minicomputers allowed larger businesses to run
their own computing centers in-house / on-site / on-premises.

The second era (personal computers) began in 1965 as microprocessors
started to compete with mainframes and minicomputers and accelerated
the process of decentralizing computing power from large data centers to
smaller offices. In the late 1970s, minicomputer technology gave way to
personal computers and relatively low-cost computers were becoming
mass market commodities, allowing businesses to provide their employees
access to computing power that ten years before would have cost tens of
thousands of dollars. This proliferation of computers created a ready
market for interconnecting networks and the popularization of the Internet.
(The first microprocessor—a four-bit device intended for a programmable
calculator—was introduced in 1971, and microprocessor-based systems
were not readily available for several years. The MITS Altair 8800 was the
first commonly known microprocessor-based system, followed closely by
the Apple I and II. It is arguable that the microprocessor-based system did
not make significant inroads into minicomputer use until 1979,
when VisiCalc prompted record sales of the Apple II on which it ran. The
IBM PC introduced in 1981 was more broadly palatable to business, but its
limitations gated its ability to challenge minicomputer systems until perhaps
the late 1980s to early 1990s.)
The third era (client/server networks) arose as technological complexity
increased, costs decreased, and the end-user (now the ordinary employee)
required a system to share information with other employees within an
enterprise. Computers on a common network shared information on a
server. This lets thousands and even millions of people access data
simultaneously on networks referred to as Intranets.

The fourth era (enterprise computing), enabled by high-speed networks,
consolidated the original department specific software applications into
integrated software platforms referred to as enterprise software. This new
platform tied all aspects of the business enterprise together offering rich
information access encompassing the complete managerial structure.

Technology
The terms management information system (MIS), Information
management system (IMS), information system (IS), enterprise resource
planning (ERP), computer science, electrical computer engineering,
and information technology management (IT) are often confused. MIS is a
hierarchical subset of information systems. MIS is more organization-
focused narrowing in on leveraging information technology to increase
business value. Computer science is more software-focused dealing with
the applications that may be used in MIS. Electrical computer engineering
is product-focused mainly dealing with the hardware architecture behind
computer systems. ERP software is a subset of MIS and IT management
refers to the technical management of an IT department which may include
MIS.

A career in MIS focuses on understanding and projecting the practical use
of management information systems. It studies the interaction, organization
and processes among technology, people and information to solve
problems.[8]

Management
While management information systems can be used by any or every level
of management, the decision of which systems to implement generally falls
upon the chief information officers (CIO) and chief technology
officers (CTO). These officers are generally responsible for the overall
technology strategy of an organization including evaluating how new
technology can help their organization. They act as decision-makers in the
implementation process of the new MIS.
Once decisions have been made, IT directors, including MIS directors, are
in charge of the technical implementation of the system. They are also in
charge of implementing the policies affecting the MIS (either new specific
policies passed down by the CIOs or CTOs or policies that align the new
systems with the organization's overall IT policy). It is also their role to
ensure the availability of data and network services as well as the security
of the data involved by coordinating IT activities.

Upon implementation, the assigned users will have appropriate access to
relevant information. It is important to note that not everyone inputting data
into MIS needs to be at the management level. It is common practice to
have inputs to MIS be inputted by non-managerial employees though they
rarely have access to the reports and decision support platforms offered by
these systems.

Types
The following are types of information systems used to create reports,
extract data, and assist in the decision-making processes of middle and
operational level managers.

 Decision support systems (DSSs) are computer program applications
used by middle and higher management to compile information from a
wide range of sources to support problem solving and decision making.
A DSS is used mostly for semi-structured and unstructured decision
problems.
 Executive information system (EIS) is a reporting tool that provides
quick access to summarized reports coming from all company levels
and departments such as accounting, human resources and operations.
 Marketing information systems are management Information Systems
designed specifically for managing the marketing aspects of the
business.
 Accounting information systems are focused on accounting functions.
 Human resource management systems are used for personnel aspects.
 Office automation systems (OAS) support communication and
productivity in the enterprise by automating workflow and eliminating
bottlenecks. OAS may be implemented at any and all levels of
management.
 School Information Management Systems (SIMS) cover school
administration, often including teaching and learning materials.
 Enterprise resource planning (ERP) software facilitates the flow of
information between all business functions inside the boundaries of the
organization and manage the connections to outside stakeholders.[9]
 Customer Relationship Management (CRM) systems manage and analyze
customer interactions and data to improve customer relationships and
enhance satisfaction.
 Local databases can be small, simplified tools for managers and are
considered to be a primal or base-level version of an MIS.
 Dealership management systems (DMS) or auto dealership management
systems are created specifically for the automotive industry, car
dealerships or large equipment manufacturers.[10] These systems contain
software that meets the needs of the finance, sales, workshop, parts,
inventory, and administration components of running the dealership.
Advantages and disadvantages
The following are some of the benefits that can be attained using MIS:[11]

 Improve an organization's operational efficiency, add value to existing
products, engender innovation and new product development, and help
managers make better decisions.[12]
 Companies are able to identify their strengths and weaknesses due to
the presence of revenue reports, employee performance records etc.
Identifying these aspects can help a company improve its business
processes and operations.
 The availability of customer data and feedback can help the company to
align its business processes according to the needs of its customers.
The effective management of customer data can help the company to
perform direct marketing and promotion activities.
 MIS can help a company gain a competitive advantage.
 MIS reports can help with decision-making as well as reduce downtime
for actionable items.
Some of the disadvantages of MIS systems:

 Retrieval and dissemination are dependent on technology hardware and
software.
 Potential for inaccurate information.

DECISION SUPPORT SYSTEMS
A decision support system (DSS) is an information system that supports
business or organizational decision-making activities. DSSs serve the
management, operations and planning levels of an organization (usually
mid and higher management) and help people make decisions about
problems that may be rapidly changing and not easily specified in advance
—i.e., unstructured and semi-structured decision problems. Decision
support systems can be either fully computerized or human-powered, or a
combination of both.

While academics have perceived DSS as a tool to support decision making
processes, DSS users see DSS as a tool to facilitate organizational
processes.[1] Some authors have extended the definition of DSS to include
any system that might support decision making and some DSS include
a decision-making software component; Sprague (1980)[2] defines a
properly termed DSS as follows:

1. DSS tends to be aimed at the less well structured,
underspecified problem that upper level managers typically face;
2. DSS attempts to combine the use of models or analytic techniques
with traditional data access and retrieval functions;
3. DSS specifically focuses on features which make them easy to use
by non-computer-proficient people in an interactive mode; and
4. DSS emphasizes flexibility and adaptability to accommodate changes
in the environment and the decision making approach of the user.
DSSs include knowledge-based systems. A properly designed DSS is an
interactive software-based system intended to help decision makers
compile useful information from a combination of raw data, documents,
personal knowledge, and/or business models to identify and solve
problems and make decisions.

Typical information that a decision support application might gather and
present includes:

 inventories of information assets (including legacy and relational data
sources, cubes, data warehouses, and data marts),
 comparative sales figures between one period and the next,
 projected revenue figures based on product sales assumptions.
History
The concept of decision support has evolved mainly from the theoretical
studies of organizational decision making done at the Carnegie Institute of
Technology during the late 1950s and early 1960s, and the implementation
work done in the 1960s.[3] DSS became an area of research of its own in
the middle of the 1970s, before gaining in intensity during the 1980s.

In the middle and late 1980s, executive information systems (EIS), group
decision support systems (GDSS), and organizational decision support
systems (ODSS) evolved from the single user and model-oriented DSS.
According to Sol (1987),[4] the definition and scope of DSS have been
migrating over the years: in the 1970s DSS was described as "a computer-
based system to aid decision making"; in the late 1970s the DSS
movement started focusing on "interactive computer-based systems which
help decision-makers utilize data bases and models to solve ill-structured
problems"; in the 1980s DSS should provide systems "using suitable and
available technology to improve effectiveness of managerial and
professional activities", and towards the end of 1980s DSS faced a new
challenge towards the design of intelligent workstations.[4]

In 1987, Texas Instruments completed development of the Gate
Assignment Display System (GADS) for United Airlines. This decision
support system is credited with significantly reducing travel delays by aiding
the management of ground operations at various airports, beginning
with O'Hare International Airport in Chicago and Stapleton Airport
in Denver, Colorado.[5] Beginning in about 1990, data warehousing and on-
line analytical processing (OLAP) began broadening the realm of DSS. As
the turn of the millennium approached, new Web-based analytical
applications were introduced.

DSS also have a weak connection to the user interface paradigm
of hypertext. Both the University of Vermont PROMIS system (for medical
decision making) and the Carnegie Mellon ZOG/KMS system (for military
and business decision making) were decision support systems which also
were major breakthroughs in user interface research. Furthermore,
although hypertext researchers have generally been concerned
with information overload, certain researchers, notably Douglas Engelbart,
have been focused on decision makers in particular.

The advent of more and better reporting technologies has seen DSS start
to emerge as a critical component of management design. Examples of this
can be seen in the intense amount of discussion of DSS in the education
environment.

Applications
DSS can theoretically be built in any knowledge domain. One example is
the clinical decision support system for medical diagnosis. There are four
stages in the evolution of clinical decision support system (CDSS): the
primitive version is standalone and does not support integration; the
second generation supports integration with other medical systems; the
third is standard-based, and the fourth is service model-based.[6]

DSS is extensively used in business and management. Executive
dashboard and other business performance software allow faster decision
making, identification of negative trends, and better allocation of business
resources. With a DSS, information from across an organization can be
presented in summarized form as charts and graphs, which helps
management take strategic decisions. For example, one of the
DSS applications is the management and development of complex anti-
terrorism systems.[7] Other examples include a bank loan officer verifying
the credit of a loan applicant or an engineering firm that has bids on several
projects and wants to know if they can be competitive with their costs.

A growing area of DSS application, concepts, principles, and techniques is
in agricultural production and marketing for sustainable development.
Agricultural DSSes began to be developed and promoted in the 1990s.[8]
For example, the DSSAT4 package,[9] The Decision Support System for
Agrotechnology Transfer[10] developed through financial support
of USAID during the 1980s and 1990s, has allowed rapid
assessment of several agricultural production systems around the world to
facilitate decision-making at the farm and policy levels. Precision
agriculture seeks to tailor decisions to particular portions of farm fields.
There are, however, many constraints to the successful adoption of DSS in
agriculture.[11]

DSS is also prevalent in forest management where the long planning
horizon and the spatial dimension of planning problems demand specific
requirements. All aspects of Forest management, from log transportation,
harvest scheduling to sustainability and ecosystem protection have been
addressed by modern DSSs. In this context, planning must weigh single or
multiple management objectives related to the provision of traded or
non-traded goods and services, often subject to resource constraints and
complex decision problems. The Community of Practice of Forest
Management Decision Support Systems provides a large repository of
knowledge about the construction and use of forest Decision Support
Systems.[12]

A specific example concerns the Canadian National Railway system, which
tests its equipment on a regular basis using a decision support system. A
problem faced by any railroad is worn-out or defective rails, which can
result in hundreds of derailments per year. Under a DSS, the Canadian
National Railway system managed to decrease the incidence of
derailments at the same time other companies were experiencing an
increase.

DSS has been used for risk assessment to interpret monitoring data from
large engineering structures such as dams, towers, cathedrals, or masonry
buildings. For instance, Mistral is an expert system to monitor dam safety,
developed in the 1990s by Ismes (Italy). It gets data from an automatic
monitoring system and performs a diagnosis of the state of the dam. Its first
copy, installed in 1992 on the Ridracoli Dam (Italy), is still operational
24/7/365.[13] It has been installed on several dams in Italy and abroad
(e.g., Itaipu Dam in Brazil),[14] and on monuments under the name of
Kaleidos.[15] Mistral is a registered trademark of CESI. GIS has been
successfully used since the '90s in conjunction with DSS, to show on a map
real-time risk evaluations based on monitoring data gathered in the area of
the Val Pola disaster (Italy).[16]

Components
(Figure: design of a drought mitigation decision support system)

Three fundamental components of a DSS architecture are:[17][18][19][20][21]

1. the database (or knowledge base),
2. the model (i.e., the decision context and user criteria), and
3. the user interface.
The users themselves are also important components of the architecture.[17][21]

Taxonomies
Using the relationship with the user as the criterion,
Haettenschwiler[17] differentiates passive, active, and cooperative DSS.
A passive DSS is a system that aids the process of decision making, but
that cannot bring out explicit decision suggestions or solutions. An active
DSS can bring out such decision suggestions or solutions. A cooperative
DSS allows for an iterative process between human and system towards
the achievement of a consolidated solution: the decision maker (or their
advisor) can modify, complete, or refine the decision suggestions provided
by the system, before sending them back to the system for validation, and
likewise the system again improves, completes, and refines the
suggestions of the decision maker and sends them back to them for
validation.

Another taxonomy for DSS, according to the mode of assistance, has been
created by D. Power:[22] he differentiates communication-driven DSS, data-
driven DSS, document-driven DSS, knowledge-driven DSS, and model-
driven DSS.[18]
 A communication-driven DSS enables cooperation, supporting more
than one person working on a shared task; examples include integrated
tools like Google Docs or Microsoft SharePoint Workspace.[23]
 A data-driven DSS (or data-oriented DSS) emphasizes access to and
manipulation of a time series of internal company data and, sometimes,
external data.
 A document-driven DSS manages, retrieves, and
manipulates unstructured information in a variety of electronic formats.
 A knowledge-driven DSS provides specialized problem-
solving expertise stored as facts, rules, procedures or in similar
structures like interactive decision trees and flowcharts.[18]
 A model-driven DSS emphasizes access to and manipulation of a
statistical, financial, optimization, or simulation model. Model-driven
DSS use data and parameters provided by users to assist decision
makers in analyzing a situation; they are not necessarily data-intensive.
Dicodess is an example of an open-source model-driven DSS
generator.[24]
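As an illustration of the model-driven idea, the following sketch evaluates a simple break-even model against user-supplied scenarios. All names and figures are illustrative assumptions, not taken from any real DSS product.

```python
# Illustrative sketch of a model-driven DSS core: a break-even model that
# evaluates user-defined scenarios. Figures and names are assumptions.

def break_even_units(fixed_costs: float, price: float, unit_cost: float) -> float:
    """Units that must be sold before profit turns positive."""
    if price <= unit_cost:
        raise ValueError("price must exceed unit cost")
    return fixed_costs / (price - unit_cost)

def analyze(scenarios):
    """Evaluate several user-defined scenarios, as a model-driven DSS would."""
    return {name: break_even_units(**params) for name, params in scenarios.items()}

print(analyze({
    "base":       {"fixed_costs": 50_000, "price": 25.0, "unit_cost": 15.0},
    "discounted": {"fixed_costs": 50_000, "price": 20.0, "unit_cost": 15.0},
}))  # {'base': 5000.0, 'discounted': 10000.0}
```

The data (costs, prices) and the parameters (scenario definitions) come from the user, and the model does the analysis, which is what distinguishes model-driven from data-driven DSS.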
Using scope as the criterion, Power[25] differentiates enterprise-wide
DSS and desktop DSS. An enterprise-wide DSS is linked to large data
warehouses and serves many managers in the company. A desktop,
single-user DSS is a small system that runs on an individual manager's PC.

Development frameworks
Like other information systems, DSS require a structured approach.
Such a framework includes people, technology, and the development
approach.[19]

The Early Framework of Decision Support System consists of four phases:

 Intelligence – Searching for conditions that call for a decision;
 Design – Developing and analyzing possible alternative courses of action;
 Choice – Selecting a course of action among those developed;
 Implementation – Adopting the selected course of action in the decision
situation.
DSS technology levels (of hardware and software) may include:

1. The actual application that will be used by the user. This is the part of
the application that allows the decision maker to make decisions in a
particular problem area. The user can act upon that particular
problem.
2. The generator: a hardware/software environment that allows people
to easily develop specific DSS applications. This level makes use of
CASE tools or systems such as Crystal, Analytica, and iThink.
3. Tools: lower-level hardware and software on which DSS generators
are built, including special languages, function libraries, and linking
modules.
An iterative developmental approach allows for the DSS to be changed and
redesigned at various intervals. Once the system is designed, it will need to
be tested and revised where necessary for the desired outcome.

Classification
There are several ways to classify DSS applications. Not every DSS fits
neatly into one of the categories, but may be a mix of two or more
architectures.

Holsapple and Whinston[26] classify DSS into the following six frameworks:
text-oriented DSS, database-oriented DSS, spreadsheet-oriented DSS,
solver-oriented DSS, rule-oriented DSS, and compound DSS. A compound
DSS is the most popular classification for a DSS; it is a hybrid system that
includes two or more of the five basic structures.[26]

The support given by DSS can be separated into three distinct, interrelated
categories:[27] Personal Support, Group Support, and Organizational
Support.

DSS components may be classified as:

1. Inputs: Factors, numbers, and characteristics to analyze
2. User knowledge and expertise: Inputs requiring manual analysis by
the user
3. Outputs: Transformed data from which DSS "decisions" are
generated
4. Decisions: Results generated by the DSS based on user criteria
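The four components above can be sketched with the bank loan-officer example mentioned earlier; all field names and thresholds here are illustrative assumptions.

```python
# Minimal sketch of the four DSS components, using the loan-officer
# example from the text; field names and thresholds are illustrative.

def dss_output(applicant):
    """Outputs: transformed data derived from the raw inputs."""
    debt_ratio = applicant["monthly_debt"] / applicant["monthly_income"]
    return {"debt_ratio": debt_ratio, "score": applicant["credit_score"]}

def dss_decision(output, max_ratio=0.4, min_score=650):
    """Decisions: results generated from the outputs per user criteria."""
    ok = output["debt_ratio"] <= max_ratio and output["score"] >= min_score
    return "approve" if ok else "refer to officer"

# Inputs: factors and numbers to analyze; the user's knowledge and
# expertise is reflected in the criteria (max_ratio, min_score).
applicant = {"monthly_income": 5000, "monthly_debt": 1500, "credit_score": 700}
print(dss_decision(dss_output(applicant)))  # approve
```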
DSSs which perform selected cognitive decision-making functions and are
based on artificial intelligence or intelligent agents technologies are
called intelligent decision support systems (IDSS).[28]

The nascent field of decision engineering treats the decision itself as an
engineered object, and applies engineering principles such
as design and quality assurance to an explicit representation of the
elements that make up a decision.

EXECUTIVE INFORMATION SYSTEMS


An executive information system (EIS), also known as an executive
support system (ESS),[1] is a type of management support system that
facilitates and supports senior executive information and decision-
making needs. It provides easy access to internal and external information
relevant to organizational goals. It is commonly considered a specialized
form of decision support system (DSS).[2]

EIS emphasizes graphical displays and easy-to-use user interfaces. They
offer strong reporting and drill-down capabilities. In general, EIS are
enterprise-wide DSS that help top-level executives analyze, compare, and
highlight trends in important variables so that they can monitor performance
and identify opportunities and problems. EIS and data
warehousing technologies are converging in the marketplace.

The term EIS lost popularity in favor of business intelligence (with the sub
areas of reporting, analytics, and digital dashboards).

History
Traditionally, executive information systems were mainframe computer-
based programs. The purpose was to package a company's data and to
provide sales performance or market research statistics for decision
makers, such as marketing directors and chief executive officers, who were not
necessarily well acquainted with computers. The objective was to develop
computer applications that highlighted information to satisfy senior
executives' needs. Typically, an EIS provided only data that supported
executive-level decisions, not all company data.

Today, the application of EIS is not only in typical corporate hierarchies, but
also at lower corporate levels. As some client service companies adopt the
latest enterprise information systems, employees can use their personal
computers to get access to the company's data and identify information
relevant to their decision making. This arrangement provides relevant
information to upper and lower corporate levels.

Components
EIS components can typically be classified as:

 Hardware
 Software
 User interface
 Telecommunications
Hardware
When talking about computer hardware for an EIS environment, we should
focus on the hardware that meets the executive's need. The executive must
be put first and the executive's needs must be defined before the hardware
can be selected. The basic hardware needed for a typical EIS includes four
components:

1. Input data-entry devices. These devices allow the executive to enter,
verify, and update data immediately
2. The central processing unit (CPU), which is the most important
because it controls the other computer system components
3. Data storage files. The executive can use this part to save useful
business information, and this part also helps the executive to search
historical business information easily
4. Output devices, such as a monitor or printer, which provide a visual
or permanent record for the executive to save or read
In addition, with the advent of local area networks (LAN), several EIS
products for networked workstations became available. These systems
require less support and less expensive computer hardware. They also
increase EIS information access to more company users.
Software
Choosing the appropriate software is vital to an effective EIS. Therefore,
the software components and how they integrate the data into one system
are important. A typical EIS includes four software
components:

1. Text base: text-handling software—documents are typically text-based
2. Database: heterogeneous databases on a range of vendor-specific
and open computer platforms help executives access both internal
and external data
3. Graphic base: graphics can turn volumes of text and statistics into
visual information for executives. Typical graphic types are: time
series charts, scatter diagrams, maps, motion graphics, sequence
charts, and comparison-oriented graphs (i.e., bar charts)
4. Model base—EIS models contain routine and special statistical,
financial, and other quantitative analysis
User interface
An EIS must be efficient to retrieve relevant data for decision makers, so
the user interface is very important. Several types of interfaces can be
available to the EIS structure, such as scheduled reports,
questions/answers, menu driven, command language, natural language,
and input/output.
Telecommunication
As decentralization becomes a trend in companies, telecommunications
plays a pivotal role in networked information systems. Transmitting data
from one place to another has become crucial for establishing a reliable
network. In addition, telecommunications within an EIS can accelerate the
need for access to distributed data, for both scientific and business
purposes.

Applications
EIS helps executives find data according to user-defined criteria and
promote information-based insight and understanding. Unlike a
traditional management information system presentation, EIS can
distinguish between vital and seldom-used data, and track different key
critical activities for executives, both of which are helpful in evaluating
whether the company is meeting its corporate objectives. Having
recognized these advantages, organizations have applied EIS in many
areas, especially manufacturing, marketing, and finance.
Manufacturing
Manufacturing is the transformation of raw materials into finished goods for
sale, or intermediate processes involving the production or finishing of
semi-manufactures. It is a large branch of industry and of secondary
production. Manufacturing operational control focuses on day-to-day
operations, and the central idea of this process is effectiveness.
Marketing
In an organization, marketing executives' duty is managing available
marketing resources to create a more effective future. For this, they need
to make judgments about the risk and uncertainty of a project and its
impact on the company in the short and long term. To assist marketing executives in
making effective marketing decisions, an EIS can be applied. EIS provides
sales forecasting, which can allow the market executive to compare sales
forecast with past sales. EIS also offers an approach to product price,
which is found in venture analysis. The market executive can evaluate
pricing as related to competition along with the relationship of product
quality with price charged. In summary, an EIS package enables
marketing executives to manipulate the data by looking for trends,
performing audits of the sales data, and calculating totals, averages,
changes, variances, or ratios.
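The calculations listed above (totals, averages, changes, variances) can be sketched on illustrative monthly figures; the numbers below are invented for the example.

```python
# Sketch of the EIS calculations named above (totals, averages,
# month-on-month changes, forecast variances) on illustrative data.

forecast = [100, 120, 130, 150]
actual   = [ 95, 125, 128, 160]

total_actual = sum(actual)                                   # total
average      = total_actual / len(actual)                    # average
changes      = [b - a for a, b in zip(actual, actual[1:])]   # month-on-month
variance_pct = [round(100 * (a - f) / f, 1)                  # vs. forecast, %
                for f, a in zip(forecast, actual)]

print(total_actual, average, changes, variance_pct)
# 508 127.0 [30, 3, 32] [-5.0, 4.2, -1.5, 6.7]
```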
Financial analysis
Financial analysis is one of the most important steps to companies today.
Executives need to use financial ratios and cash flow analysis to estimate
the trends and make capital investment decisions. An EIS integrates
planning or budgeting with control of performance reporting, and it can be
extremely helpful to finance executives. EIS focuses on financial
performance accountability, and recognizes the importance of cost
standards and flexible budgeting in developing the quality of information
provided for all executive levels.
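As a minimal sketch of such ratio analysis, the check below compares a liquidity ratio against a budgeted floor; the figures and the threshold are illustrative assumptions, not accounting guidance.

```python
# Sketch of a financial-ratio check an EIS might run against a budget;
# figures and the 1.5 floor are illustrative assumptions.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Liquidity ratio: ability to cover short-term obligations."""
    return current_assets / current_liabilities

def flag(ratio: float, floor: float = 1.5) -> str:
    """Compare a ratio against a budgeted floor, as in flexible budgeting."""
    return "ok" if ratio >= floor else "investigate"

r = current_ratio(300_000, 150_000)
print(r, flag(r))  # 2.0 ok
```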

Advantages and disadvantages


Advantages of ESS

 Easy for upper-level executives to use; extensive computer experience
is not required in operations
 Provides strong drill-down capabilities to better analyze the given
information.
 Information that is provided is better understood
 EIS provides timely delivery of information. Management can make
decisions promptly.
 Improves tracking information
 Offers efficiency to decision makers
Disadvantages of ESS
 System dependent
 Limited functionality, by design
 Information overload for some managers
 Benefits hard to quantify
 High implementation costs
 System may become slow, large, and hard to manage
 Need good internal processes for data management
 May lead to less reliable and less secure data
 Excessive cost for small company
Future trends
Current trends free executives from learning different computer operating
systems, and substantially decrease implementation costs. Because these
trends build on existing software applications, executives don't need
to learn a new or special language for the EIS package.

Interactive visualizations are trending, and 3D visualizations in a VR/AR
environment already look like a possibility. Predictive analytics also opens
the door to (machine) learning what comes next based on data from the
past. While the data processing can be done in many ways, the learning is
not completely unsupervised: a good deal of classification still relies on
analysis by expert personnel. In near-real-time scenarios, the latency of
machine learning can be a barrier, so optimizing data models, their size,
and processing paths and times is ongoing work. As more data is captured
at different stages, not only in EIS apps but also in other enterprise apps,
audio and video tagging can catalyse data discovery.

What is Network Topology?


Topology defines the structure of a network: how all of its
components are interconnected. There are two types of topology:
physical and logical.

Types of Network Topology

Physical topology is the geometric representation of all the nodes
in a network. There are six types of network topology: Bus, Ring,
Tree, Star, Mesh, and Hybrid.

Types of Networking Topologies

Two main types of network topologies in computer networks are 1) Physical
topology and 2) Logical topology.

Physical topology: the actual layout of the computer cables and other
network devices.

Logical topology: describes how data flows through the network, which
may differ from its physical layout.

Different types of Physical Topologies are:

 P2P Topology

 Bus Topology

 Ring Topology

 Star Topology

 Tree Topology

 Mesh Topology

 Hybrid Topology
1) Bus Topology
o The bus topology is designed in such a way that all the
stations are connected through a single cable known as a
backbone cable.
o Each node is either connected to the backbone cable by drop
cable or directly connected to the backbone cable.
o When a node wants to send a message, it puts the message on
the backbone cable. All the stations on the network will receive
the message, whether or not it is addressed to them.
o The bus topology is mainly used in 802.3 (ethernet) and
802.4 standard networks.
o The configuration of a bus topology is quite simpler as
compared to other topologies.
o The backbone cable is considered as a "single
lane" through which the message is broadcast to all the
stations.
o The most common access method of the bus topologies
is CSMA (Carrier Sense Multiple Access).
CSMA: It is a media access control used to control the data flow
so that data integrity is maintained, i.e., the packets do not get
lost. There are two alternative ways of handling the problems that
occur when two nodes send the messages simultaneously.
o CSMA CD: CSMA CD (Collision detection) is an access
method used to detect the collision. Once the collision is
detected, the sender will stop transmitting the data.
Therefore, it works on "recovery after the collision".
o CSMA CA: CSMA CA (Collision Avoidance) is an access
method used to avoid the collision by checking whether the
transmission media is busy or not. If busy, then the sender
waits until the media becomes idle. This technique
effectively reduces the possibility of the collision. It does not
work on "recovery after the collision".
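The carrier-sense behaviour described above can be sketched with a toy model. The medium is reduced to a busy/idle callback; real MACs add timing, random backoff, and (for CSMA/CD) abort-on-collision.

```python
# Toy model of CSMA/CA as described above: sense the medium and transmit
# only when it is idle. The busy/idle callback is an illustrative stand-in
# for real carrier sensing.

def csma_ca_send(medium_busy, max_attempts=5):
    """Attempt to transmit, checking the carrier before each try."""
    for attempt in range(max_attempts):
        if not medium_busy():           # carrier sense: is the medium idle?
            return f"sent on attempt {attempt + 1}"
    return "gave up"                    # channel never became idle

# Medium is busy for the first two checks, then idle.
states = iter([True, True, False])
print(csma_ca_send(lambda: next(states)))  # sent on attempt 3
```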

Advantages of Bus topology:

o Low-cost cable: In bus topology, nodes are directly
connected to the cable without passing through a hub.
Therefore, the initial cost of installation is low.
o Moderate data speeds: Coaxial or twisted pair cables are
mainly used in bus-based networks that support upto 10
Mbps.
o Familiar technology: Bus topology is a familiar technology
as the installation and troubleshooting techniques are well
known, and hardware components are easily available.
o Limited failure: A failure in one node will not have any
effect on other nodes.

Disadvantages of Bus topology:

o Extensive cabling: A bus topology is quite simple, but it still
requires a lot of cabling.
o Difficult troubleshooting: It requires specialized test
equipment to determine the cable faults. If any fault occurs
in the cable, then it would disrupt the communication for all
the nodes.
o Signal interference: If two nodes send the messages
simultaneously, then the signals of both the nodes collide
with each other.
o Reconfiguration difficult: Adding new devices to the
network would slow down the network.
o Attenuation: Attenuation is the loss of signal strength over
distance, which leads to communication issues. Repeaters are
used to regenerate the signal.

2) Ring Topology

o Ring topology is like a bus topology, but with connected ends.
o The node that receives the message from the previous
computer will retransmit to the next node.
o The data flows in one direction, i.e., it is unidirectional.
o The data flows in a single loop continuously known as an
endless loop.
o It has no terminated ends, i.e., each node is connected to the
next with no termination point.
o The data in a ring topology flows in a clockwise direction.
o The most common access method of the ring topology
is token passing.
o Token passing: It is a network access method in which
token is passed from one node to another node.
o Token: It is a frame that circulates around the network.

Working of Token passing


o A token moves around the network, and it is passed from
computer to computer until it reaches the destination.
o The sender modifies the token by putting the address along
with the data.
o The data is passed from one device to another until the
destination address matches. Once the token is received by the
destination device, it sends an acknowledgment to the sender.
o In a ring topology, a token is used as a carrier.
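The token-passing steps above can be sketched as a toy simulation; the node names and the token structure are illustrative, not part of any real protocol implementation.

```python
# Toy simulation of token passing: a token carrying the destination
# address circulates the ring until the address matches.

def pass_token(ring, sender, dest, data):
    """Circulate a token from sender until dest matches; return hop count."""
    token = {"dest": dest, "data": data, "ack": False}
    i = ring.index(sender)
    for hops in range(1, len(ring) + 1):    # at most one full trip
        i = (i + 1) % len(ring)             # token moves to the next node
        if ring[i] == dest:                 # address matches: deliver + ack
            token["ack"] = True
            return hops, token
    raise ValueError("destination not on the ring")

ring = ["A", "B", "C", "D"]
hops, token = pass_token(ring, "A", "C", "hello")
print(hops, token["ack"])  # 2 True
```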

Advantages of Ring topology:

o Network Management: Faulty devices can be removed from the
network without bringing the network down.
o Product availability: Many hardware and software tools for
network operation and monitoring are available.
o Cost: Twisted pair cabling is inexpensive and easily
available. Therefore, the installation cost is very low.
o Reliable: It is a more reliable network because the
communication system is not dependent on the single host
computer.

Disadvantages of Ring topology:

o Difficult troubleshooting: It requires specialized test
equipment to determine the cable faults. If any fault occurs
in the cable, then it would disrupt the communication for all
the nodes.
o Failure: The breakdown in one station leads to the failure of
the overall network.
o Reconfiguration difficult: Adding new devices to the
network would slow down the network.
o Delay: Communication delay is directly proportional to the
number of nodes. Adding new devices increases the
communication delay.

3) Star Topology
o Star topology is an arrangement of the network in which
every node is connected to the central hub, switch or a
central computer.
o The central computer is known as a server, and the
peripheral devices attached to the server are known
as clients.
o Coaxial cable or RJ-45 cables are used to connect the
computers.
o Hubs or Switches are mainly used as connection devices in
a physical star topology.
o Star topology is the most popular topology in network
implementation.
Advantages of Star topology

o Efficient troubleshooting: Troubleshooting is quite
efficient in a star topology as compared to bus topology. In a
bus topology, the manager has to inspect the kilometers of
cable. In a star topology, all the stations are connected to
the centralized network. Therefore, the network
administrator has to go to the single station to troubleshoot
the problem.
o Network control: Complex network control features can be
easily implemented in the star topology. Any changes made
in the star topology are automatically accommodated.
o Limited failure: As each station is connected to the central
hub with its own cable, therefore failure in one cable will not
affect the entire network.
o Familiar technology: Star topology is a familiar technology
as its tools are cost-effective.
o Easily expandable: It is easily expandable as new stations
can be added to the open ports on the hub.
o Cost effective: Star topology networks are cost-effective as
it uses inexpensive coaxial cable.
o High data speeds: It supports a bandwidth of approx
100Mbps. Ethernet 100BaseT is one of the most popular Star
topology networks.

Disadvantages of Star topology

o A central point of failure: If the central hub or switch
goes down, then all the connected nodes will not be able to
communicate with each other.
o Cable: Sometimes cable routing becomes difficult when a
significant amount of routing is required.

4) Tree topology
o Tree topology combines the characteristics of bus topology
and star topology.
o A tree topology is a type of structure in which all the
computers are connected with each other in hierarchical
fashion.
o The top-most node in tree topology is known as a root node,
and all other nodes are the descendants of the root node.
o Only one path exists between any two nodes for data
transmission. Thus, it forms a parent-child hierarchy.

Advantages of Tree topology

o Support for broadband transmission: Tree topology is
mainly used to provide broadband transmission, i.e., signals
are sent over long distances without being attenuated.
o Easily expandable: We can add the new device to the
existing network. Therefore, we can say that tree topology is
easily expandable.
o Easily manageable: In tree topology, the whole network is
divided into segments known as star networks which can be
easily managed and maintained.
o Error detection: Error detection and error correction are
very easy in a tree topology.
o Limited failure: The breakdown in one station does not
affect the entire network.
o Point-to-point wiring: It has point-to-point wiring for
individual segments.
Disadvantages of Tree topology

o Difficult troubleshooting: If any fault occurs in the node,
then it becomes difficult to troubleshoot the problem.
o High cost: Devices required for broadband transmission are
very costly.
o Failure: A tree topology mainly relies on main bus cable and
failure in main bus cable will damage the overall network.
o Reconfiguration difficult: If new devices are added, then it
becomes difficult to reconfigure.

5) Mesh topology

o Mesh topology is an arrangement of the network in which
computers are interconnected with each other through
various redundant connections.
o There are multiple paths from one computer to another
computer.
o It does not contain the switch, hub or any central computer
which acts as a central point of communication.
o The Internet is an example of the mesh topology.
o Mesh topology is mainly used for WAN implementations
where communication failures are a critical concern.
o Mesh topology is mainly used for wireless networks.
o The number of cables needed for a full mesh is given by the formula:
Number of cables = (n*(n-1))/2
where n is the number of nodes in the network.
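The formula can be checked directly for a few network sizes:

```python
# The mesh cable-count formula, expressed as a function and checked
# for a few network sizes.

def mesh_cables(n: int) -> int:
    """Cables needed to fully connect a mesh of n nodes: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (2, 3, 5, 10):
    print(n, mesh_cables(n))  # cables: 1, 3, 10, 45
```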

Mesh topology is divided into two categories:

o Fully connected mesh topology
o Partially connected mesh topology

o Full Mesh Topology: In a full mesh topology, each computer is
connected to all the other computers in the network.
o Partial Mesh Topology: In a partial mesh topology, not all
but certain computers are connected to those computers
with which they communicate frequently.

Advantages of Mesh topology:

Reliable: Mesh topology networks are very reliable: the breakdown
of any single link does not affect communication between the other
connected computers.
Fast Communication: Communication is very fast between the
nodes.

Easier Reconfiguration: Adding new devices would not disrupt
the communication between other devices.

Disadvantages of Mesh topology

o Cost: A mesh topology contains a large number of
connected devices, such as routers, and more transmission
media than other topologies.
o Management: Mesh topology networks are very large and
difficult to maintain and manage. If the network is not
monitored carefully, a communication link failure may go
undetected.
o Efficiency: In this topology, the number of redundant
connections is high, which reduces the efficiency of the
network.

6) Hybrid Topology
o The combination of two or more different topologies is
known as Hybrid topology.
o A Hybrid topology is a connection between different links
and nodes to transfer the data.
o Combining two or more different topologies produces a
Hybrid topology; connecting similar topologies to each other
does not. For example, if a ring topology exists in one
branch of ICICI bank and a bus topology in another branch,
connecting these two topologies results in a Hybrid
topology.

Advantages of Hybrid Topology

o Reliable: A fault in any part of the network does not
affect the functioning of the rest of the network.
o Scalable: The size of the network can be easily expanded by
adding new devices without affecting the functionality of the
existing network.
o Flexible: This topology is very flexible, as it can be designed
according to the requirements of the organization.
o Effective: Hybrid topology is very effective, as it can be
designed in such a way that the strengths of the network are
maximized and its weaknesses are minimized.

Disadvantages of Hybrid topology

o Complex design: The major drawback of the Hybrid
topology is the design of the network; it is very difficult to
design the architecture of a Hybrid network.
o Costly Hub: The hubs used in a Hybrid topology are very
expensive, as they differ from the usual hubs used in other
topologies.
o Costly infrastructure: The infrastructure cost is very high,
as a hybrid network requires a lot of cabling, network
devices, etc.

Transmission modes

o The way in which data is transmitted from one device to
another device is known as the transmission mode.
o The transmission mode is also known as the communication
mode.
o Each communication channel has a direction associated with
it, provided by the transmission medium; therefore, the
transmission mode is also known as a directional mode.
o The transmission mode is defined in the physical layer.
The Transmission mode is divided into three categories:
o Simplex mode
o Half-duplex mode
o Full-duplex mode

Simplex mode

o In simplex mode, the communication is unidirectional, i.e.,
the data flows in one direction only.
o A device can only send data but cannot receive it, or it
can only receive data but cannot send it.
o This transmission mode is not very popular, as most
communications require a two-way exchange of data. The
simplex mode is used in fields such as sales that do not
require a corresponding reply.
o A radio station is a simplex channel, as it transmits the
signal to the listeners but never allows them to transmit
back.
o A keyboard and a monitor are examples of simplex mode:
a keyboard can only accept data from the user, and a
monitor can only display data on the screen.
o The main advantage of simplex mode is that the full
capacity of the communication channel can be utilized
during transmission.
Advantage of Simplex mode:

o In simplex mode, the station can utilize the entire bandwidth
of the communication channel, so more data can be
transmitted at a time.

Disadvantage of Simplex mode:

o Communication is unidirectional, so there is no
intercommunication between devices.

Half-Duplex mode

o In a half-duplex channel, the direction can be reversed, i.e.,
the station can both transmit and receive data.
o Messages flow in both directions, but not at the same
time.
o The entire bandwidth of the communication channel is
utilized in one direction at a time.
o In half-duplex mode, it is possible to perform error
detection; if any error occurs, the receiver requests
the sender to retransmit the data.
o A walkie-talkie is an example of half-duplex mode: one
party speaks, and the other party listens. After a pause, the
other speaks and the first party listens. Speaking
simultaneously creates a distorted sound which cannot be
understood.

Advantage of Half-duplex mode:

o In half-duplex mode, both devices can send and receive
data, and each can utilize the entire bandwidth of the
communication channel during its transmission.

Disadvantage of Half-Duplex mode:

o In half-duplex mode, when one device is sending data,
the other has to wait; this causes a delay in sending
data at the right time.

Full-duplex mode

o In full-duplex mode, the communication is bidirectional, i.e.,
the data flows in both directions.
o Both stations can send and receive messages
simultaneously.
o Full-duplex mode has two simplex channels: one channel
carries traffic in one direction, and the other channel carries
traffic in the opposite direction.
o Full-duplex mode is the fastest mode of communication
between devices.
o The most common example of full-duplex mode is the
telephone network. When two people communicate with
each other over a telephone line, both can talk and listen
at the same time.

Advantage of Full-duplex mode:

o Both stations can send and receive data at the same
time.

Disadvantage of Full-duplex mode:

o If no dedicated path exists between the devices,
the capacity of the communication channel is divided
into two parts.
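The difference between a simplex and a full-duplex channel can be illustrated with a minimal Python sketch: `os.pipe()` behaves like a simplex channel (data flows from the write end to the read end only), while `socket.socketpair()` gives a full-duplex channel on which both ends can send and receive. This is only an analogy at the programming level, not a model of the physical media discussed above.

```python
import os
import socket

# Simplex: a pipe has one write end and one read end; data flows one way.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"one-way message")
print(os.read(read_fd, 100))  # only the read end can receive
os.close(read_fd)
os.close(write_fd)

# Full-duplex: both ends of a socket pair can send and receive.
a, b = socket.socketpair()
a.sendall(b"hello from A")
b.sendall(b"hello from B")
print(b.recv(100))  # b receives A's message
print(a.recv(100))  # a receives B's message
a.close()
b.close()
```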

Differences b/w Simplex, Half-duplex and Full-duplex mode

Basis for       Simplex mode          Half-duplex mode      Full-duplex mode
comparison

Direction of    The communication     The communication     The communication
communication   is unidirectional.    is bidirectional,     is bidirectional.
                                      but one direction
                                      at a time.

Send/Receive    A device can only     Both devices can      Both devices can
                send the data but     send and receive      send and receive
                cannot receive it,    the data, but one     the data
                or it can only        at a time.            simultaneously.
                receive the data
                but cannot send it.

Performance     The performance of    The performance of    Full-duplex mode
                half-duplex mode is   full-duplex mode is   has the best
                better than the       better than the       performance of the
                simplex mode.         half-duplex mode.     three, as it
                                                            doubles the
                                                            utilization of the
                                                            capacity of the
                                                            communication
                                                            channel.

Example         Examples of simplex   An example of half-   An example of
                mode are radio,       duplex is the         full-duplex mode
                keyboard, and         walkie-talkie.        is the telephone
                monitor.                                    network.

Lecture 4
Computer networks
A computer network is a set of computers connected together for the
purpose of sharing resources. The most common resource shared today is a
connection to the Internet. Other shared resources can include a printer or a file
server. A network is a collection of computers connected to each other by means of
data transmission. The data transmission means can consist of the following
elements: computers connected to each other by cable, satellite, telephone,
fiber-optic or radio links; various types of transducers and transmitters; as well as
other elements and devices.
The architecture of a computer network defines the principles of operation
and installation of hardware and software of network elements.
Modern networks can be classified according to a number of characteristics:
the distance between computers; topology; purpose; the number of services
provided; principles of centralized or decentralized management; switching
methods (non-switched, telephone switching, circuit switching, message, packet
and datagram switching); the type of transmission medium, etc.
Depending on the distance between computers, networks are divided into
two classes: local and global networks.
Any global network can be connected to other global networks, to local area
networks, to computers that are connected to it separately and remotely, and to
separate I/O devices.
There are four main types of global networks: urban, regional, national and
transnational. Printing or copying devices, cash registers and bank machines,
displays and fax machines located at certain distances from each other may be used
as input and output devices. Global networks expand the scope of local
networks to include networks located in different buildings, cities, regions and
countries. Typically, global networks limit their reach to the range of services
provided by a regional company; examples of such companies include Bell,
Pacific Bell, AT&T, Sprint and MCI.
Global networks are connected by serial lines, which have lower data
rates than local networks. Typically, global networks include the following
devices:
Routers. These provide connectivity between local area networks
and manage the global network through an interface.
ATM switches. These are used for high-speed switching of cells
between local and global networks.
X.25 and Frame Relay switches. These connect private and public
data transmission channels over which digital signals are sent.
Modems. These connect private and public transmission channels
over which analog signals are sent.
Channel and data service units (CSU, DSU - Channel Service
Unit, Data Service Unit). This equipment is located at the customer premises
(CPE - Customer Premises Equipment) and is used by the client as the
terminating equipment of a digital channel. These devices are connected to a
central telephone exchange (CO - Central Office), i.e. the switching node of the
telephone company closest to the customer.
Access servers. These are usually dial-in/dial-out communication
servers, which allow remote users to dial in and connect to the local
network. An example is the Cisco AS5200 series communication server.
Multiplexers. These allow several signals to be sent at once through
a single physical channel.
The centralized management scheme of computing processes on the basis of
medium and large computers (Main frame) has recently been replaced by "client-
server" technology.
In a centralized management scheme, all computing resources, data and their
processing programs are concentrated in one computer. Users access machine
resources through terminals (displays). The terminals are connected to the
computer either via an interface or through telephone lines (if the terminals are
located at a distance). The main function of the terminal is to display the
information provided to the user. The advantages of such a scheme are ease of
management, the possibility of software improvements and information security.
The disadvantages are low reliability (a computer failure means the collapse of the
entire computing process), the difficulty of scaling hardware and software, and,
as a rule, a decrease in efficiency as the number of network users increases.
In the client-server architecture, the terminals are replaced by client
computers, and the mainframe by one or more powerful computers (servers),
which are allocated to solve common information processing problems.
The advantages of such a model are the greater resilience and reliability of the
computing system, the ability of the user to work with several applications
simultaneously, high efficiency of information processing, the provision of a
high-quality user interface, etc.
Because LCNs operate within an organization (corporation, enterprise), such
networks are often referred to as corporate systems or networks. In this case the
computers are usually located inside one room, building or adjacent buildings.
Regardless of the network on which a computer operates, the function of the
software installed on that computer can be divided into two groups: those that
manage the computer's own resources and those that manage the exchange with
other computers.
Usually the computer's own resources are managed by the operating system.
Network resources are managed by network software, which is implemented either
as a separate package in the form of a network program or through a network
operating system.
A hierarchical approach is used in network software. Here, the separate levels
and the interfaces between them must be predefined. As a result, it is possible to
improve the program of any level, provided that the other levels are not touched.
In general, the function of any level can be simplified and, if necessary,
completely eliminated.
The International Organization for Standardization (ISO) has proposed the
Open System Interconnection (OSI) model, which provides for the interconnection
of open systems in order to regulate the operation of network software and to
organize the interaction of any computer system.
The OSI reference model defines the following seven layers:
Physical layer;
Data link layer;
Network layer;
Transport layer;
Session layer;
Presentation layer;
Application layer.

Local computer networks

The following are used as the main hardware components of local computer
networks (LCN):
Workstations;
Servers;
Interface cards;
Cables.
Workstations (WS) - personal computers used as a network user's
workplace. The requirements for a WS are determined by the characteristics of the
problems solved in the network, the principle of organization of computational
processes, the OS used and a number of other factors. For example, if the network
uses the MS Windows for Workgroups OS, then it is advisable to use Pentium-class
processors in the WS.
In some cases a WS is connected directly to the network cable and has no
need for storage on magnetic disks. Such WSs are called diskless WSs.
However, in this case, for the OS to be downloaded from the file server to
the WS, the network adapter must have a suitable chip that allows remote
booting. This chip is used as an extension of the basic input-output system
(BIOS). It writes the OS loading program to the WS's RAM. The main
advantages of such diskless WSs are that they are cheap and do not allow
unauthorized access to the user's programs or computer viruses. The
disadvantage is that a diskless WS cannot work autonomously (unless it is
connected to the server) and has no data or software archive of its own.
In an LCN, servers perform the function of distributing network resources.
Usually the server function is performed by a sufficiently powerful personal
computer, mainframe or special computer. Each server can be either separate or
part of a WS. In the latter case, not all of the server's resources, but only a
part of them, can be shared.
If there are several servers in the LCN, then each server serves the WSs
connected to it. A domain is the set consisting of a server and the WSs
connected to it. In some cases, a domain has multiple servers. One of these
servers is the main server, and the rest are backup servers or logical
extensions of the main server.
Computer Network Types
A computer network is a group of computers linked to each other
that enables each computer to communicate with the others
and share resources, data, and applications.

A computer network can be categorized by its size.


A computer network is mainly of four types:

o LAN (Local Area Network)
o PAN (Personal Area Network)
o MAN (Metropolitan Area Network)
o WAN (Wide Area Network)

LAN (Local Area Network)

o A Local Area Network is a group of computers connected to
each other in a small area such as a building or office.
o A LAN is used for connecting two or more personal computers
through a communication medium such as twisted pair,
coaxial cable, etc.
o It is less costly, as it is built with inexpensive hardware such
as hubs, network adapters, and Ethernet cables.
o Data is transferred at a very high rate in a Local Area
Network.
o A Local Area Network provides higher security.

PAN (Personal Area Network)

o A Personal Area Network is a network arranged around an
individual person, typically within a range of 10 meters
(about 30 feet).
o A Personal Area Network is used for connecting computer
devices of personal use.
o Thomas Zimmerman was the first research scientist to
propose the idea of the Personal Area Network.
o Personal devices used to build a personal area network
include laptops, mobile phones, media players and game
consoles.
There are two types of Personal Area Network:

o Wired Personal Area Network
o Wireless Personal Area Network

Wireless Personal Area Network: A Wireless Personal Area
Network is created by using wireless technologies such as WiFi
or Bluetooth. It is a low-range network.

Wired Personal Area Network: A Wired Personal Area Network is
created by using USB.

Examples Of Personal Area Network:

o Body Area Network: A Body Area Network is a network that
moves with a person. For example, a mobile network
moves with a person: a person establishes a network
connection and then creates a connection with another
device to share information.
o Offline Network: An offline network can be created inside
the home, so it is also known as a home network. A home
network is designed to integrate devices such as printers,
computers and televisions, but they are not connected to
the internet.
o Small Home Office: It is used to connect a variety of
devices to the internet and to a corporate network using a
VPN.

MAN (Metropolitan Area Network)

o A Metropolitan Area Network is a network that covers a
larger geographic area by interconnecting different LANs to
form a larger network.
o Government agencies use a MAN to connect to citizens
and private industries.
o In a MAN, various LANs are connected to each other through
a telephone exchange line.
o The most widely used protocols in MANs are RS-232, Frame
Relay, ATM, ISDN, OC-3, ADSL, etc.
o It has a greater range than a Local Area Network (LAN).
Uses Of Metropolitan Area Network:

o A MAN is used for communication between banks in a city.
o It can be used in airline reservation systems.
o It can be used in a college within a city.
o It can also be used for communication in the military.

WAN (Wide Area Network)

o A Wide Area Network is a network that extends over a large
geographical area such as states or countries.
o A Wide Area Network is a much bigger network than a LAN.
o A Wide Area Network is not limited to a single location; it
spans a large geographical area through telephone lines,
fibre optic cables or satellite links.
o The internet is one of the biggest WANs in the world.
o A Wide Area Network is widely used in the fields of business,
government, and education.
Examples Of Wide Area Network:

o Mobile Broadband: A 4G network is widely used across a
region or country.
o Last mile: A telecom company provides internet services to
customers in hundreds of cities by connecting their homes
with fiber.
o Private network: A bank provides a private network that
connects its 44 offices. This network is made by using a
telephone leased line provided by the telecom company.

Advantages Of Wide Area Network:

Following are the advantages of the Wide Area Network:

o Geographical area: A Wide Area Network covers a large
geographical area. If a branch of our office is in a different
city, we can connect with it through a WAN. The internet
provides a leased line through which we can connect with
another branch.
o Centralized data: In the case of a WAN network, data is
centralized. Therefore, we do not need to buy separate
email, file or backup servers.
o Get updated files: Software companies work on a live
server, so programmers get updated files within seconds.
o Exchange messages: In a WAN network, messages are
transmitted fast. Web applications like Facebook,
WhatsApp and Skype allow you to communicate with friends.
o Sharing of software and resources: In a WAN network, we
can share software and other resources like hard drives and
RAM.
o Global business: We can do business over the internet
globally.
o High bandwidth: If we use leased lines for our company,
this gives high bandwidth. The high bandwidth increases the
data transfer rate, which in turn increases the productivity
of our company.

Disadvantages of Wide Area Network:

The following are the disadvantages of the Wide Area Network:

o Security issue: A WAN network has more security issues
compared to LAN and MAN networks, as many technologies
are combined together, which creates security problems.
o Needs firewall & antivirus software: The data is
transferred over the internet, where it can be altered or
hacked, so a firewall needs to be used. Some people can
inject viruses into the system, so antivirus software is
needed for protection.
o High setup cost: The installation cost of a WAN network is
high, as it involves the purchase of routers and switches.
o Troubleshooting problems: It covers a large area, so
fixing a problem is difficult.

Internetwork
o An internetwork is defined as two or more computer
networks (LANs, WANs or network segments) connected
using devices and configured by a local addressing scheme.
This process is known as internetworking.
o An interconnection between public, private, commercial,
industrial, or government computer networks can also be
defined as internetworking.
o An internetwork uses the internet protocol.
o The reference model used for internetworking is Open
System Interconnection (OSI).

Types Of Internetwork:

1. Extranet: An extranet is a communication network based on
internet protocols such as the Transmission Control Protocol
and the Internet Protocol. It is used for information sharing.
Access to the extranet is restricted to only those users who
have login credentials. An extranet is the lowest level of
internetworking. It can be categorized as a MAN, WAN or other
computer network. An extranet cannot consist of a single LAN;
it must have at least one connection to an external network.

2. Intranet: An intranet is a private network based on
internet protocols such as the Transmission Control Protocol
and the Internet Protocol. An intranet belongs to an
organization and is only accessible by the organization's
employees or members. The main aim of the intranet is to share
information and resources among the organization's employees.
An intranet provides the facility to work in groups and for
teleconferences.

Intranet advantages:

o Communication: It provides cheap and easy communication.
An employee of the organization can communicate with
another employee through email or chat.
o Time-saving: Information on the intranet is shared in real
time, so it is time-saving.
o Collaboration: Collaboration is one of the most important
advantages of the intranet. The information is distributed
among the employees of the organization and can only be
accessed by authorized users.
o Platform independency: It has a neutral architecture, as a
computer can be connected to other devices with different
architectures.
o Cost effective: People can view data and documents using
a browser and distribute duplicate copies over the intranet.
This leads to a reduction in cost.

Lecture 5 Central Processor Unit and its components

A computer is an electronic device that processes input data and produces a result
(output) according to a set of instructions called a program.
A computer performs basically five major functions irrespective of its size and
make:
 It accepts data or instructions by way of input
 It stores data
 It processes data as required by the user
 It controls the operations of a computer
 It gives results in the form of output
In order to carry out the operations mentioned above, the computer allocates the
tasks among its various functional units.
Let us consider each node of this structure. A computer receives data and
instructions (commands) through input devices; these are processed in the CPU,
and the result is shown through output devices.
The main and secondary memory are used to store data inside the computer.
These are the basic components with which a computer processes data. Now let us
consider the following parts:
1) INPUT DEVICES
– Whatever is put into a computer system.
• Converts external-world data to a binary format which can be understood by the
CPU. Input devices are used to enter data and instructions into the computer. Let
us discuss some of them.
KEYBOARD
This is the most common input device which uses an arrangement of buttons or
keys. In a keyboard each press of a key typically corresponds to a single written
symbol. However some symbols require pressing and holding several keys
simultaneously or in sequence. While most keyboard keys produce letters, numbers
or characters, other keys or simultaneous key presses can produce actions or
computer commands. In normal usage, the keyboard is used to type text and
numbers while in a modern computer, the interpretation of key press is generally
left to the software. A computer keyboard distinguishes each physical key from
every other and reports the key-presses to the controlling software. Keyboards are
also used for computer gaming, either with regular keyboards or by using
keyboards with special gaming features. Apart from the alphabet keys (26 keys),
there are several other keys for various purposes, such as:
 Number keys - The 10 number keys 0-9 are there on each keyboard.
Sometimes, there are two sets of these keys.
 Direction keys - There are four direction keys: left, right, up and down,
which allow the cursor to move in these directions. Unlike alphabet and number
keys, these keys do not display anything.
 Function keys - There are generally 12 function keys, F1-F12. These keys
have special tasks, and the tasks may change from program to program. Just like
direction keys, these too do not print anything.
 Other keys - There are several other non-printing keys for various
purposes. These include caps lock, tab, ctrl, pause, delete, backspace, spacebar,
shift, enter etc., which are used for special purposes.

Whenever a key is pressed, a specific signal is transmitted to the computer. The
keyboard uses a crossbar network to identify every key. When a key is pressed, an
electrical contact is formed. These electric signals are transmitted to a
microcontroller, which sends a coded value to the computer describing the
character that corresponds to that key. The theory of codes is in itself a vast field
of study. However, in Appendices I, II, III and IV we have discussed the most
common codes, namely BCD, ASCII, ISCII and Unicode.
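For instance, ASCII (one of the codes mentioned above) assigns each character a numeric code. Python's built-in `ord` and `chr` functions expose this mapping, roughly the way a keyboard controller reports a coded value for each key press:

```python
# Each printable character has a numeric ASCII code.
for ch in "Az9 ":
    print(ch, "->", ord(ch))
# A -> 65, z -> 122, 9 -> 57, space -> 32

# The mapping is reversible: received codes are turned back into characters.
codes = [72, 105]
print("".join(chr(c) for c in codes))  # Hi
```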

MOUSE
A mouse is a pointing device that functions by detecting two-dimensional motion
relative to its supporting surface. The mouse's motion typically translates into the
motion of a cursor on a display, which allows for fine control of a Graphical User
Interface. A mouse primarily comprises three parts: the buttons, the handling
area, and the rolling object. By default, the mouse is configured to work for the
right hand. If you are left-handed, the settings can be changed to suit your needs.
Not all mice use the same mechanical operation, but all of them accomplish
the same task. Some of them use a tracking ball at the bottom, and some of them
use a light beam to detect the motion of the mouse. Laptops are equipped with
a small flat surface, or sometimes a very short stick, for performing the same job
as a mouse. Using the left button of the mouse, different operations like selecting,
dragging, moving and pasting can be done. With the right button we can open a
context menu for an item, if applicable.

OTHER INPUT DEVICES


Light Pen
It is a light sensitive stylus attached to a video terminal to draw pictures or to
select menu options.

Touch screen
This device allows interacting with the computer without any
intermediate device. You may see it in KIOSKS installed in
various public places.

Graphics tablet
This device is used to enter data using a stylus. Most commonly it is
used to enter digital signatures.

Joystick
It is an input device consisting of a stick that pivots on a base and
translates its angle or direction as data. Joysticks are often used to
control inputs in video games.

Microphone
It is used to input audio data into the computer. They are mainly
used for sound recording.

Optical Character Reader (OCR)
It is used to convert images of text into machine-editable text. It is
widely used to convert books and documents into electronic files,
to computerize a record-keeping system in an office, or to publish
text on a website.

Scanner
It is a device that optically scans images, printed text or an object
and converts it to a digital image.

Smart Card Reader
It is used to access the microprocessor of a smart card. There are
two broad categories of smart cards - memory cards and
microprocessor cards. Memory cards contain only non-volatile
memory storage components and some specific security logic.
Microprocessor cards contain volatile memory and
microprocessor components. The card is made of plastic, generally PVC. Smart
cards are used in large companies and organizations for strong security
authentication.
Bar Code Reader
This device reads bar codes as input data. It consists of a light
source, a lens and a light sensor which translates optical impulses
into electrical signals. It also contains decoder circuitry which
analyzes the barcode's image data and sends the barcode's content
to the scanner's output port.

Biometric Sensors
It is used to recognize individuals based on physical or
behavioral traits. Biometric sensor is used to mark attendance of
employees/students in organizations/institutions. It is also
popular as a security device to provide restricted entry for
secured areas.

Web Camera
This captures video as data for the computer with reasonably good
quality. It is commonly used for web chats.

2) OUTPUT DEVICES:
These are used to display results on video display or are used to print the result.
These can also be used to store the result for further use.
Monitor or VDU:
It is the most common output device. It looks like a TV. Its
display may be CRT, LCD, plasma or touch sensitive.

Speakers :
These are used to listen to the audio output of computer.

Printers :
These are used to produce hard copy of output as text or
graphics.

Dot Matrix Printer:
This printer prints characters by striking an ink-soaked ribbon
against the paper. It can also be used to generate carbon
copies.

Inkjet/Deskjet/Bubblejet printers:
These are all low-cost printers which use a controlled stream
of ink for printing.

Laser Printers:
These printers use laser technology to produce printed
documents. They are very fast and are used for high-quality
prints.

Plotters:
These are used to print graphics. They are mainly used in
computer-aided design.

3) CPU (CENTRAL PROCESSING UNIT):

This device is responsible for processing data and instructions (commands).
• The "brain" of the machine
• Responsible for carrying out computational tasks
• Contains the ALU, CU and registers
• The ALU performs arithmetic and logical operations
• The CU provides control signals in accordance with some timings, which in turn
control the execution process
• Registers store data and results and speed up the operation
This unit can be divided into:
 Control Unit
 Arithmetic and Logical Unit (ALU)
The ALU can itself be divided into two parts, arithmetical and logical. First,
consider the Control Unit.
CONTROL UNIT
This unit coordinates the various operations of the computer:
 It directs the sequence of operations
 It interprets the instructions of a program in the storage unit and produces
signals to execute them
 It directs the flow of data and instructions in the computer system
ARITHMETIC AND LOGICAL UNIT
This unit is responsible for performing various arithmetic operations, including
addition, subtraction, multiplication, division, and relational operations, as well
as logical operations (yes/no, true/false logical functions).

LOGICAL OPERATIONS

The logical part of this unit is usually used to carry out comparison and
control operations.
MEMORY UNITS
• Stores data, results, and programs
• Two classes of storage: (i) primary (ii) secondary
• Two types of primary memory: RAM (read/write memory) and ROM (read-only memory)
• ROM is used to store data and programs that are not going to change
• Secondary storage is used for bulk or mass storage
The main or primary memory stores information and instructions and consists of
two main parts:
 Random Access Memory (RAM)
 Read Only Memory(ROM)

RAM
Random Access Memory is used for primary storage in computers to hold the
data and instructions currently in use.
ROM
ROM (Read Only Memory) is used to store the instructions provided by the
manufacturer, which check basic hardware interconnections and load the
operating system from the appropriate storage device.
UNITS OF MEMORY:
The elementary unit of memory is a bit. A group of 4 bits is called a nibble and a
group of
8 bits is called a byte. One byte is the minimum space required to store one
character.
Other units of memory are:
1 KB (Kilo Byte) = 2^10 bytes = 1024 bytes
1 MB (Mega Byte) = 2^10 KB = 1024 KB
1 GB (Giga Byte) = 2^10 MB = 1024 MB
1 TB (Tera Byte) = 2^10 GB = 1024 GB
1 PB (Peta Byte) = 2^10 TB = 1024 TB
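Since each step in this table multiplies by 2^10 = 1024, the conversions can be sketched with bit shifts. The `to_bytes` helper below is a hypothetical illustration, not part of any standard library:

```c
#include <stdint.h>

/* Convert n units into bytes: level 1 = KB, 2 = MB, 3 = GB, 4 = TB, 5 = PB.
   Each level multiplies by 2^10, i.e. a left shift by 10 bits. */
static uint64_t to_bytes(uint64_t n, int level) {
    return n << (10 * level);
}
```

For example, to_bytes(4, 3) gives the number of bytes in 4 GB (4 * 2^30 = 4294967296).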
BUS STRUCTURE

[Diagram: the CPU, with its MAR and MDR registers, is connected to memory and
to input/output devices through the data bus, address bus, and control bus.]

A bus structure is a group of wires that carries information from the CPU to
peripherals, or vice versa.
Communication bus
In computer architecture, a bus is a system that transfers data between computer
components or between computers.
Address bus
This bus is used to specify the address of a memory location.
The width of the address bus determines the number of memory locations that
can be addressed. For example, a system with a 64-bit address bus can address
2^64 memory locations.
Data bus
This bus is the medium that transfers data from one place to another
in a computer system.
Control bus
This bus carries the signals that report the status of a
device. For example, one wire of the bus indicates whether the CPU is currently
reading from or writing to main memory.
REGISTERS
Registers are fast stand-alone storage locations that hold data
temporarily. Multiple registers are needed to facilitate the
operation of the CPU. Some of these registers are:
 1. Two registers, MAR (Memory Address Register) and MDR (Memory Data
Register), handle the data transfer between main memory and the processor.
MAR holds addresses; MDR holds data.
 2. Instruction register (IR): holds the instruction that is
currently being executed.
 3. Program counter (PC): points to the next instruction that is
to be fetched from memory.
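As a rough illustration of how these registers cooperate during the fetch phase, here is a toy sketch in C. The `Cpu` structure, the 16-bit word size, and the `fetch` helper are hypothetical illustrations, not a model of any real processor:

```c
#include <stdint.h>

/* Toy register file: program counter, instruction register,
   memory address register, and memory data register. */
typedef struct {
    uint16_t pc, ir, mar, mdr;
} Cpu;

/* One fetch step: MAR gets the address, MDR gets the word read
   from memory, IR gets the instruction, and PC moves on. */
static uint16_t fetch(Cpu *cpu, const uint16_t *mem) {
    cpu->mar = cpu->pc;        /* MAR holds the address to read       */
    cpu->mdr = mem[cpu->mar];  /* MDR holds the data just transferred */
    cpu->ir  = cpu->mdr;       /* IR holds the current instruction    */
    cpu->pc += 1;              /* PC points at the next instruction   */
    return cpu->ir;
}
```

Each call returns the next instruction word and advances the program counter.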

Computer Network Architecture


Computer Network Architecture is defined as the physical and
logical design of the software, hardware, protocols, and media used for
data transmission. Simply put, it describes how computers
are organized and how tasks are allocated among them.

Two types of network architecture are used:

o Peer-To-Peer network
o Client/Server network
Peer-To-Peer network

o Peer-To-Peer network is a network in which all the computers


are linked together with equal privilege and responsibilities
for processing the data.
o Peer-To-Peer network is useful for small environments,
usually up to 10 computers.
o Peer-To-Peer network has no dedicated server.
o Special permissions are assigned to each computer for
sharing the resources, but this can lead to a problem if the
computer with the resource is down.

Advantages Of Peer-To-Peer Network:

o It is less costly as it does not contain any dedicated server.


o If one computer stops working, the other computers will keep
working.
o It is easy to set up and maintain as each computer manages
itself.

Disadvantages Of Peer-To-Peer Network:

o A Peer-To-Peer network does not contain a centralized system.
Therefore, the data cannot be backed up centrally, as it is
scattered across different locations.
o It has a security issue, as each device manages itself.

Client/Server Network

o Client/Server network is a network model designed for the


end users called clients, to access the resources such as
songs, video, etc. from a central computer known as Server.
o The central controller is known as a server while all other
computers in the network are called clients.
o A server performs all the major operations such as security
and network management.
o A server is responsible for managing all the resources such
as files, directories, printer, etc.
o All the clients communicate with each other through a
server. For example, if client1 wants to send some data to
client 2, then it first sends the request to the server for the
permission. The server sends the response to the client 1 to
initiate its communication with the client 2.

Advantages Of Client/Server network:

o A Client/Server network contains the centralized system.


Therefore we can back up the data easily.
o A Client/Server network has a dedicated server that
improves the overall performance of the whole system.
o Security is better in Client/Server network as a single server
administers the shared resources.
o It also increases the speed of resource sharing.

Disadvantages Of Client/Server network:

o Client/Server network is expensive as it requires the server


with large memory.
o A server has a Network Operating System(NOS) to provide
the resources to the clients, but the cost of NOS is very high.
o It requires a dedicated network administrator to manage all
the resources.

Lecture 5

Technological aspects of the Internet

General Information. Internet network architecture Types of Internet


connection. Telecommunications

The Internet is the global system of interconnected computer networks


that uses the Internet protocol suite (TCP/IP) to communicate between networks
and devices. It is a network of networks that consists of private, public, academic,
business, and government networks of local to global scope, linked by a broad
array of electronic, wireless, and optical networking technologies. The Internet
carries a vast range of information resources and services, such as the inter-linked
hypertext documents and applications of the World Wide Web (WWW), electronic
mail, telephony, and file sharing.

The origins of the Internet date back to the development of packet


switching and research commissioned by the United States Department of Defense
in the 1960s to enable time-sharing of computers. The primary precursor network,
the ARPANET, initially served as a backbone for interconnection of regional
academic and military networks in the 1970s. The funding of the National Science
Foundation Network as a new backbone in the 1980s, as well as private funding for
other commercial extensions, led to worldwide participation in the development of
new networking technologies, and the merger of many networks. The linking of
commercial networks and enterprises by the early 1990s marked the beginning of
the transition to the modern Internet, and generated a sustained exponential growth
as generations of institutional, personal, and mobile computers were connected to
the network. Although the Internet was widely used by academia in the 1980s,
commercialization incorporated its services and technologies into virtually every
aspect of modern life.

Internet network architecture is defined as the arrangement of the
different kinds of computer and network hardware required to set up
Internet technology. Different types of devices and hardware are
required to set up the Internet network architecture. The architecture of the
Internet is commonly described as having four layers above the physical media,
each providing a distinct function: a “link” layer providing local packet delivery
over heterogeneous physical networks, a “network” layer providing best-effort
global packet delivery, a “transport” layer providing end-to-end communication
between hosts, and an “application” layer supporting network applications.

Most traditional communication media, including telephony, radio,


television, paper mail and newspapers are reshaped, redefined, or even bypassed
by the Internet, giving birth to new services such as email, Internet telephony,
Internet television, online music, digital newspapers, and video streaming websites.
Newspaper, book, and other print publishing are adapting to website technology, or
are reshaped into blogging, web feeds and online news aggregators. The Internet
has enabled and accelerated new forms of personal interactions through instant
messaging, Internet forums, and social networking. Online shopping has grown
exponentially both for major retailers and small businesses and entrepreneurs, as it
enables firms to extend their "brick and mortar" presence to serve a larger market
or even sell goods and services entirely online. Business-to-business and financial
services on the Internet affect supply chains across entire industries.
Information on the Internet is stored on servers. Servers have their own addresses
and are managed by specialized programs. With their help, you can send mail and
files, search for information in databases, and so on.
Information exchange between servers is carried out through high-speed
communication channels. Individual users access Internet information
resources, usually through telephone-network providers or corporate networks.
Any organization that can act as a provider for its customers and has access
to the World Wide Web can participate.

Working on the Internet means using a family of communication


protocols. This family of protocols is called TCP/IP (Transmission Control
Protocol / Internet Protocol) and is used to transmit data over global networks
and many local area networks.

Types of Internet connection

Initially, the Internet (then called ARPANET) consisted of computers connected to


a permanent network, each of which had a specified address (domain name).

Later came the idea of accessing the network via a telephone line in connection
sessions. With the help of the phone, you could connect to a computer that was a
permanent "citizen" of the network, and thus become part of the Internet.
Naturally, many organizations appeared that provide this paid service to
users. Thus the first providers were formed.

The role of provider can be played by anyone with a powerful server, a large
number of incoming telephone lines, and a certain amount of money to buy a
dedicated communication channel. This channel is the main factor that
distinguishes the provider from the end user. Providers use special high-speed
communication channels, such as fiber-optic cable or satellite links, to transmit
information over the Internet. Thanks to these communication channels, hundreds
and thousands of users can work on the Internet at the same time in a very
comfortable environment. Of course, at certain moments the capacity of the
channel is not enough; in that case it is either upgraded to increase its
capacity, or the quality of the connection deteriorates significantly. Depending
on the type of connection and the bandwidth of the communication channel
between user and provider, connections are divided into two major groups:
Session (dial-up) connection. In this type of connection, the user is not
permanently connected to the network, but communicates with the network for
short periods via a telephone line. In this case, an appropriate amount of
money is paid for each hour of connection to the network, and the data is
transmitted in analog form.

Permanent connection. In this case, the computer is connected to a permanent,
fast channel, and the data is transmitted digitally. Traffic is paid
only for the amount of data received and sent by the computer.

Encoding of information

What Does Encoding Mean?

Encoding is the process of converting data into a format required for a number of
information processing needs, including:

01 Program compiling and execution

02 Data transmission, storage and compression/decompression

03 Application data processing, such as file conversion

What is encoding?

Encoding can have two meanings:

● In computer technology, encoding is the process of applying a specific code,


such as letters, symbols and numbers, to data for conversion into an
equivalent cipher.
● In electronics, encoding refers to analog to digital conversion.
Encoding and decoding are used in many forms of communications, including
computing, data communications, programming, digital electronics and human
communications. These two processes involve changing the format of content for
optimal transmission or storage.

In computers, encoding is the process of putting a sequence of characters (letters,


numbers, punctuation, and certain symbols) into a specialized format for efficient
transmission or storage. Decoding is the opposite process -- the conversion of an
encoded format back into the original sequence of characters.

These terms should not be confused with encryption and decryption, which focus
on hiding and securing data. (We can encrypt data without changing the code or
encode data without deliberately concealing the content.)

What is encoding and decoding in data communications?

Encoding and decoding processes for data communications have interesting


origins. For example, Morse code emerged in 1838 when Samuel Morse created
standardized sequences of two signal durations, called dots and dashes, for use
with the telegraph. Manchester encoding was developed for storing data on
magnetic drums of the Manchester Mark 1 computer, built in 1949. In that
encoding model, each binary digit, or bit, is encoded low then high, or high then
low, for equal time. Also known as phase encoding, the Manchester process of
encoding is used in consumer infrared protocols, radio frequency identification and
near-field communication.
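The Manchester rule described above (each bit becomes an equal-time low-then-high or high-then-low pair) can be sketched in C. The helper names are ours, and the chosen bit-to-level convention (0 becomes high,low; 1 becomes low,high) is one of the two conventions in use:

```c
/* Encode one bit as two half-bit signal levels (0 = low, 1 = high). */
static void manchester_bit(int bit, int half[2]) {
    half[0] = bit ? 0 : 1;  /* a 1-bit starts low ...      */
    half[1] = bit ? 1 : 0;  /* ... and transitions to high */
}

/* Decode a half-bit pair: a low-to-high transition means a 1-bit. */
static int manchester_decode(const int half[2]) {
    return half[0] == 0 && half[1] == 1;
}
```

Because every bit contains a mid-bit transition, the receiver can recover the clock from the signal itself.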

What is encoding and decoding in programming?

Internet access relies on encoding. A Uniform Resource Locator (URL), the


address of a webpage, can only be sent over the internet using the American
Standard Code for Information Interchange (ASCII), which is the code used for
text files in computing.

In an ASCII file, a 7-bit binary number represents each character, which can be
uppercase or lowercase letters, numbers, punctuation marks and other common
symbols. However, URLs cannot contain spaces and often have characters that
aren't in the ASCII character set. Uniform resource locator (URL) encoding, also
known as “percent encoding,” is used when some characters can’t be included in
URLs directly: each such character is replaced by a percent sign followed by
two hexadecimal digits, so all computers can read the result. Other commonly
used codes in programming include BinHex, Multipurpose Internet Mail
Extensions, Unicode and Uuencode.
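As an illustration, here is a minimal percent-encoding sketch in C, assuming the RFC 3986 unreserved set (letters, digits, '-', '_', '.', '~') passes through unchanged. `url_encode` is a hypothetical helper with no bounds checking on the output buffer:

```c
#include <ctype.h>

/* Replace every character outside the unreserved set with '%' plus
   two uppercase hexadecimal digits. `out` must be large enough
   (worst case three times the input length, plus one). */
static void url_encode(const char *in, char *out) {
    const char *hex = "0123456789ABCDEF";
    for (; *in; in++) {
        unsigned char c = (unsigned char)*in;
        if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~')
            *out++ = (char)c;
        else {
            *out++ = '%';
            *out++ = hex[c >> 4];    /* high 4 bits */
            *out++ = hex[c & 0x0F];  /* low 4 bits  */
        }
    }
    *out = '\0';
}
```

A space (ASCII 0x20) becomes "%20", so "a b" encodes to "a%20b".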

Some ways encoding and decoding are used in various programming languages
include the following.
In Java

Encoding and decoding in Java is a method of representing data in a different


format to efficiently transfer information through a network or the web. The
encoder converts data into a web representation. Once received, the decoder
converts the web representation data into its original format.

In Python

In the Python programming language, encoding represents a Unicode string as a


string of bytes. This commonly occurs when you transfer an instance over a
network or save it to a disk file. Decoding transforms a string of bytes into a
Unicode string. This happens when you receive a string of bytes from a disk file
or the network.

In Swift

In the Apple Swift programming language, encoding and decoding models


typically represent a serialization of object data from a JavaScript Object Notation
string format. In this case, encoding represents serialization, while decoding
signifies deserialization.

What is encoding and decoding in digital electronics?

In electronics, the terms encoding and decoding reference analog-to-digital


conversion and digital-to-analog conversion. These terms can apply to any form of
data, including text, images, audio, video, multimedia and software, and to signals
in sensors, telemetry and control systems.

What is encoding and decoding in human communication?


People don't think about it as an encoding or decoding process, but human
communication begins when a sender formulates (encodes) a message. They
choose the message they will convey and a communication channel. People do this
every day with little thought to the encoding process.

The receiver must make sense of (decode) the message by deducing the meaning of
words and phrases to interpret the message correctly. They then can provide
feedback to the sender.

Both the sender and receiver in any communication process must deal with noise
that can get in the way of the communication process. Noise involves the various
ways that messages get disrupted, distorted or delayed. These can include actual
physiological noise, technical problems or semantic, psychological and cultural
issues that get in the way of communication.

These processes occur almost instantly in any of these three models:

Transmission Model
This model of communication is a linear process where a sender transmits a
message to a receiver.

Interaction Model

In this model, participants take turns as senders and receivers.

Transaction Model

Here, communicators generate social realities within cultural, relational and social
contexts. They communicate to create a relationship, engage with communities and
form intercultural alliances. In this model, participants are labeled as
communicators, not senders and receivers.

Decoding messages in your native tongue feels effortless. When the language is
unfamiliar, however, the receiver may need a translator or tools like Google
Translate for decoding the message.

Beyond the basics of encoding and decoding, machine translation


capabilities have made significant progress of late. Find out more about
machine translation technology and tools.

What is Encoding?

Encoding is the process of converting data into a different format. When


you convert temperature readings from Celsius to Fahrenheit or money
from Japanese yen to U.S. dollars, the original values remain the same.
They are just represented in a different form.

In the world of computers, encoding works in the same way. The computer
converts data from one form to another. It does this to save on storage
space or make transmission more efficient.

One example of encoding is when you convert a huge .WAV audio file to a
tiny .MP3 file that you can easily send to a friend via email. The files are
encoded in different formats but will play the same song.
Read More about “Encoding”

What Is the Purpose of Encoding?

The primary purpose of encoding is to make data safely and adequately


consumable by different users using various systems. The idea is to make
the data readable and available to all possible end-users. The process can
be likened to effectively translating text from Hebrew, for instance, to
English, making the information digestible for more users.

Without character encoding, a website will display text a lot differently than
intended. Improper encoding spoils text readability, which may also result
in search engines failing to display data correctly or machines to process
inputs incorrectly.

What Are the Different Types of Encoding Standards?

American Standard Code for Information Interchange

The American Standard Code for Information Interchange (ASCII) is the


most commonly used language by computers for text files. It was
developed by the American National Standards Institute (ANSI). It
represents alphabetic characters (both lowercase and uppercase),
numerals, symbols, and punctuation marks using seven-bit binary numbers
(strings made up of combinations of seven 0s or 1s). ASCII has 128
characters.
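The 7-bit property can be illustrated with a short C sketch; both helpers are hypothetical:

```c
/* An ASCII code point fits in 7 bits, i.e. the range 0..127. */
static int is_ascii(unsigned char c) {
    return c < 128;
}

/* Write the 7-bit binary string for an ASCII character into out[8]. */
static void to_bits7(unsigned char c, char out[8]) {
    for (int i = 0; i < 7; i++)
        out[i] = ((c >> (6 - i)) & 1) ? '1' : '0';
    out[7] = '\0';
}
```

For example, 'A' has code 65, whose 7-bit pattern is 1000001.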

Unicode Encoding

The Unicode standard is a universal character set that allows writing in


most languages on computers. It is categorized into 8-, 16-, and 32-bit
character sets, amounting to over a billion characters.

URL Encoding

Uniform resource locator (URL) encoding, also known as “percent


encoding,” is often done when some characters can’t be included in URLs.
URL encoding allows characters not permitted in URLs to be represented as a
percent sign followed by two hexadecimal digits so all computers can read them.

Base64 Encoding
Originally, Base64 was used only to represent binary data in printable
characters. It is commonly used in basic HyperText Transfer Protocol
(HTTP) authentication when encoding user credentials. It is also used to
encode email attachments to allow their transmission over the Simple Mail
Transfer Protocol (SMTP) and send binary data within cookies to make it
less readable to tamperers.

Most mail systems can’t deal with binary data. Without Base64 encoding,
images or other files sent become corrupted. Computers deal with data in
bytes, making ASCII unsuitable for transmission.
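A minimal sketch of the standard Base64 algorithm in C: every 3 input bytes map to 4 characters of a 64-symbol alphabet, with '=' padding the final group. Buffer sizing and error handling are left to the caller:

```c
static const char b64_tbl[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode `len` bytes from `in` into the NUL-terminated string `out`
   (which must hold at least 4 * ((len + 2) / 3) + 1 characters). */
static void base64_encode(const unsigned char *in, int len, char *out) {
    int i, j = 0;
    for (i = 0; i + 2 < len; i += 3) {          /* full 3-byte groups */
        out[j++] = b64_tbl[in[i] >> 2];
        out[j++] = b64_tbl[((in[i] & 0x03) << 4) | (in[i + 1] >> 4)];
        out[j++] = b64_tbl[((in[i + 1] & 0x0F) << 2) | (in[i + 2] >> 6)];
        out[j++] = b64_tbl[in[i + 2] & 0x3F];
    }
    if (i < len) {                              /* 1 or 2 bytes remain */
        out[j++] = b64_tbl[in[i] >> 2];
        if (i + 1 < len) {
            out[j++] = b64_tbl[((in[i] & 0x03) << 4) | (in[i + 1] >> 4)];
            out[j++] = b64_tbl[(in[i + 1] & 0x0F) << 2];
        } else {
            out[j++] = b64_tbl[(in[i] & 0x03) << 4];
            out[j++] = '=';
        }
        out[j++] = '=';
    }
    out[j] = '\0';
}
```

For example, "Man" encodes to "TWFu", and the single byte "M" to "TQ==".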

What Is the Difference between Encoding and Decoding?

Encoding refers to converting data into a different form, while decoding is


the opposite—data is converted back to its original form.

For computers, the encoding process happens every time you store a file.
Since they can only understand series of 0s and 1s, your files are
converted to such a format. When you view the file, the computer decodes
it back to its original format to make the file human-readable.
What Is Encoding in Human Communication?

The process comes so naturally in human communication that we seldom


pay attention to it. Still, encoding happens every time we formulate a
message, whether vocally or digitally. When you type a text message to a
friend, encoding happens as you think about how to phrase the message.

When your friend receives the message, he or she tries to understand its
meaning. In essence, he or she decodes the message.
What Is Encoding in Data Communications?

Encoding in data communications is the process of converting data into


digital signals or values that computers can understand. As previously
mentioned, these are series of binary digits whose value can only be 0 or 1.

What Is Encoding in Programming?

Encoding in programming is the critical process of converting data into


different formats to make it easier to transmit over a network. The process
can differ depending on the programming language.

For instance, encoding in Python occurs when transferring an instance


through a network. On the other hand, encoding in Java happens when
transferring data over the Internet.

Key Takeaways

 Encoding is simply converting data to different formats to make it


easier to transmit.

 Encoding and decoding are opposite processes.

 Computers can only understand binary digits, making encoding


essential.

 Encoding happens in day-to-day communication with other people as


well as in computers and programming languages.

Linear network coding

In computer networking, linear network coding is a program in which


intermediate nodes transmit data from source nodes to sink nodes by
means of linear combinations.

Linear network coding may be used to improve a network's throughput,


efficiency, and scalability, as well as reducing attacks and eavesdropping.
The nodes of a network take several packets and combine them for
transmission. This process may be used to attain the maximum possible
information flow in a network.
It has been proven that, theoretically, linear coding is enough to achieve
the upper bound in multicast problems with one source.[1] However, linear
coding is not sufficient in general, even for more general versions of
linearity such as convolutional coding and filter-bank coding.[2] Finding
optimal coding solutions for general network problems with arbitrary
demands is a hard problem, which can be NP-hard[3][4] and even
undecidable.[5][6]

Encoding and decoding

Karl Menger proved that there is always a set of edge-disjoint paths


achieving the upper bound in a unicast scenario, known as the max-flow
min-cut theorem. Later, the Ford–Fulkerson algorithm was proposed to find
such paths in polynomial time. Then, Edmonds proved in the paper "Edge-
Disjoint Branchings" that the upper bound in the broadcast scenario is also
achievable, and proposed a polynomial-time algorithm.

However, the situation in the multicast scenario is more complicated, and in


fact, such an upper bound can't be reached using traditional routing ideas.
Ahlswede et al. proved that it can be achieved if additional computing tasks
(incoming packets are combined into one or several outgoing packets) can
be done in the intermediate nodes.[8]

The Butterfly Network

[Figure: Butterfly network]
The butterfly network[8] is often used to illustrate how linear network coding
can outperform routing. Two source nodes (at the top of the picture) have
information A and B that must be transmitted to the two destination nodes
(at the bottom). Each destination node wants to know both A and B. Each
edge can carry only a single value (we can think of an edge transmitting a
bit in each time slot).

If only routing were allowed, then the central link would be only able to
carry A or B, but not both. Supposing we send A through the center; then
the left destination would receive A twice and not know B at all. Sending B
poses a similar problem for the right destination. We say that routing is
insufficient because no routing scheme can transmit both A and B to both
destinations simultaneously. Meanwhile, it takes four time slots in total for
both destination nodes to know A and B.

Using a simple code, as shown, A and B can be transmitted to both


destinations simultaneously by sending the sum of the symbols through the
two relay nodes – encoding A and B using the formula "A+B". The left
destination receives A and A + B, and can calculate B by subtracting the
two values. Similarly, the right destination will receive B and A + B, and will
also be able to determine both A and B. Therefore, with network coding, it
takes only three time slots and improves the throughput.
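In practice the "A+B" of the butterfly example is a sum over a finite field; in the simplest case (GF(2)) it is a bitwise XOR, so each destination recovers the missing symbol by XOR-ing the coded packet with the symbol it already has. The helper names below are ours:

```c
/* The coded packet sent over the central link: A + B over GF(2). */
static unsigned char net_combine(unsigned char a, unsigned char b) {
    return a ^ b;
}

/* A destination that knows one symbol recovers the other:
   e.g. B = (A ^ B) ^ A, because XOR is its own inverse. */
static unsigned char net_recover(unsigned char coded, unsigned char known) {
    return coded ^ known;
}
```

The left destination applies net_recover with A, the right one with B, and both end up knowing both symbols.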

Random Linear Network Coding

Random linear network coding[9] (RLNC) is a simple yet powerful encoding


scheme, which in broadcast transmission schemes allows close to optimal
throughput using a decentralized algorithm. Nodes transmit random linear
combinations of the packets they receive, with coefficients chosen
randomly, with a uniform distribution from a Galois field. If the field size is
sufficiently large, the probability that the receiver(s) will obtain linearly
independent combinations (and therefore obtain innovative information)
approaches 1. It should however be noted that, although random linear
network coding has excellent throughput performance, if a receiver obtains
an insufficient number of packets, it is extremely unlikely that they can
recover any of the original packets. This can be addressed by sending
additional random linear combinations until the receiver obtains the
appropriate number of packets.
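A much-simplified RLNC sketch over GF(2), where a random coefficient vector is a bitmask and a linear combination is an XOR of the selected packets (real RLNC typically works over larger Galois fields, as noted above). Decoding is possible once the received coefficient vectors reach full rank, which the Gaussian-elimination helper checks; all names are hypothetical:

```c
#include <stdint.h>

/* XOR together the original packets whose bit is set in `coeffs`. */
static uint8_t rlnc_encode(const uint8_t *pkts, int n, uint32_t coeffs) {
    uint8_t out = 0;
    for (int i = 0; i < n; i++)
        if (coeffs & (1u << i))
            out ^= pkts[i];
    return out;
}

/* Rank of GF(2) coefficient vectors via Gaussian elimination on bitmasks.
   The receiver can decode once the rank equals the generation size.
   Note: the rows are modified in place. */
static int rlnc_rank(uint32_t *rows, int m) {
    int rank = 0;
    for (int bit = 31; bit >= 0 && rank < m; bit--) {
        int pivot = -1;
        for (int r = rank; r < m; r++)
            if (rows[r] & (1u << bit)) { pivot = r; break; }
        if (pivot < 0)
            continue;
        uint32_t tmp = rows[rank]; rows[rank] = rows[pivot]; rows[pivot] = tmp;
        for (int r = 0; r < m; r++)
            if (r != rank && (rows[r] & (1u << bit)))
                rows[r] ^= rows[rank];
        rank++;
    }
    return rank;
}
```

Two identical coefficient vectors are linearly dependent and contribute nothing new, which is why a receiver may need extra coded packets.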

Operation and key parameters


There are three key parameters in RLNC. The first one is the generation
size. In RLNC, the original data transmitted over the network is divided into
packets. The source and intermediate nodes in the network can combine
and recombine the set of original and coded packets.

Entropy coding

In information theory, an entropy coding (or entropy encoding) is any


lossless data compression method that attempts to approach the lower
bound declared by Shannon's source coding theorem, which states that
any lossless data compression method must have an expected code length
greater than or equal to the entropy of the source.[1]

Two of the most common entropy coding techniques are Huffman coding
and arithmetic coding.[2] If the approximate entropy characteristics of a data
stream are known in advance (especially for signal compression), a simpler
static code may be useful. These static codes include universal codes
(such as Elias gamma coding or Fibonacci coding) and Golomb codes
(such as unary coding or Rice coding).

Since 2014, data compressors have started using the asymmetric numeral
systems family of entropy coding techniques, which allows combination of
the compression ratio of arithmetic coding with a processing cost similar to
Huffman coding.

Entropy as a measure of similarity

Besides using entropy coding as a way to compress digital data, an entropy


encoder can also be used to measure the amount of similarity between
streams of data and already existing classes of data. This is done by
generating an entropy coder/compressor for each class of data; unknown
data is then classified by feeding the uncompressed data to each
compressor and seeing which compressor yields the highest compression.
The coder with the best compression is probably the coder trained on the
data that was most similar to the unknown data.

Delta encoding

Delta encoding is a way of storing or transmitting data in the form of


differences (deltas) between sequential data rather than complete files;
more generally this is known as data differencing. Delta encoding is
sometimes called delta compression, particularly where archival histories
of changes are required (e.g., in revision control software).

The differences are recorded in discrete files called "deltas" or "diffs". In


situations where differences are small – for example, the change of a few
words in a large document or the change of a few records in a large table –
delta encoding greatly reduces data redundancy. Collections of unique
deltas are substantially more space-efficient than their non-encoded
equivalents.

From a logical point of view the difference between two data values is the
information required to obtain one value from the other – see relative
entropy. The difference between identical values (under some equivalence)
is often called 0 or the neutral element.

Simple example

Perhaps the simplest example is storing values of bytes as differences


(deltas) between sequential values, rather than the values themselves. So,
instead of 2, 4, 6, 9, 7, we would store 2, 2, 2, 3, −2. This reduces the
variance (range) of the values when neighbor samples are correlated,
enabling a lower bit usage for the same data. The IFF 8SVX sound format
applies this encoding to raw sound data before applying compression to it.
Not all 8-bit sound samples compress better when delta encoded, and the
usefulness of delta encoding is even smaller for 16-bit and higher-resolution samples.
Therefore, compression algorithms often choose to delta encode only when
the compression is better than without. However, in video compression,
delta frames can considerably reduce frame size and are used in virtually
every video compression codec.

Variants

A variation of delta encoding which encodes differences between the


prefixes or suffixes of strings is called incremental encoding. It is
particularly effective for sorted lists with small differences between strings,
such as a list of words from a dictionary.
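Incremental (front) coding can be sketched in C: each word after the first is stored as the length of the prefix it shares with the previous word plus its remaining suffix. The helpers are hypothetical:

```c
/* Length of the common prefix of two NUL-terminated strings. */
static int common_prefix(const char *a, const char *b) {
    int n = 0;
    while (a[n] && a[n] == b[n])
        n++;
    return n;
}

/* Encode `word` relative to `prev` as (shared-prefix length, suffix).
   The suffix pointer aliases into `word`; nothing is copied. */
static void front_encode(const char *prev, const char *word,
                         int *prefix_len, const char **suffix) {
    *prefix_len = common_prefix(prev, word);
    *suffix = word + *prefix_len;
}
```

For example, after "encode", the word "encoder" is stored as (6, "r").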

Implementation issues

The nature of the data to be encoded influences the effectiveness of a


particular compression algorithm.
Delta encoding performs best when data has small or constant variation;
for an unsorted data set, there may be little to no compression possible
with this method.

In delta encoded transmission over a network where only a single copy of


the file is available at each end of the communication channel, special error
control codes are used to detect which parts of the file have changed since
its previous version. For example, rsync uses a rolling checksum algorithm
based on Mark Adler's adler-32 checksum.

Sample C code

The following C code performs a simple form of delta encoding and
decoding on a sequence of characters:

void delta_encode(unsigned char *buffer, int length)
{
    unsigned char last = 0;
    for (int i = 0; i < length; i++)
    {
        unsigned char current = buffer[i];
        buffer[i] = current - last;
        last = current;
    }
}

void delta_decode(unsigned char *buffer, int length)
{
    unsigned char last = 0;
    for (int i = 0; i < length; i++)
    {
        unsigned char delta = buffer[i];
        buffer[i] = delta + last;
        last = buffer[i];
    }
}

Examples

Delta encoding in HTTP

Another instance of use of delta encoding is RFC 3229, "Delta encoding in
HTTP", which proposes that HTTP servers should be able to send updated
Web pages in the form of differences between versions (deltas), which
should decrease Internet traffic, as most pages change slowly over time,
rather than being completely rewritten repeatedly:

This document describes how delta encoding can be supported as a
compatible extension to HTTP/1.1.

Many HTTP (Hypertext Transport Protocol) requests cause the retrieval of
slightly modified instances of resources for which the client already has a
cache entry. Research has shown that such modifying updates are
frequent, and that the modifications are typically much smaller than the
actual entity. In such cases, HTTP would make more efficient use of
network bandwidth if it could transfer a minimal description of the changes,
rather than the entire new instance of the resource.
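For illustration, a hypothetical request/response pair in the style of RFC 3229 (the URL, entity tags and body note are invented for the example): the client names the cached version it holds and the delta formats it accepts via A-IM, and the server replies with status 226 and only the difference.

```http
GET /news.html HTTP/1.1
Host: example.com
If-None-Match: "abc123"
A-IM: vcdiff

HTTP/1.1 226 IM Used
ETag: "def456"
IM: vcdiff
Delta-Base: "abc123"

(body: a vcdiff-encoded delta from version "abc123" to "def456")
```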

The suggested rsync-based framework was implemented in the rproxy
system as a pair of HTTP proxies.[1] Like the basic vcdiff-based
implementation, both systems are rarely used.

Lecture 6

Introduction to programming

Stages of execution of the task on the computer. Algorithm and its types.

Rules of algorithmization of issues.

The process of solving a problem on a computer is a joint activity of a
person and a computer. This process can be represented in several sequential
stages. On the share of a person are the stages associated with creative activity -
setting, algorithmizing, programming tasks and analyzing the results, and on the
computer share - the stages of processing information in accordance with the
developed algorithm:
1. Statement of the problem;
2. Analysis and research of the problem, model;
3. Development of the algorithm;
4. Programming;
5. Testing and debugging;
6. Analysis of the results of the solution of the problem and refinement, if
necessary, of a mathematical model with the repetition of steps 2-5;
7. Maintenance of the program
An algorithm (pronounced AL-go-rith-um) is a procedure or formula for
solving a problem, based on conducting a sequence of specified actions. A
computer program can be viewed as an elaborate algorithm. In mathematics and
computer science, an algorithm usually means a small procedure that solves a
recurrent problem. More formally, an algorithm is a precise prescription that
determines the order of the actions of an executor needed to solve a problem
in a finite time. The word "algorithm" is derived from the name of the great
Central Asian scholar of the 8th-9th centuries, Al-Khwarizmi, author of "The
Compendious Book on Calculation by Completion and Balancing", which dealt with
solving linear and quadratic equations.
The algorithm for solving a problem has a number of mandatory
properties:
- Discreteness - the process of processing information is broken down into
simpler steps (steps of execution) whose performance by a computer or a person
causes no difficulty. The algorithm is a sequence of such individual steps,
and each step must complete in a finite interval of time.
- Determinism - each individual step of information transformation is
unambiguous: for each step, given the available data, the result can be
uniquely determined. The results do not depend on random factors, and the
algorithm produces the same result for the same input data.
- Mass character (generality) - the algorithm is suitable for solving a whole
class of problems. The input data can be selected from some set, so the same
algorithm can be applied to problems that differ only in their initial data.
Example: the algorithm for adding a pair of natural numbers.
Not every problem can be solved algorithmically, and a single problem may
admit several different algorithmic solutions.
The algorithm is an exact instruction, and instructions are found in all areas
of human activity. However, not every instruction can be called an algorithm.
Solving the problem, a person often does not think about how he does it, and
sometimes, it is difficult to write down the sequence of actions performed. But in
order to assign the solution of the problem to the automatic device, it is necessary
to make an algorithm with a clear indication of the sequence of actions. For an
automatic device to solve a task in accordance with the algorithm, it must
understand every indication of the algorithm. The algorithm is applied to the
desired set of initial values, called arguments. The goal of the algorithm execution
is to obtain a certain result, if as a result of the algorithm execution a certain goal is
not achieved, then the algorithm is either incorrect or incomplete.
Ways of describing an algorithm
A variety of means are used to write down algorithms. The choice of tool is
determined by the type of the algorithm being executed. There are the
following main ways of recording algorithms:

- verbal, when the algorithm is described in human language;

- symbolic, when the algorithm is described using a set of symbols;

- graphic, when the algorithm is described with a set of graphic images.

Common methods of recording are graphical recording using block diagrams
and character recording using some algorithmic language.

The description of the algorithm using the block diagrams is done by
drawing a sequence of geometric shapes, each of which implies the execution of a
certain algorithm action. The order of the actions is indicated by arrows.
Depending on the sequence of actions in the algorithm, algorithms of linear,
branched and cyclic structure are distinguished. In linear structure algorithms, the
actions are performed sequentially one by one:
In the algorithms of the branched structure, depending on the fulfillment or
non-fulfillment of any condition, different sequences of actions are performed.
Each such sequence of actions is called the branch of the algorithm.

In the algorithms of a cyclic structure, depending on whether a condition is
fulfilled or not, a repetitive sequence of actions is performed, called the
cycle body. A nested cycle is a cycle that lies inside the body of another
cycle. Cycles with a precondition are distinguished from cycles with a
postcondition:
Elements of block diagram and examples
The specification of algorithms with the help of block diagrams has turned
out to be a very convenient means of depicting algorithms and is widely used.
Block diagram of the algorithm is a graphic representation of the algorithm
in the form of linked with each other by arrows (transition lines) and blocks -
graphic symbols, each of which corresponds to one step of the algorithm. Inside
the block, a description of the corresponding action is given.
The table shows the most commonly used symbols
Lecture 7

Basic elements of programming

Visual Basic programming language. Language symbols and operators.

Like the BASIC programming language, Visual Basic was designed for an
easy learning curve. Programmers can create both simple and
complex GUI applications. Programming in VB is a combination of visually
arranging components or controls on a form, specifying attributes and actions for
those components, and writing additional lines of code for more functionality.
Since VB defines default attributes and actions for the components, a programmer
can develop a simple program without writing much code. Programs built with
earlier versions suffered performance problems, but faster computers and native
code compilation has made this less of an issue. Though VB programs can be
compiled into native code executables from version 5 on, they still require the
presence of around 1 MB of runtime libraries. Core runtime libraries are included
by default in Windows 2000 and later, but extended runtime components still have to be
installed. Earlier versions of Windows require that the runtime libraries be
distributed with the executable.

An empty form in Visual Basic 6

Forms are created using drag-and-drop techniques. A tool is used to place
controls (e.g., text boxes, buttons, etc.) on the form (window). Controls
have attributes and event handlers associated with them. Default values are
provided when the control is created, but may be changed by the programmer.
Many attribute values can be modified during run time based on user actions or
changes in the environment, providing a dynamic application. For example, code
can be inserted into the form resize event handler to reposition a control so that it
remains centered on the form, expands to fill up the form, etc. By inserting code
into the event handler for a keypress in a text box, the program can automatically
translate the case of the text being entered, or even prevent certain characters from
being inserted.

Visual Basic can create executables (EXE files), ActiveX controls, or DLL
files, but is primarily used to develop Windows applications and to interface
database systems. Dialog boxes with less functionality can be used to provide pop-
up capabilities. Controls provide the basic functionality of the application, while
programmers can insert additional logic within the appropriate event handlers. For
example, a drop-down combination box automatically displays a list. When the
user selects an element, an event handler is called that executes code that the
programmer created to perform the action for that list item. Alternatively, a Visual
Basic component can have no user interface, and instead provide ActiveX objects
to other programs via Component Object Model (COM). This allows for server-
side processing or an add-in module.

The runtime recovers unused memory using reference counting, which
depends on variables passing out of scope or being set to Nothing, avoiding the
problem of memory leaks that are possible in other languages. There is a large
library of utility objects, and the language provides basic support for object-
oriented programming. Unlike many other programming languages, Visual Basic is
generally not case-sensitive—though it transforms keywords into a standard case
configuration and forces the case of variable names to conform to the case of the
entry in the symbol table. String comparisons are case sensitive by default. The
Visual Basic compiler is shared with other Visual Studio languages (C, C++).
Nevertheless, by default the restrictions in the IDE do not allow creation of some
targets (Windows model DLLs) and threading models, but over the years,
developers have bypassed these restrictions.

Characteristic

The code windows in Visual Basic, showing a function using the If, Then,
Else and Dim statements.
Visual Basic builds upon the characteristics of BASIC.

 There are no line numbers as in earlier BASIC; code is grouped into
subroutines or methods: Sub...End Sub.

 Code statements have no terminating character other than a line ending
(carriage return/line feed). Versions since at least VB 3.0 allowed
statements to be implicitly multi-line with concatenation of strings or
explicitly using the underscore character (_) at the end of lines.[18][19]

 Code comments are done with a single apostrophe (') character. ' This is a
comment

 Looping statement blocks begin and end with keywords: Do...Loop,
While...End While, For...Next.[20]

 Multiple variable assignment is not possible. A = B = C does not imply that
the values of A, B and C are equal. The Boolean result of "Is B = C?" is
stored in A. The result stored in A would therefore be either False or True.

 Boolean constant True has numeric value −1.[21] This is because the Boolean
data type is stored as a two's complement signed integer. In this construct −1
evaluates to all-1s in binary (the Boolean value True), and 0 as all-0s (the
Boolean value False). This is apparent when performing a
(bitwise) Not operation on the two's complement value 0, which returns the
two's complement value −1, in other words True = Not False. This inherent
functionality becomes especially useful when performing logical operations
on the individual bits of an integer such as And, Or, Xor and Not.[22] This
definition of True is also consistent with BASIC since the early 1970s
Microsoft BASIC implementation and is also related to the characteristics of
CPU instructions at the time.
Programming algorithms with different structures.
VB Statements
- Assignments are the same as in C.
- Case is not significant: case is adjusted automatically on keywords, and
for variable names case is ignored.
- The usual operators can be used: And corresponds to C's & and &&
(depending on context), Or to | and ||, and Not to !.
VB If Statements:

If <condition> Then
<List of Statements>
Else
<List of Statements>
End If

DON'T FORGET THE END IF!

Comparators: =, <, >, <=, >=, <> (not equal). Connectives: And, Or, Not

VB While Statements

While <condition>
<List of Statements>
Wend

VB For Statements

For <Variable> = <start> to <finish>
<List of Statements>
Next <Variable>
For <Variable> = <start> to <finish> Step <increment>
<List of Statements>
Next <Variable>
Example:
The following code snippet displays a message box saying "Hello, World!"
as the window loads:
Private Sub Form_Load()
' Execute a simple message box that says "Hello, World!"
MsgBox "Hello, World!"
End Sub
This snippet makes a counter that moves up 1 every second (a label and a
timer control need to be added to the form for this to work) until the form is closed
or an integer overflow occurs:
Option Explicit
Dim Count As Integer
Private Sub Form_Load()
Count = 0
Timer1.Interval = 1000 ' units of milliseconds
End Sub
Private Sub Timer1_Timer()
Count = Count + 1
Label1.Caption = Count
End Sub

Lecture 8
Software for information processing
SOFTWARE AND FIRMWARE
A computer requires more than just the actual equipment or hardware we see
and touch. It requires Software- programs for directing the operation of a
computer or electronic data.
PROGRAMS
What is a computer program (or just a program)?
It is a set of instructions arranged in sequence; a computer program is, in essence, an algorithm.
It directs the computer to perform necessary operations for the solution of a
problem or the completion of a task. The instructions in the program must be
written in a language the computer can understand in a particular computer
language. The computer follows the instructions one at a time in order.
Computer programs handle data, which is held in the computer’s memory. Data
can be of two types:
1.Variable data which may change during the execution of program.
2.Constant data which cannot change during the execution of the program.
The program and the data are stored in a binary code in the memory of the
computer (as a sequence of bits). A bit is a single cell holding a value of 0 or 1. As
the computer is an electronic machine, the bit is an electrical potential which is off
(for 0) or on (for 1).
Computer’s memory is organized in such a way that each 8 bits compose a byte,
which is taken as a unit of computer’s information and memory.
A byte can have an address, which allows us to refer to a particular
collection of bits. The address of a byte is its ordinal number, usually
written in base-16 (hexadecimal) notation.
The compiler needs to know how many bytes must be given to store the data.
This is done when constants and variables are declared within the program.
Software is the final computer system component. These computer programs
instruct the hardware how to conduct processing. The computer is merely a
general-purpose machine which requires specific software to perform a given task.
Computers can input, calculate, compare, and output data as information. Software
determines the order in which these operations are performed. Programs usually
fall in one of two categories: system software and applications software.
System software controls standard internal computer activities. An operating
system, for example, is a collection of system programs that aid in the operation of
a computer regardless of the application software being used. When a computer is
first turned on, one of the system programs is booted, or loaded into the
computer's memory. This software contains information about memory capacity,
the model of the processor, and the disk drives to be used.
applications software can be brought in.
System programs are designed for the specific pieces of hardware. These
programs are called drivers and coordinate peripheral hardware and computer
activities. User needs to install a specific driver in order to activate a peripheral
device. For example, if you intend to buy a printer or a scanner you need to worry
in advance about the driver program. By installing the driver you ‘teach’ your
mainboard to ‘understand’ the newly attached part.
Applications software satisfies your specific need. The developers of
applications software rely mostly on marketing research strategies trying to do
their best to attract more users (buyers) to their software. As the productivity of the
hardware has increased greatly in recent years, the programmers nowadays tend to
include as much as possible in one program to make software interface look more
attractive to the user. This class of programs is the most numerous and perspective
from the marketing point of view.
Data communication within and between computer systems is handled by system
software. Communication software transfers data from one computer system to
another. These programs usually provide users with data security and error
checking along with physically transferring data between the two computer’s
memories. During the past five years the developing electronic network
communication has stimulated more and more companies to produce various
communications software, such as Web-Browsers for Internet.
Firmware is a term that is commonly used to describe certain programs that are
stored in ROM. Firmware often refers to a sequence of instructions (software) that
is substituted for hardware. For example, in an instance where cost is more
important than performance, the computer system architect might decide not to use
special electronic circuits (hardware) to multiply two numbers, but instead write
instructions (software) to cause the machine to accomplish the same function by
repeated use of circuits already designed to perform addition.
Often programs, particularly systems software, are stored in an area of memory
not used for applications software. These protected programs are stored in an area
of memory called – read – only memory (ROM), which can be read from but not
written on.
OPERATING SYSTEM.
An operating system (OS) is a program that runs a computer, manages all
the other programs in it. DOS, (the Disk Operating System), Windows 98,
Windows 2000, Windows 8, 10 are all examples of operating systems.
All operating systems perform the same basic tasks: controlling the
computer hardware, managing files and folders, managing applications, and
supporting built-in utility programs.
When programs need hardware resources, they turn to the operating
system (OS), which in its turn accesses the hardware through the BIOS or
through the device drivers (the OS "sits" between the programs and the Basic
Input Output System (BIOS)); after that the BIOS controls the hardware. The
Windows 2000 NOS bypasses the system BIOS and controls the hardware directly.
To organize and manage files the operating system uses the file
management system. A file is a collection of data that is given a single name and
treated as a single unit. In fact all of the information that a computer stores is in the
form of a file.
There are many types of files, including program files, data files, and text
files. The way an operating system organizes information into files is called the file
system. Most operating systems use a hierarchical file system, which organizes
files into directories under a tree structure.
The beginning of the directory system is called the root directory. An
operating system creates a file structure on the computer hard drive where user
data can be stored and retrieved. When a file is saved, the operating system saves
it, attaches a name to it, and remembers where it put the file for future use.
When a user requests a program, the operating system locates the
application and loads it into RAM (Random-Access Memory, or main memory) of
the computer.
COMPUTER VIRUS
Do you know what a computer virus is? You think it is a microbe. No, it
is not. A computer virus is a program, which is capable to create copies of itself
and “inject” them into different objects of a computer system (files, system
sectors). The copies of a virus can be different from the ‘master’. These usually
fully functional copies allow a virus to spread very quickly.
Viruses are usually classified by the place where they reside on a
computer (e.g. file viruses, boot viruses, boot-sector viruses, network viruses), the
infection method (e.g. memory resident/non-resident, slow/fast etc), their
destruction capabilities (“harmless”, not dangerous, very dangerous) and any
special features of the virus algorithm (e.g. polymorphic, stealth, etc).
A “harmless” virus is a virus which does not affect a computer’s
operation. This should not lead to a conclusion that some viruses are to be
considered as being “good”. Even if such viruses may not cause direct damage,
they at least cause “economic” damage in a sense that you have to spend time to
get rid of them.
A virus is called “not dangerous” if it only manifests itself by using
e.g. disk-space and performing some “entertaining” graphics, sounds or other
effects.
A dangerous virus affects a computer’s operation, for example, by
slowing it down more and more.
Very dangerous viruses usually perform destructive actions, such as
corrupting data, deleting data and or messing up settings that are vital for proper
computer operations.
Polymorphic viruses (self-encrypting viruses or ghost viruses) are able to
change their main body from copy to copy by making use of encryption algorithms
and modifications of the decryption routine. Through these code variations the
virus hopes, that virus-scanners (antiviruses) will not be able to detect all instances
of the virus.
Stealth viruses are capable, while being active, of hiding their presence and
the modifications they have made to files or system sectors. This is usually
achieved by the virus intercepting DOS calls that access files or sectors and
"giving back" the "clean" information.
To fight computer viruses, programs called antivirus programs (AVP), or
virus scanners, were created. An AVP scanner can test your system for virus
presence in: system memory; files, including archived and packed files; and
system sectors, e.g. the Master Boot Record (MBR) of hard disks and the boot
sectors of floppy disks and hard disks.
A virus scanner can detect and remove thousands of viruses (it also handles
highly polymorphic viruses). AVP recursively scans packed and archive files,
tests and disinfects resident viruses in system memory, and checks files and
system sectors to discover changes in them.
PROGRAMMING LANGUAGES
Programming - theoretical and practical activities related to the creation of
programs.
Programming is a collective concept and can be regarded both as a science
and as an art; the scientific and practical approach to program development is
based on this. A program is the result of intellectual work, which is
characterized by creativity, and creativity, as is known, has no clear
boundaries. In any program there is
an individuality of its developer, the program reflects a certain degree of art of the
programmer. At the same time, programming also involves routine work that can
and must have strict execution rules and standards.
A programming language is a formal sign system designed to describe
algorithms in a form that is convenient for the executor (for example, a computer).
The programming language defines a set of lexical, syntactic and semantic rules
used in the compilation of a computer program. It allows the programmer to
accurately determine what events the computer will respond to, how data will be
stored and transmitted, and what actions should be performed on these data under
different circumstances
A programming language is an artificial language designed to express
computations that can be performed by a machine, particularly a computer.
Programming languages can be used to create programs that control the behaviour
of a machine, to express algorithms precisely, or as a mode of human
communication.
A programming language is a very concise language with strict rules in
which a computer program must be written. There are two kinds of programming
languages: low-level languages and high-level languages.
Low-level languages (assembly language) are similar to the binary codes
that the computer uses itself. Both assembly language and machine code are
complex to use and are often designed for a particular processor and can’t be easily
transferred to another. The advantages of low-level languages are their speed as
they need little or no translation.
High-level languages use English-like words, which makes programming
much easier. Examples are BASIC, PASCAL, FORTRAN, C, C++, ADA, and
COBOL. Each language has a unique vocabulary (a set of keywords) and a
special syntax (a set of grammatical rules) for organizing the instructions
that tell a computer to perform specific tasks.
High-level programming languages are more complex than the languages
the computer actually understands, called machine languages. Each different type
of CPU has its own unique machine language.
Lying between machine languages and high-level languages are languages
called assembly languages. Assembly languages are similar to machine languages,
but they are much easier to program in because they allow a programmer to
substitute names for numbers. Machine languages only consist of numbers.
Lying above high-level languages are languages called fourth-generation
languages (usually abbreviated 4GL). 4GLs are far removed from machine
languages and represent the class of computer languages closest to human
languages.
Regardless of that language you use, you need to convert your program
into machine language so that the computer can understand it. There are two ways
to do this: 1) interpret the program, 2) compile the program
An interpreter takes a single line of source code, translates it and carries
out the instruction immediately. This process is repeated line by line until the
whole program is translated and run.
A compiler translates the whole program before the program is run and
turns it into a self- contained program which can be run independently.

1 AVR microcontrollers and their programming
2 VisSim package
3 Features of the Visual Basic language.
4 Raspberry devices and their programming
5 Fritzing package
6 Programming language Object Pascal
