A LEVEL PREPARATION:

INFORMATION
TECHNOLOGIES A
DLAPITE02
A LEVEL PREPARATION:
INFORMATION TECHNOLOGIES A
MASTHEAD

Publisher:
IU Internationale Hochschule GmbH
IU International University of Applied Sciences
Juri-Gagarin-Ring 152
D-99084 Erfurt

Mailing address:
Albert-Proeller-Straße 15-19
D-86675 Buchdorf
[email protected]
www.iu.de

DLAPITE02
Version No.: 001-2022-1114
Vladyslava Volyanska

© 2022 IU Internationale Hochschule GmbH


This course book is protected by copyright. All rights reserved.
This course book may not be reproduced and/or electronically edited, duplicated, or distributed in any form without written permission from IU Internationale Hochschule GmbH.
The authors/publishers have identified the authors and sources of all graphics to the best
of their abilities. However, if any erroneous information has been provided, please notify
us accordingly.

PROF. DR. GERHARD SÄLZER

Mr. Sälzer studied business administration at the Universities of Münster and Innsbruck. After
graduating with a diploma in business administration from the Westfälische-Wilhelms Uni-
versity in Münster, he worked as a research assistant at the Julius Maximilians University in
Würzburg. Later, he worked as a lecturer at the Hochschule RheinMain in Wiesbaden. Since
2012, Mr. Sälzer has taught on-campus English-language study programs at the IU Interna-
tional University of Applied Sciences, initially as an external lecturer and, from 2013, as a Professor of Business Administration and Corporate Finance. In 2014, he took on the role of
Study Program Director for Finance Management at the IU International University of Applied
Sciences.

Mr. Sälzer has developed a wide range of professional experiences over more than 20 years,
working in the finance sector of telecommunications, IT, and cleantech companies. He began
his professional career in the RWE Group's Telecoms holding company in 1995, where he
oversaw the operational and strategic development of a new business unit. From 2000 to
2010, he served as Chief Financial Officer in venture capital and private equity-financed tele-
communication companies. In 2000, he was a co-founder and CFO of a start-up company
later taken over by Deutsche Bank. Since 2010, Mr. Sälzer has been working as a consultant,
venture coach, and interim CFO as well as a supervisory board member. His areas of specialty
are in controlling, corporate finance, mergers & acquisitions (in particular, due diligence and
company valuation) as well as restructuring and start-up scenarios.

TABLE OF CONTENTS
A LEVEL PREPARATION: INFORMATION TECHNOLOGIES A

Module Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Introduction
Signposts Throughout the Course Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Basic Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Learning Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Unit 1
IT in Society and Emerging Technologies 13
Author: Vladyslava Volyanska

1.1 Digital Currencies and Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15


1.2 Social Networking Services and Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.3 The Impact of IT on Society, Monitoring, and Surveillance . . . . . . . . . . . . . . . . . . . . . . . . 24
1.4 Technology Enhanced Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

Unit 2
New and Emerging Technologies 29
Author: Vladyslava Volyanska

2.1 Near Field Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30


2.2 Ultra-HD Television Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.3 Artificial Intelligence and Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4 Augmented and Virtual Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.5 Computer-Assisted Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.6 Holographic Imaging and 3D Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Unit 3
Communications Technology 43
Author: Vladyslava Volyanska

3.1 Network Hardware, Servers, and Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45


3.2 Network Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 Switching, Routing, and Flow Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4 Wireless Transmission and Mobile Communication Systems . . . . . . . . . . . . . . . . . . . . . 58
3.5 Network Security and Disaster Recovery Management . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Unit 4
Project Management 65
Author: Vladyslava Volyanska

4.1 The Stages of the Project Life Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66


4.2 Project Management Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Tools and Techniques for Project Management Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Unit 5
Life Cycle Management 77
Author: Vladyslava Volyanska

5.1 Analysis and Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78


5.2 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.3 Development and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.4 Implementation, Evaluation, and Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.5 Prototyping and Methods of Software Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

Unit 6
Mail Merge 87
Author: Vladyslava Volyanska

6.1 Master Documents and Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88


6.2 Rules and Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.3 Standard Letters and Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Unit 7
Graphics Creation and Animation 93
Author: Vladyslava Volyanska

7.1 Common Graphics Skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95


7.2 Text, Vector, and Bitmap Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7.3 Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

Unit 8
Animation and Morphing 111
Author: Vladyslava Volyanska

8.1 Frames and Canvas Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113


8.2 Timing and Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.3 Tweening and Morphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
8.4 Sound Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

Unit 9
Programming for the Web 121
Author: Vladyslava Volyanska

9.1 HTML and CSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123


9.2 JavaScript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
9.3 Operators and Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Appendix
List of References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
List of Tables and Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

INTRODUCTION
WELCOME
SIGNPOSTS THROUGHOUT THE COURSE BOOK

This course book contains the core content for this course. Additional learning materials
can be found on the learning platform, but this course book should form the basis for your
learning.

The content of this course book is divided into units, which are divided further into sec-
tions. Each section contains only one new key concept to allow you to quickly and effi-
ciently add new learning material to your existing knowledge.

At the end of each section of the digital course book, you will find self-check questions.
These questions are designed to help you check whether you have understood the con-
cepts in each section.

For all modules with a final exam, you must complete the knowledge tests on the learning
platform. You will pass the knowledge test for each unit when you answer at least 80% of
the questions correctly.

When you have passed the knowledge tests for all the units, the course is considered fin-
ished and you will be able to register for the final assessment. Please ensure that you com-
plete the evaluation prior to registering for the assessment.

Good luck!

BASIC READING
Brown, G., Sargent, B., Gillinder, B., & White, A. (2021). Cambridge International A Level information technology student’s book. Hodder Education.

Long, P., Ellis, V., & Lawrey, S. (2019). Cambridge International AS & A Level IT coursebook.
Cambridge University Press.

FURTHER READING
UNIT 1

Wiberg, M. (2005). The interaction society: practice, theories and supportive technologies.
Information Science Pub. (pp. 1–25).

UNIT 2

Mirza, T., Tuli, N., & Mantri, A. (2022). Virtual reality, augmented reality, and mixed reality applications: Present scenario. 2nd International Conference on Advance Computing and Innovative Technologies in Engineering. IEEE. (pp. 1405–1412).

UNIT 3

Tanenbaum, A. S., Wetherall, D., & Feamster, N. (2020). Computer networks (6th ed.). Pearson Education Limited. (pp. 29–37).

UNIT 4

Kerzner, H. (2017). Project management: A systems approach to planning, scheduling, and controlling (12th ed.). Wiley. (pp. 39–85).

UNIT 5

Rossberg, J. (2014). Beginning application lifecycle management. Apress Media LLC. (pp.
13–32).

UNIT 6

Office Technology Today. (2018). Making magic with mail merge. Office Technology Today, 9(11), 1–2.

UNIT 7

Chopine, A. (2011). 3D art essentials: The fundamentals of 3D modeling, texturing, and animation. Focal Press. (pp. 21–44).

UNIT 8

Chopine, A. (2011). 3D art essentials: The fundamentals of 3D modeling, texturing, and animation. Focal Press. (pp. 103–126).

UNIT 9

Libby, A., Guarav, G., & Asoj, T. (2016). Responsive web design with HTML5 and CSS3 essen-
tials. Packt Publishing. (pp. 6–31).

LEARNING OBJECTIVES
A Level Preparation: Information Technologies A aims to help students prepare for the Cambridge International A Level Information Technology exams. It communicates the basics needed to complete the tests and provides the foundation for continued self-study in preparation for the exam. The course assumes knowledge of the Cambridge International AS Level and additionally covers commonly used information technologies, their concepts, development, applications, and their impact on societal development.

On successful completion, students will be able to assess the impact of information tech-
nologies on society and social development and understand the principles of computer
networking. Students will understand project planning, organization, tools, and assess-
ment as well as be able to analyze and design software development lifecycles. Students
will have the skills to develop form letters and apply mail merging, apply fundamental
image editing, and use basic animation techniques.

UNIT 1
IT IN SOCIETY AND EMERGING
TECHNOLOGIES

STUDY GOALS

On completion of this unit, you will be able to ...

– define different types of digital currencies.
– describe the process of data mining.
– understand the essence of social networking services.
– differentiate types and features of technology enhanced learning.

1. IT IN SOCIETY AND EMERGING
TECHNOLOGIES

Introduction
Every day we encounter technologies that are changing the way we live. These new technologies determine how we interact with the world. They facilitate our daily lives and help us feel connected to other people. They represent the practical realization of new ideas, bringing fresh, unexpected solutions to problems. They are known as emerging technologies.

There is no precise definition that groups the technologies categorized by this term, but we can say that emerging technologies are designed for a variety of purposes and fields of use. Some of them represent completely new solutions, whereas others are based on the continuous development of well-known approaches. Different institutions and organizations create their own criteria, which a technology must fulfill to be included in the classification of emerging technologies. These criteria vary widely, ranging from estimates of future market potential to the expected impact of the trend. However, an important common factor is that emerging technologies should bring social or economic effects within the next five to ten years.

The European Innovation Council (EIC) supports innovative technologies and adds candidate technologies to the group of emerging technologies, making the chosen ones part of a so-called “bottom-up model”, complemented by funding for innovative technologies of strategic interest (Lopatka et al., 2022). In the first stage of the model, areas of interest are identified according to EU policy. These areas of interest are grouped into three large categories that reflect the target strategies of EU policy, as shown in the following table.

Table 1: Areas of Emerging Technology and Innovation Identified

Green deal:
• Energy harvesting, conversion, and storage
• Cooling and cryogenics
• Industry and agriculture decarbonization and pollution abatement
• Environmental intelligence and monitoring systems
• Water-energy nexus
• Sustainable, safe, and regenerative buildings

Health:
• Space-based regenerative medicine and tissue engineering
• Cardiogenomics
• AI-enabled drug discovery
• Companion diagnostics in cancer
• Optimization of the healthcare continuum
• From single biomarkers to multi-marker big data maps
• High-tech mental health practitioner
• RNA-based therapies for cancer, complex and rare genetic diseases
• Synthetic biology for industrial biotech
• Cell and gene therapies

Digital and industry:
• Next generation computing devices and architectures
• Chip-scale frequency combs
• Photon, phonon, electron triangle
• DNA-based digital data storage
• Alternative approaches to quantum computation
• AI-based local digital twins
• New uses of space
• 2D materials for low-power electronics
• Sustainable electronics

Source: Vladyslava Volyanska (2022), based on Lopatka et al. (2022, p. 8).

The world and the social environment are changing under the pressure of emerging technologies. On the one hand, they bring benefits and comfort into our lives; on the other hand, they expose us to new risks and challenges. Today we face threats, brought about by emerging technologies, that did not exist a couple of decades ago. Prevalent among these issues are user privacy; compliance and legal violations; ethical concerns; data breaches; and spoofed chatbots. The accelerating use of emerging technologies raises the question of how we can operate in this world while continuously at risk. Unfortunately, there is no definitive answer, and it is doubtful that these issues will all be solved quickly.

It is now important for every organization to find a balance between operating successfully in this increasingly innovative environment and maintaining control. The appropriate level of control and governance is important. Too much control, and we can miss out on the benefits and profits of using emerging technologies; too little control could put us at risk. IT administrators are put under pressure by constantly changing circumstances, forced to continuously adapt the rules and principles of operation.

1.1 Digital Currencies and Data Mining


Digital currencies are an exciting new technology that is likely to have an influence in the future. Many cryptocurrencies are generated through a process called mining; this should not be confused with data mining, the process of extracting valuable patterns from large amounts of data, which is also covered in this section.

Digital Currencies

Before considering the concept of digital currencies, we should reflect on the concept of
money. Money is a unit of value that can be exchanged for goods and services. The barter
system was already used eight thousand years ago; this system was followed by coins and
later by printed currency. This was followed by the first form of check, which appeared in
the eighteenth century. The idea of digital money was proposed by David Chaum in the
late 1970s and the first digital money system, PayPal, was introduced in 1998. In 2009, the
first decentralized cryptocurrency, bitcoin, was created.

These changes in the form money took and how it was exchanged were driven not only by historical developments, but also by the imperfections of each form. For example, the main drawback of the barter system is the lack of a common unit of value. Similarly, coins and printed currency, despite their advantages, do not provide security and are subject to inflation. The check payment system improved security because a check indicates the identity of the issuer and holder, confirmed by a signature; however, checks have longer processing times.

It is important to differentiate between the terms digital currency, virtual currency, and
cryptocurrency to use them correctly. Incorrect usage of these terms creates confusion.
The term “digital currency” (also known as electronic currency, digital or electronic
money, or e-cash) can be applied to any currency that exists in digital form and typically
has no physical equivalent. This term has broad meaning and includes a variety of differ-
ent categories, starting with the digital edition of money issued by governments and end-
ing with cryptocurrencies. Digital currency needs designated applications, software, and
networks to operate on. Digital currency is used for the same purpose as physical cur-
rency: to pay for goods and services. Some digital currencies have general use, and some
are only accepted in certain communities, for example, game tokens.

Digital currency can be centralized or decentralized. There are three types of control systems: centralized, decentralized, and distributed systems. The difference lies in the presence or absence of control centers and in their number. Centralized systems have a single control point. All transactions are performed at this point and all decisions are made there too. Advantages of a centralized system include ease of implementation and scaling, because the single control point can make decisions quickly and does not need to coordinate with other entities. The disadvantage is the dependency on the single control point, which makes the system vulnerable: any attack on this single control point can destabilize the entire system. Furthermore, a system with a single control point is inherently bureaucratic and not transparent, and therefore prone to fraud.

A common example of a centralized system is a bank system. In this system, stored value instruments play an important role. Stored value instruments denote anything that can be used to transfer value, including debit cards, credit cards, checks, and gift cards, but not cash. These are used to transfer an amount which can later be exchanged for goods and services. Checks are considered the first example of stored value instruments. Debit cards are attached to bank accounts; importantly, all transactions are controlled electronically. Credit cards work in a different way: using a credit card creates a debt that must be repaid later, often with interest. A stored value card (such as a gift card) functions like a debit card and has a preloaded amount available to spend; the value is stored in the card itself, rather than in an external account. Here, we differentiate between closed-loop and open-loop cards. Closed-loop cards can only be used once, whereas open-loop cards can be reloaded and used many times.

Decentralized systems have many control points, and responsibility is distributed among them. The advantages are that decisions are made at a level closer to the user and that the system is less vulnerable because it has multiple control points: failure at one point will not necessarily destabilize the entire system. However, a disadvantage is the increased scale of the system. In decentralized systems, due to the higher number of control points, there is also an increased chance of task duplication.

In distributed systems, any point is a control point. This type of system is relatively new. One advantage is that such systems are less vulnerable: to disable this type of system, an intruder must take control of more than 50 percent of the control points, which makes hacking attempts unprofitable and pointless. Additionally, due to the horizontal hierarchy, where all control points are equal and any participant in the system is a control point, such a system is completely transparent. The disadvantage is that these systems are expensive, and it takes a lot of investment to stabilize them, although it may be possible to save costs by operating at a large scale over time.

The best example of a distributed system is blockchain: a distributed database management system. The basic characteristic of blockchain is transparency because any participant in the system has a full version of the database (transaction register). This means that each participant has access to a complete list and history of transactions. Consensus is required to allow a transaction. Any declared transaction is strictly confirmed after it has been approved by most of the participants. This is done through consensus algorithms and through a cryptographic hash function.
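
The idea that every participant holds the full transaction register and that records are linked by cryptographic hashes can be illustrated with a small sketch. The following Python example is a simplified illustration only, not how any real blockchain is implemented; the block fields and transactions are invented for demonstration.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (including the previous block's hash) with SHA-256."""
    # Sorting keys makes the serialization deterministic.
    serialized = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(serialized).hexdigest()

# A toy chain: each block stores a transaction and the hash of its predecessor.
genesis = {"index": 0, "transaction": "Alice pays Bob 5", "prev_hash": "0" * 64}
block_1 = {"index": 1, "transaction": "Bob pays Carol 2", "prev_hash": block_hash(genesis)}
block_2 = {"index": 2, "transaction": "Carol pays Dave 1", "prev_hash": block_hash(block_1)}
chain = [genesis, block_1, block_2]

def chain_is_valid(chain: list) -> bool:
    """Every block must reference the hash of the block before it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(chain_is_valid(chain))                   # True
genesis["transaction"] = "Alice pays Bob 500"  # tamper with an earlier entry
print(chain_is_valid(chain))                   # False: the tampered block no longer
                                               # matches the hash stored in its successor
```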

Blockchain is the basic technology behind the bitcoin system, but the two are separate concepts. Blockchain is a technology used by the bitcoin system. In 2008, when Satoshi Nakamoto released a paper on a peer-to-peer (P2P) system called bitcoin, people first experimented with the idea of bitcoin and then realized that the inherent blockchain technology of this system could be used for other purposes (Narayanan et al., 2016). A peer-to-peer system is a computer system that breaks up tasks between equal participants.

Digital currency can be regulated or unregulated. Regulated digital currencies are curren-
cies that are controlled by the developers or issuing body. Central bank digital currencies
(CBDC) are an example of a regulated digital currency; they are digital tokens issued by the
central bank of the corresponding country and are regulated by that bank. CBDCs only
exist in digital form and can be used as a replacement for or a supplement to the physical
currency of the country. There are two types of CBDCs: wholesale and retail. Wholesale
CBDCs are used by financial institutions whereas retail CBDCs are used by consumers and
are almost the same as ordinary currency. Retail CBDCs can be further categorized as either token-based or account-based. Token-based currencies use private and public keys as a validation method, meaning users can stay anonymous during transactions. Account-based currencies require digital identification to access the account. Nine countries had already launched their own CBDCs as of the start of 2022: Antigua and Barbuda, Grenada, Nigeria, Saint Lucia, the Bahamas, St. Kitts and Nevis, Montserrat, Dominica, and St. Vincent and the Grenadines (Atlantic Council, 2022). Other nations are also
considering the use of CBDCs. The use of CBDCs should bring consumers financial safety
and provide privacy and accessibility.

Virtual currency is a form of unregulated digital currency issued by private developers that
exists within certain communities or virtual environments. In 2012 the European Central
Bank provided the following definition of virtual currency: “a type of unregulated, digital
money, which is issued and usually controlled by its developers, and used and accepted
among the members of a specific virtual community” (European Central Bank, 2012, p. 5).
Virtual currencies are divided into two groups: open virtual currencies and closed virtual
currencies. Open virtual currencies are also known as convertible virtual currencies because they can be converted to another form of currency: another virtual currency or a fiat currency (a currency not backed by a tangible asset, such as gold). Closed virtual currencies operate only in special environments (for example, video games); they cannot be converted to other currencies.

Cryptocurrencies are a type of virtual currency. Their distinguishing feature is that they use cryptographic techniques to secure and verify transactions and to control the creation of new units of the cryptocurrency. As mentioned above, the idea of digital money was proposed by David Chaum in the late 1970s. Later, in the 1990s, he implemented it in the first cryptographic electronic payment system, Digicash. Bitcoin was then launched in 2009 by Satoshi Nakamoto (a pseudonym).

We know that cryptocurrency uses different cryptographic techniques, but it is also impor-
tant to understand the main principle behind them. This will be explained with a simple
example. Assume there are six classmates that usually have lunch together at a single
table. The university canteen puts a new rule in place saying that there can be only one
receipt and one payment per table. Therefore, one student must pay and use the bill to
figure out how much is owed by each member of the group. For a while, this system works,
but soon problems arise because some of them do not agree with the amount they owe,
and others eat too much and cannot pay later. As a solution, one of them proposes to take a notebook and update the notes after each lunch. The notebook should be kept in a place where each of them has access. This works until one of the students discovers that some
amounts have been manipulated or didn’t add up correctly, leading to less trust in each
other. Then, one of the students proposes to use six notebooks. This way, each student
will keep records of every account and how much is owed to whom. This enables the
group to verify claims of who is owed money. If more than 51 percent of the students
agree that the proposed operation is correct, it will be approved, the notebooks will be
updated, and it will be certified with signatures. This ensures that no entry has been
deleted and all entries have been reviewed and approved. The records can only be
changed by offering all members of the group a new transaction. In this case, if someone
wants to cheat the system, they will have to change the entries in at least four of the note-
books. If we consider that the students are not dealing with physical money but with some
form of fictional, digital currency, then this is the principle of bitcoin. The main character-
istics and requirements for bitcoin are that it has no physical form and is currency is com-
pletely digital.

The group in the example above moved from a shared notebook to individual ones. This is
called a “decentralized database system”. Bitcoin is a decentralized currency. The bitcoin
system doesn’t create tension among participants due to distrust of each other and is
completely anonymous. Members do not have to trust each other to participate in the sys-
tem, only the system itself needs to be trusted. The bitcoin system, unlike the example
above, doesn’t allow for personal identification. The identifiers are a set of alphanumeric
characters generated randomly to maintain privacy. This is achieved through a crypto-
graphic hash function. Bitcoin is a system without intermediaries. At the center of this sys-
tem, there is no one who controls the approval of the transactions. Everyone in the group
is involved in this approval process and a minimum consensus of 51 percent is required to
approve a transaction.
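
The majority-approval idea from the lunch-table example can be sketched in a few lines of Python. This is a toy illustration with invented ledger entries, not a real consensus algorithm; it only shows that an entry counts as valid when more than half of the copies agree.

```python
# Each participant keeps their own copy of the ledger (the "notebooks" from the example).
ledgers = [
    ["Anna owes Ben 3", "Ben owes Clara 2"],   # participant 1
    ["Anna owes Ben 3", "Ben owes Clara 2"],   # participant 2
    ["Anna owes Ben 3", "Ben owes Clara 2"],   # participant 3
    ["Anna owes Ben 3", "Ben owes Clara 2"],   # participant 4
    ["Anna owes Ben 3", "Ben owes Clara 9"],   # participant 5 (manipulated entry)
    ["Anna owes Ben 3", "Ben owes Clara 2"],   # participant 6
]

def majority_approves(ledgers: list, entry: str) -> bool:
    """An entry counts as valid only if more than 50% of the copies contain it."""
    votes = sum(entry in ledger for ledger in ledgers)
    return votes > len(ledgers) / 2

print(majority_approves(ledgers, "Ben owes Clara 2"))  # True  (5 of 6 copies agree)
print(majority_approves(ledgers, "Ben owes Clara 9"))  # False (only 1 of 6 copies)
```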

The bitcoin system is immutable. This means that none of the previous transactions can
be changed to cancel its effect from the point in time when it is offered to the system. Bit-
coin is a fiat currency like most free market currencies. It is not backed by any physical
resources, only by trust. Bitcoin has value simply because members of the group believe it
has value. There is a limit of 21 million bitcoins that can ever be mined, and each bitcoin
can be divided into very small parts down to 0.00000001 (and potentially even smaller).
The bitcoin system provides a high level of security. In the example above, the classmates
used signatures to identify themselves in transactions. The bitcoin system uses crypto-
graphic methods for the same purpose. The cryptographic hash function used by bitcoin is
SHA-256.
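
Python's standard hashlib module provides SHA-256, so the behavior described above can be tried directly. The sketch below, with invented transaction strings, shows that even a minimal change to the input produces a completely different fixed-length digest.

```python
import hashlib

# SHA-256 maps input of any length to a fixed 256-bit (64 hex character) digest.
tx_1 = "Alice pays Bob 1.00000000 BTC"
tx_2 = "Alice pays Bob 1.00000001 BTC"   # a minimal change to the input

print(hashlib.sha256(tx_1.encode()).hexdigest())
print(hashlib.sha256(tx_2.encode()).hexdigest())
# The two digests differ completely, so even tiny manipulations are easy to detect,
# while the original input cannot practically be reconstructed from the digest.
```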

Bitcoin wasn’t the first attempt to create a digital currency; however, previous attempts had failed to solve the so-called “double-spending problem”: one unit of digital currency could be copied, allowing the user to spend it more than once. Bitcoin solves this problem by
using a decentralized verification model. Another example of “old” cryptocurrency is lite-
coin. Litecoin is also a decentralized P2P cryptocurrency. It was created in 2011 from a fork
in the bitcoin blockchain by former Google engineer Charlie Lee. It has similar features to
bitcoin, but a different algorithm and it works faster. The most widely used cryptocurren-
cies are bitcoin (BTC), ethereum (ETH), tether (USDT), USD coin (USDC), binance coin
(BNB), binance USD (BUSD), XRP, cardano (ADA), solana (SOL), dogecoin (DOGE), dai (DAI),
and polkadot (DOT).

When bitcoin was created, Satoshi Nakamoto described it as “peer-to-peer cash”. This means that a transaction takes place directly between the members without intermediaries. A typical intermediary for a financial operation is a bank. The parties in a P2P transaction use platforms or applications and pay a commission to them.

To operate with digital currency, we need designated software-based digital wallets known as “e-wallets”. Such applications store users’ payment information along with other items, for example, gift coupons. We can distinguish between three types of e-wallets:

• Closed wallets: These only allow users to make transactions with the issuer of the wallet, for example, Amazon.
• Semi-closed wallets: These allow shopping and the transfer of virtual funds to another user in the same wallet network or to a bank account, for example, Paytm and Mobikwik.
• Open wallets: In addition to the functions of a semi-closed wallet, these allow users to transfer and withdraw funds from banks and automated teller machines (ATMs), for example, PayPal, One Touch, Apple Pay, and Google Pay.

The use of digital currencies, and especially cryptocurrencies, can bring high profits but also comes with high risk. Here, we are speaking not only about financial or economic risks but also about technological risks and cyberthreats that may lead to considerable follow-up effects. The highest risks are associated with a CBDC because the influence on society in this case can be severe. The risks can be classified as economic, financial, and human rights risks. The key economic risk is inflation. The financial risks include risks associated with the exchange rate, operational risks, and higher lending costs. CBDCs can also be used as a control tool: each transaction is recorded, and persons with access to the CBDC ledger can see all transactions and control individuals.

Data Mining

Data mining is a type of decision support process that searches for hidden patterns in data
(patterns of information). The term “data mining” is named to reflect the concept of
searching (mining) for valuable information in a large database (data). Both processes
require either the sifting of large amounts of raw material, or the intelligent exploration
and search for the desired values. The concept of data mining appeared in 1978 and gained popularity in its modern interpretation in the 1990s (Russell & Klassen, 2019). Until
that point, the analysis and processing of data was carried out within applied statistics
(the task of processing small databases). Data mining has a multidisciplinary origin
because it developed based on a mix of sciences, including artificial intelligence, applied
statistics, database theory, pattern recognition, and machine learning. Data mining tech-
nology is based on the concept of patterns, which are inherent in subsamples of data and
can be expressed in a form understandable to humans.

The Gartner Group (a technological consulting agency) introduced the term business intel-
ligence (BI) in the 1980s (Power, 2007). This term describes various concepts and methods
that improve business decisions using decision support systems. The concept of BI com-
bines various tools and technologies for analyzing and processing enterprise-wide data. BI
systems are created based on these tools with the main purpose of improving the quality
of information for managerial decisions. That is why BI systems are also known as “deci-
sion support systems”. BI systems can be categorized into the following classes:

• Tools for building data warehouses (DW)
• Online analytical processing systems (OLAP)
• Enterprise information systems (EIS)
• Tools for intelligent data analysis (data mining)
• Tools for executing queries and building reports (query and reporting tools)

To maximize the efficiency of data mining tools, it is necessary to select, clean, and trans-
form data. Data is the raw material offered by data providers and used by consumers to
derive information from it. The term “data” covers facts, graphics, pictures, text, sounds,
and video segments. Data can be obtained as a result of measurements and experiments,
as well as arithmetic and logical operations. They must be presented in a form suitable for
storage, transmission, and processing. Sometimes, to maximize the efficiency of data min-
ing, it is necessary to integrate additional information from external sources and create a
special environment for the operation of data mining algorithms.

The results of data mining strongly depend on the level of data preparation, and not on
the capabilities of a certain algorithm or a set of algorithms. The majority of the work for
data mining consists of collecting data, which is done before the tools themselves are
even launched. Improperly applying such tools can waste their potential and also waste
money. To get use from data mining technology, its problems, limitations, and critical
issues must be analyzed and understood. The most important aspect is to understand
what this technology cannot do.

Data mining cannot replace an analyst and data mining cannot answer the questions that
were not asked. Accurate model selection and interpretation of the patterns that are
found are required. Therefore, such tools require close cooperation between a domain
expert and a specialist in data mining tools. The obtained data models must be integrated
into business processes to be updated. Recently, data mining systems have been applied
as part of data warehousing technology. Successful analysis requires high-quality data preprocessing, which takes a lot of effort and time for preliminary data analysis, model selection, and adjustment.

Data mining can also lead to false conclusions. Data mining tools can produce a huge
amount of statistically unreliable results. To avoid this, it is necessary to test the obtained
models on sample data. Data mining tools do not require a strictly defined amount of ret-
rospective data, in contrast to statistical tools. This can cause the creation of unreliable,
false models and, as a result, the tool will make incorrect decisions based on them. There
are many data mining techniques to turn raw data into information including data clean-
ing and preparation, tracking patterns, classification, association, outlier detection, clus-
tering, regression, prediction, sequential patterns, decision trees, statistical techniques,
visualization, artificial neural networks, data warehousing, long-term memory processing,
machine learning, and artificial intelligence.
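
As a minimal illustration of one of these techniques, clustering, the following sketch groups a handful of invented customer records into two segments. It assumes the third-party libraries numpy and scikit-learn are installed; it is only a toy example of the general idea, not a complete data mining workflow.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy customer records: (number of purchases per month, average basket value).
customers = np.array([
    [2, 15], [3, 18], [1, 12],      # occasional, small-basket shoppers
    [20, 90], [18, 85], [22, 95],   # frequent, large-basket shoppers
])

# Group the records into two clusters purely from the patterns in the data.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(customers)
print(labels)                  # e.g., [0 0 0 1 1 1]: two customer segments
print(model.cluster_centers_)  # typical profile of each segment
```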

Traditional statistical methods of data analysis and online analytical processing (OLAP)
are predominantly focused on verifying pre-formulated hypotheses and on “rough”
exploratory analysis, while one of the main purposes of data mining is the search for less
obvious patterns. Data mining tools can find such patterns on their own and create
hypotheses and correlations. The formulation of a hypothesis regarding correlations is the
most difficult task; that is why data mining provides an obvious advantage over other
analysis methods. Most statistical methods for identifying correlations use the concept of sample averaging, which results in calculations on non-existent values. Data mining works with real values. OLAP is well suited to understanding historical data, while data mining (relying on historical data) looks for answers about the future. The following examples are different ways to store and process data, depending on the goal to be achieved:

• Database: the most popular type of storage. This storage is managed by a database management system (DBMS). There are relational (e.g., SQL-based) and non-relational (e.g., NoSQL) databases.
• Data warehouse: a type of multipurpose software that can collect data from different applications for later storage and management. Data warehouses usually use structured query language (SQL) to query the data (a basic query of this kind is sketched after this list). Data is stored in tables and organized by types with keys and indexes.
• Data mart: a repository for subject-oriented data from the data warehouse. Data marts allow users to access specific types of data stored in the data warehouse. There are three types of data mart:
◦ Independent: doesn’t rely on an existing data warehouse; data is taken from original sources
◦ Dependent: relies on an existing data warehouse
◦ Hybrid: data comes from external operational sources and data warehouses
• Data lake: storage for raw data. A data lake differs from a data warehouse in that its data are in raw form, whereas in a data warehouse they are processed.
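
A flavor of the SQL querying mentioned for data warehouses can be given with Python's built-in sqlite3 module standing in for a real warehouse. The table, column names, and values below are invented for illustration only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("north", "laptop", 1200.0), ("north", "phone", 600.0), ("south", "laptop", 900.0)],
)

# A typical analytical query: total sales per region, ordered by revenue.
for region, total in conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY total DESC"
):
    print(region, total)
conn.close()
```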

Data mining was first mostly directed towards businesses. Here, data mining products
have already become almost as essential as email and are used for a variety of tasks, such as finding the lowest prices for a particular product or the cheapest tickets, or understanding the purchasing habits of customers. Today, almost every business collects
customer data to analyze it. In the financial and banking sector, data mining can help to
find correlations and trends in the market. They are also used in the healthcare sector; for
instance, data mining with artificial intelligence can help to search for new treatments for
various diseases or can help healthcare insurance companies to identify fraudulent activ-
ity. The main purpose of using data mining in education is to study the learning needs of
students. In manufacturing and engineering, data mining can help to identify the depend-
ence between different elements, including customer needs and architecture. In network
security it is used to collect data, and then categorize them into fraudulent or non-fraudu-
lent activities, or for anomaly detection. Anomalies are data that does not match the pat-
tern. Pattern monitoring (one of the fundamental data mining techniques) can be used for a variety of purposes, from security to tracing data patterns in order to draw business conclusions. Data mining can also be used in scientific research, for example, for understanding the nature of the universe with intelligent agents.

Data mining has brought a lot of benefits, however, it is also fraught with potential danger.
First, an increasing amount of information, including private information, can be obtained through data mining. Firms and organizations are collecting information about customers, employees, and partners, but one day this information could be hacked and then sold or used in other unethical ways. The collected information can be misused
to take advantage of vulnerable people or discriminate against a group of people. There-
fore, privacy, security, and the misuse of information are the big problems caused by data
mining tools.

1.2 Social Networking Services and Platforms
Social networking services (SNS), also called “social networking sites” or simply “social
media”, are internet or mobile-based platforms where people can create relationships
with others or share interests. These services are designed for active participation. SNS vary in the tools and functionality they provide. They usually have four common elements:
member profiles, the ability to add other members as contacts, tools supporting commu-
nication between members from contacts, and user generated content (such as photos,
videos, and messages). SNS usually offer several levels of publicity:

• Private: the information about the member is not available
• Restricted visibility: the information is shown only to the user’s contacts or to a certain group
• Public: the account information is visible to all members of the SNS or to anonymous guests

There are many different types of SNS. Some of the main categories include email, instant messaging, blogs, message boards, social media, chat rooms, and forums. Email is perhaps the most common and most used type of SNS. The user logs in to an account to write and read messages (emails). Emails are widely used for private and professional purposes. They serve as a quick and reliable way to send documents, photos, or videos, or to share information with a group of people.

Instant messaging (also known as “texters”) is a two-way communication channel. It allows communication with another person or a group of people. Texters are the main source of a special language used by members, with many abbreviations and emoticons. Popular examples are Windows Live Messenger, ICQ, WhatsApp, and FaceTime.

The word blog comes from “web log”. A blog is a platform where people can share their ideas with others, like a broadcasting service. A common feature is a feedback forum, where individuals can leave their opinions after reading. A micro-blog is a subtype of blog with limited content size (e.g., TikTok, Twitter).

Social media is possibly the most popular type of SNS. It is so present in modern life that the term is sometimes used to refer to all SNS. The most widespread social media platforms, like Facebook and Instagram, are used to share daily activities, photos, and comments, and to republish information from other members. Members create a web presence for themselves by setting up a profile with biographical data. Recently, social media have become widely used for private and business purposes. They are used not only by individuals but also by organizations and firms.

Chat rooms are virtual spaces (websites) that allow users to communicate in real time. Usually, communication is text-based. The use of nicknames is widespread in order to maintain anonymity. Most chat rooms offer a private mode for communicating in limited groups. The most popular example is IRC, but chat rooms focusing on specific topics are also quite popular. Different chat rooms have different rules and manners; for example, to enter a private room, it is necessary to ask permission in a public room first. Chat rooms usually have a moderator who follows the communication to ensure adherence to the rules and can exclude an unwanted user.

Forums are virtual spaces designed for communication on different topics. They differ from chat rooms in that communication persists over time and does not take place exclusively in real time. They tend to have longer-form content in comparison to chat rooms. Many corporations set up forums as part of their brands’ websites, since such close communication with the product users brings benefits for both sides. Message boards allow users to post messages on a defined subject. They are a popular medium among people who are looking for help or for specific goods and services.

The business model of social networking services is based on advertising and marketing. They target content based on an individual’s personal information, habits, location, and other private data, or they sell this personal information to other parties for processing. The impact of social networking should not be underestimated. On the one hand, it expands our horizons by taking us beyond our local neighborhoods. For private persons, it makes it easier to find contacts all over the world who share their interests. It allows users to learn new activities and habits and to be professionally fulfilled, and participation in social media communities creates a perception of affiliation, often alleviating loneliness. For businesses, social networking provides the opportunity to increase the customer audience, understand market trends, and pursue marketing goals. Governments are also using social networking, not just for analyzing moods and trends in society, but also for establishing a sense of trust, or even for manipulating social opinions and trying to eliminate prospective threats.

Participation in social networking almost always involves sharing private information. Such information belongs to sensitive data and must be processed in an appropriate manner. Leaks of such information, as a result of hacking or due to user incompetence, lead to serious consequences. By engaging in social networking, the user is at risk not only of privacy threats but also of computer viruses, data theft, or harassment from other users. Users must take care who they connect with online and be aware of hostility that may be encountered from individuals trying to take advantage of them or from trolls (a troll is a person who intentionally antagonizes other users online). Giving too much private information to governments or corporations allows them to create a profile of the user based on their behavior on the internet. Furthermore, there is a problem of data control because information that has been altered or should have been removed may have been retained.

1.3 The Impact of IT on Society, Monitoring, and Surveillance
People are used to relying on technologies and utilizing them; their technical demands are growing daily. Information technologies have permeated the professional, social, private, and cultural sides of life of almost every human being. We use them without a second thought.

Consider the example of sport. Video cameras and timing systems are now firmly established in elite sport. Cameras provide us with views of sporting performances or provide a reference
for analyzing actions, for example, trainers use videos to analyze and improve the effi-
ciency of their athletes. IT also influences sport in other ways. There are several applica-
tions dedicated to different sports disciplines, starting with fitness programs for beginners
and ending with applications for performance measurement for professionals. Most of
them are integrated with SNS, forums, messengers, and chat rooms that help users to get
in touch with people with the same interests.

The impact of IT on education should not be neglected. The most important is, of course,
the ease of access to information. Something that used to require a lot of effort is now,
with the help of search engine technologies and online libraries, completed in a matter of
minutes. The educational methods, trends, and dynamics are transformed by technolo-
gies. For example, today we are using educational videos to represent the course material,
which should make information easier to learn. The visual presentation allows us to high-
light the lecture and helps to enage students. The technologies have increased the level of
interactivity for students; for example, the application of virtual experiments have made
the practical tasks in physics and chemistry accessible to everyone. Popular are online lec-
tures for gifted students from different nations can be conducted by the best specialists in
the field, regardless of location. E-learning has made it easier to obtain high-quality edu-
cation.

IT is also already deeply integrated in the healthcare sector. It can be seen in control sys-
tems with report options (such systems can monitor the patient’s condition, provide elec-
tronic records, or trigger an alarm in a critical situation), artificial intelligence systems
helping in operations, highly-precise diagnostic instruments, and even smartphone appli-
cations that inform users about taking a pill or drinking water. Managing technologies in
healthcare are now a standard practice. They help to create a treatment plan or to deter-
mine the place of treatment. The most innovative IT trends in healthcare relate to artificial
intelligence systems and data mining.

Many people have a smartphone, tablet, or personal computer. This equipment is used for
a variety of purposes. Families and friends have the possibility to stay connected via social
media and messengers. We take notes, entertain children, and go shopping. In everyday life we are used to utilizing IT. Even if we don’t have the latest smart home technology installed, we are likely dealing with versions of smart technology when we program an electric stove, microwave, or washing machine to start and stop at certain times. It is possible to control televisions and smart speakers with voice commands.
Almost all equipment has a variation utilizing Wi-Fi modules and can be operated
remotely by smartphone or computer. Often, it is easier to interact with technology than
with other humans. For example, it is simpler and faster to warm up a frozen cake in the
microwave than baking one from scratch. This all makes our life easier, but at the same
time affects social interactions by reducing face-to-face communication and decreasing
the time spent together.

IT has fundamentally changed how individuals and organizations interact with money.
Financial institutions make the interaction easier and faster, as well as reducing errors. At
the same time, they get benefits; for example, 24-hour online and mobile banking has extended working hours and opened new markets, and chatbots have reduced the number of human employees required in telephone services. The banking sector uses IT more than most. It is also essential in the back office for financial accounting, banking transactions, risk analysis, and predicting future actions. Some technologies that are
considered as trends in emerging technologies in the financial sector are digital experi-
ence platforms with application programming interfaces (APIs) that allow customers to
integrate data from bank accounts with other applications; blockchain as a solution for
payments; chatbots and artificial intelligence; and robotic process automation for repeti-
tive processes.

Information technologies are also influencing politics and shaping governments. Emerging communication technologies have changed the way that governments interact with society. Governments around the world have recently launched solutions such as electronic identification cards with electronic signatures, and platforms with integrated databases for contact with public sector services. Blockchain technology is being introduced to securely share and store information. For security reasons, governments deploy AI systems to speed up the detection and analysis of potential threats and hazards to society. Physical devices like drones, with capabilities beyond human abilities, are used in hazardous environments.

Governments obtain sophisticated technologies to monitor their citizens but this technol-
ogy is not only used by governments. The rapid development of camera technologies, the
spread of closed-circuit television (CCTV), image recognition algorithms, and other devices
have contributed to a state of near-permanent surveillance. Initially, cameras were installed for security reasons in banks and public places, but with the advancement of the technology, cameras are now installed almost universally, for example, for monitoring your own home. Such cameras are connected to a network and are programmed with algorithms for face recognition.

Life under surveillance does not only mean being captured by cameras; social media surveillance is even more invasive. Social media surveillance is the collection and processing of personal data pulled from different platforms. This process is mostly
automated and allows for real-time analysis of large amounts of data. In a society where
everything is registered, stored, analyzed, and classified, users can also be penalized. This
is the potential consequence of so much surveillance.

1.4 Technology Enhanced Learning


Technology enhanced learning denotes the use of technology to maximize learning efficiency. Sometimes this means technologies and applications designed specifically for learning, and sometimes technology created for other purposes is used in the educational process; for instance, Skype or Zoom for distance learning. Technology enhanced learning comes in the form of a variety of devices, tools, platforms, applications, and delivery methods, such as computer-based or web-based learning, digital collaboration, or virtual classrooms and learning environments.

Computer-based learning (computer-based training) is an interactive educational process without the participation of a teacher or instructor. Students learn the material by interacting with the computer. The course material can be represented in different forms, such as video courses with lecturers, text tutorials, or interactive programs utilizing virtual reality. Materials for computer-based learning can be stored locally on a hard drive or CD.

Web-based learning (e-learning) is an educational process that uses the internet as a delivery tool. Web-based learning can take the form of online tutorials with a lecturer, similar to conventional education, or the form of a webinar or teleconference. Networked courses are another type of remote learning that uses internet technology to transfer information between participants (students and teachers). Massive open online courses (MOOCs) are one example of networked courses. These are free online courses created for an unlimited number of participants. Video conferencing is an online technology that allows participants to communicate using video and sound.

There are proponents and opponents of distance learning. The advantages are obvious: not only can students study anywhere they have access to a computer and the internet, but distance learning also reduces travel costs and can fit the individual's schedule. Opponents point to possible technical difficulties, such as hardware and software incompatibility or connection problems, as well as the feeling of isolation that results from a lack of social interaction. It is indisputable that technology has brought a lot of innovation to the learning process, but it is not a perfect replacement for traditional methods. Technology can be used to enrich educational processes.

SUMMARY
Technology has become an integral part of life for not only individual
persons, but for society as a whole.

A business that does not use the achievements of modern technological progress can hardly be imagined. Business technologies have been developed to utilize innovations and technological solutions that assist in the development of a project. The use of modern business technologies expands the possibilities of commercial projects and creates the ground for new opportunities. The principal aim of modern business technologies is to promote commercial activities and to conquer markets that had been inaccessible through traditional means.

Modern educational technologies provide new information and communication methods in the learning process, which allow students to acquire skills primarily by working with information. The self-organization of each individual is encouraged.

Rapidly developing technologies are actively used in modern medicine.
The use of modern medical technologies in patient diagnosis is espe-
cially remarkable. Innovative technologies in medicine allow professio-
nals to perform less invasive operations and are used for the treatment
of oncological diseases, in cardiac surgery, during cell therapy, in vascu-
lar surgery, in plastic surgery, in orthopedics, as well as in ophthalmol-
ogy.

There are many companies that devote their efforts to the development of modern technologies. These companies operate in every possible sector, from medicine to heavy industry. These firms analyze market trends and the possibility of using new technology, estimating parameters including the availability of analogues, efficiency, cost, relevance, and applicability.

UNIT 2
NEW AND EMERGING TECHNOLOGIES

STUDY GOALS

On completion of this unit, you will be able to ...

– define near field communication.
– identify ultra-HD television systems.
– describe artificial intelligence.
– distinguish augmented and virtual reality.
– decide where computer-assisted translation can be used.
– understand holography technology.
2. NEW AND EMERGING TECHNOLOGIES

Introduction
The introduction of innovative and effective solutions to the problems of everyday life is
the main task of new technologies. An important role is also assigned to information and
telecommunication systems. Modern information and communication systems include

• wired and wireless communication,
• satellites,
• data transmission devices,
• antennas, and
• surveillance cameras.

This list is not complete. Due to the existence of information systems control centers, it
has become possible to resolve problems quickly by using automated tools.

The principal advantage of modern computerized technology is the ability to obtain large amounts of information. These technologies also have an impact on the development of the job market. Thanks to the implementation of such technology, it has become possible to considerably speed up work and establish communication between people, even if they are far away from each other. In modern medicine, computer technologies occupy a very important place; without them it is quite difficult, or even impossible, to diagnose various diseases.

2.1 Near Field Communication


One of the most creative innovations in the field of device connection is the technology
allowing one device to be connected to another nearby device without a physical con-
nection. Near field communication (NFC) technology is one of the key applications that
helps to connect devices and interact while the application is running on both devices.
NFC works with electromagnetic sensors, which resonate with each other if the devices
are only separated by a few centimeters. The devices can effectively establish a connec-
tion that starts the data exchange process. When this happens between two copies of an
application running on different devices, this process is known as pairing. As a result,
applications can communicate continuously. This standard operates within the publicly
available and unlicensed ISM (industrial, scientific, medical) band radio frequency at 13.56
megahertz. This function works up to a distance of ten centimeters with compact standard
antennas.

Interacting devices can be very different. For example, one device could be a tablet and the other could be anything from a large monitor to a simple radio frequency identification (RFID) tag. Contactless interaction of devices is useful in different fields, including one-time transfers of data or permanent connections. NFC was primarily created to be used in
digital mobile devices and “is an extension of the contactless card standard (ISO 14443)…
The NFC device can communicate with existing smart cards, ISO 14443 readers, and with
other NFC devices and is thus compatible with the existing card standards, that are
already used in payment or public transport systems” (Bashynska et al., 2018, p. 1065).

There are three main conditions for the device to start contactless interaction:

• The application must declare the proximity (the possibility of contactless interaction) in
the manifest.
• The data exchange is only possible for foreground applications (background mode is
not provided).
• The application must ask for the user’s consent to establish a connection. The applica-
tion should display connection waiting, connection established, active connections,
and should allow the user to terminate connections at any time.

NFC is an open platform technology. This technology is described in ECMA-340 and ISO/IEC 18092. These standards define bit rates, modulation schemes, coding, structure of the NFC device interface, initialization schemes, and conditions required to control collisions during initialization.

Standards: Documentation that provides consistency across IT services, particularly communication services.

2.2 Ultra-HD Television Systems


Ultra-high-definition television (UHDTV) is a family of television standards that provide image clarity much higher than that of high-definition television, as well as most modern cinematographic standards. The image format for UHDTV is specified by the SMPTE ST 2036-1 standard as 3,840 by 2,160 pixels (8.3 megapixels), which is exactly four times the pixel count of full HD (1,920 by 1,080). UHDTV is also known as "Quad Full HD". The term "4K" is often used to describe UHDTV; however, this actually refers to a resolution of 4,096 by 2,160 pixels.
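The pixel arithmetic behind these figures can be checked directly; a short worked calculation based on the resolutions quoted above is

$$3{,}840 \times 2{,}160 = 8{,}294{,}400 \approx 8.3\ \text{megapixels} = 4 \times (1{,}920 \times 1{,}080) = 4 \times 2{,}073{,}600.$$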

In September 2003, the Japan Broadcasting Corporation (NHK) completed the develop-
ment of a pilot UHDTV system and in November 2005, it demonstrated a live UHDTV pro-
gram over 260 kilometers via a fiber optic connection line for the first time (NHK, n.d.). The
European Broadcasting Union issued a regulation for UHDTV in 2014, which is intended for
strategic planning to improve technical parameters, including clarity, frame rate, dynamic
range and color of the image, and sound transmission technologies (Puigefragut, 2014).

UHDTV2 is an image format defined by the SMPTE ST 2036-1 standard as 7,680 by 4,320
pixels (33.2 megapixels). Also known informally as “8K”, the resolution of this standard is
considered comparable to IMAX 15/70 film and is about 16 times that of high-definition television (HDTV).

2.3 Artificial Intelligence and Robotics
Today, the productivity of a human programmer can only be maximized when computers take over part of the work. One of the best ways to achieve maximum efficiency here is using artificial intelligence (AI), whereby the computer not only performs part of the routine operations but can also learn for itself. AI is interpreted as the property of automatic systems to epitomize individual features of human intelligence. AI, similarly to human beings, is capable of making the best possible decision based on previous experience and a rational analysis of external data. There are at least two points of view on what should be considered artificial intelligence. Neurobionics aims to reproduce the processes that take place in the human brain, whereas in informational artificial intelligence the goal is not to build a technical copy of a biological system, but to create tools for solving intellectual problems.

There are three principal trends in traditional AI modeling. The first approach focuses on
understanding of the structure and mechanisms of the human brain. The main goal of this
approach is to understand how we are thinking. In this first approach, the models are built
based on psychophysiological data. After the experiments conducted with them, the new
hypotheses regarding the mechanisms of intellectual activity are put forward. In the sec-
ond approach the intellectual activity is modeled with the help of computers. AI is consid-
ered as an object of study. The main goal is to create the algorithm and appropriate soft-
ware to imitate intellect. The third approach focuses on the creation of mixed human-machine systems (interactive intelligent systems). The goal is to find a symbiosis of natural
and artificial intelligence. The problem is the optimal distribution of functions between
man and machine. Today, these approaches are merged, and we can differentiate
between the following categories of AI:

• Top-down AI: This approach focuses on the creation of expert systems. High-level mental processes, like thought, speech, creativity, and emotion, are fundamental for knowl-
edge bases and inference systems.
• Bottom-up AI: This approach models intelligent behavior based on biological elements.
It deals with the study of neural networks and evolutionary calculations. The goal of this
approach is the creation of appropriate computing systems, such as a neurocomputer
or biocomputer.

Alan Turing proposed an empirical test to determine the possibility of artificial intelligence
thinking like humans. The standard interpretation of the test is that a person interacts
with one computer and one human, then, based on the answers to the questions, they
must determine whether the correspondent was a human or a computer program. The
task of the computer program is to mislead the person. The most general understanding of this test is that a machine will be considered intelligent when it is able to communicate with a person without being recognized as a machine.

The first intellectual tasks were logic games (such as checkers and chess) and proof of the-
orems. Shannon’s electronic mouse should also be mentioned here. This “mouse” could
explore the maze and find a way out of it. When it was placed in the already known laby-
rinth, it could leave immediately without looking into dead-end passages (Klein, 2018).

The American cybernetician A. Samuel created a program that allowed a computer to play checkers and to learn during the game (Subarna, n.d.). The rules of the game were programmed so that the computer could choose its next move. At each stage of the game, the machine chose the next move from a set of possible moves according to a quality criterion. By combining criteria (for example, in the form of a linear combination with experimentally selected coefficients or in a more complex way), it is possible to obtain a performance indicator (an evaluation function) to evaluate the next move of the machine. Then the machine, comparing these performance indicators, chooses the next move corresponding to the highest indicator. Such automation of the choice of the next move does not necessarily provide an optimal choice, but it serves as a basis for the machine to continue the game and improve its strategy in the process of learning from experience.
Formally, learning consists of adjusting the parameters (coefficients) of the evaluation
function based on the analysis of the moves and games performed.
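To make the idea of an evaluation function concrete, the following minimal Python sketch scores each possible move as a linear combination of board features and picks the highest-scoring one. The feature names, values, and coefficients are invented for illustration only and do not reproduce Samuel's actual program.

```python
# Minimal sketch of a linear evaluation function for a board game.
# Features and coefficients are hypothetical illustration values.

def evaluate(features, coefficients):
    # Performance indicator: a linear combination of board features
    return sum(c * f for c, f in zip(coefficients, features))

def choose_move(possible_moves, coefficients):
    # Choose the move whose resulting position has the highest score
    return max(possible_moves, key=lambda m: evaluate(m["features"], coefficients))

# Hypothetical features per move: (piece advantage, kings, mobility)
moves = [
    {"name": "a3-b4", "features": (1, 0, 3)},
    {"name": "c3-d4", "features": (0, 1, 5)},
]
coefficients = [1.0, 2.5, 0.1]  # experimentally selected weights

print(choose_move(moves, coefficients)["name"])
# Learning would then consist of adjusting `coefficients` after analyzing played games.
```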

Programs that allow machines to play business or military games of great practical impor-
tance already exist. Here, it is important to explain a program’s “human” abilities to learn
and adapt. One of the most interesting intellectual problems, which also has great practi-
cal importance, is the problem of pattern recognition. The interest in that problem was
stimulated by the possibility of wide practical use, such as reading machines, AI systems
that make medical diagnoses, and robots that can recognize and analyze complex data
from sensors.

For AI to perform the assigned tasks, it must first be trained on real or similar tasks. Two of
the prevalent methods that are used are machine learning and deep learning. Machine
learning is a collection of mathematical methods for pattern recognition. Machine learn-
ing is divided into three types:

• Supervised learning: This is used on labeled data sets with obvious patterns.
• Unsupervised learning: This is used on unlabeled data sets without obvious patterns.
• Reinforcement learning: This is sequential learning on labeled and unlabeled datasets.

For machine learning, a relatively small amount of training data is sufficient. The learning
process is divided into stages, the results of each of which are combined into a block of
output data. Features must be precisely defined and created by the user. The output is
presented as a number in the form of a score or classification. Developing machine learn-
ing requires the participation of a human expert.
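As a minimal, hedged illustration of supervised learning on a labeled data set, the following Python sketch assumes the scikit-learn library is installed; the tiny data set and labels are invented purely for demonstration.

```python
# Minimal supervised-learning sketch with scikit-learn (assumed installed).
from sklearn.tree import DecisionTreeClassifier

# Labeled training data: each sample has two features and a class label.
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = ["blue", "blue", "red", "red"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # learn patterns from the labeled examples

print(model.predict([[1, 0.2]]))   # classify a new, unseen sample
```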

In the deep learning (or “neurobionic”) approach, we work with artificial neural networks.
The idea of copying neuron-like structures was an important step in AI development. The first models of formal neurons were proposed by McCulloch and Pitts (Chandra, 2018). In
fact, these elements implemented the threshold function because the signal at the output
of the element arose only when the weighted sum of the allowed input signals exceeded
the weighted sum of the prohibiting input signals by more than the value determined by
the threshold value of the element. By varying the values of the weights and the threshold,
it was possible to achieve the desired triggering of the formal neuron. Further intercon-
nected by neural networks, such neurons seemed to be a very powerful way to implement
various procedures. One of the most famous neurobionic concepts is the perceptron: a
mathematical representation of a neuron used in machine learning.

Figure 1: Mathematical Model of a Perceptron

Source: Vladyslava Volyanska (2022).

This mathematical model of a neural network consists of a single neuron that performs
two consecutive operations. First, it calculates the sum of the input signals weighted by the strength of their connections (conductivity or resistance).

$\text{sum} = X^{T}W + b = \sum_{i=1}^{n} x_i w_i + b$

Then, it applies an activation function to the weighted sum of the input signals.

$\text{out} = \varphi(\text{sum})$

Almost any differentiable function can be used as an activation function. The most used
activation functions are the linear function, SoftMax function, sigmoid function, hyper-
bolic tangent function, rectified linear unit (ReLU), and leaky ReLU.
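A minimal sketch of this single-neuron computation in Python (using NumPy) is shown below; the inputs, weights, and bias are arbitrary illustration values, and the sigmoid is used as the activation function.

```python
import numpy as np

def sigmoid(s):
    # Sigmoid activation: squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-s))

def perceptron(x, w, b, activation=sigmoid):
    s = np.dot(x, w) + b      # weighted sum of the inputs plus bias: sum = x.w + b
    return activation(s)      # out = phi(sum)

x = np.array([0.5, -1.0, 2.0])   # input signals (illustration values)
w = np.array([0.4, 0.6, -0.2])   # connection weights (illustration values)
b = 0.1                          # bias term

print(perceptron(x, w, b))       # single output in the range (0, 1)
```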

In practice, artificial neural networks consist of many neurons and have three types of layers: the input layer for collecting data; the hidden layers, where the interactions between neurons take place (i.e., the learning process); and the output layer for conclusions and analysis results. During data processing, each layer takes data from the previous one. A neural network works like a human brain, whereby each neuron carries out its own calculations, and the network multiplies this potential. The neural networks used in artificial intelligence have connections between neurons that can be adjusted, mirroring human learning. There are dozens of types of neural networks that differ in architecture, features of operation, and areas of application. The most common are as follows:

• Feed forward neural networks (FFNN): A rectilinear type of neural network, in which
the neighboring nodes of the layer are not connected, and the information is transfer-
red only from the input layer to the output layer. FFNNs have little functionality, so they
are often used in combination with other types of networks.
• Convolutional neural networks (CNN): These consist of five types of layers: the input, convolutional, pooling, fully connected, and output layers. Each layer performs a specific task.
Convolutional neural networks are used for image classification, object recognition,
natural language processing, and other tasks.
• Recurrent neural networks (RNN): These networks use a directed communication
sequence between nodes. In RNN, the result of each step is used as input for the next
step. RNN is used for text generation, language modeling, speech recognition, machine
translation, and other tasks.

These different networks each have their own strengths. There are several basic tasks where
neural networks are used, as shown in the following list:

• Classification: This involves the choice of a specific object from a predefined set such
as recognizing a square amongst triangles. This technology is used for recognition of
faces, emotions, types of objects, and pattern recognition.
• Regression: This can generate a specific number as a result of processing. This technology is used to estimate the age of a subject in a photograph, to forecast exchange rates, or to estimate property values.
• Time series forecasting: This can make long-term forecasts based on a dynamic series
of values. This technology is used to predict prices, physical phenomena, consumption
volume, and other indicators. Even the Tesla autopilot can be attributed to the process
of time series forecasting.
• Clustering: This can study and sort a large amount of data if the number of output
classes is unknown by combining data by features. This technology is used to identify
image classes and customer segmentation.
• Generation: This can automate content creation or transformation. This technology is
used to create unique texts, audio files, and videos, to colorize black and white films, and to change the environment in a photo. For example, the ruDALL-E neural network can generate unique images based on a text description (Dimitrov, 2021).

The use of artificial intelligence provides several advantages, such as the following:

• Exclusion of the human factor: The use of programmable, self-learning algorithms
eliminates the factor of human error and makes it possible to find solutions that are not
obvious to humans.
• Risk reduction: AI machines can be used in situations where there is a risk to humans.
For example, AI-powered robots can replace humans in certain production areas or
when working in natural disaster areas.
• Availability: Intelligent machines can be used without breaks and days off.
• Adaptability: The use of AI allows us to find quick solutions. For example, AI in chatbots
helps to understand customers better, find answers to complex questions, and cope
with a large stream of simultaneous calls and questions.
• Fast decision making: AI-powered applications, machines, devices, and other tools
make decisions faster than humans, which is used in manufacturing processes, data
analytics, predictive modeling, and calculations.

Despite the clear advantages, there are several factors slowing down the implementation
of artificial intelligence. Data sets for supervised training of neural networks need to be labeled manually, which takes a lot of time. Furthermore, to train the models, a large
amount of data is needed; the data must first be collected from different sources, be struc-
tured and brought to a common format. The results obtained from AI algorithms are diffi-
cult to interpret and understand from the point of view of decision-making. The models
are focused on solving certain problems. For example, if an AI algorithm is used to detect a
specific type of fraud, it will not recognize other types of fraud; each task and each condi-
tion requires its own model. If the training dataset is misrepresented or insufficient, the
results of the AI system may be faulty. For example, if only red objects are used in the
training sample, the errors are likely to increase when a blue object appears during self-
learning. To work with AI, it is important to have competence to assess risks and make
decisions at each stage of the implementation of algorithms.

Artificial intelligence is one of the fastest developing areas. However, even today the most sophisticated AI models represent only "narrow" AI, the most basic of the three types of AI. The three types are as follows:

• Artificial Narrow Intelligence (ANI): This is also known as “weak” AI and already exists
today. Although the tasks that weak AI can perform use complicated algorithms and neural networks, they nevertheless remain singular and goal-oriented. Facial recogni-
tion, web searches, and self-driving cars are all examples of narrow-purpose AI. It is
categorized as weak, not because it lacks scale and power, but because it is still far from
having the components that we attribute to true intelligence.
• Artificial General Intelligence (AGI): This is also known as “strong” AI and must man-
age any intellectual task that a human can do. Like narrow-purpose AI systems, general-
purpose systems can learn from experience, identify and predict patterns, but they are
able to extrapolate this knowledge to a wide range of problems and situations that can-
not be solved using previously obtained data or existing algorithms. The Summit super-
computer is considered to be a precursor of general-purpose AI (Rosenfield, 2018).
• Artificial Super Intelligence (ASI): These types of systems are theoretically fully self-
aware. They do not just imitate or understand human behavior but comprehend it at a
fundamental level.

A large area of AI systems is robotics. What is the main difference between the intelligence of a robot and the intelligence of a computer? The robot's intellectual components
serve primarily to provide its purposeful movements. Robotics is not a recent invention; it
has been used for many years, especially in manufacturing. However, without the use of
AI, automation must be done with manual programming and calibration. The human
operator often has no way of knowing what caused a problem and what changes can be
made to improve efficiency and productivity. When artificial intelligence is added to the system (usually with the help of internet of things (IoT) sensors), it allows us to significantly expand the scope, volume, and type of tasks performed by robots. Examples of applications of robotics in industry are order robots in large warehouses and agricultural robots that can be programmed to harvest or process crops.

Internet of things: The term for devices with sensors and network capability to exchange data with other devices via the internet.

AI systems undoubtedly speed up progress, but they also create a threat to fundamental rights. For example, on social media platforms, content-moderating algorithms may unfairly restrict freedom of information and influence public debate. Technol-
ogies used for mass biometric surveillance can violate our right to privacy. Algorithms
operate based on huge amounts of personal data; their collection, processing, and storage
can violate our right to protection of personal data. Algorithm errors and bias in results
can perpetuate inequalities that already exist in our societies and lead to further discrimi-
nation. An example of bias can be found in the employment algorithms that tend to favor
men over women. This is because the training data fed to the algorithms compounds
existing bias and suggests that the “best candidates” are more frequently males.

These challenges are reinforced by the fact that artificial intelligence is dealing with com-
plex problems. Research carried out at the University of California distinguished three types of opacity in AI systems: those that are intentionally unclear because companies or
states want to keep them secret; those that result from technical illiteracy, which are sim-
ply too complicated to be understood by the public; and those that result from the com-
plex characteristics of the learning process of algorithms and technology (Burrell, 2016).
The benefits of using new technologies are enormous, however, there is also a risk. There-
fore, work is underway to define rules and regulations both at the national and suprana-
tional levels.

2.4 Augmented and Virtual Reality


Augmented reality (AR) is one of the various forms of technology that enables human-
computer interaction. The basic principle is that it uses a computer program to visually
combine two independent worlds: the world of physical objects around us and the virtual
world. A new virtual environment is created by positioning programmed virtual objects on
the video signal from a camera and by using special markers and adding interactivity to it
(via an optical tracking system). These markers are the basis of AR technology. The camera
is able to recognize the markers and translate this into a virtual environment. We can iden-
tify three major categories in the development of this technology as follows:

• Markerless technology: The special recognition algorithms superimpose the image
from the camera on the surrounding landscape using a special grid. The reference
points on this grid are used to determine the exact position to which the virtual model
will be placed. The advantage is that real-world objects already exist, and there is no
need to create special visual identifiers for them.
• Marker technology: This technology is based on special markers. They are easier for
the camera to recognize and give it a more rigid reference to the position of the virtual
model. This technology tends to be more reliable than the markerless variety.
• Spatial technology: This technology is based on the spatial location of an object. The
data from positioning tools such as GPS or a compass built into the device are used. The
location of a virtual object is determined by using its three-dimensional coordinates.
The AR program is activated when the coordinates and the program’s data match.

To work with AR technology, a few different pieces of equipment are required. Firstly, a graphic station where the program will be processed, such as a smartphone or a laptop; this must be connected to a display. The system also requires a camera to capture a replica of the real world, onto which special software superimposes virtual objects. Another requirement is marker software that allows the camera to recognize the markers, determine which model is programmatically tied to each one, and overlay the marker with the model in such a way that the virtual two-dimensional (2D) or three-dimensional (3D) object reproduces any movement of the real marker.

AR technology is software that connects the camera, markers, and a computer into a unified interactive system. The system must determine the 3D position of the real object captured by the camera. The process of recognition takes place in stages. The markers are recognized in each image from the camera by searching for the predefined patterns. Then, the video from the camera is transmitted in 2D format and a 2D contour is created. A virtual 3D model is built and tied to the 2D coordinate system of the camera image.
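As a rough, hedged sketch of the marker-recognition step, the snippet below uses OpenCV's ArUco module (available via the opencv-contrib-python package); the exact function names differ between OpenCV versions (the pre-4.7 API is assumed here), and the image file name is a placeholder.

```python
# Sketch of marker detection with OpenCV's ArUco module (pre-4.7 API assumed).
import cv2

frame = cv2.imread("camera_frame.jpg")               # placeholder camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # The 2D corner coordinates of each marker are the reference points onto
    # which a virtual 2D or 3D model would be overlaid.
    print("Detected marker IDs:", ids.ravel())
```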

AR is often confused with virtual reality (VR). VR is a computer-generated world where all objects are created in software, whereas AR places virtual objects in the real world. To access VR, different devices such as helmets, gloves, and headphones are used to block out input from the physical world and replace it with a digital version. The real world is completely replaced by a virtual environment. AR devices, however, only provide visual and audio information on top of the physical world. Examples of such devices are headsets like Google Glass or mobile phone cameras. An immersive headset is used to engage in VR. A VR device can provide images, sounds, and tactile sensations to the user.

Virtual and augmented reality are strongly utilized for professional purposes. In architecture and design, they are used to present projects, and in medicine they are used as teaching and verification tools. Many solutions for educational purposes in the field of virtual and
augmented reality are already on offer. In the coming years, these technologies will be
used in video games, real-life events, healthcare, education, and for military purposes. VR
and AR projects will become more interesting, useful, and complex. In industry, VR and AR
will assist the quality control process as well as the production of finished products.

2.5 Computer-Assisted Translation
The term "computer-assisted translation" (CAT) is often used interchangeably with "machine translation"; however, there is a difference. Machine translation is a translation carried out by a computer program. Although the development of machine translation has recently experienced rapid growth (mostly due to the introduction of artificial intelligence technologies), the quality of machine translation cannot be compared to a professional translation made by a competent specialist. Computer-assisted translation is a translation performed not by a machine, but by a person using special translation software. Such software is based on translation memory technologies.

The working principle of CAT software is as follows: During translation, pairs of source text
and output text are created and saved in a database, then the databases are used to trans-
late more documents in the future. This process benefits from a large assortment of docu-
ments. To facilitate the processing of a document, the translation memory system sepa-
rates the text into manageable pieces. The pieces are often sentences, but segmentation
rules can also produce smaller or larger segments. When translating a new text, the sys-
tem provides the comparison of the text segments with the database. If a full or partial
match is found, the system displays the translation and indicates the quality of the match
with a percentage. The discrepancies in words and phrases from the stored text are high-
lighted, which serve as hints to reduce the time required for editing the translation. Nor-
mally, the threshold of coincidences is set at a level not lower than 75 percent. A lower
match rate begins to have negative effects on the cost of editing the text. By using CAT, the
professional must only spend time working on the new segments and edit the partial
matches. Each change is saved in the database to improve it for the future. This technol-
ogy helps to reduce the cost and time spent on translation of repetitive text fragments. In
addition, this helps to keep the uniformity of the translation of terminology throughout
the document and across all users of the same database. This is important in written
technical, legal, and economic translations, which use many specific terms.
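A simplified sketch of translation memory matching is shown below, using the SequenceMatcher class from Python's standard difflib module; the stored segment pairs and the 75 percent threshold are illustrative only, and a real CAT tool would keep its memory in a database.

```python
# Minimal translation-memory lookup using difflib (standard library).
from difflib import SequenceMatcher

translation_memory = {
    "Press the power button to start the device.":
        "Druecken Sie die Einschalttaste, um das Geraet zu starten.",
    "Do not expose the device to moisture.":
        "Setzen Sie das Geraet keiner Feuchtigkeit aus.",
}

def best_match(segment, memory, threshold=0.75):
    # Compare the new segment with every stored source segment and return
    # the best match above the threshold, together with its match quality.
    best = max(memory, key=lambda src: SequenceMatcher(None, segment, src).ratio())
    score = SequenceMatcher(None, segment, best).ratio()
    return (best, memory[best], round(score * 100)) if score >= threshold else None

print(best_match("Press the power button to stop the device.", translation_memory))
```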

CAT systems are also divided into rule-based and example-based systems. In rule-based systems, the language grammar is established in greater depth and there are more language rules. The example-based systems can be considered as the "self-learning" type. The dissimilarity between example-based and rule-based systems is not always clear, as both use dictionaries and rules for working with dictionaries; however, rule-based systems have more input from human experts.

2.6 Holographic Imaging and 3D Printing


Holography is a method that allows users to record information about an object and
restore its image in 3D form. This is achieved by recording not only the amplitude of light (as in standard photography), but also the phase of light, which makes it possible to observe the image reconstructed from the hologram at different angles. To record a hologram, the total amplitude of two light beams is measured: the object beam (reflected from the object or passed through it) and the reference beam. If they are coherent with each other (meaning they have a constant difference in phase), an interference pattern is formed in the beam superposition plane, which is recorded by digital photosensors or photosensitive media. A real 3D visualization of objects and scenes can be created only by using digital holography. It does not require special glasses to observe scenes or special positioning of the observer.

Phase: A mathematical expression of the position of a point in time on a waveform cycle.

Based on this principle, 3D displays are being actively developed. One of the current
examples of emerging technology is in fifth generation (5G) communication using holo-
graphic principles to create an image of the conversation partner. Another promising use of holography is 3D printing using holograms. The hologram is divided into sectional projections and then, under software control, a fast, layer-by-layer printing of each projection is carried out. Digital holography is widely used in scientific and applied research, for
example, holographic microscopy (the visualization of micro- and nano-objects) and holo-
graphic interferometry (the dynamic registration of changes in object parameters, such as
temperature, shape, and refractive index). Digital holography is already widely used in
medical and biological imaging, in coding, and in transmission and storage systems. It
also improves the security of products, bank notes, and bank cards.

A prospective use of holographic memory is "fourth generation optical data storage", a potential replacement for high-capacity data storage, which currently relies mostly on magnetic and optical media. Data is recorded on one or two layers using
separate pits. In holographic memory, data can be written across the entire memory space
using different laser tilt angles.

SUMMARY
Some emerging technologies have already settled deep into our everyday lives. They have spread widely in the field of communication; one example is NFC technology, which allows effective wireless data exchange.
The most common example of this technology is contactless payment
systems, supporting almost all existing card standards.

Since its first presentation in 2003, UHDTV has become a standard. It is difficult to imagine a television package offered by different provid-
ers without at least a couple of channels in high definition format. Exam-
ples for such channels in Germany are SAT1 UHD, Kabel Eins UHD, Pro
Sieben Maxx UHD, and others.

As AI used in daily life becomes better at imitating us, it seems more and more human. The processing speed and analytical capabilities of modern AI-driven computers might have seemed incredible to Alan Turing; nevertheless, he would perhaps have understood the ethical dilemmas associated with them. As we generate more and more personal
data through digital channels, we must trust the AI applications that
power our daily activities. As governments and companies gain more
access to personal information, it is important to establish regulations to
protect privacy and minimize risks.

There is no need to buy expensive goggles and helmets to try augmen-
ted reality; an ordinary smartphone is enough. Strategy Analytics has
released a report stating that 4 billion people used smartphones in 2021
(Mawston, 2021). This means that half of the Earth’s population can
already download applications with AR technology. People like to try
new things and gain unusual experiences, which is why new technologies will flood our lives.

UNIT 3
COMMUNICATIONS TECHNOLOGY

STUDY GOALS

On completion of this unit, you will be able to ...

– describe different types of computer networks.


– identify the differences between network protocols.
– understand the different types of internet connection.
– identify how to secure a network.
3. COMMUNICATIONS TECHNOLOGY

Introduction
The history of any science or technology allows us to better understand the essence of the
main achievements in that industry, as well as to identify trends and correctly predict its
future development. Data transmission networks were developed thanks to the evolution
of two scientific and technical branches: computer and telecommunication technologies.
To understand the structure and processes of a computer network, we should turn back to
the origin: the idea of connected computing.

A computer network is a set of computers connected by a communication system and
equipped with appropriate software that provides network users with access to the
resources of this set of computers. This network can include computers of various types,
such as small microprocessors, workstations, mini-computers, personal computers, or
supercomputers. The transmission between any pair of network computers is provided by
a communication system, which includes cables, repeaters, switches, routers, and other
devices. A computer network allows the user to work with their computer as if it were
autonomous, and adds to this the ability to access the information and hardware resour-
ces of other computers in the network.

The origin of connected computers, distributed operating systems (OS), and later cloud
computing, can be found in software monitors. This was the first type of software created
not to process data, but to control computing processes. The system receives instructions in a formalized control language, which describes the actions and the order in which they should be performed. A typical set of directives included a sign of the beginning of a sepa-
rate transaction, a translator call, a loader call, as well as indicators of the beginning and
end of the source data.

With increased computer capabilities, the execution of only one program at a time turned out to be inefficient, leading to the development of multiprogramming. Multiprogramming was implemented in two versions: batch processing and time-division processing. Batch processing systems were intended to solve tasks of a computational nature that did not require fast results. The main goal and criterion of effectiveness is maximum "throughput", that is, the solution of the maximum number of tasks per unit of time. Time-sharing systems have less throughput than batch processing systems, since every task started by the user is accepted for execution, and not necessarily the one that would be beneficial from the perspective of the system. In addition, system performance is degraded due to the additional processing power required to switch the processor from task to task more frequently.

Multiprogramming: A method of organizing a computational process in which several programs are running simultaneously in the computer's memory.

The real prototypes of networking were multi-terminal systems. Multi-terminal modes are
used in time-sharing systems and in batch processing systems. Multi-terminal centralized
systems already had the external characteristics of local area networks. At the end of the
1960s, the interaction of mainframe class machines was realized, signalling the first practi-
cal results in connecting computers in a network using global communications and packet
switching technology. The advanced research projects agency network (ARPANET) was the starting point for the creation of the most famous global network today: the internet. The
ARPANET connected computers of different types, with different operating systems imple-
menting communication protocols that were common to all computers on the network.
Such systems, unlike multi-terminal ones, made it possible not only to disperse users, but
also to organize distributed storage and processing of data between several connected
computers. Wide area networks (WAN) appeared first. Many of the main ideas and con-
cepts of modern computer networks were first proposed and worked out during the con-
struction of global networks, for example, the multilevel construction of communication
protocols, packet switching technology, and packet routing in composite networks. In the
1980s, the situation in local networks began to change. Standard technologies for con-
necting computers to a network were established, such as ethernet, arcnet, token ring,
token bus, and fiber distributed data interface (FDDI). All standard local area network
(LAN) technologies are based on the same switching principle, namely the principle of
packet switching.

3.1 Network Hardware, Servers, and Clouds
In the process of creating computer networks, developers had to solve many problems
related to coding and the synchronization of signals; choosing the configuration of physi-
cal and logical connections; developing device addressing schemes; creating various
switching methods; multiplexing and demultiplexing data streams; and sharing the trans-
mission medium. The simplest connection method is a direct connection of two devices
by a physical channel, known as “point-to-point connection”. A special case of point-to-
point connection is the connection of a computer with a peripheral device. For data
exchange, the computer and the peripheral device are equipped with external interfaces
or “ports”. From the computer side, the logic is controlled by a peripheral device control-
ler, which is a hardware unit (often implemented as a separate board) and device driver
that controls the peripheral device. From the perspective of the peripheral device, the
interface is often implemented by the hardware control device, although there are also
software-controlled peripheral devices. Communication between the peripheral devices
and the computer is usually bi-directional. Even a printer, being strictly an output device,
returns data about its state to the computer. The important basic concept for device connection is the client-server concept.

Client-server concept: The client is a special software module that generates requests to a remote device and receives results. The server is another software module that constantly waits for requests.

Considering the point-to-point connection of two machines, we can find a series of problems inherent to any network, such as issues with the physical transmission of signals, for example, data coding and modulation; mutual synchronization of the transmitter on one side with the receiver on the other; and the calculation of a checksum. By transmitting signals, it is necessary to solve the problem of mutual synchronization of the transmitter
and receiver. The modules within a computer receive their timing from a common clock generator. For communication in the network, the problem of synchronization can be solved in different ways, for example, by exchanging special clock pulses over a separate line, or by periodic synchronization with preset codes or pulses with a different shape to the data pulses. An appropriate data exchange rate for communication lines with given characteristics is chosen for synchronizing the receiver and transmitter. For more reliable data transmission, a checksum is calculated and transmitted over the communication lines after a certain number of bytes.
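A minimal sketch of this checksum idea, using the common CRC-32 algorithm from Python's standard zlib module, is given below; the payload is an arbitrary example.

```python
import zlib

payload = b"example data block sent over the communication line"
checksum = zlib.crc32(payload)        # computed and transmitted by the sender

# The receiver recomputes the checksum over the received bytes and compares
# it with the transmitted value; a mismatch indicates a transmission error.
received = payload
assert zlib.crc32(received) == checksum, "transmission error detected"
```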

Each network interface, a router port, hub, or switch, has built-in tools that solve the prob-
lem of reliable exchange of binary signals. Some network devices, such as modems and
network adapters, specialize in the physical transmission of data. Modems, for example,
perform modulation and demodulation of discrete signals in global networks, synchronize
the transmission of electromagnetic signals over communication lines, and check the cor-
rectness of each transmission. Network adapters are designed to work with a specific
transmission medium, such as a coaxial cable, twisted pair, or fiber optics. Each type of
transmission medium has certain electrical characteristics that affect the way the medium
is used, and that determine the speed of signal transmission, the way of coding, and other
variables.

Computer networks can be classified in different ways, depending on how they access resources and on their area of operation. Regarding how resources are accessed, we distinguish two types of networks: client-server networks (a network with one central server) and peer-to-peer networks (a network where all devices serve an equal purpose). For example, many internet services, like web pages, are located on a server and are available for download from there (client-server); a minimal code sketch of this client-server exchange follows the list below. A Windows workgroup can be an example of the peer-to-peer type because each computer is both a client and a server for others at the same time. Depending on the operational area, the following networks are distinguished:

• Local area network (LAN): a network operating in a small, limited area
• Metropolitan area network (MAN): a network operating in a larger area (e.g., cities)
• Wide area network (WAN): a network operating over a large area
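The client-server exchange mentioned above can be sketched with Python's standard socket module; this is a minimal illustration only, and the host, port, and message contents are arbitrary example values.

```python
import socket

def run_server(host="127.0.0.1", port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()                   # the server constantly waits for requests
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)  # receive the client's request
            conn.sendall(b"result for " + request)

def run_client(host="127.0.0.1", port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"request")        # the client generates a request
        print(cli.recv(1024))          # and receives the result
```

In practice, the two functions would run on different machines (or at least in different processes), which is exactly the separation of roles that distinguishes a client-server network from a peer-to-peer one.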

An example of a LAN can be a corporate network or school network. Such networks are
mostly built with the use of ethernet technology. One of the most important communica-
tion protocols used in a LAN is the internet protocol (IP). In IP-based networks, data is sent
in the form of blocks referred to as "packets". When transmitting with IP, no virtual session between two devices is set up before the transmission starts. The IP does not guarantee
that the packets will reach the addressee or that they will not be fragmented or duplica-
ted. In addition, the data can reach recipients in a different order than that in which they
were transmitted. The reliability of the transmission data is provided by upper layer proto-
cols, such as the transmission control protocol (TCP). Examples of WANs are networks belonging to large companies, linking branches throughout a country and beyond. These
kinds of networks use long-distance transmission technologies such as frame relay and
digital subscriber line (DSL). Sometimes, the literature classifies just LAN and WAN, and
MAN are not distinguished as a separate type. This is due to the fact that MAN and WAN
predominantly use the same transmission technologies. Recently, terms such as “GAN”
(global area network), “PAN” (personal area network), “BAN” (body area network),
“LPWAN” (low power wide area network), or “CAN” (campus area network), are also
becoming more common.

Virtual private networks (VPN) are logical networks built on top of another network, typi-
cally the internet. Even though communication goes over public networks, often using
insecure protocols, a layer of encryption creates closed exchange channels. A VPN allows
users to combine several locations into a single network using uncontrolled channels for
communication between them (Makhmetov, n.d.). A VPN consists of two parts: an internal
network and an external network. A remote user is connected to the VPN through an
access server that is connected to both the internal and external networks. There are
implementations of VPNs under the internet protocol suite (TCP/IP), internetwork packet exchange (IPX), and AppleTalk. The trend is towards a general transition to TCP/IP, and most VPN solutions support it. For security purposes, multi-protocol label switching (MPLS) and the layer 2 tunnelling protocol (L2TP) are used. We can say that these pro-
tocols shift the task of providing security to other protocols, for example, L2TP is typically
used with internet protocol security (Makhmetov, n.d.).

Mobile networks are based on cellular communication technology, a type of mobile radio communication. The crucial feature of this technology is that the area it covers is divided into cells with base stations. The cells must partially overlap to form a network. The principal components of a cellular network are cell phones and base stations. When the cell phone is turned on, it searches for a signal from a base station. The station receives a unique identification from the phone. The phone and the station remain in contact all the time, periodically exchanging packets. The phone can communicate with the station using analog protocols (such as AMPS, NAMPS, and NMT-450) or digital ones (such as DAMPS, CDMA, GSM, and UMTS). If the phone leaves the range of the base station, it establishes communication with another one.

Networks are created to exchange data between end devices. This process is ensured by
appropriate hardware and software. The basic devices are

• network interface cards (NICs);
• repeaters, hubs, and switches;
• access points;
• gateways and bridges; and
• routers.

There are different types of network components: end devices, passive, and active compo-
nents. Cables and distribution panels belong to the most important passive components,
whereas repeaters, hubs, switches, servers, and distribution boxes are considered as
active components. The end devices include user devices, such as computers, tablets,
smartphones, as well as servers.

A server, in the conventional meaning, is a computer that enables other services, pro-
grams, or devices (called clients) to function. Its purpose is to share data and resources.
Hypothetically, any computer can be a server, and this makes sense in small local net-
works. In practice, servers are specialized devices with highly-specialized software. We
define it as a computer permanently connected to the network. It is located in a special IT cabinet known as a "rack" cabinet. Professional servers that operate continuously are
equipped with uninterruptible power supplies, ventilation, many hard drives, and exten-
sive random access memory (RAM). They also have numerous security systems that protect users from data loss. Servers may also be implemented as software (i.e., a virtual server). There are different types of servers, which are described in the following paragraphs.

File servers host file resources and computer disk space, making them available on a given computer network. It is typical for companies and organizations to have common files on which they are working; to conveniently exchange these files, have full access to them, and edit them together, they are stored on a file server. Web servers are mainly used to publish documents on the internet, such as websites, forums, and blogs. These servers store websites so that when the user accesses a website by entering its address, they receive the requested data.

Proxy servers, also known as “intermediary servers”, mediate the exchange of information
between the computer user and the target server belonging, for example, to a hosting
company. The main feature of the proxy server is that it requests access to network resources on the user's behalf and does not reveal the user's identity. External proxy servers are very often used by people who want to be anonymous on the internet. This is because most proxy servers allow us to hide our IP address; in the logs of the servers we connect to, they leave their own address instead. Proxy servers can differ in their level of anonymity.
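A minimal sketch of routing traffic through a proxy with the third-party requests library (assumed installed) is shown below; the proxy address and target URL are placeholders.

```python
import requests

proxies = {
    "http": "http://proxy.example.com:8080",    # placeholder proxy address
    "https": "http://proxy.example.com:8080",
}

# The proxy requests the resource on the client's behalf, so the target server
# sees the proxy's address rather than the user's own IP address.
response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```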

Virtual servers, in contrast to dedicated servers, will share both hardware and software
resources with other operating systems. Several virtual servers can be implemented on a
single server, instead of implementing multiple dedicated servers. This solution is cheaper
and provides faster resource control. Cloud servers allow to run programs remotely in a
virtual environment. More and more software is offered as a service (i.e., software as a
service (SaaS)), and these services operate on the principle of the cloud. This allows to
work on one program from many devices. It also reduces the risk of data loss. Platforms
based on cloud solutions provide the opportunity to adjust costs to real needs. The server
in the cloud is scalable. An important advantage of using this type of server is a high level
of protection against data loss, however, cloud services are also often targeted by hackers.

An application server includes both the software and the hardware, which provide the
environment for the programs to run. Application servers are used for running internet
applications; distribution and monitoring of software updates; hosting a hypervisor that
manages virtual machines; and processing of data sent from another server. The most
important hardware that ensures performance are the central processing unit (CPU) and
RAM. On the software side, the operating system (OS) is the most important because it
determines what software the server can run.

Some servers have a more precise purpose. Mail servers are set up to send and receive
emails. Such a server can be a separate, independent device, however, this function is
mostly assigned to the virtual World Wide Web (WWW) or file transfer protocol (FTP) server.
FTP servers ensure the possibility of exchanging files between users using the FTP com-
munication protocol. Both servers (WWW and FTP) are typically virtual servers, offered as
an internet service from a specific provider. Print servers allow multiple users to use a sin-
gle printer. Additionally, they offer the ability to manage print jobs (known as "queuing").
Database servers are used to store large databases. It is possible not only to store databases, but also to work on them thanks to the availability of various types of programs, such as MySQL or PostgreSQL. Game servers allow many people to play video games at the same time.

All end devices connect to the network with the help of a network interface card (NIC). The task of an NIC is to convert data frames into signals that are sent over a network. The ethernet standard of NIC is the most common. The NIC has a unique physical media access control (MAC) address, assigned to it during production and stored in read-only memory (ROM). Cards can work at different speeds. The card is equipped with interfaces for connecting to the network via different types of transmission media, for example, twisted pair, optical fiber, or coaxial cable. Usually, it is mounted in the computer in a PCI (peripheral component interconnect) or USB (universal serial bus) slot. A wireless network interface controller (WNIC) is a type of NIC that connects to wireless networks, such as Wi-Fi or bluetooth.
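As a small illustration, Python's standard uuid module can report the machine's MAC address as a 48-bit integer; note that uuid.getnode() may fall back to a random value if the hardware address cannot be determined.

```python
import uuid

mac = uuid.getnode()   # 48-bit integer (may be a random fallback value)
# Format the integer as the familiar six colon-separated hexadecimal bytes.
print(":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8)))
```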

Repeaters are used in places where signal amplification or regeneration is required to
increase the range of the network. Typically, the repeater function is performed by a net-
work device with its own power supply, for example, a concentrator. Wi-Fi repeaters are
currently the most commonly used ones. They allow users to improve the range of the
wireless network by receiving the signal from the router, strengthening it, and then for-
warding it. The most important characteristics are transmission speed, supported Wi-Fi
standards, and supported bands.

A hub is a device that allows many network devices to be connected to a computer net-
work with a star topology. They are typically found in four, eight, 16, or 24 port versions.
Hubs can be passive or active. The passive hub acts only as a junction box, distributing the
signal received from one port to the other ports. Active hubs also amplify the signals. Cur-
rently, these devices are no longer used in practice and have been replaced by switches.

Switches can complete the same actions as the concentrator and also enable users to divide the network into segments (like a bridge). A switch has many ports that allow different devices to be connected. The ports in the switch can function at the same speed (symmetrical switches) or at differing speeds (asymmetrical switches). Switches can also be equipped with network management and monitoring functions.

Bridges are devices with two ports used to connect network segments. They are equipped
with the memory to remember the MAC addresses of devices connected to particular
ports. The bridge checks the destination address after receiving a data frame and works out to which segment the frame should be sent. When a computer from one segment sends a
message, the bridge analyzes the MAC addresses contained in it and decides whether to
send the signal to the second segment or block it. No unnecessary frames are sent over
the network, which increases efficiency. Access points are devices that provide wireless
access to network resources via wireless transmission media. They perform a function
similar to a bridge that connects the wireless network to the wired network. The access
point can be combined with the router into one device.

Gateways are devices allowing computers on the local network to connect to and share information with devices on other networks. In a TCP/IP network, the typical gateway is the router. The computers on the local network send packets addressed to another network to the gateway. Gateways allow communication between networks with different protocols. Routers are devices used to connect networks, for example, to join a LAN to the internet. A router is a customizable device, allowing users to control network bandwidth and ensuring security.

In the field of IT, the concept of the cloud has been present for a long time. To understand
the concept of the cloud, we can imagine a whole network of servers that are connected to
each other. If the user has access to such a network (via connecting to the internet), they
can place any data in it. It can be assumed that the cloud is a specific type of mass mem-
ory but located in the space of the internet, in a virtual area created by a network of con-
nected servers. The aim of cloud computing is to provide simplified, flexible access to
resources and IT services, regardless of whether it is a private or public cloud. Cloud computing combines many advantages of traditional, local computer systems and remote access to high-power units, including scalability, flexibility, and ease of service delivery, as well as access control, security, and adaptation of local infrastructure resources.

Essentially, cloud computing is the use of services and storage to remotely store, manage,
and process data. Cloud computing can be divided into the following three types:

• Infrastructure as a service (IaaS): This is the most basic form of cloud computing serv-
ice. With IaaS, the entire IT infrastructure, including servers, storage, networks, operat-
ing systems, and virtual machines (VM), can be rented from a cloud provider.
• Platform as a service (PaaS): This type of system operation relates to cloud computing
services that provide environments on-demand. The PaaS cloud computing system is
made to help developers easily and quickly create web applications without the need to
spend time configuring the entire infrastructure of devices, hard drives, servers, as well
as network software and databases.
• Software as a service (SaaS): This is a method of delivering programs and applications
on demand via the internet; often via a subscription model. Using SaaS cloud services,
providers host and manage the application and infrastructure. They also take care of its
maintenance, software updates, and security patches. Users only connect to the appli-
cation, usually via a web browser.

There is an additional, fourth form of cloud services which is gaining popularity called
“serverless technologies” or “serverless computing”. This is an overlay for PaaS services.

3.2 Network Protocols


Network protocols are a set of rules that allow computer network communication. Net-
working models are used to visualize the interaction between different protocols. There
are two basic types of networking models: reference models and protocol models. The
most popular reference model is the open systems interconnection (OSI) model. This
model has seven layers that work together. The following image shows a comparison of the OSI and the TCP/IP model. In the OSI model, the transmission is performed down the subsequent layers on the source device and then up the layers on the target device. The process of moving data between each of the protocol layers is known as encapsulation. The most popular protocol model is the TCP/IP model. The TCP/IP model is organized in layers, similar to the OSI model, but there are fewer layers and it better reflects the protocol environment of the internet.

Figure 2: Comparison of the OSI and the TCP/IP model

Source: Vladyslava Volyanska (2022).
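
The encapsulation mentioned above can be pictured with a small sketch in Python. The header fields, addresses, and payload below are purely illustrative; real headers are binary structures, but the nesting principle is the same: each layer wraps the data handed down from the layer above.

application_data = b"GET /index.html"                         # application layer data

# Each lower layer adds its own header around the payload from the layer above.
tcp_segment = {"src_port": 49152, "dst_port": 80, "payload": application_data}
ip_packet = {"src_ip": "192.168.1.10", "dst_ip": "203.0.113.5", "payload": tcp_segment}
frame = {"src_mac": "AA:BB:CC:00:11:22", "dst_mac": "AA:BB:CC:33:44:55", "payload": ip_packet}

# The receiving host de-encapsulates in reverse order and recovers the original data.
print(frame["payload"]["payload"]["payload"])                 # b'GET /index.html'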

In the network layer of the TCP/IP model there are three protocols: one of them provides access in local networks (i.e., ethernet), and two ensure access in wide area networks
(i.e., ATM and frame relay). The network layer of the TCP/IP model is responsible for con-
necting to the physical medium and forwarding of the IP packets to the transmission
medium; mapping IP addresses to hardware addresses; and encapsulating IP packets into
frames. It specifies a connection to a physical network medium depending on the type of
the equipment and network interface. The task of the internet layer is to choose the best
path for packets sent over the network. The basic protocol operating at this layer is the IP
protocol. Here, the best path determination and packet routing occurs. Routing protocols
also work in the network layer of the TCP/IP model, including RIP, IGRP, EIGRP, OSPF, and
BGP.

IP is a protocol for network communication where the client computer sends a request
while the server computer responds to it. This protocol uses special network addresses of
computers known as IP addresses. An IP address is a 32-bit number (for IPv4) written as a sequence of four eight-bit decimal numbers (which can range from 0 to 255), separated by periods. An IP address is divided into two parts: network ID and host ID. There are several
address classes with different lengths for the two components. This addressing method
limits the number of available addresses, which posed a risk given the rapid development of the internet. This problem was overcome with the advent of IPv6, which uses 128 bits. In
order to make it easier to remember addresses, symbolic names have been introduced,
which are translated into numerical addresses by special computers in the network, called
domain name system (DNS) servers. Internet control message protocol (ICMP) is an exten-
sion of the IP. The ICMP is used to generate error messages, send test packets, and diag-
nostic messages related to the IP protocol.
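
A minimal sketch using Python's standard ipaddress and socket modules illustrates the split into network ID and host ID and the role of DNS; the address, prefix length, and domain name are illustrative only.

import ipaddress
import socket

interface = ipaddress.ip_interface("192.168.10.37/24")        # illustrative IPv4 address
print(interface.network)                                      # 192.168.10.0/24 -> network ID
host_id = int(interface.ip) - int(interface.network.network_address)
print(host_id)                                                # 37 -> host ID within this network

# A DNS server translates a symbolic name into a numerical IP address.
print(socket.gethostbyname("example.com"))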

The address resolution protocol (ARP) is a network protocol belonging to the TCP/IP family
but it is not directly related to data transport. It is used to dynamically determine the
physical low-level addresses (MAC addresses) that correspond to the higher-level IP addresses
for a specific computer. This protocol is limited to physical network systems that support
packet broadcasting. Reverse ARP (RARP) is used by a client computer in the local network to request its IP address from the gateway router's ARP table. The network administrator creates a table on the router that is used to generate an IP address from a corresponding
MAC address. A specially configured host in the local network, called a RARP server, will be
responsible for this type of broadcast packet. This server tries to find an entry in the IP-to-
MAC mapping table. If any of the entries in the table match, the RARP server will send a
reply packet to the requesting device with the IP address. Inverse ARP (InARP) uses the
MAC address to find an IP address instead of using IP address to find a MAC address. InARP
is the opposite of ARP. Reverse ARP has been replaced by the bootstrap protocol (BOOTP) and later the dynamic host configuration protocol (DHCP), and is now only used for device configuration.

TCP and IP are two protocols, but they are mostly used as a set. TCP/IP is a set of network protocols used in the internet. The IP allows packets to be sent between networks but does not guarantee that the sent data will reach the addressee. This feature causes the IP to be called "connectionless" because data is sent only one way without confirmation. The
TCP (also called the “connection protocol”) is responsible for the reliability of data trans-
mission. It is TCP which, after receiving each piece of data, sends a confirmation to the
sender that the data has been received. In the absence of confirmation, the data are sent
again. The task of the TCP/IP protocol is to divide the data into packets of the appropriate
size; number them in such a way that the recipient can check that all packets have arrived;
and arrange them in the correct order. Subsequent pieces of information are put into TCP
envelopes, and these in turn are placed in IP envelopes. The receiver’s TCP software col-
lects all sent envelopes by reading the transmitted data. If an envelope is missing, it will
ask the sender to resubmit it. Packets are sent without checking if their transmission is
possible. It is possible that a network node (where the router is located) receives more
packets than the device can process. Every router has a buffer that collects packets of data
waiting to be sent. The new packets that arrive will be discarded and irretrievably lost
when the buffer is full. The protocol that handles the completion of the packets will then
request them to be sent again.
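
A minimal sketch using Python's socket module shows the connection-oriented behaviour described above; the host name and port are illustrative only. The division into segments, the confirmations, and any retransmissions are handled by the operating system's TCP implementation behind this simple interface.

import socket

# Establish a TCP connection; the data sent afterwards is segmented, numbered,
# acknowledged, and, if necessary, retransmitted by TCP itself.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(1024)          # first part of the response, delivered in order
    print(reply.splitlines()[0])     # e.g. b'HTTP/1.1 200 OK'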

The transport layer of the TCP/IP model provides data transfer services from the source
host to the destination host. It establishes a logical connection between the sending and
receiving host. Transport protocols share and aggregate the data sent by applications in
the data stream flowing between endpoints. The transport layer protocols are TCP and
UDP (user datagram protocol). UDP is another protocol supporting the IP. It is a type of
connectionless protocol that is used to send datagrams without confirmation or guarantee of their delivery. The protocols of the application layer are responsible for error pro-
cessing and retransmission. The UDP is designed for applications that do not need the segments to be reassembled in a particular order. It does not send information about the order in which they were created; such information is included in the TCP segment header.
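
By contrast, a minimal sketch of sending a UDP datagram needs only a few lines and no connection setup; the address and port are illustrative only, and no confirmation of delivery is ever received.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)       # connectionless socket
sock.sendto(b"status ping", ("192.0.2.10", 5005))             # fire-and-forget datagram
sock.close()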

The application layer deals with providing services to the user. Protocols of the applica-
tion layer define the standards of communication between applications. The most popu-
lar protocols of the application layer are as follows:

• Dynamic host configuration protocol (DHCP): This is a protocol for dynamic configu-
ration of devices responsible for assigning IP addresses, the default address gateways,
and DNS server addresses.
• HyperText transfer protocol (HTTP): This is an internet protocol used to support web pages. HTTP is the primary protocol allowing communication between web clients and servers. It is an application-level protocol for distributed information systems. HTTP is a generic object-oriented protocol. A characteristic feature of this protocol is the possibility of entering and negotiating data representation, which enables the construction of systems regardless of the type of data transferred. HTTPS (secure) is an encrypted version of HTTP (see the short sketch after this list).
• File transfer protocol (FTP): This is a protocol for transferring files. Usually, an FTP
service is used to transfer data between remote machines. This protocol is based on the
client-server principle. FTP technology provides protection using passwords.
• Secure shell (SSH): This is a network terminal protocol that provides encryption of the
connection.
• Telnet: This is a network terminal protocol that allows remote work with the use of a
text console.
• Transport layer security (TLS): This is a cryptographic protocol providing end-to-end
security for transfer of data between applications over the internet. It was adopted as a
standard extension of the secure sockets layer (SSL) protocol. TLS is recognized in the
form of a padlock icon displayed in web browsers after a secure session is established.
The TLS certificate can also be used for other applications such as email, file transfer,
conferencing, and instant messaging, as well as for internet services, such as DNS and
network time protocol (NTP).
• Simple mail transfer protocol (SMTP): This is the basic protocol that enables the
transfer of emails.
• Post office protocol (POP3): This is an email receiving protocol.
• Internet message access protocol (IMAP): This is a mail protocol that is responsible for
synchronizing and receiving email messages.
• Layer 2 tunneling protocol (L2TP): This establishes connections (also known as “vir-
tual lines”) to provide remote users with a cost-effective method of access, allowing net-
work systems to manage the IP addresses assigned to users. In addition, L2TP connec-
tions, when used with IPSec protocols, provide secure access to systems and networks.
L2TP supports tunnels in two modes: voluntary and compulsory. The main difference
between the two tunnel modes is the endpoints. The voluntary tunnel ends at the
remote customer and the compulsory tunnel ends at the internet service provider (ISP).
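
As a short sketch of an application layer protocol in use, the following lines issue an HTTP(S) request with Python's standard urllib module; the URL is illustrative only. The protocol details (request line, headers, status codes) are handled by the library on top of a TCP connection.

from urllib import request

with request.urlopen("https://example.com/", timeout=5) as response:
    print(response.status)                       # e.g. 200
    print(response.headers["Content-Type"])      # media type returned by the server
    body = response.read(200)                    # first bytes of the returned page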

3.3 Switching, Routing, and Flow Control
Theoretically, a message can be sent through a network as one massive stream of bits, but this could create communication problems for other devices because such large data streams could result in transmission delays. Devices working in one network can only communicate with each other. To connect them to another network, a router is required. It is a device that will redirect the packet to a destination located in a different logical IP net-
work. Communication in TCP/IP networks only allows data exchange with devices in the
same network. To send a message outside the network, it is necessary to adjust the IP pro-
tocol default gateway parameter. The default gateway address indicates the router through which other networks can be reached. Routers are nodes in the network. Their task is to send pack-
ets to the addressee, namely to a network where its IP address is located. A packet
addressed to the computer in a home network is directed straight to it. If it must be deliv-
ered outside the network, it goes to the router, which checks whether this packet is directed to a network directly connected to this router, or whether it is to be sent to a device located outside of the network. Packets travel from one node (router) to another
via a multitude of intermediary nodes and can also be transmitted through different
routes.

A router’s job is to choose the best available path. The decision which routes to take is
based on entries in the routing table. The routing table can be created either by the
administrator or dynamically by routing protocols. The term “routing” means deciding
which physical port, or through which network, the packets have to be sent so that they
reach the addressee as soon as possible. Each entry in the routing table contains the
address of the destination network and the address of the network or interface through
which the target network can be reached. If the router knows more than one route to the
destination network, it selects the most advantageous route based on the value showing
the quality of the route. These values depend on the specific routing protocol. They can be
based on the number of routers on the way to their destination, but also on temporary
load or transmission delays and a combination of these factors.
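
A minimal sketch, assuming the routing table is kept as a simple list of candidate entries, shows how a router could pick the most advantageous route by comparing metrics; the networks, next hops, and metric values are illustrative only.

routing_table = [
    {"destination": "192.168.2.0/24", "next_hop": "10.0.0.1", "metric": 3},
    {"destination": "192.168.2.0/24", "next_hop": "10.0.0.5", "metric": 1},
    {"destination": "0.0.0.0/0", "next_hop": "10.0.0.254", "metric": 10},
]

def best_route(destination):
    # Keep only the routes to the requested network and choose the lowest metric.
    candidates = [r for r in routing_table if r["destination"] == destination]
    return min(candidates, key=lambda r: r["metric"]) if candidates else None

print(best_route("192.168.2.0/24"))    # the route via 10.0.0.5 (metric 1) wins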

There are four types of static routes: the static network route, static host route, fixed static
route, and floating static route. The routing protocols shouldn’t be confused with routed
protocols that implement logical addresses to devices. Routing protocols create the entries in the routing table by gathering information. Routers can create routing tables
based on information exchanged with other routers. This exchange is based on routing
protocols. They should inform other network nodes about the networks to which the
router has access. This solution allows routers to build a dynamic structure. Joining
another network to one of the routers does not require reconfiguration of the other net-
work nodes; they will be automatically informed about the changes. To determine which of the available routes is the best, the router uses a metric, which calculates a value derived from specific factors depending on the routing protocol, such as hop count, bandwidth, delay, load, and link failure. Depending on their modes of operation, the following routing protocols are distinguished:

• Distance vector protocols: These send the routing table with metrics to neighboring
routers in specified time intervals. If a route is not available (or no information from a
neighboring router has arrived), then the entry about the route and network is removed
from the routing table and, if possible, replaced with another entry.
• Link state protocols: These send information to all routers, but this information only
contains data about the subnets connected to the router. Update information is sent
periodically or triggered by changes in the network.

The general goal regarding the operation of a network is that the network performs the set
of services for which it is intended, such as providing access to file archives or internet
websites, interactive voice messaging, and emails. All other requirements, such as com-
patibility, performance, reliability, manageability, security, extensibility, and scalability,
are supplementary to this basic task. Although all of the supplementary tasks are important, the interpretation of the quality of service often only includes the two most important network characteristics: performance and reliability. The main performance
characteristics are traffic response time, bandwidth, transmission delay, and transmission
delay variation.

Network response time (the interval between the occurrence of a user request to a network service and the receipt of a response to it) is an important characteristic of network performance from the user's point of view. This is the characteristic the user is referring to when they say that the network is slow. The most common characteristic is bit rate (the number of bits used to transmit or process data in a given unit of time). The bit rate is calculated by measuring the transmission rate of a data stream over the channel (the minimum size of a channel that can pass this stream without delay). Bit rate is measured in bits per second (Bit/s or bps), as well as derivatives with the prefixes kilo- (Kbit/s or kbps), mega- (Mbit/s or Mbps), and so on.

The bandwidth is the maximum possible speed of traffic in the network per unit of time. It
is determined by the capabilities of the technology that the network is built on. Band-
width indicates the speed of the internal operations of the network (the transfer of data
packets between network nodes). It directly exemplifies the quality of the principal func-
tion of the network: message transport. It is therefore more frequently used for network
performance analysis than measuring response time or speed. It is measured in bits per
second or, sometimes, in packets per second. Bandwidth is dependent on the features of
the physical transmission technology (such as copper cable, optical fiber, and twisted
pair) and the method of data transmission (such as ethernet, fast ethernet, and ATM).
Bandwidth is used as a characteristic of the technology on which the network is built. The
importance of this characteristic for network technology is shown by the fact that its value
often becomes part of the name of the product, for example, 10 Mbps ethernet or 100 Mbps
fast ethernet. Unlike response time or traffic speed, bandwidth does not depend on net-
work overload and has a constant value determined by the technologies used in the net-
work. Almost all networks are heterogenous networks. This means that different transmis-
sion medias and different technologies are used, so in different parts of the network
bandwidth can be limited. For the capacity of a multipart path, we must first pay attention
to the slowest elements.

The bandwidth and bit rate are often confused. The main difference is that bandwidth is
the maximum amount of data that can be transferred per unit of time and the bit rate is
the speed at which data can be transferred. We can say that the bandwidth is the physical characteristic of the channel and bit rate is the logical characteristic. For example, a channel with a bandwidth of 100 Mbps and a bit rate of 20 Mbps will transport data at only 20 Mbps.
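
A small worked example makes the difference concrete (the file size is illustrative): on the channel above, only the 20 Mbps bit rate determines how long a transfer takes.

file_size_bits = 50 * 10**6 * 8        # a 50 MB file expressed in bits (decimal units)
bit_rate = 20 * 10**6                  # 20 Mbps actually used on the channel
print(file_size_bits / bit_rate)       # 20.0 -> the transfer takes 20 seconds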

Another very important characteristic is the transmission delay (the delay between the moment data arrives at the input of a network device and the moment it appears at the output of this device). This performance parameter is similar in meaning to the network response time, but it characterizes only the network data processing, without processing delays by the end nodes. The values of the maximum transmission delay and delay variation characterize the quality of the network. Note that data traffic is easily affected by transmission delays. Bandwidth and transmission delay are independent parameters: a channel with high bandwidth can still introduce significant delays in the transmission of each packet. For example, a communication channel formed by a geostationary satellite can have very high bandwidth while the transmission delay is always at least 0.24 seconds, which is determined by the speed of propagation of the signal (about 300,000 kilometers per second) and the channel length (72,000 kilometers).

Ordinary delays are typically a matter of milliseconds, up to a few seconds. Latency of this order in packets generated by a file service, email service, or printing service has little effect on the quality of these services from the user's point of view.
However, similar delays in packets carrying audiovisual data can cause a significant
decrease in the quality of information transmission, for example the appearance of an
“echo” effect, the inability to understand some words, and blurred images. The main fea-
ture of the traffic generated during the transmission of voice or images is the strict require-
ments for the synchronism of the transmitted messages. To enable high-quality continu-
ous transmission despite sound vibrations or changes in light intensity, it is necessary to
obtain measured and encoded signal amplitudes with the same frequency with which
they were measured on the transmitting side. If messages are delayed, distortions will be
observed.

Networks vary a lot, and to get the best efficiency, different transmission methods and switching techniques are used. We differentiate between three types of transmission methods:

• Unicast: one device sends packets to one other device in the network
• Multicast: one device sends packets to multiple devices in the network
• Broadcast: one device sends packets to all other devices in the network.

Switching techniques are used for one-to-one connections. The switching technique should choose the optimal route for data transmission. Message switching was a predecessor of packet switching. With the message switching method, the whole message is sent as a single unit. The main characteristic of the packet switching technique is that each message is divided into small pieces and sent individually.

Figure 3: Switching Techniques

Source: Vladyslava Volyanska (2022).

An example of packet switching technique is the frame relay technique, where the mes-
sage is divided into frames of variable length. Frame relay is already an old standard, cre-
ated in the early 1990s. Frame relay provides multiple independent virtual circuits on the
same link and can guarantee a minimum rate for each virtual circuit. It is mainly used in
the construction of geographically distributed corporate networks and in the solutions
related to guaranteed bandwidth of the data transmission channel (for example, voice
over internet protocol (VoIP) and video conferencing).

Frame relay works at the first two layers of the OSI model. On the transport layer of the OSI model there is the TCP, another example of a switching protocol. The TCP is connection-oriented, meaning that it gives priority to connection over communication. The connection will stay active until communication is over. The TCP is used together with the IP, so we often speak about the TCP/IP model. The IP works on the third layer of the OSI model. The main task of IP is to provide addressing to the host, encapsulating data into packets, and routing to the destination. The alternative to the TCP is the UDP; it also runs together with the IP, so we also speak about the UDP/IP model. UDP is quite simple and unreliable because there is no control mechanism; it is used with applications that can accept the loss of data (for example, it can be used with the DNS protocol). In comparison with the TCP, the UDP is much faster, but when we have sensitive data, TCP should be used. The basic packet-switching technique is the datagram, where parts of the message are transmitted in units without establishing a connection or creating a virtual channel. A protocol that does not previously establish a connection can be called a datagram protocol (for example, IP, UDP, and ethernet). The word "datagram" was created as an analogy to "telegram". The most important factor is that each datagram has the complete destination address in the header and is independent from other datagrams. Even if they are parts of a larger message, they can be delivered separately through different routes.

Circuit switching is similar to the technique used for telephones. It is necessary to estab-
lish an end-to-end path before the transmission starts, and this path remains open until
the transmission is terminated. To communicate across the network we need two identifi-
ers: a MAC address and an IP address. The MAC is a unique identifier that can be consid-
ered the physical address of the recipient. It is assigned to the NIC of the device that is
connecting to the network. A MAC address has a hexadecimal form and is 48 bits long. The
IP address is used to identify the device across networks and the MAC address is needed to identify the device within the local network. Each device communicating in the network has an IP address. IP addresses are divided into two parts: a network address followed by a host address. Today, we must differentiate between two types of IP address: IPv4 and IPv6. An IPv4 address is a 32-bit address, unique for each device in the network, that is divided into four groups separated by periods (for example, 64.108.27.132). Each part separated by a period is called an "octet" and can have a value between 0 and 255. This means that with IPv4 addressing we can generate 4,294,967,296 (4.2 billion) different addresses. The successor of IPv4 is IPv6, which is a 128-bit hexadecimal address. IPv6 provides a larger addressing space and some additional benefits, such as the possibility of using both IPv4 and IPv6 for the same device (known as "dual stacking") and the possibility to communicate between hosts with different IP versions (network address translation). An IPv6 address has 16 octets: 8 fields separated by colons, where each field has two octets. IPv6 allows for 3.4 × 10³⁸ different addresses.
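
The address space figures quoted above follow directly from the address lengths, as a quick check shows:

print(f"IPv4: {2 ** 32:,} addresses")        # 4,294,967,296 (about 4.2 billion)
print(f"IPv6: {2 ** 128:.2e} addresses")     # about 3.40e+38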

Transmission media (i.e., the physical channel through which data is sent) have a strong influence on data transmission methods. All data transmission media can be divided into two groups: guided and unguided media. Copper cables (coaxial and twisted pair) and
fiber optic belong to guided media and radio waves, microwaves, and infrared belong to
unguided media. A coaxial cable is a commonly used method. There are two types of
coaxial cable (from the transmission point of view): the baseband which is transmitting a
single signal and broadband which is transmitting multiple signals. A twisted pair is the
most popular method, perhaps because of the cost and ease of installation. We differenti-
ate between shielded and unshielded twisted pairs. The biggest disadvantage of the twisted pair is its attenuation rate (the rate at which signal strength is lost), which means it can only be used over short distances. Fiber optic provides faster data transmission than coaxial cables. The signal spreads inside glass or plastic. This type of cable allows transmission over much longer distances.

3.4 Wireless Transmission and Mobile Communication Systems

It is generally accepted that the history of wireless technologies began with the transmis-
sion of the first radio signal and the appearance of the first radio receivers with amplitude
modulation. Then the global system for mobile communications standard (GSM)
appeared, starting the transition to digital standards by providing better spectrum, signal
quality, and security. Ever since the 1990s, the position of wireless networks has been
strengthening. Wireless technologies are firmly established in our lives; they stimulate the
creation of new devices and services. The main advantage of wireless networks is the
mobility of the solution. There is no need to lay cables to connect a network device, just to place the device within the network's operating range and configure it properly. This
allows quick and easy network expansion. The disadvantages of wireless networks include
the possibility of radio waves being disturbed by obstacles on the path of the signal or by
weather conditions, as well as lower security of the transmitted data in comparison to
cabled networks because there is less control over access to the transmission medium.
Two types of wireless medium are used in network transmission:

• Infrared waves: Light emitting diodes (LEDs) or laser diodes are used as sources of
electromagnetic waves.
• Radio waves: The planning of transmission frequency, the maximum allowable power
of transmitters, and the type of modulation are important factors.

Today, wireless technologies exist in different forms, as depicted in the following image.

Figure 4: Wireless Technologies

Source: Vladyslava Volyanska (2022).

A wireless personal area network (WPAN) is a private network with a small range described
in the IEEE 802.15.1 standard. Bluetooth technology belongs to this type of network. Bluetooth devices enable transmissions at distances of up to 1,500 meters (using the latest fifth generation bluetooth technology) with a transmitter power of up to 100 milliwatts. Bluetooth radio communication is carried out in the ISM (industry, science, and
medicine) band and uses the frequency hopping spread spectrum (FHSS) method. The
FHSS method is easy to implement, provides resistance to broadband interference, and
the equipment is not expensive. According to the FHSS algorithm, the signal frequency
“hops” 1,600 times per second.

Another technology that uses radio waves is a wireless local area network (WLAN). This is a
mobile alternative to cable LAN. Wireless networking standards (such as 802.11a) describe
the transmission speed and frequency band. The wireless network infrastructure consists
of

• network cards,
• access points, and
• antennas with cables.

WLAN networks can operate in two modes: when devices connect directly to each other or
with the use of access points. An access point is the central point of a wireless network. It transfers data between devices and allows a wireless network to be connected to a cable network. Access points have two network interfaces: a wireless interface (a socket for con-
necting the antenna) and a cable network interface (for example, an RJ45 socket for con-
necting to an ethernet network). Access points can communicate with each other, which
allows users to construct a complex network. Access points allow users to build two types
of networks:

• Basic service set: This is when the whole transmission in the network occurs through
the one access point.
• Extended service set: This is when there are several access points connected in the
“backbone network” communicating with each other with the inter-access point proto-
col (IAPP). In this type of network, devices connect to any of the access points and can
move between them. These types of networks are used to create public hot spots.

Wireless metropolitan area networks (WMAN) enable data flow between remote LANs and
include communication technologies designed for institutions dealing with data transmis-
sion.

Security is the primary concern in wireless networks. The generally available transmission
medium allows any device in the range to access the network’s resources. Access points
can provide security by filtering MAC and IP addresses or by securing access to the net-
work with an encryption key. It is possible for the wireless network to function without any encryption, but due to security risks this is not recommended. Commonly
used encryption methods are:

• Wired equivalent privacy (WEP): This enables the use of 64-bit or 128-bit keys. It is not
considered particularly secure.
• Wi-Fi protected access (WPA): This is a security standard that uses cyclical changes of the encryption key during transmission. It can operate in two modes:
◦ Enterprise: Keys are assigned by a RADIUS server for each network user
◦ Personal: All network users use a shared key.

It is recommended to use WPA-2, which is the revised version of the WPA protocol.

There are communication technologies that make it possible to communicate not only
between stationary but also between moving users. These are known as mobile communi-
cation systems. Mobile communication systems can be divided into different types
depending on many factors. Consider the following examples:

• Paging: This is one of the most primitive systems, in which encoded text messages are sent to the user's pager. The reverse process of message transmission is impossible; it works only in one direction.
• Twaging: This is a more advanced form of paging that allows the user to confirm that a message has been received.
• Cellular telephony: This provides data transmission between moving users. It allows the exchange of multimedia, text, voice, and video messages, and also provides access to the internet.

The communication technology of 1G (first generation) allowed users to communicate only using voice messages. The data transfer rate was 1.9 Kbps. All standards of this generation used frequency modulation to transmit speech and control information. The second generation (2G) was the first that worked with digital encryption of information. Users could already exchange messages with pictures and multimedia messages. The data transfer rate was up to 14.4 Kbps. It was followed by 3G (third generation), which com-
bined the ability to make phone calls and exchange messages with internet access. These
mobile communication technologies are based on packet data transmission. Data rate
standards set by the International Telecommunication Union for 3G are as follows:

• High-speed users (up to 120 km/h): up to 144 Kbps
• Pedestrian users (up to 3 km/h): up to 384 Kbps
• Stationary users: up to 2,048 Kbps.

The fourth generation (4G, LTE) of mobile communication is completely based on packet data transfer protocols. It provides users with high-speed internet access and the ability to exchange voice and text messages, graphics, and other information. The transmission speed for stationary users is up to one gigabit per second, and for moving users it can reach 100 Mbps. 5G technology is the next (fifth) generation of telecommunications networks,
designed to support increased data transmission to ensure higher performance and relia-
bility of the connection, enabling the development of areas of new technologies such as
the internet of things (IoT), smart homes, and augmented reality or virtual reality. The fifth
generation of the telecommunications network provides faster access to the internet. The
bit rate can reach 20 Gbps when transferring data to the device and 10 Gbps when sending
data to the network.

3.5 Network Security and Disaster Recovery Management

Information security threats are possible events, processes, and actions which can lead to
damage of information or computer systems. There are two key groups of information
security threats: natural and artificial. Natural threats, such as fires or hurricanes, are not
dependent on humans. Artificial threats, however, depend directly on human action and
can be either intentional or unintentional. Unintentional threats are the results of negli-
gence, inattention, and ignorance. Intentional threats are created on purpose.

One of the sources of threats are individual intruders (“hackers”), who use all available
cyber-tools to gain access to private information. These individuals will use weaknesses
and bugs in software, mistakes in firewall configurations, listen to communication chan-
nels, and use keyloggers (software that keeps track of the keys used on a keyboard and in which order, enabling hackers to steal passwords and sensitive information). To protect information security, the following technologies can be used:

• Protection against unwanted content (antivirus, antispam, web filters, anti-spyware)
• Firewalls and intrusion detection systems (IDS)
• Identity management (IDM)
• Privileged user control (PUM)
• DDoS protection
• Web application protection (WAF)
• Source code analysis
• Antifraud
• Protection against targeted attacks
• Security information and event management (SIEM)
• Systems for detecting anomalous user behavior (UEBA)
• APCS protection
• Data leak protection (DLP)
• Encryption
• Protection of mobile devices
• Back-ups
• Fault tolerance systems

Network security is a part of information security. It is a set of policies and requirements to prevent and monitor unauthorized access attempts, modification of information, possible failure of the entire computer network, and other network resources. There are four main principles which should be implemented to achieve relative network security:

• Protection of end network devices: It is only possible to ensure a sufficient level of device security if the latest technologies are used. For example, personal computers can
be attacked by viruses, worms, or web-browser vulnerabilities. Using antivirus software
with updated signature databases reduces the risk of an attack.
• Infrastructure monitoring: This is a must to protect the network. In order to under-
stand the status of services and applications, it is necessary to use network access pro-
tection tools.
• Network bandwidth control: By controlling bandwidth and using intrusion prevention
tools, users can reduce the chances of a successful attack.
• Fault tolerance of the internal network: It is important to be aware of vulnerabilities and of the ability to recover after an attack. It is not possible to protect the network's perimeter completely, so it is worth considering the possibility of switching from one resource to another in case of a failure.

Because there are many attacks (both active and passive), the term network security covers many software and hardware tools to protect information, such as proxies, firewalls, intrusion detection and prevention tools (IDS/IPS), antivirus software, network monitoring tools, tools for protection against targeted attacks, wireless network security tools, and VPNs. The use of these tools protects the internal network from unauthorized access, ensures secure connection of devices to the external network, and enables remote access. It also blocks monitoring and control applications that have access to personal data.

Network monitoring systems are software that allows users to monitor the status of net-
work devices. At the same time, network monitoring systems allow administrators to be
notified of any failures by sending messages. Network monitoring systems can be divided
into those that monitor network performance when channels are overloaded and those
that monitor the network searching for failures related to the server and other systems.
The main way to check the status is to send requests to devices. These can be HTTP
requests for checking a web server or SMTP requests for email servers (with the obligatory
receipt of a response via IMAP or POP3). While sending such requests, parameters such as
response time, availability, uptime, and others are monitored. To monitor a complex net-
work, it is necessary to use an integrated platform that would allow the administrator to
monitor WAN connections; software and network infrastructure; physical and virtual serv-
ers; cloud services; network applications; and mobile devices connected to the network.
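
A minimal monitoring sketch, assuming availability is checked with an HTTP request and the response time is measured around it; the URL and the alerting threshold are illustrative only.

import time
from urllib import request

def check_service(url, timeout=5):
    # Returns the HTTP status (or None if unreachable) and the measured response time.
    start = time.monotonic()
    try:
        with request.urlopen(url, timeout=timeout) as response:
            return response.status, time.monotonic() - start
    except OSError:                      # covers URLError, timeouts, refused connections
        return None, time.monotonic() - start

status, seconds = check_service("https://example.com/")
if status is None or seconds > 2.0:
    print("ALERT: service slow or unavailable")   # e.g., notify the administrator
else:
    print(f"OK: HTTP {status} in {seconds:.2f} s")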

Network access control (NAC) is a set of technical tools to implement policies and rules for
accessing a network. The purpose of NAC systems is to establish if the device attempting
to access the network is secure and if its configuration aligns with the access rules. After
identifying this, the system decides what level of access to grant the user. Intrusion detection and prevention systems (IPS/IDS) are software or hardware tools that detect and prevent attempts at unauthorized access. They are usually divided into two main groups: intrusion detection systems (IDS) and intrusion prevention systems (IPS). The functionality of IPS and IDS systems is similar, but an IDS only detects and reports suspicious activity, while an IPS can also block it in real time.

Network intrusion prevention systems (NIPS) are a set of software or hardware tools cre-
ated to protect the network and block targeted attacks in real-time. Such systems perform
a deep analysis of network traffic of network nodes or segments, as well as protocols of
the network, transport, and application levels. They can prevent unauthorized access
attempts and prevent prohibited network activity. In order to detect intrusions, traffic bit
sequences are compared with the attack pattern adopted as a reference. Another method
is to continuously analyze the data stream and capture suspicious network activity.

A firewall prevents unauthorized access, which is carried out by using vulnerabilities in software or network protocols. It allows or denies traffic based on comparison with the
configured rules. Since modern attacks can also be carried out from internal network
nodes, a popular place to install a firewall is not only the perimeter boundary, as it was
before, but also between the network segments.

Until recently, only corporations and large organizations secured themselves against the loss of information. At first, disaster recovery plans referred only to the largest IT and telecommunications systems. This situation is changing dynamically, because even a minor failure and a short break in activity mean big losses for small companies and individuals. That is why disaster recovery management is becoming more and more relevant. The basic point of disaster recovery management is the creation of a disaster recovery plan.
The loss of critical data is one of the worst scenarios for companies operating in every industry. The occurrence of digital failures, data loss, and forced downtime of the compa-
ny’s operation can be determined by many external and internal factors. There are three
key groups among the threats. The first is natural disasters (tornadoes, floods, earth-
quakes). The second group of threats includes human errors (data leakage or ill-consid-
ered implementation of software changes). The third group is IT infrastructure failures.
The following four key indicators should help to define the disaster recovery:

• Recovery point objective (RPO): This specifies an acceptable period without access to
IT services that will not cause significant losses.
• Recovery time objective (RTO): This specifies the maximum time for IT failure repair
and data recovery.
• Network recovery objective (NRO): This specifies the time elapsed between the
moment of failure and the time of recovering emergency network connections.
• Maximum data loss (MDL): This specifies the maximum amount of data lost as a result
of an incident, taking into account the possibility of restoring from additional sources
outside the system (for example, data registration from paper documents and transac-
tion logs).

The disaster recovery plan should be created based on these four indicators but the most
critical resources should be identified first. It is necessary to create documentation indi-
cating key systems, applications, data, and resources. The next step is risk assessment; defining and estimating the risks makes it possible to prepare for failures and minimize their effects by appropriate planning of preventive and corrective actions. An absolutely essential element of each disaster recovery plan is creating back-up copies of critical data. The plan should describe the resources to be backed up, the frequency of back-ups,
and where they are stored. To keep the disaster recovery strategy up-to-date it is neces-
sary to monitor, test, and optimize the plan to adjust it to evolving threats.
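
A minimal back-up sketch, assuming the critical data lives in a single directory and each copy is written to a dated folder elsewhere; the paths are hypothetical, and a real plan would also define retention periods and off-site storage.

import shutil
from datetime import datetime
from pathlib import Path

source = Path("/srv/critical-data")                           # resources to be backed up
target = Path("/backups") / datetime.now().strftime("%Y-%m-%d_%H%M")

shutil.copytree(source, target)                               # full copy of the directory tree
print("Back-up written to", target)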

SUMMARY
Information technologies and communication systems have become
global and cover all areas of modern life. As a result, ensuring informa-
tion security is becoming increasingly important. Understanding the structure, components, and operating principles of networks can help us to use their resources more effectively. Not only professionals from the IT branch need this knowledge. Today, companies are intensively looking for solutions that will help them increase the security of their business. IT failures are associated with a loss of profit and exclusion from the market for companies in almost every sector. This is the reason why disaster recovery management is crucial in the evolution strategy of every business. The idea of a disaster recovery plan is based on
calculating the risk of losing corporate data before it even happens.

UNIT 4
PROJECT MANAGEMENT

STUDY GOALS

On completion of this unit, you will be able to ...

– define the stages of the project life cycle.
– identify performance techniques.
– apply information to develop a solid project plan.
4. PROJECT MANAGEMENT

Introduction
Management in any field is a complex and heterogeneous activity. The major part of
today’s business worldwide is project oriented. This is because more and more companies
focus their efforts on creating new products and services or achieving new results in
established areas. In the past, the only documentation that was created for the project
was technical documentation and an execution diary. Projects are events that are aimed at realizing individual goals of the company, and the success of the entire enterprise depends on its ability to implement and perform them effectively. That is why project management is becoming one of the most relevant and important topics in business administration.

It is essential to understand how projects are distinct from daily activities in the company.
Each project aims to achieve its own goals and is subject to time constraints. The project
ends when it reaches a given goal. Everyday activities are never-ending, and their purpose
is to keep the business running smoothly.

Project management assists organizations to accomplish goals quickly and efficiently. Project management is a management activity with rules and standards. The most well-
known resource is the Project Management Body of Knowledge (PMBOK). This describes
the best practices and knowledge in the area. A project management system includes sev-
eral actions, such as

• setting clear and comprehensible goals;
• description and development of requirements for the project;
• implementation of communication between all parties working on the project;
• consideration of project constraints such as deadlines, risks, and resources; and
• communicating with the team, considering their needs and correcting existing plans in
accordance with the material received.

All these actions are segmented into separate stages: project initiation, planning, execu-
tion and control, and completion.

4.1 The Stages of the Project Life Cycle


The most basic component of project management is the project life cycle: a comprehen-
sive model of the project management process. There are several schemes of the project
life cycle, each with a different set of recommendations for its application. However, the
most widely used one is the model developed by the Project Management Institute (PMI)
which describes both the overall project lifecycle as well as the knowledge required in
each phase.

The term “project life cycle” must be distinguished from “project phases”. The life cycle
consists of stages of varying length and intensity depending on the needs of a given
project. We have four stages of the project life cycle and five project phases. The general
project life cycle consists of four overarching stages covering the entirety of the project:

• Beginning of the project: This is an introduction to the project, where goals are deter-
mined, and a suitable team is formed. At this stage, brainstorming sessions take place to determine the final product or service to be achieved.
• Preparation and planning: This is the most important part in project management. As
stated in the PMBOK, this should take approximately 50 percent of the total time in the
project implementation process (Project Management Institute, 2021). The project is
split into parts and collections of small tasks. A schedule is created, in which deadlines are set for each task. The list of required resources is developed. Simultaneously, the planning process includes periodic adjustments, because in the process of implementation new issues and tasks constantly appear. The next step is for the team to decide how it will achieve the purpose defined in the previous stage and to identify stakeholders. Sometimes an additional fifth stage is specified: the development stage. This stage is not applied for all projects and it is generally part of the preparation and planning stage. In the development stage, which is typical for technology projects, the structure of the project or product is determined. For example, a programming language is chosen.
• Implementation of work: At this stage, code writing or construction takes place. The
previously defined content of the project is created and at the same time the controlling
is carried out according to the predefined rules. In the second part of this stage, the
product is tested and checked to see whether it fulfils the requirements. During testing, product deficiencies are identified and corrected. This stage alternates with the previous one. In
the ideal project management system: the task is set, completed, controlled, the neces-
sary adjustments are made, and the next task is set. At the stage of implementation,
tools are frequently used to ease the flow of processes, such as delegation, time man-
agement, and the Eisenhower matrix (a grid created to help make decisions, where problems are arranged on axes of importance and urgency).
• End of the project: At this stage, a check of completeness is performed, and the original data and instructions are saved. It is crucial that even a new member of the team can figure out what and how it was done before. This stage may be a relatively simple transfer of the project results to a customer, or a long process of interaction with clients, depending on the project.
depending on the project.

The Project Management Body of Knowledge published by PMI describes five phases, which can operate according to different life cycle patterns (Project Management Institute, 2021):

• Initiation: The establishment of the purpose of the project based on the business and
demands of the stakeholders
• Planning: The development of a project plan and scope according to the chosen man-
agement process, schedule, and end goals
• Execution: The completion of tasks, including resource allocation, regular meetings,
and actual project implementation
• Monitoring and controlling: Supervision of the work performed using documents such
as a burn chart and estimating according to key purposes
• Closing: Evaluation of the efficiency of the work over the project taking the most impor-
tant conclusions and optimizations for future projects

These phases may be applied to almost any type of project. They only differ in how phases
manifest themselves in the particular project management process. Larger projects
almost always require some overlap between phases. Even the initiation and closing pha-
ses may overlap. Planning may be needed to make a project feasible or part of a project
may require a controlled closure ahead of others. When we consider the five phases, some
of them can be more or less important for different projects. For example, the planning
phase is not as important for firms that are repeatedly producing the same products.

A different functional approach is used in each stage of the project lifecycle. We differentiate two main types: a predictive approach and an adaptive approach. A predictive approach is rigid and sequential. The adaptive approach can be divided into iterative and incremental subcategories. Iterative projects are broken down into smaller consecutive
phases, where each phase is tested against its own requirements. Incremental projects are
similar, but the results of the phases are immediately implemented in the final product,
which is tested at the end. A hybrid method is also possible, using a combination of pre-
dictive and adaptive approaches according to the needs of the project. The predictive life
cycle works best for projects that require a specific sequence of work. For example, when
building a house, the walls cannot be painted until their construction is finished. On the
other hand, adaptive lifecycles work better for projects that require flexibility or where key
data is missing at the start. This model is becoming more and more popular due to the
dynamics of industries.

Many different project management methodologies have been created for almost any
needs over the course of the history of project management but there are no universal sol-
utions to project management. The solutions can be made up from features of preexisting
systems or even developed entirely from scratch. The most apparent method of making a
project more manageable is to separate the process into sequential steps.

Project management has many special terms and phrases. A few key terms are as follows:

• Critical path: This is the continuous sequence of activities from start to end that takes the longest time to be completed (see the short sketch after this list).
• Event chain of processes (EPC) diagram: This shows the order of tasks depending on
the access to resources.
• Slack time: This is the time that a task can be delayed by without affecting the total
duration of the project.
• Milestone: This is a major event in the process, such as the end of a stage.
• Project scope: This is a description of the work to be done.
• Sprint: This is a work cycle which lasts less than a month, during which a specific goal is set and part of the product is created.
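
A minimal sketch shows how the critical path and slack time can be computed for a small task network; the task names, durations, and dependencies are purely illustrative.

# Each task: (duration in days, list of predecessor tasks); the insertion order
# already lists predecessors before their successors.
tasks = {
    "design": (3, []),
    "code": (5, ["design"]),
    "test": (2, ["code"]),
    "docs": (2, ["design"]),
    "release": (1, ["test", "docs"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF) of each task.
ES, EF = {}, {}
for name, (duration, preds) in tasks.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + duration
project_duration = max(EF.values())

# Backward pass: latest start (LS); a task with zero slack lies on the critical path.
LS = {}
for name in reversed(list(tasks)):
    duration, _ = tasks[name]
    successors = [s for s, (_, preds) in tasks.items() if name in preds]
    latest_finish = min((LS[s] for s in successors), default=project_duration)
    LS[name] = latest_finish - duration

for name in tasks:
    slack = LS[name] - ES[name]
    print(name, "slack =", slack, "day(s)", "(critical)" if slack == 0 else "")

print("Project duration:", project_duration, "days")   # 11 days along design-code-test-release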

In addition to these key terms, there are some concepts and project management styles that have become very prevalent, and it is relevant to understand them in their own right. Some of the major ideas are detailed in the following paragraphs.

Traditional project management (the PMBOK model) is based on this linear structure. It is
almost like a computer game in which you cannot progress to the next level without com-
pleting the current one. It is the most widely used project management method, based on the so-called "waterfall" or "cascading" cycle, where the task is transferred sequentially
through the stages, resembling the flow of water. The disadvantage of this approach is that it requires that the first stage is prioritized and has all the required information. If this is feasible, the waterfall approach brings a level of consistency to the project work and the strict planning allows a simplified execution of the project; however, there is no capability to react to changes in the project goal.

Agile is a flexible set of methods for product management. In 2001, the Agile Manifesto was published. It consolidated the core principles of Agile software development, which are based on teamwork and adaptation (Beck et al., 2001). The project is divided into small subprojects (or “iterations”), which are then gathered into a finished product. Planning is carried out not only for the entire project: each phase is also carried out separately for each iteration. This allows developers to transfer the results (or “increments”) of the iterations more efficiently and to start a new subproject. Changes can be made without high costs or impact on the rest of the project. Agile is not a classic project management method, but rather a set of ideas and principles about how projects should be implemented. Based on these principles, separate flexible methods (frameworks) have been developed, such as Scrum, Kanban, Crystal, and many others. All these methods are distinct, yet they follow the same principles. The advantage of Agile is its flexibility and adaptability. The main principles of Agile state that it is more important to respond to change than to follow a plan, which makes Agile the method of choice for open-ended projects and the development of new, innovative products. The main disadvantage of Agile is that it is neither a methodology nor a standard but a set of principles: each team must develop its own system of management guided by the Agile principles.

Scrum is a framework from Agile that combines elements of the traditional process with the ideas of an Agile project management methodology. Working with Scrum, we break the project into parts which can be immediately used by the customer. These parts are called product “backlog” items. The most important backlog items are the first to be selected for execution in a sprint. Sprint is the term for an iteration in Scrum; sprints last for short periods of under four weeks. At the end of each sprint, the customer should receive a functioning part of their product that can be used, for example, a website with partial functionality. Subsequently, the project team continues with the next sprint. The duration of a sprint is chosen by the team prior to the start of work on the project and is fixed. To ensure that the project fulfils the desires of the customer, before each sprint begins, the project scope that is yet to be completed is re-evaluated and changes are made. In the Scrum project team, the Scrum Master and the Product Owner are involved in this process. The basic structure of Scrum can be defined by five key meetings: the backlog refinement meetings (also called “backlog grooming”), sprint planning, daily meetings, sprint debriefings, and sprint retrospectives. Scrum is designed for projects that require quick results and are tolerant to change. Scrum places strong demands on the project team: it should be small and cross-functional.

Lean is another branch of Agile principles. Agile proposes to break the task into small, manageable packages of work, but it doesn’t give any guidance about how the development of each package should be managed. Scrum provides processes and procedures. Lean adds a workflow scheme to the principles of Agile so that each iteration is executed with the same quality. Like in Scrum, the Lean process is split into small units of delivery that are implemented separately and independently. The difference is that in Lean there is a workflow with stages for each of these deliveries. Lean does not have clear stage boundaries as Scrum does with sprint limits. Lean also allows the team to perform several parallel tasks at different stages, which can accelerate the project and increase flexibility. Lean is rather a concept than a tool, much like Agile. Not every part of a project requires the same attention, and the fact that Lean applies the same approach to each task and stage makes it unsuitable for large and varied projects. Unlike Scrum, Lean does not establish a clear workflow for the application of its separate parts.

Kanban was created by Toyota engineer Taiichi Ohno in 1953 (Hiranabe, 2008). Kanban has parallels to industrial production, where the product increment is passed on from one stage to another and, at the end, a ready-to-ship item is obtained. In Kanban, an unfinished task can be left at one of the stages if priorities change and other urgent tasks appear. Kanban is therefore less strict than Scrum. For example, it does not limit the time of a sprint and there are no fixed roles, except for the Product Owner. Kanban even permits a team member to execute multiple tasks at the same time. Kanban has four pillars: an individual card is generated for each task (providing all necessary information for completion); a limit is set on the number of tasks per stage; tasks from the backlog enter the flow continuously in order of urgency; and there is continuous improvement (known in Japanese as kaizen). Like Scrum, Kanban is suitable for a team with well-established communication and cross-functionality.
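The “limit per stage” pillar is easy to picture in code. The following minimal Python sketch, with invented column names, limits, and card titles, shows a board that refuses to pull a new card into a stage once its work-in-progress limit is reached.

```python
# A toy Kanban board: each stage has a work-in-progress (WIP) limit, and a
# card may only be pulled into a stage that still has room. Data are invented.
board = {"To do": [], "In progress": [], "Done": []}
wip_limits = {"To do": 10, "In progress": 2, "Done": 100}

def pull(card: str, stage: str) -> bool:
    """Move a card into a stage only if the WIP limit allows it."""
    if len(board[stage]) >= wip_limits[stage]:
        print(f"'{card}' must wait: '{stage}' is at its WIP limit")
        return False
    board[stage].append(card)
    return True

pull("Design login page", "In progress")
pull("Fix payment bug", "In progress")
pull("Write release notes", "In progress")   # rejected: the limit of 2 is reached
```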

Six sigma adds more planning and improves quality. It is a more systematized version of Lean than Kanban is. The project prioritizes customer satisfaction and the quality of the product, which is achieved through a continual improvement process across the entire project. In the six sigma concept, special attention is paid to eradicating emerging problems. This is achieved via a five-step process known as “DMEDI”: define, measure, explore, develop, implement. In general, six sigma is comparable to Kanban, except that it has established stages for a task’s implementation: planning, goal setting, and quality testing.

PRINCE2 stands for “projects in controlled environments, version two”. When compared to other methods, PRINCE2 looks like an amalgamation of the classical approach to project management and a focus on quality similar to six sigma. In the PRINCE2 method, each team member has a distinct function in each of its seven processes: starting up a project, initiating a project, directing a project, controlling a stage, managing product delivery, managing a stage boundary, and closing a project.

4.2 Project Management Software


Project management software is a comprehensive tool that includes support for scheduling, budget management, resource allocation, communication, quality management, documentation, and administration systems that are used together for managing large projects. It helps to complete tasks and manage time, resource, and scope constraints.
The main features of project management software are:

• Planning
• Critical path calculation
• Data and information management
• Communication management

There are five types of project management software:

• Desktop-based
• Web-based
• Personal
• Single user
• Multi-user

Desktop-based software is installed on the computer of the individual user. This allows it to
provide the most feature-rich interface. Applications of this kind usually allow information
to be saved to the file system, which can later be shared with other users, or stored in a
centralized database. Examples of such software are TaskJuggler, Cerebro, and GanttPro-
ject.

Web-based project management software is available through any web browser connec-
ted to the internet with a Software as a Service (SaaS) subscription. Online platforms of
this kind are designed for businesses of all sizes and industries. Users in different
locations can use the tool from various devices including desktops, tablets, and smart-
phones to obtain current project status and information from the central database. Exam-
ples of web-based project management software are Zoho Projects, Microsoft Project, and
Basecamp.

Personal software is typically used for home project management. Generally, these are
single-user systems with a simple user interface. Single user systems can be used as per-
sonal systems or as business systems to manage small projects in companies.

Multi-user systems are designed to coordinate the actions of many users and rely on cli-
ent-server technology. Examples of multi-user project management software are Easy
Projects, OpenProj, ProjectMate, and TeamLab.

4.3 Tools and Techniques for Project Management Tasks
The most frequently used tool for planning projects of any scale is a Gantt chart. This is a horizontal bar chart that illustrates the timeline of a project and the tasks attached to it. It also tracks links between tasks and critical points, as well as start and completion dates for each task. It provides a visual overview of the project completion and forthcoming milestones. Each horizontal bar on the Gantt chart represents a task, and its length represents the amount of time it takes to complete the task. The entire Gantt chart offers an overview of the project status and usually contains the following components:

• Start dates and task durations
• Individual tasks
• People or teams responsible for tasks
• Milestones

Figure 5: Gantt Chart in MS Excel

Source: Vladyslava Volyanska (2022).

In most cases, Gantt chart software also provides additional features. The Gantt chart allows the user to control the workflow, track delays, create links between tasks, and move towards the goal with minimal losses. If the deadlines for a task are shifted or the task is completed with a delay, this is displayed by colored indicators on the Gantt chart. We can display projects, tasks, and subtasks with the help of a Gantt diagram. We can define or change due dates for a task without editing the task itself. We can also establish links between tasks. For example: it is necessary to present the finished project prototype to the client, but before that, the team must discuss and approve it. In this case, the completion dates of the discussion and approval of the prototype can be linked on the Gantt chart with the start date of the presentation of the finished prototype to the client.
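As a rough illustration of such a chart, the following Python sketch draws a few linked tasks as horizontal bars using the matplotlib library (an assumption; any charting or spreadsheet tool could produce the same picture). The task names, dates, and durations are invented.

```python
# Minimal Gantt-style chart: each horizontal bar is a task, its length the duration.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import date

tasks = [                                   # (task, start date, duration in days)
    ("Approve prototype", date(2022, 5, 2), 3),
    ("Prepare presentation", date(2022, 5, 5), 2),
    ("Present to client", date(2022, 5, 9), 1),
]

fig, ax = plt.subplots()
for row, (name, start, days) in enumerate(tasks):
    ax.barh(row, days, left=mdates.date2num(start), height=0.4)
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([t[0] for t in tasks])
ax.xaxis.set_major_formatter(mdates.DateFormatter("%d %b"))
ax.invert_yaxis()                           # first task at the top, as usual in Gantt charts
ax.set_title("Simple Gantt chart")
plt.show()
```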

It is important to understand what role the critical path method plays in a project. The critical path method is a technique to determine the tasks necessary to complete the entire project on time; it aims to avoid cases where one task delays the completion of others. With the help of Gantt charts, we can determine what tasks need to be completed on time to achieve the entire project punctually. The path leading through such critical tasks is called the critical path: the longest sequence of activities needed to complete the entire project. The search for the critical path is carried out by analyzing the duration of the tasks and their timely tolerance (or “buffer”). All tasks that do not allow a buffer are parts of the critical path. That is, any delay in such tasks leads to a delay in the deadline of the project. Although Gantt charts may look different, there are general basic steps for creating them:

1. Task breakdown: First, we should make a list of all the work or tasks on the project
that need to be done and arrange them in a task breakdown structure.

Figure 6: Task Breakdown

Source: Vladyslava Volyanska (2022).

For example, the marketing department is creating a new interactive blog post. A task
breakdown structure can include the following tasks.

Table 2: Tasks to Create a Blog Post

Task ID | Task | Duration (hours)
A | Create outline | 4
B | Write draft | 7
C | Design post visuals | 5
D | Upload post | 2

Source: Vladyslava Volyanska (2022).

2. Task dependencies: Once we have a general idea of what needs to be done, we can
begin to identify dependencies between tasks:
• Task B depends on task A.
• Task C depends on task A, so tasks B and C can be run in parallel.
• Task D depends on tasks B and C.
The list of interdependent tasks is called the “work sequence”.
3. Diagram: The following step is to change the task breakdown structure into a diagram. You should highlight an area for each task and use arrows to indicate dependencies between them.
4. Task duration: The next step is the estimation of task duration. First, we need to estimate how much time each activity requires. You can base this on experience, previous project data, or industry standard assessments.

5. Critical path: The next step is calculating the critical path. First, we need to estimate the start and end times for each activity. The start time of the first activity is zero. Adding the duration of the activity, we arrive at its end time. The start time of the next activity is the end time of the preceding activity. This calculation is done for all tasks. To establish the duration of the entire sequence, the completion time of the last task in the sequence is taken. The sequence of activities with the longest duration and without any buffers is the critical path.
6. Slack: The next step is the calculation of the time reserve. The slack reflects the degree of flexibility in working on a particular task. This value denotes how much an assignment can be delayed without impacting other tasks or the project completion date. The most important tasks have zero slack, and all deadlines for them are fixed. Tasks with a positive amount of slack are classified as non-critical, meaning that their execution can be postponed without affecting the completion of the project. There are two types of slack: total slack and free slack. Total slack is the amount of time (starting with the earliest start date) that work can be delayed without disrupting the project deadline or the work schedule. Total slack = LS (late start) – ES (early start) or LF (late finish) – EF (early finish). Free slack is the amount of time that work can be put off without affecting the next task. A free time reserve is possible only if two or more tasks have a common follow-up task. Free slack = ES (subsequent task) – EF (current task). The forward pass is used to calculate early start (ES) and early finish (EF) dates, while the backward pass is used to calculate late start (LS) and late finish (LF) dates. (A small worked example of this calculation for the blog-post tasks follows this list.)
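As announced above, the following minimal Python sketch applies the forward and backward passes to the blog-post tasks from Table 2, with the dependencies assumed in step 2. It only illustrates the calculation and is not a full scheduling tool.

```python
# Forward/backward pass for the blog-post example (durations in hours).
durations = {"A": 4, "B": 7, "C": 5, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]                 # a topological order of the tasks

es, ef = {}, {}
for t in order:                              # forward pass: earliest start/finish
    es[t] = max((ef[p] for p in predecessors[t]), default=0)
    ef[t] = es[t] + durations[t]

project_end = max(ef.values())
ls, lf = {}, {}
for t in reversed(order):                    # backward pass: latest start/finish
    successors = [s for s in order if t in predecessors[s]]
    lf[t] = min((ls[s] for s in successors), default=project_end)
    ls[t] = lf[t] - durations[t]

for t in order:
    slack = ls[t] - es[t]                    # total slack = LS - ES
    print(t, "ES", es[t], "EF", ef[t], "LS", ls[t], "LF", lf[t],
          "slack", slack, "(critical)" if slack == 0 else "")
# Tasks A, B, and D have zero slack and form the critical path (13 hours);
# task C has 2 hours of slack.
```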

Sometimes it is necessary to shift the timing of the project. In such cases, two methods of
time compression can be used:

• Acceleration: This is the analysis of the critical path in order to determine the work that can be carried out concurrently. Parallel execution of processes reduces the overall duration of work.
• Reinforcement: This is a process that involves the allocation of additional resources to
speed up work.

Once the critical path is identified, the appropriate strategy can be chosen to meet the
adjusted deadlines.

The program evaluation review technique (PERT) method is used to estimate the uncertainty about project activities by means of a weighted average between best- and worst-case scenarios. This method estimates the time required to complete a particular task. The PERT method uses

• the highest probability (most likely) estimate (M),
• the optimistic estimate (O), and
• the pessimistic estimate (P).

The calculation is performed as follows:

Estimated time = (O + 4M + P) / 6
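As a small illustration, the formula can be transcribed directly into code; the example values are invented.

```python
# PERT three-point estimate: a direct transcription of the formula above.
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Weighted average of the three estimates: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Example: O = 4, M = 7, P = 16 hours gives an estimated time of 8 hours.
print(pert_estimate(4, 7, 16))
```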

The main difference between the PERT and CPM methods is the level of certainty regard-
ing the duration of the work. The PERT method is used to estimate the time required to
complete the work, while the CPM method is applied when the approximate duration of
the work has already been calculated.

Both CPM and Gantt charts depict the interdependencies of tasks. Defining features of the
CPM are as follows (Asana, 2021):

• It visualizes critical and non-critical paths and allows users to calculate the duration of
the project.
• It is displayed as a network diagram with related areas.
• It doesn’t show required resources.
• It displays tasks on a network diagram without a timeline.

For contrast, the defining features of the Gantt chart are as follows (Asana, 2021):

• It visualizes project progress.


• It is displayed as a horizontal bar chart.
• It shows the resources required for each task.
• It displays tasks on the timeline.

Gantt charts can be used together with the CPM method to track critical paths over time to
ensure that the project is on schedule.

Managers who deal with complex projects will confirm that dividing tasks into smaller, more manageable pieces makes the workflow much easier. The work breakdown structure (WBS) is a method of organizing project management in which all work is divided into smaller and simpler operations so that each can be planned, assigned to a specific person or team, and its result evaluated. This approach helps to describe the content of the project, allocate responsibilities, evaluate the costs, and control the intermediate status and results. The hierarchical structure of a WBS is usually depicted graphically in the form of a multi-level scheme. At the top level, the concept of the project is shown, and below it are its stages, procedures, individual operations, and even very simple actions. Another way to visualize the WBS is to build a Gantt chart. The plan for WBS implementation is as follows:

• Develop and approve the concept of the project: The aim is to describe, in one or two sentences, the expected result after the completion of the project. This is the top level of the hierarchy.
• Highlight the key stages of the project: This will be the second level of the hierarchical
structure.
• Identify the results of each step: Write down the end results we want to achieve before
moving on to the next step.

• Divide the tasks: The end results of each stage are split into manageable tasks for indi-
vidual performers or a small team.
• Assign each task: Each task is given to a specialist who will be responsible for the pro-
gress of work and the result in their area.
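For illustration only, such a hierarchy can also be sketched as nested data; the project, stage, and task names and the assignments below are hypothetical.

```python
# A hypothetical work breakdown structure as nested dictionaries:
# level 1 is the project concept, level 2 the key stages, level 3 the tasks.
wbs = {
    "Launch interactive blog": {
        "Content": {
            "Create outline": "Editor",
            "Write draft": "Copywriter",
        },
        "Design": {
            "Design post visuals": "Designer",
        },
        "Publication": {
            "Upload post": "Web team",
        },
    }
}

def print_wbs(node, indent=0):
    """Print the hierarchy, one level of indentation per WBS level."""
    for name, child in node.items():
        if isinstance(child, dict):
            print("  " * indent + name)
            print_wbs(child, indent + 1)
        else:                                # a leaf: a task assigned to a person or team
            print("  " * indent + f"{name}  ->  {child}")

print_wbs(wbs)
```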

SUMMARY
Implementing any of the project management methods is usually impossible without a certain set of technological and organizational tools, such as a project management system. This is a set of methods that can be used to fulfill all the tasks, but the term is often used in a narrower sense, as a designation for a specific software tool. All these systems have three main goals: to increase employees' efficiency; to make the process of project management more productive and efficient; and to make the management of the project's profile more convenient and transparent to understand.

There are several methods for project management. In addition to the


classical PMBOK methodology, Agile can be used for one large project
divided into a series of small projects with phased implementation;
Scrum can divide the project into its component parts; Lean can distrib-
ute the project into small work packages; and Kanban is an option for
projects that are not limited by deadlines. Each of the methods has its
own nuances, advantages, and disadvantages. The choice of a suitable
system depends on the specifics of the organization and the team that
will work on a particular project.

Project management systems are relevant when the projects themselves are more or less similar in their execution. They make it possible to form a single approach to project management, follow the stages of its implementation at different levels, and control the budget and deadlines.

UNIT 5
LIFE CYCLE MANAGEMENT

STUDY GOALS

On completion of this unit, you will be able to ...

– define stages of the software life cycle.


– explain software development and testing techniques.
– apply information to understand issues in the field of analysis, design, and implemen-
tation of information systems.
5. LIFE CYCLE MANAGEMENT

Introduction
The concept of the software development life cycle (SDLC) appeared when the program-
ming community realized the need to shift from “home-made” methods of software devel-
opment to industrial production. The programmers tried to utilize the experience from
other branches and the notion of the life cycle was borrowed in particular. Programs are
not subject to physical wear, but errors and wekanesses are found during their operation
that require correction. Therefore, we can talk about the programs aging. Historically, the
development of life cycle concepts is associated with the search for appropriate models
for optimal performance. The difference in the purpose of applying each model is deter-
mined by their diversity.

There are three main reasons why it is necessary to study software life cycle models. First, this knowledge helps to understand what should be expected when ordering or purchasing software. Most software bugs are usually fixed during development, and there is no reason to expect future versions to be better. However, fundamental changes in the concepts of the program are a task for another project, which will not necessarily be better than the old one. Second, software life cycle models are the basis for understanding programming technologies and tools. Any technology that is based on certain ideas about the software life cycle builds its methods and tools around the phases and stages of that cycle. Third, a general understanding of how a software project is developed provides the foundational knowledge for its planning and allows us to spend resources more economically and achieve a higher quality of management.

5.1 Analysis and Design


The software life cycle covers the phases that a software product transitions through from
the appearance of an idea to its implementation and subsequent support. The software
life cycle models largely predetermine the methods of software development. The stages
of the life cycle typically include

• analysis of the requirements,


• design,
• programming, and
• testing.

The life cycle can be represented through various models. The most used are waterfall,
incremental (a phased model with intermediate control), and spiral models.

In the waterfall model, the development process is a sequence of independent steps. Each
step commences following the completion of the previous one. At all steps, the organiza-
tional processes and work are executed, including management, verification, certification,

and documentation development. As a result of the completion of a stage, intermediate
products are created. These increments cannot be altered in subsequent steps. The water-
fall model is suitable for creating simple software, which can have all its requirements for-
mulated at the start of the project.

The incremental model applies software development with a linear sequence of stages.
Software development is performed in iterations with feedback loops between stages.
Intermediate stage adjustments allow developers to consider the actual mutual influence
of results at various stages. In the initial stages of the project, all the basic requirements
for the product are determined and categorized by importance. After that, the develop-
ment of the system is carried out on an incremental basis. Each increment should add an
aspect of functionality to the system. The process starts with the components with the
highest priority.

With the spiral model, each “turn” of the spiral generates the next version of the product,
the requirements to the project are defined, and the next task is planned. Particular atten-
tion is paid to the first stages of development, analysis, and design, because technical sol-
utions are tested through the creation of prototypes. Every time the spiral turns, a worka-
ble fragment or version of the product is produced.

The choice and construction of a software life cycle model is based on the conceptual idea
of the system being designed, considering its complexity in accordance with standards
that allow the creation of a work execution scheme. The life cycle model is divided into
implementation processes, which should include individual tasks to be implemented. When choosing a general scheme of the life cycle model, the question of which tasks will be included is very important. Currently, the basis for the creation of a new life cycle is the ISO/IEC 12207 standard, which describes a complete set of processes covering all possible
types of tasks associated with software creation. According to this standard, only those
processes should be selected that are most suitable for the implementation of the project.
The basic processes that are present in all known life cycle models are mandatory.
Depending on the goals and objectives of the project area, they can be supplemented by
processes from the group of auxiliary or organizational processes (or sub-processes).

Software system design can be defined as the process of creating a set of
diagrams, terms of reference, and other documentation containing a description of the
product that will be developed. The systems project ensures that all participants under-
stand the purpose of the development. Depending on the complexity of the software, the
design process can be provided either manually or be fully automated. Currently, indus-
trial technologies for software creation are widely used. Such technologies are represen-
ted by descriptions of the principles, methods, applied processes, and operations and are
supported by a set of computer-aided software engineering tools (CASE tools). These tools
cover all stages of the software life cycle and are used to solve practical problems. Exam-
ples of such technologies are Microsoft Solution Framework (MSF), Rational Unified Process (RUP), and Extreme Programming (XP).

Various graphic tools can also be used in the design process. Flowcharts, ER diagrams,
UML diagrams, and layouts are all helpful tools. They are used for a schematic representa-
tion of a process. They are often used to study, plan, and explain complex processes with

simple diagrams. Rectangles, ovals, rhombuses, and some other shapes are used to build a flowchart, as well as connecting arrows that indicate the sequence of steps or the direction of the process. Each shape indicates a specific operation.

Flowcharts range from simple, hand-drawn sketches to detailed, computer-generated dia-


grams with many steps and processes. Given all possible variations, flowcharts can be rec-
ognized as one of the most common types of diagrams. They are widely used in various
fields, both technical and non-technical. Flowcharts are occasionally given dedicated
names, such as process diagram, workflow diagram, functional block diagram, business
process modeling, business process model and notation (BPMN), or process flow diagram
(PFD). They are closely related to other common types of diagrams, such as data flow dia-
grams (DFD) and unified modeling language (UML) activity diagrams.

Figure 7: The Most Used Flowchart Symbols

Source: Vladyslava Volyanska (2022).

During the execution of the project, many people are involved in the process and take part
in the analysis and development. The project participants are assigned different roles to
define the functions that they perform in the project. The following roles are typically dis-
tinguished, however, each software product will have a different set of roles:

• Customer
• Project manager
• System administrator
• Database administrator
• System architect
• Database architect
• Business analyst
• (System) analyst
• Tester

5.2 Documentation
One of the important stages of software development is the creation of documentation.
Software documentation (or system documentation) helps developers and the mainte-
nance teams understand the structure of the system and the purpose of its elements. In
addition, part of the documentation is intended for end users and administrators to allow
them to understand how to install and deploy the system, as well as describe what func-
tions it provides and how to deal with errors that may occur during the operation. The
software documentation can be divided into four main categories:

• Project documentation: This is a description of the main provisions used in the process of software creation. Project documentation describes the software product in general terms. For example, a programmer can justify why data structures are organized in a particular way or why a class (a blueprint for a part of the software that can be re-used when the same type of function is required) is constructed in a certain manner.
• Technical documentation: This incorporates algorithms, code, interfaces, API documentation, and libraries. Technical documentation is highly technical in nature and is used primarily to describe and define APIs, algorithms, and data structures. The technical documentation can be part of the source code. The same tools can be used both to build the program and to create the documentation within the source code at the same time. Examples of documentation generators are Javadoc, Doxygen, and NDoc. This method of documentation creation keeps it up to date (see the sketch after this list).
• User documentation: This includes frequently asked questions (FAQs), tutorials, quick-start guides, and “read-me” documents. A complete package of user documentation consists of an introductory guide that covers general issues for performing typical tasks; a thematic guide, where each chapter is devoted to a section of the program; and an indexed guide for advanced users who know well what features they are searching for.
• Marketing documentation: This is used to advertise both the software product and its
components, as well as other software products of the company. It often informs the
consumer about the properties of the product and explains its advantages.
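As a small illustration of documentation generated from source code (the sketch referred to in the technical documentation point above), the example below uses a Python docstring, which tools such as pydoc or Sphinx (Python counterparts of the generators named in the list) can render as reference documentation; the function itself is invented.

```python
# Documentation embedded in the source: a docstring that documentation
# generators can extract, so the docs live next to the code they describe.
def interest(principal: float, rate: float, years: int) -> float:
    """Return the simple interest earned on *principal*.

    Args:
        principal: Initial amount of money.
        rate: Yearly interest rate as a decimal, e.g. 0.05 for 5 %.
        years: Number of years the money is invested.

    Returns:
        The interest earned, not including the principal.
    """
    return principal * rate * years

# Running `python -m pydoc module_name` would render this docstring as
# browsable text documentation for the module containing the function.
```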

5.3 Development and Testing


The software development process includes low-level design and coding. Low-level design
is a detailed study of software architecture, for example, designing classes in object-orien-
ted programming (OOP), working out the structure of a database in a database manage-
ment system (DBMS), or organizing web-applications and their components.

Coding is the process of writing program code. This is the implementation of the high-
level and low-level architecture of the project in the form of a program. In some projects,
the construction phase is combined with design. Design and development processes vary
for different categories of software, among the most common are:

• Database development: Databases are classified as a separate category of software.
Database development is typically directly related to the development of one type of
application that manages the information stored in the database. Quite often, database
programming is done by designated developers.
• Development of applications based on structured programming: Structured pro-
gramming is used in several programming languages for a specific class of applications,
like device drivers and operating systems.
• Development of applications based on OOP: Object-oriented languages are used in a
large number of applications. One of the main tasks in the development of these appli-
cations is the design of the class hierarchy. Class design errors do not allow developers
to make quick improvements, which can lead to a delay in development, cost increases,
and other negative consequences.
• Web application development: Web applications belong to another large category of
software products that have their own development specifics, for example, the develop-
ment of applications for web browsers.

To facilitate the development of software, different design patterns are used. Design pat-
terns are effective ways to solve common problems that occur in both design and con-
struction of software. Such templates cannot be directly converted to code, but solutions
can be used in different situations. Such object-oriented patterns define relationships and
exchanges between the classes or objects without stipulating which final classes or appli-
cation objects will be used. Algorithms don’t belong to templates because they solve com-
putational problems, rather than design problems. Any template should describe a certain
problem that occurs many times, as well as the principle of its solution. It should do this in
such a way that the solution can be used as many times as desired. Usually, the template
consists of four main elements:

• Name: By referring to it, we can describe the design problem, solutions, and conse-
quences. Naming templates allows us to think with a higher level of abstraction.
• Task: A description of when the template should be applied. It is necessary to formulate
the task and its context. A particular design problem may be described, such as the way
in which algorithms are represented as objects. It is sometimes noted which class or
object structures are indicative of inflexible design. A list of conditions under which it
makes sense to apply the template may also be included.
• Solution: A description of design elements, relationships between them, and functions
of each element. It is not meant to be a specific design. The pattern should be applied
to a wide variety of situations. An abstract explanation of the design problem and how it
can be solved using some generalized combination of elements (classes and objects) is
given.
• Results: These are the consequences of using the template. Often, when describing design decisions, the consequences are not mentioned, but it is necessary to know about them so that we can choose between different options for this pattern, evaluating their advantages and disadvantages.

A design pattern names and distinguishes central aspects of a common solution’s struc-
ture that allow it to be used to create a reusable design. It extracts the participating
classes and instances, their roles and relationships, and their functions. When describing

each pattern, attention is focused on a specific object-oriented design problem. Design
patterns provide a variety of ways to solve many problems that designers of object-orien-
ted applications constantly face.
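As an illustration, the following minimal Python sketch shows one well-known object-oriented design pattern, the Strategy pattern; the class names and the discount example are invented for this sketch and are not taken from any particular project.

```python
# Strategy pattern: the pricing policy is an interchangeable object, so the
# surrounding code can swap algorithms without being rewritten.
from abc import ABC, abstractmethod

class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

class SeasonalDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price * 0.9                       # 10 percent off

class Order:
    def __init__(self, price: float, strategy: DiscountStrategy):
        self.price = price
        self.strategy = strategy                 # the collaborating object of the pattern

    def total(self) -> float:
        return self.strategy.apply(self.price)

print(Order(100.0, NoDiscount()).total())        # 100.0
print(Order(100.0, SeasonalDiscount()).total())  # 90.0
```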

Testing is an important process in IT development. Testers should not only be versed in the technology but should also be able to understand the product from the perspective of the end-user and be very attentive to details. The entire testing process aims to find and eliminate errors that might occur when using the system. The test team is provided with
two documents: a description of the business logic of the released product and a descrip-
tion of the functional design. The business logic of the released product describes the pro-
cesses that the designed product must perform from the point of view of the end user. The
functional design describes the functions that the designed product must feature and may
also contain a description of the user interface and its requirements.

After the technical documents are created and approved by the business requirements
department, the testing team starts writing the test plan, test scenarios, and test cases. A
test plan is a piece of documentation that defines the complete scope of testing. It
includes a description of the object, strategy, schedule, and criteria for starting and ending
testing. Test scenarios are a description of the initial conditions, input data, user actions,
and the expected result. Test cases are a description of the set of steps, specific condi-
tions, and parameters necessary to test the implementation of the tested function or part
of it. These documents are the basis for further testing of the product. Simultaneously
with the work of the testing group, programmers write the software code. When the code
is stable, the testers are given a version of the software to work with. During the tests,
defects are found and programmers fix them. A new version of the software is then issued for testing. This continues until the program acquires the suitable level of quality assurance or the deadline for the final release is reached.
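Test plans, scenarios, and test cases are usually written as documents, but when testing is automated a test case can also be expressed directly in code. The following minimal sketch uses Python's built-in unittest module; the function under test is a hypothetical example, not part of any system described here.

```python
# A minimal automated test case: each test method is one step with an
# expected result, mirroring the structure of a written test case.
import unittest

def passing_grade(score: int) -> bool:
    """Hypothetical function under test: a score of 50 or more passes."""
    return score >= 50

class TestPassingGrade(unittest.TestCase):
    def test_boundary_value_passes(self):
        self.assertTrue(passing_grade(50))

    def test_below_boundary_fails(self):
        self.assertFalse(passing_grade(49))

if __name__ == "__main__":
    unittest.main()
```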

5.4 Implementation, Evaluation, and


Maintenance
In most cases, software products will still be changed after the final implementation. This can be because defects have arisen, users have new requirements, or operating conditions
have changed. The whole range of activities aimed at providing effective and efficient sup-
port of software systems is called software maintenance. The IEEE 1219 standard defines
maintenance as the modification of a software product after it has been released to elimi-
nate failures, improve performance and other characteristics of the product or adapt it for
use in a modified environment (IEEE Computer Society, 1998). Program maintenance can
be much more expensive than creating a basic version of the application, since it allows a
software system to be used for a long period. Software maintainability is an important
characteristic and must be agreed with the customer in advance and considered during
development.

The main task of maintenance is to support the functioning of the software throughout
the entire period of its operation. The content side of maintenance is largely determined
by requests for modification coming from users. Typically, requests are most intensively
received in the first six weeks. Further requests, as a rule, are related to the adaptation of
the software or to the expansion of its functionality. The organization of the maintenance
process involves

• determination of the purpose of the maintenance processes;
• determination of the causes and types of changes in the software tool in the process of its maintenance;
• organization of processes and transfer for maintenance of the developed software;
• an agreement between the customer and the developer for the maintenance of the soft-
ware;
• development of the concept of methods and processes for support of the software;
• development of a requirement specification for modifications while maintaining the
software;
• approval of the concept, contract, and terms of reference for software support by the
customer;
• organization of control over the implementation of software support.

It is possible to improve a software product endlessly, until one day the feasibility comes
into question. Without special adjustments, the structure of the maintained program
becomes so complex that the cost of changing it becomes unacceptable.

5.5 Prototyping and Methods of Software


Development
A prototype is an initial version of a system that is used to test design possibilities and
present software ideas. Prototypes can be used at different stages and phases of develop-
ment. For example, at the design stage the prototype can be used for exploring the
choice of features and planning the user interface. The main advantages of prototyping
are the reduction of development time and cost because the evaluation of the prototype
enables the detection of insufficiencies or inconsistencies in the requirements at a previ-
ous stage. The prototyping process usually consists of the following steps:

1. Definition of initial requirements


2. Development of the original prototype
3. Examination of the prototype by the customer and end-users, who provide feedback
4. Reworking and improving the prototype

After this, the specifications and the prototype are changed. Steps three and four can be
repeated. It is important that a prototype can be created in a short time using auxiliary tools
(rapid prototyping language and working tools). The prototype does not have to contain

all of the software’s functionality. It should focus on the points that are yet not well under-
stood. There is no need for error control during the prototyping process. Prototyping has
many different options. However, all methods are based on two main types:

• Rapid prototyping: This is also known as “throwaway prototyping”. It is assumed that


the prototype will be thrown away and will not become part of the system. The principal
advantage of this method is speed.
• Evolutionary prototyping: The purpose of this type of prototyping is to consistently create system mockups that will get closer and closer to the final product. The advantage
of this approach is that every step has a working system, which doesn’t have full func-
tionality but improves with each iteration. At the same time, resources are not wasted
on the code that will be discarded.

Software development is a complex process that should deliver a product fully compliant
with the client’s expectations within the intended time frame and budget. The complexity
of the process determines the development of specific activities that the project team per-
forms. The waterfall method is one of the first defined software development methodolo-
gies. It divides the software development process into linear, successive stages. It is called
a “waterfall” because the progress of the design follows a straight path to the goal. Each
subsequent step depends on the results provided by the preceding steps. This model has
been criticized for its inflexibility and the long duration from the idea to the creation of a
working program.

The prototyping method takes the application scenario first, followed by a graphical user
interface, and eventually the logic. This makes it possible to evaluate an unfinished application,
where only part of the functionality is implemented and works, but the product is usable.
The incremental, or iterative, method consists of comprehensive planning of the system
and dividing it into separate modules. Next, the selected module or modules are program-
med, tested, and are presented to the customer. These actions are repeated until a com-
plete system is created. It is called “incremental” because each stage extends the product
with new, ready-made functionality.

In the case of rapid application development (RAD), ready-made modules such as classes
or libraries are delivered, which have separate, ready-made functionality. The programm-
er’s job is to then put it all together. The term “RAD” is also used to describe development
environments that are distinguished by many ready-made components and visual appli-
cation designs. They are often produced using a “drag and drop” method. Examples of
RAD environments are Microsoft Visual Studio (compatible with many programming lan-
guages), Qt Designer (C++), and Lazarus (Free Pascal).

The term “Agile development” was proposed in the Agile Manifesto. Agile methods were
created mainly because customer requirements often change frequently during the
project duration. The tasks are distributed among teams that include not only program-
mers, but also people from marketing, operations, and human resources. Agile is used for
project management and has elements of Lean management. Besides Scrum, extreme
programming is a very popular type of Agile methodology. In this approach, the most

important thing is the quality of the produced software and the adaptation to the chang-
ing needs of the client. The process of creation is divided into short cycles that end with a
checkpoint, during which it is possible to adapt new changes provided by the customer.

SUMMARY
Developing and implementing software is not an easy task as it involves
many steps and aspects. Today, many development teams solve the
problem in a unique way. They use various software development pro-
cesses such as Agile. The software development process does not end
with the implementation of the application. The development team con-
stantly monitors the software to make sure it works properly and meets
the end user needs. If bugs or errors are identified in post-production,
developers fix them. To prevent regression (where patches cause other
problems), the team re-submits the software to a shortened software
development process.

UNIT 6
MAIL MERGE

STUDY GOALS

On completion of this unit, you will be able to ...

– explain what mail merge is.


– use mail merge tools.
– design a document that contains a mail merge.
6. MAIL MERGE

Introduction
Generating and sending personalized documents to many recipients occurs in almost
every company. It is used in communication with clients and business partners. Docu-
ments of all kinds, such as invoices, settlements, letters, and contracts were previously
printed and sent by post but now travel to customers electronically. This type of corre-
spondence addressed to many recipients is called mail merge. The counterpart of mail
merge is, for example, advertising post in letterboxes. The prepared text is the same for
everyone but is combined with an address and a personal salutation to each recipient. Using the mail merge option in word processors, it is easy to create batches of letters like this. Instead of changing the name of each addressee many times, it is enough to mark which parts of the document should be changed using the electronic address book or
another database containing the necessary data.

Companies deal with this challenge in different ways. In large organizations, designated
modules are implemented. After integration with other systems, such solutions generate
documents on demand. Unfortunately, these modules are often very technical, meaning their integration requires programming work and document templates must be prepared in special file formats. More often, companies use simple mechanisms available in Micro-
soft Office such as mail merge. Some may simply fill in the documents manually, however,
the lack of automation makes this process tedious and prone to human error, as well as
time-consuming.

6.1 Master Documents and Forms


The creation of a mail merge requires writing a master document (or template) with text and inserting fields corresponding to data in a database, such as name, surname, postal address, and email address. Mail merge documents can be differentiated by several criteria.

First, they can be categorized based on the creation of a mailing list. The mailing list is a function provided by many email servers, as well as specialized programs for mailing. The server accepts a message from any subscriber to a particular address; after this, the message is redirected to all subscribers of the mailing list. This technology allows organizations to structure communication between groups of people. Typically, this software allows subscribers to manage their own settings, such as the status of their subscription, change of address, and more. Group addressing allows pooling for almost all email servers. It allows several people to read mail by entering a single address. The incoming mail will be sent to several recipients at the same time, each in their own mailbox. This can also be used for informative purposes or advertising. The message prepared by one operator is automatically sent to all addresses from the list, but without the opportunity to respond. When no subscription has previously taken place, such mail is called “spam”.

Mail merge documents can also be categorized by content, depending on whether it is
informational, transactional, commercial or triggered by an event. The final method of
categorization is orientation. This can be classed as thematic (due to the target audience),
regional, or simply mass mailing (such as spam).

A mail merge is formed of two key parts: a document and a data source (database). There
are three main steps to create a mail merge document:

1. First, the list of recipients with relevant information (names, surnames, addresses)
must be obtained or established. Depending on the application being used, existing
external databases can be connected or a data source can be created in text editors,
word processors, spreadsheets, or a database. The data source should contain infor-
mation to be merged with the main document and should be arranged in an organ-
ized format, such as a table.
2. Next, the main document (known as the “master document” or “template”) is prepared. Preparation of the master document consists of creating the constant content (the same for all recipients), including text and graphics, and the merge fields that will later be filled with the data from the data source.
3. Finally, the main document can be merged with the data from the data source.

This method can be utilized to create a batch of letters, envelopes, labels, catalogs, and
much more.

6.2 Rules and Fields


Mail merge is a technique for creating mass documents, such as forms, envelopes, and
labels, that differ from each other only with some minor elements, for example, an
address. A typical example is sending an identical letter notifying students about the
results of an examination. To create letters, two elements are needed: the form (master
document) and the database, which can be combined using the appropriate application.
The form can be any document created in a text editor, while the database is a table con-
taining data to be inserted. The table must contain column names in the first row and can
be a file in Word, Excel, or Access.

Figure 8: Address Data Breakdown

Source: Vladyslava Volyanska (2022).

The figure above shows an example of a table and its connection to the forms. Each field
name of the table should be unique, and each row should denote information about a
specific item.

Suppose that the form is open and a table with data is saved on the hard drive. In Office 2010, we click the “Mailings” tab and choose “Select Recipients”, then “Use an Existing List”. In the “Select Data Source” window that pops up, we choose the file with our database and confirm the choice. No special message informs us about connecting the form to the database; this is only indicated by additional active icons on the toolbar (“Edit Recipient List” and “Insert Merge Field”).

For data from the table (database) to appear in the form, the user must insert the appro-
priate fields from the table into the form. These fields are taken from the first row of the
table (the table headers). These fields can be inserted directly into the text, tables, text
fields, or controls. Text in the fields can also be formatted in any way. To place a merge
field in the master document, navigate to where you want to insert the field and click the
“Insert Merge Field” icon. A list with possible fields will appear. Choose the appropriate
field and the corresponding field name will appear on the form, surrounded by angled
brackets (“<< ... >>”). After this operation you will still only see the names of the fields sur-
rounded by symbols. To see the specific data from the table, turn on “Preview Results”.
You can also use the “Find Recipient” function to search for any information.

Mail merge can display data from the table and also perform additional operations known as “rules”. The most useful and most used rules are “if”, “then”, “otherwise”, and “skip if”. For example, suppose we want to display the following text depending on the gender of the recipient: if there is the letter “F” in the gender field of the database (signifying female),

then it should display the salutation “Dear Ms.”, alternatively (when there is letter “M” in
the field, for male) display “Dear Mr.” To implement such a condition, select “Mailing”, then
“Rules”, then “if then else”, and then fill in the necessary information.

In another example, imagine we would like to insert the results of the exam and inform the student whether they have passed and been accepted or not. The text “you have been accepted” can be inserted depending on the field in the table, which contains one of two possible options: yes or no.
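The merge fields and rules described above can also be illustrated outside a word processor. The following minimal Python sketch fills a template with fields from a small recipient list and applies an “if ... then ... otherwise” rule for the salutation and the acceptance text; all names and data are invented.

```python
# A hand-rolled mail merge: a master text with merge fields plus two small
# "rules" choosing the salutation and the acceptance sentence per recipient.
recipients = [
    {"first_name": "Anna", "last_name": "Kowalska", "gender": "F", "passed": "yes"},
    {"first_name": "Jan",  "last_name": "Nowak",    "gender": "M", "passed": "no"},
]

template = ("{salutation} {last_name},\n"
            "your examination result is now available. {decision}\n")

for r in recipients:
    salutation = "Dear Ms." if r["gender"] == "F" else "Dear Mr."   # if/then/otherwise rule
    decision = ("You have been accepted." if r["passed"] == "yes"
                else "Unfortunately, you have not been accepted.")
    print(template.format(salutation=salutation, decision=decision, **r))
```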

Forms with changing data can be viewed, printed, or sent by email. All these operations
are available under the button “Finish & Merge” on the “Mailings” toolbar. A manual search for appropriate data is inconvenient, so before printing, it is recommended to
sort and filter data collected in the tables. Users can use the “Edit Recipient List” and
“Address Block” functions.

6.3 Standard Letters and Labels


Sometimes users will need to create unique labels so that several records fit on one card. For example, we may have 12 labels on one card. There will be different data on each
label.

Labels can be used for the following reasons:

• To design small address stickers that will be stuck to the envelopes, where each label contains the data of a different addressee
• To share small cards containing unique discount codes, which were previously gener-
ated in Excel
• To print small cards containing the names of people for a seating plan
• To design price labels for individual products
• To design small cards with descriptions for an organizer

An Excel sheet containing uniquely generated discount codes can be used as an example.
First of all, the file containing the discount codes should be generated. Then, in Microsoft Word, choose “Mailings”, then “Start Mail Merge”, then “Labels”. The window titled “Label Options” will appear, where you can choose a suitable template type. In the next step, choose “Select Recipients”, then “Use an Existing List”, and then choose the Excel file containing the discount codes.

SUMMARY
Mail merge is the automated distribution of email messages to a group
of recipients according to a pre-compiled list. It is a type of mass com-
munication. It is widely used as a tool in internet marketing. Using mail
merge tools we can merge the main document with the address list from
a separate table or database to generate a set of output documents. The

main document contains a basic text that is the same for all output
documents and serial fields for inserting the data from the database (such
as names and addresses), which differ in individual output documents.

UNIT 7
GRAPHICS CREATION AND ANIMATION

STUDY GOALS

On completion of this unit, you will be able to ...

– define vector and raster graphics.


– decide which program tools are to be used.
– apply practical knowledge to create an image in a graphics editor.
7. GRAPHICS CREATION AND ANIMATION

Introduction
Computer graphics is a branch of computer science that studies the means and methods
of creating and processing graphic images using computer technology. Today, computer
graphics tools allow users to create realistic images that are similar to photographic pic-
tures. There is a variety of hardware and software for obtaining images of various types
and origin, ranging from simple drawings to realistic images. Computer graphics are used
in almost all scientific and engineering disciplines to improve the perception of informa-
tion. Based on the scenario in which it is applied, the following areas in computer graphics
can be distinguished:

• Scientific
• Business
• Design
• Illustrative
• Artistic
• Animation
• Multimedia

Scientific graphics are used to depict complex data and ideas. Originally, the only purpose
of the first computers was solving scientific and industrial problems. To better understand
the results that were obtained, graphs, diagrams, and drawings of calculated structures
were built. Modern scientific computer graphics enable experts to carry out computa-
tional experiments with a visual presentation. As a result, a new trend has appeared: cog-
nitive computer graphics.

Business graphics are designed for visualization of performance indicators, plans, pro-
cesses, and statistical reports. Business graphics are included in spreadsheet software.

Design graphics are used in the work of architects, engineers, and designers. This type of
computer graphics is a mandatory element of computer-aided design (CAD). By using
design graphics, it is possible to generate both two-dimensional graphics (projections,
sections) and three-dimensional images. Illustrative graphics incorporate digitalized free-
hand illustrations and drawings, created with designated graphics editors.

Artistic and advertising graphics are used in commercials, computer games, cartoons,
video presentations, and video tutorials. This type of graphics typically requires higher
computer performance and memory for generating and processing the images, especially
when realistic rendering is required. Changes in the texture and illumination of objects
must be calculated as projective transformations (the process describing the perceived positions of observed objects when the point of view of the observer changes) to ensure a realistic perspective and perception for the observer.

Computer animation is used to obtain moving images. The artist creates an initial sketch
and defines the direction of movement in either two or three dimensions. The computer
hardware (in particular the graphics card) then calculates trajectories and intermediate
images to create the illusion of movement. The calculation of factors, such as the anima-
ted rotation, approximation, deformation, and clipping of three-dimensional drawings
requires thousands of calculations per second. In animations, multimedia is the combina-
tion of high-quality images with sound.

7.1 Common Graphics Skills


Computer images can be stored in three principal ways: as raster graphics, as vector
graphics or as fractal graphics.

Raster Graphics

Raster graphics are represented as a rectangular matrix of colored pixels. Consequently,
each pixel must be stored either individually (for instance, with bitmap images) or multiple
pixels are stored in a compressed form, which must be reconstructed on the fly when the
image is displayed (for instance, with JPEG images). The quality of an image then mainly
depends on the resolution of the sensor of the digital camera used to capture it. Raster
images containing pixels can easily be transferred into the dot-based representation used
by most printers. Software for raster images takes the form of graphics editors rather than
generators.

Bitmap: an arrangement (map) of pixels (bits) representing an image.

The advantages of raster graphics are as follows:

• They provide a photorealistic representation of the recorded image.
• They allow software independence.
• They use standardized file types, independent of the generating device.
• They can be directly converted to printed dots.

However, the disadvantages of raster graphics are as follows:

• The files can be large, depending on the resolution of the camera.
• Smaller files require sophisticated compression techniques.
• Magnification is limited as enlarging images beyond the original resolution leads to
blurring.

There are a number of programs that focus on the drawing process, using convenient
drawing tools. Some of these tools for image creation are

• Fauve Matisse,
• Krita, and
• Paint.NET.

There are also specific software applications designed to process finished raster graphics
to improve their quality and implement creative ideas, rather than creating images from
scratch. The source material for processing can be obtained in different ways, such as
scanning, loading an image created in another editor, obtaining an image from a digital
camera, using image fragments from clipart libraries, or by exporting vector images. This
type of program includes

• Adobe Photoshop,
• Corel Photo-Paint,
• GIMP,
• Canva, and
• Pixelmator.

Specific programs for viewing and cataloging graphics also exist. These programs allow
users to view graphic files in many different formats, create albums, move and rename
files, and annotate illustrations. Examples of this type of software include

• Adobe Bridge CC,
• Adobe Lightroom,
• ACDSee,
• XnViewMP, and
• IrfanView.

Vector Graphics

Vector graphics represent an image as a set of line segments and arcs, rather than pixels.
In this case, a “vector” is a set of data characterizing an object. Vector images are com-
posed of contours, consisting of one or more adjacent segments bound by nodes. The seg-
ments can be straight or curved and contours may be closed or open. Their filling can be
solid, gradient, patterned, or textured. The contour is a mathematical concept and has no
thickness. To make the contour visible, a stroke is defined as a line with a given thickness
and color drawn strictly along the contour. Vector images are usually created manually,
but they can also be obtained from bitmaps using tracing. Vector graphics software is
designed primarily for creating and processing vector illustrations. They are used for
designing regular and reusable graphical elements (such as fonts, shapes, and icons), for
technical design, and for construction and planning.

The advantages of vector graphics are as follows:

• They have a small file size compared to bitmap images.
• They can be scaled and transformed without loss of quality and without increasing the
size of the file.
• They have accurate and excellent print quality.
• They can be easily converted to a raster image.
• Each element of the image can be edited separately.

However, the disadvantages of vector graphics are as follows:

• It is hard to convert raster images to vector images.
• There are limited possibilities for processing entire images.

There are also many programs for working with vector graphics including

• Corel Draw,
• Adobe Illustrator,
• Sketch, and
• Inkscape.

Three-dimensional graphics systems are a special group of software tools based on the
principles of vector graphics and include

• 3D Studio Max,
• Maya,
• Adobe Dimension,
• LightWave 3D, and
• Corel Bryce.

Another group of programs based on the principles of vector graphics are mobile applica-
tions, such as

• Adobe Illustrator Draw,
• iDesign,
• Infinite Design,
• Omber, and
• Scedio.

Fractal Graphics

Fractal graphics, like vector graphics, are calculated. Software tools for processing fractal
graphics are designed to automatically generate images from mathematical calculations.
A fractal is not drawn but generated using an iterative mathematical function. By changing
the coefficients in the function, a completely different picture can be obtained. While
fractals originate from mathematical disciplines, fractal graphics can be used for creating
natural repetitive structures, such as clouds, snow, trees, mountain landscapes, or the
surface of the seas and oceans. Fractal images are used in a variety of areas, ranging from
creating simple textures for webpages and desktop backgrounds to book illustrations and
fantastic landscapes for computer games.

Fractal: a geometric figure consisting of recursive parts, each of which is a smaller copy of
the whole. Any fragment of a fractal reproduces its global structure in some way.

There are fewer programs for generating fractal images available on the market compared
to raster and vector graphics. Some examples include

• Ultra Fractal,
• ChaosPro,
• Fractal Design Painter,
• Fractracer, and
• XaoS.

The Most Common Graphics File Formats

Graphics file formats determine how graphics data are stored. File formats for raster
images and vector images differ considerably. Vector image data is stored as mathemati-
cal vectors. The pixel information required for displaying the image on the screen is calcu-
lated during the rendering process. Consequently, vector images are already stored in a
compressed form. Raster image data is stored as pixels. If not compressed, each pixel
requires at least three bytes, one for each color component (red, green, blue). This means the
file size is equivalent to three times the number of pixels. A picture with a resolution matching a
conventional full high-definition computer screen of 1,920 by 1,080 pixels requires approx-
imately six megabytes of storage. Therefore, most raster file formats support compression
algorithms. Such algorithms require computationally intensive calculations to create the
file as well as to display the compressed file contents on the computer screen. In addition
to the vector or pixel data, graphics files incorporate metadata, such as the file type, the
resolution, the compression algorithm, and information about the generator and creator.
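
This storage requirement can be checked with a short calculation. The following JavaScript
sketch (a minimal illustration; the resolution values are the full HD example from the text)
computes the uncompressed size of a raster image:

// Uncompressed size of a raster image: width * height * bytes per pixel (3 bytes for RGB).
function uncompressedSizeBytes(width, height, bytesPerPixel = 3) {
  return width * height * bytesPerPixel;
}

const bytes = uncompressedSizeBytes(1920, 1080);
console.log(bytes);                                    // 6220800 bytes
console.log((bytes / (1024 * 1024)).toFixed(2), "MB"); // about 5.93 MB, i.e., roughly six megabytes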

The most common raster graphics file formats

A bitmap picture (BMP), also known as a “Windows device independent bitmap” is a com-
mon file format for bitmaps in Microsoft Windows. The file includes color palette informa-
tion, a bitmap array, and metadata. The format is primarily used to exchange raster
images between Windows applications. BMP images may be stored uncompressed or
compressed via run-length encoding or Huffman coding.

Graphics interchange format (GIF) supports color transparency functions and some ani-
mations. The image can be recorded in an interlaced manner, similar to a television scanning
system: a low-resolution picture appears on the screen first, allowing viewers
to see a general image, and then the remaining lines are loaded. This format supports 256
colors or grey scales. One of the colors can be defined as transparent due to the presence
of an additional alpha channel. It allows users to include several raster images in a single
file, which are reproduced at a specified frequency to provide a simple animation on the
screen (GIF animation). All data in the file is compressed using a lossless method, which
gives the best results in areas with a uniform fill. The disadvantage of the GIF format is the
limited palette size of 256 colors.

In comparison, Joint Photographic Expert Group file interchange format (JFIF, commonly
known as JPEG or JPG) uses up to 16.7 million colors in an image. The JFIF algorithm
takes advantage of color variations of neighboring pixels that are indiscernible to the
human eye. It identifies the variations and simplifies them to a single color, thus increas-
ing the color redundancy. The higher the redundancy, the more efficient the compression
is. Rather than storing multiple pixels of the same color, just the color and the number of
pixels is stored in the file. The actual algorithm performs multiple mathematical opera-
tions, such as a color space transformation, integral transformation, quantization, and
compression. Since not all these steps are fully revertible, some information is lost in the
back transformation, leading to a so-called “lossy” compression.

Portable network graphics (PNG) are the successor of the outdated GIF. PNG supports up
to 16.7 million colors and includes transparency and lossless compression. However, the
files are considerably larger than JPEGs.

Photo CD (PCD) is an image format that was originally developed by Kodak to store high
quality digital raster images. The file has an internal structure that stores images with fixed
resolutions, and therefore the sizes of files differ only slightly from each other and are in
the range of four to five megabytes. This format provides high quality for halftone images.

Halftone: a method of producing images using dots to produce a gradient effect.

Paintbrush file format (PCX) appeared in Paintbrush for Microsoft disk operating system
(MS-DOS). Since the licensing of Paintbrush for Windows, it has been used by several
Windows applications.

Photoshop document (PSD) is the native file format of Adobe Photoshop, which allows
users to store a bitmap with multiple layers, additional color channels, and masks.

Tagged image file format (TIFF) is a special format used in the printing industry, where
high image quality is required. It is primarily used for data exchange in the prepress stages
of printing. It allows lossless as well as lossy compression algorithms and supports 16.7
million colors including transparency.

The most common vector graphics file formats

Adobe Illustrator artwork (AI) is the standard format for the Adobe Illustrator software.

CorelDraw file (CDR) is a proprietary vector file format of the Corel Corporation, developed
for their CorelDraw software. The format specification was never fully published, and few
other programs can read CDR files.

Scalable vector graphics (SVG) is a popular vector graphics format, specified by the World
Wide Web Consortium (W3C) and widely used on websites. It is based on Extensible
Markup Language (XML) and allows users to store vector graphics and animation (using
JavaScript, HTML, and CSS). While SVG is fully documented, many functions of proprietary
graphics programs (such as shadows and other effects in Adobe Illustrator) cannot be
stored.

XML: a markup language for transmitting and reconstructing data.

Windows meta file (WMF) is a vector metafile format for Windows applications. It is used to
exchange graphics data via the clipboard.

Universal graphic file formats

There are also universal graphic file formats that support both vector and raster images at
the same time. Encapsulated PostScript (EPS) is used to store single pages of vector
graphics in a form that allows them to be embedded in other documents. It is supported
by applications for various operating systems. It is recommended for printing and creating
illustrations in publishing systems. This format is not created for a specific program and
most vector graphics applications should be able to open it. EPS was created by Adobe
and was the base for the creation of early versions of the Adobe Illustrator format.

Portable document format (PDF) is designed to work with the Acrobat software package.
Both vector and raster images, text with many fonts, hypertext links, and printer settings
can be saved in this format. It is the most popular format for presenting and transferring
text and graphic content. The advantage is the fact that it can be opened by most users,
even in a web browser.

Color Basics

When working with color, concepts such as color depth and color space are used. Color
depth is the number of bits that are used to encode the color of a single pixel. To encode
a black and white image, it is enough to use one bit per pixel. One byte
allows users to encode 256 different colors. Two bytes (16 bits) allow users to define
65,536 different colors; this is called “high color” mode. Three bytes (24 bits) are used to
display up to 16.7 million colors; this is called “true color” mode. The color depth, together
with the resolution, determines the size of the image file.
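
The relationship between color depth and the number of representable colors is a simple
power of two, as the following short JavaScript sketch illustrates:

// Number of distinct colors that can be encoded with a given color depth (bits per pixel).
function colorCount(bitsPerPixel) {
  return 2 ** bitsPerPixel;
}

console.log(colorCount(1));  // 2 (black and white)
console.log(colorCount(8));  // 256
console.log(colorCount(16)); // 65536 ("high color")
console.log(colorCount(24)); // 16777216, about 16.7 million ("true color")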

Colors in nature are rarely simple. Most color shades are formed by mixing primary colors.
A color space describes the method of color definition and its separation into color com-
ponents. There are many different color spaces, but in computer graphics there are three
predominant varieties: RGB, CMYK, and HSV.

RGB: a color space model where the primary colors are red (255, 0, 0), green (0, 255, 0),
and blue (0, 0, 255).

In the RGB color model, a color consists of three components: red, green, and blue (which
is where it gets its name). The colors are created through additive composition; for
instance, adding red to green results in yellow. With increasing red, green, or blue inten-
sity, the overall brightness increases until white is achieved when all three components
are at maximum (similar to sunlight). The equal combination of the three components
produces a grey scale from black to white. Any emitting system, such as a computer
screen, uses this color space for displaying images.

The CMYK (cyan, magenta, yellow, key) color model is used for preparation of printed
images. The color is composed by overlaying of the subtractive colors cyan, magenta, and
yellow, which is inverse to the RGB system. The reason for this is that the color of printed
images is produced by reflection rather than emission. For instance, if white light hits a red
surface, the blue and green portions are absorbed, and only the red portion is reflected.
The mixture of two colors in the RGB system results in a color of the CMYK system, and
vice versa. For example, red plus blue makes magenta, and magenta plus cyan makes blue.
The more ink there is on the paper, the more light it absorbs and the less it reflects. The
combination of the three primary colors absorbs almost all the incident light and the image
appears almost black. Since the mixture of cyan, magenta, and yellow does not really pro-
duce a saturated black, printers use a “key plate” (K), which produces an additional black
color layer on top of the printout, if required.
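
The relationship between the two models can be illustrated with a small script. The
following JavaScript sketch uses the commonly quoted naive RGB-to-CMYK formula; it is
only an approximation, since real printing workflows also rely on ICC color profiles:

// Naive conversion from RGB (0–255) to CMYK (0–1).
function rgbToCmyk(r, g, b) {
  const rn = r / 255, gn = g / 255, bn = b / 255;
  const k = 1 - Math.max(rn, gn, bn);             // key (black) component
  if (k === 1) return { c: 0, m: 0, y: 0, k: 1 }; // pure black
  return {
    c: (1 - rn - k) / (1 - k),
    m: (1 - gn - k) / (1 - k),
    y: (1 - bn - k) / (1 - k),
    k,
  };
}

console.log(rgbToCmyk(255, 0, 0)); // red -> { c: 0, m: 1, y: 1, k: 0 }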

Hue, saturation, value (HSV) as well as variants HSB (brightness), HSL (lightness), and HSI
(intensity), are more human-oriented models for color description. They use a predefined
color scale, similar to the colors of the rainbow. The hue defines the position on that scale
and, thus, the base color. The saturation defines the strength of the color with respect to a
(colorless) average grey tone. The term “value” either stands for absolute brightness (B),
relative brightness (lightness, L), or intensity (I), all of which generally represent a
brightness based on slightly different color spaces. By adjusting these three components, it is
possible to get as many colors as with other models. The HSB color model is convenient
for use in those graphic editors that are focused not on processing ready-made images,
but on their creation. Images created using the HSB model should be transformed into an
RGB or CMYK model depending on what they are created for: displaying on the screen or
being printed, respectively.

7.2 Text, Vector, and Bitmap Images


Graphic editors provide a choice of tools for creating and editing images. These tools are
merged into toolbars. Graphic editors have a set of tools for drawing simple shapes, like a
straight line, a curve, a rectangle, an oval, or a polygon. After selecting an object on the
toolbar, it is possible to draw it anywhere in the editor window. Such tools are available in
both raster and vector graphic editors, but the principles of processing them are different.
In a raster graphics editor, a shape ceases to exist as an independent element after draw-
ing is completed and it becomes just a group of pixels. In a vector editor, the vector infor-
mation of the shape is retained, and it can be scaled and moved. In raster graphics, this
can only be achieved by using an additional layer, which itself stores vector information,
such as a text layer.

There are tools for grouping and ungrouping objects in a vector editor (or a raster graphics
layer). The grouping operation combines several separate objects, which allows users to
perform common operations on them, such as transform, move, or delete. In vector edi-
tors, the different operations are available for any vector element, like copying, moving,
deleting, rotating, or resizing. Such operations can only be applied in raster editors to a
portion of the raster or the entire picture.

In the following example, we will utilize shapes, text, and colors to create a logo for an
imaginary company. To better understand those functions, we will create a logo for a com-
pany called “VR Reality” in GIMP. GIMP is an open-source image editor, available online
from their website.

Figure 9: VR Reality Logo

Source: Vladyslava Volyanska (2022).

To create this logo the following steps should be performed:

1. Start GIMP.
2. Select “File”, then “Create a New Image”.
3. Enter an image size of 600 by 600 pixels and click “OK”.
4. Set the background color as transparent.
5. Draw a circle with the ellipse select tool and fill it with color using “Fill with FG Color”.
The color seen in the image above is b20d0d.
6. Cancel selection of the element by choosing “Select”, then “None”.
7. Create a smaller circle with ellipse select tool.
8. Create a new layer.
9. Create a rectangle form with the rectangle select tool.
10. Transform this rectangle form using the 3D transform tool.
11. Set the opacity of this layer to 30 percent.
12. Create text saying “VR” by using the text tool. Set the font size to 355 pixels (px) and
font style to Serif Bold. Use the 3D transform tool to transform the text.
13. Create another new layer, take the ellipse select tool and create an ellipse.
14. Change the ellipse to path with “Selection to Path”.
15. Deselect everything with “Select”, then “None”.
16. Add text saying “Reality” with the text tool. Set the size to 100 px and the font style to
Serif.
17. Duplicate the path with the “Duplicate this Path” function.
18. Flip the selection with the flip tool (the direction should be set to horizontal).
19. Go to the layer with the text “Reality”, choose the text tool, right-click and choose
“Text along Path”.
20. Choose “Path to Selection” in the layers table in the “Path” tab, go to the “Layers” tab
in the layers table and create a new layer.

21. Use the bucket fill tool to paint the selection on the new layer.
22. Use the rotate tool and 3D transform tool to transform the text.
23. Move the text to the correct position and save the results.

Figure 10: GIMP Tool Menu

Source: Vladyslava Volyanska (2022).

In GIMP, the tools are gathered in a tool menu. The general view of this menu is represen-
ted on the image above. Familiarize yourself with these functions. Next to each tool, there
is an arrow that opens the next tool selection; the tools are grouped by functionality.

We will now create the logo step by step. Open the software and create a new file by
selecting “File”, then “New”.

Figure 11: Create a New Image

Source: Vladyslava Volyanska (2022).

Set the image size (both width and height) to 600 pixels. By clicking “Advanced Options”,
you can check if the background color is set to transparent. To be able to create the logo in
the center of the space, activate the guide lines. To do this, go to the “Image” menu, then
“Guides”, and select “New Guide (By Percent…)”. First, set a vertical guide to 50 percent
and then repeat the process for the horizontal guide. We will now create the outer element
of the logo. Choose the ellipse select tool. Make sure that the “Expand from center”
option is activated in the menu on the left-hand side of the window.

Figure 12: Expand from Center

Source: Vladyslava Volyanska (2022).

Next, choose the foreground color tool and set HTML notation to “b20d0d”. Draw a circle
starting from the center point (hold the shift key to get a perfectly symmetrical shape).
Then, click on the “Edit” menu and select “Fill with FG Color”.

Figure 13: Change Foreground Color

Source: Vladyslava Volyanska (2022).

The image should now look something like the following figure.

Figure 14: Logo Process

Source: Vladyslava Volyanska (2022).

Next, cancel selection of the element by going to the “Select” menu and choosing “None”.
Choose the ellipse select tool again and create a smaller circle, then cancel your selection.
The result is displayed in the following image.

Figure 15: Logo Process 2

Source: Vladyslava Volyanska (2022).

The next step is to create a new layer. To do this, choose the “Layers” menu and select
“Create a New Layer”. Add it to the image by clicking “OK”. Next, take the rectangle select
tool and create a rectangle form. We can transform this form using the 3D transform tool.
The result should look something like the following image.

Figure 16: Logo Process 3

Source: Vladyslava Volyanska (2022).

Set the opacity of the layer to 30 percent. This can be done in the layers table found in the
right-hand side of the window, shown in the following image.

Figure 17: Layers Table

Source: Vladyslava Volyanska (2022).

Next, take the text tool and type “VR”. Double-click on the text tool to show the window
where the features of the text can be changed. We will set the font size to 355 px and font
style to Serif Bold. Following this, use the 3D transform tool on the text. The image should
now be similar to the image below.

Figure 18: Logo Process 4

Source: Vladyslava Volyanska (2022).

We will continue with text and add the word “Reality”. This word should be a wrapped text.
Start by creating a new layer and then choosing the ellipse select tool to create an ellipse.
Navigate to the “Paths” tab in the layers table and choose “Selection to Path”. Now dese-
lect everything following the previously described process. Next, take the text tool and
type “Reality”. We will use the Serif font style and font size 100 px. In the “Paths” tab of the
layers table, create a duplicate of the path by choosing “Duplicate this Path”. Next, take
the flip tool and use it on your selection (the direction should be set to horizontal). This
will ensure that the wrapping text starts at the correct place on the path. Go back to your
layers and choose the layer with “Reality”; choose the text tool and mark the word with
the cursor then right-click and choose “Text along Path” from the menu. This will create
the contour of the text, but it is still in the wrong position. First, we will fill the text with the
color by going to the “Paths” tab in the layers table and choosing “Path to Selection”. Next,
we go back to the “Layers” tab and create a new layer. We can use the bucket fill tool to paint
the text, but just the selection on the new layer. Then use the rotate tool and 3D transform
tool on the text to achieve the desired result. When you are satisfied, move the text to the
correct position and save the results. In GIMP we can only save the image in XCF format
(GIMP’s own special format). To save an image in another format, for example JPEG, we
need to use the “Export as” function.

7.3 Compression
Compression is the presentation of information in a more efficient form through a reduc-
tion in the amount of data. Compression algorithms use the following properties of
graphic data:

• Redundancy: groups of identical pixels
• Predictability: frequently repeated identical combinations of pixels
• Optionality: data that has little effect on human perception

Compression can be achieved by software using the following techniques:

• Color space transformation: transforming to a color space that increases the redun-
dancy of colors
• Chroma subsampling: averaging or dropping insusceptible color information
• Integral transformation: changing the dimension of color representation (e.g., from a
spatial arrangement of pixels to a frequency vector of their appearance)
• Color quantization: mapping the frequency vector to a predefined value matrix to
increase the redundancy of zero values at the end of the vector
• Run-length encoding: summarizing consecutive identical vector components
• Entropy coding: representing the most frequently occurring components in a shorter
bit representation

The performance of these techniques is assessed based on the following factors:

• Compression ratio: the ratio of the volume of the compressed file to the volume of the
source file
• Restoration accuracy: the standard deviation of the pixel values of the compressed
image from the original
• Compression and decompression speed: the total time of compression and recovery

All existing compression algorithms can be categorized into two large classes: lossless
and lossy. Two widely used families of lossless compression algorithms are the Huffman
and Lempel-Ziv-Welch (LZW) algorithms. Lossless compression is used when the decompressed
data must exactly match the original. Common examples are executable files
and source code. Some graphic file formats (like PNG and GIF) only use lossless compression,
while others (such as TIFF) can use both lossy and lossless compression. Lossy compression
algorithms encode information by averaging or dropping information that is not essential
for its visualization. Lossy compression methods take advantage of the limits of human
vision. The idea behind all lossy compression algorithms is quite simple: remove irrelevant
information and then apply the most appropriate lossless compression algorithm to the
remaining data. The most common formats are JPEG and MPEG.

There are several compression algorithms, and it is advisable to use different compression
algorithms for different types of images. To compress pictures with large areas of mono-
chromatic colors, the most effective method is to use a compression algorithm that repla-
ces a sequence of repeating values (pixels of the same color) by two values (a pixel and the
number of repetitions). An example of this type of algorithm is used in BMP files. For draw-
ings such as diagrams, it is good to use another compression method that uses the search
for patterns. This algorithm is used in GIFs and allows users to compress the file several
times. To compress scanned photos and illustrations, the JPEG compression algorithm is
used. The human eye is very sensitive to alterations in the brightness of individual points
of the image, but not so good at noticing color changes. The computer reproduces more
than 16 million different colors that a person is barely able to distinguish. For documents
that are transmitted over the internet, a small file size is important, since the speed of
access to information depends on it. Therefore, when creating webpages, the types of
graphic formats that have a high data compression ratio are preferred, such as JPEG, GIF,
and PNG.
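
The run-length idea mentioned above for areas of identical pixels can be sketched in a few
lines of JavaScript (the pixel values are arbitrary sample data, not taken from a real file
format):

// Run-length encoding: replace a run of identical values with the value and its repetition count.
function runLengthEncode(pixels) {
  const encoded = [];
  let i = 0;
  while (i < pixels.length) {
    let runLength = 1;
    while (i + runLength < pixels.length && pixels[i + runLength] === pixels[i]) {
      runLength++;
    }
    encoded.push([pixels[i], runLength]);
    i += runLength;
  }
  return encoded;
}

console.log(runLengthEncode(["red", "red", "red", "red", "blue", "blue", "red"]));
// -> [["red", 4], ["blue", 2], ["red", 1]]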

SUMMARY
Computer graphics is a field of computer science that is associated with
the use of computer technology to obtain and process various graphic
images. There are three types of computer graphics: raster, vector, and
fractal graphics. They differ from each other in the principles of genera-
tion, processing, and storage, as well as in their areas of application.
Computer graphics are not limited to image design. In all branches of
science, technology, medicine, commerce, and administration, compu-
terized schemes, graphs, and diagrams are used and specially designed
to visually represent information.

UNIT 8
ANIMATION AND MORPHING

STUDY GOALS

On completion of this unit, you will be able to ...

– describe different types of animation.
– understand what kind of animation is used for different purposes.
– apply practical knowledge to create a small animation yourself.

8. ANIMATION AND MORPHING

Introduction
Animation brings images to life, and animated movies are the best example of it. Anima-
tion is a quick change from one image to another, which creates the impression of move-
ment of portions of the image. Animation typically requires a frame rate of at least 24
images per second for the moving object to be perceived as a smooth transition by the
human eye. The most common way to create animation is the keyframe method. By using
this method, so-called “keyframes” are created manually, showing individual moments of
the animation and depicting objects at intermediate stages of their movement, and the
computer system automatically generates all the missing frames between them.

Frame rate: the rate at which individual images are shown, typically within a second.

Another way to create computer animation is to use special programs for working with
images. There are many special software applications for creating computer animation.
Graphics editors can be used to draw single frames and arrange them in the required
sequence. Single frames can be later saved in the required computer format or exported
to video. Animation can also be performed by simulating movements or effects that are
difficult to reproduce using individual keyframes. In this case, mathematical algorithms
are used that predict changes in the image based on the desired effect, such as a character
disappearing in the dark. Such algorithms are used, for instance, in computer games to
mimic a realistic environmental response to a player’s interaction with the surroundings.

Animation is an effective marketing and design tool. For marketing purposes, several
forms are used, such as

• animated advertising banners;
• animated text and dynamic backgrounds;
• animated presentations;
• cartoons; and
• programmed animations on a website.

To create a simple animation, online video editors can be used. The animation can be cre-
ated from scratch or from ready-made photos and videos. On websites, animation is used
for many purposes, including the following:

• Page loading: An animation engages the user while waiting for the content to appear.
• Storytelling: An animation clearly and quickly explains the situation to the visitor.
• Usability enhancement: An animated photo is easier to look at than a list of images.
• Communication with the user: Animated greetings, warnings, and error notifications
can be generated.

On websites, animation is often based on the JavaScript programming language, which
displays frames by making small changes to HTML and CSS properties. It is a solution for
creating simple animation effects, such as pause, rewind, bounce, shine, speed, and
touch. JavaScript is lightweight, easy to read by browsers, and does not require much
computational power.

8.1 Frames and Canvas Operations


Simple animations can also be created in GIMP and saved as a GIF. GIFs can contain multi-
ple images from videos or photos. Sometimes, an animated GIF is an alternative to a
video. This format is easy to place on any website. Animated GIFs are widely used to create
presentations and educational materials in electronic format. There are many programs
aimed specifically at creating animated GIFs, however, most of them can only work with
finished images, distorting them or moving them in space. GIMP allows users to create ani-
mated images from scratch.

The GIF format allows users to store an image in multiple layers. Each layer is a static
image that is displayed for a defined duration. By presenting the static images sequentially,
an animated picture is generated. Animation is created in GIMP using the following proc-
ess:

• Multiple layers are added to the image.
• Each layer contains a separate image.
• The resulting image is saved in GIF format.

To understand this process better, let’s create a simple example with a countdown from
five to one in GIMP. First, create a new file by clicking on the “File” menu and selecting
“New”. Set the image size to 600 by 600 px and background color to white. Then, create a
new layer by choosing the “New Layer” function at the bottom of the layers table. Take the
text tool and set font size to 350 px and font style to Serif Bold, then type the number “5”.
Repeat the procedure for numbers four to one. Each number should be placed on a
separate layer. The image should look like the following image.

Figure 19: Layered Images for a GIF

Source: Vladyslava Volyanska (2022).

The layers are all displayed in the layers table as depicted in the following image.

Figure 20: Layers Window

Source: Vladyslava Volyanska (2022).

Now go to the “Filters” menu and select “Animation”, then “Playback”. At the bottom of
the pop-up window, choose “One frame per layer (replace)”. The animation is now cre-
ated, so save the file and set the parameters.

In GIMP there is no option to save a file in GIF format. Instead, use the “Export as” option.
At the top of the window, by file name, change the extension of the file to “.gif” and press
“Export”. In the next window, there is the option to interlace the layers and to create a GIF
image, or to create an animation by choosing “As animation”. Under the “Animated GIF
Options”, users have the following options:

• Loop forever: When this option is enabled, the sequential demonstration of layers will
be performed indefinitely. After the last layer is displayed, the first one will be displayed
again. If this option is disabled, the animation will play once and stop at the image of
the last layer.
• Delay between frames: The time in milliseconds (ms) during which each layer will be
displayed. The delay between frames can be configured for each frame separately, or it
will be the same for all and equal to the value specified in the box.
• Frame disposal: This can be set to “One frame per Layer (replace)”, “Cumulative Layers
(Combined)”, or “I don’t care”.
• Delay and Disposal should be used for all frames: This offers the option to apply the
previous two settings to all frames universally.

In our example, we will set the delay between frames to 500 ms, frame disposal to one
frame per layer, and both options should be enabled for all frames. Now, save the file by
clicking “Export”. You can play the file in your web browser and see how it works.
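
For comparison, a similar countdown can also be generated directly in the browser by
drawing frames onto an HTML canvas with JavaScript. The following is a minimal sketch;
it assumes a <canvas id="countdown" width="600" height="600"> element on the page, and
the element ID and colors are illustrative, not part of the GIMP exercise:

// Draw a looping 5-to-1 countdown, one frame every 500 ms, on a canvas element.
const canvas = document.getElementById("countdown");
const ctx = canvas.getContext("2d");
let current = 5;

function drawFrame() {
  ctx.fillStyle = "white";                 // clear the previous frame
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = "black";
  ctx.font = "bold 350px serif";
  ctx.textAlign = "center";
  ctx.textBaseline = "middle";
  ctx.fillText(String(current), canvas.width / 2, canvas.height / 2);
  current = current > 1 ? current - 1 : 5; // loop forever, like the GIF option
}

drawFrame();
setInterval(drawFrame, 500); // 500 ms per frame, matching the GIF delay used above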

8.2 Timing and Coordinates


The key point that determines the quality of any animation is the number of frames
appearing per second. The more frames an animation contains, the smoother the playback of the
animation will be. When the number of frames per second is too low, the movement will
be choppy. In web applications, an increased frame rate also increases the file size and
download time. When creating animations for webpages, a balance between the anima-
tion quality and file size must be found, since both are affected by the number of frames.
There are several technologies for creating animations for the web such as

• animated GIFs,
• Flash,
• Java, and
• JavaScript.
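
Since JavaScript appears in this list, a brief sketch may help to show how frame timing is
handled in the browser. The target of 24 frames per second and the movement step below
are illustrative assumptions:

// Advance an animated coordinate at a fixed target frame rate using requestAnimationFrame.
const targetFps = 24;
const frameDuration = 1000 / targetFps; // about 41.7 ms per frame
let lastTime = 0;
let x = 0;

function step(timestamp) {
  if (timestamp - lastTime >= frameDuration) {
    lastTime = timestamp;
    x += 5;                               // update the animated coordinate
    console.log("frame drawn at x =", x); // replace with actual drawing code
  }
  if (x < 600) {
    requestAnimationFrame(step);          // ask the browser for the next frame
  }
}

requestAnimationFrame(step);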

GIF animation is the easiest to create and almost any modern browser can display it. Ani-
mated GIFs can be easily prepared in Adobe Image Ready, Ulead GIF Animator, Adobe Pho-
toshop, or GIMP. Whereas GIF animation only allows users to place images in a file, Flash
technology allows the combination of animation, sound, text, graphics, and interactive
elements that enable the user to change data on a webpage.

There are several different methods of animation. According to different technologies, the
following categories can be identified:

• Traditional animation: This is a time-consuming process where a series of still pictures
is created separately.
• Stop frame animation: Objects are arranged in space and captured by a camera, then
their position is changed slightly, and a new frame is created.
• Sprite animation: This is implemented using programming languages, where each
object keeps a record of its parameters and, depending on their value, changes its
appearance.
• Morphing: This is the transformation of one object into another using intermediate
frames.
• Color animation: This is when only the color of the object changes and not its position.
• 3D animation: 3D objects, constructed from triangles or polygons, form a basic wire
model of the object, and are layered with pictures and textures.
• Motion capture animation: Sensors are attached to control points of a live actor. The
actor’s spatial coordinates are transmitted to the graphics hardware and processed to
create an animation.

There are two main types of image creation in animation: keyframe animation and skele-
tal animation. In keyframe animation the main frames are created, and then additional
intermediate frames are added by a computer software. The process of adding intermedi-
ate frames is called “tweening”. This technology assumes that we need to determine how
the object should look at certain moments in the movement and assign these frames as
keyframes.

Figure 21: Tweening

Source: Vladyslava Volyanska (2022).

In skeletal animation, “bones” are created that are later used to trigger the animation. A
tree-like structure of bones is created, where each subsequent bone is tied to the previous
one and repeats movements after it, depending on the hierarchy in the skeleton. When a
separate bone moves, all the skin vertices (surfaces) attached to it also move.

In 2D animation, depending on the type of used images, we can differentiate between vec-
tor and raster animation. Among the many editors that allow users to work with pixel
graphics, the most popular are Adobe Animate (formerly known as Adobe Flash Professio-
nal, Macromedia Flash, and Future Splash Animator), Adobe Image Ready, Adobe Photo-
shop, and GIMP. Working with vector graphics is easier because it allows the transition of
an object from one frame to another using mathematical calculations.

To create 3D graphics, special programs called 3D editors are used; the most popular
being 3D Max and Maya. To obtain a three-dimensional image of an object, it is necessary
to create a volumetric model. This model is produced using spline surfaces and primitive
geometric shapes, such as cubes, balls, and cones. The appearance of the surface is deter-
mined by a grid of reference points located in space. A coefficient is assigned to each of
these points, which determines its degree of influence on the nearby surface of the object.
The shape and smoothness of the surface depend on the relative position of the points
and the value of the coefficients. The deformation of the object is provided by moving
points. The next step is surface rendering. The surface properties are described in an array
of textures. After the construction and visualization of the object are completed, the
motion parameters are set. The computer animation is based on keyframe technology. 3D
modeling can be reduced to the following steps:

1. Create an internal frame of the object that best matches its real shape.
2. Create virtual materials that look similar to real ones.
3. Assign materials to different parts of an object’s surface.
4. Adjust the physical characteristics of the space in which the object will operate (i.e.,
set the illumination and properties of interacting objects).
5. Set the trajectory of the movement of objects.
6. Calculate the resulting sequence of frames.
7. Apply the effects to the finished animation.

8.3 Tweening and Morphing


As discussed previously, tweening is the process of adding intermediate frames to an ani-
mation.
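
A tween between two keyframes can be expressed as a simple interpolation. The following
JavaScript sketch shows linear tweening between two key positions; the property names
and frame count are illustrative:

// Linear interpolation between two keyframe values; t runs from 0 (first key) to 1 (second key).
function lerp(start, end, t) {
  return start + (end - start) * t;
}

// Generate the intermediate frames between two key positions of an object.
function tween(startPos, endPos, frameCount) {
  const frames = [];
  for (let i = 0; i <= frameCount; i++) {
    const t = i / frameCount;
    frames.push({ x: lerp(startPos.x, endPos.x, t), y: lerp(startPos.y, endPos.y, t) });
  }
  return frames;
}

console.log(tween({ x: 0, y: 0 }, { x: 100, y: 50 }, 4));
// -> five frames moving the object evenly from (0, 0) to (100, 50)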

Morphing is a technique to provide smooth changes from one image to another. Well-
made morphing produces great effects and is used in movies and computer animations.
To better understand how morphing works we will create an example in GIMP. To use
morphing in GIMP it is necessary to install the add-on called “GAP” (GIMP Animation Pack-
age).

You will need two images of the same size. Open GIMP with the added GAP plugin and cre-
ate a new file, the background should be set to transparent. Now, in the layers window,
create a new layer and copy your first image onto it. Create one more layer and copy the
second image onto it. Next, go to the “Video” menu and select “Morphing”, then “Morph”.
A pop-up window will open where we can see our images and set the parameters for
morphing.

Figure 22: Morphing

Source: Vladyslava Volyanska (2022).

In this example, the “Source” image is a cat and the “Destination” image is a wolf. To ena-
ble a successful morph, we need to identify the nodes in the source image, such as the
eyes and nose, and their equivalents in the destination image. Save the file as a GIF and
watch it in your web browser.

8.4 Sound Implementation


Sound can be a valuable addition to animations. Technically, the GIF format can only save
a series of images with text, while a video file format can store audio, video, subtitles, and
other metadata. It is not possible to add audio to a GIF directly. An alternative is to create
an animated GIF, convert it to a video file, and then add a background sound. Several web-
applications provide a feature to add audio to GIFs by converting it to a video file, such as
Kapwing and VEED.

We will create a small animation with sound representing a countdown with an explosion
at the end. We will use the web-based application Powtoon to do this. The website is very
intuitive. After logging in, we can create a new project and choose the number of slides
(layers). Select the text tool from the sidebar and type one number on each slide starting
from five to one.

Figure 23: Powtoon Sidebar

Source: Vladyslava Volyanska (2022).

On the last slide, use the “explosion” from the choice of templates. It can be found on the
sidebar under “Shapes”, then “Animated Shapes”. Users can also create a background for
the numbers. We can use background templates from the “Backgrounds” tool on the sidebar.

To animate the project, it is necessary to set a slide duration for each slide on the timeline.
By using the plus and minus icons on the right side of the timeline you can set the time
duration of the slide. We can add the sounds to the project by using the “Sound” button.
The sound line can be added to separate slides or to the whole project. In different editors
there are different possibilities to work with sounds, like mixing, volume change, and dif-
ferent filters. At the end we need to export the animation and save it in the appropriate
format for our needs, namely MP4.

Figure 24: Powtoon Export Options

Source: Vladyslava Volyanska (2022).

SUMMARY
Computer animation is an art. It can be found in cartoons and movies,
as well as on websites and in digital advertising. Animation is a change
in the properties of one or more objects of an image to give the illusion
of movement. With animation, it is possible to bring any vector drawing
to life or create a frame-by-frame scene from various photographs. For
example, if you animate a character, then they can move, speak, and
perform any actions as if they were alive.

2D animation combines simple animation transitions with the concept
of scripts or a picture storyboard. 2D animation can be used in multime-
dia presentations, advertising films, videos for billboards, promotional
films, corporate films, and much more.

In 3D animation, more advanced visualization methods are used. It is an
illustrative and creative approach. 3D animation can be used in numer-
ous situations, including architectural designs, interactions of products,
physical phenomena, and industrial and technological processes.

UNIT 9
PROGRAMMING FOR THE WEB

STUDY GOALS

On completion of this unit, you will be able to ...

– define, create, and change HTML content.
– use JavaScript to react to common HTML events.
– use and understand the structure and syntax of JavaScript.

9. PROGRAMMING FOR THE WEB

Introduction
The most popular service available via the internet is the World Wide Web (WWW). Most
WWW services rely on the client-server principle, where servers are accessed by client
applications or browsers. Websites are hosted on servers, which provide the diverse infor-
mation displayed. The website is a set of webpages interconnected by hyperlinks using a
navigation system. The website is accessed and located using a uniform resource locator
(URL), consisting of the communication protocol (for websites: hypertext transfer protocol
(HTTP) or HTTP secure (HTTPS)), the host name, the domain, the top-level domain, and
the location of the webpage, e.g., http[s]://[host.]domain.tld[/folder/filename]. To
exchange data between the client and the server, an HTML form can be inserted into a
webpage. A form is a part of a webpage where the user can enter information. Requests on
a form can be made using the “GET” or “POST” functions of the HTTP request-response
protocol.
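
To illustrate the difference between the two request types, the following is a minimal
JavaScript sketch that sends form data with GET and POST using the browser's fetch
function; the URLs and field names are invented for illustration:

// GET: the form data travels in the URL as a query string.
fetch("https://example.com/search?query=cats")
  .then((response) => response.text())
  .then((html) => console.log(html));

// POST: the form data travels in the body of the HTTP request.
fetch("https://example.com/register", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "name=Alice&email=alice%40example.com",
})
  .then((response) => response.text())
  .then((html) => console.log(html));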

Depending on how the website is generated, we distinguish between the following catego-
ries:

• Static pages containing static HyperText Markup Language (HTML) or Extensible HTML
(XHTML) pages. Static webpages are static files (i.e., a set of text, tables, and pictures)
that are created using HTML (so they have the .HTML or .htm extension) and are stored
ready-made in the server’s file system.
• Dynamic pages that are created by executing a user’s request. Dynamic webpages may
be created on the server side, and subsequently sent to the client or in the client’s
browser by using minimal information from the server.
• Webpages displayed using Adobe Flash software which provides interactivity and con-
trol beyond conventional web applications. However, they are regularly criticized due
to cybersecurity issues.
• Websites using a combination of the above-mentioned technologies.

We can further distinguish between passive websites, whose content is static and does not
alter through user interaction, and active websites that exchange data about the users’
interaction with the server and update the website accordingly. Creating webpages for
static sites by writing HTML or cascading style sheets (CSS) code is a time-consuming proc-
ess. Therefore, creation of new webpages or the editing of existing pages is typically per-
formed by the user on the PC in a locally installed or web-based editor, and then uploaded
again to the website.

Static sites with passive webpages are typically used to create small sites with a constant
structure, hosted on public or free servers. To make static webpages interactive and
dynamic, JavaScript and VBScript can be implemented in a webpage to extend the limited
HTML and CSS functionality. Such scripts can be triggered either as a response to a user
interaction or to any external event, such as a timer. To create an interactive website,
dynamic HTML (DHTML) elements can be inserted into an HTML document. DHTML is built
on the JavaScript language, CSS, and the document object model (DOM). Flash snippets
or Flash movies (.swf files) can be also inserted into an HTML document. Flash provides
interactivity through the ActionScript scripting language.

The webpages for dynamic sites are generated from data stored on the server in the data-
base. To develop and maintain dynamic sites, a content management system (CMS) is
used. Popular CMSs are Drupal, Joomla, and WordPress. Currently, various applications
are used to create dynamic sites. There are two main approaches to develop web applica-
tions: based on compiled modules and based on interpretable scripts. Compiled modules
are based on the common gateway interface (CGI) standard for data exchange between cli-
ent and server. Such modules are translated into executable files and executed by the web
server. The first web applications for creating dynamic sites were individual CGI modules
(scripts written primarily in Perl programming language) that ran on a server. CGI scripts
behave like normal programs. The result of the module execution is a page in HTML for-
mat.

Interpreted scripts are server-side scripts, written in a scripting language, and are inter-
preted immediately rather than compiled. Such scripts can be implemented in HTML
(which itself is a kind of an interpreted language). The most common server scripting lan-
guages are Perl, ASP, Node.js, PHP, Golang, Python, Java, and Ruby. The server script lan-
guage is identifiable via the file name extension (.php, .asp, .aspx, .js). When a webserver
receives a scripting request, it interprets its contents and generates an HTML webpage
that is sent back to the browser. Scripts may also be executed on the client side, for
instance when a webpage opens.

Flash technology is used to create vector graphics applications or complete movies to be
displayed in the web browser. Movies are dissected into smaller portions to reduce the
download time. Adobe Flash is, not least because of its regular security vulnerabilities,
increasingly being replaced by HTML5.

9.1 HTML and CSS


HTML is a hypertext markup language for creating webpages. The origins of HTML date
back to 1991. The current version of the language is HTML5. Based on HTML, a new XHTML
language was constructed which takes advantage of the stricter XML structure. There is an
ongoing debate as to which language is more appropriate for creating webpages. XHTML
offers the ability to link to other XML-compliant languages, but not all browsers support
these extensions. Besides that, XHTML requires strictly correct syntax, which contributes to
the fact that many programmers prefer HTML.

The basic element of the HTML syntax is the tag. This is special text in angle brackets, for
example, <head>. Tags have a direct impact on the structure and appearance of the web-
site. The tags are not visible on the page, but they are responsible for the appearance of
the page. The function of a tag can be extended by assigning it an appropriate attribute.

Tag: a feature of HTML with an opening and closing part. The opening part is formatted as
<...>, the closing part as </...>.

There are tags that absolutely require a pair: an opening tag and a closing tag, for exam-
ple, <body> … </body>, and there are tags that work with only the opening tag. Some
common tags are described in the following paragraphs.

The <p> tag is used to break into paragraphs. The <p> tag contains the align attribute,
which displays the text alignment type and can take the following values:

• Left (default)
• Right
• Center
• Justify

<p> is a block element, which means that the content of the tag takes up the entire availa-
ble width, and the height of the element is determined by the height of its content. An
unpaired <br> tag is used if it is necessary to move the text to the next line without start-
ing a new paragraph (for example, in publishing poetry).

HTML distinguishes six levels of headings, which differ in text size. The heading of the first
level is displayed in the largest font size and the heading of the sixth level in the smallest.
Headings are block elements, like paragraphs. Headings are designed with
paired tags, for example, <H6> Level 6 heading </H6>.

Lists are very important elements of webpages. HTML has the following kinds of lists:

• <ol>: numbered list
• <ul>: bulleted list

Each list item is placed in a pair of tags. Thus, the code for a numbered list will look as
follows:

<ol>
<li> First element of numbered list </li>
<li> The second element of the numbered list </li>
<li> The third element of the numbered list </li>
</ol>

A bulleted list is defined in a similar way. In addition to the numbered and bulleted lists,
the <dl> tag should also be mentioned, which allows users to create a list of definitions.
The <dl> tag is used together with the paired <dt> and <dd> tags. The term is placed in
the <dt> tag, and its definition is placed in the <dd> tag, as shown in the following exam-
ple:

<dl>
<dt> Term 1 </dt>
<dd> Term definition 1</dd>
<dt> Term 2 </dt>
<dd> Term Definition 2</dd>
</dl>

Another way to separate sections of text is to draw horizontal lines. For this, an unpaired
<hr> tag is used; the line created with this tag will be drawn to the full width of the parent
element.

All these tags are used to change the appearance of the entire paragraphs, but there are
also many tags that allow users to directly change the font of individual sections of text,
such as the following examples:

• <b>…</b>: bold
• <i>…</i>: italic
• <u>…</u>: underlined
• <sup>…</sup>: superscript
• <sub>…</sub>: subscript
• <font>…</font>: font size, typeface, and color

HTML has a lot of tags, but many tags do not directly affect the appearance of the HTML
document. Most of them are rather a type of “pointer”, explaining how to interpret the
content placed within them. This is the purpose of CSS.

An HTML document is a plain text file, which is why it can be generated in any text editor, but it is more conven-
ient to use more advanced solutions, like Microsoft Visual Studio, Atom, or Coffee Cup.
Good editors help users with built-in functions, such as auto-completion, syntax highlight-
ing, error detection, search and replace, or File Transfer Protocol (FTP) integration. Using
webpages only with text would never have made the internet so popular. Graphics, video,
and sound can be used to diversify a webpage. The structure of multimedia information is
completely different from the structure of text information; therefore, multimedia cannot
be directly described in HTML code. All multimedia is contained in separate files and is
usually stored on separate servers, and is referenced from the HTML code. To add an
image, an unpaired <img> tag is used, containing the required “src” attribute indicating
the location of the graphics file to display. HTML also provides tags for working with
audio and video: <audio> and <video>. These tags are components of the browser’s own
environment. This means that no third-party means are used to display the multimedia
information. The use of <audio> and <video> also allows users to control media from
web scripts.
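
As a brief illustration of controlling media from a script, the following JavaScript sketch
assumes an <audio id="player" src="music.mp3"> element and a <button id="playButton">
element on the page; the IDs and file name are invented for illustration:

// Toggle playback of the audio element when the button is clicked.
const player = document.getElementById("player");
const playButton = document.getElementById("playButton");

playButton.addEventListener("click", () => {
  if (player.paused) {
    player.volume = 0.5; // play at half volume
    player.play();
    playButton.textContent = "Pause";
  } else {
    player.pause();
    playButton.textContent = "Play";
  }
});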

General Structure of an HTML File

Every HTML document starts with a basic structure. It includes the tags found in any HTML
file. These tags and service information are needed by the browser to display the informa-
tion correctly. This structure tells the browser a lot of useful information:

<!DOCTYPE HTML>
<HTML lang="de">
<head>
<meta charset="UTF-8">
<title> The only website for cat owners </title>
</head>
<body></body>
</HTML>

The <!DOCTYPE> element tells the browser which HTML standard is used in this docu-
ment. The HTML language follows the rules contained in the Document Type Definition
(DTD) file. A DTD is an XML document that defines which elements, attributes, and their
values are valid for a particular HTML type. Each version of HTML has its own DTD. DOC-
TYPE is responsible for the correct display of the webpage by the browser. DOCTYPE
defines not only the HTML version but also the corresponding DTD file. The elements
inside the <HTML> element form a document tree, the DOM. The <HTML>…</HTML> tag is
the root element of the document. All other elements are contained within <HTML>...</
HTML>. Anything outside this element is not treated as HTML by the browser and is not
processed in any way.

The <head>…</head> tag is used to store service information for the browser. A variety of
elements are possible here; they tell the browser the page title, description, keywords, and
so on. This type of information is called “meta-information”; it not only serves the browser
but is also used for website promotion. Search engines read this information and use it to
rank the website for different search queries. Any data specified inside the <head> tag is
not visible on the webpage, so information intended for display should not be placed here.
The <meta> element is an optional element of the <head> section. Here, a description of
the page content, keywords for search engines, the author of the HTML document, and
other metadata can be defined. The <head> element can contain multiple <meta> elements
because, depending on the attributes used, they represent different information. The
<meta> element can also be used to enable or disable indexing of a webpage by search
engines. <title> is a required element of the <head> section. The text placed inside the
<title> element is displayed in the title bar of the web browser. The title should not be
longer than about 60 characters so that it fits into the title bar, and it should describe the
contents of the webpage adequately. The <body>…</body> paired tag is the body of the
entire page. This is where all the information that will be displayed on the page is located.
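
As a small sketch of the <head> section described above, the following example combines several <meta> elements; the description, keywords, author name, and robots value are purely illustrative:

<head>
  <meta charset="UTF-8">
  <meta name="description" content="Care and feeding tips for cat owners">
  <meta name="keywords" content="cats, pet care, feeding">
  <meta name="author" content="Jane Doe">
  <!-- asks search engines not to index this page -->
  <meta name="robots" content="noindex">
  <title>The only website for cat owners</title>
</head>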

CSS

CSS is a design and formatting language that defines styles for page elements, such as
margins, font family, size, and color, as well as text emphasis, navigation elements, or hover
effects. It can be used with XML-type documents, markup languages, and graphics formats.
CSS ensures a consistent appearance of paragraphs and text functionality throughout
multiple webpages. The HTML file of a webpage refers to a cascading style sheet, and
the browser uses this information for the subsequent formatting. CSS additionally supports
the positioning of text flow around images, print layouts, and aural output.

CSS describes how elements are formatted using properties and the valid values for those
properties. For each element, only a limited set of properties applies; other properties will
not have any effect. A style rule consists of two parts: the selector and the declaration
block.

p {
  color: red;
  font-size: 20px;
}

There are external and internal style sheets. An external style sheet is stored separately
from the HTML code. The file is created in the code editor, just like an HTML page, and is a
plain text file with the extension “.css”. It contains only style rules, not HTML markup. An
external style sheet is linked to a webpage using a <link> element located inside the
<head> section. These styles work for all pages of the site that include the link. Multiple
style sheets can be attached to a webpage by adding several <link> elements and
specifying the purpose of each style sheet in the media attribute. Internal styles are
embedded in the <style>…</style> element of the <head> section. Internal styles take
precedence over external styles but are themselves overridden by inline styles (specified via
the style attribute).
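
The following sketch shows both approaches side by side; the file names and the media value are illustrative:

<head>
  <!-- external style sheet, applied to all media -->
  <link rel="stylesheet" href="styles.css">
  <!-- second external style sheet, used only when the page is printed -->
  <link rel="stylesheet" href="print.css" media="print">
  <!-- internal styles embedded in the page itself -->
  <style>
    h1 { color: darkgreen; }
  </style>
</head>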

9.2 JavaScript
JavaScript is a scripting language. A script is a set of instructions that are immediately
interpreted by the calling application. JavaScript code on a webpage is interpreted by the
browser engine while the webpage is loaded. The browser’s interpreter parses, evaluates,
and executes the script line by line. JavaScript can either be incorporated into the markup
language or referenced if it exists in a separate file (with the “.js” file extension). In the latter
case, the script file can be referenced in the <head> or the <body> portion of an HTML
document, as shown in the following examples:

<head>
<script src="ClickButton.js"></script>
</head>

<body>
<script src="ClickButton.js"></script>
</body>

Technically, there is almost no difference in terms of execution; however, since HTML text is
interpreted sequentially, it is recommended to place heavy scripts at the end of the
<body> section. Embedded scripts can be used as event handlers: an event is added to the
HTML element as an attribute, and the required function is specified as the value of this
attribute. The function that is called in response to an event is the event handler. When the
event is triggered, the code associated with it is executed. This method is mainly used for
short scripts; for example, the background color can be changed when a button is clicked.

<script>
  // create array with colors
  var colorArray = ["#5A9C6E", "#A8BF5A", "#FAC46E", "#FAD5BB", "#F2FEFF"];
  var i = 0;

  function changeColor() {
    document.body.style.background = colorArray[i];
    i++;
    if (i > colorArray.length - 1) {
      i = 0;
    }
  }
</script>
<button onclick="changeColor();">Change background</button>

The <script> element can be inserted anywhere in the document. The script code is exe-
cuted immediately after being read by the browser or triggered when the corresponding
event is called. The description of the function can be placed anywhere but it is important
that by the time it is called, the function code has already been loaded.

The data handled by JavaScript are stored in variables. Variables are named containers
that store data (values) that can change during program execution. Variables have a
name, a type, and a value. A variable name, or identifier, can only include the letters a-z and
A-Z, the digits 0-9 (a digit cannot be the first character of a variable name), and the under-
score; spaces are not allowed. The length of a variable name is not limited. Naturally,
JavaScript keywords cannot be used as variable names. Variable names in JavaScript are
case-sensitive. A variable is declared using the var keyword followed by the name of the
variable; the more recent let keyword fulfils the same function. A variable must be declared
before it is used. A variable is initialized with a value using the = assignment operator, as
shown in this example:

var message = "Hello";

let message = "Hello";

A variable can be declared without a value; in this case it is assigned the default value
undefined. JavaScript is a dynamically typed language, meaning the data type of a particular
variable does not need to be specified when it is declared. The data type of a variable
depends on the values it holds, and the type can change while operations are performed
on the data. Type conversion is performed automatically depending on the context in
which values are used. The data type of a variable can be obtained using the
typeof operator. This operator returns a string that identifies the corresponding type, as
shown below:

typeof "text"; // returns "string"
typeof 17; // returns "number"

All data types in JavaScript are divided into two main groups: primitive data types and
composite data types. Primitive data types include string, number, boolean, null,
and undefined. Composite data types consist of more than one value. These include
objects and special kinds of objects, such as arrays and functions. Objects contain
properties and methods, whereas arrays are indexed collections of elements, and func-
tions are made up of a collection of statements.
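
A short sketch of these composite values (the names and contents are illustrative):

// object with properties and a method
var cat = {
  name: "Momo",
  age: 3,
  speak: function () { return "Meow"; }
};

// array: an indexed collection of elements
var toys = ["ball", "mouse", "string"];

// function: a reusable collection of statements
function describe(pet) {
  return pet.name + " is " + pet.age + " years old";
}

console.log(typeof cat);      // "object"
console.log(typeof toys);     // "object" (arrays are a special kind of object)
console.log(typeof describe); // "function"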

Variables can be global or local in scope. A scope is the part of a script within which a
variable name is associated with that variable and returns its value. Variables declared
within the body of a function are called local variables and can only be used within that
function. Local variables are created and removed together with the corresponding function
call. Variables declared directly inside a <script> element, or inside a function but without
the var keyword, are called global variables. They can be accessed for as long as the page is
loaded in the browser and can be used by all functions, allowing them to exchange data. In
JavaScript, keywords are reserved words; they stand for operators, data types, and state-
ments, for example, instanceof, delete, typeof, var, and function. The difference
between global and local scope is sketched in the example below.
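
A minimal sketch of this difference (the variable and function names are illustrative; the code would sit inside a <script> element):

var siteName = "Cat Owners";        // global: declared at script level

function greet() {
  var greeting = "Welcome to ";     // local: exists only inside greet()
  console.log(greeting + siteName); // a function can read global variables
}

greet();                            // logs "Welcome to Cat Owners"
// console.log(greeting);           // would fail: greeting is local to greet()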

9.3 Operators and Functions


Expressions in JavaScript are combinations of operands and operators. Operations within
an expression are executed according to their precedence. The returned result is not always
of the same type as the data being processed.

Figure 25: Example of JavaScript

Source: Vladyslava Volyanska (2022).
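
In addition to the figure, the following rough sketch illustrates operator precedence and automatic type conversion; the values are illustrative:

var result = 2 + 3 * 4;     // 14: multiplication has higher precedence than addition
var label  = "Total: " + 5; // "Total: 5": the number is converted to a string
var loose  = "5" == 5;      // true: == converts the operand types before comparing
console.log(result, label, loose);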

JavaScript makes extensive use of loops. They optimize the coding process by repeatedly
executing the same instruction or block of instructions, the loop body, a given number
of times or while a given condition is true. Loops iterate over a sequence of values. Execut-
ing the loop body once is called an iteration. Loop performance is affected by the number of
iterations and by the number of operations performed in the body of the loop in each itera-
tion. JavaScript has the following loop statements, sketched in the code after this list:

• for is used when we know in advance how many times something needs to be done.
• for...in iterates over the property names (keys or indices) of an object or array.
• while repeats the loop body as long as a certain condition remains true.
• do...while works similarly to the while statement. The difference is that do...while
executes the loop body at least once, even if the condition test returns false.
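
The following sketch shows all four loop forms; the array contents are illustrative:

var cats = ["Momo", "Luna", "Felix"];

// for: the number of iterations is known in advance
for (var i = 0; i < cats.length; i++) {
  console.log(cats[i]);
}

// for...in: iterates over the indices (property names) of the array
for (var index in cats) {
  console.log(index + ": " + cats[index]);
}

// while: repeats as long as the condition is true
var n = 0;
while (n < cats.length) {
  console.log(cats[n]);
  n++;
}

// do...while: the body runs at least once, even if the condition is false
var m = 0;
do {
  console.log(cats[m]);
  m++;
} while (m < cats.length);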

Figure 26: Example of JavaScript Output

Source: Vladyslava Volyanska (2022).

There is a risk of creating an infinite loop that never ends. Most modern browsers can
detect this and prompt the user to stop running the script. To avoid creating an infinite
loop, we must be sure that the given condition will return false at some point.

Figure 27: Example of a While Loop in JavaScript

Source: Vladyslava Volyanska (2022).

The loop can be controlled using the break and continue statements. break terminates
the execution of the current loop. It is used in cases where the loop cannot or should not
continue, for example, when the application encounters an error. continue stops the current
iteration of the loop and starts the next iteration.

Figure 28: Example of Continue in JavaScript

Source: Vladyslava Volyanska (2022).
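
A minimal sketch of both statements (the numbers are illustrative):

for (var i = 1; i <= 10; i++) {
  if (i === 5) {
    break;        // stop the whole loop as soon as i reaches 5
  }
  if (i % 2 === 0) {
    continue;     // skip even numbers and start the next iteration
  }
  console.log(i); // logs 1 and 3
}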

An important concept in JavaScript is the document object model (DOM). The DOM speci-
fies an application programming interface (API) for HTML and XML documents. An API
defines the commands used for communicating with a program. In other words, an API
allows one program to interact with another. The DOM specification defines a set of inter-
faces for accessing and manipulating objects in HTML and XML documents. Interfaces are
an abstraction; they define how to access and manipulate the internal representations of a
document in an application, rather than defining the actual data. In DOM terminology, the
document is treated as a tree, and each standalone element or text block is known as a
node. The node is the fundamental building block; a node is a generic name for an object of
any type in the DOM hierarchy. Nodes have a parent-child relationship when one container
contains another. When the page is loaded, the browser’s memory maintains the structure
of objects generated according to the HTML tags used in the document. Each node has a
parent node (excluding the document root node) and may have several child nodes. Some
types of nodes can have children of various types; others cannot have children at all. The
tree is made up of nodes, but only some of them are HTML elements.
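
A small sketch of how a script might read and modify this tree; the element id and text are illustrative, and the code assumes an element with id="page-title" exists on the page:

// select an existing element and inspect its relationships in the tree
var heading = document.getElementById("page-title");
console.log(heading.parentNode.nodeName); // name of the parent node
console.log(heading.childNodes.length);   // number of child nodes

// create a new node and attach it as a child of <body>
var note = document.createElement("p");
note.textContent = "Generated by a script.";
document.body.appendChild(note);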

The core DOM API consists of the Node, Element, Document, and other interfaces. The DOM
standard also includes interfaces that are specific to HTML documents and to the tags of
many HTML elements. These interfaces, like HTMLBodyElement and HTMLTitleElement,
define a set of properties corresponding to the tag’s attributes, which provide convenient
access to attribute values. The HTMLElement interface is the base interface from which all
HTML element interfaces derive; it is used directly by elements that do not have additional
attributes.

When working with JavaScript, it is convenient to use jQuery. jQuery is a JavaScript library
that provides ready-made functions of the JavaScript language. All jQuery operations are
performed from JavaScript code. The jQuery library manipulates HTML elements and their
behavior and uses the DOM to change the structure of the webpage. The source HTML and
CSS files do not change; changes are made only to the page as it is displayed to the user.
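
A tiny sketch of the idea, assuming the jQuery library has already been included on the page; the selectors and the id "status" are illustrative:

// when the document is ready, react to clicks on any <button> element
$(document).ready(function () {
  $("button").click(function () {
    $("#status").text("Button was clicked"); // assumes an element with id="status" exists
    $(this).hide();                          // hide the clicked button via the DOM
  });
});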

SUMMARY
JavaScript is a scripting language that brings interactivity to webpages.
JavaScript gives webpages the ability to respond to user actions and
turns static pages into dynamic ones. JavaScript code is either executed
when the webpage is loaded or as a result of a specific event. Most web-
pages rely on an interplay of multiple servers, which are typically called
from a central page via JavaScript. JavaScript extends the functionality
of a webpage beyond the pure HTML code.

BACKMATTER
LIST OF TABLES AND
FIGURES
Table 1: Areas of Emerging Technology and Innovation Identified . . . . . . . . . . . . . . . . . . . . 14

Figure 1: Mathematical Model of a Perceptron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

Figure 2: Comparison of the OSI and the TCP/IP model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

Figure 3: Switching Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Figure 4: Wireless Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Figure 5: Gantt Chart in MS Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Figure 6: Task Breakdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

Table 2: Tasks to Create a Blog Post . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

Figure 7: The Most Used Flowchart Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Figure 8: Address Data Breakdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

Figure 9: VR Reality Logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

Figure 10: GIMP Tool Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

Figure 11: Create a New Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

Figure 12: Expand from Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

Figure 13: Change Foreground Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

Figure 14: Logo Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Figure 15: Logo Process 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Figure 16: Logo Process 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

Figure 17: Layers Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

Figure 18: Logo Process 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

Figure 19: Layered Images for a GIF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

Figure 20: Layers Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

Figure 21: Tweening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

Figure 22: Morphing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

Figure 23: Powtoon Sidebar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

Figure 24: Powtoon Export Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

Figure 25: Example of JavaScript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Figure 26: Example of JavaScript Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

Figure 27: Example of a While Loop in JavaScript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

Figure 28: Example of Continue in JavaScript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

IU Internationale Hochschule GmbH
IU International University of Applied Sciences
Juri-Gagarin-Ring 152
D-99084 Erfurt

Mailing Address
Albert-Proeller-Straße 15-19
D-86675 Buchdorf

[email protected]
www.iu.org

Help & Contacts (FAQ)


On myCampus you can always find answers
to questions concerning your studies.
