Key Dynamics in Computer Programming - Adele Kuzmiakova
Programming
KEY DYNAMICS IN COMPUTER
PROGRAMMING
Edited by:
Adele Kuzmiakova
ARCLER
Press
www.arclerpress.com
Key Dynamics in Computer Programming
Adele Kuzmiakova
Arcler Press
224 Shoreacres Road
Burlington, ON L7L 2H2
Canada
www.arclerpress.com
Email: [email protected]
This book contains information obtained from highly regarded resources. Reprinted material
sources are indicated and copyright remains with the original owners. Copyright for images and
other graphics remains with the original owners as indicated. A wide variety of references are
listed. Reasonable efforts have been made to publish reliable data. Authors, editors, and publishers
are not responsible for the accuracy of the information in the published chapters or consequences
of their use. The publisher assumes no responsibility for any damage or grievance to the persons
or property arising out of the use of any materials, instructions, methods or thoughts in the book.
The authors or editors and the publisher have attempted to trace the copyright holders of all
material reproduced in this publication and apologize to copyright holders if permission has not
been obtained. If any copyright holder has not been acknowledged, please write to us so we may
rectify the omission.
Notice: Registered trademark of products or corporate names are used only for explanation and
identification without intent of infringement.
Arcler Press publishes a wide variety of books and eBooks. For more information about
Arcler Press and its products, visit our website at www.arclerpress.com
ABOUT THE EDITOR
List of Figures.........................................................................................................xi
List of Tables.........................................................................................................xv
List of Abbreviations........................................................................................... xvii
Preface............................................................................................................ ....xix
6.7. Continuous State-Space Problems.................................................... 177
6.8. Dynamic Programming Under Uncertainty...................................... 179
References.............................................................................................. 187
8.18. The Future of Windows.................................................................. 244
8.19. Main Features of Microsoft Windows............................................. 245
References.............................................................................................. 254
Index...................................................................................................... 261
LIST OF FIGURES
Figure 5.7. Code for looking at range limits of types
Figure 5.8. Code for printing size of various types
Figure 5.9. Display (printf) code in C
Figure 6.1. Street map with intersection delays
Figure 6.2. Compact representation of the network
Figure 6.3. Decisions and delays with one intersection to go
Figure 6.4. Decisions and delays with two intersections to go
Figure 6.5. Charts of optimal delays and decisions
Figure 6.6. Solution by forward induction
Figure 6.7. Multistage decision process
Figure 6.8. Allowable capacity (states) for each stage
Figure 6.9. Tables to complete power-plant example
Figure 6.10. Shortest-path network for minimum-delay problem
Figure 6.11. Finding the longest path in an acyclic network
Figure 6.12. Shortest paths in a network without negative cycles
Figure 6.13. Decision tree for deterministic dynamic programming
Figure 6.14. Decision tree for dynamic programming under uncertainty
Figure 7.1. Computer system
Figure 7.2. A modern computer system
Figure 7.3. Storage device hierarchy
Figure 7.4. Structure of input/output diagnosis module
Figure 7.5. History of operating systems
Figure 7.6. Operating system functions
Figure 7.7. Kinds of operating systems
Figure 7.8. A batch operating system is depicted in the diagram
Figure 7.9. Time-sharing OS’s process state diagram
Figure 7.10. The schematic diagram for the real-time operating system
Figure 7.11. Off-line UPS topology
Figure 7.12. Online UPS topology
Figure 7.13. Diagram of distributed systems
Figure 7.14. Dual mode operations in operating system
Figure 8.1. Windows 1.0 image
Figure 8.2. Windows 2.0 image
Figure 8.3. Windows 3.0 image
Figure 8.4. Windows 3.1 image
Figure 8.5. Image of Windows 95
Figure 8.6. Screenshot of Windows 98
Figure 8.7. Image of Windows 2000
Figure 8.8. Image of Windows ME
Figure 8.9. Windows XP image
Figure 8.10. Screenshot of Windows Vista
Figure 8.11. Windows 7 image
Figure 8.12. Screenshot of Windows 8
Figure 8.13. Windows 8.1 image
Figure 8.14. Windows 10 image
Figure 8.15. Perfect interface design
Figure 8.16. Multiple window flexibility
Figure 8.17. Windows 11 removes the complexity and replaces it with simplicity
Figure 8.18. A more efficient method of communicating with the individuals you care
about
Figure 8.19. Offering the best possible PC gaming experiences
Figure 8.20. Obtaining knowledge in a more expedient manner
Figure 8.21. A latest Microsoft Store which combines your favorite programs and
entertainment together in one place
Figure 8.22. Android applications on a PC
Figure 8.23. Display of the control panel
Figure 8.24. Cortana interface
Figure 8.25. Device manager interface
Figure 8.26. Disk clean-up display
Figure 8.27. Display of the event viewer
Figure 8.28. File explorer display
Figure 8.29. Display of notification area
Figure 8.30. Registry editor application
Figure 8.31. System information window
Figure 8.32. Task manager window
LIST OF TABLES
LIST OF ABBREVIATIONS
AI artificial intelligence
ALU arithmetic logic unit
ANSI American National Standards Institute
BAL basic assembly language
COBOL common business-oriented language
CPUs central processing units
DD domain-dependent
DDEX DD-software experimental
DMA direct memory access
EEPROM electrically erasable programmable read-only memory
GUI graphical user interface
ISO International Organization for Standardization
ISVs independent software vendors
MSB most significant bit
OOP object-oriented programming
OS operating system
PC personal computers
PWA progressive web app
ROM read-only memory
SSEC selective sequence electronic calculator
UWP Universal Windows Platform
PREFACE
CHAPTER 1
FUNDAMENTALS OF COMPUTERS AND PROGRAMMING
CONTENTS
1.1. Introduction......................................................................................... 2
1.2. Hardware............................................................................................ 3
1.3. Software.............................................................................................. 8
1.4. How Do Computers Store Data?........................................................ 10
1.5. How a Program Works?..................................................................... 16
1.6. Using Python..................................................................................... 24
References................................................................................................ 29
1.1. INTRODUCTION
Consider the many different ways people use computers. In school, students use computers
for sending emails, looking up articles, taking online classes, and writing papers. Computers
are used in the workplace to analyze data, generate presentations, perform business
transactions, interact with clients and coworkers, and drive machinery in industrial plants,
among other things. People use computers at home to pay bills, shop online, communicate
with family and friends, and play video games. Car navigation systems, iPods®, cell phones,
and a variety of other devices are all computer devices. Computers have nearly unlimited
applications in our daily lives (Liu, 2020).
Because computers can be programmed, they can do a wide range of
tasks. This means that computers are not designed to perform a single task but
rather to perform any task that their programs instruct them to perform. A
program is a set of instructions that a computer follows to complete a task.
Figure 1.1, for example, depicts interfaces from two widely used programs:
Adobe Photoshop and Microsoft Word. Microsoft Word is a word processing
tool that lets you use your computer to create, edit, and print documents.
Adobe Photoshop is an image editing tool that lets you work with
photos captured with your digital camera (Feurzeig et al., 1970).
The term “software” refers to computer programs. The software on
a computer is critical because it controls everything the machine does. All the
software we use to make our computers useful is created by people
who work as software developers or programmers. A programmer, also
known as a software developer, is a person who has completed the necessary
training and acquired the necessary skills to design, create, and test
computer programs. Computer programming is an interesting and fulfilling
professional path (Knuth and Pardo, 1980). Programmers are employed in
a wide range of fields today, including business, medicine, agriculture, law
enforcement, government, academia, entertainment, and many more.
Source: https://www.fiverr.com/broewnis/convert-adobe-photoshop-to-microsoft-word.
Python is the programming language used in this book to introduce
you to the basic concepts of computer programming. Before we can begin to
explore those concepts, you must first understand some fundamental principles
about computers and how they work. This chapter will provide you with a
firm foundation of understanding that you will be able to draw on throughout your
computer science studies. First, we will go through the physical components
that are typically used in the construction of computers. Following that, we will
see how computers store data and run programs. Finally, we will get a
brief overview of the Python programming language and the software that
you will need to create Python programs (Horn et al., 2009).
The physical devices that make up a computer are referred to as the
computer’s hardware in this context. Software is the term used to describe
the programs that operate on a computer.
1.2. HARDWARE
In computing, the term “hardware” refers to all the physical components
or devices that make up a computer. A computer is not a single
device but rather a collection of devices that work together as a system.
Each device in a computer is like an instrument in a symphony orchestra:
each has its own specific function (Tejada et al., 2001).
If you have ever shopped for a computer, you have probably
seen sales literature listing components such as microprocessors, memory,
graphics cards, video displays, hard disc drives, and so on. Microprocessors,
memory, and hard disc drives are just a few of the components that can be
found in a computer. Without prior computer knowledge, or at least a friend
who is knowledgeable about computers, it may be difficult to understand
what each of these separate components does on its own. As shown
in Figure 1.2, a typical computer system is made up of the following
major components (So and Brodersen, 2008):
• CPU;
• Secondary storage devices;
• Main memory;
• Output devices;
• Input devices.
Source: https://www.tutorialsmate.com/2020/04/computer-fundamentals-tutorial.html.
Let us take a deeper look at each of these elements individually.
Source: https://www.indiatimes.com/technology/news/eniac-75-years-old-world-1st-programmable-digital-computer-534387.html.
Source: https://www.fool.com/investing/2021/01/15/why-intels-competitive-edge-is-crumbling/.
Source: https://www.indiamart.com/proddetail/ram-memory-chip-2686040191.html.
data, are sluggish to access, and are potentially unreliable. In recent years,
the use of floppy disc drives has declined considerably in favor of
more advanced devices such as USB drives. USB drives are small devices that
plug into a computer’s USB port and appear to the operating system (OS) as a
disc drive. These drives, however, do not contain a disc; they store data in
flash memory, which is a special type of memory. USB drives, often called
flash drives or memory sticks, are inexpensive, reliable, and small enough
to fit in your pocket (Babad et al., 1976).
Optical media such as DVDs and CDs are also common for data storage.
Data is encoded as a series of pits on the disc surface rather than being
recorded magnetically. A laser in a DVD or CD drive detects the
pits and thereby reads the encoded data. Optical discs can hold large amounts
of data, and since recordable DVD and CD drives are now common, they are a
handy way to make backup copies of your data (Chismar and Kriebel, 1982).
1.3. SOFTWARE
Software is required for a computer to function. Everything a
computer does, from the moment the power switch is turned on until
the moment the system is shut down, is controlled by software. Application
software and system software are the two broad categories of software
found in most computer systems. Most computer programs can be
classified into one of these two types (Bazeley, 2006).
Figure 1.6. Screens from the Fedora Linux operating systems, Mac OS X, and
Windows Vista.
Source: https://www.dmxzone.com/go/16325/os-smackdown-linux-vs-mac-os-x-vs-win-vista-vs-win-xp/.
Source: https://www.pearsonhighered.com/assets/samplechapter/0/3/2/1/0321537114.pdf.
When a piece of data is stored in a byte, the computer sets the eight bits of the
byte to an on/off pattern that represents the data. For instance, in Figure
1.8, the pattern on the left shows how the number 77 would be stored in
a byte, whereas the pattern on the right shows how the letter A would be
stored in a byte. We will go through how these patterns are formed in more
detail below.
Figure 1.8. The number 77 and the letter ‘A’ have different bit patterns.
Source: https://www.pearsonhighered.com/assets/samplechapter/0/3/2/1/0321537114.pdf.
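The two patterns in Figure 1.8 can be checked directly in Python. The short sketch below is added here for illustration and is not one of the book’s own examples; it simply prints the 8-bit pattern for the number 77 and for the ASCII code of the letter A:
# Print the 8-bit patterns described in Figure 1.8: one byte holding
# the number 77 and one byte holding the character 'A'.
number = 77
letter = 'A'
print(format(number, '08b'))       # 01001101 -- the byte that stores 77
print(format(ord(letter), '08b'))  # 01000001 -- the byte that stores the ASCII code of 'A' (65)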
system, as you can see. In the binary numbering system, all numeric
values are represented as sequences of 0s and 1s. This is known as the binary
representation of numbers. A number written in binary form is shown
below as an illustration (Amit et al., 1985):
10011101
Each digit in a binary number has a value associated with it based on
its position in the number. As illustrated in Figure 1.9, the position values are
2⁰, 2¹, 2², 2³, and so on, starting with the rightmost digit and
working your way left. Figure 1.10 shows the same diagram as Figure
1.9, but with the position values calculated. The position values are as
follows: 1, 2, 4, 8, and so on, starting with the rightmost digit and working
your way left (Grinko et al., 1995).
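As a quick check of these position values, the following sketch (added for illustration, not part of the original examples) evaluates the binary number 10011101 by multiplying each digit by its position value and summing the results:
# Evaluate 10011101 by summing digit * position value (1, 2, 4, 8, ...
# from the rightmost digit), then compare with Python's built-in conversion.
bits = '10011101'
value = 0
for position, digit in enumerate(reversed(bits)):
    value += int(digit) * 2 ** position
print(value)         # 157
print(int(bits, 2))  # 157 -- the built-in conversion agrees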
The music that you listen to in iTunes, on your CD player, iPod, or MP3
player is also digitally encoded. A digital song is divided into small pieces
known as samples. Each sample is converted into a binary number, which
can then be stored in memory. The more samples a song is divided into, the
more closely it resembles the original music when it is played back.
Approximately 44,000 samples per second are used to create CD-quality
music (Iudici and Faccio, 2014).
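To get a feel for the numbers involved, here is a back-of-the-envelope sketch using the quoted rate of roughly 44,000 samples per second; the three-minute song length is an assumption made for this example, not a figure from the text:
# Rough sample count for a hypothetical three-minute song at the
# approximate CD-quality rate quoted above.
samples_per_second = 44000
song_length_seconds = 3 * 60
total_samples = samples_per_second * song_length_seconds
print(total_samples)  # 7920000 samples, each stored as a binary number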
Figure 1.16. After copying a program into the main memory, it is run.
The fetch-decode-execute cycle is the process that a CPU goes through
as it executes the instructions of a program. This cycle, which consists of
three phases, is repeated for each program instruction. The steps are (Eckert,
1987):
•	Fetch: A program is a set of instructions written in machine
language. In the first phase of the cycle, the next instruction is read from
memory into the CPU.
•	Decode: A machine instruction is a binary number that signifies a
command to the computer’s central processing unit (CPU) to perform a certain
operation. In this phase, the CPU decodes the instruction that has just been
fetched from memory to determine which operation it should
perform.
•	Get Data and Execute: As the final phase in the cycle, the operation
is performed, or executed. These phases are depicted in
Figure 1.17.
Source: https://www.pinterest.com/pin/438115869999300657/.
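The cycle can also be illustrated with a small Python sketch. The three “opcodes” and the tiny program below are invented purely for this illustration; a real CPU works on binary machine instructions rather than Python tuples:
# A toy fetch-decode-execute loop: each pass fetches the next instruction,
# decodes its opcode, and executes the corresponding operation.
LOAD, ADD, PRINT = 1, 2, 3                      # made-up opcodes
program = [(LOAD, 5), (ADD, 7), (PRINT, None)]  # a tiny "machine language" program
accumulator = 0
program_counter = 0
while program_counter < len(program):
    opcode, operand = program[program_counter]  # fetch the next instruction
    program_counter += 1
    if opcode == LOAD:                          # decode ...
        accumulator = operand                   # ... and execute
    elif opcode == ADD:
        accumulator += operand
    elif opcode == PRINT:
        print(accumulator)                      # displays 12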
Source: https://www.educba.com/assembly-language-vs-machine-language/.
Language – Description
Ada – Ada was developed in the 1970s, mainly for use by the United States Department of Defense. The language is named in honor of Countess Ada Lovelace, a significant and important figure in the history of computing.
BASIC – Beginner’s All-purpose Symbolic Instruction Code is a language that was created in the early 1960s with the goal of being easy for novices to learn. Many different variations of BASIC are available today.
FORTRAN – FORmula TRANslator was the first high-level programming language. It was created in the 1950s to handle complicated mathematical operations.
Pascal – Pascal was created in 1970 with the intention of being used to teach programming. The language is named in honor of Blaise Pascal, a mathematician, physicist, and philosopher.
Source: https://tutorials.one/computer-science-engineering/.
In the Python language, the interpreter is a program that both translates
and executes the instructions in a high-level language program. The
interpreter reads each individual instruction in the program, converts it to
machine language instructions, and then executes them immediately. This
procedure is repeated for each of the program’s instructions. Figure 1.20
illustrates this process. Because they combine translation with execution,
interpreters seldom generate separate machine language programs
(Danvy, 2008).
Source: https://slideplayer.com/slide/7417033/.
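The read-translate-execute loop can be sketched in a few lines of Python. The two-word command language below is made up for illustration; the point is simply that each instruction is read, decoded, and carried out immediately before the next one is considered:
# A minimal interpreter loop for a made-up command language.
source_lines = [
    'PRINT hello',
    'PRINT world',
]
for line in source_lines:                   # read one instruction
    command, argument = line.split(' ', 1)  # translate (decode) it
    if command == 'PRINT':                  # execute it immediately
        print(argument)
    else:
        raise ValueError('unknown command: ' + command)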
Source code refers to the statements that a programmer writes in a high-
level language. Typically, a programmer types the code for a program
into a text editor and then saves it to the computer’s disc. The programmer
then uses an interpreter or a compiler to translate the code into a machine
language program that can be executed. However, if the code contains a syntax
error, it will not be translated. A syntax error occurs when a keyword
is misspelled, a punctuation character is missing, or an operator is used
incorrectly. When this happens, the interpreter or compiler generates an error
message identifying the syntax error in the program. The programmer
fixes the error and then tries to translate the program again (Rossum,
2007).
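The following sketch, added here for illustration, shows the idea: the string being compiled is missing its closing quote mark, so the translation step fails before anything is executed. The exact wording of the error message varies between Python versions:
# Attempting to translate source code that contains a syntax error.
broken_source = "print('Python programming is fun!)"  # closing quote is missing
try:
    compile(broken_source, '<example>', 'exec')       # the translation step
except SyntaxError as error:
    print('The code was not translated:', error.msg)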
Python
If you are using Windows, you may also go to the Start menu and choose All
Programs. You should see a program group named Python 2.5 or something
similar. Within this program group there should be a Python (command
line) item. Clicking this menu item launches the Python interpreter in
interactive mode (Frydenberg and Xu, 2019).
When you start the Python interpreter in interactive mode, you will see
something like this in the console window:
Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
(Intel)] on win32
Type “help,” “copyright,” “credits” or “license” for more information.
>>>
The >>> you see is a prompt from the interpreter, indicating that it
is waiting for you to type a Python statement. Let’s give it a try. One of the
simplest statements you can write in Python is a print statement, which causes
a message to be displayed on the screen. For example, the following statement
causes the message Python programming is fun! to be displayed:
print ‘Python programming is fun!’
Notice that we have written Python programming is fun! after the word
print, enclosed in a pair of single-quote marks. The quote marks are required,
but they will not be displayed; they simply mark the beginning and end of
the text we want to show. Here is how you would enter this print statement
at the interpreter’s prompt:
>>> print ‘Python programming is fun!’
When you press the Enter key after typing the statement, the Python
interpreter executes it, as shown here:
>>> print ‘Python programming is fun!’ [ENTER] Python programming is
fun!
>>>
The >>> prompt appears again after the message has been displayed,
indicating that the interpreter is waiting for you to enter another statement.
Let’s look at another example. In the following session we have entered two
print statements.
>>> print ‘To be or not to be’ [ENTER]
To be or not to be
>>> print ‘That is the question.’ [ENTER] That is the question.
>>>
If you type a statement incorrectly in interactive mode, the interpreter
will display an error message. This makes interactive mode a useful way to
learn Python: you can try out new parts of the Python language in
interactive mode and get immediate feedback from the interpreter as you learn
them.
To exit the Python interpreter in interactive mode on a Windows machine,
press Ctrl-Z followed by Enter. On a Mac, Linux, or UNIX computer, press
Ctrl-D (Chaudhury et al., 2010).
navigate to the directory where the file is saved and enter the following command
at the OS command prompt (Price and Barnes, 2015):
Python test.py
This starts the Python interpreter in script mode and causes the
statements in test.py to be executed. The Python interpreter terminates after
the program finishes.
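As an illustration, a hypothetical test.py might contain nothing more than the two statements below; the file name and its contents are assumptions for this example, and the parenthesized print form is used so the sketch also runs on current Python versions. Saving the file with a text editor and then typing python test.py at the command prompt executes the statements from top to bottom and returns to the OS prompt:
# Contents of a hypothetical test.py run in script mode.
print('This program is running in script mode.')
print('The interpreter terminates when the last statement finishes.')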
Source: https://www.softwaretestinghelp.com/basics-of-computer-programming/.
During the installation of the Python programming language, an
application named IDLE is automatically installed as well. IDLE is an
acronym for Integrated DeveLopment Environment. When you start IDLE,
the window shown in Figure 1.22 appears. You will notice that the >>>
prompt appears in the IDLE window, which indicates that the interpreter is
running in interactive mode. Python statements can be typed at this prompt,
and the results will be shown in the IDLE window (Swinehart et al., 1986).
IDLE also has a built-in text editor that includes features that are
specially designed to assist you in the development of Python programs.
In the IDLE editor, for example, code may be “colorized” so that keywords
and other parts of a program are shown in different colors. This makes
programs easier to read.
Source: https://web.mit.edu/6.s189/www/handouts/GettingStarted.html.
REFERENCES
1. Abali, B., Shen, X., Franke, H., Poff, D. E., & Smith, T. B., (2001).
Hardware compressed main memory: Operating system support and
performance evaluation. IEEE Transactions on Computers, 50(11),
1219–1233.
2. Aerts, A. T. M., Goossenaerts, J. B., Hammer, D. K., & Wortmann,
J. C., (2004). Architectures in context: On the evolution of business,
application software, and ICT platform architectures. Information &
Management, 41(6), 781–794.
3. Ahmed, A., Appel, A. W., Richards, C. D., Swadi, K. N., Tan, G.,
& Wang, D. C., (2010). Semantic foundations for typed assembly
languages. ACM Transactions on Programming Languages and
Systems (TOPLAS), 32(3), 1–67.
4. Amit, D. J., Gutfreund, H., & Sompolinsky, H., (1985). Storing infinite
numbers of patterns in a spin-glass model of neural networks. Physical
Review Letters, 55(14), 1530.
5. Aroca, R. V., Caurin, G., & Carlos-SP-Brasil, S., (2009). A real time
operating systems (RTOS) comparison. In: WSO-Workshop de Sistemas
Operacionais (Vol. 12, pp. 1–10).
6. Ashraf, M. U., Fouz, F., & Eassa, F. A., (2016). Empirical analysis of
HPC using different programming models. International Journal of
Modern Education & Computer Science, 8(6), 3–12.
7. Atkinson, M. P., & Buneman, O. P., (1987). Types and persistence in
database programming languages. ACM Computing Surveys (CSUR),
19(2), 105.
8. Babad, J. M., Balachandran, V., & Stohr, E. A., (1976). Management of
program storage in computers. Management Science, 23(4), 380–390.
9. Balsamo, S., Personè, V. D. N., & Inverardi, P., (2003). A review on
queueing network models with finite capacity queues for software
architectures performance prediction. Performance Evaluation, 51(2–
4), 269–288.
10. Bazeley, P., (2006). The contribution of computer software to
integrating qualitative and quantitative data and analyses. Research in
the Schools, 13(1), 64–74.
11. Ben-Akiva, M., De Palma, A., & Isam, K., (1991). Dynamic network
models and driver information systems. Transportation Research Part
A: General, 25(5), 251–266.
48. Logan, B. E., (2012). Essential data and techniques for conducting
microbial fuel cell and other types of bioelectrochemical system
experiments. ChemSusChem, 5(6), 988–994.
49. Martinovic, G., Balen, J., & Cukic, B., (2012). Performance evaluation
of recent Windows operating systems. J. Univers. Comput. Sci., 18(2),
218–263.
50. Melot, B. C., & Tarascon, J. M., (2013). Design and preparation of
materials for advanced electrochemical storage. Accounts of Chemical
Research, 46(5), 1226–1238.
51. Mészárosová, E., (2015). Is python an appropriate programming
language for teaching programming in secondary schools? International
Journal of Information and Communication Technologies in Education,
4(2), 5–14.
52. Newville, M., (2001). IFEFFIT: Interactive XAFS analysis and FEFF
fitting. Journal of Synchrotron Radiation, 8(2), 322–324.
53. Okabe, T., Yorifuji, H., Yamada, E., & Takaku, F., (1984). Isolation and
characterization of vitamin-A-storing lung cells. Experimental Cell
Research, 154(1), 125–135.
54. Olson, D. R., (2004). The triumph of hope over experience in the search
for “what works”: A response to slavin. Educational Researcher, 33(1),
24–26.
55. Ottenstein, K. J., Ballance, R. A., & MacCabe, A. B., (1990). The
program dependence web: A representation supporting control-,
data-, and demand-driven interpretation of imperative languages. In:
Proceedings of the ACM SIGPLAN 1990 Conference on Programming
Language Design and Implementation (Vol. 2, No. 1, pp. 257–271).
56. Price, T. W., & Barnes, T., (2015). Comparing textual and block
interfaces in a novice programming environment. In: Proceedings
of the Eleventh Annual International Conference on International
Computing Education Research (Vol. 2, No. 1, pp. 91–99).
57. Puckette, M., (1991). Combining event and signal processing in the
MAX graphical programming environment. Computer Music Journal,
15(3), 68–77.
58. Radwin, R. G., Vanderheiden, G. C., & Lin, M. L., (1990). A method
for evaluating head-controlled computer input devices using Fitts’ law.
Human Factors, 32(4), 423–438.
59. Rasure, J., Argiro, D., Sauer, T., & Williams, C., (1990). Visual
language and software development environment for image processing.
International Journal of Imaging Systems and Technology, 2(3), 183–
199.
60. Rutherford, T. F., (1999). Applied general equilibrium modeling
with MPSGE as a GAMS subsystem: An overview of the modeling
framework and syntax. Computational economics, 14(1), 1–46.
61. Sage, D., & Unser, M., (2003). Teaching image-processing programming
in java. IEEE Signal Processing Magazine, 20(6), 43–52.
62. Sanner, M. F., (1999). Python: A programming language for software
integration and development. J Mol Graph Model, 17(1), 57–61.
63. Slavin, R. E., (2008). Perspectives on evidence-based research in
education—What works? Issues in synthesizing educational program
evaluations. Educational Researcher, 37(1), 5–14.
64. So, H. K. H., & Brodersen, R., (2008). A unified hardware/software
runtime environment for FPGA-based reconfigurable computers
using BORPH. ACM Transactions on Embedded Computing Systems
(TECS), 7(2), 1–28.
65. Steere, D. C., Shor, M. H., Goel, A., Walpole, J., & Pu, C., (2000).
Control and modeling issues in computer operating systems: Resource
management for real-rate computer applications. In: Proceedings of
the 39th IEEE Conference on Decision and Control (Vol. 3, pp. 2212–
2221).
66. Stroud, C. E., Munoz, R. R., & Pierce, D. A., (1988). Behavioral model
synthesis with cones. IEEE Design & Test of Computers, 5(3), 22–30.
67. Summer, C. E., (1967). Critique of: “An overview of management
science and information systems.” Management Science, 13(12),
B-834.
68. Swift, J. A., & Mize, J. H., (1995). Out-of-control pattern recognition
and analysis for quality control charts using lisp-based systems.
Computers & Industrial Engineering, 28(1), 81–91.
69. Swinehart, D. C., Zellweger, P. T., Beach, R. J., & Hagmann, R. B.,
(1986). A structural view of the Cedar programming environment. ACM
Transactions on Programming Languages and Systems (TOPLAS),
8(4), 419–490.
70. Tanenbaum, A. S., Van, S. H., Keizer, E. G., & Stevenson, J. W., (1983).
A practical tool kit for making portable compilers. Communications of
the ACM, 26(9), 654–660.
71. Tanimoto, S. L., (1990). VIVA: A visual language for image processing.
Journal of Visual Languages & Computing, 1(2), 127–139.
72. Tejada, J., Chudnovsky, E. M., Del Barco, E., Hernandez, J. M., &
Spiller, T. P., (2001). Magnetic qubits as hardware for quantum
computers. Nanotechnology, 12(2), 181.
73. Thomasian, A., & Bay, P. F., (1986). Analytic queueing network
models for parallel processing of task systems. IEEE Transactions on
Computers, 35(12), 1045–1054.
74. Ton, R. V. B. D. K., Mosterd, K. T. K. B., & Smeulders, A. W., (1994).
Scillmage: A multi-layered environment for use and development of
image processing software. Experimental Environments for Computer
Vision and Image Processing, 11, 107.
75. Trichina, E., (1999). Didactic instructional tool for topics in computer
science. ACM SIGCSE Bulletin, 31(3), 95–98.
76. Uieda, L., Ussami, N., & Braitenberg, C. F., (2010). Computation of
the gravity gradient tensor due to topographic masses using tesseroids.
Eos Trans. AGU, 91(26), 1–21.
77. Van, R. G., & De Boer, J., (1991). Interactively testing remote servers
using the python programming language. CWI Quarterly, 4(4), 283–
303.
78. Van, R. G., (2007). Python programming language. In: USENIX Annual
Technical Conference (Vol. 41, No. 1, pp. 1–36).
79. Vandersteen, G., Wambacq, P., Rolain, Y., Dobrovolný, P., Donnay, S.,
Engels, M., & Bolsens, I., (2000). A methodology for efficient high-
level dataflow simulation of mixed-signal front-ends of digital telecom
transceivers. In: Proceedings of the 37th Annual Design Automation
Conference (Vol. 4, pp. 440–445).
80. Walski, T. M., Brill, Jr. E. D., Gessler, J., Goulter, I. C., Jeppson, R.
M., Lansey, K., & Ormsbee, L., (1987). Battle of the network models:
Epilogue. Journal of Water Resources Planning and Management,
113(2), 191–203.
81. Wang, Y., Wei, H., Lu, Y., Wei, S., Wujcik, E. K., & Guo, Z., (2015).
Multifunctional carbon nanostructures for advanced energy storage
applications. Nanomaterials, 5(2), 755–777.
82. Wasserman, A. I., & Prenner, C. J., (1979). Toward a unified view
of database management, programming languages, and operating
systems—A tutorial. Information Systems, 4(2), 119–126.
83. Yan, K. K., Fang, G., Bhardwaj, N., Alexander, R. P., & Gerstein, M.,
(2010). Comparing genomes to computer operating systems in terms
of the topology and evolution of their regulatory control networks.
Proceedings of the National Academy of Sciences, 107(20), 9186–
9191.
84. Zhu, J., Luo, A., Li, G., Zhang, B., Wang, Y., Shan, G., & Liu, L.,
(2021). Jintide: Utilizing low-cost reconfigurable external monitors to
substantially enhance hardware security of large-scale CPU clusters.
IEEE Journal of Solid-State Circuits, 56(8), 2585–2601.
CHAPTER 2
CLASSIFICATION OF COMPUTER
PROGRAMS
CONTENTS
2.1. Introduction....................................................................................... 38
2.2. Software Systems............................................................................... 48
2.3. General Behavior of Software Systems............................................... 51
2.4. Program Types................................................................................... 53
2.5. Computer Architecture....................................................................... 54
2.6. Examples........................................................................................... 56
2.7. Discussion......................................................................................... 57
References................................................................................................ 58
2.1. INTRODUCTION
In imperative programming, a computer program is a set of instructions written
in a programming language that a computer can execute or interpret; in
declarative programming, it is a collection of declarations that describe the
desired result rather than the steps to reach it.
Source code is the human-readable form of a computer program.
Because computers can only execute their native machine instructions, source
code requires another computer program in order to run. Using the
language’s compiler, source code can be translated into machine
instructions. (Assembly language programs are translated using an assembler.)
The generated file is called an executable. Alternatively, source code
can be run by the language’s interpreter. The Java programming language
compiles to an intermediate form that is subsequently executed by a Java
interpreter (Wilson and Leslie, 2001).
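Python itself can make the translate-then-execute idea concrete. In the sketch below, added purely as an illustration, compile() translates a small piece of source text into an intermediate code object, the standard dis module shows the generated instructions, and exec() runs them:
# Translate source text into an intermediate form, inspect it, then execute it.
import dis

source = "total = 2 + 3\nprint(total)"
code_object = compile(source, '<example>', 'exec')  # translation step
dis.dis(code_object)                                # show the intermediate instructions
exec(code_object)                                   # execution step -- prints 5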
When the operating system (OS) receives a request to run an executable,
it loads the executable into memory and starts a process to carry out the
request (Silberschatz and Abraham, 1994). The central processing unit (CPU)
is then switched to this process so that it can fetch, decode, and execute each
machine instruction. If source code is to be executed instead, the OS loads
the appropriate interpreter into memory and starts a process to run it. The
interpreter then loads the source code into memory, where it is translated
and executed one statement at a time (Tanenbaum and Andrew, 1990). Compared to
launching an executable, running source code is more time-consuming.
In addition, the interpreter must be installed on the computer in question.
Advances in computer hardware have made corresponding advances in
software development possible. Throughout the history
of hardware, the task of computer programming has undergone significant
transformations.
Charles Babbage was inspired by Jacquard’s loom to design the
Analytical Engine in 1837 (McCartney and Scott, 1999). The names of the
calculating device’s components were taken from the textile industry, in which
yarn was carried from the store to be milled. The device contained a “store,” or
memory, that could hold 1,000 numbers, each with 50 decimal digits
(Tanenbaum and Andrew, 1990). Numbers were carried from the “store” to the
“mill” for processing. Two sets of perforated cards were used to program it:
one set directed the operations, while the other supplied the input variables
(McCartney and Scott, 1999).
Source: https://www.csmonitor.com/Technology/2012/1210/Ada-Lovelace-What-did-the-first-computer-program-do.
Charles Babbage commissioned Ada Lovelace to document the
Analytical Engine (1843) (Fuegi and Francis, 2003). The description
included Note G, which set out in detail a method for using the Analytical Engine
to compute Bernoulli numbers. Some historians consider this note to be
the world’s first computer program (Tanenbaum and Andrew, 1990).
Source: https://www.wikiwand.com/en/Universal_Turing_machine.
Source: https://en.wikipedia.org/wiki/Computer_program.
Its successor, the Z4, had been developed at about the same time. (The Z3 was
destroyed by an air raid on April 6, 1945.) The Z4 went into service in
1950 at the Federal Technical Institute in Zurich.
The Harvard Mark I was a programmable digital computer built
by IBM in 1944 (Stroustrup and Bjarne, 2013). The machine had seven main
units and handled signed numbers of twenty-three decimal digits (Elgot et al., 1982):
•	One unit directed the machine’s operations;
•	One unit contained 60 dial switches for setting the application’s
constants;
•	One unit performed multiplication and division;
•	One unit performed addition and subtraction and stored the intermediate
results in 72 registers;
•	One unit computed logarithmic functions by interpolation;
•	One unit computed trigonometric functions by interpolation;
•	One unit directed the machine’s output medium, which was either a
typewriter printer or a punched-card printer.
The Harvard Mark I contained 3,304 relays and 530 miles of wire.
Input was provided by two punched-tape readers (Kernighan et al., 1988).
One reader read in the instructions. Howard H. Aiken
compiled a codebook that listed all of the known algorithms; from this book,
a programmer punched the coded commands onto a tape. The other reader
read in the data to be processed.
2.1.3. ENIAC
Between July 1943 and the fall of 1945, the Electronic Numerical Integrator
and Computer (ENIAC) was built. It was a Turing-complete, general-
purpose computer whose circuits contained 17,468 vacuum tubes (Haigh et al.,
2016). It was essentially a collection of Pascalines wired
together. Its 40 units weighed thirty tons, took up 1,800 square feet (167 m2),
and consumed $650 worth of electricity per hour (in 1940s money). It contained
20 base-10 accumulators. Programming the ENIAC took up to two months. Three
function tables were on wheels and had to be rolled to fixed function
panels (Kerrisk and Michael, 2010; Weik, 1961). Heavy black cables were
used to connect the function tables to the function panels. Each function table had
728 rotating knobs. Programming also involved setting some of the 3,000 switches
on the ENIAC. Debugging a program took a week. The ENIAC operated
at the Aberdeen Proving Ground from 1947 to 1955, computing hydrogen bomb
parameters, forecasting weather patterns, and producing firing tables for
aiming artillery (Figure 2.4) (Jones et al., 2012).
Source: https://commons.wikimedia.org/wiki/File:ENIAC-changing_a_tube.jpg.
Figure 2.5. Switches for manual input on a Data General Nova 3 from the mid-1970s.
Source: https://en.wikipedia.org/wiki/Data_General_Nova.
Source: https://en.wikipedia.org/wiki/Very_Large_Scale_Integration.
Robert Noyce, a co-founder of Intel (1968) and Fairchild Semiconductor
(1957), improved the manufacturing technology for the field-effect transistor
(1963). The objective was to alter a semiconductor junction’s electrical
resistance and conductivity. The Siemens technique was the first one
Source: https://www.quora.com/Was-the-IBM-System-360-mainframe-computer-built-with-all-transistors-or-did-it-utilize-integrated-circuits.
Source: https://www.researchgate.net/figure/The-3D-reconstruction-of-the-Sac-State-8008-microcomputer-circa-1972-73-credit-Ryan_fig1_303697288.
IBM’s basic assembly language (BAL) was used to program the disc OS.
A BASIC interpreter was used to program the medical-records application.
The computer, however, was an evolutionary dead end because of
its exorbitant cost, and it was designed for a particular purpose in a public
university laboratory (Damer, 2011). Despite this, the effort aided in the
development of the Intel 8080 instruction set (1974).
Figure 2.9. The original IBM personal computer (1981) utilized an Intel 8088
microprocessor.
Source: https://www.pcmag.com/news/project-chess-the-story-behind-the-original-ibm-pc.
Intel’s microprocessor development accelerated as consumer demand for
personal computers (PCs) grew. The development sequence is known as
the x86 series. The x86 assembly language is a family of backward-compatible
instruction sets: machine instructions defined for older
microprocessors were carried over to newer microprocessors. As a result,
customers could buy the latest computers without having to buy new
application software. The primary categories of instructions are
(Draper and Ingraham, 1968):
•	Memory instructions to set and access numbers and strings in
random-access memory;
•	Integer arithmetic logic unit (ALU) instructions to perform
elementary arithmetic operations on integers;
•	Floating-point ALU instructions to perform arithmetic operations
on real numbers;
•	Call stack instructions to push and pop the words needed to
allocate memory and interface with functions;
Figure 2.10. The DEC VT100 (1978) was a widely used computer
terminal.
Source: https://en.wikipedia.org/wiki/Computer_terminal.
Source: https://link.springer.com/chapter/10.1007/978-3-642-78612-9_3?noAccess=true.
The connection between the class of tasks to be performed and the
program is a characteristic of DDEM software. The program may modify its
application area and, as a result, the formulation of the tasks to be performed.
Software engineering systems, office automation systems, successive
generations of large-scale OSs, and factory control systems are examples
of DDEM software. The diagram below depicts how such a software system
depends on its environment and how it alters that environment (Bar-Sinai
et al., 2018).
Appropriate life cycles and strategies should be used to construct
domain-based software systems. Life cycles must be employed that begin with
incomplete requirements and end with the design and implementation of an
incomplete system. The development process then continues with a phase
of requirements formulation for the second stage; the second stage is then
designed and implemented, and so forth. This software development
strategy also requires that the specification method allow partial
requirements to be stated (Noldus, 1991). To cope with the frequent
changes, the design and the design method must be adaptable.
Source: https://link.springer.com/chapter/10.1007/978-3-642-78612-9_3?noAccess=true.
Source: https://zitoc.com/multiprocessor-system/.
Source: https://www.researchgate.net/figure/Distributed-System-Structure_fig2_287975451.
Source: https://edux.pjwstk.edu.pl/mat/264/lec/main119.html.
Tools such as pre-compilers and compilers may hide the system architecture
and the program type. A system’s particular architecture needs to be taken into
account in software development only if it is to be exploited explicitly. As noted
earlier, developing a distributed software system that runs on a distributed system
architecture requires special methodologies (Agha, 1985). We assume that
a developer knows when he must construct a distributed software system;
in this way, he can make explicit use of the benefits of distributed
systems (Robins et al., 2003).
2.6. EXAMPLES
To illustrate the categories introduced in the preceding sections,
several software systems are examined. A compiler, an editor, a fast Fourier
transform, a flight reservation system, a chess program, a flight control
system, and a physical model are all characterized in Table 2.1 (Hasselbring et al.,
2006). Some characteristics may be present in a piece of software, but they are
not required. Optional properties are marked in Table 2.1 with a question
mark; for example, an editor may be fault-tolerant but does not have to be.
A flight reservation system, for example, may run on a distributed or
centralized system, possibly including a distributed database system (Park et al.,
1997; Liskov, 1972).
2.7. DISCUSSION
We recognize that the classifications presented in the preceding sections
cannot be complete, but they do cover a broad range of software systems.
Artificial intelligence (AI) is one topic that is not fully addressed. AI
requires knowledge, and because that knowledge is vast, difficult to
specify precisely, and continually changing, it requires the adoption of
special ways to represent it. It is feasible to tackle AI problems, such as theorem
proving and natural language understanding, without applying AI techniques,
but such solutions are unlikely to be particularly successful. AI
systems are currently implemented in PROLOG or LISP. Such languages may
be classified as sequential, but that tells us nothing about the tasks
they are meant to perform. This shows that our software taxonomy
is merely a first step toward expressing the complexity of a software system;
nevertheless, we believe it can be useful in conventional software fields.
REFERENCES
1. Agha, G. A., (1985). Actors: A Model of Concurrent Computation
in Distributed Systems (Vol. 1, p. 1–10). Massachusetts Inst of Tech
Cambridge Artificial Intelligence Lab.
2. Atkins, D., Neshatian, K., & Zhang, M., (2011). A domain-independent
genetic programming approach to automatic feature extraction
for image classification. In: 2011 IEEE Congress of Evolutionary
Computation (CEC) (pp. 238–245). IEEE.
3. Bach, M. J., (1986). The Design of the UNIX Operating System (p.
152). Prentice-Hall, Inc. ISBN 0-13-201799-7.
4. Bar-Sinai, M., Weiss, G., & Shmuel, R., (2018). BPjs: An extensible,
open infrastructure for behavioral programming research. In:
Proceedings of the 21st ACM/IEEE International Conference on Model-
Driven Engineering Languages and Systems: Companion Proceedings
(pp. 59–60).
5. Bromley, A. G., (1998). Charles Babbage’s analytical engine, 1838
(PDF). IEEE Annals of the History of Computing, 20(4), 29–45. doi:
10.1109/85.728228. S2CID 2285332.
6. Chandy, K. M., & Kesselman, C., (1991). Parallel programming in
2001. IEEE Software, 8(6), 11–20.
7. Damer, B., (2011). TIMELINES the DigiBarn computer museum: A
personal passion for personal computing. Interactions, 18(3), 72–74.
8. Draper, R. D., & Ingraham, L. L., (1968). A potentiometric study of
the flavin semiquinone equilibrium. Archives of Biochemistry and
Biophysics, 125(3), 802–808.
9. Elgot, C. C., & Robinson, A., (1982). Random-access stored-program
machines, an approach to programming languages. In: Selected Papers
(pp. 17–51). Springer, New York, NY.
10. Fleischmann, A., (1994). Classification of software system types. In:
Distributed Systems (pp. 35–44). Springer, Berlin, Heidelberg.
11. Fuegi, J., & Francis, J., (2003). Lovelace & Babbage and the creation
of the 1843 ‘notes.’ IEEE Annals of the History of Computing, 25(4),
16–26.
12. Gordon, M., (1996). From LCF to HOL: A Short History (Vol. 5, No.
2, pp. 5–12).
27. Koren, I., (2018). Computer Arithmetic Algorithms (Vol. 3, No. 2, pp.
1–5). AK Peters/CRC Press.
28. Kuhl, J. G., & Reddy, S. M., (1980). Distributed fault tolerance for large
multiprocessor systems. In: Proceedings of the 7th Annual Symposium
on Computer Architecture (pp. 23–30).
29. Kuo, W., & Prasad, V. R., (2000). An annotated overview of system-
reliability optimization. IEEE Transactions on Reliability, 49(2), 176–
187.
30. Lacamera, D., (2018). Embedded Systems Architecture (p. 8). Packt.
ISBN 978-1-78883-250-2.
31. Lee, C. M., (2000). The Silicon Valley Edge: A Habitat for Innovation
and Entrepreneurship (Vol. 25, No. 3, pp. 2–8). Stanford University
Press.
32. Lee, J. M., & Lee, J. H., (2004). Approximate dynamic programming
strategies and their applicability for process control: A review and
future directions. International Journal of Control, Automation, and
Systems, 2(3), 263–278.
33. Linz & Peter (1990). An Introduction to Formal Languages and
Automata (p. 234). D. C. Heath and Company. ISBN 978-0-669-
17342-0.
34. Liskov, B. H., (1972). A design methodology for reliable software
systems. In: Proceedings of the 1972, Fall Joint Computer Conference,
Part I (pp. 191–199).
35. Luus, R., (1975). Optimization of system reliability by a new nonlinear
integer programming procedure. IEEE Transactions on Reliability,
24(1), 14–16.
36. Martin-Löf, P., (1982). Constructive mathematics and computer
programming. In: Studies in Logic and the Foundations of Mathematics
(Vol. 104, pp. 153–175). Elsevier.
37. McCartney, S., (1999). ENIAC: The Triumphs and Tragedies of the
World’s First Computer (p. 16). Walker and Company. ISBN 978-0-
8027-1348-3.
38. Moser, L. E., Melliar-Smith, P. M., Agarwal, D. A., Budhia, R. K., &
Lingley-Papadopoulos, C. A., (1996). Totem: A fault-tolerant multicast
group communication system. Communications of the ACM, 39(4),
54–63.
52. Sommer, S., Camek, A., Becker, K., Buckl, C., Zirkler, A., Fiege, L.,
& Knoll, A., (2013). Race: A centralized platform computer-based
architecture for automotive applications. In: 2013 IEEE International
Electric Vehicle Conference (IEVC) (pp. 1–6). IEEE.
53. Stair, R. M., (2003). Principles of Information Systems (6th edn., p.
159). Thomson. ISBN 0–619-06489-7.
54. Stroustrup, B., (2013). The C++ Programming Language (4th edn., p.
40). Addison-Wesley. ISBN 978-0-321-56384-2.s.
55. Tan, S. D., & Shi, C. J., (2003). Efficient very large-scale integration
power/ground network sizing based on equivalent circuit modeling.
IEEE Transactions on Computer-Aided Design of Integrated Circuits
and Systems, 22(3), 277–284.
56. Tanenbaum, A. S., (1990). Structured Computer Organization (3rd
edn., p. 32). Prentice-Hall. ISBN 978-0-13-854662-5.
57. Thota, C., Sundarasekar, R., Manogaran, G., Varatharajan, R., &
Priyan, M. K., (2018). Centralized fog computing security platform for
IoT and cloud in healthcare system. In: Fog Computing: Breakthroughs
in Research and Practice (pp. 365–378). IGI global.
58. Tolpygo, S. K., Bolkhovsky, V., Weir, T. J., Wynn, A., Oates, D. E.,
Johnson, L. M., & Gouker, M. A., (2016). Advanced fabrication
processes for superconducting very large-scale integrated circuits.
IEEE Transactions on Applied Superconductivity, 26(3), 1–10.
59. Turski, W. M., & Wasserman, A. I., (1978). Computer programming
methodology. ACM SIGSOFT Software Engineering Notes, 3(2), 20–
21.
60. Weik, M. H., (1961). The ENIAC story. Ordnance (3rd edn., Vol. 45,
No. 244, pp. 571–575).
61. Weiss, M. A., (1994). Data Structures and Algorithm Analysis in C++
(p. 29). Benjamin/Cummings Publishing Company, Inc. ISBN 0-8053-
5443-3.
62. Wilson, L. B., (2001). Comparative Programming Languages (3rd edn.,
pp. 7, 29). Addison-Wesley. ISBN 0-201-71012-9.
63. Zhang, M., Ciesielski, V. B., & Andreae, P., (2003). A domain-
independent Window approach to multiclass object detection using
genetic programming. EURASIP Journal on Advances in Signal
Processing, 2003(8), 1–19.
CHAPTER 3
FUNDAMENTALS OF PROGRAMMING LANGUAGES
CONTENTS
3.1. Introduction....................................................................................... 66
3.2. Purpose of Programming Languages.................................................. 67
3.3. Imperative Languages........................................................................ 70
3.4. Data-Oriented Languages.................................................................. 77
3.5. Object-Oriented Languages............................................................... 83
3.6. Non-Imperative Languages................................................................ 84
3.7. Standardization.................................................................................. 86
3.8. Computability.................................................................................... 87
References................................................................................................ 88
3.1. INTRODUCTION
A competent programmer can develop good software in any language,
just as a good pilot can fly any plane. A passenger plane is built
for comfort, safety, and economy; a military plane is built for
performance and mission capability; and an ultralight plane is built for
low cost and ease of operation. Once it is asserted that a well-designed
system can be implemented equally effectively in any language, the role
of language in programming is diminished in favor of software tools and
methodologies; not merely diminished, but rejected altogether (Sammet, 1972).
However, programming languages are more than simply a tool; they provide
the raw material of software, which is what we spend the majority of our
time looking at on our computers. The programming language is among the
most essential, if not the most essential, factors that influence the overall
quality of a software system. Yet many programmers are linguistically illiterate:
they are enamored of their “native” programming language but unable
to analyze and compare language constructs, or to appreciate the
benefits and drawbacks of modern languages and concepts. “Language L1 is
more effective (or efficient) than language L2,” for example, is a
statement that frequently demonstrates conceptual confusion (Rosen, 1971).
Because of this lack of understanding, there are two serious problems
in software that must be addressed. First, there is extreme
conservatism in the choice of programming languages.
Despite the rapid advances in computer technology and the
sophistication of modern software systems, the vast majority of programming
is still done in languages that were invented around 1970, if
not earlier. Extensive research on programming languages is never put to
the test in the real world, and software developers are forced to rely on
tools and methodologies to compensate for outdated programming language
technology. It is as if airlines refused to experiment with jet aircraft on
the grounds that a traditional propeller plane is perfectly capable of carrying
passengers from point A to point B just as efficiently (Sammet, 1991).
In addition, language constructs are used indiscriminately, with little or
no concern for the safety or efficiency of the system. The result is
faulty software that cannot be maintained, as well as inefficiencies that are
patched over with assembly language code instead of by improving
the programming paradigms and algorithms themselves (King, 1992).
It is solely for the purpose of bridging the gap in the level of abstraction
between the real world and the hardware that programming languages have
Source: https://www.bmc.com/blogs/programming-languages/.
Because computers are binary machines that recognize only zeros and
ones, storing programs in them is conceptually simple but difficult
in practice, because every instruction must be encoded as binary digits (bits)
that can be represented electronically or mechanically. The symbolic assembler
was one of the first software tools developed to solve this problem.
An assembler reads an assembly language program, in which each
instruction is represented by a symbol, and converts it to the binary form suitable
for execution by the computer. For instance, consider the following instruction
(Ahmed et al., 2014):
load R3,54
Its meaning, "load register 3 with the contents of memory location 54," is far
more comprehensible than the corresponding string of bits.
Believe it or not, the phrase "automatic programming" initially referred
to assemblers, which chose the correct bit sequence for every symbol
automatically. Pascal and C are more advanced than assemblers
in that they "automatically" select registers and addresses, and
"automatically" choose instruction sequences to implement arithmetic expressions
and loops (Antolík and Davison, 2013).
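To make the idea concrete, here is a deliberately tiny sketch of an assembler written in Python. The two mnemonics, the opcode values, and the 16-bit instruction format are invented purely for illustration; real assemblers differ in detail but work on the same principle of mapping symbols onto bit patterns:

# A minimal, hypothetical assembler: it maps a symbolic instruction such as
# "load R3,54" onto an invented 16-bit encoding (4-bit opcode, 4-bit
# register number, 8-bit address).
OPCODES = {"load": 0b0001, "store": 0b0010}

def assemble(line):
    mnemonic, operands = line.split()
    register, address = operands.split(",")
    opcode = OPCODES[mnemonic]
    reg = int(register.lstrip("R"))
    addr = int(address)
    word = (opcode << 12) | (reg << 8) | addr   # pack the three fields
    return format(word, "016b")                 # the bits the machine sees

print(assemble("load R3,54"))    # prints 0001001100110110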
We’re now able to respond to the question posed at the start of this
section.
A programming language is a mechanism for abstraction. It allows a
programmer to specify a computation abstractly and to have a program
(typically called a compiler, interpreter, or assembler) translate the
specification into the detailed form required for execution on a computer.
We can also see why there are so many different programming languages:
different kinds of problems call for different levels of abstraction, and
different programmers have different opinions about how abstraction should
be achieved. A C programmer is quite content to work at an abstraction level
that requires computations to be specified in terms of arrays and indices,
whereas a report author prefers to "program" in a language made up of
word-processor functions (Vella et al., 2014).
The degrees of abstraction can be seen clearly in computer hardware.
At first, discrete components such as resistors and transistors were wired
together directly. Then simple plug-in modules and small-scale integrated
circuits were used. Today, entire computers can be built from just a few
chips, each of which contains an enormous number of components. No computer
engineer would try to construct a "perfect" circuit from discrete parts if a
set of chips that could be adapted to fulfill the same function existed
(Flatt et al., 1999).
3.3.1. FORTRAN
FORTRAN was the first programming language to advance substantially
beyond assembly code. It was created by an IBM team led by John Backus
in the 1950s to provide an abstract way of specifying scientific
calculations. FORTRAN faced the same stiff resistance that all later
proposals for higher-level abstraction did: many programmers doubted that a
compiler could produce code as good as hand-coded assembly language
(Figure 3.2) (Ottenstein et al., 1990).
Source: https://www.absoft.com/products/windows-fortran-compiler-suite/.
FORTRAN, like other early programming languages, had severe flaws, both
in its language constructs and in its support for modern data and module
structuring concepts. In retrospect, Backus remarked: "As far as we were
aware, we simply made up the language as we went along. We did not regard
language design as a difficult problem, merely a simple prelude to the real
problem: designing a compiler that could produce efficient programs"
(Cann, 1992).
Nonetheless, the benefits of abstraction quickly won over many
programmers: faster and more reliable development, as well as reduced
machine dependence owing to the abstraction of registers and machine
instructions. Because most early computers were devoted to scientific
problems, FORTRAN became the standard language in science and engineering,
and it is only now being supplanted by newer languages. FORTRAN has been
extensively modernized (in 1966, 1977, and 1990) to meet the needs of
current software development (Burgess and Saidi, 1996).
Source: https://marketplace.visualstudio.com/items?itemName=bitlang.cobol.
Afterwards, IBM developed PL/I as a universal language that combined
features of COBOL, Algol, and FORTRAN. On several IBM systems, PL/I
supplanted COBOL and FORTRAN, but this huge language was never widely
supported outside of IBM, particularly on the minicomputers and
microcomputers that were becoming common in data-processing firms
(Heller and Logemann, 1966).
Source: https://slideplayer.com/slide/2368014/.
Jovial, which is used by the United States Air Force for real-time systems,
and Simula, one of the earliest simulation languages, are two prominent
languages that evolved from Algol. Pascal, invented by Niklaus Wirth in the
late 1960s, is perhaps the best-known descendant of Algol. Pascal was born
of a desire to build a language that could be used to teach concepts such as
type declarations and type checking (McCusker, 2003).
As a practical language, Pascal has one major benefit and one major
shortcoming. Because the first Pascal compiler was written in Pascal itself,
it was easy to port to any machine, and the language spread swiftly,
particularly on the minicomputers and microcomputers that were being
developed at the time. Regrettably, the standard Pascal language is simply
too limited: it has no way of breaking a program into modules in separate
files, so it cannot be used to write programs longer than a few thousand
lines. Although practical Pascal compilers do support module decomposition,
there is no standard mechanism, and therefore large applications are not
portable (Valverde and Solé, 2015).
Wirth recognized the need for modules in any practical language and
created the Modula language as a result. Modula (currently in version 3,
with support for object-oriented programming (OOP)) is a popular alternative
to non-standard Pascal dialects (Ginsburg and Rose, 1963).
Dennis Ritchie of Bell Laboratories created the C programming language
in the early 1970s as an implementation language for the UNIX operating
system (OS). Because higher-level languages were considered inefficient,
operating systems had usually been written in assembly code. By providing
structured control statements and data structures (records and arrays), C
abstracts away the complexities of assembly language programming while
retaining all of the flexibility of low-level programming (bit-level
operations and pointers) (Reddy, 2002).
UNIX soon became the system of choice in research and academic
institutions because it was freely available to universities and was
written in a portable language rather than raw assembly code. When modern
computers and programs came out of these universities and into the
commercial sphere, they brought UNIX and C with them.
Because dangerous constructs are not checked by the compiler, C is as
flexible as assembly language. The difficulty is that this flexibility makes
it very easy to write programs with obscure bugs. Used correctly on small
programs, C is a concise language, but when used on huge software systems
produced by teams of varying ability, it can cause serious problems. Several
of the hazards of C constructs will be discussed, along with how to avoid
the main mistakes (Yang et al., 2006).
The American National Standards Institute (ANSI) standardized the C
programming language in 1989, and the International Standards Organization
(ISO) approved virtually the same standard a year later. The C in this book
refers to ANSI C rather than older versions of the language.
3.3.4. C++
Bjarne Stroustrup, also of Bell Laboratories, created the C++ language in
the 1980s, extending C with object-oriented programming features comparable
to those of the Simula language. In addition, C++ corrects several mistakes
in C and should be used in preference to C, even in small applications where
the object-oriented features may not be needed.
Source: https://www.educba.com/features-of-c-plus-plus/.
Please keep in mind that C++ is an evolving language, so your reference
manual or compiler may not be completely up to date.
3.3.5. Ada
According to the official history, the US Department of Defense decided
in 1977 to standardize on a single programming language, mostly to save
money on training and on the expense of maintaining program development
environments for every military system. Following an evaluation of existing
languages, it decided to request the design of a new language based on a
suitable existing language such as Pascal. Ultimately, one of the proposed
languages was selected and named Ada, and a standard was established in 1983.
Ada is exceptional in several ways (Sward et al., 2003):
•	Most programming languages (Pascal, C, FORTRAN, etc.) were designed
and developed by a single team, and they were standardized only after
they were already in widespread use. For the sake of compatibility, all
of the unintentional mistakes made by the original teams were included
in the standard. Ada, by contrast, was exposed to extensive study and
criticism before being standardized.
Source: https://peakd.com/ada-lang/@xinta/learning-ada-5-object-oriented-paradigm.
Despite its technical superiority and the benefits of early standardization,
Ada has been unable to gain general acceptance outside of military and other
large-scale applications (such as commercial aviation and rail
transportation). Ada has a reputation for being a difficult language. This
is because the language covers several areas of programming that other
languages (such as Pascal and C) leave to the OS, so there is simply more
to learn. In addition, good, affordable educational development environments
were not readily available. Ada is, however, becoming more widely used.
3.3.6. Ada 95
A new standard for the Ada language was issued in 1995, twelve years after
the initial standard was finalized in 1983. The new version, dubbed Ada 95,
fixes a number of flaws in the previous one. The most significant addition
is support for true OOP, including inheritance, which had been left out of
Ada 83 because of perceived inefficiency. Annexes to the Ada 95 standard
define standard (though optional) extensions for information systems,
real-time systems, secure systems, numerics, and distributed systems
(Bailes, 1992).
The name "Ada" will be used in this text unless the subject is specific to a
single version, in which case "Ada 83" or "Ada 95" will be used. Because the
actual year of standardization was unknown during its development, Ada 95 was
referred to as Ada 9X in the earlier literature.
3.4.1. Lisp
The linked list is the most fundamental data structure in Lisp, which was
created for research in the theory of computation. Significant work on
artificial intelligence (AI) has been done in Lisp. The language was
considered so important that machines were designed and built specifically
to run Lisp programs (Figure 3.7) (Murphree and Fenves, 1970).
Source: https://www.electroniclinic.com/artificial-intelligence-using-lisp-programming-examples/.
One issue with the language was the growth of numerous dialects as it was
implemented on different machines. Subsequently, the Common Lisp language
was defined so that programs could be ported from one machine to another.
CLOS, a dialect of Lisp that supports OOP, is now also popular.
The three basic Lisp operations are car(L) and cdr(L), which return the head
and the tail of a list L, respectively, and cons(E, L), which builds a new
list from an element E and an existing list L. Using these operations,
functions can be written to process lists containing non-numeric data; such
functions would be exceedingly complex to implement in FORTRAN
(Rajaraman, 2014).
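For readers who have not seen Lisp, the three operations can be imitated in Python. The sketch below is only an approximation (Python lists are arrays rather than true linked lists), but it conveys how lists of non-numeric data are taken apart and built up:

# Approximate Lisp primitives using Python lists (illustration only).
def cons(e, lst):
    return [e] + lst        # build a new list from element e and list lst

def car(lst):
    return lst[0]           # head of the list

def cdr(lst):
    return lst[1:]          # tail of the list

words = cons("alpha", ["beta", "gamma"])
print(car(words))           # alpha
print(cdr(words))           # ['beta', 'gamma']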
Lisp is a long-lived programming language that has been in use for about a
quarter of a century; among active programming languages, only FORTRAN has
a longer history. Both languages have met the programming requirements of
important application areas: FORTRAN for scientific and engineering
computation and Lisp for AI. These two fields are still vital, and their
programmers are so dedicated to these two languages that FORTRAN and Lisp
may well stay in use for another quarter-century.
AI research, as one might expect given its aims, raises a great many serious
programming issues, and these difficulties have spawned new languages within
different programming cultures. Likewise, controlling and isolating the
traffic between working modules by defining a language is a valuable
organizational approach in any very large programming effort. Toward the
edges of such a system, where humans interact with it more frequently, these
languages become less rudimentary (Adeli and Paek, 1986).
As a result, such systems contain several copies of complicated
language-processing functions. Because Lisp's syntax and semantics are so
basic, parsing can be regarded as a trivial process; parsing technology
therefore plays essentially no part in Lisp programs, and the construction
of language processors is seldom a brake on the rate at which large Lisp
systems grow and evolve. In the end, this is the freedom, and the burden,
that all Lisp programmers bear because of the simplicity of the syntax and
semantics. It is hardly possible to write a Lisp program longer than a few
lines without defining functions of one's own (Swift and Mize, 1995).
3.4.2. APL
The APL programming language arose from a mathematical notation for
describing computations. Its most fundamental data structures are vectors
and matrices, and operations are applied to them directly, without loops.
As a result, APL programs are extremely brief compared with equivalent
programs in other languages. One drawback of APL is that it retains a huge
number of mathematical symbols from the underlying formalism. This
originally required a special terminal, making it impossible to experiment
with APL without investing in expensive hardware; newer graphical user
interfaces (GUIs) that employ software fonts have since eliminated this
difficulty, easing APL's adoption (Figure 3.8) (McIntyre, 1991).
Source: https://computerhistory.org/blog/the-apl-programming-language-source-code/.
3.4.4. SETL
The set is the most fundamental data structure in SETL. Because the set is
the most general mathematical structure, from which all other mathematical
structures are constructed, SETL can be used to write highly abstract, and
hence very brief, general programs. In the sense that mathematical
specifications can be executed directly, such programs resemble logic
programming. Set-theoretic notation is used: {x | p(x)} denotes the set of
all x for which the logical expression p(x) is true. A mathematical
specification of the set of prime numbers, for instance, may be phrased as
follows:
{n | ¬∃m[(2 ≤ m ≤ n – 1)∧(n mod m = 0)]}
This formula is read as follows: the set of all n such that no number m
between 2 and n – 1 divides n without leaving a remainder.
We can translate the specification directly into a one-line SETL program
that prints all primes in the range 2 to 100:
print({n in {2..100} | not exists m in {2..n-1} | (n mod m) = 0});
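Modern Python offers a comparable declarative style through set comprehensions. The following is only an approximate translation of the SETL one-liner, not SETL itself, and assumes nothing beyond standard Python:

>>> print(sorted({n for n in range(2, 101)
...               if not any(n % m == 0 for m in range(2, n))}))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]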
Essentially, all such languages approach programming from a mathematical
standpoint, asking how a mathematical specification can be executed, rather
than from an engineering standpoint, asking how instructions can be issued
to the CPU and memory. These sophisticated languages are extremely useful
for difficult programming tasks where it is critical to concentrate on the
problem rather than on lower-level details (Dubinsky, 1995).
Data-oriented languages are not as popular as they once were, owing to
competition from newer language approaches such as functional and logic
programming, as well as to the ability to embed these data-oriented
operations in ordinary languages such as C++ and Ada using object-oriented
techniques. Nonetheless, these languages are both technically fascinating
and extremely practical for the programming tasks for which they were
created. Students should try to learn at least one of them, since it will
expand their understanding of how a programming language can be structured
(Grove et al., 1997).
Source: https://www.geeksforgeeks.org/object-oriented-programming-in-cpp/.
Simula, the first OOP language, was developed by K. Nygaard and O. J. Dahl
in the 1960s for system simulation: each subsystem taking part in the
simulation was programmed as an object. Because a subsystem may have several
instances, a class can be declared to describe each subsystem, and objects
of that class can then be allocated (Ferber, 1989).
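The idea of one class per subsystem, with a separate object allocated for each instance of that subsystem, can be sketched in present-day Python; the Checkout "subsystem" and its attributes below are invented solely for illustration:

# A hypothetical simulation subsystem modeled as a class; each checkout
# lane in the simulated store is a separate object (instance) of the class.
class Checkout:
    def __init__(self, lane):
        self.lane = lane
        self.queue = []                 # customers waiting at this lane

    def arrive(self, customer):
        self.queue.append(customer)

lanes = [Checkout(i) for i in range(3)]   # three independent instances
lanes[0].arrive("customer-1")
print(len(lanes[0].queue), len(lanes[1].queue))   # 1 0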
The Xerox Palo Alto Research Center promoted OOP with the Smalltalk
programming language. The same research gave birth to today's popular
windowing systems, and one of Smalltalk's great strengths is that it is not
just a language but an entire programming environment. Smalltalk's
technological breakthrough was to demonstrate that a language
and then leave it to the computer to work out how to solve it, instead of
spelling out in detail how to move information from one location to another
(Kumar and Wyatt, 1995).
Newer software packages are, in effect, written in quite abstract computer
languages. With an application generator, one can define a sequence of
database structures and screens, and the generator will then automatically
produce the lower-level instructions required to implement the program.
Similarly, simulation packages, desktop publishing software, spreadsheets,
and other such applications provide substantial abstract programming
capabilities. One downside of this kind of software, however, is that it is
typically restricted in the kinds of applications that can easily be
programmed. Since you customize the package to run the program you require
simply by supplying descriptions as parameters, it seems logical to refer to
them as parameterized programs (Figure 3.10) (Raihany and Rabbianty, 2021).
Source: https://www.learncomputerscienceonline.com/computer-programming/.
Another way to express a computation in abstract programming is to use
logical implications, functions, equations, or some other formalism. Because
mathematical formalisms are employed, these languages are truly
general-purpose programming languages that are not restricted to a single
application domain. The compiler does not translate such a program directly
into machine code.
3.7. STANDARDIZATION
The significance of standardization cannot be overstated. If a standard for
a language exists and compilers adhere to it, programs can be ported from
one machine to another. If you are building software that must run on a
variety of systems, you should restrict yourself to the standard; otherwise,
keeping track of dozens or even hundreds of computer-specific peculiarities
will make your maintenance task incredibly difficult (Rao et al., 2021).
Standards are available (or are in preparation) for most of the languages
covered here. Regrettably, standards are usually issued years after a
language has gained popularity and must therefore retain computer-specific
peculiarities from early implementations. Ada is unique in that its standards
(1983 and 1995) were developed and assessed concurrently with the language's
design and implementation. Moreover, the standard is actively maintained,
allowing compilers to be compared primarily on cost and performance rather
than on conformance to the standard. Compilers for other languages often
provide a mode that warns you when you use a non-standard construct. If such
constructs are required, they should be confined to a small number of
well-documented modules (Patel et al., 2022).
3.8. COMPUTABILITY
Logicians studied abstract principles of computation in the 1930s, long
before digital computers had been conceived. Alan Turing and Alonzo Church
each created an exceedingly basic model of computation (known as the Turing
machine and the lambda calculus, respectively), and subsequently formulated
the Church-Turing Thesis (Kari and Thierrin, 1996):
Any effective computation can be carried out in either of these models.
Turing machines are extremely simple; in C syntax, there are only two data
declarations:
char tape[…]; int current = 0;
where the tape is conceptually infinite. A program is made up of any number
of statements of the following format:
L17: if (tape[current] == 'g') {tape[current++] = 'j'; goto L43;}
A Turing machine’s statement is executed by the following stages:
• Read and inspect the current character on the tape’s current cell;
• Replace the character with a different one (optional);
• Increase or decrease the current cell’s pointer.
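A few lines of Python can mimic this execution cycle. The sketch below is only illustrative: it hard-codes two invented rules, uses a short finite list in place of the unbounded tape, and stops when no rule applies:

# Minimal Turing-machine-style interpreter (illustrative only).
# Each rule: (label, expected char) -> (char to write, head move, next label)
rules = {
    ("L17", "g"): ("j", +1, "L43"),
    ("L43", "g"): ("j", +1, "L17"),
}

tape = list("ggg_")          # a short, finite stand-in for the infinite tape
current, label = 0, "L17"
while (label, tape[current]) in rules:
    write, move, label = rules[(label, tape[current])]
    tape[current] = write    # replace the character
    current += move          # move the head left or right
print("".join(tape))         # jjj_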
REFERENCES
1. Adeli, H., & Paek, Y. J., (1986). Computer-aided design of structures
using LISP. Computers & Structures, 22(6), 939–956.
2. Aguado-Orea, J., & Pine, J. M., (2002). There is no evidence for a ‘no
overt subject’ stage in early child Spanish: A note on Grinstead (2000).
Journal of Child Language, 29(4), 865–874.
3. Ahmed, F. Y., Yusob, B., & Hamed, H. N. A., (2014). Computing with
spiking neuron networks a review. International Journal of Advances
in Soft Computing & its Applications, 6(1), 1–14.
4. Ali, M. S., Babar, M. A., Chen, L., & Stol, K. J., (2010). A systematic
review of comparative evidence of aspect-oriented programming.
Information and Software Technology, 52(9), 871–887.
5. Alkhatib, G., (1992). The maintenance problem of application software:
An empirical analysis. Journal of Software Maintenance: Research
and Practice, 4(2), 83–104.
6. America, P., & Van, D. L. F., (1990). A parallel object-oriented language
with inheritance and subtyping. ACM SIGPLAN Notices, 25(10), 161–
168.
7. Antolík, J., & Davison, A. P., (2013). Integrated workflows for spiking
neuronal network simulations. Frontiers in Neuroinformatics, 7, 34.
8. Ashraf, M. U., Fouz, F., & Eassa, F. A., (2016). Empirical analysis of
HPC using different programming models. International Journal of
Modern Education & Computer Science, 8(6), 3–12.
9. Atkinson, M. P., & Buneman, O. P., (1987). Types and persistence in
database programming languages. ACM Computing Surveys (CSUR),
19(2), 105–170.
10. Bailes, P. A., (1992). Discovering functional programming through
imperative languages. Computer Science Education, 3(2), 87–110.
11. Bajre, P., & Khan, A., (2019). Developmental dyslexia in Hindi readers:
Is consistent sound‐symbol mapping an asset in reading? Evidence
from phonological and visuospatial working memory. Dyslexia, 25(4),
390–410.
12. Berger, U., (2002). Computability and totality in domains. Mathematical
Structures in Computer Science, 12(3), 281–294.
13. Berry, G., & Gonthier, G., (1992). The esterel synchronous programming
language: Design, semantics, implementation. Science of Computer
Programming, 19(2), 87–152.
14. Blackwell, A. F., Whitley, K. N., Good, J., & Petre, M., (2001).
Cognitive factors in programming with diagrams. Artificial Intelligence
Review, 15(1), 95–114.
15. Blanchet, B., (1999). Escape analysis for object-oriented languages:
Application to Java. ACM SIGPLAN Notices, 34(10), 20–34.
16. Borning, A., (1981). The programming language aspects of ThingLab,
a constraint-oriented simulation laboratory. ACM Transactions on
Programming Languages and Systems (TOPLAS), 3(4), 353–387.
17. Burgess, C. J., & Saidi, M., (1996). The automatic generation of test
cases for optimizing Fortran compilers. Information and Software
Technology, 38(2), 111–119.
18. Burnett, M. M., & Baker, M. J., (1994). A classification system for
visual programming languages. Journal of Visual Languages and
Computing, 5(3), 287–300.
19. Cann, D., (1992). Retire Fortran? a debate rekindled. Communications
of the ACM, 35(8), 81–89.
20. Cordy, J. R., (2004). TXL-a language for programming language tools
and applications. Electronic Notes in Theoretical Computer Science,
110, 3–31.
21. Davidsen, M. K., & Krogstie, J., (2010). A longitudinal study of
development and maintenance. Information and Software Technology,
52(7), 707–719.
22. Davison, A. P., Brüderle, D., Eppler, J. M., Kremkow, J., Muller, E.,
Pecevski, D., & Yger, P., (2009). PyNN: A common interface for
neuronal network simulators. Frontiers in Neuroinformatics, 2, 11.
23. Davison, A. P., Hines, M., & Muller, E., (2009). Trends in programming
languages for neuroscience simulations. Frontiers in Neuroscience, 3,
36.
24. Denning, P. J., (1978). Operating systems principles for data flow
networks. Computer, 11(07), 86–96.
25. Dubinsky, E., (1995). ISETL: A programming language for learning
mathematics. Communications on Pure and Applied Mathematics,
48(9), 1027–1051.
26. Egli, H., & Constable, R. L., (1976). Computability concepts for
programming language semantics. Theoretical Computer Science,
2(2), 133–145.
27. Ferber, J., (1989). Computational reflection in class-based object-
oriented languages. ACM SIGPLAN Notices, 24(10), 317–326.
28. Fisher, D. A., (1978). DoD’s common programming language effort.
Computer, 11(3), 24–33.
29. Flatt, M., Findler, R. B., Krishnamurthi, S., & Felleisen, M., (1999).
Programming languages as operating systems (or revenge of the son of
the lisp machine). ACM SIGPLAN Notices, 34(9), 138–147.
30. Fourment, M., & Gillings, M. R., (2008). A comparison of common
programming languages used in bioinformatics. BMC Bioinformatics,
9(1), 1–9.
31. Ghannad, P., Lee, Y. C., Dimyadi, J., & Solihin, W., (2019). Automated
BIM data validation integrating open-standard schema with visual
programming language. Advanced Engineering Informatics, 40, 14–
28.
32. Gilmore, D. J., & Green, T. R. G., (1984). Comprehension and recall
of miniature programs. International Journal of Man-Machine Studies,
21(1), 31–48.
33. Ginsburg, S., & Rose, G. F., (1963). Some recursively unsolvable
problems in ALGOL-like languages. Journal of the ACM (JACM),
10(1), 29–47.
34. Green, T. R. G., & Petre, M., (1996). Usability analysis of visual
programming environments: A ‘cognitive dimensions’ framework.
Journal of Visual Languages & Computing, 7(2), 131–174.
35. Grove, D., DeFouw, G., Dean, J., & Chambers, C., (1997). Call graph
construction in object-oriented languages. In: Proceedings of the
12th ACM SIGPLAN Conference on Object-Oriented Programming,
Systems, Languages, and Applications (Vol. 3, pp. 108–124).
36. Hansen, P. B., (1975). The programming language concurrent pascal.
IEEE Transactions on Software Engineering, (2), 199–207.
37. Heim, B., Soeken, M., Marshall, S., Granade, C., Roetteler, M., Geller,
A., & Svore, K., (2020). Quantum programming languages. Nature
Reviews Physics, 2(12), 709–722.
38. Heller, J., & Logemann, G. W., (1966). PL/I: A programming language
for humanities research. Computers and the Humanities, 2, 19–27.
39. Hermenegildo, M. V., Bueno, F., Carro, M., López, P., Morales, J. F.,
& Puebla, G., (2008). An overview of the ciao multiparadigm language
and program development environment and its design philosophy.
Concurrency, Graphs, and Models, 2, 209–237.
40. Hjelle, K. L., Halvorsen, L. S., & Overland, A., (2010). Heathland
development and relationship between humans and environment along
the coast of western Norway through time. Quaternary International,
220(1, 2), 133–146.
41. Holgeid, K. K., Krogstie, J., & Sjøberg, D. I., (2000). A study of
development and maintenance in Norway: Assessing the efficiency of
information systems support using functional maintenance. Information
and Software Technology, 42(10), 687–700.
42. Hutcheon, A. D., & Wellings, A. J., (1988). Supporting Ada in a
distributed environment. ACM SIGAda Ada Letters, 8(7), 113–117.
43. Ishikawa, Y., Hori, A., Sato, M., Matsuda, M., Nolte, J., Tezuka,
H., & Kubota, K., (1996). Design and implementation of metalevel
architecture in C++-MPC++ approach. In: Proceedings of Reflection
(Vol. 96, pp. 153–166).
44. Japaridze, G., (2003). Introduction to computability logic. Annals of
Pure and Applied Logic, 123(1–3), 1–99.
45. Jeffery, C., Thomas, P., Gaikaiwari, S., & Goettsche, J., (2016).
Integrating regular expressions and SNOBOL patterns into string
scanning: A unifying approach. In: Proceedings of the 31st Annual
ACM Symposium on Applied Computing (Vol. 3, pp. 1974–1979).
46. Jones, N. D., (2004). Transformation by interpreter specialization.
Science of Computer Programming, 52(1–3), 307–339.
47. Kaijanaho, A. J., (2014). The extent of empirical evidence that could
inform evidence-based design of programming languages: A systematic
mapping study. Jyväskylä Licentiate Theses in Computing, 2(18), 1–13.
48. Kari, L., & Thierrin, G., (1996). Contextual insertions/deletions and
computability. Information and Computation, 131(1), 47–61.
49. Kennedy, K., & Schwartz, J., (1975). An introduction to the set
theoretical language SETL. Computers & Mathematics with
Applications, 1(1), 97–119.
50. King, K. N., (1992). The evolution of the programming languages
course. In: Proceedings of the Twenty-Third SIGCSE Technical
Symposium on Computer Science Education (pp. 213–219).
51. Kiper, J. D., Howard, E., & Ames, C., (1997). Criteria for evaluation
of visual programming languages. Journal of Visual Languages &
Computing, 8(2), 175–192.
52. Krogstie, J., (1996). Use of methods and CASE-tools in Norway:
Results from a survey. Automated Software Engineering, 3(3), 347–
367.
53. Krogstie, J., Jahr, A., & Sjøberg, D. I., (2006). A longitudinal study
of development and maintenance in Norway: Report from the 2003
investigation. Information and Software Technology, 48(11), 993–
1005.
54. Kumar, D., & Wyatt, R., (1995). Undergraduate AI and its non-
imperative prerequisite. ACM SIGART Bulletin, 6(2), 11–13.
55. Lorenzen, T., (1981). The case for in class programming tests. ACM
SIGCSE Bulletin, 13(3), 35–37.
56. McCusker, G., (2003). On the semantics of the bad-variable constructor
in Algol-like languages. Electronic Notes in Theoretical Computer
Science, 83, 169–186.
57. McIntyre, D. B., (1991). Language as an intellectual tool: From
hieroglyphics to APL. IBM Systems Journal, 30(4), 554–581.
58. Mészárosová, E., (2015). Is python an appropriate programming
language for teaching programming in secondary schools. International
Journal of Information and Communication Technologies in Education,
4(2), 5–14.
59. Moot, R., & Retoré, C., (2019). Natural language semantics and
computability. Journal of Logic, Language, and Information, 28(2),
287–307.
60. Murphree, E. L., & Fenves, S. J., (1970). A technique for generating
interpretive translators for problem-oriented languages. BIT Numerical
Mathematics, 10(3), 310–323.
61. Ottenstein, K. J., Ballance, R. A., & MacCabe, A. B., (1990). The
program dependence web: A representation supporting control-,
data-, and demand-driven interpretation of imperative languages. In:
Proceedings of the ACM SIGPLAN 1990 Conference on Programming
Language Design and Implementation, 2(1), 257–271.
62. Palumbo, D. B., (1990). Programming language/problem-solving
research: A review of relevant issues. Review of Educational Research,
60(1), 65–89.
63. Patel, P., Torppa, M., Aro, M., Richardson, U., & Lyytinen, H., (2022).
Assessing the effectiveness of a game‐based phonics intervention for
first and second grade English language learners in India: A randomized
controlled trial. Journal of Computer Assisted Learning, 38(1), 76–89.
64. Persson, M., Bohlin, J., & Eklund, P., (2000). Development and
maintenance of guideline-based decision support for pharmacological
treatment of hypertension. Computer Methods and Programs in
Biomedicine, 61(3), 209–219.
65. Raihany, A., & Rabbianty, E. N., (2021). Pragmatic politeness of the
imperative speech used by the elementary school language teachers.
OKARA: Journal Bahasa dan Sastra, 15(1), 181–198.
66. Rajaraman, V., (2014). John McCarthy—Father of artificial intelligence.
Resonance, 19(3), 198–207.
67. Rao, C., TA, S., Midha, R., Oberoi, G., Kar, B., Khan, M., & Singh,
N. C., (2021). Development and standardization of the DALI-DAB
(dyslexia assessment for languages of India – dyslexia assessment
battery). Annals of Dyslexia, 71(3), 439–457.
68. Reddy, U. S., (2002). Objects and classes in Algol-like languages.
Information and Computation, 172(1), 63–97.
69. Rosen, S., (1971). Programming languages: History and fundamentals
(Jean E. Sammet). SIAM Review, 13(1), 108.
70. Sammet, J. E., (1972). Programming languages: History and future.
Communications of the ACM, 15(7), 601–610.
71. Sammet, J. E., (1991). Some approaches to, and illustrations of,
programming language history. Annals of the History of Computing,
13(1), 33–50.
72. Sanner, M. F., (1999). Python: A programming language for software
integration and development. J Mol Graph Model, 17(1), 57–61.
73. Smith, D. C., Cypher, A., & Spohrer, J., (1994). KidSim: Programming
agents without a programming language. Communications of the ACM,
37(7), 54–67.
74. Snyder, A., (1986). Encapsulation and inheritance in object-oriented
programming languages. In: Conference Proceedings on Object-
Oriented Programming Systems, Languages, and Applications (Vol. 2,
No. 2, pp. 38–45).
CONTENTS
4.1. Introduction....................................................................................... 96
4.2. Output: Print Statement..................................................................... 98
4.3. Arithmetic Expressions: A First Look................................................. 104
4.4. Variables in Python.......................................................................... 104
4.5. Arithmetic Expressions in Python..................................................... 109
4.6. Reading User Input In Python.......................................................... 115
4.7. Examples of Programs Using The Input() Statement.......................... 117
4.8. Math Class....................................................................................... 119
References.............................................................................................. 124
4.1. INTRODUCTION
Computers, in their native machine language, understand only 0s and 1s.
All of your computer's executable programs are made up of these 1s and 0s,
which tell the computer exactly what to do. Humans, on the other hand, are
terrible at communicating in 0s and 1s (Van Rossum, 2003). Things would go
extremely slowly if we had to write our instructions to computers in this
fashion all the time, and we would have a lot of disgruntled programmers,
to say the least (Zhang, 2015). Fortunately, there are two typical approaches
that programmers use to avoid having to write their instructions to a
computer in 0s and 1s:
•	Compiled languages; and
•	Interpreted languages.
Compiled languages allow programmers to write programs in a programming
language that is easily understandable by humans (Ekmekci et al., 2016). The
program is then converted into a series of zeros and ones, known as an
executable file, which the computer can read and run (Liang, 2013; Fangohr,
2015). If one wants to change the way the program operates, one must first
make the necessary modifications to the program and then recompile
(retranslate) it in order to produce an updated executable file that the
computer can recognize and use (Figure 4.1) (Nosrati, 2011; Linge and
Langtangen, 2020).
Source: https://www.javatpoint.com/python-applications.
Source: https://zbook.org/read/2752b_-python-chapter-1-introduction-to-programming-in-python-.html.
Source: https://computercorner.ca/python-print-function/.
The syntax rules of programming languages are quite strict. The difference
between programming languages and English is that in English, even if a
grammatical rule is violated, most people still grasp the substance of the
message; in a programming language, if even the smallest rule is broken, the
interpreter cannot compensate by repairing the error (Radenski, 2006;
Kadiyala and Kumar, 2017). Instead, the interpreter generates an error
message that informs the programmer of the problem that has occurred. In
this case the message itself is not especially helpful, because it is not
very detailed; in other circumstances the error messages are more precise.
Comparing the two statements, it is easy to see that the only thing
separating the failed statement from the second one (which worked) is a
missing pair of double quotes. This is the syntax error that was committed
previously (Kadiyala and Kumar, 2018).
Having established the correct syntax of the print statement in Python,
we can now present it informally:
print('string expression')
The word print comes first, followed by a pair of enclosing parentheses ().
A valid string expression must be supplied within the parentheses.
The string literal is the first kind of string expression we will study. In
ordinary usage, "literal" means "taking a word in its basic or exact sense;
not metaphorical or figurative." In programming, literal simply means
"constant" (Srinath, 2017): a literal expression is one whose value cannot
change. String literals in Python, as in many other programming languages,
are denoted by a matching pair of double quotes. With a few exceptions,
everything included within the double quotes is treated as a sequence of
characters, or string, exactly as it was written (Tanganelli et al., 2015;
Kadiyala and Kumar, 2018).
Consequently, the effect of print("Hello World!") in Python is just to print
out precisely what is included within the double quotes. Before continuing,
try printing out several texts that you have written yourself.
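For instance, each of the following statements, typed at the interactive prompt, prints exactly the text between its double quotes:

>>> print("Hello World!")
Hello World!
>>> print("Programming is fun.")
Programming is fun.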
The term "end of the line" refers to the end of the physical line of code.
Because an entire Python statement should fit on a single line, the
interpreter was expecting to read the end of the string literal, marked by
the second double quote, before the end of the line. When it reached the end
of the line, which also marked the end of the statement, it knew that the
string literal had not been finished (Cai et al., 2005; Nagpal and Gabrani,
2019).
To "correct" this problem, we need a mechanism to tell the interpreter that
we want to go on to the next line without typing the enter key. Python, like
several other programming languages, has escape sequences to cope with such
problems. An escape sequence is a code for a character that is not to be
interpreted literally (Holkner and Harland, 2009; Saabith et al., 2019). The
escape sequence for the newline character, for example, is \n. When these two
characters appear in a string literal in that order, the interpreter knows
not to display a backslash followed by an n; instead, it recognizes the two
characters together as the code for a newline character (Manaswi et al.,
2018; Kumar and Panda, 2019). Therefore, to print
Python
is
fun!
we write:
print("Python\nis\nfun!")
Several other escape sequences are frequently employed as well.
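The most common ones are \n (newline), \t (tab), \\ (a literal backslash), and \" and \' (literal quote characters). A short interpreter session, using only these standard Python escape sequences, shows their effect:

>>> print("first line\nsecond line")
first line
second line
>>> print("name:\tJoe")
name:	Joe
>>> print("She said, \"Hi!\"")
She said, "Hi!"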
To delimit a literal string, single quotes can be used instead of double
quotes; either option is acceptable (van Rossum, 1995; De Pra et al., 2018).
As a result, the above message may be written more easily as follows:
print('Joe says, "Hi!"')
The Python interpreter knows from the start of the statement that the
programmer is using single quotes to mark the start and end of the string
literal, so it can treat the double quote it finds as an ordinary
double-quote character rather than the end of the string (Nosrati, 2011).
Source: https://python-course.eu/python-tutorial/data-types-and-variables.php.
>>> print(area)
36
Since we explicitly re-evaluated side*side and assigned the resulting value
back into the variable area, we can see that area has now changed to 36.
Source: https://www.slideshare.net/p3infotech_solutions/python-programming-essentials-m5-variables.
4.4.6. Comments
Large chunks of code can be difficult for others to read, so programmers
frequently include comments in their code to assist them. A comment is a
section of code that the interpreter ignores but that anyone viewing the
code can see; it provides some basic information to the reader. A header
comment is added at the start of each program. It contains information about
the file's author(s), the date it was created or edited, and the program's
purpose. In Python, the pound sign (#) introduces a comment: the interpreter
treats all text after the pound symbol on a line as a comment (Tateosian,
2015; Poole, 2017).
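As a brief sketch (the author, date, and purpose below are placeholders), a header comment and an in-line comment might look like this:

# Jane Doe
# 1/15/2020
# Computes the area of a square.

side = 6                # length of one side
area = side * side      # the interpreter ignores everything after each #
print(area)             # prints 36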
since it takes priority over addition. For instance, if the tax rate is 7,
we divide 7 by 100 to get 0.07, which we then add to 1 to obtain 1.07. The
current value of the item price is then multiplied by 1.07, and the result
is assigned to the total price (Meurer et al., 2017; Lukasczyk et al., 2020).
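A small sketch of the computation being described follows; the names item_price, tax_rate, and total_price are assumed here, since the original program is not reproduced on this page:

item_price = 20.00
tax_rate = 7                                      # percent
total_price = item_price * (1 + tax_rate / 100)   # 20.00 * 1.07
print(total_price)       # about 21.4 (see the round-off discussion below)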
Python additionally provides us with three more operators:
•	%, for modulus;
•	**, for exponentiation;
•	//, for integer division.
The following subsections explain how each of these operators works, as well
as the order in which they are applied.
-0.008
>>> -5 ** 3
-125
>>> 1.6743 ** 2.3233
3.311554089370817
When both operands of an exponentiation are integers and the result is
mathematically an integer, the result is given as an integer. If both
operands are integers but the mathematical result is not an integer, the
answer is written as a real number with decimals. If the exponent b is
negative, a ** b is defined as 1/(a ** (-b)), as the examples above show
(Vanhoenacker and Sandra, 2006; Furduescu, 2019).
>>> -12//6
-2
>>> -11//6
-2
>>> 0//5
0
Please keep in mind that Python handles this operation differently from
several other programming languages, and differently from most people's
intuitive understanding of integer division. When most people see –13//6,
they are likely to reason that the result is quite near –2, and hence that
the answer should be –2, which is incorrect. If we look at the technical
definition of integer division in Python, we see that –2 is greater than
–13/6, which is about –2.166667, and that the largest integer less than or
equal to this figure is –3 (Gálvez et al., 2009; Munier et al., 2019).
In addition, the definition of integer division in Python does not require
that the two numbers being divided be integers. Consequently, integer
division is permitted even for values that are not integers. Consider the
following examples:
>>> 6.0 // 3.0
2.0
>>> 2.4 // 2.5
0.0
>>> 6.6 //.02
330.0
>>> -45.3 // -11.2
4.0
>>> 45.3 // -11.2
-5.0
Because this Python feature is infrequently utilized, no more information
will be provided at this time.
On the other hand, the modulus operator is provided for both integers and
real numbers, unlike in many other programming languages. If both operands
are integers, the result will be an integer; otherwise, the answer will be a
real number (Henry et al., 1984; Reas and Fry, 2006).
To put it another way: the modulus operator yields the remainder of a
division, whereas integer division yields the quotient. Another way of
thinking about modulus is that it is simply the amount left over when one
integer is divided by another. For positive integers, Python matches the
intuitive notion given above. Negative numbers, on the other hand, are a
different matter (Bielak, 1993; Nakhle and Harfouche, 2021).
The formal definition of a % b is as follows:
a % b evaluates to a - (a // b)*b.
The whole number of times b divides into a is given by a // b. As a result,
we are finding how many whole multiples of b fit into a and subtracting that
many multiples of b from a to get the "leftover."
Here are several traditional mod examples that only use non-negative
values:
>>> 17 % 3
2
>>> 37 % 4
1
>>> 17 % 9
8
>>> 0 % 17
0
>>> 98 % 7
0
>>> 199 % 200
199
In these cases, we see that if the first value is smaller than the second,
the answer is just the first value, because the second value divides it zero
times. In the remaining cases, we obtain the answer by simply subtracting
the proper number of multiples of the second number (Iyengar et al., 2011;
Kopec, 2014).
For negative integers, on the other hand, we must apply the formal
definition rather than rely on intuition. Consider the following examples:
>>> 25 % -6
-5
>>> -25 % -6
-1
>>> -25 % 6
5
>>> -48 % 8
0
>>> -48 % -6
0
>>> -47 % 6
1
>>> -47 % 8
1
>>> -47 % -8
-7
The essential point behind the first two results is that the two integer
divisions 25 // –6 and –25 // –6 have different answers: the first yields
–5, whereas the second yields 4. For the first, we compute –6 × –5 = 30, and
we must subtract 5 from it to get back to 25, so the remainder is –5. For
the second, we compute –6 × 4 = –24, and we must subtract 1 to get –25, so
the remainder is –1.
See whether you can use the definition provided to verify each of the other
results listed above.
The modulus operator is also provided for real numbers in Python, with
a similar definition as before. Here are some instances of its use:
>>> 12.4 % 6.1
0.20000000000000018
>>> 3.4 % 3.5
3.4
>>> 6.6 %.02
7.216449660063518e-16
>>> -45.3 % -11.2
-0.5
>>> 45.3 % -11.2
-10.7
Looking at the first and third cases, we can see that the results given by
Python differ somewhat from what we would expect: the first should be
exactly 0.2, and the third should be exactly zero. Unfortunately, a great
many real numbers cannot be stored exactly in the computer
(Jackowska-Strumiłło et al., 2013; Jun Zhao et al., 2013); in fact, this is
true in all programming languages. Because of this, small round-off errors
appear in computations with real numbers every now and then. In the first
result, the round-off error shows up as the digits 1 and 8 at the very end
of the number. In the third result, the e-16 at the end means that the
preceding number is multiplied by 10⁻¹⁶, so the entire value represents the
round-off error, which is still negligible since it is less than 10⁻¹⁵. If
we do not require extreme accuracy in our calculations, we can live with the
small inaccuracies produced by real-number computations on a standard
computer. The more involved the calculations, the higher the possibility of
an error cascading into other computations (Chapman and Chang, 2000). For
the purposes of this discussion, however, we will simply assume that our
real-number results are "close enough" for our requirements (Kuhlman, 2009;
Chen et al., 2019).
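A quick interpreter session illustrates the point; math.isclose, from Python's standard math module, is one common way to test whether two real-number results are "close enough":

>>> 0.1 + 0.2
0.30000000000000004
>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True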
and then the assignment statement, with its equal sign, assigned this value
to the variable name. After that, we could use name wherever we wanted,
knowing that the value given by the user had been saved in it (Liang, 2013;
Derezińska and Hałas, 2014).
A shortcoming of our earlier programs, which computed the price of an item
with tax and the area of a square, was that they always computed the same
price and area. Our software would be far more useful if we allowed the user
to enter the necessary numbers, so that our program could evaluate the
information they were actually interested in (Xu et al., 2021).
Before we get into the specific changes that must be made to these programs
for them to accept user input, we should briefly discuss the input function.
It always returns a string representation of whatever data it receives from
the user. Even if the user enters a number, such as 79, the input statement
returns the string "79"; literally, this is a string consisting of the
character "7" followed by the character "9", rather than the number 79. As
a result, we need a technique for converting the text "79" into the number
79. This is accomplished by a function that converts its input into a value
of a new type (Goldbaum et al., 2018; Ortin and Escalada, 2021). The int
function can be used to convert a string into an integer:
>>> age = int(input("How old are you, " + name + "?\n"))
How old are you, Simone?
22
>>> print("Your name is ", name, ". You are ", age, " years old.", sep="")
Your name is Simone. You are 22 years old.
In this example, we had to use the int function to convert the string
returned by the input function into an integer. This value was then assigned
to the variable age. As a result, age stores an integer rather than a string.
You will notice that the prompt for Simone was built with plus signs, which
denote string concatenation, rather than the commas we originally used when
learning the print statement. This is because, whereas the print statement
accepts multiple items separated by commas, the input statement accepts only
a single string. As a result, we were obliged to use string concatenation to
build a single string (Ade-Ibijola, 2018; Verstraelen et al., 2021). We were
able to concatenate the variable name with the remainder of the message
because it is a string. If we had tried to feed the input function several
items separated by commas, an error would have resulted.
from a function called main. While this is not required in Python, it is a
good practice to get into the habit of defining a function named main, as
one does in many other programming languages. It will be helpful when moving
to other languages, and as the Python programs you develop grow longer than
a few lines, having a main function will be handy from an organizational
standpoint. It can be included in your program by simply adding the
following line just before your program's instructions:
def main():
Python requires consistent indentation, so every statement within the main
function must be indented; a standard indentation is four spaces or one tab.
After writing your code in main, you must call the function, because all you
have done so far is define it (Combrisson et al., 2017). Just because a
function is defined does not mean that it will be run; it runs only if and
when it is called. The following is how to call the function main:
main()
Putting this all together, we have the following program:

# Joe Clark
# 9/10/2019
# Analyzes the number of possible combo meals.

import math

# Several languages define a function main, which starts execution.
def main():
    # Get the user information.
    numapps = int(input("How many total appetizers are there?\n"))
    yourapps = int(input("How many of those do you get to choose?\n"))
    numentrees = int(input("How many total entrees are there?\n"))
    yourentrees = int(input("How many of those do you get to choose?\n"))

    # Calculate the combinations of appetizers and entrees.
    appcombos = (math.factorial(numapps)/math.factorial(yourapps)
                 /math.factorial(numapps-yourapps))
    entreecombos = (math.factorial(numentrees)/math.factorial(yourentrees)
                    /math.factorial(numentrees-yourentrees))

    # Output the final answer.
    print("You can order", int(appcombos*entreecombos), "different meals.")

# Call main!
main()
Another new feature in this program is a single statement that spans two
lines; this happens in both the appcombos and entreecombos assignment
statements. The extra set of parentheses tells Python that the whole
expression belongs to one logical line. There are other ways to signal that
several physical lines form a single statement; for example, the backslash
continuation character:

appcombos = math.factorial(numapps)/math.factorial(yourapps) \
    /math.factorial(numapps-yourapps)
REFERENCES
1. Agarwal, K. K., & Agarwal, A., (2006). Simply python for CS 0.
Journal of Computing Sciences in Colleges, 21(4), 162–170.
2. Agarwal, K., Agarwal, A., & Celebi, M. E., (2008). Python puts a
squeeze on java for CS0 and beyond. Journal of Computing Sciences
in Colleges, 23(6), 49–57.
3. Alzahrani, N., Vahid, F., Edgcomb, A., Nguyen, K., & Lysecky, R.,
(2018). Python versus C++ an analysis of student struggle on small
coding exercises in introductory programming courses. In: Proceedings
of the 49th ACM Technical Symposium on Computer Science Education
(pp. 86–91).
4. Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R.,
Desjardins, G., & Bengio, Y., (2010). Theano: A CPU and GPU math
compiler in python. In: Proc. 9th Python in Science Conf. (Vol. 1, pp.
3–10).
5. Bielak, R., (1993). Object oriented programming: The fundamentals.
ACM SIGPLAN Notices, 28(9), 13–14.
6. Bogdanchikov, A., Zhaparov, M., & Suliyev, R., (2013). Python to
learn programming. In: Journal of Physics: Conference Series (Vol.
423, No. 1, p. 012027). IOP Publishing.
7. Bynum, M. L., Hackebeil, G. A., Hart, W. E., Laird, C. D., Nicholson,
B. L., Siirola, J. D., & Woodruff, D. L., (2021). A brief python tutorial.
In: Pyomo—Optimization Modeling in Python (pp. 203–216). Springer,
Cham.
8. Cai, X., Langtangen, H. P., & Moe, H., (2005). On the performance
of the python programming language for serial and parallel scientific
computations. Scientific Programming, 13(1), 31–56.
9. Chapman, B., & Chang, J., (2000). Biopython: Python tools for
computational biology. ACM SIGBIO Newsletter, 20(2), 15–19.
10. Chapman, C., & Stolee, K. T., (2016). Exploring regular expression
usage and context in python. In: Proceedings of the 25th International
Symposium on Software Testing and Analysis (pp. 282–293).
11. Chen, T., Hague, M., Lin, A. W., Rümmer, P., & Wu, Z., (2019). Decision
procedures for path feasibility of string-manipulating programs
with complex operations. Proceedings of the ACM on Programming
Languages, 3(POPL), 1–30.
12. Combrisson, E., Vallat, R., Eichenlaub, J. B., O’Reilly, C., Lajnef, T.,
Guillot, A., & Jerbi, K., (2017). Sleep: An open-source python software
for visualization, analysis, and staging of sleep data. Frontiers in
Neuroinformatics, 11, 60.
13. Craven, P. V., (2016). Create a custom calculator. In: Program Arcade
Games (pp. 11–31). Apress, Berkeley, CA.
14. De Pra, Y., Fontana, F., & Simonato, M., (2018). Development of
real-time audio applications using python. In: Proceedings of the XXII
Colloquium of Musical Informatics, Udine, Italy (pp. 22–23).
15. Ade-Ibijola, A., (2018). Syntactic generation of practice novice programs
in python. In: Annual Conference of the Southern African Computer
Lecturers’ Association (pp. 158–172). Springer, Cham.
16. Derezińska, A., & Hałas, K., (2014). Analysis of mutation operators
for the python language. In: Proceedings of the Ninth International
Conference on Dependability and Complex Systems DepCoS-
RELCOMEX (pp. 155–164). Brunów, Poland. Springer, Cham.
17. Donat, W., (2014). Introducing python. In: Learn Raspberry Pi
Programming with Python (pp. 31–50). Apress, Berkeley, CA.
18. Dubois, P. F., Hinsen, K., & Hugunin, J., (1996). Numerical python.
Computers in Physics, 10(3), 262–267.
19. Ekmekci, B., McAnany, C. E., & Mura, C., (2016). An introduction
to programming for bioscientists: A python-based primer. PLoS
Computational Biology, 12(6), e1004867.
20. Elumalai, A., (2021). Python loves numbers. In: Introduction to Python
for Kids (pp. 39–58). Apress, Berkeley, CA.
21. Fangohr, H., (2015). Introduction to python for computational
science and engineering. Faculty of Engineering and the Environment
University of Southampton, 68.
22. Furduescu, B. A., (2019). Neuro-linguistic programming: History,
conception, fundamentals, and objectives. Valahian Journal of
Economic Studies, 10(1).
23. Gálvez, J., Guzmán, E., & Conejo, R., (2009). A blended e-learning
experience in a course of object oriented programming fundamentals.
Knowledge-Based Systems, 22(4), 279–286.
24. Gerrard, P., (2016). Input and output. In: Lean Python (pp. 35–41).
Apress, Berkeley, CA.
126 Key Dynamics in Computer Programming
25. Goldbaum, N. J., ZuHone, J. A., Turk, M. J., Kowalik, K., & Rosen, A.
L., (2018). unyt: Handle, Manipulate, and Convert Data with Units in
Python. arXiv preprint arXiv:1806.02417.
26. Gorgolewski, K., Burns, C. D., Madison, C., Clark, D., Halchenko,
Y. O., Waskom, M. L., & Ghosh, S. S., (2011). Nipype: A flexible,
lightweight, and extensible neuroimaging data processing framework
in python. Frontiers in Neuroinformatics, 5, 13.
27. Guo, P. J., Markel, J. M., & Zhang, X., (2020). Learner sourcing at scale
to overcome expert blind spots for introductory programming: A three-
year deployment study on the python tutor website. In: Proceedings of
the Seventh ACM Conference on Learning@ Scale (pp. 301–304).
28. Hajja, A., Hunt, A. J., & McCauley, R., (2019). PolyPy: A web-platform
for generating quasi-random python code and gaining insights on
student learning. In: 2019 IEEE Frontiers in Education Conference
(FIE) (pp. 1–8). IEEE.
29. Hall, T., & Stacey, J. P., (2009). Variables and data types. Python 3 for
Absolute Beginners, 27–47.
30. Hamrick, T. R., & Hensel, R. A., (2013). Putting the fun in programming
fundamentals-robots make programs tangible. In: 2013 ASEE Annual
Conference & Exposition (pp. 23–1012).
31. Hedges, L. O., Mey, A. S., Laughton, C. A., Gervasio, F. L.,
Mulholland, A. J., Woods, C. J., & Michel, J., (2019). BioSimSpace:
An interoperable python framework for biomolecular simulation.
Journal of Open Source Software, 4(43), 1831.
32. Henry, R. C., Lewis, C. W., Hopke, P. K., & Williamson, H. J., (1984).
Review of receptor model fundamentals. Atmospheric Environment
(1967), 18(8), 1507–1515.
33. Holkner, A., & Harland, J., (2009). Evaluating the dynamic behavior of
python applications. In: Proceedings of the Thirty-Second Australasian
Conference on Computer Science (Vol. 91, pp. 19–28).
34. Hunt, J., (2019). A first python program. In: A Beginners Guide to
Python 3 Programming (pp. 23–31). Springer, Cham.
35. Hunt, J., (2019). Python modules and packages. In: A Beginners Guide
to Python 3 Programming (pp. 281–297). Springer, Cham.
36. Iyengar, S. S., Parameshwaran, N., Phoha, V. V., Balakrishnan, N., &
Okoye, C. D., (2011). Fundamentals of Sensor Network Programming:
Applications and Technology (Vol. 41, pp. 1–36). John Wiley & Sons.
Introduction to Python Programming 127
37. Izaac, J., & Wang, J., (2018). Python. In: Computational Quantum
Mechanics (pp. 83–162). Springer, Cham.
38. Jackowska-Strumiłło, L., Nowakowski, J., Strumiłło, P., & Tomczak,
P., (2013). Interactive question based learning methodology and
clickers: Fundamentals of computer science course case study. In: 2013
6th International Conference on Human System Interactions (HSI) (pp.
439–442). IEEE.
39. Jun, Z. Y., Ying, Z. C., & Wang, J., (2013). Innovative practices
teaching mode research of the fundamentals of computer. In: 2013
8th International Conference on Computer Science & Education (pp.
1154–1159). IEEE.
40. Kadiyala, A., & Kumar, A., (2017). Applications of python to evaluate
environmental data science problems. Environmental Progress &
Sustainable Energy, 36(6), 1580–1586.
41. Kadiyala, A., & Kumar, A., (2018). Applications of python to
evaluate the performance of decision tree‐based boosting algorithms.
Environmental Progress & Sustainable Energy, 37(2), 618–623.
42. Kadiyala, A., & Kumar, A., (2018). Applications of python to evaluate
the performance of bagging methods. Environmental Progress &
Sustainable Energy, 37(5), 1555–1559.
43. Kelly, S., (2019). Introducing python. In: Python, PyGame, and
Raspberry Pi Game Development (pp. 11–31). Apress, Berkeley, CA.
44. Khoirom, S., Sonia, M., Laikhuram, B., Laishram, J., & Singh, T. D.,
(2020). Comparative analysis of python and Java for beginners. Int.
Res. J. Eng. Technol., 7(8), 4384–4407.
45. Kopec, D., (2014). Some programming fundamentals. In: Dart for
Absolute Beginners (pp. 15–24). Apress, Berkeley, CA.
46. Krause, F., & Lindemann, O., (2014). Expyriment: A python library
for cognitive and neuroscientific experiments. Behavior Research
Methods, 46(2), 416–428.
47. Kuhlman, D., (2009). A Python Book: Beginning Python, Advanced
Python, and Python Exercises (pp. 1–227). Lutz: Dave Kuhlman.
48. Kumar, A., & Panda, S. P., (2019). A survey: How python pitches in
it-world. In: 2019 International Conference on Machine Learning, Big
Data, Cloud, and Parallel Computing (COMITCon) (pp. 248–251).
IEEE.
128 Key Dynamics in Computer Programming
63. Nakhle, F., & Harfouche, A. L., (2021). Ready, steady, Go AI: A
practical tutorial on fundamentals of artificial intelligence and its
applications in phenomics image analysis. Patterns, 2(9), 100323.
64. Nanjekye, J., (2017). Printing and backtick repr. In: Python 2 and 3
Compatibility (pp. 1–10). Apress, Berkeley, CA.
65. Nosrati, M., (2011). Python: An appropriate language for real world
programming. World Applied Programming, 1(2), 110–117.
66. O’Boyle, N. M., Morley, C., & Hutchison, G. R., (2008). Pybel: A
python wrapper for the OpenBabel cheminformatics toolkit. Chemistry
Central Journal, 2(1), 1–7.
67. Oliphant, T. E., (2007). Python for scientific computing. Computing in
Science & Engineering, 9(3), 10–20.
68. Ortin, F., & Escalada, J., (2021). Cnerator: A python application for the
controlled stochastic generation of standard C source code. SoftwareX,
15, 100711.
69. Pajankar, A., (2017). Introduction to python. In: Python Unit Test
Automation (pp. 1–17). Apress, Berkeley, CA.
70. Pajankar, A., (2022). Introduction to python 3. In: Hands-on Matplotlib
(pp. 1–28). Apress, Berkeley, CA.
71. Pilgrim, M., & Willison, S., (2009). Dive into Python 3 (Vol. 2, pp.
20–30). New York, NY, USA: Apress.
72. Poole, M., (2017). Extending the design of a blocks-based python
environment to support complex types. In: 2017 IEEE Blocks and
Beyond Workshop (B&B) (pp. 1–7). IEEE.
73. Radenski, A., (2006). “ Python first” a lab-based digital introduction to
computer science. ACM SIGCSE Bulletin, 38(3), 197–201.
74. Rajagopalan, G., (2021). Getting familiar with python. In: A Python
Data Analyst’s Toolkit (pp. 1–43). Apress, Berkeley, CA.
75. Rak-amnouykit, I., McCrevan, D., Milanova, A., Hirzel, M., & Dolby,
J., (2020). Python 3 types in the wild: A tale of two type systems. In:
Proceedings of the 16th ACM SIGPLAN International Symposium on
Dynamic Languages (pp. 57–70).
76. Rashed, M. G., & Ahsan, R., (2012). Python in computational science:
Applications and possibilities. International Journal of Computer
Applications, 46(20), 26–30.
130 Key Dynamics in Computer Programming
90. Tateosian, L., (2015). Beginning python. In: Python for ArcGIS (pp.
13–35). Springer, Cham.
91. Tateosian, L., (2015). Python for ArcGIS (p. 544). Cham, Switzerland:
Springer.
92. van Rossum, G., & de Boer, J., (1991). Interactively testing remote
servers using the python programming language. CWi Quarterly, 4(4),
283–303.
93. Van, R. G., & Drake, Jr. F. L., (1995). Python Tutorial (Vol. 620, pp.
250–290). Amsterdam, The Netherlands: Centrum Voor Wiskunde en
Informatica.
94. Van, R. G., (2003). In: Drake, F. L., (ed.), An Introduction to Python (p.
115). Bristol: Network Theory Ltd.
95. Van, R. G., (2007). Python programming language. In: USENIX Annual
Technical Conference (Vol. 41, No. 1, pp. 1–36).
96. Van, R. G., Warsaw, B., & Coghlan, N., (2001). PEP 8-style guide for
python code. Python. Org., 1565.
97. VanderPlas, J., Granger, B. E., Heer, J., Moritz, D., Wongsuphasawat,
K., Satyanarayan, A., & Sievert, S., (2018). Altair: Interactive statistical
visualizations for python. Journal of Open Source Software, 3(32),
1057.
98. Vanhoenacker, G., & Sandra, P., (2006). Elevated temperature and
temperature programming in conventional liquid chromatography:
Fundamentals and applications. Journal of separation Science, 29(12),
1822–1835.
99. vanRossum, G., (1995). Python reference manual. Department of
Computer Science [CS], (R 9525).
100. Verstraelen, T., Adams, W., Pujal, L., Tehrani, A., Kelly, B. D.,
Macaya, L., & Heidar‐Zadeh, F., (2021). IOData: A python library for
reading, writing, and converting computational chemistry file formats
and generating input files. Journal of Computational Chemistry, 42(6),
458–464.
101. Watkiss, S., (2020). Getting started with python. In: Beginning Game
Programming with Pygame Zero (pp. 11–49). Apress, Berkeley, CA.
102. Xu, D., Liu, B., Feng, W., Ming, J., Zheng, Q., Li, J., & Yu, Q., (2021).
Boosting SMT solver performance on mixed-bitwise-arithmetic
expressions. In: Proceedings of the 42nd ACM SIGPLAN International
CHAPTER 5
FUNDAMENTALS OF C PROGRAMMING
CONTENTS
5.1. Introduction..................................................................................... 134
5.2. A First Program................................................................................ 135
5.3. Variants of Hello World.................................................................... 136
5.4. A Numerical Example...................................................................... 138
5.5. Another Version of the Conversion Table Example............................ 139
5.6. Identifiers......................................................................................... 140
5.7. Types............................................................................................... 141
5.8. Constants......................................................................................... 143
5.9. Symbolic Constants......................................................................... 145
5.10. Printf Conversion Specifiers........................................................... 146
References.............................................................................................. 147
5.1. INTRODUCTION
C is a common programming language that may be used to create programs
for a wide range of purposes, including operating systems (OSs), numerical
computation, and graphical applications. With just 32 keywords, it is a small
language. It supports both "high-level" structured-programming constructs,
such as looping, decision making, and statement grouping, and "low-level"
capabilities, such as manipulating addresses and bytes (Embree et al., 1991;
Rajon, 2016).
Because C is a tiny language, it can be explained in a short amount
of time and learned rapidly. A programmer may fairly expect to know,
comprehend, and utilize the complete language on a regular basis (Figure
5.1) (Mészárosová, 2015).
Source: https://talentcode.blogspot.com/2020/04/fundamentals-of-c-programming.html.
As a result, C keeps its small size by providing only the most basic facilities
inside the language itself and by omitting many of the higher-level elements
that are often found in other languages. Unlike many other programming
languages, C has no operations that act directly on composite objects such as
arrays or lists. Aside from static definitions and the stack allocation of local
variables, there are no memory-management facilities. In addition, there are
no built-in input/output capabilities, such as writing to a file or printing to the
screen (Vogel-Heuser et al., 2014).
A large portion of C's functionality is instead provided by software routines known
as functions. An extensive standard library of functions is supplied with the language.
Source: https://freecomputerbooks.com/C-Programming-Language-and-Software-Design.html.
Comments in C begin with a /* and end with a */. They are not nestable
and can span numerous lines. For instance,
/* this makes an effort to nest two comments /* results in just one
comment, ending here: */ and the residual text is a syntax error. */
A typical program includes one or more library header files. Libraries provide
the majority of C's functionality; header files contain the information, such as
function declarations and macros, that is required to use these libraries.
The entry-point function for all C programs is main(). It has two standard
forms:
int main(void)
int main(int argc, char *argv[])
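A minimal first program using the first form of main() might look like the following sketch (a generic example, not the book's exact listing):
#include <stdio.h>   /* standard I/O header: declares printf() */

int main(void)
{
    printf("Hello, World!\n");   /* print the string followed by a newline */
    return 0;                    /* exit status 0 indicates success */
}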
Source: https://www.getfreeebooks.com/an-introduction-to-the-c-programming-language-and-software-design/.
“Hello, World!” is also printed by the following program, but rather than
printing the entire string at once, it prints one character at a time. This exercise
introduces a number of new ideas: identifiers, variables, types, pointers, the
'\0' (NUL) escape character, array subscripts, increment operators, logical
operators, while loops, and text formatting, among others (Backus, 2003).
This may appear to be a lot, but do not be concerned; you are not required
to grasp everything right away, and everything will be discussed in greater
detail in the following chapters. At this point, it is sufficient to understand
the basic structure of the code: an index variable, a loop, a string,
and a print statement (Figure 5.4) (McMillan, 2018).
Source: https://freecomputerbooks.com/C-Programming-Language-and-Software-Design.html.
Before they can be used, all variables must be defined. They must be
defined before any statements, at the head of a block, and when declared
they can be initialized by an expression or a constant.
The variable with identifier i is of type int, an integer, and is initialized
to zero. The identifier str refers to a variable of type char *, a character
pointer; in this example it points to the characters of a string constant.
A while-loop iterates through the string, printing each character one by
one. The loop continues to run as long as the expression (str[i] != '\0') is non-
zero. The operator != means NOT EQUAL TO. The expression str[i] refers to
the i-th character of the string (where str[0] is 'H'). The escape character '\0'
reflects the fact that all string constants are implicitly terminated with a NUL
character (Caprile and Tonella, 1999; Dimovski et al., 2021).
While the loop expression is TRUE, the while-loop executes the following
statement. Here the printf() function takes two arguments, a format
string "%c" and an argument str[i++], and outputs the i-th character
of str. The expression i++ uses the post-increment operator: it returns the
value of i and then increments it (i = i + 1).
This version of the program, unlike earlier versions, contains an explicit
return statement indicating the program's exit status.
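Putting these pieces together, a sketch of the character-by-character version described above (reconstructed from the description rather than copied from the original figure) is:
#include <stdio.h>

int main(void)
{
    int i = 0;                     /* index variable, initialized to zero */
    char *str = "Hello, World!\n"; /* str points to the characters of a string constant */

    while (str[i] != '\0') {       /* loop until the implicit NUL terminator */
        printf("%c", str[i++]);    /* print the i-th character, then post-increment i */
    }
    return 0;                      /* explicit exit status */
}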
• Style Note: Take note of the structuring style employed in the
sample code throughout this text, especially the indentation.
Indentation is an important part of designing readable C programs.
Indentation is not important to the compiler, although it does
make the program simpler to understand for programmers (Wang
et al., 2014; Medeiros et al., 2015).
Source: https://codecondo.com/20-ways-to-learn-c-programming-for-free/c-programming-language-and-software-design-by-tim-bailey/.
In this step, we initialize the floating-point variable fahr. Note that the
two variables are of different types. For types that are compatible
with one another, the compiler performs automatic type conversion
(Schilling, 1995; Duff, 2015).
The while-loop terminates when the expression (fahr <= upper)
evaluates to FALSE. The operator <= means "less than or equal to."
The loop executes a compound statement enclosed in braces, which
corresponds to the three statements on the first and second lines of code
(Austin et al., 1994).
The printf() statement, in this case, consists of a format string and the two
variables, celsius and fahr, that are used to display the results. The format
string is easy to read: it contains two conversion specifiers, %3.0f and %6.1f,
and two escape characters, a tab and a newline. For example, the conversion
specifier %6.1f formats a floating-point number in a field at least six characters
wide, printing one digit after the decimal point (Westerståhl,
1985; Kimura and Tanaka-Ishii, 2014).
The statement fahr += step is equivalent to the expression fahr = fahr +
step.
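A sketch of the conversion-table program being described is given below; the lower limit (0), upper limit (300), and step (20) are assumptions made for illustration, not necessarily the book's values:
#include <stdio.h>

int main(void)
{
    float fahr = 0.0f;            /* lower limit of the table (assumed) */
    float upper = 300.0f;         /* upper limit (assumed) */
    float step = 20.0f;           /* table increment (assumed) */
    float celsius;

    while (fahr <= upper) {                       /* compound statement runs while fahr <= upper */
        celsius = (5.0f / 9.0f) * (fahr - 32.0f); /* conversion expression */
        printf("%3.0f\t%6.1f\n", fahr, celsius);  /* two conversion specifiers, a tab, a newline */
        fahr += step;                             /* equivalent to fahr = fahr + step */
    }
    return 0;
}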
•	Style Note: Comments should be used to make the code more
understandable. They should explain the goal of the algorithm and
point out intricacies in the method; they should not simply restate
what the code already says. Carefully chosen identifiers can
significantly reduce the number of comments needed to make the
code understandable.
Source: http://www.freebookcenter.net/programming-books-download/An-Introduction-to-the-C-Programming-Language-and-Software-Design-(PDF-158P).html.
Names for numerical constants are known as symbolic constants. These
are defined using #define, and they allow us to avoid having numbers pollute
our code. Magic numbers are numbers that are strewn about in code and
should be avoided at all costs (Feldmann et al., 1998; Ferreira, 2003).
The three components of the for-loop are separated by two semicolons (;).
The first initializes the loop, the second checks the loop condition, and the
third is an expression that is executed after every loop iteration. The conversion
expression itself is contained within the printf() statement; an expression can be
used anywhere a variable can be used (Gravley and Lakhotia, 1996).
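A sketch of this for-loop version, using #define for the symbolic constants (the specific limits LOWER, UPPER, and STEP are assumed for illustration), is:
#include <stdio.h>

#define LOWER 0      /* symbolic constants replace "magic numbers" (values assumed) */
#define UPPER 300
#define STEP  20

int main(void)
{
    int fahr;

    /* initialization; condition; expression executed after each iteration */
    for (fahr = LOWER; fahr <= UPPER; fahr += STEP)
        printf("%3d %6.1f\n", fahr, (5.0 / 9.0) * (fahr - 32));  /* conversion done inside printf() */
    return 0;
}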
•	Style Note: Multi-word names should be written likeThis,
and variables should always start with a lowercase letter. To
distinguish them from variables, symbolic constants should
always be written in UPPERCASE.
5.6. IDENTIFIERS
Identifiers (variable names, function names, and so on) are case-sensitive
and made up of letters and digits. An identifier's first character must be
a letter, where the underscore (_) counts as a letter.
The C programming language has 32 reserved keywords that cannot
be used as identifiers (e.g., int, while, etc.). It is also wise to avoid
redefining identifiers used by the C standard library.
5.7. TYPES
C is a typed programming language. Every variable has a type that specifies
what values it may represent, how its data is stored in memory, and what
operations may be performed on it. By compelling the programmer to
explicitly declare a type for every variable and interface, the type system
lets the compiler catch type-mismatch errors and so prevents a large source
of faults (Miné, 2006; Majumdar and Xu, 2007).
In the C programming language, there are three main types: characters,
integers, and floating-point numbers.
The numerical types come in a variety of sizes; a collection of C types
and their typical sizes may be found in Table 5.1.
Sizes may differ from one platform to the next. Almost every modern
processor represents an integer with a minimum of 32 bits, and many
increasingly use 64 bits. In general, the size of an int reflects a machine's
natural word size, the native size with which the CPU processes data
and instructions (Lahiri et al., 2012; Irlbeck, 2015).
The standard simply says that a short int must be at least 16
bits, that a long int must be at least 32 bits, and that a short int may not be
larger than an int, which in turn may not be larger than a long int.
Source: http://www.freebookcenter.net/programming-books-download/An-Introduction-to-the-C-Programming-Language-and-Software-Design-(PDF-158P).html.
•	Note: The sizeof operator can be used to determine the size
of a type in characters. Despite its appearance, this operator is not a
function; it is a keyword. It yields an unsigned integer of type size_t,
which is defined in the stddef.h header file (Figure 5.8).
Source: https://www.getfreeebooks.com/an-introduction-to-the-c-programming-language-and-software-design/.
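Figure 5.8 is described as code for printing the size of various types; a sketch along those lines (not the original listing) is:
#include <stdio.h>
#include <stddef.h>   /* size_t */

int main(void)
{
    printf("char:   %u\n", (unsigned)sizeof(char));    /* always 1 by definition */
    printf("short:  %u\n", (unsigned)sizeof(short));
    printf("int:    %u\n", (unsigned)sizeof(int));
    printf("long:   %u\n", (unsigned)sizeof(long));
    printf("float:  %u\n", (unsigned)sizeof(float));
    printf("double: %u\n", (unsigned)sizeof(double));
    return 0;
}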
The keywords short and long are known as type qualifiers, since they alter
the size of a basic int type. When used alone, as in short a; and long a;, they
are equivalent to short int and long int, respectively. Other qualifiers are
unsigned, signed, volatile, and const. The qualifiers
unsigned and signed can be applied to any integer type, including char.
A signed type can hold negative values; the sign bit is the number's most
significant bit (MSB), and the value is usually stored in 2's-complement
binary. A 16-bit signed short, for example, can represent the integers –32,768
to 32,767, while a 16-bit unsigned short can represent the numbers 0 to 65,535
(Ball and Rajamani, 2001; Alturki, 2017).
• Note: By default, integer types are signed. Plain chars, on the
other hand, are either unsigned or signed by default, depending
on the computer.
The qualifier const indicates that the variable to which it references is
immutable.
const int DoesNotChange = 5;
DoesNotChange = 6; /* Error: will not compile */
The qualifier volatile is used to refer to variables whose value may vary
in a way that is outside the power of the program’s usual operations. This
is important for things like multi-threaded programming or interacting with
hardware, which are issues that are outside the range of this document.
The volatile qualifier is rarely needed in ordinary, portable C programs,
and as a result, it will not be discussed in any further detail in this chapter
(McMillan, 1993; Yang and Seger, 2003).
Furthermore, there is a type called void, which denotes a type that has
“no value” associated with it. In functions that do not take any arguments,
it is used as an argument, and in functions that return no value, it is used as
a return type.
5.8. CONSTANTS
Constants exist in different types and representations. This section gives
examples of various constant types. The type of the integer constant
1234 is int. The suffix L, as in 1234L, denotes a long int constant; a U, as in
1234U, denotes an unsigned int; and UL denotes an unsigned long (Bryant et al.,
2002; Ringenburg and Grossman, 2005).
Source: https://www.getfreeebooks.com/an-introduction-to-the-c-programming-language-and-software-design/.
•	Important: A conversion specifier and the variable it refers to
must be of the same type. If they are not, the program will either
crash or output garbage. For example:
printf("%f", 52);   /* integer value passed for a floating-point specifier */
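As a small illustration of constant suffixes together with matching conversion specifiers (a generic sketch, not taken from the book's figures):
#include <stdio.h>

int main(void)
{
    int i = 1234;                 /* plain integer constant has type int */
    long l = 1234L;               /* L suffix: long int constant */
    unsigned int u = 1234U;       /* U suffix: unsigned int constant */
    unsigned long ul = 1234UL;    /* UL suffix: unsigned long constant */
    double x = 52.0;

    printf("%d %ld %u %lu %f\n", i, l, u, ul, x);  /* each specifier matches its argument's type */
    /* printf("%f\n", 52);  -- mismatch: integer passed for a floating-point specifier */
    return 0;
}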
REFERENCES
1. Alturki, M. A., (2017). A symbolic rewriting semantics of the COMPASS
modeling language. In: 2017 IEEE International Conference on
Information Reuse and Integration (IRI) (pp. 283–290). IEEE.
2. Ambriola, V., Giannotti, F., Pedreschi, D., & Turini, F., (1985).
Symbolic semantics and program reduction. IEEE Transactions on
Software Engineering, (8), 784–794.
3. Austin, T. M., Breach, S. E., & Sohi, G. S., (1994). Efficient detection
of all pointer and array access errors. In: Proceedings of the ACM
SIGPLAN 1994 conference on Programming Language Design and
Implementation (pp. 290–301).
4. Backus, D. J. (2003). 2.9. 1 Obsolescence and Deletions 2.9. 2” Hello
World” Example 2.10 Fortran 95 2.10. 1 Conditional Compilation and
Varying Length Strings 2.11 Fortran 2003 (Vol. 3).
5. Ball, T., & Rajamani, S. K., (2001). Automatically validating temporal
safety properties of interfaces. In: International SPIN Workshop
on Model Checking of Software (pp. 102–122). Springer, Berlin,
Heidelberg.
6. Boudjema, E. H., Faure, C., Sassolas, M., & Mokdad, L., (2018).
Detection of security vulnerabilities in C language applications.
Security and Privacy, 1(1), e8.
7. Brooks, D. R., (1999). The basics of C programming. In: C
Programming: The Essentials for Engineers and Scientists (pp. 23–
69). Springer, New York, NY.
8. Bryant, R. E., Lahiri, S. K., & Seshia, S. A., (2002). Modeling and
verifying systems using a logic of counter arithmetic with lambda
expressions and uninterpreted functions. In: International Conference
on Computer Aided Verification (pp. 78–92). Springer, Berlin,
Heidelberg.
9. Caprile, C., & Tonella, P., (1999). Nomen est omen: Analyzing the
language of function identifiers. In: Sixth Working Conference on
Reverse Engineering (Cat. No. PR00303) (pp. 112–122). IEEE.
10. Chan, S. W., McOmish, F., Holmes, E. C., Dow, B., Peutherer, J. F.,
Follett, E., & Simmonds, P., (1992). Analysis of a new hepatitis C virus
type and its phylogenetic relationship to existing variants. Journal of
General Virology, 73(5), 1131–1141.
47. Yang, J., & Seger, C. J., (2003). Introduction to generalized symbolic
trajectory evaluation. IEEE Transactions on Very Large Scale
Integration (VLSI) Systems, 11(3), 345–353.
CHAPTER 6
DYNAMIC PROGRAMMING
CONTENTS
6.1. Introduction..................................................................................... 154
6.2. An Elementary Example................................................................... 154
6.3. Formalizing the Dynamic-Programming Approach........................... 163
6.4. Optimal Capacity Expansion............................................................ 167
6.5. Discounting Future Returns.............................................................. 172
6.6. Shortest Paths in a Network.............................................................. 173
6.7. Continuous State-Space Problems.................................................... 177
6.8. Dynamic Programming Under Uncertainty...................................... 179
References.............................................................................................. 187
6.1. INTRODUCTION
Dynamic programming is an optimization approach that transforms a complex
problem into a sequence of simpler problems. The essential characteristic
of dynamic programming is the multistage nature of the optimization
procedure. More so than the optimization techniques described previously,
dynamic programming provides a general framework for analyzing many
problem types. Within this framework a variety of optimization techniques
can be employed to solve particular aspects of a more general formulation.
Usually, creativity is required before we can recognize that a particular
problem can be cast effectively as a dynamic program; and often subtle
insights are necessary to restructure the formulation so that it can be solved
effectively (Amini et al., 1990; Osman et al., 2005).
We begin by providing a general insight into the dynamic programming
approach by treating a simple example in some detail. We then give a
formal characterization of dynamic programming under certainty, followed
by an in-depth example dealing with optimal capacity expansion. Other
topics covered in the chapter include the discounting of future returns,
the relationship between dynamic-programming problems and shortest
paths in networks, an example of a continuous-state-space problem, and
an introduction to dynamic programming under uncertainty (Powell et al.,
2002; Momoh, 2009).
Source: https://www.researchgate.net/figure/Street-map-with-intersection-delays-Taken-from-30_fig4_330557459.
Source: https://www.researchgate.net/publication/330557459_Testing_Time-of-Use_and_Subscription_based_grid_tariff_structures_using_a_Prosumer_model.
Our first decision (from right to left) occurs with one stage, or intersection,
left to go. If for example, we are in the intersection corresponding to the
highlighted box in Figure 6.2, we incur a delay of three minutes in this
intersection and a delay of either eight or two minutes in the last intersection,
depending upon whether we move up or down. Therefore, the smallest
possible delay, or optimal solution, in this intersection is 3+2 = 5 minutes (Li
et al., 2014; Jamal et al., 2014). Similarly, we can consider each intersection
(box) in this column in turn and compute the smallest total delay as a result of
being in each intersection. The solution is given by the bold-faced numbers
in Figure 6.3. The arrows indicate the optimal decision, up or down, in any
intersection with one stage, or one intersection, to go (Sen and Head, 1997;
Guo et al., 2019).
Note that the numbers in bold-faced type in Figure 6.3 completely
summarize, for decision-making purposes, the total delays over the last
two columns. Although the original numbers in the last two columns have
been used to determine the bold-faced numbers, whenever we are making
decisions to the left of these columns, we need only know the bold-faced
numbers. In an intersection, say the topmost with one stage to go, we know
that our (optimal) remaining delay, including the delay in this intersection,
is five minutes. The bold-faced numbers summarize all delays from this
point on. For decision-making to the left of the bold-faced numbers, the last
column can be ignored (Yagar and Han, 1994; Kappelman and Sinha, 2021).
With this in mind, let us back up one more column, or stage, and
compute the optimal solution in each intersection with two intersections to
go (Battigalli and Siniscalchi, 2002; Dayan and Daw, 2008). For example, in
the bottom-most intersection, which is highlighted in Figure 6.3, we incur a
delay of two minutes in the intersection, plus four or six additional minutes,
depending upon whether we move up or down. To minimize delay, we move
up and incur a total delay in this intersection and all remaining intersections
of 2 + 4 = 6 minutes. The remaining computations in this column are
summarized in Figure 6.4, where the bold-faced numbers reflect the optimal
total delays in each intersection with two stages, or two intersections, to go
(Van Damme, 1989; Hauk et al., 2002).
Once we have computed the optimal delays in each intersection with
two stages to go, we can again move back one column and determine the
optimal delays and the optimal decisions with three intersections to go.
In the same way, we can continue to move back one stage at a time, and
compute the optimal delays and decisions with four and five intersections to
go, respectively. Figure 6.5 summarizes these calculations (Al-Najjar, 1995;
Flint et al., 2010).
Figure 6.5(c) shows the optimal solution to the problem. The least
possible delay through the network is 18 minutes. To follow the least-cost
route, a commuter has to start at the second intersection from the bottom.
According to the optimal decisions, or arrows, in the diagram, we see that
he should next move down to the bottom-most intersection in column 4. His
following decisions should be up, down, up, down, arriving finally at the
bottom-most intersection in the last column (Hansen and Zilberstein, 2001;
Kraft et al., 2013).
Source: https://www.ime.unicamp.br/~andreani/MS515/capitulo7.pdf.
Source: http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf.
•	However, the commuters are probably not free to arbitrarily choose
the intersection they wish to start from. We can assume that their
homes are adjacent to only one of the leftmost intersections, and
therefore each commuter's starting point is fixed. This assumption
does not cause any difficulty since we have, in fact, determined
the optimal sequence of decisions from every possible starting intersection.
vn(sn) = Optimal value (minimum delay) over the current and subsequent
stages (intersections), given that we are in state sn (in a particular intersection)
with n stages (intersections) to go.
The optimal-value function at each stage in the decision-making process
is given by the appropriate column of Figure 6.5(c).
Source: https://www.ime.unicamp.br/~andreani/MS515/capitulo7.pdf.
We can write down a recursive relationship for computing the optimal-
value function by recognizing that, at each stage, the decision in a particular
state is determined simply by choosing the minimum total delay (Curtis,
1997; Sauthoff, 2010). If we number the states at each stage as sn = 1 (bottom
intersection) up to sn = 6 (top intersection), then:
vn(sn) = tn(sn) + Min [vn–1(sn–1)]   (n = 1, 2, …, 5)  (1)
where the minimum is taken over the two states sn–1 that can be reached from
state sn by moving up or down, and tn(sn) is the delay time in intersection sn at stage n.
The columns of Figure 6.5(c) are then determined by starting at the right
while successively applying Eq. (1):
v0(s0) = t0(s0) (s0 = 1, 2, …, 6) (2)
Corresponding to this optimal-value function is an optimal-decision
function, which is simply a list giving the optimal decision for each state at
every stage. For this example, the optimal decisions are given by the arrows
leaving each box in every column of Figure 6.5(c).
The method of computation illustrated above is called backward
induction, since it starts at the right and moves back one stage at a time.
Its analog, forward induction, which is also possible, starts at the left and
moves forward one stage at a time (Boutilier et al., 1999; Shin et al., 2019).
The spirit of the calculations is identical but the interpretation is somewhat
different. The optimal-value function for forward induction is defined by:
un(sn) = Optimal value (minimum delay) over the current and completed
stages (intersections), given that we are in state sn (in a particular intersection)
with n stages (intersections) to go.
The recursive relationship for forward induction on the minimum-delay
problem is:
un(sn) = tn(sn) + Min [un+1(sn+1)]
where the minimum is taken over the states sn+1 at the preceding stage from
which sn can be reached by moving up or down.
Source: http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf.
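As a concrete illustration of backward induction with recursion (1), the following C sketch computes the optimal-value function for a small minimum-delay grid. The delay values and the assumed up/down topology are made up for illustration; they are not the data of the figures in this chapter:
#include <stdio.h>

#define STAGES 4   /* number of stages; all values below are illustrative */
#define STATES 3   /* states 1..STATES at each stage */

int main(void)
{
    /* delay[n][s]: delay in the intersection at state s with n stages to go (made-up numbers) */
    static const int delay[STAGES + 1][STATES + 1] = {
        {0, 2, 8, 4},   /* n = 0 */
        {0, 3, 1, 5},   /* n = 1 */
        {0, 6, 2, 3},   /* n = 2 */
        {0, 1, 4, 2},   /* n = 3 */
        {0, 5, 3, 6},   /* n = 4 */
    };
    int v[STAGES + 1][STATES + 1];   /* v[n][s]: minimum remaining delay, as in Eq. (1) */
    int n, s;

    for (s = 1; s <= STATES; s++)            /* boundary condition: v0(s0) = t0(s0) */
        v[0][s] = delay[0][s];

    for (n = 1; n <= STAGES; n++) {          /* backward induction, one stage at a time */
        for (s = 1; s <= STATES; s++) {
            /* assumed topology: "up" keeps the same state, "down" moves to state s-1
               (clamped at the bottom edge) */
            int up   = v[n - 1][s];
            int down = v[n - 1][s > 1 ? s - 1 : s];
            v[n][s] = delay[n][s] + (up < down ? up : down);
        }
    }

    for (s = 1; s <= STATES; s++)
        printf("v%d(%d) = %d\n", STAGES, s, v[STAGES][s]);
    return 0;
}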
6.3.1. Stages
The essential feature of the dynamic-programming approach is the structuring
of optimization problems into multiple stages, which are solved sequentially
one stage at a time. Although each one-stage problem is solved as an ordinary
optimization problem, its solution helps to define the characteristics of the
next one-stage problem in the sequence (Bush et al., 2000; Westphal et al.,
2003).
Often, the stages represent different time periods in the problem’s
planning horizon. For example, the problem of determining the level of
inventory of a single commodity can be stated as a dynamic program. The
decision variable is the amount to order at the beginning of each month;
the objective is to minimize the total ordering and inventory-carrying costs;
the basic constraint requires that the demand for the product be satisfied. If
we can order only at the beginning of each month and we want an optimal
ordering policy for the coming year, we could decompose the problem into
12 stages, each representing the ordering decision at the beginning of the
corresponding month (Boutilier et al., 2000; Lewis et al., 2013).
Sometimes the stages do not have time implications. For example,
in the simple situation presented in the preceding section, the problem of
determining the routes of minimum delay from the homes of the commuters
to the downtown parking lots was formulated as a dynamic program. The
decision variable was whether to choose up or down in any intersection, and
the stages corresponded to the number of intersections still to be traversed
before reaching the parking lots.
6.3.2. States
Associated with each stage of the optimization problem are the states of
the process. The states reflect the information required to fully assess the
consequences that the current decision has upon future actions. In the
inventory problem given in this section, each stage has only one variable
describing the state: the inventory level on hand of the single commodity
(Barnett et al., 2004; Johannesson et al., 2007). The minimum-delay problem
also has one state variable: the intersection a commuter is in at a particular
stage.
The specification of the states of the system is perhaps the most critical
design parameter of the dynamic programming model (Hunt, 1963; Sutton
et al., 1992). There are no set rules for doing this. In fact, for the most part,
this is an art often requiring creativity and subtle insight about the problem
being studied. The essential properties that should motivate the selection of
states are:
• The states should convey enough information to make future
decisions without regard to how the process reached the current
state; and
• The number of state variables should be small, since the
computational effort associated with the dynamic-programming
approach is prohibitively expensive when there are more than
two, or possibly three, state variables involved in the model
formulation.
This last feature considerably limits the applicability of dynamic
programming in practice.
The recursive optimization procedure can be based on a backward induction
process, where the first stage to be analyzed is the final stage of the problem,
and problems are solved moving back one stage at a
time until all stages are included. Alternatively, the recursive procedure can
be based on a forward induction process, where the first stage to be solved
is the initial stage of the problem and problems are solved moving forward
one stage at a time, until all stages are included (Cohen, 1981; Goguen et
al., 1992). In certain problem settings, only one of these induction processes
can be applied (e.g., only backward induction is allowed in most problems
involving uncertainties).
The basis of the recursive optimization procedure is the so-called
principle of optimality, which has already been stated: an optimal policy
has the property that, whatever the current state and decision, the remaining
decisions must constitute an optimal policy with regard to the state resulting
from the current decision (Nolan et al., 1972; Gratton et al., 2008).
Formally, the backward recursion for the optimal-value function takes the form:
vn(sn) = Max [fn(dn, sn) + vn–1(tn(dn, sn))], dn ∈ Dn,
where dn is the decision chosen for the current stage and state. Note that
there is no uncertainty as to what the next state will be, once the current
state and current decision are known. In Section 6.7, we will extend these
concepts to include uncertainty in the formulation.
Our multistage decision process can be described by the diagram given
in Figure 6.7. Given the current state sn which is a complete description of
the system for decision-making purposes with n stages to go, we want to
choose the decision dn that will maximize the total return over the remaining
stages. The decision dn, which must be chosen from a set Dn of permissible
decisions, produces a return at this stage of fn(dn,sn) and results in a new state
sn–1 with (n – 1) stages to go. The new state at the beginning of the next stage
is determined by the transition function sn–1 = tn(dn,sn), and the new state is
a complete description of the system for decision-making purposes with (n
– 1) stages to go. Note that the stage returns are independent of one another
(El Karoui et al., 2001; Ordonez, 2009).
In order to illustrate these rather abstract notions, consider a simple
inventory example. In this case, the state sn of the system is the inventory
level In with n months to go in the planning horizon. The decision dn is the
amount On to order this month. The resulting inventory level In–1 with (n – 1)
months to go is given by the usual inventory-balance relationship:
In–1 = In + On – Rn
Source: https://www.ime.unicamp.br/~andreani/MS515/capitulo7.pdf.
where Rn is the demand requirement this month. Thus, formally, the
transition function with n stages to go is defined to be:
In–1 = tn(In, On) = In + On – Rn.
it is assumed that there is no need ever to construct more than eight plants.
Figure 6.8 provides a graph depicting the allowable capacity (states) over
time. Any node of this graph is completely described by the corresponding
year number and level of cumulative capacity, say the node (n, p). Note
that we have chosen to measure time in terms of years to go in the planning
horizon (Bradford et al., 1971; Wu et al., 2004).
Source: http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf.
The cost of traversing any upward-sloping arc is the common cost of $1.5
million plus the plant costs, which depend upon the year of construction and
whether 1, 2, or 3 plants are completed. Measured in thousands of dollars,
these costs are:
1500 + cnxn
where cn is the cost per plant in year n, and xn is the number of plants
constructed. The cost for traversing any horizontal arc is zero, since these
arcs correspond to a situation in which no plant is constructed in the current
year (Megiddo, 1984).
Rather than simply developing the optimal-value function in equation
form, as we have done previously, we will perform the identical calculations
in Scheme form to highlight the dynamic-programming methodology. To
begin, we label the final state zero or, equivalently define the “stage-zero”
optimal-value function to be zero for all possible states at stage zero. We will
define a state as the cumulative total number of plants completed (Kaplan
et al., 1975; Gil et al., 2014). Since the only permissible final state is to
construct the entire cumulative demand of eight plants, we have s0 = 8 and
v0(8) = 0.
Now we can proceed recursively to determine the optimal-value function
with one stage remaining. Since the demand data requires 7 plants by 1985,
with one year to go the only permissible states are to have completed 7 or 8
plants. We can describe the situation by Scheme 1.
The dashes indicate that the particular combination of current state
and decision results in a state that is not permissible. In this table there are
no choices, since, if we have not already completed eight plants, we will
construct one more to meet the demand. The cost of constructing the one
additional plant is the $1,500 common cost plus the $5,200 cost per plant,
for a total of $6,700. (All costs are measured in thousands of dollars.) The
column headed d gives the optimal decision function, which specifies
the optimal number of plants to construct, given the current state of the
system (Bickel, 1978; Ahmed et al., 2003).
Now let us consider what action we should take with two years (stages)
to go. Scheme 2 indicates the possible costs of each state:
If we have already completed eight plants with two years to go, then
clearly, we will not construct any more. If we have already completed seven
plants with two years to go, then we can either construct the one plant we
need this year or postpone its construction. Constructing the plant now costs
$1,500 in common costs plus $5,500 in variable costs, and results in state
8 with one year to go (Sherali et al., 1982; Schapire et al., 1999). Since the
cost of state 8 with one year to go is zero, the total cost over the last two
years is $7,000. On the other hand, delaying construction costs zero this year
and results in state 7 with one year to go. Since the cost of state 7 with one
year to go is $6,700, the total cost over the last two years is $6,700. If we
arrive at the point where we have two years to go and have completed seven
plants, it pays to delay the production of the last plant needed. In a similar
way, we can determine that the optimal decision when in state 6 with two
years to go is to construct two plants during the next year (Myerson, 1982;
Guimaraes et al., 2010).
To make sure that these ideas are firmly understood, we will determine
the optimal-value function and optimal decision with three years to go.
Consider Scheme 3 for three years to go:
Now suppose that, with three years to go, we have completed five plants.
We need to construct at least one plant this year in order to meet demand.
In fact, we can construct either 1, 2, or 3 plants. If we construct one plant, it
costs $1,500 in common costs plus $5,700 in plant costs, and results in state
6 with two years to go (Shier, 1979). Since the minimum cost following the
optimal policy for the remaining two years is then $12,500, our total cost for
three years would be $19,700. If we construct two plants, it costs the $1,500
in common costs plus $11,400 in plant costs and results in state 7 with two
years to go. Since the minimum cost following the optimal policy for the
remaining two years is then $6,700, our total cost for three years would be
$19,600. Finally, if we construct three plants, it costs the $1,500 in common
costs plus $17,100 in plant costs and results in state 8 with two years to
go (Shier, 1976; Sung et al., 2000). Since the minimum cost following the
optimal policy for the remaining two years is then zero, our total cost for
three years would be $18,600.
Hence, the optimal decision, having completed five plants (being in
state 5) with three years (stages) to go, is to construct three plants this year.
The remaining Schemes for the entire dynamic-programming solution are
determined in a similar manner (see Figure 6.9).
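The three alternatives just evaluated can be reproduced with a few lines of C, using the dollar figures quoted above (in thousands):
#include <stdio.h>

int main(void)
{
    /* figures quoted in the text, in thousands of dollars: the common cost, the
       per-plant cost with three years to go, and the already-computed optimal
       values with two years to go */
    int common = 1500, per_plant = 5700;
    int v2[9] = {0};
    int state = 5, x, best = 0, best_cost = 0;

    v2[6] = 12500;
    v2[7] = 6700;
    v2[8] = 0;

    for (x = 1; x <= 3; x++) {               /* construct 1, 2, or 3 plants this year */
        int total = common + per_plant * x + v2[state + x];
        printf("construct %d plant(s): total cost %d\n", x, total);
        if (best == 0 || total < best_cost) {
            best = x;
            best_cost = total;
        }
    }
    printf("optimal decision: construct %d plants (cost %d)\n", best, best_cost);
    return 0;
}
This reproduces the totals of $19,700, $19,600, and $18,600 and the optimal decision of constructing three plants.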
Since we start the construction process with no plants (i.e., in state 0) with
six years (stages) to go, we can proceed to determine the optimal sequence
of decisions by considering the Schemes in the reverse order (Azevedo et
al., 1993; Hamed, 2010). With six years to go it is optimal to construct three
plants, resulting in state 3 with five years to go. It is then optimal to construct
three plants, resulting in state 6 with four years to go, and so forth. The
optimal policy is then shown in the tabulation below:
The present value of $1 received n periods from now is 1/(1 + i)^n or, equivalently,
β^n, where i is the per-period interest rate and β = 1/(1 + i) (Murdoch et al.,
1998; Rezaee et al., 2012).
The concept of discounting can be incorporated into the dynamic-
programming framework very easily since we often have a return per period
(stage) that we may wish to discount by the per-period discount factor
(Tierney, 1996; Tao et al., 2006).
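For example, under the general formulation of Section 6.3, a per-period discount factor β enters the backward recursion simply by discounting the optimal value of the remaining stages; in the chapter's notation, this standard discounted form (a sketch, not a formula quoted from the text) is:
vn(sn) = Max [fn(dn, sn) + β vn–1(tn(dn, sn))], dn ∈ Dn.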
Source: https://www.ime.unicamp.br/~andreani/MS515/capitulo7.pdf.
By ignoring the numbered nodes and their incident arcs, the procedure is
continued until all nodes are numbered (Sahinidis, 2004; Silva et al., 2020).
We can apply the dynamic-programming approach by viewing each
node as a stage, using either backward induction to consider the nodes in
ascending order, or forward induction to consider the nodes in reverse order
(Bertsekas et al., 1995; Berg et al., 2017). For backward induction, vn will be
interpreted as the longest distance from node n to the end node. Setting v1 =
0, dynamic programming determines v2, v3, …, vN in order, by the recursion
vn = Max [dnj + vj] over j < n,
where dnj is the given distance on arc n–j. The results of this procedure are
given as node labels in Figure 6.11 for the critical-path example.
Source: http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf.
For a shortest-path problem, we use minimization instead of
maximization. Note that the algorithm finds the longest (shortest) paths
from every node to the end node. If we want only the longest path to the start
node, we can terminate the procedure once the start node has been labeled.
Finally, we could have found the longest distances from the start node to all
other nodes by labeling the nodes in the reverse order, beginning with the
start node (Figure 6.12) (Sahinidis, 2004; Topaloglu et al., 2006).
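A C sketch of this labeling procedure on a small acyclic network is given below; the arc lengths are made up, and node 1 is taken as the end node (consistent with setting v1 = 0):
#include <stdio.h>

#define N 5            /* nodes numbered 1..N, node 1 being the end node */
#define NONE -1        /* marks a missing arc */

int main(void)
{
    /* d[n][j]: length of arc n-j, defined only for j < n (illustrative data) */
    static const int d[N + 1][N + 1] = {
        {0},                                  /* row 0 unused */
        {0},                                  /* node 1: end node, no outgoing arcs */
        {0, 4},                               /* arc 2-1 of length 4 */
        {0, 2, 3},                            /* arcs 3-1 (2) and 3-2 (3) */
        {0, NONE, 5, 1},                      /* arcs 4-2 (5) and 4-3 (1); no arc 4-1 */
        {0, NONE, NONE, 6, 2},                /* arcs 5-3 (6) and 5-4 (2) */
    };
    int v[N + 1];
    int n, j;

    v[1] = 0;                                 /* distance from the end node to itself */
    for (n = 2; n <= N; n++) {                /* label the nodes in ascending order */
        v[n] = 0;
        for (j = 1; j < n; j++)               /* vn = Max over j < n of [dnj + vj] */
            if (d[n][j] != NONE && d[n][j] + v[j] > v[n])
                v[n] = d[n][j] + v[j];
        printf("v%d = %d\n", n, v[n]);
    }
    return 0;
}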
Source: https://www.ime.unicamp.br/~andreani/MS515/capitulo7.pdf.
A more complicated algorithm must be given for the more general
problem of finding the shortest path between two nodes, say nodes 1 and N,
in a network without negative cycles. In this case, we can devise a dynamic-
programming algorithm based upon a value function defined as follows:
vn(j) = Shortest distance from node 1 to node j along paths using at most
n intermediate nodes.
By definition, then:
v0(j) = d1j for j = 2, 3, …, N
the length d1j of arc 1–j since no intermediate nodes are used. The
dynamic-programming recursion is:
vn(j) = Min over i {dij + vn–1(i)}, 1 ≤ j ≤ N (7)
which uses the principle of optimality: that any path from node 1 to node
j, using at most n intermediate nodes, arrives at node j from node i along
arc i–j after using the shortest path with at most (n – 1) intermediate nodes
from node 1 to node i. We allow i = j in the recursion and take djj = 0, since
the optimal path using at most n intermediate nodes may coincide with the
optimal path with length vn–1(j) using at most (n – 1) intermediate nodes.
The algorithm computes the shortest path from node 1 to every other
node in the network. It terminates when vn(j) = vn–1(j) for every node j, since
computations in Eqn. (7) will be repeated at every stage from n on. Because
no path (without cycles) uses any more than (N – 1) intermediate nodes,
where N is the total number of nodes, the algorithm terminates after at most
(N – 1) steps (Xu et al., 2013; Gaggero et al., 2014).
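A C sketch of recursion (7) on a small example network follows; the arc distances are made up, and a large constant stands in for missing arcs:
#include <stdio.h>

#define N   4            /* nodes 1..N; illustrative distances, not from the text */
#define INF 1000000      /* stands in for "no arc" */

int main(void)
{
    /* dist[i][j]: length of arc i-j; dist[j][j] = 0 as required by recursion (7) */
    static const int dist[N + 1][N + 1] = {
        {0},
        {0, 0,   5,   8,   INF},
        {0, INF, 0,   2,   7},
        {0, INF, INF, 0,   1},
        {0, INF, INF, INF, 0},
    };
    int v[N + 1], vnew[N + 1];
    int i, j, n, changed;

    for (j = 1; j <= N; j++)                  /* v0(j) = d1j */
        v[j] = dist[1][j];

    for (n = 1; n <= N - 1; n++) {            /* at most N-1 intermediate nodes are ever needed */
        changed = 0;
        for (j = 1; j <= N; j++) {
            vnew[j] = v[j];                   /* the i = j term, since djj = 0 */
            for (i = 1; i <= N; i++)
                if (dist[i][j] < INF && v[i] + dist[i][j] < vnew[j])
                    vnew[j] = v[i] + dist[i][j];   /* recursion (7) */
        }
        for (j = 1; j <= N; j++) {
            if (vnew[j] != v[j]) changed = 1;
            v[j] = vnew[j];
        }
        if (!changed) break;                  /* vn(j) = vn-1(j) for every j: terminate */
    }

    for (j = 2; j <= N; j++)
        printf("shortest distance 1 -> %d: %d\n", j, v[j]);
    return 0;
}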
for all possible values of B0. Note that the state transition function is simply:
Bn–1 = tn(xn, Bn) = Bn – cnxn.
We can again illustrate the usual principle of optimality: Given budget
Bn at stage n, whatever decision is made with regard to funding the nth
program, the remaining budget must be allocated optimally among the first
(n – 1) programs. If these calculations were carried to completion, resulting
in v10(B10) and the corresponding optimal-decision function, then the problem
would be solved for all possible budget levels, not just $3.4 M and $4.2 M (Lee et al., 2006; Seuken and
Zilberstein, 2007).
Although this example has a continuous state space, a finite number of
ranges can be constructed because of the zero–one nature of the decision
variables. In fact, all breaks in the range of the state space either are the
breaks from the previous stage, or they result from adding the cost of the new
program to the breaks in the previous range. This is not a general property
of continuous state space problems, and in most cases such ranges cannot be
determined. Usually, what is done for continuous state space problems is that
they are converted into discrete state problems by defining an appropriate
grid on the continuous state space. The optimal-value function is then
computed only for the points on the grid. For our cost/benefit example, the
total budget must be between zero and $62.5 M, which provides a range on
the state space, although at any stage a tighter upper limit on this range is
determined by the sum of the budgets of the first n programs. An appropriate
grid would consist of increments of $0.1 M over the limits of the range at
each stage, since this is the accuracy with which the program costs have
been estimated. The difference between problems with continuous state
spaces and those with discrete state spaces essentially then disappears for
computational purposes (Gannon, 1974; Vogstad and Kristoffersen, 2010).
Source: http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf.
When uncertainty is present in a dynamic-programming problem,
a specific decision for a given state and stage of the process does not, by
itself, determine the state of the system at the next stage. Furthermore, this
decision may not even determine the return for the current stage. Rather, in
dynamic programming under uncertainty, given the state of the system sn
with n stages to go and the current decision dn, an uncertain event occurs
which is determined by a random variable ẽn whose outcome en is not under
the control of the decision maker (Zhang et al., 2019; Liu et al., 2020).
The outcomes of the random variable are governed by a probability
distribution, pn(en|dn,sn), which may be the same for every stage or may be
conditional on the stage, the state at the current stage, and even the decision
at the current stage.
Source: http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf.
The demand for the item is uncertain, but its probability distribution is
identical for each of the coming two months. The probability distribution of
the demand is as follows:
Demand Probability
0 0.25
1 0.40
2 0.20
3 0.15
The issue to be resolved is how many units to produce during the first
month and, depending on the actual demand in the first month, how many
units to produce during the second month. Since demand is uncertain, the
inventory at the end of each month is also uncertain. In fact, demand could
exceed the available units on hand in any month, in which case all excess
demand results in lost sales. Consequently, our production decision must find
the proper balance between production costs, lost sales, and final inventory
salvage value (Costa and Kariniotakis, 2007).
The states for this type of problem are usually represented by the
inventory level In at the beginning of each month. Moreover, the problem is
characterized as a two-stage problem, since there are two months involved
in the inventory-replenishment decision. To determine the optimal-value
function, let:
vn(In) = Maximum contribution, given that we have In units of inventory
with n stages to go.
We initiate the backward induction procedure by determining the
optimal-value function with 0 stages to go. Since the salvage value is $500/
unit, we have:
I0 v0(I0)
0 0
1 500
2 1,000
3 1,500
To compute the optimal-value function with one stage to go, we need to
determine, for each inventory level (state), the corresponding contribution
associated with each possible production amount (decision) and level of
sales (outcome). For each inventory level, we select the production amount
that maximizes the expected contribution.
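To make the calculation concrete, the following sketch evaluates the expected contribution with one stage to go and zero inventory on hand. The demand distribution and the $500 salvage value are taken from the text; the unit production cost ($1,000), unit selling price ($2,000), and unit holding cost ($100) are assumed here for illustration rather than taken from the book's problem statement:
#include <stdio.h>

#define MAXD 3   /* largest possible monthly demand */

int main(void)
{
    /* demand distribution from the text */
    double prob[MAXD + 1] = {0.25, 0.40, 0.20, 0.15};

    /* assumed cost data (hypothetical placeholders) */
    double unit_cost = 1000.0, unit_price = 2000.0, holding = 100.0, salvage = 500.0;

    int inventory = 0, produce, demand;
    for (produce = 0; produce <= MAXD; produce++) {
        double expected = 0.0;
        for (demand = 0; demand <= MAXD; demand++) {
            int avail = inventory + produce;
            int sales = demand < avail ? demand : avail;   /* excess demand is lost */
            int left  = avail - sales;
            double contrib = -unit_cost * produce + unit_price * sales
                             - holding * left + salvage * left;   /* v0(I0) = salvage value */
            expected += prob[demand] * contrib;
        }
        printf("produce %d: expected contribution %.0f\n", produce, expected);
    }
    return 0;
}
Under these assumptions the best decision is to produce one unit, with an expected contribution of $600, which agrees with the value v1(0) = 600 and decision d = 1 shown in the summary below.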
Table 6.3. Computation of the optimal-value function with one stage to go. Columns: (1) state I1; (2) production d1; (3) sales s1; (4) probability; (5) resulting state; (6) production cost; (7) sales revenue; (8) inventory cost; (9) v0(I0); (10) probability × $; (11) expected contribution.
I1 v1(I1) d
0 600 1
1 1,600 0
2 2,560 0
3 3,200 0
Next, we need to compute the optimal-value function with two stages
to go. However, since we have assumed that there is no initial inventory on
hand, it is not necessary to describe the optimal-value function for every
possible state, but only for I2 = 0. Table 6.4 is similar to Table 6.3 and gives
the detailed computations required to evaluate the optimal-value function
for this case.
Table 6.4. Computation of the optimal-value function with two stages to go and no initial inventory. Columns: (1) state I2; (2) production d2; (3) sales s2; (4) probability; (5) resulting state; (6) production cost; (7) sales revenue; (8) inventory cost; (9) v1(I1); (10) probability × $; (11) expected contribution.
I2 v2(I2) d
0 1,600 2
The optimal strategy can be summarized by the decision tree given
in Figure 6.14. The expected contribution determined by the dynamic-
programming solution corresponds to weighting the contribution of every
path in this tree by the probability that this path occurs (Kreps and Porteus,
1979). The decision tree in Figure 6.14 emphasizes the contingent nature of
the optimal strategy determined by dynamic programming under uncertainty
(Deisenroth et al., 2009).
REFERENCES
1. Ahmed, S., King, A. J., & Parija, G., (2003). A multi-stage stochastic
integer programming approach for capacity expansion under
uncertainty. Journal of Global Optimization, 26(1), 3–24.
2. Aldasoro, U., Escudero, L. F., Merino, M., Monge, J. F., & Pérez,
G., (2015). On parallelization of a stochastic dynamic programming
algorithm for solving large-scale mixed 0–1 problems under uncertainty.
Top, 23(3), 703–742.
3. Al-Najjar, N., (1995). A theory of forward induction in finitely repeated
games. Theory and Decision, 38(2), 173–193.
4. Alterovitz, R., Branicky, M., & Goldberg, K., (2008). Motion planning
under uncertainty for image-guided medical needle steering. The
International Journal of Robotics Research, 27(11, 12), 1361–1374.
5. Amini, A. A., Weymouth, T. E., & Jain, R. C., (1990). Using dynamic
programming for solving variational problems in vision. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 12(9),
855–867.
6. Arnold, J., Shaw, S. W., & Pasternack, H. E. N. R. I., (1993). Efficient
target tracking using dynamic programming. IEEE transactions on
Aerospace and Electronic Systems, 29(1), 44–56.
7. Azevedo, J., Costa, M. E. O. S., Madeira, J. J. E. S., & Martins, E. Q.
V., (1993). An algorithm for the ranking of shortest paths. European
Journal of Operational Research, 69(1), 97–106.
8. Barnett, M., Leino, K. R. M., & Schulte, W., (2004). The spec#
programming system: An overview. In: International Workshop on
Construction and Analysis of Safe, Secure, and Interoperable Smart
Devices (pp. 49–69). Springer, Berlin, Heidelberg.
9. Bar-Shalom, Y., (1981). Stochastic dynamic programming: Caution
and probing. IEEE Transactions on Automatic Control, 26(5), 1184–
1195.
10. Barto, A. G., Bradtke, S. J., & Singh, S. P., (1995). Learning to act
using real-time dynamic programming. Artificial Intelligence, 72(1, 2),
81–138.
11. Basri, R., & Jacobs, D. W., (2003). Lambertian reflectance and linear
subspaces. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 25(2), 218–233.
12. Battigalli, P., & Siniscalchi, M., (2002). Strong belief and forward
induction reasoning. Journal of Economic Theory, 106(2), 356–391.
13. Bellman, R. E., & Dreyfus, S. E., (2015). Applied Dynamic
Programming (pp. 10–15). Princeton university press.
14. Berg, J. V. D., Patil, S., & Alterovitz, R., (2017). Motion planning
under uncertainty using differential dynamic programming in belief
space. In: Robotics Research (pp. 473–490). Springer, Cham.
15. Bertsekas, D. P., & Tsitsiklis, J. N., (1995). Neuro-dynamic
programming: An overview. In: Proceedings of 1995 34th IEEE
Conference on Decision and Control (Vol. 1, pp. 560–564). IEEE.
16. Bickel, T. C., (1978). The Optimal Capacity Expansion of a Chemical
Plant Via Nonlinear Integer Programming (Vol. 2, pp. 100–147). The
University of Texas at Austin.
17. Birge, J. R., & Louveaux, F., (2011). Introduction to Stochastic
Programming (pp. 1–10). Springer Science & Business Media.
18. Boutilier, C., Dean, T., & Hanks, S., (1999). Decision-theoretic
planning: Structural assumptions and computational leverage. Journal
of Artificial Intelligence Research, 11, 1–94.
19. Boutilier, C., Dearden, R., & Goldszmidt, M., (2000). Stochastic
dynamic programming with factored representations. Artificial
Intelligence, 121(1, 2), 49–107.
20. Bradford, D. F., & Oates, W. E., (1971). Towards a predictive theory
of intergovernmental grants. The American Economic Review, 61(2),
440–448.
21. Bush, W. R., Pincus, J. D., & Sielaff, D. J., (2000). A static analyzer
for finding dynamic programming errors. Software: Practice and
Experience, 30(7), 775–802.
22. Chadès, I., Chapron, G., Cros, M. J., Garcia, F., & Sabbadin, R., (2014).
MDPtoolbox: A multi‐platform toolbox to solve stochastic dynamic
programming problems. Ecography, 37(9), 916–920.
23. Chan, E. P., & Zhang, N., (2001). Finding shortest paths in large network
systems. In: Proceedings of the 9th ACM International Symposium on
Advances in Geographic Information Systems (pp. 160–166).
24. Cohen, J. R., (1981). Segmenting speech using dynamic programming.
The Journal of the Acoustical Society of America, 69(5), 1430–1438.
25. Costa, L. M., & Kariniotakis, G., (2007). A stochastic dynamic
programming model for optimal use of local energy resources in a
market environment. In: 2007 IEEE Lausanne Power Tech (pp. 449–
454). IEEE.
26. Cristobal, M. P., Escudero, L. F., & Monge, J. F., (2009). On stochastic
dynamic programming for solving large-scale planning problems under
uncertainty. Computers & Operations Research, 36(8), 2418–2428.
27. Curtis, S., (1997). Dynamic programming: A different perspective. In:
Algorithmic Languages and Calculi (pp. 1–23). Springer, Boston, MA.
28. Dai, C., Li, Y. P., & Huang, G. H., (2012). An interval-parameter
chance-constrained dynamic programming approach for capacity
planning under uncertainty. Resources, Conservation, and Recycling,
62, 37–50.
29. Dantzig, G. B., (2004). Linear programming under uncertainty.
Management Science, 50(12_supplement), 1764–1769.
30. Dayan, P., & Daw, N. D., (2008). Decision theory, reinforcement
learning, and the brain. Cognitive, Affective, & Behavioral Neuroscience,
8(4), 429–453.
31. De Moor, O., (1994). Categories, relations, and dynamic programming.
Mathematical Structures in Computer Science, 4(1), 33–69.
32. Deisenroth, M. P., Rasmussen, C. E., & Peters, J., (2009). Gaussian
process dynamic programming. Neurocomputing, 72(7–9), 1508–1524.
33. Dexter, G., Bello, K., & Honorio, J., (2021). Inverse reinforcement
learning in a continuous state space with formal guarantees. Advances
in Neural Information Processing Systems, 34–40.
34. Dorigo, M., & Di Caro, G., (1999). Ant colony optimization: A new
meta-heuristic. In: Proceedings of the 1999 Congress on Evolutionary
Computation-CEC99 (Cat. No. 99TH8406) (Vol. 2, pp. 1470–1477).
IEEE.
35. Eckstein, Z., & Wolpin, K. I., (1989). The specification and estimation
of dynamic stochastic discrete choice models: A survey. The Journal of
Human Resources, 24(4), 562–598.
36. El Karoui, N., Peng, S., & Quenez, M. C., (2001). A dynamic maximum
principle for the optimization of recursive utilities under constraints.
Annals of Applied Probability, 664–693.
37. Firdausiyah, N., Taniguchi, E., & Qureshi, A. G., (2019). Modeling
city logistics using adaptive dynamic programming based multi-
agent simulation. Transportation Research Part E: Logistics and
Transportation Review, 125, 74–96.
38. Flint, A., Mei, C., Murray, D., & Reid, I., (2010). A dynamic
programming approach to reconstructing building interiors. In:
European Conference on Computer Vision (pp. 394–407). Springer,
Berlin, Heidelberg.
39. Gaggero, M., Gnecco, G., & Sanguineti, M., (2014). Approximate
dynamic programming for stochastic N-stage optimization with
application to optimal consumption under uncertainty. Computational
Optimization and Applications, 58(1), 31–85.
40. Gannon, C. A., (1974). Optimal intertemporal supply of a public facility
under uncertainty: A dynamic programming approach to the problem
of planning open space. Regional and Urban Economics, 4(1), 25–40.
41. Gelfand, S. B., & Mitter, S. K., (1991). Recursive stochastic algorithms
for global optimization in R^d. SIAM Journal on Control and
Optimization, 29(5), 999–1018.
42. Genc, T. S., Reynolds, S. S., & Sen, S., (2007). Dynamic oligopolistic
games under uncertainty: A stochastic programming approach. Journal
of Economic Dynamics and Control, 31(1), 55–80.
43. Geramifard, A., Walsh, T. J., Tellex, S., Chowdhary, G., Roy, N., &
How, J. P., (2013). A tutorial on linear function approximators for
dynamic programming and reinforcement learning. Foundations and
Trends® in Machine Learning, 6(4), 375–451.
44. Giegerich, R., & Meyer, C., (2002). Algebraic dynamic programming.
In: International Conference on Algebraic Methodology and Software
Technology (pp. 349–364). Springer, Berlin, Heidelberg.
45. Giegerich, R., (2000). A systematic approach to dynamic programming
in bioinformatics. Bioinformatics, 16(8), 665–677.
46. Giegerich, R., (2000). Explaining and controlling ambiguity in
dynamic programming. In: Annual Symposium on Combinatorial
Pattern Matching (pp. 46–59). Springer, Berlin, Heidelberg.
47. Gil, E., Aravena, I., & Cárdenas, R., (2014). Generation capacity
expansion planning under hydro uncertainty using stochastic mixed
integer programming and scenario reduction. IEEE Transactions on
Power Systems, 30(4), 1838–1847.
48. Goguen, J. A., & Burstall, R. M., (1992). Institutions: Abstract model
theory for specification and programming. Journal of the ACM (JACM),
39(1), 95–146.
49. Gratton, S., Sartenaer, A., & Toint, P. L., (2008). Recursive trust-
region methods for multiscale nonlinear optimization. SIAM Journal
on Optimization, 19(1), 414–444.
50. Greene, M. L., Deptula, P., Nivison, S., & Dixon, W. E., (2020). Sparse
learning-based approximate dynamic programming with barrier
constraints. IEEE Control Systems Letters, 4(3), 743–748.
51. Guimaraes, P., & Portugal, P., (2010). A simple feasible procedure to fit
models with high-dimensional fixed effects. The Stata Journal, 10(4),
628–649.
52. Guo, Y., Ma, J., Xiong, C., Li, X., Zhou, F., & Hao, W., (2019). Joint
optimization of vehicle trajectories and intersection controllers with
connected automated vehicles: Combined dynamic programming
and shooting heuristic approach. Transportation Research Part C:
Emerging Technologies, 98, 54–72.
53. Hamed, A. Y., (2010). A genetic algorithm for finding the k shortest
paths in a network. Egyptian Informatics Journal, 11(2), 75–79.
54. Hansen, E. A., & Zilberstein, S., (2001). Monitoring and control of
anytime algorithms: A dynamic programming approach. Artificial
Intelligence, 126(1, 2), 139–157.
55. Hauk, E., & Hurkens, S., (2002). On forward induction and evolutionary
and strategic stability. Journal of Economic Theory, 106(1), 66–90.
56. Held, M., & Karp, R. M., (1962). A dynamic programming approach to
sequencing problems. Journal of the Society for Industrial and Applied
Mathematics, 10(1), 196–210.
57. Höner Zu, S. C., Prohaska, S. J., & Stadler, P. F., (2014). Dynamic
programming for set data types. In: Brazilian Symposium on
Bioinformatics (pp. 57–64). Springer, Cham.
58. Hu, T. C., (1968). A decomposition algorithm for shortest paths in a
network. Operations Research, 16(1), 91–102.
59. Huan, X., & Marzouk, Y. M., (2016). Sequential Bayesian Optimal
Experimental Design Via Approximate Dynamic Programming (pp.
1–25). arXiv preprint arXiv:1604.08320.
60. Huang, G. H., Baetz, B. W., & Patry, G. G., (1994). Grey dynamic
programming for waste-management planning under uncertainty.
Journal of Urban Planning and Development, 120(3), 132–156.
61. Huang, L., (2008). Advanced dynamic programming in semiring
and hypergraph frameworks. In: Coling 2008: Advanced Dynamic
73. Kraft, H., & Steffensen, M., (2013). A dynamic programming approach
to constrained portfolios. European Journal of Operational Research,
229(2), 453–461.
74. Kreps, D. M., & Porteus, E. L., (1979). Dynamic choice theory and
dynamic programming. Econometrica: Journal of the Econometric
Society, 91–100.
75. Kunz, W., & Pradhan, D. K., (1994). Recursive learning: A new
implication technique for efficient solutions to CAD problems-test,
verification, and optimization. IEEE Transactions on Computer-Aided
Design of Integrated Circuits and Systems, 13(9), 1143–1158.
76. Lee, J. H., & Lee, J. M., (2006). Approximate dynamic programming
based approach to process control and scheduling. Computers &
Chemical Engineering, 30(10–12), 1603–1618.
77. Lewis, F. L., & Liu, D., (2013). Reinforcement Learning and
Approximate Dynamic Programming for Feedback Control (Vol. 17,
pp. 5–12). John Wiley & Sons.
78. Li, Z., Elefteriadou, L., & Ranka, S., (2014). Signal control
optimization for automated vehicles at isolated signalized intersections.
Transportation Research Part C: Emerging Technologies, 49, 1–18.
79. Liu, X., Wu, H., Wang, L., & Faqiry, M. N., (2020). Stochastic home
energy management system via approximate dynamic programming.
IET Energy Systems Integration, 2(4), 382–392.
80. Liu, Z., Zhou, Y., Huang, G., & Luo, B., (2019). Risk aversion based
inexact stochastic dynamic programming approach for water resources
management planning under uncertainty. Sustainability, 11(24), 6926.
81. Machemehl, R., Gemar, M., & Brown, L., (2014). A stochastic dynamic
programming approach for the equipment replacement optimization
under uncertainty. Journal of Transportation Systems Engineering and
Information Technology, 14(3), 76–84.
82. Megiddo, N., (1984). Linear programming in linear time when the
dimension is fixed. Journal of the ACM (JACM), 31(1), 114–127.
83. Mitten, L., (1974). Preference order dynamic programming.
Management Science, 21(1), 43–46.
84. Mohammadghasemi, M., Shahraki, J., & Sabohi, S. M., (2016).
Optimization Model of Hirmand River Basin Water Resources in the
Agricultural Sector Using Stochastic Dynamic Programming Under
Uncertainty Conditions (Vol. 3, pp. 150–177).
85. Momoh, J. A., (2009). Smart grid design for efficient and flexible
power networks operation and control. In: 2009 IEEE/PES Power
Systems Conference and Exposition (pp. 1–8). IEEE.
86. Murdoch, D. J., & Green, P. J., (1998). Exact sampling from a continuous
state space. Scandinavian Journal of Statistics, 25(3), 483–502.
87. Myerson, R. B., (1982). Optimal coordination mechanisms in
generalized principal-agent problems. Journal of Mathematical
Economics, 10(1), 67–81.
88. Neuneier, R., (1995). Optimal asset allocation using adaptive dynamic
programming. Advances in Neural Information Processing Systems, 8,
pp. 1–14.
89. Nolan, R. L., & Sovereign, M. G., (1972). A recursive optimization and
simulation approach to analysis with an application to transportation
systems. Management Science, 18(12), B-676.
90. Ordonez, C., (2009). Optimization of linear recursive queries in SQL.
IEEE Transactions on knowledge and Data Engineering, 22(2), 264–
277.
91. Osman, M. S., Abo-Sinna, M. A., & Mousa, A. A., (2005). An effective
genetic algorithm approach to multiobjective resource allocation
problems (MORAPs). Applied Mathematics and Computation, 163(2),
755–768.
92. Pavoni, N., Sleet, C., & Messner, M., (2018). The dual approach to
recursive optimization: Theory and examples. Econometrica, 86(1),
133–172.
93. Pil, A. C., & Asada, H. H., (1996). Integrated structure/control design
of mechatronic systems using a recursive experimental optimization
method. IEEE/ASME Transactions on Mechatronics, 1(3), 191–203.
94. Powell, W. B., (2010). Approximate dynamic programming-II:
Algorithms. Wiley Encyclopedia of Operations Research and
Management Science (Vol. 2).
95. Powell, W. B., George, A., Bouzaiene-Ayari, B., & Simao, H. P., (2005).
Approximate dynamic programming for high dimensional resource
allocation problems. In: Proceedings 2005 IEEE International Joint
Conference on Neural Networks (Vol. 5, pp. 2989–2994). IEEE.
96. Powell, W. B., Shapiro, J. A., & Simão, H. P., (2002). An adaptive
dynamic programming algorithm for the heterogeneous resource
allocation problem. Transportation Science, 36(2), 231–249.
97. Rezaee, K., Abdulhai, B., & Abdelgawad, H., (2012). Application of
reinforcement learning with continuous state space to ramp metering in
real-world conditions. In: 2012 15th International IEEE Conference on
Intelligent Transportation Systems (pp. 1590–1595). IEEE.
98. Rust, J., (1989). 12 a dynamic programming model of retirement
behavior. The Economics of Aging, 359.
99. Rust, J., (1996). Numerical dynamic programming in economics.
Handbook of Computational Economics, 1, 619–729.
100. Rust, J., (2008). Dynamic programming. The New Palgrave Dictionary
of Economics, 1, 8–15.
101. Sahinidis, N. V., (2004). Optimization under uncertainty: State-of-the-
art and opportunities. Computers & Chemical Engineering, 28(6, 7),
971–983.
102. Sali, A., & Blundell, T. L., (1990). Definition of general topological
equivalence in protein structures: A procedure involving comparison of
properties and relationships through simulated annealing and dynamic
programming. Journal of Molecular Biology, 212(2), 403–428.
103. Sauthoff, G., (2010). Bellman’s GAP: A 2nd Generation Language and
System for Algebraic Dynamic Programming (pp. 15–25).
104. Schapire, R. E., & Singer, Y., (1999). Improved boosting algorithms
using confidence-rated predictions. Machine Learning, 37(3), 297–
336.
105. Sen, S., & Head, K. L., (1997). Controlled optimization of phases at an
intersection. Transportation Science, 31(1), 5–17.
106. Seuken, S., & Zilberstein, S., (2007). Memory-bounded dynamic
programming for DEC-POMDPs. In: IJCAI (pp. 2009–2015).
107. Sharma, H., Jain, R., & Gupta, A., (2019). An empirical relative value
learning algorithm for non-parametric MDPs with continuous state
space. In: 2019 18th European Control Conference (ECC) (pp. 1368–
1373). IEEE.
108. Sherali, H. D., Soyster, A. L., Murphy, F. H., & Sen, S., (1982). Linear
programming based analysis of marginal cost pricing in electric utility
capacity expansion. European Journal of Operational Research, 11(4),
349–360.
109. Shier, D. R., (1976). Iterative methods for determining the k shortest
paths in a network. Networks, 6(3), 205–229.
110. Shier, D. R., (1979). On algorithms for finding the k shortest paths in a
network. Networks, 9(3), 195–214.
111. Shin, J., Badgwell, T. A., Liu, K. H., & Lee, J. H., (2019). Reinforcement
learning: Overview of recent progress and implications for process
control. Computers & Chemical Engineering, 127, 282–294.
112. Shuai, H., Fang, J., Ai, X., Tang, Y., Wen, J., & He, H., (2018).
Stochastic optimization of economic dispatch for microgrid based
on approximate dynamic programming. IEEE Transactions on Smart
Grid, 10(3), 2440–2452.
113. Silva, T. A., & De Souza, M. C., (2020). Surgical scheduling under
uncertainty by approximate dynamic programming. Omega, 95,
102066.
114. Simon, H. A., (1956). Dynamic programming under uncertainty with a
quadratic criterion function. Econometrica, Journal of the Econometric
Society, 74–81.
115. Snyder, W. L., Powell, H. D., & Rayburn, J. C., (1987). Dynamic
programming approach to unit commitment. IEEE Transactions on
Power Systems, 2(2), 339–348.
116. Sun, Y., Kirley, M., & Halgamuge, S. K., (2017). A recursive
decomposition method for large scale continuous optimization. IEEE
Transactions on Evolutionary Computation, 22(5), 647–661.
117. Sung, K., Bell, M. G., Seong, M., & Park, S., (2000). Shortest paths
in a network with time-dependent flow speeds. European Journal of
Operational Research, 121(1), 32–39.
118. Sutton, R. S., Barto, A. G., & Williams, R. J., (1992). Reinforcement
learning is direct adaptive optimal control. IEEE Control Systems
Magazine, 12(2), 19–22.
119. Tao, J. Y., & Li, D. S., (2006). Cooperative strategy learning in multi-
agent environment with continuous state space. In: 2006 International
Conference on Machine Learning and Cybernetics (pp. 2107–2111).
IEEE.
120. Tierney, L., (1996). Introduction to general state-space Markov chain
theory. Markov Chain Monte Carlo in Practice, 59–74.
121. Topaloglu, H., & Kunnumkal, S., (2006). Approximate dynamic
programming methods for an inventory allocation problem under
uncertainty. Naval Research Logistics (NRL), 53(8), 822–841.
122. Ulmer, M. W., Goodson, J. C., Mattfeld, D. C., & Hennig, M., (2019).
Offline–online approximate dynamic programming for dynamic
vehicle routing with stochastic requests. Transportation Science, 53(1),
185–202.
123. Van, D. E., (1989). Stable equilibria and forward induction. Journal of
Economic Theory, 48(2), 476–496.
124. Vogstad, K., & Kristoffersen, T. K., (2010). Investment decisions
under uncertainty using stochastic dynamic programming: A case
study of wind power. In: Handbook of Power Systems I (pp. 331–341).
Springer, Berlin, Heidelberg.
125. Wang, C. L., & Xie, K. M., (2002). Convergence of a new evolutionary
computing algorithm in continuous state space. International Journal
of Computer Mathematics, 79(1), 27–37.
126. Webster, M., Santen, N., & Parpas, P., (2012). An approximate dynamic
programming framework for modeling global climate policy under
decision-dependent uncertainty. Computational Management Science,
9(3), 339–362.
127. Westphal, M. I., Pickett, M., Getz, W. M., & Possingham, H. P., (2003).
The use of stochastic dynamic programming in optimal landscape
reconstruction for metapopulations. Ecological Applications, 13(2),
543–555.
128. Wu, F., & Huberman, B. A., (2004). Finding communities in linear
time: A physics approach. The European Physical Journal B, 38(2),
331–338.
129. Xie, J., Wan, Y., & Lewis, F. L., (2017). Strategic air traffic flow
management under uncertainties using scalable sampling-based
dynamic programming and q-learning approaches. In: 2017 11th Asian
Control Conference (ASCC) (pp. 1116–1121). IEEE.
130. Xu, J., Zeng, Z., Han, B., & Lei, X., (2013). A dynamic programming-
based particle swarm optimization algorithm for an inventory
management problem under uncertainty. Engineering Optimization,
45(7), 851–880.
131. Yagar, S., & Han, B., (1994). A procedure for real-time signal control
that considers transit interference and priority. Transportation Research
Part B: Methodological, 28(4), 315–331.
132. Zhang, N., Leibowicz, B. D., & Hanasusanto, G. A., (2019). Optimal
residential battery storage operations using robust data-driven dynamic
programming. IEEE Transactions on Smart Grid, 11(2), 1771–1780.
CHAPTER 7
FUNDAMENTALS OF OPERATING
SYSTEMS
CONTENTS
7.1. Introduction..................................................................................... 200
7.2. Computer System Organization....................................................... 201
7.3. Computer System Structure.............................................................. 204
7.4. Operating System (OS) History........................................................ 204
7.5. Operating System (OS) Functions..................................................... 205
7.6. Operating System (OS) Categories................................................... 206
7.7. The Performance Development of OS.............................................. 209
7.8. Operating System (OS) Service........................................................ 212
7.9. Operating System (OS) Operations.................................................. 212
7.10. Operating System (OS) Components.............................................. 214
References.............................................................................................. 217
7.1. INTRODUCTION
An operating system (OS) is a collection of programs that manages the
execution of application programs and serves as a link between a computer’s
user and its hardware. The OS is software that both manages the computer
hardware and provides an environment in which application programs can
execute (Eager et al., 2016).
Windows NT, Windows, macOS, and OS/2 are examples of OSs.
The goals of an operating system are as follows (Hughes, 2000):
• To make the computer system user-friendly and simple to operate;
• To make the most efficient use of the computer hardware;
• To run user applications and make it simpler to solve user problems.
Application programs, the OS, hardware, and users are the four components that
make up a computer system. Figure 7.1 depicts an abstract representation of
these system components (Dandamudi, 2003).
• Users: These can be thought of as machines, people, or other
computers.
• Hardware: This includes memory, CPU, and input/output
devices.
• Application Programs: This includes database systems,
compilers, and web browsers, which help users solve their
computer challenges.
• Operating System (OS): It provides the mechanisms for the proper
use of the hardware in the operation of the computer system.
Source: https://www.slideshare.net/SHIKHAGAUTAM4/3-basic-organization-
of-a-computer.
Source: https://429151971640327878.weebly.com/blog/12-computer-system-
organization.
When a computer is first turned on or restarted, it has to execute an
initial program to get up and running. This first program, known as the
bootstrap program, is usually rather simple (Musina et al., 2017). It is
typically stored as firmware within the computer hardware, in read-only
memory (ROM) or electrically erasable programmable read-only memory
(EEPROM). The bootstrap program initializes all aspects of the system, from
the CPU registers to the device controllers to the contents of main memory.
It must also know how to load the OS and begin executing it; to do so, it
locates the OS kernel and loads it into memory. After that, the OS starts the
first process, such as “init,” and waits for an event to occur (Austin et al., 2002).
Once an input/output operation has been completed, the device controller
notifies the device driver through an interrupt that the operation has finished
successfully. Control is then transferred back to the OS via the device driver.
Other requests are completed by the device driver returning the appropriate
status information.
Direct memory access (DMA) is used to transfer large amounts of data.
After setting up the input/output device’s buffers, pointers, and counters, the
device controller transfers an entire block of data directly between its buffer
storage and main memory, without intervention by the CPU. With high-speed
devices, only one interrupt is generated per block to notify the device driver
that the operation has finished, as opposed to the one interrupt per byte
generated for lower-speed devices (Haber et al., 1990).
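As a purely illustrative sketch (not actual driver code), the C program below simulates the contrast just described: the dma_controller structure and the function names are invented stand-ins for real hardware registers, and the counts show one completion event per byte for programmed I/O versus a single end-of-block interrupt for a DMA transfer.

/* Illustrative sketch only: a simulated DMA controller (a plain C struct,
   not real hardware) contrasting programmed, per-byte I/O with a DMA block
   transfer that completes with a single notification. */
#include <stdio.h>
#include <string.h>

struct dma_controller {          /* hypothetical controller "registers"  */
    const char *src;             /* source buffer address                */
    char       *dst;             /* destination buffer address           */
    size_t      len;             /* block length in bytes                */
    int         done;            /* set when the whole block is moved    */
};

/* Programmed I/O: the CPU moves every byte itself (one event per byte). */
static size_t programmed_io(char *dst, const char *src, size_t len)
{
    size_t events = 0;
    for (size_t i = 0; i < len; i++) {
        dst[i] = src[i];
        events++;                /* one completion event per byte */
    }
    return events;
}

/* DMA: the controller moves the whole block, then raises one interrupt. */
static size_t dma_transfer(struct dma_controller *c)
{
    memcpy(c->dst, c->src, c->len);  /* done by the controller, not the CPU */
    c->done = 1;                     /* single end-of-block interrupt       */
    return 1;
}

int main(void)
{
    const char block[] = "A block of data headed for main memory.";
    char out1[sizeof block], out2[sizeof block];

    size_t n1 = programmed_io(out1, block, sizeof block);
    struct dma_controller c = { block, out2, sizeof block, 0 };
    size_t n2 = dma_transfer(&c);

    printf("programmed I/O events: %zu, DMA events: %zu\n", n1, n2);
    return 0;
}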
Source: https://www.abhishekshukla.com/windows-operating-system/history-
evolution-windows-os-operating/.
Source: https://electricalfundablog.com/operating-system-os-functions-types-
resource-management/.
Source: https://www.slideserve.com/arleen/operating-systems-for-wireless-
sensor-networks-in-space.
Source: https://www.techtud.com/short-notes/batch-operating-system.
The drawbacks of batch systems are given below (Brown et al., 1991):
• From the user’s perspective, the turnaround time may be long;
• Programs are difficult to debug.
Source: https://www.slideshare.net/KadianAman/aman-singh.
A time-sharing mechanism reduces CPU idle time. The drawback is that the
mechanism itself is a little more complicated.
7.6.3. Real-Time OS
A real-time OS is distinguished by its ability to respond quickly. It ensures
that time-critical jobs are executed on time: every function to be performed
on the computer must complete within a defined maximum time limit.
Real-time systems are used when there are tight time constraints on the
operation of a processor or on the flow of data, and they may also be used
as the control device in a dedicated application when strict time constraints
are required (Figure 7.10) (Clark et al., 1992).
Figure 7.10. The schematic diagram for the real-time operating system.
Source: https://www.polytechnichub.com/rtos-real-time-operating-system/.
Source: https://www.datacenterdynamics.com/en/opinions/ups-terminology-
101-online-and-offline-ups-topologies/.
7.7.2. Buffering
A buffer is an area of main storage used to hold data during input/output
transfers (Hildebrand, 1992). On input, the data are deposited in the buffer by
an input/output channel, and the data can be accessed by the CPU once the
transfer is complete. Single or double buffering is possible.
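A minimal, single-threaded sketch of double buffering in C is shown below. The device_fill function and the other names are invented for illustration; it merely stands in for an asynchronous device transfer that would, in a real OS, be completed by an interrupt or a DMA operation. While one buffer is (conceptually) being filled, the CPU processes the other, and the two are then swapped.

/* A minimal, single-threaded sketch of double buffering (illustrative only). */
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 16

/* Stand-in for a device transfer: copies the next chunk of src into buf.
   Returns the number of bytes "transferred". */
static size_t device_fill(char *buf, const char *src, size_t *offset)
{
    size_t remaining = strlen(src) - *offset;
    size_t n = remaining < BUF_SIZE ? remaining : BUF_SIZE;
    memcpy(buf, src + *offset, n);
    *offset += n;
    return n;
}

int main(void)
{
    const char *data = "Buffered data moving between a device and the CPU.";
    char buffers[2][BUF_SIZE];
    size_t offset = 0;
    int filling = 0;                       /* index of buffer being filled  */
    size_t len = device_fill(buffers[filling], data, &offset);

    while (len > 0) {
        int ready = filling;               /* buffer now ready for the CPU  */
        filling = 1 - filling;             /* start "filling" the other one */
        size_t next = device_fill(buffers[filling], data, &offset);

        /* CPU work overlaps the (conceptual) device transfer above. */
        printf("Processing %zu bytes: %.*s\n", len, (int)len, buffers[ready]);

        len = next;
    }
    return 0;
}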
7.7.4. Multiprogramming
In multiprogramming, multiple programs are kept in primary memory at the
same time, and the CPU switches among them, ensuring that the central
processing unit is always executing some program. The OS starts by running
one program from memory; if that program must wait, for example for an
input/output operation, the OS switches to a different program.
Multiprogramming therefore keeps the CPU busier. Multiprogramming
systems create an environment in which the various system resources are
used effectively, but they do not allow for interaction between the user and the
computer (Christopher et al., 1993). A toy simulation of this switching is
sketched after the benefits and drawbacks listed below.
• Benefits:
– High central processing unit utilization;
– It appears that numerous programs are given central processing
unit time virtually at the same time.
• Drawbacks:
– CPU scheduling is required;
– Memory management is needed to accommodate several jobs in
memory.
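The toy C simulation below illustrates the idea under simplified assumptions that are invented for illustration (fixed CPU bursts, a three-tick I/O wait, and a first-ready dispatcher): whenever the running job blocks for I/O, the dispatcher hands the CPU to another ready job, and the program reports the resulting CPU utilization.

/* A toy, single-threaded simulation of multiprogramming (illustrative only). */
#include <stdio.h>

#define JOBS 3

struct job {
    int cpu_left;   /* CPU ticks still needed           */
    int io_wait;    /* ticks until pending I/O finishes */
};

int main(void)
{
    struct job jobs[JOBS] = { {6, 0}, {4, 0}, {5, 0} };
    int finished = 0, tick = 0, busy = 0;

    while (finished < JOBS) {
        /* I/O devices make progress every tick. */
        for (int i = 0; i < JOBS; i++)
            if (jobs[i].io_wait > 0)
                jobs[i].io_wait--;

        /* Dispatcher: pick the first job that is ready (not waiting, not done). */
        int run = -1;
        for (int i = 0; i < JOBS; i++)
            if (jobs[i].cpu_left > 0 && jobs[i].io_wait == 0) { run = i; break; }

        if (run >= 0) {
            busy++;
            jobs[run].cpu_left--;
            if (jobs[run].cpu_left == 0)
                finished++;
            else if (jobs[run].cpu_left % 2 == 0)
                jobs[run].io_wait = 3;   /* job issues an I/O request and blocks */
        }
        tick++;
    }
    printf("CPU busy %d of %d ticks (%.0f%% utilization)\n",
           busy, tick, 100.0 * busy / tick);
    return 0;
}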
Source: https://www.researchgate.net/figure/The-block-diagram-of-the-distrib-
uted-system_fig5_4351563.
Benefits of distributed systems:
• Sharing of Resources: You may share printers and files.
• Increased Calculation Speed: A job may be divided so each
processor may work on a portion of it at the same time, which is
known as load sharing.
• Reliability: If one CPU fails, the other CPUs will continue to
work normally.
• Communication: Electronic mail, FTP, and other forms of communication
(Jo et al., 2014).
Source: https://www.geeksforgeeks.org/dual-mode-operations-os/.
REFERENCES
1. Agarwal, A., Hennessy, J., & Horowitz, M., (1988). Cache performance
of operating system and multiprogramming workloads. ACM
Transactions on Computer Systems (TOCS), 6(4), 393–431.
2. Anderson, T. E., Levy, H. M., Bershad, B. N., & Lazowska, E. D.,
(1991). The interaction of architecture and operating system design.
ACM SIGPLAN Notices, 26(4), 108–120.
3. Austin, T., Larson, E., & Ernst, D., (2002). SimpleScalar: An
infrastructure for computer system modeling. Computer, 35(2), 59–67.
4. Brown, K. A., & Mitchell, T. R., (1991). A comparison of just-in-time
and batch manufacturing: The role of performance obstacles. Academy
of management Journal, 34(4), 906–917.
5. Cardenas, A. F., (1973). Evaluation and selection of file organization—A
model and system. Communications of the ACM, 16(9), 540–548.
6. Chen, P. M., Lee, E. K., Gibson, G. A., Katz, R. H., & Patterson, D. A.,
(1994). RAID: High-performance, reliable secondary storage. ACM
Computing Surveys (CSUR), 26(2), 145–185.
7. Christopher, W. A., Procter, S. J., & Anderson, T. E., (1993). The nachos
instructional operating system. In: USENIX Winter (pp. 481–488).
8. Clark, R. K., Jensen, E. D., & Reynolds, F. D., (1992). An architectural
overview of the Alpha real-time distributed kernel. In: Proceedings of
the USENIX Workshop on Microkernels and other Kernel Architectures
(pp. 27–28).
9. Comer, D., (2011). Operating System Design: The Xinu Approach,
Linksys Version (Vol. 1, pp. 3–9). Chapman and Hall/CRC.
10. Corral, L., Sillitti, A., & Succi, (2012). Mobile multiplatform
development: An experiment for performance analysis. Procedia
Computer Science, 10, 736–743.
11. Creasy, R. J., (1981). The origin of the VM/370 time-sharing system.
IBM Journal of Research and Development, 25(5), 483–490.
12. Dandamudi, S. P., (2003). Fundamentals of Computer Organization
and Design (Vol. 7, p. 1–5). Berlin, Heidelberg: Springer.
13. Eager, B., & Lister, A., (2016). Fundamentals of Operating Systems
(Vol. 1, pp. 1–10). Macmillan International Higher Education.
14. Estrin, G., (1960). Organization of computer systems: The fixed plus
variable structure computer. In: Papers Presented at the May 3–5, 1960,
Western Joint IRE-AIEE-ACM Computer Conference, 5(3), 33–40.
15. Fassino, J. P., Stefani, J. B., Lawall, J., & Muller, G., (2002). Think:
A software framework for component-based operating system kernels.
In: 2002 USENIX Annual Technical Conference (USENIX ATC 02).
16. Gligor, V. D., (1984). A note on denial-of-service in operating systems.
IEEE Transactions on Software Engineering, (3), 320–324.
17. Haber, R., & Unbehauen, H., (1990). Structure identification of
nonlinear dynamic systems—a survey on input/output approaches.
Automatica, 26(4), 651–677.
18. Hansen, P. B., (1973). Operating System Principles (3rd edn., p. 4–9).
Prentice-Hall, Inc.
19. Hildebrand, D., (1992). An architectural overview of QNX. In:
USENIX Workshop on Microkernels and Other Kernel Architectures
(pp. 113–126).
20. Hughes, L., (2000). An applied approach to teaching the fundamentals
of operating systems. Computer Science Education, 10(1), 1–23.
21. Jo, K., Kim, J., Kim, D., Jang, C., & Sunwoo, M., (2014). Development
of autonomous car—Part I: Distributed system architecture and
development process. IEEE Transactions on Industrial Electronics,
61(12), 7131–7140.
22. Kronenberg, N. P., Levy, H. M., & Strecker, W. D., (1986). VAXcluster:
A closely-coupled distributed system. ACM Transactions on Computer
Systems (TOCS), 4(2), 130–146.
23. Lange, J., Pedretti, K., Hudson, T., Dinda, P., Cui, Z., Xia, L., &
Brightwell, R., (2010). Palacios and kitten: New high performance
operating systems for scalable virtualized and native supercomputing.
In: 2010 IEEE International Symposium on Parallel & Distributed
Processing (IPDPS) (pp. 1–12). IEEE.
24. Litzkow, M., & Livny, M., (1990). Experience with the condor
distributed batch system. In: IEEE Workshop on Experimental
Distributed Systems (pp. 97–101).
25. Mullender, S. J., Van, R. G., Tananbaum, A. S., Van, R. R., & Van,
S. H., (1990). Amoeba: A distributed operating system for the 1990s.
Computer, 23(5), 44–53.
39. Wulf, W., Cohen, E., Corwin, W., Jones, A., Levin, R., Pierson, C., &
Pollack, F., (1974). Hydra: The kernel of a multiprocessor operating
system. Communications of the ACM, 17(6), 337–345.
40. Zhang, F., Chen, J., Chen, H., & Zang, B., (2011). Cloudvisor:
Retrofitting protection of virtual machines in multi-tenant cloud
with nested virtualization. In: Proceedings of the Twenty-Third ACM
Symposium on Operating Systems Principles (Vol. 23, pp. 203–216).
CHAPTER 8
TIMELINE OF COMPUTER WINDOWS
AND ITS FEATURES
CONTENTS
8.1. Introduction..................................................................................... 222
8.2. MS-DOS and What Came Before...................................................... 222
8.3. Windows 1.0................................................................................... 223
8.4. Windows 2.0................................................................................... 223
8.5. Windows 3.0................................................................................... 224
8.6. Windows 3.1................................................................................... 225
8.7. Windows 95.................................................................................... 226
8.8. Windows 98.................................................................................... 227
8.9. Windows 2000................................................................................ 228
8.10. Windows Me................................................................................. 229
8.11. Windows XP.................................................................................. 230
8.12. Windows Vista............................................................................... 231
8.13. Windows 7.................................................................................... 231
8.14. Windows 8.................................................................................... 232
8.15. Windows 8.1................................................................................. 233
8.16. Windows 10.................................................................................. 234
8.17. Windows 11.................................................................................. 235
8.18. The Future of Windows.................................................................. 244
8.19. Main Features of Microsoft Windows............................................. 245
References.............................................................................................. 254
8.1. INTRODUCTION
What comes to mind when you consider the history of Windows? Logos
that are instantly recognizable? Changes to the Start menu’s appearance? The
advent of Live Tiles? All of this and much more is part of the history
of Microsoft’s main operating system (OS). Windows has seen several
reincarnations during the last 35 years (Rushinek and Rushinek, 1997).
Here we examine 14 different Windows versions, because they all represent
significant milestones in the history of computers (Inglot and Liu, 2014;
Rajesh et al., 2015). Before we get into Windows history, it is worth
considering the status of computing prior to Windows.
Source: https://winworldpc.com/product/windows-10/101.
Don’t be fooled by Windows 1.0’s bare-bones appearance: the OS
also had Windows Paint, Windows Write, a calendar, a clock, a notepad, a
cardfile, a file manager, a terminal app, as well as a game named Reversi,
according to The Verge.
Source: https://winworldpc.com/product/windows-20/20.
Source: https://winworldpc.com/product/windows-3/30.
Source: https://winworldpc.com/product/windows-3/31.
8.7. WINDOWS 95
When you think of the most famous version of Windows, you are probably
thinking of Windows 95. That is because it was a radical visual departure
from earlier forms of Windows, and it set a standard for what we have come
to expect from a Windows OS. Windows 95 was released in 1995, as its name
indicates. It was the first 32-bit version of Windows (prior editions had been
16-bit), and it included a number of new characteristics that would go down
in history (Campbell, 1991; Nolze and Kraus, 1998). Among them are the
taskbar, the Start menu, long file names, and plug-and-play support (with
which peripheral devices only needed to be connected to a PC to work
correctly). Internet Explorer, Microsoft’s web browser, was also introduced
with Windows 95 (Figure 8.5) (Campbell, 1992).
Source: https://microsoft.fandom.com/wiki/Windows_95.
Another noteworthy feature? Although Windows 95 still ran on top of
MS-DOS, unlike its forerunners it did not have to wait for the computer
to boot into DOS first, as PCMag points out. This was the first time
Windows was able to boot straight from the hard drive (Barney, 1994).
8.8. WINDOWS 98
This is the Windows version whose title corresponds to the year it was
introduced. If Windows 95 (finally) gave us Internet Explorer, Windows 98
tightened the web browser’s hold on Microsoft’s OS. This version of Windows
included not only Internet Explorer 4.01, but also a bevy of additional
internet-related apps and features, such as Microsoft Chat, the Web Publishing
Wizard, and Microsoft Outlook (Figure 8.6).
Source: https://microsoft.fandom.com/wiki/Windows_98.
Windows 98 also included expanded compatibility for USB drives and
Macromedia apps (Shockwave and Flash).
Source: https://microsoft.fandom.com/wiki/Windows_2000.
8.10. WINDOWS ME
“ME” stands for “Millennium Edition” in Windows ME. It was also called
“the Mistake Edition,” a less favorable title. When Windows ME was released
in 2000, it was given the moniker because “customers had issues downloading
it, enabling it to start, getting it to operate with other software or hardware,
and having it to quit operating” (Figure 8.8).
Source: https://microsoft.fandom.com/wiki/Windows_Me.
Even after its terrible start, Windows ME still managed to provide us
with a valuable tool (Chau and Hui, 1998): System Restore, a recovery
feature which, if your computer ends up having issues because of a poorly
executed installation of a program or update, can remove the updated
information and restore your computer to the way it was before the offending
update interfered with it. System Restore, in classic Mistake Edition manner,
had its own challenges to work out before becoming genuinely fantastic. For
example, it occasionally messed up the restoration by reinstalling items such
as viruses that had already been deleted (Dong, 1999; Zhang et al., 2016).
8.11. WINDOWS XP
Windows XP was introduced in 2001 and is largely regarded as one of
Microsoft’s best Windows OSs. The OS was available in two versions:
Home and Professional. Professional was designed for use in business
environments, while Home was designed for personal use. Part of XP’s
success, according to TechRadar, may be ascribed to the fact that it was
released at the same time as a surge in PC sales, so for many new users,
“Windows XP was what arrived with their first computer.”
The popularity of XP may also be traced back to the OS itself. After all,
since it lasted 13 years until Microsoft withdrew support for it in 2014, there
must have been something appealing about its design. It achieved some of
its commercial success because it was genuinely designed to be consumer-
friendly. Bright colors, a bright green Start button, and customizable themes
were finally included with this Windows version, giving it a warm and
appealing design. It also included additional features such as native CD-
ripping software, desktop search, remote desktop, and (soon) enhanced
security (Figure 8.9) (Sullivan, 1996; Dong, 1999).
Source: https://microsoft.fandom.com/wiki/Windows_XP.
Source: https://www.digitaltrends.com/computing/the-history-of-windows/.
Vista attempted to do too much, too quickly, and got burned as a result.
Although it included numerous valuable functions, such as DirectX 10 (for
PC gaming), Windows Defender, Windows DVD Maker, and speech
recognition, it was not without flaws (Gratze et al., 1998).
8.13. WINDOWS 7
Microsoft released Windows 7, a new Windows version, two years later.
Microsoft needed to make up for Vista’s shortcomings, and Windows 7 did
exactly that. Windows 7 is more streamlined than Vista, and it essentially
Source: https://www.digitaltrends.com/computing/the-history-of-windows/.
8.14. WINDOWS 8
Visually, Windows 8 was a sea change from its forerunners, starting with
the tiled Start screen. The Start screen had slates dubbed Live Tiles that
served as dynamic app shortcuts, allowing you to start your apps while
simultaneously displaying mini-updates about them (like the number of
unread messages). The Start screen was intended to replace the Start menu.
Alongside this configuration, Windows 8 retained the classic Windows
desktop, which is where applications are executed (Figure 8.12) (Westerlund
and Danielsson, 2001; Naik, 2004).
Source: https://news.microsoft.com/accessing-system-commands/.
While not everyone was delighted with Windows 8’s tablet-centric
redesign, it provided several new features, such as USB 3.0 connectivity,
login with a Microsoft account, a real lock screen (visually comparable to a
phone home screen), and Xbox Live integration (Warner, 2001; Swift et al.,
2002).
Source: https://news.microsoft.com/windows-8-1-preview-lock-screen/.
Microsoft made certain changes in Windows 8.1, such as adding a real
Start button to the taskbar and allowing users to go straight to the desktop
immediately after signing in (instead of being greeted by the dreaded Start
screen). Microsoft didn’t waste any time in releasing this patched-up version
of Windows: Windows 8 was introduced in 2012, followed by Windows 8.1
in 2013.
8.16. WINDOWS 10
Windows 10 was released in 2015 and was, until Windows 11, the latest
version of Microsoft’s OS. Once it launched, it was clear that Microsoft
sought to improve its use of Live Tiles rather than eliminate them completely.
Windows 10 did the following: it replaced Windows 8’s hated Start screen
with a wider Start menu that makes use of Live Tiles alongside other types
of program icons. It was successful (Bickel et al., 2002; Stiegler et al., 2006).
Additionally, according to The Verge, the 2015 edition of Windows 10
introduced Cortana, a native digital personal assistant; the capability to switch
between tablet and desktop modes; and a new web browser (Microsoft
Edge).
Since its introduction in 2015, Windows 10 has also received quite
frequent upgrades. They are referred to as feature updates and occur every six
months. They are always accessible for free via Windows Update. Indeed,
the next one is not that far away: Windows 10 20H1 is scheduled
for release in the spring of 2020, maybe around May. This update is likely to
bring a revamped Cortana experience and the capability to restore Windows
by “simply selecting the option to cloud download Windows, without having
to produce installation discs” (Figure 8.14).
Source: https://www.digitaltrends.com/computing/the-history-of-windows/.
8.17. WINDOWS 11
Windows has always served as a platform for global innovation. It
has served as the backbone of multinational enterprises and as a platform for
scrappy initiatives to become household names. Windows gave birth to and
raised the web. It’s where many of us sent our first email, played our
first PC game, and wrote our first line of code. Windows is the platform on
which over a billion people today rely to create, connect, learn, and succeed.
We don’t take the responsibility of designing for that many people
lightly. Over the last 18 months we moved from fitting the PC into our lives
to trying to fit our entire lives into the PC, which has resulted in an
extraordinary shift in the way we use our PCs. Our devices were not just
where we went for meetings, classes, and getting tasks done; they were also
where we went to play games with friends,
Figure 8.17. Windows 11 removes the complexity and replaces it with simplicity.
Source: https://blogs.windows.com/windowsexperience/2021/06/24/introducing-windows-11/.
Source: https://www.techrepublic.com/article/windows-11-cheat-sheet-everything-you-need-to-know/.
premier first- and third-party apps to the Microsoft Store, such as Microsoft
Teams, Visual Studio, Disney+, Adobe Creative Cloud, Zoom, and Canva
– which all deliver fantastic experiences that entertain, inspire, and connect
you. When you download an app from the Microsoft Store, you can be certain
that it has been properly vetted for security and family safety (Figure 8.21)
(Talebi et al., 2012).
Figure 8.21. The new Microsoft Store, which brings your favorite programs
and entertainment together in one place.
Source: https://blogs.windows.com/windowsexperience/2021/06/24/introduc-
ing-windows-11/.
Source: https://www.pcmag.com/how-to/how-to-run-android-apps-in-win-
dows-11.
You will be able to deploy and manage Windows 11 in the same way that you
do now with Windows 10, and updating to Windows 11 will be similar to
updating to Windows 10. As you incorporate Windows 11 into your estate, the
same management practices you have today – such as Microsoft Endpoint
Manager, cloud configuration, Windows Update for Business, and Autopilot –
will support your environment going forward. We are dedicated to app
compatibility, which is a core design principle of Windows 11, just as it was
with Windows 10. With App Assure, a service that helps customers with 150
or more users resolve any app issues they may have at no additional cost, we
stand by our promise that your apps will run on Windows 11 (Mehreen and
Aslam, 2015).
Windows 11 is also secure by default, with newly built security
mechanisms that provide protection from the chip to the cloud while enabling
greater efficiency and new experiences. To safeguard data and access across
devices, Windows 11 is a Zero Trust-ready OS. We have worked very closely
with our OEM and silicon partners to raise security baselines in response to
the evolving threat landscape and the emerging hybrid work environment
(Venčkauskas et al., 2015).
The Microsoft 365 blog has further information on Windows 11 as a
platform for hybrid work and learning.
to make using your pen much more engaging and immersive, letting you
feel and hear the sensations as you click, edit, or doodle. Finally, we have
made improvements to voice typing. Windows 11 recognizes what you say
remarkably well; it can capitalize for you automatically and supports voice
commands. This is a terrific option for whenever you want to avoid typing
and simply speak your thoughts (Hofmeester and Wolfe, 2012; Gao et al., 2013).
Beginning this holiday season, Windows 11 will be offered as a free
update for eligible Windows 10 PCs and on new PCs. Visit Windows.com
and download the PC Health Check app to see whether your existing Windows
10 PC is eligible for the free upgrade to Windows 11. We are also collaborating
with our retail partners to ensure that Windows 10 PCs purchased today are
ready for the Windows 11 upgrade. The free upgrade will begin rolling
out to eligible Windows 10 PCs this holiday season and will continue
into 2022. And, starting next week, we will begin sharing an early build of
Windows 11 with the Windows Insider Program, a dedicated community
of Windows enthusiasts whose feedback we value.
Source: https://answers.microsoft.com/en-us/windows/forum/all/where-is-dis-
play-control-panel-in-windows-build/ce8fcc12-f3c2-4940-800c-ed95053cff00.
8.19.2. Cortana
Cortana is a voice-activated virtual assistant that debuted with Windows
10. Cortana can respond to questions, search your computer or the Internet,
schedule appointments and reminders, make online purchases, and much
more. Cortana is comparable to other voice-activated assistants like Siri,
Alexa, or Google Assistant, with the additional advantage of being able to
search your computer’s information (Cheng et al., 2016; Đuranec et al., 2019).
In Windows 10, press Windows key+S to open Cortana (Figure 8.24).
Source: https://cdn.windowsreport.com/wp-content/uploads/2020/06/How-to-
block-Cortana-from-starting-in-Windows-10.jpg.
8.19.3. Desktop
The desktop is a critical component of Windows’ standard GUI. It is a
container for apps, files, and documents, all of which appear as icons. Your
desktop is constantly running in the background, alongside any other apps
you may be using.
When you turn on your computer and sign in, the desktop background,
icons, and taskbar are shown. From this point, you may access your computer’s
installed apps via the Start menu or by double-clicking any program shortcuts
on your desktop.
At any moment, you may return to your desktop by pressing Windows
key+D, which minimizes any currently active apps (Lazarescu et al., 2004;
Fadhil et al., 2016).
Source: https://www.computerhope.com/issues/ch001967.htm.
8.19.10. Notepad
Notepad is a straightforward text editor. It allows you to create, view, and
edit text files. For example, you may use Notepad to create a batch file or
an HTML web page.
Source: https://www.computerhope.com/issues/ch001967.htm.
Source: https://asmwsoft.com/registry-editor-tool.html.
The Registry Editor is available in the Start menu, under Windows
Administrative Tools, in Windows 10. Additionally, you may launch it by
pressing the Windows key, typing regedit, and pressing Enter.
Making changes to the registry can leave your programs or system
unresponsive. Avoid editing the registry unless you are certain of what you
are doing, and always back up the registry before making changes by
exporting it to a file (Vogel-Heuser et al., 2014).
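For completeness, the registry can also be read programmatically. The short Win32 C program below is a read-only sketch (Windows-specific; link against Advapi32): it opens the well-known CurrentVersion key with RegOpenKeyExA, queries the ProductName value with RegQueryValueExA, and modifies nothing.

/* Read-only registry query sketch using the Win32 API (nothing is modified). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    char product[256];
    DWORD size = sizeof product;

    /* Open a well-known key for reading only. */
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                      0, KEY_READ, &hKey) != ERROR_SUCCESS) {
        fprintf(stderr, "could not open key\n");
        return 1;
    }

    /* Query a single string value and print it. */
    if (RegQueryValueExA(hKey, "ProductName", NULL, NULL,
                         (LPBYTE)product, &size) == ERROR_SUCCESS)
        printf("Windows product name: %s\n", product);
    else
        fprintf(stderr, "could not read value\n");

    RegCloseKey(hKey);
    return 0;
}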
8.19.14. Settings
Settings, which is included in Windows 8 and Windows 10, allows you to
customize various elements of Windows. You may change the desktop
background, power settings, and external device options, among other things.
In Windows 10, press Windows key+I to launch Settings. Alternatively,
open the Start menu and select the gear icon.
Source: https://www.computerhope.com/jargon/s/sysinfo.html.
System Information may be found in the Start menu, under Windows
Administrative Tools, in Windows 10. You may also open it by pressing
Windows key+R, typing msinfo32, and pressing Enter in the Run box.
8.19.17. Taskbar
The Windows taskbar displays open programs and a Quick Launch area
for fast access to certain apps. The notification area is located at the right of
the taskbar and displays the date and time, as well as icons for processes
running in the background.
Source: https://windowsground.com/what-is-task-manager-in-windows-10/.
REFERENCES
1. Akbal, E., Günes, F., & Akbal, A., (2016). Digital forensic analyses of
web browser records. J. Softw., 11(7), 631–637.
2. Archibugi, D., & Pietrobelli, C., (2003). The globalization of
technology and its implications for developing countries: Windows of
opportunity or further burden? Technological Forecasting and Social
Change, 70(9), 861–883.
3. Bangalee, M. Z. I., Lin, S. Y., & Miau, J. J., (2012). Wind driven natural
ventilation through multiple Windows of a building: A computational
approach. Energy and Buildings, 45, 317–325.
4. Barney, D., (1994). CC: Mail for Windows 2.0 marred by security flaw.
InfoWorld, 16(6), 1–2.
5. Bickel, R., Cook, M., Haney, J., Kerr, M., Parker, D. C. T., & Parkes, U.
S. N. H., (2002). Guide to securing Microsoft Windows XP. National
Security Agency, 1–129.
6. Bird, C., Nagappan, N., Devanbu, P., Gall, H., & Murphy, B., (2009).
Does distributed development affect software quality? an empirical case
study of Windows vista. In: 2009 IEEE 31st International Conference
on Software Engineering (pp. 518–528). IEEE.
7. Blackman, C. F., Kinney, L. S., House, D. E., & Joines, W. T.,
(1989). Multiple power‐density Windows and their possible origin.
Bioelectromagnetics: Journal of the Bioelectromagnetics Society,
The Society for Physical Regulation in Biology and Medicine, The
European Bioelectromagnetics Association, 10(2), 115–128.
8. Bolosky, W. J., Corbin, S., Goebel, D., & Douceur, J. R., (2000). Single
instance storage in Windows 2000. In: Proceedings of the 4th USENIX
Windows Systems Symposium (pp. 13–24).
9. Campbell, G., (1991). Word for Windows 2.0: Getting better all the
time. PC World, 9(12), 91–92.
10. Campbell, G., (1992). Windows word processors. PC World, 10(4),
146–161.
11. Chau, P. Y., & Hui, K. L., (1998). Identifying early adopters of new IT
products: A case of Windows 95. Information & Management, 33(5),
225–230.
12. Cheng, Y., Jiang, C., & Shi, J., (2016). A fall detection system based
on SensorTag and Windows 10 IoT core. In: 2015 International
35. Laric, M. V., (1995). Day-Timer organizer 1.0 for Windows. Journal of
Consumer Marketing, 12(3), 67–70.
36. Lazarescu, M. M., Venkatesh, S., & Bui, H. H., (2004). Using multiple
Windows to track concept drift. Intelligent Data Analysis, 8(1), 29–59.
37. Lee, T., Mitschke, K., Schill, M. E., & Tanasovski, T., (2011). Windows
PowerShell 2.0 Bible (Vol. 725, pp. 457–472). John Wiley & Sons.
38. Li, P. L., Ni, M., Xue, S., Mullally, J. P., Garzia, M., & Khambatti, M.,
(2008). Reliability assessment of mass-market software: Insights from
Windows vista®. In: 2008 19th International Symposium on Software
Reliability Engineering (ISSRE) (pp. 265–270). IEEE.
39. Lin, S. W., & Vincent, F. Y., (2012). A simulated annealing heuristic for
the team orienteering problem with time Windows. European Journal
of Operational Research, 217(1), 94–107.
40. Lio, P. A., & Nghiem, P., (2004). Interactive atlas of dermoscopy:
Giuseppe argenziano, MD, H. Peter Soyer, MD, Vincenzo De Giorgio,
MD, Domenico Piccolo, MD, Paolo Carli, MD, Mario Delfino, MD,
Angela Ferrari, MD, Rainer Hofmann-Wellenhof, MD, Daniela Massi,
MD, Giampiero Mazzocchetti, MD, Massimiliano Scalvenzi, MD, and
Ingrid H. Wolf, MD, Milan, Italy, 2000, Edra Medical Publishing and
New Media. $290.00. ISBN 88-86457-30-8. CD-ROM requirements
(minimum): Pentium 133 MHz, 32-Mb RAM, 24X CD-ROM drive,
800× 600 resolution. Journal of the American Academy of Dermatology,
50(5), 208, 807, 808.
41. Ma, X., Buffler, P. A., Gunier, R. B., Dahl, G., Smith, M. T., Reinier,
K., & Reynolds, P., (2002). Critical Windows of exposure to household
pesticides and risk of childhood leukemia. Environmental Health
Perspectives, 110(9), 955–960.
42. Mehreen, S., & Aslam, B., (2015). Windows 8 cloud storage analysis:
Dropbox forensics. In: 2015 12th International Bhurban Conference
on Applied Sciences and Technology (IBCAST) (pp. 312–317). IEEE.
43. Mészárosová, E., (2015). Is python an appropriate programming
language for teaching programming in secondary schools. International
Journal of Information and Communication Technologies in Education,
4(2), 5–14.
44. Naik, D. C., (2004). Inside Windows storage: Server storage
technologies for Windows 2000, Windows server 2003 and beyond.
Computing Reviews, 45(8), 468–469.
258 Key Dynamics in Computer Programming
45. Narayan, S., Feng, T., Xu, X., & Ardham, S., (2009). Network
performance evaluation of wireless IEEE802. 11n encryption methods
on Windows vista and Windows server 2008 operating systems.
In: 2009 IFIP International Conference on Wireless and Optical
Communications Networks (pp. 1–5). IEEE.
46. Narayan, S., Shang, P., & Fan, N., (2009). Performance evaluation of
ipv4 and ipv6 on Windows vista and Linux ubuntu. In: 2009 International
Conference on Networks Security, Wireless Communications and
Trusted Computing (Vol. 1, pp. 653–656). IEEE.
47. Nolze, G., & Kraus, W., (1998). PowderCell 2.0 for Windows. Powder
Diffraction, 13(4), 256–259.
48. Odell, D., & Chandrasekaran, V., (2012). Enabling comfortable
thumb interaction in tablet computers: A Windows 8 case study. In:
Proceedings of the Human Factors and Ergonomics Society Annual
Meeting (Vol. 56, No. 1, pp. 1907–1911). Sage CA: Los Angeles, CA:
SAGE Publications.
49. Oliveira, O. L., Monteiro, A. M., & Roman, N. T., (2013). Can natural
language be utilized in the learning of programming fundamentals? In:
2013 IEEE Frontiers in Education Conference (FIE) (pp. 1851–1856).
IEEE.
50. Pfeiffer, P., Scott, S. L., & Shukla, H., (2003). ORNL-RSH package
and Windows ‘03 PVM 3.4. In: European Parallel Virtual Machine/
Message Passing Interface Users’ Group Meeting (pp. 388–394).
Springer, Berlin, Heidelberg.
51. Prentice, A. M., Ward, K. A., Goldberg, G. R., Jarjou, L. M., Moore,
S. E., Fulford, A. J., & Prentice, A., (2013). Critical Windows for
nutritional interventions against stunting. The American of Clinical
Nutrition, 97(5), 911–918.
52. Purcell, D. M., & Lang, S. D., (2008). Forensic artifacts of Microsoft
Windows vista system. In: International Conference on Intelligence
and Security Informatics (pp. 304–319). Springer, Berlin, Heidelberg.
53. Qian, C., & Lau, K. K., (2017). Enumerative variability in software
product families. In: 2017 International Conference on Computational
Science and Computational Intelligence (CSCI) (pp. 957–962). IEEE.
54. Rajesh, B., Reddy, Y. J., & Reddy, B. D. K., (2015). A survey paper
on malicious computer worms. International Journal of Advanced
Research in Computer Science and Technology, 3(2), 161–167.
Timeline of Computer Windows and Its Features 259
69. Uzunboylu, H., Bicen, H., & Cavus, N., (2011). The efficient virtual
learning environment: A case study of web 2.0 tools and Windows live
spaces. Computers & Education, 56(3), 720–726.
70. Venčkauskas, A., Damaševičius, R., Jusas, N., Jusas, V., Maciulevičius,
S., Marcinkevičius, R., & Toldinas, J., (2015). Investigation of artifacts
left by BitTorrent client in Windows 8 registry. Science and Education,
3(2), 25–31.
71. Vogel-Heuser, B., Rehberger, S., Frank, T., & Aicher, T., (2014).
Quality despite quantity—Teaching large heterogeneous classes in C
programming and fundamentals in computer science. In: 2014 IEEE
Global Engineering Education Conference (EDUCON) (pp. 367–372).
IEEE.
72. Warner, P. D., (2001). Windows ME. The CPA Journal, 71(1), 64.
73. Westerlund, A., & Danielsson, J., (2001). Heimdal and Windows
2000 Kerberos-how to get them to play together. In: USENIX Annual
Technical Conference, FREENIX Track (pp. 267–272).
74. Whitehouse, O., (2007). An analysis of address space layout
randomization on Windows vista. Symantec Advanced Threat Research,
1–14.
75. Zhang, S., Wang, L., Zhang, R., & Guo, Q., (2010). Exploratory
study on memory analysis of Windows 7 operating system. In: 2010
3rd International Conference on Advanced Computer Theory and
Engineering (ICACTE) (Vol. 6, pp. V6–373). IEEE.
76. Zhang, W., Lu, L., Peng, J., & Song, A., (2016). Comparison of the
overall energy performance of semi-transparent photovoltaic Windows
and common energy-efficient Windows in Hong Kong. Energy and
Buildings, 128, 511–518.
77. Zimmermann, T., Nagappan, N., & Williams, L., (2010). Searching for
a needle in a haystack: Predicting security vulnerabilities for Windows
vista. In: 2010 Third International Conference on Software Testing,
Verification, and Validation (pp. 421–428). IEEE.
INDEX
A
Abstraction 66, 67, 69, 70, 71, 82, 84, 85, 86
Ada 70, 75, 76, 77, 82, 84, 87, 91, 94
Adobe Creative Cloud 241
Adobe Photoshop 2, 10
Algol 72, 73, 92, 93, 94
Algorithmic languages 72
American National Standards Institute (ANSI) 74
Apple 222
Application software 10, 200
Arithmetic logic unit (ALU) 47
Array subscripts 137
Artificial intelligence (AI) 57
Assemblers 10
Audio devices 201

B
Babbage Engine 40
Backward induction 161, 162, 163, 165, 173, 175, 182, 183
Basic assembly language (BAL) 46
Batch systems 51
Binary data 15
Binary number 12, 14, 15, 16, 18
Bits 10, 11
Bold-faced numbers 156, 157
Bootstrap program 201
Bootstrap software 202
Buffering 210
Bytes 10, 13, 14

C
C 69, 70, 72, 74, 75, 76, 80, 82, 84, 86, 87, 89, 90, 91, 92, 93, 94
C++ 70, 74, 75, 82, 84, 91
Canva 241
Central processing unit (CPU) 16, 18
Clustered system 204
COBOL programming language 72
Communication protocols 52
Compiled languages 96
Compilers 10, 30, 35
Computer devices 2
Computer processing units (CPUs) 5
Computer programming 2, 3
Control device 208
Control Panel 245, 250

P
Paper tape 44
Parallel programs 53, 54, 59
Pascal 69, 70, 73, 74, 75, 76, 86
Pattern matching 86
Personal computers (PC) 47, 211
Pixels 15
Power User Tasks Menu 250
Print statement 98, 99, 101, 102, 116
Program 2, 3, 4, 6, 10, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34
Programmable systems (P-type) 49
Programming language 3, 20, 21, 22, 27, 32, 33, 34, 35
Progressive web app (PWA) 242
Punched cards 44
Python 3, 20, 21, 22, 23, 24, 25, 26, 27, 34, 35
Python math library 119

Q
Qualifier volatile 143
Quotes 144

R
RAM, or Random-access memory 6
Read-only memory (ROM) 45
Real-time systems 208
Recursive optimization 164, 165, 194
Recursive optimization procedure 164, 165
Registry Editor 250, 251
Relay-Based Computers 40

S
Secondary storage devices 4
Security 66
Sequential programs 53
Settings 251
Shared memory 201
Software 2, 3, 4, 8, 9, 10, 17, 21, 23, 24, 25, 29, 32, 34, 35
Software engineering environments 48
Software systems 48, 49, 50, 51, 54, 56, 57, 59, 60
Source code 38
Specifiable systems (S-type) 49
Speech recognition 231
Spooling 210
Standard library 134, 135, 136, 140
Start menu 222, 226, 232, 233, 234, 246, 249, 251
StickyKeys 228
Switches 5, 10, 11, 27
Symbolic constants 139, 140, 141, 145, 146, 148
System calls 213, 215
System Information 251, 252
System Restore 229
System software 9

T
Task Manager 250, 252
Transition function 165, 166, 179, 184

U
Ultralight plane 66
Unicode 14
Universal Turing System 39
Universal windows app (UWP) 242
UNIX 74