Computer Program - Wikipedia
A computer program in its human-readable form is called source code. Source code needs another
computer program to execute because computers can only execute their native machine
instructions. Therefore, source code may be translated to machine instructions using a compiler
written for the language. (Assembly language programs are translated using an assembler.) The
resulting file is called an executable. Alternatively, source code may execute within an interpreter
written for the language.[2]
If the executable is requested for execution, then the operating system loads it into memory and
starts a process.[3] The central processing unit will soon switch to this process so it can fetch,
decode, and then execute each machine instruction.[4]
If the source code is requested for execution, then the operating system loads the corresponding
interpreter into memory and starts a process. The interpreter then loads the source code into
memory to translate and execute each statement. Running the source code is slower than running
an executable.[5][b] Moreover, the interpreter must be installed on the computer.
Example computer program
The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the
language BASIC (1964) was intentionally limited to make the language easy to learn.[6] For example,
variables are not declared before being used.[7] Also, variables are automatically initialized to zero.[7]
Here is an example computer program, in Basic, to average a list of numbers:[8]
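A short Dartmouth-style listing along these lines (an illustrative reconstruction; the data values are arbitrary):

```basic
10 REM AVERAGE A LIST OF NUMBERS
20 READ N
30 LET S = 0
40 FOR I = 1 TO N
50 READ X
60 LET S = S + X
70 NEXT I
80 PRINT S/N
90 DATA 4, 2, 4, 6, 8
99 END
```

The first DATA value gives the count of numbers; the program then reads and sums that many values and prints their average.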
Once the mechanics of basic computer programming are learned, more sophisticated and powerful
languages are available to build large computer systems.[9]
History
Analytical Engine
In 1837, Jacquard's loom inspired Charles Babbage to attempt to build the Analytical Engine.[10] The
names of the components of the calculating device were borrowed from the textile industry. In the
textile industry, yarn was brought from the store to be milled. The device had a store which
consisted of memory to hold 1,000 numbers of 50 decimal digits each.[11] Numbers from the store
were transferred to the mill for processing. The engine was programmed using two sets of
perforated cards. One set directed the operation and the other set inputted the variables.[10][12]
However, the thousands of cogged wheels and gears never fully worked together.[13]
Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843).[14]
The description contained Note G which completely detailed a method for calculating Bernoulli
numbers using the Analytical Engine. This note is recognized by some historians as the world's first
computer program.[13]
ENIAC
The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall
1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create
the circuits. At its core, it was a series of Pascalines wired together.[18] Its 40 units weighed 30 tons,
occupied 1,800 square feet (167 m2), and consumed $650 per hour (in 1940s currency) in electricity
when idle.[18] It had 20 base-10 accumulators. Programming the ENIAC took up to two months.[18]
Three function tables were on wheels and needed to be rolled to fixed function panels. Function
tables were connected to function panels by plugging heavy black cables into plugboards. Each
function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the
3,000 switches. Debugging a program took a week.[19] It ran from 1947 until 1955 at Aberdeen
Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing
firing tables to aim artillery guns.[20]
Stored-program computers
Instead of plugging in cords and turning switches, a stored-program computer loads its instructions
into memory just like it loads its data into memory.[21] As a result, the computer could be
programmed quickly and perform calculations at very fast speeds.[22] Presper Eckert and John
Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page
memo dated February 1944.[23] Later, in September 1944, John von Neumann began working on the
ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC,
which equated the structures of the computer with the structures of the human brain.[22] The design
became known as the von Neumann architecture. The architecture was simultaneously deployed in
the constructions of the EDVAC and EDSAC computers in 1949.[24][25]
The IBM System/360 (1964) was a family of computers, each having the same instruction set
architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and
retain the same application software.[26] The Model 195 was the most expensive. Each System/360
model featured multiprogramming[26]—having multiple processes in memory at once. When one
process was waiting for input/output, another could compute.
IBM planned for each model to be programmed using PL/1.[27] A committee was formed that
included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that
was comprehensive, easy to use, extendible, and would replace Cobol and Fortran.[27] The result
was a large and complex language that took a long time to compile.[28]
[Image caption: Switches for manual input on a Data General Nova 3, manufactured in the mid-1970s]
Computers manufactured until the 1970s had front-panel switches for manual programming.[29] The
computer program was written on paper for reference. An instruction was represented by a
configuration of on/off settings. After setting the configuration, an execute button was pressed.
This process was then repeated. Computer programs also were automatically inputted via paper
tape, punched cards or magnetic-tape. After the medium was loaded, the starting address was set
via switches, and the execute button was pressed.[29]
A major milestone in software development was the invention of the Very Large Scale Integration
(VLSI) circuit (1964).
Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a
technological improvement to refine the production of field-effect transistors (1963).[30] The goal is
to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally
occurring silicate minerals are converted into polysilicon rods using the Siemens process.[31] The
Czochralski process then converts the rods into a monocrystalline silicon, boule crystal.[32] The
crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then
integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of
metal–oxide–semiconductor (MOS) transistors.[33][34] The MOS transistor is the primary component
in integrated circuit chips.[30]
Originally, integrated circuit chips had their function set during manufacturing. During the 1960s,
controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The
matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the
matrix was to burn out the unneeded connections. There were so many connections that firmware
programmers wrote a computer program on another chip to oversee the burning. The technology
became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip
and named it the Intel 4004 microprocessor.[35]
The terms microprocessor and central processing unit (CPU) are now used interchangeably.
However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU
made from circuit boards containing discrete components on ceramic substrates.[36]
x86 series
In 1978, the modern software development environment began when Intel upgraded the Intel 8080
to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088.[37] IBM
embraced the Intel 8088 when they entered the personal computer market (1981). As consumer
demand for personal computers increased, so did Intel's microprocessor development. The
succession of development is known as the x86 series. The x86 assembly language is a family of
backward-compatible machine instructions. Machine instructions created in earlier
microprocessors were retained throughout microprocessor upgrades. This enabled consumers to
purchase new computers without having to purchase new application software. The major
categories of instructions are:[c]
Memory instructions to set and access numbers and strings in random-access memory.
Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on
integers.
Floating point ALU instructions to perform the primary arithmetic operations on real numbers.
Call stack instructions to push and pop words needed to allocate memory and interface with
functions.
Single instruction, multiple data (SIMD) instructions[d] to increase speed when multiple
processors are available to perform the same algorithm on an array of data.
Changing programming environment
VLSI circuits enabled the programming environment to advance from a computer terminal (until the
1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a
single shell running in a command-line environment. During the 1970s, full-screen source code
editing became possible through a text-based user interface. Regardless of the technology
available, the goal is to program in a programming language.
The programming style of a programming language may be categorized into programming
paradigms.[39] Each of these paradigms has contributed to the synthesis of different programming
languages.[39]
A programming language is a set of keywords, symbols, identifiers, and rules by which programmers
can communicate instructions to the computer.[40] They follow a set of rules called a syntax.[40]
Keywords are reserved words that have a special meaning in the language.
Symbols are characters to form operations, assignments, control flow, and delimiters.
Identifiers are words created by programmers to form constants, variable names, structure
names, and function names.
Programming languages get their basis from formal languages.[41] The purpose of defining a
solution in terms of its formal language is to generate an algorithm to solve the underlying
problem.[41] An algorithm is a sequence of simple instructions that solve a problem.[42]
The evolution of programming languages began when the EDSAC (1949) used the first stored
computer program in its von Neumann architecture.[43] Programming the EDSAC was in the first
generation of programming language.[44]
The basic structure of an assembly language statement is a label, operation, operand, and
comment.[47]
Labels allow the programmer to work with variable names. The assembler will later translate
labels into physical memory addresses.
Operations allow the programmer to work with mnemonics. The assembler will later translate
mnemonics into instruction numbers.
Operands tell the assembler which data the operation will process.
Comments allow the programmer to articulate a narrative because the instructions alone are
vague.
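For illustration, a hypothetical assembly statement containing all four fields (the mnemonic, register, and names are not taken from any particular assembler):

```
total_loop:  ADD  R1, total   ; add register R1 to the running total
```

Here total_loop is the label, ADD is the operation mnemonic, R1 and total are the operands, and the text after the semicolon is the comment.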
The key characteristic of an assembly language program is it forms a one-to-one mapping to its
corresponding machine language target.[48]
The third generation of programming language uses compilers and interpreters to execute
computer programs. The distinguishing feature of a third generation language is its independence
from particular hardware.[49] Early languages include Fortran (1958), COBOL (1959), ALGOL
(1960), and BASIC (1964).[45] In 1973, the C programming language emerged as a high-level
language that produced efficient machine language instructions.[50] Whereas third-generation
languages historically generated many machine instructions for each statement,[51] C has
statements that may generate a single machine instruction.[f] Moreover, an optimizing compiler
might overrule the programmer and produce fewer machine instructions than statements. Today,
an entire paradigm of languages fills the imperative, third-generation spectrum.
The fourth generation of programming language emphasizes what output results are desired,
rather than how programming statements should be constructed.[45] Declarative languages
attempt to limit side effects and allow programmers to write code with relatively few errors.[45]
One popular fourth generation language is called Structured Query Language (SQL).[45] Database
developers no longer need to process each database record one at a time. Also, a simple select
statement can generate output records without having to understand how they are retrieved.
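For example, a hypothetical select statement (the table and column names are illustrative):

```sql
SELECT name, grade
FROM   students
WHERE  grade = 'A';
```

The statement specifies which rows are desired; the database engine determines how to retrieve them.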
Imperative languages
A declaration introduces a variable name to the computer program and assigns it to a datatype[53]
– for example: var x: integer;
A statement might assign an expression to a variable or use the value of a variable to alter the
program's control flow – for example: x := 2 + 2; if x = 4 then do_something();
Fortran
FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system". It was
designed for scientific calculations, without string handling facilities. Along with declarations,
expressions, and statements, it supported:
arrays.
subroutines.
"do" loops.
It succeeded because it was supported by IBM and was suited to the scientific applications of the
day. Later revisions of the language added:
records.
pointers to arrays.
COBOL
COBOL (1959) stands for "COmmon Business Oriented Language". Fortran manipulated symbols. It
was soon realized that symbols did not need to be numbers, so strings were introduced.[55] The US
Department of Defense influenced COBOL's development, with Grace Hopper being a major
contributor. The statements were English-like and verbose. The goal was to design a language so
managers could read the programs. However, the lack of structured statements hindered this
goal.[56]
COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards.
As a consequence, it was not changed for 15 years until 1974. The 1990s version did make
consequential changes, like object-oriented programming.[56]
Algol
ALGOL (1960) stands for "ALGOrithmic Language". It had a profound influence on programming
language design.[57] Emerging from a committee of European and American programming language
experts, it used standard mathematical notation and had a readable, structured design. Algol was
first to define its syntax using the Backus–Naur form.[57] This led to syntax-directed compilers. It
added features like:
"for" loops.
functions.
recursion.[57]
Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On
another branch the descendants include C, C++ and Java.[57]
Basic
BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was developed at
Dartmouth College for all of their students to learn.[8] If a student did not go on to a more powerful
language, the student would still remember Basic.[8] A Basic interpreter was installed in the
microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the
language.[8]
Basic pioneered the interactive session.[8] It offered operating system commands within its
environment, such as NEW, LIST, and RUN.
However, the Basic syntax was too simple for large programs.[8] Recent dialects added structure
and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a
graphical user interface.[7]
C
C programming language (1973) got its name because the language BCPL was replaced with B, and
AT&T Bell Labs called the next version "C". Its purpose was to write the UNIX operating system.[50] C
is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware
growth in the 1980s.[50] Its growth also was because it has the facilities of assembly language, but it
uses a high-level syntax. It added advanced features like:
inline assembler.
arithmetic on pointers.
pointers to functions.
bit operations.
C allows the programmer to control which region of memory data is to be stored. Global variables
and static variables require the fewest clock cycles to store. The stack is automatically used for the
standard variable declarations. Heap memory is returned to a pointer variable from the malloc()
function.
The global and static data region is located just above the program region. (The program region is
technically called the text region. It is where machine instructions are stored.)
The global and static data region is technically two regions.[58] One region is called the
initialized data segment, where variables declared with default values are stored. The other
region is called the block started by symbol (BSS) segment, where variables declared without
default values are stored.
Variables stored in the global and static data region have their addresses set at compile time.
They retain their values throughout the life of the process.
The global and static region stores the global variables that are declared on top of (outside) the
main() function.[59] Global variables are visible to main() and every other function in the
source code.
On the other hand, variable declarations inside of main() , other functions, or within { } block
delimiters are local variables. Local variables also include formal parameter variables. Parameter
variables are enclosed within the parenthesis of a function definition.[60] Parameters provide an
interface to the function.
Local variables declared using the static prefix are also stored in the global and static data
region.[58] Unlike global variables, static variables are only visible within the function or block.
Static variables always retain their value. An example usage would be the function int
increment_counter(){ static int counter = 0; counter++; return counter; }[h]
The stack region is a contiguous block of memory located near the top memory address.[61]
Variables placed in the stack are populated from top to bottom.[i][61] A stack pointer is a special-
purpose register that keeps track of the last memory address populated.[61] Variables are placed
into the stack via the assembly language PUSH instruction. Therefore, the addresses of these
variables are set during runtime. The method for stack variables to lose their scope is via the POP
instruction.
Local variables declared without the static prefix, including formal parameter variables,[62]
are called automatic variables[59] and are stored in the stack.[58] They are visible inside the
function or block and lose their scope upon exiting the function or block.
The heap region is located below the stack.[58] It is populated from the bottom to the top. The
operating system manages the heap using a heap pointer and a list of allocated memory
blocks.[63] Like the stack, the addresses of heap variables are set during runtime. An out of
memory error occurs when the heap pointer and the stack pointer meet.
C provides the malloc() library function to allocate heap memory.[j][64] Populating the heap
with data is an additional copy function.[k] Variables stored in the heap are economically passed
to functions using pointers. Without pointers, the entire block of data would have to be passed
to the function via the stack.
C++
In the 1970s, software engineers needed language support to break large projects down into
modules.[65] One obvious feature was to decompose large projects physically into separate files. A
less obvious feature was to decompose large projects logically into abstract data types.[65] At the
time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers,
and strings of characters. Abstract datatypes are structures of concrete datatypes, with a new name
assigned. For example, a list of integers could be called integer_list .
In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition;
no memory is allocated. When memory is allocated to a class and bound to an identifier, it is called
an object.[66]
Object-oriented imperative languages developed by combining the need for classes and the need for
safe functional programming.[67] A function, in an object-oriented language, is assigned to a class.
An assigned function is then referred to as a method, member function, or operation. Object-
oriented programming is executing operations on objects.[68]
C++ (1985) was originally called "C with Classes".[70] It was designed to expand C's capabilities by
adding the object-oriented facilities of the language Simula.[71]
An object-oriented module is composed of two files. The definitions file is called the header file.
Here is a C++ header file for the GRADE class in a simple school application:
// grade.h
// -------
#ifndef GRADE_H
#define GRADE_H
class GRADE {
public:
// This is the constructor operation.
// ----------------------------------
GRADE ( const char letter );
char letter;
int numeric;
};
#endif
A constructor operation is a function with the same name as the class name.[72] It is executed when
the calling operation executes the new statement.
A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple
school application:
// grade.cpp
// ---------
#include "grade.h"
Here is a C++ header file for the PERSON class in a simple school application:
// person.h
// --------
#ifndef PERSON_H
#define PERSON_H
class PERSON {
public:
PERSON ( const char *name );
const char *name;
};
#endif
Here is a C++ source file for the PERSON class in a simple school application:
// person.cpp
// ----------
#include "person.h"
Here is a C++ header file for the STUDENT class in a simple school application:
// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
Here is a C++ source file for the STUDENT class in a simple school application:
// student.cpp
// -----------
#include "student.h"
#include "person.h"
Here is a C++ driver program for the simple school application:
// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"
int main ( void )
{
STUDENT *student = new STUDENT ( "The Student" );
std::cout
// Notice student inherits PERSON's name
<< student->name
<< ": Numeric grade = "
<< student->grade->numeric
<< "\n";
return 0;
}
# makefile
# --------
clean:
rm student_dvr *.o
Declarative languages
Imperative languages have one major criticism: assigning an expression to a non-local variable may
produce an unintended side effect.[73] Declarative languages generally omit the assignment
statement and the control flow. They describe what computation should be performed and not how
to compute it. Two broad categories of declarative languages are functional languages and logical
languages.
The principle behind a functional language is to use lambda calculus as a guide for a well-defined
semantics.[74] In mathematics, a function is a rule that maps elements from an expression to a range
of values. Consider the function:
times_10(x) = 10 * x
The expression 10 * x is mapped by the function times_10() to a range of values. One value
happens to be 20. This occurs when x is 2. So, the application of the function is mathematically
written as:
times_10(2) = 20
A functional language compiler will not store this value in a variable. Instead, it will push the value
onto the computer's stack before setting the program counter back to the calling function. The
calling function will then pop the value from the stack.[75]
Imperative languages do support functions. Therefore, functional programming can be achieved in
an imperative language, if the programmer uses discipline. However, a functional language will force
this discipline onto the programmer through its syntax. Functional languages have a syntax tailored
to emphasize the what.[76]
A functional program is developed with a set of primitive functions followed by a single driver
function.[73] Consider the snippet:
function range( a, b, c ) {
The primitives are max() and min(). The driver function is range().
Functional languages are used in computer science research to explore new language features.[77]
Moreover, their lack of side effects has made them popular in parallel programming and concurrent
programming.[78] However, application developers prefer the object-oriented features of imperative
languages.[78]
Lisp
Lisp (1958) stands for "LISt Processor".[79] It is tailored to process lists. A full structure of the data is
formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure
lends nicely for recursive functions.[80] The syntax to build a tree is to enclose the space-separated
elements within parentheses. The following is a list of three elements, where the first two elements
are themselves lists of two elements: ((A B) (C D) E)
Lisp has functions to extract and reconstruct elements.[81] The function head() returns a list
containing the first element in the list. The function tail() returns a list containing everything but
the first element. The function cons() returns a list that is the concatenation of other lists.
Therefore, the following expression will return the list x :
cons(head(x), tail(x))
One drawback of Lisp is when many functions are nested, the parentheses may look confusing.[76]
Modern Lisp environments help ensure parentheses match. As an aside, Lisp does support the
imperative language operations of the assignment statement and goto loops.[82] Also, Lisp is not
concerned with the datatype of the elements at compile time.[83] Instead, it assigns (and may
reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding.[84]
Whereas dynamic binding increases the language's flexibility, programming errors may linger until
late in the software development process.[84]
Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the
program may be much shorter than an equivalent imperative language program.[76] Lisp is widely
used in artificial intelligence. However, its usage has been accepted only because it has imperative
language operations, making unintended side-effects possible.[78]
ML
ML (1973)[85] stands for "Meta Language". ML checks to make sure only data of the same type are
compared with one another.[86] For example, this function has one input parameter (an integer) and
returns an integer:
fun times_10 ( n : int ) : int = 10 * n;
Executing:
times_10 2
returns "20 : int". (Both the result and the datatype are returned.)
Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype.[87]
Moreover, ML assigns the datatype of an element at compile time. Assigning the datatype at
compile time is called static binding. Static binding increases reliability because the compiler
checks the context of variables before they are used.[88]
Prolog
Prolog (1972) stands for "PROgramming in LOGic". It is a logic programming language, based on
formal logic. The language was developed by Alain Colmerauer and Philippe Roussel in Marseille,
France. It is an implementation of Selective Linear Definite clause resolution, pioneered by Robert
Kowalski and others at the University of Edinburgh.[89]
The building blocks of a Prolog program are facts and rules. Here is a simple example:
cat(tom).
mouse(jerry).
eat(X, Y) :- cat(X), mouse(Y).
After all the facts and rules are entered, then a question can be asked:
?- eat(tom,jerry).
true
The following example shows how Prolog will convert a letter grade to its numeric value:
numeric_grade('A', 4).
numeric_grade('B', 3).
numeric_grade('C', 2).
numeric_grade('D', 1).
numeric_grade('F', 0).
numeric_grade(X, -1) :- not X = 'A', not X = 'B', not X = 'C', not X = 'D', not X = 'F'.
grade('The Student', 'A').
1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:
billows_fire(X) :-
is_a_dragon(X).
2) A thing billows fire if it is a creature with a parent that billows fire:
billows_fire(X) :-
is_a_creature(X),
is_a_parent_of(Y,X),
billows_fire(Y).
3) The mother of a creature is a parent of the creature, and likewise for the father:
is_a_parent_of(X, Y) :- is_the_mother_of(X, Y).
is_a_parent_of(X, Y) :- is_the_father_of(X, Y).
4) A dragon is a creature:
is_a_creature(X) :-
is_a_dragon(X).
is_a_dragon(norberta).
is_a_creature(puff).
is_the_mother_of(norberta, puff).
Rule (2) is a recursive (inductive) definition. It can be understood declaratively, without the need to
understand how it is executed.
Rule (3) shows how functions are represented by using relations. Here, the mother and father
functions ensure that every individual has only one mother and only one father.
Prolog is an untyped language. Nonetheless, inheritance can be represented by using predicates.
Rule (4) asserts that a creature is a superclass of a dragon.
?- billows_fire(X).
X = norberta
X = puff
Practical applications for Prolog are knowledge representation and problem solving in artificial
intelligence.
Object-oriented programming
Here is a C programming language header file for the GRADE abstract datatype in a simple school
application:
/* grade.h */
/* ------- */
typedef struct
{
char letter;
} GRADE;
/* Constructor */
/* ----------- */
GRADE *grade_new( char letter );
The grade_new() function performs the same algorithm as the C++ constructor operation.
Here is a C programming language source file for the GRADE abstract datatype in a simple school
application:
/* grade.c */
/* ------- */
#include <stdlib.h>
#include "grade.h"
GRADE *grade_new( char letter )
{
GRADE *grade = calloc( 1, sizeof( GRADE ) );
grade->letter = letter;
return grade;
}
In the constructor, the function calloc() is used instead of malloc() because each memory
cell will be set to zero.
Here is a C programming language header file for the PERSON abstract datatype in a simple school
application:
/* person.h */
/* -------- */
#ifndef PERSON_H
#define PERSON_H
typedef struct
{
char *name;
} PERSON;
/* Constructor */
/* ----------- */
PERSON *person_new( char *name );
#endif
Here is a C programming language source file for the PERSON abstract datatype in a simple school
application:
/* person.c */
/* -------- */
#include <stdlib.h>
#include "person.h"
PERSON *person_new( char *name )
{
PERSON *person = calloc( 1, sizeof( PERSON ) );
person->name = name;
return person;
}
Here is a C programming language header file for the STUDENT abstract datatype in a simple
school application:
/* student.h */
/* --------- */
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
typedef struct
{
/* A STUDENT is a subset of PERSON. */
/* -------------------------------- */
PERSON *person;
GRADE *grade;
} STUDENT;
/* Constructor */
/* ----------- */
STUDENT *student_new( char *name );
#endif
Here is a C programming language source file for the STUDENT abstract datatype in a simple school
application:
/* student.c */
/* --------- */
#include "student.h"
#include "person.h"
/* student_dvr.c */
/* ------------- */
#include <stdio.h>
#include "student.h"
int main( void )
{
STUDENT *student = student_new( "The Student" );
printf( "%s\n", student->person->name );
return 0;
}
# makefile
# --------
all: student_dvr
clean:
rm student_dvr *.o
Identify the relationships from object to object. Most likely these will be verbs.
The syntax of a computer program is a list of production rules which form its grammar.[96] A
programming language's grammar correctly places its declarations, expressions, and
statements.[97] Complementing the syntax of a language are its semantics. The semantics describe
the meanings attached to various syntactic constructs.[98] A syntactic construct may need a
semantic description because a production rule may have an invalid interpretation.[99] Also, different
languages might have the same syntax; however, their behaviors may be different.
The syntax of a language is formally described by listing the production rules. Whereas the syntax
of a natural language is extremely complicated, a subset of the English language can have this
production rule listing:[100]
1. a sentence is made up of a noun-phrase followed by a verb-phrase;
2. a noun-phrase is made up of an article followed by an adjective followed by a noun;
3. a verb-phrase is made up of a verb followed by a noun-phrase;
4. an article is 'the';
5. an adjective is 'big' or
6. an adjective is 'small';
7. a noun is 'cat' or
8. a noun is 'mouse';
9. a verb is 'eats';
The words in bold-face are known as non-terminals. The words in 'single quotes' are known as
terminals.[101]
From this production rule listing, complete sentences may be formed using a series of
replacements.[102] The process is to replace non-terminals with either a valid non-terminal or a valid
terminal. The replacement process repeats until only terminals remain. One valid sentence is:
sentence
noun-phrase verb-phrase
article adjective noun verb-phrase
the big cat verb-phrase
the big cat verb noun-phrase
the big cat eats noun-phrase
the big cat eats article adjective noun
the big cat eats the small mouse
One production rule listing method is called the Backus–Naur form (BNF).[103] BNF describes the
syntax of a language and itself has a syntax. This recursive definition is an example of a
metalanguage.[98] The syntax of BNF includes:
::= which translates to "is made up of a[n]" when a non-terminal is to its right. It translates to "is"
when a terminal is to its right.
Using BNF, a subset of the English language can have this production rule listing:
<sentence> ::= <noun-phrase><verb-phrase>
<noun-phrase> ::= <article><adjective><noun>
<verb-phrase> ::= <verb><noun-phrase>
<article> ::= the
<adjective> ::= big | small
<noun> ::= cat | mouse
<verb> ::= eats
A recursive rule, such as <number> ::= <digit> | <number><digit>, allows for an infinite number of
possibilities. Therefore, a semantic is necessary to describe a limitation of the number of digits.
Two formal methods are available to describe semantics. They are denotational semantics and
axiomatic semantics.[105]
Software engineering and computer programming
Performance objectives
The systems analyst has the objective to deliver the right information to the right person at the right
time.[108] The critical factors to achieve this objective are:[108]
4. The speed of the output. Time-sensitive information is important when communicating with
the customer in real time.
Cost objectives
Achieving performance objectives should be balanced with all of the costs, including:[109]
1. Development costs.
2. Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a
limited-use system.
3. Hardware costs.
4. Operating costs.
Applying a systems development process will mitigate the axiom: the later in the process an error is
detected, the more expensive it is to correct.[110]
Waterfall model
5. The maintenance phase lasts throughout the life of the system. Changes to the system after it
is deployed may be necessary.[113] Faults may exist, including specification faults, design
faults, or coding faults. Improvements may be necessary. Adaptation may be necessary to react
to a changing environment.
Computer programmer
A computer programmer is a specialist responsible for writing or modifying the source code to
implement the detailed plan.[107] A programming team is likely to be needed because most systems
are too large to be completed by a single programmer.[114] However, adding programmers to a
project may not shorten the completion time. Instead, it may lower the quality of the system.[114] To
be effective, program modules need to be defined and distributed to team members.[114] Also, team
members must interact with one another in a meaningful and effective way.[114]
The module's name should be derived first by its function, then by its context. Its logic should not be
part of the name.[117] For example, function compute_square_root( x ) or function
compute_square_root_integer( i : integer ) are appropriate module names. However,
function compute_square_root_by_division( x ) is not.
The degree of interaction within a module is its level of cohesion.[117] Cohesion is a judgment of the
relationship between a module's name and its function. The degree of interaction between modules
is the level of coupling.[118] Coupling is a judgment of the relationship between a module's context
and the elements being performed upon.
Cohesion
Coincidental Cohesion: A module has coincidental cohesion if it performs multiple functions, and
the functions are completely unrelated. For example, function
read_sales_record_print_next_line_convert_to_float() . Coincidental cohesion
occurs in practice if management enforces silly rules. For example, "Every module will have
between 35 and 50 executable statements."[119]
Logical Cohesion: A module has logical cohesion if it has available a series of functions, but only
one of them is executed. For example, function perform_arithmetic(
perform_addition, a, b ) .
Temporal Cohesion: A module has temporal cohesion if it performs functions related to time. One
example, function initialize_variables_and_open_files() . Another example,
stage_one() , stage_two() , ...
Procedural Cohesion: A module has procedural cohesion if it performs multiple loosely related
functions. For example, function read_part_number_update_employee_record() or
function read_part_number_update_sales_record() .
Functional Cohesion: a module has functional cohesion if it achieves a single goal working only
on local variables. Moreover, it may be reusable in other contexts.
Coupling
Content Coupling: A module has content coupling if it modifies a local variable of another
function. COBOL used to do this with the alter verb.
Control Coupling: A module has control coupling if another module can modify its control flow.
For example, perform_arithmetic( perform_addition, a, b ) .
Stamp Coupling: A module has stamp coupling if an element of a data structure passed as a
parameter is modified. Object-oriented classes work at this level.
Data Coupling: A module has data coupling if all of its input parameters are needed and none of
them are modified. Moreover, the result of the function is returned as a single object.
The diagram also has arrows connecting modules to each other. Arrows pointing into modules
represent a set of inputs. Each module should have only one arrow pointing out from it to represent
its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals
will convey an entire algorithm. The input modules should start the diagram. The input modules
should connect to the transform modules. The transform modules should connect to the output
modules.[121]
Functional categories
Computer programs may be categorized along functional lines. The main functional categories are
application software and system software. System software includes the operating system, which
couples computer hardware with application software.[122] The purpose of the operating system is
to provide an environment where application software executes in a convenient and efficient
manner.[122] Both application software and system software execute utility programs. At the
hardware level, a microcode program controls the circuits throughout the central processing unit.
Application software
Application software is the key to unlocking the potential of the computer system.[123] Enterprise
application software bundles accounting, personnel, customer, and vendor applications. Examples
include enterprise resource planning, customer relationship management, and supply chain
management software.
The potential advantages of in-house software are that features and reports may be developed
exactly to specification.[126] Management may also be involved in the development process and
retain a level of control.[127] Management may decide to counteract a competitor's new initiative or
implement a customer or vendor requirement.[128] A merger or acquisition may necessitate
enterprise software changes. The potential disadvantages of in-house software are that time and
resource costs may be extensive.[124] Furthermore, risks concerning features and performance
may be looming.
The potential advantages of off-the-shelf software are that upfront costs are identifiable, the basic
needs should be fulfilled, and its performance and reliability have a track record.[124] The potential
disadvantages of off-the-shelf software are that it may have unnecessary features that confuse end
users, it may lack features the enterprise needs, and the data flow may not match the enterprise's
work processes.[124]
An operating system is the low-level software that supports a computer's basic functions, such as
scheduling processes and controlling peripherals.[122]
In the 1950s, the programmer, who was also the operator, would write a program and run it. After the
program finished executing, the output may have been printed, or it may have been punched onto
paper tape or cards for later processing.[29] More often than not the program did not work. The
programmer then looked at the console lights and fiddled with the console switches. If less
fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the
amount of wasted time by automating the operator's job. A program called an operating system was
kept in the computer at all times.[130]
The term operating system may refer to two levels of software.[131] The operating system may refer
to the kernel program that manages the processes, memory, and devices. More broadly, the
operating system may refer to the entire package of the central software. The package includes a
kernel program, command-line interpreter, graphical user interface, utility programs, and editor.[131]
Kernel Program
The kernel program should perform process scheduling.[132] The kernel creates a process control
block when a computer program is selected for execution. However, an executing program gets
exclusive access to the central processing unit only for a time slice. To provide each user with the
appearance of continuous access, the kernel quickly preempts the running process and dispatches
another; each such switch between process control blocks is known as a context switch. The goal
for system developers is to minimize dispatch latency.
When the kernel initially loads an executable into memory, it divides the address space logically
into regions.[133] The kernel maintains a master-region table and many per-process-region
(pregion) tables—one for each running process.[133] These tables constitute the virtual address
space. The master-region table is used to determine where its contents are located in physical
memory. The pregion tables allow each process to have its own program (text) pregion, data
pregion, and stack pregion.
The program pregion stores machine instructions. Since machine instructions do not change,
the program pregion may be shared by many processes of the same executable.[133]
To save time and memory, the kernel may load only blocks of executable instructions from the
disk drive, rather than the entire executable file.[132]
The kernel is responsible for translating virtual addresses into physical addresses. The kernel
may request data from the memory controller and, instead, receive a page fault.[134] If so, the
kernel accesses the memory management unit to populate the physical data region and
translate the address.[135]
The kernel allocates memory from the heap upon request by a process.[64] When the process is
finished with the memory, the process may request for it to be freed. If the process exits
without requesting all allocated memory to be freed, then the kernel performs garbage
collection to free the memory.
The kernel also ensures that a process only accesses its own memory, and not that of the
kernel or other processes.[132]
The kernel program should perform file system management.[132] The kernel has instructions to
create, retrieve, update, and delete files.
The kernel program should perform device management.[132] The kernel provides programs to
standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other
devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the
same time.
The kernel program should perform network management.[136] The kernel transmits and receives
packets on behalf of processes. One key service is to find an efficient route to the target system.
The kernel program should provide system level functions for programmers to use.[137]
Programmers access files through a relatively simple interface that in turn executes a
relatively complicated low-level I/O interface. The low-level interface includes file creation, file
descriptors, file seeking, physical reading, and physical writing.
Programmers create processes through a relatively simple interface that in turn executes a
relatively complicated low-level interface.
Programmers perform date/time arithmetic through a relatively simple interface that in turn
executes a relatively complicated low-level time interface.[138]
The kernel program should provide a communication channel between executing processes.[139]
For a large software system, it may be desirable to engineer the system into smaller processes.
Processes may communicate with one another by sending and receiving signals.
Originally, operating systems were programmed in assembly; however, modern operating systems
are typically written in higher-level languages like C, Objective-C, and Swift.[l]
Utility program
A utility program is designed to aid system administration and software execution. Operating
systems execute hardware utility programs to check the status of disk drives, memory, speakers,
and printers.[140] A utility program may optimize the placement of a file on a crowded disk. System
utility programs monitor hardware and network performance. When a metric is outside an
acceptable range, a trigger alert is generated.[141]
Utility programs include compression programs so data files are stored on less disk space.[140]
Compressed files also save time when data files are transmitted over the network.[140] Utility
programs can sort and merge data sets.[141] Utility programs detect computer viruses.[141]
Microcode program
NOT gate
NAND gate
NOR gate
AND gate
OR gate
A microcode program is the bottom-level interpreter that controls the data path of software-driven
computers.[142] (Advances in hardware have migrated these operations to hardware execution
circuits.)[142] Microcode instructions allow the programmer to more easily implement the digital
logic level[143]—the computer's real hardware. The digital logic level is the boundary between
computer science and computer engineering.[144]
A logic gate is a tiny circuit, built from transistors, that can return one of two signals: on or off.[145]
These five gates form the building blocks of binary algebra—the digital logic functions of the
computer.
Microcode instructions are mnemonics programmers may use to execute digital logic functions
instead of forming them in binary algebra. They are stored in a central processing unit's (CPU)
control store.[146] These hardware-level instructions move data throughout the data path.
The micro-instruction cycle begins when the microsequencer uses its microprogram counter to
fetch the next machine instruction from random-access memory.[147] The next step is to decode the
machine instruction by selecting the proper output line to the hardware module.[148] The final step is
to execute the instruction using the hardware module's set of gates.
Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU).[149] The ALU
has circuits to perform elementary operations to add, shift, and compare integers. By combining
and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
Microcode instructions move data between the CPU and the memory controller. Memory controller
microcode instructions manipulate two registers. The memory address register is used to access
each memory cell's address. The memory data register is used to set and read each cell's
contents.[150]
Notes
a. The Prolog language allows for a database of facts and rules to be entered in any order.
However, a question about a database must be at the very end.
d. introduced in 1999
g. The line numbers were typically incremented by 10 to leave room if additional statements were
added later.
i. This is despite the metaphor of a stack, which normally grows from bottom to top.
j. C also provides the calloc() function to allocate heap memory. It provides two additional
services: 1) It allows the programmer to create an array of arbitrary size. 2) It sets each
memory cell to zero.
k. For string variables, C provides the strdup() function. It executes both the allocation and the
copy of the string.
l. The UNIX operating system was written in C, macOS was written in Objective-C, and Swift
replaced Objective-C.
References
10. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/16) . Walker and Company. p. 16 (https://archi
ve.org/details/eniac00scot/page/16) . ISBN 978-0-8027-1348-3.
11. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition (https://archiv
e.org/details/structuredcomput00tane/page/14) . Prentice Hall. p. 14 (https://archive.org/det
ails/structuredcomput00tane/page/14) . ISBN 978-0-13-854662-5.
12. Bromley, Allan G. (1998). "Charles Babbage's Analytical Engine, 1838" (http://profs.scienze.univ
r.it/~manca/storia-informatica/babbage.pdf) (PDF). IEEE Annals of the History of
Computing. 20 (4): 29–45. doi:10.1109/85.728228 (https://doi.org/10.1109%2F85.728228) .
S2CID 2285332 (https://api.semanticscholar.org/CorpusID:2285332) . Archived (https://web.
archive.org/web/20160304081812/http://profs.scienze.univr.it/~manca/storia-informatica/ba
bbage.pdf) (PDF) from the original on 2016-03-04. Retrieved 2015-10-30.
13. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition (https://archiv
e.org/details/structuredcomput00tane/page/15) . Prentice Hall. p. 15 (https://archive.org/det
ails/structuredcomput00tane/page/15) . ISBN 978-0-13-854662-5.
14. J. Fuegi; J. Francis (October–December 2003), "Lovelace & Babbage and the creation of the
1843 'notes' ", Annals of the History of Computing, 25 (4): 16, 19, 25,
doi:10.1109/MAHC.2003.1253887 (https://doi.org/10.1109%2FMAHC.2003.1253887)
15. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications (https://archive.org/detai
ls/discretemathemat00rose/page/654) . McGraw-Hill, Inc. p. 654 (https://archive.org/details/
discretemathemat00rose/page/654) . ISBN 978-0-07-053744-6. "Turing machines can model
all the computations that can be performed on a computing machine."
16. Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and
Company. p. 234. ISBN 978-0-669-17342-0.
17. Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and
Company. p. 243. ISBN 978-0-669-17342-0. "[A]ll the common mathematical functions, no
matter how complicated, are Turing-computable."
18. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/102) . Walker and Company. p. 102 (https://arc
hive.org/details/eniac00scot/page/102) . ISBN 978-0-8027-1348-3.
19. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/94) . Walker and Company. p. 94 (https://archi
ve.org/details/eniac00scot/page/94) . ISBN 978-0-8027-1348-3.
20. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/107) . Walker and Company. p. 107 (https://arc
hive.org/details/eniac00scot/page/107) . ISBN 978-0-8027-1348-3.
21. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/120) . Walker and Company. p. 120 (https://arc
hive.org/details/eniac00scot/page/120) . ISBN 978-0-8027-1348-3.
22. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/118) . Walker and Company. p. 118 (https://arc
hive.org/details/eniac00scot/page/118) . ISBN 978-0-8027-1348-3.
23. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/119) . Walker and Company. p. 119 (https://arc
hive.org/details/eniac00scot/page/119) . ISBN 978-0-8027-1348-3.
24. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer
(https://archive.org/details/eniac00scot/page/123) . Walker and Company. p. 123 (https://arc
hive.org/details/eniac00scot/page/123) . ISBN 978-0-8027-1348-3.
26. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition (https://archiv
e.org/details/structuredcomput00tane) . Prentice Hall. p. 21 (https://archive.org/details/struc
turedcomput00tane/page/n42) . ISBN 978-0-13-854662-5.
27. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 27. ISBN 0-201-71012-9.
28. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 29. ISBN 0-201-71012-9.
29. Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley.
p. 6. ISBN 978-0-201-50480-4.
30. To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS (https://books.go
ogle.com/books?id=UUbB3d2UnaAC&pg=PA46) . Johns Hopkins University Press. 2002.
ISBN 9780801886393. Archived (https://web.archive.org/web/20230202181649/https://book
s.google.com/books?id=UUbB3d2UnaAC&pg=PA46) from the original on February 2, 2023.
Retrieved February 3, 2022.
31. Chalamala, Babu (2017). "Manufacturing of Silicon Materials for Microelectronics and Solar
PV" (https://www.osti.gov/servlets/purl/1497235) . Sandia National Laboratories. Archived (h
ttps://web.archive.org/web/20230323163602/https://www.osti.gov/biblio/1497235) from
the original on March 23, 2023. Retrieved February 8, 2022.
32. "Fabricating ICs Making a base wafer" (https://www.britannica.com/technology/integrated-circ
uit/Fabricating-ICs#ref837156) . Britannica. Archived (https://web.archive.org/web/20220208
103132/https://www.britannica.com/technology/integrated-circuit/Fabricating-ICs#ref83715
6) from the original on February 8, 2022. Retrieved February 8, 2022.
37. "Bill Gates, Microsoft and the IBM Personal Computer" (https://books.google.com/books?id=V
DAEAAAAMBAJ&pg=PA22) . InfoWorld. August 23, 1982. Archived (https://web.archive.org/w
eb/20230218183644/https://books.google.com/books?id=VDAEAAAAMBAJ&pg=PA22)
from the original on 18 February 2023. Retrieved 1 February 2022.
38. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley.
p. 10. ISBN 978-0-321-56384-2.
39. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley.
p. 11. ISBN 978-0-321-56384-2.
40. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 159.
ISBN 0-619-06489-7.
41. Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and
Company. p. 2. ISBN 978-0-669-17342-0.
42. Weiss, Mark Allen (1994). Data Structures and Algorithm Analysis in C++. Benjamin/Cummings
Publishing Company, Inc. p. 29. ISBN 0-8053-5443-3.
43. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition (https://archiv
e.org/details/structuredcomput00tane/page/17) . Prentice Hall. p. 17 (https://archive.org/det
ails/structuredcomput00tane/page/17) . ISBN 978-0-13-854662-5.
44. Wilkes, M. V.; Renwick, W. (1982), Randell, Brian (ed.), "The EDSAC" (https://link.springer.com/c
hapter/10.1007/978-3-642-61812-3_34) , The Origins of Digital Computers: Selected Papers,
Berlin, Heidelberg: Springer, pp. 417–421, doi:10.1007/978-3-642-61812-3_34 (https://doi.org/1
0.1007%2F978-3-642-61812-3_34) , ISBN 978-3-642-61812-3, retrieved 2025-04-25
45. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 160.
ISBN 0-619-06489-7.
46. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition (https://archiv
e.org/details/structuredcomput00tane/page/399) . Prentice Hall. p. 399 (https://archive.org/
details/structuredcomput00tane/page/399) . ISBN 978-0-13-854662-5.
47. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition (https://archiv
e.org/details/structuredcomput00tane/page/400) . Prentice Hall. p. 400 (https://archive.org/
details/structuredcomput00tane/page/400) . ISBN 978-0-13-854662-5.
48. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition (https://archiv
e.org/details/structuredcomput00tane/page/398) . Prentice Hall. p. 398 (https://archive.org/
details/structuredcomput00tane/page/398) . ISBN 978-0-13-854662-5.
49. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 26. ISBN 0-201-71012-9.
50. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 37. ISBN 0-201-71012-9.
51. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 160.
ISBN 0-619-06489-7. "With third-generation and higher-level programming languages, each
statement in the language translates into several instructions in machine language."
52. Wilson, Leslie B. (1993). Comparative Programming Languages, Second Edition. Addison-
Wesley. p. 75. ISBN 978-0-201-56885-1.
53. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley.
p. 40. ISBN 978-0-321-56384-2.
54. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 16. ISBN 0-201-71012-9.
55. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 24. ISBN 0-201-71012-9.
56. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 25. ISBN 0-201-71012-9.
57. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 19. ISBN 0-201-71012-9.
59. Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition.
Prentice Hall. p. 31. ISBN 0-13-110362-8.
60. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 128. ISBN 0-201-71012-9.
61. Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 121. ISBN 978-
1-59327-220-3.
62. Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 122. ISBN 978-
1-59327-220-3.
63. Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition.
Prentice Hall. p. 185. ISBN 0-13-110362-8.
64. Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition.
Prentice Hall. p. 187. ISBN 0-13-110362-8.
65. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 38. ISBN 0-201-71012-9.
66. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 193. ISBN 0-201-71012-9.
67. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 39. ISBN 0-201-71012-9.
68. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 35. ISBN 0-201-71012-9.
69. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 192. ISBN 0-201-71012-9.
70. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley.
p. 22. ISBN 978-0-321-56384-2.
71. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley.
p. 21. ISBN 978-0-321-56384-2.
72. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley.
p. 49. ISBN 978-0-321-56384-2.
73. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 218. ISBN 0-201-71012-9.
74. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 217. ISBN 0-201-71012-9.
75. Weiss, Mark Allen (1994). Data Structures and Algorithm Analysis in C++. Benjamin/Cummings
Publishing Company, Inc. p. 103. ISBN 0-8053-5443-3. "When there is a function call, all the
important information needs to be saved, such as register values (corresponding to variable
names) and the return address (which can be obtained from the program counter)[.] ... When
the function wants to return, it ... restores all the registers. It then makes the return jump.
Clearly, all of this work can be done using a stack, and that is exactly what happens in virtually
every programming language that implements recursion."
76. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 230. ISBN 0-201-71012-9.
77. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 240. ISBN 0-201-71012-9.
78. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 241. ISBN 0-201-71012-9.
79. Jones, Robin; Maynard, Clive; Stewart, Ian (December 6, 2012). The Art of Lisp Programming.
Springer Science & Business Media. p. 2. ISBN 9781447117193.
80. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 220. ISBN 0-201-71012-9.
81. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 221. ISBN 0-201-71012-9.
82. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 229. ISBN 0-201-71012-9.
83. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 227. ISBN 0-201-71012-9.
84. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley.
p. 222. ISBN 0-201-71012-9.
85. Gordon, Michael J. C. (1996). "From LCF to HOL: a short history" (http://www.cl.cam.ac.uk/~mj
cg/papers/HolHistory.html) . Archived (https://web.archive.org/web/20160905201847/http://
www.cl.cam.ac.uk/~mjcg/papers/HolHistory.html) from the original on 2016-09-05.
Retrieved 2021-10-30.