Program
Stored-program computers
Instead of plugging in cords and turning switches, a stored-program computer loads
its instructions into memory just like it loads its data into memory.[21] As a
result, the computer can be programmed quickly and can perform calculations at very
fast speeds.[22] J. Presper Eckert and John Mauchly built the ENIAC, and the two
engineers introduced the stored-program concept in a three-page memo dated February
1944.[23] Later, in September 1944, John von Neumann began working on the ENIAC
project. On June 30, 1945, von Neumann published the First Draft of a Report on the
EDVAC, which equated the structures of the computer with the structures of the human
brain.[22] The design became known as the von Neumann architecture. The
architecture was deployed in parallel in the construction of the EDVAC and EDSAC
computers, both completed in 1949.[24][25]
The IBM System/360 (1964) was a family of computers, each having the same
instruction set architecture. The Model 20 was the smallest and least expensive;
the Model 195 was the largest and most expensive. Customers could upgrade to a
larger model and retain the same application software.[26] Each System/360 model
featured multiprogramming:[26] multiple processes resided in memory at once, so
that while one process was waiting for input/output, another could compute.
IBM planned for each model to be programmed using PL/1.[27] A committee was formed
that included COBOL, Fortran, and ALGOL programmers. Its purpose was to develop a
language that was comprehensive, easy to use, extendible, and would replace COBOL
and Fortran.[27] The result was a large and complex language that took a long time
to compile.[28]
Switches for manual input on a Data General Nova 3, manufactured in the mid-1970s
Computers manufactured until the 1970s had front-panel switches for manual
programming.[29] The computer program was written on paper for reference. An
instruction was represented by a configuration of on/off settings. After setting
the configuration, an execute button was pressed. This process was then repeated.
Computer programs were also loaded automatically via paper tape, punched cards, or
magnetic tape. After the medium was loaded, the starting address was set via
switches, and the execute button was pressed.[29]
Very Large Scale Integration
Originally, integrated circuit chips had their function set during manufacturing.
During the 1960s, controlling the electrical flow migrated to programming a matrix
of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses.
The process of embedding instructions onto the matrix was to burn out the unneeded
connections. Because there were so many connections, firmware programmers wrote a
computer program on another chip to oversee the burning. The technology became
known as Programmable ROM. In 1971, Intel installed the computer program onto the
chip and named it the Intel 4004 microprocessor.[35]
x86 series
The original IBM Personal Computer (1981) used an Intel 8088 microprocessor.
In 1978, the modern software development environment began when Intel upgraded the
Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the
cheaper Intel 8088.[37] IBM embraced the Intel 8088 when they entered the personal
computer market (1981). As consumer demand for personal computers increased, so did
Intel's microprocessor development. This succession of microprocessors became known
as the x86 series. The x86 assembly language is a family of backward-compatible machine
instructions. Machine instructions created in earlier microprocessors were retained
throughout microprocessor upgrades. This enabled consumers to purchase new
computers without having to purchase new application software. The major categories
of instructions are:[c]
Memory instructions to set and access numbers and strings in random-access memory.
Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic
operations on integers.
Floating point ALU instructions to perform the primary arithmetic operations on
real numbers.
Call stack instructions to push and pop words needed to allocate memory and
interface with functions.
Single instruction, multiple data (SIMD) instructions[d] to increase speed by
performing the same operation on an array of data in parallel.
Fortran
FORTRAN (1958) was designed for scientific calculations. Along with declarations,
expressions, and statements, it supported:
arrays.
subroutines.
"do" loops.
It succeeded because:
programming and debugging costs were below computer running costs.
it was supported by IBM.
applications at the time were scientific.[54]
However, other vendors also wrote Fortran compilers, with syntax differences that
would likely fail IBM's compiler.[54] The American National Standards Institute
(ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the
standard and remained so until 1991. Fortran 90 supports:
records.
pointers to arrays.
COBOL
COBOL (1959) stands for "COmmon Business Oriented Language". Fortran manipulated
symbols, and it was soon realized that symbols did not need to be numbers, so
strings were introduced.[55] The US Department of Defense influenced COBOL's
development, with Grace Hopper being a major contributor. The statements were
English-like and verbose. The goal was to design a language so managers could read
the programs. However, the lack of structured statements hindered this goal.[56]
COBOL's development was tightly controlled, so dialects requiring ANSI
standardization did not emerge. As a consequence, the language was not changed for
15 years, until 1974. The 1990s version did make consequential changes, like adding
object-oriented programming.[56]
Algol
ALGOL (1960) stands for "ALGOrithmic Language". It had a profound influence on
programming language design.[57] Emerging from a committee of European and American
programming language experts, it used standard mathematical notation and had a
readable, structured design. Algol was the first language to define its syntax
using the Backus–Naur form.[57] This led to syntax-directed compilers, and Algol
introduced features such as block structure, nested function definitions, and
recursion.
Basic
BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was
developed at Dartmouth College for all of its students to learn.[8] Even if a
student did not go on to a more powerful language, the student would still remember
Basic.[8] A Basic interpreter was installed in the microcomputers manufactured in the
late 1970s. As the microcomputer industry grew, so did the language.[8]
C
The C programming language (1973) got its name because the language BCPL was
replaced with B, and AT&T Bell Labs called the next version "C". Its purpose was to
write the UNIX operating system.[50] C is a relatively small language, making it
easy to write compilers. Its growth mirrored the hardware growth of the 1980s.[50]
It also grew because it has the facilities of assembly language but uses a
high-level syntax. It added advanced features like:
inline assembler.
arithmetic on pointers.
pointers to functions.
bit operations.
freely combining complex operators.[50]
The global and static data region is located just above the program region. (The
program region is technically called the text region. It is where machine
instructions are stored.)
The global and static data region is technically two regions.[58] One region is
called the initialized data segment, where variables declared with default values
are stored. The other region is called the block started by symbol (BSS) segment,
where variables declared without default values are stored.
Variables stored in the global and static data region have their addresses set at
compile time. They retain their values throughout the life of the process.
The global and static region stores the global variables that are declared on top
of (outside) the main() function.[59] Global variables are visible to main() and
every other function in the source code.
On the other hand, variable declarations inside of main(), other functions, or
within { } block delimiters are local variables. Local variables also include
formal parameter variables. Parameter variables are enclosed within the parentheses
of a function definition.[60] Parameters provide an interface to the function.
Local variables declared using the static prefix are also stored in the global and
static data region.[58] Unlike global variables, static variables are only visible
within the function or block. Static variables always retain their value. An
example usage would be the function:[h]
int increment_counter ( )
{
static int counter = 0;
counter++;
return counter;
}
The stack region is a contiguous block of memory located near the top memory
address.[61] Variables placed in the stack are populated from top to bottom.[i][61]
A stack pointer is a special-purpose register that keeps track of the last memory
address populated.[61] Variables are placed into the stack via the assembly
language PUSH instruction. Therefore, the addresses of these variables are set
during runtime. Stack variables lose their scope via the POP instruction.
Local variables declared without the static prefix, including formal parameter
variables,[62] are called automatic variables[59] and are stored in the stack.[58]
They are visible inside the function or block and lose their scope upon exiting the
function or block.
The heap region is located below the stack.[58] It is populated from the bottom to
the top. The operating system manages the heap using a heap pointer and a list of
allocated memory blocks.[63] Like the stack, the addresses of heap variables are
set during runtime. An out of memory error occurs when the heap pointer and the
stack pointer meet.
C provides the malloc() library function to allocate heap memory.[j][64] Populating
the heap with data requires an additional copy function.[k] Variables stored in the
heap are economically passed to functions using pointers. Without pointers, the
entire block of data would have to be passed to the function via the stack.
C++
In the 1970s, software engineers needed language support to break large projects
down into modules.[65] One obvious feature was to decompose large projects
physically into separate files. A less obvious feature was to decompose large
projects logically into abstract data types.[65] At the time, languages supported
concrete (scalar) datatypes like integer numbers, floating-point numbers, and
strings of characters. Abstract datatypes are structures of concrete datatypes,
with a new name assigned. For example, a list of integers could be called
integer_list.
C++ (1985) was originally called "C with Classes".[70] It was designed to expand
C's capabilities by adding the object-oriented facilities of the language Simula.
[71]
A module's interface file is its header file. Here is a C++ header file for the
GRADE class in a simple school application:
// grade.h
// -------
#ifndef GRADE_H
#define GRADE_H
class GRADE {
public:
// This is the constructor operation.
// ----------------------------------
GRADE ( const char letter );
// These are the class variables.
// ------------------------------
char letter;
int numeric;
};
#endif
A module's other file is the source file. Here is a C++ source file for the GRADE
class in a simple school application:
// grade.cpp
// ---------
#include "grade.h"
// The letter-to-numeric conversion is an illustrative completion.
GRADE::GRADE ( const char letter )
{ this->letter = letter; this->numeric = 4 - ( letter - 'A' ); }
// person.h
// --------
#ifndef PERSON_H
#define PERSON_H
class PERSON {
public:
PERSON ( const char *name );
const char *name;
};
#endif
Here is a C++ source file for the PERSON class in a simple school application:
// person.cpp
// ----------
#include "person.h"
PERSON::PERSON ( const char *name ) { this->name = name; }
Here is a C++ header file for the STUDENT class in a simple school application:
// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
class STUDENT : public PERSON {
public:
STUDENT ( const char *name, const char letter );
GRADE *grade;
};
#endif
Here is a C++ source file for the STUDENT class in a simple school application:
// student.cpp
// -----------
#include "student.h"
#include "person.h"
STUDENT::STUDENT ( const char *name, const char letter ) : PERSON ( name )
{ this->grade = new GRADE ( letter ); }
Here is a driver program to demonstrate the classes:
// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"
int main ( void )
{
STUDENT *student = new STUDENT ( "The Student", 'B' );
std::cout
// Notice student inherits PERSON's name
<< student->name
<< ": Numeric grade = "
<< student->grade->numeric
<< "\n";
return 0;
}