Program

In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device
that can model every computation.[15] It is a finite-state machine that has an
infinitely long read/write tape. The machine can move the tape back and forth,
changing its contents as it performs an algorithm. The machine starts in the
initial state, goes through a sequence of steps, and halts when it encounters the
halt state.[16] All present-day computers are Turing complete.[17]
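
The tape model translates almost directly into code. Below is a minimal C++
sketch; the two-state machine it runs (invert each bit, move right, halt on
blank) is a made-up example standing in for a real transition table:

#include <iostream>
#include <map>
#include <string>

// A minimal Turing-machine sketch: a finite control driving a tape
// that grows on demand. The machine here is a made-up example that
// inverts each bit, moves right, and halts on the first blank cell.
int main()
{
    std::map<long, char> tape = { {0, '1'}, {1, '0'}, {2, '1'} };
    long head = 0;
    std::string state = "invert";

    while ( state != "halt" )
    {
        char symbol = tape.count( head ) ? tape[head] : '_'; // '_' is blank
        if ( symbol == '_' )
            state = "halt";                           // the halt state
        else
        {
            tape[head] = ( symbol == '0' ) ? '1' : '0'; // write the cell
            head = head + 1;                           // move the head right
        }
    }

    for ( auto &cell : tape )
        std::cout << cell.second;
    std::cout << "\n"; // prints 010
    return 0;
}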

ENIAC

Glenn A. Beck changing a tube in ENIAC


The Electronic Numerical Integrator And Computer (ENIAC) was built between July
1943 and Fall 1945. It was a Turing complete, general-purpose computer that used
17,468 vacuum tubes to create the circuits. At its core, it was a series of
Pascalines wired together.[18] Its 40 units weighed 30 tons, occupied 1,800 square
feet (167 m2), and consumed $650 per hour (in 1940s currency) in electricity when
idle.[18] It had 20 base-10 accumulators. Programming the ENIAC took up to two
months.[18] Three function tables were on wheels and needed to be rolled to fixed
function panels. Function tables were connected to function panels by plugging
heavy black cables into plugboards. Each function table had 728 rotating knobs.
Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a
program took a week.[19] It ran from 1947 until 1955 at Aberdeen Proving Ground,
calculating hydrogen bomb parameters, predicting weather patterns, and producing
firing tables to aim artillery guns.[20]

Stored-program computers
Instead of plugging in cords and turning switches, a stored-program computer loads
its instructions into memory just like it loads its data into memory.[21] As a
result, the computer could be programmed quickly and perform calculations at very
fast speeds.[22] Presper Eckert and John Mauchly built the ENIAC. The two engineers
introduced the stored-program concept in a three-page memo dated February 1944.[23]
Later, in September 1944, John von Neumann began working on the ENIAC project. On
June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC,
which equated the structures of the computer with the structures of the human
brain.[22] The design became known as the von Neumann architecture. The
architecture was deployed simultaneously in the construction of the EDVAC and
EDSAC computers in 1949.[24][25]

The IBM System/360 (1964) was a family of computers, each having the same
instruction set architecture. The Model 20 was the smallest and least expensive.
Customers could upgrade and retain the same application software.[26] The Model 195
was the most expensive. Each System/360 model featured multiprogramming[26]—having
multiple processes in memory at once. When one process was waiting for
input/output, another could compute.
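
Conceptually, that overlap can be sketched with modern C++ threads (a loose
analogy only; the System/360 interleaved whole processes, not threads within one
program):

#include <chrono>
#include <iostream>
#include <thread>

// One task blocks on simulated input/output while the
// processor stays busy with another task's computation.
int main()
{
    std::thread io_bound( []{
        // Simulate waiting for a slow device.
        std::this_thread::sleep_for( std::chrono::milliseconds( 100 ) );
        std::cout << "I/O complete\n";
    } );

    // Meanwhile, the CPU is free to compute.
    long sum = 0;
    for ( long i = 1; i <= 1000000; i++ )
        sum += i;
    std::cout << "Sum = " << sum << "\n";

    io_bound.join();
    return 0;
}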

IBM planned for each model to be programmed using PL/1.[27] A committee was formed
that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a
language that was comprehensive, easy to use, extendible, and would replace COBOL
and Fortran.[27] The result was a large and complex language that took a long time
to compile.[28]

Switches for manual input on a Data General Nova 3, manufactured in the mid-1970s
Computers manufactured until the 1970s had front-panel switches for manual
programming.[29] The computer program was written on paper for reference. An
instruction was represented by a configuration of on/off settings. After setting
the configuration, an execute button was pressed. This process was then repeated.
Computer programs could also be loaded automatically from paper tape, punched
cards, or magnetic tape. After the medium was loaded, the starting address was set via
switches, and the execute button was pressed.[29]
Very Large Scale Integration

A VLSI integrated-circuit die


A major milestone in software development was the invention of the Very Large Scale
Integration (VLSI) circuit (1964). Robert Noyce, co-founder of Fairchild
Semiconductor (1957) and Intel (1968), achieved a technological improvement to
refine the production of field-effect transistors (1963).[30] The goal is to alter
the electrical resistivity and conductivity of a semiconductor junction. First,
naturally occurring silicate minerals are converted into polysilicon rods using the
Siemens process.[31] The Czochralski process then converts the rods into a
monocrystalline silicon boule crystal.[32] The crystal is then thinly sliced to
form a wafer substrate. The planar process of photolithography then integrates
unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a
matrix of metal–oxide–semiconductor (MOS) transistors.[33][34] The MOS transistor
is the primary component in integrated circuit chips.[30]

Originally, integrated circuit chips had their function set during manufacturing.
During the 1960s, controlling the electrical flow migrated to programming a matrix
of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses.
The process to embed instructions onto the matrix was to burn out the unneeded
connections. There were so many connections that firmware programmers wrote a
computer program on another chip to oversee the burning. The technology became known as
Programmable ROM. In 1971, Intel installed the computer program onto the chip and
named it the Intel 4004 microprocessor.[35]

IBM's System/360 (1964) CPU was not a microprocessor.


The terms microprocessor and central processing unit (CPU) are now used
interchangeably. However, CPUs predate microprocessors. For example, the IBM
System/360 (1964) had a CPU made from circuit boards containing discrete components
on ceramic substrates.[36]

x86 series

The original IBM Personal Computer (1981) used an Intel 8088 microprocessor.
In 1978, the modern software development environment began when Intel upgraded the
Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the
cheaper Intel 8088.[37] IBM embraced the Intel 8088 when it entered the personal
computer market (1981). As consumer demand for personal computers increased, so did
Intel's microprocessor development. This succession of developments is known as the
x86 series. The x86 assembly language is a family of backward-compatible machine
instructions. Machine instructions created in earlier microprocessors were retained
throughout microprocessor upgrades. This enabled consumers to purchase new
computers without having to purchase new application software. The major categories
of instructions are:[c]

Memory instructions to set and access numbers and strings in random-access memory.
Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic
operations on integers.
Floating point ALU instructions to perform the primary arithmetic operations on
real numbers.
Call stack instructions to push and pop words needed to allocate memory and
interface with functions.
Single instruction, multiple data (SIMD) instructions[d] to increase speed when
multiple processors are available to perform the same algorithm on an array of
data.
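
As a minimal sketch of the integer ALU category, this hypothetical example uses
extended inline assembly (an assumption: a GCC or Clang compiler targeting an
x86-64 machine) to execute a single addl instruction:

#include <cstdio>

// Executes one x86 integer-ALU instruction via GCC/Clang
// extended inline assembly (not portable to other compilers).
int main()
{
    int a = 40, b = 2;
    asm volatile ( "addl %1, %0"   // integer ALU instruction: a += b
                   : "+r" (a)      // a is read and written in a register
                   : "r" (b) );    // b is read from a register
    std::printf( "%d\n", a );      // prints 42
    return 0;
}
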
Changing programming environment

The DEC VT100 (1978) was a widely used computer terminal.


VLSI circuits enabled the programming environment to advance from a computer
terminal (in use until the 1990s) to a graphical user interface (GUI) computer. Computer
terminals limited programmers to a single shell running in a command-line
environment. During the 1970s, full-screen source code editing became possible
through a text-based user interface. Regardless of the technology available, the
goal is to program in a programming language.

Programming paradigms and languages


Programming language features exist to provide building blocks to be combined to
express programming ideals.[38] Ideally, a programming language should:[38]

express ideas directly in the code.
express independent ideas independently.
express relationships among ideas directly in the code.
combine ideas freely.
combine ideas only where combinations make sense.
express simple ideas simply.
The style in which a programming language provides these building blocks may be
categorized into programming paradigms.[39] For example, different paradigms may
differentiate:[39]

procedural languages, functional languages, and logical languages.
different levels of data abstraction.
different levels of class hierarchy.
different levels of input datatypes, as in container types and generic programming.
Each of these programming styles has contributed to the synthesis of different
programming languages.[39]

A programming language is a set of keywords, symbols, identifiers, and rules by
which programmers can communicate instructions to the computer.[40] They follow a
set of rules called a syntax.[40]

Keywords are reserved words to form declarations and statements.
Symbols are characters to form operations, assignments, control flow, and
delimiters.
Identifiers are words created by programmers to form constants, variable names,
structure names, and function names.
Syntax rules are defined in the Backus–Naur form.
Programming languages get their basis from formal languages.[41] The purpose of
defining a solution in terms of its formal language is to generate an algorithm to
solve the underlying problem.[41] An algorithm is a sequence of simple
instructions that solve a problem.[42]
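
For illustration, a classic example is Euclid's algorithm, a short sequence of
simple instructions that solves the greatest-common-divisor problem. A C++ sketch:

#include <iostream>

// Euclid's algorithm: repeatedly replace the pair (a, b) with
// (b, a mod b) until b is zero; a is then the greatest common divisor.
int gcd( int a, int b )
{
    while ( b != 0 )
    {
        int remainder = a % b;
        a = b;
        b = remainder;
    }
    return a;
}

int main()
{
    std::cout << gcd( 24576, 168 ) << "\n"; // prints 24
    return 0;
}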

Generations of programming language


Main article: Programming language generations

Machine language monitor on a W65C816S microprocessor


The evolution of programming languages began when the EDSAC (1949) used the first
stored computer program in its von Neumann architecture.[43] The EDSAC was
programmed in the first generation of programming language.[44]

The first generation of programming language is machine language.[45] Machine
language requires the programmer to enter instructions using instruction numbers
called machine code. For example, the ADD operation on the PDP-11 has instruction
number 24576.[e][46]
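
The number is easier to recognize in octal, the base used in PDP-11
documentation: 24576 decimal is 060000 octal, where the leading 06 field selects
the ADD operation and the zeroed fields select register R0 as both source and
destination. A one-line C++ check of the conversion:

#include <iostream>

// 24576 in octal is 060000 -- the PDP-11 encoding of ADD R0,R0.
int main()
{
    std::cout << std::oct << 24576 << "\n"; // prints 60000
    return 0;
}
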
The second generation of programming language is assembly language.[45] Assembly
language allows the programmer to use mnemonic instructions instead of remembering
instruction numbers. An assembler translates each assembly language mnemonic into
its machine language number. For example, on the PDP-11, the operation 24576 can be
referenced as ADD R0,R0 in the source code.[46] The four basic arithmetic
operations have assembly instructions like ADD, SUB, MUL, and DIV.[46] Computers
also have instructions like DW (Define Word) to reserve memory cells. Then the MOV
instruction can copy integers between registers and memory.
The basic structure of an assembly language statement is a label, operation,
operand, and comment.[47]
Labels allow the programmer to work with variable names. The assembler will later
translate labels into physical memory addresses.
Operations allow the programmer to work with mnemonics. The assembler will later
translate mnemonics into instruction numbers.
Operands tell the assembler which data the operation will process.
Comments allow the programmer to articulate a narrative because the instructions
alone are vague.
The key characteristic of an assembly language program is that it forms a
one-to-one mapping to its corresponding machine language target.[48]
The third generation of programming language uses compilers and interpreters to
execute computer programs. The distinguishing feature of a third generation
language is its independence from particular hardware.[49] Early languages include
Fortran (1958), COBOL (1959), ALGOL (1960), and BASIC (1964).[45] In 1973, the C
programming language emerged as a high-level language that produced efficient
machine language instructions.[50] Whereas third-generation languages historically
generated many machine instructions for each statement,[51] C has statements that
may generate a single machine instruction.[f] Moreover, an optimizing compiler
might overrule the programmer and produce fewer machine instructions than
statements. Today, an entire paradigm of languages fills the imperative, third-generation spectrum.
The fourth generation of programming language emphasizes what output results are
desired, rather than how programming statements should be constructed.[45]
Declarative languages attempt to limit side effects and allow programmers to write
code with relatively few errors.[45] One popular fourth generation language is
called Structured Query Language (SQL).[45] Database developers no longer need to
process each database record one at a time. Also, a simple select statement can
generate output records without having to understand how they are retrieved.
Imperative languages
Main article: Imperative programming

A computer program written in an imperative language


Imperative languages specify a sequential algorithm using declarations,
expressions, and statements:[52]

A declaration introduces a variable name to the computer program and assigns it to
a datatype[53] – for example: var x: integer;
An expression yields a value – for example: 2 + 2 yields 4
A statement might assign an expression to a variable or use the value of a variable
to alter the program's control flow – for example: x := 2 + 2; if x = 4 then
do_something();
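
The same three elements look like this in C++ (a sketch mirroring the
Pascal-style examples above; do_something is the text's placeholder name):

#include <iostream>

int main()
{
    int x;              // a declaration introduces x with a datatype
    x = 2 + 2;          // a statement assigns the expression 2 + 2 to x
    if ( x == 4 )       // a statement uses x to alter control flow
        std::cout << "do_something\n";
    return 0;
}
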
Fortran
FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system".
It was designed for scientific calculations, without string handling facilities.
Along with declarations, expressions, and statements, it supported:

arrays.
subroutines.
"do" loops.
It succeeded because:
programming and debugging costs were below computer running costs.
it was supported by IBM.
applications at the time were scientific.[54]
However, non-IBM vendors also wrote Fortran compilers with a syntax that would
likely fail IBM's compiler.[54] The American National Standards Institute (ANSI)
developed the first Fortran standard in 1966. Fortran 77, standardized in 1978,
remained the standard until 1991. Fortran 90 supports:

records.
pointers to arrays.
COBOL
COBOL (1959) stands for "COmmon Business Oriented Language". Fortran manipulated
symbols. It was soon realized that symbols did not need to be numbers, so strings
were introduced.[55] The US Department of Defense influenced COBOL's development,
with Grace Hopper being a major contributor. The statements were English-like and
verbose. The goal was to design a language so managers could read the programs.
However, the lack of structured statements hindered this goal.[56]

COBOL's development was tightly controlled, so dialects requiring ANSI
standardization did not emerge. As a consequence, the language was not changed for
15 years, until 1974. The 1990s version did make consequential changes, like
object-oriented programming.[56]

Algol
ALGOL (1960) stands for "ALGOrithmic Language". It had a profound influence on
programming language design.[57] Emerging from a committee of European and American
programming language experts, it used standard mathematical notation and had a
readable, structured design. Algol was the first language to define its syntax
using the Backus–Naur form.[57] This led to syntax-directed compilers. It added
features like:

block structure, where variables were local to their block.
arrays with variable bounds.
"for" loops.
functions.
recursion.[57]
Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one
branch. On another branch the descendants include C, C++ and Java.[57]
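
Several of these features pass unchanged into Algol's descendants. A short C++
sketch of block structure, a "for" loop, a function, and recursion:

#include <iostream>

// Recursion, introduced by Algol: the function calls itself.
int factorial( int n )
{
    if ( n <= 1 )
        return 1;
    return n * factorial( n - 1 );
}

int main()
{
    for ( int i = 1; i <= 5; i++ )    // an Algol-style "for" loop
    {
        int f = factorial( i );        // block structure: f is local to the block
        std::cout << i << "! = " << f << "\n";
    }
    return 0;
}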

Basic
BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was
developed at Dartmouth College for all of its students to learn.[8] If a student
did not go on to a more powerful language, the student would still remember Basic.
[8] A Basic interpreter was installed in the microcomputers manufactured in the
late 1970s. As the microcomputer industry grew, so did the language.[8]

Basic pioneered the interactive session.[8] It offered operating system commands
within its environment:

The 'new' command created an empty slate.
Statements were evaluated immediately.
Statements could be programmed by preceding them with line numbers.[g]
The 'list' command displayed the program.
The 'run' command executed the program.
However, the Basic syntax was too simple for large programs.[8] Recent dialects
added structure and object-oriented extensions. Microsoft's Visual Basic is still
widely used and produces a graphical user interface.[7]

C
The C programming language (1973) got its name because the language BCPL was
replaced with B, and AT&T Bell Labs called the next version "C". Its purpose was to write
the UNIX operating system.[50] C is a relatively small language, making it easy to
write compilers. Its growth mirrored the hardware growth in the 1980s.[50] It also
grew because it has the facilities of assembly language but uses a high-level
syntax. It added advanced features like:

inline assembler.
arithmetic on pointers.
pointers to functions.
bit operations.
freely combining complex operators.[50]
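
Three of these facilities (arithmetic on pointers, pointers to functions, and bit
operations) fit in a short sketch; the C-style code below also compiles as C++:

#include <stdio.h>

int twice( int n ) { return n * 2; }

int main( void )
{
    // Arithmetic on pointers: *(p + 2) is the same cell as array[2].
    int array[] = { 10, 20, 30 };
    int *p = array;
    printf( "%d\n", *( p + 2 ) );      // prints 30

    // Pointers to functions: call twice() through a pointer.
    int (*f)( int ) = twice;
    printf( "%d\n", f( 21 ) );         // prints 42

    // Bit operations: shifting left one bit multiplies by two.
    printf( "%d\n", 21 << 1 );         // prints 42
    return 0;
}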

Computer memory map


C allows the programmer to control the region of memory in which data is to be
stored. Global variables and static variables require the fewest clock cycles to
store. The stack is automatically used for the standard variable declarations. Heap
memory is returned to a pointer variable from the malloc() function.

The global and static data region is located just above the program region. (The
program region is technically called the text region. It is where machine
instructions are stored.)
The global and static data region is technically two regions.[58] One region is
called the initialized data segment, where variables declared with default values
are stored. The other region is called the block started by symbol (BSS) segment,
where variables declared without default values are stored.
Variables stored in the global and static data region have their addresses set at
compile time. They retain their values throughout the life of the process.
The global and static region stores the global variables that are declared on top
of (outside) the main() function.[59] Global variables are visible to main() and
every other function in the source code.
On the other hand, variables declared inside main(), other functions, or within
{ } block delimiters are local variables. Local variables also include formal
parameter variables. Parameter variables are enclosed within the parentheses of a
function definition.[60] Parameters provide an interface to the function.
Local variables declared using the static prefix are also stored in the global and
static data region.[58] Unlike global variables, static variables are only visible
within the function or block. Static variables always retain their value. An
example usage would be the function:[h]

int increment_counter()
{
    static int counter = 0;
    counter++;
    return counter;
}

The stack region is a contiguous block of memory located near the top memory
address.[61] Variables placed in the stack are populated from top to bottom.[i][61]
A stack pointer is a special-purpose register that keeps track of the last memory
address populated.[61] Variables are placed into the stack via the assembly
language PUSH instruction. Therefore, the addresses of these variables are set
during runtime. Stack variables lose their scope via the POP instruction.
Local variables declared without the static prefix, including formal parameter
variables,[62] are called automatic variables[59] and are stored in the stack.[58]
They are visible inside the function or block and lose their scope upon exiting the
function or block.
The heap region is located below the stack.[58] It is populated from the bottom to
the top. The operating system manages the heap using a heap pointer and a list of
allocated memory blocks.[63] Like the stack, the addresses of heap variables are
set during runtime. An out-of-memory error occurs when the heap pointer and the
stack pointer meet.
C provides the malloc() library function to allocate heap memory.[j][64] Populating
the heap with data is an additional copy function.[k] Variables stored in the heap
are economically passed to functions using pointers. Without pointers, the entire
block of data would have to be passed to the function via the stack.
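
A short sketch ties this together: malloc() allocates a heap block, a copy step
populates it, and a pointer (rather than the whole block) travels through the
stack to a function:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Only the pointer travels through the stack;
// the data itself stays in the heap.
void print_block( const char *block )
{
    printf( "%s\n", block );
}

int main( void )
{
    // malloc() returns a pointer to a new heap block.
    char *block = (char *) malloc( 16 );
    if ( block == NULL )
        return 1;

    // Populating the heap is an additional copy step.
    strcpy( block, "heap data" );

    print_block( block );   // pass the pointer, not the data
    free( block );          // return the block to the heap
    return 0;
}
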
C++
In the 1970s, software engineers needed language support to break large projects
down into modules.[65] One obvious feature was to decompose large projects
physically into separate files. A less obvious feature was to decompose large
projects logically into abstract data types.[65] At the time, languages supported
concrete (scalar) datatypes like integer numbers, floating-point numbers, and
strings of characters. Abstract datatypes are structures of concrete datatypes,
with a new name assigned. For example, a list of integers could be called
integer_list.
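
A sketch of such an abstract datatype, using the hypothetical name integer_list
from the example above:

#include <stdio.h>

// An abstract datatype: concrete datatypes grouped under a new name.
typedef struct {
    int items[10];   // the elements
    int count;       // how many elements are in use
} integer_list;

int main( void )
{
    integer_list list = { { 3, 1, 4 }, 3 };
    for ( int i = 0; i < list.count; i++ )
        printf( "%d\n", list.items[i] );
    return 0;
}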

In object-oriented jargon, abstract datatypes are called classes. However, a class
is only a definition; no memory is allocated. When memory is allocated to a class
and bound to an identifier, it is called an object.[66]

Object-oriented imperative languages developed by combining the need for classes
and the need for safe functional programming.[67] A function, in an object-oriented
language, is assigned to a class. An assigned function is then referred to as a
method, member function, or operation. Object-oriented programming is executing
operations on objects.[68]

Object-oriented languages support a syntax to model subset/superset relationships.
In set theory, an element of a subset inherits all the attributes contained in the
superset. For example, a student is a person. Therefore, the set of students is a
subset of the set of persons. As a result, students inherit all the attributes
common to all persons. Additionally, students have unique attributes that other
common to all persons. Additionally, students have unique attributes that other
people do not have. Object-oriented languages model subset/superset relationships
using inheritance.[69] Object-oriented programming became the dominant language
paradigm by the late 1990s.[65]

C++ (1985) was originally called "C with Classes".[70] It was designed to expand
C's capabilities by adding the object-oriented facilities of the language Simula.[71]

An object-oriented module is composed of two files. The definitions file is called
the header file. Here is a C++ header file for the GRADE class in a simple school
application:

// grade.h
// -------

// Used to allow multiple source files to include
// this header file without duplication errors.
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H

class GRADE {
public:
    // This is the constructor operation.
    // ----------------------------------
    GRADE( const char letter );

    // This is a class variable.
    // -------------------------
    char letter;

    // This is a member operation.
    // ---------------------------
    int grade_numeric( const char letter );

    // This is a class variable.
    // -------------------------
    int numeric;
};
#endif
A constructor operation is a function with the same name as the class name.[72] It
is executed when the calling operation executes the new statement.

A module's other file is the source file. Here is a C++ source file for the GRADE
class in a simple school application:

// grade.cpp
// ---------
#include "grade.h"

GRADE::GRADE( const char letter )
{
    // Reference the object using the keyword 'this'.
    // ----------------------------------------------
    this->letter = letter;

    // This is temporal cohesion: the numeric grade is
    // computed at the same time the letter is stored.
    // ------------------------------------------------
    this->numeric = grade_numeric( letter );
}

int GRADE::grade_numeric( const char letter )
{
    if ( letter == 'A' || letter == 'a' )
        return 4;
    else if ( letter == 'B' || letter == 'b' )
        return 3;
    else if ( letter == 'C' || letter == 'c' )
        return 2;
    else if ( letter == 'D' || letter == 'd' )
        return 1;
    else if ( letter == 'F' || letter == 'f' )
        return 0;
    else
        return -1;
}
Here is a C++ header file for the PERSON class in a simple school application:

// person.h
// --------
#ifndef PERSON_H
#define PERSON_H

class PERSON {
public:
    PERSON( const char *name );
    const char *name;
};
#endif
Here is a C++ source file for the PERSON class in a simple school application:

// person.cpp
// ----------
#include "person.h"

PERSON::PERSON( const char *name )
{
    this->name = name;
}
Here is a C++ header file for the STUDENT class in a simple school application:

// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H

#include "person.h"
#include "grade.h"

// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON {
public:
    STUDENT( const char *name );
    GRADE *grade;
};
#endif
Here is a C++ source file for the STUDENT class in a simple school application:

// student.cpp
// -----------
#include "student.h"
#include "person.h"

STUDENT::STUDENT( const char *name ):
    // Execute the constructor of the PERSON superclass.
    // -------------------------------------------------
    PERSON( name )
{
    // Nothing else to do.
    // -------------------
}
Here is a driver program for demonstration:

// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"

int main( void )
{
    STUDENT *student = new STUDENT( "The Student" );
    student->grade = new GRADE( 'a' );

    std::cout
        // Notice student inherits PERSON's name.
        << student->name
        << ": Numeric grade = "
        << student->grade->numeric
        << "\n";
    return 0;
}