Computer Fundamental and Programming LAB
Adamson University
College of Engineering
Computer Engineering Department
What is a Computer?
- A computer is any machine that can be programmed to carry out sequences of arithmetic and logical operations automatically (a short example in C follows).
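As a rough sketch (the program and the values in it are our own illustration, not part of any standard), the short C program below shows such a stored sequence of arithmetic and logical instructions being carried out:

#include <stdio.h>

int main(void)
{
    int a = 6, b = 7;                  /* data held in memory        */
    int product = a * b;               /* an arithmetic instruction  */
    int is_even = (product % 2 == 0);  /* a logical test             */

    printf("%d x %d = %d (%s)\n", a, b, product, is_even ? "even" : "odd");
    return 0;
}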
HISTORY OF COMPUTERS
THE MECHANICAL ERA (1623-1945)
The idea of using machines to solve mathematical problems can be traced at least
as far as the early 17th century. Charles Babbage's Difference Engine, begun in 1823 but never completed, was an early special-purpose calculating machine. A more ambitious machine was the Analytical Engine, designed in the 1830s and 1840s; it is generally regarded as the first design for a multi-purpose, i.e. programmable, computing device, but unfortunately it too was only partially completed by Babbage. Many historians think the major reason he was unable to
complete these projects was the fact that the technology of the day was not reliable
enough. In spite of never building a complete working machine, Babbage and his
colleagues, most notably Ada, Countess of Lovelace, recognized several important
programming techniques, including conditional branches, iterative loops and index
variables.
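For a modern reader, the three techniques credited to Babbage and Lovelace correspond directly to constructs in today's programming languages. The C fragment below is only an illustrative sketch of ours (a small table of squares), showing an iterative loop, an index variable, and a conditional branch:

#include <stdio.h>

int main(void)
{
    int squares[10];

    /* iterative loop with an index variable i */
    for (int i = 0; i < 10; i++) {
        squares[i] = i * i;
    }

    /* conditional branch: print only the even squares */
    for (int i = 0; i < 10; i++) {
        if (squares[i] % 2 == 0) {
            printf("%d squared is %d\n", i, squares[i]);
        }
    }
    return 0;
}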
A machine inspired by Babbage's design was arguably the first to be used in
computational science. George Scheutz read of the difference engine in 1833, and
along with his son Edvard Scheutz began work on a smaller version. By 1853 they
had constructed a machine that could process 15-digit numbers and calculate fourth-
order differences. Their machine won a gold medal at the Exhibition of Paris in 1855,
and later they sold it to the Dudley Observatory in Albany, New York, which used it
to calculate the orbit of Mars. One of the first commercial uses of mechanical
computers was by the US Census Bureau, which used punch-card equipment
designed by Herman Hollerith to tabulate data for the 1890 census. In 1911
Hollerith's company merged with a competitor to found the corporation which in
1924 became International Business Machines.
FIRST GENERATION ELECTRONIC COMPUTERS (1937-1953)
Three machines have been promoted at various times as the first electronic
computers. These machines used electronic switches, in the form of vacuum tubes,
instead of electromechanical relays.
The earliest attempt to build an electronic computer was by J. V. Atanasoff, a
professor of physics and mathematics at Iowa State, in 1937. Atanasoff set out to
build a machine that would help his graduate students solve systems of partial
differential equations. By 1941 he and graduate student Clifford Berry had succeeded
in building a machine that could solve 29 simultaneous equations with 29 unknowns.
However, the machine was not programmable, and was more of an electronic
calculator.
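To give a sense of what solving simultaneous equations involves, the sketch below applies naive Gaussian elimination to a small three-equation system in C. It is a classroom illustration only (fixed size, no pivoting, our own example system), not a description of how the Atanasoff-Berry machine actually worked:

#include <stdio.h>

#define N 3

int main(void)
{
    /* augmented matrix for:  2x + y - z = 8,  -3x - y + 2z = -11,  -2x + y + 2z = -3 */
    double m[N][N + 1] = {
        { 2.0,  1.0, -1.0,   8.0},
        {-3.0, -1.0,  2.0, -11.0},
        {-2.0,  1.0,  2.0,  -3.0}
    };
    double x[N];

    /* forward elimination: zero out everything below the diagonal */
    for (int k = 0; k < N; k++) {
        for (int i = k + 1; i < N; i++) {
            double factor = m[i][k] / m[k][k];
            for (int j = k; j <= N; j++)
                m[i][j] -= factor * m[k][j];
        }
    }

    /* back substitution: solve for the unknowns from the last row upward */
    for (int i = N - 1; i >= 0; i--) {
        double sum = m[i][N];
        for (int j = i + 1; j < N; j++)
            sum -= m[i][j] * x[j];
        x[i] = sum / m[i][i];
    }

    for (int i = 0; i < N; i++)
        printf("x%d = %g\n", i, x[i]);
    return 0;
}

With 29 unknowns instead of 3 the same elimination idea applies, which is why the machine had to store and update so many intermediate values.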
A second early electronic machine was Colossus, built for the British military in 1943 by the codebreaking operation at Bletchley Park, to which Alan Turing contributed. This machine played an important role in breaking codes used by the German military in World War II. Turing's main contribution to the field of
computer science was the idea of the Turing machine, a mathematical formalism
widely used in the study of computable functions. The existence of Colossus was kept
secret until long after the war ended, and the credit due to its designers for building one of the first working electronic computers was slow in coming.
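To make the idea of a Turing machine concrete, here is a minimal simulator in C. The machine, tape contents, and rules are a toy example of ours (a single working state that flips every bit and halts at the blank symbol), not anything taken from Turing's papers:

#include <stdio.h>

/* A minimal Turing machine: one working state that flips bits,
   moving right until it reaches the blank symbol '_', then halts. */
int main(void)
{
    char tape[] = "1011_";   /* '_' marks the end of the input */
    int  head  = 0;          /* tape head position             */
    int  state = 0;          /* 0 = scanning, 1 = halted       */

    while (state == 0) {
        char symbol = tape[head];
        if (symbol == '0') {        /* rule: (scan, 0) -> write 1, move right */
            tape[head] = '1';
            head++;
        } else if (symbol == '1') { /* rule: (scan, 1) -> write 0, move right */
            tape[head] = '0';
            head++;
        } else {                    /* rule: (scan, blank) -> halt            */
            state = 1;
        }
    }

    printf("final tape: %s\n", tape);   /* prints 0100_ */
    return 0;
}

A full Turing machine is defined by a complete transition table over states and symbols; the simulator above hard-codes a table with only three rules to keep the idea visible.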
The first general purpose programmable electronic computer was the Electronic
Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John V.
Mauchly at the University of Pennsylvania. Work began in 1943 and was completed
in 1945.
SECOND GENERATION (1954-1962)
Electronic switches in this era were based on discrete diode and transistor
technology with a switching time of approximately 0.3 microseconds. The first
machines to be built with this technology include TRADIC at Bell Laboratories in
1954 and TX-0 at MIT's Lincoln Laboratory. Memory technology was based on
magnetic cores which could be accessed in random order, as opposed to mercury
delay lines, in which data was stored as an acoustic wave that passed sequentially
through the medium and could be accessed only when the data moved by the I/O
interface.
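The contrast between the two memory technologies is essentially random versus sequential access. The C sketch below is only a loose software analogy of ours: indexing directly into an array stands in for core memory, while walking a linked list node by node stands in for a delay line:

#include <stdio.h>
#include <stdlib.h>

struct node {            /* sequential medium: each element is only reachable */
    int value;           /* by passing through the ones before it             */
    struct node *next;
};

int main(void)
{
    int core[8] = {10, 11, 12, 13, 14, 15, 16, 17};

    /* "random access": element 5 is reached in one step, like a core memory */
    printf("array[5] = %d\n", core[5]);

    /* build a short linked list 10 -> 11 -> ... -> 17 */
    struct node *head = NULL, *tail = NULL;
    for (int i = 0; i < 8; i++) {
        struct node *n = malloc(sizeof *n);
        n->value = core[i];
        n->next  = NULL;
        if (tail) tail->next = n; else head = n;
        tail = n;
    }

    /* "sequential access": we must step past five nodes first,
       like waiting for data to circulate through a delay line
       (cleanup of the list is omitted for brevity) */
    struct node *p = head;
    for (int i = 0; i < 5; i++)
        p = p->next;
    printf("list element 5 = %d\n", p->value);

    return 0;
}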
Important innovations in computer architecture included index registers for
controlling loops and floating point units for calculations based on real numbers.
During this second generation many high level programming languages were
introduced, including FORTRAN (1956), ALGOL (1958), and COBOL (1959).
Important commercial machines of this era include the IBM 704 and its successors,
the 709 and 7094. The latter introduced I/O processors for better throughput between
I/O devices and main memory.
THIRD GENERATION (1963-1972)
The third generation brought huge gains in computational power. Innovations in
this era include the use of integrated circuits, or ICs (semiconductor devices with
several transistors built into one physical component), semiconductor memories
starting to be used instead of magnetic cores, microprogramming as a technique for
efficiently designing complex processors, the coming of age of pipelining and other
forms of parallel processing, and the introduction of operating systems and time-sharing.
FOURTH GENERATION (1972-1984)
The next generation of computer systems saw the use of large scale integration
(LSI - 1000 devices per chip) and very large scale integration (VLSI - 100,000
devices per chip) in the construction of computing elements. At this scale an entire processor fits onto a single chip, and for simple systems the entire computer (processor, main memory, and I/O controllers) can fit on one chip.
PARTS OF A COMPUTER
The main components that make up a typical, present-day computer system include:
1. MOTHERBOARD - a printed circuit board and foundation of a computer that is the
biggest board in a computer chassis. It allocates power and allows communication to
and between the CPU, RAM, and all other computer hardware components.
2. CENTRAL PROCESSING UNIT (CPU) - the unit which performs most of the
processing inside a computer. It processes all instructions received by software
running on the PC and by other hardware components, and acts as a powerful
calculator.
3. STORAGE - Modern computers use either a Hard Disk Drive (HDD) or a Solid-State
Drive (SSD). HDDs are made of an actual disk onto which data is stored. The disk is
read by a mechanical arm. SSDs have no moving parts and are faster than a hard
drive, because no time is spent waiting for a mechanical arm to find data on a
physical location on the disk.
4. PEOPLE - They are the ultimate “users” of the computer system. Three types of people interact with the system: programmers, system analysts, and end users.
5. PROCEDURES – These are sets of instructions, written in code, that tell a computer how to perform a task, run software, do calculations, and so on (a short example in C follows this list).
6. DATA - These are the raw facts and figures that we input into the computer.
The data gets processed via the computer system and becomes information, which is
processed and organized data. Information can then be used for decision-making
purposes.
7. CONNECTIVITY - This is when computers are linked to a network. It allows them to share information, files, and other resources. Computers can connect to a network via LAN cables, Bluetooth, Wi-Fi, satellite links, and so on. The internet is one example of connectivity in a computer system.
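As noted under PROCEDURES and DATA above, a procedure turns raw data into information. The short C sketch below (the quiz scores are invented for illustration) averages a set of raw scores to produce a figure an end user can act on:

#include <stdio.h>

/* a "procedure": coded instructions that tell the computer
   how to turn raw scores (data) into an average (information) */
double average(const int scores[], int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++)
        sum += scores[i];
    return (double)sum / count;
}

int main(void)
{
    int quiz[] = {78, 85, 90, 67, 88};          /* raw data entered by a user */
    int n = sizeof quiz / sizeof quiz[0];

    printf("class average: %.2f\n", average(quiz, n));  /* information */
    return 0;
}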
REFERENCES:
(1) https://web.itu.edu.tr/~gerzeli/History.htm
(2) https://www.livescience.com/20718-computer-history.html
(3) https://www.idtech.com/blog/parts-of-a-computer
(4) https://www.toppr.com/guides/accountancy/application-of-computers-in-accounting/meaning-and-elements-of-computer-system/