ProEl14 - Module 1 The Computer - 2024
B. Mechanical Age
- The mechanical age is when we first start to see connections between our current technology and its
ancestors.
- The mechanical age can be defined as the time between 1450 and 1840.
- A lot of new technologies were developed in this era due to an explosion of interest in computation
and information.
→ The “Calculating Clock,” the first mechanical (gear-driven) calculating machine, was
invented in 1623 by Wilhelm Schickard, a German scientist. It could work with six digits
and could carry digits across columns.
→ In 1801 a French weaver and merchant, Joseph-Marie Jacquard, developed an automatic
weaving loom controlled by punched cards (as a form of instruction).
→ Jacquard’s loom simplified the process of manufacturing textiles with complex patterns. The
loom was controlled by a “chain of cards”: a number of punched cards laced together into a
continuous sequence.
Image 5. The Difference Engine (left) and the Analytical Engine (right).
→ Both the Difference Engine and the Analytical Engine were proposed by Charles Babbage, an
English mathematician and inventor who is credited with having conceived the first automatic
digital computer. He is called the “Father of the Computer.”
→ The Difference Engine was designed to solve polynomial equations, was to be powered by
steam, and is considered the first modern computer design.
→ The Analytical Engine was a general-purpose machine which is a precursor to the computer
that we know now.
→ Lady Ada Augusta, “Countess of Lovelace,” is considered to have written the instructions for the
first computer program in 1842 by translating an article on Babbage’s Analytical Engine and
adding her own notes and ideas about the machine. She is considered the first computer
programmer, and the programming language “Ada” was named in her honor.
C. Electro-Mechanical Age
- The electro-mechanical age heralded the beginnings of telecommunications as we know it today.
- Time between 1840 and 1940.
- Several revolutionary technologies were invented in this period such as Morse code, telephones,
radio, etc.
- All these technologies were crucial stepping stones toward modern information technology
systems.
• Boolean Algebra
→ Boolean algebra is a symbolic system of mathematical logic that represents relationships
between entities – either ideas or objects.
→ George Boole, an English mathematician, helped establish modern symbolic logic; his
algebra of logic, now called Boolean algebra, is the basis for the design of digital computer
circuits.
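Boole’s two-valued logic can be sketched in a few lines of Python (an illustrative example, not part of the original notes): every variable is either True (1) or False (0), and values are combined with AND, OR, and NOT.

```python
# Illustrative sketch of Boolean algebra: the two values and three basic
# operations from which digital circuits are built.

def truth_table():
    """Return rows of (A, B, A AND B, A OR B, NOT A) for all input pairs."""
    rows = []
    for a in (False, True):
        for b in (False, True):
            rows.append((a, b, a and b, a or b, not a))
    return rows

for row in truth_table():
    print(row)
```

Every digital circuit, from a single logic gate to a full processor, evaluates expressions of exactly this kind.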
D. Electronic Age
- These machines used electronic switches, in the form of vacuum tubes, instead of the
electromechanical relays of the previous era.
- In principle the electronic switches would be more reliable since they would have no moving parts
that would wear out, but the technology was still new at that time and the tubes were comparable
to relays in reliability.
- The major benefit of electronic switches was that they could “open” and “close” thousands of
times faster than relays.
- Generation - the state of improvement in the development of a product. This term is also used in
the different advancements of computer technology.
1. First Generation
- Used vacuum tubes.
- Audion vacuum tubes were invented by Lee de Forest in 1906. His vacuum tube provided an
electrically controlled switch.
- Very large machines that needed special air-conditioned rooms to house them and specially
trained technicians to run and maintain them.
- The first-generation computers were unreliable.
- Machine language was used for programming, which made these computers difficult to
program and use.
- The Bombe and Colossus were code-breaking machines created by the British during World
War II. The Bombe was used to break the Enigma code and the Colossus was created to break
the lesser known “Fish” transmissions which were sent in binary code resembling the binary
code used inside present-day computers.
- Alan Turing developed the Bombe. His work on the Turing Machine (also known as the
Universal Machine) was a theoretical precursor to the modern computer.
- Harvard Mark I (official name: Automatic Sequence Controlled Calculator) was developed
from 1939 to 1944 by Howard Aiken at Harvard University and was built as a partnership
between Harvard and IBM. It is the first programmable digital computer made in the U.S.;
data was entered into the computer using paper tape.
- ABC (Atanasoff-Berry Computer) is the first all-electronic computer, created by John
Atanasoff and Clifford Berry in 1942. It featured about 300 vacuum tubes for control and
arithmetic calculations, the use of binary numbers, logic operations (instead of direct counting),
capacitor-based memory, and punched cards as input/output units.
- ENIAC (Electronic Numerical Integrator and Computer) was built at the University of
Pennsylvania between 1943 and 1945 by two professors, John Mauchly and the 24-year-old J.
Presper Eckert. It was commissioned by the U.S. Department of Defense and delivered in 1946.
- The ENIAC measured 30 x 30 feet and weighed about 30 tons. It was powered by 18,000
vacuum tubes and could compute a ballistic firing trajectory in 20 seconds, a task that took
many hours by conventional methods.
- It was built to replace the human “computers”: the term given to the women who were
employed to calculate the firing tables for the army’s artillery guns.
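The binary code mentioned above in connection with the Colossus is the same base-2 positional notation that modern computers use internally. A quick Python illustration (not part of the original notes):

```python
# Each binary digit (bit) is a power of two: 1101 = 8 + 4 + 0 + 1 = 13.

def to_binary(n):
    """Return the binary-digit string for a non-negative integer."""
    return format(n, "b")

def from_binary(bits):
    """Convert a string of binary digits back to an integer."""
    return int(bits, 2)

print(to_binary(13))        # → 1101
print(from_binary("1101"))  # → 13
```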
2. Second Generation
- Made use of transistors, invented at Bell Telephone Laboratories, and had many of the
same components as modern-day computers.
Image 12. A replica of the first transistor (left) and the modern transistors (right).
- A transistor is used to regulate the flow of an electrical current and to switch electricity on
and off.
- Second-generation computers typically have a printer, some sort of tape or disk storage,
operating systems, stored programs, as well as some sort of memory.
- Smaller, faster, and more reliable: they used transistors, performed 6,000 to 300,000
operations per second, and had main memory of 6 kilobytes to 1.3 megabytes.
- Became common in larger businesses and universities.
3. Third Generation
- The 3rd Generation computers replaced transistors with “integrated circuits” (ICs),
invented by Jack Kilby of Texas Instruments in 1958.
- Integrated Circuits (IC) are transistors, resistors, and capacitors integrated into a single chip.
- The 3rd generation computers using integrated circuits proved to be highly reliable, relatively
inexpensive, and faster. Less human labor was required at the assembly stage.
4. Fourth Generation
- The microprocessor brought the fourth generation of computers, as thousands of integrated
circuits were built onto a single silicon chip.
- Fourth-generation computers are smaller, faster, more reliable, and lower in price. They are
the size of a television or much smaller. They also cost one-tenth, or less, the amount of third-
generation machines, which made them very common in homes and businesses.
- The Apple Computer 1, originally released as the Apple Computer and known later
as the Apple I, or Apple-1, is a desktop computer released by the Apple Computer Company
in 1976. It was designed by Steve Wozniak; the idea of selling the computer came from
Wozniak’s friend and co-founder Steve Jobs.
5. Fifth Generation
- Fifth Generation computers are based on parallel processing hardware and AI (Artificial
Intelligence) software.
- Artificial Intelligence (AI) is an emerging branch of computer science concerned with the
means and methods of making computers think like human beings.
- High-level languages such as C, C++, Java, and .NET are used in this generation.
A. HARDWARE
➢ The physical devices that make up the computer. It is any part of the computer you can touch.
➢ A computer’s hardware consists of interconnected electronic devices that you can use to control
the computer’s operation, input, and output.
1.INPUT DEVICES
➢ These are devices that accept data and instructions from the user or from another computer
system (such as a computer on the Internet). Input devices translate data from a form that humans
understand to one that the computer can work with.
• Keyboard - the most common input device which accepts letters, numbers, and commands from
the user.
2.OUTPUT DEVICES
➢ An output device is any piece of computer hardware used to communicate the results
of data processing carried out by an information processing system (such as a computer),
converting the electronically generated information into human-readable form.
• Monitor (Display Screen) - the computer sends output to the monitor when the user needs only
to see the output.
a) CRT (cathode ray tube) - A vacuum tube containing one or more electron guns, and a
fluorescent screen used to view images.
b) LCD (liquid-crystal display) - A flat panel display, electronic visual display, or video
display that uses the light-modulating properties of liquid crystals. Liquid crystals do not emit
light directly.
c) LED (light emitting diode) - A flat panel display, which uses an array of light-emitting
diodes as a video display.
• Printer - A device that makes a printed copy (also called hard copy) of your work on a sheet of
paper.
• Speaker – connected to the computer so that you can hear music and other sounds.
• Projector - also called digital light projectors and video projectors. Images can be shown directly
from the computer's disk and displayed on the PC's screen or projected on a wall or large screen.
A data projector plugs into one of the computer’s ports and then projects the video output onto an
external surface.
B. SYSTEM UNIT
➢ It is the most important piece of computer hardware.
➢ It is a set of electronic components of a computer used to process data.
➢ The system unit has slots in the front for information storage devices such as floppy disks and
CD-ROMs, and buttons to turn the computer on and off.
➢ In the back it has ports where you can attach the other pieces of computer hardware with
cables.
Types of Software
• System Software – it is responsible for controlling, integrating, and managing the individual
hardware components of a computer system so that other software and the users of the system see
it as a functional unit without having to be concerned with the low-level details such as transferring
data from memory to disk, or rendering text onto a display.
- Consists of the following:
a. Operating system
b. Some fundamental utilities such as disk formatters, file managers, display managers, text
editors, user authentication (login) and management tools, and networking and device control
software.
• Application Software – it is used to accomplish specific tasks other than just running the computer
system.
- Application software may consist of a single program, such as an image viewer; a small
collection of programs that work closely together to accomplish a task, such as a spreadsheet or
text processing system; a larger collection of related but independent programs and packages
that have a common user interface or shared data format which consists of closely integrated
word processor, spreadsheet, database, etc.; or a software system, such as a database
management system, which is a collection of fundamental programs that may provide some
service to a variety of other independent applications.
D. MEMORY
• Primary Memory
1.Random Access Memory (RAM)
- Responsible for storing data on a temporary basis, so that it can be promptly accessed by the
processor as and when needed.
- Lost when the computer is turned off.
- “volatile” memory
- “Random access” means the processor can read or write any location in RAM directly, in
about the same amount of time, regardless of where the data is stored.
- Some Types of RAM:
a. Dynamic RAM (DRAM): “dynamic” means changing, which for memory is not necessarily a
good thing, so dynamic memory must be continually refreshed.
b. Synchronous DRAM (SDRAM): the memory updates and the clock are coordinated (“in sync”).
c. Static RAM (SRAM): doesn’t need constant refreshing, and is faster but more expensive than
dynamic RAM.
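The practical difference between volatile RAM and non-volatile storage (such as a disk file) can be sketched as follows. This is a hypothetical Python example, not part of the original notes; the file name is made up for illustration.

```python
import os
import tempfile

# Data in a variable lives in RAM: it vanishes when the program (or power) stops.
data = [1, 2, 3]

# Data written to secondary storage survives the program's exit and can be
# read back by a later run.
path = os.path.join(tempfile.gettempdir(), "demo_persistent.txt")
with open(path, "w") as f:
    f.write(",".join(map(str, data)))

with open(path) as f:
    restored = [int(x) for x in f.read().split(",")]

print(restored)      # → [1, 2, 3]
os.remove(path)      # clean up the demo file
```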
• Secondary Memory
→ Also referred to as auxiliary memory
→ Non-volatile
→ External
→ Used to store program and data files permanently.
1.Hard Drive (HD)
- A hard disk is part of a unit, often called a “disk drive,” “hard drive,” or “hard disk
drive,” that stores and provides relatively quick access to large amounts of data on an
electromagnetically charged surface or set of surfaces.
3.Optical Disk
- A disk drive that uses laser light as part of the process of reading or writing data to or from
optical discs.
- Some drives can only read from discs, but recent drives are commonly both readers and
recorders, also called burners or writers.
- Compact discs, DVDs, and Blu-ray discs are common types of optical media that can be read
and recorded by such drives. Optical drive is the generic name; drives are usually described as
“CD,” “DVD,” or “Blu-ray,” followed by “drive,” “writer,” etc.
4.Flash Disk
- A storage module made of flash memory chips.
- Flash disks have no mechanical platters or access arms, but the term “disk” is used
because the data are accessed as if they were on a hard drive.
- The disk storage structure is emulated.
• CACHE
→ pronounced “cash”
→ Invisible to the OS
→ Interacts with other memory management hardware.
→ The processor must access memory at least once per instruction cycle.
→ It is a small unit of fast memory built into the processor to improve performance.
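Why a cache improves performance can be illustrated with a toy simulation (a simplified, hypothetical model; real processor caches are far more complex). A small direct-mapped cache counts hits and misses for a stream of memory addresses: nearby, repeated accesses mostly hit, while widely scattered accesses mostly miss.

```python
# Toy direct-mapped cache simulation (illustrative only).

def simulate_cache(addresses, num_lines=4, line_size=4):
    """Return (hits, misses) for a direct-mapped cache over an address stream."""
    lines = [None] * num_lines          # each line remembers which block it holds
    hits = misses = 0
    for addr in addresses:
        block = addr // line_size       # which memory block the address falls in
        index = block % num_lines       # which cache line that block maps to
        if lines[index] == block:
            hits += 1
        else:
            misses += 1
            lines[index] = block        # load the block into the cache
    return hits, misses

# Sequential access reuses each loaded block: mostly hits.
print(simulate_cache(range(16)))          # → (12, 4)
# Widely strided access never reuses a loaded block: all misses.
print(simulate_cache(range(0, 256, 16)))  # → (0, 16)
```

This is why the fast memory built into the processor pays off: typical programs access nearby addresses repeatedly, so most reads are served from the cache instead of the much slower main memory.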