
The Five Generations of Computers

The history of computer development is often described in terms of the different generations of
computing devices. Each generation is characterized by a major technological development that
fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper,
more powerful, more efficient and more reliable devices.

First Generation (1940-1956) Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were
often enormous, taking up entire rooms. They were very expensive to operate and in addition to
using a great deal of electricity, generated a lot of heat, which was often the cause of
malfunctions.

First generation computers relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve one problem at a
time. Input was based on punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The
UNIVAC was the first commercially produced computer; the first unit was delivered to the U.S.
Census Bureau in 1951.

Second Generation (1956-1963) Transistors


Transistors replaced vacuum tubes and ushered in the second generation of computers. The
transistor was invented in 1947 but did not see widespread use in computers until the late 1950s.
The transistor was far superior to the vacuum tube, allowing computers to become smaller,
faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors.
Though the transistor still generated a great deal of heat, which could damage the computer, it
was a vast improvement over the vacuum tube. Second-generation computers still relied on
punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or
assembly, languages, which allowed programmers to specify instructions in words. High-level
programming languages were also being developed at this time, such as early versions of
COBOL and FORTRAN. These were also the first computers that stored their instructions in
their memory, which moved from a magnetic drum to magnetic core technology.

The first computers of this generation were developed for the atomic energy industry.

Third Generation (1964-1971) Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips, made of semiconductor material,
which drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to
run many different applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were smaller
and cheaper than their predecessors.

Fourth Generation (1971-Present) Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated
circuits were built onto a single silicon chip. What in the first generation filled an entire room
could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the
components of the computer—from the central processing unit and memory to input/output
controls—on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many
areas of life as more and more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth generation computers
also saw the development of GUIs, the mouse and handheld devices.

Fifth Generation (Present and Beyond) Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development,
though there are some applications, such as voice recognition, that are being used today. The use
of parallel processing and superconductors is helping to make artificial intelligence a reality.
Quantum computation and molecular and nanotechnology will radically change the face of
computers in years to come. The goal of fifth-generation computing is to develop devices that
respond to natural language input and are capable of learning and self-organization.

2. Direct memory access (DMA) is a feature of modern computers and microprocessors that
allows certain hardware subsystems within the computer to access system memory for reading
and/or writing independently of the central processing unit. Many hardware systems use DMA,
including disk drive controllers, graphics cards, network cards and sound cards. DMA is also
used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-
on-chip, where each processing element is equipped with a local memory (often called scratchpad
memory) and DMA is used for transferring data between the local memory and the main
memory. Computers that have DMA channels can transfer data to and from devices with much
less CPU overhead than computers without a DMA channel. Similarly, a processing element
inside a multi-core processor can transfer data to and from its local memory without occupying
processor time, allowing computation and data transfer to proceed concurrently.

Without DMA, using programmed input/output (PIO) mode for communication with peripheral
devices, or load/store instructions in the case of multicore chips, the CPU is typically fully
occupied for the entire duration of the read or write operation, and is thus unavailable to perform
other work. With DMA, the CPU initiates the transfer, performs other operations while the
transfer is in progress, and receives an interrupt from the DMA controller once the transfer is
complete. This is especially useful in real-time computing applications, where stalling behind
concurrent operations must be avoided. A related application area is stream processing, where
data processing and data transfer must proceed in parallel in order to achieve sufficient
throughput.
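As a minimal, runnable illustration of this "initiate, do other work, collect the result" pattern at the system-call level, the sketch below uses POSIX asynchronous I/O (aio_read). Whether the underlying driver actually performs the transfer with DMA depends on the device; the file name is just an example, and older glibc versions may require linking with -lrt.

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    int fd = open("/etc/hostname", O_RDONLY);   /* example file only */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Describe the asynchronous read: file, buffer, length, offset. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;
    cb.aio_sigevent.sigev_notify = SIGEV_NONE;  /* we will poll for completion */

    if (aio_read(&cb) != 0) {                   /* initiate the transfer ...   */
        perror("aio_read");
        return 1;
    }

    long other_work = 0;
    while (aio_error(&cb) == EINPROGRESS)       /* ... CPU is free meanwhile ... */
        other_work++;                           /* stand-in for useful work     */

    ssize_t n = aio_return(&cb);                /* ... then collect the result  */
    printf("read %zd bytes while doing %ld units of other work\n", n, other_work);
    close(fd);
    return 0;
}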

Principle

DMA is an essential feature of all modern computers, as it allows devices to transfer data
without subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy each
piece of data from the source to the destination, making itself unavailable for other tasks. This
situation is aggravated because access to I/O devices over a peripheral bus is generally slower
than normal system RAM. With DMA, the CPU gets freed from this overhead and can do useful
tasks during data transfer (though the CPU bus would be partly blocked by DMA).

DMA can lead to cache coherency problems, because a device may write to a memory location whose
stale copy is still held in a CPU cache. In addition to hardware interaction, DMA can also be used
to offload expensive memory operations, such as large copies or scatter-gather operations, from the
CPU to a dedicated DMA engine. Intel includes such engines on its high-end servers, calling the
feature I/O Acceleration Technology (IOAT).
How This Works

Direct Memory Access (DMA) is a method of allowing data to be moved from one location to
another in a computer without intervention from the central processor (CPU).

The way that the DMA function is implemented varies between computer architectures, so this
discussion will limit itself to the implementation and workings of the DMA subsystem on the
IBM Personal Computer (PC), the IBM PC/AT and all of its successors and clones.

The PC DMA subsystem is based on the Intel 8237 DMA controller. The 8237 contains four
DMA channels that can be programmed independently and any one of the channels may be
active at any moment. These channels are numbered 0, 1, 2 and 3. Starting with the PC/AT, IBM
added a second 8237 chip, and numbered those channels 4, 5, 6 and 7.

The original DMA controller (0, 1, 2 and 3) moves one byte in each transfer. The second DMA
controller (4, 5, 6, and 7) moves 16-bits from two adjacent memory locations in each transfer,
with the first byte always coming from an even-numbered address. The two controllers are
identical components and the difference in transfer size is caused by the way the second
controller is wired into the system.

The 8237 has two electrical signals for each channel, named DRQ and -DACK. There are
additional signals with the names HRQ (Hold Request), HLDA (Hold Acknowledge), -EOP (End
of Process), and the bus control signals -MEMR (Memory Read), -MEMW (Memory Write),
-IOR (I/O Read), and -IOW (I/O Write).

The 8237 DMA is known as a "fly-by" DMA controller. This means that the data being moved
from one location to another does not pass through the DMA chip and is not stored in the DMA
chip. Consequently, the DMA controller can only transfer data between an I/O port and a memory
address, but not between two I/O ports or two memory locations.
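To make the programming model concrete, here is a hedged sketch of the register writes needed to arm one channel of the first 8237. It assumes channel 2 (the channel traditionally used by the floppy controller), a single-cycle device-to-memory transfer, the standard PC port assignments (mask register 0x0A, flip-flop reset 0x0C, mode register 0x0B, channel 2 address/count ports 0x04/0x05, channel 2 page register 0x81), and Linux-style outb() from <sys/io.h>, which takes the value first and the port second. A real driver would run with I/O privileges and use a physically contiguous buffer below 16 MB that does not cross a 64 KB boundary.

#include <stdint.h>
#include <sys/io.h>   /* outb(); requires ioperm()/iopl() privileges on Linux */

/* Arm DMA channel 2 for a single-cycle, device-to-memory transfer of
   `length' bytes starting at physical address `phys_addr'. */
static void dma_setup_channel2(uint32_t phys_addr, uint16_t length)
{
    uint16_t count = length - 1;                 /* the 8237 counts length - 1 */

    outb(0x06, 0x0A);                            /* mask channel 2 (0x04 | 2)  */
    outb(0x00, 0x0C);                            /* reset the byte flip-flop   */
    outb(0x46, 0x0B);                            /* mode: single transfer,
                                                    device-to-memory, channel 2 */
    outb(phys_addr & 0xFF, 0x04);                /* address bits 0-7           */
    outb((phys_addr >> 8) & 0xFF, 0x04);         /* address bits 8-15          */
    outb((phys_addr >> 16) & 0xFF, 0x81);        /* page register: bits 16-23  */
    outb(count & 0xFF, 0x05);                    /* count bits 0-7             */
    outb((count >> 8) & 0xFF, 0x05);             /* count bits 8-15            */
    outb(0x02, 0x0A);                            /* unmask channel 2; the device
                                                    may now raise DRQ2 and the
                                                    fly-by transfer can begin  */
}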

4.

#include <stdio.h>

/* Recursively print the binary representation of n.
   Each digit n % 2 is printed after the recursive call for n / 2 returns,
   so the most significant bit comes out first. */
void dec2bin(int n)
{
    if (n == 0)
        return;
    dec2bin(n / 2);
    printf("%d", n % 2);
}

int main(void)
{
    int n;
    printf("\nEnter any number:");
    scanf("%d", &n);
    dec2bin(n);
    printf("\n");
    return 0;
}
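For example, entering 13 prints 1101.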
5.

Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.

The steps are:

1. Pick an element, called a pivot, from the list.


2. Reorder the list so that all elements with values less than the pivot come before the pivot,
while all elements with values greater than the pivot come after it (equal values can go
either way). After this partitioning, the pivot is in its final position. This is called the
partition operation.
3. Recursively sort the sub-list of lesser elements and the sub-list of greater elements.

The base cases of the recursion are lists of size zero or one, which never need to be sorted.

In simple pseudocode, the algorithm might be expressed as this:

function quicksort(array)
    var list less, greater
    if length(array) ≤ 1
        return array
    select and remove a pivot value pivot from array
    for each x in array
        if x ≤ pivot then append x to less
        else append x to greater
    return concatenate(quicksort(less), pivot, quicksort(greater))

Notice that we only examine elements by comparing them to other elements. This makes
quicksort a comparison sort. This version is also a stable sort (assuming that the "for each"
method retrieves elements in original order, and the pivot selected is the last among those of
equal value).
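The pseudocode above translates almost line for line into C. Below is a minimal sketch for int arrays (the function name is my own; the last element is taken as the pivot, and malloc failures are not handled for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Out-of-place quicksort mirroring the pseudocode: partition into temporary
   "less" and "greater" arrays, sort each recursively, then concatenate. */
static void quicksort_simple(int *a, size_t n)
{
    if (n <= 1)
        return;

    int pivot = a[n - 1];                  /* select and remove the pivot */
    int *less = malloc((n - 1) * sizeof *less);
    int *greater = malloc((n - 1) * sizeof *greater);
    size_t nl = 0, ng = 0;

    for (size_t i = 0; i + 1 < n; i++) {   /* every element except the pivot */
        if (a[i] <= pivot)
            less[nl++] = a[i];
        else
            greater[ng++] = a[i];
    }

    quicksort_simple(less, nl);
    quicksort_simple(greater, ng);

    /* concatenate(less, pivot, greater) back into the original array */
    memcpy(a, less, nl * sizeof *a);
    a[nl] = pivot;
    memcpy(a + nl + 1, greater, ng * sizeof *a);

    free(less);
    free(greater);
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 5, 6};
    quicksort_simple(a, sizeof a / sizeof a[0]);
    for (size_t i = 0; i < sizeof a / sizeof a[0]; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}

Note how the temporary "less" and "greater" arrays make the Ω(n) extra storage of this version explicit.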

The correctness of the partition algorithm is based on the following two arguments:

- At each iteration, all the elements processed so far are in the desired position: before the
  pivot if less than the pivot's value, after the pivot if greater than the pivot's value (loop
  invariant).
- Each iteration leaves one fewer element to be processed (loop variant).

The correctness of the overall algorithm follows from inductive reasoning: for zero or one
element, the algorithm leaves the data unchanged; for a larger data set it produces the
concatenation of two parts, elements less than the pivot and elements greater than it, themselves
sorted by the recursive hypothesis.

The disadvantage of the simple version above is that it requires Ω(n) extra storage space, which
is as bad as merge sort. The additional memory allocations required can also drastically impact
speed and cache performance in practical implementations. There is a more complex version
which uses an in-place partition algorithm and can achieve the complete sort using only O(log n)
additional space on average (for the call stack, not counting the input):

function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right] // Move pivot to end
    storeIndex := left
    for i from left to right - 1 // left ≤ i < right
        if array[i] ≤ pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right] // Move pivot to its final place
    return storeIndex

[Figure omitted: in-place partition in action on a small list; the boxed element is the pivot, blue
elements are less than or equal to it, and red elements are larger.]

This is the in-place partition algorithm. It partitions the portion of the array between indexes left
and right, inclusive, by moving all elements less than or equal to array[pivotIndex] to the
beginning of the subarray, leaving all the greater elements following them. In the process it also
finds the final position for the pivot element, which it returns. It temporarily moves the pivot
element to the end of the subarray, so that it doesn't get in the way. Because it only uses
exchanges, the final list has the same elements as the original list. Notice that an element may be
exchanged multiple times before reaching its final place. Also, if the input array contains
duplicates of the pivot value, they can end up spread across the left subarray, possibly in arbitrary
order. This is not a partitioning failure, as further sorting will reposition and finally "glue" them together.

This form of the partition algorithm is not the original form; multiple variations can be found in
various textbooks, such as versions not having the storeIndex. However, this form is probably
the easiest to understand.

Once we have this, writing quicksort itself is easy:

procedure quicksort(array, left, right)
    if right > left
        select a pivot index // (e.g. pivotIndex := left + (right-left)/2)
        pivotNewIndex := partition(array, left, right, pivotIndex)
        quicksort(array, left, pivotNewIndex - 1)
        quicksort(array, pivotNewIndex + 1, right)

However, since partition reorders elements within a partition, this version of quicksort is not a
stable sort.
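The partition and quicksort pseudocode above carries over to C almost mechanically. Below is a minimal sketch for int arrays (the helper names are my own; the middle element is chosen as the pivot):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto-style partition of a[left..right] around a[pivotIndex];
   returns the final position of the pivot. */
static int partition(int a[], int left, int right, int pivotIndex)
{
    int pivotValue = a[pivotIndex];
    swap(&a[pivotIndex], &a[right]);             /* move pivot to end */
    int storeIndex = left;
    for (int i = left; i < right; i++)           /* left <= i < right */
        if (a[i] <= pivotValue)
            swap(&a[i], &a[storeIndex++]);
    swap(&a[storeIndex], &a[right]);             /* move pivot to its final place */
    return storeIndex;
}

static void quicksort_inplace(int a[], int left, int right)
{
    if (right > left) {
        int pivotIndex = left + (right - left) / 2;   /* overflow-safe midpoint */
        int pivotNewIndex = partition(a, left, right, pivotIndex);
        quicksort_inplace(a, left, pivotNewIndex - 1);
        quicksort_inplace(a, pivotNewIndex + 1, right);
    }
}

Calling quicksort_inplace(a, 0, n - 1) sorts an n-element array a in place.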

Note the left + (right-left)/2 expression. (left + right)/2 would seem to be adequate, but in the
presence of overflow, can give the wrong answer; for example, in signed 16-bit arithmetic,
32000 + 32000 is not 64000 but -1536, and dividing that number by two will give you a new
pivotIndex of -768 — obviously wrong. The same problem arises in unsigned arithmetic: 64000
+ 64000 truncated to an unsigned 16-bit value is 62464, and dividing that by two gives you
31232 — probably within the array bounds, but still wrong. By contrast, (right - left) and (right -
left)/2 obviously do not overflow, and left + (right - left)/2 also does not overflow ((right - left)/2
= (right + left)/2 - left which is clearly less than or equal to intmax - left).
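The overflow is easy to demonstrate. The small program below forces 16-bit signed arithmetic with a cast; the wrap-around values assume a typical two's-complement machine:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t left = 32000, right = 32000;

    /* (left + right) truncated to 16 bits wraps: 64000 becomes -1536 */
    int16_t bad  = (int16_t)(left + right) / 2;   /* -768, nonsense      */
    int16_t good = left + (right - left) / 2;     /* 32000, as intended  */

    printf("bad midpoint = %d, good midpoint = %d\n", bad, good);
    return 0;
}

On such a machine it prints bad midpoint = -768, good midpoint = 32000.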

Another implementation that works in place:

function quicksort(array, left, right)
    var pivot, leftIdx = left, rightIdx = right
    if right - left + 1 > 1
        pivot = (left + right) / 2
        while leftIdx ≤ pivot and rightIdx ≥ pivot
            while array[leftIdx] < array[pivot] and leftIdx ≤ pivot
                leftIdx = leftIdx + 1
            while array[rightIdx] > array[pivot] and rightIdx ≥ pivot
                rightIdx = rightIdx - 1
            swap array[leftIdx] with array[rightIdx]
            leftIdx = leftIdx + 1
            rightIdx = rightIdx - 1
            if leftIdx - 1 = pivot
                pivot = rightIdx = rightIdx + 1
            else if rightIdx + 1 = pivot
                pivot = leftIdx = leftIdx - 1
        quicksort(array, left, pivot - 1)
        quicksort(array, pivot + 1, right)

In very early versions of quicksort, the leftmost element of the partition would often be chosen as
the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which
is a rather common use-case. The problem was easily solved by choosing either a random index
for the pivot, choosing the middle index of the partition or (especially for longer partitions)
choosing the median of the first, middle and last element of the partition for the pivot (as
recommended by R. Sedgewick).[2][3]
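As an illustration, here is a sketch of median-of-three selection in C (the function name and int element type are my own); it returns the index of the median of the first, middle and last elements of the partition:

/* Return the index of the median of a[left], a[mid] and a[right],
   where mid is the middle index of the partition. */
static int median_of_three(const int a[], int left, int right)
{
    int mid = left + (right - left) / 2;

    if ((a[left] <= a[mid] && a[mid] <= a[right]) ||
        (a[right] <= a[mid] && a[mid] <= a[left]))
        return mid;                               /* a[mid] is the median  */
    if ((a[mid] <= a[left] && a[left] <= a[right]) ||
        (a[right] <= a[left] && a[left] <= a[mid]))
        return left;                              /* a[left] is the median */
    return right;                                 /* otherwise a[right]    */
}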

Two other important optimizations, also suggested by R. Sedgewick and widely used in
practice,[4][5][6] are:
- To make sure at most O(log N) space is used, recurse first into the smaller half of the
  array, and use a tail call to recurse into the other (a C sketch of this pattern follows the
  list).
- Use insertion sort, which has a smaller constant factor and is thus faster on small arrays,
  for invocations on such small arrays (i.e. where the length is less than a threshold t
  determined experimentally). This can be implemented by leaving such arrays unsorted
  and running a single insertion sort pass at the end, because insertion sort handles nearly
  sorted arrays efficiently. A separate insertion sort of each small segment as they are
  identified adds the overhead of starting and stopping many small sorts, but avoids
  wasting effort comparing keys across the many segment boundaries, which will be
  in order due to the workings of the quicksort process.
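As a sketch of the first optimization, reusing the partition() function from the earlier C sketch: recurse into the smaller side and loop (the tail call) on the larger side, which keeps the stack depth at O(log n):

/* Quicksort with bounded stack depth: recurse into the smaller partition,
   iterate on the larger one. Uses partition() from the sketch above. */
static void quicksort_small_stack(int a[], int left, int right)
{
    while (right > left) {
        int pivotIndex = left + (right - left) / 2;
        int p = partition(a, left, right, pivotIndex);
        if (p - left < right - p) {            /* left side is smaller  */
            quicksort_small_stack(a, left, p - 1);
            left = p + 1;                      /* loop on the right side */
        } else {                               /* right side is smaller */
            quicksort_small_stack(a, p + 1, right);
            right = p - 1;                     /* loop on the left side  */
        }
    }
}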

Program of quicksort:

#include "stdio.h"

#define MAXARRAY 10

void quicksort(int arr[], int low, int high);

int main(void)
{
int array[MAXARRAY] = {0};
int i = 0;

/* load some random values into the array */


for(i = 0; i < MAXARRAY; i++)
array[i] = rand() % 100;

/* print the original array */


printf("Before quicksort: ");
for(i = 0; i < MAXARRAY; i++)
{
printf(" %d ", array[i]);
}
printf("\n");

quicksort(array, 0, (MAXARRAY - 1));

/* print the `quicksorted' array */


printf("After quicksort: ");
for(i = 0; i < MAXARRAY; i++) {
printf(" %d ", array[i]);
}
printf("\n");

return 0;
}

/* sort everything inbetween `low' <-> `high' */


void quicksort(int arr[], int low, int high)
{
int i = low;
int j = high;
int y = 0;
/* compare value */
int z = arr[(low + high) / 2];

/* partition */
do {
/* find member above ... */
while(arr[i] < z) i++;

/* find element below ... */


while(arr[j] > z) j--;

if(i <= j)
{
/* swap two elements */
y = arr[i];
arr[i] = arr[j];
arr[j] = y;
i++;
j--;
}
} while(i <= j);

/* recurse */
if(low < j)
quicksort(arr, low, j);

if(i < high)


quicksort(arr, i, high);
}
