The Five Generations of Computers: Computer Devices
First generation computers relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve one problem at a
time. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The
UNIVAC was the first commercial computer delivered to a business client, the U.S. Census
Bureau in 1951.
The first computers of the second generation, which replaced vacuum tubes with transistors, were developed for the atomic energy industry.
Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to
run many different applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were smaller
and cheaper than their predecessors.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many
areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth generation computers
also saw the development of GUIs, the mouse and handheld devices.
2. Direct memory access (DMA) is a feature of modern computers and microprocessors that
allows certain hardware subsystems within the computer to access system memory for reading
and/or writing independently of the central processing unit. Many hardware systems use DMA
including disk drive controllers, graphics cards, network cards and sound cards. DMA is also
used for intra-chip data transfer in multi-core processors, especially in multiprocessor system-on-
chips, where its processing element is equipped with a local memory (often called scratchpad
memory) and DMA is used for transferring data between the local memory and the main
memory. Computers that have DMA channels can transfer data to and from devices with much
less CPU overhead than computers without a DMA channel. Similarly, a processing element
inside a multi-core processor can transfer data to and from its local memory without occupying
its processor time, allowing computation and data transfer to proceed concurrently.
Without DMA, using programmed input/output (PIO) mode for communication with peripheral
devices, or load/store instructions in the case of multicore chips, the CPU is typically fully
occupied for the entire duration of the read or write operation, and is thus unavailable to perform
other work. With DMA, the CPU initiates the transfer, performs other operations while the
transfer is in progress, and finally receives an interrupt from the DMA controller once the
operation is complete. This is especially useful in real-time computing applications where not stalling
behind concurrent operations is critical. Another and related application area is various forms of
stream processing where it is essential to have data processing and transfer in parallel, in order to
achieve sufficient throughput.
Principle
DMA is an essential feature of all modern computers, as it allows devices to transfer data
without subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy each
piece of data from the source to the destination, making itself unavailable for other tasks. This
situation is aggravated because access to I/O devices over a peripheral bus is generally slower
than normal system RAM. With DMA, the CPU gets freed from this overhead and can do useful
tasks during data transfer (though the CPU bus would be partly blocked by DMA).
DMA can lead to cache coherency problems when a device updates memory of which the CPU still
holds a stale cached copy. In addition to hardware interaction, DMA can also be used to offload
expensive memory operations, such as large copies or scatter-gather operations, from the CPU to a
dedicated DMA engine. Intel includes such engines on high-end servers, calling the feature I/O
Acceleration Technology (IOAT).
HOW THIS OCCURS
Direct Memory Access (DMA) is a method of allowing data to be moved from one location to
another in a computer without intervention from the central processor (CPU).
The way that the DMA function is implemented varies between computer architectures, so this
discussion will limit itself to the implementation and workings of the DMA subsystem on the
IBM Personal Computer (PC), the IBM PC/AT and all of its successors and clones.
The PC DMA subsystem is based on the Intel 8237 DMA controller. The 8237 contains four
DMA channels that can be programmed independently and any one of the channels may be
active at any moment. These channels are numbered 0, 1, 2 and 3. Starting with the PC/AT, IBM
added a second 8237 chip, and numbered those channels 4, 5, 6 and 7.
The original DMA controller (0, 1, 2 and 3) moves one byte in each transfer. The second DMA
controller (4, 5, 6, and 7) moves 16-bits from two adjacent memory locations in each transfer,
with the first byte always coming from an even-numbered address. The two controllers are
identical components and the difference in transfer size is caused by the way the second
controller is wired into the system.
The 8237 has two electrical signals for each channel, named DRQ and -DACK. There are
additional signals with the names HRQ (Hold Request), HLDA (Hold Acknowledge), -EOP (End
of Process), and the bus control signals -MEMR (Memory Read), -MEMW (Memory Write),
-IOR (I/O Read), and -IOW (I/O Write).
The 8237 is known as a "fly-by" DMA controller. This means that the data being moved
from one location to another does not pass through the DMA chip and is not stored in the DMA
chip. Consequently, the DMA can only transfer data between an I/O port and a memory address,
but not between two I/O ports or two memory locations.
4.
#include <stdio.h>

void dec2bin(int n)
{
    if (n == 0)
        return;
    dec2bin(n / 2);       /* emit the higher-order bits first */
    printf("%d", n % 2);  /* then the bit for this level */
}

int main(void)
{
    int n;
    printf("\nEnter any number: ");
    scanf("%d", &n);
    dec2bin(n);
    return 0;
}
5.
Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.
The base cases of the recursion are lists of size zero or one, which never need to be sorted.
function quicksort(array)
    var list less, greater
    if length(array) ≤ 1
        return array
    select and remove a pivot value pivot from array
    for each x in array
        if x ≤ pivot then append x to less
        else append x to greater
    return concatenate(quicksort(less), pivot, quicksort(greater))
Notice that we only examine elements by comparing them to other elements. This makes
quicksort a comparison sort. This version is also a stable sort (assuming that the "for each"
method retrieves elements in original order, and the pivot selected is the last among those of
equal value).
The correctness of the partition algorithm is based on the following two arguments:
At each iteration, all the elements processed so far are in the desired position: before the
pivot if less than the pivot's value, after the pivot if greater than the pivot's value (loop
invariant).
Each iteration leaves one fewer element to be processed (loop variant).
The correctness of the overall algorithm follows from inductive reasoning: for zero or one
element, the algorithm leaves the data unchanged; for a larger data set it produces the
concatenation of two parts, elements less than the pivot and elements greater than it, themselves
sorted by the recursive hypothesis.
The disadvantage of the simple version above is that it requires Ω(n) extra storage space, which
is as bad as merge sort. The additional memory allocations required can also drastically impact
speed and cache performance in practical implementations. There is a more complex version
which uses an in-place partition algorithm and can achieve the complete sort using only
O(log n) additional space on average (for the call stack), not counting the input:
(Figure omitted: the in-place partition in action on a small list; the boxed element is the pivot,
blue elements are less than or equal to it, and red elements are larger.)
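The discussion that follows refers to an in-place partition routine that is not reproduced above. A standard formulation, consistent with the array[pivotIndex] and storeIndex names used in the text, is roughly:

```
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]   // move pivot to end
    storeIndex := left
    for i from left to right − 1
        if array[i] ≤ pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]   // move pivot to its final place
    return storeIndex
```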
This is the in-place partition algorithm. It partitions the portion of the array between indexes left
and right, inclusively, by moving all elements less than or equal to array[pivotIndex] to the
beginning of the subarray, leaving all the greater elements following them. In the process it also
finds the final position for the pivot element, which it returns. It temporarily moves the pivot
element to the end of the subarray, so that it doesn't get in the way. Because it only uses
exchanges, the final list has the same elements as the original list. Notice that an element may be
exchanged multiple times before reaching its final place. Also, in case of pivot duplicates in the
input array, they can be spread across the left subarray, possibly in random order. This doesn't
represent a partitioning failure, as further sorting will reposition and finally "glue" them together.
This form of the partition algorithm is not the original form; multiple variations can be found in
various textbooks, such as versions not having the storeIndex. However, this form is probably
the easiest to understand.
However, since partition reorders elements within a partition, this version of quicksort is not a
stable sort.
Note the left + (right-left)/2 expression. (left + right)/2 would seem to be adequate, but in the
presence of overflow, can give the wrong answer; for example, in signed 16-bit arithmetic,
32000 + 32000 is not 64000 but -1536, and dividing that number by two will give you a new
pivotIndex of -768 — obviously wrong. The same problem arises in unsigned arithmetic: 64000
+ 64000 truncated to an unsigned 16-bit value is 62464, and dividing that by two gives you
31232 — probably within the array bounds, but still wrong. By contrast, (right - left) and (right -
left)/2 obviously do not overflow, and left + (right - left)/2 also does not overflow ((right - left)/2
= (right + left)/2 - left which is clearly less than or equal to intmax - left).
In very early versions of quicksort, the leftmost element of the partition would often be chosen as
the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which
is a rather common use-case. The problem was easily solved by choosing either a random index
for the pivot, choosing the middle index of the partition or (especially for longer partitions)
choosing the median of the first, middle and last element of the partition for the pivot (as
recommended by R. Sedgewick).[2][3]
#include <stdio.h>
#include <stdlib.h>

#define MAXARRAY 10

void quicksort(int arr[], int low, int high)
{
    int i = low, j = high;
    int y = 0;
    /* compare value: the middle element (overflow-safe index form) */
    int z = arr[low + (high - low) / 2];

    /* partition */
    do {
        /* find member above ... */
        while(arr[i] < z) i++;
        /* ... and member below the compare value */
        while(arr[j] > z) j--;
        if(i <= j)
        {
            /* swap two elements */
            y = arr[i];
            arr[i] = arr[j];
            arr[j] = y;
            i++;
            j--;
        }
    } while(i <= j);

    /* recurse */
    if(low < j)
        quicksort(arr, low, j);
    if(i < high)
        quicksort(arr, i, high);
}

int main(void)
{
    int array[MAXARRAY] = {0};
    int i = 0;

    /* load some values into the array */
    for(i = 0; i < MAXARRAY; i++)
        array[i] = rand() % 100;

    quicksort(array, 0, MAXARRAY - 1);

    /* print the sorted array */
    for(i = 0; i < MAXARRAY; i++)
        printf("%d ", array[i]);
    printf("\n");

    return 0;
}