Information and Technology Management
Undertaken at
“TECNIA INSTITUTE OF ADVANCED STUDIES”
Submitted in partial fulfillment of the requirements for the award of the degree of
Introduction
Arithmetic is at the heart of the digital computer, and the majority of arithmetic
performed by computers is binary arithmetic, that is, arithmetic on base two numbers.
Decimal and floating-point numbers, also used in computer arithmetic, depend on binary
representations, and an understanding of binary arithmetic is necessary in order to
understand either one.
The sizes of numbers which can be arithmetic operands are determined when the
architecture of the computer is designed. Common sizes for integer arithmetic are 8,
16, 32, and, more recently, 64 bits. It is possible for the programmer to perform arithmetic on
larger numbers or on sizes which are not directly implemented in the architecture.
However, this is usually so painful that the programmer picks the most appropriate size
implemented by the architecture. This puts a burden on the computer architect to select
appropriate sizes for integers, and on the programmer to be aware of the limitations of the
size he has chosen and on finite-precision arithmetic in general.
Consider what it would be like to perform arithmetic if one were limited to three-digit
decimal numbers. Neither negative numbers nor fractions could be expressed
directly, and the largest possible number that could be expressed is 999. This is the
circumstance in which we find ourselves when we perform computer arithmetic because
the number of bits is fixed by the computer’s architecture. Although we can usually
express numbers larger than 999, the limits are real and small enough to be of practical
concern. Working with unsigned 16-bit binary integers, the largest number we can
express is 2¹⁶ − 1, or 65,535. If we assume a signed number, the largest number is 32,767.
There are other limitations. Consider again the example of three-digit numbers.
We can add 200 + 300, but not 600 + 700 because the latter sum is too large to fit in three
digits. Such a condition is called overflow and it is of concern to architects of computer
systems. Because not all operations which will cause overflow can be predicted when a
computer program is written, the computer system itself must check whether overflow
has occurred and, if so, provide some indication of that fact.
Finite-precision arithmetic also fails to obey some familiar algebraic laws. If we evaluate
a + (b − c) = (a + b) − c
using a = 700, b = 400, and c = 300, the left-hand side evaluates to 800, but overflow
occurs when evaluating a + b in the right-hand side. The associative law does not hold.
Similarly if we evaluate
a × (b − c) = a × b − a × c
using a = 5, b = 210, and c = 195, the left-hand side produces 75, but in the right-hand
side, a × b overflows and the distributive law does not hold.
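This behavior is easy to demonstrate on a real machine. The following C sketch (the values are hypothetical, chosen so that a + b exceeds the 16-bit maximum of 32,767) checks each intermediate result against the 16-bit limits:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical values: a + b = 40000, which exceeds INT16_MAX (32767). */
    int16_t a = 30000, b = 10000, c = 10000;

    /* Left-hand side: b - c is 0, so a + (b - c) is 30000 and fits. */
    int lhs = a + (b - c);
    printf("a + (b - c) = %d, fits in 16 bits? %d\n",
           lhs, lhs >= INT16_MIN && lhs <= INT16_MAX);

    /* Right-hand side: the intermediate a + b is 40000. A 16-bit machine
       would signal overflow here, even though the value of the whole
       expression is representable. */
    int partial = a + b;   /* computed in wider precision for the check */
    printf("a + b = %d, fits in 16 bits? %d\n",
           partial, partial >= INT16_MIN && partial <= INT16_MAX);
    return 0;
}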
The rules for binary addition are the same as those for any positional number
system. One adds the digits column-wise from the right. If the sum is greater than B–1
for base B, a carry into the next column is generated. In the case of binary numbers, a
sum greater than one generates a carry. Here is the binary addition table:
0+0 = 0      0+1 = 1      1+0 = 1      1+1 = 10      1+1+1 = 11
The first three entries are self-explanatory. The fourth entry is 1+1 = 10₂, or one
plus one is two; we have a sum of zero and a carry of one into the two’s place. The
fifth entry is 1+1+1 = 11₂, or three ones are three. The sum is one and there is a carry
into the two’s place.
Now we will add two binary numbers with more than one bit each so you can see
how the carries “ripple” left, just as they do in decimal addition.
The three carries are shown on the top row. Normally, you would write these
down as you complete the partial sum for each column.
    1 1 1
  0 1 1 1 0
+ 0 0 1 1 1
  1 0 1 0 1
Adding the rightmost column produces a one with no carry; adding the next column
produces a zero with one to carry. Work your way through the entire example from
right to left.
One can also express the rules of binary addition with a truth table. This is
important because there are techniques for designing electronic circuits that compute
functions expressed by truth tables. The fact that we can express the rules of binary
addition as a truth table implies that we can design a circuit which will perform addition
on binary numbers, and that turns out to be the case.
We only need to write the rules for one column of bits; we start at the right and
apply the rules to each column in succession until the final sum is formed. Call the bits
of the addend and augend A and B, and the carry in from the previous column Ci. Call
the sum S and the carry out Co. The truth table for one-bit binary addition looks like this:
A B Ci S Co
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
This says if all three input bits are zero, both S and Co will be zero. If any one of
the bits is one and the other two are zero, S will be one and Co will be zero. If two bits
are ones, S will be zero and Co will be one. Only if all three bits are ones will both S and
Co be ones.
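Because the truth table completely specifies one column of the addition, it translates directly into code as well as into circuitry. Here is a minimal C sketch of a one-bit full adder and a ripple-carry loop built from it (the names full_adder and ripple_add are our own, chosen for illustration):

#include <stdio.h>
#include <stdint.h>

/* One-bit full adder, read straight off the truth table:
   S is one when an odd number of the inputs are one;
   Co is one when at least two of the inputs are one. */
static void full_adder(int a, int b, int ci, int *s, int *co) {
    *s  = a ^ b ^ ci;
    *co = (a & b) | (a & ci) | (b & ci);
}

/* Ripple-carry addition of two 8-bit numbers, applied one column
   at a time from the right, as in the worked example above. */
static uint8_t ripple_add(uint8_t x, uint8_t y) {
    uint8_t sum = 0;
    int carry = 0;
    for (int i = 0; i < 8; i++) {
        int s, co;
        full_adder((x >> i) & 1, (y >> i) & 1, carry, &s, &co);
        sum |= (uint8_t)(s << i);
        carry = co;
    }
    return sum;   /* a carry out of the leftmost bit is discarded */
}

int main(void) {
    printf("%u\n", ripple_add(14, 7));   /* 01110 + 00111 = 10101, prints 21 */
    return 0;
}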
Negative Numbers
In the one’s complement representation, the negative of a number is formed by
complementing each bit of the binary number. Again, a zero in the
sign bit indicates a positive number and a one indicates a negative number.
Signed-magnitude and excess 2ⁿ⁻¹ numbers are used in floating point, and will be discussed there.
One’s complement arithmetic is obsolete.
Two’s complement numbers are used almost universally for integer representation
of numbers in computers. In the binary number system, we can express any non-negative
integer as the sum of coefficients of powers of two:
N = dₙ₋₁·2ⁿ⁻¹ + dₙ₋₂·2ⁿ⁻² + … + d₁·2¹ + d₀·2⁰, where each digit dᵢ is either zero or one.
One way of looking at two’s complement numbers is to consider that the leftmost
bit, or sign bit, represents a negative coefficient of a power of two and the remaining bits
represent positive coefficients which are added back. So, an n-bit two’s complement
number has the form
N = −dₙ₋₁·2ⁿ⁻¹ + dₙ₋₂·2ⁿ⁻² + … + d₁·2¹ + d₀·2⁰
Consider 10000000, an eight-bit two’s complement number. Since the sign bit is
a one, it represents –2⁷ or –128. The remaining digits are zeroes, so 10000000 = –128.
The number 10000001 is –128+1 or –127. The number 10000010 is –126, and so on.
11111111 is –128 + 127 or –1.
Now consider 01111111, also an eight-bit two’s complement number. The sign
bit still represents –2⁷ or –128, but its coefficient is zero, and this is a positive number,
+127.
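This reading of a two’s complement number, a negative weight for the sign bit plus positive weights added back, can be checked with a short C sketch (the helper name twos_value is ours):

#include <stdio.h>
#include <stdint.h>

/* Value of an 8-bit two's complement bit pattern: the sign bit carries
   the negative weight -2^7 = -128; the other seven bits add back. */
static int twos_value(uint8_t bits) {
    return -128 * ((bits >> 7) & 1) + (bits & 0x7F);
}

int main(void) {
    printf("%d\n", twos_value(0x80));   /* 10000000 -> -128 */
    printf("%d\n", twos_value(0x81));   /* 10000001 -> -127 */
    printf("%d\n", twos_value(0xFF));   /* 11111111 -> -1   */
    printf("%d\n", twos_value(0x7F));   /* 01111111 -> +127 */
    return 0;
}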
The two’s complement representation has its own drawback. Notice that in eight
bits we can represent –128 by writing 10000000. The largest positive number we can
represent is 01111111 or +127. Two’s complement is asymmetric about zero. For any
size binary number, there is one more negative number than there are positive numbers.
This is because, for any binary number, the number of possible bit combinations is even.
We use one of those combinations for zero, leaving an odd number to be split between
positive and negative. Since we want zero to be represented by all binary zeros and we
want the sign of positive numbers to be zero, there’s no way to escape from having one
more negative number than positive.
To negate a two’s complement number, follow these two steps:
• Take the complement of each bit in the number to be negated. That is, if a bit is a
zero, make it a one, and vice-versa.
• To the result of the first step, add one as though doing unsigned arithmetic.
Let’s do an example: we will find the two’s complement representation of –87.
We start with the binary value for 87, or 01010111. Here are the steps:
01010111 original number
10101000 each bit complemented, or “flipped”
+ 1 add 1 to 10101000
10101001 this is the two’s complement, or –87.
We can check this out. The leftmost bit represents –128, and the remaining bits
have positive values which are added back. We have –128 + 32 + 8 + 1, or –128 + 41 =
−87. There’s another way to check this. If you add equivalent negative and positive
numbers, the result is zero, so –87 + 87 = 0. Does 01010111 + 10101001 = 0? Perform
the addition and see.
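In C the two steps, complement every bit and add one, collapse into a single expression. A minimal sketch (negate8 is our own name), which also performs the suggested check:

#include <stdio.h>
#include <stdint.h>

/* Two's complement negation: flip every bit, then add one.
   Unsigned arithmetic keeps the wraparound well defined in C. */
static uint8_t negate8(uint8_t x) {
    return (uint8_t)(~x + 1u);
}

int main(void) {
    uint8_t n = negate8(0x57);               /* 87 = 01010111 */
    printf("%02X\n", n);                     /* prints A9 = 10101001, i.e. -87 */
    printf("%02X\n", (uint8_t)(0x57 + n));   /* 87 + (-87): prints 00 */
    return 0;
}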
In working with two’s complement numbers, you will often find it necessary to
adjust the length of the number, the number of bits, to some fixed size. Clearly, you can
expand the size of a positive (or unsigned) number by adding zeroes on the left, and you
can reduce its size by removing zeroes from the left. If the number is to be considered a
two’s complement positive number, you must leave at least one zero on the left in the
sign bit’s position.
It’s also possible to expand the size of a two’s complement negative number by
supplying one-bits on the left. That is, if 1010 is a two’s complement number, 1010 and
11111010 are equal. 1010 is –8+2 or –6. 11111010 is –128+64+32+16+8+2 or –6.
Similarly you can shorten a negative number by removing ones from the left so long as at
least one one-bit remains.
We can generalize this notion. A two’s complement number can be expanded by
replicating the sign bit on the left. This process is called sign extension. We can also
shorten a two’s complement number by deleting digits from the left so long as at least
one digit identical to the original sign bit remains.
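Sign extension is likewise only a couple of bit operations. A minimal C sketch that widens a 4-bit two’s complement value to 8 bits (the name sign_extend4 is ours):

#include <stdio.h>
#include <stdint.h>

/* Widen a 4-bit two's complement value to 8 bits by replicating
   the sign bit (bit 3) into the upper four bit positions. */
static int8_t sign_extend4(uint8_t nibble) {
    nibble &= 0x0F;                      /* keep only the low four bits  */
    if (nibble & 0x08)                   /* sign bit is one: negative    */
        return (int8_t)(nibble | 0xF0);  /* supply one-bits on the left  */
    return (int8_t)nibble;               /* positive: zeroes on the left */
}

int main(void) {
    printf("%d\n", sign_extend4(0x0A));  /* 1010 -> 11111010, prints -6 */
    printf("%d\n", sign_extend4(0x05));  /* 0101 -> 00000101, prints 5  */
    return 0;
}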
Binary addition of two’s complement signed numbers can be performed using the
same rules given above for unsigned addition. If there is a carry out of the sign bit, it is
ignored.
Since we are dealing with finite-precision arithmetic, it is possible for the result of
an addition to be too large to fit in the available space. The answer will be truncated, and
will be incorrect. This is the overflow condition discussed above. There are two rules for
determining whether overflow has occurred:
• If two numbers of the same sign are added, overflow has occurred if and only if
the result is of the opposite sign (see the sketch below).
• If two numbers of opposite signs are added, overflow cannot occur.
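The same-sign rule is straightforward to express in code. A minimal C sketch (the name add_overflows is ours), using 8-bit operands so overflow is easy to provoke:

#include <stdio.h>
#include <stdint.h>

/* Overflow test for 8-bit two's complement addition: overflow occurs
   exactly when the operands have the same sign but the truncated sum
   has the opposite sign. */
static int add_overflows(int8_t a, int8_t b) {
    int8_t s = (int8_t)((uint8_t)a + (uint8_t)b);   /* truncated 8-bit sum */
    return ((a < 0) == (b < 0)) && ((s < 0) != (a < 0));
}

int main(void) {
    printf("%d\n", add_overflows(100, 50));    /* 1: 150 > 127, overflow   */
    printf("%d\n", add_overflows(100, -50));   /* 0: opposite signs, safe  */
    printf("%d\n", add_overflows(-100, -50));  /* 1: -150 < -128, overflow */
    return 0;
}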
Subtraction
Addition has the property of being commutative, that is, a+b = b+a. This is not
true of subtraction. 5 – 3 is not the same as 3 – 5. For this reason, we must be careful of
the order of the operands when subtracting. We call the first operand, the number which
is being diminished, the minuend; the second operand, the amount to be subtracted from
the minuend, is the subtrahend. The result is called the difference.
51 minuend
– 22 subtrahend
29 difference.
It is possible to perform binary subtraction using the same process we use for
decimal subtraction, namely subtracting individual digits and borrowing from the left.
This process quickly becomes cumbersome as you borrow across successive zeroes in the
minuend. Further, it doesn’t lend itself well to automation. Jacobowitz describes the
“carry” method of subtraction which some of you may have learned in elementary school,
where a one borrowed in the minuend is “paid back” by adding to the subtrahend digit to
the left. This means that one need look no more than one column to the left when
subtracting. Subtraction can thus be performed a column at a time with a carry to the left,
analogous to addition. This is a process which can be automated, but we are left with
difficulties when the subtrahend is larger than the minuend or when either operand is signed.
Since we can form the complement of a binary number easily and can add signed
numbers easily, the obvious answer to the problem of subtraction is to take the two’s
complement of the subtrahend, then add it to the minuend. We aren’t saying
anything more than that 51–22 = 51+(–22). Not only does this approach remove many of
the complications of subtraction by the usual method, it means we don’t have to build
special circuits to perform subtraction. All we need is a circuit which can form the
bitwise complement of a number and an adder.
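In code, then, subtraction really is complement-and-add. A minimal C sketch (sub8 is our own name):

#include <stdio.h>
#include <stdint.h>

/* Subtract by adding the two's complement of the subtrahend:
   minuend + (~subtrahend + 1). The carry out of the leftmost
   bit is discarded by the 8-bit truncation. */
static uint8_t sub8(uint8_t minuend, uint8_t subtrahend) {
    return (uint8_t)(minuend + (uint8_t)(~subtrahend + 1u));
}

int main(void) {
    printf("%u\n", sub8(51, 22));            /* prints 29 */
    printf("%d\n", (int8_t)sub8(22, 51));    /* prints -29, read as two's complement */
    return 0;
}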
Multiplication
Binary multiplication follows the same pattern as the familiar pencil-and-paper
method for decimal numbers:
    42    multiplicand
  × 27    multiplier
   294    first partial product (42 × 7)
   84     second partial product (42 × 2, shifted left one place)
  1134    total product
With pencil-and-paper multiplication, we form all the partial products, then add
them. It isn’t necessary to do that; we could simply keep a running sum. When the last
partial product is added, the running sum will be the total product. We can now state an
algorithm for binary multiplication suitable for a computer implementation:
• If the rightmost digit of the multiplier is a one, copy the multiplicand to the
product; otherwise, set the product to zero.
• For each successive digit of the multiplier, shift the multiplicand left one bit;
then, if the multiplier digit is a one, add the shifted multiplicand to the
product. The algorithm terminates when all the digits of the multiplier have
been examined.
Notice also that if the multiplier is n bits long, the multiplicand will have been
shifted left n bits by the time the algorithm terminates. For this reason, multiplication
algorithms make a copy of the multiplicand in a register 2n bits wide. Examination of the
bits of the multiplier is often performed by shifting a copy of the multiplier right one bit
at a time. This is because shift operations often save the last bit “shifted out” in a way
that is easy to examine.
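Here is a minimal C sketch of the shift-and-add algorithm just described (mul8 is our own name). The multiplicand is copied into a 16-bit variable, twice the operand width, and the multiplier is shifted right each round so its last bit can be examined:

#include <stdio.h>
#include <stdint.h>

/* Shift-and-add multiplication of two unsigned 8-bit numbers,
   keeping a running sum instead of a list of partial products. */
static uint16_t mul8(uint8_t multiplicand, uint8_t multiplier) {
    uint16_t md = multiplicand;    /* 2n-bit copy of the multiplicand   */
    uint16_t product = 0;
    while (multiplier != 0) {
        if (multiplier & 1)        /* low bit selects a partial product */
            product += md;
        md <<= 1;                  /* shift the multiplicand left       */
        multiplier >>= 1;          /* expose the next multiplier bit    */
    }
    return product;
}

int main(void) {
    printf("%u\n", mul8(42, 27));  /* prints 1134 */
    return 0;
}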
Unfortunately, this algorithm does not work for signed numbers. If the multiplicand
is negative, the partial products must be sign-extended so that they form 2n-bit
negative numbers. If the multiplier is negative, the situation is even worse; the bits of the
multiplier no longer specify an appropriately-shifted copy of the multiplicand. One way
around this dilemma would be to take the two’s complement of negative operands,
perform the multiplication, then take the two’s complement of the product if the
multiplier and multiplicand are of different signs. This approach would require a considerable
amount of time before and after the actual multiplication, and so is usually rejected in
favor of a faster but less straightforward algorithm. One such algorithm is Booth’s
Algorithm, which is discussed in detail in Stallings.
Division
Binary division uses the same long-division process as decimal division. As an
example, we divide 0110101 (53₁₀) by 0101 (5₁₀). The divisor is compared with the
leading bits of the dividend; since 0110 is greater than or equal to 0101, a one is placed
in the quotient and the divisor is subtracted to form a partial remainder:
          1          quotient
0101 ) 0110101       divisor 0101, dividend 0110101
       0101
          1          partial remainder
Now digits from the dividend are “brought down” into the partial remainder until
the partial remainder is again greater than or equal to the divisor. Zeroes are placed in the
quotient until the partial remainder is greater than or equal to the divisor, then a one is
placed in the quotient, as shown below.
          101
0101 ) 0110101
       0101
          110
The divisor is copied below the partial remainder and subtracted from it to form a
new partial remainder. The process is repeated until all bits of the dividend have been
used. The quotient is complete and the result of the last subtraction is the remainder:
          1010
0101 ) 0110101
       0101
          110
          101
            11
This completes the division. The quotient is 1010₂ (10₁₀) and the remainder is 11₂
(3₁₀), which is the expected result. This algorithm works only for unsigned numbers, but
it is possible to extend it to two’s complement numbers. As with the other algorithms, it
can be implemented using only shifting, complementation, and addition.
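A minimal C sketch of this unsigned division algorithm (div8 is our own name; it assumes a nonzero divisor). Bits of the dividend are brought down into the partial remainder one at a time, and a one goes into the quotient whenever the partial remainder is at least the divisor:

#include <stdio.h>
#include <stdint.h>

/* Unsigned long division by repeated shift, compare, and subtract. */
static void div8(uint8_t dividend, uint8_t divisor,
                 uint8_t *quotient, uint8_t *remainder) {
    uint8_t q = 0, r = 0;
    for (int i = 7; i >= 0; i--) {
        r = (uint8_t)((r << 1) | ((dividend >> i) & 1));  /* bring down a bit */
        q <<= 1;
        if (r >= divisor) {   /* partial remainder >= divisor:     */
            r -= divisor;     /*   subtract the divisor            */
            q |= 1;           /*   and place a one in the quotient */
        }
    }
    *quotient = q;
    *remainder = r;
}

int main(void) {
    uint8_t q, r;
    div8(53, 5, &q, &r);           /* 0110101 / 0101 */
    printf("%u r %u\n", q, r);     /* prints 10 r 3  */
    return 0;
}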
Summary
The four arithmetic operations on binary numbers are performed in much the
same way as they are performed on decimal numbers. By using two’s complement to
represent negative numbers, we can perform all four operations using only circuits which
shift, complement, and add.
Computers operate on numbers of fixed size. For this reason, the rules of finite-precision
arithmetic apply to computer arithmetic. Programmers, as well as designers of
computer equipment, must be aware of the limitations of finite-precision arithmetic.
System software
System software refers to the files and programs that make up your
computer's operating system. System files include libraries of
functions, system services, drivers for printers and other hardware,
system preferences, and other configuration files. The programs that
are part of the system software include assemblers, compilers, file
management tools, system utilities, and debuggers. The system
software is installed on your computer when you install your operating
system. You can update the software by running programs such as
"Windows Update" for Windows or "Software Update" for Mac OS X.
Unlike application programs, however, system software is not meant
to be run by the end user. For example, while you might use your Web
browser every day, you probably don't have much use for an
assembler program (unless you are a computer programmer).
Since system software runs at the most basic level of your computer,
it is called "low-level" software. It generates the user interface and
allows the operating system to interact with the hardware. A user
doesn’t need to worry about what the system software is doing since it
just runs in the background.
Types of system software include:
• Loaders
• Linkers
• Utility software
• Desktop environment / Graphical user interface
• Shells
• BIOS
• Hypervisors
• Boot loaders
Multimedia software: These programs allow users to create and play
audio and video media. Audio
converters, players, burners, video encoders and decoders are some
forms of multimedia software. Examples of this type of software
include Real Player and Media Player.
Some common examples of system software:
1) Microsoft Windows
2) Linux
3) Unix
4) Mac OS X
5) DOS
6) BIOS Software
7) HD Sector Boot Software
8) Device Driver Software, e.g., graphics drivers
9) Linker Software
10) Assembler and Compiler Software