Unit IV
DATA COMMUNICATIONS
Introduction
The distance over which data moves within a computer may vary from a few
thousandths of an inch, as is the case within a single IC chip, to as much as several feet
along the backplane of the main circuit board. Over such small distances, digital data may
be transmitted as direct, two-level electrical signals over simple copper conductors.
Except for the fastest computers, circuit designers are not very concerned about the shape
of the conductor or the analog characteristics of signal transmission.
Frequently, however, data must be sent beyond the local circuitry that constitutes a
computer. In many cases, the distances involved may be enormous. Unfortunately, as the
distance between the source of a message and its destination increases, accurate
transmission becomes increasingly difficult. This results from the electrical distortion of
signals traveling through long conductors, and from noise added to the signal as it
propagates through a transmission medium. Although some precautions must be taken for
data exchange within a computer, the biggest problems occur when data is transferred to
devices outside the computer's circuitry. In this case, distortion and noise can become so
severe that information is lost.
Over the years, data communication has grown from a simple wired connection to
a global and societal necessity. In 1837, Samuel Morse's invention of the telegraph began
the history of data communication. This once remarkable invention sent signals over a
series of wires from place to place.
The Great Western Railroad adopted the telegraph service in 1843. This allowed
the telegraph service to expand across the United States.
In 1876, Alexander Graham Bell improved on the telegraph with the introduction
of the telephone. It wasn't until 100 years later that telephone lines were able to carry
data. However, Bell's invention laid the groundwork for future data communication
inventions. In 1958, the U.S. government built on these technological advancements by
launching communication-oriented satellites. These paved the way for further
global communications.
More than 1 million servers were using Internet Protocol technology by 1991, and the
World Wide Web had become the primary component of the Internet. As the years
passed, the Internet evolved into something that for years had seemed out of the realm of
possibility: wireless communication. Data communication had expanded to a whole new
level. Technology continues to evolve and is central to many societies today.
Standards Organizations for Data Communication
Each of these approaches has certain benefits and problems. For example, a de facto
standard can be created rapidly, usually is complete, and can be modified rapidly should
problems be encountered. Standards bodies work at a much slower pace, designed to ensure
input from all their members. A de facto standard may not be widely used, but a standard
created by a body has the backing of the many people who invest their time in it because
they intend to create products based on it.
Committee T1
The Committee T1, working under the ATIS umbrella and accredited by ANSI,
creates network interconnection and interoperability standards for the United States.
Their standards focus on meeting the needs of the telephone companies. Their working
groups include:
Control Signaling
Frame-based ATM
Network Management
Physical Layer
P-NNI
Residential Broadband
Security
Signaling
Testing
Traffic Management
Wireless ATM
ITU-T
The ITU-T was created in 1993, replacing the former International Telegraph and
Telephone Consultative Committee (CCITT), whose origins go back to 1865.
The Internet Engineering Task Force
The Internet Engineering Task Force (IETF) is a large open international community
of network designers, operators, vendors, and researchers concerned with the evolution of
the Internet architecture and the smooth operation of the Internet.
Applications Area
Internet Area
A code used early in the data communications industry is the Baudot code. Baudot uses
five bits per character, thus allowing up to 32 distinct characters. To extend this limit,
the code uses up-shift and down-shift modes, as on a typewriter. In the Baudot code,
each five bits transmitted must be interpreted according to
whether they are up-shifted (figures) or down-shifted (letters). For example, the bit
pattern 11111 represents up-shift and the bit pattern 11011 represents down-shift
characters. All characters transmitted after the sequence 11111 but before the shifted
sequence 11011 are treated as up-shift characters. All characters transmitted after the
sequence 11011 are treated as down-shift characters until the pattern 11111 is recognized.
The complete BAUDOT code (modified for this problem) is shown in the table at the end
of this problem.
Input
The input consists of two parts. The first part is the Baudot character set: line one
contains the 32 down-shift characters and line two contains the 32 up-shift characters.
(Note: spaces are inserted for the shift characters.) The remainder of the input file
consists of one or more messages encoded using the Baudot code. Each message will be
on a line in the input file. Each line will consist of 1's and 0's, with no characters between
the bits. There can be up to 80 bits per message.
The input file will be terminated by end-of-file. The initial state of each message should
be assumed to be in the down-shift state.
Output
The output should consist of one line of text for each message. The output should contain
the character representation, as translated using BAUDOT, of each of the messages.
Sample Input
Sample Output
DIAL:911
NOV 5, 8AM
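The shift-state decoding described above can be sketched in Python. The real Baudot character sets would be read from the first two lines of the input file; the LETTERS and FIGURES strings below are hypothetical 32-character stand-ins used only to exercise the decoder.

```python
def decode_baudot(letters, figures, bits):
    """Decode a Baudot bit string using the given 32-character letter and
    figure sets. The message starts in the down-shift (letters) state;
    11111 switches to figures, 11011 switches back to letters."""
    charset = letters              # initial state is down-shift
    out = []
    for i in range(0, len(bits), 5):
        group = bits[i:i + 5]
        if group == "11111":       # up-shift code
            charset = figures
        elif group == "11011":     # down-shift code
            charset = letters
        else:
            out.append(charset[int(group, 2)])
    return "".join(out)

# Hypothetical character sets (NOT the real Baudot table), 32 chars each.
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ.,:() "
FIGURES = "0123456789-'!&#/;?$*+=@%\"<>^[] _"

# 'A', up-shift, '0', down-shift, 'B' under the toy sets above:
print(decode_baudot(LETTERS, FIGURES, "0000011111000001101100001"))  # A0B
```

A full solution would read each message line, decode it with the character sets from the input, and print one line of text per message.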
Error Control
Error control coding provides the means to protect data from errors. Data
transferred from one place to another has to be transferred reliably. Unfortunately, in
many cases the physical link cannot guarantee that all bits will be transferred without
errors. It is then the responsibility of the error control algorithm to detect those errors and,
in some cases, correct them so that upper layers see an error-free link.
Two error control strategies have been popular in practice: the FEC
(Forward Error Correction) strategy, which uses error correction alone, and the ARQ
(Automatic Repeat Request) strategy, which uses error detection combined with
retransmission of corrupted data. The ARQ strategy is generally preferred for several
reasons. The main reason is that the number of overhead bits needed to implement an
error detection scheme is much less than the number of bits needed to correct the same
error.
For example, the 4-bit item 1001 contains two one bits. For even parity, a
zero is added to the stream, maintaining an even number of one bits. For odd parity, a one
is added to make the number of ones odd.
Computing even parity involves XORing the bits of the data stream
together, while computing odd parity XORs the bits and negates the result (equivalent to
the XNOR operator, ~). This is demonstrated in the following example:
1^0^0^1 = 0
(Computing Even Parity)
~(1^0^0^1) = 1
(Computing Odd Parity)
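The two computations above can be sketched directly in Python, where ^ is the XOR operator:

```python
from functools import reduce

def even_parity(bits):
    """Parity bit that makes the total number of one bits even:
    the XOR of all data bits."""
    return reduce(lambda a, b: a ^ b, bits)

def odd_parity(bits):
    """Parity bit that makes the total number of one bits odd:
    the negated XOR (XNOR) of all data bits."""
    return even_parity(bits) ^ 1

data = [1, 0, 0, 1]
print(even_parity(data))   # 0, so 1001 is sent as 10010
print(odd_parity(data))    # 1, so 1001 is sent as 10011
```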
This mechanism enables the detection of single bit errors, because if one bit gets
flipped due to line noise, there will be an incorrect number of ones in the received data.
Consider the following assuming even parity:
A sends 10010
B receives 11010
1^1^0^1 = 1
The parity check fails (with even parity the XOR result should be 0), indicating
an error. Rather than computing parity on the data bits and comparing it to the parity
bit, the receiver will actually XOR the data bits and the parity bit together. If the result
is zero, the check passes; a non-zero result indicates an error. If P is the parity generation
function, then:
P(d1,d2,d3,....dn) = P
and:
P(d1,d2,d3,....dn,P) = 0
if no error has occurred. Any incorrect bit results in the same outcome, but there
is no information regarding which bit flipped, and therefore no possibility of correcting it.
Also, if two bits flip, no errors are indicated:
A sends 10010
B receives 11000
1^1^0^0^0 = 0
and the data appears valid when it is not; therefore, when using this method, you must
assume that the probability of two errors is very low.
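The receiver-side check and its two-error blind spot can be demonstrated with a short sketch:

```python
from functools import reduce

def parity_check(received):
    """XOR the data bits and the parity bit together;
    0 means the check passed, non-zero indicates an error."""
    return reduce(lambda a, b: a ^ b, received)

# A sends 10010 (even parity); one bit flips in transit.
print(parity_check([1, 1, 0, 1, 0]))   # 1: error detected

# Two bits flip: the check passes even though the data is wrong.
print(parity_check([1, 1, 0, 0, 0]))   # 0: error NOT detected
```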
There are some other methods that can detect two errors, or detect and correct one
error such as parallel parity, Hamming codes, etc.
Parallel Parity
Parallel parity is based on regular parity. Parallel parity can detect whether or not
there was an error; furthermore, it can determine which bit has flipped. This method is
applied to a block of data made up of several words; a parity bit is then added
to each column and row. The following is an example:
As we can see in the example above, every row has its parity bit, and every
column has its parity bit. If one bit in the block has flipped, we get two parity errors: one
in the row's parity and one in the column's parity. By intersecting the row and column we
can locate the bit that has flipped.
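A sketch of this row-and-column scheme follows. The original example table is not reproduced here, so the small 3x4 block below is an assumed stand-in:

```python
def parity(bits):
    """Even parity (XOR) of a sequence of bits."""
    p = 0
    for b in bits:
        p ^= b
    return p

def locate_error(block, row_par, col_par):
    """Return (row, col) of a single flipped bit, or None if every
    row and column parity still checks out."""
    bad_row = next((r for r, word in enumerate(block)
                    if parity(word) != row_par[r]), None)
    bad_col = next((c for c in range(len(block[0]))
                    if parity([w[c] for w in block]) != col_par[c]), None)
    if bad_row is None and bad_col is None:
        return None
    return (bad_row, bad_col)

# Compute parities for a small block, then flip one bit in transit.
block = [[1, 0, 1, 1],
         [0, 1, 1, 0],
         [1, 1, 0, 0]]
row_par = [parity(w) for w in block]
col_par = [parity([w[c] for w in block]) for c in range(4)]
block[1][2] ^= 1                                # simulate the error
print(locate_error(block, row_par, col_par))    # (1, 2)
```

The intersection of the failing row and failing column pinpoints the flipped bit, which can then simply be inverted to correct it.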
R. W. Hamming wrote the paper that both opened and closed this field in 1950.
His interest was in providing a means of self-checking in computers, which were just
being developed at the time he wrote. The paper appeared in the Bell System
Technical Journal, April 1950, and is definitely worth tracking down in the library and
reading.
The best starting point for understanding ECC codes is to consider bit strings as
addresses in a binary hypercube. A hypercube is a generalization of a cube to an
arbitrary number of dimensions; the most familiar example is probably the four-dimensional
hypercube. Here's a picture of binary hypercubes for several different dimensionalities:
Each of them was created by copying the one to the left twice, and connecting
corresponding vertices.
The Hamming distance between two bit strings is the number of bits you have to
change to convert one to the other: this is the same as the number of edges you have to
traverse in a binary hypercube to get from one of the vertices to the other. The basic idea
of an error correcting code is to use extra bits to increase the dimensionality of the
hypercube, and make sure the Hamming distance between any two valid points is greater
than one.
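The Hamming distance itself is easy to compute; a minimal sketch:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length bit strings
    differ -- the number of hypercube edges between their vertices."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("0000", "1001"))   # 2
```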
If the Hamming distance between valid strings is only one, a single-bit error
results in another valid string. This means we can't detect an error.
If it's two, then changing one bit results in an invalid string, and can be detected
as an error. Unfortunately, changing just one more bit can result in another valid
string, which means we can't know which bit was wrong: so we can detect an
error but not correct it.
If the Hamming distance between valid strings is three, then changing one bit
leaves us only one bit away from the original string, but two bits away from any
other valid string. This means if we have a one-bit error, we can figure out which
bit is in error; but if we have a two-bit error, it looks like a one-bit error from the
other direction. So we can have single-bit correction, but that's all.
Finally, if the Hamming distance is four, then we can correct a single-bit error and
detect a double-bit error. This is frequently referred to as a SECDED (Single
Error Correct, Double Error Detect) scheme.
So... now we have to think about how to increase the Hamming distance between valid
strings.
Parity
The simplest case is by adding a parity bit. Suppose we have a three-bit word (so
the bit strings define points in a cube). If we add a fourth bit, we can decree that any time
we want to switch a bit in the original three-bit string, we also have to switch the parity
bit. If we start with 000 in the left cube, so the full string is 0000, changing any one of the
original three bits requires us to change to the other cube: 1001, 1010, and 1100. Now if
we change a second bit, we have to move back to the left cube: 0011, 0101, 0110. And if
we change the third bit, we move back to the right cube: 0111.
So, there is a Hamming distance of two between any two valid strings. If we get a one-bit
error, we know it is an error because it's on one of the invalid vertices.
This can be computed by counting the number of 1's, and making sure it's always
even (so this is called even parity). We could have selected exactly the opposite set of
vertices as the valid ones, which would have given us odd parity. We picked even parity
because we'll be using it in the next step.
Error Correction
The weakness of the parity scheme is that we can tell we had an error, but we
can't know which bit is wrong. If we use enough extra bits, we can tell not only that a bit
is wrong, but which one it is. Since we need to have enough check bits to spot both an
error in the data and in the check bits themselves (after all, they aren't going to be perfect
either), we need (log n) + 1 bits (Hamming derives this result much, much more carefully
in his paper). The basic idea in what follows is that we'll divide the data bits into log n
subsets where each subset contains roughly half of all the bits, and compute the even
parity of each subset. If we have an error, we'll be able to tell which bit has the error
because it will be uniquely determined by the set of subsets that turn up with bad parity.
(note: in Hamming's paper, the following appears just as unmotivated as it does here. I
really have no idea how he derived this technique; he does show that it actually does
establish the needed distance between valid bit positions) We'll put the check bits in bit
positions which are powers of two, and intersperse the data bits between them. Here's
what it looks like if we have eight data bits:
Here's how we find the subsets: the data bit positions which contain a 1 in the bit
corresponding to a check bit number are used in calculating that check bit. So, looking at
the table, data bits M1, M2, M4, M5, and M7 are in positions 3, 5, 7, 9, and 11; those
position numbers all contain 2^0 (they have a 1 in the least significant bit); those data
bits are used in calculating check bit C1. We simply set C1 to the parity of its data bits.
C1 = M1 ^ M2 ^ M4 ^ M5 ^ M7
C2 = M1 ^ M3 ^ M4 ^ M6 ^ M7
C4 = M2 ^ M3 ^ M4 ^ M8
C8 = M5 ^ M6 ^ M7 ^ M8
Now: if we get an error, the parity will be wrong for all of the sets based on that bit. The
check bits that turn up wrong will be the bit number of the error!
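The scheme above (data bits M1..M8 in positions 3, 5, 6, 7, 9, 10, 11, 12, check bits C1, C2, C4, C8 in the power-of-two positions) can be sketched as follows; this is a sketch of the text's technique, without the extra overall parity bit discussed below:

```python
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]   # positions of M1..M8
CHECK_POS = [1, 2, 4, 8]                 # positions of C1, C2, C4, C8

def encode(data):
    """data: 8 bits M1..M8 -> 12-bit Hamming codeword (positions 1..12)."""
    word = [0] * 13                      # index 0 unused
    for pos, bit in zip(DATA_POS, data):
        word[pos] = bit
    for c in CHECK_POS:
        # a check bit covers every data position whose number contains c
        for pos in DATA_POS:
            if pos & c:
                word[c] ^= word[pos]
    return word[1:]

def correct(codeword):
    """Return (corrected codeword, error position); position 0 = no error."""
    word = [0] + list(codeword)
    syndrome = 0
    for c in CHECK_POS:
        s = 0
        for pos in range(1, 13):
            if pos & c:
                s ^= word[pos]
        if s:                            # this parity subset failed
            syndrome += c
    if syndrome:
        word[syndrome] ^= 1              # flip the offending bit
    return word[1:], syndrome

cw = encode([1, 0, 1, 1, 0, 1, 0, 1])
bad = list(cw)
bad[5] ^= 1                              # corrupt bit position 6
fixed, pos = correct(bad)
print(pos, fixed == cw)                  # 6 True
```

The failing check bits sum directly to the error position, which is exactly why the check bits sit in the power-of-two positions.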
We can combine ECC with parity. To do this, we take the parity over
all the bits in the word (including the check bits). In our bit-numbering scheme, we
consider the parity bit to be bit 0000.
So, when we look at the parity and check bits, we get the following results:
If the parity is correct and the check bits are correct, our data is correct.
If the parity is incorrect, the check bits indicate which bit is wrong. If the check
bits indicate that the error is in bit 0000, it's the parity bit itself that is incorrect.
If the parity is correct but the check bits indicate an error, there is a two-bit error.
This can't be corrected.
This technique has seen a lot more development, by a lot more authors, than error-
correcting codes have. The basic technique as described here appeared in a paper by
Peterson and Brown in the January 1961 issue of the Proceedings of the
Institute of Radio Engineers (the IRE was, of course, a predecessor organization to the
IEEE). Much has been done since on selecting good CRCs.
Once again, we'll start by defining a simple technique, and then define a more
complex one that works better.
Checksums
Suppose we have a fairly long message, which can reasonably be divided into
shorter words (a 128 byte message, for instance). We can introduce an accumulator with
the same width as a word (one byte, for instance), and as each word comes in, add it to
the accumulator. When the last word has been added, the contents of the accumulator are
appended to the message (as a 129th byte, in this case). The added word is called a
checksum.
Now, the receiver performs the same operation, and checks the checksum. If the
checksums agree, we assume the message was sent without error.
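The accumulator scheme above can be sketched for the byte-wide case:

```python
def checksum(message):
    """One-byte accumulator: add each byte of the message modulo 256."""
    acc = 0
    for byte in message:
        acc = (acc + byte) & 0xFF
    return acc

message = bytes(range(128))                       # a 128-byte message
framed = message + bytes([checksum(message)])     # 129th byte is the checksum

# The receiver repeats the computation and compares.
assert checksum(framed[:-1]) == framed[-1]
```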
Performing a vertical parity (a bitwise XOR of the message's words, rather than an
arithmetic sum) has two advantages over a real checksum: it can be
performed with less hardware if the data is serial, and it will lead us into performing a
CRC.
To see how a vertical parity can be performed with less hardware than a
checksum, take a look at the next figure:
This figure shows an eight bit shift register and an exclusive-or gate. Initially, the
shift register is filled with 0's. As each bit is put into it, the new bit is exclusive-ored with
the contents of the eighth cell in the register. When the entire message has been passed
through the shift register, it contains the vertical parity.
00000000 11010110101010010100011101101010
(as we start, the shift register is empty)
00000001 1010110101010010100011101101010
00000011 010110101010010100011101101010
00000110 10110101010010100011101101010
00001101 0110101010010100011101101010
00011010 110101010010100011101101010
00110101 10101010010100011101101010
01101011 0101010010100011101101010
11010110 101010010100011101101010
(at this point, the shift register contains the first byte of the message)
10101100 01010010100011101101010
01011001 1010010100011101101010
10110011 010010100011101101010
01100111 10010100011101101010
11001111 0010100011101101010
10011111 010100011101101010
00111111 10100011101101010
01111111 0100011101101010
(it now contains the vertical parity of the first two bytes of the message)
11111110 100011101101010
11111100 00011101101010
11111001 0011101101010
11110011 011101101010
11100111 11101101010
11001110 1101101010
10011100 101101010
00111000 01101010
(first three bytes)
01110000 1101010
11100001 101010
11000010 01010
10000101 1010
00001010 010
00010100 10
00101001 0
01010010
(and the vertical parity of the whole 32 bit message)
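The shift-register trace above can be reproduced with a short simulation: each incoming bit is XORed with the bit leaving the eighth cell before being shifted in.

```python
def vertical_parity(bits, width=8):
    """Simulate the shift register: the new bit is XORed with the bit
    falling out of the leftmost (eighth) cell, then shifted in."""
    reg = 0
    for b in bits:
        out = (reg >> (width - 1)) & 1           # eighth cell's contents
        reg = ((reg << 1) & ((1 << width) - 1)) | (b ^ out)
    return reg

msg = "11010110101010010100011101101010"
result = vertical_parity([int(b) for b in msg])
print(format(result, "08b"))                     # 01010010, as in the trace
```

The final register value is the XOR of the message's four bytes, which is exactly the vertical parity.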
Notice that a checksum or a vertical parity is much more efficient than ECC (in
the sense that it doesn't need as many added bits), but it isn't capable of correcting errors.
The problem with checksums is that a 1-bit error in the message produces only a
1-bit change in the checksum. If you have a burst of noise, the odds are far too good that
you'll end up with something that still looks correct, even though it isn't. The next
approach, the CRC check, "smears" the results of the parity calculations through the
signature, reducing the likelihood of that happening.
The basic idea of modulo-2 arithmetic is just that we are working in binary, but we
don't have a carry in addition or a borrow in subtraction. This means:
Addition and subtraction become the same operation: just a bit-wise exclusive-or.
Because of this, the total ordering we expect of integers is replaced by a partial
ordering: one number is greater than another iff its left-most 1 is farther left than
the other's. This will have an impact on division, in a moment.
Multiplication is just like multiplication in ordinary arithmetic, except that the
adds are performed using exclusive-ors instead of additions.
Division is like long division in ordinary arithmetic, except for two differences:
the subtractions are replaced by exclusive-ors, and you can subtract any time the
leftmost bits line up correctly (since, by the partial ordering described above, they
are regarded as equal in this case).
Multiplication
1101
0110
----
0000
11010
110100
0000000
-------
0101110
Division
1101
--------
0110)0101110
0110
----
0111
0110
----
0011
0000
----
0110
0110
----
0000
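The two worked examples above can be sketched using Python integers as bit patterns; XOR replaces addition and subtraction throughout:

```python
def m2_mul(a, b):
    """Carry-less (modulo-2) multiplication: shifted copies of a are
    combined with XOR instead of addition."""
    result = 0
    i = 0
    while b >> i:
        if (b >> i) & 1:
            result ^= a << i
        i += 1
    return result

def m2_divmod(dividend, divisor):
    """Modulo-2 long division: subtract (XOR) whenever the leftmost
    bits line up."""
    quotient = 0
    deg = divisor.bit_length()
    while dividend.bit_length() >= deg:
        shift = dividend.bit_length() - deg
        dividend ^= divisor << shift
        quotient |= 1 << shift
    return quotient, dividend

print(bin(m2_mul(0b1101, 0b0110)))        # 0b101110, matching the example
print(m2_divmod(0b0101110, 0b0110))       # quotient 0b1101, remainder 0
```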
One last thing to say here: although "modulo-2" suggests that every result is 0 or 1,
what we are really doing is performing the arithmetic on each coefficient of the
polynomial modulo 2. Easy to get confused....
Cyclic Redundancy Checks
k is the length of the message we want to send, i.e., the number of information bits.
n is the total length of the message we will end up sending: the information bits
followed by the check bits. Peterson and Brown call this a code polynomial.
n-k is the number of check bits. It is also the degree of the generating polynomial.
The basic (mathematical) idea is that we're going to pick the n-k check digits in
such a way that the code polynomial is divisible by the generating polynomial.
Then we send the data, and at the other end we check whether it's still
divisible by the generating polynomial; if it's not, then we know we have an error; if
it is, we hope there was no error.
The way we calculate a CRC is we establish some predefined n-k+1 bit number P
(called the Polynomial, for reasons relating to the fact that modulo-2 arithmetic is a
special case of polynomial arithmetic). Now we append n-k 0's to our message, and
divide the result by P using modulo-2 arithmetic. The remainder is called the Frame
Check Sequence. Now we ship off the message with the remainder appended in place of
the 0's. The receiver can either recompute the FCS and see if it gets the same answer, or it
can just divide the whole message (including the FCS) by P and see if it gets a remainder
of 0!
As an example, let's use the 5-bit polynomial 11001 and compute the CRC of a 16-bit
message:
---------------------
11001)10011101010101100000
11001
-----
1010101010101100000
11001
----
110001010101100000
11001
----
00011010101100000
11001
----
0011101100000
11001
----
100100000
11001
-----
10110000
11001
-----
1111000
11001
-----
11100
11001
-----
0101
Notice that when I did the division, I didn't bother to keep track of the quotient; we don't
care about the quotient. Our only goal here is to get the remainder (0101), which is the
FCS.
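The division above can be sketched as code; this reproduces the FCS of 0101 and then verifies the receiver's zero-remainder check:

```python
def m2_mod(dividend, divisor):
    """Remainder of modulo-2 long division (subtraction is XOR)."""
    while dividend.bit_length() >= divisor.bit_length():
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

def crc_fcs(message, poly):
    """Append n-k zeros to the message and return the modulo-2
    remainder: the Frame Check Sequence."""
    return m2_mod(message << (poly.bit_length() - 1), poly)

poly = 0b11001                        # the 5-bit polynomial above
message = 0b1001110101010110          # the 16-bit message
fcs = crc_fcs(message, poly)
print(format(fcs, "04b"))             # 0101, as in the division above

# Ship the message with the FCS in place of the appended 0's;
# the receiver just checks that the whole frame divides evenly.
frame = (message << 4) | fcs
assert m2_mod(frame, poly) == 0
```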
CRC's can actually be computed in hardware using a shift register and some
number of exclusive-or gates (sounds a bit like the vertical parity calculation, doesn't it?).
The key insight is that we can perform a subtraction any time there is a 1 in the bit
that lines up with the most significant bit of the polynomial, and we can perform that
subtraction by performing an exclusive-or of the bits corresponding to 1's in all the other
places of the polynomial. This lets us implement the CRC calculation by using a shift
register similar to the one for vertical parity.
You can see how it's done by comparing the division we performed above to the
circuit in the next figure. The figure shows a shift register; the string to be checked is
inserted from the right. Whenever a "1" exits the left side of the shift register, it means
there is a 1 in the most significant bit of the part of the dividend we're working with;
since we're working in modulo-2 arithmetic, this means we can do a subtraction. What
this works out to is:
1. The most significant bit will be XORed away, so it falls off to the left.
2. For every other bit with a "1" in the divisor, perform an exclusive-or with the
corresponding bit in the number being checked.
3. For bits with a "0" in the divisor, do nothing.
The figure below attempts to show this for the example CRC polynomial. Each of the
square boxes is a position in the shift register, where a value can be stored. Every round
box is a position where we may or may not perform an exclusive-or, depending on the
polynomial we're using. You can see the value of the CRC polynomial written above the
round boxes.
So, just a little bit more. First, there is quite a bit of theory behind choosing a
"good" CRC polynomial; the choice of polynomial can be tuned to make sure that any
burst of some given length can be caught.
Data Communication Hardware
The transfer of data over a communications circuit requires appropriate hardware and
software. Once these are working, you can use various applications for the following
services and operations:
Internet
Having registered with an Internet service provider (ISP), you can use the World
Wide Web (WWW), send messages via electronic mail (e-mail), convey files using
file transfer protocol (FTP) or use Internet telephony to speak to other Internet
users for the cost of a local call, irrespective of distance. However, to talk to
others that aren't on the Internet you must register with an Internet telephone
service provider (ITSP).
Terminal Emulation
This is similar to peer-to-peer operation, but lets you communicate with an old-
fashioned mainframe computer or a more modern system that uses a compatible
terminal.
Hardware Options
The choice of hardware is a compromise between cost and speed. The latter,
measured in kilobits per second (kbit/s), must be considered in two directions. First,
there’s the upstream speed, the rate at which you can send data or upload files to a
remote computer. Secondly, there’s the downstream speed, the rate at which you can
receive data or download files from another machine. Of the two, the latter is more
important, since most people receive more data than they send.
The following table shows the various options available in the United Kingdom,
in order of mass popularity. The speeds shown here are typical, although higher rates are
possible.
Connection Method   Access      Downstream (kbit/s)   Upstream (kbit/s)
Standard modem      Dial-up     56                    56
ADSL                Permanent   512                   256
SDSL                Permanent   512                   512
Cable               Permanent   128/512               256
ISDN                Dial-up     64/128                64/128
Leased line         Permanent   64/128                64/128
Satellite           Permanent   400                   56
Wireless            Permanent   128/512               256/512
Standard Modem (56 kbit/s)
This lets you convey digital data over a normal phone circuit, also known as the
public switched telephone network (PSTN) or plain old telephone service (POTS). Both
the caller and the recipient must have a modem connected to their computers. As with a
normal telephone call, the link is operated as a dial-up service, only providing a
connection when required.
The modem at the sending end modulates the data into an audio signal which the
modem at the receiving end demodulates back into data. Speed is seriously limited, since
the phone system is designed for speech signals of restricted bandwidth.
Some modems support multilink operation, sharing data over several phone
circuits and giving increased speed.
Cable (128/512 kbit/s)
Community antenna television (CATV), also known as cable TV, employs coaxial cables to
carry TV pictures, but can also provide a permanent connection to the Internet. This
requires a CATV splitter box for feeding your TV and a special cable modem.
In some parts of the United Kingdom you can use blueyonder, a cable-based
system provided by Telewest that offers improved rates of 512 kbit/s, 1 Mbit/s or
2 Mbit/s.
ISDN (64/128 kbit/s)
The Integrated Services Digital Network (ISDN) predates ADSL, but is well-
established in professional circles. It uses one or more circuits from your local telephone
exchange, normally operated as a dial-up service. However, instead of a modem, you’ll
need an ISDN terminal adaptor (ISDN TA), ISDN card or USB-to-ISDN adaptor. Such
devices support peer-to-peer communication while some TAs also accommodate
multilink operation.
You can use your mobile phone for sending and receiving data, preferably in
conjunction with a portable computer. Although GSM and other current systems are very
slow, second-generation phones operate at a rate similar to the PSTN, and third-
generation (3G) phones are even faster.
Satellite (400 kbit/s)
Satellite communication offers rates of 400 kbit/s or higher, although early systems
used a telephone modem for sending data ‘upstream’. More recent offerings download at
512 kbit/s and upload at 128 kbit/s, both via satellite, with an option for downloading at
2 Mbit/s. Most parts of the UK can use an 890 mm satellite dish, although a 980 mm
version is needed in Northern Ireland and Scotland.
The time-lag introduced by satellite systems makes them unsuitable for real-time
games.
Wireless (128/512 kbit/s)
Serial Interface
While such interfaces as Ethernet, FireWire, and USB all send data as a serial stream, the
term "serial port" usually identifies hardware more or less compliant with the RS-232
standard, intended to interface with a modem or with a similar communication device.
Hardware
Some computers, such as the IBM PC, used an integrated circuit called a UART
that converted characters to (and from) asynchronous serial form, and automatically
looked after the timing and framing of data. Very low-cost systems, such as some early
home computers, would instead use the CPU to send the data through an output pin,
using the so-called bit-banging technique. Before large-scale integration (LSI) UART
integrated circuits were common, a minicomputer or microcomputer would have a serial
port made of multiple small-scale integrated circuits to implement shift registers, logic
gates, counters, and all the other logic for a serial port.
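The framing work a UART (or a bit-banging CPU) performs can be sketched for the common 8-N-1 format; the format choice here (one start bit, eight data bits sent least-significant-bit first, no parity, one stop bit) is an assumption, since the text does not fix one.

```python
def frame_8n1(byte):
    """Frame one byte for asynchronous serial transmission, 8-N-1:
    a start bit (0), eight data bits LSB first, and a stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data + [1]

def unframe_8n1(bits):
    """Recover the byte, checking the start and stop bits."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

bits = frame_8n1(0x41)                # the character 'A'
print(bits)                           # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert unframe_8n1(bits) == 0x41
```

A bit-banging implementation would emit these ten bits on an output pin at the agreed baud rate; a UART does the same in hardware.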
Early home computers often had proprietary serial ports with pinouts and voltage
levels incompatible with RS-232. Inter-operation with RS-232 devices may be impossible
as the serial port cannot withstand the voltage levels produced and may have other
differences that "lock in" the user to products of a particular manufacturer.
Many personal computer motherboards still have at least one serial port, even if
accessible only through a pin header. Small-form-factor systems and laptops may omit
RS-232 connector ports to conserve space, but the electronics are still there. RS-232 has
been standard for so long that the circuits needed to control a serial port became very
cheap and often exist on a single chip, sometimes also with circuitry for a parallel port.
The individual signals on a serial port are unidirectional, and when connecting two
devices the outputs of one device must be connected to the inputs of the other. Devices
are divided into two categories: "data terminal equipment" (DTE) and "data circuit-
terminating equipment" (DCE). A line that is an output on a DTE device is an input on a
DCE device and vice versa, so a DCE device can be connected to a DTE device with a
straight-wired cable. Conventionally, computers and terminals are DTE while modems and
peripherals are DCE.
If it is necessary to connect two DTE devices (or two DCE devices, but that is
more unusual), a special cable known as a null-modem cable must be used.
Connectors
While the RS-232 standard originally specified a 25-pin D-type connector, many
designers of personal computers chose to implement only a subset of the full standard:
they traded off compatibility with the standard against the use of less costly and more
compact connectors (in particular the DE-9 version used by the original IBM PC-AT).
The desire to supply serial interface cards with two ports required that IBM reduce the
size of the connector to fit onto a single card back panel. A DE-9 connector also fits onto
a card with a second DB-25 connector that was similarly changed from the original
Centronics-style connector. Starting around the time of the introduction of the IBM PC-
AT, serial ports were commonly built with a 9-pin connector to save cost and space.
However, presence of a 9-pin D-subminiature connector is neither necessary nor
sufficient to indicate use of a serial port, since this connector was also used for video,
joysticks, and other purposes.
Many models of Macintosh favored the related RS-422 standard, mostly using
German Mini-DIN connectors, except in the earliest models. The Macintosh included a
standard set of two ports for connection to a printer and a modem, but some PowerBook
laptops had only one combined port to save space.
The standard specifies 20 different signal connections. Since most devices use
only a few signals, smaller connectors can often be used. For example, the 9 pin DE-9
connector was used by most IBM-compatible PCs since the IBM PC AT, and has been
standardized as TIA-574. More recently, modular connectors have been used. Most
common are 8P8C connectors. Standard EIA/TIA 561 specifies a pin assignment, but the
"Yost Serial Device Wiring Standard"[2] invented by Dave Yost (and popularized by the
Unix System Administration Handbook) is common on Unix computers and newer
devices from Cisco Systems. Many devices use neither of these standards. 10P10C
connectors can be found on some devices as well. Digital Equipment Corporation defined
its own DECconnect connection system, based on the Modified Modular Jack (MMJ)
connector. This is a 6-pin modular jack with the key offset from the center position. As
with the Yost standard, DECconnect uses a symmetrical pin layout that enables direct
connection between two DTEs. Another common connector is the DH10 header
connector, common on motherboards and add-in cards, which is usually converted via a
cable to the more standard 9-pin DE-9 connector (frequently mounted on a free slot plate
or another part of the housing).
The RS-232 standard is used by many specialized and custom-built devices. This
list includes some of the more common devices that are connected to the serial port on a
PC. Some of these, such as modems and serial mice, are falling into disuse, while others
remain readily available.
Serial ports are very common on most types of microcontroller, where they can be used
to communicate with a PC or other serial devices.
Dial-up modems
GPS receivers (typically NMEA 0183 at 4,800 bit/s)
Bar code scanners and other point of sale devices
LED and LCD text displays
Satellite phones, low-speed satellite modems and other satellite based transceiver
devices
Flat-screen (LCD and Plasma) monitors to control screen functions by external
computer, other AV components or remotes
Test and measuring equipment such as digital multimeters and weighing systems
Updating firmware on various consumer devices
Some CNC controllers
Uninterruptible power supplies
Stenotype (stenography) machines
Software debuggers that run on a second computer
Industrial field buses
Historic uses
Printers
Computer terminal, teletype
Older digital cameras
Networking (Macintosh AppleTalk using RS-422 at 230.4 kbit/s)
Serial mouse
Older GSM mobile phones
Parallel Interface
History
The Centronics Model 101 printer was introduced in 1970 and included the first
parallel interface for printers.[1] The interface was developed by Robert Howard and
Prentice Robinson at Centronics. The Centronics parallel interface quickly became a de
facto industry standard; manufacturers of the time tended to use various connectors on
the system side, so a variety of cables were required. For example, early VAX systems
used a DC-37 connector, NCR used the 36-pin micro ribbon connector, Texas
Instruments used a 25-pin card edge connector and Data General used a 50-pin micro
ribbon connector.
IBM released the IBM Personal Computer in 1981 with a variant of the Centronics
interface; initially, only IBM-branded printers (rebranded from Epson) could be used
with the IBM PC.[4] IBM standardized the parallel cable with a DB-25F connector on the
PC side and the Centronics connector on the printer side. Vendors soon released printers
compatible with both the standard Centronics and the IBM implementations.
Historical uses
Before the advent of USB, the parallel interface was adapted to access a number
of peripheral devices other than printers. Among the earliest were dongles used as a
hardware-key form of software copy protection. Zip drives and scanners were early
implementations, followed by external modems, sound cards, webcams, gamepads,
joysticks, external hard disk drives and CD-ROM drives.
Adapters were available to run SCSI devices via parallel. Other devices such as EPROM
programmers and hardware controllers could be connected via the parallel port.
Current use
For electronics hobbyists the parallel port is still often the easiest way to connect
to an external circuit board. It is faster than the other common legacy port (the serial
port), requires no serial-to-parallel converter, and needs far less interface logic and
software than a USB target interface.
Modems
While modem interfaces are standardized, a number of different protocols exist for
formatting the data to be transmitted over telephone lines. Some, like CCITT V.34, are
official standards, while others have been developed by private companies. Most modems
have built-in support for the more common protocols; at slow data transmission speeds,
at least, most modems can communicate with each other. At higher transmission speeds,
however, the protocols are less standardized.
bps : How fast the modem can transmit and receive data. At slow rates,
modems are measured in terms of baud rates. The slowest rate is 300 baud
(about 25 cps). At higher speeds, modems are measured in terms of bits per
second (bps). The fastest dial-up modems reach 56,000 bps (56K), although they can
achieve even higher effective transfer rates by compressing the data. Obviously,
the faster the transmission rate, the faster you can send and receive data. Note,
however, that you cannot receive data any faster than it is being sent. If, for
example, the device sending data to your computer is sending it at 2,400 bps,
you must receive it at 2,400 bps. It does not always pay, therefore, to have a
very fast modem. In addition, some telephone lines are unable to transmit data
reliably at very high rates.
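The point that you cannot receive data faster than the other end sends it can be put in numbers. A minimal sketch (the function name is illustrative, and the 10-bit frame size assumes the common 8-N-1 framing):

```python
def transfer_seconds(num_bytes, sender_bps, receiver_bps, bits_per_byte=10):
    """Seconds to move num_bytes over a dial-up link.

    The effective rate is the slower of the two ends; with 8-N-1
    framing, each data byte costs 10 bits on the wire.
    """
    line_bps = min(sender_bps, receiver_bps)
    return num_bytes * bits_per_byte / line_bps

# A 56,000 bps modem gains nothing from a 2,400 bps sender:
assert transfer_seconds(2400, 2400, 56000) == transfer_seconds(2400, 2400, 2400)
# 2,400 bytes at 2,400 bps with 10-bit framing takes 10 seconds:
assert transfer_seconds(2400, 2400, 56000) == 10.0
```

This is why a very fast modem does not always pay: the link runs at the speed of its slowest participant.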
voice/data: Many modems support a switch to change between voice and data
modes. In data mode, the modem acts like a regular modem. In voice mode,
the modem acts like a regular telephone. Modems that support a voice/data
switch have a built-in loudspeaker and microphone for voice communication.
auto-answer : An auto-answer modem enables your computer to receive calls
in your absence. This is only necessary if you are offering some type of
computer service that people can call in to use.
data compression : Some modems perform data compression, which enables
them to send data at faster rates. However, the modem at the receiving end
must be able to decompress the data using the same compression technique.
flash memory : Some modems come with flash memory rather than
conventional ROM, which means that the communications protocols can be
easily updated if necessary.
Fax capability: Most modern modems are fax modems, which means that
they can send and receive faxes.
Asynchronous Modem
The most common type of modem in use today. Each byte is framed between a start bit
and a stop bit. To communicate correctly, both modems must use the same start- and
stop-bit sequence, operate at the same baud rate, and use the same parity setting for error
checking. Parity checking appends an extra bit to each byte so that the total count of 1
bits is even (even parity) or odd (odd parity), allowing the receiver to detect any
single-bit error.
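Parity checking can be sketched in a few lines of Python (even parity shown; the function names are illustrative):

```python
def parity_bit(byte, even=True):
    """Return the parity bit for one data byte.

    With even parity the bit is chosen so the total count of 1 bits
    (data plus parity) is even; odd parity makes the total odd.
    """
    ones = bin(byte).count("1")
    bit = ones % 2            # 1 if the data already has an odd number of 1s
    return bit if even else bit ^ 1

def check_parity(byte, bit, even=True):
    """True if the received parity bit matches the received data byte."""
    return parity_bit(byte, even) == bit

# 0x53 = 0b01010011 has four 1 bits, so even parity appends 0.
assert parity_bit(0x53) == 0
# A single flipped data bit is detected...
assert not check_parity(0x53 ^ 0x01, 0)
# ...but two flipped bits cancel out: parity's well-known blind spot.
assert check_parity(0x53 ^ 0x03, 0)
```

As the last assertion shows, parity catches any odd number of bit errors but misses even-numbered ones, which is why it suits links with only occasional single-bit noise.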
Synchronous Modem
Synchronous systems negotiate the communication parameters at the data link layer
before communication begins. Basic synchronous systems will synchronize the signal
clocks on both sides before transmission begins, reset their numeric counters and take
other steps. More advanced systems may negotiate things like error correction and
compression.
It is possible to have both sides try to synchronize the connection at the same time.
Usually, there is a process to decide which end should be in control. Both sides in
synchronous communication can go through a lengthy negotiation cycle where they
exchange communications parameters and status information. Because connection
establishment is lengthy, a synchronous system on a highly unreliable physical link can
spend much of its time negotiating rather than actually transferring data. Once a
connection is established, the transmitter sends out a signal, and the receiver sends back
data about that transmission and what it received. This feedback adds to the negotiation
cost on low-error-rate lines, but it is what makes synchronous communication efficient
when the transmission medium itself (an electric wire, radio signal or laser beam) is not
particularly reliable.
Low-Speed Modem
Low Speed modems, those operating at 2400 bits per second or less, are still used
in applications where the amount of data to be transferred is small.
Because these lower speed modems can connect much more quickly than high
speed modems, an application with less than 2K bits of data can actually complete
communications faster at 2400 bits per second than any of the faster protocols.
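The crossover described above can be illustrated numerically. The handshake durations below are assumptions chosen for illustration only, not measured values:

```python
def total_time(bits, line_bps, handshake_seconds):
    """Connection setup plus raw transfer time (framing ignored)."""
    return handshake_seconds + bits / line_bps

# Assumed training times, for illustration: a 2,400 bps modem
# connects in about 2 s, a 56k modem in about 20 s.
small_job_bits = 2_000                             # "less than 2K bits"
slow = total_time(small_job_bits, 2_400, 2.0)      # about 2.8 s total
fast = total_time(small_job_bits, 56_000, 20.0)    # about 20.0 s total
assert slow < fast   # the slow modem finishes the small job first
```

For small payloads the connection setup dominates the total, so the quicker-connecting low-speed modem wins despite its far lower line rate.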
The 56 kbit/s theoretical speed is only possible when the system being dialled into
has a digital connection to the telephone system, such as DS0 service. By the time 56k
modems came into use, most of the telephone system beyond the local loop was already
digital, so the new 56k protocols took advantage of this.
If both calling party and called party have an analog connection, the voice band
signal will be converted from analog to digital and then back to analog. Each conversion
adds noise, and there will be too much noise from the second conversion for 56k to work.
The modem's negotiation processes will fall back to the less demanding 33.6 kbit/s mode.
Other local loop conditions, such as certain antiquated pair gain systems, may have
similar results.
In 8-N-1 connections (1 start bit, 8 data bits, No parity bit, 1 stop bit), which were
typical before LAPM became widespread, the actual throughput is a maximum of 5.6
kilobytes per second, since ten bits are transmitted for every 8-bit byte, although effective
throughput can be increased as high as 32 kbytes/s using internal compression, or 100
kbytes/s using ISP-side compression.
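The 5.6 kB/s figure follows directly from the framing arithmetic; a quick check, using only the numbers given in the text:

```python
LINE_BPS = 56_000        # V.90 downstream ceiling, bits per second
BITS_PER_FRAME = 10      # 8-N-1: one start bit, 8 data bits, one stop bit

raw_bytes_per_second = LINE_BPS // BITS_PER_FRAME
assert raw_bytes_per_second == 5_600   # i.e. 5.6 kilobytes per second

# The quoted 32 kB/s with modem-side compression implies roughly
# 5.7:1 compressible data over this 5.6 kB/s channel.
assert round(32_000 / raw_bytes_per_second, 1) == 5.7
```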
The upload speed is 33.6 kbit/s if an analog voice band modem is used (V.90), or
48.0 kbit/s using a digital modem (V.92). Due to the design of public telecommunications
networks, higher speed dialup modems are unlikely to ever appear. Also, depending on
the quality of the line conditions, the user may not be able to reach this maximum speed.
While faster communications such as DSL and cable modems became widely available to
urban consumers in the early 2000s in the United States, dial-up Internet access remains
common, since high speed rural Internet connections are often scarce and because people
may still use it to send faxes.
Modem Control
Modems are often used to initiate and receive calls. It is therefore important to
program the modem to negotiate a connection at the highest possible speed and to reset
itself to a known state after a connection ends.
The server will toggle the Data Terminal Ready (DTR) signal from on to off to
instruct the modem to terminate the connection. Most modems can be configured to reset
themselves when this on-to-off DTR transition occurs.
Note: The tty can be configured to not drop DTR by disabling the hupcl flag in the stty
run-time attributes.
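The hupcl flag mentioned in the note can also be cleared from a program through the POSIX termios interface. A minimal sketch, demonstrated on a pseudo-terminal so it needs no real serial hardware (the function name is illustrative):

```python
import os
import termios

def disable_hupcl(fd):
    """Clear HUPCL so closing the tty does not drop DTR.

    Same effect as stty's -hupcl setting, done directly through
    the POSIX termios interface.
    """
    attrs = termios.tcgetattr(fd)
    attrs[2] &= ~termios.HUPCL        # attrs[2] is the c_cflag word
    termios.tcsetattr(fd, termios.TCSANOW, attrs)

# Demonstrate on a pseudo-terminal; a real port would be e.g. /dev/ttyS0.
master, slave = os.openpty()
disable_hupcl(slave)
assert not (termios.tcgetattr(slave)[2] & termios.HUPCL)
os.close(master)
os.close(slave)
```

With HUPCL cleared, the last close of the port no longer hangs up the line, so the modem keeps its connection when the controlling process exits.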
For the connection between the server and the modem to be fully functional, the
cabling must have the following qualifications:
Note: The 16-port asynchronous adapter does not provide support for the RTS and
CTS signals. It is therefore impossible to use RTS/CTS hardware flow control
with this adapter.
TxD- Transmit Data. Pin 2 of the EIA 232D specification. Data is transmitted on this
signal. Controlled by the server.
RxD- Receive Data. Pin 3 of the EIA 232D specification. Data sent by the modem is
received on this signal. Controlled by the modem.
RTS- Request To Send. Pin 4 of the EIA 232D specification. Used when RTS/CTS flow
control is enabled. This signal is brought high when the system is ready to send data and
dropped when the system wants the modem to stop sending data.
CTS- Clear To Send. Pin 5 of the EIA 232D specification. Used when RTS/CTS flow
control is enabled. This signal will be brought high when the modem is ready to send or
receive data. It will be dropped when the modem wishes the server to stop sending data.
DSR- Data Set Ready. Pin 6 of the EIA 232D specification. Signals the server that the
modem is in a state where it is ready for use. Controlled by the modem.
SG- Signal Ground. Pin 7 of the EIA 232D specification. This signal provides a reference
voltage for the other signals.
DCD- Data Carrier Detect. Pin 8 of the EIA 232D specification. This provides a signal to
the server that the modem is connected to another modem. When this signal is brought
high, programs running on the server are able to open the port. Controlled by the
modem.
DTR- Data Terminal Ready. Pin 20 of the EIA 232D specification. This provides a signal
to the modem that the server is on and ready to accept a connection. This signal is
dropped when the server wishes the modem to drop its connection to another modem,
and brought high when the port is opened. Controlled by the server.
RI- Ring Indicate. Pin 22 of the EIA 232D specification. This provides a signal to the
server that the modem is receiving a call. It is seldom used and is not needed for common
operations. Controlled by the modem.