History of GSM
Theoretical and experimental studies of electricity in the 18th and 19th
centuries enabled the development of the first electrical machines and the wide
use of electricity. During that time the first theories were founded and the rules
of electricity were formulated. The identification of the electron in the second
half of the 19th century by the English physicist J.J. Thomson and the measurement
of its electric charge in 1909 by the American physicist Robert A. Millikan marked
the point at which electronics began to evolve separately from electricity. Another
source of interest in electronics was an observation by the American inventor
Thomas A. Edison. He noticed that a current of electrons would flow from one
electrode to another if the second one carried a relatively positive charge. This
discovery led to the development of electron tubes, which quickly became
useful in industry. The X-ray tube, radio-signal detectors and
transmitters, and the first power systems were based on electron tubes. The
development of the vacuum tube, and later of the three-electrode tube made by
adding a grid between the anode and the cathode (the positive and negative
electrodes in the tube), greatly improved the tube's characteristics and made it
more useful for different electronic applications.
The first half of the 20th century was the era of the vacuum tubes in electronics.
Using the tube permitted the development of radio, long-distance telephony,
television and even the first computers. The best known of these was ENIAC
(the Electronic Numerical Integrator and Computer), completed in 1946.
The two World Wars gave a considerable boost to the advance of electronics.
Governments of rival countries invested heavily in military technology. On the
other hand, they wanted quick solutions and were not looking for long-range
developments. Therefore varieties of the vacuum electron tube remained the
central device in the electronic systems of that time.
The tube has several limitations: its large size, slow operation, poor
accuracy, and difficult, costly production. These limitations motivated the
"solid-state" revolution, which began with the invention of the transistor
in 1947 by the Bell Laboratories scientists John Bardeen, Walter H. Brattain, and
William B. Shockley. Even so, the vacuum tube has not disappeared to this day.
Displays of most kinds (except liquid-crystal ones), laser systems, and some
measurement equipment still include tubes, and for these there is as yet no
alternative product.
Then came the biggest event in the history of electronics: the invention of
semiconductor devices. It made a real revolution in the world of electronics.
Semiconductors are small, accurate, low-cost devices. Transistors and diodes
are made of crystalline solid materials whose electrical properties can be
varied over an extremely wide range by the addition of small quantities of
other elements (dopants). Early semiconductors were produced using germanium
as the material, but after 1960 silicon quickly became the preferred material,
because it was less expensive and could operate over a wider range of
temperatures. For instance, silicon diodes work at temperatures up to 200°C
(392°F), whereas germanium diodes cannot work above 85°C (185°F).
After 1960, transistors quickly supplanted vacuum tubes. Electronic systems
became more complex and capable; computers contained hundreds of thousands of
transistors each, not counting other devices. This fact, together with
the need for compact, lightweight electronic missile-guidance systems, led to the
invention of the integrated circuit (IC). The invention was the result of
independent research by Jack Kilby of Texas Instruments Incorporated in 1958
and by Jean Hoerni and Robert Noyce of Fairchild Semiconductor Corporation in
1959. Early ICs contained about 10 individual semiconductor elements, but the
number rapidly increased over the next ten years. By 1970 a single chip held
about 1,000 elements, and the hard work of physicists and of electronics and
mechanical engineers produced the first microprocessor, with a memory
interface, in 1971. This event was the beginning of computerization and smart
digital electronics.
0 and 1. Those two small numbers changed our world. Computers, data
communication, and the Internet understand only these two numbers. Digital
electronics (microprocessors and their surrounding circuitry) is based on
Boolean algebra, which represents numbers in base two. From 1970 until today
the digital ICs have been in constant development. Everywhere we hear about
new microprocessors that are quicker, more complex, smarter, and less
expensive than the previous ones. Ever since the first computers, electronics
has kept improving itself: calculation has become simpler and measurement
equipment more accurate.
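The base-two representation and Boolean operations mentioned above can be illustrated in a few lines of Python; the helper name `to_binary` is this sketch's own, not a standard function:

```python
# Representing numbers in base two and combining bits with Boolean operators.
# A minimal illustration; Python integers already work in binary internally.

def to_binary(n: int) -> str:
    """Return the base-two digits of a non-negative integer."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))   # remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(13))        # 1101
print(0b1101 & 0b1011)      # Boolean AND, bit by bit: 0b1001 = 9
print(0b1101 | 0b1011)      # Boolean OR,  bit by bit: 0b1111 = 15
```

Every digital circuit, from a single logic gate to a microprocessor, is ultimately built from these AND/OR-style operations on 0s and 1s.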
The process of development was relatively quick and interesting, driven by some
of the best minds of the 20th century. But the history of electronics has not
ended, as we can see, and our everyday use of electronics is the best evidence
of it.
BASICS OF ELECTRICITY
Electricity is a form of energy involving the flow of electrons. All matter is made
up of atoms, and an atom has a center, called a nucleus. The nucleus contains
positively charged particles called protons and uncharged particles called
neutrons. The nucleus of an atom is surrounded by negatively charged particles
called electrons. The negative charge of an electron is equal to the positive
charge of a proton, and the number of electrons in an atom is usually equal to the
number of protons. When the balancing force between protons and electrons is
upset by an outside force, an atom may gain or lose an electron. When electrons
are "lost" from an atom, the free movement of these electrons constitutes an
electric current.
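The idea that current is a flow of electrons can be put in numbers: current I equals the total charge passed divided by time, I = n·e/t, where e is the elementary charge. A small sketch (the function name is this illustration's own):

```python
# Current as electron flow: I = (number of electrons n * elementary charge e) / t.
E_CHARGE = 1.602176634e-19  # coulombs per electron (exact SI value since 2019)

def current_amperes(n_electrons: float, seconds: float) -> float:
    """Current in amperes produced by n_electrons passing a point in `seconds`."""
    return n_electrons * E_CHARGE / seconds

# Roughly 6.24e18 electrons passing a point each second make one ampere.
print(current_amperes(6.241509e18, 1.0))   # close to 1.0 A
```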
Electricity is a basic part of nature and it is one of our most widely used forms of
energy. We get electricity, which is a secondary energy source, from the
conversion of other sources of energy, like coal, natural gas, oil, nuclear power
and other natural sources, which are called primary sources. Many cities and
towns were built alongside waterfalls (a primary source of mechanical energy)
that turned water wheels to perform work. Before electricity generation began
slightly over 100 years ago, houses were lit with kerosene lamps, food was cooled
in iceboxes, and rooms were warmed by wood-burning or coal-burning stoves.
Beginning with Benjamin Franklin's experiment with a kite one stormy night in
Philadelphia, the principles of electricity gradually became understood. In the
late 1800s, everyone's life changed with the invention of the electric light bulb.
Prior to 1879, electricity had been used in arc lights for outdoor lighting. The light
bulb’s invention used electricity to bring indoor lighting to our homes.
Theory
An electric generator (long ago, a machine that generated electricity was called a
"dynamo"; today's preferred term is "generator") is a device for converting
mechanical energy into electrical energy. The process is based on the relationship
between magnetism and electricity. When a wire or any other electrically
conductive material moves across a magnetic field, an electric current occurs in
the wire. The large generators used by the electric utility industry have a
stationary conductor. A magnet attached to the end of a rotating shaft is
positioned inside a stationary conducting ring that is wrapped with a long,
continuous piece of wire. When the magnet rotates, it induces a small electric
current in each section of wire as it passes. Each section of wire constitutes a
small, separate electric conductor. All the small currents of individual sections add
up to one current of considerable size. This current is what is used for electric
power.
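For an idealized rotating-coil generator, the induced voltage follows emf(t) = N·B·A·ω·sin(ωt), so its peak is N·B·A·ω (turns times field times coil area times angular speed). A sketch with illustrative parameter values, not ones from the text:

```python
import math

# Peak EMF of an idealized rotating-coil generator: emf_peak = N * B * A * omega.
# All numeric values below are illustrative assumptions.

def peak_emf(turns: int, b_field_tesla: float, area_m2: float, rpm: float) -> float:
    """Peak induced voltage for a coil rotating at `rpm` in a uniform field."""
    omega = 2 * math.pi * rpm / 60.0   # angular speed in rad/s
    return turns * b_field_tesla * area_m2 * omega

# A 100-turn coil, 0.5 T field, 0.02 m^2 area, spinning at 3000 rpm:
print(round(peak_emf(100, 0.5, 0.02, 3000), 1))   # about 314 V
```

Real utility generators are more elaborate (many windings, three phases), but the same induction relationship sets the output voltage.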
An electric utility power station uses a turbine, engine, water wheel, or other
similar machine to drive an electric generator or a device that converts
mechanical or chemical energy to electricity. Steam turbines, internal-combustion
engines, gas combustion turbines, water turbines, and wind turbines are the most
common methods to generate electricity.
Electronics
The history of electronics began to evolve separately from the history of
electricity late in the 19th century. The English physicist J.J. Thomson identified
the electron in 1897, and the American physicist Robert A. Millikan measured its
electric charge in 1909.
The history of electronics is a story of the twentieth century and three key
components—the vacuum tube, the transistor, and the integrated circuit. In
1883, Thomas Alva Edison discovered that electrons will flow from one metal
conductor to another through a vacuum. This discovery of conduction became
known as the Edison effect. In 1904, John Fleming applied the Edison effect in
inventing a two-element electron tube called a diode, and Lee De Forest
followed in 1906 with the three-element tube, the triode. These vacuum tubes
were the devices that made manipulation of electrical energy possible so it could
be amplified and transmitted.
Communications technology was able to make huge advances before World War
II as more specialized tubes were made for many applications. Radio as the
primary form of education and entertainment was soon challenged by television,
which was invented in the 1920s but didn't become widely available until 1947.
Bell Laboratories publicly unveiled the television in 1927, and its first forms were
electromechanical. When an electronic system was proved superior, Bell Labs
engineers introduced the cathode ray picture tube and color television. But
Vladimir Zworykin, an engineer with the Radio Corporation of America (RCA), is
considered the "father of the television" because of his inventions, the picture
tube and the iconoscope camera tube.
After the war, electron tubes were used to develop the first computers, but they
were impractical because of the sizes of the electronic components. In 1947, the
transistor was invented by a team of engineers from Bell Laboratories. John
Bardeen, Walter Brattain, and William Shockley received a Nobel prize for their
creation, but few could envision how quickly and dramatically the transistor would
change the world. The transistor functions like the vacuum tube, but it is tiny by
comparison, weighs less, consumes less power, is much more reliable, and is
cheaper to manufacture with its combination of metal contacts and semiconductor
materials.
Electronics is a field of engineering and applied physics that grew out of the
study and application of electricity. Electricity concerns the generation and
transmission of power and uses metal conductors. Electronics manipulates the
flow of electrons in a variety of ways and accomplishes this by using gases,
materials like silicon and germanium that are semiconductors, and other devices
like solar cells, light-emitting diodes (LEDs), masers, lasers, and microwave
tubes. Electronics applications include radio, radar, television, communications
systems and satellites, navigation aids and systems, control systems, space
exploration vehicles, micro devices like watches, many appliances, and
computers.
Oscillators
Oscillators are amplifiers that receive an incoming signal and their own output as
feedback (that is, also as input). They produce radio and audio signals for
precision signaling, such as warning systems, telephone electronics between
individual telephones and central telephone stations, computers, alarm clocks,
high-frequency communications equipment, and the high-frequency transmissions
of broadcasting stations.
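The feedback idea behind an oscillator, output routed back to the input, can be sketched in discrete time: the well-known recurrence s[n] = 2·cos(ω)·s[n-1] - s[n-2] computes each new sample from the two previous (fed-back) outputs and generates a pure sine wave. The function name and parameter values here are this sketch's own:

```python
import math

# A discrete-time sketch of feedback oscillation: each output sample is
# computed from the previous two outputs. The recurrence
#   s[n] = 2*cos(w)*s[n-1] - s[n-2]
# generates sin(n*w) exactly, so the loop sustains a tone with no input.

def oscillator(freq_hz: float, sample_rate: float, n_samples: int):
    w = 2 * math.pi * freq_hz / sample_rate
    s_prev2, s_prev1 = 0.0, math.sin(w)        # seed the two initial samples
    samples = [s_prev2, s_prev1]
    for _ in range(n_samples - 2):
        s = 2 * math.cos(w) * s_prev1 - s_prev2  # feedback of past output
        samples.append(s)
        s_prev2, s_prev1 = s_prev1, s
    return samples

tone = oscillator(440.0, 8000.0, 64)   # a 440 Hz tone at 8 kHz sampling
```

A hardware oscillator does the analog equivalent: an amplifier whose output is fed back to its input with the right phase keeps ringing at one frequency.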
Optical Electronics
Optical electronics involve combined applications of optical (light) signals and
electronic signals. Optoelectronics have a number of uses, but the three general
classifications of these uses are to detect light, to convert solar energy to electric
energy, and to convert electric energy to light. Like radio waves and microwaves,
light is also a form of electromagnetic radiation except that its wavelength is very
short. Photo-detectors allow light to irradiate a semiconductor that absorbs the
light as photons and converts these to electric signals. Light meters, burglar
alarms, and many industrial uses feature photo-detectors.
Solar cells convert light from the sun to electric energy. They use single-crystal
doped silicon, to reduce internal resistance, together with metal contacts, and
convert over 14% of the solar energy that strikes their surfaces into electrical
output. Cheaper polycrystalline silicon sheets, as well as lenses, are being
developed to reduce the cost and improve the effectiveness of solar cells.
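The 14% figure translates directly into output power: electrical power equals cell area times efficiency times incident irradiance. A sketch assuming the standard test irradiance of 1000 W/m² (an assumption, not stated in the text):

```python
# Electrical output of a solar cell: P_out = area * efficiency * irradiance.
# 1000 W/m^2 is the standard-test-condition irradiance, assumed here.

def solar_output_watts(area_m2: float, efficiency: float,
                       irradiance_w_m2: float = 1000.0) -> float:
    """Electrical power delivered by a cell of the given area and efficiency."""
    return area_m2 * efficiency * irradiance_w_m2

# A 1 m^2 cell at the text's 14% efficiency in full sun:
print(solar_output_watts(1.0, 0.14))   # about 140 W
```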
Like microwave electronics, optical electronics use waveguides to reflect, confine,
and direct light. The most familiar form of optical waveguide is the optical fiber.
These fine, highly specialized glass fibers are made of silica that has been doped
with germanium dioxide.
Electronic components
Integrated circuits are sets of electronic components that are interconnected. Active
components supply energy and include vacuum tubes and transistors. Passive
components absorb energy and include resistors, capacitors, and inductors.
Vacuum tubes or electron tubes are glass or ceramic enclosures that contain metal
electrodes for producing, controlling, or collecting beams of electrons. A diode has
two elements, a cathode and an anode. The application of energy to the cathode
frees electrons which migrate to the anode. Electrons only flow during one half-cycle
of an alternating (AC) current. A grid inserted between the cathode and anode can
be used to control the flow and amplify it. A small voltage can cause large flows of
electrons that can be passed through circuitry at the anode end.
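Because electrons flow only when the anode is positive, a diode passes just one half-cycle of an AC waveform. This one-way conduction is the basis of half-wave rectification, sketched here with an ideal diode (a simplification; real tubes have a forward voltage drop):

```python
import math

# The one-way conduction described above: an ideal diode passes current only
# while the anode is positive, blocking the negative half-cycle entirely.

def half_wave_rectify(ac_samples):
    """Pass positive half-cycles, block negative ones (ideal diode model)."""
    return [v if v > 0 else 0.0 for v in ac_samples]

ac = [math.sin(2 * math.pi * n / 16) for n in range(16)]  # one AC cycle
dc = half_wave_rectify(ac)
# The first half-cycle survives; the second (negative) half becomes 0.
```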
Special purpose tubes use photoelectric emission and secondary emission, as in the
television camera tube that emits and then collects and amplifies return beams to
provide its output signal. Small amounts of argon, hydrogen, mercury, or neon
vapor in a tube change its current capacity, regulate its voltage, or control large
currents. The finely focused beam from a cathode-ray tube illuminates the coating
on the inside of the television picture tube to reproduce images.
During the early 1980s, analog cellular telephone systems were experiencing rapid growth in Europe, particularly
in Scandinavia and the United Kingdom, but also in France and Germany. Each country developed its own
system, which was incompatible with everyone else's in equipment and operation. This was an undesirable
situation, because not only was the mobile equipment limited to operation within national boundaries, which in a
unified Europe were increasingly unimportant, but there was also a very limited market for each type of equipment,
so economies of scale and the subsequent savings could not be realized.
The Europeans realized this early on, and in 1982 the Conference of European Posts and Telegraphs (CEPT)
formed a study group called the Groupe Spécial Mobile (GSM) to study and develop a pan-European public land
mobile system. The proposed system had to meet certain criteria, among them good subjective speech quality,
low terminal and service cost, support for international roaming, the ability to support handheld terminals,
spectral efficiency, and ISDN compatibility.
In 1989, GSM responsibility was transferred to the European Telecommunication Standards Institute (ETSI), and
phase I of the GSM specifications were published in 1990. Commercial service was started in mid-1991, and by
1993 there were 36 GSM networks in 22 countries [6]. Although standardized in Europe, GSM is not only a
European standard. Over 200 GSM networks (including DCS1800 and PCS1900) are operational in 110 countries
around the world. In the beginning of 1994, there were 1.3 million subscribers worldwide [18], which had grown to
more than 55 million by October 1997. With North America making a delayed entry into the GSM field with a
derivative of GSM called PCS1900, GSM systems exist on every continent, and the acronym GSM now aptly
stands for Global System for Mobile communications.
The developers of GSM chose an unproven (at the time) digital system, as opposed to the then-standard analog
cellular systems like AMPS in the United States and TACS in the United Kingdom. They had faith that
advancements in compression algorithms and digital signal processors would allow the fulfillment of the original
criteria and the continual improvement of the system in terms of quality and cost. The over 8000 pages of GSM
recommendations try to allow flexibility and competitive innovation among suppliers, but provide enough
standardization to guarantee proper interworking between the components of the system. This is done by
providing functional and interface descriptions for each of the functional entities defined in the system.
A GSM network is composed of several functional entities, whose functions and interfaces are specified. Figure 1
shows the layout of a generic GSM network. The GSM network can be divided into three broad parts. The Mobile
Station is carried by the subscriber. The Base Station Subsystem controls the radio link with the Mobile Station.
The Network Subsystem, the main part of which is the Mobile services Switching Center (MSC), performs the
switching of calls between the mobile users, and between mobile and fixed network users. The MSC also handles
the mobility management operations. Not shown is the Operations and Maintenance Center, which oversees the
proper operation and setup of the network. The Mobile Station and the Base Station Subsystem communicate
across the Um interface, also known as the air interface or radio link. The Base Station Subsystem communicates
with the Mobile services Switching Center across the A interface.
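The division of labor just described can be sketched as a toy model: the Mobile Station talks to the Base Station Subsystem over the Um interface, which in turn reaches the MSC over the A interface. The class and method names below are this sketch's own, not terms defined by the GSM specifications:

```python
from dataclasses import dataclass

# Toy model of the generic GSM layout: MS <-Um-> BSS <-A-> Network Subsystem (MSC).
# Purely illustrative; real entities carry far more state and signaling.

@dataclass
class NetworkSubsystem:
    name: str = "MSC"
    def switch_call(self, caller: str, callee: str) -> str:
        # The MSC performs the actual call switching and mobility management.
        return f"{self.name} switches call {caller} -> {callee}"

@dataclass
class BaseStationSubsystem:
    nss: NetworkSubsystem
    def a_interface(self, caller: str, callee: str) -> str:
        # BSS forwards to the MSC across the A interface.
        return self.nss.switch_call(caller, callee)

@dataclass
class MobileStation:
    msisdn: str
    bss: BaseStationSubsystem
    def um_interface(self, callee: str) -> str:
        # MS reaches the BSS across the air (Um) interface.
        return self.bss.a_interface(self.msisdn, callee)

msc = NetworkSubsystem()
ms = MobileStation("+15550100", BaseStationSubsystem(msc))
print(ms.um_interface("+15550199"))
```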
Cellular Communications
Definition: A cellular mobile communications system uses a large number of low-power wireless transmitters to
create cells—the basic geographic service area of a wireless communications system. Variable power levels allow
cells to be sized according to the subscriber density and demand within a particular region. As mobile users travel
from cell to cell, their conversations are handed off between cells to maintain seamless service. Channels
(frequencies) used in one cell can be reused in another cell some distance away. Cells can be added to
accommodate growth, creating new cells in unserved areas or overlaying cells in existing areas.
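How far apart co-channel cells must be follows from standard hexagonal-geometry relations (textbook results, not stated in the text above): the cluster size is N = i² + i·j + j² for shift integers i, j, and the frequency-reuse distance is D = R·√(3N) for cell radius R:

```python
import math

# Frequency reuse in a hexagonal cell layout. Channels used in one cell can be
# reused D = R * sqrt(3 * N) away, where N = i^2 + i*j + j^2 is the cluster size.
# These are standard textbook relations; the numeric values are illustrative.

def cluster_size(i: int, j: int) -> int:
    return i * i + i * j + j * j

def reuse_distance(cell_radius_km: float, n: int) -> float:
    return cell_radius_km * math.sqrt(3 * n)

n = cluster_size(2, 1)                    # the common 7-cell cluster
print(n)                                  # 7
print(round(reuse_distance(2.0, n), 2))   # 2 km cells: reuse distance about 9.17 km
```

Smaller clusters reuse spectrum more aggressively (more capacity) at the price of more co-channel interference, which is the central trade-off in cell planning.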
Mobile Communications Principles: Each mobile uses a separate, temporary radio channel to talk
to the cell site. The cell site talks to many mobiles at once, using one channel per mobile. Channels use a pair of
frequencies for communication—one frequency (the forward link) for transmitting from the cell site and one
frequency (the reverse link) for the cell site to receive calls from the users. Radio energy dissipates over distance,
so mobiles must stay near the base station to maintain communications. The basic structure of mobile networks
includes telephone systems and radio services; a mobile radio service, by contrast, operates in a closed network
and has no access to the telephone system.
1921: Personnel of the Detroit Police Department's radio bureau began experimentation with a band near
2 MHz for vehicular mobile service. On April 7, 1928 the Department commenced regular one-way radio
communication with its patrol cars. The system established the practicality of land-mobile radio for
police work and led to its adoption throughout the country.

1933: The police department in Bayonne, New Jersey initiated regular two-way communications with its
patrol cars, a major advance over previous one-way systems. The very high frequency system placed
transmitters in patrol cars to enable patrolmen to communicate with headquarters and other cars
instead of just receiving calls. Two-way police radio became standard throughout the country following
the success of the Bayonne initiative.

1940: New frequencies allocated between 30 and 40 MHz led to a substantial buildup of police radio
systems. A major advance in police radio occurred when the Connecticut State Police began operating a
two-way, frequency-modulated (FM) system in Hartford. The statewide system, developed by Daniel E.
Noble of the University of Connecticut and engineers at the Fred M. Link Company, greatly reduced
static, the main problem of amplitude-modulated (AM) systems. FM mobile radio became standard
throughout the country following the success of the Connecticut initiative.

1940s: The FCC allocated some 40 MHz of spectrum in the range between 30 and 500 MHz for a host of
mobile services for private individuals, companies, and public agencies.

Late 1940s: The Bell System embarked on a program of supplying "public correspondence systems"
(communication among a variety of users, provided by a common carrier). The FCC classified these
services as the Domestic Public Land Mobile Radio Service (DPLMRS).

1946: The first Bell "urban" DPLMRS was inaugurated in St. Louis, with three channels near 150 MHz.

1947: A 35 to 44 MHz "highway" system between New York and Boston was inaugurated. It was thought
that these frequencies would carry further along highways. This was all too true: due to atmospheric
skip, unwanted conversations were carried across the country.

Note: All of the aforementioned services employed push-to-talk (PTT) operation, i.e. the radio was
half duplex, which is unfamiliar and awkward for ordinary phone users, and they required operator
intervention to place a call.

1964: First automatic 150 MHz service (called MJ). A free channel was automatically assigned. The
system was full duplex and customers could do their own dialing.

1969: First automatic 450 MHz service (called MK), which extended MJ to a new band. Taken together
these two services became the IMTS (Improved Mobile Telephone Service), the standard until the
development of AMPS.

Late 1970s: In spite of the fact that mobile service was, indeed, a scarce luxury, the demand for
service was rising rapidly.

1970-77: The FCC debated frequency allocation to common carriers. In 1974 it approved the underlying
concepts of wireless cellular phone service and allocated for this purpose 666 duplex (two-way)
channels in the 800 to 900 MHz frequency range, with authorization granted to Illinois Bell in 1978.

1978: Field trials began: AMPS (Advanced Mobile Phone Service) at 850 MHz in Chicago, and ARTS
(American Radio Telephone Service) in Washington, DC.

1981: NMT (Nordic Mobile Telephone System) entered public service in Sweden, developed by Ericsson
using frequencies in the 450 to 470 MHz band.

1993: Responding to a spectacular and unexpected rise in demand for wireless services, Congress, in
the Omnibus Budget Reconciliation Act of 1993, mandated that the FCC reallocate portions of the
electromagnetic spectrum for "personal communication" and authorized the FCC to employ competitive
bidding procedures in awarding licenses for the use of these new spectral resources. The mandate had
several objectives, not the least of which was raising revenue to help balance the federal budget.
Furthermore, competitive bidding was deemed the most effective means to expedite the licensing
process and to open up opportunities for beneficial competition.