SIGMOID 2K9

INVITATION

WE CORDIALLY INVITE YOU TO THE


INAUGURAL FUNCTION
OF
SIGMOID 2K9
A NATIONAL LEVEL TECHNICAL PAPER CONTEST FOR STUDENTS
JOINTLY ORGANISED BY
Department of Electrical and Electronics Engineering,
S.V.U College of Engineering
And
IETE centre, Tirupati
 


Dr.P.V.G.K.Sharma, Head, Dept. of Bio-Technology, SVIMS
Will be the Chief Guest
Prof. M.M.Naidu, Principal, S.V.U. College of Engineering
Will be the Guest of Honor
Prof. G.R.Reddy, Head, Dept of EEE, S.V.U.C.E
Will Preside
Venue: SENATE HALL, S.V. University
Date: 14-02-09    Time: 9:15 am
Organizing Committee
ELECTRICAL [FEB 14TH]

CODE | NAMES | COLLEGE | TITLE
E-1 | M.Prakash, N.Paneendra | S.V. College of Engg., Chittoor; Nalanda Institute of Engg., Guntur |
E-2 | N.Anusha, N.Nagarjuna | SKIT, Srikalahasti | Expert system development...
E-3 | V.Sravan Tej, B.Madhava Reddy | A.I.T.S., Rajampet | Plant condition monitoring
E-4 | A.Bhanu Tej, J.Mohan Kumar | Siddharth Institute, Puttur | Detection of Pilferages & Power Thefts
E-5 | T.Mounika, B.Haritha | A.I.T.S., Rajampet | Suppression of resonance oscillations
E-6 | K.S.Ravi Kumar, C.S.Umar Farooq | S.V.U.C.E., Tirupati | Non-Conventional Energy Sources
E-7 | B.Chandra Mohan, B.Ramesh | GATES Institute, Gooty | HVDC Light Technology
E-8 | K.Lakshmi Teja, N.Akhileswari | Bapatla Engg. College, Bapatla | SCADA in Power Distribution Systems
E-9 | B.Sandeep, V.Joyson | S.V.U.C.E., Tirupati | Eco-Friendly Power Generation
E-10 | E.Kanakaraju, P.Javed Alikhan | JNTUCE, Ananthapur | Wireless Power Transmission (Witricity)
E-11 | T.V.Suresh, U.M.Abhilash | St. Johns Coll. of Engg., Kurnool | Differential Induction Machine
E-12 | M.Bharagava Narayana, A.Himagiri Prasad | S.V.U.C.E., Tirupati | Modern Solar Power System
E-13 | J.Swetha, V.S.R.B.D.Sameera | R.V.R. & J.C. Coll. of Engg., Guntur | Optimal voltage regulator placement
E-14 | M.Nanda Deepa, I.Kavitha | Vignans Engg. College, Guntur | Artificial Intelligence techniques
E-15 | M.Srinivasa Reddy, B.Srikanth | GATES Institute, Gooty | Trends in power system protection & control
E-16 | G.Sowmya, J.Swetha | R.V.R. & J.C. Coll. of Engg., Guntur | Parameter to quash frequency control problem
E-17 | B.Mallikarjuna, M.Manjunath | GATES Institute, Gooty | Non-Conventional Sources of Energy
E-18 | C.Vasavi, A.Sandhya | Sri Vidyanikethan, Rangampet | Power quality & voltage stability
E-19 | K.V.Sathyavathi | QIS College of Engg., Ongole | Utilization of Bio-Mass Energy
E-20 | R.Sarika, G.Sowjanya | Bapatla Engg. College, Bapatla | Electric power quality disturbance detection
E-21 | K.Ravi Kumar, D.Jamaal Reddy | St. Johns Coll. of Engg., Kurnool | Power Electronic converters
E-22 | E.Varadarajulu Chetty, S.Mahir Ali Mohiddin | S.V.U.C.E., Tirupati | Electric loco
E-23 | Abdul Rauf, MD.Ali | V.R. Siddhartha Engg. Coll., Vijayawada | A Solution to Remote Detection of Illegal Electricity
E-24 | M.Bharadwaja, G.V.Sudheer Kumar | Nalanda Institute of Engg., Guntur | Fault diagnosis in transmission systems
ELECTRONICS [Feb. 14]

CODE | NAMES | COLLEGE | TITLE
I-1 | B.PRASHANTHI, V.PREETHI | SREE VIDHYANIKETHAN, RANGAMPETA | FPGA IMPLEMENTATION OF ADAPTIVE MEDIAN FILTER FOR IMAGE IMPULSE NOISE SUPPRESSION
I-2 | K.C.POORNIMA, N.ANITHA | S.V.U.C.E., TIRUPATHI | BULLET PROOF VESTS USING CARBON NANO TUBES
I-3 | M.PRANAVA SINDHURI, T.SHANTHI PAVANI | BHOJ REDDY ENGG., HYDERABAD; BAPATLA ENGG., BAPATLA | NANO TECHNOLOGY
I-4 | B.MANJUNATH, S.SAIBABU | GATES INSTITUTE OF TECHNOLOGY | EMBEDDED SYSTEMS
I-5 | Y.HARSHA VARDHAN REDDY, P.PAVAN | SREE VIDHYANIKETHAN, RANGAMPETA | NANO MOBILE
I-6 | A.ALIBABU, SD.AREEF | S.V.U.C.E., TIRUPATHI | BIOMEDICAL APPLICATION OF NANO ELECTRONICS
I-7 | M.V.VARUN BABU, G.T.PRABHA | BAPATLA ENGG. COLLEGE, BAPATLA | ROBOTICS - MCKIBBEN'S
I-8 | S.SRIKANTH, P.SUJAN KUMAR REDDY | K.S.R.M.C.E., KADAPA | ETHICAL HACKING
I-9 | P.SHANMUKHA SREENIVAS, P.NIKHIL | S.V.U.C.E., TIRUPATHI | HAPTIC TECHNOLOGY
I-10 | V.ANJALI, Y.L.SWATHI | M.I.T.S., MADANAPALLI | UTILITY FOG
I-11 | K.SONY, G.ANUSHA | L.B.R.C.E., MYLAVARAM | NANO WIRES CAN LISTEN IN ON NEURONS
I-12 | B.KEERTHI, V.JEEVITHA | M.I.T.S., MADANAPALLE | STEREO VISION
I-13 | CH.HARSHA, K.PRAVEEN | N.I.E.T. | THE ROLE OF VLSI & SDR IN MAKING MOBILES AFFORDABLE & FLEXIBLE
I-14 | M.MADAN, N.ASHOK | S.V.U.C.E., TIRUPATHI | BIO MEDICAL INSTRUMENTATION
I-15 | P.MAHESH | A.C.E.T., ALLAGADDA | NANO ELECTRONICS
I-16 | ANUSHA SHRUTHI.D, R.PALLAVI | A.I.T.S., RAJAMPETA | BIO CHIP TECHNOLOGY - THE FUTURE TECHNOLOGY
I-17 | K.KANCHANA GANGA, B.K.BHARATH KUMAR | GATES INSTITUTE OF TECHNOLOGY, GOOTY | ELECTRONIC NOISE
I-18 | L.SUNAINA SULTHANA, P.SUJITHA | M.I.T.S., MADANAPALLI | SMART PHONE
I-19 | Y.SIVA KRISHNA, J.VAGDEVI RAMYA | BAPATLA ENGG. COLLEGE | A REVOLUTIONARY SYSTEM TO DETECT HUMAN BEINGS BURIED UNDER EARTHQUAKE RUBBLE
COMMUNICATIONS [FEB 14TH]

CODE | NAMES | COLLEGE | TITLE
C-1 | B.DIVYASREE, M.GOUTHAMI | SREE VIDYANIKETHAN COLLEGE OF ENGG., A.RANGAMPET | WORLDWIDE INTEROPERABILITY FOR MICROWAVE ACCESS (WIMAX)
C-2 | S.V.SAIKRISHNA, B.SANTHOSH KUMAR | G.I.E.T., RAJAHMUNDRY | 4G MOBILE COMMUNICATION
C-3 | N.PAVAN KUMAR RAO, S.K.IMAM BASHA | VAAGDEVI INSTITUTE OF TECHNOLOGY & SCIENCE, PRODDATUR | HYPER SONIC SOUND
C-4 | N.RAMYA, D.N.SRAVANI | S.V.U.C.E., TIRUPATI | NANOMOBILE
C-5 | G.RACHANA, V.SHILPA | KONERU LAKSHMAIAH COLLEGE OF ENGG., VADDESWARAM | ENHANCED WATERSHED IMAGE PROCESSING SEGMENTATION
C-6 | R.RADHAMMA, D.SHILPA | A.I.T.S., RAJAMPET | MILITARY APPLICATIONS USING GLOBAL POSITIONING SYSTEM
C-7 | A.UDAY KUMAR, P.REDDY PRASAD | M.I.T.S., MADANAPALLI | WIMAX IN 4G COMMUNICATIONS
C-8 | A.NAGA SWETHA, C.KISHORE | S.V.U.C.E., TIRUPATI | QUEUEING SYSTEMS
C-9 | C.SWATHI, T.SRAVANI | A.I.T.S., RAJAMPET | DETECTION OF FAULTS IN PCBs USING IMAGE PROCESSING
C-10 | S.KARIMULLA, V.NOORIMOHAMMAD | M.I.T.S., MADANAPALLI | WIRELESS COMMUNICATION
C-11 | G.VANDANA, J.SARANYA | S.V.U.C.E., TIRUPATI | MOBILE TECHNOLOGIES
C-12 | R.SARIKA, G.SOWJANYA | BAPATLA ENGG. COLLEGE, BAPATLA | DSP TO BOOST SCOPE PERFORMANCE
C-13 | B.SIVA PRASAD, K.KIRAN KUMAR | M.I.T.S., MADANAPALLI | DIGITAL SIGNAL PROCESSING - HOW NIGHT VISION WORKS
C-14 | ALEKYA.N.V., SAHITHYA BHARATHI.T | SRI VIDYANIKETHAN | CRYPTOGRAPHY
C-15 | K.V.NAGABABU, Y.NELIN BABU | LAKKIREDDY BALIREDDY COLLEGE, MYLAVARAM, KRISHNA (DIST) | ULTRAWIDEBAND - GOLD IN THE GARBAGE FREQUENCY
C-16 | S.SUGUNA DEVI, A.SHIRISHA | RGMCET, NANDYAL | VLSI IMPLEMENTATION OF OFDM
C-17 | P.M.SAISREE, S.SRILEKA | S.V.U.C.E., TPT | ADVANCED COMM. THROUGH REDTACTON
C-18 | P.V.SAI VIJITHA, A.PRAVALLIKA RANI | CHADALAWADA RAMANAMMA ENGG. COLLEGE | A REAL TIME FACE RECOGNITION SYSTEM USING CUSTOM VLSI HARDWARE
A PAPER PRESENTATION ON
COST EFFECTIVE AND ENERGY EFFICIENT BRUSHLESS
EXCITATION MACHINE
FOR
SIGMOID-2K9
Submitted by
Mr.N.Paneendra Mr. M.Prakash
Email id: [email protected] [email protected]
+919963986171 +919652738776
Third Year B.Tech EEE Student
Department of Electrical & Electronics Engineering
Sri Venkateswara College of Engineering &
Technology
R.V.S Nagar
Chittoor-517 127
Abstract
This paper presents the optimization technique adopted on the conventional Brushless Exciter (Conv. BLE) of Turbo Generators in captive power plants by changing over to the Over Hang type Brushless Exciter (OHBLE) without loss of function. The OHBLE is a compact, lightweight, cost effective and highly reliable machine. Benefits like elimination of the shaft, bearings, lub oil system, foundation frame etc. are achieved. Volume and weight are brought down to 30% of that of the standard machine. Lightweight, high energy rare earth alloy magnets are used for the Permanent Magnet Generator, which eliminated the major operational problems at sites. Efficiency of the product is ultimately improved by this design. The prime goal of customer satisfaction is achieved. Savings in electrical energy are obtained by combined performance testing of generator and exciter.
Key words:
Brushless Exciter, cost effective, energy savings, permanent magnets, rectifier, reliability
1.0. Introduction
Conversion of mechanical rotation into electrical energy in thermal power plants is performed by the Turbo Generator, an electrical rotating machine, as shown in Fig. 1. The field power requirement of the generator is obtained through the classic arrangement of a direct driven Brushless Exciter, an AC to DC converter. It consists of an exclusive shaft on which the magnetic core, copper windings, rectifier assembly, permanent magnets and monitoring peripherals are mounted, rotating at 3000 rpm. It is coupled at one end to the generator rotor and supported on an outboard bearing at the tail end with an oil lubrication system. Operational problems like higher vibrations, misalignment, oil leakage, flying off of magnets and heavy damage to the exciter were frequently reported from sites. Customer dissatisfaction over loss of power generation due to sudden outages of equipment and the long time cycle for rectification prompted a modification of the existing design to the Over Hang type Brushless Exciter (OHBLE), which is compact and trouble free. The initiative of introducing the OHBLE to obtain high reliability was successfully implemented on generators in the 20 MW to 40 MW range in industrial-sector captive power plants, and is being further extended to larger capacity machines. Efficiency has been enhanced with low loss electrical sheets and rare earth, lightweight, high energy magnets, which supports the energy efficiency program. The first section of this paper is devoted to the background of the Conv. BL Exciter and its assemblies. The second section describes the problems noticed during operation at sites and the remedial measures taken to bring back the machine at the earliest. In the third section, the energy efficient, cost effective OHBL Exciter is introduced. Comparison between the Conv. and OHBL Exciters, tangible and intangible benefits, and electrical energy savings in the testing methodology are detailed in the fourth section. Remarkable achievements, like making the customer trouble free, are addressed in the fifth section. Finally the conclusions are summarized.
2.0. Conventional Brushless Exciter (Conv. BLE)
The Conv. BLE is an electrical rotating machine which generates DC power in the order of hundreds of kilowatts. It is directly coupled to the turbo generator in thermal power plants. The rotational and synchronous speed of 3000 rpm produces alternating electric power in the armature. The rectifier circuit, which consists of semiconductor power diodes, over current protection fuses and surge guard snubber circuits, converts the AC power into DC power.
2.1. Constructional Features: (Fig. 2)
The Brushless Exciter consists of the following assemblies:
1) 3 phase AC armature 2) Rectifier wheel assembly 3) Permanent magnet assembly 4) Forged machined shaft 5) Coupling flange 6) Support bearing pedestal 7) 3 phase PMG winding 8) Yoke with field coils 9) Stator frame 10) Foundation frame
2.2. Manufacture Technology:
The shaft is exclusively procured and machined to various steps to receive the components. The 3 phase AC armature is manufactured by making the magnetic core and copper winding, and is shrunk fitted on to the shaft. The rectifier wheel and hub are machined and assembled on to the shaft. The permanent magnets are massive and are bolted to a hexagonal steel hub, which is in turn shrink fitted and locked. The integral coupling flange is machined and the holes are precisely drilled. The shaft with its mounted assemblies is coupled to the turbo generator shaft. The DC power transfers through a spring loaded electrical contactor at the coupling zone. The stator frame consists of the pole winding and the PMG 3 phase AC winding. The total assembly is mounted on the exclusive foundation frame with a support bearing at the tail end. As a package, it is transported to the site. On a separate civil foundation, the frame along with the exciter is anchored, coupled to the generator and aligned.
2.3. Problems experienced:
Frequent operational problems were reported from sites on this separately mounted brushless exciter, as mentioned below:
Shearing of fixing bolts of permanent magnets and severe damage to the machine
Rubbing of pole shoes with the PMG core
Oil leakage and vibration problems
Armature damage due to release of the steel bandage
The damages are irreparable, and the long duration of outages due to rectification at works made customers unhappy. Reliability of the machine was questionable and business was at risk. Root cause analysis, brainstorming sessions and visits to various power plants were taken up, and a variety of solutions emerged. The immediate solution to bring back the machine was to monitor the manufacturing process with hold points for quality control inspection; an additional locking arrangement of the magnet assembly and permanent welding of the PMG yoke to the stator frame were carried out. A spare exciter was kept ready for emergency service to meet contingencies. The permanent solution of changing over to the state of the art technology, namely the Over Hang Brushless Exciter, was strongly recommended for upcoming projects.
3.0. Overhang Brushless Exciter (OHBLE): (Fig. 3)
The OHBLE consists of all the assemblies mentioned for the Conv. BLE but is very compact in size and weight; its power to weight ratio is nearly three. It caters to the needs of all industrial sector power plants up to 80 MW capacity, unlike the two variants of the Conv. BLE. The yoke is made with 12 poles carrying the field winding and is fitted to the inner side of a thinner cross section steel cylinder. By selecting a higher number of poles, the thickness of the yoke required for the magnetic flux path has been reduced. This enables less material consumption, easy handling, less machining cost and time cycle reduction.
3.1. Rectifier wheel assembly:
The geometry of the diode wheel carrying the rectifier circuit electronic components has been optimized. The space capacity of the wheel is utilized to the maximum to locate the numerous semiconductor diodes, fuses, dielectric components and connector rings. Aluminum in sector form with a simple bolted construction is used for the rings, in place of a special profile fabricated copper structure. The manufacturing process is made simple. Silver plating of the copper components, which is an electrolytic process, is eliminated by using aluminum connector rings. Nearly five times weight reduction was achieved by modifying the connection parts. The surge protection assembly was removed as self protected diodes are used; with this modification, procurement of RC blocks and their inventory are avoided. In the Conv. BLE, the diode wheel is assembled as a cantilever on a machined steel forged hub. In the OHBLE, an integral hub is machined in the diode wheel, so forged hub procurement and machining are excluded.
3.2. Magnetic core:
The chosen geometry of the lamination is simple and the surface area for magnetic flux is larger. The armature is constructed between two simple machined support hubs, without the insulated non magnetic tension bolts used in the Conv. BLE. Low loss silicon steels are used to reduce the magnetic core losses. Multi turn diamond shaped copper coils are used for the windings in place of half coils. Silver brazing of joints at each coil was eliminated with this design, which improves productivity and saves electric power used in brazing. The tooling required for special brazing is reduced. Centrifugal forces are not very high.
3.3. Permanent Magnet Generator:
The Conv. BLE possesses an exclusive AlNiCo magnet assembly on a hexagonal shaped machined hub with heavy pole shoes. The magnet assembly weighs around 50 kg and is bolted with non magnetic, high strength imported bolts. In the OHBLE, high energy rare earth Neodymium Iron Boron (NdFeB) magnets, which are lightweight and rectangular, are used. The number of magnets is also doubled to get a higher frequency power output and thereby a smooth, ripple free DC output. Stainless steel fixing screws of smaller size are used for assembly of the magnets, and they are housed in guided slots in the hub. This miniaturization brought out many advantages, namely less weight, less machining cost and more reliability. The smaller diameter and weight contribute to the operational efficiency improvement.
3.4. Assembly: (Fig. 4)
All the modules are assembled in series on the shaft extension of the generator rotor at the non drive end. The rectifier wheel and armature core are shrunk fitted on to the shaft. The magnets are assembled inside the hub. The small PMG stator winding is assembled inside the magnet assembly and fitted to the stator frame.
3.5. Testing:
Performance testing of the OHBLE is made simple. As it is mounted on the extension of the generator shaft, a combined generator and exciter test in a single rotation is possible. For the Conv. BL Exciter an exclusive drive motor is required for balancing and testing, whereas the OHBLE totally eliminates the use of a drive motor. Electrical energy savings of around 7000 kilowatt-hours per machine were achieved.
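As a rough illustrative check (the drive-motor rating and test duration below are assumed for illustration only, they are not given in the paper), savings of this order correspond, for example, to a dedicated drive motor of about 500 kW that no longer has to run for roughly 14 hours of stand-alone balancing and testing:

E_saved \approx P_drive \times t_test = 500\ \mathrm{kW} \times 14\ \mathrm{h} = 7000\ \mathrm{kWh}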
4.0. Advantages of OHBLE
4.1) Customer benefits:
The OHBL Exciter presents the following advantages to the customer:
Higher reliability at lower cost and longer duration of power generation
Lower torque, lower centrifugal forces, no vibrations and no oil leakage problems
Elimination of magnet assembly dislodging
Maintenance free operation and less spares inventory
No alignment and no civil foundation arrangement
4.2) Benefits to manufacturer:
The tangible and intangible gains to the manufacturer are worth noting:
Ease and shorter cycle time of manufacture
Fewer parts and less inventory
No exclusive shaft forging, no bearing and no lube oil system
Higher efficiency; lower electrical, magnetic and mechanical losses
Compact, lower weight and higher power to weight ratio
Single design for generators up to 80 MW rating
Savings in the electricity bill
High energy magnet application technology
Customer satisfaction
Business capture in the present competitive environment
Fig. 1 Block diagram of electric power generation in a power plant (turbine, turbo generator and brushless exciter on one shaft at 3000 rpm)
Fig. 2 3D model of conventional Brushless Exciter
Fig. 3 Block diagram of Overhang Brushless Exciter
Fig. 4 Assembly of Overhang Brushless Exciter on the generator shaft (generator, bearing and OH exciter)
5.0. Conclusion
In simple terms, the OHBLE gained customer applause due to its higher reliability and maintenance free operation. A single design catering to the needs of most industrial sector power plant turbo generators improved productivity. The energy savings are noteworthy. The new technology with high energy magnets improved the availability of power generation. This optimization enhanced business in stiff competition.
SRI KALAHASTEESWARA INSTITUTE OF
TECHNOLOGY
DEPARTMENT OF EEE
PAPER PRESENTATION ON
EXPERT SYSTEM DEVELOPMENT FOR
FAULT DIAGNOSIS TO IMPROVE POWER
SYSTEM RELIABILITY
ANUSHA.N N.NAGARJUNA
III-II, EEE III-II, EEE
9951098123 9703520336
[email protected] [email protected]
ABSTRACT
Plant Condition Monitoring
By Using
Infrared Thermography
PRESENTED BY
V.SRAVAN TEJ B.MADHAVA REDDY
III.B.Tech EEE III.B.Tech EEE
E-mail:[email protected] Email:[email protected]
Abstract:
Infrared condition monitoring
techniques offer an objective way of
assessing the condition of plant
equipment. Infrared thermography is
a condition monitoring technique
used to remotely gather thermal
information from any object or area,
converting it to a visual image. The
equipment is more compact, it is
easier to use, it provides better
imagery, faster analysis and uses
software that allows reports to be
written easily. Prices are also
continually dropping in order to
predict the need for maintenance.
Thermography also has the ability to
generate information that can be used
to improve equipment and enhance
operational and process
modifications.
Temperature is a key variable in virtually any situation and for all processes. For example, if we have even the slightest deviation from normal body temperature we feel sick. In industry, we have plenty of examples too. All this radiation around us can be imaged, measured and stored by an infrared system for analysis. Infrared thermography is the science of the acquisition and analysis of thermal information from non-contact thermal imaging devices.
Another area where
thermography can provide significant
benefits is in the optimization of
preventive maintenance (PM).
Preventive maintenance (PM) tasks
are designed to avoid unplanned
failures by periodically inspecting,
testing and replacing parts. In many cases, these time-based tasks result in unnecessary work and wasted parts or materials. A satisfactory preventive maintenance inspection can justify deferral or elimination of some tasks, reducing plant manpower requirements and parts expenditure.
Introduction:
All electrical
components have a tendency to heat up
as their physical condition worsens or
their electrical properties deteriorate. In
1965 the Swedish Power Board began
inspecting approximately 1,50,000
components a year. In 1986 the UK
Electrical Generation Board began
utilizing infrared thermography for
predictive maintenance on transmission
lines. However, thermography was
revolutionized with the introduction of
image type thermovision cameras in the
Nineties.
As this is a non
contact technique, it is safe and
shutdown is not required. It helps to
record and documents the thermal
characteristics of almost any object that
emits infrared radiation. Thermal images
can quickly and easily locate abnormal
sources of heat, which in electrical
systems often indicate potential
problems. Portable infrared cameras are
used to convert this infrared radiation
energy into high resolution thermal
images that are displayed on
conventional video screens for
quantitative and qualitative analysis.
Temperature is the single most measured
parameter for a condition monitoring
exercise. Temperature is simply crucial
and having control over it will mean
higher quality, better safety and
money saved. Thermography spans
many subject areas like electrical power
generation, transmission, and
distribution systems. An Infrared
Camera is designed to detect this
overheating and interpret it as early
warning signs of imminent failure.
Infrared energy:
Our environment
contains many different forms of energy
that are propagated through space at the
speed of light. These forms of energy
are differentiated as a function of their
wavelength. Infrared radiation begins just above the visible light spectrum and continues up to wavelengths of one thousandth of a meter. Above infrared are radio waves. All objects above absolute zero in temperature emit infrared radiation. This natural occurrence is caused by thermal agitation of the object's molecules. Because molecules are composed of electrical charges, their oscillations create radiation, and the radiation emitted by an object is directly related to its temperature.
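That dependence can be made explicit with the Stefan-Boltzmann law (a standard result, added here for clarity): the power radiated per unit area by a surface of emissivity \varepsilon at absolute temperature T is

W = \varepsilon \sigma T^{4}, \qquad \sigma = 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}

so even a modest rise in temperature produces a disproportionately large rise in radiated power, which is what the camera detects.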
The infrared spectrum is divided into four common regions: 0.75 to 2 microns, referred to as near infrared; 2 to 5 microns, referred to as short wave infrared; 5 to 8 microns, where radiation is almost completely absorbed by the atmosphere; and 8 to 14 microns, referred to as long wave infrared. Infrared thermography spans many subject areas, such as electrical power generation, transmission and distribution systems, as well as various fields like mechanical engineering and medicine.
Long wave systems are theoretically more sensitive to low temperature emission, whereas short wave systems have, in theory, better capabilities to detect a broader band of higher temperatures. However, the short wave region has areas of signal attenuation caused by absorption of the signal by CO2, H2O and O3. The long wave system is not sensitive to reflections, which are normally a problem for a short wave system. The choice of using a short wave system over a long wave system, or vice versa, should not be based on theoretical detection but on the actual detectability of the particular system.
Basic Thermal Science:
One must know the
basic concepts of temperature, heat, heat
transfer and direction of heat transfer, to
understand infrared thermography.
Thermal energy is transferred from one
body to another body by any or all of the
following mechanisms:
1. Conduction
2. Convection
3. Radiation
4. Evaporation / Condensation
With infrared imaging, the sensor or the scanner is only detecting radiated energy. Heat transfer by radiation is achieved by emission and absorption of thermal radiation. All objects emit and absorb thermal radiation at the same time.
The net heat transfer is the difference between what an object absorbs and what it emits. Exitant radiation is all the radiated energy that leaves the surface of an object, regardless of its original source:
1. Emitted from the object itself
2. Reflected from a source in front of the object
3. Transmitted from a source behind the object
The target (Fig. 2) has a temperature and an emissivity, upon which the power of the radiation coming from the target depends. The other two radiation components do not depend on the target temperature, but on the temperature and emissivity of the reflection and transmission heat sources, respectively.
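In equation form (a standard radiometric identity, added for clarity), the radiation leaving the target surface is

W_exitant = \varepsilon W_obj + \rho W_refl + \tau W_trans, \qquad \varepsilon + \rho + \tau = 1

where \varepsilon, \rho and \tau are the emissivity, reflectivity and transmissivity of the target; for an opaque target \tau = 0.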
How is a Visual Image Created from Infrared Energy?
An infrared imaging device
contains one or more detectors that
convert energy in the infrared spectrum
into an electrical signal. The more
energy detected the greater the electrical
signal output. The electrical signals are
typically formatted into a video signal
and displayed on a CRT/LCD. The
amplitudes of the electrical signals are
then displayed as varying intensities on
the CRT/LCD thus creating a contrast in
the image in different palettes such as
Grey, Iron and Rainbow etc. depending
upon the applications.
In thermography, there are many factors apart from the surface temperature of the object that affect and disturb the temperature measurement. For accurate temperature measurements it is crucial to know what those factors are and how the equipment compensates for them. Before the measured radiation can be transformed into temperature, all other radiation sources have to be compensated for by the equipment, so that the measured temperature is a function of the object temperature and not of the distance, the emissivity or the internal equipment temperature.
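A minimal sketch of that compensation in Python, assuming a simplified Stefan-Boltzmann radiometric model that neglects atmospheric attenuation and the internal camera temperature (the function and variable names are illustrative, not from any camera SDK):

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def object_temperature(w_measured, emissivity, t_reflected):
    """Recover the object temperature (K) from the total radiant power
    density seen by the camera (W/m^2), assuming
    w_measured = eps*SIGMA*T_obj**4 + (1 - eps)*SIGMA*T_refl**4."""
    reflected_part = (1.0 - emissivity) * SIGMA * t_reflected ** 4
    emitted_part = w_measured - reflected_part        # eps*SIGMA*T_obj**4
    return (emitted_part / (emissivity * SIGMA)) ** 0.25

# Example: a target at 350 K with emissivity 0.9 against 300 K surroundings.
w = 0.9 * SIGMA * 350.0 ** 4 + 0.1 * SIGMA * 300.0 ** 4   # synthetic "measurement"
print(round(object_temperature(w, 0.9, 300.0), 1))        # -> 350.0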
If any of the electrical components deteriorate, there is an increase in resistance to the flow of electrical current. With the increase in resistance comes an increase in radiant energy output as the component gets heated, and a thermal imaging system detects this radiant energy. In the case of an overloaded conductor or an imbalance in a three phase system, the more current flowing through the line, the greater the temperature of that line and the brighter the thermal pattern appears.
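The underlying relation is simply Joule heating: the power dissipated in a connection of resistance R carrying current I is

P = I^{2} R

so either a deteriorated (higher resistance) joint or an overloaded (higher current) conductor shows up as a hotter, brighter spot in the thermal image.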
Visual v/s Infrared Image:
There are two fundamental differences between looking in the infrared and in the visual:
- Visual is mainly reflections, while infrared is mainly a combination of object emission and reflection.
- Visual is color and intensity, while infrared is only intensity.
- If two objects are at the same temperature, the object with higher emissivity radiates more than the object with low emissivity; hence the first object looks brighter than the second.
- Emissivity causes the contrast in thermal images.
- Both objects receive radiation from the surroundings, which is also reflected, but more by the object with low emissivity and high reflectivity and less by the object with high emissivity and low reflectivity.
Quantitative v/s Qualitative Analysis:
Anyone planning an infrared inspection is normally looking to obtain the best service for the amount of money spent. Unfortunately, due to many misrepresentations by infrared service companies, customers are confused about the facts of infrared operations and often pay for meaningless data. This problem is especially prevalent in infrared inspections for the electrical utility industries. Infrared as a technology is not new; in fact, quality infrared systems have been in service for over 30 years and continue to evolve.
Thermal Image Analysis Techniques:
Thermal Gradient - A gradual change in temperature over distance. It indicates the presence of conductive heat transfer, which is the only mode in opaque solids.
Thermal Tuning - Adjusting the scale of the image in order to optimize contrast. For this, the level and span controls of the camera are used.
Isotherm - Replaces certain colors in the scale with a contrasting color. It marks an interval of equal apparent temperature. The feature is used to find out whether there is any heat flow, e.g. a thermal gradient.
Palettes - The color palette of the image assigns different colors to mark specific levels of apparent temperature; palettes can give more or less contrast depending on the colors used in them. For electrical installations, an iron palette, which is a low contrast palette, is generally used.
Factors Affecting the Measurement:
Atmosphere - Although the atmosphere between the camera and the target is largely transmissive, some factors still affect the measurement: distance, ambient temperature and relative humidity.
Reflected radiation - The apparent temperature of nearby objects whose radiation is reflected off the target into the camera is known as the reflected apparent temperature.
Emissivity - A low emissivity target will always try to look like its surroundings: if the target is hotter than the surroundings it will look colder than it is, and if it is colder than the surroundings it will look warmer than it is. It can be said that a low emissivity target tries to camouflage its real temperature from the thermal imager; for high emissivity targets the apparent temperature is very close to the real temperature.
Calibration - The calibration of the camera is performed in a lab under controlled environmental conditions with a large number of black body reference sources with emissivity approaching 1.0.
Spatial resolution and target size - Ideal equipment would of course measure the same object temperature even when looking at an object that is very small compared to the whole field of view. (Figure: relation between Field of View and Distance, 240 Lens.)
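As a rough guide (standard optics, added for clarity), if the lens field of view FOV (in radians) is spread across N detector pixels, the spot seen by a single pixel at distance d is about

\text{spot size} \approx d \cdot \frac{FOV}{N}

and the target should span several such spots for its temperature to be measured accurately.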
Infrared Applications:
Electrical Distribution Systems
What Can Be Detected:
- Loose/deteriorated connections
- Overloads
- Imbalanced loads
- Open circuits
- Inductive heating
- Harmonics
- Defective equipment
Benefits:
- Locate problems quickly, without interrupting service
- Drastically reduce costly, unscheduled power outages
- Minimize preventive maintenance time and maximize troubleshooting effectiveness
- Prevent premature failure and extend equipment life
(Figures: improperly closed air switch; load imbalance on bus duct.)
Mechanical Systems
What Can Be Detected:
- Misalignment of coupled equipment
- Over/under lubrication of bearings
- Over/under tension of belted systems
- Excessive friction
- Defective equipment
Benefits:
- Quickly locate misaligned coupled equipment
- Increase equipment reliability and life
- Increase production and efficiency while saving energy
- Increase quality of product
- Minimize downtime by planning the required manpower and materials before shutdown
- Improve worker productivity and morale by correcting potential problems proactively
(Figures: uneven heating caused by misalignment; defective pillow block bearing; overheated shaft bearing.)
Structural Energy Loss
What Can Be Detected:
- Missing, damaged, or improperly installed insulation
- Energy losses caused by air infiltration and exfiltration
- Water infiltration
- Damaged refractory
Benefits:
- Help reduce heating and cooling energy costs
- Evaluate thermal performance of retrofits
- Identify areas of latent moisture
- Detect conditions conducive to mold or insect problems
- Provide hardcopy proof of problems
(Figures: Q/A inspection detects missing insulation in new building; compromised refractory in steel ladle.)
Conventional Maintenance Procedures:
Generally a fairly uniform set of maintenance procedures is adopted in many organizations. These include:
- Visual inspections
- Cleaning equipment
- Tightening connections
- Over current device testing
- Insulation quality testing
Advantages of the Thermographic Approach:
Infrared inspection is non-contact. It uses remote sensing. Firstly, it keeps the user out of danger, i.e. away from live electrical components. Secondly, it does not intrude upon or affect the target.
Infrared thermography is two dimensional. We can measure the temperature of many points in the same image and compare them. Thus analysis of the image is very effective and simple.
Infrared thermography is real time. It allows us to do very fast scanning.
Electrical equipment is inspected during operation, so the power doesn't have to be interrupted.
Inspection costs are reduced, as large quantities of equipment can be scanned in a short period of time, finding the trouble spot quickly and saving labour time and money over regular troubleshooting.
Faults can be pinpointed before maintenance is carried out, so maintenance resources are directed where they are most needed and prioritized, resulting in significant labour and cost savings.
Infrared Program :
In order to profit from the
benefits of infrared thermography,
regardless of the technology chosen,
much consideration should be given to
establishing an infrared inspection
program. One that is properly initiated is
guaranteed to provide users with a quick
return on investment. Typically this will
occur within 3 months of purchasing and
using the equipment, but many
companies claim receiving a payback the
very first day on which they performed
an infrared inspection.
The first of several steps in setting up a successful thermography program is education, described below. (A note on detector technology: the interest in QWIP technology is that it promises major advances for infrared focal plane arrays:
- Excellent pixel uniformity, imaging and sensitivity performance.
- Large pixel format capability, up to 640 x 480.
- QWIPs are tunable and can be made responsive from about 3 to 25 microns, and can be made for broad band and dual band applications.
- They can be produced at relatively low cost and in large quantities.)
Education: The very first step is to find out more about the products and technology that are available and how they can be used.
- Go to introductory seminars and conferences.
- Request product data sheets and application literature from equipment vendors.
- Browse the internet. This is a little time consuming, but there is a wealth of information on the web.
- Contract an independent consultant to assist in the assessment and education process.
- Hire an experienced infrared service company and learn from their employees while they are performing an inspection in the field.
Conclusion:
Conventional cleaning and tightening procedures can overlook many problems. These overlooked problems, as well as those that may not have been remedied by the preventive maintenance program, will be identified by a competent infrared survey. Infrared thermography is capable of instantly identifying all resistive type problems that are the object of the conventional cleaning and tightening procedures. In addition, poor connections that are not readily accessible during conventional maintenance can be checked, and contact and calibration problems in thermal overload devices and fuses can be instantly spotted. The most costly component of many preventive maintenance programs is equipment cleaning and connection tightening. This is appropriate, since these procedures are directed towards correcting deficiencies in terminations, joints and contact points, the location of most electrical failures. However, these procedures are highly labour intensive, since substantial component disassembly and reassembly is required to access all the major contact points and terminations.
Thermographic imaging and infrared temperature measurements have been used extensively by POWERGRID for maintenance related activities. Improvements in the sensitivity and selectivity of infrared imagery now allow more meaningful observational comparisons of substation equipment. Teams of thermographers with the necessary skills and capabilities have been able to uncover a number of impending problems that could have led to catastrophic failures and unscheduled outages. The increased sensitivity of newer designs, rejection of unwanted reflections, improvements in spot resolution and in-depth training have all contributed to infrared imaging becoming an effective condition monitoring system.
A Paper on
Detection Of Pilferages And Power Thefts
Using
SCADA
BY
A. BHANU TEJ J. MOHAN KUMAR
B.Tech EEE - III Yr-II Sem B.Tech EEE - III Yr-II Sem
[email protected] [email protected]
Siddharth Institute of Engineering & Technology
SIDDHARTH NAGAR
NARAYANAVANAM ROAD
PUTTUR 517 583, CHITTOOR (DT)
Abstract
SCADA (Supervisory Control and
Data Acquisition) systems are at the heart of
the modern industrial enterprise ranging
from mining plants, water and electrical
utility installations to oil and gas plants. A
SCADA system usually includes signal
hardware (input and output), controllers,
networks, user interface (HMI),
communications equipment and software.
The brains of a SCADA system are Remote
Terminal Units, whereas the HMI (Human
Machine Interface) processes the data and
presents it to be viewed and monitored by a
human operator.
The aim of this paper is to introduce
a new technique to control the pilferage and
power theft using interface of SCADA with
GIS system. The SCADA system will
continuously get the real time readings of all
electrical parameters at monitored points on
feeders. These parameters include Voltage,
Angle, Power Factor, Active Power,
Reactive Power and Energy. The system
shall also get the status of various switching
devices like circuit breakers, switches and
isolators. It will also get the transformer
parameters like tap position, etc.
Electronic meters will be installed at
HT consumers. These meters will be
equipped with the interface for
communications with the SCADA system.
SCADA system will be communicating with
the meters using an industry standard
protocol. Meter readings shall be used to
monitor the load and for detection of
attempts to tamper with the meter. As soon
as a tamper is detected the meter/consumer
shall be tagged on the GIS system. The
information shall be passed on to the
vigilance groups for physical check, to take
further action.
INTRODUCTION:
The power sector plays a very important and vital role in the economic development of a country. The growth and development of industries, agriculture and infrastructure is dependent on the state of the power sector. In India, approximately 35-40% of the energy is lost as Transmission and Distribution losses, which is very high.
As the nature of the loss is both technical and commercial, it is difficult to separate the loss between these two factors. As pilferage takes place mostly at the LT level, it becomes crucial to carry out the study up to the consumer level. The losses in the physical system, like line losses and transformation losses, form the technical losses. Commercial losses come from a variety of sources, all of which have in common that energy was delivered but not paid for. A typical source of commercial loss, or theft of utility service, is a direct connection from a feeder or a wire bypassing the meter.
"The total power generation in the
country was around 1,00,000 MW of which
billing was done only for 55,000 MW and
the rest 45,000 MW was going as pilferage
and power theft. Out of it, the annual power
theft was around 30,000 MW causing a
financial loss of Rs 20,000 crore to the
nation's exchequer every year - A report"
SCADA
SCADA stands for Supervisory
Control And Data Acquisition. As the name
indicates, it is not a full control system, but
rather focuses on the supervisory level. As
such, it is a purely software package that is
positioned on top of hardware to which it is
interfaced, in general via Programmable
Logic Controllers (PLCs), or other
commercial hardware modules.
SCADA is a commonly used
industry term for computer-based systems
allowing system operators to obtain real-time
data related to the status of an electric
power system and to monitor and control
elements of an electric power system over a
wide geographic area.
SCADA System Functions
The SCADA system connects two distinctly different environments: the substation, where it measures, monitors, controls and digitizes; and the Operations Center, where it collects, stores, displays and processes substation data. A communications pathway connects the two environments. Interfaces to substation equipment and a conversions and communications resource complete the system. The substation terminus for a traditional SCADA system is the Remote Terminal Unit (RTU), where the communications and substation interfaces interconnect. SCADA system RTUs collect measurements of power system parameters and transport them to an Operations Center, where the SCADA Master presents them to system operators.
SCADA system master stations
monitor the incoming stream of analog
variables and flag values that are outside
prescribed limits with warnings and alarms
to alert the system operator to potential
problems. Data is screened for bad data as
well.
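A minimal sketch of that limit-checking logic (the tag names and limits are hypothetical, not from any particular SCADA product):

# Prescribed operating limits per analog point (hypothetical values).
LIMITS = {
    "feeder1_voltage_kV": (10.45, 11.55),   # +/-5% around 11 kV
    "feeder1_current_A":  (0.0, 400.0),
    "xfmr1_oil_temp_C":   (-10.0, 85.0),
}

def scan(measurements):
    """Compare incoming analog values against prescribed limits and return
    alarm messages for the operator; None values are flagged as bad data."""
    alarms = []
    for tag, value in measurements.items():
        lo, hi = LIMITS.get(tag, (float("-inf"), float("inf")))
        if value is None:
            alarms.append(f"{tag}: bad/missing data")
        elif not (lo <= value <= hi):
            alarms.append(f"{tag}: value {value} outside limits [{lo}, {hi}]")
    return alarms

print(scan({"feeder1_voltage_kV": 11.9, "feeder1_current_A": 250.0,
            "xfmr1_oil_temp_C": None}))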
GIS System
Geographic information system
(GIS) technology can be used for scientific
investigations, resource management and
development planning.
A GIS is a computer system capable
of capturing, storing, analyzing, displaying
geographically referenced information; i.e.,
data identified according to location.
Practitioners also define GIS as including
the procedures, operating personnel, and
spatial data that go into the system. GIS uses computers and software to leverage the fundamental principle of geography: that location is important in people's lives. GIS can integrate data in various formats and from many sources.
Application of GIS in Power Utilities
GIS is at the core of a full gamut of electrical utility applications: customer information systems, work management, distribution management, meter order processing, load and feeder planning, and outage management. Electric companies are already finding GIS very useful in the management of distribution.
GIS is used in the power sector for:
- The study, analysis and design of the electrical distribution system
- Applications being developed for tackling the problem of designing the electrical supply system for new residential developments
- Process automation, in order to provide customers with high quality attendance
- Mapping and analysis of electric distribution circuits
- Tightening the leakages, real and procedural, that result in monstrous losses in the transmission and distribution chain
- Pilferage detection at consumer, distribution transformer, feeder or substation levels
- Detection of power thefts by HT consumers, when integrated with SCADA
Role of SCADA interfaced GIS
system in detecting potential thefts
The proposed solution is interface of
SCADA with GIS system. The SCADA
system will continuously get the real time
readings of all electrical parameters at
monitored points on feeders. These parameters
include Voltage, Angle, Power Factor, Active
Power, Reactive Power and Energy.
Electronic meters will be installed at
HT consumers. These meters will be equipped
with the interface for communications with
the SCADA system. SCADA system will be
communicating with the meters using an
industry standard protocol. Meter readings
shall be used to monitor the load and for
detection of attempts to tamper with the meter.
As soon as a tamper is detected the
meter/consumer
shall be tagged on the GIS system. The
information shall be passed on to the vigilance
groups for physical check, to take further
action. The system can be graphically illustrated in the figure.
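The detection idea can be sketched as a simple energy balance: the energy measured at the feeder by SCADA should roughly equal the sum of the HT consumer meter readings plus the expected technical losses, and a persistent shortfall flags probable pilferage on that feeder. A minimal Python illustration (meter names, loss fraction and threshold are assumed for illustration only):

def check_feeder(feeder_kwh, consumer_kwh, technical_loss_frac=0.06, tolerance_frac=0.03):
    """Compare feeder-level SCADA energy with the sum of consumer meter
    readings and report the unaccounted share of energy."""
    billed = sum(consumer_kwh.values())
    expected = billed * (1.0 + technical_loss_frac)   # billed energy plus technical losses
    unaccounted = feeder_kwh - expected
    suspicious = unaccounted > tolerance_frac * feeder_kwh
    return unaccounted, suspicious

# Hypothetical daily readings (kWh) for one 11 kV feeder and its HT consumers.
feeder = 52_000.0
consumers = {"HT-001": 18_500.0, "HT-002": 14_200.0, "HT-003": 11_300.0}
loss_kwh, flag = check_feeder(feeder, consumers)
if flag:
    print(f"Possible pilferage: {loss_kwh:.0f} kWh unaccounted; tag feeder on GIS for vigilance check")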
In its stride towards Power for All
by 2012 the Ministry of Power has decided
to deploy Geographical Information Systems
(GIS) and Global Positioning System (GPS)
and Remote Sensing (RS) to improve its
distribution network, restoration services as
well as to harness the hydro power potential
in the North Himalayan region and in
Northeastern India. Recent ranking survey
of potential hydro sites conducted by the
Central Electricity Authority (CEA) had
extensively used GIS in its report. Power
Grid has charted an ambitious plan to add
about 60,000 circuit km of transmission
lines by 2012. To facilitate this, construction
of high capacity inter-regional transmission
lines and power highways, culminating in a
national grid is envisaged.
Conclusion:
In this paper a GIS solution for preventing power pilferage has been presented. It can be concluded from the above discussion that:
- A GIS system integrated with the consumer billing system can be used very effectively in detecting power pilferage.
- Pilferage detection can be done at consumer, distribution transformer, and feeder or substation levels.
- The accuracy of the result depends on the accuracy of the loading pattern considered during the evaluation of technical losses and the accuracy of the meter readings.
- Analysis of patterns of individual consumption over GIS can help in identifying the sources of pilferage at the subscriber level.
- A GIS system integrated with SCADA can be used in detecting power thefts by HT consumers.
SUPPRESSION OF RESONANCE OSCILLATIONS IN PWM CURRENT SOURCE CONVERTER
BY
P.Mounika, III B.Tech EEE, A.I.T.S, Rajampet, E-mail: [email protected]
B.Haritha, III B.Tech EEE, A.I.T.S, Rajampet, E-mail: [email protected]
ABSTRACT
This paper presents the simulation and real time implementation of a suppression method for the resonance oscillation on the AC side of a pulse-width modulation (PWM) current source converter. The converter is operated with a PWM switching pattern which is generated by full digital control in computer software. The resonance current, caused by the low pass filter at a step change of the pattern, can be effectively suppressed by one-pulse control of the pattern. The proposed method does not need a feedback loop of the current/voltage and does not add switching stress to the devices. The main objective of the work is the suppression of resonance in the PWM current source converter.
INTRODUCTION
The use of turn-off devices and the application of the Pulse Width Modulation (PWM) technique in power converters have made sinusoidal waveforms on the AC side of rectifiers and inverters achievable. This is an important result: it contributes to unity power factor, to the reduction of harmonics in the AC power source, and to low noise drives of AC motors. A converter with high performance power control is realized by employing a powerful DSP.
The PWM converter is classified into the voltage source type and the current source type. The former has an AC inductor and a DC capacitor, and the latter has an AC LC filter and a DC inductor. The PWM voltage source converter has been widely used because its conversion efficiency and installation size are superior to those of the PWM current source converter. To achieve a sinusoidal AC input current, the voltage source converter necessitates a control loop for the switching of the devices, for instance a current-regulated modulation control with a comparator. In the current source converter, the sinusoidal input current can easily be obtained without an additional control loop for the switching, because it depends on the DC current. The blanking time (dead time), which is a significant parameter in the PWM voltage source converter, does not need to be taken into consideration, so the PWM pulse generation is simple in the logic circuit.
The voltage waveform of the AC side of a voltage source converter consists of a train of pulses whose width is sinusoidally modulated with constant amplitude. The harmonics due to these pulse trains cause audible noise in the AC inductor. As the current source converter can directly convert the DC current into a sinusoidal current through the LC filter, the noise in the filter inductor is considerably lower.
In many industrial applications, control of the output voltage of inverters is often necessary
1. To cope with variations of DC input voltage,
2. To regulate the voltage of the inverters,
3. To satisfy the constant volts and frequency control requirement.
The most efficient method of controlling the gain is to incorporate PWM control within the inverters. The commonly used techniques are:
1. Single pulse width modulation
2. Multiple pulse width modulation
3. Sinusoidal pulse width modulation
4. Modified sinusoidal pulse width modulation
5. Phase displacement control
SYSTEM CONFIGURATION USING GTOs AND ITS CHARACTERISTICS
Introduction
This chapter deals with GTOs and their switching performance, which consists of the turn ON and turn OFF of the GTO. The circuit used here consists of an LC filter and GTOs. The SPWM technique is used to reduce the higher order harmonics. The LC filter is designed so as to suppress the resonance oscillations. The applications and advantages of GTOs and IGBTs are also discussed.
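For reference, the undamped resonance frequency of the AC-side LC filter (a standard relation, stated here for clarity) is

f_r = \frac{1}{2\pi\sqrt{L_f C_f}}

and it is the oscillation excited at this frequency by step changes in the PWM pattern that the proposed one-pulse control damps.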
Gate turn-OFF thyristor (GTO)
A gate turn-OFF thyristor (GTO), as the name indicates, is basically a thyristor type device that can be turned ON by a small positive gate current pulse, but in addition has the capability of being turned OFF by a negative gate current pulse. The turn-OFF capability of a GTO is due to the diversion of the P-N-P collector current by the gate, thus breaking the P-N-P/N-P-N regenerative feedback effect. GTOs are available with asymmetric and symmetric voltage-blocking capabilities, which are used in voltage-fed and current-fed converters, respectively. The turn-OFF current gain of a GTO, defined as the ratio of anode current prior to turn-OFF to the negative gate current required for turn-OFF, is very low, typically 4 or 5. This means that a 6000 A GTO requires a gate current pulse as high as -1500 A. However, the duration of the pulsed gate current, and the energy associated with it, is small and can easily be supplied by low voltage power MOSFETs. GTOs
supplied by the low voltage power MOSFETs. GTOs
are used in motor drives, static VAR compensators
(SVCs), and AC/DC power supplies with high power
ratings. A GTO like an SCR can be turned ON by
applying a positive gate signal. However, a GTO can
be turned OFF by a negative gate signal. A GTO is a
non latching device and can be built with current and
voltage ratings similar to those of an SCR. A GTO is
turned ON by applying a short positive pulse and
turned OFF by a short negative pulse to its gate.
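To make the quoted turn-off gain concrete, the negative gate current required for an anode current I_A with a turn-off gain \beta_{off} \approx 4 is

I_{G(off)} \approx \frac{I_A}{\beta_{off}} = \frac{6000\ \mathrm{A}}{4} = 1500\ \mathrm{A}

which is why a short but very large negative gate pulse is needed.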
The GTO has these advantages over the SCR:
1. Elimination of commutating components in forced commutation circuits, resulting in reduction in cost, weight and volume.
2. Reduction in electromagnetic noise due to the elimination of commutation chokes.
3. Faster turn-off, permitting higher switching frequencies.
4. Improved efficiency of converters.
In low power applications, GTOs have the following advantages over bipolar transistors:
1. A higher blocking voltage capability
2. A high on-state gain
3. A high ratio of peak controllable current to average current
Like a thyristor, a GTO is a latch-on device, but it is also a latch-off device. The GTO symbol is shown in Fig. 3.1.
Fig. 3.1 (a) and (b) GTO circuit symbols.
Like the conventional thyristor, the GTO switches regeneratively into the ON state when a positive gate signal is applied to the base of the N-P-N transistor. In a regular thyristor, the current gains of the N-P-N and P-N-P transistors are large in order to maximize gate sensitivity at turn ON and to minimize the ON state voltage drop. But this pronounced regenerative, latching effect means that the thyristor cannot be turned OFF from the gate.
Fig.3.2 Two-transistor analogy of GTO.
Internal regeneration is reduced in the GTO by a
reduction in the current gain of the P-N-P transistor,
and turn OFF is achieved by drawing sufficient
current from the gate. The turn OFF action may be
explained as in fig. 3.3.
Fig. 3.3 Basic GTO structure showing anode to N-base short-circuiting spots
When a negative bias is applied at the gate, excess carriers are drawn from the base region of the N-P-N transistor, and the collector current of the P-N-P transistor is diverted into the external gate circuit. Thus, the base drive of the N-P-N transistor is removed and this, in turn, removes the base drive of the P-N-P transistor and stops conduction. The reduction in gain of the P-N-P transistor can be achieved by the diffusion of gold or another heavy metal to reduce carrier lifetime, by the introduction of anode to N-base short-circuiting spots, or by a combination of these two techniques. Device characteristics are influenced by the particular technique used. Thus, the gold-doped GTO retains its reverse-blocking capability but has a high ON state voltage drop. The shorted anode emitter construction has a lower ON-state voltage, but the ability to block reverse voltage is sacrificed. Large GTOs also have an interdigitated gate-cathode structure, in which the cathode emitter consists of many parallel connected N-type fingers, so that turn ON and turn OFF occur over the whole active area of the chip.
Fig 3.4 Delay, rise turn-ON times during gated
turn-ON
GTOs are available with symmetric or asymmetric voltage blocking capabilities. A symmetric blocking device cannot have anode shorting and, therefore, is somewhat slower. The use of asymmetrical GTOs requires the connection of a diode in series with each GTO to gain reverse blocking capability, whereas symmetrical GTOs have the ability to block the reverse voltage. In symmetrical GTOs, the N-base is doped with a heavy metal to reduce the turn-off time. Asymmetrical GTOs offer more stable temperature characteristics and a lower ON state voltage compared to symmetrical GTOs.
Waveforms for the GTO circuit
For R = 10 ohms, L = 20 H:
Fig. 4.11 Triangular wave compared with the sinusoidal wave; Fig. 4.11(a) Zoom of Fig. 4.11.
Fig. 4.11 shows the comparison of the triangular wave with the sinusoidal wave, and Fig. 4.11(a) shows the zoom of Fig. 4.11.
Fig. 4.12 Pulses to be applied to GTOs 1 and 4; Fig. 4.12(a) Zoom of Fig. 4.12.
Fig. 4.12 shows the pulses obtained by the comparison of the triangular waveform with the sinusoidal waveform, and Fig. 4.12(a) shows the zoom of Fig. 4.12. The switching frequency is 1.96 kHz. The magnitude of the pulse is 1 V. By passing the obtained pulses through a NOT gate we get the pulses for GTOs 2 and 3.
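A minimal sketch of how those gate pulses are generated by carrier comparison, assuming a 50 Hz sinusoidal reference and a 1.96 kHz triangular carrier as in the figures (the values are illustrative; numpy is used for brevity):

import numpy as np

f_ref, f_carrier, fs = 50.0, 1960.0, 1_000_000.0   # reference, carrier, sample rate (Hz)
t = np.arange(0.0, 0.02, 1.0 / fs)                 # one 50 Hz cycle

reference = 0.8 * np.sin(2 * np.pi * f_ref * t)    # sinusoidal reference, modulation index 0.8
phase = (f_carrier * t) % 1.0                      # sawtooth phase of the carrier
carrier = 4.0 * np.abs(phase - 0.5) - 1.0          # unit-amplitude triangular carrier

gate_14 = (reference >= carrier).astype(int)       # pulses for GTOs 1 and 4
gate_23 = 1 - gate_14                              # complementary pulses (the "NOT gate") for GTOs 2 and 3

print("duty of GTO 1/4 pulses over one cycle:", gate_14.mean())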
6
Fig4.13 Input current and its harmonics
Fig4.13 shows the input current and its corresponding
harmonics .The fundamental i.e., at 50Hz is 0.05938.
The total harmonic distortion is1.78%.
Fig 4.14 Input voltage and its harmonics
Fig4.14 shows the input voltage and its corresponding
harmonics. The magnitude of voltage is 230V.
Fig 4.15 Output current and its harmonics
Fig 4.15 shows the waveform of the output current and its harmonics. It is a continuous waveform. Its total harmonic distortion is 65.13%. It is observed through the load R = 10 ohms and L = 20 H.
Fig 4.16 Output voltage and its harmonics
Fig 4.16 shows the waveforms of the output voltage and its harmonics. The DC component is 22.61 and its
THD is 101.35%. It is observed across the load R = 10 ohms and L = 20 H.
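The THD figures quoted above follow from the ratio of the rms of the harmonic components to the fundamental. A minimal sketch of how such a value can be computed from a sampled waveform with an FFT is given below (illustrative only; the test waveform is synthetic, not the simulated converter output).

import numpy as np

# Total harmonic distortion: THD = sqrt(sum of h_n^2 for n >= 2) / h_1
def thd_percent(samples, fundamental_hz, sample_rate_hz, n_harmonics=40):
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples) * 2      # single-sided amplitudes
    bin_of = lambda f: int(round(f * len(samples) / sample_rate_hz))
    h1 = spectrum[bin_of(fundamental_hz)]
    harmonics = [spectrum[bin_of(n * fundamental_hz)] for n in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / h1

if __name__ == "__main__":
    fs, f0 = 10000, 50
    t = np.arange(0, 1.0, 1 / fs)                                   # 1 s of data, whole cycles
    wave = np.sin(2*np.pi*f0*t) + 0.1*np.sin(2*np.pi*3*f0*t)        # 10% third harmonic (assumed test signal)
    print(f"THD = {thd_percent(wave, f0, fs):.2f} %")               # expect about 10 %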
For R = 10 ohms, L = 0.2 H
Fig 4.17 Input current and its harmonics
Fig 4.17 shows the input current waveform; its corresponding harmonics are also shown below it. The fundamental component is 0.05989. The THD is 2.71%.
Fig 4.18 Input voltage and its harmonics
Fig 4.18 shows the input voltage and its corresponding harmonics. The magnitude of the voltage is 230 V. It is observed at the input side, i.e., the source side.
Fig 4.19 Output current and its harmonics
Fig 4.19 shows the output current waveform and its corresponding harmonics. It is observed through the load R = 10 ohms and L = 0.2 H. The THD is 85.20%.
Fig 4.20 Output voltage and its harmonics
Fig 4.20 shows the output voltage waveform and its corresponding harmonics. It is observed across the load R = 10 ohms and L = 0.2 H.
Conclusion
A suppression method for the resonance oscillation in the PWM current source converter has been proposed, and the single-phase converter has been simulated in this paper to verify the results. This proposed method is very effective
for the suppression of the resonance oscillation. No
feedback loop is necessary for this suppression
method. When the carrier frequency for the
generation of the PWM pulses is selected at the value
corresponding to the control timing, the pulse
regulation in two carrier cycles allows the oscillation
to be damped. The results prove that the proposed
method with very simple control is useful for the
single-phase converter.
The circuits using IGBTs and GTOs have been explained clearly and their characteristics have also been described. Depending upon these characteristics, they are used in their relevant applications.
SRI VENKATESWARA UNIVERSITY COLLEGE OF
ENGINEERING
Department of Electrical Engineering
Tirupathi-517502
A TECHNICAL PAPER ON
NON-CONVENTIONAL ENERGY SOURCES
K.S.RAVI KUMAR C.S.UMAR FAROOQ
10703012 10703007
Room no:1329 Room no:1330
Visweswara block Visweswara block
Svuce hostels Svuce hostels
Tirupati. Tirupati.
e-mail:[email protected] e-mail:[email protected]
ABSTRACT: Energy is the key input to drive
and improve the life cycle. Primarily, it is the
gift of the nature to the mankind in various
forms. The consumption of the energy is
directly proportional to the progress of mankind. With ever-growing population,
improvement in the living standard of the
humanity, industrialization of the
developing countries, the global demand for
energy is expected to increase rather
significantly in the near future. The primary source of energy is fossil fuel; however, the
finiteness of fossil fuel reserves and large
scale environmental degradation caused by
their widespread use, particularly global
warming, urban air pollution and acid
rain, strongly suggests that harnessing of
non-conventional, renewable and
environment friendly energy resources is
vital for steering the global energy supplies
towards a sustainable path. This paper
describes in brief the non-conventional energy sources.
INTRODUCTION:
To meet the future energy demands and to
give quality and pollution-free supply to the growing and today's environment-conscious
population, the present world attention is to
go in for natural, clean and renewable
energy sources. These energy sources
capture their energy from on-going natural
processes, such as geothermal heat flows,
sunshine, wind, flowing water and
biological processes. Most renewable forms of energy, other than geothermal and tidal power, ultimately come from the Sun. Some forms of energy,
such as rainfall and wind power are
considered short-term energy storage,
whereas the energy in biomass is
accumulated over a period of months, as in
straw, and through many years as in wood.
Fossil fuels too are theoretically renewable
but on a very long time-scale and if
continued to be exploited at present rates
then these resources may deplete in the near
future. Therefore, in reality, renewable energy is energy from a source that is replaced rapidly by a natural process and is not subject to depletion on a human timescale. Renewable energy resources may
be used directly, such as solar ovens,
geothermal heating, and water and wind
mills or indirectly by transforming to other
more convenient forms of energy such as
electricity generation through wind turbines
or photovoltaic cells, or production of fuels
(ethanol etc.) from biomass.
BRIEF DESCRIPTION OF NON-CONVENTIONAL ENERGY RESOURCES:
1. SOLAR ENERGY:
Most renewable energy is ultimately "solar energy", directly collected from sunlight. Energy is released by the Sun as electromagnetic waves. This energy reaching the earth's atmosphere consists of about 8% UV radiation, 46% visible light and 46% infrared radiation. Solar energy can be used in two ways:
Solar heating
Solar electricity
Solar heating is to capture/concentrate the sun's energy for heating buildings and for cooking/heating foodstuffs etc. Solar electricity is
mainly produced by using photovoltaic solar
cells which are made of semi-conducting
materials that directly convert sunlight into
electricity. Obviously the sun does not
provide constant energy to any spot on the
Earth, so its use is limited. Therefore, often
solar cells are used to charge batteries, which are used either as a secondary energy source or for other applications of
intermittent use such as night lighting or
water pumping etc.
Solar power plants offer a good option for electrification in areas of disadvantageous location such as hilly regions, forests, deserts, and islands where other resources are neither available nor exploitable in a techno-economically viable manner. MNES has identified 18,000 such villages to be electrified through non-conventional sources. India is a vast country with an area of over 3.2 million sq. km. Most parts of the country have about 250-300 sunny days. Thus there is tremendous solar potential. A 140 MW solar thermal/naphtha hybrid power plant with a 35 MW solar trough component will be constructed in Rajasthan, raising India to the 2nd position in the world in the utilization of solar thermal power.
Grid-interactive solar photovoltaic power projects aggregating to 2490 kW have so far been installed and other projects of 800 kW capacity are under installation.
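As a rough, illustrative calculation (not from the paper), the annual energy from a grid-interactive PV plant can be estimated from its rated capacity, the equivalent peak sun hours per day and a performance ratio. The sun-hour and performance-ratio figures below are assumed example values; only the 2490 kW capacity comes from the text.

# Rough annual-energy estimate for a PV plant (illustrative sketch; site parameters are assumed)
def pv_annual_energy_kwh(rated_kw, peak_sun_hours_per_day=5.5, performance_ratio=0.75, days=365):
    """Energy (kWh/year) = capacity x equivalent peak sun hours x performance ratio x days."""
    return rated_kw * peak_sun_hours_per_day * performance_ratio * days

if __name__ == "__main__":
    print(f"Estimated yield: {pv_annual_energy_kwh(2490):,.0f} kWh/year")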
2. Wind Energy:
The origin of wind energy: when the sun's rays fall on the earth, its surface gets heated unevenly, and as a consequence winds are formed. Kinetic energy in the wind can be
used to run wind turbines but the output
power depends on the wind speed. Turbines
generally require a wind speed in the range of 5.5 m/s (20 km/h). In practice relatively few land
areas have significant prevailing winds.
Even so, wind power is one of the most cost-competitive renewables today; it has been the most rapidly growing means of electricity generation at the turn of the 21st century and provides a complement to large-scale base-load power stations. Its long-term technical potential is believed to be 5 times current global energy consumption or
40 times current electricity demand. India
now has the 5th largest wind power installed
capacity, of 3595 MW, in the world. The
estimated gross wind potential in India is 45,000 MW.
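The dependence of output on wind speed mentioned above follows the standard cube-law relation P = 0.5 * rho * A * Cp * v^3. The sketch below is illustrative only; the rotor diameter and power coefficient are assumed example values, not figures from the paper.

import math

# Wind-turbine power from the cube law (illustrative sketch; rotor size and Cp are assumed)
def wind_power_kw(wind_speed_ms, rotor_diameter_m=80.0, cp=0.40, air_density=1.225):
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2              # m^2
    power_w = 0.5 * air_density * swept_area * cp * wind_speed_ms ** 3
    return power_w / 1000.0                                         # kW

if __name__ == "__main__":
    for v in (5.5, 8.0, 11.0):   # doubling the wind speed gives about 8x the power
        print(f"{v:4.1f} m/s -> {wind_power_kw(v):8.0f} kW")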
3. Water Power
Energy in water can be harnessed and used,
in the form of motive energy or temperature
differences. Since water is about a thousand
times heavier than air, even a slow-flowing stream of water can yield great amounts of energy. There are many forms:
Hydroelectric energy, a term usually
reserved for hydroelectric dams.
Tidal power, which captures energy from
the tides in horizontal direction. Tides come
in, raise water levels in a basin, and tides
roll out. The water is made to pass through a
turbine to get out of the basin. Power
generation through this method has a
varying degree of success.
Wave power, which uses the energy in
waves. The waves will usually make large
pontoons go up and down in the water. The
wave power is also hard to tap. Hydroelectric energy is therefore the only viable option. However, even this option is probably not available to the developed nations for future energy production, because most major sites within these nations with the potential for harnessing gravity in
this way are either already being exploited
or are unavailable for other reasons such as
environmental considerations. On the other
side, large hydro potential of millions of
megawatts is available with the developing
countries of the world but major bottleneck
in the way of development of these large
Hydro projects is that each site calls for
huge investment.
4. Micro/Small Hydro Power
This is non-conventional and renewable
source and is easy to tap. Quantitatively
small volumes of water, with large falls (in
hills) and quantitatively not too large
volumes of water, with small
falls (such as those of canals), can be tapped. The force of the flowing and falling water is
used to run water turbines to generate
energy.
The estimated potential of Small Hydro
Power in India is about 15,000 MW.
In the country, micro hydro projects of up to 3 MW station capacity with a total capacity of 240 MW, and 420 small hydro power projects of up to 25 MW station capacity with an aggregate capacity of over 1423 MW, have been set up; over 187 projects in this range with an aggregate capacity of 521 MW are under construction.
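The power available from flowing and falling water follows the standard relation P = rho * g * Q * H * eta. A minimal sketch is given below; the flow, head and efficiency values are assumed examples only, not taken from the paper.

# Hydro power from flow and head (illustrative sketch; all numerical inputs are assumed)
def hydro_power_kw(flow_m3s, head_m, efficiency=0.85, rho=1000.0, g=9.81):
    return rho * g * flow_m3s * head_m * efficiency / 1000.0        # kW

if __name__ == "__main__":
    # A canal-drop site: moderate flow, small fall
    print(f"Canal scheme: {hydro_power_kw(flow_m3s=4.0, head_m=3.0):.0f} kW")
    # A hilly site: small flow, large fall
    print(f"Hill scheme:  {hydro_power_kw(flow_m3s=0.5, head_m=60.0):.0f} kW")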
5. Geothermal Energy
Geothermal energy is a very clean source of
power. It comes from radioactive decay in
the core of the Earth, which heats the Earth
from the inside out and thus energy/power
can be extracted owing to the temperature
difference between hot rock deep in the
earth and relatively cool surface air and
water. This requires that the hot rock be
relatively shallow, so it is site - specific and
can only be applied in geologically active
areas.
It can be used in two ways:
Geothermal heating
Geothermal electricity
As stated above, the geothermal energy from
the core of the Earth is closer to the surface
in some areas than in others. Where hot
underground steam or water can be tapped
and brought to the surface it may be used
directly to heat and cool buildings or
indirectly it can be used to generate
electricity by running the steam/gas turbines.
Even otherwise, on most of the globe, the
temperature of the crust a few feet below the
surface is buffered to a constant 7-14 degree
Celsius, so a liquid can be pre-heated or pre-cooled in underground pipelines, providing
free cooling in the summer and heating in
the winter by using a heat pump.
6. BIOMASS
a. Solid Biomass
Plants use photosynthesis to store solar
energy in the form of chemical energy. The
easiest way to release this energy is by
burning the dried up plants. Solid biomass
such as firewood or combustible field crops
including dried manure is actually burnt to
heat water and to drive turbines. Field crops
may be grown specifically for combustion or
may be used for other purposes and the
processed plant waste then used for
combustion. Most sorts of biomass, including sugarcane residue, wheat chaff, corn cobs and other plant matter, can be, and are, burnt quite successfully. Currently, biomass contributes 15% of the total energy supply worldwide.
A drawback is that all biomass needs to go
through some of these steps: it needs to be
grown, collected, dried, fermented and
burned. All of these steps require resources
and an infrastructure.
In the area of small-scale biomass gasification, significant technology development work has made India a world leader. A total capacity of 55.105 MW has so far been installed, mainly for stand-alone applications. A 5 x 100 kW biomass gasifier installation on Gosaba Island in the Sunderbans area of West Bengal is being successfully run on a commercial basis to provide electricity to the inhabitants of the island through a local grid. A 4 x 250 kW (1.00 MW) biomass gasifier based project has recently been commissioned at Khtrichera, Tripura, for village electrification. A 500 kW grid-interactive biomass gasifier, linked to an energy plantation, has been commissioned under a demonstration project.
b. Bio fuel
Bio fuel is any fuel that derives from
biomass - recently living organisms or their
metabolic byproducts, such as manure from
cows. Typically bio fuel is burned to release
its stored chemical energy. Biomass can be
used directly as fuel or to produce liquid bio
fuel.
Agriculturally produced biomass fuels, such
as biodiesel, ethanol, and bagasse
(often a by-product of sugarcane cultivation)
can be burned in internal combustion
engines or boilers. India is the largest producer of cane sugar and the Ministry is implementing the world's largest cogeneration programme in the sugar mills. India has so far commissioned a capacity of 537 MW through bagasse-based cogeneration
in sugar mills and 536 MW is
under installation.
It has an established potential of 3,500 MW
of power generation.
c. Biogas
Biogas can easily be produced from current
waste streams, such as: paper production,
sugar production, sewage, animal waste and so forth. These various waste streams have to be slurried together and allowed to naturally ferment, producing 55% to 70%
inflammable methane gas. India has the world's largest cattle population (400 million), thus offering tremendous potential for biogas plants. Biogas production has the capacity to provide us with about half of our energy needs, either burned for electricity production or piped into current gas lines for use. It just has to be done and made a priority. Though about 3.71 million biogas plants in India were in successful operation up to March 2003, this utilizes only 31% of the total estimated potential of 12 million plants. The payback period of biogas plants is only 2-3 years, and in the case of community and institutional biogas plants it is even less. Therefore biogas electrification at the community/Panchayat level needs to be implemented.
7. FOSSIL FUEL RESERVES
Fossil fuels supply most of the energy
consumed today. They are relatively
concentrated and pure energy sources and technically easy to exploit, and provide cheap energy. Presently oil (40%), natural gas (22.5%), coal (23.3%), hydroelectric (7.0%), nuclear (6.5%), and biomass and others (0.7%) provide almost all of the world's energy requirements. However, the reserves of fossil fuels are limited, as under:
Conservative predictions are that
conventional oil production will peak in
2007.
The pessimists predict a peak for
conventional gas production between 2010
and 2020.
There are today 200 years of economically
exploitable reserves of coal at the current
rate of consumption.
The raw material for nuclear power i.e.
uranium reserves will last for 50 years at the
present rate of use.
(Though there are alternative raw materials such as thorium, this technology is yet to be developed.) Hence the need was felt to explore and develop renewable energy sources to meet the ever-growing demand for energy.
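The reserve lifetimes quoted above are reserves-to-production (R/P) ratios assuming a constant rate of consumption. A minimal arithmetic sketch follows; the reserve and production figures used are assumed examples only, not data from the paper.

# Reserves-to-production lifetime at a constant rate of consumption (figures are assumed examples)
def reserve_lifetime_years(reserves, annual_production):
    return reserves / annual_production

if __name__ == "__main__":
    # e.g. ~900 billion tonnes of coal consumed at ~4.5 billion tonnes/year -> ~200 years
    print(f"Coal: {reserve_lifetime_years(900e9, 4.5e9):.0f} years at the current rate")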
Issues:
1. Habitat Hazards
Some renewable energy systems entail unique environmental problems. For instance, wind turbines can be hazardous to flying birds, while hydroelectric dams can create barriers for migrating fish. Burning biomass and biofuels causes air pollution similar to that of burning fossil fuels, although it causes a lower greenhouse effect since the carbon placed in the atmosphere was already there before the plants were grown.
2. Proximity to Demand
Significant resources are often located at a distance from the major population centers where electricity demand exists. Exploiting such resources on a large scale is likely to require considerable investment in transmission and distribution networks as well as in the technology itself.
3. Availability
One recurring criticism of renewable sources is their intermittent nature. Solar energy, for example, can only be expected to be available during the day (50% of the time). Wind energy intensity varies from place to place and somewhat from season to season. A constant stream of water is often not available throughout the year for generating optimum hydro power.
Conclusion:
Keeping in view the reserves of the fossil fuels and the economic concerns, these fuels are likely to dominate the world primary energy supply for another decade, but environmental scientists have warned that if the present trend is not checked then by 2100 the average temperature around the globe will rise by 1.4 to 5.8 degrees Celsius, which will cause an upsurge in sea water levels, drowning all lands at low elevation along the coastal lines. So the world has already made a beginning to bring about the infrastructural changes in the energy sector so as to be able to choose the renewable energy development trajectory. In developing countries, where a lot of new energy production capacity is to be added, the rapid increase of renewables is, in principle, easier than in the industrial countries, where existing capacity would need to be converted if a rapid change were to take place. That is, developing countries could have the competitive advantage for driving the world market. However, strong participation of developed countries is needed, since the majority of energy technologies in use in developing countries have been developed and commercialized in developed countries first. Nevertheless, India must give more thrust to research and development in the field of non-conventional energy sources, not only to mitigate the greenhouse effect but also to lessen dependence on oil/gas imports, which consume a major chunk of the foreign exchange reserve. It is also clear that an integrated energy system consisting of two or more renewable energy sources has the advantage of stability and reliability and is economically viable. Last but not the least, it is for the citizens also to believe in the power of renewable energy sources, and to understand its necessity and importance.
References:
1) Overview of Power Sector in India 2005, IndiaCore.com.
2) C.R. Bhattacharjee, "Wanted an Aggressive Outlook on Renewable Energy", Electrical India, vol. 45, no. 11, pp. 147-150, Nov. 2005.
3) Pradeep K. Katti, Dr. Mohan K. Khedkar, "Photovoltaic and Wind Energy", Electrical India, vol. 45, no. 11, pp. 151-155, Nov. 2005.
4) Kadambini Sharma, "Renewable Energy: The Way to Sustainable Development", Electrical India, vol. 42, no. 14, pp. 20-21, Jul. 2002.
5) H. Ravishankar Kamath, P.N. Hrishikesh, Sandeep Baidwan, P.N. Sreedhar, C.R. Bhattacharjee, "Application of Biogas Energy for Rural Lighting", Electrical India, vol. 42, no. 21, pp. 33-35, Nov. 2002.
6) B. Siddarth Baliga, "Renewable Energy Sources", Electrical India, vol. 44, no. 11, pp. 44-51, Nov. 2004.
7) C.R. Bhattacharjee, "Commercial Approach to Solar Power in India", Electrical India, vol. 43, no. 8, pp. 52-56, May 2003.
8) P.M. Nayar, "Photovoltaic Development and Use in India", Electrical India, vol. 43, no. 7, pp. 44-50, July 2003.
Presented by:
1.B.CHANDRA MOHAN
III B.Tech, (EEE)
06F21A0206
[email protected]
2.B.RAMESH
III B.Tech, (EEE)
06F21A0246
[email protected]
GATES INSTITUTE OF TECHNOLOGY
Gooty
ABSTRACT:
Urban electrical power systems with
steep demand increase need easily located
solutions with short lead time from decision to
transmission. While AC cable solutions can offer
sufficient power ratings, the problems of load
controllability and short circuit power increase
with every added circuit. These problems may be
countered with Voltage Source Converter (VSC)
based technology using Cables for transmission,
such as HVDC Light. This technology offers up
to 500 MW per station with a small footprint, ideal for infeed to city centers. Fast implementation is
possible thanks to modular pre-assembled design
and extruded polymer underground cables.
System benefits from the VSC technology, such
as independent full active and reactive power
control and no added short circuit power makes
it easy to apply in a heavily loaded grid. From an
environmental point of view, the DC cable technology gives virtually no alternating magnetic field and no risk of oil leakage. Higher transmission capacity is possible through polymer DC cables as compared to equivalent AC cables. A number of different topologies are possible for single or multi-infeed, giving large
freedom of design to adapt to each specific
network situation.
Starting with a brief history of the evolution of HVDC Light technology, the paper gives the definition of HVDC Light. The paper focuses on the HVDC Light converter technology and on the Light cable. The advantages of HVDC Light cables over AC underground cables are discussed. Active and reactive power control by HVDC Light, the emergency power and black start capability of HVDC Light, and its applications are then discussed, taking the relevant economic and environmental considerations into account.
INTRODUCTION:
As the size of a concentrated load
in cities increases due to the on-going
urbanization, metropolitan power networks have
to be continuously upgraded to meet the demand.
Environmental issues are also becoming more
and more of a concern all over the world. Land
space being scarce and expensive, substantial
difficulties arise whenever new right-of-way is to
be secured for the feeding of additional power
with traditional transmission lines. With
increasing power levels, the risk of exceeding the
short-circuit capability of existing switchgear
equipment and other network components
becomes another real threat to further expansion.
The HVDC Light system is a solution to these
problems.
This technology is designed to
transmit large quantities of power using
underground cables and at the same time adds
stability and power quality to the connected
networks. The cables are easily installed
underground using existing right of ways,
existing cable ducts, roads, subways, railways or
channels. The HVDC Light converter stations
are compact and by virtue of their control, they
do not contribute to the short-circuit levels. As
its name implies, HVDC Light is a high voltage,
direct current transmission technology and is
well suited to meet the demands of competitive
power market for transmission up to 1100 MW.
EVOLUTION OF HVDC LIGHT
TECHNOLOGY:
Recent development efforts in
transmission technology have focused on
compact, lightweight and cost-effective, so-called voltage source converters (VSC), using
novel high power semiconductors that can be
switched at high frequencies. In parallel, a
scientific and engineering breakthrough in
extruded DC cable technology makes it now
possible to manufacture lightweight, high-power
DC cables that are easily installed, using
conventional ploughing techniques.
By combining the advances made
in VSC and DC cables, a new breed of electricity
transmission and distribution technology
emerges: The "HVDC Light" technology. The
new technology extends the economical power
range for High Voltage Direct Current
transmission (HVDC) downwards to just a few
MW. Transmission of electricity over long
distances using underground DC cables is both
economical and technically advantageous.
HVDC Light is thus an alternative to
conventional AC transmission or local
generation in many situations. By feeding a
remote load from the main grid, it is feasible to
shut down small, expensive and possibly
polluting generation plants, as well as eliminate
the associated fuel transport. This makes the new
technology very attractive from both an
economical and environmental point of view.
WHAT IS HVDC LIGHT?
HVDC Light is the successful and
environmentally-friendly way to design a power
transmission system for a submarine cable, an
underground cable or network interconnection.
HVDC Light is HVDC technology based on
voltage source converters (VSCs). The new
transmission technology is called "HVDC
Light", thus emphasizing the lightweight and
compactness features intrinsic to it as well as its
competitiveness in applications at the low end of the power scale.
HVDC LIGHT CABLES:
The cable
system is complete with cables, accessories and
installation services. The cables are operated in
bipolar mode, one cable with positive polarity
and one cable with negative polarity.
The HVDC Light cable is a new-design, triple-extruded, polymeric-insulated DC cable, which has been successfully type-tested to 150 kV DC. It is a new lightweight cable similar in appearance and characteristics to a standard AC XLPE cable, except that the problem of space charges, which break down the insulation when AC XLPE cables are used on DC, has been overcome with this new design.
Their strength and flexibility make the HVDC
Light cables well suited for severe installation
conditions both underground as a land cable and
as a submarine cable. HVDC Light has the
capability to rapidly control both active and
reactive power independently of each other, to
keep the voltage and frequency stable. This gives
total flexibility regarding the location of the
converters in the AC system since the
requirements of short-circuit capacity of
connected AC network is low (SCR down to
zero).
The submarine cables can be laid in
deeper waters and on rough bottoms.
The land cables can be installed at lower cost with the ploughing technique.
HVDC cables can now also go overhead as aerial cables.
HVDC LIGHT CONVERTER
TECHNOLOGY:
Conventional HVDC converter
technology is based on the use of line-commutated or phase-commutated converters (PCC). With the appearance of high-switching-frequency components, such as IGBTs (Insulated Gate Bipolar Transistors), it becomes advantageous to build VSCs (Voltage Source Converters) using PWM (Pulse Width Modulation) technology.
HVDC Light uses Pulse Width
Modulation to generate the fundamental voltage.
It controls the magnitude and phase of the
voltage freely and almost instantaneously and
allows independent and very fast control of
active and reactive power flows. A PWM voltage source converter does not contribute to the short-circuit power, as the AC current can be controlled by the converter valve.
The key part of the HVDC Light
converter consists of an IGBT valve bridge. No
special converter transformers are necessary
between the valve bridge and the AC-grid. A
converter reactor can separate the fundamental
frequency from the raw PWM waveform. If the
desired DC voltage does not match the AC
system voltage, a normal AC transformer may be
used in addition to the reactor. A small shunt
AC-filter is placed on the AC-side of the reactor.
On the DC-side there is a DC capacitor that
serves as a DC filter.
ACTIVE AND REACTIVE POWER
CONTROL:
The fundamental frequency
voltage across the converter reactor defines the
power flow between the AC and DC sides.
Changing the phase angle between the
fundamental frequency voltage generated by the
converter and the voltage on the AC bus controls
the active power flow between the converter and
the network. The reactive power flow is
controlled by the width of the pulses from the
converter bridge.
In an HVDC Light system the
active and reactive power can be controlled at the
same time like in a synchronous converter, but
the control is much faster, in the millisecond
range. This fast control makes it possible to
create any phase angle or amplitude, which can
be done almost instantaneously, providing independent control of both active and reactive
power. From a system point of view it acts as a
motor or a generator without mass.
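To make the relationship concrete, the familiar power-transfer expressions across the converter reactance X, P = V1*V2*sin(delta)/X and Q = (V1^2 - V1*V2*cos(delta))/X, can be evaluated numerically. The sketch below is illustrative only; the per-unit voltages, angle and reactance are assumed example values, not figures from the paper.

import math

# Active and reactive power transfer across the converter reactor (illustrative sketch)
#   P is set by the phase angle between converter and AC-bus voltages,
#   Q is set by the converter voltage magnitude.
def power_flow(v_converter, v_bus, delta_deg, reactance_ohm):
    d = math.radians(delta_deg)
    p = v_converter * v_bus * math.sin(d) / reactance_ohm
    q = (v_converter ** 2 - v_converter * v_bus * math.cos(d)) / reactance_ohm
    return p, q

if __name__ == "__main__":
    p, q = power_flow(v_converter=1.02, v_bus=1.00, delta_deg=10.0, reactance_ohm=0.15)  # per-unit values
    print(f"P = {p:.3f} pu, Q = {q:.3f} pu")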
EMERGENCY POWER AND BLACK START CAPABILITY:
A VSC transmission system will
be a very valuable asset during a grid restoration.
It will be available almost instantly after the
blackout and does not need any short circuit
capacity in order to become connected to the
grid. The benefits will differ if one or both ends
are exposed to the blackout. The following list
highlights some aspects:
No need for short circuit power for
commutation. Black start capability if
equipped with a small diesel generator
feeding auxiliary power (or power from
another grid).
Fast voltage control is available in both
ends virtually instantly after the
auxiliary power is back.
Can energize a few transmission lines at
a lower voltage level avoiding severe
Ferranti over voltage and allow remote
end connection of transformers/reactors
at a safer voltage level.
When active power is available in the
remote end the VSC connection can
feed auxiliary power to local plants, making sure that they have a stable frequency to synchronize to.
When the local plants are synchronized
to the grid they can ramp up power
production at a constant and safe speed
and do not initially have to participate
in frequency control.
Compared with AC
underground cables the HVDC
Light cable has some
significant advantages to be
considered:
DC cables require only two cables
between each converter station.
DC-cables have no technical limit to
distance.
DC cables can carry up to 50% more
power than the equivalent AC cable.
Being considerably more compact and
lightweight than classic HVDC, HVDC
Light enables transmission of electrical
power to, from, and between offshore
installations where distances prohibit
AC transmission
APPLICATION OF NEW DC
TECHNOLOGY: HVDC-Light
HVDC light is expected to
become the preferred alternative in many
electricity supply applications such as:
Connection of small-dispersed
electricity generators to a grid:
With the independent control
of reactive and active power afforded by the
VSC scheme, the varying operating
conditions of the wind power units can be
allowed without negative impact on the
power quality level of the grid. The
underground cable also helps in minimizing
the impact of environmental factors on the
reliability and availability of the
transmission while keeping the visual impact
on the environment down to a minimum.
Furthermore, the VSC technology allows a
variable frequency to be used in the wind
generator, thus making the plant operate at
the speed that gives maximum power. The
variable speed operation scheme can boost
the energy delivery of the wind power plant
by 5-25%, thus improving the economy of
the installation. Obviously, the HVDC-light
technology is very suited for the collection,
transmission and distribution of electricity
from small, run-of-the-river, hydro power
plants.
Feeding electric power to large and
rapidly growing cities:
As the size of a concentrated
load increases due to on-going urbanization,
the metropolitan power network has to be
continuously upgraded to meet the demand.
Land space being scarce and expensive,
substantial difficulties arise whenever new
right-of-way is to be secured for the feeding
of additional power. Furthermore, with
increasing power levels, the risk of
exceeding the short-circuit capability of
switchgear equipment and other network
components becomes a real threat to further
expansion. Consequently, new power infeed solutions are required. The HVDC-light
technology meets both demands: The cables
are easily installed underground, the
converter stations are compact and by virtue
of their control, they do not contribute to
short-circuit levels.
Feeding of electric power to
remotely located loads:
Small cities, mining districts,
villages and other places that are located far from
any electrical network, can now be economically
fed from larger networks via an HVDC-light
link. In this way, the advantages afforded by
large electricity networks are brought to
basically any place on land or even offshore. In
the past, for loads in the range below 100 MW,
local generation was necessary if the distance
between the existing electric grid and the load
was beyond what is possible to achieve
economically using traditional AC technology.
The new DC technology makes it possible to
cost effectively bridge across large distances
with a minimum of losses.
ENVIRONMENTAL
CONSIDERATIONS:
Magnetic fields are eliminated since
HVDC Light cables are laid in pairs
with DC currents in opposite directions.
It offers no overhead lines, neutral
electromagnetic fields, oil-free cables
and compact converter stations.
The cable insulation is polymer-based and the converters are power-electronics based; neither is dangerous to the environment.
CONCLUSION:
The technical development that has
recently taken place in the field of electrical
transmission, coupled with a changing business environment of the electricity supply industry and the de-regulation of the energy sector at large, leads to a growing attractiveness of electrical
transmission.
The hallmarks of the new technology
are: short lead times, cost effectiveness,
compactness, environmental friendliness, and
ease of application. It is anticipated that this
technology will quickly become the preferred
alternative for transportation of energy, in many
application cases where electricity transmission
was not considered previously.
REFERENCES:
B. Normark, D. Ravemark, "Underground Transmission with HVDC Light".
"Power System Stability Benefits with VSC DC-Transmission Systems", CIGRE.
K. Eriksson, "HVDC Light: An Excellent Tool for City Center Infeed".
SCADA
IN
POWER DISTRIBUTION SYSTEMS
BAPATLA ENGINEERING COLLEGE
BAPATLA
K.Lakshmi teja N.Akhileswari
III/IV B.TECH
ELECTRICAL AND ELECTRONICS ENGINEERING
EMAIL ID: [email protected]
[email protected]
ABSTRACT:
The efficient and authentic power supply to the consumer is the primary function of any distribution system. So, in distribution systems certain measures are taken for supervision, control, operation, measurement and protection. These are highly onerous works that take a lot of manpower. So, the need for advanced automatic control systems to reach the required destination is becoming mandatory, to supersede the antiquated ways that persist in the present distribution system.
In this paper we emphasize mainly the SCADA (Supervisory Control And Data Acquisition) system, the most sophisticated automatic control system, being used in distribution automation for quality power. The paper commences with a basic introduction to what SCADA is, then continues by describing the hardware components and basic architecture of a SCADA system used in distribution automation, and clearly elucidates the software components that are installed in a SCADA system which can be used in distribution power systems for quality power.
The paper then takes up applications of SCADA, the exalted aspect, in distribution systems. The applications include control, operation, supervision, and measurement and instrumentation services of SCADA in distribution systems. This is the latest trend in power system protection and control.
1. INTRODUCTION:
The Indian electric power supply system is one of the most complex power grid systems. So, efficient and reliable power supply is the major concern of our supply system. The losses that occur in transmission and distribution are very large in comparison with major developed countries. This occurs because of the inefficient safety, monitoring and control devices that persist in the present distribution system. The most advanced automatic control system, which can perform operations like monitoring and control, is SCADA. SCADA is the application of computers in power systems. Distribution automation is the major upgradation of any distribution system. This can be achieved by implementing SCADA in distribution systems.
SCADA is an acronym for Supervisory Control and Data Acquisition. SCADA systems are used to monitor and control a plant or equipment in industries such as telecommunications, waste control, energy, oil and gas refining, and transportation. These systems encompass the transfer of data between a SCADA central host computer and a number of Remote Terminal Units (RTUs) and/or Programmable Logic Controllers (PLCs), and between the central host and the operator terminals. These systems can be relatively simple, such as one that monitors environmental conditions of a small office building, or very complex, such as a system that monitors all the activity in a nuclear power plant or the activity of a municipal water system. Traditionally, SCADA systems have made use of the Public Switched Network (PSN) for monitoring purposes. Today many systems are monitored using the infrastructure of the corporate Local Area Network (LAN)/Wide Area Network (WAN). Wireless technologies are now being widely deployed for purposes of monitoring.
A SCADA system can be implemented with the hardware and software components that constitute a whole SCADA system. Using a SCADA system, the various application programs that can be implemented in power supply systems are fault location, load balancing, load shedding etc. A detailed description of the hardware components, software components and application programs is now given.
2. HARDWARE COMPONENTS:
The components of a SCADA system are field instrumentation, remote stations,
Communication Network (CN) and Central Monitoring Station (CMS).
2.1 Field instrumentation:
Field instrumentation generally comprises sensors, transmitters and actuators that are directly interfaced to the plant or equipment and generate the analog and digital signals that will be monitored by the remote station. Signals are also conditioned to make sure they are compatible with the inputs/outputs of the Remote Terminal Unit (RTU) or Programmable Logic Controller (PLC) at the remote station. Field instrumentation also refers to the devices that are connected to the equipment or machines being controlled and monitored by the SCADA system: sensors for monitoring certain parameters and actuators for controlling certain modules of the system.
2.2 Remote stations:
The remote station is installed at the remote plant or equipment being monitored and controlled by the central host computer. This can be an RTU or a PLC. Field instrumentation, connected to the plant or equipment being monitored and controlled, is interfaced to the remote station to allow process manipulation at a remote site. The remote station is also used to gather data from the equipment and transfer it to the central SCADA system.
Fig 1: RTU on the pole top
2.3 Communication Network:
The Communication Network (CN) refers to the communication equipment needed to
transfer data to and from different sites. The medium used can be cable, telephone, radio, fiber optic or a satellite communication system.
2.4 Central Monitoring Station:
Fig 2: RTU in a substation
The Central Monitoring Station (CMS) is the master unit of the SCADA system. Its function is to collect the information gathered by the remote stations and to generate the necessary action for any event that is detected. The CMS can have a single-computer configuration or it can be networked to workstations to facilitate sharing of information from the SCADA system. It uses a Man Machine Interface (MMI) to monitor the various types of data needed for the operation. An MMI program runs on the CMS computer. A mimic diagram of the whole plant or process can be displayed onscreen for easier identification with the real system. Each I/O point of the remote units can be displayed with a corresponding graphical representation and the present I/O reading. Set-up parameters such as trip values, limits, etc. are entered in this program and downloaded to the corresponding remote units for updating of their operating parameters.
Fig 3: A typical SCADA system architecture
There are two typical network configurations for SCADA systems: the point-to-point and the point-to-multipoint configurations. The point-to-point configuration is the simplest set-up for a telemetry system. Here data is exchanged between two stations; one station can be set up as the master and the other as the slave. The point-to-multipoint configuration is where one device is designated as the master unit to several slave units. The master is usually the main host and is located in the control room, while the slaves are the remote units. Each slave is assigned a unique address or identification number.
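A minimal sketch of the point-to-multipoint idea is shown below: the master cycles through the slave addresses, requests data, and each addressed slave replies. It is illustrative only; the message layout, addresses and readings are assumed and do not represent any real SCADA protocol.

# Point-to-multipoint polling sketch (illustrative; frame layout and values are assumed)
SLAVE_ADDRESSES = [1, 2, 3]                          # unique address per remote unit
FIELD_DATA = {1: {"V": 11.02, "I": 85.4},            # stand-in RTU measurements, keyed by address
              2: {"V": 10.97, "I": 92.1},
              3: {"V": 11.10, "I": 74.8}}

def poll(address):
    """Master sends a request to one slave address and returns its reply."""
    request = {"to": address, "cmd": "READ_ALL"}
    reply = {"from": address, "data": FIELD_DATA[address]}   # stands in for the comms link
    return reply

def scan_cycle():
    """One scan cycle: the master polls every slave in turn."""
    for addr in SLAVE_ADDRESSES:
        reply = poll(addr)
        print(f"RTU {reply['from']}: {reply['data']}")

if __name__ == "__main__":
    scan_cycle()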
3. SOFTWARE COMPONENTS:
3.1 Data Acquisition and Processing:
This serves as a data collector from the devices to which our SCADA system is connected and presents the result as processed data to the user. Data acquisition can be done at multiple scan rates and uses different protocols. The data can be fetched as a whole or as a group, and also by report-by-exception. Data processing means conversion of the fetched data into engineering units, zero suppression, reasonability checks, and a calculation subsystem. So, the user can use this processed data for further purposes.
3.2 Control:
Users are allocated to groups, which have defined read/write access privileges to the process parameters in the system and often also to specific product functionality. The allocated users can have access to the devices which are to be controlled. Control can be either single or group, open- or closed-loop control. The execution of control can be carried out at selected places, can be immediate, or can be scheduled for a required time, etc.
3.3 Man machine interface:
The products support multiple screens, which can contain combinations of synoptic diagrams and text. They also support the concept of a "generic" graphical object with links to process variables. These objects can be "dragged and dropped" from a library and included in a synoptic diagram.
Most of the SCADA products that were evaluated decompose the process into "atomic" parameters (e.g. a power supply current, its maximum value, its on/off status, etc.) to which a Tag-name is associated. The Tag-names used to link graphical objects to devices can be edited as required. The products include a library of standard graphical symbols, many of which would, however, not be applicable to the type of applications encountered in the experimental physics community.
Standard window editing facilities are provided: zooming, re-sizing, scrolling etc. Online configuration and customization of the MMI is possible for users with the appropriate privileges. Links can be created between display pages to navigate from one view to another.
3.4 Alarm handling:
Alarm handling is based on limit and status checking and is performed in the data servers. More complicated expressions (using arithmetic or logical expressions) can be developed by creating derived parameters, on which status or limit checking is then done by the data server. The alarms are logically handled centrally, i.e., the information only exists in one place and all users see the same status (e.g., the acknowledgement), and multiple alarm priority levels (in general many more than 3 such levels) are supported.
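A minimal sketch of limit checking with a derived parameter is given below. It is illustrative only; the tag names, limits and priority levels are assumed examples, not part of any particular SCADA product.

# Limit-and-status alarm checking with a derived parameter (illustrative sketch)
LIMITS = {"PS_current": {"high": 100.0, "priority": 2},
          "PS_temp":    {"high": 70.0,  "priority": 1},
          "PS_power":   {"high": 45.0,  "priority": 3}}   # limit on a derived parameter

def check_alarms(readings):
    """Return (tag, value, priority) for every reading that violates its high limit."""
    # Derived parameter built from two "atomic" parameters before checking
    readings = dict(readings, PS_power=readings["PS_current"] * readings["PS_voltage"])
    alarms = []
    for tag, rule in LIMITS.items():
        value = readings.get(tag)
        if value is not None and value > rule["high"]:
            alarms.append((tag, value, rule["priority"]))
    return alarms

if __name__ == "__main__":
    print(check_alarms({"PS_current": 120.0, "PS_voltage": 0.4, "PS_temp": 65.0}))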
3.5 Logging and Archiving:
The terms logging and archiving are often used to describe the same facility. However, logging can be thought of as medium-term storage of data on disk, whereas archiving is long-term storage of data either on disk or on another permanent storage medium. Logging is typically performed on a cyclic basis, i.e., once a certain file size, time period or number of points is reached, the data is overwritten. Logging of data can be performed at a set frequency, or only initiated if the value changes or when a specific predefined event occurs. Logged data can be transferred to an archive once the log is full. The logged data is time-stamped and can be filtered when viewed by a user. The logging of user actions is in general performed together with either a user ID or a station ID. There is often also a VCR facility to play back archived data.
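The cyclic, change-initiated logging described above can be sketched with a fixed-size circular buffer of time-stamped points, as below. This is illustrative only; the buffer size and deadband are assumed values and the class is not part of any SCADA product.

from collections import deque
from datetime import datetime

# Cyclic (circular) logging sketch: once the configured number of points is reached,
# the oldest entries are overwritten. Buffer size and deadband are assumed examples.
class CyclicLog:
    def __init__(self, max_points=1000, deadband=0.5):
        self.buffer = deque(maxlen=max_points)   # oldest entries drop off automatically
        self.deadband = deadband
        self.last_value = None

    def record(self, tag, value):
        """Log only when the value has changed by more than the deadband."""
        if self.last_value is None or abs(value - self.last_value) > self.deadband:
            self.buffer.append((datetime.now().isoformat(timespec="seconds"), tag, value))
            self.last_value = value

if __name__ == "__main__":
    log = CyclicLog(max_points=5)
    for v in (11.0, 11.1, 12.0, 12.1, 13.5):
        log.record("feeder_voltage_kV", v)
    print(list(log.buffer))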
3.6 Automated mapping and facilities management (AM/FM):
SCADA systems can be made to use a GUI. The GUI can be used to display maps, graphical representations of the required area, and graphical representations of the required data. Using SCADA, these maps can be layered, zoomed, scrolled and panned. The historical data of the machines can also be used.
4. APPLICATION PROGRAMS:
The various application programs that can be implemented using SCADA systems are clearly explained here. The following are the applications that can be used for remote monitoring, control, safety, efficient utilization of resources etc.
4.1 Fault location, isolation and Service Restoration:
This function determines alternate paths for restoring service to the load points affected by a fault on a section of the feeder, considering current loading conditions. Most of the rural feeders do not have an alternate supply for service restoration. In urban areas, many alternate paths are available to a feeder; therefore, this function will be more effective there. To implement this function, load switches or sectionalizers are needed at selected feeder locations. Earlier, sectionalizers were air-break switches without any remote-control features. All such switches should be replaced with remotely controllable switches.
4.2 Maintaining good voltage profile:
This function controls the capacitor banks and voltage regulators to provide a good voltage profile in the distribution feeders. An appropriate schedule for switching capacitor banks on/off and raising/lowering voltage regulator taps is based on the feeders' reactive load curves, in order to get good voltage profiles and reduce energy losses.
4.3 Load Balancing:
This function distributes the system's total load among the available transformers and feeders in proportion to their capacities. As explained above, there is a need to replace the existing switches with remotely controllable switches in order to reconfigure the network for load balancing.
4.4 Load Control:
The Load Management Function is divided into four categories:
(a) During summer there is usually a generation shortage. Therefore, loads need to be shed for long durations. A restriction and control schedule is worked out, based on which the loads at different substations are shed on a rotation basis. This function will automatically shed the loads according to the schedule. Provisions to change the schedule are also provided.
(b) Emergency-Based Load Shedding: During emergencies, the utility needs to shed some load to keep up the balance between generation and demand. Instructions are sent to the respective substations to shed load. Based on the amount of relief requested, the operator would select some loads and shed them. This function will help to identify the loads to be shed considering their priority, the time when they were last shed and the duration of the last interruption, to ensure that only the required amount of load shedding is done.
(c) Agricultural Pump Control: Agricultural loads are categorized into groups. This function controls the agricultural loads automatically, based on a predefined schedule. Provision to change schedules is also provided.
(d) Frequency-Based Automatic Load Shedding: In this implementation, frequency-based automatic load shedding is carried out by software using this function. Appropriate loads are shed by the RTU, based on priorities and the actual amount of load, whenever the system frequency crosses the pre-set values. This is done as a closed-loop function in the RTU. To sense the system frequency, fast-response (about 200 msec) frequency transducers are required. Presently it has been difficult to find such fast-response frequency transducers.
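The priority-and-threshold logic described in (d) can be sketched as follows. The frequency stages, feeder names and load values are assumed examples only; a real implementation would run in the RTU against its own configuration.

# Frequency-based load shedding by priority (illustrative sketch; all values are assumed)
SHED_STAGES = [                      # shed lower-priority loads first as frequency falls
    {"below_hz": 49.2, "priority": 3},
    {"below_hz": 48.8, "priority": 2},
    {"below_hz": 48.4, "priority": 1},
]
FEEDERS = [{"name": "AgriPump-7", "mw": 2.0, "priority": 3, "on": True},
           {"name": "Feeder-12",  "mw": 5.0, "priority": 2, "on": True},
           {"name": "Feeder-03",  "mw": 8.0, "priority": 1, "on": True}]

def shed_on_frequency(frequency_hz):
    """Open feeders whose priority stage is armed at the measured frequency."""
    tripped = []
    for stage in SHED_STAGES:
        if frequency_hz < stage["below_hz"]:
            for feeder in FEEDERS:
                if feeder["on"] and feeder["priority"] == stage["priority"]:
                    feeder["on"] = False
                    tripped.append(feeder["name"])
    return tripped

if __name__ == "__main__":
    print("Tripped at 48.7 Hz:", shed_on_frequency(48.7))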
4.5 Remote metering:
The function of remote metering is to read data from the meters and to provide information to the operator on the consumption patterns of the high-value HV customers. Its main feature is to provide multiple tariffs to the customers to encourage them to shift their loads from peak times to off-peak times. This function also provides meter-tampering detection.
4.6 Maintaining Maps:
The function of AM/FM is to have an integrated display of the geographical maps and single-line schematics of the electrical distribution network to facilitate:
- Display of dynamic information of various devices
- Import of scanned maps in standard formats
- Functions like map information layering, zooming, scrolling and panning
- Extraction of historical data of the devices from the database
Fig 4. Typical example of a geographical map
4.7 Fuse-off-call operations:
This consumer-aid application function responds to complaints from consumers. It has the following features: it accepts interruption/restoration data from the operator; accepts DT trip/close information from SCADA; identifies the interruption source whenever possible and gives information on the outage effects to the operator; and displays the energized/de-energized status of the consumer. This function will improve the response time to consumer complaints.
4.8 Energy accounting:
This function helps in arriving at the system's load patterns, which helps in planning expansion. It also helps in detecting abnormal energy consumption patterns of the consumers and in identifying high-loss areas. This is done by processing the data obtained by the remote metering function and the data obtained from the substation.
5. CONCLUSION:
Because of the application programs and advantages explained above, SCADA systems can be used for efficient, reliable and safe power supply systems. SCADA is a rapidly advancing computer application, so even once a SCADA system is installed, its upgradation can be done easily. So SCADA systems should be implemented in all the power industries.
References:
(1) NDR Sarma, "Rapid Growth Leads to System Automation Efforts", Transmission and Distribution World, Sept. 1997.
(2) David L. Brown, James W. Skeen, Parkash Daryani, Farrokh A. Rahimi, "Prospects for Distribution Automation at Pacific Gas & Electric Company", IEEE Transactions on Power Delivery, Vol. 6, No. 4, October 1991, pp. 1946-1954.
S.V.U.COLLEGE OF ENGINEERING
TIRUPATHI
DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING
Technical Paper on
ECO-FRIENDLY POWER GENERATION
Presented by
B.SANDEEP JOYSON.V
Roll.No:107329 Roll.No:107311
Dept of EEE Dept of EEE
SVUCE SVUCE
Tirupathi. Tirupathi.
e-mail:[email protected] e-mail:[email protected]
ABSTRACT:
Eco-friendly power is energy generated from natural resources such as sunlight, wind, rain, tides and geothermal heat, which are renewable. Hence eco-friendly power is otherwise known as green power or renewable energy. "Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources." The paper covers the importance of green power in today's modern world and the reasons for its importance; several types of green power (wind, water, solar, biofuel, geothermal etc.), with their description, analogy and uses; and scientific methods of generation of green power (the gas displacement system and the space solar power system), with their definition, process, advantages, disadvantages and requirements. This technology on a larger scale, combined with already demonstrated wireless power transmission, can supply nearly all the electrical needs of our planet. It doesn't help to remove fossil fuels from vehicles if you just turn around and use fossil fuels again to generate the electricity to power those vehicles. Space solar power can provide the needed clean power for any future electric transportation system. Green power generation in India through various technologies by BHEL and SRAAC, applications of green power in modern trends (the RITI coffee printer or green printer), and companies supporting the generation of green power are also described, followed by the conclusion.
KEYWORDS:
Green power, generation, space solar power system, gas displacement system, green printer.
INTRODUCTION:
Renewable energy is energy generated from natural resources such as sunlight, wind, rain, tides and geothermal heat, which are renewable (naturally replenished). In 2006, about 18% of global final energy consumption came from renewables, with 13% coming from traditional biomass, such as wood-burning. Hydroelectricity was the next largest renewable source, providing 3% (15% of global electricity generation), followed by solar hot water/heating, which contributed 1.3%. Modern technologies, such as geothermal energy, wind power, solar power, and ocean energy, together provided some 0.8% of final energy consumption. Climate change concerns coupled with high oil prices, peak oil and increasing government support are driving increasing renewable energy legislation, incentives and commercialization. Investment capital flowing into renewable energy climbed from $80 billion in 2005 to a record $100 billion in 2006. Wind power is growing at the rate of 30 percent annually, with a worldwide installed capacity of over 100 GW, and is widely used in several European countries and the United States. The IEA reported that the replacement of current technology with renewable energy could help reduce CO2 emissions by 50% by 2050.
The majority of renewable energy technologies are powered by the sun. The Earth-Atmosphere system is in equilibrium such that heat radiation into space is equal to incoming solar radiation; the resulting level of energy within the Earth-Atmosphere system can roughly be described as the Earth's "climate". The hydrosphere (water) absorbs a major fraction of the incoming radiation. Most radiation is absorbed at low latitudes around the equator, but this energy is dissipated around the globe in the form of winds and ocean currents. Wave motion may play a role in the process of transferring mechanical energy between the atmosphere and the ocean through wind stress. Solar energy is also responsible for the distribution of precipitation, which is tapped by hydroelectric projects, and for the growth of plants used to create biofuels. Renewable energy flows involve natural phenomena such as sunlight, wind, tides and geothermal heat, as the International Energy Agency explains:
"Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources."
WIND ENERGY:
Airflows can be used to run wind turbines. Modern wind turbines range from around 600 kW to 5 MW of rated power, although turbines with rated output of 1.5-3 MW have become the most common for commercial use. The power output of a turbine is a function of the cube of the wind speed, so as wind speed increases, power output increases dramatically. Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms.
Wind turbines
HYDRO POWER:
Hydroelectric energy is a term usually reserved for large-scale hydroelectric dams. Examples are the Grand Coulee Dam in Washington State and the Akosombo Dam in Ghana. Micro hydro systems are hydroelectric power installations that typically produce up to 100 kW of power. They are often used in water-rich areas as a Remote Area Power Supply (RAPS).
Ocean energy describes all the technologies to harness energy from the ocean and the sea:
Marine current power: similar to tidal stream power, it uses the kinetic energy of marine currents.
Ocean thermal energy conversion (OTEC): uses the temperature difference between the warmer surface of the ocean and the colder lower recesses; to this end, it employs a cyclic heat engine. OTEC has not been field-tested on a large scale.
Tidal power: captures energy from the tides.
SCIENTIFIC METHODS OF GREEN POWER GENERATION:
1) GAS DISPLACEMENT SYSTEM:
The company started tinkering with the idea of biomass heating in the 1980s. In 1999, it began operating a fully functional system in the plant, one still in use today. In those early days, Vidir burned sunflower pellets for energy. The price of the pellets subsequently went up and became cost-inefficient. The firm then switched to a coal-burning furnace, but discovered that maintenance costs were too high. It was then that Vidir Machine bought a straw-burning furnace and began experimenting with other types of biomass to produce heat. At first Dueck and his firm were told that they couldn't burn straw efficiently, as it produces silica as a by-product when burned, and that would clog the pipes in the system. They set to work and figured out a way to deal with the silica problem, a method for which they are now seeking a patent. The practice of burning straw for fuel has benefits for all involved. For the farmer, it provides an economical, practical way of getting rid of it rather than burning it in the field. For the environment, the straw burns cleanly, as biomass combustion is considered CO2 neutral. Commercialized, it means cheaper heating for residents. Consider that, according to the company, biomass heating from straw costs about 10 per cent of the price of natural gas. And it is a constantly renewable resource.
2) SOLAR SPACE ROVAR SYSTEM:
The United States and the world need to find new sources of clean energy. Space
Solar
Power gathers energy from sunlight in space and transmits it wirelessly to Earth
. Space solar
power can solve our energy and greenhouse gas emissions problems. Not just help,
not just take
a step in the right direction, but solve. Space solar power can provide large qu
antities of energy
to each and every person on Earth with very little environmental impact. The sola
r energy
available in space is literally billions of times greater than we use today. The
lifetime of the sun
is an estimated 4-5 billion years, making space solar power a truly long-term en
ergy solution. As
Earth receives only one part in 2.3 billion of the Sun's output, space solar pow
er is by far the
largest potential energy source available, dwarfing all others combined. Solar e
nergy is routinely
used on nearly all spacecraft today. This technology on a larger scale, combined
with already
demonstrated wireless power transmission can supply nearly all the electrical ne
eds of our
planet. It doesn't help to remove fossil fuels from vehicles if you just turn ar
ound and use fossil
fuels again to generate the electricity to power those vehicles. Space solar pow
er can provide the
needed clean power for any future electric transportation system.
Advantages of Space Solar Power
Unlike oil, gas, ethanol, and coal plants, space solar power does not emit greenhouse gases.
Unlike coal and nuclear plants, space solar power does not compete for or depend upon
increasingly scarce fresh water resources.
Unlike bio-ethanol or bio-diesel, space solar power does not compete for increasingly valuable
farm land or depend on natural-gas-derived fertilizer.
Space solar power can take advantage of our current and historic investment in aerospace
expertise to expand employment opportunities in solving the difficult problems of energy
security and climate change.
Space solar power can provide a market large enough to develop the low-cost space
transportation system that is required for its deployment. This, in turn, will also bring the
resources of the solar system within economic reach.
Disadvantages of Space Solar Power
High development cost. Yes, space solar power development costs will be very lar
ge,
although much smaller than American military presence in the Persian Gulf or the
costs
of global warming, climate change, or carbon sequestration. The cost of space so
lar
power development always needs to be compared to the cost of not developing spac
e
solar power.
Requirements for Space Solar Power
The technologies and infrastructure required to make space solar power feasible
include:
Low-cost, environmentally-friendly launch vehicles. Current launch vehicles are
too
expensive, and at high launch rates may pose atmospheric pollution problems of t
heir
own. Cheaper, cleaner launch vehicles are needed.
Large scale in-orbit construction and operations. To gather massive quantities o
f energy,
solar power satellites must be large, far larger than the International Space St
ation (ISS),
the largest spacecraft built to date. Fortunately, solar power satellites will b
e simpler than
the ISS as they will consist of many identical parts.
Power transmission. A relatively small effort is also necessary to assess how to
best
transmit power from satellites to the Earth's surface with minimal environmental i
mpact.
GREEN POWER GENERATION IN INDIA(BHEL):
INDIA has joined a select band of countries like the US, Germany, and Japan with
the
successful inhouse development of an eco-friendly power generation technology by
Bharat
Heavy Electricals Ltd (BHEL), suitable for stand-alone power generation in remot
e areas.
BHEL's corporate Research and Development (R&D) division has achieved this break
through by
successfully developing, testing and demonstrating a 50 KW `Phosphoric acid fuel
cell' (PAFC)
power pack for the first time in the country. The fuel cell power pack has been
developed as a
joint venture between BHEL, Ministry of Non-Conventional Energy Sources (MNES) a
nd Sree
Rayalseema Alkalies and Allied Chemicals Ltd (SRAAC). Fuel cells are modular uni
ts which
produce electricity efficiently, noiselessly and without pollution. They convert
fuels like
hydrogen or natural gas directly into electricity without any intermediate therm
al engines. The
only waste generated in the process is pure hot water or steam, which can also b
e harnessed.
APPLICATIONS OF GREEN POWER:
1)RITI COFFEE PRINTER :
The innovative RITI Coffee Printer can green your printing habits for you by turning your
leftover coffee grounds into eco-friendly ink for your printer. Who would have thought that the
dregs from your daily coffee could offer a sustainable ink source and replace all of those
environmentally un-friendly ink cartridges? It is easy to see why the RITI printer was selected as
one of fifty top entries in this year's Greener Gadgets competition.
2)WATER CAR(H2O CAR):
A water-fuelled car is an automobile that supposedly derives its energy directly
from
water. These vehicles may be claimed to produce fuel from water onboard with no
other energy
input, or may be a hybrid of sorts claiming to get energy from both water and a
conventional
source (such as gasoline).
Electrolysis of water:
Many alleged water-fuelled cars obtain hydrogen or a mixture of hydrogen and oxy
gen
(sometimes called "oxyhydrogen", "HHO", or "Brown's Gas") by the electrolysis of
water, a
process that must be powered electrically. The hydrogen or oxyhydrogen is then b
urned,
supposedly powering the car and also providing the energy to electrolyse more wa
ter. The
overall process can be represented by the following chemical equations:
2H2O → 2H2 + O2 [Electrolysis step]
2H2 + O2 → 2H2O [Combustion step]
Since the combustion step is the exact reverse of the electrolysis step, the ene
rgy released in
combustion exactly equals the energy consumed in the electrolysis step, and even a
ssuming
100% efficiency there would be no energy left over to power the car. In other word
s, such
systems start and end in the same thermodynamic state, and are therefore perpetu
al motion
machines, violating the first law of thermodynamics. More energy is therefore re
quired to drive
the electrolysis cell than can be extracted from burning the resulting hydrogen-
oxygen mixture.
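A small sketch of the first-law argument above, using the standard enthalpy of formation of
liquid water (about 285.8 kJ/mol, a textbook value rather than anything quoted in this paper):

DELTA_H_KJ_PER_MOL = 285.8  # energy to split one mole of liquid water; also the most that burning the H2 can return

def electrolysis_input_kj(moles_water, efficiency=1.0):
    return moles_water * DELTA_H_KJ_PER_MOL / efficiency   # drawn from the vehicle's own supply

def combustion_output_kj(moles_water, efficiency=1.0):
    return moles_water * DELTA_H_KJ_PER_MOL * efficiency   # returned, at best, by burning the gas

print(electrolysis_input_kj(10), ">=", combustion_output_kj(10))
# Even at 100 % efficiency the two sides are equal, so nothing is left over to move the car.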
CONCLUSION:
In a country like India, where power is in short supply in relation to the requi
rement,
renewable energy sources such as green power have huge potential. It is hearteni
ng to note that
as many as 12 States have taken initiatives to include green power in their energy portfolio. It is
better if the Government considers incentives and tax concessions for bio-power generation
units. Biofuel must also be encouraged and used extensively in India. The increase in the price of
crude oil has given rise to runaway inflation. All transport companies and under
takings both in
the public and private sectors must be made to use biofuel. All companies produc
ing biofuel
should be encouraged to expand production. Green power and biofuel are the energ
y sources of
the future. It is also necessary to promote extensive planting of Jatropha and o
ther similar plants.
A Paper presentation on
WIRELESS POWER TRANSMISSION
(WITRICITY)
Presented by
Kanaka raju.E p. Javed alikhan
III B. Tech, EEE. III B. Tech, EEE
Contact:9703576640 E-mail:[email protected]
Email:[email protected] Contact:9885504225
JNTU COLLEGE OF ENGINEERING
Anantapur
ABSTRACT:
The aim of this paper is to introduce a new system of transmitting the power whi
ch is
called wireless electricity or witricity. Witricity is based upon coupled resona
nt objects
to transfer electrical energy between objects without wires. The system consists
of a
Witricity transmitter (power source), and devices which act as receivers (electr
ical
load). It is based on the principle of resonant coupling and microwave energy tr
ansfers.
The action of an electrical transformer is the simplest instance of wireless ene
rgy transfer.
There are mainly two types of transfers i.e. short range and long range transmis
sion. The
short range are of 2-3metres where as the long range are of few kilometers.
Wireless transmission is ideal in cases where instantaneous or continuous energy
transfer
is needed, but interconnecting wires are inconvenient, hazardous, or impossible.
The
tangle of cables and plugs needed to recharge today's electronic gadgets could s
oon be a
thing of the past. The concept exploits century-old physics and could work over
distances
of many metres. Consumers desire a simple universal solution that frees them fro
m the
hassles of plug-in chargers and adaptors. "Wireless power technology has the pot
ential to
deliver on all of these needs." However, transferring the power is the important
part of
the solution.
Witricity, standing for wireless electricity, is a term coined by MIT researcher
s, to
describe the ability to provide electricity to remote objects without wires. Using self-resonant
coils in a strongly coupled regime, efficient non-radiative power transfer
over distances of up to eight times the radius of the coils can be done. Unlike
the
conduction-based systems, Witricity uses resonant magnetic fields to reduce wast
age
of power. Currently the project is looking for power transmissions in the range
of
100 watts.
With wireless energy transfer, the efficiency is a more critical parameter and t
his creates
important differences from the wireless data transmission technologies. To avoid
the
conflicts like recharging and carrying its appliances of electrical and electron
ic devices,
wireless power transmission is desirable. Wireless power transmission was origin
ally
proposed to avoid long distance electrical distribution based mainly on copper c
ables.
This can be achieved by using microwave beams and the rectifying antenna, or rec
tenna,
which can receive electromagnetic radiation and convert it efficiently to DC ele
ctricity.
Researchers have developed several techniques for moving electricity over long d
istances
without wires. Some exist only as theories or prototypes, but others are already
in use.
Magnetic resonance was found a promising means of electricity transfer because
magnetic fields travel freely through air yet have little effect on the environm
ent or, at the
appropriate frequencies, on living beings and hence is a leading technology for
developing witricity.
HOW IT WORKS
Wireless light: Researchers used
magnetic resonance coupling to power a
60-watt light bulb. Tuned to the same
frequency, two 60-centimeter copper
coils can transmit electricity over a
distance of two meters, through the air
and around an obstacle.
The researchers built two resonant
copper coils and hung them from the
ceiling, about two meters apart.
When they plugged one coil into the
wall, alternating current flowed through it, creating a magnetic field.
The second coil, tuned to the same frequency and hooked to a light bulb, resonat
ed with
the magnetic field, generating an electric current that lit up the bulb--even wi
th a thin
wall between the coils.
How wireless energy could work-
"Resonance", a phenomenon that causes an object to vibrate when energy of a cert
ain
frequency is applied. Two resonant objects of the same frequency tend to couple
very
strongly." Resonance can be seen in musical instruments for example. "When you p
lay a
tune on one, then another instrument with the same acoustic resonance will pick
up that
tune, it will visibly vibrate,"
Instead of using acoustic vibrations, system exploits the resonance of electroma
gnetic
waves. Electromagnetic radiation includes radio waves, infrared and X-rays. Typi
cally,
systems that use electromagnetic radiation, such as radio antennas, are not suit
able for the
efficient transfer of energy because they scatter energy in all directions, wast
ing large
amounts of it into free space. To overcome this problem, the team investigated a
special
class of "non-radiative" objects with so-called "long-lived resonances". When en
ergy is
applied to these objects it remains bound to them, rather than escaping to space
. "Tails"
of energy, which can be many metres long, flicker over the surface. If another r
esonant
object is brought with the same frequency close enough to these tails then it tu
rns out that
the energy can tunnel from one object to another.
Hence, a simple copper antenna designed to have long-lived resonance could trans
fer
energy to a laptop with its own antenna resonating at the same frequency. The co
mputer
would be truly wireless. Any energy not diverted into a gadget or appliance is s
imply
reabsorbed. The systems that are described would be able to transfer energy over
three to
five metres. This would work in a room let's say but can be adapted to work in a
factory.
It could also be scaled down to the microscopic or nanoscopic world.
HOW WIRELESS POWER COULD WORK
1. Power from mains to antenna, which is made of copper
2. Antenna resonates at a frequency of 6.4MHz, emitting electromagnetic waves
3. 'Tails' of energy from antenna 'tunnel' up to 5m (16.4ft)
4. Electricity picked up by laptop's antenna, which must also be resonating at 6
.4MHz.
Energy used to re-charge device
5. Energy not transferred to laptop re-absorbed by source antenna.
People/other objects not affected as not resonating at 6.4MHz
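A quick check of the numbers in the steps above (plain free-space arithmetic, not taken from
the researchers' work): at 6.4 MHz the wavelength is roughly 47 m, so a 5 m transfer distance
is a small fraction of a wavelength, consistent with the non-radiative, near-field description.

SPEED_OF_LIGHT = 3.0e8   # m/s
frequency_hz = 6.4e6     # resonant frequency quoted in the steps above
wavelength_m = SPEED_OF_LIGHT / frequency_hz
print(f"wavelength ~ {wavelength_m:.0f} m; 5 m range is {5 / wavelength_m:.2f} of a wavelength")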
Short range power transmission and reception
Power supply for portable electronic devices is considered, which receives ambie
nt radio
frequency radiation (typically in an urban environment) and converts it to DC el
ectricity
that is stored in a battery for use by the portable device.
A Power transmission unit (PTU) is connected to the electrical utility, typicall
y in a
domestic and office environment, and uses the electricity to generate a beam of
electromagnetic radiation. This beam can take the form of visible light, microwa
ve
radiation, near infrared radiation or any appropriate frequency or frequencies,
depending
on the technology chosen. The beam can be focused and shaped using a focusing
mechanism: for example, a parabola shape may be chosen to focus light waves at a
certain distance from the PTU.
A Power reception unit (PRU) receives power from one or several PTU's, and conve
rts
the total power received to electricity, which is used to trickle charge a stora
ge unit such
as a battery or transferred directly to the appliance for use, or both. If trans
ferred to the
storage unit, the output of the storage unit can power the appliance. Similarly
to the
focusing of the transmitted power, it is possible to concentrate the received po
wer for
conversion, using receiving arrays, antennas, reflectors or similar means.
It is possible to construct power "relay units", consisting of PRU's powering PT
U's,
whose function is to make the transmitted power available at further distances t
han would
normally be possible.
Long-distance Wireless Power-
Some plans for wireless power involve moving electricity over a span of miles. A
few
proposals even involve sending power to the Earth from space. The Stationary Hig
h
Altitude Relay Platform (SHARP) unmanned plane could run off power beamed from t
he
Earth. The secret to the SHARP's long flight time was a large, ground-based micr
owave
transmitter. A large, disc-shaped rectifying antenna, or rectenna, near the syst
em
changed the microwave energy from the transmitter into direct-current (DC) elect
ricity.
Because of the microwaves' interaction with the rectenna, the system had a const
ant
power supply as long as it was in range of a functioning microwave array.
Rectifying antennae are central to many wireless power transmission theories. Th
ey are
usually made of an array of dipole antennae, which have positive and negative po
les.
These antennae connect to semiconductor diodes. Here's what happens:
1. Microwaves, which are part of the electromagnetic spectrum, reach the dipole
antennae.
2. The antennae collect the microwave energy and transmit it to the diodes.
3. The diodes act like switches that are open or closed as well as turnstiles th
at let
electrons flow in only one direction. They direct the electrons to the rectenna'
s
circuitry.
4. The circuitry routes the electrons to the parts and systems that need them.
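As a minimal sketch of the chain in the four steps above, the DC output of a rectenna can be
estimated as incident microwave power times a conversion efficiency; the 95 % figure appears
later in this paper, while the power density and area below are purely illustrative assumptions.

def rectenna_dc_output_w(power_density_w_m2, area_m2, conversion_efficiency=0.95):
    # incident RF power collected over the array, converted to DC by the diodes
    return power_density_w_m2 * area_m2 * conversion_efficiency

print(rectenna_dc_output_w(power_density_w_m2=10.0, area_m2=100.0), "W of DC")  # 950 W for these assumed values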
TYPES OF WIRELESS TRANSMISSION
Near field
1. Induction
2. Resonant induction
Far field
1. Radio and microwave transmission
2. Laser
3. Electrical conduction
Near field-
These are wireless transmission techniques over distances comparable to, or a fe
w times
the diameter of the device(s).
Induction
Inductive coupling
The action of an electrical transformer is the simplest instan
ce of
wireless energy transfer. The primary and secondary circuits of a transformer ar
e not
directly connected. The transfer of energy takes place by electromagnetic coupli
ng
through a process known as mutual induction. (An added benefit is the capability
to step
the primary voltage either up or down.) The battery charger of an electric tooth
brush is an
example of how this principle can be used. The main drawback to induction, howev
er, is
the short range. The receiver must be very close to the transmitter or induction
unit in
order to inductively couple with it.
Resonant induction
By designing electromagnetic resonators that suffer minimal loss due to radiation
and absorption and have a near field with midrange extent (namely a few times the
resonator size), mid-range efficient wireless energy transfer is possible. The
reasoning is that, if
two such resonant objects are brought in midrange
proximity, their near fields (consisting of
so-called 'evanescent waves') couple (evanescent
wave coupling) and can allow the energy to transfer from one object to the other
within
times much shorter than all loss times, which were designed to be long, and thus
with the
maximum possible energy-transfer efficiency. Since the resonant wavelength is mu
ch
larger than the resonators, the field can circumvent extraneous objects in the v
icinity and
thus this mid-range energy-transfer scheme does not require line-of-sight. By ut
ilizing in
particular the magnetic field to achieve the coupling, this method can be safe,
since
magnetic fields interact weakly with living organisms.
"Resonant inductive coupling" has key implications in solving the two main probl
ems
associated with non-resonant inductive coupling and electromagnetic radiation, o
ne of
which is caused by the other: distance and efficiency. Electromagnetic induction
works
on the principle of a primary coil generating a predominantly magnetic field and
a
secondary coil being within that field so a current is induced within its coils.
This causes
the relatively short range due to the amount of power required to produce an
electromagnetic field. Over greater distances the non-resonant induction method
is
inefficient and wastes much of the transmitted energy just to increase range. This is
where the resonance comes in and helps efficiency dramatically by "tunneling" the
magnetic field to a receiver coil that resonates at the same frequency.
(Figure captions: According to the theory, one coil can recharge any device that is in range,
as long as the coils have the same resonant frequency. A trumpet's size, shape and material
composition determine its resonant frequency.)
Unlike the
multiple-layer secondary of a non-resonant transformer, such receiving coils are
single
layer solenoids with closely spaced capacitor plates on each end, which in combi
nation
allow the coil to be tuned to the transmitter frequency thereby eliminating the
wide
energy wasting "wave problem" and allowing the energy used to focus in on a spec
ific
frequency increasing the range.
Some of these wireless resonant inductive devices operate at low milliwatt power
levels
and are battery powered. Others operate at higher kilowatt power levels. Current
implantable medical and road electrification device designs achieve more than 75
%
transfer efficiency at an operating distance between the transmit and receive co
ils of less
than 10 cm.
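One relation often used for two-coil resonant links (offered here as a hedged sketch, not
something derived in this paper) expresses the best achievable transfer efficiency through the
figure of merit U = k*sqrt(Q1*Q2), where k is the coupling coefficient and Q1, Q2 the coil
quality factors; the coupling and Q values below are illustrative assumptions.

import math

def max_link_efficiency(k, q1, q2):
    # eta_max = U^2 / (1 + sqrt(1 + U^2))^2 with U = k * sqrt(Q1 * Q2)
    u = k * math.sqrt(q1 * q2)
    return u ** 2 / (1.0 + math.sqrt(1.0 + u ** 2)) ** 2

print(f"{max_link_efficiency(k=0.05, q1=300, q2=300) * 100:.0f} %")  # loosely coupled but high-Q coils

With a loose coupling of only a few per cent but high-Q resonators, efficiencies of the order
quoted above become plausible, which is the point of going resonant.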
Resonance and Wireless Power-
Household devices produce relatively small
magnetic fields. For this reason, chargers hold
devices at the distance necessary to induce a
current, which can only happen if the coils are
close together. A larger, stronger field could
induce current from farther away, but the process
would be extremely inefficient. Since a magnetic
field spreads in all directions, making a larger
one would waste a lot of energy.
The distance between the coils can be extended by adding resonance to the equati
on.
A good way to understand resonance is to think of it in terms of sound. An objec
t's
physical structure -- like the size and shape of a trumpet -- determines the fre
quency at
which it naturally vibrates. This is its resonant frequency. It's easy to get ob
jects to
vibrate at their resonant frequency and difficult to get them to vibrate at othe
r
frequencies. This is why playing a trumpet can cause a nearby trumpet to begin t
o
vibrate. Both trumpets have the same resonant frequency.
Induction can take place a little differently if the electromagnetic fields arou
nd the coils
resonate at the same frequency. The theory uses a curved coil of wire as an indu
ctor. A
capacitance plate, which can hold a charge, attaches to each end of the coil. As
electricity travels through this coil, the coil begins to resonate. Its resonant
frequency is a
product of the inductance of the coil and the capacitance of the plates.
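The sentence above is the familiar LC relation; as a sketch (the inductance and capacitance
values are assumed for illustration, not taken from the MIT work):

import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    # f = 1 / (2 * pi * sqrt(L * C))
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

print(f"{resonant_frequency_hz(25e-6, 25e-12) / 1e6:.2f} MHz")  # ~6.4 MHz for these assumed values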
As with an electric toothbrush, this system relies on
two coils. Electricity, traveling along an
electromagnetic wave, can tunnel from one coil to the
other as long as they both have the same resonant
frequency. The effect is similar to the way one
vibrating trumpet can cause another to vibrate.
As long as both coils are out of range of one another,
nothing will happen, since the fields around the coils
aren't strong enough to affect much around them.
Similarly, if the two coils resonate at different
frequencies, nothing will happen. But if two
resonating coils with the same frequency get within a
few meters of each other, streams of energy move
from the transmitting coil to the receiving coil.
According to the theory, one coil can even send electricity to several receiving
coils, as
long as they all resonate at the same frequency. The researchers have named this
nonradiative
energy transfer since it involves stationary fields around the coils rather than
fields that spread in all directions. This kind of setup could power or recharge
all the
devices in one room. Some modifications would be necessary to send power over lo
ng
distances, like the length of a building or a city.
Far field-
These methods achieve longer ranges,
often multiple kilometre ranges,
where the distance is much greater
than the diameter of the device(s).
(Figure caption: Means for long conductors of electricity forming part of an electric
circuit and electrically connecting said ionized beam to an electric circuit.)
Radio and microwave-
Microwave power transmission
Power transmission via radio waves can be made more directional, allowing longer
distance power beaming, with shorter wavelengths of electromagnetic radiation, t
ypically
in the microwave range. A rectenna may be used to convert the microwave energy b
ack
into electricity. Rectenna conversion efficiencies exceeding 95% have been reali
zed.
Power beaming using microwaves has been proposed for the transmission of energy
from
orbiting solar power satellites to Earth and the beaming of power to spacecraft
leaving
orbit has been considered.
The MIT wireless power project
uses a curved coil and
capacitive plates.
Power beaming by microwaves has the difficulty that for most space applications
the
required aperture sizes are very large. These sizes can be somewhat decreased by
using
shorter wavelengths, although short wavelengths may have difficulties with atmos
pheric
absorption and beam blockage by rain or water droplets.
For earthbound applications a large area 10 km diameter receiving array allows l
arge
total power levels to be used while operating at the low power density suggested
for
human electromagnetic exposure safety. A human safe power density of 1 mW/cm2
distributed across a 10 km diameter area corresponds to 750 megawatts total powe
r level.
This is the power level found in many modern electric power plants.
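A quick arithmetic check of the figures above (1 mW/cm2 spread across a 10 km diameter area):

import math

power_density_w_m2 = 1e-3 / 1e-4                # 1 mW/cm2 expressed in W/m2
area_m2 = math.pi * (10_000 / 2.0) ** 2         # 10 km diameter circle
print(f"{power_density_w_m2 * area_m2 / 1e6:.0f} MW")  # ~785 MW, the order of the 750 MW quoted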
High power-
Wireless Power Transmission (using microwaves) is well proven. Experiments in th
e tens
of kilowatts have been performed, achieving distances on the order of a kilomete
r.
Low power-
A new company, Powercast introduced wireless power transfer technology using RF
energy; this system is applicable for a number of devices with low power require
ments.
This could include LEDs, computer peripherals, wireless sensors, and medical imp
lants.
Currently, it achieves a maximum output of 6 volts for a little over one meter.
Laser-
With a laser beam centered on its panel of
photovoltaic cells, a lightweight model plane
makes the first flight of an aircraft powered by
a laser beam inside a building at NASA
Marshall Space Flight Center.
In the case of light, power can be transmitted
by converting electricity into a laser beam
that is then fired at a solar cell receiver.
This is generally known as "power beaming".
Its drawbacks are:
1. Conversion to light, such as with a laser, is moderately inefficient (althoug
h
quantum cascade lasers improve this)
2. Conversion back into electricity is moderately inefficient, with photovoltaic
cells
achieving 40%-50% efficiency.
3. Atmospheric absorption causes losses.
4. As with microwave beaming, this method requires a direct line of sight with t
he
target.
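The drawbacks listed above imply an end-to-end efficiency chain; the sketch below simply
multiplies the stages together. Only the 40-50 % photovoltaic figure comes from the list; the
laser-conversion and atmospheric values are assumptions chosen for illustration, since the
text gives no numbers for them.

def laser_beaming_efficiency(electric_to_laser=0.30, atmospheric_transmission=0.90, pv_receiver=0.45):
    # overall fraction of the source electricity that arrives as electricity at the receiver
    return electric_to_laser * atmospheric_transmission * pv_receiver

print(f"end-to-end ~ {laser_beaming_efficiency() * 100:.0f} %")  # roughly 12 % with these assumptions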
Electrical conduction
Electrical energy can also be transmitted by means of electrical currents made t
o flow
through naturally existing conductors, specifically the earth, lakes and oceans,
and
through the atmosphere, a natural medium that can be made conducting if the
breakdown voltage is exceeded and the gas becomes ionized. For example, when a h
igh
voltage is applied across a neon tube the gas becomes ionized and a current pass
es
between the two internal electrodes. In a practical wireless energy transmission
system
using this principle, a high-power ultraviolet beam might be used to form a vert
ical
ionized channel in the air directly above the transmitter-receiver stations. The
same
concept is used in virtual lightning rods, the electrolaser electroshock weapon
and has
been proposed for disabling vehicles.
The Tesla effect- A "world system" for "the transmission of
electrical energy without wires" that depends upon electrical
conductivity was proposed by Tesla. Through longitudinal
waves, an operator uses the Tesla effect in the wireless transfer
of energy to a receiving device. The Tesla effect is the
application of a type of electrical conduction (that is, the
movement of energy through space and matter; not just the
production of voltage across a conductor).
Tesla stated, "Instead of depending on induction at a distance to
light the tube [... the] ideal way of lighting a hall or room would
[...] be to produce such a condition in it that an illuminating
device could be moved and put anywhere, and that it is lighted, no matter where
it is put
and without being electrically connected to anything. I have been able to produc
e such a
condition by creating in the room a powerful, rapidly alternating electrostatic
field. For
this purpose I suspend a sheet of metal a distance from the ceiling on insulatin
g cords and
connect it to one terminal of the induction coil, the other terminal being prefe
rably
connected to the ground. An exhausted tube may then be carried in the hand anywh
ere
between the sheets or placed anywhere, even a certain distance beyond them; it r
emains
always luminous."
The Tesla effect is a type of high field gradient between electrode plates for w
ireless
energy transfer.
ADVANTAGES-
Wireless electric energy transfer for experimentally powering electric automobil
es
and buses is a higher power application (>10kW) of resonant inductive energy
transfer.
The use of wireless transfer has been investigated for recharging electric
automobiles in parking spots and garages as well.
Any low-power device, such as a cell phone, iPod, or laptop, could recharge
automatically simply by coming within range of a wireless power source,
eliminating the need for multiple cables and perhaps, eventually, for batteries.
With the advent of wireless communication protocols such as Wi-Fi or
Bluetooth, consumers are realizing that life without physical cables is easier,
more
flexible and often less costly.
As the population continues to grow, the demand for electricity could outpace
the ability to produce it; eventually wireless power may become a necessity rather
than just an interesting idea.
DRAWBACKS-
The wireless transmission of energy is common in much of the world. Radio
waves are energy, and people use them to send and receive cell phone, TV, radio
and Wi-Fi signals every day. The radio waves spread in all directions until they
reach antennae that are tuned to the right frequency. This method for transferri
ng
electrical power would be both inefficient and dangerous.
The main drawback to induction, however, is the short range. The receiver must
be very close to the transmitter or induction unit in order to inductively coupl
e
with it.
Many people would resist the idea of being constantly bathed in microwaves from
space, even if the risk were relatively low.
APPLICATIONS-
1. Researchers have outlined a relatively simple system that could deliver power
to
devices such as laptop computers or MP3 players without wires. The concept explo
its
century-old physics and could work over distances of many metres, the researcher
s said.
2. A UK company called Splashpower has also designed wireless recharging pads on
to
which gadget lovers can directly place their phones and MP3 players to recharge
them.
The pads use electromagnetic induction to charge devices, the same process used
to
charge electric toothbrushes.
3. Resonant inductive wireless energy transfer was used successfully in implanta
ble
medical devices including such devices as pacemakers and artificial hearts. Whil
e the
early systems used a resonant receiver coil later systems implemented resonant
transmitter coils as well.
4. Today resonant inductive energy transfer is regularly used for providing elec
tric power
in many commercially available medical implantable devices.
5. Some of the applications, with diagrams, are shown below:
A toothbrush's daily exposure to water makes a
traditional plug-in charger potentially dangerous.
Ordinary electrical connections could also allow
water to seep into the toothbrush, damaging its
components. Because of this, most toothbrushes
recharge through inductive coupling.
This is how a transformer works, and it's how an electric
toothbrush recharges. It takes three basic steps (a small numerical sketch follows the list):
1. Current from the wall outlet flows through a coil inside the charger, creatin
g a
magnetic field. In a transformer, this coil is called the primary winding.
2. When you place your toothbrush in the charger, the magnetic field induces a
current in another coil, or secondary winding, which connects to the battery.
3. This current recharges the battery.
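A small numerical sketch of step 2 above: for a sinusoidal primary current, the voltage
induced in the secondary is set by the mutual inductance M (peak EMF = 2*pi*f*M*I_peak).
The M, current and frequency below are illustrative assumptions, not data for any particular
toothbrush charger.

import math

def induced_voltage_peak(mutual_inductance_h, primary_current_peak_a, frequency_hz):
    return 2.0 * math.pi * frequency_hz * mutual_inductance_h * primary_current_peak_a

print(f"{induced_voltage_peak(5e-6, 0.5, 50_000):.2f} V peak")  # ~0.8 V for these assumed values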
You can use the same principle to recharge several devices at once. For example,
the
Splashpower recharging mat and Edison
Electric's Powerdesk both use coils to create a
magnetic field. Electronic devices use
corresponding built-in or plug-in receivers to
recharge while resting on the mat. These
receivers contain compatible coils and the
circuitry necessary to deliver electricity to
devices' batteries.
Eliminating the power cord would make today's
ubiquitous portable electronics truly wireless.
(Figure captions: Most electric toothbrushes recharge through inductive coupling. An electric
toothbrush's base and handle contain coils that allow the battery to recharge. A Splashpower
mat uses induction to recharge multiple devices simultaneously.)
CONCLUSION
From these researches and discoveries it can be said that wireless power transmission is
going to be a major field of interest for scientists and for the public. The fact that power
can be transmitted from space to earth will revolutionize the field of satellites. Since the
uses of wireless power transmission are many, from easy installation, neatness and easy
maintenance to multi-equipment operation, the area remains very interesting for researchers.
Rather than concentrating on false beliefs, the focus should be put on the advantages of
witricity, further increasing the efficiency of wireless power transmission with more safety
measures. It is an exciting technology, provided the research continues to move forward at
the same pace.
PAPER PRESENTATION
ON
THE DIFFERENTIAL INDUCTION MACHINE
By
T.V.SURESH U.M.ABHILASH
III EEE, II SEM III EEE, II SEM
Regd No: 06G31A0258 Regd No: 06G31A0259
Dept. EEE Dept. EEE
[email protected] [email protected]
Ph No: 9701034574 Ph No: 9701417900
St. JOHNS COLLEGE OF ENGINEERING &
TECHNOLOGY
YERRAKOTA, YEMMIGANUR
KURNOOL 518 360 (A.P)
THE DIFFERENTIAL INDUCTION MACHINE
Abstract: This paper presents the theory
and performance of a differential
induction machine, which is a special type of
induction machine having two shafts
projected from the two ends of a single stator.
Application of a differential load
on the two shafts causes them to run at different
speeds as a motor, which permits
true differential movement and thus can meet
the requirements of a differential
drive in an electric vehicle. The machine is
also capable of regeneration in the
differential mode. This paper presents the
construction of the above machine and
performance of the same based on
experimental results from a laboratory
prototype. The equivalent circuit of the motor
has been presented and verified
experimentally.
Keywords.
Differential drive;
electric vehicle drive;
induction machine.
1. Introduction
The concept of a differential motor was
presented, but was never analysed in detail nor
verified experimentally. This paper presents
the theory, construction and performance of
the machine, based on experimental results
from a laboratory prototype. The
equivalent circuit of the machine has been
developed with the two rotors equivalence in
series but shows a non-linear parameter
content, which was never reported earlier. This
has been verified for both motor and generator
mode of operation. The results show that the
machine is well suited as a motor for
differential drive for an electric vehicle.
In an electric vehicle, whenever the vehicle is
needed to make a turn, the wheel on the
inner side makes fewer revolutions than the
wheel on the outer side in order to create
rotation of the shaft connecting the two wheels
and hence turn the vehicle. The difference in
speed is dependant on the radius of curvature
of the turn being taken by the vehicle and the
spacing between the two wheels. This is
possible in conventional vehicles through the
use of a differential gear system that connects
the two wheels (on opposite sides of the
electric vehicle) on the same shaft to the single
prime-mover. However, with the differential
induction machine, the two shafts can be
directly connected to the two co-axial wheels
on opposite sides of the electric vehicle,
making it possible to provide driving power
along with the feasibility of taking a turn.
Figure 1. Schematic diagram of the machine.
Further, whenever
the vehicle is coming down a gradient with
sufficient speed, the braking torque can be
converted into sufficient electrical energy to
partially charge back the energy storage
battery (that otherwise supplies energy for
driving the vehicle).
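As a sketch of the geometric relation described above (the turn radius and track width
below are illustrative assumptions): on a turn of radius R measured to the vehicle
centreline, with track width d, the inner and outer wheel speeds must be in the ratio
(R - d/2) : (R + d/2).

def wheel_speed_ratio(turn_radius_m, track_width_m):
    inner = turn_radius_m - track_width_m / 2.0
    outer = turn_radius_m + track_width_m / 2.0
    return inner / outer

print(f"inner/outer = {wheel_speed_ratio(10.0, 1.5):.3f}")  # ~0.86 for a 10 m turn and 1.5 m track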
2. Construction
The differential induction machine has two
mechanically separated rotors inside a
common stator, as shown in figure 1. The two
rotors are identical and of squirrel cage type,
axially separated from each other. The length
of iron part of one rotor is about half of the
stator length. Each rotor shaft is fixed by two
bearings at the driving end only, since internal
bearings will become heated by eddy currents
due to flux. Thus the axial gap between the
two rotors is also at minimum. The stator has a
4 pole, 3 phase balanced winding. The rotor
has higher leakage inductance than a normal
machine due to the separating gap between the
two rotors under the common stator. This
causes reduction in torque, which is
compensated by designing the rotors as double
cage construction.
3. Equivalent circuit
The equivalent circuit of the motor as shown
in figure 2, is similar to that of a conventional
induction motor, but with two rotors in series.
The parameters of the equivalent circuit as
obtained from test results show that
magnetizing reactances and core loss
component resistances change due to the
tendency of one core to saturate when one
rotor is loaded more than the other. Flux of the
two rotors, when run at different load, changes
accordingly and is dependent on the slip of
rotor 1 (s1) and rotor 2 (s2).
Figure 2. Equivalent circuit of the machine (per phase).
Figure 3. Equivalent circuit with one rotor locked and the other at no-load.
The rotor with lighter
load has higher flux. Both rotors are subjected
to different load condition and in extreme
case, when one rotor is locked and the other at
no-load condition, the input current drawn
is somewhat higher than the rated current and
not several times higher (as in conventional
machine). However, when both rotors are
locked, it behaves as a normal induction
motor.
When one rotor is locked and the other rotor is
at no load as shown in figure 3, the difference
in slip between the rotors is maximum and
voltage across the light running rotor becomes
about 1·3 times that at the no-load condition,
with rated current. Thus the rotor at no load enters
saturation and its value of xm and rm are
reduced to justify the increased core loss.
Hence, in contrast to conventional equivalent
circuits, the equivalent circuit for this machine
is proposed with variable values of xm and rm,
as presented in figure 2. This aspect is not
reported in existing literature pertaining to this
machine. From no-load and blocked rotors test
data (both rotors blocked) assuming
x1:x21S:x22S = 1:0·4:0·4 and taking no-load
rotational loss as 30 watt/phase (from
experimental data), the following parameters
of the motor are obtained:
r1 = 0·187 ohms
r_2 = 0·172 ohms
x_2S = 0·23 ohms
rm = 0·694 ohms
xm = 5·064 ohms
The value of rm and xm as obtained from test
data are as follows.
Unsaturated rm = 0·694 ohms; xm = 5·064
ohms
Saturated rm = 0·311 ohms; xm = 3·0 ohms.
Since the rotor is double-cage, the parameters
will change from standstill to full-speed
condition (Alger 1951). The motor parameters
considered are given below:
Parameter   Standstill condition   Running condition
r_2         0·172 ohms             0·0755 ohms
x_2S        0·23 ohms              0·281 ohms
Note that at no-load the above values are
not needed since the rotor circuit will be
open.
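A hedged sketch of the per-phase input impedance implied by the description above. Figure 2
is not reproduced here, so the exact topology is an assumption: the stator impedance is taken
in series with two rotor sections, each modelled as a magnetizing branch (rm + j xm) in
parallel with its rotor branch (r'2/s + j x'2). The running-condition rotor values quoted in
the text are used; the stator reactance is inferred from the assumed 1:0·4:0·4 ratio.

def parallel(z_a, z_b):
    return z_a * z_b / (z_a + z_b)

def rotor_section(slip, r2=0.0755, x2=0.281, rm=0.694, xm=5.064):
    z_mag = complex(rm, xm)          # magnetizing / core-loss branch (assumed series form)
    z_rot = complex(r2 / slip, x2)   # cage branch at this slip
    return parallel(z_mag, z_rot)

def input_impedance(s1, s2, r1=0.187, x1=0.575):   # x1 = 0.23 / 0.4, an inferred value
    return complex(r1, x1) + rotor_section(s1) + rotor_section(s2)

print(input_impedance(0.04, 0.08))   # e.g. rotor 1 lightly loaded, rotor 2 loaded more heavily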
4. Verification of equivalent circuit
In order to verify the equivalent circuit, a set
of experiments was carried out at different
conditions of loading, including differential
and balanced loadings. Using the experimental
value of slip, calculation was performed with
the equivalent circuit parameters obtained
earlier to predict the performance data. The
calculated values are depicted adjacent to the
experimentally obtained values in table 1. The
two sets of data are observed to be numerically
close, confirming the validity of the proposed
equivalent circuit.
Data set No. 1 of table 1 depicts the condition
when both rotors run at no-load in motoring
mode. Set No. 2 shows the situation when both
rotors are equally loaded as motor, while set
No. 3 shows the situation for unequal loading
and set No. 4 depicts the condition when both
rotors run at more than synchronous speed
(generator mode).
5. Performance
Different performance characteristics of the
differential machine as a motor are shown in
the following figures. The three sets of data
depicted in the form of three curves (a), (b)
and (c), correspond to:
a) One rotor is maintained at no-load while the
other is loaded as a motor.
b) Both rotors are equally loaded as motor.
c) One rotor is loaded to a fixed high value of
slip as motor, while the load on the other rotor
is varied.
Figure 4 depicts the torque versus slip
characteristics under the different conditions
stated. The set of data clearly shows that the
torque-slip characteristic of one shaft is
dependent on the loading of the other----a
unique feature that makes the machine suitable
as a differential drive.
Figure 5 represents the power input as a motor
versus the input current for the three specified
conditions. Note that the power deliverable
from one shaft is restricted when the other
shaft is at no-load. Similarly, when one shaft is
heavily loaded, the power output from the
other shaft is restricted.
Figure 6 represents the input power factor
versus the input current as a motor for the
three specified conditions. The input power
factor is poor when only one shaft is loaded
and the other is at no-load. Power factor
increases for the same input current when load
is increased on the other shaft.
Figure 4. Torque in N-m Vs. Slip.
Figure 7 represents the efficiency as a motor
versus the input current for the three specified
conditions. As in the earlier results, the
efficiency is best for balanced operation of the
two shafts.
Figure 8 presents the torque versus slip curves
of both rotors as the slip is changed in a
particular manner. The slip of rotor 1 is
intentionally held constant (by suitable load
variation)
while the slip of rotor 2 is increased in steps.
From these curves, it is clear that torque in
rotor1 and rotor 2 both changes due to change
in slip of rotor 2, even when slip of rotor 1
remains constant. This is a special feature of
the differential induction machine, making it
suitable for driving an electric vehicle in
differential mode.
Figure 5. Power input in watt Vs input
current in amperes.
Figure 6. Power factor Vs. input current
in amps.
6. Generator mode of operation
Any motor applied to an electric vehicle is
likely to be subjected to regeneration during
deceleration and braking, apart from when
going down-hill. The machine was thus tested
by running it as a generator by driving it from
both its shafts at the same time, at
supersynchronous speeds. The two driving
sources were intentionally made to operate at
different speeds so that two different values of
slip were obtained, each being negative. The
machine successfully operated as an induction
generator and power was fed back through its
common stator terminals to the ac supply.
Figure 7. Efficiencies Vs input current in amps.
Figure 8. Torque in N-m of both rotors Vs slip.
Performance characteristics of the differential
machine as a generator are shown in figures 9 and 10. The
three sets of data depicted in the form of three
curves (x), (y) and (z), correspond to:
x) One rotor is maintained at low value
negative slip while the negative slip of the
other is increased.
y) One rotor is maintained at significant value
of negative slip, while the negative slip on the
other rotor is varied.
z) Both rotors are at equal negative slip which
is varied together.
Figure 9. Generator output in watt Vs.
slip.
Figure 10. Generator output power factor
Vs. slip.
Figure 9 represents the power output as a
generator versus the slip of one rotor for the
three above specified conditions. Note that the
power deliverable from one rotor is
proportional to slip as in conventional
induction generator, but is also dependent on
the slip of the other rotor
in this case.
Figure 10 represents the output power factor
versus the slip of one rotor as a generator
for the three above specified conditions. Note
that the Differential Induction Machine draws
lagging reactive power as in conventional
induction generator and the pf depends on the
slip of both the rotors.
7. Conclusion
The operation and performance of an
induction machine with two rotors, operating
as a differential drive, has been demonstrated
in this paper. The equivalent circuit proposed
has non-linear elements, but has been verified
through test data as motor as well as generator
at different conditions. The operating
characteristics of the machine as a motor as
well as a generator have been demonstrated
with different speed on the two shafts. The
results show that the machine is well suited for
use in electric vehicles as the direct drives for
two opposite wheels with differential
capabilities.
POWER CHASE -2K9
M.BHARGAVA NARAYANA A.HIMAGIRI PRASAD
10703040 10703001
DEPT OF EEE DEPT OF EEE
S.V.U.C.E S.V.U.C.E
TIRUPATI TIRUPATI
EMAIL ADDRESS: [email protected] [email protected]
MODERN SOLAR POWER GENERATION
ABSTRACT:
This Paper gives an approach to the
implementation of Lunar Solar Power
(LSP) generation. The LSP System is a
reasonable alternative to supply earth's
needs for commercial energy without the
undesirable characteristics of current
options. The long term exploration and
colonization of the solar system for
scientific research and commercial
interest depends critically on the
availability of electrical energy. In this
paper we first discuss the present power
scenario and the power needs of future
decades, then the construction of the LSP
station, the transmission of electricity
produced on the moon to the earth, and the
preference for microwaves for transmitting
that electricity. At last we discuss the cost
of installing the project and how to
minimize the installation cost.
KEYWORDS:
Lunar Solar Power,
Rectennas,
Microwave,
Solar cells,
Relay satellites,
Solar Power satellites
1. INTRODUCTION:
Out of all the renewable and non-polluting
sources, solar power can become the primary
source of commercial power for everyone in the
world to achieve the same high standard
of living. Over the past 200 years the
developed nations have vastly increased
their creation of per capita income
compared to the other nations. In
parallel, the developed nations increased
the use of commercial thermal power to
~6.9 kWt/person. In fact, most people in
the developing nations use much less
commercial thermal power and most
have little or no access to electric
power. By the year 2050, people will
require at least 20,000 GWe of power.
This requires approximately 60,000 GWt
of conventional thermal power
generation. Such enormous thermal
energy consumption will exhaust
economical recoverable deposits of coal,
shale, oil, natural gas, uranium and
thorium. As a result, conventional
systems will become useless. Terrestrial
renewable systems are always captive to
global climate change induced by
volcanoes, natural variation in regional
climate, industrial haze and possibly
even microclimates induced by large
area collectors. Over the 21st century, a
global stand-alone system for
renewable power would cost thousands of
trillions of dollars to build and maintain.
Energy costs could consume most of the
world's wealth. We need a power system
that is independent of earth's biosphere
and provides abundant energy at low
cost. To do this, mankind must collect
dependable solar power in space and
reliably send it to receivers on earth. The
MOON is the KEY.
2. Present and Future Power Scenario
In 1975 Goeller and Weinberg published
a fundamental paper on the relation of
commercial power to economic
prosperity. They estimated that an
advanced economy could provide the
full range of Goods and services to its
population with 6kWt/person. As
technology advances, the goods and
services could be provided by ~2
kWe/person of electric power. There will
be approximately 10 billion people in
2050. They must be supplied with ~6
kWt/person or ~2 kWe/person in order
to achieve energy and economic
prosperity. Present world capacity for
commercial power must increase by a
factor of ~5 by 2050, to ~60 TWt or ~20
TWe (T = 10^12). Output must be
maintained indefinitely. Conventional
power systems are too expensive for the
Developing Nations. Six kilowatts of
thermal power now costs ~1,400 $/person-year.
This is ~50% of the average per
capita income within the Developing
Nations. Other major factors include the
limited availability of fossil and nuclear
fuels (4,000,000 GWt-Y) and the
relatively low economic output from
thermal energy (~ 0.25 $/kWt-h).
Humans must transition to solar energy
during the first part of the 21st century to
extend the newly emerging world
prosperity. However, solar and wind are
intermittent and diffuse. Their energy
output is too expensive to collect, store,
and dependably distribute.
3. LUNAR SOLAR POWER GENERATION:
Two general concepts have been
proposed for delivering solar power to
Earth from space. In one, Peter Glaser of
Arthur D. Little, Inc. (Cambridge, MA),
proposed in 1968 that a huge satellite in
geosynchronous orbit around Earth
could dependably gather solar power in
space. In the second concept figure (1),
discussed here, solar power would be
collected on the moon. In both ideas,
many different beams of 12cm
wavelength microwaves would deliver
power to receivers at sites located
worldwide. Each receiver would supply
Commercial power to a given region.
Such a receiver, called a rectenna, would
consist of a large field of small
rectifying antennas. A beam with a
maximum intensity of less than 20% of
noontime sunlight would deliver about
200 W to its local electric grid for every
square meter of rectenna area. Unlike
sunlight, microwaves pass through rain,
clouds, dust, and smoke. In both
scenarios, power can be supplied to the
rectenna at night. Several thousand
individual rectennas strategically located
around the globe, with a total area of
100,000 km2, could continuously
provide the 20 TW of electric power, or
2 kW per person, required for a
prosperous world of 10 billion people in
2050. This surface area is 5% of the
surface area that would be needed on
Earth to generate 20 TW using the most
advanced terrestrial solar-array
technology of similar average capacity
now envisioned. Rectennas are projected
to cost approximately $0.004/kWe h,
which is less than one-tenth of the
current cost of most commercial electric
energy. This new electric power would
be provided without any significant use
of Earth's resources. Several types of
solar power satellites have been
proposed. They are projected, over 30
years, to deliver approximately 10,000
kW h of electric energy to Earth for each
kilogram of mass in orbit around the
planet. To sell electric energy at $0.01/
kW h, less than $60 could be expended
per kilogram to buy the components of
the power satellites, ship them into
space, assemble and maintain them,
decommission the satellites, and finance
all aspects of the space operations. To
achieve this margin, launch and
fabrication costs would have to be
lowered by a factor of 10,000. Power
prosperity would require a fleet of
approximately 6,000 huge, solar-power
satellites. The fleet would have more
than 330,000 km2 of solar arrays on orbit
and a mass exceeding 300 million
tonnes. By comparison, the satellite
payloads and rocket bodies now in Earth
geosynchronous orbit have a collective
surface area of about 0.1 km2. The mass
launch rate for a fleet of power
satellites would have to be 40,000 times
that achieved during the Apollo era by
both the United States and the Soviet
Union. A many decade development
program would be required before
commercial development could be
considered.
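A quick check of the headline figures used above (200 W of grid power per square metre of
rectenna, 100,000 km2 of total rectenna area, and 10 billion people):

area_m2 = 100_000 * 1e6          # 100,000 km2 expressed in m2
total_power_w = 200.0 * area_m2
people = 10e9
print(f"{total_power_w / 1e12:.0f} TW total, {total_power_w / people / 1e3:.1f} kW per person")
# -> 20 TW and 2.0 kW per person, matching the text.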
4. LUNAR SOLAR COLLECTORS:
Fortunately, in the Lunar Solar Power
(LSP) System, an appropriate, natural
satellite is available for commercial
development. The surface of Earth's
moon receives 13,000 TW of absolutely
predictable solar power. The LSP
System uses 10 to 20 pairs of bases,
one of each pair on the eastern edge and
the other on the western edge of the
moon, as seen from Earth, to collect on
the order of 1% of the solar power
reaching the lunar surface. The collected
sunlight is converted to many low
intensity beams of microwaves and
directed to rectennas on Earth. Each
rectenna converts the microwave power
to electricity that is fed into the local
electric grid. The system could easily
deliver the 20 TW or more of electric
power required by 10 billion people.
Adequate knowledge of the moon and
practical technologies has been available
since the late 1970s to collect this power
and beam it to Earth. Successful Earth-moon
power beams are already in use by
the Arecibo planetary radar, operating
from Puerto Rico. This radio telescope
periodically images the moon for
mapping and other scientific studies with
a radar beam whose intensity in Earth's
atmosphere is 10% of the maximum
proposed for the LSP System. Each
lunar power base would be augmented
by fields of solar converters located on
the back side of the moon, 500 to 1,000
km beyond each visible edge and
connected to the earthward power bases
by electric transmission lines. The moon
receives sunlight continuously except
during a full lunar eclipse, which occurs
approximately once a year and lasts for
less than three hours. Energy stored on
Earth as hydrogen, synthetic gas,
dammed water, and other forms could be
released during a short eclipse. Each
lunar power base consists of tens of
thousands of power plots (figure 2)
distributed in an elliptical area to form
a fully segmented, phased-array radar that
is solar-powered. Each demonstration
power plot consists of four major
subsystems. Solar cells collect sunlight,
and buried electrical wires carry the
solar energy as electric power to
microwave generators. These devices
convert the solar electricity to
microwaves of the correct phase and
amplitude and then send the microwaves
to screens that reflect microwave beams
toward Earth. Rectennas located on
Earth between 60º N and 60º S can
receive power directly from the moon
approximately 8 hours a day. Power
could be received anywhere on Earth via
a fleet of relay satellites in high
inclination, eccentric orbits around Earth
(figure 1). A given relay satellite
receives a power beam from the moon
and retransmits multiple beams to
several rectennas on Earth required by
an alternative operation. This enables the
region around each rectenna to receive
power 24 hours a day. The relay
satellites would require less than 1% of
the surface area needed by a fleet of
solar-power satellites in orbit around
Earth. Synthetic-aperture radars, such as
those flown on the Space Shuttle, have
demonstrated the feasibility of multibeam
transmission of pulsed power
directed to Earth from orbit. Relay
satellites may reflect the beam or may
receive the beam, convert it in frequency
and phasing and then, transmit a new
beam to the rectenna. A retransmitter
satellite may generate several beams and
simultaneously service several rectennas.
The orbital reflector and retransmitter
satellites minimize the need on earth for
long distance power lines. Relay
satellites also minimize the area and
mass of power handling equipment in
orbit around earth, thereby reducing the
hazards of orbital debris to space
vehicles and satellites.
5. MICROWAVE:
For direct microwave wireless power
transmission to the surface of the earth, a
limited range of transmission
frequencies is suitable. Frequencies
above 6 GHz are subject to atmospheric
attenuation and absorption, while
frequencies below 2 GHz require
excessively large apertures for
transmission and reception. Efficient
transmission requires that the beam have a
Gaussian power density. Transmission
efficiency ηb for Gaussian beams is
related to the aperture sizes of the
transmitting and receiving antennas:
Where Dt is the transmitting array
diameter, Dr is the receiving array
diameter, λ is the wavelength of
transmission and R is the range of
transmission. Frequencies other than
2.45 GHz, particularly 5.8 GHz and 35
GHz are being given greater attention as
candidates for microwave wireless
power transmission in studies and
experiments. The mass and size of
components and systems for the higher
frequencies are attractive. However, the
component efficiencies are less than for
2.45 GHz, and atmospheric attenuation,
particularly with rain, is greater.
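As a rough numerical illustration of the relation above, the coupling parameter and the resulting beam-capture estimate can be computed directly; the aperture sizes below are assumed values chosen only for illustration and are not taken from the paper.

import math

def beam_coupling(d_t, d_r, wavelength, distance):
    # Gaussian-beam coupling parameter tau for circular apertures.
    return math.pi * d_t * d_r / (4.0 * wavelength * distance)

def beam_efficiency(tau):
    # Approximate fraction of transmitted power captured by the rectenna.
    return 1.0 - math.exp(-tau * tau)

# Illustrative (assumed) numbers: 2.45 GHz beam from the moon to Earth.
c = 3.0e8                  # speed of light, m/s
f = 2.45e9                 # transmission frequency, Hz
wavelength = c / f         # about 0.12 m
d_t = 30e3                 # assumed transmitting aperture diameter on the moon, m
d_r = 10e3                 # assumed rectenna diameter on Earth, m
distance = 3.84e8          # mean Earth-moon distance, m

tau = beam_coupling(d_t, d_r, wavelength, distance)
print(f"tau = {tau:.2f}, estimated beam capture efficiency = {beam_efficiency(tau):.2%}")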
6. COST FORECASTING:
To achieve low unit cost of energy, the
lunar portions of the LSP System are
made primarily of lunar derived
components. Factories, fixed and
mobile, are transported from the Earth to
the Moon. High output greatly reduces
the impact of high transportation costs
from the Earth to the Moon. On the
Moon the factories produce 100s to
1,000s of times their own mass in LSP
components. Construction and operation
of the rectennas on Earth constitutes
greater than 90% of the engineering
costs. Any handful of lunar dust and
rocks contains at least 20% silicon, 40%
oxygen, and 10% metals (iron,
aluminum, etc.). Lunar dust can be used
directly as thermal, electrical, and
radiation shields, converted into glass,
fiberglass, and ceramics, and processed
chemically into its elements. Solar cells,
electric wiring, some micro-circuitry
components, and the reflector screens
can be made out of lunar materials.
Soil handling and glass production are
the primary industrial operations.
Selected micro circuitry can be supplied
from Earth. Use of the Moon as a source
of construction materials and as the
platform on which to gather solar energy
eliminates the need to build extremely
large platforms in space. LSP
components can be manufactured
directly from the lunar materials and
then immediately placed on site. This
eliminates most of the packaging,
transport, and reassembly of components
delivered from Earth or the Moon to
deep space. There is no need for a large
manufacturing facility in deep space.
The LSP System is the only likely means
to provide 20 TWe of affordable electric
power to Earth by 2050. The lunar solar
power reference design for 20,000 GWe
given by Criswell in 1996 is shown in
table (1). It is also noted that the total
mass investment for electricity from
lunar solar energy is less than for
terrestrial solar energy systems:
Terrestrial thermal power - 310,000 tonnes/GWe
Terrestrial photovoltaic - 430,000 tonnes/GWe
Lunar solar power - 52,000 tonnes/GWe
7. MERITS OF LSP:
In technical and other respects there are
two reasons for preferring LSP.
First, unlike Earth, the moon is an ideal
environment for large-area solar
converters. The solar flux to the lunar
surface is predictable and dependable.
There is no air or water to degrade
large-area thin-film devices. Solar
collectors can be made that are
unaffected by decades of exposure to
solar cosmic rays and the solar wind.
Sensitive circuitry and wiring can be
buried under a few tens of centimeters
of lunar soil and completely protected
against solar radiation and temperature
extremes. Secondly, virtually all the LSP
components can be made from local
lunar materials.
The high cost of transportation to and
from the moon is offset by sending
machines and small factories to the
moon that produce hundreds to several
thousand times their own mass in
components and supplies. Lunar
materials will also be used to reduce the
cost of transportation between the Earth
and the moon and to provide supplies.
7.1 ADDITIONAL FEATURES OF LSP:
The design and demonstration of robots
to assemble the LSP components and
construct the power plots can be done in
parallel. The crystalline silicon solar
cells can be used in the design of robots
which will further decrease the
installation cost.
7.2 ECONOMICAL ADVANTAGES OF LSP
AND CRYSTALLINE SILICON SOLAR
CELL :
Crystalline silicon solar cells almost
completely dominate worldwide solar
cell production.
Excellent stability and reliability plus
continuous development in cell structure
and processing make it very likely that
crystalline silicon cells will remain in
this position for the next ten years.
Laboratory solar cells, processed by
means of sophisticated microelectronic
techniques using high-quality FZ-Si
(float-zone silicon) substrates, have
approached energy conversion
efficiencies of 24%.
8. CONCLUSION:
The LUNAR SOLAR POWER (LSP)
system will establish a permanent two
planet economy between the earth and
the moon. The LSP System is a
reasonable alternative to supply Earth's
needs for commercial energy without the
undesirable characteristics of current
options. The system can be built on the
moon from lunar materials and operated
on the moon and on Earth using existing
technologies. More-advanced production
and operating technologies will
significantly reduce up-front and
production costs. The energy beamed to
Earth is clean, safe, and reliable, and its
source, the sun, is virtually
inexhaustible.
9. REFERENCES:
[1] Alex Ignatiev, Alexandre Freundlich, and Charles Horton, "Electric Power Development on the Moon from In-Situ Lunar Resources", Texas Center for Superconductivity and Advanced Materials, University of Houston, Houston, TX 77204, USA.
[2] Criswell, D. R. and Waldron, R. D., "Results of analysis of a lunar-based power system to supply Earth with 20,000 GW of electric power", SPS 91 Power from Space, Paris/Gif-sur-Yvette, 27-30 August 1991, pp. 186-193.
[3] Dr. David R. Criswell, "Lunar solar power: utilization of lunar materials and economic development of the moon".
[4] Dr. David R. Criswell, "Solar Power via the Moon".
[5] G. L. Kulcinski, "Lunar Solar Power System", lecture 35, April 26, 2004.
[6] G. L. Kulcinski, "Lunar Solar Power System", lecture 41, April 30, 2004.
OPTIMAL VOLTAGE REGULATOR PLACEMENT IN A
RADIAL DISTRIBUTION SYSTEM USING FUZZY
LOGIC
J.SWETHA and V.S.R.B.D.SAMEERA
* Department of Electrical Engineering R.V.R&J.C College of engineering, Guntur,
India
(E-mail: [email protected])
Ph.no:9701813321
(E-mail: [email protected])
Ph.no:9490256074
Abstract:
The operation and planning studies of a distribution system require a steady state condition of the system for various load demands. Our aim is to obtain optimal voltage control with voltage regulators and then to decrease the total cost of voltage regulators and losses, so as to obtain the net saving. An algorithm is proposed which determines the initial selection and tap setting of the voltage regulators to provide a smooth voltage profile along the network. The same algorithm is used to obtain the minimum number of the initially selected voltage regulators, by moving them in such a way as to control the network voltage at minimum cost. The algorithm has been implemented in MATLAB along with Fuzzy Logic, and the results of the conventional and Fuzzy Logic approaches are compared.
Introduction:
General description of
Distribution System
The distribution system is that part of the
electric power system which connects
the high-voltage transmission network to
the low-voltage consumer service points.
In any distribution system the power is
distributed to various users through
feeders, distributors and service mains.
Feeders are conductors of large
current-carrying capacity which carry the
current in bulk to the feeding points.
Distributors are conductors from which
the current is tapped off to supply the
consumer premises. A typical
distribution system with all its elements
is shown in figure 1.1.
1.1.1 Basic Distribution Systems
There are two basic structures for
distribution system namely
(i) Radial distribution system
(ii) Ring main distribution system
Radial Distribution System:
If the distributor is connected to the
supply system on one end only then the
system is said to be a radial distribution
system. A Radial Distribution System is
shown in fig 1.2. In such a case the end
of the distributor nearest to the
generating station would be heavily
loaded and the consumers at the far end
of the distributor would be subjected to
large voltage variations as the load
varies. The consumer is dependent upon
a single feeder so that a fault on any
feeder or distributor cuts off the supply
to the consumers who are on the side of
fault away from the station.
1.2 Distribution System Losses
It has been established that
70% of the total losses occur in the
primary and secondary distribution
system, while transmission and
sub-transmission lines account for only
30% of the total losses. Distribution
losses are 15.5% of the generation
capacity, against a target level of 7.5%.
Therefore the primary and secondary
distribution must be properly planned to
keep losses within acceptable limits.
1.2.1 Factors Affecting Distribution
System Losses
Factors contributing to the
increase in the line losses in primary and
secondary distribution system are:
Inadequate size of conductor
Feeder Length
Location of Distribution
Transformers
Low Voltage
Low Power Factor
1.3 Reduction of line losses:
The losses in Indian power system are
on the higher side. So, the government of
India has decided to reduce the line losses
and set a target for reduction of T&D losses
by 1% per annum in order to realize an
overall reduction of 5% in the national
average.
Methods for the reduction of line
losses:
The following methods are adopted for
reduction of distribution losses.
(1) HV distribution system
(2) Feeder reconfiguration
(3) Reinforcement of the feeder
(4) Grading of conductor
(5) Construction of new substation
(6) Reactive power compensation
(7) Installing Voltage regulators.
Installing Voltage Regulators:
A voltage regulator or
Automatic Voltage Booster (AVB) is
essentially an autotransformer consisting
of a primary or exciting winding
connected in parallel with the circuit and
a second winding with taps connected in
series with the circuit. The taps of the
series winding are connected to an
automatic tap-changing mechanism. The
AVB is considered a tool for loss
reduction, while voltage control is a
statutory obligation.
Benefits of AVB
When a booster is installed at a
bus, it causes a sudden voltage rise at its
point of location and improves the
voltage at the buses beyond the location
of AVB. The % of voltage improvement
is equal to the setting of % boost of
AVB. The increase in voltage in turn
causes the reduction in losses in the
lines beyond the location of AVB.
Multiple units can be installed in series
along the feeder to maintain the voltage
within the limits and to reduce the line
losses. An AVB can also be easily
removed and relocated whenever and
wherever required.
FUZZY LOGIC
2.1 Introduction
Fuzzy logic, invented by
Professor Lotfi Zadeh of UC-Berkeley
in the mid 1960s, provides a
representation scheme and a calculus for
dealing with vague or uncertain
concepts. It provides a mathematical
way to represent vagueness in
humanistic systems. The crisp set is
defined in such a way as to dichotomize
the individuals in some given universe
of discourse into two groups as below:
a) Members (those who
certainly belong to the set.)
b) Non-members (those who
certainly do not belong to the set.)
2.2 Fuzzy Logic in Power Systems
Analytical approaches
have been used over the years for many
power system operation, planning and
control problems. However, the
mathematical formulations of real world
problems are derived under certain
restrictive assumptions and even with
these assumptions, the solutions of large
scale power systems problems are not
trivial. On the other hand, there are
many uncertainties in various power
system problems because power
systems are large, complex,
geographically widely distributed
systems and influenced by unexpected
events.
More recently, the
deregulation of power utilities has
introduced new issues into the existing
problems. These facts make it difficult
to effectively deal with many power
systems problems through strict
mathematical formulations alone.
Although a large number of AI
techniques have been employed in
power systems, fuzzy logic is a
powerful tool in meeting challenging
problems in power systems. This is so
because fuzzy logic is the only
technique which can handle imprecise,
vague or fuzzy information.
2.3 Fuzzy Systems:
Fuzzy logic is based on the
way the brain deals with inexact
information.
OPTIMAL VR PLACEMENT USING
FES
3.1 Introduction
The optimal location for placing
voltage regulators can be obtained by
using the backtracking algorithm
discussed in section 3.4. The same can
also be obtained by using Fuzzy Logic.
First a vector-based load flow calculates
the power losses in each line and the
voltages at every bus. The voltage
regulators are placed at every bus in turn
and the total real power losses are
obtained for each case. The
total real power losses are normalized
and named as power loss indices. The
per unit voltages at every bus and the
power loss indices obtained are the
inputs to the FES which determines the
bus most suitable for placing voltage
regulator without violating the limits.
The FES (Fuzzy Expert System)
contains a set of rules which are
developed from qualitative descriptions.
Table 3.1 Rules for the Fuzzy Expert System
The inputs to the rules are the voltages
and power loss indices and the output
consequent is the suitability of voltage
regulator placement. The rules are
summarized in the fuzzy decision matrix
in table given above.
Fuzzy variables of PLI (power loss
index) are low, low-medium, medium,
high-medium, high.
(Block diagram of the fuzzy expert system: the input from the physical device/system is fuzzified, processed by the process logic using the rules and fuzzy sets, and defuzzified to give the system output.)
Fig 3.1 Membership functions for power loss index
Fuzzy variables for Voltage are low,
low-normal, normal, high-normal, high.
Fuzzy variables for Voltage regulator
suitability index are low, low-medium,
medium, high-medium, high.
Fig 3.3 Membership functions for
Voltage regulator suitability index
These fuzzy variables, described by
linguistic terms, are represented by the
membership functions shown in figs 3.1,
3.2 and 3.3.
3.3 Algorithm for optimum voltage
regulator placement in RDS using
FES:
Step 1. Read line and load data.
Step 2. Run load flows for the system
and compute the voltages at each bus,
real and reactive power losses of the
system.
Step 3. Install the voltage regulator at
every bus and compute the total real
power loss of the system at each case
and convert into normalized values.
Step 4. Obtain the optimal number and
locations of VRs by giving the voltages
and power loss indices as inputs to the FES.
Step 5. Obtain the optimal tap position of
VR using Eqn. (3.2), so that the voltage is
within the specified limits.
Step 6. Again run the load flows with
VR, then compute voltages at all buses, real
and reactive power losses. If voltages are
not within the limits, go to step 3.
Step 7. Determine the reduction in power
loss and net saving by using objective
function (Eqn (3.1)).
Step 8. Print results.
Step 9. Stop.
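A minimal sketch of Steps 3 and 4 is given below. The per-bus loss and voltage data, the membership functions and the two rules are hypothetical values chosen for illustration; this is not the MATLAB implementation used for the 47-bus and 69-bus studies reported in the paper.

# Minimal sketch: normalise per-bus losses into power loss indices (PLI)
# and rank buses with a two-rule fuzzy evaluation (assumed data and rules).

def tri(x, a, b, c):
    # Triangular membership function with vertices a <= b <= c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical total real power loss (kW) with a regulator placed at each bus,
# and hypothetical per-unit voltages at those buses.
loss_with_vr = {2: 220.0, 3: 205.0, 4: 180.0, 5: 172.0, 6: 160.0}
voltage_pu = {2: 0.982, 3: 0.971, 4: 0.958, 5: 0.949, 6: 0.941}

lo, hi = min(loss_with_vr.values()), max(loss_with_vr.values())
# PLI = 1 for the placement giving the largest loss reduction (an assumed convention).
pli = {bus: (hi - loss) / (hi - lo) for bus, loss in loss_with_vr.items()}

def suitability(pli_val, v):
    # Two illustrative rules only; the paper's full decision matrix has many more.
    pli_high = tri(pli_val, 0.5, 1.0, 1.5)
    v_low = tri(v, 0.90, 0.94, 0.98)
    rule1 = min(pli_high, v_low)          # IF PLI is high AND V is low THEN suitability is high
    rule2 = min(1 - pli_high, 1 - v_low)  # IF PLI is low AND V is normal THEN suitability is low
    # Weighted-average defuzzification with singleton outputs 0.9 and 0.2.
    return (rule1 * 0.9 + rule2 * 0.2) / (rule1 + rule2 + 1e-9)

best = max(pli, key=lambda bus: suitability(pli[bus], voltage_pu[bus]))
print("Most suitable bus for the voltage regulator:", best)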
4. RESULTS AND ANALYSIS
4.1.1 Results of FES:
The proposed method is illustrated with
a 47-bus practical RDS and a 69-bus RDS.
4.1.2 Example
Consider the 69-bus RDS, the line and
load data of which are given in [9]; the
results are presented in Table 6.6.
By applying FES the optimal place for
placing voltage regulator is bus 6 which
improves the voltage regulation and net
savings. The results are summarized in
the table given below.
It is observed from Table 6.1.2 that
without voltage regulators in the system
the percentage power loss is 5.9323 and
the percentage voltage regulation is
9.0811. With voltage regulators at buses
57 to 65 only, the percentage power loss
is 5.3422 and the percentage voltage
regulation is 4.3503, but the net saving is
(-) Rs. 1,52,280. With a voltage regulator
at the optimal location (obtained with the
proposed method) of bus 6, the
percentage power loss is reduced to
5.2372 and the percentage voltage
regulation is reduced to 2.9496. The
optimal net saving is increased to
Rs. 1,37,488.
Conclusion:
In a radial distribution system it is
necessary to maintain the voltage levels
at various buses by using capacitors,
conductor grading, or placing voltage
regulators at suitable locations. In this
paper voltage regulator placement is
discussed as a means to maintain the
voltage profile and to maximize net
savings. The proposed FES provides
good voltage regulation and reduces
power loss, which in turn increases
net savings.
Reference:
1. www.internationalseminars.com
2. www.googlesearch.com
3. Electrical Power Distribution, AS
Pabla, 5th Edition.
ARTIFICIAL INTELLIGENCE
TECHNIQUES In POWERSYSTEMS
Presented By:
M. NANDA DEEPA I. KAVITA
III/IV EEE III/IV EEE
VIGNAN'S ENGINEERING COLLEGE
VADLAMUDI
GUNTUR
ABSTRACT
This paper reviews three artificial intelligence tools that are most applicable to engineering problems: fuzzy logic, neural networks and genetic algorithms. Each of these tools is outlined in the paper together with examples of their use in different branches of engineering.
INTRODUCTION
Artificial intelligence emerged as a computer science discipline in the mid 1950s. Since then, it has produced a number of powerful tools, many of which are of practical use in engineering to solve difficult problems normally requiring human intelligence. Three of these tools will be reviewed in this paper. They are: fuzzy logic, neural networks and genetic algorithms. All of these tools have been in existence for more than 30 years and have found applications in engineering. Recent examples of these applications will be given in the paper, which also presents some of the work at the Cardiff Knowledge-based Manufacturing Center, a multi-million pound research and technology transfer center created to assist industry in the adoption of artificial intelligence in manufacturing.
A.I. METHODS USED IN POWER SYSTEMS
1. FUZZY LOGIC
2. NEURAL NETWORKS
3. GENETIC ALGORITHMS
Our discussion starts with fuzzy logic.
FUZZY LOGIC
INTRODUCTION
Fuzzy logic has rapidly become one of the most successful of today's technologies for developing sophisticated control systems. The reason for this is very simple. Fuzzy logic addresses such applications perfectly as it resembles human decision making, with an ability to generate precise solutions from certain or approximate information. It fills an important gap in engineering design methods left vacant by purely mathematical approaches (e.g. linear control design) and purely logic-based approaches (e.g. expert systems) in system design.
While other approaches require accurate equations to model real-world behaviors, fuzzy design can accommodate the ambiguities of real-world human language and logic. It provides both an intuitive method for describing systems in human terms and automates the conversion of those system specifications into effective models.
As the complexity of a system increases, it becomes more difficult and eventually impossible to make a precise statement about its behavior, eventually arriving at a point of complexity where the fuzzy logic method born in humans is the only way to get at the problem.
(Originally identified and set forth by Lotfi A. Zadeh, Ph.D., University of California, Berkeley)
Fuzzy logic is used in system control and analysis design, because it shortens the time for engineering development and sometimes, in the case of highly complex systems, is the only way to solve the problem.
The first applications of fuzzy theory were primarily industrial, such as process control for cement kilns. However, as the technology was further embraced, fuzzy logic was used in more useful applications. In 1987, the first fuzzy logic-controlled subway was opened in Sendai in northern Japan. Here, fuzzy-logic controllers make subway journeys more comfortable with smooth braking and acceleration. Best of all, all the driver has to do is push the start button! Fuzzy logic was also put to work in elevators to reduce waiting time. Since then the applications of Fuzzy Logic technology have virtually exploded, affecting things we use everyday.
HISTORY
The term "fuzzy" was first used by Dr. Lotfi Zadeh in the engineering journal, "
Proceedings of the IRE," a
leading engineering journal, in 1962. Dr. Zadeh became, in 1963, the Chairman of
the Electrical
Engineering department of the University of California at Berkeley.
The theory of fuzzy logic was discovered. Lotfi A. Zadeh, a professor of UC Berk
eley in California, soon to
be known as the founder of fuzzy logic observed that conventional computer logic
was incapable of
manipulating data representing subjective or vague human ideas such as "an attra
ctive person" or "pretty
hot". Fuzzy logic hence was designed to allow computers to determine the distinc
tions among data with
shades of gray, similar to the process of human reasoning. In 1965, Zadeh publis
hed his seminal work
"Fuzzy Sets" which described the mathematics of fuzzy set theory, and by extensi
on fuzzy logic. This
theory proposed making the membership function (or the values False and True) op
erate over the range of
real numbers [0.0, 1.0]. Fuzzy logic was now introduced to the world.
Although the technology was introduced in the United States, the scientists and researchers there ignored it, mainly because of its unconventional name. They refused to take seriously something which sounded so child-like. Some mathematicians argued that fuzzy logic was merely probability in disguise. Only stubborn scientists, or ones who worked in discrete mathematics, continued researching it.
While the US and certain parts of Europe ignored it, fuzzy logic was accepted with open arms in Japan, China and most Oriental countries. It may be surprising to some that the world's largest number of fuzzy researchers is in China, with over 10,000 scientists. Japan, though currently positioned at the leading edge of fuzzy studies, falls second in manpower, followed by Europe and the USA. Hence, it can be said that the popularity of fuzzy logic in the Orient reflects the fact that Oriental thinking more easily accepts the concept of "fuzziness". And because of this, the US, by some estimates, trails Japan by at least ten years in this forefront of modern technology.
UNDERSTANDING FUZZY LOGIC
Fuzzy logic is the way the human brain works, and we can mimic this in machines so they will perform somewhat like humans (not to be confused with Artificial Intelligence, where the goal is for machines to perform EXACTLY like humans). Fuzzy logic control and analysis systems may be electro-mechanical in nature, or concerned only with data, for example economic data, in all cases guided by "If-Then rules" stated in human language.
The Fuzzy Logic Method
The fuzzy logic analysis and control method is, therefore:
1. Receiving of one, or a large number, of measurements or other assessments of conditions existing in some system we wish to analyze or control.
2. Processing all these inputs according to human-based, fuzzy "If-Then" rules, which can be expressed in plain language words, in combination with traditional non-fuzzy processing.
3. Averaging and weighting the resulting outputs from all the individual rules into one single output decision or signal which decides what to do or tells a controlled system what to do. The output signal eventually arrived at is a precise-appearing, defuzzified, "crisp" value.
Fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth: truth-values between "completely true" and "completely false". As its name suggests, it is the logic underlying modes of reasoning which are approximate rather than exact. The importance of fuzzy logic derives from the fact that most modes of human reasoning, and especially common sense reasoning, are approximate in nature.
The essential characteristics of fuzzy logic as founded by Lotfi Zadeh are as follows.
In fuzzy logic, exact reasoning is viewed as a limiting case of approximate reasoning.
In fuzzy logic everything is a matter of degree.
Any logical system can be fuzzified.
In fuzzy logic, knowledge is interpreted as a collection of elastic or, equivalently, fuzzy constraints on a collection of variables.
Inference is viewed as a process of propagation of elastic constraints.
The third statement hence defines Boolean logic as a subset of fuzzy logic.
Professor Lotfi Zadeh at the University of California formalized fuzzy Set Theory in 1965. What Zadeh proposed is very much a paradigm shift that first gained acceptance in the Far East, and its successful application has ensured its adoption around the world.
A paradigm is a set of rules and regulations which defines boundaries and tells us what to do to be successful in solving problems within these boundaries. For example, the use of transistors instead of vacuum tubes is a paradigm shift; likewise the development of Fuzzy Set Theory from conventional bivalent set theory is a paradigm shift.
Bivalent Set Theory can be somewhat limiting if we wish to describe a 'humanistic' problem mathematically.
The whole concept can be illustrated with this example. Let's talk about people and "youthness". In this case the set S (the universe of discourse) is the set of people. A fuzzy subset YOUNG is also defined, which answers the question "to what degree is person x young?" To each person in the universe of discourse, we have to assign a degree of membership in the fuzzy subset YOUNG. The easiest way to do this is with a membership function based on the person's age.
Young(x) = { 1,                 if age(x) <= 20,
             (30 - age(x))/10,  if 20 < age(x) <= 30,
             0,                 if age(x) > 30 }
A graph of this membership function falls linearly from 1 at age 20 to 0 at age 30. Given this definition, here are some example values:

Person      Age   Degree of youth
Johan       10    1.00
Edwin       21    0.90
Parthiban   25    0.50
Arosha      26    0.40
Chin Wei    28    0.20
Rajkumar    83    0.00

So given this definition, we'd say that the degree of truth of the statement "Parthiban is YOUNG" is 0.50.
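The membership function above can be checked directly; the short sketch below simply encodes the piecewise definition and reproduces the tabulated degrees of youth.

def young(age):
    # Degree of membership in the fuzzy subset YOUNG, as defined above.
    if age <= 20:
        return 1.0
    if age <= 30:
        return (30 - age) / 10.0
    return 0.0

people = {"Johan": 10, "Edwin": 21, "Parthiban": 25,
          "Arosha": 26, "Chin Wei": 28, "Rajkumar": 83}
for name, age in people.items():
    print(f"{name:10s} {age:3d} {young(age):.2f}")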
Fuzzy Rules
Human beings make decisions based on rules. Although we may not be aware of it, all the decisions we make are based on computer-like if-then statements. If the weather is fine, then we may decide to go out. If the forecast says the weather will be bad today, but fine tomorrow, then we make a decision not to go today, and postpone it till tomorrow. Rules associate ideas and relate one event to another.
Fuzzy machines, which always tend to mimic the behavior of man, work the same way. However, the decision and the means of choosing that decision are replaced by fuzzy sets and the rules are replaced by fuzzy rules. Fuzzy rules also operate using a series of if-then statements. For instance, if x is A then y is B, where A and B are fuzzy sets defined on the universes X and Y. Fuzzy rules define fuzzy patches, which is the key idea in fuzzy logic.
A machine is made smarter using a concept designed by Bart Kosko called the Fuzzy Approximation Theorem (FAT). The FAT theorem generally states that a finite number of patches can cover a curve, as seen in the figure below. If the patches are large, then the rules are sloppy. If the patches are small, then the rules are fine.
Fuzzy Patches
In a fuzzy system this simply means that all our rules can be seen as patches, and the input and output of the machine can be associated together using these patches. Graphically, if the rule patches shrink, our fuzzy subset triangles get narrower. Simple enough? Yes, because even novices can build control systems that beat the best math models of control theory. Naturally, it is a math-free system.
Fuzzy Control
Fuzzy control, which directly uses fuzzy rules, is the most important application in fuzzy theory. Using a procedure originated by Ebrahim Mamdani in the mid-1970s, three steps are taken to create a fuzzy controlled machine (a small sketch follows this list):
1) Fuzzification (Using membership functions to graphically describe a situation)
2) Rule evaluation (Application of fuzzy rules)
3) Defuzzification (Obtaining the crisp or actual results)
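A toy single-input controller illustrating the three steps is sketched below. The membership functions, rules and singleton output values are assumptions chosen only for illustration, and a simple weighted-average defuzzification is used for brevity rather than a full centroid calculation.

# Toy controller: one input (error) and one output (valve opening, 0-100 %).

def tri(x, a, b, c):
    # Triangular membership function with vertices a <= b <= c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def control(error):
    # 1) Fuzzification: describe the crisp error with fuzzy sets.
    negative = tri(error, -10, -5, 0)
    zero     = tri(error, -5, 0, 5)
    positive = tri(error, 0, 5, 10)
    # 2) Rule evaluation: IF error is ... THEN opening is ... (singleton outputs).
    rules = [(negative, 20.0),   # error negative -> open a little
             (zero,     50.0),   # error zero     -> open halfway
             (positive, 80.0)]   # error positive -> open a lot
    # 3) Defuzzification: weighted average of the rule outputs gives a crisp value.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

for e in (-7, -2, 0, 3, 8):
    print(f"error = {e:+d}  ->  valve opening = {control(e):.1f} %")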
Block diagram of Fuzzy controller.
TERMS USED IN FUZZY LOGIC
Degree of Membership - The degree of membership is the placement in the transition from 0 to 1 of conditions within a fuzzy set. If a particular building's placement on the scale is a rating of 0.7 in its position in newness among new buildings, then we say its degree of membership in new buildings is 0.7.
Fuzzy Variable - Words like red, blue, etc., are fuzzy and can have many shades and tints. They are just human opinions, not based on precise measurement in angstroms. These words are fuzzy variables.
Linguistic Variable - Linguistic means relating to language, in our case plain language words.
Fuzzy Algorithm - An algorithm is a procedure, such as the steps in a computer program. A fuzzy algorithm, then, is a procedure, usually a computer program, made up of statements relating linguistic variables.
An example for a fuzzy logic system is provided at the end of the paper.
A Fuzzy Proportional controller
A Fuzzy PD controller
A Fuzzy PID controller
Time response of FPID controller.
These are some of the controllers used in engineering.
CONCLUSION
Fuzzy logic potentially has many applications in engineering, where the domain knowledge is usually imprecise. Notable successes have been achieved in the area of process and machine control, although other sectors have also benefited from this tool. Recent examples of engineering applications include:
1. Controlling the height of the arc in a welding process
2. Controlling the rolling motion of an aircraft
3. Controlling a multi-fingered robot hand
4. Analyzing the chemical composition of minerals
5. Determining the optimal formation of manufacturing cells
6. Classifying discharge pulses in electrical discharge machining.
Fuzzy logic is not the wave of the future. It is now! There are already hundreds of millions of dollars of successful, fuzzy logic based commercial products: everything from self-focusing cameras to washing machines that adjust themselves according to how dirty the clothes are, automobile engine controls, anti-lock braking systems, color film developing systems, subway control systems and computer programs trading successfully in the financial markets.
NEURAL NETWORKS
INTRODUCTION
Like inductive learning programs, neural networks can capture domain knowledge from examples. However, they do not archive the acquired knowledge in an explicit form such as rules or decision trees, and they can readily handle both continuous and discrete data. They also have a good generalization capability, as with fuzzy expert systems.
UNDERSTANDING NEURAL NETWORKS
A neural network is a computational model of the brain. Neural network models usually assume that computation is distributed over several simple units called neurons, which are interconnected and operate in parallel (hence, neural networks are also called parallel-distributed-processing systems or connectionist systems).
The most popular neural network is the multi-layer perceptron, which is a feed-forward network: all signals flow in a single direction from the input to the output of the network. Feed-forward networks can perform static mapping between an input space and an output space: the output at a given instant is a function only of the input at that instant.
Recurrent networks, where the outputs of some neurons are fed back to the same neurons or to neurons in layers before them, are said to have a dynamic memory: the output of such networks at a given instant reflects the current input as well as previous inputs and outputs.
Implicit knowledge is built into a neural network by training it. Some neural networks can be trained by being presented with typical input patterns and the corresponding expected output patterns. The error between the actual and expected outputs is used to modify the strengths, or weights, of the connections between the neurons. This method of training is known as supervised training. In a multi-layer perceptron, the back-propagation algorithm for supervised training is often adopted to propagate the error from the output neurons and compute the weight modifications for the neurons in the hidden layers.
Some neural networks are trained in an unsupervised mode, where only the input patterns are provided during training and the networks learn automatically to cluster them in groups with similar features.
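A minimal sketch of supervised training with back-propagation is shown below for the XOR problem. The network size, learning rate and number of epochs are illustrative assumptions, not values taken from the paper.

import numpy as np

# 2-4-1 multi-layer perceptron trained on XOR with back-propagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)          # expected outputs

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)                              # forward pass: hidden layer
    Y = sigmoid(H @ W2 + b2)                              # forward pass: output layer
    dY = (Y - T) * Y * (1 - Y)                            # error at the output neurons
    dH = (dY @ W2.T) * H * (1 - H)                        # error propagated to the hidden layer
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0, keepdims=True)

# Trained outputs should approach 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))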
A neuro-fuzzy system can be used to study both neural as well as fuzzy logic systems. A neural network can approximate a function, but it is impossible to interpret the result in terms of natural language. The fusion of neural networks and fuzzy logic in neuro-fuzzy models provides learning as well as readability. Control engineers find this useful, because the models can be interpreted and supplemented by process operators.
Figure 1: Indirect adaptive control: the controller parameters are updated indirectly via a process model.
A neural network can model a dynamic plant by means of a nonlinear regression in the discrete time domain. The result is a network, with adjusted weights, which approximates the plant. It is a problem, though, that the knowledge is stored in an opaque fashion; the learning results in a (large) set of parameter values, almost impossible to interpret in words.
Conversely, a fuzzy rule base consists of readable if-then statements that are almost natural language, but it cannot learn the rules itself. The two are combined in neuro-fuzzy systems in order to achieve readability and learning ability at the same time. The obtained rules may reveal insight into the data that generated the model, and for control purposes, they can be integrated with rules formulated by control experts (operators).
Assume the problem is to model a process such as in the indirect adaptive controller in Fig. 1. A mechanism is supposed to extract a model of the nonlinear process, depending on the current operating region. Given a model, a controller for that operating region is to be designed using, say, a pole placement design method. One approach is to build a two-layer perceptron network that models the plant, linearise it around the operating points, and adjust the model depending on the current state (Nørgaard, 1996). The problem seems well suited for the so-called Takagi-Sugeno type of neuro-fuzzy model, because it is based on piecewise linearisation.
Extracting rules from data is a form of modeling activity within pattern recognition, data analysis or data mining, also referred to as the search for structure in data.
TRIAL AND ERROR
The input space, that is, the coordinate system formed by the input variables (position, velocity, error, change in error), is partitioned into a number of regions. Each input variable is associated with a family of fuzzy term sets, say 'negative', 'zero', and 'positive'. The expert must then define the membership functions. For each valid combination of inputs, the expert is supposed to give typical values for the outputs. The task for the expert is then to estimate the outputs. The design procedure would be:
1. Select relevant input and output variables,
2. Determine the number of membership functions associated with each input and output, and
3. Design a collection of fuzzy rules.
Considering the data given,
Figure 2: A fuzzy model approximation (solid line, top) of a data set (dashed line, top). The input space is divided into three fuzzy regions (bottom).
CLUSTERING
A better approach is to approximate the target function with a piece-wise linear function and interpolate, in some way, between the linear regions.
In the Takagi-Sugeno model (Takagi & Sugeno, 1985) the idea is that each rule in a rule base defines a region for a model, which can be linear. The left hand side of each rule defines a fuzzy validity region for the linear model on the right hand side. The inference mechanism interpolates smoothly between each local model to provide a global model. The general Takagi-Sugeno rule structure is
If f(e1 is A1, e2 is A2, ..., ek is Ak) then y = g(e1, e2, ..., ek)
Here f is a logical function that connects the sentences forming the condition, y is the output, and g is a function of the inputs e1, ..., ek. An example is:
If error is positive and change in error is positive then U = Kp*(error + Td*change in error)
where U is the controller's output, and the constants Kp and Td are the familiar tuning constants for a proportional-derivative (PD) controller. Another rule could specify a PD controller with different tuning settings, for another operating region. The inference mechanism is then able to interpolate between the two controllers in regions of overlap.
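A minimal sketch of this interpolation between two local PD controllers is given below; the membership functions and gains are assumed values chosen for illustration only.

# Two Takagi-Sugeno rules, each with a local PD controller on the right-hand
# side; the fuzzy weights interpolate smoothly between them in the overlap.

def mu_small(e):
    # "error is small" membership, 1 at e = 0, 0 beyond |e| = 5 (assumed shape).
    return max(0.0, min(1.0, (5.0 - abs(e)) / 5.0))

def mu_large(e):
    # "error is large" membership, complementary to mu_small here.
    return 1.0 - mu_small(e)

def ts_controller(error, d_error):
    # Rule 1: IF error is small THEN u = 1.0*(error + 0.1*d_error)   (gentle tuning)
    # Rule 2: IF error is large THEN u = 4.0*(error + 0.5*d_error)   (aggressive tuning)
    w1, w2 = mu_small(error), mu_large(error)
    u1 = 1.0 * (error + 0.1 * d_error)
    u2 = 4.0 * (error + 0.5 * d_error)
    return (w1 * u1 + w2 * u2) / (w1 + w2)

for e in (0.5, 2.5, 8.0):
    print(f"error = {e:4.1f}  ->  u = {ts_controller(e, d_error=1.0):6.2f}")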
Figure 3: Interpolation between two lines (top) in the overlap of input sets (bottom).
FEATURE DETERMINATION
In general, data analysis (Zimmermann, 1993) concerns objects, which are described by features. A feature can be regarded as a pool of values from which the actual values appearing in a given column of a data table are drawn.
Some other techniques are the hard clustering algorithm, the fuzzy clustering algorithm, the subtractive clustering algorithm, neuro-fuzzy approximation, and the adaptive neuro-fuzzy inference system.
Above is an example of clusters.
CONCLUSION
Thus, better system modeling can be obtained by using neuro-fuzzy modeling as seen above, as the resultant system occupies a vantage point above both neural and fuzzy logic systems.
GENETIC ALGORITHM
A problem with back-propagation and least squares optimization is that they can be trapped in a local minimum of a nonlinear objective function, because they are derivative based. Genetic algorithms (survival of the fittest!) are derivative-free, stochastic optimization methods, and therefore less likely to get trapped. They can be used to optimize both structure and parameters in neural networks. A special application for them is to determine fuzzy membership functions. A genetic algorithm mimics the evolution of populations. First, different possible solutions to a problem are generated. They are tested for their performance, that is, how good a solution they provide. A fraction of the good solutions is selected, and the others are eliminated (survival of the fittest). Then the selected solutions undergo the processes of reproduction, crossover, and mutation to create a new generation of possible solutions, which is expected to perform better than the previous generation. Finally, production and evaluation of new generations is repeated until convergence. Such an algorithm searches for a solution from a broad spectrum of possible solutions, rather than where the results would normally be expected. The penalty is computational intensity. The elements of a genetic algorithm are explained next (Jang et al., 1997).
1. Encoding. The parameter set of the problem is encoded into a bit string representation. For instance, a point (x, y) = (11, 6) can be represented as a chromosome, which is a concatenated bit string:
1 0 1 1 0 1 1 0
Each coordinate value is a gene of four bits. Other encoding schemes can be used, and arrangements can be made for encoding negative and floating-point numbers.
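A small sketch of this four-bits-per-gene encoding is shown below; the helper names are illustrative, not from any particular library.

def encode(point, bits=4):
    # Encode an (x, y) pair as a concatenated bit string, 'bits' bits per gene.
    return "".join(format(v, f"0{bits}b") for v in point)

def decode(chromosome, bits=4):
    # Split the chromosome back into its integer genes.
    return tuple(int(chromosome[i:i + bits], 2)
                 for i in range(0, len(chromosome), bits))

print(encode((11, 6)))        # -> 10110110, as in the example above
print(decode("10110110"))     # -> (11, 6)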
2. Fitness evaluation. After creating a population, the fitness value of each member is calculated.
3. Selection. The algorithm selects which parents should participate in producing offspring for the next generation. Usually the probability of selection for a member is proportional to its fitness value.
4. Crossover. Crossover operators generate new chromosomes that hopefully retain good features from the previous generation. Crossover is usually applied to selected pairs of parents with a probability equal to a given crossover rate. In one-point crossover, a crossover point on the genetic code is selected at random and two parent chromosomes interchange their bit strings to the right of this point.
5. Mutation. A mutation operator can spontaneously create new chromosomes. The most common way is to flip a bit with a probability equal to a very low, given mutation rate. The mutation prevents the population from converging towards a local minimum. The mutation rate is low in order to preserve good chromosomes.
ALGORITHM
An example of a simple genetic algorithm for a maximization problem is the following.
1. Initialize the population with randomly generated individuals and evaluate the fitness of each individual.
2. Produce the next generation:
(a) Select two members from the population with probabilities proportional to their fitness values.
(b) Apply crossover with a probability equal to the crossover rate.
(c) Apply mutation with a probability equal to the mutation rate.
(d) Repeat (a) to (c) until enough members are generated to form the next generation.
3. Repeat steps 2 and 3 until a stopping criterion is met.
If the mutation rate is high (above 0.1), the performance of the algorithm will be as bad as a primitive random search.
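A minimal sketch of such an algorithm, maximizing a toy function of an 8-bit chromosome, is given below; the fitness function, population size, crossover rate and mutation rate are illustrative assumptions.

import random

# Simple genetic algorithm maximising f(x) = x * (255 - x) over 8-bit
# chromosomes, following steps 1-3 above.
random.seed(1)
BITS, POP, GENERATIONS = 8, 20, 60
CROSSOVER_RATE, MUTATION_RATE = 0.8, 0.02

fitness = lambda chrom: int(chrom, 2) * (255 - int(chrom, 2))

def select(pop):
    # Fitness-proportionate (roulette-wheel) selection of one parent.
    return random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=1)[0]

def crossover(p1, p2):
    # One-point crossover applied with the given crossover rate.
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, BITS - 1)
        return p1[:point] + p2[point:]
    return p1

def mutate(chrom):
    # Flip each bit with a small probability equal to the mutation rate.
    return "".join(b if random.random() > MUTATION_RATE else "10"[int(b)]
                   for b in chrom)

population = ["".join(random.choice("01") for _ in range(BITS)) for _ in range(POP)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(best, int(best, 2), fitness(best))   # should end up close to x = 127 or 128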
CONCLUSION
This is how the genetic algorithm method of analysis is used in power systems.
These are the various Artificial Intelligence techniques used in power systems.
CONCLUSION
Over the past 40 years, artificial intelligence has produced a number of powerful tools. This paper has reviewed three of those tools, namely fuzzy logic, neural networks and genetic algorithms. Applications of the tools in engineering have become more widespread due to the power and affordability of present-day computers. It is anticipated that many new engineering applications will emerge and that, for demanding tasks, greater use will be made of hybrid tools combining the strengths of two or more of the tools reviewed. Other technological developments in artificial intelligence that will have an impact in engineering include data mining, or the extraction of information and knowledge from large databases, and multi-agent systems, or distributed self-organizing systems employing entities that function autonomously in an unpredictable environment concurrently with other entities and processes. This paper is an effort to give an insight into the ocean that is the field of Artificial Intelligence.
REFERENCES:
www.thesis.lib/cycu
www.scholar.google.com
www.ieee-explore.com
www.onesmartclick.com/engineering
GATES INSTITUTE OF TECHNOLOGY
A PAPER
ON
TRENDS IN POWER SYSTEM
PROTECTION AND CONTROL
PRESENTED
BY
M. SREENIVASA REDDY B. SRIKANTH
AD: NO: 06F21A0252 AD: NO 06F21A0251
III-II SEM EEE III-II SEM EEE
Gates Institute Of Technology Gates Institute Of Technology
Email: [email protected] Email:[email protected]
Mobile: 9701770636 Mobile: 9885042564
ABSTRACT
As a consequence of deregulation,
competition, and problems in securing capital
outlays for expansion of the infrastructure,
modern power systems are operating at
ever-smaller capacity and stability
margins. Traditional
entities involved in securing adequate protection
and control for the system may soon become
inadequate, and the emergence of the new
participants (non-utility generation,
transmission, and distribution companies)
requires coordinated approach and careful
coordination of the new operating conditions. The
paper reviews the key issues and design
considerations for the present and new generation
of SPS and emergency control schemes, and
evaluates the strategies for their implementation.
1. Introduction
System-wide disturbances in power
systems are a challenging problem for the utility
industry because of the large scale and the
complexity of the power system. When a major
power system disturbance occurs, protection and
control actions are required to stop the power
system degradation, restore the system to a
normal state, and minimize the impact of the
disturbance. The present control actions are not
designed for a fast-developing disturbance and
may be too slow. Further, dynamic simulation
software is applicable only for off-line analysis.
The operator must therefore deal with a very
complex situation and rely on heuristic solutions
and policies.
Today, local automatic actions protect
the system from the propagation of
fast-developing emergencies. However, local
protection systems are not able to consider the
overall system, which may be affected by the
disturbance. The trend in power system planning
utilizes tight operating margins, with less
redundancy, because of new constraints placed by
economical and environmental factors. At the
same time, addition of non-utility generators and
independent power producers, an interchange
increase, an increasingly competitive
environment, and introduction of FACTS devices
make the power system more complex to operate
and to control, and, thus, more vulnerable to a
disturbance. On the other hand, the advanced
measurement and communication technology in
wide area monitoring and control, FACTS devices
(better tools to control the disturbance), and new
paradigms (fuzzy logic and neural networks) may
provide better ways to detect and control an
emergency.
Better detection and control
strategies through the concept of wide area
disturbance protection offer a better management
of the disturbances and significant opportunity for
higher power transfers and operating economies.
Wide area disturbance protection is a concept of
using system-wide information and sending
selected local information to a remote location to
counteract propagation of the major disturbances
in the power system. With the increased
availability of sophisticated computer,
communication and measurement technologies,
more "intelligent" equipment can be used at the
local level to improve the overall emergency
response. Decentralized subsystems, that can
make local decisions based on local
measurements and remote information
(system-wide data and emergency control
policies) and/or
send pre-processed information to higher
hierarchical levels are an economical solution to
the problem. A major component of the
system-wide disturbance protection is the ability to
receive system wide information and commands
via the data communication system and to send
selected local information to the SCADA centre.
This information
should reflect the prevailing state of the power
system.
2. Types of Disturbances and Remedial
Measures
Phenomena which create the power system
disturbance are divided into the following
categories: angular stability, voltage stability,
overload and power system cascading.
2.1. Angular stability
The objective of out-of-step protection as
it is applied to generators and systems, is to
eliminate the possibility of damage to generators
as a result of an out-of-step condition. In the case
where power system separation is imminent, it
should take place along boundaries which will
form islands with matching load and generation.
Distance relays are often used to
provide an out-of-step protection function,
whereby they are called upon to provide blocking
or tripping signals upon detecting an out-of-step
condition. The most common predictive scheme
to combat loss of synchronism is the Equal-Area
Criterion and its variations. This method assumes
that the power system behaves like a two-machine
model where one area oscillates against the rest of
the system. Whenever the underlying assumption
holds true, the method has potential for fast
detection.
2.2. Voltage stability
Voltage stability is defined by the
System Dynamic Performance Subcommittee of
the IEEE Power System Engineering Committee
[3] as being the ability of a system to maintain
voltage such that when load admittance is
increased, load power will increase, and so that
both power and voltage are controllable. Also,
voltage collapse is defined as being the process by
which voltage instability leads to a very low
voltage profile in a significant part of the system.
It is accepted that this instability is caused by the
load characteristics, as opposed to the angular
instability which is caused by the rotor dynamics
of generators. The risk of voltage instability
increases as the transmission system becomes
more heavily loaded.
The typical scenario of these
instabilities starts with a high system loading,
followed by a relay action due to either a fault, a
line overload or hitting an excitation limit.
Voltage instability can be alleviated by a
combination of the following remedial
measures: adding reactive compensation near load
centers, strengthening the transmission lines,
varying the operating conditions such as voltage
profile and generation dispatch, coordinating
relays and controls, and load shedding. Most
utilities rely on planning and operation studies to
guard against voltage instability. Many utilities
utilize localized voltage measurements in order to
achieve load shedding as a measure against
incipient voltage instability [4].
2.3. Overload and Power System Cascading
Outage of one or more power system
elements due to the overload may result in
overload of other elements in the system. If the
overload is not alleviated in time, the process of
power system cascading may start, leading to
power system separation. When a power system
separates, islands with an imbalance between
generation and load are formed with a
consequence of frequency deviation from the
nominal value. If the imbalance cannot be handled
by the generators, load or generation shedding is
necessary. The separation can also be started by a
special protection system or out-of-step relaying.
A quick, simple, and reliable way to reestablish
active power balance is to shed load by
under frequency relays. There are a large variety
of practices in designing load shedding schemes
based on the characteristics of a particular system
and the utility practices [5-6]. While the system
frequency is a final result of the power deficiency,
the rate of change of frequency is an
instantaneous indicator of power deficiency and
can enable incipient recognition of the power
imbalance. However, change of the machine
speed is oscillatory by nature, due to the
interaction among generators. These oscillations
depend on location of the sensors in the island and
the response of the generators. The problems
regarding the rate-of-change of frequency
function are [7]:
· A smaller system inertia causes a
larger peak-to-peak value for oscillations. For the
larger peak-to-peak values, enough time must be
allowed for the relay to calculate the actual
rate-of-change of frequency reliably. Measurements at
load buses close to the electrical center of the
system are less susceptible to oscillations (smaller
peak-to-peak values) and can be used in practical
applications. A smaller system inertia causes a
higher frequency of oscillations, which enables
faster calculation of the actual rate-of-change of
frequency. However, it causes a faster
rate-of-change of frequency and, consequently,
a larger frequency drop.
· Even if rate-of-change of frequency relays
measure the average value throughout the
network, it is difficult to set them properly, unless
typical system boundaries and imbalance can be
predicted. If this is the case (e.g. industrial and
urban systems), the rate-of-change of frequency
relays may improve a load shedding scheme
(scheme can be more selective and/or faster).
· Adaptive settings of frequency and frequency
derivative relays may enable implementation of a
frequency derivative function more effectively
and reliably.
3. Possible Improvements in Control and
Protection
Existing protection/control systems may
be improved and new protection/control systems
may be developed to better adapt to prevailing
system conditions during system-wide
disturbance. While improvements in the existing
systems are mostly achieved through
advancement in local measurements and
development of better algorithms, improvements
in new systems are based on remote
communications. However, even if
communication links exist, systems with only
local information may still need improvement
since they are envisioned as fallback positions.
The modern energy management
system (EMS) can provide system-wide
information for the network control and
protection. The EMS is supported by supervisory
control and data acquisition (SCADA) software
and various power system analysis tools. The
increased functions and communication ability in
today's SCADA systems provide the opportunity
for an intelligent and adaptive control and
protection system for system-wide disturbance.
This in turn can make possible full utilization of
the network, which will be less vulnerable to a
major disturbance.
3.1 Angular stability
Out-of-step relays have to be fast
and reliable. The increased utilization of
transmission and generation capacity as well as
the increased distance of power transmission are
some of the factors that cause an out-of-step
situation to develop rapidly. The interconnected
nature of power systems cause large geographic
areas to be affected by an out-of-step condition.
The present technology of out-of-step tripping or
blocking distance relays is not capable of fully
dealing with the control and protection
requirements of power systems.
Central to the development
effort of an out-of-step protection system is the
investigation of the multi-area out-of-step
situation. The new generation of out-of-step
relays has to utilize more measurements, both
local and remote, and has to produce more
outputs. The structure of the overall relaying
system has to be distributed and coordinated
through a central control. In order for the relaying
system to manage complexity, most of the
decisions have to be taken locally. The relay
system is preferred to be adaptive, in order to
cope with system changes. To deal with
out-of-step prediction, it is necessary to start with a
system-wide approach, find out what sets of
information are crucial, how to process
information with acceptable speed and accuracy.
3.2. Voltage Instability
The protection against voltage
instability should also be addressed as a part of
hierarchical structure. Decentralized actions are
performed at substations with local signals and
signals obtained from slow communication with
other substations and/or central level (e.g. using
SCADA data). The higher hierarchical level
requires more sophisticated communication of
relevant system signals and a coordination
between the actions of the various substations.
The recommended approach for designing the
new generation of voltage instability protection is
to first design a voltage instability relay with only
local signals. The limitations of local signals
should be identified in order to be in a position to
select appropriate communicated signals.
However, a minimum set of communicated
signals should always be known in order to design
a reliable protection, and it requires the following:
(a) determining the algorithm for gradual
reduction of the number of necessary
measurement sites with minimum loss of
information necessary for voltage stability
monitoring, analysis, and control;
(b) development of methods (i.e. sensitivity
analysis of reactive powers), which should
operate concurrent with any existing local
protection techniques, and possessing superior
performance, both in terms of security and
dependability.
3.3. Power System Cascading and Load
Shedding Strategies
Conventional load shedding
schemes without communications are not adaptive
to system conditions which are different from the
one used in the load shedding design. For the
relays to adapt to the prevailing system
conditions, their settings should change based on
the available spinning reserve, total system
inertia, and load characteristics. These values may
be periodically determined at the central site from
SCADA data and provided to the relays using low
speed communications. In addition, the actual
load, which would represent an assigned
percentage for shedding at each step, may be
periodically calculated at a central site based on
the actual load distribution. However, the system
characteristics may change depending on the
separation points. If the separation is controlled
from a central site or can be predicted, an
algorithm may calculate the settings and assign
the appropriate load in coordination with
switching actions. However, high speed
communication may be required to and from the
central location for fast-developing disturbances,
such as multi-machine angular instability.
Another aspect may be
adding a correction element to a scheme. If only
slow speed communications are available, a fast
load shedding scheme may be implemented to
stop system degradation. When adequate
information is available, corrective measures may
be applied. If the composite system inertia
constant is known, the actual power imbalance
may be calculated directly from the frequency.
This detection should be fast (to avoid a large
frequency drop) and done at the location close to
the center of inertia. High speed communications
are required to initiate load shedding at different
power system locations. Further, changes of load
and generation, with frequency and in particular
voltage, impact the power imbalance and
calculation of the average of the frequency
derivative. In addition, the power system
imbalance changes after the initial disturbance
due to dynamic system changes. Thus, relay
settings should be based on the spinning reserve,
total system inertia, and load characteristics and
distribution. In conclusion, sophisticated models
and/or high-speed communication may be
required for accurate estimation of the amount
and distribution of the load to be shed. If
communications are available, it is easier and
more reliable to calculate the amount of load to
shed from the switching information and the
mismatch (based on data on load and generation
before the separation) in the island. To avoid the disadvantages of under-frequency load shedding and the difficulties with implementing the rate-of-change-of-frequency function, automated load shedding that will reduce overloading or prevent system instability before the system is isolated is proposed as an advantageous strategy.
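To make this concrete, the following is a minimal sketch (not from the paper) of estimating the per-unit power imbalance from the initial frequency decay using the swing relation ΔP ≈ 2H·(dΔf/dt)/f0; the inertia constant, nominal frequency and frequency samples are assumed values for illustration only:

% Estimate power imbalance (pu) from the average frequency derivative
H  = 5;                           % assumed composite inertia constant, s
f0 = 50;                          % nominal frequency, Hz
t  = [0 0.1 0.2 0.3];             % measurement instants, s
f  = [50.00 49.95 49.90 49.85];   % assumed measured frequency, Hz
dfdt = (f(end) - f(1)) / (t(end) - t(1));   % average df/dt, Hz/s
dP = 2*H*dfdt/f0                  % about -0.1 pu, i.e. a generation deficit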
4. Example: Angular Stability
An algorithm for predicting the location at which
an out-of-step can take place following a disturbance in a large-scale system is shown as an example of the hierarchical protection and control
strategies using communications. To implement
this scheme, one needs a central computer that
receives information from across the system. The
sets of crucial information that require fast
communications consist of: generator speeds, and
changes in line status. Other information needed
by the algorithm are generation
and load levels. Using these sets of information, a
simple and quick processing method is able to
tell, with a high degree of accuracy,
(1) whether an out-of-step is imminent, and (2) the boundary across which this out-of-step will
take place. This algorithm has been tested
thoroughly using a Monte-Carlo-type approach.
At each test, random values are assigned to line
impedances, load, generator inertias, as well as
disturbance location. It is found that the algorithm
is capable of making accurate prediction. To
illustrate how the algorithm works, consider the
power system shown in Figure 1. This system is a
modified version of the IEEE 39-bus test system.
In Figure 1, each generator is marked with a
circle, and each load with a square; the size of
each symbol indicates relatively the power
generated or consumed at the node. For example,
Generator 34 supplies more MW to the grid than
does Generator 38.
A disturbance is introduced to the system where
two lines are simultaneously removed (each line
is marked by an 'x' in Figure 1). This information
is fed to the algorithm, which predicts that an out-of-step will occur across the line 2-25. Figure 2
reveals that the two generators 37 and 38
eventually separate from the other generators. All
line angles have been checked and none but line 2-25 indicates the boundary of the out-of-step. The angle of the critical line is shown in Figure 3. This
confirms the result of the algorithm.
Such an algorithm requires a centralized scheme
and high speed communication links across the
wide-area system. A decentralized scheme requires communications with a central location.
According to this hierarchical scheme, each
regional computer issues control actions to
alleviate problems that are imminent within its
jurisdiction; the coordination among regions is
left to the central computer.
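The prediction logic itself is not reproduced here; purely as an illustrative placeholder (and not the algorithm of this paper), a central computer receiving bus-angle data could flag candidate out-of-step boundaries by watching the angle difference across each monitored line, since the boundary line is the one whose terminal angles drift apart:

% Illustrative check only: flag lines whose across-line angle is diverging
mon_lines = {'2-25', '2-3', '25-26'};   % assumed monitored lines
angle_deg = [165 12 20];                % assumed bus-angle differences, degrees
threshold = 150;                        % assumed alarm threshold, degrees
for k = 1:numel(mon_lines)
    if abs(angle_deg(k)) > threshold
        fprintf('Possible out-of-step boundary across line %s\n', mon_lines{k});
    end
end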
5. Conclusion
A large disturbance such as a
sudden outage of a transmission line may trigger a
sequence of events leading to machine swings, voltage problems, and eventually a power outage in a large area of the system. The role of a protection and control system is to predict the system instability in a timely manner, to perform actions to restore the system to a normal state, and to minimize the impact of the disturbance.
As communication and
computer technology continue to improve, and
protection and control becomes more integrated,
an application of adaptive system-wide protection
is becoming more feasible. Since any
improvement in system-wide protection and
control products provides significant savings to
utility, the decentralized systems that provide
improved and economical solution for the systemwide
disturbance problems are very attractive.
Automated load shedding that
will reduce overloading before the system is
isolated is an improved solution in comparison to
under-frequency load shedding. Although local measurements may suffice if tasks are simple (e.g. protection against a few contingencies only), information communicated either from a central location or from a remote substation seems necessary for more sophisticated requirements.
Microprocessor-based
coordinated protection, monitoring and control
systems are the key to innovations in power
system operating philosophy. The coordinated
system is clearly the future of relaying
technology.
As communication and
computer technology continue to improve, and
protection and control become more integrated,
the application of the adaptive wide area
disturbance protection concept is becoming more
feasible. Since any improvement in the wide area
protection and control strategy provides
significant savings to the utility, the intelligent
systems that provide an improved and economical solution for the wide-area disturbance problems
are very attractive. Intelligent emergency control
systems (i.e. systems described in the paper)
provide more secure operation and better
emergency responses, allowing utilities to operate
at closer transmission and generation margins.
CALCULATION OF A PARAMETER TO QUASH
FREQUENCY CONTROL PROBLEM
By
G.Sowmya J.Swetha
[email protected] [email protected]
Phone no:9491338873 phone no:9394198655
R.V.R. & J.C.COLLEGE OF ENGINEERING
ABSTRACT:
As frequency is a major stability criterion, active power balance and constant frequency are required in order to maintain stability. To improve stability, load frequency control (LFC) is required. The speed governing system is the most important part of the LFC of an isolated power system. In this paper the speed regulation R of the governor required for control system stability is calculated using the Routh-Hurwitz array, the same quantity is found using the root locus in MATLAB, and the results are compared. Later the LFC system is equipped with a secondary integral control loop for automatic generation control (AGC), and the frequency deviation step responses with and without AGC are obtained using MATLAB and compared.
INTRODUCTION:
Frequency is a major stability criterion for large-scale stability in multi-area power systems. To provide stability, active power balance and constant frequency are required. Frequency depends on the active power balance. If any change occurs in the active power demand or generation in the power system, the frequency cannot be held at its rated value, so oscillations increase in both power and frequency and the system is subjected to a serious instability problem. To improve the stability of the power network, it is necessary to design a load frequency control (LFC) system that controls the power generation and the active power at the tie lines.
In modern large interconnected systems manual regulation is not feasible, and therefore automatic generation and voltage regulation equipment is installed on each generator. The controllers are set for a particular operating condition and take care of small changes in load demand without the voltage and frequency exceeding the prescribed limits. With the passage of time, as the change in load demand becomes large, the controllers must be reset either manually or automatically.
Schematic diagram of load
frequency and excitation
voltage regulators of a turbo
generator.
The two loops, the voltage control loop and the frequency control loop, do not interfere with each other because their time constants are entirely different. Excitation voltage control is fast acting, the major time constant encountered being that of the generator field, while power-frequency control is slow acting, with the major time constant contributed by the turbine and the generator moment of inertia; this time constant is much larger than that of the generator field. Thus the transients in excitation voltage control vanish much faster and do not affect the dynamics of power-frequency control.
To understand the load frequency
control problem, a single turbo-generator
system supplying an isolated load is
considered.
Turbine speed governing
system:
The system consists of the following
components:
Fly ball speed governor:
This is the heart of the system
which senses the change in speed. As the
speed increases the fly balls move
outwards and the point B on the linkage mechanism moves downwards. The
reverse happens when the speed
decreases.
Hydraulic amplifier:
It comprises a pilot valve and main piston arrangement. Low power level
pilot valve movement is converted into
high power level piston valve
movement. This is necessary in order to
open or close the steam valve against
high pressure steam.
Linkage mechanism:
ABC is a rigid link pivoted at B
and CDE is another rigid link pivoted at
D. This link mechanism provides a
movement to the control valve in
proportion to the change in speed. It also provides a feedback from the steam valve movement.
Speed changer:
It provides a steady-state power output setting for the turbine. Its downward movement
opens the upper pilot valve so that more
steam is admitted to the turbine under
steady conditions. The reverse happens
for upward movement of speed changer.
Model of speed governing system:
R - regulation (speed droop) of the speed governor
τ_g (Tg) - time constant of the speed governor
ΔP_g(s) = ΔP_ref(s) - (1/R) ΔΩ(s)
ΔP_V(s) = ΔP_g(s) / (1 + τ_g s)
Turbine model:
G_T(s) = ΔP_m(s) / ΔP_V(s) = 1 / (1 + τ_T s)
Generator model:
ΔΩ(s) = [ΔP_m(s) - ΔP_e(s)] / (2Hs)
Load model:
The speed-load characteristic of a composite load is approximated by
ΔP_e = ΔP_L + D ΔΩ
where ΔP_L is the non-frequency-sensitive load change, D ΔΩ is the frequency-sensitive load change, and D is expressed as the percent change in load divided by the percent change in frequency.
Generator and load block diagram
Load frequency control block
diagram of an isolated power
system
Example to find the speed regulation of the governor for control system stability:
Consider an isolated power station with
the following parameters
Turbine time constant τ_T = 0.5 sec
Governor time constant τ_g = 0.2 sec
Generator inertia constant H = 5 sec
Governor speed regulation = R per unit
The load varies by 0.8 percent for a 1
percent change in frequency, i.e., D=0.8
Aim: To find the speed regulation of the governor
A) using the Routh-Hurwitz array to find the range of R for control system stability
B) using the root locus to find the range of R
The result, i.e. the value of R, obtained by these two methods is then compared.
Step 1: Substituting the system parameters in the LFC block diagram results in the block diagram shown.
Step 2:
The open-loop transfer function is
KG(s)H(s) = 1 / [R (2Hs + D)(1 + τ_g s)(1 + τ_T s)]
and the frequency deviation due to a load change ΔP_L(s) is
ΔΩ(s) = -ΔP_L(s) (1 + τ_g s)(1 + τ_T s) / [(2Hs + D)(1 + τ_g s)(1 + τ_T s) + 1/R]
The steady-state frequency deviation is
Δω_ss = lim(s→0) s ΔΩ(s) = -ΔP_L / (D + 1/R)
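The block diagram can also be assembled numerically; the following is a minimal sketch (not part of the paper) using the example parameters and the MATLAB Control System Toolbox function tf, whose product reproduces the open-loop denominator used in the root-locus commands of Step 5:

% Build the open-loop transfer function from the individual blocks
tauT = 0.5; taug = 0.2; H = 5; D = 0.8;
Ggov  = tf(1, [taug 1]);     % governor, 1/(1 + taug*s)
Gturb = tf(1, [tauT 1]);     % turbine, 1/(1 + tauT*s)
Ggen  = tf(1, [2*H D]);      % generator and load, 1/(2Hs + D)
Lol = Ggov*Gturb*Ggen        % denominator: s^3 + 7.08s^2 + 10.56s + 0.8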
Step 3:
The characteristic equation is given by
1 + KG(s)H(s) = 1 + K / (s^3 + 7.08s^2 + 10.56s + 0.8) = 0
This results in the characteristic polynomial equation
s^3 + 7.08s^2 + 10.56s + 0.8 + K = 0
The Routh-Hurwitz array for this polynomial is then
s^3 :  1                   10.56
s^2 :  7.08                0.8 + K
s^1 :  (73.965 - K)/7.08   0
s^0 :  0.8 + K
From the s^1 row, we see that for control system stability K must be less than 73.965. Also, from the s^0 row, K must be greater than -0.8. Thus, with positive values of K, for control system stability
K < 73.965
Since R = 1/K, for control system stability the governor speed regulation must be
R > 1/73.965, or R > 0.0135
Step 4:
For K = 73.965, the auxiliary equation from the s^2 row is
7.08s^2 + 74.765 = 0
or s = ±j3.25. That is, for R = 0.0135 we have a pair of conjugate poles on the jω axis, and the control system is marginally stable.
Step 5:
B) Finding the range of R using the root locus:
For this, the open-loop transfer function
KG(s)H(s) = K / [(10s + 0.8)(1 + 0.2s)(1 + 0.5s)] = K / (s^3 + 7.08s^2 + 10.56s + 0.8), where K = 1/R
is considered. To obtain the root locus, we use the following commands:
num = 1;
den = [1 7.08 10.56 .8];
figure(1), rlocus(num, den);
The result is shown in the figure. The loci intersect the jω axis at s = ±j3.25. Substituting this value of s in the characteristic equation gives K = 73.965.
Thus, the system is marginally stable for
R=1/73.965=0.0135
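As a quick numerical confirmation (not in the original paper), the closed-loop poles at the marginal gain can be computed directly from the characteristic polynomial; two poles lie on the jω axis at about ±j3.25 and the third at about -7.08:

% Closed-loop poles at the marginal gain K = 73.965
K = 73.965;
roots([1 7.08 10.56 0.8 + K])   % approximately -7.08 and +/-j3.25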
Importance of AGC:
With the speed governing system installed on each machine, the steady load frequency characteristic for a given speed changer setting has considerable droop from no load to full load. System frequency specifications are rather stringent and, therefore, so much change in frequency cannot be tolerated. In fact, it is expected that the steady change in frequency will be zero. While the steady-state frequency can be brought back to the scheduled value by adjusting the speed changer setting, the system could undergo intolerable dynamic frequency changes with changes in load. This leads to the natural suggestion that the speed changer setting be adjusted automatically by monitoring the frequency changes. For this purpose, a signal from the frequency deviation Δf is fed through an integrator to the speed changer, resulting in the block diagram shown. The system now modifies to a proportional plus integral controller, which gives zero steady-state frequency error, i.e. Δf_ss = 0.
Example to compare the frequency deviation step response with and without AGC:
Consider that the governor speed regulation is set to R = 0.05 per unit. The turbine rated output is 250 MW at a nominal frequency of 60 Hz. A sudden load change of 50 MW (ΔP_L = 0.2 per unit) occurs.
Aim: To obtain the frequency deviation step responses for the two cases, i.e. with and without AGC. MATLAB commands are used to obtain these.
Without AGC:
PL = 0.2; num = [0.1 0.7 1];
den = [1 7.08 10.56 20.8];
t = 0:.02:10; c = -PL*step(num, den, t);
figure(2), plot(t, c), xlabel('t, sec'),
ylabel('pu'),
title('Frequency deviation step response'), grid
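As a cross-check (not part of the original listing), the steady-state deviation predicted by Δω_ss = -ΔP_L/(D + 1/R) can be evaluated directly for these parameters:

% Steady-state frequency deviation without AGC
D = 0.8; R = 0.05; PL = 0.2;
dw_ss = -PL/(D + 1/R)   % about -0.0096 pu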
With AGC:
The LFC system is equipped with the secondary integral control loop for automatic generation control. The frequency deviation step response is obtained for a sudden load change of ΔP_L = 0.2 pu. The integral controller gain is set to Ki = 7.
Step 1:
Substituting for the system parameters, with the speed regulation adjusted to R = 0.05 pu, results in the following closed-loop transfer function:
T(s) = (0.1s^3 + 0.7s^2 + s) / (s^4 + 7.08s^3 + 10.56s^2 + 20.8s + 7)
Step 2: To find the step response, we use the following commands:
pl=0.2;
ki=7;
num=[0.1 0.7 1 0];
den=[1 7.08 10.56 20.8 ki];
t=0:.02:12;
c=-pl*step(num,den,t);
plot(t,c);
xlabel('t,sec');
ylabel('pu');
title('frequency deviation step response');
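A small cross-check (not in the original listing): the DC gain of this closed-loop transfer function is zero, which is another way of seeing that integral control removes the steady-state frequency deviation; this assumes the Control System Toolbox (tf, dcgain) is available:

% DC gain of the closed-loop transfer function with AGC
dcgain(tf([0.1 0.7 1 0], [1 7.08 10.56 20.8 7]))   % returns 0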
Results and conclusion:
- The speed regulation of the governor required for control system stability is calculated using the Routh-Hurwitz array and the root locus, and the results are compared:
                    R-H array           Root locus
Speed regulation    R = 0.0135          R = 0.0135
Stability           Marginally stable   Marginally stable
- Frequency deviation step responses are obtained for the two cases, i.e. with and without AGC. From the step response with integral control we observe that the steady-state frequency deviation Δω_ss is zero.
Gooty
THE PRESENTATION
ON
PRESENTED
BY
B.MALLIKARJUNA M.MANJUNATH
III-II Sem EEE III-II Sem EEE
Email: [email protected] Email: [email protected]
Ph No: +919000465030 Ph No: +919701436235
Abstract
Nowadays non-conventional energy sources are the backbone of our country's energy planning. In the last sixty years the world's energy needs have increased by 800% and they are likely to increase further, while conventional energy is likely to get exhausted within the next 50 years. Such energy sources may be employed in solar systems and in stationary, environmental and portable applications. Energy in water can be harnessed and used, in the form of motive energy or temperature differences. Since water is about a thousand times heavier than air, even a slow flowing stream of water can yield great amounts of energy.
Geothermal energy is a very clean source of power. It comes from radioactive decay in the core of the Earth, which heats the Earth from the inside out, and thus energy/power can be extracted owing to the temperature difference between hot rock deep in the earth and the relatively cool surface air and water. This requires that the hot rock be relatively shallow, so it is site-specific and can only be applied in geologically active areas. Non-conventional energy sources used for portable power applications, such as laptop computers, are small, lightweight, low-temperature, and easy to refill or recharge. Consequently such sources of energy can be termed the energy of the future.
1. Introduction
To meet the future energy demands and to give quality and pollution-free supply to the growing and today's environment-conscious population, the present world attention is to go in for natural, clean and renewable energy sources. These energy sources capture their energy from on-going natural processes, such as geothermal heat flows, sunshine, wind, flowing water and biological processes. Most renewable forms of energy, other than geothermal and tidal power, ultimately come from the Sun. Some forms of energy, such as rainfall and wind power, are considered short-term energy storage, whereas the energy in biomass is accumulated over a period of months, as in straw, and through many years as in wood. Fossil fuels too are theoretically renewable but on a very long timescale, and if they continue to be exploited at present rates these resources may deplete in the near future. Therefore, in reality, renewable energy is energy from a source that is replaced rapidly by a natural process and is not subject to depletion on a human timescale.
Renewable energy resources may be used directly, such as solar ovens, geothermal heating, and water and windmills, or indirectly by transforming them into other more convenient forms of energy such as electricity generation through wind turbines or photovoltaic cells, or production of fuels (ethanol etc.) from biomass.
2. RENEWABLE ENERGY UTILIZATION STATUS IN THE
WORLD
3. BRIEF DESCRIPTION OF NON-CONVENTIONAL ENERGY
RESOURCES
3.1 Solar Energy
Most renewable energy is ultimately "solar energy", directly collected from sunlight. Energy is released by the Sun as electromagnetic waves. This energy reaching the earth's atmosphere consists of about 8% UV radiation, 46% visible light and 46% infrared radiation. Solar energy storage is as per the figure given below.
Solar energy can be used in two ways:
Solar heating.
Solar electricity.
Solar heating captures and concentrates the sun's energy for heating buildings and for cooking/heating foodstuffs etc. Solar electricity is mainly produced by using photovoltaic solar cells, which are made of semi-conducting materials that directly convert sunlight into electricity. Obviously the sun does not provide constant energy to any spot on the Earth, so its use is limited. Therefore, solar cells are often used to charge batteries, which serve either as a secondary energy source or for applications of intermittent use such as night lighting or water pumping. A solar power plant offers a good option for electrification in areas of disadvantageous location such as hilly regions, forests, deserts, and islands where other resources are neither available nor exploitable in a techno-economically viable manner. MNES has identified 18,000 such villages to be electrified through non-conventional sources.
India is a vast country with an area of over 3.2 million sq. km. Most parts of the country have about 250-300 sunny days. Thus there is tremendous solar potential.
140 MW solar thermal/naphtha hybrid power plants with 35 MW solar trough components will be constructed in Rajasthan, raising India to the 2nd position in the world in utilization of solar thermal energy.
Grid-interactive solar photovoltaic power projects aggregating to 2490 kW have so far been installed and other projects of 800 kW capacity are under installation.
3.2 Wind Energy
The origin of wind energy is the sun. When sun rays fall on the earth, its surface gets heated up unevenly and as a consequence winds are formed. Kinetic energy in the wind can be used to run wind turbines, but the output power depends on the wind speed. Turbines generally require a wind speed in the range of 5.5 m/s (20 km/h). In practice relatively few land areas have significant prevailing winds. Even so, wind power is one of the most cost-competitive renewables today; it has been the most rapidly growing means of electricity generation at the turn of the 21st century and provides a complement to large-scale base-load power stations. Its long-term technical potential is believed to be 5 times current global energy consumption or 40 times current electricity demand.
India now has the 5th largest wind power installed capacity in the world, at 3595 MW. The estimated gross wind potential in India is 45,000 MW.
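To illustrate why the output power depends so strongly on wind speed, the following is a minimal sketch (not from the paper) of the standard relation P = 0.5*rho*A*v^3*Cp, where rho is air density, A the rotor swept area, v the wind speed and Cp the power coefficient; the rotor diameter and Cp are assumed values:

% Illustrative wind turbine output, P = 0.5*rho*A*v^3*Cp
rho = 1.225;                  % air density, kg/m^3
Dr  = 50;                     % assumed rotor diameter, m
A   = pi*Dr^2/4;              % swept area, m^2
Cp  = 0.4;                    % assumed power coefficient
v   = [5.5 8 11];             % wind speeds, m/s
P_kW = 0.5*rho*A*v.^3*Cp/1e3  % output power in kW; roughly 80, 250, 640 kW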
3.3 Water Power
Energy in water can be harnessed and used, in the form of motive energy or temperature differences. Since water is about a thousand times heavier than air, even a slow flowing stream of water can yield great amounts of energy.
There are many forms:
Hydroelectric energy, a term usually reserved for hydroelectric dams.
Tidal power, which captures energy from the tides in the horizontal direction. Tides come in, raise water levels in a basin, and roll out; the water is made to pass through a turbine on its way out of the basin. Power generation through this method has had a varying degree of success.
Wave power, which uses the energy in waves. The waves usually make large pontoons go up and down in the water. Wave power is also hard to tap.
Hydroelectric energy is therefore the only widely viable option. However, even this option is largely not available to the developed nations for future energy production, because most major sites within these nations with the potential for harnessing gravity in this way are either already being exploited or are unavailable for other reasons such as environmental considerations. On the other side, a large hydro potential of millions of megawatts is available with the developing countries of the world, but the major bottleneck in the way of development of these large hydro projects is that each site calls for huge investment.
3.4 Geothermal Energy
Geothermal energy is a very clean source of power. It comes from radioactive decay in the core of the Earth, which heats the Earth from the inside out, and thus energy/power can be extracted owing to the temperature difference between hot rock deep in the earth and relatively cool surface air and water. This requires that the hot rock be relatively shallow, so it is site-specific and can only be applied in geologically active areas.
It can be used in two ways:
Geothermal heating
Geothermal electricity
As stated above, the geothermal energy from the core of the Earth is closer to the surface in some areas than in others. Where hot underground steam or water can be tapped and brought to the surface, it may be used directly to heat and cool buildings, or indirectly it can be used to generate electricity by running steam/gas turbines. Even otherwise, on most of the globe the temperature of the crust a few feet below the surface is buffered to a constant 7-14 degrees Celsius, so a liquid can be pre-heated or pre-cooled in underground pipelines, providing free cooling in the summer and heating in the winter by using a heat pump.
3.5 BIOMASS
3.5.1 Solid Biomass
Plants use photosynthesis to store solar energy in the form of chemical energy. The easiest way to release this energy is by burning the dried up plants. Solid biomass such as firewood or combustible field crops, including dried manure, is actually burnt to heat water and to drive turbines. Field crops may be grown specifically for combustion or may be used for other purposes, and the processed plant waste then used for combustion. Most sorts of biomass, including sugarcane residue, wheat chaff, corn cobs and other plant matter, can be, and are, burnt quite successfully. Currently, biomass contributes 15% of the total energy supply worldwide.
A drawback is that all biomass needs to go through some of these steps: it needs to be grown, collected, dried, fermented and burned. All of these steps require resources and an infrastructure.
In the area of small-scale biomass gasification, significant technology development work has made India a world leader.
A total capacity of 55.105 MW has so far been installed, mainly for stand-alone applications.
A 5 x 100 kW biomass gasifier installation on Gosaba Island in the Sunderbans area of West Bengal is being successfully run on a commercial basis to provide electricity to the inhabitants of the island through a local grid.
A 4 x 250 kW (1.00 MW) biomass gasifier based project has recently been commissioned at Khtrichera, Tripura for village electrification.
A 500 kW grid-interactive biomass gasifier, linked to an energy plantation, has been commissioned under a demonstration project.
3.5.2 Biofuel
Biofuel is any fuel that derives from biomass - recently living organisms or their metabolic byproducts, such as manure from cows. Typically biofuel is burned to release its stored chemical energy. Biomass can be used directly as fuel or to produce liquid biofuel. Agriculturally produced biomass fuels, such as biodiesel, ethanol, and bagasse (often a by-product of sugarcane cultivation), can be burned in internal combustion engines or boilers.
India is the largest producer of cane sugar and the Ministry is implementing the world's largest co-generation programme in the sugar mills.
India has so far commissioned a capacity of 537 MW through bagasse-based co-generation in sugar mills and 536 MW is under installation.
It has an established potential of 3,500 MW of power generation.
3.5.3 Biogas
Biogas can easily be produced from current waste streams, such as paper production, sugar production, sewage, animal waste and so forth. These various waste streams have to be slurried together and allowed to ferment naturally, producing 55% to 70% flammable methane gas. India has the world's largest cattle population, 400 million, thus offering tremendous potential for biogas plants. Biogas production has the capacity to provide us with about half of our energy needs, either burned for electricity production or piped into current gas lines for use. It just has to be done and made a priority. Though about 3.71 million biogas plants were in successful operation in India up to March 2003, this still utilizes only 31% of the total estimated potential of 12 million plants. The payback period of the biogas plants is only 2-3 years, and in the case of community and institutional biogas plants it is even less. Therefore biogas electrification at community/Panchayat level is required to be implemented.
4. CUMULATIVE ACHIEVEMENTS OF RENEWABLE ENERGY IN
INDIA
5. Jalkheri Power Plant
Punjab State Electricity Board (PSEB) took the first step to exploit non-conventional energy sources when a 10 MW plant was set up in village Jalkheri (Distt. Patiala) in 1991. This was a demonstration unit wholly designed and manufactured by BHEL, India. It is basically a mini thermal plant which uses biomass as fuel instead of coal for releasing heat energy. The heat so liberated goes into water, which is converted into superheated steam. The steam is then used to rotate the steam turbine; thus heat energy is converted into the kinetic energy of rotation. The turbine is on the same shaft as the generator, so this kinetic energy is converted into electrical energy by the generator. The generation voltage is 11 kV, which is stepped up to 66 kV for linking the plant to the PSEB transmission network.
In order to ensure adequate raw material for the plant, a consortium of the following type has been formed:
The requirement of water for the plant is met from a nearby canal.
Though there was no dearth of crop residue as fuel, initial difficulties in arranging biomass at site and some drawbacks in the plant forced its shutdown. Thereafter, modifications/improvements were carried out in the plant.
Two major modifications carried out were:
i) The conveyor system for feeding fuel to the furnace was entirely changed so that any type of biomass may be used as fuel in the plant. Accordingly, rice/wheat straw, mustard straw, rice husk, saw dust, cotton waste, bagasse and tree chips, i.e. any conceivable biomass, can be used as fuel.
ii) Automation of the plant was carried out, which enables the handling of all main controls and the monitoring of all performance parameters from a single computer so as to obtain optimum generation.
The plant was recommissioned in September 2001 and is now being run by a private entrepreneur on a leasehold basis, and it is now running quite satisfactorily. The following table of fuel consumption and generation in the current financial year gives an idea of the plant's present performance:
PSEB is currently purchasing power @ Rs. 3.66 per unit from the supplier under an agreement. No doubt renewable supplies generally have higher costs than fossil fuels if the externalized costs of pollution are ignored, as is common. But with further R&D, the generation cost is bound to come down.
The automation of the plant has facilitated the monitoring and control of the plant from a remote location. The logic control shown below has all the necessary commands. One can control the governor to regulate the steam into the turbine, the air supply and furnace draught can be changed, and in case of a fault in any equipment such as pumps etc., the standby can also be selected while sitting before the computer screen.
6. ISSUES
6.1 Habitat Hazards
Some renewable energy systems entail unique environmental problems. For instance, wind turbines can be hazardous to flying birds, while hydroelectric dams can create barriers for migrating fish. Burning biomass and biofuels causes air pollution similar to that of burning fossil fuels, although it causes a lower greenhouse effect since the carbon placed in the atmosphere was already there before the plants were grown.
6.2 Proximity to Demand
Significant resources are often located at a distance from the major population centers where electricity demand exists. Exploiting such resources on a large scale is likely to require considerable investment in transmission and distribution networks as well as in the technology itself.
6.3 Availability
One recurring criticism of renewable sources is their intermittent nature. Solar energy, for example, can only be expected to be available during the day (50% of the time). Wind energy intensity varies from place to place and somewhat from season to season. A constant stream of water is often not available throughout the year for generating optimum hydro power.
CONCLUSION
Keeping in view the reserves of fossil fuels and economic concerns, these fuels are likely to dominate the world primary energy supply for another decade, but environmental scientists have warned that if the present trend is not checked then by 2100 the average temperature around the globe will rise by 1.4 to 5.8 degrees Celsius, which will cause an upsurge in sea water levels, drowning all lands at low elevation along the coast lines. So the world has already made a beginning to bring about the infrastructural changes in the energy sector so as to be able to choose the renewable energy development trajectory. In developing countries, where a lot of new energy production capacity is to be added, the rapid increase of renewables is, in principle, easier than in the industrial countries, where existing capacity would need to be converted if a rapid change were to take place. That is, developing countries could have the competitive advantage for driving the world market. However, strong participation of developed countries is needed, since the majority of energy technologies in use in developing countries have been developed and commercialized in developed countries first. Nevertheless, India must give more thrust to research and development in the field of non-conventional energy sources, not only to mitigate the greenhouse effect but also to lessen dependence on oil/gas imports, which consume a major chunk of the foreign exchange reserve. Last but not the least, it is for the citizens also to believe in the power of renewable energy sources, and to understand its necessity and importance.
POWER QUALITY
AND
VOLTAGE STABILITY
C.vasavi A.sandhya
III year EEE III year EEE
E-mail ID:[email protected] E-mail ID: [email protected]
Phone number: 9966197269
9491202519
POWER QUALITY AND VOLTAGE
STABILITY
- A Growing concern For All
ABSTRACT
Power quality has become a great concern for both energy suppliers and their customers, due to the increasing use of sensitive devices and the significant consequences of poor power quality for companies. Technological advancements in the electronics field have resulted in sophisticated equipment. This equipment is highly sensitive to poor power quality and requires a reliable supply, free from power quality issues such as voltage sag, voltage swell, surges, harmonics, flicker, and voltage imbalance. The term "power quality" has been used to describe the voltage, current, and frequency of the power.
Nowadays a large amount of equipment has been added to the power system, such as solid-state converters of active power using thyristors feeding loads like adjustable speed drives. Being nonlinear loads, these converters draw harmonic and reactive power components of current from the mains. In three-phase systems they can cause unbalance and draw excessive neutral currents. These problems cause low system efficiency and poor power factor. The main objective of this paper is the concept of harmonics, their generation, the problems created by them, and HARMONIC FILTRATION as a solution to these problems.
INTRODUCTION
POWER QUALITY is defined as the concept of powering and grounding electronic equipment in a manner that is suitable to the operation of that equipment.
Power quality has become a strategic issue for the following reasons:
The economic necessity for businesses to increase their competitiveness.
The widespread use of equipment which is sensitive to voltage disturbances and/or generates disturbances itself.
The deregulation of the electricity market.
Power quality correction and harmonic filtering systems give a solution to the problems of harmonic disturbances and voltage fluctuations.
MAIN POWER QUALITY
DISTURBANCES:
Power quality involves characterizing low frequency conducted electromagnetic disturbances, which can be ranked in different categories.
Voltage sag:-
Voltage sag is a sudden reduction (between 10% and 90%) of the voltage magnitude at a point in the electric system, lasting from 0.5 cycles to a few seconds. Either a switching operation or any type of fault, as well as the fault clearing process, can cause a voltage dip. Switching events, like those associated with the starting of large motor loads, are the most common. These events may originate at the utility side or at the customer site.
Voltage dips and interruptions:-
A voltage dip is a sudden decrease of voltage followed by voltage recovery after a short period of time, from a few cycles to a few seconds.
An interruption is a special type of voltage dip in which the voltage falls to typically within 1-10% of the reference voltage.
Voltage dips and short interruptions are mainly caused by faults on the transmission or distribution system or in the installation itself, and by the switching of large loads.
Voltage variations and fluctuations:-
Voltage variations are variations in the rms value or peak value of the amplitude of less than 10% of the nominal voltage. They are series of voltage changes characterized by variations in frequency and magnitude. Voltage variations are caused by slow variations of the loads connected to the network and mainly by rapidly varying industrial loads such as welding machines. Voltage fluctuations are systematic variations of the voltage envelope or a series of random voltage changes.
DEFINING THE PROBLEM:-
Harmonics are currents or voltages with frequencies that are integer multiples of the fundamental power frequency. The fundamental itself is called the first harmonic; the second harmonic has twice the fundamental frequency, the third harmonic three times the fundamental frequency, and so on. For example, if the fundamental frequency is 50 Hz, then the second harmonic is 100 Hz and the third harmonic is 150 Hz.
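As an illustration of these definitions (not part of the paper), a short MATLAB sketch that synthesizes a 50 Hz current containing assumed 3rd and 5th harmonic components and computes the total harmonic distortion from the harmonic amplitudes:

% Synthesize a distorted 50 Hz current and compute its THD
f1 = 50; t = 0:1e-4:0.2;
i1 = 1.0; i3 = 0.20; i5 = 0.10;   % assumed harmonic amplitudes, pu
i  = i1*sin(2*pi*f1*t) + i3*sin(2*pi*3*f1*t) + i5*sin(2*pi*5*f1*t);
THD = sqrt(i3^2 + i5^2)/i1*100    % about 22.4 percent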
GENERATION OF HARMONICS:-
Harmonics are created by non-linear loads that draw current in abrupt pulses rather than in a smooth sinusoidal manner. These pulses cause distorted sine wave shapes, which in turn cause harmonic currents to flow back into other parts of the power system.
CONSUMER GENERATING
HARMONICS:-
Harmonics are not generated by power generators but are produced by non-linear loads such as:
1. Loads that make use of semiconductor devices like transistors and thyristors, i.e. static rectifiers (AC/DC conversion using SCRs), static frequency converters and static inverters, including:
Static power converters
Static rectifiers
Static frequency converters
Static uninterrupted power supplies
Static induction regulators.
2. Variable impedance loads using electric arcs: arc furnaces, welding units, fluorescent tubes, discharge lamps, light/brightness control, etc.
3. Loads using strong magnetizing currents: saturated transformers, inductances, furnaces, reactors, etc.
4. Office automation equipment like computers, UPS units, printers and fax machines.
PROBLEMS DUE TO LOW POWER QUALITY
1. Motors fail to start due to low voltage.
2. Setting and resetting of electronic equipment goes out of control.
3. Industrial and domestic loads get damaged.
COMMON POWER DISTURBANCES
Common power quality disturbances include surges, spikes and sags in the power source voltage and harmonic noise on the power line. Each of these occurrences is discussed briefly below.
1. OVERVOLTAGE: An overvoltage is an increase in the rms AC voltage to greater than 110% at the power frequency for a duration longer than one minute.
2. UNDERVOLTAGE: An undervoltage is a decrease in the rms AC voltage to less than 90% at the power frequency for a duration longer than one minute.
3. HARMONICS: Harmonics are sinusoidal voltages or currents having frequencies that are integer multiples of the frequency at which the supply system is designed to operate.
4. INTERHARMONICS: Voltages or currents having frequencies that are not integer multiples of the frequency at which the supply system is designed to operate.
5. OUTAGE: Total loss of power for some period of time. Outages are caused by excessive demands on the power system, lightning strikes and accidental damage to power lines.
6. SPIKE: An extremely high and nearly instantaneous increase in voltage with a very short duration measured in microseconds. Spikes are often caused by lightning or by events such as power coming back on after an outage. A spike can damage or destroy sensitive electronic equipment; turn the equipment off during a power outage, wait a few minutes after power is restored before turning it on, and then turn on one device at a time.
7. SAG: A sag is defined as a decrease to between 0.1 and 0.9 pu in rms voltage or current at power frequency for durations from 0.5 cycles to 1 minute (a sketch classifying such events from magnitude and duration is given after this list). A sag is typically caused by the simultaneous high power demand of many electrical devices such as motors, compressors and so on. The effect of a sag is to starve electronic equipment of power.
8. NOISE: Noise is defined as unwanted electrical signals with broadband spectral content below 200 kHz superimposed upon the power system voltage or current in phase conductors or signal lines.
9. VOLTAGE FLICKER: Voltage flicker refers to voltage fluctuations caused by repeated changes in the customer load voltage.
10. POWER FREQUENCY VARIATIONS: Power frequency variations are defined as deviations of the power system fundamental frequency from its normal value.
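The sketch referred to in item 7, a minimal illustration (not from the paper) that classifies an rms voltage event from its retained magnitude in pu and its duration in seconds, using the thresholds quoted in this list and in the sag/swell section; save it as classify_event.m:

% Classify an rms voltage event, e.g. classify_event(0.7, 0.2) -> 'voltage sag'
function label = classify_event(vpu, dur_s)
  if vpu < 0.1
      label = 'interruption/outage';
  elseif vpu < 0.9
      if dur_s <= 60, label = 'voltage sag'; else, label = 'undervoltage'; end
  elseif vpu > 1.1
      if dur_s <= 60, label = 'voltage swell'; else, label = 'overvoltage'; end
  else
      label = 'normal';
  end
end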
HARMONICS:
Power system harmonics form the latest parameter reflecting power quality. Harmonic currents are generated by loads with non-linear characteristics. The distorted current drawn by a non-linear device interacts with the source impedance of the circuit and results in a distorted voltage waveform as well. The distorted voltage waveform causes harmonic currents to be drawn by other loads, transformers, motors and capacitors connected to the same supply system. The extraneous harmonic current imposed on the normal current is particularly harmful to transformers, motors and capacitors because of the extra heating these currents produce, which can weaken the electrical insulation and result in destructive failure. Even those non-linear devices that produce distorted voltage waveforms can fail to operate properly if the distortion of the voltage waveform exceeds limits.
HARMONICS CAUSE:
1. Transformers to overheat
2. Neutrals to overheat
3. Transformer secondary voltage distortion
4. Power system losses to increase
5. Telephone and other communication noise
6. Watt-hour meters to read high or low
7. Protective relays to fail and capacitors to explode
VOLTAGE SAG & SWELL, UNDERVOLTAGE, OVERVOLTAGE, VOLTAGE IMBALANCE:
Voltage sag and swell and undervoltage/overvoltage are identified by the voltage magnitude, usually in % of the rated voltage, and the duration. A voltage sag is a reduction in the rms magnitude of the voltage to 10% to 90% of nominal, with a duration from 0.5 cycles to 1 minute. If the duration is greater than 1 minute it is considered an undervoltage. Precision manufacturing processes and sophisticated electric tools can be adversely affected by a voltage sag of just 2 or 3 cycles.
Asymmetrical loads cause unbalanced voltages. Conventional electric furnace loads, steel rolling mill loads and 2-phase loads like welding machines cause voltage fluctuations, voltage imbalance and flicker. Voltage imbalance leads to unwanted heating of machine windings (heating due to third harmonic currents), resulting in considerable damage to three-phase motors used in industries. Voltage and frequency levels affect the performance of lighting, sensitive process equipment, computer loads, frequency sensitive devices like synchronous motors and other domestic appliances such as refrigerators and washing machines. Low voltage causes frequent burn-out of motors used in agriculture and industry.
SOLUTION TO HARMONIC DISTORTIONS:
a) Harmonic filtration: harmonics in AC & DC waveforms are minimized by the following means:
1. Use of a DC smoothing reactor
2. Use of DC harmonic filters
3. Use of AC harmonic filters
1. DC smoothing reactor: this is an oil-cooled, oil-insulated reactor having a high inductance (0.35 H to 1 H). It is connected in series on the DC side of the converter. It smooths the ripple in the DC current. The DC reactor also helps in reducing the rate of rise of current surges appearing on the DC side of the converter due to sudden changes in DC power flow caused by faults or load changes.
2. AC filters: these are shunt-connected AC harmonic filters. They are connected between the AC busbars. They offer low impedance to harmonic frequencies and high impedance to the power frequency. Thus harmonic frequencies are passed to earth and are eliminated from the AC network.
AC shunt filters serve a dual purpose on the AC side:
1. They divert harmonics to earth and reduce the harmonic content in the main AC network.
2. They provide the shunt compensation required on the AC side for satisfactory converter operation.
CLASSIFICATION OF AC SHUNT FILTERS:
1. Tuned AC shunt filters are used for suppressing lower order characteristic harmonics, e.g. separate tuned branches for the 5th, 7th, 11th and 13th harmonics. These branches may be either single-tuned for each of the above characteristic frequencies or double-tuned, each branch covering two frequencies, e.g. 5th/7th, 11th/13th, 23rd/25th, etc. Tuned AC shunt filters are classified into two types:
a) A single frequency tuned shunt filter is a single RLC series circuit connected between phase and earth. The filter is tuned to one particular characteristic harmonic frequency, so separate branches are necessary for each lower characteristic harmonic frequency. The filter is tuned so that its resonant frequency equals the corresponding characteristic harmonic order.
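As a small illustration (not from the paper) of how such a branch is tuned: once a capacitance is chosen, the series inductance follows from the resonance condition fn = 1/(2*pi*sqrt(L*C)) at the target harmonic; the capacitance value below is an assumed example:

% Tune a single RLC shunt branch to the 5th harmonic of a 50 Hz system
f1 = 50; h = 5;               % fundamental frequency and target harmonic
C  = 50e-6;                   % assumed filter capacitance, F
L  = 1/((2*pi*h*f1)^2*C);     % inductance for series resonance at h*f1
fn = 1/(2*pi*sqrt(L*C))       % check: about 250 Hz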
b) A double frequency tuned shunt filter has two resonant frequencies, e.g. 5th/7th or 11th/13th. A typical circuit of a double-tuned AC shunt filter consists of an RLC series circuit in series with a parallel circuit (L2, C2 and R3, L3). The values are selected such that the total filter has two resonant frequencies.
2) High pass filter: these branches are arranged to suppress harmonics of higher order, say the 24th and above. With a high pass shunt harmonic filter connected near the AC busbars, all the higher harmonics are diverted to earth and are not allowed to proceed into the AC network. At its resonant frequencies, the circuit of the double-tuned filter is equivalent to two parallel single-tuned branches; a double frequency tuned filter has low impedance at its two resonant frequencies. These filters are also called damped filters.
3) DC harmonic filters
DC filters are designed to reduce DC-side harmonics so as to minimize telephone interference in the voice frequency range (roughly 100 Hz to 5 kHz). In addition to the telephone interference criterion, the DC filter is designed with the intention of avoiding resonance between the DC line and the lower order harmonics.
A) Active filtration:
1) How does an active filter work?
By monitoring the load/source current, an active filter can generate the required harmonic currents for the load, leaving the source to provide only the fundamental sinusoidal component in phase with the voltage, as shown in the figure. The job of the active filter is to extract the compensation current from the non-linear load current and then to inject this compensation current to suppress the harmonics and the reactive power component of the load current.
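A minimal sketch of this idea (not from the paper): estimate the fundamental component of a sampled load current by correlating it with 50 Hz sine and cosine templates over whole cycles, and take the remainder as the compensation current to be injected; the load current below is an assumed example:

% Compensation current = load current - fundamental component
f1 = 50; fs = 10e3; t = 0:1/fs:0.2-1/fs;          % 10 cycles at 50 Hz
iload = 10*sin(2*pi*f1*t) + 3*sin(2*pi*5*f1*t);   % assumed distorted load current
s = sin(2*pi*f1*t); c = cos(2*pi*f1*t);
a = 2*mean(iload.*s); b = 2*mean(iload.*c);       % fundamental Fourier coefficients
ifund = a*s + b*c;                                % fundamental component
icomp = iload - ifund;                            % current to be injected by the filter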
2. Basic power circuit
The power circuit of the active power filter is shown in the figure. It consists of three main parts: a single-phase bridge converter, a DC bus capacitor and a filter inductor. The converter is used to supply the desired compensation and charging power. The DC bus capacitor is used to reduce the voltage fluctuation under load variation. The filter inductor is used to smooth the compensation current supplied by the converter.
a. Bridge converter: the converter used in the active power filter is a full H-bridge converter. The control strategy used is unipolar or bipolar PWM, and the switching devices used may be IGBTs, GTOs, power MOSFETs etc. It supplies real power to the DC bus of the converter to maintain a constant DC voltage and generates the compensation current to compensate the load current. It is operated as a voltage source inverter that converts the DC voltage on the energy storage capacitor to an AC voltage at the line. The task of the H-bridge is to provide the reactive and harmonic current required by the non-linear load, so that the net current drawn from the AC mains carries only the fundamental active power.
B. Energy storage capacitor:-
The main energy storage capacitor must provide a sensibly constant DC voltage to the converter. While sizing the capacitor many factors must be considered. The capacitor ripple current at the supply harmonic and carrier ripple frequencies must be considered, and the capability of the capacitor to carry these currents must be examined. The capacitor must be capable of long-term operation at the maximum DC rail voltage with a sufficient safety margin. Banks of series- and parallel-connected capacitors are usually necessary to enable all the requirements to be met. In the steady state, the capacitor voltage should be constant from one line cycle to another.
C. Filter inductor:-
The filter inductor is used to smooth the compensation currents supplied by the converter, so that the fundamental load current remains sinusoidal. Design considerations for the filter inductor are:
1. For good dynamic response the size of the inductor must be as small as possible.
2. The inductor winding must be capable of carrying the harmonic current.
Classification of active power filters:-
Depending on the converter type, the topology used and the number of phases, APFs can be classified as follows.
Converter based classification:-
a. Current fed converter
It behaves as a non-sinusoidal current source to meet the harmonic current requirement of the load. A diode is used in series with the self-commutating device (IGBT) for reverse voltage blocking. However, the GTO-based configuration does not need the series diode, but it has a restricted switching frequency.
b. Voltage fed converter
It has a self-supported DC voltage bus with a large DC capacitor. It has become more dominant since it is lighter, cheaper and can readily be expanded in parallel to increase the combined rating. It is more popular in UPS-based applications because, in the presence of the mains, the same inverter bridge can be used as an active filter to eliminate the harmonics of critical non-linear loads.
Topology based classification:-
a. Shunt APF
The shunt APF is most widely used to eliminate current harmonics, provide reactive power compensation and balance unbalanced currents. It is mostly used at the load end because current harmonics are injected by non-linear loads. It injects an equal and opposite compensating current to cancel the harmonics and/or reactive component of the non-linear load current. It can also be used in power system networks for stabilizing and improving the voltage profile.
b. Series APF
The series APF is connected before the load, in series with the mains, using a matching transformer, to eliminate voltage harmonics and to balance and regulate the terminal voltage of the load or line. It has been used to reduce negative sequence voltage and regulate the voltage on three-phase systems. It can be installed by electric utilities to compensate voltage harmonics and to damp out harmonic propagation caused by resonance.
c. Unified power quality conditioner:-
The UPQC is a combination of active series and active shunt filters. It is considered an ideal active filter which eliminates voltage and current harmonics and is capable of giving clean power to critical and harmonic-prone loads. The DC link storage element is shared between two current-source or voltage-source bridges operating as the active series and active shunt compensators. It is used in single-phase as well as three-phase configurations. Its main drawbacks are its large cost and control complexity because of the large number of solid-state devices involved.
d. Hybrid filters:-
The hybrid filter is a combination of an active series filter and a passive shunt filter. It is quite popular because the solid-state devices used in the active series part can be of reduced size and cost, while the major part, the passive shunt LC filter, is used to eliminate the lower order harmonics. It has the capability of reducing voltage and current harmonics at reasonable cost.
Supply system based classification:-
This classification of APFs is based on the supply and/or load system. There are many non-linear loads, such as domestic appliances, connected to single-phase supply systems. Some three-phase non-linear loads are without neutral, such as adjustable speed drives fed from a three-wire supply system. There are also many non-linear single-phase loads distributed on four-wire, three-phase supply systems, such as computers and commercial lighting. Hence the APFs may also be classified accordingly as two-wire, three-wire and four-wire types.
Control strategies for active power filters:
The control strategy is one of the most important factors; the overall control action is designed at this stage. It consists of:
1. Signal conditioning: sensing of the essential voltage and current signals using PTs, CTs, Hall effect sensors and isolation amplifiers to gather accurate system information.
2. Derivation of compensating commands: compensating commands in terms of current or voltage are derived based on the control method and the APF configuration.
3. Generation of gating signals: control signals for the solid-state devices are generated by PWM, hysteresis or sliding mode current control.
Power quality monitoring
Poor power quality has become a serious problem in power systems, causing great loss of time and revenue. Hence it is necessary to monitor and measure power quality with adequate power quality monitoring devices. The field of power quality diagnostics and monitoring has matured drastically over the past years. New lower cost monitoring systems can integrate standard electrical energy monitoring information with high speed power quality data capture to provide proactive electric system information. Power quality meters track critical electric parameters such as voltage and current waveforms and harmonics to identify power quality degradation:
- voltage sags and surges
- short and long interruptions
- voltage and current distortion in percentage
- total harmonic distortion
Reactive approach to power quality monitoring:-
The reactive approach to power quality monitoring entails collecting and analyzing data, and then taking corrective measures after power quality problems have been detected. The utility should send a team of engineers equipped with power quality monitoring equipment to visit the affected industries or customer sites to perform problem analysis. If necessary, the power quality monitoring equipment should be installed in the customer premises to further identify the problems. Based upon the investigation the team should suggest the necessary mitigation techniques.
Proactive approach:-
The proactive approach entails collecting and analyzing data in such a way that one can investigate degrading trends within power system networks and take remedial action before actual problems occur. The primary function of the proactive approach to power quality monitoring is to develop a baseline describing the existing power quality level in the power system. This database will help the utility to understand the expectations of the customers regarding power quality as a function of important system design parameters. This will also help to create a database of descriptive information in a general sense.
System wide power quality monitoring:-
System wide power quality monitoring involves locating measuring equipment at strategic locations in the power system to record predetermined variations in the voltages and currents supplied to customers. The objective of system wide power quality monitoring is to establish baseline data on the levels of power quality in the system. The primary purpose of the baseline data is to investigate trends in power quality for planning and design action.
The four main considerations in system wide power quality monitoring are:
1. Systematic selection of monitoring sites
2. Identification of the right power quality equipment
3. Efficient data handling
4. Database and analysis tools
a. Choosing a monitoring location: it is better to monitor the power quality parameters as close as possible to the sensitive equipment being affected by power quality variations. It is important that the power quality monitor sees the same variations that the sensitive equipment sees; high frequency transients in particular can be significantly different if there is significant separation between the monitor and the affected equipment.
b. Identifying the right power quality equipment: it is necessary to select power quality monitoring equipment that meets the minimum criteria for the intended purpose. The following criteria should be adopted while selecting the power quality equipment:
1. Ability to perform simultaneous measurements on the minimum number of channels.
2. Ease of installation without a power shutdown.
3. Adequate backup supply with auto shutdown and restart.
c. Efficient data handling:
Since a large amount of monitored data has to be acquired, the use of an automation system in the monitoring activity is required. The network should consist of connections from the power quality monitoring equipment to a central server. It is necessary to store all the events that are captured in a database for analysis; the data are downloaded from the monitoring equipment.
Database and analysis tools:
All modern power quality monitoring equipment comes with its own database and analysis tools. Efficient software is necessary for the extraction of the right information from the huge database collected.
CONCLUSIONS:
Power quality has become a strategic issue for consumers. In the present economic context the consequences of electrical disturbances are becoming more and more serious.
However, problems of disturbance should not be regarded as insurmountable, as solutions do exist. Harmonic currents and voltage distortion are becoming the most severe and complex electrical challenge for the electrical industry, and they can be addressed by designing and providing complete solutions in the form of detuned and tuned harmonic filter systems.
By defining and implementing these solutions, users will be provided with the right quality of power supply for their requirements.
A Technical Paper Presentation on
UTILIZATION OF BIOMASS ENERGY
CONDUCTED BY :
SVU college of Engineering,
SIGMOID 2K9
SUBMITTED BY :
QIS COLLEGE OF ENGINEERING AND TECHNOLOGY
ONGOLE 523002,
PRAKASAM DISTRICT.
Miss.K.V.Satyavathi,
Ht.No:05491A0276,
B.Tech IV yr EEE,
Ph.No.9491512851,
Email ID: [email protected]
ABSTRACT :
The emergence of bio-mass and biofuels as viable alternative sources of energy is a direct consequence of unhindered exploitation of fossil fuels, leading to global energy resource imbalance. Due to the escalating prices of oil and petroleum products and the hazards coming in the way of nuclear fuel utilization, the need for harnessing biomass energy has become imperative. The diverse aspects of biomass energy utilization are presented briefly in this study.
INTRODUCTION :
Energy is the key input to drive and improve the life cycle, and the consumption of energy is directly proportional to the progress of mankind. With the ever growing population, the improvement in the living standard of humanity and the industrialization of developing countries like India, the global demand for energy is increasing at an alarming rate. The primary sources of energy are fossil fuels (like coal and diesel), which are decreasing day by day due to rising energy demand, and these sources also contribute to the global warming problem. So, we need non-conventional energy sources to fulfil the demand for energy. This paper gives a brief introduction to bio-mass energy, the advantages and disadvantages of bio-mass, and the conversion of biomass into modern energy carriers, which in turn becomes a tool of rural development.
Electricity is the key to economic development for any country. During the last five decades, the demand for electricity has increased manifold in India, primarily due to the rapid rate of urbanization and industrialization. The conventional fossil fuel resources for power generation are fast depleting and there is growing concern over the environmental degradation caused by conventional power plants. Against such implications, power generation from non-conventional resources assumes greater significance. Among the various renewable energy sources, biomass conversion technologies appear to be among the best suited for conversion to shaft power/electricity. The increasing demand for energy is putting immense pressure on fossil fuel resources and the sanctity of the ecological system. Governments and organizations across the world are engaged in a mammoth exercise to ensure energy security while at the same time balancing it with a sustainable environment. The consumption matrix of India shows a dependence on coal as the primary source of energy. However, coal, being a fossil fuel, is limited in supply. Moreover, oil and gas prove to be expensive energy sources given our import dependence. Also, these sources emit a huge volume of carbon dioxide, which is detrimental to the ecological system in the long run. Therefore, sustainable energy sources such as hydro power and renewable energy (presently accounting for only five percent of total energy supply) assume importance and offer potential for growth. The government has also recently become conscious of these impending realities.
In terms of the amount of energy available, the bioenergy option is very large in resources. According to the best estimates, India's total energy requirement, including that of the domestic, commercial and industrial sectors, is estimated to be 235 million tonne coal equivalent (mtce), and the contribution of biomass to the energy requirement of the country is 117.7 mtce. Knowing these facts, this paper deals with the utilization of the resources available at the grass-root level in the form of agricultural residues and human extract. Using high voluminous loose biomass has many drawbacks, viz. lower efficiency, low calorific value and large size per unit energy. So our intention is to convert this biomass into high density, suitable energy packets.
1. BIOMASS ENERGY :
Biomass is organic matter produced by both terrestrial and aquatic plants and their derivatives. It is considered a renewable energy source because plant life renews and adds to itself every year, unlike coal, oil and natural gas. In fact, it is a source which harnesses solar energy through photosynthesis.
2. CATEGORIES OF BIOMASS ENERGY :
There are many types of biomass sources, as shown in the diagram below.
Biomass
  - Traditional fuel form (solid mass): wood and agricultural residue
  - Ethanol, methanol
  - Gaseous form: biogas
3. STRIKING FEATURES OF BIO-MASS :
i. Ease of availability and cheapness.
ii. Renewability.
iii. Possible decentralization of energy.
iv. No storage requirements.
Traditional biomass fuels have the potential of providing heat energy as well as generating substantial electrical power.
4. BIOMASS BREAK UP :
Biomass covers all kinds of vegetation, from fuel wood to marine vegetation and organic wastes. Some of the biomass sources relevant for harnessing energy are listed below.
1. Wheat straw
2. Bagasse
3. Cotton stalks
4. Rice husk
5. Ragi and bajra
6. Coconut and groundnut shells
FACTORS INFLUENCING UTILIZATION OF BIOMASS :
Factors pertaining to material, environmental, economic and technical aspects influence the utilization of biomass as a source of energy.
1. MATERIAL FACTORS :
Biomass material contains moisture (see Table 1). An increase in the moisture content decreases its calorific value. The moisture content of the feed material also affects the calorific value of the product combustible gases.
2. TECHNOLOGICAL FACTORS :
A variety of technologies is available for the conversion of biomaterial to useful energy forms:
Biomass
  - Thermo-chemical conversion: combustion, gasification, pyrolysis
  - Biochemical conversion: aerobic, anaerobic
3. ECONOMIC FACTORS :
High cost of production could be the main hurdle in the commercialization of process technologies. In the case of biomass, it has been estimated that the production cost is Rs. 4.23/kWh, which is very close to the energy production cost from conventional sources. But intensive work is required to make the utilization economically attractive.
4. ENVIRONMENTAL FACTORS:
Public attitude and environmental impacts at the global and local level influence the use of biomass. Depending upon the technology and fuel used, NOx emission ranges from low to moderate. These fuels produce virtually no sulphur emission. Particulate emission in combustion is influenced by the fuel feed rate, the quantity of fines in the fuel and the amount of excess air supplied.
CONVERSION OF BIOMASS INTO SUITABLE ENERGY CARRIERS :
The loose biomass available is affected by
i. The moisture content
ii. Lower density
iii. Non-pulverized form
iv. Ash content
These result in a lower calorific value and lower efficiency. So it is preferred to convert loose biomass into energy packets in the form of solid, liquid or gaseous fuels. These fuels can ultimately be used for a variety of applications. The various processes for this conversion are:
1. PYROLYSIS :
It is an irreversible conversion of biomass into charcoal, pyrolytic oils and fuel gases through a process of heating in an oxygen-free environment. Pyrolysis units generally operate below 600°C. The gases produced are a mixture of hydrogen, methane, carbon monoxide, carbon dioxide and lower hydrocarbons. The liquids are oils and the solids are similar to charcoal.
2. BRIQUETTING :
It is a process of converting voluminous loose biomass into a high density, high value solid fuel. The briquetting process comprises three steps: crushing, drying and briquetting. Sugar cane trash, vegetable waste and agricultural waste may be used as raw material. This process facilitates
a) Improved energy density
b) Reduced weight and volume per unit energy
c) Easy transportation and handling
3. DIRECT COMBUSTION :
Direct combustion of biomass, mainly wood, is the oldest energy producing method. Direct combustion is an efficient method of recovering energy from bio-mass, which can be used to generate electrical power.
4. THERMO CHEMICAL PROCESS:
These processes enable the conversion of bio-mass into gaseous and liquid fuels. They basically involve cracking of complex organic molecules into simpler ones. In this process biomass is converted into low-BTU gas, medium-BTU gas and substitute natural gas.
5. ANAEROBIC FERMENTATION :
Anaerobic digestion is a biochemical process carried out by several micro-organisms. The end product of anaerobic digestion is a mixture of carbon dioxide, methane and other gases which can be utilized.
PROCEDURE FOR BIOMASS BRIQUETTING :
i) It requires a specially designed furnace, as shown in Fig. 2. It is made up of a kiln (steel/iron material) and seven barrels. The kiln is provided with a chimney, i.e. a smoke outlet.
ii) 3 kg of loose biomass is filled in each barrel and the barrels are placed in such a manner that the hole on the upper lid faces down into the kiln. Biomass is also filled at the bottom of the kiln.
iii) Firing the mass at the bottom of the kiln produces combustible gases. These gases help in further burning of the high voluminous loose biomass.
iv) After 45 minutes, the barrels are cooled down and 1 kg of charcoal is ready in each barrel. So, it is possible to produce 75-80 kg of charcoal in a day.
v) Grind it properly and add 1 kg of wheat flour paste per 10 kg of charcoal. Mix it properly.
vi) Bricks can then be made from the paste using moulds of the required shape and size.
PROPERTIES OF BIOMASS BRICKS :
For a single brick of loose biomass:
i. Weight: 100 g
ii. Calorific value:
   a. Sugarcane trash charcoal: 4500-5000 kcal/kg
   b. Biomass charcoal: 5600 kcal/kg
   c. Vegetable and agricultural residues: 3900 kcal/kg
iii. Moisture content: 0.00% (at ideal conditions)
iv. Ash content:
   a. Sugarcane trash charcoal: 20-25%
   b. Bamboo and wood trash: 4-5%
A CASE STUDY :
We surveyed a village named Narangi, a tribal place with a population of 1200 belonging to tribal communities like the Thakkar and Dhangar. Today there are 42 charcoal manufacturing plants there based on this technology. The plants are built and operated by housewives only. The detailed information is as follows:
i) Manufacturers: Sarasvati Saving Group, a group of 5 tribal women.
ii) Site: Narangi (tribal area), Tal: Khanapur, Dist: Raigad, Maharashtra.
iii) Product: Charcoal briquettes from loose biomass.
iv) Installation cost: Rs. 15,000/-.
v) Energy specifications: (see the table below)
S.No. | Product                              | Total calorific value produced per day
A     | Sugar cane trash charcoal            | 5000 x 80 = 400 x 10^3 kcal
B     | Bamboo charcoal                      | 4600 x 80 = 368 x 10^3 kcal
C     | Vegetable/agriculture waste charcoal | 3900 x 80 = 312 x 10^3 kcal
vi) Labour cost: Rs. 50/- wage per labourer per day; total labour cost: Rs. 250.
vii) No. of bricks manufactured per day = 850
viii) Weight of a brick = 100 g
ix) Manufacturing cost of a brick = Rs. 0.32
x) Market price of a brick = Rs. 1.00
xi) Annual turnover of a unit = Rs. 3,06,000/-
xii) Net profit per annum = Rs. 2,06,640/-
xiii) Economy: The calorific value of kerosene is 10,000 kcal/kg while that of sugar cane trash charcoal is 5000 kcal/kg, i.e. 2 kg of charcoal = 1 kg of kerosene. Cost of 1 kg of kerosene = Rs. 23 (approx.); cost of 2 kg of charcoal = 3.20 x 2 = Rs. 6.40; money saving = Rs. 16.60 for the 2 kg of charcoal that replaces 1 kg of kerosene (see the worked sketch after this list).
xiv) Employment generation: In order to generate employment for rural men and women, a technical backup support unit is supposed to provide training in manufacturing the charcoal briquettes and the appliances related to them. The earning of each woman from self-employment is about Rs. 1500/- per month and the earning of each woman from the profit dividend is approximately Rs. 3500/-, hence the earning of each woman per month will be about Rs. 5000/-. Looking at these figures, one can imagine how enrichment is going on at the grass roots with the appropriate utilization of biomass.
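A minimal Python sketch of the arithmetic behind the case study figures above. Every number is quoted in the case study except WORKING_DAYS, which is an assumption introduced only to relate daily production to the stated annual turnover.

# Worked example using the case-study figures; WORKING_DAYS is assumed.
BRICKS_PER_DAY   = 850     # bricks manufactured per day
BRICK_WEIGHT_KG  = 0.1     # 100 g per brick
PRICE_PER_BRICK  = 1.00    # Rs, market price
COST_PER_BRICK   = 0.32    # Rs, manufacturing cost
WORKING_DAYS     = 360     # assumed working days per year

CV_KEROSENE_KCAL = 10_000  # kcal/kg
CV_CHARCOAL_KCAL = 5_000   # kcal/kg, sugar cane trash charcoal
KEROSENE_RS_KG   = 23.0    # Rs/kg, approximate

# Production economics
annual_turnover = BRICKS_PER_DAY * PRICE_PER_BRICK * WORKING_DAYS
annual_margin   = BRICKS_PER_DAY * (PRICE_PER_BRICK - COST_PER_BRICK) * WORKING_DAYS

# Fuel-substitution economics: 2 kg of charcoal carries the heat of 1 kg of kerosene
charcoal_per_kg_kerosene = CV_KEROSENE_KCAL / CV_CHARCOAL_KCAL      # = 2 kg
charcoal_rs_per_kg       = COST_PER_BRICK / BRICK_WEIGHT_KG         # = Rs 3.20
saving_per_kg_kerosene   = KEROSENE_RS_KG - charcoal_per_kg_kerosene * charcoal_rs_per_kg

print(f"Annual turnover: Rs {annual_turnover:,.0f}")                          # Rs 306,000
print(f"Gross annual margin: Rs {annual_margin:,.0f}")
print(f"Saving per kg of kerosene replaced: Rs {saving_per_kg_kerosene:.2f}") # Rs 16.60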
APPLICATIONS OF BIOMASS BRIQUETTES :
1. DIRECT COMBUSTION :
Biomass bricks may be used directly with improved stoves, shegaries and chulhas. The rural masses may utilize them for cooking food, heating water and similar domestic uses, and they may become a strong alternative to kerosene, LPG and wood. Biomass bricks facilitate
1. Complete combustion
2. Improved calorific value
3. Cheap and clean operation
4. No formation of hazardous flue gases such as CO2 and CO
5. Saving of conventional fossil fuels viz. LPG and kerosene
6. Flexibility, ease of handling and transportation
7. Lower overall cost
2. RURAL ELECTRIFICATION :
Micro power plants based on renewable energy sources mean more and more freedom from dependence on a central electricity grid. The biomass available in India has a potential of 17,000 MW of electricity generation. The less voluminous bio-bricks may be utilized for domestic lighting and pumping water for irrigation purposes. This is possible with the help of setting up decentralized micro power plants, which will also give the additional advantage of employment generation. The calorific value of bio-bricks depends upon the type of biomass used for them (ref. table). By forming high density biomass like bamboo and sal wood into bricks, it is possible to produce high density, high calorific value bricks which may be used in boilers.
CONCLUSION :
This paper has given a brief introduction to bio-mass energy, the advantages and disadvantages of bio-mass, and the conversion of biomass into modern energy carriers, which in turn becomes a tool of rural development.
ELECTRIC POWER QUALITY DISTURBANCE
DETECTION USING WAVELET TRANSFORM ANALYSIS
BY
R.SARIKA, 3/4 B.TECH, EEE, BAPATLA ENGINEERING COLLEGE, EMAIL: [email protected]
G.SOWJANYA, 3/4 B.TECH, EEE, BAPATLA ENGINEERING COLLEGE, EMAIL: [email protected]
ABSTRACT
The objective of this paper is to present a novel approach to detect and localize various electric power quality disturbances using wavelet transform analysis. Unlike other approaches where the detection is performed directly in the time domain, detection using the wavelet transform analysis approach is carried out in the time-scale domain. As far as detection of power quality disturbances is concerned, one- or two-scale signal decomposition is adequate to discriminate disturbances from their background. This approach is robust in detecting and localizing a wide range of power disturbances such as fast voltage fluctuations, short and long duration voltage variations, and harmonic distortion.
1. INTRODUCTION
Electric power quality has become an
important issue in power systems
nowadays. The demand for clean
power has been increasing in the past
several years. The reason is mainly due
to the increased use of microelectronic
processors in various types of
equipment, such as computer terminals,
programmable logic controllers (PLCs),
diagnostic systems, etc. Most of these
devices are quite susceptible to
disturbances of the incoming alternating
voltage waveform. For example, a momentary power interruption or a thirty percent voltage sag lasting for a hundredth of a second can reset the PLCs of an assembly line. The cost due to such a disturbance can be substantial [1].
Therefore, to ensure efficient and proper
utilization of sensitive load equipment, a
clean voltage waveform is very
desirable. Electric power quality in
general refers to maintaining sinusoidal
waveform of power distribution bus
voltages at rated voltage magnitude and
frequency [2]. On the other hand,
electric power quality disturbances can
be thought of as any deviation,
distortion, or departure from a sinusoidal
voltage waveform at rated magnitude
and frequency. Since the power
distribution system is an interconnected
system between utility and customer,
any disturbances generated from the
utility side can propagate to the customer
side. Moreover, since customers' lines
are interconnected to each other and due
to the use of various loads, it is possible
for one customer to generate
disturbances which will affect the power
quality of other customers. Responding
to the need for high power quality, several research institutes [3, 4] are conducting independent studies and surveys of power quality in the United States and Canada.
The common objective of these studies
is to collect a pool of raw data for
subsequent disturbance analysis in order
to provide insight to the causes and
impacts of various power quality
disturbances and further to mitigate the
source of such disturbances. The
collected data are abundant, thus, it is
not practical to retrieve the data from the
databases, display it graphically and then
manually inspect the waveforms.
Therefore, an automatic detection
scheme is called for. The current state of
the art for detecting power quality disturbances available in the commercial market is based on a point-to-point comparison of adjacent cycles [3, 6]. In
this approach, the incoming waveform is
sampled at about 5 KHz. Each sample
point of the present cycle is compared to
the corresponding sample point of the
previous cycle. A disturbance is said to
occur if the comparison shows a
difference that exceeds a user supplied
threshold. This approach fails to detect
disturbances that appear periodically
such as flat-top and phase controlled
load wave shape disturbances. Another
approach to detect disturbances is based
on neural networks [5]. This approach
seems appropriate in detecting a
particular type of disturbance; however,
due to its intrinsic nature, specific neural
network architecture to detect a
particular type of disturbance is required.
Therefore, this neural network will, in
general, not be appropriate to detect
other types of disturbances.
2. WAVELET TRANSFORM
ANALYSIS AS A DETECTION AND
LOCALIZATION TOOL
As mentioned previously, most current
methods for detecting power quality
disturbances have their own limitations
and are performed directly in time
domain. In this paper, we present a novel
approach for disturbance detection and
localization based on the orthonormal
wavelet transform where detection is
carried out in the time-scale domain. As
will be shown in section 4, this approach
is powerful in detecting a wide range of
power quality disturbances such as fast
voltage fluctuations, short and long
duration voltage variations, and
harmonic distortion. It also can detect
disturbances that appear periodically.
The method of detection is fairly
straightforward. A given disturbance
waveform is transformed into the
time-scale domain using multiresolution signal decomposition (MSD) [7, 8]. Normally, one- or two-scale signal
decomposition is adequate to
discriminate disturbances from their
background because the decomposed
signals at lower scales have high time
localization. In other words, high scale
signal decomposition is not necessary
since it gives poor time localization.
Assume that we have chosen a specific type of mother wavelet with L filter coefficients, h(n) and g(n), which form a family of scaling functions φ(t) and orthonormal wavelets ψ(t), respectively, so that
The detection and localization process is
then just a series of convolution and
decimation processes at each
corresponding scale. At scale one, the electric power signal c0(n), with N sample points, is decomposed into two other signals, c1(n) and d1(n). From the MSD technique, the signals c1(n) and d1(n) are defined by
As mentioned in several wavelet
transform references, signal c1(n) is a
smoothed version of the original signal
c0(n), while d1(n) is the detailed version
of the original signal which is
represented as wavelet transform
coefficients (WTCs) at scale one. These
coefficients bring the detection
information. In power quality
disturbance cases, whenever
disturbances occur in the given
sinusoidal waveform, WTCs are
exclusively larger than their
surroundings. As will be made clear
later, the wavelet transform analysis is
sensitive to signals with irregularities
(i.e. power quality disturbances) but is
blind to constant-like behavior of the
signal (i.e. the 60 Hz sinusoidal
waveform). Based on this property, it is
clear that wavelet transform analysis is
an appropriate tool to detect and localize
power quality disturbances. Underlying
this straightforward process, one should
keep in mind that the physical
understanding of the detection and
localization described in (3) and (4) is
given by
f(t) in (7) can be thought of as a dummy signal generated by a linear combination of c0(n) with the scaling functions at scale zero. Therefore, any disturbances in c0(n) will appear in f(t) as well. Substituting (1) and (2) into (5) and (6), respectively, we have
From (8), it is understood that c1(n) is simply the smoothed version of the original signal c0(n), since h(n) has a low-pass frequency response. Whereas from
(9), it is clear that d1 (n) contains only
higher frequency components of the
signal f (t) because g (n) has a band pass
filter response. This explains why the
wavelet transform analysis is sensitive to
signals with large irregularities but
blind to constant-like behavior. In
practice, the construction of f (t) is not
necessary but it is useful in
understanding the physics of the
detection and localization process as
indicated in (5) and (6). However,
signals c1(n) and d1(n) are actually
obtained directly from (3) and (4). This
makes the detection and localization
process very straightforward. The
detection process for scale two starts
from signal c1(n), where this signal can be thought of as a new c0(n). The above process is then repeated. Since the
scaling and wavelet functions get wider
and wider as the scale increases, time
localization is lost. It suggests that
higher-scale decomposition is not
necessary. As far as the detection of power quality disturbances is concerned, two-scale signal decomposition of the original signal c0(n) is normally adequate to detect and localize disturbances.
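As an illustration of the one-scale detection procedure described above, the following Python sketch (a minimal example, not the authors' original code) uses the PyWavelets library to decompose a synthetic 60 Hz waveform containing a short voltage sag and flags the samples where the scale-one wavelet transform coefficients are large relative to their background. The sampling rate, sag depth and threshold rule are assumptions chosen for the example.

import numpy as np
import pywt  # PyWavelets; 'db2' here has 4 filter taps, i.e. the paper's Daub4

FS = 2560                                   # samples per second (assumed)
F0 = 60                                     # fundamental frequency, Hz

# Synthetic test signal: unit-amplitude 60 Hz sine with a 30% sag lasting 4 cycles
t = np.arange(0, 0.2, 1 / FS)
c0 = np.sin(2 * np.pi * F0 * t)
sag = (t > 0.052) & (t < 0.052 + 4 / F0)
c0[sag] *= 0.7

# One-scale multiresolution decomposition: c1 is the smoothed signal,
# d1 holds the scale-one wavelet transform coefficients (WTCs).
c1, d1 = pywt.dwt(c0, "db2")

# Disturbance boundaries show up as WTCs that stand out from the background;
# a simple threshold on |d1| (assumed rule) localizes them.
threshold = 5 * np.median(np.abs(d1))
events = np.where(np.abs(d1) > threshold)[0]

# Map the (decimated) coefficient indices back to approximate time instants.
print("disturbance located near t =", np.round(2 * events / FS, 4), "s")

Swapping "db2" for "db5" (10 filter taps, corresponding to the paper's Daub10) is one way to probe the slow-transient behaviour discussed in the next section.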
3. CHOICE OF MOTHER
WAVELET
The choice of mother wavelet plays a
significant role in detecting and
localizing various types of disturbances.
Daubechies wavelets with 4, 6, 8, and
10 filter coefficients work well in most
disturbance detection cases [9]. However
for some disturbances, such as sag or
over voltage disturbances (within 5%),
the Daubechies wavelet with 4 filter coefficients (Daub4) cannot detect or localize the disturbances. Therefore, the
choice of the mother wavelet is
important. In power quality disturbance
detection, generally, one can classify
disturbances into two categories, fast and
slow transients. In the fast transient case,
the waveforms are marked with sharp
edges, abrupt and rapid changes, and a
fairly short duration in time. In this case,
Daub4 and Daub6, due to their
compactness, are particularly good in
detecting and localizing such
disturbances. In the slow transient case,
the waveforms are marked with a slow
change or smooth amplitude change.
Daub4 and Daub6 may not be able to catch such disturbances, since the time interval in the integral (6) evaluated at point n is very short. However, if Daub8 and Daub10 are used, the time interval of the integral is long enough and, thus, such wavelets can sense the slow changes [9].
4. RESULTS AND DISCUSSION
The proposed detection method introduced in Section 2 is now applied to various types of disturbances. The disturbance signals presented here are generated using computer codes developed by the first author. The sinusoidal waveform is 60 Hz and of unit amplitude, with a sampling frequency of 2.56 kHz. Daub4 and Daub10 wavelets are used to show that in some cases Daub10 detects and localizes better than Daub4, or vice versa. Only one-scale signal decomposition is performed since it gives the best time localization.
In the following, the detection and
localization of the flat-top wave shape
disturbance (fast and short transient
disturbances), the voltage sag
disturbance (slow and long transient
disturbance), and harmonic distortion are
presented.
Flat-top wave shape disturbance [6]: This disturbance is identified by a flattened shape for a very short period of time near the peaks (i.e. at the 90° peak and the 270° peak) as shown in Fig. 1a. The flattened shape may be nearly horizontal or, in some cases, have a positive slope. This
disturbance is normally caused by an
electronic load that draws maximum
current from a distribution transformer at
the peak of the voltage waveform. The
detection and localization results using Daub4 and Daub10 at scale one are shown in Fig. 1b and 1c, respectively. In both cases, the peaks indicate the occurrences of disturbances. However, the detection using Daub10 has spurious effects near the peaks. This is due to the fact that the Daub10 wavelet is not as compactly supported as Daub4, and the time
interval integral in (6) is too long for this
purpose. The current state of the art
technique uses point-to-point
comparison technique described earlier
to detect this type of disturbance.
Because the disturbances appear at the same location in every cycle, this technique cannot detect the flat-top disturbance, nor any other disturbance that appears periodically. However, the wavelet transform approach can detect this type of disturbance easily.
Voltage sag [1]: Voltage sag is denoted by a sudden drop of voltage amplitude from its nominal value for several cycles.
Voltage sag with a 30% drop or more is
considered severe. Fig. 2a shows a 5%
sag disturbance for 4 cycles. The
detection and localization of this
disturbance using Daub4 and DaublO at
scale one are shown in Fig. 2b and c.
Daub4 barely detects the disturbance.
Notice the small dips in the circles
which indicate the changes. For all
practical purposes, Daub4 fails to detect
this smooth disturbance. However, this
disturbance is well detected with
DaublO.Now, the disturbance
occurrences are obvious. In this case,
DaublO works much better than Daub4
because the disturbance is so slow such
that Daub4, which is the most compactly
supported wavelet, does not have enough
time to sense the slow change. Harmonic
Distortion [2]
When a perfect sinusoidal waveform of
60 Hz is contaminated with harmonics,
the resulting waveform is called
harmonically distorted.
Figure 1. The flat-top wave fault detection and localization using Daub4 and Daub10. (a) The disturbance signal. (b) The detection result at scale one using Daub4 and (c) Daub10.
Figure 2. The five percent sag disturbance detection and localization using Daub4 and Daub10. (a) The sag disturbance. (b) The detection results at scale one using Daub4 and (c) Daub10.
A common way to measure the departure from a perfect sinusoid utilizes the total harmonic distortion (THD) definition, which is given by the following expression.
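In standard form, consistent with the description of V1 and Va that follows, the THD can be written as

\[
\mathrm{THD} = \frac{\sqrt{\sum_{a \ge 2} V_a^{2}}}{V_1} \times 100\%
\]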
where V1 is the amplitude of the fundamental frequency f0, Va is the amplitude of the harmonic at frequency a·f0, and a is a positive integer greater than one. A perfect sinusoid has a THD of
0%. The more the waveform is distorted,
the higher is the THD. Since the basis
function of the wavelet transform is not
defined in terms of frequency, it is
inherently not suitable to quantify the
harmonic content. However, the wavelet
transform plays an important role in
detecting and localizing the presence of
harmonic events. Figure 3a shows a
sinusoidal waveform that looks perfect.
However, at time 67.2 msec to 133.6
msec which is approximately 4 cycles,
the waveform is contaminated with the
addition of odd order harmonics up to
25th order. The harmonic contents are as
follows: the 5th, 7th, 11th, 13th, 17th, 19th, 23rd, and 25th harmonics are 9.03, 5.02, 3.01, 1.23, 0.89, 0.78, 3.12, and 1.9 percent of the fundamental, respectively. The calculated THD using (10) is 11.49%. Figures 3b and 3c show
the detection and localization of the
harmonic events using Daub4 and Daub10 at scale one, respectively. These
results indicate that the wavelet
transform can localize the occurrence of
harmonic events although it cannot
quantify the harmonic content. The
reason is due to the fact that the original
signal is filtered by h (n), a low pass
filter, and g (n), a band pass filter. Since
the filter g (n) has a band pass filter
response, it then can sense and localize
the presence of the harmonics.
Comparing the results obtained using Daub4 and Daub10, it is clear that Daub10 provides better localization. This is related to the frequency responses of the Daub4 and Daub10 wavelets.
5. CONCLUSION
We have demonstrated that the proposed
approach based on wavelet transform
analysis is very powerful in detecting
and localizing various types of power
quality disturbances. The results
presented are not the only disturbances
that can be detected. Many other
disturbances such as momentary
interruptions, impulses and various types
of wave shape faults, sags, and surges
can be well detected and localized using
the proposed method [9]. Currently, we are conducting an investigation to extend the detection and localization capability further; that is, the goal is not only to detect but also to classify various types of disturbances automatically. In another area, we are utilizing similar wavelet transform techniques to detect and quantify transient phenomena in tokamak fusion plasmas.
Figure 3. The harmonic distortion detection and localization using Daub4 and Daub10. (a) The harmonic distortion signal. (b) The detection result at scale one using Daub4 and (c) Daub10.
POWER ELECTRONIC CONVERTERS FOR
RAILWAY
INFRASTRUCTURE UPGRADING
AUTHORS: K.RAVI KUMAR,
D.JAMAAL REDDY.
ST.JOHN S COLLEGE OF ENGINEERING AND TECHNOLOGY
YERRAKOTA,YEMMIGANUR.
Email Id:[email protected]
[email protected]
ABSTRACT
Rail transportation is considered to be a critical infrastructure in our country, because much of its economy relies on it. It is a matter of fact that there is a significant increase in rail traffic nowadays. Railway infrastructure will certainly have to be upgraded to support this traffic increase. Two main consequences of the increase in rail traffic are an increase in power consumption and voltage drop on electrified lines. This problem has two possible solutions: construction of new substations and lines of higher capacity, or the use of power electronic compensators. Construction of new stations is tedious and involves huge expenditure. The best alternative, as suggested by power electronics, is the connection of a VAR compensator, which increases the traffic capacity of the line and also improves the voltage profile of the system. As the load increases, the reactive power absorbed by the train increases and this provokes more voltage drop.
Historically two methods were used to compensate this reactive power: synchronous machines and compensation capacitors. The main drawback of these methods is that they do not give a fast transient response. A static VAR compensator (SVC) is an electric device which is used to provide fast-acting reactive power.
As every coin has two sides, the SVC also has some drawbacks. A high harmonic distortion is produced by SVC systems. But this problem can be minimized by including some filtering circuits, which eliminate frequency components such as the 3rd, 5th, 7th, etc. This solution confirms that using power electronic compensators is a very interesting option for upgrading railways economically in the future. Last but not least, power electronics is emerging as a powerful technology for the further evolution of railways.
INTRODUCTION:
Critical infrastructures comprise those industries that provide a continual flow of the goods and services essential to a country's welfare and the safety of its citizens. These infrastructures are experiencing an important evolution, increasing their performance through the introduction of new technologies. Rail transportation is considered to be a critical infrastructure in many countries, because much of their economy relies on it, and a significant increase in rail traffic is expected. Railway infrastructure will certainly have to be upgraded to support this traffic increase. This upgrading may be realized by the introduction of new technologies on the existing infrastructure, thereby avoiding the construction of new infrastructure.
ABOUT INDIAN RAILWAYS:
Statistics of Indian railways: The
railways traverse the length and breadth
of the country. IR's routes cover 7,137
stations over a total length of more than
63,327 kilometers.
Catenary voltage: A catenary is a
system of overhead wires used to
supply electricity to light rail vehicle.
In practice, the catenary voltage in
the 25kV AC system can vary from
something like 18kV to over 30kV
because of poor regulation at the
substation or incorrect configurations
of the transformers, etc.
Substations: The substation is where
the electricity from the supplying
regional grid is transformed to a
voltage suitable for use for the
railways, and fed to the various
sections of the catenaries by a step
down transformer.
Transmission power: Power is transmitted to the electrical substations at 750 kV, 220 kV, 132 kV, or 110 kV and then stepped down as required to 25 kV or 50 kV.
DC System: In DC systems with overhead catenary, the basic principle is the same, with the catenary being supplied electricity at 1.5 kV DC. Usually the current from the catenary goes directly to the motors. A DC loco may, however, convert the DC supply to AC internally using inverters or a motor-generator combination, which then drives AC motors. In India, the 1.5 kV DC overhead system is used around Mumbai; for the rest of the country, the 25 kV AC system is mostly used.
NEED TO UPGRADE THE INDIAN RAILWAY INFRASTRUCTURE:
Now that a significant increase in traffic is expected, it is necessary to analyze its consequences and to assess the changes that will have to be made in the railway infrastructure. Two of the main consequences are the increase in power consumption and the excessive voltage drop on electrified lines.
In the first case it may cause the saturation of some lines and transformers, as more power is demanded than their capacity allows.
In the second case, railways are designed to assure a minimum catenary voltage at full load, so that the train can operate normally. If the connected load is increased, the voltage drop will also increase and it will not be possible to assure the required minimum voltage.
SOLUTIONS:
Two possible solutions to adapt the railway
infrastructure to the challenges of a traffic
increase are the following ones:
- The construction of new substations and
lines:
By building new infrastructure, the rating of
the elements of the network may be
increased, adapting them for a higher power
demand.
- VAR compensation:
This compensation may be done locally in
the trains or by a compensation device
connected to the line. In any case, this
compensation can reduce the voltage drop
on the lines, and it allows a higher load
capacity. The choice between these two
solutions depends on the characteristics of
the line that has to be upgraded.
Furthermore, growing attention to
environmental issues makes the installation
of new electrical substations more and more
difficult. Therefore, compensators have
become an interesting alternative to the
construction of new infrastructure.
COMPENSATION TECHNIQUES:
Historically two methods of reactive power
compensation have been used on electrical
networks:
- Synchronous machines.
- Capacitor banks.
1. The first method is based on the
capacity of synchronous machines to
produce reactive power. These
machines are connected to the line
and controlled to generate the
amount of reactive power that is
needed at each moment. The main
drawbacks of these systems are the
high inertia that they present, preventing a fast transient response,
and the fact that they include moving
parts (maintenance).
2. The second method is based on
capacitors and it is a discrete compensation
method. Some predetermined amounts
of reactive power can be generated, depending
on the quantity of capacitors that are
connected to the line at each moment. Thus,
depending on the reactive power compensation
requirement, the controller will have to decide
which combination of capacitors has to be
connected to the line, in order to produce the
closest quantity that can be produced by the
compensator. The main disadvantages of this
compensation method are its discrete nature
and especially the hard transients that follow
capacitor switching. These compensators are
quite appropriate for power system
applications, but not for railway applications,
because the rapidly changing nature of the
loads (the locomotives) requires a more flexible
and rapid compensation system.
RAILWAY SYSTEM UPGRADING WITH POWER ELECTRONIC COMPENSATORS:
VAR Compensator: In alternating-current power transmission and distribution, the volt-ampere reactive (VAR) is a unit used to measure reactive power in an AC electric
power system. Since AC power has a
varying voltage, efficient power systems
must therefore vary the current in synchrony
with the voltage. VARs measure
unsynchronized "leading" or "lagging"
currents. These currents are usually caused
by the side effects of powering equipment
that behaves like coils (e.g. motors) or
capacitors (e.g. arc welders). Only effective
power, i.e. the actual power delivered to or
consumed by the load, is expressed in watts.
Imaginary power is properly expressed in
volt-amperes reactive. VARs are the product
of the rms voltage and current, or the
apparent power, multiplied by the sine of the
phase angle between the voltage and the
current.
The connection of VAR compensation systems can help to increase the traffic capacity of the line, by improving the
voltage profile and by increasing the active
power that flows on the system. Due to the
high reactance of the railway catenary and
the substation transformer, the reactive
power absorbed by the trains may provoke
significant voltage drops. The connection of
a VAR compensator can limit the reactive
power flow in some parts of the system,
compensating the voltage drop that the trains
produce. In the same way, if the reactive
power is produced locally by the
compensator, the substation transformer will
be able to handle more active power,
permitting a traffic increment in the line.
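The mechanism described above can be illustrated with a short calculation. The Python sketch below uses the usual approximation ΔV ≈ (R·P + X·Q)/V to show how supplying the train's reactive power locally reduces the catenary voltage drop. The substation voltage, feeder impedance and train load are assumed, illustrative values, not data from this paper.

# Approximate catenary voltage drop with and without local VAR compensation.
# All numbers below are illustrative assumptions.
V_SUB = 25_000.0   # substation (no-load) catenary voltage, volts
R = 2.0            # feeder plus transformer resistance seen by the train, ohms
X = 8.0            # feeder plus transformer reactance, ohms
P = 4.0e6          # active power drawn by the train, watts
Q = 3.0e6          # reactive power drawn by the train, vars

def voltage_drop(p, q, v=V_SUB, r=R, x=X):
    """Classical approximation of the feeder voltage drop: (R*P + X*Q) / V."""
    return (r * p + x * q) / v

drop_uncompensated = voltage_drop(P, Q)      # train draws all Q from the substation
drop_compensated   = voltage_drop(P, 0.0)    # an ideal compensator supplies Q locally

print(f"Voltage at train, no compensation  : {V_SUB - drop_uncompensated:,.0f} V")
print(f"Voltage at train, full compensation: {V_SUB - drop_compensated:,.0f} V")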
STATIC VAR COMPENSATOR (SVC):
Static VAR Compensator (or SVC) is an
electrical device for providing fast-acting
reactive power compensation on high-voltage
electricity transmission networks. SVCs are part
of the Flexible AC transmission system
(FACTS) family of devices. The term "static"
refers to the fact that the SVC has no moving
parts other than circuit breakers and disconnects.
Traditionally, power factor correction has been
done with synchronous condensers, large
synchronous motors whose excitation
determines whether they absorb or supply
reactive power to the system. The SVC is an
automated impedance matching device. If the
power system's reactive load is capacitive
(leading), the SVC will use reactors to consume
VARs from the system, bringing the system
closer to unity power factor and lowering the
system voltage. A similar process is carried out
with an inductive (lagging) condition and
capacitor banks, thus providing a power factor
closer to unity and, consequently, a higher
system voltage
In most applications, thyristor-based, shunt-connected SVCs have been proposed for railway VAR compensation. They are composed of a capacitor, which is the VAR
generator, and a TCR (Thyristor Controlled
Reactor), which behaves as a variable VAR
absorbing load (depending on the firing
angle of the thyristor valve). Thus, the SVC
can inject or absorb a variable amount of
reactive power to the railway network,
adapting the compensation to the load
conditions at each instant.
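The variable-VAR behaviour of the capacitor-plus-TCR arrangement just described can be sketched with the standard fundamental-frequency TCR susceptance formula B(α) = [2(π − α) + sin 2α]/(πX_L), where α is the firing angle measured from the voltage zero crossing. The Python sketch below uses assumed reactor and capacitor ratings (not taken from this paper) to show how the net reactive power swings from capacitive to inductive as α varies.

import numpy as np

# Illustrative single-phase SVC: fixed capacitor in parallel with a TCR.
# Ratings below are assumptions chosen only to demonstrate the principle.
V = 25_000.0        # bus voltage, volts (rms)
X_C = 125.0         # capacitor reactance at fundamental frequency, ohms
X_L = 100.0         # TCR reactor reactance at full conduction, ohms

def tcr_susceptance(alpha_rad):
    """Fundamental-frequency TCR susceptance for firing angle alpha in [pi/2, pi]."""
    return (2 * (np.pi - alpha_rad) + np.sin(2 * alpha_rad)) / (np.pi * X_L)

for alpha_deg in (90, 110, 130, 150, 180):
    a = np.radians(alpha_deg)
    b_net = tcr_susceptance(a) - 1.0 / X_C   # inductive minus capacitive susceptance
    q = V ** 2 * b_net                       # > 0: SVC absorbs VARs, < 0: SVC supplies VARs
    print(f"alpha = {alpha_deg:3d} deg -> net Q = {q/1e6:+6.2f} Mvar")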
CIRCUIT OPERATION OF SVC:
Typically, a SVC comprises a bank of
individually switched capacitors in
conjunction with a thyristor-controlled air- or iron-core reactor. By means of phase
angle modulation switched by the thyristors,
the reactor may be variably switched into
the circuit, and so provide a continuously
variable MVAr injection (or absorption) to
the electrical network. The thyristors are
electronically controlled. Thyristors, like all
semiconductors, generate heat, and
deionized water is commonly used to cool
them.
Static VAR compensator
DRAWBACK OF SVC
COMPENSATORS:
The main drawback of SVC compensators is
the generation of harmonics. Their highly
non-linear characteristics make them absorb
a non-linear current, injecting harmonic
currents into the railway catenary. This
phenomenon is a common characteristic of most power electronic compensators.
PROBLEMS DUE TO HARMONICS:
The harmonics flowing on the railway
system can provoke some problems not only
on the railway system but also in other
systems related to it. Therefore, some
standards and recommendations have been
established in order to avoid the potential
problems caused by railway harmonics. The main harmonic constraints of railway systems are:
Limitations on the railway line voltage
distortion.
Limitations of the signaling track circuits.
The psophometric current constraint.
ELIMINATION OF HARMONICS:
Due to the high harmonic injection produced
by SVC systems, they generally include a
filtering system in order to minimize their
impact on the network. The simplest filter topology consists of a reactor added to the
SVC capacitor branch. Furthermore,
depending on the application it may be
interesting to add other filtering branches
tuned to different low frequency harmonics
or even a high pass filter.
The proposed solution for harmonics mitigation is based on passive power filtering. Finally, the proposed filter installation consists of three filters: two single-tuned filters for the 3rd and 5th harmonics, plus a high-pass filter for higher order harmonics.
Tuned harmonic filters (traps) involve the series
connection of an inductance and capacitance to
form a low impedance path for a specific (tuned)
harmonic frequency. The filter is connected in
parallel (shunt) with the power system to divert
the tuned frequency currents away from the
power source.
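As a sketch of how such single-tuned branches might be sized (the Mvar rating and system values here are assumptions for illustration, not values from this paper), the capacitor is first chosen for the reactive power to be supplied at the fundamental, and the series reactor is then picked so that the branch resonates at the tuned harmonic, f_n = 1/(2π√(LC)).

import math

# Illustrative sizing of single-tuned filter branches for a 25 kV, 50 Hz feeder.
# The Mvar rating and tuning orders are assumptions, not values from the paper.
V = 25_000.0            # system voltage, volts (rms)
F1 = 50.0               # fundamental frequency, Hz
Q_MVAR = 2.0            # reactive power each branch supplies at the fundamental

def tuned_branch(harmonic_order, q_mvar=Q_MVAR, v=V, f1=F1):
    """Return (C in farads, L in henries) for a branch tuned to the given harmonic."""
    w1 = 2 * math.pi * f1
    xc = v ** 2 / (q_mvar * 1e6)       # approximate capacitive reactance for the required Mvar
    c = 1.0 / (w1 * xc)
    wn = harmonic_order * w1           # series resonance at the tuned harmonic
    l = 1.0 / (wn ** 2 * c)
    return c, l

for h in (3, 5):
    c, l = tuned_branch(h)
    f_res = 1.0 / (2 * math.pi * math.sqrt(l * c))
    print(f"harmonic h = {h}: C = {c*1e6:.2f} uF, L = {l*1e3:.1f} mH, tuned at {f_res:.0f} Hz")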
IMPACT OF SVC WITH FILTERING
CIRCUITS:
Power electronics compensators can play an
important role in the process of upgrading
this infrastructure. However it is necessary
to analyze in detail the consequences of the
introduction of these compensators to avoid
any negative consequences. The SVC does not worsen the distortion of the catenary current and voltage; it can even improve it, if it includes the correct filtering system. There are fewer current harmonics in the system with the SVC (with its filter) connected than without the SVC.
CONCLUSION:
The performance of critical infrastructures
can be improved by the introduction of new
technologies. However it is necessary to
minimise the increase of vulnerability that
these new technologies can provoke. The
introduction of power electronics
compensation in order to upgrade the
railway infrastructure is a good example of
the improvement of a critical infrastructure
using new technology. SVC does not worsen
the distortion of the catenary current and
voltage, but it can even improve it, if it
includes the correct filtering system. There
are fewer current harmonics in the system
with the SVC (with its filter) connected than
without the SVC. These conclusions will
hopefully be confirmed by the test
measurements that are now being made on
the prototype, confirming that power
electronic compensators can be a very
interesting option for future railway network
upgrading.
SRI VENKATESWARA UNIVERSITY
COLLEGE OF ENGINEERING
DEPARTMENT OF ELECTRICAL & ELECTRONIC ENGINEERING
PRESENTED BY:
E.VARDARAJULU CHETTY S. MAHIR ALI MOHIDDIN
III B.TECH (E.E.E.) III B.TECH (E.E.E.)
Email: [email protected] Email: [email protected]
ELECTRIC LOCO
Preamble:
Abstract
Main Transformer
Circuit Breaker
Control
Protection
Abstract:
Trains play a vital role in human life. They are not only used for travelling but also for transporting heavy goods like automobiles, fuels and iron ores over long distances. With the development in technology we now enjoy electric trains, electromagnetic trains, etc. But have we ever thought of how an electric loco runs? This paper on the electric loco is about the working of an electric locomotive. It covers the energy conversions from AC to DC and from 1-phase AC to 3-phase AC, the control of traction motors, protection using various circuit breakers, sensing relays, lightning arresters and control mechanisms, and electric braking.
CIRCUIT BREAKER (DJ):-
Electrical equipment of the locomotive is connected to or disconnected from the supply by means of a circuit breaker. Compressed air is used to operate the breaker. The circuit breaker is tripped by operation of relays QOA, QOP, QLM, QRSI 1&2, Q44 & Q118.
RATINGS:-
                        20CB6 B2 (Vacuum)     DBTF 30:250 (Air Blast)
Current                 600 A                 400 A
Voltage                 25 kV AC              25 kV AC
Maximum pressure        10 kg/cm2             10 kg/cm2
Minimum pressure        4 kg/cm2              4 kg/cm2
Operating time          45 ms                 80 ms
Rupturing capacity      250 MVA (max)         250 MVA
Make                    M/s General Elec.     M/s ABB
AIR BLAST CIRCUIT BREAKER
The air blast circuit breaker consists of two contacts, namely a primary contact and a secondary contact, fitted on the loco roof. The primary contact is provided inside the horizontal insulator and the secondary contact is provided on the rotating vertical insulator. The secondary contact insulator is connected through a fork and actuating rod to the piston of the DJ servomotor.
To close DJ, pneumatic pressure is admitted into the DJ servomotor on the right hand side (piston rod end). To open DJ, the pneumatic pressure is admitted into the DJ servomotor on the left side of the piston.
The air admission is controlled on the right hand side by an electro-valve coil EFDJ and on the left hand side by an electro-valve coil MTDJ. To close DJ, the pneumatic pressure in RDJ should be above 6.5 kg/cm2 and, to energize the electro-valve coils, the battery voltage should be above 85 volts.
MAIN TRANSFORMER (TFP)
The main transformer is fed from the catenary through DJ. It comprises an auto transformer with 32 taps and a step down transformer (TFP) with two separate secondaries. The primary of the step down transformer is connected to one of the 32 taps of the auto transformer by means of the tap changer GR, driven by a pneumatic servomotor (SMGR). The passage from one tap of the transformer to another takes place on load. For feeding the auxiliary circuits, an auxiliary winding TFWA is provided. It feeds the auxiliaries at a voltage of 380 V ± 22%.
The two secondary windings of the step down transformer TFP (a3-a4, a5-a6) are protected against surge voltages by means of surge arresters ETTFP 1-2 and by means of the CAPTFP 1-2 and RCAPTFP 1-2 networks.
RATINGS
Type: HETT3900
Cooling: OFAF
Primary voltage: 25 kV nominal, 27.5 kV maximum, 22.5 kV average, 19 kV minimum
Secondary no-load voltage: 2 x 865 V
Primary input: 4170 kVA [A33-A0: 3900 kVA, a0-a1: 270 kVA]
Secondary output: 3900 kVA [a3-a4, a5-a6]
Auxiliary circuit output: 270 kVA [a0-a1]
No. of taps: 32
SILICON RECTIFIER (RSI 1&2)
The main rectifier consists of two identical cubicles. Each cubicle houses the diodes, fan, bridge fuses, etc. The output of each rectifier feeds a group of three traction motors. In the event of failure of any of the bridges, the bridge fuse blows, triggering in turn the signalling fuse, which lights a signal lamp LSRSI on the driver's desk.
Each cubicle is fitted with one axial flow blower driven by a 3-phase motor (MVSI 1-2).
RATINGS
No. of cubicles per loco: 2
Rated current: 3300 A
Max. starting current: 4050 A
No-load rated voltage at 22.5 kV: 750 V DC
Connection: Bridge
No. of bridges: 6 per cubicle
No. of diodes: 4 per bridge
SMOOTHING REACTOR (SL 1&2)
The current leaving the rectifier block is a pulsating DC current. The undulation of the currents thus rectified is reduced to a value acceptable to the traction motors by the smoothing reactor. Two coils form a single unit. SL1 is provided in the output of the RSI 1 block and SL2 is provided in the output of the RSI 2 block. The SLs are cooled by forced air from blowers MVSL1 and MVSL2.
                                     SL-42                      SL-30
Make                                 CLW                        CLW
Current                              1000 A/coil                1350 A/coil
No. of coils per reactor             Two                        Two
Voltage                              1270 V                     1270 V
Inductance                           7 mH at 1000 A per coil    7 mH at 1000 A per coil
Cooling                              One blower per reactor     One blower per reactor
Resistance at 110 C                  0.00707 ohm per coil       0.00707 ohm per coil
Insulation                           Class F                    Class H
No. of smoothing reactors per loco   Two                        Two
Weight                               1385 kg                    1400 kg
LINE CONTACTORS:-
These are electro-pneumatically operated contactors. Line contactors L-1, L-2, L-3, L-4, L-5 and L-6 are used to connect the motors into the circuit. These contactors are designed to open on load.
RATINGS
Rated voltage, main circuits: 1270 V DC
Rated voltage, control circuits: 110 V DC
Rated current: 1000 A
Rated air pressure: 9 kg/cm2
TRACTION MOTORS (TM)
In the WAG-5 loco, TM 1, 2, 3 are provided in bogie 1 and TM 4, 5, 6 are provided in bogie 2. These motors are of the axle-hung, nose-suspended type. There are two types of traction motors supplied by CLW, i.e. TAO 659 and Hitachi.
Grease-lubricated roller bearings are used for the armature and for suspension in Hitachi motors. In TAO 659 motors, roller bearings are used for the armature and white metal bearings for suspension.
Special provision has been made in the design of the motors to ensure that the locomotive can be operated satisfactorily on flooded track, up to a maximum flood level of 20 cm above rail level.
                        HS 15250 A               TAO 659
Make                    CLW                      CLW
Continuous output       630 kW                   585 kW
Voltage                 750 V                    750 V
Starting current        1350 A                   1100 A
Current (continuous)    900 A                    840 A
Speed                   895 rev/min              1060 rev/min
Max service speed       2150 rev/min             2500 rev/min
Insulation              Class C                  Class H
No. of poles            Main 6, Interpoles 6     Main 6, Interpoles 6
CONTROLLING:-
REVERSORS (J1&2) AND TRACTION/BRAKING SWITCH (CTF 1, 2, 3)
These are drum-type change-over switches and are operated electro-pneumatically by two magnetic valves. The reverser handle of the master controller (MPJ) can be turned to the forward (F) or reverse (R) position to energise the respective valves.
The traction/braking switch (CTF 1-2-3) has two positions, one for traction and the other for braking. It prepares the circuit of the traction motors for traction or braking.
RATINGS
Make: CLW
Rated voltage: 1270 V
Rated current: 1000 A
Rated air pressure: 5 kg/cm2 to 9 kg/cm2
RHEOSTATIC BRAKING:
Rheostatic braking is provided in WAG-5A locos for controlling the train effectively on down gradients.
i) The rheostatic braking is effective between 40 and 60 km/h locomotive speed.
ii) The auxiliary transformer for field excitation (ATFEX) during braking is connected across the a5-a6 terminals of the TFP windings through a contactor C145.
iii) The output terminals of ATFEX are connected to RSI 1.
PROTECTION EMPLOYED
1. In case of over-excitation, the RB is isolated by means of relay QE, which is connected in the ATFEX circuit. The preset value of relay QE is 900 A.
2. Over-current relays QF1 and QF2 are connected across TM1 & TM4 respectively. These relays isolate RB if the current drawn by RF1 & RF4 exceeds 700 A.
3. Motor MVRF is used to cool the braking resistors (RFs).
PROTECTION RELAYS
a) HIGH VOLTAGE OVERLOAD RELAY (QLM)
The relay QLM is fed by means of the high voltage current transformer TFLIM (250/5 A), which causes the high voltage circuit breaker DJ to trip if the current taken by the main transformer exceeds the setting value of the relay (300 A).
b) OVERLOAD RELAYS FOR SILICON RECTIFIERS (QRSI 1&2)
The relays QRSI 1-2 are fed by means of the rectifier current transformers RSILM 1&2 (4000/5 A), which cause the high voltage circuit breaker DJ to trip if the current taken by the rectifier exceeds the setting value of the relays (3600 A).
c) BRAKING EXCITATION OVERLOAD RELAY (QE)
The relay QE is fed by means of the excitation current transformer ELM (1000/5 A), which causes auto-regression of GR if the current taken by the excitation winding exceeds the setting value of the relay (900 A).
d) BRAKING OVERLOAD RELAYS (QF1&2)
The relays QF1-2 are connected to the shunts (SHF1-2) and cause auto-regression of GR if the current taken by the braking resistors (RF1-4) exceeds the setting value of the relays (700 A).
e) CURRENT DIFFERENTIAL RELAYS (QD1&2)
These relays are of the current differential type and have two coils each. QD-1 is connected between motors 2 & 3 and QD-2 between motors 4 & 5. Whenever the current difference between TMs 2 & 3 or TMs 4 & 5 exceeds 125 A, the respective QD relay energises and in turn energises Q-48, thereby energising the sanding electro-valves (VESA) for auto sanding to the corresponding wheels. Relay Q-51 is also energised, causing regression of the tap changer until the current difference falls to 80 A (a minimal sketch of this threshold-and-reset logic appears at the end of this protection section).
f) TRACTION POWER EARTH CIRCUIT RELAY (QOP 1&2)
This is a safety relay for the protection of the traction power circuit against earth faults. QOP-1 is provided for the circuit of the TM 1, 2, 3 branch and QOP-2 for the circuit of the TM 4, 5, 6 branch. If there is any earth fault in the traction power circuit, the respective relay QOP energises and trips the high voltage circuit breaker DJ. The switches HQOP 1&2 make it possible to isolate the relay and replace it with a resistance RQOP 1&2 in order to limit the fault current.
g) AUXILIARY CIRCUIT EARTH FAULT RELAY (QOA)
This is a safety relay for the protection of the auxiliary power circuit against earth faults. If there is any earth fault in the auxiliary power circuit, the relay QOA energises and trips the high voltage circuit breaker DJ. The switch HQOA makes it possible to isolate the relay and replace it with a resistance RQOA in order to limit the fault current.
h) OVERLOAD RELAY FOR AUXILIARY POWER CIRCUIT (QLA)
The relay QLA is fed by means of a current transformer, which causes the high voltage circuit breaker DJ to trip if the current taken by the auxiliary winding exceeds the setting value of the relay (1400 A).
i) TRACTION MOTOR OVERLOAD RELAY (Q20)
This is an overload relay connected in the output of the RSI-1 block. Relay Q20 is connected in series with resistance RQ20 and causes auto-regression if the voltage exceeds 865 volts. When the voltage falls to 740 V, the auto-regression stops.
j) NO VOLTAGE RELAY (Q30)
Relay Q-30 is a low-voltage or no-voltage relay and drops out if the output voltage of the single-phase auxiliary winding falls below 215 V. Its contacts open in the relay Q44 branch and trip DJ. When the voltage rises to 260 V the relay energises again. It protects the loco equipment from no or low voltage.
k) ARNO PROTECTION RELAY (QCVAR)
Relay QCVAR is a protection relay for the ARNO converter to ensure proper starting, and it is connected across the W phase and neutral of the ARNO. When the ARNO picks up its rated speed and the voltage across the W phase reaches 155-160 V AC, relay QCVAR is energised and cuts out the starting phase by opening the contactor C118.
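As an illustration of the threshold-and-reset behaviour described for the current differential relays QD above, the following Python sketch models the pick-up/drop-off logic in software. It is only a behavioural sketch using the setting values quoted in this paper; the actual relays are electromechanical devices, and the class and function names are invented for the example.

class HysteresisRelay:
    """Operates above `pickup`, resets only after the quantity falls to `dropoff`."""
    def __init__(self, pickup, dropoff):
        self.pickup = pickup
        self.dropoff = dropoff
        self.energised = False

    def update(self, value):
        if not self.energised and value >= self.pickup:
            self.energised = True          # relay operates
        elif self.energised and value <= self.dropoff:
            self.energised = False         # relay resets
        return self.energised

# QD behaviour: operates at a 125 A difference between motor currents, and
# tap-changer regression continues until the difference falls to 80 A.
qd = HysteresisRelay(pickup=125.0, dropoff=80.0)
for diff in (60, 100, 130, 110, 90, 75):
    state = "ON " if qd.update(diff) else "off"
    print(f"TM current difference {diff:3.0f} A -> regression {state}")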
A Solution to Remote Detection of Illegal Electricity
Usage via Power Line Communications
DEPARTMENT OF ELECTRICAL AND ELECTRONICS
V.R.SIDDHARTHA ENGINEERING COLLEGE
Presented by
P.Sai Raghunath Ch.Lakshmi Narayana
3/4 B.Tech,EEE 3/4 B.Tech,EEE
[email protected] [email protected]
Ph no:9441144558 ph no:9985953792
ABSTRACT:-
Power line communication (PLC) presents an interesting and economical solution for Automatic Meter Reading (AMR). If an AMR system via PLC is set up in a power delivery system, a detection system for illegal electricity usage may easily be added to the existing PLC network. In the detection system, a second digital energy meter chip is used and the value of energy is stored. The recorded energy is compared with the value at the main kilowatt-hour meter. In the case of a difference between the two recorded energy values, an error signal is generated and transmitted via the PLC network. The detector and control system is proposed. The architecture of the system and its critical components are given. The measurement results are given.
1. Introduction
Figure 1: Electromechanical movement to digital signal conversion.
Figure 2: AMR communication set-up [5].
India, the largest democracy, with an estimated population of about 1.04 billion, is on the road to rapid economic growth. Energy, particularly electricity, is a key input for accelerating that growth.
The theft of electricity is a criminal offence, and power utilities are losing billions of rupees on this account. If an Automatic Meter Reading system via power line communication is set up in a power delivery system, a detection system for illegal electricity usage becomes possible.
Power line communications (PLC) has
many new service possibilities on the data
transferring via power lines without use of extra
cables. Automatic Meter Reading (AMR) is a
very important application in these possibilities
due to every user connected each other via
modems, using power lines. AMR is a technique
to facilitate remote readings of energy
consumption.
The following sections will describe the
proposed detection and control system for
illegal electricity usage using the power lines.
Index Terms: Automatic meter reading (AMR), detector, illegal electricity usage, power line communication (PLC), PLC modem.
2. Detection of illegal electricity usage
In this section the discussion is on how a subscriber can illegally use electricity, and on the basic building blocks for detection using power line communication.
2.1 Methods of illegal electricity usage
A subscriber can illegally use electricity in the following ways:
1) Using mechanical objects:
A subscriber can use mechanical objects to obstruct the revolution of the meter disk, so that the disk speed is reduced and the recorded energy is also reduced.
2) Using a fixed magnet:
A subscriber can use a fixed magnet to
change the electromagnetic field of the current
coils. As is well known, the recorded energy is
proportional to electromagnetic field.
3) Using the external phase before meter
terminals:
This method gives subscribers free
energy without any record.
4) Switching the energy cables at the meter connector box:
In this way, the current does not pass
through the current coil of the meter, so the
meter does not record the energy consumption.
Although all of the methods explained
above may be valid for electromechanical
meters, only the last two methods are valid for
digital meters. Therefore, this problem should
be solved by electronics and control techniques
[1].
2.2 Building blocks for detection
2.2.1. Automatic Meter Reading (AMR):
The AMR system starts at the meter.
Some means of translating readings from
rotating meter dials, or cyclometer style meter
dials, into digital form is necessary in order to
send digital metering data from the customer
site to a central point. In most cases, the meter used in an AMR system is the same ordinary meter used for manual reading; the difference from a conventional energy meter is the addition of a device that generates pulses related to the amount of consumption monitored, or that generates an electronic, digital code corresponding to the actual reading on the meter dials. One such technique, using an optical sensor, is shown in Figure 1.
The three main components of AMR
system are,
1. Meter interface module: with power supply,
meter sensors, controlling electronics and a
communication interface that allows data to be
transmitted from this remote device to a central
location.
2. Communications systems: used for the transmission, or telemetry, of data and control signals between the meter interface units and the central office.
3. Central office systems equipment: including
modems, receivers, data concentrators,
controllers, host upload links, and host
computer [4].
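As a rough illustration of the pulse-based approach described above, the short sketch below (Python, purely illustrative) converts pulses counted from a hypothetical meter sensor into kilowatt-hours; the meter constant and function names are assumptions, not part of any specific AMR product.

# Minimal sketch (assumed meter constant): convert counted meter pulses to kWh.
PULSES_PER_KWH = 1600            # assumed pulses-per-kWh constant, for illustration only

def pulses_to_kwh(pulse_count: int) -> float:
    """Return the energy in kWh represented by pulse_count pulses."""
    return pulse_count / PULSES_PER_KWH

# Example: a meter interface module that has counted 4000 pulses.
print(pulses_to_kwh(4000))       # -> 2.5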
2.2.2 Power Line Communication (PLC):
Power line carrier communications take
place over the same lines that deliver electricity.
This technique involves injecting a high
frequency AC carrier onto the power line and
modulating this carrier with data originating
from the remote meter or central station. Power line communications offers many new service possibilities for data transfer over power lines without the use of extra cables. AMR is a very important application among these possibilities: in such a power network, every user is connected to every other user via modems, with data originating from the remote meter or the central station.
Electrical power systems vary in configuration from country to country depending on the respective power sources and loads. The practice of using medium-voltage (11 to 33 kV) and low-voltage (100 to 400 V) power distribution lines as high-speed PLC communication media and optical networks as backbone networks is commonplace.
Under normal service conditions, distribution networks can be broadly divided into open-loop systems, each with a single opening, and tree systems with radially arranged lines. In the case of tree systems, connection points to adjacent systems are provided so that paths and loads may be switched when necessary for operation. Additionally, in terms of distribution line types, there are underground cables and overhead power distribution lines. Where transformers are concerned, they can be divided into pole-mounted transformers, pad-mounted transformers and indoor transformers.
High-speed PLC applications of the
future include Automatic Meter Reading
(AMR), power system fault detection, power
theft detection, leakage current detection, and
the measurement/control/energy-management of
electrical power equipment for electrical power
companies, as well as home security, the remote monitoring and control of electrical household appliances, online games, home networks, and billing [3].
Figure 3: Schematic illustration of the detection system of illegal electricity usage. [1]
3. Detection and Control System
The proposed control system [1] for the
detection of illegal electricity usage is shown in
Fig. 3. PLC signaling is only valid over the low-voltage AC power lines. The system should therefore be applied to every low-voltage distribution network. The system given in Fig. 3 belongs to only one distribution transformer network and should be repeated for every distribution network. Although the proposed system can be used on its own, it is better to use it together with an automatic meter reading system. If an AMR system is to be used in a network, the host PLC unit and a PLC modem for every subscriber should already be contained in this system. In
Fig. 3, the host PLC unit and other PLC
modems are named PLC1A, PLCNA and are
used for AMR. These units provide
communication with each other and send the
recorded data from the kilowatt-hour meters to the host PLC unit. In order to detect illegal usage of
electrical energy, a PLC modem and an energy
meter chip for every subscriber are added to an
existing AMR system. As given
in Fig. 3, PLC1B, PLCNB and energy meter
chips belong to the detector.
The detector PLCs and energy meters must be placed at the connection point between the distribution main lines and the subscriber's line. Since this connection point is usually overhead or underground, it is not easily accessible to the subscriber, which makes it easy to keep under control. The main procedure of the proposed system can be summarized as follows.
PLC signaling must conform to CENELEC standards. In Europe, CENELEC has formed the standard EN 50065-1, in which the frequency bands, signaling levels, and procedures are specified: 3-95 kHz is restricted to use by electricity suppliers, and 95-148.5 kHz is restricted to consumer use.
The recorded data in the kilowatt-hour meters of every subscriber are sent to the host PLC modem via PLC modems placed at the subscribers' locations. On the other hand, energy meter chips located at the connection points read the energy in kilowatt-hours and also send their data to the host PLC unit. The proposed detector system therefore has two recorded energy values in the host PLC unit: one coming from the AMR PLC and the other coming from the PLC modem at the connection point. These two recorded energy values are compared in the host PLC; if there is any difference between the two readings, an error signal is generated. This means that there is illegal usage in the network. After that, the subscriber address and the error signal are combined and sent to the central control unit. If requested, a contactor may be added to the system at the subscriber's location to turn off the supply automatically in the case of illegal usage.
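The comparison step performed in the host PLC unit can be pictured with the short Python sketch below; it is a minimal illustration with hypothetical reading records and an assumed tolerance to absorb normal metering error, not the actual firmware of the system described in [1].

# Minimal sketch (assumed data and tolerance): compare the AMR reading with the
# connection-point reading for one subscriber and flag possible illegal usage.
TOLERANCE_KWH = 0.5              # assumed margin for ordinary metering error

def check_subscriber(subscriber_id, amr_kwh, connection_point_kwh):
    """Return an error record if the two readings disagree beyond the tolerance."""
    difference = connection_point_kwh - amr_kwh
    if difference > TOLERANCE_KWH:
        # Energy seen at the connection point exceeds what the subscriber's meter recorded.
        return {"subscriber": subscriber_id, "error": True, "missing_kwh": difference}
    return {"subscriber": subscriber_id, "error": False, "missing_kwh": 0.0}

# Example: subscriber 17 drew 12.4 kWh at the connection point while the
# kilowatt-hour meter recorded only 9.0 kWh.
print(check_subscriber(17, 9.0, 12.4))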
3.1 Simulation
The system model and simulation of the detection system for illegal electricity usage is shown in Fig. 4. It contains a host PLC modem, an energy meter chip and its PLC modem, an electromechanical kilowatt-hour meter and its PLC modem, and an optical reflector sensor system, all loaded on the same phase of the power grid. The energy value at the electromechanical kilowatt-hour meter is converted to digital data using the optical reflector sensor. The disk revolutions of the kilowatt-hour meter are counted and the obtained data is sent to the PLC modem as the energy value of the kilowatt-hour meter. In the system model, an illegal load may be connected to the power line before the kilowatt-hour meter via a switch S. While only the legal load is in the system, the two meters are calibrated against each other to compensate for any reading errors. The host PLC unit reads the two recorded values coming from the metering PLC units. If the switch S is closed, the illegal load is connected to the system, and therefore the two recorded energy values differ from each other.
Figure 4: Illegal detector system for one subscriber. [1]
Figure 5: System simulation and modeling of the detection
system of illegal electricity usage for electromechanical
kilowatt-hour meters. [1]
An error signal is generated in the host PLC unit when it receives two different records from the same subscriber. This is the detection of illegal usage for the user concerned. In these tests, the carrier frequency is selected as 132 kHz, which is permitted in the CENELEC frequency band. In real applications, the AMR system may be designed in all CENELEC bands. The data rate between the host and the other PLC modems is 2400 b/s.
Data signaling between PLC modems
has a protocol, which includes a header,
address, energy value data, error correction bits,
and other serial communication bits such as
parity and stop bits. The protocol may also be
changed according to the properties of the
required system and national power grid
architecture.
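A minimal sketch of such a frame is given below; the field widths and the simple additive checksum are assumptions chosen only to make the example concrete and do not reproduce the actual protocol of [1].

# Minimal sketch (assumed field sizes): pack one metering frame as
# header | address | energy value | checksum.
import struct

HEADER = 0xA5                    # assumed start-of-frame marker

def pack_frame(address: int, energy_wh: int) -> bytes:
    """Build a frame: 1-byte header, 2-byte address, 4-byte energy, 1-byte checksum."""
    body = struct.pack(">BHI", HEADER, address, energy_wh)
    checksum = sum(body) & 0xFF                  # simple additive checksum (assumption)
    return body + bytes([checksum])

def unpack_frame(frame: bytes):
    """Return (address, energy_wh) if the checksum matches, otherwise None."""
    body, checksum = frame[:-1], frame[-1]
    if (sum(body) & 0xFF) != checksum:
        return None
    _header, address, energy_wh = struct.unpack(">BHI", body)
    return (address, energy_wh)

print(unpack_frame(pack_frame(17, 12400)))       # -> (17, 12400)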
Fig. 5 shows the detection system for an electromechanical kilowatt-hour meter. In a digital energy meter system, the recorded energy may be received in digital form directly from the port of the meter; therefore, there is no need for an optical reflector system with digital meters. The results of the tests show that this system may solve the problem economically, because the budget of the proposed system is approximately U.S. $20-25 per subscriber. It is a very economical and reliable solution when compared with the economic loss caused by illegal usage [1].
4. Overview of the proposed Detector System
The proposed detector system comprises the equipment and procedure for controlling remote stations from a master control station. It includes PLC modems, energy meters, control logic, and the system software. The PLC modems are host and target modems for two-way communication between the host station and the remotely controlled targets. The energy meters include metering chips and some circuit elements; the control and logic units compare the readings and generate the error signal in the case of illegal usage. The system software has two parts: the assembler program for the microcontroller and the operating software for the management of the overall system. The operating software may be downloaded from a PC and should be placed in the main center of the system.
An AMR system including an illegal
detector performs the following functions.
1) Every user has two PLC modems; one is for AMR and the other is used to send the data from the second energy meter chip to the host PLC modem.
2) An energy meter must be installed in the
connection box between a home line and main
power lines.
3) The host PLC unit must be placed at the distribution transformer, and the configuration of the addressing format of PLC signaling must be designed carefully.
Figure 6: Effects of distance of the source-receiver on the loss for various [2]
Figure 7: Bit-error probability with frequency and load impedance for 1000 m [2]
4) The host PLC modem and its controller must include two addresses for every user: one for the AMR and the other for the energy meter. These two addresses must be selected sequentially (see the sketch after this list).
5) Operating software must be designed for the
information of every subscriber in every sub
power network: subscriber identification
number, billing address, etc.
6) The system has two values of the energy consumption for every user, so if there is a difference between them, an error signal is generated for the illegal user.
7) The proposed equipment covers only one distribution power network, so the system should be repeated for all distribution power networks. All the host units at the distribution transformers may then be connected to a single main center station via phone lines, fiber-optic cable, or RF links.
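As noted in item 4 above, each subscriber needs a pair of sequential addresses. The fragment below shows one possible way to derive them; the base offset and numbering scheme are assumptions made purely for illustration.

# Minimal sketch (assumed scheme): give subscriber k the sequential pair
# (base + 2k, base + 2k + 1) -- one address for the AMR modem, the next for
# the connection-point energy meter.
def address_pair(subscriber_index: int, base: int = 0x100):
    amr_address = base + 2 * subscriber_index
    meter_address = amr_address + 1
    return amr_address, meter_address

# Example: the third subscriber on this distribution network.
print(address_pair(3))           # -> (262, 263)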
Results and the variations of the measurements are shown in Figs. 6-7 [2]. The relations between frequency, length, and bit-error probability are given in these figures [1]. Research work has been taking place at CPRI, Bangalore, on remote metering and detection of power theft, and will soon be helpful to electricity boards in India.
5. Conclusion
The proposed detector system to determine illegal electricity usage via power line communications was examined under laboratory conditions. The results proved that if the AMR and detector systems are used together, illegal usage of electricity can be detected. Once the proposed detection system is tried on real power lines, the distribution losses in India can be reduced effectively.
6. References:
[1] I. H. Cavdar, "A solution to remote detection of illegal electricity usage via power line communications," IEEE Transactions on Power Delivery, vol. 19, no. 4, Oct. 2004.
[2] I. H. Cavdar, "Performance analysis of FSK power line communications systems over the time-varying channels: Measurements and modeling," IEEE Transactions on Power Delivery, vol. 19, pp. 111-117, Jan. 2004.
[3] Yoshinori Mizugai and Masahiro Oya, "World trends in power line communications," Mitsubishi Electric ADVANCE, March 2005.
[4] Tom D. Tamarkin, "Automatic meter reading," Public Power magazine, vol. 50, no. 5, September-October 1992.
[5] Online: www.wikipedia.org/powerlinecommunication
Fault Diagnosis in Transmission
Systems for Alarm
Processing by Fuzzy Relation and
Fault Tree Analysis
Presented By:
M.BHARADWAJA G.V.SUDHEER KUMAR
07A75A0201 07A75A0202
III.B.Tech III.B.Tech
Ph.No: 9963506694 9966896446
Nalanda Institute of Engineering & Technology,
Sattenapalli, Guntur (Dist.)
e-mail us:
[email protected]
[email protected]
Abstract
Fault diagnosis in a transmission system is the process of identifying faulted components using the post-fault statuses of the protective relays and circuit breakers. However, it is not always easy to do so, because the system operator has to quickly analyze a large number of messages and alarms transmitted to the control center in order to draw a conclusion on which a proper action can be taken to restore the system. Therefore, the system operator needs analysis tools to provide assistance in interpreting the data. This paper proposes a method to solve the problem of locating faults that occur randomly anywhere in transmission systems. The method is based on fuzzy relations that utilize information on the operating time sequences of the actuated relays and tripped circuit breakers, and on a fault tree that can efficiently handle circuit breaker failure. The method is versatile in that uncertainties, such as protective relay failure and circuit breaker failure, are taken into consideration. The method is applied to a 6-bus system. The results show that not only can a faulted component be precisely identified, but the problem of loss of alarm data from relays can also be solved.
Keywords: Fault diagnosis, alarm processing, expert systems, fuzzy relation, fault tree analysis.
INTRODUCTION:
Every component in a power system is subject to faults. Faults in a power system must be isolated as quickly as possible because they affect the reliability and security of the system. When a fault occurs, the faulted section must be detected and removed from the rest of the system through a series of relay and circuit breaker operations. However, the identification of faults is a time-consuming and complex task because a large amount of data obtained from monitoring devices needs to be analyzed to ascertain whether the relevant protection schemes responded correctly. Other constraints are false operations of relays and circuit breakers, inexperienced system operators, and simultaneous faults at different locations. Therefore, a system operator needs analysis tools that provide assistance in interpreting the data and arrive at the critical pieces of information. A number of methods have been proposed, such as artificial neural networks. Of these, the expert system is the most established. For example, the pseudo-relaxation method is a fast iterative learning algorithm combined with a neural network to find a solution; however, it is computationally expensive for a large-scale system, and in some cases it may not converge to a solution. An expert system based on If-Then rules also has a limitation: the designed rule base cannot cover all possible situations. In other words, the expert system can perform satisfactorily only for those situations that have been considered during the development of the knowledge base. An expert system based on fuzzy relations has also been proposed; it ranks fault section candidates in terms of the degree of membership and the malfunction or wrong alarm. This method provides more flexibility than conventional expert system methods. However, circuit breaker failures are not taken into account. This paper proposes a technique for fault identification in transmission systems for alarm processing by fault tree analysis and fuzzy relations. The proposed method is able to deal with circuit breaker failure. The method builds a sagittal diagram for each of the components to represent the operating conditions of the relays and circuit breakers during a fault. The degree of membership is then calculated by fuzzy arithmetic. This degree of membership is used to determine the maximum likelihood of the fault location. The paper is organized as follows. Section II describes the basic principle of SCADA. Protective systems, with emphasis on transmission networks, are described in Section III. Sagittal diagrams and fuzzy relations are described in Section IV. Section V details fault tree analysis. Section VI shows a case study for a 6-bus power system. The final section concludes the paper.
SCADA SYSTEM IN CONTROL CENTER
SCADA stands for supervisory control and data acquisition. It is used to monitor or to control system equipment in remote areas. SCADA has the capability of checking the statuses of system equipment (e.g., relays, circuit breakers) and data (e.g., current, voltage) and reporting them back to the control center. A SCADA system has proven to be efficient and economical for power system operations, making it possible for operators to maintain relatively complete knowledge of the condition of the portions of the system for which they are responsible. In general, SCADA consists of a control center, remote terminal units (RTU) and a communication system.
PROTECTION SYSTEM
The basic function of a relay is to detect a faulted element and remove it, with the help of a circuit breaker, from the remaining healthy system as quickly as possible, so that as much as possible of the rest of the system is left in service, damage is avoided, and the security and reliability of supply are maintained. A protective relay is a device that senses a fault, determines the fault location, and sends a tripping command to the proper circuit breaker. A fault may not be cleared if the circuit breaker fails to open or the relay malfunctions. In such a case, the fault is cleared by backup protection. This situation does not favor the system operator, as it makes the data more difficult to interpret when finding the fault location. Differential and distance relays are widely used for the protection of transmission systems because these relays are more effective than overcurrent relays. Differential relays detect abnormal conditions in the area for which they are responsible; they are often used to protect busbars and transformers. Distance relays, in general, protect a transmission line in three zones: zone 1 covering about 80% of the line impedance with instantaneous operation, zone 2 covering about 100-120% with a time delay, and zone 3 covering 120-150% with a longer time delay.
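To make the zone idea concrete, the small sketch below classifies a measured impedance (expressed as a fraction of the protected line's impedance) into a zone; the reach settings are engineering choices in practice, so the values here are only the assumptions used in this paragraph.

# Minimal sketch (assumed reach settings): which distance-protection zone does a
# measured impedance fall into, as a fraction of the protected line impedance?
ZONE1_REACH = 0.80               # instantaneous
ZONE2_REACH = 1.20               # short time delay
ZONE3_REACH = 1.50               # longer (backup) time delay

def zone_for(impedance_fraction: float) -> str:
    if impedance_fraction <= ZONE1_REACH:
        return "zone 1 (instantaneous trip)"
    if impedance_fraction <= ZONE2_REACH:
        return "zone 2 (time-delayed trip)"
    if impedance_fraction <= ZONE3_REACH:
        return "zone 3 (backup, longer delay)"
    return "outside protected reach"

print(zone_for(0.65))            # fault well inside the line -> zone 1
print(zone_for(1.30))            # fault on the adjacent line  -> zone 3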
SAGITTAL AND FUZZY RELATIONS
Sagittal diagrams represent the fuzzy relations of power systems and are used to diagnose fault sections using operations on fuzzy relations. A sagittal diagram has three layers that describe the relationship between fault sections (first layer), relays (second layer), and circuit breakers (third layer). In each layer, there are nodes representing the appropriate devices. The interconnection between the nodes in each layer is determined from the normal operations of the relays and circuit breakers in the occurrence of a fault, and the causality is denoted by an arrow.
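How a sagittal diagram can be turned into a ranking is sketched below using max-min composition of the fuzzy relations; the section-relay and relay-breaker membership values and the alarm pattern are invented for illustration and are not taken from the paper's 6-bus case study.

# Minimal sketch (assumed membership values): rank fault-section candidates by
# max-min composition over the layers section -> relay -> circuit breaker.
SECTION_TO_RELAY = {                       # layer 1 -> layer 2
    "L1": {"R1": 1.0, "R2": 0.8},
    "L2": {"R2": 1.0, "R3": 0.9},
}
RELAY_TO_BREAKER = {                       # layer 2 -> layer 3
    "R1": {"CB1": 1.0},
    "R2": {"CB1": 0.7, "CB2": 1.0},
    "R3": {"CB2": 0.9},
}

def fault_degree(section, observed):
    """observed maps actuated relays / tripped breakers to alarm confidence."""
    best = 0.0
    for relay, w_sr in SECTION_TO_RELAY[section].items():
        for breaker, w_rb in RELAY_TO_BREAKER[relay].items():
            path = min(w_sr, observed.get(relay, 0.0),
                       w_rb, observed.get(breaker, 0.0))    # min along one path
            best = max(best, path)                          # max over all paths
    return best

alarms = {"R2": 1.0, "CB2": 1.0}           # received alarm pattern (assumed)
print({s: fault_degree(s, alarms) for s in SECTION_TO_RELAY})   # L2 ranks highest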
FAULT TREE ANALYSIS
The event of circuit breaker failure makes it difficult to identify the faulted component, because backup protection becomes involved in the investigation. To overcome that problem, this paper introduces fault tree analysis (FTA), which is a top-down approach that utilizes the statuses of the circuit breakers and protective relays as indicators. The advantage of FTA is that it provides a good, diagrammatic connection of events, as shown in Fig. 3. One symbol stands for a further-analyzed event and another for a terminal event. The connection between events can be represented by OR gates. The figure shows that CB2 is considered failed owing to the following three sub-events: the protection of CB2 operates, CB2 remains closed after receiving the trip command from the relay, and zone 3 of the other transmission lines sees and clears the fault (i.e., L2, L3).
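A tiny sketch of how such an OR-connected breaker-failure event can be evaluated from relay and breaker statuses is given below; the event names and statuses are hypothetical and only mirror the structure of the example in the text.

# Minimal sketch (assumed statuses): evaluate the CB2-failure top event as an
# OR gate over its three sub-events, as in the fault tree described above.
status = {
    "cb2_protection_operated": True,       # the protection of CB2 operated
    "cb2_still_closed": True,              # CB2 stayed closed after the trip command
    "zone3_cleared_fault": False,          # zone 3 of adjacent lines cleared the fault
}

def cb2_failed(s) -> bool:
    # OR gate over the terminal events of the tree.
    return (s["cb2_protection_operated"]
            or s["cb2_still_closed"]
            or s["zone3_cleared_fault"])

print(cb2_failed(status))                  # -> True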
CONCLUSION
The main advantage of the fuzzy relation and expert system approach presented in this paper is the reduction of complexity in creating the rule base by using the sagittal diagram. The method can efficiently solve the problem of loss of alarms from relays. This paper also introduces fault tree analysis to deal with the problem of circuit breaker failure. The case study performed has shown that a fault location can be precisely identified within a short period of time, indicating a promising result to help the system operator in decision making for real-time operation. Future work is the application of the proposed method to more complex, large-scale systems and to a variety of relay types, for example transformer differential relays, recloser relays and overcurrent relays.
FPGA IMPLEMENTATION OF ADAPTIVE MEDIAN
FILTER FOR IMAGE IMPULSE NOISE SUPPRESSION
B.PRASHANTHI V.PREETHI
III B.Tech(ECE) III .B.Tech(ECE)
Email:[email protected] Email:[email protected]
SREE VIDYANIKETHAN ENGINEERING COLLEGE
ABSTRACT
In this paper, a new intelligent hardware module suitable for the computation of an adaptive median filter is presented for the first time. The function of the proposed circuit is to detect the existence of impulse noise in an image neighborhood and apply the operator of the median filter only when it is necessary. The noise detection procedure can be controlled so that a range of pixel values is considered as impulse noise. In this way, blurring of the image being processed is avoided, and the integrity of edge and detail information is preserved. Experimental results with real images demonstrate the improved performance.
The proposed digital hardware structure is capable of processing gray-scale images of 8-bit resolution and is fully pipelined, while parallel processing is used in order to minimize computational time.
In the presented design, a 3x3 or 5x5 pixel image neighborhood can be selected for the computation of the filter output. However, the system can easily be expanded to accommodate windows of larger sizes. The proposed digital structure was designed, compiled and simulated using ModelSim and synthesized for a Xilinx Virtex-II device. For the implementation of the system the EPF10K200SRC240-1 field-programmable gate array device of the FLEX10KE device family is utilized, and it can be used in industrial imaging applications where fast processing is required. The typical clock frequency is 65 MHz.
Index Terms-Field-programmable gate
arrays (FPGAs), impulse noise, median
filter, real-time filtering.
Introduction
Two applications of great importance in the area of image processing are noise filtering and image enhancement. These tasks are an essential part of any image processor, whether the final image is utilized for visual interpretation or for automatic analysis. The aim of noise filtering is to eliminate noise and its effects on the original image, while corrupting the image as little as possible. To this end, nonlinear techniques (like the median and, in general, order statistics filters) have been found to provide more satisfactory results in comparison to linear methods. For this reason, a number of nonlinear filters, which utilize correlation among vectors using various distance measures, have been proposed. However, these approaches are typically applied uniformly across an image, and they also tend to modify pixels that are undisturbed by noise, at the expense of blurred and distorted features.
In this paper, an intelligent hardware
structure of an adaptive median filter (AMF)
suitable for impulse noise suppression for
gray-scale images is presented for the first
time. The function of the proposed circuit is
to detect first the existence of noise in the
image window and apply the corresponding
median filter only when necessary. The
noise detection level procedure can be
controlled so that a range of pixel values
(and not only the fixed values 0 and 255, but also salt-and-pepper noise) is considered as impulse noise. The main advantage of this adaptive approach is that the blurring of the image being processed is avoided and the integrity of edge and detail information is preserved.
Fig. 1: Block diagram of the adaptive filtering method
Moreover, the utilization of the median
filter is done in a more efficient way.
Experimental results demonstrate the
improved performance of the AMF. The
proposed digital hardware structure is
capable of processing gray-scale images
of 8-bit resolution and performs both
positive and negative impulse noise
removal. A moving window of a 3x3 and
5x5 pixel image neighborhood can be
selected. Furthermore, the system is
directly expandable to accommodate larger
size gray-scale images. The architecture
chosen is based on a sequence of four basic
functional pipelined stages and parallel
processing is used within each stage. The
proposed structure was implemented
using field-programmable gate arrays
(FPGAs), which offer an attractive combination of low cost, high performance, and apparent flexibility. The
presented digital circuit was designed,
compiled and simulated using the
MAX + PLUS II Programmable Logic
Development System by Altera
Corporation. For the realization of the
system the EPF10K200SRC240-1 FPGA
device of the FLEX10KE device family, a
device family suitable for designs that
require high densities and high I/O
count, is utilized. The total numbers of system inputs and outputs are 44 and eight pins, respectively (40 inputs for the input data and four inputs for the clock and the control signals required), and the percentage of the logic cells utilized is 99%. The typical clock frequency is 65 MHz and the system can be used for real-time imaging applications where fast processing is of the utmost importance. As
an example, the time required to perform
filtering of a grayscale image of 260x244
pixels is approximately 7.6 ms.
Adaptive median filter design
The most common method used for
impulse noise suppression for gray-scale
and color images is the median filter.
Impulse noise exists in many practical
applications and can be generated by
various sources, including many man-made
phenomena such as unprotected switches,
industrial machines, and car ignition
systems. Images are often corrupted by
impulse noise due to a noisy sensor or
channel transmission errors. This type of
noise is the classical salt-and-pepper noise
for grayscale images.
The output of a median filter at a point x of an image f depends on the values of the image points in the neighborhood of x. This neighborhood is determined by a window W that is located at point x of f and includes n points x1, x2, ..., xn of f. The median can be determined when the number of points included in W is odd, i.e., when n = 2k+1. The n values f(x1), f(x2), ..., f(xn) of the n points x1, x2, ..., xn are placed in ascending order, forming the set of ordered values {f1, f2, ..., fn}, in which f1 ≤ f2 ≤ ... ≤ fn. The median is defined as the (k+1)th value of the set {f1, f2, ..., fn}, med = fk+1.
The basic disadvantage of the
application of the median filter is the
blurring of the image in process. In the
general case, the filter is applied
uniformly across an image, modifying
pixels that are not contaminated by
noise. In this way, the pixel values of the
input image are altered, leading thus to an
overall degradation of the image and to
blurred or distorted features.
The proposed adaptive median filter
can be utilized for impulse noise
suppression for gray-scale images. Its
function is to detect the existence of noise
in the image window and apply the
corresponding median filter only when
necessary. The noise detection scheme for
the case of positive (negative) noise is as
follows.
1) For a neighborhood window W that is
located at point x of the image f, the
maximum (minimum) pixel value of the n-
1 surrounding points of the neighborhood is
computed, denoted as fmax (x) (fmin(x)),
excluding the value of the central pixel at
point x.
Fig. 2: Noise detection algorithm (a) Impulse
noise (b) Signal-dependent noise
2) The value fmax(x) (fmin(x)) is multiplied by a parameter a, which is a real number and can be modified. The result is the threshold value for the detection of a noise pixel, denoted as fthreshold(x), and is limited to a positive (negative) integer threshold value.
3) The value of the central pixel is
compared to fthreshold (x), and the central
pixel is considered to be noise, when its
value is greater (less) than the threshold
value fthreshold (x).
4) When the central pixel is considered to
be noise, it is substituted by the median
value of the neighborhood, fk+1 ,
which is the normal operation of the
median filter. In the opposite case, the
value of the central pixel is not altered and
the procedure is repeated for the next
neighborhood window.
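The four detection steps above can be condensed into the short software sketch below; it illustrates the rule for positive impulse noise only, with an assumed window, parameter value and threshold limit, and is not a description of the hardware pipeline presented later.

# Minimal sketch (positive-noise case, assumed values): adaptive median decision
# for one 3x3 neighbourhood, following steps 1-4 above.
POS_LIMIT = 240                  # adjustable positive threshold limit (assumption)

def adaptive_center(window, a=5.0):
    """window: 9 pixel values, row-major; returns the output value of the centre pixel."""
    center = window[4]
    neighbours = window[:4] + window[5:]                   # step 1: exclude the central pixel
    f_max = max(neighbours)
    f_threshold = min(f_max * a, POS_LIMIT)                # step 2: threshold, limited
    if center > f_threshold:                               # step 3: centre flagged as impulse noise
        return sorted(neighbours)[len(neighbours) // 2]    # step 4: replace by a median value
    return center                                          # otherwise the pixel is left unchanged

# Example from Fig. 2(a): surroundings between 32 and 42, corrupted centre 252, a = 5.
print(adaptive_center([38, 35, 40, 32, 252, 42, 36, 41, 39]))   # noisy centre replaced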
A block diagram of the adaptive
filtering procedure previously described is
depicted in Fig. 1. An example of the
application of the noise detection algorithm
for the cases of impulse and signal
dependent noise is illustrated in Fig. 2(a)
and (b), respectively. In Fig. 2(a), a typical
3x3 pixel neighborhood window of a grayscale
image is depicted. The central pixel of
the window occupies an extreme value
(pixel value = 252) compared to the values
of the surrounding image points (pixel
values ranging from 32 to 42). For this
reason, the central pixel is considered to be
impulse noise and it should be eliminated.
In the specific example, the noise detection
scheme is applied as follows. In the first
step, we find the maximum value of the
surrounding pixels, fmax (x) = 42. If we
consider the parameter a = 5, then
threshold value fthreshold (x) = fmax (x) * 5 =
210. The central pixel value is 252 >
fthreshold (x) and, therefore, is considered to
be a noisy pixel. Finally, it is substituted by
the median value of the neighborhood. The
same discussion applies to the example of Fig. 2(b), which is the case of signal-dependent noise. The same procedure is followed, and the noisy pixel is successfully detected for a parameter value a = 2.
Fig. 3: Impulse noise removal. (a) Original image Café. (b) Image corrupted by 5% positive and negative impulse noise. (c) Median filter result using 3x3 pixel window. (d) AMF result using 3x3 pixel window. (e) Median filter result using 5x5 pixel window. (f) AMF result using 5x5 pixel window.
Two major remarks about the
presented adaptive algorithm should be
made. First, the value of the parameter a is
of great importance, since it controls
the operation of the circuit and the
result of the overall procedure for
different noise cases. Second, an
appropriate positive and negative
threshold value must be utilized for the
case of impulse noise, when fthreshold (x) =
255. For example, in the case of Fig. 2(a),
if we consider the parameter a = 8, then
fthreshold (x) = fmax (x) * 8 = 336 and the
fthreshold (x) is limited to the value 255,
fthreshold (x) = 255. The central pixel value is
252 < fthreshold (x) and the central pixel is
erroneously not considered to be impulse
noise. An adjustable positive threshold
value (for example 240) can be used as a
limit of fthreshold (x). In this way, fthreshold (x)
= 240, whereas the central pixel value is
252 > fthreshold (x), and the central pixel is
successfully detected as impulse noise. The
meaning of this normalization procedure is
that pixels occupying values between a
range of the impulsive values (and not only
pixels with values 0 and 255) should be
considered as noisy pixels.
Hardware architecture
The proposed architecture is based
on a sequence of pipeline stages in order to
reduce computational time. Parallel
processing has been employed to further
accelerate the process. For the
computation of the filter output, a 3x3 or
5x5 pixel image neighborhood can be
selected.
The structure of the adaptive filter
comprises four basic functional units, the
moving window unit, the median
computation unit, the noise detection unit,
and the output selection unit. The input data
of the system are the gray-scale values of
the pixels of the image neighborhood, the
value of the parameter a, and the positive
and negative threshold values.
Additionally, two control signals required
for the selection of the operation of the
system (negative/positive noise
suppression) and the neighborhood size
(3x3 or 5x5) are also utilized.
Moving window unit
The pixel values of the input image,
denoted as IMAGE_INPUT [7..0], are
imported into this unit in serial. The value
of the parameter is denoted as
MOD_VALUE[7..0] and the positive and
negative threshold values as POS/NEG
THRESHOLD respectively. The parameter
a is a real number, 5 and 3 bits are used for
the representation of the integral and the
fractional part, respectively. The
NEG/POS control signal is used to
determine the noise type. When
NEG/POS is equal to 0 ( 1 ) the
circuit operation is negative (positive) noise
suppression.
For the moving window
operation, a 3x3 (5x5) pixel serpentine type
memory is used, consisting of 9 (25)
registers, illustrated in Fig. 6. In this way,
when the window is moved into the next
image neighborhood, only 3 or 5 pixel
values stored in the memory are altered.
The outputs of this unit are rows of pixel
values (3 or 5, respectively), which are the
inputs to the median computation unit.
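The economy of the serpentine memory, where only a few new pixel values enter when the window slides, can be mimicked in software as below; the deque-based buffer is merely an illustrative software analogue of the register chain, not the hardware description itself.

# Minimal sketch (software analogue): keep a 3x3 window as three 3-element
# columns; sliding to the next neighbourhood replaces only one column (3 pixels).
from collections import deque

class MovingWindow3x3:
    def __init__(self, first_three_columns):
        # each column is a tuple of 3 vertically adjacent pixel values
        self.columns = deque(first_three_columns, maxlen=3)

    def slide(self, new_column):
        """Shift the window right: only the 3 new pixel values are stored."""
        self.columns.append(new_column)      # the oldest column is dropped automatically

    def rows(self):
        """Return the 3 pixel rows fed to the median computation unit."""
        return [tuple(col[r] for col in self.columns) for r in range(3)]

w = MovingWindow3x3([(10, 11, 12), (20, 21, 22), (30, 31, 32)])
w.slide((40, 41, 42))
print(w.rows())                              # window now covers image columns 2-4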
Median computation unit
In this stage, the median value of
the image neighborhood is computed in
order to substitute the central pixel value, if
necessary. In this unit, the min/max value
of the neighborhood is also computed, used
in the noise detection process. For the
computation of both the median and the
min/max value a 24-input sorter is utilized,
the central pixel value is not included. In
this way, the complexity of the design is
reduced since no additional min/max
modules are utilized. The modules of the
sorter used only in the case of the 5x5 pixel
neighborhood are enabled by the en5x5
control signal. A CS block is a max/min
module; its first output is the maximum of
the inputs and its second output the
minimum. The implementation of a CS
block includes a comparator and two
multiplexers and is depicted in Fig. 4.
The outputs of the sorter, denoted as
OUT_0[7..0] ... OUT_23[7..0], produce
a sorted list of the 24 initial pixel values.
The output OUT_0[7..0] is the minimum
pixel value for both 3x3 and 5x5 pixel
image window. The sorter outputs
OUT_3[7..0] and OUT_4[7..0] and the
central pixel value are utilized for the
computation of the median value for the
3x3 pixel neighborhood, denoted as
MEDIAN_3x3 [7..0].
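A CS block as described above is simply a compare-and-swap element; the toy sketch below chains such elements into repeated passes purely to illustrate the idea in software (the real design uses a fixed 24-input sorting network, which is not reproduced here).

# Minimal sketch: a CS (compare-swap) element and repeated passes of CS elements
# producing the fully sorted list OUT_0 .. OUT_n-1.
def cs(a, b):
    """Return (max, min) of the two inputs, like the hardware CS block."""
    return (a, b) if a >= b else (b, a)

def sort_with_cs(values):
    v = list(values)
    for _ in range(len(v)):                  # enough passes to sort completely
        for i in range(len(v) - 1):
            v[i + 1], v[i] = cs(v[i], v[i + 1])   # larger value moves to the right
    return v

# Example: eight neighbourhood pixel values sorted into OUT_0 .. OUT_7.
print(sort_with_cs([38, 35, 40, 32, 42, 36, 41, 39]))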
Fig. 4: Implementation of CS block
Fig. 5: Hardware structure of the figure
Fig. 6: Schematic diagram of a 3x3 pixel
serpentine memory
Noise detection unit
The task of the noise detection unit is
to compute the threshold value for the
detection of a noise pixel, fthreshold (x), and
to limit this value to the positive (negative)
threshold. Initially, the min/max value of
the neighborhood is selected, and for that
reason, the values OUT_0[7..0],
OUT_7[7..0] and OUT_23[7..0] (min
and max values, respectively) are
imported into a multiplexer. The selection
is based on the values of the NEG/POS
control signals. In the next step, the output
of the multiplexer is multiplied by the
parameter a (8 bits) using a multiplier
module, the resultant 16-bit value is
denoted. An additional 2-to-1 multiplexer is
utilized to select the positive or negative
threshold to which the
THRESHOLD_VALUE should be
normalized, controlled by the
NEG/POS control signal. A comparator
is used to compare the
THRESHOLD_VALUE to the positive or
negative threshold and a multiplexer to
select the corresponding output threshold
value, denoted as THRESHOLD.
Output selection unit
The final stage of the design is
the output selection unit. In this unit,
the appropriate output value for the
performed operation is selected. For the
selection of the output value the
corresponding threshold value for the
image neighborhood, THRESHOLD, is
used. The value THRESHOLD is
compared to the central pixel value,
denoted in the circuit as
CENTRAL_PIXEL.
Depending on the result of the
comparison, the central pixel is considered
to be contaminated by noise or not. For the
case of positive (negative) noise, if the
central pixel value is greater (less) than the
corresponding threshold value, then the central pixel is positive (negative) noise and has to be eliminated. The FILTER_OUTPUT signal is the output of the adaptive filter. Note the way that the input data is converted to 3-pixel rows every three clock pulses, and that for a sliding of the window to the next image neighborhood only three pixel values of the memory are changed.
Fig. 7: Complete simulation results for the circuit
The modification of the system to
accommodate windows of larger sizes is
done in a straightforward way, requiring
only a small number of changes. More
specifically, in the first unit, the size of the
serpentine memory used for the serial input
of the image data and the corresponding
number of multiplexers increase following
a square law. In the second unit, the sorter
module should be modified, whereas no
changes are required in the last two units.
RTL View
Floor plan placement of the top
module
Chip View of Top Module
Synthesized top level report
Device Utilization for 2V80cs144
***********************************
Resource              Used   Avail   Utilization
-------------------------------------------------
IOs                    19      92      20.65%
Function Generators   608    1024      59.38%
CLB Slices            304     512      59.38%
Dffs or Latches       161    1300      12.38%
-------------------------------------------------
Using wire table: xcv2-80-6_wc

Clock Frequency Report
Clock : Frequency
------------------------------------
clk : 44.6 MHz
Conclusion
This paper presents a new design of
an adaptive median filter, which is capable
of performing impulse noise suppression
for 8-bit grayscale images using a 3x3 or
5x5 pixel neighborhood. The proposed
circuit detects the existence of noise in
the image neighborhood and applies the
corresponding median filter only when it is
necessary. The noise detection procedure is
controllable, and, thus, pixel values other
than the two extreme ones can be
considered as impulse noise, provided that
they are significantly different from the
central pixel value. In this way, the blurring
of the image is avoided.
The system is suitable for real-time
imaging applications where fast processing
is required. Moreover, the design of the
circuit can be easily modified to
accommodate larger size windows. In this
case, only small modifications are required
in the first two units, mostly regarding the
size of the serpentine memory and the
sorter module.
The proposed digital hardware structure was designed, compiled and successfully simulated for FPGAs. The typical system clock frequency is 65 MHz.
A PAPER PRESENTATION
ON
BULLET PROOF VESTS USING CARBON
NANO TUBES
SRI VENKATESWARA UNIVERSITY COLLEGE OF
ENGINEERING
TIRUPATI
Presented by:
K.C.POORNIMA N.ANITHA
ROLL NUMBER: 10704013 ROLL NUMBER: 10704019
II B.TECH, ECE II B.TECH, ECE
[email protected] [email protected]
ABSTRACT:
Carbon nano tubes are allotropes of carbon. Carbon nano tubes are just a few billionths of a meter across, but are ultra strong. Their unusual properties promise to revolutionize electronics, computers, chemistry and materials science. They were discovered in 1991.
One of the main applications of carbon nano tubes is super strong bullet proof vests. A new bullet proof material has been designed which actually rebounds the force of a bullet; bulletproof materials at the moment are designed to spread the force. The nature of the bonding of a nano tube is described by applied quantum chemistry, specifically orbital hybridization. The chemical bonding of nano tubes is composed entirely of sp2 bonds, similar to those of graphite. The lightweight fiber, made up of millions of tiny carbon nano tubes, is starting to reveal exciting properties. The fiber is very strong, lightweight and good at absorbing energy in the form of fragments traveling at very high velocity; its inherent elasticity is the main reason for this. Carbon nano tubes exhibit extraordinary mechanical properties.
The material is as stiff as diamond, and is already up to several times stronger, tougher and stiffer than fibers currently used to make protective armour. For body armour, the strength of the fibers in a fabric is a critical parameter. Bullet proof t-shirts prepared in this way could save many people from bullet hits. In future, these armours can be used for military applications.
INTRODUCTION:
Technology that works at the nanometer scale of molecules and atoms will be a large part of nano technology. This technology will enable great improvements in all fields of human activity. Electronics is an area where nano technology is making great gains.
Carbon nanotubes are one of the main applications of nano technology. Carbon nano tubes are allotropes of carbon. They come in many types; among them, single-walled nano tubes have the most significance. A carbon nano tube is a one-atom thick sheet of graphite rolled up into a seamless cylinder with a diameter on the order of a nanometer. This results in a nanostructure where the length-to-diameter ratio exceeds 1,000,000. They exhibit extraordinary strength and unique electrical properties. Their inherent elasticity gives them their rebounding property. The nano tubes are created rapidly by squirting a carbon source, like ethanol, and an iron nano catalyst through a hydrogen carrier, and into a furnace at 1,200 degrees Celsius.
The size of a nano tube is of the order of a few nano meters, approximately 1/50000th of the width of a human hair, while they can be up to several millimeters in length. Nano tubes are mainly categorized as single-walled nano tubes and multi-walled nano tubes. Among them, single-walled tubes are used to make bullet proof vests, which are mainly used for military applications.
EVOLUTION:
Carbon nano tubes were discovered in 1991. The current huge interest in them is a direct consequence of the synthesis of buckminsterfullerene, C60. The search was given new importance when it was shown in 1990 that C60 could be produced in a simple arc-evaporation apparatus readily available in all laboratories. The tubes contained at least two layers, often many more, and ranged in outer diameter from about 3 nm to 30 nm. They were invariably closed at both ends. In 1993, a new class of carbon nano tube was discovered, with just a single layer. It was soon established that these new fibers had a range of exceptional properties (see below), and this sparked off an explosion of research into carbon nano tubes. Recent research has focused on improving the quality of catalytically produced nano tubes.
STRUCTURE:
The nature of the bonding of a nano tube is described by applied quantum chemistry, specifically orbital hybridization. The chemical bonding of nano tubes is composed entirely of sp2 bonds, similar to those of graphite. This bonding structure, which is stronger than the sp3 bonds found in diamond, provides the molecules with their unique strength. Nano tubes naturally align themselves into "ropes" held together by van der Waals forces. The bonding in carbon nano tubes is sp2, with each atom joined to three neighbours, as in graphite.
The tubes can therefore be considered
as rolled-up graphene sheets. The first
two of these, known as armchair (top
left) and zigzag (middle left) have a
high degree of symmetry. The terms
"armchair" and "zigzag" refer to the
arrangement of hexagons around the
circumference. The third class of tube,
which in practice is the most common,
is known as chiral, meaning that it can
exist in two mirror-related forms. An
example of a chiral nanotube is shown
at the bottom left.
In the arc-evaporation method, the evaporated carbon condenses partly on the walls of the reaction vessel and partly on the cathode, and it is the deposit on the cathode which contains the carbon nano tubes.
Single-walled nano tubes are produced
when Co and Ni or some other metal is
added to the anode. It has been known
since the 1950s, if not earlier, that
carbon nano tubes can also be made by
passing a carbon-containing gas, such
as a hydrocarbon, over a catalyst. The
catalyst consists of nano-sized particles
of metal, usually Fe, Co or Ni. These
particles catalyze the breakdown of the
gaseous molecules into carbon, and a
tube then begins to grow with a metal
particle at the tip. It was shown in 1996
that single-walled nano tubes can also
be produced catalytically. The
perfection of carbon nano tubes
produced in this way has generally
been poorer than that of tubes made by arc-evaporation, but great improvements in
the technique have been made in recent
years. The big advantage of catalytic
synthesis over arc-evaporation is that it
can be scaled up for volume
production. The third important
method for making carbon nano tubes
involves using a powerful laser to
vaporize a metal-graphite target. This
can be used to produce single-walled
tubes with high yield.
Types of carbon nano tubes:
Carbon nano tubes are of different types. They are:
1. SINGLE WALLED
FIG: MODELS OF NANO TUBES
Single-walled nano tubes are generally narrower than the multi-walled tubes, with diameters typically in the range 1-2 nm, and tend to be curved rather than straight. The image on the right shows some typical single-walled tubes.
Single-walled nano tubes are a very
important variety of carbon nano tube
because they exhibit important electric
properties that are not shared by the
multi-walled carbon nano tube
variants. These tubes are used to make
super strong amour bodies. They have
elastic property and hence used to
make bullet proof vests.
2. MULTI WALLED:
Multi-walled nano tubes consist of
multiple layers of graphite rolled in on
themselves to form a tube shape. There
are two models which can be used to
describe the structures of multi-walled
nano tubes. The special place of
double-walled carbon nano tubes must be emphasized here because they combine morphology and properties very similar to those of SWNTs, while significantly improving resistance to chemicals. This is especially important when functionalization is required to add new properties to the CNT.
3. FULLERITE:
Fullerites are the solid-state
manifestation of fullerenes and related
compounds and materials. Being
highly incompressible nano tube forms,
polymerized single-walled nano tubes
are a class of fullerites and are
comparable to diamond in terms of
hardness. However, due to the way that
nanotubes intertwine, they don't have
the corresponding crystal lattice that
makes it possible to cut diamonds
neatly. This same structure results in a
less brittle material, as any impact that
the structure sustains is spread out
throughout the material.
4. NANO BUD:
A stable nano bud structure
Carbon nano buds are a newly
discovered material combining two
previously discovered allotropes of
carbon: carbon nano tubes and
fullerenes. In this new material
fullerene-like "buds" are covalently
bonded to the outer sidewalls of the
underlying carbon nano tube. This
hybrid material has useful properties of
both fullerenes and carbon nano tubes.
In particular, they have been found to
be exceptionally good field emitters.
Among all types, bullet proof vests can
be made from single walled nano
tubes.
SPECIFIC APPLICATION--
SUPER STRONG BULLET
PROOF VESTS:
One of the main applications of carbon nano tubes is super strong bullet proof vests. Bullet proof vests are woven from carbon nano fibers made up of millions of tiny carbon nano tubes, and this lightweight fiber is starting to reveal exciting properties. A new bullet proof material has been designed which actually rebounds the force of a bullet; bulletproof materials at the moment are designed to spread the force. The fiber is very strong, lightweight and good at absorbing energy in the form of fragments travelling at very high velocity, so it can be used as body armour.
Carbon nano tubes have great potential applications in making ballistic-resistance materials. The remarkable properties of carbon nano tubes make them an ideal candidate for reinforcing polymers and other materials, and could lead to applications such as bullet-proof vests as light as a T-shirt, shields, and explosion-proof blankets. For these applications, thinner, lighter, and flexible materials with superior dynamic mechanical properties are required. The energy absorption capacity of a single-walled carbon nano tube under ballistic impact should therefore be explored. The results offer a useful guideline for using carbon nano tubes as a reinforcing phase of materials, to make devices that protect against ballistic penetration or high speed impact. The material is as stiff as diamond.
PRINCIPLE:
The main principle is that carbon nano fibers are good at absorbing energy, so they can absorb the energy coming from the bullet, and their inherent elasticity makes the bullet rebound.
STRENGTH OF FIBRE:
It is 100% stronger than steel.
Lighter than aluminum.
Conducts electricity like copper.
These fibers can also be used as space elevators.
HOW DOES BULLET PROOF
VEST WORK?
Bulletproof jackets do not turn security
guards, police officers and armed
forces into Robocops, repelling the
force of bullets in their stride. New
research in carbon nanotechnology
however could give those in the line of
fire materials which can bounce bullets
without a trace of damage.
A research paper published in the
Institute of Physics' Nanotechnology
details how engineers from the Centre
for Advanced Materials Technology at
the University of Sydney have found a
way to use the elasticity of carbon
nanotubes to not only stop bullets
penetrating material but actually
rebound their force.
When the bullet strikes the jacket, the
fiber shows its elastic property and
rebounds the bullet. The Engineers in
Australia have designed a new bullet
proof material which actually rebounds
the force of a bullet. Bulletproof
materials at the moment are designed
to spread the force. The use of
nanotechnology in design means those
in the line of fire can be shot without a
flinch.
Also known as body armours, there are
different types of bullet proof vests.
The most common is the soft vest
usually used by the police force and
private security; it cannot stop
ammunition of big caliber. Hard-plate
reinforced vests are necessary when
heavy ammunition is involved; they are
used as part of the default equipment in
the Army.
Soft bullet proof vests are formed from
advanced woven fibers that can be
sewn into vests and other soft clothing.
The fibers form a tight interlaced net
which disperses the energy of the
bullet reducing its speed until it stops.
The most effective material used in
body armor is Kevlar fiber. Kevlar is
light as cloth, but five times stronger
than a piece of steel of the same
weight. When interlaced into a dense
net, this material can absorb a great
amount of energy. The fibers are
usually twisted individually and the
material covered by a double coat of
resin and plastic. The second most used
material is Vectran, which is two times
stronger than Kevlar. New trends
include spider web, feathers and
carbon nano tubes.
HOW DOES THE BULLET
PROOF VEST STOP
BULLETS?
The bullets do so much damage
because of the focused blunt trauma:
they focus all the impact in a reduced
area increasing the penetration rate.
Bullet proof vests are designed to
spread the energy laterally over the
whole vest while deforming the bullet
at the same time. The system works as
the net of a soccer goal, lateral tension
of the net spreads the energy of the
impact and stops the ball without
reflecting it (in most cases), the
collision is completely inelastic. When
hard protective pieces are added, the
bullets might be deflected instead of
stopped, but in the case of soft body
armor it is extremely difficult to deflect
a bullet, it will be trapped by the
material and stopped.
Most anti-ballistic materials, like
bullet-proof jackets and explosion-proof blankets, are currently made of
multiple layers of Kevlar, Twaron or
Dyneema fibers which stop bullets
from penetrating by spreading the
bullet's force.
The elasticity of carbon nano tubes
means that blunt force trauma may be
avoided and that's why the engineers in
Sydney have undertaken experiments
to find the optimum point of elasticity
for the most effective bullet-bouncing
gear.
Scientists have used the elasticity of
carbon nano tubes to not only stop
bullets penetrating material but to
actually rebound their force.
Other potential applications of the fibers include: clothing woven from the fibers that could store electrical energy, much like a battery, and be used to power various electrical devices; synthetic muscles capable of generating 100 times the force of a natural muscle of the same diameter; distributed fiber sensors able to monitor the movement and health of first responders to emergencies; and a power source for spacecraft on long voyages, through conversion of thermal energy to electrical energy using nanotube fibers.
SCOPE FOR FUTURE
WORK:
"Although we might be making several
kilometers a day, it's a very thin fiber,
weighing much less than a gram, "We
have to scale this up to make
kilograms. Before anyone will commit
the huge investment needed to build a
full-scale plant, they need to know that
it will make a good bullet-proof vest."
The research is going on to prepare
protective body suits.
Body amour for women personal.
Reinforced soft and hard body armour.
Helmet and body protective device.
Breathable garment to be woven to
improve the comfort of the human
body.
CONCLUSION:
We conclude that these bullet proof vests are very effective and essential for the coming generations. As they are made from carbon nanofibers, which are readily available and low in cost, coming generations will benefit more. They are safer and more protective than the bullet proof vests in use today, and they point towards a new generation of bullet-proof vests and anti-ballistic materials that are much more effective than those currently available.
REFERENCES:
1. Nanotechnology and Homeland Security - Daniel Ratner, Mark A. Ratner.
2. Nanotechnology - Richard Booker, Earl Boysen, 2005, Technology and Engineering.
3. www.nanowerk.com
4. www.nanovip.com
NANOTECHNOLOGY
THE NEXT REVOLUTION TO REDEFINE ELECTRONICS
A Technical Paper submitted by
M.PRANAVA SINDHURI
3/4EIE
BHOJ REDDY ENGINEERING COLLEGE FOR WOMEN
HYDERABAD
[email protected]
PH.NO:9293186539
T.SANTHI PAVANI
3/4ECE
BAPATLA ENGINEERING COLLEGE
BAPATLA
[email protected]
PH.NO:9948507979
ABSTRACT
The discoveries that emerged from the tender minds of young scientists in the 20th century have led to many innovative ideas. One such idea is applying the concept of nanotechnology in the field of nanorobots. There are two concepts of nanotechnology: positional assembly and self-replication. Nanorobots are used in various fields. Nanotechnology is smaller than microtechnology; it is building, with intent and design, molecule by molecule. Nanotechnology can be created at the nanoscale to perform new and improved functions, and it is an enabler of accomplishment in a truly diverse mix of science and engineering fields. Scientists are looking for building blocks to form electronics and machines that are not much bigger than molecules. Researchers have found a way to make carbon nanowires used in nanoelectronics, as microscopic machine parts and in materials constructed molecule by molecule. This paper mainly deals with nanomedicine and some of its applications, namely nanorobots and remote sensing. Nanomedicine is the application of nanotechnology, i.e. the engineering of tiny machines, to the prevention and treatment of disease in the human body. Nanomedicine has the potential to change medical science dramatically in the twenty-first century.
TABLE OF CONTENTS
1. INTRODUCTION
2. SPECIALISED FACILITIES
3. NANOTECHNOLOGY AT NASA
4. NANOMEDICINE
5. NANOROBOTS
6. CONCLUSION
7. REFERENCES
INTRODUCTION
Nano is a Greek prefix that denotes the smallest natural structures (1000 times smaller than the micrometer). Nanotechnology is building, with intent and design, molecule by molecule, these two things:
.. Incredibly advanced, extremely capable nanoscale machines and computers.
.. Ordinary-size objects, built using other incredibly small machines called assemblers.
Nanotechnology can be created at the nanoscale to perform new and improved functions. It is going to be responsible for massive changes in the way we live, the way we interact with one another and our environment.
SPECIALIZED NANOTECHNOLOGY
FACILITIES & CAPABILITIES
Nanotechnology is both a means to an end and an enabler of accomplishments in a truly diverse mix of science and engineering fields. It is a revolution in industry that will deliver wave after wave of innovative products and services.
a. Molecular measuring machine (M^3)
This NIST-conceived two-dimensional coordinate measuring machine can measure, with nanometer-level accuracy, locations, distances and feature sizes over a 50 mm by 50 mm area, an enormous expanse in the nanotechnology world. It uses a high-precision interferometer.
b. Pulsed Inductive Microwave Magnetometer (PIMM)
Using the PIMM, nanostructured materials that record data in extremely small bits (at sizes below 160 square nm per bit) can now be assessed quickly for the composition and growth conditions that promote high-speed response, permitting the development of future magnetic memories that read and write data at sustained speeds in excess of 1 billion bits per second.
c. Carbon wires expand the nano toolkit
Scientists looking for building blocks to form electronics and machines that are not much bigger than molecules have gained a new tool: researchers in Japan have found a way to make carbon nanowires that measure only a few carbon atoms across. Carbon nanowires could eventually be used in ultra-strong fibers, as friction-free bearings and in space shuttle nose cones. Carbon nanowires are very strong and have useful electrical properties; because they are solid, they should be even stronger than nanotubes. They could be used in nanoelectronics, as microscopic machine parts, and in materials constructed molecule by molecule.
d. Nanotubes boost storage
Multiwalled carbon nanotubes can be used to make denser, more efficient data storage devices. It has proved possible to use multiwalled carbon nanotube tips, rather than silicon, to write data onto a polymer film. Binary data is written by heating the polymer to make indentations that represent 1s; blank spaces represent 0s. Nanotube tips can be used to write more than 250 gigabytes.
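As a rough back-of-the-envelope check on how dense such thermomechanical storage could be, the sketch below converts the 160 square-nanometre-per-bit figure quoted above into bits per square inch; only the unit conversion is assumed here.

#include <stdio.h>

int main(void)
{
    double bit_area_nm2 = 160.0;     /* area per written bit, from the text above */
    double nm_per_inch  = 2.54e7;    /* 1 inch = 2.54 cm = 2.54e7 nm */
    double bits_per_in2 = (nm_per_inch * nm_per_inch) / bit_area_nm2;

    printf("approx. %.1e bits per square inch (~%.0f gigabytes per square inch)\n",
           bits_per_in2, bits_per_in2 / 8.0 / 1e9);
    return 0;
}

This works out to roughly 4 x 10^12 bits (about 500 gigabytes) per square inch, which illustrates why nanotube-tip writing is attractive for dense storage.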
NANOTECHNOLOGY AT NASA
.. Advanced miniaturization is
a key thrust area to enable new
science and exploration missions
.. Ultra small sensors, power
sources, communication,
navigation, and propulsion
systems with very low mass,
volume and power consumption
are needed
.. Revolutions in electronics
and computing will allow
reconfigurable, autonomous,
"thinking" spacecraft
.. Nanotechnology presents a
whole new spectrum of
opportunities to build device
components and systems for
entirely new space architectures
.. Networks of ultra small
probes on planetary surfaces
.. Micro-rovers that drive, hop,
fly, and burrow
.. Collection of micro
spacecraft making a variety of
measurements
The Nanorover Technology Task is a
technology development
effort to create very small
(10-100s of grams) but
scientifically capable
robotic vehicles for
planetary exploration,
which can easily fit
within the mass and/or
volume constraints of
future missions to
asteroids, comets, and
Mars. The task objective
is twofold:
to create a useful rover system
using current-generation
technology including mobility,
computation, power, and
communications within a mass of
a few hundred grams, and
to advance selected
technologies which offer
breakthroughs in size reduction,
mobility, or science return to
enable complete rovers to be
built with a mass well under 100
grams.
Key Technology Elements
Miniaturization of all rover
systems including science
payload
Computer/electronics design for
operation without thermal
enclosure and control to survive
ambient temperature ranges of -125°C to +125°C
Miniature actuator usage and
control in thermal/vacuum
environments
Mobility and navigation in low-gravity
(1/100,000 of Earth)
environments
Sensing and autonomous
control of rover operations
NANOMEDICINE:
Nanomedicine is the application
of nanotechnology (the engineering of
tiny machines) to the prevention and
treatment of disease in the human body.
It has the potential to change medical
science dramatically in the 21st century.
According to Jaroff, nanotech is capable of delivering medication to the exact location where it is needed. In addition to far fewer deaths (and disorders) from side effects, the drug would also be more potent. The drug could also reach nearly inaccessible places that current techniques do not allow.
The most elementary nanomedical
devices will be used to diagnose
illness. Chemical tests exist for this
purpose; nanomachines could be
employed to monitor the internal
chemistry of the body. Mobile
nanorobots, equipped with wireless
transmitters, might circulate in the
blood and lymph systems, and send
out warnings when chemical
imbalances occur. Similar fixed
nanomachines could be planted in
the nervous system to monitor pulse,
brain-wave activity, and other
functions.
METHODS OF MEDICATION:
A more advanced use of
nanotechnology might involve implanted
devices to dispense drugs or hormones
as needed in people with chronic
imbalance or deficiency states. Heart
defibrillators and pacemakers have been
around for some time; nanomedicine
carries this to the next level down in
terms of physical dimension, with the
potential to affect the behavior of
individual cells. Ultimately, artificial
antibodies, artificial white and red blood
cells, and antiviral nanorobots might be
devised.
The most advanced
nanomedicine involves the use of
nanorobots as miniature Surgeons. Such
machines might repair damaged cells, or
get inside cells and replace or assist
damaged intracellular structures. At the
extreme, nanomachines might replicate
themselves, or correct genetic
deficiencies by altering or replacing
DNA (deoxyribonucleic acid) molecules.
CANCER DETECTION AND TREATMENT:
Nanotechnology can be used in the detection of cancer at an early stage. Nanotechnology tools are extremely sensitive, can be used without physically altering the cells or tissue, and the tests can be run on a single small device. The cantilever is one such tool: when cancer molecules bind to the cantilevers, the cantilevers bend, and from this deflection the cancer cells are detected. Nanotubes help to identify DNA changes associated with cancer. Quantum dots are tiny crystals that glow when UV light stimulates them; latex beads filled with these crystals can be designed to bind to specific DNA sequences, and the cancer cells are thus detected.
NANOROBOTS: MEDICINE OF THE
FUTURE
"Living organisms are naturallyexisting,
fabulously complex systems of
molecular nanotechnology." - Dr.
Gregory Fahy
The above statement raises the
interesting possibility that machines
constructed at the molecular level
(nanomachines) may be used to cure the
human body of its various ills. This
application of nanotechnology to the
field of medicine is commonly called as
nanomedicine.
Nanorobots are nanodevices that will be used for the purpose of maintaining and protecting the human body against pathogens. They will have a diameter of about 0.5 to 3 microns and will be constructed out of parts with dimensions in the range of 1 to 100 nanometers. The main element used will be carbon, in the form of diamond/fullerene nanocomposites, because of its strength and chemical inertness. The nanorobots can be powered by metabolizing local glucose and oxygen for energy, and communication with the device can be achieved by broadcast-type acoustic signaling.
A navigational network may be installed in the body, with station-keeping navigational elements providing high positional accuracy to all passing nanorobots that interrogate them, wanting to know their location. This will enable the physician to keep track of the various devices in the body. When the task of the nanorobots is completed, they can be retrieved by allowing them to exfuse themselves via the usual human excretory channels, or they can be removed by active scavenger systems. This feature is design-dependent.
NANOROBOT WORKING IN
BLOOD VESSELS
CONCLUSION:
Nanotechnology has become a reality and some companies are already implementing it. Nanotechnology is an upcoming technology that is expected to make most products lighter, stronger, cleaner, less expensive and more precise.
Nanotechnology is an enabler of accomplishment in a truly diverse mix of science and engineering fields. Nanotechnology is going to be responsible for massive changes in the way we live, the way we interact with one another and our environment. NEMS are used for a wide range of sensing applications.
Nanomedicine is the application of nanotechnology, and it has the potential to change medical science in the twenty-first century. This path-breaking initiative needs a significant revolution in existing medical technology to turn this thought in the mind into a thing in the hand. Government funding in the field of nanotechnology is around 520 million dollars a year (according to the editors of Scientific American). Institutions like Foresight (foresight.org) and companies like Zyvex (zyvex.com) are further advancing nanotechnology. Although the future of medicine is unclear, it is certain that nanotechnology will have a significant impact. The Philosopher's Stone cannot be seen by the naked eye.
REFERENCES:
1. Persistent holographic recording in doubly-doped lithium niobate crystals - Ali Adibi, Karsten Buse, Demetri Psaltis (Caltech).
2. Holography for information storage and processing - Geoffrey W. Burr.
3. Photoelectric effects - W. L. Warren, D. Dimos.
4. www.nanotechnology.org
5. IEEE Transactions on Nanotechnology, vol. 1, no. 1, March 2002 - Mark T. Bohr, "Nanotechnology Goals and Challenges for Electronic Applications".
6. IEEE Transactions on Nanotechnology, vol. 1, no. 4, December 2002 - Dae Hwan Kim, Suk-Kang Sung, Kyung Rok Kim, Jong Duk Lee, and Byung-Gook Park, "Single-Electron Transistors Based on Gate-Induced Si Island for Single-Electron Logic Application".
Paper presentation on
Virtually entered every sphere of our lives
By
B. Manjunath
S. Sai Babu
IIIrd ECE
E-mails:-
[email protected]
[email protected]
Contact:
9290714563
9440410899
Gates Institute of Technology
ABSTRACT
Embedded systems have virtually entered every sphere of our lives. Embedded systems encompass a variety of hardware and software components which perform specific functions in host systems, for example, satellites, washing machines, hand-held telephones and automobiles. Embedded systems have become increasingly digital with a non-digital periphery (analog power) and therefore both hardware and software co-design are relevant. The vast majority of computers manufactured are used in such systems. They are called `embedded' to distinguish them from standard mainframes, workstations, and PCs. Although the design of embedded systems has been used in industrial practice for decades, the systematic design of such systems has only recently gained increased attention. Advances in microelectronics have made possible applications that would have been impossible without an embedded system design.
Embedded system applications will be of great interest to researchers and designers working in the design of embedded systems for industrial applications. Embedded systems have virtually entered every sphere of our lives, right from the time we work out on treadmills in the gym, to the cars we drive today. Embedded systems cover such a broad range of products that generalization is difficult.
CONTENTS
.. INTRODUCTION
.. Definition
.. EMBEDDED SYSTEMS AND REAL TIME SYSTEMS
.. COMPONENTS OF AN EMBEDDED SYSTEM
.. Processor
.. Memory
.. Peripherals
.. Hardware Timers
.. Software
.. CLASSIFICATION
.. EMBEDDED SOFTWARE DEVELOPMENT
.. SIMULATOR
.. EMULATOR
.. APPLICATIONS
.. CHALLENGES FOR SYSTEM DEVELOPERS
.. FUTURE DEVELOPMENTS
.. CONCLUSION
.. REFERENCES
INTRODUCTION:
Breathtaking developments in microelectronics, processor speeds, and memory elements, accompanied by dropping prices, have resulted in powerful embedded systems with a number of applications.
An embedded system is a microprocessor based system that is incorporated into a device to monitor and control the functions of the components of the device. They are used in many devices ranging from a microwave oven to a nuclear reactor. Unlike personal computers that run a variety of applications, embedded systems are designed for performing specific tasks. An embedded system used in a device (for instance the embedded system in a washing machine that is used to cycle through the various states of the washing machine) is programmed by the designers of the system and generally cannot be programmed by the end user.
Definition:
An embedded system is any type of computer system or computing device that performs a dedicated function and/or is designed for use with a specific embedded software application.
Embedded systems possess the following distinguishing qualities.
Reliability:
Embedded systems should be very reliable as they perform critical functions. For instance, consider the embedded system used for flight control. Failure of the embedded system could have disastrous consequences. Hence embedded system programmers should take into consideration all possibilities and write programs that do not fail.
Responsiveness:
Embedded systems should respond to events as soon as possible. For
example, a patient monitoring system should process the patient's heart signals
quickly and immediately notify if any abnormality in the signals is detected.
Specialized hardware:
Since embedded systems are used for performing specific functions,
specialized hardware is used. For example, embedded systems that monitor and
analyze audio signals use signal processors.
Low cost:
As embedded systems are extensively used in consumer electronic
systems, they are cost sensitive. Thus their cost must be low.
Robustness:
Embedded systems should be robust since they operate in harsh environments. They should endure vibrations, power supply fluctuations and excessive heat.
EMBEDDED SYSTEM AND REAL TIME SYSTEM:
Embedded systems are often confused with real-time systems. A real-time system is one in which the correctness of the computations depends not only on the accuracy of the result, but also on the time when the result is produced. The figure below shows the relationship between embedded and real-time systems.
Fig: Embedded and real-time systems
COMPONENTS OF AN EMBEDDED SYSTEM:
Embedded systems have the following components.
PROCESSOR:
A processor fetches instructions from the memory unit and executes the instructions. An instruction consists of an instruction code and the operands on which the instruction should act. The format of the instruction code and operands of a processor is defined by the processor's instruction set. Each type of processor has its own instruction set. Performance of the system can be improved by dedicated processors, which implement algorithms in hardware using building blocks such as hardware counters and multipliers.
Some embedded processors have special fuzzy logic instructions. This is because inputs to an embedded system are sometimes better represented as fuzzy variables. For instance, the mathematical model for a control system may not exist or may involve expensive computing power. Fuzzy logic can be employed for such control systems to provide a cost-effective solution.
MEMORY:
The memory unit in an embedded system should have low access time and high density. (A memory chip has greater density if it can store more bits in the same amount of space.) Memory in an embedded system consists of ROM and RAM. The contents of ROM are non-volatile while RAM is volatile. ROM stores the program code while RAM is used to store transient input or output data. Embedded systems generally do not possess secondary storage devices such as magnetic disks. As programs of embedded systems are small there is no need for virtual storage.
PERIPHERALS:
Peripherals are input and output devices connected to the serial and parallel ports of the embedded system. A serial port transfers one bit at a time between the peripheral and the microprocessor. Parallel ports transfer an entire word consisting of many bits simultaneously between the peripheral and the microprocessor. Programmable interface devices, which act as an interface between the microprocessor and peripherals, provide flexibility since they can be programmed to perform I/O on different peripherals. The microprocessor monitors the inputs from peripherals and performs actions when certain events occur. For instance, when sensors indicate that the level of water in the washtub of a washing machine is above the preset level, the microprocessor starts the wash cycle.
HARDWARE TIMERS:
The clock pulses of the microprocessor periodically update hardware
timers. The timers count the clock pulses and interrupt the processor at regular
intervals of time to perform periodic tasks.
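As a minimal sketch of the idea, the fragment below shows a hypothetical timer interrupt service routine that counts ticks and triggers a periodic task; the tick rate, the routine name and the task itself are assumptions made for illustration, not part of any particular processor's API.

#include <stdint.h>

#define TICKS_PER_SECOND 1000u        /* assumed 1 ms hardware timer tick */

static volatile uint32_t tick_count;  /* updated only by the timer interrupt */

/* Hypothetical ISR: the hardware timer counts clock pulses and invokes this
   routine at each regular interval. */
void timer_isr(void)
{
    tick_count++;
    if ((tick_count % TICKS_PER_SECOND) == 0) {
        /* run a once-per-second periodic task here, e.g. sample a sensor */
    }
}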
SOFTWARE:
Due to the absence of secondary storage devices in an embedded system, program code and constant data reside in the ROM. During execution of the program, storage space for variables is allocated in the RAM. The program should execute continuously and should be capable of handling all possible exceptional conditions. Hence the programs generally do not call the function exit.
Real-time embedded systems possess an RTOS (Real Time Operating System). The RTOS consists of a scheduler that manages the execution of multiple tasks in the embedded system. Unlike operating systems for desktop computers, where scheduling deadlines are not critical, an RTOS should schedule tasks and interrupt service routines such that they are completed within their deadlines.
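One common way a designer checks that a set of periodic tasks can meet its deadlines under such a scheduler is the rate-monotonic utilisation test; the sketch below applies the Liu and Layland bound to three made-up tasks (the execution times and periods are assumptions, not figures from this paper).

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed worst-case execution times and periods, in milliseconds. */
    double exec[]   = { 1.0,  2.0,  3.0 };
    double period[] = {10.0, 20.0, 40.0 };
    int n = 3;

    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += exec[i] / period[i];                 /* total CPU utilisation */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0); /* Liu & Layland bound */
    printf("utilisation %.3f vs bound %.3f -> %s\n", u, bound,
           u <= bound ? "schedulable under rate-monotonic priorities"
                      : "needs a more exact analysis");
    return 0;
}

If the total utilisation stays below the bound, every task completes within its period under fixed rate-monotonic priorities; otherwise a more detailed response-time analysis is required.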
CLASSIFICATION:
Embedded systems are divided into autonomous, real-time, networked, and mobile categories.
Autonomous systems function in standalone mode. Many embedded systems used for process control in manufacturing units and automobiles fall under this category. In process control systems the inputs originate from transducers that convert a physical quantity, such as temperature, into an electric signal. The system output controls the device. In standalone systems, the deadlines or response times are not critical. An air-conditioner can be set to turn on when the temperature reaches a certain level. Measuring instruments and CD players are examples of autonomous systems.
Real-time systems are required to carry out specific tasks in a specified amount of time. These systems are extensively used to carry out time-critical tasks in process control. For instance, a boiler plant must open the valves if the pressure exceeds a particular threshold. If the job is not carried out in the stipulated time, a catastrophe may result.
Networked embedded systems monitor plant parameters, such as temperature, pressure, and humidity, and send the data over the network to a centralized system for online monitoring. A network-enabled web camera monitoring the plant floor transmits its video output to a remote controlling organization.
Figure 1: Functional diagram of a typical embedded system (CPU, ROM, RAM and I/O connected by the address, data and control buses)
Mobile gadgets need to store databases locally in their memory. These gadgets imbibe powerful computing and communication capabilities to perform real-time as well as non-real-time tasks and handle multimedia applications. The gadgets embed a powerful processor and OS, and a lot of memory, with minimal power consumption.
EMBEDDED SOFTWARE DEVELOPMENT:
Programmers who write programs for desktop computers do their work on the same kind of computer on which their application will run. A programmer developing a program to run on a Linux machine edits the program, compiles it and debugs it on a Linux machine. This approach cannot be used for an embedded system. For example, the absence of a keyboard in the embedded system rules out editing a program in the embedded system. So, most of the programming work for an embedded system, which includes writing, compiling, assembling and linking the program, is done on a general purpose computer called a host that has all the required programming tools. The final executable consisting of machine code is then transferred to the embedded system.
Programs are written on the host in a high level language (such as C) or the assembly language of the target system's processor. The program files written in the high level language are compiled on the host using a cross-compiler to obtain the corresponding object files. The assembly language files are assembled on the host using a cross-assembler to obtain the object files. The object files produced by cross-compilers and cross-assemblers contain instructions that are understood by the target's processor (native compilers and assemblers, on the other hand, produce object files containing instructions that are understood by the host's processor).
The object files are linked using a specialized linker called a locator to obtain the executable code. This executable code is stored in the ROM. Since the program code already resides in memory, there is no need for a loader in an embedded system. In personal computers, on the other hand, the loader has to transfer the program code from the magnetic disk to memory to execute the program.
The binary code obtained by translating an assembly language program using an assembler is smaller and runs faster than the binary code obtained by translating a high level language using a compiler, since assembly language gives the programmer complete control over the functioning of the processor. The advantage of using a high level language is that a program written in a high level language is easier to understand and maintain than a program written in assembly language. Hence time-critical applications are written in assembly language while complex applications are written in a high level language.
Fig: Software development on the host
SIMULATOR:
A simulator is a software tool that runs on the host and simulates the behavior of the target's processor and memory. The simulator knows the target processor's architecture and instruction set. The program to be tested is read by the simulator and, as instructions are executed, the simulator keeps track of the values of the target processor's registers and the target's memory. Simulators provide single-step and breakpoint facilities to debug the program. Simulators cannot be used if the embedded system uses special hardware that cannot be simulated; the only way to test the program is then to execute it on the target. Although simulators do not run at the same speed as the target microprocessor, they provide details from which the time taken to execute the code on the target microprocessor can be determined. For instance, the simulator can report the number of target microprocessor bus cycles taken to execute the code. Multiplying this value by the time taken for one bus cycle gives the actual time taken by the target microprocessor to execute the code.
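For example, the arithmetic in the last sentence can be carried out as below; the bus-cycle count and the 8 MHz bus clock are assumed figures chosen only to illustrate the calculation.

#include <stdio.h>

int main(void)
{
    unsigned long bus_cycles = 120000UL;  /* cycle count reported by the simulator (assumed) */
    double bus_clock_hz = 8.0e6;          /* assumed 8 MHz target bus clock */

    double seconds = bus_cycles / bus_clock_hz;  /* one bus cycle lasts 1 / f seconds */
    printf("estimated execution time on target: %.2f ms\n", seconds * 1e3);
    return 0;
}

With these example numbers the code would take about 15 ms on the target, even though the simulation itself may run much more slowly on the host.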
EMULATOR:
An emulator is a hardware tool that helps in testing and debugging the program on the target. The target's processor is removed from the circuit and the emulator is connected in its place. The emulator drives the signals in the circuit in the same way as the target's processor, and hence the emulator appears to be the processor to all other components of the embedded system. Emulators also provide features such as single-step and breakpoints to debug the program.
APPLICATIONS:
Embedded systems have virtually entered every sphere of our lives, right from the time we work out on treadmills in the gym, to the cars we drive today. Embedded systems cover such a broad range of products that generalization is difficult. Some broad categories are:
Aerospace and defence electronics - flight safety, flight management, fire control.
Automotive - auto safety, braking and steering systems, car information systems.
Broadcast and entertainment - audio control systems, camera systems, DVD players.
Consumer/Internet applications - handheld computers, internet handheld devices, point-of-sale systems like ATMs.
Medical electronics - cardiovascular devices, real-time imaging systems (patient monitoring systems).
Mobile data infrastructures - wireless LANs, pagers, wireless phones, satellite terminals (VSATs).
CHALLENGES FOR SYSTEM DEVELOPERS:
In an embedded system, assigning functions to hardware and software is a vital consideration. Hardware implementation has the advantage that the task execution is faster than in software implementation. On the flip side, the hardware chip occupies space, costs money, and consumes power. Porting a readily available OS to the processor, or writing embedded software without any OS embedded into the system, are the main challenges. Developing, testing, and debugging input/output interfaces in embedded systems are even more challenging.
Embedded systems need to offer high performance at low power. They should meet the basic functional requirements of the device: a handheld PC must execute an OS and a basic set of applications, a gaming gadget must facilitate games, a phone must provide basic telephony, and so on.
FUTURE DEVELOPMENTS:
A refrigerator that tells you the expiry date of the yogurt, a microwave oven that helps you to view different recipes that can be cooked and even gives ideas on serving, a future home that is completely wired with the ability to control every appliance from almost anywhere: all this may seem incredible today, but it won't be too long before such appliances are produced for mass usage. In future it will be possible to have an embedded internet that connects different embedded systems on a single network.
CONCLUSION:
Embedded systems have requirements that differ significantly from general-purpose computers. The main goal of an embedded system developer is to design the lowest-cost system that performs the desired tasks without failing. While the hardware approach improves performance, the software approach provides flexibility. Recent developments in hardware-software co-design permit tradeoffs between hardware and software for cost-effective embedded systems.
REFERENCES:
Embedded Systems Design: An Introduction to Processors, Tools and Techniques - Arnold S. Berger.
Embedded System Applications - Claude Baron, Jean-Claude Geffroy, Gilles Motet.
A
PAPER PRESENTATION
ON
NANOMOBILE
A NEW PERSPECTIVE FOR DESIGNING MOBILEPHONE CIRCUITS
ON A NANO-SCALE
PRESENTED BY:-
Y.HARSHA VARDHAN REDDY P.PAVAN,
III B.Tech, EIE, III BTech, EIE,
Email:[email protected] Email:[email protected]
voice@9885445731 voice@98666653380
ABSTRACT:
This technology has the potential to
replace existing manufacturing
methods for integrated circuits, which
may reach their practical limits within
the next decade when Moore's Law
eventually hits a brick wall
- Physicist Bernard Yurke of
Bell Labs
Nanotechnology is an extremely
powerful emerging technology.
Research works are being carried out
on the possibilities of applying this
technology in designing electronic
circuits. This paper throws light on
NANOMOBILE a mobile phone
with its internal circuitry designed on a
nanoscale. The nanomobile is a huge
leap towards the advent of
nanotechnology in the field of
electronics and communication.
Nanomobile is a perfect blend of the
conventional radio communication
principles and the recent advancements
in nanotechnology. We have dealt with nanolithography, a top-down approach, and carbon nanotubes, the bottom-up approach, for the design and fabrication of the internal circuitry.
The nanomobile can be visualized to be of a size slightly smaller than an iPod, with enhanced features like a touch screen and Bluetooth connectivity. Owing to its small size, the nanomobile would find application both in the commercial market and in the field of defence.
This paper thus projects an innovative idea to replace the existing micro-scale fabrication methods, paving the way for a vision of compact, robust and technologically enhanced mobile phones with minimal resources and miniature size.
INTRODUCTION:
Millions of people around the world use
mobile phones. They are such great
gadgets. These days, mobile phones
provide an incredible array of functions,
and new ones are being added at a
breakneck pace. Mobile phones have
become an integral part of our day to day
life. In today's fast world, a modern man requires mobile phones that are highly compact yet satisfy all his requirements.
In spite of the manufacturers' efforts to bring out handier mobiles, present-day mobiles are still bulky. This can be attributed to the increase in enhancement features. Thus, with the current design methods, it is practically impossible to have mobiles that are compact yet have all the required enhancements.
In order to overcome this constraint we
propose a new gadget called the
NANOMOBILE. The unique feature of
the nanomobile is that it employs the
principles of nanotechnology in its
design. Nanotechnology basically aims
at size reduction and hence its
application to mobiles would aid in
producing handier ones.
INSIDE A MOBILE PHONE:
Now let us take a look at the internal
structure of a mobile phone.
As shown in the figure a mobile phone
consists of the following blocks housed
on a printed circuit board:
A digital signal processor
A microprocessor and control
logic
Radio frequency
transmitter/receiver amplifiers
Analog to digital converter
Digital to analog converter
An internal memory
Radio frequency and power
section.
.. The digital signal processor:
The DSP is a "Digital Signal Processor"
- a highly customized processor
designed to perform signal manipulation
calculations at high speed. The DSP is
rated at about 40 MIPS (Millions of
Instructions per Second) and handles all
the signal compression and
decompression.
.. The microprocessor and
control logic:
The microprocessor and control logic handle all of the housekeeping chores for the keyboard and display, deal with command and control signaling with the base station, and also coordinate the rest of the functions on the board.
Fig: Internal structure
.. Radio frequency transmitter/receiver
amplifiers:
The RF amplifiers handle signals in and out of the antenna. Mobile communication involves the travel of signals over long distances and hence there is a possibility of the signal being attenuated midway. Hence the RF amplifiers play the important role of boosting the power levels of the signals, so that they can be deciphered at both ends. The figure illustrates the circuit of a class-C power amplifier.
a) Class C amplifier
.. ADC/DAC Converters:
The signal has to be converted from analog to digital at the transmitting end. This task is accomplished by the analog to digital converter. At the receiving end, the digital signal must be converted back to its analog equivalent. This is done by the digital to analog converter.
b) DAC
c) ADC
.. Memory:
The memory refers to the internal ROM and
RAM that is used to store and handle the
data required by both the user and the
system.
.. RF and power section:
The RF and power section handles power
management and recharging and also deals
with the hundreds of FM channels.
CONVENTIONAL METHODS &
THEIR DEMERITS:
Today's mobile phones use MICs in their
internal circuits. The monolithic integrated
circuits are used to achieve circuits on a
smaller scale, the recent advancement being
the microwave monolithic integrated
circuits. These circuits are a combination of
active and passive elements which are
fabricated on a single substrate. The various
fabrication techniques include:
Diffusion and ion implantation
Oxidation and film deposition
Epitaxial growth
Optical lithography
Etching and photo resist
Deposition
As mentioned above, these techniques contribute to the reduction of the circuit size, yet their disadvantage is that this method (MIC) is not effective in shrinking the circuit size to the desired level. The final circuit will be a combination of a large number of substrates, ultimately making the internal circuit bulkier.
NANOTECHNOLOGY, A REMEDY:
Nanotechnology provides an effective
replacement for the conventional monolithic
integrated circuit design techniques. This
technology aims at developing nano sized
materials that will be both compact and
robust. One of the branches of
nanotechnology called nanoelectronics deals
with the study of shrinking electronic
circuits to the nano scale. Nanoelectronics has two approaches for the fabrication and design of nano-sized electronic circuits, namely the top-down and the bottom-up approach.
TOP-DOWN APPROACH:
Top-down approach refers to the process of
arriving at a smaller end product from a
large quantity of raw material. In this
approach a large chunk of raw material is
sliced into thin wafers above which the
circuit elements are drawn on radiation
sensitive films. The unwanted materials are
then removed by the process of etching. In
the following section we project
nanolithography as a means to implement
the top-down approach.
NANOLITHOGRAPHY:
Nanolithography using electron beam lithography can pattern small features with 4 nm resolution. It does not require any photolithography masks or optical alignment. Electron beam lithography is a great tool for research and development because of its versatility and quick design and test cycle time. The layout can be drawn, and the device can be patterned, fabricated and tested easily.
THE PROCESS INVOLVED:
Electron Beam Lithography (EBL)
system is ideal for patterning small area
devices with nanometer resolution. In
the EBL system, the beam spot size can
be varied from 4nm to 200nm,
depending on the acceleration voltage
and beam current. The EBL system uses
a thermal field emission type cathode
and ZrO/w for the emitter to generate an
electron beam. The beam generated from
the emitter is processed through a four-
stage e-beam focusing lens system and
forms a spot beam on the work piece.
Pattern writing is carried out on a work
piece, which has been coated with an
electron beam sensitive resist, by
scanning the electron beam. The EBL
system adopts a vector scanning and
step-and-repeat writing method. It has a
two-stage electrostatic deflection system.
The position-deflection system (main
deflection system) scans over a 500um x
500um area, and it controls precise
positioning of the beam. The scanning-deflection
system (subsidiary deflection
system) scans over a 4um x 4um area,
and it performs high-speed beam
scanning.
The electron beam generated is
accelerated through a 100kV (or 50kV)
electrode, and the beam is turned on or
off by a blanking electrode when the
stage moves. The EBL system is also
equipped with electrodes that correct the
field curvature and astigmatism due to
beam deflection. The schematic diagram
of the EBL can be visualized as shown
in the figure below.
EBL system
The minimum feature that can be
resolved by the EBL system depends on
several factors, such as the type of resist,
resist thickness, exposure dosage, the
beam current level, proximity correction,
development process and etching
resistance of the particular electron beam
resist used.
The feature patterned on the electron
beam resist can be transferred to the
substrate using two methods: the lift-off
process or the direct etching process. In
a lift-off process, the resist is first spun
onto the wafer, exposed by E-beam
lithography, and developed in a solution.
Next, a masking material, such as
Titanium, is sputtered onto the wafer.
The wafer is then placed in a resist
stripper to remove the resist. The metal
that is sputtered directly on top of the
substrate where there is no resist will
stay, but the metal that is sputtered on
top of the resist will be lifted off along
with the resist; hence it is called the lift-off
process. The metal left behind
becomes the etching mask for the
substrate. The negative resist is typically
preferred for the lift-off process because
it has a slightly angled sidewall profile.
In a direct etching process, a masking
material such as silicon dioxide is first
deposited onto the silicon substrate.
Silicon dioxide is used as a mask
because it has high selectivity in silicon
etching (1:100). The resist is then spun
onto the wafer, exposed and developed.
Next, the pattern is transferred onto the
oxide mask by reactive ion etching (RIE)
or inductively coupled plasma (ICP).
One thing to take into consideration is
that the electron beam resist will also be
etched during the oxide etching.
Therefore, the selectivity of the resist to
oxide during the etching process will
determine the minimum required resist
thickness for a given oxide thickness.
BOTTOM-UP APPROACH:
The process of rigging up smaller elements in order to obtain the end product (in this case a circuit) is called the bottom-up approach. In nanotechnology the bottom-up approach is implemented using carbon nanotubes. By tailoring the atomic structure of organic molecules it is possible to create individual electronic components. This is a completely different way of building circuits: instead of whittling down a big block of silicon, we are building from the ground up, creating molecules on a surface and then allowing them to assemble into larger structures.
Fig: magnified sketch of a carbon
nanotube
Scientists are now attempting to manipulate individual atoms and molecules. Building with individual atoms is becoming easier, and scientists have succeeded in constructing electronic devices using carbon nanotubes. But a practical constraint arises in integrating the individual components: no method has emerged for combining the individual components to form a complete electronic circuit. Hence the bottom-up approach is in its early stage of research and is thus practically difficult to realize.
THE COMPLETE PICTURE OF A
NANOMOBILE:
After all the above discussions, we now
present a schematic picture of a
nanomobile.
a) front view b) rear view
As seen from figure (b), the internal circuitry of a conventional mobile has been tremendously reduced. The nanomobile consists of a two-tier display featuring the touch screen display at the top and the system display at the bottom. The touch screen display enables the user to dial a number. The black scroll key at the top enables the user to traverse through his contacts. The nanomobile does not feature a microphone and a speaker system; instead, the user is connected to his mobile via Bluetooth.
COMPARATIVE STUDY:
The circuit of a nanomobile can be achieved on a nano scale, while that of a conventional mobile can at best be reduced to the micro scale, thus making the rear portion of the nanomobile almost flat.
The speaker and the microphone, which add bulkiness to the conventional mobile, have been removed, and Bluetooth has been introduced as the tool to communicate.
The keypad, which consumes a large amount of space, has been replaced by a touch screen that also adds appeal to the nanomobile.
The heat produced in the internal circuit is greatly reduced.
DEMERITS OF A NANOMOBILE:
Any fault arising in the internal
circuitry needs a high degree of
precision to be rectified and
hence would result in
complexity.
Repair of the circuit is very
tedious and hence only a
complete replacement of the
circuit is possible.
The electron beam process used
in nanolithography is quite slow
and would take a couple of days.
A higher voltage is required to
generate the electron beam.
CONCLUSION
Though the nanomobile has a few demerits, it paves the way for the revolutionary idea of bringing down the size of electronic circuits to the nano scale. The nanomobile can be seen as an effective solution to the problem of present-day bulky mobiles. Thus the nanomobile can be considered a giant leap towards the advent of nanotechnology in the field of electronics to cater to our day-to-day requirements.
A TECHNICAL PAPER
ON
BIOMEDICAL APPLICATION OF NANO ELECTRONICS
SRI VENKATESWARA UNIVERSITY COLLEGE OF ENGINEERING
DEPARTMENT OF ELECTRICAL AND ELECTRONIC ENGINEERING
Presented by:
A.ALIBABU Sd.AREEF
Address for communication:
A.ALIBABU Sd.AREEF
ROOM NO:1311 ROOM NO:1310
VISVESWARA BLOCK VISVESWARA BLOCK
S.V.U.C.E.HOSTELS S.V.U.C.E.HOSTELS
TIRUPATI-517502 TIRUPATI-517502
Ph:9885973665
Email: [email protected]
Email: [email protected]
ABSTRACT
Molecular manufacturing promises precise control of matter at the atomic and molecular level, allowing the construction of micron-scale machines comprised of nanometer-scale components. Medical nanorobots are still only a theory, but scientists are working to develop them. According to the World Health Organization, today's world is suffering an extreme shortage of donor blood; even with the Red Cross receiving 36,000 units a day, this does not satisfy the 80,000 that are needed. People who have anemia also run into a blood problem when their hemoglobin concentration in the red blood cells falls below normal, which can cause severe tissue damage. The root of the problem lies in hemoglobin, because it delivers oxygen from the lungs to the body tissue. Of the many conditions which can do harm to the human body, one of the most fundamental and fast acting is a lack of perfusion of oxygen to the tissue. Insufficient oxygenation can be caused by problems with blood flow in the arteries due to obstruction or exsanguination, or problems with oxygen transportation, as with anemia. Advances in nanotechnology have suggested a possible treatment for these conditions in the form of microelectromechanical red blood cell analogs called RESPIROCYTES. The artificial red blood cell or "respirocyte" proposed here is a bloodborne spherical 1-micron diamondoid 1000 atm pressure vessel with active pumping powered by endogenous serum glucose, able to deliver 236 times more oxygen to the tissues per unit volume than natural red cells and to manage carbonic acidity. An onboard nanocomputer and numerous chemical and pressure sensors enable complex device behaviors, remotely reprogrammable by the physician via externally applied acoustic signals.
Introduction:
1. What are respirocytes?
The respirocyte is a bloodborne 1-micron-diameter spherical nanomedical device designed by Robert A. Freitas Jr. The device acts as an artificial mechanical red blood cell. It is designed as a diamondoid 1000-atmosphere pressure vessel with active pumping powered by endogenous serum glucose, and can deliver 236 times more oxygen to the tissues per unit volume than natural red cells while simultaneously managing carbonic acidity. An individual respirocyte consists of 18 billion precisely arranged structural atoms plus 9 billion temporarily resident molecules when fully loaded. An onboard nanocomputer and numerous chemical and pressure sensors allow the device to exhibit behaviors of modest complexity, remotely reprogrammable by the physician via externally applied acoustic signals.
Twelve pumping stations are spaced evenly along an equatorial circle. Each station has its own independent glucose-metabolizing power plant, glucose tank, environmental glucose sensors, and glucose sorting rotors. Each station alone can generate sufficient energy to power the entire respirocyte, and has an array of 3-stage molecular sorting rotor assemblies for pumping O2, CO2, and H2O from the ambient medium into an interior chamber, and vice versa. The number of rotor sorters in each array is determined both by performance requirements and by the anticipated concentration of each target molecule in the bloodstream. The equatorial pumping station network occupies ~50% of the respirocyte surface. On the remaining surface, a universal "bar code" consisting of concentric circular patterns of shallow rounded ridges is embossed on each side, centered on the "north pole" and "south pole" of the device. This coding permits easy product identification by an attending physician with a small blood sample and access to an electron microscope.
Equatorial Cutaway View of
Respirocyte
2. Preliminary Design Issues:
In the biochemistry of respiratory gas transport in the blood, oxygen and carbon dioxide (the chief byproduct of the combustion of foodstuffs) are carried between the lungs and the other tissues, mostly within the red blood cells. Hemoglobin, the principal protein in the red blood cell, combines reversibly with oxygen, forming oxyhemoglobin. About 95% of the O2 is carried in this form, the rest being dissolved in the blood. At human body temperature, the hemoglobin in 1 liter of blood holds 200 cm3 of oxygen, 87 times more than plasma alone (2.3 cm3) can carry.
Carbon dioxide also combines reversibly with the amino groups of hemoglobin, forming carbaminohemoglobin. About 25% of the CO2 produced during cellular metabolism is carried in this form, with another 10% dissolved in blood plasma and the remaining 65% transported inside the red cells after hydration of CO2 to bicarbonate ion. The creation of carbaminohemoglobin and bicarbonate ion releases hydrogen ions which, in the absence of hemoglobin, would make venous blood 800 times more acidic than the arterial. This does not happen because buffering action and isohydric carriage by hemoglobin reversibly absorb the excess hydrogen ions, mostly within the red blood cells. Respiratory gases are taken up or released by hemoglobin according to their local partial pressure. There is a reciprocal relation between hemoglobin's affinity for oxygen and carbon dioxide. The relatively high level of O2 in the lungs aids the release of CO2, which is to be expired, and the high CO2 level in other tissues aids the release of O2 for use by those tissues.
Existing Artificial Respiratory Gas
Carriers
Possible artificial oxygen carriers have been investigated for eight decades, starting with the first documented use of hemoglobin solutions in humans in 1916. The commercial potential for successful blood substitutes has been estimated at between $5-10 billion/year, so the field is quite active. Current blood substitutes are either hemoglobin formulations or fluorocarbon emulsions.
Shortcomings of Current Technology
At least four hemoglobin formulations and one fluorocarbon are in Phase I safety trials, and one company has filed an application to conduct an efficacy trial. Most of the red cell substitutes under trial at present have far too short a survival time in the circulation to be useful in the treatment of chronic anemia, and are not specifically designed to regulate carbon dioxide or to participate in acid/base buffering. Several cell-free hemoglobin preparations evidently cause vasoconstriction, decreasing tissue oxygenation, and there are reports of increased susceptibility to bacterial infection due to blockade of the monocyte-macrophage system, complement activation, free-radical induction, and nephrotoxicity. The greatest physiological limitation is that oxygen dissolves linearly with partial pressure, so clinically significant amounts of oxygen can only be carried by prolonged breathing of potentially toxic high oxygen concentrations.
3. Nanotechnological Design of Respirocytes
(a) Pressure Vessel
The simplest possible design for an artificial respirocyte is a microscopic pressure vessel, spherical in shape for maximum compactness. Most proposals for durable nanostructures employ the strongest materials, such as flawless diamond or sapphire constructed atom by atom. Tank storage capacity is given by the Van der Waals equation, which takes account of the finite size of tightly packed molecules and the intermolecular forces at higher packing densities. Rupture risk and explosive energy rise with pressure, so a standard 1000 atm peak operating pressure appears optimum, providing high packing density with an extremely conservative 100-fold structural safety margin. (By comparison, natural red blood cells store oxygen at an equivalent 0.51 atm pressure, of which only 0.13 atm is deliverable to tissues.) In the simplest case, oxygen release could be continuous throughout the body. Slightly more sophisticated is a system responsive to local O2 partial pressure, with gas released either through a needle valve (as in aqualung regulators) controlled by a heme protein that changes conformation in response to hypoxia, or by diffusion via a low pressure chamber into a densely packed aggregation of heme-like molecules trapped in an external fullerene cage porous to environmental gas and water molecules, or by molecular sorting rotors.
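For reference, the Van der Waals equation of state mentioned above can be written as follows, where P, V, T and n are the pressure, volume, temperature and amount of gas, R is the gas constant, and a and b are gas-specific constants that correct for intermolecular attraction and finite molecular size:

$$\left(P + \frac{a\,n^{2}}{V^{2}}\right)\left(V - n b\right) = nRT$$

Neglecting the a and b corrections, i.e. treating the gas as ideal (our simplifying assumption, which only roughly approximates behaviour at 1000 atm), a 1-micron-diameter sphere at 1000 atm and body temperature would hold on the order of n = PV/RT, which is about 2 x 10^-14 mol or roughly 10^10 gas molecules. This is consistent in order of magnitude with the ~9 billion temporarily resident molecules quoted earlier.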
(b) Molecular Sorting Rotors
The key to successful respirocyte function is an active means of conveying gas molecules into, and out of, pressurized microvessels. Molecular sorting rotors have been proposed that would be ideal for this task. Each rotor has binding site "pockets" along the rim, exposed alternately to the blood plasma and the interior chamber by the rotation of the disk. Each pocket selectively binds a specific molecule when exposed to the plasma. Once the binding site rotates to expose it to the interior chamber, the bound molecules are forcibly ejected by rods thrust outward by the cam surface. Rotors are fully reversible, so they can be used to load or unload gas storage tanks, depending upon the direction of rotor rotation. Typical molecular concentrations in the blood for target molecules of interest (O2, CO2, N2 and glucose) are ~10^-4, which should be sufficient to ensure at least 90% occupancy of rotor binding sites at the stated rotor speed. Each stage can conservatively provide a concentration factor of 1000, so a multi-stage cascade should ensure storage of virtually pure gases. Since each 12-arm outbound rotor can contain binding sites for 12 different impurity molecules, the number of outbound rotors in the entire system can probably be reduced to a small fraction of the number of inbound rotors.
Sorting Rotor Cascade
Sorting Rotor Binding Sites
Receptors with binding sites for specific
molecules must be extremely reliable
(high affinity and specificity) and
survive long exposures to the aqueous
media of the blood. Oxygen transport
pigments are conjugated proteins, that is,
proteins complexed with another organic
molecule or with one or more metal
atoms. Transport pigments contain metal
atoms such as Cu2+ or Fe3+, forming binding sites to which oxygen can reversibly attach. Many proteins and
enzymes have binding sites for carbon
dioxide. For example, hemoglobin
reversibly binds CO2, forming
carbamino hemoglobin. A zinc enzyme
present in red blood cells, carbonic
anhydrase, catalyzes the hydration of
dissolved carbon dioxide to bicarbonate
ion, so this enzyme has receptors for
both CO2 and H2O.
Binding sites for glucose are common in
nature. For example, cellular energy
metabolism starts with the conversion of
the 6-carbon glucose to two 3-carbon
fragments (pyruvate or lactate), the first
step in glycolysis. This is catalyzed by
the enzyme hexokinase, which has binding
sites for both glucose and ADP. Another
common cellular mechanism is the glucose
transporter molecule, which carries glucose
across cell membranes and contains several
binding sites.
(c) Sensors
Various sensors are needed to acquire external data essential in regulating gas loading and unloading operations, tank volume management, and other special protocols. For instance, sorting rotors can be used to construct quantitative concentration sensors for any molecular species desired. One simple two-chamber design is synchronized with a counting rotor (linked by rods and ratchets to the computer) to assay the number of molecules of the desired type that are present in a known volume of fluid. The fluid sample is drawn from the environment into a fixed-volume reservoir at 10^4 refills/sec using two paddle-wheel pumps. At typical blood concentrations, this sensor, which measures 45 nm x 45 nm x 10 nm and comprises ~500,000 atoms (~10^-20 kg), should count ~100,000 molecules/sec of glucose, ~30,000 molecules/sec of arterial or venous CO2, or ~2000 molecules/sec of arterial or venous O2. It is also convenient to include internal pressure sensors to monitor O2 and CO2 gas tank loading, ullage (container fullness) sensors for ballast and glucose fuel tanks, and internal/external temperature sensors to help monitor and regulate total system energy output.
Molecular Concentration Sensor
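As a quick consistency check of the quoted ~10^-20 kg sensor mass, the sketch below multiplies the ~500,000-atom count by the mass of a carbon atom; treating the structure as mostly carbon is our simplification for illustration, not a statement from the design.

#include <stdio.h>

int main(void)
{
    double atoms = 5.0e5;                       /* ~500,000 structural atoms (from the text) */
    double carbon_atom_kg = 12.0 * 1.66054e-27; /* 12 atomic mass units in kilograms */
    printf("estimated sensor mass: %.1e kg\n", atoms * carbon_atom_kg);
    return 0;
}

The result, about 1 x 10^-20 kg, agrees with the figure quoted in the text.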
(d) Pumping Station Layout
Twelve pumping stations are spaced evenly along an equatorial circle. Each station has its own independent glucose engine, glucose tank, environmental glucose sensors, and glucose sorting rotors. Each station alone can generate sufficient energy to power the entire respirocyte. Each pumping station has an array of 3-stage molecular sorting rotor assemblies for pumping O2, CO2, and H2O into and out of the ambient medium. The number of rotor sorters in each array is determined both by performance requirements and by the anticipated concentration of each target molecule in the bloodstream. Any one pumping station, acting alone, can load or discharge any storage tank in ~10 sec (the typical capillary transit time in tissues), whether gas, ballast water, or glucose. Gas pumping rotors are arrayed in a noncompact geometry to minimize the possibility of local molecule exhaustion during loading. Each station includes three glucose engine flues for discharge of CO2 and H2O combustion waste products, 10 environmental oxygen pressure sensors distributed throughout the O2 sorting rotor array to provide fine control if unusual concentration gradients are encountered, 10 similar CO2 pressure sensors on the opposite side, 2 external environment temperature sensors (one on each side, located as far as possible from the glucose engine to ensure true readings), and 2 fluid pressure transducers for receiving command signals from medical personnel.
The equatorial pumping station network occupies ~50% of the respirocyte surface. On the remaining surface, a universal "bar code" consisting of concentric circular patterns of shallow rounded ridges is embossed on each side, centered on the "north pole" and "south pole" of the device. This coding permits easy product identification by an attending physician with a small blood sample and access to an electron microscope, and may also allow rapid reading by other more sophisticated medical nanorobots.
4. Applications
The artificial respirocyte is a simple
nanotechnological device whose primary
applications include transfusable blood
substitution; treatment for anemia, perinatal
and neonatal disorders, and a variety of lung
diseases and conditions; contribution to the
success of certain aggressive cardiovascular
and neurovascular procedures, tumor
therapies and diagnostics; prevention of
asphyxia; maintenance of artificial breathing
in adverse environments; and a variety of
sports, veterinary, battlefield and other
applications.
(a) Treatment of Anemia
Oxygenating respirocytes offer complete
or partial symptomatic treatment for
virtually all forms of anemia, including
acute anemia caused by a sudden loss of
blood after injury or surgical
intervention; secondary anemias caused
by bleeding typhoid, duodenal or gastric
ulcers; chronic, gradual, or posthemorrhagic
anemias from bleeding
gastric ulcers (including ulcers caused
by hookworm), hemorrhoids, excessive
menstrual bleeding, or battle injuries in
war zones; hereditary anemias including
hemophilia, leptocytosis and sicklemia,
thalassemia, hemolytic jaundice and
congenital methemoglobinemia.
(b) Respiratory Diseases
Current treatments for a variety of
respiratory viruses and diseases,
including pneumonia, bronchopneumonia
and pleuropneumonia;
pneumoconiosis including asbestosis,
silicosis and berylliosis; emphysema,
empyema, abscess, pulmonary edema
and pleurisy; epidemic pleurodynia;
diaphragm diseases such as
diaphragmatic hernia, tetanus, and
hiccups; blood flooding in the lungs;
bronchitis and bronchiectasis;
atelectasis and pneumothorax; chronic
obstructive lung disease; arterial chest
aneurysm; influenza, dyspneas, and even
laryngitis, snoring, pharyngitis, hay
fever and colds could be improved using
respirocytes to reduce the need for strong,
regular breathing.
The devices could provide an effective
long-term drug-free symptomatic
treatment for asthma, and could assist in
the treatment of hemotoxic (pit viper)
and neurotoxic (coral) snake bites.
Respirocytes could also be used to treat
conditions of low oxygen availability to
nerve tissue, as occurs in advanced
atherosclerotic narrowing of arteries,
strokes, diseased or injured reticular
formation in the medulla oblongata,
birth traumas leading to cerebral palsy, and
low blood-flow conditions seen in most
organs of people as they age. Even
poliomyelitis, which still occurs in
unvaccinated Third World populations,
could be treated with respirocytes and
a diaphragmatic pacemaker.
(c) Asphyxia
Respirocytes make breathing possible in
oxygen-poor environments, or in cases
where normal breathing is physically
impossible. Prompt injection with a
therapeutic dose, or advance infusion
with an augmentation dose, could greatly
reduce the number of choking deaths
(~3200 deaths/yr in U.S.) and the use of
emergency tracheostomies, artificial
respiration in first aid, and mechanical
ventilators. The device provides an
excellent prophylactic treatment for most
forms of asphyxia, including drowning,
strangling, electric shock, nerve-blocking
paralytic agents, carbon monoxide
poisoning, underwater rescue operations,
smoke inhalation or firefighting
activities, anaesthetic/barbiturate
overdose, confinement in airtight spaces
(refrigerators, closets, bank vaults,
mines, submarines), and obstruction of
breathing by a chunk of meat or a plug
of chewing tobacco lodged in the larynx,
by inhalation of vomitus, or by a plastic
bag pulled over the head of a child.
Respirocytes augment the normal
physiological responses to hypoxia,
which may be mediated by pulmonary
neuroepithelial oxygen sensors in the airway
mucosa of human and animal lungs. A
design alternative to augmentation
infusions is a therapeutic population of
respirocytes that loads and unloads at an
artificial nanolung, implanted in the
chest, which exchanges gases directly
with the natural lungs or with exogenous
gas supplies.
(d) Underwater Breathing
Respirocytes could serve as an in vivo
SCUBA (Self-Contained Underwater
Breathing Apparatus) device. With an
augmentation dose or nanolung, the
diver holds his breath for 0.2-4 hours,
goes about his business underwater, then
surfaces, hyperventilates for 6-12
minutes to recharge, and returns to work
below. (Similar considerations apply in
space exploration scenarios.) Respirocytes
can relieve the most dangerous hazard of
deep sea diving, decompression sickness
("the bends") or caisson disease: the
formation of nitrogen bubbles in the blood
as a diver rises to the surface, from gas
previously dissolved in the blood at higher
pressure at greater depths. Safe
decompression procedures normally require
up to several hours. At full saturation, a
human diver breathing pressurized air
contains about ~(d - d0) x 10^21 molecules
of N2, where d is the diving depth in meters
and d0 is the maximum safe diving depth
for which decompression is not required,
~10 meters. A therapeutic dose of
respirocytes reconfigured to absorb N2
instead of O2/CO2 could allow complete
decompression of an N2-saturated human
body from a depth of 26 meters (86 feet)
in as little as 1 second, although in
practice full relief will require ~60 sec
approximating the circulation time of the
blood. Each additional therapeutic dose
relieves excess N2 accumulated from
another 16 meters of depth. Since full
saturation requires 6-24 hours at depth,
normal decompression illness cases
present tissues far from saturation; hence
relief will normally be achieved with
much smaller dosages. The same device
can be used for temporary relief from
nitrogen narcosis while diving, since N2
has an anesthetic effect beyond 100 feet
of depth.
Direct water-breathing, even with the
help of respirocytes, is problematic for
several reasons: (1) Seawater contains at
most one-thirtieth of the oxygen per
lungful as air, so a person must breathe
at least 30 times more lungful of water
than air to absorb the same volume of
respiratory oxygen; lungs full of water
weigh nearly three times more than
lungs full of air, so a person could
hyperventilate water only about onethird
as fast as the same volume of air.
As a result, a water-breathing human can
absorb at most 1%-10% of the oxygen
needed to sustain life and physical
activity. (2) Deep bodies of water may
have low oxygen concentrations because
oxygen is only slowly distributed by
diffusion; in swamps or below the
thermocline of lakes, circulation is poor
and oxygen concentrations are low, a
situation aggravated by the presence of
any oxygen-consuming bottom dwellers
or by oxidative processes involving
bottom detritus, pollution, or algal
growth. (3) Both the diving reflex and
the presence of fluids in the larynx
inhibit respiration and cause closure of
the glottis, and inhaled waterborne
microflora and microfauna such as
protozoa, diatoms, dinoflagellates,
zooplankton and larvae could establish
(harmful) residence in lung tissue.
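The decompression dosing arithmetic quoted earlier in this section can be restated as a short sketch; the function names are illustrative, and the numbers come directly from the figures given above (~(d - d0) x 10^21 excess N2 molecules, and ~16 m of depth relieved per additional therapeutic dose).

import math

def excess_n2_molecules(depth_m, safe_depth_m=10.0):
    # Approximate excess dissolved N2 at full saturation, in molecules.
    return max(depth_m - safe_depth_m, 0.0) * 1e21

def therapeutic_doses_needed(depth_m, safe_depth_m=10.0, metres_per_dose=16.0):
    # Number of N2-absorbing respirocyte doses needed to cover the excess depth.
    return math.ceil(max(depth_m - safe_depth_m, 0.0) / metres_per_dose)

print(excess_n2_molecules(26))       # 1.6e22 excess N2 molecules at 26 m
print(therapeutic_doses_needed(26))  # 1 dose, matching the 26 m example in the text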
5. Conclusions
The respirocyte is constructed of tough
diamondoid material, employs a variety of
chemical, thermal and pressure sensors, has
an onboard nanocomputer which enables the
device to display many complex responses
and behaviors, can be remotely
reprogrammed via external acoustic signals
to modify existing or to install new
protocols, and draws power from abundant
natural serum glucose supplies, thus is
capable of operating intelligently and
virtually indefinitely, unlike red cells which
have a natural lifespan of 4 months.
Although still only theoretical, the
respirocyte could become a reality when
future advances in the engineering of
molecular machine systems permit its
construction. Within the next twenty years
nanotechnology will advance greatly, and
may become fully capable of producing
tiny, complex machines. The development
of nanodevices that assemble other
nanomachines will allow for massive,
cheap production, so respirocytes could be
manufactured economically and
abundantly.
6. References
Drexler KE, Peterson C, Pergamit G. Unbounding the Future: The Nanotechnology Revolution. New York: William Morrow, 1991.
Foresight Update, No. 24, 1996:1-2.
Jones JA. Red blood cell substitutes: current status. Brit J Anaesthes 1995; 74:697-703.
Drexler KE. Nanosystems: Molecular Machinery, Manufacturing, and Computation. New York: John Wiley & Sons, 1992.
ROBOTICS: McKIBBEN'S MUSCLE
PRESENTED BY
M.V.VARUN BABU G.T.PRABHA
III ECE III ECE
[email protected]
[email protected]
BAPATLA ENGINEERING COLLEGE
BAPATLA
ABSTRACT:
ROBOTS MAKE OUR WORK LIGHTER,
BUT WE HAVE MADE THE ROBOTS
LIGHTER.
Industrial robots, which are heavy
moving bodies, show high risk of damage
when working, and also during training
sessions in a dense environment of other
robots. This initiated the move toward lighter
robot constructions using soft arms. This
paper reports on the design of a biorobotic
actuator. Data from several vertebrate
species (rat, frog, cat, and human) are
used to evaluate the performance of a
McKibben pneumatic actuator. Soft arms
create powerful, compact, compliance and
light robotic arms and consist of
pneumatic actuators like McKibben
muscles. Currently there are some
trajectory problems in McKibben muscles,
which restrict its application. This paper
presents solutions to certain problems in
the McKibben muscles by the use of
Electro Active Polymers (EAP). The main
attractive characteristic of EAP is their
operational similarity to biological
muscles, particularly their resilience and
ability to induce large actuation strains.
Electro Active Polymers (EAP) can also
serve as sensors, simplifying a robotic
finger model by acting as both sensor and
actuator. Ion-exchange Polymer Metal
Composite (IPMC), one of the EAPs, has
been selected ahead of alternatives such as
shape memory alloys and electro-active
ceramics, and the reasons for this selection
are also discussed in this paper.
We devise a model to eliminate trajectory
errors by placing EAP strips in the robot's
joints, an approach which can also be
applied to current (heavy) robots actuated
by motors. This paper addresses the
difficulties currently present in the
McKibben muscle system, which restrict its
application. Careful use of the solutions
provided in this paper should help
researchers produce highly efficient
artificial muscle systems. We also present
the idea of an artificial muscle system
which consumes less energy and oxygen
than a natural one, and on this basis discuss
what we believe would be the world's most
energy-efficient robot.
INTRODUCTION TO MCKIBBEN
MUSCLES:
Industrial robots are very heavy and
highly rigid because of their mechanical
structure and motorization. These kinds
of robots in the dense environment of
other robots may hit and damage each
other due to technical errors or during
the training sessions. This initiated the
idea of developing lighter robot
constructions. Replacing heavy motor
driving units, which constitute much
weight of a robot with lighter McKibben
muscles, will serve the purpose. The
McKibben Artificial Muscle is a
pneumatic actuator which exhibits many
of the properties found in real muscle.
The American physician Joseph L.
McKibben first developed this muscle in
the 1950s. It was originally intended to
actuate artificial limbs for amputees. The
actuator has spring-like characteristics,
physical flexibility, and light weight. Its
main advantage is its very high force-to-weight
ratio, making it ideal for mobile
robots.
CONSTRUCTION:
The device consists of an expandable
internal bladder (a rubber elastic tube)
surrounded by a helically woven braided
shell made of nylon, attached at either end
to tendon-like fittings. A McKibben
Artificial Muscle can generate an isometric
force of about 200 N when pressurized to 5
bar and held at a length of 14 cm. This
actuator is relatively small.
Fig.1
WORKING:-
When the internal bladder is
pressurized, it expands in a balloon-like
manner against the braided shell. The
braided shell acts to constrain the
expansion in order to maintain a
cylindrical shape.
Fig.2
As the volume of the internal
bladder increases due to increase in
pressure, the actuator shortens and
produces tension if coupled to a
mechanical load. This basic principle is
the conversion of the radial stress on the
rubber tube into axial stress and during
relaxation of the muscle the reverse
happens. A thin rubber bladder is used to
transmit the entire pressure acting on it
to the unstretchable outer shell. One
end of the muscles is sealed where loads
can be attached and the other end is for
the air from the regulator as shown in
figure 3.
By using a finite element model
approach, we can estimate the interior
stresses and strains of the McKibben
actuator.
Fig.3
Performance Characteristics:
The force generated by a McKibben
Artificial Muscle is dependent on the
weave characteristics of the braided
shell, the material properties of the
elastic tube, the actuation pressure, and
the muscle's length.
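As a rough illustration of these dependencies, the sketch below uses the idealized static force model commonly cited for McKibben actuators (the frictionless braid relation F = (pi*D0^2*P/4)(3*cos^2(theta) - 1), where theta is the braid angle); this model and the numbers below are assumptions added for illustration, not results from this paper.

import math

def mckibben_force(pressure_pa, d0_m, theta_rad):
    # Idealized static pull force of a McKibben actuator (frictionless braid model).
    return (math.pi * d0_m**2 * pressure_pa / 4.0) * (3.0 * math.cos(theta_rad)**2 - 1.0)

# Illustrative values: 5 bar pressure, 20 mm resting braid diameter, 25 degree braid angle
print(mckibben_force(5e5, 0.020, math.radians(25)))  # ~230 N, the same order as the ~200 N quoted above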
Artificial versus Biological
Muscle: - The force-length properties
of the McKibben actuator are reasonably
close to biological muscle. However, the
force-velocity properties are not. We
have designed a hydraulic damper to
operate in parallel with the McKibben
actuator to produce the desired results.
Energy requirement: the energy
requirement of a McKibben-actuated
robot is the lowest among the robots
considered; it is reportedly even less than
that used by human muscle.
MCKIBBEN MUSCLES AS
ACTUATOR:
A PHYSIOLOGICAL MODEL
Two McKibben muscles put into
antagonism define a rotoid actuator
based on the physiological model of the
biceps-triceps systems of human beings.
The two muscles are the agonist and the
antagonist and are connected by means
of a chain and driving sprocket as shown
in the figure 3. The force difference
between the two generates a
torque. An initial tension must be
maintained against the passive tension
found in human physiology. When the
pressures are increased and decreased to
P1 and P2 respectively, an angular
deflection θ is produced. The equation
for the torque produced was deduced as:
T = k1(P1 - P2) - k2(P1 + P2)θ
where k1 and k2 are constants. This
equation is very similar to, and gives
values close to, the one deduced by
N. Hogan with the biceps-triceps system
as the basis:
T = Tmax(Ub - Ut) - k(Ub - Ut)θ
Where Tmax, k are constants and Ub, Ut
are normalized nervous control of biceps
and triceps.
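A minimal numeric sketch of the antagonistic torque relation above; the constants k1 and k2 and the pressures used are illustrative placeholders, since the paper does not give their values.

def antagonist_torque(p1_bar, p2_bar, theta_rad, k1=0.05, k2=0.01):
    # Torque of two antagonistic McKibben muscles: T = k1*(P1 - P2) - k2*(P1 + P2)*theta,
    # with k1 and k2 as placeholder constants in N*m/bar and N*m/(bar*rad).
    return k1 * (p1_bar - p2_bar) - k2 * (p1_bar + p2_bar) * theta_rad

# Agonist at 4 bar, antagonist at 2 bar, joint deflected 0.3 rad
print(antagonist_torque(4.0, 2.0, 0.3))  # 0.1 - 0.018 = 0.082 N*m with these constants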
Advantages of the McKibben
Artificial Muscle
High force to weight ratio
Size availability
Flexible
Powerful
Damped
Effective
Lightweight
Low-cost
Smooth.
Electro active Polymer Artificial
Muscles:
Electro active polymers (EAP)
are being developed to enable effective,
miniature, inexpensive and light robotic
applications like surface wipers etc. The
EAP material that is commonly used is
known as IPMC (Ion- exchange polymer
metal composite), which is dealt later.
The EAP strip can be made as grippers
and strings, which can grab and lift
loads, among many other potential
uses. These strips and strings have the
potential to greatly simplify robotic
spacecraft tasks.
Fig. 5 EAP: As dust wiper.
When an electric charge flows through
the ribbon, charged particles in the
polymer get pushed or pulled toward the
ribbon's two sides, depending on the
polarity. The net result: the ribbon
bends. Four such ribbons can be made to
lift a load. They can operate under
cryogenic conditions, down to about -140
degrees Celsius. When the power supply is
turned off, the actuator relaxes, enabling
it to lower or drop loads.
INFLUENCE OF ELECTRIC FIELD
(i.e. BENDING OF THE STRIP): -
The bending occurs due to
differential contraction and expansion of
the outermost regions of a strip when
an electric field is imposed across its
thickness, as shown in the figure. IPMC
strips generally bend towards the anode
and if the voltage signal is reversed they
also reverse their direction of bending.
Conversely by bending the material,
shifting of mobile charges become
possible due to imposed stresses. When
a rectangular strip of the composite
sensor is placed between two electrodes
and is bent, a stress gradient is built on
the outer fibers relative to the neutral
axis (NA). The mobile ions therefore
will shift toward the favored region
where opposite charges are available.
The deficit in one charge and excess in
the other can be translated into a voltage
gradient that is easily sensed by a low
power amplifier. Since these muscles
can also be cut as small as one desires,
they present a tremendous potential to
micro-electro-mechanical systems
(MEMS) sensing and actuation
applications.
ADVANTAGES OF EAP:
. Can be manufactured and cut
in any size and shape.
. Have good force to weight
characteristics in the presence
of low applied voltages.
. Work well in both humid and
dry environments.
. Work well in cryogenic
conditions and at low
pressures.
. Unique characteristics of low
density as well as high
toughness, large actuation
strain and inherent damping
vibrations.
. Show low impedance.
IPMC:
Construction:
The IPMCs are composed of a
perfluorinated ion-exchange membrane: a
polymer matrix coated on the outer
surfaces, with platinum in most cases
(silver and copper have also been used).
This coating aids in the distribution of the
voltage over the surface. These membranes
are made into sheets that can be cut into
different shapes and sizes as needed.
Working:
Strips of these composites can undergo
large bending and flapping displacement
if an electric field is imposed across the
thickness. A circuit is connected to
surface to produce voltage difference,
causing bending. Thus, in this sense they
are large motion actuators. Conversely
by bending the composite strip, either
quasi-statically or dynamically, a voltage
is produced across the thickness of the
strip. Thus, they are also large motion
sensors.
When the applied signal
frequency is varied, the displacement
varies with it, up to a point where large
deformations are observed at a critical
frequency called the resonant frequency.
At the resonant frequency maximum
deformation is observed, and beyond this
frequency the actuator response is
diminished. Lower frequencies (down to
0.1 or 0.01 Hz) lead to higher
displacement (approaching 25 mm) for
a 0.5 cm x 2 cm x 0.2 mm strip, while
other frequency values did not produce
such displacements under
similar conditions. IPMC films have
shown remarkable displacement under
relatively low voltage, using very low
power. A film-pair weighing 0.2-g was
configured as an actuator and using 5V
and 20mW successfully induced more
than 11% contraction displacement.
Since the IPMC films are made of a
relatively strong material with a large
displacement capability, we investigated
their application to emulate fingers. The
gripper we suggested may be supported
using graphite/epoxy composite rod to
emulate a lightweight robotic arm.
Advantages of IPMC
Light
Compact
Driven by low power & voltage
Large strain capability
An example for EAP is per fluorinated
ion exchange membrane (IEM) whose
molecular structure is shown below.
[-(CF2-CF2) n- (CF-CF2) m-]
|
O-CF-CF2-O-CF2-
SO3 M+
|
CF3
A comparison between IPMCs and
other types of actuators is given below:

PROPERTY                 IPMC (Ionic Polymer    SMA (Shape            EAC (Electro Active
                         Metal Composites)      Memory Alloys)        Ceramics)
Actuation displacement   >10%                   <8% (short fatigue    0.1-0.3%
                                                life)
Force (MPa)              10-30                  about 700             30-40
Reaction speed           microsec to sec        sec to min            microsec to sec
Density                  1.25 g/cc              5-6 g/cc              6-8 g/cc
Drive voltage            4-7 V                  NA                    50-800 V
Fracture toughness       resilient, elastic     elastic               fragile
MCKIBBEN MUSCLES AND EAP
SENSORS
(INTELLIGENT ROBOTS)
Developing intelligent
robots requires the combination of
strong muscles (actuators) and acute
sensors, as well as the understanding of
the biological model. Using effective
EAP materials as artificial muscles, one
can develop biologically inspired robots
and locomotives that possibly can walk,
fly, hop, dig, swim and/or dive. Natural
muscles are driven by a complex
mechanism and are capable of lifting
large loads at short (millisecond)
response times. Since muscle is
fundamental to animal life and changes
little between species, we can regard it
as a highly optimized system. The
mobility of insects is under extensive
study.
Development of EAP actuators is
expected to enable insect-like robots that
can be launched into hidden areas of
structures to perform inspection and
various maintenance tasks. In future
years, EAP may emulate the capabilities
of biological creatures with integrated
multidisciplinary capabilities to launch
space missions with innovative plots.
Some biological functions that may be
adapted include soft-landing like cats,
traversing distances by hopping like a
grasshopper and digging and operating
cooperatively as ants.
DEVELOPMENT OF EAP FOR
SPACE APPLICATIONS
Since 1995, under the author's
lead, planetary applications
using EAP have been explored
while improving the
understanding, practicality and
robustness of these materials.
EAP materials are being
sought as a substitute for
conventional actuators,
possibly eliminating the need
for motors, gears, bearings,
screws, etc. Generally, space
applications are the most
demanding in terms of
operating conditions, robustness
and durability, offering an
enormous challenge and great
potential for these materials.
SUGGESTIONS
EAP AS SENSORS: This paper suggests
placing EAP strips (IPMC) at each joint
of the robot, fixed to each arm as shown
in the diagram. Relative angular
deflections of the arms bend the strip
(mechanical deformation), which generates
a current. During the robot's training
session the current signals from each joint
are converted (using a transducer) into
data signals for a PC-platform data
acquisition system, which stores the data
as a baseline. During the robot's regular
work, the signal from each joint is
analyzed every microsecond and compared
with the stored database. Any variation
produces error signals, which the system
processes into correction signals. These
signals are then used to control the
piezo-electric or high-speed matrix
pneumatic valves which regulate the air
flow to the muscles. This forms a closed
loop which corrects the trajectory errors.
Fig.7 Inserting EAP strips 1, 2, 3 in
the robot arm
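A minimal sketch of the closed-loop correction scheme just described; the signal values, gain, and deadband are hypothetical placeholders, since the paper does not specify them.

def correction_signal(measured_signal, baseline_signal, gain=0.5, deadband=0.01):
    # Compare an EAP joint-sensor reading against the stored training baseline
    # and return a valve command; gain and deadband are illustrative values.
    error = baseline_signal - measured_signal
    if abs(error) < deadband:   # within tolerance: no correction needed
        return 0.0
    return gain * error         # proportional correction sent to the pneumatic valve

# One control tick: stored baseline 0.20 (arbitrary units), sensor reads 0.26
print(correction_signal(0.26, 0.20))  # -0.03, i.e. reduce the muscle pressure slightly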
EAP FINGERS:
This paper suggests that two or more
EAP (IPMC) strips supported by an
epoxy/graphite composite holder can act
as robotic fingers (lifters and grippers).
When the strips are actuated by passing a
current, the fingers bend outwards to allow
the object in. The fingers are then
de-energized by reducing the voltage.
During the training session, the maximum
and minimum voltages required for
opening and closing, respectively, are
stored in the database. When an object
with smaller or larger dimensions (than
the standard) is gripped, the additional
bend produced in the strips generates a
current which is sensed and processed to
flag the dimensional inaccuracy, and the
object is rejected. The error signals can
also be processed into correction signals,
which control the manufacturing machines.
This forms a closed system which reduces
dimensional inaccuracy.
Fig.8 Pos 1: EAP fingers. Pos 2: EAP
fingers holding the object.
EAP AS DAMAGE ANALYSER:
Since the equation for the stress on
an EAP strip is available, the force with
which a robot arm hits an obstacle can
be analyzed. Knowing the geometry and
material properties of the arm, the
analysis can indicate whether the arm
should be replaced or its service life
extended, without an in-depth study of
the damage caused, which takes time and
money. The stress acting on the metal
composite can be calculated using the
following equation:
σ = k(C0, Ci) E^2
where k(C0, Ci) is an electromechanical
coefficient and E is the local electric field.
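A one-line numeric sketch of the stress relation above; the coefficient value and electric field used here are illustrative assumptions with placeholder units.

def ipmc_stress(k_coeff, e_field):
    # Stress in the ionic polymer-metal composite: sigma = k(C0, Ci) * E^2.
    return k_coeff * e_field**2

# Placeholder electromechanical coefficient and local electric field
print(ipmc_stress(2.0e-9, 1.0e4))  # 0.2, in whatever stress units k is expressed in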
DESIGN OF INTELLIGENT
ROBOTIC HAND :
Fig. 9 An EAP actuated hand with
fingers
The robotic hand muscles are made up
of McKibben muscles while the fingers
are supported with EAP strips which can
act as sensors as well as actuators and
can be used for lifting loads as shown in
the diagram 8.
DESIGN OF MUSCLES FOR
HUMAN BEINGS-BIONIC MEN
AND WOMEN
The McKibben muscles along with the
EAP strips can be used to replace
damaged muscles in handicapped people. Years
from now, the McKibben muscles
could also conceivably replace damaged
human muscles, leading to partially
bionic men and bionic women of the
future as shown in the figure 10.
Fig.10
These biorobotic muscles:
.. Reduce the metabolic cost of
locomotion.
.. Reduce the level of perceived
effort.
.. Improve gait symmetry as
measured by kinematics and
kinetic techniques.
.. Consume less oxygen and energy
than even a natural system.
CONCLUSION:
Electro active polymers are
changing the paradigm about the
complexity of robots. In the future, we
see the potential to emulate the resilience
and fracture tolerance of biological
muscles, enabling us to build simple
robots that dig and operate cooperatively
like ants, soft-land like cats or traverse
long distances like a grasshopper. The
observed remarkable vibrational
characteristics of IPMC composite
artificial muscles clearly point to the
potential of these muscles for
biomimetic applications such as
swimming robotic structures, wing
flapping flying machines, slithering
snakes, heart and circulation assist
devices, peristaltic pumps, etc. It has
recently been established that by tweaking
the chemical composition of IPMC, the
force capability of these muscles can be
greatly improved. IPMCs
are the artificial muscles that give space
robots animal-like flexibility and
manipulation ability based on a simple,
light-weight strip of highly flexible
plastic that bends and functions
similarly to human fingers when
electrical voltage is applied to it. Two
EAP actuators are used as miniature
wipers to clear dust off the viewing
windows of optical and infrared science
instruments. Studies made by robotics
specialists and neurophysiologists
suggest that McKibben artificial
muscles can be used to develop
biorobotic arms for handicapped people. Years
from now, the McKibben muscles
could also conceivably replace
damaged human muscles, leading to
partially bionic men and bionic
women of the future.
REFERENCES:
Alexander R. M., Elastic Mechanisms
in Animal Movement , The Cambridge
University Press: Cambridge, 1988.
Bar-Cohen Y., T. Xue and S.-S., Lih,
"Polymer Piezoelectric Transducers for
Ultrasonic NDE,
Bar-Cohen, Y., (Ed.), Proceedings of the
Electro active Polymer Actuators and
Devices, Smart Structures and Materials
1999, Volume 3669, pp. 1-414, (1999a).
Full, R.J., and Tu, M.S., Mechanics of
six-legged runners. J. Exp. Biol.
Vol.148, pp. 129-146 (1990).
Hathaway K.B., Clark A.E.,
"Magnetostrictive materials," MRS
BULLETIN, Vol.18, No. 4 (April 1993),
pp. 34-41.
PRESENTED BY
S.SRIKANTH
III.B.TECH(E.E.E)
K.S.R.M.C.E
Email id:[email protected]
Contact no:9705097474
P.SUJAN KUMAR REDDY
III.B.TECH(E.E.E)
K.S.R.M.C.E
Email id:[email protected] Contact no:08562247248
INTRODUCTION
What does hacking actually mean? Hacking means stealing data from a system over the internet without the permission of the administrator. Ethical hacking means accessing data with the permission of the user: in other words, hacking our own network to find its vulnerabilities and resolving the loopholes. As the saying goes, to catch a thief one should think like a thief, and that is exactly the idea being used here.
Hackers are not, in fact, computer criminals. The media is responsible for this impression: newspapers have projected hackers as computer vandals, but in reality such people should be called crackers, not hackers. Since almost everyone now knows only the word "hackers", we use the term ethical hacking, meaning hacking directed against hacking, or anti-hacking. Ethical hackers know a great deal about computers, both software and hardware.
We know that hacking is a crime but ethical hacking is not. This paper explains how hacking is done so that one can also understand how a system can be protected from hackers. Cyber crime has become punishable under law and is considered a serious offence; the cyber forensics department in India has recently stated that hackers, if caught, face years of imprisonment and a lifetime ban on using the internet. But knowledge itself is not a crime, and we have to use this knowledge for good purposes.
In this respect, a small anecdote: there was once a 13-year-old hacker in the US who, with another hacker friend, used to relish programming and hacking. They always enjoyed breaking into each other's systems and proving their superiority. Both were immensely intelligent and had the perfect mind for business. These geeks too could have crossed the line, become crackers, done all sorts of stupid things and in effect ruined their lives. Fortunately for them, and also fortunately for us, they did nothing like that. Today we know them as Bill Gates and Paul Allen, both of whom, as most of you know, became millionaires.
HACKERS CLASSIFICATION
Hackers are generally classified into 5 types:
1) Black hat hackers - hackers who use their knowledge for destructive purposes. They are also called crackers and are very dangerous to society.
2) White hat hackers - those who use their knowledge for constructive purposes; they generally work for organizations.
3) Grey hat hackers - people who hack networks and release them only once they have been paid, legally.
4) Haxors - persons who do not really know anything about hacking; they just know the term from the media, download some software from the net and try to hack, but they are easily caught by the police.
5) Suicide hackers - a recent category; they are terrorists who are ready to announce their names, declare that they have hacked a server, and openly challenge police departments. (Recently the C.I.D website of our country was hacked by Pakistani terrorists who issued an open challenge, yet nothing was done because there are no strict cybercrime rules at the Interpol level.)
The various types of hacking procedures are explained below.
SYSTEM HACKING
For system hacking we must know about the registry; it is the core of the operating system. If you mess with it you may need to reinstall your operating system, so keep installation disks ready. But if you do conquer the registry, you can control the whole computer, even the whole LAN for that matter. Controlling the registry is comparable to having root access on a UNIX box. The registry should not be edited unless you know it properly; there are many registry tips, and the entire appearance of Windows can be changed by changing the default values in the registry.
NETWORK HACKING
This is hacking done with the help of the internet. Most people say that Windows is crackable but not Linux, yet it can be just as easy to hack Linux as XP; it depends on the person and the software being used.
Here we have to know about telnet. It is the ultimate hacking tool that every hacker must know how to use before he can even think about hacking into servers. TELNET is better described as a protocol that runs over TCP/IP. It can be used to connect to remote computers and to run command-line programs by simply typing commands into its window. Telnet does not use the resources of the client's computer but those of the server to which the client has connected. Basically it is a terminal emulation program that allows us to connect to remote computers. It is found at c:\windows\telnet.exe on WIN9X systems and c:\winnt\system32\telnet.exe on NT machines. We can connect to remote computers using telnet with the basic syntax c:\>telnet hostname.com, where the host name is something like www.yahoo.com.
IP ADDRESS
Just as in the real world everyone has an individual home address or telephone number to enable easy access, all computers connected to the internet are given a unique Internet Protocol address that can be used to contact that particular computer. In geek language, an IP address is a decimal notation that divides a 32-bit internet address into four 8-bit fields. An example of a typical IP address is 202.34.12.23, which can be broken down as follows: 202 represents the first 8 bits, and so on. There are countless IP addresses in use in today's wired age.
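A small sketch of the dotted-decimal idea just described: splitting an address into its four 8-bit fields and packing them into the underlying 32-bit number (the address used is the example from the text).

def ip_to_octets(ip):
    # Split a dotted-decimal IPv4 address into its four 8-bit fields.
    octets = [int(part) for part in ip.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return octets

def octets_to_int(octets):
    # Pack four 8-bit fields into the single 32-bit integer they represent.
    value = 0
    for o in octets:
        value = (value << 8) | o
    return value

print(ip_to_octets("202.34.12.23"))      # [202, 34, 12, 23]
print(octets_to_int([202, 34, 12, 23]))  # 3391228951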
Finding the IP address of your own system:
1) Connect to the internet.
2) Launch MS-DOS.
3) Type netstat -n at the prompt.
The Local Address field in the output denotes the IP address of your system. If we want to hack a system we must know its IP address first; without it we cannot hack efficiently. An IP address can be represented in various forms, such as a domain name system (DNS) name, dword format, octal, hexadecimal, or a cross-breed of these. When we type a web site in the URL bar we can see the IP address changing into various forms at the bottom of the browser. To understand this we must first know the networking concept. When we type a website into our browser we will see, after the host name, some dots such as www.yahoo.com.... at the bottom of the browser. What this reveals is that the browser first searches the DNS of the ISP; if the website is not found there, the query is passed to a root server, and the root server checks the .com/.edu/.in etc. zones. There is an organization called IANA which operates the root servers; there are 13 root servers all over the world. IANA manages the domain name system, is located in the U.S., and every domain must be registered through it. It has divisions like a .com organizer and an .edu organizer, so if one searches for a .com site which is not found in the local DNS server, the query is transferred to the .com server, which is the meaning of the dots shown after .com at the bottom of the browser. There are some IP ranges which should not be scanned, like military addresses.
PORT NUMBERS AND SCANNING
Every system connected to the internet has a number of ports open on it. Ports are basically virtual doors that allow the inflow and outflow of data packets. Without open ports, no data communication can take place on a particular system. Typically, each time a client establishes a new connection over the network, a randomly chosen port number gets opened. Similarly, each time a service is enabled on a server, it automatically opens a predefined port number and listens for any clients who might want to establish a connection. Port numbers are of three different types:
*well known port numbers
*registered port numbers
*dynamic/private port numbers
For example, the FTP service usually runs on port 21 on most servers on the internet.
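As a small illustration of a service listening on a predefined port, the sketch below attempts a plain TCP connection to one port; the host and port are placeholders, and such checks should only be run against systems you are authorized to test.

import socket

def is_port_open(host, port, timeout=2.0):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder target: check whether FTP's default port 21 is open on your own machine
print(is_port_open("127.0.0.1", 21))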
FINDING THE IP ADDRESS OF THE VICTIM
Now that we are ready, we need to know the IP address of the target. Suppose we want to find the IP address of a friend; there are several methods. One is visiting your friend's computer and typing ipconfig /all at the DOS prompt, which shows his IP address. Another is reading his IP address from email headers. A third is inviting him to chat and sending him a file: while the person on the other side is downloading it, go to the command prompt and type netstat -n, and the Foreign Address column will show his IP address. After knowing the IP address, we can prepare for the attack.
HIDING YOUR IP ADDRESS
If we want to hide our IP address, we can use network address translation (NAT), use proxy servers, or change our IP address with certain commands before connecting to the internet.
PROXY SERVERS
There is a famous kind of hacking called Google hacking; it deals with the loopholes of Google. Suppose we log into an orkut account and then log out: even though we are logged out, Google will store our gmail id, and it is clearly visible at the upper side of www.google.com, where we can see our mail id. If we search for anything, it will record all the data searched. Believe it, Google keeps all these records, and if you search for an item like HOW TO KILL THE PRESIDENT it will store everything you searched; even after you close the page it keeps other servers active, ready to record which pages you open afterwards and what you are viewing, so it is easy to be caught. To prevent this, one should not search like that, or should use other search engines like ask.com. This is where proxy servers come into the picture. A proxy server acts as a buffer between you and the remote host to which you are connected; instead of communicating directly with the host, your system establishes a connection with the proxy server, so the IP address of the proxy server is read rather than your own. In this way it acts as a gateway between you and the remote host. There are many proxy servers, such as Squid, WinGate, and WinProxy. We can also reach sites through them when those sites are blocked in an organization, which is where they are especially useful.
TRACING AN IP ADDRESS
Suppose you want to trace an IP address to know where the system is located. There are different methods, such as manual trial and error, reverse DNS lookup, whois queries and traceroute, all available on the internet; by installing these tools and entering the target IP address in the space provided, we can trace the location of the destination. There are also visual tools like NeoTrace Pro and 3D Traceroute which pinpoint the exact location. If we type a website name like www.ksrmce.ac.in, we get all the information about that web page, like creation date, owner, address, etc.
PORTSCANNING:
Ports are scanned to find whether they are open or not; if they are not found alive after scanning we cannot penetrate such systems. Ports can easily be scanned with software readily available on the internet.
SNIFFING
Sniffers capture, interpret and save packets sent across the network for analysis. If any suspicious details are flowing, the sniffer can stop that packet; sniffers are used mainly by intelligence departments to trace where a packet is coming from.
TROJANS
A Trojan is a malicious program that, when installed on a system, can be used for nefarious purposes by an attacker. It contains two parts, a server part and a client part. The attacker attaches the Trojan to a normal file and sends it over the internet; once the person on the other side opens it, it installs a backdoor and hands control of the system over to the attacker. For example, after the Trojan is installed, if the victim switches on his webcam we can switch it off; in this way we can do almost anything.
KEYLOGGERS,VIRUSES
Keyloggers are generally of two types. A hardware keylogger is a readily available micro-sized chip that is attached to the victim's keyboard; whenever he types anything, it records all the keystrokes. Software keyloggers are of two kinds, normal and stealth: a normal one can be uninstalled, while a stealth one, once installed, cannot be removed until the next formatting. In software such as Perfect Keylogger an arrangement can be made so that every 5 minutes the recorded data is sent to a mail address. These keyloggers are dangerous, but installation is difficult.
PASSWORD HACKING
Everyone is interested to know the passwords of others. In Windows, the administrator password is stored in the SAM file in encrypted form. Password hacking is mainly divided into four types:
1) Password guessing - here we have to know the details of the victim, and by trying different combinations we may acquire the password, though it is difficult. A good example: if anyone had a Rediffmail account (at least 3 years back), there was an option on rediff.com, FORGOTTEN PASSWORD?, which simply asked for details like your mother's name and date of birth; if the details matched those entered at the time of signing up, you could change the password. It was a loophole in Rediff, and knowing personal details is easy through social community sites like orkut.
2) Default passwords - there are default passwords set for the BIOS by different companies.
3) Dictionary-based attacks - the tool searches through a word list, and if a word matches, the password is revealed.
4) Brute-force attacks - the tool tries all possible combinations and will definitely reveal the password eventually, but the time required depends upon the password strength.
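A rough sketch of why brute-force time depends on password strength; the guess rate below is an arbitrary illustrative figure, not a measured one.

def brute_force_seconds(alphabet_size, length, guesses_per_sec=1e9):
    # Worst-case time to try every password of the given length over the given alphabet.
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_sec

# 8-character password: lowercase+uppercase+digits (62 symbols) vs. lowercase only (26)
print(brute_force_seconds(62, 8))  # ~2.2e5 seconds (about 2.5 days) at 1e9 guesses/sec
print(brute_force_seconds(26, 8))  # ~209 seconds at the same assumed rate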
STEGANOGRAPHY
It is the art of hiding secret data inside other data, for example hiding MP3 data inside a text file, binding the two together and sending the result over the internet. It is reported that Bin Laden's group used this technique for the 9/11 attacks, sending a notepad message hidden inside a picture to team members by email; no one noticed, as it looked like a normal picture with data hidden inside it. A widely circulated internet legend adds that if you open Notepad, type Q33N in capitals, change the font to Wingdings and increase the size to 72, you can see a plane crashing into two towers, Q33N being claimed (inaccurately) to be the number of a plane that hit the towers.
CONCLUSION
We have discussed some topics in hacking; there are many more, like log file hacking, SQL injection attacks, Kerberos torn apart, encryption algorithms, social engineering, DDoS attacks, virus programming, etc. These are very large topics. Finally, there are many loopholes in XP/Vista; in XP, for example, the registry can easily be changed. Hackers are growing in number day by day and some countries, like Pakistan and China, are encouraging them, and they are hacking our sites. There are many examples: before the Mumbai attacks, terrorists hacked a Wi-Fi network and used the internet freely, and our police did not catch the terrorists who used it but instead disturbed the normal life of the person who owned that Wi-Fi network. To defend against such attacks we have to learn ethical hacking, which indirectly means learning hacking ethically, so our government should encourage the security field. The security field is important and every company needs security professionals. In recent years many hackers have emerged while the number of ethical hackers is not increasing at the same pace; one should be a white hat hacker and not a black hat hacker, and we should defend our country with such hackers.
DISADVANTAGES OF HACKING
We lose important confidential data, a hacker may well get caught, and hacking causes large revenue losses.
ADVANTAGES OF ETHICAL HACKING
Ethical hackers have a respectable position in society and many job opportunities, and service can be done by protecting national servers from hackers.
We know that Google, Yahoo, Rediff and even NASA have been hacked (the last by a young boy), causing great revenue loss, so cyber crime departments have developed and many laws have been brought in to reduce these crimes.
STEPS TO BE TAKEN BY EVERYONE SO THAT THEY CANNOT BE HACKED
1) Do not accept any data from the internet if it comes in .exe format, and do not download a file unless you know it does not contain any keyloggers.
2) Use a strong password which is a combination of alphanumeric characters.
3) Be careful when searching in Google; phishing attacks are possible through fake login pages, so never give credit card information to them, and never download "hacking tools", as they usually come with viruses.
Source: EC-Council; Ankit Fadia (AFCEH).
1)P.SHANMUKHA SREENIVAS 2)P.NIKHIL
2ND YEAR B.TECH, 2ND YEAR B.TECH,
EEE, EEE,
S.V.U.C.E. S.V.U.C.E.
PH-9885296909 PH-9703594429
Email- Email-
[email protected] [email protected]
UTILITY FOG
V.ANJALI Y.L.SWATHI
[email protected] [email protected]
3rd B.Tech -ECE
DEPARTMENT OF ELECTRONICS AND COMMUNICATION
ENGINEERING
MADANAPALLE INSTITUTE OF TECHNOLOGY AND SCIENCE
(Approved by AICTE, New Delhi & Affiliated to JNTU, Hyderabad)
Introduction:
The human body is a pretty nifty gadget. It has some maddening limitations, most of which are due to its essential nature as a bag of seawater. It wouldn't be too hard, given nanotechnology, to design a human body that was stronger, lighter, with a faster brain and less limited senses, able to operate comfortably in any natural environment on earth or in outer space (excluding the Sun and a few other obvious places).
This Utility Fog material, composed of the individual Foglets described below, would float loosely over the driver of a car and, in the event of an accident, would hold together via their 12 arms to form an invisible shield protecting the driver from injury. In the virtual environment of the uploads, not only can the environment be anything you like; you can be anything you like. You can be big or small; you can be lighter than air, and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling. You can be these things in the real world, too, if your body is made of Utility Fog.
What is Utility Fog?
It is an intelligent substance, able to simulate the physical properties of most common substances, and having enough processing power that human-level processes could run in a handful or so of it. Imagine a microscopic robot. It has a body about the size of a human cell and 12 arms sticking out in all directions.
A bucketful of such robots might form a ``robot crystal'' by linking their arms up into a lattice structure. Now take a room, with people, furniture, and other objects in it--it's still mostly empty air. Fill the air completely full of robots. The robots are called Foglets and the substance they form is Utility Fog. With the right programming, the robots can exert any force in any direction on the surface of any object. They can support the object, so that it apparently floats in the air. They can support a person, applying the same pressures to the seat of the pants that a chair would. They can exert the same resisting forces that elbows and fingertips would receive from the arms and back of the chair.
A program running in the Utility Fog can thus operate in two modes: first, the ``naive'' mode, where the robots act much like cells, and each robot occupies a particular position and does a particular function in a given object; the second, or ``Fog'' mode, has the robots acting more like the pixels on a TV screen. The object is then formed of a pattern of robots, which vary their properties according to which part of the object they are representing at the time.
An object can then move across a cloud of robots without the individual robots moving, just as the pixels on a CRT remain stationary while pictures move around on the screen. The Utility Fog which is simulating air needs to be impalpable. One would like to be able to walk through a Fog-filled room without the feeling of having been cast into a block of solid Lucite. Of course if one is a Fog-mode upload this is straightforward; but the whole point of having Fog instead of a purely virtual reality is to mix virtual and physical objects in a seamless way. To this end, the robots representing empty space can run a fluid-flow simulation of what the air would be doing if the robots weren't there. Then each robot moves where the air it displaces would move in its absence. The other major functions the air performs, that humans notice, are transmitting sound and light. Both of these properties are obscured by the presence of Fog in the air, but both can be simulated at a level sufficient to fool the senses of humans and most animals, by transmitting the information through the Fog by means we'll consider later and reconstructing it physically.
To understand why we want to fill the air with microscopic robots only to go to so much trouble to make it seem as if they weren't there, consider the advantages of a TV or computer screen over an ordinary picture. Objects on the screen can appear and disappear at will; they are not constrained by the laws of physics. The whole scene can shift instantly from one apparent locale to another. Completely imaginary constructions, not possible to build in physical reality, could be commonplace. Virtually anything imaginable could be given tangible reality in a Utility Fog environment. Why not, instead, build a virtual reality machine that produces a purely sensory (but indistinguishable) version of the same apparent world? The Fog acts as a continuous bridge between actual physical reality and virtual reality. The Fog is a universal effector as well as a universal sensor. Any (real) object in the fog environment can be manipulated with an extremely wide array of patterns of pressure, force, and support; measured; analyzed; weighed; cut; reassembled; or reduced to bacteria-sized pieces and sorted for recycling.
(Foglets run on electricity, but they store hydrogen as an energy buffer. We pick hydrogen in part because it's almost certain to be a fuel of choice in the nanotech world, and thus we can be sure that the process of converting hydrogen and oxygen to water and energy, as well as the process of converting energy and water to hydrogen and oxygen, will be well understood.)
General Properties and Uses
As well as forming an extension of the senses and muscles of individual people, the Fog can act as a generalized infrastructure for society at large. Fog City need have no permanent buildings of concrete, no roads of asphalt, no cars, trucks, or buses. It will be more efficient to build dedicated machines for long distance energy and information propagation, and physical transport.
For local use and interface to the worldwide networks, the Fog is ideal for all of these functions. It can act as shelter, telephone, computer, and automobile. It will be almost any common household object, appearing from nowhere when needed (and disappearing afterwards). It gains a certain efficiency from this extreme of polymorphism; consider the number of hardcopy photographs necessary to store all the images one sees on a television or computer screen. With Utility Fog we can have one ``display'' and keep all our physical possessions on disk.
Another item of infrastructure that will become increasingly important in the future is information processing. Nanotechnology will allow us to build some really monster computers. Although each Foglet will possess a comparatively small processor--which is to say the power of a current-day supercomputer--there are about 16 million Foglets to a cubic inch. When those Foglets are not doing anything else, i.e. when they are simulating the interior of a solid object or air that nothing is passing through at the moment, they can be used as a computing resource (with the caveats below).
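A quick back-of-the-envelope sketch of the idle-Fog computing idea above, using the paper's figure of ~16 million Foglets per cubic inch; the per-Foglet processing rate and idle fraction are placeholder assumptions.

def idle_fog_compute(room_volume_cubic_inches, foglets_per_cubic_inch=16e6,
                     ops_per_foglet_per_sec=1e6, idle_fraction=0.9):
    # Aggregate operations/sec available from Foglets that are currently idle.
    total_foglets = room_volume_cubic_inches * foglets_per_cubic_inch
    return total_foglets * idle_fraction * ops_per_foglet_per_sec

# A 10 ft x 10 ft x 8 ft room is about 1.38 million cubic inches
print(idle_fog_compute(120 * 120 * 96))  # ~2e19 ops/sec under these assumptions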
Advantages
Advantages of a Utility Fog Environment:
Another major advantage for space-filling Fog is safety. In a car (or its nanotech descendant) Fog forms a dynamic form-fitting cushion that protects better than any seatbelt of nylon fibres. An appropriately built house filled with Fog could even protect its inhabitants from the (physical) effects of a nuclear weapon within 95% or so of its lethal blast area. There are many more mundane ways the Fog can protect its occupants, not the least being physically to remove bacteria, mites, pollen, and so forth, from the air. A Fog-filled home would no longer be the place that most accidents happen. First, by performing most household tasks using Fog as an instrumentality, the cuts and falls that accompany the use of knives, power tools, ladders, and so forth, can be eliminated. Secondly, the other major class of household accidents, young children who injure themselves out of ignorance, can be avoided by a number of means. A child who climbed over a stair rail would float harmlessly to the floor. A child could not pull a bookcase over on itself; falling over would not be among the bookcase's repertoire. Power tools, kitchen implements, and cleaning chemicals would not normally exist; they or their analogues would be called into existence when needed and vanish instead of having to be cleaned and put away.
Outside the home, the possibilities are, if anything, greater. One can easily imagine ``industrial Fog'' which forms a factory. It would consist of larger robots. Unlike domestic Fog, which would have the density and strength of balsa wood, industrial Fog could have bulk properties resembling hardwood or aluminium. A nanotechnology-age factory would probably consist of a mass of Fog with special-purpose reactors embedded in it, where high-energy chemical transformations could take place. All the physical manipulation, transport, assembly, and so forth would be done by the Fog.
The Limits of Utility Fog Capability
When discussing something as far outside of everyday experience as the Utility Fog, it is a good idea to delineate both sides of the boundary. The Fog is capable of so many literally amazing things that we will point out a few of the things it isn't capable of:
--Anything requiring hard metal (cold steel?). For example, Fog couldn't simulate a drill bit cutting through hardwood. It would be able to cut the hole, but the process would be better described as intelligent sandpaper.
--Anything requiring both high strength and low volume. A parachute could not be made of Fog (unless, of course, all the air was filled with Fog, in which case one could simply fly).
--Anything requiring high heat. A Fog fire blazing merrily away on Fog logs in a fireplace would feel warm on the skin a few feet away; it would feel the same to a hand inserted into the ``flame''.
--Anything requiring molecular manipulation or chemical transformation. Foglets are simply on the wrong scale to play with atoms. In particular, they cannot reproduce themselves. On the other hand, they can do things like prepare food the same way a human cook does--by mixing, stirring, and using special-purpose devices that were designed for them to use.
--Fog cannot simulate food, or anything else that is destined to be broken down chemically. Eating it would be like eating the same amount of sand or sawdust.
--Fog can simulate air to the touch but not to the eyes. The best indications are that it would look like heavy fog. Thus the Fog would need to support a pair of holographic goggles in front of the eyes of an embedded user. Such goggles are clearly within the capabilities of the same level of nanotechnology as is needed for the Fog, but are beyond the scope of this paper.
Applications
In Space Exploration:
The major systems of spaceships will need to be made with special-purpose nanotechnological mechanisms, and indeed with such mechanisms pushed much closer to their true capacities than anything we have talked about heretofore. In the spaceship's cabin, however, will be an acceleration couch. The Utility Fog makes a better acceleration couch, anyway. Fill the cabin with Utility Fog and never worry about floating out of reach of a handhold. Instruments, consoles, and cabinets for equipment and supplies are not needed. Non-simulable items can be embedded in the fog in what are apparently bulkheads. The Fog can add great structural strength to the ship itself; the rest of the structure need be not much more than a balloon. The same is true for spacesuits: Fog inside the suit manages the air pressure and makes motion easy; Fog outside gives extremely fine manipulating ability for various tasks. Of course, like the ship, the suit contains many special purpose non-Fog mechanisms. Surround the space station with Fog. It needs radiation shielding anyway (if the occupants are long-term); use big industrial Foglets with lots of redundancy in the mechanism; even so they may get recycled fairly often. It also makes a good tugboat for docking spaceships. Homesteaders on the Moon could bring along a batch of heavy duty Fog as well as the special purpose nanotech power generation and waste recycling equipment. There will be a million and one things, of the ordinary yet arduous physical task kind, that must be done to set up and maintain a self-sufficient household.
In Telepresence
An eidolon is the common term for a sophont's telepresence in utility fog. An ei
dolon is
generally seen as less personal than a visit in person, but more so than a virtu
al space interaction
and far more so than text, audio, or audio visual communications. It is commonly
used as an
alternative to a personal visit when direct contact is impossible because of gr
eat distance,
radically different environmental requirements, distrust between the parties con
cerned, or matters
of social convention and propriety. Some individuals who avoid using virtual space for practical or personal reasons will still consent to send or interact with an eidolon as an alternative.
Telepresence refers to a set of technologies which allow a person to feel as if
they were
present, to give the appearance that they were present, or to have an effect, at
a location other
than their true location. Telepresence requires that the senses of the user, or
users, are provided
with such stimuli as to give the feeling of being in that other location. Additi
onally, the user(s)
may be given the ability to affect the remote location. In this case, the user's
position,
movements, actions, voice, etc. may be sensed, transmitted and duplicated in the
remote location
to bring about this effect. Therefore information may be travelling in both dire
ctions between the
user and the remote location.
Conclusion:
This paper throws light on a technology in the branch of nanorobotics named
UTILITY FOG. Utility fog is a collection of tiny robots. The robots would be mic
roscopic, with
extending arms reaching in several different directions, and can perform lattice
reconfiguration.
Grabbers at the ends of the arms would allow the robots to mechanically link to
one another and
share both information and energy, enabling them to act as a continuous substanc
e with
mechanical and optical properties that could be varied over a wide range. Each f
oglet would have
substantial computing power, and would be able to communicate with its neighbour
s.
A paper presentation
From
LakiReddy Bali Reddy College Of Engineering
L.B.Reddy Nagar, Mylavaram.
Paper Presentations:
K. Sony, G.Anusha,
06761A0427 06761A0421
III B. Tech, ECE, III B. Tech, ECE,
LBRCE, LBRCE,
Mylavaram 521 230 Mylavaram 521 230
[email protected] [email protected]
Cell: 9866128278 Cell No: 9951192595
ABSTRACT
Nano-technology is an emerging technology of manipulating atoms at nanoscale dim
ensions. The
basic idea of nano-technology is to master over the characteristics of matter to
develop highly
efficient systems. It's a hybrid science combining engineering, information technology, chemistry, and biology.
The scope and applications of nano-technology are really endless and are limited
only by the
imagination of the individual developing them. The works of scientists at Harvar
d University have
shown Silicon nanowires to be an important tool in the study of neurons and thei
r related aspects.
The silicon nanowires have the ability to form connections with the neurons call
ed Hybrid
synapses or Artificial synapses similar to the links the neurons form between them
in the brain.
This property of the nanowires facilitates the measurement, manipulation or inhi
bition of signals
passing along the neurons. This ability to analyse and manipulate the signals pa
ssing along the
neurons can help to develop highly sophisticated interfaces between the brain an
d external
Neuroprosthetics assisting the paralytic patients to make movements. The silicon
nanowires are
very useful in the development of Bio-Computers. The properties of silicon nanowires relevant to neurons, i.e. those that make them significant in the above and other applications, are discussed in this paper.
INTRODUCTION TO NANO-TECHNOLOGY
Nano-technology is an emerging technology of manipulating atoms at nanoscale
dimensions (a nanometer is one billionth of a meter). It's a hybrid science combining
Engineering, Information technology, Chemistry and Biology. The idea of nano-tec
hnology
is to master over the characteristics of matter in an intelligent way to develop
highly
efficient systems.
Nobel Laureate Richard P. Feynman conceived of nano-technology as early as 1959.
But
people at that time were pessimistic about his theory of manipulating atoms. It
was even
considered as a science fiction. This is because the people earlier had no contr
ol over
particle size or any knowledge of the nanoscale. But Scanning Tunnel Microscope
(STM)
& Atomic Force Microscope (AFM) have changed the scenario. They are capable of
creating pictures of individual atoms and moving them from one place to another.
Building things atom by atom, molecule by molecule is nanotechnology, so it is also called 'Molecular manufacturing'.
To build things at molecular level a proper understanding of the size, shape and
strength
of atoms or molecules is required. Atoms and molecules stick together because of
their
complementary shapes. The assembly of many such atoms to form a product is brought
about by a molecular assembler such as a robotic arm . Thus nano-technology is not
just a
miniaturization process but also a bottom-up manufacturing approach.
Molecular manufacturing is resource efficient as the products contain less mater
ial than
conventional products and thus is energy efficient as well. The scope and potential applications of nano-technology cannot be fully defined; they are practically endless, restricted only by the imagination of the one developing them.
Research work done by scientists at Harvard University has shown nanowires to be an efficient tool in the study of the neurosciences.
NANO-TECHNOLOGY AND NEUROSCIENCES
Nano-technology has made and is believed to make many ideas which were once thou
ght
as science fiction come true. Silicon nanowires one of the products of nano-tech
nology
have opened a whole new interface between nanowires and Neurosciences which can
make
many dreams come true: paralytic patients making movements on their own, Bio-Computers with the strengths of both electronic and biological systems, etc.
Researchers at Harvard University have made silicon nanowires that can precisely measure multiple electric signals within a neuron. These ultra-small silicon wires could help brain
scientists to understand the underpinnings of learning and memory. These Silicon
nanowires
have opened a whole new interface between Nano-technology and the neurosciences, and could also be used
in neural prosthetics, providing electrodes far more sensitive than those curren
tly used.
Before going into the topic we would like to give an introduction on what a neur
on is, how
the signals are carried along a neuron and between two neurons.
Neuron, the mode of conduction of signals along a
neuron and between two neurons
NEURON
Neurons, the functional units of the nervous system, are highly specialized cells
whose cell
membrane is highly sensitive to stimuli. This property of neurons is very import
ant in the
conduction of signals along their length.
The Dendrites and Axon project out from the cell body or Cyton. They are togethe
r called as
neuronal projections. The Dendrites are generally many in number and serve in re
ceiving signals
from other neurons. The signals received through Dendrites are passed to Axons w
hich convey the
signal to the next neuron.
The basic structure of a neuron is illustrated by the image given above.
Conduction of nerve impulse:
Conduction along the length of the neuron is carried out by means of an 'action potential', a rapid swing in the cell membrane potential from negative to positive and back to negative within a few milliseconds.
The phenomenon of the action potential is illustrated by the image given above.
When a neuron receives a stimulus, the permeability of the neuron's cell membrane towards ions changes, resulting in a change in membrane potential. This constitutes the action potential.
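To illustrate this rapid swing numerically, a minimal leaky integrate-and-fire sketch in Python is given below; the membrane constants and stimulus value used here are illustrative assumptions, not physiological measurements from the paper.

import numpy as np

# Minimal leaky integrate-and-fire sketch of an action-potential-like event.
# All parameter values below are illustrative assumptions, not physiological data.
dt = 0.1          # time step, ms
T = 50.0          # total simulated time, ms
tau_m = 10.0      # membrane time constant, ms
V_rest = -70.0    # resting membrane potential, mV
V_thresh = -55.0  # firing threshold, mV
V_spike = 30.0    # peak value drawn when the neuron "fires", mV
I = 20.0          # constant stimulus (arbitrary units)

V = V_rest
trace = []
for step in range(int(T / dt)):
    # Leaky integration: the membrane relaxes to rest and is pushed up by the stimulus.
    V += (-(V - V_rest) + I) / tau_m * dt
    if V >= V_thresh:
        trace.append(V_spike)  # rapid swing to a positive potential
        V = V_rest             # ...and back to negative (reset)
    else:
        trace.append(V)

print("peak membrane potential (mV):", max(trace))
print("number of spikes:", sum(1 for v in trace if v == V_spike))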
Between neurons the situation is entirely different. Neurons have no physical contact between them. They are separated by a minute space called the synapse. The conduction of signals across the synapse is accomplished by chemicals called neurotransmitters.
Neurotransmitters are chemicals that are produced and stored in synaptic vesicle
s at the tip of the
axon. They are released when the nerve impulse reaches the axon tip, allowing the signal to travel to the next neuron. Some examples of neurotransmitters are: Acetylcholine, Dopamine, Serotonin, and GABA (Gamma-Amino Butyric Acid).
The transmission of nerve impulse through the synapse is given by the above diag
ram.
Silicon nanowires, their synthesis and their interaction with the neurons
SILICON NANOWIRES
Lieber and his co-workers at Harvard University have made the silicon nanowires from silane gas (SiH4) in a vacuum furnace in the presence of
gold catalyst
particles. The gold catalyst particles determine the diameter of the nano wires
. Nanowires of 20
nm diameter are generally used in neuron experiments.
THE WAY THEY INTERACT WITH NEURONAL PROJECTIONS
The size of the silicon nanowires being comparable to that of the neuronal projections, they can form
hybrid synapses when they come in contact with them, similar to the links neurons
form between
them. This makes possible the analysis of the electrophysiological activities of
the brain at the
level of individual neurons without causing any damage to the neurons unlike the
relatively crude
techniques available now. Microfabricated electrode arrays are too large to meas
ure the neuronal
activity at the level of individual neurons while Micropippeted electrodes are i
nvasive causing
damage to the neurons.
The signals detected by the nanowires at the hybrid synapses are too minute to be measured directly. They need to be amplified, which is brought about by the nanowires themselves, since silicon is a semiconductor material.
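As a rough, hypothetical illustration of this amplification idea, the short Python sketch below treats the nanowire as a simple field-effect transducer with an assumed transconductance g_m; neither the transconductance nor the bias current is taken from Lieber's measurements.

# Hypothetical illustration of field-effect transduction in a semiconducting nanowire.
# g_m (transconductance) and the bias current are assumed values, not measured ones.
g_m = 5e-6           # assumed transconductance, amperes per volt (5 microsiemens)
I_bias = 100e-9      # assumed bias current through the nanowire, amperes

delta_V_gate = 2e-3           # a ~2 mV local potential change at the hybrid synapse
delta_I = g_m * delta_V_gate  # first-order change in nanowire current

print(f"current change: {delta_I*1e9:.2f} nA "
      f"({100 * delta_I / I_bias:.1f}% of the bias current)")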
The nanowires can even detect chemicals. Researchers have shown that the silicon
nanowires can detect molecular markers with high precision. So in the near futur
e there is a
possibility that they can be configured to detect neurotransmitters even.
Experiments of Lieber and his co-workers
The experiments and observations of Lieber and his co-workers have opened a new
platform in the
application of nano-technology to neurosciences.
.. The nanowires were connected to electrical contacts made of nickel and are th
en
mounted on a silicon chip that has been patterned with proteins that promote neu
ron
growth.
.. Next, they have seeded a rat brain neuron on the chip and waited for 7 to 10
days
during which the neuron grows. The neuron-friendly protein patterned on the sili
con
chip provides a path that directs the neuron's growth along the chip and ensures that it makes contact with the nanowires.
.. They have found that the silicon nanowires have established connections with
the
neuronal projections, creating artificial synapses similar to the links neurons fo
rm
between each other.
The interactions of silicon nanowires with neurons are shown by the above pictur
e
They have measured the electrical conductance along the length of the axon at as
many as 50
locations. They have found that the signals along the axon can be manipulated by
stimulating the
axon with an electrical pulse.
Through their experiments Lieber and his co-workers have
reported the formation of Hybrid synapses between neurons and nanowires. They ha
ve also shown
that signals passing along them can be measured or even be manipulated by extern
al electrical
stimulus.
The nanowires can bring about many advances in the field of neurosciences.
Advances that can be brought about by the use of Silicon nanowires
.. The nanowires have made possible the measurement of the electrophysiological
activity of the brain at the level of individual neurons in a non-invasive way u
nlike
the techniques being used at present.
.. The analysis of the nerve impulse as it passes along an individual neuron hel
ps in
understanding how the neuron processes the signals that pass along its length. T
his
gives an insight into the underpinnings of learning and memory.
.. The signals passing along the neuron can even be manipulated by giving an ext
ernal
electrical stimulus through the nanowires. This makes it possible to control the nerve impulses being carried along the neuron. This provides a new paradigm for
developing sophisticated interfaces between brain and external neuroprosthetics.
.. A wide range of thought processes can be converted into electrical signals wh
ich
helps paralytic patients in making some movements.
Similar principle underlies the development of Bio-Computers
.. The ability of nanowires to detect chemicals represents a new, powerful, and
flexible
approach for real-time cellular assays useful for drug discovery.
CONCLUSION:
We would like to conclude that nano-technology is a boon to mankind. It has
the potential to
outshine the advances brought about by Information Technology even. It is a scie
nce of BIO +
ENGINEERING + CHEMISTRY + I.T. Advances in the application of silicon nanowires can make the dream of paralytic people making considerable movements come true. Another important application of the silicon nanowires is the evolution of Bio-Computers.
THANK YOU
To,
R.V.S Satyanarayana
CONVENOR, SIGMOID 2K9
DEPARTMENT OF EEE
Sri Venkateswara University
Tirupati 517 502
EMAIL: [email protected]
Stereovision
B.KEERTHI V.JEEVITHA
[email protected] [email protected]
3rd B.Tech-ECE
DEPARTMENT OF ELECTRONICS AND COMMUNICATION
ENGINEERING
MADANAPALLE INSTITUTE OF TECHNOLOGYAND SCIENCE
(Approved by AICTE, New Delhi & Affiliated to JNTU, Hyderabad)
ABSTRACT:
Stereo vision helps us to see WHERE objects are in relation to our own bodies wi
th much greater
precision especially when those objects are moving toward or away from us in the
depth
dimension. The word "stereo" comes from the Greek word "stereos" which means fir
m or solid.
The way that machine stereo vision generates the third dimension is achieved usi
ng two cameras
by finding the same features in each of the two images and then measuring the di
stances to
objects containing these features by triangulation; that is, by intersecting the
lines of sight from
each camera to the object. We can Use two/multiple cameras or one moving camera.
Term
binocular vision is used when two cameras are employed, i.e. Two Eyes = Three Di
mensions
(3D)!
Compared to the alternative special purpose sensors like acoustics, radar, or la
ser range finders
stereo vision has the advantage that it achieves the 3-D acquisition without ene
rgy emission or
moving parts.
Stereo vision systems are widely used in autonomous robots, stereovision cameras
are used on
unmanned ground vehicles to measure the distance between the camera and objects
in the field of
view, for purposes of path planning and obstacle avoidance. It is very useful in
mobile robots
also.
Introduction:
Stereo vision helps us to see WHERE objects are in relation to our own bodies wi
th much greater
precision especially when those objects are moving toward or away from us in the
depth
dimension. The word "stereo" comes from the Greek word "stereos" which means fir
m or solid.
The way that machine stereo vision generates the third dimension is achieved usi
ng two cameras
by finding the same features in each of the two images and then measuring the di
stances to
objects containing these features by triangulation; that is, by intersecting the
lines of sight from
each camera to the object. We can Use two/multiple cameras or one moving camera.
Term
binocular vision is used when two cameras are employed, i.e. Two Eyes = Three Di
mensions
(3D)!
Compared to the alternative special purpose sensors like acoustics, radar, or la
ser range finders
stereo vision has the advantage that it achieves the 3-D acquisition without ene
rgy emission or
moving parts. More than 200 different stereo vision methods have been published an
d their number
is increasing month by month. Although the principle of computational stereo has
been known for
more than 20 years, new directions in stereo research are still under development.
Fundamentals of stereo vision :
A single image has no depth information. Humans infer depth from clues in the scen
e, but these
are ambiguous.
camera model:
Models how 3-D scene points are transformed into 2-D image points
The goal of stereo analysis:
The inverse process: From 2-D image coordinates to 3-D scene coordinates
Displacement of corresponding points from one image to the other is Disparity. F
rom the
disparity, we can calculate depth. All stereo algorithms have three basic steps:
feature extraction,
matching and triangulation.
Depth -The distance from the camera to object
Stereo baseline- The line between two camera lens centers
Conjugate pair- Two points in different images that are the projections of the s
ame point in the
scene
Disparity- The horizontal displacement between corresponding points
Epipolar plane: Plane passing through the optical centers and a point in the sce
ne.
Epipolar line: Intersection of the epipolar plane with the image plane.
Disparity map: Disparities of all points form the disparity map, the usual output from a stereo matching algorithm, often displayed as an image.
--There is no perfect stereo system except Human eyes.
Stereo Vision- Basics
Two cameras: Left and Right
Optical centers: OL and OR
Virtual image plane is projection of actual image
plane through optical centre
Baseline, b, is the separation between the optical
centers
Scene Point, P, imaged at
pL and pR
In the example figure, pL = 9 and pR = 3, so the disparity d = pL - pR = 6.
Disparity is the amount by which the two images of P are displaced relative to each other.
Here depth z = b*f/(p*d), where p is the pixel width, b the baseline, and f the focal length.
Triangulation method of finding Depth:
- 3D location of any visible point in the scene must lie on the straight line th
at passes through the
optical centre (centre of projection) and the projection of the point on the ima
ge plane.
- Binocular stereo vision determines the position of a point in the scene by fin
ding the intersection
of the two lines passing through the optical centers and the projection of the p
oint in each image.
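A minimal Python sketch of this depth-from-disparity relation is given below, assuming illustrative values for the baseline b, focal length f and pixel width p (they are not from any particular camera).

# Depth from disparity for a rectified binocular rig: z = (b * f) / (p * d).
# The camera parameters below are illustrative assumptions.
b = 0.12        # baseline between the two optical centres, metres
f = 0.008       # focal length, metres (8 mm lens)
p = 6e-6        # pixel width, metres (6 micrometre pixels)

def depth_from_disparity(d_pixels):
    """Return the depth (metres) of a scene point seen with disparity d (pixels)."""
    if d_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return (b * f) / (p * d_pixels)

# Example from the text above: pL = 9, pR = 3 gives a disparity of 6 pixels.
d = 9 - 3
print(f"disparity = {d} px  ->  depth = {depth_from_disparity(d):.2f} m")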
Problems in Stereovision
Two major problems in Stereovision are: Matching and reconstruction
Matching (hardest)
Finding corresponding elements in the two images that are projections of the sam
e scene point.
Ambiguous correspondence between points in the two images may lead to several di
fferent
consistent interpretations of the scene. Triangulation depends on solution of th
e correspondence
problem.
Reconstruction
Establishing 3-D coordinates from the 2-D image correspondences found during mat
ching.
Having found the corresponding points, we can compute the disparity map. Dispari
ty maps are
commonly expressed in pixels i.e.; number of pixels between corresponding points
in two images.
Disparity map can be converted to a 3D map of the scene if the geometry of the i
maging system is
known. Critical parameters used for reconstruction are: Baseline, camera focal l
ength, pixel size.
Trade-off
Small baseline: Matching easier
Large baseline: Depth precision better
Correspondence problem is very difficult because
.. Some points in each image will have no corresponding points in the other imag
e
.. They are not binocularly visible or they are only monocularly visible.
.. Cameras have different fields of view.
.. Occlusions may be present.
.. A stereo system must be able to determine parts that should not be matched.
Types of stereo systems:
Stereo techniques can be distinguished by several attributes, e.g., if they use
area based or featurebased
techniques, if they are applied to static or dynamic scenes, if they use passive
or active
techniques, and if they produce sparse or dense depth maps.
Two main classes of correspondence (matching) algorithm are: Correlation-based a
nd Feature
based.
Correlation-Based Methods:
Match image sub-windows in the two images using image correlation. This is the o
ldest technique
for
finding correspondence between image pixels.
Provide a dense disparity map (useful for reconstructing surfaces).
Easier to implement.
Scene points must have the same intensity in each image.
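A minimal sketch of such correlation-based matching for a single pixel of a rectified image pair is given below (Python with numpy); a practical system would add sub-pixel refinement, left-right consistency checks and better cost functions.

import numpy as np

def ssd_disparity(left, right, row, col, half_win=3, max_disp=32):
    """Correlation-based matching sketch: find the disparity of one left-image pixel
    by sliding a small window along the same row of the right image (rectified pair)
    and picking the shift with the smallest sum of squared differences."""
    patch = left[row - half_win:row + half_win + 1,
                 col - half_win:col + half_win + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d                      # candidate column in the right image
        if c - half_win < 0:
            break
        cand = right[row - half_win:row + half_win + 1,
                     c - half_win:c + half_win + 1].astype(float)
        cost = np.sum((patch - cand) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Tiny synthetic test: the right image is the left image shifted by 4 pixels.
left = np.random.randint(0, 255, (40, 60))
right = np.roll(left, -4, axis=1)
print("estimated disparity:", ssd_disparity(left, right, row=20, col=30))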
Feature-Based Methods:
Look for a feature in an image that matches a feature in the other.
Typical features used are:
- edge points
- line segments
- corners (junctions)
Provide sparse disparity maps (OK for applications like visual navigation).
Faster than correlation-based methods.
.. The quality of stereo matching results can be improved when using color info
rmation. Color
could be one interesting cue to compute more dense results because the results t
hat have been
reached so far with color stereo methods are rather encouraging. Therefore, we b
elieve that color
information is very helpful for the correspondence analysis.
.. The disadvantage of these techniques is that three times as much data has to be handled and computed when using color instead of gray values. Fortunately, this is not really a disadvantage in non-real-time applications.
.. Initially, images have been taken by static cameras in a static environment.
The main objective of
all proposed stereo methods was and still is the automatic search for correspond
ing points in the
two images.
.. Recently, a new trend in stereo research is to use motion to obtain more reli
able depth estimates.
This research direction, called dynamic stereo, is mostly pursued by the members
of the robot
vision community using mobile robots.
.. Although the use of (known) motion of the stereo system gives some additional
information to
depth computation, most of the stereo researchers still exclude motion informati
on from their
solutions because motion does not considerably improve the accuracy of the resul
ts, and it is not
always available.
Real-time Stereo Using Special Hardware:
Computational fast stereo techniques are required for real-time applications, es
pecially for mobile
robots and autonomous vehicles. General purpose computers are not fast enough to
meet real-time
requirements because of the algorithmic complexity of stereo vision techniques.
Consequently,
the use and/or development of special hardware are inevitable to achieve real-ti
me execution.
Several hardware implementations were presented during the past couple of years
and only some
will be mentioned here. Neural networks and transputers are successfully used fo
r stereo vision.
Integration of Stereo and Other Visual Modules:
Although the results of the stereo techniques mentioned so far are rather accept
able, the results
still lack accuracy. One possibility to improve stereo matching is to combine mu
ltiple stereo
techniques. A different direction in stereo research is to integrate stereo and
other visual modules
to obtain richer information on the shapes of the objects in the scene.
Example-- Integrate the three visual cues: stereo, vergence, and focus as source
s of depth. The
limitations of the individual cues are complementary to the strengths of the other cues.
An active stereo vision system has the following advantages:
It deals with scenes that do not contain sufficient features, such as edges or c
orners, which are
associated with intensity discontinuities, for the stereo matching process.
The correspondence problem is totally absent. For each pixel in the image that i
s illuminated by a
particular sheet of light from the projector or laser unit, the plane equation o
f the sheet of light
would have been computed from the projector calibration procedure; simple triang
ulation is only
required to compute the 3-D coordinates of the pixel.
It gives a very dense depth (or range) map.
It is applicable to many shape measurement and defect detection projects.
Compared to the alternative special purpose sensors like acoustics, radar, or la
ser range finders
stereo vision has the advantage that it achieves the 3-D acquisition without ene
rgy emission or
moving parts.
Its shortcomings are:
The system must be pre-calibrated. Unlike stereo vision systems with two cameras
, there exists no
self-calibration technique for recovering the geometry between the camera and th
e projector (or
laser) unit
For projector and camera pairs: well-controlled lighting is required, such syste
ms are therefore
restricted to working only in indoor environments.
Applications:
.. Stereo vision systems are widely used in autonomous robots. Images are a rich
source of
information about the surroundings of robots and can be used to calculate distan
ces of the objects
in the scene to avoid collision, for tracking of objects and localization.
.. Stereovision cameras are used on unmanned ground vehicles to measure the dist
ance between the
camera and objects in the field of view, for purposes of path planning and obsta
cle avoidance.
.. Stereo vision is very useful in mobile robots because
- It is a reliable and effective way to extract range information from the envir
onment (real-time
implementation on low-cost hardware).
- It is a passive sensor (no interferences with other sensoring devices).
- It can be easily integrated with other vision routines (object recognition, tr
acking).
.. Other application areas of stereo vision are: industrial inspection for 3-D o
bjects, 3-D growth
monitoring, Z-keying, medical, biomedical and bioengineering (stereo endoscopy,
stereo
radiographs and automatic creation of three dimensional model of a human face or
dental
structure from stereo images), 3-D model creation for e-commence or on-line shop
ping.
Conclusion:
Stereo vision is a method for 3-D analysis of a scene using images from two view
points.
We can use more images for more accurate results. Stereo vision has proved to be advantageous in terms of its effectiveness in extracting range information in mobile robotic systems, and for path planning and obstacle avoidance in ground vehicles. It still has to be developed further to extract more and more information and to introduce new technologies.
Presented by:
1.CH.Harsha 2.K.PRAVEEN
06A71A0408 3/4 B-tech,ECE 06A71A0447
([email protected]) ([email protected])
Ph.No-0863-2277272 Mobile: 9848789594
VLSI
Mobile
Communications
ABSTRACT: This is the world of VLSI. Presently almost all people use technology in various forms like mobiles, i-pods, i-phones etc. Everyone enjoys technology, but only a few know about VLSI design and its working. VLSI is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. It began in the 1970s when complex semiconductor and communication technologies were being developed, but now it has become key to many sophisticated electronic devices. As the subject is a big ocean, we lay our emphasis on the VLSI chip coupled with the SDR technology used in mobiles. VLSI technology makes the mobile affordable and SDR technology makes it flexible. Coming to the role of software defined radio, abbreviated as SDR, it helps one to access different networks like CDMA, GSM, WLL etc. Basically, SDR is a radio communication system which can potentially tune to any frequency band and receive any modulation across a large frequency spectrum, using as little hardware as possible and processing the signals through software.
VLSI CHIP
This paper introduces the basic design of SDR, use of VLSI chips in mobiles and
their working
principles.
KEYWORDS:-
1. VLSI - Very large scale integration
2. SDR - Soft ware defined Radio
3. R.F - Radio Frequency
4. I.F -intermediate frequency
INTRODUCTION
The telecommunication industry is one of the most highly developed segments dependent on VLSI technology. SDR also plays a prominent role in mobile communication: VLSI helps in reducing the size and price of the mobile, whereas SDR increases its flexibility.
Existing networks in telecommunication can be classified into two major types -
a) GSM (Global System for Mobile communication)
b) CDMA (Code Division Multiple Access)
These networks differ in their accessing frequencies. The problem encountered while using these networks is that both of them cannot be accessed from the same handset. Nowadays dual SIM card phones have been developed using SDR.
Software radio provides a solution by using a super heterodyne radio frequency front end.
Software defined radios have significant utility for the military and cell phone services, both of which must serve a wide variety of changing radio protocols in real time.
The following segments introduce the use of VLSI and SDR, their working principl
es,
advantages and disadvantages
VLSI technology :- Most students are exposed to ICs at a very basic level
involving SSI
and MSI circuits like multiplexers, encoders, decoders etc. VLSI is the next sta
ge of SSI and MSI.
This field involves packing more and more logic devices into smaller and smaller
areas. Particularly
in this era of Nano technology simplicity plays a very prominent role for any de
vice. This is possible
by using VLSI design. But this design involves a lot of expertise on many fronts like system architecture, logic and circuit design, fabrication, etc. A lot of knowled
ge is required for the
actual implementation and design of VLSI.
Digital VLSI circuits are predominantly CMOS based. The way normal blocks like l
atches and gates
are implemented is different from what students have seen so far, but the behavi
our remains the
same. All the miniaturization involves new things to consider. A lot of thought
has to go into actual
implementations as well as design. Let us look at some of the factors involved.
1. Circuit Delays. Large complicated circuits running at very high frequencies h
ave one big problem
to tackle - the problem of delays in propagation of signals through gates and wi
res ... even for areas a
few micrometers across! The operation speed is so large that as the delays add u
p, they can actually
become comparable to the clock speeds.
2. Power. Another effect of high operation frequencies is increased consumption
of power. This has
two-fold effect - devices consume batteries faster, and heat dissipation increas
es. Coupled with the
fact that surface areas have decreased, heat poses a major threat to the stabili
ty of the circuit itself.
3. Layout. Laying out the circuit components is a task common to all branches of electronics. What's so
special in our case is that there are many possible ways to do this; there can b
e multiple layers of
different materials on the same silicon, there can be different arrangements of
the smaller parts for
the same component and so on.
The power dissipation and speed in a circuit present a trade-off; if we try to o
ptimize on one, the
other is affected. The choice between the two is determined by the way we choose to lay out the circuit components. Layout can also affect the fabrication of VLSI chips, making it eith
er easy or difficult to
implement the components on the silicon.
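As a rough illustration of this power/speed trade-off, the sketch below uses the standard first-order dynamic power estimate P = alpha * C * V^2 * f; the switching activity, capacitance, supply voltage and clock frequency values are illustrative assumptions, not figures from this paper.

# First-order CMOS dynamic power sketch: P_dyn = alpha * C * V^2 * f.
# All numbers are illustrative assumptions for comparison only.
def dynamic_power(alpha, C_farads, V_volts, f_hertz):
    """Switching (dynamic) power of a CMOS circuit, in watts."""
    return alpha * C_farads * V_volts ** 2 * f_hertz

alpha = 0.2         # assumed average switching activity per clock
C = 2e-9            # assumed total switched capacitance, farads
for V, f in [(1.2, 500e6), (1.2, 1e9), (1.0, 1e9)]:
    P = dynamic_power(alpha, C, V, f)
    print(f"V = {V:.1f} V, f = {f/1e9:.1f} GHz  ->  P_dyn ~ {P:.2f} W")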
Most of today's VLSI designs are classified into 3 categories -
Analog :- Small transistor-count circuits such as amplifiers, data converters, sensors, etc.
Application Specific integrated circuits :- Progress in the fabrication of ICs h
as enabled us to
create fast and powerful circuits in smaller and smaller devices. This also mean
s we can pack a lot
more of functionality in the same area. This is key for design of ASIC`s .
Systems on chip :- These are highly complex mixed signal circuits (digital & ana
log on the same
chip).
Mobiles developed by using all the above VLSI designs will be simple as they co
ntain a large
number of transistors on one chip; moreover they become cheap. Thus VLSI makes
the mobiles
compact, affordable and energy efficient.
Role of SDR in mobiles:- Frequency is an important term in the operation of any network. Cell phones are categorized into CDMA and GSM based on this principle. CDMA operates in a frequency range around 800 MHz, while GSM operates in a frequency range of roughly 900 MHz to 1900 MHz. Thus both these networks cannot be accessed from a single handset. To solve this problem software defined radio was developed. SDR can tune to any frequency band and receive any modulation across a large frequency spectrum.
Operating principles of SDR:- There are two concepts in the working of SDR, one
is ideal
and other is practical.
Ideal concept:- The receiver has an analog-to-digital converter attached directly to the antenna. A digital signal processor would read the converter, and its software would transform the stream of data from the converter into any other form it requires. An ideal transmitter is of a similar type: a digital signal processor would generate a stream of numbers which would be sent to a digital-to-analog converter connected to the radio antenna. But this ideal scheme is not completely realizable.
Practical concept :- Current digital electronics are too slow to receive typical radio signals over approximately 40 MHz directly. An ideal software radio has to collect and process samples at more than twice the maximum frequency at which it is to operate. For frequencies below 40 MHz a direct conversion hardware solution is used: an ADC converter is directly connected to the antenna, and the output stream of digital data obtained from the analog-to-digital converter is then passed to a software-defined processing stage. For frequencies above 40 MHz the actual analog-to-digital converter does not perform with sufficient speed, so direct conversion is not possible. To solve this problem a super heterodyne RF front end is adopted.
Super heterodyne:- It consists of a frequency mixer and a reference oscillator used to heterodyne the radio signals to lower frequencies. The mixer changes the frequency of the signal. The super heterodyne RF front end lowers the frequency of the received signal to intermediate frequency (IF) values under the 40 MHz convertible limit. This intermediate frequency is then treated by the ADC. Thus, by using the same mobile, the frequencies corresponding to both GSM and CDMA networks can be accessed.
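The mixing step can be illustrated numerically: multiplying the received carrier by a reference oscillator produces sum and difference frequencies, and the difference term is the intermediate frequency handled by the ADC and the software stage. The Python sketch below uses scaled-down, illustrative frequencies, not real GSM or CDMA values.

import numpy as np

# Heterodyne mixing sketch: RF carrier x local oscillator -> sum and difference tones.
# Frequencies and sample rate are illustrative assumptions (scaled down for clarity).
fs = 10_000.0          # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
f_rf = 900.0           # "received" carrier frequency, Hz
f_lo = 870.0           # reference (local) oscillator frequency, Hz

rf = np.cos(2 * np.pi * f_rf * t)
lo = np.cos(2 * np.pi * f_lo * t)
mixed = rf * lo        # contains components at f_rf - f_lo and f_rf + f_lo

# Inspect the spectrum of the mixer output to find the two tones.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print("strongest components near (Hz):", sorted(np.round(peaks, 1)))
# Expect roughly 30 Hz (the intermediate frequency) and 1770 Hz (filtered off in practice).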
ADVANTAGES:-
1. Lower package count.
2. Low board space.
3. Fewer board level connections.
4. Higher performance.
5. Reliability and lower cost due to the lower chip count
DISADVANTAGES:-
1. Long design.
2. Long fabrication time.
3. Higher risk project.
4. Spiking problem.
5. Leakage of power.
However, CMOS transistors can reduce most of these problems.
CONCLUSION :- If we take geographical conditions into account, some networks will be advantageous in one part of the world and others elsewhere. Using different phones for this purpose would be somewhat inconvenient, so having the ability to use all the networks from one handset will always be welcome. With the advent of SDR one needs just one set to access different networks, thereby providing flexibility. Dual SIM card phones developed based on SDR technology have gained a good response. In countries like India, price plays an important role in determining the demand for a particular product, and interest in compact and simple devices is also increasing day by day. In this context a lot of progress has been made in circuit design. As VLSI has succeeded in reducing the cost and also making the product efficient, it has gained a lot of popularity. Most companies are producing products based on a single-chip design.
REFERENCES :-
www.wikipedia.com
www.google.com
S.V.U COLLEGE OF ENGINEERING
TIRUPATI
DEPARTMENT OF
ELECTRICAL AND ELECTRONICS ENGINEERING
BIO MEDICAL INSTRUMENTATION
BY
M. MADAN N.ASHOK
10703016 10703019
Email:[email protected]
ABSTRACT:
This paper caters to the basic stages of measurements and instrumentation,
i.e. generalized bio medical instrumentation system, basis of bio-potential elec
trodes
and transducers for biomedical applications. Special techniques for the measurement of non-electrical biological parameters, like the diagnostic and therapeutic aspects of imaging systems such as X-ray computed tomography (CT) and magnetic resonance imaging (MRI), are discussed in detail, along with their biomedical applications.
This paper also highlights the importance of instruments, which affect the
human body. The use of different stimulators and the new advances that have take
n
place in using the pacemakers and defibrillators are emphasized in detail. Also
the
latest developments in this field, including the 'Mind's eye' discovery, are mentioned and our own reasons for such a phenomenon are given in detail. This paper also emphasizes how these bio-medical instruments are helpful not only in identifying diseases but also in identifying culprits and getting the facts from them with narco tests.
1. INTRODUCTION
A Biomedical Instrument performs a specific function on a biological system. The
function may be the exact measurement of physiological parameters like blood pre
ssure,
velocity of blood flow, action potentials of heart muscles, temperature, pH valu
e of
the blood and rates of change of these parameters.
The specification must meet the requirements of the living system.The design mus
t be
sufficiently flexible to accommodate the factor of 'biological variability' . Th
e
biomedical measuring devices should cause minimal disturbance to normal physical
function and are to be used with safety instrumentation.
Biomedical instrumentation can generally be classified into
two major types :
.. Clinical instrumentation
.. Research instrumentation
Clinical instrumentation is basically devoted to the diagnosis, care, and treatm
ent of
patients.
Research instrumentation is used primarily in the search for new knowledge perta
ining
to various systems that compose the human organism.
BM instruments-Life
savers in hospitals
2. MAN INSTRUMENT SYSTEM
In man-instrument system the data is obtained from living organisms, especially
humans
and there is large amount of interaction between the instrumentation system and
the
subject being measured. So it is essential that the person on whom measurements
are
made be considered an integral part of the instrumentation system. Consequently,
the
over all system,which includes both the human organism and the instrumentation
required for measurement is called the man-instrument system.An instrumentation
system is defined as a set of instruments and equipment utilized in the measurem
ent of
one or more characteristics or phenomena, plus the presentation of the informati
on
obtained from those measurements in a form that can be read and interpreted by a
man
3. GENERALIZED BIO MEDICAL INSTRUMENTATION
The sensor converts energy or information from the measurand to another form (us
ually
electric ). This signal is then processed and displayed so that humans can perce
ive the
information. The major difference between medical instrumentation and convention
al
instrumentation systems is that the source of signals is living tissue or energy
applied to
living tissue. Generalized biomedical instrument consists of :
Measurand
Sensors
Signal Conditioning
Output Display
Generalized Bio medical instrument system
4. BASIS OF BIO POTENTIAL ELECTRODES
A. Recording electrodes
Electrodes make a transfer from the ionic conduction in the tissue to the electr
onic
conduction, which is necessary for making measurements. Electrodes are employed
to
pick up the electric signals of the body. Since the electrodes are transferring
the
bioelectric event to the input of the amplifier, the amplifier should be designe
d in such a
way that it accommodates the characteristics of the electrodes.
To record the ECG, EEG, EMG, etc. electrodes must be used as transducers to conv
ert an
ionic flow of current in the body to an electronic flow along a wire. Two import
ant
characteristics of electrodes are electrode potential and contact impedance. Goo
d
electrodes will have low stable figures for both of the above characteristics.
EEG ECG
B. Types of electrodes
Many types of recording electrodes exist including metal discs, needles, suction
electrodes, glass microelectrodes, fetal scalp clips or screws, etc. The most wi
dely used
electrodes for biomedical applications are silver electrodes, which have been co
ated with
silver chloride by electrolyzing them for a short time in a sodium chloride solu
tion. When
chlorided, the surface is black and has a very large surface area. A pair of suc
h electrodes
might have a combined electrode potential below 5 mV.
5 . TRANSDUCERS FOR BIOMEDICAL APPLICATIONS
In biomedical applications there are various parameters obtained from the patien
t body by
using various transducers. These parameters include - blood pressure ( arterial
,direct) ,
blood flow (aortic, venous, cardiac output), heart rate, phonocardiogram,
ballistocardiogram, oximetry, respirationrate, pneumotachogram, tidal volume,
pulmonary diffusing capacity, pH, partial pressures of oxygen and carbon dioxide
,
temperature etc.
A. Pressure Transducers.
The basic principle behind all these pressure transducers is that the pressure t
o be
measured is applied to a flexible diaphragm, which gets deformed, by the action
exerted
on it .This motion of the diaphragm is then measured in terms of electrical sign
als . In its
simplest form, a diaphragm is a thin flat plate of circular shape, attached firm
ly by its
edge to the wall of a containing vessel. Typical diaphragm materials are stainle
ss
Steel, phosphor bronze and beryllium copper .Other transducers used are temperat
ure
measurement transducers, flow transducers, displacement, motion and position
transducers .
6. MEASUREMENTS OF NON ELECTRICAL BIOLOGICAL PARAMETERS
A. Computed Tomography (CT)
CT or "CAT" scans are special x-ray tests that produce cross-sectional images of
the
body using x-rays and a computer .These images allow the radiologist to look at
the
inside of the body just as one would look at the inside of a loaf of bread by sl
icing it .
a) Principle
The basic physical principle involved in CT is that the structures on a 2D (two
dimensional) object can be reconstructed from multiple projections of the slice.
Measurements are taken from the transmitted X-rays through the body and contain
information on all the constituents of the body in the path of X-ray beam. By us
ing
multidirectional scanning of the object, multiple data is collected .
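A toy Python sketch of this principle is given below: parallel-beam projections of a small synthetic slice are collected at several angles and then smeared back (simple unfiltered backprojection). Real CT scanners use filtered backprojection or iterative reconstruction; this sketch, with its assumed phantom and angle set, is only meant to illustrate the multidirectional-scanning idea.

import numpy as np
from scipy.ndimage import rotate

# Toy parallel-beam CT sketch: project a synthetic slice at many angles,
# then reconstruct it by simple (unfiltered) backprojection.
N = 64
phantom = np.zeros((N, N))
phantom[20:44, 26:38] = 1.0          # a bright rectangular "organ" in the slice

angles = np.arange(0, 180, 6)        # projection angles in degrees

# Forward projection: for each angle, rotate the slice and sum along columns.
sinogram = np.array([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Backprojection: smear each 1-D projection back across the image at its angle.
recon = np.zeros((N, N))
for a, proj in zip(angles, sinogram):
    smear = np.tile(proj, (N, 1))                     # constant along the beam direction
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)

inside = recon[20:44, 26:38].mean()
outside = recon[:10, :10].mean()
print(f"mean reconstructed value inside the object: {inside:.1f}, background: {outside:.1f}")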
b) Applications of Computer Tomography
CT scans are frequently used to evaluate the brain, neck, spine, chest, abdomen,
pelvis
and sinuses. It is also used in assessment of Coronary Arteries and musculo-skel
etal
investigations
CT Scan System
B. Magnetic Resonance Imaging (MRI)
Magnetic Resonance Imaging (MRI) is a method of looking inside the body without
using surgery, harmful dyes or radiation. The method uses magnetism and radio wa
ves to
produce clear pictures of the human anatomy . In the last three to four years, i
mproved
computer technology in hardware and software allowed MRI to obtain better qualit
y
images in most of the body. MRI has proven to be unusually capable in the detect
ion,
localization, and assessment of the extent and character of disease in the centr
al nervous,
musculoskeletal, and cardiovascular systems. In the brain for example, MRI has a
proven
ability to define some tumors and the plaques of multiple sclerosis better than
any other
technique .
MRI Scan
Latest Discovery :
Scientists have successfully discovered the 'mind's eye' with the MRI scan system. With this mind's eye, man can visualize his future and remember his past perfectly. This was published in the Eenadu daily dated 25-1-07; the reported experiments were performed on 21 persons by scanning their minds with an MRI scanner and studying their behaviour while they remembered the past.
Our view:
As per Einstein's theory of relativity, any particle which travels with a velocity equal to or greater than that of light can enter both past and future. It is clearly evident that our mind travels with a velocity faster than light: as an example, we can visualize the sun in a fraction of a second, whereas it takes 8 minutes for light to reach the earth from the sun. As the mind travels faster than light, the mind can visualize the future, and we may be able to invent time-machines in the future with the help of this discovery, thereby alerting the human race to calamities by knowing of their occurrence in advance.
Other imaging systems like Positron Emission Tomography (PET), gamma camera and
single photon emission computer tomography (SPECT) are the further improvements
that are used to diagnose different modalities in the human body.
PET Stimulator
7. ELECTRONIC INSTRUMENTS FOR AFFECTING HUMAN BODY:
A. Electrical Stimulators
Nerves and muscles in the body produce electric potentials when they operate, an
d conversely they
can be made to operate by electrical stimulation. In the physiotherapy departmen
t, stimulators exist
which may provide direct, alternating, pulsating, or pulsed waveforms and are us
ed to exercise the
muscles by stimulation through Electrodes placed on the skin. Stimulators exist
now which are
used for the relief of pain.
These Transcutaneous Electrical Neural Stimulators (TENS) appear to suppress oth
er
pains in the same general area where the stimulation is applied .Electrical stim
ulation of
trigger points is also claimed to be effective
B. Bladder stimulator
This is a general term applied to electrical stimulators of the bladder or ureth
ra. In some
disorders of the urinary bladder normal emptying is impossible, usually due to d
isruption
of the nerve supply. An electrical stimulator (usually implanted) can assist emp
tying of
the bladder by stimulating the bladder muscle directly or by stimulating the ner
ves where
they leave the spinal column. Only a small number of such devices have been
implanted. Sometimes external stimulators are used to test the likely effect of
an
implanted stimulator or as a substitute for it .These may use electrodes mounted
on a
plug, which fits into the rectum or vagina .
C.Pacemakers
The Pacemaker is an electrical stimulator with electrodes usually applied direct
ly to the
heart and providing pulses of a fixed rate (asynchronous Pacemaker) or it may pr
ovide
pulses only when the natural pulse fails to appear (demand pacemaker).
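The difference between the two modes can be sketched as a simple loop: an asynchronous pacemaker fires at a fixed rate regardless of the heart, while a demand pacemaker fires only when no natural beat is sensed within an escape interval. In the Python sketch below the escape interval and the simulated beat times are illustrative assumptions.

# Demand-pacemaker logic sketch: pace only when the natural beat fails to appear
# within the escape interval. Times are in milliseconds; values are illustrative.
ESCAPE_INTERVAL_MS = 1000          # assumed pacing interval (~60 beats per minute)

# Simulated times of sensed natural beats; the gap after 1900 ms mimics dropped beats.
natural_beats = [0, 950, 1900, 4100, 5050]

events = []
last_beat = natural_beats[0]
for t in range(1, 6001):           # walk through 6 seconds in 1 ms steps
    if t in natural_beats:
        last_beat = t
        events.append((t, "sensed natural beat"))
    elif t - last_beat >= ESCAPE_INTERVAL_MS:
        last_beat = t
        events.append((t, "PACED (no natural beat within escape interval)"))

for t, what in events:
    print(f"{t:5d} ms: {what}")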
D. Implantable Cardioverter Defibrillators (ICDs)
An implantable cardioverter-defibrillator is a device that resuscitates the heart, attempting to restore a normal heart beat in a patient whose heart is beating rapidly and in a
disorganized way. Implantable cardioverter-defibrillators are placed to shock th
e
heart and resuscitate the patient. It gets hooked up to a wire placed in the rig
ht side of the
heart much like a pacemaker. An implantable cardioverter-defibrillator has the c
apacity
to pace the heart like a pacemaker, but it also has the capability to shock the
heart and
resuscitate or abort a cardiac arrest. The battery is placed under the skin and
a vein
located underneath the left or the right clavicle is isolated; the wires are placed through
that vein to allow entry to the right side of the heart. The wires are attached
to the device
and the wound is closed. When the patient undergoes implantation of an implantab
le
cardioverter defibrillator , he also needs testing of his normal rhythm. An over
night stay
in the hospital is expected for all this .
8. NEW ADVANCES
The technical advances of newer ICDs have significantly modified patient follow-up procedures over the last several years. The multiple functions of the new systems, which include high-energy defibrillation therapy, low-energy cardioversion, antitachycardia pacing, permanent and post-therapy bradycardia pacing, diagnostic counters, and device status parameters, require ever-increasing technical expertise from the physician. Improvements in newer devices include multiple therapy modes, more pat
ient
diagnostic information, and accurate device status information. Currently, there
is a focus
on device down-sizing. This can lead to an inevitable search for lower defibrilla
tion
energy levels and newer models of energy storage and delivery. The goal of a pacemaker-like device of similar size and implantation technique will be pursued with vigor in
future years. New features in ICDs have enhanced the ease and safety of implanta
tion, the
comfort to the patient, the clinical information provided to the clinician and p
atients by
the device, and the appropriateness and efficiency of these devices.
Age is no barrier!
The bio-medical instruments that are being invented today have suitability for a
ge group
0-100 years. These instruments can be used on a baby to be born or on a person i
n his
death bed. The latest microscopes developed are able to see the tiniest among ti
ny cells
and so new diseases and their characteristics can be studied and proper medication is
provided.
Narco Tests
These narco analysis tests have become very crucial in our judiciary where, in m
any
cases the reports from these tests have influenced the final verdict in a profou
nd
way.During narco analysis tests, the person on whom the test is being performed
is
being subjected to drugs which nullifies all his defenses and makes him unable t
o hide
any facts during query. But before a person is subjected to these tests, first a
team of
doctors check whether the person s is healthy enough to undergo narco analysis tes
ts.
This is done by the use of advanced biomedical instruments
Stamped!Telgi Noida Killers
Identification
These biomedical instruments also find use in forensic laboratories to identify
the dead
bodies or the remains of putrefied body organs and bones. Thus, these instruments are helpful not only in identifying culprits but also in recognizing dead bodies, and this helps both the judiciary and the relatives of the dead.
9. CONCLUSION
Since biomedical instrumentation is a field where the technology has gone into a
lot of
advancements and sophistication, it is difficult for active medical personnel to
know the
spectrum of instruments available for various diagnosis and therapy. After the
introduction to computers and microprocessors, the biomedical instrumentation ha
s
improved further in all respects. New advances in the instrumentation include th
e design
of devices, which have greater convenience, higher quality and finer resolution,
which will
improve their standards.
10. REFERENCES
[1] Principles of medical electronics & Bio-medical Instrumentation, C. Raja Rao
, S.K.Guha.
[2] Medical Instrumentation, John G. Webster
[3] Introduction to Biomedical equipment technology, J. Carr & M. Brown.
[4] Biomedical Instrumentation, M.Arumugam.
NANOELECTRONICS
Name: P.MAHESH
Year: III
Branch: ECE
University No: 06e81a0424
College: ACET (ALFA COLLEGE OF ENGG&TECH)
Place: ALLAGADDA,KURNOOL(D.T),A.P
E Mail: [email protected]
Mobile: 9491406324
ABSTRACT
A basic definition of Nanotechnology is the study, manipulation and manufacture of extremely minute machines or devices. These devices are so small that they work by manipulating the atoms themselves to form materials. With Nanotechnology we can make computers billions of times more powerful than today's, and new medical capabilities that will heal and cure in cases that are now viewed as utterly hopeless. The properties of manufactured products de
pend on how
those atoms are arranged. Even without knowing exactly how many dopant atoms are in a single transistor or exactly where each individual dopant atom is located, if we place roughly the right number in roughly the right place, we can make a working transistor. Anoth
er
improvement in Nanotechnology is self replication. Self replication makes an effective route to
truly low cost manufacturing. Our intuitions about self replicating systems lear
ned from
biological systems that surround us are likely to seriously mislead us about the
properties and
characteristics of artificial self replicating systems designed for manufacturin
g purposes.
Artificial systems able to make a wide range of non biological products like dia
mond under
programmatic control are likely to be more brittle and less adaptable in their r
esponse to
changes in their environment than biological systems. At the same time they shou
ld be simpler
and easier to design. Thus the progress of technology around the world has alrea
dy given us
more precise, less expensive manufacturing technologies that can make an unprece
dented
diversity of new products. The fact that everything today requires computers is a major reason why people should research and develop Nanotechnology.
INTRODUCTION:
A basic definition of Nanotechnology is the study , manipulation and manufacture
of
extremely minute machines/devices. In a few decades, this emerging manufacturing
technology will let us inexpensively arrange atoms and molecules in most of the
ways
permitted by physical law.
Nanotechnology could, in the future, be used to rapidly identify and block
attacks. Distributed surveillance systems could quickly identify arms buildups a
nd
offensive weapons deployments, while lighter, stronger and smarter materials con
trolled
by powerful molecular computers would let us make radically improved versions of
existing weapons able to respond to such threats. Replicating manufacturing syst
ems
could rapidly churn out the needed defenses in huge quantities.
While Nanotechnology does propose to use replication, it does not propose to cop
y living
systems. Living systems are wonderfully adaptable and can survive in a complex n
atural
environment. Instead, Nanotechnology proposes to build molecular machine systems
that
are similar to small versions of what you might find in today's modern factories.
Robotic
arms shrunk to submicron size should be able to pick up and assemble molecular p
arts like
their large cousins in factories around the world pick up and assemble nuts and
bolts.
Unfortunately, our intuitions about replicating systems can be led seriously astray by a simple fact: the only replicating systems most of us are familiar with are biological self-replicating systems.
WHY USE NANOTECHNOLOGY?
There are a few reasons why these mega corporations are spending their resources o
n
Nanotechnology.
Firstly, the synthetic manufacture of materials is included under the science of
Nanotechnology. Once we learn enough to synthetically replicate and produce natu
rally
occurring substances on earth, we will not rely on remaining stores currently on
this earth.
Secondly, similar to the fabrication of materials that we currently use, self re
plication
would be a major step to reducing manufacturing costs, time and problems. The on
ly costs
incurred would be the cost of the material required, and the cost of making one
machine to
start with. Also, the conductivity of certain materials could be vastly improved
by
Nanotechnology. Timber is not a good choice for a semiconductor. This is because
electrons do not move very freely over its surface. On the other hand, silicon and diamond are good choices for a semiconductor. If we could manipulate these materials down to each atom and molecule, then we could make a transistor only a few molecules across. The energy required to operate these super transistors would be much smaller than the requirements of today's computer systems. A computer with these super transistors would run at around 60 GHz and be exceptionally more powerful than today's most advanced computers.
ABOUT THE TECHNOLOGY:
In the coming decades Nanotechnology could make a super computer so small it cou
ld
barely be seen in a light microscope. The coming revolution in manufacturing is
a
continuation of trends that date back decades and even centuries. Looking ahead,
we will
be able to manufacture products with the ultimate in precision the finest featur
es will be
made from individual atoms and molecules.
Manufactured products are made from atoms. The properties of those products depe
nd on
how those atoms are arranged. If we rearrange the atoms in coal we can make diam
ond. If
we rearrange the atoms in sand we can make computer chips. If we rearrange the a
toms in
dirt, water and air we can make potatoes. Today's manufacturing methods are very c
rude
at the molecular level.
There are two concepts commonly associated with Nanotechnology:
*POSITIONAL ASSEMBLY *SELF REPLICATION
POSITIONAL ASSEMBLY:
This positional assembly aims to place the right molecular parts in the right pl
ace. The
need for positional assembly implies an interest in molecular robotics. e.g., ro
botic devices
that are molecular both in their size and precision. These molecular scale posit
ional
devices are likely to resemble very small versions of their everyday macroscopic
counterparts. Positional assembly is frequently used in normal macroscopic manuf
acturing
today, and provides tremendous advantages. Imagine trying to build a bicycle wit
h both
hands tied behind your back! The idea of manipulating and positioning individual
atoms
and molecules is still new and takes some getting used to. However, as Feynman said, 'The principles of physics do not speak against the possibility of maneuvering things atom by atom.' We need to apply at the molecular scale the concept that has demonstrated its effectiveness at the macroscopic scale: making parts go where we want by putting them where we want!
SELF REPLICATION:
The remarkably low manufacturing cost comes from self replication. Molecular
machines can make more molecular machines, which can make yet more molecular
machines. While the research and development costs for such systems are likely t
o be
quite high, incremental manufacturing costs of a system able to make systems lik
e itself
can be very low.
Self replication is at the heart of many policy discussions. The only self repli
cating systems
most of us are familiar with are biological. We automatically assume that
nanotechnological self replicating systems will be similar. The machines people
make
bear little resemblance to living systems and molecular manufacturing systems ar
e likely
to be just as dissimilar.
The artificial self-replicating systems being proposed for molecular manufacturing are inflexible and brittle. It is difficult enough to design a system able to self-replicate in a controlled environment, let alone to design one that can approach the marvelous adaptability that hundreds of millions of years of evolution have given to living systems. Designing a system that uses a single source of energy is both much easier to do and produces a much more efficient system. Artificial self-replicating systems will be both simpler and more efficient if most of this burden is offloaded: we can give them the odd compounds and unnatural molecular structures that they require in an artificial feedstock, rather than forcing the device to make everything itself, a process that is both less efficient and more complex to design.
The mechanical designs proposed for Nanotechnology are more reminiscent of a fac
tory
than of a living system. Molecular scale robotic arms able to move and position
molecular
parts would assemble rather rigid molecular products using methods more familiar
to a
machine shop than the complex brew of chemicals found in a cell. Although we are
inspired by living systems, the actual designs are likely to owe more to design
constraints
and human objectives than to living systems.
Self replication is but one of many abilities that living systems exhibit. Copyi
ng that one
ability in an artificial system will be challenge enough without attempting to e
mulate their
many other remarkable abilities. The engineering effort required to design syste
ms of such
complexity will be significant, but should not be greater than the complexity in
volved in
the design of such existing systems as computers, airplanes etc.
THE VON NEUMANN ARCHITECTURE FOR A SELF REPLICATING SYSTEM
Fig. Von Neumann architecture of a self-replicating system
Von Neumann's proposal consisted of two central elements: a universal computer and a universal constructor, as shown in the figure above.
The universal computer contains a program that directs the behavior of the unive
rsal
constructor. The universal constructor in turn, is used to manufacture both anot
her
universal computer and another universal constructor. Once Construction is finis
hed the
program contained in the original universal computer is copied to the new univer
sal
computer and program execution is started. The constructor had an arm which it could move about and which could be used to change the state of the cell at its tip; in this way it was possible to create objects consisting of regions of the two-dimensional cellular-automata world which were fully specified by the program that controlled the constructor. Von Neumann's kinematic constructor has had perhaps a greater influence, for it is a model of general manufacturing which can more easily be adapted to the three-dimensional world in which we live. The robotic arm of this constructor moved in three-dimensional space and grasped parts from a sea of parts around it. These parts were then assembled into another kinematic constructor and its associated control computer. An important point to notice is that self-replication, while important, is not by itself an objective. A device able to make copies of itself but unable to make anything else would not be very valuable.
Von Neumann's proposals centered around the combination of a universal constructor, which could make anything it was directed to make, and a universal computer, which could compute anything it was directed to compute. It is this ability to make any of a broad range of structures under flexible programmatic control that is of value. The ability of the device to make copies of itself is simply a means to achieve low cost rather than an end in itself.
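To make this division of labour concrete, the following short Java sketch (all class and method names are our own, invented only for illustration; von Neumann's actual formulation was a cellular-automaton model) shows a constructor that, driven by the program held in a computer, assembles an offspring and copies the program across so that the copy can run on its own.

import java.util.ArrayList;
import java.util.List;

// Toy model of the universal computer / universal constructor pairing.
public class SelfReplicationSketch {

    static class UniversalComputer {
        final List<String> program = new ArrayList<>();    // the stored plans
    }

    static class UniversalConstructor {
        // Follow the parent's program, build a fresh computer, then copy the
        // program into it so that execution can start in the offspring.
        UniversalComputer replicate(UniversalComputer parent) {
            UniversalComputer child = new UniversalComputer();
            for (String step : parent.program) {
                assemble(step);                            // stand-in for moving the constructor arm
            }
            child.program.addAll(parent.program);          // copying the plans completes replication
            return child;
        }

        private void assemble(String step) {
            System.out.println("constructing: " + step);
        }
    }

    public static void main(String[] args) {
        UniversalComputer seed = new UniversalComputer();
        seed.program.add("build universal computer");
        seed.program.add("build universal constructor");
        new UniversalConstructor().replicate(seed);
    }
}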
BROADCAST ARCHITECTURE:
In Von Neumann's architecture, in Drexler's assembler and in living systems, the complete set of plans for the system is carried internally in some sort of memory. This is not a logical necessity in a general manufacturing system. If we separate the constructor from the computer and allow many individual constructors to receive broadcast instructions from a single central computer, then each constructor need not remember the plans for what it is going to construct: it can simply be told what to do as it does it, as shown in the figure below. This approach not only eliminates the requirement for a central repository of plans within the constructor, it can also eliminate almost all of the mechanisms involved in decoding and interpreting those plans. The advantages of the broadcast architecture are:
(1) It reduces the size and complexity of the self-replicating component.
(2) It allows the self-replicating components to be rapidly redirected to build something novel.
(3) If the central computer is macroscopic and under our direct control, the broadcast architecture is inherently safe, in that the individual constructors lack sufficient capability to function autonomously.
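A minimal Java sketch of this idea follows (the names are invented; this is not an implementation of any real molecular constructor): the plans live only in the central computer, and each constructor simply executes whatever instruction is broadcast to it, with no local memory or interpreter of its own.

import java.util.List;

// Broadcast architecture sketch: plans are held only by the central computer.
public class BroadcastSketch {

    static class Constructor {
        // No stored program and no decoding machinery: just act on what is received.
        void execute(String instruction) {
            System.out.println("constructor executing: " + instruction);
        }
    }

    static class CentralComputer {
        void broadcast(List<Constructor> constructors, List<String> plan) {
            for (String instruction : plan) {
                for (Constructor c : constructors) {
                    c.execute(instruction);                // every constructor acts in lock-step
                }
            }
        }
    }

    public static void main(String[] args) {
        List<Constructor> swarm =
                List.of(new Constructor(), new Constructor(), new Constructor());
        new CentralComputer().broadcast(swarm, List.of("place part A", "bond part B"));
    }
}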
Fig. Broadcast architecture: a macroscopic computer broadcasting instructions to several molecular constructors
A NANOPARTICLE CLASSIFICATION
The development of new nanomaterial is a rapidly progressing science. Nanomateri
al
development includes metallic nanoparticles, germanium, ceramic and aluminum oxi
de
nanocrystals, gold nanowafers and copper oxide nanocubes.
APPLICATIONS:
The improvement and advance in the computer industry alone is a major reason why people are researching, and should continue to research and develop, Nanotechnology. Imagine the world being run by supercomputers rather than the relatively slow and cumbersome machines of today. Everything that requires a computer would be improved dramatically: better, faster and more efficient. Nanotechnology will let us make supercomputers that fit on the head of a pin and fleets of medical nanorobots smaller than a human cell, able to eliminate cancer, infections, clogged arteries and even old age.
The coming era of the NANO APPLICATIONS
.. Chip Making :
IBM has developed a nanotech method for making microchip components which, it says, should enable electronic devices to continue to get smaller and faster. Current techniques use light to help etch tiny circuitry on a chip, but IBM is now using molecules that assemble themselves into even smaller patterns. Because the technology is compatible with existing manufacturing tools, it should be inexpensive to introduce.
.. Medicine :
It will deal with the problems involved in designing and building a micro-scale
robot that can
be introduced into the body to perform various medical activities.
Tumors. We must be able to treat tumors; that is to say, cells grouped in a clum
ped
mass. The specified goal is to be able to destroy tumorous tissue in such a way
as to
minimize the risk of causing or allowing a recurrence of the growth in the body.
Arteriosclerosis. This is caused by fatty deposits on the walls of arteries. The
device
should be able to remove these deposits from the artery walls. This will allow f
or both
improving the flexibility of the walls of the arteries and improving the blood f
low
through them.
Blood clots. These cause damage when they travel through the bloodstream to a point where they block the flow of blood to a vital area of the body. This can result in damage to vital organs in very short order. A microrobot in the body can be used to break up such clots into smaller pieces.
.. Design Software
The simple molecular machines simulated so far can be easily designed and modeled using ad hoc software and molecule development tools. However, to design complex systems such as molecular assemblers/replicators, a more sophisticated software architecture will be needed. The current NanoDesign software architecture is a set of C++ classes with a Tcl front end for interactive molecular gear design. Simulation is via a parallelized FORTRAN program which reads files produced by the design system. We envision a future architecture centered around an object-oriented database of molecular machine components and systems, with distributed access via CORBA from a user interface based on a WWW universal client.
.. NanoRobots :
During the last 50 years, science fiction literature has used and described nanotechnology as an inherent part of the future. Ultra-small robots able to enter the human body and repair damaged tissue, and nano-sized logic chips implanted into the brain to control human functions, are two of the common themes. Current advances in nanotechnologies and our understanding of biology at the molecular level will render these concepts reality in the near future. A nanorobot can be defined as an artificially fabricated object able to diffuse freely in the human body and interact with specific cells at the molecular level by itself. The figure below is a schematic representation of a nanorobot that can be activated by the cell itself when it is needed. The stress induced by disease or infectious attack generally leads to changes in the chemical content of the cell. The cellular chemistry is now well understood from this aspect and could be exploited in order to trigger a reaction of the nanorobots.
The size of the nanorobot has to be small enough to pass through the natural barriers and especially to enter the cell. Current nanotechnologies can provide multifunctional structures with a size range from 1 to 100 nm; moreover, it is possible to assemble different elementary units synthesized independently using colloidal chemistry, DNA templates or atomic force microscopy. The external shell is a crucial point because it has to be recognized as a part of the body (inert coating) and be able to release molecules of different sizes. A rigid shell like silica is an ideal matrix if we consider that it is not toxic at the nanometre level.
We can imagine millions of these tiny robots permanently present in the body, repairing damaged cells or killing viruses without any external action.
CONCLUSION:
The software required to design and model complex molecular machines is either
already available, or can be readily developed over the next few years. The Nano
Design
software is intended to design and test fullerene based hypothetical molecular m
achines and
components. The system is in an early stage of development. Presently, Tcl provides an interpreted interface, C++ objects represent design components, and a parallelized FORTRAN program simulates the machine.
In the future, an architecture based on distributed objects is envisioned. A standard set of interfaces would allow vendors to supply small, high-quality components to a distributed system.
BIOCHIP TECHNOLOGY- THE FUTURE TECHNOLOGY
Anusha Shruthi.D, R.Pallavi,
II B.Tech (ECE), II B.Tech (ECE),
AITS, Rajampet. . AITS ,Rajampet,
[email protected] [email protected]
ABSTRACT
Biochips, the most exciting future technology, are an outcome of the fields of Electronics, Computer Science and Biology. The biochip is a new type of bio-security device that accurately tracks information regarding what a person is doing and who is actually doing it. With biochips, the good old routine of remembering pesky PINs, passwords and social security numbers is no longer required.
No more carrying medical records to a hospital, no more carrying cash or credit cards to the marketplace; everything goes embedded in the chip. Everything goes digital. No more hacker tricks on the internet! The biochip offers a variety of techniques for secure e-money transactions on the net. The power of biochips also lies in their capability of locating lost children, downed soldiers and wandering Alzheimer's patients.
The biochip implant system currently in use is actually a fairly simple device. Today's biochip implant is basically a small (micro) computer chip, inserted under the skin for identification purposes. The biochip system is a radio frequency identification (RFID) system, using low-frequency radio signals to communicate between the biochip and the reader.
1. INTRODUCTION
Biochips are any microprocessor
chips that can be used in Biology. The
biochip technology was originally developed in 1983 for monitoring fisheries; its use now includes over 300 zoos, over 80 government agencies in at least 20 countries, pets (everything from lizards to dogs), electronic "branding" of horses, monitoring of lab animals, fisheries, endangered wildlife, automobiles, garment tracking, hazardous waste, and humans. Biochips are "silently" inching into humans.
For instance, at least 6 million
medical devices, such as artificial body
parts (prosthetic devices), breast
implants, chin implants, etc., are
implanted in people each year. And most
of these medical devices are carrying a "surprise" guest: a biochip. In 1993, the Food and Drug Administration passed the Safe Medical Devices Registration Act of 1993, requiring all artificial body implants to have "implanted" identification: the biochip.
So, each year, the 6 million
recipients of prosthetic devices and
breast implants are "biochipped". To
date, over 7 million animals have been
"chipped". The major biochip companies
are A.V.I.D. (American Veterinary
Identification Devices), Trovan
Identification Systems, and Destron-
Fearing Corporation.
2. BIOCHIP TECHNOLOGY
Biochips, the most exciting future technology, are an outcome of the fields of Computer Science, Electronics and Biology. The biochip is a new type of bio-security device that accurately tracks information regarding what a person is doing and who is actually doing it. With biochips, the good old routine of remembering pesky PINs, passwords and social security numbers is no longer required.
No more carrying medical records to a hospital, no more carrying cash or credit cards to the marketplace; everything goes embedded in the chip. Everything goes digital. No more hacker tricks on the internet! The biochip offers a variety of techniques for secure e-money transactions on the net. The power of biochips also lies in their capability of locating lost children, downed soldiers and wandering Alzheimer's patients.
The biochip implant system currently in use is actually a fairly simple device. Today's biochip implant is basically a small (micro) computer chip, inserted under the skin for identification purposes. The biochip system is a radio frequency identification (RFID) system, using low-frequency radio signals to communicate between the biochip and the reader.
3. THE BIOCHIP IMPLANT
SYSTEM CONSISTS OF TWO
COMPONENTS:
Fig. Perspective of the actual size
3.1THE TRANSPONDER:
The transponder is the actual
biochip implant. It is a passive
transponder, meaning it contains no
battery or energy of its own. In
comparison, an active transponder would
provide its own energy source, normally
a small battery. Because the passive
biochip contains no battery, or nothing
to wear out, it has a very long life, up to
99 years, and no maintenance.
Being passive, it's inactive until
the reader activates it by sending it a
low-power electrical charge. The reader
"reads" or "scans" the implanted biochip
and receives back data (in this case an
identification number) from the biochip.
The communication between biochip
and reader is via low-frequency radio
waves. The biochip transponder consists
of four parts.
3.1.1.Computer microchip
The microchip stores a unique
identification number from 10 to 15
digits long. The storage capacity of the
current microchips is limited, capable of
storing only a single ID number. AVID
(American Veterinary Identification
Devices), claims their chips, using an
nnn-nnn-nnn format, have the capability
of over 70 trillion unique numbers. The
unique ID number is "etched" or
encoded via a laser onto the surface of
the microchip before assembly. Once the
number is encoded it is impossible to
alter. The microchip also contains the
electronic circuitry necessary to transmit
the ID number to the "reader".
3.1.2. Antenna Coil:
This is normally a simple, coil of
copper wire around a ferrite or iron core.
This tiny, primitive, radio antenna
"receives and sends" signals from the
reader or scanner.
3.1.3. Tuning Capacitor: The capacitor
stores the small electrical charge (less
than 1/1000 of a watt) sent by the reader
or scanner, which activates the
transponder. This "activation" allows the
transponder to send back the ID number
encoded in the computer chip. Because
"radio waves" are utilized to
communicate between the transponder
and reader, the capacitor is "tuned" to
the same frequency as the reader.
3.1.4. Glass Capsule: The glass capsule
"houses" the microchip, antenna coil and
capacitor. It is a small capsule, the
smallest measuring 11 mm in length and
2 mm in diameter, about the size of an
uncooked grain of rice. The capsule is
made of biocompatible material such as
soda lime glass. After assembly, the
capsule is hermetically (air-tight) sealed,
so no bodily fluids can touch the
electronics inside. Because the glass is
very smooth and susceptible to
movement, a material such as a
polypropylene polymer sheath is
attached to one end of the capsule.
This sheath provides a
compatible surface which the bodily
tissue fibers bond or interconnect,
resulting in a permanent placement of
the biochip.
BIOCHIP AND SYRINGE
The biochip is inserted into the subject
with a hypodermic syringe. Injection is
safe and simple, comparable to common
vaccines. Anesthesia is not required nor
recommended. In dogs and cats, the
biochip is usually injected behind the
neck between the shoulder blades.
Trovan, Ltd., markets an implant,
featuring a patented "zip quill", which
you simply press in, no syringe is
needed. According to AVID "Once
implanted, the identity tag is virtually
impossible to retrieve. . . The number
can never be altered."
3.2THE READER:
The reader consists of an
"exciter" coil which creates an
electromagnetic field that, via radio
signals, provides the necessary energy
(less than 1/1000 of a watt) to "excite" or
"activate" the implanted biochip. The
reader also carries a receiving coil that
receives the transmitted code or ID
number sent back from the "activated"
implanted biochip. This all takes place
very fast, in milliseconds. The reader
also contains the software and
components to decode the received code
and display the result on an LCD display. The reader can also include an RS-232 port for attaching a computer.
4.WORKING OF A BIOCHIP:
The reader generates a low-power electromagnetic field, in this case
via radio signals, which "activates" the
implanted biochip. This "activation"
enables the biochip to send the ID code
back to the reader via radio signals. The
reader amplifies the received code,
converts it to digital format, decodes and
displays the ID number on the reader's
LCD display. The reader must normally be within 2 to 12 inches of the biochip to communicate. The reader and
biochip can communicate through most
materials, except metal.
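As a purely illustrative summary of this read sequence (the class names and numbers below are hypothetical and do not describe any vendor's firmware), here is a short Java sketch of the energise / respond / decode / display cycle:

// Illustrative reader-side sequence for a passive RFID biochip.
public class BiochipReaderSketch {

    // Stand-in for the implanted transponder: it only answers when energised.
    static class Transponder {
        private final String idNumber;
        Transponder(String idNumber) { this.idNumber = idNumber; }
        String respondTo(double chargeWatts) {
            return chargeWatts > 0 ? idNumber : null;      // a passive chip needs the reader's charge to reply
        }
    }

    static String scan(Transponder chip) {
        double charge = 0.0005;                            // low-power activation pulse (well under 1/1000 W)
        String raw = chip.respondTo(charge);               // ID returned over low-frequency radio
        String decoded = raw == null ? "" : raw.trim();    // decoding step (trivial in this sketch)
        System.out.println("ID: " + decoded);              // shown on the reader's LCD in a real unit
        return decoded;
    }

    public static void main(String[] args) {
        scan(new Transponder("123-456-789012345"));        // a made-up 15-digit ID
    }
}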
5. THE APPLICATIONS:
With a biochip, tracing a person or animal anywhere in the world is possible: once the reader is connected to the internet and satellites, and a centralized database is maintained about the biochipped creatures, it is always possible to trace the person intended. An implanted biochip can be scanned to pay for groceries, obtain medical procedures, and conduct financial transactions. Currently, the implanted biochips in use only store one 10 to 15 digit ID number. If biochips are designed with more ROM and RAM, there is definitely an opportunity.
A biochip leads to secured e-commerce systems. It is a fact that the world is very quickly moving to a digital or e-economy through the Internet. It is expected that by 2008, 60% of business transactions will be performed through the Internet. The e-money future, however, isn't necessarily secure. The Internet wasn't built to be Fort Knox. In the wrong hands, this powerful tool can turn dangerous. Hackers have already broken into bank files that were supposedly 100% secure. A biochip is a possible solution to the "identification and security" dilemma faced by the digital economy. This type of new bio-security device is capable of accurately tracking information regarding what users are doing, and who is actually doing it.
Medicinal implementations of
Biochips:
A new era proposed by us: the Biochip as a Glucose Detector. The biochip can be integrated with a glucose detector.
The chip will allow diabetics to easily
monitor the level of the sugar glucose in
their blood. Diabetics currently use a
skin prick and a hand-held blood test,
and then medicate themselves with
insulin depending on the result. The
system is simple and works well, but the
need to draw blood means that most
diabetics don't test themselves as often
as they should. Although they may get
away with this in the short term, in later
life those who monitored infrequently
suffer from blindness, loss of circulation,
and other complications. The solution is
more frequent testing, using a less
invasive method. The biochip will sit
underneath the skin, sense the glucose
level, and send the result back out by
radio-frequency communication.
Proposed principle of Glucose
detection: A light-emitting diode (LED)
in the biochip starts off the detection
process. The light that it produces hits a
fluorescent chemical: one that absorbs
incoming light and re-emits it at a longer
wavelength. The longer wavelength of
light is then detected, and the result is
sent to a control panel outside the body.
Glucose is detected because the sugar
reduces the amount of light that the
fluorescent chemical re-emits. The more
glucose there is, the less light is detected.
Biochip as Oxygen Sensor: The biochip
can also be integrated with an oxygen
sensor .The oxygen sensor will be useful
not only to monitor breathing in
intensive care units, but also to check
that packages of food, or containers of
semiconductors stored under nitrogen
gas, remain airtight.
6. Typical Problem of Biochips:
A Solution Proposed
The Lock: Problem before the world
A chip implant would contain a person's financial world, medical history and health care: it would contain his "electronic life". If cash no longer existed and the world's economy were totally chip-oriented, there would be a huge "black market" for chips! Since there would be no cash and no other bartering system, criminals would cut off hands and heads, stealing "rich folks'" chips. "It is very dangerous because once kidnappers get to know about these chips, they will skin people to find them."
Typical solutions that won't work well have already been proposed by different people: for example, that the biochip should retain its data only while it is placed in a fluid medium like blood and not in any other medium. This technique is unsuitable for identification of dead bodies (murdered by kidnappers), as the data such as the social security number is lost.
7. CONCLUSION
The cyber future: InfoTech will be implanted in our bodies. A chip implanted somewhere in the human body might serve as a combination of credit card, passport, driver's license and personal diary. No longer would we need to worry about losing credit cards while traveling. A chip inserted into the human body might also give us extra mental power. The really fascinating idea is under fast-track research, and we are close: the day when we have chips embedded in our skin is not too far from now. What sounds like "science fiction stuff" is a true example to prove that science really starts with fiction.
A
PAPER ON
ELECTRONIC NOSE
AN APPLICATION OF ARTIFICIAL NEURAL NETWORKS
PAPER PRESENTED
BY
K.KANCHANA GANGA B.K.BHARATH KUMAR
ADM.NO: 06F21A0220 ADM.NO: 06F21A1209
[email protected] [email protected]
Mobile no: 9966350641
DEPARTMENT OF ELECTRICAL & ELECTRONICS
GATES INSTITUTE OF TECHNOLOGY
GOOTY.
CONTENTS:
1. ABSTRACT
2. INTRODUCTION TO NEURAL NETWORKS
3. A NEURAL NETWORK
4. WHY TO USE NEURAL NETWORKS
5. NEURAL NETWORKS VS CONVENTIONAL COMPUTERS
6. FROM HUMAN NEURONS TO ARTIFICIAL NEURONS
7. A SIMPLE NEURON
8. ELECTRONIC NOSES AND THEIR APPLICATIONS
9. ELECTRONIC NOSE FOR MEDICINE
10. CONCLUSION
ABSTRACT:
This Report is an introduction to Artificial Neural networks. It also deals with
an interesting
application of neural network called Electronic Nose . Electronic/artificial noses
are being
developed as systems for the automated detection and classification of odors, va
pors, and
gases. An electronic nose is generally composed of a chemical sensing system (e.
g., sensor
array or spectrometer) and a pattern recognition system (e.g., artificial neural
network).
We are developing Electronic noses for the automated identification of volatile
chemicals for
environmental and medical applications. In this paper, we briefly describe an el
ectronic nose,
show some results from a prototype electronic nose, and discuss applications of
electronic
noses in the environmental, medical, and food industries.
2. Introduction to neural networks
DEFINITION:
Neural networks are a form of Artificial Intelligence that, through pattern matching, predict the outcome from a given set of inputs. The neural network trains using a pattern file. In training, it converges on a proper set of weights, or coefficients, that lead from input to output. After training, the network simply computes an arithmetic expression that is a function of the inputs and the weight coefficients to obtain the output.
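Put another way, once training has fixed the weights, producing an output is just evaluating a weighted sum and an activation function. A minimal Java sketch of that arithmetic (our own illustrative numbers, not any particular toolkit):

// Evaluating a trained neuron: output = f(bias + sum of input_i * weight_i).
public class NeuronSketch {

    static double neuronOutput(double[] inputs, double[] weights, double bias) {
        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];                 // the arithmetic expression learned in training
        }
        return 1.0 / (1.0 + Math.exp(-sum));               // sigmoid activation squashes the sum into (0, 1)
    }

    public static void main(String[] args) {
        double[] inputs  = {0.2, 0.7, 0.1};
        double[] weights = {1.5, -0.8, 0.3};               // weight coefficients found during training
        System.out.println(neuronOutput(inputs, weights, 0.05));
    }
}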
3. A Neural Network
An Artificial Neural Network (ANN) is an
information processing paradigm that is
inspired by the way biological nervous
systems, such as the brain, process
information. The key element of this
paradigm is the novel structure of the
information processing system. It is
composed of a large number of highly
interconnected processing elements
(neurons) working in unison to solve
specific problems. ANNs, like people,
learn by example. An ANN is configured
for a specific application, such as pattern
recognition or data classification, through
a learning process. Learning in biological
systems involves adjustments to the
synaptic connections that exist between the
neurons. This is true of ANNs as well. The
first artificial neuron was produced in
1943 by the neurophysiologist Warren
McCulloch and the logician Walter Pitts.
But the technology available at that time
did not allow them to do too much.
4. Why to use neural networks?
Neural networks, with their remarkable
ability to derive meaning from
complicated or imprecise data, can be used
to extract patterns and detect trends that
are too complex to be noticed by either
humans or other computer techniques. A
trained neural network can be thought of
as an "expert" in the category of
information it has been given to analyze.
This expert can then be used to provide
projections given new situations of interest
and answer "what if questions.
Other advantages include:
1. Adaptive learning: An ability to learn
how to do tasks based on the data given for
training or initial experience.
2. Self-Organization: An ANN can create
its own organization or representation of
the information it receives during learning
time.
3. Real Time Operation: ANN
computations may be carried out in
parallel, and special hardware devices are
being designed and manufactured which
take advantage of this capability.
4. Fault Tolerance via Redundant
Information Coding: Partial destruction of
a network leads to the corresponding
degradation of performance. However,
some network capabilities may be retained
even with major network damage.
5. Neural networks versus
conventional computers
Neural networks take a different approach
to problem solving than that of
conventional computers. Conventional
computers use an algorithmic approach i.e.
the computer follows a set of instructions
in order to solve a problem.
Unless the specific steps that the computer
needs to follow are known the computer
cannot solve the problem. That restricts the
problem solving capability of conventional
computers to problems that we already
understand and know how to solve. But
computers would be so much more useful
if they could do things that we don't
exactly know how to do.
Neural networks process information in a
similar way the human brain does. The
network is composed of a large number of
highly interconnected processing elements
(neurons) working in parallel to solve a
specific problem. Neural networks learn
by example. They cannot be programmed
to perform a specific task. The examples
must be selected carefully otherwise useful
time is wasted or even worse the network
might be functioning incorrectly. The
disadvantage is that because the network
finds out how to solve the problem by
itself, its operation can be unpredictable.
On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
Neural networks do not perform miracles.
But if used sensibly they can produce
some amazing results.
FIG: NEURON
6. From Human Neurons to
Artificial Neurons
We construct these neural networks by first
trying to deduce the essential features of
neurons and their interconnections. We
then typically program a computer to
simulate these features. However because
our knowledge of neurons is incomplete
and our computing power is limited, our
models are necessarily gross idealizations
of real networks of neurons.
FIG: Neuron model
7. A simple neuron
An artificial neuron is a device with many
inputs and one output. The neuron has two
modes of operation;
The training mode and the using mode.
In the training mode, the neuron can be
trained to fire (or not), for particular input
patterns.
In the using mode, when a taught input
pattern is detected at the input, its
associated output becomes the current
output. If the input pattern does not belong
in the taught list of input patterns, the
firing rule is used to determine whether to
fire or not.
FIG: A simple neuron
8. ELECTRONIC NOSES AND
THEIR APPLICATIONS
The two main components of an
electronic nose are the sensing system and
the automated pattern recognition system.
The sensing system can be an array of
several different sensing elements (e.g.,
chemical sensors), where each element
measures a different property of the sensed
chemical, or it can be a single sensing
device (e.g., spectrometer) that produces
an array of measurements for each
chemical, or it can be a combination.
Each chemical vapor presented to the
sensor array produces a signature or
pattern characteristic of the vapor. By
presenting many different chemicals to the
sensor array, a database of signatures is
built up. This database of labeled
signatures is used to train the pattern
recognition system. The goal of this
training process is to configure the
recognition system to produce unique
classifications of each chemical so that an
automated identification can be
implemented. The quantity and complexity
of the data collected by sensors array can
make conventional chemical analysis of
data in an automated fashion difficult.
One approach to chemical vapor
identification is to build an array of
sensors, where each sensor in the array is
designed to respond to a specific chemical.
With this approach, the number of unique
sensors must be at least as great as the
number of chemicals being monitored. It is
both expensive and difficult to build
highly selective chemical sensors.
Artificial neural networks (ANNs), which
have been used to analyze complex data
and to recognize patterns, are showing
promising results in chemical vapor
recognition. When an ANN is combined
with a sensor array, the number of
detectable chemicals is generally greater
than the number of sensors [1]. Also, less
selective sensors which are generally less
expensive can be used with this approach.
Once the ANN is trained for chemical
vapor recognition, operation consists of
propagating the sensor data through the network. Since this is simply a series of vector-matrix multiplications, unknown
chemicals can be rapidly identified in the
field. Electronic noses that incorporate
ANNs have been demonstrated in various
applications.
Some of these applications will be
discussed later in the paper. Many ANN
configurations and training algorithms
have been used to build electronic noses
including back propagation-trained, feed
forward networks; fuzzy ART maps;
Kohonen s self-organizing maps (SOMs);
learning vector quantizers (LVQs);
Hamming networks;
Boltzmann machines;
Hopfield networks.
FIG: Schematic diagram of EN
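The forward pass mentioned above, a series of vector-matrix multiplications from the sensor readings to class scores, can be illustrated with a short Java sketch; the weights, layer sizes and vapor labels below are invented purely for illustration, not taken from any trained electronic nose.

// One feed-forward pass from a 3-sensor array to one of two vapor classes.
public class ENoseSketch {

    static double[] layer(double[] in, double[][] w) {
        double[] out = new double[w.length];
        for (int j = 0; j < w.length; j++) {
            double sum = 0.0;
            for (int i = 0; i < in.length; i++) {
                sum += w[j][i] * in[i];                    // vector-matrix multiplication
            }
            out[j] = Math.tanh(sum);                       // nonlinear activation
        }
        return out;
    }

    public static void main(String[] args) {
        double[] sensors = {0.9, 0.1, 0.4};                              // raw sensor-array signature
        double[][] hidden = {{0.5, -0.2, 0.8}, {-0.4, 0.9, 0.1}};        // made-up "trained" weights
        double[][] output = {{1.0, -0.7}, {-0.6, 0.8}};                  // one row per vapor class
        double[] scores = layer(layer(sensors, hidden), output);
        String[] vapors = {"ethanol", "ammonia"};                        // hypothetical labels
        int best = scores[0] >= scores[1] ? 0 : 1;
        System.out.println("identified vapor: " + vapors[best]);
    }
}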
9. ELECTRONIC NOSES FOR
MEDICINE
Because the sense of smell is an important
sense to the physician, an electronic nose
has applicability as a diagnostic tool. An
electronic nose can examine odors from
the body (e.g., breath, wounds, body
fluids, etc.) and identify possible
problems. Odors in the breath can be
indicative of gastrointestinal problems,
sinus problems, infections, diabetes, and
liver problems. Infected wounds and
tissues emit distinctive odors that can be
detected by an electronic nose. Odors
coming from body fluids can indicate liver
and bladder problems. Currently, an
electronic nose for examining wound
infections is being tested at South
Manchester University Hospital.
A more futuristic application of electronic
noses has been recently proposed for
telesurgery. While the inclusion of visual,
aural, and tactile senses into telepresent
systems is widespread, the sense of smell
has been largely ignored. An electronic
nose will potentially be a key component
in an olfactory input to telepresent virtual
reality systems including telesurgery.
The electronic nose would identify odors
in the remote surgical environment. These
identified odors would then be
electronically transmitted to another site
where an odor generation system would
recreate them.
10. CONCLUSION
Thus an Artificial Neural Network is
developed to make the computer think like
a human brain. And an electronic nose is a
device intended to detect odors or flavors.
Over the last decade, electronic sensing
or e-sensing technologies have
undergone important developments from a
technical and commercial point of view.
The expression electronic sensing refers
to the capability of reproducing human
senses using sensor arrays and pattern
recognition systems. For the last 15 years
as of 2007, research has been conducted to
develop technologies, commonly referred
to as electronic noses that could detect and
recognize odors and flavors. These devices
have undergone much development and
are now used to fulfill industrial needs.
Smart Phone: An Embedded System for
Universal Interactions
BY
L.SUNAINA SULTHANA P.SUJITHA
III B.TECH III B.TECH
ECE ECE
Email:[email protected] Email: [email protected]
MADANAPALLI INSTITUTE OF SCIENCE AND TECHNOLOGY
ANGALLU,MADANAPALLI,CHITTOOR (DIST)
ABSTRACT:
In this paper, we present how a smart phone system architecture allows users to interact with embedded systems located in their proximity. Our Smart Phone system architecture is firmly built on hybrid communication capabilities.
We have identified four models of interaction between a Smart
Phone and the surrounding environment: universal remote control, dual
connectivity, gateway connectivity, and peer-to-peer. Although each of
these models has different characteristics, our architecture provides a
unique framework for all of the models.
Smart phones have the unique feature of incorporating short range
wireless connectivity (e.g., Bluetooth) and Internet connectivity (e.g.,
GPRS) in the same personal mobile device. This feature together with
significant processing power and memory can turn a Smart Phone into the
only mobile device that people will carry wherever they go.
INDEX
1. Introduction
2. Smart Phones Technology
3. Smart Phones Interaction Model
.. Universal Remote Control Model
.. Dual Connectivity Model
.. Gateway Connectivity Model
.. Peer-to-Peer Model
4. System architecture
.. Bluetooth Engine
.. Internet access Engine
.. Proximity Engine
.. Execution Engine
.. Interface Cache Engine
.. Personal Data Storage Engine
5. Status and Future Work
6. Related Work
7. Conclusion
8. Bibliography
1. Introduction:
Embedded systems are generally electronic devices that incorporate microprocessors within their implementation. A microprocessor in the device makes it possible to remove bugs, make modifications, or add new features simply by rewriting the software that controls the device. Recent advances in technology make it feasible to incorporate significant processing power in almost every device that we encounter in our daily life.
These embedded systems are heterogeneous, distributed everywhere in the surround
ing
environment, and capable of communicating through wired or wireless
interfaces. People, however, are not yet taking advantage of this
ubiquitous computing world. Despite all the computing power lying around, most of our daily interactions with the surrounding environment are still primitive and far from the ubiquitous computing vision. Our pockets and bags are still jammed with a bunch of keys for the doors we have to open/close daily, the car key or remote, access cards, credit cards, and money to pay for goods. Any of these forgotten at home can turn the
day into a nightmare.
All these items are absolutely necessary for us to properly
interact with our environment. The community does not lack
innovative solutions that address some of its aspects (e.g., wireless
micro servers, electronic payment methods, digital door keys). Ideally,
we would like to have a single device that acts as both personal server
and personal assistant for remote interaction with embedded systems
located in proximity of the user. This device should be programmable
and support dynamic software extensions for interaction with newly
encountered embedded systems (i.e., dynamically loading new
interfaces).
We believe that Smart Phones are the devices that have the greatest
chance of successfully becoming universal remote controls for people to
interact with various devices from their surrounding environment; they
will also replace all the different items we currently carry in our pockets.
Smart Phone is an emerging mobile phone technology that supports Java
program execution and provides both short range wireless connectivity
(Bluetooth) and cellular network connectivity through which the Internet
can be accessed.
2. Smart Phones Technology:
With more than a billion mobile phones being carried around by
consumers of all ages, the mobile phone has become the most pervasive
pocket-carried device. We are beginning to see the introduction of Smart
Phones, such as Sony Ericsson P800/P900 and Motorola A760 (Figure 1),
as a result of the convergence of mobile phones and PDA devices. Unlike
traditional mobile phones, which have limited processing power and act
merely as dumb conduits for passing voice or data between the cellular
network and end users, Smart Phones combine significant computing
power with memory, short-range wireless interfaces (e.g.,Bluetooth),
Internet connectivity (over GPRS), and various input-output components
(e.g., high-resolution color touch screens, digital cameras, and MP3
players). Sony Ericsson P800/P900 runs Symbian OS, an operating system
specifically designed for resource constrained devices such as mobile
phones. It also comes equipped with two versions of Java technology:
Personal Java and J2ME CLDC/MIDP. Additionally, it supports C++, which provides low level access to the operating system and the Bluetooth driver. The phone has 16MB of internal memory and up to 128MB of external flash memory. The Motorola A760 has a Motorola i250 chip for communication, Intel's 200 MHz PXA262 chip for computation, and 256MB of RAM. It runs a version of MontaVista Linux and comes with Java J2ME support. Bluetooth is a low-cost, low-power standard for wireless connectivity. Today, we can find Bluetooth chips embedded in PCs, laptops, digital cameras, GPS devices, Smart Phones, and a whole range of other electronic devices. Bluetooth supports point-to-point and point-to-multipoint connections. We can actively connect a Bluetooth device to up to seven devices simultaneously.
Together, they form an ad hoc network, called Piconet. Several
piconets can be linked to form a Scatternet. Another important
development for the mobile phone technology is the introduction of
General Packet Radio Service (GPRS), a packet switching technology over
the current GSM cellular networks. GPRS is offered as a non-voice value-added service that allows data to be sent and received across GSM cellular networks at a rate of up to 171.2 kbps, and its goal is to supplement today's Circuit Switched Data and Short Message Service. GPRS offers an
always-on service and supports Internet protocols.
3. Smart Phone Interaction Models:
A Smart Phone can be used to interact with the surrounding
environment in different ways. We have identified four interaction
models: universal remote control, dual connectivity, gateway
connectivity, and peer-to-peer. With these models, a Smart Phone can be
used to execute applications from as simple as remotely adjusting various
controls of home appliances or opening smart locks to complex
applications such as automatically booking a cab or ordering/paying in a
restaurant.
_ Universal Remote Control Model:
The Smart Phone can act as a universal remote control for interaction
with embedded systems located in its proximity. To support proximityaware
interactions, both the Smart Phone and the embedded systems with
which the user interacts must have short-range wireless communication
capabilities. Figure 2 illustrates such interactions using Bluetooth. Due to
its low-power, low-cost features, Bluetooth is the primary candidate for
the short-range wireless technology that will enable proximity-aware
communication.
Since embedded systems with different functionalities can be scattered
everywhere, a discovery protocol will allow Smart Phones to learn the
identity and the description of the embedded systems located in their
proximity. This protocol can work either automatically or on-demand, but
the information about the devices currently located in user s proximity is
displayed only upon user s request. An alternative, more flexible, solution
is to define a protocol that allows a Smart Phone to learn the interfaces
from the embedded systems themselves. The problem with this idea is that
many embedded systems may not be powerful enough to run complex
software that implements such protocols. In the following, we describe a
second model of interaction that solves this problem.
_ Dual Connectivity Model:
Central to our universal interaction architecture is the dual
connectivity model which is based on the hybrid communication
capabilities incorporated in the Smart Phones. They have the unique
feature of incorporating both short range wireless connectivity (e.g.,
Bluetooth) and Internet connectivity (e.g., GPRS) in the same personal
mobile device. Figure 3 illustrates the Dual Connectivity interaction
model.
A typical application is opening/closing Smart Locks. We envision
that the entry in certain buildings will soon be protected by Smart Locks
(e.g., locks that are Bluetooth-enabled and can be opened using digital
door keys). The dual connectivity model enables users carrying Smart
Phones to open these locks in a secure manner. The Smart Phone can
establish a connection with the lock, obtain the ID of the lock, and
connect to an Internet server over GPRS to download the code that will be
used for opening the lock (a digital door key can also be downloaded at
the same time). The server hosting the interface and the keys for the
Smart Lock maintains a list of people that are allowed to open the lock.
The identity of the Smart Phone user (stored on the Smart Phone in the
form of personal information) is piggybacked on the request submitted to
the server. If the server finds that this user is allowed to open the lock, it
responds with the code for the interface and the digital key.
The dual connectivity model can also be used to implement
electronic payment applications. The Internet connection can be used by
the client to withdraw electronic currency from her bank and store it on
the phone. Another option provided by the Smart phone is to send some of
the unused money back into the bank account. For instance, this ability
can be used to authenticate the client. Figure 3 presents a similar
application that involves accessing an ATM using a Smart phone.
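The smart-lock sequence above can be sketched in a few lines of Java; every interface, method and identifier here is hypothetical and only illustrates the flow (read the lock's ID over Bluetooth, fetch the digital key over GPRS with the user's identity attached, then present the key to the lock).

// Illustrative dual-connectivity flow for opening a Smart Lock.
public class SmartLockFlowSketch {

    interface BluetoothLink {                              // short-range side of the phone
        String readLockId();
        boolean presentKey(String digitalKey);
    }

    interface GprsLink {                                   // Internet side of the phone
        // The server checks whether this user may open this lock before answering.
        String downloadDoorKey(String lockId, String userId);
    }

    static boolean openLock(BluetoothLink lock, GprsLink server, String userId) {
        String lockId = lock.readLockId();                   // 1. identify the nearby lock
        String key = server.downloadDoorKey(lockId, userId); // 2. fetch the digital key over GPRS
        return key != null && lock.presentKey(key);          // 3. unlock over Bluetooth
    }

    public static void main(String[] args) {
        BluetoothLink lock = new BluetoothLink() {
            public String readLockId() { return "LOCK-42"; }
            public boolean presentKey(String k) { return "KEY-42".equals(k); }
        };
        GprsLink server = (lockId, userId) ->
                "alice".equals(userId) ? "KEY-42" : null;    // server-side access control decision
        System.out.println(openLock(lock, server, "alice"));
    }
}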
_ Gateway Connectivity Model:
Many pervasive applications assume wireless communication
through the IEEE 802.11 family of protocols. These protocols allow for a
significant increase in the communication distance and bandwidth
compared to Bluetooth. Using these protocols, the communication range is
250m or more, while Bluetooth reaches only 10m. The bandwidth is also
larger, 11-54Mbps compared to less than 1Mbps for Bluetooth.
The disadvantage of 802.11 is that it consumes too much energy, and
consequently, it drains the mobile device's batteries in a very short
period of time. With the current state of the art, we do not expect to have
802.11 network interfaces embedded in Smart Phones or other resource
constrained embedded systems that need to run on batteries for a
significant period of time (e.g., several hours or even days). In such a
situation, a user would like to access data and services provided by these
networks from its Smart Phone. To succeed, a gateway device has to
perform a change of protocol from Bluetooth to 802.11 and vice-versa.
Figure 4 illustrates this communication model and also presents an
application that can be built on top of it. Let us assume a scenario where
people want to book nearby cabs using their Smart Phones. Instead of
calling a taxi company or gesturing to book a cab, a client can start an
application on her Smart Phone that seamlessly achieves the same goal.
Hence, the client is just one-click away from booking a cab. In this
scenario, each cab is equipped with 802.11 wireless networking and GPS
devices, and the entire booking process is completely decentralized. To
join the mobile ad hoc network created by the cabs, a Smart Phone needs
to connect to a gateway station that performs a translation of protocols
from Bluetooth to 802.11 and vice-versa.
_ Peer-to-Peer Model:
The Smart Phones can also communicate among themselves (or with
other Bluetooth-enabled devices) in a multihop, peer-to-peer fashion,
similar to mobile ad hoc networks. For instance, this model allows people
to share music and pictures with others even if they are not in the
proximity of each other. Figure 5 depicts yet another example of this
model. A group of friends having dinner in a restaurant can use their
Smart Phones to execute a program that shares the check. One phone
initiates this process, an ad hoc network of Smart Phones is created, and
finally the payment message arrives at the cashier.
4. System Architecture:
Our system architecture for universal interaction consists of a
common Smart Phone software architecture and an interaction protocol.
This protocol allows Smart Phones to interact with the surrounding
environment and the Internet. Figure 6 shows the Smart Phone software architecture. In the following, we describe each of its components.
_ Bluetooth Engine is responsible for communicating with the
Bluetooth-enabled embedded systems. It is composed of sub-components
for device discovery and sending/receiving data. The Bluetooth Engine is
a layer above the Bluetooth stack and provides a convenient Java API for
accessing the Bluetooth stack.
_ Internet Access Module carries out the communication between the
Smart Phone and various Internet servers. It provides a well-defined API
that supports operations specific to our architecture(e.g., downloading an
interface). The protocol of communication is HTTP on top of GPRS.
_ Proximity Engine is responsible for discovering the embedded
systems located within the Bluetooth communication range. Each time the
user wants to interact with one of these systems, and an interface for this
system is not available locally (i.e., a miss in the Interface Cache), the
Proximity Engine is responsible from downloading such an interface. If
the embedded system has enough computing power and memory, the
interface can be downloaded directly from it. Otherwise, the Proximity
Engine invokes the Internet Access Module to connect to a web server and
download the interface. The downloaded interface is stored in the
Interface Cache for later reuse. Once this is done, the Proximity Engine
informs the Execution Engine to dispatch the downloaded interface for
execution. All further communication between the Smart Phone and the
embedded system happens as a result of executing this interface.
_ Execution Engine is invoked by the Proximity Engine and is
responsible for dispatching interface programs for execution over the Java
virtual machine. These programs interact with the Bluetooth Engine to
communicate with the embedded systems or with other Smart Phones (as
described in Section 3.4). They may also interact with the Internet Access
Module to communicate with Internet servers. For instance, the interface
programs may need to contact a server for security related actions or to
download necessary data in case of a miss in the Personal Data Storage.
_ Interface Cache stores the code of the downloaded interfaces. This
cache avoids downloading an interface every time it is needed. An
interface can be shared by an entire class of embedded systems (e.g.,
Smart Locks, or Microwaves). Every interface has an ID (which can be the
ID of the embedded system or the class of embedded systems it is
associated with). This ID helps in recognizing the cached interface each
time it needs to be looked up in the cache.
Additionally, each interface has an associated access handler that is
executed before any subsequent execution of the interface. This handler
may define the time period for which the interface should be cached, how
and when the interface can be reused, or the permissions to access local
resources. The user can set the access handler s parameters before the
first execution of the interface.
_ Personal Data Storage acts as a cache for active data , similar to
Active Cache. It stores data that needs to be used during the interactions
with various embedded systems. Examples of such data include digital
door keys and electronic cash. Each data item stored in this cache has
three associated handlers: access handler, miss handler, and eviction
handler. Each time an interface needs some data, it checks the Personal
Data Storage. If the data is available locally (i.e., hit), the access handler
is executed, and the program goes ahead. For instance, the access handler
may check if this data can be shared among different interfaces. If the
data is not available locally (i.e., miss), the miss handler instructs the
Internet Access Module to download the data from the corresponding
Internet server. The eviction handler defines the actions to be taken when
data is evicted from the cache. For instance, electronic cash can be sent
back to the bank at eviction time.
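A minimal Java sketch of this handler pattern follows; the interface names and the electronic-cash example are illustrative stand-ins, not the actual API of the architecture described above.

import java.util.HashMap;
import java.util.Map;

// Illustrative cache-with-handlers in the spirit of the Personal Data Storage.
public class PersonalDataStorageSketch {

    interface MissHandler   { String fetchFromServer(String key); }        // e.g. download e-cash over GPRS
    interface AccessHandler { boolean mayReuse(String key); }              // policy checked on every hit
    interface EvictHandler  { void onEvict(String key, String value); }    // e.g. return cash to the bank

    private final Map<String, String> items = new HashMap<>();
    private final MissHandler missHandler;
    private final AccessHandler accessHandler;
    private final EvictHandler evictHandler;

    PersonalDataStorageSketch(MissHandler m, AccessHandler a, EvictHandler e) {
        missHandler = m; accessHandler = a; evictHandler = e;
    }

    String get(String key) {
        String value = items.get(key);
        if (value == null) {                               // miss: pull the data from an Internet server
            value = missHandler.fetchFromServer(key);
            items.put(key, value);
        } else if (!accessHandler.mayReuse(key)) {         // hit: the access handler can refuse reuse
            return null;
        }
        return value;
    }

    void evict(String key) {
        String value = items.remove(key);
        if (value != null) {
            evictHandler.onEvict(key, value);              // e.g. send unused electronic cash back
        }
    }

    public static void main(String[] args) {
        PersonalDataStorageSketch store = new PersonalDataStorageSketch(
                key -> "cash:10.00",                       // miss handler: hypothetical download
                key -> true,                               // access handler: always allow reuse
                (key, value) -> System.out.println("returning " + value + " to the bank"));
        System.out.println(store.get("e-cash"));
        store.evict("e-cash");
    }
}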
Figure 7 shows the interaction protocol that takes place when a
Smart Phone needs to interact with an embedded system.
In case of a miss in the Interface Cache, the interface needs to be
downloaded on the phone either from the web server or from the
embedded system itself. An interface downloaded from an embedded
system is untrusted and is not allowed to access local resources (i.e., this
is a sandbox model of execution, where the interface can only execute
safe instructions on the phone).
Each time a Smart Phone requests an interface from the web server,
it has to send the interface ID and the URL provided by the embedded
system. It also sends its ID (stored in the Personal Data Storage). The
permission to download an interface is subject to access control enforced
based on the Smart Phone ID and, potentially, other credentials presented
by the user.
5. Status and Future Work:
In this section, we briefly outline the current status and several
issues that we have to overcome in order to implement our system
architecture. Our first step consists of implementing the basic architecture
for the universal remote control interaction model.
The architecture components to be developed for this model are the
Bluetooth Engine and Proximity Engine along with a simple Execution
engine over Java. We have partially implemented the Bluetooth Engine
and have written and tested a few sample programs to test the feasibility
of connecting a phone to another phone or to a Bluetooth-enabled laptop.
Besides directly connecting to Bluetooth-enabled devices, a phone can
also connect to a LAN. We are in the process of investigating the
feasibility of using the Bluetooth LAN profile to connect the phone to a
LAN through a Bluetooth access point.
Our system architecture supports both situations through the peer-topeer
model and the gateway model, respectively. To connect a Smart
Phone to the Internet over GPRS, we can use HTTP or TCP. A decision
regarding the protocol used for Internet access needs to consider the
trade-offs between the simplicity provided by HTTP and the flexibility
and efficiency provided by TCP.
Although our architecture provides a level of security by obtaining
interface code and confidential data from a trusted web server, many
issues related to security and privacy still need to be addressed. A simple
password scheme is insufficient because entering a password every time
confidential data is accessed could be a major turn off for the users. We
plan to investigate both software protection mechanisms and hardware
solutions (e.g., biometric security using fingerprint recognition).
6. Related Work:
Our goal is to provide a simple method of interaction with systems
embedded in the surrounding environment. Unlike Personal Server which
cannot connect directly to the Internet, Smart Phones do not have to carry
every possible data or code that the user may need; they can download on
demand data and code for interfaces from the Internet.
Our model is more flexible as we allow code and data to be
downloaded to mobile devices, either from the physical environment via
short-range wireless connection, or from the Internet via the GPRS
connection.
Additionally, it covers other interaction models besides the
universal remote control model (e.g., gateway model, peer-to-peer model).
However, the issue of digital door key distribution from the external
authority to the Personal Servers is not addressed. Our work uses the
Smart Phone as an incarnation of a Personal Server and also addresses the
issue of secure key distribution. More generally, our system architecture
provides a general framework that can be used to implement any
application that needs to interact with wireless embedded systems.
7. Conclusion:
In this paper, we have argued for turning the Smart Phone into the
only device that people carry in their pockets wherever they go. The
Smart Phone can be used both as a personal server that stores or downloads
the data its user needs and as a personal assistant for remote interaction with
embedded systems located in the user's proximity. To achieve this vision,
we have presented a unified system architecture for different models of
interaction between a Smart Phone and the surrounding environment.
Central to this universal interaction architecture is the dual connectivity
feature of Smart Phones, which allows them to interact with the close-by
environment through short-range wireless networking and with the rest of
the world through the Internet over cellular links.
8. Bibliography:
[1] The Millicent Protocol for Inexpensive Electronic Commerce.
http://www.w3.org/Conferences/WWW4/Papers/246/.
[2] MIDP Profile. http://wireless.java.sun.com/midp/.
[3] General Packet Radio Service (GPRS).
http://www.gsmworld.com/technology/gprs/intro.shtml.
[4] Zeevo Bluetooth. http://www.azzurri.com/new htm/zeevo.htm.
[5] HP iPAQ 5400. http://welcome.hp.com/country/us/en/prodserv/handheld.html.
[6] Bluetooth. https://www.bluetooth.org/.
[7] Digicash. http://www.digicash.com.
[8] Ericsson P800. http://www.sonyericsson.com/P800/.
[9] PersonalJava. http://java.sun.com/j2me/.
[10] Symbian OS. http://www.symbian.com/.
A NEW REVOLUTIONARY SYSTEM
TO DETECT HUMAN BEINGS BURIED UNDER
EARTHQUAKE RUBBLE
USING MICROPROCESSOR OR MICROCONTROLLER.
(An Embedded System)
Presented BY
Y.SIVA KRISHNA J.VAGDEVI RAMYA
[email protected] [email protected]
BAPATLA ENGINEERING COLLEGE
ABSTRACT
Thousands of persons killed in an earthquake: such words are not rare
headlines but news we come across almost daily, whenever we read a
newspaper or watch the news on TV.
A person's life is precious and meaningful to his or her loved ones.
As responsible engineers and members of society, we felt the need for a
system that helps avoid such loss of life. With the meteoric rise of
embedded systems built around microprocessors, the system designed here
aims at preventing deaths and guiding rescue measures.
A new revolutionary microwave
life detection system, which is used to locate
human beings buried under earthquake rubble, has
been designed. This system operating at certain
frequency can remotely detect the breathing and
heartbeat signals of human beings buried under
earthquake rubble. By proper processing of these
signals, the status of the trapped person can be
easily judged. The entire process takes place within
a few seconds as the system is controlled by a
microprocessor (8085) or microcontroller unit.
With the advent of this system, the worldwide death
rate may decrease to a great extent, as a large
percentage of deaths occur due to earthquakes.
INTRODUCTION:
At present, as we all know, the need
of the hour is an effective method for
rescuing people buried under earthquake
rubble or collapsed buildings, and it has to be
found before we experience another quake.
Present methods for searching for and rescuing
victims buried or trapped under earthquake
rubble are not effective. Keeping all these factors
in mind, a system which will be really
effective in solving the problem has been
designed.
PRINCIPLE OF OPERATION:
The basic principle is that when
a microwave beam of certain frequency [L
(or) S band (or) UHF band] is aimed at a
portion of rubble (or) collapsed building under
which a person has been trapped, the
microwave beam can penetrate through the
rubble to reach the person.
When the microwave beam
focuses on the person, the reflected wave from
the person's body will be modulated (or
changed) by his/her movements, which include
breathing and heartbeat. Simultaneously,
reflected waves are also received from the collapsed
structures.
So, if the reflected waves from the
immovable debris are cancelled and the reflected
wave from the person's body is properly
distinguished, the breathing and heartbeat signals
can be detected.
By proper processing of these signals, the
status of the trapped person can be easily judged.
Thus a person under debris can be identified.
MAJOR COMPONENTS OF THE
CIRCUIT:
The microwave life detection system has four major
components. They are
1. A microwave circuit which generates, amplifies
and distributes microwave signals to different
microwave components.
2. A microprocessor-controlled clutter cancellation
system, which creates an optimal signal to cancel
the clutter from the rubble.
3. A dual antenna system, which consists of two
antennas, energized sequentially.
4. A laptop computer which controls the
microprocessor and acts as the monitor.
WORKING FREQUENCY:
The frequency of the microwave falls under
two categories, depending on the type and
nature of the collapsed building. They are
1. L (or) S band frequency say 1150 MHz
2. UHF band frequency say 450 MHz
Let us see the advantages and
disadvantages of both the systems later.
CIRCUIT DESCRIPTION:
The circuit description is as follows:
Phase locked oscillator:
The phase-locked oscillator generates a
very stable electromagnetic wave, say at 1150
MHz, with an output power of about 400 mW.
Directional coupler 1 (10 dB):
This wave is then fed through a 10 dB
directional coupler and a circulator before
reaching a radio frequency switch, which
energizes the dual antenna system. Also, the
ten dB directional coupler branches out one-tenth
of the wave (40 mW), which is then divided
equally by a directional coupler 2 (3 dB).
Directional coupler 2 (3 dB):
One output of the 3 dB directional coupler 2
(20mW) drives the clutter cancellation unit. Other
output (20mW) serves as a local reference signal for
the double balanced mixer.
Antenna system: The dual antenna system
has two antennas, which are energized sequentially
by an electronic switch. Each antenna acts
separately.
Clutter cancellation system:
The clutter cancellation unit consists of
1. A digitally controlled phase shifter 1
2. A fixed attenuator
3. An RF amplifier
4. A digitally controlled attenuator.
WORKING:
Clutter cancellation of the received
signal:
1. The wave radiated by antenna 1 penetrates the
earthquake rubble to reach the buried person.
2. The reflected wave received by antenna 2
consists of a large reflected wave from the
rubble and a small reflected wave from the
person's body.
3. The large clutter from the rubble can be
cancelled by a clutter-cancelling signal.
4. The small reflected wave from the person's
body cannot be cancelled by a pure
sinusoidal cancelling signal because his/her
movements modulate it.
5. The output of the clutter cancellation circuit
is automatically adjusted to be of equal
amplitude and opposite phase to the
clutter from the rubble.
6. Thus, when the output of the clutter
cancellation circuit is combined in the
directional coupler 3 (3 dB), the large clutter
from the rubble is completely cancelled.
7. Now, the output of the directional coupler 3
(3 dB) is passed through a directional coupler
4 (6 dB).
8. One-fourth of the output is
amplified by an RF pre-amplifier and then
mixed with a local reference signal in a double
balanced mixer.
9. Three-fourths of the output is directed to a
microwave detector to provide a dc output, which
serves as the indicator for the degree of clutter
cancellation.
10. When the settings of the digitally controlled
phase shifter and the attenuator are swept by the
microprocessor control system, the output of the
microwave detector varies accordingly.
Demodulation of the clutter cancelled
signal:
At the double balanced mixer, the amplified signal
of the reflected wave from the person's body is
mixed with the local reference signal.
The phase of the local reference signal is controlled
by another digitally controlled phase shifter 2 for an
optimal output from the mixer.
The output of the mixer consists of the breathing
and heartbeat signals of the human plus some
unavoidable noise.
This output is fed through a low-frequency
amplifier and a band-pass filter (0.4 Hz)
before being displayed on the monitor.
The function of the digitally controlled phase
shifter 2 is to control the phase of the local
reference signal for the purpose of increasing
the system sensitivity.
The reflected signal from the person's body,
after amplification by the pre-amplifier, is
mixed with the local reference signal in a
double balanced mixer.
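As a rough illustration only (not the authors' exact signal chain), the Python sketch below band-pass filters a simulated detector output to the 0.2-3 Hz range in which the breathing and heartbeat components lie; every signal parameter here is an assumption chosen for the example.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                      # assumed sampling rate of the detector output, Hz
t = np.arange(0, 30, 1 / fs)    # 30 s observation window

# Assumed test signal: small body motion (breathing ~0.3 Hz, heartbeat ~1.2 Hz)
# riding on residual clutter (a DC offset) and wideband noise.
received = (0.05 * np.sin(2 * np.pi * 0.3 * t)
            + 0.01 * np.sin(2 * np.pi * 1.2 * t)
            + 0.5                                   # residual clutter after cancellation
            + 0.02 * np.random.randn(t.size))

# Band-pass filter to the 0.2-3 Hz band of breathing and heartbeat signals.
b, a = butter(4, [0.2, 3.0], btype="bandpass", fs=fs)
vital_signs = filtfilt(b, a, received)

# A simple presence test: enough energy in the vital-sign band suggests a person.
print("person detected" if np.std(vital_signs) > 0.01 else "no signal")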
MICROPROCESSOR CONTROL
UNIT:
The algorithm and flowcharts
for the antenna system and the clutter
cancellation system are as follows:
Antenna system:
1. Initially the switch is kept in position 1 (the signal
is transmitted through antenna 1).
2. Wait for a predetermined sending time, Ts.
3. Throw the switch to position 2 (the signal
is received through antenna 2).
4. Wait for a predetermined receiving time, Tr.
5. Go to step 1.
Repeat the above procedure for a predetermined
time, T.
Clutter cancellation system:
1. Send the signal to the rubble through antenna 1.
2. Receive the signal from the rubble through
antenna 2.
3. Check the detector output. If it is within the
predetermined limits, go to step 5.
4. Otherwise, send the correction signal to the
digitally controlled phase shifter 1 and the attenuator,
and go to step 1.
5. Check the sensitivity of the mixer. If it is optimum,
go to step 7.
6. Otherwise, send the correction signal to the
digitally controlled phase shifter 2 to change the
phase, and go to step 1.
7. Process the signal and send it to the laptop.
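A compact Python sketch of this control loop; the hardware-interface object `hw` and all of its method names are hypothetical stand-ins for the phase shifters, attenuator, detector and mixer described above.

def clutter_cancellation_loop(hw, detector_limits=(0.0, 0.05)):
    """Sweep the clutter-cancellation settings until the detector output is
    within limits and the mixer sensitivity is optimum, then hand the
    demodulated signal to the laptop.  `hw` is a hypothetical hardware
    interface; its method names are assumptions made for illustration."""
    low, high = detector_limits
    while True:
        hw.transmit(antenna=1)                 # step 1: send signal through antenna 1
        echo = hw.receive(antenna=2)           # step 2: receive through antenna 2
        if not (low <= hw.detector_output() <= high):
            hw.adjust_phase_shifter_1()        # step 4: correct the clutter canceller
            hw.adjust_attenuator()
            continue                           # back to step 1
        if not hw.mixer_sensitivity_optimum():
            hw.adjust_phase_shifter_2()        # step 6: tune the local reference phase
            continue                           # back to step 1
        hw.send_to_laptop(hw.process(echo))    # step 7: display / alarm decision
        break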
FLOW CHART FOR ANTENNA SYSTEM
FLOW CHART FOR CLUTTER CANCELLATION SYSTEM:
ADVANTAGES OF L (OR) S
BAND FREQUENCY SYSTEM:
Microwaves of L (or) S band
frequency can penetrate rubble containing
metallic mesh more easily than UHF band
frequency waves.
ADVANTAGES OF UHF BAND
FREQUENCY SYSTEM:
Microwaves of UHF band
frequency can penetrate deeper into rubble
(without metallic mesh) than L (or) S
band frequency waves.
FREQUENCY RANGE OF
BREATHING AND HEARTBEAT
SIGNAL:
The frequency range of
heartbeat and breathing signals of human
beings lies between 0.2 and 3 Hz.
HIGHLIGHTS:
1. The location of the person under the
rubble can be known by calculating the time
lapse between the sending time, Ts and receiving
time, Tr.
2. Since it will not be possible to continuously
watch the system under critical situations, an
alarm system has been set, so that whenever the
laptop computer system processes the received
signal and identifies that there is a human being,
the alarm sound starts.
3. Also, in critical situations where living
beings other than humans are not to be
searched for, the system can identify the signals of
other living beings from the frequency of their
breathing and heartbeat signals.
CONCLUSION:
Thus a new sensitive life detection
system using microwave radiation for locating
human beings buried under earthquake rubble (or)
hidden behind various barriers has been designed.
This system operating either at L (or) S band,
UHF band can detect the breathing and heartbeat
signals of human beings buried under earthquake
rubble.
WORLDWIDE INTEROPERABILITY
FOR MICROWAVE ACCESS (WIMAX)
Sree Vidyanikethan Engineering College
A.Rangampet, Tirupati
B.DIVYASREE III ECE 06121A0419, [email protected]
M.GOUTHAMI III ECE 06121A0422, [email protected]
1 Abstract
This paper presents the features of the Worldwide
Interoperability for Microwave Access (WiMAX)
technology and intends to establish some valid
criteria for future trends of possible applications of
WiMAX. A discussion is given by comparing Wireless
Fidelity (Wi-Fi) and WiMAX. Several references have
been included at the end of the article for those willing
to know in detail about certain
specific topics.
2. Introduction
Broadband technology has rapidly become a need for
the population at large. Internet Service Providers (ISPs)
have dealt with all sorts of challenges in order to
deliver broadband solutions. In this sense, Digital
Subscriber Line (DSL) technology has appeared as a
pioneering solution. However, coverage in wireline
services is limited, and quality is another big issue.
Wireless systems are an older solution that had been
displaced because of their limits in bandwidth, their
line-of-sight (LoS) requirements and the difficulty of
delivering a cost-effective solution.
In the few years since Wi-Fi was standardized
and its products regulated and certified by the Wi-Fi
Alliance, different solutions have come into
the market. Although Wi-Fi was developed with LAN
solutions in mind, it has also been used in MAN solutions,
but with many limitations in its performance and
certainly with trade-offs (bandwidth, coverage, power
consumption). WiMAX is coming to fill this need
and deliver new broadband solutions for all the ISPs and
WISPs pressed by their users' need for
more bandwidth for their different
applications.
WiMAX is defined as Worldwide Interoperability for
Microwave Access by the WiMAX Forum, formed in
June 2001 to promote conformance and
interoperability of the IEEE 802.16 standard, officially
known as Wireless MAN. The Forum describes
WiMAX as "a standards-based technology enabling the
delivery of last mile wireless broadband
access as an alternative to cable and DSL".
"WiMAX is not a technology, but rather a certification
mark, or 'stamp of approval' given toequipment that
meets certain conformity and interoperability tests for
the IEEE 802.16 family of standards. A similar
confusion surrounds the term Wi-Fi, which like
WiMAX, is a certification mark for equipment based on
a different set of IEEE standards from the 802.11
working group for wireless local area networks
(WLAN).Neither WiMAX, nor Wi-Fi is a technology but
their names have been adopted in popular usage to
denote the technologies behind
them. This is likely due to the difficulty of using terms
like 'IEEE 802.16' in common speech and writing."
3. 802.16/HiperMAN Technology Specs
Based on IEEE 802.16 and ETSI HiperMAN, WiMAX selected
the common mode of operation of these two standards:
256-FFT OFDM.
It is concentrated in the 2- to 11-GHz WMAN space, with
the following set of features:
- Service area range up to 50 km
- NLoS operation
- QoS designed in for voice/video, differentiated services
- Very high spectrum utilization: 3.8 bit/Hz
- Up to 280 Mbps per BS
- Speeds up to 70 Mbps
It defines both the MAC and PHY layers and allows
multiple PHY-layer specifications.
4. WiMAX Evolution of the
Technology
As the envisioned usage scenario has evolved over
time, so has evolved the technological basis of
WiMAX. The IEEE 802.16 technical specification has
now evolved through three generations:
IEEE 802.16: High data rate, high power, PTP, LOS,
fixed SSs
IEEE 802.16-2004: Medium data rate, PTP, PMP,
fixed SSs
IEEE 802.16-2005: Low-medium data rate, PTP,
PMP, fixed or mobile SSs
5. WiMAX System
A WiMAX system consists of two parts:
A WiMAX tower, similar in concept to a
cell-phone tower.
- A single WiMAX tower can provide coverage
to a very large area, as big as 3,000 square
miles (~8,000 square km).
A WiMAX receiver: the receiver and
antenna could be a small box or PCMCIA card,
or they could be built into a laptop the way
Wi-Fi access is today.
A WiMAX tower station can connect directly to the
Internet using a high-bandwidth, wired connection (for
example, a T3 line). It can also connect to another
WiMAX tower using a line-of-sight microwave link.
This connection to a second tower (often referred to as
a backhaul), along with the ability of a single tower to
cover up to 3,000 square miles, is what allows WiMAX
to provide coverage to remote rural areas.
Compared to the complicated wired network, a
WiMAX system only consists of two parts:
the WiMAX base station (BS) and the WiMAX subscriber
station (SS), also referred to as customer premises
equipment (CPE). Therefore, it can be
built quickly at a low cost. Ultimately, WiMAX is also
considered as the next step in the mobile technology
evolution path. The potential combination of WiMAX
and CDMA standards is referred to as 4G.
5.1 System Model
IEEE 802.16 supports two modes of
operation: PMP and PTP.
5.1.1 Point-to-point (PTP)
The PTP link refers to a dedicated link that
connects only two nodes: BS and
subscriber terminal. It utilizes resources in
an inefficient way and substantially causes
high operation costs. It is usually only
used to serve high-value customers who
need extremely high bandwidth, such as business high-rises,
video post-production houses, or scientific
research organizations. In these cases, a single
connection contains all the available bandwidth to
generate high throughput. A highly directional and
high-gain antenna is also necessary to minimize
interference and maximize security.
5.1.2 Point-to-multipoint (PMP)
The PMP topology, where a group of subscriber
terminals are connected to a BS separately (shown
in Figure), is a better choice for users who do not need
to use the entire bandwidth. Under PMP topology,
sectoral antennas with highly directional
parabolic dishes (each dish refers to a sector) are used
for frequency reuse. The available bandwidth now is
shared between a group of users, and the cost for each
subscriber is reduced.
6. WiMAX as a Metro-Access Deployment
Option
WiMAX is a worldwide certification addressing
interoperability across IEEE 802.16 standards-based
products. The IEEE 802.16 standard with specific
revisions addresses two usage models:
Fixed
Portable
6.1 Fixed
The IEEE 802.16-2004 standard (which revises and
replaces IEEE 802.16a and 802.16REVd versions) is
designed for fixed-access usage models. This standard
may be referred to as fixed wireless
because it uses a mounted antenna at the
subscriber's site.
6.2 Portable
The IEEE 802.16e standard is an amendment to the
802.16-2004 base specification and targets the mobile
market by adding to the standard portability and the ability
for mobile clients with IEEE 802.16e adapters to connect
directly to the WiMAX network. The 802.16e
standard is expected to be ratified in early 2005.
7. WiMAX Physical Layer
The WiMAX physical layer is based on orthogonal
frequency division multiplexing. OFDM is the
transmission scheme of choice to enable high-speed
data, video, and multimedia communications and is
used by a variety of commercial broadband systems,
including DSL, Wi-Fi, Digital Video Broadcast-
Handheld (DVB-H), and MediaFLO, besides WiMAX.
OFDM is an elegant and efficient scheme for high data
rate transmission in a non-line-of-sight or multipath
radio environment.
7.1 OFDM Technology
Orthogonal frequency division multiplexing (OFDM)
technology provides operators with an efficient means
to overcome the challenges of NLOS propagation.
OFDM is based on the traditional frequency division
multiplexing (FDM), which enables simultaneous
transmission of multiple signals by separating them
into different frequency bands (subcarriers) and
sending them in parallel. In FDM, guard bands are
needed to reduce the interference between different
frequencies, which causes bandwidth wastage.
Therefore, it is not a spectrum-efficient and cost-effective
solution. However, OFDM is a more
spectrum-efficient method that removes all the guard
bands but keeps the modulated signals orthogonal to
mitigate the interference level.
Figure: Comparison between FDM and OFDMA
As shown in the figure, the required bandwidth in
OFDM is significantly decreased by spacing multiple
modulated carriers closer until they are actually
overlapping. OFDM uses the fast Fourier transform (FFT)
and inverse FFT to convert serial data to multiple
channels. The FFT size is 256, which means a total
number of 256 subchannels (carriers) are defined for
OFDM. In OFDM, the original signal is divided into
256 subcarriers and transmitted in parallel. Therefore,
OFDM is referred to as a multicarrier modulation
scheme. Compared to single-carrier schemes, OFDM is
more robust against multipath propagation delay owing
to the use of narrower subcarriers with low bit rates
resulting in long symbol periods. A guard time is
introduced at each OFDM symbol to further mitigate
the effect of multipath delay spread. The WiMAX
OFDM waveform offers the advantage of being able to
operate with the larger delay spread of the NLOS
environment. By virtue of the OFDM symbol time and
use of a cyclic prefix, the OFDM waveform eliminates
the inter-symbol interference (ISI) problems and the
complexities of adaptive equalization. Because the
OFDM waveform is composed of multiple narrowband
orthogonal carriers, selective fading is localized to a
subset of carriers that are relatively easy to equalize.
An example is shown below as a comparison between
an OFDM signal and a single-carrier signal, with the
information being sent in parallel for OFDM and in
series for single carrier.
The ability to overcome delay spread, multi-path, and
ISI in an efficient manner allows for higher data rate
throughput. As an example it is easier to equalize the
individual OFDM carriers than it is to equalize the
broader single carrier signal.
For all of these reasons recent international standards
such as those set by IEEE 802.16, ETSI BRAN, and
ETRI, have established OFDM as the preferred
technology of choice.
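To illustrate the OFDM idea described above, the following Python sketch builds one 256-subcarrier OFDM symbol with an IFFT and prepends a cyclic prefix as the guard time; the QPSK payload and the prefix length are assumptions made for the example, not the exact WiMAX numerology.

import numpy as np

N_FFT = 256                 # 256 subcarriers, as in fixed WiMAX OFDM
CP_LEN = N_FFT // 8         # assumed cyclic-prefix (guard) length

# Assumed payload: random QPSK symbols, one per subcarrier.
bits = np.random.randint(0, 2, size=(N_FFT, 2))
qpsk = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# The IFFT converts the parallel subcarrier symbols into one time-domain OFDM symbol.
time_symbol = np.fft.ifft(qpsk) * np.sqrt(N_FFT)

# Cyclic prefix: copy the tail of the symbol to its front to absorb multipath delay spread.
tx_symbol = np.concatenate([time_symbol[-CP_LEN:], time_symbol])

# Receiver side: drop the prefix and apply an FFT to recover the subcarrier symbols.
rx = np.fft.fft(tx_symbol[CP_LEN:]) / np.sqrt(N_FFT)
print("max recovery error:", np.max(np.abs(rx - qpsk)))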
7.2 OFDM Parameters in
WiMAX
As mentioned previously, the fixed and mobile
versions of WiMAX have slightly different
implementations of the OFDM physical layer. Fixed
WiMAX, which is based on IEEE 802.16-2004, uses a
256-point FFT-based OFDM physical layer. Mobile
WiMAX, which is based on the IEEE 802.16e-2005
standard, uses a scalable OFDMA-based physical
layer. In the case of mobile WiMAX, the FFT size can
vary from 128 to 2,048.
7.2.1 Fixed WiMAX OFDM-PHY:
For this version the FFT size is fixed at 256, of which 192
subcarriers are used for carrying data, 8 are used as pilot
subcarriers for channel estimation and synchronization
purposes, and the rest are used as guard band subcarriers.
7.2.2 Mobile WiMAX OFDMA-PHY: In
Mobile WiMAX, the FFT size is scalable from 128 to
2,048. Here, when the available bandwidth increases,
the FFT size is also increased such that the subcarrier
spacing is always 10.94 kHz.
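A small numeric sketch (Python) of that scaling rule, approximating channel bandwidth as FFT size times subcarrier spacing; the bandwidth values and the selection rule are illustrative assumptions, not the exact 802.16e tables.

SUBCARRIER_SPACING_HZ = 10.94e3   # kept constant in scalable OFDMA

def fft_size_for_bandwidth(bandwidth_hz):
    """Pick the smallest power-of-two FFT size (128..2048) whose span
    covers the channel bandwidth at the fixed subcarrier spacing.
    This is a simplified illustration, not the exact standard mapping."""
    for n in (128, 256, 512, 1024, 2048):
        if n * SUBCARRIER_SPACING_HZ >= bandwidth_hz:
            return n
    return 2048

for bw_mhz in (1.25, 2.5, 5.0, 10.0, 20.0):
    n = fft_size_for_bandwidth(bw_mhz * 1e6)
    print(f"{bw_mhz:5.2f} MHz channel -> {n}-point FFT, "
          f"spacing {SUBCARRIER_SPACING_HZ/1e3:.2f} kHz")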
7.3 Sub Channelization OFDMA
Sub Channelization in the uplink is an option within
WiMAX. Sub channeling enables the link budget to be
balanced such that the system gains are similar for both
the up and down links. Sub channeling concentrates the
transmit power into fewer OFDM carriers;this is what
increases the system gain that can either be used to
extend the reach of the system, overcome the building
penetration losses, and or reduce the power
consumption of the CPE. The use of sub
channeling is further expanded in
orthogonal frequency division multiple
access (OFDMA) to enable a more flexible
use of resources that can support nomadic
or mobile
operation.
7.4 MAC-Layer Overview
The primary task of the WiMAX MAC layer is to
provide an interface between the higher transport layers
and the physical layer. The MAC layer takes packets
from the upper layer, called MAC
service data units (MSDUs), and organizes them into
MAC protocol data units (MPDUs) for transmission
over the air. For received transmissions, the MAC layer
does the reverse. The IEEE 802.16-2004 and IEEE
802.16e-2005 MAC design includes a convergence sub
layer that can interface with a variety of higher-layer
protocols, such as ATM, TDM Voice, Ethernet, IP, and
any unknown future protocol.
7.5 Power Control
Power control algorithms are used to improve the
overall performance of the system. They are implemented
by the base station sending power control information
to each of the CPEs to regulate the transmit power
level so that the level received at the base station is at a
predetermined level. In a dynamically changing fading
environment, this predetermined performance level
means that the CPE only transmits enough power to
meet this requirement. The converse would be that the
CPE transmit level is based on worst-case
conditions. Power control reduces the overall
power consumption of the CPE and the potential
interference with other co-located base stations. For
LOS, the transmit power of the CPE is approximately
proportional to its distance from the base station; for
NLOS, it is also heavily dependent on the clearance and
obstructions.
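A toy Python sketch of one such closed-loop adjustment; the target level, step size and power limits are assumptions made only for illustration.

TARGET_RX_DBM = -60.0     # assumed predetermined level at the base station
STEP_DB = 1.0             # assumed correction step per control message

def power_control_update(cpe_tx_dbm, measured_rx_dbm,
                         min_tx_dbm=-10.0, max_tx_dbm=23.0):
    """Return the new CPE transmit power after one control iteration,
    so the CPE only transmits enough power to meet the target at the BS."""
    error_db = TARGET_RX_DBM - measured_rx_dbm
    if abs(error_db) < STEP_DB / 2:            # close enough to the target level
        correction = 0.0
    else:
        correction = STEP_DB if error_db > 0 else -STEP_DB
    return min(max_tx_dbm, max(min_tx_dbm, cpe_tx_dbm + correction))

# Example: a fixed 80 dB path loss; the loop settles at the power that hits the target.
tx = 0.0
for _ in range(40):
    rx_at_bs = tx - 80.0
    tx = power_control_update(tx, rx_at_bs)
print("settled CPE transmit power:", tx, "dBm")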
7.6 Adaptive Modulation
Adaptive modulation allows the WiMAX system to
adjust the signal modulation scheme depending on the
signal-to-noise ratio (SNR) condition of the radio
link. When the radio link is of high quality, the highest
modulation scheme is used, giving the system more
capacity. During a signal fade, the WiMAX system can
shift to a lower modulation scheme to maintain the
connection quality and link stability. This feature
allows the system to overcome time-selective fading.
The key feature of adaptive modulation is that it
increases the range that a higher modulation scheme
can be used over, since the system can flex to the
actual fading conditions, as opposed to having a fixed
scheme that is budgeted for the worst case conditions.
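A minimal Python sketch of SNR-driven adaptive modulation; the schemes and the threshold values in the table are illustrative assumptions, not figures taken from the standard.

# (modulation, nominal bits per symbol after coding, assumed minimum SNR in dB)
MODULATION_TABLE = [
    ("64-QAM 3/4", 4.5, 21.0),
    ("16-QAM 3/4", 3.0, 15.0),
    ("QPSK 3/4",   1.5,  9.0),
    ("BPSK 1/2",   0.5,  3.0),
]

def select_modulation(snr_db):
    """Pick the highest-capacity scheme whose SNR requirement is met;
    during a fade the link drops to a more robust scheme to stay connected."""
    for name, bits, min_snr in MODULATION_TABLE:
        if snr_db >= min_snr:
            return name, bits
    return "no link", 0.0

for snr in (25, 16, 10, 5, 1):
    print(snr, "dB ->", select_modulation(snr)[0])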
7.7 Error Correction Techniques
Error correction techniques have been incorporated
into WiMAX to reduce the system signal-to-noise ratio
requirements. Strong Reed-Solomon FEC,
convolutional encoding, and interleaving algorithms
are used to detect and correct errors to improve
throughput. These robust error correction techniques
help to recover errored frames that may have been lost
due to frequency selective fading or burst errors.
Automatic repeat request (ARQ) is used to correct
errors that cannot be corrected by the FEC, by having
the errored information resent. This significantly
improves the bit error rate (BER) performance for a
similar threshold level.
8. Competing technologies
Within the marketplace, WiMAX's main competition
comes from widely deployed wireless systems with
overlapping functionality such as UMTS and
CDMA2000, as well as a number of Internet-oriented
systems such as HIPERMAN and WiBro. Both of the
two major 3G systems, CDMA2000 and UMTS,
compete with WiMAX. Both offer DSL-class Internet
access, in addition to phone service. UMTS has also
been enhanced to compete directly with WiMAX in the
form of UMTS-TDD, which can use WiMAX-oriented
spectrum, and it provides a more consistent (lower
bandwidth at peak) user experience than
WiMAX (Figure). Moving forward, similar air
interface technologies to those used by WiMAX are
being considered for the 4G evolution of UMTS.
8.1 WiBro
WiBro (wireless broadband) is an Internet
technology being developed by the Korean telecom
industry (Figure). In February 2002, the Korean
government allocated 100 MHz of electromagnetic
spectrum in the 2.3-GHz band, and in late 2004, WiBro
Phase 1 was standardized by the TTA
(Telecommunications Technology Association) of
Korea. WiBro is the newest variety of mobile wireless
broadband access. It is based on the same IEEE 802.16
standard as WiMAX but is designed to maintain
connectivity on the go, tracking a receiver at speeds of
up to 37 mi per hr (60 km/hr). WiMAX is the current
standard in the United States, offering wireless Internet
connectivity to mobile users at fixed ranges of up to 31
mi
(50 km) from the transmitting base. However, it is not
designed to be used while the receiver is in motion.
WiBro can be thought of as mobile WiMAX, though
the technology and its exact specifications will change
as it undergoes refinements throughout its preliminary
stages.
9. Advantages over Wi-Fi
The WiMAX specification provides
symmetrical bandwidth over many kilometers of
range, with stronger encryption (TDES or AES) and
typically less interference. Wi-Fi is short range
(approximately tens of metres), has WEP or WPA
encryption, and suffers from interference, as in
metropolitan areas where there are many users. Wi-Fi
hotspots are typically backhauled over ADSL in most
coffee shops; therefore Wi-Fi access is typically highly
contended and has poor upload speeds between the
router and the Internet. WiMAX provides connectivity between
network endpoints without the need for direct line of
sight in favourable circumstances. The non-line-of-sight
(NLOS) propagation performance requires the
.16d or .16e revisions, since the lower frequencies are
needed. It relies upon multi-path signals, somewhat in
the manner of 802.11n.
9.1 Benefits of WiMAX
Component Suppliers:
- Assured wide market acceptance of developed components
- Lower production costs due to economies of scale
- Reduced risk due to interoperability
Equipment Manufacturers:
- Stable supply of low-cost components and chips
- Engineering development efficiencies
- Lower production costs due to economies of scale
Operators and Service Providers:
- Lower CAPEX with lower-cost base stations, customer
premises equipment (CPE), and network deployment costs
- Lower investment risk due to freedom of choice among
multiple vendors and solutions
- Improved operator business case with lower OPEX
End Users:
- Lower subscriber fees
- Portability of terminals when moving locations/networks
from WiMAX operator A to operator B
- Lower service rates over time due to cost efficiencies
in the delivery chain
- Wider choice of terminals, enabling cost-performance
analysis
10. Limitations
A commonly-held misconception is that WiMAX will
deliver 70 Mbit/s over 50 km. In reality, WiMAX can
do one or the other: operating over the maximum range
(50 km) increases the bit error rate and thus forces a
lower bitrate. Lowering the range allows a device to
operate at higher bitrates. Typically, fixed WiMAX
networks have a higher-gain directional antenna
installed near the client (customer), which results in
greatly increased range and throughput. Mobile
WiMAX networks are usually made of indoor
"customer premises equipment" (CPE) such as desktop
modems, laptops with integrated Mobile WiMAX or
other Mobile WiMAX devices. Mobile WiMAX
devices typically have an omni-directional antenna,
which is of lower gain compared to directional
antennas but more portable. In practice, this means
that in a line-of-sight environment with a portable
Mobile WiMAX CPE, speeds of 10 Mbit/s at 10 km
could be delivered. However, in urban environments
they may not have line-of-sight and therefore users may
only receive 10 Mbit/s over 2 km. Higher-gain
directional antennas can be used with a Mobile
WiMAX network with range and throughput benefits
but the obvious loss of practical mobility.
11. Future of WiMAX
11.1 The IEEE 802.20 Standard
The IEEE 802.20 standard is a broadband wireless
networking technology that is being standardized for
deployment by mobile communications service
providers, in portions of their licensed spectrum. The
capacity of 802.20 is projected to be 2 Mbps per user,
and its range is comparable to 3G cellular
technologies, namely, up to 5 km. More typical
deployments will be in the neighborhood of 1 to 3 km.
Finalization of the 802.20 standard is not expected
soon. The 802.20 standard has been under development
since late 2002, but the going has been slow, to say the
least. 802.20 and 802.16e, the mobile WiMAX
specification, appear similar at first glance but differ in
the frequencies they will use and the technologies they
are based on. Standard 802.20 will operate below 3.5
GHz, whereas mobile WiMAX will work within the 2-
GHz to 6-GHz bands. Further, as the name
suggests, 802.16e is based on WiMAX, with the goal of
having WiMAX transmitters able to support both
fixed and mobile connections. Although the 802.20
group will be back at work later, the 802.20 technology
is alluring, with promises of low-latency 1-Mbps
connections being sustained even at speeds of up to
150 mph, but we are going to have to wait a couple of
years for it.
12. CONCLUSIONS
It is expected that WiMAX will become the dominant
standard for Wireless MAN networks in the world
market, at least in fixed broadband networks. A brief
comparison between 802.16 and 802.16a has been
provided, and the advantage of using adaptive
modulation has also been shown. It has been explained
that the key difference between the initial 802.16
standard and 802.16a lies in the modulation scheme.
The importance of OFDM has also been
analyzed, and this becomes an important feature that
makes the difference between the 802.16 and 802.16a
standards. More about this topic can be found in the
literature provided. The PHY and MAC layers of WiMAX
have been discussed, and possible future applications
have been discussed as well. The WiMAX mobility
standard is the next step. However, it will have its
competition too, with the 802.20 standard that in short
is called Mobility-Fi. We will have to wait for the
products and their performance in real environments
in order to evaluate what the standard addresses and
the real performance of these products. There are
already prototypes and also development kits using the
WiMAX standard that are used for education and
mainly for research. Nowadays, there are also some
products in the market that already implement the
WiMAX standard presented here.
Market is the key word to take into account:
products will have to be delivered according to the
market needs, and those for end users will have to be
extremely easy to install. Experience from DSL and
cable modem services shows this drawback. Of
course, in addition to being easy to install and providing
good technical features, these products have to be
low-cost or at least provide a clear advantage over
other technologies that are, at this moment, already
mature in the market, like xDSL and cable modem.
13. References
1. IEEE 802.16-2001, IEEE standard for local and
metropolitan area networks Part 16: Air interface
for fixed broadband wireless access systems, 6
December 2001.
2. IEEE 802.16a-2001, IEEE standard for local and
metropolitan area networks Part 16: Air interface
for fixed broadband wireless access systems
Amendment
2: Medium access control modifications and additional
physical layer specifications for 2 11 GHz, 1 April
2003.
3. http://www.wirelessdesignasia.com/article.asp?id=2049.
4. http://www.intel.com/netcomms/events/wimax.htm.
5. http://www.btimes.com.my/Current_News/BT/Monday/Column/BT58 3229.txt/Article/
[dated August 24, 2006].
6. www.wimaxforum.org.
7. http://www.qoscom.de/documentation/51_WiMAX%20Summit%20paris%20-%20may04.pdf.
8. The Implications of WiMax for Competition and
Regulation, OECD document [dated March 2, 2006].
9. http://ww.sfgate.com/cgibin/article.cgi?f=/c/a/2006/12/18/BUG8NN0HIT1.DTL
[dated December 18, 2006].
10. http://electronicxtreme.blogspot.com/2006/12/wimax.html
[dated December 11, 2006].
Key Terminology
BS Base station
DSL Digital subscriber line
ETSI European Telecommunications Standards
Institute
FCC Federal Communications Commission
IEEE Institute of Electrical and Electronics
Engineers
IP Internet Protocol
LAN Local area network
MAC address Media access control address. This
address is a computer's unique hardware number.
MAN Metropolitan area network
OFDM Orthogonal frequency division
multiplexing
OFDMA Orthogonal frequency division-multiple
access
P2P Point-to-point
P2MP Point-to-multi-point
PAN Personal area network
PHY Physical layer
PoP Point of presence
QoS Quality of service
RF Radio frequency
SS Subscriber station
UWB Ultra-wide band
VoIP Voice over Internet Protocol
WAN Wide area network
Wi-Fi Wireless fidelity. Used generically when
referring to any type of 802.11 network, whether 802.11b,
802.11a, dual-band, and so on.
WiMAX Worldwide Interoperability for Microwave
Access
WISP Wireless Internet service provider
WLAN Wireless local area network
WMAN Wireless metropolitan area network
WWAN Wireless wide area networks
A
TECHNICAL PAPER ON
4G MOBILE COMMUNICATION
S.V.SAIKRISHNA B.SANTOSH KUMAR
III Year , IT III Year , IT
G.I.E.T G.I.E.T
RAJAHMUNDRY RAJAHMUNDRY
[email protected] [email protected]
ABSTRACT
With the rapid development of communication networks, it is expected that fourth
generation mobile systems will be launched within decades. Fourth generation (4G)
mobile systems focus on seamlessly integrating the existing wireless technologies
including GSM, wireless LAN, and Bluetooth. This contrasts with third generation (3G),
which merely focuses on developing new standards and hardware. 4G systems will
support comprehensive and personalized services providing stable system performance
and quality service. This paper gives the details about the need for mobile communication
and its development in various generations. In addition, the details about the working of
4G mobile communication are given. Finally, it narrates how 4G mobile
communication will bring a new level of connectivity and convenience in communication.
1. INTRODUCTION
Communication is one of the important
areas of electronics and has always been a
focus for the exchange of information
among parties at locations physically
apart. There may be different modes of
communication, and the communication
between two points may be wired or wireless.
Initially, mobile communication was limited to
one pair of users on a single channel pair.
Mobile communication has undergone
many generations. The first generation
of RF cellular systems used analog
technology. The modulation was FM and
the air interface was FDMA. The second
generation was an offshoot of the Personal
Land Mobile Telephone System
(PLMTS). It used Gaussian Minimum Shift
Keying (GMSK) modulation. All these systems
had practically no technology in
common, and their frequency bands, air
interface protocols, data rates, numbers of
channels and modulation techniques were
all different. The dynamic Quality of
Service (QoS) parameter was always on
the top priority list. Higher transmission
bandwidth and higher efficiency of usage
had to be targeted. Against this background,
the development of 3G mobile
communication systems took place. In
these, Time Division Duplex (TDD) mode
technology using 5 MHz channels was
used. This had no backward
compatibility with any of the
predecessors. But 3G appeared to be a
somewhat unstable technology due to
lack of standardization, licensing
procedures and terminal and service
compatibility. The biggest single inhibitor of
any new technology in mobile
communication is mobile terminal
availability in the required quantity, with
the highest QoS and better battery life. The
future of mobile communication is
FAMOUS (Future Advanced Mobile
Universal Systems); Wideband TDMA and
Wideband CDMA are some of the candidate
technologies. The data rates targeted are
20 Mbps. That will be 4G in
mobile communication. 4G must be
hastened, as some of the video
applications cannot be contained within
3G.
2.DEVELOPMENT OF THE
MOBILE COMMUNICATION
The communication industry is
undergoing cost saving programs
reflected by slowdown in the upgrade or
overhaul of the infrastructure, while
looking for new ways to provide third
generation (3G) like services and
features with the existing infrastructures.
This has delayed the large-scale
development of 3G networks, and given
rise to talk of 4G technologies. Second
generation (2G) mobile systems were
very successful in the previous decade.
Their success prompted the development
of third generation (3G) mobile systems.
While 2G systems such as GSM and IS-95
were designed to carry speech
and low bit-rate data, 3G systems were
designed to provide higher data-rate
services. During the evolution from 2G
to 3G, a range of wireless systems,
including GPRS, IMT-2000, Bluetooth,
WLAN, and HiperLAN, have been
developed. All these systems were
designed independently, targeting
different service types, data rates, and
users. As these systems all have their
own merits and shortcomings, there is no
single system good enough to replace all
the other technologies. Instead of putting
effort into developing new radio interfaces and
technologies for 4G systems, building 4G
systems by integrating the existing
technologies is believed to be a
more feasible option.
3. ARCHITECTURAL
CHANGES IN 4G
TECHNOLOGY
In 4G architecture, focus is on the aspect
that multiple networks are able to
function in such a way that interfaces are
transparent to users and services.
Multiplicities of access and service
options are going to be other key parts of
the paradigm shift. In the present
scenario and with the growing popularity
of Internet, a shift is needed to switch
over from circuit switched mode to
packet switched mode of transmission.
However, in 3G networks and a few others,
packet switching is employed only for
delay-insensitive data transmission services.
Assigning packets to virtual channels
and then to multiple physical channels
would be possible when access options
are expanded, permitting better statistical
multiplexing. One would be looking for
universal access and ultra connectivity,
which could be enabled by:
(a) Integration of wireless networks with
wireline networks.
(b) Emergence of a true IP over the
air technology.
(c) Highly efficient use of wireless
spectrum and resources.
(d) Flexible and adaptive systems
and networks.
4. SOME KEY FEATURES OF
4G TECHNOLOGY
Some key features (mainly from the
users' point of view) of 4G networks are:
1. High usability: anytime, anywhere,
and with any technology
2. Support for multimedia services at
low transmission cost
3. Personalization
4. Integrated services
First, 4G networks are all IP based
heterogeneous networks that allow users
to use any system at any time and
anywhere. Users carrying an integrated
terminal can use a wide range of
applications provided by multiple
wireless networks.
Second, 4G systems provide not only
telecommunications services, but also
data and multimedia services. To support
multimedia services high data-rate
services with good system reliability will
be provided. At the same time, a low
per-bit transmission cost will be
maintained.
Third, personalized service will be
provided by the new generation network.
Finally, 4G systems also provide
facilities for integrated services. Users
can use multiple services from any
service provider at the same time.
To migrate current systems to 4G with
the features mentioned above, we have
to face a number of challenges. Some of
them are discussed below.
4.1 MULTIMODE USER
TERMINALS
In order to use the large variety of services
and wireless networks in 4G systems,
multimode user terminals are essential, as
they can adapt to different wireless
networks by reconfiguring themselves.
This eliminates the need to use multiple
terminals (or multiple hardware
components in a terminal). The most
promising way of implementing
multimode user terminals is to adopt the
software radio approach. Figure 1 shows
the design of an ideal software radio
receiver.
Figure 1: An ideal software radio receiver (antenna -> BPF -> LNA -> ADC -> baseband DSP)
The analog part of the receiver consists
of an antenna, a band pass filter (BPF),
and a low noise amplifier (LNA). The
received analog signal is digitized by the
analog to digital converter (ADC)
immediately after the analog processing.
The processing in the next stage (usually
still analog processing in the
conventional terminals) is then
performed by a reprogrammable base
band digital signal processor (DSP). The
Digital Signal Processor will process the
digitized signal in accordance with the
wireless environment.
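A toy Python sketch of this idea, in which the per-standard processing happens after digitization in reconfigurable DSP code; the 12-bit ADC and the placeholder processing chains are assumptions for illustration, not real air-interface parameters.

import numpy as np

def software_radio_receive(analog_samples, standard="GSM"):
    """Digitize early, then let reconfigurable DSP code do the per-standard work.
    `analog_samples` stands in for the output of the BPF + LNA front end;
    the per-standard processing chains below are placeholders, not real specs."""
    # ADC: quantize the analog waveform to 12 bits (assumed resolution).
    digitized = np.round(np.clip(analog_samples, -1, 1) * 2047) / 2047.0

    # Reprogrammable baseband DSP: pick a processing chain for the active standard.
    chains = {
        "GSM":  lambda x: np.convolve(x, np.ones(8) / 8, mode="same"),  # placeholder filter
        "WLAN": lambda x: np.fft.fft(x.reshape(-1, 64), axis=1),        # placeholder OFDM demux
    }
    return chains[standard](digitized)

# Reconfiguring the terminal amounts to selecting a different DSP chain in software.
samples = 0.1 * np.random.randn(1024)
gsm_out = software_radio_receive(samples, standard="GSM")
wlan_out = software_radio_receive(samples, standard="WLAN")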
4.2. TERMINAL MOBILITY
In order to provide wireless services at
any time and anywhere, terminal
mobility is a must in 4G infrastructures.
Terminal mobility allows a mobile client to
roam across the boundaries of wireless
networks. There are two main issues in
terminal mobility: location management
and handoff management. With
location management, the system tracks
and locates a mobile terminal for
possible connection. Location
management involves handling all the
information about the roaming terminals,
such as original and currently located
cells, authentication information, and
Quality of Service (QoS) capabilities.
On the other hand, handoff management
maintains ongoing communications
when the terminal roams. Mobile IPv6
(MIPv6) is a standardized IP-based
mobility protocol for IPv6 wireless
systems. In this design, each terminal
has an IPv6 home address. Whenever the
terminal moves outside the local
network, the home address becomes
invalid, and the terminal obtains a new
IPv6 address (called a care-of address) in
the visited network. A binding between
the terminal's home address and care-of
address is registered with its home agent
in order to support continuous
communication.
Figure 2: Vertical and horizontal handoff of a mobile terminal (UMTS, GSM and WLAN coverage)
Figure 2 shows an example of horizontal
and vertical handoff. Horizontal handoff
is performed when the terminal moves
from one cell to another cell within the
same wireless system. Vertical handoff,
however, handles the terminal's
movement between two different wireless
systems (e.g., from WLAN to GSM).
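As a small illustration of this distinction (purely hypothetical data structures, not an actual handoff protocol), the Python sketch below classifies a terminal's move as horizontal or vertical depending on whether the old and new cells belong to the same wireless system.

from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    system: str          # e.g. "GSM", "UMTS", "WLAN"

def handoff_type(old_cell: Cell, new_cell: Cell) -> str:
    """Horizontal handoff: the new cell is in the same wireless system.
    Vertical handoff: the terminal moves between two different systems."""
    if old_cell.system == new_cell.system:
        return "horizontal handoff"
    return "vertical handoff"

print(handoff_type(Cell("wlan-7", "WLAN"), Cell("gsm-42", "GSM")))   # vertical
print(handoff_type(Cell("gsm-42", "GSM"), Cell("gsm-43", "GSM")))    # horizontal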
4.3 PERSONAL MOBILITY
In addition to terminal mobility, personal
mobility is another concern in mobility
management. Personal mobility
concentrates on the movement of users
instead of users' terminals, and involves
the provision of personal
communications and personalized
operating environments.
A personal operating environment, on
the other hand, is a service that enables
adaptable service presentations in order
to fit the capabilities of the terminal in
use regardless of network type.
Currently, there are several frameworks
on personal mobility found in the
literature. Mobile-agent-based
infrastructure is one widely studied
solution. In this infrastructure, each user
is usually assigned a unique identifier
and served by some personal mobile
agents (specialized computer
programs running on some servers).
These agents act as intermediaries
between the user and the Internet. A user
also belongs to a home network that has
servers with the updated user profile
(including the current location of the
user's agents, the user's preferences, and
descriptions of the currently used devices).
When the user moves from his/her home
network to a visiting network, his/her
agents will migrate to the new network.
For example, when somebody makes a
call request to the user, the caller's agent
first locates the user's agent by making a
location request to the user's home network.
By looking up the user's profile, his/her
home network sends back the location of
the user's agent to the caller's agent. Once
the caller's agent identifies the user's
location, the caller's agent can directly
communicate with the user's agent.
Different agents may be used for
different services.
4.4 SECURITY AND
PRIVACY
Security requirements of 2G and 3G
networks have been widely studied in
the literature. Different standards
implement their security for their unique
security requirements. For
example, GSM provides highly secured
voice communication among users.
However, the existing security schemes
for wireless systems are inadequate for
4G networks. The key concern in
security designs for 4G networks is
flexibility. As the existing security
schemes are mainly designed for specific
services, such as voice service, they may
not be applicable to 4G environments
that will consist of many heterogeneous
systems. Moreover, the key sizes and
encryption and decryption algorithms of
existing schemes are also fixed. They
become inflexible when applied to
different technologies and devices (with
varied capabilities, processing powers,
and security needs). As an example,
Tiny SESAME is a lightweight
reconfigurable security mechanism that
provides security services for multimode
or IP-based applications in 4G networks.
5. CONCLUSIONS
The future of mobile communication is
FAMOUS-Future Advanced Mobile
Universal Systems. The data rates
targeted are 20 Mbps. That will be the
FOURTH GENERATION (4G) of
mobile communication technology. 4G
must be hastened, as some of the video
applications cannot be contained within
3G. This paper highlights that current
systems must be implemented with a
view to facilitating seamless integration
into the 4G infrastructure. In order to cope
with the heterogeneity of network
services and standards, intelligence close
to the end system is required to map user
application requests onto the network
services that are currently available. This
requirement for horizontal
communication between different access
technologies has been regarded as a key
element for 4G systems. Finally, this
paper describes how 4G mobile
communication can be used in any
situation where an intelligent solution is
required for interconnection of different
clients to networked applications over
heterogeneous wireless networks.
BIBLIOGRAPHY
1. Mobile and Personal Communication Systems and Services
---Raj Pandya
2. Emerging Trends in Mobile Communication
---IETE Technical Review Magazine
3. Technology Advances for 3G and Beyond
---IEEE Communications Magazine
4. Challenges in the migration to 4G mobile systems
---IEEE Communications Magazine
HYPERSONIC SOUND
(WHEREVER YOU WANT IT)
AUTHORS
N PAVAN KUMAR RAO S K IMAM BASHA
06L21A0481 06L21A04987
III B TECH, ECE III B TECH, ECE
Cell: 99858181372 9391648654
E-mail:
[email protected] [email protected]
VAAGDEVI INSTITUTE OF TECHNOLOGY & SCIENCES
PRODDATUR-516360
KADAPA (DIST)
ANDHRA PRADESH
Abstract
The annoying loudspeakers may soon be replaced with the revolutionary
hypersonic sound system. It will beam sound wherever you want it without disturbing
others.
Hi-fi speakers range from piezoelectric tweeters to various kinds of mid-range
speakers and woofers, which generally rely on circuits and large enclosures to produce
quality sound, whether it is a dynamic, electrostatic or some other transducer-based design.
The loudspeakers available in the market have one thing in common: they directly
move the air to create the audible sound waves. Engineers have struggled for nearly a
century to produce a speaker design with the ideal 20 Hz-20,000 Hz capability of human
hearing.
Conventional loudspeakers suffer from amplitude distortion, harmonic distortion,
crossover distortion, cone resonance, etc. Some aspects of their mechanical nature are mass,
magnetic structure, enclosure design and cone construction.
These distortions, arising from the mechanical nature of loudspeakers, affect the quality of
their sound. In other words, in spite of the advancement in electronics, speaker technology
still has its limits.
Hypersonic Sound Technology provides a significant departure from conventional
speakers and a remarkable approach to the reproduction of sound. It produces sound in the
air indirectly, and provides greater control of sound placement and volume.
How It Evolved?
Researchers developing
underwater sonar techniques in the
1960s originally pioneered the
technique of using a nonlinear
interaction of high-frequency waves to
generate low- frequency waves. In
1975, an article cited the nonlinear
effects occurring in air.
Over the next two decades,
several large companies including
Panasonic, NC Denon and Ricoh
attempted to develop a loudspeaker
based on this principle. They were
successful in producing some sort of
sound, with extremely high levels of
distortion (>50% THD). This
drawback caused the total
abandonment of the technology by the
end of 1980s.
In the 1990s, Woody Norris, a
65-year-old West Coast maverick with
no college education beyond a stint as a
radar technician in the US Air Force,
solved the parametric problems of this
technology with his breakthrough
approach.
The Technology
Hypersonic sound technology
works by emitting harmless highfrequency
ultrasonic tones that we
cannot hear. These tones use the
property of air to create new tones that
are within the range of human hearing.
The result is an audible sound. The
acoustical sound wave is created
directly in the air by the air molecules, within the
frequency spectrum we can hear.
In a hypersonic sound system,
there are no voice coils, cones, crossover
networks or enclosures. The result
is sound with a potential purity and
fidelity never attained
before. Sound quality is no longer tied
to speaker size. The hypersonic sound
system holds the promise of replacing
conventional speakers in homes, movie
theatres, and automobiles everywhere.
Range Of Hearing
The human ear is sensitive to
frequencies ranging from 20Hz-
20,000Hz. If the range of human
hearing is expressed as a percentage of
shifts from the lowest audio frequency
to the highest, it spans a range of
100,000 percent. No single
loudspeaker element can operate
efficiently over such a wide range of
frequencies. To deal with this matter,
multiple transducers and crossovers are
necessary.
Using hypersonic sound technology, it
is possible to design a perfect
transducer.
Fig: A man can hear the secret pin
code coming from the Hypersonic
sound box at the ATM.
How Does It Work?
Hypersonic sound uses a
property of air known as
nonlinearity. A normal sound wave is
a small pressure wave that travels
through the air. As the pressure goes
up and down, the nonlinear nature of
the air itself slightly changes the sound
wave.
If there is change in a sound wave, new
sounds are formed within the wave.
Therefore, if we know how the air
affects the sound waves, we can
predict exactly what new frequencies
(sounds) will be added into the sound
wave by the air itself. An ultrasonic
sound wave (beyond the range of
human hearing) can be sent into the air
with sufficient volume to cause the air
to create these new frequencies. Since
we cannot hear the ultrasonic sound,
we only hear the new sounds that are
formed by the non-linear action of the
air.
Fig: At high amplitudes, the speed of sound changes over the course of a
single cycle. The continuous line is a pure sine wave and the dotted line
represents the same waveform after it has propagated through the nonlinear
air for a time.
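A numerical illustration of this effect in Python: a toy quadratic nonlinearity stands in for the real acoustic propagation physics, and the two ultrasonic tones (200 kHz and 201 kHz) are assumed values; their difference tone at 1 kHz appears in the audible band.

import numpy as np

fs = 2_000_000                     # assumed simulation sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)     # 20 ms of signal

# Two inaudible ultrasonic tones; their difference (1 kHz) is audible.
f1, f2 = 200_000, 201_000
ultrasound = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Toy model of the air's nonlinearity: add a small quadratic term.
demodulated = ultrasound + 0.1 * ultrasound ** 2

# Inspect the spectrum below 20 kHz: a component appears at f2 - f1 = 1 kHz.
spectrum = np.abs(np.fft.rfft(demodulated))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
audible = freqs < 20_000
peak = freqs[audible][np.argmax(spectrum[audible][1:]) + 1]   # skip the DC term
print(f"strongest audible component near {peak:.0f} Hz")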
Hypersonic sound technology
precisely provides linear frequency
response with virtually no distortion
associated with conventional speakers.
Physical size no longer defines fidelity.
The faithful reproduction of sound is
freed from bulky enclosures. There are
no woofers, tweeters or crossovers.
Hypersonic sound emitter
An important by product of this
technique is that sound may be
projected to just about any desired
point in the listening environment.
This provides outstanding flexibility,
while allowing for an unprecedented
manipulation of the sound's source
point.
Hypersonic sound technology is
analogous to the beam of light from a
flashlight. If you stand to the side of or
behind the light, you can see the light
only when it strikes a surface. Hypersonic
sound behaves the same way: you can
direct the ultrasonic emitter towards a
hard surface, a wall for instance, and
the listener perceives the sound as
coming from the spot on the wall. The
listener does not perceive the sound as
emanating from the face of the
transducer, but only from the reflection
off the wall.
Block Diagram of the System:
Components of the System:
1. Power supply:
Like all electronics, the
hypersonic sound system works off DC
voltage. A universal switch-mode
power supply is standardized at 48V
for the ultrasonic power amplifier. In
addition, low voltage is used for the
microcontroller unit and other process
management.
2. Audio Signal Processor:
The audio signal is sent to an
electronic signal processor circuit
where equalization, dynamic range
control, distortion control and precise
modulation are performed to produce a
composite ultrasonic waveform. This
amplified ultrasonic signal is sent to
the emitter, which produces a column
of ultrasonic sound that is subsequently
converted into highly directional
audible sound within the air column.
Since ultrasound is highly
directional, the audio sound placement
is precise. At the heart of the system is
a high-precision oscillator in the
ultrasonic region with a variable
frequency ranging from 40 to 50 kHz.
3. Dynamic Double Side band
(DSB) Modulator:
In order to convert the source
program material into ultrasonic
signals, a modulation scheme is
required. In addition, error correction
is needed to reduce distortion without
loss of efficiency. The goal, of course,
is to produce audio in the most
efficient manner while maintaining
acceptably low distortion levels.
We know that for a DSB
system, the modulation index can be
reduced to decrease distortion, but this
comes at the cost of reduced
conversion efficiency. A square-rooted
envelope reference with zero
bandwidth distortion the basis of the
proprietary parametric processor
handles the situation effectively.
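A simplified Python sketch of the trade-off just described; the 48 kHz carrier, the 1 kHz programme tone and the modulation indices are assumed values, and the square-root envelope shown is the common textbook correction for parametric arrays, offered only as an illustration of the idea rather than the proprietary processor.

import numpy as np

fs = 400_000                        # assumed simulation sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)
carrier_hz = 48_000                 # ultrasonic carrier in the 40-50 kHz range

audio = 0.8 * np.sin(2 * np.pi * 1_000 * t)          # assumed 1 kHz programme tone
carrier = np.sin(2 * np.pi * carrier_hz * t)

def dsb_am(audio, carrier, m):
    """Plain double-sideband AM with modulation index m: lowering m reduces
    distortion of the demodulated audio but also reduces conversion efficiency."""
    return (1 + m * audio) * carrier

def sqrt_envelope_am(audio, carrier, m):
    """Square-rooting the envelope pre-distorts the signal so that the
    quadratic demodulation in air returns something closer to the original."""
    return np.sqrt(np.clip(1 + m * audio, 0, None)) * carrier

low_distortion_but_inefficient = dsb_am(audio, carrier, m=0.2)
efficient_with_correction = sqrt_envelope_am(audio, carrier, m=0.9)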
4. Ultrasonic Modulation
Amplifier: High-efficiency
ultrasonic power amplifier amplifies
the carrier frequency with correlation,
responds to reactive power
regeneration and matches the
impedance of the integrated
transducers.
5. Microcontroller:
A dedicated microcontroller
circuit takes care of the functional
management of the system. In the
future version, it is expected that the
whole process like functional
management, signal processing, double
side-band modulation and even switch-mode
power supply would be
effectively taken care of by a single
embedded IC.
6. Transducer Technology:
The most active piezo film is polyvinylidene difluoride (PVDF). This film is commonly used in many industrial and chemical applications.
In order to be useful for
ultrasonic transduction, the raw film
must be polarized or activated. This is
done by one of the two methods. One
method yields a uni-axial film that
changes length along one axis when an
electric field is applied through it. The
other method yields a biaxial film that shrinks/expands along two axes.
Finally, the film needs to have a
conductive electrode material applied
to both sides in order to achieve a
uniform electric field through it.
Piezoelectric films operate as transducers through the expansion and contraction of the X and/or Y axes of the film surface. For use as a hypersonic sound emitter, the film is to be curved or distended. The curving results in expansion and contraction in the Z axis, generating acoustic output.
The music or voice from the
audio source is converted into a highly
complex ultrasonic signal by the signal
processor before being amplified and
emitted into the air by the transducer.
Since the ultrasonic energy is highly
directional, it forms a virtual column of
sound directly in front of the emitter,
much like the light from a flash light.
The converted sound wave
does not spread in all directions like
the sound from a conventional
loudspeaker. Instead, it stays locked
tightly inside the column of ultrasonic
energy. In order to hear the sound,
your ears must be in line with the
column of ultrasound, or you can hear
the sound after it reflects off a hard
surface.
Modes Of Listening:
Hypersonic speakers can be operated
in two modes: 1.Direct, 2.Virtual.
1. Direct Mode:
Direct mode requires a clear
line of approach from the hypersonic
sound system (HSS) unit to the point
where the listener can hear the audio.
To restrict the audio in a specific area,
this method is appropriate.
2. Virtual Mode:
This mode requires an
unbroken line of approach from the
emitter of the HSS, so the emitter is
pointed at the spot where the audio is
to be heard. This requires a surface
suitable for reflecting the HSS. A
virtual sound source creates an illusion
of sound that emanates from a surface
or direction where no physical
loudspeaker is present.
Advantages:
1. Can focus sound only at the
place where you want it.
2. Ultrasonic emitter devices are
thin and flat and do not require a
mounting cabinet.
3. The focused or directed sound
travels much farther in a straight line
than conventional loudspeakers.
4. Dispersion can be controlled
very narrow or wider to cover more
listening area.
5. You can reduce or eliminate
feedback from live microphones.
Disadvantages:
1. The listener has to be in the path of the HSS emitter (or of its reflection) in order to hear the sound.
2. It requires an unbroken line of approach from the emitter of the HSS to the listening point.
3. It is considerably more costly when compared to ordinary loudspeakers.
Applications:
Automobiles: Beam alert signals directly from an announcement device in the dashboard to the driver.
Audio/ Video Conferencing: Project
the audio from a conference in four
different languages, from a single
central device without the need for
headphones.
Paging Systems:
Direct the announcement to the
specific area of interest.
Retail Sales:
Provide targeted advertising directly at
the point of purchase.
Drive Through Ordering:
Intelligible communications directly
with an automobile driver without
bothering the surrounding neighbors.
Safety Officials: Portable bullhorn
type device for communicating with a
specific person in a crowd of people.
Military Applications: Ship-to-ship
communications and ship board
announcements.
Toys: Handheld toys for kids to
secretly communicate across the street
with each other.
Public Announcement: Highly
focused announcements in noisy
environments such as subways,
airports, amusement parks, train
stations & traffic intersections.
Sports: Focus sound into a crowd of
people on a football field and talk only
to a selected few.
Emergency Rescues: Rescuers
communicate with endangered people
far from reach.
Virtual Home Theatre: With
hypersonic, you can eliminate the rear
speakers in a 5.1 setup. Instead, you
create virtual speakers on the back
wall.
Sound Bullets:
Jack the sound level 50 times
the human threshold of pain, and an
offshoot of hypersonic sound
technology becomes a non-lethal
weapon.
Discreet Speakerphone:
With its adjustable reach, a
hypersonic speakerphone would not
disturb your neighbors.
Future of Sound:
Even the best loudspeakers are subject to distortion, and their omnidirectional sound is annoying to people in the vicinity who do not wish to listen.
The HSS holds the promise of replacing conventional speakers. Ultrasonic emitters have very high impedance, which allows low current in the power amplifiers, making them lighter in weight. It is quite certain that the HSS is going to shape the future of sound and will serve our ears with a magical experience.
References:
1. Details of the results are given with citations at "hypersonic effect".
2. Takeda, S. et al. (1992). "Age variation in the upper limit of hearing". European Journal of Applied Physiology 65 (5): 403-408. http://www.springerlink.com/content/m638p784x2112475/. Retrieved on 17 November 2008.
3. "A Ring Tone Meant to Fall on Deaf Ears" (New York Times article).
4. AAPM/RSNA Physics Tutorial for Residents: Topics in US: B-mode US: Basic Concepts and New Technology - Hangiandreou 23 (4): 1019 - RadioGraphics.
5. F. Joseph Pompeii. The use of airborne ultrasonics for generating audible sound beams. Journal of the Audio Engineering Society, 47(9):726-731, 1999.
NANOMOBILE
Presented by
N.Ramya D.N.Sravani
III B.Tech, ECE III B.Tech, ECE
[email protected] [email protected]
SRI VENKATESWARA UNIVERSITY COLLEGE
OF ENGINEERING
TIRUPATI 517 502
ABSTRACT:
This technology has the potential to replace
existing manufacturing methods for integrated
circuits, which may reach their practical limits within the next decade when Moore's Law eventually hits a brick wall
- Physicist Bernard Yurke of Bell
Labs
Nanotechnology is an extremely powerful
emerging technology. Research works are
being carried out on the possibilities of
applying this technology in designing
electronic circuits. This paper throws light on
NANOMOBILE a mobile phone with its
internal circuitry designed on a nanoscale. The
nanomobile is a huge leap towards the advent
of nanotechnology in the field of electronics
and communication. Nanomobile is a perfect
blend of the conventional radio
communication principles and the recent
advancements in nanotechnology. We have dealt with nanolithography (a top-down approach) and carbon nanotubes (a bottom-up approach) for the design and fabrication of the internal circuitry.
The nanomobile can be visualized to be of a
size that is slightly smaller than an i-pod, with
enhanced features like touch screen and
Bluetooth connectivity. Owing to its small size
the nanomobile would find its application both
in commercial market as well as in the field of
defence.
This paper thus projects an innovative idea to replace the existing micro-scale fabrication method, paving the way for compact, robust and technologically enhanced mobile phones with minimal resources and miniature size.
INTRODUCTION:
Millions of people around the world use mobile
phones. They are such great gadgets. These
days, mobile phones provide an incredible array
of functions, and new ones are being added at a
breakneck pace. Mobile phones have become an integral part of our day-to-day life. In today's fast world, a modern man requires mobile phones that are highly compact and that also satisfy all his requirements.
In spite of the manufacturers' efforts to bring out handier mobiles, the sizes of present-day mobiles are still bulky. This can be attributed to the increase in enhancement features. Thus, with the current methods used in design, it is practically impossible to have mobiles that are compact yet have all the required enhancements.
In order to overcome this constraint we propose
a new gadget called the NANOMOBILE. The
unique feature of the nanomobile is that it
employs the principles of nanotechnology in its
design. Nanotechnology basically aims at size
reduction and hence its application to mobiles
would aid in producing handier ones.
INSIDE A MOBILE PHONE:
Now let us take a look at the internal structure of
a mobile phone.
As shown in the figure a mobile phone consists
of the following blocks housed on a printed
circuit board:
. A digital signal processor
. A microprocessor and control logic
. Radio frequency transmitter/receiver
amplifiers
. Analog to digital converter
. Digital to analog converter
. An internal memory
. Radio frequency and power section.
. The digital signal processor:
The DSP is a "Digital Signal Processor" - a
highly customized processor designed to perform
signal manipulation calculations at high speed.
The DSP is rated at about 40 MIPS (Millions of
Instructions per Second) and handles all the
signal compression and decompression.
. The microprocessor and control
logic:
Fig: Internal structure of a mobile phone
The microprocessor and control logic handle all of the housekeeping chores for the keyboard and display, deal with command and control signaling with the base station, and also coordinate the rest of the functions on the board.
. Radio frequency transmitter/receiver
amplifiers:
The RF amplifiers handle signals in and out of the
antenna. Mobile communication involves the travel
of signals through long distances and hence there is a
possibility of the signal being attenuated midway. Hence the RF amplifiers play the important role of boosting the power levels of the signals, so that they can be deciphered at both ends. The figure illustrates the circuit of a class-C power amplifier.
a) Class C amplifier
. ADC/DAC Converters:
The signal has to be converted from analog to digital
at the transmitting end. This task is accomplished by
the analog to digital converter. At the receiving end, the digital signal must be converted back to its analog equivalent. This is done by the digital to analog converter (a toy quantization sketch follows the figures below).
b) DAC
c) ADC
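As a toy illustration of the ADC/DAC step (not the phone's actual converter; the sample rate, bit depth and test tone are assumed values), the sketch below quantizes a waveform to 8 bits and maps the codes back:

import numpy as np

fs, bits = 8000, 8                      # assumed sample rate and resolution
t = np.arange(0, 0.005, 1 / fs)         # 5 ms of signal
analog = np.sin(2 * np.pi * 440 * t)    # "analog" 440 Hz tone in [-1, 1]

levels = 2 ** bits
codes = np.round((analog + 1) / 2 * (levels - 1)).astype(int)    # ADC: sample and quantize
reconstructed = codes / (levels - 1) * 2 - 1                     # DAC: map codes back to a waveform

print("max quantization error:", np.max(np.abs(reconstructed - analog)))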
. Memory:
The memory refers to the internal ROM and RAM
that is used to store and handle the data required by
both the user and the system.
. RF and power section:
The RF and power section handles power
management and recharging and also deals with the
hundreds of FM channels.
CONVENTIONAL METHODS & THEIR
DEMERITS:
Today's mobile phones use the MIC in their internal
circuits. The monolithic integrated circuits are used
to achieve circuits on a smaller scale, the recent
advancement being the microwave monolithic
integrated circuits. These circuits are a combination
of active and passive elements which are fabricated
on a single substrate. The various fabrication
techniques include:
. Diffusion and ion implantation
. Oxidation and film deposition
. Epitaxial growth
. Optical lithography
. Etching and photo resist
. Deposition
As mentioned above, these techniques contribute to the reduction of the circuit size, yet their disadvantage is that this method (MIC) is not effective in shrinking the circuit size to the desired level. The final circuit will be a combination of a large number of substrates, ultimately making the internal circuit bulkier.
NANOTECHNOLOGY, A REMEDY:
Nanotechnology provides an effective replacement
for the conventional monolithic integrated circuit
design techniques. This technology aims at
developing nano sized materials that will be both
compact and robust. One of the branches of
nanotechnology called nanoelectronics deals with the
study of shrinking electronic circuits to nano scale.
Nanoelectronics has two approaches for fabrication
and design of nano sized electronic circuits namely
top-down and bottom-up approach.
TOP-DOWN APPROACH:
Top-down approach refers to the process of arriving
at a smaller end product from a large quantity of raw
material. In this approach a large chunk of raw
material is sliced into thin wafers above which the
circuit elements are drawn on radiation sensitive
films. The unwanted materials are then removed by
the process of etching. In the following section we
project nanolithography as a means to implement
the top-down approach.
NANOLITHOGRAPHY:
Nanolithography using electron beam lithography can pattern small features with 4 nm resolution. It does not require any
photolithography masks or optical alignment.
Electron beam lithography is a great tool for
research and development because of its
versatility and quick design and test cycle time.
The layout can be drawn, and the device can be
patterned, fabricated and tested easily.
THE PROCESS INVOLVED:
Electron Beam Lithography (EBL) system is
ideal for patterning small area devices with
nanometer resolution. In the EBL system, the
beam spot size can be varied from 4nm to
200nm, depending on the acceleration voltage
and beam current. The EBL system uses a
thermal field emission type cathode and ZrO/w
for the emitter to generate an electron beam. The
beam generated from the emitter is processed
through a four-stage e-beam focusing lens
system and forms a spot beam on the work piece.
Pattern writing is carried out on a work piece,
which has been coated with an electron beam
sensitive resist, by scanning the electron beam.
The EBL system adopts a vector scanning and
step-and-repeat writing method. It has a two-stage electrostatic deflection system. The
position-deflection system (main deflection
system) scans over a 500um x 500um area, and it
controls precise positioning of the beam. The
scanning-deflection system (subsidiary
deflection system) scans over a 4um x 4um area,
and it performs high-speed beam scanning.
The electron beam generated is accelerated
through a 100kV (or 50kV) electrode, and the
beam is turned on or off by a blanking electrode
when the stage moves. The EBL system is also
equipped with electrodes that correct the field
curvature and astigmatism due to beam
deflection. The schematic diagram of the EBL
can be visualized as shown in the figure below.
EBL system
The minimum feature that can be resolved by the
EBL system depends on several factors, such as
the type of resist, resist thickness, exposure
dosage, the beam current level, proximity
correction, development process and etching
resistance of the particular electron beam resist
used.
The feature patterned on the electron beam resist
can be transferred to the substrate using two
methods: the lift-off process or the direct etching
process. In a lift-off process, the resist is first
spun onto the wafer, exposed by E-beam
lithography, and developed in a solution. Next, a
masking material, such as Titanium, is sputtered
onto the wafer. The wafer is then placed in a
resist stripper to remove the resist. The metal that
is sputtered directly on top of the substrate where
there is no resist will stay, but the metal that is
sputtered on top of the resist will be lifted off
along with the resist; hence it is called the lift-off process. The metal left behind becomes the
etching mask for the substrate. The negative
resist is typically preferred for the lift-off process
because it has a slightly angled sidewall profile.
In a direct etching process, a masking material
such as silicon dioxide is first deposited onto the
silicon substrate. Silicon dioxide is used as a
mask because it has high selectivity in silicon
etching (1:100). The resist is then spun onto the
wafer, exposed and developed. Next, the pattern
is transferred onto the oxide mask by reactive ion
etching (RIE) or inductively coupled plasma
(ICP). One thing to take into consideration is that
the electron beam resist will also be etched
during the oxide etching. Therefore, the
selectivity of the resist to oxide during the
etching process will determine the minimum
required resist thickness for a given oxide
thickness.
BOTTOM-UP APPROACH:
The process of rigging up smaller elements in order to obtain the end product (in this case a circuit) is called the bottom-up approach. In nanotechnology the bottom-up approach is
implemented using carbon nanotubes.
By tailoring the atomic structure of organic molecules it is possible to create individual electronic components. This is a completely
different way of building circuits. Instead of
whittling down a big block of silicon we are
building from the ground-up; creating molecules
on a surface and then allowing them to assemble
into larger structures.
Fig: magnified sketch of a carbon nanotube
Scientists are now attempting to manipulate
individual atoms and molecules. Building with
individual atoms is becoming easier and
scientists have succeeded in constructing
electronic devices using carbon nanotubes. But a
practical constraint comes in integrating the
individual components. No method has emerged
for combining the individual components to form
a complete electronic circuit. Hence the bottom-up approach is in its early stage of research and
is thus practically difficult to realize.
THE COMPLETE PICTURE OF A
NANOMOBILE:
After all the above discussions, we now present a
schematic picture of a nanomobile.
a) front view b) rear view
As seen from figure (b), the internal circuitry of a conventional mobile has been tremendously reduced. The nanomobile consists of a two-tier display, featuring the touch screen display at the top and the system display at the bottom. The touch screen display enables the user to dial a number. The black scroll key at the top enables the user to traverse through his contacts. The nanomobile does not feature a microphone and speaker system; instead, the user is connected to the mobile via Bluetooth.
COMPARATIVE STUDY:
. The circuit of a nanomobile can be achieved on a nano scale, while that of a conventional mobile can at best be reduced to the micro scale, thus making the rear portion of the nanomobile almost flat.
. The speaker and the microphone, which add bulkiness to the conventional mobile, have been removed, and Bluetooth has been introduced as the tool to communicate.
. The keypad, which consumes a large space, has been replaced by a touch screen that also adds appeal to the nanomobile.
. The heat produced in the internal circuit is
greatly reduced.
DEMERITS OF A NANOMOBILE:
. Any fault arising in the internal
circuitry needs a high degree of
precision to be rectified and hence
would result in complexity.
. Repair of the circuit is very tedious and
hence only a complete replacement of
the circuit is possible.
. The electron beam process used in
nanolithography is quite slow and
would take a couple of days.
. A higher voltage is required to generate
the electron beam.
CONCLUSION:
Though nanomobile has a few demerits, it paves
the way for a revolutionary idea of bringing
down the size of the electronic circuits to a nano
scale. The nanomobile can be seen as an effective solution to resolve the problem of present-day bulky mobiles. Thus the nanomobile can be considered a giant leap towards the advent of nanotechnology in the field of electronics, catering to our day-to-day requirements.
REFERENCES:
. Introduction to nanotechnology by
CP Poole, FJ Owens
. Principles of nanotechnology
by G Ali Mansoori
Enhanced Watershed Image Processing Segmentation
Authors
1. G.Rachanaa (III/IV B.Tech ECE) 2. V.Shilpa (III/IV B.Tech ECE)
College
Koneru Lakshmaiah College of Engineering, Green Fields, Vaddeswaram.
E-mail: [email protected]
Contact: 9966009072
Abstract
Watershed is one of the most popular image processing segmentation methods. Image processing is an emerging field, and segmentation of nontrivial images is one of the most difficult tasks in it. The proposed system enhances the watershed method.
When the watershed method is called, it returns a label image in which each identified object has a distinct pixel value: all pixels of the 1st object have value one, all pixels of the 2nd object have value two, and so on. Enhanced watershed image processing improves the results of watershed image processing; here the enhancement is in terms of robustness, i.e. the outcome. For example, if the watershed algorithm identifies three objects in an image containing six objects, the proposed system will try to identify four or five objects in the same image. Enhancing the edge detection ultimately enhances the final result, provided a proper algorithm is used to merge the watershed result with the enhanced edge detection result. For this purpose the small algorithms described below are needed.
1) Introduction
Watershed is a method of image processing segmentation. To understand watershed image processing segmentation, one needs to know what image processing is and what image processing segmentation is.
A. Image processing
There are a number of ways to define image processing, but in simple words we can define it as follows: the field of digital image processing refers to processing digital images by means of a digital computer (Luc, 1991).
B. Image processing segmentation
R. Gonzalez and R. Woods write in their widely used textbook (Digital Image Processing) that "segmentation of nontrivial images is one of the most difficult tasks in image processing. Segmentation accuracy determines the success or failure of computerized analysis procedures" (RAFAEL C, 2004).
C. Watershed image processing segmentation
When the watershed method is called, it returns a label image. In the label image, all the different objects identified have different pixel values, i.e. all pixels of the 1st object have value one, all pixels of the 2nd object have value two, and so on.
D. Enhanced watershed image processing
Enhanced watershed image processing improves the results of watershed image processing. The proposed system enhances the watershed algorithm; here the enhancement is in terms of robustness, i.e. the outcome.
2) PROPOSED METHOD FOR ENHANCEMENT
The proposed algorithm is based on merging the watershed result with an enhanced edge detection result (edges are those places in an image that correspond to object boundaries); hence it enhances the watershed result. A simple edge detection result is not sufficient to enhance the watershed result, so we first need to enhance the edge detection result.
A. Enhanced watershed
This is the main algorithm. It controls all the other algorithms: it calls the watershed method and the edge detection method, enhances the edge detection result and applies the algorithms given below.
B. Connect edges with border
In the edge detection result, most edges are not connected with the border line. This step connects the edges with the border line of the image.
Fig. 1 Edges not connected with border; Fig. 2 Edges connected with border; Fig. 3 Missing pixel
C. Fill missing pixel
This step fills the pixel if there is one missing pixel, i.e. if there are three pixels having values 1 0 1, it converts the value of the 2nd pixel from 0 to 1 when the values of the 1st and 3rd pixels are one, as illustrated in the sketch below.
Fig. 4 Without missing pixel
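A minimal NumPy sketch of this fill-missing-pixel step (assuming a binary edge map stored as a 2-D array; the function name is hypothetical, not the authors' code):

import numpy as np

def fill_missing_pixels(edges):
    # Set a pixel to 1 when its left and right neighbours are both 1 (pattern 1 0 1).
    filled = edges.copy()
    gap = (edges[:, 1:-1] == 0) & (edges[:, :-2] == 1) & (edges[:, 2:] == 1)
    filled[:, 1:-1][gap] = 1
    return filled

row = np.array([[1, 0, 1, 1, 0, 0, 1]])
print(fill_missing_pixels(row))   # -> [[1 1 1 1 0 0 1]]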
D. Get big object no
Read the whole label image and return the biggest object number.
E. Connect object with border
Connects the start/end pixels of an object at the border, i.e. if object two has start pixel position 24 and end pixel position 33 at the border, then this will connect these two pixels with a line at the border.
Fig. 5 Object not connected with border; Fig. 6 Object connected with border
F. Get minimum object size
Read the label image and return the size of the minimum object in the label image.
G. Get object start pixel
This will return the starting pixel of an object, i.e. if we give object number two then this will return the starting pixel of object two.
H. Get object no
This method will get the x, y values and give the object number at that pixel.
I. Get object size
This method will get the object number and return the size of that object.
J. Get select object
This method will get the object number and return an image having only the pixels of this object.
3) PROPOSED ALGORITHM
1. Read image.
2. Convert image to gray scale, if required.
3. Perform canny edge detection and get edges.
4. Connect edges with border.
5. Fill missing pixel in edges.
6. Make edges logical (i.e. 0/1).
7. Complement the image.
8. Perform labeling function on edges and get label 1 and total objects in label 1.
9. Get biggest object number in the label 1.
10. Connect objects with border.
11. Perform labeling function again and get label 1 and total objects in label 1.
12. Get biggest object number again in the label 1.
13. Perform existing watershed method and get the label 2.
14. Perform labeling function on label 2 and get total objects in label 2.
15. Get biggest object number in the label 2.
16. Get the size of minimum object in label 2.
17. Loop through 1st object to total object in label 1.
a. If current object number is equal to biggest number in label 1 then continue.
b. Get the current object's start pixel in x and y variables from label 1.
c. Get the object number at x and y position in label 2.
d. If object number is equal to the biggest object number of label 2 then,
i. Increase the value of total objects in label 2 by one.
ii. Find the rows and columns pixels of current object in label 1.
iii. Find the total pixels (i.e. total rows or columns) in the above found rows and columns.
iv. Loop through 1st pixel to the last pixel of current object in label 1.
1. Change the current pixel value at label 2 to total objects value in label 2.
2. If there is another object's pixel in between rows then change the pixel to total objects value.
v. Continue the loop at 17.
e. Get the current object size from label 1.
f. Get the size of object number.
g. If current object's size is greater than object number's size then,
i. Find the rows and columns pixels of current object in label 1.
ii. Find the total pixels (i.e. total rows or columns) in the above found rows and columns.
iii. Loop through 1st pixel to the last pixel of current object in label 1.
1. Change the current pixel value at label 2 to object number value (see 17.c).
iv. Continue the loop at 17.
h. If current object's size is greater than double the size of minimum object in label 2,
i. Find the rows and columns pixels of current object in label 1.
ii. Find the total pixels (i.e. total rows or columns) in the above found rows and columns.
iii. Increase the value of total objects in label 2 by one.
iv. Loop through 1st pixel to the last pixel of current object in label 1.
1. Change the current pixel value at label 2 to total object value.
2. If there is another object's pixel in between rows then change the pixel to total objects value.
v. Continue the loop at 17.
18. Convert the label 2 to RGB and display the final enhanced watershed result (a minimal code sketch of the core steps follows).
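The following is a minimal sketch of the core of steps 1-14 and 18, assuming scikit-image and SciPy are available; it is not the authors' implementation, and the marker thresholds are illustrative:

import numpy as np
from scipy import ndimage as ndi
from skimage import color, data, feature, filters, segmentation

image = data.coins()                                 # steps 1-2: read a grayscale image
edges = feature.canny(image)                         # step 3: Canny edge detection
label1, total1 = ndi.label(~edges)                   # steps 6-8 (simplified): label the complemented edges

elevation = filters.sobel(image)                     # gradient image for the watershed
markers = np.zeros_like(image, dtype=int)            # simple intensity-based markers (illustrative thresholds)
markers[image < 30] = 1
markers[image > 150] = 2
label2 = segmentation.watershed(elevation, markers)  # step 13: existing watershed method
label2, total2 = ndi.label(label2 == 2)              # step 14: label the watershed result

print("objects in edge labels:", total1, "objects in watershed labels:", total2)
rgb = color.label2rgb(label2, image=image)           # step 18: convert the labels to RGB

The merging of label 1 into label 2 described in step 17 would then walk over the objects of label 1 and rewrite pixel values in label 2, as listed above.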
4) ENHANCED RESULTS
A. Set 1
Following is the Set 1 summary of segmentation results.
Fig. 7 Segmentation results on Set 1; Fig. 8 Comparison on Set 1 (bar chart & graph; X-axis: image number, Y-axis: percentage).
B. Set 2
Following is the Set 2 summary of segmentation results.
Fig. 10 Segmentation results on Set 2; Fig. 11 Comparison on Set 2.
C. Set 3
Following is the Set 3 summary of segmentation results.
Fig. 13 Segmentation results on Set 3; Fig. 14 Comparison on Set 3 (bar chart & graph).
5) CONCLUSION
The evaluation of the segmentation is done by comparing each object in the true segmentation with the corresponding object in the marker-controlled watershed segmentation or in the proposed method. On this basis, a percentage is calculated on different sets of images and a comparison graph is drawn between the marker-controlled watershed and the proposed method. The conclusion is that the proposed method enhances the result of the marker-controlled watershed.
6) Future work:
Image processing is a very hot field which needs extensive research and hard work. Following are the suggestions for future work.
1. Improve the speed by using parallel processing, either on a cluster system or a multiprocessor system.
2. Merge this technique with some other technique to get better results.
3. The fill missing pixel technique can be useful for other purposes.
4. Implement the same method in C++ for efficiency reasons.
MILITARY APPLICATIONS USING GLOBAL
POSITIONING SYSTEM(GPS)
R.Radhamma, D.Shilpa,
Student III B.Tech (ECE), Student III B.Tech (ECE),
AITS, Rajampet. AITS ,Rajampet.
mailto:[email protected] mailto:[email protected]
ABSTRACT
In the modern day theater of combat,
the need to be able to strike at targets that
are on the opposite side of the globe has
strongly presented itself. This had led to the
development of various types of guided
missiles. These guided missiles are self-guiding weapons intended to maximize damage to the target while minimizing collateral damage. The buzzword in modern day combat is "fire and forget". On ethical grounds, one prays that each warhead deployed during a sortie will strike only its intended target, and that innocent civilians will not be harmed by a misfire.
There is a need for missile guidance, which is done by radar signals, wires and, most recently, GPS. A GPS guided missile, using the exceptional navigational and surveying abilities of GPS, could after being launched deliver a warhead to any part of the globe via the interface of the onboard computer in the missile with the GPS satellite system.
INTRODUCTION :
1). Introduction to missile guidance :
Guided missile systems have evolved at a
tremendous rate over the past four decades,
and recent breakthroughs in technology
ensure that smart warheads will have an
increasing role in maintaining our military
superiority. From a tactical point of view, our military desires weaponry that is reliable and effective, inflicting maximal damage on valid military targets and ensuring our capacity for lightning-fast strikes with pinpoint accuracy. Guided missile systems help fulfill all of these demands.
1.1). Concept of missile guidance :
Missile guidance concerns the method by
which the missile receives its commands to
move along a certain path to reach a target.
On some missiles, these commands are
generated internally by the missile computer
autopilot. On others, the commands are
transmitted to the missile by some external source.
Fig 1.1 Concept of missile guidance
The missile sensor or seeker, on the other
hand, is a component within a missile that
generates data fed into the missile computer.
This data is processed by the computer and
used to generate guidance commands.
Sensor types commonly used today include
infrared, radar, and the global positioning
system. Based on the relative position
between the missile and the target at any
given point in flight, the computer autopilot
sends commands to the control surfaces to
adjust the missile's course.
1.2). Types of missile guidance :
Many of the early guidance systems used in missiles were based on gyroscope models.
Many of these models used magnets in their
gyroscope to increase the
sensitivity of the navigational array. In
modern day warfare, the inertial
measurements of the missile are still
controlled by a gyroscope in one form or
another, but the method by which the missile
approaches the target bears a technological
edge. On the battlefield of today, guided
missiles are guided to or acquire their targets
by using:
· Radar signal
· Wires
· Lasers (or)
· Most recently GPS.
1.2.1). Missile guidance using radar
signal :
Many machines used in battle, such as tanks,
planes, etc. and targets, such as buildings,
hangers, launch pads, etc. have a
specific signature when a radar wave is
reflected off of it. Guided missiles that use
radar signatures to acquire their targets are
programmed with the specific signature to
home in on. Once the missile is launched, it
then uses its onboard navigational array to
home in on the preprogrammed radar
signature. The major problem with these missiles on today's battlefield is that the countermeasures used against them work on the same principles that these missiles operate under. The countermeasures home in on the radar signal source and destroy the antenna array, which essentially shuts down the radar source, and hence the radar guided missiles cannot acquire their targets.
1.2.2). Missile guidance using wires :
Wire guided missiles do not see the target.
Once the missile is launched, the missile
proceeds in a linear direction from the
launch vehicle. Miles of small, fine wire are
wound in the tail section of the missile
and unwind as the missile travels to the
target. Along this wire, the gunner sends
navigational signals directing the missile to
the target. If for some reason the wire
breaks, the missile will never acquire the
target. Wire guided missiles carry no
instrument array that would allow them to
acquire a target. One strong downside to
wire guided missiles is the fact that the
vehicle from which the missile is fired must
stay out in the open to guide the missile to
its target. This leaves the launch vehicle
vulnerable to attack, which on the battlefield
one wants to avoid at all costs.
1.2.3). Missile guidance using lasers :
Laser guided missiles use a laser of a certain
frequency bandwidth to acquire their target.
The gunner sights the target using a laser;
this is called painting the target. When the
missile is launched it uses its onboard
instrumentation to look for the heat
signature created by the laser on the target.
Once the missile locates the heat signature, the target is acquired. Despite the much publicized success of laser guided missiles, laser guided weapons are no good in the rain or in weather conditions where there is sufficient cloud cover. To overcome the shortcomings of laser guided missiles in unsuitable atmospheric conditions and of radar guided missiles, GPS entered as a method of navigating the missile to the target. So, before going to the GPS guided missile, we will have an introduction to GPS.
INTRODUCTION TO GPS :
2.1). What is meant by GPS ?
GPS, which stands for Global Positioning
System, is the only system today able to
show us our exact position on the Earth
anytime, in any weather, anywhere. GPS
satellites, 24 in all, orbit at 11,000 nautical
miles above the Earth. Ground stations
located worldwide continuously monitor
them. The satellites transmit signals that can
be detected by anyone with a GPS receiver.
Using the receiver, you can determine your
location with great precision.
2.2). Elements of GPS :
GPS has three parts: the space segment, the
user segment, and the control segment. The
space segment consists of a constellation of
24 satellites plus some spares, each in its
own orbit 11,000 nautical miles above Earth.
The user segment consists of receivers,
which we can hold in our hand or mount in a
vehicle. The control segment consists of ground stations that make sure the satellites are working properly.
2.3). Recent development in GPS :
Systematic GPS errors as well as the
unavailability of GPS P-code to civilian
users led to the development of the
differential global positioning system
(DGPS). The central idea behind all
Differential GPS schemes is that of
broadcasting an error signal which tells a
GPS receiver what the difference is between
the receiver's calculated position and actual
position. The GPS error signal can be most
easily produced by siting a GPS receiver at a
known surveyed location, and by comparing
the received GPS position with the known
actual position. The difference in positions
will be very close to the actual error seen by
a receiver in the geographical vicinity of the
beacon broadcasting the error signal.
WORKING OF DGPS:
1.) A technique called differential correction can yield accuracies within 1-5 meters, or even better, with advanced equipment.
2.) Differential correction requires a second GPS receiver, a base station, collecting data at a stationary position on a precisely known point (typically a surveyed benchmark).
3.) Because the physical location of the base station is known, a correction factor can be computed by comparing the known location with the GPS location determined by using the satellites.
4.) The differential correction process takes this correction factor and applies it to GPS data collected by the GPS receiver in the field; differential correction eliminates most of the errors (see the sketch below).
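A minimal numerical sketch of this differential correction (coordinates and error values are made up for illustration; real DGPS operates on pseudoranges rather than final positions):

import numpy as np

base_true = np.array([1_113_194.0, 6_100_250.0, 1_432_800.0])   # surveyed base-station position (m)
base_gps  = np.array([1_113_197.2, 6_100_247.1, 1_432_803.5])   # position the base computes from GPS

correction = base_true - base_gps         # error broadcast by the base station

rover_gps = np.array([1_113_850.9, 6_100_512.4, 1_432_640.2])   # field receiver's raw GPS fix
rover_corrected = rover_gps + correction  # apply the differential correction

print("broadcast correction (m):", correction)
print("corrected rover position:", rover_corrected)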
3.1). Working Of inertial Navigation
system:
Inertial navigation relies on devices onboard
the missile that senses its motion and
acceleration in different directions. These
devices are called gyroscopes and
accelerometers.
Fig 1.2 Mechanical, fiber optic, and ring
laser gyroscopes
The purpose of a gyroscope is to measure
angular rotation, and a number of different
methods to do so have been devised. A
classic mechanical gyroscope senses the
stability of a mass rotating on gimbals. More
recent ring laser gyros and fiber optic gyros
are based on the interference between laser
beams. Current advances in Micro-Electro-Mechanical Systems (MEMS) offer the potential to develop gyroscopes that are very small and inexpensive. While gyroscopes
measure angular motion, accelerometers
measure linear motion. The accelerations
from these devices are translated into
electrical signals for processing by the
missile computer autopilot. When a
gyroscope and an accelerometer are
combined into a single device along with a
control mechanism, it is called an inertial
measurement unit (IMU) or inertial
navigation system (INS).
Fig.1.3 Inertial navigation concept
The INS uses these two devices to sense motion relative to a point of origin. Inertial navigation works by telling the missile where it is at the time of launch and how it should move, in terms of both distance and rotation, over the course of its flight. The missile computer uses signals from the INS to measure these motions and ensure that the missile travels along its pre-programmed
path. Inertial navigation systems are widely
used on all kinds of aerospace vehicles,
including weapons, military aircraft,
commercial airliners, and spacecraft. Many
missiles use inertial methods for midcourse
guidance, including AMRAAM, Storm
Shadow, Meteor, and Tomahawk.
ROLE OF SATELLITE IN
MISSILE GUIDANCE :
4.1). Satellite guided weapons :
The problem of poor visibility does not
affect satellite-guided weapons such as
JDAM (Joint Direct Attack Munitions),
which uses satellite navigation systems,
specifically the GPS system. This offers
improved accuracy compared to laser
systems, and can operate in all weather
conditions, without any need for ground
support. Because it is possible to jam GPS,
the bomb reverts to inertial navigation in
the event of losing the GPS signal. Inertial
navigation is significantly less accurate;
JDAM achieves a CEP of 13 m under GPS
guidance, but typically only 30 m under
inertial guidance. Further, the inertial
guidance CEP increases as the dropping
altitude increases, while the GPS CEP does not. The precision of these weapons is
dependent both on the precision of the
measurement system used for location
determination and the precision in setting
the coordinates of the target. The latter
critically depends on intelligence
information, not all of which is accurate.
However, if the targeting information is
accurate, satellite-guided weapons are
significantly more likely to achieve a
successful strike in any given weather
conditions than any other type of precision
guided munitions.
4.2 MISSILE GUIDANCE USING GPS :
The central idea behind the design of
DGPS/GPS/inertial guided weapons is that
of using a 3-axis gyro/accelerometer
package as an inertial reference for the
weapon's autopilot, and correcting the
accumulated drift error in the inertial
package by using GPS PPS/P-code. Such
weapons are designated as "accurate"
munitions as they will offer CEPs (Circular
Error Probable) of the order of the accuracy
of GPS P-code signals, typically about 40ft.
Fig.1.4. Global Positioning System used in
ranging navigation guidance.
The next incremental step is then to update
the weapon before launch with a DGPS
derived position estimate, which will allow
it to correct its GPS error as it flies to the
target, such weapons are designated
"precise" and will offer accuracies greater
than laser or TV guided weapons,
potentially CEPs of several feet. Only if an opponent capable of jamming GPS signals is encountered will more expensive inertial packages and ECCM-equipped receivers be required. For an
aircraft to support such munitions, it will
require a DGPS receiver, a GPS receiver and
interfaces on its multiple ejector racks or
pylons to download target and launch point
coordinates to the weapons. The
development of purely GPS/inertial guided
munitions will produce substantial changes
in how air warfare is conducted. Unlike a
laser-guided weapon, a GPS/inertial weapon
does not require that the launch aircraft
remain in the vicinity of the target to
illuminate it for guidance - GPS/inertial
weapons are true fire-and-forget weapons,
which once released, are wholly
autonomous, and all weather capable with
no degradation in accuracy. Existing
precision weapons require an unobscured
line of sight between the weapon and the
target for the optical guidance to work.
GPS/inertial weapons are oblivious to the
effects of weather, allowing a target to be
engaged at the time of the attacker's
choosing.
5. OTHER APPLICATIONS OF
GPS :
GPS is the most powerful navigation system, used in a myriad of military, commercial, civil, and scientific applications. GPS has already been incorporated into naval ships, submarines, and military aircraft.
1.) Navigation System Timing and
Ranging (NAVSTAR) GPS is now
available at any time, in any weather, and
at any place on or above the earth.
NAVSTAR also provides precise time
within a millionth of a second to
synchronize the atomic clocks used in
various military applications.
2.) GPS is even used in locating the present position of living and non-living things; this is the concept used in the famous GOOGLE EARTH.
Fig: Applications of GPS
6. CONCLUSIONS :
The proliferation of GPS and INS guidance is a double-edged sword. On the one hand, this technology promises a revolution in air warfare not seen since the laser guided bomb, with single bombers being capable of doing the task of multiple aircraft packages.
In summary, GPS-INS guided weapons are not affected by harsh weather conditions or restricted by a wire, nor do they leave the gunner vulnerable to attack. GPS guided weapons, with their technological advances over previous systems, are the superior weapon of choice in modern day warfare.
7. REFERENCES:
1) GPS Theory and Practice. B. Hofmann-Wellenhof, H. Lichtenegger, and J. Collins. Springer-Verlag Wien, New York, 1997. Pg [1-17, 76].
2) http://www.navcen.uscg.gov/pubs/gps/icd200/icd200cw1234.pdf
3) E.D. Kaplan, Understanding GPS: Principles and Applications.
4) http://www.aero.org/news/current/gpsorbit.html
5) http://www.trimble.com/gps/
WiMAX IN 4G COMMUNICATIONS
SIGMOID 2K9
PAPER PRESENTED BY:
A.UDAY KUMAR (06691A04B0), III E.C.E, Email: [email protected]
P.REDDY PRASAD (06691A0479), III E.C.E, Email: [email protected]
MADANAPALLI INSTITUTE OF TECHNOLOGY & SCIENCE
ANGALLU, MADANAPALLI-517325
ABSTRACT:
Factors such as innovation, technological obsession, market evolution and customer needs characterize the growth of any new technology. There is a growing demand for a technology that addresses most of the customers' demands, such as high bandwidth and long, non-line-of-sight coverage, that are not achieved by the existing technology. This trend is more predominant in the highly mutative wireless market. 4G will fundamentally advance the way we use mobile and existing networks and repair the problems of 3G, with supplementary features such as long, non-line-of-sight coverage, high bandwidth and inbuilt quality of service (QoS). The approaching 4G mobile communication systems are projected to solve the still-remaining problems of 3G systems and to provide a wide variety of new services, from high-quality voice to high-definition video to high-data-rate wireless channels. WiMAX is an advanced technology solution, based on an open standard, designed to meet this need, and to do so in a low-cost, flexible way.
WiMAX (Worldwide Interoperability for Microwave Access) allows interoperability and combines the benefits that other wireless networking technologies offer individually, and leads a path towards 4G, becoming the 4G wireless technology of the future. WiMAX addresses almost all of these demands. WiMAX provides high-throughput broadband connections over long distances.
WiMAX is the only wireless standard today that has the ability to deliver true broadband speeds and help make the vision of pervasive connectivity a reality. This paper will evaluate the potential of WiMAX as a 4G technology.
INTRODUCTION:
WiMAX (Worldwide
Interoperability for Microwave access) is a
standards-based wireless technology that
provides high-throughput broadband
connections over long distances. WiMAX can
be used for a number of applications,
including "last mile" broadband connections,
hotspot and cellular backhaul, and high-speed
enterprise connectivity for businesses.
WiMAX has the potential to impact all forms of telecommunications. WiMAX has the ability to deliver true broadband speeds. WiMAX networks are optimized for high-speed data and should help spur innovation in services, content and new mobile devices.
WiMAX Features:
WiMAX provides wireless metropolitan area network (WMAN) connectivity at speeds of up to 75 Mb/sec.
WiMAX systems can be used to transmit a signal as far as 30 miles.
A WiMAX base station can beam high-speed Internet connections to homes and businesses in a radius of up to 50 km (31 miles).
TYPES OF WiMAX:
- Point-to-point (PTP)
- Point-to-Multipoint (PMP)
POINT TO POINT:
Point to point is used where there are two points of interest: one sender and one receiver. This is also a scenario for backhaul, or the transport from the data source (data center, co-lo facility, fiber POP, Central Office, etc.) to the subscriber, or for a point of distribution using point-to-multipoint architecture. As the architecture calls for a highly focused beam between two points, the range and throughput of point-to-point radios will be higher than those of point-to-multipoint products.
POINT TO MULTIPOINT:
One base station can service hundreds of dissimilar subscribers in terms of bandwidth and services offered. WiMAX systems can also be set up as mesh networks, allowing the WiMAX system to forward packets between base stations and subscribers without having to install communication lines between base stations. Here we can see both types, point-to-point & point-to-multipoint.
APPLICATIONS OF WiMAX:
Fixed WiMAX
Mobile WiMAX
FIXED WiMAX:
Fixed WiMAX applications are point-to-multipoint, enabling broadband access to homes and businesses.
Fixed WiMAX offers cost-effective point-to-point and point-to-multipoint solutions.
Fixed WiMAX includes a radius of service coverage of 6 miles from a WiMAX base station for point-to-multipoint, non-line-of-sight service. This service should deliver approximately 40 megabits per second (Mbps) for fixed access applications. That WiMAX cell site should offer enough bandwidth to support hundreds of businesses with T1 speeds and thousands of residential customers with the equivalent of DSL services from one base station.
MOBILE WiMAX:
Mobile WiMAX allows any telecommunications service to go mobile.
Mobile WiMAX is based on
OFDMA (Orthogonal Frequency Division
Multiple Access) technology which has
inherent advantages in throughput, latency,
spectral efficiency, and advanced antennae
support; ultimately enabling it to provide
higher performance than today's wide area
wireless technologies. Mobile WiMAX takes
the fixed wireless application a step further
and enables cell phone-like applications on a
much larger scale. For example, mobile
WiMAX enables streaming video to be
broadcast from a speeding police or other
emergency vehicle at over 70 MPH. It
potentially replaces cell phones and mobile
data offerings from cell phone operators.
WiMAXSTANDARDS:
802.16 broadband
wireless systems have evolved over time.
This diagram shows that the original 802.16
specification defined fixed broadband
wireless service that operates in the 10-66
GHz frequency band. To provide wireless
broadband service in lower frequency range,
the 802.16A specification was created that
operates in the 2-11 GHz frequency band. To
provide both fixed and mobile service, the
802.16E specification was developed.
LINE OF SIGHT (LOS) OR NON LINE OF SIGHT (NLOS):
Fig: The difference between line of sight and non-line of sight
WiMAX functions best in line-of-sight situations and, unlike those earlier technologies, offers acceptable range and throughput to subscribers who are not line of sight to the base station. Buildings between the base station and the subscriber diminish the range and throughput, but in an urban environment the signal will still be strong enough to deliver adequate service. Given WiMAX's ability to deliver services non-line-of-sight, the WiMAX service provider can reach many customers in high-rise office buildings and achieve a low cost per subscriber, because so many subscribers can be reached from one base station.
REAL TIME EXAMPLE USING WiMAX:
Fig: WiMAX radio delivering mobile broadband
LIMITATIONS:
1. A commonly-held misconception is that
WiMAX will deliver 70 Mbit/s over 50
kilometers. In reality, WiMAX can do one or
the other - operating over maximum range
(50 km) increases bit error rate and thus must
use a lower bitrate.
2. In a line-of-sight environment with a portable Mobile WiMAX CPE, speeds of 10 Mbit/s at 10 km could be delivered. However, in urban environments users may not have line of sight and therefore may only receive 10 Mbit/s over 2 km.
Amendments in progress:
802.16e-2005: Mobile 802.16
802.16f-2005: Management Information Base
802.16g-2007: Management Plane Procedures and Services
802.16k-2007: Bridging of 802.16 (an amendment to 802.1D)
802.16h: Improved Coexistence Mechanisms for License-Exempt Operation
802.16i: Mobile Management Information Base
802.16j: Multihop Relay Specification
802.16Rev2: Consolidate 802.16-2004, 802.16e, 802.16f, 802.16g and possibly 802.16i into a new document
802.16m: Advanced Air Interface. Data rates of 100 Mbit/s for mobile applications and 1 Gbit/s for fixed applications; cellular, macro and micro cell coverage, with currently no restrictions on the RF bandwidth (which is expected to be 20 MHz or higher).
Future development:
For use in cellular spectrum, WiMAX II (802.16m) will be proposed for IMT-Advanced 4G.
The goal for the long-term evolution of both WiMAX and LTE is to achieve 100 Mbit/s mobile and 1 Gbit/s fixed-nomadic bandwidth, as set by the ITU for 4G NGMN (Next Generation Mobile Network) systems, through the adaptive use of MIMO-AAS and smart, granular network topologies. Since the evolution of core air-link technologies has approached the practical limits imposed by Shannon's theorem, the evolution of wireless has embarked on the pursuit of the 3X to 10X+ greater bandwidth and network efficiency gains that are expected from advances in spatial and smart wireless broadband networking technologies. A small numerical illustration of the Shannon limit follows.
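As a rough illustration of that Shannon limit (assumed 20 MHz channel; the SNR values are arbitrary examples, not measured WiMAX figures):

import math

bandwidth_hz = 20e6                                # assumed RF channel bandwidth (20 MHz)
for snr_db in (3, 10, 20, 30):
    snr = 10 ** (snr_db / 10)                      # convert dB to a linear ratio
    capacity = bandwidth_hz * math.log2(1 + snr)   # Shannon-Hartley capacity bound
    print(f"SNR {snr_db:2d} dB -> capacity {capacity / 1e6:6.1f} Mbit/s")

The printed capacities show why further gains are sought in spatial techniques (MIMO) and denser topologies rather than in the air link alone.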
CONCLUSION:
WiMAX is all set to take over the wireless world and provide affordable broadband services. This development will allow for such things as mobile video conferencing, live video feeds without the cost of satellite time, and connecting to practically anything live, on-line, with a device in your hand. Amazing! One-to-many and one-to-one telecasting. The emerging BWA technology WiMAX allows interoperability and combines the benefits that other wireless networking technologies offer individually, and leads a path towards 4G, becoming the 4G wireless technology of the future.
QUEUEING SYSTEMS
A computational study
A.Nagaswetha C.Kishore
Final B.Tech ECE Final B.Tech ECE
Sri Venkateswara University College of Engineering
TIRUPATI
1.1 Importance Of Queueing In Communication Networks
Queueing theory plays a key role in the analysis of networks. Queueing arises very naturally in a study of packet-switched networks. Packets arriving either at an entry point to the network or at an intermediate node on the path to the destination are buffered and processed to determine the appropriate transmission link connected to the next node along the path. The time spent in the buffer waiting for transmission is a significant measure of performance for such a network, since the end-to-end time delay, of which this wait time is a component, is an element of the network performance experienced by the user. The wait time depends on the nodal processing time & packet length.
Queueing theory also arises naturally in the study of circuit-switched networks, not only in studying the processing of calls but in analysing the relation, at a node or switching exchange, between the trunks available & the probability that a call to be set up will be blocked or queued for service at a later time.
Historically, modern queueing theory developed out of telephone traffic studies at the beginning of the twentieth century. Integrated networks, which combine aspects of packet switching & circuit switching, are likewise analysed using queueing models.
1.2 Characteristic parameters of queueing and their definitions
- Transmission link capacity: measured in packets/sec capable of being transmitted, as against the traffic rate λ in packets/sec arriving at the node; the link capacity is µC packets/sec.
- Average packet length: measured in bits, is 1/µ bits and is given in units of bits/packet; C bits/sec is the transmission capacity.
- Call arrival rate: λ arrivals/sec represents the average call arrival rate, or the number of calls/sec handled on the average.
- Call holding time: the parameter 1/µ, in units of sec/call, is called the average call holding time.
- Link utilisation: defined as the ratio of the load on the system to the capacity of the system, denoted ρ = λ/µ, which plays a critical role in queueing systems.
- State of the queue: defined to be the number of packets in the queue.
To calculate the probabilities of the states one must have knowledge of
1. The packet arrival process (the arrival statistics of the incoming packets).
2. The packet length distribution (also called the service time distribution in queueing theory).
3. The service discipline (for example: FCFS, first-come-first-served, or FIFO service; LCFS, last-come-first-served; or some form of priority discipline).
1.3 Classification Of Queues:-
The notation M/M/1 is due to the British statistician D. G. Kendall. The Kendall notation for a general queueing system is of the form A/B/C: the symbol A represents the arrival distribution, B represents the service time distribution, and C denotes the number of servers used. The M in particular, from the Markov process, is used to denote the Poisson process or the equivalent exponential distribution.
M/M/m queue: the one with Poisson arrivals, exponential service statistics & m servers.
M/G/1 queue: has Poisson arrivals, a general service distribution & a single server.
M/D/1 queue: a special case, with D used to represent a fixed or constant service time.
2.1 Model Of M/M/1 Queue:-
Fig2.1 MODEL OF SINGLE SERVER QUEUE
The parameters λ and µ are explained in section 1.2.
Fig 2.2 M/M/1 QUEUE
Once we find the probabilities Pn of the states of the queue, it is easy to compute the statistical properties of the M/M/1 queue: the average queue occupancy, the probability of blocking for a finite queue, and the average throughput.
2.2 Mathematical description of statistical properties of M/M/1 Queue:-
It is apparent that if the queue is in state n at time t+Δt, it could only have been in states n-1, n or n+1 at time t.
The probability Pn(t+Δt) that the queue is in state n at time t+Δt must be the sum of the (mutually exclusive) probabilities that the queue was in state n-1, n or n+1 at time t, each multiplied by the probability of arriving at state n in the intervening Δt units of time.
Thus the generating equation for Pn(t+Δt) is
Pn(t+Δt) = Pn(t)[(1-λΔt)(1-µΔt) + µΔt·λΔt + o(Δt)] + Pn-1(t)[λΔt(1-µΔt) + o(Δt)] + Pn+1(t)[µΔt(1-λΔt) + o(Δt)] ------>(2.1)
As an example, if the system remains in state n, n >= 1, there could have been either no arrival and no departure, with probability (1-λΔt)(1-µΔt), or one arrival and one departure, with probability λΔt·µΔt, as shown. The other terms of equation (2.1) are obtained similarly.
Simplifying eq. (2.1) and dropping the o(Δt) terms altogether, one gets
Pn(t+Δt) = [1-(λ+µ)Δt]Pn(t) + λΔt·Pn-1(t) + µΔt·Pn+1(t) ------->(2.2)
Alternatively, a differential-difference equation governing the time variation of Pn(t) may be found by expanding Pn(t+Δt) in a Taylor series about t and retaining the first two terms only:
Pn(t+Δt) = Pn(t) + (dPn(t)/dt)·Δt ------->(2.3)
Using equation (2.3) in equation (2.2) and simplifying, one readily obtains the following equation:
dPn(t)/dt = -(λ+µ)Pn(t) + λPn-1(t) + µPn+1(t) -------->(2.4)
This is the differential-difference equation to be solved to find the time variation of Pn(t) explicitly.
Equation (2.4), for the case of stationary, non-time-varying probabilities, then simplifies to the following equation involving the stationary state probabilities Pn of the M/M/1 queue:
(λ+µ)Pn = λPn-1 + µPn+1 ------->(2.5)
This equation can be given a physical interpretation that enables us to write it down directly by inspection, without going through the lengthy process of deriving it from equation (2.1).
Consider the state diagram of Fig. 2.3, which represents the M/M/1 queue.
Because of the Poisson arrival and departure processes assumed, transitions can take place between adjacent states only, with the rates shown.
There is a rate λ of moving up one state due to arrivals in the system, whereas there is a rate µ of moving down one state due to service completions or departures.
Alternatively, if one multiplies the rates by Δt, one has the probability λΔt of moving up one state due to an arrival, or the probability µΔt of dropping down one state due to a service completion (departure).
(If the system is in state 0, i.e. it is empty, it can only move up to state 1 due to an arrival.)
The form of eq. (2.5) indicates that there is a stationary balance principle at work.
The left-hand side of eq. (2.5) represents the rate of leaving state n, given that the system was in state n with probability Pn. The right-hand side represents the rate of entering state n, from either state n-1 or state n+1. In order for stationary state probabilities to exist, the two rates must be equal.
Fig2.3 STATE DIAGRAM OF M/M/1
Balance equations play a key role in the study of queueing systems.
The solution of eq(2.5) for the state probabilities can be carried out in a
number of ways .The simplest way is to again apply balance arguments.
Consider Fig. 2.4, which represents the state diagram of the M/M/1 queue again, drawn as in Fig. 2.3 but with two closed surfaces 1 and 2 sketched as shown. If one calculates the total probability flux crossing surface 1, and equates the flux leaving (rate of leaving state n) to the flux entering (rate of entering state n), one gets eq. (2.5). Now focus on surface 2, which encloses the entire set of states from 0 to n. The flux entering the surface is µPn+1; the flux leaving is λPn. Equating these two, one gets
λPn = µPn+1 ---------->(2.6)
Repeating equation (2.6) n times, one finds very simply that
Pn = ρ^n·P0, where ρ = λ/µ.
To find the remaining unknown probability P0 one must now invoke the probability normalization condition
Σ Pn = 1
Fig2.4 FLOW BALANCE OF M/M/1 QUEUE
For the case of an infinite M/M/1 queue, Σ Pn = P0·Σ ρ^n = P0/(1-ρ) = 1, so that P0 = 1-ρ and one finds very simply that
Pn = (1-ρ)ρ^n, where ρ = λ/µ < 1 -------->(2.7)
as the equilibrium state probability solution for the M/M/1 queue (note the necessary stability condition ρ = λ/µ < 1).
2.2.2 Blocking Probability and Throughput:
The normalization condition summed over a finite number of states N,
Σ Pn = 1 ------->(2.8)
gives
Pn = (1-ρ)ρ^n / (1-ρ^(N+1)) ------>(2.9)
The probability that the queue is full is
PN = (1-ρ)ρ^N / (1-ρ^(N+1)) ------>(2.10)
Consider the queue shown in Fig. 2.5. It can be any queueing system that blocks customers on arrival. A load λ, defined as the average number of arrivals/sec, is shown applied to the queue. With the probability of blocking given by Pb, the net arrival rate is then λ(1-Pb). But this must be the same as the throughput γ, or the number of customers served/sec, for the conserved system:
γ = λ(1-Pb) ------->(2.11)
Fig2.5 RELATION BETWEEN THROUGHPUT AND LOAD
The blocking probability in fact is given by
Pb = PN = (1-ρ)ρ^N / (1-ρ^(N+1)) ------>(2.12)
For a small blocking probability, equation (2.12) may be simplified: with ρ < 1 and N >> 1 we have
Pb ≈ (1-ρ)ρ^N,   ρ^(N+1) << 1 ------>(2.13)
The specific expression for the normalized throughput γ/µ as a function of the normalized load ρ = λ/µ is obtained by using equation (2.9) in the expression
γ = λ(1-Pb) = µ(1-P0)
where µ is the average rate of service and (1-P0) is the probability that the queue is non-empty.
This gives γ/µ = (1-P0) = ρ(1-ρ^N)/(1-ρ^(N+1)) ------>(2.14)
Fig: Variation of normalised throughput (γ/µ) with normalised load ρ, for ρ from 0 to 1.
2.2.3 Average queue size: From the definition of the mean value of a random variable we get
E(n) = Σ n·Pn = ρ/(1-ρ) ------>(2.15)
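The formulas above are easy to check numerically. The following small Python sketch is illustrative only and not part of the original paper; it evaluates equations (2.9), (2.12), (2.14) and (2.15), and the function names and the sample values ρ = 0.5, N = 10 are ours.

def state_probability(n, rho, N):
    # P_n = (1 - rho) * rho**n / (1 - rho**(N + 1)), eq. (2.9)
    return (1 - rho) * rho**n / (1 - rho**(N + 1))

def blocking_probability(rho, N):
    # P_b = P_N, eq. (2.12)
    return state_probability(N, rho, N)

def normalised_throughput(rho, N):
    # gamma / mu = 1 - P_0, eq. (2.14)
    return 1 - state_probability(0, rho, N)

def mean_queue_size(rho):
    # E(n) = rho / (1 - rho) for the infinite queue, eq. (2.15)
    return rho / (1 - rho)

rho, N = 0.5, 10
print(blocking_probability(rho, N))    # about 0.00049
print(normalised_throughput(rho, N))   # about 0.49976
print(mean_queue_size(rho))            # 1.0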
2.3 SIMULATING M/M/1 QUEUE:
Event driven System Simulation Structures:
Fig2.6 EVENT DRIVEN SIMULATION STRUCTURES
The first structure is the event sequencer, which accepts a sequence of events that arrive at times t1, t2, t3, ...
The event occurrences enter a queue in which the first customer in is also the first one to be serviced; thus we call this a FIFO structure.
The third important structure is the server, which can be thought of as a device that actually performs work or an operation on the message or customer.
A server can only service a single customer at a time, during which the server is said to be busy; otherwise the server is idle or available.
Interfaces between the event sequencer, queue and server are also required. These include distributors, which disperse assignments to different servers, and collectors, which reassemble queues after being acted on by the servers.
A distributor takes the output of a single queue and routes it to one or more servers or other queues.
EXAMPLE OF M/M/1 QUEUEING SYSTEM:
Consider a so-called M/M/1 queueing system in which data packets of random lengths arrive at random times at a single communication server. The server simply forwards the packets along the communication network. Notice the similarity between this system and the drive-through service window at a fast food store, where data packets are replaced by customers and message length is replaced by order size. Model the system and graph illustrative results for a mean event rate λ = 3 messages/min and a mean service time µ = 0.5 minutes/message.
To solve the above problem a simulation technique is used, since simulation is an imitation of reality.
Fig2.7 M/M/1 QUEUE SIMULATION DIAGRAM
Simulation is a representation of reality through the use of a model or other device which will react in the same manner as reality under a given set of conditions.
The code by which an event sequencer works:
t0 = 0
for k = 1 to n
  tk = tk-1 - (1/λ)*ln(RND)
  ak = tk
next k
The code for the distributor to execute this protocol is:
for k = 1 to n
  if tk > vk-1 then
    bk = tk
    uk = tk
    vk = tk - µ*ln(RND)
  else
    bk = vk-1
    uk = vk-1
    vk = vk-1 - µ*ln(RND)
  end if
next k
The code for the collector, whose output is d, is especially simple:
for k = 1 to n
  dk = vk
next k
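Taken together, the three fragments above can be combined into one runnable program. The Python sketch below is only illustrative (the original uses BASIC-style pseudocode and MATLAB for plotting); the variable names a, b, d mirror the pseudocode, and µ is treated as the mean service time as in the example.

import math
import random

def simulate_mm1(n, arrival_rate, mean_service, seed=None):
    # Event-driven M/M/1 simulation: sequencer, distributor and collector.
    rng = random.Random(seed)
    a, b, d = [], [], []          # arrival, begin-service and departure times
    t = 0.0                       # current arrival time
    v_prev = 0.0                  # end-of-service time of the previous customer
    for _ in range(n):
        # Event sequencer: exponential inter-arrival times with rate lambda.
        t += -(1.0 / arrival_rate) * math.log(1.0 - rng.random())
        a.append(t)
        # Distributor: service starts at arrival if the server is free,
        # otherwise when the previous customer departs.
        start = t if t > v_prev else v_prev
        b.append(start)
        v_prev = start - mean_service * math.log(1.0 - rng.random())
        # Collector: the departure time equals the end-of-service time.
        d.append(v_prev)
    waiting = [bi - ai for ai, bi in zip(a, b)]   # Q(k)
    total = [di - ai for ai, di in zip(a, d)]     # T(k)
    return a, d, waiting, total

arrivals, departures, waits, totals = simulate_mm1(5, arrival_rate=3.0, mean_service=0.5, seed=1)
print(waits, totals)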
2.3.2.1 Generation of random numbers using different methods:
- An electronic device attached to the digital computer can generate true random numbers, but it is very expensive.
- The mid-square method, which is very cumbersome.
- The congruence or residue method, which is the one adopted here.
2.3.2.2 Algorithmic procedure for linear congruential generators:
This requires an initial seed Z0 to begin.
This seed and the successive terms of the sequence Zk are recursively applied to the LCG formula. The Zk values are then normalized to produce outputs Uk which are statistically uniform on the interval 0 < Uk < 1.
The algorithmic procedure is as follows:
Z0 = seed
Zk = (a·Zk-1 + b) mod m
Uk = Zk/m
where a is the multiplier, b the increment and m the modulus.
The infix operator mod is the modulus function and is defined as the remainder formed by the quotient (a·Zk-1 + b)/m.
Much research has been done to determine good values of a, b and m; example values are a = 5, b = 3, m = 16 and Z0 = 7.
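As an illustration only (not from the original paper), the procedure above can be written as a few lines of Python using the example constants just quoted:

def lcg(seed, a=5, b=3, m=16, count=10):
    # Linear congruential generator: Z_k = (a*Z_{k-1} + b) mod m, U_k = Z_k / m.
    z = seed
    uniforms = []
    for _ in range(count):
        z = (a * z + b) % m
        uniforms.append(z / m)
    return uniforms

print(lcg(7))   # with m = 16 the period is at most 16, so this is only a toy generator

In practice much larger moduli are used; the small values here only demonstrate the recursion.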
2.3.2.3 Programming languages used:
FORTRAN is widely used for simulation projects and is generally more efficient in computer time and storage requirements. However, programming in FORTRAN is more difficult and time-consuming than in other languages.
Thus MATLAB was used to generate all the plots.
Fig: Simulation results (event times versus event index k for the simulated streams; per-customer waiting time Q(k), service time S(k) and total time T(k); summary measures versus time k).
RESULTS & CONCLUSIONS:
.. In this paper computational study of queueing theory is carried
out.Queueing has a vital role to play in communication networks.
.. Starting with the importance of queueing & its parameters in
communications networks, the mathematical description of the
M/M/1 queue was introduced.
.. The computations made in this paper are
1.M/M/1 state probabilities .It can be clearly observed that state
probabilities decreases as the value of n increases.
2.Blocking probability & throughput load characteristics of M/M/1
queue.As the value of . increases the blocking probability &
throughput increase upto 1 and then remains constant.
3.Average Queue size.As the value of . increases queue size increases
exponentially.
.. M/M/1 queue simulation was made to compute
1.Arrival & departure times from the queue& server
2.The time each customer wait in queue & server& the total time spent
in the queueing system.
3.Summary measures
REFERENCES:
1. Mischa Schwartz, Telecommunication Networks: Protocols, Modeling and Analysis, Pearson Education.
2. Frank L. Severance, System Modeling and Simulation, John Wiley & Sons Ltd.
3. Ramkumar Gupta and Hira, Operational Research.
DETECTION OF FAULTS IN PCB S USING IMAGE PROCESSING
C.Swathi, T.Sravani,
Student III B.Tech (ECE), Student II B.Tech (ECE),
AITS, Rajampet. AITS, Rajampet.
[email protected] [email protected]
ABSTRACT
Morphological Image Processing is an important tool in Digital Image Processing, since it can rigorously quantify many aspects of the geometrical structure of an image in a way that agrees with human intuition and perception.
Morphological Image Processing is based on geometrically altering image structure. In the binary image setting, an image is probed by one or more structuring elements either to extract information or to filter the image, and a similar probing also occurs in the grey scale setting. Based on the four basic operations of Dilation, Erosion, Opening and Closing, one can construct a class of Morphological Image Processing tools which can be used in place of Linear Image Processing. Whereas Linear Image Processing sometimes distorts the underlying geometric form of an image, in Morphological Image Processing the information of the image is not lost.
In Morphological Image Processing the original image can be reconstructed by using the Dilation, Erosion, Opening and Closing operations a finite number of times.
The major objective of this paper is to construct such a class of finite-length Morphological Image Processing tools in a suitable mathematical structure using the Java language.
The Morphological Image Processing is implemented and successfully tested for Industrial Automation in the detection of open circuits, short circuits, holes and tracks in PCBs (Printed Circuit Boards).
INTRODUCTION
The Morphological image processing is
generally based on the analysis of a two
valued image in terms of certain
predetermined geometric shape known
as structuring element. The term
morphology refers to the branch of
biology that deals with the form and
structure of animals and plants. The
mathematical morphology is a tool for
extracting image components that are
useful in the representation and
description of region shapes.
The ultimate aim in a large number of
image processing applications is to
extract important features from image
data, from which a description,
interpretation, or understanding of the
scene can be obtained. These features
can be edges and boundaries, shape
features, spatial features, transform
features, etc.
A very well suited approach for
extracting significant features from
images is morphological (shape-based)
processing. Morphology is the study of
forms. Morphological processing refers
to certain operations where an object is
Hit or Fit with structuring elements and
thereby reduced to a more revealing
shape. These structuring elements are
shape primitives which are developed to
represent some aspect of the information
or the noise. By applying these
structuring elements to the data using
different algebraic combinations, one
performs morphological transformations
on the data.
The Morphological Image Processing
operations are applied for binary images
in
.. Industrial Automation
Printed Circuit Board Inspection
-Detection of open circuits, short
circuits, holes and tracks in PCB s.
There is rapid advancement in the field of electronics, and products are becoming more compact and reliable. To design an effective product, the size of the PCB has been reduced while the density of the components has increased. Because of this reduction it is difficult to inspect faults such as open circuits (cuts), short circuits etc.
By using the Morphological Image
Processing Operations, we can easily
inspect the faults. Image enhancement
and restoration procedures are used to
process degraded images of
unrecoverable objects or experimental
results too expensive to duplicate. In
archeology, image processing methods
have successfully restored blurred
pictures that were the only available
records of rare artifacts lost or damaged
after being photographed. Similarly
successful applications of image
processing concepts can be applied in
biology, astronomy, nuclear medicine,
defense etc.
FITTING AND HITTING
The Structuring Element is positioned at
all positions or possible locations in the
Binary Image and it is compared with
the corresponding neighborhood of
pixels.
The morphological operation resembles a binary correlation, where the operation is logical rather than arithmetic in nature [3].
Ex.: Suppose we have two 3 x 3 structuring elements:
S1 = 1 1 1        S2 = 0 1 0
     1 1 1             1 1 1
     1 1 1             0 1 0
In a given image A, B, C are the three
positions where the S1 and S2
Structuring Elements should be
positioned.
Binary Image used to test Fitting and
Hitting of Structuring Elements S1 and
S2
FIT:
The structuring element is said to FIT the image if, for each of its pixels that is set to 1, the corresponding image pixel is also 1.
For the above example, both S1 and S2 fit the image at A (remember that structuring element pixels set to 0 are ignored when testing for a fit). S2 fits the image at B, and neither S1 nor S2 fits at C.
HIT:
A structuring element is said to HIT an image if, for any of its pixels that is set to 1, the corresponding image pixel is also 1. (Here also we ignore image pixels for which the corresponding structuring element pixel is 0.)
For the above example, S1 and S2 HIT the image in neighborhood A. The same holds true at B. But at neighborhood C, only S1 HITS the image.
In this concept HIT corresponds to Union, whereas FIT corresponds to Intersection.
Furthermore it is possible to replace the set operations Intersection and Union by the Boolean operators AND and OR.
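The FIT and HIT tests can be expressed in a few lines of code. The sketch below is illustrative only and is written in Python with NumPy rather than in the Java used by the paper; it assumes 0/1 arrays and that (x, y) is the top-left corner of a neighbourhood lying fully inside the image.

import numpy as np

def fits(image, se, x, y):
    # FIT: every 1 in the structuring element lies over a 1 in the image.
    window = image[x:x + se.shape[0], y:y + se.shape[1]]
    return bool(np.all(window[se == 1] == 1))

def hits(image, se, x, y):
    # HIT: at least one 1 in the structuring element lies over a 1 in the image.
    window = image[x:x + se.shape[0], y:y + se.shape[1]]
    return bool(np.any(window[se == 1] == 1))

# The two 3 x 3 structuring elements from the example above.
S1 = np.ones((3, 3), dtype=int)
S2 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]])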
DILATION
Dilation - grow image
regions
Dilation causes objects to dilate or grow
in size. The amount and the way that
they grow depends upon the choice of
the structuring element [3]. Dilation
makes an object larger by adding pixels
around its edges.
The Dilation of an image A by a structuring element B is written as A ⊕ B. To compute the Dilation, we position B such that its origin is at pixel co-ordinates (x, y) and apply the rule
g(x, y) = 1 if B HITS A, 0 otherwise,
repeating for all pixel co-ordinates.
Dilation creates a new image showing all the locations of the structuring element origin at which the structuring element HITS the input image. It adds a layer of pixels to an object, thereby enlarging it. Pixels are added to both the inner and outer boundaries of regions, so Dilation will shrink the holes enclosed by a single region and make the gaps between different regions smaller. Dilation will also tend to fill in any small intrusions into a region's boundaries.
The results of Dilation are influenced not just by the size of the structuring element but by its shape also.
Dilation is a Morphological operation; it
can be performed on both Binary and
Grey Tone Images. It helps in extracting
the outer boundaries of the given images.
For Binary Images:
The Dilation operation is defined as follows:
D(A, B) = A ⊕ B
where A is the image and B is the structuring element, of order 3 x 3.
Many structuring element positions are required for dilating the entire image.
EROSION
Erosion - shrink image
regions
Erosion causes objects to shrink. The amount and the way that they shrink depend upon the choice of the structuring element. Erosion makes an object smaller by removing or eroding away the pixels on its edges [3].
The Erosion of an image A by a structuring element B is denoted A ⊖ B. To compute the Erosion, we position B such that its origin is at image pixel co-ordinates (x, y) and apply the rule
g(x, y) = 1 if B FITS A, 0 otherwise,
repeating for all pixel co-ordinates.
Erosion creates a new image that marks all the locations of the structuring element origin at which the structuring element FITS the input image. The Erosion operation strips away a layer of pixels from an object, shrinking it in the process. Pixels are eroded from both the inner and outer boundaries of regions, so Erosion will enlarge the holes enclosed by a single region as well as making the gaps between different regions larger. Erosion will also tend to eliminate small extrusions on a region's boundaries.
The result of Erosion depends on the structuring element size, with larger structuring elements having a more pronounced effect; the result of Erosion with a large structuring element is similar to the result obtained by iterated Erosion using a smaller structuring element of the same shape.
Erosion is a morphological operation; it can be performed on Binary and Grey images. It helps in extracting the inner boundaries of a given image.
For Binary Images:
The Erosion operation is defined as follows:
E(A, B) = A ⊖ B
where A is the image and B is the structuring element.
Many structuring element positions are required for eroding the entire image.
OPENING
Opening - structured
removal of image region boundary
pixels
It is a powerful operator, obtained by
combining Erosion and Dilation.
Opening separates the Objects . As we
know, Dilation expands an image and
Erosion shrinks it [3]. Opening generally
smoothes the contour of an image,
breaks narrow Isthmuses and eliminates
thin Protrusions [1].
The Opening of an image A by a structuring element B is denoted A ∘ B and is defined as an Erosion followed by a Dilation; it is written as [3]
A ∘ B = (A ⊖ B) ⊕ B
The Opening operation is thus obtained by performing Dilation on the Eroded image. It smoothes the contours of the image. Opening spaces out objects that are too close together, detaches objects that are touching and should not be, and enlarges holes inside objects.
Opening involves one or more Erosions followed by one Dilation.
CLOSING
Closing - structured
filling in of image region boundary
pixels
It is a powerful operator, obtained by combining Erosion and Dilation. Closing joins objects [3]. Closing also tends to smooth sections of contours but, as opposed to Opening, it generally fuses narrow breaks and long thin gulfs, eliminates small holes and fills gaps in the contour [1].
The Closing of an image A by a structuring element B is denoted A • B and is defined as a Dilation followed by an Erosion; it is written as [3]
A • B = (A ⊕ B) ⊖ B
Closing is thus obtained by performing Erosion on the Dilated image. Closing joins broken objects and fills in unwanted holes in objects.
Closing involves one or more Dilations followed by one Erosion.
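The four operations defined above are available in standard libraries. The following Python/SciPy sketch is illustrative only (the paper's own implementation is in Java with Swing); the 8 x 8 PCB array and the 3 x 3 structuring element are made-up toy data.

import numpy as np
from scipy import ndimage

def dilate(image, se):
    # g(x, y) = 1 wherever the structuring element HITS the image.
    return ndimage.binary_dilation(image, structure=se).astype(int)

def erode(image, se):
    # g(x, y) = 1 wherever the structuring element FITS the image.
    return ndimage.binary_erosion(image, structure=se).astype(int)

def opening(image, se):
    # Opening: Erosion followed by Dilation.
    return dilate(erode(image, se), se)

def closing(image, se):
    # Closing: Dilation followed by Erosion.
    return erode(dilate(image, se), se)

se = np.ones((3, 3), dtype=int)
pcb = np.zeros((8, 8), dtype=int)
pcb[3, 1:7] = 1                 # a thin copper track
pcb[3, 4] = 0                   # with a one-pixel cut (open circuit)
print(closing(pcb, se) - pcb)   # the nonzero pixel marks the bridged cut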
RESULTS:
PRINTED CIRCUIT BOARD
INSPECTION:
1. DETECTION OF OPEN & SHORT
CIRCUITS:
The Morphological Image Processing operations are used to detect open and short circuits. Fig (1.a) is the original image, where it is difficult to trace the track cuts (open circuits) and short circuits. By using the Opening operation, the short circuits can be highlighted, as can be seen in fig (1.b). By using the Closing operation, the track cuts can be highlighted, as can be seen in fig (1.c).
1. a. ORIGINAL BINARY IMAGE
1. b. DETECTION OF OPEN CIRCUITS
1. c. DETECTION OF SHORT CIRCUITS
2. DETECTION OF HOLES AND TRACKS
Using the Dilation operation a finite number of times on the original image, fig (2.a), we can highlight the black template of holes in the PCB, which can be seen in fig (2.c). Likewise, by using the Erosion operation a finite number of times on the original image fig (2.a), we can highlight the tracks, by hiding the holes, which can be seen in fig (2.b). The PCB inspection can thus be performed successfully on binary images.
2. a. ORIGINAL BINARY IMAGE
2. b. DETECTION OF TRACKS
2. c. DETECTION OF HOLES
CONCLUSION
This report represents the practical
operation of Morphological Image
Processing and it successfully performed
the Fundamental and Compound
operations of Morphological Image
processing on Binary images in,
Detection of open circuits, short circuits, holes and tracks in PCBs.
This concept has been implemented in Java. The Java platform provides a convenient representation for images that makes the implementation of image processing software relatively straightforward.
The binary image operations are implemented using Swing and have a GUI for performing the Dilation, Erosion, Opening and Closing operations.
FUTURE SCOPE:
The Morphological Image Processing
can be further applied to a wide
spectrum of problems including:
.. Medical image analysis: Tumor
detection, measurement of size
and shape of internal organs,
Regurgitation, etc.
.. Robotics: Recognition and
interpretation of objects in a
scene, motion control and
execution through visual
feedback
.. Radar imaging: Target detection
and identification.
This work can be further extended to colour images and the 24-bit true-colour concept; a special feature such as automatic selection of the structuring element for object classification through morphology is still a challenge for this technique.
REFERENCES
1. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Addison-Wesley (an imprint of Pearson Education), 1st Edition.
2. Nick Efford, Digital Image Processing, Addison-Wesley (an imprint of Pearson Education), 1st Edition.
3. Anil K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall of India, 2001.
4. http://www.ks.informatik.uni-kiel.de/~chp/conf/Tutorials/MIPTutorial/miptutorial.html
5. http://kttech.com/edge.html
WIRELESS COMMUNICATION
TOPIC: IRIDIUM SATELLITE SYSTEM (ISS)
--THE ULTIMATE WIRELESS NETWORK
BY
S.KARIMULLA V.NOORMOHAMMAD
III-B.TECH ECE III-B.TECH ECE
06691A0428 06691A0460
E-MAIL:[email protected]
[email protected]
PH.NO:9885679908, 9293122081.
THE IRIDIUM SATELLITE SYSTEM (ISS)
ABSTRACT: -
The highlight of the paper lies in the applications section, where the forthcoming application of the ISS as an alert system for earthquakes, tsunamis and similar natural disasters, with which casualties can be reduced drastically, is described.
Iridium is a satellite-based wireless personal communications network designed to permit a wide range of mobile telephone services including voice, data, networking, facsimile, geolocation and paging. It allows worldwide voice and data communications using handheld satellite phones. The Iridium network is unique in that it covers the whole earth, including the poles, oceans and airways. The Iridium project, which even sounds like something out of Star Wars, has as its main objective to allow handheld mobiles to be used from anywhere on the planet, with the call being routed directly from handset to handset via one or several of the satellites. The Iridium mobile telephone system is undoubtedly the Cadillac of mobile telephone systems. With complete coverage of the Earth's oceans, airways and polar regions, Iridium delivers essential services to users who need communications access to and from remote areas where no other form of communication is available. With this system a caller can call any person, anywhere, at any time in the world.
This paper unleashes the system facts such as the network coverage, the satellite constellation of the ISS and its operation, along with its advantages and applications. Last but not least, the innovative application of the ISS as a tsunami and earthquake alert system is explained in brief.
INTRODUCTION: -
The fundamental purpose of an electronic communications system is to transfer
information from one place to another. Thus, electronic communications can be su
mmarized
as the transmission, reception, and processing of information between two or mor
e locations
using electronic circuits. The Iridium satellite constellation is a system of 66
active
communication satellites with spares in orbit and on the ground. It allows world
wide voice
and data communications using handheld satellite phones. The Iridium network is
unique in
that it covers the whole earth, including the poles, oceans and airways.
SATELLITE COMMUNICATION: -
A Satellite communication system consists of one or more satellite
space vehicles (transponder), a ground based station to control the operation, a
nd a user
network of earth stations that provides the interface facilities for transmissio
n and reception of
terrestrial communications traffic through the satellite system. A satellite communication system with a single-channel satellite transponder can communicate with only one transmitter and receiver, i.e. each earth station can communicate with only one other earth station. To overcome this disadvantage, multiple-channel satellite transponders are introduced. For a multi-channel system, multiple carriers are used to handle the channels, and so a multiple-accessing format has to be established.
HISTORY BEHIND ITS NAME:
The system is called iridium after the element on the periodic table with the at
omic
number 77, because Iridium's original design called for 77 satellites. The final d
esign,
however, requires
only 66 satellites.
The satellites are frequently visible in the night sky as satellite flares. Iridium communications service was launched on November 1, 1998. The service was restarted in 2001 by the newly founded Iridium Satellite LLC, which was owned by a group of private investors. Although the satellites and other assets and technology behind Iridium were estimated to have cost on the order of US$6 billion, the investors bought the firm for about US$25 million.
PRINCIPLE:
The seismic energy waves released by an earthquake travel much slower than radio waves. The system simply monitors the earth's vibrations and generates an alert signal when the level of vibration crosses a threshold.
OPERATION: -
The 66-vehicle LEO (low earth orbit) inter-linked satellite constellation can track the location of a subscriber's telephone handset, determine the best routing through a network of ground-based gateways and inter-satellite links, establish the best path for the telephone call, initiate all the necessary connections, and terminate the call upon completion. The unique feature of the Iridium satellite system is its cross-links. With these, two-way global communication is possible even when the destination subscriber's location is unknown to the caller. Each satellite is cross-linked to four other satellites (two satellites in the same orbital plane and two in an adjacent plane) to relay digital information around the globe. The cross-link antennas point toward the closest spacecraft orbiting in the same plane and in the two adjacent co-rotating planes.
IRIDIUM SYSTEM STRUCTURE:-
FREQUENCY PLAN AND MODULATION: -
The frequency bands are as follows:
L-band subscriber-to-satellite voice links: 1.616 GHz to 1.6265 GHz
Ka-band gateway downlinks: 19.4 GHz to 19.6 GHz
Ka-band inter-satellite cross-links: 23.18 GHz to 23.38 GHz
COMPARISION BETWEEN IRIDIUM & TRADITIONAL SATELLITE
SYSTEM: -
- Using satellite cross-links is the unique key to the Iridium system and the primary differentiation between Iridium and the traditional satellite bent-pipe system, where all transmissions follow a path from earth to satellite to earth.
- Iridium is the first mobile satellite system to incorporate sophisticated onboard digital processing on each satellite.
- With this system the subscriber will never hear a message saying OUT OF COVERAGE AREA.
ADVANTAGES: -
- Satellite cross-links: global coverage
- Digital network: consistent quality
- Signal strength: reliability
- GSM platform based: total communication system
- Global paging: robust features and functionality
DISADVANTAGES: -
.. High risk associated with designing, building, and launching satellites.
.. High cost for the terrestrial-based networking and interface infrastructure.
.. Low power, dual mode transceivers are more cumbersome and expensive
.. It requires aggressive voice compression &decompression algorithms
APPLICATIONS: -
.. Complementary and back up telephone service in fields of:
.. 1.Retail & finance 2.Manufacturing 3.Military 4.Government
5.Transportation
.. Onshore, offshore and Sub Sea communication.
COMMUNICATING THE DANGER:
This GSM-based ISS alert system monitors the earth vibration using a strong
motion accelerometer at the earthquake-prone area and broadcasts an alert messag
e to towns
and villages through the cell phone network existing throughout the state. Here
wireless
mobile phones (ISS phones) are used as transmitter and receivers.
The communication system for earthquake alert comprises an earthquake
Sensor and interface unit, decision system and alert-dissemination network.
If we consider these points, giving an earthquake alert before the destructive shaking reaches a populated area can minimize casualties.
CONCLUSION: -
Since the satellites have already been launched, it is important that this system is applied as widely as possible. Innovative applications like seismic alerts for earthquakes and tsunamis should be brought out, as they serve the real purpose of an engineering application. The government should also play a major role in bringing these services closer to the ordinary man and in providing its citizens with the best possible communication system in the world.
REFERENCES:
1. Wayne Tomasi, Electronic Communication Systems, Pearson Education.
2. Sheldon, Satellite Telecommunication, TMH, 2000.
3. EFY Magazine, December 2004 edition.
WEBSITES:
1. www.gmpcs-us.com
2. www.iridium.com
SUBMITTED BY
G.VANDANA J.SARANYA
([email protected]) ( [email protected] )
III B.TECH
ELECTRONICS AND COMMUNICATION ENGINEERING
SRI VENKATESWARA UNIVERSITY COLLEGE OF
ENGINEERING,
TIRUPATI.
ABSTRACT
Any sufficiently advanced technology is indistinguishable from magic -
Arthur C. Clarke.
If you just use your mobile phone for calls and text messages, you might wonder where the magic we're referring to lies. But if you've used it for anything beyond that, you'll have realized that the mobile phone actually makes you more of a node on the Connected Grid, as it were, than the internet-enabled desktop computer ever did.
The power of connectivity cannot be overstated. And the possibilities arising from always-there connectivity make even the simplest of mobile phones a thing of wonder.
This paper presents a brief overview of the evolution of mobile technologies (1G, 2G, 2.5G, 3G) and gradually moves on to a discussion of the various terminologies used in present-day technology, i.e.,
Contents:
1. What is 3G ?
2. 3G standards
- 1G standards
- 2G standards
- 2.5G standards
- 3G standards
3. 3G Technology.
- Simplex vs Duplex
- TDD vs FDD
- Asymmetric vs
Symmetric
transmission.
4. Technological challenges.
5. Conclusion
6. References
WHAT IS 3G?
3G stands for Third
Generation of Mobile phones.
Basically, a 3G device will provide a
huge range of new functionalities to
your mobile. Up until now, your
mobile phone has mainly been used
only to carry voice messages, with
maybe a bit of SMS text as well. 3G
will allow simultaneous transfer of
speech, data, text, pictures, audio and
video. So now it s really inappropriate
to talk about a 3G mobile phone .
Instead we talk about a 3G device .
A 3G device will blur traditional
boundaries of technology-computing,
communications and consumer
devices. 3G devices will be a PC, a
phone and a PDA all in one. It would
not be too much of an exaggeration to
say that people will live their lives
around their 3G devices. You will have
the world at your fingertips: anything,
anytime, anywhere.
3G will provide:
.. High-speed, mobile access to the
internet.
.. Your choice of entertainment on
demand. This will include movies and
music.
.. Video-conferencing.
.. Mobile shopping. Browse available
items and pay using electronic cash.
.. Travel information: congested
roads, flight departures. And if you get
lost, find your current locations.
.. Features of phone.
3G Standards
The dream of 3G is to unify the world's mobile computing devices through a single, worldwide radio transmission standard. Imagine
being able to go anywhere in the world
secure in the knowledge that your
mobile phone is compatible with the
local system, a scenario known as
'Global Roaming'.
Let's start by stepping back to 1G:
1G-Standards:
The primary tools used in 1G
were the concept of cellular networks
and Analog transmission using
Frequency division multiple access
(FDMA) to separate calls from
different users.
The FDMA technique assigns different frequencies to different calls to avoid conversations interfering with each other: 'Frequency Division' because calls are separated by frequency, and 'Multiple Access' because many users share the band at the same time, each on its own frequency.
The cellular operators were limited to a particular range of frequencies; there were only a few frequencies that could be allotted to calls before the entire frequency band was full.
Another disadvantage of analog
systems was difficulty in transmitting
data over them. Partially digitizing the
system made this a little less difficult,
but it was still less efficient than the
newer fully digital systems, which
were to follow.
2G-Standards:
The second generation of
cellular technology was marked by
shift from Analog to digital systems.
Shifting to digital networks has
many advantages. Firstly, transmission
in the digital format aided clarity, since
the digital signal was less likely to be
affected by electrical noise. Secondly,
transmitting data over digital network
is much easier; data could also be
compressed, saving a lot of time. And
finally, with the development of new
multiplexing techniques, the capacity
of the cellular networks could be
increased manifold.
The technologies in a 2G
cellular network are based on one of
two concepts:
Time Division Multiple
Access (TDMA):
Just like FDMA separated calls by
assigning them different frequencies,
TDMA separated calls by assigning
them different time slots in the same
frequency.
Code Division Multiple
Access (CDMA):
In the CDMA system, calls were
separated by a unique code assigned to
each of them. And all calls were
handled with in the same frequency
band.
Global System for Mobile
communications (GSM):
GSM is essentially a standard or a set
of recommendations to set up TDMAbased
mobile telephone networks. The
advantage of setting a standard was
that callers who subscribed to a GSM
network would be able to roam
outside their own home networks and
into other GSM networks worldwide.
GSM introduced the concept of the
Subscriber Identity Module (SIM)
card, which stored your subscription information, the operator's information, and had some memory space to store phone numbers and text messages. This means that switching to a new handset would be quite simple.
2.5G-Standards:
The transition from 2G to 3G is
extremely challenging (requiring the
development of radically new
transmission technologies), and highly
expensive. For both of these reasons it
makes sense to move to 3G via
intermediate 2.5G.
2.5G-radio transmission
technology is radically different from
2G technologies because it uses packet
switching.
General Packet Radio Service
(GPRS):
GPRS is a data transfer method
that integrates neatly with GSM
networks. It employs unused time slots
in the TDMA channels to transfer data.
This is a lot faster than GSM circuit
switched architecture.
Circuit Switching used in GSM:
In a typical network, there are a
number of different paths that could be
employed to establish a link between
two points. A network controller
selects the best path, and once this path
is established, communication can begin.
The transmitter sends packets of data;
they travel the network along this path
to reach the receiver. This is called
circuit switching.
Fig: Data transfer using Circuit
switching-all packets follow the same
path to their destination
But there's a catch: this path remains reserved even if no data is being sent, an unfortunate waste of a perfectly good connection.
Packet switching used in
GPRS:
In this network, packets may be sent along any of the possible paths to the receiver. Each packet is tagged with the name of its destination and its place in the sequence of packets, and is sent off on its journey. The receiver receives these packets and puts them in the right order. This technique is called packet switching. It is more efficient because
it uses network resources only when
data needs to be sent, and can even
avoid crowded portions of the network
to get data across faster.
Fig: Data transfer using packet
switching-the green packets use
different possible paths to get to their
destination. At the end, they are
arranged in the proper sequence and
then used.
Older GSM networks used the
circuit switching approach to transfer
data. This not only put a limit on the
speed possible, it also did not exploit
the bandwidth on the network. GPRS
uses packet switching: it throws data
into the network, filling up any unused
time-slots it finds. By exploiting the
network thus, GPRS achieves speeds
that were not thought of in the olden
days of circuit switching.
As the number of calls
increases, more and more TDMA
channels get allotted to voice calls,
leaving less free. This, alas, is where
GPRS falters. It becomes slower as
traffic in the cell increases. GPRS
networks also do not allow for storing
messages on the network. Unlike SMS,
where the message can be stored and
sent later if the network is busy,
messages sent via GPRS are lost
forever if they don t immediately reach
the intended recipient.
GPRS, from the looks of it,
looked to be the fastest way to transfer
data on a GSM network, but it had one
last step to take.
Enhanced Data rates for GSM
Evolution (EDGE):
Put GPRS into high gear and
you have EDGE. The big brother of
GPRS, EDGE can be deployed over
existing GPRS infrastructures.
However, it requires better signal
quality than what already exists on the
world's GSM networks.
It uses a shiny new modulation
technique to be able to pack in three
times as much data into a packet as
GPRS, achieving transfer rates of
around 384 kbps for the common user -
just enough to be called a 3G
technology.
So there you have it. EDGE is
the fastest and the last technology that will grace the GSM network. The future, as all have realized, is CDMA.
3G- Standards:
CDMA is generally considered
the future of cellular technology.
CDMA based networks can carry a
large number of calls, are faster, more
secure, and larger areas can be covered
with fewer base stations. No wonder, then, that the two technologies that might dominate the 3G world are based on the CDMA principle.
Existing GSM networks will
proceed to use a technique called w-
CDMA (Wide band CDMA), which
uses CDMA, but will allow for data
transfer rates of about 2Mbps if you sit
in one place.
Wide band CDMA (W-CDMA):
The CDMA in W-CDMA
refers to the multiplexing technique.
The W-CDMA standard uses CDMA
to achieve the 144 kbps-2Mbps data
rates that define a 3G network.
The concept of full duplex
channel allowed users to transmit and
receive data simultaneously. There are
two ways of duplexing a channel-
Time Division Duplexing (TDD),
which uses TDMA to separate the
incoming and outgoing data, and
Frequency Division Duplexing (FDD),
which uses FDMA to separate them.
CDMA networks thus far had used 1.25 MHz of bandwidth. W-CDMA, however, uses FDD with two 5 MHz frequency bands to achieve much higher capacity and speeds for data transmission.
Universal Mobile Telephone System
(UMTS):
Another technology based on
W-CDMA is the Universal Mobile
Telephone System (UMTS). It
integrated with existing GSM
infrastructure and provides data speeds
of 1.99Mbps. UMTS networks will use
USIM (Universal SIM) cards, which
are advanced versions of regular SIM
cards we use today. They are more
secure and provide more memory than
existing SIM cards.
UMTS networks will have 'soft hand-off', which we have so far seen only in CDMA networks. Hand-offs will also be possible between UMTS and other 3G technologies, between FDD and TDD systems, and between UMTS and GSM.
Such advances, however, come at the cost of a very challenging and very expensive implementation. We can only wait with bated breath.
Having now looked at what will take GSM networks into the third generation, we now move on to CDMA2000.
CDMA2000:
Converting CDMA networks
into 3G is going to be easier and
cheaper than for GSM; they already
have the right technology, and the
existing infrastructure can be used for
the first few evolutions.
The CDMA2000 specification
was developed by the Third Generation
Partnership Project 2 (3GPP2). It was
implemented on the existing
CDMAone networks, bringing data
rates up to 140kbps.
The evolution of the
CDMA2000 network is called 1xEV.
This transition will take place in two
phases 1xEV-DO (Evolution, Data
Optimized or Data only) and 1xEVDV
(Evolution, Data and Voice). Both
will use the current CDMA band of
1.25 MHz, but with separate channels
for voice and data. EV-DO has already
begun commercial deployment while
EV-DV still waits in line. While EVDO
will offer data rates up to 2.4
Mbps, EV-DV is expected to take it to
4.8 Mbps.
The Homo sapiens to CDMA2000 1xEV's ape will be CDMA2000 3x. It hasn't started development yet but, when ready, it will use a pair of 3.75 MHz channels (each of which is itself made up of three 1.25 MHz channels) to achieve even higher data rates.
3G Technology
Here is a simple introduction to
some aspects of 3G radio transmission
technologies (RTTs).
Simplex Vs Duplex:
When people use walkie-
Talkie radios to communicate, only
one person can talk at a time (the
person doing the talking has to press a
button). This is because walkie-talkie
radios only use one communication
frequency- a form of communication
known as simplex.
Of course, this is not how
mobile phones work. Mobile phones
allow simultaneous two-way transfer
of data a situation known as Duplex
(if more than two data streams can be
transmitted, it is called Multiplex).
The communication channel from the
base station to the mobile device is
called the downlink, and the
communication from the mobile device
back to the base station is called the
uplink. How can duplex
communication be achieved? Well,
there are two possible methods, which
we will now consider: TDD and FDD.
TDD Vs FDD:
Wireless duplexing has been
traditionally implemented by
dedicating two separate frequency
bands: one band for the uplink and one
band for the downlink (this
arrangement of frequency bands is
called paired spectrum). This
technique is called Frequency Division
Duplex, or FDD. A Guard band
which provides isolation of the signals
separates the two bands:
Duplex communications can
also be achieved in time rather than by
frequency. In this approach, the uplink
and the downlink operate on the same
frequency, but they are switched very
rapidly: one moment the channel is
sending the uplink signal; the next
moment the channel is sending the
downlink signal. Because this
switching is performed very rapidly, it
does appear that one channel is acting
as both an uplink and a downlink at the same time. This is called Time Division Duplex, or TDD. TDD requires a guard time, instead of a guard band, between the transmit and receive streams.
Symmetric Transmission Vs
Asymmetric Transmission:
Data transmission is symmetric
if the data in the downlink and the data
in the uplink are transmitted at the
same data rate. This will probably be
the case for voice transmission the
same amount of data is sent both ways.
However, for Internet connections or
broadcast data (e.g., streaming video),
it is likely that more data will be sent
from the server to the mobile device
(the downlink).
FDD transmission is not so well suited for asymmetric applications, as it uses equal frequency bands for the uplink and the downlink (a waste of valuable spectrum when the traffic is mostly one-way).
On the other hand, TDD does not have
this fixed structure, and its flexible
bandwidth allocation is well suited to
asymmetric allocations, e.g., the
Internet. For example, TDD can be
configured to provide 384kbps for the
downlink (the direction of the major
data transfer), and 64 kbps for the
uplink (where the traffic largely
comprises requests for information and
acknowledgements).
CDMA technology is used in
this third generation. This is explained
above.
The showdown: GSM vs. CDMA:
In technology circles, it has long been known that CDMA beats the pants off GSM. It is, to state it in no
uncertain terms, the technology of the
future. Even third generation GSM
networks will use CDMA-based
technologies.
WHY?
.. Because CDMA is faster.
.. Because CDMA is more secure.
.. Because connections on a CDMA
network will never get dropped
when moving from cell to cell.
.. Because CDMA base-stations
cover a large area.
The technological challenges:
This is all highly ambitious
stuff, and it raises a number of major
technical challenges:
Firstly, the rate of data, which a 3G
device will be able to receive and
transmit, will be far higher than
existing mobile phones. For
example, in order to watch movies
(streaming video) on a 3G device,
the data rate requirement will be as
much as 100 times greater than that
currently achievable on existing
phones. More than anything, a 3G
device is going to have to be
FAST. In fact, the International
Telecommunications Union (ITU)
defines a 3G device solely in terms
of its transmission speed (if your
phone can transmit at 144kbps, it's
a 3G phone).
A very attractive feature of 3G is
that you will be able to use your
device anywhere in the world. This
global roaming ability will
require a major effort on the
development of unified, worldwide
standard(s).
Security of 3G devices is going to
be vitally important. A 3G device
will effectively be a wallet for
your electronic cash, and a safety
deposit box containing your
personal e-mail. 3G devices will
clearly be an attractive target for
thieves. A 3G device must be
rendered useless as soon as it is
lost. This could be achieved by
using a PIN or a removable smart
card, Or, in 21st century fashion,
voice, fingerprint, or iris
recognition (Biometrics). A stolen
3G device could have a GPS
distress beacon, which informs
police of its position.
A small, portable device used for
so many functions for extended
periods will require advances in
rechargeable battery technology.
CONCLUSION:
3G could be thought of as rather a sledgehammer approach to providing broadband wireless services: build a forest of new antennas and spend a fortune on new radio spectrum.
The motto of 3G is to unify the world's mobile computing devices through a single, worldwide radio transmission standard. Imagine being able to go anywhere in the world secure in the knowledge that your mobile phone is compatible with the local system, a scenario known as 'Global Roaming'.
This technology would finally concretize the dream of a fully connected mobile world. The fact of the matter is that there are way too many technologies and newer services and networks being worked upon; none of us really knows what to expect even three months from now. But by staying informed, we can make educated guesses.
REFERENCES:
1. 3G Newsroom: What is 3G?
2. Research Group Mobile Communications: UMTS.
3. Digit magazine, January 2006: Fast Track to Mobile Telephony.
DSP TO BOOST SCOPE PERFORMANCE
BY
R.SARIKA G.SOWJANYA
3/4B.TECH, EEE 3/4B.TECH, EEE
BAPATLA ENGINEERING BAPATLA ENGINEERING
COLLEGE COLLEGE
EMAIL:[email protected] EMAIL:[email protected]
ABSTRACT
This paper deals with the continuous development of digital signal processing in the field of test and measurement, and with the continuous development of filters to meet the challenges of bandwidth and accuracy. The real-time oscilloscope has been the mainstay of electronic design and R&D applications.
DSP is a well-established discipline. It has become the enabling tool to extend the oscilloscope bandwidth beyond the current analogue limits and to improve the overall measurement accuracy.
This extension in oscilloscope bandwidth is made a reality with the help of filtering techniques. In addition, the various developments in digital filters in the field of digital signal processing are discussed.
INTRODUCTION
Digital signal processing is the processing of signals on a digital computer; the operations performed on a signal consist of a number of mathematical operations as specified by a software program. In a broader sense, the digital system can be implemented as a combination of digital hardware and software, each of which performs its own set of specified operations.
This rapid development is a result of the significant advances in digital computer technology and integrated circuit fabrication. The rapid development in integrated circuit technology, such as VLSI electronic circuits, has spurred the development of powerful, smaller and cheaper digital computers and special-purpose digital hardware. These inexpensive and relatively fast digital circuits have made it possible to construct highly sophisticated digital systems capable of performing complex DSP functions and tasks, which are usually too difficult and/or too expensive to be performed by analog signal processing systems.
As most signals are analog, they have to be converted into digital signals before processing can be carried out in digital form. Processing of an input signal is nothing but performing specified operations on it according to the requirement. After processing, signals can be reconverted to analog form if desired.
The filter is the most vital system in DSP technology. Digital filters are classified by their use and their implementation: they are called time domain or frequency domain filters based on their use, and finite impulse response (FIR) or infinite impulse response (IIR) filters based on their implementation. Nowadays wavelet transforms also play a tremendous role in DSP technology.
Most of the signals encountered in
science & engineering are analog in
nature. That is, the signals are functions of
a continuous variable, such as time or
space, and usually take on values in a
continuous range. Such signals may be
processed directly by appropriate analog
systems (such as filters or frequency
analyzers) or frequency multipliers for the
purpose of changing their characteristics or
extracting some desired information. In
such a case we say that the signal has been
processed directly in its analog form. Here
both the input signal & the o/p signal are
in analog form.
Fig: Analog signal processing (analog I/P signal, analog signal processor, analog O/P signal).
To perform the processing digitally, there
is a need for an interface between analog
signal and the digital processor. This
interface is called A/D converter. The o/p
of the A/D converter is a digital signal that
is appropriate as an input to the digital
processor.
Fig: Digital processing of an analog signal (analog I/P signal, A/D converter, digital signal processor, D/A converter, analog or digital O/P signal).
CLASSIFICATION OF DIGITAL
FILTERS:
Digital filters are classified by their use
and implementation. These are called time
domain & frequency domain based on
their use, and FIR & IIR by their
implementation.
Classification of digital filters by their use and by their implementation:
- Time domain (by use): moving average (FIR), single pole (IIR).
- Frequency domain (by use): windowed-sinc (FIR), Chebyshev (IIR).
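For illustration only (not from the original paper), the two time-domain filter types named in the classification above can be written as a few lines of Python; the tap count and the smoothing constant are arbitrary example values.

import numpy as np

def moving_average(x, taps=5):
    # Time-domain FIR example: an N-tap moving average.
    kernel = np.ones(taps) / taps
    return np.convolve(x, kernel, mode="same")

def single_pole(x, alpha=0.2):
    # Time-domain IIR example: y[n] = alpha*x[n] + (1 - alpha)*y[n-1].
    y = np.zeros(len(x))
    for n, sample in enumerate(x):
        y[n] = alpha * sample + (1 - alpha) * (y[n - 1] if n else 0.0)
    return y

noisy = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * np.random.randn(200)
smoothed_fir = moving_average(noisy)
smoothed_iir = single_pole(noisy)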
VARIOUS DEVELOPMENTS IN
DIGITAL FILTER IN THE FIELD OF
DSP
The filter is the most vital system in digital signal processing technology. A thorough treatment of multirate digital filters and filter banks, including quadrature mirror filters, was given by Vaidyanathan in 1990. In the case of IIR filters, a new algorithm for design by approximating specified magnitude and phase responses in the frequency domain was proposed by Mathias C. Lang in 2000. The proposed
algorithm minimizes the mean-square approximation error subject to a constraint on the pole radii; consequently, stability can be guaranteed for such filters. Sometimes it is difficult to estimate the frequencies of multiple sinusoids buried in additive noise in a desired signal. For processing such signals, spectral estimation techniques based on the DFT are used; such processing is termed off-line processing, but such methods are costly. On-line processing of such signals is carried out using the adaptive notch filtering technique. For direct frequency estimation, a new adaptive algorithm was developed for the constrained pole-zero notch filter by Gang Li in 1997.
An adaptive algorithm has also been constructed for solving the stochastic envelope-constrained filtering problem. Sometimes input signals get corrupted by additive random noise; therefore, envelope-constrained filters are designed to minimize the noise enhancement while the output of the noiseless filter lies within an output pulse-shape envelope. This formulation has an advantage over the least-mean-square algorithm, which is the conventional approach to filter design.
Echo cancellation is a specific field where the application of this filter is of immense importance. Transmission of a message over a band-limited or dispersive channel leads to distortion of the message. Kalman filtering based on channel equalization is generally used in this context, while Wiener filters are generally used for stationary random processes. An analytical technique to design zero-phase FIR digital passband filters, with evaluation of errors and high accuracy using a fast non-iterative algorithm, was recently suggested by K. Nowzynski in 2000. This is a further improvement over the optimization of FIR filters using the Remez algorithm.
APPLICATIONS:
Digital filters are widely used to get better performance, for example in amateur radio and in high-fidelity music reproduction. Adaptive filters remove fading in telecommunications.
Interpolation filters are used to increase the sampling rate, while decimation filters are used to decrease the sampling rate.
DSP TO BOOST SCOPE
PERFORMANCE:
The real time oscilloscope has been the
mainstay of electronic design and R&D
applications for decades. Oscilloscope
performance has always risen to the
challenges of bandwidth and accuracy.
Ideally, a measuring instrument's bandwidth should exceed that of the target device being observed. Yet the basic metric of an oscilloscope's performance, the analog bandwidth, is bound by the
same technologies as that of, say, a digital network switching element. Both platforms use the fastest available semiconductor devices. Both rely on custom ICs. Given these realities, how can oscilloscope performance make the leap to the next level? How can it support next-generation technical advances?
The answer lies in digital signal processing. It turns out that the raw analogue bandwidth can be extended and, in fact, enhanced using DSP. It has
become the enabling tool to extend the
oscilloscope bandwidth beyond the current
analogue limits and to improve the overall
measurement accuracy. The top tier of
today's oscilloscopes includes a host of models offering multi-gigahertz bandwidth.
In the simplest terms, DSP creates a filtering function that counteracts roll-off at the top end of the specified frequency range. Fig. 1 shows a pair of frequency response curves for a digital storage oscilloscope (DSO) with 4 GHz true analogue bandwidth. The dashed line defines a textbook-perfect bandwidth envelope, while the other line approximates a real-world oscilloscope's frequency response curve. Wherever this line departs from the ideal envelope, the deviation becomes part of the measurement. To obtain the best possible signal fidelity, it is essential to keep this deviation to a minimum. Now consider Fig. 2. This is the DSP-extended frequency response curve for the same oscilloscope, now offered as a 5 GHz oscilloscope. The 3 dB point is indeed at 5 GHz.
What happens beyond the 4GHz boundary
depends largely on the quality of the DSP
implementation.
DSP frequency extensions are a
form of filtering. It takes a very astute filter
design to create a usable bandwidth
extension while minimizing magnitude
aberrations at the extremes and elsewhere
in the range, as well as controlling the
phase shift and distortion.
FIR and DSP :
Today's leading high-bandwidth
oscilloscopes employ a finite impulse
response (FIR) filter scheme. Unlike
IIR filters, FIR filters are guaranteed to be
stable and can deliver perfectly linear
phase response. Moreover, the FIR approach is the most
suitable for applying equalization of phase
and magnitude as needed over almost the
entire bandwidth of an oscilloscope
channel. The FIR filter is tuned for its
optimum step response. Its exact transfer
function is proprietary to each manufacturer,
but is often based on a Gaussian algorithm.
Using the FIR filter approach requires a
rigorous calibration process
during manufacturing. Each oscilloscope
channel and attenuator setting that will
receive filtering must be calibrated. FIR
filter coefficients, derived from the measured
response of the oscilloscope channel, are
associated with each supported
attenuator setting and channel. These
coefficients are mathematically convolved
with the acquisition data when the
oscilloscope is running. The result, known
as channel matching, is a very closely
matched phase and magnitude response
across all channels.
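A minimal sketch of the convolution step described above; the coefficients are an arbitrary placeholder rather than a real calibration set, and the acquisition record is simulated.

```python
import numpy as np

# Hypothetical calibration coefficients for one channel/attenuator setting
# (in practice these come from the factory calibration described above).
fir_coeffs = np.array([0.02, -0.05, 0.11, 0.85, 0.11, -0.05, 0.02])

# Simulated acquisition record: a fast edge plus a little noise.
samples = np.concatenate([np.zeros(100), np.ones(100)]) + 0.01 * np.random.randn(200)

# Channel matching: convolve the stored coefficients with the acquired data.
corrected = np.convolve(samples, fir_coeffs, mode="same")
```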
ADDITIONAL ADVANTAGES OF
USING DSP:
Frequency extension is just the beginning of
what a well-designed DSP filter can do for an
oscilloscope. Other benefits of DSP are:
i) It can enable accurate comparison of
signals across multiple channels.
Because each channel is specifically
calibrated with its own permanent
filter coefficients at the factory,
there is a close match in phase and
magnitude response between
channels.
ii) It can improve rise time sensitivity
of the oscilloscope as well as the
accuracy of the rise time
measurements
iii) Because of its exceptional magnitude and
phase linearity, DSP filtering can
support more accurate frequency-domain
measurements when using
the oscilloscope's spectral
acquisition features.
DSP filtering can deliver sharper eye
diagrams. It removes noise, jitter and
aliasing and reduces the amount of
overshoot.
OTHER SIDE OF DSP
IMPLEMENTATION:
We discussed what a well-designed DSP filter can do
for an oscilloscope. It can be shown that
magnitude consistency suffers if the DSP
implementation does not take variables such
as sample rate into sufficient
consideration.
DSP can affect the oscilloscope's equivalent-time
modes, attenuating and distorting eye
diagrams when acquired with high ET
sample rates. Another potential side effect
is an inconsistent amplitude response that
varies with the trigger source selection. A
sophisticated DSP implementation can
avoid all these shortcomings.
The industry's ever-escalating
bandwidth needs will mandate continuing
evolution in oscilloscopes. It is safe to
assume that increases in analogue bandwidth
and DSP-enhanced performance will
continue to go hand in hand. One cannot
advance without the other. The instrument's
innate analogue bandwidth is the platform
upon which the DSP frequency extensions
must stand. This base bandwidth depends
on good analogue engineering in the areas
of probing, vertical input amplification and
analogue-to-digital conversion.
CONCLUSION
To conclude, Digital Signal Processing has
brought us to a new age in the technical
field. DSP boosts the scope's performance,
and it will grow as the underlying technologies
permit. Oscilloscope users will become
more familiar with what DSP can and
cannot do for them, and will demand more
than just bandwidth. Oscilloscope
innovators will focus on the across-the-board
performance benefits that DSP can
deliver and will continue to push the
bandwidth boundaries to match user needs.
Its contribution, especially in the fields
of communication systems and aerospace,
has brought a technological revolution.
SCOPE FOR THE DEVELOPMENT:
Areas like filter banks, wavelet
transforms, adaptive filtering and the discrete
chirp Fourier transform have an enormous
scope for research due to their relevance in a
number of applications such as spectral analysis, control and system identification,
channel equalisation, echo cancellation, signal reconstruction and communications.
Developing new algorithms, improving the performance of systems and reducing the
complexity of DSP-based systems are the critical issues in digital signal processing.
MADANAPALLI INSTITUTE OF TECHNOLOGY AND SCIENCES
A PAPER ON
DIGITAL SIGNAL PROCESSING HOW NIGHT VISION WORKS
BY
B.SIVA PRASAD K.KIRAN KUMAR
06691A0494 06691A0434
III B.TECH, ECE III B.TECH, ECE
[email protected] [email protected]
DIGITAL SIGNAL PROCESSING HOW NIGHT VISION WORKS
Abstract:
Night vision scopes and binoculars are electro-optical
devices that intensify (or amplify) existing
light instead of relying on a light source of their
own. The devices are sensitive to a broad spectrum
of light, from visible through infrared. An accessory
illuminator can increase the light available at the
infrared end of the spectrum by casting a beam of
light that is not visible to the human eye. Our paper
is an image processing application for night vision
technology, which is often used by the military
and law enforcement agencies but is also available to
civilian users. In our work, night vision goggles
capture the image even in the dark, in the infrared
region.
An infrared night vision system senses heat radiated
by objects and produces a video picture of the heat
scene. The device that senses the heat is a
photocathode, similar to the one in a video camera,
except that it is sensitive to infrared radiation instead
of visible light, giving the ability to improve poor night
vision. There are two methods of operating night
vision systems: a 'passive' mode and
an 'active' mode. Passive systems amplify the
existing environmental ambient lighting, while
active systems rely on an infrared light source to
provide sufficient illumination. Active systems are
often used today on many consumer devices such as
home video cameras. Night vision works on two
techniques: image enhancement and thermal
imaging. Applications of this technology are
surveillance, security, wildlife observation and law
enforcement.
How Night Vision Works :
Introduction To How Night Vision Works :
The first thing you probably think of when you see
the words night vision is a spy or action movie
you've seen, in which someone straps on a pair of
night-vision goggles to find someone else in a dark
building on a moonless night. And you may have
wondered "Do those things really work? Can you
actually see in the dark?"
Figure: Gyro-stabilized day/night binoculars.
The answer is most definitely yes. With the proper
night-vision equipment, you can see a person
standing over 200 yards (183 m) away on a
moonless, cloudy night! Night vision can work in
two very different ways, depending on the
technology used.
Infrared Light:
In order to understand night vision, it is important to
understand something about light. The amount of
energy in a light wave is related to its wavelength:
Shorter wavelengths have higher energy. Of visible
light, violet has the most energy, and red has the
least. Just next to the visible light
spectrum is the infrared spectrum.
Infrared Light Can Be Split Into Three
Categories:
.. Near-infrared (near-IR) - Closest to visible
light, near-IR has wavelengths that range
from 0.7 to 1.3 microns, or 700 billionths to
1,300 billionths of a meter.
.. Mid-infrared (mid-IR) - Mid-IR has
wavelengths ranging from 1.3 to 3 microns.
Both near- IR and mid-IR are used by a
variety of electronic devices, including
remote controls.
.. Thermal-infrared (thermal-IR) - Occupying
the largest part of the infrared spectrum,
thermal-IR has wavelengths ranging from 3
microns to over 30 microns.
The key difference between thermal-IR and the
other two is that thermal-IR is emitted by an object
instead of reflected off it. Infrared light is emitted
by an object because of what is happening at the
atomic level.
Basic Technologies:
Night vision works in two very different ways,
depending on the technology used.
Image Enhancement - This works by collecting
the tiny amounts of light, including the lower
portion of the infrared light spectrum, that are
present but may be imperceptible to our eyes, and
amplifying them so that the image can easily be observed.
Thermal Imaging - This technology operates by
capturing the upper portion of the infrared light
spectrum, which is emitted as heat by objects
instead of simply reflected as light. Hotter objects,
such as warm bodies, emit more of this light than
cooler objects like trees or buildings.
Infra-Red Illuminators:
All Starlight scopes need some light to amplify.
This means that if you were in complete darkness
you could not see. Due to this we have a built in
infra-red illuminator (IRI) on all of our scopes.
Basically what an IRI does is throw out a beam of
infra-red light that is near invisible to the naked eye
but your NVD can see it. This allows you to use
your scope even in total darkness. The IRI works
like a flashlight and the distance you can see with it
will be limited. We do use the most powerful eye-safe
illuminator on the market. This allows our IRI
to extend out to 100 yards. However, because of the
power, at a short distance the IRI may cover only
40-60% of the viewing area.
When you look through a night vision device you
may notice black spots on the screen. A NVD is
similar to a television screen and attracts dust and
dirt. Typically these spots can be cleaned. However,
this may also be a spot in the tube itself. This is
normal.
Recent Development In The Field Of
Night Vision:
Night Vision's Mission Is To:
 Conduct Research and Development to Provide
US Land Forces with Advanced Sensor Technology
to Dominate the 21st Century Digital Battlefield;
 Acquire and Target Enemy Forces in Battlefield
Environments;
 Detect and Neutralize Mines, Minefields, and
Unexploded Ordnance; Develop Humanitarian
Demining Technology;
 Deny Enemy Surveillance & Acquisition through
Electro-Optic, Camouflage, Concealment and
Deception Techniques;
 Provide for Night Driving and Pilotage; and
Protect Forward Troops, Fixed Installations and
Rear Echelons from Enemy Intrusion.
Working Of Image Enhancement:
Image-enhancement technology is what most
people think of when you talk about night vision. In
fact, image-enhancement systems are normally
called night-vision devices (NVDs). NVDs rely on a
special tube, called an image-intensifier tube, to
collect and amplify infrared and visible light.
Here's how image enhancement works:
1. A conventional lens, called the objective lens,
captures ambient light and some near-infrared light.
2. The gathered light is sent to the image-intensifier
tube. In most NVDs, the power supply for the
image-intensifier tube receives power from two N-cell
or two "AA" batteries. The tube outputs a high
voltage, about 5,000 volts, to the image-tube
components.
3. The image-intensifier tube has a photocathode,
which is used to convert the photons of light energy
into electrons.
4. As the electrons pass through the tube, similar
electrons are released from atoms in the tube,
multiplying the original number of electrons by a
factor of thousands through the use of a
microchannel plate (MCP) in the tube.
Figure legend: 1. Front lens, 2. Photocathode, 3. Micro-channel plate, 4. High-voltage power supply, 5. Phosphor screen, 6. Eyepiece.
An MCP is a
tiny glass disc that has millions of microscopic
holes (micro channels) in it, made using fiber-optic
technology. When the electrons from the photo
cathode hit the first electrode of the MCP, they are
accelerated into the glass microchannels by the
5,000-V bursts being sent between the electrode
pair. As electrons pass through the microchannels,
they cause thousands of other electrons to be
released in each channel using a process called
cascaded secondary emission. Basically, the
original electrons collide with the side of the
channel, exciting atoms and causing other electrons
to be released.
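To get a feel for the numbers in step 4, here is a tiny arithmetic sketch of cascaded multiplication; the gain per collision and the number of collisions are invented for illustration and are not specifications of any real micro-channel plate.

```python
# Illustrative arithmetic only: rough model of cascaded secondary emission.
gain_per_collision = 2.0   # electrons released per incoming electron (assumed)
collisions = 12            # collisions along one micro-channel (assumed)

electrons_out = 1 * gain_per_collision ** collisions
print(electrons_out)       # 4096 -> "a factor of thousands"
```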
Thermal Imaging Process:
Figure: Image of a small dog taken in mid-infrared ("thermal") light (false color).
Thermal imaging, also called thermographic or thermal video, is a
type of infrared imaging. Thermographic cameras
detect radiation in the infrared range of the
electromagnetic spectrum (roughly 900-14,000
nanometers, or 0.9-14 µm) and produce images of
that radiation. Since infrared radiation is emitted by
all objects based on their temperature, according to
the black-body radiation law, thermography makes it
possible to "see" one's environment with or without
visible illumination. The amount of radiation
emitted by an object increases with temperature;
therefore thermography allows one to see variations
in temperature (hence the name).
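As a rough software illustration of the idea above, the sketch below maps an assumed grid of surface temperatures to image brightness so that warmer objects appear brighter; the temperature values are invented for the example and it is not a model of any real thermographic camera.

```python
import numpy as np

# Assumed scene temperatures in degrees Celsius (warm body against a cool background).
scene_temp = np.array([
    [20.0, 20.5, 21.0, 20.0],
    [20.0, 36.5, 37.0, 20.5],
    [20.5, 36.0, 36.5, 21.0],
    [20.0, 20.0, 20.5, 20.0],
])

# Map temperature to 8-bit brightness: hotter pixels -> brighter pixels.
t_min, t_max = scene_temp.min(), scene_temp.max()
image = ((scene_temp - t_min) / (t_max - t_min) * 255).astype(np.uint8)
print(image)
```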
Generations:
NVDs have been around for more than 40 years.
They are categorized by generation. Each
substantial change in NVD technology establishes a
new generation.
Generation 1 - First-generation viewers are
currently the most popular type of night vision in
the world. Utilizing the basic principles described above,
a 1st-generation unit will amplify the existing light
several thousand times, letting you clearly see in the
dark. These units provide a bright and sharp image
at a low cost, which is perfect, whether you are
boating, observing wildlife, or providing security
for your home.
Generation 2 - These are primarily used by law
enforcement or for professional applications. The
main difference between a 1st and a 2nd generation
unit is the addition of a micro-channel plate,
commonly referred to as a MCP. The MCP works
as an electron amplifier and is placed directly
behind the photocathode.
Generation 3 - While there are no substantial
changes in the underlying technology from
Generation 2, these NVDs have even better
resolution and sensitivity. This is because the photo
cathode is made using gallium arsenide, which is
very efficient at converting photons to electrons.
Additionally, the MCP is coated with an ion barrier,
which dramatically increases the life of the tube.
Generation 4 - 4th generation / Gated Filmless
technology represents the biggest technological
breakthrough in image intensification of the past 10
years. By removing the ion barrier film and
gating the system, Gen 4 demonstrates substantial
increases in target detection range and resolution,
particularly at extremely low light levels.
Night Vision Equipment :
Night-vision equipment can be split into three broad
categories:
Scopes - Normally handheld or mounted on a
weapon, scopes are monocular (one eye-piece).
Since scopes are handheld, not worn like goggles,
they are good for when you want to get a better look
at a specific object and then return to normal
viewing conditions.
Goggles - While goggles can be handheld, they are
most often worn on the head. Goggles are binocular
(two eye-pieces) and may have a single lens or
stereo lens, depending on the model. Goggles are
excellent for constant viewing, such as moving
around in a dark building.
Cameras - Cameras with night-vision technology
can send the image to a monitor for display or to a
VCR for recording. When night-vision capability is
desired in a permanent location, such as on a
building or as part of the equipment in a helicopter,
cameras are used. Many of the newer camcorders
have night vision built right in.
Applications:
Common applications for night vision include:
.. Military
.. Law enforcement
.. Hunting
.. Wildlife observation
.. Surveillance
.. Security
.. Navigation
.. Hidden-object detection
.. Entertainment
Night Vision System for Cars:
Night Vision makes a vehicle's darkened
surroundings visible out to a distance of 150 meters.
Depending on the automotive industry's design
requirements, Night Vision works with two
different systems. With the near-infrared system,
two barely noticeable infrared emitters are
integrated into the headlights. The infrared light
they produce is captured by a small camera
positioned close to the rear-view mirror. In the second
system, a solution in the long-wave spectral range, a
high-resolution infrared camera is installed behind
the radiator grille. Using a wavelength of six to 12
micrometers, it detects the infrared heat radiation
from the vehicle's surroundings, which is displayed
as a negative image: objects that are cold
because they are inanimate appear darkened, and
living things are displayed as bright objects.
Conclusion:
DSP may very well be employed to achieve
night vision by providing suitable processing for the night
camera. Active systems are often used today on
many consumer devices such as home video
cameras.
Cryptography Unbreakable code
Authors:
Alekya.N.V Sahitya Bharathi.T
II.B.Tech, IT II.B.Tech, IT
Ph no:9885753392 Ph no:9985917168
Sree Vidyanikethan Engineering College
Tirupathi.
E-Mail:
[email protected],
[email protected],
Cryptography - Unbreakable Code
Abstract:
Cryptography is a science of using mathematics to encrypt and decrypt
information, which means putting it into or decoding it from a mathematical language.
Cryptography becomes even more complex, though. This is because humans recognize
numbers as digits from 0 to 9, but your computer can only recognize 0 and 1. As such,
this binary system uses bits instead of digits. In order to convert bits to digits you will
need to multiply the number of bits by 0.3. This will provide you with a good estimation
of what it stands for. We will look at three types of cryptographic algorithms to secure
information. Cryptography, then, not only protects data from theft or alteration, but
can also be used for user authentication. There are, in general, three types of
cryptographic schemes typically used to accomplish these goals: secret key (or
symmetric) cryptography, public-key (or asymmetric) cryptography, and hash functions,
each of which is described below. In all cases, the initial unencrypted data is referred to
as plaintext. It is encrypted into cipher text, which will in turn (usually) be decrypted into
usable plaintext. Within the context of any application-to-application communication, there are some
specific security requirements, including: authentication, privacy/confidentiality,
integrity, and non-repudiation. As the need increases for government bodies and large
firms to deploy hi-tech security, cryptography promises to revolutionize secure
communication by providing security based on the laws of physics and mathematical
algorithms.
Introduction:
Cryptography, a word with Greek origins, means "secret writing".
Cryptography refers to the encryption and decryption of messages using secret keys.
Encryption is the process of changing or converting normal text or data information
into gibberish text. Decryption is the process of converting gibberish text
back to the correct message or data by reversing the encryption method. Usually the enciphering
of a message and the generation of keys are related to mathematical algebra, i.e.,
number theory, linear algebra and algebraic structures. Using these mathematical
relations we change a message in such a way that it can later be decrypted
using mathematical operations again.
Cryptographic Algorithms:
There are several ways of classifying cryptographic algorithms. For
the purposes of this paper, they will be categorized based on the number of keys that are
employed for encryption and decryption, and further defined by their application and use.
The three types of algorithms that will be discussed are:
Secret Key Cryptography (SKC): Uses a single key for both encryption and
decryption.
Public Key Cryptography (PKC): Uses one key for encryption and another
for decryption.
Hash Functions: Uses a mathematical transformation to irreversibly "encrypt"
information.
Figure: Three types of cryptography - secret key, public key and hash functions - and sample applications of the three cryptographic techniques for secure communication.
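To make the secret-key idea above concrete, here is a deliberately toy sketch: a one-byte XOR cipher in which the same key both encrypts and decrypts. It is for illustration only and is not a secure algorithm; the key and message are assumed.

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy secret-key cipher: XOR every byte with the same single-byte key."""
    return bytes(b ^ key for b in data)

key = 0x5A                      # assumed shared secret key
plaintext = b"attack at dawn"   # assumed message

ciphertext = xor_cipher(plaintext, key)   # encryption
recovered = xor_cipher(ciphertext, key)   # decryption uses the same key
assert recovered == plaintext
```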
Password protection:
Nearly all modern multi-user computer and network operating
systems employ passwords at the very least to protect and authenticate users accessing
computer and/or network resources. But passwords are not typically kept on a host or
server in plaintext; they are generally encrypted using some sort of hash scheme.
Passwords are stored in the /etc/passwd file (Figure A); each record in the file contains
the username, hashed password, user's individual and group numbers, user's name, home
directory, and shell program; these fields are separated by colons (:). Note that each
password is stored as a 13-byte string. The first two characters are actually a salt,
randomness added to each password so that if two users have the same password, they
will still be encrypted differently; the salt, in fact, provides a means so that a single
password might have 4096 different encryptions. The remaining 11 bytes are the
password hash, calculated using DES (Data Encryption Standard).
A) /etc/passwd file
root:Jbw6BwE4XoUHo:0:0:root:/root:/bin/bash
carol:FM5ikbQt1K052:502:100:Carol Monaghan:/home/carol:/bin/bash
alex:LqAi7Mdyg/HcQ:503:100:Alex Insley:/home/alex:/bin/bash
Gary:FkJXupRyFqY4s:501:100:Gary Kessler:/home/Gary :/bin/bash
todd:edGqQUAaGv7g6:506:101:Todd Pritsky:/home/todd:/bin/bash
josh:FiH0ONcjPut1g:505:101:Joshua Kessler:/home/webroot:/bin/bash
Sample entries in Unix/Linux password files.
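A minimal sketch of the salted-hash idea described above, using SHA-256 from Python's hashlib rather than the classic DES-based crypt(3) shown in the file entries; the salt length and passwords are assumed for the example.

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted password hashing: the salt makes identical passwords hash differently."""
    if salt is None:
        salt = os.urandom(2)  # 2-byte salt, echoing the two-character salt in crypt(3)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify(password, salt, stored):
    """Re-hash the supplied password with the stored salt and compare."""
    return hashlib.sha256(salt + password.encode()).hexdigest() == stored

# Two users with the same password still get different stored hashes.
salt1, hash1 = hash_password("hunter2")
salt2, hash2 = hash_password("hunter2")
assert hash1 != hash2
assert verify("hunter2", salt1, hash1)
```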
An even stronger authentication method uses the password to modify a
shared secret between the client and server, but never allows the password in any form to
go across the network. This is the basis for the Challenge Handshake Authentication
Protocol (CHAP), the remote logon process used by Windows NT.
Applications:
Cryptography is the science of writing in secret code and is an ancient
art; the first documented use of cryptography in writing dates back to circa 1900 B.C. In
data and telecommunications, cryptography is necessary when communicating over any
untrusted medium, which includes just about any network, particularly the Internet.
Within the context of any application-to-application communication, there are some specific security
requirements, including:
Authentication: The process of proving one's identity. (The primary forms of
host-to-host authentication on the Internet today are name-based or address-based,
both of which are notoriously weak.)
Privacy/confidentiality: Ensuring that no one can read the message except the
intended receiver.
Integrity: Assuring the receiver that the received message has not been altered in
any way from the original.
Non-repudiation: A mechanism to prove that the sender really sent this
message.
Conclusion:
As the need increases for government bodies and large firms to deploy
hi-tech security, cryptography is more useful, and cryptography promises to revolutionize
secure communication by providing security based on the laws of physics and mathematical
algorithms. Cryptography is a particularly interesting field because of the amount of work
that is, by necessity, done in secret. In fact, time is the only true test of good
cryptography; any cryptographic scheme that stays in use year after year is most likely a
good one. The strength of cryptography lies in the choice (and management) of the keys;
longer keys will resist attack better than shorter keys.
References:
- Tanenbaum, A. Computer Networks, 3rd Edition.
- Kessler, G.C. "Basics of Cryptography and Applications for Windows NT." Windows NT Magazine, October 1999.
- Barr, T.H. Invitation to Cryptology.
ULTRA WIDEBAND
GOLD IN THE GARBAGE FREQUENCY
A Brief Description of the Wave of the Future
Presented by:
K.V. NAGABABU Y.NELIN BABU
Reg. No: 07765A0408 Reg. No: 07765A0404
III/IV B.TECH III/IV B.TECH
PH NO:9248958812 PHNO:9963161334
Email: [email protected] [email protected]
ELECTRONICS AND COMMUNICATION ENGINEERING
LAKIREDDY BALI REDDY COLLEGE OF ENGINEERING
MYLAVARAM, KRISHNA DIST
CONTENTS
ABSTRACT
INTRODUCTION
TECHNOLOGY
MIMO TECHNOLOGY
TIME AND FREQUENCY DOMAIN DESCRIPTION
UWB SPECTRUM
COMPARISON OF DIFFERENT WIRELESS TECHNOLOGIES
CAPABILITIES
UWB Vs Wi-Fi
Why Ultra wideband needs Bluetooth?
APPLICATION SCENARIOS
CHALLENGES
THE ROAD AHEAD
CONCLUSION
REFERENCES
ABSTRACT
Imagine a world where
homes, cars, offices, and many other
environments will be intelligent. These
environments will be able to sense objects
and the presence of people in order to
perform many different functions, including
adjusting the environment to suit the
individual based on the time of day or day of
the week, monitoring the elderly or children
for health and safety purposes, or performing
security functions, just to name a few.
There stands a revolutionary wireless
technology that has the potential to address
each of the above technical challenges, and it is
none other than Ultra Wide Band. Ultra-
Wideband (UWB) can be best defined as any
short-range radio technology having
bandwidth exceeding the lesser of 500 MHz
or 20% of the arithmetic center frequency,
according to Federal Communications
Commission (FCC). UWB is a new radio
technology that promises to revolutionize
high-speed data transfers and enable the
personal area networking (PAN) industry
leading to new innovations and greater
quality of services to the end user.
UWB allows high data throughput
with low power consumption for distances of
less than 10 meters; an impressive rate of
480 Mbps is expected to be shown in the
not-too-distant future. To satisfy the demand for
high data wireless communication along with
good Quality of Service, one of the
promising technologies is Multiple Input
Multiple Output wireless technology. This
paper describes the time and frequency
domain description that demonstrates the
basic principle of the UWB. The UWB
spectrum shows the frequency and emitted
signal power variation for Ultra Wideband
along with the other wireless technologies.
The
comparison chart reveals the high data rate of
UWB compared with the existing
technologies like that of Bluetooth. UWB
capabilities like its ranging, advancement
over the Wi-Fi and the reduction of Power
Drain with the help of Bluetooth has been
discussed.
With its unlimited and
unmatched capabilities there are many UWB
applications that are being investigated or
implemented. Some of the many promising
technologies discussed in this paper are
Super caddie, military, healthcare, ultravision
security, low power sensors, process
energy, supply chain, high speed bluetooth
etc are discussed in this paper.
Ultra Wideband radio or UWB
was an obscure concept in the last century,
but will become part of everyday life in the
current century. Just as the automobile and
electricity were important innovative forces
in the early part of the 20th century, wireless
digital communications is the next big thing.
The recent spurt of investigations,
investments, and regulatory rulings indicate
an exploding interest in Ultra Wideband
technology. The technology must overcome
concerns about interference with safety-of-life
signals before achieving complete
acceptance. Successful operation without
interference will certainly lead to numerous
anticipated applications and an entirely new
face of Wireless communication
Key Words: Ultra Wide Band (UWB),
Wireless Personal Area Network (WPAN),
Multiple Input Multiple Output (MIMO),
Narrow Band (NB), Global Positioning
System (GPS)
INTRODUCTION
Wireless connectivity has enabled
a new mobile lifestyle filled with
conveniences for mobile computing users. In
the digital home of the not-too-distant future,
people will be sharing photos, music, video,
data and voice among networked consumer
electronics such as their PCs, personal digital
recorders, MP3 recorders and players, digital
camcorders and digital cameras, high-definition
TVs (HDTVs), set-top boxes
(STBs), gaming systems, personal digital
assistants (PDAs), and cell phones, to
connect to each other in a Wireless Personal
Area Network (WPAN) in the home with
much greater speed than existing
technologies like Bluetooth and infrared. For
example, users will be able to stream video
content from a device such as a camcorder to
a flat screen HDTV (high-definition
television) display with no delay for data
transfer and without the use of any wires.
This brings out the need for power efficient
wireless networks that are capable of both
sensing the environment as well as high-speed
communications.
There stands a revolutionary
wireless technology that has the potential to
address each of the above technical challenges,
and it is none other than Ultra Wide Band.
Why is UWB considered by many
to be the next "Big thing" in the wireless
space? Of all the competing
wireless technologies currently available or
under development, Ultra Wideband (UWB)
shows the most promise
Ultra-Wideband (UWB) can be
best defined as any short-range radio
technology having bandwidth exceeding the
lesser of 500 MHz or 20% of the arithmetic
center frequency, according to Federal
Communications Commission (FCC). UWB
is a new radio technology that promises to
revolutionize high-speed data transfers and
enable the personal area networking (PAN)
industry leading to new innovations and
greater quality of services to the end user.
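As a quick illustration of the FCC definition quoted above, the sketch below checks whether a signal qualifies as UWB given its lower and upper band edges; the example band edges are assumed.

```python
def is_uwb(f_low_hz, f_high_hz):
    """FCC-style check: bandwidth must exceed the lesser of 500 MHz
    or 20% of the arithmetic center frequency."""
    bandwidth = f_high_hz - f_low_hz
    f_center = (f_high_hz + f_low_hz) / 2.0
    return bandwidth > min(500e6, 0.20 * f_center)

# Assumed examples: a pulse occupying 3.1-10.6 GHz easily qualifies,
# while a 22 MHz-wide channel around 2.4 GHz does not.
print(is_uwb(3.1e9, 10.6e9))     # True
print(is_uwb(2.401e9, 2.423e9))  # False
```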
The unique properties of UWB that
combine to catapult this technology in
revolutionary communications and radar
applications are listed below.
High data rate
Larger fractional Bandwidth
Less Power Consumption
Reduced system complexity
Immunity to Multipath Propagation
Multiple Access Environments
Low cost transceivers
Low PSDs (power spectral densities)
TECHNOLOGY
UWB allows high data throughput
with low power consumption for distances of
less than 10 meters, or about 30 feet, which is
very applicable to short range Personal area
network requirements. The fastest data rate
shown over UWB is now an impressive 252
Mbps, and a rate of 480 Mbps is expected to
be shown in the not-too-distant future within
the power limit allowed under current FCC
regulations
MIMO TECHNOLOGY
To satisfy the demand for high
data wireless communication along with
good Quality of Service, one of the
promising technologies is Multiple Input
Multiple Output wireless technology. As the
name itself says, this technique uses multiple
antennas at the transmitter and the receiver.
The biggest advantage of this technology is,
theoretically, the technology has the potential
to provide many orders of magnitude
increase in capacity at no cost of increased
system bandwidth. This increased capacity
can be exploited to provide either increased
data rates or increased reliability of the
transmitted data.
Use of MIMO techniques for UWB
Distributed MIMO usage of
available nodes for coordinated transmission
to extend the range of UWB devices
Multiple UWB nodes could carry
out synchronized transmissions to a user who
is outside the range of a single UWB node
within the system.
UWB with Multiple Antennas (c.f.
Distributed MIMO)
The channel impulse responses of the
UWB channel have been found to have very
low cross correlation. This fact can be used to
design UWB transmitters /receivers with
multiple antennas in order to enhance system
throughput. This might also prove
advantageous in position-tracking.
Time and Frequency Domain
Description
UWB is a method of modulation
and data transmission which can entirely
change the wireless picture in the near future.
Let's take a look at the diagram that
demonstrates the basic principle of the UWB.
The UWB is above and the
traditional modulation is below which is
called here Narrow Band (NB), as opposed to
the Ultra Wideband. On the left we can see a
signal on the time axis and on the right there
is its frequency spectrum, i.e. energy
distribution in the frequency band. The most
modern standards of data transmission are
NB standards - all of them work within a
quite narrow frequency band allowing for
just small deviations from the base (or
carrier) frequency. Below on the right you
can see a spectral energy distribution of a
typical 802.11b transmitter. It has a very
narrow (80 MHz for one channel) dedicated
spectral band with the reference frequency of
2.4 GHz. Within this narrow band the
transmitter emits a considerable amount of
energy necessary for the following reliable
reception within the designed range of
distance (100 m for 802.11b).
Now take a look at the UWB - here the
traditional approach is turned upside down.
In the time space the transmitter emits short
pulses of a special form which distributes all
the energy of the pulse within the given, quite
wide, spectral range (approximately from 3
GHz to 10 GHz). Data, in their turn, are
encoded with polarity and mutual positions
of pulses. With much total power delivered
into the air and, therefore, a long distance of
the reliable reception, the UWB signal
doesn't exceed an extremely low value (much
lower than that of the NB signals) in each
given spectrum point (i.e. in each definite
licensed frequency band).
Figure: Time-domain and frequency-domain behaviour of ultra wideband impulse modulation (short pulses spread over roughly 3-10 GHz, FCC minimum bandwidth 500 MHz) compared with narrowband frequency modulation centred at 2.4 GHz.
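The sketch below gives a rough feel for the principle described above: a short Gaussian monocycle pulse is generated in the time domain and its spectrum is computed with an FFT, showing energy spread over several gigahertz at low density. The pulse width and sampling rate are assumed values, not parameters of any real UWB radio.

```python
import numpy as np

fs = 50e9                      # assumed sampling rate: 50 GS/s
t = np.arange(-2e-9, 2e-9, 1 / fs)
tau = 100e-12                  # assumed pulse width parameter (~100 ps)

# Gaussian monocycle: a very short pulse occupies a very wide band.
pulse = -t / tau * np.exp(-(t / tau) ** 2)

spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(len(pulse), 1 / fs)

# -10 dB bandwidth of the pulse (several GHz for a ~100 ps monocycle).
peak = spectrum.max()
occupied = freqs[spectrum > peak * 10 ** (-10 / 20)]
print(f"-10 dB band: {occupied.min()/1e9:.2f} to {occupied.max()/1e9:.2f} GHz")
```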
UWB SPECTRUM:
Most of the energy of the
UWB signal falls into the frequency range
from 3.1 to 10.6 GHz, and the energy spectral
density doesn't exceed the limit determined
by the FCC Regulations (-41 dBm/MHz).
Below 3.1 GHz the signal almost disappears;
its level is lower than -60 dBm/MHz. The more ideal the
form of the pulse formed by the transmitter,
the less energy goes outside the main
range. But however that may be, the
permissible deviation of the pulse from the
ideal form must be limited, hence the second
limit. The spectral range below 3.1
GHz is avoided so as not to create problems for
GPS systems, whose accuracy of operation
can suffer a lot from outside signals even if
their density is lower than -41 dBm/MHz. That is why
an additional margin of 20 dB (down to about -60 dBm/MHz) was reserved
in the spectral range below 3.1 GHz; it is not
obligatory, but it seems to be welcomed by
military bodies.
COMPARISON OF DIFFERENT
WIRELESS TECHNOLOGIES
In the case of UWB, the achievable data density is much
greater compared to the traditional NB
signals such as 802.11b, Bluetooth or
802.11a. So, with UWB we can send data
over longer distances, or send more data,
especially if there are a lot of simultaneously
working devices located close to each other.
Here is a diagram with the designed
maximum density of data transferred per
square meter:
UWB to Surpass Wi-Fi in 'Near
Future'!
An analyst report claims that
UWB, which has barely begun shipping,
will surpass sales of Wi-Fi equipment soon.
EE Times refers us to this In-Stat blurb,
which makes these two bold claims:
1. Legacy wired interconnects will exist
on the PC platform for several
generations, but usage should
transition to UWB within a very short
period of time.
2. Despite the growth of Wi-Fi in
peripherals and consumer electronics,
UWB sales will overtake Wi-Fi
volume in the near future.
Why Ultra wideband needs Bluetooth?
UWB's power drain when
receiving is higher compared to
Bluetooth. Bluetooth uses 11 mA transmitting
and 0.18 mA waiting, but it can transmit at
only 3 Mbit/s. Calculations show that it
would drain 16 mA-hours (mAh) sending a
2 GB file, compared with just 6.11 mAh on a
400 Mbit UWB link.
However, if you factor in a 12-hour
wait, UWB takes more than 13 times the
power: 246 mAh compared with Bluetooth's
18.5 mAh. By allowing Bluetooth to do the
waiting and setup, calling in UWB for the
data transfer, the total drain would be just
7.72 mAh.
UWB on devices such as digicams
and PDAs, on which power consumption will
be critical, seems likely eventually to exploit
a new partnership with frugal Bluetooth to
lower the power drain; the sketch below works through the arithmetic.
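The following sketch reproduces the kind of battery-drain comparison quoted above. The UWB idle current is inferred so that the totals roughly match the figures in the text, so treat all constants as assumptions rather than measured values.

```python
# Battery-drain comparison for sending a 2 GB file (all constants assumed).
FILE_BITS = 2 * 8e9            # 2 GB expressed in bits

BT_RATE = 3e6                  # Bluetooth: 3 Mbit/s
BT_TX_MA = 11.0                # Bluetooth transmit current (mA)
BT_IDLE_MA = 0.18              # Bluetooth waiting current (mA)

UWB_TX_MAH = 6.11              # quoted drain for the UWB transfer itself (mAh)
UWB_IDLE_MA = 20.0             # assumed UWB idle current, chosen to match ~246 mAh over 12 h

bt_transfer_mah = BT_TX_MA * (FILE_BITS / BT_RATE) / 3600      # ~16 mAh for the transfer
bt_total_mah = bt_transfer_mah + BT_IDLE_MA * 12               # ~18 mAh with a 12 h wait
uwb_total_mah = UWB_TX_MAH + UWB_IDLE_MA * 12                  # ~246 mAh with a 12 h wait

# Hybrid: Bluetooth handles the 12 h wait and setup, UWB handles the transfer
# (close to the ~7.72 mAh quoted in the text).
hybrid_mah = BT_IDLE_MA * 12 + UWB_TX_MAH
print(bt_total_mah, uwb_total_mah, hybrid_mah)
```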
APPLICATION SCENARIOS
With its unlimited and unmatched
capabilities there are many UWB
applications that are being investigated or
implemented, some of which are:
MILITARY
Ultra wideband has its roots in
military applications, and it is here that its
full potential is realized. Today's battles are
fought street by street, and house by house, in
scenarios where detailed, real-time situational
awareness is the difference between life and
death and in an environment where
traditional technologies fail to deliver. With
the ability to provide precise, real-time
tracking, high fidelity radar sensing, and
built-in covert communications, UWB's
Urban Combat Solutions based on STTW
(Sense through the Wall) technology allow
soldiers to own the urban terrain, just as night
vision allowed them to own the night.
Through wall sensing for building
assault
Precise blue-force tracking with
integrated covert communications
Unattended ground sensors for
area surveillance
Low false alarm rate, wireless
perimeter fences
Dismounted crew tracking with
wireless intercom
HEALTH CARE
Locating people and assets is an important
function in the healthcare industry, and being
able to do so robustly, securely and with
enough precision to matter is critical.
Whether locating caregivers, patients or
mobile equipment, a precise, real-time
tracking system can offer significant
improvements in safety, security and
efficiency. Together, these improvements
result in better care, in a shorter time and at
lower cost.
Optimized equipment utilization,
reducing inventory and cost
Automatic billing for improved
revenue capture
Faster emergency response
Improved patient safety throughput
Improved facility security
Efficient equipment cleaning,
maintenance and repair
LifeWave has developed patent
pending UWB medical radar. UWB signals
are transmitted into the body and reflected off
of tissues and organs. Signal processing is
used to determine information about tissue
size, location and movement. The images
gained are not distorted by bone and air
cavities such as the lungs as in other imaging
systems. Direct skin contact is not required
and information can be collected through
clothes and bedding. The radar will be
inexpensive and low power, making it ideal
for portable applications.
ULTRA VISION SECURITY
Ultra Vision Security Systems is
leading UWB product development in the
search and rescue realm. Its portable
LifeLocater system has stationary sensors
that use UWB signals to detect moving
objects.
The 2006 Mercedes S-Class uses
24 GHz short range UWB radar as part of its
driver assistant systems. Elapsed time of
pulsed signals is used to detect objects within
0.2 to 30 m. It can detect and track up to 10
objects with a range accuracy of 7.5 cm to
avoid accidents (Vehicular Radar Collision
Avoidance).
LOW POWER SENSORS
Another interesting application is
very low power sensor networks. In most
applications today, sensors are used for
specific local applications. Sensor networks
suggest the use of many low-cost, low-powered
sensors, on a wider, more
generalized scale, networked to provide ever-present
access. Sensor networks such as this
will provide information that can make life
easier. Sensors in these types of networks
will work together to provide information
that could: maintain environmental
conditions across large buildings or many
buildings, identify empty conference rooms
or help one find an empty parking place in a
huge parking lot.
Process Energy
Safety and Security are two critical
issues for the process and energy industries.
Ensuring either requires knowledge of the
precise location of all people and assets,
whether they are supposed to be on site, and
especially if they are not. UWB's tracking
systems offer the ability to find people and
objects equipped with tags in real time, and
even to detect and track intruders without
tags. Additionally, UWB solutions provide
robust wireless communications for such
applications as sensor telemetry, even in the
extremely harsh RF conditions of a process
facility. The capabilities are listed below.
SUPPLY CHAIN
RFID (Radio Frequency
Identification) tags are revolutionizing supply
chain management by replacing the venerable
barcode with remotely readable ID tags;
Active RFID can even locate objects to
within a certain region. UWB Real Time
Location System (RTLS) takes asset
management to the next level with precise,
real-time location of all tagged items.
Reduced inventory due to precise
knowledge of the location of all
tagged items
Higher efficiency from real-time
inventory taking
High speed Bluetooth
High speed Bluetooth is another
budding UWB WPAN. It will use UWB with
the promise of reaching multimedia speeds.
Downloading hundreds of photos in seconds
or wirelessly downloading movies from an
airport cabin are its capabilities.
Local and Personal Area
Networks (LAN/ PAN)
Roadside Info- station, based on
short bursts of very high data rate
Short range radios
Vehicular Radar: collision
avoidance/detection
Ground Penetrating Radar (GPR)
Surveillance
Location Finding
Precision location (inventory, GPS aid)
The above mentioned are some more promising
applications of UWB, which make it favored over
other wireless technologies due to its extremely
high-speed nature.
CHALLENGES
UWB obtains its bandwidth
by using spectrum that may be allocated to
other purposes that use more power. License
holders and users of those other purposes are
concerned that
UWB's low-power signal may interfere with
their devices. In particular, some users
depend on clear reception of signals for
safety. One more fear is that unlicensed use
of UWB devices could cause Global
Positioning System (GPS) receivers to lose
contact with GPS satellites. Though no
conclusive evidence shows that UWB causes
a problem for these signals, it has not been
ruled out.
THE ROAD AHEAD
Ultra Wideband radio or UWB
was an obscure concept in the last century,
but will become part of everyday life in the
current century. Just as the automobile and
electricity were important innovative forces
in the early part of the 20th century, wireless
digital communications is the next big thing.
Home audio systems and PCs without the
confusing and messy cables, much higher-speed
data transfer and even more tech-savvy
cell phones are the promise of UWB. Some
people question whether UWB really will
impact consumer life. A better question is
when? There is a definite demand for the
applications that can be developed using
UWB. UWB also has a unique edge over
competing technologies in its low cost and
low power model.
CONCLUSION
The recent spurt of
investigations, investments, and regulatory
rulings indicate an exploding interest in Ultra
Wideband technology. Though most signs
point to a promising future for UWB, there is
little experience yet with the effects of
multiple UWB transmitters in the real world.
It uses a unique type of signal (the RF
doublet) in a unique way (low power pulses
over a very large bandwidth). The technology
must overcome concerns about interference
with safety-of-life signals before achieving
complete acceptance. Successful operation
without interference will certainly lead to
numerous anticipated applications and an
entirely new face of Wireless
communication.
REFERENCES
Fleming, R., and Kushner, C. (10 September 2001). CMOS Ultra-Wideband Localizers for Networking in the Extreme. Presentation to DARPA. Retrieved March 10, 2002 from www.darpa.mil.
W. Barrett, "History of ultra wideband (UWB) radar & communications: Pioneers and innovators," in Proc. Progress in Electromagnetics Symposium, Cambridge, MA, 2000.
L. Yang and G.B. Giannakis, "Ultra-wideband communications: an idea whose time has come," IEEE Signal Processing Mag., vol. 21, no. 6, pp. 26-54, Nov. 2004.
J.H. Reed, Introduction to Ultra Wideband Communications Systems. New Jersey: Prentice Hall PTR, 2005.
F. Nekoogar, Introduction to Ultra Wideband Communication: Fundamentals and Applications. New Jersey: Prentice Hall PTR, 2005.
www.pcquest.com
www.extremeuwb.com
www.wimedia.com
A Paper on
VLSI IMPLEMENTATION
OF
OFDM
Presented By
S.SUGUNA DEVI
[email protected]
A.SIREESHA
[email protected]
DEPARTMENT OF ELECTRONICS & COMMUNICATION
ENGINEERING
RGMCET
NANDYAL
INDEX:
.. Abstract
.. Introduction
.. OFDM transceiver
.. VLSI implementation
.. Design methodology
.. Algorithm survey & simulations
.. Hardware design
.. Interfacing
.. Clocking strategy
.. Conclusion
.. References
Abstract:
OFDM is a multi-carrier system where data bits are encoded onto multiple sub-carriers and
sent simultaneously in time. The result is an optimum usage of bandwidth. A set of orthogonal
sub-carriers together forms an OFDM symbol. To avoid ISI due to multi-path, successive OFDM
symbols are separated by a guard band. This makes the OFDM system resistant to multi-path
effects. Although OFDM in theory has been in existence for a long time, recent developments in
DSP and VLSI technologies have made it a feasible option. This paper describes the VLSI
implementation of OFDM in detail. Specifically, the 802.11a OFDM system has been
considered in this paper. However, the same considerations would be helpful in implementing
any OFDM system in VLSI. OFDM is fast gaining popularity in broadband standards and
high-speed wireless LAN.
1. Introduction:
OFDM is a multi-carrier system where data bits are encoded onto multiple sub-carriers.
Unlike single-carrier systems, all the frequencies are sent simultaneously in time. OFDM offers
several advantages over single-carrier systems, like better multi-path immunity, simpler
channel equalization and relaxed timing acquisition constraints. But it is more susceptible to
local frequency offset and radio front-end non-linearity.
The frequencies used in an OFDM system are orthogonal. Neighboring frequencies with
overlapping spectra can therefore be used. This property is shown in the figure, where f1, f2
and f3 are orthogonal. This results in efficient usage of BW. OFDM is therefore able to provide
a higher data rate for the same BW.
2. OFDM Transceiver
Each sub-carrier in an OFDM system is modulated in amplitude and phase by the data
bits. Depending on the kind of modulation technique used, one or more bits are used to modulate
each sub-carrier. Modulation techniques typically used are BPSK, QPSK, 16-QAM, 64-QAM etc.
The process of combining the different sub-carriers to form a composite time-domain signal is
achieved using the Fast Fourier transform. Different coding schemes like block coding, convolution
coding or both are used to achieve better performance in low-SNR conditions. Interleaving is done,
which involves assigning adjacent data bits to non-adjacent sub-carriers to avoid burst errors under
highly selective fading. A sketch of the modulation and IFFT steps is given below.
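A minimal numpy sketch of the transmit path just described: QPSK mapping, an IFFT to combine the sub-carriers, and a cyclic-prefix guard interval. The FFT size, number of data sub-carriers and guard length are assumed for illustration and are not the exact 802.11a parameters.

```python
import numpy as np

N_FFT = 64          # FFT size (assumed; 802.11a also uses 64)
N_DATA = 48         # data sub-carriers per symbol (assumed)
N_CP = 16           # cyclic-prefix (guard interval) length in samples (assumed)

def qpsk_map(bits):
    """Map pairs of bits onto QPSK constellation points."""
    b = bits.reshape(-1, 2)
    return (1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])

def ofdm_symbol(bits):
    """Build one OFDM symbol: modulate sub-carriers, IFFT, prepend cyclic prefix."""
    symbols = qpsk_map(bits)                       # one complex value per sub-carrier
    freq = np.zeros(N_FFT, dtype=complex)
    freq[1:N_DATA + 1] = symbols                   # place data on N_DATA sub-carriers
    time = np.fft.ifft(freq)                       # composite time-domain signal
    return np.concatenate([time[-N_CP:], time])    # cyclic prefix guards against ISI

bits = np.random.randint(0, 2, 2 * N_DATA)         # 2 bits per QPSK sub-carrier
tx = ofdm_symbol(bits)
print(tx.shape)                                    # (80,) = 64 + 16 samples
```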
Block diagram of an OFDM transceiver
3. VLSI implementation
VLSI Implementation
In the approach shown in the figure, the entire functionality is implemented in hardware. Following
are the advantages of this approach:
Lower gate count compared to DSP+RAM+ROM, hence lower cost
Low power consumption
Due to the advantages mentioned above, a VLSI-based approach was considered for the
implementation of an 802.11a Base band. The following sections describe the VLSI-based
implementation in detail.
4. Design Methodology
Early in the development cycle, different communication and signal processing
algorithms are evaluated for their performance under different conditions like noise, multipath
channel and radio non-linearity. Since most of these algorithms are coded in "C" or tools like
Matlab, it is important to have a verification mechanism which ensures that the hardware
implementation (RTL) is the same as the "C" implementation of the algorithm. The flow is shown in
the figure.
Figure: Design flow for Base band development
5. Architecture definition
5.1 Specifications of the OFDM transceiver
.. Data rates to be supported
.. Range and multipath tolerance
.. Indoor/Outdoor applications
.. Multi-mode: 802.11a only or 802.11a+HiperLAN/2
5.2 Design trade-offs
.. Area - Smaller the die size lesser the chip cost
.. Power - Low power crucial for battery operated mobile devices
.. Ease of implementation - Easy to debug and maintain
.. Customizability - Should be customizable to future standards with variations
in OFDM
parameters
6. Algorithm survey & simulation
The simulation at the algorithmic level is to determine the performance of algorithms for various
non-linearities and imperfections. The algorithms are tweaked and fine-tuned to get the required
performance. The following algorithms/parameters are verified:
.. Channel estimation and compensation for different channel models (Rayleigh, Rician, JTC,
Two-ray) for different delay spreads
.. Correlator performance for different delay spreads and different SNR
.. Frequency estimation algorithm for different SNR and frequency offsets
.. Compensation for Phase noise and error in Frequency offset estimation
.. System tolerance for I/Q phase and amplitude imbalance
.. FFT simulation to determine the optimum fixed-point widths
.. Wave shaping filter to get the desired spectrum mask
.. Viterbi BER performance for different SNR and trace back length
.. Determine clipping levels for efficient PA use
.. Effect of ADC/DAC width on the EVM and optimum ADC/DAC width
.. Receive AGC
6.1 Fixed point simulation
One of the decisions to be taken early in the design cycle is the format or representation
of data. Floating-point implementation results in higher hardware costs and additional circuits
related to normalizing of numbers. Floating-point representation is useful when dealing with
data of widely different ranges. But this is not the case here, as the Base band circuits have a fair idea of
the range of values they will work on. So a fixed-point representation will be more efficient.
Further, in fixed point a choice can be made between signed-magnitude and 2's complement representation.
The width of representation need not be constant throughout the Baseband; it depends
on the accuracy needed at different points in the transmit or receive path. A small change in the
number of bits in the representation could result in a significant change in the size of arithmetic
circuits, especially multipliers.
Shown below is the loss of SNR because of a decrease in the width of representation.
Module | Width (bits) | SNR (dB)
ADC    | 8            | 48
ADC    | 12           | 72
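The table values follow the familiar quantization-noise rule of roughly 6 dB per bit; the sketch below evaluates the standard ideal-quantizer approximation SNR = 6.02N + 1.76 dB, which is a textbook formula rather than a figure taken from this paper.

```python
def quantization_snr_db(num_bits):
    """Ideal uniform quantizer SNR for a full-scale sine input: 6.02*N + 1.76 dB."""
    return 6.02 * num_bits + 1.76

for bits in (8, 12):
    print(bits, round(quantization_snr_db(bits), 1))
# 8 -> ~49.9 dB and 12 -> ~74.0 dB, consistent with the ~48/72 dB table entries
# (the table appears to use the simpler ~6 dB-per-bit rule).
```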
6.2 Simulation setup
The algorithms could be simulated in a variety of tools/languages like SPW, MATLAB,
"C" or a mix of these. SPW has an exhaustive floating-point and fixed-point library. SPW also
provides a feature to plug in RTL modules and do a co-simulation of the SPW system and Verilog.
This helps in verifying the RTL implementation of algorithms against the SPW/C
implementation.
7. Hardware design:
7.1 Interface definition
Base band interfaces with two external modules: MAC and Radio.
7.1.1 Interface to MAC
Base band should support the following for MAC
.. Should support transfer of data at different rates
.. Transmit and receive control
.. RSSI/CCA indication
.. Register programming for power and frequency control
Following options are available for MAC interface:
.. Serial data interface - Clock provided along with data. The clock speed changes for different data rates.
.. Varying data width, single-speed clock - The number of data lines varies according to the data rate. The clock remains the same for all rates.
.. Single clock, parallel data with ready indication - Clock speed and data width are the same for all data rates. A ready signal is used to indicate valid data.
.. Interfaces like SPI/Micro-wire/JTAG could be used for register programming
7.1.2 Interface to Radio:
Two kinds of radio interfaces are described below
I/Q interface
On the transmit side, the complex Base band signal is sent to the radio unit, which first does
a quadrature modulation followed by up-conversion to 5 GHz. On the receive side, following the
down-conversion to IF, quadrature demodulation is done and the complex I/Q signal is sent to the Base
band. Shown below is the interface.
Figure: I/Q interface to Base band
IF interface
The Base band does the Quadrature modulation and demodulation digitally.
Figure: IF interface to Base band
7.2 Clocking strategy
802.11a supports different data rates from 6 Mbps to 54 Mbps. The clock scheme
chosen for the Base band should be able to support all rates and also result in low power
consumption. We know from basic ASIC design guidelines that most circuits should run at
the lowest possible clock.
Two options are shown below:
The first scheme requires different clock sources, or a very high clock rate from which all
these clocks could be generated. The modules must work at the highest frequency of 54 MHz.
Shown in the second figure is a simpler clocking scheme with only one clock speed for
all data rates:
Varying duty cycles for different data rates are provided by the data enable signal.
All the circuits in the transmit and receive chain work on parallel data (4 bits).
The overhead is the data enable logic in all the modules.
7.3 Optimize usage of hardware resources by reusing different blocks
Hardware resources can be reused considering the fact that the 802.11a system is a half-duplex
system. The following blocks are re-used:
.. FFT/IFFT
.. Interleaver/De-interleaver
.. Scrambler/Descrambler
.. Intermediate data buffers
Since adders and multipliers are costly resources, special attention should be given to
reusing them. An example is shown below where an Adder/Multiplier pool is created and different
blocks are connected to it.
Figure 15: Sharing of H/W resources
7.4 Optimize the widely used circuits
Identify the blocks that are used in several places (several instances of the same unit) and
optimize them. Optimization can be done for power and area. Some of the circuits that can be
optimized are:
7.4.1 Multipliers
Multipliers are the most widely used circuits. Synthesis tools usually provide highly optimized
circuits for multipliers and adders. In case optimized multipliers are not available, multipliers
could be designed using different techniques like Booth (non-)recoded Wallace multipliers.
7.4.2 ACS unit
There are 64 instantiations of the ACS unit in the Viterbi decoder. Optimization of the ACS unit
results in significant savings. Custom cell design (using foundry information) for adders and
comparators could be considered.
8. Conclusion
In this paper, the design approach for an OFDM modem was presented. The different algorithms
implemented in the OFDM modem were identified.
Implementation alternatives for different components of the OFDM modem were discussed.
It was found during the algorithm design that many blocks need complex multipliers and adders,
and therefore special attention needs to be given to optimizing these circuits and maximizing
reusability. The need for verifying the algorithms in the same environment, or with the same set of test
vectors with which the fixed-point "C" implementation of the algorithms is run, was highlighted.
9. References
1. ISO/IEC 8802-11 ANSI/IEEE Std 802.11-1999, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE, 20 August 1999.
2. IEEE Std 802.11a-1999 (Supplement to IEEE Std 802.11-1999), Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE, September 1999.
3. Digital Signal Processing, J.G. Proakis and D.G. Manolakis, Third Edition.
4. Digital Communications, Simon Haykin, John Wiley and Sons.
5. "OFDM for Multimedia Wireless Communications" by Richard van Nee and Ramjee Prasad.
6. "An Equalization Technique for Orthogonal Frequency-Division Multiplexing Systems in Time-Variant Multipath Channels," Won Gi Jeon, Kyung Hi Chang and Yong Soo Cho, IEEE Transactions on Communications, vol. 47, no. 1, January 1999.
ADVANCED COMMUNICATION
THROUGH FLESH
REDTACTON
(HUMAN AREA NETWORKING)
(Body Based Communication)
PRESENTED BY
P.M.SAISREE S.SRILEKHA
II B.TECH,ECE II B.TECH,ECE
ROLL NO: 107421 ROLL NO:107433
Emailid:[email protected] Emailid:sadepallisrilalchi@Gmail.com
ABSTRACT:
Our body could soon be the backbone of
a broadband personal data network
linking your mobile phone or MP3
player to a cordless headset, your digital
camera to a PC or printer, and all the
gadgets you carry around to each other.
RedTacton is a new Human Area Networking
technology; it is completely
distinct from wireless and infrared. A
transmission path is formed at the
moment a part of the human body comes in
contact with a RedTacton transceiver.
Physical separation ends the contact
and thus ends communication. The
technology uses
the surface of the human body as a safe,
high-speed network transmission path,
exploiting the minute electric field emitted on
the surface of the human body,
and it operates naturally according to the user's
physical movements.
Communication is possible using any
body surface, such as the hands, fingers,
arms, feet, face, legs or torso. RedTacton
works through shoes and clothing as
well. Here, the human body acts as a
transmission medium supporting half-duplex
communication at 10 Mbit/s. The
key component of the transceiver is an
electric-field sensor implemented with
an electro-optic crystal and laser light.
INTRODUCTION:
The RedTacton system was invented by
NTT, the Japanese telecoms group, and
its team of scientists. It is called
"Tacton" because with this technology
communication starts by touching
(Touch), leading to various actions
(Act on); the color red was then added
to convey the meaning of warmth in
communication. Combining these phrases
led to the name "RedTacton". Human society is entering
an era of ubiquitous computing, when
networks are seamlessly interconnected
and information is always accessible at
our fingertips. The practical
implementation of ubiquitous services
requires three levels of connectivity:
Wide Area Networks (WAN), typically
via the Internet, to remotely connect all
types of servers and terminals; Local
Area Networks (LAN), typically via
Ethernet or Wi-Fi connectivity among all
the information and communication
appliances in offices and homes; and
Human Area Networks (HAN) for
connectivity to personal information,
media and communication appliances
within the much smaller sphere of
ordinary daily activities-- the last one
meter. NTT's RedTacton is a breakthrough
technology that, for the first
time, enables reliable high-speed HAN.
In the past, Bluetooth, infrared
communications (IrDA), radio frequency
ID systems (RFID), and other
technologies have been proposed to
solve the "last meter" connectivity
problem. However, they each have
various fundamental technical
limitations that constrain their usage,
such as the precipitous fall-off in
transmission speed in multi-user
environments producing network
congestion.
Fig: set of connections
RedTacton takes a different technical
approach. Instead of relying on
electromagnetic waves or light waves to
carry data, RedTacton uses weak electric
fields on the surface of the body as a
transmission medium. A RedTacton
transmitter couples with extremely weak
electric fields on the surface of the body.
The weak electric fields pass through the
body to a RedTacton receiver, where the
weak electric fields affect the optical
properties of an electro-optic crystal. The
extent to which the optical properties are
changed is detected by laser light which
is then converted to an electrical signal
by a detector circuit.
FUNCTIONING:
Using a new super-sensitive photonic
electric field sensor, RedTacton can
achieve duplex communication over the
human body at a maximum speed of 10
Mbps.
Fig: functioning
1. The RedTacton transmitter induces a
weak electric field on the surface of the
body.
2. The RedTacton receiver senses
changes in the weak electric field on the
surface of the body caused by the
transmitter.
3. RedTacton relies upon the principle
that the optical properties of an electrooptic
crystal can vary according to the
changes of a weak electric field.
4. RedTacton detects changes in the
optical properties of an electro-optic
crystal using a laser and converts the
result to an electrical signal in an optical
receiver circuit.
Note that RedTacton transceivers which
integrate transmitters and receivers are
also available.
MECHANISM:
Fig: mechanism
The transmitter sends data by inducing
fluctuations in the minute electric field
on the surface of the human body. Data
is received using a photonic electric field
sensor that combines an electro-optic
crystal and a laser light to detect
fluctuations in the minute electric field.
-The naturally occurring electric field
induced on the surface of the human
body dissipates into the earth. Therefore,
this electric field is exceptionally faint
and unstable.
- The photonic electric field sensor
developed by NTT enables weak electric
fields to be measured by detecting
changes in the optical properties of an
electro-optic crystal with a laser beam.
MAIN FEATURES:
RedTacton has three main functional
features.
1. TOUCH:
Touching, gripping, sitting, walking,
stepping and other human movements
can be the triggers for unlocking or
locking, starting or stopping equipment,
or obtaining data.
Using RedTacton, communication starts
when terminals carried by the user or
embedded in devices are linked in
various combinations through physical
contact according to the human's natural
movements.
2. BROADBAND & INTERACTIVE:
Duplex, interactive communication is
possible at a maximum speed of
10Mbps*. Because the transmission path
is on the surface of the body,
transmission speed does not deteriorate
in congested areas where many people
are communicating at the same time.
-Maximum communication speed may
be slower than 10Mbps depending on the
usage environment.
(With conventional wireless technologies,
by contrast, communication speed can
deteriorate in crowded spaces due to a
lack of bandwidth.)
Fig: interaction by RED TACTON
Taking advantage of this speed, device
drivers can be downloaded instantly and
executable programs can be quickly sent.
3. ANY-MEDIA:
In addition to the human body, various
conductors and dielectrics can be used as
transmission media. Conductors and
dielectrics may also be used in
combination*.
-Signals travel along the surfaces of
materials.
-Signals pass through materials.
-Combinations of travelling along and
passing through materials are possible.
Examples of conductor and dielectric
media, in which signals travel along
and/or pass through the material,
include the following:
A communication environment can be
created easily and at low-cost by using
items close at hand, such as desks, walls,
and metal objects.
APPLICATIONS:
Red Tacton has many applications some
of them are:
-Print out where you want just by
touching the desired printer with one
hand and a PC or digital camera with
the other hand to make the link.
-Complicated configurations are
reduced by downloading device
drivers "at first touch".
-By shaking hands, personal profile
data can be exchanged between
mobile terminals carried by the users.
(Electronic exchange of business
cards)
-Communication can be kept
private using authentication and
encryption technologies.
-An electrically conductive sheet is
embedded in the table.
-A network connection is initiated
simply by placing a laptop on the table.
-Using different sheet patterns enables
segmentation of the table into subnets.
A conductive metal sheet is placed on
top of a table. Laptop computers could
be connected to the Internet by simply
placing them on such a table. Even
different networks could be supported,
such as an enterprise LAN and Internet
access, by providing separate metal
sheets for each network.
-The seat position and steering wheel
height adjust to match the driver just by
sitting in the car. The driver's home is set
as the destination in the car navigation
system. The stereo plays the driver's
favorite song.
Fig: transmission of information
Unlike earlier electric-field
communication schemes, the photonic
electric field sensors used in RedTacton
can measure stand-alone contacts without
being influenced by grounds. As a result,
the received waveform is not distorted,
regardless of the receiver location. This
makes long-distance and high-speed
body-surface transmission possible.
RedTacton does not require the electrode
to be in direct contact with the skin, and
high-speed communication is possible
between two arbitrary points on the body.
PROTECTION:
-RedTacton uses the electric field that
occurs naturally on the surface of the
human body for communication.
Transmitter and receiver electrodes are
covered with an insulating film. No
current flows into the body from the
RedTacton transceiver.
-There is no current flowing from the
RedTacton transceiver; however, the
body indirectly receives a minute electric
field. This causes electrons already
present inside the body to move, creating
a minute displacement current. This
displacement current is similar to those
occurring in everyday life.
-RedTacton conforms to the
"Radiofrequency-exposure Protection
RCR standard (RCR STD-38)" issued by
the Association of Radio Industries and
Businesses (ARIB).
PROTOTYPE:
Communication speed: 10 Mbps
Communication method: Half-duplex
NTT plans to develop transceivers with
an emphasis on portability that are more
compact and consume less power.
Through field testing, NTT will continue
to investigate and improve the
robustness of Human Area Networking
and human body surface communication
applications.
CONCLUSION:
Human body networking is more secure
than broadcast systems, such as
Bluetooth, which have a range of about
10 m. With Bluetooth it is difficult to rein
in the signal and restrict it to the device
you are trying to connect to. You usually
want to communicate with one particular
thing, but in a busy place there could be
hundreds of Bluetooth devices within
range. As human beings are not effective
aerials, it is very hard to pick up stray
electronic signals radiating from the
body. This is good for security: even if
you encrypt data it is still possible that it
could be decoded, but if you cannot pick
it up at all it cannot be decoded.
In the near future, as more and more
implants go into bodies, the most
important application for body-based
networking may well be for
communications within, rather than on
the surface of, or outside, the body. An
intriguing possibility is that the
technology will be used as a sort of
secondary nervous system to link large
numbers of tiny implanted components
placed beneath the skin to create
powerful onboard or in-body computers.
So we can conclude that this technology
will change the future of wireless
communication.
REFERENCES:
1. www.redtacton.com
2. www.tribuneindia.com
3. www.ntt.co.jp
4. www.technologyreview.com
A Real-Time Face Recognition System
Using Custom VLSI Hardware
PRESENTED BY
P.V.Sai Vijitha, 06p11a04a2, B.Tech 3rd year, ECE
A.Pravallika Rani, 06p11a0466, B.Tech 3rd year, ECE
Email: [email protected]
CHADALAWADA RAMANAMMA
ENGINEERING COLLEGE
Renigunta Road, Tirupathi
ABSTRACT:
A real-time face recognition
system can be implemented on an IBM
compatible personal computer with a
video camera, image digitizer, and
custom VLSI image correlator chip.
With a single frontal facial image
under semi-controlled lighting
conditions, the system performs (i)
image preprocessing and template
extraction, (ii) template correlation
with a database of 173 images, and (iii)
postprocessing of correlation results to
identify the user. System performance
issues including image preprocessing,
face recognition algorithm, software
development, and VLSI hardware
implementation are addressed. In
particular, the parallel, fully pipelined
VLSI image correlator is able to
perform 340 Mop/second and achieve
a speed up of 20 over optimized
assembly code on an 80486/66DX2.
The complete system is able to identify
a user from a database of 173 images
of 34 persons in approximately 2 to 3
seconds. While the recognition
performance of the system is difficult
to quantify simply, the system achieves
a very conservative 88% recognition
rate using cross-validation on the
moderately varied database.
INTRODUCTION:
Humans are able to recognize
faces effortlessly under all kinds of
adverse conditions, but this simple task
has been difficult for computer systems
even under fairly constrained
conditions. Successful face recognition
entails the ability to identify the same
person under different circumstances
while distinguishing between
individuals. Variations in scale,
position, illumination, orientation, and
facial expression make it difficult to
distinguish the intrinsic differences
between two different faces while
ignoring differences caused by the
environment. Even when acceptable
recognition has been accomplished
with a computer, the actual
implementation has typically required
long run times on high performance
workstations or the use of expensive
supercomputers. The goal of this work
is to develop an efficient, real-time
face recognition system that would be
able to recognize a person in a matter
of a few seconds.
Face recognition has been the
focus of computer vision researchers
for many years. There are two basic
approaches to face recognition, (i)
parameter-based and (ii) template-based.
In parameter-based recognition,
the facial image is analyzed and
reduced to a small number of
parameters describing important facial
features such as the eye shape, nose
location, and cheek bone curvature.
These few extracted facial parameters
are subsequently compared to database
of known faces. Parameter-based
recognition schemes attempt to
develop an efficient representation of
salient features of an individual.
While the database search and
comparison for parameter-based
recognition may not be
computationally intensive, the image
processing required to extract the
appropriate parameters is quite
computationally expensive and
requires careful selection of facial
parameters which will unambiguously
describe an individual's face.
The applications for a face
recognition system range from simple
security to intelligent user interfaces.
While physical keys and secret
passwords are the most common and
conventional methods for identification
of individuals, they impose an obvious
burden on users and are susceptible to
fraud. In contrast, biometrics systems
attempt to identify persons by utilizing
inherent physical features of humans
such as fingerprints, retinal patterns,
and vocal characteristics. Effective
biometrics identification systems
should be easy to use and less
susceptible to fraud. In particular,
facial features are an obvious and
effective biometric of individuals, and
the ability to recognize individuals
from their faces is an integral part of
human society. While any computer
(or human) face recognition system has
obvious limitations such as identical
twins or masks, face recognition could
be used in combination with other
biometrics or security systems to
provide a much higher level of security
surpassing that of any individual
system. However, the primary
advantage of face recognition is likely
to be its non-invasive nature and
socially acceptable method of
identifying individuals, especially when
compared with fingerprint analysis or
retinal scanning.
FACE RECOGNITION TASK:
The face recognition system
was based in large part on a
template-based face recognition
algorithm described by Brunelli and
Poggio [2] (see Figure 1, Overall
Processing Data Flow). The actual recognition
process can be broken down into three
distinct phases. (i) Image
preprocessing and template extraction
and normalization, (ii) template
correlation with image database and
(iii) postprocessing of correlation
scores to identify user with high
confidence. From a single frontal facial
image under semi-controlled lighting
conditions and limited number of facial
expressions, the system can robustly
identify a user from an image database
of 173 images of 34 persons. While the
recognition performance of the system
is difficult to quantify simply, the
system achieves a very conservative
88% recognition rate using cross-validation
on the moderately varied
database.
IMAGE PREPROCESSING:
Image preprocessing entails
transforming a 512x480 grey-level
image into four intensity normalized
templates corresponding to the eyes,
nose, mouth, and the entire face
(excluding hair, ears etc.) of the user.
The regions of the image
corresponding to the templates are
located by finding the user's eyes and
normalizing the image scale based on
the eye positions and inter-ocular
distance.
EYE LOCATION:
Locating eyes in a visually
complex image in real-time is a
formidable task. The goal of the real-time
face recognition system is to
operate in such a manner as to
minimally constrain the user's position
within the image. This requires the
ability to find the eyes at varying
scales over a range of locations in the
image. Since the accuracy of the eye
location affects the extraction of the
templates, and thus the correlation and
recognition, the location process must
be precise. The location process is
divided into two parts - rough location
and refinement. The rough location
phase quickly scans the image and
generates a list of candidate eye
locations. The rough eye location
algorithm is based on the observation
that an eye is distinguished by the
presence of a large dark blob, the iris,
surrounded by smaller light blobs on
each side, the whites. However, under
certain lighting conditions, highlights
within the eyes need to be removed
and can also be used as additional cues
for eye location. When coupled with
sufficient high-level constraints on the
relative positions of the blobs and an
acceptable measure of the
"blobbiness", this simple system
performs remarkably well. The
refinement stage then looks more
closely at these areas to determine
more exactly the best fit for an eye,
given inter-ocular constraints. The
refinement process not only assigns a
more exact location to each of the
candidate eyes, but also assigns a
radius to the iris (see Figure 3). This
allows more selective pruning by
imposing the restriction that the two
eyes be of similar size. In addition, the
inter-ocular spacing is constrained to a
distance proportional to the eye size.
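As a rough sketch of the pairing constraints described above (not the authors' code; the thresholds and names such as plausible_eye_pair are hypothetical), candidate eyes found by the rough blob search can be pruned by requiring similar iris radii and an inter-ocular spacing proportional to the eye size:

```python
import math

# Hypothetical pruning of candidate eye pairs after refinement.
# Each candidate is (x, y, iris_radius); the thresholds are illustrative guesses.

def plausible_eye_pair(left, right,
                       max_radius_ratio=1.3,     # the two irises must be of similar size
                       min_spacing_factor=4.0,   # spacing proportional to eye size
                       max_spacing_factor=10.0):
    (xl, yl, rl), (xr, yr, rr) = left, right
    spacing = math.hypot(xr - xl, yr - yl)
    mean_radius = 0.5 * (rl + rr)
    radius_ok = max(rl, rr) / min(rl, rr) <= max_radius_ratio
    spacing_ok = min_spacing_factor * mean_radius <= spacing <= max_spacing_factor * mean_radius
    return radius_ok and spacing_ok

candidates = [(120, 200, 9), (180, 202, 10), (300, 400, 22)]
pairs = [(a, b) for i, a in enumerate(candidates)
                for b in candidates[i + 1:]
                if plausible_eye_pair(a, b)]
print(pairs)   # -> [((120, 200, 9), (180, 202, 10))]: only the consistent pair survives
```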
TEMPLATE EXTRACTION AND
NORMALIZATION:
Once the eyes are located, subsampled
templates of the face, eyes, nose, and
mouth are extracted (see Figure 4). The
inter-ocular distance is taken as a
scaling factor, and the inter-ocular axis
is normalized to be horizontal. The
four regions of the image are
determined by fixed ratios and offsets
relative to the eyes. Skewless affine
transformations are used to scale and
rotate four areas of the image into the
four templates. When multiple image
pixels correspond to a single template
pixel, averaging is employed. The
template sizes are fixed but tailored to
the size of the region from which they
are extracted. The face template is
68×68, the eye template is 68×34, and
the nose and mouth templates
are each 34×34. The template size
governs the accuracy and speed of the
database search. Choosing the
templates to be too small results in a
loss of information. Choosing the
templates too large results in the extraction
and correlation processes running slowly.
In addition, the registration and
alignment errors between the templates
become more severe with larger
template sizes.
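A minimal sketch of how the eye positions might drive the scale and rotation normalization described above (a hedged illustration assuming numpy and a nominal reference inter-ocular distance; the authors' fixed ratios and offsets are not reproduced here):

```python
import numpy as np

# Hypothetical normalization: map image coordinates into a frame where the
# inter-ocular axis is horizontal and the inter-ocular distance is fixed.

REFERENCE_EYE_DISTANCE = 34.0   # assumed target spacing in template pixels

def eye_alignment_transform(left_eye, right_eye):
    """Return a 2x2 skewless (rotation + uniform scale) matrix mapping
    image coordinates into the eye-aligned template frame."""
    left_eye, right_eye = np.asarray(left_eye, float), np.asarray(right_eye, float)
    delta = right_eye - left_eye
    inter_ocular = np.linalg.norm(delta)
    scale = REFERENCE_EYE_DISTANCE / inter_ocular
    angle = np.arctan2(delta[1], delta[0])     # tilt of the inter-ocular axis
    c, s = np.cos(-angle), np.sin(-angle)      # rotate the axis back to horizontal
    return scale * np.array([[c, -s], [s, c]])

A = eye_alignment_transform(left_eye=(120, 205), right_eye=(188, 199))
left = A @ np.array([120, 205])
right = A @ np.array([188, 199])
print(right - left)   # close to [34, 0]: horizontal inter-ocular axis, fixed spacing
```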
Once the templates have been
extracted, they must be normalized for
variations in lighting to ensure accurate
correlation between the templates. If
the image intensity is used directly, a
dark image of one person could match
better with a dark image of a different
person than with a light image of the
same person. Since the lighting
conditions prevailing at the time of the
image database creation may be
different from those at the time of
recognition, insensitivity to lighting
conditions is crucial. Two types of
template intensity normalization are
employed, local normalization and
global normalization. Local
normalization entails dividing the pixel
intensity at a given point by the
average intensity in a surrounding
neighborhood. This is roughly
equivalent to high pass filtering of the
template data spatially and removes
intensity gradients caused by non-uniform
lighting. Global normalization
consists of determining the mean and
standard deviation of the template and
normalizing the pixel values to
compensate for low variance due to
dim lighting or image saturation.
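A hedged numpy/scipy sketch of the two normalization steps described above (the neighborhood size and the use of a box filter are assumptions; the paper does not give these details):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Illustrative template intensity normalization; neighborhood size is assumed.

def local_normalize(template, neighborhood=7, eps=1e-6):
    """Divide each pixel by the mean intensity of its surrounding neighborhood
    (roughly a spatial high-pass, removing gradients from non-uniform lighting)."""
    local_mean = uniform_filter(template.astype(float), size=neighborhood)
    return template / (local_mean + eps)

def global_normalize(template):
    """Shift to zero mean and scale to unit standard deviation, compensating
    for low variance caused by dim lighting or image saturation."""
    t = template.astype(float)
    return (t - t.mean()) / (t.std() + 1e-6)

face = np.random.randint(0, 256, size=(68, 68))    # stand-in for a 68x68 face template
normalized = global_normalize(local_normalize(face))
print(normalized.mean().round(6), normalized.std().round(6))   # ~0.0 and ~1.0
```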
TEMPLATE CORRELATION
WITH IMAGE DATABASE:
After the facial image of the
user has been preprocessed to obtain
the normalized templates, the
templates are compared to those in an
image database of known persons.
Templates are compared to those in the
database by a robust correlation
process to compensate for possible
registration errors. In particular, the
template is compared to database
images over a range of 25 different
alignments corresponding to spatial
shifts between +2 and -2 pixels in both
the horizontal and vertical directions.
While absolute-difference correlation
is more efficient than multiplication
based correlation, it is still a time
consuming process. Each set of four
templates consists of roughly 10,000
pixels. Thus each template comparison
over the 25 different alignments
requires approximately 250,000
absolute value and sum operations. An
Intel 80486/66DX2 running optimized
assembly code can only perform
roughly 5 million integer absolute
value and sum operations per second
including data movement and other
overhead. This would seem to limit the
database search rate to 20 template sets
per second, severely constraining the
size of the database possible for real-time
operation. One remedy is to search the
database first at reduced resolution; the
results are not accurate enough to
generate a definitive answer, but can be
used to narrow the individual's identity
to ten candidates in a fraction of the time
that a full-resolution search requires. The
top ten candidates are then compared
at full resolution to the unknown
individual to yield the final result. In
this way, the coarse-to-fine search keeps
the total correlation time manageable
without sacrificing final accuracy.
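For concreteness, the robust absolute-difference correlation over the 25 alignments might look like the following numpy sketch (a simplified software model, not the VLSI correlator or the authors' assembly code; the score convention here is "smaller is better"):

```python
import numpy as np

# Robust absolute-difference correlation of a probe template against a database
# template over all spatial shifts of -2..+2 pixels in each direction (25 alignments).

def abs_diff_score(probe, reference, max_shift=2):
    """Return the best (smallest) sum of absolute differences over all alignments."""
    h, w = probe.shape
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(reference, shift=(dy, dx), axis=(0, 1))
            # compare only the central region, ignoring the wrapped border
            core = (slice(max_shift, h - max_shift), slice(max_shift, w - max_shift))
            score = np.abs(probe[core] - shifted[core]).sum()
            best = min(best, score)
    return best

probe = np.random.rand(34, 34)
reference = np.roll(probe, shift=(1, -2), axis=(0, 1))   # same face, slightly misaligned
print(abs_diff_score(probe, reference))                   # ~0: the best alignment recovers the match
```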
POSTPROCESSING OF
CORRELATION SCORES:
The correlation of the
normalized extracted templates from
the target image with the database
templates generates a list of the top ten
candidates and their correlation scores.
The task of the postprocessing stage is
to interpret the corresponding
correlation scores and determine if
they indicate a match with someone
previously stored in the image
database. Typically this is not a clear-cut
decision; therefore decisions have
an associated measure of confidence.
The goal is to recognize as many
images as possible while missing and
mistakenly recognizing as few images
as possible. An image is recognized if
the system correctly identifies it as
corresponding to someone who is in
the database. An image is missed if the
user is in the database and the system
fails to identify him or her. Finally, an
image is mistakenly recognized if the
system claims that the user
corresponds to a person in the
database, and the user is actually a
different person in the database or is
not represented in the database.
Postprocessing attempts to maximize
the recognition rate while minimizing
the mistaken and mis-recognition rate
by interpreting the raw correlation
scores with an intelligent and robust
decision making process.
The 15 correlation scores and
pseudo-scores for each of the ten
candidates must then be interpreted to
determine which, if any, of the
candidates match the input image.
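A hedged sketch of how such a threshold-based decision might be expressed, using the normalized-score convention described later in the paper (rejection threshold 0, acceptance threshold 100); the surrounding logic and names are a hypothetical illustration, not the authors' rule:

```python
# Hypothetical post-processing decision rule over the top candidates' scores.
ACCEPT_THRESHOLD = 100    # normalized score above which a candidate is accepted
REJECT_THRESHOLD = 0      # normalized score below which a candidate is rejected

def decide(candidate_scores):
    """candidate_scores: dict of person -> normalized pseudo-score (higher = better).

    Returns (decision, person): 'match' if the best candidate clears the acceptance
    threshold, 'no_match' if all candidates fall below rejection, else 'uncertain'."""
    best_person, best_score = max(candidate_scores.items(), key=lambda kv: kv[1])
    if best_score >= ACCEPT_THRESHOLD:
        return "match", best_person
    if all(score <= REJECT_THRESHOLD for score in candidate_scores.values()):
        return "no_match", None
    return "uncertain", None        # e.g. ask the user to try again

print(decide({"alice": 131, "bob": 42, "carol": -10}))   # -> ('match', 'alice')
print(decide({"alice": -5, "bob": -20}))                  # -> ('no_match', None)
```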
SYSTEM ARCHITECTURE:
The system hardware consists
of an IBM PC 80486/DX2, a
commercial frame grabber, video
camera, and custom VLSI hardware
(see Figure 6). The goal of the
hardware system architecture is to
extract the highest performance from
those components.
Software implementation of the
face recognition system described
above on an IBM PC will be limited
by a computational bottleneck
associated with the image database
correlation. Benchmarks on an Intel
80486/66DX2 system (see Table I)
reveal that real-time performance in
software alone would not be possible
with a moderately sized database of
500 images. Thus, in order to achieve
real-time performance, a special
purpose VLSI image correlator was
implemented and integrated into the
system as a coprocessor board on the
ISA bus.
The image preprocessing and
template extraction are performed by
the 80486, the template correlation
with the database is accelerated by
using the VLSI image correlator, and
postprocessing is subsequently
performed by the 80486. The 80486
provides a flexible platform for general
computation while the VLSI image
correlator is fully optimized for a
single operation, template correlation
with the image database. The database
correlation task is to compute the
correlation of one template set against
the entire database. The user s
templates remain constant throughout
the entire operation while the database
templates vary as each known
individual is considered in succession.
Thus, the user's templates can be
cached using local SRAM on the
image coprocessor board to optimize
the usage of the 8 MByte/sec ISA bus
bandwidth (see Figure 7). Furthermore,
since the image template data are only
8 bits wide, two templates can be
transferred in parallel to take full
advantage of the 16 bit data bus.
Thus, the VLSI correlator chip
is designed with two independent
image correlators such that two
database entries can be correlated
simultaneously over all 25 possible
alignments. In this way, the correlation
time per 4KByte template is reduced to
0.9 ms/template, which increases the
possible throughput of the VLSI image
coprocessor system to about 1000
templates/sec. Thus, a moderately
sized database of 500 persons (a few
thousand images) can be completely
correlated in a few seconds.
The actual VLSI chip contained
two image correlators and was
fabricated on a 6.8mm × 6.8mm die in
a standard double metal, 2µm CMOS
process through MOSIS (see Figure
10). The MAGIC layout editor was
used to realize the fully custom design
of the 60,000-transistor chip.
SYSTEM PERFORMANCE:
The real-time face recognition
system user-interface is menu-driven
and user-friendly. There are many
additional features that were
incorporated for rapid debugging,
building of image databases, and
development of more advanced
recognition techniques. In all, the
system software represents a large
portion of the research effort and is
implemented with approximately
40,000 lines of C and 80x86 assembly
code. A typical screen capture of the
real-time face recognition system is
shown in Figure 11. The system
initially locates the eyes of the user as
shown by concentric circles overlaid
on the original image. Subsequently,
four small templates are extracted and
compared to the database. The pseudo-scores
of the top five candidates are
shown at the bottom of the figure. The
highlighted numbers indicate scores
that exceed the threshold for a positive
match. The darkened numbers indicate
scores that exceed the threshold for a
negative match. All match scores are
normalized and offset such that the
rejection threshold was 0 and the
acceptance threshold was 100. Timing
and memory requirements are shown
in the text overlay below the extracted
templates.
The speed of the system is
measured from when the image is
presented to when the user is notified
of identification. During this time the
system must digitize the video image
through the frame grabber, locate the
eyes, extract and normalize the
templates, search the database via
correlation, and interpret the
correlation scores. The preprocessing
and template extraction phase is
performed using only the frame
grabber and 80486/66DX2 in
approximately 1.8 seconds and is
independent of the database size. A
typical timing breakdown for
preprocessing and template extraction
is shown in Table II.
The template correlation is
performed by the VLSI image
correlator and depends on the size of
the database. Typical database
correlation time was approximately 0.3
seconds for a database of 173 images.
Postprocessing is performed by the
80486 but is computationally quite
simple and does not represent a
significant portion of computing time.
The recognition performance of
the system is highly dependent on the
database of known persons and the
testing set. Cross-validation is a
common technique for measuring
recognition performance. The system
was able to achieve an 88%
recognition rate, a 93% correct
matching with the top candidate, and a
97% correct matching with the top 3
candidates under cross-validation with
a moderately varied database of 173
images of 34 persons.
A user who is missed can simply tilt his
or her head or move slightly so as to be
recognized more readily on the next
trial a few seconds later. Hence it is
more important that the system does
not mistakenly recognize a user as
someone that they are not, than to miss
the person and claim that they are not
in the database. During actual usage,
the system can sometimes require more
than one trial, but recognition rarely
takes more than three or four trials.
Additionally, mistaken recognition is
also quite rare. As the recognition and
rejection thresholds are adjustable, the
trade-off between missing and
mistakenly recognizing can be
controlled to suit a particular
application.
CONCLUSIONS:
A real-time face recognition
system can be developed by making
effective use of the computing power
available from an IBM PC 80486 and
by implementing a special purpose
VLSI image correlator. The complete
system requires 2 to 3 seconds to
analyze and recognize a user after
being presented with a reasonable
frontal facial image. This level of
performance was achieved through
careful system design of both software
and hardware. Issues ranging from
algorithm development to software and
hardware implementation, including
custom digital VLSI design, were
addressed in the design of this system.
This approach of extremely focused
system software and hardware co-design
can also be effectively applied
to a wide range of high performance
computing applications.
REFERENCES:
Robert J. Baron, "Mechanisms of human facial recognition".
Google.com
Wikipedia.com