Textbook Cie 2 Syllabus
configurations, i.e., position and orientation of the parts, before the robot starts
welding.
There could be other scenarios, such as identifying the color code of a particular
car model before the robot paints it with that color, etc.
wiper and end leads of the device changes in proportion to the displacement, x and θ
for linear and angular potentiometers, respectively.
3. LVDT The Linear Variable Differential Transformer (LVDT) is one of the most
used displacement transducers, particularly when high accuracy is needed. It generates
an ac signal whose magnitude is related to the displacement of a moving core, as
indicated in Fig. 4.4. The basic concept is that of a ferrous core moving in a magnetic
field, the field being produced in a manner similar to that of a standard transformer.
There is a central core surrounded by two identical secondary coils and a primary
coil, as shown in Fig. 4.4. As the core changes position with respect to the coils, it
changes the magnetic field, and hence the voltage amplitude in the secondary coil
changes as a linear function of the core displacement over a considerable segment. A
Rotary Variable Differential Transformer (RVDT), which operates on the same principle
as the LVDT, is also available, with a range of approximately ±40°.
4. Synchros and Resolvers While encoders give digital output, synchros and
resolvers provide analog signals as their output. They consist of a rotating shaft
(rotor) and a stationary housing (stator). Their signals must be converted into
digital form through an analog-to-digital converter before the signal is fed to the
computer.
What are arc-minute and arc-second?
They are measures of small angles: one degree = 60 arc-minutes, and
one arc-minute = 60 arc-seconds.
As illustrated in Fig. 4.5, synchros and resolvers employ single-winding rotors
that revolve inside fixed stators. In a simple synchro, the stator has three windings
oriented 120° apart and electrically connected in a Y-connection. Resolvers differ
from synchros in that their stators have only two windings oriented at 90°. Because
synchros have three stator coils in a 120° orientation, they are more difficult than
resolvers to manufacture and are, therefore, more costly.
Modern resolvers, in contrast, are available in a brushless form that employs a
rotary transformer to couple signals from the stator to the rotor. The primary
winding of this transformer resides on the stator, and the secondary on the rotor.
Other resolvers use more traditional brushes or slip rings to couple the signal into the
rotor winding. Brushless resolvers are more rugged than synchros because there are
no brushes to break or dislodge, and the life of a brushless resolver is limited only by
its bearings. Most resolvers are specified to work over 2 V to 40 V rms (root mean
square) and at frequencies from 400 Hz to 10 kHz. Angular accuracies range from 5
arc-minutes to 0.5 arc-minutes.
of the angle θ between the rotor-coil axis and the stator-coil axis. In the case of a
synchro, the voltage induced across any pair of stator terminals will be the vector
sum of the voltages across the two connected coils. For example, if the rotor of a
synchro is excited with a reference voltage, V sin(ωt), across its terminals R1 and R2,
the stator terminals will see voltages denoted as V0 in the form:
V0(S1 − S3) = V sin(ωt) sin θ (4.2a)
V0(S3 − S2) = V sin(ωt) sin (θ + 120°) (4.2b)
V0(S2 − S1) = V sin(ωt) sin (θ + 240°) (4.2c)
where S1, S2, etc., denote the stator terminals. Moreover, V and ω are the input
amplitude and frequency, respectively, whereas θ is the shaft angle. In the case of a
resolver, with a rotor ac reference voltage of V sin(ωt), the stator terminal voltages
will be
V0(S1 − S3) = V sin(ωt) sin θ (4.3a)
V0(S4 − S2) = V sin(ωt) sin (θ + 90°) = V sin(ωt) cos θ (4.3b)
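The two quadrature resolver outputs of Eqs (4.3a–b) carry enough information to recover the shaft angle unambiguously with a two-argument arctangent. The sketch below (function names and the sampling instant wt = π/2 are illustrative choices, not from the text) evaluates Eqs (4.2) and (4.3) at the carrier peak:

```python
import math

def synchro_stator_voltages(theta, V=1.0, wt=math.pi / 2):
    """Instantaneous stator voltages of a simple synchro, Eqs (4.2a-c).

    theta is the shaft angle in radians; V sin(wt) is the rotor
    reference carrier, here sampled at its peak (wt = pi/2).
    """
    carrier = V * math.sin(wt)
    return (carrier * math.sin(theta),                       # V0(S1 - S3)
            carrier * math.sin(theta + math.radians(120)),   # V0(S3 - S2)
            carrier * math.sin(theta + math.radians(240)))   # V0(S2 - S1)

def resolver_angle(theta, V=1.0, wt=math.pi / 2):
    """Recover the shaft angle from the two resolver outputs, Eqs (4.3a-b)."""
    carrier = V * math.sin(wt)
    v_sin = carrier * math.sin(theta)    # V0(S1 - S3)
    v_cos = carrier * math.cos(theta)    # V0(S4 - S2)
    return math.atan2(v_sin, v_cos)      # theta back, in (-pi, pi]
```

Because the resolver outputs are in quadrature, a single atan2 recovers the angle over a full turn; the synchro's three 120°-spaced voltages always sum to zero, a useful consistency check on the wiring.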
As said earlier, the outputs of these synchros and resolvers must first be digitized.
To do this, analog-to-digital converters are used. These are typically 8-bit or 16-
bit. An 8-bit converter means that the whole range of analog signals will be converted
into a maximum of 2⁸ = 256 discrete values.
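To make the quantization concrete, the sketch below (a hypothetical full-scale range of 360° for a shaft-angle signal, not a figure from the text) computes the smallest step an n-bit converter can resolve:

```python
def adc_resolution(n_bits, full_scale):
    """Smallest distinguishable step of an n-bit ADC over full_scale."""
    levels = 2 ** n_bits            # e.g. 2**8 = 256 discrete values
    return full_scale / levels

step_8  = adc_resolution(8, 360.0)    # ~1.41 deg per count
step_16 = adc_resolution(16, 360.0)   # ~0.0055 deg, i.e. ~0.33 arc-minute
```

Note that a 16-bit conversion of a full turn already resolves about a third of an arc-minute, comparable to the best resolver accuracies quoted above.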
4.2.2 Velocity Sensors
Velocity or speed sensors measure speed either by taking consecutive position
measurements at known time intervals and computing the time rate of change of the
position values, or by finding it directly based on different principles.
1. All Position Sensors Basically, all position sensors, when used over known
time intervals, can give velocity, e.g., the number of pulses given by an incremental
position encoder divided by the time consumed in counting them. But this scheme
puts some computational load on the controller, which may be busy with other
computations.
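The encoder-based scheme described above can be sketched in a few lines. The numbers are hypothetical (a 1000-pulse-per-revolution encoder sampled every 0.1 s):

```python
def encoder_speed(pulse_count, pulses_per_rev, dt):
    """Average shaft speed in rev/s from an incremental encoder.

    pulse_count is accumulated over a sampling interval of dt seconds,
    with an encoder that emits pulses_per_rev pulses per revolution.
    """
    revolutions = pulse_count / pulses_per_rev
    return revolutions / dt

# 500 pulses from a 1000-pulse/rev encoder counted over 0.1 s:
speed = encoder_speed(500, 1000, 0.1)   # 5.0 rev/s
```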
2. Tachometer Such sensors can directly find the velocity at any instant of time,
without much computational load. A tachometer measures the speed of rotation of
an element. There are various types of tachometers in use, but a simpler design is
based on Faraday's law of induction, which states that 'the voltage produced is
proportional to the rate of change of flux linkage.' Here, a conductor (basically a
coil) is attached to the rotating element, which rotates in a magnetic field (stator).
As the speed of the shaft increases, the voltage produced at the coil terminals also
increases. Alternatively, as shown in Fig. 4.6, one can put a magnet on the rotating
shaft and a coil on the stator. The
voltage produced is proportional to the speed of rotation of the shaft. This information
is digitized using an analog-to-digital converter and passed on to the computer.
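Since the generated emf is proportional to shaft speed, converting the digitized voltage to speed needs only a calibration constant. A minimal sketch (the constant k_emf is a hypothetical calibration value, not from the text):

```python
def tacho_speed(voltage, k_emf):
    """Shaft speed (rad/s) from tachometer output.

    The tachometer emf obeys V = k_emf * omega, so omega = V / k_emf,
    where k_emf is the calibration constant in volts per rad/s.
    """
    return voltage / k_emf

# A tachometer with k_emf = 0.05 V per rad/s reading 2.5 V:
omega = tacho_speed(2.5, 0.05)   # 50 rad/s
```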
3. Hall-effect Sensor Another velocity-measuring device is the Hall-effect
sensor, whose principle is described next. If a flat piece of conductor material, called
a Hall chip, is subjected to a potential difference across its two opposite faces, as
indicated in Fig. 4.7, then the voltage across the perpendicular faces is zero. But if a
magnetic field is imposed at right angles to the conductor, a voltage is generated on the two
other perpendicular faces. The higher the field value, the higher the voltage level. If one
provides a ring magnet, the voltage produced is proportional to the speed of rotation
of the magnet.
where F is the force, ΔR is the change in resistance of the strain gauge, A is the cross-
sectional area of the member on which the force is applied, E is the elastic
modulus of the strain-gauge material, R is the original resistance of the gauge, and
G is the gauge factor of the strain gauge. Then, the acceleration a is the force divided
by the mass of the accelerating object m, i.e.,
a = F/m = ΔR A E/(R G m) (4.5)
What is Gauge Factor?
It is a measure of sensitivity for strain gauges, defined by
G = (1/ε)(ΔR/R)
where G is the gauge factor, and ε is the strain.
It is pointed out here that the velocities and accelerations that are measured using
position sensors require differentiations, which are generally not desirable, as any
noise in the measured data will be amplified. Alternatively, the use of integrators
to obtain the velocity from the acceleration, and consequently the position, is
recommended. Integrators tend to suppress the noise.
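The point that differentiation amplifies noise while integration suppresses it can be demonstrated numerically. In this sketch (all numbers are illustrative), a constant position corrupted by small Gaussian noise is differentiated, and the same noise, treated as an acceleration signal, is integrated:

```python
import random

random.seed(0)                      # reproducible illustrative run
dt = 0.001                          # sampling interval, s
noise = [random.gauss(0, 0.01) for _ in range(1000)]

# Differentiation: a constant (true) position corrupted by small noise.
pos = [1.0 + e for e in noise]
vel = [(pos[i + 1] - pos[i]) / dt for i in range(len(pos) - 1)]
max_vel_err = max(abs(v) for v in vel)   # noise amplified by 1/dt

# Integration: zero (true) acceleration corrupted by the same noise.
v = 0.0
for a in noise:
    v += a * dt                     # each noise sample is scaled DOWN by dt
int_err = abs(v)
```

With these numbers the differentiated velocity error is orders of magnitude larger than the integrated one: differentiation multiplies each noise sample by 1/dt, whereas integration scales it by dt and lets successive samples average out.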
Example 4.2 Change in Resistance
If the gauge factor G = 2, the resistance of the undeformed wire R = 100 Ω, and the
strain ε = 10⁻⁶, then the change in resistance is given by
ΔR = GεR = 2 × 10⁻⁶ × 100 = 0.0002 Ω (4.6)
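Equations (4.5) and (4.6) can be checked with a short script. The numbers in the acceleration call (cross-section A, modulus E, mass m) are hypothetical, chosen only to exercise the formula:

```python
def delta_resistance(G, strain, R):
    """Change in gauge resistance, Eq. (4.6): DR = G * strain * R."""
    return G * strain * R

def acceleration(dR, A, E, R, G, m):
    """Acceleration from Eq. (4.5): a = F/m = (DR * A * E) / (R * G * m)."""
    return dR * A * E / (R * G * m)

# Example 4.2: G = 2, strain = 1e-6, R = 100 ohm -> DR = 0.0002 ohm
dR = delta_resistance(2, 1e-6, 100)

# Hypothetical member: A = 1e-6 m^2, E = 2e11 Pa, mass m = 0.1 kg
a = acceleration(dR, 1e-6, 2e11, 100, 2, 0.1)   # 2.0 m/s^2
```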
capacitive switch has the same four elements as the inductive sensor, i.e., sensor (the
dielectric media), oscillator circuit, detector circuit, and solid-state output circuit.
The oscillator circuit in a capacitive switch operates like one in an inductive
switch. The oscillator circuit includes capacitance from the external target plate
and the internal plate. In a capacitive sensor, the oscillator starts oscillating when
sufficient feedback capacitance is detected. Major characteristics of the capacitive
proximity sensors are as follows:
• They can detect non-metallic targets.
• They can detect lightweight or small objects that cannot be detected by
mechanical limit switches.
• They provide a high switching rate for rapid response in object-counting
applications.
• They can detect liquid targets through nonmetallic barriers (glass, plastics, etc.).
• They have a long operational life with a virtually unlimited number of operating
cycles.
• The solid-state output provides a bounce-free contact signal.
Capacitive proximity sensors have two major limitations:
• The sensors are affected by moisture and humidity, and
• They must have extended range for effective sensing.
Capacitive proximity sensors have a greater sensing range than inductive
proximity sensors. Sensing distance for capacitive switches is a matter of plate
area, as coil size is for inductive proximity sensors. Capacitive sensors basically
measure a dielectric gap. Accordingly, it is desirable to be able to compensate for
the target and application conditions with a sensitivity adjustment for the sensing
range. Most capacitive proximity sensors are equipped with a sensitivity adjustment
potentiometer.
2. Semiconductor Displacement Sensor As shown in Fig. 4.14, a
semiconductor displacement sensor uses a semiconductor Light Emitting Diode
(LED) or laser as a light source, and a Position-Sensitive Detector (PSD). The laser
beam is focused on the target by a lens. The target reflects the beam, which is then
focused onto the PSD, forming a beam spot. The beam spot moves on the PSD as
the target moves. The displacement of the workpiece can then be determined by
detecting the movement of the beam spot.
[Fig. 4.14 courtesy: http://www.sensorcentral.com/displacement/laser02.php]
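The text does not give the PSD geometry, but in a common laser-triangulation arrangement the beam-spot offset on the detector and the target distance are related by similar triangles. A hypothetical sketch (baseline, focal_length, and spot_offset are illustrative parameters, not values from the text):

```python
def target_distance(baseline, focal_length, spot_offset):
    """Distance to the target from the beam-spot position on the PSD.

    By similar triangles in a triangulation layout:
        z = baseline * focal_length / spot_offset
    All three parameters are hypothetical geometry values.
    """
    return baseline * focal_length / spot_offset

# baseline 50 mm, lens focal length 20 mm, spot offset 1 mm -> 1 m
z = target_distance(0.05, 0.02, 0.001)
```

The inverse relation between spot offset and distance is why such sensors are most accurate at short range.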
4.4 VISION
Vision can be defined as the task of extracting information about the external
world from light rays imaged by a camera or an eye. Vision, also referred to in the
literature as computer vision or machine vision or robot vision, is a major subject
of research and of many textbooks, e.g., by Haralick and Shapiro (1992, 1993), and
others. A good coverage of the topic has also appeared in Niku (2001). There are
also dedicated journals, e.g., Computer Vision, Graphics, and Image Processing, and
conferences in the area of robot vision. The area is so vast that it cannot be covered
in one section or chapter of a book. However, an attempt is made here to introduce
the basic concepts and techniques so that one is able to understand the systems and
methodologies used in robot vision. For detailed study and research, other references
in the area should be consulted.
Computer Vision vs. Computer Graphics
Computer vision can be thought of as 'inverse computer graphics.' Computer
graphics deals with how to generate images from a specification of the visual scene
(e.g., objects, scene structures, light sources), whereas computer vision inverts this
process to infer the structure of the world from the observed image(s).
Note in Fig. 4.1 that the vision systems or vision sensors are classified as external
noncontact type. They are used by robots to let them look around and find the parts,
for example, picking and placing them at appropriate locations. Earlier, fixtures
were used with robots for accurate positioning of the parts. Such fixtures are very
expensive. A vision system can provide an economical alternative. Other tasks
of vision systems used with robots include the following:
1. Inspection Checking for gross surface defects, discovery of flaws in labeling,
verification of the presence of components in assembly, measuring for dimensional
accuracy, checking the presence of holes and other features in a part.
2. Identification Here, the purpose is to recognize and classify an object rather
than to inspect it. Inspection implies that the part must be either accepted or rejected.
3. Visual Servoing and Navigation Control The purpose here is to direct the
actions of the robot based on its visual inputs, for example, to control the trajectory
of the robot’s end-effector toward an object in the workspace. Industrial applications
of visual servoing are part positioning, retrieving parts moving along a conveyor,
seam tracking in continuous arc welding, etc.
All of the above applications in some way require determining the
configuration of the objects, the motion of the objects, the reconstruction of the 3D
geometry of the objects from their 2D images for measurements, and the building of
maps of the environment for a robot's navigation. The coverage of a vision system
ranges from a few millimetres to tens of metres, with either narrow or wide angles,
depending upon the
system needs and design. Figure 4.15 shows a typical visual system connected to an
industrial robot.
The task of the camera as a vision sensor is to measure the intensity of the light
reflected by an object, as indicated in Fig. 4.16, using a photosensitive element
termed a pixel (or photosite). A pixel is capable of transforming light energy into
electric energy. Sensors of different types, like CCD, CMOS, etc., are available,
depending on the physical principle exploited to realize the energy transformation.
Depending on the application, the camera could be RS-170/CCIR, NTSC/PAL (these
are, respectively, the American RS-170 monochrome, European/Indian CCIR
monochrome, NTSC color, and PAL color television standard signals produced by
video cameras), progressive scan, variable scan, or line scan. Five major system parameters which
govern the choice of camera are field of view, resolution, working distance, depth of
field, and image data acquisition rate. As a rule of thumb, for size measurement, the
sensor should have a number of pixels at least twice the ratio of the largest to smallest
object sizes of interest.
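The rule of thumb above can be turned into a quick check. The part sizes below are hypothetical: parts from 2 mm to 50 mm must share the same field of view.

```python
import math

def min_pixels_for_sizing(largest, smallest):
    """Rule of thumb: the sensor needs at least twice the ratio of the
    largest to the smallest object size of interest, along one axis."""
    return math.ceil(2 * largest / smallest)

# Parts from 2 mm to 50 mm in the same field of view:
n = min_pixels_for_sizing(50, 2)   # at least 50 pixels across
```

The factor of two ensures that even the smallest object of interest spans at least two pixels, the minimum needed to register its extent at all.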