Module - 3-1
Module - 3
Sensor Classification, Internal Sensors,
External Sensors, Vision, Signal
Conditioning.
Sensors in robots
• Sensors in robots are like our nose, ears, mouth, and skin, whereas vision or robot vision can be
thought of as our eyes. Robots, like humans, must gather extensive information about their
environment in order to function effectively.
• They must pick up an object and know it has been picked up. As the robot arm moves through the
3-dimensional Cartesian space, it must avoid obstacles and approach items to be handled at a
controlled speed.
• Some objects are heavy, others are fragile, and others are too hot to handle. These characteristics
of objects and the environment must be recognized and fed to the computer that controls the
robot’s movement.
Sensors must at least provide the following
functions:
• 1. Safe Operation: Sensors must protect human workers who work in the
vicinity of the robot or other equipment. For example, one can provide a
sensor on the floor of a work cell where a robot is working so that if
anybody steps into the cell, the robot’s power is switched off.
• 2. Interlocking: This is required to coordinate the sequence of operations
on a component. For example, unless the turning operation on a component
is complete, it should not be transferred to the conveyor. (A minimal sketch
of these first two functions is given after this list.)
Sensors must at least provide the following
functions:
• 3. Inspection: This is essential for quality control. For example, one can use a
vision system to measure the length of a component to check if it is within the
acceptable tolerance or not.
• 4. Part Configuration: If a robot is used to weld two parts of a car body, the
sensors must identify the correct configurations, i.e., position and orientation of
the parts, before the robot starts welding.
• There could be other scenarios like identifying the color code of a particular car
model before painting with that color is done by the robot, etc.
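The following is a minimal Python sketch, not taken from the text, of how the first two functions (safe operation and interlocking) might be combined in a simple work-cell supervisor; the sensor names and logic are illustrative assumptions.

```python
# Minimal sketch (assumed sensors and logic): robot power is cut if the
# floor-mat sensor detects a person, and a part is transferred to the
# conveyor only after the turning operation is done.

def control_step(mat_sensor_triggered: bool, turning_done: bool) -> dict:
    """One cycle of a simple work-cell supervisor."""
    robot_power_on = not mat_sensor_triggered               # safe operation
    transfer_to_conveyor = robot_power_on and turning_done  # interlocking
    return {"robot_power_on": robot_power_on,
            "transfer_to_conveyor": transfer_to_conveyor}

print(control_step(mat_sensor_triggered=False, turning_done=True))
print(control_step(mat_sensor_triggered=True,  turning_done=True))
```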
SENSOR CLASSIFICATION
• The major capabilities required by a robot are as follows:
• Simple Touch: The presence or absence of an object.
• Taction or Complex Touch: The presence of an object plus some information on its size and
shape.
• Simple Force: Measured force along a single axis.
• Complex Force: Measured force along two or more axes.
• Proximity: Noncontact detection of an object.
• Simple Vision: Detection of edges, holes, corners, and so on.
• Complex Vision: Recognition of shapes.
SENSOR CLASSIFICATION
• Based on the type of signals a sensor or transducer receives and processes, it can be
classified as analog or digital.
• In analog sensors, with the variation of input there is a continuous variation of output,
whereas in case of digital sensors, the output is of digital or discrete nature.
• Examples of analog sensors - potentiometers, tacho-generators located at the joints, and
strain-gauge-based sensors located at the end-effector of a robot
• Examples of digital sensors - encoders located at the robot’s joints
• In this book, sensors are, however, classified based on what they sense, i.e., the internal or
external state of the robot, etc., as shown in Fig. 4.1.
INTERNAL SENSORS
• Internal sensors, as the name suggests, are used to measure internal
state of a robot, i.e., its position, velocity, acceleration, etc., at a particular
instant.
• Based on this information, the control command is decided by the
controller.
• Depending on the quantities it measures, a sensor is termed a
position, velocity, acceleration, or force sensor.
Position Sensors
• Position sensors measure the position of each joint, i.e., joint angle of a
robot. From these joint angles, one can find the end-effector
configuration, namely, its position and orientation.
• 1. Encoder: The encoder is a digital optical device that converts motion
into a sequence of digital pulses. By counting a single bit or by decoding a
set of bits, the pulses can be converted to relative or absolute
measurements. Thus, encoders are of incremental or absolute type.
Further, each type may again be linear or rotary.
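As a rough illustration of converting encoder pulses into a joint angle, the following sketch assumes a hypothetical incremental encoder with 1024 lines per revolution and 4x quadrature decoding; these values are assumptions, not figures from the text.

```python
# Minimal sketch (assumed resolution): converting incremental encoder counts
# into a relative joint angle.

PULSES_PER_REV = 1024      # assumed encoder line count
QUADRATURE_FACTOR = 4      # counts per line with 4x quadrature decoding

def counts_to_angle_deg(counts: int) -> float:
    """Return the relative joint angle (degrees) for a given signed count."""
    counts_per_rev = PULSES_PER_REV * QUADRATURE_FACTOR
    return 360.0 * counts / counts_per_rev

print(counts_to_angle_deg(2048))  # 180.0 degrees
```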
Incremental Rotary Encoder:
• It is similar to the linear incremental encoder with the difference that the
gratings are now on a circular disc, as in Fig. 4.2(c).
• The common value of the width of the transparent spaces is 20 microns.
There are two sets of grating lines on two different circles; these detect the
direction of rotation, and one can also enhance the accuracy of the
sensor.
• There is another circle which contains only one grating mark. It is used
for counting full revolutions.
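The two offset grating tracks are commonly read as quadrature channels A and B; the following is a minimal sketch (an assumption about the read-out, not a detail given in the text) of how their transition sequence yields the direction of rotation.

```python
# Minimal sketch: a quadrature decoder. The transition of the (A, B) bit pair
# between successive samples determines whether the count goes up or down.

# Valid transitions in one direction, as (previous, current) pairs of (A, B) bits.
_FORWARD = {(0b00, 0b01), (0b01, 0b11), (0b11, 0b10), (0b10, 0b00)}
_REVERSE = {(0b00, 0b10), (0b10, 0b11), (0b11, 0b01), (0b01, 0b00)}

def update_count(count: int, prev_ab: int, curr_ab: int) -> int:
    """Increment or decrement the position count from successive A/B samples."""
    if (prev_ab, curr_ab) in _FORWARD:
        return count + 1
    if (prev_ab, curr_ab) in _REVERSE:
        return count - 1
    return count  # no change, or an invalid (missed) transition
```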
Absolute Rotary Encoder
• Similar to the absolute linear encoder, the circular disk is divided into a
number of circular strips and each strip has definite arc segments, as
shown in Fig. 4.2(d). This sensor directly gives the digital output
(absolute).
• The encoder is directly mounted on the motor shaft or with some gearing
to enhance the accuracy of measurement. To avoid noise in this encoder,
a Gray code is sometimes used.
• A Gray code, unlike binary codes, allows only one of the binary bits in a
code sequence to change between radial lines.
• It prevents confusing changes in the binary output of the absolute
encoder when the encoder oscillates between points.
• A sample Gray code is given in Table 4.1 for some numbers. Note the
difference between the Gray and binary codes. The basic arrangement of
the rotary encoder is shown in Fig. 4.2(e).
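A short sketch of the Gray-code idea described above: successive values differ in only one bit, and a simple conversion recovers the binary value read from the disc.

```python
# Minimal sketch: converting between binary and Gray code, as used to read an
# absolute encoder disc without ambiguous multi-bit transitions.

def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Successive Gray codes differ in exactly one bit, e.g. 3 -> 0b010, 4 -> 0b110.
print(binary_to_gray(3), binary_to_gray(4))   # 2 (0b010), 6 (0b110)
print(gray_to_binary(0b110))                  # 4
```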
2. Potentiometer
• A potentiometer, also referred to simply as a pot, is a variable-resistance device that expresses linear
or angular displacements in terms of voltage, as shown in Figs. 4.3(a-b), respectively.
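A minimal sketch of reading a rotary potentiometer follows, assuming an illustrative 5 V supply and 300 degrees of electrical travel; these values are assumptions, not figures from the text.

```python
# Minimal sketch (assumed values): a rotary potentiometer expressing angular
# displacement as a voltage proportional to the wiper position.

V_SUPPLY = 5.0         # assumed supply voltage across the pot (V)
FULL_TRAVEL_DEG = 300  # assumed electrical travel of the pot (degrees)

def voltage_to_angle(v_wiper: float) -> float:
    """Return the angular displacement (degrees) for a measured wiper voltage."""
    return FULL_TRAVEL_DEG * v_wiper / V_SUPPLY

print(voltage_to_angle(2.5))  # 150.0 degrees at mid-travel
```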
• The two sensor types discussed next are the following:
• 1. Proximity Sensor
• 2. Semiconductor Displacement Sensor
Proximity Sensor:
• Proximity sensing is the technique of detecting the presence or absence
of an object with an electronic noncontact-type sensor.
• Proximity sensors are of two types, inductive and capacitive.
• Inductive proximity sensors are used in place of limit switches for
noncontact sensing of metallic objects, whereas capacitive proximity
sensors are used on the same basis as inductive proximity sensors.
However, these can also detect nonmetallic objects.
Inductive Proximity Sensor
• All inductive proximity sensors consist of four basic
elements, namely, the following:
• Sensor coil and ferrite core
• Oscillator circuit
• Detector circuit
• Solid-state output circuit
• As shown in Fig. 4.12, the oscillator circuit
generates a radio-frequency electromagnetic field.
Inductive Proximity Sensor
• The field is centered around the axis of the ferrite core,
which shapes the field and directs it at the sensor face.
• When a metal target approaches the face and enters
the field, eddy currents are induced into the surface of
the target.
• This results in a loading or damping effect that causes a
reduction in amplitude of the oscillator signal.
• The detector circuit detects the change in the oscillator
amplitude.
Inductive Proximity Sensor
• The detector circuit will ‘switch on’ at a specific
operating amplitude.
• This signal ‘turns on’ the solid-state output circuit.
This is often referred to as the damped condition.
• As the target leaves the sensing field, the oscillator
responds with an increase in amplitude.
• As the amplitude increases above a specific value, it is detected by the
detector circuit, which is ‘switched off’ causing the output signal to return
to the normal or ‘off’ state.
• The sensing range of an inductive proximity sensor refers to the distance
between the sensor face and the target.
• It also indicates the shape of the sensing field generated through the coil
and the core.
• There are several mechanical and environmental factors that affect the
sensing range.
• The usual range is up to 10–15 mm but some sensors have ranges as high
as 100 mm.
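A minimal sketch of the detector behaviour described above follows, with assumed (illustrative) switch-on and switch-off amplitudes providing the hysteresis between the damped and undamped states.

```python
# Minimal sketch (assumed thresholds): the detector circuit of an inductive
# proximity sensor switches the output ON when the oscillator amplitude is
# damped below one level and OFF when it rises above a higher level.

ON_THRESHOLD = 0.4    # assumed normalized amplitude below which output turns ON
OFF_THRESHOLD = 0.6   # assumed amplitude above which output returns to OFF

def detector_output(amplitude: float, previous_output: bool) -> bool:
    """Return the solid-state output state for the current oscillator amplitude."""
    if amplitude < ON_THRESHOLD:
        return True          # damped condition: target present
    if amplitude > OFF_THRESHOLD:
        return False         # undamped: target absent
    return previous_output   # within the hysteresis band, hold the last state
```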
Capacitive Proximity Sensor
• A capacitive proximity sensor operates much like an
inductive proximity sensor. However, the means of
sensing is considerably different.
• Capacitive sensing is based on dielectric capacitance.
Capacitance is the property of insulators to store the
charge.
• A capacitor consists of two plates separated by an
insulator, usually called a dielectric. When the switch
is closed, a charge is stored on the two plates.
Capacitive Proximity Sensor
• The distance between the plates determines the
ability of the capacitor to store the charge and can
be calibrated as a function of stored charge to
determine discrete ON and OFF switching status.
• Figure 4.13 illustrates the principle of a capacitive
sensor.
• One capacitive plate is part of the switch, the sensor
face is the insulator, and the target is the other plate.
Ground is the common path.
• The capacitive switch has the same four elements as the inductive sensor,
i.e., sensor (the dielectric media), oscillator circuit, detector circuit, and
solid-state output circuit.
• The oscillator circuit in a capacitive switch operates like one in an
inductive switch.
• The oscillator circuit includes capacitance from the external target plate
and the internal plate. In a capacitive sensor, the oscillator starts
oscillating when sufficient feedback capacitance is detected.
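As an illustrative aside (not from the text), the parallel-plate relation C = ε0·εr·A/d shows why the sensed capacitance, and hence the tendency of the circuit to oscillate, grows as the target approaches the sensor face; the plate area and threshold below are assumptions.

```python
# Minimal illustrative sketch: parallel-plate capacitance as a function of the
# gap between the sensor face and the target. The threshold is an assumption.

EPS_0 = 8.854e-12   # permittivity of free space (F/m)

def plate_capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """Capacitance (F) of two parallel plates of given area and separation."""
    return EPS_0 * eps_r * area_m2 / gap_m

THRESHOLD_F = 1e-12  # assumed feedback capacitance at which oscillation starts

for gap_mm in (20.0, 10.0, 5.0, 2.0):
    c = plate_capacitance(area_m2=4e-4, gap_m=gap_mm * 1e-3)  # 2 cm x 2 cm plate
    state = "ON" if c > THRESHOLD_F else "OFF"
    print(f"gap = {gap_mm:4.1f} mm  C = {c * 1e12:5.2f} pF  output {state}")
```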
Major characteristics of the capacitive
proximity sensors are as follows:
• They can detect non-metallic targets.
• They can detect lightweight or small objects that cannot be detected by mechanical limit switches.
• They provide a high switching rate for rapid response in object counting applications.
• They can detect liquid targets through nonmetallic barriers (glass, plastics, etc.).
• They have long operational life with a virtually unlimited number of operating cycles.
• The solid-state output provides a bounce-free contact signal.
Capacitive proximity sensors have two major limitations.
• The sensors are affected by moisture and humidity, and
• They must have extended range for effective sensing.
• Capacitive proximity sensors have a greater sensing range than inductive
proximity sensors.
• Sensing distance for capacitive switches is a matter of plate area, as coil size is
for inductive proximity sensors. Capacitive sensors basically measure a dielectric
gap.
• Accordingly, it is desirable to be able to compensate for the target and
application conditions with a sensitivity adjustment for the sensing range. Most
capacitive proximity sensors are equipped with a sensitivity adjustment
potentiometer.
Semiconductor Displacement Sensor
• As shown in Fig. 4.14, a semiconductor
displacement sensor uses a semiconductor Light
Emitting Diode (LED) or laser as a light source,
and a Position-Sensitive Detector (PSD).
• The laser beam is focused on the target by a
lens. The target reflects the beam, which is then
focused on to the PSD, forming a beam spot.
• The beam spot moves on the PSD as the target
moves. The displacement of the workpiece can
then be determined by detecting the movement
of the beam spot.
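The displacement measurement is essentially optical triangulation; the following sketch uses an assumed baseline, stand-off distance, and focal length (none of which are given in the text) to show how the beam-spot position on the PSD varies with target displacement.

```python
# Minimal sketch of the triangulation idea (assumed geometry, not the exact
# optics of Fig. 4.14): as the target moves along the laser axis, the image of
# the illuminated spot shifts across the PSD.

BASELINE_MM = 30.0   # assumed distance between laser axis and lens centre
STANDOFF_MM = 100.0  # assumed nominal working distance to the target
FOCAL_MM = 20.0      # assumed focal length of the receiving lens

def spot_shift_mm(target_displacement_mm: float) -> float:
    """Approximate PSD spot shift for a given target displacement."""
    z0 = STANDOFF_MM
    z1 = STANDOFF_MM + target_displacement_mm
    # The spot position follows the viewing angle of the illuminated point.
    x0 = FOCAL_MM * BASELINE_MM / z0
    x1 = FOCAL_MM * BASELINE_MM / z1
    return x1 - x0

print(spot_shift_mm(5.0))   # small negative shift as the target moves away
```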
VISION
• Vision can be defined as the task of extracting information about the external
world from light rays imaged by a camera or an eye.
• Vision, also referred to in the literature as computer vision, machine vision, or
robot vision, is a major subject of research and of many textbooks.
• Note in Fig. 4.1 that the vision systems or vision sensors are classified as external
noncontact type. They are used by robots to let them look around and find the
parts, for example, picking and placing them at appropriate locations.
• Earlier, fixtures were used with robots for accurate positioning of the parts. Such
fixtures are very expensive.
• A vision system can provide an alternative, economical solution. Other tasks of vision
systems used with robots include the following:
• 1. Inspection Checking for gross surface defects, discovery of flaws in labeling,
verification of the presence of components in assembly, measuring for
dimensional accuracy, checking the presence of holes and other features in a
part.
• 2. Identification Here, the purpose is to recognize and classify an object rather
than to inspect it. Inspection implies that the part must be either accepted or
rejected.
• 3. Visual Servoing and Navigation Control The purpose here is to direct the
actions of the robot based on its visual inputs, for example, to control the
trajectory of the robot’s end-effector toward an object in the workspace.
Industrial applications of visual servoing are part positioning, retrieving parts
moving along a conveyor, seam tracking in continuous arc welding, etc.
• All of the above applications in some way require
• determination of the configuration of the objects,
• motion of the objects,
• reconstruction of the 3D geometry of the objects from their 2D images for
measurements, and
• building the maps of the environments for a robot’s navigation.
• The coverage of a vision system is from a few millimetres to tens of metres, with
either narrow or wide angles, depending upon the system needs and
design.
• Figure 4.15 shows a typical visual system connected to an industrial
robot.
Elements in a Vision Sensor
• In vision systems, the principal imaging component is a complete camera
including sensing array, associated electronics, output signal format, and
lens, as shown in Fig. 4.16.
• The task of the camera as a vision sensor is to measure the intensity of
the light reflected by an object, as indicated in Fig. 4.16, using a
photosensitive element termed pixel (or photosite).
• A pixel is capable of transforming light energy into electric energy.
• The sensors of different types like CCD, CMOS, etc., are available
depending on the physical principle exploited to realize the energy
transformation.
• Depending on the application, the camera could be RS-170/CCIR,
NTSC/PAL (these are the American RS-170 monochrome, European/Indian
CCIR monochrome, NTSC colour, and PAL colour television standard signals
produced by video cameras, respectively), progressive scan, variable
scan, or line scan.
• Five major system parameters which govern the choice of camera are
• field of view,
• resolution,
• working distance,
• depth of field, and
• image data acquisition rate.
• As a rule of thumb, for size measurement, the sensor should have a
number of pixels at least twice the ratio of the largest to smallest object
sizes of interest.
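A quick worked example of this rule of thumb, with assumed object sizes:

```python
# Minimal sketch of the rule of thumb quoted above: the sensor should have at
# least twice as many pixels (along one axis) as the ratio of the largest to
# the smallest object size of interest. The sizes below are assumed examples.
import math

def min_pixels(largest_mm: float, smallest_mm: float) -> int:
    """Minimum pixel count along one axis suggested by the rule of thumb."""
    return math.ceil(2 * largest_mm / smallest_mm)

# Example: features from 0.5 mm to 200 mm across the field of view.
print(min_pixels(200.0, 0.5))   # 800 pixels, so a 1024-pixel sensor would do
```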
Camera Systems
• As indicated in Fig. 4.16, a camera is a complex system comprising several
devices. Other than the photosensitive sensor, there are a shutter, a lens,
and analog preprocessing electronics.
• The lens is responsible for focusing the light reflected by the object on the plane
where the photosensitive sensor lies, called the image plane.
• In order to use the image to compute the position and/or orientation of an object,
the associated coordinate transformations must be applied.
• This is generally carried out by software residing inside a personal computer
which saves the images.
• There are two types of video cameras: analog and digital.
• Analog cameras are not common anymore. However, if one is used, a
frame grabber or video capture card, usually a special analog-to-digital
converter adapted for video signal acquisition in the form of a plug-in
board installed in the computer, is often required to interface the
camera to a host computer.
• The frame grabber stores the image data from the camera in on-board
or system memory, and performs sampling and digitizing of the analog
data as necessary.
• In some cases, the camera may output digital data, which is compatible
with a standard computer. So a separate frame grabber may not be
needed.
• Vision software is needed to create the program which processes the
image data.
• When an image has been analyzed, the system must be able to
communicate the result to control the process or to pass information to a
database. This requires a digital input/output interface.
• The human eye and brain can identify objects and interpret scenes under
a wide variety of conditions.
• Robot-vision systems are far less versatile.
• So the creation of a successful system requires careful consideration of all
elements of the system and precise identification of the goals to be
accomplished, which should be kept as simple as possible.
Vidicon Camera
• Early vision systems employed vidicon cameras,
which were bulky vacuum tube devices.
• They are almost extinct today but explained
here for the sake of completeness in the
development of video cameras.
• Vidicons are also more sensitive to
electromagnetic noise interference and require
high power. Their chief advantages are higher
resolution and better light sensitivity.
• Figure 4.17 shows the schematic diagram of a
vidicon camera.
Vidicon Camera
• The mosaic (the photosensitive surface of the vidicon) reacts to the varying
intensity of light by varying its resistance.
• Now, as the electron gun generates and sends a
continuous cathode beam to the mosaic, passing
through two pairs of orthogonal capacitors
(deflectors), the electron beam gets deflected
up or down, and left or right, based on the
charge on each pair of capacitors.
• As the beam scans the image, at each instant,
the output is proportional to the resistance of
the mosaic or the light intensity on the mosaic.
Vidicon Camera
• By reading the output voltage continuously, an
analog representation of the image can be
obtained.
• The analog signal of vidicon needs to be
converted to digital signal using analog-to-digital
converters (ADC), in order to process the image
further using a PC.
• The ADC which actually performs the digitization
of the analog signal requires mainly three steps,
i.e., sampling, quantization, and encoding
• In sampling, a given analog signal is sampled
periodically to obtain a series of discrete-time analog
values, as illustrated in Fig. 4.18.
• By setting a specified sampling rate, the analog signal
can be approximated by the sampled digital outputs.
• However, while reconstructing the original signal from
the sample data, one may end up with a completely
different signal.
• This loss of information is called aliasing, and it can be a
serious problem.
• In order to prevent aliasing, according to the sampling
theorem, the sampling rate must be at least twice the
largest frequency in the original video signal if one
wishes to reconstruct that signal exactly.
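A small sketch illustrating the sampling theorem stated above: a 60 Hz signal (an assumed value) sampled above and below twice its frequency.

```python
# Minimal sketch: a 60 Hz sine wave sampled above and below the Nyquist rate.
# Values are illustrative assumptions.
import math

SIGNAL_HZ = 60.0   # frequency of the original analog signal

def sample(rate_hz: float, n: int = 8) -> list[float]:
    """Sample the 60 Hz sine wave at the given rate; return the first n samples."""
    return [math.sin(2 * math.pi * SIGNAL_HZ * k / rate_hz) for k in range(n)]

print(sample(200.0))   # > 2 * 60 Hz: the oscillation is captured faithfully
print(sample(50.0))    # < 2 * 60 Hz: samples alias to a lower apparent frequency
```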
• In quantization, each sampled discrete time voltage level is assigned to a
finite number of defined amplitude levels.
• These levels correspond to the Gray scale used in the system.
• The predefined amplitude levels are characteristics to a particular ADC
and consist of a set of discrete values of voltage levels.
• The number of quantization levels is defined by 2^n, where n is the
number of bits of the ADC.
• For example, a 1-bit ADC will quantize only at two values, whereas with
an 8-bit ADC, it is possible to quantize at 2^8 = 256 different values.
• Note that a large number of bits enables a signal to be represented more
precisely.
• Moreover, sampling and quantization resolutions are completely
independent of each other.
• Finally, encoding does the job of converting the amplitude levels that are
quantized into digital codes, i.e., 0 or 1.
• The ability of the encoding process to distinguish between various
amplitude levels is a function of the spacing of each quantization level.
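A minimal sketch of quantization and encoding for an n-bit ADC, assuming an illustrative 0-5 V input range (an assumption, not a value from the text):

```python
# Minimal sketch: quantizing a sampled voltage into one of 2**n levels and
# encoding that level as a binary code.

V_MIN, V_MAX = 0.0, 5.0   # assumed input range of the ADC

def quantize_and_encode(v: float, n_bits: int = 8) -> tuple[int, str]:
    """Return the quantization level and its binary code for a sampled voltage."""
    levels = 2 ** n_bits
    step = (V_MAX - V_MIN) / levels
    level = min(int((v - V_MIN) / step), levels - 1)
    return level, format(level, f"0{n_bits}b")

print(quantize_and_encode(2.5))      # (128, '10000000') with an 8-bit ADC
print(quantize_and_encode(2.5, 1))   # (1, '1') with a 1-bit ADC
```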
Digital Camera
• A digital camera is based on solid-state technology.
• The main part of these cameras is a solid-state silicon wafer image area
that has hundreds of thousands of extremely small photosensitive areas
called photosites printed on it.
• Each small area of the wafer is a pixel. As the image is projected onto the
image area, at each pixel location of the wafer, a charge is developed that
is proportional to the intensity of the light at that location.
• Thus, a digital camera is also called a Charge-Coupled Device (CCD)
camera or a Charge Injection Device (CID) camera.
Digital Camera
• The collection of charges, as shown in Fig. 4.19, if read sequentially,
would be a representation of the image pixels.
• The output is a discrete representation of the image as a voltage sampled
in time.
• Solid-state cameras are smaller, more rugged, last longer, and have less
inherent image distortion than vidicon cameras.
• They are also slightly more costly, but prices are coming down.
• Both the CCD and CID chips use charge-transfer techniques to capture an
image.
• In a CCD camera, light impinges on the optical equivalent of a Random
Access Memory (RAM) chip.
• The light is absorbed in a silicon substrate, with charge buildup
proportional to the amount of light reaching the array.
• Once sufficient amount of energy has been received to provide a picture,
the charges are read out through built-in control registers.
• Some CCD chips use an interline charge-transfer technique.
• Others use frame-transfer approach, which is more flexible for varying
the integration period.
• The CID camera works on a similar principle.
• A CID chip is a Metal Oxide Semiconductor (MOS) based device with
multiple gates similar to CCDs.
• The video signal is the result of a current pulse from a recombination of
carriers.
• CIDs produce a better image (less distortion) and use a different read-out
technique than CCDs, one that requires a separate scanning address unit.
• CIDs are, therefore, more expensive than CCDs.
• The principal difference between a CCD and a CID camera is the method
of generating the video signal.
Lighting Techniques
• One of the key questions in robot vision is: what determines how bright the
image of some surface on the object will be?
• It involves radiometry (the measurement of the flow and transfer of radiant energy),
general illumination models, and surfaces having both diffuse and specular
reflection components.
• Different points on the objects in front of the imaging system will have different
intensity values on the image, depending on the amount of incident radiance,
how they are illuminated, how they reflect light, how the reflected light is
collected by a lens system, and how the sensor camera responds to the
incoming light.
Lighting Techniques
• Figure 4.20 shows the basic reflection phenomenon. Hence, proper
illumination of the scene is important.
• It also affects the complexity level of the image-processing algorithm
required.
• The lighting techniques must avoid reflections and shadows unless they
are designed for the purpose of image processing.
• The main task of lighting is to create contrast between the object features
to be detected.
• Typical lighting techniques
• Direct Incident Lighting
• Diffuse Incident Lighting
• Lateral Lighting
• Dark Field Lighting
• Backlighting
Direct Incident Lighting
• This simple lighting technique can be used for nonreflective materials
which strongly scatter the light due to their matte, porous, fibrous, non-
glossy surface.
• Ideally, a ring light is chosen for smaller illuminated fields that can be
arranged around the lens.
• Shadows are avoided to the greatest extent due to the absolutely vertical
illumination. Halogen lamps and large fluorescent illumination can be
used too.
Diffuse Incident Lighting
• Diffused light is necessary for many applications, e.g., to test reflective,
polished, glossy, or metallic objects.
• It is particularly difficult if these surfaces are not perfectly flat but are
individually shaped, wrinkled, curved, or cylindrical.
• To create diffused lighting, one may use incident light with diffusers;
coaxial illumination, i.e., light coupled into the axis of the camera by
means of a beam splitter or half-mirror; or dome-shaped illumination,
where light is diffused by means of a diffuse-coated dome and the
camera looks through an opening in the dome onto the workpiece.
Lateral Lighting
• Light from the side can be radiated at a relatively wide or narrow angle.
• The influence on the camera image can be significant.
• In an extreme case, the image information can almost be inverted.
Dark Field Lighting
• At first sight, images captured using dark field illumination seem unusual to the
viewer.
• The light shines at a shallow angle. According to the principle that the angle of
incidence equals the angle of reflection, all the light is directed away from the
camera.
• The field of view, therefore, remains dark. Inclined edges, scratches, imprints,
slots, and elevations interfere with the beam of light.
• At these anomalies, the light is reflected towards the camera. Hence, these
defects appear bright in the camera image.
Backlighting
• Transmitted light illumination is the first choice of lighting when it is
necessary to measure parts as accurately as possible.
• The lighting is arranged on the opposite side of the camera, and the
component itself is placed in the light beam.
Steps in a Vision System
• As depicted in Fig. 4.21, vision sensing has two steps, namely, image acquisition
and image processing. They are explained below.
• Image Acquisition
• Image Processing
• Image Analysis
Image Acquisition
• In image acquisition, an image is acquired from a vidicon (whose output is then
digitized) or from a digital camera (CCD or CID).
• The image is stored in computer memory (also called a frame buffer) in a
format such as TIFF, JPG, Bitmap, etc.
• The buffer may be a part of the frame grabber card or in the computer itself.
• Note that image acquisition is primarily a hardware function; however,
software can be used to control light intensity, focus, camera angle,
synchronization, field of view, read times, and other functions.
Image Acquisition
• Image acquisition has four principal elements, namely,
• a light source, either controlled or ambient,
• a lens that focuses reflected light from the object on to the image sensor,
• an image sensor that converts the light image into a stored electrical image, and
• the electronics to read the sensed image from the image sensing element, and after
processing, transmit the image information to a computer for further processing.
• A typical acquired image is shown in Fig. 4.22.
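As a minimal illustration (using the OpenCV library and a hypothetical file name, neither of which is prescribed by the text), an acquired image can be loaded into memory and its pixel intensities read as follows:

```python
# Minimal sketch: acquiring an image into a memory buffer and reading pixel
# intensities with OpenCV; the file name is a hypothetical example.
import cv2

image = cv2.imread("part_on_conveyor.jpg", cv2.IMREAD_GRAYSCALE)  # frame buffer
if image is None:
    raise FileNotFoundError("image file not found")

height, width = image.shape
print(f"{width} x {height} pixels, intensity at centre:",
      image[height // 2, width // 2])   # grey level 0-255
```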