
Course Code: 2161ME104

Course Name: ROBOTIC SENSORS


PREAMBLE:
To learn about various sensors, transducers and vision systems, and their usefulness for robot navigation and guidance.

1.PRE-REQUISITE:
 Mechatronics
 Basic Electrical and Electronics

2.LINKS TO OTHER COURSES


Fundamental to M.Tech. courses.

3.COURSE EDUCATIONAL OBJECTIVES


 To expand knowledge of sensors and transducers that can help in the field of robotics.
 To understand vision sensor systems and various robot programming methods and software.

4.COURSE OUTCOMES
Students undergoing this course will be able to
 Extract the knowledge on sensors and transducers
 Achieve Knowledge in active and passive sensors
 Manage robots through various sensor systems
 Control robots through vision system
 Review various robotics programs and software

5.COURSE CONTENT
UNIT I Introduction 9
An Introduction to sensors and Transducers, History and definitions, Smart Sensing, AI sensing, Need of sensors
in Robotics.

UNIT II Sensors In Robotics 9


Position sensors - optical, non-optical, Velocity sensors, Accelerometers, Proximity Sensors - contact, non-contact, Range Sensing, Touch and Slip Sensors, Force and Torque Sensors

UNIT III Miscellaneous Sensors In Robotics 9


Different sensing variables - smell, Heat or Temperature, Humidity, Light, Speech or Voice recognition Systems,
Telepresence and related technologies.

UNIT IV Vision Sensors In Robotics 9


Robot Control through Vision sensors, Robot vision locating position, Robot guidance with vision system, End
effector camera Sensor

UNIT V Multisensor Controlled Robot Assembly 9


Control Computer, Vision Sensor modules, Software Structure, Vision Sensor software, Robot programming,
Handling, Gripper and Gripping methods, accuracy - A Case study.
TOTAL : 45 periods

6. BEYOND THE SYLLABUS:


1. Optical encoder
2. Line follower sensor
3. Collision avoidance
4. Vision systems
5. Touch sense for the robots
UNIT-1 INTRODUCTION

1.0 Introduction.
Measurement is an important subsystem of a mechatronics system. Its main function is to collect information on the system status and feed it to the microprocessor(s) controlling the whole system.

A measurement system comprises sensors, transducers and signal processing devices. Today a wide variety of these elements and devices are available in the market. For a mechatronics system designer, it is quite difficult to choose suitable sensors/transducers for the desired application(s). It is therefore essential to learn the working principles of commonly used sensors/transducers. A detailed treatment of the full range of measurement technologies is, however, out of the scope of this course. Readers are advised to refer to “Sensors for Mechatronics” by Paul P.L. Regtien, Elsevier, 2012 [2] for more information.

Sensors in manufacturing are basically employed to automatically carry out production operations as well as process monitoring activities. Sensor technology has the following important advantages in transforming a conventional manufacturing unit into a modern one.

1. Sensors alert the system operators about the failure of any of the sub-units of the manufacturing system. This helps operators to reduce the downtime of the complete manufacturing system by carrying out preventative measures.
2. They reduce the requirement for skilled and experienced labour.
3. Ultra-precision in product quality can be achieved.
1.1 Sensor:
It is defined as an element which produces a signal relating to the quantity being measured [1]. According to the Instrument Society of America, a sensor can be defined as “A device which provides a usable output in response to a specified measurand.” Here, the output is usually an ‘electrical quantity’ and the measurand is a ‘physical quantity, property or condition which is to be measured’. Thus, in the case of, say, a variable-inductance displacement element, the quantity being measured is displacement and the sensor transforms an input of displacement into a change in inductance.
1.2 Transducer:
It is defined as an element which, when subjected to some physical change, experiences a related change [1], or an element which converts a specified measurand into a usable output by using a transduction principle.
It can also be defined as a device that converts a signal from one form of energy to another form.
A wire of Constantan alloy (copper-nickel 55-45% alloy) can be called a sensor because variation in mechanical displacement (tension or compression) can be sensed as a change in electrical resistance. This wire becomes a transducer with appropriate electrodes and an input-output mechanism attached to it. Thus, we can say that ‘sensors are transducers’.
2.0 Sensor/transducers specifications:
Transducers or measurement systems are not perfect systems. A mechatronics design engineer must know the capabilities and shortcomings of a transducer or measurement system to properly assess its performance. There are a number of performance-related parameters of a transducer or measurement system. These parameters are called sensor specifications. Sensor specifications inform the user about deviations from the ideal behaviour of the sensors. Following are the various specifications of a sensor/transducer system.
2.1 Range
The range of a sensor indicates the limits between which the input can vary. For example, a thermocouple for the measurement of
temperature might have a range of 25-225 °C.
2.2 Span
The span is difference between the maximum and minimum values of the input. Thus, the above-mentioned thermocouple will have a
span of 200 °C.
2.3 Error
Error is the difference between the result of the measurement and the true value of the quantity being measured. A sensor might give a
displacement reading of 29.8 mm, when the actual displacement had been 30 mm, then the error is –0.2 mm.
2.4 Accuracy
The accuracy defines the closeness of the agreement between the actual measurement result and a true value of the measurand. It is often expressed as a percentage of the full range output or full-scale deflection. A piezoelectric transducer used to evaluate dynamic pressure phenomena associated with explosions, pulsations, or dynamic pressure conditions in motors, rocket engines, compressors, and other pressurized devices is capable of detecting pressures between 0.1 and 10,000 psig (0.7 kPa to 70 MPa). If it is specified with an accuracy of about ±1% full scale, then the reading given can be expected to be within ±0.7 MPa.
2.5 Sensitivity
Sensitivity of a sensor is defined as the ratio of change in output value of a sensor to the per unit change in input value that causes the
output change. For example, a general purpose thermocouple may have a sensitivity of 41 µV/°C.
2.6 Nonlinearity

The nonlinearity indicates the maximum deviation of the actual measured curve of a sensor from the ideal curve. Figure 1.1 shows a somewhat
exaggerated relationship between the ideal, or least squares fit, line and the actual measured or calibration line. Linearity is often specified in terms
of percentage of nonlinearity, which is defined as:

Nonlinearity (%) = (Maximum deviation in input ⁄ Maximum full-scale input) × 100 (2.1.1)

The static nonlinearity defined by Equation 2.1.1 is dependent upon environmental factors, including temperature, vibration, acoustic noise level, and
humidity. Therefore, it is important to know under what conditions the specification is valid.
Fig. 1.1 Non-linearity error
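To make Equation 2.1.1 concrete, the following Python sketch (not from the course text) estimates percentage nonlinearity from calibration data, taking a least-squares line as the ideal curve. The calibration readings are invented, and the deviation here is referred to the output rather than the input, a convention also used in practice.

def nonlinearity_percent(inputs, outputs):
    n = len(inputs)
    # Least-squares fit of the ideal line: output ~ a * input + b
    mean_x = sum(inputs) / n
    mean_y = sum(outputs) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs)) \
        / sum((x - mean_x) ** 2 for x in inputs)
    b = mean_y - a * mean_x
    # Maximum deviation of the measured curve from the fitted line
    max_dev = max(abs(y - (a * x + b)) for x, y in zip(inputs, outputs))
    full_scale = max(outputs) - min(outputs)
    return 100.0 * max_dev / full_scale

# Hypothetical calibration of a displacement sensor (mm in, volts out)
x = [0, 5, 10, 15, 20]
y = [0.02, 2.60, 5.05, 7.40, 10.01]
print(f"Nonlinearity: {nonlinearity_percent(x, y):.2f}% of full scale")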
2.7 Hysteresis.

Fig. 1.2 Hysteresis error curve


The hysteresis is an error of a sensor, defined as the maximum difference in output at any measurement value within the sensor’s specified range when approaching the point first with increasing and then with decreasing input parameter. Figure 1.2 shows the hysteresis error that might occur during measurement of temperature using a thermocouple. The hysteresis error value is normally specified as a positive or negative percentage of the specified input range.
2.8 Resolution.
Resolution is the smallest incremental change of the input parameter that can be detected in the output signal. Resolution can be expressed either as a proportion of the full-scale reading or in absolute terms. For example, if an LVDT sensor measures a displacement of up to 20 mm and provides its output as a number between 1 and 100, then the resolution of the sensor device is 0.2 mm.
2.9 Stability
Stability is the ability of a sensor device to give the same output when used to measure a constant input over a period of time. The term ‘drift’ is used to indicate the change in output that occurs over a period of time. It is expressed as a percentage of full range output.
2.10 Dead band/time
The dead band or dead space of a transducer is the range of input values for which there is no output. The dead time of a sensor device
is the time duration from the application of an input until the output begins to respond or change.
2.11 Repeatability
It specifies the ability of a sensor to give the same output for repeated applications of the same input value. It is usually expressed as a percentage of the full range output:

Repeatability = ((maximum – minimum values given) ⁄ full range) × 100 (2.1.2)
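As a quick illustration of Equation 2.1.2, the Python sketch below computes repeatability from repeated readings of the same input; the readings and the 0-50 mm range are assumed values, not data from the text.

def repeatability_percent(readings, full_range):
    # (maximum - minimum values given) x 100 / full range
    return (max(readings) - min(readings)) * 100.0 / full_range

# Ten hypothetical readings of a 30 mm displacement on a 0-50 mm sensor
readings = [29.8, 30.1, 29.9, 30.0, 30.2, 29.9, 30.0, 30.1, 29.8, 30.0]
print(f"Repeatability: {repeatability_percent(readings, 50.0):.1f}% of full range")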
2.12 Response time
Response time describes the speed of change in the output on a step-wise change of the measurand. It is always specified with an
indication of input step and the output range for which the response time is defined.
3.0 Classification of sensors
Sensors can be classified into various groups according to factors such as the measurand, application field, conversion principle, energy domain of the measurand and thermodynamic considerations. These general classifications of sensors are well described in the references [2, 3].
A detailed classification of sensors in view of their applications in manufacturing is as follows.
A. Displacement, position and proximity sensors
o Potentiometer
o Strain-gauged element
o Capacitive element
o Differential transformers
o Eddy current proximity sensors
o Inductive proximity switch
o Optical encoders
o Pneumatic sensors
o Proximity switches (magnetic)
o Hall effect sensors
B. Velocity and motion
o Incremental encoder
o Tachogenerator
o Pyroelectric sensors

C. Force
o Strain gauge load cell
D. Fluid pressure
o Diaphragm pressure gauge
o Capsules, bellows, pressure tubes
o Piezoelectric sensors
o Tactile sensor
E. Liquid flow
o Orifice plate
o Turbine meter
F. Liquid level
o Floats
o Differential pressure
G. Temperature
o Bimetallic strips
o Resistance temperature detectors
o Thermistors
o Thermo-diodes and transistors
o Thermocouples
H. Light sensors
o Photo diodes
o Photo resistors
4.0 Smart Sensing
A sensor producing an electrical output when combined with an interfacing electronic circuit is known as a “smart sensor”. It is a combination of both sensor and actuator. It senses a physical, biological or chemical input and converts the measured value into a digital format.

 Sensors are capable of manipulation and computation of the sensor-derived data.


Sensor + interfacing circuit = smart sensor
 Smart sensors are capable of:
o Logic functions
o Two-way communication
o Decision making
4.1 Advantages of smart sensor
 Single chip solution.
 Very small in size.
 Less space in configuration.
 Work with small signals.
 Minimum interconnecting cable
 High reliability
 High performance
 Easy to design, use and maintain
 Scalable – Flexible System
 Small rugged Packaging
 Minimum cost.
4.2 Application of smart sensor
 Self-calibration: Adjust deviation of o/p of sensor from desired value.
 Communication: Broadcast information about its own status.
 Computation: Allows one to obtain the average, variance and standard deviation for the set of measurements.
 Multisensing: A single smart sensor can measure pressure, temperature, humidity, gas flow, infrared radiation, chemical reactions, surface acoustic waves, vapour, etc.
 Accelerometer
 Optical sensor
 Infrared detector array
 Integrated multisensory
 Structural monitoring
 Geological mapping
4.3 General architecture of smart sensor:
4.4 Disadvantage
 A smart sensor consists of both actuators and sensors, so it is more complex than other simple sensors.
 The complexity is much higher in wired smart sensors; as a consequence the costs are also higher.

5.0 Artificial Intelligence sensing


5.1 Definition of AI

The concept and/or realization of AI can be subjective for many, and objective for engineers and designers working on AI applications. For our purposes, let’s stick with the objective, as art and philosophy have no place in an operational discussion.

A decent functional definition of AI is electrical and mechanical systems capable of collecting data related to specific tasks, analyzing the data, and making decisions and taking action based on the data and its analysis. A simple example might be a robot assembling fuel-injection systems on an assembly line. The robot not only has the ability to assemble a multi-part device, but can ascertain whether the parts it is using meet spec, reject defective parts and replace them with good parts.

Of course, AI gets more complex in its goals; learning is a big part. Going back to that robot assembling fuel injectors, at some point the goal is for machines to analyze the tasks they are doing and logically find more effective and efficient ways to do them. For example, the robot may find a way to combine three parts into one, or a different assembly order that could speed up production and, of course, increase profits.

5.2 Sensors Used In AI Apps


It’s a source of both constant dismay and equal amusement that few grasp the concept that sensors are all-encompassing. They are used in every area of technology, from simple toys to highly complex systems. They are essential components, hardware if you will. A pressure sensor, for example, is a pressure sensor whether it’s used in the mechanical portion of an AI design or between two pieces of tile to measure weight in a consumer scale. The only differences may be the pressure range and the output type, i.e., voltage or current.

5.3 AI Sensors
When you think of the plethora of applications for sensors, one cannot avoid seeing AI potential in each. The technology may not inspire many unique or novel sensors, but it will generate a massive demand for sensors of all types. And with the need for compact designs, sensor fusion will become more the norm than application-specific designs.

The list of AI applications is extremely long, but the list of sensor types they will require is not comparable in length. Also remember, certain sensor types have several different names, making the list even shorter. The top 10 includes:

 Pressure sensors
 Position sensors
 Temperature sensors
 Optical sensors
 Current & Voltage sensors
 Flow sensors
 Chemical sensors
 Gas sensors
 Torque, Strain, Gage & Force sensors
 Velocity sensors
5.4 Application of AI.

6.0 Need of sensors in Robotics

 Safety monitoring:
Sensors are widely used in industrial robotics for monitoring hazardous and safety conditions in the robot cell layout. This certainly helps in avoiding physical injuries and other damage to the human workers.
 Interlocking in work cell control:
In a robot work cell, the sequences of activities of different pieces of equipment are controlled by using interlocks. Here, sensors are employed for verifying the conclusion of the current work cycle before progressing to the next cycle.
 Quality control in work part inspection:
In earlier days, quality control was performed with a manual inspection system. Nowadays, sensors are employed in the inspection process to determine the quality features of a work part automatically. A major advantage of using sensors in this category is that they provide highly accurate results. One disadvantage of this automatic inspection is that the sensors are only able to examine a limited variety of work part features and faults.
 Data collection of objects in the robot work cell:
Sensors are used in this category to determine the position or other related data about the fixtures, work parts, equipment, human workers, and so on. Apart from sensing position, they are also implemented to find out other information such as a work part’s colour, orientation, size, shape, etc. The key reasons for determining the above information while executing a robot program include:
1. Recognition of work parts
2. Random position and orientation of work parts
3. Improved accuracy of robot position using the feedback data
UNIT II Sensors in Robotics

Position sensors:

What are Position Sensors?

The most common way of classifying the wide spectrum of sensors is based on the specific application of the sensor. A sensor used for measuring humidity is termed a humidity sensor, one used for measurement of pressure is called a pressure sensor, a sensor used for measurement of liquid level is called a level sensor, and so on, though all of them may be using the same sensing principle. In a similar fashion, the sensor used for measurement of position is called a position sensor.
Position sensors are basically sensors for measuring the distance travelled by a body starting from its reference position. How far the body has moved from its reference or initial position is sensed by the position sensor, and often the output is fed back to the control system, which takes the appropriate action. Motion of the body can be rectilinear or curvilinear; accordingly, position sensors are called linear position sensors or angular position sensors.

As their name implies, Position Sensors detect the position of something which means that they are referenced either to or from some fixed point or
position. These types of sensors provide a “positional” feedback.

One method of determining a position is to use either “distance”, which could be the distance between two points such as the distance travelled or moved away from some fixed point, or “rotation” (angular movement). For example, the rotation of a robot’s wheel can determine its distance travelled along the ground. Either way, position sensors can detect the movement of an object in a straight line using linear sensors or its angular movement using rotational sensors.

Types of Position Sensor

Position sensors use different sensing principles to sense the displacement of a body. Depending upon the different sensing principles used for
position sensors, they can be classified as follows:
1. Resistance-based or Potentiometric Position sensors
2. Capacitive position sensors
3. Inductive Position Sensors
 Linear Variable Differential Transformers
 Eddy Current based position Sensor
4. Magnetostrictive Linear Position Sensor
5. Hall Effect based Magnetic Position Sensors
6. Fiber-Optic Position Sensor
7. Optical Position Sensors

Potentiometric Position sensors:

The most commonly used of all position sensors is the potentiometer, because it is an inexpensive and easy-to-use position sensor. It has a wiper contact linked to a mechanical shaft that can be either angular (rotational) or linear (slider type) in its movement. The movement causes the resistance value between the wiper/slider and the two end connections to change, giving an electrical output signal that has a proportional relationship between the actual wiper position on the resistive track and its resistance value. In other words, resistance is proportional to position.

Potentiometers come in a wide range of designs and sizes such as the commonly available round rotational type or the longer and flat linear slider
types. When used as a position sensor the moveable object is connected directly to the rotational shaft or slider of the potentiometer.

A DC reference voltage is applied across the two outer fixed connections forming the resistive element. The output voltage signal is taken from the
wiper terminal of the sliding contact as shown below.

This configuration produces a potential or voltage divider type circuit output which is proportional to the shaft position. Then for example, if you apply
a voltage of say 10v across the resistive element of the potentiometer the maximum output voltage would be equal to the supply voltage at 10 volts,
with the minimum output voltage equal to 0 volts. Then the potentiometer wiper will vary the output signal from 0 to 10 volts, with 5 volts indicating
that the wiper or slider is at its half-way or centre position.

The output signal (Vout) from the potentiometer is taken from the centre wiper connection as it moves along the resistive track, and is proportional to
the angular position of the shaft.
Example of a simple Positional Sensing Circuit
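To show the voltage-divider relationship in software, here is a minimal Python sketch assuming the 10 V reference used in the text; the 50 mm full travel is an invented figure for illustration.

V_REF = 10.0      # volts applied across the resistive element
TRAVEL_MM = 50.0  # assumed full mechanical travel of the slider

def position_from_vout(vout):
    # Vout is proportional to wiper position, so invert the ratio
    return (vout / V_REF) * TRAVEL_MM

print(position_from_vout(5.0))   # 25.0 -> wiper at half travel
print(position_from_vout(10.0))  # 50.0 -> wiper at full travel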

While resistive potentiometer position sensors have many advantages: low cost, low tech, easy to use etc, as a position sensor they also have many
disadvantages: wear due to moving parts, low accuracy, low repeatability, and limited frequency response.

But there is one main disadvantage of using the potentiometer as a positional sensor. The range of movement of its wiper or slider (and hence the
output signal obtained) is limited to the physical size of the potentiometer being used.

For example, a single-turn rotational potentiometer generally only has a fixed mechanical rotation of between 0° and about 240° to 330° maximum. However, multi-turn pots of up to 3600° (10 × 360°) of mechanical rotation are also available.

Most types of potentiometers use carbon film for their resistive track, but these types are electrically noisy (the crackle on a radio volume control),
and also have a short mechanical life.

Wire-wound pots, also known as rheostats, in the form of either a straight resistive wire or a wound coil, can also be used, but wire-wound pots suffer from resolution problems as their wiper jumps from one wire segment to the next, producing a stepped output that results in errors in the output signal. These too suffer from electrical noise.

For high-precision, low-noise applications, conductive-plastic (polymer film) or cermet resistance element potentiometers are now available. These pots have a smooth, low-friction, electrically linear (LIN) resistive track giving them low noise, long life and excellent resolution, and are available as both multi-turn and single-turn devices. Typical applications for this type of high-accuracy position sensor are in computer game joysticks, steering wheels, and industrial and robot applications.

Inductive Position Sensors

Linear Variable Differential Transformer

One type of positional sensor that does not suffer from mechanical wear problems is the “Linear Variable Differential Transformer”, or LVDT for short. This is an inductive type of position sensor which works on the same principle as the AC transformer and is used to measure movement. It is a very accurate device for measuring linear displacement, whose output is proportional to the position of its moveable core.

It basically consists of three coils wound on a hollow tube former, one forming the primary coil and the other two forming identical secondaries connected electrically together in series but 180° out of phase on either side of the primary coil.

A moveable soft iron ferromagnetic core (sometimes called an “armature”) which is connected to the object being measured, slides or moves up and
down inside the tubular body of the LVDT.

A small AC reference voltage called the “excitation signal” (2 – 20V rms, 2 – 20kHz) is applied to the primary winding which in turn induces an EMF
signal into the two adjacent secondary windings (transformer principles).

If the soft iron magnetic core armature is exactly in the centre of the tube and the windings, the “null position”, the two induced emfs in the two secondary windings cancel each other out as they are 180° out of phase, so the resultant output voltage is zero. As the core is displaced slightly to one side or the other from this null or zero position, the induced voltage in one of the secondaries will become greater than that of the other secondary and an output will be produced.

The polarity of the output signal depends upon the direction and displacement of the moving core. The greater the movement of the soft iron core from its central null position, the greater will be the resulting output signal. The result is a differential voltage output which varies linearly with the core’s position. Therefore, the output signal from this type of position sensor has both an amplitude that is a linear function of the core’s displacement and a polarity that indicates the direction of movement.

The phase of the output signal can be compared to the primary coil excitation phase enabling suitable electronic circuits such as the AD592 LVDT
Sensor Amplifier to know which half of the coil the magnetic core is in and thereby know the direction of travel.
When the armature is moved from one end to the other through the centre position, the output voltage changes from maximum to zero and back to maximum again, but in the process changes its phase angle by 180°. This enables the LVDT to produce an output AC signal whose magnitude represents the amount of movement from the centre position and whose phase angle represents the direction of movement of the core.
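A minimal sketch of how this amplitude-plus-phase information can be recovered in software, assuming an idealized LVDT whose output is K·x·sin(2πft) with a 180° phase flip through the null (the constant K and the excitation frequency are assumptions, not a particular device): phase-sensitive demodulation multiplies the output by the excitation reference and averages, so the sign of the result gives the direction of core movement.

import math

F_EXC = 5000.0  # excitation frequency, Hz (assumed)
K = 0.4         # output volts per mm of core displacement (assumed)

def lvdt_output(x_mm, t):
    # Idealized secondary voltage; the sign of x_mm flips the phase by 180 deg
    return K * x_mm * math.sin(2 * math.pi * F_EXC * t)

def demodulate(x_mm, n=1000):
    # Phase-sensitive demodulation: multiply by the excitation reference
    # and average over one excitation period
    acc = 0.0
    for i in range(n):
        t = i / (n * F_EXC)
        ref = math.sin(2 * math.pi * F_EXC * t)
        acc += lvdt_output(x_mm, t) * ref
    # The average of sin^2 is 1/2, so scale by 2 and remove K
    return 2 * acc / n / K

print(round(demodulate(+3.0), 2))  # ~ +3.0 mm, core on one side of null
print(round(demodulate(-3.0), 2))  # ~ -3.0 mm, sign gives direction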

A typical application of a linear variable differential transformer (LVDT) sensor would be as a pressure transducer, where the pressure being measured pushes against a diaphragm to produce a force. The force is then converted into a readable voltage signal by the sensor.

Advantages of the linear variable differential transformer, or LVDT, compared to a resistive potentiometer are that its linearity (voltage output versus displacement) is excellent, along with very good accuracy, good resolution, high sensitivity and frictionless operation. They can also be sealed for use in hostile environments.

Inductive Proximity Sensors.

Another type of inductive position sensor in common use is the inductive proximity sensor, also called an eddy current sensor. While they do not actually measure displacement or angular rotation, they are mainly used to detect the presence of an object in front of them or within close proximity, hence the name “proximity sensor”.

Proximity sensors are non-contact position sensors that use a magnetic field for detection, the simplest magnetic sensor being the reed switch. In an inductive sensor, a coil is wound around an iron core within an electromagnetic field to form an inductive loop.

When a ferromagnetic material is placed within the eddy current field generated around the inductive sensor, such as a ferromagnetic metal plate or metal screw, the inductance of the coil changes significantly. The proximity sensor’s detection circuit detects this change, producing an output voltage. Therefore, inductive proximity sensors operate under the electrical principle of Faraday’s law of inductance.
An inductive proximity sensor has four main components: the oscillator which produces the electromagnetic field, the coil which generates the magnetic field, the detection circuit which detects any change in the field when an object enters it, and the output circuit which produces the output signal, with either normally closed (NC) or normally open (NO) contacts.
Inductive proximity sensors allow for the detection of metallic objects in front of the sensor head without any physical contact with the object being detected. This makes them ideal for use in dirty or wet environments. The “sensing” range of proximity sensors is very small, typically 0.1 mm to 12 mm.
As well as industrial applications, inductive proximity sensors are also commonly used to control the flow of traffic by changing traffic lights at junctions and crossroads. Rectangular inductive loops of wire are buried in the tarmac road surface. When a car or other road vehicle passes over this inductive loop, the metallic body of the vehicle changes the loop’s inductance and activates the sensor, thereby alerting the traffic light controller that there is a vehicle waiting.
One main disadvantage of these types of position sensors is that they are “omnidirectional”, that is, they will sense a metallic object either above, below or to the side. Also, they do not detect non-metallic objects, although capacitive proximity sensors and ultrasonic proximity sensors are available. Other commonly available magnetic position sensors include reed switches, Hall effect sensors and variable reluctance sensors.

Hall Effect Sensors:

A Hall effect sensor is a transducer that varies its output voltage in response to a magnetic field. Hall effect sensors are used for proximity switching,
positioning, speed detection, and current sensing applications.

In a Hall effect sensor, a thin strip of metal has a current applied along it. In the presence of a magnetic field, the electrons are deflected towards one edge of the strip, producing a voltage gradient across the short side of the strip (perpendicular to the feed current). Inductive sensors are just a coil of wire in which, in the presence of a changing magnetic field, a current will be induced, producing a voltage at the output. Hall effect sensors have the advantage that they can detect static (non-changing) magnetic fields.

In its simplest form, the sensor operates as an analog transducer, directly returning a voltage. With a known magnetic field, its distance from the Hall
plate can be determined. Using groups of sensors, the relative position of the magnet can be deduced.

Frequently, a Hall sensor is combined with threshold detection so that it acts as, and is called, a switch. Commonly seen in industrial applications such as pneumatic cylinders, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. They can also be used in computer keyboards, an application that requires ultra-high reliability.

Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing, tachometers and anti-lock braking systems. They are used in brushless DC electric motors to detect the position of the permanent magnet. In a wheel carrying two equally spaced magnets, the voltage from the sensor will peak twice for each revolution. This arrangement is commonly used to regulate the speed of disk drives.

Working Principle:

When a beam of charged particles passes through a magnetic field, forces act on the particles and the beam is deflected from a straight path. The flow of electrons through a conductor is such a beam of charge carriers. When a conductor is placed in a magnetic field perpendicular to the direction of the electrons, they will be deflected from a straight path. As a consequence, one plane of the conductor will become negatively charged and the opposite side will become positively charged. The voltage between these planes is called the Hall voltage.[2]
When the force on the charged particles from the electric field balances the force produced by the magnetic field, the separation of charge stops. If the current is not changing, then the Hall voltage is a measure of the magnetic flux density. Basically, there are two kinds of Hall effect sensors: one is linear, which means the output voltage depends linearly on the magnetic flux density; the other is called threshold, which means there will be a sharp change of output voltage at a particular magnetic flux density.
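A worked example of this balance condition, using the classical Hall relation V_H = I·B/(n·q·t) for a thin strip; the carrier density and dimensions below are typical textbook numbers, not a specific device.

Q_E = 1.602e-19  # electron charge, C

def hall_voltage(current_a, b_tesla, n_per_m3, thickness_m):
    # V_H = I * B / (n * q * t)
    return current_a * b_tesla / (n_per_m3 * Q_E * thickness_m)

# A doped semiconductor strip: low carrier density gives a usable V_H
v_h = hall_voltage(current_a=1e-3, b_tesla=0.1,
                   n_per_m3=1e21, thickness_m=1e-4)
print(f"Hall voltage: {v_h * 1e3:.2f} mV")  # ~6.24 mV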

Signal processing and interface:

Hall effect sensors are linear transducers. As a result, such sensors require a linear circuit for processing of the sensor's output signal. Such a linear
circuit:

 provides a constant driving current to the sensors


 amplifies the output signal

In some cases the linear circuit may cancel the offset voltage of Hall effect sensors. Moreover, AC modulation of the driving current may also reduce
the influence of this offset voltage.

Hall effect sensors with linear transducers are commonly integrated with digital electronics.[4] This enables advanced corrections to the sensor's
characteristics (e.g. temperature coefficient corrections) and digital interfacing to microprocessor systems. In some solutions of IC Hall effect sensors
a DSP is used, which provides for more choices among processing techniques.[1]:167

The Hall effect sensor interfaces may include input diagnostics, fault protection for transient conditions, and short/open circuit detection. It may also
provide and monitor the current to the Hall effect sensor itself. There are precision IC products available to handle these features.

Advantages:

A Hall effect sensor may operate as an electronic switch.

 Such a switch costs less than a mechanical switch and is much more reliable.
 It can be operated up to 100 kHz.
 It does not suffer from contact bounce because a solid state switch with hysteresis is used rather than a mechanical contact.
 It will not be affected by environmental contaminants since the sensor is in a sealed package. Therefore, it can be used under severe
conditions.
In the case of a linear sensor (for magnetic field strength measurements), a Hall effect sensor:
 can measure a wide range of magnetic fields
 is available that can measure either North or South pole magnetic fields
 can be flat

Disadvantages:

Hall effect sensors provide much lower measuring accuracy than fluxgate magnetometers or magnetoresistance-based sensors. Moreover, Hall effect
sensors drift significantly, requiring compensation.

Applications:

 Position Sensing
 Direct Current (DC) Transformers
 Automotive fuel level indicator

Fibre optic position Sensor:

Extrinsic fiber optic sensors use an optical fiber cable, normally a multimode one, to transmit modulated light from either a non-fiber optical sensor, or
an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach places which are otherwise
inaccessible. An example is the measurement of temperature inside aircraft jet engines by using a fiber to transmit radiation into a radiation pyrometer
located outside the engine. Extrinsic sensors can also be used in the same way to measure the internal temperature of electrical transformers, where
the extreme electromagnetic fields present make other measurement techniques impossible.

Extrinsic fiber optic sensors provide excellent protection of measurement signals against noise corruption. Unfortunately, many conventional sensors
produce electrical output which must be converted into an optical signal for use with fiber. For example, in the case of a platinum resistance
thermometer, the temperature changes are translated into resistance changes. The PRT must therefore have an electrical power supply. The
modulated voltage level at the output of the PRT can then be injected into the optical fiber via the usual type of transmitter. This complicates the
measurement process and means that low-voltage power cables must be routed to the transducer.

Extrinsic sensors are used to measure vibration, rotation, displacement, velocity, acceleration, torque, and temperature.

FIBER OPTIC POSITION SENSORS ODP


 Measurement range: 0 – 25 mm
 Resolution: 25 µm
 Precision: ±0.2% F.S. (@ 25 °C)
 Repeatability: 0.05% F.S.
 Response time: readout unit dependent

Velocity sensors

A velocity transducer/sensor consists of a moving coil suspended in the magnetic field of a permanent magnet. The velocity is given as the input, which causes the movement of the coil in the magnetic field. This causes an emf to be generated in the coil. This induced emf is proportional to the input velocity and is thus a measure of the velocity. The instantaneous voltage produced is given by

e = N (dΦ/dt)

where N is the number of turns of the coil and dΦ/dt is the rate of change of flux through the coil.
The voltage produced is proportional to the velocity, whatever its profile - linear, sinusoidal or random.

The damping is obtained electrically. Thus, we can assume a very high stability under temperature conditions. The basic arrangement of
a velocity sensor is shown below.

Velocity Transducer Arrangement

The figure shows a moving coil kept under the influence of two pole pieces. The output voltage is taken across the moving coil. The moving coil is
kept balanced for a linear motion with the help of a pivot assembly.

Velocity Transducer

Measurement of Displacement Using Velocity Transducer

We know that velocity is the derivative of displacement with respect to time. Similarly, displacement is the time integral of velocity. Thus, a velocity
transducer can be used to find the displacement of an object. All we have to do is add an integrating circuit to the velocity transducer arrangement.
This is shown in the figure above.



The output voltage (e_input) of the transducer can be represented as the product of a constant k and the instantaneous velocity v. If the velocity varies sinusoidally with frequency f and peak value V, then the output voltage can be written as

e_input = kV sin(2πft)

The capacitor reactance is X_C = 1/(2πfC).

When the frequency f is low, X_C is very large, so the integrated output voltage e_output is proportional to e_input and hence to the velocity v. When the frequency becomes high, X_C becomes small. The integrated output voltage can then be written as

e_output = e_input/(jωCR)

e_output = (kV/ωCR) sin(ωt − 90°)

This shows that the integrator output lags the input voltage by 90°. For a given value of velocity amplitude V, the integrator output is inversely proportional to the frequency ω.
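The integrating circuit can also be mimicked numerically, as in the Python sketch below; the transducer constant, motion frequency and peak velocity are assumed values chosen only to show that integrating the recovered velocity reproduces the displacement.

import math

K = 0.5        # transducer constant, volts per (mm/s) - assumed
DT = 0.001     # sampling interval, s
F = 2.0        # motion frequency, Hz
V_PEAK = 10.0  # peak velocity, mm/s

displacement = 0.0
for i in range(125):                                   # a quarter period, 0.125 s
    t = i * DT
    e_in = K * V_PEAK * math.sin(2 * math.pi * F * t)  # e_input = k * v
    velocity = e_in / K                                # recover v from the voltage
    displacement += velocity * DT                      # simple Euler integration

# Analytic peak displacement is V_PEAK / (2*pi*F), about 0.796 mm
print(f"Displacement after a quarter period: {displacement:.3f} mm")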

Accelerometers

When you use a compass app on your smartphone, it somehow knows which direction the phone is pointing. With stargazing apps, it somehow knows where in the sky you’re looking in order to properly display constellations. Smartphones and other mobile technology identify their orientation through the use of an accelerometer, a small device built around axis-based motion sensing.


The motion sensors in accelerometers can even be used to detect earthquakes, and may be used in medical devices such as bionic limbs and other artificial body parts. Several devices, part of the quantified-self movement, use accelerometers.

An accelerometer is an electromechanical device used to measure acceleration forces. Such forces may be static, like the continuous force of
gravity or, as is the case with many mobile devices, dynamic to sense movement or vibrations.
Acceleration is the measurement of the change in velocity divided by time. For example, a car accelerating from a standstill to 60 mph in six seconds has an average acceleration of 10 mph per second (60 divided by 6).

The purpose of the accelerometer

The application of accelerometers extends to multiple disciplines, both academic and consumer-driven. For example, accelerometers in laptops protect hard drives from damage. If the laptop were to suddenly drop while in use, the accelerometer would detect the sudden free fall and immediately turn off the hard drive to stop the read heads hitting the platter. Without this, the two would strike and cause scratches to the platter, leading to extensive file and read-head damage. Accelerometers are likewise used in cars as the industry-standard method of detecting car crashes and deploying airbags almost instantaneously.

In another example, a dynamic accelerometer measures gravitational pull to determine the angle at which a device is tilted with respect to the Earth.
By sensing the amount of acceleration, users analyze how the device is moving.

Accelerometers allow the user to understand the surroundings of an item better. With this small device, you can determine if an object is moving
uphill, whether it will fall over if it tilts any more, or whether it’s flying horizontally or angling downward. For example, smartphones rotate their
display between portrait and landscape mode depending on how you tilt the phone.
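As a simple illustration of how gravity sensing yields orientation, the Python sketch below derives pitch and roll from a static three-axis reading; the axis convention and the sample values (in units of g) are assumptions for illustration.

import math

def pitch_roll_deg(ax, ay, az):
    # Tilt angles from the direction of gravity, in degrees
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# A device lying flat reads roughly (0, 0, 1 g); a pitched example follows
print(pitch_roll_deg(0.0, 0.0, 1.0))    # (0.0, 0.0) -> flat
print(pitch_roll_deg(0.5, 0.0, 0.866))  # ~(-30.0, 0.0) -> pitched ~30 deg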

How they work

An accelerometer looks like a simple circuit for some larger electronic device. Despite its humble appearance, the accelerometer consists of many different parts and works in many ways, two of which are the piezoelectric effect and the capacitance sensor. The piezoelectric effect is the most common form of accelerometer and uses microscopic crystal structures that become stressed due to accelerative forces. These crystals create a voltage from the stress, and the accelerometer interprets the voltage to determine velocity and orientation.

The capacitance accelerometer senses changes in capacitance between microstructures located next to the device. If an accelerative force moves
one of these structures, the capacitance will change and the accelerometer will translate that capacitance to voltage for interpretation.

Accelerometers are made up of many different components, and can be purchased as a separate device. Analog and digital displays are available,
though for most technology devices, these components are integrated into the main technology and accessed using the governing software or
operating system.

Typical accelerometers use multiple axes: two to determine most two-dimensional movement, with the option of a third for 3D positioning. Most smartphones typically make use of three-axis models, whereas cars may use only a two-axis model to determine the moment of impact. The sensitivity of these devices is quite high, as they are intended to measure even very minute shifts in acceleration. The more sensitive the accelerometer, the more easily it can measure acceleration.

Accelerometers, while actively used in many electronics in today’s world, are also available for use in custom projects. Whether you’re an engineer
or tech geek, the accelerometer plays a very active role in a wide range of functionalities. In many cases you may not notice the presence of this
simple sensor, but odds are you may already be using a device with it.

Proximity Sensors:

A proximity sensor is a sensor able to detect the presence of nearby objects without any physical contact.

A proximity sensor often emits an electromagnetic field or a beam of electromagnetic radiation (infrared, for instance), and looks for changes in the
field or return signal. The object being sensed is often referred to as the proximity sensor's target. Different proximity sensor targets demand
different sensors. For example, a capacitive proximity sensor or photoelectric sensor might be suitable for a plastic target; an inductive proximity
sensor always requires a metal target.
The maximum distance that this sensor can detect is defined as the “nominal range”. Some sensors have adjustments of the nominal range or means to report a graduated detection distance.

Proximity sensors can have high reliability and long functional life because of the absence of mechanical parts and the lack of physical contact between the sensor and the sensed object.

Proximity sensors are commonly used on mobile devices. When the target is within nominal range, the device lock-screen UI will appear, thus emerging from what is known as sleep mode. Once the device has awoken from sleep mode, if the proximity sensor’s target is still within range for an extended period of time, the sensor will then ignore it, and the device will eventually revert to sleep mode. For example, during a telephone call, proximity sensors play a role in detecting (and skipping) accidental touchscreen taps when phones are held to the ear.[1] Proximity sensors are also used in machine vibration monitoring to measure the variation in distance between a shaft and its support bearing. This is common in large steam turbines, compressors, and motors that use sleeve-type bearings.

International Electrotechnical Commission (IEC) standard 60947-5-2 defines the technical details of proximity sensors.

A proximity sensor adjusted to a very short range is often used as a touch switch.

Non-Contact:

Inductive proximity sensors are used for non-contact detection of metallic objects. Their operating principle is based on a coil and an oscillator that create an electromagnetic field in the close surroundings of the sensing surface. The presence of a metallic object (actuator) in the operating area causes a damping of the oscillation amplitude. The rise or fall of this oscillation is identified by a threshold circuit that changes the output of the sensor. The operating distance of the sensor depends on the actuator’s shape and size, and is strictly linked to the nature of the material.

Contact:

Capacitive Proximity Sensors


Capacitive sensors can detect metals, but can also detect resins, liquids, powders, etc. The sensor’s behaviour can vary according to the covering material, cable length and noise sensitivity. Its sensing distance also varies according to factors such as the temperature, the sensing object, surrounding objects, and the mounting distance between sensors. Its maximum sensing range is about 25 mm.

Magnetic Proximity Sensors

As the name implies, the sensing objects are magnets. Magnetic proximity sensors are unaffected by electrical noise and can work on DC, AC or AC/DC supplies. Again, the sensing distance can vary due to factors such as the temperature, the sensing object, surrounding objects, and the mounting distance between sensors. This type of sensor has the highest sensing range, up to 120 mm.
These sensors have been used in various devices such as mobile phones, tablets, security appliances, etc. These days they are mostly used in mobile phones in order to make them more functional, responsive and useful.

Range Sensors:

Ranging sensors require no physical contact with the object being detected. They allow a robot to see an obstacle without actually having to come into contact with it. This can prevent possible entanglement, allow for better obstacle avoidance (over touch-feedback methods), and possibly allow software to distinguish between obstacles of different shapes and sizes. There are several methods used to allow a sensor to detect obstacles from a distance. Below are a few common methods, ranging in complexity and capability from very basic to very intricate. The following examples are only meant to give a general understanding of many common types of ranging and proximity sensors as they commonly apply to robotics. Many variants can exist within each type.

Sonic Range:

Sonic ranging sensors, sometimes referred to as SONAR, send out a pulse of sound and wait for the echo to return. The time it takes for the echo to return is used to determine the distance to the obstacle (a distance calculation is sketched after the list below). They are popular in hobby and research robotics due to their simplicity and relatively low cost. These sensors are generally limited to about a 6 m range. Divergence can be a problem because the sound wave spreads out rapidly as it moves away from the source; the sensor cannot determine where along this projected arc an obstacle was found. ‘Ghost’ echoes can cause problems as well, when the sound wave bounces off multiple obstacles before returning.

 Sonic Ranging
 Pros
 Cheap
 Easy to Use
 Cons
 Resolution rapidly decreases with distance
 Ghost echoes can give false readings
 Physical properties of objects can give very different responses
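The echo-time calculation itself is simple, as the Python sketch below shows; the speed of sound assumes dry air at about 20 °C, and the 35 ms example is chosen to land near the ~6 m practical limit mentioned above.

SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 degC (assumed)

def sonar_distance_m(echo_time_s):
    # The pulse travels out and back, so halve the round trip
    return SPEED_OF_SOUND * echo_time_s / 2.0

print(f"{sonar_distance_m(0.035):.2f} m")  # ~6.00 m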
Touch and Slip Sensors

Slip may be regarded as the relative movement of one object's surface over another when in contact. The relative movement ranges from simple translational motion to a combination of translational and rotational motions. When handling an object, the detection of slip becomes necessary so as to prevent the object being dropped due to the application of too low a grip force. In an assembly operation, it is possible to test for the occurrence of slip to indicate some predetermined contact force between the object and the assembled part. For the majority of applications some qualitative information on object slip may be sufficient, and this can be detected using a number of different approaches.

Interpretation of tactile-array information

The output of a tactile-sensing array is the spatial distribution of the forces over the measurement area. If the object is stationary, the tactile image will also remain stationary. However, if the pattern moves with time, the object can be considered to be moving; this can be detected by processing the sensor's data.

Slip sensing based on touch-sensing information

Most point-contact touch sensors are incapable of discriminating between relative movement and force. However, as the surfaces of the tactile sensor and the object are not microscopically smooth, the movement of an object across the sensor will set up a high-frequency, low-amplitude vibration, which can be detected and interpreted as movement across the sensor. This has been achieved by touch sensors based on the photoelastic effect and by piezoelectric sensors. In a photoelastic material the plane of polarization is a function of the material stress. The figure shows a sensor developed at the University of Southampton to detect slip. The sensor uses the property of photoelastic material whereby the plane of polarization is rotated as the material is stressed. In the sensor, light is first passed through a polarising film (polariser), then the material, then a second polarising film (analyser). As the stress applied to the material changes, the amount of received light varies.

Typical results are as shown: the changes in stress are caused by vibrations, due to the photoelastic material slip-sticking as the object moves relative to the sensor. The sensitivity of the sensor can be increased by artificially roughening the surface area of the sensor.
Sensors to specifically detect slip

It is possible to develop sensors that respond only to relative movement. They are normally based on the transduction principles discussed for touch sensors, but the sensor's stimulus comes from the relative movement of an area of the gripper.

Several methods to detect slip have been reported. One sensor requires a sapphire needle protruding from a sensor surface to touch the slipping object; this generates vibrations which in turn stimulate a piezoelectric crystal. The disadvantage of this approach is that it picks up external vibrations from the gripper and robot mechanics, and the needle frequently wears out. An improved version of this sensor uses a steel ball at the end of the probe, with the piezoelectric crystal replaced by a permanent magnet and a coil enclosed in a damping medium. To avoid the problem of interference signals from external vibrations, a range of interrupt-type slip sensors have been designed. In one design, a rubber roller carries a permanent magnet past a magnetic head, which generates a voltage when slip occurs. In a similar design, the roller has a number of slits which interrupt an optical path; this allows an indication of slip to be obtained. Though these sensors give a very good indication of the speed and direction of slip, they have the disadvantages of poor slip resolution and the possibility of jamming of the roller.

Torque Sensor:

A torque sensor, torque transducer or torque meter is a device for measuring and recording the torque on a rotating system, such as an engine,
crankshaft, gearbox, transmission, rotor, a bicycle crank or cap torque tester. Static torque is relatively easy to measure. Dynamic torque, on the other
hand, is not easy to measure, since it generally requires transfer of some effect (electric, hydraulic or magnetic) from the shaft being measured to a
static system.

One way to achieve this is to condition the shaft or a member attached to the shaft with a series of permanent magnetic domains. The magnetic
characteristics of these domains will vary according to the applied torque, and thus can be measured using non-contact sensors. Such magnetoelastic
torque sensors are generally used for in-vehicle applications on racecars, automobiles, aircraft, and hovercraft.

Commonly, torque sensors or torque transducers use strain gauges applied to a rotating shaft or axle. With this method, a means to power the strain gauge bridge is necessary, as well as a means to receive the signal from the rotating shaft. This can be accomplished using slip rings, wireless telemetry, or rotary transformers. Newer types of torque transducers add conditioning electronics and an A/D converter to the rotating shaft. Stator electronics then read the digital signals and convert those signals to a high-level analog output signal, such as ±10 V DC.

A more recent development is the use of SAW devices attached to the shaft and remotely interrogated. The strain on these tiny devices as the shaft flexes can be read remotely and output without the need for attached electronics on the shaft. The probable first use in volume will be in the automotive field; as of May 2009, Schott announced it has a SAW sensor package viable for in-vehicle uses.

Another way to measure torque is by way of twist angle measurement or phase shift measurement, whereby the angle of twist resulting from applied
torque is measured by using two angular position sensors and measuring the phase angle between them. This technique is used in the Allison T56
turboprop engine.
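To make the twist-angle method concrete, the sketch below applies the standard torsion relation T = G·J·θ/L for a solid circular shaft (J = πd⁴/32); the steel shear modulus and shaft dimensions are illustrative assumptions, not values from the text.

import math

G_STEEL = 79.3e9  # shear modulus of steel, Pa (typical value)

def torque_from_twist(twist_rad, diameter_m, length_m, g_pa=G_STEEL):
    j = math.pi * diameter_m ** 4 / 32.0  # polar moment of area, m^4
    return g_pa * j * twist_rad / length_m

# 0.5 deg of twist measured over 0.3 m of a 20 mm diameter shaft
t = torque_from_twist(math.radians(0.5), 0.020, 0.3)
print(f"Torque: {t:.1f} N*m")  # ~36 N*m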

Finally (as described in the abstract of US Patent 5257535), if the mechanical system involves a right-angle gearbox, then the axial reaction force experienced by the input shaft/pinion can be related to the torque experienced by the output shaft(s). The axial input stress must first be calibrated against the output torque. The input stress can be easily measured via strain gauge measurement of the input pinion bearing housing. The output torque is easily measured using a static torque meter.
UNIT III Miscellaneous Sensors In Robotics

Different sensing variables:

Smell Sensors

Smell sensors include three major parts: a sample delivery system, a detection system and a computing system. The sample delivery system enables the generation of the headspace (the volatile compounds) of a sample, which is the fraction analyzed. The system then injects this headspace into the detection system of the electronic nose. The sample delivery system is essential to guarantee constant operating conditions.

The detection system, which consists of a sensor set, is the "reactive" part of the instrument. When in contact with volatile compounds, the sensors
react, which means they experience a change of electrical properties.

The more commonly used sensors for smell include:

 metal-oxide-semiconductor field-effect transistor (MOSFET) devices - a transistor used for amplifying or switching electronic signals. These work on the principle that molecules entering the sensor area will be charged either positively or negatively, which should have a direct effect on the electric field inside the MOSFET. Thus, introducing each additional charged particle will directly affect the transistor in a unique way, producing a change in the MOSFET signal that can then be interpreted by pattern-recognition computer systems. So essentially each detectable molecule will have its own unique signal for a computer system to interpret.
 conducting polymers - organic polymers that conduct electricity.
 polymer composites - similar in use to conducting polymers but formulated of non-conducting polymers with the addition of conducting material
such as carbon black.
 quartz crystal microbalance - a way of measuring mass per unit area by measuring the change in frequency of a quartz crystal resonator. This
can be stored in a database and used for future reference.
 surface acoustic wave (SAW) - a class of microelectromechanical systems (MEMS) which rely on the modulation of surface acoustic waves to
sense a physical phenomenon.

As a first step, an electronic nose needs to be trained with qualified samples so as to build a database of reference. Then the instrument can
recognize new samples by comparing a volatile compound's fingerprint to those contained in its database, and can thus perform qualitative or
quantitative analysis. This can, however, also be a source of error: many odors consist of multiple different molecules, which the device may
register as distinct compounds, resulting in incorrect or inaccurate results depending on the primary function of the nose.
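
To make the comparison step concrete, the sketch below matches a new sensor-array fingerprint against a trained reference database using nearest-neighbour Euclidean distance; the odor names and response values are hypothetical examples, not real calibration data:

    import numpy as np

    # Hypothetical reference database: each entry is the response "fingerprint"
    # of the sensor array (one value per sensor) for a trained, known odor.
    reference_db = {
        "ethanol": np.array([0.82, 0.10, 0.55, 0.31]),
        "acetone": np.array([0.40, 0.75, 0.20, 0.66]),
        "ammonia": np.array([0.15, 0.30, 0.90, 0.12]),
    }

    def classify_odor(sample):
        """Return the trained odor whose fingerprint is nearest (Euclidean)."""
        return min(reference_db, key=lambda k: np.linalg.norm(sample - reference_db[k]))

    print(classify_odor(np.array([0.78, 0.14, 0.50, 0.35])))  # -> "ethanol"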

Light Sensors

A Light Sensor is something that a robot can use to detect the current ambient light level - i.e. how bright/dark it is. There are a range of different
types of light sensors, including 'Photoresistors', 'Photodiodes', and 'Phototransistors'. The sensor included in the BOE Shield-Bot kit, and the one
we will be using, is called a Phototransistor.

To understand what a phototransistor is, we must first determine what a transistor is.

Basically, a regular transistor is an electrical component that limits the flow of current by an amount dependent on the current applied to a third pin:
there is the collector, the emitter, and the 'base', which controls how much current can pass from the collector through to the emitter.

Circuit diagram of a transistor

A phototransistor, on the other hand, uses the level of light it detects to determine how much current can pass through the circuit. So, if the sensor is
in a dark room, it only lets a small amount of current through. If it detects a bright light, it lets a larger amount of current through.
Circuit diagram of a phototransistor

We can utilize the phototransistor's unique properties by plugging it into an Analog Port.
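
As a small illustration, the sketch below maps a raw analog-port (ADC) reading from the phototransistor circuit to a coarse brightness category. The thresholds, and the assumption that more light yields a higher reading, depend on the actual circuit wiring and must be calibrated:

    def light_level(adc_counts, adc_max=1023):
        """Map a raw 10-bit ADC reading to a rough ambient-light category.
        Assumes the circuit is wired so that more light -> higher reading."""
        fraction = adc_counts / adc_max
        if fraction < 0.2:
            return "dark"
        elif fraction < 0.7:
            return "dim"
        return "bright"

    print(light_level(850))  # -> "bright"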

Temperature Sensors

Infrared temperature sensors sense electromagnetic waves in the 700 nm to 14,000 nm range. While the infrared spectrum extends up to 1,000,000
nm, IR temperature sensors do not measure above 14,000 nm. These sensors work by focusing the infrared energy emitted by an object onto one
or more photodetectors.

These photodetectors convert that energy into an electrical signal, which is proportional to the infrared energy emitted by the object. Because the
infrared energy emitted by an object is directly related to its temperature (by the Stefan–Boltzmann law, total radiated power scales with the fourth
power of absolute temperature), the electrical signal provides an accurate reading of the temperature of the object the sensor is pointed at. The
infrared signals pass into the sensor through a window made of a specialty plastic. While plastic normally does not allow infrared frequencies to pass
through it, these sensors use a form that is transparent to particular frequencies. The plastic filters out unwanted frequencies and protects the
electronics inside the sensor from dust, dirt and other foreign objects.
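
For illustration, the Stefan–Boltzmann law can be inverted to recover a temperature from measured radiant power. A minimal Python sketch; the emissivity and surface area are assumed example values:

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

    def temperature_from_radiance(power_w, area_m2, emissivity=0.95):
        """Invert the Stefan-Boltzmann law P = e * sigma * A * T^4."""
        return (power_w / (emissivity * SIGMA * area_m2)) ** 0.25

    # Example: a 1 cm^2 surface radiating 46 mW at emissivity 0.95
    print(round(temperature_from_radiance(0.046, 1e-4), 1), "K")  # -> ~304 K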

Advantages of IR Temperature Sensors

 IR sensors read moving objects. Contact-based temperature sensors do not work well on moving objects. Infrared temperature sensors
are ideally suited for measuring the temperatures of tires, brakes and similar devices.
 IR sensors don’t wear. No contact means no friction. Infrared sensors experience no wear and tear, and consequently have longer
operating lives.
 IR sensors can provide more detail. An IR sensor can provide greater detail during a measurement than contact devices, simply by
pointing it at different spots on the object being read.
 IR sensors can be used to detect motion by measuring fluctuations in temperature in the field of view.

Humidity Sensors:

A humidity sensor (or hygrometer) senses, measures and reports the relative humidity in the air. It therefore measures both moisture and air
temperature. Relative humidity is the ratio of actual moisture in the air to the highest amount of moisture that can be held at that air temperature.

Humidity is the presence of water in air. The amount of water vapor in air can affect human comfort as well as many manufacturing processes in
industries. The presence of water vapor also influences various physical, chemical, and biological processes. Humidity measurement in industries is
critical because it may affect the business cost of the product and the health and safety of the personnel. Hence, humidity sensing is very important,
especially in the control systems for industrial processes and human comfort.

Controlling or monitoring humidity is of paramount importance in many industrial and domestic applications. In the semiconductor industry, humidity and
moisture levels need to be properly controlled and monitored during wafer processing. In medical applications, humidity control is required for respiratory
equipment, sterilizers, incubators, pharmaceutical processing, and biological products. Humidity control is also necessary in chemical gas purification,
dryers, ovens, film desiccation, paper and textile production, and food processing. In agriculture, measurement of humidity is important for plantation
protection (dew prevention), soil moisture monitoring, etc. For domestic applications, humidity control is required for living environment in buildings,
cooking control for microwave ovens, etc. In all such applications and many others, humidity sensors are employed to provide an indication of the
moisture levels in the environment.

RELEVANT MOISTURE TERMS

A variety of terms are used to describe moisture levels. The study of water vapour concentration in air as a function of temperature and pressure
falls under the area of psychrometrics. Psychrometrics deals with the thermodynamic properties of moist gases, while the term "humidity" simply refers
to the presence of water vapour in air or another carrier gas. Humidity measurement determines the amount of water vapour present in a gas that can be
a mixture, such as air, or a pure gas, such as nitrogen or argon.

Various terms used to indicate moisture levels are listed below:

1. Absolute Humidity (Vapor Concentration): ratio of mass (vapour) to volume. Unit: grams/m3.
2. Mixing Ratio or Mass Ratio: ratio of mass (vapour) to mass (dry gas). Unit: grams/kg.
3. Relative Humidity: ratio of mass (vapour) to mass (saturated vapour), equivalently the ratio of actual vapour pressure to saturation vapour pressure. Unit: %.
4. Specific Humidity: ratio of mass (vapour) to total mass. Unit: %.
5. Dew Point: temperature (above 0°C) at which the water vapour in a gas condenses to liquid water. Unit: °C.
6. Frost Point: temperature (below 0°C) at which the water vapour in a gas condenses to ice. Unit: °C.
7. Volume Ratio: ratio of partial pressure (vapour) to partial pressure (dry gas). Unit: % by volume.
8. PPM by Volume: ratio of volume (vapour) × 10^6 to volume (dry gas). Unit: PPMv.
9. PPM by Weight: ratio of mass (vapour) × 10^6 to mass (dry gas). Unit: PPMw.

Most commonly used units for humidity measurement are Relative Humidity (RH), Dew/Frost point (D/F PT) and Parts Per Million (PPM). RH is a
function of temperature, and thus it is a relative measurement. Dew/Frost point is a function of the pressure of the gas but is independent of temperature
and is therefore defined as an absolute humidity measurement. PPM is also an absolute measurement.

Dew points and frost points are often used when the dryness of the gas is important. Dew point is also used as an indicator of water vapor in high
temperature processes, such as industrial drying.

Mixing ratios, volume percent, and specific humidity are usually used when water vapor is either an impurity or a defined component of a process gas
mixture used in manufacturing.

Correlation among RH, Dew/Frost point and PPMv is shown below:
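
The original correlation chart is not reproduced here. As a small illustration of how RH and dew point relate, the Magnus approximation can convert one to the other; a Python sketch using the commonly quoted Magnus coefficients:

    import math

    def dew_point_c(temp_c, rh_percent):
        """Approximate dew point (deg C) from air temperature and relative
        humidity using the Magnus formula (roughly valid for 0-60 deg C)."""
        a, b = 17.62, 243.12  # Magnus coefficients for water over liquid
        gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
        return b * gamma / (a - gamma)

    print(round(dew_point_c(25.0, 60.0), 1))  # -> ~16.7 deg C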

HUMIDITY SENSING – CLASSIFICATION & PRINCIPLES


According to the measurement units, humidity sensors are divided into two types: relative humidity (RH) sensors and absolute humidity (moisture)
sensors. Most humidity sensors are relative humidity sensors and use different sensing principles.

Sensing Principle

Humidity measurement can be done using dry and wet bulb hygrometers, dew point hygrometers, and electronic hygrometers. There has been a
surge in the demand for electronic hygrometers, often called humidity sensors.

Electronic hygrometers or humidity sensors can be broadly divided into two categories: one employs the capacitive sensing principle, while the
other uses resistive effects.

Sensors based on capacitive effect:


Humidity sensors relying on this principle consist of a hygroscopic dielectric material sandwiched between a pair of electrodes, forming a small
capacitor. Most capacitive sensors use a plastic or polymer as the dielectric material, with a typical dielectric constant ranging from 2 to 15. In the
absence of moisture, the dielectric constant of the hygroscopic dielectric material and the sensor geometry determine the value of capacitance.
At normal room temperature, the dielectric constant of water vapor has a value of about 80, a value much larger than the constant of the sensor
dielectric material. Therefore, absorption of water vapor by the sensor results in an increase in sensor capacitance.

At equilibrium conditions, the amount of moisture present in a hygroscopic material depends on both the ambient temperature and the ambient water
vapor pressure. This is true also for the hygroscopic dielectric material used on the sensor.

By definition, relative humidity is a function of both the ambient temperature and water vapor pressure. Therefore there is a relationship between
relative humidity, the amount of moisture present in the sensor, and sensor capacitance. This relationship governs the operation of a capacitive
humidity instrument.

Basic structure of capacitive type humidity sensor is shown below:

On an alumina substrate, a lower electrode is formed using gold, platinum or another material. A polymer layer such as PVA is deposited on the
electrode; this layer senses humidity. On top of this polymer film, a gold layer is deposited which acts as the top electrode. The top electrode also
allows water vapour to pass through it into the sensing layer. The vapours enter or leave the hygroscopic sensing layer until the vapour content is in
equilibrium with the ambient air or gas. A capacitive type sensor is thus basically a capacitor with a humidity-sensitive polymer film as the dielectric.
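
Reading such a sensor amounts to inverting its capacitance-versus-RH calibration. A minimal sketch assuming a linear calibration; the dry capacitance and sensitivity are hypothetical example values:

    def rh_from_capacitance(c_measured_pf, c_dry_pf=180.0, sensitivity_pf_per_rh=0.3):
        """Invert a linear calibration C(RH) = C_dry + s * RH.

        c_measured_pf        : capacitance read from the sensor, pF
        c_dry_pf             : calibrated capacitance at 0 %RH (assumed value)
        sensitivity_pf_per_rh: calibrated slope, pF per %RH (assumed value)
        """
        rh = (c_measured_pf - c_dry_pf) / sensitivity_pf_per_rh
        return max(0.0, min(100.0, rh))  # clamp to the physical range

    print(rh_from_capacitance(193.5))  # -> 45.0 %RH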

Sensors based on Resistive effect:

Resistive type humidity sensors pick up changes in the resistance value of the sensor element in response to the change in the humidity. Basic
structure of resistive type humidity sensor from TDK is shown below

A thick-film conductor of precious metal, such as gold or ruthenium oxide, is printed and calcined in the shape of a comb to form an electrode. A
polymeric film is then applied on the electrode; the film acts as a humidity-sensing film due to the existence of movable ions. A change in impedance
occurs due to the change in the number of movable ions.

Speech or Voice recognition Systems:

Speech recognition is the ability of a machine or program to identify words and phrases in spoken language and convert them to a machine-readable
format. Rudimentary speech recognition software has a limited vocabulary of words and phrases, and it may only identify these if they are spoken
very clearly.

The term "voice recognition" is sometimes used to refer as speech recognition where the recognition system is trained to a particular speaker, hence
there is an element of speaker recognition, which attempts to identify the person speaking, to better recognize what is being said. Speech recognition
is a broad term which means it can recognize almost anybody's speech - such as a call-centre system designed to recognize many voices. Voice
recognition is a system trained to a particular user, where it recognizes their speech based on their unique vocal sound.

The first approach is Linear Predictive Coding (LPC) combined with Euclidean Squared Distance (ESD). In this approach LPC is used as the feature
extraction method and Euclidean Squared Distance is used as the recognition method. The second approach is the Hidden Markov Model (HMM), which
is used to build a reference model of the words and also serves as the recognition method. The feature extraction method used in the second approach
is simple segmentation and centroid values. Both approaches work in the time domain. Experiments are carried out over several variations of the
observation-symbol number and the number of samples. The robot can then move in accordance with the voice command; the maximum recognition
rate is sought by refining these methods.

Hidden Markov model

Modern general-purpose speech recognition systems are based on Hidden Markov Models. These are statistical models that output a sequence of
symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time
stationary signal. In a short time-scale (e.g., 10 milliseconds), speech can be approximated as a stationary process. Speech can be thought of as a
Markov model for many stochastic purposes.

Another reason why HMMs are popular is because they can be trained automatically and are simple and computationally feasible to use. In speech
recognition, the hidden Markov model would output a sequence of n-dimensional real-valued vectors (with n being a small integer, such as 10),
outputting one of these every 10 milliseconds. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of
a short time window of speech and decorrelating the spectrum using a cosine transform, then taking the first (most significant) coefficients. The hidden
Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, which will give a likelihood
for each observed vector. Each word, or (for more general speech recognition systems), each phoneme, will have a different output distribution; a
hidden Markov model for a sequence of words or phonemes is made by concatenating the individual trained hidden Markov models for the separate
words and phonemes.

Described above are the core elements of the most common, HMM-based approach to speech recognition. Modern speech recognition systems use
various combinations of a number of standard techniques in order to improve results over the basic approach described above. A typical large-
vocabulary system would need context dependency for the phonemes (so phonemes with different left and right context have different realizations as
HMM states); it would use cepstral normalization to normalize for different speaker and recording conditions; for further speaker normalization it might
use vocal tract length normalization (VTLN) for male-female normalization and maximum likelihood linear regression (MLLR) for more general speaker
adaptation. The features would have so-called delta and delta-delta coefficients to capture speech dynamics and in addition might use heteroscedastic
linear discriminant analysis (HLDA); or might skip the delta and delta-delta coefficients and use splicing and an LDA-based projection followed perhaps
by heteroscedastic linear discriminant analysis or a global semi-tied covariance transform (also known as maximum likelihood linear transform, or
MLLT). Many systems use so-called discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation
and instead optimize some classification-related measure of the training data. Examples are maximum mutual information (MMI), minimum
classification error (MCE) and minimum phone error (MPE).

Decoding of the speech (the term for what happens when the system is presented with a new utterance and must compute the most likely source
sentence) would probably use the Viterbi algorithm to find the best path, and here there is a choice between dynamically creating a combination
hidden Markov model, which includes both the acoustic and language model information, and combining it statically beforehand (the finite state
transducer, or FST, approach).
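
For illustration, a bare-bones Viterbi decoder over a toy discrete-observation HMM is sketched below; the model parameters are made-up examples, not a real acoustic model:

    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely state path for an HMM.
        pi: initial state probs (N,); A: transition probs (N,N);
        B: emission probs (N,M); obs: sequence of observation indices."""
        N, T = len(pi), len(obs)
        logd = np.log(pi) + np.log(B[:, obs[0]])    # log-probs at time 0
        back = np.zeros((T, N), dtype=int)          # backpointers
        for t in range(1, T):
            scores = logd[:, None] + np.log(A)      # scores[i, j]: i -> j
            back[t] = scores.argmax(axis=0)
            logd = scores.max(axis=0) + np.log(B[:, obs[t]])
        path = [int(logd.argmax())]
        for t in range(T - 1, 0, -1):               # trace back the best path
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # Toy 2-state model with 3 observation symbols
    pi = np.array([0.6, 0.4])
    A  = np.array([[0.7, 0.3], [0.4, 0.6]])
    B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(viterbi([0, 1, 2], pi, A, B))  # -> [0, 0, 1]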

A possible improvement to decoding is to keep a set of good candidates instead of just the single best candidate, and to use a better scoring function
(rescoring) to rate these candidates so that we may pick the best one according to this refined score. The set of candidates can be kept either
as a list (the N-best list approach) or as a subset of the models (a lattice). Rescoring is usually done by trying to minimize the Bayes risk (or an
approximation thereof): instead of taking the source sentence with maximal probability, we try to take the sentence that minimizes the expectation of
a given loss function with regard to all possible transcriptions (i.e., we take the sentence that minimizes the average distance to other possible
sentences weighted by their estimated probability). The loss function is usually the Levenshtein distance, though it can be a different distance for
specific tasks; the set of possible transcriptions is, of course, pruned to maintain tractability. Efficient algorithms have been devised to rescore lattices
represented as weighted finite state transducers, with edit distances represented themselves as finite state transducers verifying certain assumptions.

Dynamic time warping (DTW)-based speech recognition

Dynamic time warping is an approach that was historically used for speech recognition but has now largely been displaced by the more successful
HMM-based approach.

Dynamic time warping is an algorithm for measuring similarity between two sequences that may vary in time or speed. For instance, similarities in
walking patterns would be detected, even if in one video the person was walking slowly and if in another he or she were walking more quickly, or even
if there were accelerations and deceleration during the course of one observation. DTW has been applied to video, audio, and graphics – indeed, any
data that can be turned into a linear representation can be analyzed with DTW.

A well-known application has been automatic speech recognition, to cope with different speaking speeds. In general, it is a method that allows a
computer to find an optimal match between two given sequences (e.g., time series) with certain restrictions. That is, the sequences are "warped" non-
linearly to match each other. This sequence alignment method is often used in the context of hidden Markov models.
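
A minimal DTW distance computation is sketched below; the two toy sequences represent the same "shape" uttered at different speeds, so their DTW distance is zero even though their lengths differ:

    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic-time-warping distance between two 1-D sequences."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 3, 2, 1]))  # -> 0.0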

Neural networks

In contrast to HMMs, neural networks make no assumptions about feature statistical properties and have several qualities making them attractive
recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks allow discriminative
training in a natural and efficient manner. Few assumptions on the statistics of input features are made with neural networks. However, in spite of their
effectiveness in classifying short-time units such as individual phonemes and isolated words, neural networks are rarely successful for continuous
recognition tasks, largely because of their lack of ability to model temporal dependencies.

Deep feedforward and recurrent neural networks

A deep feedforward neural network (DNN) is an artificial neural network with multiple hidden layers of units between the input and output layers.
Similar to shallow neural networks, DNNs can model complex non-linear relationships. DNN architectures generate compositional models, where
extra layers enable composition of features from lower layers, giving a huge learning capacity and thus the potential of modeling complex patterns of
speech data.

End-to-end automatic speech recognition

Since 2014, there has been much research interest in "end-to-end" ASR. Traditional phonetic-based (i.e., all HMM-based model) approaches required
separate components and training for the pronunciation, acoustic and language model. End-to-end models jointly learn all the components of the
speech recognizer. This is valuable since it simplifies both the training process and the deployment process. For example, an n-gram language model is
required for all HMM-based systems, and a typical n-gram language model often takes several gigabytes of memory, making such models impractical to
deploy on mobile devices. Consequently, modern commercial ASR systems from Google and Apple (as of 2017) are deployed in the cloud and require a
network connection, as opposed to running locally on the device.

Telepresence and related technologies:

A telepresence robot is a computer, tablet, or smartphone-controlled robot which includes a video-camera, screen, speakers and microphones so that
people interacting with the robot can view and hear its operator and the operator can simultaneously view what the robot is “looking” at and
"hearing." Some robots require a tablet or phone to be attached to the robot, while others include built-in video and audio features.
What is a Telepresence Robot and what can it do?

Simply put, a telepresence robot helps place "you" at a remote location instantly, providing you a virtual presence, or "telepresence."

People from all types of environments are putting telepresence robots into action. School districts, corporate offices, hospitals, medical clinics, business
warehouses, and more, are seeking potential benefits which can be gained by taking prudent advantage of the progressions within the field of
telepresence robotics. Consequently, telepresence robots themselves are growing in popularity as their potential continues to be explored, developed,
and utilized. Robot owners are appreciating the cost savings, time and energy savings, and the enhanced communication and presence which
telepresence robots can bring to most any area or location.

“Intriguing,” you may be thinking, “but of what significance is this to me? What good can a telepresence robot do me or my business?”

One answer is that, as mentioned above, a telepresence robot can be used to provide yourself a “far reaching” pair of mobile “eyes” and "ears,"
enabling you to have a remote presence at any location with an internet connection. For example, if you had a part in arranging a remote office in
London while you were at your home office in Seattle, and you needed to make sure everything was in order and just as you wanted it to be, a
telepresence robot would allow you to see the arrangement of that office in London with the added convenience of being able to control exactly what
you wished to view merely by the press of a button or two from your laptop in Seattle. To reiterate, the user has complete control to move the robot
around the office in London and to view anything at the robot's location.

Hospitals have been using telemedicine features for years and now telepresence robots provide even more robust technology to help surgeons more
effectively advise their peers during an operation, physicians to more conveniently perform their rounds or monitor patients who have recently been
released from the hospital, and experts to eliminate travel times in emergency situations such as the event of a stroke, when each minute saved
results in the saving of millions of brain cells. Within the medical field these robots are commonly referred to more specifically as "medical telepresence
robots" or "hospital telepresence robots," many of which have health-related applications added on to the basic telepresence capabilites.

Telepresence robots go beyond a simple video conference call because the operator has full control of what they wish to see: no more need for
multiple people to leave their seats and rotate so they can be seen by the video screen. No more need to wait for an employee to have a remote
conference; you can go to him at your convenience. No more need to fly out or drive to view a warehouse or visit a patient in an emergency; simply
log in to your robot and be there in a heartbeat to assess the situation. Simply use your computer, tablet, or smartphone to direct the robot's camera
to see what or whom you wish to see, whenever you like. This control is further enhanced by the ability to drive the robot around rooms and hallways,
offering a more complete virtual presence. Furthermore, some robots offer additional features such as a laser pointer, which can help
increase the effectiveness of communication, and auto-navigation and mapping features that enable you to click on a location and prepare notes or
relax while the robot travels there autonomously, providing an indication to you upon its arrival.
UNIT IV Vision Sensors in Robotics

Robot Control through Vision sensors

A Vision Guided Robot (VGR) System is a robot fitted with one or more cameras used as sensors to provide a secondary feedback signal to the
robot controller to more accurately move to a variable target position. VGR is rapidly transforming production processes by enabling robots to be
highly adaptable and more easily implemented, while dramatically reducing the cost and complexity of fixed tooling previously associated with the
design and set up of robotic cells, whether for material handling, automated assembly, agricultural applications, life sciences, and more. In one
classic though dated example of VGR used for industrial manufacturing, the vision system (camera and software) determines the position of
randomly fed products onto a recycling conveyor. The vision system provides the exact location coordinate of the components to the robot, which
are spread out randomly beneath the camera's field of view, enabling the robot arm(s) to position the attached end effector (gripper) to the selected
component to pick from the conveyor belt. The conveyor may stop under the camera to allow the position of the part to be determined, or if the cycle
time is sufficient, it is possible to pick a component without stopping the conveyor using a control scheme that tracks the moving component through
the vision software, typically by fitting an encoder to the conveyor, and using this feedback signal to update and synchronize the vision and motion
control loops.

 Switching between products and batch runs is software controlled and very fast, with no mechanical adjustments.
 High residual value, even if production is changed.
 Short lead times, and short payback periods
 High machinery efficiency, reliability, and flexibility
 Possibility to integrate a majority of secondary operations such as deburring, clean blowing, washing, measuring and so on.
 Reduces manual work

Vision systems for robot guidance: A vision system comprises a camera and microprocessor or computer, with associated software. This is a very
wide definition that can be used to cover many different types of systems which aim to solve a large variety of different tasks. Vision systems can
be implemented in virtually any industry for any purpose. It can be used for quality control to check dimensions, angles, colour or surface
structure, or for the recognition of an object as used in VGR systems.

A camera can be anything from a standard compact camera system with integrated vision processor to more complex laser sensors and high
resolution high speed cameras. Combinations of several cameras to build up 3D images of an object are also available.

 Vision System operates in a closed control loop.


 Better Accuracy than "Look and Move" systems
Limitations of a vision system: There are always difficulties in integrating a vision system so that the camera matches the set expectations of the
system; in most cases this is caused by lack of knowledge on behalf of the integrator or machine builder. Many vision systems can be applied
successfully to virtually any production activity, as long as the user knows exactly how to set up the system parameters. This set-up, however,
requires a large amount of knowledge from the integrator, and the number of possibilities can make the solution complex. Lighting in industrial
environments can be another major downfall of many vision systems.

 Measurement Frequency
 Measurement Uncertainty
 Occlusion, Camera Positioning
 Sensor dimensions

End Effector Camera Sensor:

What can Computer Vision do for Robotics?

 Accurate Robot-Object Positioning


 Keeping Relative Position under Movement
 Visualization / Teaching / Telerobotics
 Performing measurements
 Object Recognition
 Registration

Vision Sensors Used:

 Single Perspective Camera


 Multiple Perspective Cameras (e.g. Stereo Camera Pair)
 Laser Scanner
 Omnidirectional Camera
 Structured Light Sensor

Camera Configurations:

(1) End-Effector Mounted


(2) Fixed

Position-based and Image Based control:

– Position based:

• Alignment in target coordinate system


• The 3D structure of the target is reconstructed
• The end-effector is tracked
• Sensitive to calibration errors
• Sensitive to reconstruction errors

– Image based:
• Alignment in image coordinates
• No explicit reconstruction necessary
• Insensitive to calibration errors
• Only special problems solvable
• Depends on initial pose
• Depends on selected features
EOL and ECL control:

– EOL: endpoint open-loop; only the target is observed by the camera

– ECL: endpoint closed-loop; target as well as end-effector are observed by the camera

Single Perspective Camera:

x = P X, where P is the 3×4 perspective projection matrix mapping a homogeneous world point X to an image point x.

Multiple Perspective Cameras (e.g. Stereo Camera Pair):

x'^T F x = 0, l' = F x, where F is the fundamental matrix relating corresponding image points x and x' of the two views, and l' is the epipolar line of x in the second image.
Laser Scanner:
Omnidirectional Camera:

Structured Light Sensor:

 Position Based Algorithm:

(1) Estimation of relative pose


(2) Computation of error between current pose and target pose
(3) Movement of robot
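
A minimal sketch of this three-step loop in Python; the pose-estimation and robot-motion callables are hypothetical stand-ins for the vision system and the manipulator interface, and only translation is handled:

    import numpy as np

    def position_based_servo(estimate_pose, move_robot, target_pose,
                             gain=0.5, tol=1e-3, max_iters=100):
        """Minimal position-based visual-servoing loop (translation only).

        estimate_pose : callable returning the current end-effector position
                        (x, y, z) as measured by the vision sensor -- assumption
        move_robot    : callable applying a small Cartesian correction -- assumption
        """
        for _ in range(max_iters):
            pose = np.asarray(estimate_pose())       # (1) estimate relative pose
            error = np.asarray(target_pose) - pose   # (2) error to target pose
            if np.linalg.norm(error) < tol:
                return True                          # converged
            move_robot(gain * error)                 # (3) move the robot
        return False

    # Toy simulation: the "robot" position converges toward the target
    state = np.zeros(3)

    def read_pose():            # stands in for the vision measurement
        return state

    def move(step):             # stands in for the robot motion command
        state[:] = state + step

    print(position_based_servo(read_pose, move, target_pose=[0.2, -0.1, 0.3]),
          state.round(3))       # -> True [ 0.2 -0.1  0.3]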

 Computer Vision provides accurate and versatile measurements for robotic manipulators
 With current general purpose hardware, depth and pose measurements can be performed in real time
 In industrial robotics, vision systems are deployed in a fully automated way.
 In medicine, computer vision can make more intelligent "surgical assistants" possible.

Robot vision locating position

Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as
automatic inspection, process control, and robot guidance, usually in industry. Machine vision is a term encompassing a large number of
technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering
discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways
and apply them to solve real world problems. The term is also used in a broader sense by trade shows and trade groups; this broader definition
also encompasses products and applications most often associated with image processing.

Imaging based robot guidance

Machine vision commonly provides location and orientation information to a robot to allow the robot to properly grasp the product. This capability
is also used to guide motion devices that are simpler than robots, such as a 1- or 2-axis motion controller. The overall process includes planning the details
of the requirements and project, and then creating a solution. This section describes the technical process that occurs during the operation of the
solution. Many of the process steps are the same as with automatic inspection except with a focus on providing position and orientation
information as the end result.

Image processing

After an image is acquired, it is processed. Multiple stages of processing are generally used in a sequence that ends up as a desired result. A
typical sequence might start with tools such as filters which modify the image, followed by extraction of objects, then extraction (e.g.
measurements, reading of codes) of data from those objects, followed by communicating that data, or comparing it against target values to create
and communicate "pass/fail" results. Machine vision image processing methods include:

 Stitching/Registration: Combining of adjacent 2D or 3D images


 Filtering (e.g. morphological filtering)
 Thresholding: Thresholding starts with setting or determining a gray value that will be useful for the following steps. The value is then used to
separate portions of the image, and sometimes to transform each portion of the image to simply black and white based on whether it is
below or above that grayscale value.
 Pixel counting: counts the number of light or dark pixels
 Segmentation: Partitioning a digital image into multiple segments to simplify and/or change the representation of an image into something
that is more meaningful and easier to analyze.
 Edge detection: finding object edges
 Color Analysis: Identify parts, products and items using color, assess quality from color, and isolate features using color.
 Blob detection and extraction: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image
landmarks.
 Neural net / deep learning processing: weighted and self-training multi-variable decision making
 Pattern recognition including template matching. Finding, matching, and/or counting specific patterns. This may include location of an object
that may be rotated, partially hidden by another object, or varying in size.
 Barcode, Data Matrix and "2D barcode" reading
 Optical character recognition: automated reading of text such as serial numbers
 Gauging/Metrology: measurement of object dimensions (e.g. in pixels, inches or millimeters)
 Comparison against target values to determine a "pass or fail" or "go/no go" result. For example, with code or bar code verification, the read
value is compared to the stored target value. For gauging, a measurement is compared against the proper value and tolerances. For
verification of alphanumeric codes, the OCR'd value is compared to the proper or target value. For inspection for blemishes, the measured
size of the blemishes may be compared to the maximums allowed by quality standards.
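
To make a few of these steps concrete, the sketch below combines thresholding, pixel counting and blob extraction on a synthetic image, using scipy.ndimage for connected-component labelling; the threshold and image values are arbitrary examples:

    import numpy as np
    from scipy import ndimage

    def count_blobs(gray, threshold=128):
        """Threshold a grayscale image to binary, then count connected blobs
        of dark pixels (e.g. holes in a bright part)."""
        binary = gray < threshold                 # thresholding step
        labels, n_blobs = ndimage.label(binary)   # blob extraction
        dark_pixels = int(binary.sum())           # pixel counting
        return n_blobs, dark_pixels

    # Synthetic 8x8 "image": bright background with two dark spots
    img = np.full((8, 8), 200, dtype=np.uint8)
    img[1:3, 1:3] = 20
    img[5:7, 4:7] = 30
    print(count_blobs(img))  # -> (2, 10)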

What is Robotic Vision?

Robotic vision is similar to human vision – it provides valuable information that the robot can use to interact with the world around it. Robots
equipped with vision can identify colors, find parts, detect people, check quality, process information about its surroundings, read text, or carry out
just about any other function we might desire. Even though we refer to this as robotic vision, the systems often differ greatly from the way our eyes
work. When learning about robotic vision, it is best to start with the basic parts of the system.

Vision System Components:

All types of vision systems share some common components. One of the crucial components of any vision system is the camera. This is the part
of the system that will take in light from the outside world and convert it into digital data that can be processed and analyzed by the system.

Originally, the cameras consisted of a small number of photocells (around 2000 pixels) arranged behind a lens and worked off a greyscale of 256
different shades to determine the shape of images. Today, the cameras used in robotic vision range from 2 megapixels on up with full color and
4,095 different shades to work with. This large amount of data has made image processing easier, as it provides a wealth of information, but not
necessarily faster.

This brings us to the next main component of the vision system, the processor. The processor converts all the raw data from the camera into
something useful to the robot. There are two main methods of processing the information from the camera – edge detection and clustering.

With edge detection, the processor looks for sharp differences in the light data from the camera, which it then considers an edge. Once it finds an
edge, the processor looks at the data from pixels nearby to see where else it can find a similar difference. This process continues until it has found
the outline information for the image.

With clustering, the processor finds pixels that have identical data and then looks for other pixels nearby with the same or near same data. This
process develops an image using the data captured by the camera. Once the processor has decided what the image is, it formats the information
into something the robot can use and sends it to the robot’s system.
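
A bare-bones version of the edge-detection idea simply marks pixels where the local intensity gradient is large; in the sketch below the threshold is an arbitrary example value and the input is a synthetic step image:

    import numpy as np

    def edge_map(gray, threshold=50):
        """Mark pixels whose local intensity gradient exceeds a threshold,
        a bare-bones version of the edge-detection step described above."""
        g = gray.astype(float)
        dy, dx = np.gradient(g)          # finite-difference gradients
        magnitude = np.hypot(dx, dy)     # gradient strength per pixel
        return magnitude > threshold     # True where an "edge" is found

    # A step from dark (0) to bright (255) yields a vertical edge band
    step = np.zeros((5, 6))
    step[:, 3:] = 255
    print(edge_map(step).astype(int))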

This brings us to the last key piece of any vision system – cabling. In earlier technology, the communication cables used for vision systems were
clunky and limited in how far they could send the data without loss.

Around 2009, Adimec developed a new way of sending data that allowed over 6 Gbps of data transmission over coaxial cable, and named it
‘CoaXPress’. This protocol, and those that followed in its wake, ensured that we would be able to use one coaxial cable for transmission of data,
despite the fact that the amount of data we need to transmit keeps growing.

Not all vision systems use just one coaxial cable for data transmission, so it is important for those working with vision systems to understand the
specifics and limitations of the system they have.
UNIT V Multi Sensor Controlled Robot Assembly

Control Computer

The robots that are controlled by interfacing with a personal computer are called as computer controlled robots. A computer is an integral part of every
robot system that contains a control program and a task program. The control program is provided by the manufacturer and controls each joint of
the robot manipulator. The task program is provided by the user and specifies the manipulating motions that are required to complete a specific job.
When a programming language is used, the robot computer also contains a language processor that interprets the task programs and provides the data
required by the control program to direct the Robot’s motions. Nowadays robots are used to perform a variety of tasks due to increasing demand and
production as they can work 24/7 relentlessly without breaking down, but such robots are very expensive.

Vision Sensor modules - Understanding camera sensors for Machine Vision Applications

Imaging electronics, in addition to imaging optics, play a significant role in the performance of an imaging system. Proper integration of all
components, including camera, capture board, software, and cables results in optimal system performance. Before delving into any additional topics, it
is important to understand the camera sensor and key concepts and terminology associated with it.

The heart of any camera is the sensor; modern sensors are solid-state electronic devices containing up to millions of discrete photodetector sites
called pixels. Although there are many camera manufacturers, the majority of sensors are produced by only a handful of companies. Still, two
cameras with the same sensor can have very different performance and properties due to the design of the interface electronics. In the past, cameras
used phototubes such as Vidicons and Plumbicons as image sensors. Though they are no longer used, their mark on nomenclature associated with
sensor size and format remains to this day. Today, almost all sensors in machine vision fall into one of two categories: Charge-Coupled Device (CCD)
and Complementary Metal Oxide Semiconductor (CMOS) imagers.

Sensor Construction

Charge-Coupled Device (CCD)

The charge-coupled device (CCD) was invented in 1969 by scientists at Bell Labs in New Jersey, USA. For years, it was the prevalent technology for
capturing images, from digital astrophotography to machine vision inspection. The CCD sensor is a silicon chip that contains an array of
photosensitive sites (Figure 1). The term charge-coupled device actually refers to the method by which charge packets are moved around on the chip
from the photosites to readout, a shift register, akin to the notion of a bucket brigade. Clock pulses create potential wells to move charge packets
around on the chip, before being converted to a voltage by a capacitor. The CCD sensor is itself an analog device, but the output is immediately
converted to a digital signal by means of an analog-to-digital converter (ADC) in digital cameras, either on or off chip. In analog cameras, the voltage
from each site is read out in a particular sequence, with synchronization pulses added at some point in the signal chain for reconstruction of the
image.

The charge packets can only be transferred at a limited speed, so the charge transfer is responsible for the main CCD drawback of limited speed, but
it also leads to the high sensitivity and pixel-to-pixel consistency of the CCD. Since each charge packet sees the same voltage conversion,
the CCD is very uniform across its photosensitive sites. The charge transfer also leads to the phenomenon of blooming, wherein charge from one
photosensitive site spills over to neighboring sites due to a finite well depth or charge capacity, placing an upper limit on the useful dynamic range of
the sensor. This phenomenon manifests itself as the smearing out of bright spots in images from CCD cameras.

To compensate for the space on the chip taken up by the charge-coupled shift registers, microlenses are used to increase the fill factor, or effective
photosensitive area. This improves the efficiency of the pixels, but increases the angular sensitivity for incoming light rays, requiring that they hit the
sensor near normal incidence for efficient collection.
Figure 1: Block Diagram of a Charge-Coupled Device (CCD)

Complementary Metal Oxide Semiconductor (CMOS)

The complementary metal oxide semiconductor (CMOS) was invented in 1963 by Frank Wanlass. However, he did not receive a patent for it until
1967, and it did not become widely used for imaging applications until the 1990s. In a CMOS sensor, the charge from the photosensitive pixel is
converted to a voltage at the pixel site and the signal is multiplexed by row and column to multiple on-chip analog-to-digital converters (ADCs).
Inherent to its design, CMOS is a digital device. Each site is essentially a photodiode and three transistors, performing the functions of resetting or
activating the pixel, amplification and charge conversion, and selection or multiplexing (Figure 2). This leads to the high speed of CMOS sensors, but
also low sensitivity as well as high fixed-pattern noise due to fabrication inconsistencies in the multiple charge to voltage conversion circuits.
Figure 2: Block Diagram of a Complementary Metal Oxide Semiconductor (CMOS)

The multiplexing configuration of a CMOS sensor is often coupled with an electronic rolling shutter, although, with additional transistors at the pixel
site, a global shutter can be accomplished wherein all pixels are exposed simultaneously and then read out sequentially. An additional advantage of a
CMOS sensor is its low power consumption and dissipation compared to an equivalent CCD sensor, due to less flow of charge, or current. Also, the
CMOS sensor’s ability to handle high light levels without blooming allows for its use in special high dynamic range cameras, even capable of imaging
welding seams or light filaments. CMOS cameras also tend to be smaller than their digital CCD counterparts, as digital CCD cameras require
additional off-chip ADC circuitry.

The multilayer MOS fabrication process of a CMOS sensor does not allow for the use of microlenses on the chip, thereby decreasing the effective
collection efficiency or fill factor of the sensor in comparison with a CCD equivalent. This low efficiency combined with pixel-to-pixel inconsistency
contributes to a lower signal-to-noise ratio and lower overall image quality than CCD sensors. Refer to Table 1 for a general comparison of CCD and
CMOS sensors.

Table 1: General comparison of CCD and CMOS sensors

Sensor              CCD                CMOS
Pixel Signal        Electron Packet    Voltage
Chip Signal         Analog             Digital
Fill Factor         High               Moderate
Responsivity        Moderate           Moderate – High
Noise Level         Low                Moderate – High
Dynamic Range       High               Moderate
Uniformity          High               Low
Resolution          Low – High         Low – High
Speed               Moderate – High    High
Power Consumption   Moderate – High    Low
Complexity          Low                Moderate
Cost                Moderate           Moderate

Alternative Sensor Materials

Short-wave infrared (SWIR) is an emerging technology in imaging. It is typically defined as light in the 0.9 – 1.7μm wavelength range, but can also be
classified from 0.7 – 2.5μm. Using SWIR wavelengths allows for the imaging of density variations, as well as through obstructions such as fog.
However, normal CCD and CMOS sensors are not sensitive enough in the infrared to be useful. As such, special indium gallium arsenide (InGaAs)
sensors are used. The InGaAs material has a band gap, or energy gap, that makes it useful for generating a photocurrent from infrared energy. These
sensors use an array of InGaAs photodiodes, generally in the CMOS sensor architecture. For visible and SWIR comparison images, view What is
SWIR?.

At even longer wavelengths than SWIR, thermal imaging becomes dominant. For this, a microbolometer array is used for its sensitivity in the 7 - 14μm
wavelength range. In a microbolometer array, each pixel has a bolometer which has a resistance that changes with temperature. This resistance
change is read out by conversion to a voltage by electronics in the substrate (Figure 3). These sensors do not require active cooling, unlike many
infrared imagers, making them quite useful.

Figure 3: Illustration of Cross-Section of Microbolometer Sensor Array

Sensor Features

Pixels

When light from an image falls on a camera sensor, it is collected by a matrix of small potential wells called pixels. The image is divided into these
small discrete pixels. The information from these photosites is collected, organized, and transferred to a monitor to be displayed. The pixels may be
photodiodes or photocapacitors, for example, which generate a charge proportional to the amount of light incident on that discrete place of the sensor,
spatially restricting and storing it. The ability of a pixel to convert an incident photon to charge is specified by its quantum efficiency. For example, if for
ten incident photons, four photo-electrons are produced, then the quantum efficiency is 40%. Typical values of quantum efficiency for solid-state
imagers are in the range of 30 - 60%. The quantum efficiency depends on wavelength and is not necessarily uniform over the response to light
intensity. Spectral response curves often specify the quantum efficiency as a function of wavelength. For more information, see the section of this
application note on Spectral Properties.

In digital cameras, pixels are typically square. Common pixel sizes are between 3 - 10μm. Although sensors are often specified simply by the number
of pixels, the size is very important to imaging optics. Large pixels have, in general, high charge saturation capacities and high signal-to-noise ratios
(SNRs). With small pixels, it becomes fairly easy to achieve high resolution for a fixed sensor size and magnification, although issues such as
blooming become more severe and pixel crosstalk lowers the contrast at high spatial frequencies. A simple measure of sensor resolution is the
number of pixels per millimeter.
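
To see why larger pixels help, consider a photon-shot-noise-limited SNR estimate; in the sketch below the quantum efficiency and read noise are assumed typical values:

    import math

    def shot_limited_snr(photons, qe=0.45, read_noise_e=5.0):
        """Shot-noise-limited SNR for a single pixel.

        photons      : photons incident on the pixel during exposure
        qe           : quantum efficiency (typical solid-state: 0.3-0.6)
        read_noise_e : read noise in electrons RMS (assumed value)
        """
        signal = photons * qe                           # photo-electrons
        noise = math.sqrt(signal + read_noise_e ** 2)   # shot + read noise
        return signal / noise

    # A pixel with 4x the area collects ~4x the photons -> ~2x the SNR
    print(round(shot_limited_snr(10_000), 1),
          round(shot_limited_snr(40_000), 1))  # -> 66.9 134.1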

Analog CCD cameras have rectangular pixels (larger in the vertical dimension). This is a result of a limited number of scanning lines in the signal
standards (525 lines for NTSC, 625 lines for PAL) due to bandwidth limitations. Asymmetrical pixels yield higher horizontal resolution than vertical.
Analog CCD cameras (with the same signal standard) usually have the same vertical resolution. For this reason, the imaging industry standard is to
specify resolution in terms of horizontal resolution.

Figure 4: Illustration of Camera Sensor Pixels with RGB Color and Infrared Blocking Filters

Sensor Size

The size of a camera sensor's active area is important in determining the system's field of view (FOV). Given a fixed primary magnification
(determined by the imaging lens), larger sensors yield greater FOVs. There are several standard area-scan sensor sizes: ¼", 1/3", ½", 1/1.8", 2/3", 1"
and 1.2", with larger available (Figure 5). The nomenclature of these standards dates back to the Vidicon vacuum tubes used for television broadcast
imagers, so it is important to note that the actual dimensions of the sensors differ. Note: There is no direct connection between the sensor size and its
dimensions; it is purely a legacy convention. However, most of these standards maintain a 4:3 (Horizontal: Vertical) dimensional aspect ratio.

Figure 5: Illustration of Sensor Size Dimensions for Standard Camera Sensors

One issue that often arises in imaging applications is the ability of an imaging lens to support certain sensor sizes. If the sensor is too large for the
lens design, the resulting image may appear to fade away and degrade towards the edges because of vignetting (extinction of rays which pass
through the outer edges of the imaging lens). This is commonly referred to as the tunnel effect, since the edges of the field become dark. Smaller
sensor sizes do not yield this vignetting issue.
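
The FOV relationship mentioned above is simply the sensor dimension divided by the primary magnification; a one-line sketch, where the 8.8 mm figure is the nominal horizontal dimension of a 2/3" sensor:

    def field_of_view_mm(sensor_dim_mm, primary_magnification):
        """FOV = sensor dimension / PMAG (fixed primary magnification)."""
        return sensor_dim_mm / primary_magnification

    # A 2/3" sensor is nominally 8.8 mm wide; at 0.5x magnification:
    print(field_of_view_mm(8.8, 0.5), "mm")  # -> 17.6 mm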

Frame Rate and Shutter Speed


The frame rate refers to the number of full frames (which may consist of two fields) composed in a second. For example, an analog camera with a
frame rate of 30 frames/second contains two 1/60 second fields. In high-speed applications, it is beneficial to choose a faster frame rate to acquire
more images of the object as it moves through the FOV.

Figure 6: Relationship between Shutter Speed, Fields, and Full Frame for Interlaced Display

The shutter speed corresponds to the exposure time of the sensor. The exposure time controls the amount of incident light. Camera blooming (caused
by over-exposure) can be controlled by decreasing illumination, or by increasing the shutter speed. Increasing the shutter speed can help in creating
snap shots of a dynamic object which may only be sampled 30 times per second (live video).

Unlike analog cameras where, in most cases, the frame rate is dictated by the display, digital cameras allow for adjustable frame rates. The maximum
frame rate for a system depends on the sensor readout speed, the data transfer rate of the interface including cabling, and the number of pixels
(amount of data transferred per frame). In some cases, a camera may be run at a higher frame rate by reducing the resolution by binning pixels
together or restricting the area of interest. This reduces the amount of data per frame, allowing for more frames to be transferred for a fixed transfer
rate. To a good approximation, the exposure time is the inverse of the frame rate. However, there is a finite minimum time between exposures (on the
order of hundreds of microseconds) due to the process of resetting pixels and reading out, although many cameras have the ability to readout a frame
while exposing the next time (pipelining); this minimum time can often be found on the camera datasheet. For additional information on binning pixels
and area of interest, view Imaging Electronics 101: Basics of Digital Camera Settings for Improved Imaging Results.
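
As a rough illustration of the bandwidth trade-off, the sketch below bounds the frame rate by the interface data rate alone; the 6.25 Gbps link speed is an assumed example (one CoaXPress CXP-6 lane), and real cameras are further limited by sensor readout time:

    def max_frame_rate(width, height, bits_per_pixel=8, link_gbps=6.25):
        """Upper bound on frame rate set by interface bandwidth alone."""
        bits_per_frame = width * height * bits_per_pixel
        return link_gbps * 1e9 / bits_per_frame

    # Halving the area of interest doubles the achievable frame rate
    print(round(max_frame_rate(1920, 1080)), "fps")  # -> ~377 fps
    print(round(max_frame_rate(1920, 540)), "fps")   # -> ~754 fps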

CMOS cameras have the potential for higher frame rates, as the process of reading out each pixel can be done more quickly than with the charge
transfer in a CCD sensor’s shift register. For digital cameras, exposures can be made from tens of seconds to minutes, although the longest
exposures are only possible with CCD cameras, which have lower dark currents and noise compared to CMOS. The noise intrinsic to CMOS imagers
restricts their useful exposure to only seconds.

Electronic Shutter

Until a few years ago, CCD cameras used electronic or global shutters, and all CMOS cameras were restricted to rolling shutters. A global shutter is
analogous to a mechanical shutter, in that all pixels are exposed and sampled simultaneously, with the readout then occurring sequentially; the photon
acquisition starts and stops at the same time for all pixels. On the other hand, a rolling shutter exposes, samples, and reads out sequentially; it implies
that each line of the image is sampled at a slightly different time. Intuitively, images of moving objects are distorted by a rolling shutter; this effect can
be minimized with a triggered strobe placed at the point in time where the integration period of the lines overlaps. Note that this is not an issue at low
speeds. Implementing global shutter for CMOS requires a more complicated architecture than the standard rolling shutter model, with an additional
transistor and storage capacitor, which also allows for pipelining, or beginning exposure of the next frame during the readout of the previous frame.
Since the availability of CMOS sensors with global shutters is steadily growing, both CCD and CMOS cameras are useful in high-speed motion
applications.

In contrast to global and rolling shutters, an asynchronous shutter refers to the triggered exposure of the pixels. That is, the camera is ready to acquire
an image, but it does not enable the pixels until after receiving an external triggering signal. This is opposed to a normal constant frame rate, which
can be thought of as internal triggering of the shutter.
Figure 7a: Comparison of Motion Blur. Sensor Chip on a Fast-Moving Conveyer with Triggered Global Shutter (Left) and Continuous Global

Shutter (Right)

Figure 7b: Comparison of Motion Blur in Global and Rolling Shutters. Sensor Chip on a Slow-Moving Conveyer with Global Shutter (Left)

and Rolling Shutter (Right)

Sensor Taps

One way to increase the readout speed of a camera sensor is to use multiple taps on the sensor. This means that instead of all pixels being read out
sequentially through a single output amplifier and ADC, the field is split and read to multiple outputs. This is commonly seen as a dual tap where the
left and right halves of the field are readout separately. This effectively doubles the frame rate, and allows the image to be reconstructed easily by
software. It is important to note that if the gain is not the same between the sensor taps, or if the ADCs have slightly different performance, as is
usually the case, then a visible dividing line appears in the reconstructed image. The good news is that this can be calibrated out. Many large sensors which have
more than a few million pixels use multiple sensor taps. This, for the most part, only applies to progressive scan digital cameras; otherwise, there will
be display difficulties. The performance of a multiple tap sensor depends largely on the implementation of the internal camera hardware.

SPECTRAL PROPERTIES

Monochrome Cameras

CCD and CMOS sensors are sensitive to wavelengths from approximately 350 - 1050nm, although the range is usually given from 400 - 1000nm. This
sensitivity is indicated by the sensor’s spectral response curve (Figure 8). Most high-quality cameras provide an infrared (IR) cut-off filter for imaging
specifically in the visible spectrum. These filters are sometimes removable for near-IR imaging.
Figure 8: Normalized Spectral Response of a Typical Monochrome CCD

CMOS sensors are, in general, more sensitive to IR wavelengths than CCD sensors. This results from their increased active area depth. The
penetration depth of a photon depends on its frequency, so deeper depths for a given active area thickness produce fewer photoelectrons and
decrease quantum efficiency.

Color Cameras

The solid state sensor is based on the photoelectric effect and, as a result, cannot distinguish between colors. There are two types of color CCD cameras: single chip and three-chip. Single chip color CCD cameras offer a common, low-cost imaging solution and use a mosaic (e.g. Bayer) optical filter to separate incoming light into a series of colors. Each color is then directed to a different set of pixels (Figure 9a). The precise layout of the mosaic pattern varies between manufacturers. Since more pixels are required to recognize color, single chip color cameras inherently have lower resolution than their monochrome counterparts; the extent of this issue depends on the manufacturer-specific color interpolation algorithm.

Figure 9a: Single-Chip Color CCD Camera Sensor using Mosaic Filter to Filter Colors
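As a hedged illustration, the color interpolation (demosaicing) step can be reproduced in a single OpenCV call. The file name and the BG filter layout below are assumptions, since the mosaic pattern varies between manufacturers.

```python
# A minimal sketch of demosaicing a single-chip (Bayer) raw frame with OpenCV.
import cv2

raw = cv2.imread("raw_bayer.png", cv2.IMREAD_GRAYSCALE)  # hypothetical raw frame
color = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)         # interpolate full color
cv2.imwrite("demosaiced.png", color)
```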

Three-chip color CCD cameras are designed to solve this resolution problem by using a prism to direct each section of the incident spectrum to a
different chip (Figure 9b). More accurate color reproduction is possible, as each point in space of the object has separate RGB intensity values, rather
than using an algorithm to determine the color. Three-chip cameras offer extremely high resolutions but have lower light sensitivities and can be
costly. In general, special 3CCD lenses are required that are well corrected for color and compensate for the altered optical path and, in the case of C-
mount, reduced clearance for the rear lens protrusion. In the end, the choice of single chip or three-chip comes down to application requirements.
Figure 9b: Three-Chip Color CCD Camera Sensor using Prism to Disperse Colors

The most basic component of a camera system is the sensor. Its technology and features greatly contribute to the overall image quality, so knowing how to interpret camera sensor specifications will ultimately lead to choosing the best imaging optics to pair with the sensor. Related topics in imaging electronics include camera resolution, camera types, and camera settings.

Software Structure for computer vision

The organization of a computer vision system is highly application dependent. Some systems are stand-alone applications which solve a specific
measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control
of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation. Many functions are unique to the application. There are, however, typical functions which are found in many computer vision systems; a minimal code sketch of such a pipeline is given after the list below.

 Image acquisition – A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras,
include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an
ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands
(gray images or colour images), but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or
electromagnetic waves, or nuclear magnetic resonance.
 Pre-processing – Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is
usually necessary to process the data in order to assure that it satisfies certain assumptions implied by the method. Examples are
 Re-sampling in order to assure that the image coordinate system is correct.
 Noise reduction in order to assure that sensor noise does not introduce false information.
 Contrast enhancement to assure that relevant information can be detected.
 Scale space representation to enhance image structures at locally appropriate scales.
 Feature extraction – Image features at various levels of complexity are extracted from the image data. Typical examples of such features are
 Lines, edges and ridges.
 Localized interest points such as corners, blobs or points.
More complex features may be related to texture, shape or motion.

 Detection/segmentation – At some point in the processing a decision is made about which image points or regions of the image are
relevant for further processing. Examples are
 Selection of a specific set of interest points
 Segmentation of one or multiple image regions which contain a specific object of interest.
 Segmentation of the image into a nested scene architecture comprising foreground, object groups, single objects, or salient object parts (also referred to as a spatial-taxon scene hierarchy)
 High-level processing – At this step the input is typically a small set of data, for example a set of points or an image region which is
assumed to contain a specific object. The remaining processing deals with, for example:
 Verification that the data satisfy model-based and application specific assumptions.
 Estimation of application specific parameters, such as object pose or object size.
 Image recognition – classifying a detected object into different categories.
 Image registration – comparing and combining two different views of the same object.
 Decision making – Making the final decision required for the application, for example:
 Pass/fail on automatic inspection applications
 Match / no-match in recognition applications
 Flag for further human review in medical, military, security and recognition applications
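As a concrete illustration of the functions above, the following Python/OpenCV sketch chains acquisition, pre-processing, feature extraction, detection/segmentation, and a pass/fail decision. The file name and threshold values are placeholder assumptions, not values from any particular application.

```python
# A minimal sketch of the typical computer vision pipeline described above.
import cv2

# Image acquisition: read from disk here; a camera would use cv2.VideoCapture.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Pre-processing: noise reduction and contrast enhancement.
blurred = cv2.GaussianBlur(image, (5, 5), 0)
equalized = cv2.equalizeHist(blurred)

# Feature extraction: edges via the Canny detector.
edges = cv2.Canny(equalized, 50, 150)

# Detection/segmentation: contours of candidate regions of interest.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# High-level processing / decision making: pass/fail on an area criterion.
largest = max(contours, key=cv2.contourArea) if contours else None
print("PASS" if largest is not None and cv2.contourArea(largest) > 1000 else "FAIL")
```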
Image-understanding systems
Image-understanding systems (IUS) include three levels of abstraction as follows: Low level includes image primitives such as edges, texture
elements, or regions; intermediate level includes boundaries, surfaces and volumes; and high level includes objects, scenes, or events. Many of
these requirements are really topics for further research.
The representational requirements in the designing of IUS for these levels are: representation of prototypical concepts, concept organization,
spatial knowledge, temporal knowledge, scaling, and description by comparison and differentiation.
While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process
that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing. Inference and
control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations,
change and focus of attention, certainty and strength of belief, inference and goal satisfaction.

Hardware:

There are many kinds of computer vision systems; nevertheless, all of them contain these basic elements: a power source, at least one image acquisition device (e.g., a camera or other image sensor), a processor, and control and communication cables or some kind of wireless interconnection mechanism. In addition, a practical vision system contains software, as well as a display in order to monitor the system. Vision systems for indoor spaces, as most industrial ones are, contain an illumination system and may be placed in a controlled environment. Furthermore, a complete system includes many accessories such as camera supports, cables and connectors.

Most computer vision systems use visible-light cameras passively viewing a scene at frame rates of at most 60 frames per second (usually far slower).
A few computer vision systems use image acquisition hardware with active illumination, or something other than visible light, or both; examples include a structured-light 3D scanner, a thermographic camera, a hyperspectral imager, radar imaging, a lidar scanner, a magnetic resonance imager, a side-scan sonar, and a synthetic aperture sonar. Such hardware captures "images" that are then processed, often using the same computer vision algorithms used to process visible-light images.

While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances in digital signal processing and consumer graphics hardware have made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second. For applications in robotics, fast, real-time video systems are critically important and can often simplify the processing needed for certain algorithms. When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realized.

Egocentric vision systems are composed of a wearable camera that automatically takes pictures from a first-person perspective.

Vision Control Software:

 Cognex
 Datalogic
 IVISYS
 Microscan
 National Instruments
 Optotune
 ProPhotonix
 Sensory
 USS Vision
 ViDi Systems

OpenCV (open source computer vision software)

OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and
machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track
camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce
a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye
movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 14 million. The library is used extensively by companies, research groups and governmental bodies.

Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many startups
such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching streetview
images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at
Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris
in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan.

It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
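As a small taste of the library, the sketch below runs one of the classic face-detection algorithms mentioned above, using the Haar cascade file that ships with the opencv-python package; the input file name is a placeholder.

```python
# A minimal sketch of face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("people.jpg")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # mark each face
cv2.imwrite("faces.jpg", img)
```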

Robot Programming:

Programming is usually the final step involved in building a robot. If you have followed the lessons so far, you have chosen the actuators, electronics, sensors and more, and have assembled the robot so that it hopefully looks something like what you initially set out to build. Without programming, though, the robot is a very nice looking and expensive paperweight.

It would take much more than one lesson to teach you how to program a robot, so instead, this lesson will help you get started and show you where (and what) to learn. The practical example will use "Processing", a popular hobbyist programming language intended to be used with the Arduino microcontroller chosen in previous lessons. We will also assume that you will be programming a microcontroller rather than writing software for a full-fledged computer.

What Language to Choose?

There are many programming languages which can be used to program microcontrollers, the most common of which are:
 Assembly; it is just one step away from machine code and, as such, is very tedious to use. Assembly should only be used when you need absolute instruction-level control of your code.
 Basic; one of the first widely used programming languages, it is still used by some microcontrollers (Basic Micro, BasicX, Parallax) for
educational robots.
 C/C++; one of the most popular languages, C provides high-level functionality while keeping a good low-level control.
 Java; it is more modern than C and provides lots of safety features to the detriment of low-level control. Some manufacturers
like Parallax make microcontrollers specifically for use with Java.
 .NET/C#; Microsoft’s proprietary language used to develop applications in Visual Studio (examples include Netduino, FEZ Rhino and others).
 Processing (Arduino); a variant of C++ that includes some simplifications in order to make programming easier.
 Python; one of the most popular scripting languages. It is very simple to learn and can be used to put programs together very fast and efficiently.

Top Three Robot Programming Methods:

1. Teaching Pendant

The most popular method of robot programming is probably the teach pendant. According to the British Automation and Robot Association, over 90%
of robots are programmed using this method. The robot teaching pendant has changed a lot throughout its lifetime, but it often consists of what looks like a giant handheld calculator. Early pendants were large, grey boxes with magnetic tape storage. Modern teach pendants are more like touchscreen tablets, as the technology has developed to suit ever-evolving users. To program the robot, the operator moves it from point to point, using the buttons on the pendant to move it around and save each position individually. When the whole program has been learned, the robot can play back the points at full speed.

2. Simulation/Offline Programming

Offline programming, or simulation, is most often used in robotics research to ensure that advanced control algorithms are operating correctly before
moving them onto a real robot. However, it is also used in industry to reduce downtime and improve efficiency. It can be a particularly useful method for
SMEs, as robots are more likely to be reconfigured multiple times than they are in mass production environments. Programming offline means that this
does not interfere with production too much. Offline programming allows the robot to be programmed using a virtual mockup of the robot and task. If the
simulation software is intuitive to use, this can be a quick way to test an idea before moving it to the robot.

3. Teaching by Demonstration

Teaching by demonstration (and more specific methods like Kinetiq teaching) offers an intuitive addition to the classic teach pendant. These methods involve moving the robot around, either by manipulating a force sensor or a joystick attached to the robot wrist just above the end effector. As with the teach pendant, the operator stores each position in the robot computer. Many collaborative robots have incorporated this programming method into their robots, as it makes it easy for operators to begin using the robot in their applications immediately.

Levels of Robot Programming:

 Teaching by showing
 Explicit programming languages
 Task level programming languages

Teaching By Showing

One of the simplest programming methods involves leading the robot through the desired motions, and recording the joint motions for later playback.
The robot motions can be input by physically moving the robot end effector manually, or by moving the robot using a hand-held teach pendant. The robot joint positions can be either recorded automatically at some sampling rate (e.g., for continuous motions such as arc welding), or recorded manually by the operator (e.g., for pick-and-place tasks).

Once the motions have been saved, the controller simply plays them back by using the recorded joint values as the control inputs. This method is
relatively simple. However, this approach is not well suited to complex tasks, and the programs are difficult to modify without starting over.
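The record-and-playback idea can be captured in a few lines. The Python sketch below assumes a hypothetical `robot` driver object exposing `read_joint_angles()` and `move_to_joints()`; it is an illustration of the method, not any vendor's API.

```python
# A minimal sketch of teach-and-playback against a hypothetical robot driver.
import time

def record(robot, duration_s=10.0, sample_hz=50):
    """Sample joint angles at a fixed rate while the operator guides the arm."""
    waypoints, period = [], 1.0 / sample_hz
    t_end = time.time() + duration_s
    while time.time() < t_end:
        waypoints.append(robot.read_joint_angles())  # e.g. [j1..j6] in radians
        time.sleep(period)
    return waypoints

def playback(robot, waypoints, sample_hz=50):
    """Replay the recorded joint values as control setpoints."""
    period = 1.0 / sample_hz
    for joints in waypoints:
        robot.move_to_joints(joints)
        time.sleep(period)
```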

Limitations: The controller must have sufficient memory to store information on the data points (usually in both joint and Cartesian space). Secondly, when the robot's motors are unpowered, the operator must overcome the weight of the arm and motors as well as the friction that exists in the arm joints and gears.

Explicit Programming Languages

Robot programming languages (RPL) have special features for manipulator programming.

There are three categories:

 Specialized manipulation languages.
 Robot library for an existing computer language.
 Robot library for a new general-purpose language.

Specialized manipulation languages are built by developing a completely new language which addresses robot-specific areas. An example is VAL II, developed by Unimation Inc.

In addition to being a sophisticated robot programming language, VAL II is a complete robot control system. It is designed to readily communicate with
other computer-based systems such as vision and tactile sensors. One can learn to program the robot simply by looking at some example programs
and studying the instructions given in the editor.

Task Level Programming

Task-level languages:
 Allow the user to command desired subgoals of the task directly.
 Have the ability to perform many planning tasks automatically.
 Example: “grasp the bolt.” Such a language does not exist yet!

Requirements of a Robot Programming Language

(1) World Modelling
(2) Motion Specification
(3) Flow of Execution
(4) Programming Environment
(5) Sensor Integration
(6) Error Recovery

Grippers For Robots

Robot grippers are the physical interface between a robot arm and the work piece. This end-of-arm tooling (EOAT) is one of the most important parts
of the robot. One of the many benefits of material handling robots is the reduction of part damage. A gripper comes in direct contact with your product,
so it's important to choose the right type of gripper for your operation.

There are four types of robotic grippers: vacuum grippers, pneumatic grippers, hydraulic grippers and servo-electric grippers. Manufacturers choose
grippers based on which handling application is required and the type of material in use.

Vacuum Grippers
The vacuum gripper has been the standard EOAT in manufacturing because of its high level of flexibility. This type of robot gripper uses a rubber or
polyurethane suction cup to pick up items. Some vacuum grippers use a closed-cell foam rubber layer, rather than suction cups, to complete the
application.

Pneumatic Grippers
The pneumatic gripper is popular due to its compact size and light weight. It can easily be incorporated into tight spaces, which can be helpful in the
manufacturing industry. Pneumatic robot grippers can either be opened or closed, earning them the nickname “bang bang” actuators, because of the
noise created when the metal-on-metal gripper operates.

Hydraulic Grippers
The hydraulic gripper provides the most strength and is often used for applications that require significant amounts of force. These robotic grippers generate their strength from pumps that can provide up to 2,000 psi. Although they are strong, hydraulic grippers are messier than other grippers because of the oil used in the pumps. They may also need more maintenance, as the gripper can be damaged by the forces used during the application.

Servo-Electric Grippers
The servo-electric gripper appears more and more in industrial settings because it is easy to control. Electric motors drive the movement of the gripper jaws. These grippers are highly flexible and allow for different material tolerances when handling parts. Servo-electric grippers are also cost effective because they are clean and have no air lines.
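To illustrate why servo-electric grippers are considered easy to control, here is a hedged sketch of a grasp routine against a hypothetical gripper driver; `set_force_limit`, `move_to`, and `is_object_detected` are illustrative names only, not a real product API.

```python
# A minimal sketch of grasping with a servo-electric gripper.
# `gripper` is a hypothetical driver; the method names are illustrative.
def grasp(gripper, part_width_mm, force_n=20):
    gripper.set_force_limit(force_n)       # grip force, not position, does the work
    gripper.move_to(part_width_mm - 1.0)   # command slightly under the part width
    return gripper.is_object_detected()    # True if motion stopped on the part
```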

Choosing the proper gripper is essential to ensuring successful automation applications.

Robotic Gripper Repeatability Definition and Measurement

On any robotic gripper spec sheet, you will find a measure of repeatability. What does that term really mean? This section begins by introducing the repeatability ISO standard used for robot arms, and then explains how gripper repeatability is usually measured in industrial scenarios. As you will see, there is no standard for testing the repeatability of robotic grippers, so engineers must ask questions to understand what the repeatability value on a spec sheet really means.

The ISO 9283:1998 Norm for Industrial Robots (Manipulating industrial robots -- Performance criteria and related test methods)
This is the norm used to define repeatability and accuracy for the end-of-arm position of robots. Let’s start by clarifying some definitions.
 Repeatability: the positional deviation from the average of displacements.
 Accuracy: the ability to position at a desired target point within the work volume.
The ISO 9283:1998 standard measures repeatability and accuracy at pessimistic values, using the maximum speed of operation and the maximum payload. Here is a simplified summary of the protocol:
1. Warm up the robot before testing until steady-state conditions are reached (i.e., thermal stability of motors and gearboxes) in a normal environment of about 71 degrees F.
2. Send identical commands to bring the robot to 3 different positions in sequence.
3. Measure the reached position using 2 cameras and an optical target carried by the robot, or other instruments.

Here is the calculation method used to obtain the repeatability from the data. For $N$ measurements of the same commanded position $(X_c, Y_c, Z_c)$, with reached positions $(x_j, y_j, z_j)$, ISO 9283 defines the repeatability $RP$ as

$$RP = \bar{l} + 3S_l$$

where $l_j$ is the distance of the $j$-th reached position from the barycenter $(\bar{x}, \bar{y}, \bar{z})$ of all reached positions:

$$l_j = \sqrt{(x_j-\bar{x})^2 + (y_j-\bar{y})^2 + (z_j-\bar{z})^2}, \qquad \bar{l} = \frac{1}{N}\sum_{j=1}^{N} l_j, \qquad S_l = \sqrt{\frac{\sum_{j=1}^{N}(l_j-\bar{l})^2}{N-1}}$$

Because the limit lies three standard deviations beyond the mean distance, statistics theory implies that the position of the robot will be inside the repeatability range about 99.8% of the time.
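The calculation is straightforward to implement. The NumPy sketch below applies the formula above to a set of measured positions; the example measurements are synthetic.

```python
# A minimal sketch of the ISO 9283 repeatability calculation (NumPy).
import numpy as np

def repeatability(reached):
    """reached: N measured (x, y, z) positions for one commanded pose."""
    reached = np.asarray(reached, dtype=float)     # shape (N, 3)
    barycenter = reached.mean(axis=0)
    l = np.linalg.norm(reached - barycenter, axis=1)
    return l.mean() + 3.0 * l.std(ddof=1)          # RP = l_bar + 3 * S_l

# Synthetic example: 10 noisy measurements around (100, 200, 50) mm.
rng = np.random.default_rng(0)
measurements = [100.0, 200.0, 50.0] + 0.02 * rng.standard_normal((10, 3))
print(f"RP = {repeatability(measurements):.4f} mm")
```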

Repeatability of Robotic Grippers

The methods and testing conditions of the ISO norm are elaborate and complete, but they are not meant for grippers. In fact, gripper manufacturers can still use some parts of the ISO norm, but each of them adapts it in their own way. Even for robots, only about 10% of companies use the complete ISO norm, since it is a complex (and costly) method, so we can assume that compliance is even lower for grippers.

There are two major differences between robots and grippers when it comes to repeatability:

1. Robots have many axes while grippers have only one. This in fact only simplifies the equations above, but they still hold true. The number of cycles N used in industry varies from one manufacturer to another; the ISO norm specifies a minimum of 10 measurements for each axis, and also notes that excessive testing is undesirable since command conditions can vary, while minimal testing will not give statistically reliable results. For example, Schunk's method of testing its grippers specifies 100 consecutive strokes, while some other companies test with 10 strokes (the minimum) and some with 40 hours of testing (far more than necessary).

2. Most grippers have only open and closed positions and no intermediate positions, so the ISO method cannot be applied as written. In this case, manufacturers compare the closed position of the jaws with a fixed datum point. For electric grippers with position control, the ISO method can be used directly.
