Unit 3 - Sensors and Vision Systems
Sensors are devices that can sense and measure physical properties of the environment.
Transducer
A device that converts a primary form of energy into a corresponding signal with a
different energy form. Primary energy forms: mechanical, thermal, electromagnetic,
optical, chemical, etc.
Tactile sensing
Touch and tactile sensors are devices which measure the parameters of a contact
between the sensor and an object. The interaction is confined to a small,
defined region. This contrasts with a force and torque sensor, which measures the total
forces being applied to an object. In the consideration of tactile and touch sensing, the
following definitions are commonly used:
Touch Sensing
This is the detection and measurement of a contact force at a defined point. A touch
sensor can also be restricted to binary information, namely touch and no touch.
Tactile Sensing
This is the detection and measurement of the spatial distribution of forces
perpendicular to a predetermined sensory area, and the subsequent interpretation of
the spatial information. A tactile-sensing array can be considered to be a coordinated
group of touch sensors.
Force/torque sensors
Force/torque sensors are often used in combination with tactile arrays to provide
information for force control. A single force/torque sensor can sense loads anywhere
on the distal link of a manipulator and, not being subject to the same packaging
constraints as a “skin” sensor, can generally provide more precise force measurements
at higher bandwidth. If the geometry of the manipulator link is defined, and if single-
point contact can be assumed (as in the case of a robot finger with a hemispherical tip
contacting locally convex surfaces), then a force/torque sensor can provide
information about the contact location by ratios of forces and moments in a technique
called “intrinsic tactile sensing”.
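A minimal numerical sketch of this idea, assuming a frictionless single-point contact on a hemispherical fingertip; the helper name, the use of numpy, and the root-selection heuristic are illustrative choices, not from the text:

import numpy as np

def contact_point(F, M, center, radius):
    # Estimate the single contact point on a hemispherical fingertip from a
    # wrist force/torque reading ("intrinsic tactile sensing").
    # F, M   : measured force and moment at the sensor frame (3-vectors)
    # center : center of the hemispherical tip, in the sensor frame
    # radius : tip radius
    # Assumes one frictionless point contact, so M = p x F and the contact
    # point lies on the line p(s) = (F x M)/|F|^2 + s*F.
    F = np.asarray(F, dtype=float)
    M = np.asarray(M, dtype=float)
    p0 = np.cross(F, M) / np.dot(F, F)        # a point on the line of action
    d = F / np.linalg.norm(F)                 # direction of the line
    oc = p0 - np.asarray(center, dtype=float)
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        raise ValueError("no line-sphere intersection: contact assumption violated")
    # Two intersections with the fingertip sphere; pick the one where the
    # measured force pushes into the surface (a simple heuristic for this sketch).
    candidates = [p0 + s * d for s in (-b - np.sqrt(disc), -b + np.sqrt(disc))]
    return max(candidates, key=lambda p: np.dot(p - np.asarray(center, dtype=float), -F))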
Proximity sensor
A proximity sensor is a sensor able to detect the presence of nearby objects without
any physical contact. A proximity sensor often emits an electromagnetic field or a
beam of electromagnetic radiation (infrared, for instance), and looks for changes in
the field or return signal. The object being sensed is often referred to as the proximity
sensor's target. Different proximity sensor targets demand different sensors. For
example, a capacitive or photoelectric sensor might be suitable for a plastic target; an
inductive proximity sensor always requires a metal target. The maximum distance
that this sensor can detect is defined as its "nominal range". Some sensors have adjustments
of the nominal range or means to report a graduated detection distance. Proximity
sensors can have a high reliability and long functional life because of the absence of
mechanical parts and lack of physical contact between sensor and the sensed object.
Proximity sensors are commonly used on smart phones to detect (and skip) accidental
touch screen taps when held to the ear during a call. They are also used in machine
vibration monitoring to measure the variation in distance between a shaft and its
support bearing. This is common in large steam turbines, compressors, and motors
that use sleeve-type bearings.
Fig.3.1 Types of Proximity Sensors
Ranging sensors are sensors that require no physical contact with the object being
detected. They allow a robot to see an obstacle without actually having to come into
contact with it. This can prevent possible entanglement, allow for better obstacle
avoidance (over touch-feedback methods), and possibly allow software to distinguish
between obstacles of different shapes and sizes. There are several methods used to allow a
sensor to detect obstacles from a distance, ranging in complexity and capability from very
basic to very intricate. The following examples are intended only to give a general
understanding of many common types of ranging and proximity sensors as they commonly
apply to robotics.
The use of sensors has taken robots to the next level of creativity. Most
importantly, sensors have increased the performance of robots to a large extent. They also
allow robots to perform several functions like a human being. Robots are even made
intelligent with the help of visual sensors (generally called machine vision or computer
vision), which help them respond according to the situation. A machine vision system
is classified into six sub-divisions: Pre-processing, Sensing, Recognition, Description,
Interpretation, and Segmentation.
Proximity Sensor:
This type of sensor is capable of detecting the presence of a component. Generally, a
proximity sensor is placed on a moving part of the robot, such as the end effector. The sensor
is turned ON at a specified distance, measured in feet or millimeters. It is also used to detect
the presence of a human being in the work volume so that accidents can be reduced.
Range Sensor:
A range sensor is implemented in the end effector of a robot to calculate the distance between
the sensor and a work part. The distance values can also be estimated by workers from visual
data. A range sensor can evaluate the size of objects in an image and support the analysis of
common objects. The range is measured using sonar receivers & transmitters or two TV cameras.
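As a minimal illustration of the sonar approach, range can be recovered from the round-trip (time-of-flight) of an ultrasonic pulse; the sketch below is illustrative and assumes sound travelling at about 343 m/s in air:

# A minimal sketch of sonar (time-of-flight) ranging.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def sonar_range(echo_time_s):
    # echo_time_s is the round-trip time between emitting the ultrasonic pulse
    # and receiving its echo, so the one-way distance is half the path.
    return SPEED_OF_SOUND * echo_time_s / 2.0

# Example: an echo received 5.8 ms after the pulse corresponds to ~0.99 m.
print(sonar_range(0.0058))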
Tactile Sensors:
A sensing device that detects the contact between an object and the sensor is considered a
tactile sensor. These sensors can be sorted into two key types, namely: Touch Sensors and
Force Sensors.
The force sensor is included for measuring the forces involved in several functions performed
by a robot, such as machine loading & unloading and material handling. This sensor is also
useful in the assembly process for checking for problems. There are several techniques used
in this kind of sensing, such as Joint Sensing, Robot-Wrist Force Sensing, and Tactile Array
Sensing.
A machine vision system is employed in a robot for recognizing objects. It is commonly
used to perform inspection functions in which industrial robots are not involved. It is
usually mounted on a high-speed production line for accepting or rejecting work parts.
The rejected work parts are removed by other mechanical apparatuses that are interfaced
with the machine vision system.
Camera Calibration:
A point P_W expressed in the world frame can be written in camera frame coordinates as

P_C = t + q

where t is the vector from O_C to O_W expressed in camera frame coordinates and q is the
vector from O_W to P expressed in camera frame coordinates. However, the vector q is in fact
the same vector as P_W, just expressed in different coordinates (i.e. with respect to a different
frame). The coordinates can be related by a rotation:

q = R P_W

where R is the rotation matrix relating the camera frame to the world frame and is defined as:

R = | i·i_w  i·j_w  i·k_w |
    | j·i_w  j·j_w  j·k_w |
    | k·i_w  k·j_w  k·k_w |

where i, j, and k are the unit vectors that define the camera frame and i_w, j_w, and k_w are the
unit vectors that define the world frame. To summarize, the point P_W can be mapped to
camera frame coordinates P_C as:

P_C = R P_W + t

where t is the vector in camera frame coordinates from O_C to O_W and R is the rotation matrix.
Similar to the previous section, these expressions can also be equivalently expressed for the
case where the points P_W and P_C are expressed in homogeneous coordinates:

| P_C |   | R  t | | P_W |
|  1  | = | 0  1 | |  1  |
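A small numerical sketch of these two mappings; the function names are our own and numpy is assumed:

import numpy as np

def world_to_camera(P_w, R, t):
    # P_c = R @ P_w + t, with R the 3x3 rotation relating the camera frame to
    # the world frame and t the camera-frame vector from O_C to O_W.
    return R @ np.asarray(P_w, dtype=float) + np.asarray(t, dtype=float)

def world_to_camera_homogeneous(P_w, R, t):
    # The same mapping written as a single 4x4 matrix acting on the
    # homogeneous coordinates [P_w; 1].
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return (T @ np.append(np.asarray(P_w, dtype=float), 1.0))[:3]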
Geometry of Image Formation
The physics of light determines the brightness of a point in the image plane as a function of
illumination and surface properties.
• A simple model
- The scene is illuminated by a single source.
- Rays of light pass through a "pinhole" and form an inverted image of the object on the image
plane (see the projection sketch below).
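As an illustrative sketch (not from the text), the ideal pinhole geometry reduces to the perspective-projection equations x = f X/Z, y = f Y/Z for a camera-frame point (X, Y, Z) and a pinhole-to-image-plane distance f:

def project_pinhole(P_c, f):
    # Ideal pinhole projection of a camera-frame point (X, Y, Z) onto an image
    # plane at distance f from the pinhole: x = f*X/Z, y = f*Y/Z.
    # Using the "frontal" image plane at +f hides the inversion of the
    # physical pinhole image.
    X, Y, Z = P_c
    return (f * X / Z, f * Y / Z)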
Camera Optics
- A lens is placed in the aperture to focus the bundle of rays from each scene point onto the
corresponding point in the image plane.
- If we use a wide pinhole, light from the source spreads across the image (i.e., is not properly
focused), making it blurry.
* When light passes through a small aperture, it does not travel in a straight line (diffraction).
- In general, the aim of using a lens is to duplicate the pinhole geometry without resorting to
undesirably small apertures.
• Human Vision
- At high light levels, pupil (aperture) is small and blurring is due to diffraction.
- At low light levels, pupil is open and blurring is due to lens imperfections.
• CCD Cameras
- An array of tiny solid-state cells converts light energy into electrical charge.
- Manufactured on chips typically measuring about 1 cm x 1 cm (for a 512x512 array, each element
has a real width of roughly 0.002 cm).
- The output of a CCD array is a continuous electric signal (video signal) which is generated by
scanning the photo-sensors in a given order (e.g., line by line) and reading out their voltages.
Fig. 3.9. Camera Geometry
- The frame grabber digitizes the signal into a 2D, rectangular N x M array of integer values, stored
in the frame buffer.
- In a CCD camera, the physical image plane is the CCD array: an n x m rectangular grid of
photo-sensors.
- The pixel image plane (frame buffer) is an array of N x M integer values (pixels).
- The position of the same point on the image plane will be different if measured in
CCD elements (x, y) or image pixels (x_im, y_im).
- In general, n ≠ N and m ≠ M; assuming that the origin in both cases is the upper left corner,
we have:

x_im = (N/n) x        y_im = (M/m) y

where (x_im, y_im) are the coordinates of the point in the pixel plane and (x, y) are the coordinates
of the point in the CCD plane (see the conversion sketch below).
- In general, it is convenient to assume that the CCD elements are always in one-to-one
correspondence with the image pixels.
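A one-line sketch of the CCD-to-pixel conversion above; the helper name is our own:

def ccd_to_pixel(x, y, n, m, N, M):
    # Convert CCD-element coordinates (x, y) on an n x m sensor grid to
    # frame-buffer pixel coordinates (x_im, y_im) on an N x M image,
    # assuming both use the upper left corner as origin.
    return (N / n) * x, (M / m) * y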
- Five reference frames are needed for general problems in 3D scene analysis.
Object Coordinate Frame
- It is used to model ideal objects in both computer graphics and computer vision.
- It is needed to inspect an object (e.g., to check if a particular hole is in proper position relative
to other holes).
- The coordinates of a 3D point B, e.g., relative to the object reference frame are (x_b, 0, z_b).
- Object coordinates do not change regardless of how the object is placed in the scene.
Notation: (X_o, Y_o, Z_o)^T
World Coordinate Frame
- The scene consists of object models that have been placed (rotated and translated) into the
scene, yielding object coordinates in the world coordinate system.
- It is needed to relate objects in 3D (e.g., the image sensor tells the robot where to pick up a
bolt and in which hole to insert it).
Notation: (X_w, Y_w, Z_w)^T
Camera Coordinate Frame
- Its purpose is to represent objects with respect to the location of the camera.
Notation: (X_c, Y_c, Z_c)^T
Image Coordinate Frame
- Point A, e.g., gets projected to image point (a_r, a_c), where a_r and a_c are the integer row and column.
Notation: (x_im, y_im)^T
The image processing and analysis functions are made more effective
by training the machine vision system regularly. Several kinds of data are collected in
the training process, such as the length of the perimeter, outer & inner diameters, area, and so on.
Here, the camera is very helpful for identifying matches between the stored computer
models and the feature values of new objects.
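A minimal sketch of how such feature-value matching could work, using a nearest-neighbour comparison against stored models; all part names, feature values and the threshold below are illustrative assumptions, not from the text:

import math

# Illustrative feature vectors that a training phase might store.
trained_models = {
    "bracket": {"area": 1250.0, "perimeter": 180.0, "outer_diameter": 42.0},
    "flange":  {"area": 2100.0, "perimeter": 240.0, "outer_diameter": 65.0},
}

def identify_part(measured, models=trained_models, max_distance=50.0):
    # Return the trained model whose feature vector is closest (Euclidean
    # distance) to the measured features, or None if nothing is close enough.
    best_name, best_dist = None, float("inf")
    for name, ref in models.items():
        dist = math.sqrt(sum((measured[k] - ref[k]) ** 2 for k in ref))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

# A measured part close to the stored "bracket" model is identified as such.
print(identify_part({"area": 1260.0, "perimeter": 178.0, "outer_diameter": 41.5}))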
Applications:
Some of the important applications of the machine vision system in the robots are:
• Inspection
• Orientation
• Part Identification
• Location
Signal conversion
Our interface modules are the links between the real physical process and the control system.
Use the [EEx ia] version of these function modules to ensure safe data transmission from the
potentially explosive area to the non-hazardous area and vice versa.
Image Processing
Objects may differ in color and size. A robotic vision system has
to make the distinction between objects and in almost all cases has to track these objects.
Applied in the real world for robotic applications, these machine vision systems are designed
to duplicate the abilities of the human vision system using programming code and electronic
parts. Just as human eyes can detect and track many objects at the same time, robotic vision
systems aim to overcome the difficulty of detecting and tracking many objects at the same time.
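One simple way to make such a distinction is colour segmentation; the sketch below (OpenCV 4.x and an arbitrary colour range are assumed, not taken from the text) detects the largest object inside a colour band and returns its bounding box for each frame:

import cv2
import numpy as np

def track_colored_object(frame, lower_hsv=(100, 120, 70), upper_hsv=(130, 255, 255)):
    # Locate the largest region falling inside an HSV colour range (the default
    # range is an arbitrary blue-ish band chosen for illustration) and return
    # its bounding box as (x, y, width, height), or None if nothing is found.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # findContours returns (contours, hierarchy) in OpenCV 4.x.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)

# Usage: grab one frame from the default camera and report the object position.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(track_colored_object(frame))
cap.release()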
Machine Vision
A robotic vision system finds its place in many fields, from industry to robotic services.
Whether used for identification or navigation, these systems are under continuing
improvement, with new features like 3D support, filtering, or detection of the light intensity
applied to an object.
Applications and benefits for robotic vision systems used in industry or for service robots:
• process automation;
• object detection;
• defense applications;
A tracking system has a well-defined role: to observe persons
or objects while they are moving. In addition, the tracking software is capable of
predicting the direction of motion and recognizing the objects or persons. OpenCV is the
most popular and most widely used machine vision library, with open-source code and comprehensive
documentation. Covering image processing, 3D vision and tracking, fitting and many
other features, the library includes more than 2500 algorithms. The library interfaces have
support for C++, C, Python and Java (in progress), and it can also run under Windows, Linux,
Android or Mac operating systems.
SwisTrack
Used for object tracking and recognition, SwisTrack is one of the most
advanced tools used in machine vision applications. This tracking tool requires only a
video camera for tracking objects in a wide range of situations. Internally, SwisTrack is
designed with a flexible architecture and uses the OpenCV library. This flexibility opens the
gates for implementing new components in order to meet the requirements of the user.
Visual Navigation
Edge Detector
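Edge detection is one of the most common low-level operations in such vision pipelines; a minimal sketch with OpenCV's Canny detector (the file name and the thresholds are illustrative placeholders) could look like this:

import cv2

# Minimal edge-detection sketch using OpenCV's Canny detector.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)   # illustrative file name
blurred = cv2.GaussianBlur(image, (5, 5), 0)   # suppress noise before detecting edges
edges = cv2.Canny(blurred, 100, 200)           # lower/upper hysteresis thresholds (assumed)
cv2.imwrite("part_edges.png", edges)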